Friday, October 22, 2010

Using numbers to plan content


Very relevant to our recent KPI conversations. Is there a way for us to try to implement these takeaways as test cases?


http://johnnyholland.org/2010/10/15/using-numbers-to-plan-content/

Clare O'Brien on October 15th, 2010


Something that's fascinated me about online metrics since I started working in online (quite a long time ago in internet terms) is their immediacy. In fact, it's their instancy: the real-time sense you get from actually watching people move in and out of a website, email, or mobile platform is what really mesmerises. The numbers create a kind of certainty about the clicks, impressions, traffic volume… and based on those numbers we believe we can know what worked (or didn't work). On the basis of these metrics we do more or less of the same.
As fascinating and addictive as these numbers are, it bothers me that these are the kinds of metrics clients and agencies use to back up ideas and shore up budget planning. Where's the context? What do the numbers mean?
There's a clear correlation between the media metrics that took hold of the advertising world in the 1980s and the kind of metrics used to demonstrate online efficacy during the 1990s. Media metrics, essentially measuring 'eyeballs', or audience volumes, were the established bedrock of media-buying principles. Cost-per-thousand and audience testing centred entirely on brands asking "What do you think of me?"
In those days, when volume and mass ruled, the loudest, most ubiquitous voices were majestic, and the creatives that delivered 'me-to-you' messages were governed and controlled by the number crunchers on Madison Avenue and Charlotte Street. Audiences weren't people – they were traded commodities. Come the turn of the millennium, the sheer cacophony of branded messages started to repel those same audiences, who began to tune out the noise and make their own media and brand choices. Audiences, markets, people! They got message-weary just around the time the internet got domestic.
Unfortunately, early players in the Internet business essentially copied the same media-metrics approach but applied it to an entirely different kind of medium. You can see the logic… audience volume became the easiest way to describe effectiveness to budget-carrying agencies and a client demographic that felt technologically remote from this new media platform. Anyway – in comparison to the audience mega-transport ships of traditional broadcast and print media, the Internet had little more than a landing-raft to offer in those days!
So, back to the metrics and why they bother me. Online you can measure everything. In fact, you can measure so much, you may drown in numbers before you get a chance to ask what any of them mean.
For instance, I can know how many people come to my website, where they come from, and which and how many of my pages they visit; I can know if they’re unique visitors or returners, and if they dwell for short or long periods; I know what they click on my page—I can even find out if they’re clicking in places that aren’t links. I can know if they’ve started to do something and then stopped, if they complete it, how long it took, and where they go next. I can measure when traffic numbers go up or down, and identify sad and lonely corners where no-one ever goes. I can see which search terms bring people to my site, and I can optimise the content and metadata to capture more of those people. And I can create the most detailed and beautiful charts that carry all these pieces of information back to my colleagues and clients and their bosses. Job done?
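All of those measures fall out of simple aggregation over a visit log. As a minimal sketch (in Python; the log format, visitor IDs, and pages here are invented for illustration), distinguishing unique from returning visitors and counting per-page traffic might look like this:

```python
# Minimal sketch: basic traffic measures from a raw visit log.
# Every entry below is invented for illustration.
from collections import Counter

visits = [
    # (visitor_id, page, date)
    ("v1", "/pricing", "2010-10-15"),
    ("v1", "/signup", "2010-10-15"),
    ("v2", "/pricing", "2010-10-15"),
    ("v1", "/pricing", "2010-10-16"),  # v1 returns a day later
]

days_seen = {}
for visitor, _, day in visits:
    days_seen.setdefault(visitor, set()).add(day)

unique_visitors = len(days_seen)                              # distinct people
returners = sum(1 for d in days_seen.values() if len(d) > 1)  # seen on more than one day
page_views = Counter(page for _, page, _ in visits)           # traffic per page

print(unique_visitors, returners, page_views.most_common())
```

Every figure in that long list of measurables comes from exactly this kind of counting, and note what the counting can never tell you: why v1 came back.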
No. Traffic numbers are just that: summaries of individual measures. Anyone can sit alongside a motorway and count cars, know if they're travelling North or South, what models they are, how fast they're going… Finding out why they're on the road, what their journey's for, and whether the route works? Well, that's a bit harder, and such is the problem with online metrics and analytics. The appetite to invest in getting to know audiences/users – actually asking people what they want and then verifying their answers – is still pretty small.
CDA are content strategists, and we've been trying to figure out what makes good online content for a few years now. Aside from the rules around structure, language, and tone, and how to manage these aspects in the creation and publishing processes we've established, our conclusion is increasingly that it's all about context. At its simplest level, the question we want answered is: is this content relevant and useful for the purpose of someone's visit?
The plethora of metrics at the end of a mouse click, and more lately Google Analytics' richer analysis capabilities, make it possible (with expert input) to correlate different number sets and make educated guesses at what traffic figures mean. Other bespoke systems such as WebTrends and ComScore let us run specific reports, but there's still the sense we're measuring the direction and colour of the traffic—not finding out why it's on the road so we can build a better route.
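Correlating two number sets is mechanically trivial; it's the interpretation that's hard. A minimal sketch (invented weekly figures; `statistics.correlation` requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Invented weekly figures: visits to a help page vs. completed enquiry forms.
help_page_visits = [120, 150, 90, 200, 170, 130]
form_completions = [30, 42, 20, 55, 48, 33]

r = correlation(help_page_visits, form_completions)
print(f"Pearson r = {r:.2f}")
```

A strong r says the two series move together; it still says nothing about why anyone was on the road.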
The missing factor is a real-life user experience woven into the mix. I want to know if the content a site owner invests in is the content someone wants or needs to complete a task.
I want to know if the content we’re being asked to create is the content people have any interest in at all, or if it’s wasting my client’s budget. I want reliable evidence—from my audience or users—that the content we recommend to a client is worth his money. I want my client to be able to plan and budget his content requirements in the same way he plans and budgets all his business resources and expenditure.
And most of all, I want to understand the different contexts of a user visit, so we can recommend and create flexible content that meets each user’s context of interest.
Tall order?
Well, some while ago we developed the idea of CUT (Content Usefulness Toolkit). In outline, it’s a methodology of common sense.
First of all, CUT makes a big assumption. CUT assumes people respond positively to useful online content. But what’s useful for you may not be useful for me. What’s useful to users in a grocery ecommerce environment may not be useful to users on a recruitment website, or to a corporate site building an international brand. Getting the latest news may be useful on an investment site. Signing up for a daily tip via mobile may be useful for someone on a dieting site. So usefulness itself has to be understood, which is why the starting point with CUT is to find out what’s useful in the broadest context of the property.
And here’s a note: usefulness is not usability. Usability tests whether people can complete tasks within a planned or built structure. Usefulness is about understanding a need and targeting it with content that delivers. I’m seeing evidence the two are often confused.
Step 1: In our scoping model you can see that understanding usefulness within the context of a specific proposition is the critical driver for everything else. And understanding is achieved by talking to people—not by looking at traffic metrics (as beautiful as they can be made to look); the metrics have their role later in the process. This concept is not new: we conceive and design virtually any new product to meet needs and solve problems. Ask any NPD professional.
[Figure: CDA's CUT (Content Usefulness Toolkit) identifies content for development or culling. CDA Ltd © 2008]
Step 2: Now that we possess a greater understanding of what people find useful, we can plan or audit the content with an informed critical capacity.
Step 3: Now we can begin to get smart with our metrics. With much greater insight into our audience, we can set a metrics plan that measures whether or not user traffic responds well to the content planned around our audience's stated expectations of something useful.
Another note: be very careful with metrics, and in particular with how you read 'bounce' numbers. There's a school of thought that supports the negative interpretation: a bounced visitor is one you've failed to engage. But consider how a bounce could record a very satisfied user: he had a question that was perfectly answered by the content of the page and immediately bounced off happy. It's all in the context. If that page was a clear 'how-to' explanation of how to fix a leaky tap, for example, then you could well have a very satisfied visitor very quickly. If the page was the first of a 5-step registration, then it could indicate the process was unclear or not what your visitor was seeking.
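That contextual reading can even be made mechanical. Here's a rough sketch of the reasoning; the page-type labels and the 30-second threshold are assumptions for illustration, not fixed rules:

```python
def interpret_bounce(page_type: str, seconds_on_page: float) -> str:
    """Rough, context-aware reading of a single-page visit (illustrative only)."""
    if page_type == "how-to" and seconds_on_page > 30:
        # Long enough to read the answer: plausibly a satisfied visitor.
        return "likely satisfied"
    if page_type == "registration-step":
        # Abandoning step 1 of a multi-step process is a warning sign.
        return "likely confused or put off"
    return "ambiguous - needs qualitative follow-up"

print(interpret_bounce("how-to", 95))            # likely satisfied
print(interpret_bounce("registration-step", 8))  # likely confused or put off
```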
So, understand what you’re measuring. This means setting your analytics goals to measure traffic behaviours based on what people have said they do or don’t expect to find useful. These goals are your indicators of content success or failure—but they’re only indicators.
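What "goals based on what people have said" might look like in practice is simply a mapping from stated needs to measurable behaviours. A hypothetical sketch, where every need, indicator, and target below is invented:

```python
# Hypothetical goal definitions: each ties a need people stated in research
# to a measurable traffic behaviour and a target share of matching visits.
content_goals = [
    {
        "stated_need": "quickly check delivery charges",
        "indicator": "land on /delivery, stay under 60s, exit",
        "target_share": 0.70,
    },
    {
        "stated_need": "compare account types before registering",
        "indicator": "visit /compare, then start registration",
        "target_share": 0.25,
    },
]

for goal in content_goals:
    print(f"{goal['stated_need']!r} -> watch for: {goal['indicator']}")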
Step 4: Now we ask the audience again. We use an online questionnaire to find out whether people got what they wanted, based on their reason for being on the site in the first place. We ask why they were there, and ask them to rate their experience.
These quantitative responses, combined with our traffic metrics, pattern out to give clear targets for content development or even culling. The responses drive the strategic content direction and, critically, indicate budget allocation. They give content—the stuff that people come to access, the stuff that doesn’t just happen but which takes considerable planning and skill to get right—an operational and measurable foundation.
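As an illustration of how survey responses and traffic metrics might pattern out into development or culling decisions, here's a toy scoring rule; the thresholds, page names, and figures are all invented:

```python
def content_action(avg_rating: float, monthly_visits: int) -> str:
    """Toy rule combining survey satisfaction (1-5 scale) with traffic volume."""
    if avg_rating >= 4.0 and monthly_visits >= 500:
        return "develop"  # useful and in demand: invest further
    if avg_rating < 2.5 and monthly_visits < 50:
        return "cull"     # neither liked nor visited
    if avg_rating < 2.5:
        return "fix"      # popular but disappointing: rework it
    return "keep"

pages = {"/how-to/fix-a-tap": (4.6, 1200), "/press/2006-archive": (1.8, 12)}
for url, (rating, visits) in pages.items():
    print(url, "->", content_action(rating, visits))
```

The point of the rule isn't the thresholds; it's that neither number alone would justify spending (or withdrawing) budget.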
CDA are already applying the methodology to several live sites. It's helping make sense of existing metrics. It's providing a framework that informs our recommendations and helps clients take a fresh view of the metrics they capture and use to decide where to invest.
Early, but exciting days—this is a nascent development but one we’re building into an essential business tool.
