Elizabeth S. Bennett, May 14, 2013
[Photo: A series of tubes, by nezume_you]
Here you are. You’ve developed a digital strategy that includes building out the content on your (or your client’s) website and mobile platforms. You’re excited, and fairly certain based on business goals, the competitive environment, and what you know about your target audience that your plan is a good one. Now it’s time to test out the vision with real people to make sure you’re on the right path.
Your goal is to get a good sense of how and whether people respond to your content. What’s the best way to go about evaluating all those words and images?
In my experience there is absolutely no substitute for listening to how real people respond to and interact with your site or experience. That’s not to say you can’t gather some useful information from surveys and other quantitative means at certain junctures, but you will never know your customers, audience, or visitors until you listen to their experience in their own words.
Having said that, it’s time to figure out what you’re going to do with these real people when they turn up at your testing facility. How are you going to ensure that you’re eliciting valuable and reliable responses?
There are many approaches to user testing, and it’s important to be thoughtful about which path to take. I’ll be looking specifically at three approaches to testing content and when each one best applies. I’ll also explore the inherent risk in testing content, namely the impulse to zero in on the tone and style or format of the content at the heart of your experience, at the expense of broader learnings.
Testing the Experience
One approach is to focus on the experience. In doing so, problems (or successes) with content should become evident. With this method, you’ll provide some context for the respondent, observe them interacting with the experience, ask open-ended questions, and pick up on verbal cues and body language that invite investigation. Patterns of behavior will emerge and you’ll get a sense of how the experience is and isn’t meeting expectations.
Strengths: Provides a comprehensive customer perspective on the experience. If done well, findings will include what is and isn’t working with the strategy, design, content and interactions.
Drawbacks: May not yield the precise degree of detail you or your client is seeking about the content.
When it works best: Really any time, but it’s a particularly good approach when the strategy and experience haven’t yet been vetted by real customers. It’s also good for testing a complex experience, where gaining a foothold on the big picture should be a prerequisite for testing the content.
You might say I cut my teeth on this method back in the day, when I was a consultant at Creative Good, a customer experience consulting firm. I corresponded recently with Mark Hurst, the company’s president and founder, and I asked him how he and his colleagues would test a content concept. He said he never thinks about content as its own phenomenon.
Hurst says that when his team members speak with the customers of clients one-on-one, they don’t set out to test a visual concept and then a content concept. “Those aren’t separate things in our worldview. Instead we look at the user experience overall, one unified experience that is created by the visual and the interactive and the content and a constellation of other factors.”
It can be much harder to sell the big picture testing scenario to clients who often think about their digital experiences as separate pieces in a puzzle, says Hurst, especially when there are multiple business owners responsible for content. But if you immediately zoom in on a sliver of the experience, it can have the opposite effect of sharpening your focus, Hurst says. “You’re cutting out a whole spectrum of observation and knowledge that you could be getting.”
Testing the Content
As the name suggests, this testing approach is focused exclusively on the content – be it prose, labels, marketing messages, or digital assets. The idea here is to invite testing respondents to comment exclusively on content with some specific goals in mind, like assessing tone and voice, article length, video style, navigation labels, etc.
Strengths: Opportunity to probe deeply into content elements, editorial strategy, tone/style, and basically anything you have questions or concerns about.
Drawbacks: Chance you may miss out on crucial strategy/experience findings by focusing exclusively on content.
When it works best: You’ve already vetted the strategy and experience with your target audience and believe it to be ironclad, but you have further questions about how successfully the content is supporting the strategy; you accept that this method is more of a usability study rather than a research method that could influence the overall direction of your site, app, etc. You might also use this approach when your experience and content are still in the concept stage to get an initial read on how customers or visitors respond to your vision.
Colleen Jones, principal at Content Science, a digital content strategy consultancy, and author of a recent content credibility study, says that you can definitely conduct targeted testing on content by “focusing test protocol (questions and tasks) on content issues instead of design and experience issues.” As an example, Jones says that instead of ending a task at finding content, you can end it with answering a question that requires both finding and understanding the content. Or, she says, you can focus the task on understanding the content only.
With content as the focus, Jones explains that the overall experience is a secondary matter. “If the experience causes or exacerbates content problems,” Jones says, “then you can still observe and note that. But, the experience isn’t the focus.”
Sometimes Jones tests content at a very detailed level, such as sample text, imagery, or video. “We have tested samples of text content that reflect different approaches to voice. One sample was lighter and more humorous in tone than the other.”
Jones says the approaches above have been very successful for her clients. For instance, she worked with a health start-up to test and evaluate tone of voice on their site. “We had to experiment with the right techniques in protocol to bring out real response and get at issues we’re interested in without biasing the results,” she explains. “You want responses to be as organic as possible.”
My experience is that when homing in on content – or really any single component of an experience – there’s a risk that important findings won’t emerge. Let’s say you and your client have agreed to test a site’s Videos page. You might get some very useful feedback about videos, but you could be inhibiting respondents from providing other perhaps more valuable feedback. If you haven’t established the environment for broader observations, respondents might not have the opportunity to say how much they dislike the homepage or that they’re super excited about something on a competitor’s site. That’s why I only recommend using this approach if you’ve already vetted a site’s overall strategy and experience through qualitative testing.
Testing the Content in the Course of Testing the Experience
This is a pretty straightforward hybrid between testing the overall experience and testing content in isolation. You and your client can feel confident that you did a thorough job of testing the experience and you still get to satisfy your or their need to zoom in on content questions and concerns.
If you’re hankering to ask overtly about content, this is your opportunity to do some investigating without making content the sole focus of your testing. My recommendation would be to test the overall experience, like in the first scenario, but set aside some time at the end of each interview, say one-third of the total time allotted, to inquire about the most pressing content questions. Best to keep the questions open-ended to elicit the most organic responses.
Strengths: Opportunity to dig deeper into content elements while gaining critical information about the overall experience.
Drawbacks: Time with respondents is precious, and you could miss out on important strategy/experience findings in the effort to glean content insights.
When it works best: When broad experience testing isn’t going to address the content questions you need answered. In order to move ahead with your work and validate the experience, it’s vital to know what users think of the content.
I recommend this approach if it’s not the first time you’re testing the experience. If it is and you or your client think it’s critical to call out some specific content questions, I would keep them to a minimum and develop your research plan around testing the overall experience with the expectation that any glaring content problems will arise when you let customers engage with the experience as a whole.
Choosing and Selling the Best Approach
As the content expert in the room, it will be up to you to educate your client about different approaches to testing content as well as to advocate for the one you think is best suited to the occasion. It will be helpful to have some case studies at the ready to illustrate how and when each one works best.
Speaking of case studies, we’d love to hear from you about your experience with testing content and if you have anything to add to the above approaches and when to use them. Thanks for adding your comments to this post.