Last week, we reported the results of our Content + Credibility Study and some key findings on perceptions
of credible web content in the U.S. and U.K. We also mentioned that the results of this first phase were gathered from 1,600 people surveyed in those countries.
What we haven’t written much about, outside of the report itself, is our methodology for creating, publishing, and managing the survey.
Like a fine recipe, we drew on best practices and added our own “spices” to create a classic formula for the first-ever comprehensive study of web content credibility.
Here’s a closer look at our process.
Choosing the right ingredients
In our case, the right research questions were among the first ingredients. We wanted to know whether people perceive that content credibility has changed and what cues they look for to decide if content is trustworthy. Also, in what situations do people value credibility the most?
The next ingredients were the survey questions and examples of web content. For the set of questions asking about specific content examples, we used two kinds of sampling: critical case sampling and common case sampling. These methods helped us select topics that relate to important decisions and represent common types of content for select industries.
We then researched and selected an online survey tool that would accommodate:
- Different styles of questions, such as multiple choice and Likert scale.
- Showing examples of content, then asking questions.
- Logic to ensure that participants saw content examples only for pertinent topics (sketched below).
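To make that last requirement concrete, here's a minimal sketch of the kind of skip logic we mean. It's illustrative Python, not the survey tool's actual configuration, and the topic names and question wording are hypothetical.

```python
# Illustrative sketch of survey skip logic (hypothetical topics and wording):
# show a follow-up question only when the topic is pertinent to the participant.

def next_question(topic: str, pertinent_topics: set[str]) -> str | None:
    """Return the follow-up question for a topic, or None to skip it."""
    if topic not in pertinent_topics:
        return None  # skip: this participant never sees this topic's example
    return f"How credible did you find the {topic} example you just saw?"

# A participant who indicated interest in health and finance content:
participant_topics = {"health", "finance"}
for topic in ["health", "finance", "travel"]:
    q = next_question(topic, participant_topics)
    print(q if q else f"(skipped {topic})")
```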
Testing, testing, and more testing
After the questions were written and the survey tool was set up, we tested the survey functionality and ran a pilot test to determine whether all the ingredients worked together.
User scenarios within our test cases were invaluable to the functional tests: they highlighted areas of logic that needed to be added or adjusted to improve the survey flow and data collection. We also kept an eye on the estimated duration of the survey to reduce abandonment.
Our survey asked about credibility, which can be a vague concept. So, before launching the survey, we had an advisory panel review the survey questions, conducted a pilot test, and analyzed the reliability of the results. The internal consistency reliability analysis found that pilot participants answered different questions about similar topics consistently, which strongly suggests they understood our questions clearly.
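We won't reproduce our actual analysis here, but internal consistency is commonly measured with Cronbach's alpha. Here's a minimal sketch on invented pilot data; assuming alpha as the statistic is our illustration, not a detail taken from the study.

```python
# Internal-consistency check via Cronbach's alpha (illustrative data only).
# Rows are pilot participants; columns are Likert-scale questions that
# probe the same underlying topic.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array with shape (participants, items)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical pilot data: 5 participants x 4 related questions, rated 1-5.
pilot = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(pilot):.2f}")
```

A common rule of thumb treats alpha values of roughly 0.7 and above as acceptable internal consistency.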
With the tests and reliability analysis completed, it was time to have a (survey) party with our domestic and international friends.
1,600+ of our closest friends
Okay, they weren't our friends. They were a random sample. But, you get the idea. In early 2012, we launched the survey in two markets: the U.S. and the U.K. In each market, 800 people participated, for a grand total of 1,600.
The large sample minimizes sampling error and gives a high confidence level in each market (see the quick check after the list below). To ensure we could detect statistically significant differences in responses by age for most questions, we set a quota by age group. So, for each market, each of these age groups had 200 participants:
- 18-24
- 25-34
- 35-44
- 45-65
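As a back-of-the-envelope check on the sampling-error claim (not the study's formal computation), here's the standard margin-of-error formula applied to 800 respondents per market, using the worst-case proportion p = 0.5 at 95% confidence:

```python
# Rough margin of error for a simple random sample of n = 800 per market.
import math

n = 800   # participants per market (U.S. or U.K.)
p = 0.5   # worst-case proportion (maximizes the margin)
z = 1.96  # z-score for 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error ~ +/-{margin * 100:.1f} percentage points")  # ~ +/-3.5
```

The same formula applied to a 200-person age group gives roughly ±7 points, consistent with detecting age differences on most questions.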
Hot, fresh results NOW
After a cycle of cleaning the data and organizing it for analysis, we had our first look at the data. In all, we performed three types of analysis to ensure the integrity of the results: reliability analysis, descriptive analysis, and inferential analysis.
The result? In terms of unique and useful insights, we think our recipe is a hit. And we identified potential areas for more analysis in phases 2 and 3 of our study. By choosing the right ingredients (inputs) and testing our recipe in different ways, we were able to create a new formula for a one-of-a-kind study that's destined to be a classic.
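For a flavor of what the descriptive and inferential steps can look like, here's a hedged sketch on invented counts (not our data): tabulate responses by age group, then run a chi-square test of whether the answer distribution differs across groups.

```python
# Descriptive + inferential sketch on invented counts (not study data):
# a chi-square test of whether answers differ across age groups.
from scipy.stats import chi2_contingency

# Rows: age groups 18-24, 25-34, 35-44, 45-65.
# Columns: counts answering "more credible" / "no change" / "less credible".
observed = [
    [60, 90, 50],
    [55, 95, 50],
    [45, 100, 55],
    [40, 105, 55],
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A p-value below 0.05 would indicate a significant difference by age.
```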
Interested in the detailed protocol and, most importantly, our findings and recommendations? It's all in the report. You can order the report here.