Wednesday, December 29, 2010


By Landor Associates

Introduction

It is incredibly rare for a product or organization to be without a brand. There are museum brands (Guggenheim, Smithsonian), people brands (Martha Stewart, David Beckham), political brands (Obama versus McCain, Labour versus Conservatives), destination brands (Australia, Hong Kong), sport brands (Manchester United, New York Yankees, Super Bowl), nonprofit brands (Red Cross, Oxfam, RED), branded associations (YMCA, PGA, Association of Zoos and Aquariums), along with the product, service, and corporate brands with which we are all familiar. Many old marketing textbooks talk about brands versus commodities (no-name products), but in today’s world very few true commodities are left. Even basic foodstuffs have some sort of identifier on them, whether it is a private-label store brand such as Walmart’s Great Value salt or a major brand such as Morton Salt.
Brands help people make a choice, a choice among salts, financial institutions, political parties, and so on, and the choices are increasing. The number of brands on grocery store shelves, for example, tripled in the 1990s from 15,000 to 45,000.1 The purpose of branding is to ensure that your product or service is the preferred choice in the minds of your key audiences (whether customers, consumers, employees, prospective employees, fans, donors, or voters). The way in which the brand affects business performance is illustrated in figure 1.
[Figure 1: How a brand drives business performance]


Business performance is based on the behavior of customers, whether they choose to buy a particular product or service. And that behavior is based a great deal on the perception customers have of the brand: how relevant it is to them and how differentiated it is from the other brands in the same category. In turn, customers derive their perceptions of a brand from the interactions they have with it. Finally, that customer experience, ideally, is informed by a brand idea—what the brand stands for: the promise it is willing to make and keep in the marketplace. If the first part of this chain of cause and effect is indistinct or irrelevant to customers, there is little chance the rest of the chain will work, and the brand will not affect the business’ bottom line. Yet, despite the proliferation of brands and their inextricable link to business performance, it is not easy to define what a brand is, along with how to create, manage, and value it.

The difference between a brand and branding

Most experts define what a brand is in one of two ways. The first set of definitions focuses on some of the elements that make up a brand:
  • “The intangible sum of a product’s attributes: its name, packaging, and price, its history, its reputation, and the way it’s advertised.”2
  • “A name, sign, or symbol used to identify items or services of the seller(s) and to differentiate them from goods of competitors.”3
The second set of definitions describes the associations that come to mind when people think about a brand:
  • “Products are made in the factory, but brands are created in the mind.”4
  • “A brand is a person’s gut feeling about a product, service, or company.... It’s a person’s gut feeling, because in the end the brand is defined by individuals, not by companies, markets, or the so-called general public. Each person creates his or her own version of it.”5
What do we mean by “created in the mind”? When we think of Coke, we may think of the time we went to Disney World years ago. It was an incredibly hot day, and we drank an ice-cold Coke from the iconic glass Coke bottle and there was nothing more refreshing. When we think about the can, we might think red. Today perhaps we think of American Idol (and wonder whether they are really drinking Coke in those plastic cups). We think of how that Christmas polar bear ad made us smile. Those of us who are old enough may remember the “I’d like to teach the world to sing” commercial. These personal Coke brand associations are neither positive nor negative, they just come to mind. Coke has worked incredibly hard at implanting some of these brand associations in our minds: The idea and delivery of refreshment (and the supply management and distribution that are behind this), product placement, the color red, the association with a popular TV program, and the advertising all make us feel good about the brand. Coke has not controlled the buildup of these associations, but it has tried, at every stage of our experience with the brand, to positively influence them.

*
Adopting a female-focused strategy, Preem’s petrol and convenience stores added more relevant merchandise and turned its toilet facilities into a genuine point of differentiation, giving women something to tell their friends about.

Accepting the second set of definitions poses more of a challenge. The first definition suggests that the brand is the purview of the marketing department—just get the name, logo, design, and advertising right and you have your brand. The second shows how the brand is inextricably linked to the business. The creation of the brand may begin in the marketing department, but the experience of the brand has to be driven through all parts of the organization. Every interaction, or touchpoint, in a customer’s experience of a brand makes a difference.

*
It can take more than a year for a well-managed brand like Citroën to implement a refreshed brand across all customer touchpoints.

If you consider Apple, the quintessential brand success story, the most powerful parts of the customers’ experience of the brand are not confined to traditional brand elements such as the logo, the name, or the advertising. It is the environment of the Apple stores that encourages you to stay, explore (and upgrade), and interact with its products and its Genius Bar. It is iTunes as much as the iPod, the applications as much as the iPhone. It is Apple’s customer service and tone of voice that are seamless, from the instruction manuals to the real-time chat in the support section of the online store. The brand is driven throughout this whole experience, throughout every interaction.

*
Good Co. Coffee’s brand voice uses clever, lighthearted parody to brighten the day of the overstressed corporate coffee drinker.

But if a brand exists in an individual’s mind, and if it is delivered by the business, what is the role of branding? Branding cannot control what people think of a brand; it can only influence them. A brand can put some of the elements in place that will help people understand why they should choose or prefer a particular good, service, organization, or idea over another. Branding and the related marketing disciplines can help influence and explain how many of these associations in our minds have been built, whether through advertising, PR, employee behavior, supply chain management, and so on.
Branding is about signals—the signals people use to determine what you stand for as a brand. Signals create associations.
Allen Adamson, BrandSimple6
The bulk of this chapter will explain the process that determines the foundational signals of a brand: what a brand stands for (the brand idea); the attitude it projects (the brand personality); its name and how it talks (the verbal identity); what it looks like (the visual identity); and what it feels and sounds like (the sensory identity). Creating these foundational signals is the core business of a branding agency.
Before foundational signals are created, however, a certain amount of groundwork needs to be done to ensure that the best conditions for success are in place. The first two sections explain this essential preparation. The third describes the creation of the foundational signals. The final sections focus on what to do next with these foundational signals once they have been created, looking at delivery of the brand experience, managing the brand, and measuring the performance and value of brands.
To read more, please download our PDF of the complete chapter; the table of contents is shown below.
CONTENTS
Introduction
The difference between a brand and branding

Starting a branding project
Start with the right reason
Start with the right commitment
Start with the right business strategy
Start with the right focus—customers
Analyze the brand’s equity
Uncover insights and identify opportunities

The brand strategy
Defining the brand idea
Defining the brand architecture
Defining the brand personality
Producing the creative brief

Creating the brand experience
Crafting the verbal identity
Designing the visual and sensory identities
Testing verbal and visual identities

Delivering the brand experience

Managing a brand

Measuring the performance of a brand
Tracking brand strength
Measuring brand value

Case study: BP
Delivering the brand promise
---
Sarah Wealleans, author of Chapter 4 of The Big Book of Marketing, is consultant and former senior client director with Landor Associates. Additional input provided by Trevor Wade, Hayes Roth, Susan Nelson, Mich Bergesen, and Charlie Wrench.
---
1 McKinsey & Company, “Strike Up the Brands” (2003).
2 David Ogilvy, primary.co.uk/viewpoints (accessed 12 May 2009).
3 Dictionary of Business and Management (Oxford University Press, 2006).
4 Walter Landor, founder of Landor Associates.
5 Marty Neumeier, The Brand Gap: How to Bridge the Distance between Business Strategy and Design (AIGA New Riders, 2006).
6 Allen Adamson, BrandSimple: How the Best Brands Keep It Simple and Succeed (Palgrave Macmillan, 2007).

Wednesday, December 22, 2010

Sheplers CEO shares 3 tips for digital commerce growth in 2011


Sheplers CEO shares 3 tips for digital commerce growth in 2011

The strength of the Shop.org digital retail community lies with each of our members – most importantly the best practices, tips, and insights you each share.
As the end of the year approaches and retailers begin to look ahead to 2011, Megan Conniff, editor of Shop.org’s SmartBrief, took the opportunity to interview long-time Shop.org member and digital retail executive Bob Myers, CEO of Western apparel merchant Sheplers, on what lies ahead for digital commerce in 2011.
Myers’ words of advice included:

• “Your success in digital commerce ties directly to a seamless customer experience across multiple channels. This cannot be done in silos. Retailers can no longer afford to build a process or deploy technology that can’t integrate across platforms.”

• “The best investment you could make is in “the team.” As the bar raises across digital commerce, investing in your people continues to be the priority. Retaining and attracting talent is critical. Organizational cultures that promote innovation and collaboration within the four walls of their offices and, even more importantly, allow the team to explore outside the company (Shop.org, for example), will attract and retain future digital leaders.”

• “Invest in technologies that help you customize the experience to individual customer segments, be it locally or globally or preferably both. This holds the biggest potential and is the least flushed out.”
We urge all Shop.org members to keep these tips in mind as you plan your 2011 and 2012 strategies and tactics and continue to grow our powerful, vital online and multichannel retail community.
With a number of strong recovery months already under the belts of many retailers, Shop.org is optimistic about the months ahead. We hope to continue to see our retail community grow, creating jobs, improving customer experiences and relationships, and strategically taking advantage of the social and mobile “revolution”.
For more insight from Myers, be sure to read the full interview, which ran in Shop.org’s SmartBrief last week.

100+ Content Marketing & Social Media Predictions for 2011


http://www.contentmarketinginstitute.com/2010/12/content-marketing-social-media-predictions/

What started as something small has grown into a rather large predictions party.
We saw about 50 content marketing and social media predictions for 2009. For 2010, there were over 100 predictions from 70 of the leading marketing experts.
This year, over 100 of the industry’s thought leaders have come together to collectively throw down the magic eight ball on what 2011 will bring for content marketing and social media.
It’s a great (and very thorough) read, and definitely worth it [Download it here].
Upon reviewing all of them, here are some of the interesting trends that kept popping up.
  1. Facebook will take over the world in 2011, apparently becoming an added appendage to most humans on the face of the planet.
  2. While content marketing was clearly accepted in 2010 as a viable marketing practice, 2011 is the year when the majority of companies get serious about it – adding to budgets, getting better at measurement, and beginning to develop internal processes and staffing around the consistent creation of content.
  3. Marketers will become less obsessed with the channel (i.e. Twitter, Facebook) and get more focused on the story itself. Quality, consistent and differentiated content wins.
  4. Mobile Apps, gamification and 2011 as the year of the authentic brand relationship all stood out.
  5. Curation, curation, curation.
From a big picture perspective, it looks like the novelty of social media has worn off, and the idea that brands are now media companies is in full force.
If I had one sentence to sum up this year’s content marketing and social media predictions, it would be this:
2011 is the year that brands get serious about becoming media companies.
Thanks to our over 100 contributors this year and to zmags for putting the predictions ebook together.  Download it here.
Finally, below is a Wordle of ALL the predictions.  Interesting stuff.
[Image: Wordle of all the predictions]

Tuesday, December 14, 2010

How do colors affect purchases?

from Kissmetrics
[Infographic: How Colors Affect Purchase Decisions]

Engage with Content Strategy




December 3, 2010
Your client has a message and a CMS. Your client is going mobile and social. Engaging customers should be a cinch, right? Not so fast. As Colleen Jones explained at Gilbane Boston, your client needs content strategy. 
Categories: Content Influence, Content Strategy

Testing Content


This is the perennial $64,000 question for anything design-related, but especially content.
DECEMBER 14, 2010


Nobody needs to convince you that it’s important to test your website’s design and interaction with the people who will use it, right? But if that’s all you do, you’re missing out on feedback about the most important part of your site: the content.
Whether the purpose of your site is to convince people to do something, to buy something, or simply to inform, testing only whether they can find information or complete transactions is a missed opportunity: Is the content appropriate for the audience? Can they read and understand what you’ve written?

A tale of two audiences

Consider a health information site with two sets of fact sheets: A simplified version for the lay audience and a technical version for physicians. During testing, a physician participant reading the technical version stopped to say, “Look. I have five minutes in between patients to get the gist of this information. I’m not conducting research on the topic, I just want to learn enough to talk to my patients about it. If I can’t figure it out quickly, I can’t use it.” We’d made some incorrect assumptions about each audience’s needs and we would have missed this important revelation had we not tested the content.

You’re doing it wrong

Have you ever asked a user the following questions about your content?
How did you like that information?
Did you understand what you read?
It’s tempting to ask these questions, but they won’t help you assess whether your content is appropriate for your audience. The “like” question is popular—particularly in market research—but it’s irrelevant in design research because whether you like something has little to do with whether you understand it or will use it. Dan Formosa provides a great explanation about why you should avoid asking people what they like during user research. For what’s wrong with the “understand” question, it helps to know a little bit about how people read.

The reading process

Reading is a product of two simultaneous cognitive elements: decoding and comprehension.
When we first begin to read, we learn that certain symbols stand for concepts. We start by recognizing letters and associating the forms with the sounds they represent. Then we move to recognizing entire words and what they mean. Once we’ve processed those individual words, we can move on to comprehension: Figuring out what the writer meant by stringing those words together. It’s difficult work, particularly if you’re just learning to read or you’re one of the nearly 50% of the population who have low literacy skills.
While it’s tempting to have someone read your text and ask them if they understood it, you shouldn’t rely on a simple “yes” answer. It’s possible to recognize every word (decode), yet misunderstand the intended meaning (comprehend). You’ve probably experienced this yourself: Ever read something only to reach the end and realize you don’t understand what you just read? You recognize every word, but because the writing isn’t clear, or you’re tired, the meaning of the passage escapes you. Remember, too, that if someone misinterpreted what they read, there’s no way to know unless you ask questions to assess their comprehension.
So how do you find out whether your content will work for your users? Let’s look at how to predict whether it will work (without users) and test whether it does work (with users).

Estimate it

Readability formulas measure the elements of writing that can be quantified, such as the length of words and sentences, to predict the skill level required to understand them. They can be a quick, easy, and cheap way to estimate whether a text will be too difficult for the intended audience. The results are easy to understand: many state the approximate U.S. grade level of the text.
You can buy readability software. There are also free online tools from Added Bytes, Juicy Studio, and Edit Central; and there’s always the Flesch-Kincaid Grade Level formula in Microsoft Word.
But there is a big problem with readability formulas: most features that make text easy to understand—like content, organization, and layout—can’t be measured mathematically. Using short words and simple sentences doesn’t guarantee that your text will be readable. Nor do readability formulas assess meaning at all. For example, take the following sentence from A List Apart’s About page and plug it into a readability formula. The SMOG Index estimates that you need a third grade education to understand it:
We get more mail in a day than we could read in a week.
Now, rearrange the words into something nonsensical. The result: still third grade.
In day we mail than a week get more in a could we read.
Readability formulas can help you predict the difficulty level of text and help you argue for funding to test it with users. But don’t rely on them as your only evaluation method. And don’t rewrite just to satisfy a formula. Remember, readability formulas estimate how difficult a piece of writing is. They can’t teach you how to write understandable copy.
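To make the limitation concrete, here is a minimal sketch of the Flesch-Kincaid Grade Level calculation in Python. The syllable counter is a rough vowel-group heuristic, not the exact counter any commercial tool uses, so treat the numbers as estimates; note that the scrambled, nonsensical sentence scores identically, just as the article describes.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, ignoring a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# The A List Apart sentence scores at an early grade level...
grade = flesch_kincaid_grade(
    "We get more mail in a day than we could read in a week.")
# ...and the nonsense rearrangement scores exactly the same,
# because the formula only sees word and sentence lengths.
scrambled = flesch_kincaid_grade(
    "In day we mail than a week get more in a could we read.")
```

Because the words and sentence count are unchanged, the two scores are identical—a quick way to demonstrate to stakeholders why a formula alone can’t vouch for your copy.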

Do a moderated usability test

To find out whether people understand your content, have them read it and apply their new knowledge. In other words, do a usability test! Here’s how to create task scenarios where participants interpret and use what they read:
  • Identify the issues that are critical to users and the business.
  • Create tasks that test user knowledge of these issues.
  • Tell participants that they’re not being tested; the content is.
Let’s say you’re testing SEPTA, a mass transit website. It offers several types of monthly passes that vary based on the mode of transportation used and distance traveled: For example, a TransPass lets you ride on the subway, bus or trolley. A TrailPass also lets you ride the train, etc. If you only wanted to test the interface, you might phrase the task like this:
Buy a monthly TrailPass.
But you want to test how well the content explains the difference between each pass so that people can choose the one that’s right for them. So phrase your task like this:
Buy the cheapest pass that suits your needs.
See the difference? The first version doesn’t require participants to consider the content at all. It just tells them what to choose. The second version asks them to use the content to determine which option is the best choice for them. Just make sure to get your participants to articulate what their needs are so you can judge whether they chose the right one.
Ask participants to think aloud while they read the content. You’ll get some good insight on what they find confusing and why. Ideally, you want readers to understand the text after a single reading. If they have to re-read anything, you must clarify the text. Also, ask them to paraphrase some sections; if they don’t get the gist, you’d better rewrite it.
To successfully test content with task scenarios and paraphrasing, you’ve got to know what the correct answer looks like. If you need to, work with a subject matter expert to create an answer key before you conduct the sessions. You can conduct live moderated usability tests either in person or remotely. But, there are also asynchronous methods you can use.

Do an unmoderated usability test

If you need a larger sample size, you’re on a small budget, or you’re squeezed for time, try a remote unmoderated study. Send people to the unmoderated user testing tool of your choice, like Loop11 or OpenHallway, give them tasks, and record their feedback. You can even use something like SurveyMonkey and set up your study as a multiple-choice test: it takes more work up front than open-ended questions because you must define the possible answers beforehand, but it will take less time for you to score.
The key to a successful multiple-choice test is creating strong multiple choice questions.
  • State the question in a positive, not negative, form.
  • Include only one correct or clearly best answer.
  • Come up with two–four incorrect answers (distractors) that would be plausible if you didn’t understand the text.
  • Keep the alternatives mutually exclusive.
  • Avoid giving clues in any of the answers.
  • Avoid “all of the above” and “none of the above” as choices.
  • Avoid using “never,” “always,” and “only.”
You may also want to add an option for “I don’t know” to reduce guessing. This isn’t the SAT, after all. A lucky guess won’t help you assess your content.
Task scenario:
You want to buy traveler's checks with your credit card. Which percentage rate applies to the purchase?
Possible answers:
  • The Standard APR of 10.99%
  • The Cash Advance APR of 24.24%*
  • The Penalty APR of 29.99%
  • I don’t know
(*This is the correct answer, based on my own credit card company’s cardmember agreement.)
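Scoring such a study is mostly bookkeeping. Here is a minimal sketch, with hypothetical responses to the traveler’s-checks question above (the answer key and tallying scheme are illustrative, not from any particular survey tool):

```python
from collections import Counter

def score_question(responses, correct, dont_know="I don't know"):
    """Return (% choosing the keyed answer, % answering 'I don't know')."""
    tally = Counter(responses)
    n = len(responses)
    return 100 * tally[correct] / n, 100 * tally[dont_know] / n

# Hypothetical responses; "Cash Advance APR" is the keyed correct answer.
responses = [
    "Cash Advance APR", "Standard APR", "Cash Advance APR",
    "I don't know", "Cash Advance APR", "Penalty APR",
]

pct_correct, pct_unsure = score_question(responses, "Cash Advance APR")
# Half the participants chose the keyed answer; one in six admitted not knowing.
```

Tracking the “I don’t know” rate separately from the wrong answers tells you whether readers are misled by the content or simply can’t find the answer at all—two different rewriting problems.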
As with moderated testing, make it clear to participants that they’re not being tested, the content is.

Use a Cloze test

A Cloze test removes certain words from a sample of your text and asks users to fill in the missing words. Your test participants must rely on the context as well as their prior knowledge of the subject to identify the deleted words. It’s based on the Gestalt theory of closure—where the brain tries to fill in missing pieces—applied to written text.
It looks something like this:
If you want to __________ out whether your site __________ understand your content, you __________ test it with them.
It looks a lot like a Mad Lib, doesn’t it? Instead of coming up with a sentence that sounds funny or strange or interesting, participants must guess the exact word the author used. While Cloze tests are uncommon in the user experience field, educators have used them for decades to assess whether a text is appropriate for their students, particularly in English-as-an-additional-language instruction.
Here’s how to do it:
  • Take a sample of text of about 125-250 words.
  • Remove every fifth word, replacing it with a blank space.
  • Ask participants to fill in each space with the word they think was removed.
  • Score the answers by counting the number of correct answers and dividing that by the total number of blanks.
A score of 60% or better indicates the text is appropriate for the audience. Participants who score 40-60% will have some difficulty understanding the original text. It’s not a deal breaker, but it does mean that the audience may need some additional help to understand your content. A score of less than 40% means that the text will frustrate readers and should be rewritten.
It might sound far-fetched, but give this method a try before you dismiss it. In a government study on healthcare information readability, an expert panel categorized health articles as either easy or difficult. We ran a Cloze test using those articles with participants—who had low to average literacy skills—and found that the results reflected the expert panel’s findings. The average score for the “easy” version was 60, indicating the article was written at an appropriate level for these readers. The average score for the “difficult” version was 39: too hard for this audience.
Cloze tests are simple to create, administer, and score. They give you a good idea as to whether the content is right for the intended audience. If you use Cloze tests—either on their own or with more traditional usability testing methods—know that it takes a lot of cognitive effort to figure out those missing words. Aim for at least 25 blanks to get good feedback on your text; more than 50 can be very tiring.
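The four steps above are easy to automate. A minimal sketch of a Cloze generator and scorer, run against a version of the example sentence shown earlier (the scoring here counts only exact word matches, the strictest convention; some practitioners also accept synonyms):

```python
import re

def make_cloze(text: str, nth: int = 5):
    """Blank out every nth word; return the cloze text and the answer key."""
    words = text.split()
    answers = []
    for i in range(nth - 1, len(words), nth):
        # Strip trailing punctuation so the key holds just the word.
        answers.append(re.sub(r"\W+$", "", words[i]))
        words[i] = "__________"
    return " ".join(words), answers

def score_cloze(responses, answers):
    """Percent of blanks filled with the exact word that was removed."""
    correct = sum(r.strip().lower() == a.lower()
                  for r, a in zip(responses, answers))
    return 100 * correct / len(answers)

passage = ("If you want to find out whether your site "
           "visitors understand your content, you should test it with them.")
cloze_text, key = make_cloze(passage)
# key is ["find", "visitors", "should"]; a participant who writes
# "users" instead of "visitors" scores 2 of 3 blanks.
result = score_cloze(["find", "users", "should"], key)
```

A real test would use a 125-250 word sample (25-50 blanks, per the guidance above) rather than a single sentence, but the mechanics are the same.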

When to test

Test your content at any point in your site development process. As long as you have content to test, you can test it. Need to convince your boss to budget for content testing? Run it through a readability formula. Got content but no wireframes or visual design? Run a Cloze test to evaluate content appropriateness. Is understanding the content key to a task or workflow? Display it in context during usability testing.

What to test

You can’t test every sentence on your site, nor do you need to. Focus on tasks that are critical to your users and your business. For example, does your help desk get calls about things the site should communicate? Test the content to find out if and where the site falls short.

So get to it

While usability testing watches what users do, not what they say they do, content testing determines what users understand, not what they say they understand.
Whatever your budget, timeline, and access to users, there’s a method to test whether your content is appropriate for the people reading it. So test! And then, either rest assured that your content works, or get cracking on that rewrite. 

Related Topics: Content Strategy


About the Author

Angela Colter has been evaluating the usability of websites for the better part of a decade. She’s a Principal of Design Research at Electronic Ink in Philadelphia, tweets frequently, and blogs occasionally.