How to Test a Product or Service Concept

Concept testing is most often used to gauge the likely success of a new product or service idea before it is brought to market. Potential consumers of the product or service are asked to react to written statements, images or graphics, or actual implementations of the basic idea for the product or service.

Concept testing is frequently a Go/No Go decision driver based on consumer appeal and purchase intent.
Concept testing and development provide the direction and guidance necessary to identify and communicate key product or service benefits, uses, packaging, advertising, sales approaches, product information, distribution, and pricing.

A variety of concept testing options are available to help companies minimize risk and maximize revenue. We will design concept testing to address your particular needs and requirements.

The following applications show the value of concept testing to companies:

– Are you reaching out to a new market segment?
– How do your core customers use and interact with the product class?
– Do you need to test a new product concept before its initial introduction?
– Do you need to rank and select the best potential product concepts, names, USPs, packaging, or logos?
– What is the optimal price point for alternative new product concept bundles?
– Do you need to make a final go/no-go decision regarding a new product concept?
– Do you need to test customers’ trial experiences (at-home testing) to see whether product or communications adjustments should be made?

Key Components of Online Concept Testing

Each of the following testing stages focuses on customers’ critical needs and produces actionable information that can drive product formulation and promotional initiatives.

Screening is critical for any concept to be tested among potential consumers of the product or service. If, for example, we want to truly understand interest and purchase intent of a cleaning tool, this tool must be tested among people who clean their homes on a regular basis.
Needs assessment (frequently referred to as “Pain”) examines the core customer needs that may lead to acceptance of the new product or service, for purposes of understanding and segmentation, prior to the actual concept presentation to the relevant consumer group.
Concept presentation – the concept is usually presented to consumers in a concise “flyer” (or video) format underlining its key features and benefits.

Example of a concept testing flier: [image]

Another example of concept testing: [image]

Decision process assessment identifies information sources each purchaser or decision maker relies on to establish the credibility of the product, its benefits and values.
Concept understanding and general purchase intent. This approach allows the purchase intent for the product (or service) to be compared to the industry benchmark for market success.
Purchase intent and market potential at different price levels for the purposes of understanding price elasticity and volume and revenue forecasting.

Pricing question example: [image]

Price elasticity and revenue forecast example: [image]
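Forecasting volume and revenue at different price levels, as described above, can be sketched in a few lines. The price points, purchase-intent scores, and market size below are hypothetical placeholders, not survey results.

```python
# Hypothetical top-2-box purchase intent observed at each tested price point
price_points = {9.99: 0.42, 11.99: 0.31, 13.99: 0.22}
market_size = 100_000  # assumed number of addressable buyers

# Simple revenue forecast at each price: price x intent x market size
forecasts = {price: price * intent * market_size
             for price, intent in price_points.items()}
best_price = max(forecasts, key=forecasts.get)
print(best_price, forecasts[best_price])
```

In this made-up case the lowest price wins on projected revenue; with a less elastic concept, a higher price point could dominate instead.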

Product/service features and benefits identifies the features and benefits that are most important to customers. Features can be categorized into “need to haves” vs. “nice to haves.” Customer needs must be identified and prioritized for product development and advertising.
Packaging/logo testing – each package is tested on a number of variables vs. each other and vs. the competition.

Packaging testing example: [image]

Logo testing example: [image]

Design testing examples: [images]

Name and USP (unique selling proposition) assessment and ranking
Distribution and shelving – optimum distribution channels, shelf positioning, etc.

Merchandising assessment examples: [images]

Posted in Research Articles

Email Psychology Survey

MaCorr Research has recently conducted a study to better understand perceptions of email users toward personal and work email accounts.

The survey was conducted online among a geographically representative sample of 1,002 US adults 18-65 years of age, who regularly use home and work e-mail accounts.

Among other findings, when asked to define their own “e-mail personality”, 55% of regular work and home email users considered themselves “Deleters”, 30% “Filers”, 10% “Hoarders” and 5% “Printers”.

Deleter – 55%: You are conscientious and only keep an active inbox, deleting unnecessary e-mails and filing relevant ones. You respond to messages quickly and can be ruthless when deciding whether or not to reply.

Filer – 30%: You regularly initiate e-mail contact, and your e-mails are generally light-hearted. Although you don’t answer immediately, you wouldn’t leave it more than a day. You deal with a lot of e-mail, so you use inbox folders to keep conversations organized.

Hoarder – 10%: You have a relaxed attitude to e-mail. You rarely file or delete and don’t pay too much attention to how your email tone might sound. You only answer e-mails when you are ready.

Printer – 5%: You print e-mails to read them and may also put them in paper files. You always reply with a prepared, considered response, so it could be that some e-mails aren’t answered for a number of days. You are polite and traditional in tone and language.

60% of regular work and home email users find “Being asked out on a date” and “Announcing major life decisions” the most acceptable topics for e-mail. 59% also think that “Using improper grammar…” in email would not be a problem.

Not surprisingly, “Intelligence” is what regular home and work email users judge the most in other people’s emails. “Intelligence” is also what they would like to “transmit” by purposefully adapting the language, style or tone of their own e-mails.

Finally, a next-day email reply is quite acceptable. Only 10% of regular email users would be offended by having to wait less than a day for a reply.

Posted in Research Findings

How satisfied are we with our service providers?

MaCorr Research has recently conducted a study to better understand consumer satisfaction with the following subscription-based industries:

– Internet
– Cable or satellite TV
– Traditional newspaper or magazine
– Emergency car services or roadside assistance
– Satellite radio
– Cellular phone

Traditional newspaper or magazine subscriptions, along with emergency car services or roadside assistance, enjoy the highest consumer satisfaction.
9 out of 10 subscribers are very satisfied with the services overall. 90% and 86% of the respondents, respectively, are satisfied with “value for the money” they receive.
More than 4 out of 5 (85% and 86% of the respondents, respectively) are very likely to renew the subscription with the current provider at the end of the term.

On the other hand, cellular phone and cable or satellite TV service providers have a noticeably lower level of satisfaction among subscribers. 64% and 68% of the respondents, respectively, are satisfied with the services overall.
3 out of 5 (about 60% of the respondents) are satisfied with “value for the money” and less than 60% (56% and 58% of the respondents, respectively) would likely recommend the provider to a colleague or friend.

The survey was conducted among a statistically representative sample of current subscribers in North America.
Satisfaction was measured on a 10-point scale, with 1 being very dissatisfied and 10 very satisfied.

Posted in Research Findings

Do you feel empowered and satisfied at work?

MaCorr Research has recently conducted a study to examine the correlation between sharing customer information within retail or CPG organizations and improvements in decision making and employee satisfaction levels.

The survey was conducted among a geographically representative sample of 351 head office and store management employees at Retail and CPG companies (50 employees or more) in the US.

The main hypothesis: sharing customer information more widely within a retail or CPG organization can lead to improved decision making and higher satisfaction levels.
– Individuals who have and use customer information feel more empowered and engaged.
– Individuals who don’t have and therefore don’t use customer information feel less empowered and engaged.

The research findings indicated that it’s not enough for Retail and CPG companies to simply have access to customer information. To improve employee satisfaction, engagement and involvement in the company’s decision making process, customer information has to be shared effectively across the team.

If a retail organization or a CPG manufacturer has access to customer information but doesn’t share it effectively across functional teams, its employees feel even more disengaged and dissatisfied than those who work for companies with no access at all to customer information.

Employees of companies that effectively share customer information feel more valuable and empowered to make business decisions. At the same time, there are no significant differences between employees of companies that don’t have access to customer information and those of companies that have access but don’t feel the information is shared effectively.

Employees of organizations that effectively share customer information feel that their companies make more effective business decisions. These employees are also more satisfied with the company.
Interestingly, employees of companies that don’t have access to customer information feel less frustrated than employees of companies that have access to the information but don’t share it effectively.

In addition, employees of companies that have and effectively share consumer information feel that their businesses leverage social media much better.

Posted in Research Findings

“Market Research” Story

There is a good old story that is often narrated about a young boy and a wise old teacher.

Once upon a time there was a wise teacher who could answer every question that his students asked. But one day one of the students decided to trick the teacher.

He caught a butterfly, held it within his closed fist, and thought:
“I will ask the teacher whether the butterfly in my fist is dead or alive. If the teacher says ‘the butterfly is dead’, I’ll open my fist and the butterfly will fly away.
On the other hand, if the teacher says ‘the butterfly is alive’, I’ll just crush the butterfly in my fist and the teacher will be wrong again.”

So he asked the teacher whether the butterfly in his fist was dead or alive. And the teacher said: “Whether the butterfly is dead or alive depends on you!”

Whether it is about children’s education, a new house, or a critical business decision – the decision is in your hands.

And the role of market research is to help you make your business decisions with confidence, because when it is time to decide, knowing is much better than guessing.

Posted in Research Articles

Credit cards: how do we use them?

Did you know that 1 out of 2 credit card users carries an outstanding balance on their credit cards every month?

A recent survey conducted by MaCorr Research reveals that 50% of credit cards users in North America carry a monthly outstanding balance of close to $1,700 on average.

An average North American credit card user has 3 credit cards, but uses only 2.
36% of their total household spending is paid by credit card. As opposed to debit cards, cash or checks, credit cards are the preferred payment type for travel and expensive purchases.

The favorable characteristics of credit cards are the convenience and cash rewards they provide. Only 25% of the respondents use credit cards to postpone payments or get short-term loans.


9 out of 10 credit card users agree that it is their responsibility not to borrow beyond their means.

When choosing a credit card, security and the absence of annual fees seem to be the deciding factors.

Posted in Research Findings

3 critical elements of a research survey. Part 3 – Analysis.

Now you have the “right” questions – questions that drive meaningful responses. You have also defined your optimum sample size and collected the data.
The only thing left is to make sense of it!

Some time ago I read this story:

In the late 60s, the fire department of an American city decided to embrace a modern data collection and statistical analysis approach to improve and optimize its operations.

They collected lots of data, thoroughly analyzed it, and found a significant positive correlation between the number of firefighters sent to extinguish a fire and the amount of damage caused.

The more firefighters they sent to extinguish a fire, the more damage resulted!

Based on this finding, the city significantly reduced its fire department. What happened to the damage caused by fires? It increased!

While analyzing the data, they forgot the FIRE itself.
The larger the fire, the more firefighters were sent to extinguish it, but also the more devastation it caused. The true correlation between the number of firefighters and the damage could only be measured at a comparable fire size.

You can spend hours analyzing collected data, but analysis can become useless or even detrimental if not done correctly.

Posted in Research Articles

3 critical elements of a research survey. Part 2 – Does the sample size really matter?

So now you have the “right” questions – questions that drive meaningful responses – and are ready to go. The next step is sampling.

Consider the following famous example: there are two hospitals. In the first, 120 babies are born every day; in the other, only 12. On average, the ratio of baby boys to baby girls born every day in each hospital is 50/50. However, one day, in one of those hospitals, twice as many baby girls were born as baby boys. In which hospital was this more likely to happen?
The answer is obvious to a statistician but, as research shows, not so obvious to a lay person: it is much more likely to happen in the small hospital. The reason is that the probability of a random deviation from the mean decreases as the sample size increases.
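The hospital intuition can be verified directly with the binomial distribution; the sketch below computes the chance that at least two-thirds of a day’s births are girls in each hospital.

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

small = prob_at_least(8, 12)    # 12-birth hospital: at least 8 girls out of 12
large = prob_at_least(80, 120)  # 120-birth hospital: at least 80 girls out of 120
print(small, large)
```

The small hospital sees such a deviation roughly one day in five, while for the large hospital the probability is a tiny fraction of a percent.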

Sampling is the foundation of all research and, if done correctly, should yield valid and reliable information.

The sample size depends on a number of factors:

Population size – how many people does your sample represent? This may be the number of people in the city you are studying, the number of people who buy smartphones, etc. Often you won’t know the exact population size; it can safely be ignored when it is “large” or unknown.

Confidence interval (margin of error) – the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, if you use a confidence interval of 5 and 90 percent of your sample answered that they “like Fridays more than other days of the week”, you can be “sure” that if you had asked the question of the entire relevant population, between 85% (90−5) and 95% (90+5) would have “liked Fridays” as well.

Confidence level – expressed as a percentage, this represents how often the true population percentage lies within the confidence interval. A 95% confidence level means that if you repeated the survey 100 times, in about 95 of them the results would fall within the confidence interval. It gives you an idea of how sure you can be of your results.

Your accuracy also depends on the percentage of your sample that picks a particular answer. If 99% of your sample said “Yes” and 1% said “No” the chances of error are remote, irrespective of sample size. However, if the percentages are 51% and 49% the chances of error are much greater.

Here is what I read in a respectable newspaper:
“…Research findings clearly indicate that the majority of the entire adult population will purchase the new product.
The research was conducted among 390 adults, where 53% of the respondents said they would definitely or probably purchase the new product….”

Is there a problem?
A sample of 390 adults yields a margin of error of about ±5%. That means that, in reality, this 53% can actually lie anywhere between 48% (53−5) and 58% (53+5). As a result, it is incorrect to conclude that “the majority of the entire adult population will purchase the new product”.
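The arithmetic above follows the standard margin-of-error formula for a proportion at a 95% confidence level (z ≈ 1.96); a quick sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.53, 390)    # ~0.05, i.e. about plus or minus 5 points
low, high = 0.53 - moe, 0.53 + moe  # ~48% to ~58%
print(round(low * 100), round(high * 100))
```

With the lower bound below 50%, the data simply cannot support the “majority” claim.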

So does the sample size matter? Yes and no. The larger the sample size, the smaller the chance of error, but the sheer size of a sample does not guarantee its ability to accurately represent the target population. Large unrepresentative samples can lead to wrong conclusions just as small ones can.

Next time we will talk about data analysis and see how critical it can be for delivering accurate insights and actionable recommendations.

Please visit the MaCorr Research website and download the free sample size calculator. You will also find more details about sampling there.

Posted in Research Articles

3 critical elements of a research survey or DIY (do-it-yourself) vs. hiring a research vendor

Do-it-yourself (DIY) research, specifically market research surveys, has become very popular over the last several years. No surprise – it provides cheap, quick-turnaround research options that practically everyone can use.

The DIY research option is very helpful for students or companies that want to run quick, statistically imprecise surveys among their customers (provided they have built a customer contact list) or employees.

DIY survey tools offer, however, limited expertise as far as actual research quality is concerned. I don’t know about you, but a bunch of data means nothing to me unless it yields reliable, practical insights and actionable recommendations.

To achieve reliable research results, any survey must have 3 important elements:

1. It must ask the “right” questions (this will be the topic of our first discussion)
2. It must target a statistically significant sample of the targeted customer or employee group
3. It has to provide practical insights and actionable recommendations

Next time you decide to run a DIY survey, ask yourself whether you have these 3 critical elements in place.

Part 1 – Ask the “right” questions…

Here are several real life examples of survey questions.
In the first example, a researcher wanted to better understand consumer awareness of Prebiotics:

Q1. Do you know the main usage of Prebiotics?
– Yes
– No
– Not sure

Q2. To the best of your knowledge, which of the following statements about Prebiotics usage is correct (please don’t guess)?
– Prebiotics are used to treat high cholesterol
– Prebiotics are used to restore healthy bacteria
– Prebiotics are used to feed healthy bacteria
– Prebiotics are used to kill harmful bacteria
– Don’t know

While over 50% of respondents in the general adult population answered “Yes” to Question 1, only 13% were able to answer Question 2 correctly (“Prebiotics are used to feed healthy bacteria”).

Take a look at another example. Subscribers of an internet magazine were asked to respond to this question:

Q. Why do you like our magazine?
– Because it’s informative
– Because it’s available on line
– Because it’s free
– Because it has great ads
– Other (please specify)

Well, if one conducts this survey to better understand one’s customers, wouldn’t it be more meaningful to ask what is missing from the magazine and what could be improved?

Finally, take a look at this question. The question was asked following the presentation of a new consumer product:

Q. How much would you pay for the product?
– $10.99
– $11.99
– $12.99

I bet you know the answers received by the researcher.

It’s very tempting to do your own research – for free. The question is what value you will get from it.

Next time we will talk about sampling and statistical significance.

Posted in Research Articles

How to Stay in Business

Great idea, enthusiastic team, vision of flowing profits – everybody is excited and ready for success… Three months later the business closes down. Sound familiar?

According to StatsCanada, almost every second small and medium business fails within 5 years.
One of the reasons for the failure is overly optimistic projections about market size and, as a result, unrealistic expectations. Market research, therefore, becomes absolutely essential for businesses to make realistic data-based projections.

In the past, the main excuse was the cost associated with even the most basic market research. Telephone and mail surveys, face-to-face interviews and traditional focus groups were the only available options, and only big-budget companies could afford to conduct such research.

Fortunately, not anymore. Today, you don’t need big bucks to conduct research. Let us look at two different ways of conducting research quickly, reliably and cost effectively.

1. Website and web page based surveys are primarily used for website evaluation, visitor profiling or e-shopping analysis. Website visitors are invited to participate in a survey using a “banner” type invitation or a “pop-up” window.
These types of surveys are an effective and inexpensive method to obtain the opinions of your current customers or website visitors. The only issue is that while the respondents can be randomly selected, they are invited to opt in and, as a result, are considered “self-selected”.
The same is true for any employee or customer survey with a previously established contact list.

2. A more accurate and cost-effective way to conduct unbiased awareness, perception and usage studies is via email online surveys. Respondents for these studies are recruited through email invitations from so-called web panels.
Web panels are large, demographically and geographically representative internet-based groups used for customer, business-to-business and, sometimes, employee surveys. They include millions of consumers, business owners and professionals. These panels are consistently maintained and refreshed to reflect demographic changes and to ensure statistically representative samples.
As most surveys and research projects require a relatively small sample size (up to 1,000 completed responses), the main reason to maintain and consistently refresh such large panels is to minimize the impact of “professional” or “self-selected” respondents. Each panel member can expect to participate in surveys no more than 2 to 3 times per year. For this reason, the participation reward system is also based on random drawings of various prizes depending on the length, complexity and topic of each survey.
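One way such a panel draw might work is a simple quota sample with a participation cap. Everything below – the panel records, the regional quotas, and the 3-surveys-per-year cap – is an illustrative assumption, not a description of any vendor’s actual procedure.

```python
import random

random.seed(7)

# Hypothetical panel records: (panelist_id, region, surveys_taken_this_year)
panel = [(i, random.choice(["NE", "S", "MW", "W"]), random.randint(0, 4))
         for i in range(10_000)]

# Assumed regional quotas for a 1,000-complete representative sample
quotas = {"NE": 170, "S": 380, "MW": 210, "W": 240}

# Cap participation to limit "professional" respondents
eligible = [p for p in panel if p[2] < 3]
random.shuffle(eligible)

sample, filled = [], {r: 0 for r in quotas}
for pid, region, _ in eligible:
    if filled[region] < quotas[region]:
        sample.append(pid)
        filled[region] += 1
print(len(sample), filled)
```

In practice panel vendors balance many more dimensions (age, gender, income, and so on), but the quota-and-cap logic stays the same.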

Panels recruitment sources:
– Web advertising
– Permission-based databases
– Public relations (local newspapers’ web portals)
– Partner-recruited specialty panels
– Alliances with heavily trafficked portals

Major benefits of web panels:
– Worldwide coverage
– Cost efficient (significantly cheaper than an equivalent phone survey)
– Short reply time (2-5 days) and high response rate (over 50%)
– High accuracy – statistically representative of the general population
– No need to collect demographic information during the survey (the data is collected during the panel design process)
– Supports consistent follow-up analysis of virtually the same sample (change in awareness level before and after advertising, etc.)
– Allows incorporation of visual effects and objects (pictures, movies, etc.)

Posted in Research Articles