
Selling is about more than moving product. It’s about moving people. Touching them. Connecting with them.

The relationships you build with your customers top the list of your company’s most valuable assets. So how do you know if you’re connecting with them the right way?

Customer Survey

The customer survey, when done right, is one of the most effective business research tools for developing better marketing strategies.

But it’s not just about asking questions. It’s about asking the right questions – questions that yield truthful responses and let you compare your customers’ awareness, satisfaction and attitudes with those toward other relevant brands, organizations and events.

These are some of the best practices for building and distributing successful customer surveys that will achieve useful and actionable results:

1. Clearly define your research objectives and address them specifically – don’t try to cover everything in one shot

Cut unnecessary questions from your surveys. 
Every question you include should have a well-defined purpose and a good reason for being asked. Do you really need to know things like customer name, age or income?

Adding questions you thought were merely “nice to know” can make survey takers abandon your survey without finishing it.

If you think such questions are important, but not critical, add them to the end of your survey and make them optional.

2. Make your survey short to keep respondents engaged

Find the shortest way to ask a question without losing its intent. It’s not just about reducing the word count: ask questions that are straightforward and simple to understand.

Your survey length is critical for keeping abandon rates low. Our research indicates that if you keep your survey under 25 questions you will achieve high response rates.

3. Avoid leading and loaded questions

Questions that lead respondents toward a certain answer due to bias in their phrasing are harmful to your surveys. Avoid loaded questions by eliminating emotionally charged language that hints at preferences or assumed facts.

Here are several “real life” examples of survey questions:

In the first example, a researcher wanted to better understand consumer awareness of Prebiotics:

Q1. Do you know the main usage of Prebiotics?
- Yes
- No
- Not sure

Q2. To the best of your knowledge, which of the following statements about Prebiotics usage is correct (please don’t guess)?
- Prebiotics are used to treat high cholesterol
- Prebiotics are used to restore healthy bacteria
- Prebiotics are used to feed healthy bacteria
- Prebiotics are used to kill harmful bacteria
- Don’t know

While over 50 percent of the respondents answered “Yes” to Question 1, only 13 percent were able to answer Question 2 correctly (“Prebiotics are used to feed healthy bacteria”).

Take a look at another example. Subscribers of an internet magazine were asked to respond to this question:

Q. Why do you like our magazine?
- Because it’s informative
- Because it’s available online
- Because it’s free
- Because it has great ads
- Other (please specify)

Well, if the survey is conducted to better understand the magazine’s readers, wouldn’t it be more meaningful to ask what is missing from the magazine and what can be improved?

Finally, take a look at this question. The question was asked following a new product presentation:

Q. How much would you pay for the product?
- $10.99
- $11.99
- $12.99

I bet you know the answers received by the researcher.

4. Add smart open-ended questions and create a connection between quantitative and qualitative (open-ended) questions

While some of your most valuable and insightful feedback may come from open-ended questions, nothing may intimidate survey takers more than a huge text box.

Ask a brief quantitative (single/multiple choice, “scales”, etc.) question first to create a sense of progress, and then follow up with a targeted open-ended question such as, “Why were you dissatisfied with the service?”. This approach also makes the answers you receive more specific.

5. Keep rating scales consistent, but randomize question topics

Commonly used survey scales can become confusing when the context changes.

If you use a 1-5 scale where 1 = “Strongly Disagree” and 5 = “Strongly Agree”, keep the same pattern for all scale-type questions.

Do not assign 1 to “Most Important” and 5 to “Least Important” if you had been using 5 as the agreeable answer (“Strongly Agree”) to previous questions.

If you do this, it will not only confuse respondents; many of them will miss the change and give inaccurate answers.

It is very beneficial, however, for the accuracy and quality of your survey to randomize or mix your question topics (not your scales).
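As a quick sketch of this tip (the question texts and variable names below are hypothetical), topic order can be randomized per respondent while the scale direction stays pinned:

```python
import random

# Hypothetical questionnaire: four topics sharing one agreement scale.
questions = [
    "The checkout process was easy.",
    "Support responded quickly.",
    "The product met my expectations.",
    "I would recommend us to a friend.",
]
scale = {1: "Strongly Disagree", 5: "Strongly Agree"}  # identical for every question

random.shuffle(questions)  # vary topic order per respondent...
for q in questions:        # ...but never flip the scale direction
    print(f"{q}  (1 = {scale[1]} ... 5 = {scale[5]})")
```

Each respondent sees the topics in a different order, but 5 always means the same thing.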

6. Consider using interactive survey tools, pictures and movies, but only when necessary

In some cases respondents’ engagement and response rate can be further improved by using interactive survey tools such as Virtual Shelf, Hot Text, Heat Map or other Interactive ranking questions.

When used smartly, interactive survey tools can improve the quality and accuracy of your survey results.

Do not overwhelm survey takers with such tools, however, as they can also distract respondents from the actual question.

Follow the links for demos:
The Virtual Shelf
Hot Text
The Heat Map
The Image Rank Sort
The Rating Scale
The Stack Sort
The Rank Sort
The Slider Scale

7. Make sure your sample statistically represents the targeted population

Sampling is the foundation of all research. Reliable sampling helps you make business decisions with confidence.

A small, representative sample will reflect opinions and behavior of the group from which it was drawn.

The larger the sample size, the smaller the chance of error, but the sheer size of a sample does not guarantee its ability to accurately represent a target population. Large unrepresentative samples can lead to wrong conclusions just as small ones can.

For more information about sampling: http://www.macorr.com/sample-size-methodology.htm
MaCorr Sample Size calculator: http://www.macorr.com/sample-size-calculator.htm
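As a rough sketch of what such a sample size calculator computes – the function name and defaults below are ours, using the textbook formula for estimating a proportion, not necessarily MaCorr’s exact methodology:

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5, population=None):
    """Completed responses needed to estimate a proportion.

    z: z-score for the confidence level (1.96 ~ 95%)
    margin: acceptable error, e.g. 0.05 for ±5%
    p: expected proportion; 0.5 is the most conservative choice
    population: apply the finite-population correction when known
    """
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

print(sample_size())                 # ±5% at 95% confidence, large population
print(sample_size(population=2000))  # a small known population needs fewer completes
```

With the defaults this yields 385 completes; a known population of 2,000 reduces the requirement to 323.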

8. Guarantee anonymity and confidentiality, and provide a sense of neutrality

If possible, use the Respondent Anonymity Assurance (RAA) approach. This technology allows tracking who has and has not completed the survey, and following up with individuals who have not.

In addition, it allows linking each respondent’s specific responses to any additional customer- or employee-specific information such as level in the corporate structure, demographic information, tenure, etc.

In RAA-enabled surveys, computer-generated identification numbers are assigned to individuals, so the researcher never has access to a respondent’s personal information and response data at the same time.

Surveys conducted by an independent research company using RAA usually deliver more honest responses than surveys conducted by the employer or service provider themselves.

9. Choose the right timing to send your survey

Our studies found the highest survey open and click-through rates occur on Tuesday, Wednesday and Sunday.
Since there was no significant difference between the response quality gathered on weekdays and on weekends, send out surveys first thing at the start of a new week or on the weekend.

10. Reward the respondents

Entice customers to take your survey. Our research shows that incentives can increase survey response rates by up to 30 percent.

If possible, use discounts on your products or services. Alternatively, use easy-to-distribute, traceable electronic gift cards. Electronic gift cards from amazon.com, for example, will be relevant and enticing to a wide variety of survey respondents.

There is an opinion that freebies can reduce the quality of responses, but our studies show that this isn’t likely to be the case.

11. Use analytical tools and approaches for advanced analysis of survey results

Research is about more than just getting answers. It’s about gaining confidence. We believe advanced analytics must be part of every company’s business intelligence strategy, regardless of its size.

If possible, consider using advanced analytical approaches such as correlation, factor and conjoint analyses, quadrant analysis, etc. to arrive at the kind of conclusions that drive more precise, meaningful results and, ultimately, better business decisions.

For more information about advanced analytics: http://www.macorr.com/marketing-analytics.htm

The MaCorr Team


Can math and statistics be fascinating and fun? We sure think so. Here is an example we hope you will enjoy.
One deck. Fifty-two cards. How many arrangements? Let’s put it this way: any time you pick up a well-shuffled deck, you are almost certainly holding an arrangement of cards that has never existed before and might never exist again. A MaCorr Research intern, with the help of TED-Ed, explains…
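For the curious, the number of arrangements is 52 factorial, which is easy to compute exactly:

```python
import math

arrangements = math.factorial(52)  # 52 choices for the first card, 51 for the next...
print(arrangements)
print(f"{len(str(arrangements))} digits")  # a 68-digit number
```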


Manufacturers have a lot to think about when designing their products. Packaging plays a big part in how consumers view one product over another. But even the smartest, most well received packaging can get neglected by consumers. That’s because it’s what happens once the product hits the store that can make a world of difference between bright profit margins and dull performances.

Virtual Retail Shelf Research

MaCorr Research was commissioned to get a better understanding of how consumers shop for lighting products.

So, we created a Virtual Shelf to figure it all out. Our Virtual Shelf simulates a real-world shopping experience for all kinds of manufacturers. For our client, we simulated an aisle in one of the major retail chains that consumers choose when shopping for lighting products.

We were able to determine several key things without actually putting the lighting product in the store. Our research was designed to figure out:

• How exactly consumers shop for lighting products.
• How to better optimize the positioning and segmentation in a retail environment that’s crowded with competition.
• Why consumers choose one product over another, and how important shelf location is in driving the purchasing decision.

What did we find? A lot of bright insight—the kind of information that helps our client optimize their retail plan-o-gram and shelf space allocation. And the end result? Profit margins that point to a brighter future.


It’s not easy to go head-to-head with industry heavyweights that command a particular market segment. That’s especially true in the household cleaning industry, where major brands truly outshine and out-sparkle the competition.
How MaCorr Market Research helped a leading US manufacturer really clean up

But that didn’t stop one of our clients from trying to compete with the Swiffer family of products. A leading cleaning products manufacturer in the US, our client was known for its wide variety of household products, but the folks behind Swiffer had beaten them to the punch with their lineup of innovative cleaning solutions.

With a huge desire to mop the floor with the competition, our client wanted to launch a product that competed directly with Swiffer. But simply launching a competing product with the same offering wouldn’t be enough – especially given Swiffer’s market share and strong brand recognition.

So the manufacturer came to MaCorr and asked us to employ our Gap Finder and Concept Testing market research techniques.

Through our Gap Finder process, we pinpointed the gap that exists between what would-be Swiffer users deem most important about a product and the reality of what Swiffer actually delivers.

After narrowing down the gap, we took the rough, sketched out idea, and then tested and analyzed its potential. Our testing process included features expectations analysis, packaging and logo testing, name and USP assessment, and even at-home testing.

The result? Our client has successfully launched their product, which you can find in major retail chains – not to mention in clean homes all over North America.


Concept testing is most often used to test the success of a new product or service idea before it is marketed. Potential consumers of the product or service are targeted to provide their reactions to written statements, images or graphics, or actual implementations of the basic idea for the product or service.

Concept testing is frequently a Go/No Go decision driver based on consumer appeal and purchase intent.
Concept testing and development provides the direction and guidance necessary to identify and communicate key product or service benefits, uses, packaging, advertising, sales approaches, product information, distribution, and pricing.

A variety of concept testing options is available to help companies minimize risk and maximize revenue. We will design concept testing to address your particular needs and requirements.

The following applications show the value of concept testing to companies:

- Are you reaching out to a new market segment?
- How do your core customers use and interface with the product class?
- Are you testing a new product concept before its initial introduction?
- Do you need to rank and select the best potential product concepts, names, USPs, packaging or logos?
- Do you need to determine the optimal price point for alternative new product concept bundles?
- Do you need to make a final go/no-go decision regarding a new product concept?
- Do you need to test customers’ trial experiences (at-home testing) to see if product or communications adjustments should be made?

Key Components of Online Concept Testing

Each of the following testing stages focuses on customers’ critical needs and produces actionable information that can drive product formulation and promotional initiatives.

- Screening is critical for any concept to be tested among potential consumers of the product or service. If, for example, we want to truly understand interest and purchase intent of a cleaning tool, this tool must be tested among people who clean their homes on a regular basis.
- Needs assessment (frequently referred to as “Pain”) examines the core customer needs that may lead to acceptance of the new product or service, for purposes of understanding and segmentation, prior to the actual concept presentation to the relevant consumer group.
- Concept presentation – the concept is usually presented to consumers in a concise, “flyer” (or movie) type format highlighting its key features and benefits.

Example of a concept testing flyer:

Another example of concept testing:

- Decision process assessment identifies information sources each purchaser or decision maker relies on to establish the credibility of the product, its benefits and values.
- Concept understanding and general purchase intent. The approach allows comparing purchase intent of the product (or service) to the industry benchmark for market success.
- Purchase intent and market potential at different price levels for the purposes of understanding price elasticity and volume and revenue forecasting.

Pricing question example:

Price elasticity and revenue forecast example:

- Product/service features and benefits identifies the features and benefits that are most important to customers. Features can be categorized into “need to haves” vs. “nice to haves.” Customer needs must be identified and prioritized for product development and advertising.
- Packaging/logo testing – each package is tested on a number of variables vs. each other and vs. the competition.

Packaging testing example:

Logo testing example:

Design testing example:


- Name and USP (unique sales proposition) assessment and ranking
- Distribution and shelving – optimum distribution channels, shelf positioning, etc.

Merchandizing assessment example:



There is a good old story that is often told about a young boy and a wise old teacher.

Once upon a time there was a wise teacher who could answer every question that his students asked. But one day one of the students decided to trick the teacher.

He caught a butterfly, held it within his closed fist, and thought:
“I will ask the teacher if the butterfly in my fist is dead or alive. If the teacher says ‘the butterfly is dead’, I’ll open my fist and the butterfly will fly away.
On the other hand, if the teacher says ‘the butterfly is alive’, I’ll just crush the butterfly in my fist and the teacher will be wrong again.”

So he asked the teacher if the butterfly in his fist was dead or alive. And the teacher said: “Whether the butterfly is dead or alive depends on you!”

Whether it is about children’s education, a new house, or a critical business decision – the decision is in your hands.

And the role of market research is to help you make your business decisions with confidence, because when it is time to decide, knowing is much better than guessing.


Now you have the “right” questions – questions that drive meaningful responses. You have also defined your optimum sample size and collected the data.
The only thing left is to make sense of it!

Some time ago I read this story:

In the late 1960s, the fire department of an American city decided to embrace a modern data collection and statistical analysis approach to improve and optimize its business model.

They collected lots of data, analyzed it thoroughly, and found a significant positive correlation between the number of firefighters sent to extinguish a fire and the amount of damage caused.

The more firefighters they sent to extinguish a fire, the more damage was done!

Well, based on this finding, the city significantly reduced its fire department. What happened to the damage caused by fires? It increased!

While analyzing the data, they forgot the FIRE.
The larger the fire, the more firefighters were sent to extinguish it, but also the more devastation it caused. The true correlation between the number of firefighters and the damage could only be measured at a comparable fire size.
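The fire story is a classic confounding effect, and it can be reproduced with a small simulation (all numbers below are hypothetical): fire size drives both the number of firefighters dispatched and the damage, so the two correlate strongly until fire size is held comparable.

```python
import random

random.seed(1)  # reproducible hypothetical data

# Each record: (fire size, firefighters dispatched, damage caused).
# Fire size drives BOTH other variables -- it is the hidden confounder.
fires = []
for _ in range(5000):
    size = random.uniform(1, 10)
    firefighters = size * 3 + random.gauss(0, 1)
    damage = size * 50 + random.gauss(0, 10)
    fires.append((size, firefighters, damage))

def corr(pairs):
    """Pearson correlation for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

all_fires = [(f, d) for _, f, d in fires]
same_size = [(f, d) for s, f, d in fires if 4.9 < s < 5.1]  # hold fire size fixed

print(round(corr(all_fires), 2))  # strongly positive: the "damaging firefighters"
print(round(corr(same_size), 2))  # much weaker once fire size is comparable
```

Restricting the comparison to fires of similar size makes the spurious correlation all but disappear.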

You can spend hours analyzing collected data, but analysis can become useless or even detrimental if not done correctly.


So now you have the “right” questions – questions that drive meaningful response – and are ready to go. Next step is sampling.

Consider the following famous example: there are two hospitals. In the first, 120 babies are born every day; in the other, only 12. On average, the ratio of baby boys to baby girls born every day in each hospital is 50/50. However, one day, in one of those hospitals, twice as many baby girls were born as baby boys. In which hospital was this more likely to happen?
The answer is obvious to a statistician but, as research shows, not so obvious to a lay person: it is much more likely to happen in the small hospital. The reason is that the probability of a large random deviation from the mean decreases as the sample size increases.
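The hospital example can be verified exactly with the binomial distribution (the helper function below is ours):

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# "Twice as many girls as boys" means girls are at least 2/3 of the day's births.
small = prob_at_least(12, 8)    # 12 births/day: 8 or more girls
large = prob_at_least(120, 80)  # 120 births/day: 80 or more girls

print(f"small hospital: {small:.4f}")
print(f"large hospital: {large:.6f}")
```

A day with at least two girls for every boy happens roughly one day in five at the small hospital, but only about once in several thousand days at the large one.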

Sampling is the foundation of all research and, if done correctly, should yield valid and reliable information.

The sample size depends on a number of factors:

Population Size – How many people does your sample represent? This may be the number of people in the city you are studying, the number of people who buy smartphones, etc. Often you will not know the exact population size; it can be ignored when it is “large” or unknown.

Confidence interval (error rate) – the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, if you use a confidence interval of 5 and 90 percent of your sample answered that they “like Fridays more than other days of the week”, you can be “sure” that if you had asked the question of the entire relevant population, between 85% (90-5) and 95% (90+5) would have “liked Fridays” as well.

Confidence level – expressed as a percentage, it represents how often the true percentage of the population who would pick an answer lies within the confidence interval. A 95% confidence level means that if you repeated the survey 100 times, in 95 of them the result would fall within the confidence interval. It gives you an idea of how sure you can be of your results.

Your accuracy also depends on the percentage of your sample that picks a particular answer. If 99% of your sample said “Yes” and 1% said “No” the chances of error are remote, irrespective of sample size. However, if the percentages are 51% and 49% the chances of error are much greater.

Here is what I read in a respectable newspaper:
“…Research findings clearly indicate that the majority of the entire adult population will purchase the new product.
The research was conducted among 390 adults, where 53% of the respondents said they would definitely or probably purchase the new product….”

Is there a problem?
A sample of 390 adults ensures statistical accuracy of the results with an error rate of ±5%. It means that, in reality, this 53% can actually lie anywhere between 48% (53-5) and 58% (53+5). As a result, it is incorrect to conclude that “the majority of the entire adult population will purchase the new product”.
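The newspaper’s ±5% checks out with the standard margin-of-error formula for a proportion (a sketch of the calculation, not the paper’s exact methodology):

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p measured on n respondents."""
    return z * sqrt(p * (1 - p) / n)

p, n = 0.53, 390
moe = margin_of_error(p, n)
print(f"margin: ±{moe:.1%}")
print(f"plausible range: {p - moe:.0%} to {p + moe:.0%}")
```

The true population figure could plausibly be anywhere from 48% to 58%, so “the majority” is not a safe conclusion.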

So does the sample size matter? Yes and no. The larger the sample size, the smaller the chance of error, but the sheer size of a sample does not guarantee its ability to accurately represent a target population. Large unrepresentative samples can lead to wrong conclusions just as small ones can.

Next time we will talk about data analysis and see how critical it can be for delivering accurate insights and actionable recommendations.

Please visit the MaCorr Research website and download the free sample size calculator. You will also find more details about sampling there.


Do-it-yourself (DIY) research, specifically market research surveys, has become very popular over the last several years. No surprise – it provides a very cheap, quick-turnaround research option that practically everyone can use.

The DIY research option is very helpful for students or companies that want to run quick, statistically imprecise surveys among their customers (provided they have built a customer contact list) or employees.

DIY survey tools offer, however, limited expertise as far as actual research quality is concerned. I don’t know about you, but a bunch of data, to me, means nothing unless it provides reliable, practical insights and actionable recommendations.

To achieve reliable research results, any survey must have 3 important elements:

1. It must ask the “right” questions (this will be the topic of our first discussion)
2. It must target a statistically significant sample of the targeted customer or employee group
3. It has to provide practical insights and actionable recommendations

Next time you decide to run a DIY survey, ask yourself if you have those 3 critical elements in place.

Part 1 – Ask the “right” questions…

Here are several real life examples of survey questions.
In the first example, a researcher wanted to better understand consumer awareness of Prebiotics:

Q1. Do you know the main usage of Prebiotics?
- Yes
- No
- Not sure

Q2. To the best of your knowledge, which of the following statements about Prebiotics usage is correct (please don’t guess)?
- Prebiotics are used to treat high cholesterol
- Prebiotics are used to restore healthy bacteria
- Prebiotics are used to feed healthy bacteria
- Prebiotics are used to kill harmful bacteria
- Don’t know

While over 50% of the respondents in the general adult population responded “Yes” to Question 1, only 13% were able to answer Question 2 correctly (“Prebiotics are used to feed healthy bacteria”).

Take a look at another example. Subscribers of an internet magazine were asked to respond to this question:

Q. Why do you like our magazine?
- Because it’s informative
- Because it’s available online
- Because it’s free
- Because it has great ads
- Other (please specify)

Well, if one conducts this survey to better understand one’s customers, wouldn’t it be more meaningful to ask what is missing from the magazine and what can be improved?

Finally, take a look at this question. The question was asked following the presentation of a new consumer product:

Q. How much would you pay for the product?
- $10.99
- $11.99
- $12.99

I bet you know the answers received by the researcher.

It’s very tempting to do your own research- for free. The question is what value you are going to get from this research.

Next time we will talk about sampling and its statistical significance.


A great idea, an enthusiastic team, a vision of flowing profits – everybody is excited and ready for success… Three months later the business closes down. Sound familiar?

According to StatsCanada, almost every second small and medium business fails within 5 years.
One of the reasons for failure is overly optimistic projections about market size and, as a result, unrealistic expectations. Market research, therefore, becomes absolutely essential for businesses to make realistic, data-based projections.

In the past, the main excuse was the cost associated with even the most basic market research. Telephone and mail surveys, face-to-face interviews and traditional focus groups were the only available options, and only big-budget companies could afford to conduct such research.

Fortunately, not anymore. Today, you don’t need big bucks to conduct research. Let us look at two different ways of conducting research quickly, reliably and cost effectively.

1. Website and Web Page based surveys are primarily used for website evaluation, visitors’ profile or e-shopping analysis. Website visitors are invited to participate in a survey using a “banner” type invitation or a “pop-up” window.
These types of surveys are an effective and inexpensive method to obtain the opinions of your current customers or website visitors. The only issue is that while the respondents can be randomly selected, they are invited to opt in and, as a result, are considered “self-selected”.
The same is true for any employee or customer survey with a previously established contact list.

2. A more accurate and cost-effective way to conduct unbiased awareness, perception and usage studies is via email online surveys. Respondents for these studies are recruited through email invitations from so-called web panels.
Web panels are large, demographically and geographically representative internet-based groups for customer, business to business and- sometimes- employee surveys. They include millions of consumers, business owners and professionals. These panels are consistently supported and refreshed to reflect demographic changes and to ensure (a) statistically representative sample(s).
As most surveys and research projects require a relatively small sample size (up to 1,000 completed responses), the main reason to support and consistently refresh such large panels is to minimize the impact of “professional” or “self-selected” respondents. Each panel member can expect to participate in surveys no more than 2 to 3 times per year. For this reason, the participation reward system is based on random drawings of various prizes depending on the length, complexity and topic of each survey.

Panel recruitment sources:
- Web advertising
- Permission-based databases
- Public relations (local newspapers’ web portals)
- Partner-recruited specialty panels
- Alliances with heavily trafficked portals

Major benefits of web panels:
- Worldwide coverage
- Cost efficient (significantly cheaper vs. equivalent phone survey)
- Short reply time (2-5 days) and high response rate (over 50%)
- High accuracy – statistically representative of the general population
- No need to collect demographic information during the survey (the data is collected during the panel design process)
- Supports consistent follow-up analysis of virtually the same sample (change in awareness level before and after advertising, etc.)
- Allows incorporation of visual effects and objects (pictures, movies, etc.)