Maximum Differential Analysis: Identifying Real Purchase Drivers


Ask a group of buyers to rate how important ten different attributes are on a scale of one to ten, and you will almost certainly get the same result: everything scores between seven and nine. Price is important. Quality is important. Service is important. Reputation is important. The data looks comprehensive but tells you almost nothing about what actually drives decisions, because the method allows respondents to say everything matters without making a single trade-off.

This is the fundamental problem with traditional importance ratings, and it affects brand research, customer experience measurement, and product development across virtually every category. Maximum Differential Analysis, also known as Best-Worst Scaling, solves it by forcing respondents to choose.

Maximum Differential Analysis is a research method that forces respondents to choose the most and least important attributes from a set, creating a statistically valid ranking of priorities. Instead of rating each attribute independently, respondents see a subset of attributes and must pick the one that matters most and the one that matters least. Across multiple rounds with different combinations, the method produces a clear hierarchy of what genuinely drives decisions versus what is merely acknowledged as relevant.

Why traditional importance ratings fail

The mechanics of traditional rating scales create several systematic problems that undermine their usefulness for strategic decision-making.

The first is acquiescence bias, the tendency for respondents to agree with or endorse statements rather than discriminate between them. When asked whether quality is important in a professional services provider, almost everyone says yes. When asked whether responsiveness is important, almost everyone says yes again. The scale does not require the respondent to weigh quality against responsiveness, so the output cannot tell you which one would actually determine their choice if they had to pick.

The second problem is scale compression. Most respondents avoid extreme ends of a scale, clustering their responses in a narrow band that makes it impossible to distinguish genuine priorities from background noise. If eight attributes all score between 7.2 and 8.1 on a ten-point scale, the data offers no actionable guidance about where to invest.

The third is social desirability. In categories like financial services, healthcare, and professional services, respondents inflate the importance of attributes they feel they should care about (governance, transparency, sustainability) while underweighting the attributes that actually drive their behaviour, such as price or convenience. The gap between stated importance and revealed importance is often substantial and consistently misleading.

The cumulative effect is research that confirms what everyone already assumed — everything is important — and provides no basis for strategic prioritisation. This is not a minor methodological quibble. Decisions about product investment, brand positioning, and communication strategy all depend on understanding which attributes matter most relative to which matter least. A method that cannot make that distinction is not fit for purpose.

How MaxDiff works in practice

In a Maximum Differential exercise, respondents see a series of screens, each showing a subset of attributes — typically four or five from a larger list. For each screen, they select the attribute that is most important to their decision and the attribute that is least important. The combination of attributes shown on each screen is carefully designed so that every attribute appears an equal number of times and in balanced pairings.
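The balanced-design idea can be sketched in a few lines of Python. The attribute names below are hypothetical, and this randomised construction only guarantees that every attribute appears an equal number of times; commercial survey tools use stricter balanced incomplete block designs that also control pairings.

```python
import random

def maxdiff_design(attributes, set_size=4, appearances=3, seed=42):
    """Build MaxDiff screens by chunking shuffled copies of the
    attribute list, so every attribute appears `appearances` times
    and never twice on the same screen. A sketch, not a full
    balanced incomplete block design."""
    assert len(attributes) % set_size == 0, "this sketch needs even chunks"
    rng = random.Random(seed)
    screens = []
    for _ in range(appearances):
        pool = attributes[:]          # each pass is one full permutation
        rng.shuffle(pool)
        screens += [pool[i:i + set_size]
                    for i in range(0, len(pool), set_size)]
    return screens

# Hypothetical twelve-attribute list for a professional services study
attrs = ["sector experience", "team credentials", "pricing clarity",
         "reputation", "geographic presence", "service range",
         "awards", "office locations", "sustainability",
         "website quality", "responsiveness", "references"]

screens = maxdiff_design(attrs)
# 12 attributes x 3 appearances / 4 per screen = 9 screens
counts = {a: sum(a in s for s in screens) for a in attrs}
```

Because each pass is a complete permutation of the list, no attribute can repeat within a screen, and the appearance count comes out exactly equal by construction.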

Across enough rounds — usually eight to twelve per respondent — the method generates a robust dataset that can be analysed to produce an interval-scale ranking of all attributes. The output is a score for each attribute that reflects its relative importance, with clear separation between tiers.
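The simplest way to turn those choices into scores is counting analysis: how often an attribute was picked as most important, minus how often it was picked as least important, divided by how often it was shown. The sketch below assumes each attribute was shown the same number of times; in practice, hierarchical Bayes logit estimation is often used instead, but counts are the transparent first cut. The response data is invented for illustration.

```python
from collections import Counter

def bw_scores(responses, appearances):
    """Counting analysis for MaxDiff: score = (best picks - worst picks)
    / times shown, ranging from -1 to +1. Assumes every attribute
    appeared `appearances` times."""
    best = Counter(r["best"] for r in responses)
    worst = Counter(r["worst"] for r in responses)
    items = set(best) | set(worst)
    # Counter returns 0 for attributes never picked in that role
    return {a: (best[a] - worst[a]) / appearances for a in items}

# Hypothetical single-respondent data across three screens
responses = [
    {"best": "sector experience", "worst": "awards"},
    {"best": "team credentials",  "worst": "sustainability"},
    {"best": "sector experience", "worst": "awards"},
]
scores = bw_scores(responses, appearances=3)
```

Aggregated across a full sample, these scores produce the tiered hierarchy described below: attributes consistently chosen as "most important" separate cleanly from those consistently chosen as "least important".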

For example, a professional services firm testing twelve attributes might find that the MaxDiff output produces a clear three-tier structure:

Top tier (genuine decision drivers): relevant sector experience, named team credentials, clear pricing structure.

Middle tier (expected but not differentiating): firm reputation, geographic presence, range of services.

Lower tier (acknowledged but not influential): awards and accreditations, office locations, sustainability commitments, website quality.

This structure is immediately actionable. The firm knows that its pitch materials and brand positioning should lead with sector experience and team credentials, not with awards or sustainability messaging. A traditional importance rating would have shown all twelve attributes scoring between 7 and 9, with no clear hierarchy.

When to use MaxDiff in B2B categories

MaxDiff is valuable in any category where the number of potentially important attributes is large enough that respondents cannot meaningfully rank them all through direct comparison, and where strategic decisions depend on knowing the true priority order.

B2B categories are particularly well-suited to MaxDiff because purchase decisions typically involve multiple evaluated attributes, buyers are accustomed to making trade-offs in their professional decisions, and the commercial consequences of investing in the wrong attribute are significant. A B2B technology vendor that positions on "innovation" when its buyers actually prioritise "integration reliability" is making an expensive mistake that a MaxDiff study would have prevented.

MaxDiff is also effective in categories where price sensitivity is difficult to measure directly. Because the method forces trade-offs, it naturally reveals where price sits in the decision hierarchy relative to non-price attributes, something that direct price importance questions almost always overstate or understate depending on the category context.

What MaxDiff does not do

MaxDiff identifies the relative importance of attributes but does not measure how a brand performs on those attributes. It answers the question "what matters?" but not "how are we doing?" A complete brand measurement programme pairs MaxDiff with brand performance ratings to identify the attributes that matter most and the gaps between a brand's performance and its competitors on those specific attributes.

This combination (MaxDiff for importance, performance ratings for delivery) produces a strategic matrix that shows exactly where investment will have the greatest commercial impact. Without the MaxDiff component, the performance ratings lack a weighting mechanism and every attribute gap looks equally urgent.
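One simple way to build that matrix is to weight each attribute's performance gap against competitors by its MaxDiff importance score, so a large deficit on a high-importance attribute outranks a large deficit on one nobody cares about. All the scores and ratings below are hypothetical.

```python
def priority_matrix(importance, own_perf, competitor_perf):
    """Weight each attribute's performance gap (own minus competitor)
    by its MaxDiff importance score. The most negative weighted gaps
    are the investment priorities. A sketch of one possible weighting."""
    rows = []
    for attr, imp in importance.items():
        gap = own_perf[attr] - competitor_perf[attr]
        rows.append((attr, imp, gap, imp * gap))
    return sorted(rows, key=lambda r: r[3])  # most urgent first

# Hypothetical MaxDiff scores and 10-point performance ratings
importance = {"sector experience": 0.62, "pricing clarity": 0.45, "awards": 0.05}
own        = {"sector experience": 6.8,  "pricing clarity": 7.9,  "awards": 8.5}
competitor = {"sector experience": 7.9,  "pricing clarity": 7.1,  "awards": 7.0}

ranked = priority_matrix(importance, own, competitor)
```

In this invented example the firm trails on "sector experience", its most important attribute, so that deficit tops the list, while its large lead on low-importance "awards" contributes almost nothing.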

Frequently asked questions

How many attributes can MaxDiff handle? MaxDiff works well with lists of eight to twenty-five attributes. Below eight, a simple ranking exercise may be sufficient. Above twenty-five, the number of rounds required to achieve statistical stability increases to the point where respondent fatigue becomes a concern. For most brand and category research, twelve to eighteen attributes is the practical range.

Does MaxDiff work in quantitative online surveys? Yes. MaxDiff is well-suited to online quantitative research because the task is intuitive — pick the most and least important — and does not require interviewer explanation. Response quality in online MaxDiff exercises is typically higher than for lengthy rating grids because the task is engaging and requires active thought on each screen.

Can MaxDiff results be segmented? MaxDiff data can be analysed at segment level to reveal how attribute priorities differ across buyer types, industries, or demographic groups. This is one of its most powerful applications: identifying that what drives decisions for enterprise buyers is fundamentally different from what drives SMB buyers, even within the same category, and adjusting positioning accordingly.
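Segment-level analysis is just the same counting logic run separately per group. A minimal sketch, with invented responses tagged by a hypothetical `segment` field:

```python
from collections import Counter, defaultdict

def segment_scores(responses, appearances):
    """Per-segment counting analysis: the same best-minus-worst score,
    computed separately for each buyer segment."""
    by_seg = defaultdict(list)
    for r in responses:
        by_seg[r["segment"]].append(r)
    out = {}
    for seg, rs in by_seg.items():
        best = Counter(r["best"] for r in rs)
        worst = Counter(r["worst"] for r in rs)
        items = set(best) | set(worst)
        out[seg] = {a: (best[a] - worst[a]) / appearances for a in items}
    return out

# Hypothetical data: enterprise and SMB buyers prioritising differently
responses = [
    {"segment": "enterprise", "best": "integration reliability", "worst": "price"},
    {"segment": "enterprise", "best": "integration reliability", "worst": "awards"},
    {"segment": "smb", "best": "price", "worst": "awards"},
    {"segment": "smb", "best": "price", "worst": "integration reliability"},
]
seg = segment_scores(responses, appearances=2)
```

Here the invented enterprise respondents put "integration reliability" at the top while the SMB respondents put "price" there, which is exactly the kind of divergence that justifies segment-specific positioning.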

The strategic value of forcing trade-offs

The reason MaxDiff produces better strategic guidance than traditional importance measurement is simple. Real purchase decisions involve trade-offs, and a research method that mirrors those trade-offs will predict behaviour more accurately than one that does not.

No buyer gets everything they want. Every purchase involves accepting more of one attribute in exchange for less of another. A research approach that allows respondents to endorse everything as important is measuring aspiration, not decision-making. MaxDiff measures decision-making, and that is what makes it commercially useful.


If you'd like to discuss how MaxDiff or trade-off analysis could sharpen your brand and category research, book a conversation with Brand Health. We help organisations identify the attributes that actually drive decisions.
