Quality and Anchoring Bias: When Your Organization’s First Number Becomes Its Only Number — and Every Quality Target Gets Locked to an Arbitrary Starting Point That Nobody Questioned

There is a moment in every quality planning session where someone
says a number — and that number, regardless of where it came from,
becomes the anchor around which every subsequent decision orbits. It
doesn’t matter if the number was pulled from last year’s report,
borrowed from a competitor’s public filing, or whispered by someone who
once read an industry benchmark in a magazine. The moment it enters the
room, it sticks. And the truly frightening part is how little anyone
questions it.

This is anchoring bias, and it is quietly warping every quality
target, every specification limit, every cost-of-quality estimate, and
every improvement goal in your organization. The first number spoken
aloud in a meeting doesn’t just influence the conversation — it hijacks
it. And your quality system, with all its rigor and discipline, is
defenseless against it.

What Anchoring Bias Actually Is

Anchoring bias is a cognitive bias first documented by psychologists
Amos Tversky and Daniel Kahneman in the 1970s. In their landmark
experiments, they demonstrated that people exposed to an arbitrary
number before making an estimate would adjust their answer toward that
number — even when the number was obviously irrelevant.

In one famous study, participants watched a wheel of fortune that was
rigged to stop at either 10 or 65. They were then asked to estimate the
percentage of African countries in the United Nations. People who saw
the wheel land on 10 guessed, on average, 25%. People who saw 65 guessed
45%. A random number from a spinning wheel — in a completely unrelated
task — shifted their estimates by 20 percentage points.

If a spinning wheel can warp a person’s estimate about geography,
imagine what last quarter’s scrap rate does to your team’s assessment of
what’s achievable this quarter.

How Anchoring Destroys Quality Targets

Here is how anchoring bias infiltrates quality management, step by
step.

The Annual Target Trap. Your leadership team sits
down to set quality targets for the coming year. The quality manager
opens last year’s report: “We finished at 1,200 PPM. Let’s target 1,000
PPM for next year.” The room nods. The number feels reasonable — it’s an
improvement, after all. But nobody asked the fundamental question: Is
1,000 PPM good enough? Is it competitive? Does it reflect what your
process is actually capable of? The target was set by anchoring to last
year’s number, not by analyzing the process capability, the customer’s
requirements, or the competitive landscape.

The result? An organization that improves incrementally toward a
number that was arbitrary from the start, while competitors who started
from a different anchor — perhaps a zero-defect philosophy or a
customer’s actual tolerance threshold — leapfrog ahead.

The Specification Anchoring Disaster. When a new
product moves from design to production, the engineering team often sets
specification limits based on… what came before. “The similar part had a
tolerance of ±0.05 mm, so let’s use the same.” That original tolerance
might have been set decades ago for a different material, a different
process, a different customer requirement. But it becomes the anchor,
and every subsequent discussion about capability, measurement systems,
and process control is dragged toward it.

I have seen organizations spend millions tightening processes to hold
tolerances that no customer ever required, simply because the original
anchor was never questioned. I have also seen organizations accept
tolerances that were far too loose for the application because the
original engineer anchored to a standard that didn’t apply.

The Cost of Quality Illusion. Your CFO asks the
quality team to estimate the cost of quality for the current fiscal
year. Someone pulls up last year’s figure: “$2.3 million.” The team then
adjusts upward or downward by 10-15%, arriving at a new estimate that
orbits the old one. But what if last year’s figure was itself an
undercount? What if it only captured visible costs — scrap, rework,
warranty claims — and missed the invisible ones: lost customers,
overtime to recover from quality escapes, engineering time diverted to
contain problems instead of preventing them?

The anchor doesn’t have to be accurate to be powerful. It just has to
be first.
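To make that concrete, here is a minimal sketch of what a bottom-up cost-of-quality estimate looks like when it is rebuilt from the standard prevention, appraisal, internal-failure, and external-failure categories rather than adjusted from last year's total. Every figure in it is a hypothetical placeholder, including the "lost business" line that anchored estimates routinely miss.

```python
# A minimal sketch of a bottom-up cost-of-quality estimate using the
# standard prevention / appraisal / internal failure / external failure
# categories. Every figure below is a hypothetical placeholder; the point
# is that the total is built from current data, not from last year's number.

cost_of_quality = {
    "prevention": {           # training, planning, process control design
        "quality_planning": 120_000,
        "training": 80_000,
    },
    "appraisal": {            # inspection, testing, calibration, audits
        "incoming_inspection": 150_000,
        "in_process_testing": 210_000,
        "calibration": 40_000,
    },
    "internal_failure": {     # scrap, rework, re-inspection, downtime
        "scrap": 480_000,
        "rework": 310_000,
    },
    "external_failure": {     # warranty, returns, containment, lost business
        "warranty_claims": 520_000,
        "customer_containment": 95_000,
        "estimated_lost_business": 400_000,  # the "invisible" cost anchors usually miss
    },
}

for category, items in cost_of_quality.items():
    subtotal = sum(items.values())
    print(f"{category:>18}: ${subtotal:>10,.0f}")

total = sum(sum(items.values()) for items in cost_of_quality.values())
print(f"{'total':>18}: ${total:>10,.0f}")
```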

The Anatomy of an Anchored Decision

Let me walk you through a real scenario I witnessed at an automotive
components manufacturer.

The company was bidding on a new program with a major OEM. During the
quoting phase, the commercial team asked engineering for a defect rate
estimate. The lead engineer, wanting to be helpful, said: “Our similar
product line runs at about 800 PPM. This should be comparable.”

That number — 800 PPM — became the anchor. It was written into the
quotation. It was baked into the cost model. It was presented to the
customer as the expected performance level. And here is where it gets
insidious: when the quality team later did their process capability
study and found the new process was actually capable of running at 150
PPM, the commercial team resisted reporting that to the customer. Why?
Because they had already anchored the customer’s expectations at 800
PPM, and lowering the number would — in their view — either make the
original estimate look incompetent or invite the customer to demand a
price reduction.

So instead of celebrating a process that could run at 150 PPM, the
organization targeted 800 PPM, invested just enough control to stay
below it, and left 650 PPM of unrealized quality on the table. The
anchor didn’t just distort the target. It capped the ambition.
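The arithmetic of that capped ambition is worth spelling out. The volume and cost-per-defect figures below are hypothetical assumptions, not numbers from the actual program, but they show how quickly 650 PPM of unrealized quality turns into real money.

```python
# Rough arithmetic on the gap between the anchored target and the capable level.
# Annual volume and cost per defective part are hypothetical assumptions.

anchored_target_ppm = 800     # the level the quote was anchored to
capable_ppm = 150             # what the capability study showed was achievable
annual_volume = 2_000_000     # hypothetical annual shipment volume (parts)
cost_per_defect = 35.0        # hypothetical fully loaded cost per escaped defect ($)

gap_ppm = anchored_target_ppm - capable_ppm              # 650 PPM left on the table
extra_defects = gap_ppm * annual_volume / 1_000_000      # avoidable defects per year

print(f"Unrealized quality: {gap_ppm} PPM "
      f"= {extra_defects:,.0f} avoidable defects per year "
      f"= about ${extra_defects * cost_per_defect:,.0f} per year")
```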

Why Anchoring Is So Hard to Recognize

Anchoring bias is particularly dangerous in quality management
because it masquerades as experience. When a seasoned engineer says,
“Based on my experience, we should target X,” everyone assumes the
number comes from deep expertise. And sometimes it does. But Tversky and
Kahneman’s research shows that even experts are susceptible to
anchoring. In fact, experts may be more susceptible because
they have more historical numbers floating in their heads, ready to
become anchors.

The other reason anchoring is hard to spot is that the anchored
number often feels like the responsible choice. Targeting a 10% improvement
over last year feels prudent. Using a tolerance from a similar part
feels conservative. Basing a cost estimate on historical data feels
evidence-based. The anchor doesn’t feel like a bias — it feels like a
starting point. And that is precisely why it is so dangerous.

Where Anchoring Hides in Your Quality System

Anchoring bias doesn’t just affect target-setting. It infiltrates
virtually every corner of quality management.

During FMEA development. The team estimates the
severity, occurrence, and detection ratings for each failure mode. The
first failure mode discussed often becomes the anchor for subsequent
ratings. If the first failure mode gets a severity of 7, the team tends
to rate similar failure modes around 7 — even when some should be 4 and
others should be 9. The RPN (Risk Priority Number) analysis becomes a
house of cards built on an anchored foundation.
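A small, entirely hypothetical illustration of that house of cards: the same three failure modes rated twice, once with the team drifting toward the first severity rating spoken aloud and once with each mode assessed on its own merits. The ratings are invented, but the pattern is what matters: anchoring compresses the spread of RPNs and blurs the prioritization the analysis exists to provide.

```python
# Hypothetical illustration: the same three failure modes rated twice.
# In the "anchored" version the team drifts toward the first severity of 7;
# in the "independent" version each mode is rated on its own merits.
# RPN = severity x occurrence x detection (the conventional FMEA product).

failure_modes = ["seal leak", "connector misfit", "housing crack"]

anchored    = [(7, 4, 5), (7, 4, 5), (6, 4, 5)]   # ratings clustered around the anchor
independent = [(9, 3, 6), (4, 6, 3), (7, 2, 8)]   # ratings assessed mode by mode

for name, a, i in zip(failure_modes, anchored, independent):
    rpn_anchored = a[0] * a[1] * a[2]
    rpn_independent = i[0] * i[1] * i[2]
    print(f"{name:>18}: anchored RPN = {rpn_anchored:>3}, "
          f"independent RPN = {rpn_independent:>3}")
```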

During supplier audits. The auditor begins the audit
with a certain expectation based on the supplier’s reputation,
certification status, or previous results. This expectation anchors
their perception of everything that follows. A supplier with a strong
reputation may have findings downplayed. A supplier with a history of
problems may have minor issues escalated. The audit becomes a
self-fulfilling prophecy.

During root cause analysis. The first hypothesis
proposed in a root cause investigation often becomes the anchor that the
entire investigation orbits around. The team designs experiments to
confirm it rather than to test it. Data that supports the anchor is
highlighted; data that contradicts it is explained away. I have seen 8D
investigations that spent weeks chasing the first theory while the
actual root cause sat unexamined in the data from day one.

During calibration discussions. When calibration
intervals are set, the original manufacturer’s recommendation becomes
the anchor. Even when field data shows that the instrument drifts far
more slowly (or far more quickly) than the recommendation suggests,
changing the interval feels risky. The anchor protects itself by making
any deviation feel like a gamble.
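If you want to challenge that particular anchor with data, the calculation does not have to be elaborate. The sketch below estimates drift per month from as-found calibration records and asks how long the instrument could run before consuming a chosen fraction of its tolerance. All numbers are hypothetical, and formal interval-adjustment methods are more rigorous than this, but even a rough calculation puts a second anchor on the table next to the manufacturer's recommendation.

```python
# A minimal sketch of a data-driven calibration interval check.
# All numbers are hypothetical; formal interval-adjustment methods are more involved.

recommended_interval_months = 12          # the manufacturer's recommendation (the anchor)
as_found_drift = [0.8, 1.1, 0.9, 1.2]     # observed drift per interval, in micrometres
tolerance = 10.0                          # allowable error, in micrometres
consume_fraction = 0.5                    # only let drift consume half the tolerance

# Average drift per month, estimated from the as-found calibration records.
drift_per_month = (sum(as_found_drift) / len(as_found_drift)) / recommended_interval_months

# Interval at which the expected drift reaches the chosen fraction of tolerance.
data_driven_interval = (tolerance * consume_fraction) / drift_per_month

print(f"Manufacturer interval: {recommended_interval_months} months")
print(f"Drift-based interval : {data_driven_interval:.0f} months")
```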

How to Break the Anchor

Overcoming anchoring bias requires deliberate, structural
countermeasures — not just awareness. Here are the strategies that
actually work.

Start from zero, not from history. When setting
quality targets, begin with the question: “What does the customer
actually need?” not “What did we achieve last year?” Start with the
Voice of the Customer, translate it into Critical to Quality
characteristics, and then determine what your process must deliver to
meet those requirements. Historical data is a reference — not a starting
point.

Use multiple anchors. Instead of allowing one number
to dominate the conversation, deliberately introduce competing reference
points. If you are setting a PPM target, simultaneously analyze: the
customer’s stated requirement, the competitive benchmark, the process
capability index (Cpk), the theoretical limit of the process, and the
cost-benefit curve of improvement. Multiple anchors dilute each other
and force the conversation toward analysis rather than adjustment.
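One of those competing anchors, the capability-based PPM level, is easy to generate. The sketch below converts a Cpk value into the defect rate a centered, normally distributed process would imply. The normality and centering assumptions are simplifications (real processes shift and drift), so treat the output as one reference point among several rather than a guaranteed performance level.

```python
import math

def expected_ppm(cpk: float) -> float:
    """Two-sided out-of-spec rate, in PPM, for a centered normal process.

    Simplifying assumptions: normally distributed output, process mean on
    target, no shift over time. Use the result as one anchor among several,
    not as a promised defect rate.
    """
    z = 3.0 * cpk                                  # distance to each spec limit in sigmas
    out_of_spec = math.erfc(z / math.sqrt(2.0))    # both tails of the normal distribution
    return out_of_spec * 1_000_000

for cpk in (1.00, 1.33, 1.67, 2.00):
    print(f"Cpk {cpk:.2f} -> about {expected_ppm(cpk):,.1f} PPM")
```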

Pre-commit before seeing the anchor. In estimation
tasks, have each team member write down their independent estimate
before anyone speaks a number aloud. This prevents the first speaker
from anchoring the entire group. Compare the independent estimates and
then discuss the differences. You will be astonished at how much
variation exists before the anchor collapses it into consensus.
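The mechanics can be as simple as collecting the written estimates and reporting their spread before any discussion begins. A minimal sketch, with hypothetical estimates:

```python
from statistics import mean, median, stdev

# Hypothetical PPM estimates written down independently, before anyone spoke.
independent_estimates = [400, 1200, 650, 2000, 800, 300]

print(f"range : {min(independent_estimates)} - {max(independent_estimates)} PPM")
print(f"median: {median(independent_estimates):.0f} PPM")
print(f"mean  : {mean(independent_estimates):.0f} PPM "
      f"(std dev {stdev(independent_estimates):.0f})")
# The spread itself is the finding: it shows how much judgment varies
# before the first number spoken collapses it into consensus.
```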

Red-team your own numbers. Assign someone in every
target-setting session to argue the opposite case. If the team is
gravitating toward 800 PPM, the red team member’s job is to build the
case for 100 PPM — or for why 800 PPM is actually too aggressive. The
goal is not to be contrarian but to force the team to justify the number
against a competing anchor.

Delay the number, extend the analysis. In meetings,
ban specific numbers during the first half of the discussion. Force the
team to talk about factors, variables, customer requirements, process
capabilities, and constraints before anyone puts a number on the table.
By the time numbers enter the conversation, the team has a richer
analytical framework to evaluate them.

Audit your anchors. Once a year, go through your
quality system and ask of every target, tolerance, and threshold: “Where
did this number come from?” You will discover that a surprising number
of your most critical parameters trace back to arbitrary decisions made
years ago by people who are no longer with the company, for reasons
nobody can remember, based on data that no longer exists.
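One way to make that audit repeatable is to keep a provenance record for every critical number: what it is, where it came from, and when it was last re-derived from data. The sketch below is one possible shape for such a record; the field names, entries, and staleness rule are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QualityParameter:
    """A provenance record for one number the quality system relies on."""
    name: str
    value: str
    source: str            # e.g. "customer spec", "capability study", "unknown"
    last_justified: date   # when someone last re-derived it from data

# Illustrative entries; names, values, and dates are hypothetical.
parameters = [
    QualityParameter("PPM target, line 3", "1000 PPM",
                     "last year's result minus 200", date(2021, 1, 15)),
    QualityParameter("Shaft diameter tolerance", "±0.05 mm",
                     "carried over from legacy part", date(2009, 6, 2)),
    QualityParameter("Cpk acceptance threshold", "1.33",
                     "customer-specific requirement", date(2024, 3, 10)),
]

STALE_AFTER_YEARS = 3
today = date.today()

for p in parameters:
    age_years = (today - p.last_justified).days / 365
    flag = "REVIEW" if age_years > STALE_AFTER_YEARS or "unknown" in p.source else "ok"
    print(f"{flag:>6}  {p.name}: {p.value}  "
          f"(source: {p.source}, {age_years:.0f} yrs old)")
```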

The Deeper Insight: Anchoring Reveals What Your Organization Values

Here is something most discussions of anchoring bias miss. The anchor
that your organization defaults to tells you what it truly values — not
what it claims to value.

If your quality targets are always anchored to last year’s
performance, your organization values continuity over excellence. If
they are anchored to the customer’s minimum requirement, you value
compliance over differentiation. If they are anchored to a competitor’s
published benchmark, you value parity over leadership. And if they are
anchored to what the process is actually capable of — what the data says
is possible — then you value truth.

Most organizations never examine their anchors because they never
examine their values at that level of specificity. They say they want
“world-class quality” but their targets are anchored to last year’s
mediocrity. They say they want “zero defects” but their tolerances are
anchored to what was achievable in 1997.

The path to genuinely better quality starts not with a new tool or a
new framework. It starts with recognizing that the number currently
shaping your decisions may have arrived in your organization by accident
— and having the courage to replace it with a number that arrived by
analysis.

The Cost of the Anchor You Never Questioned

Let me leave you with this thought. Somewhere in your organization
right now, there is a number — a PPM target, a tolerance band, a Cpk
threshold, a calibration interval, a cost-of-quality estimate — that was
set years ago by someone who is no longer there, based on information
that is no longer current, for a context that no longer exists. And
every day, that number is shaping decisions about what to produce, what
to inspect, what to invest in, and what to accept.

That number is your anchor. And it is holding you in place — not
because it is right, but because it was first.

The most powerful quality improvement you can make this year may not
require a single capital investment. It may simply require asking, of
every number your quality system relies upon: “Why this number?” And
then being willing to accept that the answer might be: “Because someone
said it once, and nobody ever changed it.”

Your quality system is only as good as the assumptions it rests on.
And the most dangerous assumption is the one that doesn’t look like an
assumption at all — it looks like a number that has always been
there.

Break the anchor. Start from the customer. Start from the data. Start
from what is actually possible. The difference between where you are
anchored and where you could be is the distance between the quality you
have and the quality your customer deserves.


Peter Stasko is a Quality Architect with 25+ years of experience in
automotive, aerospace, and quality transformation. Certified PSCR and
Six Sigma Black Belt.
