Quality Run@Rate: When Your Process Has to Prove It Can Survive a Real Shift — Not Just Produce Five Good Parts in a Lab
You’ve seen it a hundred times.
The engineering team spends weeks perfecting a process. They dial in the parameters, run a small sample, measure every dimension, and declare victory. The Cpk numbers are beautiful. The control charts look like they belong in a textbook. Management gets a PowerPoint with green arrows pointing upward. Everyone shakes hands.
Three weeks later, the line is down. Scrap is pouring out of the process like water from a fire hydrant. Operators are standing around waiting for instructions. The quality engineer who certified the process is nowhere to be found — transferred to another project, probably, or hiding in a meeting room pretending the whole thing never happened.
What happened between the pristine sample run and the production disaster?
Simple. Nobody asked the process to prove it could survive a real production shift, at real cycle times, with real operators, on real equipment, using real material from real suppliers. Nobody ran a Run@Rate.
What Is Run@Rate — And Why Isn’t It Just Another Trial?
A Run@Rate — sometimes called a Production Readiness Review, a Process Sign-Off, or a Production Trial Run depending on your industry and your customer’s terminology — is the moment of truth for any manufacturing process. It is a structured, documented demonstration that your process can produce conforming product at the required rate, over a sustained period, under production conditions, using production personnel, production tooling, production materials, and production measurement systems.
Read that sentence again. Every word matters.
Not lab conditions. Not engineering doing the setup. Not hand-picked material from the supplier’s best batch. Not five parts measured on the coordinate measuring machine that never gets used on the shop floor. Production conditions. The whole ugly, messy, imperfect reality of your factory running the way it actually runs on a Tuesday afternoon when the supervisor is out sick and the new operator started yesterday.
The Run@Rate is the bridge between “we proved the concept works” and “we proved the factory can do this reliably.” And that bridge is where most organizations fall into the river.
The Five Lies Organizations Tell Themselves Before Run@Rate
Over 25 years of watching organizations prepare for — and fail — Run@Rate assessments, I’ve catalogued the five most common self-deceptions that guarantee a painful outcome.
Lie #1: “Our Sample Run Was Successful, So We’re Ready”
The sample run — whether it’s a First Article Inspection, a capability study, or a pilot batch — proves that the process can produce conforming parts under controlled conditions. It says nothing about whether the process will produce conforming parts under production conditions.
This is the difference between “possible” and “probable.” Your sample run proved possibility. Run@Rate proves probability. And the gap between those two words has killed more production launches than any other single factor in manufacturing.
I watched an automotive supplier in central Europe produce 30 perfect sample parts for a PPAP submission. Beautiful parts. Every dimension nominal. The customer approved the submission. Four weeks later, when they tried to run 2,000 parts per shift, the scrap rate exceeded 40%. The process had never been tested at cycle time. The thermal drift that was invisible over 30 parts became catastrophic over 2,000. The fixture that held tolerance for 30 cycles loosened after 200. Nobody knew, because nobody ran the process at rate.
Lie #2: “The Engineers Can Run It, So the Operators Can Too”
This is the competence assumption, and it’s one of the most dangerous blind spots in manufacturing launch management.
When engineers run a process, they bring something operators don’t: deep theoretical understanding of the process parameters, the ability to diagnose and adjust in real-time, and the patience to babysit a finicky operation. When you hand that same process to a production operator who was trained for two hours last Thursday, the results are predictably different.
Run@Rate mandates that production operators run the process. Not engineers. Not technicians. The people who will actually do the work when the line goes live. If the process can only produce conforming parts when an engineer is running it, you don’t have a production process — you have a science experiment.
Lie #3: “We’ll Fix That During Production”
This is the “we’ll debug it live” fallacy, and it’s born from a toxic combination of schedule pressure, optimism bias, and willful ignorance.
The thinking goes like this: “We know the process isn’t perfect yet, but we have a production window in two weeks, and we’ll use the first few production runs to iron out the bugs.” This is the manufacturing equivalent of jumping out of an airplane and planning to sew the parachute on the way down.
Run@Rate exists specifically to prevent this thinking. The rule is simple and non-negotiable: if the process can’t demonstrate capability during the Run@Rate, it does not go into production. Period. No exceptions. No waivers. No “we’ll fix it in production.” The production environment is for production, not for debugging. Debugging happens before launch. Run@Rate is the final exam. You don’t get to take the final exam and then study for it afterward.
Lie #4: “Our Suppliers Are Reliable”
Your process is only as good as the material feeding it. And the material that showed up for your sample run — the specially prepared, carefully inspected, hand-carried batch from your supplier’s best production run — is not representative of what you’ll receive on a random Thursday in week seven of production.
Run@Rate requires production-representative material. Material from your normal supply chain. Material that has been processed through your normal receiving inspection, not fast-tracked through the back door by the purchasing manager who doesn’t want to explain to the customer why the launch is delayed.
I’ve seen Run@Rate failures caused by material variation that was completely predictable and completely ignored. A stamping plant that used sample-run coil stock from the prime section of the coil, only to discover during Run@Rate that the tail end of production coils had thickness variation that pushed the process out of spec. A plastics molder who ran samples with virgin material but received regrind blends in production that shifted the shrinkage rate by 0.3%. Not much — just enough to make every critical dimension drift out of tolerance over a four-hour shift.
Lie #5: “We Measured Everything on the CMM”
Measurement during sample runs is often performed on the best measuring equipment, by the most skilled inspectors, in the most controlled environments, with unlimited time. Measurement during production is performed on shop-floor gauges, by operators who have three minutes per part, in an environment where temperature swings ten degrees between morning and afternoon.
Run@Rate requires that production measurement systems be used. The gauges, fixtures, and instruments that will be used in production. The measurement methods specified in the control plan. The operators who will actually perform the measurements. If your process only passes when measured on a CMM in a temperature-controlled lab, but the production measurement system is a go/no-go gauge on the shop floor, you have a measurement gap that will swallow your quality system whole.
The Anatomy of a Proper Run@Rate
A Run@Rate is not a casual event. It’s a structured assessment with defined inputs, defined criteria, and defined outputs. Here’s what it looks like when it’s done right.
Phase 1: Preparation (Before the Run)
Documentation review. Every document in the quality package is reviewed for completeness and correctness: control plan, process flow diagram, PFMEA, work instructions, inspection instructions, measurement system analysis results, preliminary capability studies, material certifications, and operator training records. If any document is missing or incomplete, the Run@Rate does not proceed. This is not bureaucracy — this is discipline.
Equipment verification. Every piece of equipment that will be used in the process is verified: machines, tooling, fixtures, gauges, automation systems, error-proofing devices. Maintenance records are reviewed. Calibration status is confirmed. Tool life expectations are documented. If the mold has a planned life of 50,000 shots and you’re at 48,000, you don’t launch and hope — you replace the mold first.
Personnel readiness. Operators are identified, trained, and assessed. Training records are reviewed. Competence has been verified — not just attendance at a training session, but demonstrated ability to perform the tasks. The distinction is critical. Sitting in a classroom for two hours is not training. Running the process under supervision and producing conforming parts is training.
Material preparation. Production-representative material is staged. Not sample material. Not material from a special batch. Material from the normal supply stream, received through normal channels, inspected using normal methods.
Phase 2: Execution (The Run Itself)
Duration. The Run@Rate must be long enough to expose process behavior under realistic conditions. In automotive, this typically means a minimum of one full shift, or 300 pieces, or a defined multiple of the production batch size — whichever is longest. Some customers require multiple shifts to capture shift-to-shift variation. The key principle: the run must be long enough for the process to reveal its true behavior, not its best behavior.
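The "whichever is longest" rule above reduces to a simple calculation. A minimal sketch, using the automotive figures mentioned here (one shift, 300 pieces, a batch-size multiple) as illustrative defaults; your customer's specific requirement governs the actual thresholds:

```python
def minimum_run_quantity(shift_hours: float,
                         cycle_time_s: float,
                         batch_size: int,
                         batch_multiple: int = 2,
                         piece_floor: int = 300) -> int:
    """Minimum Run@Rate quantity: one full shift's output, a fixed
    piece floor (300 in many automotive programs), or a multiple of
    the production batch size -- whichever demands the most parts.
    All thresholds here are illustrative defaults, not a standard."""
    shift_pieces = int(shift_hours * 3600 / cycle_time_s)
    return max(shift_pieces, piece_floor, batch_multiple * batch_size)

# Example: 8-hour shift, 45-second cycle, batches of 250
print(minimum_run_quantity(8, 45, 250))  # 640 pieces -- one shift governs
```

Note that for a short shift or a fast cycle, the piece floor takes over, which is exactly the point: the run must be long enough to reveal the process's true behavior, whichever criterion gets it there.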
Rate. The process must run at the declared production cycle time. Not slower. Not with pauses between parts for measurement or adjustment. The rhythm of the Run@Rate must match the rhythm of production. If the declared cycle time is 45 seconds, the process produces one part every 45 seconds. If the process cannot sustain this rate while producing conforming parts, the Run@Rate fails. Period.
Personnel. Production operators run the process. Engineers observe. Quality engineers monitor. But the hands on the machine belong to the people who will be there when the Run@Rate team goes home.
Measurement. Parts are measured using the production measurement system, at the production measurement frequency, by production personnel. The control plan is followed exactly as written. If the control plan says measure every 25th part using a specific gauge, that’s what happens. If the gauge doesn’t exist yet, the Run@Rate doesn’t proceed.
Documentation. Every event is recorded. Every adjustment. Every tool change. Every material lot change. Every interruption. The Run@Rate log is a chronological record of everything that happened during the run, and it tells a story that the numbers alone cannot. “Process ran 300 parts, all conforming” is one story. “Process ran 300 parts, required four adjustments to fixture clamp pressure, one tool change at part 180, and two machine restarts” is a very different story — even if all 300 parts were conforming.
Phase 3: Evaluation (After the Run)
Process capability. The data from the Run@Rate is analyzed for process capability. The minimum acceptable capability index depends on your customer and your industry, but the principle is universal: the process must demonstrate that it can consistently produce conforming product. Not just that it did — that it can. Capability analysis is the mathematical proof of consistency.
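The standard short-term capability indices make this concrete. A minimal sketch with illustrative data (the dimension, tolerance, and measurements below are invented for the example); both indices assume the process is stable and roughly normal, which is why stability review comes next:

```python
import statistics

def cp_cpk(measurements, lsl, usl):
    """Cp compares the tolerance width to the process spread (6 sigma);
    Cpk additionally penalizes an off-center process by using the
    distance from the mean to the nearer specification limit."""
    mean = statistics.fmean(measurements)
    s = statistics.stdev(measurements)  # sample standard deviation
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    return cp, cpk

# Illustrative data: nominal 10.00 mm, tolerance +/- 0.05 mm
data = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 9.99, 10.02, 10.01]
cp, cpk = cp_cpk(data, lsl=9.95, usl=10.05)
```

By construction Cpk can never exceed Cp; the gap between them is the price of an off-center process, and a Run@Rate that passes on Cp but fails on Cpk is telling you to re-center, not to widen the tolerance.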
Process stability. Control charts from the Run@Rate are reviewed for signs of instability: trends, shifts, cycles, runs, or any pattern that suggests the process is not in statistical control. A process can produce 100% conforming parts and still be unstable — it’s just been lucky so far. Stability is the prerequisite for capability. An unstable process has no capability to predict.
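Two of the most common instability signals can be sketched in a few lines: points beyond the 3-sigma control limits, and long runs on one side of the center line. This mirrors Western Electric-style rules in simplified form; real SPC software applies more rules (trends, cycles, zone tests) than this sketch does:

```python
def stability_flags(points, center, sigma, run_length=8):
    """Flag two common instability signals on an individuals chart:
    any point beyond the 3-sigma limits, and any run of `run_length`
    or more consecutive points on one side of the center line.
    A simplified sketch, not a full rule set."""
    flags = []
    side_run, last_side = 0, 0
    for i, x in enumerate(points):
        if abs(x - center) > 3 * sigma:
            flags.append((i, "beyond 3-sigma limit"))
        side = 1 if x > center else (-1 if x < center else 0)
        if side != 0 and side == last_side:
            side_run += 1
        else:
            side_run = 1 if side != 0 else 0
        last_side = side
        if side != 0 and side_run >= run_length:
            flags.append((i, f"{side_run} consecutive points on one side"))
    return flags
```

The run-of-eight rule is the mathematical version of "it's just been lucky so far": every point can be inside the limits while the chart is quietly telling you the process mean has shifted.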
Process performance. Beyond the numbers, the Run@Rate team evaluates the overall performance of the process: Was the cycle time sustained? Were there unplanned stoppages? Did the operators struggle with any aspect of the process? Were the work instructions clear and followed? Was the error-proofing effective? Did the material handling work as planned? These qualitative observations often reveal more than the quantitative data.
Corrective actions. Any nonconformance, any deviation, any unexpected event during the Run@Rate triggers a corrective action. The root cause is identified, a correction is implemented, and the effectiveness is verified. If the corrective action requires a process change — a parameter adjustment, a fixture modification, a work instruction revision — the Run@Rate may need to be repeated, in whole or in part. This is not punishment. This is rigor.
The Run@Rate Failure as a Gift
Here’s the mindset shift that separates excellent organizations from mediocre ones: a Run@Rate failure is not a setback. It’s a gift.
When a process fails during Run@Rate, it has just revealed a problem that would have caused production scrap, customer complaints, warranty claims, and possibly a full-blown quality crisis. The Run@Rate contained that failure in a controlled environment, where it could be studied, understood, and fixed before it ever reached a customer.
The organizations that struggle with Run@Rate are not the ones whose processes fail. Every new process has problems. The organizations that struggle are the ones that hide the problems, excuse the problems, or launch despite the problems because the schedule demands it.
Schedule pressure is real. Customer pressure is real. The financial pressure of a delayed launch is real. But none of these pressures compare to the cost of launching a process that isn’t ready. A late launch costs weeks. A failed launch costs months — the weeks you saved by launching early, plus the weeks you spend containing the crisis, plus the weeks you spend fixing the problem you should have fixed before launch, plus the customer trust you may never recover.
The math is simple. The Run@Rate always pays for itself.
Run@Rate and the Supply Chain: Extending the Discipline
The principle of Run@Rate doesn’t stop at your factory walls. If your suppliers are launching new processes for your components, they should be conducting Run@Rate assessments too — and you should be there to witness them.
Too many organizations accept supplier PPAP packages at face value: the paperwork is complete, the sample parts are conforming, and the capability numbers meet the requirement. But nobody visited the supplier’s factory. Nobody watched the process run at rate. Nobody asked whether the supplier’s operators were trained or whether the sample parts came from a carefully controlled pilot run that bears no resemblance to production reality.
Supplier Run@Rate witnessing is one of the most powerful quality tools available, and one of the most underused. It doesn’t require a team of engineers flying halfway around the world for every new component. It requires a structured approach: critical components get on-site witnessing, standard components get remote review with video evidence, and commodity components get self-certification with audit-based verification. The level of oversight matches the level of risk.
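The tiering described above is, at bottom, a lookup from risk class to oversight level. A minimal sketch; the category names follow the paragraph, but a real program would derive the risk class from PFMEA severity, design novelty, and supplier history rather than assign it by hand:

```python
# Oversight levels mirror the tiering described above; the mapping
# itself is illustrative, not a standard classification scheme.
OVERSIGHT = {
    "critical":  "on-site Run@Rate witnessing",
    "standard":  "remote review with video evidence",
    "commodity": "self-certification with audit-based verification",
}

def oversight_for(risk_class: str) -> str:
    """Return the Run@Rate oversight level for a component risk class;
    an unknown class is an error, never a silent default to the
    lightest tier."""
    try:
        return OVERSIGHT[risk_class]
    except KeyError:
        raise ValueError(f"unknown risk class: {risk_class!r}") from None
```

The one design choice worth copying is the last comment: when the classification is missing, fail loudly. Defaulting an unclassified component to self-certification is how critical parts slip through with no witnessing at all.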
The Digital Dimension: Run@Rate in Industry 4.0
The fundamentals of Run@Rate haven’t changed in decades. Prove the process can produce conforming parts at rate, under production conditions, with production people and production materials. But the tools available to conduct and analyze Run@Rate assessments have been transformed by digital technology.
Real-time process monitoring systems can capture machine parameters at millisecond intervals during the Run@Rate, creating a detailed digital fingerprint of the process behavior. Statistical software can analyze capability and stability in real-time, flagging issues as they emerge rather than waiting for post-run analysis. Digital documentation systems can capture every event, every adjustment, every measurement in a structured, searchable format that becomes a permanent record of the process launch.
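The shift from post-run to in-run analysis can be sketched simply: check each measurement against pre-established control limits the moment it is recorded. A minimal sketch only; real MES/SPC platforms do far more, and critically, the limits must come from a prior study, not be fitted to the Run@Rate data as it arrives:

```python
class LiveLimitMonitor:
    """Flag out-of-control measurements as they arrive during the run,
    instead of discovering them in post-run analysis. Control limits
    are fixed inputs from a prior capability study (an assumption of
    this sketch), never recomputed from the incoming data."""

    def __init__(self, center: float, sigma: float):
        self.ucl = center + 3 * sigma  # upper control limit
        self.lcl = center - 3 * sigma  # lower control limit
        self.alarms = []               # (part_no, value) for each violation

    def record(self, part_no: int, value: float) -> bool:
        """Log the measurement; return True if it breaches a limit."""
        if value > self.ucl or value < self.lcl:
            self.alarms.append((part_no, value))
            return True
        return False
```

An alarm at part 40 of 300 is a Run@Rate observation you can investigate while the run is still warm; the same signal found in a spreadsheet three days later is just a historical curiosity.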
But here’s the warning: technology is an amplifier. It amplifies good practices and bad practices equally. A digital Run@Rate with bad fundamentals — untrained operators, unrepresentative material, unrealistic cycle times — is just a faster way to generate the wrong answer with more decimal places.
The technology serves the discipline. Not the other way around.
Building a Run@Rate Culture
The final piece — and the hardest — is cultural. Run@Rate only works in an organization that values truth over narrative, rigor over speed, and discipline over convenience.
This means:
Leadership must protect the integrity of the Run@Rate process. When the plant manager calls and says “we need to launch next week regardless,” the quality leader must have the authority and the backing to say no. Not because they want to be difficult, but because they understand the consequences of launching an unproven process.
Engineering must embrace the Run@Rate as a learning opportunity, not a judgment. A process that fails Run@Rate is not a failed engineering effort. It’s an unfinished engineering effort. The Run@Rate revealed what still needs to be done. That’s valuable information, not an indictment.
Operations must participate fully and honestly. The operators who run the process during Run@Rate must feel safe reporting problems, asking questions, and admitting when something doesn’t feel right. If the culture punishes people for surfacing issues during Run@Rate, those issues will surface later — in production, where the consequences are exponentially worse.
The organization must resist the temptation to “stage” the Run@Rate. Every Run@Rate should be a genuine, unscripted demonstration of process capability. If the Run@Rate is rehearsed, optimized, and performed under conditions that won’t exist in production, it’s theater — not assessment. And the audience that gets fooled is you.
The Bottom Line
Every manufacturing process has a truth. The truth about whether it can produce conforming parts reliably, consistently, and at the required rate. That truth exists whether you discover it or not.
The Run@Rate is your opportunity to discover that truth on your terms — in a controlled environment, with support systems in place, before the customer is waiting, before the line is committed, before the cost of failure multiplies by a hundred.
The organizations that take Run@Rate seriously launch faster, produce better, and spend less on firefighting. The organizations that treat it as a checkbox exercise spend years recovering from launches that should have worked but didn’t.
Your process is going to tell you the truth eventually. The only question is whether you’ll listen during the Run@Rate — or during the crisis.
Choose wisely.
Peter Stasko is a Quality Architect with over 25 years of experience building, auditing, and transforming quality management systems across automotive, industrial, and manufacturing sectors. He specializes in taking complex quality frameworks and making them work where it matters — on the shop floor, in real production conditions, with real people. His approach is practical, evidence-based, and relentlessly focused on results over rhetoric.