Failure Mode Verification Testing: When Your FMEA Stops Being a Spreadsheet and Starts Being a Science Experiment
You’ve been there. The conference room. The FMEA session that stretched into its third hour. The cross-functional team around the table — engineering, quality, production, maintenance — each defending their assumptions about what could go wrong, how often it might happen, and whether anyone would notice before it reached the customer.
Someone suggests a severity rating of 8. Someone else argues for 6. A compromise is struck at 7. The occurrence gets debated with the passion of a courtroom drama, settled by a show of hands. The detection rating? “We’d probably catch that” becomes a 3.
The spreadsheet gets filled. The Risk Priority Numbers get calculated. The recommended actions get listed. The document gets saved, uploaded to the quality management system, and never opened again until the auditor asks for it.
And then — six months later — the exact failure mode everyone rated as “low risk” shows up on your customer’s dock. Not in theory. In physical, measurable, warranty-claim-generating reality.
The FMEA had it listed. Row 47. Rated, ranked, and filed. But nobody ever checked whether the assumption behind that rating was actually true.
That’s the gap. That’s the canyon between identifying a risk and verifying it. And it’s where Failure Mode Verification Testing lives — in the space between “we think this could fail this way” and “we now know, because we tried to make it fail, exactly what happens.”
The Problem: FMEAs Are Stories We Tell Ourselves
Let’s be honest about what most FMEA sessions actually produce: a collective narrative. A story the team constructs about what might go wrong, weighted by individual experience, biased by recent memory, and moderated by group dynamics.
That story has value. It captures knowledge. It organizes thinking. It forces cross-functional conversation. But it has one critical flaw.
It’s untested.
When your engineering lead rates the occurrence of a weld crack at 2 out of 10, what they’re really saying is: “Based on my experience, I believe this is unlikely.” When your quality manager rates detection at 4, she’s saying: “Based on the current controls, I believe we’d catch this.”
Belief. Not evidence.
Failure Mode Verification Testing — FMVT — is the discipline of converting those beliefs into data. It takes the hypothetical failure modes from your FMEA and subjects them to controlled, deliberate testing. Not to confirm that they’ll happen — but to understand the conditions under which they would happen, the speed at which they’d develop, and whether your detection systems would actually catch them in time.
Think of it as your FMEA’s reality check. The moment the spreadsheet meets the laboratory.
Where FMVT Came From — and Why It Matters Now
The concept of verifying failure modes through deliberate testing has roots in the aerospace and automotive industries, where the cost of an unverified assumption isn’t measured in rework hours — it’s measured in lives.
In aerospace, every critical system undergoes failure mode testing that goes far beyond simulation. Actuators are run to failure under extreme conditions. Redundancy systems are deliberately crippled to see if backups perform. Wiring harnesses are subjected to accelerated aging that simulates decades of vibration and thermal cycling.
Automotive adopted a similar philosophy with DVP&R — Design Verification Plan and Report — but the practice often gets reduced to checking that a part meets specification. FMVT goes further. It doesn’t ask “does this meet spec?” It asks “what happens when it doesn’t — and how fast does it get there?”
In an era of compressed development cycles, multi-source supply chains, and increasing product complexity, the gap between assumed risk and actual risk is widening. Organizations are shipping products with FMEAs that were completed under time pressure, by teams that included people who’d never seen the previous generation’s field failures, using rating scales that everyone interpreted differently.
FMVT is the bridge back to reality.
The Framework: How to Build a Failure Mode Verification Test
FMVT isn’t random destruction testing. It’s not about breaking things for the sake of breaking them. It’s a structured, methodical process that connects directly to your FMEA and your control plan.
Step 1: Select the Failure Modes Worth Verifying
Not every row in your FMEA needs verification testing. Start with the failure modes that meet one or more of these criteria (a minimal triage sketch follows the list):
- High RPN but low confidence in the ratings. If the team debated the severity for twenty minutes and settled on a number through compromise rather than data, that’s a candidate.
- New designs or processes with no field history. If you’ve never made this exact product before, every assumption is untested.
- Changes to existing designs. Any modification — material substitution, supplier change, process adjustment — introduces failure modes that may not behave like the old ones.
- Low occurrence ratings with high severity. These are the sleepers. The “unlikely but catastrophic” scenarios that your FMEA dismisses with a confident “2” but your conscience knows deserve more scrutiny.
- Detection-rated failures where detection has never been challenged. If your detection system has never been tested against the actual failure mode it’s supposed to catch, you don’t have a detection system. You have a detection hope.
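If your FMEA lives in a spreadsheet export, this triage is easy to automate. Here’s a minimal sketch in Python; the field names and thresholds (an RPN of 100, occurrence of 2, severity of 8) are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    """One FMEA line item. Field names here are illustrative, not a standard."""
    failure_mode: str
    severity: int              # 1-10
    occurrence: int            # 1-10
    detection: int             # 1-10
    ratings_from_data: bool    # False if the numbers came from debate, not evidence
    new_or_changed: bool       # new design/process, or modified from the prior one
    detection_challenged: bool # has the control ever been fed this failure mode?

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

def fmvt_candidates(rows: list[FmeaRow]) -> list[FmeaRow]:
    """Flag rows matching the selection criteria above; thresholds are assumptions."""
    picked = [
        r for r in rows
        if (r.rpn >= 100 and not r.ratings_from_data)  # high RPN, low confidence
        or r.new_or_changed                            # no field history
        or (r.occurrence <= 2 and r.severity >= 8)     # unlikely but catastrophic
        or not r.detection_challenged                  # detection never tested
    ]
    # Put the scariest, least-proven rows first
    return sorted(picked, key=lambda r: r.rpn, reverse=True)
```

The code isn’t the point. The point is that the selection criteria become explicit and repeatable instead of living in someone’s head.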
Step 2: Design the Verification Test
For each selected failure mode, design a test that answers three questions:
What triggers the failure? Replicate the conditions — stress, environment, cyclic loading, contamination, misuse — that would initiate the failure mode. Be specific. “Thermal stress” isn’t a test. “Twenty cycles between -40°C and +125°C with dwell times of 30 minutes at each extreme” is a test.
How does the failure propagate? Monitor the failure mode as it develops. Don’t just run the test until something breaks and then record the result. Instrument the test to capture the progression — crack initiation, crack growth, performance degradation, dimensional shift — so you understand the timeline.
Would current detection methods catch it? This is the question most teams skip. Run your existing detection systems — the in-process checks, the statistical process controls, the final inspection — against the test samples. See if the failure mode triggers a detection signal before it reaches the customer.
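A plan that can’t answer all three questions isn’t ready to run. One way to enforce that is structurally, as in this sketch; the failure mode, signal names, and detection methods are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class VerificationTest:
    """A plan is incomplete until all three questions have concrete answers."""
    failure_mode: str
    trigger: str                    # Q1: what initiates the failure? Be specific.
    propagation_signals: list[str]  # Q2: what will you instrument to watch it develop?
    detection_methods: list[str]    # Q3: which existing controls will be challenged?

    def validate(self) -> None:
        missing = []
        if not self.trigger.strip():
            missing.append("trigger conditions")
        if not self.propagation_signals:
            missing.append("propagation instrumentation")
        if not self.detection_methods:
            missing.append("detection methods to challenge")
        if missing:
            raise ValueError(f"incomplete test plan, missing: {', '.join(missing)}")

# Illustrative plan built around the thermal-stress example above
plan = VerificationTest(
    failure_mode="thermal-stress-induced cracking",
    trigger="20 cycles, -40°C to +125°C, 30 min dwell at each extreme",
    propagation_signals=["crack length via periodic inspection", "dimensional shift"],
    detection_methods=["in-process check", "final inspection"],
)
plan.validate()  # raises if any of the three questions is unanswered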
Step 3: Execute and Document
Run the test with the same rigor you’d apply to any formal validation. Document the test plan, the setup, the instrumentation, the parameters, and the results. Record everything — including the failures that didn’t happen. Negative results are data too.
Photograph the failure modes. Capture the measurement data. Record the detection system’s response (or lack of response). This documentation becomes part of your quality system’s permanent knowledge base.
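A fixed record shape makes “record everything” practical, and it makes the negative results just as durable as the failures. A minimal sketch, with invented values and a hypothetical document ID:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FmvtRecord:
    """One executed verification test. Keep it even when nothing failed."""
    failure_mode: str
    test_plan_ref: str             # pointer back to the documented plan
    samples_tested: int
    samples_failed: int            # zero is still a result worth recording
    first_failure_at: Optional[str]  # cycle count or elapsed time; None if no failure
    detection_triggered: bool      # did the existing control actually flag it?
    notes: str

# Illustrative record with invented values, including a negative result
record = FmvtRecord(
    failure_mode="weld crack under cyclic load",
    test_plan_ref="FMVT-017",      # hypothetical document ID
    samples_tested=12,
    samples_failed=0,
    first_failure_at=None,
    detection_triggered=False,     # nothing to detect; note that explicitly
    notes="No crack initiation through run-out; conditions per plan.",
)
print(json.dumps(asdict(record), indent=2))  # ready for the quality system archive
```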
Step 4: Feed the Results Back
This is where FMVT earns its keep. Take what you learned and update three things:
The FMEA. Adjust severity, occurrence, and detection ratings based on actual test data. If you thought a crack would propagate slowly but testing showed it accelerates after 500 cycles, your severity rating just changed. If you thought your SPC chart would catch the drift but testing showed the shift happens between samples, your detection rating just changed.
The control plan. If the test revealed that the failure mode develops faster than expected, increase your monitoring frequency. If the test showed that the failure mode produces a detectable signal before catastrophic failure, add that signal to your control plan as a specific check point.
The design or process. If the test revealed a failure mode that’s more severe, more frequent, or less detectable than the FMEA assumed, you have a choice: accept the risk with full knowledge, or change the design or process to reduce it. FMVT gives you the data to make that choice intelligently.
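The arithmetic of the update is trivial, which is exactly why there’s no excuse for skipping it. A sketch using the same rating shift as the bearing case below:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

# Ratings as the FMEA session left them: beliefs
before = rpn(severity=7, occurrence=2, detection=3)   # 42, "low priority"

# Ratings after verification testing: evidence showed the existing
# control never flags the failure, so detection was far worse than assumed
after = rpn(severity=7, occurrence=2, detection=7)    # 98, priority changed

print(f"RPN before testing: {before}, after: {after}")
```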
A Real-World Example: The Silent Bearing
Here’s how this plays out in practice.
An automotive supplier was developing a new electric motor assembly for an EV drivetrain. During the FMEA, the team identified a potential failure mode: bearing preload loss due to thermal cycling. The severity was rated at 7 (motor performance degradation, potential vehicle shutdown). Occurrence was rated at 2 — the bearing supplier had a good track record. Detection was rated at 3 — end-of-line testing would catch any motor with excessive vibration.
RPN: 42. Low priority. The team moved on.
But the quality engineer on the team had been around long enough to know that bearing behavior under thermal cycling was tricky business. She advocated for FMVT on this specific failure mode.
The test was designed: twelve motor assemblies would undergo thermal cycling that simulated five years of seasonal temperature swings. Vibration sensors would monitor each motor continuously. End-of-line test equipment would be used at intervals to simulate production inspection.
The results were sobering.
Four of the twelve motors showed measurable preload loss after the equivalent of 18 months. But here’s the critical finding: the vibration signature didn’t change in a way that the standard end-of-line test would flag. The preload loss was gradual enough, and the vibration increase was subtle enough, that it fell within the pass/fail thresholds of the existing detection method.
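To see why a fixed threshold misses this kind of failure, consider a toy version of the check. The numbers are invented, not from the actual test; they show how readings can drift well inside a pass/fail limit while the trend is screaming:

```python
PASS_LIMIT = 2.0  # end-of-line vibration RMS limit, arbitrary units

# Invented readings from successive inspections as preload gradually relaxes
readings = [1.10, 1.16, 1.24, 1.35, 1.48, 1.63, 1.70]

# The existing control: a one-shot pass/fail gate at each inspection
print(all(r < PASS_LIMIT for r in readings))   # True: every motor "passes"

# The same data, read as a trend: a sustained drift is visible long
# before any single reading approaches the limit
drift = (readings[-1] - readings[0]) / readings[0]
print(f"{drift:.0%} increase")                 # 55% increase in vibration RMS
```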
The FMEA’s detection rating of 3 was wrong. Based on test data, it should have been a 7 or 8. The recalculated RPN jumped to 98. The team redesigned the bearing retention system, added a supplemental measurement to the end-of-line test specifically targeting preload, and implemented an accelerated thermal cycling screen in production.
The failure mode never reached a customer. Not because the FMEA caught it — but because FMVT proved the FMEA wrong.
Common Objections — and Why They Don’t Hold
“We don’t have time for additional testing.”
You have time for FMEA sessions that produce unverified assumptions, but not time to check whether those assumptions are correct? The hours spent in that conference room are wasted if the output is fiction. FMVT converts those hours into evidence.
“It’s too expensive.”
Compare the cost of a verification test to the cost of a field failure. A bearing test on twelve samples costs a fraction of a single warranty campaign. A thermal cycling chamber for a week costs less than one customer plant shutdown.
The question isn’t whether you can afford FMVT. The question is whether you can afford to ship products based on guesses.
“Our FMEA process is robust enough.”
Is it? When was the last time you went back to a completed FMEA and compared the failure mode ratings to actual field data? If you’ve never done that exercise, you don’t know how accurate your FMEA process is. Most organizations that perform this comparison discover their FMEAs are accurate about 40-60% of the time. That’s closer to a coin flip than a risk assessment.
“We already do DVP&R.”
DVP&R verifies that your design meets requirements under specified conditions. FMVT verifies that your failure mode assumptions are correct. They’re complementary, not redundant. DVP&R asks “does it work?” FMVT asks “when it stops working, does it behave the way we predicted?”
Building FMVT Into Your Quality System
FMVT shouldn’t be an occasional activity triggered by a near-miss or a skeptical engineer. It should be a systematic part of your product development and process validation workflow.
Make it a gate. Include a Failure Mode Verification review in your APQP timeline. After the FMEA is completed and before the control plan is finalized, identify the top 3-5 failure modes that warrant verification testing and build the test plan into the project schedule.
Budget for it. Include FMVT resources in your project budgets from the start. Testing fixtures, instrumented samples, lab time, and engineering hours should be planned, not scavenged from contingency.
Build a library. Every FMVT produces data. Organize that data into a searchable library of verified failure modes — their triggers, propagation rates, detection signatures, and mitigations. This library becomes your organization’s most valuable risk management tool, informing future FMEAs with real data instead of assumptions.
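What “searchable” means in practice: every verified failure mode becomes a record you can query by trigger or signature during the next FMEA. A minimal sketch with illustrative fields, seeded with the bearing case:

```python
from dataclasses import dataclass

@dataclass
class VerifiedFailureMode:
    """One entry in the FMVT knowledge base; fields are illustrative."""
    name: str
    trigger: str              # verified initiating conditions
    propagation: str          # how fast it developed in testing
    detection_signature: str  # what signal, if any, precedes failure
    mitigation: str           # what change actually reduced the risk

library = [
    VerifiedFailureMode(
        name="bearing preload loss",
        trigger="thermal cycling, seasonal temperature swings",
        propagation="measurable after ~18 months equivalent",
        detection_signature="gradual vibration drift below EOL threshold",
        mitigation="retention redesign + preload-specific EOL measurement",
    ),
]

# The next FMEA session queries verified data instead of memory
thermal_hits = [f for f in library if "thermal" in f.trigger]
```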
Connect it to field data. Close the loop. When field failures occur, compare them to the FMEA predictions and the FMVT results. Was the failure mode predicted? Was it verified? Did the verification test capture the actual behavior? This feedback loop continuously improves both your FMEA process and your FMVT methodology.
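One crude but useful way to run that comparison, assuming you can match failure modes by name (real programs need looser matching than this):

```python
def close_the_loop(field_failures: list[str],
                   fmea_predicted: set[str],
                   fmvt_verified: set[str]) -> None:
    """Score FMEA and FMVT coverage against what the field actually delivered."""
    n = len(field_failures)
    predicted = sum(f in fmea_predicted for f in field_failures)
    verified = sum(f in fmvt_verified for f in field_failures)
    print(f"Predicted by FMEA: {predicted}/{n}")
    print(f"Verified by FMVT before launch: {verified}/{n}")
    for f in field_failures:
        if f not in fmea_predicted:
            print(f"Blind spot to feed back into the FMEA: {f}")
```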
Train your team. FMVT requires a different mindset than traditional testing. Teach your engineers and quality professionals to think like investigators — to design tests that probe assumptions, challenge ratings, and expose the gaps between what the FMEA says and what reality delivers.
The Deeper Insight: Verification Is a Mindset
FMVT is more than a technique. It’s an expression of a quality philosophy — the belief that assumptions are liabilities until they’re converted into knowledge.
Every FMEA rating is a bet. You’re betting your organization’s reputation, your customer’s trust, and potentially your users’ safety on a number that was generated in a conference room by people who were tired, under deadline pressure, and influenced by the loudest voice in the room.
Failure Mode Verification Testing is the discipline of cashing in those bets before reality does it for you.
It’s the difference between saying “we believe this is unlikely” and saying “we tested this, and here’s exactly how unlikely it is.” It’s the difference between hoping your detection system works and knowing it does — because you fed it a real failure and watched it respond.
In a world where products are more complex, development cycles are shorter, and customer expectations are higher, the margin for unverified assumptions has disappeared. The organizations that will thrive are the ones that treat every failure mode not as a line item in a spreadsheet, but as a hypothesis that deserves to be tested.
Your FMEA tells you what might go wrong. FMVT tells you what actually goes wrong, how fast it happens, and whether you’d catch it in time.
Stop filing your risks. Start testing them.
Peter Stasko is a Quality Architect with 25+ years of experience transforming manufacturing organizations from reactive firefighting to proactive quality systems. He specializes in bridging the gap between theoretical quality frameworks and practical shop-floor implementation — helping teams turn assumptions into evidence and spreadsheets into living quality tools that actually protect the customer.