There's a pattern I see in nearly every discrete manufacturing operation I work with. It doesn't show up in OEE reports or capacity dashboards. It shows up in the conference room, during the conversations where capital decisions get made and supplier relationships get re-evaluated.
The person in the room with the most knowledge about the problem presents with the least confidence. And the person with the least knowledge presents with the most.
It's the Dunning-Kruger effect, and it plays out in manufacturing more than people realize. The nuanced perspective gets set aside. Not because it's wrong, but because certainty feels like a decision, and nuance feels like delay.
Follow the Data Backward
To understand how this happens, you have to follow the data backward.
A division director needs to make a capacity decision. New contracts are coming in, maybe $10 million, maybe $20 million, and the question is whether the existing lines can absorb the volume or whether capital equipment is needed. This isn't hypothetical. This is the kind of decision that determines whether a division hits its targets or misses delivery dates and sends the business to the only other supplier on the planet who can take it.
The director asks the team for data. What happens next is remarkably consistent across organizations.
An automation engineer pulls a CSV export from the control system. Maybe it's tag data from the Historian, maybe it's production counts from the MES, maybe it's quality records from a separate system. The export captures a snapshot, frozen in time, stripped of context.
That CSV lands on the desk of a subject matter expert. The SME opens Excel and begins the real work: rebuilding the context that should have been attached to the data in the first place. Which shift was running? What recipe was active? Which supplier batch was in the hopper? What were the ambient conditions? Were there any maintenance windows that skewed the numbers?
This reconstruction takes hours. Sometimes days. The SME is basically doing detective work that the data infrastructure should have made unnecessary.
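The reconstruction can be sketched in a few lines of pandas. The column names and values here are illustrative, not from any particular system: the raw export carries only a timestamp and a value, while shift and batch context live in separate logs and have to be joined back on after the fact.

```python
import pandas as pd

# What the CSV export typically contains: values with no context.
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 06:30", "2024-03-01 15:10"]),
    "line3_temp": [182.4, 191.7],
})

# Context scattered across other systems: a shift schedule and a batch log.
shifts = pd.DataFrame({
    "shift_start": pd.to_datetime(["2024-03-01 06:00", "2024-03-01 14:00"]),
    "shift": ["A", "B"],
})
batches = pd.DataFrame({
    "batch_start": pd.to_datetime(["2024-03-01 05:45", "2024-03-01 14:30"]),
    "supplier_batch": ["ACME-0117", "ACME-0118"],
})

# merge_asof attaches the most recent shift and batch to each reading --
# the lookup the SME otherwise performs row by row in Excel.
enriched = pd.merge_asof(raw, shifts, left_on="timestamp", right_on="shift_start")
enriched = pd.merge_asof(enriched, batches, left_on="timestamp", right_on="batch_start")
print(enriched[["timestamp", "line3_temp", "shift", "supplier_batch"]])
```

Two context fields, two joins, and this toy version already assumes someone knows which logs to pull and that the clocks agree. The real version spans half a dozen systems and none of those assumptions hold for free.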
So by the time the analysis is complete, the SME has seen every edge case. They've found the caveats, the exceptions, the "it depends" qualifiers that make the real answer more complicated than anyone wants to hear. They understand the complexity deeply, and that understanding makes them hesitant. They present with nuance. They hedge. They qualify.
Meanwhile, someone else walks into the director's office with a simple narrative and total conviction. "We need a third shift." Or "the problem is the supplier." No spreadsheet. No caveats. Just confidence.
The director, under pressure and needing to move, goes with the confident voice. Not because they're foolish. Because they have four value streams competing for the same resources, a board expecting growth, and no mechanism to distinguish between a well-informed hedge and an uninformed guess.
This Isn't a People Problem
It's tempting to frame this as a communication issue. Maybe the SME should present with more confidence. Maybe the director should ask for more rigor. But I think the real gap is in the data infrastructure.
Think about what the SME is actually doing in that Excel spreadsheet. They're manually reconstructing context that the data should have carried from the moment it was created. Asset identity. Process hierarchy. Engineering units. Shift and batch information. Normal operating ranges. The relationship between this measurement and the three other measurements that affect it.
It's like getting a box of puzzle pieces with no picture on the box. The pieces are all there, but someone has to figure out what the picture is supposed to look like before they can even start assembling it. That's what the SME is doing every single time they get a CSV export. And the picture on the box, that's the context. It exists somewhere, scattered across the ERP, the MES, the quality management system, the historian, the CMMS, and the tribal knowledge of the controls engineer who set up the PLC fifteen years ago. It's just not attached to the data when the data moves.
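To make the contrast concrete, here's a minimal sketch of a reading that carries its context from the moment it's created. The field names are illustrative, not any particular standard or vendor schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ContextualReading:
    value: float
    unit: str                      # engineering units travel with the number
    asset_path: str                # process hierarchy: site/area/line/cell
    shift: str
    recipe: str
    supplier_batch: str
    normal_range: tuple            # expected operating band for this tag

reading = ContextualReading(
    value=182.4,
    unit="degC",
    asset_path="plant1/packaging/line3/sealer",
    shift="A",
    recipe="R-204",
    supplier_batch="ACME-0117",
    normal_range=(175.0, 195.0),
)

# Any downstream consumer -- dashboard, analysis, alert -- gets the whole
# picture without a reconstruction step.
print(asdict(reading))
```

The point isn't the data structure; it's where the attachment happens. Everything in that payload exists somewhere already. The difference is whether it's bound to the value at creation or rebuilt in Excel every time someone asks a question.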
What This Actually Costs
When context has to be rebuilt manually for every question, a few things happen that quietly add up.
The automation engineer becomes a data concierge. Their job is supposed to be engineering: designing, optimizing, and maintaining control systems. Instead, they spend a meaningful percentage of their time responding to export requests. Every question from a quality engineer, a CI lead, or a plant manager starts with "can you pull me the data for..."
The subject matter expert becomes a translator. Instead of applying their expertise to the problem, they're spending the first 70% of their effort just getting the data into a shape where expertise can be applied. That's a lot of high-value time spent on data prep rather than actual analysis.
And the director gets a recommendation with a shelf life measured in hours. By the time the analysis is done, the shift is over. The batch has moved. The supplier has shipped. The decision window has narrowed or closed entirely.
So the capital case, the one that needs data from six systems correlated across three months of production to justify a $30 million equipment purchase, rarely gets built with the depth it deserves. It moves forward on experience and judgment, finance raises questions, and the equipment arrives later than the timeline called for.
The Visual Factory
Every operations leader I talk to describes the same aspiration: the right metrics, at the right points in the process, checked at the right frequency. A factory that communicates its own status without anyone having to run a report, pull an export, or ask an engineer.
The visual factory. The one where the plant runs without having to say a word.
It's a compelling vision. And it's achievable. But you can't build it on top of a data infrastructure that requires manual context reconstruction for every question. If the metrics on the floor are powered by the same CSV-to-Excel pipeline that produces the quarterly capacity analysis, they'll be just as stale, just as disconnected, and just as untrusted.
The visual factory requires data that arrives with its context already attached. Asset identity, process meaning, operational thresholds, hierarchical relationships, all present at the moment the data is created. Not reconstructed after the fact by whoever happens to know the answer.
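When the thresholds travel with the data, a floor display can derive status mechanically, with no export and no analyst in the loop. A sketch, with illustrative tag names:

```python
def status(value, normal_range):
    """Derive a display status from a reading and its attached operating band."""
    lo, hi = normal_range
    return "OK" if lo <= value <= hi else "CHECK"

# Readings arriving with their own operating bands already attached:
readings = [
    {"tag": "line3/sealer/temp",     "value": 182.4, "normal_range": (175.0, 195.0)},
    {"tag": "line3/filler/pressure", "value": 2.9,   "normal_range": (3.0, 4.5)},
]

for r in readings:
    print(r["tag"], status(r["value"], r["normal_range"]))
```

The logic is trivial, and that's the argument: the hard part of the visual factory was never the display logic. It was getting the operating band, the asset identity, and the process meaning to arrive alongside the number.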
The Question Worth Asking
There's no single answer to this. Every manufacturing operation has its own history, its own systems, its own accumulated complexity. The tag naming conventions alone can vary between plants in ways that would make a software engineer weep.
But there is a question worth sitting with:
How often do the people with the deepest understanding of a problem also have the clearest way to communicate what the data is telling them? And what would it look like if the infrastructure made that easier?















