Are Our Programs Simulations? When status replaces reality, programs collapse.

I had some downtime on vacation recently and read Simulacra and Simulation, a book that explores how, over time, signs and images gradually detach from reality until they stop representing it entirely, creating an environment of hyperreality where simulations are treated as more real than reality itself. Pretty interesting, but what does that actually mean?
Though I didn't plan it that way, the book gave me an exciting new lens for thinking about program management and why leaders sometimes make decisions based on representations of progress rather than the reality on the ground. Consider the Challenger disaster: a program confidently marked "green" while critical warnings were lost in reporting. The launch decision was made based on a version of reality that didn't exist. How could that happen?
I wrote a short article about what I'm calling "simulation bias" and why I believe it's going to become a critical challenge for PMOs and tech leaders as AI-mediated insights become the norm for organizations across the globe. Soon, it won't just be our job to define reality for programs, but also to audit the tools mediating that reality to ensure we're interpreting the ground truth correctly.
As program managers, we translate uncertainty into something stakeholders can digest: neat dashboards, simple colors, confident slides. But there's a catch: what we deliver isn't the work itself. It's a representation of the work. And when decisions are made on that representation, simulation bias creeps in.
We've all been there: an in-flight program gets handed to you and it's "green." But once you take a closer look, you realize it's red.
I call these "watermelon programs" (green on the outside, red on the inside). But beneath the surface, these programs point to something deeper and more dangerous: a creeping simulation bias that I believe organizations will increasingly face in the coming years, and that they are not prepared for.
From Reality to Representation
Here's how it happens:
- You update a program status or create an artifact describing an aspect of your program.
- Leadership reviews it and makes strategic decisions based on what they see.
- But the status snapshot represents a moment in time, not the living state of the program.
Projects move faster than reporting cadences. A program marked green on Monday morning could shift yellow or red by Friday, yet executives keep operating on that Monday snapshot as if it were current truth. Not a big deal in 1970, but a serious problem at today's pace of delivery.
The result? Decisions become biased toward the simulation: a representation of the program frozen in time, rather than the reality on the ground.
The Amplification Effect of AI Dashboards
Now layer in AI-driven dashboards and real-time reporting pipelines.
On the surface, this seems like a fix: faster updates, less human error, more transparency. But in reality, AI mediation accelerates simulation bias:
- AI-summarized dashboards compress nuance into clean, confident narratives.
- Leadership consumes metrics without engaging with context.
- Over time, decisions shift from managing programs to managing representations of programs.
The danger isn't that AI gets it wrong. The danger is over-trusting the artifact because it feels more objective and precise than human judgment, even when it's outdated, incomplete, or misaligned.
Why Simulation Bias Matters
Simulation bias isn't about tools. It's about how organizations perceive reality:
- Stakeholders begin trusting the dashboard, not the delivery teams.
- Metrics replace conversations about uncertainty ("How do we get to green?" instead of "What is the program's current reality, and what is its impact?").
- Program health gets redefined by what looks true, not what is true.
The risk compounds: the bigger the org, the more layers between where work happens and where decisions get made. As AI tools become the default mediators, these distortions will scale faster than our ability to detect them.
A New Discipline for Program Leaders (and PMOs)
The role of program managers is shifting. It's no longer enough to just report status. We have to:
- Validate whether artifacts reflect execution reality.
- Educate stakeholders on the difference between representation and ground truth, and how to audit it.
- Challenge AI-driven dashboards when the narrative doesn't match delivery.
Because when leadership manages simulations instead of programs, failures compound silently until it's too late to course-correct.
Have you ever been in a program where the dashboard said "green," but reality told a different story?
Curious to hear your take: How confident are you that your programs represent reality, not just the story your artifacts are simulating?
