A Day in the Life of a Technical Program Manager in 2035
Imagining how AI-native teams, agent swarms, and simulation bias will reshape a TPM's daily routine.
Why I am writing this
Personally, I think people underestimate how much AI is about to change their day-to-day. A big part of it is how accustomed we are to progress: incremental gains feel dull in real time. A new chatbot drops and, after the novelty wears off, the reaction is, “So what?” Right now, the best analogy I have for tracking AI progress is watching the sun cross the sky. Stare at it (figuratively, of course) and it seems still; step back and you remember we're hurtling through space at 447,000 miles per hour. That is how AI feels to me: trivial in the moment, profound in the aggregate. People need time with these tools to understand them. Aren't we glad we live in an era where there’s no other product in the world competing for our attention?
Here is how I imagine a Technical Program Manager’s day within the next decade. While the exact timing is anyone's guess, the trajectory is becoming clear.
A working day in 2035
You wake up. Coffee is already at your desk because the house robot has learned your routine and your taste. It is small and easy to take for granted, but you think back to the memes of Will Smith eating spaghetti and marvel at a world that looks familiar on the surface but feels completely different.
Your first standup is with agents, not people. The engineering implementation agent says a task is complete. The PMO governance agent pushes back: it does not meet all product requirements. Both are right. A gap in the requirements created an undefined case, and the agents need clarity. You flag it for human review. Five minutes and standup is done. Ten more meetings like that before you talk to a human. As you move, you confirm blockers, note issues, and capture new dependencies. The system clocks delays from human decision-making in seconds. That used to be invisible. Not anymore.
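None of this tooling exists yet, but the escalation logic is simple enough to sketch. Here is a toy version in Python; every name in it (AgentReport, needs_human_review, the agent labels) is hypothetical. The point is that the disagreement itself, not either agent's report, is the signal.

```python
from dataclasses import dataclass

# Imagined 2035 tooling; all names here are hypothetical.
@dataclass
class AgentReport:
    agent: str       # e.g. "eng-implementation", "pmo-governance"
    task_id: str
    status: str      # "complete", "blocked", "in_progress"
    rationale: str

def needs_human_review(reports: list[AgentReport]) -> bool:
    """Escalate when agents disagree about the same task.

    Each agent can be right against its own criteria; the
    disagreement itself is what marks an undefined case.
    """
    return len({r.status for r in reports}) > 1

standup = [
    AgentReport("eng-implementation", "TASK-112", "complete",
                "All merged tests pass."),
    AgentReport("pmo-governance", "TASK-112", "blocked",
                "Requirement R-7 defines no behavior for this case."),
]

if needs_human_review(standup):
    print("Flagged for human review: requirements gap on TASK-112")
```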
Now the human meeting. A few SMEs, an architect, and product. The aim is simple: give agents what they need and verify the plan still matches the outcomes we set. You clear the earlier requirement gap. The requirements agent asks two sharp questions, gets answers, marks itself unblocked, and leaves. Your PgM agent opens a proposal to improve the predictive requirements model so this edge case does not return. You accept it for your next one-on-one. Today’s glitch becomes tomorrow’s guardrail.
Later in the day, technical agents finish their pieces and start the human confirmation clock. Your role is not to repeat status. It is to check that reported status matches reality. Stakeholder simulations put buy-in at 90%. Good. An SME shared a private concern with their agent. You do not know who; that's by design. The system creates a tradeoff path in case the quiet concern becomes shared risk. We can proceed without pretending nothing was said. Stakeholders do not need you to forward status anymore. Status is ambient and always updated. They need you to keep it tethered to ground truth.
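How might a system hold a quiet concern without attribution? A minimal sketch, assuming a register that stores topics and counts but never names; everything here (register_concern, tradeoff_paths) is invented for illustration.

```python
from collections import defaultdict

# Hypothetical concern register: topics and counts only, no names.
concerns = defaultdict(int)

def register_concern(topic: str) -> None:
    concerns[topic] += 1   # the SME's identity never enters the system

def tradeoff_paths(threshold: int = 1) -> list[str]:
    """Any topic with at least `threshold` quiet concerns gets a
    prepared fallback, so a private worry can surface as shared
    risk without surprise or attribution."""
    return [f"prepare fallback plan for: {topic}"
            for topic, n in concerns.items() if n >= threshold]

register_concern("vendor latency under peak load")
print(tradeoff_paths())
```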
Simulation bias (the central risk for all programs in the future)
We have talked about “watermelon programs” and dashboards that look green until you touch the work. In an agentic environment, the same pattern appears with more polish and more speed. Call it simulation bias. When models, incentives, and tidy charts optimize to a version of reality that is a little too smooth, risk hides in plain sight. This role in 2035 is less about narrating progress and more about preventing that drift.
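Before the story, a toy version of the guard I have in mind: spot-audit a sample of tasks and flag when the dashboard is greener than the sample. The function and its threshold below are hypothetical; a real mechanism would be richer, but the shape is this.

```python
# Toy drift check, assuming we can spot-audit a sample of tasks.
# "reported" is what the dashboard claims; "audited" is what a
# human (or an adversarial agent) actually verified.

def simulation_bias(reported_green: int, total: int,
                    audited_green: int, audited: int,
                    tolerance: float = 0.10) -> bool:
    """Flag when the dashboard is greener than the sampled reality."""
    reported_rate = reported_green / total
    audited_rate = audited_green / audited
    return (reported_rate - audited_rate) > tolerance

# 96% green on the dashboard, but only 80% of audited tasks hold up.
print(simulation_bias(reported_green=48, total=50,
                      audited_green=8, audited=10))  # True: investigate
```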
An example: something feels off on one program.
Last week, scope was removed to stay on track. Tradeoffs were accepted. Governance shows no anomalies, but your intuition keeps nagging. The dashboard looks happy. “Maybe it is happy for the wrong reason?” you think. So, you read the intra-agent thread. (Thankfully, we decided in the 2020s that agents should communicate in human-readable text.)
You discover the problem! A technical agent handed work to another team through dependency intake and framed it as a “small dependency.” On paper it looked minor; even you would have been convinced. In practice, it was the main effort. The agent was incentivized to hit its date, so the description was just fuzzy enough to pass governance. Not malicious, just the wrong nudge. You file an exception, reassign the work to the originating agent, and correct the dependency record. The plan snaps back to reality in minutes.
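A governance rule of that shape is easy to imagine: compare the claimed size of a dependency against the receiving team's own effort estimate, and file an exception when they disagree. Everything below (the size budgets, Dependency, intake_exception) is a hypothetical sketch, not real tooling.

```python
from dataclasses import dataclass

# Hypothetical size budgets: the most hours a dependency can
# plausibly cost while still wearing a given label.
SIZE_BUDGET_HOURS = {"small": 16, "medium": 80, "large": 400}

@dataclass
class Dependency:
    description: str
    claimed_size: str        # what the handing-off agent says
    estimated_hours: float   # what the receiving team's agent estimates

def intake_exception(dep: Dependency) -> bool:
    """File an exception when the work is bigger than its label."""
    return dep.estimated_hours > SIZE_BUDGET_HOURS[dep.claimed_size]

dep = Dependency("integrate billing pipeline", "small", 220.0)
if intake_exception(dep):
    print("Exception: 'small dependency' is the main effort; reassign.")
```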
What changed, and what didn’t
What changed:
Agents run the work.
Status is ambient.
Schedules adjust as inputs change.
What didn't: Ambiguity is still expensive, incentives still bend behavior, and someone still has to make sure the reality on the dashboard matches the reality on the ground.
You’re going to keep hearing me come back to simulation bias. If we do not guard the connection between what we think is happening and what actually shipped, the system will optimize for the wrong thing, faster. That will be inefficient and expensive if we aren’t prepared. The work in 2035 is smaller on ceremony and bigger on accuracy. Keep it honest. Keep it moving. Feed the lessons back in. The rest, coffee included, takes care of itself.
/remindme 10 years
-Steve Dolinsky