A few months into leading one of the largest automation programs in Cisco’s customer experience division, I sat across from a senior engineer who had been with the company for eleven years. Smart. Deeply respected. The kind of person whose opinion moves rooms.
He asked me, quietly, whether his role would exist in two years. I didn’t have a clean answer. The honest one, “I don’t know exactly, but here’s what I believe,” felt insufficient for someone who had built his career on certainty. And yet it was the only truthful thing I could offer. What I learned in that conversation, and in dozens like it over the years that followed, is that the fear driving most teams right now isn’t about artificial intelligence. It’s about whether their leader has genuinely thought through what comes next and whether they’re willing to say so.
That’s the leadership challenge of 2026. Not the technology. The trust.
The Silence That’s Costing You More Than You Know
Most business leaders I’ve observed are not ignoring AI. They’re actively investing in it, buying tools, attending conferences, and restructuring workflows. But many of them are doing all of that without having an honest conversation with their teams about what it means.
And their teams notice. When leaders go quiet on the big questions (What does this mean for our jobs? What are we building toward? Are we going to be okay?), employees don’t conclude that everything is fine. They conclude that leadership doesn’t know, or doesn’t care enough to say.
The silence costs more than people realize. Employees who feel uncertain about their future with an organization stop making long-term investments in their work. They stop flagging problems they think might make them look replaceable. They stop raising their hand for new challenges. They show up, but they hedge. And a team of talented people hedging their bets is one of the most expensive business problems you can have, because it’s almost invisible until the damage is done.
The leaders who are navigating this moment well have made a choice that’s harder than it sounds: they’ve decided to talk about the uncertainty instead of waiting until they have answers. Not to perform confidence they don’t have. To be honest about what they know, what they don’t, and how they’re thinking about it. That choice to lead out loud through ambiguity is the single most underrated leadership move available right now.
What “Good” Actually Looks Like When Your Industry Is Changing
I’ve spent years building systems that automate complex tasks at scale. The Lifecycle Automation Platform I led at Cisco reduced manual operational effort by 85% across global enterprise customers. The AI monitoring systems I helped design at Splunk cut diagnostic time by 30%. These weren’t small pilots. They were production systems, live across thousands of enterprise deployments, changing how real people did their jobs every day.
What I observed in every one of those environments was the same pattern: the teams that adapted fastest weren’t the ones with the most technical training. They were the ones with the clearest leadership. Clear didn’t mean certain. It meant their leaders had answered three questions, visibly and explicitly, before the tools ever arrived.
- What problem are we actually trying to solve? Not “we’re adopting AI because the industry is moving there.” What specific pain point, measured how, are we trying to address? When the answer to this question is concrete, people can engage with the change rather than just receiving it.
- What happens to the people doing the work that changes? This question is the one most leaders avoid, and avoidance is read as indifference. You don’t need a perfect answer. You need an honest one: Here’s how we’re thinking about it, here’s what we’re committed to, and here’s what we’ll figure out together.
- How will we know if it’s working? Before deployment, not after. A leader who can answer this question signals that they’ve thought through the consequences and are accountable for the outcomes, not just the initiative.
These three questions aren’t complicated. They’re just uncomfortable. Answering them publicly, before you have clean answers, is what leadership looks like when everything is moving.
The New Accountability Nobody Warned You About
Here is something I didn’t fully understand until I started working directly on AI product development: when you deploy a system that makes decisions autonomously, you don’t transfer accountability to the system. You concentrate it on yourself.
Every recommendation your AI generates, every action it takes on behalf of your business, every output a customer receives: you are responsible for all of it. Not the vendor. Not the model. You. This is new. For most of business history, accountability for a decision traveled with the person who made it; when you delegated a decision, responsibility went with it. AI breaks that. You can delegate execution and still own every outcome. Leaders who haven’t internalized this yet are going to be surprised. And the surprise tends to be public.
What responsible leadership looks like in practice: stay close to what your AI systems are actually doing. Not to micromanage every output, but to maintain calibration and to know, from regular personal contact with the results, when something feels off before your customers tell you. The leaders who get caught by AI failures are almost always the ones who treated it as infrastructure to be set up and monitored from a distance. The ones who catch problems early are the ones who stay curious about it past the rollout.
This is also, fundamentally, a trust argument. Your team is watching how seriously you take the systems you’re asking them to work alongside. If you’re not personally accountable to the outcomes, if you’re not visibly connected to what’s actually happening, why would they be?
Building a Team That Moves With You, Not After You
The talent conversation in 2026 is not about who gets replaced by AI. It’s about who learns to work alongside it, and more importantly, how you build a team where that learning happens continuously rather than in emergency response to change. The leaders winning that conversation have figured out something counterintuitive: the fastest way to build AI capability in your organization isn’t training programs. It’s strategic first deployments.
Pick the first AI workflow you implement carefully. Not for maximum ROI. For maximum learning. High-frequency, low-stakes, lots of human contact with the outputs. Something your people will interact with dozens of times a week. Customer FAQ drafts. Meeting summaries. Scheduling support. The goal isn’t the efficiency gain, though you’ll get it. The goal is building organizational muscle: people who have learned, from actual experience, how to evaluate whether an AI output is trustworthy.
That capability compounds. Teams that have it can deploy AI into new and more consequential workflows quickly and safely, because they already know how to work with it. Teams that don’t will keep making the same mistakes across every new deployment, regardless of budget.
And the side effect of this approach, the one nobody talks about, is that it answers your engineer’s question better than anything else you could say. When people see their organization deploying AI thoughtfully, involving them in the process, and investing in their ability to work alongside it, they stop asking, “Will I be replaced?” They start asking, “What’s next?”
That shift from defensive to curious is what good AI leadership actually looks like from the inside.
What This Moment Is Really Testing
Trust is built through integrity and destroyed through inconsistency.
I’ve learned that the hard way, leading programs that changed how people worked across organizations I cared deeply about. What I know now is that integrity in this moment doesn’t mean having all the answers. It means saying what you actually believe, doing what you say you’ll do, and staying present for the questions that don’t have clean answers yet.
The technology is real. The disruption is real. But the fundamental test of this moment isn’t whether you’ve adopted the right tools. It’s whether the people who work for you believe that you’re leading them toward something, not just managing them through a transition they had no say in.
That belief doesn’t come from announcements or strategy decks. It comes from the quality of your presence in the conversations that matter, including the uncomfortable ones.
Your team isn’t afraid of AI. They’re looking at you to see if they should be.