The moment was brief, but clear. A participant in a leadership workshop clicked on an AI-generated recommendation. No confirmation, no double-check. Decision made. The room went quiet. Someone asked: “Can it do that?” Yes. It just did.

That moment stayed with me, not because the tech didn’t work, but because it opened my eyes to something more essential. When AI becomes part of the team, it doesn’t just change workflows. It changes responsibility. Roles. Trust. And with that, the very foundation of leadership.

The real shift isn’t technical – it’s social

In many organisations, AI is still positioned as a support function: drafting emails, summarising meetings, prioritising tasks. It sounds harmless. But it isn’t. Because AI is already doing things we used to consider inherently human: making recommendations, setting priorities, shaping direction. And it does so without pause, doubt, or hesitation. That’s where tension arises.

What happens to team dynamics when a new “colleague” never questions, never tires, and never hesitates—just delivers?

Leadership isn’t being replaced. But it is being redefined. This isn’t a futuristic thought experiment. It’s already happening. According to Microsoft (2025), 24% of companies are deploying AI agents at scale, while only 12% are still piloting. One founder reportedly built a digital staffing agency powered entirely by AI, with no employees, just code, and is said to be on track for $2 million in revenue. These aren’t edge cases anymore. They’re the early signs of a shift in how we understand collaboration, leadership, and responsibility. That’s why we have to actively design the intersection between teams and technology. Not as an afterthought, but as a core leadership task.

“AI expert Daniel Susskind hypothesizes that human work will persist even as AI capabilities expand because of three limits: it can be more efficient to have AI and humans working in tandem, human preference, and moral judgment.”

— Microsoft (2025), Work Trend Index Annual Report

Responsibility remains human – even when decisions don’t

Many tools project clarity. “Based on available data, we recommend…” But what does that mean in practice? Who questions the assumptions? Who owns the consequences? What happens when the “optimal” answer doesn’t fit lived experience, team capacity, or cultural nuance?

This is where leadership shows up. Not as the final decision-maker, but as the one who holds the space in which real decisions (conflicted, contextual, human) can still be made.

Trust isn’t a technical feature

Studies show that trust in AI is particularly low in small teams. In pairs, people often feel uncertain or inferior. In larger constellations, trust levels even out. Why? Because AI doesn’t send social signals. No hesitation. No frown. No “wait, what do you mean?” What’s left is perceived competence. But without context, it can feel cold, or worse, authoritarian. Whether AI becomes a collaborator or a threat doesn’t depend on its performance. It depends on perception. And that perception is shaped by leadership. Not by translating technical specs, but by building relationships in a space that no longer includes tone of voice, facial expression, or shared history.

AI Doesn’t Fix Broken Teams

Let’s drop the illusion: AI doesn’t improve dysfunctional teams. It only works when the basics are already strong: clear roles, open communication, psychological safety. That’s not a motivational poster. It’s backed by data. In high-performing teams, AI becomes a multiplier. In low-performing ones, it becomes a stress test. Without trust and structure, AI doesn’t enable, it destabilises.

So the relevant leadership question isn’t “Which tool do we implement?” It’s: “Can our team absorb this kind of intelligence without losing direction, confidence, or cohesion?”

Understanding Human-Agent Teams

When we talk about collaboration between people and AI, we’re not just talking about using tools. We’re now talking about Human-Agent Teams or Multi-Agent Teams: constellations in which at least one human and one autonomous AI system work together toward a shared goal. But here’s the shift: the AI doesn’t just follow orders. It makes decisions. It initiates actions. It adapts. Not a passive assistant, but an active agent.

That means the social logic of teams is changing. We’re no longer the sole drivers of direction. The AI doesn’t ask for permission, it participates. And that makes questions of trust, authority, and collaboration more relevant than ever.

Three Phases of Human-AI Collaboration

In a recent study, Microsoft identifies three distinct phases of how AI enters team life:

  • Phase 1: AI as Assistant. Humans work more efficiently through automation.

  • Phase 2: Human-AI Teams. AI takes on independent tasks, based on human prompts.

  • Phase 3: Agent Operations. AI handles entire processes; humans steer or step in as needed.

These phases mark a shift toward what some call Frontier Firms: organisations already designed around “intelligence on demand,” with hybrid teams and new roles like the Agent Boss. These companies are no longer experimenting with AI. They’re building around it. But this shift doesn’t just require new tech. It requires different human skills. In my view, employees benefit most when they see AI not as a threat or just another tool, but as a thinking partner: something to exchange ideas with, to identify blind spots, sharpen outcomes, and challenge assumptions. Those who learn to delegate clearly, ask precise questions, give feedback, and stay flexible will lift the performance of their entire team.

“The same AI that worked well in high-performing teams failed in low-performing ones. It’s not AI that makes a team good – it’s good teams that make AI effective.”

— based on Bendell et al. (2025)

“Many still view AI agents as static tools rather than adaptive, knowledge-enhancing partners. This narrow perspective restricts AI’s potential in dynamic decision-making and strategic reasoning.”

— Tina Weisser

What this means for leadership

As we move deeper into this new reality, it’s worth pausing to name something important. I don’t see AI as a replacement for human work, but as a spark for more thoughtful, responsible collaboration between people and technology.

Bringing AI into our teams isn’t just a technical upgrade. It’s an ethical shift. One that asks us, especially as leaders, to stay grounded in what truly matters: fairness, inclusion, shared responsibility. The way we design and integrate AI will shape not just what we do, but who we become, as teams, as organisations, and as a society. That’s why trust and orientation aren’t optional. They’re foundational. Leadership isn’t disappearing. But it is shifting. Not because AI is smarter, but because people need new forms of clarity and care.

This is my credo: AI should serve people, not the other way around.

Three things to focus on:

  • Clarity on who decides, when, and why.
    Formal roles are no longer enough. Teams follow what’s effective, not what’s on the org chart.
    We’re currently prototyping a tool to support this: the Human-AI Interaction Journey. It maps who is involved in key decisions, where AI is already acting, and where human clarity is missing. The goal is to link real use cases in a way that reveals the bigger picture.

  • Communication beyond content.
    When AI drafts first versions and sorts inputs, leaders and team members need to reintroduce meaning: what matters, and why.
    Mapping the communication flow can uncover gaps where messages arrive but meaning doesn’t.

  • Learning as a leadership responsibility.
    Not (only) in the form of workshops, but in daily practice. The most effective leaders and teams in the AI era will be those who learn openly, fail visibly, and adapt alongside their teams.
    Journey maps can make learning loops visible: Where do we pause? Reflect? Shift course?

AI literacy among executive teams is still lacking. Research covering nearly 7,000 executives across 645 firms shows a clear pattern: companies led by AI-literate teams are significantly more likely to identify where AI can create value – and to act on it.

— Impact Council, Fast Company (2025)

What Matters Now

The real questions for leadership today aren’t about tools or vendors. They’re deeper:

  • Where is AI already making decisions in my team, openly or quietly?

  • Where can AI help us as humans, and where is it overwhelming?

  • How do we hold trust when not everyone in the team is human?

Because in the end, it’s not the technology that decides how we work. It’s what we make of it. When AI makes a decision, and no one notices, leadership becomes more visible, not less. That’s the work now.

Further Readings and Sources:

Bendell, R. et al. (2025): Artificial Social Intelligence in Teamwork
Why strong teams make AI impact possible in the first place – and why weak teams struggle or fail.

Georganta, K. & Ulfert, A. (2024): Would You Trust an AI Team Member?
Insights into trust dynamics in Human-AI Teams – and why team size, social embedding, and role clarity are critical factors.

Microsoft (2025): Work Trend Index – The Year the Frontier Firm Is Born
A forward-looking report on the rise of Human-Agent Teams and the evolving role of leadership in the age of intelligent agents.