When Decisions Start to Settle

A team is reviewing a decision that, in many ways, already feels made. The numbers look solid, the recommendation is clear, and there is a quiet sense in the room that the work has already been done. Option A carries an 85% probability of success.

No one says it out loud, but the direction feels set. I’ve been in rooms like this more often recently. And I’ve noticed how easy it is to go along with it, even when something feels slightly off. What strikes me is not the confidence of the system, but the speed with which a conversation settles around it. If the same result were framed differently, as a 15% risk instead of an 85% success rate, would someone pause? Or has the decision already, quietly, been made?

What Changes in the Room

What interests me is not the answer itself. It is what shifts in the room once it is there. Because the system doesn’t just give an answer. It changes what feels like a reasonable question.

The system doesn’t replace thinking.
It reshapes where thinking begins.

In conversations with leaders, this comes up again and again. Systems are introduced to support decisions. To make things feel clearer. To move faster. But they don’t just support. They quietly reshape how situations are seen, how trust is formed, and how decisions take direction.

How Trust Forms Without Being Questioned

It often starts subtly. We begin to speak about the system as if it were part of the team. We say that it “thinks”, that it “knows”, that it “prefers” one option over another. It sounds like shorthand. But it shifts how we interpret what we see. It creates a sense of intention, and with it, a kind of trust that is rarely questioned.

At the same time, something else happens. The more confident an answer sounds, the more likely it is to be accepted. Not because it is right, but because it leaves less room to disagree. And from there, people start to rely on it. Some teams move quickly. Others hesitate. Both reactions make sense. What I rarely see, though, is teams stopping to ask why they trust it in the first place.

When everyone is part of the decision,
ownership becomes harder to see.

Prof. Dr. Tina Weisser

How Decisions Quietly Narrow

This becomes most visible when decisions are made. The first answer often defines the space. Even when alternatives exist, they tend to stay close to that initial suggestion. The conversation starts to move around it. I have seen teams spend hours refining an answer without ever questioning the starting point. And something else follows. The more we rely on it, the less we check it. Not by decision. More by habit. Attention shifts. Questions become softer. Small inconsistencies pass.

We don’t lose judgment at once.
We adjust it, one small decision at a time.

When Responsibility Becomes Diffuse

This is also where responsibility becomes harder to locate. When decisions emerge between people and systems, ownership spreads. Everyone is involved, which makes it harder for anyone to step forward.

Over time, this changes how people contribute. Some hold back their perspective. Not because they have nothing to say, but because the system appears more precise, more informed. And with that, something valuable is lost before it is even voiced.

What We Stop Practicing

And there is a longer-term effect that is easy to overlook. What we no longer practice begins to fade. Slowly. Skills turn into oversight. Judgment becomes review instead of creation.

For me, the point is not to avoid these dynamics. They are part of working with any system. But I keep coming back to one question: At what point do we stop noticing how much we have already adapted?

The Cumulative Cost of Quiet Decisions

These subtle shifts in decision-making may seem small in isolation. But they are not isolated. They happen in hundreds of teams, dozens of times a day. The cumulative effect is not a single bad decision, but a slow erosion of the organization’s collective intelligence. The cost is not a line item in a budget. The cost is a portfolio of slightly less ambitious projects, a series of missed market signals, and a culture that slowly learns to stop questioning.

Human-Agent Codex V.1

What I’ve described here are not isolated moments. They tend to repeat themselves, in slightly different forms, across teams and contexts. Often unnoticed while they happen, and only visible in hindsight, when a decision feels oddly narrower than it should have been, or when something important was never really questioned.

Over time, I found myself returning to these situations and asking what exactly had shifted. Not in the system, but in how people related to it. How perception changed, how trust formed, how decisions took direction, and how responsibility was carried or quietly diffused. What emerged from this was not a single explanation, but a set of recurring patterns. Patterns that are not new in themselves, but that take on a different weight once a system becomes part of the conversation. They shape how we interpret what we see, what we accept as plausible, and how far we are willing to challenge it.

I started to map these patterns more deliberately. Not to simplify them, but to make them easier to recognize while they unfold. Because once you begin to notice them, they are hard to unsee. They show up in how we attribute intention to something that has none, how we give weight to confident answers, how we stay close to the first suggestion, how we rely without checking, how responsibility becomes less tangible, and how, over time, certain capabilities quietly recede.

What follows is a way of making these patterns more visible. Not as a framework to apply, but as a lens to look through.

Level 1: PERCEPTION

Biases that influence how we perceive the agent and its output.
Anthropomorphism
Description: The tendency to attribute human characteristics, intentions, and emotions to an AI agent, even when there is no basis for it.
Example: A team member says, “The agent doesn’t like my suggestions; it always phrases them so critically.” In reality, the agent is just following its programmed style guidelines.

Authority Bias
Description: The tendency to overly trust and accept information from an agent simply because it is presented confidently and authoritatively, regardless of its actual accuracy.
Example: An agent presents a market analysis with great certainty. The team accepts the conclusions without question, even though the underlying data is outdated.

Framing Effect
Description: The tendency to be influenced in decisions by the way information is presented (framed) by the agent, rather than by the facts themselves.
Example: The agent states, “We have an 80% chance of success.” The team agrees. If it had stated, “We have a 20% risk of failure,” the team would have hesitated.

Illusion of Understanding
Description: The tendency to believe one understands how an AI system works, when in reality the understanding is superficial or incorrect.
Example: A manager explains, “I know exactly how the agent arrives at its results,” but only describes the user interface, not the complex internal weightings of the model.

Level 2: TRUST

Biases that affect the level of trust we place in the agent.
Automation Bias
Description: The tendency to over-rely on automated systems, leading to uncritical acceptance of their suggestions and to errors being overlooked.
Example: A doctor overlooks a misdiagnosis in a radiological report because he trusts that the AI “has seen it correctly.”

Algorithmic Aversion
Description: The tendency to reject advice from an algorithm and prefer advice from a human, even when it is known that the algorithm is more accurate.
Example: A team ignores the agent’s data-driven sales forecast, follows the gut feeling of an experienced sales manager instead, and misses the target.

Algorithmic Appreciation
Description: The opposite of aversion: the tendency to prefer algorithmic advice, sometimes even when it is wrong.
Example: A junior team member trusts the agent’s code suggestion more than the advice of a senior developer because they consider the AI to be more objective.

Trust Miscalibration
Description: A general miscalibration of trust, where the trust placed in an agent does not match its actual capabilities, leading to over- or under-trust.
Example: A team uses a new agent for critical financial decisions (over-trust) or only for simple text corrections (under-trust) without knowing its true capabilities.

Level 3: DECISION

Biases that influence the decision-making process in a human-agent team.
Anchoring Bias
Description: The tendency to rely too heavily on the first piece of information offered (the “anchor”) when making decisions. The agent’s output often serves as a strong anchor.
Example: The agent suggests a project timeline of 6 months. The entire subsequent discussion revolves around 5-7 months, even though 12 months would be realistic.

Confirmation Bias
Description: The tendency to search for, interpret, and recall information in a way that confirms one’s pre-existing beliefs or hypotheses.
Example: A manager who is convinced of a marketing campaign specifically asks the agent, “Find data that proves the success of Campaign X.”

Complacency
Description: A state of reduced vigilance and critical thinking that arises from excessive trust in an automated system, leading to a failure to monitor the system’s performance.
Example: A pilot no longer actively monitors the flight route on autopilot. When an unexpected weather front appears, he notices it too late.

Level 4: RESPONSIBILITY

Biases related to the attribution of responsibility and accountability.
Diffusion of Responsibility
Description: The tendency for individuals to feel less responsible when they are part of a group. In human-agent teams, responsibility can “dissolve” between human and agent.
Example: After a wrong decision, a team member says, “I thought the agent was checking that. And Maria agreed, too.” In the end, no one feels fully responsible.

Moral Licensing
Description: The tendency to feel entitled to make a questionable decision after a series of good or moral decisions. Using an “objective” AI can feel like a free pass.
Example: An HR team uses an AI recruiter for a fair pre-selection and thus feels entitled to prefer a candidate from their own network in the final step.

Agency Dilution
Description: The feeling that one’s own agency or freedom of choice is diminished by collaborating with a powerful AI, leading to a passive attitude.
Example: An employee says, “What’s the point of me suggesting anything? The agent has better ideas anyway.” He stops contributing.

Level 5: CAPABILITY

Biases that affect human skills and competencies over time.
Deskilling
Description: The loss of human skills and expertise over time due to over-reliance on automated systems that perform these tasks.
Example: An experienced analyst loses his ability to do quick mental calculations because for months he has relied only on the agent’s Excel evaluations.

Loss of Situational Awareness
Description: A critical failure to perceive and understand the current state of the environment, often caused by being out of the active control loop.
Example: A manager who only reads the agent’s weekly summaries loses his feel for the actual mood and the small problems in his team.

Out-of-the-Loop
Description: A state in which a human operator is no longer actively involved in the decision-making process, leading to a loss of context and the ability to intervene effectively.
Example: A logistics planner lets the agent handle route planning completely. When a bridge is unexpectedly closed, he lacks the knowledge to quickly find a sensible alternative.

If you want to look a bit deeper, these patterns are not new. They have been studied for years, often in very different contexts. What changes now is how closely they come together in everyday decision-making.

On Automation Bias and Complacency
Work by Raja Parasuraman and Dietrich H. Manzey shows how quickly trust in automated systems can become uncritical, and how attention tends to drop once a system is perceived as reliable: Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.

On Diffusion of Responsibility and the “Moral Crumple Zone”
Madeleine Clare Elish describes how responsibility concentrates on humans, even when systems significantly shape the outcome: Elish, M. C. (2019). The Moral Crumple Zone: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 5, 40–60.

On Deskilling
Research on automation has long pointed to the risk that skills fade when they are no longer actively practiced. Early evidence suggests that AI-supported environments may accelerate this effect: Natali, L., et al. (2025). Cognitive and Human Factors in Medical AI: A Scoping Review.

On Algorithmic Aversion and Appreciation
Behavioral research shows that our relationship with algorithms is not stable. We tend to both reject them too quickly and trust them too easily, depending on context: Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.

On Loss of Situational Awareness
Mica R. Endsley explains what happens when humans are moved out of the loop, and engagement shifts toward passive oversight: Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.