March 19, 2026

From ABS to Autopilot: The Five Levels of Banking Autonomy

In 1978, Bosch introduced the first electronic anti-lock braking system. Drivers did not trust it. They pumped the brakes anyway, overriding the system that was trying to save them. It took a generation before people stopped fighting the technology. Today, ABS is invisible, mandatory, and no one questions it.

The same progression is playing out in banking. And most institutions are somewhere in the early stages, mistaking the presence of automation for readiness.

The five levels

The automotive industry uses a framework — the SAE levels of driving automation — to describe the progression from fully manual to fully autonomous. It is useful not because banking is like driving, but because the trust-accumulation mechanism is identical: each level requires evidence before the next one unlocks. Trust, earned through evidence.

Level 0: Fully manual. The human does everything. The system provides no assistance. In banking: paper-based processes, spreadsheets, manual approvals. Few institutions are still entirely here, but pockets exist in every bank.

Level 1: Assistance. The system handles specific, well-defined tasks. The human remains in control. In banking: RPA bots extracting data, running eligibility checks, routing applications. Deterministic. Auditable. Most Swiss banks have reached this level in at least some processes. It is the industrial backbone: valuable, but not agentic.

Level 2: Partial automation. The system can handle multiple tasks and make recommendations. The human must stay engaged and supervise. In banking: AI models that score risk, flag anomalies, or suggest pricing, but a human reviews every output before it becomes a decision. Many banks are here today, particularly in credit risk and compliance monitoring. The AI does the analysis. The human does the deciding.

Level 3: Conditional autonomy. The system makes decisions within a defined envelope. The human does not supervise every decision, but must be available to intervene when the system escalates. In banking: an agent that autonomously processes a standard mortgage application, issues an offer, and only escalates when the case exceeds its authorisation parameters. This is where most banks are not. And this is where the structural advantage begins.

Level 4: High autonomy. The system operates independently across a full domain. Human involvement is limited to governance, exception handling, and policy design. In banking: an agent that manages end-to-end mortgage origination, including pricing optimisation, portfolio risk management, and dynamic policy adjustment, with humans governing the envelope, not the individual decisions.

Level 5: Full autonomy. The system operates without human involvement. In banking: theoretically possible for fully commoditised, low-risk products. In practice, Level 5 is regulatory-dependent, jurisdiction-specific, and not the near-term objective. It is the direction of travel, not the destination for this decade.
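The Level 3 mechanics described above — decide autonomously inside a defined envelope, escalate outside it — can be sketched in a few lines. Everything here is illustrative: the `Application` fields, the envelope thresholds, and the `decide` function are hypothetical names and numbers, not any bank's actual authorisation parameters.

```python
from dataclasses import dataclass

@dataclass
class Application:
    amount: float       # requested loan amount, CHF
    ltv: float          # loan-to-value ratio
    model_score: float  # risk model confidence, 0..1

# Hypothetical authorisation envelope for a Level 3 mortgage agent.
# Thresholds are illustrative, not regulatory or real-world values.
ENVELOPE = {
    "max_amount": 1_000_000,
    "max_ltv": 0.80,
    "min_score": 0.90,
}

def decide(app: Application) -> str:
    """Approve autonomously inside the envelope; escalate otherwise."""
    if (app.amount <= ENVELOPE["max_amount"]
            and app.ltv <= ENVELOPE["max_ltv"]
            and app.model_score >= ENVELOPE["min_score"]):
        return "approve"   # Level 3: the agent decides, humans govern the envelope
    return "escalate"      # outside the envelope: a human decides

print(decide(Application(amount=650_000, ltv=0.72, model_score=0.94)))    # approve
print(decide(Application(amount=1_400_000, ltv=0.85, model_score=0.97)))  # escalate
```

The point of the sketch is that the move from Level 3 to Level 4 does not change this code's shape — it widens the envelope and shifts human work from reviewing decisions to tuning the thresholds.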

Where most banks actually are

The honest answer for most Swiss banks: Level 1 to 2, with ambitions for Level 3.

Bots are running. Some AI models are deployed. Risk scoring, document extraction, sanctions screening. These are real. But the gap between "we have AI in production" and "our AI makes decisions autonomously within a governed envelope" is larger than most organisations recognise.

Levels 1-2 are automation. Level 3 is an operating-model change. The technology difference is incremental; the governance difference is structural. Reaching Level 3 requires escalation architecture, accountability frameworks, reliability evidence, and the organisational willingness to let a system decide.

The transition from Level 2 to Level 3 depends on trust. The technology is the easier part.

How trust accumulates

No regulator, and no board, will approve a jump from Level 2 to Level 4. The progression is earned.

Evidence builds at each level. At Level 1, you prove the bot extracts data accurately. At Level 2, you prove the AI's recommendations align with human decisions. At Level 3, you prove the agent's autonomous decisions fall within acceptable parameters: measured, documented, reviewed. Each level's evidence is the permission slip for the next.
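The permission-slip logic above can be sketched as a gate: promotion to the next level unlocks only once enough documented, reviewed decisions at the current level show sufficient agreement with human or policy outcomes. The function name, decision count, and agreement threshold are all illustrative assumptions, not regulatory figures.

```python
def may_promote(level: int, reviewed_decisions: int, agreement_rate: float) -> bool:
    """Evidence at the current level is the permission slip for the next.

    level             -- current autonomy level (0..5)
    reviewed_decisions -- count of documented, reviewed decisions at this level
    agreement_rate    -- share of those decisions within acceptable parameters
    """
    MIN_DECISIONS = 10_000  # illustrative evidence volume
    MIN_AGREEMENT = 0.98    # illustrative alignment threshold
    if level >= 4:
        # Level 5 is regulatory- and jurisdiction-dependent, not earned
        # through local evidence alone.
        return False
    return reviewed_decisions >= MIN_DECISIONS and agreement_rate >= MIN_AGREEMENT
```

Note what the gate does not take as input: ambition. A bank at Level 2 with no measured, documented decision history has no argument for Level 3, however good its models.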

FINMA's Guidance 08/2024 encodes this principle directly: AI systems may be used autonomously once they are "sufficiently reliable and this can ultimately be proven." The regulatory ratchet is built on evidence, not permission. The banks that start generating evidence now are the ones that will reach Level 3-4 while others are still debating Level 2.

This is exactly how it worked with cars. ABS was mandated after decades of evidence. Lane-keeping assist was permitted after years of data. Conditional autonomy (Level 3) is now available in some jurisdictions, but only because manufacturers accumulated millions of kilometres of evidence. The evidence preceded the permission.

The danger of skipping levels

The counter-example matters. The Boeing 737 MAX automated a critical function — the MCAS system — without adequately training pilots or designing the human-machine interface. The system made autonomous decisions. The humans could not understand or override them effectively. The result was catastrophic.

The banking equivalent: deploying an agent that makes autonomous credit decisions before the institution has proven it can govern them. Before the escalation interface is designed. Before the humans who monitor it understand what it is doing. Before the evidence of reliability exists.

The risk is not that autonomy fails. The risk is that you reach for a level you have not earned. FINMA will not allow it. And they are right not to.

The ladder does not wait

Here is the strategic implication that most banks miss: the autonomy ladder is not optional, and it is not static. The demand-side pressure from customer AI agents will force every bank up the ladder. The only question is whether you climb deliberately or get pushed.

A bank at Level 2 competing against a bank at Level 3 is slower and structurally more expensive. Every application that a Level 3 bank processes autonomously costs a fraction of what a Level 2 bank spends with human review. At 10× the inbound volume, that difference is existential.
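The cost gap is easy to make concrete with back-of-envelope arithmetic. All the numbers below are assumptions chosen for illustration: a per-application human-review cost, a machine cost per autonomous decision, and an escalation rate for the Level 3 agent.

```python
REVIEW_COST = 120.0     # CHF per human-reviewed application (assumed)
MACHINE_COST = 2.0      # CHF per autonomous decision (assumed)
ESCALATION_RATE = 0.15  # share of cases a Level 3 agent escalates (assumed)

def cost(volume: int, level: int) -> float:
    """Total processing cost for a given application volume."""
    if level == 2:
        # Level 2: a human reviews every output before it becomes a decision.
        return volume * REVIEW_COST
    # Level 3: humans review only escalated exceptions; the rest is autonomous.
    escalated = volume * ESCALATION_RATE
    return escalated * REVIEW_COST + volume * MACHINE_COST

base = 10_000
print(cost(base, 2), cost(base, 3))            # 1,200,000 vs 200,000
print(cost(base * 10, 2), cost(base * 10, 3))  # at 10x volume: 12M vs 2M
```

Under these assumptions the Level 3 bank processes the same volume at a sixth of the cost, and the absolute gap scales linearly with volume — which is why a 10× surge in agent-driven inbound demand turns an efficiency difference into a structural one.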

The banks that start climbing now, building the evidence, designing the governance, earning the trust, will reach Level 3-4 while competitors are still debating whether Level 2 is enough.

It never is. ABS was enough in 1985. It is table stakes today. The same will be true for every level of banking autonomy. Where you are matters less than whether you are moving.