Who Is Responsible When AI Makes a Mistake?
The future is exciting: AI is moving from the lab into the real world—into cars, hospitals, enterprises, our personal lives, and more. With this shift comes an increasingly urgent question: who is responsible when an AI system makes a mistake?
Whether it’s a self-driving car causing an accident, a medical AI making a faulty diagnosis, or an algorithm denying someone a loan, the line of accountability isn’t always clear. As AI becomes more autonomous, our traditional frameworks for responsibility are being tested. So who should be held responsible?
1. The Developer: “You Built It, You Own It”
One school of thought argues that responsibility lies with the engineers and data scientists who designed and built the system. If a model was trained on biased data, tested insufficiently, or built without transparency, then its creators are at fault.
This is a familiar analogy—if a manufacturer produces a faulty product, they’re liable. But AI is different. These systems can evolve, learn from new data, and behave in ways even their creators can’t fully predict. Can we reasonably expect developers to foresee every possible failure? And can we hold them responsible for behavior the system learned over time, long after the initial release?
2. The Company: “If You Deploy It, You’re Accountable”
Others say the burden lies with those who choose to use the AI in real-world contexts—hospitals, banks, logistics companies, or social media platforms. After all, it’s their decision to trust an AI with critical tasks.
Many global regulators are leaning this way, placing the burden of responsibility on the deployer. Companies then need to assess how much risk they can manage by deciding which tasks can run autonomously and which require a human in the loop.
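To make that triage concrete, here is a minimal sketch of what a deployer-side "human-in-the-loop" gate could look like: decisions that are low-risk and high-confidence run automatically, while everything else is escalated to a person. The names (`Decision`, `route_decision`, the risk tiers, and the 0.9 threshold) are hypothetical choices for illustration, not any particular framework's API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a deployer-side human-in-the-loop gate.
# Low-confidence or high-risk decisions are escalated to a human
# reviewer instead of being executed automatically.

@dataclass
class Decision:
    action: str          # what the AI proposes to do
    confidence: float    # model's self-reported confidence, 0.0 to 1.0
    risk_tier: str       # "low", "medium", or "high" (assigned by the deployer)

def route_decision(
    decision: Decision,
    execute: Callable[[Decision], None],
    escalate_to_human: Callable[[Decision], None],
    min_confidence: float = 0.9,
) -> str:
    """Run low-risk, high-confidence decisions autonomously;
    send everything else to a human reviewer."""
    if decision.risk_tier == "high" or decision.confidence < min_confidence:
        escalate_to_human(decision)   # a human makes the final call
        return "escalated"
    execute(decision)                 # safe to automate under the deployer's policy
    return "executed"

# Example usage with stand-in handlers
if __name__ == "__main__":
    auto = lambda d: print(f"AUTO: {d.action}")
    review = lambda d: print(f"REVIEW NEEDED: {d.action} (conf={d.confidence})")

    route_decision(Decision("approve small refund", 0.97, "low"), auto, review)
    route_decision(Decision("deny loan application", 0.95, "high"), auto, review)
```

The point is not the specific thresholds but who owns them: in this framing, the deployer, not the model, defines the policy for what is allowed to run unattended.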
3. The User: “You Were Still in the Loop”
There’s also the argument that human users—whether they’re doctors using AI diagnostics or drivers relying on autopilot—are ultimately accountable. The AI may make a recommendation, but the human makes the final call.
But this is often more complicated than it sounds. How much agency does a human really have when AI is designed to be persuasive, authoritative, or even fully autonomous?
4. A Shared Responsibility Framework
Perhaps the most practical approach is a chain-of-responsibility model, where liability is distributed based on roles:
- Developers are responsible for transparency and testing.
- Companies must ensure appropriate use and safeguards.
- Users need training and awareness.
- Regulators define boundaries and enforcement mechanisms.
This approach mirrors what we do in other safety-critical domains, like aviation. When a plane crashes, we don’t look for one scapegoat—we investigate the system.
Looking Ahead: Trust Demands Accountability
As AI becomes more central to how decisions are made, trust will hinge not just on performance, but on clear lines of responsibility. We must design not just for efficiency, but for accountability.
New regulations, third-party audits, and AI insurance policies are emerging. But perhaps the most important change is cultural: businesses and technologists need to stop thinking of AI as a neutral tool and start treating it as an actor within a system of responsibility.
I believe that for widespread adoption of AI, we have to treat responsibility as a property of the whole system rather than look for individual scapegoats when it fails.
What Do You Think?
Who do you believe should be responsible when AI makes a mistake? The developer? The company? The user? Or should we build a new model entirely?
I’d love to hear your thoughts—especially if you’re working on AI agents.