Ethical AI

The question of “Ethical AI” has become increasingly important with the rapid growth of AI capabilities. Fears around accountability and misuse have led to popular discussions, articles, and even congressional hearings. Many of these conversations, however, have focused on current or near-future uses of AI, limiting any ethical framework to the use cases we can imagine now. Planning for the future requires looking ahead to evolving AI capabilities and the questions we’ll face in two, five, or ten years, so that we can implement policies and ethical principles now.

Principles that could guide AI development

When we discuss ethical principles for AI development, we need to broaden our approach. We must establish societal principles around the research, deployment, and use of AI, not just particular projects or use cases. Our engagement with AI is changing rapidly as the technology develops. It’s easy to declare some applications of AI unacceptable (e.g., deepfakes) once they have already been abused. However, use cases we aren’t even considering yet will someday pose deep moral quandaries.
Institutions like universities, grant-funding bodies, and research organizations are well-equipped to take the lead in establishing codes of conduct and peer-review mechanisms, thereby creating a template that industry can adopt.

Unique ethical challenges in specific AI application domains 

We asked Professor Tracy Beth Mitrano, Visiting Professor in the Brooks School and in the Ann S. Bowers College Department of Information Science at Cornell University, about unique AI ethics challenges. Professor Mitrano said:

“For AI overall, three unique factors stand out that require special attention. First, more than any other technology, this one is especially ‘narcissistic.’ It is surely about humans ourselves. Second, its appearance as ‘objective science’ makes it all the more concerning — after all, humanity at least accepts that humanity is imperfect. It would be dangerous to assume we can create a technology that is an ideal for ourselves. Third, some observers suggest that AI is an existential threat to humanity. Joining nuclear technologies in that vein, AI can be distinguished from nuclear, however, because while the destructive potential of nuclear technology is obvious, how and in what ways AI technology is suspected of having or developing the capabilities to manipulate humans is speculated but not certain or specific. Apply these challenges, especially in combination, to any sector, in particular those that touch life, law, order, justice, and fairness closely, and one could readily appreciate why AI poses a potential threat.”

Meeting the Challenge: Fostering transparency and accountability in AI systems

So how can we push for a better framework for ethical AI development? Incentivizing (and potentially regulating) transparency and accountability in AI is important, but there are a few challenges:

  • AI is a global phenomenon. The two major markets where AI innovation is flourishing – the United States and the European Union – take very different approaches to regulation; add APAC and beyond, and the approaches diverge even further. This makes building a global consensus on AI best practices difficult.
  • Within the US, innovation, early adoption, and tech discoveries are often prioritized. Proactive regulation can be difficult without lawsuits or highly publicized incidents driving a shift in public opinion.

Some ways to drive adoption of positive practices:

  • Offer incentives (financial or marketing benefits) to organizations that audit themselves for issues or submit to third-party testing.
  • Educate AI user bases on potential issues so that they push for more transparency. Consider going further: if AI is to permeate all aspects of our lives, should baseline AI education be introduced into high school curricula? An informed citizenry will create stronger engagement, debate, and steering.
  • Invest in systems and products that bake in transparency, security, and accountability. As a VC firm, we prioritize AI investments that have built-in “explainability” at the core of their architectures (a minimal sketch of what this can look like follows below).

We believe this will not only speed customer adoption but also help startups comply with future regulations as they emerge.
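What might built-in “explainability” look like in practice? Here is a minimal Python sketch, offered only as an illustration (it assumes scikit-learn and uses a stock dataset; real products would apply the same idea to far more complex models): every prediction is returned together with the per-feature contributions that produced it, so the “why” ships alongside the “what”.

    # A minimal sketch, not a recommendation: assumes scikit-learn is
    # installed and uses a stock dataset purely for illustration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    data = load_breast_cancer()
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(data.data, data.target)

    # For a linear model, coefficient * standardized feature value is that
    # feature's contribution to the decision score, so every prediction
    # can be reported alongside the reasons behind it.
    coefs = model.named_steps["logisticregression"].coef_[0]
    x = model.named_steps["standardscaler"].transform(data.data[:1])[0]
    contributions = sorted(
        zip(data.feature_names, coefs * x), key=lambda p: abs(p[1]), reverse=True
    )
    print("prediction:", model.predict(data.data[:1])[0])
    for name, contrib in contributions[:5]:
        print(f"  {name}: {contrib:+.3f}")

The design choice worth noting is that the explanation is computed in the same pass as the prediction rather than reverse-engineered afterward, which is one concrete reading of “explainability at the core of the architecture.”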
