Tool or Trouble: Aligning Artificial Intelligence with Human Rights

Artificial Intelligence (AI) is the next technological revolution, a breakthrough that poses both great risks and great rewards for human rights. Our ability to mitigate AI’s risks will depend not on its technical features, but on how and why those features are used. A hammer can build a house or break a skull — the impact of any tool depends on who wields it, why they wield it, and what constraints, if any, have been built around its use.

Having studied leading AI technologies, we are optimistic about their capacity to enhance the core values of human rights — including the right to life, equality, freedom of movement, and a sustainable environment. But realizing that potential requires being clear-eyed about the threats AI technologies pose, and then addressing them.

AI is already embedded in many aspects of our lives: email systems anticipate our words, voice assistants like Alexa and Siri learn our needs, banking systems flag out-of-pattern purchases, and playlists shuffle to match our personal tastes. Some find these features convenient; some find them annoying. Few consider them an immediate threat to our lives and freedoms.

Yet, as AI advances, its presence in the world will come into sharper relief. Consider autonomous vehicles (AVs). Electric-powered, shareable AVs offer enormous advantages — reducing death and human suffering, improving our environment, and democratizing transportation. Federal regulators have found that driver error is responsible for 90% of the approximately 40,000 roadway deaths and 4 million serious roadway injuries that occur each year in the United States. The global toll is almost incomprehensible: more than 1.35 million people die in road crashes every year.

Imagine what AI can offer in the face of all that harm: a system in which cars exhibit ongoing improvements in their ability not only to follow traffic laws, but also to perceive, predict, and plan to avoid collisions. Such a system would also open up transportation, making vehicles available to communities that lack adequate mobility — seniors, people with disabilities, and people in areas that current transit systems and human drivers struggle to reach.

Yet AVs also pose obvious risks to human rights. The AI tools that navigate AVs include cameras and sensors that — if misused — could detect faces and other features that identify individuals in the vicinity of the vehicle. This information, if retained and used for purposes other than navigation, could become a tool for authoritarian control by governments and for consumer manipulation by media or marketers.

Unless carefully managed, AI also poses a risk of embedding human biases and inequities into its self-taught algorithms — and perpetuating them at scale. Another challenge is that the widespread use of AVs would initially displace workers who make their living by driving, and disrupt many other businesses and related labor markets. Without proper engagement by regulators and NGOs, that disruption could intensify economic harm for lower-skilled workers, at least in the near term.

These risks, while significant, are manageable. But industry, regulators, and human rights NGOs must begin taking the measures necessary to address these challenges. 

Because other institutions tend to lag industry in understanding the risks of new technologies and how to mitigate them, new industries should establish voluntary standards. Fortunately, these types of efforts are already beginning. Most companies developing AI products understand that they are involved in a “trust race” as well as a “tech race.” To win the trust of customers, regulators, and investors, AI companies need to address these concerns proactively rather than waiting for regulation. This includes establishing clear ethical principles to govern their use of AI responsibly, community standards for partners and consumers, and even collaborations with competitors on industry-wide standards. Interested third parties, like insurers, will need to develop the AI equivalent of Underwriters Laboratories (which was established by insurance carriers to ensure the safety of consumer appliances), based on agreed-upon criteria to protect against implicit bias, invasion of privacy, and other concerns.

Regulators and governments should also begin working with industry to establish leading standards, just as they did in aviation by crafting FAA standards that set the benchmark for safety protocols worldwide. Both should engage constructively with NGOs now, to ensure that concerns are addressed and that the positive benefits of AI are secured. Eventually, these efforts will mature into domestic laws, international conventions, and treaties as governments determine where voluntary standards are subject to abuse and require legal enforcement.

AI is burgeoning, but the opportunity remains to set the right foundations. Drawing upon lessons from past technology revolutions, we can see the consequences of ignoring human rights — and this time, make sure our technological aspirations align with our human ones.


About the Authors:

Ambassador Jeff Bleich serves as the Chief Legal Officer at Cruise LLC. His legal career has included serving as Special Counsel to President Obama in the White House, Special Master for the U.S. District Courts, court-appointed federal mediator, trial and appellate counsel, adjunct professor of law, and a managing partner of two international law firms. In addition to other legal and business roles, he served as the 24th U.S. Ambassador to Australia from 2009 to 2013. Jeff was a partner for 17 years at Munger, Tolles & Olson LLP, in San Francisco, where he specialized in high-stakes technology litigation. In recognition of his service, he has received some of the nation’s top honors, including the highest awards for a non-career ambassador from the U.S. State Department, the U.S. Navy, and the U.S. Director of National Intelligence. The Jeff Bleich Center on Digital Technology, Security, and Governance was established in his honor in 2019 at Flinders University.

 

Dr. Bradley J. Strawser is a tenured Associate Professor of Philosophy at the US Naval Postgraduate School, formerly a Research Associate at Oxford University’s Ethics, Law, and Armed Conflict Center, and a Resident Research Fellow at the Stockdale Center for Ethical Leadership. Bradley is a founder and CEO of Compass Ethics, an organizational ethics consultancy. He specializes in applied ethics, has written extensively on the ethical use of new and emerging technology, and has experience advising in tech, finance, military, and education sectors. He advises senior leaders across Fortune 500 companies and the Department of Defense on organizational ethics, is the principal ethics instructor for SEAL training courses, and was recently tasked by the CLO of the Navy to coordinate ethics synergy efforts across the Navy.
