Institutional Review Boards for Artificial Intelligence?

Drawing on the history of ethics in medicine, perhaps Institutional Review Boards (IRBs) could help us ensure that AI is ethical and good.

This article expands on Dr. Swamidass’s thoughts in a presentation at The Good AI earlier this year.

Artificial intelligence (AI) is expanding its role in society. Legitimate concerns are arising about how AI might do unexpected and unintended damage. Is AI being used ethically? What sort of world is AI creating for us? Is this new world good?

From everyday tasks such as asking Siri to set an alarm to more serious undertakings such as medical surgery, AI is involved in many areas of society, and that involvement is growing as technology continues to advance. Within medicine, for instance, what started in the 1970s as an AI system that recommended antibiotic treatments has, in the span of a few decades, turned into tools that can predict, analyze, and identify disease. How do we know if this technology is advancing in a direction that serves society in the best and most ethical way?

In the United Kingdom (UK), the South Wales Police deployed automated facial recognition (AFR) to identify wanted criminals. Although seemingly beneficial, the technology drew significant backlash for breaching privacy and perpetuating bias. Innocent children were even pulled aside on the street by police for questioning, as shown in the recent documentary Coded Bias. And although this use of facial recognition has since been ruled unlawful for the South Wales Police (and banned in only a few cities in the United States), the door is still left ajar for the technology's potential return.

This is just one example of the ethical dangers ahead. Governments are moving towards regulating AI now, but the details are far from settled. How do we regulate AI without stifling innovation? 

Drawing on the history of ethics in medicine, perhaps Institutional Review Boards (IRBs) could be part of our approach.

Regulations Loom

There is a tension between regulation and innovation. Too much regulation stifles technological advancement and creativity. Where is the line between advancement and the protection of rights? What exactly is the government's role?

For good reason, several governments are moving to increase regulation of AI. The EU is considering a proposal that would increase government oversight of AI. When AI is used in high-risk domains, the proposal would require it to comply with several rules designed to limit harm and ensure transparency. This is only the first of many steps that will have a large impact on the field.

In the same way, the United States is signaling that it intends to develop regulations for AI. In 2019, then-President Donald Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence. This order outlines the principles that guide the American AI Initiative, intended to ensure the United States remains competitive in the field of AI:

“The importance of developing and deploying AI requires a regulatory approach that fosters innovation, growth, and engenders trust, while protecting core American values, through both regulatory and nonregulatory actions and reducing unnecessary barriers to the development and deployment of AI.”

The order mentions “protecting core American values,” but the details of how the public’s civil liberties and rights will be safeguarded are not yet worked out.

The Memorandum for the Heads of Executive Departments and Agencies echoed the executive order. It advocated ten principles to guide the implementation of AI technology: public trust, public participation, scientific integrity, risk assessment, benefit-cost analysis, flexibility, non-discrimination, transparency, safety, and coordination between agencies.

This memorandum also emphasized continued advancement in the field of AI and reiterated the cautions about over-regulation:

“To that end, Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth … Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”

So both the US and the EU are moving to establish rules for AI. Movement towards a more regulated approach is slow, because no one wants to stifle innovation. We are struggling through a difficult question: how can we best ensure that AI really is “good” AI?

Medical Research as a Guide

On our quest towards “good” AI, the history of ethics in medicine might offer some helpful guidance. Medical research was not always regulated; at times it proceeded with little to no oversight or accountability. Then came the Tuskegee Syphilis Study.

From 1932 to 1972, six hundred African American men, four hundred of whom had syphilis, were enrolled in the Tuskegee Syphilis Study. For forty years, researchers engaged in grave ethical malpractice. The men with syphilis were not told of their disease. Though a simple cure was available, it was withheld from them. Some of their spouses were infected and became infertile. Several babies were born with preventable birth defects.

When the full extent of this ethical disaster was made public, the government acted. In 1974, Congress passed the National Research Act, which required institutions to form Institutional Review Boards (IRBs) to manage, review, approve, or deny research done with human participants.

These review boards consist of members of the community and operate independently of the government and research institutions. IRBs have the power to stop any project they deem unethical. They were the first large institutional change to promote and require ethical research.

It took time to work out the ethical principles that guide IRBs. In 1979, a national commission published the Belmont Report, which established three basic ethical principles: (1) respect for persons, (2) beneficence, and (3) justice. For all its strengths and weaknesses, the ethics discussion of the 1970s was extremely influential in shaping medical research for decades to come.

We are in a similarly important moment for AI right now. The ethical conversations we are having now will shape how AI is brought to society for decades to come. 

A Role for IRBs in AI?

Perhaps IRBs could be part of our approach to managing AI.

The government requires that every medical study be reviewed and approved by a locally run IRB. Over the last several decades, IRBs have been an effective way of protecting the public without stifling innovation.

Just as in medicine, ethical review of AI should be required by the government but run and managed at the local level. Any company bringing a new AI product into practice would first submit it to an IRB for approval. As in the medical field, the board would consist of members of the community as well as experts in AI. These IRBs would operate separately from the government and have the power to approve proposals, suggest modifications, or deny them outright.

IRBs, of course, are not the whole solution. They are a proven mechanism of ethical oversight, but they say nothing about the exact ethical standards that should be enforced.

What ethical standards and guidelines should be adopted? The EU is piloting guidelines based on seven principles. In the US, the conversation is just beginning. We may need something like the Belmont Report to investigate this question more deeply and articulate a way forward.

The discussion about the ethics of AI is growing, as it should. Governments are moving towards defining how AI will be regulated. The decisions made in the coming years will shape our world for decades to come.
