Evangelicals Take on Artificial Intelligence

Appeared in the May 10, 2019 print edition of the Wall Street Journal. The title was chosen by the editor.

Science fiction often depicts artificial intelligence as technical minds embodied in humanlike bodies. Think Commander Data of “Star Trek: The Next Generation.” In reality, AI is mindless and usually disembodied. Yet it’s still important, and scientists shouldn’t be the only ones with a say in its future.

The Ethics and Religious Liberty Commission of the Southern Baptist Convention recently laid a marker down with “Artificial Intelligence: An Evangelical Statement of Principles.” The document addresses topics from sex and medicine to accountability and the image of God. The common theme: What does it mean to be human?

It’s encouraging to see religious leaders consider the implications of new technology. Yet as an artificial-intelligence scientist and evangelical Christian, I found the document disappointing.

Its format resembles the Lausanne Covenant of 1974, one of the most important statements in modern evangelicalism. Drafted in an open process, the Lausanne Covenant was adopted by 2,300 delegates from 150 countries. Yet the ERLC document represents only a narrow slice of the global evangelical experience. Most of the signatories are relatively conservative American Protestants, and their seemingly non-sequitur affirmations of just war and traditional marriage reflect that.

The statement only superficially engages the reality of artificial intelligence. It often reads as if the community of AI scientists and ethicists weren’t even consulted. Most signatories are pastors and theologians, and almost none have expertise in artificial intelligence. Recall Proverbs 4:7: “Get understanding before anything else.”

I can help. For more than 15 years, I’ve used artificial intelligence to understand problems that overlap biology, chemistry and medicine. Drawing on a set of generally applicable principles, my colleagues and I use AI to advance scientific knowledge in surprising ways.

At its core, artificial intelligence is little more than a numerical dance, an intricate series of mathematical operations. “Machine learning”—adaptable programs that identify patterns from data—is the type of AI growing in prominence now. My team uses machine learning to understand how and why drugs become toxic, for example, and to determine which kidneys can safely be transplanted.
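
To make “identify patterns from data” concrete, here is a minimal sketch of the idea in Python. The synthetic dataset and simple classifier below are illustrative assumptions, not the author’s research code; they only show a model adjusting its parameters to labeled examples and then scoring unseen ones.

```python
# A toy illustration of machine learning as pattern-finding:
# fit a model to labeled examples, then predict on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic measurements standing in for real features (e.g., molecular descriptors).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Learning" is an intricate series of mathematical operations: the model
# adjusts its weights so its numerical output matches the observed labels.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The learned pattern is then applied to examples it has never seen.
print("Held-out accuracy:", model.score(X_test, y_test))
```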

AI should be guided by a clear ethical framework, but imprecise ethical cautions could cost lives. The document declares “informed consent” to be “requisite.” Yet informed consent can be ethically waived in many cases that are important for academic research. Insisting on informed consent, without acknowledging waivers, has the effect of shackling life-saving scientific work. This omission erects a poorly considered religious barrier to the type of medical research my group does.

The document also states that “moral decision-making” is the exclusive responsibility of humans. Yet artificial intelligence can maneuver a Tesla. In an accident, the car may need to make moral decisions. Should the vehicle swerve dangerously, risking the safety of its passengers, to avoid a pedestrian? The document seems to oppose delegating moral decisions like this to artificial intelligence—but it’s not clear. This risks unnecessary prohibitions of life-saving technology.

Speaking of artificial intelligence in the future, the document notes that “God alone has the power to create life.” This phrase appears in traditional theology as an affirmation of God’s providence and authority. Of course it doesn’t prohibit creating life through reproduction. Nor does it proscribe scientific work like creating new viruses in a lab. Citing “God alone,” nonetheless, the document seems to declare artificial minds either impossible or immoral. Why not encourage scientific inquiry?

A “person” like Commander Data has yet to emerge from the electronics of a computer. At the same time, the human mind is somehow entwined in the electronic fluctuations of neurons in our brains. Science always calls for humility, and it’s true that AI already has shown surprising linguistic, artistic and social abilities. Yet none of these feats even remotely approach demonstration of a humanlike mind.

Grand questions loom. Can a computer house a mind? How would scientists and engineers construct a computational mind? How would they know if they succeeded? We can’t know for sure, but these questions welcome all of us, including theologians. Rather than offering a far-reaching statement of religious convictions, it would be better to start with a list of questions.

Notable Replies

  1. The entire statement of the Ethics & Religious Liberty Commission (including the list of signatories) appears at:

    As I read the ERLC statement, I couldn’t help but imagine the substitution of other tools wherever the acronym AI appeared. For example, I considered the following imaginary, modified statements:

    We affirm that the development of electricity generators and electric motors is a demonstration of the unique creative abilities of human beings.

    When metallurgy is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him.

    We deny that the use of the internal combustion engine is morally neutral. It is not worthy of man’s hope, worship, or love.

    By the way, other than Star Trek episodes, has the worship of artificially intelligent computer systems been a big problem? Is it likely to be? Just wondering.

    Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as the electronic computer cannot fulfill humanity’s ultimate needs.

    Yet again I should emphasize that these are my own imaginary modifications of the actual ERLC statements. Has the ERLC published major position statements on other kinds of technology?

    I can certainly understand why Article 6 (“Sexuality”) would be timely and topical for the Southern Baptist Convention. But many of the other sections struck me as more alarmist than helpful. Does AI truly pose more serious dangers than other technologies which the ERLC has never addressed? (The first half of the twentieth century managed to be extremely dangerous and tragic without AI.) I know a lot of evangelicals who are already excessively alarmed and even terribly worried about GMOs (Genetically Modified Organisms) in food production. Do we need to see more evangelicals, and the electorate in general, excessively alarmed at the future of AI? Will that be helpful?

    Yes, I don’t want to unfairly mix apples and oranges, but I will at least admit that as I read the ERLC statement I thought of centuries past when various Christians preached and published alarming position statements on scientific breakthroughs like lightning rods (protecting sinners from God’s righteous wrath?), anesthesia (pain builds moral character, and childbirth is meant to be painful), and flight (“God meant only birds and insects to fly”). So I sometimes wonder if non-Christians react with similar thoughts when they hear about evangelicals alarmed about AI.

    This final sentence taken from the ERLC statement struck me as particularly odd:

    While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

    If the signatories aren’t fearful, does the statement nevertheless sound overly fearful? I’m not sure. And does any evangelical Christian who reads the statement seriously worry that AI will thwart God’s redemptive plan or “supplant humanity as His image-bearers”? (Obviously, non-Christians are unlikely to care, so I’m focusing on Christians here.)

    DISCLAIMER: I happen to attend a Southern Baptist Convention church and I do know many of the people who signed the AI statement—but I haven’t really followed the history of the ERLC and their previously published position statements. I am curious what factors led to this particular statement on Artificial Intelligence.


    Some of my concerns are addressed in this reply to @swamidass’ article about the ERLC position statement:

    A Critic of the Evangelical Statement on AI Misunderstands the Issues | Mind Matters

    Even so, I can’t help but consider that philosophers, lawyers, and insurance companies have been discussing these kinds of ethical/legal responsibility issues for centuries. (And who hasn’t heard campaign slogans like “Guns don’t kill people. People kill people.”?) Mind Matters author Jonathan Bartlett seems to think that Tesla has not thought through the implications of Level 5 self-driving cars (e.g., the owner is asleep in the back seat when an accident occurs), but his alarm really amounts to little more than the fact that liability law and insurance policies will have to be adjusted. Does anyone doubt that they will be?

    Yes, I can imagine AI being applied in unfair, de-humanizing, and even very dangerous ways. Even so, apart from the implications of Article 6 on human sexuality, I’m not sure that evangelicals will find the ERLC position statement all that helpful. Perhaps I’m wrong and short-sighted.

Continue the discussion at discourse.peacefulscience.org
