Robotics and What It Means to Be Human

Computer scientist Jaron Lanier has a great op-ed in today’s NYTimes entitled “The First Church of Robotics.” In it, he makes an incredibly profound statement for contemporary ethics:

by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people.

Machines in general, and robotic machines in particular, are simply tools created by people and used by people. Machines do not have intelligence, and any evidence that they do (a robot which teaches foreign languages to children, for example) is simply a “form of high-tech puppetry.”

The children are the ones making the interaction work: they have conversations with these machines, but they are essentially teaching themselves. This simply shows that humans are social creatures; if a machine is presented to them in a social way, people will adapt to it.

One problem for ethics, at least according to the author, is that anthropomorphizing machines not only devalues human thought but also provides an excuse for avoiding human accountability.

What all this comes down to is that the very idea of artificial intelligence gives us the cover to avoid accountability by pretending that machines can take on more and more human responsibility. This holds for things that we don’t even think of as artificial intelligence, like the recommendations made by Netflix and Pandora. Seeing movies and listening to music suggested to us by algorithms is relatively harmless, I suppose. But I hope that once in a while the users of those services resist the recommendations; our exposure to art shouldn’t be hemmed in by an algorithm that we merely want to believe predicts our tastes accurately. These algorithms do not represent emotion or meaning, only statistics and correlations. . .

. . . the rest of us, lulled by the concept of ever-more intelligent A.I.’s, are expected to trust algorithms to assess our aesthetic choices, the progress of a student, the credit risk of a homeowner or an institution. In doing so, we only end up misreading the capability of our machines and distorting our own capabilities as human beings. We must instead take responsibility for every task undertaken by a machine and double check every conclusion offered by an algorithm, just as we always look both ways when crossing an intersection, even though the light has turned green.

A student who turns in a paper littered with typos, using "there" instead of "their" or "they're" and "were" instead of "where," cannot blame the failure on her computer's spell-checker. No matter how sensitive our tools become, human beings will ultimately be responsible for the product they generate.

Lanier’s point is, however, more foundational than merely insisting on human accountability for our machines. Rather, he is claiming that the way we think of machines influences the way we think of humans. The more grandiose our depictions of human machinery become, the more we devalue human thought. Rightly, he points to a metaphysical explanation for why this is so:

It should go without saying that we can’t count on the appearance of a soul-detecting sensor that will verify that a person’s consciousness has been virtualized and immortalized. There is certainly no such sensor with us today to confirm metaphysical ideas about people, or even to recognize the contents of the human brain. All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture. . . .

If technologists are creating their own ultramodern religion, and it is one in which people are told to wait politely as their very souls are made obsolete, we might expect further and worsening tensions. But if technology were presented without metaphysical baggage, is it possible that modernity would not make people as uncomfortable?

A human being is not a mere collection of cells, molecules, and atoms which can be deciphered, imitated, and recreated. According to Aquinas, a human being is a hylomorphic unity of body and soul, an irreducible unity of matter and spirit, partly subject to scrutiny and partly a mystery. Human thought reflects this combination of evidence and mystery. There are many things we know, like which part of the brain "fires" in reaction to a foul-smelling stimulus or an angry face, but for everything we know, there is infinitely more that remains a mystery: Why does Person A react so strongly to bad smells while Person B is relatively unaffected? Why does anger foster artistic creativity in Person X, while in Person Y it leads to pathological behavior?

Certain superficial elements of human thought might be imitated in the work of a machine, such as the ability to recognize patterns or synthesize concepts, but the really interesting, and ultimately mysterious, work is on the human side, in the response to the machines. A human, for example, may be able to respond compassionately and affectionately to a machine that looks and acts like a baby. Such a machine is surely a technological feat, but what is more fascinating is what the humans are doing: responding emotionally. Desire, sorrow, love, fear, and joy simply cannot be imitated. Passions such as these, according to Aquinas, are not simply corporeal responses to an external stimulus; they also involve the spirit, the immaterial essence of the person created in the image and likeness of God. The author's point in the op-ed fundamentally agrees with Thomas: by reducing thought to a mere series of calculations, we arrive at only a reductionist understanding of what thought really is: a mystery.


2 comments so far

  1. Mister Reiner on

What we define as thinking-machine components today won't necessarily be made of plastic, silicon, and other electronics in the future. Once thinking machines go biochemical and scientists can emulate the learning and reasoning functions of the brain, all that talk about machines not being intelligent, about puppetry, goes out the window.

    The future of technology is uncertain. What people thought was science fiction many years ago is now science fact. Don’t limit your thinking to what is possible today, because it won’t mean anything 50-100 years from now.


  2. everydaythomist on

    Mister Reiner,
    You raise great points, and indeed it is exciting and frightening to speculate on what advances will be made in our lifetime in robotics and in technology generally. Still, my point is that technology, insofar as it is mechanical (i.e., material), will always be puppetry, because the mind, the seat of intelligence and thought, at least according to substance metaphysicians like Aquinas and myself, is composed not only of matter but also of spirit. We can imitate the former, but the latter remains beyond our grasp.

    Now, many materialists deny this and say that all thought, creative energy, and rational output has an organic, biochemical, material origin. But even a materialist can admit that in the mind there is something greater than the sum of its parts, and I am not sure we will ever be able to copy what that is.
