Tuesday, April 04, 2017

Your Car May Decide to Kill You. Or Not. It Depends.


I spent some time writing software and running the development side of StudyWare, a small software company based in San Diego, CA. And after our company grew large enough that we could afford to hire programmers and analysts who actually knew what they were doing, I spent several years managing those who wrote both the software and the content to be used with that software. (I can't tell you how nice it was when we got to the point where we could afford to hire real programmers. I truly enjoyed programming, and I think I did some clever stuff; but compared to the talented, experienced developers we hired, my efforts were laughably inelegant, unsophisticated, and clumsy. But hey, at least I was also slow.)

[Image: An early StudyWare software package. The packaging and the software eventually became much more sophisticated.]

At any rate, the point is that I have been in the trenches, and I've worked with others who've been in the trenches even longer than I. So, I have indeed ridden the software dev dragon and I have tamed (or occasionally been tamed by) the beast.

In other words, when it comes to building and delivering software, I speak from experience. Thus, I can say with some confidence that software behavior is largely about decision-making: Your code does a particular thing until something happens, at which point it does something else. It's a very strict, Boolean environment; the code always behaves according to some very exacting logic. (Not always the logic you had intended, mind you, but that's a subject for a different post.) Essentially, a huge part of the functionality of software hinges on decisions made about whether something is true or false. If X has happened (that is, if it's true), then do Y.  For example, if the system's internal clock has counted out X number of seconds or minutes, then Y should now occur. (In this case, perhaps Y is that a bell should chime to let you know that it's time to go turn off the stove, call your mother, or move your laundry into the dryer.) Or, if the user has entered a particular word into a textbox, find and highlight all occurrences of that word in a document. That sort of thing.
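
To make that concrete, here is a toy sketch of the two examples above, written in Python. It is entirely my own illustration (nothing resembling actual StudyWare code): a timer that "chimes" once enough seconds have passed, and a routine that finds every occurrence of a word the user typed.

```python
# A toy illustration of the Boolean, "if X is true, then do Y" decision-making
# described above. This is an invented sketch, not code from any real product.
import time

def kitchen_timer(seconds_to_wait):
    """If enough seconds have elapsed (true), then do Y: ring the bell."""
    start = time.time()
    while True:
        if time.time() - start >= seconds_to_wait:   # the true/false decision
            print("Ding! Time to turn off the stove.")
            break
        time.sleep(0.1)

def find_occurrences(document, word):
    """If the user entered a word (true), find every place it occurs."""
    if not word:                                     # nothing entered: do nothing
        return []
    return [i for i in range(len(document)) if document.startswith(word, i)]

# Example: locate both occurrences of "laundry" so they can be highlighted.
print(find_occurrences("laundry day: move the laundry to the dryer", "laundry"))
# [0, 22]
```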

It's a very pragmatic and ruthlessly logical approach. There's not a lot of room for . . . well, heart. Software doesn't feel.

And yet, programmers do have hearts. They do feel. They do have consciences. (I know a programmer who once worked for a defense contractor that built missiles. After several years of doing that, he was looking for a graceful way out for a number of reasons. One of those reasons had to do with the products he was designing. He said, "If I do my job well, somebody dies. If I do my job poorly, somebody else dies.") So, while software may be said to have no heart, we can definitely see examples of software that has to have, for lack of a better term, a conscience of sorts. Or more accurately, it can sometimes come to represent the programmer's or designer's conscience.

One increasingly obvious example of this has to do with the design of autonomous cars. You wouldn't think that conscience or morality would enter into something so utilitarian, but it turns out that programmers working on such devices are having to make decisions that are essentially moral. They involve not math but ethics. (Or more accurately—and much more interestingly—a combination of math and ethics.)

[Image: The S60, an experimental autonomous car from Volvo. The S60 is classed as a Level 3 autonomous vehicle: the driver must be prepared to take control if/when necessary. (Image used under the Creative Commons Attribution-Share Alike 4.0 International license.)]

Part of the designer's job is to anticipate certain scenarios, and to program the automobile (in this case, it's truly an automobile) to respond appropriately to them. Thus, the car watches for pedestrians who may step in front of the vehicle, vehicles that may run a red light and enter an intersection unexpectedly, traffic signals that are about to change, etc. It's actually very impressive that these systems can almost flawlessly respond to changes in the environment and that they usually render a decision that keeps drivers, passengers, and nearby pedestrians safe. (Of course, usually is not the same as always, so we have seen accidents, some of them fatal. This is dangerous stuff, after all, and we are on the bleeding edge of tech here.)
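
In code, that "watch for X, respond with Y" behavior comes down to the same kind of decision logic described earlier. The skeleton below is entirely made up (no real autonomous-driving system exposes classes or functions like these); it is only meant to show the shape of the thing.

```python
# An invented, drastically simplified sketch of a watch-and-respond loop.
# None of these names come from a real autonomous-vehicle codebase.
from dataclasses import dataclass

@dataclass
class Perception:
    pedestrian_ahead: bool
    cross_traffic_running_light: bool
    signal_about_to_change: bool

def respond(p: Perception) -> str:
    """Map what the sensors report to an action, in priority order."""
    if p.pedestrian_ahead:
        return "brake hard"
    if p.cross_traffic_running_light:
        return "brake and yield"
    if p.signal_about_to_change:
        return "ease off the accelerator"
    return "continue"

print(respond(Perception(False, True, False)))   # "brake and yield"
```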

But imagine a scenario such as this: Bob is in an autonomous vehicle that's proceeding along a one-way, one-lane street, when suddenly a pickup truck enters from a side street on his right. Bob (well, in this scenario, Bob's car) has three options: the car can veer left, veer right, or plow straight ahead. (We'll assume for now that things are happening too quickly for braking to be effective.)

Nothing good can come from any of these options. Perhaps Bob veers left, up onto the sidewalk, where an older couple is slowly making their way over to a nearby vehicle. One possible result? Two dead elderly citizens. The car could veer right, but what if a group of schoolchildren is on that sidewalk, led by a teacher at the front of the line and followed by an adult aide at the end? Possible result? Dead or injured children, along with possible harm to the two adults. If the car continues straight ahead, it will T-bone the truck, and the impact will almost certainly harm or even kill the driver of the truck and his passenger; the crash might also harm or kill Bob himself.

You’re probably thinking that this is far-fetched, simplistic, and unrealistic. But it (or something like it) can occur; I would bet that this sort of thing happens at least weekly in every major city. (In 2016, there were 55,350 traffic accidents in Los Angeles, and 260 people were killed in those accidents. About 229 people died in New York City accidents that year.) Of course, when a person is driving the car, that person is responsible for the split-second decision he or she is about to make, and in a scenario like Bob's, someone is going to get hurt no matter what. There often isn't time for the driver to consciously weigh that decision; he or she simply reacts and hopes for the best.

But the programmers and designers and analysts who build autonomous vehicles have to consider such scenarios; they do have time to think, and they have to program into the system what they feel is an appropriate response. They must tell the vehicle, "When faced with this scenario, do this." Those programmers just made a life-or-death decision. They had no choice. They have to tell the car to do something, after all. (Keep in mind that opting not to do anything is also a decision.) They have to encode that behavior into the system, the "brain" of the car, so that it responds in a certain fashion to certain inputs.
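
Strictly as an illustration (and emphatically not a description of how any real vehicle is programmed), here is what writing down such a rule might look like. Every name and every number below is invented; the only point is that some policy, chosen in advance by people, ends up in the code.

```python
# A hypothetical sketch of "when faced with this scenario, do this."
# The maneuvers and harm scores are invented; deciding how such scores
# get computed is precisely the moral decision the designers must make.

def choose_maneuver(options):
    """Pick the maneuver whose predicted harm score is lowest."""
    return min(options, key=options.get)

# Bob's scenario: every option harms someone. The numbers encode whose
# harm the designers decided counts for how much.
scenario = {
    "veer_left": 8,    # the elderly couple on the left sidewalk
    "veer_right": 10,  # the schoolchildren on the right sidewalk
    "straight": 7,     # the truck's occupants, and possibly Bob himself
}

print(choose_maneuver(scenario))   # "straight" -- because of numbers someone chose
```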

So, what should they decide? Assuming that the technology has advanced to the point that the car can tell what it's about to hit (and I think that is or soon will be the case), does Bob's autonomous vehicle veer left or right? Does it put Bob at risk, or some schoolkids? Or do we aim the car at the elderly couple? Are the schoolkids' lives worth more than the lives of the two older people? Or does the car determine that Bob must sacrifice himself?

It's interesting to talk about this kind of decision-making, of course, and I have had some enjoyable discussions (and even arguments) with students about this sort of thing. (And similar logic/ethic puzzles have been around since long before the advent of autonomous vehicles.) But for the purpose of this discussion, which decision the programmers should make isn't even the main point; the important thing is that we've reached a point at which such decisions have to be (and are being) made.

Technology and morality or ethics have always been connected, of course. After all, technology is used (and misused) by people, and people are moral animals. (Or, depending on your perspective, perhaps you feel they are amoral or even immoral animals.) So how we decide to use a technology, and for what purpose, may have always been a decision that has had an ethical component. (After all, I can use a handgun to protect my family, or I can use it to rob a bank or mug that elderly couple we were discussing a moment ago. Even a lowly hammer can be used to build a home or repair a fence, harm a person or destroy the display window of a downtown shop.)

So, having to consider an ethical component in a technology is certainly nothing new. But having to program an ethical component, having to make those sorts of decisions ahead of time and at a remove, is something that many of us have not considered until now. We (or the car's designers, at least) find ourselves in an uncomfortable position: how do we decide which lives are more valuable than other lives?

That's not a decision I would want to be forced to make.

6 comments:

  1. https://en.wikipedia.org/wiki/Trolley_problem

  2. Exactly! That's the link I used above in the "...similar logic/ethic puzzles have been around..." sentence. But it used to be so . . . theoretical! :)

  3. Very interesting, Rod. Would make a fascinating discussion. Technology and ethics/morality.

    Replies
    1. That would be a fun talk, yes? Or even an interesting book.

    2. In a sense, this sort of ethics discussion takes place continually in medical circles. And many hospitals have Ethics Committees (I served on one many years ago as a clergy person) charged with these decisions. What is happening, it seems, is that medical advancements are arriving far faster than the corresponding ethical decisions can be made.

    3. That does seem to be the way it works! The technology always outpaces the legal and ethical considerations. And I would imagine that the gap is going to continue to widen.
