Tuesday, April 18, 2017

Are We Hyper-Technologized?

I love technology. After all, it's how I make a living, and I truly enjoy taking advantage of cleverly engineered, well-built tools. (And for me, "tool" could mean anything from a smartphone to a well-made crescent wrench.) I'm old enough that I still feel a bit of a thrill when I power up a computer or realize that, with modern tech, I can do things that in previous years would have been difficult, expensive, or simply impossible. Technology continues to amaze and enthrall me: drones, GPS, digital assistants, desktop publishing…. For me, all of these things bring to mind Arthur C. Clarke's famous dictum (now a bit over-used, I suppose): "Any sufficiently advanced technology is indistinguishable from magic." Much of this stuff still seems magical to me, even though I know something about how it's done.

[Image: This is an Oregon Scientific weather station much like the one that Lesley and I have in our home and which only one of us has learned to use.]

For example, Lesley and I have the world's most awesome weather station. Among other things, it includes an electronic rain gauge that sends a very precise rainfall measurement to a central display unit that's kept in the house. Any time we want, we can simply look at the display and know that we have received exactly 2.736" of rain over the past 24 hours. (Technically speaking, we cannot do this. Lesley can do this. I have not mastered the rigorous calculus that's apparently necessary to tell the display that we want to see the rainfall totals. So I just randomly push buttons until something happens. Sometimes Lesley comes and rescues me, but most often, I end up with a display of temperature or wind direction, or possibly a readout of my next-door neighbor's teenage son's digital music collection or my pickup truck's current gas mileage. Both of the last two are kind of depressing.)

The thing is that this rain gauge doesn't even really measure rainfall—not directly, anyway. It uses what's known as a tipping-bucket design: a funnel directs rainwater into a small catchment attached to an arm, and once the catchment has collected a set amount of rain, determined by weight, the arm swings down and dumps the rainwater out the bottom of the device. And every time that happens, a counter is incremented. Since we know how much water weighs (if you're curious, it's about 8.3 lbs. per gallon, though rainfall in Los Angeles—being full of various poisonous particulates—tends to weigh more) and how much the catchment holds, the machine can be calibrated to convert the number of tips into an accurate measurement of rainfall.
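
Stripped of the electronics, the arithmetic behind that display is simple enough to fit in a few lines. Here's a minimal Python sketch of the count-to-rainfall conversion; the calibration figure and the function names are my own inventions for illustration, not Oregon Scientific's:

```python
# A sketch of the tipping-bucket arithmetic described above. The
# calibration value is hypothetical; a real gauge is calibrated at the
# factory to a specific amount of rain per tip.

INCHES_PER_TIP = 0.04  # hypothetical: each tip of the bucket = 0.04" of rain

tip_count = 0

def on_bucket_tip():
    """Called each time the catchment fills, swings down, and empties."""
    global tip_count
    tip_count += 1

def rainfall_inches():
    """Convert the running tip count into a rainfall total."""
    return tip_count * INCHES_PER_TIP

# After 68 tips, the display would read 68 * 0.04 = 2.72" of rain.
```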

That's pretty damned clever, isn't it? There's a lot of math and machining and electronics and manufacturing know-how in that little rain gauge.

Mind you, we also have an old-fashioned, clear plastic cylindrical rain gauge in a holder attached to the side of our deck about two feet away from the digital rain gauge. It was manufactured right here in Lincoln, Nebraska, by Garner Industries, and it probably cost about $7 a few years back. If Lesley's not around to rescue me, and if I get tired of randomly pushing buttons on our fancy weather station display, I can always just glance over at the plastic "analog" rain gauge and see how much rain we got. And then . . . well, actually, that's it. I'm done. If I need to start a new count, I can "reset" the gauge by picking it up and turning it over so that the rainwater dumps out onto our . . . um, whatever those plants are off to the side of the deck. (Plants and weeds all look alike to me, which is how I've gotten out of weeding for the past several years.)

[Image: Or one could use a simple, inexpensive plastic rain gauge, such as this one.]

So, although I think it's kinda fun, it would be a stretch to say that we actually need a digital weather station. I like it, but I can't say there aren't other (often easier, almost always less expensive) ways to get the same information.

Sometimes we end up with technology that was created simply because it could be created or because someone thought it would be cool or because we're determined to improve on the old way of solving a problem. Much of the time, we don't really need it, and there may even be times when it's more trouble than it's worth.

Take bomb detection. Surely this seems like a worthwhile endeavor and something worth spending money on. So the U.S. armed services (and researchers in their employ) have spent millions on various types of metal detectors, special cameras, and chemical sniffers. The result has been a detection success rate of about 50% in Afghanistan and Iraq.

Of course, being able to locate half of the IEDs (improvised explosive devices) scattered along a roadside or in a field is nothing to sneeze at. But you know what's proven to be much, much more effective? A dog. When dogs are used to patrol, that 50% jumps to 80% or more. And the thing is that DARPA (the Defense Advanced Research Projects Agency) has been trying to come up with something that's better than a dog since 1997. Can't do it. Apparently, there is nothing better than a dog. A well-trained dog is very, very good at detecting bombs. (Or hard drives, or dope, or people, or just about anything else you care to train a dog to detect.) There is simply nothing trainable on the planet that's better at literally sniffing things out. (Which makes sense. Consider that the typical human has about 5 million olfactory receptors in his or her nose, while a dog has more than 220 million such sensors. To be a dog is to inhabit a world much richer, more fragrant, and probably much more interesting than the drab one in which you and I live. Also, they get tummy rubs.)

Training and provisioning a dog costs money, of course. Some sources say that a trained bomb-sniffing dog can cost between $5,000 and $25,000 or more. (That's a rather large variance, of course. Perhaps a bomb-sniffing Bichon, being more . . . uh, portable, is worth more than a bomb-sniffing Doberman?) But even at the high end, that's much less than the cost of most hi-tech bomb-detection tools, and the dog is easy to operate and also serves other functions. And in the end, the dog simply works better than the hi-tech tools.

[Image: Bichons are SO cute that they look a bit like they escaped from a comic strip. (Photo licensed under the Creative Commons Attribution 3.0 Unported license by user Rocktendo.)]

Altogether, the Pentagon has, since 2004, spent about $19 billion on bomb-detecting gadgets and other hi-tech mechanisms meant to deal with insurgent networks and the IEDs they plant. (Even if a trained dog cost $20K, that means that our $19 billion would buy about 950,000 dogs. That's a lot of dogs. I'm pretty sure that if you simply let 950,000 trained dogs loose in Afghanistan, the war would be over in days. Although I'm not sure who would have to clean up the place afterward.) One of these hi-tech gadgets is VaDER (am I the only one who reads a certain evil malevolence into that acronym?), which DARPA would like us to believe stands for Vehicle and Dismount Exploitation Radar, but which is obviously just an excuse to come up with a Star Wars-themed anti-insurgency device. VaDER is a $138 million aircraft-mounted sensor that tracks moving vehicles and people on the ground below. We don't really know how well VaDER works, because a spokesperson said only that it and related tools were "enormously useful." So, that's good; wouldn't want to spend that kind of money on something that was only "mildly useful" or "somewhat useful."

I like clever stuff, but we seem to have a facility for over-engineering solutions, is what I'm saying here. Do we really need a toilet seat that automatically closes when the user (a man, one assumes) walks away? Can't the guy just put the seat down? Or couldn't the next person to use the toilet simply put the lid down? How hard is it, really? Or maybe you need a connected weight-loss fork that vibrates when you've eaten too much! Or possibly some air-conditioned shoes? (These look suspiciously like . . . well, shoes with holes in them. Say, I guess I already have some air-conditioned shoes down in the basement! I would be willing to sell those to you for, oh, $30 each. That's $48 off!) How about a mug that lights up to indicate the temperature of its contents? So you can tell if your tea is too hot, I guess. Just take a sip, dammit! If it burns, it's too hot; go take a walk in your air-conditioned shoes for a few minutes while your tea cools off a bit.

Tuesday, April 04, 2017

Your Car May Decide to Kill You. Or Not. It Depends.

I spent some time writing software and running the development side of StudyWare, a small software company based in San Diego, CA. And after our company grew large enough that we could afford to hire programmers and analysts who actually knew what they were doing, I spent several years managing those who wrote both the software and the content to be used with that software. (I can't tell you how nice it was when we got to the point where we could afford to hire real programmers. I truly enjoyed programming, and I think I did some clever stuff; but compared to the talented, experienced developers we hired, my efforts were laughably inelegant, unsophisticated, and clumsy. But hey, at least I was also slow.)

[Image: An early StudyWare software package. The packaging and the software eventually became much more sophisticated.]

At any rate, the point is that I have been in the trenches, and I've worked with others who've been in the trenches even longer than I. So, I have indeed ridden the software dev dragon and I have tamed (or occasionally been tamed by) the beast.

In other words, when it comes to building and delivering software, I speak from experience. Thus, I can say with some confidence that software behavior is largely about decision-making: Your code does a particular thing until something happens, at which point it does something else. It's a very strict, Boolean environment; the code always behaves according to some very exacting logic. (Not always the logic you had intended, mind you, but that's a subject for a different post.) Essentially, a huge part of the functionality of software hinges on decisions made about whether something is true or false. If X has happened (that is, if it's true), then do Y.  For example, if the system's internal clock has counted out X number of seconds or minutes, then Y should now occur. (In this case, perhaps Y is that a bell should chime to let you know that it's time to go turn off the stove, call your mother, or move your laundry into the dryer.) Or, if the user has entered a particular word into a textbox, find and highlight all occurrences of that word in a document. That sort of thing.
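
If it helps to see that pattern concretely, here's a toy Python sketch of the if-X-then-Y logic; the chime message and the sample sentence are invented for illustration:

```python
# Two toy examples of "if X has happened, then do Y."
import time

def egg_timer(seconds):
    """If the clock has counted out the requested seconds, then chime."""
    time.sleep(seconds)
    print("Ding! Time to turn off the stove.")

def highlight(document, word):
    """If the user entered a word, then mark every occurrence of it."""
    return document.replace(word, f"[{word}]")

# egg_timer(180) would wait three minutes, then chime.
print(highlight("the rain in Spain stays mainly in the plain", "rain"))
# -> the [rain] in Spain stays mainly in the plain
```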

It's a very pragmatic and ruthlessly logical approach. There's not a lot of room for . . . well, heart. Software doesn't feel.

And yet, programmers do have hearts. They do feel. They do have consciences. (I know a programmer who once worked for a defense contractor that built missiles. After several years of doing that, he was looking for a graceful way out for a number of reasons. One of those reasons had to do with the products he was designing. He said, "If I do my job well, somebody dies. If I do my job poorly, somebody else dies.") So, while software may be said to have no heart, we can definitely see examples of software that has to have, for lack of a better term, a conscience of sorts. Or more accurately, it can sometimes come to represent the programmer's or designer's conscience.

One increasingly obvious example of this has to do with the design of autonomous cars. You wouldn't think that conscience or morality would enter into something so utilitarian, but it turns out that programmers working on such vehicles are having to make decisions that are essentially moral. They involve not math but ethics. (Or more accurately—and much more interestingly—a combination of math and ethics.)

[Image: The S60, an experimental autonomous car from Volvo. The S60 is classed as a Level 3 autonomous vehicle: the driver must be prepared to take control if/when necessary. (Image used under the Creative Commons Attribution-Share Alike 4.0 International license.)]

Part of the designer's job is to anticipate certain scenarios, and to program the automobile (in this case, it's truly an automobile) to respond appropriately to each. Thus, the car watches for pedestrians who may step in front of the vehicle, vehicles that may run a red light and enter an intersection unexpectedly, traffic signals that are about to change, etc. It's actually very impressive that these systems can almost flawlessly respond to changes in the environment and that they usually render a decision that keeps drivers, passengers, and nearby pedestrians safe. (Of course, usually is not the same as always, so we have seen accidents, some of them fatal. This is dangerous stuff, after all, and we are on the bleeding edge of tech here.)

But imagine a scenario such as this: Bob is in an autonomous vehicle that's proceeding along a one-way, one-lane street, when suddenly a pickup truck enters from a side street on his right. Bob (well, in this scenario, Bob's car) has three options: the car can veer left, veer right, or plow straight ahead. (We'll assume for now that things are happening too quickly for braking to be effective.)

Nothing good can come from any of these options. Perhaps the car veers left, up onto the sidewalk, where an older couple is slowly making their way over to a nearby vehicle. One possible result? Two dead elderly citizens. Or the car veers right, but on that sidewalk is a group of schoolchildren being led by a teacher at the front of the line and an adult aide at the end. Possible result? Dead or injured children, along with possible harm to the adults leading them. If the car continues straight ahead, it will T-bone the truck, and the impact will almost certainly injure or kill the driver of the truck and his passenger; the crash might also harm or kill Bob himself.

You're probably thinking that this is far-fetched, simplistic, and unrealistic. But it (or something like it) can occur; I would bet that this sort of thing happens at least weekly in every major city. (In 2016, there were 55,350 traffic accidents in Los Angeles, and 260 people were killed in those accidents. About 229 people died in New York City accidents that year.) Of course, when a person is driving the car, that person is responsible for the split-second decision he or she is about to make. Someone is going to get hurt, no matter what. And there often isn't time for a driver to consciously think about that decision; he or she simply reacts and hopes for the best.

But the programmers and designers and analysts who build autonomous vehicles have to consider such scenarios; they do have time to think, and they have to program into the system what they feel is an appropriate response. They must tell the vehicle, "When faced with this scenario, do this." Those programmers just made a life-or-death decision. They had no choice. They have to tell the car to do something, after all. (Keep in mind that opting not to do anything is also a decision.) They have to encode the system, the "brain" of the car, to behave in a certain fashion in response to certain inputs.
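
To make that concrete, here's a deliberately crude Python sketch of what "when faced with this scenario, do this" might look like. Every maneuver and harm score below is invented; I'm not suggesting that any real manufacturer's collision logic looks like this:

```python
# A crude sketch of encoding a life-or-death decision in advance.
# All options and "harm scores" are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    maneuver: str         # "veer left", "veer right", "straight ahead"
    expected_harm: float  # somebody had to decide how to score this

def choose(options):
    """Pick the option with the lowest expected harm. Deciding how
    those harm scores get assigned is the ethical decision."""
    return min(options, key=lambda opt: opt.expected_harm)

scenario = [
    Option("veer left", expected_harm=2.0),       # the elderly couple
    Option("veer right", expected_harm=8.0),      # the schoolchildren
    Option("straight ahead", expected_harm=3.0),  # the truck's occupants, and Bob
]

print(choose(scenario).maneuver)  # "veer left" -- and that was a choice
```

Notice that the min() call is the easy part; all of the ethics live in the numbers that somebody wrote down ahead of time.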

So, what should they decide? Assuming that the technology has advanced to the point that the car can tell what it's about to hit (and I think that is or soon will be the case), does Bob's autonomous vehicle veer left or right? Does it put Bob at risk, or some schoolkids? Or do we aim the car at the elderly couple? Are the schoolkids' lives worth more than the lives of the two older people? Or does the car determine that Bob must sacrifice himself?

It's interesting to talk about this kind of decision-making, of course, and I have had some enjoyable discussions (and even arguments) with students about this sort of thing. (Similar logic-and-ethics puzzles, such as the famous trolley problem, have been around since long before the advent of autonomous vehicles.) But for the purpose of this discussion, which decision the programmers should make isn't even the main point; the important thing is that we've reached a point at which such decisions have to be (and are being) made.

Technology and morality or ethics have always been connected, of course. After all, technology is used (and misused) by people, and people are moral animals. (Or, depending on your perspective, perhaps you feel they are amoral or even immoral animals.) So how we decide to use a technology, and for what purpose, may have always been a decision that has had an ethical component. (After all, I can use a handgun to protect my family, or I can use it to rob a bank or mug that elderly couple we were discussing a moment ago. Even a lowly hammer can be used to build a home or repair a fence, harm a person or destroy the display window of a downtown shop.)

So, having to consider an ethical component in a technology is certainly nothing new. But having to program an ethical component, having to make those sorts of decisions ahead of time and at a remove, is something that many of us have not considered until now. We (or the car's designers, at least) find ourselves in an uncomfortable position: how do we decide which lives are more valuable than other lives?

That's not a decision I would want to be forced to make.