Commentary & Rants

Thursday, October 8, 2020

There's Such A Thing As Being TOO Old School

Here in the US, we're used to needling our government officials when we feel they've done something dumb -- which apparently happens quite often. But US government entities are not the only ones with red faces. 

In the UK, the Department of Health and Social Care, which collates data from public and private labs, among other functions, recently managed to lose about 16,000 COVID-19 records, and they did it in just about the dumbest way possible: The department converted CSV (comma-separated values) files to an Excel format in order to facilitate classification, correlation, and examination of that day's data. But they used the wrong Excel format to do it, resulting in the loss.

Excel's older file format (XLS) is limited, having a cutoff point of 65,536 rows and 256 columns per worksheet. In this case, the CSV files contained many more records than the format could handle; each file was silently truncated at the row limit, resulting in the loss of the 16,000 records.

Officials with Public Health England characterized this potentially dangerous faux pas as a "technical issue," but that's really not what it was: it was a stupidity issue. Someone's IT department was asleep at the wheel (or, just as likely, no one conferred with the IT department). Had IT been involved and on top of things, someone could have told the data migration people that, while the older version of Excel's data file was limited, the newer version (that is, XLSX files) had no such limits. (Well, XLSX has limits of its own: 1,048,576 rows and 16,384 columns. But that's roughly 16 times as many rows as the older format allows, which would have been plenty for the records submitted.)
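A trivial pre-flight check would have caught this before anyone hit "Save As." Here's a minimal Python sketch of the idea; the toy 70,000-record file stands in for a day's lab data (the real files and counts are, of course, assumptions for illustration):

```python
import csv
import io

# Documented row caps for the two Excel formats.
XLS_MAX_ROWS = 65_536       # legacy .xls (BIFF8)
XLSX_MAX_ROWS = 1_048_576   # modern .xlsx

def rows_lost_if_truncated(csv_text: str, max_rows: int) -> int:
    """Return how many records would be silently dropped if this CSV
    were loaded into a format that holds at most `max_rows` rows."""
    reader = csv.reader(io.StringIO(csv_text))
    total = sum(1 for _ in reader)
    return max(0, total - max_rows)

# A toy file with 70,000 one-column records stands in for the day's data.
toy_csv = "\n".join(f"record_{i}" for i in range(70_000))

print(rows_lost_if_truncated(toy_csv, XLS_MAX_ROWS))   # 4464 records vanish in .xls
print(rows_lost_if_truncated(toy_csv, XLSX_MAX_ROWS))  # 0 records lost in .xlsx
```

Ten lines of code, or 16,000 lost records. Your choice.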

The whole debacle does, of course, raise the question: Should systems utilizing data this important be using Excel spreadsheets in the first place? Surely there are better, more robust tools that could be employed. (MySQL, anyone?)


Tuesday, February 05, 2019

Et Tu, Facebook?


I bought my first record in the summer of 1964. It was a Beach Boys single, "I Get Around." It was not a stunning example of sophisticated literary poetics:

I'm gettin' bugged driving up and down the same old strip
I gotta find a new place where the kids are hip

Yeah, well. It's not Leonard Cohen, Kris Kristofferson, or Bob Dylan, but I loved it. I was crazy about the Beach Boys. I eventually succumbed to Beatlemania, but for a few years I was a confirmed Beach Boys fanatic.

A young girl with her hula hoop in 1958. The longest verified record for continuous hula hoop spinning is held by Aaron Hibbs from Columbus, Ohio; in 2009, he kept a hoop spinning for 74 hours and 54 minutes. Why he did this, we're not sure. Photo placed in the public domain by photographer George Garrigues.
We all were fanatics about one thing or another. And, somehow, we had money to spend on the fads of the day: In the 50s it was Hula-Hoops; Davy Crockett-style coonskin caps; Slinky; and of course, music by Elvis (never really "the king" for me), The Big Bopper, and Fats Domino. In the 60s, we went for bell-bottoms; Beatle boots; balsa-wood airplanes; lava lamps; banana seats on bicycles; granny glasses; slot cars; and of course, music by Paul Anka (I feel really bad about that one), Frankie Avalon (OK, that one, too), the Beach Boys, the Beatles, The Doors, The Jefferson Airplane, and dozens more. (There are probably photos of me wearing bell-bottoms and granny glasses, but these photos will never see the light of day. Why? Because I did all of my stupid sh*t before there was an Internet.)

Not only did we have the money to purchase such items, but advertisers, beginning in the 1950s, knew we had money -- or, through our parents, access to money. Suddenly, teenagers were a potential revenue stream, a big one. Not only that, they became, to a much greater extent than in previous years, what the Saturday Evening Post (yes, it still exists -- online, at any rate) calls the "chief financial officers of family spending." They were -- and remain -- what today we would call important influencers.

The Doors were without a doubt the coolest band I had seen perform live in my (very) young life. When I was in junior high school, they (along with several other bands, including the Jefferson Airplane and the Nitty Gritty Dirt Band) did a show on the football field of what would be my high school in a year or two. Photo in the public domain.
The bottom line (so to speak) is that teens are worth a lot of money. And if you think that advertisers are going to ignore that influence, well . . . ha, ha, ha, boy, are you dumb.

In the last edition of The Geekly Weekly I took Google to task over its blatant attempt to bribe users by paying them to enter personal info via its "Google Survey" app, but Facebook has gone them one better: Zuckerberg and his associates have now released "Facebook Research," an application that, upon installation, requests high levels of access to users' devices, thus enabling The Zuck to collect vast amounts of information about the user. Even the very young user. (The program is really just a rebranded version of an application called Onavo, created earlier by an Israeli company owned by -- you guessed it -- Facebook.) While the app requires "parental consent" before use, really that's just a simple tick-box that anyone -- including your 11-year-old -- could click. This probably satisfies the COPPA legal requirements, but let's face it: young users may not even understand what it is that they're agreeing to.
 
Mark Zuckerberg in 2018. Photo by Anthony Quintano and used under the Creative Commons 2.0 license.
And what they're agreeing to is this: Facebook will pay you $20 per month if you let them collect scads of info about you and your habits, including your phone and Web use. That's what your privacy is worth. $20. (Apple has already jumped on this, telling Facebook that it can no longer distribute the app. You never could download it from Apple, but Facebook had been distributing the iPhone version of the app from its own site, a practice which Apple has now disallowed. So far, Google has not followed Apple's lead.)

Now, if you're willing to give up your privacy for $20 a month, then I suppose that's your business: you are, I assume, a functioning adult, able to make such decisions on your own behalf. But what about your son or daughter? Or your niece or nephew? Does your 14-year-old possess the intellectual wherewithal, the demonstrated maturity required to make such a decision? As I think back to my teenage years, I'm pretty sure that I was not equipped to make smart decisions about such things. Or, come to think of it, about most things.

Monday, December 03, 2018

A Lack of (Contextual) Integrity

I must have the people over at Google thoroughly confused. They now think that I am an impoverished-but-wealthy black, gay, Jewish female who is into cooking and who races Formula 1 cars in France on the weekends. I'll get to why they think this in a moment. (This is assuming that there are people at Google, and that these days it's not simply a pulsating, gelatinous glob of algorithmically driven hive-mind protoplasm. Although that would be very cool too, and would make an awesome B movie. It's too bad that Tab Hunter is dead.)

You see, like Facebook, Twitter, and the rest, Google has always been about collecting, manipulating, and mining the data we happily supply it. The Goog then takes the data and sells it to people who use it to sell stuff to us, sometimes further manipulating and mining the data along the way. In this fashion, marketers can build up surprisingly accurate—and often chillingly complete—dossiers on us. These are used to present to us items for sale in which we may be interested. (This is only a little unsettling, and could even be helpful.) Sometimes the marketers use the information they've purchased from The Goog in order to sell us things related to things they know we like. For instance, if I've purchased vinyl records, it's probably a good guess that A) I'm interested in turntables and B) I may have a man-bun. If I've purchased (or even simply viewed) a baseball glove from an online vendor, it's a decent bet that I could be interested in, say, sports memorabilia, season tickets to a sporting event, or perhaps a particular brand of sportswear. (That's getting just a bit creepy.)

But what if I have not viewed or commented on or reviewed an item, let's say a guitar, but I have friends who have done those things? For a marketer or data salesperson, it's perfectly reasonable to assume that, since people with like interests tend to hang out together, I may begin seeing guitar-related ads on my various social media feeds or in sidebar ads on websites that I happen to frequent, not because I've looked at such items online, but because my friends have done so. (Okay, now we're getting seriously creepy.)

What we've encountered here is what some social scientists have called "dataveillance." We're being surveilled digitally, based on the data trail we leave when we traverse the Web. Now, that's not always a terrible thing: sometimes these ads are helpful, just as Facebook's "people you may know" list is occasionally useful and surprisingly accurate.

But we have very little control (read: almost no control) over how that data is used. The real issue with dataveillance, as Cornell University's Helen Nissenbaum has noted, is that it often constitutes a violation of what she calls "contextual integrity." We give someone certain information with a particular understanding of the context in which that information is to be used. I don't mind giving my doctor very private information about myself and my (growing number of) physical ailments. But I would mind very much if she were to share that information with a drug rep or insurance salesman. I explicitly give The Goog data about my travels on the Web, but I did not (knowingly and willingly) give The Goog permission to mine that data, manipulate it, compare it to my friends' data, and then sell it to people who will further refine it and who may then turn around and resell it or combine it with other datasets, the existence of which I am unaware. (If you're dying to read more about Dr. Nissenbaum's work, I interviewed her for my book, which—not at all coincidentally—is available here.)

Of course, The Goog pretends that this is all harmless and that its data collection is benign, incidental, and in fact helpful.

Except that they're not even pretending anymore. You may have encountered a Google program called Google Opinion Rewards. If you sign up, The Goog will pay you to fill out "opinion surveys." For each brief survey, Google will add from 10 to 30 cents or so to your Google Play account; you can then turn around and use that money to buy books, music, apps, etc. on the Google Play store.

But these surveys rarely actually ask your opinion about something. By and large, The Goog doesn't want to know what you think; it wants to know what you are. How much money you make. Whether you rent or own. What sort of car you drive, and if you're likely to be in the market for a new one soon. Here are some sample survey questions:

  • What is the likelihood that you will get a flu shot this year?
  • Did anyone in your household get food stamps . . . in 2017?
  • What is the combined income of all members of your family in 2017?
  • Are you covered by any kind of insurance or health plan . . . ?
  • What medical condition or concern are you most embarrassed to ask your doctor about?
  • Which [of the following categories] best describes your political views?

These are sent to you with the disclaimer that they will be used "to show you more relevant advertising" or to "improve Google products." (Which is more than a little ironic, given that, in the end, you are the product.)

But I've gamed the system: I simply give wildly inaccurate (and often contradictory) answers to the survey questions. Thus, The Goog is now completely confused about who I am, which is only fair, given that I am also confused about who I am. (I mean, in an existential sense, aren't we all confused about who we are, about our place in the world? My personal existential crisis began in 1973 with my attempt to understand the lyrics to songs by the Steve Miller band.) I think it's only fair that Google's algorithms should be just as confused as the rest of us. Perhaps the algorithm in charge of all of Google's other data-mining algorithms has called an 8 a.m. meeting to discuss what went wrong and to argue about which of the junior algorithms was supposed to bring doughnuts to the meeting.


Wednesday, August 29, 2018

Facebook: Killing Us One Stone At A Time


They killed Margaret Clitherow on the 25th of March, 1586. They did it very slowly, by laying her own front door on top of her and then piling rocks on top of it until she was crushed to death, a process called "pressing." It took about 15 painful minutes for her to die. (Which is nothing compared to the ordeal of 81-year-old Giles Corey of Salem, Massachusetts. Corey was pressed to death for refusing to plead after having been accused of witchcraft. He was a stubborn old man. It took him 3 days to die, and each time his torturers asked him if he was ready to plead, he is said to have responded by crying, "More weight!") Margaret's crime was not witchery, it was that she belonged to the wrong religion at the wrong time. She was a Catholic (and was later sainted), which was not exactly a crime at the time, though it was mightily frowned upon. What was a crime was harboring Catholic priests and failing to attend the prescribed and approved church. (Keep this in mind when you hear someone argue for the compulsory presence of religion in schools, in politics, and in society in general. Be sure to ask them which religion they're talking about. After all, you wouldn't want to select the wrong one.) Margaret failed to attend church and she harbored priests, and then—like Corey—refused to plead. (They refused to plead because that way their families, including children, could not be called to trial and tortured until they gave "evidence," which would then give the authorities the right to repossess any land or other property belonging to the family.) Corey and Clitherow suffered excruciating deaths largely to spare their families; they were tougher than you and me.

The Black Swan Inn in York, where Margaret Clitherow is said to have housed priests hiding from the authorities. Image copyright Peter Church and licensed for reuse under the Creative Commons Attribution-ShareAlike 2.0 license.
Naturally, thinking of huge, heartless entities crushing innocents to death made me think of Facebook.

Facebook collects information about us—about you and me. A lot of information. Then they sell that information (supposedly anonymized and aggregated) to their "partners," companies that wish to sell us goods.

How much data, you ask? Well, you can find out for yourself fairly easily. Just log in to Facebook, go to Settings, and then click "Download a copy of your Facebook data." The company will send you a ZIP file containing about 25 folders, each of which contains several HTML documents full of data the company has collected about you. (The complete process is nicely explained here: https://tinyurl.com/ybpp7drb.) I did that, and it was an enlightening process.

Here's just some of what Facebook sent me:

  • A 'Stuff About Me' folder containing face recognition data and address book info (friends, institutions, etc., going back two years)
  • An 'Ads' folder containing:
      ◦ Ad interests: 41 pages of data, 1,329 items, ranging from Academy Awards to action movies, from MacBooks to Method acting, from smartphones to Sonny Bono (?!), and from tattoos to time travel.
      ◦ An 'Advertisers Who Uploaded a Contact List With Your Information' document, which was explained thusly: "Advertisers who run ads using a contact list they uploaded that includes contact info you shared with them or with one of their data partners." This included a list of 211 advertisers, from AARP to Zappos.
      ◦ Advertisers I've interacted with (which consisted of about 100 clicked ads)
  • An 'Apps and Websites' folder: apps I've used Facebook to log into (stretching back to 2013)
  • A document containing every FB post on which I've commented—including the text of the comment—going back to 2013
  • A list of every person I'm following and every person who's following me, every page I've ever unfollowed, and every person I've "friended" and when (dating back to 2009)
  • A 'Posts and Comments' document that included every "like" (or any other reaction) I've posted on a post or comment
  • A 'Location History' folder. Mine is empty, since I've never "checked in" or otherwise informed FB of my location. (But you may have.)
  • A list of every FB message I've sent or received and from/to whom
  • A 'Photos & Videos' folder containing every… Well, you get the idea.
  • Security and log-in info that included session cookies (148 MS Word pages, about 7,000 or so cookies), all devices authorized to log in (back to 2013), and a list of where I've logged in from and when
  • A document listing my complete search history
  • And a handy Index.html doc that lets you get to all of this stuff a lot more easily than poking around in every damned folder, which is what I did. Unfortunately, I found this document last.

As you can see, that's a lot of information about me—and honestly, I'm a pretty boring person! Really. You can ask anyone.
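If you're curious where the bulk of your own archive lives, a few lines of Python will summarize it once you've unzipped the download. (The folder names below are stand-ins I create just for the demo; Facebook's actual archive layout varies and changes over time, so treat this as a sketch, not gospel.)

```python
import os
import tempfile

def summarize_export(root):
    """Count the files under each top-level folder of an extracted
    data export, to see which categories hold the most material."""
    summary = {}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            summary[entry] = sum(len(files) for _, _, files in os.walk(path))
    return summary

# Build a tiny stand-in archive (the real export has ~25 folders of HTML files;
# these names and counts are invented for illustration).
with tempfile.TemporaryDirectory() as root:
    for folder, n_files in [("ads", 3), ("messages", 5), ("photos_and_videos", 2)]:
        os.makedirs(os.path.join(root, folder))
        for i in range(n_files):
            open(os.path.join(root, folder, "doc_%d.html" % i), "w").close()
    result = summarize_export(root)

print(result)  # {'ads': 3, 'messages': 5, 'photos_and_videos': 2}
```

Point it at your real extracted archive instead of the temp folder and prepare to be mildly horrified.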

He doesn't look like an evil person, does he? At least, he didn't back in his Harvard days. Image licensed under the Creative Commons Attribution 2.5 Generic license.
Something should be pointed out here… Near the top of this list is a document that lists advertisers who run ads using a contact list that I shared with them or with one of their data partners. Now, I am perfectly happy (well, moderately happy) to share data with companies that sell products in which I'm interested: computers, say, or archery or cars or motorcycles. But I have no idea who these "data partners" are. It turns out that when I share data with an entity, I'm in effect also sharing it with whomever they decide to share it with. And I have no control over who that might be.

I don't like that.

Really, most of these bits of data are relatively insignificant. If any one or two or five of them got out in public or were sold to a marketer, it probably wouldn't matter much. But, like the stones that killed Giles Corey and Margaret Clitherow, eventually, the combined weight of the stones reaches a critical mass and that one last stone finishes you off. Facebook has collected a LOT of stones, enough to build a fairly accurate—and quite valuable—dossier on every one of its over 2 billion customers. Eventually, we might end up being crushed by those stones.


Sunday, July 15, 2018

Printing Death in 3D

I generally don't write about politics here; after all, this blog is supposed to be about technology. But sometimes technology and politics overlap, as in this case.

In Ch. 8 of Leveling the Playing Field (which I'm sure you've all read!), I talked about the advent of 3D printing and how it has changed manufacturing, mostly for the better. But not always for the better. One significant worry I had (and have) about 3D printing is that it can enable the proliferation of homemade weaponry, including very accurate reproductions of weapons such as the venerable 1911 semiautomatic pistol and the AR-15-type rifles that have been used in so many mass shootings over the past few years.

Now, I own weapons. I like to think of myself as one of those "responsible gun owners" we hear about. I own guns for sport, for protection, and for hunting. But I don’t believe that just anyone should be able to own just any gun, nor do I think there is anything wrong with having to pass background checks in order to purchase a weapon or being required to register many types of firearms. I'm not anti-gun; I'm anti-idiot.

Lesley is not a big gun person, but she has gone shooting with me a couple of times. Naturally, it turns out that she's an excellent shot.
Of course, what I think doesn't matter much, and just how little it matters was brought home to me a couple of weeks ago when the Department of Justice surrendered to a "First Amendment" argument that a 3D data file representing a weapon was in fact protected free speech and could be hosted on (and downloaded from) a public-facing website. (The suit was filed by Cody Wilson, the inventor of the Liberator 3D-printed pistol about which I wrote in the book.) After a long, drawn-out court case, it appears that the DOJ has quietly settled with Wilson, whose stated goal has been to moot the gun control debate by showing that it can't be controlled. In the words of a recent Wired Magazine article, the DOJ promised to:

…change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet.

Basically, this means that Wilson and his supporters have won the war. They've successfully blurred the line between the First and Second Amendments, guaranteeing that anyone can design and/or download 3D-printer-compatible plans for just about any firearm. And, as any hacked corporation or repressive government can tell you, it's very, very difficult to police digital data. Even if you wanted to hide it (which Wilson and his allies do not), the data would get out; after all, it's just information. And these days, information (and misinformation) is pretty much everywhere.

It doesn't look like much, but this is a mockup of The Liberator, possibly the first functional 3D printed handgun. Posting the data file for this gun online is what got Cody Wilson embroiled in a years-long lawsuit. The DOJ finally capitulated just weeks ago. Image used under the Creative Commons Attribution-Share Alike 3.0 Unported license.
I don't really worry much about Wilson himself. He's an intelligent and seemingly stable young man, just one with whom I disagree politically. I'm not worried that he's about to snap and become a mass murderer. But I wonder how many mass murderers he's about to enable. Even one would be too many, I would think.

Some have drawn an analogy to the automobile--another tool that kills many thousands every year--pointing out that it is possible to build your own motorized vehicle. But there are differences. The purpose of an automobile is not to kill people, of course. Like a hammer or other tool, it can be used to hurt people, but that's a misapplication of the tool, not its purpose. And it's certainly true that I could collect (or even build) parts and create a car. (Well, in my case, I'd have to make a few phone calls to my friend George Kelley, if I wanted the car to actually run.) But look what happens when I'm finished building this car, this tool capable of killing thousands every year: I'd have to license and register it. And I would myself have to be tested and licensed if I wanted to use the car.

This is Jeff Sessions. As US Attorney General, he is the man in charge of the Department of Justice, the cabinet department that just settled a lawsuit with Cody Wilson that will result in the widespread proliferation of 3D-printed weaponry. Image in the public domain.
I'm fine with having to register my car and license its driver. I'm also fine with having to register certain firearms and with having to license their users. But this technology—and the DOJ's capitulation to Wilson and the other plaintiffs—will make it very difficult to police the proliferation of this weaponry. Even if the authorities were to confiscate my weapon on some grounds (perhaps I'm a felon, perhaps I violated a restraining order, perhaps I've shown myself to have anger issues and have committed assaults), I could simply go home, press a button (assuming I own the proper equipment), and go have dinner. By the time I'm finished with my after-dinner port (not that I would drink port—who the hell drinks port?!), I'd have a nice shiny new .45 pistol or an AR-15 receiver sitting in my printer.

And if I could do that, what could an angry ex-husband or wife do? What could a gang or a cartel do?


Monday, June 04, 2018

Jesse Pinkman: "It's SCIENCE, B*tch!"

Lesley and I have been crossing a lot of bridges lately. (I mean the literal kind, not the metaphorical ones.) First in our trailer and now in our motorhome, we've been doing a lot of driving throughout the Pacific Northwest and Northern California, and there's plenty of water here; entire oceans of it, in fact. And where there is water, there are—not surprisingly—bridges to enable us to cross that water.

Early on, this was nerve-wracking. Towing a 19' trailer across a bridge with a fairly small Chevy pickup truck, trying to stay in the middle of what seemed a terrifyingly narrow lane was, at first, pretty scary—especially if the wind was up. We eventually got used to the feeling of being suspended on this thin concrete-and-steel lifeline hundreds of feet above the water, dragging all of our worldly goods behind us. Eventually, we got to the point where we could cross a bridge, even a narrow one, without giving it too much thought. Now we do the same thing in a somewhat larger motorhome. And, as expected, it was frightening at first, but eventually became second nature. Other than making sure that it’s not too windy, we now cross bridges without giving the crossing a second thought.

But even at our most terrified, one thing we never worried much about was the integrity of the bridge itself. We might veer off of the bridge, or be blown off of it, or be pushed off by a trucker who'd lost his brakes or been blown into another lane, but we never thought, "Oh, my God! What if the bridge falls down?!" Circumstances might intervene to do us damage, but the bridge itself would stand, we could be pretty sure.

Sometimes the "guarantee" is implied by a sign that you can
see before you get on the bridge itself. (Image in the public
domain.)
That's because bridges are engineered. And with only a few exceptions, they are well engineered, designed by men and women who understand both physics and structural engineering. These people are civil engineers and architects, experienced designers who know how materials will react to a given amount of stress and to the wear and tear of wind and weather and traffic. How do they know? They know because they hypothesize and calculate and test and revisit the original hypothesis, all while taking into account the known properties of various materials. As Aaron Paul's Jesse Pinkman said so eloquently on Breaking Bad, "It's science, Bitch!"

Bridges are usually massive and are guaranteed to carry a certain amount of traffic. The Yaquina Bay Bridge, which we cross almost every week, was built in 1936, one of a series of bridges designed by Conde B. McCullough. It is over 3,000 feet in length, and it stands 133 feet above the water at its highest point. It contains 30,000 cubic yards of concrete and over 3,000 tons of steel. As I said, massive. (And this bridge is quite small compared to many other bridges around the world.) And because it's so substantial and so well-designed, the bridge is guaranteed to be able to hold the weight of the traffic crossing it.

Sometimes the "guarantee" is fairly explicit, as in the
case of the Clark's Bridge, a covered bridge in New
Hampshire, which specifically states that it will carry
200 tons. (Image  licensed under the Creative
Commons Attribution-Share Alike 3.0 Unported
license.)
Which brings us to software. 

When I was heading up the software development team for a publishing company in Texas, the powers-that-be (of which I was most assuredly not one) decided that all of our programmers would be given a new title: henceforth, they would all be known not as developers or programmers, but as "software engineers." I really didn't care what they were called, so long as they showed up at the office and did cool programmy things, preferably while wearing shoes and long pants. And to tell the truth, the programmers didn't care, either. You could call them whatever you wanted; as long as they got paid and had snacks and got to do cool software things, they were happy. (And most of them wore shoes and long pants most of the time.)

But one of my developers emphatically did not want to be called a "software engineer." This man—we'll just call him "John," because, well, that was his name—felt that as programmers, they did not deserve to be called engineers. The programming profession, he felt, was not precise enough, nor its results predictable enough, to be called "engineering." Engineering, he said, meant that the end result, the product, was designed in such a way that the builders could guarantee the outcome of its use.

The example he used was, in fact, a bridge. A bridge is designed and built and guaranteed to carry a certain amount of weight. If built correctly, it will in fact carry that weight, and it will do so for a specified period of time.

Software, on the other hand, is never guaranteed. It's too complex and used in too many different environments for the developer to absolutely guarantee that it will function as designed. And sure enough, if you go looking for guarantees for software you've purchased (or, more likely, licensed), you will find a lot of vague legalese that basically boils down to "This really should work, but if not, well, we're not responsible. Sorry." If you go looking for remedies for failure, you'll find that those remedies are almost always limited to replacement of the media on which the software was supplied. (Which is even more meaningless these days, since most of your software was probably downloaded or is provided as a cloud service.)

Code is complicated. And the interaction of thousands (sometimes millions) of lines of code with one another and with the software and hardware environments within which that code runs makes it close to impossible to guarantee that a software product will behave as designed at all times.
John felt that, until programming had evolved to the point where designers and programmers could guarantee their work, it was not deserving of the name engineering, and he would rather just have his title listed as "Programmer" on his business card.

I sympathized with John and told him that I would convey his feelings to the aforementioned powers-that-be. I did so, and the PTB explained to me that they were going to do exactly what they had intended to do all along, that John's title was now "Software Engineer," and that I should now scuttle back to my dark and forbidding lair and prepare for the next in a seemingly endless series of product delivery deadlines.

I returned to John, gave him the bad news, and sympathized heartily with him, while patting him gently on the shoulder. Then I asked him to please put his shoes back on.

He was right, though.


February 6, 2018

The Power of Media Compels You

I don't care much for sports bars. They’re usually loud and filled with obnoxious people who've had a few too many beers, and everyone's yelling at each other and at various television monitors mounted all over the room as their favorite (or least favorite) teams cavort onscreen, running around on a field doing various . . . uh, sports things.

Of course, I have (and almost always exercise) the option of simply avoiding sports bars; that way I can have a nice peaceful lunch or dinner, and the sports fanatics among us can scream spittle-flecked invective at the television, eat wings, and smear various sauces all over themselves while watching the Falcons, Penguins, Cardinals, Orioles, Seahawks or other such avian-themed sports teams. (In America, there seems to be a weird association between ornithology and sports teams. I suppose there's a Master's thesis in there someplace. Or maybe the Audubon Society would sponsor a grant.)

 


Now, this is a bar. This is the Table Bluff Hotel and Saloon in 1889. Not a TV in sight. Image courtesy of the Sonoma County Library.

But I'm not immune to the allure of sports, just sports bars. Why, only yesterday, I watched the Super Bowl. I don't remember which Super Bowl it was, maybe Super Bowl MMMMCMXCIX. But the Patriots and the Eagles (see?!) "gave it everything they had," "brought their A game," "played to win," and all of them "gave 110 percent." And to be honest, it was a good game, especially since I didn't really care who won. Also, we had enchiladas and beer.

But even when I go out to eat at a restaurant that's NOT a sports bar, I can't escape the television on the wall. These days, most restaurants have a television or two or twelve scattered about. Given that I am a child of the 60s, my eyes are unavoidably drawn to any flickering image in a box. (Perhaps students today would pay more attention to teachers if we found a way to flicker.) This is unfortunate when eating dinner with my wife and/or a group of friends. I may be paying close, even rapt attention to what is no doubt a very important discussion about . . . uh, something, but then out of the corner of my eye I can see that flickering, blue-tinged image beckoning. I always turn to look. I must turn to look. I've been conditioned to do so. The power of media compels me. And when Lesley draws my attention by coughing gently and touching me on the hand (perhaps with the business end of a fork), I have to pretend that I was not absent during the last 30 seconds or so of the conversation. I usually just smile and nod and try to look intrigued and amenable to whatever has just been said. (Sometimes this results in me accidentally agreeing to go hiking. I don't really see the point of hiking. I spent a lot of money on a very nice truck. It has air conditioning, soft leather seats, and XM radio. Hiking does not have those things.)

But it's not just television; non-electric media also compels us. Lesley and her mother enjoy putting together jigsaw puzzles. (God knows why. Perhaps it's some Episcopalian form of penance. Like flagellation, but more painful.) These puzzles are usually laid out on the dining room table, because it's the most convenient large, flat surface in the house. But I have to watch Mom and Lesley very closely during dinner. We'll be enjoying our food and talking, and I can see their eyes beginning to steal away, glancing surreptitiously at the puzzle, just a few inches from our plates. Eventually they desert our meal (and me) and enter into a full-fledged puzzle-solving frenzy. Like me watching a television image, they can't not do it. At first, they were sheepish about it, but now they don't even bother pretending that they think it's weird to work on a puzzle during dinner. 

Media, it turns out, is media, and none of us (well, few of us) are immune.

 

You've all seen this photo of commuters ignoring one another in order to concentrate on their newspapers. It's supposed to make the ironic point that it's not only modern media that has distanced us from one another. Which it does, of course. But keep in mind that these folks may well have finished their papers on the way and then spoken with one another about what they had just read. Image in the public domain.



I don't think that the kids we berate for spending their lives with their faces in Facebook or Instagram or Snapchat (or whatever is big these days; I may be a few weeks out of date) are really any different than any of the rest of us; they too are compelled by media, but I suppose it's a matter of degree. They are all about being connected, all the time. It's very difficult for them to disconnect. I saw this during the college classes I taught; asking students to put their phones away for an hour was almost physically painful for them. The thing about other forms of media is that they're there and then gone. We read the newspaper (remember those?) and then we were finished and perhaps we even (God forbid!) spoke to people about what we'd just read. We would connect intermittently to television (perhaps after school or in the evening), we might read a book or listen to the radio, but then we were finished, at least for the time being. These days, though, people (not all of them kids) are connected to other people all the time. It must be exhausting! Who would want to be connected to everyone 24/7?! I don't even like people that much! (But I love dogs. If we could connect to dogs, now, that would be different. I could definitely connect with dogs all day. I would most certainly sign up for DogBook or InstaPaw or PupChat or something.)

My granddaughter is going on a medical mission trip to Guatemala this summer, during which she and other scientifically-minded students will teach basic hygiene, measure villagers' blood pressure and glucose levels, and do other science-y things for a couple of weeks. (All of this will be covered in greater detail in my upcoming book, entitled My Grandchild is Smarter than Your Grandchild and All of Your Entire Family Put Together—and Better-Looking, Too.) But on this trip, she has to disconnect for an entire 10 days! No phone. No texting. No Instagram. No computer. For almost two weeks, she will be in a foreign country, forced to interact with actual people in real-time. I shudder to think what this might do to her. What if she accidentally reads a newspaper?


January 7, 2018

Right again, dammit. 

Sometimes I hate being right. (I am informed by my wife that this doesn't happen often enough for it to be a real concern, so…) Nonetheless, I knew that this would happen.

If you've read my book—and of course you have!—you may recall that I spent some time talking about the dark, dangerous side of the Internet. I love the Internet, but along with all of the wonderful things it has brought us, there's also quite a lot of ugliness. 


An FBI SWAT team training in New York. Image used under the Creative Commons Attribution 2.0 Generic license.

I refer, for example, to various forms of bullying: name-calling, vituperative attacks, and malicious threats delivered mostly by folks who revel in their ability to deliver messages of hate while cloaked in the anonymity provided by the Internet. Often this hate is directed at women, of course, but we're all vulnerable; and much more frightening still, our children are vulnerable. (Bullying used to be restricted mainly to schools, but now that kids are pretty much connected 24/7, they're even bullied at home or while out and about. These kids—and some adults—must feel as if there is simply no escape from their assailants.)

In chapter 6 of Leveling the Playing Field, I recounted an interview with two of the Gamergate principals, Zoe Quinn and Alex Lifschitz. You may recall the Gamergate incident: It began as a reasonable argument amongst gamers about ethics in game-related journalism, but quickly escalated into vicious attacks, name-calling, doxing, and death threats, all delivered (mostly anonymously) via the Web. Zoe and Alex were two of the favored (if that's the right word) targets, and the two of them essentially had to go on the run, afraid to stay at their own homes or to be seen in public with their friends and colleagues. 


Zoe Quinn was one of the primary Gamergate targets; attackers harassed her, released her personal information (called "doxing"), and threatened to "swat" her and her friends.

Perhaps worst of all, said Zoe, was the threat of swatting, in which an attacker calls the police and reports a fake emergency at the target's address. The caller might say that he heard gunshots there, or that he knows someone is holed up there with firearms and/or explosives. You get the idea. The point is to get the cops to deploy a SWAT team to that address; the attacker's presumed goal is to get someone there hurt or possibly even killed. This is why Zoe Quinn and others have called swatting "attempted murder by proxy."

It's incredibly dangerous. You have heavily armed, nervous, excited (and sometimes frightened) police officers breaking open a door and entering the premises of someone who has no idea what is happening, why these assault weapons are suddenly pointed at him, or why these people are breaking into his house. (I suppose that the smart thing for the victim to do would be to drop to the floor with his hands behind his head, but in those circumstances, who would have the presence of mind to do the smart thing? Then again, what if, when you drop, the cop thinks that you're diving for a weapon? And how is he or she supposed to know that you're not?!)

Swatting has been going on for several years now (the FBI estimates that some 400 cases occur each year), and it finally resulted—as we knew it eventually would—in someone's death. On December 28th, Andrew Finch was killed in Wichita, KS when he came to the front door in response to police officers who had been sent there by a swatting "prank." (Ironically, the address the attacker had wasn't even the correct address; Finch was not a party to the argument that caused the swatting call, and was therefore unaware that there was a problem, making the whole nightmare doubly tragic.)

 

Representative Katherine Clark (D - Mass.) sponsored the Interstate Swatting Hoax Act of 2015 and almost immediately became the victim of a swatting attack herself. Image in the public domain.

A few years ago, Democratic congressperson Katherine Clark introduced a bill that would impose serious penalties for such online attacks and hoax calls, especially ones that result in death. (Naturally, Clark herself became the target of swatting attacks and other online threats.) The bill is still in committee, and people who know about such things say that it has little chance of passing.

Some argue that we don't need laws specifically aimed at swatting, in any case. The L.A. Times Editorial Board, for example, argues that existing laws cover such situations: there are, for instance, already laws against making threats and against filing false police reports. However, most of these laws are local in nature, and each state determines how to file charges. In most cases, callers are charged with misdemeanors, but even if a felony charge is brought, the punishment may vary wildly.

Zoe Quinn is right: a swatting call is attempted murder by proxy. If apprehended, the "prankster" should be charged with attempted murder or another serious felony. If someone dies as a result of a swatting call, the caller should be charged with, at minimum, manslaughter.

And if it takes a federal law to ensure that this happens, then so be it.


December 17, 2017

Bows and Arrows, Sticks and Stones

I've been reading about primitive bow-making. Not the kind of bow one might use on a gift box (though my efforts in that area are certainly primitive enough), but bows that one might use to shoot arrows while hunting or for target practice. You know, bows of the sort that archers use.

Bows and arrows have been around for many thousands of years, of course, and with the introduction of fiberglass, and then the addition of pulleys, sighting mechanisms, carbon laminates, stabilizers, and other such paraphernalia, the technology of bows has advanced such that some modern compound bows look only vaguely like the bows carried by Native American and early European and Asian archers. To my eye, they don't look like bows at all. 

This is Albina Nikolayevna Loginova, a Russian compound archer. She is the current world champion in women's compound archery. Her modern compound bow, simpler than some, nonetheless looks pretty complicated. Image licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

I'm much more interested in traditional bows, generally made of wood—though they can also be made of horn—and sometimes adorned or enhanced with backings of sinew, silk, or rawhide. They seem simple, and somehow pure. (Primitive is, of course, the wrong word for it, laden as that term is with both ambiguity and potentially negative connotations. Traditional is the favored term, one that works well, since a given tradition can be pinpointed somewhat accurately in both time and place; thus, one can pattern a bow after those of the Klamath, Kiowa, Siletz, or Comanche tribes, and we can also classify a bow shape or construction according to when it was made.)

As a technologist of sorts, I've wondered what attracts me to these seemingly less technical endeavors; why, for instance, do I get such pleasure out of seeing a straightforward, simple design? It's not that I don't appreciate the complexity of a modern compound bow, I do—and in much the same way that I appreciate any other sophisticated design, whether in a motorcycle, a computer, a building, or a piece of software. Basically, I'm a sucker for good engineering of any sort. Sometimes that engineering is pretty complicated, but often—as in the case of a so-called primitive bow—it's quite simple, at least on the surface. (A good bowyer—for that is what we call someone who makes bows—can explain the many ways in which traditional bows are actually quite sophisticated. But, at least at first glance, they look to the eye disarmingly simple, even rudimentary.)

 

Simple can nonetheless be beautiful. This is a yew longbow. It's a selfbow, i.e., a bow carved from one piece of wood. A medieval longbow had a range approaching 400 yards. Image placed in the public domain by the photographer, James Cram.

As with most forms of engineering, a traditional bow results from the melding of art and craft and science; like any good woodworker, a seasoned bowyer puts his skill and knowledge and sense of aesthetics to work creating something new, building something beautiful with his hands and his heart and his brain. It's hard not to like something like that.

And yet, there's an irony here, one that I thoroughly enjoy. I'm reading and learning about primitive archery and bowyery, but I'm doing it in a modern home; when my coffee cools as I read, I walk over to a microwave oven to reheat it. I sit in front of a flat-panel TV, streaming an Amazon movie (which I'm mostly ignoring) and read a book about “primitive” bow-making, but the book I’m reading is being displayed on my iPad; as my eyes tire (which they do, because I am old), I can enlarge the font or change the background color. If I run across a particularly interesting illustration, I click a few buttons and—thanks to Wi-Fi—the illustration prints out on a printer upstairs near my desk. Later on, I may take some notes in the Kindle app I'm using to read this book and drop them, along with snippets of the associated text from the book, into a OneNote or Evernote notebook. Come to think of it, if I ever decide to try my hand at this, I may begin by using a CAD/CAM program to lay out a basic design.

I wonder if there's a word or phrase that describes the use of sophisticated technology to create primitive artifacts. I can't think of the term right now, but the most obvious example that comes to mind is nuclear war.


November 5, 2017

Low-Tech Coffee

As a tech writer and a recovering software developer, I am naturally something of a technophile. I've already mentioned some of my favorite tools for writing (computer, OneNote, word processor, Skype, email, etc.) and also for camping (GPS, RV campsite apps, Wi-Fi and cellular boosters, etc.), and I have to admit that I'm a bit of a gadget hound. If it's shiny and has buttons and lights, I like it.

 

Now this is a phone. It only did one thing, and it did it well. And it made a very satisfying noise when you slammed it down into the cradle. Image used under the GNU Free Documentation License.

On top of that, I'm old—and getting older all the time! I'd like to think of myself as middle-aged, but I suppose that would mean that I'd have to assume I could live to be 120 or 130 . . .  Not very likely to happen. But the thing about being a techie "of a certain age" is that I can remember what it was like before we all had computers and smart phones and GPS and The Facebooks and all like that. When I started writing news articles, we had to call people on the telephone to interview them! (And not on a smartphone, either; I'm talking clunky, corded phones with rotary dials.) If we wanted to check a date or a quote, we had to go to the library or local newspaper office. I mean, we had to physically go there! It was awful. All that moving around, speaking with actual people. Ugh. Very unsanitary.

Since I'm old enough to remember all of this, I'm still a bit awestruck by what the tech revolution has wrought. The idea that I can carry so much computing power in my pocket, that I can instant message someone on the other side of the country (or even the other side of the world), that my granddaughter can Facetime me to show me her newly painted room, that the little box on my dashboard (or the phone in my pocket) can tell me how to get to an exact location from 5 or 50 or 500 miles away . . . I'm still kind of shocked by all of this. I use all of these technologies daily—in fact, I'm rather knowledgeable about many of them—but they still sometimes astound me. I mean, our daughter Amy makes a living as an online fashion maven! That wasn't even a thing a few years ago.

So, yes, I do love technology: using it, writing about it, building it. Still . . . 

Sometimes I find that the low-tech approach works better. It's simpler. Often faster. Almost always less expensive. For instance, I tend to make a lot of notes. I tell my wife that it's because my brain is always whirring away, coming up with brilliant ideas for articles, books, programs, and the like—but mainly it's just that if I don't jot an idea (or name or task) down within about 15 seconds, I'll lose it. I'll remember that I had an important idea, but it'll be gone; I'll have no idea what that idea was. I can hear the Whooosh! sound it makes as it leaves my addled brain. So, lots of notes; it's the only way I can survive. And in spite of the fact that I've tried many note-taking apps, I keep coming back to . . . I'm a little ashamed to admit this . . . a pencil and a pocket-sized spiralbound notebook. Yep. Pencil and paper. (Preferably lined paper, and preferably a #2 soft pencil.) It just works better, and it's faster. With an app, I have to tap the icon to open the app, then open a page (or start a new one), then attempt to use the tiny onscreen keyboard with my fat, clumsy fingers, then I have to save the note and quit the app. Honestly, for me, a pencil and paper is better, faster, and easier.


My favorite analog note-taking app. Complete with "stylus."

And notetaking isn't the only task for which I prefer a low-tech approach. As an avid RVer, I'm very much into all the gadgets and gizmos people use with campers, trailers, and motorhomes. In much the same way that editors argue about placement of semicolons or the use of the Oxford comma (now, don't get me started), RVers argue about the best way to do . . . well, anything. Whether it's making coffee, sealing a leaking windowframe, using solar panels, or traveling with full or empty tanks, those of us in the RV fraternity are happy to argue about all of it. (BTW, everyone at least agrees that it's best to avoid traveling with half-full tanks; too much sloshing around. That might be 150 lbs. or more of water surging fore and aft, enough to throw your rig seriously off-balance.)

But let's just concentrate on coffee for a moment. (I say "just," but that word does a serious disservice to possibly the most important liquid ever. More important even than bourbon. And that's not something I say lightly. I don't know why the inventor of coffee hasn't been canonized. Oh, wait . . .  yes, I do . . .  He was almost certainly an Ethiopian Muslim . . .   At any rate, all I can do before my first cup of coffee is grunt, and it's generally a nasty, spiteful grunt, at that.) There are dozens of ways that RVers make coffee on the road, many of them pretty high-tech. There are French presses, AeroPresses, and full-blown Braun- or Mr. Coffee-type coffee-makers. Some people take their Keurigs camping with them! (That's just . . . wrong. Those Philistines! Not that I would judge.) Not surprisingly, Coleman (a company that’s been in the camping biz for over 100 years) makes a $70 propane-powered coffeemaker; this seems to me an over-engineered solution, but again . . . no judging. Some people like to use the old-fashioned stovetop percolator. I like that. It makes sounds that remind me of the way that my mother's kitchen sounded in 1958, and it smells wonderful. And really, there's nothing very complicated or high-tech about how a percolator works. (It's just physics. My friend Rick Brown could explain it to you, if you have a few hours.) This is an almost perfect way to make coffee while camping.


Our little Melitta, filter, and teapot, in our trailer, ready for business.
Photo by Lesley Scher.

But notice that I said "almost." When camping, there is usually a need to conserve water. Even if you have a ready supply of fresh water, your waste water is going to go into a holding tank, and once that tank fills up, you may have to break camp and find an RV dump station; there may be one nearby, or it may be a few miles down the road. Either way, it's a pain. No matter which approach you take, you're going to use water to make your coffee. But then you're going to use even more water cleaning up the grounds, washing out the percolator (remember trying to clean the grounds out of a percolator?), and cleaning the press or decanter or whatever you're using.

Lesley and I take the easy way out: We use a small Melitta cone filter coffeemaker. You can get them in various sizes, but we just use the 1-cup size. Heat up some water on the propane stove, drop a filter in the Melitta, add a scoop or two of ground coffee. Then place the funnel-shaped coffeemaker on your mug, pour the water in, and wait about one minute. Done. Decent coffee, no mess, nothing to wash (even the mug and coffeemaker can just be wiped clean), and no wasted water.

It's low-tech, simple, cheap, and fast. Although I'm sure that any day now, Melitta will release a Wi-Fi-enabled cone brewer, and then I won't know what to do, especially if it has flashing lights. I can only be so strong.

September 5, 2017

Cyber Insecurity

I know what you Tweeted last summer. Also this summer. And during that particularly nasty rainstorm in the winter of 2015. In fact, I know what you posted on YouTube, Reddit, Instagram, and Flickr. (Also VK, if you happen to be into Russian social networking.) And if you posted anything while you were supposed to be hard at work in an office building or manufacturing plant, well, there's a pretty good chance that I can find that out, also.

 

This is a series of social media posts that originated in my old high school over the past several days. I can click on the icons and see who posted what.

Of course, none of this is secret, right? You didn't really post it on the Internet and expect it to remain private, did you? I mean, c'mon, if there's anything the Web is bad at, it's maintaining your privacy; just ask any number of breached and outed and exposed criminals, trolls, Hollywood insiders, and a slew of embarrassed AshleyMadison.com members. (Not that one might not be all four, of course.)

The Internet is great at sharing information; that's what it was made for. (No, it wasn't set up to provide a backup communication net in the event of a nuclear attack. It was invented and intended for use by university researchers looking for ways to communicate and share data.) Unfortunately, it's not so great at protecting information.

As someone who has worked on the security side of technology, I have access to some tools that might make my search a little easier, simpler, or faster, but the truth is that all of that information is out there. Everything you've ever typed. Every Google search you've ever made. (Yes, even that one.) Everything you've posted, commented, searched for, or communicated is stored somewhere; and all it takes is a little time and effort to uncover. If it's supposedly protected by virtue of it being stored on a "secure" site (think Facebook, Dropbox, your corporate network, etc.), well, I have bad news for you. As security-conscious sages (including former FBI director Robert Mueller) have said many times: "There are only two types of companies: Those that have been hacked and those that will be hacked." (I might add a subset of the first type: those that have in fact been hacked, but don't know it yet.)


Robert Mueller was the 6th director of the Federal Bureau of Investigation. He is currently occupied with other security-related endeavors. Image in the public domain.


But I'm not necessarily talking about sophisticated, hardcore tech attacks here, the sort of thing that some shady operator in a basement in Odesa or Kiev or Omaha might use to force his way through a firewall or other cyber defense. Those types of attacks certainly exist. But why would anyone go through the trouble? It's time-consuming and expensive, and it requires skills that most of us don't have. And besides, there's often no need. The info is either already out there (in the form of social networking posts and other communications—many of which can easily be viewed or uncovered with a bit of sleuthing) or else it can be had simply for the asking.

That's what social engineers do. When they need the keys to the (yours, your boss's, your client's) kingdom, they just ask. Of course, they might have to lie a bit. (Well, let's say prevaricate. It sounds better.) They might (read: probably will) get in simply by emailing you a dodgy link. Occasionally, they might need to invent some pretext to get into an office: Perhaps the social engineer shows up at your place of business in a blue shirt holding a clipboard, and wearing a baseball cap with a service company logo. He either just waltzes in (if your company is foolish enough to leave its campus buildings unlocked) or else stops at the reception desk to tell the folks manning the desk that he's "here to check on your <insert name of make and model> corporate printers, to ensure that they're working correctly." Or perhaps he's (supposedly) with a janitorial service and he'd like to see if he can outbid your current provider, the name of which he just happens to know. (He also knows how much they're charging you. In fact, he seems to know a lot, more than enough to convince you that he's on the up-and-up.)

Or maybe he keeps it simple. He just picks up the phone and starts calling your employees; when someone answers, he says, "Hey, Sarah, this is Todd from IT. We're working on something here and I need to get into your system to see if you've been updated. It doesn't look like the last security update was installed, for some reason." Of course, he's using a phone-spoofing application that makes his call look as if it's coming from inside your building, so for all you know, it's legit. (Do you know everyone in your IT department? Really? Everyone?) And you wouldn't want your system to be out of date, would you? Vulnerable to attack?! If the caller is good—and professional social engineers are very, very good—odds are that "Todd" will eventually find someone to give him a password; after that, he's off to the races. And by "off to the races," I mean that he's successfully infiltrated your network. (Note that I’m saying “he,” but keep in mind that the social engineer could just as easily be female. There are some truly exceptional social engineers out there who happen to be ladies. I don’t think that they’re necessarily better liars or any more duplicitous than the guys, but perhaps we’re simply not expecting to get hacked by a woman. Whatever it is, the ones I know of or have met are very good at this.)

You see, social engineers are hackers of a sort, but they don't hack systems; they hack people. And people are easily hacked. We're great targets, because we're trusting and we're helpful. I hate to say it, but we need to learn to be more suspicious and wary. C'mon, people—stop being so nice, so trusting! We should all be more like those people we see writing in comments on the Internet: angry, cantankerous, distrustful. Well, maybe only a little like them. No need to get nasty or insulting.

And here I'm going to put in a plug for my friend, Chris Hadnagy. Chris runs Social-Engineer.com (and Social-Engineer.org), a penetration-testing company that specializes in using social engineering to uncover weaknesses in your company's "human network." He and his team are very good—scary good, in fact. They can lie and wheedle and schmooze their way into almost any network. If you're wondering if your network has weaknesses, it does, trust me—especially your human network. It's porous and shaky at best, and Chris and his folks can help uncover those weaknesses. But my favorite of Chris's endeavors is the Innocent Lives Foundation (ILF). The foundation specializes in unmasking child predators and in providing useful, usable evidence to law enforcement officials so that these people can be found and prosecuted. It's a worthwhile endeavor with a talented board of directors, headed up by a guy who's the epitome of the "white hat hacker." (Also, he has a very large, vicious-looking dog, the name of which I can never remember, so I keep referring to it as "Fluffy." Someday, "Fluffy" is going to show up on my front porch and drag me out to the woods and bury me like a very large bone, and I'll never be seen again. So, if you don't hear from me…)


June 3, 2017

Begun, The Agrarian Software Wars Have

Let's talk about farmers.

Growing up in Los Angeles, I didn't know much about farms or farmers. I figured that eggs just . . . well, showed up, somehow naturally and neatly deposited in those tidy, clean cardboard cartons. Milk was magically placed in bottles, at first, and then later on in cartons and plastic jugs. Meat was from an animal, I knew, but I liked to think of it as it came to me: clean, sanitary, packed in cellophane and Styrofoam. (And I preferred to think that nothing had died just so that I could enjoy that juicy ribeye; or that if something did have to die, it was a quick, painless death to which the animal had stoically been looking forward.) I thought of farming as something simple, elemental, and pastoral, in a Rockwellian sort of way. Farmers were close to the earth, literally and figuratively; it was, I thought, a simple, peaceful way to make a living. 

A modern John Deere tractor. Image by Wikimedia user HCQ, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

When I moved to Nebraska, I discovered that I was wrong about some (perhaps most) of this. I got to meet many farmers, and I worked with many men and women who had grown up on farms. They were quick to disabuse me of my naïve notions.

Farming, it turns out, is hard. It's a lot of work, and it's work that never really ends: If a farmer is not planting something, he's selling it, or tending to it, or harvesting it, or preparing the fields for the next planting. If he's not on a tractor or other piece of equipment, he's fixing or maintaining that equipment. There's almost nothing on a farm or ranch that doesn't involve some arcane set of skills and a whole lot of work. (I once accepted an invitation to go "baling hay" with a friend. It sounded like fun. It's not. It's hot, scratchy, seemingly endless work in the fields for which my puny, citified muscles were not at all prepared, and there's no rest, because the flatbed truck or trailer on which you're supposed to toss the bales—which easily weigh 70 pounds apiece—keeps moving down the field, whether you're ready or not. The woman on the trailer upon which I was attempting to toss bales of hay got a good deal of enjoyment, I'm sure, out of watching me struggle and pant. She could throw those bales around as if they were nothing; I couldn't move for days afterward.)

Farming is also expensive these days. It requires sophisticated equipment to plow, fertilize, plant, and harvest crops. (It also requires some specialized knowledge to operate such tools.) A Missouri farmer I interviewed for Leveling the Playing Field showed me a large outbuilding in which sat several pieces of heavy equipment: tractors, a combine, cultivators, a backhoe, etc. He reckoned that he had "a few million dollars" invested in this equipment. (Of course, this is on top of the cost of land, feed, seed, fertilizer, manpower, and so on.)


The cab of a modern combine can look much like the
cockpit of a jet. Image courtesy of Challenger/Caterpillar,
Inc.

So, now we know two important things about farmers: They work hard, and even the small-scale "mom and pop" farms (of which there are frighteningly few left) cost a fair amount of money to operate.

There's something else about farmers, too: They can do, fix, build, repair, or maintain just about anything. They are the ultimate in self-reliance. They have to be. If a tractor conks out, someone has to fix it, and it has to be done now, not in a few days or weeks. If a farm truck breaks down, someone needs to get it running again, and fast. If a fence is down, someone has to rebuild it. If something falls off of a piece of complicated equipment, the farmer needs to understand how the piece is supposed to work and then find a way to reattach it in such a fashion that it once again can function, at least temporarily. Trust me, if there's ever a zombie apocalypse, you want to be very good friends with a farmer. (In fact, it wouldn't hurt if you were to find a deserving farmer right now and send him a bottle of good bourbon. You know, just to pave the way before the apocalypse actually starts. Or send me the bottle, and I will see that it gets to a deserving farmer. Sooner or later.)

The bottom line is that you don't want to piss off a farmer. But that's exactly what John Deere seems intent on doing.

The issue has to do with software. (See? Technology—you knew I'd get to this.) Many of the newer machines, even the supposedly "simple" ones, like tractors, use software. Software feeds GPS signals from the cab of a tractor to its steering and determines when to turn and where to begin the next row. When fertilizing, software consults a database, downloads data, and determines how much fertilizer to use on a given piece of land, based on how much was used last season and the resultant yield: If this piece of land didn't perform as well as expected, perhaps it gets an extra blast of fertilizer; another field may get less fertilizer than it got last year, if it didn't really seem to need as much. (Fertilizer is expensive, folks. Farmers try to use it wisely.) If the machine has mechanisms through which grain or other substances flow, that output is metered by software so that the flow remains constant, efficient, and measurable.
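To make the fertilizer example concrete, here is a minimal sketch of that kind of yield-based adjustment. Everything in it (the function name, the rates, the thresholds) is hypothetical; real precision-agriculture software consults detailed per-field yield databases rather than a three-line rule.

```python
# Hypothetical sketch of variable-rate fertilizer logic; the names,
# rates, and adjustments are illustrative, not from any real system.
def fertilizer_rate(last_rate_lbs: float, expected_yield: float,
                    actual_yield: float) -> float:
    """Pick this season's rate from last season's rate and yield."""
    if actual_yield < expected_yield:
        return last_rate_lbs + 10.0  # underperformed: an extra blast
    if actual_yield > expected_yield:
        return last_rate_lbs - 5.0   # did fine: ease off; fertilizer is expensive
    return last_rate_lbs             # on target: keep the same rate

print(fertilizer_rate(100.0, 180.0, 160.0))  # 110.0 -- the underperformer gets more
```

The interesting part isn't the arithmetic, of course; it's that the rule (and its data) lives in software the farmer licenses rather than owns.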

What this means is that when a farmer buys, say, a tractor, he's also buying a lot of software.

Except that he's not. Not buying the software, that is. Technically, he's licensing the software, just as you and I do when we "buy" a copy of Microsoft Excel or Adobe Photoshop. We don't really own that software; we've simply licensed the right to use it under certain conditions. (A couple of those conditions being that we're not allowed to tinker with the software, say, or make copies to sell to our buddies.)

And just as we cannot look at, reverse-engineer, or otherwise fiddle with the software we've licensed, John Deere is telling its customers (read: farmers) that they cannot repair or modify—or have their local fix-it guy repair or modify—their equipment. If your John Deere tractor breaks down, you may not be allowed to fix it, even if you know how to fix it. And you may not be able to let your cousin Warren fix it, either, even though Warren has been repairing tractors all over the county for 30 or 40 years. You might have to send the machine 100 miles away or wait for an authorized technician to get out to your farm in order to get the machine repaired.

Unsurprisingly, this does not sit well with many of the farmers—the work-hardened, self-reliant men and women who A) are used to repairing equipment right there in the field, if need be and B) don't like to be told what to do in the first place.

Who could have predicted that software and farming would collide in such a manner?

It doesn't look good for John Deere, by the way. First, the Supreme Court recently determined that if someone buys a Lexmark laser printer, Lexmark has no right to stop the buyer from refilling toner cartridges or from buying cartridges refilled by someone else. In other words, if the buyer bought the printer, he bought the whole thing, and could do with it whatever he liked. (As one summary of the decision put it: "Today, the Supreme Court reaffirmed that a patent does not confer unfettered control of consumer goods to the patent owner.") It's not difficult to imagine the Court reaching a similar decision about the software that makes your tractor or combine work.

Second, and perhaps more importantly, farmers are a sturdy, realistic lot, and they don't take well to being bullied. If competing heavy machinery companies are smart, they'll simply start offering equipment that does allow the farmer or rancher more freedom to tinker, repair, or modify equipment they've bought. Many of these farmers lease new machines every year; it will not bother them one bit to be seen in a combine that happens to be Case red, Caterpillar yellow, or Kubota orange, rather than one in the more traditional John Deere green.

Don’t piss off farmers, especially when they’re your best customers.


April 18, 2017

Are We Hyper-Technologized?

I love technology. After all, it's how I make a living, and I truly enjoy taking advantage of cleverly engineered, well-built tools. (And for me, "tool" could mean anything from a smartphone to a well-made crescent wrench.) I'm old enough that I still feel a bit of a thrill when I power up a computer or realize that, with modern tech, I can do things that in previous years would have been difficult, expensive, or simply impossible. Technology continues to amaze and enthrall me: drones, GPS, digital assistants, desktop publishing…. For me, all of these things bring to mind Arthur C. Clarke's famous dictum (now a bit over-used, I suppose): "Any sufficiently advanced technology is indistinguishable from magic." Much of this stuff still seems magical to me, even though I know something about how it's done.


This is an Oregon Scientific weather station much like the
one that Lesley and I have in our home and which only
one of us has learned to use.


For example, Lesley and I have the world's most awesome weather station. Among other things, it includes an electronic rain gauge that sends a very precise rainfall measurement to a central display unit that's kept in the house. Any time we want, we can simply look at the display and know that we have received exactly 2.736" of rain over the past 24 hours. (Technically speaking, we cannot do this. Lesley can do this. I have not mastered the rigorous calculus that's apparently necessary to tell the display that we want to see the rainfall totals. So, I just randomly push buttons until something happens. Sometimes Lesley comes and rescues me, but most often, I end up with a display of temperature or wind direction, or possibly a readout of my next-door neighbor's teenage son's digital music collection or my pickup truck's current gas mileage. Both of the last two are kind of depressing.)

The thing is that this rain gauge doesn't even really measure rainfall—not directly, anyway. It's engineered such that a small catchment collects rainwater through a funnel arrangement. That small container is attached to an arm, and once it has collected the appropriate amount of rain, based on weight, the arm swings down and dumps the rainwater out the bottom of the device. And every time that happens, a counter is incremented. Since we know how much water weighs (if you're curious, it's about 8.3 lbs. per gallon, though rainfall in Los Angeles—being full of various poisonous particulates—tends to weigh more) and how much the catchment holds, the machine can be calibrated to convert the number of times the counter has incremented into accurate measurements of rainfall.

That's pretty damned clever, isn't it? There's a lot of math and machining and electronics and manufacturing know-how in that little rain gauge.
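The conversion itself is simple arithmetic. Here's a hypothetical sketch of it; the funnel area and bucket volume below are made-up calibration values, not Oregon Scientific's.

```python
# Hypothetical tipping-bucket conversion; these calibration constants
# are illustrative, not taken from any real gauge.
FUNNEL_AREA_SQ_IN = 25.0  # area of the collection funnel's opening
BUCKET_VOL_CU_IN = 0.25   # water volume that tips the little bucket

def rainfall_inches(tip_count: int) -> float:
    """Rain depth = (volume dumped per tip / funnel area) * number of tips."""
    return tip_count * (BUCKET_VOL_CU_IN / FUNNEL_AREA_SQ_IN)

print(rainfall_inches(100))  # 100 tips works out to about 1.0 inch
```

Calibration, then, is just a matter of knowing those two constants precisely.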

Mind you, we also have an old fashioned, clear plastic cylindrical rain gauge in a holder attached to the side of our deck about two feet away from the digital rain gauge. It was manufactured right here in Lincoln, Nebraska by Garner Industries, and it probably cost about $7 a few years back. If Lesley's not around to rescue me, and if I get tired of randomly pushing buttons on our fancy weather station display, I can always just glance over at the plastic "analog" rain gauge and see how much rain we got. And then . . . well, actually, that's it. I'm done. If I need to start a new countdown, I can "reset" the gauge by picking it up and turning it over so that the rainwater dumps out onto our . . . um, whatever those plants are off to the side of the deck. (Plants and weeds all look alike to me, which is how I've gotten out of weeding for the past several years.)

Or, one could use a simple, inexpensive
plastic rain gauge, such as this one.

Sometimes we end up with technology that was created simply because it could be created or because someone thought it would be cool or because we're determined to improve on the old way of solving a problem. Much of the time, we don't really need it, and there may even be times when it's more trouble than it's worth. So, although I think it's kinda fun, it would be a stretch to say that we actually need a digital weather station. I like it, but I can't say there aren't other (often easier, almost always less expensive) ways to get the same information.

Take bomb detection. Surely this seems like a worthwhile endeavor and something worth spending money on. So the U.S. armed services (and researchers in their employ) have spent millions on various types of metal detectors, special cameras, and chemical sniffers. This has resulted in about a 50% success rate in Afghanistan and Iraq.

Of course, being able to locate half of the IEDs (improvised explosive devices) scattered along a roadside or in a field is nothing to sneeze at. But you know what's proven to be much, much more effective? A dog. When dogs are used to patrol, that 50% jumps to 80% or more. And the thing is that DARPA (the Defense Advanced Research Projects Agency) has been trying to come up with something that's better than a dog since 1997. Can't do it. Apparently, there is nothing better than a dog. A well-trained dog is very, very good at detecting bombs. (Or hard drives, or dope, or people, or just about anything else you care to train a dog to detect.) There is simply nothing trainable on the planet that's better at literally sniffing things out. (Which makes sense. Consider that the typical human has about 5 million olfactory receptors in his or her nose, while a dog has more than 220 million such sensors. To be a dog is to inhabit a world much richer, more fragrant, and probably much more interesting than the drab one in which you and I live. Also, they get tummy rubs.)

Training and provisioning a dog costs money, of course. Some sources say that a trained bomb-sniffing dog can cost between $5,000 and $25,000 or more. (That's a rather large variance, of course. Perhaps a bomb-sniffing Bichon, being more . . . uh, portable, is worth more than a bomb-sniffing Doberman?) But even at the high end, that's much less than the cost of most hi-tech bomb-detection tools, and the dog is easy to operate and also serves other functions. And in the end, the dog simply works better than the hi-tech tools.


This is a cute dog. Bichons are so cute that
they basically look like they just escaped
from a comic strip. A Bichon might make a good
drug sniffer, but as an attack dog, it falls somewhat
short. Photo licensed under the Creative Commons
Attribution 3.0 Unported license by user Rocktendo.

Altogether, the Pentagon has, since 2004, spent about $19 billion on bomb-detecting gadgets and other hi-tech mechanisms meant to deal with insurgent networks and the IEDs they plant. (Even if a trained dog cost $20K, that means that our $19 billion would buy about 950,000 dogs. That's a lot of dogs. I'm pretty sure that if you simply let 950,000 trained dogs loose in Afghanistan, the war would be over in days. Although I'm not sure who would have to clean up the place afterward.) One of these hi-tech gadgets is VaDER (am I the only one who reads a certain evil malevolence into that acronym?), which DARPA would like us to believe stands for Vehicle and Dismount Exploitation Radar, but which is obviously just an excuse to come up with a Star Wars-themed anti-insurgency device. VaDER is a $138 million aircraft-mounted sensor that tracks moving targets on the ground below. We don't really know how well VaDER works, because a spokesperson said only that it and related tools were "enormously useful." So, that's good; wouldn't want to spend that kind of money on something that was only "mildly useful" or "somewhat useful."

I like clever stuff, but we seem to have a facility for over-engineering solutions, is what I'm saying here. Do we really need a toilet seat that automatically closes when the user (a man, one assumes) walks away? Can't the guy just put the seat down? Or couldn't the next person to use the toilet simply put the lid down? How hard is it, really? Or maybe you need a connected weight-loss fork that vibrates when you've eaten too much! Or possibly some air-conditioned shoes? (These look suspiciously like . . . well, shoes with holes in them. Say, I guess I already have some air-conditioned shoes down in the basement! I would be willing to sell those to you for, oh, $30 each. That's $48 off!) How about a mug that lights up to indicate the temperature of its contents? So you can tell if your tea is too hot, I guess. Just take a sip, dammit! If it burns, it's too hot; go take a walk in your air-conditioned shoes for a few minutes while your tea cools off a bit.


April 4, 2017

Your Car May Decide to Kill You. Or Not. It Depends. 

I spent some time writing software and running the development side of StudyWare, a small software company based in San Diego, CA. And after our company grew large enough that we could afford to hire programmers and analysts who actually knew what they were doing, I spent several years managing those who wrote both the software and the content to be used with that software. (I can't tell you how nice it was when we got to the point where we could afford to hire real programmers. I truly enjoyed programming, and I think I did some clever stuff; but compared to the talented, experienced developers we hired, my efforts were laughably inelegant, unsophisticated, and clumsy. But hey, at least I was also slow.)

An early StudyWare package. The packaging
and the software eventually became much more
sophisticated.

In other words, when it comes to building and delivering software, I speak from experience. Thus, I can say with some confidence that software behavior is largely about decision-making: Your code does a particular thing until something happens, at which point it does something else. It's a very strict, Boolean environment; the code always behaves according to some very exacting logic. (Not always the logic you had intended, mind you, but that's a subject for a different post.) Essentially, a huge part of the functionality of software hinges on decisions made about whether something is true or false. If X has happened (that is, if it's true), then do Y. For example, if the system's internal clock has counted out X number of seconds or minutes, then Y should now occur. (In this case, perhaps Y is that a bell should chime to let you know that it's time to go turn off the stove, call your mother, or move your laundry into the dryer.) Or, if the user has entered a particular word into a textbox, find and highlight all occurrences of that word in a document. That sort of thing.
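The timer example might look something like this in code. This is a toy sketch (the names are mine, and real alarm code would schedule a callback rather than busy-wait), but it shows the if-X-then-Y shape plainly:

```python
import time

def kitchen_timer(seconds: float) -> str:
    """Wait until the clock has counted out `seconds` (X), then chime (Y)."""
    start = time.monotonic()
    while time.monotonic() - start < seconds:  # X is not yet true...
        time.sleep(0.01)                       # ...so keep doing "a particular thing"
    return "Ding! Go turn off the stove."      # X is true: do Y

print(kitchen_timer(1.0))
```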
At any rate, the point is that I have been in the trenches, and I've worked with others who've been in the trenches even longer than I. So, I have indeed ridden the software dev dragon and I have tamed (or occasionally been tamed by) the beast. 

It's a very pragmatic and ruthlessly logical approach. There's not a lot of room for . . . well, heart. Software doesn't feel.

And yet, programmers do have hearts. They do feel. They do have consciences. (I know a programmer who once worked for a defense contractor that built missiles. After several years of doing that, he was looking for a graceful way out for a number of reasons. One of those reasons had to do with the products he was designing. He said, "If I do my job well, somebody dies. If I do my job poorly, somebody else dies.") So, while software may be said to have no heart, we can definitely see examples of software that has to have, for lack of a better term, a conscience of sorts. Or more accurately, it can sometimes come to represent the programmer's or designer's conscience.

One increasingly obvious example of this has to do with the design of autonomous cars. You wouldn't think that conscience or morality would enter into something so utilitarian, but it turns out that programmers working on such devices are having to make decisions that are essentially moral. They involve not math but ethics. (Or more accurately—and much more interestingly—a combination of math and ethics.)

Part of the designer's job is to anticipate certain scenarios, and to program the automobile (in this case, it's truly an automobile) to respond appropriately to certain scenarios. Thus, the car watches for pedestrians who may step in front of the vehicle, vehicles that may run a red light and enter an intersection unexpectedly, traffic signals that are about to change, etc. It's actually very impressive that these systems can almost flawlessly respond to changes in the environment and that they usually render a decision that keeps drivers, passengers, and nearby pedestrians safe. (Of course, usually is not the same as always, so we have seen accidents, some of them fatal. This is dangerous stuff, after all, and we are on the bleeding edge of tech here.)

But imagine a scenario such as this: Bob is in an autonomous vehicle that's proceeding along a one-way, one-lane street, when suddenly a pickup truck enters from a side street on his right. Bob (well, in this scenario, Bob's car) has three options: the car can veer left, veer right, or plow straight ahead. (We'll assume for now that things are happening too quickly for braking to be effective.)

Nothing good can come from any of these options. Perhaps Bob veers left, up onto the sidewalk, where an older couple is slowly making their way over to a nearby vehicle. One possible result? Two dead elderly citizens. The car could veer right, but what if on the sidewalk to the right was a group of schoolchildren being led by a teacher at the front of the line and an adult aide at the end? Possible result? Dead or injured children, along with possible harm to the adult leaders. If the car continues straight ahead, it will T-bone the truck, and the impact will almost certainly harm or even kill the driver of the truck and his passenger; the crash might also harm or kill Bob himself.

You’re probably thinking that this is far-fetched, simplistic, and unrealistic. But it (or something like it) can occur; I would bet that this sort of thing happens at least weekly in every major city. (In 2016, there were 55,350 traffic accidents in Los Angeles, and 260 people were killed in those accidents. About 229 people died in New York City accidents that year.) Of course, when a person is driving the car, that person is responsible for the split-second decision he or she is about to make. Someone is going to get hurt, no matter what. And there often isn't time for a driver to consciously think about that decision; he simply reacts. Hopefully, no one is hurt.

But the programmers and designers and analysts who build autonomous vehicles have to consider such scenarios; they do have time to think, and they have to program into the system what they feel is an appropriate response. They must tell the vehicle, "When faced with this scenario, do this." Those programmers just made a life-or-death decision. They had no choice. They have to tell the car to do something, after all. (Keep in mind that opting not to do anything is also a decision.) They have to encode the system, the "brain" of the car, to behave in a certain fashion in response to certain inputs.
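To see how stark that encoding is, here is a deliberately oversimplified, entirely hypothetical sketch. No real autonomous-vehicle stack looks like this, but some ordering of outcomes like it must exist somewhere in the system:

```python
from enum import Enum

class Action(Enum):
    VEER_LEFT = "veer left"
    VEER_RIGHT = "veer right"
    STRAIGHT = "continue straight"

def collision_policy(left_is_clear: bool, right_is_clear: bool) -> Action:
    # The ordering of these checks IS the life-or-death decision the
    # designers made ahead of time. Note that falling through to
    # STRAIGHT ("do nothing") is itself a decision.
    if left_is_clear:
        return Action.VEER_LEFT
    if right_is_clear:
        return Action.VEER_RIGHT
    return Action.STRAIGHT

print(collision_policy(False, False).value)  # neither side clear: "continue straight"
```

The unsettling part is what would have to replace those two booleans: classifications of who is standing on each sidewalk.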

So, what should they decide? Assuming that the technology has advanced to the point that the car can tell what it's about to hit (and I think that is or soon will be the case), does Bob's autonomous vehicle veer left or right? Does it put Bob at risk, or some schoolkids? Or do we aim the car at the elderly couple? Are the schoolkids' lives worth more than the lives of the two older people? Or does the car determine that Bob must sacrifice himself?

It's interesting to talk about this kind of decision-making, of course, and I have had some enjoyable discussions (and even arguments) with students about this sort of thing. (And similar logic/ethic puzzles have been around since long before the advent of autonomous vehicles.) But for the purpose of this discussion, which decision the programmers should make isn't even the main point; the important thing is that we've reached a point at which such decisions have to be (and are being) made.

Technology and morality or ethics have always been connected, of course. After all, technology is used (and misused) by people, and people are moral animals. (Or, depending on your perspective, perhaps you feel they are amoral or even immoral animals.) So how we decide to use a technology, and for what purpose, may have always been a decision that has had an ethical component. (After all, I can use a handgun to protect my family, or I can use it to rob a bank or mug that elderly couple we were discussing a moment ago. Even a lowly hammer can be used to build a home or repair a fence, harm a person or destroy the display window of a downtown shop.)

So, having to consider an ethical component in a technology is certainly nothing new. But having to program an ethical component, having to make those sorts of decisions ahead of time and at a remove, is something that many of us have not considered until now. We (or the car's designers, at least) find ourselves in an uncomfortable position: how do we decide which lives are more valuable than other lives?

That's not a decision I would want to be forced to make.


March 5, 2017

The Sky Isn't Falling. Yet.

I really love the Internet. I get a kick out of technology in general, of course, but I'm crazy about the Internet in particular. When you think about what it's given us—communication, information, empowerment, and more—it's difficult to come up with too many other technologies that have had this great an impact. To a great extent, the Internet has truly democratized information.

And yet . . .  When I stop and think about it, I kind of freak out. I mean, I don't want to sound alarmist or anything, and I generally like to stay calm about the issues, but I THINK WE'RE ALL TOTALLY SCREWED!!

OK, there. I feel better now. I'm calm. But here's what I mean…


This is Hollywood Presbyterian Medical Center in East
Hollywood, CA. The hospital paid $17,000 to recover
its ransomed data files.

Let’s start with ransomware: This is malware that, when accidentally downloaded (generally by people who have ignored the basic security rules that tech people keep trying to get them to follow), encrypts your files, which it then holds for ransom. (The ransom varies, but $300 to $500 or so is a typical ballpark for individuals being attacked: enough to make it worthwhile for the bad guys, and just barely cheap enough for most of us to at least consider paying the ransom.) In most cases, the encryption is done very well and very quickly; you are not getting those files back unless you pay the ransom. (Or unless you have a good backup and know how to restore your files from that backup.)

Businesses and individuals have been getting hit with ransomware regularly, but more recently, the bad guys have discovered other tempting targets: municipal entities, law enforcement agencies, and hospitals, for instance. Think about it: A small police department or hospital has data that is very important, sometimes literally a matter of life and death, including such things as patient records, info from medical devices (sometimes from various implants), evidence stored for court cases, and more. This is critical stuff. The data should have been backed up and the organization should have a relatively bulletproof backup-and-restore process in place, but many such entities do not. That's why the combination is almost irresistible to bad guys: These organizations have critical data they cannot afford to lose, and crappy (or sometimes non-existent) IT departments. The result? These are big, juicy targets; crooks can easily mount an attack, and the payoff can be big. 

How big? Last year, bad guys encrypted data from the Hollywood Presbyterian Medical Center, and demanded $3.4 million (in untraceable Bitcoin, a digital cryptocurrency) to give it back. Hospital executives declared a state of emergency and employees reverted to paper and faxes. (Ironically, it's sometimes possible to negotiate with the thieves; in this case, the hospital eventually paid about $17,000 to get its files back. Still, $17,000 is a pretty good chunk of change.)

Of course, there are other attacks, and other types of attacks.

Last December 23rd, unknown intruders (possibly state-sponsored actors under Russian control, though this remains unproven) hacked into the computers of Ukraine's (please do not ask me to pronounce this) Prykarpattyaoblenergo electrical control center. Operators watched, dumbfounded and helpless, as an intruder simply navigated through onscreen menus, shutting down some 30 electrical substations, one mouse-click at a time. The hacker then disabled backup power supplies in two of the region's three electrical distribution centers, leaving all concerned literally and figuratively in the dark.

About 230,000 people were suddenly without electricity in an area where the temperature that evening dropped to around 14 degrees Fahrenheit. (Lest you think that the U.S. power grid is more secure and sophisticated than a control center in Ukraine, note that many experts said that the Ukrainian station was better secured than many U.S. stations.)

This is the first known hack of a power grid that resulted in a power outage of that size, but it's probably not the last. (For a sensational—some reviewers said sensationalist—read on the subject, see Ted Koppel's Lights Out.) The reality is that, as unsecure as our private infrastructures (see the hospitals and corporations mentioned above) are, many government and quasi-government infrastructures are even more disorganized and less secure. (If this surprises you, then you haven't been paying attention to news of the DNC—and now RNC and other—hacks. Also, you've never been in the Army.)

Ted Koppel's book is a sobering look
at the vulnerabilities of the US
power grid.

Here's the problem in a nutshell: We took an inherently unsecure technology, the Internet (which was created to share, not hide, information), and made it into the backbone of both our infrastructure and our economy. We've taken steps to make it more robust and mitigate its weaknesses, but the reality is that just about everything—from our power grid to our banking industry and from hospitals to law enforcement—now runs on what turns out to be a vulnerable and easily crippled technology.

And it's going to get worse as the Internet of Things takes hold. The IoT involves connecting literally billions of things to the Internet, everything from your toothbrush to your thermostat and from your doorbell to your dog’s water bowl. Those connections will, for the most part, make your life much easier. Until suddenly they don't.

Take baby monitors, for instance. It's comforting to know that your child is safe and snug in his bed; being able to hear the cooing sounds your toddler makes as he sleeps is soothing. Hearing the voice of some stranger speaking to your child through the monitor is definitely not soothing, but it has happened on occasion. Why? Well, the baby monitor is on your wireless network, and is probably not very well protected. Neither you nor the manufacturer took steps to secure that device.


This is just one of several brands of baby monitor
that has been hacked.

But the technology itself is not the only major problem. The other weakness is . . . well, us. Any security pro will tell you that the biggest vulnerability is human, the people standing between the palace door and the storeroom in which the crown jewels are held. Basically, people are not very good at security, because we're lazy, naïve, and entirely too nice. We really, really want to be helpful, so when we get an email asking for information, we're all too ready to part with that information. When someone claiming to be a hardware tech or copier repair person shows up at a place of business with a clipboard, a baseball cap with a company logo, and a good story, people are almost always willing to "help" him by parting with names, phone numbers, even passwords.

Almost without exception, we are the weak link in the security chain. We click links in phishing emails, visit sketchy websites, download suspicious files, and answer the (seemingly innocent) questions of people who wander into our places of business. We place all our very personal information on the Internet for anyone to see: between Facebook, LinkedIn, and Twitter, anyone looking for information about you or your business has all he needs. 

Chris Hadnagy is a security expert and a penetration tester; companies pay him to break into their networks in order to uncover flaws. Chris says that he can "social engineer" (read: schmooze, lie, or finagle) his way onto any corporate network well over 90% of the time. Years ago, says Chris, the difficult part of his job was uncovering enough information to be able to mount a convincing deception. Now, he says, with all the information floating around on the Internet, his biggest problem is sifting through the tons of data available to decide which pieces are most useful.

Still, a hacked baby monitor or an individual who’s fallen victim to ransomware is not what worries me. We can learn to protect ourselves; if we don't, then we have only ourselves to blame.

But state-sponsored attacks on infrastructure are another story. Weapons are rarely made without someone wanting to find an excuse to use them, and the Internet is, among other things, a weapon. It's simply too terrifyingly easy to conduct an attack that could turn into a full-blown cyber war. A digital attacker risks nothing, really. It's a form of warfare that, unlike all other forms, is cheap, fast, simple, and deniable. That’s a temptation too alluring to ignore. You can engage an enemy anonymously from half a world away, and there's absolutely no risk that you or any of your fellow "soldiers" will get hurt. You can cripple a region—or possibly an entire country—with just a few well-placed strikes. Whether the attacker is a state actor (or someone who operates at the behest of such actors) or an independent guerilla operator, the technology is too available, the risk is too small, and the payoff too big to ignore.

And that is what worries me. I do believe that we will eventually address many or even most of these security issues, but I suspect that our actions will be reactive in nature: nothing will be done until something very bad happens, and then suddenly security will be on everyone's mind, from our legislators to our law enforcement people, and from infrastructure developers to IoT manufacturers.

We should probably be thinking about such matters before the sky starts falling.


September 11, 2016

The Internet: Making Smart People Do Stupid Things Since 1590

Let me tell you a very sad story. (You'll probably need some tissues. I can wait while you get some.... Ready? OK.)

Nichol, a Frenchman stuck in a Spanish prison, has very little time left. He is dying, and the bad food and damp, dank air in the prison are contributing to his ill health and hastening his impending end. He knows he will soon die, but he has something very important to do: He must save his daughter. With Nichol gone, sweet, innocent Mary, only 17 years old, will be destitute. But Nichol has a trick up his ragged sleeve: He has bribed a jailer to deliver a letter to Mr. Fitch, a man of wealth and power who lives in America. The letter notes that Nichol has access to vast sums of money, or would if he were free. The money is in fact hidden not far from where Fitch resides, because Nichol himself, on a previous visit to America, buried the funds in a forest near Fitch's home town. He can direct Mr. Fitch to the money if Fitch will pay Mary's passage to America and then agree to raise the young woman as his ward. Nichol may well die, but at least his fortune and his daughter will be safe.


A real news report about an actual airline disaster, which a scammer will almost certainly now attempt to use as part of an advance fee fraud.

Of course, the saddest thing about this story is the fact that so many people believed it and sent money to Spain so that Mary could live happy and free. (And so that they could pocket Nichol's fortune after his demise.) In other words, it was a con. A fraud. A swindle. 

If the con sounds familiar, it's because there truly is nothing new under the sun. This is called an "advance fee" fraud, because the victim is asked to pay a relatively small fee in advance of receiving a much larger payment. Of course, that larger payment never shows up.

This is perhaps the most direct ancestor of such modern advance fee frauds as the so-called Nigerian scam: swindles in which a mark is persuaded to pay various "fees," "insurance," or "taxes" ahead of receiving his share of some enormous fortune. Some versions of the scam may involve checks being delivered to the victim, out of which he is supposed to pay certain fees, taxes, or shipping costs by forwarding a percentage of the received monies to a "government official" or "shipping" company. Of course, the check is bogus; the "shipping company" is in fact the scammer himself, and when all is said and done, the victim is on the hook to the bank, having deposited a bad check into his account and sent his own money to the scammer.

There is almost no end to the types of advance fee frauds one might encounter: work-at-home schemes, model and escort agency dodges, employment frauds, cash handling (read: money laundering) cons, lottery scams, and Craigslist ruses in which someone selling legitimate goods is sent a fake check for more than the selling price, with the extra to be wired to a third party. (Once again, the check is bad, and the seller is on the hook for the check and any "purchased" goods he may have already shipped to the scammer.)

There's also almost no end to how much the scammer will attempt to bleed the mark. Once you pay the initial fees, you've established yourself as the type of person a scammer loves most: gullible and affluent. The next step, of course, is to inform you that more fees are due or that some other issue has arisen that requires more funds. This will continue until the mark is bled dry or finally realizes that he's being scammed.


But let's not blame the Internet, because all of this really has little to do with technology, and much to do with the nature of people. We're greedy, and we like to think that we can get something for nothing. We can't. But we never stop trying: This sort of fraud (known during the 19th century as "the Spanish prisoner con") has been going on since the 16th century, and there's no reason to believe that it will ever stop. 


Still, the Internet does help the scammer: Digital communications make it much easier to scam more people more quickly. (Let's hear it for efficiency!) After all, it costs the scammer almost nothing to send out hundreds of scam emails. If the return is very, very small, it doesn't matter, because it didn't really cost him anything. 


Not only that, but the scam quickly becomes self-selecting: The scammers want the smart people, the ones who are a bit wary, to pass on the scam as quickly as possible, because those are the people who would wise up before the scam was successful. The scammers would end up wasting time trying to convince someone who is already wise to the con, so they'd just as soon those people immediately delete the initial scam emails. What's left? Gullible people. Greedy people. Folks who are desperate to make a quick fortune. Those are the ones they can string along for weeks or months, the whole time siphoning off funds that the victims probably can't afford to lose.


And, believe it or not, advance fee scammers do make money. A lot of money: According to Ultrascan, a group of Dutch fraud investigators, $12.3 billion was lost to the con globally in 2013. This is, after all, why the scam is still with us: Dating back to the late 1500s, it's almost literally "the oldest trick in the book," but it still works.


September 5, 2016

Yep, Your Mom Was Right. Again.

A while back I received a nice, chatty Facebook message from my good friend LaWanda, to whom I’ve not actually spoken face-to-face in several years. (I've changed the name to protect the innocent—also because I really like the name LaWanda.) I worked with the woman a few years back, and we've stayed in touch, more or less, via FB. I get to hear all about (and see photos of) her kids and grandkids and granddogs and new bathroom tile and incredibly intelligent houseplants and the like, and she gets to hear all about . . . well, mostly about my books and occasionally about my incredibly intelligent granddaughter. (But, see, my granddaughter really is incredibly intelligent. And beautiful. Also, if she’s reading this, she should CLEAN HER ROOM!)

Anyway, LaWanda was just "checking in" to see how I was doing. And I thought, "Well, how nice!! This woman not only remembers me, but actually cares how I'm doing. Man, I must be a whole lot more personable than I thought!"

But it turns out that I'm not personable at all. My “friend” jumped right from "checking in" into wanting to know if I'd heard her good news—which turned out to be that she had won some sort of multi-thousand dollar lottery prize, using secrets that she was willing to share with me. Because I'm just so damned personable.

So, you know where this is going . . . .  When I received the second message, I realized that someone was trying to scam me. And I also remembered that a few weeks before this, I had accepted a "friend" request from LaWanda, even though we were already friends. I would like to say that I accepted the duplicate request because I had thought about it and assumed that for some reason she had had to start a new FB account, but in truth, I'm just old and forgetful, and I can barely remember the name of my dog. (It's "Annie,” OK? The dog's name is "Annie." I'm just making a point here; work with me, alright?) Basically, I was on automatic pilot and didn't give it much thought; I knew LaWanda, and that was good enough for me. Click.

Annie, protecting us from the evil squirrels.


Dumb. But I was smart enough not to let it go any further, and awake enough to warn the other people on the fake LaWanda's "friends" list (there were only a few, which was a giveaway in itself) that they (and I) had "friended" an impostor.

This sort of fakery (I almost typed something else there) has been going on for a while now. Facebook is terrible at policing itself and watching for this sort of thing. There are tons of scams littering everyone's favorite social network. Almost any time you see something like, "How many likes can we get for poor Fred here?" it's a scam of some sort. Poor Fred is almost certainly not stuck in some cancer or burn ward in a faraway hospital. (And if he were, your "likes" wouldn't help him. Also, his name's probably not Fred.) And you’re not going to get cheap Ray-Ban sunglasses, either. (You'll get cheesy knockoffs, if you get anything.) You’re also not going to win a red or blue Camaro, a Land Rover or Land Cruiser (not that I can ever remember which is which), an all-expenses paid 3-day trip to a tropical (or any) island, a classic 1970 Dodge Charger (though I would really, really like one of those), or a fancy motorhome. Nor are you going to win that free cross-country flight on South West Airlines; the airline does not spell its name that way and its website is not at www.south-west-air.com or www.south-west-airlines.com, or any of a dozen other almost correct URLs.

Most of these things are either like-farming or survey scams. In a like-farming scam, the crook really just wants to collect as many "likes" and "shares" as he can, so that he can turn around and sell his “high volume” page to other scammers who will use it to do even worse things. 

Yeah, you're not gonna win this (or any) Dodge Charger (or cabin in the woods, luxury home, or new RV). Sorry.
Image courtesy of Brett Christensen, Hoax-Slayer.com.


One of those “worse things” is a survey scam. This is a swindle in which you’re offered something very, very cool (a free MacBook, for instance, or a nice camera) and All you have to do is Like and Share our page!! Except that, really, you’re just going to get sucked into a series of online “confirmation forms” and surveys, and when you get finished there will not be a free MacBook waiting for you. Get it? There is NEVER going to be a free MacBook. Or a free anything, even after you jump through all the hoops. The scammer is trying to collect as much info about you as he can so that he can sell that data to other scammers (or possibly use it himself to steal your identity), and while he’s at it, he gets paid for every dumb “survey” you fill out. (You might also find out that you’ve just signed up for expensive messaging services, etc.)

In other words, the Internet is full of lies. And liars. Which is too bad, because there really is some kid in a burn ward or cancer ward somewhere, and that kid really does deserve our "likes" and maybe even our money, but it's almost impossible to figure out which one of the FB posts about him is legit.

All of this boils down to, “Mom was right.” If something sounds too good to be true, guess what? It’s not true. (Mom is almost always right. She’s the mom, after all.)


Trust Mom. (And also Snopes.com and Hoax-Slayer.com.)


July 8, 2016

FYI: It's the end of the world as we know it

All of these new-fangled technologies—texting, emojis, email, social networking and the like—are destroying our ability to communicate. They're making it impossible for young people to concentrate, to speak and write grammatically, and to communicate effectively; in the end, they're doing serious harm to the very language itself.

Or so say many. As one teacher complained about his students, "They use 'cuz' instead of 'because,' and IDK instead of 'I don't know.' They're shortening their lingo instead of using proper English." (I'll just point out that "lingo" is itself a shortened term, probably derived from the Portuguese "lingoa," which in turn comes from the Latin "lingua.")

Jacquie Ream, a former teacher and the author of K.I.S.S.: Keep It Short and Simple, noted, “We have a whole generation being raised without communication skills.” She and others contend that texting is destroying the way young people think and write.

And yet, the destruction seems awfully . . . slow. Technology has apparently been ruining the language for quite a while now—many dozens or hundreds or even thousands of years. And yet here we (and it) still are. You would think that, by now, technology would have succeeded in destroying the language. Perhaps it needs to work harder; apparently, destroying a language—or our ability to use a language—is not as easy as it looks.

There have always been plenty of critics ready to point out the dangers that new technologies pose to our ability to communicate and to think. And they have been ready for a very long time, beginning with the most foundational technologies—ones that predate the iPhone and texting and Facebook not by years, but by centuries.

Writing itself, for instance. In his Phaedrus, Plato has Socrates recounting a story in which the inventor of writing seeks a king's praise. But instead of praising him, the king says, “You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant.”

So, at least for some people, even the invention of writing horrified the older generation. It was, after all, a new technology, and one which completely altered the acquisition, storage, and dissemination of information. Talk about a game-changer—and you know how we old people hate change.

And that's always how it goes; the younger generation adopts new tools, while the older generation looks on aghast, certain that what they're witnessing presages the end of our ability to think, to work, to communicate.

These days we're fine with writing. In fact, it's the demise of writing we're worried about.

In another recent development, it turns out that the use of pictures (such as emoticons and emojis) to replace words confuses—and perhaps angers—some people. British journalist and actress Maria McErlane told The New York Times that she was "deeply offended" by emoticons. "If anybody on Facebook sends me a message with a little smiley-frowny face ... I will de-friend them ... I find it lazy. Are your words not enough?" Ms. McErlane apparently has a very short temper and way too much time on her hands.

Invented in 1982 by Scott E. Fahlman, a computer science professor at Carnegie Mellon, the first emoticon was a sideways smiley face made up of a colon, a hyphen, and a right-parenthesis. It was created explicitly to add information to plain text messages, the underlying context of which might otherwise be misunderstood. (Was that a joke? Is he serious? Should I be angry? WHAT DID HE MEAN BY THAT?! OMG!)

And thus began the end of the world as we know it. I mean, not counting Socrates and such.

John McWhorter, with whom I traded emails while researching Leveling the Playing Field, is a linguistics professor at Columbia University. He has studied texting and writing—and communication in general—and he says that we're looking at this whole texting thing all wrong. Texting, says Dr. McWhorter, isn't writing at all, and thus has little or no effect on writing. Texting, says McWhorter, is actually "fingered speech."

In McWhorter's view, rather than being a bastardized form of writing, texting is more akin to—and follows fairly closely the rules of—spoken language, complete with its shortcuts, telegraphic delivery, fragmented utterances, and the use of "body language" (in this case, emoticons, emojis, and the like) to clarify and add context to an otherwise potentially ambiguous communication.

Many of us seem to think of texting as something less than writing, something that represents some sort of communicative decline, but McWhorter insists that this is not so. “We think something has gone wrong, but what is going on is a kind of emergent complexity.”

Which may be a way of saying that my granddaughter was right. During a discussion of this topic, she suggested that perhaps what we're seeing is not the death of one language, but the birth of a new one.

Of course, there are almost certainly other problems caused or exacerbated by technology and social media; there's even an argument that so-called social media has, ironically, made us less social—but that's a topic for another post, or perhaps another book. But for now, it appears that our students' inability to communicate does not seem to have been caused by new technologies. If we're encountering young people who no longer know how to punctuate, how to write a coherent sentence, or how to craft a cohesive essay (and I see such students daily), perhaps we should look elsewhere for the cause; it may turn out to be a failure of some other system.
