Apart from the works of the late Douglas Adams, I've read little science fiction in the last two or three decades, but I obviously read way too much in my youth because some of today's technology headlines scare the bejeebers out of me. I don't even know what a bejeeber is or how many I used to have, but I know that they've all been scared out of me.
You know the plot line. Humans develop technology. Technology turns on humans. Human folly is laid bare. The most famous example is probably HAL in 2001: A Space Odyssey. Everyone who saw it remembers HAL's chilling line, "I'm sorry, Dave. I'm afraid I can't do that." Sci-fi fans can no doubt name several variations on that theme.
What made me think of this? The New York Times recently published an article, titled "A New Model Army Soldier Rolls Closer to the Battlefield," about the coming of robot soldiers. How scary is that?
The Pentagon is planning to spend $127 billion (we can trust the military to come in on budget, right?) on a project called Future Combat Systems. At first, the robot soldiers developed under this project will be remote controlled, but according to the article, "Military planners say robot soldiers will [eventually] think, see and react increasingly like humans." Great. My computer still periodically generates the blue screen of death, and I can't load a software patch without rebooting before it takes effect, yet they think they can build robots capable of reliably distinguishing the good guys from the bad guys in order to take lethal action against the latter. I hope they haven't hired the programmers who were responsible for Windows security.
Why are they doing this? One reason is obvious: Why put lives at risk when you don't have to? That's a good point, but it's also part of what I find frightening. I don't want a single American life, or any other life, wasted. However, war without consequences for one of the sides is a chilling prospect. I have confidence in the good intentions of the government. What I am less confident in is its ability to make decisions that consistently advance its goals rather than inadvertently work against them.
The decision to go to war should be the most difficult decision any government makes. I would like to think that when a government is contemplating putting its own citizens in harm's way, it will give a thought or two to whether the cause is really worth the cost. I won't always agree with its assessment of that balance, but who am I to judge these things?
My question is this: What happens to the war decision-making process if you remove the human consequences from one side of the equation? What if, because all of its soldiers were robots, one side didn't have to put the lives of any of its citizens at risk? I suspect that the government would not think quite as long and hard before pushing the button to send its robots off to war. That's fine if the toy soldiers are in the hands of the good guys, but it's not always easy to tell who the good guys are. And what about the lives lost on the side without the robot soldiers? Some of them will be innocent civilians. The lives on the other side never seem to count for nearly as much in war calculations. That may seem acceptable when it's our side making the decision, but we would likely be less thrilled when the other side is deciding whether or not to attack us.
I shouldn't be surprised that the military wants to build robot soldiers. I suspect that since the beginning of time, whenever there was any possible way for a useful technology to be employed in war or for some other violent purpose, someone did it. How soon after the discovery of fire did someone use it to burn down an enemy's encampment? How soon after the wheel was invented did someone use it to roll over his enemies? I don't know, but I bet it was faster than you can say, "Open the pod bay doors, please, HAL."
Why can't we be content using our machines to do only useful things like building cars and hula hoops? It's not as if robotics has progressed as far as it can on the peaceful front. I'm still waiting for someone to make a versatile and inexpensive android maid. I'd prefer that they develop an automaton to clean up my condo before they create one to blow it up. If a robot is going to hunt me down, I would be mortified if it found a pigsty when it arrived. Then again, as things are now, I could successfully hide from the robot by taking refuge behind one of the enormous mounds of paper and dust bunnies that are scattered around my home.
I probably shouldn't worry about this so much. After all, this is the same government that is considering scrapping a not-yet-implemented, $170 million FBI Virtual Case File system--a system that was considered to be critical after the intelligence failures leading up to September 11, 2001--because it doesn't work. And this is the same military whose last two missile defense system tests--tests that cost $85 million each--failed to launch because of some glitch. The government spends about $10 billion a year on that program. So my fears about this are probably premature. Given its track record, I will probably be long dead of natural causes before the military gets its toy soldiers working.
Joel Klebanoff is a consultant, a writer, and president of Klebanoff Associates, Inc., a Toronto, Canada-based marketing communications firm. Joel has 25 years of experience working in IT, first as a programmer/analyst and then as a marketer. He holds a Bachelor of Science in computer science and an MBA, both from the University of Toronto. Contact Joel at