Mark and Paul

It's a blustery January afternoon in pleasant suburban Cambria Heights, New York, the sort of scene you'd expect to find in a Norman Rockwell painting. Cars go to and fro, people perform their daily tasks, hell, there's probably a bird singing in whatever tree you happen to be standing by. And Paul is catnapping on the red and black velvet couch when the phone rings. "Did you hear? AT&T crashed!"

No hello, no chit-chat.

"What did you say?"

"AT&T crashed. It's on the news."

It's Mark. And he's scared. Paul understands immediately. Ordinarily, the fact that the nation's largest and most reputable long distance telephone carrier was grinding to a screeching halt would probably go unnoticed by a couple of middle-class white kids in Queens. But Mark and Paul were no ordinary kids.

They'd been doing it for months. Hacking, that is. One by one, the pair methodically pressed their way into one phone company computer after another. It was a terrific power trip, compounded by the fact that they considered themselves modern day Robin Hoods. They'd never hurt an innocent person, they'd never damage a harmless system, and they'd certainly never bring the nation's long distance service to a halt. At least not intentionally. And although they're not saying it, right now the two have only one thought on their minds: "Please, God, don't let this be my fault."

Miles away, in Bedminster, New Jersey, Bill Leach had his hands full. He'd never heard of Mark and Paul, probably wouldn't care if he had. Right now, there were other things to worry about.

From his perch at the rear of AT&T's Network Operations Center, the view was spectacular. Below, a dozen or so workers scrambled to and fro, manning this workstation or another. And lining the front of the room was a magnificent video display which measured the pulse of every telephone circuit AT&T operated. And it was flashing red.

Flashing red lights are never a good thing.

It all started innocently enough. At 2:25 pm, a minor glitch occurred in the New York City Telephone Switching Center. No cause for alarm, really. There are 114 major AT&T switches in the United States, and they are designed to work in cooperation should one fail. Just as it was designed, the New York switch handed off its calls to other switches around the nation, and quietly reset itself to clear the fault.

But something was wrong. Rather than coming back on-line and resuming normal operation, it crashed again. And this time, so did the switches in Atlanta, Detroit, and St. Louis. Bill immediately picked up the telephone and called the Chief of Network Operations. "You better come down here," he stuttered, "I think we've got a big one."

That was an understatement. The failures cascaded, running from one switch to another as the calls piled up. Within minutes, nearly two-thirds of AT&T's automated switching centers were down, and those that remained on-line were swamped with traffic. Electronic gridlock enveloped the nation.

AT&T had dealt with its share of problems; every phone company had. Lightning struck, tornadoes blew, and over-zealous backhoe operators cut. But never before had a problem this widespread occurred. As millions of calls were turned away, technicians scrambled frantically to diagnose the problem. But the machines were working fine.

Reports began to leak out. A software failure. Hackers in the system. The connection was obvious. But it was wrong. Mark and Paul may have been guilty of a lot of things, but they didn't crash the phone system.

By 9:00 pm, AT&T engineers had finally found, and admitted, the problem. A tiny error in a trivial software update, issued to every AT&T switching computer in America, had manifested itself when triggered by a particular series of events. Hackers had not tampered with the system; the bug was AT&T's own.

But the damage was done. A new national paranoia had risen. Hackers on the loose. Every teenager with a computer a potential terrorist. Why, if they could bring down the world's most powerful communications company, what was next?

Life and Death

Incredible as it may have seemed, the AT&T crash was really not that big a deal. Sometimes, lives are at stake.

Ray Cox had seen enough of hospitals. After having a tumor removed from his shoulder, he found himself at the East Texas Cancer Center, undergoing a series of radiation treatments. But on the day of his ninth and hopefully final treatment, disaster struck.

A radiation technician placed Cox face down on a table, beneath the mighty Therac-25 linear accelerator, and prepared to administer X-ray treatment, just as he had done dozens of times before. But when he activated the powerful machine, something went wrong. Cox felt a massive burning sensation, and heard a frying sound. Through his shut eyes, Cox saw a brilliant flash of light, and then another. He leapt from the table, screaming in agony.

The next day, Cox began spitting up blood. A lesion appeared on his shoulder. He lost sweat function, his pupils dilated, his eyelids drooped. Paralysis followed. Ray Cox spent five agonizing months confined to a hospital bed before he finally slipped into a coma and died.

His case was neither the first nor the last. At least four other people would fall victim to the Therac-25 before its manufacturer finally found, and admitted, the problem. Once again, a tiny software bug, activated by a particular sequence of keystrokes, caused the machine to malfunction. A multi-purpose machine, the Therac-25 had two settings. In electron-therapy mode, a low power beam of electrons was generated inside the accelerator and delivered directly to the patient. In X-ray mode, however, a tungsten shield was used to convert the electron beam into harmless X-rays. That conversion was extremely inefficient, and therefore required a beam of 25 million electron volts in order to produce the necessary X-ray output.

When the machine malfunctioned, the beam power was switched up to the highest setting, but the tungsten shield failed to deploy. As a result, the full, lethal power of the machine was delivered directly into the patient.

The Therac-25 is no longer in production, and existing models have been updated. But this is not intended as a case study; rather, it is an example. There are others, to be sure. A pacemaker that, for no apparent reason, stops. A patient monitor that mixes up readings between patients. A cardiac monitor that over-estimates cardiac output... Or a plane that falls out of the sky.

War, and other fun things

Technology is meant to empower mankind. But can there be too much of a good thing? Can a case of mistaken identity be attributed to information overload?

July 3, 1988. Iran Air Flight 655, an Airbus A300, sits on the ground, delayed. Inside the cabin, pilots and passengers squirm. The flight is to be a short one, just twenty minutes from Bandar-Abbas to Dubai. Captain Mohsen Rezayan has made the run hundreds of times. Finally, after a 27-minute wait, the plane is cleared for takeoff.

A few miles away, the USS Vincennes sails the Strait of Hormuz. It is a tense time. In fact, the Vincennes, and its fellow ship, the USS Montgomery, are under attack by a small fleet of Iranian gunboats. But Captain Will Rogers III is hardly worried. Vincennes is the shining star of the fleet, and carries with it the latest in advanced naval defense. The awesome $500 million Aegis weapons system represents the beginning of a new age in modern warfare. Its sophisticated computer system is capable of tracking and destroying enemy targets with chilling efficiency. And one by one, the attacking vessels are sunk.

Unaware of the conflict below, Flight 655 approaches the Vincennes. Deep in the warship's Combat Information Center, radar operators spot the oncoming craft. Fatigued, confused, and anxious, crewmen misidentify it as an Iranian F-14. The Aegis system automatically labels the aircraft, and plots a fire control solution.

For several minutes, conflicting reports pour into the CIC. On-board tracking systems report a closing hostile contact, but Tactical Officers insist that a civilian aircraft is merely in the area. Conflicting transponder signals confuse the on-board computer, and cause the crew to see multiple readouts.

The exact events which follow are in question, but at 10:53:30, the Fire Control key was turned to enable, and a pair of SM-2 surface-to-air missiles lifted off from the deck, striking their target with lethal precision.

Of the two hundred and ninety people aboard Flight 655, none survived.

In this case, the problem lay not with the system, but with the implementation. Crewmen simply were not able to adequately deal with the amount of information at hand.

But however tragic this event may have been, the consequences of a fully functional technology may be graver.

Colonel Mike Tansley, of the US Army's ultra-secret Intelligence and Security Command, paints an interesting picture. The next time some tyrant oversteps his bounds, we respond not with missiles and bombs, but with a far more subtle attack.

It begins when a simple virus is introduced into enemy systems. It doesn't matter how. A corruptible manufacturer, a covert operative, a simple hack. Once behind enemy lines, the virus could travel "trusted" paths into enemy phone switches, dispatch systems, financial computers, and combat systems. Destroy his economy, his infrastructure, and his leadership. And all without firing a single shot.

But Colonel Richard Szafranski of the Air Force's Air War College exposes a darker side. "When people talk about the tremendous potential of this warfare, they need to take a bite out of the reality sandwich." America, it seems, may be uniquely vulnerable to such an attack, due to our heavy reliance on the very kind of systems we may target. Is information warfare the greatest Pandora's Box in human history?

"It doesn't require huge masses of money," said Donald Latham, a former Pentagon communications guru, "A few very smart guys with computer workstations and modems could endanger lives and cause economic disruption."

Perhaps William Sherman was right. "War is cruelty, and you cannot refine it."

Order, at what cost?

If you're like most people, you own a car. And if you're like most car owners, you look at your car as a means of getting from point A to point B, and occasionally point C, on weekends. And while you might not know exactly what goes on under the hood, you'd probably never guess that your car might actually be spying on you!

We're not talking about the plot to some new James Bond movie, just the Federal Government's latest toy.

It's called On Board Diagnostics Level 3, or OBD3 for short.

In theory, OBD is a Good Idea(tm). The system constantly monitors engine performance, emissions controls, and vehicle condition. When a fault is detected, on comes the infamous "Check Engine" light, alerting the diligent driver that it's time to see Mr. Goodwrench.

But the current system, OBD2, stops there. It has no power to take corrective action. And how many of us have driven thousands of miles with that light staring us in the face, with no adverse effects? After all, not only are emissions controls failure-prone, but when they do fail, they rarely affect driveability.

Enter Orwell.

OBD3 is an interactive system. But it doesn't interact with the driver, at least not directly. OBD3 equips your new car with a radio transmitter. When a fault is detected, a signal is beamed to your friendly local Department of Motor Vehicles office, alerting them that you are now in violation of the Federal Clean Air Act. Unless you rectify the problem, and pronto, expect a summons in the mail.

But the system doesn't stop there. OBD3 is integrated into the vehicle's various computer systems, meaning that it has access to every piece of data flowing through your car. Vehicle speed, for example. Or your exact location via GPS. Some cars even have sensors in the seat to detect the number of occupants.

This is all supposition, but let's say that a bored police officer punches in your number, and determines via GPS that you are in a 45 mph zone. He then queries your OBD3 system to learn that you are traveling at 58 miles per hour. It's a simple matter to signal the engine management computer to gradually slow the car to a halt, whereupon the officer can drop by and issue you a ticket at his leisure.

Impossible? Of the 255 data addresses available to OBD3, only 6 have been allocated. What exactly does the Fed have in mind for the rest?

Armed... and dangerous?

It was a simple enough idea. Provide the American public, free of charge, with a secure, yet easy to use system with which they could protect their electronic communications from interception by unwanted eyes.

And before it was over, it would transform a simple computer programmer into a wanted criminal, catapult him into the public eye, and make him an icon for democracy and free speech all around the world.

"It" was Pretty Good Privacy, PGP for short, and it was the brainchild of Philip Zimmermann, a software engineer with over 20 years experience in data security and encryption, and with such prestigious clients as IBM, Sun Microsystems, Hewlett-Packard, and Allied Signal, to name just a few.

And the problem? Just a little thing called the Munitions List, a government document which attempts to spell out what may and may not be exported from the country. Drugs, weapons, encryption software... Encryption software? Yes. It is illegal to export encryption software without a license, and of course that license is hard to come by if the government feels it can't easily break the codes.

So Phil Zimmermann wrote PGP and released it into the public domain. And someone downloaded a copy outside the US, making Zimmermann an illegal arms exporter. It wasn't hard to see coming. While PGP was underway, the US Senate was busy trying to pass Senate Bill 266, the Anti-Crime Bill. Its language is muddy, but in short, it requires that communications providers implement a way for the government to easily gain access to all communications. And secure cryptography would spoil that fun.

So why did he do it? Why would a successful young engineer risk everything to publish a program from which he would never profit? Zimmermann himself explains it best:

The Clinton Administration seems to be attempting to deploy and entrench a communications infrastructure that would deny the citizenry the ability to protect its privacy. This is unsettling because in a democracy, it is possible for bad people to occasionally get elected-- sometimes very bad people. Normally, a well-functioning democracy has ways to remove these people from power. But the wrong technology infrastructure could allow such a future government to watch every move anyone makes to oppose it. It could very well be the last government we ever elect.