Thursday, January 13, 2011

The Ultimate Cancer Detection Technology: Blessing or Curse?



At least in industrialized countries, the two major causes of death are heart disease and some form of cancer. I don’t know about you, but if I had to choose my poison, I’d prefer heart disease, because cancer is so slow, insidious, and creepy. Besides, my wife is a breast cancer survivor (http://breastcancer.about.com/) and both my parents died of cancer. So when I heard about a research team at Massachusetts General Hospital that has moved a blood-test machine for cancer cells closer to commercialization, I had mixed feelings.
So far, the device is apparently intended only to monitor the condition of patients who are already known to have cancer. It consists of a credit-card-size plate covered with thousands of tiny posts, each one of which has a different kind of molecule that binds to proteins on a specific type of cancer cell. When blood flows over the posts, very small concentrations of cancer cells leave traces on the posts, and you get a number saying how many of what kind of cell is present in the blood. The hope is that this kind of test can supplement or even replace the expensive MRI or CT scans usually employed to monitor progress of chemotherapy by observing the size of macroscopic tumors. Of course, there are many potholes in the road to commercial use, but the researchers have my best wishes for success. I think.

Let’s extrapolate this kind of technology to its ultimate limit. We are told that everybody has some cells that are, if not out-and-out malignant, then highly inclined to develop into cancer. But in healthy people, the immune system is on the lookout for such misbehaviors as a liver cell setting up shop in your biceps muscle, and takes care of misbehaving cells by attacking them as though they were foreign invaders like bacteria. Cancer is not so much the mere occurrence of malignant cells as it is their successful multiplication into a colony whose numbers and size overwhelm the body’s defenses.

So what if we had a blood test for any kind of cancer cell, down to the concentrations that exist in healthy people? Would that be a good thing?

On the face of it, yes. I suppose you could establish some kind of baseline limit, as we have done for serum cholesterol. Below the limit you'd be told you were healthy; above the limit, well, you'd be worried, at least. We'd have to go through clinical trials to see what kinds of numbers are associated with cancers that are worth fighting. We are already in this situation with the prostate-specific antigen (PSA) test for prostate cancer. A simple blood test tells you your PSA level, but it turns out that PSA is not an infallible signal that tells you either (a) you've got nothing to worry about or (b) make sure your will is in order and you've picked out the music you want for your funeral. And even people who have genuine prostate cancer, depending on their age, are sometimes told that not treating it is an option, because treatment can sometimes be worse than the disease. But it's hard to tell when that's the case.

If we are so tangled in ambiguities about as simple a thing as the PSA test for prostate cancer, imagine what it would be like if we had a blood test, even a reliable one, for most of the common dangerous cancers such as those of the lung, breast, colon, and skin (including melanoma). On the one hand, it'd be nice to skip chest X-rays, mammograms, and colonoscopies and instead just provide a blood sample. But on the other hand, I'm sure we would face a world of difficult decisions that would be heavily biased by the economics of cancer treatment. When you add the current U. S. health-care law (and its future fate) to the mix, you get quite a brew, one that could raise as many problems as it solves.

Does this mean we should stop such research? I don't think so. Knowledge as such, including knowledge of one's physical condition, is of value, but only if we also consider the context in which that knowledge will be used. It does no good to come up with a cheap test for cancer if we do not also work on better and less debilitating treatments that take advantage of the early notice such a test would give. Otherwise you move toward the ultimate nightmare (which fortunately will never come to pass) of knowing at the outset that you, say, a 23-year-old man, will die at 46 of thus-and-such a disease, and that nothing can be done about it.

My metaphorical hat is off to the MGH researchers, and I hope they succeed in at least their immediate goals of developing better ways of monitoring the progress of cancer treatment. As to the ultimate cancer blood test, if it ever comes to pass, let’s just hope that by then we have come up with a wise way to use it for the benefit of patients as well as the medical industry.

Sources: MIT’s online version of Technology Review carried an article about the MGH research on Jan. 3, 2011 at http://www.technologyreview.com/blog/editors/26218/.

Monday, January 03, 2011


Does Improving Efficiency Really Save Energy?


You might almost say that what health is to doctors or justice is to lawyers, efficiency is to engineers. Making machines more efficient sums up a good bit of what has gone on in technology and engineering over the last couple of hundred years. And if you broaden the definition of efficiency to include useful (or desirable) work performed per unit cost (and not just per unit of raw energy input), then everything from airplanes to zippers has gotten more efficient over the years. Increased efficiency in energy-consuming products has been viewed as the no-brainer answer to rising energy demands around the globe. Instead of building more coal-fired power plants, conservationists say, just replace X million incandescent bulbs with compact fluorescents, and you've saved tons of carbon at virtually no infrastructure cost. This is all very well, but a recent article in The New Yorker calls into question the widely accepted idea that increasing energy efficiency truly leads to less energy consumed.
A nineteenth-century economist named William Stanley Jevons was among the first to point out that improved energy efficiency in manufacturing iron, for example (Jevons' father was an iron merchant), doesn't necessarily mean that you will end up using less coal to make iron in the long run. What can happen, especially when energy makes up a large share of the finished product's cost, is that the price drops as less energy is used, and people start buying more iron: so much more, in fact, that even with more energy-efficient production, the industry as a whole ends up consuming more energy than before, not less. This rebound effect is now known as the Jevons paradox.
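To see how the arithmetic can work out, here is a toy calculation in Python. Every number in it is invented purely for illustration; whether a real rebound is this large depends on how sensitive demand is to price, which is exactly what Jevons was arguing about.

    # Toy illustration of the Jevons effect; all numbers are invented.
    energy_per_ton = 10.0   # energy units needed to make one ton of iron (before)
    tons_sold = 100.0       # tons of iron sold per year (before)
    energy_before = energy_per_ton * tons_sold          # 1000 energy units

    # A 30% efficiency gain lowers the price, and cheaper iron finds new
    # uses, so (assuming demand is elastic enough) sales rise by 60%.
    energy_after = (energy_per_ton * 0.7) * (tons_sold * 1.6)   # 1120 units

    print(energy_before, energy_after)   # 1000.0 1120.0 -- more energy used, not less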

Jevons' idea obviously applies to a lot of things besides iron. Take computers, for example. The first general-purpose electronic computer occupied a room the size of a small house and consumed about 150 kilowatts of power, yet its computing ability was much less than what a tiny 8-pin embedded microprocessor can deliver today. On a strict efficiency basis, measured by almost any yardstick (energy consumption, cost, space, weight), today's microprocessor is thousands or millions of times more efficient. But guess what? In 1946 there was exactly one electronic computer of the type I'm describing (ENIAC, built at the University of Pennsylvania for the U. S. Army), and today there are many millions of computers of all sizes, plus giant server farms that tax the power-generating capability of the entire power grid of the Northwest U. S. The total amount of electricity devoted to electronic computing has gone from 150 kW in 1946 to many gigawatts today, if you count all the mobile phones on batteries, the computerized cash registers, and so on.
So what’s an engineer to do? Give up on making things more efficient because people will only use more of them? This is a great example of a case where doing the right thing in a micro-environment (a single company or even industry) may lead to complicated consequences in a macro-environment such as the economy of a country or even the globe. In fact, it goes to the heart of what engineering is all about, and makes one face the question of how to justify energy consumption on a fundamental level.
While this blog is not about global warming, there are those who believe that radical reductions in the world’s carbon footprint are imperative if we are to avoid a gigantic creeping disaster that will flood most of the world’s coastal cities, which means, more or less, many of the world’s cultural and political capitals. Oh, and by the way, millions will die prematurely. Although I do not happen to agree with this premise, let’s grant it for the sake of argument. Given an immediate need to reduce energy consumption by a large fraction, what should we do? Make everything that uses energy more efficient? Jevons’ idea says this simply won’t work. In the broad definition of efficiency we’ve been using, improving efficiency often leads to more energy use, not less.
The unpleasant alternative to what looked like a win-win solution—improved energy efficiency and less energy usage—is some form of rationing: either energy taxes, or simple flat-out restrictions on energy use. Many countries practice this already: it’s called power outages. Power is on only at night, or three hours a day, or not for weeks at a time. It’s arbitrary, unfair, and hits the poorest hardest, but it works. The tax alternative has the advantage that it provides some economic incentive for improving efficiency—but if technology really improves to the point that the tax is compensated for, you’re right back where you started. The only sure-fire way to keep people from using energy as much as they want is to put them under the government’s thumb somehow. Cuba, I understand, has raised this process to an art form—if you consider old cars towed by mules artistic.
Don’t get the idea I think efficiency is bad. If I did, I couldn’t very well call myself an engineer. However, Jevons reminds us that, like many other things in life, energy efficiency can be helpful in limited circumstances. But expecting it to solve all the world’s energy problems is not only unrealistic, but probably counterproductive as well.
Sources: David Owen’s article “The Efficiency Dilemma” appeared in the Dec. 20 & 27, 2010 issue of The New Yorker, pp. 78-85.

Monday, December 27, 2010


A Night to Remember on the Deepwater Horizon


Walter Lord, in his classic nonfiction book A Night To Remember, used dozens of interviews and historical documents to recount the 1912 sinking of the Titanic in vivid and harrowing detail. Now David Barstow, David Rohde, and Stephanie Saul of the New York Times have done something similar for the Deepwater Horizon disaster of last April 20. While official investigators will probably take years to complete a final technical reconstruction with all the available information, the story these reporters have pieced together already highlights some of the critical shortcomings that led to the worst deepwater-drilling disaster (and the worst consequent environmental damage) in recent memory.

Their 12-page report makes disturbing reading. They describe how Transocean, the company which owned the rig and operated it for the international oil giant BP, was under time pressure to cap off the completed well and move to the next project. They show something of the complex command-and-control system for the rig that involved all kinds of safety systems (both manual and automatic) as well as dozens of specialists out of the hundred or so engineers, managers, deckhands, drillers, cooks, and cleaning personnel who were on the rig at the time. And they reveal that while the blowout that killed the rig was about the worst that can happen on an offshore platform, there were plenty of ways the disaster could have been minimized or even avoided—at least in theory. But as any engineering student knows, there can be a long and rocky road between theory and practice. I will highlight some of the critical missteps that struck me as common to other disasters that have made headlines over the years.

I think one lesson that will be learned from the Deepwater Horizon tragedy is that current control and safety systems on offshore oil rigs need to be more integrated and simplified. The description of the dozens of buttons, lights, and instruments in physically separate locations that went off in response to the detection of high levels of flammable gas during the blowout reminds me of what happened at the Three Mile Island nuclear power reactor in 1979. One of the most critical people on the rig was Andrea Fleytas, a 23-year-old bridge officer who was among the first to witness the huge number of gas alarms going off on her control panel. With less than two years' experience on the rig, she had received safety training but had never before experienced an actual rig emergency. She, like everyone else on the rig, faced crucial decisions in the nine minutes that elapsed between the first signs of the blowout and the point where the explosions began. Similarly, at Three Mile Island, investigators found that the operators were confused by the multiplicity of alarms going off during the early stages of the meltdown, and actually took actions that were counterproductive. In the case of the oil-rig disaster, inaction was the problem, but the cause was similar.

Andrea Fleytas or others could have sounded the master alarm, instantly alerting everyone that the rig was in serious trouble. She could have also disabled the engines driving the rig’s generators, which were potent sources of ignition for flammable gas. And the crew could have taken the drastic step of cutting the rig loose from the well, which would have stopped the flow of gas and given them a chance to survive.

But each of these actions would have exacted a price, ranging from the minor (waking up tired drill workers who were asleep at 11 o'clock at night with a master alarm) to the major (cutting the rig loose from the well meant millions of dollars in expense to recover the well later). And in the event, the confusion of unprecedented combinations of alarms going off, together with a lack of coordination among critical personnel in the command structure, meant that none of the actions that might have mitigated or averted the disaster was in fact taken.

It is almost too easy to sit in a comfortable chair nine months after the disaster and criticize the actions of those who afterward did courageous and self-sacrificing things while the rig burned and sank. None of what I say is meant as criticism of individuals. The Deepwater Horizon was above all a system, and when systems go wrong, it is pointless to focus on this or that component (human or otherwise) to the exclusion of the overall picture. In fact, a lack of overall big-picture planning appears to be one of the more significant flaws in the way the system was set up. Independent alarms were put in place for specific locations, but there were no overall coordinated automatic systems that would, for example, sound the master alarm if more than a certain number of gas detectors sensed a leak. The master alarm was placed under manual control to avoid waking up people with false alarms. But this meant that in a truly serious situation, human judgment had to enter the loop, and in this case it failed.
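The kind of coordinated logic that was missing is not hard to sketch in outline. Here is a minimal k-out-of-n voting rule in Python; the sensor names, readings, and thresholds are all hypothetical, and a real rig would of course need vastly more engineering than this.

    # Sketch of an automatic k-of-n alarm-voting rule; sensor names,
    # readings, and thresholds are all hypothetical.
    GAS_TRIP_LEVEL = 0.25     # hypothetical gas concentration that trips one local alarm
    VOTES_FOR_MASTER = 3      # hypothetical: escalate when 3 or more sensors trip

    def tripped(readings):
        """List the sensors whose reading exceeds the local alarm level."""
        return [name for name, level in readings.items() if level > GAS_TRIP_LEVEL]

    def master_alarm(readings):
        """Escalate automatically when enough independent sensors agree."""
        return len(tripped(readings)) >= VOTES_FOR_MASTER

    readings = {"drill floor": 0.91, "mud pits": 0.47, "engine room": 0.66, "galley": 0.02}
    if master_alarm(readings):
        print("MASTER ALARM -- gas detected by:", ", ".join(tripped(readings)))

A rule like this takes the judgment call out of human hands precisely when humans are most likely to hesitate, at the cost of occasional false alarms: the very trade-off the rig's designers decided the other way.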

Similarly, the natural hesitancy of a person with limited experience to take an action that they know will cost their firm millions of dollars was just too much to overcome. This sort of thing can’t be dealt with in a cursory paragraph in a training manual. Safety officers in organizations have to grow into a peculiar kind of authority that is strictly limited as to scope, but absolute within its proper range. It needs to be the kind of thing that would let a brand-new safety officer in an oil refinery dress down the refinery’s CEO for not wearing a safety helmet. That sort of attitude is not easy to cultivate, but it is vitally necessary if safety personnel are to do their jobs.

It is said that disasters teach engineers more than successes do, and I hope that the sad lessons of the Deepwater Horizon will lead to positive changes in safety training, drills, and designs for future offshore operations.

Sources: The New York Times article “The Deepwater Horizon’s Final Hours” appeared in the Dec. 25, 2010 online edition at http://www.nytimes.com/2010/12/26/us/26spill.html.

Sunday, December 19, 2010


Cheaters 1, Plagiarism-Detection Software 0


The Web and computer technology have revolutionized the way students research and write papers. Unfortunately, these technologies have also made it vastly easier to plagiarize: that is, to lift verbatim chunks of text from published work and pass them off as your own original creation. In response, many universities have promoted the use of commercial plagiarism-detection software, marketed under names such as Turnitin and MyDropBox. Still more unfortunately, in a systematic test of how effective these programs are at detecting blatant, wholesale plagiarism, the software bombed.

Why is plagiarism perceived as a worse problem than it used to be? One factor is the sheer physical ease of plagiarizing nowadays. Back in the Dark Ages when I did my undergraduate work, it was not quite the quill-pen-by-kerosene-lamp era, but if I had ever decided to plagiarize something, it would have taken a good amount of effort: hauling books from the library, photocopying journal papers, dragging them to my room, and typing the passages into my paper letter by letter on a manual typewriter. With all that physical work and dead time involved, copying a few paragraphs with the intent of cheating wasn't much easier than simply thinking up something on your own. The physical labor was the same.

Fast-forward to 2010: there’s Microsoft Word, there’s Google, and if you’re under 22 or so these things have been there for at least half your life. The “copy” and “paste” commands are vastly easier than hunting and pecking out your own words. And you suspect that a good bit of everything out on the Web was copied and pasted from somewhere else anyway. So what is the big deal some professors make about this plagiarism thing? The big deal is this: it’s wrong, because it constitutes theft of another person’s ideas, and fraud in that you give the false impression that you wrote it yourself.

In engineering, essays and library-research reports make up only a small part of what students turn in, so I do not face the mountains of papers that instructors in English or philosophy have to wade through every semester. But with plagiarism being so easy, I do not blame them for resorting to an alleged solution: plagiarism-detection software. Supposedly, this software compares the work under examination with web-accessible material, and if it finds a match, it flags the work with a color code ranging from yellow to red. Work that passes muster gets a green.
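The vendors' actual algorithms are trade secrets, but the general idea is easy to sketch: break both documents into overlapping word sequences ("n-grams") and measure how many the submission shares with a candidate source. The Python below is a bare-bones illustration of that idea, not anyone's real product.

    # Bare-bones sketch of n-gram overlap detection; not any vendor's algorithm.
    import re

    def ngrams(text, n=5):
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(submission, source, n=5):
        """Fraction of the submission's n-grams that also appear in the source."""
        sub, src = ngrams(submission, n), ngrams(source, n)
        return len(sub & src) / len(sub) if sub else 0.0

    paper = "The quick brown fox jumps over the lazy dog near the river at dawn."
    source = "The quick brown fox jumps over the lazy dog near the old barn at dusk."
    print(round(overlap(paper, source), 2))   # high overlap suggests copying

The catch, of course, is that a checker can only compare a submission against sources it can actually see, which brings us to the paywalled journals.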

In a recent paper in IEEE Technology and Society Magazine, Rebecca Fiedler and Cem Kaner report their tests of how well two popular brands of plagiarism-detection software actually work on papers copied word-for-word from academic journals. The journals themselves were not listed in the article, but they appear to be the usual type of research journal that requires payment (either from an individual or a library) for online access. Therein, I think, lies the key to why the software almost completely failed to disclose that the entire submission was copied wholesale, in twenty-four trials with different papers. If I interpret their data correctly, only one of the two brands tested was able to figure this out, and even then only in two of the twenty-four cases. Fiedler and Kaner conclude that professors who rely exclusively on such software to catch plagiarism are living with a false sense of security, at least where journal-paper plagiarism is concerned.

I think the results might have been considerably better for the software if the authors had chosen to submit material that is openly accessible on the Web, rather than publications sitting behind fee-for-service walls that require downloading particular papers. In my limited experience with doing my own plagiarism detection, I was able simply to Google a suspiciously well-written passage out of an otherwise almost incomprehensible essay and locate the university lab's website where the writer had found the material he plagiarized. And I didn't need the help of any detection software to do that.

As difficult as it may seem, the best safeguard against plagiarism (other than honesty on the part of students, which is always encouraged) is the experience of instructors who become familiar with the kind of material that students typically turn in, and even with passages from well-known sources which might be plagiarized. No general-purpose software could approach the sophistication of the individual instructor who deals with this particular class of students about a particular topic.

Of course, if we’re talking about a U. S. History class with 400 students, the personal touch is hard to achieve. Especially at the lower levels, books are more likely to be plagiarized from than research papers, and as Google puts pieces of more and more copyrighted books on the Web, plagiarism detection software will probably take advantage of that to catch more students who try to steal material. It’s like any other form of countermeasure: the easy cheats are easily caught, but the hard-working cheats who go find stuff from harder-to-access places are harder to catch. But it’s not impossible, and one hopes that by the time students get to be seniors, they have adopted enough of their chosen discipline’s professionalism to leave their early cheating ways behind. Sounds like a country-western song. . . .

If any students happen to be reading this, please do not take it as an encouragement to plagiarize, even from obscure sources. The fact that your instructors’ cheating-detection software doesn’t work as well as it should is no reason to take advantage of the situation. Anybody reading a blog on engineering ethics isn’t likely to be thinking about how to plagiarize more effectively, anyway—unless they have to write a paper on engineering ethics. In that case, leave this blog alone!

Sources: The article “Plagiarism Detection Services: How Well Do They Actually Perform?” by Rebecca Fiedler and Cem Kaner appeared in the Winter 2010 (Vol. 28, no. 4) issue of IEEE Technology and Society Magazine, pp. 37-43.

Monday, December 13, 2010


The Irony of Technology in “Voyage of the Dawn Treader”


I write so often about bad news involving engineering and technology because engineers usually learn from mistakes more than they learn from success. But not always. A more positive theme in engineering ethics takes exemplary cases of how engineering was done right, and asks why and how things worked out so well. That’s what I’m going to do today with the latest installment of the series of “Chronicles of Narnia” movies, namely “The Voyage of the Dawn Treader.”
It is ironic that the most advanced computer-generated imagery (CGI) and computer animation have been used to bring to the screen a story by a man who was a self-proclaimed dinosaur, an author who wrote all his manuscripts by hand with a steel pen and never learned to drive a car. C. S. Lewis, who died in 1963 after achieving fame as one of the greatest imaginative Christian writers of the twentieth century, also wrote one of the most prescient warnings about the damage that applied science and technology could do to society. In The Abolition of Man, Lewis warned that the notion of man's power over nature was wrongly conceived: what increased scientific and technological abilities really allow is for those in control of the technology to wield more power over those who are not in control. Of course, he granted that technological progress had also led to great benefits, but that was not his point.

Perhaps the most popular of all his works of fiction is the "Chronicles of Narnia" series, a set of seven interrelated books for children in which he drew upon his vast learning as a scholar of medieval and Renaissance literature to produce one of the most completely realized works of fantasy ever written. I have read all of the stories many times. And like many other readers, I had my doubts that any cinematic version of them would stand a chance of living up to the unique standard set by the books. For one thing, Lewis's descriptions of fantastic beings such as minotaurs, centaurs, and fauns are suggestive rather than exhaustive, leaving much to the reader's imagination, as most good literature does. This throws a great burden upon anyone who attempts to render the stories in a graphic medium. I was saddened to see at the end of the movie the dedication "Pauline Baynes 1922-2008." Baynes was the artist chosen by both Lewis and his friend J. R. R. Tolkien to provide illustrations for the "Chronicles" and for many of Tolkien's imaginative works as well. Baynes's drawings fit Lewis's descriptions so well because they did what book illustrations are supposed to do: they enhanced the reader's experience without turning the story in a direction not intended by the author.

And that is what the hundreds of IT professionals, artists, technicians, computer scientists, entrepreneurs, and others involved in “The Voyage of the Dawn Treader” film have done. As computer graphics has advanced, people engaged in what began as a purely engineering task—to render a realistic image of a natural feature such as the hair on a rat being blown by the breeze atop the mast of a sailing ship—find themselves having not only to deal with the sciences of mechanics and fluid dynamics, but even now and then making fundamental advances in our understanding of how air flows through fibrous surfaces or how light travels through a complex mineral surface. Fortunately for the moviegoing public, none of this needs to be understood in order to watch the movie, the production of which is comparable in today’s terms with the effort needed to build part of a medieval cathedral. But anyone can walk into a cathedral and enjoy the stained-glass windows without understanding how they were made. This connection is not lost on the moviemakers. In fact, the very first scene in the film focuses on a stained-glass window showing the Dawn Treader ship, just before the camera zooms away to reveal a tower in the city of Cambridge, where the story begins.

It is this sensitivity to the spirit of the tales and the style, if you will, of Narnia that makes the movie both an essentially faithful rendition of the book, and an excellent adventure on its own. For cinematic reasons, the screenwriters did some mixing of plot elements and originated a few new ones, but entirely within the spirit of what G. K. Chesterton calls the “ethics of elfland.” Chesterton expresses the ethic this way: “The vision always hangs upon a veto. All the dizzy and colossal things conceded depend upon one small thing withheld.” The chief plot innovation concerns a search for the seven swords of the lost lords of Narnia, which unless I’m mistaken were not in the original story. But until these swords are placed on a certain table, the Narnians cannot triumph over a strong force of evil that threatens to undo them.

What would C. S. Lewis think? Well, those who believe in an afterlife can conclude that he will find out eventually about what has been done with his stories, and perhaps some of us will some day be able to ask the man himself. He may answer, but then again he may view his earthly works in the same light that St. Thomas Aquinas viewed his own magisterial works of philosophy toward the end of his life. According to some reports, Aquinas was celebrating Mass one day when he had a supernatural experience. He never spoke of it or wrote it down, but it caused him to abandon his regular routine of dictation. After his secretary Reginald urged him to get back to work, Aquinas said, “Reginald, I cannot, because all that I have written seems like straw to me.” Once one encounters that joy which, in Lewis’s words, is the “serious business of Heaven,” the fate of a children’s story at the hands of this or that film crew may not seem all that important. But those of us still here in this life can rejoice in a faithful rendition of a spiritually profound work, made possible in no little part by engineers who simply did their jobs well and with sensitivity to the spirit of the project.

Sources: The Chesterton quotation is from chapter 4, “The Ethics of Elfland,” of Chesterton’s 1908 book Orthodoxy. I used material from the Wikipedia article on St. Thomas Aquinas in the preparation of this article.

Monday, December 06, 2010


TSA Has Gone Too Far


It's not too often that I take an unequivocal stand on a controversial issue. But this time I will. The U. S. Transportation Security Administration (TSA) is wasting millions of dollars putting thousands of harmless passengers through humiliating, indecent, and probably unconstitutional searches, while failing in its primary mission to catch potential terrorists. I say this as a participant in the invention of one of the two main technologies currently being deployed for whole-body scans at U. S. airports.

Back in 1992, when airport security checks of any kind were a novelty, I was consulting for a small New England company whose visionary president anticipated the future demand for whole-body contraband scans. I helped develop a primitive version of the millimeter-wave scanning technology now made by L3Comm. The scan took 45 minutes and had very low resolution, but it produced recognizable images of non-metallic objects hidden under clothes. As I recall, the main reason the company didn't pursue the technology further was that it revealed too many details of the human body; we thought the public would rise up in revolt if some bureaucrat proposed to electronically strip-search all passengers.

Well, here we are eighteen years later, and the TSA is now installing that technology plus a similar (but even more detail-revealing) X-ray technology at dozens of airports across the land. The agency is reluctant to share any information that would cause it problems, but the few images that have gotten into the public media are enough to tell us that Superman’s X-ray vision is indeed here. In the movie of the same name starring the late Christopher Reeve, the X-ray vision thing was played for a joke in his encounter with Lois Lane. But forcing thousands of ordinary, harmless citizens, including elderly folks and young children, none of whom have been charged with a crime, to subject themselves to electronic invasions of privacy, with the potential for abuse that entails, is an outrage.

Not only is it an outrage, but it is unlikely to achieve the purpose the TSA claims for it at this tremendous price: lowering the risk of terrorist acts in the air. So far, airport body scans have caught zero terrorists. None. All the recent interceptions and near-misses have come about either through alert passengers (and incompetent terrorists), through tips from people with knowledge of the plots, or through old-fashioned detective work that doesn't stop looking when it runs up against a matter of political correctness. The U. S. is nearly alone among major nations in relying on this inefficient and intrusive blanket of technology-intensive measures to achieve safe air travel, rather than focusing limited resources on the groups and individuals most likely to cause trouble, as the Israelis do.

The current administration is bending over backwards not to offend Muslim sensibilities in this or any other situation. I am all for respecting and allowing religious freedom, but when nearly all crimes of a certain kind are associated with members of an identifiable group, whether they be Muslim, Jewish, Christian, liberal, conservative, red-haired, or whatever, I don’t want those charged with the responsibility of catching them to purposely throw away that information and instead impose punitive and humiliating (and ineffective) searches on every single person who chooses to fly. And I haven’t even gotten to the “enhanced” pat-downs that the TSA offers as alternatives to the whole-body scans. That amounts to asking whether you would rather have your thumb squeezed with a pair of pliers or in a vise.

The public statements of the TSA on this matter have been about what you'd expect from a rogue bureaucracy. Inanities such as "if you don't want to be searched, just don't fly" are about as useful as saying "if you don't like risking your life in automobile traffic, get out and walk." The best hope of reversing this egregious and unconstitutional overreach lies in boycotting airports where the new systems are used. If air travel decreases to the point that the airlines notice, they will become the public's allies in the battle, and there will be at least a chance that Washington will listen to corporations that employ a lot of union workers, rather than to the great unwashed masses who have already been ignored on everything from health care to offshore oil drilling.

Civilizations can decline either with a bang or by slow degrees. In his monumental From Dawn to Decadence: 1500 to the Present, historian Jacques Barzun describes as one of the characteristics of modern life a slow encrustation of restrictions on freedom, exacted by bureaucracies whose ostensible purpose is to make life better in the progressive fashion. I think Barzun had in mind things like income-tax forms and phone trees, but he lives right down the road in San Antonio, whose airport just installed the new scanning systems. I doubt that he flies much anymore (he turned 103 last month), but if he does, he will be faced with a good example of his own observation: some hun-yock* in a blue uniform will treat the dean of American historians, a man whose family fled World War I to the U. S. and freedom, to the degrading and wholly unnecessary humiliation of being suspected as a terrorist and having his naked body exposed to the eyes of some nosy minion of the government.

To Jacques Barzun and to all the other people who simply want to get from A to B on a plane and have no malevolent intentions regarding their mode of transportation, I apologize on behalf of the engineers and scientists whose work has been misused, among whom I count myself.

Sources: The millimeter-wave technology used for whole-body scans is described well in the Wikipedia article “Millimeter-wave scanner,” and the X-ray system can be read about at http://epic.org/privacy/airtravel/backscatter/#resources. My Jan. 10, 2010 entry in this blog has a reference to my published work on the early version of the millimeter-wave scanner. *The word “hun-yock”, which I find spelled on the Web as “honyock” or “honyocker” was used by my father to indicate a person who did something unwise and publicly irritating. I can think of no better term for the present situation.

Monday, November 29, 2010


Holes in the Web


Tim Berners-Lee, inventor of the World Wide Web, thinks we should worry about several threats to the Web's continued integrity and usefulness. When someone of his stature says there are things to worry about, we should at least listen. I for one think he has some good points, which I will now summarize from a recent article he wrote in Scientific American.

The first threat Berners-Lee points out is the practice of creating what he calls “silos” of information on the otherwise universally accessible Web. Facebook, iTunes, and similar proprietary sites treat information differently than a typical website does. The original intent was that every bit of information on the Web could be accessed through a URL, but as those (such as myself) who have no Facebook page have discovered, there is information inside Facebook that only people who have Facebook pages can gain access to. And the iTunes database of information about songs and so on is accessible only through Apple’s proprietary software of the same name.

The second threat he sees is the potential breaching of the wall between the Web (which is a software application) and the Internet (which is basically the networking hardware the Web runs on). Again, the original intent was that once you pay for an Internet connection of a certain speed, you can access absolutely anything on the Web just as easily as anyone else with the same speed of connection. This is called "net neutrality," and recently it has come under attack from institutions as powerful as Google and Verizon, who, as Berners-Lee points out, moved last August to create special rules for Internet connections on mobile phones. They say that the limited spectrum available to mobile phones makes it necessary for companies to discriminate among certain types of applications (by charging extra, for example), or to make it harder for users to reach sites that are not part of the institution's own setup.

One motivation for Berners-Lee’s cautions is an old communications-network principle that dates back to the early days of the telephone. Larger communications networks are more valuable to the users than smaller ones, but the value increases faster than just the number of users. Since each new user can not only gain access to all the others, but all the other users can also access the new user, the usefulness of a network tends to increase as the square of the number of users. That is, a network with 20 users is not twice as useful as one with ten, but four times as useful. Extrapolate this to the billions that apply to the Web, and you see how organizations that persist in walling off information and users may reap some short-term selfish benefits, but at a cost to the usefulness of the Web as a whole.
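For the arithmetic-minded, this rule of thumb (often called Metcalfe's law) says the value of a network of n users grows roughly as n squared, which is all the tiny Python sketch below computes:

    # Rough Metcalfe's-law comparison: value grows roughly as n squared,
    # since each of n users can reach the other n - 1.
    def network_value(n):
        return n * n

    print(network_value(20) / network_value(10))   # 4.0 -- four times, not two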
The last major concerns Berners-Lee voices are matters of privacy and due process. There is now a way to crack open the individual packets of information that carry Web traffic and associate particular URLs with particular users, a technique generally known as deep packet inspection. He sees this as a major privacy threat, although it isn't clear how widely it is being used yet. Another thing that threatens people's freedom to use the Web is a recent trend by some European governments to cut off Web access to people who are merely suspected of illegally downloading copyrighted material. No trial, no defendant in court, no hearing: just a company's word that they think you did something wrong. Since access to the Web is now as taken for granted as access to electricity, Berners-Lee sees this as a violation of what in Finland is now regarded as a fundamental human right: the right to access the Web.
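Part of what makes such snooping possible is that so much Web traffic still travels unencrypted. The toy Python below shows how anyone on the network path could recover a URL from an ordinary plain-HTTP request; the captured request is invented for the example.

    # Toy example of why packet snooping links users to URLs: in plain
    # HTTP, the host and path travel in cleartext. The payload is invented.
    captured = (
        "GET /private/medical-search?q=symptoms HTTP/1.1\r\n"
        "Host: www.example.org\r\n"
        "User-Agent: ExampleBrowser/1.0\r\n\r\n"
    )

    def url_from_request(payload):
        lines = payload.split("\r\n")
        method, path, version = lines[0].split(" ")
        headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
        return "http://" + headers["Host"] + path

    print(url_from_request(captured))   # http://www.example.org/private/medical-search?q=symptoms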
These warnings need to be taken seriously. As director of the World Wide Web Consortium, the organization that is formally charged with the continued development of the Web, Berners-Lee is in a good position to do something about them. But he can’t control the actions of private companies or governments, so consumers and voters (at least in countries where votes mean something) will have to go along with his ideas to make a difference.
The Web is a new kind of creature in political, governmental, and economic terms. There has never before been a basically technical artifact that is simultaneously international in scope, beyond the regulatory authority of any single governmental entity, not produced by a single firm or monopolistic group of firms, and fundamentally egalitarian in nature, without any controlling hierarchy. Of course, a good deal of the nature of the Web was expressly intended by its founder, who, because of his youth when he developed it (he was only 35), is still very much with us and able to give helpful suggestions on this, the twentieth anniversary of the Web. (For those who care, Berners-Lee got the first Web client-server connection running on Christmas Day, 1990.)
What actually happens with the Web in the future, therefore, depends in a peculiar way on what its own users decide, and to a much lesser degree on what private companies or governments choose to do. There is probably much good in that way of doing things, since it prevents anything from happening that violently opposes the will or desires of the majority of users. But it also builds in a lot of immunity from what you might call reform efforts that go against common but less-than-salutary desires: the need to reduce Web pornography traffic, for instance.
For better or worse, Sir Timothy (he was knighted by his native England in 2004) has impressed a good deal of his open-source, egalitarian philosophy on his brainchild the Web, which has grown vastly beyond his initial expectations. As any good father does, he wants his child to grow and prosper and be a good citizen. Now that you have heard some of Berners-Lee’s cautionary words, you can do your part, however minor, to see that this happens.

Sources: The December 2010 issue of Scientific American carried Berners-Lee's article "Long Live the Web" on pp. 80-85. It can also be accessed (without charge!) at the Scientific American website.