Given the realities of today's society, most of us would agree that there clearly are times when it is necessary for the sanctity of private communications to be breached for the common good. The most commonly known such interception is the wiretap, which dates to the very dawn of telecommunications.
We've come a long way since the invention of the telegraph and the development of the telephone. The Internet in particular, through its vast reach and increasingly ubiquitous nature, has opened up a Pandora's Box of problems when considering the ramifications of wiretap-type interceptions, even when they are for the most laudable of purposes.
When considering these issues, it is all too easy to fall into the trap of focusing our attention on particular instances and specific hardware or software systems. At the moment, the spotlight is shining brightly on the FBI's Carnivore system, which, according to the Bureau, is used to monitor Internet e-mail addressing and related data under court authorization.
Since the inner workings and operational parameters of this system have not been made public (indeed, the system's very existence was only recently revealed), considerable skepticism has been voiced regarding whether the system actually functions "as advertised" and whether it would always be operated in an appropriate and correct manner.
As important as these considerations are, we feel that it is a serious mistake for so much attention to be focused on these specific issues and this specific system, instead of on the much more serious and broader policy implications and questions related to the entire area of Internet "wiretaps," regardless of the specific instrumentality through which they are implemented.
To a significant extent, it appears that the ongoing controversy regarding an "independent review" of the Carnivore system is actually diverting public attention from the more significant issues that desperately need to be addressed. With regard to any officially authorized Carnivore analysis, the U.S. Department of Justice has severely constrained the possible results. In particular, its requirements prevent any meaningfully independent evaluation; it reserves the right to censor and edit all resulting reports, and it confines the analysis solely to the source code -- ignoring important considerations such as the operational environment. In the final analysis, the results of any such Carnivore review will contribute little or nothing toward resolving the much more important policy questions relating to this entire area.
These questions revolve around the fundamental issue of when, and under what conditions, it is appropriate to intercept private telecommunications channels in the first place. There has been a disturbing trend for increasing amounts of data that most observers would consider integral parts of communications to be treated instead as "addressing" information for interception and legal purposes.
This is not an unimportant distinction. In general, the procedure for obtaining authorization to intercept communication address data is much less rigorous than that for obtaining communication contents. In a telephone context, this is the difference between monitoring the specific phone numbers dialed from a particular telephone line (the so-called "pen register" system) and actually overhearing the parties speaking on the calls.
Even before the Internet issues moved to center stage, the blurring of these demarcations was becoming increasingly problematic. It has become common, for example, for the actual message data sent to pagers to be treated merely as addressing information from the standpoint of interception authorizations. The rationale for this determination is difficult to understand, because by any normal analogy, the contents of a pager message are comparable to the contents of a telephone call. It appears that the specific mechanisms of the technology have been used as an excuse for treating pager message contents in this sort of seemingly illogical (but convenient) manner.
When we move into the Internet universe, similar kinds of issues arise, but in guises that are orders of magnitude more complex. One obvious issue is the question of control. Traditional wiretaps (at least until very recently) have usually been under the ostensible control of the telephone companies themselves, and have involved specific telephone lines. It would have been unthinkable in most "routine" law enforcement interception situations for Ma Bell or her descendants to hand over masses of calls relating to non-targeted individuals (a "trunk-side tap") to officials for them to pick through as they saw fit, without telephone company supervision or control.
Systems such as Carnivore are very much an analogue of trunk taps and by definition cannot be controlled by the Internet Service Providers (ISPs) who must install them deep within their networks. In contrast, the correct venue for the control of interceptions should actually be the ISPs themselves, not black boxes under outside control. Such ISP control might entail the creation of standards to assist the ISPs in responding to such matters in a reasonably uniform manner from a technical standpoint, but it does not follow that "tapping" systems need to be designed into the networks themselves (an intrusive concept which has been roundly rejected by most network technologists).
Perhaps most importantly, ISP technical standards in this regard can be completely open and public in nature. Closed standards and secret software source code do not and can not engender public confidence. The argument that the source code for a system such as Carnivore must be kept secret to protect it from hackers or from being bypassed seems overstated.
As discussed above, while we feel that too much emphasis on the technical side of these issues misses the critical points, it is at least prudent that the technical systems operate in as open an environment as possible. We appreciate that even the availability of source code is of only limited value, given its ephemeral nature and ease of alteration, but there is simply no excuse for a completely closed approach in this kind of situation.
There is nothing magical or even particularly complex about packet filters (the heart of such systems), but it is possible for implementation errors or intentionally placed Trojan horses to cause them to behave in inappropriate ways. Such errors would be best exposed by wide public inspection -- sunlight remains the best disinfectant. Properly implemented, the availability of source code would not permit anyone to bypass the systems based on such knowledge.
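To illustrate the point -- purely as a hypothetical sketch, since Carnivore's actual design is not public and all names and structures here are invented -- consider how a one-line implementation error can quietly turn an "addressing only" packet filter into a content collector:

```python
# Hypothetical sketch of "pen register"-style packet filtering.
# A court order might authorize collecting only addressing data
# (the header fields), yet a one-line error retains full content.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str          # source address (addressing data)
    dst: str          # destination address (addressing data)
    payload: bytes    # message content -- NOT authorized for collection

def filter_addressing_only(packets, target):
    """Intended behavior: record only header data for the target."""
    return [(p.src, p.dst) for p in packets if target in (p.src, p.dst)]

def filter_buggy(packets, target):
    """The error: whole packets, payload included, are retained."""
    return [p for p in packets if target in (p.src, p.dst)]

traffic = [
    Packet("alice.example", "target.example", b"private message"),
    Packet("carol.example", "dave.example", b"unrelated"),
]

print(filter_addressing_only(traffic, "target.example"))
```

Nothing about such a bug is visible to the parties being monitored; only inspection of the code itself -- ideally wide public inspection -- would reveal it.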
The key to the usefulness of such interception systems is that the targets of surveillance must not be aware of the systems' use. Once a target realizes that it is under surveillance, the probability of its using easily available mechanisms (such as encryption, alternative addresses, etc.) to complicate the task of observers rises dramatically. Neither source-code dissemination, nor the placing of interception systems under responsible ISP control as we recommend, is likely to alter any of these factors.
Another stated reason for the source code secrecy in the Carnivore case is to protect the commercial interests of the software firm that wrote the original source code upon which Carnivore is based. This may be a reasonable attitude from a commercial standpoint, but it demonstrates again why a better course would be open systems where such commercial considerations could not easily warp crucial public policy considerations.
Other aspects of these issues regarding the Internet relate back to our earlier discussion of addresses versus information content. A given packet of Internet data may contain text, segments of an image, a piece of a voice phone call, or innumerable other sorts of data. The specificity with which determinations are made regarding which kinds of data may be intercepted in any given situation is extremely important. Current trends in this regard are not at all encouraging.
For example, from the standpoint of interception and other law enforcement purposes, the record of visited Web addresses (URLs) is often treated as roughly analogous to addresses on conventionally mailed envelopes. This is an inappropriate and incorrect analysis. URLs allow for the tracing of complete interactions deep into specific areas of Web sites, including keyword searches and other information lookups, and in many cases data submissions, login/password information and other detailed data as well. Web users' URL histories are effectively a diary of nearly every aspect of their Web use, and are more properly analogous to the contents of an envelope, not to what was written on the outside. However, given the abuse of this same sort of URL data for commercial purposes (such as tracking users via Web cookies and other means), this unfortunate state of affairs should not be at all surprising.
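A small illustration of the point, using an invented URL (the site and parameter names are hypothetical), shows how much "content" the standard components of a visited address can carry:

```python
# An invented URL demonstrating that URLs routinely embed content,
# not just addressing: the query string carries the user's search
# terms and identity.
from urllib.parse import urlparse, parse_qs

url = "https://search.example/medical?q=hiv+treatment&user=jsmith"

parts = urlparse(url)
print(parts.netloc)           # the envelope-like part: "search.example"
print(parse_qs(parts.query))  # the content: search terms and identity
```

Treating the entire URL as mere "addressing" sweeps the query-string content in along with the host name.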
When we look at the overall situation, what stands out is a continuum of interrelated policy and technical issues. At a minimum, on the technical side of the equation, it is crucial that system architecture and operation continually satisfy the system requirements for security and privacy, and that this be independently verified. For this to be possible, the detailed system requirements must be known to the public, and independent assurances are needed that the system in operation remains consistent with those requirements into the future.
The analysis of source code can lend some credibility to the process, and should be among the minimum requirements, but this only represents a snapshot -- such code can be perpetually changing over time. Therefore, these processes must also include some demonstrable assurances that the code subjected to analysis was actually the code in use, and that any subsequent changes have left the entire system operationally compatible with the previously verified requirements. Any seemingly positive analysis of a particular piece of source code is inherently incomplete in and of itself. Given the serious vulnerabilities that exist in most commercial operating systems and application software programs today, it is the overall interaction of system issues, taken in their totality, that matters most in this regard.
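One small piece of such assurance can be sketched as follows -- a hypothetical illustration, not a description of any actual review process: comparing a cryptographic digest of the code actually deployed against the digest recorded when that code was audited. This proves nothing about runtime behavior or the surrounding system, but it does detect silent substitution of the reviewed code.

```python
# Sketch: detecting whether deployed code still matches audited code.
# The code snippets here are placeholders; in practice the digests
# would cover the exact source tree or binary that auditors reviewed.
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a code snapshot."""
    return hashlib.sha256(data).hexdigest()

audited_code = b"def packet_filter(...): ..."   # snapshot given to reviewers
audited_digest = digest(audited_code)

deployed_code = b"def packet_filter(...): ..."  # what is running now
assert digest(deployed_code) == audited_digest  # flags silent modification
```

Even this narrow check requires an honest reporting chain from the operational system; it is a necessary ingredient, not a sufficient one.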
Beyond such technical considerations, the policy issues that play into all aspects of these questions and systems must be rigorously analyzed and understood by all concerned. This is too important a complex of issues to be handled in a sloppy or offhand fashion. The Internet is rapidly becoming the foundation of all manner of society's most basic functions. Routine purchases, bill payments, personal and business phone calls, education, law enforcement -- the myriad facets of the most public and private aspects of our lives -- are finding their way onto the conduits of the Internet.
Society must have the will to apply the basic precepts and protections of our cultures to the Internet. We must not be seduced into permitting these basic concepts to be undermined by technological details or related diversionary tactics in any environments, either on or off the Internet. These principles apply regardless of whether we're dealing with physical mail, electronic mail, pagers, conventional phone calls, Internet telephony, or the various component parts of the World Wide Web. Society should be unwilling to accept anything less.
Lauren Weinstein
firstname.lastname@example.org or email@example.com
Co-Founder, PFIR - People For Internet Responsibility - http://www.pfir.org
Moderator, PRIVACY Forum - http://www.vortex.com
Member, ACM Committee on Computers and Public Policy
Peter G. Neumann
firstname.lastname@example.org or email@example.com or firstname.lastname@example.org
Co-Founder, PFIR - People For Internet Responsibility - http://www.pfir.org
Moderator, RISKS Forum - http://catless.ncl.ac.uk/Risks
Chairman, ACM Committee on Computers and Public Policy