Resource Allocation
-------------------

One year after its meteoric rise to fame, Napster faces a host of problems. The best known is the lawsuit filed by the Recording Industry Association of America against Napster, Inc. Close behind is the decision by several major universities, including Yale, the University of Southern California, and Indiana University, to ban Napster traffic on their networks, thus depriving the Napster community of its highest-bandwidth music servers.

The popular perception is that universities are blocking Napster access out of fear of lawsuits. There is another reason: Napster users consume large, unbounded amounts of bandwidth. By default, a newly installed Napster client configures the host computer to serve as many MP3s to as many other Napster clients as possible. University users, who tend to have faster connections than most, make particularly effective servers. In the process, however, they can generate enough traffic to saturate a campus network. Harvard University cited exactly this reason when it decided to allow Napster but limit the bandwidth it may use.

Gnutella, the distributed replacement for Napster, is even worse: not only do downloads require large amounts of bandwidth, but every search is broadcast to a set of neighboring Gnutella nodes, which in turn forward the request to their own neighbors. While the broadcast does not reach the entire Gnutella network, it still consumes large amounts of bandwidth.
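To see why this kind of search is so expensive, consider a back-of-the-envelope sketch of query flooding. Gnutella queries really do carry a time-to-live (customarily 7 hops), but the fan-out of 4 below is an illustrative assumption, and the sketch idealizes the network: it ignores the duplicate-query suppression real nodes perform, which softens but does not eliminate the exponential growth in traffic.

    # Sketch: estimate how many messages a single Gnutella-style query
    # generates. Assumes an idealized network in which every node has
    # `degree` neighbors and no duplicate queries are suppressed.

    def flood_messages(degree, ttl):
        """Messages sent for one query with the given fan-out and TTL."""
        total = degree               # the originator contacts its neighbors
        forwarded = degree
        for _ in range(ttl - 1):     # at each hop, every recipient forwards
            forwarded *= degree - 1  # to all neighbors but the sender
            total += forwarded
        return total

    # With 4 neighbors per node and a TTL of 7, one query can generate
    # thousands of messages:
    print(flood_messages(degree=4, ttl=7))   # -> 4372

Flooding keeps search simple and fully decentralized, but every query taxes many nodes that will never answer it, and that cost is borne by the network as a whole.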
As universities limit Napster's bandwidth or shut it off entirely, the utility of the Napster network degrades. As the Gnutella network grows, searching for and retrieving items becomes more cumbersome. Each service thus threatens its own destruction, and for reasons independent of the legality of trading MP3s. The underlying problem is *resource allocation*.

Traditional file systems and communication media maintain centralized control over their resources. File systems use quotas to restrict the amount of data that users may store. ISPs measure the bandwidth their clients use, such as the traffic generated by a hosted web site, and charge a proportional fee. Without these controls, each user has an incentive to squeeze all the value out of the resource in order to maximize personal gain. If one user has this incentive, however, so do all the others. The biologist Garrett Hardin labeled this economic plight the "tragedy of the commons" [Garrett Hardin, "The Tragedy of the Commons," Science, 1968]. A commons is any resource shared by a group of people: the air we breathe, the water we drink, land for farming and grazing, fish from the sea. The tragedy of the commons is that a commonly owned resource will be overused until it degrades, as each agent pursues self-interest first. Freedom in a commons brings ruin to all; in the end, the resource is exhausted.

We can describe the problem by borrowing further from economic and political science theory. Mancur Olson stated the problem of collective action and public goods succinctly: "Indeed, unless the number of individuals in a group is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interests." [Mancur Olson, "The Logic of Collective Action," in Rational Man and Irrational Society, eds. Brian Barry and Russell Hardin (Beverly Hills: Sage, 1982), 44]

The usual solution to a commons problem is to assign ownership of the resource. Ownership allows a party to profit from the resource, thus providing an incentive to care for it. Most real-world systems take this approach, with a fee-for-service business model.

Decentralized peer-to-peer systems have similar resource allocation and protection requirements: the total storage or bandwidth provided by the sum of all peers is still finite. Such systems need to protect against two main classes of attack:

* Communications denial of service (DoS) attacks: a system's bandwidth or processing capacity is overused, causing the loss of a particular network service or of all network connectivity. For example, a web site accessed millions of times may become unavailable or temporarily cease operation.

* Storage flooding attacks: a user exploits a system by storing a disproportionately large amount of data, leaving no space available for other users.

Systems must expect both honest users who may accidentally misuse resources and passive or active adversaries who intentionally exploit weaknesses. This multi-tiered threat model is common to most areas of computer and network security.

This chapter focuses on the means by which collaborative systems can protect themselves against resource allocation attacks. A peer-to-peer system requires some measure of accountability between peers. Accountability can be used to identify entities that do not behave according to protocol: for example, peers that fail to fulfill a promise to store data for a certain duration, or that fail to provide the specified "always-on" access to data.

The organization of this chapter is as follows. First, we explain further the difficulties of achieving accountability within the requirements framework of decentralized, collaborative systems. Next, we recall some historical examples of accountability measures in computer systems. We then describe the accountability methods that can be used for resource protection, and their incorporation into existing peer-to-peer systems and designs; the most common of these measures are micropayment schemes and reputation systems. Lastly, we present some open problems and future directions for accountability schemes.

Without a way to protect against the tragedy of the commons, the practicality of collaborative networking is suspect. But through the use of various accountability measures, peer-to-peer systems may continue to expand as overlay networks across the existing Internet. The dream of a distributed, anonymous storage or communication service would otherwise remain unrealized.