Chapter: Accountability Measures for Peer-to-Peer Systems
---------------------------------------------------------
One year after its meteoric rise to fame, Napster faces a host of
problems. The best known of these problems is the lawsuit filed by
the Recording Industry Association of America against Napster, Inc.
Close behind is the decision by several major universities, including
Yale, the University of Southern California, and Indiana University,
to ban Napster traffic on their systems, thus depriving the Napster
network of its highest-bandwidth music servers. The most popular
perception is that universities are blocking Napster access for fear of
lawsuits. But there is another reason.
Napster users eat up large and unbounded amounts of bandwidth.
By default, when a Napster client is installed, it configures the
host computer to serve MP3s to as many other Napster clients as
possible. University users, who tend to have faster connections than
most others, are particularly effective servers. In the process, however,
they can generate enough traffic to saturate a network. This was the
reason Harvard University cited when it decided to allow Napster but
limit its bandwidth use.
Gnutella, the distributed replacement for Napster, is even worse: not
only do downloads require large amounts of bandwidth, but searches
require broadcasting to a set of neighboring Gnutella nodes, which in
turn forward the request to other nodes. While the broadcast does not
send the request to the entire Gnutella network, it still requires
bandwidth for each of the many computers queried.
As universities limit Napster bandwidth or shut it off entirely due to
bandwidth usage, the utility of the Napster network degrades. As the
Gnutella network grows, searching and retrieving items becomes more
cumbersome. Each service threatens to dig its own grave -- and for
reasons independent of the legality of trading MP3s. Instead, the
problem is resource allocation.
Problems in resource allocation come up constantly in offering
computer services. Traditionally they have been solved by making
users accountable for their use of resources. Such
accountability in distributed or peer-to-peer systems requires
planning and discipline.
Traditional file systems and communication media use accountability
to maintain centralized control over their respective resources -- in
fact, the resources allocated to users are commonly managed by "user
accounts." File systems use quotas to restrict the amount of data
that users may store on the systems. ISPs measure the bandwidth their
clients are using -- such as the traffic generated from a hosted web
site -- and charge some monetary fee proportional to this amount.
Without these controls, each user has an incentive to squeeze all the
value out of the resource in order to maximize personal gain.
If one user has this incentive, so do all the users.
Biologist Garrett Hardin labeled this economic plight the "Tragedy of
the Commons." [Hardin, Garrett. "The Tragedy of the Commons." Science
162 (1968): 1243-1248.]
The "commons" (originally a grazing area in the middle of a village)
is any resource shared by a group of people: it includes the air we
breathe, water we drink, land for farming and grazing, and fish from
the sea. The tragedy of the commons is that a commonly owned resource
will be overused until it is degraded, as all agents pursue
self-interest first. Freedom in a commons brings ruin to all; in the
end, the resource is exhausted.
We can describe the problem by further borrowing from economics and
political science. Mancur Olson explained the problem of
collective action and public goods as follows:
"...unless the number of individuals in a group is quite small, or
unless there is coercion or some other special device to make
individuals act in their common interest, rational, self-interested
individuals will not act to achieve their common or group interests."
[Olson, Mancur. "The Logic of Collective Action." Rational Man and
Irrational Society. Eds. Brian Barry and Russell Hardin. Beverly
Hills: Sage, 1982. 44.]
The usual solution for commons problems is to assign ownership to the
resource. This ownership allows a party to profit from the resource,
thus providing the incentive to care for it. Most real-world systems take
this approach with a fee-for-service business model.
Decentralized peer-to-peer systems have similar resource allocation and
protection requirements. The total storage or bandwidth provided by the
sum of all peers is still finite. Systems need to protect against two
main areas of attacks:
* Denial of service (DoS) attacks.
These overload a system's bandwidth or processing ability, causing
the loss of service of a particular network service or all network
connectivity. For example, a Web site accessed millions of times
may return "503 Service Unavailable" errors or temporarily cease
operation.
* Storage flooding attacks.
These exploit a system by storing a disproportionately large amount
of data so that no more space is available for other users.
As the Napster and Gnutella examples show, attacks need not be
malicious. System administrators must be prepared for normal peaks in
activity, for accidental misuse, and for the intentional exploitation
of weaknesses by adversaries. Most computers that offer services on a
network share these kinds of threats.
Without a way to protect against the tragedy of the commons,
collaborative networking rests on shaky ground. Peers can abuse the
protocol and rules of the system in any number of ways, such as:
* Providing corrupted or low-quality information.
* Reneging on promises to store data.
* Going down during periods when they are needed.
* Claiming falsely that other peers have abused the system in these
ways.
These problems must be addressed before peer-to-peer systems can
achieve lasting success. Through the use of various accountability
measures, peer-to-peer systems--including systems that offer protection
for anonymity--may continue to expand as overlay networks through the
existing Internet.
This chapter focuses on types of accountability that collaborative
systems can use to protect against resource allocation attacks.
The problem of accountability is usually broken into two parts:
* Restricting access.
Each computer system tries to limit its users to a certain number of
connections, a certain quantity of data that can be uploaded or
downloaded, and so on. The technologies for doing this are commonly
called micropayments, a useful term even though at first it
can be misleading. (They don't necessarily have to involve an
exchange of money, or even of computer resources.)
* Selecting favored users.
This is normally done through maintaining a reputation for
each user the system communicates with. Users with low reputations
are allotted fewer resources, or are mistrusted outright and find
their transactions rejected.
The two parts of the solution apply in different ways, but work
together to create accountability. In other words, a computer system
that is capable of restricting access can then use a reputation system
to grant favored access to users with good reputations.
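The interplay between the two parts can be sketched in a few lines.
The sketch below is illustrative only: the function names, the scoring
scale, and the quota formula are assumptions, not part of any system
described in this chapter.

```python
# Sketch: a server that restricts access by default, then grants
# favored access in proportion to reputation. All names and constants
# here are hypothetical.

BASE_QUOTA = 10    # units any unknown peer may consume per period
MAX_QUOTA = 1000   # ceiling even for the best-reputed peers

def allowed_quota(reputation):
    """Scale a peer's resource quota with its reputation score in [0.0, 1.0]."""
    return min(MAX_QUOTA, int(BASE_QUOTA * (1 + 99 * reputation)))

class Server:
    def __init__(self):
        self.reputation = {}   # peer id -> score in [0.0, 1.0]

    def handle_request(self, peer, units_requested):
        score = self.reputation.get(peer, 0.0)  # strangers start at zero
        return units_requested <= allowed_quota(score)
```

A stranger is limited to the base quota; a peer with a perfect score
is allowed two orders of magnitude more, which is the "favored access"
idea in miniature.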
Section: The Difficulty of Accountability
-----------------------------------------
In simple distributed systems, rudimentary accountability measures are
often sufficient. If the list of peers is generally static and all are
known to each other by hostname or address, misbehavior on anyone's
part leads to a permanent bad reputation. Furthermore, if the
operators of a system are known, pre-existing mechanisms such as legal
contracts help ensure that systems abide by
protocol.
In the real world, these two social forces -- reputation and law -- have
provided an impetus for fair trade for centuries. Since the earliest
days of commerce, buyers and merchants have known each others'
identities, at first through the immediacy of face-to-face contact, and
later through postal mail and telephone conversations. This knowledge
has allowed them to research the past histories of their trading
partners, and to seek legal reprisal when deals go bad. Much of today's
e-commerce uses a similar authentication model: clients (both consumers
and businesses) purchase items and services from known sources over the
Internet and the World Wide Web. These sources are uniquely identified
by digital certificates, registered trademarks, and other addressing
mechanisms.
Peer-to-peer technology removes central control of such resources as
communication, file storage and retrieval, and computation. Therefore, the
traditional mechanisms for ensuring proper behavior can no longer provide
the same level of protection.
Subsection: Special Problems Posed By Peer-to-Peer Systems
----------------------------------------------------------
Peer-to-peer systems have to treat identity in special ways for
several reasons:
1. The technology makes it harder to uniquely and permanently identify
peers and their operators. Connections and network maps might be
transient. Peers might be able to join and leave the system.
Participants in the system might wish to hide personal identifying
information.
2. Even if users have an identifying handle on the peer they're dealing
with, they have no idea who the peer is, and no good way to assess
its history or predict its performance.
3. Individuals running peer-to-peer services are rarely bound by
contracts, and the cost and time delay of legal enforcement would
generally outweigh their possible benefit.
We choose to deal with these problems--rather than give up and force
everyone onto a centralized system with strong user
identification--to pursue two valuable goals on the
Internet: privacy and dynamic participation.
Privacy is a powerfully appealing goal in distributed systems, as
discussed in Chapter 11, Free Haven. The design of many such systems
features privacy protection for people offering and retrieving
files.
Privacy for people offering files requires a mechanism for inserting
and retrieving documents either anonymously or pseudonymously.
[footnote: A pseudonymous identity allows other participants to link
together all the activities a person does on the system, without being able
to determine who the person is in real life. Pseudonymity is explored
later in this chapter and in Chapter 11, Free Haven.]
Privacy for people retrieving files requires a means to communicate --
via email, telnet, FTP, IRC, a Web client, etc. -- while not divulging
any information that could link the user to his or her real-world
persona.
[footnote: In retrospect, the Internet appears not to be an ideal
medium for anonymous communication and publishing. Internet services
and protocols make both passive sniffing and active attacks too
easy. For instance, email headers include the routing paths of email
messages, including DNS hostnames and IP addresses. Web servers
normally see user IP addresses; cookies on a client's browser may
be used to store persistent user information. Commonly used online
chat applications such as ICQ and Instant Messenger also divulge IP
addresses. Network cards in promiscuous mode can read all data
flowing through the local Ethernet. With all these possibilities,
telephony or dedicated lines might be better suited to the goal of
privacy protection. However, the ubiquitous nature of the Internet
has made it the only practical medium for digital transactions
across a wide area, like the applications discussed in this book.]
Dynamic participation has both philosophical and practical
advantages. The Internet's loosely-connected structure and explosive
growth suggest that any peer-to-peer system must be similarly flexible
and dynamic, in order to be scalable and sustain long-term
use. Similarly, the importance of ad-hoc networks will probably
increase in the near future as wireless connections get cheaper and
more ubiquitous. A peer-to-peer system should therefore let peers
join and leave smoothly, without impacting functionality. This design
also decreases the risk of system-wide compromise, as more peers join
the system. (It helps if servers run a variety of operating systems
and tools, so that a single exploit cannot compromise most of the
servers at once.)
Subsection: Peer-to-Peer Models and Their Impacts on Accountability
-------------------------------------------------------------------
There are many different models for peer-to-peer systems. As the
systems become more dynamic and diverge from real-world notions of
identity, it becomes more difficult to achieve accountability and
protect against attacks on resources.
The simplest type of peer-to-peer system has two main characteristics.
First, it contains a fairly static list of servers: additions and
deletions are rare and may require manual intervention. Second, the
identities of the servers (and to some extent their human operators)
are known, generally by DNS hostname or static IP host address. Since
the operators can be found, they may have a legal responsibility or
economic incentive -- leveraged by the power of reputation -- to
fulfill the protocols according to expectation.
An example of such a peer-to-peer system is the Mixmaster remailer. A
summary of the system appears in Chapter 7, Mixmaster. The
original Mixmaster client software was developed by Lance Cottrell and
released in 1995.
[Cottrell, Lance. Mixmaster and Remailer Attacks. Web-published essay.
http://www.obscura.com/~loki/remailer/remailer-essay.html]
Currently, the software runs on about 30 remailer nodes, whose
locations are published to the newsgroup alt.privacy.anon-server and
at web sites such as efga.org. The software itself can be found at
http://mixmaster.anonymizer.com.
Remailer nodes are known by hostname and remain generally fixed.
While anybody can start running a remailer, the operator needs to
spread information about his new node to web pages that publicize node
statistics, using an out-of-band channel (meaning that something
outside the Mixmaster system must be used--most of the time, manually
sending email). The location of the new node is then manually added to
each client's software configuration files. This process of manually
adding new nodes leads to a system that remains generally static.
Indeed, there are fewer than 30 widely-known Mixmaster nodes.
[Electronic Frontiers Georgia List of Public Mixmaster Remailers.
Web Page. http://anon.efga.org/Remailers]
A slightly more complicated type of peer-to-peer system still has
identified operators, but is dynamic in terms of members. That is, the
protocol itself has support for adding and removing participating
servers. One example of such a system is Gnutella. It has good support
for new users (which are also servers) joining and leaving the system,
but at the same time the identity and location of each of these
servers is generally known through the hosts list, which advertises
existing hosts to new ones that wish to join the network. These sorts
of systems can be very effective, because they're generally easy to
deploy (there's no need to provide any real protection against people
trying to learn the identity of other participants) while at the same
time they allow many users to freely join the system and donate their
resources.
Farther still along the scale of difficulty lie peer-to-peer systems
that have dynamic participants and pseudonymous servers. In these
systems, the actual servers that store files or proxy communication
live within a digital fog that conceals their geographic location and
other identifying features. Thus, the mapping of pseudonym to
real-world identity is not known. A given pseudonym may be pegged
with negative attributes, but a user can just create a new pseudonym
or manage several at once. Since a given server can simply disappear
at any time and reappear as a completely new entity, these sorts of
designs require a micropayment system or reputation system to provide
accountability on the server end. An example of a system in this
category is the Free Haven design: each server can be contacted via a
Mixmaster reply block and a public key, but no other identifying
features are available.
The final peer-to-peer model on this scale is a dynamic system with
fully anonymous operators. A server that is fully anonymous lacks even
that level of temporary identity provided by a pseudonymous system
like Free Haven. Since an anonymous peer's history is by definition
unknown, all decisions in an anonymous system must be based only on
the information made available during each protocol operation. In
this case, peers cannot use a reputation system, since there is no
real opportunity to establish a profile on any server. This leaves a
micropayment system as the only reasonable way to establish
accountability. On the other hand, because the servers themselves have
no long-term identity, this may limit the number of services or
operations such a system could provide. For instance, such a system
would have difficulty offering long-term file storage and backup
services.
Subsection: Purposes of Micropayments and Reputation Systems
------------------------------------------------------------
The main goal of accountability is to maximize a server's
utility to the overall system, while minimizing its potential
threat. There are two ways to minimize the threat.
One approach is to limit our risk (in bandwidth used, disk space lost,
or whatever) to an amount roughly equivalent to our benefit from the
transaction. This suggests the fee-for-service or micropayment model
mentioned at the beginning of the chapter.
The other approach is to make risk proportional to our trust in the
other parties. This calls for a reputation system.
In the micropayment model, a server makes decisions based on fairly
immediate information. Payments and the value of services are
generally kept small, so that a server only gambles some small amount
of lost resources for any single exchange. If both parties are
satisfied with the result, they can continue with successive
exchanges. Therefore, parties require little prior information about
each other for this model, as the risk is small at any one time. As
we will see later in the chapter, where we discuss actual micropayment
systems, the notion of payment might not involve any actual currency
or cash.
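The pattern of small, successive exchanges bounds exposure by
construction. The sketch below illustrates this; the peer interface
and the unit of "payment" are hypothetical stand-ins for whatever
resource a real protocol would exchange.

```python
# Sketch of the micropayment idea: risk only one small unit per round
# and stop as soon as the other side fails to reciprocate. The
# `partner_delivers` callback stands in for a real network exchange.

UNIT = 1  # small amount of resource risked per exchange

def trade(rounds, partner_delivers):
    """Exchange up to `rounds` units one at a time; abort on first failure.

    `partner_delivers(i)` models whether the partner honors round i.
    Returns how many units we lost to a cheater (at most UNIT).
    """
    lost = 0
    for i in range(rounds):
        # We risk one unit up front...
        if not partner_delivers(i):
            lost += UNIT   # ...and can lose at most one unit total,
            break          # because we refuse further exchanges.
    return lost

# Even against a partner who cheats on round 3, exposure is bounded:
loss = trade(100, lambda i: i < 3)   # loss is 1, not 100
```

The point is that neither party needs prior knowledge of the other:
the worst case is losing a single small payment.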
In the reputation model, for each exchange, a server risks some amount
of resources that is proportional to its trust that the result will be
satisfactory. As a server's reputation grows, other nodes become more
willing to make larger payments to it. The micropayment approach of
small, successive exchanges is no longer necessary.
Reputation systems require careful development, however, if the system
allows impermanent and pseudonymous identities. If an adversary can
gain positive attributes too easily and establish a good reputation,
she can damage the system. Worse, she may be able to "pseudospoof" --
she may establish many seemingly "distinct" identities which all
secretly collaborate with each other.
Conversely, if a well-intentioned server can incur negative points
easily from short-lived operational problems, she can lose reputation
too quickly. (This is the attitude feared by every system
administrator: "Their Web site happened to be down when I visited, so
I'll never go there again.") The system would lose the utility
offered by these "good" servers.
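One way to keep a short-lived outage from destroying a good server's
standing is to blend each new observation into the score gradually,
for example with an exponentially weighted average. This is a sketch
of that idea only; the weighting constant is an assumption, not a
value from any deployed reputation system.

```python
# Sketch: exponentially weighted reputation update, so a single
# transient failure dents -- but does not destroy -- a long good record.
# ALPHA is an assumed tuning constant.

ALPHA = 0.1  # weight given to the newest observation

def update(reputation, outcome):
    """Blend the latest outcome (1.0 = success, 0.0 = failure) into the score."""
    return (1 - ALPHA) * reputation + ALPHA * outcome

rep = 1.0               # a server with a perfect history
rep = update(rep, 0.0)  # one outage: score dips to 0.9, not to 0.0
```

A persistent cheater still sinks toward zero over repeated failures,
but a server that was briefly down keeps most of its standing.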
As we will see later in the chapter, complicated protocols and
calculations are required for both micropayments and reputation
systems. Several promising micropayment systems are in operation,
while research on reputation systems is relatively young. These fields
need to develop ways of checking the information being transferred,
efficient tests for distributed computations, and more broadly, some
general algorithms to verify behavior of decentralized systems.
There is a third way to handle the accountability problem: ignore the
issue and engineer the system simply to survive some faulty servers.
Instead of spending time on ensuring that servers fulfill their
function, leverage the vast resources of the Internet for redundancy
and mirroring. We might not know, and we might be unable to find out,
if a server is behaving according to protocol, whether that protocol
is storing files and responding to file queries, forwarding email or
other communications upon demand, or correctly computing values or
analyzing data. Instead, if we replicate the file or functionality
through the system, we can ensure that the system works correctly with
high probability, despite misbehaving components. This is the model
used by Napster, along with some of the systems in this book
such as Freenet and Gnutella.
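The redundancy approach can be made concrete with a majority vote over
replicas. The sketch below assumes a simplified query interface; real
systems would add networking, timeouts, and content verification.

```python
# Sketch: tolerate misbehaving servers by replicating a query across
# several replicas and taking a majority vote, rather than holding any
# one server accountable. The `query` callables stand in for network calls.

from collections import Counter

def robust_lookup(servers, key):
    """Ask every replica for `key` and return the most common answer."""
    answers = [query(key) for query in servers]
    value, _count = Counter(answers).most_common(1)[0]
    return value

honest = lambda key: "correct-data"
liar = lambda key: "bogus"

# With a majority of honest replicas, the bad answer is outvoted:
result = robust_lookup([honest, honest, liar], "some-file")
```

As long as fewer than half the queried replicas misbehave, the system
returns the right answer with no per-server accounting at all.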
In general, the popular peer-to-peer systems take a wide variety of
approaches to solving the accountability problem. For instance:
* Freenet dumps unpopular data on the floor, so people flooding the system
with unpopular data are ultimately ignored. Popular data is cached
near the requester, so repeated requests won't traverse long sections
of the network.
* Gnutella doesn't "publish" documents anywhere except on the publisher's
computer, so there's no way to flood other systems.
(This has a great impact on the level of anonymity actually offered.)
* Publius limits the submission size to 100k. (It remains to be seen
how successful this will be; they recognize it as a problem.)
* MojoNation uses micropayments for all peer-to-peer exchanges.
* Free Haven requires publishers to provide reliable space of their own
if they want to insert documents into the system. This economy of
reputation tries to ensure that people donate to the system in
proportion to how much space they use.
Subsubsection: Junk mail as a resource allocation problem
---------------------------------------------------------
The familiar problem of junk mail (which is known more formally as
unsolicited commercial email, and popularly as spam) yields some
subtle insights into resource allocation and accountability. Junk
mail abuses the unmetered nature of email and of Internet bandwidth in
general. Even if junk email only achieves an extremely small success
rate, the sender is still successful because the costs of sending each
message are essentially zero.
Spam wastes both global and individual resources. On a broad scale,
it congests the Internet, wasting bandwidth and server CPU cycles. On
a more personal level, filtering and deleting spam can waste an
individual's time (which, collectively, can represent significant
person-hours). Users also may be faced with metered connection charges,
although recent years have seen a trend towards unmetered service and
"always-on" access.
Even though the motivations for junk mail might be economic, not
malicious, senders who engage in such behavior play a destructive role
in "hogging" resources. This is a clear example of the tragedy of the
commons.
Just as some environmental activists suggest ways of curbing pollution
by making consumers pay the "real costs" of the manufacturing
processes that cause pollution, some Internet developers are
considering ways of stopping junk mail by placing a tiny burden on
each email sent, thus forcing the sender to balance the costs of bulk
email against the benefits of responses. The burden need not be a
direct financial levy; it could simply require the originator of the
email to use significant resources. The cost of an email message
should be so small that it wouldn't bother any individual trying to
reach another; it should be just high enough to make junk email
unprofitable. We'll examine such micropayment schemes later in the
chapter.
We don't have to change the infrastructure of the Internet to see a
benefit from email micropayments. Individuals can adopt personal
requirements as recipients. But realistically, individual,
non-standard practices will merely reduce the usability of email.
Although individuals adopting a micropayment scheme may no longer be
targeted, the scheme would make it hard for them to establish
relationships with other Internet users, while junk mailers would
continue to fight over the commons.
Subsection: Pseudonymity and Its Consequences
---------------------------------------------
Many, if not most, of the services on the Internet today do not
directly deal with legal identities. Instead, web sites and chat rooms
ask their users to create a handle or pseudonym by
which they are known while using that system. These systems should be
distinguished from those which are fully anonymous; in a fully
anonymous system, there is no way to refer to the other members of the
system.
Subsubsection: Problems from Pseudospoofing and Possible Defenses
-----------------------------------------------------------------
The most important difficulty caused by pseudonymity is
pseudospoofing. The term, coined by L. Detweiler on the cypherpunks
mailing list, means that one person creates and controls many phony
identities at once. This is a particularly
bad loophole in reputation systems that blithely accept input from
just any user, like current Web auction sites. An untrustworthy
person can pseudospoof to return to the system after earning a bad
reputation, and can even create an entire tribe of accounts that pat
each other on the back. Pseudospoofing is a major problem inherent in
pseudonymous systems.
Lots of systems fail in the presence of
pseudospoofing. Web polls are one example; even if a web site requires
registration, it's easy for someone to simply register and then vote
ten, fifteen, or 1500 times. Another example is a free web hosting
site, such as GeoCities, which must take care to avoid someone
registering under six or seven different names to obtain extra web
space.
Pseudospoofing is hard in the real world, and so most of us don't think
about it. After all, in the real world, changing one's appearance and
obtaining new identities is relatively rare, spy movies to the contrary.
When we come online, we bring with us the assumptions built up over a
lifetime of dealing with people who can be counted on to be the "same
person" next time we meet them. Pseudospoofing works, and works so well,
because these assumptions are completely unjustified online. As
shown by the research of psychologist Sherry Turkle and others, multiple
identities are common in online communities.
So what can we do about pseudospoofing? Several possibilities present
themselves.
* Abandon pseudonymous systems entirely. Require participants
in a peer-to-peer system to prove conclusively who they are.
This is the direction taken by most work on public key
infrastructures (PKI), which try to tie online users to
some legal identity. Indeed, VeriSign used to refer to its
digital certificates as "driver's licenses for the information
superhighway."
This approach has a strong appeal. After all, why should
people be allowed to "hide" behind a pseudonym? And how can
we possibly have accountability without knowing someone's real
identity?
Unfortunately, this approach is unnecessary, unworkable,
and in some cases undesirable. It's unnecessary for at least
three reasons.
1. Identity does not imply accountability. For example, if a
misbehaving user is in a completely different jurisdiction,
other users may know exactly who he or she is and yet be unable to do
anything about it. Even if they are in the same jurisdiction,
the behavior may be perfectly legal, just not very nice.
2. Accountability is possible even in pseudonymous systems;
this point will be developed at length in the rest of this
chapter.
3. The problem with pseudospoofing is not that someone acts
under a "fake" name, but that someone acts under more than one
name; if we could somehow build a system that ensured that
every pseudonym was controlled by a distinct person, a
reputation system could handle the problem.
Furthermore, absolute authentication is unworkable because it
requires verifying the legal identities of all participants.
On today's Internet, this is a daunting proposition. VeriSign
and other Public Key Infrastructure companies are making
progress in issuing their "digital driver's licenses," but we
are a far cry from that end. In addition, one then has to
trust that the legal identities have not themselves been
fabricated. Verification can be expensive and leaves a system
relying on it open to attack if it fails.
Finally, this proposed solution is undesirable because it
excludes users who either cannot or will not participate.
International users of a system may not have the same ways of
verifying legal identity. Other users may have privacy
concerns.
* Allow pseudonyms, but ensure that all participants
are distinct entities. This is all that is strictly necessary to
prevent pseudospoofing. Unfortunately, it tends to be not
much easier than asking for everyone's legal identity.
* Monitor user behavior for evidence of pseudospoofing.
Remove or "expose" accounts that seem to be controlled by
the same person. The effectiveness of this approach varies
widely with the application. It also raises privacy concerns
for users.
* Make pseudospoofing unprofitable. Give new accounts in a
system little or no resources until they can "prove
themselves" by doing something for the system. Make it so
expensive for an adversary to "prove itself" multiple times
that it has no reason to pseudospoof. This is the approach
taken by the Free Haven project, which deals with "new"
servers by asking them to donate resources to the good of the
system as a whole.
All of these alternatives are just rules of thumb. Each of them might
help us combat the problems of pseudospoofing, but it's hard to reach
a conclusive solution. We'll return to possible technical solutions
later in the chapter when we describe the Advogato system.
Subsubsection: Reputation for sale -- SOLD!
-------------------------------------------
Pseudonymous systems are based on the assumption that each pseudonym
is controlled by the same entity for the duration of the system. That
is, the adversary's pseudonyms stay controlled by the adversary, and
the good guys' pseudonyms stay controlled by the good guys.
What happens if the adversary takes control of someone who already has
a huge amount of trust or resources in the system? Allowing accounts
to change hands can lead to some surprising situations.
The most prevalent example of this phenomenon comes in online
multiplayer games. One of the best-known such games is Ultima Online.
Players gallivant around the world of Britannia, completing quests,
fighting foes, traipsing around dungeons, and in the process
accumulating massive quantities of loot. Over the course of many, many
hours, a player can go from a nobody to the lord and master of his own
castle. Then he can sell it all to someone else.
Simply by giving up his username and password, an Ultima Online player
can transfer ownership of his account to someone else. The new owner
obtains all the land and loot that belonged to the old player. More
importantly, he obtains the reputation built up by the old player.
The transfer can be carried out independently of the game; no one need
ever know that it ever happened. As far as anyone else knows, the game
personality is the "same person." Until the new owner does something
"out of character," or until the news spreads somehow, there is no way
to tell that a transfer has occurred.
This has led to a sort of cottage industry in trading game identities
for cash online. Ultima Online game identities, or "avatars," can be
found on auction at eBay. Other multiplayer online games admit the
occurrence of similar transactions. Game administrators can try to
forbid selling avatars, but as long as it's just a matter of giving up
a username and password, it will be an uphill battle.
The point of this example is that reputations and identities do not
bind as tightly to people online as they do in the physical
world. Reputations can be sold or stolen with a single password. While
people can be coerced or "turned" in the physical world, it's much
harder. Once again, the assumptions formed in the physical world turn
out to be misleading online.
One way of dealing with this problem is to embed an important piece of
information, such as a credit card number, into the password for an
account. Then revealing the password reveals the original user's credit
card number as well, creating a powerful incentive not to trade away
the password. The problem is that if the password is ever accidentally
compromised, the user now loses not just the use of his or her account,
but the use of a credit card as well.
Another response is to make each password only "good" for a certain
number of logins; to get a new password, the user must prove that he
is the same person who applied for the previous password. This does not
stop trading passwords, however -- it just means the "original" user must
hang around to renew the password each time.
Common Methods For Dealing With Flooding and DoS Attacks
--------------------------------------------------------
We've seen some examples of resource allocation problems and
denial of service attacks. These problems have been around for a long
while in various forms, and there are several widespread strategies
for dealing with them. We'll examine them in this section to show that
even the most common strategies are subject to attack--and such
attacks can be particularly devastating to peer-to-peer systems.
Subsection: Caching and Mirroring
---------------------------------
One of the simplest ways to maintain data availability is to mirror
it. Instead of hosting data on one machine, host it on several. When
one machine becomes congested or goes down, the rest are still
available. Popular software distributions like the Perl archive CPAN
and the GNU system have a network of mirror sites, often spread across
the globe to be convenient to several different nations at once.
Another common technique is caching: if certain data is requested very
often, save it in a place which is closer to the requestor. Web
browsers themselves cache recently visited pages.
Simple to understand and straightforward to implement, caching and
mirroring are often enough to withstand normal usage loads.
Unfortunately, an adversary bent on a denial of service attack can
target mirrors one by one until all are dead.
Subsection: Active Caching
--------------------------
Simple mirroring is easy to do, but it also has drawbacks. Users
must know where mirror sites are and decide for themselves which mirror to
use. This is more hassle for users, and inefficient to boot, as users do
not generally know their network well enough to pick the fastest web site.
In addition, users have little idea of how loaded a particular mirror is;
if many users suddenly decide to visit the same mirror, they may all
receive worse connections than if they had been evenly distributed across
mirror sites.
In 1999, Akamai Technologies became an overnight success with a
service which could be called "active" mirroring. Web sites redirect
their users to special "Akamaized" URLs. These URLs contain
information used by Akamai to dynamically direct the user to a farm of
Akamai web servers that is close to the user on the network. As the
network load changes and as server loads change, Akamai can switch
users to the "best" server farm of the moment.
For peer-to-peer systems, an example of active caching comes in the
Freenet system for file retrieval. In Freenet, file requests are
directed to a particular server, but this server is in touch with
several other servers. If the initial server has the data, it simply
returns the data. Otherwise, it forwards the request to the
neighboring servers and keeps a record of the original requestor's
address. The neighboring servers do the same thing, creating a "chain"
of servers. Eventually the request reaches a server that has the data,
or times out. If the request reaches a server that has the data, the
server sends the data back through the chain to the original
requestor. Every server in the chain, in addition, caches a copy of
the requested data. This way, the next time the data is requested, the
chance that the request will quickly hit a server with the data is
increased.
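The chain of forwarded requests and the caching on the return path can be sketched in a few lines of Python. The node names and the flat neighbor list here are our own simplification for illustration, not Freenet's actual routing protocol:

```python
# Sketch of Freenet-style chained retrieval with caching on the
# return path. Structure and names are illustrative only.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}       # locally cached data, keyed by file key
        self.neighbors = []   # other Node objects this node knows

    def request(self, key, hops_to_live=10):
        """Return the data for `key`, caching it locally, or None."""
        if key in self.store:
            return self.store[key]
        if hops_to_live == 0:
            return None       # the request times out
        for neighbor in self.neighbors:
            data = neighbor.request(key, hops_to_live - 1)
            if data is not None:
                self.store[key] = data   # cache a copy on the way back
                return data
        return None

# Build a simple chain a -> b -> c, where only c holds the file.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors = [b], [c]
c.store["song.mp3"] = b"...bits..."

print(a.request("song.mp3"))  # fetched through b and c
print("song.mp3" in b.store)  # True: b cached a copy en route
```

After the first retrieval, both intermediate servers hold the data, so the next request for it is answered in fewer hops.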
Active caching and mirroring offer more protection than ordinary
caching and mirroring against the "slashdot effect" and flooding
attacks. On the other hand, systems using these techniques then need
to consider how an adversary could take advantage of them. For
instance, is it possible for an adversary to fool Akamai into thinking
a particular server farm is "better" or "worse" situated than it
actually is? Can particular farms be targeted for denial of service
attacks? In Freenet, what happens if the adversary spends all day long
requesting copies of the complete movie Lawrence of Arabia
and thus loads up all the local servers to the point where they have
no room for data wanted by other people? These questions can be
answered, but they require thought and attention.
For a specific system, we might be able to answer these questions
through a performance and security analysis. For instance, Chapter 14,
Performance, uses Freenet as a model for performance analysis. Here,
we can note two general points about
how active caching reacts to adversaries.
First, if the cache chooses to discard data according to which data
was least recently used, the cache is vulnerable to an active
attack. An adversary can simply start shoving material into the cache
until it displaces anything already there. In particular, an adversary
can simply request that random bits be cached. Active caching systems
whose availability is important to their users should have some way of
addressing this problem.
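A toy example makes the least-recently-used weakness concrete. This is a sketch of the eviction policy itself, not any particular system's cache implementation:

```python
from collections import OrderedDict
import os

# Minimal LRU cache: when full, evict whatever was used least recently.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)      # mark as recently used
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=100)
cache.put("popular-file", b"useful data")

# An adversary shoves 100 random junk entries into the cache...
for _ in range(100):
    cache.put(os.urandom(8).hex(), b"junk")

# ...and the useful entry has been displaced.
print("popular-file" in cache.items)  # False
```

The attacker needs no knowledge of what the cache holds; flooding it with arbitrary requests is enough to displace everything already there.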
Next, guaranteeing service in an actively cached system with multiple
users on the same cache is tricky. Different usage patterns fragment
the cache and cause it to be less useful to any particular set of users.
The situation becomes more difficult when adversaries enter the picture:
by disrupting cache coherency on many different caches, an adversary may
potentially wreak more havoc than by mounting a denial of service attack
on a single server.
One method for addressing both these problems is to shunt users to
caches based on their observed behavior. This is a radical step
forward from a simple least-recently-used heuristic. By using past
behavior to predict future results, a cache has the potential to work
more efficiently. This past behavior can be considered a special kind
of "reputation," a topic we'll cover in general later in this chapter.
But systems can also handle resource allocation using simpler and
relatively well-tested methods involving micropayments. In the next
section, we'll examine some of them closely.
Section: Micropayment Schemes
-----------------------------
Accountability measures based on micropayments require that each party
offer something of value in an exchange. Consider Alice and Bob, both
servers in a peer-to-peer system that involves file sharing or
publishing. Alice may be inserting a document into the system and
want Bob to store it for her. Alternatively, Alice may want Bob to
anonymously forward some email or real-time Internet protocol message
for her. In either case, Alice seeks some resource commodity --
storage and bandwidth respectively -- from Bob. In exchange, Bob asks
for a micropayment from Alice to protect his resources from overuse.
There are two main flavors of micropayment schemes. The first type does
not offer Bob any real redeemable value; its goal is simply to slow
Alice down when she requests resources from Bob. She pays with a
proof of work (POW), showing that she performed some computationally
difficult problem. These payments are called non-fungible, because
Bob cannot turn around and use them to pay anybody else. With the
second type of scheme, fungible micropayments, Bob receives a payment
that holds some intrinsic or redeemable value. The second type of
payment is commonly known as digital cash.
Both of these schemes may be used to protect against resource
allocation attacks. POWs can prevent communication denial of service
attacks. Bob may require someone who wishes to connect to submit a
POW before he allocates any non-trivial resources to communication.
In a more sophisticated system, he may start charging only if he
detects a possible DoS attack. Likewise, if Bob charges to store
data, an attacker needs to pay some (prohibitively) large amount to
flood Bob's disk space. Still, POWs are not a perfect defense against
an attacker with a lot of CPU capacity; such an attacker could
generate enough POWs to flood Bob with connection requests or data.
Subsection: Varieties of Micropayments or Digital Cash
------------------------------------------------------
The difference between micropayments and digital cash is a semantic
one. The term "micropayment" has generally been used to describe
schemes using small-value individual payments. Usually, Alice will
send a micropayment for some small, incremental use of a resource
instead of a single large digital cash "macropayment" for, say, a
month's worth of service. We'll continue to just use the
commonly-accepted phrase "micropayment" in this chapter without
formally differentiating between the two types, but we'll describe
some common designs for each type.
Digital cash may be either anonymous or identified. Anonymous schemes
do not reveal Alice's identity to Bob or the bank providing the cash,
while identified spending schemes expose her information. Hybrid
approaches can be taken: Alice might remain anonymous to Bob but not
to the bank; or might remain anonymous to everybody yet traceable.
The latter system is a kind of pseudonymity; the bank or recipient
might be able to relate a sequence of purchases, but not link them to
an identity.
No matter the flavor of payment -- non-fungible, fungible, anonymous,
identified, large, or small -- we want to ensure that a malicious user
can't commit forgery or spend the same coin more than once without
getting caught. A system of small micropayments might not worry about
forgeries of individual micropayments, but would have to take steps to
stop large-scale, multiple forgeries.
Schemes identifying the spender are the digital equivalent of debit or
credit cards. Alice sends a "promise of payment" that will be honored
by her bank or financial institution. Forgery is not much of a
problem here, because, as with a real debit card, the bank ensures
that Alice has enough funds in her account to complete the payment and
transfers the specified amount to Bob. Unfortunately, though, the
bank learns of all of Alice's transactions.
Anonymous schemes take a different approach, and are the digital equivalent
of real cash. The electronic coin itself is "worth" some dollar amount.
If Alice loses the coin, she's lost the money. If Alice manages to pay
both Bob and Charlie with the same coin and not get caught, she's
successfully double-spent the coin.
In the real world, government mints use special paper, microprinting,
holograms, and other technologies to prevent forgery. On the other
hand, duplication is easy in a digital medium: just copy the bits! We
need to find alternative methods to prevent this type of fraud. Often,
this involves looking up the coin in a database of spent coins. Bob
might have a currency unique to him, so that the same coin couldn't
be used to pay Charlie. Or coins might be payee-independent, and Bob
would need to verify with the coin's issuing "mint" that it has not
already been spent with Charlie.
With this description of micropayments and digital cash in mind, let's
consider some specific schemes.
Subsection: Non-Fungible Micropayments
--------------------------------------
Proofs of work were first advocated by Cynthia Dwork and Moni Naor in
1992 as "pricing via processing" to handle resource allocation
requests.
[Cynthia Dwork and Moni Naor. Pricing via processing or
combatting junk mail. In Ernest F. Brickell, editor, Advances in
Cryptology--CRYPTO '92, volume 740 of Lecture Notes in Computer
Science, pages 139-147. Springer-Verlag, 1993, 16-20 August 1992.]
The premise is to make a user compute a moderately hard, but not
intractable, computational problem before gaining access to some
resource. The problem takes a long time to solve, but only a short
time to verify that the user found the right solution. Therefore,
Alice must perform a significantly greater amount of computational
work to solve the problem than Bob has to perform to verify that she
did it.
Dwork and Naor offer their system specifically as a way to combat
electronic junk mail. As such, it can impose a kind of accountability
within a distributed system.
To make this system work, a recipient refuses to receive email unless
a POW is attached to each message. The POW is calculated using the
address of the recipient and must therefore be generated specifically
for the recipient by the sender. These POWs serve as a form of
electronic poststamp, and the way the recipient's address is included
makes it trivial for the recipient to determine whether the POW is
malformed. Also, a simple lookup in a local database can be used to
check whether the POW has been spent before.
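The shape of such a scheme can be sketched as follows. This uses a hash-based work function for illustration (Dwork and Naor's actual pricing functions are different); what matters is that the stamp is bound to the recipient's address, cheap to verify, and checked against a local spent-stamp database:

```python
import hashlib, itertools

K = 16  # difficulty: low-order bits of the digest that must be zero

def make_stamp(recipient, message_id):
    """Sender's side: brute-force a counter until the digest
    ends in K zero bits. Costly -- about 2^K hashes on average."""
    for counter in itertools.count():
        d = hashlib.sha256(f"{recipient}:{message_id}:{counter}".encode()).digest()
        if int.from_bytes(d, "big") % (1 << K) == 0:
            return counter

def accept(recipient, message_id, counter, spent):
    """Recipient's side: one hash to verify, one lookup to catch reuse."""
    d = hashlib.sha256(f"{recipient}:{message_id}:{counter}".encode()).digest()
    if int.from_bytes(d, "big") % (1 << K) != 0:
        return False                      # malformed stamp
    if (message_id, counter) in spent:
        return False                      # stamp already spent
    spent.add((message_id, counter))
    return True

spent = set()
stamp = make_stamp("bob@example.com", "msg-1")           # slow for the sender
print(accept("bob@example.com", "msg-1", stamp, spent))  # True
print(accept("bob@example.com", "msg-1", stamp, spent))  # False: replayed
```

Because the recipient's address is hashed into the stamp, a stamp minted for one recipient is worthless for any other.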
The computational problem takes some time proportional to the time
needed to write the email, and is small enough that its cost is
negligible for an individual user or a mail distribution list. Only
unsolicited bulk mailings would spend a large amount of computation
cycles to generate the necessary POWs.
Recipients can also agree with individual users or mail distribution
lists to use an access control list ("frequent correspondent list")
so that some messages do not require a POW. These techniques are
useful for social efficiency: if private correspondence instead cost
some actual usage fee, users may be less likely to send email that
would otherwise be beneficial, and the high bandwidth of the
electronic medium may be underutilized.
Dwork and Naor additionally introduced the idea of a POW with a trap
door: a function that is moderately hard to compute without knowledge
of some secret, but easy to compute given this secret. Therefore,
central authorities could easily generate postage to sell for
pre-specified destinations.
Subsubsection: Extended Types of Non-Fungible Micropayments
------------------------------------------------------------
Hash cash, designed by Adam Back in late 1997, is an
alternative micropayment scheme that is also based on POWs.
[http://www.cypherspace.org/~adam/hashcash/]
Here, Bob calculates a hash or digest, a
number that can be generated easily from a secret input, yet no one
can reasonably guess that input knowing only the hash (see Chapter 15,
Trust). Bob then asks Alice to guess the input through a
brute-force calculation; he can set how much time Alice has to "pay" by
specifying how many bits she must guess. Typical hashes used for
security are 128 bits or 160 bits in size. Finding another input
which will produce the entire hash (which is called a "collision")
requires a prohibitive amount of time.
Instead, Bob requires Alice to guess an input which will produce
a number with the identical low-order bits as the hash. If we call this
number of bits k, Bob can set a very small k to require
a small payment or a larger k to require a larger payment.
Formally, this kind of algorithm is called a "k-bit partial
hash collision."
For example, the probability of producing a 17-bit collision on any
single guess is 2^{-17}; finding one therefore takes approximately
131,000 tries on average. To give a benchmark for how efficient hash
operations are: in one test, our Pentium-III 800 MHz machine performed
approximately 312,000 hashes per second.
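A minimal sketch of a k-bit partial hash collision along these lines, with SHA-256 standing in for the hash function and all names our own:

```python
import hashlib, itertools

def low_bits(data, k):
    """The low-order k bits of SHA-256(data)."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") & ((1 << k) - 1)

def pay(challenge, k):
    """Alice's brute-force search: about 2^k hashes on average."""
    target = low_bits(challenge, k)
    for counter in itertools.count():
        if low_bits(challenge + counter.to_bytes(8, "big"), k) == target:
            return counter

k = 12                      # Bob tunes k to set the price of the payment
challenge = b"bob-unique-challenge"
proof = pay(challenge, k)   # slow for Alice...
# ...but a single hash for Bob to verify:
assert low_bits(challenge + proof.to_bytes(8, "big"), k) == low_bits(challenge, k)
```

Raising k by one bit doubles Alice's expected work while leaving Bob's verification cost unchanged, which is exactly the asymmetry a POW scheme needs.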
Hash cash protects against double-spending by using individual
currencies. Bob generates his hash from an ID or name known to him
alone. So the hash cash coins given to Bob must be specific to Bob,
and he can immediately verify their validity against a local
spent-coin database.
Another micropayment scheme based on partial hash collisions is
client puzzles, suggested by researchers Ari Juels and John
Brainard of RSA Labs. Client puzzles were introduced to provide a
cryptographic countermeasure against "connection depletion" attacks,
whereby an attacker exhausts a server's resources by making a large
number of connection requests and leaving them unresolved.
[A. Juels and J. Brainard. Client Puzzles: A Cryptographic Defense
Against Connection Depletion Attacks. NDSS '99.]
When client puzzles are used, a server usually accepts connection
requests as usual. However, when it suspects that it is under attack,
marked by a significant rise in connection requests, it responds to
requests by issuing each requestor a puzzle: a hard cryptographic
problem based on the time and information specific to the server and
client request.
[http://www.rsasecurity.com/news/pr/000211.html]
Like hash cash, client puzzles require the client to find
k-bit partial hash collisions. To decrease the chance
that a client might simply guess the answer, each puzzle can
optionally be made up of multiple subpuzzles that the client must
solve individually. Mathematically, a puzzle is a hash digest for
which the client must find an input that produces it.
[XXX Footnote:
For example, breaking a puzzle into eight subpuzzles increases the
average work required to solve it by the same amount as leaving the
puzzle whole but increasing its size by 3 bits. Breaking up the
puzzle, however, makes it much harder to guess outright -- that is, to
answer correctly without performing a brute-force search by hashing
randomly selected inputs. The chance of correctly guessing the
subpuzzle version is 2^{-8k}, while that of guessing the larger single
version is 2^{-(k+3)}.
XXX]
Subsubsection: Non-Parallelizable Work Functions
------------------------------------------------
Both of the hash-collision POW systems in the previous section can
easily be solved in parallel. In other words, a group of n
machines can solve each problem in 1/n the amount of time as
a single machine. Historically, this situation is like the encryption
challenges (including a challenge to break the 56-bit DES standard
used in the U.S. until recently) that were solved relatively quickly
by dividing the work among thousands of users.
Parallel solutions may be acceptable from the point of view of
accountability. After all, users still pay with the same expected
amount of burnt CPU cycles, whether a single machine burns m
cycles, or n machines burn m cycles collectively.
But if the goal of non-fungible micropayments is to ensure public
access to Bob's resources, parallelizable schemes are weak because
they can be overwhelmed by distributed denial of service attacks.
Let's actually try to make Alice wait for a fixed period between two
transactions, in order to better protect Bob's resources. Consider a
"proof of time": we desire a client to spend some amount of time, as
opposed to work, to solve a given problem. An example is MIT's LCS35
Time Capsule Crypto-Puzzle. The time capsule, sealed in 1999, will be
opened after 35 years or when the supplied cryptographic problem is
solved, whichever comes first. The problem is designed to foil any
benefit of parallel or distributed computing. It can be solved only as
quickly as the fastest single processor available.
Time-lock puzzles, such as the LCS35 time capsule, were first
presented by Ron Rivest, Adi Shamir, and David Wagner. These types of
puzzles are designed to be "intrinsically" or "inherently" sequential
in nature. The problem used in LCS35 is to compute 2^{2^t} (mod
n), where n is the product of two large primes
p*q, and t can be arbitrarily chosen to set
the difficulty of the puzzle. This puzzle can be solved only by
performing t successive squares modulo n. There is
no known way to speed up this calculation without knowing the
factorization of n. The reason is the same reason much conventional
encryption is hard to break: there is no known efficient method for
recovering the two primes when only their product is known.
[Ronald L. Rivest, Adi Shamir, and David A. Wagner. Time-lock puzzles
and timed-release Crypto. Feb 1996.].
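The puzzle and its trapdoor can be illustrated with toy numbers. The primes below are far too small to be secure; they only show why the puzzle-setter can take a shortcut the solver cannot:

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle.
p, q = 1000003, 1000033          # the setter's secret primes (toy-sized)
n = p * q
t = 100000                       # number of required sequential squarings

# Solver: t successive modular squarings, inherently one after another.
x = 2
for _ in range(t):
    x = (x * x) % n
solved = x                       # equals 2^(2^t) mod n

# Setter's trapdoor: knowing phi(n), first reduce the huge exponent
# 2^t modulo phi(n), then do a single fast modular exponentiation.
phi = (p - 1) * (q - 1)
shortcut = pow(2, pow(2, t, phi), n)

print(solved == shortcut)        # True
```

Without the factorization of n there is no known way to reduce the exponent, so an outsider is stuck doing all t squarings in sequence, no matter how many machines he owns.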
It's worth noting in passing that the previous construction is not
proven to be non-parallelizable. Besides the
product-of-two-primes problem, its security rests on no one knowing
how to perform the repeated modular squaring in parallel. This
problem is tied up with the "P vs. NC" problem in computational
complexity theory and is outside the scope of this chapter. Similar
to the better-known "P vs. NP" problem, which concerns the question
"which problems are easy?", the P vs. NC problem asks "which problems
are parallelizable?" [XXX Footnote: Historical note: NC stands for
Nick's Class, named after Nicholas Pippenger, one of the first
researchers to investigate such problems.] We refer the reader to [
Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo. Limits to
Parallel Computation: P-Completeness Theory. Oxford University Press
1995 ] for more detail.
Subsection: Fungible Micropayments
----------------------------------
All of the micropayment schemes described so far are non-fungible.
Alice pays Bob for resource use with a coin that represents a proof of
work, but Bob cannot redeem this token for anything of value to him.
Such micropayments help prevent DoS and flooding attacks, but there's
no measure of "wealth" in the system, so Bob has no economic incentive
to engage in the exchange.
Non-fungible micropayments are better suited for ephemeral resources, like
TCP connections, than they are for long-term resources, like data storage.
Consider an attacker who wants to make a distributed data store unusable.
If an attacker is trying to fill up a system's storage capacity and is
allowed to store data for a long time, the effects of DoS attacks can be
cumulative. This is because the attacker can buy more and more space
on the system as time goes on.
If micropayments just use up CPU cycles and cannot be redeemed for
something of value, an attacker can slowly nibble at resources,
requesting a megabyte now and then as it performs enough work to "pay" for
the space. This can continue, bit by bit, until the attacker controls a
large percentage of the total. Furthermore, the victim is unable to use
these payments in exchange for other peers' resources, or alternatively to
purchase more resources.
Enter redeemable payments. This compensation motivates users to
donate resources and sets a more stable price for those resources.
Subsubsection: Freeloading
--------------------------
The history of the Internet records a number of users who have freely
donated resources. The hacker ethos and the free software movement can be
relied on to provide resources to an extent. Johan Helsingius ran the
initial "anonymous" remailer -- anon.penet.fi -- for its own sake. The
cypherpunk (type I) and Mixmaster (type II) remailers for anonymous
email are run and maintained for free from around the globe. Processing
for SETI@home and distributed.net is also performed without
compensation, other than the possibility of fame for finding an alien
signature or cracking a key.
Unfortunately, not everybody donates equally. It is tempting for a
user to "let somebody else pay for it" and just reap the rewards.
Peer-to-peer systems may combat this effect by incorporating coercive
measures into their design or deployment, ensuring that users actually
donate resources. This is not a trivial problem. Napster provides a
good example: users need to "connect" to Napster only when actually
searching for MP3 files; otherwise they can remain offline.
Furthermore, users are not forced to publicly share files, although
downloaded files are placed in a public directory by default.
A fairly recent analysis of Gnutella traffic showed a lot of free
riding.
[http://www.parc.xerox.com/istl/groups/iea/papers/gnutella/FreeRidingA.html]
Upwards of 70% of Gnutella users shared no files, and 90% of the users
answered no queries.
Free Haven tackles this problem by attempting to ensure that users
donate resources in amounts proportional to the resources they use.
The system relies on accountability via reputation, which we discuss
later. Mojo Nation, on the other hand, pays users "Mojo" -- the
system's private digital currency -- for donated resources. The
"Mojo" has no meaning outside the system yet, but it can be leveraged
for other system resources.
Subsubsection: Fungible payments for accountability
----------------------------------------------------
Fungible micropayments are not used solely, or even largely, for
economic incentives. Instead, they act as an accountability measure.
Peers can't freeload in the system, as they can earn wealth only by
making their own resources available (or by purchasing resource tokens
via some other means). This is a more natural and more effective way
to protect a system from flooding than proofs of work. In order to tie
up resources protected by fungible payments, an adversary needs to
donate a proportional amount of resources. The attempted denial of
service becomes self-defeating.
If payments can actually be redeemed for real-world currencies, they
provide yet another defense against resource misuse. It may still be
true that powerful organizations (as opposed to a script-kiddie
downloading DoS scripts from RootShell.com) can afford to pay enough
money to flood a system. But now the victim can purchase additional
physical disk space or bandwidth from the money earned. Since the
prices of these computer resources drop weekly, the cost of
successfully attacking the system increases with time.
Business arrangements, not technology, link digital cash to real-world
currencies. Transactions can be visualized as foreign-currency
exchanges, because users need to convert an amount of money to digital
cash before spending it. The Mark Twain Bank "issued" DigiCash eCash
in the U.S. in the mid-1990s, joined by other banks in Switzerland,
Germany, Austria, Finland, Norway, Australia, and Japan.
[http://www.canada.cnet.com/news/0-1003-200-332852.html]
ECash can just as easily be used for private currencies lacking
real-world counterparts; indeed, "Mojo" is based on eCash technology
(although, in its default form, without the blinding operations that
provide anonymity). The digital cash schemes we describe, therefore,
can be used for both private and real-world currencies.
Subsubsection: Micropayment Digital Cash Schemes
------------------------------------------------
Ronald Rivest and Adi Shamir introduced two simple micropayment
schemes, PayWord and Micromint, in 1996. PayWord is a credit-based
scheme based on chains of paywords (hash values), while MicroMint
represents coins by k-way hash-function collisions. Both of these
schemes follow the lightweight philosophy of micropayments: nickels
and dimes don't matter. If a user loses a payment, or is able to
forge a few payments, we can ignore such trivialities. The security
mechanisms in these schemes are not as strong, nor as expensive, as in
the full macropayment digital cash schemes we will discuss later. At
a rough estimate, hashing is about 100 times faster than RSA signature
verification, and about 10,000 times faster than RSA signature
generation.
[R. Rivest and A. Shamir. PayWord and MicroMint: Two simple
micropayment schemes. Lecture Notes in Computer Science 1189,
Proc. Security Protocols Workshop, Cambridge, Springer Verlag (1997). 69-87.]
PayWord is designed for applications in which users engage in many
repetitive micropayment transactions. Some examples are pay-per-use
Web sites and pay-per-minute online games or movies. PayWord relies
on a broker (better known as a "bank" in many digital cash schemes),
mainly for online verification, but seeks to minimize communication
with the broker in order to make the system as efficient as possible.
It works like this. Alice establishes an account with a broker and is
issued a digital certificate. When she communicates with vendor Bob
for the first time each day, she computes and commits to a new payword
chain w_1, w_2, ..., w_n. This chain is created by choosing some
random w_n, and moving backward to calculate each hash w_i from the
hash w_{i+1}.
Alice starts her relationship with Bob by offering w_0. With each
micropayment she moves up the chain from w_0 to w_n. Just knowing
w_0, vendor Bob can't compute any paywords and therefore can't make
false claims to the broker. But Bob can easily verify the i-th payment
if he knows only w_{i-1}. Bob reports to the broker only once at the
end of the day, offering the last (highest-indexed) micropayment and
the corresponding w_0 received that day. The broker adjusts accounts
accordingly.
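A sketch of the chain construction and verification, with SHA-256 standing in for the hash function and the signed-certificate machinery omitted:

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

# Alice builds the chain backward from a random secret w_n.
n = 100                            # chain length: up to n small payments
chain = [None] * (n + 1)
chain[n] = b"random secret w_n"    # in practice, freshly random each day
for i in range(n - 1, -1, -1):     # w_i = hash(w_{i+1})
    chain[i] = h(chain[i + 1])

w0 = chain[0]                      # the commitment Bob receives up front

# Bob verifies each payment with a single hash against the last one seen.
last_seen = w0
for i in range(1, 4):              # Alice spends w_1, w_2, w_3
    payment = chain[i]
    assert h(payment) == last_seen # one cheap hash per micropayment
    last_seen = payment

# At day's end, Bob reports (w_0, w_3, 3) to the broker to settle.
```

Because hashing is one-way, knowing w_0 through w_{i-1} gives Bob no way to fabricate w_i; only Alice, who built the chain from w_n down, can release the next payword.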
As payword chains are both user- and vendor-specific, the vendor can
immediately determine if the user attempts to double-spend a payword.
Unfortunately, however, PayWord does not provide any transaction
anonymity. As this is a credit-based system, the broker knows that
Alice paid Bob.
MicroMint takes the different approach of providing less security, but
doing so at a very low cost for unrelated, low-value payments.
Earlier, we described k-bit collisions, in which Alice found
a value that matched the lowest k bits in Bob's hash.
MicroMint coins are represented instead by full collisions: all the
bits of the hashes have to be identical. A k-way collision
is a set of distinct inputs (x_1, x_2, ..., x_k) that all map to the
same digest. In other words, hash(x_1) = hash(x_2) = ... = hash(x_k).
These collisions are hard to find, as the hash functions are
specifically designed to be collision-free!
[XXX Footnote: We need to examine approximately 2^{n(k-1)/k} inputs
to find a k-way collision; this is essentially the "birthday paradox."
See the Rivest and Shamir paper cited above. XXX]
The security in MicroMint rests in an economy of scale: minting
individual coins is difficult, yet once some threshold of calculations
have been performed, coins can be minted with accelerated ease.
Therefore, the broker can mint coins cost-effectively, while
small-scale forgers can produce coins only at a cost exceeding their
value.
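A toy minting loop illustrates the idea. We use a deliberately tiny 16-bit hash so k-way collisions appear within seconds; real MicroMint parameters are chosen so that only a large-scale broker can mint economically:

```python
import hashlib, os
from collections import defaultdict

BITS, K = 16, 4                      # toy digest width; coins are 4-way collisions

def tiny_hash(x):
    """SHA-256 truncated to BITS bits -- small on purpose, for the demo."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % (1 << BITS)

buckets = defaultdict(list)          # digest -> inputs seen with that digest
coin = None
while coin is None:
    x = os.urandom(8)                # a fresh candidate input
    bucket = buckets[tiny_hash(x)]
    bucket.append(x)
    if len(bucket) == K:             # K inputs, one shared digest: a coin
        coin = tuple(bucket)

# Anyone can verify a coin with just K hash evaluations.
assert len({tiny_hash(x) for x in coin}) == 1
```

Note the economy of scale in the loop: the buckets fill slowly at first, but once many partial buckets exist, each new input is far more likely to complete a coin, which is precisely why a broker with a long-running mint out-produces a small forger.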
As in PayWord, the MicroMint broker knows both Alice, to whom the
coins are issued, and vendor Bob. This system is therefore not
anonymous, allowing the broker to catch Alice if she attempts to
double-spend a coin.
PayWord and MicroMint are just two representative examples of
micropayment schemes. Many others exist. The point to notice in both
schemes is the extreme ease of verification and the small space
requirements for each coin. Not only are these schemes fast, but they
remain fast even in environments with severe resource constraints or
at larger amounts of money.
Micropayment schemes such as these make it possible to extend
peer-to-peer applications beyond the desktop and into embedded and
mobile environments. Routers can use micropayments to retard denial of
service attacks with minimal extra computation and store the proceeds.
Music players can act as mobile Napster or Gnutella servers, creating
ad hoc local networks to exchange music stored in their memories (and
possibly make money in the process). These possibilities are just
beginning to be explored.
Subsubsection: Making money off of others' work
------------------------------------------------
Proofs of work can be exchanged for other resources in a system, even
without a system-wide digital cash scheme. The key is to design a POW
scheme in which Bob can take a POW submitted by Alice and pass it on
to Charlie without expending any significant calculation of his own.
This scheme was introduced by Markus Jakobsson and Ari Juels in 1999
as a bread pudding protocol. Bread pudding is a dish that
originated with the purpose of reusing stale bread. In a similar
spirit, this protocol defines a POW that may be reused for a separate,
useful, and verifiably correct computation. This computation is not
restricted to any specific problem, although the authors further
specify a simple bread pudding protocol for minting MicroMint coins.
[Markus Jakobsson and Ari Juels. Proofs of Work and Bread Pudding
Protocols. In B. Preneel, ed., Communications and Multimedia Security,
pages 258-272, Kluwer Academic Publishers, 1999.]
In this variant on MicroMint's original minting scheme, the broker no
longer has to generate each individual payment made by each
user. Instead, a single valid coin can be redistributed by users in the
system to satisfy each other's POWs. The fundamental idea is to make
clients solve partial hash collisions, similar to the concept of hash
cash. This computation is useful only to the mint, who holds some
necessary secret. With enough of these POWs, the minter can offload
the majority of computations necessary to minting a coin.
Effectively, Bob is asking Alice to compute part of a micromint coin,
but this partial coin is useful only when combined with thousands or
millions of other similar computations. Bob collects all of these
computations and combines them into MicroMint coins. Without
requiring system-wide fungible digital cash, Bob can reuse others'
computation work for computations of value to him (and only to him).
The scheme is extensible and can potentially be used with many
different types of payment schemes, not just MicroMint.
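A minimal sketch of the underlying partial-hash-collision POW, in the spirit of hash cash (the challenge string, nonce width, and difficulty below are our own illustrative choices, not from any deployed protocol):

```python
import hashlib
import itertools

def find_pow(challenge: bytes, k: int) -> int:
    """Alice searches for a nonce whose hash has k leading zero bits."""
    target = 1 << (256 - k)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce            # expected cost: about 2**k hashes

def verify_pow(challenge: bytes, nonce: int, k: int) -> bool:
    """Bob (or Charlie) verifies with a single hash, whatever k is."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - k))

nonce = find_pow(b"bob-challenge-42", 16)   # hypothetical challenge string
assert verify_pow(b"bob-challenge-42", nonce, 16)
```

Because verification is a single hash, Bob can cheaply check Alice's work and pass the result along, which is exactly the reusability the bread pudding protocol exploits.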
Subsubsection: Anonymous Macropayment Digital Cash Schemes
----------------------------------------------------------
Up until now, we have described payments where the value of each coin
or token is fairly small. This smallness discourages forgery, which
pays off only if it can be performed on a large scale. Now we will move
to more complex schemes that allow large sums to be paid in a secure
manner in a single transaction. These schemes also offer multi-party
security and some protection for user privacy.
The "macropayment" digital cash schemes we are about to discuss offer
stronger security and anonymity. However, these protections come at a
cost: the computational and size requirements of such digital cash are
much greater. In cryptographic literature, micropayments are usually
considered extremely small (such as $0.01), and use very efficient
primitives such as hash functions. In contrast, macropayment digital
cash schemes use public-key operations, such as exponentiation, which
are much slower. Use of techniques from elliptic curve cryptography
can alleviate this problem by making it possible to securely use much
shorter keys. Other clever tricks, such as "probabilistic" checking --
only checking some and not other payments on the grounds that large-scale
forgery will be caught eventually -- can help macropayment techniques
compete with micropayment schemes. This is an active research area and a
source of continual innovation. Macropayment schemes will prove useful
when used with the reputation systems discussed later in this paper,
because reputation systems let us make large transactions without assuming
incommensurate risk.
Pioneering work on the theoretical foundations of anonymous digital
cash was carried out by David Chaum in the early 1980s. Chaum holds
patents on his system, and founded DigiCash in 1990 to implement the
technologies.
[D. Chaum. Blind signatures for untraceable payments. Advances in
Cryptology - Crypto '82, Springer-Verlag (1983), pp. 199-203.]
[D. Chaum, Security without identification: transaction systems to make
big brother obsolete, Communications of the ACM 28 (10) (1985),
1030-1044.]
[D. Chaum, A. Fiat, M. Naor. "Untraceable Electronic Cash,"
Advances in Cryptology -- Proceedings of Crypto '88, Lecture Notes
in Computer Science, no. 403, Springer-Verlag, pp. 319-327.]
[D. Chaum "Achieving Electronic Privacy," (invited) Scientific American,
August 1992 pp96-101
http://www.chaum.com/articles/Achieving_Electronic_Privacy.htm]
The electronic cash system he developed is based on an extension of
digital signatures, called blind signatures. A digital signature uses
a Public Key Infrastructure (PKI) to authenticate that a particular
message was sent by a particular person. (See Chapter 15,
Trust for a greater description of signatures and PKI.) A
blind signature scheme allows a person to get a message signed
by another party, while not revealing the message contents to that party
or allowing the party to recognize the message later.
In digital cash, Alice creates some number (the proto-coin) and
multiplies it by a secret random number. She sends this resulting
proto-coin -- blinded by the random number -- to the bank. The bank
checks that she has a positive balance, and signs the proto-coin with
the assurance that "this is a valid coin" using a private key specific
to the particular amount of money in the coin. The bank returns this
to Alice, and subtracts the corresponding amount from Alice's bank
account. Alice then divides out the random number multiplier and
finds herself with a properly minted coin, carrying a valid signature
from the bank. This is just one way to do digital cash, but it will
suffice for this discussion.
In real life, the digital cash transaction would be like Alice
slipping a piece of paper into a carbon-lined envelope, representing
the blinded proto-coin. The bank just writes its signature across the
envelope, which imprints a carbon signature onto the inside paper.
Alice removes the paper and is left with a valid coin.
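The blinding step can be sketched with textbook RSA. This toy code uses tiny primes and omits the padding and per-denomination keys a real system needs; it only illustrates how the bank can sign a value it never sees:

```python
import secrets
from math import gcd

# Toy RSA parameters -- real systems use keys of 2048 bits or more.
p, q = 1000003, 1000033            # small primes, for illustration only
n = p * q
e = 65537                          # bank's public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # bank's private signing key

# Alice picks a coin serial number and a random blinding factor r.
serial = secrets.randbelow(n - 1) + 1
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break

# Alice sends only the blinded proto-coin to the bank.
blinded = (serial * pow(r, e, n)) % n

# The bank signs it without ever seeing the serial number.
blind_sig = pow(blinded, d, n)

# Alice divides out r to obtain the bank's signature on the serial number.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone holding the bank's public key can verify the coin.
assert pow(sig, e, n) == serial
```

The algebra mirrors the carbon-lined envelope: the bank signs the envelope (the blinded value), and the signature transfers to the paper inside (the serial number) once Alice removes the blinding factor.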
Alice can then spend this coin with Bob. Before accepting it, Bob
needs to check with the issuing bank that the coin hasn't already been
spent, a process of online verification. Afterwards, Bob can deposit
the coin with the bank, which has no record to whom that coin was
issued. It saw only the blinded proto-coin, not the underlying
"serial" number.
This digital cash system is both anonymous and untraceable. Its
disadvantage, however, is that coins need to be verified online during
the transaction to prevent double-spending. This slows down each
transaction.
Stefan Brands proposed an alternative digital cash scheme in 1993. It
forms the core of a system implemented and tested by CAFE, a European
project with both academic and commercial members. His patents are
currently held by the Montreal-based privacy company Zero-Knowledge
Systems, Inc.
Brands' digital cash scheme allows offline checking of double-spending
for fraud-tracing, with obvious performance benefits compared to
online verification. It is also well-suited for incorporation into
smartcards, and the underlying technology provides an entire framework
for privacy-protecting credential systems.
[Stefan Brands. Untraceable Off-Line Cash in Wallet with Observers,
Advances in Cryptology - CRYPTO '93, Lecture Notes in
Computer Science, no. 773, Springer-Verlag, pp. 302-318. ]
[Stefan Brands. Rethinking Public Key Infrastructures and Digital
Certificates; Building in Privacy. Cambridge: MIT Press, 2000.]
[Stefan Brands. Online Book Chapter from Rethinking Public Key
Infrastructures and Digital Certificates; Building in Privacy.
http://www.xs4all.nl/~brands/cash.html ]
Brands' scheme uses a "restrictive" blind signature protocol rather than a
normal blind signature protocol as proposed by Chaum. Under this protocol,
the bank issues Alice a certificate; in the digital cash context, this
certificate is a valid coin, represented as three values: a secret key, a
public key, and a digital signature certified by the bank. A
key aspect of this protocol is that the bank knows -- and encodes
attributes into -- part of Alice's secret key, but has no idea
what the corresponding public key and certificate look like (except
that they are consistent with the known part of the secret key.) At
the end of the issuing protocol, Alice uses techniques somewhat
similar to Chaum's protocol to generate a valid coin.
Payment is very different in Brands' system. Alice's coin contains
her secret key, so she doesn't actually give her coin to the vendor
Bob. Instead, she proves to Bob that she is the proper holder of that
coin (that is, that she knows the secret key associated with the
public key), without actually disclosing its contents. This type of
payment, a signed proof of knowledge transcript, is a fundamental
shift from Chaum's ecash model, in which the coin is merely an
"immutable" bit string.
Brands' system maintains both privacy and honesty in a very clever
manner. If Alice spends the coin only once, the bank can
gain no information as to her identity. After all, during the issuing
protocol, the bank only saw part of Alice's secret key, not the public
key used to verify Alice's payment transcript signature. Nor did the bank
see its own signature on that public key. Yet, if
Alice double-spends the coin, the bank can use it to extract her
identity!
We won't provide the math necessary to understand the security in this
system, but you can understand why it works by comparing it to a
Cartesian x-y plane. The first random number challenge used during
payment provides one point (x0,y0) on this plane. An infinite number
of lines can pass through this one point. If Alice uses the same coin
to pay Charlie, a different random number is used. Now we know a
second (x1,y1) point, and two points uniquely define a line. In the
same way, Alice's identity will be exposed if she spends the coin
twice.
Brands' scheme -- useful for both digital cash and credentials -- can
be used to encode other useful information, such as negotiable
attributes exposed during payment, or high-value secrets that can
prevent lending. A "high-value secret" refers to the same strategy we
discussed when trying to prevent people from sharing their accounts --
if a certificate to do X includes the user's credit card number, the
user will be less willing to loan the certificate to others so they can
do X.
The "negotiable attributes" are an extension of a powerful idea - that
of "credentials without identity." If you have a credential without
identity, you have a way of proving that you belong to a certain class
of people without actually having to prove anything about "who you are."
For example, you may have a credential which certifies that you are over
21, but doesn't include your name. The entity that certified your age
wouldn't want you to be able to lend this certificate to someone who
isn't over 21. Therefore, we utilize these high-value secrets: the user
needs to know the secret in order to use the over-21 certificate.
Brands' scheme takes this farther and allows you to selectively reveal
or hide various certifications "on the fly," thereby allowing you to
"negotiate" your degree of privacy.
One final note. Whether a peer-to-peer system uses micropayments or
macropayments, system designers must consider the possibility that these
can be DoS targets in themselves. Perhaps an attacker can flood a system
with cheaply-minted counterfeit coins, eating up computational resources
through verification checking alone. The extent of this problem depends
largely on the computational complexity of coin verification. Many of
the systems we describe -- hash cash, client puzzles, MicroMint,
PayWord -- can very quickly verify coins, with only a single or a few
hash operations. Public-key operations, such as modular exponentiation,
can be significantly more expensive. Again, digital cash schemes are an
active area of cryptographic research; before specifying a scheme it is
worth checking out the proceedings of the Financial Cryptography, CRYPTO,
and EUROCRYPT conferences.
Subsection: The use and effectiveness of micropayments in peer-to-peer systems
-------------------------------------------------------------------------------
So far, we have spent quite a bit of time describing various
micropayment and digital cash schemes. Our discussion is not meant to
be exhaustive, yet it provides some examples of various cryptographic
primitives and technologies used for electronic cash: public-key
encryption, hash functions, digital signatures, certificates, blinding
functions, proofs-of-knowledge, and different one-way and trap-door
problems based on complexity theory. The list reads like a
cryptographic cookbook. Indeed, the theoretical foundations of digital
cash -- and the design of systems -- have been actively researched and
developed over the past two decades.
Only in the past few years, however, have we begun to see the
real-world application of micropayments and digital cash, spurred by
the growth of the Internet into a ubiquitous platform for
connectivity, collaboration, and even commerce. Electronic cash
surely has a place in future society. But its place is not yet
secured. We are not going to try to predict either how fast or how
widespread its adoption will be; that depends on too many economic,
social, and institutional factors.
Instead, we'll focus on how micropayments might be useful for
peer-to-peer systems: what issues the developers of peer-to-peer
systems need to consider, when certain digital cash technologies are
better than others, how to tell whether the micropayments are working,
and how to achieve stronger or weaker protections as needed.
Subsubsection: Identity-based payment policies
----------------------------------------------
When designing a policy for accepting micropayments, a peer-to-peer
system might wish to charge a varying amount to peers based on
identity. For instance, a particular peer might charge less to local
users, to specified friends, to users from academic or non-commercial
subnets, or to users within specified jurisdictional areas.
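Such a policy might be expressed as a simple lookup. The field names, categories, and token amounts below are entirely hypothetical:

```python
# Hypothetical per-identity pricing policy for a peer accepting payments.
def price_in_tokens(peer: dict) -> int:
    if peer.get("nickname") in {"alice", "bob"}:    # specified friends
        return 0
    if peer.get("subnet") == "campus-lan":          # local users
        return 1
    domain = peer.get("domain", "")
    if domain.endswith((".edu", ".org")):           # academic/non-commercial
        return 2
    return 5                                        # default price

assert price_in_tokens({"nickname": "alice"}) == 0
assert price_in_tokens({"domain": "example.edu"}) == 2
assert price_in_tokens({"domain": "example.com"}) == 5
```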
Such policies, of course, depend on being able to securely identify
peers in the system. This can be hard to do both on the Internet as a
whole (where domain names and IP addresses are routinely spoofed) and
in particular systems that allow anonymity or pseudonymity. Chapter
15, Trust, discusses this issue from several general angles,
and Chapter 11, Free Haven, discusses how we try to solve the
problem in one particular pseudonymous system. Many other systems,
like ICQ chat and other instant messaging services, use their own
naming schemes and ensure some security through passwords and central
servers. Finally, systems with many far-flung peers need a reputation
system to give some assurance that peers won't abuse their privileges.
Subsubsection: General considerations in an economic analysis of micropayment design
------------------------------------------------------------------------------------
Designers choosing a micropayment scheme for a peer-to-peer system
should not consider merely the technical and implementation issues of
different micropayment schemes, but also the overall economic impact
of the entire system. Different micropayment schemes have different
economic implications.
A classic economic analysis of bridge tolls serves as a good analogy
for a peer-to-peer system. In 1844, the French engineer Jules Dupuit
reached a major breakthrough in economic theory by arguing as
follows: the economically efficient toll on an uncongested
bridge is zero, because the extra cost from one more person crossing
the bridge is zero. Although bridge building and maintenance is not
free -- it costs some money to the owner -- it is socially inefficient
to extract this money through a toll. Society as a whole is worse off
because some people choose not to cross due to this toll, even though
their crossing would cost the owner nothing, and they would not
interfere with anyone else's crossing (because the bridge is
uncongested). Therefore, everyone should be allowed to cross the
bridge for free, and the society should compensate the bridge-builder
through a lump-sum payment.
[Arsene Jules Etienne Dupuit, (1804-66), "On Tolls and Transport
Charges", 1842, Annales des Ponts et Chaussees (trans. 1962, IEP).]
This bridge example serves as a good analogy to a peer-to-peer system
with micropayments. Tolls should be extracted only in order to limit
congestion and to regulate access to people who value crossing the
most. Given the same economic argument, resource allocation to limit
congestion is the only justifiable reason for micropayments in a
peer-to-peer system. Indeed, excessive micropayments can dissuade
users from using the system, with negative consequences on the whole
society (known as "social inefficiencies"). Users might not access
certain content, engage in e-commerce, or anonymously publish
information that exposes nefarious corporate behavior.
This analysis highlights the ability of micropayments to prevent
attacks and adds the implied ability to manage congestion. Congestion
management is a large research area in networking. Micropayments can
play a useful role in peer-to-peer systems by helping peers prioritize
their use of network bandwidth or access to storage space. Users who
really want access will pay accordingly. Of course, such a system
favors wealthy users, so it should be balanced against the goal of
providing a system with the broadest social benefits. Reputation can
also play a role in prioritizing resource allocation.
Most economic research relevant for micropayments has focused on
owner-side strategies for maximizing profit. AT&T researchers
compared flat-fee versus pay-per-use fee methods where the owner is a
monopolist, and concluded that more revenues are generated with a
flat-fee model. Similar research at MIT and NYU independently
concluded that content bundling and fixed-fees can generate greater
profits per good.
[Competitive pricing of information goods: Subscription pricing versus
pay-per-use, P. C. Fishburn and A. M. Odlyzko, Economic Theory 13 (1999),
447-470. http://www.research.att.com/~amo/doc/competitive.pricing.ps]
[Y. Bakos and E. Brynjolfsson. Bundling Information Goods: Pricing,
Profits, and Efficiency. Management Science, (December 1999).
www.stern.nyu.edu/~bakos/big.pdf]
We try to take a broader view. We have to consider the well-being of
all economic agents participating in and affected by the system. Three
general groups come to mind in the case of a peer-to-peer system: the
owner of the system, the participants, and the rest
of society.
How does a micropayment scheme impact these three agents?
Participants face direct benefits and costs. The owner can collect
fees or commissions to cover the fixed cost of designing the system
and the variable costs of its operation. The rest of society can
benefit indirectly by synergies made possible by the system, knowledge
spillovers, alternative common resources that it frees up, and so on.
To simplify our discussion, we assume that whatever benefits
participants also benefits society. Furthermore, we can realistically
assume a competitive market, so that the owner is probably best off
serving the participants as well as possible. Therefore, we focus on
the cost-benefit analysis for the participant.
We believe that a focus on costs and benefits to participants is
more suited to the peer-to-peer market than the literature on
information services, for several reasons. First, peer-to-peer system
owners do not enjoy the luxury of dictating exchange terms, thanks to
competition. Second, non-fungible micropayments do not generate
revenues for the owner; it is not always worthwhile to even consider
the benefit to the owner. Third, we expect that a large amount of
resources in peer-to-peer systems will be donated by users:
people donate otherwise unused CPU cycles to SETI@home calculations,
unused bandwidth to forward Mixmaster anonymous email, and unused
storage for Free Haven data shares. For these situations, the sole
role of micropayments is to achieve optimal resource allocation among
participants. In other words, micropayments in a peer-to-peer system
should be used only to prevent congestion, where this concept covers
both bandwidth and storage limitations.
Subsubsection: Moderating security levels: an accountability slider
--------------------------------------------------------------------
Poor protection of resources can hinder the use of a peer-to-peer
system on one side; attempts to guard resources by imposing
prohibitive costs can harm it on the other. Providing a widely-used,
highly-available, stable peer-to-peer system requires a balance.
If a peer-to-peer system wishes only to prevent query-flooding
(bandwidth) attacks, the congestion management approach taken by client
puzzles -- and usable with any form of micropayment -- answers the
problem. Query-flooding attacks are transient; once the adversary stops
actively attacking the system, the bandwidth is readily available to
others.
As we have suggested, other forms of congestion are cumulative, such
as those related to storage space. Free Haven seeks "document
permanence," whereby peers promise to store data for a given time
period (although the Free Haven trading protocol seeks to keep this
system dynamic, as discussed in Chapter 11.) If we wait until the
system is already full before charging micropayments, we've already
lost our chance to adequately protect against congestion.
This is the freeloading problem: we wish to prevent parasitic users
from congesting the system without offering something of value in
return. Furthermore, an adversary can attempt to flood the system
early to fill up all available space. Therefore, for systems in which
resource pressures accrue cumulatively, micropayments should always be
required. Free Haven's answer is to require that peers offer storage
space proportional to that which they take up. (Even though
micropayments are not used, the idea of payment by equivalent
resources is similar.)
The alternative to this design is the caching approach taken by
systems such as Freenet. Old data is dropped as newer, more "popular"
data arrives. This approach does not remove the resource allocation
problem, however; it only changes its form. First, flooding the
system can flush desirable data from nearby caches as well. System designers
simply try to ensure it will not congest the resources of more
"distant" peers. Second, freeloading is not as much of a concern, for
peers are not charged with offering best-effort availability to
documents. However, if many peers rely on a few peers to store data,
only the most popular data remains. Social inefficiencies develop if
the system loses data that could be desired in the long run because
short-term storage is simply insufficient to handle the requirements
of freeloading peers.
Furthermore, disk space is only one of several resources to consider.
For instance, massive freeloading can affect scalability through
network congestion.
Always charging for resources can prevent both freeloading and
attacks. On the other hand, excessive charges are disadvantageous in
their own right. So it would be useful to find a balance.
Consider an accountability slider: peers negotiate how much
payment is required for a resource within a general model specified
by the overall peer-to-peer system. Using only a micropayment model,
systems like Free Haven or Publius may not have much leeway. Others,
like Freenet, Gnutella, or Jabber, have notably more. When we later
add the concept of reputation, this accounting process becomes even
more flexible.
Each of the micropayment schemes described earlier in this chapter can
be adapted to provide a sliding scale:
* Hashcash: Partial hashes can be made arbitrarily
expensive to compute by choosing the desired number of bits
of collision, denoted by k. No matter how big
k gets, systems providing the resources can verify
the requests almost instantly.
* Client puzzles: The work-factor of these puzzles is also
based on the number of bit-collisions. The number of
sub-puzzles can also be increased to limit the possibility of
random guessing being successful; this is especially
important when k becomes smaller.
* Time puzzles: The LCS35 timelock includes a parameter
t that sets the difficulty of the puzzle.
* MicroMint: The "cost" of a coin is determined by its
number of colliding hash values. The greater the
k-way collision, the harder the coin is to generate.
* Payword: In the simplest adaptation, a commitment within
PayWord can be a promise of varying amount. However, PayWord
is designed for iterative payments. To be able to use the
same PayWord chain for successive transactions, we want to
always pay with coins of the same value. Therefore, to
handle variable costs, we can just send several paywords for
one transaction. The very light-weight cost of creating and
verifying paywords (a single hash per payword) makes this
multiple-coin approach more suitable than for macropayment
schemes.
* Anonymous digital cash: Coins can have different
denominations. In Chaum's design, the bank uses a different
public-key to sign the coin for different denominations. In
Brands' model, the denomination of the coin is encoded
within the coin's attributes; the bank's public-key is unique
to currency, not denomination. Brands' digital cash system
also allows "negotiable attributes" to be revealed or kept
secret during payment. This additional information can play a
role in setting the accountability slider.
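As a concrete illustration of the PayWord adaptation above, here is a sketch of a hash chain in which several one-hash paywords settle a variable-cost transaction. The seed, chain length, and amounts are illustrative only:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, length: int):
    """Build w_0 ... w_length with w_{i-1} = H(w_i); w_0 is the commitment."""
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    chain.reverse()          # chain[0] == H^length(seed)
    return chain

chain = make_chain(b"alice-seed", 100)   # hypothetical seed, 100 paywords
commitment = chain[0]                    # in PayWord, signed once up front

def verify(last_seen: bytes, payword: bytes, count: int) -> bool:
    """Bob checks 'count' paywords with 'count' hashes -- one per coin."""
    x = payword
    for _ in range(count):
        x = H(x)
    return x == last_seen

# Paying 3 units in one transaction: reveal chain[3]; Bob hashes back.
assert verify(commitment, chain[3], 3)
```

Since each payword costs one hash to check, revealing several of them for one variable-cost transaction stays cheap for both parties.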
By negotiating these various values, we can change the level of
accountability and security offered by a peer-to-peer system. Based
on overall system requirements, this process can be fixed by the
system designers, changed by the administrators of individual peer
machines, or can even fluctuate between acceptable levels at run-time!
While payment schemes can be used in a variety of peer-to-peer
situations -- ranging from systems in which peers are fully identified
to systems in which peers are fully anonymous -- they do involve some
risk. Whenever Alice makes an electronic payment, she accepts some
risk that Bob will not fulfill his bargain. When identities are
known, we can rely on existing legal or social mechanisms. In fully
anonymous systems, no such guarantee is made, so Alice attempts to
minimize her risk at any given time by making small, incremental
micropayments. However, there is another possibility for pseudonymous
systems, or identified systems for which legal mechanisms should only
be used as a last resort: reputation schemes. In this next section,
we consider the problem of reputation in greater depth.
Section: Reputations
---------------------------
While micropayments provide an excellent mechanism for anonymous
exchange, a number of systems call for more long-term pseudonymous or
even public relationships. For instance, in the case of transactions
where one party promises a service over a long period of time (such as
storing a document for three years), a simple one-time payment
generally makes one party in the transaction vulnerable to being
cheated. A whistleblower or political dissident who publishes a
document may not wish to monitor the availability of this document and
make a number of incremental micropayments over the course of several
years, since this requires periodic network access and since continued
micropayments might compromise anonymity. (While third-party escrow
monitoring services or third-party document sponsors might help to solve
this issue, they introduce their own problems.)
In addition, some systems might want to base decisions on the observed
behavior of entities -- how well they actually perform -- rather
than simply how many resources they can provide.
In the real world, we make use of information about users to help
distribute resources and avoid poor results. Back before the days of
ubiquitous communication and fast travel, doing business over long
distances was a major problem. Massive amounts of risk were involved
in, say, sending a ship from Europe to Asia for trade. Reputations
helped make this risk bearable; large banks could issue letters of
credit that could draw on the bank's good name both in Europe and the
Orient, and they could insure expeditions against loss. As the bank
successfully helped expeditions finance their voyages, the bank's
reputation grew, and its power along with it. Today's business
relationships still follow the same path: two parties make a decision
to trust each other based on the reputations involved.
While the online world is different from the brick-and-mortar world,
it too has benefitted from reputations--and can continue to benefit.
The main difference between reputation-based trust systems and
micropayment-based trust systems is that, in reputation-based trust
systems, parties base their decisions in part on information provided
by third parties. Peers are motivated to remain honest by fear that
news of misdealings will reach yet other third parties.
Reputation systems are not useful in all situations. For instance, if
there were thousands of buyers and one or two vendors, being able to
track vendor performance and reliability would not help buyers who
desire to pick a good vendor. On the other hand, tracking performance
might provide feedback to the vendor itself on areas in which it might
need improvement. This in turn could result in better performance down
the road, but only if the vendor acted on this feedback.
Similarly, in fields where a given buyer generally doesn't perform
transactions frequently, the benefits of a reputation system are more
subtle. A buyer might benefit from a real estate reputation system,
since she expects to rent from different people over time. Even for a
domain where she expects to do just one transaction over her whole
lifetime (such as laser eye surgery) she would probably contribute to
a reputation site -- first out of altruism, and second in order to
give the surgeon an incentive to do well.
This is the tragedy of the commons in reverse: when the cost of
contributing to a system is low enough, people will contribute to it
for reasons not immediately beneficial to themselves.
Chapter 17, Reputation describes some of the practical uses
for a reputation system and the difficulties of developing such a
system. The chapter focuses on the solution found at Reputation
Technologies, Inc. In this chapter we'll give some background on
reputation and issues to consider when developing a reputation system.
Subsection: Early Reputation Systems Online
-------------------------------------------
The online world carries over many kinds of reputation from the real
world. The name "Dennis Ritchie" is still recognizable, whether listed
in a phone book or in a posting to comp.theory. Of course,
there is a problem online -- how can you be sure that the person
posting to comp.theory is the Dennis Ritchie? And
what happens when "Night Avenger" turns out to be Brian Kernighan
posting under a pseudonym? These problems of identity--covered earlier
in this chapter--complicate the ways we think about reputations,
because some of our old assumptions no longer hold.
In addition, new ways of developing reputations evolve online. In the
bulletin board systems of the 1980's and early 1990's, one of the more
important pieces of data about a particular user was the
upload/download ratio. Users with particularly low ratios were derided
as "leeches," because they consumed scarce system resources (remember,
when one user was on via a phone line, no one else could log in)
without giving anything in return. As we will see, making use of this
data in an automated fashion is a promising strategy for providing
accountability in peer to peer systems.
Subsubsection: Codifying reputation on a wide scale: the PGP web of trust
-------------------------------------------------------------------------
Human beings reason about reputations all the time. A large-scale
peer-to-peer application, however, cannot depend on human judgements
for more than a negligible portion of its decisions if it has any hope
of scalability. Therefore the next step in using reputations is to
make their development and consequences automatic.
We've already mentioned the value of knowing upload/download ratios in
bulletin board systems. In many systems, gathering this data was
automatic. In some cases, the consequences were automatic as well:
drop below a certain level and your downloading privileges would be
restricted or cut off entirely. Unfortunately, these statistics did
not carry over from one BBS to another--certainly not in any organized
way--so they provided only for reputations on a microscopic scale.
One of the first applications to handle reputations in an automated
fashion on a genuinely large scale was the "web of trust" system
introduced in Phil Zimmermann's Pretty Good Privacy (PGP). This was
the first program to bring public-key cryptography to the masses. In
public-key cryptography, there are two keys per user. One is "public"
and can be used only to encrypt messages. The other key is "private"
and can be used only to decrypt messages. Every user publishes their
public key and keeps the private key safe. Then anyone can use their
public key to send them messages which only they can read.
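The mechanics can be illustrated with textbook RSA using deliberately
tiny numbers (a toy sketch for illustration only; real keys are
hundreds of digits long, and real systems add padding and other
safeguards):

```python
# Textbook RSA with tiny numbers, illustrating the public/private key
# split described above. Toy parameters only -- never use in practice.
p, q = 61, 53
n = p * q                 # 3233, published as part of the public key
e = 17                    # public exponent: anyone may encrypt
d = 2753                  # private exponent: only the owner may decrypt
                          # (chosen so that e * d == 1 mod lcm(p-1, q-1))

message = 65
ciphertext = pow(message, e, n)    # sender uses the public key
recovered = pow(ciphertext, d, n)  # recipient uses the private key
print(recovered == message)        # True
```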
With public key cryptography comes the "key certification problem," a
type of reputation. Reputations are necessary because there is no way
to tell from the key alone which public key belongs to which
person.
For example, suppose Alice would like people to send her encrypted
messages. She creates a key and posts it with the name "Alice."
Unbeknownst to her, Carol has also made up a key with the name "Alice"
and posted it in the same place. When Bob wants to send a message to
Alice, which key does he choose? This happens in real life; as an
extreme example, the name "Bill Gates" is currently associated with
dozens of different PGP keys available from popular PGP key servers.
So the "key certification problem" in PGP (and other public-key
services) consists of verifying that a particular public key really
does belong to the entity to whom it "should" belong. PGP uses a
system called a "web of trust" to combat this problem. Alice's key
may have one or more certifications that say "Such and such person
believes that this key belongs to Alice." These certifications exist
because Alice knows these people personally; they have established to
their satisfaction that Alice really does own this key. Carol's fake
"Alice" key has no such certifications, because it was made up on the
spot.
When Bob looks at the key, his copy of PGP will assign it a "trust
level" based on how many of the certifications are made by people he
knows. The higher the level, the more confidence Bob can have in
using the key.
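The idea can be sketched as a toy model (hypothetical and much
simplified; real PGP also weighs how much Bob trusts each signer's
judgment, not just how many signers he knows):

```python
# Toy model of PGP-style trust levels: count how many of a key's
# certifiers the verifier personally trusts. (Hypothetical sketch;
# real PGP distinguishes "marginal" from "complete" trust in signers.)

def trust_level(certifiers, trusted_by_verifier):
    """Return a coarse trust level for a key based on its certifications."""
    known = sum(1 for c in certifiers if c in trusted_by_verifier)
    if known >= 2:
        return "complete"
    elif known == 1:
        return "marginal"
    return "unknown"

# Alice's key carries three certifications; Bob personally trusts two signers.
alice_certifiers = ["Dave", "Ellen", "Frank"]
bobs_trusted = {"Dave", "Ellen"}
print(trust_level(alice_certifiers, bobs_trusted))   # complete

# Carol's fake "Alice" key has no certifications at all.
print(trust_level([], bobs_trusted))                 # unknown
```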
A perennial question about the web of trust, however, is whether or not
it scales. Small groups of people can create a web of trust easily,
especially if they can meet each other in person. What happens when we
try to make the web of trust work for, say, a consumer and a merchant
who have never met before? The conventional wisdom is that the web of
trust does not scale. After all, there is a limit to how many people
Alice and Bob can know.
The most frequently cited alternative to the web of trust is a
so-called Public Key Infrastructure. Some trusted root party issues
certificates for keys in the system, some of which go to parties which
can issue certificates in turn. The result is to create a
"certification tree." An example is the certificate system used for
SSL web browsers; here VeriSign is one of the trusted root Certificate
Authorities responsible for ensuring that every public key belongs to
the "right" entity. A hierarchical system has its own problems (not
least the fact that compromise of the root key, as may recently have
happened to Sun Microsystems, is catastrophic),
[ Sun Security Bulletin 198
and
http://sunsolve5.sun.com/secbull/certificate_howto.html
"Revocation of Sun Microsystems Browser Certificates"
Computer Emergency Response Team Bulletin CA-2000-19
http://www.cert.org/advisories/CA-2000-19.html ]
but at least it scales, right?
As it turns out, the web of trust may not be as unworkable as conventional
wisdom suggests. A study of publicly available PGP keys in 1997 showed
that on average, only six certificates linked one key to another.
[ Neal McBurnett PGP Web of Trust Statistics.
http://bcn.boulder.co.us/~neal/pgpstat]
This "six degrees of separation" or "small worlds" effect means that for a
merchant and a user who are both good web of trust citizens--meaning that
they certify others' keys and are in turn certified--the odds are good
that they will have reason to trust each other's keys. In current
practice, however, most commercial sites elect to go with VeriSign.
The one major commercial exception is Thawte's Freemail Web of Trust
system.
[Thawte. Personal Certificates for Web and Mail: The Web of Trust
http://www.thawte.com/certs/personal/wot/about.html]
A more serious problem with PGP's implementation of the web of trust,
however, comes with "key revocation." How do you tell everyone that your
key is no longer valid? How do you tell everyone that your certificate on
a key should be changed? For that matter, what exactly did Bob mean
when he certified Charlie's key, and does Charlie mean the same thing when
he certifies David's key?
A more sophisticated--but still distributed--approach to trust
management that tries to settle these questions is the Rivest and
Lampson Simple Distributed Security Infrastructure/Simple Public Key
Infrastructure (SDSI/SPKI). A thorough comparison of this with PGP's
web of trust and PKI systems is given by Yang, Brown, and Newmarch.
[Yinan Yang, Lawrie Brown, Jan Newmarch Issues of Trust in Digital
Signature Certificates http://www.cs.adfa.oz.au/~yany97/auug98.html ]
All of this brings up many issues of trust and public key
semantics, for which we refer the reader to Khare and Rifkin.
[ Rohit Khare and Adam Rifkin
Weaving a Web of Trust. http://www.cs.caltech.edu/~adam/papers/trust.html]
The point we're interested in here is the way in which the web of
trust depends on reputation to extend trust to new parties.
Subsubsection: Who will moderate the moderators: slashdot
---------------------------------------------------------
The news site http://www.slashdot.org attracts a particular
kind of "slashdot reader"--lots of them. Each
and every slashdot reader is capable of posting comments on slashdot
news stories, and sometimes it seems like each and every one actually
does. Reputations based on this interaction can help a user figure out
which of the many comments is worth reading.
To help wade through the resulting mass of comments, slashdot has
a "moderation" system for postings. Certain users of the system are picked
to become "moderators." Moderators can assign scores to postings
and posters. These scores are then aggregated and can be used to tweak a
user's view of the posts depending on a user's preferences. For example, a
user can request to see no posts rated lower than 2.
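The filtering side of this can be sketched in a few lines (the data
model here is invented for illustration; it is not slashdot's actual
schema):

```python
# Sketch of score-threshold filtering in the style of slashdot's
# moderation system. Hypothetical posts and scores.

posts = [
    {"author": "alice", "score": 4, "text": "Insightful analysis..."},
    {"author": "bob",   "score": 0, "text": "First post!"},
    {"author": "carol", "score": 2, "text": "Useful link..."},
]

def visible(posts, threshold):
    """Return only the posts at or above the reader's chosen threshold."""
    return [p for p in posts if p["score"] >= threshold]

# A reader who asks to see no posts rated lower than 2:
for p in visible(posts, 2):
    print(p["author"], p["score"])
```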
The slashdot moderation system is one existing example of a partially
automated reputation system. Ratings are entered by hand, using trusted
human moderators, but then these ratings are collected, aggregated,
and displayed in an automatic fashion.
Although moderation on slashdot serves the needs of many of its
readers, there are frequent complaints that postings have been rated
too high or too low. This is probably the best that can be done without
trying to maintain reputations on the moderators themselves.
Subsubsection: A reputation system without automation: eBay
-----------------------------------------------------------
The eBay Feedback system is another example of a reputation service in
practice. Because buyers and sellers on eBay usually have never met
each other, neither has much reason to believe that the other will
follow through on their part of the deal. They need to decide whether
or not to trust each other.
To help them make the decision, eBay collects "Feedback" about
each eBay participant in the form of ratings and comments. After a trade,
eBay users are encouraged to post Feedback about the trade and rate their
trading partner. Good trades, where the buyer and seller promptly exchange
money for item, yield good feedback for both parties. Bad trades, where
one party fails to come through, yield bad feedback which goes into the
eBay record. All this feedback can be seen by someone considering a trade.
The idea is that such information will lead to more good trades,
and fewer bad trades -- which translates directly into more and better
business for eBay. As we will see, this isn't always the case in practice.
It is the case often enough, however, to give eBay a reputation of its own
as the preeminent Web auction site.
Subsubsection: A reputation system that resists pseudospoofing: Advogato
-------------------------------------------------------------------------
Another example of reputations at work is the "trust metric"
implemented at http://www.advogato.org, which is a portal for
open source development work. The reputation system is aimed at
answering the fundamental question, "How much can you trust code from
person X?" This question is critical for people working on open
source projects, who may have limited time to audit contributed
code. In addition, in an open source project, attempts by one
contributor to fix the perceived "mistakes" of another contributor may
lead to flame wars that destroy projects. As of this writing, the
open source development site http://www.sourceforge.net (host
to Freenet) is considering a similar route.
The stakes at Advogato are higher than they are at slashdot. If
the slashdot moderation system "fails," a user sees stupid posts or misses
something important. If the Advogato trust metric incorrectly certifies a
potential volunteer as competent when he or she is not, a software project
may fail. At least, this would be the case if people depended on the trust
metric to find and contact free software volunteers. In practice,
Advogato's trust metric is used mostly for the same application as
slashdot's: screening out stupid posts.
The method of determining trust at Advogato also contains features
that distinguish it from a simple rating system like slashdot
moderation; in particular, the Advogato "trust metric" resists a
scenario in which many people join the system with the express purpose
of boosting each other's reputation scores.
Advogato considers trust relationships as a directed flow graph. That
is, trust relationships are represented by a collection of nodes,
edges, and weights. The system is illustrated in Figure 16-1. The
nodes are the people involved. An edge exists between A and B if A
trusts B. The weight is a measure of "how much" A trusts B.
Figure 16-1. Users and trust relationships in Advogato.
What we are interested in, however, is not just how much A trusts B,
but how much B is trusted in general. So Advogato measures how much
trust "flows through" B, by designating a few special trusted accounts
as a "source" and by designating B as a "sink." It then defines a
"flow of trust" from the source to B as a path from the source to
B. Advogato assigns numbers to edges on the path that are less than or
equal to the edge weights. The edge weights act as constraints on how
much trust can be "pushed" between two points on the flow
path. Finally, the trust value of B is defined as the maximum amount
of trust that can be pushed from the source to B: the "maximum flow."
Calculating flows through networks is a classic problem in computer
science. Advogato uses this history in two ways. First, the
Ford-Fulkerson algorithm
lets the system easily find the maximum flow, so B's
trust value can always be computed. Second, Advogato uses a result
called the "maxflow-mincut theorem" to resist the pseudospoofing
described earlier in the section "Problems from pseudospoofing and
possible defenses." Even if one entity joins under many different
assumed names, none of these names will gain very much more trust than
if they had joined alone.
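A maximum-flow computation of this kind can be sketched with the
Edmonds-Karp variant of Ford-Fulkerson (the graph and weights below are
invented, and Advogato's actual metric adds per-account capacity
constraints and a network transformation not shown here):

```python
# Sketch of the flow computation behind a trust metric like Advogato's,
# using Edmonds-Karp (BFS-based Ford-Fulkerson). Hypothetical graph.
from collections import deque

def max_flow(capacity, source, sink):
    """Maximum flow from source to sink; capacity is {u: {v: weight}}."""
    # Build residual capacities, adding reverse edges initialized to 0.
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path with spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck along the path, then push flow through it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Trust flows seed -> A -> B, with a second path through C. Edge weights
# bound how much trust can "flow"; B's trust is the max flow seed -> B.
trust = {
    "seed": {"A": 3, "C": 1},
    "A":    {"B": 2},
    "C":    {"B": 1},
}
print(max_flow(trust, "seed", "B"))   # 3: 2 via A plus 1 via C
```

A freshly created pseudospoofing account has no incoming edges at all,
so its maximum flow from the seed is zero, which is exactly the
resistance property described above.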
Pseudospoofing is resisted because each of the new names, at least at
first, is connected only to itself in the graph. No one else has any
reason whatsoever to trust it. Therefore, there is no trust flow from
the
source to any of the pseudospoofing nodes, and none of them are
trusted at all. Even after the pseudospoofing nodes begin to form
connections with the rest of the graph, there will still be "trust
bottlenecks" that limit the amount of trust arriving at any of the
pseudospoofing nodes.
[Raph Levien. Advogato's Trust Metric.
http://www.advogato.org/trust-metric.html ]
This property is actually quite remarkable. No matter how many fake
names an adversary uses, it is unable to boost its trust rating very
much. The downside is that nodes "close" to the source must be careful
to trust wisely. In addition, it may not be readily apparent what
kinds of weights constitute "high trust" without knowing what the
entire graph looks like.
Subsubsection: System Successes and Failures
--------------------------------------------
Reputation in the brick and mortar world seems to work quite well;
spectacular failures, such as the destruction of Barings Bank by the
actions of a rogue trader, are exceptions rather than the rule. Which
reputation-based systems have worked online, and how well have they
worked?
The slashdot and Advogato moderation systems seem to work. While it
is difficult to quantify what "working" means, there have been no
spectacular failures so far. On the other hand, the eBay fraud of mid-2000
[cnet.com news article
http://www.canada.cnet.com/news/0-1007-200-1592233.html ]
shows some of the problems with reputation systems used naively.
In the eBay case, a group of people engaged in auctions and behaved
well. As a result, their trust ratings went up. Once their trust
ratings were sufficiently high to engage in high-value deals, the
group suddenly "turned evil and cashed out." That is, they used their
reputations to start auctions for high priced items, received payment
for these items, and then disappeared, leaving dozens of eBay users
holding the bag.
As for PGP's Web of Trust, it has been overtaken by hierarchical
Public Key Infrastructures (PKI), like those offered by VeriSign, as a
widespread means of certifying public keys. In this case, peer-to-peer
did not automatically translate into "successful."
Subsection: Scoring Systems
---------------------------
Reputation systems depend on scores to provide some
regularity to the ratings and allow automation. As shown in Chapter
17, scores can be very simple or involve multiple scales and
complicated calculations.
In a reputation system for vendors, buyers might give ratings -- that
is, numbers that reflect their satisfaction with a given transaction
-- for a variety of different dimensions for each vendor. For
instance, a given vendor might have good performance in terms of
response time or customer service, but the vendor's geographic
location might be inconvenient. Buyers provide feedback on a number
of these rating dimensions at once, to provide a comprehensive view of
the entity. The job of the reputation system is to aggregate these
ratings into one or more published scores that are meaningful and
useful to participants in the system.
A good scoring system is:
\begin{itemize}
\item Accurate for long-term performance: The system reflects the
confidence (the likelihood of accuracy) of a given score. It can also
distinguish between a new entity of unknown quality and an entity with
bad long-term performance.
\item Weighted towards current behavior: The system recognizes and
reflects recent trends in entity performance. For instance, an entity
that has behaved well for a long time but suddenly goes downhill is
quickly recognized and no longer trusted.
\item Efficient: The system can recalculate a score quickly;
calculations that can be performed incrementally are especially
valuable.
\item Robust against attacks: The system should resist attempts by any
entity or entities to influence scores other than by being more honest
or having higher quality.
\item Amenable to statistical evaluation: It should be easy to find
outliers and other factors that can make the system rate scores
differently.
\item Private: No one should be able to learn how a given rater rated
an entity except the rater himself.
\item Smooth: Adding any single rating or small number of ratings
doesn't jar the score much.
\item Understandable: It should be easy to explain to people who use
these scores what they mean -- not only so they know how it works, but
so they can evaluate for themselves what each score implies.
\item Verifiable: A score under dispute can be supported with data.
\end{itemize}
Note that a number of these requirements seem to be contradictory. We
will explore the benefits and tradeoffs from each of them over the course
of the rest of this section.
Subsubsection: Attacks and Adversaries
--------------------------------------
Several questions determine how we evaluate reputation systems.
First, what information needs to be protected? Second, who are the
adversaries?
The capabilities of potential adversaries and the extent to which they
can damage or influence the system dictate how much energy should be
spent on security. For instance, in the case of Free Haven, if
political dissidents actually began using the system to publish their
reports and information, government intelligence organizations might
be sufficiently motivated to spend millions of dollars to track the
documents to their source.
Similarly, if corporations who are planning to do a fifty million
dollar transaction based their decisions on the reputation score that
Reputation Technologies Inc. provides, it could be worth many millions
of dollars to influence the system so that a particular company is
chosen. Indeed, there are quite a few potential adversaries in the case
of Reputation Technologies, Inc. A dishonest vendor might want to forge
or bribe good feedback to raise his resulting reputation. In addition
to wanting high reputations, vendors might like their competitors'
reputations to appear low. Exchanges -- online marketplaces which
try to bring together vendors to make transactions more convenient --
would like their vendors' reputations to appear higher than vendors
that do business on other exchanges. Vendors with low reputations --
or those with an interest in people being kept in the dark -- would like
reputations to appear unusably random. Dishonest users might like the
reputations of the vendors that they use to be inaccurate, so that their
competitors will have inaccurate information.
Perhaps the simplest attack that can be made against a scoring system
is called shilling. The term is often used to refer to
submitting fake bids in an auction. But it can be considered in a
broader context of submitting fake or misleading ratings. In
particular, an entity might submit positive ratings for one of his
friends (positive shilling), or might submit negative ratings for his
competition (negative shilling). Either of these ideas introduces more
subtle attacks, such as negatively rating a friend or positively
rating a competitor to try to trick others into believing that
competitors are trying to cheat.
Shilling is a very straightforward attack, but many systems are
vulnerable to it. A very simple example is the AOL Instant Messenger
system. You can click to claim that a given user is abusing the
system. Since there is no support for detecting multiple comments from
the same person, a series of repeated negative votes will exceed the
threshold required to kick the user off the system for bad behavior,
effectively denying him service. Even in a more sophisticated system
that detects multiple comments by the same person, an attacker could
mount the same attack by assuming a multitude of identities.
Vulnerabilities from overly simple scoring systems are not limited to
"toy" systems like Instant Messenger. Indeed, eBay suffers from a very
similar problem. On eBay, the reputation score for an individual is
a linear combination of good and bad ratings, one for each
transaction. Thus, a vendor who has performed dozens of transactions
and cheats only one out of every four customers will have a
steadily rising reputation, whereas a vendor who is completely honest
but has only done ten transactions will be displayed as less
reputable. As we have seen, a vendor could make a good profit (and
build a strong reputation!) by being honest for several small
transactions, and then being dishonest for a single large transaction.
Weighting ratings by the size of the transaction helps, but does not
solve the problem. In this case, large transactions would have a large impact
on the reputation score of a vendor, and small transactions would have
only a small impact. Since small transactions don't have much weight,
vendors have no real incentive to be honest for them -- making the
reputation services useless for small buyers. Breaking reputation into
many different dimensions, each representing the behavior of the
vendor on a given category of transaction (based on cost,
volume, region, etc.) can solve many of these problems. See the
section below on algorithms for more details and an analysis of this
idea.
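The flaw in the linear combination is easy to see with invented
numbers:

```python
# Worked example of the flaw in a naive "positives minus negatives"
# score, as described above. All figures are hypothetical.

def naive_score(ratings):
    """eBay-style linear combination: +1 per good trade, -1 per bad one."""
    return sum(1 if good else -1 for good in ratings)

def fraction_good(ratings):
    """An alternative: the fraction of trades that went well."""
    return sum(ratings) / len(ratings)

# A vendor who cheats one customer in four, over 100 transactions...
cheater = [True, True, True, False] * 25
# ...versus a completely honest vendor with only ten transactions.
honest = [True] * 10

print(naive_score(cheater), naive_score(honest))      # 50 vs 10
print(fraction_good(cheater), fraction_good(honest))  # 0.75 vs 1.0
```

Under the naive score the part-time cheat looks five times more
reputable than the honest newcomer; a rate-based score at least ranks
them correctly, though it still cannot distinguish transaction sizes.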
Subsubsection: Aspects of a scoring system
------------------------------------------
The particular scoring system or algorithm used in a given domain
should be based on the amount of information available, the extent to
which information must be kept private, and the amount of accuracy
required.
In some situations, such as verifying voting age, a fine-grained
reputation measurement is not necessary -- simply indicating who seems
to be "sufficient" or "insufficient" is good enough.
In a lot of domains, it is very difficult to collect enough
information to provide a comprehensive view of each entity's behavior.
It might be difficult to collect information about entities because
the volume of transactions is very low, as we see today in large
online business markets.
But there's a deeper issue than just whether there are transactions,
or whether these transactions are trackable. More generally: does
there exist some sort of proof (a receipt or other evidence) that the
rater and ratee have actually interacted?
Being able to prove the existence of transactions reduces problems on
a wide variety of fronts. For instance, it makes it more difficult to
forge large numbers of entities or transactions. Such verification
would reduce the potential we described in the previous section for
attacks on AOL Instant Messenger. Similarly, eBay users currently
might directly purchase a high reputation by giving eBay a cut of a
dozen false transactions which they claim to have performed with their
friends; with transaction verification, they would be required to go
through the extra step of actually shipping goods back and forth.
Proof of transaction provides the basis for Amazon.com's simple referral
system, "Customers who bought this book also bought..." Indeed, it
is hard to imagine that someone would spend money on a book just to
affect the rating system. However, this is not always the case. For
instance, a publishing company was able to identify the several dozen
bookstores across America that were used as sample points for the New
York Times best-seller list; they purchased thousands of copies of
their favorite book at these bookstores, sending it skyrocketing up the
charts.
[David D. Kirkpatrick, "Book Agent's Buying Fuels Concern on
Influencing Best-Seller Lists," New York Times, August 23, 2000,
Section C, p. 1.]
In some domains, it is to most raters' perceived advantage that everyone
agree with the rater. This is how chain letters, Amway, and other
Ponzi schemes get their shills: through establishing a system where
customers are motivated to recruit other customers. Similarly, if a
vendor offered to discount past purchases if enough future customers buy
the same product, it would be hard to get honest ratings for that vendor.
This applies to all kinds of investments; once you own an investment,
it is not in your interest to rate it negatively so long as it holds
any value at all.
Subsubsection: Collecting ratings
---------------------------------
One of the first problems in developing reputation systems is how to
collect ratings. The answer depends highly on the domain, of course,
but there are a number of aspects that are common across many domains.
The first option is simply to observe as much activity as possible and
draw conclusions based on this activity. This can be a very effective
technique for automated reputation systems that have a lot of data
available. If you can observe the transaction flow and notice that a
particular vendor has very few repeat customers, he probably has a low
reputation. On the other hand, lack of repeat customers may simply
indicate a market where any given buyer transacts infrequently.
Similarly, finding a vendor with many repeat customers might indicate
superior quality, or it might just indicate a market where one or a
few vendors hold a monopoly over that product. Knowledge of the domain
in question is crucial to knowing how to correctly interpret data.
In many circumstances, it will be difficult or impossible to observe
the transaction flow, or it is unreasonable to expect parties to take
the initiative in providing feedback. In these cases, a reasonable
option is to solicit feedback from parties involved in each
transaction. This can be done either by publicizing your existence and
interest in such feedback and providing incentives to respond, or even
by actively going to each party after an observed transaction and
requesting comments. Reputation Technologies, Inc., for instance,
aggressively tries to obtain feedback after each transaction.
Tying feedback to transactions is a very powerful way of reducing
vulnerabilities in the system. It's much more difficult for people to
spam positive feedback, since each item of feedback has to be
associated with a particular transaction, and presumably only the
latest piece of feedback on a given transaction would actually count.
On the surface, it looks like this requires an Exchange
or other third-party transaction moderator, to make it difficult to simply
fabricate a series of several thousand transactions and exploit the
same vulnerability. However, vendors could provide "blinded" receipts
for transactions, so that they could not identify which
buyer was providing the ratings. Without such a receipt, the
reputation system would ignore feedback from a given buyer. Thus,
buyers could not provide feedback about a vendor without that vendor's
permission. There are a number of new problems introduced by this
idea, such as how to respond if vendors fail to provide a receipt, but
it seems to address many of the difficult issues about shilling in a
decentralized environment.
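One classic construction for such blinded receipts is a Chaum-style
blind signature. The following toy sketch uses textbook RSA with tiny,
insecure parameters, purely for illustration:

```python
# Toy sketch of a Chaum-style blind signature: one way a vendor could
# sign a receipt without learning which buyer it belongs to.
# Textbook RSA with tiny primes -- for illustration only, not security.

n, e, d = 3233, 17, 2753          # vendor's RSA modulus and key pair

m = 1234                          # buyer's receipt (e.g., hash of the trade)
r = 7                             # buyer's random blinding factor, gcd(r, n) = 1

blinded = (m * pow(r, e, n)) % n  # buyer blinds the message...
signed = pow(blinded, d, n)       # ...vendor signs without seeing m...
s = (signed * pow(r, -1, n)) % n  # ...buyer strips the blinding factor.

# Anyone can now check the vendor's signature on the unblinded receipt:
print(pow(s, e, n) == m)          # True
```

Because the vendor only ever sees the blinded value, it cannot later
link the signed receipt to a particular buyer, yet the reputation
system can still verify that a real transaction took place.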
A final issue to consider when collecting ratings is how fair the
ratings will be -- that is, how evenly distributed are the raters out
of the set of people who have been doing transactions? If the only
people who have incentive to provide ratings are those that are
particularly unhappy with their transaction, or if only people with
sufficient technical background can navigate the rating submission
process, the resulting scores may be skewed to the point of being
unusable. One solution to this could involve incentives that lead more
people to respond; another approach is to simply collect so much data
that the issue is no longer relevant. (The slashdot moderation system,
for instance, depends on the participation of huge numbers of
independent moderators.) But systematic errors or biases in the
ratings will generally defeat this second approach.
Subsubsection: Bootstrapping
----------------------------
One of the tough questions for a reputation-based trust system is how to
get started. If users make choices based on the scores that are available
to them, but the system has not yet collected enough data to provide
useful scores, what incentive do buyers have to use the system? More
importantly, what incentive do they have to contribute ratings to the
system?
Free Haven can avoid this problem through social means. Some
participants will be generous and willing to try out new nodes just to
test their stability and robustness. In effect, they will be
performing a public service by risking some of their reputation and
resources evaluating unknown nodes. Businesses, particularly
businesses just getting started in their field and trying to make a
name for themselves, won't necessarily be as eager to spend any of
their already limited transaction volume on trying out unknown
suppliers.
The way to present initial scores for entities depends on the domain.
In some non-commercial domains, it might be perfectly fine to present
a series of entities and declare no knowledge or preference; in
others, it might be more reasonable only to list those entities for
which a relatively certain score is known. Reputation Technologies
needs to provide some initial value to the users; this can be done by
asking vendors to provide references (that is, by obtaining
out-of-band information), and then asking those references to fill out
a survey describing overall performance and happiness with that
vendor. While this bootstrapping information may not be as useful as
actual transaction-based feedback (and is more suspect), it is a good
starting point for a new system.
Bootstrapping is a much more pronounced issue in a centralized system
than in a decentralized system. This is because in a decentralized
system, each user develops his own picture of the universe: he builds
his trust of each entity based on his own evidence of past performance
and on referrals from other trusted parties. Thus, every new user
effectively joins the system "at the beginning," and building
profiles for new users is an ongoing process throughout the
entire lifetime of the system. In a centralized environment, on
the other hand, ratings are accumulated across many different
transactions and over long periods of time. New users trust the
centralized repository to provide information about times and
transactions that happened before the user joined the system.
In a newly developed system, or for a new entity in the system, the
choice of the default reputation score is critical. If it's easy to
create a new identity (that is, pseudonym), and new users start out
with an average reputation, users who develop a bad reputation are
encouraged to simply drop their old identity and start over with a new
one. One way to deal with this problem is to start all new users with
the lowest possible reputation score; even users with a bad track
record will then have an incentive to keep their current identity.
Another approach to solving this problem is to make it difficult to
create a new identity. For instance, this can be done by requiring
some proof of identity for registration, or a monetary fee. Tying the
user to his real-world identity is the simplest, and probably most
effective, way to reduce abuse -- but only if it's appropriate for that
system.
Subsubsection: Personalizing Reputation Searches
------------------------------------------------
The user interface -- that is, the way of presenting scores and asking
questions -- is a crucial element of a reputation system. Scores
cannot simply be static universal values representing the overall
quality of an individual. Since a score is an attempt to predict
future performance, each score must be a prediction for a particular
context. That is, the user interface must allow participants to query
the system for the likelihood of a successful transaction for their
particular situation. The more flexibility a client has, the more
powerful and useful the system is (so long as users can still
understand how to use it).
The user interface must also display a confidence value for
each score--that is, how likely the score is to reflect the reality of
the subject's behavior. The mechanism for generating this confidence
value depends on the domain and on the scoring algorithm. For
instance, it might reflect the number of ratings which were used to
generate the score; it might reflect the standard deviation of the set
of ratings; or it might reflect the level of agreement between several
different scoring algorithms that were all run against the ratings
set. Confidence ratings are a major topic in Chapter 17.
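As a concrete sketch of the idea, here is one way (an assumption of ours, not a mechanism from the text) to pair a score with a confidence value: confidence grows with the number of ratings and shrinks as the ratings disagree. The rating scale and the smoothing constant are illustrative.

```python
import statistics

def score_with_confidence(ratings, max_rating=10.0):
    """Aggregate ratings (hypothetical 0..max_rating scale) into a score
    plus a rough confidence value in [0, 1]."""
    if not ratings:
        return None, 0.0  # no data: no score, zero confidence
    score = statistics.fmean(ratings)
    # Confidence grows with the number of ratings...
    count_factor = len(ratings) / (len(ratings) + 10.0)
    # ...and shrinks as the ratings disagree with one another.
    spread = statistics.pstdev(ratings) / max_rating
    confidence = count_factor * (1.0 - spread)
    return score, confidence
```

A client could then filter on the second value, displaying only scores whose confidence exceeds some minimum, as described above.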
Not only does a confidence value allow users to have a better feel for
how firm a given score is, but it can also allow a more customized
search. That is, a user might request that only scores with a certain
minimum confidence value be displayed, which would weed out new users
as well as users with unusual (widely varying) transaction patterns.
In some domains, qualitative statements (like verbal reviews) can
enhance the value of a quantitative score. Simply providing a number
may not feel as genuine or useful to users -- indeed, allowing for
qualitative statements can provide more flexibility in the system,
because users providing feedback might discuss topics and dimensions
which are difficult for survey authors to anticipate. On the other
hand, it is very difficult to integrate these statements into more
numerical scores, particularly if they cover unanticipated dimensions.
Also, as the number of statements increases, it becomes less useful to
simply print all of them. However, choosing which statements to print
not only requires manual intervention and choice, but might also lead
to legal liabilities. Another problem with providing verbal
statements as part of the score is the issue of using this scoring
system in many different countries. Statements may need to be
translated, but numbers are universal.
Subsubsection: Scoring Algorithms
---------------------------------
As we've seen in the previous sections, there are many different
aspects to scoring systems. While we believe that query
flexibility is perhaps the most crucial aspect of the system, another
important aspect is the actual algorithm used to aggregate ratings
into scores. Such an algorithm needs to answer most of the
requirements that we laid out in the section "Scoring Systems."
Broadly speaking, the scoring algorithm should provide accurate
scores, while preventing dishonest users from affecting the system and
also preventing privacy leaks (as detailed in the next section).
Preventing dishonest users from affecting the system can be done in
several ways. One simple way is to run statistical tests independent
of the actual aggregation algorithm, to attempt to detect outliers or
other suspicious behavior such as a clique of conspiring users. Once this
suspicious behavior has been identified, system operators can go back,
manually examine the system, and try to prune the bad ratings. While
this approach seems far too time-intensive for a deployed system, eBay
has used exactly this technique to clean up after dishonest users have
been noticed.
[eBay Feedback Removal Policy. http://pages.ebay.com/help/community/fbremove.html].
A more technically sound approach is to weight the ratings by the
credibility of each rater. That is, certain people contribute
more to the score of a given entity based on their past predictive
ability. Google makes use of this idea in its Internet search engine
algorithm. The algorithm counts the references to a given page: the
more pages that reference it, the more popular it is. In addition,
references from popular pages carry more weight than references from
unpopular ones. This simple credibility metric produces
much more accurate responses for web searches.
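A minimal sketch of credibility weighting, assuming each rater already carries a credibility value derived from past predictive accuracy (how those values are obtained and maintained is outside this sketch):

```python
def credibility_weighted_score(ratings):
    """ratings: list of (rating_value, rater_credibility) pairs, where
    credibility in (0, 1] reflects the rater's past predictive ability.
    Returns the credibility-weighted average rating, so that credible
    raters contribute more to the final score."""
    total_weight = sum(cred for _, cred in ratings)
    if total_weight == 0:
        raise ValueError("no credible ratings to aggregate")
    return sum(value * cred for value, cred in ratings) / total_weight
```

With this weighting, a highly credible rater's opinion can dominate many low-credibility ones, which is exactly the property that makes shilling by throwaway identities less effective.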
By introducing the notion of local credibility rather than a
simple global credibility for each entity, the system can provide a
great deal of flexibility and thus stronger predictive value. Local
credibility means that a rating is weighted more strongly if the
situation in which that rating was given is similar to the current
situation. For instance, a small farmer in Idaho looking into the
reputation of chicken vendors cares more about the opinion of a small
farmer in Arkansas than he does about the opinion of the Great Chinese
Farming Association. Thus, the algorithm would generate a score which
more accurately reflects the quality of the vendor according to other
similar buyers. Similarly, if Google knew more about the person doing
the web search, it could provide an even more accurate answer.
[Press Release for Open Profiling Standard. http://home.netscape.com/newsref/pr/newsrelease411.html]
One of the problems with incorporating credibility into the scoring
algorithm is that, in some domains, an individual's ability to perform
the protocol honestly is very separate from an individual's ability to
predict performance of others.
In the Free Haven system, for instance, a server may be willing to
store documents and supply them to readers, but keeps no logs about
transactions or trades (so it has no idea which other servers are
behaving honestly). In the case of Reputation Technologies, one
vendor might be excellent at providing high-quality products on time,
leading to a high reputation score, but possess only average skill at
assessing other vendors. Indeed, a consulting firm might specialize
in predicting performance of vendors, but not actually sell any
products of its own.
One way to solve this problem is to have separate scores for
performance and for credibility. This makes it more complex to keep
track of entities and their reputations, but could provide tremendous
increases in accuracy and flexibility for scoring systems.
Weighting by credibility is not the only way to improve the accuracy
and robustness of the scoring algorithm. Another approach is to assert
that previous transactions should carry more weight in relation to how
similar they are to the current transaction. Thus, a vendor's ability
to sell high-quality granny smith apples should have some bearing on
his ability to sell high-quality red delicious apples. Of course this
could backfire if the vendor in question specializes only in granny
smith apples and doesn't even sell red delicious apples. But in
general, weighting by the so-called category of the
transaction (and thus the vendor's reputation in related categories)
is a very powerful idea. Separating reputations into categories can
act as a defense against some of the subtle shilling attacks described
above, such as when a vendor develops a good reputation at selling
yo-yos and has a side business of fraudulently selling used cars.
But the category idea raises very difficult questions. How do we pick
categories? How do we know which categories are related to which other
categories, and how related they are? Can this be automated somehow,
or do the correlation coefficients have to be estimated manually?
In the case of Free Haven, where there is only one real commodity -- a
document -- and servers either behave or they don't, it might be
feasible to develop a set of categories manually and allow each server
to manually configure the numbers that specify how closely related the
categories are. For instance, one category might be files of less than
100 kilobytes that expire within a month. A strongly related category
would be files between 100 kilobytes and 200 kilobytes that expire
within a month; perhaps we would say that this category is 0.9-related
to the first. A mostly unrelated category would be files more than 500
megabytes in size that expire in 24 months. We might declare that this
category is 0.05-related to the first two.
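The discrete-category example above might look like this in code. The category names are hypothetical, the coefficients (0.9 and 0.05) come from the text, and the assumption that relatedness is symmetric is ours:

```python
# Hypothetical correlation coefficients between file-storage categories,
# following the example in the text: the two small, short-lived
# categories are 0.9-related; huge, long-lived files are 0.05-related.
CORRELATION = {
    ("small-1mo", "small-1mo"): 1.0,
    ("small-1mo", "medium-1mo"): 0.9,
    ("small-1mo", "huge-24mo"): 0.05,
}

def related_score(target_category, scores_by_category):
    """Weight each per-category score by how related that category is
    to the category we are asking about."""
    weights = {}
    for category in scores_by_category:
        key = (target_category, category)
        # Assume symmetry: (a, b) is as related as (b, a).
        coeff = CORRELATION.get(key) or CORRELATION.get(key[::-1]) or 0.0
        weights[category] = coeff
    total = sum(weights.values())
    if total == 0:
        return None  # no related categories: no basis for a prediction
    return sum(scores_by_category[c] * weights[c]
               for c in scores_by_category) / total
```

A server with a good record in one category thus inherits a partial, discounted reputation in related ones.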
With some experience, an algorithm might be developed that can tweak
the correlation coefficients on the fly based on how effective the
current values have been at predicting the result of future
transactions. Similarly, we might be able to reduce the discrete
categories into a single continuous function that converts "distance"
between file size and expiration date into a correlation coefficient.
Reputation Technologies is not so lucky. Within a given
exchange, buyers and sellers might barter thousands of different types
of goods, each with different qualities and prices; the correlation
between any pair of categories might be entirely unclear. To make matters
worse, each vendor might only have a few transactions on record, leaving
data too sparse for any meaningful comparison.
While we've presented some techniques to provide more accuracy and
flexibility in using ratings, we still haven't discussed actual
algorithms that can be used to determine scores. The simplest such
algorithm involves treating reputations as probabilities. A reputation
is effectively an estimate of how likely a future transaction in that
category is to be honest. In this case, scores are simply computed as
the weighted sum of the ratings.
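A sketch of this probability interpretation: each past outcome is 1 (honest) or 0 (dishonest), and the score is a weighted sum normalized into [0, 1]. The recency-decay weighting here is an illustrative choice, not something prescribed by the text:

```python
def honesty_estimate(outcomes, decay=0.9):
    """Treat reputation as a probability that the next transaction will
    be honest. outcomes: past results, oldest first, each 1 (honest) or
    0 (dishonest). More recent outcomes are weighted more heavily; the
    decay factor is an assumption of this sketch."""
    weights = [decay ** (len(outcomes) - 1 - i) for i in range(len(outcomes))]
    total = sum(weights)
    if total == 0:
        return 0.5  # no history: maximum uncertainty
    return sum(o * w for o, w in zip(outcomes, weights)) / total
```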
More complex systems can be built out of neural networks or data
clustering techniques, to come up with ways of applying nonlinear
fitting and optimization to the field of reputation. But as the
complexity of the scoring algorithm increases, it becomes more and
more difficult for actual users of these systems to understand the
implications of a given score or understand what flaws might be
present in the system.
Finally, we should mention the adversarial approach
to scoring systems. That is, in many statistical or academic
approaches the goal is simply to combine the ratings into as accurate
a score as possible, with no regard for whether participants in the
system can conspire to provide ratings that break the particular
algorithm used.
A concrete example might help to illustrate the gravity of this point.
One of the often-referenced pitfalls of applying neural networks to
certain situations comes from the US military. They wanted to teach
their computers how to identify tanks in the battlefield. Thus they
took a series of pictures that included tanks, and a series of
pictures that did not include tanks. But it turns out that one of
the sets was taken during the night, and the other set
was taken during the day. This caused their
high-tech neural network to learn not how to identify a tank but how
to distinguish day from night. Artificial intelligence developers
need to remember that there are a number of factors that might be
different in a set of samples, and that their neural network might not
learn quite what they want it to learn.
But consider the situation from our perspective: what if the Russian
military were in charge of providing the tank pictures? Is there a
system that can be set up to resist bad data samples? Many would
consider that learning how to identify a tank under those
circumstances is impossible. How about if the Russians could provide
only half of the pictures? Only a tenth? Clearly this is a much more
complicated problem. When developing scoring systems, we need to keep
in mind that simply applying evaluation techniques that are intended
to be used in a "clean" environment may introduce serious
vulnerabilities.
Subsubsection: Privacy and Information Leaks
--------------------------------------------
Yet another issue to consider when designing a good scoring system is
whether the system will be vulnerable to attacks that attempt to learn
about the tendencies or patterns of entities in the system. In a
business-oriented domain, knowledge about transaction frequency,
transaction volume, or even the existence of a particular transaction
might be worth a lot of time and money to competitors. The use of a
simple and accurate scoring algorithm implies that it should be easy
to understand the implication of a vendor's score changing from 8.0 to
9.0 over the course of a day. Perhaps one or more ratings arrived
regarding large transactions, and those ratings were very positive.
The objectives of providing timeliness and accuracy in the scoring
algorithm and of maintaining privacy of transaction data seem to be at
odds. Fortunately, there are a number of ways to help alleviate the
leakage problems without affecting accuracy too significantly. We will
describe some of the more straightforward of these techniques in this
section.
The problem of hiding transaction data for individual transactions is very
similar to the problem of hiding source and destination data for messages
going through mix networks.
[ "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms"
D. Chaum, Communications of the ACM, vol. 24 no. 2, February, 1981]
More specifically, figuring out what kind of rating influenced a
published score by a certain amount is very similar to tracking a
message across a middleman node in a mix network. In both cases,
privacy becomes significantly easier as transaction volume increases.
In both cases, adversaries observe external aspects of the system (in
the case of the scoring system, the change in the score; in the case
of the mix network, the messages on the links to and from the mix
node) to try to determine the details of some particular message or
group of messages (or the existence of any message at all).
One common attack against the privacy of a scoring system is a
timing attack. For instance, the adversary might observe
transactions and observe changes in the scores, and then try to
determine the rating values that certain individuals
submitted. Alternatively, the adversary might observe changes in the
scores and attempt to discover information about the timing or size of
transactions. These attacks are like watching the timings of messages
going through various nodes on a Mix network, and trying to determine
which incoming message corresponds to which outgoing message.
A number of solutions exist to protect privacy. First of all,
introducing extra latency between the time that ratings are submitted
and the time when the new score is published can make timing
correlation more difficult. (On the other hand, this might reduce the
quality of the system, because scores are not updated immediately.)
Another good solution is to queue ratings and process them in
bulk. This prevents the adversary from being able to determine which
of the ratings in that bulk update had which effect on the score.
A variant of this approach is the pooling approach, in which
some number of ratings is kept in a pool. When a new rating arrives,
it is added to the pool and a rating from the pool is chosen at random
and aggregated into the score. Obviously, in both cases, a higher
transaction volume makes it easier to provide timely score updates.
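The pooling approach can be sketched in a few lines; the pool size here is arbitrary:

```python
import random

class RatingPool:
    """Pooling sketch: incoming ratings accumulate in a pool; once the
    pool is full, each new arrival releases one randomly chosen pooled
    rating for aggregation. An observer therefore cannot match a score
    change to any specific submission."""
    def __init__(self, pool_size=20):
        self.pool_size = pool_size
        self.pool = []

    def submit(self, rating):
        """Returns a rating to aggregate now, or None while the pool fills."""
        self.pool.append(rating)
        if len(self.pool) <= self.pool_size:
            return None
        index = random.randrange(len(self.pool))
        return self.pool.pop(index)
```

The larger the pool, the better the unlinkability, at the cost of staler scores: exactly the timeliness/privacy tradeoff discussed above.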
An active adversary can respond to bulk or pooled updates with what is
known as an identification flooding attack. He submits
ratings with known effect, and watches for changes in the score that
are not due to those ratings. This approach works because he can
"flush" the few anonymous ratings that remain by submitting enough
known ratings to fill the queue. This attack requires the adversary to
produce a significant fraction of the ratings during a given time
period.
But all this concern over privacy may not be relevant at all. In some
domains, such as Free Haven, the entire goal of the reputation system
is to provide as much information about each pseudonymous server as
possible. For instance, being able to figure out how Alice performed
with Bob's transaction is always considered to be a good thing. In
addition, even if privacy is a concern, the requirement of providing
accurate timely scores may be so much more important that no steps
should be taken to increase user privacy.
Subsection: Decentralizing the scoring system
---------------------------------------------
Many of the issues we've presented apply to both centralized and
decentralized reputation systems. In a decentralized system such as
Free Haven, each server runs the entire reputation-gathering system
independently. This requires each node to make do with only the
information which it has gathered first-hand, and generally requires a
broadcast mechanism in order for all nodes to keep their information
databases synchronized.
Another approach is to decentralize the scoring system itself,
spreading it among the entire set of machines participating in the
system. In this section, we present two ways of decentralizing a
scoring system. The first exploits redundancy along with user
flexibility to reduce the risk from cheating or compromised
servers. The second is a more traditional approach to decentralizing a
system, but it also brings along the more traditional problems
associated with decentralization, such as high bandwidth requirements
and difficult crypto problems.
Subsubsection: Multiple trusted parties
---------------------------------------
Assume there is a set of scorers around the world, each independently
run and operated. When a transaction happens, the vendor chooses a
subset of the scorers and constructs a set of tickets. Each
ticket is a receipt allowing the buyer to rate the vendor at a
particular scorer. The receipts are blinded so that the vendor is not
able to link a ticket with any given buyer.
At this point, the buyer can decide to which scorer or scorers he
wishes to submit his ratings. Since each scorer could potentially use
its own algorithm and have its own prices or publishing habits, each
scorer might have its own set of tradeoffs based on accuracy, privacy,
and security. This technique allows the vendor to veto some of the
scorers first. Then the rater chooses from among the remaining
scorers. Thus, the ratings will only be submitted to mutually
agreeable scorers.
We could extend this scenario to allow both parties in the transaction
to provide tickets to each other, creating a more symmetric rating
process. This approach introduces complications, because both parties
in the transaction need to coordinate and agree on which tickets will
be provided before the transaction is completed, and there needs to be
some mechanism to enforce or publicize if one side of the transaction
fails to provide the promised receipts.
The beauty of decentralizing the scoring system in this manner is that
every individual in the system can choose which parts of the system
they want to interact with. Participants in transactions can list
scorers whom they trust to provide accurate scores, raters can choose
scorers whom they trust to not leak rating information, and users
looking for scores on various entities can choose scorers whom they
trust to provide accurate scores.
Of course, this decentralization process introduces the issue of
meta-reputation: how do we determine the reputations of the reputation
servers? This sort of reputation issue is not new. Some Mixmaster nodes
are more reliable than others,
[Electronic Frontiers Georgia Remailer Uptime List. http://anon.efga.org/ ]
and users and operators keep uptime and performance lists of various
nodes as a public service. We expect that reputation scoring services
would similarly gain (external) reputations based on their reliability
or speed.
Subsubsection: True decentralization
------------------------------------
In this scenario, both sides of the transaction obtain blinded
receipts as above. Apart from these raters, the system also
consists of a set of collectors and a set of
scorers. They are illustrated in Figure 16-2.
[figure: Four columns. The first column has one rater. The second
column has a vertical line of collectors. The third column has a
vertical line of scorers. The fourth column has a couple of clients.
The rater in the first column has arrows pointing to every collector.
Each scorer in the third column has bidirectional arrows pointing to
every collector. The clients in the fourth column perform queries on
the scorers.]
Figure 16-2. Truly decentralized scoring system.
After the transaction, each rater splits up his rating using Shamir's
secret sharing algorithm (described in Chapter 10, Publius)
or some other k-of-n system. At this point, the rater submits one
share of his rating to each collector. This means that the collectors
together could combine the shares to determine his rating, but
separately they can learn no information. It is the job of the scorers
to provide useful information to clients: when a client does a
reputation query for a specific category (situation), the scorer does
the equivalent of an encrypted database query on the set of collectors.
[ Private Information Retrieval and Oblivious Transfer. Tal Malkin.
PhD Thesis, MIT 1999].
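A self-contained sketch of splitting a rating with Shamir's scheme: any k of the n shares reconstruct the rating, while fewer than k reveal nothing about it. The field prime is an arbitrary choice, large enough for small rating values:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; any prime larger than the rating works

def split_rating(rating, k, n):
    """Split an integer rating into n shares so that any k of them
    reconstruct it (Shamir's secret sharing over GF(PRIME))."""
    coeffs = [rating] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_rating(shares):
    """Lagrange interpolation at x = 0 recovers the rating from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

In the scheme above, the rater would send one share to each collector; only a quorum of collectors acting together (or the scorers' encrypted queries against them) can make use of the rating.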
A number of technical challenges need to be solved in order
to make this scheme work. First of all, the collectors need to have some
mechanism for authenticating a rating without reading it. Similarly, they
need to have some way of authorizing a rater to put his share onto the
system without being able to know the author of a given rating. Without
this protection, malicious raters could simply flood the system with data
until it overflowed.
Once these problems are solved, we need to come up with some sort of
computationally-feasible and bandwidth-feasible way of communicating
between the scorers and the collectors, and a set of rules that allow
the scorers to get the information they need to answer a given query
without allowing them to get "too much" information and learn more
than they ought to learn about raters.
With this decentralization comes some subtle questions. Can scorers
"accidentally forget" to include a specific rating when they're computing
a score? Said another way, is there some way of allowing scorers to
provide proof that they included a certain rating in the calculation of
the score, without publishing the actual ratings that were used? This
question is similar to the question of allowing Mix nodes to prove that
they forwarded a given message, without yielding information that might
help an adversary determine the source or destination of the message.
[Masayuki Abe. Universally Verifiable MIX-network with Verification Work
Independent of the Number of MIX Servers. EUROCRYPT '98 Springer-Verlag LNCS]
Section: A Case Study: Accountability in Free Haven
---------------------------------------------------
As described in Chapter 11, the Free Haven project is working toward
creating a decentralized and dynamically changing storage service that
simultaneously protects the anonymity of publishers, readers, and
servers, while ensuring the availability of each document for a
lifetime specified by the publisher. Our goals of strong anonymity and
long-term persistent storage are at odds with each other. Providing as
much anonymity as possible while still retaining sufficient
accountability is a very difficult problem. Here we will describe the
accountability requirements in greater detail than in Chapter 11, and
discuss some approaches to solving them.
Our job is two-fold: we want to keep people from overfilling the
bandwidth available from and between servers; and we want to keep people
from overfilling the system with data. We will examine each of these
goals separately.
Subsection: Micropayments
-------------------------
In general, there are a number of problems with using micropayments
in peer-to-peer systems. We'll examine these general problems first,
then apply the lessons to the Free Haven context.
Subsubsection: The difficulty of distributed systems: how to exchange micropayments among peers
-----------------------------------------------------------------------------------------------
Consider the simple approach to micropayments introduced early in this
chapter. Alice wants resources operated by Bob. Alice pays Bob with
some micropayments. Bob provides Alice with the access she purchased
to his resources.
This sounds like a great model for economically-based distribution
that provides both accountability and effective congestion management
of resources. However, the problem is rarely so simple in the case
of peer-to-peer distributed systems on the Internet. The reason is
that many intermediaries may be involved in a transaction -- and one
doesn't know who they are before the transaction starts, or perhaps
even after the transaction is finished.
Consider an anonymous remailer like Mixmaster. Alice sends an email
to Bob through a number of intermediate proxy remailers, which strip
all identifying information from the message before transmitting it.
This design is used to distribute trust across operational and often
jurisdictional lines. Only a very powerful adversary -- able to observe
large sections of the network and use advanced traffic analysis techniques
-- should be able to link the sender and recipient of any given message.
Hence, we achieve an "anonymous" communications path for email.
Consider, again, the Gnutella routing protocol. Alice seeks some
piece of information contained in the network. She sends out a query
to all peers that she knows about (her "friends"); these peers in turn
propagate the request along, branching it through the network.
Hopefully, before the time-to-live (TTL) of the query expires, the
request traverses enough intermediate hops to find Bob, who responds
with the desired information. The Freenet routing protocol works
similarly, covering some fraction of the surrounding network over the
course of the search.
These examples highlight a design quite common in peer-to-peer systems,
especially for systems focusing on anonymity (by distributing trust) or
searching (by distributing content). That is, endpoint peers are not
the sole ones involved in an operation; Alice and Bob are joined by any
number of intermediate peers. So how should we handle micropayments?
What are the entities involved in a transaction? Let's examine four
possible strategies, illustrated in Figure 16-3.
[Sketch of four modes:
end-to-end: Alice paying Bob through a long chain of intermediate nodes;
pairwise: each node paying the next;
amortized: each node paying the next a lessening amount;
all points: Alice paying each node directly, possibly anonymously]
Figure 16-3. Ways micropayments could be used in a distributed file
storage system.
* End-to-end model:
The simplest approach is to make Alice send Bob some form of
payment and not worry about what happens to any intermediaries.
This model works fine for operations which do not make use of
intermediate nodes. But if intermediate peers are involved, they
lack any protection from attack. Bob might even be fictitious;
Alice can attack any number of intermediate peers by routing her
queries through them, using up their bandwidth or wiping out the
data in Freenet-style data caches. This problem is precisely our
motivation for using micropayments!
* Pairwise model:
Recognizing the problem of the end-to-end model, we can take a
step upward in complexity and blindly throw micropayments into
every transaction between every pair of peers. One long route
can be modeled as a number of pairwise transactions. This might
appear to protect each recipient of payments, but is also
fundamentally flawed.
Using fungible micropayments, each peer earns one unit from its
predecessor, then spends one unit with its successor. Assuming
equal costs throughout the network, Alice is the only net debtor
and Bob the only net creditor. But if a single malicious operator is
in charge of both Alice and Bob, these two peers have managed to
extract work from the intermediate nodes without paying -- a subtler
DoS or flooding attack!
Using non-fungible micropayments, Alice remains a net debtor, but
so are all intermediate peers. Alice can make use of greater
computational resources (centralized or distributed) to flood
intermediate peers with POWs. Being properly-behaving nodes,
these peers attempt to make good on the micropayment exchange,
and start churning out POWs for the next hop in the
protocol...and churning...and churning. Eventually Alice can
exhaust the resources of a whole set of smaller, honest peers.
* Amortized pairwise model:
Taking what we learned about the risks of the previous model, we
can design a more sophisticated one that amortizes Alice's cost
throughout the network route by iteratively decreasing the cost
of transactions as they move through the system. Say, Alice pays
X with 4 units of micropayment, X pays Y with 3 units, Y pays Z
with 2 units, and Z finally pays Bob only 1 unit.
In the case of non-fungible POWs, we still lose. First of all,
Alice can still make use of greater wealth, economies of scale,
distributed computing, etc. in order to attack intermediate
nodes. While the load decreases as it moves though the system,
peers X, Y, and Z still need to devote some of their own
resources; they may be unable to afford that load.
For fungible payments, this model appears more promising.
Intermediate nodes end up as net creditors: their resources are
paid for by the cut they take from Alice's initial lump-sum
payment.
But this model has another weakness from a security
point-of-view: we leak information regarding the route-length.
We mentioned the Mixmaster mix-net at the beginning of this
section; the system allows a sender to specify the number and
identity of intermediate remailers. This number of hops and their
corresponding identities are unknown to all other parties. (We ignore
the possibility of traffic analysis here and assume that the user
chooses more than one hop.) But if we use
amortized payments, each peer in the chain has to know the amount
it is given and the function used to decrease payments, so
intermediate peers can extrapolate how many hops are in the
route, as well as their relative position in the chain.
Furthermore, Alice may not know the route length. If a system
uses Gnutella- or Freenet-type searching, Alice has no idea how
many hops are necessary before the query reaches Bob.
Finally, as Alice's query branches out throughout the network,
payments could become prohibitively expensive. Traversing a
balanced binary tree involves an average of O(lg n) hits, where n
is the number of nodes in the tree. Thus, getting something from
a network of 1000 nodes requires an average of 10 hits and 10
payments. Peer-to-peer networks may allow each node to reach more
than two other nodes, but they could also become even more
expensive because they will not be balanced.
* All points model:
These previous problems lead us to settle on an all-points model.
Alice pays each peer engaged in the protocol, intermediate and
endpoint alike. Of course, we immediately run into the same
problem we had in the previous model, where Alice may not know
which peers are involved, especially during a search. But let's
assume for this discussion that she knows which nodes she'll be
using.
This solution is ideal for such fully-identified systems. The cost of
resource use falls solely upon its instigating requestor.
Anonymous systems add a few difficulties to using this model.
First of all, we lose some of our capacity to use interactive payment
models. For the forward-only Mixmaster mix-net, intermediate nodes
cannot tell Alice what client puzzle she should solve for them because
only the first hop knows Alice's identity. Therefore, payments
must be of a non-interactive variety.
To stop double-spending, the
scheme must use either a centralized bank server model (for instance,
Chaumian ecash) or have recipient-specific information encoded in
the payment (for instance, hash cash). This recipient-specific information
should further be hidden from view, so as to protect an
eavesdropper from being able to piece together the route by looking
at the micropayments. Recipient-hiding cryptosystems
[David Hopwood. hopwood@zetnet.co.uk. Recipient-Hiding Blinded Public-Key
Encryption. Draft Manuscript. ]
help ensure that the act of encrypting the micropayment does not
itself leak information about to whom the data is encrypted.
In short, the all-points payment model -- while offering advantages
over the prior three models -- presents its own difficulties.
Micropayment schemes can help ensure accountability and resource
allocation in peer-to-peer systems. But the solution requires careful
design and a consideration of all security problems: it's not a simple
cut-and-paste solution.
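The route-length leak noted for the amortized model above is easy to make concrete. Assuming the linearly decreasing schedule from the 4-3-2-1 example, any intermediate peer that knows the schedule can compute its own position in the route:

```python
def infer_position(amount_received, final_amount=1, step=1):
    """Given a linearly decreasing payment schedule (as in the 4-3-2-1
    example), a peer that knows the schedule can compute how many hops
    remain downstream of it, and hence its position in the route."""
    return (amount_received - final_amount) // step
```

So the first intermediate node, receiving 4 units, learns that three recipients follow it; this is exactly the information a mix-net tries to keep hidden.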
Subsubsection: Micropayments in the Free Haven context
------------------------------------------------------
Most Free Haven communication is done by broadcast. Every document
request or reputation referral is sent to every other Free Haven
server. Even if we can solve the micropayments issue for mix nets as
described above, we still need to ease the burden of multiple payments
incurred by each server each time it sends even a single Free Haven
message.
The first step is to remember that our communications
channel already has significant latency. Nobody will care if we
introduce a little bit more. We can queue the broadcasts and send a
batch of them out every so often -- perhaps once an hour. This approach
mitigates direct flooding: no matter how many broadcasts we make, the
overall use of the mix-net by the n Free Haven servers is limited to
n^2 messages per hour. We assume that the size of the message does not dramatically
increase as we add more broadcasts to the batch; given that each Free
Haven communication is very small, and given the padding already present
in the mix-net protocol, this seems like a reasonable assumption.
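The hourly batching might be sketched as follows. This is a simplified illustration; the class and its interface are our own invention, not part of any Free Haven implementation:

```python
import time
from collections import deque

class BroadcastBatcher:
    """Queue outgoing broadcasts and flush them as one bundled
    mix-net message per peer, at most once per interval."""

    def __init__(self, peers, interval_seconds=3600, send=print):
        self.peers = list(peers)
        self.interval = interval_seconds
        self.send = send          # callable: send(peer, payload)
        self.queue = deque()
        self.last_flush = time.monotonic()

    def broadcast(self, message):
        """Queue a message instead of sending it immediately."""
        self.queue.append(message)

    def maybe_flush(self):
        """If the interval has elapsed, send the whole batch to every
        peer. n servers thus produce at most n^2 mix-net messages per
        interval, however many broadcasts have queued up."""
        now = time.monotonic()
        if now - self.last_flush < self.interval:
            return 0
        batch = list(self.queue)
        self.queue.clear()
        self.last_flush = now
        for peer in self.peers:
            self.send(peer, batch)
        return len(self.peers)
```

The added latency is exactly the point: trading a little delay bounds the bandwidth cost.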
However, batching helps the situation only a little. For several reasons--the
lack of a widely-deployed electronic cash system, our desire to
provide more frictionless access regardless of wealth, and the
complex, central-server model used by most fungible payment systems
to issue coins--non-fungible POW micropayments are better-suited for
Free Haven. Likewise, non-fungible payments work best with the
expensive all-points payment scheme. We still have the problem,
therefore, that every server must pay each intermediate node used to
contact every other server each hour.
It is conceivable that spreading the waste of time for each message
over the hour would produce a light enough load. Servers could simply do the
computation with idle cycles and send out a batch of broadcasts whenever
enough calculations have been performed.
We can solve this more directly by thinking of the server Alice as a
mailing list that uses pay-per-send email as described earlier in this
chapter in the section "Non-Fungible Micropayments." In this case,
users attach special tickets to messages sent to Alice, so they don't
have to perform a time-wasting computation. Similarly, we might be
able to introduce into the mix-net protocol a "one free message per
hour" exception. But making this exception
introduces a difficult new problem -- our primary purpose is to
maintain the anonymity of the senders and recipients through the
mix net, but at the same time we want to limit each server to sending
only one message per recipient in each hour. Thus, it seems that we
need to track the endpoints of each message in order to keep
count of who sent what.
Having Alice distribute blinded tickets as an end-to-end solution
doesn't easily work either, as these tickets are used with the
intermediate mix-net nodes. The tickets would need to assure the nodes
both of Alice's identity as a Free Haven server and of her certification of
the user's right to mail her, while still maintaining the pseudonymity
of both parties.
The alternative is to have node-specific tickets for our all-points
model. More precisely, each mix-net node issues a limited number of
blinded tickets for each hour and user. This design also adds the
functionality of a prepaid payment system, if we want one. Project Anon,
an anonymous communications project, suggests such a technique.
[Oliver Berthold, Hannes Federrath, Marit Köhntopp. Project "Anonymity
and Unobservability in the Internet." Workshop on Freedom and Privacy
by Design / Conference on Computers, Freedom and Privacy 2000,
Toronto, Canada, April 4-7, 2000.]
It's important to note that most blind signature techniques use
interactive protocols, which are less suitable for our type of
application.
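To make the blinded-ticket idea concrete, here is a sketch of Chaum-style RSA blind signatures, where issuing a ticket needs only one round trip to the signer. The toy key size and the function names are for illustration only; a real node would use a key of 2048 bits or more:

```python
import hashlib
import math
import secrets

# Toy RSA key for illustration only.
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    """Hash the ticket into the RSA domain."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(msg: bytes):
    """The user blinds the ticket hash with a random factor r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            return (H(msg) * pow(r, e, n)) % n, r

def sign_blinded(blinded: int) -> int:
    """The mix node signs without learning which ticket it is issuing."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """The user strips the blinding factor, leaving an ordinary signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == H(msg)
```

Because the node signs a blinded value, it cannot later link the ticket it redeems to the user it issued it to; limiting each user to a fixed number of tickets per hour bounds resource use without tracking endpoints.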
Introducing a free message every hour to the mix-net protocol also allows
for smooth integration of another Free Haven feature: we want to allow
anonymous users to proxy a document retrieve request through certain
(public) Free Haven servers. Specifically, a user generates a one-time
mix-net reply block and a one-time key pair, and passes these to a Free
Haven node along with a handle to the document being requested. This
Free Haven node broadcasts the query to all other servers, just as
in a normal retrieve operation. Because bundling extra broadcasts into
each hourly message is virtually free, we can allow these extra
anonymous requests without much extra cost. Of course, a concerted flood
of document requests onto a server could cause its hourly message to be
very large; public Free Haven servers may have to drop document requests
after a certain threshold, or find some other mechanism for limiting
this threat of flooding.
Overall, providing bandwidth accountability along with anonymity is a
tough problem. What we describe above does not provide any clear
solution for an environment where we want to maintain strong anonymity.
This is perhaps why our current implementation with mix nets doesn't use
micropayments to address accountability. Further research is certainly
necessary.
Subsection: Reputation Systems
------------------------------
The Free Haven reputation solution has two parts: first we need to
notice servers that drop data early, and second we need to develop a
process for "punishing" these servers.
It's very difficult to notice if a server drops data early, and we
still haven't solved the problem completely. The "buddy system" laid
out in Chapter 11 is our current approach, and it may well be good
enough. After all, we simply have to provide a system that is
difficult to reliably fool -- it doesn't have to catch every
single instance of misbehavior.
As for punishing misbehaving servers, that's where our reputation
scheme comes in. The first step in developing a solution that uses
reputation systems is to examine the situation more thoroughly and try
to understand our goals and limitations. Every situation contains
features that make it hard to develop a reputation solution and
features that make it easier.
We expect the Free Haven domain to include a number of generous
individuals who will take some risks with their reputation and
resources. Since disk space is very cheap and getting cheaper, and
there's no significant "loss" if a single trade goes bad, the Free
Haven environment is relatively lenient.
Ratings in the reputation system are tied to transactions and include
digitally signed receipts. So we can be pretty certain either that a
given transaction actually did happen, or at least that the two
parties are conspiring. At regular intervals, each Free Haven server
broadcasts a "reputation referral," or a package of ratings of other
servers.
Nodes should broadcast reputation referrals in several circumstances:
* When they log the honest completion of a trade
* When they check to see if a buddy to a share they hold is still
available, and find that it is missing
* When there's a substantial change in the reputation or credibility of
a given server, compared to the last reputation referral about that
server
How often to broadcast a referral can be a choice made by each
server. Sending referrals more often allows that server to more easily
distribute its current information and opinions to other servers in
the network. On the other hand, frequent broadcasts use more
bandwidth, and other servers may ignore servers that talk too much.
Servers get most of their information from their own transactions and
trades. After all, those are the data points which they are most certain
they can trust. Each server keeps its own separate database of information
that it knows, based on what information it has locally observed and
what information has come to it. Thus every server can have a different
view of the universe, and a different impression of which servers are
reputable and which aren't. Indeed, these "local decisions" introduce a
lot of flexibility into the design: each server operator can choose her
own thresholds for trust, broadcast frequency, which trades are accepted
or offered, etc. These decisions can be made to suit her particular
situation, based for instance on available bandwidth and storage or the
expected amount of time that she'll be running a Free Haven server.
Since each server is collecting referrals from other servers (and some
of those servers may be trying to discredit good servers or disrupt
the system in other ways) we need a robust algorithm for combining the
referrals. Each server operator can use an entirely separate
algorithm, but realistically speaking, most of them will provide a
default recommended by the Free Haven designers.
Some ways of choosing a good algorithm are described earlier in the
section "Scoring Systems." In Free Haven, we don't expect to have to
focus on very many parameters in order to get a reasonable score. Our
basic approach to developing a score for a given server is to iterate
through each rating available on that server, and weight each rating
based on how important and how relevant it appears to be. Some
parameters that we might want to examine while weighting a score
include:
* How recent is this rating?
Newer ratings should get more weight.
* How similar (in terms of size and expiration date) is this rating to the
transaction I'm currently considering?
Similar ratings should get more weight.
* In my experience, has this server accurately predicted the behavior
that I have observed?
This deals with the credibility of the rater.
* How often does the server send referrals?
If a server is verbose, we might choose to assign a lower weight to
each rating. On the other hand, if we've only ever gotten one
referral from this server, we might regard it with skepticism.
* How long has the rating server been a Free Haven server?
We will probably have greater confidence in servers that have been
part of the system for a long time.
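One hypothetical way to combine these parameters into a score is sketched below. The particular weights, the 30-day decay constant, and the default credibility are arbitrary choices for illustration, not part of the Free Haven design:

```python
import math

def weight_rating(rating, now, credibility):
    """Weight one rating by recency, similarity, and the rater's
    credibility. `rating` is a dict with keys 'value' (0..1),
    'time' (seconds), 'size_similarity' (0..1), and 'rater'."""
    age_days = (now - rating["time"]) / 86400
    recency = math.exp(-age_days / 30)            # newer ratings count more
    similarity = rating["size_similarity"]         # similar trades count more
    trust = credibility.get(rating["rater"], 0.1)  # skeptical default
    return recency * similarity * trust

def score(ratings, now, credibility):
    """Credibility-weighted average of all ratings on one server."""
    weights = [weight_rating(r, now, credibility) for r in ratings]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * r["value"] for w, r in zip(weights, ratings)) / total
```

Because each operator picks her own weighting function and thresholds, two servers can legitimately arrive at different scores for the same peer.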
As explained in Chapter 11, each server needs to keep two values to
describe each other server it knows about: reputation and credibility.
Reputation signifies a belief that the server in question will obey
the Free Haven Protocol. Credibility represents a belief that the
referrals from that server are valuable information. For each of
these two values, each server also needs to maintain a confidence
rating. This indicates how much it should rely on the server in
relation to others.
When new servers want to join the system, they must contact certain
servers that are acting as introducers. These introducers are
servers that are willing to advertise their existence in order to
introduce new servers to the rest of the servnet. Introducing simply
consists of broadcasting a reputation referral with some initial
reputation values. Each introducer can of course choose her own
initial values, but following the discussion in the section "Problems
from Pseudospoofing and Possible Defenses," it seems most reasonable
to broadcast an initial referral value of zero for both reputation and
confidence.
At first glance, it seems that we do not need to worry about
information leaks from the compiled scores -- after all, the entire
goal of the system is to communicate as much information as possible
about the behavior and history of each pseudonym (server). But a closer
examination indicates that a large group of ratings might reveal some
interesting attributes about a given server. For instance, by looking
at the frequency and quantity of transactions, we might be able to
learn that a given server has a relatively large hard drive. We
currently believe that leaking this type of information is acceptable.
Subsection: Other Considerations From the Case Study
----------------------------------------------------
Alice has paid for some resource. But did she get what she paid for?
This question deals with the problem of trust, discussed more fully in
Chapter 15. But given our discussion so far, we should note a few
issues that apply to various distributed systems.
In the case of data storage, Alice can query Bob at a later date for
her document and verify its checksum in order to be sure Bob has
properly stored her document. She cannot be sure Bob has answered all
requests for that document, but she may be more convinced if Bob can't
determine that she's the one doing the query.
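In outline, the checksum spot-check might look like this (function names are illustrative):

```python
import hashlib

def store(document: bytes) -> str:
    """Alice records the checksum before handing the document to Bob."""
    return hashlib.sha256(document).hexdigest()

def spot_check(expected_checksum: str, returned: bytes) -> bool:
    """Later, Alice queries Bob -- ideally through an anonymous channel,
    so Bob cannot special-case her requests -- and verifies that what
    comes back matches what she stored."""
    return hashlib.sha256(returned).hexdigest() == expected_checksum
```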
A distributed computation system can check the accuracy
of the results returned by each end user. As we saw in Chapter 11,
some problems take a lot longer to solve than a checker takes to
verify the answer. In other situations, we can use special algorithms
to check the validity of aggregate data much faster than performing
each calculation individually. For example, verifying many digital
signatures at once turns out to have special "batch verification"
methods that run much faster than checking each signature
individually.
[ Mihir Bellare, Juan A. Garay, Tal Rabin: Fast Batch Verification for
Modular Exponentiation and Digital Signatures. EUROCRYPT 1998: 236-250 ]
On the other hand, sometimes these schemes leave themselves open to attack.
[Colin Boyd and Chris Pavlovski. Attacking and Repairing Batch
Verification Schemes. ASIACRYPT 2000.]
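The small-exponents test from the Bellare-Garay-Rabin paper can be sketched as follows. The toy group parameters are for illustration only; a real system would use a large prime-order group, and the savings come from the random exponents being much shorter than the claimed ones:

```python
import secrets

# Toy group parameters for illustration only.
p = 1000000007          # prime modulus
g = 5                   # assumed generator

def batch_verify(claims, security_bits=20):
    """Small-exponents test: check many claimed exponentiations
    y_i = g^x_i mod p at once. Each claim is weighted by a short random
    odd exponent s_i; a single wrong claim slips through with
    probability roughly 2^-security_bits."""
    s = [secrets.randbits(security_bits) | 1 for _ in claims]
    lhs = 1
    exp_sum = 0
    for (x, y), s_i in zip(claims, s):
        lhs = (lhs * pow(y, s_i, p)) % p
        exp_sum += x * s_i
    # One combined exponentiation replaces one full check per claim.
    return lhs == pow(g, exp_sum, p)
```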
The methods we've described take advantage of particular properties
of the problem at hand. Not all problems are known to have these
properties. For example, the SETI@Home project would benefit from some
quick method of checking correctness of its clients. This is because
malicious clients have tried to disrupt the SETI@Home project in the past.
Unfortunately, no quick, practical methods for checking SETI@Home
computations are currently known.
[ David Molnar. The SETI@Home Problem. ACM Crossroads. September 2000
http://www.acm.org/crossroads/columns/onpatrol/september2000.html ]
Verifying bandwidth allocation can be a trickier issue. Bandwidth
often goes hand-in-hand with data storage. For instance, Bob might
host a web page for Alice, but is he always responding to requests? A
starting point for verification is to sample anonymously at random and
gain some statistical assurance that Bob is up. Still, the Mixmaster
problem returns to haunt us. David Chaum, who proposed mix-nets in
1981,
[D. Chaum. Untraceable Electronic Mail, Return Addresses, and Digital
Pseudonyms. Communications of the ACM, vol. 24, no. 2, February 1981.]
suggested that mix nodes publish the outgoing batch of messages.
Alternatively, they could publish some number per message, selected at
random by Alice and known only to her. This suggestion works well for a
theoretical mix net endowed with a public bulletin board, but in Internet
systems it is difficult to ensure that the mix node actually sends out
these messages. Even a bulletin board could be tampered with.
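Returning to the sampling idea: anonymous spot checks can give a statistical estimate of Bob's availability. A minimal sketch, where the `probe` callable (hypothetical) issues one anonymous request and reports whether Bob answered:

```python
def estimate_uptime(probe, trials=100):
    """Probe Bob at random times through an anonymous channel and
    estimate the fraction of requests he answers, with a rough 95%
    confidence half-width for the estimate."""
    successes = sum(1 for _ in range(trials) if probe())
    p_hat = successes / trials
    margin = 1.96 * (p_hat * (1 - p_hat) / trials) ** 0.5
    return p_hat, margin
```

This only bounds how often Bob is up for the sampler; it cannot prove he answers everyone else's requests, which is why the anonymity of the probes matters.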
Above, we have described some approaches to addressing accountability
in Free Haven. We can protect against bandwidth flooding through the
use of micropayments in the mix net that Free Haven uses for
communication, and protect against data flooding through the use of a
reputation system. While the exact details of these proposed solutions
are not described here, hopefully the techniques described to choose
each accountability solution will be useful in the development of
similar peer-to-peer publication or storage systems.
Section: Conclusion
-------------------
Now we've seen a range of responses to the accountability problem. How
can we tell which ones are best? We can certainly start making some
judgements, but how does one know when one technique is better suited
than another?
Peer-to-peer remains a fuzzy concept. A strict definition has yet to be
accepted, and the term covers a wide array of systems that are only
loosely related (such as the ones in this book). This makes hard and
fast answers to these questions very difficult. When one describes
operating systems or databases, there are accepted design criteria that
all enterprise systems should fulfill, such as security and
fault-tolerance. In contrast, the criteria for peer-to-peer systems can
widely differ for various distributed application architectures:
file sharing, computation, instant messaging, intelligent searching, and
so on.
Still, we can describe some general themes.
This chapter has covered the theme of accountability. Our
classification has largely focused on two key issues:
1. Restricting access and protecting against attack.
2. Selecting favored users.
Dealing with resource allocation and accountability problems is a
fundamental part of designing any system that must serve many users.
Systems that do not deal with these problems have found and will
continue to find themselves in trouble, especially as adversaries find
ways to make such problems glaringly apparent.
With all the peer-to-peer hype over the past year -- and probably
spurred on by this publication -- we want to note a simple claim.
Peer-to-peer won't save you.
Two examples of resource allocation problems are the "slashdot effect"
and distributed denial of service attacks. From these examples, it's
tempting to think that somehow being peer-to-peer will save a system
from thinking about such problems -- after all, there's no longer any
central point to attack or flood!
That's why we began the chapter talking about Napster and Gnutella.
Unfortunately, as can be seen by Gnutella's scaling problems, the
massive amounts of Napster traffic, and flooding attacks on file storage
services, being peer-to-peer doesn't make the problems go away. It just
makes the problems different. Indeed, it makes the problems harder,
because with peer-to-peer there might be no central command or central store of data.
The history of cryptography provides a cautionary tale here. Systems
designers have realized the limits of theoretical cryptography for
providing practical security. Cryptography is not pixie dust to
spread liberally and without regard over network protocols, magically
achieving protections from adversaries. Buffer overflow attacks and
unsalted dictionary passwords are only two examples of easy exploits.
A system is only as secure as its weakest link.
The same assertion holds for decentralized peer-to-peer systems. A range of
techniques exists for solving accountability and resource allocation
problems. Particularly powerful are reputation and micropayment
techniques, which allow a system to collect and leverage local
information about its users. Which techniques should be used
depends on the system being designed.
Section: Acknowledgements
-------------------------
First and foremost, we'd like to thank our editor, Andy Oram, for
helping us to make all of these sections fit together and make
sense. Without him, this would still be just a jumble of really neat
ideas. We would also like to thank a number of others for reviews and
comments: Robert Zeithammer for economic discussion of micropayments,
Jean-François Raymond and Stefan Brands for micropayments, Nick
Mathewson and Blake Meike for the reputations section,
and Daniel Freedman, Marc Waldman, Susan Born, and Theo Hong for overall
edits and comments.