How would you like to share files with another user without having to explicitly place them in a designated external location? The recent successes of (and controversies surrounding) Napster, Gnutella, and FreeNet have drawn attention to peer-to-peer computing, which allows precisely such interactions between information and service providers and their customers. The author takes a brief look at peer-to-peer computing, or P2P, and its main variants, both those that are popular and those that ought to be.

P2P can be defined most easily in terms of what it is not: the client-server model, which is currently the most common model of distributed computing. In the client-server model, an application residing on a client computer invokes commands at a server. In P2P, an application is split into components that act as equals. The client-server model is simple and effective, but it has serious shortcomings, which are discussed.

P2P is by no means a new idea. The distributed computing research community has studied it for decades. Networks themselves demonstrate P2P in action: Ethernet is nothing if not a P2P protocol, and network routing operates through routers acting as peers with other routers. The difference in the recent focus on P2P seems to be that it has finally caught the imagination of people building practical systems at the application layer, and for good reason.
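The distinction between the two models can be sketched in a few lines. The following is a minimal in-process illustration, not an API from any real P2P system; all names (`Server`, `Client`, `Peer`, and their methods) are hypothetical. The key point is symmetry: a `Client` can only invoke commands at a `Server`, while every `Peer` both serves its own files and fetches from its neighbors.

```python
class Server:
    """Client-server model: only the server holds and serves files."""
    def __init__(self):
        self.files = {}

    def put(self, name, data):
        self.files[name] = data

    def get(self, name):
        return self.files.get(name)


class Client:
    """A client can only invoke commands at the server; it serves nothing."""
    def __init__(self, server):
        self.server = server

    def fetch(self, name):
        return self.server.get(name)


class Peer:
    """P2P model: every node acts as both client and server."""
    def __init__(self):
        self.files = {}
        self.neighbors = []

    def share(self, name, data):
        self.files[name] = data

    def serve(self, name):
        # The "server" half: answer requests from other peers.
        return self.files.get(name)

    def fetch(self, name):
        # The "client" half: check locally, then ask each neighbor
        # in turn (a depth-1 query, for simplicity).
        if name in self.files:
            return self.files[name]
        for peer in self.neighbors:
            data = peer.serve(name)
            if data is not None:
                return data
        return None


# Two peers share a file directly, with no designated external location.
a, b = Peer(), Peer()
a.neighbors.append(b)
b.neighbors.append(a)
b.share("song.mp3", b"...bytes...")
print(a.fetch("song.mp3"))  # a obtains the file directly from b
```

In a real system such as Gnutella, the neighbor query would travel over the network with a hop limit rather than a single in-process call, but the symmetric role of each node is the same.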