Chapter 1. Introduction

For the past decade, “distributed computing” has been one of the biggest buzz phrases in the computer industry. At this point in the information age, we know how to build networks; we use thousands of engineering workstations and personal computers to do our work, instead of huge behemoths in glass-walled rooms. Surely we ought to be able to use our networks of smaller computers to work together on larger tasks. And we do—an act as simple as reading a web page requires the cooperation of two computers (a client and a server) plus other computers that make sure the data gets from one location to the other. However, simple browsing (i.e., a largely one-way data exchange) isn’t what we usually mean when we talk about distributed computing. We usually mean something where there’s more interaction between the systems involved.
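
To make that client/server cooperation concrete, here is a minimal sketch of the client side of such an exchange in Java, using the standard java.net.URL class to request a page and print whatever the server sends back. The URL shown is only a placeholder.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class PageFetch {
        public static void main(String[] args) throws Exception {
            // Open a connection to the remote server and read the page it returns.
            // The server name here is just an example.
            URL url = new URL("http://www.example.com/");
            BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }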

You can think about distributed computing in terms of breaking down an application into individual computing agents that can be distributed on a network of computers, yet still work together to do cooperative tasks. The motivations for distributing an application this way are many. Here are a few of the more common ones:

  • Computing things in parallel by breaking a problem into smaller pieces enables you to solve larger problems without resorting to larger computers. Instead, you can use smaller, cheaper, easier-to-find computers.

  • Large data sets are often difficult to relocate, or are easier to control and administer where they already reside, so users have to rely on remote data servers to provide the information they need.

  • Systems that need fault tolerance can run redundant processing agents on multiple networked computers. If one machine or agent process goes down, the job can still carry on.

There are many other motivations, and plenty of subtle variations on the ones listed here.

Assorted tools and standards for assembling distributed computing applications have been developed over the years. These started as low-level data transmission APIs and protocols, such as RPC and DCE, and have recently begun to evolve into object-based distribution schemes, such as CORBA, RMI, and OpenDoc. These programming tools essentially provide a protocol for transmitting structured data (and, in some cases, actual runnable code) over a network connection. Java offers a language and an environment that encompass various levels of distributed computing development, from low-level network communication to distributed objects and agents, while also having built-in support for secure applications, multiple threads of control, and integration with other Internet-based protocols and services.
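
As a small taste of the distributed-object end of that spectrum, here is a minimal sketch of an RMI remote interface. The Calculator name and its single method are hypothetical, chosen only to show the shape of the idea: a client that obtains a reference to a remote object implementing this interface can invoke add() much as it would on a local object, with RMI handling the network transmission underneath.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // A remote interface declares the methods a distributed object agrees to
    // serve; every remote method must be declared to throw RemoteException.
    public interface Calculator extends Remote {
        double add(double a, double b) throws RemoteException;
    }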

This chapter gives an introduction to distributed application development and how Java can be used toward that end. In the following chapters, we’ll start by reviewing some essential background material on network programming, threads, and security. Then we’ll move into a series of chapters that explore different distributed problems in detail. Where appropriate, we’ll use RMI, CORBA, or a homegrown protocol to implement the examples. If you are developing distributed applications, you need to be familiar with the possible solutions and where each is appropriate, so where we choose a particular tool, we’ll try to discuss how things would be better or worse if you built something similar with a different set of tools.
