This basic pattern focuses on avoiding unnecessary network latency.
Communication between nodes is faster when the nodes are close together. Distance adds network latency. In the cloud, “close together” means in the same data center (sometimes even closer, such as on the same rack).
There are good reasons for nodes to be in different data centers, but this chapter focuses on ensuring that nodes that should be in the same data center actually are. Accidentally deploying across multiple data centers can result in terrible application performance and unnecessarily inflated costs due to data transfer charges.
This applies both to nodes running application code, such as compute nodes, and to nodes implementing cloud storage and database services. It also encompasses related decisions, such as where log files should be stored.
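As a concrete illustration, accidental cross-data-center placement can be caught with a deployment-time sanity check. The sketch below is hypothetical, not any cloud provider's API: it assumes a simple dictionary mapping each resource (compute, database, log storage) to the region it was provisioned in, however your provisioning tool reports that.

```python
def find_colocation_violations(deployment, expected_region):
    """Return the resources deployed outside the expected region.

    `deployment` maps resource names to the region (data center)
    each one was provisioned in -- a stand-in for whatever your
    provisioning tool actually reports.
    """
    return {
        name: region
        for name, region in deployment.items()
        if region != expected_region
    }


# Hypothetical deployment: log storage accidentally landed in another region.
deployment = {
    "web-tier": "us-east-1",
    "database": "us-east-1",
    "log-storage": "eu-west-1",
}

violations = find_colocation_violations(deployment, "us-east-1")
print(violations)  # {'log-storage': 'eu-west-1'}
```

Run as part of a deployment pipeline, a check like this turns a silent data transfer cost into a visible failure.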
The Colocation Pattern effectively deals with the following challenges:
- One node makes frequent use of another node, such as a compute node accessing a database
- Application deployment is basic, with no need for more than a single data center
- Application deployment is complex, involving multiple data centers, but nodes within each data center make frequent use of other nodes, which can be colocated in the same data center
In general, resources that are heavily reliant on each other should be colocated.
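This rule of thumb can also be checked over the dependencies themselves: flag any pair of heavily dependent resources that ended up in different data centers. The sketch below assumes a list of (user, provider) pairs for chatty resource pairs and a placement map; both names and the data structures are illustrative, not drawn from any particular tool.

```python
def cross_datacenter_dependencies(dependencies, placement):
    """Return dependency pairs whose endpoints sit in different data centers.

    `dependencies` lists (user, provider) pairs for resources that
    communicate frequently; `placement` maps each resource to the
    data center it was deployed in.
    """
    return [
        (user, provider)
        for user, provider in dependencies
        if placement[user] != placement[provider]
    ]


dependencies = [("web-tier", "database"), ("web-tier", "cache")]
placement = {"web-tier": "dc-east", "database": "dc-east", "cache": "dc-west"}

print(cross_datacenter_dependencies(dependencies, placement))
# [('web-tier', 'cache')]
```

Each pair this returns is a network hop paying avoidable latency and, in most clouds, data transfer charges.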
A multitier application generally has a web or application server tier that accesses a database tier. It is often desirable to minimize network ...