Chapter 3. Growing with S3, ELB, Auto Scaling, and RDS

We have done quite a lot in just a couple of chapters. We have explored what it means to design, build, and operate virtual infrastructures on AWS. We have looked at the opportunities it provides, and we have moved a real-world application, Kulitzer.com, to AWS. Although we have done many things that are usually very difficult on physical infrastructures, we have not yet looked at the biggest benefit of AWS: an elastic infrastructure that scales with demand.

With growing traffic, our initial setup will soon be insufficient. We know we can scale up with bigger machines, but we prefer to scale out. Scaling up is OK in certain situations. If you have a fixed upper limit on traffic—for example, if your users are internal to an organization—you can safely assume it is not necessary to go through the trouble of implementing load balancing and/or autoscaling. But if your application is public, and perhaps global, this assumption can be costly. Scaling up is also a manual operation on AWS, requiring you to stop and restart your instance and accept temporary unavailability. The same goes for the opposite, scaling down (Figure 3-1).

Scaling out, on the other hand, requires changes to your application. You have to make sure several instances can do the same work independently. If each instance can operate in its own context, it doesn’t matter whether you have three or seven of them. If this is possible, we can handle traffic with just the right number ...
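The core requirement for scaling out is that instances hold no local state a user depends on, so a load balancer can send any request to any instance. A minimal sketch of that idea (not from the book; the classes and the in-memory dict standing in for an external store such as memcached or a database are hypothetical):

```python
# Sketch: why stateless app servers can scale out.
# Session state lives in a shared external store (here a plain dict
# stands in for something like memcached or a database), so every
# instance is interchangeable.

class SharedSessionStore:
    """Stands in for an external session store (e.g., memcached)."""
    def __init__(self):
        self._data = {}

    def get(self, session_id):
        return self._data.get(session_id, {})

    def put(self, session_id, session):
        self._data[session_id] = session


class AppInstance:
    """One of N identical app servers; keeps no session state itself."""
    def __init__(self, store):
        self.store = store

    def handle(self, session_id, item):
        session = self.store.get(session_id)
        cart = session.setdefault("cart", [])
        cart.append(item)
        self.store.put(session_id, session)
        return cart


store = SharedSessionStore()
instances = [AppInstance(store) for _ in range(3)]

# A load balancer may route each request to a different instance;
# because state is external, the user still sees one consistent cart.
instances[0].handle("alice", "camera")
cart = instances[2].handle("alice", "tripod")
print(cart)  # ['camera', 'tripod']
```

With this property, going from three instances to seven is purely a capacity decision; nothing about the application has to change.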
