
Chapter 4. Decoupling with SQS, SimpleDB, and SNS

Take a moment to look back at what we’ve achieved so far. We started with a simple web application, with an infrastructure that was ready to fail. We took this app and used AWS services like Elastic Load Balancing and Auto Scaling to make it resilient, with certain components scaling out (and in) with traffic demand. We built an infrastructure (without any investment) that is capable of handling huge traffic at relatively low operational cost. Quite a feat!

Scaling beyond this amount of traffic requires more drastic changes in the application. We need to decouple, and for that we’ll use Amazon SimpleDB, Amazon Simple Notification Service (SNS), and Amazon Simple Queue Service (SQS), together with S3, which we have already seen in action. But these services are more versatile than just allowing us to scale. We can use the decoupling principle in other scenarios, either because we already have distinct components or because we can easily add functionality.

In this chapter, we’ll be presenting many different use cases. Some are already in production, others are planned or dreamed of. The examples are meant to show what you can do with these services and help you to develop your own by using code samples in various languages. We have chosen to use real-world applications that we are working with daily. The languages are Java, PHP, and Ruby, and the examples should be enough to get you going in other languages with libraries available.


SQS

In the SQS Developer Guide, you can read that “Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing.”

And that’s basically all it is! You can have many writers hitting a queue at the same time. SQS does its best to preserve order, but the distributed nature makes it impossible to guarantee it. If you really need to preserve order, you can add your own identifier as part of the queued messages, but approximate order is probably enough to work with in most cases. A trade-off like this is necessary in massively scalable services like SQS. This is not very different from eventual consistency, as seen in S3 and (as we will show soon) in SimpleDB.
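If you do need order, a hedged approach is to embed your own sequence number in the message body and restore order on the consumer side. Here is a minimal sketch with the AWS SDK for Java (the credentials, queue URL, payload, and the nextSequenceNumber helper are placeholders for this illustration):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.SendMessageRequest;

// ...

AmazonSQS sqs = new AmazonSQSClient(
        new BasicAWSCredentials("<access key>", "<secret key>"));
String queueUrl = "https://queue.amazonaws.com/<account-id>/images";

// prefix the message body with our own sequence number;
// the consumer splits on ':' and sorts before processing
long sequence = nextSequenceNumber(); // application-specific counter
sqs.sendMessage(new SendMessageRequest(queueUrl,
        sequence + ":" + "https://s3.amazonaws.com/<bucket>/image.jpg"));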

You can also have many readers, and SQS guarantees each message is delivered at least once. Reading a message is atomic: locks are used to keep multiple readers from processing the same message. But because a reader can fail before it finishes its work, SQS does not delete a message when it is read; instead, it sets the message to invisible. This invisibility has an expiration, called the visibility timeout, that defaults to 30 seconds. If this is not enough, you can change it on the queue or per message, although the recommended way is to use different queues for different visibility timeouts. After processing the message, you must delete it explicitly (if processing was successful, of course).
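As a sketch of that lifecycle, here is the receive, process, delete cycle with the AWS SDK for Java, requesting a longer visibility timeout for a single receive (the queue URL is a placeholder, process is an application-specific method, and sqs is the client from the previous sketch):

import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

// ...

// the received messages become invisible to other readers
// for 120 seconds instead of the default 30
ReceiveMessageRequest receive = new ReceiveMessageRequest(queueUrl)
        .withVisibilityTimeout(120);

for (Message message : sqs.receiveMessage(receive).getMessages()) {
    // if we crash here, the message becomes visible
    // again once the visibility timeout expires
    process(message.getBody());

    // only an explicit delete removes the message for good
    sqs.deleteMessage(
            new DeleteMessageRequest(queueUrl, message.getReceiptHandle()));
}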

You can have as many queues as you want, but leaving them inactive is a violation of intended use. We couldn’t figure out what the penalties are, but the principle of cloud computing is to minimize waste. Message size is variable, and the maximum is 64 KB. If you need to work with larger objects, the obvious place to store them is S3. In our examples, we use this combination as well.

One last important thing to remember is that messages are not retained indefinitely. Messages will be deleted after four days by default, but you can have your queue retain them for a maximum duration of two weeks.

We’ll show a number of interesting applications of SQS. For Kulitzer, we want more flexibility in image processing, so we’ve decided to decouple the web application from the image processing. For Marvia, we want to implement delayed PDF processing: users can choose to have their PDFs processed later, at a cheaper rate. And finally, we’ll use Decaf to have our phone monitor our queues and notify us when they go out of bounds.

Example 1: Offloading Image Processing for Kulitzer (Ruby)

Remember how we handle image processing with Kulitzer? We basically have the web server spawn a background job for asynchronous processing, so the web server (and the user) can continue their business. The idea is perfect, and it works quite well. But there are emerging conversations within the team about adding certain features to Kulitzer that are not easily implemented in the current infrastructure.

Two ideas floating around are RAW images and video. For both these formats, we face the same problem: there is no ready-made solution to cater to our needs. We expect to have to build our service on top of multiple available free (and less free) solutions. Even though these features have not yet been requested and aren’t found on any road map, we feel we need to offer the flexibility for this innovation.

For the postprocessing of images (and video) to be more flexible, we need to separate this component from the web server. The idea is to implement a postprocessing component that picks up jobs from an SQS queue as they become available. The web server will handle the user upload, move the file to S3, and add a message to the queue to be processed. The images that are not yet available (thumbnails, watermarked versions, etc.) will be replaced by a “being processed” image (with an expiration header in the past, so it is not cached). As soon as the images are available, the user will see them. Figure 4-1 shows how this could be implemented, using a separate EC2 instance for image processing in case scalability becomes a concern. This change introduces the SQS images queue and the image processing EC2 instance.


Figure 4-1. Offloading image processing

We already have the basics in our application, and we just need to separate them. We will move the copying of the image to S3 out of the background jobs, because if something goes wrong we need to be able to notify the user immediately. The user waits until the image has been uploaded, and can thus be notified on the spot if something went wrong (wrong file type, corrupt image, etc.). This simplifies the app, making it easier to maintain. If the upload to S3 was successful, we add an entry to the images queue.


SQS is great, but there are not many tools to work with it. You can use the SQS Scratchpad, provided by AWS, to create your first queues, list queues, etc. It’s not a real app, but it’s valuable nonetheless. You can also write your own tools on top of the available libraries. If you work with Python, you can start the shell, load boto, and do the necessary work by hand.

This is all it takes to create a queue and add the image as a queue message. You can run this example with irb, using right_aws, the Ruby gem from RightScale. Use images_queue.size to verify that your message has really been added to the queue:

require 'rubygems'
require 'right_aws'

# get SQS service with AWS credentials
sqs = RightAws::SqsGen2.new('AKIAIGKECZXA7AEIJLMQ',
                            'w2Y3dx82vcY1YSKbJY51GmfFQn3705ftW4uSBrHn')

# create the queue, if it doesn't exist, with a VisibilityTimeout of 120 (seconds)
images_queue = sqs.queue("images", true, 120)

# the only thing we need to pass is the URL to the image in S3
images_queue.send_message("https://s3.amazonaws.com/<bucket>/<image>")

That is more or less what we will do in the web application to add messages to the queue. Getting messages from the queue, which is part of our image processing component, is done in two steps. If you only get a message from the queue—receive—SQS will set that message to invisible for a certain time. After you process the message, you can delete it from the queue. right_aws provides a method pop, which both receives and deletes the message in one operation. We encourage you not to use this, as it can lead to errors that are very hard to debug because you have many components in your infrastructure and a lot of them are transient (as instances can be terminated by Auto Scaling, for example). This is how we pick up, process, and, upon success, delete messages from the images_queue:

require 'rubygems'
require 'right_aws'

# get SQS service with AWS credentials
sqs = RightAws::SqsGen2.new('AKIAIGKECZXA7AEIJLMQ',
                            'w2Y3dx82vcY1YSKbJY51GmfFQn3705ftW4uSBrHn')

# create the queue, if it doesn't exist, with a VisibilityTimeout of 120 (seconds)
images_queue = sqs.queue("images", true, 120)

# now get the messages
message = images_queue.receive

# process the message, if any
# the process method is application-specific
if (message and process(message.body)) then
  # delete the message only after successful processing
  message.delete
end

This is all it takes to decouple distinct components of your application. We only pass around the full S3 URL, because that is all we need for now. The different image sizes that are going to be generated will be put in the same directory in the same bucket, and are distinguishable by filename. Apart from being able to scale, we can also scale flexibly. We can use the queue as a buffer, for example, so we can run the processing app with fewer resources. There is no user waiting and we don’t have to be snappy in performance.


This is also the perfect moment to introduce reduced redundancy storage, a kind of storage less durable than standard S3 that notifies you when a particular file is lost. The notification can be configured to place a message in the queue we just created, so our postprocessing component can recognize it as a lost file and regenerate it from the original.

Reduced redundancy storage is cheaper than standard S3 and can be enabled per file. Especially with large content repositories, you often only need the original to be stored with the full redundancy of standard S3; the generated files don’t require that extra durability, because they can always be regenerated.
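With the AWS SDK for Java, for example, selecting reduced redundancy is a per-upload option. A small sketch (the bucket, keys, and the credentials object are placeholders for this illustration):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.StorageClass;
import java.io.File;

// ...

AmazonS3 s3 = new AmazonS3Client(credentials);

// the original keeps the standard S3 storage class; the
// generated thumbnail is stored with reduced redundancy
s3.putObject(new PutObjectRequest("<bucket>", "123/thumbnail.jpg",
        new File("/tmp/thumbnail.jpg"))
        .withStorageClass(StorageClass.ReducedRedundancy));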

Notification is through SNS, which we’ll see in action in a little bit. But it is relatively easy to have SNS forward the notification to the queue we are working with.
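A sketch of that wiring with the AWS SDK for Java: create the topic that S3 will notify and subscribe our existing queue to it (the topic name and queue ARN are hypothetical, and we omit the queue policy that must grant the topic permission to send messages to the queue):

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.CreateTopicRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;

// ...

AmazonSNS sns = new AmazonSNSClient(credentials);

// the topic S3 notifies when a reduced redundancy object is lost
String topicArn = sns.createTopic(
        new CreateTopicRequest("s3-rrs-lost-objects")).getTopicArn();

// forward every notification on the topic to our images queue
sns.subscribe(new SubscribeRequest(topicArn, "sqs",
        "arn:aws:sqs:us-east-1:<account-id>:images"));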

Example 2: Priority PDF Processing for Marvia (PHP)

Marvia is a young Dutch company located in the center of Amsterdam. Even though it hasn’t been around for long, its applications are already very impressive. For example, it built the Speurders application for the Telegraaf (one of the largest Dutch daily newspapers), which allows anyone to place a classified ad in the print newspaper. No one from the Telegraaf bothers you, and you can decide exactly how your ad will appear in the newspaper.

But this is only the start. Marvia will expose the underlying technology of its products in what you can call an API. This means Marvia is creating a PDF cloud that will be available to anyone who has structured information that needs to be printed (or displayed) professionally.

One of the examples showing what this PDF cloud can do is Cineville, a weekly film schedule that is distributed in print in and around Amsterdam. Cineville gets its schedule information from different cinemas and automatically has its PDF created. This PDF is then printed and distributed. Cineville has completely eradicated human intervention in this process, illustrating the extent of the new industrialization phase we are entering with the cloud.

The Marvia infrastructure is already decoupled, and it consists of two distinct components. One component (including a web application) creates the different assets needed for generating a PDF: templates, images, text, etc. These are then sent to the second component, the InDesign farm. This is an OSX-based render farm, and since AWS does not (yet) support OSX, Marvia chose to build it on its own hardware. This architecture is illustrated in Figure 4-2.


Figure 4-2. Marvia’s current architecture

The two components already communicate, by sharing files through S3. It works, but it lacks the flexibility to innovate the product. One of the ideas being discussed is to introduce the concept of quality of service. The quality of the PDFs is always the same—they’re very good because they’re generated with care, and care takes time. That’s usually just fine, but sometimes you’re in a hurry and you’re willing to pay something extra for preferential treatment.

Again, we built in flexibility. For now we are stuck with OSX on physical servers, but as soon as that is available in the cloud, we can easily start optimizing resources. We can use spot instances (see the Tip), for example, to generate PDFs with the lowest priority. The possibilities are interesting.

But let’s look at what is necessary to implement a basic version of quality of service. We want to create two queues: high-priority and normal. When there are messages in the high-priority queue, we serve those; otherwise, we work on the rest. If necessary, we could add more rules to prevent starvation of the low-priority queue. These changes are shown in Figure 4-3.


Figure 4-3. Using SQS to introduce quality of service
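Although the implementation below is in PHP, the polling logic itself is compact. Here is a minimal sketch in Java, matching the other Java examples in this chapter (the queue URLs and the sqsService client are placeholders; a production version would add the anti-starvation rules mentioned above):

import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import java.util.List;

// ...

String highPriorityUrl = "https://queue.amazonaws.com/<account-id>/high-priority-jobs";
String normalUrl = "https://queue.amazonaws.com/<account-id>/normal-jobs";

// always serve the high-priority queue first
List<Message> messages = sqsService.receiveMessage(
        new ReceiveMessageRequest(highPriorityUrl)).getMessages();

if (messages.isEmpty()) {
    // no urgent work; fall back to the normal queue
    messages = sqsService.receiveMessage(
            new ReceiveMessageRequest(normalUrl)).getMessages();
}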

Installing the tools for PHP

AWS has a full-fledged PHP library, complete with code samples and some documentation. It is relatively easy to install using PHP Extension and Application Repository (PEAR). To install the stable version of the AWS PHP library in your PHP environment, execute the following commands (for PHP 5.3, most required packages are available by default, but you do have to install the JSON library and cURL):

$ pear install pecl/json
$ apt-get install curl libcurl3 php5-curl

$ pear channel-discover pear.amazonwebservices.com
$ pear install aws/sdk

Writing messages

The first script receives messages that are posted with HTTP, and adds them to the right SQS queue. The appropriate queue is passed as a request parameter. The job description is fictional, and hardcoded for the purpose of the example. We pass the job description as an encoded JSON array. The queue is created if it doesn’t exist already:


We have used the AWS PHP SDK as is, but the path to sdk.class.php might vary depending on your installation. The SDK reads definitions from a config file in your home directory or the directory of the script. We have included them in the script for clarity.

    require_once('/usr/share/php/AWSSDKforPHP/sdk.class.php');

    define('AWS_KEY', 'AKIAIGKECZXA7AEIJLMQ');
    define('AWS_SECRET_KEY', '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');
    define('AWS_ACCOUNT_ID', '457964863276');

    # get queue name
    $queue_name = $_GET['queue'];

    # construct the message (a fictional job description,
    # hardcoded for the purpose of the example)
    $job_description = array(
      'template' => 'https://s3.amazonaws.com/<bucket>/template.indd',
      'assets' => 'https://s3.amazonaws.com/<bucket>/assets/',
      'result' => 'https://s3.amazonaws.com/<bucket>/result.pdf');
    $body = json_encode($job_description);

    $sqs = new AmazonSQS();

    # create the queue, if it doesn't exist yet
    $queue = $sqs->create_queue($queue_name);
    $queue->isOK() or
        die('could not create queue ' . $queue_name);

    # add the message to the queue
    $response = $sqs->send_message($queue->body->QueueUrl(0), $body);

    pr($response->body);

    function pr($var) { print '<pre>'; print_r($var); print '</pre>'; }

Below, you can see an example of what the output might look like (we would invoke it with something like http://<elastic_ip>/write.php?queue=high-priority):

CFSimpleXML Object
(
    [@attributes] => Array
        (
            [ns] =>
        )

    [SendMessageResult] => CFSimpleXML Object
        (
            [MD5OfMessageBody] => d529c6f7bfe37a6054e1d9ee938be411
            [MessageId] => 2ffc1f6e-0dc1-467d-96be-1178f95e691b
        )

    [ResponseMetadata] => CFSimpleXML Object
        (
            [RequestId] => 491c2cd1-210c-4a99-849a-dbb8767d0bde
        )
)


Reading messages

In this example, we read messages from the SQS queue, process them, and delete them afterward:


    require_once('/usr/share/php/AWSSDKforPHP/sdk.class.php');

    define('AWS_KEY', 'AKIAIGKECZXA7AEIJLMQ');
    define('AWS_SECRET_KEY', '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');
    define('AWS_ACCOUNT_ID', '457964863276');

    $queue_name = $_GET['queue'];

    $sqs = new AmazonSQS();

    $queue = $sqs->create_queue($queue_name);
    $queue->isOK() or die('could not create queue ' . $queue_name);

    # receive a message; it becomes invisible for the visibility timeout
    $receive_response = $sqs->receive_message($queue->body->QueueUrl(0));

    # process the message...

    # upon success, delete the message, identified by its receipt handle
    $delete_response = $sqs->delete_message($queue->body->QueueUrl(0),
        $receive_response->body->ReceiptHandle(0));

    $body = json_decode($receive_response->body->Body(0));
    pr($body);

    function pr($var) { print '<pre>'; print_r($var); print '</pre>'; }

The output for this example would look like this (we would invoke it with something like http://<elastic_ip>/read.php?queue=high-priority):

stdClass Object
(
    [template] => https://s3.amazonaws.com/<bucket>/template.indd
    [assets] => https://s3.amazonaws.com/<bucket>/assets/
    [result] => https://s3.amazonaws.com/<bucket>/result.pdf
)

Example 3: Monitoring Queues in Decaf (Java)

We are going to get a bit ahead of ourselves in this section. Operating apps using advanced services like SQS, SNS, and SimpleDB is the subject of Chapter 7. But we wanted to show SQS in use from Java as well. And, as we only consider real examples interesting (the only exception being Hello World, of course), we’ll show you excerpts of the Java source of Decaf.

If you start using queues as the glue between the components of your applications, you are probably curious about their state. How many messages are in a queue at any given time is not terribly interesting while your application is working. But if something goes wrong with the pool of readers or writers, you can encounter a couple of different situations:

  • The queue has more messages than normal.

  • The queue has too many invisible messages (messages are being processed but not deleted).

  • The queue doesn’t have enough messages.

We are going to add a simple SQS browser to Decaf. It shows the queues in a region, and you can see the state of a queue by inspecting its attributes. The attributes we are interested in are ApproximateNumberOfMessages and ApproximateNumberOfMessagesNotVisible. We already have all the mechanics in place to monitor certain aspects of your infrastructure automatically; we just need to add appropriate calls to watch the queues.

Getting the queues

For using the services covered in this chapter, there is the AWS SDK for Android. To set up the SQS service, all you need to do is pass it your account credentials and set the right endpoint for the region you are working in. In the Decaf examples, we are using the us-east-1 region. If you are not interested in Android and just want to use plain Java, there is the AWS SDK for Java, which covers all the services. All the examples given in this book run with both sets of libraries.

In this example, we invoke the ListQueues action, which returns a list of queue URLs. Any other information about the queues (such as current number of messages, time when it was created, etc.) has to be retrieved in a separate call passing the queue URL, as we show in the next sections.

If you have many queues, it is possible to retrieve just some of them by passing a queue name prefix parameter. Only the queues with names starting with that prefix will be returned:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;

// ...

// prepare the credentials
String accessKey = "AKIAIGKECZXA7AEIJLMQ";
String secretKey = "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";

// create the SQS service
AmazonSQS sqsService = new AmazonSQSClient(
            new BasicAWSCredentials(accessKey, secretKey));

// set the endpoint for the us-east-1 region
sqsService.setEndpoint("https://queue.amazonaws.com");

// get the current queues for this region
this.queues = sqsService.listQueues().getQueueUrls();
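And to retrieve only the queues whose names start with a given prefix, as mentioned above, you pass a ListQueuesRequest (the prefix here is arbitrary):

import com.amazonaws.services.sqs.model.ListQueuesRequest;

// ...

// only queues whose names start with "high-" are returned
this.queues = sqsService.listQueues(
        new ListQueuesRequest("high-")).getQueueUrls();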

Reading the queue attributes

The attributes of a queue at the time of writing are:

  • The approximate number of visible messages it contains. This number is approximate because of the distributed architecture on which SQS is implemented, but generally it should be very close to reality.

  • The approximate number of messages that are not visible. These are messages that have been retrieved by some component to be processed but have not yet been deleted by that component, and the visibility timeout is not over yet.

  • The visibility timeout. This is how long a message can be in invisible mode before SQS decides that the component responsible for it has failed and puts the message back in the queue.

  • The timestamp when the queue was created.

  • The timestamp when the queue was last updated.

  • The permissions policy.

  • The maximum message size. Messages larger than the maximum will be rejected by SQS.

  • The message retention period. This is how long SQS will keep your messages if they are not deleted. The default is four days. After this period is over, the messages are automatically deleted.

You can change the visibility timeout, policy, maximum message size, and message retention period by invoking the SetQueueAttributes action.
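For instance, here is a minimal sketch that stretches the retention period of a queue to the maximum of two weeks (the queue URL is a placeholder):

import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
import java.util.HashMap;
import java.util.Map;

// ...

Map<String, String> newAttributes = new HashMap<String, String>();
// the retention period is given in seconds: 14 days
newAttributes.put("MessageRetentionPeriod", "1209600");

sqsService.setQueueAttributes(new SetQueueAttributesRequest(
        "https://queue.amazonaws.com/<account-id>/<queue-name>", newAttributes));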

You can indicate which attributes you want to list, or use All to get the whole list, as done here:


import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;

// ...

GetQueueAttributesRequest request = new GetQueueAttributesRequest();

// set the queue URL, which identifies the queue
// (a placeholder URL, hardcoded for this example)
String queueURL = "https://queue.amazonaws.com/<account-id>/<queue-name>";
request = request.withQueueUrl(queueURL);

// we want all the attributes of the queue
request = request.withAttributeNames("All");

// make the request to the service
this.attributes = sqsService.getQueueAttributes(request).getAttributes();

Figure 4-4 shows a screenshot of our Android application, listing attributes of a queue.


Figure 4-4. Showing the attributes of a queue

Checking a specific queue attribute

If we want to check the number of messages in a queue and trigger an alert (email message, Android notification, SMS, etc.), we can request the attribute ApproximateNumberOfMessages:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;
import java.util.Map;

// ...

// get the attribute ApproximateNumberOfMessages for this queue
GetQueueAttributesRequest request = new GetQueueAttributesRequest();
String queueURL = "https://queue.amazonaws.com/<account-id>/<queue-name>";
request = request.withQueueUrl(queueURL);
request = request.withAttributeNames("ApproximateNumberOfMessages");

Map<String, String> attrs = sqsService.getQueueAttributes(request).getAttributes();

// get the approximate number of messages in the queue
int messages = Integer.parseInt(attrs.get("ApproximateNumberOfMessages"));

// compare with max, the user's choice for maximum number of messages
if (messages > max) {
    // if the number of messages exceeds the maximum,
    // notify the user using Android notifications
    // ...
}


All the AWS web service calls can throw exceptions—AmazonServiceException, AmazonClientException, and so on—signaling different errors. For SQS calls you could get errors such as “Invalid attribute” and “Nonexistent queue.” In general, you could get other errors common to all web services such as “Authorization failure” or “Invalid request.” We are not showing the try-catch in these examples, but in your application you should make sure the different error conditions are handled.
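A sketch of what that might look like around the attributes call from the previous example (using Android’s Log here, since Decaf is an Android app):

import android.util.Log;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;

// ...

try {
    this.attributes = sqsService.getQueueAttributes(request).getAttributes();
} catch (AmazonServiceException e) {
    // the request made it to SQS but was rejected
    // (nonexistent queue, invalid attribute, authorization failure, ...)
    Log.e("SQS", "service error: " + e.getErrorCode(), e);
} catch (AmazonClientException e) {
    // the request could not be made or no response was received
    // (connectivity problems, for example)
    Log.e("SQS", "client error", e);
}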
