Chapter 4. Drupal Coding for Optimal Performance

One of the great things about Drupal is the ease with which you can extend or override core functionality in order to customize it for your specific needs. However, if you are not careful with how you code, you may introduce a huge performance bottleneck into your contributed or custom module. This chapter will give an overview of Drupal APIs relevant to performance and scalability, common coding best practices, and pitfalls to be aware of when trying to approach common tasks.

Context Matters

Before discussing the APIs and patterns that Drupal provides, it’s worth discussing which types of issues are often introduced when writing Drupal code.

Performance and scalability issues in code can affect CPU, memory, filesystem, database, and network usage, either individually or in combination. All code uses at least some CPU and memory, and all sites will access the database and filesystem and potentially make network requests. Whether any of these turns out to be a performance bottleneck is always down to context.

There are no hard and fast rules about what makes code “fast” or “slow”—exactly the same code could be acceptable in one situation but not in another, and performance often needs to be balanced against other programming issues such as testability, readability, and maintainability.

When writing or reviewing code, it’s important to think of the context the code will be executed in—both the immediate use case and whether it might also be applied to other contexts. The following are some general questions to ask, before you start trying to optimize at all:

  • Does the code get executed on every request?
  • Could it run more than once during a request? If so, a few times, or hundreds or thousands?
  • If the code runs less frequently, will it affect end user performance? And how critical is end user performance in that case?
  • Does the code have side effects that could affect the performance of other requests, such as writing to the database or flushing caches?
  • Is the code an isolated unit, or will it be affected by other code or the configuration and state of the Drupal installation it runs on? For example, the amount of content, users, themes, or modules installed can dramatically change the characteristics of how code performs.

Only after considering these questions should you attempt to apply one or more of the approaches outlined here.

False Optimizations

It’s entirely possible to make the performance of code worse by “optimizing” it. This happens when additional code is added to avoid expensive processing, but the expensive processing happens anyway. The result is that both the original expensive code and the new code run, adding additional overhead to an already bad situation.

An example of this is the fairly common micro-optimization of replacing array_key_exists() with isset(). (Please note that this is used only as an example, and we’re not explicitly recommending doing so!):

isset()
This is a language construct that tells you whether a variable is set or not, and returns false if that variable is explicitly set to NULL.
array_key_exists()
This is a function that tells you if an array key exists regardless of the value.

Function calls in PHP have more overhead than language constructs, so an isset() check takes less time than a call to array_key_exists(). While the semantics differ, the two can be used interchangeably if you don’t need to explicitly check for array keys set to NULL. Hence, a common micro-optimization is to use isset() unless it’s absolutely necessary to check for NULL.

Let’s assume you had some code that definitely needed to use array_key_exists() because of the NULL check, but you wanted to try to run the faster isset() first, to skip the function call when it’s not needed. You might write code like this:

<?php
$array = array('foo' => NULL);

isset($array['foo']); // returns FALSE.

array_key_exists('foo', $array); // returns TRUE.

isset($array['foo']) || array_key_exists('foo', $array); // returns TRUE.
?>

The last example is semantically identical to just an array_key_exists() call, but in the case that $array['foo'] is set to a non-NULL value, only the isset() check needs to be made, avoiding the more expensive function call.

However, if $array['foo'] doesn’t exist or is set to NULL, then the code actually has to do more work—checking isset(), then array_key_exists(), as well as the || operator—all of which is going to be slower than just running array_key_exists() in the first place!

The only way to know the effect of this is to create a realistic scenario or test on a real install, and see which code execution path is actually the most common. This comes back to context—it’s not so much the content of the code itself that determines its performance, but how exactly it is executed.

Whether this kind of optimization is a problem depends on the relative performance increase you hope to gain.

For example, when checking access rights, you may need to check an administrative permission via user_access() as well as access permissions based on an entity ID, which requires loading the entity via entity_load() first. Both checks are necessary regardless, but the order is important.

While very few users might have the administrative permission, a call to user_access() takes a fraction of the resources that loading and access-checking an entity does and won’t cause a measurable delay. It’s worth doing the cheaper check first even if the second, more expensive check will run too.
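
A minimal sketch of that ordering (the permission name and ownership rule here are hypothetical):

<?php
function example_node_view_access($nid, $account) {
  // Cheap check first: user_access() costs a fraction of an entity load,
  // so run it first even though few users will have the permission.
  if (user_access('administer example content', $account)) {
    return TRUE;
  }
  // Expensive check: load the entity and test ownership.
  $node = node_load($nid);
  return $node && $node->uid == $account->uid;
}
?>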

This is the same with almost any pattern that attempts to circumvent code execution rather than completely rewriting it. For example, adding persistent caching to a function that is a cache miss in 99.9% of cases will mean extra time spent checking and writing to the cache, as well as extra space being taken up in cache storage, on top of the original code being executed. However, if the code being executed is very expensive, then the overhead of cache misses may well be outweighed regardless.

With this in mind, we’ll first cover a common task for Drupal custom and contributed modules, and look at ways to ensure that this task is executed as fast as possible. Then we’ll move on to the APIs that Drupal provides specifically to aid with performance and scaling.

Listing Entities

Whether it’s on the front page of a blog or in a gallery of images or a comment thread, much of the work done on a Drupal site involves getting a list of entities and then rendering them.

There are two APIs introduced in Drupal 7, and only slightly changed in Drupal 8, that help with this: EntityFieldQuery() and entity_load_multiple().

EntityFieldQuery()

Rather than querying entity and field tables directly, EntityFieldQuery() relies on a storage controller to handle building and executing the query for the appropriate entity storage backend. This has the advantage that any query run through EntityFieldQuery() is storage agnostic, so if you’re writing a contributed module or working on a site where it might be necessary to move to alternative entity storage in the future, all your queries will transparently use the new storage backend without any refactoring. EntityFieldQuery() can be used whether you’re writing queries by hand in custom code or via the EntityFieldQuery Views backend.
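
For example, a minimal query for the IDs of published articles might look like this (the bundle name is an assumption for illustration):

<?php
$query = new EntityFieldQuery();
$query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->range(0, 10);
$result = $query->execute();
// Results are keyed by entity type, then entity ID.
$nids = !empty($result['node']) ? array_keys($result['node']) : array();
?>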

Multiple Entity Loading

Once you have some entities to list, you’ll need to load and then render them.

A common pattern would be to loop over each node and load them individually:

<?php
/**
 * Provides an array of rendered entities, given their IDs.
 *
 * @param array $ids
 *   The entity IDs to load.
 *
 * @return array
 *   The array of rendered entities, keyed by entity ID.
 */
function render_entities($ids) {
  $rendered_entities = array();
  foreach ($ids as $id) {
    $rendered_entities[$id] = entity_view(entity_load($id));
  }
  return $rendered_entities;
}
?>

Drupal 7 introduced multiple entity loading and rendering so that tasks such as fetching field values from the database could be done once for all nodes with an IN() query rather than executed individually:

<?php
/**
 * Provides an array of rendered entities, loading them all at once.
 */
function render_entities($ids) {
  $entities = entity_load_multiple($ids);
  return entity_view_multiple($entities);
}
?>

By using the multiple load and view functions, assuming 10 nodes need to be loaded and rendered, 10 similar queries to the same table can be reduced to just one. Since an individual node load could require 10 or 20 database queries, this can result in dozens or hundreds of database queries saved when loading and rendering multiple nodes at the same time.

Note that this applies to hook implementations as well; for example, hook_entity_load() acts on an array of entities.
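
For instance, here is a sketch of a multiple-load-aware implementation, using a hypothetical {example_likes} table:

<?php
/**
 * Implements hook_entity_load().
 */
function example_entity_load($entities, $type) {
  if ($type != 'node') {
    return;
  }
  // One IN() query covers every node in the batch, rather than one
  // query per node.
  $result = db_query('SELECT nid, likes FROM {example_likes} WHERE nid IN (:nids)',
    array(':nids' => array_keys($entities)));
  foreach ($result as $row) {
    $entities[$row->nid]->example_likes = $row->likes;
  }
}
?>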

One often overlooked hook is hook_entity_prepare_view(). Custom themes often need to add fields from user accounts/profiles when rendering nodes or comments—this could be the user’s full name, avatar, registration date, etc. A common pattern is to do this in a preprocess function. Let’s take nodes as an example:

<?php
function mytheme_preprocess_node(&$variables) {
  $node = $variables['node'];
  $variables['account'] = user_load($node->uid);
  // Set up custom variables based on the account here.
}
?>

When rendering several different nodes or comments by different authors, this pattern can result in a lot of round trips to the database as each account is fetched individually. The following example provides the same functionality while resolving the performance issue:

<?php
<?php
/**
 * Implements hook_entity_prepare_view().
 */
function example_entity_prepare_view($entities, $entity_type, $langcode) {
  if ($entity_type != 'node') {
    return;
  }
  $uids = array();
  foreach ($entities as $entity) {
    $uids[] = $entity->uid;
  }
  $accounts = user_load_multiple($uids);
  foreach ($entities as $entity) {
    $entity->account = $accounts[$entity->uid];
  }
}
?>

Then $entity->account is available in preprocess:

<?php
template_preprocess_node(&$variables) {
  $account = $variables['node']->account;
}
?>

Caching

Caching is often the quickest way to solve a performance issue. By adding caching in a particular code path, you can ensure that it will only be executed on cache misses.

Before adding caching, though, there are a few things to consider:

  • Is it possible to optimize the code so that it doesn’t need to be cached?
  • Is there already caching of the code at a higher level, for example page caching, that might affect the hit rate?
  • Will the cached code path be considerably quicker than the current code path?
  • Does the cache need to be cleared on particular events? Is it OK for it to be stale sometimes?
  • Is the code run multiple times with the same output during a single request?

Static Caching

When code is run multiple times per request, a common optimization is to add a static cache around it. For example, you might rewrite the following code:

<?php
function my_function() {
  return something_expensive();
}
?>

as

<?php
function my_function() {
  static $foo;
  if (!isset($foo)) {
    $foo = something_expensive();
  }
  return $foo;
}
?>

Because $foo is declared as static, it will be held in memory for the duration of the request regardless of how many times the function gets called. Once the function has run once, subsequent calls will run only the isset() check and then immediately return.

While it only takes a couple of lines of code to add a static cache, doing so has implications that aren’t always immediately obvious.

Let’s look at the code inside something_expensive():

<?php
function something_expensive() {
  return friends_count($GLOBALS['user']);
}
?>

Whoops. If $GLOBALS['user'] changes during the request, then something_expensive() will return different output. This often happens during automated tests using Drupal’s SimpleTest framework, or in a drush process that might be sending emails to multiple different users.

It’s not impossible to fix this, of course. For example, we can key the cache based on the global user’s ID:

<?php
function my_function() {
  static $foo;
  global $user;
  if (!isset($foo[$user->uid])) {
    $foo[$user->uid] = something_expensive();
  }
  return $foo[$user->uid];
}
?>

Now, regardless of how many times the global user object is swapped out during the request, our function will return correctly, whilst still statically caching the results.

But the problems don’t end there. What if the number of friends the user has changes during the request as well? This might well happen during a functional test or a long-running drush job. Additionally, this is where memory usage starts to be a problem: a drush job processing one million users could eventually end up with a million items in this static cache.

Drupal core has a solution for this in the form of the drupal_static() function. This operates similarly to static caching, except that the static cache can be accessed from different functions, both for retrieval and for reset.

Now our function looks like this:

<?php
function my_function() {
  // Only this line changes.
  $foo = &drupal_static(__FUNCTION__);
  global $user;
  if (!isset($foo[$user->uid])) {
    $foo[$user->uid] = something_expensive();
  }
  return $foo[$user->uid];
}
?>

Code in unit tests that updates the user’s friends count or needs to reclaim some PHP memory can then call drupal_static_reset('my_function') to empty the static cache.

Since drupal_static() is a function call, it has a lot more overhead than declaring static and including an isset() check. This can lead to a situation where static caching is added to micro-optimize a function, then converted to drupal_static() for testing purposes, which leads to the function being slower than when it had no caching at all. If you absolutely need to use drupal_static() and your function is going to be called dozens or hundreds of times during a request, there’s the drupal_static_fast pattern:

<?php
function my_function() {
  static $drupal_static_fast;
  if (!isset($drupal_static_fast)) {
    $drupal_static_fast['foo'] = &drupal_static(__FUNCTION__);
  }
  $foo = &$drupal_static_fast['foo'];
  global $user;
  if (!isset($foo[$user->uid])) {
    $foo[$user->uid] = something_expensive();
  }
  return $foo[$user->uid];
}
?>

This adds testability and performance at the expense of quite a bit of complexity.

There are two issues with my_function() now. One is a development process issue, and the other is architectural.

In terms of process, if we look back at the original function, we can see it’s only a wrapper around something_expensive(). While a real example probably wouldn’t be a one-line wrapper, if the only thing that needs caching is something_expensive(), this isn’t the right place to add that caching. What we should have done was add the caching directly to something_expensive(), which also knows about any global state it depends on and any other factors that might influence the result (and, if you’re lucky, is in a contributed module rather than your custom code).

When you add caching to a wrapper rather than to the function itself, the following bad things happen:

  • Any other code that calls the function (here, something_expensive()) does not get the benefit of the static caching.
  • If the function or another function that calls it adds static caching at a later point, the same data will be added to the cache twice, leading to both higher memory usage and potentially hard-to-find cache invalidation bugs.

From an architectural/readability perspective, we can see the gradual change from a very simple function to one that is balancing various variables in global state. A major change in Drupal 8 has been the migration from procedural APIs to object-oriented code based on dependency injection. Most classes are loaded via a factory method or plug-in manager, or accessed from the dependency injection container. When this is the case, simply using class properties is sufficient for managing state between methods, and no static caching is necessary at all.
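
As a sketch of that pattern (the class and method names here are hypothetical):

<?php
class FriendCounter {

  // One instance lives in the container per request, so a class property
  // replaces both static caching and drupal_static().
  protected $counts = array();

  public function count($uid) {
    if (!isset($this->counts[$uid])) {
      $this->counts[$uid] = $this->doExpensiveCount($uid);
    }
    return $this->counts[$uid];
  }

  protected function doExpensiveCount($uid) {
    // Expensive lookup goes here; tests can mock this method, and a
    // fresh instance always starts with an empty cache.
  }

}
?>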

Persistent Caching

Drupal core ships with a rich caching API, defaulting to database caching but with contributed support for files, Memcache, Redis, MongoDB, APC, and other backends.

While static caching allows code to skip execution when called within a single PHP request, persistent caching is shared between PHP processes and can retain data for anything from a few seconds to several weeks.

The Cache interface is well documented on the Drupal API site, and there are numerous examples of basic usage in core modules. Rather than duplicating that information here, we’ll discuss some of the lesser known features and ones new to Drupal 8.

Cache chains

A new feature in Drupal 8 is the cache chain backend, a means of stringing together different cache storage backends in a way that is transparent to the calling code. This feature is primarily designed for combining two persistent storage backends together—for example, APC and database caching—in order to get the best of both. With an APC and database chain, the cache will check APC first and return immediately if an item is found. If not, it will check the database and then write back to APC if the item is found there; and on cache misses, it will write to both. It’s also possible to use the memory backend shipped with Drupal core and any other persistent backend to emulate the static + persistent caching pattern shown earlier, without the code complexity.

Cache bins

Drupal core defines several different cache bins, including “bootstrap” for information required on every request, the default cache bin, and use-case-specific bins such as “page”, which is only used for cached HTML pages. The cache API not only allows for storage to be swapped out, but also allows it to be changed for each cache bin, via the $conf['cache_backends'] and per-bin $conf['cache_class_BIN'] variables in Drupal 7 and the dependency injection container in Drupal 8.

The bootstrap cache bin is designed for items needed for every request; it’s used primarily by low-level Drupal APIs such as the theme registry or hook system. The cache items in this bin tend to be invalidated infrequently—often when a module is enabled or disabled—and since they’re requested all the time will have an extremely high hit rate.

On the other hand, the “block” cache bin is used to cache the output of Drupal’s blocks system. Blocks may have different cache items per role, per user, and/or per page, which can result in hundreds of thousands or more potential entries in the bin. The bin is also cleared often on content updates, so it has a high insert/delete/update rate.

In most cases, sites will want to set up a single cache backend such as Memcache or Redis to handle all cache bins, but the option is there to use different backends with different bins if desired.
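
For example, with the contributed Memcache module in Drupal 7, settings.php might look like this (a sketch; the module path is an assumption, and the form cache is kept in the database because losing form cache items breaks in-progress form submissions):

<?php
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache bin in the database.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
?>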

When using the cache API, you’ll likely use the default cache bin, or create a custom bin. A custom bin should only be used if there’s going to be a very large amount of data to cache.

getMultiple()/setMultiple()/deleteMultiple()

As with entity loading, the cache API allows for loading, setting, and deleting multiple cache objects at once. Any situation where you know the cache IDs of multiple objects in advance is a candidate for using these methods, and many different storage backends natively support multiple get, allowing a single round trip to the cache storage and a shorter code execution path.
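
In Drupal 7 these are exposed as procedural wrappers; for example, cache_get_multiple() takes the array of cache IDs by reference and removes each ID it finds, leaving only the misses behind. A sketch (example_rebuild_item() is a hypothetical builder):

<?php
$cids = array('my_module:1', 'my_module:2', 'my_module:3');
$items = cache_get_multiple($cids, 'cache');
foreach ($items as $cid => $item) {
  // $item->data holds the cached value for each hit.
}
// $cids now contains only the misses; rebuild and write those back.
foreach ($cids as $cid) {
  cache_set($cid, example_rebuild_item($cid), 'cache');
}
?>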

Cache tags

A new feature of the core cache API in Drupal 8 is cache tags.

There is often confusion between cache tags as a concept and cache IDs, so let’s explain cache IDs first.

When creating a cache ID in Drupal, the following conventions are important:

  • Use the module name or another unique prefix at the start of the cache ID to avoid naming conflicts with others.
  • Where a cache ID depends on context, include enough information about this context in the ID to ensure uniqueness. That is, for a cache item that varies per user, you might use:

    <?php $cid = 'my_module:' . $uid; ?>

    If it varies by language as well, then use:

    <?php $cid = 'my_module:' . $uid . ':' . $langcode; ?>

In this case, the semantics of what makes up the cache ID aren’t important; all that matters is that one user isn’t presented content that was cached for another user, or translated in a different language from the one they’re viewing the site in.

Note

One exception to this is key-based invalidation—using the updated timestamp of an entity as part of the cache key means that when the entity is updated, so is the cache key, resulting in a cache miss and new cache entry without having to explicitly clear the old key.
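
A sketch of that approach (example_render_teaser() is a hypothetical builder):

<?php
// Embedding the node's changed timestamp in the cache ID means saving
// the node produces a new ID; stale entries are simply never requested
// again and can be garbage collected later.
$cid = 'my_module:teaser:' . $node->nid . ':' . $node->changed;
if ($cache = cache_get($cid)) {
  $output = $cache->data;
}
else {
  $output = example_render_teaser($node);
  cache_set($cid, $output);
}
?>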

Cache tags, rather than guaranteeing the uniqueness of cache items, are intended for cache invalidation.

A good example of this is entity rendering. Entities may be rendered on their own with multiple view modes, as part of a listing of multiple entities via Views, as part of a block, or embedded within the rendering of another entity via entity references.

A rendered node may include information from referenced entities, such as the name of the user that authored the node and that user’s avatar. A Views listing might include multiple nodes like this.

To maintain coherency when entities are updated, there are two common approaches:

Set long TTLs and clear all caches of rendered content
The cache will be completely emptied whenever a single item of content is updated, even though the majority of the cache will be unaffected. On sites with frequent content updates, this approach can lead to low hit rates and the potential for cache stampedes. However, the cache will always be accurate.
Set short TTLs so that content is only stale for a few seconds or minutes
This results in lower hit rates regardless of the frequency of content updates. However, not explicitly clearing the cache all at once when an item is updated means there’s less likelihood of cache stampedes.

Cache tags allow for a “best of both worlds” scenario, where all cache items that include an entity are tagged with that entity’s ID, and saving the entity invalidates those cache items but no others. This allows for both cache coherency (assuming consistent tagging in the first place) and longer TTLs.
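
As a sketch against the Drupal 8 cache API (still evolving at the time of writing; the "type:id" tag format shown is the convention used for entities):

<?php
use Drupal\Core\Cache\Cache;

// Tag the cache item with the entities whose changes should invalidate it.
\Drupal::cache()->set('my_module:teaser_list', $build, Cache::PERMANENT,
  array('node:5', 'user:3'));

// Saving node 5 invalidates every item tagged with it, in any bin;
// the same invalidation can also be triggered manually.
Cache::invalidateTags(array('node:5'));
?>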

CacheArray

CacheArray was originally added to Drupal 8 but has been backported to Drupal 7, along with several patches integrating it with core subsystems. As a highly dynamic system, and with so much functionality provided by modules, Drupal has evolved to carry a lot of metadata about what functionality is provided from where. This includes the theme registry (a large array of all theme hooks, templates, and preprocessors), the schema cache (metadata about every database table defined by a module, often 200 or so tables in total), and several other registries. On a default install of Drupal core, these usually reach a few hundred kilobytes at most; however, many Drupal sites end up with as many as a hundred or even several hundred contributed modules enabled, each of which may be defining new database tables, theme templates, and the like.

Prior to Drupal 7.7, each subsystem would store these arrays in one large cache item. This meant that for the theme registry, every theme function or template registered on a particular site would be loaded on every page—including theme functions for specific administrative tables that might not be used, or for functionality that might not be exposed on the site itself due to configuration. For the schema cache, while the schema metadata is only used for tables passed to drupal_write_record() or drupal_schema_fields_sql()—often as few as 10–15 tables on most sites—metadata about every database table on the site would nevertheless be loaded from the cache for every request.

CacheArray provides a mechanism to drastically reduce the size of these cache entries by emulating a PHP array using ArrayAccess. When an array key is requested that hasn’t already been cached, it’s treated as a cache miss and looked up, and then the array is populated with the returned value. At the end of the request, any newly found array keys and values get written back to the cache entry so that they’ll be a cache hit for the next request. This allows the cache item to be built on demand, populated only with data that is actually in use on the site and often excluding infrequently accessed items such as those used for administrative pages that may not be visited during normal site operation. Relatively few contributed modules need to maintain as much metadata as some of these core subsystems, but CacheArray provides a solution to this problem when you run into it.
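
A sketch of a DrupalCacheArray subclass in Drupal 7 (the metadata builder is hypothetical):

<?php
class ExampleMetadataCache extends DrupalCacheArray {

  protected function resolveCacheMiss($offset) {
    // Build just this one entry on demand.
    $value = example_build_metadata($offset);
    $this->storage[$offset] = $value;
    // Mark the key so it's written back to the cache item at the end of
    // the request, making it a hit next time.
    $this->persist($offset);
    return $value;
  }

}

// Usage: only the keys actually accessed are ever built or cached.
$metadata = new ExampleMetadataCache('example_metadata', 'cache');
$item = $metadata['some_key'];
?>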

Note

CacheArray is in the process of being replaced by CacheCollector in Drupal 8. CacheCollector has the same internal logic but uses public methods for get and set instead of ArrayAccess.

Render caching

Drupal’s render API takes a structured array of data and converts it to HTML, running it through the theme system and collecting associated assets such as CSS and JavaScript. One of the more powerful but underused features of the render system is its integrated cache handling.

When writing code that generates HTML, there are two main phases that the content goes through:

  • Building the array of data (e.g., a list of nodes based on the results of a query)
  • Rendering the array to HTML, which mainly involves running it through the theme system

Render caching allows the majority of time spent in these operations to be skipped. We’ll take the example of a custom block that shows the five most recently published article titles, taking it from no caching at all to using the render cache as much as possible:

/**
 * Implements hook_block_info().
 */
function example_block_info() {
  $blocks['example_render_cache'] = array(
    'info' => t('Render caching example.'),
    'cache' => DRUPAL_CACHE_CUSTOM,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function example_block_view($delta = '') {
  switch ($delta) {
    case 'example_render_cache':
      $query = new EntityFieldQuery();
      $query->entityCondition('entity_type', 'node')
        ->entityCondition('bundle', 'article')
        ->propertyCondition('status', 1)
        ->range(0, 5)
        ->propertyOrderBy('created', 'DESC');
      $result = $query->execute();
      $nids = array_keys($result['node']);
      $nodes = node_load_multiple($nids);
      $titles = array();
      foreach ($nodes as $node) {
        $titles[] = l($node->title, 'node/' . $node->nid);
      }
      $block['subject'] = t('Render caching example');
      $block['content'] = array(
        '#theme' => 'item_list',
        '#items' => $titles,
      );
      break;
  }
  return $block;
}

When the block is rendered with each request, first the hook_block_view() implementation is called. Then the resulting render array is run through drupal_render() (the second phase).

Just adding #cache to the render array would skip theming, but the entity query and loading would continue to happen with every request without some reorganization. Render caching allows us to skip that work as well, by moving that code to a #pre_render callback. This is the most complicated aspect of using render caching, so rather than adding the cache first, we’ll start by moving that code around.

hook_block_view() now looks like this:

/**
 * Implements hook_block_view().
 */
function example_block_view($delta = '') {
  switch ($delta) {
    case 'example_render_cache':
      $block['subject'] = t('Render caching example');
      $block['content'] = array(
        '#theme' => 'item_list',
        '#pre_render' => array('_example_render_cache_block_pre_render'),
      );
      break;
  }
  return $block;
}

/**
 * Pre-render callback for example_render_cache block.
 */
function _example_render_cache_block_pre_render($element) {
  $query = new EntityFieldQuery();
  $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'article')
    ->propertyCondition('status', 1)
    ->range(0, 5)
    ->propertyOrderBy('created', 'DESC');
  $result = $query->execute();
  $nids = array_keys($result['node']);
  $nodes = node_load_multiple($nids);
  $items = array();
  foreach ($nodes as $node) {
    $items[] = l($node->title, 'node/' . $node->nid);
  }
  $element['#items'] = $items;

  return $element;
}

hook_block_view() now returns only the minimum metadata needed; the bulk of the work is transferred to the render callback, which will be called by drupal_render() itself when the element is rendered.

Once this is done, adding caching requires only a small change to hook_block_view():

/**
 * Implements hook_block_view().
 */
function example_block_view($delta = '') {
  switch ($delta) {
    case 'example_render_cache':
      $block['subject'] = t('Render caching example');
      $block['content'] = array(
        '#theme' => 'item_list',
        '#pre_render' => array('_example_render_cache_block_pre_render'),
        '#cache' => array(
          'keys' => array('example_render_cache'),
        ),
      );
      break;
  }
  return $block;
}

Adding #cache means that drupal_render() will check for a cache item before doing any other processing of the render array, including the #pre_render callback. Profiling a page with this block before and after should show that the EntityFieldQuery and node loading has been removed on cache hits. See Chapter 6 for more information about how to check this.

Queues and Workers

Drupal core ships with a robust queue API, defaulting to MySQL but with contributed projects providing support for Redis, Beanstalkd, and others.

The queue API is most useful when you have expensive operations triggered by actions on the site. For example, saving a node or comment may require updating the search index, sending email notifications to multiple recipients, and clearing various caches. Performing all of these actions directly in hook_node_update() will mean the request that actually saves the node takes considerably longer, and introduces single points of failure in the critical path of updating content. Depending on the implementation, failures in search indexing or sending emails may show up as errors to the end user or interrupt the content saving process altogether.

Instead of doing all this work inline, in your hook_node_update() implementation, you can create a queue item for that node; then, in the worker callback, you can perform whichever tasks on it are necessary.
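
A sketch of this split (the queue name and worker tasks are illustrative):

<?php
/**
 * Implements hook_node_update().
 */
function example_node_update($node) {
  // Queue the expensive work instead of doing it during the save.
  $queue = DrupalQueue::get('example_node_tasks');
  $queue->createItem(array('nid' => $node->nid));
}

/**
 * Implements hook_cron_queue_info().
 */
function example_cron_queue_info() {
  $queues['example_node_tasks'] = array(
    'worker callback' => 'example_node_tasks_worker',
    'time' => 60,
  );
  return $queues;
}

/**
 * Worker callback: runs via cron, drush, or a queue daemon.
 */
function example_node_tasks_worker($data) {
  if ($node = node_load($data['nid'])) {
    // Reindex, send notifications, warm caches, etc.
  }
}
?>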

This has the following advantages:

  • Expensive processing is taken out of the critical path of saving nodes into a background process. This allows the Apache process to be freed up quicker and pages to be served more quickly to users.
  • The background process may be run by drush or a queue daemon. Any operations that require high memory limits won’t bloat Apache, and they don’t necessarily need to run on the main web server at all. If queues are processed by Jenkins, it’s also possible to isolate reporting of failures for particular queues.
  • Multiple queue workers may run at the same time, allowing infrastructure usage to be maximized when there are lots of items in various queues. In contrast, Drupal’s hook_cron() only allows one cron invocation to run at a time.
  • Queue items are processed individually and can be returned to the queue if not successful. For example, if a queue item needs to call an external service but the API call fails with a 503 response, it can be returned to the queue to be retried later.

In sum, pages can be served to end users faster, you have more flexibility when scaling up your infrastructure, and your application will be more robust against failures or performance issues in external providers.

Cache Stampedes and Race Conditions

As sites reach large numbers of simultaneous processes, the potential for stampedes and race conditions increases.

A stampede can happen when a cache item is empty or invalid and multiple processes attempt to populate it at the same time. Here’s an example with Drupal 7’s variable cache:

  • Process A requests the variable cache, but there is no valid entry, so it starts loading the variables from the database and unserializing them.
  • Process B then comes in; there is no cache entry yet, so it also queries the variables from the database.
  • Process C comes in and does the same thing.
  • Process A finishes building the variables and writes the cache item.
  • Process D requests the variables and gets the newly written cache item.
  • Processes B and C finish and overwrite the cache item with their own identical versions.

In this case, only one cache item was needed, but it was created three times.

If this is an expensive task, it can put the server under high load as multiple different processes all do duplicate work.

There are two approaches to handling this scenario:

  • When it’s OK for a few requests to be served an invalid cache item, it’s possible to use the $allow_invalid parameter to $cache->get() so that invalidated but still present cache items are returned by the API. The first request to get an invalid cache item can acquire a lock using Drupal core’s lock API, then proceed to build the new cache item and return it to the caller. Subsequent requests will fail to acquire the lock and can immediately return the stale cache item; this will happen until the new cache item is available.
  • When the cache item must be up to date at all times, or if a cache item is completely empty, it’s not possible to serve stale content. The first process to get a cache miss will still acquire a lock and proceed to build the fresh cache item. If the item is very expensive to build, then subsequent requests can be put into a holding pattern using $lock->wait(). This will return as soon as the lock is released, after which the cache should be available.
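
A sketch of the second pattern using Drupal 7’s lock API (example_build_expensive_item() is a hypothetical builder):

<?php
function example_get_expensive_item() {
  $cid = 'example:expensive_item';
  if ($cache = cache_get($cid)) {
    return $cache->data;
  }
  // Cache miss: only the process that wins the lock rebuilds the item.
  if (lock_acquire('example_expensive_item')) {
    $data = example_build_expensive_item();
    cache_set($cid, $data);
    lock_release('example_expensive_item');
    return $data;
  }
  // Another process is rebuilding: wait for the lock to be released,
  // then try the cache again.
  lock_wait('example_expensive_item');
  if ($cache = cache_get($cid)) {
    return $cache->data;
  }
  // The rebuild failed or timed out; fall back to building inline.
  return example_build_expensive_item();
}
?>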

Using the locking system can have its own problems—when a cache stampede turns into a lock stampede—and it would be remiss not to discuss these:

  • By default, acquiring a lock requires a database write, and polling for locks queries the database quite frequently. Locking to save one inexpensive database query can be counterproductive in terms of performance since it may have as much overhead as rebuilding the cache item. The lock API has pluggable storage, so this can be improved by installing one of the alternative backends.
  • Items that are cached per page are less likely to be requested simultaneously than items cached once for the whole site. Where there is very little chance of a cache stampede, the extra overhead of acquiring the lock is not worth it for these items.
  • Items that are invalidated very frequently—say, every 10 seconds—will result in a constant acquiring and freeing of locks. Since processes that don’t acquire locks usually poll to see if the item is created, they may miss the window of the valid cache item and continue to poll.

If you are running into issues with locking, consider whether the lock may be making things worse rather than better. Alternatively, it may be necessary to rethink the functionality altogether; for example, refactoring a per-page element to work across all pages on the site.
