
For most developers working on a web application useful enough that people actually want to use it, a time comes when they need to start thinking more about how to make it faster. There are several good books (and videos) on how to go about this for web applications in general. There’s also pretty good advice for optimizing Python applications (like this, which covers some of the profiling tools I mention below). For getting performance data from a live site, you probably want to use something like New Relic, which can give you a lot of useful information with relatively little performance impact. But sometimes, you’ve already figured out that the code which serves a particular URL is slow, and you just want to optimize that (preferably in a local test server without needing to hook up a lot of external services to it, so you can quickly measure before and after performance data). Logging statements and debuggers can be useful, but another convenient option for Django applications is a profiling middleware.

Because middleware can change the way any request is processed based on conditions you define, this can be much more convenient than needing to manually tweak each particular Django view you want to profile. This is hardly a new idea, and people have written several of these. The problem is, there doesn’t seem to be one that really incorporates the best features of all of them (which admittedly depends a bit on exactly how you want to use it). So let’s fix that. Using this one as a starting point and stripping out a few things that we don’t plan to use in order to keep it simple, we start out with this:

import cProfile
from cStringIO import StringIO
import marshal
import pstats

from django.conf import settings
from django.core.exceptions import MiddlewareNotUsed

class ProfileMiddleware(object):
    def __init__(self):
        # Remove this middleware from the stack entirely when not in DEBUG mode
        if not settings.DEBUG:
            raise MiddlewareNotUsed()
        self.profiler = None

    def process_view(self, request, callback, callback_args, callback_kwargs):
        # Run the view callback under cProfile when a "profile" parameter is present
        if settings.DEBUG and ('profile' in request.GET):
            self.profiler = cProfile.Profile()
            args = (request,) + callback_args
            return self.profiler.runcall(callback, *args, **callback_kwargs)

    def process_response(self, request, response):
        # Replace the normal response content with the profiler's statistics table
        if settings.DEBUG:
            if 'profile' in request.GET:
                self.profiler.create_stats()
                out = StringIO()
                stats = pstats.Stats(self.profiler, stream=out)
                stats.sort_stats('time').print_stats(.2)
                response.content = out.getvalue()
                response['Content-type'] = 'text/plain'
        return response

Once ProfileMiddleware has been added to the MIDDLEWARE_CLASSES setting (a settings sketch follows the sample output below), adding a profile parameter to any GET request replaces the normal response output with something like the following, directly in the browser:

         42786 function calls (41322 primitive calls) in 0.048 seconds

   Ordered by: internal time
   List reduced from 547 to 109 due to restriction <0.2>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      458    0.002    0.000    0.005    0.000 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/utils/functional.py:190(wrapper)
        1    0.002    0.002    0.002    0.002 {method 'execute' of 'psycopg2._psycopg.cursor' objects}
     96/2    0.002    0.000    0.025    0.013 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/template/base.py:241(parse)
      252    0.002    0.000    0.007    0.000 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/template/base.py:531(__init__)
      252    0.002    0.000    0.005    0.000 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/template/base.py:680(__init__)
      598    0.001    0.000    0.002    0.000 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/utils/text.py:355(smart_split)
      626    0.001    0.000    0.003    0.000 /Users/jeremy/Source/SBO/heron/ve/lib/python2.7/site-packages/Django-1.5.5-py2.7.egg/django/template/base.py:201(create_token)
     5389    0.001    0.000    0.001    0.000 {isinstance}
...
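
For reference, wiring the middleware in is just a matter of listing it in MIDDLEWARE_CLASSES; here is a minimal sketch, assuming the class lives in a hypothetical myapp.middleware module (adjust the dotted path to wherever you actually put it):

    # settings.py -- "myapp.middleware" is a placeholder module path
    MIDDLEWARE_CLASSES = (
        'myapp.middleware.ProfileMiddleware',
        'django.middleware.common.CommonMiddleware',
        'django.contrib.sessions.middleware.SessionMiddleware',
        # ... the rest of your middleware classes ...
    )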

Some of the nice features of this, as opposed to some of the alternatives:

  • It uses the relatively fast cProfile profiler instead of the deprecated hotshot one (some of those middleware snippets were written quite a while ago)
  • It removes itself from the pipeline completely when DEBUG is False, so you don’t need to do any additional configuration for local testing vs. production deployment beyond what you already do
  • It doesn’t generate any temp files

That’s all well and good, but it can be kind of tough to make sense of that data at a glance. This middleware uses the nice gprof2dot utility to generate a visual call graph for the profiling data (but doesn’t have a couple of the other advantages pointed out above). Let’s update process_response() to support that, and in the spirit of these middleware classes, add support for a little more control over the display of the function listing:

    def process_response(self, request, response):
        if settings.DEBUG and 'profile' in request.GET:
            self.profiler.create_stats()
            mode = request.GET['profile']
            if not mode:
                # No sort mode given: generate a call graph PDF via gprof2dot and dot
                if not which('dot'):
                    return text_response(response, 'Could not find "dot" from Graphviz; please install Graphviz to enable call graph generation')
                if not which('gprof2dot.py'):
                    return text_response(response, 'Could not find gprof2dot.py; have you updated your dependencies recently?')
                with tempfile.NamedTemporaryFile() as stats:
                    stats.write(marshal.dumps(self.profiler.stats))
                    stats.flush()
                    cmd = ('gprof2dot.py -f pstats {} | dot -Tpdf'.format(stats.name))
                    proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                                            stdout=subprocess.PIPE)
                    output = proc.communicate()[0]
                    retcode = proc.poll()
                    if retcode:
                        raise Exception('gprof2dot/dot exited with {}'.format(retcode))
                response.content = output
                response['Content-type'] = 'application/pdf'
                return response
            elif mode == 'help':
                return text_response(response, ProfileMiddleware.__doc__)
            else:
                out = StringIO()
                stats = pstats.Stats(self.profiler, stream=out)
                restrictions = []
                if 'filter' in request.GET:
                    restrictions.append(request.GET['filter'])
                if 'fraction' in request.GET:
                    restrictions.append(float(request.GET['fraction']))
                elif 'lines' in request.GET:
                    restrictions.append(int(request.GET['lines']))
                elif 'filter' not in request.GET:
                    restrictions.append(.2)
                stats.sort_stats(mode).print_stats(*restrictions)
                return text_response(response, out.getvalue())
        # For ordinary (non-profiling) requests, pass the response through untouched
        return response
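
The restrictions handed to print_stats() here behave just as they do when driving pstats by hand, so if you’d rather dig into a profile outside the request/response cycle you can get the same kind of filtered listing from a saved profile. A rough sketch (the function and file path are placeholders, and this isn’t part of the middleware itself):

    import cProfile
    import pstats

    profiler = cProfile.Profile()
    profiler.runcall(some_slow_function)   # placeholder for whatever you want to measure
    profiler.dump_stats('/tmp/view.prof')  # hypothetical path for the saved profile

    # Later, in a Python shell: sort by cumulative time, keep only entries whose
    # names match "template", and print at most 20 of them.
    stats = pstats.Stats('/tmp/view.prof')
    stats.sort_stats('cumulative').print_stats('template', 20)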

So our middleware now has a few more toys, which are conveniently explained in the middleware class’s new docstring (which can also now be shown in the browser by adding a profile=help parameter to the URL):

    """
    Code profiling middleware which can display either a call graph PDF or
    a table of called functions ordered by the desired statistic.  For the call
    graph, just append "?profile" to the URL.  For the graph generation to
    work, install Graphviz from http://www.graphviz.org/Download.php

    For a statistics table, use the statistic you want to sort by as the
    parameter (such as "?profile=time").  Sorting options include:
     - calls           call count
     - cumulative      cumulative time
     - file            file name
     - module          file name
     - pcalls          primitive call count
     - line            line number
     - name            function name
     - nfl             name/file/line
     - stdname         standard name
     - time            internal time

    Additional parameters can be added when generating a statistics table:
     - fraction        The fraction of total lines to display (the default of .2 is omitted if lines or filter are specified)
     - lines           The number of lines to display
     - filter          Regular expression filter for function display names

    To get these instructions in the app if you forget the usage options, use
    "?profile=help" in the URL.

    Inspiration:
     - https://gist.github.com/kesor/1229681
     - https://bitbucket.org/brodie/geordi
    """

You may have noticed that the new code uses a couple of convenience functions to keep things easy to read; here they are for completeness, along with the extra imports the updated code needs:

import os          # used by which()
import subprocess  # used by the updated process_response()
import tempfile    # used by the updated process_response()

def text_response(response, content):
    """Return a plain text message as the response content."""
    response.content = content
    response['Content-type'] = 'text/plain'
    return response

def which(program):
    """Return the path of the named program in the PATH, or None if no such
    executable program can be found."""

    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    fpath, _fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file

    return None

If you’re using django-debug-toolbar, you probably want to turn it off when profiling so that its own machinery (which doesn’t run on your live site) doesn’t clutter your performance data. You can do that in your Django settings file with something like the following:

    def should_show_toolbar(request):
        """Don't show the toolbar when profiling."""
        if 'profile' in request.REQUEST:
            return False
        return DEBUG  # the DEBUG setting defined earlier in this same settings module

    DEBUG_TOOLBAR_CONFIG = {
        'SHOW_TOOLBAR_CALLBACK': should_show_toolbar,
    }

So now we have a pretty useful and convenient profiling tool available whenever we want it, right in the browser! Yay! Granted, I haven’t added support for all of the features available for filtering and displaying cProfile statistics (and this middleware that uses pycallgraph as an alternative to gprof2dot looks pretty interesting too). Next I need to get this cleaned up and bundled into an easily installed package…or just wait for somebody else to write YetAnotherProfileMiddleware that does a good enough job.
