Your measurement of time spent on site and on pages is one you may end up struggling with from time to time. Still, knowing how much time visitors spend browsing information on your site can help you begin to understand some of the most common usability issues all sites face.
It's ironic that the exquisitely exact, to-the-second statistics on time spent on a web site are, in fact, quite soft and anything but exquisite. When you scratch their surface, you usually find them to be a little flawed. Still, it's possible to learn from them, especially if you improve their accuracy, examine page-to-page variations, and establish meaningful comparison points.
To understand the problems that arise when calculating time spent on site, it's best to start with some descriptions of how these statistics are usually calculated and brief descriptions of commonly observed patterns.
Time spent per page is obtained by subtracting the time of one page request from the time of the next page request. The variation from page to page is usually marked, and can be tapped for insight into page design and content.
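The calculation can be sketched in a few lines. This is a minimal illustration with invented page names and timestamps, not any particular vendor's implementation:

```python
from datetime import datetime

# Hypothetical request log for a single visit: (timestamp, page) pairs,
# ordered by request time.
requests = [
    (datetime(2024, 1, 1, 9, 0, 0), "/home"),
    (datetime(2024, 1, 1, 9, 0, 45), "/products"),
    (datetime(2024, 1, 1, 9, 3, 15), "/checkout"),
]

# Time on a page = next request's time minus this request's time.
# The exit page ("/checkout") has no next request, so it gets no entry.
view_times = {
    page: (t_next - t).total_seconds()
    for (t, page), (t_next, _) in zip(requests, requests[1:])
}
# view_times -> {"/home": 45.0, "/products": 150.0}
```

Note that the exit page simply vanishes from the result, a detail that matters later when averages are computed.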
Time spent on the site is the time of the visit's first request subtracted from the time of the final request during the visit. The length of an "average" visit is remarkably consistent over time, often varying by only a few seconds from month to month, with most variation happening after major site events such as redesigns. The greatest variability in visit time is sometimes between workday browsing and after-work visits. On some retail sites, for example, after-work visits tend to be longer, in terms of numbers of pages as well as time spent per average page.
After 30 minutes of inactivity during a visit [Hack #1] , most web measurement programs consider the visit finished, and the last request before the pause marks the visit termination time. Even if the visitor hasn't left the site and starts clicking again, the new activity will be counted as a new visit (and will baffle us by appearing in referrer reports [Hack #58] as having a referrer from our own site because the new visit's referrer is the last page clicked before the pause).
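The sessionization logic described above can be sketched as follows; the timestamps are invented, and the 30-minute threshold mirrors the common default:

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

# Hypothetical request timestamps from one visitor; the 40-minute gap
# splits the stream into two separate visits.
stamps = [
    datetime(2024, 1, 1, 9, 0),
    datetime(2024, 1, 1, 9, 5),
    datetime(2024, 1, 1, 9, 45),  # 40 minutes after the previous request
    datetime(2024, 1, 1, 9, 50),
]

visits = [[stamps[0]]]
for prev, cur in zip(stamps, stamps[1:]):
    if cur - prev > TIMEOUT:
        visits.append([cur])       # inactivity exceeded: start a new visit
    else:
        visits[-1].append(cur)

# Visit length (minutes) = last request time minus first request time.
lengths = [(v[-1] - v[0]).total_seconds() / 60 for v in visits]
# Two visits of 5 minutes each, even though the visitor never left.
```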
The preceding three measurements work fairly well in most instances and appear precise on the surface. Unfortunately, the Internet is far from perfect, and so we commonly see three factors that introduce inaccuracy.
The first source of inaccuracy is simply that the reported viewing time of a page is different from the time the page spends fully displayed on the visitor's screen. This is because an unknown and highly variable component of "read time" is actually the transmission and rendering of the page in the web browser. If your site's visitors tend to use dial-up, for example, the viewing time may represent some viewing time and a lot of waiting time.
To get an idea of page transmission and load time, visit a site such as www.websiteoptimization.com for an estimate of transmission time at various connection speeds, or see the hack in this book on [Hack #69] .

Another source of inaccuracy happens because the view time of the last page of a visit isn't reported at all. This is because, by definition, an exit page has no "next request" to mark the end of its viewing. The last request is, however, included in page count reports. If a web measurement program doesn't account for this missing information, some time statistics can be skewed. As a very simple example, consider a visit with only two page views, reported as a five minute visit measured by the "subtract the first request time from the last request time" method. How long was the "average page view"? Some web analysis programs will report 2.5 minutes (the reported visit length divided by the number of pages requested). But since the last page's view time is completely unknown, the truth is that the first page was viewed for five minutes, and a better estimate of "average page viewing time" would be five minutes.
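The two-page example works out like this; the figures come straight from the scenario above:

```python
# A visit with two page views, five minutes apart; the second page is
# the exit page, so its view time is unknown.
visit_seconds = 5 * 60
pages_requested = 2
pages_with_known_view_time = pages_requested - 1  # exclude the exit page

naive_avg = visit_seconds / pages_requested            # 150 s = 2.5 minutes
better_avg = visit_seconds / pages_with_known_view_time  # 300 s = 5 minutes
```

The naive method quietly treats the exit page as if it were viewed, diluting the average with a duration nobody measured.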
If you want to know which method your web measurement program uses, do the following:
Choose a page that appears high on the exit pages report and jot down four statistics for it: number of times it was an exit page, the number of times it was viewed, its total viewing time in seconds or minutes, and its average viewing time.
Divide the total viewing time by the number of times it was viewed, and also divide it (separately) by the number of times it was viewed minus the number of times it was an exit page.
Compare the two quotients to the reported average viewing time. If the reported average viewing time resembles the first quotient more than the second, your web measurement program is using the less-accurate calculation method.
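The three steps above amount to a quick arithmetic check. Here is a sketch with invented report numbers:

```python
# Hypothetical figures jotted down from an exit-pages report.
exit_count = 400          # times the page was an exit page
view_count = 1000         # times the page was viewed
total_view_seconds = 30000
reported_avg = 30.0       # average viewing time the tool reports

quotient_all = total_view_seconds / view_count                      # 30.0
quotient_non_exit = total_view_seconds / (view_count - exit_count)  # 50.0

# If the reported average sits closer to quotient_all, the tool is
# dividing by every view, including exit views whose durations are
# unknown -- the less accurate method.
uses_less_accurate = (abs(reported_avg - quotient_all)
                      < abs(reported_avg - quotient_non_exit))
```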
The final big source of inaccuracy is perhaps the most obvious: the reported "viewing time" for a page will sometimes be quite different from the time the typical visitor really spends looking at it. It's inevitable that some visitors leave their browsers open for long periods when they aren't actively engaged with the site. Even a few of these can greatly distort basic viewing time statistics. For example, Figure 4-3 shows how only two very long page view events out of 50 page views can pull the calculated average quite far from the number most of us would call the typical visit length—namely, the tall bar to the left.
One way to get a more accurate idea of a "typical" visit in a skewed distribution is to use a statistic called the median, which is calculated differently from the average. The median, despite being the more accurate measurement in this context (see Figure 4-3), is available in only a few web measurement packages; consult your vendor to see whether the median is available to you.
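A quick sketch shows how severely a couple of abandoned browser windows can skew the mean while barely touching the median. The view-time numbers here are invented to resemble the Figure 4-3 scenario:

```python
import statistics

# 50 page views: 48 short ones of about 30 seconds, plus two browsers
# left open for roughly 40 minutes.
view_times = [30] * 48 + [2400, 2500]

mean_time = statistics.mean(view_times)      # dragged far to the right
median_time = statistics.median(view_times)  # the "typical" view
# mean_time -> 126.8 seconds, median_time -> 30 seconds
```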
It would be nice to lop off all really long page times that involve relatively little visitor attention, wouldn't it? It's possible if your measurement program allows you to change the visit expiration time from 30 minutes. Simply decide on the page view time you want to ignore (for example, eight minutes and up) and change the analysis program's visit expiration time correspondingly. In the resulting report, all page views followed by eight or more minutes of inactivity will be treated as the final pages of visits, which means they will not be counted at all in view time calculations. The average viewing time per page will drop dramatically to something more realistic. Expect a decrease of as much as 70 percent for some pages.
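The effect of the shorter expiration time can be sketched directly. The gap values below are invented; each gap between consecutive requests is one page's apparent view time:

```python
# Gaps (seconds) between consecutive requests; a few very long gaps
# inflate the average view time.
gaps = [20, 45, 60, 30, 900, 25, 40, 1200, 35]

avg_30min = sum(gaps) / len(gaps)  # all gaps under 30 minutes count

# With an eight-minute expiration, gaps of 480+ seconds end the visit,
# so those page views drop out of view-time calculations entirely.
kept = [g for g in gaps if g < 8 * 60]
avg_8min = sum(kept) / len(kept)
# avg_30min is about 262 seconds; avg_8min is about 36 seconds.
```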
If you try this, it's best to do it as a separate analysis, because an eight-minute timeout can wreak havoc on your other numbers. Not all vendors support changing the visit expiration time, and keep in mind that if you make this change, all of your visit-based calculations will change as well, which can have a dramatic effect on your overall analysis.
If and when you think you've obtained fairly accurate viewing times for individual pages, you can start learning from them. Here's where the rubber hits the road: answering questions like "What's a 'good' page view time?" If you've been working toward more accurate time estimates, your terrific numbers can suddenly seem mushy and obscure when you start thinking about evaluating them. Short page viewing times may indicate pages where visitors move quickly because the pages are overwhelming, uninteresting, unimportant, or, paradoxically, crystal clear and quickly understood. Long viewing times may correspond to difficulty reading or understanding content.
Develop an estimate of a normal viewing time for a particular page. It's actually not too difficult to obtain an approximate ideal viewing time using a bit of your own expertise and bit of systematic evaluation. Just go over the page with fresh eyes, a stopwatch, and a mindset approximating that of a site visitor. Have other people do the same. After you do this three or four times, you should have a pretty good idea of what people are trying to do on that page and how long it should take.
With this simple approach, you can come up with a reasonable minimum and maximum viewing time range that you feel pretty good about. The minimum should roughly correspond to "got just enough information to proceed," and the maximum should correspond to "got most of the information without being slowed down by usability issues." You'll almost certainly be surprised by these objective numbers because most people over- or underestimate page view times. And it's also likely that you'll see more consistency in your stopwatch times from person to person than you expected.
Of course, the next step is to compare the times in your reports to your quasi-objective ideal ranges for each page. Look for reported times that lurk at the edges of your range or outside it. The greater the number of visits where that page is viewed longer or shorter than your ideal range, the higher the likelihood the page is a problem.
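The comparison is easy to mechanize once you have the ranges. The page names, ranges, and reported times below are all hypothetical:

```python
# Ideal view-time ranges (seconds) from the stopwatch exercise,
# compared against reported average view times from the tool.
ideal = {"/pricing": (20, 90), "/faq": (30, 120)}
reported = {"/pricing": 8, "/faq": 95}

# Flag pages whose reported time falls outside their ideal range.
flagged = {
    page: secs
    for page, secs in reported.items()
    if not (ideal[page][0] <= secs <= ideal[page][1])
}
# flagged -> {"/pricing": 8}: visitors bail out faster than anyone
# could plausibly absorb the page.
```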
The last step is simply to go back to those pages with the lurking times. Take a close look and do some hard thinking about possible reasons for discrepancies. Did we say last step? We didn't mean that.
As an astute web analyst, you will inevitably want to leverage insights about time spent on site as part of the continuous improvement process [Hack #2] . You may start to think about segmenting your data into first-time and returning visitors because you'd expect repeat visitors to get through pages more quickly. You can watch for longer viewing times when deeper in a site, because you think the commitment is greater. You can find the time of day or night when your visitors have longer page view times and wonder whether your marketing can capitalize on diurnal patterns. You're limited only by the amount of time you have!
—Chris Grant and Eric T. Peterson