18.4. Average Latency Scenario

For the average latency scenario, assume you are running a large retail business intelligence implementation in which several hundred product-related data changes arrive overnight, every night, in the form of stocking and pricing changes. Actual sales information arrives in your data warehouse periodically, and your users want to see that data under reasonably real-time conditions. For this case, assume updates are available roughly every two hours and your cube typically takes about an hour to process; however, your users are willing to tolerate data that is up to four hours old. Also assume the data partition itself is not large (say, less than 5 GB) for this scenario.

18.4.1.1. Proactive Caching with MOLAP Storage Option

Let's say you have built the cube, and the dimensions are updated nightly using incremental processing. Incremental processing is a good choice whenever you want users to work against current dimensions, because it runs in the background and does not prevent users from querying the data.
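
To make that nightly dimension update concrete, the following is a minimal sketch of an XMLA Process command that performs an incremental dimension update with ProcessUpdate; the database and dimension IDs shown (Retail DW, Dim Product) are placeholder names for this example, not objects defined in the scenario. If only new members arrive overnight, ProcessAdd is the lighter-weight alternative.

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- ProcessUpdate re-reads the dimension table and applies member changes
       in place, so users can keep querying while the update runs -->
  <Type>ProcessUpdate</Type>
  <Object>
    <DatabaseID>Retail DW</DatabaseID>
    <DimensionID>Dim Product</DimensionID>
  </Object>
</Process>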

Using proactive caching with the MOLAP storage option makes sense when you need to update the sales information (or other data) in the measure groups on a periodic basis so that users see near real-time data without any performance degradation. In this case, the data arrives in your data warehouse in the form of a bulk load from your relational transactional database. ...
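
To show what the storage settings for this scenario might look like, the ASSL fragment below sketches a ProactiveCaching element you could place in the measure group partition's definition. The specific intervals are assumptions chosen to match the scenario (users tolerate data up to four hours old), and the Source binding assumes the partition relies on SQL Server notifications inherited from its own binding; adjust these to your environment.

<ProactiveCaching xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- Wait for 10 seconds of quiet after a change notification before rebuilding the MOLAP cache -->
  <SilenceInterval>PT10S</SilenceInterval>
  <!-- Keep answering queries from the outdated MOLAP cache for at most 4 hours after a change is detected -->
  <Latency>PT4H</Latency>
  <!-- If notifications keep arriving, start the rebuild after 10 minutes anyway -->
  <SilenceOverrideInterval>PT10M</SilenceOverrideInterval>
  <!-- -PT1S means no scheduled rebuild; rebuilds are driven by change notifications only -->
  <ForceRebuildInterval>-PT1S</ForceRebuildInterval>
  <Source xsi:type="ProactiveCachingInheritedBinding" />
</ProactiveCaching>

With the roughly two-hour arrival rate and one-hour processing time assumed above, a four-hour latency leaves enough room for a cache rebuild to complete before the old MOLAP cache must be discarded.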
