Chapter 35. Closing Thoughts

This section covered a lot of ground and a number of deeply technical issues. Aggregations bring a power and flexibility to Elasticsearch that is hard to overstate. The ability to nest buckets and metrics, to quickly approximate cardinality and percentiles, and to find statistical anomalies in your data, all while operating on near-real-time data and in parallel with full-text search: these capabilities are game-changers for many organizations.

Once you start using aggregations, you'll find dozens of other candidate uses for them. Real-time reporting and analytics, whether over business-intelligence data or server logs, is central to many organizations.

But with great power comes great responsibility, and for Elasticsearch that often means proper memory stewardship. Memory is frequently the limiting factor in Elasticsearch deployments, particularly those that make heavy use of aggregations. Because aggregation data is loaded into fielddata, an in-memory data structure, managing that memory efficiently is important.
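As a quick sanity check, you can see how much memory fielddata is consuming on each node with the nodes-stats API. A minimal sketch; the fields=* parameter simply asks for a per-field breakdown:

    GET /_nodes/stats/indices/fielddata?fields=*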

Managing this memory can take several forms, depending on your particular use case:

  • At the data level, by making sure you analyze (or set to not_analyzed) your fields appropriately so that they are memory-friendly (see the mapping sketch after this list)

  • During indexing, by configuring heavy fields to use disk-based doc values instead of in-memory fielddata (also covered in the mapping sketch below)

  • At search time, by utilizing approximate aggregations and data filtering (see the search-time sketch below)

  • At the node level, by setting hard memory and dynamic circuit-breaker limits (see the settings sketch below) ...
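To make the first two items concrete, here is a hedged sketch of a mapping that keeps an exact-value string out of the analysis chain and pushes a numeric field's fielddata onto disk-based doc values. The index name (logs), type (log), and field names (status, bytes) are illustrative only; the syntax follows the Elasticsearch 1.x conventions used elsewhere in this book:

    PUT /logs
    {
        "mappings": {
            "log": {
                "properties": {
                    "status": {
                        "type":  "string",
                        "index": "not_analyzed"
                    },
                    "bytes": {
                        "type":       "long",
                        "doc_values": true
                    }
                }
            }
        }
    }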
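At search time, an approximate metric such as cardinality trades a small amount of accuracy for a fixed, modest memory footprint, and a filter shrinks the set of documents the aggregation has to touch. A minimal sketch, again with illustrative field names (timestamp, visitor_id) and a filtered query in the 1.x style:

    GET /logs/_search
    {
        "size": 0,
        "query": {
            "filtered": {
                "filter": {
                    "range": {
                        "timestamp": { "gte": "now-1h" }
                    }
                }
            }
        },
        "aggs": {
            "unique_visitors": {
                "cardinality": {
                    "field":               "visitor_id",
                    "precision_threshold": 100
                }
            }
        }
    }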
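At the node level, the fielddata cache size is a static setting in elasticsearch.yml, while the fielddata circuit breaker can be adjusted dynamically through the cluster-settings API. The percentages below are examples, not recommendations:

    # elasticsearch.yml: cap the fielddata cache at a fraction of the heap
    indices.fielddata.cache.size: 40%

    PUT /_cluster/settings
    {
        "persistent": {
            "indices.breaker.fielddata.limit": "60%"
        }
    }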
