In the previous chapters, our basic workflow was to import all the data into memory as Python objects every time we ran the code. That's perfectly fine and efficient when working with small amounts of data.
First, you may have noticed that the performance of our code degraded, especially once we started importing country boundaries along with all their attributes. This happened because importing attributes is slow.
Second, although our filtering mechanisms worked well, they may run into problems when dealing with huge datasets.
The formula to solve these problems is simple and consists of just two basic ingredients:
The first point is about getting only ...