We can now set up our caching on all our cluster nodes; the question, however, is whether our application still works correctly. To answer this, let us take a very simple case in which two requests are executed in parallel:
Node 1                        | Node 2
put data1 in cache at time t1 | -
-                             | put data1 in cache at time t2
access data1 at time t3       | access data1 at time t3
With this simple timeline, we can immediately see that using a local in-memory cache can lead to inconsistencies, since the nodes will likely not cache the data at the same time. A cache is generally lazy, so it is populated at the first request; if eager, it is populated when the machine starts. In both cases, the two nodes can end up holding different versions of the same data.
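The problem can be sketched with a minimal Python simulation (not from the book; the `Node` class, the `database` dict, and the key names are hypothetical stand-ins for the cluster nodes and the backing store). Each node lazily populates its own in-memory cache on first access, and the value changes between the two nodes' first reads:

```python
# Hypothetical shared backing store, standing in for the database.
database = {"data1": "v1"}

class Node:
    """A cluster node with its own lazy local in-memory cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def get(self, key):
        # Lazy caching: populate from the database on first access,
        # then always serve from local memory.
        if key not in self.cache:
            self.cache[key] = database[key]
        return self.cache[key]

node1 = Node("node1")
node2 = Node("node2")

node1.get("data1")        # t1: node 1 caches "v1"
database["data1"] = "v2"  # the data changes before node 2 first reads it
node2.get("data1")        # t2: node 2 caches "v2"

# t3: both nodes read the same key but see different values.
print(node1.get("data1"))  # stale "v1"
print(node2.get("data1"))  # fresh "v2"
```

At time t3 the two nodes disagree about `data1`, which is exactly the inconsistency the timeline above illustrates.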
If the data is ...