Use rsync over ssh to mirror exactly what you want, to any number of servers
For O'Reilly's web publishing system, we built a web "cluster" that spreads the work of one machine across several, increasing both performance and reliability. But how do you keep the data on each server's filesystem in sync with the others?
One method is to use NFS to provide a common filesystem for your data. While making it simple to update content across all of the servers simultaneously, NFS has some significant drawbacks. File locking on NFS is notoriously tricky. Performance tuning on NFS is generally considered something of a black art. (When it works, it's very good, and when it doesn't . . . that's why we're on call, right?) But probably the biggest drawback to a monolithic NFS server is that it introduces a single point of failure for your entire enterprise. If the NFS server becomes unavailable (or overloaded) for any reason, all of the machines dependent on it will suffer.
One alternative approach is to asynchronously update servers using a tool such as rsync. With the cost of disk space at an all-time low, having a locally cached copy of your data makes sense: not only will the single point of failure be eliminated, but the files will be served much faster from the local disk than they could be from network storage.
It is very straightforward to add an rsync job to cron on each of the web servers, containing something ...
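As one possible sketch, a crontab entry along these lines would do the job. The hostname, paths, and key file here are hypothetical placeholders; adjust them for your own environment:

```shell
# Hypothetical crontab entry: every 15 minutes, mirror the document root
# from a master server over ssh.
#   -a        archive mode (recurse; preserve permissions, times, symlinks)
#   -z        compress data during transfer
#   --delete  remove local files that no longer exist on the master
#   -e        use ssh as the transport, with a dedicated key for this job
*/15 * * * * rsync -az --delete -e "ssh -i /home/www/.ssh/sync_key" \
    master.example.com:/var/www/htdocs/ /var/www/htdocs/ >/dev/null 2>&1
```

Note the trailing slashes on both paths: with rsync, a trailing slash on the source means "copy the contents of this directory" rather than the directory itself, which is usually what you want when mirroring a document root.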