How do I deal with heavy loads and spikes of log data?
Our typical load tests on a very basic desktop sustain up to 10k log statements per second. This, of course, depends on the nature of the data, the efficiency and size of the database, and many other factors. The logFaces server has a very flexible configuration that allows you to tune it for the best performance. However, if the server can't sustain the load of your application logs, there is not much to be done other than reducing the load or switching to faster hardware.
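To get a feel for what your own setup sustains, you can start with a rough client-side load generator. The sketch below is a minimal example assuming a log4j 1.x client with the logFaces appender already configured in log4j.properties (the appender configuration itself is not shown); the rate it reports is the rate the client-side logging pipeline accepts, which is an upper bound rather than the server's sustained ingestion rate.

    import org.apache.log4j.Logger;

    // Rough load generator: emits a fixed number of log statements and reports
    // the average rate accepted by the logging pipeline. The appender feeding
    // the logFaces server is assumed to be configured externally.
    public class LogLoadTest {
        private static final Logger logger = Logger.getLogger(LogLoadTest.class);

        public static void main(String[] args) {
            int total = 100000;                        // statements to emit
            long start = System.nanoTime();
            for (int i = 0; i < total; i++) {
                logger.info("load-test event #" + i);  // payload size affects throughput
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Emitted %d statements in %.2f s (~%.0f per second)%n",
                    total, seconds, total / seconds);
        }
    }

Run such a test against a test domain first, since it generates a large burst of events at once.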
There are several measures you may want to consider:
- Type of database
- Server JVM heap memory size (see the note after this list)
- Database commit buffer size
- Disk overflow caching
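For the JVM heap size in particular, the limit is set with the standard -Xmx option on the java command line that launches the server. The snippet below is a generic check, not logFaces-specific, which you can run inside any JVM started with the same options to confirm how much heap it was actually granted:

    // Prints the maximum heap the current JVM will use; with -Xmx2048m this
    // reports roughly 2048 MB (slightly less due to internal JVM accounting).
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
        }
    }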
We have observed significant variations in performance between relational and non-relational databases. In our tests, MongoDB significantly outperforms all relational databases, particularly with large volumes; insert and query times sometimes differ by a factor of 2-10.
For more details, please read the paragraph "How do I tune the server for the best performance?" in our user manual.