Traditionally, scaling an enterprise application meant scaling out. With the advent of BigMemory scaling solutions, it is often easier and more cost-effective to scale up before you eventually scale out.
Traditional Java applications use the heap to store their hot set of data. It makes sense to keep data that is accessed over and over again in a place where it can be reached efficiently, with low latency.
However, as the business grows we deal with larger data sets and higher CPU and resource utilization. Java applications also have to contend with what I call a necessary evil: garbage collection. You cannot do without GC, but it periodically pauses your application, and those pauses are the bane of distributed caching environments.
Keeping this in mind, enterprise applications are often designed around a scale-out architecture: keep the heap small, but distribute the data across multiple boxes. Depending on the size of the data set, this may well be the right solution architecture.
However, there is an easier way to scale up without GC pauses, providing a clean, low-latency solution: a Big Memory solution.
As of Java 1.4, there is an API that lets you store and retrieve data in off-heap memory. This memory bypasses traditional GC, provides significantly larger storage, delivers consistent and predictable latencies, and eliminates GC tuning and its ineffective workarounds. The limiting factor is no longer the software or the architecture but the hardware. Big Memory has been tested on the largest box we could find, with 350GB of RAM, and so far no upper limit has been found.
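The Java 1.4 API in question is NIO's direct buffer, which allocates memory outside the heap so the garbage collector never scans its contents. As a minimal sketch (the class name and sizes are illustrative):

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // Allocate 1 MB of native (off-heap) memory. The buffer's
        // contents live outside the Java heap and are not traversed
        // by the garbage collector.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);

        // Write a value directly into off-heap memory...
        buf.putLong(0, 42L);

        // ...and read it back.
        System.out.println(buf.getLong(0));  // 42
        System.out.println(buf.isDirect());  // true
    }
}
```

Note that the total amount of direct memory a JVM may allocate is capped by the -XX:MaxDirectMemorySize flag, so a Big Memory deployment raises that limit rather than the heap size.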
One more advantage of this solution is that it sidesteps the coherency issues that must be dealt with in distributed caching solutions.
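As a sketch of what this looks like in practice, assuming Ehcache 2.x with the BigMemory add-on, a cache can be configured to overflow from a small heap tier into off-heap memory in ehcache.xml. The cache name and sizes here are illustrative, and attribute names have varied across releases (older BigMemory versions used maxMemoryOffHeap rather than maxBytesLocalOffHeap), so check the documentation for your version:

```xml
<ehcache>
  <!-- Keep a small hot set on-heap; overflow the rest off-heap,
       out of the garbage collector's reach. -->
  <cache name="hotData"
         maxEntriesLocalHeap="10000"
         overflowToOffHeap="true"
         maxBytesLocalOffHeap="4g"
         eternal="false"
         timeToLiveSeconds="600"/>
</ehcache>
```

The JVM must be started with a matching -XX:MaxDirectMemorySize (here, at least 4g) so that the off-heap tier has room to allocate.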
A distributed architecture with Terracotta remains the right solution for very large data sets. The purpose of this blog is to encourage architects to consider scaling up before scaling out.
Take a few minutes to review this white paper about Ehcache:
http://terracotta.org/resources/whitepapers/ehcache-user-survey-whitepaper
To get started using Ehcache, visit http://www.ehcache.org