Abraham Harold Maslow uttered the memorable words, “If you only have a hammer, you tend to see every problem as a nail.” These words should come to mind when contemplating whether to use a server-side cache for database acceleration or to place all of your data on flash. OCZ’s ZD-XL SQL Accelerator has the unique capability of letting you flexibly allocate its flash resources between an all-flash volume and flash-based caching acceleration of underlying HDD volumes. This article clarifies when to use the server-side flash volume of ZD-XL SQL Accelerator and when to use its flash caching capabilities.
Historically, if you wanted to run a NoSQL database (like Cassandra or MongoDB), a clustered database, or an OLTP system, using public clouds wasn’t an option. The reason was that a cloud built on nodes using spinning disks wasn’t good at handling low-latency, I/O-intensive applications.
Most enterprises worldwide are already using Microsoft Business Intelligence (BI) software like SharePoint, Excel, Power View, PowerPivot, Analysis Services, and Master Data Services. In view of this, it is no surprise that Microsoft SQL Server—an enterprise-class relational database management system—is steadily gaining market share in enterprise data centers.
Fellow DBAs: Brace yourselves!
A business intelligence storm is brewing. We can all see it on the horizon. You can lock yourself in the war room for a little longer. You can hide for a little while behind the safety of technically cryptic terms such as “in-memory columnar indexes” and “flash-based memory buffer extensions.” But beware; the hordes of end users are already amassing at your doors as more and more get wind of the impending change. They have heard that the Gordian Knot has been cut. Analysis and long wait times are no longer bound together. The writing is on the wall. The once-a-day batch-reporting empire is crumbling. They will soon break down your protective layers of code. And when the general availability of SQL Server 2014 is announced, they will all come rushing in. From the CEO down to the last of the business line managers, they will throw their concurrent random queries at your precious systems. They will want answers and they will want them immediately. They will demand that you provide their dose of real-time data.
So many software solutions, how’s an IT person to choose? Virtualization has changed the game when it comes to how enterprises deploy and manage their IT infrastructures, and when I/O and random access demands increase, traditional storage methods are challenged, creating a data bottleneck that can bring an enterprise to its knees. The OCZ VXL Software Solution maximizes the performance of virtualized server environments by combining advanced application-optimized caching with dynamic allocation of on-host flash. Since its release in February 2012, VXL has been successfully deployed by various clients looking to enable intelligent and efficient on-demand distribution of flash between VMs based on need. Multiple VMs running concurrently? No problem there! Need to install additional agents? Not a chance!
If, as IDC predicts, big data revenues will reach $23.8 billion by 2016, then one must consider the role of flash-based solid state drives for big data applications. As mentioned in “Johns Hopkins Uses SSDs to Reach for the Stars,” Johns Hopkins addresses the storage requirements of the Sloan Digital Sky Survey project using 400 OCZ Deneva 2 SSDs. According to Alexander Szalay, a professor of physics and astronomy at Johns Hopkins University, the performance gains from using SSDs have been nothing short of “stunning.” In addition to enhancing performance, the use of OCZ SSDs has led to tremendous savings in power and cooling. How does this help advance the science of astronomy? If you are an astronomer, you no longer have to pull down a huge database over a slow internet connection. All you have to do is log in remotely to the SDSS project database and run your own analysis. This enormous database of the stars is used by roughly 500,000 people a day!
When ‘every day is the same old thing,’ does caching become a very simple exercise?
The endlessly recurring cycles of weeks, months and years have ruled our lives since time immemorial. Just like the Mesopotamian farmer looking at the sun’s celestial position to know when to sow and when to reap, we look at our Outlook calendars to know when to pay taxes and when to take our vacations. Just like the Roman centurion waiting for the first of each Julian month to get his ration of salt, we look to our monthly bank statement to check that Caesar has given unto us what we are due. And though the four solar seasons have been replaced with four fiscal quarters, our enterprises and their data centers still endlessly revolve around the repetitive calendric rituals called business processes. With nightly runs, weekly batch jobs, monthly reports, quarterly finances, and yearly budget tracking, the comings and goings of our IT managers, and the data centers they manage, are still ruled by celestial tracks in more ways than we realize.
Why a 3,000-year-old idea is very relevant for caching big data today
Across from Luxor, on the other side of the Nile, lies the Theban Necropolis. Within this necropolis, nestled among the ritual tombs of kings and queens, is a unique tomb, not of an ancient Egyptian royal, but of a scribe. Named Menna, he held the literal title ‘Scribe of the Fields,’ and was charged with feeding the hundreds of thousands of people it took to build and maintain an economic, cultural and military empire. His efficient approach continued the traditions that made Egypt one of the strongest empires in the world for thousands of years.
Lessons learned from tight hardware/software integration
The motive behind my post today is to tout the importance of hardware and software working tightly together to satisfy a common end-user requirement. For too long, hardware and software engineers have tended to be arrogant about their purpose, with each side too quickly assuming that everything is solvable solely through its own approach, either in hardware or in software. Amid the development frenzy that we are all experiencing, it is time to tear down these walls that hamper our technological progress and discover the huge benefits of hardware and software tightly integrated and working in unison.