Scaling Storage for Databases — How Did We Get Here?


We’ve seen digitization and the proliferation of data change just about everything in our world. And, as digitization grew, so did the demands on our infrastructure. What were once ‘simple’ computing devices have become complex networks of computers that calculate weather patterns, model genetic research, and map the spread of viral outbreaks. And this is only the beginning. So how did we get here? What did this evolution look like, and where are we today? Here are a few thoughts on that journey and on how organizations can successfully architect storage for scaling databases.

Years in the Making

At first, we started with single-CPU systems like the HP 9000 I70 (yes, there were others before it). These became the K-Class servers. Then the Sun rose and we had the E10K, the V-Class, and the P-Series. While we kept scaling up CPU resources, and to a certain degree memory resources, it simply wasn’t enough. To get bigger and faster, we started scaling out.

Now, clusters of CPUs and memory were linked together with what felt like gauze, thirty-weight oil, and ball bearings, and the results were impressive. The speed of these highly specialized systems delivered new, deeper insights that drove better decision-making in science, health, and business.

Yet the world needed more. As the industry investigated how to get more performance, it saw that Moore’s Law had been diligent with CPUs and memory but had not delivered the same results for storage and retrieval platforms. As a result, this is where we started to scale up and scale out our storage.

It is hard to believe, but at first we were only scaling from megabytes to hundreds of megabytes. In the ’90s, gigabytes were mind-boggling, and then things took off. Today, we have multi-petabyte systems available off the shelf and massive object storage to land-lock seas of data. We have companies building exabyte-scale solutions, and we are already creating the building blocks to fully enable the zettabyte age.

Scaling Databases – Scale-Up or Scale-Out?

Now, scaling is all about the data. To harness insights, you need more data. So, companies are looking at keeping all of their data and getting it as close to the CPUs as possible for better performance.

While everything is scaling, and some products require you to choose whether to scale up or scale out, what do you do with databases, and how do you decide which approach is right for you?

Scaling out provides access to additional storage while also improving overall performance – both latency and bandwidth. Scaling up each of these units, meanwhile, provides petabyte capacities to handle your most challenging growth models.
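To make the distinction concrete, here is a minimal sketch of the two models. The per-unit capacity and bandwidth figures, and the ArrayUnit, scale_up, and scale_out names, are illustrative placeholders rather than IntelliFlash specifications.

```python
# Illustrative comparison of scale-up vs. scale-out for a storage array.
# All figures here are hypothetical placeholders, not product specifications.
from dataclasses import dataclass


@dataclass
class ArrayUnit:
    capacity_tb: float     # usable capacity behind the controllers
    bandwidth_gbps: float  # sustained throughput of the controller pair


def scale_up(unit: ArrayUnit, shelves: int, shelf_tb: float) -> ArrayUnit:
    """Scale-up: add expansion shelves behind the same controllers.
    Capacity grows, but bandwidth stays bounded by the existing controllers."""
    return ArrayUnit(unit.capacity_tb + shelves * shelf_tb, unit.bandwidth_gbps)


def scale_out(unit: ArrayUnit, units: int) -> ArrayUnit:
    """Scale-out: add whole units, each with its own controllers and storage.
    Capacity and aggregate bandwidth both grow with the number of units."""
    return ArrayUnit(unit.capacity_tb * units, unit.bandwidth_gbps * units)


base = ArrayUnit(capacity_tb=300, bandwidth_gbps=25)   # hypothetical base unit
print(scale_up(base, shelves=4, shelf_tb=300))         # more TB, same GB/s ceiling
print(scale_out(base, units=6))                        # more TB and ~6x the GB/s
```

Each added unit brings its own controllers, which is why the trade-offs listed below pair higher performance with higher complexity and power.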

We believe you should not be limited. You need to be able to do both when it suits your business objectives and goals, along with your organization’s best practices for data strategy.

For example, our hybrid, all-flash and NVMe™ IntelliFlash™ arrays provide the option for both architectures; however, there are some advantages to one model over the other that you should consider:

• Redundant controllers in either model

• Scale-out expands by adding controllers along with storage; scale-up adds storage behind the existing controllers

• With scale-out, performance increases as units are added; with scale-up, performance could improve

• Scale-out also brings increases in complexity and power

Again, the decision should be based on your business and data strategy goals.

Oracle® Grid Infrastructure (ASM, Automatic Storage Management) – Example Use Case

Let’s look at a real-world business application example. If you leverage, or plan to leverage, Oracle Grid Infrastructure (ASM, Automatic Storage Management), it gives you flexibility in how you deploy IntelliFlash.

For example, if your goal is to achieve as much performance as possible, the chart above shows that a scale-out architecture would be the most beneficial. Starting at nearly 25GB/s of bandwidth and roughly 200µs of latency, you can scale the N-Series system up to 140GB/s of bandwidth with a half-rack solution.
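The way ASM delivers that flexibility is worth spelling out: when you scale out, you present the new array’s LUNs to the ASM instance and add them to an existing disk group, and ASM restripes data across all members so every controller contributes bandwidth. Below is a hedged sketch using python-oracledb; the host, credentials, device paths, and disk group name are placeholders, and DBAs often run the same ALTER DISKGROUP statement from SQL*Plus or asmcmd instead.

```python
# Hypothetical sketch: extending an Oracle ASM disk group onto LUNs presented
# by a newly added array, so database I/O spreads across every controller.
# Connection details, device paths, and the disk group name are placeholders.
import oracledb

# SYSASM connections typically require thick mode and proper ASM credentials.
oracledb.init_oracle_client()

connection = oracledb.connect(
    user="sys",
    password="example_password",        # placeholder credentials
    dsn="asm-host:1521/+ASM",           # placeholder ASM instance
    mode=oracledb.AUTH_MODE_SYSASM,
)

with connection.cursor() as cursor:
    # Add the new array's LUNs to the existing DATA disk group; ASM
    # rebalances (restripes) existing data across all member disks.
    cursor.execute("""
        ALTER DISKGROUP data
        ADD DISK '/dev/mapper/new_array_lun1',
                 '/dev/mapper/new_array_lun2'
        REBALANCE POWER 4
    """)

connection.close()
```

Because the rebalance runs online, the database keeps working while reads and writes begin flowing through the newly added controllers.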

How IntelliFlash Supports Scale-Out and Scale-Up

With the increase in AI/ML initiatives from Oracle and Microsoft, IntelliFlash provides the flexibility to scale – whether out or up – to meet the demands of these solutions.

From an operating system that simplifies data management to data services (i.e. replication, snapshots, live LUN migration, and cloud integration), organizations can easily scale their application and data needs.

Whether you need to scale out for flat latency or you need huge bandwidth to process data as fast as your edge devices can create it, the N-Series arrays from Western Digital give you the freedom to choose the infrastructure that best fits your needs, when you need it.

Simply put, it’s about putting data at the center of your organization, helping you deliver Speed. Insight. Decisions.

Ready to Tip the (Data) Scale?

Today’s data needs demand extreme performance at low latency. Through SAS, NVMe, and hybrid IntelliFlash arrays, our solutions help enterprises balance performance with cost as they scale up and scale out.

• We tested real-world Oracle performance on NVMe. Here are our test results

• Want to learn more about how to manage your data growth with Oracle or SQL Server®? Check out a recent webinar where we shared how IntelliFlash NVMe simplifies data management challenges. We jumped into the traditional infrastructure elements, such as capacity, performance, and consumption of data, and how Oracle and SQL Server databases have developed scale-out and scale-up approaches.

• IntelliFlash combines flash, NVMe, and advanced data services. Watch the video

 
