
Looking back on the evolution of enterprise storage

Currie Munce | November 2024

Thanksgiving is a time for reflecting on what we’re grateful for in our lives — our families, our friends, our careers and our growth.

This Thanksgiving, I’m thinking back on my career in storage and all the significant technological advances that have occurred during that time. It’s a simple story of challenges to the status quo, rapid evolution, and the best product and technology for the job eventually winning. It’s like biological evolution, just at a much faster pace.

The story starts in the early 1980s with the first storage product I worked on, the IBM Model 3380, a direct access storage device (DASD) that combined a controller and storage media. In 1985, it had 5.04GB of storage across two head disk assemblies (HDA), each weighing about 77 kilograms with two independent actuators per spindle (1.26GB per actuator). This product featured a data throughput of 3 MB/s and an average seek time of 16 milliseconds. The spindle used heavy-duty ball bearings designed for trucks because of the weight of the 14-inch disks.

A key concern for increasing the capacity of future models was performance. Engineers debated how much capacity the file system could tolerate on a single actuator. At the time, some engineers felt the limit was about 1GB per actuator, which meant something had to change to allow scaling. As is often the case, that estimate proved to be conservative.

The 1990s saw the single large expensive disk (SLED) replaced by the redundant array of inexpensive (later, independent) disks (RAID). RAID meant replacing a rack holding two large 14-inch-diameter disk stacks with arrays of standard 3.5-inch hard disk drives (HDDs). This change led to the following improvements and staved off performance concerns for over a decade:

  • Much higher volumetric density for storage in a rack
  • Higher reliability with redundancy for rebuilding failed drives in place
  • More actuators per gigabyte (due to the smaller diameter disks)
  • Faster seek performance due to smaller, lighter actuators
  • Higher bandwidth because of striping of data across many drives in parallel
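The redundancy and striping ideas above can be sketched in a few lines of Python. This is a minimal illustration of RAID-5-style XOR parity, not any particular product's implementation; the block contents and stripe width are invented for the example:

```python
# Minimal sketch of RAID-style striping with XOR parity.
# Real arrays rotate parity across drives, handle large variable
# block sizes, and rebuild failed drives in the background.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def stripe_with_parity(data_blocks):
    """Return the full stripe: the data blocks plus one parity block."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def rebuild(stripe, failed_index):
    """Recover a missing block by XOR-ing the surviving blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != failed_index]
    return xor_blocks(survivors)

stripe = stripe_with_parity([b"AAAA", b"BBBB", b"CCCC"])
assert rebuild(stripe, 1) == b"BBBB"  # lost data block recovered from parity
```

Striping spreads a volume's blocks across many drives so transfers proceed in parallel, while the parity block lets the array reconstruct any single failed drive in place, which is exactly the bandwidth and reliability win listed above.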


Then in the late 1990s and early 2000s, advances in HDD technologies significantly increased areal density and drive capacities, which doubled annually at their peak. This success renewed the focus on performance improvements. To enhance bandwidth and reduce seek time, stacks of smaller-diameter (65mm) disks rotating at 10,000 RPM were introduced, followed by even smaller (48mm) disks spinning at 15,000 RPM.

These performance enhancements, however, also increased the cost per bit compared to traditional baselines. By the mid-2000s, 2.5-inch HDDs spinning at 10,000 RPM became the standard for enterprise storage systems. In the late 2000s, a new class of HDDs known as nearline, or large-capacity, drives emerged, initially featuring six 3.5-inch platters per spindle rotating at 5,400 RPM. With the advent of helium-sealing technology, modern HDDs can now accommodate up to 11 platters and spin at 7,200 RPM.

Around 2007, enterprise storage systems began exploring NAND flash technology for data storage. These solid-state drives (SSDs) featured nonvolatile memory chips instead of mechanical disks, making them faster but more costly than HDDs. SSDs met the need for fast access and extremely low latency in some important storage scenarios, such as write buffers in HDD arrays. The first enterprise-class SSD I helped bring to market, in 2010, was a 100GB SLC SAS-interface SSD developed jointly by HGST and Intel.

By the mid-to-late 2010s, as SSD bit cost dropped and system architectures and software evolved to take advantage of their far lower latency, SSDs began replacing first the 15,000 RPM and then the 10,000 RPM HDDs in high-performance use cases. The mainstream for enterprise HDDs pivoted to nearline 3.5-inch, 7,200 RPM drives, which now store 85% to 90% of enterprise data, while ceding the high-performance applications to SSDs.

Storage technology continuously evolves. While NAND flash costs keep decreasing, HDDs face challenges in improving areal density due to magnetic scaling limits, necessitating innovative technologies like energy-assisted or heat-assisted recording. HDD performance is also no longer improving, limited by the mechanics of the drive. Meanwhile, generative AI and GPUs are increasing demands on storage workloads, requiring better performance. Although storage software has advanced to manage the declining performance per terabyte of HDDs, its capabilities have limits. HDD suppliers are revisiting older strategies, such as multiple actuators per spindle, to enhance performance.

The trends of declining relative costs and rising performance needs are motivating a shift toward using exceptionally large-capacity SSDs in storage arrays. Like the move from SLED to RAID, denser storage reduces system costs beyond just drive cost. Systems built with large-capacity SSDs also offer better energy efficiency, up to 7.5 times better throughput per watt and up to three times more capacity per watt than HDDs.

For example, the recently announced Micron 6550 ION SSD is the fastest and most energy-efficient 60TB data center SSD available. As data centers expand with more GPUs and face power constraints, large-capacity SSD systems will become more cost-effective and more energy-efficient than HDD systems. By the end of the decade, we may see 250TB and even 500TB SSDs that can fit 40 wide in a standard 2RU chassis and deliver over 100PB in a single rack.
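As a rough sanity check on that projection, the arithmetic is easy to sketch. The rack layout below (a 42RU rack with 36RU devoted to storage chassis) is my own assumption, not a figure from the article:

```python
# Back-of-envelope check of the end-of-decade rack-density projection.
# Assumed layout: 42RU rack, 36RU of it filled with 2RU storage chassis.
drives_per_chassis = 40    # "40 wide in a standard 2RU chassis"
tb_per_drive = 500         # projected large-capacity SSD
chassis_per_rack = 18      # assumption: 36RU / 2RU per chassis

pb_per_chassis = drives_per_chassis * tb_per_drive / 1000
pb_per_rack = pb_per_chassis * chassis_per_rack

print(pb_per_chassis, pb_per_rack)  # 20.0 PB per chassis, 360.0 PB per rack
```

Even a single 2RU chassis of 500TB drives would hold 20PB, so a handful of chassis comfortably clears the 100PB-per-rack figure; the exact rack total depends on how much space power and networking consume.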

The bottom line — the one constant in storage — is evolution. New designs, architectures and technologies continue to emerge to meet customer needs for capacity, cost, performance and power. While HDDs will remain the lowest-cost media solution for long-term data retention when modest streaming and retrieval performance is required, large-capacity SSDs are poised to drive the next big wave, displacing a sizable percentage of data center bits from HDDs to SSDs.

The technology is not the only thing that has changed; I'm grateful to have been part of the transformation. During my career, I've seen the evolution from 5GB in a rack to 100PB and from 3 MB/s per rack to potentially 1 TB/s. These advances are driven by our insatiable need to store more data. And these innovations have been made possible by brilliant minds, many of whom I've been thankful to work with over the years. And the story continues…

Currie Munce is a Senior Technology Advisor and Strategist for Micron's Storage Business, helping to define storage architecture and technology directions for the company.
