Author: Levi Norman, ISS Storage Strategist, Industry Standard Servers
From the time SAS was first envisioned, we have heard time and again that it would be far more than parallel SCSI ever hoped to be: unsurpassed ROI, new and unique configurations, unparalleled performance gains, and so on. It actually began as an engineering dilemma: how to speed up the bus while keeping all the data synchronized, on path, and arriving at its destination in a timely and safe manner. Parallel SCSI (pSCSI) could no longer guarantee the safety, or even the timely arrival, of the ‘payload,’ hence the search elsewhere for a competent answer.
The idea of serial technology has been around for quite some time in varying forms and formats, at varying speeds and levels of security and connectivity. But the idea of using it cost-effectively outside the Fibre Channel world (with smaller, stronger, inexpensive connectors, thanks to InfiniBand and ATA) had only recently taken hold. Long story short, SAS was born. It was born with 3 Gb/s speeds, it was born with expandability well beyond pSCSI, and it was born with better inherent management than pSCSI.
ROI on a data center is easily calculated, right? Yes, if you are starting from the ground up. But what about those who have a data center today? What do they do? How do they realize an immediate benefit, and how do they calculate it? First, SAS is inherently SCSI and slots directly into the existing architecture, yet it delivers immediate architectural and performance benefits: the drives became more robust as they became smaller, point-to-point architecture sped up transactions, and scalability went up nearly tenfold. Results improve immediately, even for an existing data center. Second, HP offers a host of methods to measure performance and transactions, with a variety of tools and industry-monitored tests.
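For those who want to put a number on it, the arithmetic itself is just the standard ROI formula. A minimal sketch follows; the dollar figures are purely hypothetical placeholders for illustration, not HP or industry measurements:

```python
# Hypothetical ROI sketch for a SAS migration. All figures below are
# illustrative placeholders, not measured HP or industry numbers.

def roi(gain, cost):
    """Classic ROI: net gain over the investment, as a fraction."""
    return (gain - cost) / cost

# Assumed first-year savings from denser drives, faster point-to-point
# links, and reduced management overhead (placeholder values).
annual_savings = 120_000.0   # e.g. power, floor space, admin time
migration_cost = 80_000.0    # e.g. controllers, drives, enclosures

print(f"First-year ROI: {roi(annual_savings, migration_cost):.0%}")
# → First-year ROI: 50%
```

In practice the inputs would come from the measurement tools mentioned above rather than estimates; the formula stays the same.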
Now that we have a brief understanding of where SAS came from and why it is valuable, let's look at the next step, or n+1. We'll start from a storage-device-level view, walk through the brains (the controller) behind those devices, move on to the server that hosts the controller, and wind up with a quick look at the storage systems that attach to the server infrastructure. Then we'll make some out-of-bounds predictions about the future, well past, say, n+3.
Hard Disk Drives
So drives (to be clear, hard disk drives, not tape drives) are reaching unprecedented levels of capacity. Today HP offers up to 300 GB of capacity at the enterprise SAS level and up to 750 GB in bulk-storage SATA devices. Tomorrow promises even greater densities for spinning media and a fresh look at solid-state storage as a viable, cost-effective solution for volume installations. What about the far-off future? We could see devices or components as difficult to imagine as barium titanate nanowires (100,000 times thinner than a human hair) suspended in water, holding up to 12.8 million GB per square centimeter (originally described in Computerworld, May 2006).
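To put that density claim in perspective, a quick back-of-the-envelope calculation (using only the figures quoted above) compares it against today's largest enterprise SAS drive:

```python
# Back-of-the-envelope scale check, using only the figures quoted above.
nanowire_density_gb_per_cm2 = 12_800_000  # claimed ~12.8 million GB per cm^2
enterprise_sas_drive_gb = 300             # today's largest enterprise SAS drive

# How many of today's 300 GB drives would one square centimeter equal?
drive_equivalents = nanowire_density_gb_per_cm2 / enterprise_sas_drive_gb
print(f"~{drive_equivalents:,.0f} drive-equivalents per cm^2")
# → ~42,667 drive-equivalents per cm^2
```

In other words, a single square centimeter of such a medium would hold the contents of tens of thousands of today's top-capacity enterprise drives.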
Drives are on a continual and unrelenting path toward being able to store all the data humanity can possibly produce. So what about the management of that data, both active and passive? By active, I mean the controller, not necessarily the management software. And by passive, I mean near-line data, say up to five years old but not quite out to the Sarbanes-Oxley horizon of seven-plus years, which begins to involve information lifecycle management (ILM) strategies, tape libraries, and so on.
Again, by management, I mean the brains behind the devices: the controllers. Today HP offers a full range of SAS controllers and host bus adapters to suit all device connectivity and manageability needs. Each HP controller is the beneficiary of eight generations of development by the HP Smart Array engineering family. With that comes a proven, hardened RAID stack, one of very few in the industry, and a constant passion to improve users' experience through performance gains, management gains, and increased levels of data protection. After all, data protection has become critical; no data definitely equals no business these days. HP is constantly looking to build on its already stellar reputation in the market with things like device-level encryption, enhanced data management choices, and 6 Gb/s speeds. The question then becomes: what do the servers do, and when?
Servers in the industry-standard/x86 world always move toward stable volume plays. By that I mean they move toward high-volume components that can be easily and readily acquired for packaging into a unique design. Unique design doesn't necessarily mean the physical look; it can mean the secret sauce, and HP, in my humble opinion, has the sauce de la sauce of management tools. Seamlessly and completely across our enterprise line of servers and storage, HP offers manageability tools that blend effortlessly into one another. That translates into value at the base level of compute and store. How valuable does it become as the infrastructure grows in size and complexity? Enormously so, and very difficult to quantify. But if you had to throw a number at it, how about 44? That would be 44 consecutive quarters of HP market leadership as measured by IDC.
As the architectures around SAS are groomed for growth (meaning 6 Gb/s speeds and unique schemas in design and scalability), the management tools will encompass those new capabilities and take advantage of unforeseen opportunities as they develop. Servers will incorporate the technology bumps as soon as volume looks feasible and the inflection points make fiscal sense. Today's view is that somewhere between late '09 and early '10, industry-standard servers will roll out full-blown designs combining 6 Gb/s speeds on the latest controllers with the latest-capacity SFF drives.
That leaves storage platforms: not devices, but actual enclosures. Today HP offers a full line of SAS-capable storage enclosures in the MSA line. Tomorrow, look for higher-end storage platforms such as the EVA to offer enclosures that accept SAS SFF drives while pushing FC out the back into the SAN. And finally, SAS itself: does it remain a simple interconnect, or does it grow? If you listen to the innovators, and I often do, they say it is on the verge of becoming its own fabric. That would be something for an interconnect that was once viewed as a minor disruptor, supposed to go the way of FireWire. Everything in the SAS specification is designed for growth. As with all architectures, limits surely exist; we just haven't found SAS's limits yet. And that's good.
Let's come full circle and answer the initial question: how pervasive will SAS become? SAS has taken the throne previously held by pSCSI; it is an interconnect in its adolescent years, positioned for success in its adult years. SAS is on the verge of becoming a fabric. SAS is on the verge of moving higher into the storage realm. SAS controllers are becoming ever more intelligent, scalable, security-conscious, and performance-oriented, all at once. SAS servers are taking full advantage of the density and power savings now available, and have transformed themselves either physically, in terms of density within the rack (i.e., blades), or in utility (e.g., the HP ProLiant DL360 1U server moving from a two-drive to a six-drive configuration, making it attractive to an altogether different market space).
And finally, storage boxes are taking full advantage of the simple flexibility of a SAS backplane that can easily accommodate either SAS or SATA devices. So the question of how pervasive SAS becomes ultimately belongs to the data center manager's creativity and need to implement technologies that can adapt, change, and grow with changing times and evolving data centers. SAS is an incredible technology, and an incredible set of tools, that the members of the STA community have been able to deliver into the marketplace.