Independent Data Storage Analyst
The Industry in Aggregate
It may have taken longer than expected, but RAID arrays equipped with SAS are now shipping. IBM is the first to get in the game with the launch of its EXP3000 array at the end of August. Soon to come are SAS-based RAID boxes from the likes of Dell, EMC, Hitachi, HP, and Sun. NetApp will enter the fray, too, with SAS-based filers by year-end.
Almost universally, the new SAS-based external storage boxes feature 3.5″ disk drives. The IBM EXP3000 deploys a single RAID controller and twelve SAS drives in a 2U shelf, allowing it to pack almost 4TB into that configuration. Though SAS drives are available in both 2.5″ and 3.5″ form factors, most in the industry agree that 2.5″ SAS drives will primarily reside in performance-centric enterprise servers rather than external arrays, since the smaller size means less capacity per unit; sticking with 3.5″ drives lets an array offer maximum capacity as storage needs expand. Meanwhile, expect value-oriented servers to adopt 3.5″ SAS drives in order to save cost.
Without question, disk array vendors have one metric in mind as it relates to SAS: its very favorable price/performance. In fact, if you ask the largest suppliers of enterprise systems today why they favor SAS for their forthcoming array offerings, you are likely to hear the new interface described as a “game changer”. Because the same arrays can be populated with either SATA or SAS drives, R&D costs are significantly lower than for arrays that incorporate Fibre Channel disks. And SAS’s 3Gb/s link rate is extremely competitive at this lower price point. True, SAS disks are actually more expensive than Fibre Channel drives today when comparing HDDs of identical capacities. However, once parallel SCSI is completely displaced by SAS over the next few years, SAS will ship in much greater volumes than Fibre Channel, and that should make SAS the cheaper alternative.
SAS is not replacing Fibre Channel in one fell swoop, however. Rather, SAS seems very well positioned to become the predominant choice in the segment of the external storage market that International Data Corporation (IDC) terms price bands one through three (i.e., those boxes whose end-user price is below $15,000). Such entry-level solutions today are typically single-controller arrays attached directly to the host server, often without a SAN. Many vendors have explored the prospects for less-functional Fibre Channel or even iSCSI in order to extend the addressable market for SANs to include the cheapest of disk arrays. However, the cost of each of these host-attachment alternatives appears too high to justify, and the added headache of managing a SAN is simply too burdensome for the entry-level customer in most cases. As a result, DASD is sufficient for much of this price category.
As SAS evolves from 3 to 6Gb/s, it is well positioned to move from the entry level into the mid-range segment of the market. But this progression could be gradual. In fact, SAS is already under the microscope in this first phase, its entrée into the external storage realm, and as a new interface it will have to prove reliable. For example, the ability to handle complex data traffic from multiple I/O initiators will test the durability of the SAS interface, even if the connectivity entails something as simple as dual-node DASD. Additionally, the expanders supported by SAS will get an initial test of scalability in these entry-level offerings, a crucial proving ground for this functionality. Provided they perform reliably, SAS expanders give the interface virtually unlimited potential in its effort to migrate up the value chain within the lucrative storage array market.
Many are asking what took so long for SAS to come to fruition in the disk array sector. One of the key enablers in getting SAS off the ground is Intel, whose SAS-capable “Blackford” server chipset, along with its “Woodcrest” dual-core CPU, became available mid-year 2006, later than originally planned. But now that both Intel and AMD are shipping SAS-enabled server platforms, all of the major providers of servers to the enterprise market offer SAS as an I/O option, and some even ship SAS exclusively on the server side today.
But the late arrival of SAS-enabled servers is not the lone culprit behind the tardy launch of disk arrays based on the new interface. Collectively, system-vendor OEMs admit that they underestimated the amount of work required to transition from parallel SCSI to SAS. Host stacks had to be rewritten, configurations such as failover and multi-pathing reworked, and backplanes redesigned to incorporate SAS, and tasks like these take quite a bit of time to complete. Furthermore, one should be mindful of all the different operating systems in the marketplace, from Windows to Linux to the various flavors of Unix, each of which had to be updated to support SAS. Consider, too, that the typical supplier of servers and storage solutions is resource-constrained following the R&D cutbacks of recent years.
Fortunately, the heavy lifting is now behind us, and unit shipments of SAS-based hard disk drives are on pace to exceed those of parallel SCSI before the year is out. Consequently, 2007 is slated to be a big adoption year for SAS. In fact, most vendors agree that all of their parallel SCSI-based DASD solutions must be replaced with SAS-based ones over the next few years. Given that entry-level storage solutions generate about one-third of the overall revenue in the external storage systems market, SAS is positioned to post significant revenue gains over the next few years as its market adoption accelerates.