Author: Mike Karp, VP and Principal Analyst
Ptak, Noel and Associates
SCSI devices, first parallel SCSI and now SAS, account for the lion's share of enterprise storage, and this leadership role seems likely to continue well into the foreseeable future. As demands for better performance, configuration flexibility and manageability have grown, the SAS standard and the products that support it have evolved to meet them. Second-generation SAS products are well established in the marketplace today, and progress continues with the design of a third-generation product set, which will begin to roll out in 2012.
First-generation SAS operated at 3Gb per second, the second generation operates at 6Gb per second, and the next generation will operate at 12Gb per second, so most end users think of each generation in terms of its bus speed. Bus bandwidth provides a handy and descriptive shorthand, but there is much more going on than faster I/O.
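The line rates translate directly into per-lane payload throughput. SAS generations through 12Gb/s use 8b/10b encoding on the wire, so each lane delivers 80% of its raw line rate as payload; the short sketch below (illustrative arithmetic, not from this article) shows the conversion:

```python
def effective_mb_per_s(line_rate_gbps: float) -> float:
    """Per-lane payload throughput for an 8b/10b-encoded SAS link.

    8b/10b encoding spends 10 line bits to carry 8 payload bits, so the
    usable bit rate is 80% of the raw line rate; dividing by 8 converts
    bits to bytes, and by 1e6 converts to megabytes per second.
    """
    return line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6

for gbps in (3, 6, 12):
    print(f"{gbps} Gb/s SAS lane -> {effective_mb_per_s(gbps):.0f} MB/s payload")
```

This is why the familiar 3, 6 and 12Gb/s generations are often quoted as 300, 600 and 1200 MB/s per lane.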
Building Up, Building Down
The build-out ability of the current generation of SAS is a case in point. If we were discussing parallel SCSI, 16 bus addresses (with one reserved for the host bus adapter) would be the limit. Even with multiple HBAs on board, the idea of hundreds of ports connected to a single server would have been a practical impossibility — the size of the connectors, the width of the internal ribbon cables and the weight of the differential cables that reached outside the box would have made the whole idea ridiculous.
SAS’s much smaller second-generation internal Mini-SAS HD connectors match up nicely with the new 1.8″ Small Form Factor (SFF) drives to make for easier inside-the-box connectivity. Smaller connectors plus smaller drives also open up a host of options for system builders who want to deliver more storage, more spindles, or simply different storage. SATA disks work perfectly well on a SAS bus, so scale-out with cheaper disks is easy, and scale-up using small form factor solid-state drives (SSDs) is also part of the picture. System builders thus gain the efficiency of using a single backplane to combine low-cost SATA, high- and medium-performance SAS, and ultra-high-performance, low-latency solid-state disks in just about any mix-and-match combination a data center manager might want.
Moving Out, Moving In
The key to scaling out lies in SAS expanders: inexpensive switching devices, interposed between the SAS controller and the endpoint devices, that multiply the number of ports each controller can support. They do more than that, though. The current generation of SAS expanders not only scales out the SAS architecture but also provides electrical isolation and clean signaling between the controller and the disk drives. Expanders can be added within servers whenever they are needed, allowing relatively inexpensive pay-as-you-grow systems that start small and build out to very large configurations as the need arises. Cascading expanders supports far more devices than was previously possible: hundreds of ports (and in extreme cases, thousands) can be made available with today's technology. As a result, the capacity limit for storage within a SAS array is dictated not by a bus limitation but only by the volume of free space available and by power considerations.
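The fan-out arithmetic behind those port counts can be sketched with a toy model. The phy counts and link widths below (an 8-phy controller, 36-phy expanders, 4-wide links) are illustrative assumptions, not figures from this article:

```python
def max_end_devices(controller_phys: int, expander_phys: int,
                    uplink_width: int, cascade_depth: int) -> int:
    """Toy fan-out model for a cascaded SAS expander topology.

    Each expander reserves `uplink_width` phys for its wide link toward
    the controller; at every level except the last it also reserves
    `uplink_width` phys for the downstream link to the next expander in
    the cascade. All remaining phys attach end devices (drives).
    """
    chains = controller_phys // uplink_width  # expander chains the controller can feed
    devices_per_chain = 0
    for level in range(cascade_depth):
        reserved = uplink_width            # upstream link
        if level < cascade_depth - 1:
            reserved += uplink_width       # downstream link to the next expander
        devices_per_chain += expander_phys - reserved
    return chains * devices_per_chain

# An 8-phy controller, 36-phy expanders, 4-wide links, two-deep cascade:
print(max_end_devices(8, 36, 4, 2))  # -> 120 drives from one controller
```

Even this modest two-level cascade multiplies an 8-port controller into well over a hundred drive attachments, which is how real topologies reach hundreds or thousands of ports.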
Outside the box, expanders can be combined into switches, creating a SAS fabric that allows direct-attach connectivity to enormous amounts of storage. Because this is a DAS connection, I/O is both fast and deterministic. Latency stays extremely low whatever storage media are used (this affects only the latencies introduced by the connection itself; the latencies inherent in the storage devices, of course, remain), and the indeterminacies associated with networked storage disappear: the randomness caused by resource contention on a network is replaced by deterministic I/O. Many testing environments, and any application that requires high-speed, predictable I/O, will find this extremely useful.
Of course, a SAS fabric also enables multiple servers to access a common set of storage. The new distances provided by active cabling (copper allows runs of up to 20m, while fiber cable permits runs of up to 100m) mean that the servers need not sit close to one another or to the storage itself. A casual observer might easily mistake such a built-out SAS topology (servers, expanders, controllers, switches, storage devices) for a SAN fabric, and would not be far wrong. As in both Fibre Channel and iSCSI SANs, data on a SAS fabric is addressed as blocks and may be accessed by more than one server. The SAS fabric, however, relying on relatively cheap expander technology, provides a truly economical way for multiple servers to address a large common data set.
The SAS Connectivity Management system provides autodiscovery and rapid fault isolation, which should deliver both improved reliability and better serviceability. As a result, storage managers using built-out SAS storage can expect to minimize the costs associated with configuration errors and with several aspects of system maintenance.
The new 12Gb/s SAS generation is being defined by T10 as part of the SAS-3 and SPL (SAS Protocol Layer) specifications and can be expected to provide further management enhancements. Like its predecessors, 12Gb/s SAS will double the available bandwidth. Prototypes are expected within two years, with initial products expected 12 to 18 months after prototypes become available. If you are curious about where SAS is going, check out the SAS Advanced Connectivity Roadmap on the SCSI Trade Association website.