Revolution in the Storage Network – Serial Attached SCSI (SAS) as a Fabric Interconnect

Author: Sam Barnett, Product Line Manager, Serial Attached SCSI and Serial ATA
Storage Products Division, Vitesse Semiconductor

Mention Serial Attached SCSI (SAS) technology to a gathering of IT professionals and the room is immediately charged with an air of excitement. Take the dialogue one step further and associate SAS with Fibre Channel front-ends/fabrics/switching or a Storage Area Network (SAN), and the mood turns somber – until economic reality sets in.

SAS technology offers a wealth of benefits to the enterprise server and enclosure customer (high reliability and performance, mixed enterprise/desktop drive support, improved economies of scale), but it was not developed with networking in mind. Even so, with modest extensions to the existing standard SAS could be positioned to dominate the changing storage network landscape.

This article explores SAS technology and its logical extrapolation as a viable protocol for the Storage Area Network (SAN) of tomorrow.

Understanding SAS

Serial Attached SCSI is the evolutionary follow-on to the parallel SCSI interface. Like other serial storage technologies such as Serial ATA (SATA) and Fibre Channel, SAS was originally envisioned as a point-to-point drive connection mechanism only, but it has become much more. In its simplest configuration, SAS provides a physical connection between a host controller and some number of targets. Figure 1 illustrates this connection type.


Figure 1: SAS Physical Connection


As the standard gained momentum, it became apparent that OEMs needed a more robust, expanded connection construct to support large storage topologies. The concept of an “expander” was thus born. Like a Fibre Channel switch, the expander provides a switching matrix for connecting multiple devices – host controllers (initiators), hard disk drives (targets), and other expanders – within a SAS domain.

Large topologies (up to 16,384 devices in a single domain) can be built through expander cascades and different connection routing mechanisms (direct, subtractive and table routed). Figure 2 illustrates a large SAS topology using expanders.
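The three routing mechanisms follow a fixed precedence inside an expander: a destination directly attached to a phy wins, then a table-routed entry learned during discovery, and finally the subtractive (default) port catches everything else. A minimal sketch of that decision, with purely illustrative function names and SAS addresses (none of this is lifted from the standard's wording):

```python
# Hypothetical sketch of SAS expander routing precedence:
# direct, then table, then subtractive (default) routing.

def route(dest_addr, direct_phys, route_table, subtractive_phy):
    """Return the phy on which to forward an OPEN request."""
    if dest_addr in direct_phys:      # direct: target attached to a local phy
        return direct_phys[dest_addr]
    if dest_addr in route_table:      # table: entry learned via discovery
        return route_table[dest_addr]
    return subtractive_phy            # subtractive: default upstream port

direct = {"5000C50001A2B3C4": 3}
table = {"5000C50009F8E7D6": 7}
print(route("5000C50001A2B3C4", direct, table, 0))  # 3 (direct)
print(route("5000C50009F8E7D6", direct, table, 0))  # 7 (table)
print(route("5000C5000000FFFF", direct, table, 0))  # 0 (subtractive)
```

Subtractive routing is what lets small edge expanders stay simple: anything they cannot resolve locally is simply handed upstream.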


Figure 2: Large SAS Topology Using Expanders


With the large-topology building blocks (routing-capable, self-configuring expanders) already inherent to SAS, taking the next step and defining SAS as a “fabric” does not stretch the technology too far.

What is a Fabric?

Loosely defined, a fabric is a pathway in computing, networking, or storage devices that provides chip-to-chip, adapter-to-adapter, or device-to-device connections for transferring information within computing, networking, or storage systems/subsystems. In essence, a fabric is a switch or cooperative switching facility – much like an expander – well, almost.

Fabric Switch Architectures

Switches come in many different sizes and flavors, but loosely defined, a switch is a traffic director that routes protocol data units (PDUs) from an input port to an output port based on some combination of criteria. The switch must also resolve any contention resulting from the simultaneous arrival of PDUs at a common egress (output) port.

Most switches are based on one of several internal architectures: shared memory, shared bus (also known as shared medium), crosspoint matrix, and ring. For all their similarities, these architectures are distinguished from one another principally by their buffer (queue) servicing policies. More exotic architectures combine design concepts from multiple switching and buffering schemes, but they are not addressed in this article.

Fabric Basics

The Basic Switching Element

According to our definition, a switch provides two functions:

  • Routing of PDUs between input ports and output ports
  • Resolution of contention through buffering or other means

Figure 3 depicts the structure of the basic switching element.

Note that it contains a set of decision logic that operates on PDU headers, a latch to hold the result of the basic switching decision for the transit duration of the PDU, delay lines to synchronize the PDU contents with the basic switching decision, and a 2 x 2 cross-connect. The cross-connect is simply a dual multiplexer that can be set in either a “bar” or “cross” state, resulting in routing of an input to either output with the other input correspondingly routed to the other output.
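The element's behavior can be captured in a few lines. This is a toy model, not anything from the SAS or switching literature verbatim: the single-bit destination header and the function names are illustrative assumptions.

```python
# Toy model of the basic 2 x 2 switching element: decision logic examines
# one header bit and sets the cross-connect to "bar" or "cross".

def decide(pdu):
    """Decision logic: route on the PDU's destination bit (0 = straight through)."""
    return "bar" if pdu["dest"] == 0 else "cross"

def cross_connect(in0, in1, state):
    """Dual multiplexer: 'bar' passes inputs straight through, 'cross' swaps them."""
    return (in0, in1) if state == "bar" else (in1, in0)

pdu = {"dest": 1, "payload": "READ"}
state = decide(pdu)                    # the latch holds this for the PDU's transit
out0, out1 = cross_connect(pdu, None, state)
print(state, out1["payload"])          # cross READ
```

In hardware, the delay lines mentioned above would hold the PDU body while `decide` runs, so the data and the switching decision arrive at the cross-connect together.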


Figure 3: Structure of the Basic Switching Element


Space Division Switches

In space division switches, multiple concurrent paths exist between input and output ports. PDUs arriving on different input ports and destined for different output ports can transit the switching elements on separate paths without interfering with each other. Examples of space division switches include crossbar matrix switches, multi-stage interconnection networks (Banyan, Batcher-Banyan, Benes), and hypercubes.
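Multi-stage networks such as the Banyan are attractive because they are self-routing: stage k of the network simply inspects bit k of the destination address to pick the upper or lower output of its 2 x 2 element, with no central controller. A minimal sketch of that property (the address width and port naming are illustrative assumptions):

```python
# Self-routing through a Banyan network: each stage inspects one bit of the
# destination address (MSB first) to choose the upper (0) or lower (1) output.

def banyan_route(dest, stages):
    """Return the per-stage output choices for a PDU bound for `dest`."""
    path = []
    for k in range(stages):
        bit = (dest >> (stages - 1 - k)) & 1
        path.append("lower" if bit else "upper")
    return path

print(banyan_route(0b101, 3))  # ['lower', 'upper', 'lower']
```

Because every element decides locally, the routing decision scales with the number of stages (log N) rather than the number of ports.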

Time Division Switches

Time division switches route all PDU traffic through a common point (e.g., a common memory/buffer or bus) in the switch but manage PDU separation on the basis of time. The most common types of time division switches are shared memory and shared bus (shared medium) designs.
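A shared memory switch can be sketched as a single central buffer organized into per-output queues, drained one PDU per port each time slot. This is a deliberately simplified model (class and method names are my own, not from any product):

```python
from collections import deque

class SharedMemorySwitch:
    """Toy shared-memory switch: all PDUs pass through one common buffer,
    organized as per-output queues; each time slot drains one PDU per port."""

    def __init__(self, ports):
        self.queues = [deque() for _ in range(ports)]

    def enqueue(self, pdu, out_port):
        """Write a PDU into the shared buffer, tagged with its output queue."""
        self.queues[out_port].append(pdu)

    def time_slot(self):
        """One output cycle: every port transmits at most one queued PDU."""
        return [q.popleft() if q else None for q in self.queues]

sw = SharedMemorySwitch(2)
sw.enqueue("pdu-a", 0)
sw.enqueue("pdu-b", 0)   # contends with pdu-a for port 0; queued, not lost
sw.enqueue("pdu-c", 1)
print(sw.time_slot())    # ['pdu-a', 'pdu-c']
print(sw.time_slot())    # ['pdu-b', None]
```

The memory bandwidth of the common buffer must cover all ports at once, which is the classic scaling limit of this architecture.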

The Need for Queuing

Regardless of design, all switches need some queuing or buffering mechanism to prevent PDU loss. For the sake of argument, consider a bufferless N x N switch in which each input simultaneously sends a PDU to a uniformly random output; whenever two or more PDUs contend for the same output, all but one are dropped. Modeling the arrivals at each output with a simple binomial distribution shows a loss rate approaching 1/e – roughly 37% – of all PDUs arriving at the switch. In a real-world application this loss rate is unacceptable. Fortunately, no one builds a bufferless switch.
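The 37% figure falls out directly from the model: with each of N inputs choosing an output uniformly at random, the probability that a given output attracts none of the N PDUs is (1 − 1/N)^N, and since each output delivers at most one PDU per cycle, that same expression is the fraction of traffic lost. As N grows it converges to 1/e ≈ 0.368:

```python
# Loss rate of a bufferless N x N switch under uniform random traffic:
# each output delivers at most one PDU per cycle, so the expected lost
# fraction equals the probability an output is idle, (1 - 1/N)**N -> 1/e.

def bufferless_loss_rate(n):
    return (1 - 1 / n) ** n

for n in (2, 8, 64, 1024):
    print(n, round(bufferless_loss_rate(n), 4))
# 2 0.25
# 8 0.3436
# 64 0.3657
# 1024 0.3677
```

Even modest port counts already sit within a few percent of the asymptotic 1/e loss, which is why buffering is non-negotiable in any practical design.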

The Challenges to SAS Becoming a Fabric

Connection-Oriented Transfer

SAS is a connection-oriented protocol, meaning that a connection between two SAS devices must exist before data transfer can occur. SAS addresses link utilization deficiencies by implementing the traditional SCSI Disconnect/Reselect function to free an unused or under-used link while waiting on the mechanical (and sometimes lengthy) response from a disk drive or other target. SATA devices, however, lack this capability and require an active affiliation between initiator and target for the entire length of a transaction. In effect, this limits the SATA target to accepting commands only from the affiliated host, locking out other hosts that may need to connect while the transaction is outstanding.
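The lock-out behavior is easy to picture as a toy model. This is a simplification of STP affiliation for illustration only; the class and method names are mine, and real affiliation management involves the expander's STP bridge, not the drive itself:

```python
class SataTarget:
    """Toy model of STP affiliation: once one initiator holds an affiliation,
    other initiators are refused until the transaction completes."""

    def __init__(self):
        self.affiliated = None

    def open_affiliation(self, initiator):
        """Grant the affiliation if free (or already held by this initiator)."""
        if self.affiliated in (None, initiator):
            self.affiliated = initiator
            return True
        return False   # another host is locked out mid-transaction

    def close_affiliation(self):
        self.affiliated = None

t = SataTarget()
print(t.open_affiliation("host_a"))  # True  -- host_a owns the target
print(t.open_affiliation("host_b"))  # False -- host_b is starved until close
t.close_affiliation()
print(t.open_affiliation("host_b"))  # True
```

Contrast this with a native SAS target, which can disconnect mid-command and let other initiators use the link.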

By evolving the SAS protocol to support a connectionless yet reliable transfer scheme, the problems of low link utilization, poor long-haul transfer performance and SATA/STP host starvation/lock-out can be avoided.

Physical Connections

Unlike its network-capable counterpart Fibre Channel, SAS does not currently define an optical interface. For most intra-data center connections (shelf-to-shelf, rack-to-rack, or box-to-box), standard 4-wide SAS cables are more than sufficient. To support greater distances, an optical interface for SAS, including its unusual out-of-band (OOB) signaling, must be defined.

Routing Summarization

The routing structures in SAS were originally designed with direct-attach and limited topology sizes in mind. Top-level (fan-out) expanders today require complete knowledge of their connected domain, limiting the effective size of a storage system. With a routing summarization feature, no single expander in the domain would need to maintain knowledge of the entire domain, meaning that topologies of arbitrarily large sizes could be built.
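Summarization would work much like CIDR prefixes in IP routing: one table entry stands in for an entire subtree of the domain. The SAS standard has no such feature today, so the following is purely a sketch of the idea, with illustrative addresses and names:

```python
# Hypothetical prefix-based summarization of SAS addresses: a longest-prefix
# match lets one route entry cover a whole subtree of the domain.

def lookup(addr, summary_table, default_phy):
    """Return the phy for the longest matching address prefix."""
    best, best_len = default_phy, -1
    for prefix, phy in summary_table.items():
        if addr.startswith(prefix) and len(prefix) > best_len:
            best, best_len = phy, len(prefix)
    return best

table = {"5000C500": 2, "5000C50001": 5}     # two entries cover many devices
print(lookup("5000C50001A2B3C4", table, 0))  # 5 (longest prefix wins)
print(lookup("5000C50009F8E7D6", table, 0))  # 2
print(lookup("50014EE0AAAA0001", table, 0))  # 0 (default/subtractive phy)
```

With entries like these, an expander's table grows with the number of subtrees it borders, not the number of devices in the domain.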

Address Virtualization

Each SAS device has a set of hard-coded addresses (the SAS address) that identifies the device to the rest of the system. These addresses are either “burned-in” at the factory or assigned by firmware at system boot time. For routing information to be summarized as outlined above, a mechanism must be put into place that allows the OEM to re-map these physical addresses to more logical addresses. An address resolution protocol would provide the basis for mapping hardware addresses to virtual addresses.
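Conceptually this resolver is a thin mapping layer, analogous to ARP binding IP addresses to MAC addresses. No such protocol exists for SAS today; the class, method names, and virtual-address format below are illustrative assumptions:

```python
class AddressResolver:
    """Hypothetical resolver mapping OEM-assigned virtual addresses to
    burned-in SAS hardware addresses, analogous to ARP in IP networks."""

    def __init__(self):
        self.virt_to_hw = {}

    def assign(self, virtual, hardware):
        """Bind a logical (topology-friendly) name to a factory SAS address."""
        self.virt_to_hw[virtual] = hardware

    def resolve(self, virtual):
        """Return the hardware address, or None if unbound."""
        return self.virt_to_hw.get(virtual)

r = AddressResolver()
r.assign("shelf2/slot7", "5000C50001A2B3C4")
print(r.resolve("shelf2/slot7"))  # 5000C50001A2B3C4
```

Because the virtual addresses are OEM-assigned, they can be structured to summarize cleanly (e.g., by shelf and slot), which is exactly what the routing summarization above requires.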

Intelligent Expanders

Today, SAS expanders are essentially circuit switches with a great deal of supporting logic for making connections between SAS initiators and SAS targets. Most implementations are based on a cut-through architecture, meaning that no buffering of any frame (PDU) is provided.

Marrying the SAS expander technology available today with one of several fabric switch architectures will enable SAS to become the back-end SAN technology of choice for tomorrow.

How Do We Get There From Here?

One word – Innovation. By modifying the existing expander building blocks, improving the transport protocol in key areas (reliable connectionless transfer, routing summarization, address virtualization) and adding an optical interface, SAS can and will evolve.

Summary and Conclusion

The future of storage and storage networking is contingent upon the evolution of SAN and NAS architectures, the distribution model for storage, and advances in transparent protocol communication techniques. As with any new technology development, whether revolutionary or evolutionary, one size will never fit all. There will be complementary technologies that address different market segments and the optimal solution will differ by application, connectivity requirements, scalability, performance, and price sensitivity.

What does this mean for SAS’ future as a fabric? Only time will tell.
