Storage area network


A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small-to-medium-sized business environments.

A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as SAN filesystems or shared disk file systems.
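The block-level model can be illustrated with ordinary file I/O at fixed block offsets. In this sketch a temporary file stands in for the LUN a SAN would export; the 512-byte block size is illustrative (real devices commonly use 512 or 4096 bytes):

```python
import os
import tempfile

BLOCK_SIZE = 512  # illustrative; real devices commonly use 512 or 4096 bytes

def write_block(fd, lba, data):
    """Write one block at logical block address `lba` (no file names, no paths)."""
    assert len(data) == BLOCK_SIZE
    os.pwrite(fd, data, lba * BLOCK_SIZE)

def read_block(fd, lba):
    """Read one block by address; the device knows nothing about files."""
    return os.pread(fd, BLOCK_SIZE, lba * BLOCK_SIZE)

# A temporary file stands in for the block device a SAN would export.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 8 * BLOCK_SIZE)          # a tiny 8-block "LUN"

write_block(fd, 3, b"\xab" * BLOCK_SIZE)  # address a block directly...
assert read_block(fd, 3) == b"\xab" * BLOCK_SIZE
assert read_block(fd, 0) == b"\x00" * BLOCK_SIZE  # untouched blocks read as zeros

os.close(fd)
os.remove(path)
```

Notice that the interface exposes only numbered blocks; any notion of files, directories, or permissions must come from a file system layered on top.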

Storage

Historically, data centers first created "islands" of SCSI disk arrays as direct-attached storage (DAS), each dedicated to an application, and visible as a number of "virtual hard drives" (i.e. LUNs).[1] Essentially, a SAN consolidates such storage islands together using a high-speed network.

Operating systems maintain their own file systems on their own dedicated, non-shared LUNs, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, these would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires advanced solutions, such as SAN file systems or clustered computing.
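The hazard can be sketched with two simulated hosts that each cache a block from the same LUN and write it back independently; without coordination, the last writer silently discards the other's update (the block size and values below are illustrative):

```python
# One shared LUN block, modeled as a bytearray standing in for the disk array.
lun_block = bytearray(16)

# Each host caches its own private copy, as a local file system would.
host_a_cache = bytearray(lun_block)
host_b_cache = bytearray(lun_block)

host_a_cache[0] = 0xAA   # host A updates byte 0 in its cached copy
host_b_cache[1] = 0xBB   # host B updates byte 1 in its cached copy

# Both write their whole cached block back; the last writer wins.
lun_block[:] = host_a_cache
lun_block[:] = host_b_cache

assert lun_block[1] == 0xBB   # B's update survived...
assert lun_block[0] == 0x00   # ...but A's update was silently lost
```

A SAN file system avoids this lost update by serializing access with distributed locks, while simple LUN masking sidesteps it by giving each host a disjoint set of LUNs.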

Despite such issues, SANs help to increase storage capacity utilization, since multiple servers consolidate their private storage space onto the disk arrays.

Common uses of a SAN include the provision of transactionally accessed data that requires high-speed, block-level access to the hard drives, such as for email servers, databases, and high-usage file servers.

SAN and NAS

Network-attached storage (NAS) was developed in response to the problems of direct-attached storage (DAS): servers share access to storage devices over the LAN. This setup leaves the server free to run software and applications instead of splitting its time between application duties and storage duties, as happened with DAS. With NAS there is no need for a traditional storage interface such as SCSI; a server or client accesses NAS storage over an ordinary network connection. The drawback is that there is no longer a high-speed connection between the CPU and the storage units: they must communicate over the LAN, which creates bandwidth bottlenecks. In addition, requests are expressed in file-access protocols, and CPU cycles must be spent converting them into the block requests used to retrieve the data. This has relegated NAS largely to use as data backup[citation needed].

SAN-NAS hybrid

[Image: Hybrid using DAS, NAS and SAN technologies.]

Despite the differences between SAN and NAS, it is possible to create solutions that include both technologies.[citation needed]

Benefits

Sharing storage usually simplifies storage administration and adds flexibility since cables and storage devices do not have to be physically moved to shift storage from one server to another.

Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a quick and easy replacement of faulty servers since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. While this area of technology is still new, many view it as being the future of the enterprise datacenter.[2]

SANs also tend to enable more effective disaster-recovery processes. A SAN can extend to a remote site containing a secondary storage array, enabling storage replication implemented by disk-array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow SAN extension over IP networks. The traditional physical SCSI layer could support only a few meters of distance, not nearly enough to ensure business continuity in a disaster.
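One form of such replication can be sketched as an asynchronous copy of changed ("dirty") blocks to a remote array; the dictionaries and block numbers below are purely illustrative:

```python
# Primary and secondary arrays modeled as block-number -> data mappings.
primary = {}
secondary = {}
dirty = set()          # block numbers changed since the last replication cycle

def write(block_no, data):
    """A host write lands on the primary array and marks the block dirty."""
    primary[block_no] = data
    dirty.add(block_no)

def replicate():
    """Asynchronously ship only the changed blocks to the remote array."""
    for block_no in sorted(dirty):
        secondary[block_no] = primary[block_no]   # e.g. over an FCIP or iSCSI link
    dirty.clear()

write(7, b"journal")
write(9, b"data")
replicate()
assert secondary == primary   # the secondary is now a consistent copy
assert not dirty
```

Shipping only dirty blocks is what makes long-distance replication over a comparatively slow IP WAN practical.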

The economic consolidation of disk arrays has accelerated the advancement of several features including I/O caching, snapshotting, and volume cloning (Business Continuance Volumes or BCVs).

Network types

Most storage networks use the SCSI protocol for communication between servers and disk drive devices. A mapping layer to other protocols is used to form a network:

  • Fibre Channel Protocol (FCP), the most prominent, a mapping of SCSI over Fibre Channel
  • iSCSI, mapping SCSI over TCP/IP
  • HyperSCSI, mapping SCSI over Ethernet
  • Fibre Channel over Ethernet (FCoE)
  • FICON, a mapping over Fibre Channel used by mainframe computers
  • ATA over Ethernet (AoE), mapping ATA over Ethernet
  • iSCSI Extensions for RDMA (iSER), mapping iSCSI over InfiniBand
  • iFCP[3] or SANoIP,[4] mapping SCSI over Fibre Channel Protocol over IP

Storage networks may also be built using SAS and SATA technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from IDE direct-attached storage. SAS and SATA devices can be networked using SAS Expanders.
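The common element across these transports is the SCSI command set itself. As a sketch, the fixed 10-byte READ(10) command descriptor block (opcode 0x28) that a mapping layer would encapsulate can be built as follows:

```python
import struct

def read10_cdb(lba, num_blocks):
    """Build a SCSI READ(10) CDB: opcode 0x28, 32-bit LBA, 16-bit transfer length."""
    return struct.pack(
        ">BBIBHB",
        0x28,        # operation code: READ(10)
        0,           # flags (RDPROTECT/DPO/FUA) left clear
        lba,         # logical block address, big-endian 32-bit
        0,           # group number
        num_blocks,  # transfer length in blocks, big-endian 16-bit
        0,           # control byte
    )

cdb = read10_cdb(lba=2048, num_blocks=8)
assert len(cdb) == 10
assert cdb[0] == 0x28
assert cdb[2:6] == (2048).to_bytes(4, "big")
```

An iSCSI initiator wraps such a CDB in an iSCSI PDU carried over TCP/IP, while FCP wraps it in a Fibre Channel frame; the transport changes, but the CDB does not.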

SAN infrastructure

[Image: QLogic SAN switch with optical Fibre Channel connectors installed.]

SANs often use a Fibre Channel fabric topology - an infrastructure specially designed to handle storage communications. It provides faster and more reliable access than higher-level protocols used in NAS. A fabric is similar in concept to a network segment in a local area network. A typical Fibre Channel SAN fabric is made up of a number of Fibre Channel switches.

Today, all major SAN equipment vendors also offer some form of Fibre Channel routing solution, and these bring substantial scalability benefits to the SAN architecture by allowing data to cross between different fabrics without merging them. These offerings use proprietary protocol elements, and the top-level architectures being promoted are radically different. They often enable mapping Fibre Channel traffic over IP or over SONET/SDH.

Compatibility

One of the early problems with Fibre Channel SANs was that the switches and other hardware from different manufacturers were not compatible. Although the basic storage protocol, FCP, was always quite standard, some of the higher-level functions did not interoperate well. Similarly, many host operating systems would react badly to other operating systems sharing the same fabric. Many products were pushed to market before standards were finalized, and vendors have since innovated around the standards[citation needed].

SANs in media and entertainment

Video editing workgroups require very high data transfer rates and very low latency. Outside of the enterprise market, this is one area that greatly benefits from SANs.

SANs in Media and Entertainment are often referred to as Serverless SANs due to the nature of the configuration which places the video workflow (ingest, editing, playout) clients directly on the SAN rather than attaching to servers. Control of data flow is managed by a distributed file system such as StorNext by Quantum.[5]

Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video workgroups because it ensures fair, prioritized bandwidth usage across the network when available bandwidth is insufficient.

Storage virtualization

Storage virtualization is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor proprietary solutions. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.[citation needed]
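A minimal sketch of the mapping table such a layer maintains is shown below; the array names and fixed extent size are hypothetical, chosen only to show logical-to-physical translation:

```python
EXTENT_BLOCKS = 1024   # illustrative extent size, in blocks

class VirtualVolume:
    """Map logical extents to (physical array, physical extent) pairs."""

    def __init__(self):
        self.mapping = {}    # logical extent -> (array id, physical extent)

    def allocate(self, logical_extent, array_id, physical_extent):
        self.mapping[logical_extent] = (array_id, physical_extent)

    def resolve(self, logical_block):
        """Translate a logical block to its physical location (location transparency)."""
        extent, offset = divmod(logical_block, EXTENT_BLOCKS)
        array_id, phys_extent = self.mapping[extent]
        return array_id, phys_extent * EXTENT_BLOCKS + offset

vol = VirtualVolume()
vol.allocate(0, "array-A", 17)   # extents may live on different vendors' arrays
vol.allocate(1, "array-B", 3)

assert vol.resolve(10) == ("array-A", 17 * 1024 + 10)
assert vol.resolve(1024 + 5) == ("array-B", 3 * 1024 + 5)
```

Because hosts see only the logical addresses, the virtualization layer can migrate an extent between arrays by copying the data and updating one table entry.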

SAN Storage QoS (Quality of Service)

SAN Storage QoS (Quality of Service) is the coordination of capacity and performance in a dedicated storage area network. This enables the desired storage performance to be calculated and maintained for network customers accessing the device.

Key factors that affect storage area network QoS are:

  • Bandwidth – The rate of data throughput available on the system.
  • Latency – The time delay for a read/write operation to execute.
  • Queue depth – The number of outstanding operations waiting to execute to the underlying disks (Traditional or SSD).

QoS can be impacted in a SAN storage system by an unexpected increase in data traffic (usage spike) from one network user, which can cause performance to decrease for other users on the same network. This is known as the "noisy neighbor effect." When QoS services are enabled in a SAN storage system, the noisy neighbor effect can be prevented and network storage performance can be accurately predicted.
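One common mechanism for this kind of per-node bandwidth control is a token bucket; the sketch below caps each host's throughput so that one noisy neighbor cannot starve the others (the rates, names, and sizes are illustrative, not any vendor's implementation):

```python
class TokenBucket:
    """Cap a host's I/O rate: each byte sent consumes one token."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)

    def refill(self, elapsed_s):
        """Add tokens for the elapsed time, never exceeding the burst allowance."""
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)

    def try_send(self, nbytes):
        """Admit the request only if enough tokens remain."""
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False   # request is queued or throttled instead

# A "noisy neighbor" limited to 100 MB/s with a 10 MB burst allowance.
noisy = TokenBucket(rate_bytes_per_s=100_000_000, burst_bytes=10_000_000)

assert noisy.try_send(10_000_000)   # the burst is admitted...
assert not noisy.try_send(1)        # ...then the host is throttled
noisy.refill(elapsed_s=0.05)        # 50 ms later: 5 MB of new tokens
assert noisy.try_send(5_000_000)
```

Requests rejected by the bucket are typically queued rather than dropped, which is what turns a usage spike into bounded extra latency for one host instead of degraded performance for all.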

Using SAN storage QoS contrasts with using disk over-provisioning in a SAN environment. Over-provisioning supplies additional capacity to absorb peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually allow all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation.


References

  1. ^ "Novell Doc: OES 1 - Direct Attached Storage Solutions". 
  2. ^ "SAN vs DAS: A Cost Analysis of Storage in the Enterprise". 31 October 2008. Retrieved 2010-01-28. 
  3. ^ "TechEncyclopedia: IP Storage". Retrieved 2007-12-09. 
  4. ^ "TechEncyclopedia: SANoIP". Retrieved 2007-12-09. 
  5. ^ "StorNext Storage Manager - High-speed file sharing, Data Management and Digital Archiving Software". Quantum.com. Retrieved 2013-07-08. 
