In computer storage, the NetApp filer, also known as NetApp Fabric-Attached Storage (FAS), NetApp All Flash FAS (AFF), or NetApp's network-attached storage (NAS) device, is NetApp's offering in the area of storage systems. A FAS functions in an enterprise-class storage area network (SAN) as well as a networked storage appliance; for this reason filers are colloquially referred to as "toasters". A filer can serve storage over a network using file-based protocols such as NFS, SMB, FTP, TFTP, and HTTP, and can also serve data over block-based protocols such as Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI. NetApp filers implement their physical storage in large disk arrays.
Most large-storage filers from other vendors tend to use commodity computers running an operating system such as Microsoft Windows Server, VxWorks or tuned Linux. NetApp filers instead use highly customized hardware and the proprietary Data ONTAP operating system with the WAFL file system, all originally designed by NetApp founders David Hitz and James Lau specifically for storage-serving purposes. Data ONTAP is NetApp's internal operating system, specially optimised for storage functions at both high and low level. It boots from FreeBSD as a stand-alone kernel-space module and uses some FreeBSD functions (the command interpreter and driver stack, for example).
All filers have battery-backed NVRAM, which allows them to commit writes to stable storage quickly, without waiting for disks. Early filers connected to external disk enclosures via SCSI, while modern models (as of 2009) use the FC and SAS protocols. The disk enclosures (shelves) support FC hard disk drives, as well as parallel ATA, Serial ATA and Serial Attached SCSI drives.
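The role of the battery-backed NVRAM can be illustrated with a toy write-journal model (a sketch, not NetApp code; all names here are hypothetical): a write is acknowledged as soon as it lands in the durable journal, and is flushed to disk later, or replayed after an unplanned shutdown.

```python
class NVRAMJournal:
    """Toy model of a battery-backed write journal (illustrative only)."""

    def __init__(self):
        self.log = []   # journal entries; assumed to survive power loss
        self.disk = {}  # slow, stable backing store

    def write(self, block_id, data):
        # Step 1: append to the NVRAM log -- fast and durable,
        # so the client write can be acknowledged immediately.
        self.log.append((block_id, data))
        return "ack"

    def flush(self):
        # Step 2 (later, in the background): replay the log to disk,
        # then truncate it.
        for block_id, data in self.log:
            self.disk[block_id] = data
        self.log.clear()

    def replay_after_crash(self):
        # After an unplanned shutdown, any unflushed entries are
        # played forward from the journal.
        self.flush()
```

The point of the sketch is the ordering: the acknowledgement depends only on the journal append, never on the disk write.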
Implementers often organize two filers in a high-availability cluster with a private high-speed link, either Fibre Channel, InfiniBand, or 10 Gigabit Ethernet. One can additionally group such clusters together under a single namespace when running in the "cluster mode" of the Data ONTAP 8 operating system.
Modern NetApp filers consist of customized computers with Intel processors using PCI. Each filer has a proprietary NVRAM adapter to log all writes for performance and to play the data log forward in the event of an unplanned shutdown. Two filers can be linked together as a cluster, which NetApp (as of 2009) refers to using the less ambiguous term "active/active".
Each filer model comes with a set configuration of processor, RAM and NVRAM, which users cannot expand after purchase. With the exception of some entry-level storage controllers, NetApp filers have at least one PCIe-based slot available for additional network, tape and/or disk connections. In June 2008 NetApp announced the Performance Acceleration Module (PAM) to optimize the performance of workloads that carry out intensive random reads. This optional card goes into a PCIe slot and provides additional memory (cache) between the disks and the filer RAM/NVRAM, thus improving performance.
NetApp supports SATA, Fibre Channel or SAS disk drives, which it groups into RAID (Redundant Array of Independent Disks) groups of up to 28 disks (26 data disks plus 2 parity disks). Multiple RAID groups form an "aggregate", and within aggregates the Data ONTAP operating system sets up "flexible volumes" to actually store data that users can access. An alternative is "traditional volumes", where one or more RAID groups form a single static volume. Flexible volumes offer the advantage that many of them can be created on a single aggregate and resized at any time; smaller volumes can then share all of the spindles available to the underlying aggregate. Traditional volumes and aggregates can only be expanded, never contracted. However, traditional volumes can (theoretically) handle slightly higher I/O throughput than flexible volumes (with the same number of spindles), as they do not have to go through an additional virtualisation layer to talk to the underlying disks. NetApp FAS storage systems that contain only SSD drives and run the SSD-optimized ONTAP OS are called All Flash FAS (AFF).
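The capacity arithmetic above can be sketched as follows. This is a simplified illustration, not ONTAP's actual accounting: the 2 TB drive size and the group layouts are hypothetical examples, and real aggregates reserve additional space for metadata and spares.

```python
def raid_group_usable(disks, parity=2, disk_tb=2.0):
    """Usable capacity of one RAID group with dual parity:
    parity disks store no user data (simplified model)."""
    if disks <= parity:
        raise ValueError("need more disks than parity drives")
    return (disks - parity) * disk_tb

def aggregate_usable(groups):
    """An aggregate is built from multiple RAID groups; its usable
    capacity is (roughly) the sum over its groups."""
    return sum(raid_group_usable(*g) for g in groups)

# A maximum-size 28-disk group with hypothetical 2 TB drives:
# 26 data disks -> 52 TB usable; adding a 14-disk group adds 24 TB.
total = aggregate_usable([(28, 2, 2.0), (14, 2, 2.0)])
print(total)  # 76.0
```

The sketch also shows why flexible volumes help: every volume carved from this aggregate can draw on all 42 spindles, rather than being pinned to one RAID group.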
PAM / Flash Cache
A NetApp filer can have a PAM (Performance Acceleration Module) or Flash Cache (PAM II) card, which can reduce read latencies and allows the filer to support more read-intensive work without adding further disks to the underlying RAID.
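The effect of such a card can be illustrated with a generic LRU read-through cache (a toy model, loosely analogous to PAM/Flash Cache, not the actual firmware): repeated reads of hot blocks are served from the cache tier instead of reaching the disks.

```python
from collections import OrderedDict

class ReadCache:
    """Toy read-through cache with LRU eviction (illustrative only)."""

    def __init__(self, capacity, disk_read):
        self.capacity = capacity
        self.disk_read = disk_read   # fallback read path (the "disks")
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.disk_read(block_id)        # slow path
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data
```

Only misses touch the disk layer, which is why a large read cache lets the same spindle count sustain a heavier random-read workload.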
MetroCluster (MC) is free functionality for FAS and AFF systems that provides metro-distance high availability with synchronous replication between two sites; this configuration requires additional equipment. MetroCluster uses a plex technique: on one site, a number of disks form one or more RAID groups aggregated into a plex, while the second site has the same number of disks of the same type and RAID configuration. One plex synchronously replicates to the other. The two plexes form an aggregate in which data is stored, and in the event of a disaster at one site, the second site provides read-write access to the data.
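The plex scheme can be sketched as a toy two-site mirror (illustrative only; names are hypothetical): every write is committed to both plexes before it is considered done, so either site alone holds a complete copy.

```python
class MirroredAggregate:
    """Toy model of a two-plex aggregate: plex0 is the local site,
    plex1 the remote site (illustrative, not ONTAP internals)."""

    def __init__(self):
        self.plex0 = {}
        self.plex1 = {}

    def write(self, block_id, data):
        # Synchronous replication: the write completes only after
        # both plexes have stored the block.
        self.plex0[block_id] = data
        self.plex1[block_id] = data

    def read(self, block_id, failed_site=None):
        # If one site is lost, the surviving plex serves all data.
        plex = self.plex1 if failed_site == 0 else self.plex0
        return plex[block_id]
```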
Clustered Metro Cluster
With MetroCluster it is possible to have more than one storage controller per site, forming a cluster, or Clustered MetroCluster (MCC). In an MCC configuration, one remote and one local storage node form a metro HA or Disaster Recovery pair (DR pair), while two local nodes form a local HA pair; thus each node synchronously replicates to two nodes, one remote and one local. For short distances, MetroCluster requires at least one FC-VI or newer iWARP card per controller. FAS and AFF systems with ONTAP software versions 9.2 and older utilize FC-VI cards and, for long distances, require four dedicated Fibre Channel switches (two on each site) and two FC-to-SAS bridges per disk-shelf stack (thus a minimum of four in total for two sites), plus a minimum of two dark-fibre ISL links, with optional DWDMs for long distances.
Metro Cluster over IP
Starting with ONTAP 9.3, MetroCluster over IP was introduced, with no need for dedicated back-end Fibre Channel switches, FC-to-SAS bridges or dedicated dark-fibre ISLs. MetroCluster over IP requires Ethernet cluster switches with installed ISLs and utilizes iWARP cards in each storage controller for synchronous replication.
Data ONTAP OS
NetApp filers use a proprietary OS called ONTAP (previously Data ONTAP). The main purpose of the OS in a storage system is to serve data to clients in a non-disruptive manner over data protocols such as CIFS, NFS, iSCSI and Fibre Channel, and to provide enterprise features such as high availability, disaster recovery and data backup. ONTAP provides enterprise-level data management features such as FlexClone, SnapMirror and SnapLock, most of them snapshot-based capabilities of the WAFL file system.
WAFL File System
WAFL, the robust versioning filesystem in NetApp's proprietary ONTAP OS, provides snapshots, which allow end-users to see earlier versions of files in the file system. Snapshots appear in a hidden directory:
~snapshot for Windows (SMB) or
.snapshot for Unix (NFS). Up to 255 snapshots can be made of any traditional or flexible volume. Snapshots are read-only, although ONTAP additionally provides the ability to make writable "virtual clones", called "FlexClones", based on the "WAFL snapshot" technique.
ONTAP implements snapshots by tracking changes to disk-blocks between snapshot operations. It can set up snapshots in seconds because it only needs to take a copy of the root inode in the filesystem. This differs from the snapshots provided by some other storage vendors in which every block of storage has to be copied, which can take many hours.
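The root-inode trick can be illustrated with a minimal copy-on-write model (a sketch, not WAFL's actual on-disk layout; all names are hypothetical): because data blocks are never overwritten in place, a snapshot only needs to copy the root mapping, which takes constant time regardless of volume size.

```python
class COWVolume:
    """Toy copy-on-write volume (illustrative only)."""

    def __init__(self):
        self.blocks = {}     # block_id -> data; shared until superseded
        self.root = {}       # "root inode": file name -> block_id
        self.snapshots = {}  # label -> frozen copy of a root mapping
        self._next = 0

    def write(self, name, data):
        # Never overwrite in place: allocate a fresh block and
        # repoint the root at it. Old blocks stay intact.
        self.blocks[self._next] = data
        self.root = dict(self.root, **{name: self._next})
        self._next += 1

    def snapshot(self, label):
        # Only the root mapping is copied -- O(1) in the number of
        # data blocks, since existing blocks are shared.
        self.snapshots[label] = dict(self.root)

    def read(self, name, snapshot=None):
        root = self.snapshots[snapshot] if snapshot else self.root
        return self.blocks[root[name]]
```

After a snapshot, new writes land in fresh blocks while the snapshot's root still points at the old ones, which is why earlier file versions remain visible.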
Prior to the release of ONTAP 8, individual aggregate sizes were limited to a maximum of 2 TB for FAS250 models and 16 TB for all other models.
The limitation on aggregate size, coupled with the increasing density of disk drives, served to limit the performance of the overall system. NetApp, like most storage vendors, increases overall system performance by parallelizing disk writes across many different spindles (disk drives). Large-capacity drives therefore limit the number of spindles that can be added to a single aggregate, and thereby limit the aggregate's performance.
Each aggregate also incurs a storage capacity overhead of approximately 7-11%, depending on the disk type. On systems with many aggregates this can result in lost storage capacity.
This overhead comes from additional block-level checksumming as well as the usual file-system overhead, similar to that of file systems such as NTFS or ext3. Block checksumming helps ensure that data errors at the disk-drive level do not result in data loss.
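As a worked example of this overhead (using the 7–11% range quoted above on a hypothetical 100 TB aggregate; the exact fraction depends on disk type):

```python
def usable_after_overhead(raw_tb, overhead_fraction=0.10):
    """Capacity left after per-aggregate overhead
    (checksums plus filesystem metadata; simplified model)."""
    return raw_tb * (1 - overhead_fraction)

# A hypothetical 100 TB aggregate keeps 89-93 TB usable
# at 11% and 7% overhead respectively.
for f in (0.07, 0.11):
    print(usable_after_overhead(100, f))
```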
Data ONTAP 8.0 supports a new 64-bit aggregate format, which increases the size limit of a flexible volume to approximately 100 TB and the size limit of aggregates to more than 100 TB on newer models (both depending on the storage platform), thus restoring the ability to configure large spindle counts to increase performance and storage efficiency.
| Model | Status | Released | CPU | Main memory | NVRAM | Raw capacity | Benchmark | Result |
|---|---|---|---|---|---|---|---|---|
| FASServer 400 | Discontinued | Jan 1993 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ? |
| FASServer 450 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ? |
| FASServer 1300 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ? |
| FASServer 1400 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | | ? |
| FASServer | Discontinued | Jan 1995 | 50 MHz Intel i486 | 256 MB | 4 MB | ? GB | | 640 |
| F330 | Discontinued | Sept 1995 | 90 MHz Intel Pentium | 256 MB | 8 MB | 117 GB | | 1310 |
| F220 | Discontinued | Feb 1996 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 754 |
| F540 | Discontinued | June 1996 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | | 2230 |
| F210 | Discontinued | May 1997 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 1113 |
| F230 | Discontinued | May 1997 | 90 MHz Intel Pentium | 256 MB | 8 MB | ? GB | | 1610 |
| F520 | Discontinued | May 1997 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | | 2361 |
| F630 | Discontinued | June 1997 | 500 MHz DEC Alpha 21164A | 512 MB | 32 MB | 464 GB | | 4328 |
| F720 | Discontinued | Aug 1998 | 400 MHz DEC Alpha 21164A | 256 MB | 8 MB | 464 GB | | 2691 |
| F740 | Discontinued | Aug 1998 | 400 MHz DEC Alpha 21164A | 512 MB | 32 MB | 928 GB | | 5095 |
| F760 | Discontinued | Aug 1998 | 600 MHz DEC Alpha 21164A | 1 GB | 32 MB | 1.39 TB | | 7750 |
| F85 | Discontinued | Feb 2001 | | 256 MB | 64 MB | 648 GB | | |
| F87 | Discontinued | Dec 2001 | 1.13 GHz Intel P3 | 256 MB | 64 MB | 576 GB | | |
| F810 | Discontinued | Dec 2001 | 733 MHz Intel P3 Coppermine | 512 MB | 128 MB | 1.5 TB | | 4967 |
| F820 | Discontinued | Dec 2000 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | | 8350 |
| F825 | Discontinued | Aug 2002 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | | 8062 |
| F840 | Discontinued | Aug/Dec? 2000 | 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 6 TB | | 11873 |
| F880 | Discontinued | July 2001 | Dual 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 9 TB | | 17531 |
| FAS920 | Discontinued | May 2004 | 2.0 GHz Intel P4 Xeon | 2 GB | 256 MB | 7 TB | | 13460 |
| FAS940 | Discontinued | Aug 2002 | 1.8 GHz Intel P4 Xeon | 3 GB | 256 MB | 14 TB | | 17419 |
| FAS960 | Discontinued | Aug 2002 | Dual 2.2 GHz Intel P4 Xeon | 6 GB | 256 MB | 28 TB | | 25135 |
| FAS980 | Discontinued | Jan 2004 | Dual 2.8 GHz Intel P4 Xeon MP 2 MB L3 | 8 GB | 512 MB | 50 TB | | 36036 |
| FAS250 | EOA 11/08 | Jan 2004 | 600 MHz Broadcom BCM1250 dual-core MIPS | 512 MB | 64 MB | 4 TB | | |
| FAS270 | EOA 11/08 | Jan 2004 | 650 MHz Broadcom BCM1250 dual-core MIPS | 1 GB | 128 MB | 16 TB | | 13620* |
| FAS2020 | EOA 8/12 | June 2007 | 2.2 GHz Mobile Celeron | 1 GB | 128 MB | 68 TB | | |
| FAS2040 | EOA 8/12 | Sept 2009 | 1.66 GHz Intel Xeon | 4 GB | 512 MB | 136 TB | | |
| FAS2050 | EOA 5/11 | June 2007 | 2.2 GHz Mobile Celeron | 2 GB | 256 MB | 104 TB | | 20027* |
| FAS2220 | EOA 3/15 | June 2012 | 1.73 GHz dual-core Intel Xeon C3528 | 6 GB | 768 MB | 180 TB | | |
| FAS2240 | EOA 3/15 | November 2011 | 1.73 GHz dual-core Intel Xeon C3528 | 6 GB | 768 MB | 432 TB | | 38000 |
| FAS2520 | EOA 12/17 | June 2014 | 1.73 GHz dual-core Intel Xeon C3528 | 36 GB | 4 GB | 840 TB | | |
| FAS2552 | EOA 12/17 | June 2014 | 1.73 GHz dual-core Intel Xeon C3528 | 36 GB | 4 GB | 1243 TB | | |
| FAS2554 | EOA 12/17 | June 2014 | 1.73 GHz dual-core Intel Xeon C3528 | 36 GB | 4 GB | 1440 TB | | |
| FAS2620 | | Nov 2016 | 1 x 6-core | 64 GB | 8 GB | 1440 TB | | |
| FAS2650 | | Nov 2016 | 1 x 6-core | 64 GB | 8 GB | 1243 TB | | |
| FAS3020 | EOA 4/09 | May 2005 | 2.8 GHz Intel Xeon | 2 GB | 512 MB | 84 TB | | 34089* |
| FAS3040 | EOA 4/09 | Feb 2007 | Dual 2.4 GHz AMD Opteron 250 | 4 GB | 512 MB | 336 TB | | 60038* |
| FAS3050 | Discontinued | May 2005 | Dual 2.8 GHz Intel Xeon | 4 GB | 512 MB | 168 TB | | 47927* |
| FAS3070 | EOA 4/09 | Nov 2006 | Dual 1.8 GHz AMD dual-core Opteron | 8 GB | 512 MB | 504 TB | | 85615* |
| FAS3140 | EOA 2/12 | June 2008 | Single 2.4 GHz AMD dual-core Opteron 2216 | 4 GB | 512 MB | 420 TB | SFS2008 | 40109* |
| FAS3160 | EOA 2/12 | | Dual 2.6 GHz AMD dual-core Opteron 2218 | 8 GB | 2 GB | 672 TB | SFS2008 | 60409* |
| FAS3170 | EOA 2/12 | June 2008 | Dual 2.6 GHz AMD dual-core Opteron 2218 | 16 GB | 2 GB | 840 TB | SFS97_R1 | 137306* |
| FAS3210 | EOA 11/13 | Nov 2010 | Single 2.3 GHz Intel Xeon (E5220) | 8 GB | 2 GB | 480 TB | SFS2008 | 64292 |
| FAS3220 | EOA 12/14 | Nov 2012 | Single 2.3 GHz quad-core Intel Xeon (L5410) | 12 GB | 3.2 GB | 1.44 PB | ?? | ?? |
| FAS3240 | EOA 11/13 | Nov 2010 | Dual 2.33 GHz quad-core Intel Xeon (L5410) | 16 GB | 2 GB | 1.20 PB | ?? | ?? |
| FAS3250 | EOA 12/14 | Nov 2012 | Dual 2.33 GHz quad-core Intel Xeon (L5410) | 40 GB | 4 GB | 2.16 PB | SFS2008 | 100922 |
| FAS3270 | EOA 11/13 | Nov 2010 | Dual 3.0 GHz Intel Xeon (E5240) | 40 GB | 4 GB | 1.92 PB | SFS2008 | 101183 |
| FAS6030 | EOA 6/09 | Mar 2006 | Dual 2.6 GHz AMD Opteron | 32 GB | 512 MB | 840 TB | SFS97_R1 | 100295* |
| FAS6040 | EOA 3/12 | Dec 2007 | 2.6 GHz AMD dual-core Opteron | 16 GB | 512 MB | 840 TB | | |
| FAS6070 | EOA 6/09 | Mar 2006 | Quad 2.6 GHz AMD Opteron | 64 GB | 2 GB | 1.008 PB | | 136048* |
| FAS6080 | EOA 3/12 | Dec 2007 | 4 to 8 2.6 GHz AMD dual-core Opteron | 64 GB | 4 GB | 1.176 PB | SFS2008 | 120011* |
| FAS6210 | EOA 11/13 | Nov 2010 | 2 x 2.27 GHz Intel Xeon E5520 | 48 GB | 8 GB | 2.40 PB | | |
| FAS6220 | EOA 3/15 | Feb 2013 | 2 x 64-bit 4-core Intel Xeon E5520 | 96 GB | 8 GB | 4.80 PB | | |
| FAS6240 | EOA 11/13 | Nov 2010 | 2 x 2.53 GHz Intel Xeon E5540 | 96 GB | 8 GB | 2.88 PB | SFS2008 | 190675 |
| FAS6250 | EOA 3/15 | Feb 2013 | 2 x 64-bit 4-core | 144 GB | 8 GB | 5.76 PB | | |
| FAS6280 | EOA 11/13 | Nov 2010 | 2 x 2.93 GHz Intel Xeon X5670 | 192 GB | 8 GB | 2.88 PB | | |
| FAS6290 | EOA 3/15 | Feb 2013 | 2 x 64-bit 6-core | 192 GB | 8 GB | 5.76 PB | | |
| FAS8020 | EOA 12/17 | Mar 2014 | 1 x Intel Xeon E5-2620 @ 2.00 GHz | 24 GB | 8 GB | 1.92 PB | SFS2008 | 110281 |
| FAS8040 | EOA 12/17 | Mar 2014 | 1 x 64-bit 8-core 2.10 GHz | 64 GB | 16 GB | 2.88 PB | | |
| FAS8060 | EOA 12/17 | Mar 2014 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | 4.80 PB | | |
| FAS8080X | EOA 12/17 | Jun 2014 | 2 x 64-bit 10-core 2.80 GHz | 256 GB | 32 GB | 8.64 PB | SPC-1 IOPS | 685,281.71* |
| FAS8200 | | Nov 2016 | 1 x 16-core 1.70 GHz D-1587 | 128 GB | 16 GB | 4.80 PB | SPEC SFS2014_swbuild | 4130 MBps / 260 020 IOPS @ 2.7 ms (ORT = 1.04 ms) |
| FAS9000 | | Nov 2016 | 2 x 18-core 2.30 GHz E5-2697V4 | 512 GB | 64 GB | 14.4 PB | | |
| AFF8040 | EOA 10/17 | Mar 2014 | 1 x 64-bit 8-core 2.10 GHz | 64 GB | 16 GB | | | |
| AFF8060 | EOA 11/16 | Mar 2014 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | | | |
| AFF8080 | EOA 10/17 | Jun 2014 | 2 x 64-bit 10-core 2.80 GHz | 256 GB | 32 GB | | | |
| AFF A200 | | 2016 | 1 x 6-core Intel Xeon D-1528 @ 1.90 GHz | 64 GB | 8 GB | | | |
| AFF A300 | | 2016 | 1 x 16-core Intel Xeon D-1587 @ 1.70 GHz | 128 GB | 16 GB | | | |
| AFF A700 | | 2016 | 2 x 18-core 2.30 GHz E5-2697V4 | 512 GB | 64 GB | | | |
| AFF A700s | | 2017 | 2 x 18-core 2.30 GHz E5-2697V4 | 512 GB | 32 GB | | SPC-1 | 2 400 059 IOPS @ 0.69 ms |
EOA = End of Availability
SPECsfs results marked with "*" are clustered results. The SPECsfs benchmarks performed include SPECsfs93, SPECsfs97, SPECsfs97_R1 and SPECsfs2008. Results from different benchmark versions are not comparable.