OneFS distributed file system
|Introduced||2003 (OneFS 1.0 -- based on FreeBSD)|
|Directory contents||B+ trees|
|File allocation||B+ trees|
|Max. file size||4 TB|
|Max. number of files||Cluster size dependent|
|Max. filename length||255 bytes|
|Max. volume size||15 PB+ (143+ nodes at 108 TB each); theoretical limit of 65,535 nodes|
|Allowed characters in filenames||All bytes except NUL and '/'|
|Dates recorded||Create time, rename time, mtime, ctime, atime|
|Forks||Yes (extended attributes and Alternate Data Streams)|
|File system permissions||Yes (Unix permissions and NTFS ACLs)|
|Supported operating systems||OneFS|
The OneFS file system is a parallel distributed networked file system designed by Isilon Systems for use in its Isilon IQ storage appliances. OneFS is a FreeBSD variant and uses zsh as its shell. The system is administered through its own specialized command set, each command beginning with "isi".
All data structures in the OneFS file system maintain their own protection information. Within the same file system, one file may be protected at +1 (basic parity protection), another at +4 (resilient to four failures), and yet another at 2x (mirroring); this feature is referred to as FlexProtect. FlexProtect is also responsible for automatically rebuilding data in the event of a failure. The protection levels available depend on the number of nodes in the cluster and use Reed–Solomon error correction. Blocks for an individual file are spread across the nodes; for example, block 0 may be on Node 3, block 1 on Node 1, and the related parity block on Node 5. This allows entire nodes to fail without losing access to any data.

File metadata, directories, snapshot structures, quota structures, and a logical inode mapping structure are all based on mirrored B+ trees. Block addresses are generalized 64-bit pointers that reference (node, drive, blknum) tuples. The native block size is 8192 bytes; inodes are 512 bytes on disk.
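The striping scheme described above can be sketched at the simplest protection level, +1, where the parity block is a plain XOR of the data blocks (Reed–Solomon coding generalizes this to tolerate multiple failures). The layout below is purely illustrative and is not Isilon's implementation:

```python
# Minimal sketch of +1 (single-parity) protection: a file's blocks are
# striped across nodes with one XOR parity block, so the stripe survives
# the loss of any single node. 8 KiB matches OneFS's native block size;
# the node assignment here is illustrative only.
BLOCK = 8192

def make_stripe(data_blocks):
    """Return the data blocks plus one XOR parity block."""
    parity = bytearray(BLOCK)
    for blk in data_blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return list(data_blocks) + [bytes(parity)]

def rebuild(stripe, lost_index):
    """Recompute the block at lost_index by XOR-ing all the survivors."""
    out = bytearray(BLOCK)
    for j, blk in enumerate(stripe):
        if j != lost_index:
            for i, b in enumerate(blk):
                out[i] ^= b
    return bytes(out)

blocks = [bytes([n]) * BLOCK for n in (1, 2, 3)]  # data on three nodes
stripe = make_stripe(blocks)                      # parity on a fourth node
assert rebuild(stripe, 1) == blocks[1]            # one node fails; data survives
```

Higher protection levels (+2 through +4) replace the single XOR block with multiple Reed–Solomon parity blocks, which is why the available levels depend on the cluster's node count.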
One distinctive characteristic of OneFS is that metadata is spread homogeneously throughout the nodes; there are no dedicated metadata servers. The only piece of metadata replicated on every node is the list of addresses of the root B+ tree blocks of the inode mapping structure. Everything else can be found from that starting point by following the generalized 64-bit pointers.
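A generalized 64-bit pointer of the kind described above can be modeled as a bit-packed (node, drive, blknum) tuple. The field widths below are hypothetical, since OneFS's actual on-disk layout is not public:

```python
# Hypothetical field widths for a 64-bit (node, drive, blknum) block
# address -- OneFS's real bit layout is proprietary; this only shows
# how such a generalized pointer can name a block anywhere in a cluster.
NODE_BITS, DRIVE_BITS, BLK_BITS = 16, 8, 40

def pack_baddr(node: int, drive: int, blknum: int) -> int:
    """Pack a (node, drive, blknum) tuple into one 64-bit block address."""
    assert node < (1 << NODE_BITS)
    assert drive < (1 << DRIVE_BITS)
    assert blknum < (1 << BLK_BITS)
    return (node << (DRIVE_BITS + BLK_BITS)) | (drive << BLK_BITS) | blknum

def unpack_baddr(addr: int):
    """Split a 64-bit block address back into (node, drive, blknum)."""
    blknum = addr & ((1 << BLK_BITS) - 1)
    drive = (addr >> BLK_BITS) & ((1 << DRIVE_BITS) - 1)
    node = addr >> (DRIVE_BITS + BLK_BITS)
    return node, drive, blknum

addr = pack_baddr(node=3, drive=2, blknum=123456)
assert unpack_baddr(addr) == (3, 2, 123456)
```

Because every pointer carries the node identity, any node holding the replicated root-block address list can resolve any block in the cluster without consulting a metadata server.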
Nodes running OneFS must be connected by a high-performance, low-latency back-end network for optimal performance. OneFS 1.0 through 3.0 used Gigabit Ethernet for this back-end; starting with OneFS 3.5, Isilon offered InfiniBand models, and all nodes now sold use an InfiniBand back-end.
Data, metadata, locking, transaction, group management, allocation, and event traffic go over the back-end RPC system. All data and metadata transfers are zero-copy. All modification operations to on-disk structures are transactional and journaled.
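The transactional, journaled behavior described above can be sketched with a toy write-ahead journal: modifications are logged and marked committed before being applied, so only completed transactions are replayed after a crash. This is an illustrative model, not OneFS's actual journal format:

```python
# Toy write-ahead journal: each transaction's block writes are logged,
# then marked committed, then applied. replay() re-applies only
# transactions that reached their commit record, so a crash mid-write
# never leaves a half-applied modification. (Illustrative only --
# OneFS's real journal format is proprietary.)
class Journal:
    def __init__(self):
        self.log = []   # sequence of ("txn", writes) / ("commit",) records
        self.disk = {}  # block address -> bytes

    def transaction(self, writes):
        """Log the writes, record the commit, then apply them to disk."""
        self.log.append(("txn", dict(writes)))
        self.log.append(("commit",))
        for addr, data in writes.items():
            self.disk[addr] = data

    def replay(self, disk):
        """After a crash, re-apply only fully committed transactions."""
        pending = None
        for rec in self.log:
            if rec[0] == "txn":
                pending = rec[1]
            elif rec[0] == "commit" and pending is not None:
                for addr, data in pending.items():
                    disk[addr] = data
                pending = None
        return disk

j = Journal()
j.transaction({(3, 0, 42): b"inode"})  # key models a (node, drive, blknum)
assert j.replay({}) == {(3, 0, 42): b"inode"}
```

A transaction whose commit record never reached the log would simply be skipped on replay, which is the property that makes on-disk structures consistent after a failure.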
OneFS is equipped with options for accessing storage via NFS, CIFS/SMB, FTP, HTTP, iSCSI, and HDFS. It can utilize non-local authentication such as Active Directory, LDAP, and NIS. It is also capable of interfacing with backup devices using NDMP.