Moose File System

From Wikipedia, the free encyclopedia
Developer(s) Jakub Kruszona-Zawadzki[1] / Core Technology[2]
Stable release 3.0.81-1 / 25 July 2016[3][4][5]
Preview release 3.0.81-1 / 25 July 2016[3][6][7]
Operating system Linux, FreeBSD, Solaris, OpenIndiana,[8] Mac OS X
Type Distributed file system
License GPLv2 / proprietary
Website moosefs.com
Repository github.com/moosefs/moosefs

Moose File System (MooseFS) is an open-source, POSIX-compliant distributed file system developed by Core Technology. MooseFS aims to be a fault-tolerant, highly available, high-performance, scalable, general-purpose network distributed file system for data centers. Initially proprietary software, it was released to the public as open source on May 5, 2008.

Currently, two editions of MooseFS are available:

  • MooseFS - released under the GPLv2 license,
  • MooseFS Professional Edition (MooseFS Pro) - released under a proprietary license, distributed as binary packages.

Design

MooseFS follows design principles similar to those of Fossil, the Google File System, Lustre, and Ceph. The file system comprises four components:

  • Metadata server (MDS) — manages the location (layout) of files, file access and the namespace hierarchy. The current version of MooseFS supports multiple metadata servers and automatic failover. Clients talk to the MDS only to retrieve or update a file's layout and attributes; the data itself is transferred directly between clients and chunk servers. The metadata server is a user-space daemon; the metadata is kept in memory and lazily stored on local disk.
  • Metalogger server — periodically pulls the metadata from the MDS and stores it for backup. Since version 1.6.5 this component is optional.
  • Chunk servers (CSS) — store the data and optionally replicate it among themselves. There can be many of them, though the scalability limit has not been published; the largest cluster reported so far consists of 160 servers.[9] The chunk server is also a user-space daemon that relies on the underlying local file system to manage the actual storage.
  • Clients — talk to both the MDS and the chunk servers. MooseFS clients mount the file system in user space via FUSE.
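The split between metadata and data traffic described above can be illustrated with a toy model. This is only a sketch: the dictionary layout, chunk IDs and addresses below are hypothetical, not the actual MooseFS wire protocol.

```python
# Toy model of the MooseFS read path: the client asks the metadata
# server only for the layout, then fetches chunks directly from
# chunk servers. All names and structures here are illustrative.

CHUNK_SIZE = 64 * 2**20  # MooseFS chunks are up to 64 MiB

# Hypothetical MDS state: path -> ordered list of
# (chunk_id, [addresses of chunk servers holding a replica]).
metadata = {
    "/data/report.bin": [
        ("chunk-001", ["cs1:9422", "cs2:9422"]),
        ("chunk-002", ["cs2:9422", "cs3:9422"]),
    ],
}

def lookup_layout(path):
    """Step 1: query the MDS for the file's chunk layout only."""
    return metadata[path]

def plan_read(path):
    """Step 2: build a read plan that contacts chunk servers directly;
    any replica of a chunk can serve the read."""
    return [(chunk_id, replicas[0])
            for chunk_id, replicas in lookup_layout(path)]

print(plan_read("/data/report.bin"))
```

Because the MDS never sits on the data path, reads and writes scale with the number of chunk servers rather than with the single metadata server.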

Features

To achieve high reliability and performance, MooseFS offers the following features:

  • Fault-tolerance — MooseFS uses replication: data can be replicated across chunk servers, and the replication ratio (N) is set per file or directory. The data remains available even if N-1 replicas fail. At the moment MooseFS offers no other fault-tolerance technique, so protecting very big files requires a vast amount of space: N*filesize instead of the filesize+(N*stripesize) that parity schemes such as RAID 4, RAID 5 or RAID 6 would need. Version 4.x Pro of MooseFS is announced to include RAID 6-style redundancy.
  • Striping — Large files are divided into chunks of up to 64 MiB that may be stored on different chunk servers in order to achieve higher aggregate bandwidth.
  • Load balancing — MooseFS attempts to use storage resources evenly; the current algorithm appears to take only consumed space into account.
  • Security — Apart from classical POSIX file permissions, the 1.6 release added simple, NFS-like authentication/authorization.
  • Coherent snapshots — Quick, low-overhead snapshots.
  • Transparent "trash bin" — Deleted files are retained for a configurable period of time.
  • Data tiering / storage classes — Servers can be "labelled", label definitions called "storage classes" can be created, and these decide on which types of servers the data is stored.[10]
  • Support for "project" quotas
  • Support for POSIX locks and flock locks
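The space-overhead comparison in the fault-tolerance point above is simple arithmetic and can be made concrete. The sketch below is illustrative only (not MooseFS code); the 100 GiB file size and goal of 3 are example values.

```python
# Space needed to protect one file, per the two schemes compared above:
# full replication (what MooseFS uses) vs. a parity stripe layout
# (RAID 4/5/6 style). Example figures only.

GiB = 2**30
MiB = 2**20

def replication_cost(filesize, n):
    """N full replicas; data survives the loss of N-1 of them."""
    return n * filesize

def parity_cost(filesize, n, stripesize):
    """Parity-style layout: filesize + (N * stripesize) as in the text."""
    return filesize + n * stripesize

filesize = 100 * GiB
repl = replication_cost(filesize, 3)          # 300 GiB total
par = parity_cost(filesize, 3, 64 * MiB)      # 100 GiB + 192 MiB total

print(repl // GiB, "GiB vs", par / GiB, "GiB")
```

For large files the gap grows linearly with N, which is why the article notes replication's space cost and the planned RAID 6-style alternative.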

Hardware, software and networking

Similarly to other cluster-based file systems, MooseFS runs on commodity hardware with a POSIX-compliant operating system. TCP/IP is used as the interconnect.

MooseFS in figures[11]

  • Storage size is up to: 2^64 bytes = 16 EiB = 16,384 PiB
  • Single file size is up to: 2^57 bytes = 128 PiB
  • Number of files is up to: 2^31 ≈ 2.1 × 10^9
  • Number of active clients is unlimited; in practice it depends on the number of file descriptors available in the system
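These limits are powers of two, so the stated sizes follow directly from the exponents. A quick arithmetic check (using 1 PiB = 2^50 bytes and 1 EiB = 2^60 bytes):

```python
# Verify the unit conversions behind the limits listed above.
PiB = 2**50
EiB = 2**60

assert 2**64 == 16 * EiB == 16384 * PiB   # maximum storage size
assert 2**57 == 128 * PiB                 # maximum single file size
assert 2**31 == 2_147_483_648             # maximum number of files (~2.1e9)
print("all limits check out")
```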
