Grid-oriented storage

Grid-oriented Storage (GOS) was a term for a data storage architecture developed by a university project during the era when grid computing was popular.

Description

GOS was presented as a successor to network-attached storage (NAS). Like traditional file servers, GOS systems contained hard disks, often arranged as RAIDs (redundant arrays of independent disks).

GOS was designed for the long-distance, cross-domain and single-image file operations that are typical of grid environments. A GOS system behaved like a file server, serving any entity on the grid via the file-based GOS-FS protocol. Similar to GridFTP, GOS-FS integrated a parallel stream engine and the Grid Security Infrastructure (GSI).
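
The parallel stream engine can be illustrated with a minimal sketch. The host name, the use of HTTP range requests and the four-stream split below are illustrative assumptions, not the GOS-FS wire protocol; the sketch only shows the underlying idea shared with GridFTP, namely dividing a remote file into byte ranges, fetching each range over its own connection, and reassembling the parts in order.

    import concurrent.futures
    import urllib.request

    # Hypothetical endpoint for illustration only; GOS-FS used its own protocol.
    URL = "https://gos-appliance.example/data/genome.db"
    STREAMS = 4

    def fetch_range(start: int, end: int) -> bytes:
        """Fetch one byte range of the remote file over its own connection."""
        req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    def parallel_fetch(size: int) -> bytes:
        """Split the file into STREAMS ranges, fetch them concurrently and
        reassemble in order -- the essence of a parallel stream engine.
        Assumes size >= STREAMS; in practice size would come from a prior
        HEAD request's Content-Length."""
        chunk = size // STREAMS
        ranges = [(i * chunk, size - 1 if i == STREAMS - 1 else (i + 1) * chunk - 1)
                  for i in range(STREAMS)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=STREAMS) as pool:
            parts = list(pool.map(lambda r: fetch_range(*r), ranges))
        return b"".join(parts)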

Because GOS-FS conformed to the standard VFS (virtual file system switch) interface, it could serve as an underlying layer that exploited the increased transfer bandwidth and accelerated NFS/CIFS-based applications. GOS could also run over SCSI, Fibre Channel or iSCSI without affecting this acceleration, offering both file-level and block-level protocols for storage area networks (SANs) from the same system.
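
Plugging in beneath the VFS means an unmodified application reads a GOS-FS mount with the same system calls it uses for any local path. A minimal sketch (the mount point /mnt/gos is an assumption for illustration):

    # Standard POSIX file I/O: the kernel's VFS dispatches these calls to
    # whichever filesystem backs the path -- local disk, NFS or, hypothetically,
    # a GOS-FS mount (the path below is assumed, not a documented location).
    with open("/mnt/gos/data/results.csv", "rb") as f:
        header = f.readline()  # identical code regardless of the filesystem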

In a grid infrastructure, resources may be geographically distant from one another, produced by different manufacturers, and subject to differing access control policies. Access to grid resources is therefore dynamic and conditional on local constraints, and centralized management techniques scale poorly in both execution efficiency and fault tolerance. Providing services across such platforms requires a distributed resource management mechanism.

Peer-to-peer clustering of GOS appliances allowed a single storage image to keep expanding even after an individual appliance reached its capacity limit. The cluster shared a common, aggregate presentation of the data stored on all participating appliances, while each appliance managed its own internal storage space. The major benefit of this aggregation was that clustered GOS storage could be accessed by users as a single mount point, as in the sketch below.
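
The following toy model is not the GOS implementation; the appliance names and the hash-based placement are illustrative assumptions. It shows how several appliances, each managing its own storage, can present one logical namespace that grows as appliances join.

    import hashlib

    class ClusteredImage:
        """Toy single-image view over several appliances, each managing its
        own internal storage (modelled here as one dict per appliance)."""

        def __init__(self, appliance_names):
            self.appliances = {name: {} for name in appliance_names}
            self.names = sorted(appliance_names)

        def _owner(self, path: str) -> str:
            # Deterministic placement: hash the path onto one appliance.
            h = int(hashlib.sha256(path.encode()).hexdigest(), 16)
            return self.names[h % len(self.names)]

        def write(self, path: str, data: bytes) -> None:
            self.appliances[self._owner(path)][path] = data

        def read(self, path: str) -> bytes:
            return self.appliances[self._owner(path)][path]

        def add_appliance(self, name: str) -> None:
            """Capacity grows by joining a new appliance. A real system would
            also rebalance existing data when membership changes; this sketch
            omits that step."""
            self.appliances[name] = {}
            self.names = sorted(self.appliances)

    # Usage: clients see one namespace regardless of which appliance holds the data.
    cluster = ClusteredImage(["gos1", "gos2"])
    cluster.write("/experiments/run42.dat", b"...")
    assert cluster.read("/experiments/run42.dat") == b"..."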

GOS products fit the thin-server category. Compared with traditional "fat-server" storage architectures, thin-server GOS appliances were claimed to alleviate potential network and grid bottlenecks, to dedicate the CPU and operating system to I/O alone, and to offer easier installation, remote management, minimal maintenance, low cost and plug-and-play operation. Other thin-server devices include NAS appliances, network printers, fax machines, routers and switches.

An Apache web server was installed in the GOS operating system, providing HTTPS-based communication between the GOS server and an administrator's web browser. This remote management and monitoring interface was intended to make GOS systems easy to set up, manage and monitor.
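
From the administrator's side, such management reduces to ordinary HTTPS requests. A minimal sketch (the appliance host name, the status path and the JSON response shape are assumptions, since no GOS management API is documented here):

    import json
    import urllib.request

    # Hypothetical management endpoint exposed by the appliance's Apache server.
    STATUS_URL = "https://gos-appliance.example/admin/status"

    with urllib.request.urlopen(STATUS_URL) as resp:  # TLS-protected channel
        status = json.loads(resp.read())
    print(status.get("capacity_used"))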

History

Frank Zhigang Wang and Na Helian submitted a funding proposal to the UK government titled "Grid-Oriented Storage (GOS): Next Generation Data Storage System Architecture for the Grid Computing Era" in 2003. The proposal was approved and granted one million pounds[citation needed] in 2004. The first prototype was constructed in 2005 at the Centre for Grid Computing of the Cambridge-Cranfield High Performance Computing Facility. The first conference presentation was at the IEEE Symposium on Cluster Computing and the Grid (CCGrid), 9–12 May 2005, Cardiff, UK; as one of the five best works in progress, it was included in IEEE Distributed Systems Online. In 2006, the GOS architecture and its implementations were published in IEEE Transactions on Computers under the title "Grid-oriented Storage: A Single-Image, Cross-Domain, High-Bandwidth Architecture". Starting in January 2007, demonstrations were presented at Princeton University, the Cambridge University Computer Laboratory and elsewhere. As of 2013, the Cranfield Centre still described the project in the future tense.[1]

Peer-to-peer file sharing systems use similar techniques.

Notes

  1. ^ "Centre for Grid Computing". Cranfield University. Retrieved June 14, 2013.

Further reading

  • Frank Wang, Na Helian, Sining Wu, Yuhui Deng, Yike Guo, Steve Thompson, Ian Johnson, Dave Milward and Robert Maddock, Grid-Oriented Storage, IEEE Distributed Systems Online, vol. 6, no. 9, September 2005.
  • Frank Wang, Sining Wu, Na Helian, Andy Parker, Yike Guo, Yuhui Deng and Vineet Khare, Grid-oriented Storage: A Single-Image, Cross-Domain, High-Bandwidth Architecture, IEEE Transactions on Computers, vol. 56, no. 4, pp. 474–487, 2007.
  • Frank Zhigang Wang, Sining Wu and Na Helian, An Underlying Data-Transporting Protocol for Accelerating Web Communications, International Journal of Computer Networks, Elsevier, 2007.
  • Frank Zhigang Wang, Sining Wu, Na Helian, Yuhui Deng, Vineet Khare, Chris Thompson and Michael Parker, Grid-based Data Access to Nucleotide Sequence Database with 6x Improvement in Response Times, New Generation Computing, vol. 25, no. 2, 2007.
  • Frank Wang, Yuhui Deng and Na Helian, Evolutionary Storage: Speeding up a Magnetic Disk by Clustering Frequent Data, IEEE Transactions on Magnetics, vol. 43, no. 6, 2007.
  • Frank Zhigang Wang, Na Helian, Sining Wu, Yuhui Deng, Vineet Khare, Chris Thompson and Michael Parker, Grid-based Storage Architecture for Accelerating Bioinformatics Computing, Journal of VLSI Signal Processing Systems, vol. 48, no. 1, 2007.
  • Yuhui Deng and Frank Wang, A Heterogeneous Storage Grid Enabled by Grid Service, ACM Operating Systems Review, vol. 41, no. 1, 2007.
  • Yuhui Deng and Frank Wang, Optimal Clustering Size of Small File Access in Network Attached Storage Device, Parallel Processing Letters, vol. 17, no. 1, 2007.