This is the talk page for discussing improvements to the Ceph (software) article.
This is not a forum for general discussion of the article's subject.
Conflicts of interest
I am concerned that one of the primary editors of this article has a conflict of interest since he participated in the development of Ceph. — Preceding unsigned comment added by 22.214.171.124 (talk) 16:51, 14 June 2012 (UTC)
- Ceph as in cephalopod, as the article explains - sounds like seff. --Streaky (talk) 16:18, 22 May 2012 (UTC)
- Reading the article as suggested by May 2012 post, "keff" from ke-pha-LEE seems in order? "...The name "Ceph" is a common nickname given to pet octopuses and derives from cephalopods, a class of molluscs, and ultimately from Ancient Greek κεφαλή (ke-pha-LEE)..."? — Preceding unsigned comment added by 126.96.36.199 (talk) 16:53, 6 November 2013 (UTC)
old page that got deleted
Here is the old page that got deleted as blatant advertising...
Ceph (File:Ceph-logo1.jpg)
- Developer(s): University of California, Santa Cruz
- Operating system: Linux
- Type: Distributed file system
- License: LGPL
- Website: http://ceph.sourceforge.net
Ceph is still in development; please check out the project homepage at http://ceph.sourceforge.net. This makes Ceph not yet suitable for building large systems, as Lustre is.
Like other free-software distributed, parallel, fault-tolerant file systems (of which there are not many), Ceph has certain features. Basic ones, such as many servers working together, are obvious, so let's list the distinctive ones:
- Adaptive metadata. This means that a single directory may be distributed among many servers using a hash-like function. A simple example of this may be a mail server using Maildir for storage, with a couple of million mails in a single store, getting hammered by all servers at once.
- Fault tolerance. Together with GlusterFS, Ceph has built-in fault tolerance. This means a file may be automatically replicated among many servers.
- High performance. Like GlusterFS and Lustre, Ceph may stripe files over many servers. Ceph has good throughput even on small files, using a custom on-disk object file system on the servers.
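The "hash-like function" mentioned in the adaptive-metadata point above can be illustrated with a toy sketch. To be clear, this is not Ceph's actual algorithm (Ceph's metadata servers use dynamic subtree partitioning, falling back to hashing for very large directories); the server names, the choice of MD5, and the modulo mapping here are illustrative assumptions only.

```python
# Toy sketch: distributing a directory's entries across metadata servers
# with a stable hash. NOT Ceph's real scheme; purely illustrative.
import hashlib

SERVERS = ["mds0", "mds1", "mds2"]  # hypothetical metadata servers


def server_for(entry_name: str) -> str:
    """Map a directory entry to one server with a deterministic hash."""
    digest = hashlib.md5(entry_name.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]


# A Maildir with many messages spreads its entries over all servers,
# so no single server absorbs every lookup for that directory:
placement = {name: server_for(name)
             for name in ("msg000001", "msg000002", "msg000003")}
```

Because the mapping is deterministic, any client can compute where an entry lives without asking a central coordinator, which is the property the "hammered by all servers at once" example relies on.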
Mark Shuttleworth put $1 million into Inktank
http://www.h-online.com/open/news/item/Mark-Shuttleworth-puts-1M-into-Inktank-1704709.html — Preceding unsigned comment added by 188.8.131.52 (talk) 05:46, 12 September 2012 (UTC)
I think that File:Ceph-konzept.png is not correct when it comes to CephFS. It suggests that librados is involved, but the FUSE driver doesn't use it, and the native FS driver is part of the kernel. I think that this image depicts it correctly: http://storageio.com/images/Ceph_Architecture1.gif
- Hello! But, File:Ceph-konzept.png and http://storageio.com/images/Ceph_Architecture1.gif depict pretty much exactly the same internal layout? Did you maybe have another image in mind? — Dsimic (talk | contribs) 21:42, 19 February 2015 (UTC)
Repair and disaster recovery tools
The CephFS (filesystem) implementation lacks standard file system repair tools, and the Ceph user documentation does not recommend storing mission-critical data on this architecture because it lacks disaster recovery capabilities and tools.
That statement no longer applies to the latest release (v10.2.0 is deemed production-ready). The documentation is not yet up to date, so the statement can be removed or updated once the documentation has been. Wasill37 (talk) 04:20, 22 April 2016 (UTC)