Notice
This document is for a development version of Ceph.
Intro to Ceph
Ceph can be used to provide Ceph Object Storage and Ceph Block Device services to Cloud Platforms, and to deploy a Ceph File System. All Ceph Storage Cluster deployments begin with setting up each Ceph Node and then setting up the network.
A Ceph Storage Cluster requires the following: at least one Ceph Monitor, at least one Ceph Manager, and at least as many Ceph Object Storage Daemons (OSDs) as there are copies of a given object stored in the Ceph cluster (for example, if three copies of a given object are stored in the Ceph cluster, then at least three OSDs must exist in that Ceph cluster).
The Ceph Metadata Server is necessary to run Ceph File System clients.
Note
It is a best practice to have a Ceph Manager for each Monitor, but it is not necessary.

- Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, the MDS map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
- Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based modules to manage and expose Ceph cluster information, including a web-based Ceph Dashboard. At least two managers are normally required for high availability.
- Ceph OSDs: An Object Storage Daemon (Ceph OSD, ceph-osd) stores data, handles data replication, recovery, and rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least three Ceph OSDs are normally required for redundancy and high availability.
- MDSes: A Ceph Metadata Server (MDS, ceph-mds) stores metadata for the Ceph File System. Ceph Metadata Servers allow CephFS users to run basic commands (like ls, find, etc.) without placing a burden on the Ceph Storage Cluster.
- RGWs: A Ceph Object Gateway (RGW, ceph-radosgw) daemon provides a RESTful gateway between applications and Ceph storage clusters. The S3-compatible API is most commonly used, though Swift is also available.
Ceph stores data as objects within logical storage pools. Using the CRUSH algorithm, Ceph calculates which placement group (PG) should contain the object, and which OSD should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
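The two-step mapping above (object to PG, then PG to OSDs) can be sketched in a few lines of Python. This is a simplified illustration only, not Ceph's actual implementation: real Ceph hashes object names with the rjenkins hash and a stable modulo, and CRUSH chooses OSDs by walking a weighted hierarchy of failure domains (hosts, racks, and so on) rather than a flat list. All names and parameters here are illustrative assumptions.

```python
import hashlib

def place_object(obj_name: str, pool_pg_num: int, osd_ids: list, size: int = 3):
    """Map an object to a placement group, then the PG to `size` distinct OSDs.

    Simplified sketch: real Ceph uses rjenkins hashing, stable_mod, and a
    CRUSH hierarchy with per-device weights, none of which is modeled here.
    """
    # Step 1 - object -> PG: hash the object name, modulo the pool's PG count.
    digest = hashlib.sha256(obj_name.encode()).digest()
    pg_id = int.from_bytes(digest[:4], "big") % pool_pg_num

    # Step 2 - PG -> OSDs: rank all OSDs by a pseudo-random score seeded by
    # the PG id, and take the top `size` as the acting set. Because the score
    # depends only on (pg_id, osd), the mapping is deterministic and can be
    # recomputed by any client without consulting a central lookup table.
    def rank(osd):
        h = hashlib.sha256(f"{pg_id}:{osd}".encode()).digest()
        return int.from_bytes(h[:4], "big")

    acting_set = sorted(osd_ids, key=rank)[:size]
    return pg_id, acting_set

pg, osds = place_object("myimage.rbd", pool_pg_num=128, osd_ids=list(range(6)))
print(f"object -> pg {pg} -> osds {osds}")
```

The key property this sketch shares with CRUSH is that placement is computed, not looked up: any client holding the cluster map can derive the same PG and acting set independently, which is what lets Ceph scale, rebalance, and recover without a central metadata broker for object locations.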
Recommendations
To begin using Ceph in production, you should review our hardware recommendations and operating system recommendations.
Get Involved
You can get help, or contribute documentation, source code, or bug reports, by getting involved in the Ceph community.
Brought to you by the Ceph Foundation
The Ceph Documentation is a community resource funded and hosted by the non-profit Ceph Foundation. If you would like to support this and our other efforts, please consider joining now.