Ceph is a distributed object, block, and file storage platform.
See https://ceph.com/ for current information about Ceph.
Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is either public domain or licensed under a BSD-style license.
The Ceph documentation is licensed under Creative Commons Attribution ShareAlike 3.0 (CC-BY-SA-3.0).
Some headers included in the `ceph/ceph` repository are licensed under the GPL. See the file `COPYING` for a full inventory of licenses by file.
All code contributions must include a valid "Signed-off-by" line. See the file `SubmittingPatches.rst` for details on this and instructions on how to generate and submit patches.
Assignment of copyright is not required to contribute code. Code is contributed under the terms of the applicable license.
Clone the `ceph/ceph` repository from GitHub by running the following command on a system that has git installed:

```
git clone git@github.com:ceph/ceph
```
Alternatively, if you are not a GitHub user, run the following command on a system that has git installed:

```
git clone https://github.com/ceph/ceph.git
```
When the `ceph/ceph` repository has been cloned to your system, run the following commands to move into the cloned repository and check out the git submodules associated with it:

```
cd ceph
git submodule update --init --recursive --progress
```
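If you are unsure whether the submodule step above completed, the following sketch checks the checkout before building (the messages and the `git rev-parse` guard are illustrative, not part of the Ceph tooling):

```shell
# Sketch: verify that the checkout's submodules are initialized.
# "git submodule status" prefixes uninitialized submodules with "-".
if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "not inside a git checkout"
elif git submodule status --recursive | grep -q '^-'; then
    echo "run: git submodule update --init --recursive"
else
    echo "all submodules initialized"
fi
```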
section last updated 06 Sep 2024
We provide the Debian and Ubuntu `apt` commands in this procedure. If you use a system with a different package manager, you will have to use different commands.
1. Install `curl`:

   ```
   apt install curl
   ```

2. Install package dependencies by running the `install-deps.sh` script:

   ```
   ./install-deps.sh
   ```

3. Install the `python3-routes` package:

   ```
   apt install python3-routes
   ```
These instructions are meant for developers who are compiling the code for development and testing. To build binaries suitable for installation, we recommend that you build `.deb` or `.rpm` packages, or refer to `ceph.spec.in` or `debian/rules` to see which configuration options are specified for production builds.
To build Ceph, follow this procedure:
1. Make sure that you are in the top-level `ceph` directory that contains `do_cmake.sh` and `CONTRIBUTING.rst`.

2. Run the `do_cmake.sh` script:

   ```
   ./do_cmake.sh
   ```

   See build types.

3. Move into the `build` directory:

   ```
   cd build
   ```

4. Use the `ninja` build system to build the development environment:

   ```
   ninja -j3
   ```
> [!IMPORTANT]
> Ninja is the build system used by the Ceph project to build test builds. If unspecified, the number of jobs used by `ninja` is derived from the number of CPU cores of the building host. Use the `-j` option to limit the job number if build jobs are running out of memory. If you attempt to run `ninja` and receive a message that reads `g++: fatal error: Killed signal terminated program cc1plus`, then you have run out of memory.
>
> Using the `-j` option with an argument appropriate to the hardware on which the `ninja` command is run is expected to result in a successful build. For example, to limit the job number to 3, run the command `ninja -j3`. On average, each `ninja` job run in parallel needs approximately 2.5 GiB of RAM.
>
> This documentation assumes that your build directory is a subdirectory of the `ceph.git` checkout. If the build directory is located elsewhere, point `CEPH_GIT_DIR` to the correct path of the checkout. Additional CMake args can be specified by setting `ARGS` before invoking `do_cmake.sh`. See cmake options for more details. For example:
>
> ```
> ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh
> ```
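The 2.5 GiB-per-job guideline above can be turned into a quick calculation (a sketch; assumes Linux's `/proc/meminfo` and coreutils' `nproc`, and the variable names are illustrative):

```shell
# Suggest a ninja job count bounded by available RAM (~2.5 GiB per job,
# per the guideline above) and by the number of CPU cores.
mem_gib=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo)
jobs=$(( mem_gib * 2 / 5 ))        # integer form of mem_gib / 2.5
[ "$jobs" -lt 1 ] && jobs=1        # always allow at least one job
cores=$(nproc)
[ "$jobs" -gt "$cores" ] && jobs=$cores
echo "suggested: ninja -j$jobs"
```

For example, a 16 GiB host with 8 cores would suggest `ninja -j6`.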
To build only certain targets, run a command of the following form:

```
ninja [target name]
```
Install the vstart cluster:

```
ninja install
```
`do_cmake.sh` by default creates a "debug build" of Ceph (assuming `.git` exists). A `Debug` build's runtime performance may be as little as 20% of that of a non-debug build. Pass `-DCMAKE_BUILD_TYPE=RelWithDebInfo` to `do_cmake.sh` to create a non-debug build. The default build type is `RelWithDebInfo` if `.git` does not exist.
| CMake mode | Debug info | Optimizations | Sanitizers | Checks | Use for |
|---|---|---|---|---|---|
| Debug | Yes | `-Og` | None | `ceph_assert`, `assert` | gdb, development |
| RelWithDebInfo | Yes | `-O2`, `-DNDEBUG` | None | `ceph_assert` only | production |
The `-D` flag can be used with `cmake` to speed up the process of building Ceph and to customize the build.
The RADOS Gateway is built by default. To build Ceph without the RADOS Gateway, run a command of the following form:

```
cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]
```
Run a command of the following form to build Ceph with debugging and alternate locations for some external dependencies:

```
cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" ..
```
Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By default, `cmake` builds these bundled dependencies from source instead of using libraries that are already installed on the system. You can opt to use these system libraries, as long as they meet Ceph's version requirements. To use system libraries, use `cmake` options like `WITH_SYSTEM_BOOST`, as in the following example:

```
cmake -DWITH_SYSTEM_BOOST=ON [...]
```
To view an exhaustive list of `-D` options, invoke `cmake -LH`:

```
cmake -LH
```
If you pipe `ninja` to `less` and would like to preserve the diagnostic colors in the output in order to make errors and warnings more legible, run the following command:

```
cmake -DDIAGNOSTICS_COLOR=always ...
```

The above command works only with supported compilers.

The diagnostic colors will be visible when the following command is run:

```
ninja | less -R
```

Other available values for `DIAGNOSTICS_COLOR` are `auto` (default) and `never`.
- Use "debug builds" only when needed. Debug builds are helpful for development, but they can slow down performance. Use `-DCMAKE_BUILD_TYPE=Release` when debugging isn't necessary.
- Enable selective daemons when testing specific components. Don't start unnecessary daemons.
- Preserve existing data: skip cluster reinitialization between tests by using the `-n` flag.
- To manage a vstart cluster, stop daemons using `./stop.sh` and start them with `./vstart.sh --daemon osd.${ID} [--nodaemonize]`.
- Restart the sockets by stopping and restarting the daemons associated with them. This ensures that there are no stale sockets in the cluster.
- To track RocksDB performance, set `export ROCKSDB_PERF=true` and start the cluster by using the command `./vstart.sh -n -d -x --bluestore`.
- Build with `vstart-base` using debug flags in cmake, compile, and deploy via `./vstart.sh -d -n --bluestore`.
- To containerize, generate configurations with `vstart.sh`, and deploy with Docker, mapping directories and configuring the network.
- Manage containers using `docker run`, `stop`, and `rm`. For detailed setups, consult the Ceph-Container repository.
- Cluster fails to start: look for errors in the logs under the `out/` directory.
- OSD crashes: check the OSD logs for errors.
- Cluster in a `Health Error` state: run the `ceph status` command to identify the issue.
- RocksDB errors: look for RocksDB-related errors in the OSD logs.
To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

```
./make-dist
```

This will create a tarball like `ceph-$version.tar.bz2` from git. (Ensure that any changes you want to include in your working directory are committed to git.)
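The version in the tarball name comes from the git checkout; the sketch below previews it using `git describe` (the `v*` match pattern and the `unknown` fallback are assumptions about a typical tagged checkout, not guaranteed to match `make-dist` exactly):

```shell
# Preview the versioned tarball name; prints "ceph-unknown.tar.bz2"
# when run outside a tagged ceph checkout.
version=$(git describe --match 'v*' 2>/dev/null | sed 's/^v//')
[ -n "$version" ] || version=unknown
echo "would produce: ceph-${version}.tar.bz2"
```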
From the `ceph/` directory, run the following commands to launch a test Ceph cluster:

```
cd build
ninja vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s
```
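A freshly started test cluster can take a moment to settle. The helper below polls `ceph -s` until it reports `HEALTH_OK` (a sketch: `CEPH_BIN`, `wait_healthy`, and the retry/delay numbers are all hypothetical conveniences, not part of vstart):

```shell
# Poll the vstart cluster until "ceph -s" reports HEALTH_OK, or give up.
CEPH_BIN=${CEPH_BIN:-./bin/ceph}

wait_healthy() {
    retries=${1:-30}                  # number of attempts, 2 seconds apart
    while [ "$retries" -gt 0 ]; do
        "$CEPH_BIN" -s 2>/dev/null | grep -q HEALTH_OK && return 0
        retries=$((retries - 1))
        sleep 2
    done
    return 1
}
```

For example, `wait_healthy 30 && ./bin/ceph -s` from the `build/` directory.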
Most Ceph commands are available in the `bin/` directory. For example:

```
./bin/rbd create foo --size 1000
./bin/rados -p foo bench 30 write
```
To shut down the test cluster, run the following command from the `build/` directory:

```
../src/stop.sh
```
Use the sysvinit script to start or stop individual daemons:

```
./bin/init-ceph restart osd.0
./bin/init-ceph stop
```
To build and run all tests (in parallel using all processors), use `ctest`:

```
cd build
ninja
ctest -j$(nproc)
```
(Note: Many targets built from `src/test` are not run using `ctest`. Targets starting with "unittest" are run in `ninja check` and thus can be run with `ctest`. Targets starting with "ceph_test" cannot, and should be run by hand.)

When failures occur, look in `build/Testing/Temporary` for logs.
To build and run all tests and their dependencies without other unnecessary targets in Ceph:

```
cd build
ninja check -j$(nproc)
```
To run an individual test manually, run `ctest` with `-R` (regex matching):

```
ctest -R [regex matching test name(s)]
```
(Note: `ctest` does not build the test it's running or the dependencies needed to run it.)
To run an individual test manually and see all the test output, run `ctest` with the `-V` (verbose) flag:

```
ctest -V -R [regex matching test name(s)]
```
To run tests manually and run the jobs in parallel, run `ctest` with the `-j` flag:

```
ctest -j [number of jobs]
```
There are many other flags you can give `ctest` for better control over manual test execution. To view these options, run:

```
man ctest
```
Ceph now provides tools to build the code, run unit tests, or build packages from within an OCI-style container using Podman or Docker. This allows you to build code for distributions other than the one on your system, avoids the need to install Ceph's build dependencies locally, and provides an opportunity to test builds on platforms that are not yet supported by the official build infrastructure. For more details see the container build document.
The list of package dependencies for building the documentation can be found in `doc_deps.deb.txt`:

```
sudo apt-get install `cat doc_deps.deb.txt`
```
To build the documentation, ensure that you are in the top-level `ceph` directory, and execute the build script. For example:

```
admin/build-doc
```
To report an issue and view existing issues, please visit https://tracker.ceph.com/projects/ceph.