ceph/ceph

Ceph is a distributed object, block, and file storage platform


See https://ceph.com/ for current information about Ceph.

Status

OpenSSF Best Practices · Issue Backporting

Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is either public domain or licensed under a BSD-style license.

The Ceph documentation is licensed under Creative Commons Attribution-ShareAlike 3.0 (CC-BY-SA-3.0).

Some headers included in the ceph/ceph repository are licensed under the GPL. See the file COPYING for a full inventory of licenses by file.

All code contributions must include a valid "Signed-off-by" line. See the file SubmittingPatches.rst for details on this and instructions on how to generate and submit patches.

Assignment of copyright is not required to contribute code. Code is contributed under the terms of the applicable license.

Checking out the source

Clone the ceph/ceph repository from GitHub by running the following command on a system that has git installed:

git clone git@github.com:ceph/ceph

Alternatively, if you are not a GitHub user, run the following command on a system that has git installed:

git clone https://github.com/ceph/ceph.git

When the ceph/ceph repository has been cloned to your system, run the following commands to move into the cloned ceph/ceph repository and to check out the git submodules associated with it:

cd ceph
git submodule update --init --recursive --progress
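
If your git is reasonably recent, the clone and submodule checkout can also be combined into a single step; this is a convenience sketch, not the form used above:

git clone --recurse-submodules https://github.com/ceph/ceph.git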

Build Prerequisites

section last updated 06 Sep 2024

We provide the Debian and Ubuntu apt commands in this procedure. If you use a system with a different package manager, you will have to use different commands (one possibility is sketched after this list).

1. Install curl:

apt install curl

2. Install package dependencies by running the install-deps.sh script:

./install-deps.sh

3. Install the python3-routes package:

apt install python3-routes
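
As one possibility for a non-apt system, the same steps on a dnf-based distribution might look like the following sketch (the dnf package names are an assumption; install-deps.sh itself detects your distribution):

dnf install curl
./install-deps.sh
dnf install python3-routes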

Building Ceph

These instructions are meant for developers who are compiling the code for development and testing. To build binaries that are suitable for installation we recommend that you build .deb or .rpm packages, or refer to ceph.spec.in or debian/rules to see which configuration options are specified for production builds.

To build Ceph, follow this procedure:

  1. Make sure that you are in the top-level ceph directory that contains do_cmake.sh and CONTRIBUTING.rst.

  2. Run the do_cmake.sh script:

    ./do_cmake.sh

    See Build Types below.

  3. Move into the build directory:

    cd build

  4. Use the ninja build system to build the development environment:

    ninja -j3

    [!IMPORTANT]

    Ninja is the build system used by the Ceph project to build test builds. The number of jobs used by ninja is derived from the number of CPU cores of the building host if unspecified. Use the -j option to limit the job number if build jobs are running out of memory. If you attempt to run ninja and receive a message that reads g++: fatal error: Killed signal terminated program cc1plus, then you have run out of memory.

    Using the -j option with an argument appropriate to the hardware on which the ninja command is run is expected to result in a successful build. For example, to limit the job number to 3, run the command ninja -j3. On average, each ninja job run in parallel needs approximately 2.5 GiB of RAM.

    This documentation assumes that your build directory is a subdirectory of the ceph.git checkout. If the build directory is located elsewhere, point CEPH_GIT_DIR to the correct path of the checkout. Additional CMake args can be specified by setting ARGS before invoking do_cmake.sh. See the CMake Options section for more details. For example:

    ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh

    To build only certain targets, run a command of the following form:

    ninja [target name]

  5. Install the vstart cluster:

    ninja install
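
Putting the procedure together, a from-scratch build on a host with about 8 GiB of RAM might look like the following sketch (the job count follows the ~2.5 GiB-per-job guidance above):

./do_cmake.sh
cd build
ninja -j3        # 3 jobs x ~2.5 GiB RAM stays within 8 GiB
ninja install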

Build Types

do_cmake.sh by default creates a "debug build" of Ceph (assuming .git exists). The runtime performance of a debug build may be as little as 20% of that of a non-debug build. Pass -DCMAKE_BUILD_TYPE=RelWithDebInfo to do_cmake.sh to create a non-debug build. The default build type is RelWithDebInfo when .git does not exist.

| CMake mode     | Debug info | Optimizations | Sanitizers | Checks              | Use for          |
|----------------|------------|---------------|------------|---------------------|------------------|
| Debug          | Yes        | -Og           | None       | ceph_assert, assert | gdb, development |
| RelWithDebInfo | Yes        | -O2, -DNDEBUG | None       | ceph_assert only    | production       |
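
For example, using the ARGS mechanism described above, a non-debug build can be configured like this:

ARGS="-DCMAKE_BUILD_TYPE=RelWithDebInfo" ./do_cmake.sh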

CMake Options

The -D flag can be used with cmake to speed up the process of building Ceph and to customize the build.

Building without RADOS Gateway

The RADOS Gateway is built by default. To build Ceph without the RADOS Gateway, run a command of the following form:

cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]

Building with debugging and arbitrary dependency locations

Run a command of the following form to build Ceph with debugging and alternate locations for some external dependencies:

cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
  ..

Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By default, cmake builds these bundled dependencies from source instead of using libraries that are already installed on the system. You can opt to use these system libraries, as long as they meet Ceph's version requirements. To use system libraries, use cmake options like WITH_SYSTEM_BOOST, as in the following example:

cmake -DWITH_SYSTEM_BOOST=ON [...]
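
These -D options can be combined in a single invocation. For instance, a sketch that disables the RADOS Gateway and uses the system Boost at the same time:

cmake -DWITH_RADOSGW=OFF -DWITH_SYSTEM_BOOST=ON [path to top-level ceph directory]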

To view an exhaustive list of -D options, invoke cmake -LH:

cmake -LH

Preserving diagnostic colors

If you pipe ninja to less and would like to preserve the diagnostic colors in the output in order to make errors and warnings more legible, run the following command:

cmake -DDIAGNOSTICS_COLOR=always ...

The above command works only with supported compilers.

The diagnostic colors will be visible when the following command is run:

ninja | less -R

Other available values for DIAGNOSTICS_COLOR are auto (default) and never.

Tips and Tricks

  • Use "debug builds" only when needed. Debugging builds are helpful fordevelopment, but they can slow down performance. Use-DCMAKE_BUILD_TYPE=Release when debugging isn't necessary.
  • Enable Selective Daemons when testing specific components. Don't startunnecessary daemons.
  • Preserve Existing Data skip cluster reinitialization between tests byusing the-n flag.
  • To manage a vstart cluster, stop daemons using./stop.sh and start themwith./vstart.sh --daemon osd.${ID} [--nodaemonize].
  • Restart the sockets by stopping and restarting the daemons associated withthem. This ensures that there are no stale sockets in the cluster.
  • To track RocksDB performance, setexport ROCKSDB_PERF=true and startthe cluster by using the command./vstart.sh -n -d -x --bluestore.
  • Build withvstart-base using debug flags in cmake, compile, and deployvia./vstart.sh -d -n --bluestore.
  • To containerize, generate configurations withvstart.sh, and deploy withDocker, mapping directories and configuring the network.
  • Manage containers usingdocker run,stop, andrm. For detailedsetups, consult the Ceph-Container repository.
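
As a concrete sketch of the RocksDB tip above (run from the directory that holds vstart.sh in your setup; the path is an assumption):

export ROCKSDB_PERF=true
./vstart.sh -n -d -x --bluestore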

Troubleshooting

  • Cluster fails to start: look for errors in the logs under the out/ directory.
  • OSD crashes: check the OSD logs for errors.
  • Cluster in a HEALTH_ERR state: run the ceph status command to identify the issue (see the sketch after this list).
  • RocksDB errors: look for RocksDB-related errors in the OSD logs.
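
A quick triage sketch from the build/ directory of a vstart cluster (the log file name is an assumption; vstart typically writes per-daemon logs such as out/osd.0.log):

./bin/ceph status              # identify the unhealthy component
grep -i error out/osd.0.log    # inspect that daemon's log for errors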

Building a source tarball

To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

./make-dist

This will create a tarball like ceph-$version.tar.bz2 from git. (Ensure that any changes you want to include in your working directory are committed to git.)

Running a test cluster

From the ceph/ directory, run the following commands to launch a test Ceph cluster:

cd build
ninja vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s

Most Ceph commands are available in the bin/ directory. For example:

./bin/rbd create foo --size 1000
./bin/rados -p foo bench 30 write

To shut down the test cluster, run the following command from the build/ directory:

../src/stop.sh

Use the sysvinit script to start or stop individual daemons:

./bin/init-ceph restart osd.0
./bin/init-ceph stop

Running unit tests

To build and run all tests (in parallel using all processors), use ctest:

cd build
ninja
ctest -j$(nproc)

(Note: Many targets built from src/test are not run using ctest. Targets starting with "unittest" are run in ninja check and thus can be run with ctest. Targets starting with "ceph_test" cannot, and should be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.

To build and run all tests and their dependencies without other unnecessary targets in Ceph:

cd build
ninja check -j$(nproc)

To run an individual test manually, run ctest with -R (regex matching):

ctest -R [regex matching test name(s)]

(Note: ctest does not build the test it's running or the dependencies needed to run it.)

To run an individual test manually and see all of the test's output, run ctest with the -V (verbose) flag:

ctest -V -R [regex matching test name(s)]
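
Because ctest does not build anything, a typical manual loop builds the target with ninja first and then runs it verbosely. The test name here is an assumption for illustration:

ninja unittest_bufferlist
ctest -V -R unittest_bufferlist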

To run tests manually and run the jobs in parallel, run ctest with the -j flag:

ctest -j [number of jobs]

There are many other flags you can give ctest for better control over manual test execution. To view these options, run:

man ctest

Building Ceph using Containers

Ceph now provides tools to build the code, run unit tests, or build packages from within an OCI-style container using Podman or Docker! This allows you to build code for distributions other than the one on your system, avoids the need to install Ceph's build dependencies on your local system, and provides an opportunity to test builds on platforms that are not yet supported by the official build infrastructure. For more details, see the container build document.

Building the Documentation

Prerequisites

The list of package dependencies for building the documentation can be found in doc_deps.deb.txt:

sudo apt-get install `cat doc_deps.deb.txt`

Building the Documentation

To build the documentation, ensure that you are in the top-level ceph directory, and execute the build script. For example:

admin/build-doc

Reporting Issues

To report an issue or to view existing issues, please visit https://tracker.ceph.com/projects/ceph.
