Crimson (Tech Preview)

Crimson is the code name of crimson-osd, the next-generation ceph-osd. It is designed to deliver enhanced performance on fast network and storage devices by leveraging modern technologies including DPDK and SPDK.

Crimson is intended to be a drop-in replacement for the classic Object Storage Daemon (OSD), aiming to allow seamless migration from existing ceph-osd deployments.

The second phase of the project introduces SeaStore, a complete redesign of the object storage backend built around Crimson's native architecture. SeaStore is optimized for high-performance storage devices such as NVMe and may not be suitable for traditional HDDs. Crimson will continue to support BlueStore, ensuring compatibility with HDDs and slower SSDs.

See ceph.io/en/news/crimson.

Crimson is in the tech-preview stage. See Crimson's Developer Guide for developer information.

Deploying Crimson with cephadm

Note

Cephadm SeaStore support is in its early stages.

The Ceph CI/CD pipeline builds containers with crimson-osd replacing the standard ceph-osd.

Once a branch at commit <sha1> has been built and is available in Shaman / Quay, you can deploy it using the cephadm instructions outlined in Cephadm, with the following adaptations.

The latest main branch is built daily and the images are available in quay (filter crimson-release). We recommend using one of the latest available builds, as Crimson evolves rapidly.

Use the --image flag to specify a Crimson build:

cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1>-crimson-release --allow-mismatched-release bootstrap ...

Note

Crimson builds are available in two variants: crimson-debug and crimson-release. For testing purposes the release variant should be used. The debug variant is intended primarily for development.

You’ll likely need to include the --allow-mismatched-release flag to use a non-release branch.

Crimson CPU allocation

Note

  1. Allocation options cannot be changed after deployment.

  2. vstart.sh sets these options using the --crimson-smp flag (see the example below).
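
For example, a local development cluster might be started like this (an illustrative sketch; the environment variables and the -n and -x flags are ordinary vstart.sh options, not Crimson-specific):

MDS=0 MON=1 OSD=3 MGR=1 ../src/vstart.sh -n -x --crimson --crimson-smp 2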

The crimson_cpu_num parameter defines the number of CPUs used to serve Seastar reactors. Each reactor is expected to run on a dedicated CPU core.

This parameter does not have a default value. Admins must configure it at the OSD level based on system resources and cluster requirements before deploying the OSDs.

We recommend setting a value for crimson_cpu_num that is less than the host's number of CPU cores (nproc) divided by the number of OSDs on that host.

For example, to allocate eight CPU cores to each OSD on a node:

ceph config set osd crimson_cpu_num 8

Note that crimson_cpu_num does not pin threads to specific CPU cores. To explicitly assign CPU cores to Crimson OSDs, use the crimson_cpu_set parameter. This enables CPU pinning, which may improve performance. However, using this option requires manually setting the CPU set for each OSD, and is generally less recommended due to its complexity.
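
For example, to pin the reactors of one OSD to the first eight cores (illustrative values; osd.0 and the 0-7 core range depend on your host's topology):

ceph config set osd.0 crimson_cpu_set 0-7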

Crimson Required Flags

Note

Crimson is in a tech preview stage and is not suitable for production use.

After starting your cluster, prior to deploying OSDs, you'll need to configure the Crimson CPU allocation and enable Crimson to direct the default pools to be created as Crimson pools. You can proceed by running the following after you have a running cluster:

ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson
ceph osd set-allow-crimson --yes-i-really-mean-it
ceph config set mon osd_pool_default_crimson true

The first command enables the crimson experimental feature.

The second enables the allow_crimson OSDMap flag. The monitor will not allow crimson-osd to boot without that flag.

The last causes pools to be created by default with the crimson flag. Crimson pools are restricted to operations supported by Crimson. crimson-osd won't instantiate PGs from non-Crimson pools.
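
To sanity-check that the flags took effect, you can inspect the OSDMap flags and the monitor configuration (exact output varies by release):

ceph osd dump | grep allow_crimson
ceph config get mon osd_pool_default_crimson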

Object Store Backends

crimson-osd supports two categories of object store backends: native and non-native.

Native Backends

Native backends perform I/O operations using the Seastar reactor. These are tightly integrated with the Seastar framework and follow its design principles:

seastore

SeaStore is the primary native object store for Crimson OSD. It is built with the Seastar framework and adheres to its asynchronous, shard-based architecture.

cyanstore

CyanStore is inspired by memstore from the classic OSD, offering a lightweight, in-memory object store model. CyanStore does not store data and should be used only for measuring OSD overhead, without the cost of actually storing data.

Non-Native Backends

Non-native backends operate through a thread pool proxy, which interfaces with object stores running in alien threads (worker threads not managed by Seastar). These backends allow Crimson to interact with legacy or external object store implementations:

bluestore

The default object store used by the classic ceph-osd. It provides robust, production-grade storage capabilities.

The crimson_bluestore_num_threads option needs to be set according to the available CPU set. It defines the number of threads dedicated to serving the BlueStore ObjectStore on each OSD.

If crimson_cpu_num is used (see Crimson CPU allocation), the counterpart crimson_bluestore_cpu_set should also be set so that the two CPU sets are mutually exclusive.
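
For example, on OSDs allotted eight cores each, the reactor and BlueStore CPU sets could be kept disjoint as follows (all values are illustrative; adjust per your topology):

ceph config set osd crimson_cpu_num 4
ceph config set osd crimson_bluestore_num_threads 4
ceph config set osd.0 crimson_cpu_set 0-3
ceph config set osd.0 crimson_bluestore_cpu_set 4-7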

memstore

An in-memory object store backend, primarily used for testing and development purposes.

Metrics and Tracing

Crimson offers three ways to report stats and metrics.

PG stats reported to the Manager

Crimson collects the per-PG, per-pool, and per-OSD stats in an MPGStats message, which is sent to the Ceph Managers. Manager modules can query them using the MgrModule.get() method.

Asock command

An admin socket command is offered for dumping metrics:

ceph tell osd.0 dump_metrics
ceph tell osd.0 dump_metrics reactor_utilization

Here reactor_utilization is an optional string allowing us to filter the dumped metrics by prefix.

Prometheus text protocol

The listening port and address can be configured using the --prometheus_port command line option. See Prometheus for more details.
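
Assuming the endpoint was configured to listen on port 9180 (an illustrative value, as is the /metrics path), the exposition can be scraped with a plain HTTP GET:

curl http://localhost:9180/metrics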
