
Install Ceph on Ubuntu

Ceph is a storage system designed for excellent performance, reliability, and scalability. However, installing and managing Ceph can be challenging. The Ceph-on-Ubuntu solution takes the administrative minutiae out of the equation through the use of snaps and Juju charms. With either approach, deploying a Ceph cluster becomes trivial, as does scaling the cluster's storage capacity.

Choose the Ceph installation option for your deployment:

Single-node deployment

  • Uses MicroCeph
  • Works on a workstation or VM
  • Suitable for testing and development

These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.

You will need a multi-core processor, at least 8 GiB of memory, and 100 GB of disk space. MicroCeph has been tested on x86-based physical and virtual machines running Ubuntu 22.04 LTS.
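Before installing, it can be worth running a quick pre-flight check against these requirements. The commands below are a sketch using standard tools, not part of MicroCeph itself:

```shell
# Pre-flight check (sketch): report CPU cores, memory, disk space and snapd
nproc        # multi-core processor recommended
free -h      # at least 8 GiB of memory
df -h /      # at least 100 GB of disk space
snap version 2>/dev/null || echo "snapd not installed"
```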

  1. To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:

    sudo snap install microceph
  2. Then bootstrap the cluster:

    sudo microceph cluster bootstrap
  3. Check the cluster status with the following command:

    sudo microceph.ceph status

    Here you should see that there is a single node in the cluster.

  4. To use MicroCeph as a single node, the default CRUSH rules need to be modified. The default rule replicates data across hosts, which a single-node cluster cannot satisfy, so replace it with a rule that replicates across OSDs instead:

    sudo microceph.ceph osd crush rule rm replicated_rule
    sudo microceph.ceph osd crush rule create-replicated single default osd
  5. Next, add some disks that will be used as OSDs:

    sudo microceph disk add /dev/sd[x] --wipe

    Repeat for each disk you would like to use as an OSD on that node, and additionally on the other nodes in the cluster. Cluster status can be verified using:

    sudo microceph.ceph status
    sudo microceph.ceph osd status
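Once the steps above complete, the result can be sanity-checked. The commands below are a sketch: the first two confirm that the OSD-level rule from step 4 is active, and the pool name `testpool` is arbitrary, used only as a smoke test:

```shell
# Confirm the osd-level CRUSH rule created in step 4 is in place
sudo microceph.ceph osd crush rule ls
sudo microceph.ceph osd crush rule dump single

# Smoke test: create a pool and confirm it is listed
sudo microceph.ceph osd pool create testpool
sudo microceph.ceph osd pool ls

# Clean up (pool deletion must be explicitly permitted by the monitor):
# sudo microceph.ceph config set mon mon_allow_pool_delete true
# sudo microceph.ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
```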

Multi-node deployment

  • Uses MicroCeph
  • Minimum of 4 nodes for a full-HA Ceph cluster
  • Suitable for small-scale production environments

As in the single-node deployment, these installation instructions use MicroCeph – Ceph in a snap – a pure upstream Ceph distribution designed for small-scale and edge deployments.

You will need 4 physical machines, each with a multi-core processor, at least 8 GiB of memory, and 100 GB of disk space. MicroCeph has been tested on x86-based physical machines running Ubuntu 22.04 LTS.

  1. To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:

    sudo snap install microceph
  2. Then bootstrap the cluster from the first node:

    sudo microceph cluster bootstrap
  3. On the first node, add other nodes to the cluster:

    sudo microceph cluster add node[x]
  4. Copy the resulting join token and use it on node[x]:

    sudo microceph cluster join pasted-output-from-node1

    Repeat these steps for each additional node you would like to add to the cluster.

  5. Check the cluster status with the following command:

    sudo microceph.ceph status

    Here you should see that all the nodes you added have joined the cluster, in the familiar ceph status output.

  6. Next, add some disks to each node that will be used as OSDs:

    sudo microceph disk add /dev/sd[x] --wipe

    Repeat for each disk you would like to use as an OSD on that node, and additionally on the other nodes in the cluster. Cluster status can be verified using:

    sudo microceph.ceph status
    sudo microceph.ceph osd status
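With all nodes joined and disks added, the placement of OSDs across hosts is worth verifying before putting data on the cluster. A brief sketch:

```shell
# OSDs should appear grouped under each host in the CRUSH tree
sudo microceph.ceph osd tree

# Pools should report a replicated size of 3 (the Ceph default), which a
# 4-node cluster can satisfy with one host's failure domain to spare
sudo microceph.ceph osd pool ls detail
```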

Containerized deployment

  • Uses a Canonical-supplied and maintained rock (OCI image)
  • Works with cephadm and rook
  • Suitable for all types of containerized deployments

These installation instructions use the Canonical-produced and supplied Ceph rock. This OCI-compliant image is a drop-in replacement for the upstream Ceph OCI image.
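As a sketch of how such an image slots into a cephadm deployment, cephadm's `--image` flag points the bootstrap at an alternative container image. The image reference and monitor IP below are placeholders, not the rock's actual registry path:

```shell
# Placeholder image reference: substitute the real Canonical Ceph rock
# registry path and tag, and the monitor IP for your environment.
sudo cephadm --image <registry>/<canonical-ceph-rock>:<tag> \
    bootstrap --mon-ip 192.0.2.10
```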

Large-scale deployment

  • Uses Charmed Ceph
  • Uses MAAS for bare metal orchestration
  • Suitable for large-scale production environments

Charmed Ceph is Canonical's fully automated, model-driven approach to installing and managing Ceph. Charmed Ceph is generally deployed on bare-metal hardware that is managed by MAAS.
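As an illustrative sketch only, a minimal Charmed Ceph deployment on a bootstrapped Juju controller might look like the following. `ceph-mon` and `ceph-osd` are the standard charms; the unit counts and storage sizes here are arbitrary examples:

```shell
# Deploy three monitors and three OSD hosts (counts are illustrative)
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2

# Relate the OSDs to the monitor cluster, then watch the model settle
juju integrate ceph-osd:mon ceph-mon:osd
juju status --watch 5s
```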



