
Block storage

Ceph block storage interacts directly with RADOS and a separate daemon is therefore not required (unlike CephFS and RGW). A Ceph block device is known as a RADOS Block Device (or simply an RBD device) and is available from a newly deployed Ceph cluster. This also makes RBD highly available by default.
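For orientation, the listing below is a minimal sketch of the direct RBD workflow on any host with ceph-common installed and admin credentials in /etc/ceph. The pool name (mypool), image name (disk01), and device path (/dev/rbd0, which rbd map prints and which may differ) are illustrative only and are not part of the deployment shown later on this page:

# create a 4 GiB image in an existing pool
sudo rbd create mypool/disk01 --size 4096
# map the image to a local block device (rbd map prints the device path)
sudo rbd map mypool/disk01
# format and mount it like any other block device
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt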

RBD client usage

This section provides optional instructions for verifying the RBD service by setting up a simple client environment. Deploy the client using the steps provided in the Client setup appendix.

An example deployment will have a juju status output similar to the following:

Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  3.5.2    unsupported  20:34:16Z

App          Version  Status  Scale  Charm     Channel      Rev  OS      Notes
ceph-mon     18.2.0   active      3  ceph-mon  reef/stable   93  ubuntu
ceph-osd     18.2.0   active      3  ceph-osd  reef/stable  528  ubuntu
ceph-client  22.04    active      1  ubuntu    stable        18  ubuntu

Unit            Workload  Agent  Machine  Public address  Ports  Message
ceph-client/0*  active    idle   3        10.0.0.240             ready
ceph-mon/0      active    idle   0/lxd/1  10.0.0.247             Unit is ready and clustered
ceph-mon/1      active    idle   1/lxd/1  10.0.0.242             Unit is ready and clustered
ceph-mon/2*     active    idle   2/lxd/1  10.0.0.249             Unit is ready and clustered
ceph-osd/0      active    idle   0        10.0.0.229             Unit is ready (2 OSD)
ceph-osd/1*     active    idle   1        10.0.0.230             Unit is ready (2 OSD)
ceph-osd/2      active    idle   2        10.0.0.252             Unit is ready (2 OSD)

The client host is represented by the ceph-client/0 unit.

Create a Ceph pool (‘libvirt-pool’), an RBD user (‘client.libvirt’), collect the user’s keyring file, and transfer it to the client:

juju run --wait ceph-mon/0 create-pool name=libvirt-pool app-name=rbd
juju exec --unit ceph-mon/0 -- \
   sudo ceph auth get-or-create client.libvirt \
   mon 'profile rbd' osd 'profile rbd pool=libvirt-pool' | \
   tee ceph.client.libvirt.keyring
juju scp ceph.client.libvirt.keyring ceph-client/0:
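Optionally, the new pool and cephx user can be checked from a monitor unit before moving on. The commands below are a brief sketch using standard Ceph tooling; the output should list libvirt-pool and the client.libvirt entity:

juju ssh ceph-mon/0 sudo ceph osd pool ls
juju ssh ceph-mon/0 sudo ceph auth get client.libvirt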

Connect to the client:

juju ssh ceph-client/0

From the RBD client,

Configure the client using the keyring file and set up the correct permissions:

sudo mv ~ubuntu/ceph.client.libvirt.keyring /etc/ceph
sudo chmod 600 /etc/ceph/ceph.client.libvirt.keyring
sudo chown ubuntu: /etc/ceph/ceph.client.libvirt.keyring
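As an optional sanity check, and assuming the client's /etc/ceph/ceph.conf was put in place during the Client setup appendix, the new credentials can be exercised directly with the rbd CLI (the pool is expected to be empty at this point):

rbd --id libvirt -p libvirt-pool ls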

Install the requisite image creation software and verify that an RBD image can be created:

sudo apt install -y qemu-utils
qemu-img create -f raw rbd:libvirt-pool/image-4d:id=libvirt 4G
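The new image can also be verified from the client. The command below is a minimal sketch that reads the image header back over RBD using the same libvirt user:

qemu-img info rbd:libvirt-pool/image-4d:id=libvirt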

From the Juju client,

RBD images/pools can be inspected by querying the cluster with various commands:

juju ssh ceph-mon/0 sudo rbd -p libvirt-pool ls
juju ssh ceph-mon/0 sudo rados df --pool libvirt-pool
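Cluster-wide capacity and per-image usage can be inspected in a similar way; the commands below are a brief sketch using standard Ceph tooling:

juju ssh ceph-mon/0 sudo ceph df
juju ssh ceph-mon/0 sudo rbd du -p libvirt-pool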


