
RADOS Block Device (RBD)

The RBD service is perhaps the most familiar, and at many sites it is the primary or even the only application of Ceph. It presents block (also known as volume) storage in a fashion that applications accustomed to traditional HDD/SSD devices can consume with little or no adjustment. In this way, it is somewhat analogous to facets of Veritas Volume Manager (VxVM), Solaris Volume Manager (SVM), the Linux MD/LVM system, an iSCSI or Fibre Channel appliance, or even a ZFS ZVOL. RBD volumes, however, are natively available to multiple servers across the network.

One can build a filesystem directly upon an RBD volume, often as the boot device of a virtual machine; in this case the hypervisor is the client of the RBD service and presents the volume to the guest operating system via a virtio or emulated driver. Other uses include direct raw consumption by databases and direct attachment to a physical or virtual machine via the kernel RBD driver. Some users find value in building logical volumes within their operating system instance on top of multiple RBD volumes in order to achieve performance or expansion goals. Block storage is appropriate when a disk-like resource is desired, as it provides consistent performance and latency. Capacity, however, is provisioned in discrete, disjoint chunks, so scaling up or down can be awkward and complex. Tools such as ZFS or a volume manager such as Linux LVM can mitigate this somewhat, but applications with highly variable volumes of data (think fluctuations of orders of magnitude) may be better suited to an object storage model.
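As a minimal sketch of direct attachment, the following commands create an RBD volume, map it on a client host through the kernel driver, and build a filesystem on it. The pool name rbd and the image name testvol are placeholder assumptions; substitute your own.

    # Create a 10 GiB image (--size is in MB by default) in the pool named rbd
    rbd create --size 10240 rbd/testvol

    # Map the image through the kernel RBD driver; this prints a device name,
    # typically /dev/rbd0 on a host with no other mapped images
    sudo rbd map rbd/testvol

    # Build a filesystem on the mapped device and mount it
    sudo mkfs.ext4 /dev/rbd0
    sudo mount /dev/rbd0 /mnt

When the device is no longer needed, rbd unmap /dev/rbd0 releases it.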

RBD volume operations include the usual data reads and writes as well as creation and deletion. Snapshots can be managed for archival, checkpointing, and deriving related volumes. OpenStack's Nova, Cinder, and Glance services (Chapter 11, Performance and Stability Tuning) utilize RBD snapshots for instances, abstracted volumes, and guest OS images, respectively. There is also a facility to replicate/mirror RBD volumes between clusters, or even between sites, for high availability and disaster recovery.
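A hedged sketch of snapshot management with the rbd command line follows; the image and snapshot names are hypothetical. Note that a snapshot must be protected before clones can be derived from it.

    # Take a point-in-time snapshot of an image
    rbd snap create rbd/testvol@checkpoint1

    # Protect the snapshot so it can serve as a parent for clones
    rbd snap protect rbd/testvol@checkpoint1

    # Derive a related, copy-on-write volume from the snapshot
    rbd clone rbd/testvol@checkpoint1 rbd/testvol-clone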

RBD volumes are often used transparently by virtual machines and abstractions including OpenStack Cinder and Glance, but applications and users can exploit them as well, via the rbd command-line tool and programmatically via librbd.
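For example, the rbd tool can enumerate and inspect volumes directly; the pool and image names below are again placeholders.

    # List the images in a pool, then show the details of one of them
    rbd ls rbd
    rbd info rbd/testvol

    # Report the space actually consumed; thin provisioning means this can be
    # far less than the provisioned size
    rbd du rbd/testvol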

The following is an example use case:

The author of this chapter needed to deploy a system of yum repo mirrors within OpenStack clouds for tenant use. CPU and RAM requirements were low, but a fair amount of storage was needed to mirror the growing collections of upstream rpm and metadata files for multiple versions of two Linux distributions. A small instance flavor was chosen with 4 GB of RAM and one vCPU, but only a 50 GB virtual disk. That 50 GB volume, which itself mapped to an RBD volume, quickly filled up as new package versions and new distributions were added. The OpenStack Cinder interface to RBD was used to provision a 500 GB volume that was then attached to the instance, where the virtio driver presented it as /dev/vdb. An ext4 filesystem was created on that device, an entry was added to /etc/fstab to mount it at each boot, and the payload data was moved over to its capacious new home.
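The steps taken were roughly as follows. The volume name, instance name, and mount point are illustrative assumptions, and exact client syntax varies across OpenStack releases.

    # Provision a 500 GB Cinder volume (backed by RBD) and attach it to the instance
    openstack volume create --size 500 repo-data
    openstack server add volume repo-mirror repo-data

    # Inside the instance, where virtio presents the volume as /dev/vdb:
    sudo mkfs.ext4 /dev/vdb
    sudo mkdir -p /srv/repos
    echo '/dev/vdb /srv/repos ext4 defaults 0 2' | sudo tee -a /etc/fstab
    sudo mount /srv/repos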

Alert readers might suggest simply resizing the original volume in place. This may be possible in some environments, but it is more complex and requires additional steps.
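For reference, such a resize might look like the sketch below: the RBD image is grown, but the filesystem inside the guest must then be expanded separately, which is the additional work alluded to above. The names and sizes are hypothetical.

    # Grow the underlying RBD image to 500 GiB (--size is in MB by default)
    rbd resize --size 512000 rbd/testvol

    # Inside the guest: if the filesystem sits on a partition, grow the
    # partition first (for example with growpart), then grow the filesystem
    sudo resize2fs /dev/vdb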