
CephFS

CephFS, the Ceph filesystem, has been around for quite some time and was, in fact, the first use case for Ceph's back-end storage. While certain installations have successfully used CephFS in production for years, only with the 2016 Jewel release did it graduate from tech-preview status to official support, complete with a full set of management and maintenance tools.

CephFS is somewhat akin to NFS, but not directly analogous; in fact, one can even run NFS on top of CephFS. CephFS is designed for use by well-behaved servers and is not intended to be ubiquitously mounted on user desktops. You can use an operating system kernel driver to mount a CephFS filesystem much as you would a local device or a Gluster network filesystem; there is also a userspace FUSE client. Each installation must weigh the two mount methods: FUSE is easier to update, but the native kernel driver may provide measurably better performance.
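As a sketch of the two approaches, the commands below mount a CephFS filesystem first with the kernel driver and then with ceph-fuse. The monitor address, mount point, and secret file path are placeholders to be replaced with values from your own cluster.

    # Kernel driver: mount CephFS by pointing at a MON address
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client: the same filesystem mounted through userspace
    ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs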

Client mounts are directed to the cluster's Ceph MON servers, but CephFS requires one or more MDS instances to store filenames, directories, and other traditional metadata, and to manage access to Ceph's OSDs. As with RGW servers, small and lightly utilized installations may elect to run MDS daemons on virtual machines, though most will choose dedicated physical servers for performance, stability, and simplified dependency management.
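For illustration, once at least one MDS daemon is running, a minimal sequence for bringing up a CephFS filesystem might look like the following. The pool names and placement group counts here are example values only, not recommendations.

    # Create data and metadata pools (PG counts are illustrative)
    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 16

    # Create the filesystem and confirm that an MDS has gone active
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph mds stat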

A potential use case is an established archival system built around an aging fileserver that requires POSIX filesystem permissions and behavior. The legacy fileserver can be replaced by a CephFS mount with little or no adjustment.
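For a drop-in replacement such as this, the mount can be made persistent with an /etc/fstab entry. A kernel-driver example, again with a placeholder monitor address and credentials, might look like:

    192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 2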