INTRO TO CEPH
Ceph is a single storage solution that supports multiple storage models.
Ceph has a set of components that support the file, block, and object storage models. These components translate the operations of a specific storage model into the Ceph internal storage model. At its core, Ceph is an object storage solution with gateways for the file, block, and object storage models. The RADOS subsystem is a distributed object storage system. Underneath RADOS is the BlueStore subsystem, which handles data on the storage devices.
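To make the gateway-to-RADOS translation concrete, consider the block storage model: a write at a byte offset of a block image is mapped onto one or more fixed-size backing objects. The sketch below is an illustration of that idea only, not Ceph's implementation; the 4 MiB object size and the `<image>.<hex index>` naming merely mirror RBD-style defaults.

```python
# Sketch: how a block gateway might map a write at a byte offset onto
# fixed-size backing objects. Illustrative only, not Ceph's code.
OBJECT_SIZE = 4 * 1024 * 1024  # 4 MiB, mirroring RBD's default object size

def map_write(image_name: str, offset: int, length: int):
    """Return (object_name, object_offset, chunk_length) tuples covering
    the byte range [offset, offset + length) of a block image."""
    chunks = []
    end = offset + length
    while offset < end:
        obj_index = offset // OBJECT_SIZE
        obj_offset = offset % OBJECT_SIZE
        chunk_len = min(OBJECT_SIZE - obj_offset, end - offset)
        # RBD-style object naming: <image>.<index in hex>
        chunks.append((f"{image_name}.{obj_index:016x}", obj_offset, chunk_len))
        offset += chunk_len
    return chunks

# A 6 MiB write starting 1 MiB into the image spans two backing objects.
print(map_write("rbd_data.abc123", 1 * 1024**2, 6 * 1024**2))
```

The gateway's job is exactly this kind of translation: the client sees a block device, while RADOS only ever sees reads and writes of named objects.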
Ceph has a client layer and a cluster layer.
Ceph supports objects at two levels:
Ceph supports operations by client applications on objects according to the S3 protocol (and the rarely used Swift protocol).
Ceph internally records information for the file, block, and object storage services using Ceph objects.
A Ceph object is a chunk of information that Ceph records on storage devices. The following are some of the more common pieces of information recorded in Ceph objects:
- S3 or Swift object data
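The two levels of objects are easy to conflate: a single client-level (S3 or Swift) object may be stored as several internal Ceph objects, much as RGW splits large uploads into chunks. A minimal sketch of that split, assuming a hypothetical 4 MiB stripe size and `part` naming for illustration:

```python
# Sketch: one client-level (S3) object stored as several internal Ceph
# objects. The 4 MiB stripe size and ".partN" naming are illustrative
# assumptions, not RGW's actual scheme.
STRIPE = 4 * 1024 * 1024

def to_ceph_objects(s3_key: str, data: bytes):
    """Split one S3-level object into internal chunk objects."""
    return {
        f"{s3_key}.part{i}": data[i * STRIPE:(i + 1) * STRIPE]
        for i in range((len(data) + STRIPE - 1) // STRIPE)
    }

# A 9 MiB upload becomes three internal objects (4 + 4 + 1 MiB).
objs = to_ceph_objects("bucket/video.mp4", b"x" * (9 * 1024 * 1024))
print({name: len(chunk) for name, chunk in sorted(objs.items())})
```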
Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, the manager map, the OSD map, the MDS map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
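The CRUSH map is what lets every client compute object placement without asking a central server: an object name hashes to a placement group (PG), and CRUSH deterministically maps the PG to an ordered set of OSDs. The following sketch uses a plain hash ranking as a simplified stand-in for the real CRUSH algorithm; the PG count, OSD count, and replica count are toy values.

```python
# Sketch: the deterministic-placement idea behind the CRUSH map.
# A straight-hash ranking stands in for the real CRUSH algorithm.
import hashlib

PG_NUM = 128            # toy placement-group count
OSDS = list(range(6))   # toy cluster with 6 OSDs
REPLICAS = 3

def pg_for(object_name: str) -> int:
    """Hash an object name to a placement group."""
    h = int.from_bytes(hashlib.sha256(object_name.encode()).digest()[:4], "big")
    return h % PG_NUM

def osds_for(pg: int) -> list:
    """Rank OSDs by a per-PG pseudo-random score; take the top REPLICAS."""
    ranked = sorted(OSDS, key=lambda o: hashlib.sha256(f"{pg}:{o}".encode()).digest())
    return ranked[:REPLICAS]

pg = pg_for("bucket/video.mp4.part0")
print(pg, osds_for(pg))  # the same name always maps to the same PG and OSDs
```

Because the computation is deterministic, any client holding the current maps reaches the same answer, which is why up-to-date maps from the monitors are critical cluster state.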
Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host Python-based modules to manage and expose Ceph cluster information, including a web-based Ceph Dashboard and REST API. At least two managers are normally required for high availability.
Ceph OSDs: An Object Storage Daemon (Ceph OSD, ceph-osd) stores data; handles data replication, recovery, and rebalancing; and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD daemons for a heartbeat. At least three Ceph OSDs are normally required for redundancy and high availability.
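The replication handled by OSDs follows a primary-copy scheme: a write goes to the PG's primary OSD, which forwards it to the replica OSDs and acknowledges the client only once every copy is in place. The in-memory model below is a sketch of that flow under simplified assumptions; real OSDs persist data through BlueStore and handle failures and recovery.

```python
# Sketch: primary-copy replication across OSDs. An in-memory toy model;
# real OSDs persist data via BlueStore and handle failure/recovery.
class OSD:
    def __init__(self, osd_id: int):
        self.osd_id = osd_id
        self.store = {}  # object name -> data

    def write_local(self, name: str, data: bytes) -> bool:
        self.store[name] = data
        return True

def replicated_write(name: str, data: bytes, acting_set) -> bool:
    """Primary writes locally, forwards to replicas, acks when all done."""
    primary, *replicas = acting_set
    acks = [primary.write_local(name, data)]
    acks += [r.write_local(name, data) for r in replicas]
    return all(acks)  # the client is acknowledged only after every copy

osds = [OSD(i) for i in range(3)]
replicated_write("obj1", b"hello", osds)
print([o.store["obj1"] for o in osds])  # all three OSDs hold the object
```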
MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph File System (Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like find) without placing an enormous burden on the Ceph Storage Cluster.
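The point of a separate metadata server is that directory and stat-style queries can be answered from the metadata tree alone, without touching the object data on the OSDs. A toy model of that separation, with hypothetical paths and object names, not CephFS internals:

```python
# Sketch: why a metadata server helps. A find-like query is answered
# from metadata alone; object data on the OSDs is never read.
# Toy paths and object names; not CephFS internals.
metadata = {  # path -> (size, backing data object), as the MDS might hold it
    "/home/a.txt": (11, "obj.1001"),
    "/home/b.log": (42, "obj.1002"),
}

def find(prefix: str):
    """Answer a find-like query using metadata only, no OSD reads."""
    return sorted(p for p in metadata if p.startswith(prefix))

print(find("/home"))  # -> ['/home/a.txt', '/home/b.log']
```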