
What is Ceph and Ceph Storage?

  • June 14, 2024
  • 13 min read
StarWind Head of Marketing. Vlad has more than 12 years of IT experience, specializing in cloud, virtualization, and data protection. He possesses extensive knowledge in architecture planning, storage systems, hardware sourcing, and research.

As data storage demands grow exponentially, finding scalable and resilient storage solutions becomes crucial. Enterprises require robust systems that can manage vast amounts of data efficiently while ensuring high availability and reliability. Enter Ceph: a standout contender in the world of distributed storage systems, known for its impressive scalability.

What is Ceph?

Ceph is an open-source, software-defined storage solution designed to provide block, file, and object storage access from a single system. It is built to be self-healing and self-managing, which aims to reduce the cost and complexity of maintaining storage infrastructure. Ceph runs on most commodity hardware, and its distributed architecture is highly scalable, up to the exabyte level.

Ceph was created by Sage Weil during his doctoral research at the University of California, Santa Cruz. The project started in 2004, and by 2006, Ceph was already available under an open-source license.

Figure 1: Ceph architecture

How Does Ceph Work?

Though Ceph can be configured to run on a single server, that is not how it is meant to be used. A feasible production-ready deployment requires a minimum of 3 servers connected to one another in what is called a cluster. Each connected server within that cluster network is referred to as a node.

Ceph Components

Ceph uses a distributed architecture with five key components (daemons), which can all run on the same set of cluster nodes and each of which has its own distinct role. This design lets clients interact with these components directly, without a centralized bottleneck, creating a flexible and resilient storage architecture. The key daemons in a Ceph cluster are:

Ceph monitors (ceph-mon): Monitor the status of individual nodes in the cluster, including the managers (MGR), object storage devices (OSD), and metadata servers (MDS). To ensure maximum reliability, it is recommended to have at least 3 monitor nodes.

Ceph managers (ceph-mgr): Manage the status of storage usage, system load, and node capacity. Running alongside the monitor daemons, managers also provide additional monitoring capabilities and interfaces for external management systems.

Metadata servers (ceph-mds): Store metadata, including storage paths, file names, and timestamps of files for its CephFS filesystem.

Object storage devices (ceph-osd): Manage actual data, handling data storage, replication and restoration. A minimum of 3 OSDs is recommended for a production cluster.

RESTful gateways (ceph-rgw): Expose the object storage layer as an HTTP interface compatible with the Amazon S3 and OpenStack Swift REST APIs.
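These daemons are what Ceph client libraries talk to under the hood. As a minimal, hedged sketch, the Python snippet below connects to a cluster through the librados bindings and reads basic cluster state; it assumes the python3-rados package is installed and that /etc/ceph/ceph.conf plus a valid keyring are reachable (both are deployment-specific assumptions):

```python
import rados

# Connect using the standard config file; assumes admin keyring access.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

print("cluster fsid:", cluster.get_fsid())   # unique cluster identifier
print("pools:", cluster.list_pools())        # pools backed by the OSDs

stats = cluster.get_cluster_stats()          # aggregate usage across OSDs
print("KB used / KB total:", stats["kb_used"], "/", stats["kb"])

cluster.shutdown()
```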

Ceph Storage Operating Principles

Ceph distributes data across multiple nodes using the CRUSH (Controlled Replication Under Scalable Hashing) algorithm, which manages data replication and placement within the cluster. Here’s how it works:

Data placement and replication: Object names are hashed to assign each object to a placement group (PG), and the CRUSH algorithm then maps each PG to storage locations in a pseudo-random but deterministic way, selecting the optimal targets based on predefined criteria. The data is then duplicated and stored on physically separate media according to the replication parameters specified by the system administrator (see the sketch after this list).

Data retrieval: To read data, clients use the CRUSH map, a compact description of the cluster topology, to compute which OSD holds the requested object instead of consulting a central allocation table.

Self-healing: If a node fails, Ceph automatically redistributes the data to other healthy nodes and restores the initial number of data copies.

Such an approach ensures that Ceph can handle large amounts of data while providing high availability and performance.
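To make the placement flow above concrete, here is a deliberately simplified, toy Python sketch of hash-based placement. It is not the real CRUSH algorithm, which walks a hierarchy of failure domains (hosts, racks) defined in the CRUSH map; the PG count, OSD names, and replication factor here are illustrative assumptions:

```python
import hashlib

NUM_PGS = 128                            # PGs per pool (a power of two in practice)
OSDS = [f"osd.{i}" for i in range(6)]    # six hypothetical OSD daemons
REPLICATION = 3                          # keep three copies of every object

def pg_for_object(name: str) -> int:
    # Step 1: Ceph hashes the object name to pick its placement group.
    digest = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return digest % NUM_PGS

def osds_for_pg(pg: int) -> list[str]:
    # Step 2 (toy stand-in for CRUSH): deterministically derive
    # REPLICATION distinct OSDs from the PG id, so every client
    # computes the same locations without a lookup table.
    chosen, seed = [], pg
    while len(chosen) < REPLICATION:
        seed = int(hashlib.md5(str(seed).encode()).hexdigest(), 16)
        osd = OSDS[seed % len(OSDS)]
        if osd not in chosen:
            chosen.append(osd)
    return chosen

obj = "vm-disk-001/block-42"
pg = pg_for_object(obj)
print(f"{obj} -> pg {pg} -> {osds_for_pg(pg)}")
```

Because the mapping is a pure function of the object name and the cluster description, any client can compute placements independently, which is what removes the need for a central metadata lookup on the data path.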

Ceph Storage Types and Protocols

Ceph supports various storage types, making it a versatile solution for different storage needs. The primary storage types include:

Object Storage

Ceph provides object storage through its RADOS (Reliable Autonomic Distributed Object Store) layer. RADOS is a scalable object store that handles data storage, retrieval, and replication across the cluster. It allows applications to interact with data using RESTful APIs such as S3 and Swift.
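In practice, any S3-compatible client can talk to a Ceph cluster through RGW. A small sketch in Python with boto3 follows; the endpoint URL, bucket name, and access/secret keys (normally issued with radosgw-admin) are placeholder assumptions:

```python
import boto3

# Placeholder endpoint and credentials; substitute your RGW values.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored in RADOS")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```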

Block Storage

Ceph’s RADOS Block Device (RBD) provides block storage access, enabling the creation of virtual disk images that can be attached to virtual machines. This makes Ceph suitable for cloud and virtualization environments where scalable and resilient block storage is required.
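As a rough illustration, the Python bindings to librbd can create and write to such an image directly. This sketch assumes the python3-rados and python3-rbd packages, a reachable cluster, and an existing pool named "rbd"; the pool and image names are assumptions:

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")            # I/O context for the "rbd" pool

rbd.RBD().create(ioctx, "vm-disk-001", 10 * 1024**3)  # 10 GiB image
image = rbd.Image(ioctx, "vm-disk-001")
image.write(b"first bytes of a virtual disk", 0)      # write at offset 0
print("image size:", image.size())

image.close()
ioctx.close()
cluster.shutdown()
```

A hypervisor would typically attach the same image through librbd or the rbd kernel module rather than writing to it directly like this.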

File Storage

Ceph also offers file storage through CephFS, which provides a POSIX-compliant file system. CephFS allows users to store and retrieve files hierarchically, similar to traditional file systems, but with the added benefits of Ceph’s distributed architecture.
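Because CephFS is POSIX-compliant, ordinary file APIs work unchanged once the file system is mounted. A minimal sketch, assuming CephFS is already mounted at /mnt/cephfs (a hypothetical mount point, e.g. via the kernel client or ceph-fuse):

```python
from pathlib import Path

# Standard-library file operations against an assumed CephFS mount.
root = Path("/mnt/cephfs/projects/demo")
root.mkdir(parents=True, exist_ok=True)   # hierarchical directories, as usual

report = root / "report.txt"
report.write_text("CephFS behaves like any POSIX file system.\n")
print(report.read_text())
```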

Benefits and Challenges of Ceph

Advantages:

Ceph offers numerous benefits, making it a preferred choice for many organizations:

Free software: Ceph is a free and open-source platform with extensive online resources available for setup and maintenance. Red Hat's acquisition of Inktank, the company behind Ceph, ensures its continued development for the foreseeable future.

Enormous scalability: Ceph scales to the exabyte level, meeting even the largest storage capacity demands.

Self-healing: When properly configured and maintained, Ceph provides excellent data protection and self-healing capabilities ensuring data integrity and continuous availability.

Challenges:

However, implementing Ceph comes with its own set of challenges:

Complexity: Proper Ceph cluster setup and effective maintenance involve a steep learning curve that can overwhelm even skilled IT administrators who lack prior Ceph experience.

Limited performance: Ceph is something of a “one-trick pony” in terms of storage performance. It deals perfectly with objects and large sequential data blocks (64K and larger) but falls behind competing solutions on the small random or mixed workloads (4K, 8K) that are common in most virtualization use cases. Additionally, optimizing Ceph for specific workloads requires extensive performance tuning and experience.

Resource intensive: Ceph is designed for large deployments and truly begins to shine at 4, preferably 5, nodes, which is not ideal for small and medium-sized businesses. This can be somewhat mitigated with all-NVMe configurations, which make a 3-node cluster a feasible option.

Ceph vs. StarWind Virtual SAN

In the world of data storage, choosing the right solution can make all the difference. Ceph and StarWind Virtual SAN (VSAN) are two prominent contenders, each with unique strengths and capabilities. When comparing the two, several distinctions become evident:

| Feature | Ceph | StarWind VSAN |
| --- | --- | --- |
| Storage Types | Object, Block, File | Block, File |
| Scalability | High | Moderate |
| Hardware footprint | High (3 nodes minimum; 4 or more recommended, depending on storage type) | Low (2 nodes minimum for a production-ready highly available (HA) configuration) |
| Licensing | Open source, with optional paid support | Commercial; a free version with limited support is available |
| Ease of Setup | Complex | Easy to set up; installation assistance is included in the price |
| Performance | Moderate (varies by workload) | High (higher performance in most virtualization use cases) |

While Ceph provides a versatile and scalable solution, StarWind VSAN delivers noticeably higher performance, particularly for virtual machine storage use cases. For a detailed comparison of these and other prominent solutions, refer to the “DRBD/LINSTOR vs Ceph vs StarWind VSAN: Proxmox HCI Performance Comparison” article.

FAQ

What does Ceph stand for?

Ceph stands for “Cephalopod,” inspired by intelligent marine animals known for their distributed nervous system, reflecting Ceph’s distributed architecture.

What is the function of Ceph?

Ceph decouples data from physical storage hardware through software abstraction layers, providing impressive scalability and fault management. This makes Ceph great for large private cloud environments, OpenStack, Kubernetes, and other container-based workloads.

What is the difference between NFS and Ceph?

NFS (Network File System) is a protocol that allows file access over a network, typically used for simple file sharing. Ceph, on the other hand, is a comprehensive and scalable storage solution with support for high availability and object, block, and file storage, making it suitable for more demanding and diverse storage needs. The two can also be combined: CephFS namespaces can be exported over the NFS protocol through the NFS-Ganesha server, and multiple NFS-Ganesha instances can export the same or different resources, whether CephFS directories or RADOS Gateway (RGW) buckets, from the Ceph cluster. This way, you get the ease of NFS with the strength and scalability of Ceph.

Hey! Found Vladislav’s insights useful? Looking for a cost-effective, high-performance, and easy-to-use hyperconverged platform?
Taras Shved, StarWind HCI Appliance Product Manager
Look no further! StarWind HCI Appliance (HCA) is a plug-and-play solution that combines compute, storage, networking, and virtualization software into a single easy-to-use hyperconverged platform. It's designed to significantly trim your IT costs and save valuable time. Interested in learning more? Book your StarWind HCA demo now to see it in action!