
Proxmox VE: Configure a Ceph Storage Cluster

  • September 10, 2024
  • 9 min read
Cloud and Virtualization Architect. Paolo is a System Engineer, VCP-DCV, vExpert, VMCE, Veeam Vanguard, and author of the virtualization blog nolabnoparty.com

The Ceph Storage Cluster is a feature available on the Proxmox platform, used to implement a software-defined storage solution.

Ceph is embedded in Proxmox VE and is completely free to use.

Prerequisites

Before starting the installation of the Ceph Storage Cluster, you need to create a Proxmox cluster by adding the nodes required for your configuration.
Three nodes is the minimum recommended to run a working Proxmox cluster with Ceph.

If you don’t have a Proxmox cluster available on your network, follow this procedure to configure a new one.
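If you prefer the command line, a cluster can also be created from the shell of the nodes. This is a minimal sketch; the cluster name and the IP address below are placeholders for your own values.

# On the first node, create the cluster
pvecm create my-cluster

# On each additional node, join the cluster (IP of the first node)
pvecm add 192.168.1.11

# Verify membership and quorum
pvecm status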

Create a Ceph Storage Cluster

To create a new storage cluster, you need to install Ceph on all nodes within the Proxmox cluster. Using your preferred browser, log in to the Proxmox web interface.

 


Ensure that all nodes have the latest updates installed. Select the node you want to upgrade, then navigate to the Updates section. Click Refresh, and then click Upgrade to install the available updates for the selected node.

 


Repeat the same operation for all nodes within the Proxmox cluster.
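The same update can also be performed from the shell of each node; Proxmox VE is Debian-based, so the standard apt workflow applies.

# Refresh the package lists and install the available updates
apt update
apt dist-upgrade -y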

Install Ceph service

Before installing the Ceph service, make sure that each node detects, in the Disks section, the disks you plan to assign to the Ceph Storage Cluster. As a best practice, the disks should all be the same size (100GB, 100GB, etc.) rather than different sizes (100GB, 50GB, etc.).
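If you want to double-check the available disks from the shell as well, lsblk gives a quick overview; the output columns below are just a convenient selection.

# List block devices with size and type; disks intended for Ceph
# should appear unused (no partitions, no mountpoint)
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT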

 


Connect to node 1 and select the Proxmox cluster. Access the Ceph section and click Install Ceph.

 


Select the Ceph version to install (reef in the example). If you don’t have a commercial license, select the No-Subscription value in the Repository drop-down menu. Click Start reef installation to proceed.
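If you prefer the shell, the equivalent step is pveceph install; the repository and version flags below assume a Proxmox VE 8 node without a commercial subscription.

# Install the Ceph packages (Reef) from the no-subscription repository
pveceph install --repository no-subscription --version reef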

 


Type y then press Enter to install the Ceph service.

 


When the installation is complete, go to the Configuration tab.

 


Select the Public Network and the Cluster Network to use, then click Next. For better traffic separation, it is recommended to use separate networks for Public and Cluster traffic.
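The same network configuration can be done from the shell with pveceph init; the subnets below are placeholders for your own Public and Cluster networks.

# Initialize the Ceph configuration with separate public and cluster networks
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24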

 


Click Finish.

 


The Ceph service on node 1 shows a warning state because it is not fully configured yet.

 


Repeat the same procedure to install the Ceph service on the other nodes in the cluster. Keep in mind that configuring the Ceph service on the additional nodes is not required, as the configuration has already been completed on node 1.

 


Configure OSD

Select node 1 and navigate to the Ceph > OSD section. Click Create OSD.

 


Select the Disk to use and click Create.
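From the shell, the same OSD can be created with pveceph; /dev/sdb below is a placeholder for the disk you selected.

# Create an OSD on the selected disk (the disk must be empty)
pveceph osd create /dev/sdb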

 


The Ceph OSD is being created.

 


The selected disk has been added to the OSD.

 


Now repeat the same procedure for the other disks you want to configure on the selected node.

 


The configured disks.

 


When the desired disks have been configured, proceed with the next node. Select node 2, go to the Ceph > OSD section and click Create OSD.

 


The configured disks on node 2.

 


Select node 3, go to the Ceph > OSD section and click Create OSD.

 


The configured disks on node 3.

 


Select the Proxmox cluster and go to the Ceph section. The Status is now shown as healthy.
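You can confirm the same healthy state from the shell of any node; both commands below are standard Ceph/Proxmox status checks.

# Overall cluster health, OSD and monitor summary
ceph -s

# Proxmox view of the same information
pveceph status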

 


Configure Monitor

When the Ceph service is enabled, only one monitor is configured by default.

 


Select node 1 and go to Ceph > Monitor. Click Create to configure an additional monitor.
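The CLI equivalent is pveceph mon create, run from the shell of the node that should host the additional monitor.

# Run on the node that will host the new monitor
pveceph mon create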

 


Select node 2 and click Create.

 


The new monitor has been added. Click Create to add the last node.

 


Select node 3 and click Create.

 


The configured monitors.

 


Create a Pool

The next step is to assign the disks created as OSDs to a pool. Select node 1, go to Ceph > Pool and click Create.

 


Enter a Name for the pool. Size and Min Size define the number of data copies (replicas) distributed across the nodes. Click Create.
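From the shell, the same pool can be created with pveceph; the pool name and replica counts below mirror the example (3 copies, minimum 2), and --add_storages also registers the pool as a Proxmox storage.

# Create a replicated pool with 3 copies (min 2) and add it as a Proxmox storage
pveceph pool create datastore01 --size 3 --min_size 2 --add_storages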

 


The created datastore.

 


Expanding the three nodes, you can see the newly created datastore.

 


Test the Ceph Storage Cluster

To verify that the Ceph Storage Cluster is working as expected, create a new VM and specify the new storage accordingly.

 


The VM creation requires some settings to be configured. Type the VM Name and click Next.

 


The tab we need to focus on is the Disks tab. Select datastore01, the pool you just created, as the Storage to use.
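As a quick sketch, the same test VM can be created from the shell with qm; the VM ID, name, memory, and disk size below are arbitrary example values.

# Create a test VM with a 32 GB disk on the Ceph-backed storage datastore01
qm create 100 --name test-vm --memory 2048 --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 datastore01:32

# Start the VM
qm start 100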

 


Once created, the new VM is up and running, with its virtual disks stored in the datastore backed by the Ceph Storage Cluster.

 


The VM running on node 1 has its virtual disk stored on datastore01.

 


By checking the other nodes, you can verify that the VM has copies of its data distributed across datastore01 on the different nodes.
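To see the VM disk stored in the Ceph pool and how the data is consumed, you can query Ceph directly from any node; the pool name and the expected image name (vm-100-disk-0) assume the example VM above.

# List the RBD images stored in the pool (the VM disk appears as vm-<vmid>-disk-<n>)
rbd ls -p datastore01

# Show pool usage and how the raw capacity is consumed by the replicas
ceph df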

 


If one node fails, the VM can remain operational, provided High Availability is configured, by leveraging the distributed copies in the Ceph Storage Cluster.
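If you want to add the VM to High Availability from the shell, ha-manager is the tool; vm:100 assumes the example VM created earlier.

# Add the VM to the HA manager and request it to be started
ha-manager add vm:100 --state started

# Check the HA status
ha-manager status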

 


Proxmox VE is available to download as a free, open-source solution.

Hey! Found Paolo’s insights useful? Looking for a cost-effective, high-performance, and easy-to-use hyperconverged platform?
Taras Shved, StarWind HCI Appliance Product Manager
Look no further! StarWind HCI Appliance (HCA) is a plug-and-play solution that combines compute, storage, networking, and virtualization software into a single easy-to-use hyperconverged platform. It's designed to significantly trim your IT costs and save valuable time. Interested in learning more? Book your StarWind HCA demo now to see it in action!