Rook is an open source Cloud-native storage orchestrator that provides a platform, framework, and support for a variety of storage solutions to integrate natively with cloud native environments.

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to perform its duties.

Rook leverages extension points to integrate deeply into cloud-native environments and provides a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.

Cassandra quick start

Cassandra is a highly available, fault-tolerant, peer-to-peer NoSQL database with lightning-fast performance and adjustable consistency. It provides large scale scalability without single points of failure.

Scylla is a near-hardware rewrite of Cassandra in C++. It uses a shared-nothing architecture that enables true linear scaling and major hardware optimizations for ultra-low latency and extremely high throughput. It is a direct replacement for Cassandra and uses the same interface, so Rook also supports it.

Prerequisites

A running Kubernetes cluster is required for the Rook Cassandra operator. Make sure you have a Kubernetes cluster ready for Rook (Cassandra does not require the FlexVolume plugin).

Deploy the Cassandra Operator

First deploy the Rook Cassandra Operator with the following command:

$ git clone --single-branch --branch v1.6.8 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/cassandra
kubectl apply -f operator.yaml

This installs the operator in the rook-cassandra-system namespace. You can check that the operator is up and running:

kubectl -n rook-cassandra-system get pod

Create and initialize the Cassandra/Scylla cluster

Now that the operator is running, we can create an instance of the clusters.cassandra.rook.io resource to stand up a Cassandra/Scylla cluster. Some of the values of this resource are configurable, so feel free to browse cluster.yaml and adjust the settings to your liking.
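
Before creating the cluster, you can take a quick look at the rack definition that controls sizing and storage (a simple sketch, assuming you are still in the cassandra example directory; adjust the number of context lines as needed):

grep -n -A 6 "racks:" cluster.yaml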

When you are ready to create a Cassandra cluster, simply run:

kubectl create -f cluster.yaml

We can verify that a Kubernetes object representing our new Cassandra cluster has been created using the following command. This is important because it shows that Rook has successfully extended Kubernetes to make the Cassandra cluster a first-class citizen in the Kubernetes cloud native environment.

kubectl -n rook-cassandra get clusters.cassandra.rook.io

To check that all the desired members are running, use the following command; you should see the same number of entries as the number of members specified in cluster.yaml:

kubectl -n rook-cassandra get pod -l app=rook-cassandra

You can also track the progress of the Cassandra cluster from its status. To check the current status of the cluster, run:

kubectl -n rook-cassandra describe clusters.cassandra.rook.io rook-cassandra

Accessing the database

  • From kubectl:

To get a cqlsh shell in your new cluster:

kubectl exec -n rook-cassandra -it rook-cassandra-east-1-east-1a-0 -- cqlsh
> DESCRIBE KEYSPACES;
  • From inside the Pod:

When you create a new cluster, Rook automatically creates a service for clients to access the cluster. The service name follows the convention <cluster-name>-client. You can view this service in the cluster by running the following command:

kubectl -n rook-cassandra describe service rook-cassandra-client

A Pod running in a Kubernetes cluster can connect to Cassandra using this service. Here is an example using Python Driver:

from cassandra.cluster import Cluster

cluster = Cluster(['rook-cassandra-client.rook-cassandra.svc.cluster.local'])
session = cluster.connect()

Scale Up

The operator supports scaling up existing racks and adding new racks. To make the changes, you can use:

kubectl edit clusters.cassandra.rook.io rook-cassandra
  • To scale up a rack, change the rack's Members field in the spec to the desired value (a non-interactive alternative is sketched below).
  • To add a new rack, add a new rack to the racks list. Remember to choose a different name for the new rack.
  • After editing and saving the YAML, check the status and events of your cluster for information on what is happening:
kubectl -n rook-cassandra describe clusters.cassandra.rook.io rook-cassandra
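
If you prefer a one-line change over an interactive edit, the same field can be updated with a JSON patch (a sketch only; the rack index 0 and the target value 4 are illustrative and must match your own racks list):

kubectl -n rook-cassandra patch clusters.cassandra.rook.io rook-cassandra --type json \
  -p '[{"op": "replace", "path": "/spec/datacenter/racks/0/members", "value": 4}]'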

Scale Down

The operator supports scaling down a rack. To make the changes, you can use:

kubectl edit clusters.cassandra.rook.io rook-cassandra
  • To scale down a rack, change the rack's Members field in the spec to the desired value.
  • After editing and saving the YAML, check the status and events of your cluster for information on what is happening:
kubectl -n rook-cassandra describe clusters.cassandra.rook.io rook-cassandra

Clean Up

To clean up all resources related to this walkthrough, you can run the following command.

Note: This will destroy your database and delete all of its associated data.

kubectl delete -f cluster.yaml
kubectl delete -f operator.yaml

Troubleshooting

If the cluster does not come up, the first step is to check the operator's logs:

kubectl -n rook-cassandra-system logs -l app=rook-cassandra-operator

If everything is fine in the operator log, you can also look at the log for one of the Cassandra instances:

kubectl -n rook-cassandra logs rook-cassandra-0

Cassandra monitoring

To enable jmx_exporter for a Cassandra rack, you should specify the jmxExporterConfigMapName option for the rack in the CassandraCluster CRD.

For example:

apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: my-cassandra
  namespace: rook-cassandra
spec:
  ...
  datacenter:
    name: my-datacenter
    racks:
    - name: my-rack
      members: 3
      jmxExporterConfigMapName: jmx-exporter-settings
      storage:
        volumeClaimTemplates:
        - metadata:
            name: rook-cassandra-data
          spec:
            storageClassName: my-storage-class
            resources:
              requests:
                storage: 200Gi

Example of a simple ConfigMap that collects all metrics:

apiVersion: v1
kind: ConfigMap
metadata:
  name: jmx-exporter-settings
  namespace: rook-cassandra
data:
  jmx_exporter_config.yaml: |
    lowercaseOutputLabelNames: true
    lowercaseOutputName: true
    whitelistObjectNames: ["org.apache.cassandra.metrics:*"]

The ConfigMap's data field must contain a jmx_exporter_config.yaml key with the jmx_exporter settings.

There is no automatic reload mechanism for the pods when the ConfigMap is updated. After the ConfigMap changes, you should manually restart all rack pods:

NAMESPACE=<namespace>
CLUSTER=<cluster_name>
# -o name prints one StatefulSet reference per rack, which rollout restart accepts
RACKS=$(kubectl get sts -n ${NAMESPACE} -l "cassandra.rook.io/cluster=${CLUSTER}" -o name)
echo ${RACKS} | xargs -n1 kubectl rollout restart -n ${NAMESPACE}

Ceph Storage Quick start

This guide walks you through the basic setup of a Ceph cluster and enables you to use block, object, and file storage from other pods running in the cluster.

Minimum version

Rook supports Kubernetes v1.11 or later.

Important: If you are using Kubernetes 1.15 or earlier, you will need to create a different version of the Rook CRDs. Create the crds.yaml found in the pre-k8s-1.16 subfolder of the example manifests.
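
For example, on such an older cluster the CRDs would be created from that subfolder instead of the default crds.yaml (a sketch, assuming you are in cluster/examples/kubernetes/ceph and the file layout of the v1.6 examples):

kubectl create -f pre-k8s-1.16/crds.yaml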

Prerequisites

Make sure you have a Kubernetes cluster available for Rook.

To configure the Ceph storage cluster, at least one of the following local storage options is required:

  • Raw devices (no partitions or formatted filesystems)
    • This requires lvm2 to be installed on the host. To avoid this dependency, you can create a single full-disk partition on the disk (see below)
  • Raw partitions (no formatted filesystem)
  • Persistent Volumes available from a storage class in block mode

You can use the following command to check whether your partitions or devices already have a formatted filesystem:

lsblk -f
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
└─vda1                LVM2_member       eSO50t-GkUV-YKTH-WsGq-hNJY-eKNf-3i07Ib
  ├─ubuntu--vg-root   ext4              c2366f76-6e21-4f10-a8f3-6776212e2fe4   /
  └─ubuntu--vg-swap_1 swap              9492a3dc-ad75-47cd-9596-678e8cf17ff9   [SWAP]
vdb

If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device. In this case, you can use vdb for Ceph, but not vda or its partitions.
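
To inspect a single candidate device, the same command can be pointed at it directly; the FSTYPE column should be empty for a device Ceph can consume (vdb here is just the device from the example above):

lsblk -f /dev/vdb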

TL;DR

If you're lucky, you can create a simple Rook cluster with the following kubectl commands and sample YAML files.

$ git clone --single-branch --branch v1.6.8 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

Cluster environment

The Rook documentation focuses on starting Rook in a production environment. Examples are provided to relax some settings for test environments. When creating the cluster later in this guide, consider the following example cluster manifests:

  • cluster.yaml: cluster settings for a production cluster running on bare metal. Requires at least three worker nodes.
  • cluster-on-pvc.yaml: cluster settings for a production cluster running in a dynamic cloud environment.
  • cluster-test.yaml: cluster settings for a test environment such as minikube.

Deploy the Rook Operator

The first step is to deploy the Rook operator. Check that you are using the sample YAML files that correspond to your release of Rook.

cd cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod

Before starting the operator in production, there are some settings you may want to consider:

  1. If you are using Kubernetes v1.15 or earlier, you will need to create the CRDs found in cluster/examples/kubernetes/ceph/pre-k8s-1.16/crds.yaml. The apiextensions.k8s.io/v1beta1 version of CustomResourceDefinition was deprecated in Kubernetes v1.16.
  2. Consider whether you want to enable certain Rook features that are disabled by default. See operator.yaml for these and other advanced settings (a quick way to locate them is sketched after this list).
    1. Device discovery: if the ROOK_ENABLE_DISCOVERY_DAEMON setting is enabled, Rook watches for new devices to configure. This is commonly used in bare-metal clusters.
    2. Flex driver: the Flex driver is deprecated in favor of the CSI driver, but it can still be enabled with the ROOK_ENABLE_FLEX_DRIVER setting.
    3. Node affinity and tolerations: by default the CSI driver runs on any node in the cluster. A number of settings are available to configure the CSI driver affinity.
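
To see how these settings are spelled in the manifest before enabling them, you can simply search the example operator.yaml (a quick sketch; the exact place where each setting lives may differ between Rook releases):

grep -E "ROOK_ENABLE_DISCOVERY_DAEMON|ROOK_ENABLE_FLEX_DRIVER" operator.yaml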

Create the Rook Ceph cluster

Now that the Rook operator is running, we can create the Ceph cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath property to a path that is valid for your hosts.
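
A quick way to confirm the value before creating the cluster (the path shown in the comment is the default used by the upstream example; adjust it if your hosts use a different directory):

# should print something like: dataDirHostPath: /var/lib/rook
grep -n dataDirHostPath cluster.yaml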

Creating a cluster:

kubectl create -f cluster.yaml

Use kubectl to list the pods in the rook-ceph namespace. Once they are all running, you should be able to see the following pods. The number of OSD pods depends on the number of nodes in the cluster and the number of devices configured. If cluster.yaml was not modified, one OSD is expected to be created on each node. The CSI, rook-ceph-agent (Flex driver), and rook-discover pods are optional, depending on your settings.

kubectl -n rook-ceph get pod
NAME                                                 READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-provisioner-d77bb49c6-n5tgs         5/5     Running     0          140s
csi-cephfsplugin-provisioner-d77bb49c6-v9rvn         5/5     Running     0          140s
csi-cephfsplugin-rthrp                               3/3     Running     0          140s
csi-rbdplugin-hbsm7                                  3/3     Running     0          140s
csi-rbdplugin-provisioner-5b5cd64fd-nvk6c            6/6     Running     0          140s
csi-rbdplugin-provisioner-5b5cd64fd-q7bxl            6/6     Running     0          140s
rook-ceph-crashcollector-minikube-5b57b7c5d4-hfldl   1/1     Running     0          105s
rook-ceph-mgr-a-64cd7cdf54-j8b5p                     1/1     Running     0          77s
rook-ceph-mon-a-694bb7987d-fp9w7                     1/1     Running     0          105s
rook-ceph-mon-b-856fdd5cb9-5h2qk                     1/1     Running     0          94s
rook-ceph-mon-c-57545897fc-j576h                     1/1     Running     0          85s
rook-ceph-operator-85f5b946bd-s8grz                  1/1     Running     0          92m
rook-ceph-osd-0-6bb747b6c5-lnvb6                     1/1     Running     0          23s
rook-ceph-osd-1-7f67f9646d-44p7v                     1/1     Running     0          24s
rook-ceph-osd-2-6cd4b776ff-v4d68                     1/1     Running     0          25s
rook-ceph-osd-prepare-node1-vx2rz                    0/2     Completed   0          60s
rook-ceph-osd-prepare-node2-ab3fd                    0/2     Completed   0          60s
rook-ceph-osd-prepare-node3-w4xyz                    0/2     Completed   0          60s

To verify that the cluster is healthy, connect to the Rook Toolbox and run the ceph status command.

  • All mons should be in quorum
  • The mgr should be active
  • At least one OSD should be active
  • If the health is not HEALTH_OK, the warnings or errors should be investigated
ceph status
 cluster:
   id:     a0452c76-30d9-4c1a-a948-5d8405f19a7c
   health: HEALTH_OK

 services:
   mon: 3 daemons, quorum a,b,c (age 3m)
   mgr: a(active, since 2m)
   osd: 3 osds: 3 up (since 1m), 3 in (since 1m)
...

Storage

For a walkthrough of the three storage types exposed by Rook, see the following guides:

  • Block: create block storage to be consumed by a pod (a sketch for this case follows below)
  • Object: create an object store that is accessible inside or outside the Kubernetes cluster
  • Shared Filesystem: create a filesystem to be shared across multiple pods
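
For the block case, the Rook Ceph examples ship a StorageClass manifest that creates the rook-ceph-block StorageClass referenced later in this guide (a sketch based on the example layout; check the csi/rbd folder of your Rook release if the path differs):

# from cluster/examples/kubernetes/ceph
kubectl create -f csi/rbd/storageclass.yaml
kubectl get storageclass rook-ceph-block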

Ceph dashboard

Ceph has a dashboard where you can view the status of a cluster.
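
Assuming the dashboard module is left enabled in cluster.yaml (it is enabled in the upstream example), the dashboard is exposed through a service in the rook-ceph namespace that you can look up with (the service name below is the one created by the Ceph operator):

kubectl -n rook-ceph get service rook-ceph-mgr-dashboard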

Toolbox

We have created a toolbox container that contains the full suite of Ceph clients for debugging and troubleshooting the Rook cluster.

Monitoring

Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus.

Teardown

After testing the cluster, refer to these instructions to clean up the cluster.

Network File System (NFS)

NFS allows remote hosts to mount and interact with file systems over the network as if they were mounted locally. This enables system administrators to consolidate resources onto a central server on the network.

Prerequisites

  1. A Kubernetes cluster is required to run the Rook NFS operator.
  2. The volumes to be exported need to be attached to the NFS server pod via PVC. Any type of PVC can be attached and exported, such as Host Path, AWS Elastic Block Store, GCP Persistent Disk, CephFS, Ceph RBD, and so on. The limitations of these volumes also apply while they are shared by NFS. You can read more about the details and limitations of these volumes in the Kubernetes docs.
  3. NFS client packages must be installed on all nodes where Kubernetes might run pods that mount NFS. Install nfs-utils on CentOS nodes or nfs-common on Ubuntu nodes (an installation sketch follows this list).
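
For example, the client packages named above can be installed on each node with the distribution's package manager (a sketch; run the lines that match your node OS):

# CentOS / RHEL nodes
sudo yum install -y nfs-utils
# Ubuntu / Debian nodes
sudo apt-get install -y nfs-common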

Deploy the NFS Operator

First use the following command to deploy the Rook NFS operator:

$ git clone --single-branch --branch v1.6.8 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/nfs
kubectl create -f common.yaml
kubectl create -f operator.yaml

You can check that the operator is up and running:

kubectl -n rook-nfs-system get pod
NAME                                    READY   STATUS    RESTARTS   AGE
rook-nfs-operator-879f5bf8b-gnwht       1/1     Running   0          29m

Deploy NFS Admission Webhook (optional)

Admission webhooks are HTTP callbacks that receive admission requests for the API server. There are two types of admission webhooks: validating admission webhooks and mutating admission webhooks. The NFS operator supports a validating admission webhook, which validates NFSServer objects sent to the API server before they are stored in etcd (persisted).

To enable the admission webhook for NFS, for example the validating admission webhook, you need to do the following:

First, make sure you have cert-manager installed. If it is not installed yet, you can follow the instructions in the cert-manager installation documentation. Alternatively, you can simply run this single command:

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml

This installs cert-manager v0.15.1 with its default settings. When it completes, make sure the cert-manager components are deployed correctly and in the Running state:

kubectl get -n cert-manager pod
NAME                                      READY   STATUS    RESTARTS   AGE
cert-manager-7747db9d88-jmw2f             1/1     Running   0          2m1s
cert-manager-cainjector-87c85c6ff-dhtl8   1/1     Running   0          2m1s
cert-manager-webhook-64dc9fff44-5g565     1/1     Running   0          2m1s

Once cert-manager is running, you can now deploy the NFS webhook:

kubectl create -f webhook.yaml

Verify that Webhook is up and running:

kubectl -n rook-nfs-system get pod
NAME                                    READY   STATUS    RESTARTS   AGE
rook-nfs-operator-78d86bf969-k7lqp      1/1     Running   0          102s
rook-nfs-webhook-74749cbd46-6jw2w       1/1     Running   0          102s

Create OpenShift Security Context Constraints (optional)

On the OpenShift cluster, we need to create some additional security context constraints. If you are not running in OpenShift, you can skip this section and move on to the next.

To create the security context constraints for the NFS server pods, we can use the following YAML, which can also be found under cluster/examples/kubernetes/nfs/scc.yaml.

Note: Older versions of OpenShift may require apiVersion: v1

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: rook-nfs
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities:
- SYS_ADMIN
- DAC_READ_SEARCH
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SYS_CHROOT
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret
users:
  - system:serviceaccount:rook-nfs:rook-nfs-server

You can create an SCC using the following command:

oc create -f scc.yaml

Create a Pod Security Policy (recommended)

We recommend that you also create a Pod Security Policy:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-nfs-policy
spec:
  privileged: true
  fsGroup:
    rule: RunAsAny
  allowedCapabilities:
  - DAC_READ_SEARCH
  - SYS_RESOURCE
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - secret
  - hostPath

Save this file with the name psp.yaml and create it with the following command:

kubectl create -f psp.yaml

Create and initialize an NFS server

Now that the operator is running, we can create an instance of an NFS server by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFS server resource can be used to configure the server and the volumes it exports.

Before we can create the NFS server, we need to create the ServiceAccount and RBAC rules:

---
apiVersion: v1
kind: Namespace
metadata:
  name:  rook-nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-nfs-server
  namespace: rook-nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rook-nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"."list"."watch"."create"."delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get"."list"."watch"."update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get"."list"."watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"."update"."patch"]
  - apiGroups: [""]
    resources: ["services"."endpoints"]
    verbs: ["get"]
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["rook-nfs-policy"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"."list"."watch"."create"."update"."patch"]
  - apiGroups:
    - nfs.rook.io
    resources:
    - "*"
    verbs:
    - "*"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rook-nfs-provisioner-runner
subjects:
  - kind: ServiceAccount
    name: rook-nfs-server
     # replace with namespace where provisioner is deployed
    namespace: rook-nfs
roleRef:
  kind: ClusterRole
  name: rook-nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Save this file with the name rbac.yaml and create it with the following command:

kubectl create -f rbac.yaml

This guide has three main examples to demonstrate exporting volumes using an NFS server:

  1. Default StorageClass example
  2. XFS StorageClass sample
  3. Rook Ceph Volume example

Default StorageClass example

The first example walks through creating an NFS server instance that exports storage backed by the default StorageClass of the environment you happen to be running in. In some environments this might be a host path, in others it might be a cloud provider virtual disk. Either way, this example requires a default StorageClass to exist.
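
You can confirm that your environment actually has a default StorageClass before proceeding; the default one is marked "(default)" in the output:

kubectl get storageclass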

Start by saving the following NFS CRD instance definitions to a file named nfs.yaml:

---
# A default storageclass must be present
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-default-claim
  namespace: rook-nfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
  - name: share1
    server:
      accessMode: ReadWrite
      squash: "none"
    # A Persistent Volume Claim must be created before creating NFS CRD instance.
    persistentVolumeClaim:
      claimName: nfs-default-claim
  # A key/value list of annotations
  annotations:
    rook: nfs

With the nfs.yaml file saved, now create NFS Server as follows:

kubectl create -f nfs.yaml

XFS StorageClass sample

Rook NFS supports disk quotas via xfs_quota. So if you need to specify disk quotas for your volumes, you can follow this example.

In this example, we will use an underlying volume mounted as XFS with the prjquota option. Before creating the underlying volume, you need to create a StorageClass with the XFS filesystem and the prjquota mountOption. Many distributed storage providers for Kubernetes support the XFS filesystem, typically by defining fsType: xfs or fs: xfs in the StorageClass parameters, but how to specify the filesystem type of a StorageClass actually depends on the storage provider itself. You can refer to kubernetes.io/docs/concep… to learn more.

Here are example StorageClasses for GCE PD and AWS EBS:

  • GCE PD
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-xfs
parameters:
  type: pd-standard
  fsType: xfs
mountOptions:
  - prjquota
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
  • AWS EBS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-xfs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: xfs
mountOptions:
  - prjquota
reclaimPolicy: Delete
volumeBindingMode: Immediate

Once you have a StorageClass with the XFS filesystem and the prjquota mountOptions, you can create an NFS server instance with the following example.

---
# A storage class with name standard-xfs must be present.
# The storage class must have the xfs filesystem type and the prjquota mountOptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-xfs-claim
  namespace: rook-nfs
spec:
  storageClassName: "standard-xfs"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
  - name: share1
    server:
      accessMode: ReadWrite
      squash: "none"
    # A Persistent Volume Claim must be created before creating NFS CRD instance.
    persistentVolumeClaim:
      claimName: nfs-xfs-claim
  # A key/value list of annotations
  annotations:
    rook: nfs

Save this PVC and NFS server instance as nfs-xfs.yaml and create it with the following command.

kubectl create -f nfs-xfs.yaml

Rook Ceph Volume example

In this alternative example, we will use a different underlying volume as the export for the NFS server. These steps show how to export a Ceph RBD block volume so that clients can access it over the network.

With the Rook Ceph cluster up and running, we can continue to create the NFS server.

Save this PVC and NFS server instance as nfs-ceph.yaml:

---
# A rook ceph cluster must be running
# Create a rook ceph cluster using examples in rook/cluster/examples/kubernetes/ceph
# Refer to https://rook.io/docs/rook/master/ceph-quickstart.html for a quick rook cluster setup
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-ceph-claim
  namespace: rook-nfs
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
  - name: share1
    server:
      accessMode: ReadWrite
      squash: "none"
    # A Persistent Volume Claim must be created before creating NFS CRD instance.
    # Create a Ceph cluster for using this example
    # Create a ceph PVC after creating the rook ceph cluster using ceph-pvc.yaml
    persistentVolumeClaim:
      claimName: nfs-ceph-claim
  # A key/value list of annotations
  annotations:
    rook: nfs

Create an NFS server instance that you saved in nfs-ceph.yaml:

kubectl create -f nfs-ceph.yaml

Verify the NFS Server

We can verify that a Kubernetes object representing our new NFS Server and its export has been created using the following command.

kubectl -n rook-nfs get nfsservers.nfs.rook.io
NAME       AGE   STATE
rook-nfs   32s   Running

Verify that NFS Server Pod is up and running:

kubectl -n rook-nfs get pod -l app=rook-nfs
NAME         READY     STATUS    RESTARTS   AGE
rook-nfs-0   1/1       Running   0          2m

If the NFS Server Pod is in the Running state, then we have successfully created an exposed NFS share and clients can start accessing it over the network.

Accessing the Export

Starting with Rook v1.0, Rook supports dynamic provisioning of NFS. This example shows how dynamic provisioning can be used for NFS.

Once the NFS operator and an NFSServer instance have been deployed, a StorageClass similar to the example below must be created to dynamically provision volumes.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    app: rook-nfs
  name: rook-nfs-share1
parameters:
  exportName: share1
  nfsServerName: rook-nfs
  nfsServerNamespace: rook-nfs
provisioner: nfs.rook.io/rook-nfs-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate

You can save it as a file, e.g. named sc.yaml, then create the StorageClass with the following command.

kubectl create -f sc.yaml

Note: The StorageClass needs to have the following three parameters passed.

  1. exportName: tells the provisioner which export to use for provisioning the volumes.
  2. nfsServerName: the name of the NFSServer instance.
  3. nfsServerNamespace: the namespace in which the NFSServer instance is running.

Once the above StorageClass has been created, you can create a PV claim referencing the StorageClass, as shown in the example below.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
spec:
  storageClassName: "rook-nfs-share1"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

You can also save it as a file, e.g. named pvc.yaml, then create the PV claim with the following command.

kubectl create -f pvc.yaml
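
You can then check that the claim was bound by the NFS provisioner (the claim name comes from the example above; it is created in the current namespace because the example PVC does not set one):

kubectl get pvc rook-nfs-pv-claim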

Consuming the Export

Now we can consume the PV that we just created by creating an example web server app that uses the above PersistentVolumeClaim to claim the exported volume. There are two pods that make up this example:

  1. A web server pod that will read and display the contents of the NFS share
  2. A writer pod that will write random data to the NFS share so that the website is constantly updated

Start the busybox pod (writer) and the web server from the cluster/examples/kubernetes/nfs folder:

kubectl create -f busybox-rc.yaml
kubectl create -f web-rc.yaml

Let’s confirm that the expected BusyBox Writer Pod and Web Server Pod are up and Running:

kubectl get pod -l app=nfs-demo

To be able to access the Web Server over the network, let’s create a service for it:

kubectl create -f web-service.yaml

We can then use the busybox writer pod we launched earlier to check that nginx is serving the data correctly. In the one-liner command below, we use kubectl exec to run a command in the busybox writer pod that uses wget to retrieve the web page that the web server pod is hosting. As the busybox writer pod continues to write a new timestamp, we should see the returned output also update every 10 seconds or so.

$ echo; kubectl exec $(kubectl get pod -l app=nfs-demo,role=busybox -o jsonpath='{.items[0].metadata.name}') -- wget -qO- http://$(kubectl get services nfs-web -o jsonpath='{.spec.clusterIP}'); echo
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t

Clean Up

To clean up all resources related to this walkthrough, you can run the following command.

kubectl delete -f web-service.yaml
kubectl delete -f web-rc.yaml
kubectl delete -f busybox-rc.yaml
kubectl delete -f pvc.yaml
kubectl delete -f pv.yaml
kubectl delete -f nfs.yaml
kubectl delete -f nfs-xfs.yaml
kubectl delete -f nfs-ceph.yaml
kubectl delete -f rbac.yaml
kubectl delete -f psp.yaml
kubectl delete -f scc.yaml # if deployed
kubectl delete -f operator.yaml
kubectl delete -f webhook.yaml # if deployed
kubectl delete -f common.yaml

Troubleshooting

If the NFS server pod does not come up, the first step is to check the NFS operator's logs:

kubectl -n rook-nfs-system logs -l app=rook-nfs-operator