Commit 6d24343

Merge pull request openshift#3388 from screeley44/gluster-dynamic

adding gluster dyn provisioning example

ahardin-rh authored Dec 20, 2016
2 parents 70abc3c + b790673 commit 6d24343
Showing 3 changed files with 263 additions and 0 deletions.
2 changes: 2 additions & 0 deletions _topic_map.yml
@@ -372,6 +372,8 @@ Topics:
File: ceph_example
- Name: Complete Example Using GlusterFS
File: gluster_example
- Name: Dynamic Provisioning Example Using GlusterFS
File: gluster_dynamic_example
- Name: Mounting Volumes To Privileged Pods
File: privileged_pod_storage
- Name: Backing Docker Registry with GlusterFS Storage
260 changes: 260 additions & 0 deletions install_config/storage_examples/gluster_dynamic_example.adoc
@@ -0,0 +1,260 @@
[[install-config-storage-examples-gluster-dynamic-example]]
= Complete Example of Dynamic Provisioning Using GlusterFS
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:
:prewrap!:

toc::[]

[NOTE]
====
This example assumes a working {product-title} cluster installed and functioning, along with Heketi and GlusterFS.
====

[NOTE]
====
All `oc ...` commands are executed on the {product-title} master host.
====

== Overview

This topic provides an end-to-end example of how to dynamically provision GlusterFS volumes. In this example, a simple NGINX HelloWorld application is deployed using the
link:https://access.redhat.com/documentation/en/red-hat-gluster-storage/3.1/paged/container-native-storage-for-openshift-container-platform/chapter-2-red-hat-gluster-storage-container-native-with-openshift-container-platform[Red Hat Container Native Storage (CNS)] solution. CNS hyper-converges GlusterFS storage by containerizing it into the {product-title} cluster.

The link:https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html[Red
Hat Gluster Storage Administration Guide] can also provide additional information about GlusterFS.

To get started, follow this link:https://github.com/gluster/gluster-kubernetes[quickstart guide] for an easy Vagrant-based installation and deployment of a working {product-title} cluster, along with Heketi and GlusterFS containers.

== Verify the Environment and Gather Information for Later Steps

[NOTE]
====
At this point, there should be a working {product-title} cluster deployed, and a working Heketi Server along with GlusterFS.
====

====
Verify and view the cluster environment, including nodes and pods:
----
$ oc get nodes,pods
NAME      STATUS    AGE
master    Ready     22h
node0     Ready     22h
node1     Ready     22h
node2     Ready     22h

NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1 <1>
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0 <2>
----
<1> Example of GlusterFS storage pods running (notice there are three for this example).
<2> Heketi Server pod.
If not already set in the environment, export `HEKETI_CLI_SERVER`:
----
$ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
----
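The `grep`/`awk` extraction above can be sanity-checked locally against a hypothetical sample of `oc describe svc/heketi` output (the sample text below is illustrative, not taken from a live cluster):
----
$ printf 'Name:\t\theketi\nEndpoints:\t10.42.0.0:8080\n' | grep "Endpoints:" | awk '{print "http://"$2}'
http://10.42.0.0:8080
----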
Identify the Heketi REST URL and server IP address:
----
$ echo $HEKETI_CLI_SERVER
http://10.42.0.0:8080
----
Identify the GlusterFS endpoints, which are needed to pass in as a parameter to the StorageClass used in a later step (`heketi-storage-endpoints`):
----
$ oc get endpoints
NAME                       ENDPOINTS                                             AGE
heketi                     10.42.0.0:8080                                        22h
heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1    22h <1>
kubernetes                 192.168.10.90:6443                                    23h
----
<1> The defined GlusterFS endpoints; in this example, they are named `heketi-storage-endpoints`.
[NOTE]
By default, `user_authorization` is disabled. If it were enabled, you might also need to find the rest user and rest user secret key (this is not applicable for this example, as any values will work).
====
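If you later need the individual storage node IPs from the comma-separated endpoint list (for example, to build an endpoints manifest by hand), standard tools can split it. A local sketch using the sample value shown above:
----
$ echo '192.168.10.100:1,192.168.10.101:1,192.168.10.102:1' | tr ',' '\n' | cut -d: -f1
192.168.10.100
192.168.10.101
192.168.10.102
----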

== Create a StorageClass for our GlusterFS Dynamic Provisioner

xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Storage classes]
manage and enable persistent storage in {product-title}. Below is an example _StorageClass_ that our _HelloWorld_ application
will use to request 5GB of dynamically provisioned, on-demand storage.


====
----
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi <1>
provisioner: kubernetes.io/glusterfs <2>
parameters:
  endpoint: "heketi-storage-endpoints" <3>
  resturl: "http://10.42.0.0:8080" <4>
  restuser: "joe" <5>
  restuserkey: "My Secret Life" <6>
----
<1> Name of the StorageClass.
<2> Provisioner.
<3> GlusterFS-defined endpoints (`oc get endpoints`).
<4> Heketi REST URL, taken from the step above (`echo $HEKETI_CLI_SERVER`).
<5> Rest user; can be anything because authorization is turned off.
<6> Rest user key; like the rest user, this can be any value.
Create the StorageClass YAML file. Save it. Then submit it to {product-title}:
----
$ oc create -f gluster-storage-class.yaml
storageclass "gluster-heketi" created
----
View the storage class:
----
$ oc get storageclass
NAME             TYPE
gluster-heketi   kubernetes.io/glusterfs
----
====

== Create a PersistentVolumeClaim (PVC) to Request Storage for Our HelloWorld Application

Next, we will create a PVC that requests 5GB of storage. At that point, the dynamic provisioning framework and Heketi
will automatically provision a new GlusterFS volume and generate the PersistentVolume (PV) object.

====
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi <1>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi <2>
----
<1> The Kubernetes storage class annotation and the name of the StorageClass.
<2> The amount of storage requested.
Create the PVC YAML file. Save it. Then submit it to {product-title}:
----
$ oc create -f gluster-pvc.yaml
persistentvolumeclaim "gluster1" created
----
View the PVC, and notice that it is bound to a dynamically created volume:
----
$ oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h
----
Also view the persistent volume (PV):
----
$ oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h
----
====
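For reference, the dynamically generated PV carries a `glusterfs` volume source pointing at the endpoints, plus an auto-generated volume path from Heketi. Its shape is roughly as follows (the field values here are illustrative, not from a live cluster; inspect the real object with `oc get pv <name> -o yaml`):
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  glusterfs:
    endpoints: heketi-storage-endpoints
    path: vol_e6b77204ff54c779c042f570a71b1407 <1>
----
<1> The volume path is generated by Heketi; the value shown here is a made-up example.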

== Create an NGINX Pod That Uses the PVC

At this point, we have a dynamically created GlusterFS volume bound to a PersistentVolumeClaim. We can now use this claim
in a pod. We will create a simple NGINX pod:

====
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 <1>
----
<1> The name of the PVC created in the previous step.
Create the pod YAML file. Save it. Then submit it to {product-title}:
----
$ oc create -f nginx-pod.yaml
pod "nginx-pod" created
----
View the pod (give it a few minutes; it might need to download the image if it does not already exist):
----
$ oc get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-pod                          1/1       Running   0          9m        10.38.0.0        node1
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0
----
Now we will `exec` into the container and create an *index.html* file in the `mountPath` location defined in the pod:
----
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
----
Using the `curl` command from the master node, curl the URL of the pod:
----
$ curl http://10.38.0.0
Hello World from GlusterFS!!!
----
Lastly, check the gluster pod to see the *index.html* file that was written. Choose any of the gluster pods:
----
$ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
----
====
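When you are finished with the example, it can be torn down in reverse order. Because the storage class provisions PVs with a `Delete` reclaim policy, deleting the claim also removes the generated PV and the backing GlusterFS volume. This is a sketch assuming the resource names used in this example:
----
$ oc delete pod nginx-pod
$ oc delete pvc gluster1
$ oc delete storageclass gluster-heketi
----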


1 change: 1 addition & 0 deletions install_config/storage_examples/index.adoc
@@ -15,6 +15,7 @@ against the volumes as a user of the system.
- xref:../../install_config/storage_examples/shared_storage.adoc#install-config-storage-examples-shared-storage[Sharing an NFS PV Across Two Pods]
- xref:../../install_config/storage_examples/ceph_example.adoc#install-config-storage-examples-ceph-example[Ceph-RBD Block Storage Volume]
- xref:../../install_config/storage_examples/gluster_example.adoc#install-config-storage-examples-gluster-example[Shared Storage Using a GlusterFS Volume]
- xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-storage-examples-gluster-dynamic-example[Dynamic Provisioning Storage Using GlusterFS]
- xref:../../install_config/storage_examples/privileged_pod_storage.adoc#install-config-storage-examples-privileged-pod-storage[Mounting a PV to Privileged Pods]
- xref:../../install_config/storage_examples/gluster_backed_registry.adoc#install-config-storage-examples-gluster-backed-registry[Backing Docker Registry with GlusterFS Storage]
- xref:../../install_config/storage_examples/binding_pv_by_label.adoc#binding-pv-by-label[Binding Persistent Volumes by Labels]
