Follow-up edits to PR#3388
ahardin-rh committed Dec 20, 2016
1 parent 6d24343 commit 32824f6
Showing 1 changed file with 111 additions and 81 deletions:
install_config/storage_examples/gluster_dynamic_example.adoc
[[install-config-storage-examples-gluster-dynamic-example]]
= Complete Example of Dynamic Provisioning Using GlusterFS
{product-author}
{product-version}
:data-uri:

toc::[]


[NOTE]
====
This example assumes a working {product-title} installation, along with a
working Heketi server and GlusterFS.
All `oc` commands are executed on the {product-title} master host.
====

== Overview

This topic provides an end-to-end example of how to dynamically provision
GlusterFS volumes. In this example, a simple NGINX HelloWorld application is
deployed using the
link:https://access.redhat.com/documentation/en/red-hat-gluster-storage/3.1/paged/container-native-storage-for-openshift-container-platform/chapter-2-red-hat-gluster-storage-container-native-with-openshift-container-platform[
Red Hat Container Native Storage (CNS)] solution. CNS hyper-converges GlusterFS
storage by containerizing it into the {product-title} cluster.

The
link:https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/index.html[Red
Hat Gluster Storage Administration Guide] can also provide additional
information about GlusterFS.

To get started, follow the
link:https://github.com/gluster/gluster-kubernetes[gluster-kubernetes quickstart
guide] for an easy Vagrant-based installation and deployment of a working
{product-title} cluster with Heketi and GlusterFS containers.
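If you want to try that route, the starting point is cloning the repository. The
Vagrant and deployment steps themselves are described in the repository README
and can change between versions, so the following is only a sketch:

----
$ git clone https://github.com/gluster/gluster-kubernetes.git
$ cd gluster-kubernetes
# from here, follow the Vagrant setup and gk-deploy steps in the README
----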

[[verify-the-environment-and-gather-needed-information]]
== Verify the Environment and Gather Needed Information

[NOTE]
====
At this point, there should be a working {product-title} cluster deployed, and a
working Heketi server with GlusterFS.
====

. Verify and view the cluster environment, including nodes and pods:
+
----
$ oc get nodes,pods
NAME                               STATUS    AGE
node1                              Ready     22h
node2                              Ready     22h

NAME                               READY     STATUS    RESTARTS   AGE   IP               NODE
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d    192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d    192.168.10.101   node1 <1>
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d    192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d    10.42.0.0        node0 <2>
----
<1> Example of GlusterFS storage pods running. There are three in this example.
<2> Heketi server pod.


. If not already set in the environment, export the `HEKETI_CLI_SERVER`:
+
----
$ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
----

. Identify the Heketi REST URL and server IP address:
+
----
$ echo $HEKETI_CLI_SERVER
http://10.42.0.0:8080
----
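+
Optionally, confirm that the Heketi server is responding at that address by
calling its `/hello` endpoint, which returns a short greeting when the server is
healthy (output shown is illustrative):
+
----
$ curl $HEKETI_CLI_SERVER/hello
Hello from Heketi
----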

. Identify the Gluster endpoints that you need to pass as a parameter to the
storage class used in a later step (`heketi-storage-endpoints`):
+
----
$ oc get endpoints
NAME ENDPOINTS AGE
heketi 10.42.0.0:8080 22h
heketi-storage-endpoints 192.168.10.100:1,192.168.10.101:1,192.168.10.102:1 22h <1>
kubernetes 192.168.10.90:6443 23h
----
<1> The defined GlusterFS endpoints. In this example, they are called `heketi-storage-endpoints`.

[NOTE]
====
By default, `user_authorization` is disabled. If enabled, you may need to find
the rest user and rest user secret key. (This is not applicable for this
example, as any values will work).
====
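For example, if authorization were enabled, Heketi client calls (and the rest
user values in the storage class below) would need real credentials. A sketch
with placeholder values:

----
$ heketi-cli --user admin --secret "<admin key>" cluster list
----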

[[create-a-storage-class-for-your-glusterfs-dynamic-provisioner]]
== Create a Storage Class for Your GlusterFS Dynamic Provisioner

xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[Storage
classes] manage and enable persistent storage in {product-title}.
Below is an example of a _Storage class_ requesting 5GB of on-demand
storage to be used with your _HelloWorld_ application.

====
----
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi <1>
provisioner: kubernetes.io/glusterfs <2>
parameters:
  endpoint: "heketi-storage-endpoints" <3>
  resturl: "http://10.42.0.0:8080" <4>
  restuser: "joe" <5>
  restuserkey: "My Secret Life" <6>
----
<1> Name of the storage class.
<2> The provisioner.
<3> The GlusterFS-defined endpoint (`oc get endpoints`).
<4> Heketi REST URL, taken from Step 1 above (`echo $HEKETI_CLI_SERVER`).
<5> Rest username. This can be any value since authorization is turned off.
<6> Rest user key. This can be any value.
====
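Because the `resturl` value is exactly what you exported as `HEKETI_CLI_SERVER`
earlier, one optional way to create this file is with a shell here-document that
substitutes the variable directly. This is only a convenience sketch and assumes
the same field values shown above:

----
$ cat > gluster-storage-class.yaml <<EOF
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  endpoint: "heketi-storage-endpoints"
  resturl: "${HEKETI_CLI_SERVER}"
  restuser: "joe"
  restuserkey: "My Secret Life"
EOF
----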

. Create the Storage Class YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f gluster-storage-class.yaml
storageclass "gluster-heketi" created
----

. View the storage class:
+
----
$ oc get storageclass
NAME TYPE
gluster-heketi kubernetes.io/glusterfs
----
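+
Optionally, `oc describe` prints the storage class back with the provisioner and
the parameters it will use:
+
----
$ oc describe storageclass gluster-heketi
----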

[[create-a-pvc-ro-request-storage-for-your-application]]
== Create a PVC to Request Storage for Your Application

. Create a persistent volume claim (PVC) requesting 5GB of storage.
+
When the claim is created, the dynamic provisioning framework and Heketi
automatically provision a new GlusterFS volume and generate the persistent
volume (PV) object:

====
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi <1>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi <2>
----
<1> The Kubernetes storage class annotation and the name of the storage class.
<2> The amount of storage requested.
====

. Create the PVC YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f gluster-pvc.yaml
persistentvolumeclaim "gluster1" created
----

. View the PVC:
+
----
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster1 Bound pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180 5Gi RWO 14h
----
+
Notice that the PVC is bound to a dynamically created volume.
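+
If the claim were still in the `Pending` state, `oc describe` would show the
provisioning and binding events for the claim, which is a useful optional check:
+
----
$ oc describe pvc gluster1
----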

. View the persistent volume (PV):
+
----
$ oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180 5Gi RWO Delete Bound default/gluster1 14h
----
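+
Optionally, you can also confirm on the Heketi side that a volume was created to
back this PV. The volume names and IDs in the output will differ in your
environment:
+
----
$ heketi-cli volume list
----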

[[create-an-nginx-pod-that-uses-the-pvc]]
== Create an NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume, bound to a PVC.
Now, you can use this claim in a pod. Create a simple NGINX pod:

====
----
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx  # assumption: any NGINX image that serves /usr/share/nginx/html works here
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 <1>
----
<1> The name of the PVC created in the previous step.
====

. Create the Pod YAML file, save it, then submit it to {product-title}:
+
----
$ oc create -f nginx-pod.yaml
pod "gluster-pod1" created
----

. View the pod:
+
----
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-pod 1/1 Running 0 9m 10.38.0.0 node1
glusterfs-node0-2509304327-vpce1 1/1 Running 0 1d 192.168.10.100 node0
glusterfs-node1-3290690057-hhq92 1/1 Running 0 1d 192.168.10.101 node1
glusterfs-node2-4072075787-okzjv 1/1 Running 0 1d 192.168.10.102 node2
heketi-3017632314-yyngh 1/1 Running 0 1d 10.42.0.0 node0
----
+
[NOTE]
====
This may take a few minutes, as the pod may need to download the image if it
does not already exist.
====
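+
If the pod stays in the `ContainerCreating` state, the events shown by
`oc describe` include the image pull and volume mount progress:
+
----
$ oc describe pod nginx-pod
----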

. `oc exec` into the container and create an *_index.html_* file in the
`mountPath` definition of the pod:
+
----
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
----
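+
Still from the master host, you can optionally confirm that the mount path is
backed by the GlusterFS volume rather than by local container storage, provided
the image includes the `df` utility:
+
----
$ oc exec nginx-pod -- df -h /usr/share/nginx/html
----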

. Using the `curl` command from the master node, `curl` the URL of the pod:
+
----
$ curl http://10.38.0.0
Hello World from GlusterFS!!!
----

. Check your Gluster pod to ensure that the *_index.html_* file was written.
Choose any of the Gluster pods:
+
----
$ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
----
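
Finally, as an optional check that the data really is persistent, you can delete
and re-create the NGINX pod and confirm that the same content is still served.
This is only a sketch of that flow; the pod IP can change after re-creation, so
look it up again with `oc get pods -o wide`:

----
$ oc delete pod nginx-pod
$ oc create -f nginx-pod.yaml
$ curl http://<new-pod-IP>
Hello World from GlusterFS!!!
----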