Commit 0e523f4: Fixing some typos

bergerhoffer committed Sep 13, 2023
1 parent 01e8d47 commit 0e523f4
Showing 25 changed files with 43 additions and 44 deletions.
@@ -20,7 +20,7 @@ The {ai-full} supports IPv4 networking and dual stack networking. The {ai-full}

.. Enter the default gateway IP address.

-.. Enter the DNS server IP addresss.
+.. Enter the DNS server IP address.

. Enter the host-specific configuration.

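For reference, static settings like the gateway and DNS server entered in this wizard are conventionally expressed elsewhere in the installer as NMState YAML; a minimal sketch, with every value a placeholder rather than anything taken from this commit:

[source,yaml]
----
dns-resolver:
  config:
    server:
    - <dns_ip>
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: <gateway_ip>
    next-hop-interface: <interface_name>
----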
4 changes: 2 additions & 2 deletions modules/deleting-machine-pools.adoc
@@ -11,10 +11,10 @@ You can delete a machine pool in the event that your workload requirements have

You can delete machine pools using the
ifdef::openshift-rosa[]
-Openshift Cluster Manager or the ROSA CLI (`rosa`).
+OpenShift Cluster Manager or the ROSA CLI (`rosa`).
endif::openshift-rosa[]
ifndef::openshift-rosa[]
-Openshift Cluster Manager.
+OpenShift Cluster Manager.
endif::[]
ifndef::openshift-rosa[]

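A hedged sketch of the ROSA CLI path mentioned above (the cluster name and machine pool ID are placeholders; confirm the syntax against your installed `rosa` version):

[source,terminal]
----
$ rosa delete machinepool --cluster=<cluster_name> <machine_pool_id>
----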
2 changes: 1 addition & 1 deletion modules/installation-creating-azure-service-principal.adoc
@@ -14,7 +14,7 @@ If you are unable to use a service principal, you can use a managed identity.

* You have installed or updated the link:https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest[Azure CLI].
* You have an Azure subscription ID.
-* If you are not going to assign the the `Contributor` and `User Administrator Access` roles to the service principal, you have created a custom role with the required Azure permissions.
+* If you are not going to assign the `Contributor` and `User Administrator Access` roles to the service principal, you have created a custom role with the required Azure permissions.
.Procedure

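As a sketch of how a service principal with the `Contributor` role might be created with the Azure CLI named in the prerequisites (the service principal name and subscription ID are placeholders; verify the flags against your `az` version):

[source,terminal]
----
$ az ad sp create-for-rbac --role Contributor --name <service_principal> \
  --scopes /subscriptions/<subscription_id>
----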
2 changes: 1 addition & 1 deletion modules/installation-gcp-marketplace.adoc
@@ -21,7 +21,7 @@ By default, the installation program downloads and installs the {op-system-first
** Set the `name` parameter to one of the following offers:
+
{product-title}:: `redhat-coreos-ocp-413-x86-64-202305021736`
-{opp}:: `redhat-coreos-opp-413-x86-64-202305021736``
+{opp}:: `redhat-coreos-opp-413-x86-64-202305021736`
{oke}:: `redhat-coreos-oke-413-x86-64-202305021736`
. Save the file and reference it when deploying the cluster.

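A hedged sketch of where the `name` parameter above lives in `install-config.yaml`; the `redhat-marketplace-public` project name is an assumption, not something stated in this hunk:

[source,yaml]
----
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      osImage:
        project: redhat-marketplace-public  # assumed project for Red Hat Marketplace images
        name: redhat-coreos-ocp-413-x86-64-202305021736
----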
2 changes: 1 addition & 1 deletion modules/installation-ibm-cloud-iam-policies-api-key.adoc
@@ -126,7 +126,7 @@ ifdef::ibm-power-vs[]
|Default resource group: The resource type should equal `Resource group`, with a value of `Default`. If your account administrator changed your account's default resource group to something other than Default, use that value instead.

|Viewer, Operator, Editor, Reader, Manager
-|Power Systems Virtual Server service in <resoure_group> resource group
+|Power Systems Virtual Server service in <resource_group> resource group

|Viewer, Operator, Editor, Reader, Writer, Manager, Administrator
|Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability
@@ -1,4 +1,4 @@
// This module is included in the following assemblies:
//
// installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

@@ -8,11 +8,11 @@

For edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. To locate remote worker nodes in subnets, you might use different network segments or subnets for the remote worker nodes than you used for the control plane subnet and local worker nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios.

-If you have established different network segments or subnets for remote worker nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the `networkConfig` paramter for each remote worker node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures the remote worker nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.
+If you have established different network segments or subnets for remote worker nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the `networkConfig` parameter for each remote worker node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures the remote worker nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.

[IMPORTANT]
====
All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.
Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`.
====
@@ -56,4 +56,4 @@ networkConfig:
<1> Replace `<interface_name>` with the interface name.
<2> Replace `<node_ip>` with the IP address of the node.
<3> Replace `<gateway_ip>` with the IP address of the gateway.
<4> Replace `<dns_ip>` with the IP address of the DNS server.
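The YAML that callouts <1> through <4> annotate is collapsed in this view; a plausible reconstruction in NMState syntax, reusing the same placeholders (a sketch, not the module's exact contents):

[source,yaml]
----
networkConfig:
  interfaces:
  - name: <interface_name> <1>
    type: ethernet
    state: up
    ipv4:
      enabled: true
      address:
      - ip: <node_ip> <2>
        prefix-length: 24
  routes:
    config:
    - destination: 0.0.0.0/0
      next-hop-address: <gateway_ip> <3>
      next-hop-interface: <interface_name>
  dns-resolver:
    config:
      server:
      - <dns_ip> <4>
----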
2 changes: 1 addition & 1 deletion modules/migration-mtc-release-notes-1-7-06.adoc
@@ -9,7 +9,7 @@
[id="new-features-1-7-6_{context}"]
== New features

-.Implement proposed changes for DVM support with PSA in Red Hat Openshift Container Platform 4.12
+.Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12
With the upcoming enforcement of Pod Security Admission (PSA) in {OCP} 4.12, the default pod runs with a `restricted` profile. Workloads to migrate would violate this `restricted` policy and would no longer work. The enhancement above outlines the changes required to remain compatible with OCP 4.12. (link:https://issues.redhat.com/browse/MIG-1240[*MIG-1240*])

[id="resolved-issues-1-7-06_{context}"]
10 changes: 5 additions & 5 deletions modules/network-observability-loki-install.adoc
@@ -15,7 +15,7 @@ It is recommended to install link:https://catalog.redhat.com/software/containers
//* <Any Loki install prerequisites for using with Network Observability operator?>

There are several ways you can install Loki. One way you can install the Loki Operator is by using the {product-title} web console Operator Hub.


.Procedure
@@ -32,9 +32,9 @@ There are several ways you can install Loki. One way you can install the Loki Op

.. Verify that *Loki Operator* is listed with *Status* as *Succeeded* in all the projects.
+
. Create a `Secret` YAML file. You can create this secret in the web console or CLI.
.. Using the web console, navigate to the *Project* -> *All Projects* dropdown and select *Create Project*. Name the project `netobserv` and click *Create*.
-.. Navigate to the Import icon ,*+*, in the top right corner. Drop your YAML file into the editor. It is important to create this YAML file in the `netobserv` namespace that uses the `access_key_id` and `access_key_secret` to specify your credentials.
+.. Navigate to the Import icon, *+*, in the top right corner. Drop your YAML file into the editor. It is important to create this YAML file in the `netobserv` namespace that uses the `access_key_id` and `access_key_secret` to specify your credentials.

.. Once you create the secret, you should see it listed under *Workloads* -> *Secrets* in the web console.
+
@@ -56,5 +56,5 @@ stringData:

[IMPORTANT]
====
To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining `ClusterRoles` and `ClusterRoleBindings`, data stored in object store, and persistent volume that must be removed.
====
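The secret body is collapsed above (the hunk shows only `stringData:`); a hedged sketch of its likely shape, where the secret name and the bucket, endpoint, and region keys are assumptions, and only `access_key_id` and `access_key_secret` come from the surrounding text:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3          # assumed name
  namespace: netobserv
stringData:
  access_key_id: <access_key_id>
  access_key_secret: <access_key_secret>
  bucketnames: <bucket_name>   # assumed key
  endpoint: <endpoint>         # assumed key
  region: <region>             # assumed key
----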
@@ -39,7 +39,7 @@ that is not supported. The following TuneD plugins are currently not supported:
The TuneD bootloader plugin only supports {op-system-first} worker nodes.
====

-.Additional references
+.Additional resources

* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance#available-tuned-plug-ins_customizing-tuned-profiles[Available TuneD Plugins]

2 changes: 1 addition & 1 deletion modules/nvidia-gpu-vsphere.adoc
@@ -8,7 +8,7 @@

You can deploy {product-title} on an NVIDIA-certified VMware vSphere server that can host different GPU types.

-An NVIDIA GPU driver must be installed in the hypervisor in case vGPU instances are used by the VMs. For VMWare vSphere, this host driver is provided in the form of a VIB file.
+An NVIDIA GPU driver must be installed in the hypervisor in case vGPU instances are used by the VMs. For VMware vSphere, this host driver is provided in the form of a VIB file.
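A sketch of how such a VIB file is typically installed on an ESXi host (the path and file name are placeholders; `esxcli software vib install` requires the full path to the file):

[source,terminal]
----
$ esxcli software vib install -v /<path_to_vib_file>/<nvidia_driver>.vib
----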

The maximum number of vGPUs that can be allocated to worker node VMs depends on the version of vSphere:

2 changes: 1 addition & 1 deletion modules/nw-kuryr-migration.adoc
@@ -288,7 +288,7 @@ If SSH access is not available, you can use the `openstack` command:
----
$ for name in $(openstack server list --name ${CLUSTERID}\* -f value -c Name); do openstack server reboot $name; done
----
-Alternatively, you might be able to to reboot each node through the management portal for
+Alternatively, you might be able to reboot each node through the management portal for
your infrastructure provider. Otherwise, contact the appropriate authority to
either gain access to the virtual machines through SSH or the management
portal and OpenStack client.
4 changes: 2 additions & 2 deletions modules/nw-networkpolicy-about.adoc
@@ -149,7 +149,7 @@ spec:
policyTypes:
- Ingress
----
-<1> `policy-group.network.openshift.io/ingress:""` label supports both Openshift-SDN and OVN-Kubernetes.
+<1> `policy-group.network.openshift.io/ingress:""` label supports both OpenShift-SDN and OVN-Kubernetes.


[id="nw-networkpolicy-allow-from-hostnetwork_{context}"]
@@ -172,4 +172,4 @@
podSelector: {}
policyTypes:
- Ingress
----
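The body of the allow-from-hostnetwork policy is mostly collapsed above; a hedged reconstruction, assuming the `policy-group.network.openshift.io/host-network: ""` namespace label (a sketch, not the module's exact YAML):

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-host-network
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ""
  podSelector: {}
  policyTypes:
  - Ingress
----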
17 changes: 8 additions & 9 deletions modules/nw-sriov-configure-exclude-topology-manager.adoc
@@ -4,7 +4,7 @@

:_content-type: PROCEDURE
[id="nw-sriov-configure-exclude-topology-manager_{context}"]
= Excluding the SR-IOV network topology for NUMA-aware scheduling

To exclude advertising the SR-IOV network resource's Non-Uniform Memory Access (NUMA) node to the Topology Manager, you can configure the `excludeTopology` specification in the `SriovNetworkNodePolicy` custom resource. Use this configuration for more flexible SR-IOV network deployments during NUMA-aware pod scheduling.

@@ -45,7 +45,7 @@ spec:
+
[NOTE]
====
If multiple `SriovNetworkNodePolicy` resources target the same SR-IOV network resource, the `SriovNetworkNodePolicy` resources must have the same value as the `excludeTopology` specification. Otherwise, the conflicting policy is rejected.
====
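The `SriovNetworkNodePolicy` spec referenced above is collapsed in this view; a hedged sketch of a policy that sets `excludeTopology` (the names and selector values are placeholders):

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-numa-0-policy
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnuma0          # sample resourceName, matching the later SriovNetwork step
  nodeSelector:
    kubernetes.io/hostname: <node_name>
  numVfs: <num_vfs>
  nicSelector:
    pfNames: ["<interface_name>"]
  excludeTopology: true             # exclude this resource's NUMA node from Topology Manager advertising
----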

.. Create the `SriovNetworkNodePolicy` resource by running the following command:
@@ -77,11 +77,11 @@ spec:
networkNamespace: <namespace> <3>
ipam: |- <4>
{
"type": "<ipam_type>",
}
----
<1> Replace `sriov-numa-0-network` with the name for the SR-IOV network resource.
<2> Specify the resource name for the `SriovNetworkNodePolicy` CR from the previous step. This YAML uses a sample `resourceName` value.
<3> Enter the namespace for your SR-IOV network resource.
<4> Enter the IP address management configuration for the SR-IOV network.

@@ -118,7 +118,7 @@ metadata:
spec:
containers:
- name: <container_name>
image: <image>
imagePullPolicy: IfNotPresent
command: ["sleep", "infinity"]
----
@@ -155,7 +155,7 @@ test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h

. Open a debug session with the target pod to verify that the SR-IOV network resources are deployed to a different node than the memory and CPU resources.

-.. Open a debug session with the pod by running the follow command, replacing <pod_name> with the target pod name.
+.. Open a debug session with the pod by running the following command, replacing <pod_name> with the target pod name.
+
[source,terminal]
----
@@ -211,6 +211,5 @@ In this example, CPUs 1,3,5, and 7 are allocated to `NUMA node1` but the SR-IOV

[NOTE]
====
If the `excludeTopology` specification is set to `True`, it is possible that the required resources exist in the same NUMA node.
====

@@ -11,7 +11,7 @@ In general, you back up data from one {product-title} cluster and restore it on

.Prerequisites

-* All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for for the Data Protection Application (DPA), are described in the relevant sections of this guide.
+* All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide.
.Procedure

4 changes: 2 additions & 2 deletions modules/oadp-installing-oadp-rosa-sts.adoc
@@ -21,7 +21,7 @@ Restic is not supported in the OADP on ROSA with AWS STS environment. Ensure the
.Procedure

-. Create an Openshift secret from your AWS token file by entering the following commands.
+. Create an OpenShift secret from your AWS token file by entering the following commands.

.. Create the credentials file:
+
@@ -116,7 +116,7 @@ EOF
+
<1> The `credentialsFile` is the mounted location of the bucket credential on the pod.
<2> The `enableSharedConfig` allows the `snapshotLocations` to share or reuse the credential defined for the bucket.
<3> Assume your Velero default for your `profile: default`.
<4> Specify `region` as your AWS region. This must be the same as the cluster region.
+
[NOTE]
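A hedged sketch of the `snapshotLocations` stanza that callouts <1> through <4> annotate (the `credentialsFile` path is an assumption; only the keys named in the callouts come from the text):

[source,yaml]
----
snapshotLocations:
  - velero:
      provider: aws
      config:
        credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials <1>
        enableSharedConfig: "true" <2>
        profile: default <3>
        region: <region> <4>
----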
4 changes: 2 additions & 2 deletions modules/oc-mirror-imageset-config-params.adoc
@@ -277,7 +277,7 @@
====
Using the `minVersion` and `maxVersion` properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message will state that there are `multiple channel heads`. This is because when the filter is applied, the update graph of the operator is truncated.
-The Operator Lifecycle Manager requires that every operator channel contains versions that form an update graph with exactly one end point, that is , the latest version of the operator. When applying the filter range that graph can turn into two or more separate graphs or a graph that has more than one end point.
+The Operator Lifecycle Manager requires that every operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the operator. When applying the filter range that graph can turn into two or more separate graphs or a graph that has more than one end point.
To avoid this error, do not filter out the latest version of an operator. If you still run into the error, depending on the operator, either the `maxVersion` property needs to be increased or the `minVersion` property needs to be decreased. Because every operator graph can be different, you might need to adjust these values, according to the procedure, until the error is gone.
====
====
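A hedged sketch of the version-range filter that the admonition describes, in `ImageSetConfiguration` form (the operator name and version bounds are placeholders):

[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
    packages:
    - name: <operator_name>
      channels:
      - name: stable
        minVersion: <min_version>  # do not filter out the channel's latest version
        maxVersion: <max_version>
----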
2 changes: 1 addition & 1 deletion modules/persistent-storage-local-install.adoc
@@ -36,7 +36,7 @@ $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''

. Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.
+
-Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the `managment` pool. Perform this step on single-node installations that use management workload partitioning.
+Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the `management` pool. Perform this step on single-node installations that use management workload partitioning.
+
To allow the Local Storage Operator to run on the management CPU pool, run the following commands:
+
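The commands themselves are collapsed in this view; a plausible sketch, assuming the standard management workload annotation (verify against the published module):

[source,terminal]
----
$ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'
----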
2 changes: 1 addition & 1 deletion modules/rosa-adding-taints-ocm.adoc
@@ -6,7 +6,7 @@

:_content-type: PROCEDURE
[id="rosa-adding-taints-ocm{context}"]
-= Adding taints to a machine pool using Openshift Cluster Manager
+= Adding taints to a machine pool using OpenShift Cluster Manager

You can add taints to a machine pool for your Red Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.

2 changes: 1 addition & 1 deletion modules/serverless-domain-mapping-custom-tls-cert.adoc
@@ -35,7 +35,7 @@ To work around this issue, enable mTLS by deploying `PeerAuthentication` instead
$ oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>
----

-. Add the `networking.internal.knative.dev/certificate-uid: <id>`` label to the Kubernetes TLS secret:
+. Add the `networking.internal.knative.dev/certificate-uid: <id>` label to the Kubernetes TLS secret:
+
[source,terminal]
----
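# The command body is collapsed in this view; a hypothetical sketch,
# using the label key from the step above and placeholder values:
$ oc label secret <tls_secret_name> networking.internal.knative.dev/certificate-uid="<id>"
----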
@@ -15,7 +15,7 @@ Snapshot indications are contextual information about online virtual machine (VM
.Procedure

. Display the output from the snapshot indications by doing one of the following:
-* For snapshots created by using the command line, view indicator output in the the `status` stanza of the `VirtualMachineSnapshot` object YAML.
+* For snapshots created by using the command line, view indicator output in the `status` stanza of the `VirtualMachineSnapshot` object YAML.
* For snapshots created by using the web console, click *VirtualMachineSnapshot* -> *Status* in the *Snapshot details* screen.

. Verify the status of your online VM snapshot:
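A hedged way to view that `status` stanza from the command line (the lowercase resource name is an assumption about the CLI, and `<snapshot_name>` is a placeholder):

[source,terminal]
----
$ oc get virtualmachinesnapshot <snapshot_name> -o yaml
----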
2 changes: 1 addition & 1 deletion modules/ztp-site-cleanup.adoc
@@ -14,7 +14,7 @@ You can remove a managed site and the associated installation and configuration
* You have logged in to the hub cluster as a user with `cluster-admin` privileges.
-.Precedure
+.Procedure

. Remove a site and the associated CRs by removing the associated `SiteConfig` and `PolicyGenTemplate` files from the `kustomization.yaml` file.
+
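A hedged sketch of the `kustomization.yaml` edit the step describes, assuming the ZTP convention of listing `SiteConfig` files under `generators` (the file names are placeholders):

[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
#- <site_to_remove>.yaml   # remove or comment out the site being deleted
- <remaining_site>.yaml
----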
2 changes: 1 addition & 1 deletion rosa_architecture/rosa-understanding.adoc
@@ -17,7 +17,7 @@ You subscribe to the service directly from your AWS account. After the clusters
You receive OpenShift updates with new feature releases and a shared, common source for alignment with OpenShift Container Platform. ROSA supports the same versions of OpenShift as Red Hat OpenShift Dedicated and OpenShift Container Platform to achieve version consistency.

image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}]
-For additional information on ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red Hat Openshift Service on AWS (ROSA) interactive walkthrough].
+For additional information on ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough].

[id="rosa-understanding-credential-modes_{context}"]
== Credential modes