
Switch English to use code not codenew shortcode
mengjiao-liu committed Aug 1, 2023
1 parent 815af5d commit 68ba963
Showing 95 changed files with 223 additions and 223 deletions.
@@ -470,7 +470,7 @@
traffic, you can configure rules to block any health check requests
that originate from outside your cluster.
{{< /caution >}}

-{{% codenew file="priority-and-fairness/health-for-strangers.yaml" %}}
+{{% code file="priority-and-fairness/health-for-strangers.yaml" %}}
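
The referenced manifest isn't included in this diff. As a sketch of what such a file contains — a FlowSchema that exempts unauthenticated health-check traffic from normal queuing, following the upstream example; the exact `apiVersion` depends on your cluster release:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3  # may differ by Kubernetes version
kind: FlowSchema
metadata:
  name: health-for-strangers
spec:
  matchingPrecedence: 1000
  priorityLevelConfiguration:
    name: exempt   # health checks are never queued or rejected
  rules:
    - nonResourceRules:
        - nonResourceURLs:
            - "/healthz"
            - "/livez"
            - "/readyz"
          verbs:
            - "*"
      subjects:
        - kind: Group
          group:
            name: "system:unauthenticated"
```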

## Diagnostics

10 changes: 5 additions & 5 deletions content/en/docs/concepts/cluster-administration/logging.md
@@ -39,7 +39,7 @@
Kubernetes captures logs from each container in a running Pod.
This example uses a manifest for a `Pod` with a container
that writes text to the standard output stream, once per second.

-{{% codenew file="debug/counter-pod.yaml" %}}
+{{% code file="debug/counter-pod.yaml" %}}
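
The manifest file itself isn't shown in the diff; a minimal sketch of such a Pod, modeled on the upstream example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    # Print an incrementing counter with a timestamp every second
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```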

To run this pod, use the following command:

@@ -255,7 +255,7 @@
For example, a pod runs a single container, and the container
writes to two different log files using two different formats. Here's a
manifest for the Pod:

-{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}}
+{{% code file="admin/logging/two-files-counter-pod.yaml" %}}

It is not recommended to write log entries with different formats to the same log
stream, even if you managed to redirect both components to the `stdout` stream of
@@ -265,7 +265,7 @@
the logs to its own `stdout` stream.

Here's a manifest for a pod that has two sidecar containers:

-{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}
+{{% code file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}
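
That file isn't reproduced in the diff; a sketch of the pattern it demonstrates, assuming it matches the upstream example: one container writes two log files to a shared `emptyDir` volume, and each sidecar tails one file to its own stdout:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    # Write two differently formatted streams to two files on a shared volume
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  # Each sidecar tails one file to its own stdout stream
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```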

Now when you run this pod, you can access each log stream separately by
running the following commands:
@@ -332,7 +332,7 @@
Here are two example manifests that you can use to implement a sidecar container with a logging agent.
The first manifest contains a [`ConfigMap`](/docs/tasks/configure-pod-container/configure-pod-configmap/)
to configure fluentd.

-{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}}
+{{% code file="admin/logging/fluentd-sidecar-config.yaml" %}}
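
The ConfigMap file isn't shown here; a sketch of its likely shape, assuming it follows the upstream example (the `google_cloud` output is a placeholder for whatever collector you run):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluentd.conf: |
    # Tail each log file under its own tag
    <source>
      type tail
      format none
      path /var/log/1.log
      pos_file /var/log/1.log.pos
      tag count.format1
    </source>
    <source>
      type tail
      format none
      path /var/log/2.log
      pos_file /var/log/2.log.pos
      tag count.format2
    </source>
    # Forward everything to the configured backend
    <match **>
      type google_cloud
    </match>
```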

{{< note >}}
In the sample configurations, you can replace fluentd with any logging agent, reading
@@ -342,7 +342,7 @@
from any source inside an application container.
The second manifest describes a pod that has a sidecar container running fluentd.
The pod mounts a volume where fluentd can pick up its configuration data.

-{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}
+{{% code file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}

### Exposing logs directly from the application

@@ -22,7 +22,7 @@
Many applications require multiple resources to be created, such as a Deployment and a Service.
Management of multiple resources can be simplified by grouping them together in the same file
(separated by `---` in YAML). For example:

-{{% codenew file="application/nginx-app.yaml" %}}
+{{% code file="application/nginx-app.yaml" %}}
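
The referenced `nginx-app.yaml` isn't shown; a sketch of a Service and a Deployment grouped in one file, separated by `---`, modeled on the upstream example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
# Three dashes separate documents within one file
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```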

Multiple resources can be created the same way as a single resource:

2 changes: 1 addition & 1 deletion content/en/docs/concepts/configuration/configmap.md
@@ -111,7 +111,7 @@
technique also lets you access a ConfigMap in a different namespace.
Here's an example Pod that uses values from `game-demo` to configure a Pod:

-{{% codenew file="configmap/configure-pod.yaml" %}}
+{{% code file="configmap/configure-pod.yaml" %}}
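
The example file isn't reproduced here; a sketch of a Pod consuming `game-demo` both as environment variables and as mounted files, assuming it follows the upstream example (key names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Single-line properties become environment variables
        - name: PLAYER_INITIAL_LIVES
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: player_initial_lives
      volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
  volumes:
    # File-like keys are projected into the container as files
    - name: config
      configMap:
        name: game-demo
        items:
          - key: "game.properties"
            path: "game.properties"
```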

A ConfigMap doesn't differentiate between single line property values and
multi-line file-like values.
@@ -77,7 +77,7 @@
request.

Here's an example `.yaml` file that shows the required fields and object spec for a Kubernetes Deployment:

-{{% codenew file="application/deployment.yaml" %}}
+{{% code file="application/deployment.yaml" %}}
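
The `.yaml` file itself isn't part of the diff; a sketch showing the required fields, modeled on the upstream example:

```yaml
apiVersion: apps/v1        # required: which API version to use
kind: Deployment           # required: what kind of object to create
metadata:                  # required: data that identifies the object
  name: nginx-deployment
spec:                      # required: the desired state for the object
  selector:
    matchLabels:
      app: nginx
  replicas: 2              # run two Pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```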

One way to create a Deployment using a `.yaml` file like the one above is to use the
[`kubectl apply`](/docs/reference/generated/kubectl/kubectl-commands#apply) command
6 changes: 3 additions & 3 deletions content/en/docs/concepts/policy/limit-range.md
@@ -54,12 +54,12 @@
A `LimitRange` does **not** check the consistency of the default values it applies.

For example, you define a `LimitRange` with this manifest:

-{{% codenew file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}
+{{% code file="concepts/policy/limit-range/problematic-limit-range.yaml" %}}


along with a Pod that declares a CPU resource request of `700m`, but not a limit:

-{{% codenew file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}
+{{% code file="concepts/policy/limit-range/example-conflict-with-limitrange-cpu.yaml" %}}


then that Pod will not be scheduled, failing with an error similar to:
@@ -69,7 +69,7 @@
Pod "example-conflict-with-limitrange-cpu" is invalid: spec.containers[0].resources.requests: Invalid value: "700m": must be less than or equal to cpu limit
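
Neither referenced manifest appears in the diff. A sketch of the combination, assuming it matches the upstream example files, shows why admission fails:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - type: Container
    default:           # limit injected into containers that declare none
      cpu: 500m
    defaultRequest:
      cpu: 500m
    max:
      cpu: "1"
    min:
      cpu: 100m
---
apiVersion: v1
kind: Pod
metadata:
  name: example-conflict-with-limitrange-cpu
spec:
  containers:
  - name: demo
    image: registry.k8s.io/pause:3.8
    resources:
      requests:
        cpu: 700m      # no limit declared: the 500m default limit is injected,
                       # making request (700m) > limit (500m), so the Pod is rejected
```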

If you set both `request` and `limit`, then that new Pod will be scheduled successfully even with the same `LimitRange` in place:

-{{% codenew file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}
+{{% code file="concepts/policy/limit-range/example-no-conflict-with-limitrange-cpu.yaml" %}}

## Example resource constraints

2 changes: 1 addition & 1 deletion content/en/docs/concepts/policy/resource-quotas.md
@@ -687,7 +687,7 @@
plugins:

Then, create a resource quota object in the `kube-system` namespace:

-{{% codenew file="policy/priority-class-resourcequota.yaml" %}}
+{{% code file="policy/priority-class-resourcequota.yaml" %}}

```shell
kubectl apply -f https://k8s.io/examples/policy/priority-class-resourcequota.yaml -n kube-system
```
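
The quota manifest isn't shown; a sketch of a ResourceQuota scoped to the `cluster-services` PriorityClass, following the upstream example:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-cluster-services
spec:
  scopeSelector:
    matchExpressions:
      # Only Pods with priorityClassName "cluster-services" count against this quota
      - operator: In
        scopeName: PriorityClass
        values: ["cluster-services"]
```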
@@ -122,7 +122,7 @@
your Pod spec.

For example, consider the following Pod spec:

-{{% codenew file="pods/pod-with-node-affinity.yaml" %}}
+{{% code file="pods/pod-with-node-affinity.yaml" %}}
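
The referenced spec isn't inlined in the diff; a sketch, assuming it matches the upstream `pods/pod-with-node-affinity.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only nodes in one of these zones qualify
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
      # Soft preference: prefer nodes carrying this label
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: registry.k8s.io/pause:2.0
```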

In this example, the following rules apply:

@@ -172,7 +172,7 @@
scheduling decision for the Pod.

For example, consider the following Pod spec:

-{{% codenew file="pods/pod-with-affinity-anti-affinity.yaml" %}}
+{{% code file="pods/pod-with-affinity-anti-affinity.yaml" %}}

If there are two possible nodes that match the
`preferredDuringSchedulingIgnoredDuringExecution` rule, one with the
@@ -288,7 +288,7 @@
spec.

Consider the following Pod spec:

-{{% codenew file="pods/pod-with-pod-affinity.yaml" %}}
+{{% code file="pods/pod-with-pod-affinity.yaml" %}}
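
A sketch of that spec, assuming it matches the upstream `pods/pod-with-pod-affinity.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      # Hard rule: co-locate with Pods labelled security=S1, per zone
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      # Soft rule: avoid zones already running Pods labelled security=S2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:2.0
```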

This example defines one Pod affinity rule and one Pod anti-affinity rule. The
Pod affinity rule uses the "hard"
@@ -31,7 +31,7 @@
each schedulingGate can be removed in arbitrary order, but addition of a new scheduling gate is disallowed.

To mark a Pod not-ready for scheduling, you can create it with one or more scheduling gates like this:

-{{% codenew file="pods/pod-with-scheduling-gates.yaml" %}}
+{{% code file="pods/pod-with-scheduling-gates.yaml" %}}
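
The gated manifest isn't shown in the diff; a sketch, following the upstream example (gate names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  # The Pod stays unschedulable until every gate is removed
  schedulingGates:
  - name: example.com/foo
  - name: example.com/bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.6
```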

After the Pod's creation, you can check its state using:

@@ -61,7 +61,7 @@
The output is:
To inform the scheduler that this Pod is ready for scheduling, you can remove its `schedulingGates`
entirely by re-applying a modified manifest:

-{{% codenew file="pods/pod-without-scheduling-gates.yaml" %}}
+{{% code file="pods/pod-without-scheduling-gates.yaml" %}}

You can check if the `schedulingGates` field is cleared by running:

@@ -64,7 +64,7 @@
tolerations:
Here's an example of a pod that uses tolerations:
-{{% codenew file="pods/pod-with-toleration.yaml" %}}
+{{% code file="pods/pod-with-toleration.yaml" %}}
The default value for `operator` is `Equal`.
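
The example file isn't shown in the diff; a sketch assuming it matches the upstream example — note that an `operator: Exists` toleration needs no `value`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  # Tolerates the taint "example-key:NoSchedule" regardless of its value
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
```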

@@ -284,7 +284,7 @@
graph BT
If you want an incoming Pod to be evenly spread with existing Pods across zones, you
can use a manifest similar to:

-{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}}
+{{% code file="pods/topology-spread-constraints/one-constraint.yaml" %}}
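
The one-constraint manifest isn't reproduced here; a sketch, assuming it matches the upstream example:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1                    # zones may differ by at most one matching Pod
    topologyKey: zone             # group nodes by their "zone" label
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar                  # count only Pods carrying this label
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1
```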

From that manifest, `topologyKey: zone` implies the even distribution will only be applied
to nodes that are labelled `zone: <any value>` (nodes that don't have a `zone` label
@@ -377,7 +377,7 @@
graph BT
You can combine two topology spread constraints to control the spread of Pods both
by node and by zone:

-{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}}
+{{% code file="pods/topology-spread-constraints/two-constraints.yaml" %}}

In this case, to match the first constraint, the incoming Pod can only be placed onto
nodes in zone `B`; while in terms of the second constraint, the incoming Pod can only be
@@ -466,7 +466,7 @@
and you know that zone `C` must be excluded. In this case, you can compose a manifest
as below, so that Pod `mypod` will be placed into zone `B` instead of zone `C`.
Similarly, Kubernetes also respects `spec.nodeSelector`.

-{{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}
+{{% code file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}

## Implicit conventions

@@ -300,7 +300,7 @@
Below are the properties a user can specify in the `dnsConfig` field:

The following is an example Pod with custom DNS settings:

-{{% codenew file="service/networking/custom-dns.yaml" %}}
+{{% code file="service/networking/custom-dns.yaml" %}}
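
The custom-DNS manifest isn't shown in the diff; a sketch, assuming it matches the upstream example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"        # ignore cluster DNS; use only dnsConfig below
  dnsConfig:
    nameservers:
      - 192.0.2.1          # documentation-range address; replace with yours
    searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0
```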

When the Pod above is created, the container `test` gets the following contents
in its `/etc/resolv.conf` file:
10 changes: 5 additions & 5 deletions content/en/docs/concepts/services-networking/dual-stack.md
@@ -135,7 +135,7 @@
These examples demonstrate the behavior of various dual-stack Service configuration scenarios.
[headless Services](/docs/concepts/services-networking/service/#headless-services) with selectors
will behave in this same way.)

-{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-default-svc.yaml" %}}

1. This Service specification explicitly defines `PreferDualStack` in `.spec.ipFamilyPolicy`. When
you create this Service on a dual-stack cluster, Kubernetes assigns both IPv4 and IPv6
@@ -151,14 +151,14 @@
These examples demonstrate the behavior of various dual-stack Service configuration scenarios.
* On a cluster with dual-stack enabled, specifying `RequireDualStack` in `.spec.ipFamilyPolicy`
behaves the same as `PreferDualStack`.

-{{% codenew file="service/networking/dual-stack-preferred-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-preferred-svc.yaml" %}}

1. This Service specification explicitly defines `IPv6` and `IPv4` in `.spec.ipFamilies` as well
as defining `PreferDualStack` in `.spec.ipFamilyPolicy`. When Kubernetes assigns an IPv6 and
IPv4 address in `.spec.ClusterIPs`, `.spec.ClusterIP` is set to the IPv6 address because that is
the first element in the `.spec.ClusterIPs` array, overriding the default.

-{{% codenew file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-preferred-ipfamilies-svc.yaml" %}}
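
The Service manifests referenced in these examples aren't shown in the diff. As one illustration, a sketch of the third variant — `PreferDualStack` with `IPv6` listed first in `.spec.ipFamilies` — assuming it matches the upstream example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6          # first entry becomes the cluster IP on a dual-stack cluster
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
```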

#### Dual-stack defaults on existing Services

@@ -171,7 +171,7 @@
dual-stack.)
`.spec.ipFamilies` to the address family of the existing Service. The existing Service cluster IP
will be stored in `.spec.ClusterIPs`.

-{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-default-svc.yaml" %}}

You can validate this behavior by using kubectl to inspect an existing service.

@@ -211,7 +211,7 @@
dual-stack.)
`--service-cluster-ip-range` flag to the kube-apiserver) even though `.spec.ClusterIP` is set to
`None`.

-{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
+{{% code file="service/networking/dual-stack-default-svc.yaml" %}}

You can validate this behavior by using kubectl to inspect an existing headless service with selectors.

20 changes: 10 additions & 10 deletions content/en/docs/concepts/services-networking/ingress.md
@@ -73,7 +73,7 @@
Make sure you review your Ingress controller's documentation to understand the caveats of choosing it.

A minimal Ingress resource example:

-{{% codenew file="service/networking/minimal-ingress.yaml" %}}
+{{% code file="service/networking/minimal-ingress.yaml" %}}
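
The minimal example isn't inlined in the diff; a sketch, assuming it matches the upstream `minimal-ingress.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test      # Service that receives the routed traffic
            port:
              number: 80
```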

An Ingress needs `apiVersion`, `kind`, `metadata` and `spec` fields.
The name of an Ingress object must be a valid
@@ -140,7 +140,7 @@
setting with Service, and will fail validation if both are specified. A common
usage for a `Resource` backend is to ingress data to an object storage backend
with static assets.

-{{% codenew file="service/networking/ingress-resource-backend.yaml" %}}
+{{% code file="service/networking/ingress-resource-backend.yaml" %}}

After creating the Ingress above, you can view it with the following command:

@@ -229,7 +229,7 @@
equal to the suffix of the wildcard rule.
| `*.foo.com` | `baz.bar.foo.com` | No match, wildcard only covers a single DNS label |
| `*.foo.com` | `foo.com` | No match, wildcard only covers a single DNS label |

-{{% codenew file="service/networking/ingress-wildcard-host.yaml" %}}
+{{% code file="service/networking/ingress-wildcard-host.yaml" %}}

## Ingress class

@@ -238,7 +238,7 @@
configuration. Each Ingress should specify a class, a reference to an
IngressClass resource that contains additional configuration including the name
of the controller that should implement the class.

-{{% codenew file="service/networking/external-lb.yaml" %}}
+{{% code file="service/networking/external-lb.yaml" %}}
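
The IngressClass manifest isn't shown here; a sketch, assuming it matches the upstream `external-lb.yaml` (controller and parameter names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    # Optional reference to controller-specific configuration
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
```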

The `.spec.parameters` field of an IngressClass lets you reference another
resource that provides configuration related to that IngressClass.
@@ -369,7 +369,7 @@
configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the
`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
default `IngressClass`:

-{{% codenew file="service/networking/default-ingressclass.yaml" %}}
+{{% code file="service/networking/default-ingressclass.yaml" %}}

## Types of Ingress

Expand All @@ -379,7 +379,7 @@ There are existing Kubernetes concepts that allow you to expose a single Service
(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a
*default backend* with no rules.

-{{% codenew file="service/networking/test-ingress.yaml" %}}
+{{% code file="service/networking/test-ingress.yaml" %}}

If you create it using `kubectl apply -f` you should be able to view the state
of the Ingress you added:
@@ -411,7 +411,7 @@
down to a minimum. For example, a setup like:
It would require an Ingress such as:
-{{% codenew file="service/networking/simple-fanout-example.yaml" %}}
+{{% code file="service/networking/simple-fanout-example.yaml" %}}
When you create the Ingress with `kubectl apply -f`:
@@ -456,7 +456,7 @@
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
The following Ingress tells the backing load balancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).

-{{% codenew file="service/networking/name-virtual-host-ingress.yaml" %}}
+{{% code file="service/networking/name-virtual-host-ingress.yaml" %}}

If you create an Ingress resource without any hosts defined in the rules, then any
web traffic to the IP address of your Ingress controller can be matched without a name based
Expand All @@ -467,7 +467,7 @@ requested for `first.bar.com` to `service1`, `second.bar.com` to `service2`,
and any traffic whose request host header doesn't match `first.bar.com`
and `second.bar.com` to `service3`.

-{{% codenew file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}
+{{% code file="service/networking/name-virtual-host-ingress-no-third-host.yaml" %}}

### TLS

@@ -505,7 +505,7 @@
certificates would have to be issued for all the possible sub-domains. Therefore
section.
{{< /note >}}

-{{% codenew file="service/networking/tls-example-ingress.yaml" %}}
+{{% code file="service/networking/tls-example-ingress.yaml" %}}
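
The TLS example isn't reproduced in the diff; a sketch, assuming it matches the upstream `tls-example-ingress.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
      - https-example.foo.com
    secretName: testsecret-tls   # Secret holding tls.crt and tls.key
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
```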

{{< note >}}
There is a gap between TLS features supported by various Ingress