support portName in HTTPScaledObject service scaleTargetRef (#1174)
* support portName in HTTPScaledObject service scaleTargetRef

Signed-off-by: Jan Wozniak <[email protected]>

* mutually exclusive port and portName

Co-authored-by: Jirka Kremser <[email protected]>
Signed-off-by: Jan Wozniak <[email protected]>

* make manifests

Signed-off-by: Jan Wozniak <[email protected]>

* fix CEL syntax

Signed-off-by: Jan Wozniak <[email protected]>

* e2e test for portName

Signed-off-by: Jan Wozniak <[email protected]>

* use service lister instead of endpoints cache to get port from portName

Signed-off-by: Jan Wozniak <[email protected]>

* docs for v0.8.1 HTTPScaledObject

Signed-off-by: Jan Wozniak <[email protected]>

---------

Signed-off-by: Jan Wozniak <[email protected]>
Co-authored-by: Jirka Kremser <[email protected]>
wozniakjan and jkremser authored Oct 25, 2024
1 parent 5d2e0ad commit f5ab058
Showing 12 changed files with 592 additions and 18 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -23,6 +23,7 @@ This changelog keeps track of work items that have been completed and are ready

### New

- **General**: Support portName in HTTPScaledObject service scaleTargetRef ([#1174](https://github.com/kedacore/http-add-on/issues/1174))
- **General**: Support setting multiple TLS certs for different domains on the interceptor proxy ([#1116](https://github.com/kedacore/http-add-on/issues/1116))
- **General**: TODO ([#TODO](https://github.com/kedacore/http-add-on/issues/TODO))

12 changes: 9 additions & 3 deletions config/crd/bases/http.keda.sh_httpscaledobjects.yaml
@@ -93,8 +93,9 @@ spec:
type: integer
type: object
scaleTargetRef:
description: The name of the deployment to route HTTP requests to
(and to autoscale).
description: |-
The name of the deployment to route HTTP requests to (and to autoscale).
Including validation as a requirement to define either the PortName or the Port
properties:
apiVersion:
type: string
@@ -106,13 +107,18 @@ spec:
description: The port to route to
format: int32
type: integer
portName:
description: The port to route to referenced by name
type: string
service:
description: The name of the service to route to
type: string
required:
- port
- service
type: object
x-kubernetes-validations:
- message: must define either the 'portName' or the 'port'
rule: has(self.portName) != has(self.port)
scaledownPeriod:
description: (optional) Cooldown period value
format: int32
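
The new CEL rule makes `port` and `portName` mutually exclusive: exactly one of them must be set. A minimal `scaleTargetRef` sketch that passes this validation (names are illustrative):

```yaml
scaleTargetRef:
  name: xkcd            # workload to scale
  kind: Deployment
  apiVersion: apps/v1
  service: xkcd         # Service to route traffic to
  portName: http        # named Service port; also setting `port: 8080`
                        # would fail: has(self.portName) != has(self.port)
```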
8 changes: 8 additions & 0 deletions config/interceptor/role.yaml
@@ -12,6 +12,14 @@ rules:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- http.keda.sh
resources:
148 changes: 148 additions & 0 deletions docs/ref/v0.8.1/http_scaled_object.md
@@ -0,0 +1,148 @@
# The `HTTPScaledObject`

>This document reflects the specification of the `HTTPScaledObject` resource for the `v0.8.1` version.

Each `HTTPScaledObject` looks approximately like the following:

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
name: xkcd
annotations:
httpscaledobject.keda.sh/skip-scaledobject-creation: "false"
spec:
hosts:
- myhost.com
pathPrefixes:
- /test
scaleTargetRef:
name: xkcd
kind: Deployment
apiVersion: apps/v1
service: xkcd
port: 8080
replicas:
min: 5
max: 10
scaledownPeriod: 300
scalingMetric: # requestRate and concurrency are mutually exclusive
requestRate:
granularity: 1s
targetValue: 100
window: 1m
concurrency:
targetValue: 100
```

This document is a narrated reference guide for the `HTTPScaledObject`.

## `httpscaledobject.keda.sh/skip-scaledobject-creation` annotation

This annotation disables ScaledObject generation and management while keeping routing and metrics available. This is done by removing the current ScaledObject if it has already been created, allowing the use of user-managed ScaledObjects that point to the add-on scaler directly (supporting all ScaledObject configurations and multiple triggers). You can read more about this [here](./../../walkthrough.md#integrating-http-add-on-scaler-with-other-keda-scalers)
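
For example, a sketch of an `HTTPScaledObject` that keeps routing and metrics but leaves the `ScaledObject` to be managed by the user (resource names are illustrative):

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: xkcd
  annotations:
    # the add-on will not create or manage a ScaledObject for this resource
    httpscaledobject.keda.sh/skip-scaledobject-creation: "true"
spec:
  hosts:
    - myhost.com
  scaleTargetRef:
    name: xkcd
    kind: Deployment
    apiVersion: apps/v1
    service: xkcd
    port: 8080
```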


## `hosts`

These are the hosts to apply this scaling rule to. All incoming requests with one of these values in their `Host` header will be forwarded to the `Service` and port specified in the `scaleTargetRef` below, and that same `scaleTargetRef`'s workload will be scaled accordingly.

## `pathPrefixes`

>Default: "/"

These are the paths to apply this scaling rule to. All incoming requests with one of these values as their path prefix will be forwarded to the `Service` and port specified in the `scaleTargetRef` below, and that same `scaleTargetRef`'s workload will be scaled accordingly. For example, with the prefix `/test`, a request for `/test/comic` matches this rule, while a request for `/other` does not.

## `scaleTargetRef`

This is the primary and most important part of the `spec` because it describes:

1. What workload to scale.
2. The service to which to route HTTP traffic.
3. The port (by number or by name) on that service to route to.

### `deployment` (DEPRECATED: removed as part of v0.9.0)

This is the name of the `Deployment` to scale. It must exist in the same namespace as this `HTTPScaledObject` and shouldn't be managed by any other autoscaling system. This means that there should not be any `ScaledObject` already created for this `Deployment`. The HTTP Add-on will manage a `ScaledObject` internally.

### `name`

This is the name of the workload to scale. It must exist in the same namespace as this `HTTPScaledObject` and shouldn't be managed by any other autoscaling system. This means that there should not be any `ScaledObject` already created for this workload. The HTTP Add-on will manage a `ScaledObject` internally.

### `kind`

This is the kind of the workload to scale.

### `apiVersion`

This is the apiVersion of the workload to scale.

### `service`

This is the name of the service to route traffic to. The add-on will create autoscaling and routing components that route to this `Service`. It must exist in the same namespace as this `HTTPScaledObject` and should route to the same `Deployment` as you entered in the `deployment` field.

### `port`

This is the port to route to on the service that you specified in the `service` field. It should be exposed on the service and should route to a valid `containerPort` on the `Deployment` you gave in the `deployment` field.

### `portName`

Alternatively, the port can be referenced using its `name` as defined in the `Service`. Note that `port` and `portName` are mutually exclusive; exactly one of them must be set.
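
As a sketch (resource names are illustrative), given a `Service` that names its port, the `HTTPScaledObject` can reference it by that name instead of by number:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: xkcd
spec:
  selector:
    app: xkcd
  ports:
    - name: http        # the name referenced by portName below
      port: 8080
      targetPort: 8080
---
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: xkcd
spec:
  hosts:
    - myhost.com
  scaleTargetRef:
    name: xkcd
    kind: Deployment
    apiVersion: apps/v1
    service: xkcd
    portName: http      # resolved to 8080 via the Service's port list
```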

### `targetPendingRequests` (DEPRECATED: removed as part of v0.9.0)

>Default: 100

This is the number of _pending_ (or in-progress) requests that your application needs to have before the HTTP Add-on will scale it. Conversely, if your application has below this number of pending requests, the HTTP add-on will scale it down.

For example, if you set this field to 100, the HTTP Add-on will scale your app up if it sees that there are 200 in-progress requests. On the other hand, it will scale down if it sees that there are only 20 in-progress requests. Note that it will _never_ scale your app to zero replicas unless there are _no_ requests in-progress. Even if you set this value to a very high number and only have a single in-progress request, your app will still have one replica.

### `scaledownPeriod`

>Default: 300

The period to wait after the last active request is reported before scaling the resource back to 0.

> Note: This time is measured on the KEDA side based on in-flight requests, so workloads with sparse, random traffic could be scaled to 0 unexpectedly. In those cases we recommend extending this period to ensure that doesn't happen.


## `scalingMetric`

This is the second most important part of the `spec` because it describes how the workload should scale. This section contains two nested sections (`requestRate` and `concurrency`), which are mutually exclusive.

### `requestRate`

This section enables scaling based on the request rate.

> **NOTE**: Request information is stored in memory; aggregating over long periods (longer than 5 minutes) or at too fine a granularity (less than 1 second) could cause performance issues or increased memory usage.

> **NOTE 2**: Although `window` and/or `granularity` can be updated, doing so replaces all of the stored request-count information. This can produce unexpected scaling behaviour until the window is populated again.

#### `targetValue`

>Default: 100

This is the target value for the scaling configuration.

#### `window`

>Default: "1m"

This value defines the aggregation window for the request rate calculation.

#### `granularity`

>Default: "1s"

This value defines the granularity of the aggregated requests for the request rate calculation.
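
Putting these together, a `requestRate` block with the defaults written out explicitly (the values shown are the documented defaults, included only for illustration):

```yaml
scalingMetric:
  requestRate:
    targetValue: 100  # default: 100
    window: 1m        # default: "1m"; aggregation window for the rate calculation
    granularity: 1s   # default: "1s"; bucket size used when aggregating requests
```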

### `concurrency`

This section enables scaling based on the request concurrency.

> **NOTE**: This was the only scaling behaviour available before v0.8.0.

#### `targetValue`

>Default: 100

This is the target value for the scaling configuration.
21 changes: 14 additions & 7 deletions interceptor/main.go
@@ -17,6 +17,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/exp/maps"
"golang.org/x/sync/errgroup"
k8sinformers "k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
@@ -42,6 +43,7 @@ var (

// +kubebuilder:rbac:groups=http.keda.sh,resources=httpscaledobjects,verbs=get;list;watch
// +kubebuilder:rbac:groups="",resources=endpoints,verbs=get;list;watch
// +kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch

func main() {
timeoutCfg := config.MustParseTimeouts()
@@ -85,11 +87,10 @@ func main() {
setupLog.Error(err, "creating new Kubernetes ClientSet")
os.Exit(1)
}
endpointsCache := k8s.NewInformerBackedEndpointsCache(
ctrl.Log,
cl,
time.Millisecond*time.Duration(servingCfg.EndpointsCachePollIntervalMS),
)

k8sSharedInformerFactory := k8sinformers.NewSharedInformerFactory(cl, time.Millisecond*time.Duration(servingCfg.EndpointsCachePollIntervalMS))
svcCache := k8s.NewInformerBackedServiceCache(ctrl.Log, cl, k8sSharedInformerFactory)
endpointsCache := k8s.NewInformerBackedEndpointsCache(ctrl.Log, cl, time.Millisecond*time.Duration(servingCfg.EndpointsCachePollIntervalMS))
if err != nil {
setupLog.Error(err, "creating new endpoints cache")
os.Exit(1)
@@ -123,6 +124,7 @@ func main() {
setupLog.Info("starting the endpoints cache")

endpointsCache.Start(ctx)
k8sSharedInformerFactory.Start(ctx.Done())
return nil
})

@@ -173,10 +175,11 @@ func main() {
eg.Go(func() error {
proxyTLSConfig := map[string]string{"certificatePath": servingCfg.TLSCertPath, "keyPath": servingCfg.TLSKeyPath, "certstorePaths": servingCfg.TLSCertStorePaths}
proxyTLSPort := servingCfg.TLSPort
k8sSharedInformerFactory.WaitForCacheSync(ctx.Done())

setupLog.Info("starting the proxy server with TLS enabled", "port", proxyTLSPort)

if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, timeoutCfg, proxyTLSPort, proxyTLSEnabled, proxyTLSConfig); !util.IsIgnoredErr(err) {
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyTLSPort, proxyTLSEnabled, proxyTLSConfig); !util.IsIgnoredErr(err) {
setupLog.Error(err, "tls proxy server failed")
return err
}
@@ -186,9 +189,11 @@

// start a proxy server without TLS.
eg.Go(func() error {
k8sSharedInformerFactory.WaitForCacheSync(ctx.Done())
setupLog.Info("starting the proxy server with TLS disabled", "port", proxyPort)

if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, timeoutCfg, proxyPort, false, nil); !util.IsIgnoredErr(err) {
k8sSharedInformerFactory.WaitForCacheSync(ctx.Done())
if err := runProxyServer(ctx, ctrl.Log, queues, waitFunc, routingTable, svcCache, timeoutCfg, proxyPort, false, nil); !util.IsIgnoredErr(err) {
setupLog.Error(err, "proxy server failed")
return err
}
@@ -369,6 +374,7 @@ func runProxyServer(
q queue.Counter,
waitFunc forwardWaitFunc,
routingTable routing.Table,
svcCache k8s.ServiceCache,
timeouts *config.Timeouts,
port int,
tlsEnabled bool,
@@ -416,6 +422,7 @@
routingTable,
probeHandler,
upstreamHandler,
svcCache,
tlsEnabled,
)
rootHandler = middleware.NewLogging(
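
The interceptor now resolves `portName` against the `Service` itself (via an informer-backed service cache) rather than the endpoints cache. A minimal Go sketch of that lookup — the helper name and the fake `Service` are illustrative, not the add-on's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// portFromName returns the numeric port for a named Service port,
// mirroring the kind of lookup the interceptor performs against a
// Service fetched from its informer-backed cache.
func portFromName(svc *corev1.Service, portName string) (int32, error) {
	for _, p := range svc.Spec.Ports {
		if p.Name == portName {
			return p.Port, nil
		}
	}
	return 0, fmt.Errorf("portName %q not found in service %q", portName, svc.GetName())
}

func main() {
	svc := &corev1.Service{
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{Name: "http", Port: 8080}},
		},
	}
	port, err := portFromName(svc, "http")
	fmt.Println(port, err) // 8080 <nil>
}
```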
6 changes: 6 additions & 0 deletions interceptor/main_test.go
@@ -63,6 +63,7 @@ func TestRunProxyServerCountMiddleware(t *testing.T) {
// server
routingTable := routingtest.NewTable()
routingTable.Memory[host] = httpso
svcCache := k8s.NewFakeServiceCache()

timeouts := &config.Timeouts{}
waiterCh := make(chan struct{})
@@ -77,6 +78,7 @@
q,
waitFunc,
routingTable,
svcCache,
timeouts,
port,
false,
@@ -194,6 +196,7 @@ func TestRunProxyServerWithTLSCountMiddleware(t *testing.T) {
// server
routingTable := routingtest.NewTable()
routingTable.Memory[host] = httpso
svcCache := k8s.NewFakeServiceCache()

timeouts := &config.Timeouts{}
waiterCh := make(chan struct{})
@@ -209,6 +212,7 @@
q,
waitFunc,
routingTable,
svcCache,
timeouts,
port,
true,
@@ -339,6 +343,7 @@ func TestRunProxyServerWithMultipleCertsTLSCountMiddleware(t *testing.T) {
// server
routingTable := routingtest.NewTable()
routingTable.Memory[host] = httpso
svcCache := k8s.NewFakeServiceCache()

timeouts := &config.Timeouts{}
waiterCh := make(chan struct{})
@@ -354,6 +359,7 @@
q,
waitFunc,
routingTable,
svcCache,
timeouts,
port,
true,