- Overview
- Configuration
- Deployment
  - 3.1. Cluster Preparation
  - 3.2. Connect Subscriptions
  - 3.3. Core Installation
  - 3.4. External DB
  - 3.5. Tracing
- Usage
  - 4.1. Client SDK
  - 4.2. API
    - 4.2.1. Preparation
    - 4.2.2. Create a Subscription
    - 4.2.3. Read Events
    - 4.2.4. Write Events
- Design
- Contributing
  - 6.1. Versioning
  - 6.2. Issue Reporting
  - 6.3. Building
  - 6.4. Testing
    - 6.4.1. Functional
    - 6.4.2. Performance
  - 6.5. Releasing
This repo contains the Helm chart for the Awakari core system deployment. The core doesn't include the subscriptions storage. To run the core system on your own premises, request access to the cloud instance of the subscriptions storage.
For component-specific options, see the corresponding sub-chart configuration. The chart's own configuration options follow:
| Variable | Default | Description |
|---|---|---|
| mongodb.internal | true | Defines whether to deploy MongoDB internally or use an external one. |
| queue.backend.nats | true | Enables the NATS JetStream queue wrapper service. Exclusive: cannot be used together with other queue backends. |
| semaphore.backend.nats | true | Enables the NATS-based distributed semaphore service. Exclusive: cannot be used together with other semaphore backends. |
| tracing.enabled | false | Enables distributed tracing, including an internal Jaeger and Cassandra deployment to collect the spans. |
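These options can be overridden at install time with Helm `--set` flags. A minimal sketch (the release name and package file match the Core Installation section below; the values shown are examples only):

```bash
# Sketch: override chart options at install time.
# Disabling the internal MongoDB requires configuring an external one,
# see the External DB section.
helm install core core-0.0.0.tgz -n awakari \
  --set mongodb.internal=false \
  --set tracing.enabled=true
```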
The following resources are required:
- Own K8s cluster
- Cloud subscriptions service access
Create the target namespace:
kubectl create namespace awakari
Request a public GitHub registry access token and use it to pull the Awakari images:
docker login ghcr.io -u akurilov -p <ACCESS_TOKEN>
Create the image pull secret:
kubectl create secret generic github-registry \
-n awakari \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
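An optional sanity check to confirm the pull secret exists in the target namespace:

```bash
# Should list the github-registry secret of type kubernetes.io/dockerconfigjson
kubectl get secret github-registry -n awakari
```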
Using the cloud subscriptions requires mutual TLS authentication and encryption to secure the client subscriptions data. To access the cloud subscriptions, a client certificate is required.
Important
The cloud subscriptions service doesn't have any access to the event data being processed by the core system.
For demo purposes, the cloud instance demo.subscriptions.awakari.cloud is available.
Ready-to-use demo client certificates are in the certs/demo directory.
For production usage, prepare your own client certificate request:
openssl req -new -newkey rsa:4096 -nodes \
-keyout client.key \
-out client.csr \
-addext "subjectAltName=DNS:subscriptions.awakari.cloud" \
-subj '/CN=group0.company1.com'
Warning: never specify additional certificate attributes like "O", "OU", etc. The resulting DN should not contain commas.
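Before sending the request, the CSR can be inspected to confirm the subject (DN) and the SAN extension, for example:

```bash
# Print the CSR subject; it should contain only the CN, with no commas
openssl req -in client.csr -noout -subject
# Print the full CSR details, including the subjectAltName extension
openssl req -in client.csr -noout -text
```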
Then request the client certificate (currently by email). After the client certificate (client.crt) is received, create a pair of cluster secrets:
kubectl create secret generic -n awakari secret-subscriptions-tls-client-key --from-file=client.key
kubectl create secret generic -n awakari secret-subscriptions-tls-client-crt --from-file=client.crt
Install the package:
helm repo add awakari-core https://awakari.github.io/core
helm install core core-0.0.0.tgz -n awakari
Warning: do not change the "core" release name.
To connect the core system to the demo cloud subscriptions, override the address:
helm install core core-0.0.0.tgz -n awakari \
--set subscriptionsproxy.api.subscriptions.uri=demo.subscriptions.awakari.cloud:443
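After the installation, it's worth checking that all core pods become ready; the exact pod list depends on the enabled options:

```bash
# Watch the core components start in the target namespace
kubectl get pods -n awakari -w
```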
Note
This step is optional; by default, the core system comes with an internal sharded MongoDB cluster.
To use an external MongoDB, take the values-mongodb-ext.yaml values file as a reference and substitute your own values, as sketched below.
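Assuming the values file resides under helm/core like the tracing values file used below, the installation with an external MongoDB might look like this sketch:

```bash
# Sketch: install the core using an external MongoDB
# (edit helm/core/values-mongodb-ext.yaml with your own connection details first)
helm install core core-0.0.0.tgz -n awakari \
  --values helm/core/values-mongodb-ext.yaml
```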
Note
This step is optional and may be useful for performance testing purposes.
It's possible to enable OpenTelemetry tracing collection. The core system uses the Jaeger collector API for this and can either deploy its own Jaeger instance or use an external one.
To enable tracing during the deployment, the values-tracing.yaml file may be used:
helm install core core-0.0.0.tgz -n awakari \
--set subscriptionsproxy.api.subscriptions.uri=demo.subscriptions.awakari.cloud:443 \
--values helm/core/values-tracing.yaml
To use an external Jaeger, set the root value tracing.enabled to false, leave tracing enabled for the individual components, and set the custom Jaeger collector URI:
helm install core core-0.0.0.tgz -n awakari \
--set subscriptionsproxy.api.subscriptions.uri=demo.subscriptions.awakari.cloud:443 \
--values helm/core/values-tracing.yaml \
--set tracing.enabled=false \
--set evaluator.tracing.collector.uri=http://external-jaeger-collector:14268/api/traces \
--set matches.tracing.collector.uri=http://external-jaeger-collector:14268/api/traces \
--set reader.tracing.collector.uri=http://external-jaeger-collector:14268/api/traces \
--set resolver.tracing.collector.uri=http://external-jaeger-collector:14268/api/traces \
--set writer.tracing.collector.uri=http://external-jaeger-collector:14268/api/traces
To observe the traces:
- run some workload, e.g. e2e-test or perf-e2e-test
- port-forward the jaeger-query port 16686 to local (see the sketch below) and open it in the browser
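A possible port-forward command, assuming the Jaeger query service is named core-jaeger-query (check the actual name with `kubectl get svc -n awakari`):

```bash
# Forward the Jaeger UI to http://localhost:16686 (service name is an assumption)
kubectl port-forward -n awakari svc/core-jaeger-query 16686:16686
```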
Refer to Client SDK Usage.
Note
Usage Limits and Permits APIs are not available in the Core.
- Install grpcurl
- Download the necessary proto files and save them to the current directory
- Port-forward services to local (as sketched below):
  - core-reader -> 50051
  - core-resolver -> 50052
  - core-subscriptionsproxy -> 50053
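A possible way to do the port-forwarding, assuming each service exposes its gRPC API on the listed port (adjust the service names and ports to the actual ones in the cluster):

```bash
# Run each command in a separate shell, or append & to background them
kubectl port-forward -n awakari svc/core-reader 50051:50051
kubectl port-forward -n awakari svc/core-resolver 50052:50052
kubectl port-forward -n awakari svc/core-subscriptionsproxy 50053:50053
```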
Create a subscription:
grpcurl \
-plaintext \
-H 'X-Awakari-User-Id: [email protected]' \
-d @ \
localhost:50053 \
awakari.subscriptions.proxy.Service/Create
Example payload:
{
  "description": "Tesla model S updates",
  "enabled": true,
  "cond": {
    "not": false,
    "tc": {
      "key": "",
      "term": "Tesla Model S",
      "exact": false
    }
  }
}
A successful response contains the created subscription id:
{
"id": "547857e3-adfc-48a5-a49e-110cfdedbaab"
}
Note the created subscription id and use it later to read the messages. Learn more about the Subscriptions API.
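Since `-d @` makes grpcurl read the request body from stdin, the create call can also be run non-interactively, e.g. with a here-document; a sketch reusing the payload above:

```bash
# Sketch: create a subscription by piping the payload via a here-document
grpcurl \
  -plaintext \
  -H 'X-Awakari-User-Id: [email protected]' \
  -d @ \
  localhost:50053 \
  awakari.subscriptions.proxy.Service/Create <<EOF
{
  "description": "Tesla model S updates",
  "enabled": true,
  "cond": {
    "not": false,
    "tc": {
      "key": "",
      "term": "Tesla Model S",
      "exact": false
    }
  }
}
EOF
```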
grpcurl \
-plaintext \
-proto reader.proto \
-max-time 86400 \
-H 'X-Awakari-User-Id: [email protected]' \
-d @ \
localhost:50051 \
awakari.reader.Service/Read
Specify the subscription id in the payload:
{"start": {"batchSize": 1, "subId": "547857e3-adfc-48a5-a49e-110cfdedbaab"}}
This starts a reader stream. A new event appears in the response once the system receives anything matching the subscription. Leave this shell/window open and switch to another. Later, switch back and check for new events.
It's necessary to acknowledge every received message:
{"ack": { "count": 1}}
grpcurl \
-plaintext \
-proto writer.proto \
-H 'X-Awakari-User-Id: [email protected]' \
-d @ \
localhost:50052 \
awakari.resolver.Service/SubmitMessages
Specify the events to write in the payload:
{
  "msgs": [
    {
      "id": "3426d090-1b8a-4a09-ac9c-41f2de24d5ac",
      "type": "example.type",
      "source": "example/uri",
      "spec_version": "1.0",
      "attributes": {
        "subject": {
          "ce_string": "Tesla price updates"
        },
        "time": {
          "ce_timestamp": "2023-07-03T23:20:50.52Z"
        }
      },
      "text_data": "Tesla model S is now available at lower price"
    }
  ]
}
A successful response looks like:
{
"ackCount": 1
}
After this, it's possible to submit more messages. When finished, close the writer stream by pressing ^C, or leave it open to publish more messages later.
The core of Awakari consists of:
- Stateful components
- Stateless components
- 3rd-party components:
  - MongoDB (sharded)
  - Redis in-memory cache
  - NATS message bus
The service uses semantic versioning. The single source of the version info is the git tag:
git describe --tags --abbrev=0
TODO
Build a helm package:
for i in core conditions-number conditions-text evaluator matches messages queue-nats reader resolver subscriptions-proxy semaphore-nats writer; do git clone git@github.com:awakari/$i.git; done
cd core
helm dependency update helm/core
helm package helm/core
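The helm package command writes the chart archive into the current directory; the resulting file can then be installed as shown in the Core Installation section (the file name depends on the chart version):

```bash
# The packaged chart appears in the current directory, e.g. core-<version>.tgz
ls core-*.tgz
```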
The repo contains the core functional end-to-end tests.
To run the tests locally (see also the combined sketch below):
- Port-forward the reader API to local port 50051
- Port-forward the resolver API to local port 50052
- Port-forward the subscriptions API to local port 50053
make test
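Putting the steps together, a local run might look like this sketch (the service names are assumptions matching the Preparation section, and the port-forwards are backgrounded for brevity):

```bash
# Sketch: forward the required APIs in the background, then run the functional tests
kubectl port-forward -n awakari svc/core-reader 50051:50051 &
kubectl port-forward -n awakari svc/core-resolver 50052:50052 &
kubectl port-forward -n awakari svc/core-subscriptionsproxy 50053:50053 &
make test
```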
To run the tests in the K8s cluster:
helm test core -n awakari --filter name=core-test
To release a new version (e.g. 1.2.3), it's enough to create and push a git tag:
git tag v1.2.3
git push --tags
The corresponding CI job starts to build the Helm chart and publish it with the specified tag (and latest).