Datalens with Zitadel performance issue #198

@nasonovsm

Description

Expected Behavior

Without the Zitadel service, dashboards load within a few seconds (1-5 seconds per chart) on the same VM.

Current Behavior

After a test deployment of Datalens with Zitadel, I observe high CPU consumption by the zitadel and zitadel-db services and long load times for each chart.
At datalens.yandex.cloud:
[screenshot]

The same chart on the same dashboard in the open-source Datalens:
[screenshot]

The same dashboard without the Zitadel service:
[screenshot]

Docker stats command output:
[screenshot]

Grafana CPU chart:
[screenshot]

With debug mode enabled, the zitadel-db container logs show thousands of SELECT and INSERT queries when the dashboard is accessed. For example:

LOG:  execute stmtcache_675670f51ec7e588ce13abe2034bee6e90eafcd0d9907150: SELECT created_at, event_type, "sequence", "position", payload, creator, "owner", instance_id, aggregate_type, aggregate_id, revision FROM eventstore.events2 WHERE instance_id = $1 AND ((aggregate_type = $2 AND event_type = $3) OR (aggregate_type = $4 AND event_type = ANY($5)) OR (aggregate_type = $6 AND event_type = ANY($7)) OR (aggregate_type = $8 AND event_type = $9)) AND "position" > $10 AND "position" < (SELECT COALESCE(EXTRACT(EPOCH FROM min(xact_start)), EXTRACT(EPOCH FROM now())) FROM pg_stat_activity WHERE datname = current_database() AND application_name = 'zitadel_es_pusher' AND state <> 'idle') ORDER BY "position", in_tx_order LIMIT $11 OFFSET $12

LOG:  execute stmtcache_a614765bc81cd263e6301614a7396ccd287ac629e218f5ed: INSERT INTO projections.current_states (
	    projection_name
	    , instance_id
	    , aggregate_id
	    , aggregate_type
	    , "sequence"
	    , event_date
	    , "position"
	    , last_updated
	    , filter_offset
	) VALUES (
	    $1
	    , $2
	    , $3
	    , $4
	    , $5
	    , $6
	    , $7
	    , now()
	    , $8
	) ON CONFLICT (
	    projection_name
	    , instance_id
	) DO UPDATE SET
	    aggregate_id = $3
	    , aggregate_type = $4
	    , "sequence" = $5
	    , event_date = $6
	    , "position" = $7
	    , last_updated = statement_timestamp()
	    , filter_offset = $8
	;
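To put a number on "thousands of queries", the captured log can be grouped by prepared-statement id with standard tools. This is a rough sketch: the log file name, container name, and the sample lines it creates are illustrative assumptions, and it assumes statement logging is enabled in Postgres (e.g. log_statement = 'all').

```shell
# On the real host, capture the log first, e.g.:
#   docker logs zitadel-db > zitadel-db.log 2>&1
# (container and file names are placeholders for this sketch)

# For illustration only, create a tiny sample log instead of a real capture:
cat > zitadel-db.log <<'EOF'
LOG:  execute stmtcache_675670f51ec7e588: SELECT ... FROM eventstore.events2 ...
LOG:  execute stmtcache_675670f51ec7e588: SELECT ... FROM eventstore.events2 ...
LOG:  execute stmtcache_a614765bc81cd263: INSERT INTO projections.current_states ...
EOF

# Group executions by statement-cache id, most frequent first:
# prints each statement id with its execution count, highest count on top.
grep -o 'stmtcache_[0-9a-f]\+' zitadel-db.log | sort | uniq -c | sort -rn
```

On a live instance, the pg_stat_statements extension (if it is enabled in the zitadel-db Postgres) gives a similar per-statement breakdown with timing data, which would make it easier to attribute the CPU load shown above.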

Environment

All Docker containers run on a Yandex Compute Cloud VM (2 vCPU, 4 GB RAM, SSD), and both Datalens installations (Cloud and open-source) connect to a Managed Service for ClickHouse database.

Docker image versions:

DL_CONTROL_API_VERSION=0.2139.0
DL_DATA_API_VERSION=0.2139.0
DL_US_VERSION=0.239.0
DL_UI_VERSION=0.2000.0
DL_ZITADEL_VERSION=2.61.0

OS version:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"

Docker version:

Server: Docker Engine - Community
 Engine:
  Version:          27.1.0
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.12
  Git commit:       a21b1a2
  Built:            Fri Jul 19 17:42:53 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.19
  GitCommit:        2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Is this the expected behavior of the zitadel and zitadel-db services?

Labels: bug (Something isn't working)