Python tooling for evaluating dashboard migrations between observability platforms, with a current focus on Dynatrace-to-Datadog dashboard analysis and Terraform planning.
The repository is designed to answer four practical questions:
- What dashboards exist in the source environment?
- What parity already exists in the target environment?
- Which dashboards are worth keeping, improving, rebuilding, or dropping?
- Which approved dashboards can move into Terraform planning now, and what still needs human input?
To support those questions, the tooling can:
- Normalize raw dashboard exports into a common inventory model.
- Extract per-widget query text and classify likely query families.
- Infer dashboard complexity and review blockers from source content.
- Compare source and target dashboard inventories for heuristic parity candidates.
- Overlay optional annotations to capture known blockers, caveats, or migration notes.
- Generate review packets and draft Datadog scaffold JSON for manual review.
- Recommend which dashboards should be rebuilt, improved, validated, deferred, or dropped.
- Produce a migration menu that combines inventory, parity, and recommendation signals.
- Generate Terraform-oriented dashboard plans and draft `datadog_dashboard_json` resources.
- Distinguish between creating new dashboards and importing existing Datadog dashboards into Terraform management.
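Complexity inference is heuristic. As a minimal sketch, it could be a simple score over widget and variable counts; the thresholds and signals below are assumptions for illustration, not the repo's actual rules:

```python
def infer_complexity(widget_count, variable_count=0):
    """Bucket a dashboard into a coarse complexity tier (illustrative thresholds)."""
    score = widget_count + 2 * variable_count  # variables weigh more than widgets
    if score <= 5:
        return "low"
    if score <= 15:
        return "medium"
    return "high"
```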
Out of scope:
- It does not prove semantic parity between dashboards.
- It does not fully translate Dynatrace queries into Datadog queries.
- It does not apply Terraform or create Datadog dashboards automatically.
- It does not replace stakeholder review for dashboard value and usability.
Repository layout:
- `dashboard_tooling/`: core models, normalization, comparison, recommendation, assessment, and Terraform planning logic
- `scripts/`: CLI entry points for each workflow stage
- `tests/`: unit, integration, and end-to-end coverage for the pipeline
- `.github/workflows/`: CI checks for compile, tests, secret scan, and static security analysis
- Python 3.14 or newer
- No third-party runtime dependencies are required for the current codebase
The repo still supports purely local JSON workflows, but it can now also fetch dashboards directly from Dynatrace and Datadog APIs.
Supported auth environment variables:
- Dynatrace:
  - `DYNATRACE_BASE_URL`
  - `DYNATRACE_API_TOKEN`
- Datadog:
  - `DATADOG_API_KEY`
  - `DATADOG_APP_KEY`
  - `DATADOG_SITE` or `DD_SITE`
  - `DATADOG_API_URL` (optional)
Supported config loading paths:
- direct environment variables
- an optional `.env` file passed with `--env-file`
- a default local `.env` file when present
Environment variables take precedence over values loaded from `.env`.
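The precedence rule can be illustrated with a small sketch; `load_env_file` and `resolve_config` are hypothetical helpers, not the repo's actual loader:

```python
import os

def load_env_file(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file; comments and blanks are skipped."""
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass
    return values

def resolve_config(env_file_values, keys):
    """Real environment variables win over values loaded from the .env file."""
    return {k: os.environ.get(k, env_file_values.get(k)) for k in keys}
```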
Example `.env`:

```
DYNATRACE_BASE_URL=https://example.live.dynatrace.com
DYNATRACE_API_TOKEN=dt0c01.example
DATADOG_API_KEY=example_api_key
DATADOG_APP_KEY=example_app_key
DATADOG_SITE=datadoghq.com
```

Write-back and apply paths:
- direct Datadog API create/update from generated dashboard plans
- Terraform workspace generation plus `terraform init`, `terraform plan`, or `terraform apply`
Run the full test suite:

```sh
python3 -m unittest discover -s tests -v
```

Inspect the available CLIs:

```sh
for f in scripts/*.py; do
  echo "=== $f ==="
  python3 "$f" --help
done
```

Fetch dashboards directly from the APIs:
```sh
python3 scripts/fetch_dashboards.py --source dynatrace --out out/raw/dynatrace.json
python3 scripts/fetch_dashboards.py --source datadog --out out/raw/datadog.json
```

Run the full assessment directly from live APIs:

```sh
python3 scripts/run_dashboard_assessment.py \
  --fetch-dynatrace \
  --fetch-datadog \
  --out-dir out/live-assessment
```

Publish generated dashboard plans back to Datadog through the API:

```sh
python3 scripts/publish_datadog_dashboards.py \
  --plans-json out/live-assessment/terraform/terraform_plans.json \
  --out out/live-assessment/publish-results.json \
  --only-ready
```

Generate a Terraform workspace and run `terraform plan`:

```sh
python3 scripts/apply_terraform_dashboards.py \
  --plans-json out/live-assessment/terraform/terraform_plans.json \
  --work-dir out/live-assessment/terraform/workspace \
  --command plan
```

The normalizer accepts JSON exports for:
- `dynatrace`
- `datadog`

The code currently expects dashboard-like payloads with top-level lists such as:
- `dashboards`
- `items`
- `dashboardMetadata`
Dynatrace widgets are read from `tiles` or `widgets`. Datadog widgets are read from `widgets`.
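That widget lookup could be expressed as follows; `extract_widgets` is an illustrative helper, not necessarily the repo's own function:

```python
def extract_widgets(dashboard, source):
    """Return the widget list for a raw dashboard payload.

    Dynatrace exports may carry "tiles" or "widgets";
    Datadog exports carry "widgets". Missing keys yield [].
    """
    if source == "dynatrace":
        return dashboard.get("tiles") or dashboard.get("widgets") or []
    if source == "datadog":
        return dashboard.get("widgets") or []
    raise ValueError(f"unknown source: {source}")
```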
Annotations are an optional JSON overlay with this general structure:
```json
{
  "dashboards": [
    {
      "dashboard_id": "example-id",
      "title": "Example Dashboard",
      "blockers": ["known_gap"],
      "notes": ["Some widgets lose filter pushdown in DDSQL."]
    }
  ]
}
```

`dashboard_id` is preferred; `title` can be used as a fallback matcher.
```sh
python3 scripts/normalize_dashboards.py \
  --source dynatrace \
  --input dynatrace.json \
  --out-dir out/source

python3 scripts/normalize_dashboards.py \
  --source datadog \
  --input datadog.json \
  --out-dir out/target
```

Optional annotations can be applied during normalization:

```sh
python3 scripts/normalize_dashboards.py \
  --source dynatrace \
  --input dynatrace.json \
  --annotations annotations.json \
  --out-dir out/source
```

If you want the tool to fetch from APIs first and then run the full workflow, use:

```sh
python3 scripts/run_dashboard_assessment.py \
  --fetch-dynatrace \
  --fetch-datadog \
  --annotations annotations.json \
  --out-dir out/live-assessment
```

You can also mix API and file inputs:
```sh
python3 scripts/run_dashboard_assessment.py \
  --fetch-dynatrace \
  --datadog-input datadog.json \
  --out-dir out/mixed-assessment
```

Compare the normalized source and target inventories:

```sh
python3 scripts/compare_dashboards.py \
  --source-inventory out/source/inventory.json \
  --target-inventory out/target/inventory.json \
  --out-dir out/parity
```

Apply annotation overlays to the inventory and parity results:

```sh
python3 scripts/annotate_review_queue.py \
  --source-inventory out/source/inventory.json \
  --annotations annotations.json \
  --parity-json out/parity/parity.json \
  --out-dir out/annotated
```

Use `out/annotated/inventory.json` and `out/annotated/parity.json` in later stages when annotations are part of the workflow.
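Parity is a hypothesis, not proof. A minimal sketch of title-based candidate classification, assuming a fuzzy-similarity threshold (the real comparison may weigh other signals); the status names mirror the `--status-filter` values used later:

```python
import difflib

def parity_status(source_title, target_titles):
    """Classify a source dashboard against a set of target dashboard titles."""
    src = source_title.strip().lower()
    targets = {t.strip().lower() for t in target_titles}
    if src in targets:
        return "exact_title_match"
    best = max(
        (difflib.SequenceMatcher(None, src, t).ratio() for t in targets),
        default=0.0,
    )
    if best >= 0.8:  # assumed threshold
        return "high_confidence_candidate"
    if best >= 0.6:  # assumed threshold
        return "possible_candidate"
    return "missing_in_target"
```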
Generate review packets and draft scaffolds:

```sh
python3 scripts/generate_review_scaffolds.py \
  --source-inventory out/annotated/inventory.json \
  --parity-json out/annotated/parity.json \
  --out-dir out/review \
  --status-filter exact_title_match,missing_in_target,possible_candidate,high_confidence_candidate
```

Analyze dashboard candidates and produce recommendations:

```sh
python3 scripts/analyze_dashboard_candidates.py \
  --source-inventory out/annotated/inventory.json \
  --out-dir out/analysis
```

Build the migration decision menu:

```sh
python3 scripts/build_dashboard_menu.py \
  --source-inventory out/annotated/inventory.json \
  --parity-json out/annotated/parity.json \
  --out-dir out/menu
```

Plan Terraform-ready dashboards from approved menu items:

```sh
python3 scripts/plan_terraform_dashboards.py \
  --source-inventory out/annotated/inventory.json \
  --menu-json out/menu/menu.json \
  --out-dir out/terraform
```

The Terraform planner defaults to these menu actions:
- `create_or_rebuild_with_terraform`
- `validate_and_improve_existing`
- `validate_existing_parity`
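Filtering menu items by action could look like this sketch (the `action` field name is an assumption about the menu records):

```python
DEFAULT_ACTIONS = {
    "create_or_rebuild_with_terraform",
    "validate_and_improve_existing",
    "validate_existing_parity",
}

def select_menu_items(menu_items, include_actions=None):
    """Keep menu entries whose recommended action is in the allowed set."""
    allowed = set(include_actions) if include_actions else DEFAULT_ACTIONS
    return [item for item in menu_items if item.get("action") in allowed]
```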
You can override the default action filter:
```sh
python3 scripts/plan_terraform_dashboards.py \
  --source-inventory out/annotated/inventory.json \
  --menu-json out/menu/menu.json \
  --out-dir out/terraform \
  --include-actions create_or_rebuild_with_terraform
```

`scripts/normalize_dashboards.py`

Purpose: Normalize raw dashboard exports into canonical inventory and query extracts.
Inputs:
- `--source {dynatrace,datadog}`
- `--input`
- `--out-dir`
- `--annotations` (optional)

Outputs:
- `inventory.json`
- `inventory.csv`
- `queries.csv`
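The query-family classification behind `queries.csv` could be as simple as keyword matching; the family names and rules below are assumptions for illustration, not the repo's actual classifier:

```python
def classify_query_family(query_text):
    """Guess a coarse query family from raw query text (illustrative rules)."""
    q = (query_text or "").strip().lower()
    if q.startswith("fetch"):  # Dynatrace DQL statements start with fetch
        return "dql"
    if any(q.startswith(p) for p in ("avg:", "sum:", "max:", "min:")):
        return "datadog_metric"
    if "timeseries" in q:
        return "metrics_expression"
    return "unknown"
```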
`scripts/compare_dashboards.py`

Purpose: Compare normalized source and target inventories and produce a parity queue.

Inputs:
- `--source-inventory`
- `--target-inventory`
- `--out-dir`

Outputs:
- `parity.json`
- `parity.csv`
- `summary.md`
`scripts/annotate_review_queue.py`

Purpose: Apply annotation overlays to normalized inventory and optional parity results.

Inputs:
- `--source-inventory`
- `--annotations`
- `--out-dir`
- `--parity-json` (optional)

Outputs:
- `inventory.json`
- `parity.json` (optional)
`scripts/generate_review_scaffolds.py`

Purpose: Create review packets and draft Datadog scaffold JSON for human review.

Inputs:
- `--source-inventory`
- `--out-dir`
- `--parity-json` (optional)
- `--status-filter` (optional)

Outputs:
- `review_packets/*.md`
- `datadog_scaffolds/*.json`
`scripts/analyze_dashboard_candidates.py`

Purpose: Recommend which dashboards should be created, improved, deferred, or dropped.

Inputs:
- `--source-inventory`
- `--out-dir`

Outputs:
- `recommendations.json`
- `recommendations.csv`
- `recommendations.md`
`scripts/build_dashboard_menu.py`

Purpose: Combine inventory, parity, and recommendation signals into a customer/delivery decision menu.

Inputs:
- `--source-inventory`
- `--out-dir`
- `--parity-json` (optional)

Outputs:
- `menu.json`
- `menu.csv`
- `menu.md`
`scripts/plan_terraform_dashboards.py`

Purpose: Turn approved menu items into Terraform-ready planning artifacts and draft `datadog_dashboard_json` resources.

Inputs:
- `--source-inventory`
- `--out-dir`
- `--menu-json` (optional)
- `--include-actions` (optional)

Outputs:
- `terraform_plans.json`
- `plans/*.json`
- `tf_json/*.tf.json`
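A draft `tf_json/*.tf.json` document wraps the dashboard payload as a JSON string inside a `datadog_dashboard_json` resource. A minimal sketch of how such a document could be assembled (illustrative, not the planner's actual code):

```python
import json

def build_tf_json_resource(resource_name, dashboard_payload):
    """Build a Terraform JSON document holding one datadog_dashboard_json resource.

    The provider's `dashboard` argument expects the dashboard definition
    serialized as a JSON string, hence the inner json.dumps.
    """
    return {
        "resource": {
            "datadog_dashboard_json": {
                resource_name: {
                    "dashboard": json.dumps(dashboard_payload),
                }
            }
        }
    }
```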
`scripts/fetch_dashboards.py`

Purpose: Fetch raw dashboards directly from the Dynatrace or Datadog APIs.

Inputs:
- `--source {dynatrace,datadog}`
- `--out`
- `--env-file` (optional)

Outputs:
- raw JSON export compatible with `normalize_dashboards.py`
`scripts/run_dashboard_assessment.py`

Purpose: Run the dashboard assessment workflow from local JSON and/or live APIs.

Inputs:
- `--out-dir`
- `--dynatrace-input` (optional)
- `--datadog-input` (optional)
- `--fetch-dynatrace` (optional)
- `--fetch-datadog` (optional)
- `--annotations` (optional)
- `--env-file` (optional)
- `--terraform-actions` (optional)

Outputs:
- `raw/*.json`
- `source/inventory.json`
- `target/inventory.json` (optional)
- `parity/parity.json` (optional)
- `analysis/recommendations.json`
- `menu/menu.json`
- `review/review_packets/*.md`
- `review/datadog_scaffolds/*.json`
- `terraform/terraform_plans.json`
- `terraform/tf_json/*.tf.json`
`scripts/publish_datadog_dashboards.py`

Purpose: Create or update Datadog dashboards directly from generated Terraform dashboard plans.

Inputs:
- `--plans-json`
- `--out`
- `--env-file` (optional)
- `--only-ready` (optional)

Outputs:
- publish results JSON including created or updated dashboard IDs
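Create-versus-update maps naturally onto the public Datadog v1 dashboard endpoints (`POST /api/v1/dashboard` to create, `PUT /api/v1/dashboard/{id}` to update). A standard-library sketch of building such a request; `build_publish_request` is illustrative, not the script's actual code:

```python
import json
import urllib.request

def build_publish_request(payload, api_key, app_key, dashboard_id=None,
                          site="datadoghq.com"):
    """Build a create (POST) or update (PUT) request for the dashboards API."""
    base = f"https://api.{site}/api/v1/dashboard"
    url = f"{base}/{dashboard_id}" if dashboard_id else base
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="PUT" if dashboard_id else "POST",
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
        },
    )

# Sending is then: urllib.request.urlopen(build_publish_request(...))
```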
`scripts/apply_terraform_dashboards.py`

Purpose: Write a Terraform workspace from generated plans and execute `terraform init`, `terraform plan`, or `terraform apply`.

Inputs:
- `--plans-json`
- `--work-dir`
- `--command {init,plan,apply}` (optional)
- `--auto-approve` (optional)

Outputs:
- generated Terraform workspace files
- `terraform-<command>-result.json` with captured stdout and stderr
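The wrapper's behavior can be sketched with `subprocess`; the `binary` parameter is a test hook (assumption), and the result filename follows the `terraform-<command>-result.json` convention above:

```python
import json
import subprocess

def run_terraform(work_dir, command, auto_approve=False, binary="terraform"):
    """Run a terraform subcommand in work_dir and record its output as JSON."""
    args = [binary, command]
    if command == "apply" and auto_approve:
        args.append("-auto-approve")
    proc = subprocess.run(args, cwd=work_dir, capture_output=True, text=True)
    result = {
        "command": command,
        "returncode": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    }
    with open(f"{work_dir}/terraform-{command}-result.json", "w") as fh:
        json.dump(result, fh, indent=2)
    return result
```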
- `inventory.json` / `inventory.csv`: canonical dashboard inventory, variables, widget counts, and blockers.
- `queries.csv`: extracted query text and query-family classification by widget.
- `parity.json` / `parity.csv`: heuristic parity records between source and target dashboards.
- `recommendations.json` / `recommendations.csv` / `recommendations.md`: dashboard creation recommendations and required inputs.
- `menu.json` / `menu.csv` / `menu.md`: combined migration decision menu suitable for customer and delivery review.
- `terraform_plans.json`: summary of Terraform planning state for all selected dashboards.
- `plans/*.json`: detailed per-dashboard Terraform planning records.
- `tf_json/*.tf.json`: draft Terraform resources using `datadog_dashboard_json`.
Recommended workflow:
- Normalize and inspect the inventory.
- Review parity as a hypothesis, not as proof.
- Use recommendations and the menu to decide which dashboards are worth carrying forward.
- Approve only the dashboards that add clear operational value.
- Use Terraform plans to structure implementation and identify missing query mappings.
- Translate source queries into real Datadog telemetry before applying Terraform.
This is a private repository. Access is restricted to authorized team members. Do not share, redistribute, or republish any part of this codebase outside of authorized channels.
All rights reserved.