
Conversation

Contributor

@briangregoryholmes briangregoryholmes commented Dec 11, 2025

This PR adds a guard around the min/max precision allowed for time series data. It also limits the options provided in the time series aggregation selector on Explore to only those we deem "valid".

The intention of this PR is to adhere to the following heuristic:

  1. Determine the allowed grains for a given interval based on the duration of the interval and the smallest_time_grain property of the metrics view

  2. Allowed grains should result in no more than 731 buckets (see below) and no fewer than 1 bucket, though this bucket may be partial (as in the case of `24h as of latest/h` rendered by day). However, if all grains exceed the cap, allow year granularity. If all grains produce fewer than 1 complete bucket, allow the smallest_time_grain.

  3. If the user has explicitly asked for a certain granularity (i.e., there is a grain parameter in the URL) and it is within the allowed grains for the range, render it. If the requested grain is not among the allowed grains, fall back to the range grain (if allowed) or the smallest of the allowed grains.

  4. If there is no explicit granularity requested, we should follow the grain derived from the time string (e.g., day for `P7D` and hour for `7d as of latest/h+1h`) if allowed. If not, fall back to the smallest of the allowed grains.
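For illustration, steps 1–2 can be sketched roughly as follows. This is a simplified sketch, not the actual implementation: the `Grain` type, the fixed grain durations, and the `allowedGrains` function are all hypothetical stand-ins, and real month/year lengths vary.

```typescript
// Hypothetical sketch of the allowed-grain heuristic described above.
// Grain names, durations, and MAX_BUCKETS are illustrative only.
type Grain = "minute" | "hour" | "day" | "week" | "month" | "year";

const GRAIN_MS: Record<Grain, number> = {
  minute: 60_000,
  hour: 3_600_000,
  day: 86_400_000,
  week: 604_800_000,
  month: 2_592_000_000, // ~30 days
  year: 31_536_000_000, // ~365 days
};

// Ordered smallest to largest.
const GRAINS: Grain[] = ["minute", "hour", "day", "week", "month", "year"];
const MAX_BUCKETS = 731;

function allowedGrains(intervalMs: number, smallestTimeGrain: Grain): Grain[] {
  // Step 1: only consider grains at or above the metrics view's smallest_time_grain.
  const candidates = GRAINS.slice(GRAINS.indexOf(smallestTimeGrain)).filter(
    (g) => {
      const buckets = intervalMs / GRAIN_MS[g];
      // Step 2: between 1 and MAX_BUCKETS buckets.
      return buckets <= MAX_BUCKETS && buckets >= 1;
    },
  );
  // If every grain would exceed the cap, allow year granularity.
  if (candidates.length === 0 && intervalMs / GRAIN_MS.year > MAX_BUCKETS) {
    return ["year"];
  }
  // If no grain yields a complete bucket, allow the smallest_time_grain.
  if (candidates.length === 0) return [smallestTimeGrain];
  return candidates;
}
```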


Examples

Time Range: 12M as of latest/d+1d
URL Grain: day
Smallest Time Grain: day

Rendered in days.

Time Range: 36M as of latest/d+1d
URL Grain: day
Smallest Time Grain: day

Rendered in weeks.

Time Range: 12h as of latest/h+1h
URL Grain: hour
Smallest Time Grain: day

In this case, while hour is the range granularity and day is normally not allowed, the time series will be aggregated by day despite displaying only a single partial bucket. Because of limitations on Explore, the URL grain will be updated to day.
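The fallback order in steps 3–4 can be sketched as a small function over the allowed grains. This is a hypothetical illustration (names like `resolveGrain` are not from the codebase); each of the three examples above maps onto one of its branches.

```typescript
// Hypothetical sketch of the fallback order described above:
// explicit URL grain, then the grain derived from the time string,
// then the smallest of the allowed grains.
type Grain = "minute" | "hour" | "day" | "week" | "month" | "year";

function resolveGrain(
  allowed: Grain[], // allowed grains, ordered smallest to largest
  urlGrain: Grain | undefined, // explicit grain parameter in the URL, if any
  rangeGrain: Grain, // grain derived from the time string, e.g. day for P7D
): Grain {
  // Step 3: honor an explicitly requested grain when it is allowed.
  if (urlGrain && allowed.includes(urlGrain)) return urlGrain;
  // Step 4: otherwise follow the grain implied by the time range itself.
  if (allowed.includes(rangeGrain)) return rangeGrain;
  // Last resort: the smallest of the allowed grains.
  return allowed[0];
}
```

Under this sketch, the 36M example resolves to week (the requested day grain exceeds the bucket cap), and the 12h example resolves to day (the only allowed grain, given the day-level smallest_time_grain).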


This logic can be verified on Canvas via the derived `grainStore` in `time-state.ts`.

On Explore, a number of changes were necessary.

  • Remove `selectedTimeRange` from the `ShallowMergeOneLevelDeepKeys` so that the URL grain is reflected directly and not merged from other fallback state
  • Update `handleExploreInit` and `handleURLChange` to derive this grain state once the time range has been resolved
  • Update the `onSelectRange` function in `Filters.svelte` to also properly derive this before triggering a state change

However, much of this could be simplified if the /time-ranges API returned these allowed grains directly.


Open questions:

Is total number of buckets the correct barometer? Both 30 days in hours and 12 hours in minutes would render 720 buckets. Are these equivalently "taxing" on the database?

What is the appropriate number of buckets to cap at? The API limit is quite high, and the practical rendering limit is around 1000 (though it can be increased with a rework of the time series component). 731 was somewhat arbitrary, but it was chosen to allow viewing two years' worth of days.
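For reference, the arithmetic behind the numbers quoted in these two questions (a quick illustration, not project code):

```typescript
// 731 covers two calendar years of daily buckets in the worst case,
// i.e. one common year plus one leap year.
const twoYearsOfDays = 365 + 366; // 731

// The two "720 bucket" scenarios from the question above.
const thirtyDaysInHours = 30 * 24; // 720
const twelveHoursInMinutes = 12 * 60; // 720
```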

Checklist:

  • Covered by tests
  • Ran it and it works as intended
  • Reviewed the diff before requesting a review
  • Checked for unhandled edge cases
  • Linked the issues it closes
  • Checked if the docs need to be updated. If so, create a separate Linear DOCS issue
  • Intend to cherry-pick into the release branch
  • I'm proud of this work!

@briangregoryholmes briangregoryholmes self-assigned this Dec 11, 2025
@briangregoryholmes briangregoryholmes changed the title feat: guard time series queries with maximum interval precision [APP-450] feat: guard time series queries with maximum interval precision Dec 11, 2025
@briangregoryholmes briangregoryholmes marked this pull request as draft December 11, 2025 23:32
@briangregoryholmes briangregoryholmes changed the title [APP-450] feat: guard time series queries with maximum interval precision [APP-450] feat: guard time series queries with minimum and maximum interval precision Dec 12, 2025
@briangregoryholmes briangregoryholmes changed the title [APP-450] feat: guard time series queries with minimum and maximum interval precision [APP-450] feat: guard time series queries with minimum and maximum granularity Dec 12, 2025
@briangregoryholmes briangregoryholmes removed the request for review from AdityaHegde December 12, 2025 01:56
@begelundmuller
Contributor

Is total number of buckets the correct barometer? Both 30 days in hours and 12 hours in minutes would render 720 buckets. Are these equivalently "taxing" on the database?

30 days would usually be more taxing as it has to process a lot more raw data. The number of output buckets is less taxing, but still something worth paying attention to.

What is the appropriate number of buckets to cap at? The API limit is quite high and the practical rendering limit is around 1000 (though can be increased with a rework of the time series component). 731 was somewhat arbitrary, but this was chosen to allow looking at two years worth of days.

I don't have a hard cap in mind; somewhere between 500 and 1000 seems good to me. But if there's a use case for it, we can probably go quite a bit higher.

My main concern actually isn't to enforce a hard cap, it's more to avoid accidental/unintended situations where it gets into a state that queries for a large number of buckets. It's fine to query for many buckets if that appears to be a clear user intent. (Kind of similar to "all time", which we didn't want to be too easy to select, but if someone goes to the trouble of selecting a custom time range spanning many years, we don't disallow it.)

