
Conversation

@RobertJoonas
Contributor

Changes

Please describe the changes made in the pull request here.

Below you'll find a checklist. For each item on the list, check one option and delete the other.

Tests

  • Automated tests have been added
  • This PR does not require tests

Changelog

  • Entry has been added to changelog
  • This PR does not make a user-facing change

Documentation

  • Docs have been updated
  • This change does not need a documentation update

Dark mode

  • The UI has been tested both in dark and light mode
  • This PR does not change the UI

@github-actions

Preview environment👷🏼‍♀️🏗️
PR-6035

```elixir
end

scope private: %{allow_consolidated_views: true} do
  post "/:domain/query", StatsController, :query
```
Contributor

I know this is a WIP, but with a POST endpoint we give up the ability to cache query responses client-side and in a CDN. I think there's a lot to be gained in app responsiveness and scalability from even very basic caching, and we shouldn't give that up.

Contributor Author

This did cross my mind, but I deemed it unlikely that we'd ever want to browser/CDN-cache dashboard stats requests, because:

  1. Stats requests depend not only on the query parameters, but also on server-side state: site goals, props, imported data, feature flags, permissions, etc. Invalidating such a cache correctly sounds like quite a challenge.
  2. JS memory caching (e.g. via TanStack Query, which we're already using in the details modals) gives us better control over freshness and invalidation without relying on HTTP caches. Clicking around the dashboard (filters/period/with_imported) and landing on the same query -> cache hit. Refresh the entire dashboard -> everything is fresh.
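To make point 2 concrete, here's a minimal, self-contained sketch of the idea (illustrative only; all names are hypothetical and this is not the actual dashboard code, which would use TanStack Query): the cache key is derived from every input that affects the result, so repeating the same query is a cache hit while any change to the inputs produces a fresh fetch.

```typescript
// Sketch of client-side query caching keyed by the full query input.
// Hypothetical names; the real dashboard would use TanStack Query keys.

type QueryInput = {
  site: string;
  period: string;
  filters: string[];
  with_imported: boolean;
};

const cache = new Map<string, unknown>();

// Stable key: any change to period/filters/with_imported yields a new key,
// so landing on the same query again is a cache hit.
function queryKey(input: QueryInput): string {
  return JSON.stringify([
    input.site,
    input.period,
    [...input.filters].sort(),
    input.with_imported,
  ]);
}

function cachedQuery<T>(input: QueryInput, run: (input: QueryInput) => T): T {
  const key = queryKey(input);
  if (cache.has(key)) return cache.get(key) as T;
  const value = run(input);
  cache.set(key, value);
  return value;
}
```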

Contributor

Hmm, but how often do site goals, props, imported data, feature flags, and permissions actually change?

Imagine 100 users accessing a public dashboard within one minute (e.g. a tweet with a link starts trending).

Do we want to make the 100 * 6 ClickHouse queries to populate the dashboard for each of them individually? What's the real harm in serving most of them, say, up to two-minute-old data? It would make a difference for our servers, enabling us to scale up without adding more resources.
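The trade-off being argued for here is essentially a shared cache with a short TTL: repeat visitors within the window get slightly stale data instead of triggering fresh ClickHouse queries. A minimal sketch of that mechanism (names and the 2-minute TTL are illustrative, not from the PR):

```typescript
// Sketch of a shared TTL cache: entries are served until they expire,
// so 100 visitors in one minute cause one backend query, not 100.

type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(private ttlMs: number) {}

  // `now` is injectable to keep the behavior testable/deterministic.
  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```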

Contributor

@apata raises good points. Caching certainly has good potential to reduce pressure on ClickHouse.

On the other hand, POST makes sense here because we need to be able to send a JSON payload. Making it a GET would require some sort of URL serialization (bringing in a jsonurl dependency on the Elixir side, using base64, or something similar). That can get a bit messy in terms of debuggability/logging.

Given that this is an internal endpoint that we can freely change in the future, I'm in favour of going with the simplest and most transparent option now (POST with a JSON body). If we want to add HTTP/CDN-level caching later, we can change it to a GET with URL serialization. This is based on my assumption that changing it in the future is not a big job, limited to API client code in React and a single controller action on the backend.
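For reference, the GET alternative mentioned above could serialize the JSON query into a single URL-safe parameter, e.g. with base64url. A sketch of the encode/decode roundtrip (hypothetical route and function names, not what this PR implements; it also shows the debuggability downside, since the URL is opaque without decoding):

```typescript
// Sketch: serialize a JSON query into a URL-safe GET parameter.
// Hypothetical route: GET /api/:domain/query?q=<encodeQuery(...)>

function encodeQuery(query: object): string {
  // Node's base64url variant avoids characters needing percent-encoding.
  return Buffer.from(JSON.stringify(query)).toString("base64url");
}

function decodeQuery(param: string): object {
  return JSON.parse(Buffer.from(param, "base64url").toString("utf8"));
}
```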


4 participants