Petanque Life

Product Analytics

F21.14 6 features

At a glance

Product analytics for sys: feature-adoption dashboards across tenants, plans, and countries; signup → first match → active funnels per tenant and aggregated; weekly and monthly retention cohorts sliced by source, plan, and country; GrowthBook experiment readouts with lift, confidence interval, and sample size; a sys-engineer-only read-only Mongo playground; and an executable product-metric catalog acting as the single source of truth for DAU, WAU, MAU, and stickiness.

How it works

Product Analytics is the surface where the platform's growth is measured rather than its operations. The product-usage dashboard plots feature-adoption rates across tenants, per plan, and per country, sourced from `ApiUsageEvent` rolled up nightly. Funnels track the canonical signup → first match → active path both per tenant and aggregated, exposing the leak points that warrant feature work.
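The funnel itself is set intersection per stage: a user counts toward a stage only if they also reached every earlier stage. A minimal sketch, assuming events arrive as (tenant, user, event-name) tuples; the tuple shape and stage names are illustrative, not the actual `ApiUsageEvent` schema:

```python
from collections import defaultdict

# Illustrative events; the real source is the nightly ApiUsageEvent rollup.
EVENTS = [
    ("t1", "u1", "signup"), ("t1", "u1", "first_match"), ("t1", "u1", "active"),
    ("t1", "u2", "signup"), ("t1", "u2", "first_match"),
    ("t2", "u3", "signup"),
]

STAGES = ["signup", "first_match", "active"]

def funnel(events, tenant=None):
    """Count users surviving each stage in order, optionally for one tenant."""
    reached = defaultdict(set)  # stage name -> set of user ids
    for t, user, name in events:
        if tenant is None or t == tenant:
            reached[name].add(user)
    counts, survivors = [], None
    for stage in STAGES:
        # A user survives a stage only if they reached all prior stages too.
        survivors = reached[stage] if survivors is None else survivors & reached[stage]
        counts.append(len(survivors))
    return dict(zip(STAGES, counts))

print(funnel(EVENTS))              # aggregated across tenants
print(funnel(EVENTS, tenant="t1"))  # per-tenant view
```

The drop from one count to the next is exactly the "leak point" the dashboard surfaces.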

Retention cohorts produce weekly and monthly heatmaps sliced by signup source, plan, and country, so stickiness can be diagnosed by acquisition channel. Feature-flag experiment readouts reach into GrowthBook for variant assignments and combine them with conversion events to compute lift, confidence interval, and sample size per variant; the readout is the gate before flipping a flag from experiment to default. The custom-query playground is intentionally read-only and `sys_engineer`-gated: queries are capped at 10 000 rows, syntax-validated through the MongoDB driver, and persisted as `sys_saved_queries`, so a useful one-off becomes a shared diagnostic.
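The readout math is a standard two-proportion comparison. A sketch of lift with a normal-approximation 95 percent confidence interval, using made-up conversion counts; the real readout first joins GrowthBook variant assignments to conversion events, and may use a different statistics engine:

```python
from math import sqrt

def readout(conv_a, n_a, conv_b, n_b, z=1.96):
    """Lift of variant B over control A with a normal-approximation CI.

    conv_* are conversion counts, n_* are sample sizes per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = p_b - p_a
    # Standard error of the difference of two independent proportions.
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return {
        "lift": lift,
        "ci": (lift - z * se, lift + z * se),
        "n": n_a + n_b,
        # Significant when the interval excludes zero.
        "significant": (lift - z * se) > 0 or (lift + z * se) < 0,
    }

# Illustrative counts: ~6.2-point lift on a 4 800-user sample.
r = readout(conv_a=480, n_a=2400, conv_b=629, n_b=2400)
```

When `significant` is true the interval excludes zero, which is the gate condition described above.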

The product-metric catalog is the most important piece of the surface even though it has no UI of its own: it stores executable definitions for DAU, WAU, MAU, stickiness, retention M1, active tenants, and signups, and every dashboard pulls from the catalog. Two dashboards quoting the same metric will always see the same number, because there is exactly one definition. New metrics land here first, then in dashboards.
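One way to picture the catalog: each metric is a single executable definition keyed by name, and dashboards evaluate the definition rather than copying the formula. A sketch over a hypothetical (user, day) activity log; the storage format and function signatures are assumptions, though the window sizes follow the usual DAU/WAU/MAU conventions:

```python
from datetime import date, timedelta

def active_users(log, on, window_days):
    """Users seen at least once in the window ending on `on`, inclusive."""
    start = on - timedelta(days=window_days - 1)
    return {user for user, day in log if start <= day <= on}

# Hypothetical catalog: one definition per metric, referenced everywhere.
CATALOG = {
    "dau": lambda log, on: len(active_users(log, on, 1)),
    "wau": lambda log, on: len(active_users(log, on, 7)),
    "mau": lambda log, on: len(active_users(log, on, 30)),
    # Stickiness: DAU / MAU, the share of monthly actives seen today.
    "stickiness": lambda log, on: (
        len(active_users(log, on, 1)) / max(len(active_users(log, on, 30)), 1)
    ),
}

today = date(2025, 6, 30)
LOG = [
    ("u1", today),
    ("u2", today - timedelta(days=3)),
    ("u3", today - timedelta(days=20)),
]
```

Because both a dashboard and a CFO report would call `CATALOG["mau"]`, they cannot drift apart.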

Key capabilities

  • Feature-adoption dashboard sliced by tenant, plan, country
  • Signup → first match → active funnel per tenant and aggregated
  • Weekly and monthly retention cohorts sliced by source, plan, country
  • Experiment readout: lift + confidence interval + sample size per variant
  • Read-only Mongo playground with 10k row cap, `sys_engineer`-gated, saved queries
  • Product-metric catalog as single executable source of truth (DAU/WAU/MAU/stickiness/retention)
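The playground's two safety rules, read-only access and the 10k row cap, amount to a guard applied before any query executes. A minimal sketch; the operation names and query shape here are hypothetical, and the shipped version also syntax-validates through the MongoDB driver:

```python
MAX_ROWS = 10_000
WRITE_OPS = {"insert", "update", "delete", "replace", "drop", "create"}

def guard(query: dict) -> dict:
    """Reject write operations and clamp the row limit before execution."""
    op = query.get("op", "find")
    if op in WRITE_OPS:
        raise PermissionError(f"playground is read-only: {op!r} rejected")
    limited = dict(query)
    # Clamp any requested limit to the playground-wide cap.
    limited["limit"] = min(int(query.get("limit", MAX_ROWS)), MAX_ROWS)
    return limited
```

A saved query would pass through the same guard on every re-run, so sharing it cannot widen its privileges.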

In practice

A product manager wants to know whether the new score-keeper rollout has lifted weekly retention. She opens the experiment readout for the `score-keeper-v2` flag: variant B shows +6.2 percent retention with a 95 percent confidence interval that excludes zero and a sample of 4 800 users. She opens the cohort heatmap to confirm the lift holds for newer cohorts.

Satisfied, she flips the flag from 25 percent to 100 percent rollout in F21.05. A sys engineer later writes a saved query that joins `feature_usage` to `tenant.country` to slice adoption by region; she pins it for the team and the next month's CFO review reuses the same query.

Features in this subsystem

ID Status Feature
F21.14.01 Shipped Product usage dashboard — feature adoption rates across tenants, per plan, per country. PL-T134
F21.14.02 Shipped Funnel analysis — signup → first match → active user. Per tenant and aggregate. PL-T134
F21.14.03 Shipped Retention cohorts — weekly/monthly, sliced by signup source, plan, country. PL-T134
F21.14.04 Shipped Feature-flag experiment readout — conversion lift, confidence interval, sample size per variant. PL-T134
F21.14.05 Shipped Custom query — read-only MongoDB playground with saved queries, 10k row cap, sys_engineer-gated. PL-T134
F21.14.06 Shipped Product-metric catalog — executable definitions for DAU/WAU/MAU/stickiness/retention-m1/active-tenants/signups. Single source of truth referenced from dashboards. PL-T134