Experimental Mind #158

Here you have it: a curated overview of interesting reads you might have missed, plus events and jobs for the experimental mind (that’s you). Everything is handpicked by me, so you don’t have to sift through it all yourself.

Have a great week — and keep experimenting.


🔎 Interesting reads you might have missed

How to deal with top-down projects

Itamar Gilad shares how to deal with product ideas landing from the top. Or are you in the lucky situation where this never happens in your company?

Why goal cascades are harmful (and what to do instead)

John Cutler describes a better way to do goal setting:

Goal cascades meet internal needs, not customer needs. They meet the requirement for a planning process at the expense of catalyzing congruence across the organization. … Ditch the cascade. Create models. Set goals wherever it makes sense to set goals.

5 features to 10x experiment velocity

Many companies want to 10x their experimentation velocity. The team at Statsig describes 5 techniques that help you get there:

  1. Feature Rollouts: auto-measure new feature impact with an A/B test
  2. Parameters: remove experiment variants in code to iterate faster
  3. Layers: remove hardcoded experiment references from code
  4. CUPED: use statistical techniques to get results faster
  5. Holdouts: measure cumulative impact/progress without grunt work
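To make technique 4 concrete, here is a minimal sketch of CUPED (Controlled-experiment Using Pre-Experiment Data) on synthetic data. It is not Statsig's implementation — just the core idea: subtract the part of the metric explained by a pre-experiment covariate, which shrinks variance and lets experiments reach significance sooner. All variable names and the data-generating numbers are illustrative assumptions.

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED-adjusted metric: y - theta * (x - mean(x)),
    where x is the pre-experiment covariate and theta = cov(x, y) / var(x)."""
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

# Synthetic example: in-experiment metric y correlates with pre-experiment metric x.
rng = np.random.default_rng(42)
x = rng.normal(100, 10, size=10_000)            # pre-experiment metric per user
y = 0.8 * x + rng.normal(0, 5, size=10_000)     # correlated in-experiment metric

y_adj = cuped_adjust(y, x)

# The adjustment has zero mean, so the average treatment effect is unchanged,
# but the variance (and hence required sample size) drops substantially.
print("var before:", np.var(y, ddof=1))
print("var after: ", np.var(y_adj, ddof=1))
```

The variance reduction is roughly 1 − ρ², where ρ is the correlation between the pre- and in-experiment metric, which is why CUPED "gets results faster": less variance means narrower confidence intervals at the same sample size.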

15 most common CRO mistakes

Generating meaningful results through CRO and experimentation is tough. While almost anyone can achieve the odd one-off conversion rate uplift, producing wins with any consistency is another challenge entirely. What’s more, many CRO practitioners find that even when they do achieve the kinds of results they’re after, their winning variations fail to perform when served […]

Google’s recommendations to minimise impact

Google recently (slightly) updated their documentation on how to minimise the impact of A/B tests on search ranking. Not new, but a good refresher.

[Paper] Addressing hidden imperfections in online experimentation

This paper aims to make practitioners of experimentation more aware of imperfections in technology industry RCTs, which can be hidden throughout the engineering stack or in the design process.

These imperfections can introduce bias in the estimated causal effect, a loss of statistical power, an attenuation of the effect, or even a need to reframe the question that can be answered.

Updated list of curated podcasts

John Ostrowski recommended the interview with Marty Cagan on Lenny’s podcast to me. They talk about what good product teams do differently from feature teams. To me, that’s a crucial ingredient in building a culture of experimentation.

🚀 Job opportunities

Tracking & Measurement Specialist

[DUTCH] As Data Tracking & Measurement Specialist at de Bijenkorf, you translate data needs into tracking implementations across all digital channels, turning them into relevant customer data and making it accessible through analytics.

Find all open roles on the job board.

📅 Upcoming events

Check out the full overview of events for the coming months.

💬 Quote of the week

“… there is no Spotify model at Spotify” — an Agile Coach at Spotify (source)

😉 Fun of the week

How many sprint meetings can we have? (credits: Work Chronicles)

👍 Thank you for reading

If you’re enjoying the Experimental Mind newsletter, please share this email with a friend or colleague. Looking for inspiration on how best to do that? Take a look at what others are saying.