Many A/B testing problems come from using statistical methods without checking whether they fit the situation. The three most common mistakes are: (1) using the Mann–Whitney test to compare medians, (2) bootstrapping every dataset without checking that the sample is representative, and (3) always sticking with the default Type I and Type II error rates.

Three A/B Testing Mistakes I Keep Seeing (And How to Avoid Them)

Over the past few years, I have observed many common errors people make when designing A/B tests and performing post-analysis. In this article, I want to highlight three of these mistakes and explain how they can be avoided.

Using Mann–Whitney to compare medians

The first mistake is the incorrect use of the Mann–Whitney test. This method is widely misunderstood and frequently misused, as many people treat it as a non-parametric “t-test” for medians. In fact, the Mann–Whitney test is designed to determine whether there is a shift between two distributions.


When applying the Mann–Whitney test, the hypotheses are defined as follows: the null hypothesis is that the two distributions are identical, and the alternative is that one distribution is stochastically shifted relative to the other (values from one group tend to be larger than values from the other).

We must always consider the assumptions of the test. There are only two:

  • Observations are i.i.d.
  • The distributions have the same shape

How to compute the Mann–Whitney statistic:

  1. Sort all observations by magnitude.
  2. Assign ranks to all observations.
  3. Compute the U statistics for both samples.
  4. Choose the minimum of these two values.
  5. Use statistical tables for the Mann–Whitney U test to find the probability of observing this value of U or lower (a code sketch of these steps follows below).
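
To make these steps concrete, here is a minimal Python sketch (the data, sample sizes, and distributions are invented for illustration) that computes the U statistics by hand and checks them against scipy.stats.mannwhitneyu:

```python
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

rng = np.random.default_rng(42)
a = rng.exponential(scale=1.0, size=200)   # control metric (synthetic)
b = rng.exponential(scale=1.1, size=200)   # treatment metric (synthetic)

# Steps 1-2: rank the pooled observations (ties get average ranks).
ranks = rankdata(np.concatenate([a, b]))

# Step 3: U for each sample from its rank sum; note U_a + U_b = n_a * n_b.
r_a = ranks[: len(a)].sum()
u_a = r_a - len(a) * (len(a) + 1) / 2
u_b = len(a) * len(b) - u_a

# Steps 4-5: take the smaller U; scipy does the tail-probability lookup
# (a normal approximation for samples this large) instead of printed tables.
u_min = min(u_a, u_b)
stat, p_value = mannwhitneyu(a, b, alternative="two-sided")
print(u_a, u_b, u_min, stat, p_value)
```

Recent SciPy versions report the U statistic for the first sample rather than the minimum, so `stat` should match `u_a` here; for a two-sided test the p-value is the same either way.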

**Since we now know that this test should not be used to compare medians, what should we use instead?**

Fortunately, in 1945 the statistician Frank Wilcoxon introduced the signed-rank test, now known as the Wilcoxon Signed Rank Test.

The hypotheses for this test match what we originally expected: the null hypothesis is that the median of the paired differences is zero, and the alternative is that it is not.

How to calculate the Wilcoxon Signed Rank test statistic:

  1. For each paired observation, calculate the difference, keeping both its absolute value and sign.

  2. Sort the absolute differences from smallest to largest and assign ranks.

  3. Compute the test statistic (a standard formulation is sketched after this list).


  4. The statistic W follows a known distribution. When n is larger than roughly 20, it is approximately normally distributed. This allows us to compute the probability of observing W under the null hypothesis and determine statistical significance.

Some intuition behind the formula:
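
One textbook formulation (not necessarily the exact one in the original figures) keeps the two rank sums separate: W+ is the sum of the ranks of the positive differences, W− is the sum for the negative differences, and the reported statistic is the smaller of the two. Under the null hypothesis, positive and negative differences of every magnitude are equally likely, so the two rank sums should be close to each other; a very lopsided split is evidence of a systematic shift. Here is a minimal sketch on synthetic paired data (the metric, sample size, and uplift are invented), checked against scipy.stats.wilcoxon:

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

rng = np.random.default_rng(0)
before = rng.normal(loc=100, scale=10, size=50)          # e.g. per-user spend
after = before + rng.normal(loc=1.5, scale=5, size=50)   # small invented uplift

# Steps 1-2: signed differences, then ranks of their absolute values.
d = after - before
d = d[d != 0]                      # zero differences carry no sign information
ranks = rankdata(np.abs(d))

# Step 3: rank sums for positive and negative differences.
w_plus = ranks[d > 0].sum()
w_minus = ranks[d < 0].sum()
w = min(w_plus, w_minus)

# Step 4: for n larger than ~20, W+ is approximately normal under H0 with
# mean n(n+1)/4 and variance n(n+1)(2n+1)/24; scipy handles that lookup.
stat, p_value = wilcoxon(after, before)
print(w_plus, w_minus, w, stat, p_value)
```

With the default two-sided alternative, scipy reports the same min(W+, W−), so `stat` should equal `w` here.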

Using bootstrapping everywhere and for every dataset

The second mistake is applying bootstrapping all the time. I’ve often seen people bootstrap every dataset without first verifying whether bootstrapping is appropriate in that context.

The key assumption behind bootstrapping is:

==The sample must be representative of the population from which it was drawn.==

If the sample is biased and poorly represents the population, the bootstrapped statistics will also be biased. That’s why it’s crucial to examine proportions across different cohorts and segments.

For example, if your sample contains only women, while your overall customer base has an equal gender split, bootstrapping is not appropriate.
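
A small simulation makes the point (all numbers and segment names here are invented): bootstrap the mean of a representative sample and of a women-only sample drawn from the same population, and only the first confidence interval covers the true population mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population: two segments of equal size with different mean spend.
women = rng.normal(loc=30, scale=8, size=50_000)
men = rng.normal(loc=50, scale=8, size=50_000)
population = np.concatenate([women, men])              # true mean is about 40

def bootstrap_ci(sample, n_boot=5_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `sample`."""
    boot_means = np.array([
        rng.choice(sample, size=len(sample), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])

representative = rng.choice(population, size=500, replace=False)
biased = rng.choice(women, size=500, replace=False)    # women-only sample

print("true mean:        ", round(population.mean(), 1))
print("representative CI:", bootstrap_ci(representative))
print("biased CI:        ", bootstrap_ci(biased))      # lands near 30, not 40
```

Resampling cannot repair sampling bias: the bootstrap only reuses whatever the original sample already contains.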

Always using default Type I and Type II error values

Last but not least is the habit of blindly using default experiment parameters. In about 95% of cases, 99% of analysts and data scientists at 95% of companies stick with defaults: a 5% Type I error rate and a 20% Type II error rate (or 80% test power).

Let’s start with a question: why don’t we just set both Type I and Type II error rates to 0%?

==Because doing so would require an infinite sample size, meaning the experiment would never end.==

Clearly, that’s not practical. We must strike a balance between the number of samples we can collect and acceptable error rates.

I encourage people to consider all relevant product constraints.

The most convenient way to do this is to create a table like the one below and discuss it with product managers and the people responsible for the product.

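As an illustration of what such a table might look like (the baseline conversion rate, MDE grid, and error rates below are hypothetical, and the formula is the usual normal approximation for a two-sided two-proportion test), a short script can fill it in:

```python
import numpy as np
from scipy.stats import norm

def sample_size_per_group(p_base, mde_rel, alpha, power):
    """Approximate per-group sample size for a two-sided two-proportion z-test."""
    p1 = p_base
    p2 = p_base * (1 + mde_rel)                      # MDE expressed as a relative lift
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return int(np.ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2))

print(f"{'alpha':>6} {'power':>6} {'MDE':>6} {'n per group':>12}")
for alpha in (0.05, 0.01):
    for power in (0.80, 0.90):
        for mde in (0.01, 0.05, 0.10):
            n = sample_size_per_group(p_base=0.05, mde_rel=mde, alpha=alpha, power=power)
            print(f"{alpha:>6} {power:>6} {mde:>6.0%} {n:>12,}")
```

Dividing the per-group sample size by your daily eligible traffic gives an expected experiment duration, which is usually the number product managers care about most.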

For a company like Netflix, even a 1% MDE can translate into substantial profit. For a small startup, that’s not true. Google, on the other hand, can easily run experiments involving tens of millions of users, making it reasonable to set the Type I error rate as low as 0.1% to gain higher confidence in the results.



Our path to excellence is paved with mistakes. Let’s make them!
