A/B Testing
Reading: A/B Testing
A controlled experiment comparing two variants (A and B) to determine which performs better on a defined metric. Widely used in landing page optimization, email marketing, and ad copy testing. Requires sufficient sample size and statistical significance before drawing conclusions.
What Is A/B Testing?
A/B testing (also called split testing) is a method for making data-driven decisions. Rather than arguing about which headline is better, you show each version to a random half of your audience and measure which produces more conversions.
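The "random half" split is usually done deterministically, so a returning visitor always sees the same variant. A minimal sketch using Python's standard library (the function name and experiment label are illustrative, not from any particular testing tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (experiment name + user_id) gives a stable ~50/50 split:
    the same user always lands in the same bucket, and different
    experiments bucket users independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Hash-based assignment avoids storing a bucket table and keeps the experience consistent across sessions, which matters when a conversion can happen on a later visit.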
The A/B Testing Process
1. Form a hypothesis: "Changing the CTA from 'Submit' to 'Get my free audit' will increase the conversion rate (CVR)"
2. Create variants: Build version A (control) and version B (treatment)
3. Calculate required sample size: Use a statistical power calculator
4. Run the test: Show each variant to random halves of traffic simultaneously
5. Analyze results: Wait for statistical significance before deciding
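Step 3 above can be sketched with the standard normal-approximation formula for comparing two proportions. This is one common way power calculators work; the baseline and expected rates below are hypothetical inputs:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant for a two-sided
    two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_baseline * (1 - p_baseline)
                              + p_expected * (1 - p_expected))) ** 2
    return ceil(numerator / (p_baseline - p_expected) ** 2)

# Hypothetical example: baseline CVR 3%, hoping the new CTA reaches 4%
n = sample_size_per_variant(0.03, 0.04)
```

Detecting a 3% → 4% lift at 95% confidence and 80% power needs on the order of a few thousand visitors per variant, which is why underpowered tests are so common.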
Statistical Significance
Tests need enough conversions to produce reliable results. A common minimum is 100–200 conversions per variant; a test run for only 2 days with 10 conversions per variant will produce unreliable, often misleading conclusions.
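One standard way to check significance is a pooled two-proportion z-test. A minimal sketch (the conversion counts below are made-up illustrations):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Same relative lift, very different verdicts (hypothetical data):
large = two_proportion_p_value(120, 5000, 160, 5000)  # significant at 0.05
small = two_proportion_p_value(12, 500, 16, 500)      # not significant
```

Note that the small test shows the same 2.4% vs 3.2% rates as the large one, yet fails to reach significance, which is exactly why low-volume tests mislead.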
What to Test (Priority Order)
1. Headline copy
2. CTA copy and color
3. Hero image or video
4. Form fields
5. Page layout and section order
When Conversion Volume Is Low
Test micro-conversions (scroll depth, button clicks, form displays) rather than final conversions. Because micro-conversions occur at much higher baseline rates, they reach statistical significance with far less traffic, producing actionable data faster.
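The traffic savings can be made concrete with the same sample-size formula used for test planning. The rates below are hypothetical, but the shape of the result holds generally: for the same relative lift, a higher baseline rate needs far fewer visitors.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_needed(p1: float, p2: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors per variant to detect p1 -> p2 (two-proportion z-test,
    normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Same +20% relative lift, measured on two different metrics:
final = visitors_needed(0.02, 0.024)  # final conversion: 2% -> 2.4%
micro = visitors_needed(0.30, 0.36)   # CTA clicks: 30% -> 36%
```

Under these assumed rates, the micro-conversion test needs well under a tenth of the traffic the final-conversion test does, which is the whole argument for testing upstream metrics on low-volume sites.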