Back to Basics: Why You’re Making Bad Campaign Decisions
Are you a newbie or intermediate affiliate marketer? Then this is for you.
I know everyone wants to learn the flashy stuff – the tricks or secrets, but becoming a Super Affiliate is all about mastering the fundamentals.
That’s what this series is about. Getting back to the basics and building a solid base of knowledge for you to work off of.
Let’s say I have a quarter, and I flip it five times.
All 5 times it lands on tails. Wow! Based on these flips, I could claim with confidence that every time I flip a coin, it’ll land on tails.
But we all know from experience that’s not true – the real probability of landing on tails is 50%.
The problem is we didn’t flip it enough times.
If I flipped it 100 times, we’d see the real probability is closer to 50%. We came to a conclusion with bad data.
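The coin-flip idea is easy to verify yourself. A minimal simulation (function name and seed are my own, not from the post) shows how a handful of flips can land far from 50%, while a large sample settles near it:

```python
import random

def tails_frequency(flips: int, seed: int = 7) -> float:
    """Flip a fair coin `flips` times and return the observed share of tails."""
    rng = random.Random(seed)
    tails = sum(rng.random() < 0.5 for _ in range(flips))
    return tails / flips

# 5 flips can easily come up 0%, 20%, or 100% tails;
# 100,000 flips will sit very close to 50%.
print(tails_frequency(5))
print(tails_frequency(100_000))
```

Run it with different seeds and the 5-flip result jumps around wildly while the 100,000-flip result barely moves – that gap is the whole point of the example.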
This is a core reason why many affiliate marketers can’t get their campaigns profitable. They don’t understand statistical significance, and they make campaign decisions based on bad data.
Let’s apply this concept to campaigns and see what I’m talking about.
Statistical Significance in Action
John has just launched a new campaign on Google AdWords, and here’s his data.
Let’s check statistical significance using a calculator (SplitTester).
What if we let the campaign run a few more days?
It looks like ad 2 is actually better.
But why wasn’t ad 2 better in the initial test? Who cares – it doesn’t matter. It’s just like how we landed on tails every time in the earlier example: shit happens.
There’s always an element of luck to everything in life, and the more tests we run, the more we can get rid of luck and find the real numbers.
Do you see the point I am trying to make here? People make changes to their campaigns too soon.
John is going to keep running the campaign with a subpar ad. He could have had a profitable campaign, but he made some bad decisions.
John has a budget of $200 for a mobile campaign this week. He’s not sure how many ads to launch.
He decides to launch 40 ads.
You see the problem right? Each ad is only going to get $5 worth of impressions or clicks, which is not enough data to reach statistical significance.
Instead he should test a smaller number of ads, according to his budget.
In this scenario he should’ve launched with a smaller number of ads, like 7. From those 7 ads he can maybe find 1 or 2 solid ads to build the rest of his campaign on.
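The budget math here is simple division. Assuming an even traffic split and a hypothetical $0.10 cost per click (my numbers, not the post’s), here’s how much data each ad actually collects:

```python
def clicks_per_ad(budget: float, num_ads: int, cpc: float) -> float:
    """Rough clicks each ad collects if the budget is split evenly across ads."""
    return budget / num_ads / cpc

# Hypothetical $0.10 CPC with John's $200 budget:
print(clicks_per_ad(200, 40, 0.10))  # 40 ads -> ~50 clicks each
print(clicks_per_ad(200, 7, 0.10))   # 7 ads  -> ~285 clicks each
```

Fifty clicks per ad is nowhere near enough to separate a winner from a lucky streak; a few hundred at least gives the test a fighting chance.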
John launches his campaign on Monday. By Thursday he pauses it because he’s losing too much money.
Maybe the campaign would have been profitable Friday, Saturday, and Sunday (some verticals convert better on the weekends).
He doesn’t know, because he didn’t collect enough data and turned the campaign off too early.
Using This in Your Campaigns
I used ads as an example, but this applies to everything – which landing pages are the best, which offer is better, etc.
Many newbies are confused about how many ads, landing pages, and offers to launch with.
It depends on your budget.
$200 budget? Maybe 3 offers, 1 landing page, 7 ads.
$2,000 budget? Maybe 5 offers, 3 landing pages, 50 ads.
I don’t make decisions based on emotions or gut feel.
I make data-driven decisions.
In 2014, am I still calculating statistical significance whenever I’m optimizing? Rarely. I just run a lot of volume and I can eyeball the results. I break out the calculator if a test is close.
But if you are working with a lower budget, then absolutely calculate.
Did you find this article helpful? Share or comment on the post and I’ll write more.