Ever wonder why some businesses seem to grow effortlessly while others struggle? The secret might be simpler than you think. Today, we're diving into the world of split testing - a powerful tool that can supercharge your business growth. But forget the complicated jargon and fancy theories. We're breaking it down into seven simple rules that anyone can use. Whether you're running a small startup or managing a big team, these practical tips will help you make smarter decisions and see real results.
Brandon Bateman is joined by marketing expert Garrett Cragun, who shares real-world examples, common pitfalls to avoid, and easy ways to get started. So if you're tired of guessing what works and want to start seeing actual improvements in your business, stick around. This episode might just change the way you think about growing your company. Let's dive in!
00:00 - Introduction and welcome
01:00 - The importance of split testing for business growth
02:30 - Rule 1: Start with an actionable hypothesis
05:00 - Different types of marketers and approaches
07:00 - Rule 2: Only change one thing in a test
10:30 - Rule 3: Test samples during the same time period
13:30 - Rule 4: Every test needs a numerator and denominator
16:00 - The math behind A/B testing and statistical significance
21:00 - Common mistakes in interpreting test results
24:00 - Rule 5: Optimize tests towards revenue
26:30 - Rule 6: Test long enough for meaningful results
29:00 - Rule 7: Document your tests
32:00 - Using sprints for project management and testing
36:00 - Benefits of sprint-based project management
39:00 - Closing thoughts and wrap-up
Brandon Bateman: Hello and welcome back to another episode of the Collective Clicks podcast. This is your host, Brandon Bateman, and today I'm joined by Garrett Cragun. If you haven't heard an episode with Garrett, he is an absolute genius from my team who helps lead many things within the company. We're talking all about split testing today. How are you doing today, Garrett?
Garrett: Doing great. How are you, Brandon?
Brandon: Fantastic! I'm excited to be talking to you again. It's been a while since we've had you on the podcast, and the world is missing your wisdom, so here we are.
Garrett: I know it's been a long time, and I'm thrilled to be back in the fold.
Brandon: Yes, and we've put a little work into this episode, and I'm excited to see what people think. This is honestly, in my opinion, one of the most impactful things a business owner can understand: how to test things. Because at the end of the day, testing is the only way a company grows. You have to test things, unless you've got better intuition, a better gut, than everyone else; and in my experience, nobody really does. We all think we know things we don't, and the only way to actually get better is to observe the data and learn from it. But there are a lot of really important aspects of split testing that people get wrong, so we came up with what we call the seven rules of split testing: seven specific things we do with every single split test to make sure we're making the most progress we possibly can. And this can apply to so many things. When I say split test, you're probably picturing Ad A versus Ad B, and yes, you can do this with ads, but it could also be how you measure acquisitions manager one against acquisitions manager two, or dispo rep one against dispo rep two. It's how you measure how one drip campaign in your CRM affects your contact rate compared to another. There is so much you can do with this. We use it all the time as marketers in the accounts we work in, and we also use it all the time as operators of this business to learn how we could do better. So, Garrett, I'll let you kick it off.
Garrett: Yeah, okay. The first rule is to start with an actionable hypothesis that makes sense. By "makes sense" I mean that if you find a winner, it should fit the logic of why it works. For example, it probably isn't a great test to have our reps drink coffee before they go into the office and see how well they sell on their calls. If we find that they do happen to close better, does that make sense? And is that something we would even do in our business? It has to make sense, and if there's a learning, it has to be something you can actually apply. An example of a good test that fits this: in REI, what kind of video does better on YouTube? Is it a short clip of you at the property, or more of a B-roll-style video of you talking to sellers and shaking hands? Is it "Hey, I'm buying things," or "Hey, I'm a person you can trust"? That's a test that makes sense because it's based on true principles of marketing: is it trust, or is it emotion, that drives the decision? And if there's a winner, you can make more of those videos and then test different things. So that's the first rule: don't test trivial things you can't build on just because they're cute or easy. People will test button colors, and that sounds great, but it isn't really going to matter. If green beats red, what does that mean, and how can you build off it? It's a really small thing.
Brandon: Yeah, and we'll talk about this later: there's an order of operations to testing. If there are two different offers you could be testing, why are you testing two button colors? When we build a hypothesis, we want to focus on the most impactful thing, because here's something people miss about split testing: they think they can run unlimited split tests. By the time we get through all these rules, you're going to realize you can't do nearly as much split testing as you thought. Which means that when you do it, it has to be surgical, and you have to know exactly what you're going after, because if you commit to a split test for 12 months and it's not the right test, you just wasted 12 months. That's why this is so critical. I'll add one piece to what you said. I've noticed there are two types of marketers, and I don't think either is right. The first type is usually very creative: your typical brand marketers, your traditional ad agencies. They come up with some of the best ideas you've ever heard in your life, and they don't know how to use data. The other type is the very direct-response marketer, where the stereotype is that you throw a bunch of stuff against the wall and see what sticks, but the things you're throwing are never made meaningfully different from each other, because your whole game is "I don't know what's going to work, so I'm just going to throw a bunch of stuff." My honest opinion is that the best marketer is a mix of the two: everything you test has to be absolutely fantastic, things you individually believe will work, with your own hypothesis for why one is better than the other, and then you use the data to see it through. You can't be the person who is so in love with their ideas that they never feel the need to test them, but you also can't be the person who plans to test everything anyway and so never invests in making each idea genuinely good. Have you seen the same thing with marketers?
Garrett: Yeah, and what I would add is that there's value in building your process on proven foundations. If someone has already run the test showing that X is better than Y, don't waste time running it again. Start with what's established and test the things that are unique to your offering and your system. Don't spend time on things that have already been found to work well. People will try to reinvent the wheel on a landing page because they like how it looks, but there's a very established way a good page is structured. Start there, and test beyond that.
Brandon: Fantastic advice. Alright, let's talk about number two: only change one thing. This is what makes a true test, and it's so important. Here's how it works. When we think of testing, we often think of landing pages; it's a good example of where A/B testing is really popular. Say I have landing page one and landing page two, and I decide to make landing page two better by changing the headline, and I also change the color of the page. Or, on the topic of bad hypotheses, here's another example: what if on landing page two I just made the page speed faster? Does that make sense to test? No, because you already know it's better; you're just wasting a test. So let's say I change the headline and the color. Now, if landing page two works better, I don't know whether it's because of the headline or the color. I do know that landing page two works better, but ideally, split testing is like going to the eye doctor. They ask, "Does one or two look better?" You say, "Number two." Then, "Does one or two look better?" "Number one." And it keeps going: you get incrementally closer and closer to the end product you're looking for. That's exactly what it should look like.
Now, there is one exception to this rule. If you only ever ran split tests exactly that way, you would often miss out on major improvements. That's where a different kind of test comes in, what we call a C test: you test something completely different to determine your starting place, and it's common at the very beginning. I could build a landing page one that looks completely different from landing page two; you're almost testing philosophies. Landing page one is all about scarcity; landing page two is all about demonstrating benefits. They're completely different. That's a C test, and there are a lot of things that differ between the two. So if landing page two works better than landing page one, I won't know why it worked better, but I might make a big leap of progress right there, and then from there go back to "is A or B better?" over and over, slowly dialing it in. I guess what I'm saying is that quality split testing usually looks a lot more like evolution (picture the monkey slowly turning into a human) than like pure creation, where we dream up a whole new thing and jump straight to it. So many people think, "I just want to completely redo this, completely retest this," and it just doesn't work. We were recently hiring someone for a leadership position on the team, and in the interview he said one thing that told me, before anything else, that he was very experienced; we ended up hiring him. I asked what changes he would make. He said, "Rule number one: I'm going to change one thing in this department at a time. Otherwise, I don't know what made the difference. Realistically, if we're going to get the results we want in this department, we're probably going to be at this for several years, and over the next few years, my goal is to get to this point." That's the hallmark. If you're talking to a marketer, that's how they should talk about their campaigns. If you're talking to a business leader, that's how they should think about their department. Everybody who tries to test a bunch of stuff too fast just doesn't make the real progress they're looking for, and it seems like everybody has to fail through that first to learn how important slow, methodical, one-thing-at-a-time testing is.
Garrett: Yeah, I saw this graphic once: a bell curve of how experienced a marketer is against how often they change things. A novice makes no changes, someone who knows a little makes a lot of changes, and an expert, again, doesn't make a ton of changes. Both ends of the curve change things rarely, but one does it intentionally and the other just doesn't know they should. I think that's the same for testing. Testing a huge volume of stuff tells me you probably don't know what you're doing, because you're panicking and throwing things at the wall with no order, no plan, and no intentionality behind the testing. Volume of tests doesn't indicate you're doing well; it probably means you're not doing it the right way.
Brandon: Yeah. We just had a meeting this morning, a personal development meeting in our company, where we talked about the difference between efficiency and productivity. Efficiency might be "I have to launch 15 split tests." Productivity might be "I'm okay launching one or two split tests, but I'm going to make more progress with those one or two than the other guy makes with his 15." That's the difference: productivity is about less; efficiency is about more. And too often with testing, we get caught in the efficiency mindset.
Garrett: I agree.
Brandon: What's up next, Garrett?
Garrett: Okay, the next one is a big one: always test your two samples during the same time period. I see it all the time: people say, "Let's run this version for this long, pause it, then run the other version and see how each one does." This is really a variation on the last point, "only change one thing." Say I manage our sales team, and I decide that this quarter we're going to use one framework and next quarter we're going to use another. How much is different between those quarters besides the framework? The time of year. How well things are going in marketing. The leads that came into the funnel during each period. Maybe we added a different software. Maybe our email server broke for an hour or two and emails didn't come through. Maybe we changed how we train the team. And it's really hard not to change anything during a quarter just because you're running an A/B test against the next quarter; that's just super inefficient. What's better is to take half of a controlled sample, give it the test, and run the other half as the control during the same time period, so variables like time and seasonality stay controlled. A lot can vary even within a week: running a test Monday through Wednesday versus Thursday through Saturday isn't the same, because people behave differently on different days of the week. Keeping tests time-bound is so important for controlling things you otherwise just can't control.
Brandon: And to bring a little realism into this: sometimes you can't follow this rule. If we're making changes in a department, our people aren't robots. I can't say, "On Tuesdays we're compensating you an extra $10,000 a year, and on Wednesdays you're getting $10,000 less," and expect to learn how compensation affects the team's performance. Certain things you just can't do this with. But the concept stands: control as many variables as you possibly can. And if you're questioning this, think about the last time you saw a major change in performance, better or worse, when you thought you had changed nothing. I'm willing to argue that's pretty much all the time. We all experience it; there's always stuff changing that you aren't changing, in the market and elsewhere. In some scenarios you can control this really well. The most basic example: a beginner in Google Ads might run an ad, decide at some point, "I really want my ads to talk about this instead," make the change, and then look back later and ask, "How have my ads been doing since I made that change?" That's the beginner version of Google Ads, and it really sucks. What you should be doing is running both at the same time, controlling the other variables, and testing one against the other. Some things aren't that easy. I know you've experimented with testing sales scripts and the like, where human behavior is involved, and it gets tricky. Even some digital marketing is tricky: for example, if you want to measure the lift from branded search campaigns, there's no great way to do that in Google that I'm aware of. What you can do is run those campaigns every other day for three months, then aggregate all the data from the off days and all the data from the on days. You're controlling for time as best you can by flipping the switch on and off, rather than running all-on and then all-off. It's tricky, but whenever possible, you want to test both at the same time. I see this with marketing agencies all the time: "I was working with this agency, then I switched and the results got way better, then I switched again and the results got way worse." And I'm thinking: that happens to pretty much all of our clients all the time anyway, even when they don't change agencies. Results are always swinging way better or way worse, and I'm willing to admit that, because there are fluctuations. Even with consistent effort, factors in the market and the environment change and drastically affect results. Even sample size drastically affects results. That's a fact we can't ignore; it has a huge impact.
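To make the alternating-day approach Brandon describes above concrete, here is a minimal sketch in Python of aggregating on-days against off-days. The data, the column names, and the pandas dependency are illustrative assumptions, not from the episode.

```python
# A minimal sketch of the alternating-day lift measurement: run the
# branded campaign every other day, then compare on-days to off-days.
# The numbers and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=6, freq="D"),
    "branded_on": [True, False, True, False, True, False],
    "leads": [14, 11, 16, 10, 15, 12],
})

# Aggregate all on-days against all off-days.
avg = df.groupby("branded_on")["leads"].mean()
lift = avg[True] / avg[False] - 1

print(f"On-days: {avg[True]:.1f} leads/day, off-days: {avg[False]:.1f} leads/day")
print(f"Estimated lift from branded search: {lift:.1%}")
```

In practice you would want many more days than this, for exactly the sample-size reasons discussed next.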
Brandon: And that goes nicely into our next topic. Look at that, transitions! So, the fourth rule is that every test needs a numerator and a denominator. Let me be clear about what this means. You can't run a split test on volume; you can only run a split test on efficiency. A volume metric is "how much of X do I get"; an efficiency metric is "how much of X do I get per Y." A volume metric would be how many contracts I have; an efficiency metric would be how many contracts I get per lead, or per appointment, or whatever the case is. You have to define these things. I know this sounds super basic, but when we get to the later steps about how you determine the winner of a test, there has to be a numerator and a denominator for the basic statistical tests you can run. So you have to determine "how much of X am I looking to get per Y," and that has to be part of your hypothesis before you start, because it determines how long you run the test and all those kinds of things. If it's a landing page, you're likely measuring leads per visitor, your conversion rate: there's your numerator and your denominator. If it's sales, you might measure contracts per lead. If it's an ad, you might measure clicks per person who sees the ad. Those are the basics.
Garrett: And just to add on: it's important that what you measure is within the reach of the test. If I'm changing the images on a Facebook ad, I'm probably not going to measure how that impacts our contract fallout rate; that's so far removed that it's not realistic. You still want to watch it, but if that's your KPI, you'll need a mountain of data to get anywhere close to meaningful. What you measure has to be as close in the process to what you're changing as it can be.
Brandon: And based on what you said, I'd add one more thing. There's some balance here, because sometimes people measure too close in the process. When testing a landing page, the typical marketer looks at conversion rate on page one versus page two. But that ignores the fact that landing page one's call to action says "get your instant cash offer," which makes it sound like we're CarMax and a number for your house is about to pop up on your screen, while landing page two says "talk with a specialist about us purchasing your home," which is a weaker call to action. So in many circumstances, and this is one of the things we're well known for, you want to measure your test a step deeper in the funnel than you usually would. That's the rule of thumb: just go one step deeper. Say we're testing the landing page. The denominator is the number of people visiting the page. Most marketers would then measure how many fill out the form; we would probably measure how many end up becoming opportunities, or something like that. Going one step deeper removes some of the bias, because one of the easiest ways to screw up your business with split testing is to test something that improves performance at one point in the funnel while degrading the next stage. Another example: if I wrote "guaranteed highest offer" in all my ads, and all I measured was clicks per impression, what would happen? My click-through rate would skyrocket. But what happens to my landing page conversion rate, or the quality of my leads? It goes down. You don't want to be one of those marketers who constantly tests things that load up the top of the funnel while everything drops off further down. You have to be really careful about that.
Garrett: Yep. One thing I just realized we haven't covered that might be helpful: the math behind an A/B test. What's a good sample size? What's a good p-value? What even is a p-value? Do you think it's worth going super high level into the math of knowing whether you have a big enough sample size?
Brandon: Yeah, let's do it. You can go incredibly deep into this stuff, so this is just the high-level version. You've seen a normal distribution before, right? It exists in nature; picture your bell curve. Here's the theory behind it. It's a hard thing for a human brain to grasp, because we're not made for this; it turns out computers are really good at this kind of stuff. The reality is that just because you think you know something doesn't mean you know it. I'll give you an example. Say I flipped a coin three times and got three tails. I could ask you: what does the data say about the rate of heads versus tails for that coin? The data says that 100% of the time we get tails. Is that true? If it's a normal coin, it's not. So how did we observe data like that? Sample size. That's the problem. If I flip that coin a million more times, am I likely to end up with close to 500,000 tails? Yes. So picture this: the bell curve starts out really wide. We observe something in the middle and think that's what we're getting, but it could be way better or way worse; we don't know yet. As you gather more and more data, your level of certainty gets higher and higher that you're close to the real number. Now say you're doing a split test. Each of the things you're testing, the A and the B, has its own bell curve, and it turns out there's some overlap between them. Maybe one coin I tested came up 75% heads and the other 75% tails; each of those has its own bell curve of possible true rates. And essentially, there are ways to mathematically calculate the likelihood that what I'm observing has enough data behind it to reasonably support what I think it shows. I hope that makes at least a little bit of sense.
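To make that narrowing bell curve concrete, here is a minimal sketch of how a 95% interval tightens as the sample grows, using the standard normal approximation; the flip counts are illustrative.

```python
# A minimal sketch of the "bell curve narrowing" idea: the same observed
# 50% heads rate, but the 95% interval (normal approximation) tightens
# as the number of flips grows.
import math

def ci95(p_hat: float, n: int) -> tuple[float, float]:
    """95% confidence interval for an observed rate, normal approximation."""
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

for n in (3, 30, 300, 3000):
    lo, hi = ci95(0.5, n)
    print(f"{n:>4} flips at 50% heads: true rate likely in [{lo:.0%}, {hi:.0%}]")
```

At three flips the interval spans essentially everything; at three thousand it pins the rate down to within a couple of points.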
Brandon: There's a metric called a p-value; it comes from what's called the frequentist approach to statistics. When you run a statistical test, the p-value essentially tells you the likelihood that the result is misleading you, and you want it to be a really small number. If this all sounds complicated, I'm sorry, but I'll show you a really easy way to do it. Remember, every test has a numerator and a denominator. If you open up ChatGPT right now and feed it your numerator and your denominator, you'll get an answer. I could tell it: "I have one acquisitions rep who had 30 leads and closed three contracts. I have another acquisitions rep who had 40 leads and closed two contracts. Please run a statistical significance test to help me understand the probability that my first rep is performing better than my second." ChatGPT can literally produce a percentage likelihood that the first rep is actually doing better. Then you can see: is it true that there's a 95% chance the first rep is performing better, or is it more like 70%, where you look at it and think, "This guy's better than that guy," but the reality is there isn't enough data to say that yet? A 70% chance it's true is not a 100% chance.
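For anyone who would rather run the rep-versus-rep numbers locally than ask ChatGPT, here is a minimal sketch using SciPy; the library choice is our assumption, since the episode only mentions ChatGPT. The hosts say "t-test," but for comparing two conversion rates a two-proportion test such as Fisher's exact test is the standard tool, so that is what the sketch uses.

```python
# A minimal sketch of the rep-vs-rep comparison from the episode,
# using Fisher's exact test on two close rates.
from scipy.stats import fisher_exact

# Rep 1: 3 contracts from 30 leads. Rep 2: 2 contracts from 40 leads.
table = [
    [3, 30 - 3],  # rep 1: closed, not closed
    [2, 40 - 2],  # rep 2: closed, not closed
]

# One-sided question: is rep 1's close rate genuinely higher?
_, p_value = fisher_exact(table, alternative="greater")

print(f"Rep 1: {3/30:.0%} close rate, Rep 2: {2/40:.0%} close rate")
print(f"p-value: {p_value:.2f}")  # a large p-value = not enough data to call it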
Brandon: Because while that rep is converting at 10%, really the only thing we know with 95% confidence is that they're converting at 10% plus or minus about 7%. They're somewhere between 3% and 17%, which isn't actually that tight a range. I hope that makes some sense. Garrett, is there anything you'd add to make this clearer or more actionable?
Garrett: The way I always think about it is: what's the likelihood that the results you got are due to chance, or would come out the same if you ran the test again? If that number is high, the result is probably just random. If it's low, it's because there's a true difference between the two versions.
Brandon: And one of the things you usually choose before you run a test is what you want that likelihood to be. If I say I want 90% confidence in my decisions, I have to accept that one in ten of my decisions will be wrong. If I choose 95% confidence, one in twenty will be wrong, but now I might need to run my test twice as long to gather enough data for that level of accuracy. You never eliminate the error. Even when I've gathered enough data to be 99.9% sure, there's always the 0.1%; the more data you get, the smaller that chance becomes, but it never hits zero. And here's what I think a lot of people miss. You know what I hate with a passion? When people say things like, "How much data is enough for a split test? A hundred clicks," or, "You should get at least 300 visitors to the landing page before you make a call." Those rules of thumb might be fine for estimating how long you'll need to run a test, but the reality is you can't predict it, because if A works five times better than B, we'll learn that quickly, and if A works 1% better than B, it will take a really long time to gather enough data to prove it. We have to be really careful about that. And I'd argue this is the number one thing people get wrong with split testing: they don't see their tests through to enough data to prove that one version is actually better. They set up a test with good intentions, see that one side is doing better, lose their patience, and go with the one that's ahead, forgetting that "it worked better so far" does not mean "it will work better in the future." They're looking at a result over a window of time, not the underlying performance. You need those bell curves, which start out really wide, to narrow and narrow until you can say, "I'm okay with the level of uncertainty in my decision."
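Here is a minimal sketch of that effect-size point, using a standard power calculation; statsmodels is assumed to be available and the conversion rates are invented for illustration. The bigger the true difference, the less data you need to prove it, which is why a fixed visitor count can't be the answer.

```python
# A minimal sketch of why rules of thumb like "300 visitors" can't work:
# the sample you need depends on how big the difference really is.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10  # the control converts at 10% (illustrative)

# A variant that's 5x better is provable quickly; a 1% relative
# improvement takes a vastly larger sample (80% power, alpha = 0.05).
for variant in (0.50, 0.15, 0.101):
    effect = proportion_effectsize(variant, baseline)
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"{baseline:.1%} vs {variant:.1%}: ~{n:,.0f} visitors per variant")
```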
Garrett: And I mean, I do it too. It's easy to think, "That looks like a big difference, and it's been running for a while, and I need to show lift quickly, so I'll just roll with it so I can get to my next test." But you're exposing yourself to a ton of risk by making that call, because you then build on it with a new test, on top of a flawed foundation. It's important to make choices based on solid data, because otherwise each next test can take you further and further down the wrong path, and all of a sudden you end up with a far worse outcome because every choice was made off bad data.
Brandon: You know how this shows up in marketing a lot of the time? "I'm doing an A/B test, and every time I pick the winner, suddenly that one doesn't work as well." With so little data, you just went after the variant with the most random upside in its sample, and it's bound to come back down. It's the equivalent of flipping a lot of coins: one of them comes up 90% heads, and I jump on that one. After I crown that coin, do you think it's more likely to perform worse or better than it did originally?
Garrett: Worse, because it overachieved at first.
Brandon: Exactly. It's a statistical concept called regression toward the mean. It's the same thing that happens when two extraordinarily tall people have kids: the most likely height of their children is tall, but somewhere between the parents' height and the population average, because extreme values usually include some amount of random variation.
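Here is a minimal sketch of that coin-flipping trap: crown the best of twenty fair coins on a small sample, then re-test it. The numbers are illustrative.

```python
# A minimal sketch of regression toward the mean: flip 20 identical fair
# coins ten times each, crown the "winner," then re-test it. The winner's
# follow-up run almost always falls back toward 50%.
import random

random.seed(7)

def heads_rate(n_flips: int) -> float:
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

first_round = [heads_rate(10) for _ in range(20)]
best = max(first_round)

print(f"Best of 20 coins on the first run: {best:.0%} heads")
print(f"Same coin on a fresh run:          {heads_rate(10):.0%} heads")
```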
Brandon: Sometimes I picture marketing as everybody out there flipping coins and getting excited. It's like a roulette wheel at a casino: "It's hot, so it's definitely going to stay hot," or, "It's cold, so we're due for a win soon." It doesn't work like that; there are just underlying statistical probabilities. But let's make the takeaway super clear, because I know we've covered a lot of theory. You can literally type into ChatGPT: "I have this many in my sample and this many successes; on the other side of the test, I have this many in the sample and this many successes; run a statistical significance test to compare the performance." It will write out for you which one is better and the likelihood that the result will hold up in the future. It's so powerful. And you'd be surprised: there are people out there who have probably been fired based on data that wasn't statistically significant. It's insane. There are landing pages that got shut off for the same reason. When you start viewing decisions through this lens, you can cut that error rate down to something like 10% or 5%, depending on the level of statistical confidence you choose. I hope I didn't lose everybody with that; it's so powerful, but it can feel dry.
Garrett: I don't think it's boring; it just gets detailed. It makes your brain work a little harder than it wants to, and it's easy to give up.
Brandon: Garrett, you have the next one.
Garrett: All right. This one is all about optimizing your A/B tests toward revenue. What I tell my team, and what I try to do with every test, is start with the test that's closest to dollars. The whole goal of a test is: what's going to get us more clients, more contracts? I could say, "Let's get our ads as many clicks as possible," and eventually, ideally, that leads to more deals somewhere down the line. But what I could do instead is say, "I already have deals in the pipeline; this week, let's try a different approach to how we dispo, or how we work our leads." That has a much faster impact on your bottom line, and it's a much bigger swing than starting way up at the top of the funnel and hoping the effect trickles down. Sorry, I'm gesturing with my hands and this is an audio medium, but if I start closer to the bottom, there's a much shorter journey to where I want to go than if I start far upstream and hope it follows the same path all the way down. Does that make sense, or is that unclear?
Brandon: Totally makes sense to me: the closer your test is to the thing you care about, the more likely it is to actually move that thing. If you're way upstream, you just don't know the effect will make it all the way down; it could get diluted along the way. You're really talking about where to focus first, which I think we touched on earlier too. But here's the thing, and this is actually the next point: make sure you test long enough. I touched on this before, so we can move through it quickly, but you have to understand how long tests really take. Spoiler: it's way longer than most people realize for down-funnel metrics, and actually shorter than most people realize for up-funnel metrics like click-through rate, simply because there's so much more data up there. If you're testing which acquisitions manager is better, that probably takes, if I had to guess, five to ten times longer than you'd assume off the top of your head to know that one really is better than the other, and you can use the statistics to figure out exactly how long. Once you realize that, you realize you don't have the luxury of just saying, "Oh, we'll just test it." We've gotten into that situation as a company before: whenever we disagreed on something, we'd say, "We'll just test it," everybody would agree, and then somebody would realize, "Shoot, this test is going to take 12 months." What inevitably happens is you start the test, a month in you decide it's not what you want to do anymore, and you just keep starting tests without finishing them.
It's a horrible place to get into. That's why it's so important to choose the right tests in the first place. When you realize testing is a finite resource, you prioritize. Remember: productivity over efficiency.
Garrett: What I would add is that when I document my tests, I use the PIE acronym. P is the priority of the test: is what we're testing a high focus for our team right now? I is the impact: how much effect do I think this will have on my key indicator? And E is the ease: how easily and how fast can I get this test rolled out and get to a result? For me, that's a helpful framework for deciding where to start: how fast can this test get me an answer, and how much confidence do I have that it will actually matter?
Brandon: Yeah, 100%, that makes sense.
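Here is a minimal sketch of PIE-style backlog scoring as Garrett describes it; the 1-10 scale and the example tests are our assumptions, not his actual sheet.

```python
# A minimal sketch of scoring a test backlog on Priority, Impact, Ease.
from dataclasses import dataclass

@dataclass
class SplitTest:
    name: str
    priority: int  # how central to the team's current focus (1-10)
    impact: int    # expected effect on the key indicator (1-10)
    ease: int      # how fast/cheap it is to run to a result (1-10)

    @property
    def pie_score(self) -> float:
        return (self.priority + self.impact + self.ease) / 3

backlog = [
    SplitTest("New landing-page call to action", priority=8, impact=7, ease=9),
    SplitTest("Acquisitions script change", priority=9, impact=8, ease=4),
    SplitTest("Ad image swap", priority=4, impact=3, ease=10),
]

# Highest score goes to the top of the next sprint.
for test in sorted(backlog, key=lambda t: t.pie_score, reverse=True):
    print(f"{test.pie_score:4.1f}  {test.name}")
```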
Brandon: That actually moves us into our final point, which is documentation. I'd love to hear about your process for documenting tests and why it even matters. I can tell every visionary entrepreneur listening to this right now is falling asleep, because documenting tests sounds like the most boring thing in the world, but share your perspective.
Garrett: So I keep all of our tests in a sheet. It's our whole backlog: everything we want to test is scored, weighted, and given an order for when it gets tested. Then, for each test, we document what was tested, why it was tested, the data we measured, the time frame the test ran, and what we learned from it. There are a couple of key reasons for this. The first is that your team is probably going to change at some point. People leave, and if the learning isn't documented, people won't know why we do things, they won't keep doing them, and you'll undo what you learned. The next reason is so you don't run the same tests again. If something has already been tested with a good sample size, you probably don't need to run it again for a while. If the sample was small and rushed, results might vary by market or by time of year, but don't do the same thing twice. If it isn't documented, odds are your team will keep saying, "Why don't we test a different way of handling price objections?" and if you redo that test every time, you're gaining nothing. If it's documented, you can say, "We already ran this test; here's the result; let's move on and build off that learning." It's so helpful to have those things written down, to have a history of how far you've come, so you keep moving forward.
Brandon: I'm 100% with you. And I know you didn't take it this direction, since it's a bit off the topic of split testing, but I think this is a really valuable nugget to leave people with: what you're describing is also your method of project management within your team, which is built on the concept of sprints. For anybody listening, could you describe that system: why you'd even run it, and how it works? It's essentially project management for testing, whether in business operations or marketing, within a company, and since we implemented it, our team's productivity has drastically improved. So, beginning to end, what does it look like?
Garrett: Our team works in two-week sprints: a fixed window where we work on just a set number of projects, based on the bandwidth we've decided each team member can handle. At the start of each sprint, we choose the projects we believe are highest impact and will make the most progress toward our established goals for that window. During those two weeks, we spend all of our time on just those things, and at the end, we measure the impact of those tests and gather feedback to inform the next sprint. In marketing I've heard this called a growth loop: you run a test or a project, get feedback, and iterate again and again. It's all built on that sprint framework of doing as much as you can in a short time frame to get rapid feedback. It keeps the team efficient; they know exactly what they're doing and spend their time on what's been set as the top priority, not on whatever feels urgent or whatever a teammate says is urgent. I tell them, "This is the focus right now; this is all you're doing." At the end, we measure, assess, and build from there. It's all driven by tests, by what's most important, and by business goals, so the team knows what they're doing and how it impacts business outcomes.
Brandon: Fantastic. And if this sounds familiar to anybody, it's based at least loosely on agile development, which is used frequently in the software world. In project management there are two key schools of thought. There's waterfall: this gets done, then that, then that; those are the tasks. It works for highly structured departments, and we do have some of those. But we also have departments where something like agile development lets you adapt and incorporate feedback much more quickly, which I think is essential for marketing, because you can't lay out at the start of a quarter every test you'll run that quarter; the results of the early tests change the viability of the later ones. Let me tell you what this means from a business owner's standpoint, and what happened before we worked this way. It's all about priorities. What used to happen is that the most recent thing I told someone needed to get done was the thing they focused on. Then I'd follow up later and ask, "What happened with that thing we talked about three months ago?" and they'd say, "I stopped working on it because I started on the new thing you put on my plate." I realized I was producing more work than we had hands to complete. The really cool thing about sprints isn't what they put on your plate; it's what they take off your plate. At the beginning of a sprint, you thoughtfully decide what you're going to focus on for the next two weeks, which means everything that isn't that thing is off your plate, and everybody is on the same page about what we're prioritizing and what we're not. To me, that's project management by design, not by default. We don't just work on whatever is in front of us, whatever is loudest, whatever feels most urgent; we work on what we've deliberately chosen as most important. And more often than not, those things are split tests. That's literally what we do as a company: in every department we're split testing all the time, and we're split testing for our clients. At our core, we live and breathe this kind of iterative improvement, because we want to be better every day, and this is the best way to do it. Thank you for sharing some of your brilliance with us today, Garrett. You've made massive changes to the way we do some of these things, and you're the brain behind so much of this. To everybody listening: I appreciate your time, and we'll see you next week.