Why A/B Testing Has Permanently Earned a Spot in This Marketer's Tool Belt

ABOUT THIS EPISODE

In this episode, Olivia Hurley talks to Bob Glotfelty, VP of Growth @ Taulia  

Hi everyone, welcome back to B2B Growth. My name is Olivia Hurley, and today I'm joined by Bob Glotfelty, the VP of Growth at Taulia. Hi Bob, how are you doing?

I'm doing very well. It's wonderful to be here with you today.

Oh good, I'm so glad. Well, I am really excited to talk to you, because I'm going to get a huge education today. You have a lot of experience with A/B testing, and I have very little experience with A/B testing, or with testing in general. So, starting off, can you bring us all in: what has caused you to believe so strongly in the importance of A/B testing, and of testing as well?

Yeah, so I don't think 10 years ago I would have thought that I'd have such strong opinions on A/B testing. What's really made me such a believer is actually how many times I've been wrong, and it's not just me, it's very smart people that I've worked with, people with an incredible amount of experience. Time and time again, you just find yourself being wrong, and that's a humbling experience. And there's a tool out there called A/B testing, or maybe it's a methodology rather than a tool, but it is a mechanism to overcome that challenge, that problem. It allows you to remove the bias and the incorrect judgments that you make and look at something very simplistically. So that's why I've become such a big believer.

Yeah, it's an amazing tool. And for anybody new to this concept, like myself, can you, in your own words and from your own experience, define A/B testing for us a little bit?

Yeah, so A/B testing, very specifically, is just like the name describes: you have an A and you have a B, you run those two things in parallel, and you see which one is better. One of the easiest ways I like to explain it is with website testing. You can send 50% of your traffic to your website as you normally would, and then you can send 50% of your traffic to a variation of your website that you've modified, and then compare those two groups: which one produces the action that you desire? You can A/B test very small things, you can change the colour of a button or a word on a page, or you could change huge amounts of things, right? You could have people go to an entirely different website that has nothing the same. Those would both be A/B tests. You can do A, B, C, D, E, F, you can do many variations if you like, but it's the concept of having a test case and a control case running in parallel that allows you to measure success.

So I'm curious, is A/B testing considered an essential practice by marketers? And is it used often?

Yeah, I think so. I mean, it doesn't work in every possible case, right? If you're going to a trade show, there's only one trade show that you go to; you can't have two separate booths and have the traffic magically split between the two. So it's not an applicable testing mechanism for all types of marketing activity. However, for things like digital, it's extremely effective. If you're going to send 1,000 emails out, you can send 500 that look one way and 500 that look a different way. It's very easy to customize something like that. So it is definitely an essential practice, it is used very often, it is not available for every possible marketing channel or activity, but it is a very effective...

...way to measure the things that can be measured through it at all.

Okay, that makes total and complete sense to me. And I think it also allows you to not have to put all of your eggs in one basket. So as a marketer, how do you decide where in the funnel, and with which assets, to run the A/B tests?

Yeah, I mean, you're generally looking for things that are going to have a return. If you're sending a one-off email, there's no reason to measure that, or if it's so late in the funnel that your sample size is small, you can't do an effective A/B test. So for me, it's more about whether it's going to create a result. If you have something that's very ineffective today, A/B testing small changes isn't really worth your time. But if you have something that's a high contributor, or a very useful thing that has a lot of volume, those things justify A/B tests much more. You'll typically see volume and success measures as the key indicators for where you should use it within your funnel, or within other mechanisms for measuring where to use it.

Yeah. Do you have an example or a story you can share with us of deciding where in the funnel to run an A/B test?

Sure, yeah. One example would be that we've run a lot of A/B tests on our platform itself: when our users are in our platform, how do they interact with different things? It's very easy for us to run A/B tests on our homepage because it gets a ton of traffic, everybody lands on that first page. There are other pages within our platform where we really care about the activity, but the volume is so low that it doesn't make a difference. So if you think about that from a funnel-stage perspective, that's very early on, because that's where the bulk of the volume is landing, and it's very easy to test on those types of pages. But late stage, right before somebody is about to take an action that counts as a conversion, those can be much harder to test because you don't necessarily have the same amount of volume there.

Okay, so volume is extremely important, traffic is extremely important. How long should you run an A/B test to know that you've measured it for all it's worth?

Yeah, there are really great tools out there that different companies have built that allow you to essentially run the calculation of when you've reached statistical significance. What do I mean by statistical significance? Let's say you have 100 people going through a flow: 50 went to one test, 50 went to the other. If 24 took the action you wanted out of the 50 on one test and 25 took the action on the other, that hasn't reached statistical significance. There are tools online, little calculators you can use, where you put in the amount of volume and the expected difference between the two cases, and they will tell you, oh, you need to run this test for two months, because that's when you're going to have enough examples to be statistically confident that one is better than the other. Other times you'll run it and have so much volume, and it'll be such an improvement, that you may only have to run it for a day or a week. Again, I'm using time as a measure here; it can also just be volume. If you're sending a set number of emails, that may not be time-bound. But things like that can give you a sense of when you've reached statistical significance.

Okay. Oh, that's so interesting.
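For readers who want to see the arithmetic behind that statistical significance check, here is a minimal sketch using only the Python standard library. The 50-and-50 split with 24 versus 25 conversions comes from Bob's example; the function names, the 95% confidence level, and the 80% power target are illustrative assumptions, not any specific calculator he mentions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: are the two conversion rates really different?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # small p-value (< 0.05) = significant

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Rough number of visitors needed per variant to detect a given relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Bob's example: 50 visitors per variant, 24 vs 25 conversions -> nowhere near significant.
print(round(two_proportion_p_value(24, 50, 25, 50), 2))   # ~0.84, far above 0.05

# How many visitors per variant to detect a 10% relative lift on a 5% baseline rate?
print(sample_size_per_variant(0.05, 0.10))                # roughly 31,000
```

Online A/B-test calculators wrap essentially this arithmetic; the sketch is only meant to show why 24 versus 25 conversions out of 50 proves nothing, while the same kind of gap over tens of thousands of visitors could.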
So, as with many things, there's no one-size-fits-all with this type of testing, and the information at the end is going to be completely different each time you test. So what KPIs do you measure against when A/B testing?

Yeah, I mean, you're...

...primarily looking for lift in conversion. So if you send this email and you're looking for somebody to click a button in it, is the incremental lift 5%, 15%? That's the measure you're looking for. And then you do want to look at statistical significance, because if you're sending millions of emails and you get a tenth of a percentage point of lift, that may be hugely valuable to your company. So the lift may be small, but if you can prove it's valid over a large sample, that can be just as compelling.

And in the end, what information are you looking for from the A/B test? I'm curious to hear some stories and examples of what you've discovered after A/B testing, especially to your point earlier about finding out you were wrong.

Yeah, I feel that any test is pretty much a success, and that's going to sound weird, because either it worked or it didn't, but things generally land in two buckets: either it was successful, you got incremental lift, and you're going to make a change to your approach, or it wasn't, and you've learned something important in the process. I'll give some examples. I always feel like a simpler flow is better for a user. If I can just fill out three fields, I'd rather do that than fill out nine fields, or fill out multiple pages in a flow. I genuinely believe that shorter is better. A/B testing will prove, time and again, that that's not always the case, and that teaches you something. You may actually do things that are counterintuitive, and you'll learn about your users, you'll learn about how they move through the process. So when tests don't work, you learn something that helps you with the next test, or helps you continue to iterate and improve your process.

Is there anything at Taulia specifically that you started doing where you thought, surely this will work, or this won't work, and it ended up being the opposite? Anything specific?

There have been a lot. I think my favorite example is that I used to think HTML emails were great, right? They look beautiful, they have colours and design. What we've done over time is say, okay, we're going to A/B test an email again and again and again until we get it to perform extremely well. And what we've seen is that oftentimes the most effective thing is extremely short emails, all plain text, and ideally looking like they're written by a person, which is pretty crazy, right? Because in marketing we want to add design, we want to make things look aesthetically pleasing, but oftentimes that's the antithesis of what the user is going to interact with most effectively. So yeah, really basic things actually tend to perform better than the things that, as a marketer, you may think are really appealing, often enough.

Hey everybody, Logan with Sweet Fish here. If you're a regular listener of B2B Growth, you know that I'm one of the co-hosts of the show, but you may not know that I also head up the sales team here at Sweet Fish. So for those of you in sales or sales ops, I wanted to take a second to share something that's made us insanely more efficient lately. Our team has been using LeadIQ for the past few months, and what used to take us four hours gathering contact data now takes us only one. We're 75% more efficient, we're able to move faster with outbound prospecting, and organizing our campaigns is so much easier than before. I'd highly suggest you check out LeadIQ as well. You can check them out at leadiq.com. That's L-E-A-D...

I-Q dot com. All right, let's get back to the show.

That's so fascinating too, because you run the A/B test, you find this out, and now it impacts your tactics and how you communicate with this particular persona, in this particular example. Did it save time and money, moving to something very basic and simple? Did it free up some bandwidth for your team?

Yeah, I mean, for us, yes. I think it's eye-opening for the future, right? Once you have an email template built, it's pretty easy to just hit the send button again. But the next time you're going to run a different campaign, or do things differently, you realize, look, we don't need to go down this whole path of creating all this complexity. What's the message we want to convey? How do we say it very concisely and simply, and how do we just push that message out there? So it's definitely allowed us to leapfrog right to the better answer with additional campaigns in the future, and that saves us the time of going through that whole path and having all these emails that are potentially less successful.

That's so cool. So is there ever a time when A/B testing, or maybe testing in general, is a bad idea?

Yeah, I would say there are a few examples of when it is bad. One is if you can't get a valid test: don't do an A/B test, right? If you have traffic of, I don't know, 10 people, and you split five to one variant and five to the other, you will never hit statistical significance, so the test will never be valid; there's never a reason to do that. The other example is if you really only have one shot at something. If you're taking a half-court shot at a basketball game and you win a prize if you make it, you can't possibly A/B test that; you only have one shot. So don't bother trying to use a tool where it doesn't fit.

These tools, I'm curious, as somebody who doesn't know a lot about A/B testing: are these tools things that you rely on every time, or is A/B testing a skill where your predictions get better and better? Or have you found that you're still utterly surprised sometimes?

Yeah, so the tools themselves, a lot of them are built into marketing automation tools. We use Marketo, for example, and within that tool you can run a test on an email and say, this much traffic here, this much traffic here, and make an easy comparison. There are tools like Optimizely that allow you to manipulate your website. So there are a lot of actual tools to do that. But the things you're testing will be all sorts of different things, and I think you can get better over time because you've seen what's worked and what hasn't in the past. But if I were to go to an entirely new company and try to do it again, I think you'd need to A/B test a lot of these things again, because the dynamics are different, the users are different; it needs to be fit to the scenario. We also work with a consulting firm called Cro Metrics, and this is all they do: A/B testing and optimization. They're very heavy on the Optimizely side, and they work across clients, so they bring their feedback from other clients. But what works for one client doesn't work for another. So there is some improvement, but you need to really think about it within your use case, and everybody's use case is unique.
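For the "send 500 one way, 500 the other" email test described above, here is a rough, hand-rolled sketch of the mechanics, independent of any particular platform (this is not Marketo's or Optimizely's API). The recipient list, variant labels, and click counts are hypothetical, purely for illustration.

```python
import random

def split_recipients(recipients, seed=42):
    """Randomly assign each recipient to variant 'A' or 'B', roughly 50/50."""
    rng = random.Random(seed)        # fixed seed so the assignment is reproducible
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"A": shuffled[:half], "B": shuffled[half:]}

def relative_lift(clicks_a, sends_a, clicks_b, sends_b):
    """Relative lift of variant B's click rate over variant A's."""
    rate_a, rate_b = clicks_a / sends_a, clicks_b / sends_b
    return (rate_b - rate_a) / rate_a

# Hypothetical 1,000-address list: 500 get the designed HTML email (A),
# 500 get the short plain-text email (B).
groups = split_recipients(f"user{i}@example.com" for i in range(1000))
print(len(groups["A"]), len(groups["B"]))        # 500 500

# Hypothetical results after the send: 40 clicks on A, 55 on B.
print(f"{relative_lift(40, 500, 55, 500):.1%}")  # 37.5% relative lift for plain text
```

Those counts would still need to clear a significance check like the one sketched earlier before treating the plain-text variant as the winner.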
So for somebody wanting to do what you're suggesting, to employ A/B testing as a regular rhythm in their marketing department, what is step one?

Yeah, that's a good question. I would say test it, right? I know we've been talking pretty narrowly...

...about A/B testing, and I would put that under a broader umbrella of testing. But give it a try: go do a test, see if it works for you, see if you learn something; there's nothing really stopping you. There are a lot of ways you can brute-force it. If your marketing automation system doesn't allow you to do it, do a quick thing: send two different emails and measure them. There are plenty of ways to do that. So give it a try, see what you learn, and if it works for you, I think you'll be a believer like me and you'll keep on doing it.

I'm curious, in terms of things that you can't necessarily measure: do you think A/B testing has increased your ability to be creative, because you know that you can try things and you don't have to, like I said earlier, put all your eggs in one basket with this one-shot idea? Has it increased creativity?

Yeah, I think it creates openness. I think it allows you to listen to other people, and it allows you to give them fair feedback. A very common scenario, which I haven't experienced as much myself but have heard from a lot of others who A/B test, is that somebody within the organization, or a senior executive, will say, we should do X. We need to change our website; it shouldn't say "contact us," it should say "give us a call," I don't know. And by having the ability to A/B test, you don't need to debate whether that should be the case or not. You simply run an A/B test, and then you come back and say, you were right, look, the statistics were better, we should change the wording accordingly. Or you can say, we tried it, here's what we found, the results aren't there, and there's no debate. You're able to communicate to somebody why something isn't done that way. So I think it allows you to open up to more ideas from more people, and there isn't much cost to doing it, because you learn in the process and you can always move things back.

Do you think there are any naysayers to A/B testing? Is there any reason why somebody would be against it?

Yeah, I mean, it takes extra time, right? If you need to send an email and you just want to send it, it only takes one iteration. If you're going to A/B test, you need to do a little more work. So there is an opportunity cost in terms of your time. If you're at a very small company and you're the only marketing person, is the opportunity cost of testing what you're doing more valuable than doing something else? Maybe, maybe not. You definitely find that larger organizations tend to spend more time on this kind of thing, because they'll see more value and they've got more resources. But yeah, again, it always depends on the context of the organization.

For somebody who's started testing, who's trying it out and taking step one: is there any warning sign, or anything you experienced as you became more and more proficient in A/B testing, that was a flag, or a "don't do what I did" kind of thing?

Yeah. I don't know if I would frame it the way you asked the question, but I would say this is something everybody should be aware of when they're A/B testing, and others I've spoken to have said the exact same thing: you go through peaks and valleys.
You're going to have times when you won't have anything to test because you don't have any ideas, and you'll need to go through an iteration and ideation process to come up with tests. Then you'll have other times when you have so many tests that you can't possibly run them all, and you have to wait, or sequence and prioritize them. When there's a lot, everybody's on board with A/B...

...testing, everybody thinks it makes sense. But when you don't have many, people think, well, we should just go do something else, we should move on, and that I would caution against, because you're going to come back to it. So be consistent. When you're in the valley, use that as a reason to come up with more ideas, seek more feedback from other people, get creative. You're going to have those dips, and depending on how you're resourcing the program and how it's structured internally, that may create pressure, because people are asking, well, what are you testing right now? If you say we're not testing anything, that can be hard, but you will come out of that valley, and you'll continuously go in and out of those cycles. You should try not to ride the wave; you should try to stay steady right through the center and be consistent with your testing practice.

Oh my gosh, what sage advice. I'm curious, do you have an example of that from your own career?

Yeah, I would say, again, we engage third parties and we use different tools for this. When you don't have a lot of tests, people say, well, why are we doing this? We should just get rid of that tool, we should no longer work with the agency. I remember that happening about a year into our cycle, and I think we're five or six years now into having this program set up. So you see it regularly, because you hit those valleys, but we wouldn't have had the success we've had today had we not been convinced that there's value, and stuck it out through those valleys.

I love that. I feel like I've just gotten a massive education, and I'm going to go A/B test. But if there was anything from this episode that you'd want people to take away, aside from go try things and go test them, what would it be?

The way I look at A/B testing is that it's one element of overall testing, and I'm a huge believer in testing, right? I will rarely sign a contract for more than a year, for example, because I want to test it, I want to see how it works, and then I want to make a decision. I always believe in that iteration cycle. Testing, to me, is a sub-component of decision making, or judgment, and so it's just a tool to help you make good judgments and good decisions. So the takeaway for everyone here is, regardless of who you are, what level you're at, or what you're doing: decision making is really the important element of any business, role, or function, and A/B testing, and testing in general, is an amazing mechanism to help that decision-making process. It can't always be used, but it's a really important skill to hone and leverage over time.

I love that: A/B testing can support the muscle development of decision making as a whole. I'm curious, just to pick your brain for a second about decision making, was there any resource or book or experience that really helped you navigate the world of big business decisions?

For sure. I think we've been talking very deep on the data side, and as a person I've always really enjoyed being data-driven, seeing data and having it drive my decision making. In fact, I think I over-indexed on that early in my career.
The book Blink by Malcolm Gladwell is an excellent book that is sort of the counter to the data-driven view: your mind and your brain, the way you think, are actually able to see a lot of answers way earlier than the data will tell you. It can create bias, and the book talks...

...about some of those challenges. But it was a very pivotal thing for me to read that and say, maybe I'm being so data-driven that it's not allowing me to be as nimble, agile, and quick in my decision making, and there are times when you just know and you shouldn't wait. So it's all a balance. I would definitely encourage people who are data-driven to read that book, because it will tell you that maybe it's not always the right decision to approach things that way.

Oh my goodness! Well, like I said earlier, I've just been given a crash course on A/B testing, and a book recommendation to top it all off. This has been so wonderful, Bob. Thanks so much for joining me on B2B Growth.

Absolutely, it was wonderful to be here, and I'm looking forward to hopefully doing this again sometime in the future.

Before I let you go, because I can't let you go without this: how can people learn more about you and Taulia?

Yeah, well, if anybody wants to contact me, LinkedIn is definitely best, it gets past the spam filters, and I'm pretty easy to find; there aren't many Bob Glotfeltys out there. My company is Taulia, T-A-U-L-I-A, and you can go to taulia.com to learn more about us.

Well, thank you so much for joining me today.

It was great being here. Thank you so much.

Are you on LinkedIn? That's a silly question, of course you're on LinkedIn. Here at Sweet Fish we've gone all in on the platform. Multiple people from our team are creating content there. Sometimes it's a funny GIF or meme, other times it's a micro-video or slide deck, and sometimes it's just a regular old status update that shares their unique point of view on B2B marketing, leadership, or their job functions. We're posting this content through their personal profiles, not our company page, and it would warm my heart and soul if you connected with each of our evangelists. We'll be adding more down the road, but for now you should connect with Bill Reed, our CEO; Kelsey Montgomery, our creative director; Dan Sanchez, our director of audience growth; Logan Lyles, our director of partnerships; and me, James Carbary. We're having a whole lot of fun on LinkedIn pretty much every single day, and we'd love for you to be a part of it.
