Jonas Weigert on A/B Testing Beyond the Landing Page (Q&A)

As conversion optimization continues to mature and gain adoption across organizations, it’s always interesting to see how companies approach growth and optimization. That’s especially true in the tech startup space, where companies often live and die by data and tend to build their organizations around experimentation.

LawnStarter is one such company, so we sat down with their CTO, Jonas Weigert, to learn about how they experiment across their product and communication and how they deal with optimization as a company.

Introducing Jonas Weigert and LawnStarter

LawnStarter makes it easy to book lawn care. It’s a consumer marketplace where you order a lawn care subscription with just a few clicks. They have a ratings system and build distribution algorithms that match lawn care providers to customers.

In short, they approach data and experimentation on multiple fronts, which complicates both how they actually run experiments and how they balance experimentation with their typical product development process.

Jonas Weigert is the driving force behind the majority of these decisions. As LawnStarter’s CTO, he heads their technical efforts, which include experimentation across their product and communications.

Q: How did you get involved with LawnStarter?

Jonas Weigert:

“I started developing software when I was twelve and have built little tools and websites ever since.

The goal was always to remove some problem or repetitive task out of people’s day-to-day. I ended up going to Virginia Tech for Computer Science and joined LawnStarter to build their platform to alleviate the challenges the lawn care space faces when it comes to customer service and operational efficiency.”

A/B Testing Beyond the Landing Page

Since LawnStarter involves more than the traditional add-to-cart-then-purchase conversion metrics, they utilize their customer and web data across various channels. They’re a subscription business, so ongoing communications are important. They also experiment frequently with their own product’s functionality.

Q: What does experimentation look like at LawnStarter?

Jonas Weigert:

“Like most online businesses, we A/B test landing pages and the signup flow. However, since we are an operations-focused company, that’s just the start.

We also put a lot of effort into testing product communication (emails, SMS, and push notifications) to drive users into our product. Lastly, we test a lot of internal features, like our marketplace distribution algorithm, that live behind the scenes to optimize platform functionality and, at the end of the day, create a better customer experience.”

The industry seems to be moving further toward universal optimization, or as Optimizely’s tagline puts it, “experiment everywhere.” Instead of surface-level changes, organizations are asking how they can use all the data they’re collecting to experiment across their stack and communication channels, making data-driven decisions where gut decisions previously prevailed.

Image via Conductrics

Whether you use a tool like Optimizely or Conductrics to do that, or build your own platform uniquely tailored to your needs, it’s becoming increasingly important to have that level of flexibility and freedom. Add to that the fact that Intuit just released an open-source testing tool with these capabilities, and I think we’re going to see a lot more of this approach to optimization in the future.

Optimizing Communications and the Customer Experience

What we usually talk about when we talk about CRO is running experiments on a web interface – most of the time with ecommerce or lead generation, sometimes with SaaS optimization.

LawnStarter, being a marketplace, faces a different set of challenges. But they also push the boundaries of experimentation, moving past the web interface to test their product communications as well. They’ve written about this extensively on their engineering blog. Here’s a quick diagram they published that visualizes their engagement tracking for email communications:

Since they’re testing their web interface, app, product communications, and product features, what technology do they use to accomplish all of this?

Q: What do you guys use to run experiments in different parts of your product and communications?

Jonas Weigert:

“I believe that for a company our size, the simplest solution is the best one.

So for the landing page and signup flow, we use Optimizely in combination with some off-the-shelf reporting tools. For product communication testing, we built our own solution in-house, and it has allowed us not only to drive higher click-through rates on emails but also to see what actions users take during the sessions we initiated with a particular call-to-action.

Testing internal features is hard to standardize, so to keep it simple, we run multiple tests, tag users that have been exposed to one variation or the other, and then report over time on how users who were exposed to a certain test use our platform.”
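
To make that concrete: the usual pattern for this kind of exposure tagging is deterministic bucketing, where a hash of the user ID and experiment name assigns the variant, and the exposure is recorded once for later segmentation. Here’s a minimal TypeScript sketch of the general pattern (an illustration of the approach, not LawnStarter’s actual code; the in-memory Map stands in for a real database table):

```typescript
import { createHash } from "crypto";

type Variant = "control" | "treatment";

// Deterministic bucketing: hashing the experiment name plus the user ID
// means the same user always lands in the same variant, with no
// assignment table required.
function assignVariant(userId: string, experiment: string): Variant {
  const firstByte = createHash("sha256")
    .update(`${experiment}:${userId}`)
    .digest()[0];
  return firstByte < 128 ? "control" : "treatment";
}

// Exposure tagging: record the variant the first time a user sees the
// test, so reporting can later segment platform usage by experiment
// and variant over time.
const exposures = new Map<string, Variant>(); // stand-in for a real table

function recordExposure(userId: string, experiment: string): Variant {
  const key = `${experiment}:${userId}`;
  if (!exposures.has(key)) {
    exposures.set(key, assignVariant(userId, experiment));
  }
  return exposures.get(key)!;
}
```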

Q: Wow, so you built your own system to do all that – why did you choose to do it this way? Why not use an out-of-the-box platform?

Jonas Weigert:

“As I mentioned, for us the simplest solution is usually the best (as long as it can get the insights we need).

We want to be able to iterate quickly and that sometimes means that new testing capability needs to be added.

Using off-the-shelf solutions allows teams to iterate quickly by avoiding engineering cycles. The next step is to use those off-the-shelf solutions to trigger tests that engineering implements. This becomes necessary because our user-facing product is a single-page application, and existing tools (like Optimizely) do not play well with that technology for purely technical reasons.

That being said, sometimes existing solutions cannot integrate as closely into your platform as needed to get all the insights you want out of a test. When that happens, we build our own platform to solve the testing problem once and for all.”

So there it is. When testing more typical interface and user experience elements, they keep things simple with Optimizely and off-the-shelf reporting. When testing product communications or internal features, things that require a more customized approach, they’ve built their own systems to handle it.

The Challenges with Single Page Apps

Single-page apps (SPAs) are becoming more popular for a variety of reasons, allowing for more dynamic apps and a more seamless user experience. But because the client loads just once, they present technical challenges for A/B testing. I asked Jonas how they tackle these challenges.

Q: Funny that you mention single-page apps. I understand they’re becoming more popular. Could you explain why they are so hard to test? What advice do you have for people dealing with them?

Jonas Weigert:

“The key problem with single page apps (SPAs) lies within the name.

The page is loaded once, and the content on it is modified (rather than a new page being loaded) as you click through the functionality.

The tools out there that test page content (like Optimizely) need a page load event every time they want to modify the content for a test. Since this event does not fire reliably in an SPA and the application makes changes to the page continuously, it is impossible for those tools to hook into your SPA reliably without breaking a bunch of functionality.”
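
One common workaround is to hook into the SPA’s own navigation and mutation events and re-apply variant changes whenever the view updates. Here’s a rough TypeScript sketch of that pattern (the applyVariant callback is a hypothetical placeholder for your DOM changes; it must be idempotent, or the observer will react to its own mutations):

```typescript
// Re-apply experiment changes whenever the SPA swaps out page content.
// applyVariant is a hypothetical callback holding your DOM modifications;
// it must check before modifying, since the observer below also fires
// on mutations the callback itself makes.
function reapplyOnViewChange(applyVariant: () => void): void {
  // Cover programmatic navigation (history.pushState / replaceState).
  for (const method of ["pushState", "replaceState"] as const) {
    const original = history[method].bind(history);
    history[method] = (...args: Parameters<History["pushState"]>) => {
      original(...args);
      applyVariant();
    };
  }

  // Cover back/forward navigation.
  window.addEventListener("popstate", () => applyVariant());

  // Cover in-place content swaps that never touch the URL.
  new MutationObserver(() => applyVariant()).observe(document.body, {
    childList: true,
    subtree: true,
  });

  applyVariant(); // initial render
}
```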

Algorithm Optimization

Talking to Jonas, I was most interested in how they approach experimenting with their product features and especially their distribution algorithm, which is how they match customers to lawn care providers.

It’s not as simple as a randomized bucket of web visitors via paid, organic, social, etc. These are live experiments on your current customers, and therefore there is a whole new set of challenges, including randomization, bucketing, customer experience complexity, and a more challenging analysis.

Q: You mentioned testing your distribution algorithm, could you go into a little more depth on what that means? How do you determine if one distribution algorithm ‘won’?

Jonas Weigert:

“Measuring success of something like our distribution algorithm is definitely a challenge.

When we create a different version of this algorithm, it is because we are trying to solve a problem: customer satisfaction, provider success, high ratings, etc.

We consider the algorithm a winner only if the problem we were trying to solve was materially reduced or solved, and other critical business metrics, like churn, engagement, lifetime value, and many more, are unaffected or improve.”
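
That “win only if guardrails hold” decision rule is simple to encode once per-variant metrics exist. Here’s a simplified TypeScript sketch, with illustrative metric names and thresholds (a real analysis would also use confidence intervals rather than raw point comparisons):

```typescript
interface VariantMetrics {
  customerSatisfaction: number; // primary metric for this hypothetical test
  churnRate: number;            // guardrail: lower is better
  engagement: number;           // guardrail: higher is better
  lifetimeValue: number;        // guardrail: higher is better
}

// A variant wins only if the primary metric materially improves AND no
// guardrail metric regresses beyond a small tolerance.
function isWinner(
  control: VariantMetrics,
  treatment: VariantMetrics,
  minLift = 0.05,   // require a 5% relative improvement (illustrative)
  tolerance = 0.01, // allow 1% noise on guardrails (illustrative)
): boolean {
  const lift =
    (treatment.customerSatisfaction - control.customerSatisfaction) /
    control.customerSatisfaction;
  if (lift < minLift) return false;

  if (treatment.churnRate > control.churnRate * (1 + tolerance)) return false;
  if (treatment.engagement < control.engagement * (1 - tolerance)) return false;
  if (treatment.lifetimeValue < control.lifetimeValue * (1 - tolerance)) {
    return false;
  }
  return true;
}
```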

How to Prioritize Experimentation with Limited Resources

LawnStarter doesn’t have the resources that American Express has. But they still take experimentation very seriously (as many startups do). However, they’re constantly engaged in a balancing act: how many resources can they feasibly spend on A/B testing as opposed to feature development and other opportunity areas?

This balancing act is fascinating with startups, because they’re constantly required to be scrappy and creative.

Q: Being an early-stage company, you must be very resource-constrained. How do you justify investing so many engineering resources in A/B testing?

Jonas Weigert:

“A/B testing is without a doubt the most valuable thing we can do to drive our product forward.

Since we’re not building the faster, cheaper version of an existing solution, testing allows us to validate ideas quickly and reliably by understanding the end impact the ideas have on the customer experience. The challenge with A/B testing is not setting up or running the tests.

The challenge is reporting the results across customer segments and/or markets. Since we do all of our reporting on these tests in Tableau, our engineering team can focus on solving the problem and doesn’t have to spend a massive amount of time building reporting tools for every single test.

Also, we’re hiring a growth engineer right now so we can have one person dedicated to testing. That way we never experience regret because engineering resources are being pulled toward something else.”

Of course, resource challenges don’t go away when a company has more general resources, talent, and cash flow. In fact, I’ve more frequently seen startups with sparse resources devoting more time and attention to experimentation than the juggernauts of enterprise, who are often slow to act and treat product development much differently.

So the problems of iteration, customer research, and reporting don’t go away with more resources. The challenge of maintaining a culture of experimentation is always there.

Wrapping up the conversation with Jonas, I asked him to sum up his advice to fellow tech startups looking to get serious about optimization.

Q: What’s one tip you would give to a company in your stage?

Jonas Weigert:

“The data we collect on how people use our platform is the most valuable asset when it comes to making product decisions.

Collecting this data is only the first step. You have to make it actionable.

We decided early on that we want to be “data-driven.” To achieve that, engineering designs solutions that not only work but also allow us to learn from every iteration, by asking the question “What do we want to learn?” before the first line of code is written.

The biggest game changer for us was using Redshift (a database that allows you to cross-reference all your data from Salesforce, Zendesk, GA, Mixpanel, Optimizely, and internal databases) in combination with Tableau, allowing every team to gain insights without the need for engineering to build a report.”
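
The payoff of that setup is that experiment exposures and behavioral data live in one warehouse, so a single query can feed a Tableau dashboard for any team. Here’s a hypothetical sketch of what such a Redshift query might look like, wrapped for use from a Node service (all table and column names are invented for illustration):

```typescript
// Hypothetical Redshift query: join experiment exposures to a user
// summary table so any team can slice results without engineering help.
// Table and column names are invented; substitute your own schema.
const exposureImpactQuery = `
  SELECT e.experiment,
         e.variant,
         COUNT(DISTINCT e.user_id) AS exposed_users,
         AVG(s.sessions_per_week)  AS avg_weekly_sessions,
         AVG(s.lifetime_value)     AS avg_ltv
  FROM   experiment_exposures e
  JOIN   user_summary s ON s.user_id = e.user_id
  GROUP  BY e.experiment, e.variant
  ORDER  BY e.experiment, e.variant;
`;
```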

Conclusion

As Jonas mentioned, data is their most valuable asset. But it’s not just about collecting it; it’s about putting it into action.

Though they face challenges unique to their industry and business model, there are some more general takeaways here, too.

First, think critically about how you want to prioritize testing and optimization in your company. Do you want to bring experimentation beyond the landing page and really use it to fuel growth? As you scale, how will you empower teams to actually do this? There’s no one-size-fits-all answer here, but it helps to start the conversation.

Second, make things as simple as you feasibly can. Of course, make sure your technology and approach are effective, but if no one on your team can access your data or run tests, there is an inherent bottleneck. LawnStarter favors the simple solution, as it’s conducive to speed and democratizes testing and data.

That’s what a “data-driven” organization is at heart, right?
