Written by Shane Neubauer
Traction is a big topic for us at Beyond right now! I'd like to share how we approach it.
Traction is the quantitative evidence of customer demand. I regard traction as a sub-topic of marketing, focused primarily on getting early momentum and gathering evidence that your product works.
We have a very systematic approach to most things at Beyond, and traction is no exception. Let me show you.
To guide our work, and ensure that we're optimising for the right thing, first we set a traction focus. The focus needs to be aligned with what we're trying to achieve for our business at the time.
Right now our focus is on getting people to sign up for our waiting list. Once we open up registrations on the product, this will change to getting people to create an account. This is the first signal to help us understand whether people are interested in our product. Once people start to use it, however, the signal will need to be a lot deeper. For now, waiting-list sign-ups are the metric.
At any given point, the focus should be aligned with what we think proves that we are doing the right thing. Later it might be another metric, such as revenue, number of items saved, or friends added.
Whatever the metric, it needs to prove the validity of your product as well as it can.
The focus is important, because as you begin brainstorming ideas and talking to people, you'll naturally come up with new things you could do. That's a good thing! But it's important not to get distracted, because spreading your time and focus too thin means you won't get the most out of either. It's always better to nail one thing than to do five things poorly.
Before we get into strategy, ideas, or actions, the focus must be clear.
Everything we do is an experiment. We acknowledge up front that we don't know if it will work or not (even if sometimes we think we do), and design an experiment to test. Everything is a hypothesis until we have data to back it up.
Coming up with experiment ideas involves a semi-structured brainstorm session. We have a list of 19 main channels, which includes things like SEO, content marketing, offline events, and more.
We set a timer and spend time coming up with ideas for every single channel. At this stage, we try not to judge anything — no idea is bad, impossible, or silly.
A couple of ideas on our list which didn't make the cut (yet) include writing a book, and hosting a "how to make money during a pandemic" online workshop. These could work! But they are not our top ideas for now.
After brainstorming, we generally have quite a large list of ideas. Our first session left us with 66 ideas to work with. That's way too many!
Our first mechanism to narrow this down was to look at each idea through three lenses:
Naturally we want to take the highly effective and cheap options first. Based on how we want to enter the market, it's also useful to think about the type of users.
We are working primarily with two personas for Beyond (content curators and content consumers), so we also took time to consider which persona would be most valuable to reach first, and to make sure the experiments support our strategy.
From this prioritised list, we took the top ten, and moved on to Traction Mapping.
Ideas and experiments often provide value to us in two or more ways. They provide value through their direct outcome (social media posts bring clicks), but very often they provide value in other ways too.
For example, writing blog posts means we start to rank in search engine results, they give us something to share on social media, and they also help us build credibility and be discovered by other blogs or news sites.
In this way, some experiments actually support other experiments. Even if one doesn't bring great results, if it supports another experiment, then we might keep it anyway.
After shortlisting our top ideas, we perform an exercise we call Traction Mapping, where we visually map each experiment, and the influence it has on others.
This helps us figure out a good starting point, or where we should be investing a lot (or not much). We can see from the links between ideas how impactful one of them might be. It intuitively starts to reveal some good areas to start.
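One way to think about a traction map is as a small directed graph, where an edge from one experiment to another means the first supports the second. A minimal sketch, with experiment names and links invented for illustration (this is not Beyond's actual map):

```typescript
// Hypothetical traction map: an edge A -> B means
// "experiment A supports experiment B".
const supports: Record<string, string[]> = {
  "blog posts": ["SEO", "social media", "press outreach"],
  "social media": ["waiting list sign-ups"],
  "SEO": ["waiting list sign-ups"],
  "press outreach": [],
};

// Experiments that support many others are good candidates to start with,
// even if their direct results are modest.
function supportScore(name: string): number {
  return (supports[name] ?? []).length;
}

supportScore("blog posts"); // supports 3 other experiments
```

Counting outgoing links like this is a crude proxy for the visual mapping, but it captures why an experiment with modest direct results can still be a strong starting point.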
Experiments are worthless if you don't have a good way to measure results. By nature, an experiment is uncertain, so we need to use data to understand if it was worth our investment or not.
There are three important metrics that we use:
To measure all of these metrics, we use Google Analytics, configured with some custom funnels.
Most of it is achieved using out-of-the-box features, but there's one thing we built to support our measurement: a special URL that fires a custom event, depending on a parameter.
This is a small piece of code that we insert into each public page on our site, and if there's an experiment ID (just a number we choose) then it tracks that as an event.
This way, we can track performance of traffic through each experiment, even if they are landing on the same landing page.
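A minimal sketch of this mechanism follows. The query parameter name (`exp`), the event name, and the use of `gtag` are assumptions for illustration, not Beyond's actual implementation:

```typescript
// Extract the experiment ID (a plain number we choose per experiment)
// from the page's query string, e.g. https://example.com/landing?exp=12
function getExperimentId(queryString: string): string | null {
  const id = new URLSearchParams(queryString).get("exp");
  // Accept only plain numeric IDs
  return id !== null && /^\d+$/.test(id) ? id : null;
}

// Google Analytics' global gtag function, if the page has loaded it.
declare function gtag(...args: unknown[]): void;

// Run on every public page: fire a custom event when an experiment ID
// is present in the URL, so each experiment's traffic can be segmented.
function trackExperimentVisit(queryString: string): void {
  const id = getExperimentId(queryString);
  if (id !== null && typeof gtag === "function") {
    gtag("event", "experiment_visit", { experiment_id: id });
  }
}
```

A QR code or social post would then link to something like `https://yoursite.com/?exp=12`, and the traffic for experiment 12 can be segmented in Analytics even though it lands on the same page as everything else.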
In one case, our website was linked during a speaking event using a QR code. We made sure to include our experiment ID in the URL, so that we could track "clicks" through this experiment, even with an "offline" event.
When something starts working well, we double down on it. If something is not performing well, we have an honest conversation, and probably kill it. (There is a chance we'll keep it if the investment is low and it supports another experiment.)
It's important to always use rates to judge whether something is working. It's not necessarily about how many absolute conversions we got, because that number is also swayed by the reach or size of the platform. What we're mainly looking for is how effective it is, measured by click-through rate and conversion rate.
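To illustrate with invented numbers, a small channel can beat a large one on rates even though it produced fewer absolute conversions:

```typescript
// Illustration only: the numbers below are made up, not Beyond's data.
interface Result {
  impressions: number;
  clicks: number;
  conversions: number;
}

const clickThroughRate = (r: Result): number => r.clicks / r.impressions;
const conversionRate = (r: Result): number => r.conversions / r.clicks;

const bigPlatform: Result = { impressions: 50000, clicks: 500, conversions: 25 };
const smallMeetup: Result = { impressions: 200, clicks: 40, conversions: 8 };

// bigPlatform: CTR 1%, conversion rate 5%
// smallMeetup: CTR 20%, conversion rate 20%
// The meetup brought fewer absolute conversions (8 vs. 25), but it is far
// more effective, so it's the better candidate for further investment.
```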
If an experiment is highly effective, then we want to double down. By investing more into it, we expect that the number of conversions will increase proportionally.
To know how much more to invest, start increasing your investment into that experiment, and look for the moment that the conversion rate begins to drop. Then pull back a bit. That's your sweet spot.
For example, we may be getting great results from posting once per week on LinkedIn, and twice per week might perform twice as well. But as we increase to five or ten times per week, it's very likely we'll see engagement begin to drop as our content starts to feel spammy or annoying. That's when we've gone too far, and we should pull back.
95 LinkedIn posts per week is definitely not going to bring 95x results.
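As a rough sketch with invented numbers, the sweet-spot rule above might look like this: keep increasing investment until the conversion rate begins to drop, then pull back to the previous level.

```typescript
// Hypothetical data points, ordered by increasing investment.
interface Trial {
  postsPerWeek: number;   // investment level
  conversionRate: number; // conversions / clicks at that level
}

// Returns the last investment level before the conversion rate declined.
function findSweetSpot(trials: Trial[]): number {
  for (let i = 1; i < trials.length; i++) {
    if (trials[i].conversionRate < trials[i - 1].conversionRate) {
      return trials[i - 1].postsPerWeek; // rate dropped: pull back a bit
    }
  }
  // The rate never dropped, so the highest tested level is still fine.
  return trials[trials.length - 1].postsPerWeek;
}

const trials: Trial[] = [
  { postsPerWeek: 1, conversionRate: 0.040 },
  { postsPerWeek: 2, conversionRate: 0.042 },
  { postsPerWeek: 5, conversionRate: 0.030 }, // engagement starts to drop
];
findSweetSpot(trials); // 2 — the sweet spot
```

In practice you would feed this with real funnel data per investment level, but the shape of the decision is the same: watch the rate, not the absolute count.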
Remember, experiments are all about finding out what works and what doesn't. Once we know what works, then we need to decide how to maximise performance on it.
It's still early days for us at Beyond, and there's a long journey ahead. We're excited to continue sharing our approach to creating and growing an amazing product, so make sure you join our mailing list or follow us on Twitter to get the updates first.
On Beyond you can find the best content on the Internet, curated by real people. Not algorithms.