As a PM, I have been doing hypothesis testing, also known as A/B testing, for many years. Since transitioning to growth, it has become one of the most important tools I use to learn about our users and the friction they experience in the product, and to turn those insights into product features. Rapid iteration helps avoid blind spots that PMs or the team may have about users, or about how users want to receive value from the product.
Hypothesis testing is a simple concept that has its roots in statistics 101. However, it is easy to get confused and misinterpret the terminology. With the rise of platforms such as Optimizely, Mixpanel, Google Analytics, Kissmetrics, etc., the real challenge is not so much in understanding how significance testing works as in understanding how to frame the hypothesis, define the success criteria, and interpret the results we get after running an A/B test.
Every now and then, someone on the team excitedly announces “We launched the new CTA and it is doing so much better than the old experience” or “The new algorithm has increased the conversion metric by 50% in just two days.” As a growth PM, I have to be extra cautious not to jump to premature conclusions and risk shipping a feature that serves neither the user nor the business.
Before getting into good practices, which we will discuss in a later article, let us talk about the basics of hypothesis testing. There are primarily four aspects:
Hypothesis Framing: ‘Do not rush to run but rush to frame’
A bias towards action, or towards running experiments quickly, often leads the team to a cliff from which there is no return; in other words, PMs and teams end up not knowing what to do with the results. Read my article on the commitment trap here. This is why it is important to flesh out all the details before investing in a project, rather than hoping we will be able to interpret the results later. It may not seem productive, but the more one invests in it, the better one becomes at it. As the quote often misattributed to Lincoln goes, “If I had eight hours to cut down a tree, I would spend the first six sharpening my axe.”
Hypothesis: Framing the right hypothesis is thus the core of A/B testing. Let us go back to stats 101, where we learned that hypothesis testing consists of a null and an alternate hypothesis:
H0 (The null hypothesis): μ − μ0 = 0
H1 (The alternate hypothesis): μ − μ0 ≠ 0
However, in product management, we use a slightly different version. The one that I use is:
‘Doing X (experiment detail) will help users achieve Y (the value users get out of it, informed by your user problem) and thus will have an a% impact on metric b in Z days.’
Example: ‘Moving the add-to-cart CTA to the front page will help users find a way to add items to their cart, and this will increase unique users adding items to the cart per week by 10% in 20 days.’
The advantage of such framing is that you know in advance what experiment you want to run, what user benefit it delivers, what metric you expect to move, by how much, and in what time. Putting a number to how much impact the experiment can deliver requires pre-analysis from analytics or the PM, but if that analysis is not available, you can use a best guess that is realistic and measurable.
Saying the experiment will have a positive impact, or a >0 impact, on a metric is not enough, because a small uplift like that may be due to random chance.
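To make that concrete, here is a minimal sketch of how an observed uplift can be checked against random chance with a standard two-proportion z-test. The conversion counts and sample sizes below are hypothetical, and in practice an analytics platform or a library such as statsmodels would do this for you:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    H0: p_b - p_a = 0  (the change made no difference)
    H1: p_b - p_a != 0
    Returns the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A "50% uplift" on a small sample: 10/200 vs 15/200 conversions.
# The p-value comes out well above 0.05, so the uplift could
# easily be noise despite the impressive-sounding percentage.
print(two_proportion_z_test(10, 200, 15, 200))
```

With only a few hundred users per variant, even a large relative uplift is indistinguishable from chance, which is exactly why the hypothesis should quantify both the expected impact and the time to measure it.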
The time in which you expect to move the metric is informed by the sample size needed, which depends on the current baseline for the metric, the expected uplift, and the user traffic you expect (we will talk about this in another article as well). Quantifying the time helps avoid the ‘peeking syndrome’ and also gives you a ready explanation for why the experiment must run for the full Z-day period.
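As a rough illustration of how baseline, uplift, and traffic translate into a run time, here is a sketch using the common normal-approximation sample-size formula; the baseline, uplift, and daily-traffic numbers are hypothetical, and a proper power calculation (e.g. via statsmodels or your experimentation platform) should be used for real tests:

```python
import math

def sample_size_per_variant(baseline, relative_uplift, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-proportion test.

    baseline:        current conversion rate, e.g. 0.10
    relative_uplift: expected relative uplift, e.g. 0.10 for +10%
    Uses z-values for two-sided alpha=0.05 and power=0.80.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    delta = p2 - p1                 # absolute difference to detect
    p_bar = (p1 + p2) / 2           # average rate, for the variance term
    z_alpha, z_beta = 1.96, 0.84    # standard normal quantiles
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / delta ** 2
    return math.ceil(n)

def days_to_run(n_per_variant, daily_users, variants=2):
    """Run time needed to fill all variants at the expected traffic."""
    return math.ceil(n_per_variant * variants / daily_users)

# Hypothetical: 10% baseline, +10% relative uplift, 5,000 users/day.
n = sample_size_per_variant(baseline=0.10, relative_uplift=0.10)
print(n, days_to_run(n, daily_users=5000))
```

This is where the Z in the hypothesis template comes from: rather than being an arbitrary deadline, it is the number of days needed to collect enough users to detect the uplift you claimed.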
Define edge cases: As they say, the devil is in the details. Identifying edge cases and open questions before running the experiment will surface nuances that you would otherwise miss. So it is important to capture the details in a spec, a one-pager, or a Confluence page.
Decision tree: Spend time chalking out a decision tree that defines what action you will take once the results are out. You are most neutral before you have started running the experiment. The moment you pitch the idea to executives, inspire the team to work on it, and spend sprints building it, you will find it difficult to escape the ‘commitment trap’. A decision tree helps de-bias that decision. An example of a decision tree that I used in one of my experiments is shared below for reference.
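One way to make such a pre-commitment tamper-proof is to write the decision rule down as plain code before the test starts. The branches and thresholds below are hypothetical examples, not universal values:

```python
def decide(p_value, observed_uplift, min_uplift=0.10, alpha=0.05):
    """Pre-committed decision rule, agreed on before the experiment runs.

    Thresholds (alpha, min_uplift) are illustrative, not universal.
    """
    if p_value < alpha and observed_uplift >= min_uplift:
        return "ship"             # significant and big enough to matter
    if p_value < alpha and observed_uplift > 0:
        return "iterate"          # real but smaller effect: refine and retest
    if p_value < alpha and observed_uplift < 0:
        return "roll back"        # the change hurt the metric
    return "extend or abandon"    # inconclusive: no evidence either way

print(decide(p_value=0.01, observed_uplift=0.12))  # -> ship
```

Writing the branches down while you are still neutral makes it much harder to rationalize a borderline result into a ship decision later.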
That was a lot for the first part, so we will dive deeper into significance testing in the next article.
Read our other articles on Product Leadership, Product Growth, Pricing & Monetization strategy, and AI/ML here.