Sunday, July 20, 2014

Just launched my new company called Insignum

Yesterday I launched my new company, Insignum. It delivers anomaly detection and notifications for your analytics across Mixpanel, KISSmetrics and Google Analytics.

The product provides data-driven startups with automated analytics intelligence to find deviations in expected customer behaviour as they happen. Using machine learning algorithms, it continuously scans your data and notifies you when an anomaly is detected so that you can act on the insight.
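
For anyone curious about what "anomaly detection" means in practice, here is a minimal sketch of one common approach (a rolling z-score over a daily metric). The metric values, window size and threshold below are made up for illustration and are not Insignum's actual implementation.

```python
# Minimal sketch of anomaly detection on a daily metric using a rolling z-score.
# Purely illustrative: metric values, window size and threshold are made up.
from statistics import mean, stdev

def detect_anomalies(series, window=14, threshold=3.0):
    """Flag points that deviate more than `threshold` standard deviations
    from the mean of the preceding `window` observations."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append((i, series[i]))
    return anomalies

# Example: daily sign-ups pulled from an analytics API (values are fabricated).
daily_signups = [120, 118, 125, 130, 122, 119, 128, 124, 121, 127,
                 123, 126, 129, 125, 40]   # the final day is a clear drop
for index, value in detect_anomalies(daily_signups):
    print(f"Day {index}: {value} sign-ups looks anomalous -- send a notification")
```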


If you have any feedback about the product, I'd be very interested in hearing it. Insignum's Twitter account is @insignum_io.

Tuesday, June 3, 2014

How a one-word change increased product demo conversions by 139%

The following is a repost from the GoCardless Blog where I explained one of our successful A/B tests. I also started a good discussion about the merits of the A/B test over at growthhackers.com.



This post looks at an A/B test with a simple copy change and how it improved conversion rates by 139%. The idea behind the A/B test was to give users immediate access to the product, via a recorded demo, instead of having them receive a personal phone call from our sales team later on.

Theory

Using a framework when A/B testing potential improvements to landing pages is helpful. Sean Ellis has a simple one for understanding the broad levers that can be affected: Conversion Rate = Desire - Friction. Desire and Friction can be broken out further, and the LIFT Model by WiderFunnel describes this well:

LIFT Framework for CRO

Hypothesis

Our goal was to improve the conversion rate for demo requests so that customers could access the content they were interested in as soon as possible and with minimal friction. We wondered if the “Request a demo” button might be causing some users anxiety (as described by the LIFT model) and, as such, artificially lowering conversion rates. We then tested whether the copy “Watch a demo” would outperform the original “Request a demo” wording.
We had further reason to believe that immediate access to a recorded demo would be beneficial: only one in five leads ended up watching the live demo that our sales team had scheduled.

Website Modifications

The original GoCardless user experience with the “Request a demo” copy was the following:
  • Call to action (CTA) on the homepage was “Request a demo”
  • Users were taken to a request a demo form to fill out and submit
  • Upon completing the form, users were given a date and time of an upcoming live demo that they could participate in.

GoCardless Homepage With Request Copy

GoCardless Demo Page With Request Copy

We altered the user experience by giving users immediate access to a recorded demo:
  • Call to action (CTA) on the homepage is “Watch a demo”
  • Users are taken to a “Watch a demo” form to fill out and submit
  • Upon completing the form, users are shown a 10-minute recorded demo in their browser

GoCardless Homepage With Watch Copy

GoCardless Requesting A Demo Page With Watch Copy

GoCardless Watch A Demo Page

Most of these changes were implemented within Optimizely's Multi-page Experiments feature (aka Conversion Funnel Testing). However, we built the recorded demo page ourselves, since we didn't need to A/B test that page directly.

Although some time and effort went into thinking about reducing friction, implementing and instrumenting these changes was straightforward thanks to Optimizely and Mixpanel. Optimizely has a useful single-toggle option for sending super properties to Mixpanel so that we can track what happens deep within our funnel.

Results

Given our acquisition channel characteristics, we ran the A/B test for a full 7 days. We then looked for a statistically significant winner with at least a 95% confidence level. Optimizely’s report panel below shows that the “Watch” version consistently outperformed the original version:

Results From Optimizely Report

We also ran the numbers through Mixpanel’s split test calculator based on event data we track in our conversion funnels:

Results From Mixpanel's Split Test Calculator

This shows that the “Watch a demo” version is more than twice as effective as the “Request a demo” version (139% increase in conversion). With this simple copy change, derived from the idea of reducing friction for new users, we’ve dramatically increased the number of users who watch a product demo and are therefore more likely to become customers.
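
For anyone who wants to sanity-check this kind of result without a hosted calculator, the statistic behind most split test calculators is a two-proportion z-test. Here is a small sketch of it; the visitor and conversion counts are made up for illustration and are not the actual GoCardless numbers.

```python
# Two-proportion z-test, the statistic behind most A/B split test calculators.
# The counts below are illustrative only, not the real experiment data.
from math import sqrt, erf

def ab_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF; p < 0.05 matches the 95% confidence level.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, z, p_value

lift, z, p = ab_significance(visitors_a=2000, conversions_a=60,   # "Request a demo"
                             visitors_b=2000, conversions_b=143)  # "Watch a demo"
print(f"Lift: {lift:.0%}, z = {z:.2f}, p = {p:.4f}")
```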

Tuesday, December 17, 2013

10 Steps For Running A/B Tests On Mobile Apps


There are a number of mobile architectures that support effective A/B testing within mobile apps. They range from rapid prototyping approaches based on HTML5 components to feature-flag based approaches that toggle between different versions of native components. The trade-offs are between in-app performance, testing iteration time and the native look and feel of the app. The main concern for effective A/B testing is to produce as many valid experiments as possible in the shortest amount of time; the longer this process takes, the longer it will take to discover which version(s) of the app perform best for various user segments. Whichever strategy is used, A/B tests should not depend on infrequent App Store releases in order to be effective.
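
As a rough illustration of the feature-flag approach (sketched in Python for brevity; a real app would do this in Swift, Kotlin or C#, and the flag names and config structure here are hypothetical), the app fetches a remote configuration at launch and uses it to choose between native components that have already shipped:

```python
# Sketch of feature-flag driven variant selection, as a mobile app might do at launch.
# The flag names and the remote config structure are hypothetical.
import json

def fetch_remote_config():
    # In a real app this would be an HTTP call to a remote config / A/B testing service;
    # hard-coded here so the sketch is self-contained.
    return json.loads('{"checkout_flow": "native_v2", "onboarding_copy": "variant_b"}')

def build_checkout_screen(flags):
    # Each branch maps to an already-shipped native component, so switching
    # variants does not require a new App Store release.
    if flags.get("checkout_flow") == "native_v2":
        return "CheckoutScreenV2"
    return "CheckoutScreenV1"

flags = fetch_remote_config()
print("Showing:", build_checkout_screen(flags))
```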

After setting up a new A/B testing framework, it's important to run an A/A test to determine whether it is calibrated correctly. This type of A/A test should also be repeated every so often to make sure the A/B testing framework still works as expected and produces the correct statistical results.
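
One way to reason about calibration: if identical traffic is repeatedly split in two and tested at the 95% confidence level, roughly 5% of those A/A runs should come up "significant" purely by chance. Here is a small simulation sketch of that check, with made-up traffic numbers.

```python
# A/A test calibration check: with no real difference between groups, a 95%
# significance test should flag roughly 5% of runs as "winners" purely by chance.
import random
from math import sqrt, erf

def two_sided_p(conv_a, n_a, conv_b, n_b):
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(1)
runs, users_per_group, true_rate = 1000, 2000, 0.04   # fabricated numbers
false_positives = 0
for _ in range(runs):
    conv_a = sum(random.random() < true_rate for _ in range(users_per_group))
    conv_b = sum(random.random() < true_rate for _ in range(users_per_group))
    if two_sided_p(conv_a, users_per_group, conv_b, users_per_group) < 0.05:
        false_positives += 1
print(f"False positive rate: {false_positives / runs:.1%}  (expect roughly 5%)")
```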

Once a basic A/B testing framework is set up, here are the steps to run an effective A/B test:
  1. Define a goal that can be accurately measured. The effort in this step will reap dividends later in reducing the number of failed or ineffective tests. 
  2. Brainstorm ideas for how to satisfy the goal. These can come from a variety of places such as qualitative customer feedback, employee suggestions, behavioural economic theories, gut feelings about product improvements, etc. 
  3. Prioritize the list of ideas above based on ease of implementation, estimated improvement potential and relative position in the funnel. 
  4. Set up the necessary event-based analytics tracking for an individual user's flow through the entire app. These events should be wired together into a funnel so that it is clear what the conversion rates are at each step. Depending on what is being tested, the user's flow should begin at their entry point into the app (direct launch, push notification or website launch) and continue through to the point of purchase and/or post-purchase follow-up. Another important strategy is to measure not only the success of the step being tested, but also the overall engagement of a user. 
  5. Capture a baseline set of metrics for how the app currently performs for various user segments before any testing is run. 
  6. Build the minimum viable test (MVT) and make sure to test it with a small set of beta users prior to releasing it in order to validate the initial metrics. 
  7. Decide on the proportion of users that will be exposed to the A/B test (e.g. new users, returning users, users who haven't purchased yet, 10% of all users, etc.); see the deterministic bucketing sketch after this list. 
  8. Run the A/B test until the results become statistically significant for the required confidence level (usually 95%). Also ensure that the A/B test occurs during a time period that is considered "usual" activity (e.g. don’t A/B test on a Sunday if users don’t often purchase on a Sunday). 
  9. Calculate which version of the test performs better. If the newly tested version is superior, make it the default version of the mobile app and release it into production for all users. 
    • If the newly tested version either performs poorly or no conclusion can be reached, record the details and possibly re-assess later. 
  10. Observe any other tangential effects that the A/B test may have caused such as increased support calls/emails, decreased retention, engineering complexity, etc. It may also be helpful to present some users with a brief survey asking them about their new experience in the mobile app. The results from this survey will add valuable qualitative feedback to the A/B test’s quantitative results. 
  11. Repeat the process by running another A/B test.
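
As a sketch of how step 7 is often implemented, deterministic hash-based bucketing keeps a user in the same variant across sessions without storing any state. The experiment name, exposure percentage and user ids below are illustrative only.

```python
# Deterministic assignment of users to A/B test variants (see step 7 above).
# Hashing the user id together with the experiment name keeps assignment stable
# across sessions and independent between experiments. Names and splits are illustrative.
import hashlib

def assign_variant(user_id, experiment="watch_demo_copy", exposure=0.10):
    """Return None if the user is outside the experiment, else 'A' or 'B'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform value in [0, 1]
    if bucket >= exposure:                      # e.g. only 10% of users are exposed
        return None
    return "A" if bucket < exposure / 2 else "B"

for uid in ["user-1", "user-2", "user-3"]:
    print(uid, assign_variant(uid))
```
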
Ultimately, executing A/B tests is about simplicity and speed. The faster the tests can be run and statistically significant winners declared, the more growth a product will see over time. 

The steps given above for running A/B tests relate to users who have already downloaded the mobile app. A/B testing can also be performed on users coming from specific growth channels. Due to mobile's inherently closed ecosystem, attribution is more complicated for mobile apps. However, once it is set up correctly, it is possible to track users from specific growth channels so that each channel's revenue potential can be calculated and optimized.

Monday, November 11, 2013

Xamarin Features RESAAS Mobile App

One of the things I am passionate about at RESAAS is our mobile app for iPhone and Android. We are often exploring how our customers use the app differently from the browser experience and then optimizing it for that exact use case.

Xamarin, the company behind the cross-platform development framework that uses C#, recently featured the RESAAS App on their website: http://xamarin.com/apps/app/resaas_the_real_estate_social_network.

I've written previously about our App being showcased on the Appcelerator Titanium blog (when we used their framework instead of Xamarin) as well as our initial app release back in April 2013.

Sunday, November 10, 2013

A Growth Hacking Case Study on Starbucks SRCH


In 2011, while working at Blast Radius, a global digital agency, I was responsible for the technical development of the 'Starbucks SRCH Scavenger Hunt'. The following video describes the campaign.


Given my recent venture into the world of growth hacking and the way it now informs my thinking, I took another look at the Starbucks SRCH Scavenger Hunt from a growth hacking perspective.

I will present it as a retrospective case study using publicly available data. Here are the numbers quoted by Blast Radius in their post:
  • 7000 Starbucks locations advertised the initial QR code for launch
  • 300k visits over 3 weeks
  • 23k registrations (97% played at least one clue)
  • Avg. time for 1st person to solve a clue was 21 min, indicating extremely high engagement with the brand
  • Over 20k posts from social channels regarding SRCH
  • Media coverage from Mashable, USA Today, CNN, PSFK and more

1. Use a Simple Framework

I've posted before about Dave McClure's Startup Metrics for Pirates (AARRR) and Chamath Palihapitiya's growth framework. Neil Patel and Bronson Taylor have also created an even simpler three-stage framework influenced by Dave's ideas: Get Visitors, Activate Members and Retain Users. Any of these frameworks can be used to independently measure and analyze each stage a user progresses through as they go from having never heard of the product to being fully engaged and possibly paying for a premium version. In this case, I'm choosing Chamath's four-stage growth framework because it ignores the revenue stage (a consequence of Facebook's business model that also makes sense for SRCH, since it was a free product):

Chamath's Growth Framework

2. Start Acquiring Users

Paid media was not used for this project so all inbound traffic for SRCH's acquisition (300k visits) came from the following three sources: 
  1. Existing Starbucks Customers (via their 7000 retail locations)
  2. Traditional Media (Mashable, USA Today, CNN... etc)
  3. Social Media (Mostly Twitter & Facebook)

Things To Consider:
  • Unique Users: Using "visits" to quantify the acquisition stage is ill-advised. Visits, page views, downloads, etc. are usually just vanity metrics and were most likely quoted in this specific instance to bolster the numbers. What should be measured at this stage is the exact number of unique users reaching the landing page(s).
  • Conversion Rates: According to Terifs data analysis, Starbucks averaged somewhere around 500 daily customers per retail location in 2010-2011. If 7000 retail locations each saw approximately 500 daily customers over the 2-week period during which they advertised the SRCH Scavenger Hunt, then there was a potential audience of 49 million customers (without factoring in repeat customers, which might in fact be quite high). If we naively split the 300k visits three ways across the acquisition channels (stores, traditional media and social media), the Starbucks locations brought in approximately 100k visits on their own. Thus 100k visits / 49M potential customers translates to a conversion ratio of only 0.2% (the calculation is spelled out below). Interestingly, this is in line with conversion rates for digital display advertising (i.e. banner ads), which are known to have very low click-through rates (CTR) compared to other advertising methods. Given the logistics and development costs required to set up advertising across 7000 Starbucks stores, and an in-store conversion rate that approximates that of banner ads, it may have been more beneficial to spend time optimizing the in-store advertising of SRCH or to switch to paid media to drive those visitors to Starbucks' landing pages.
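
The back-of-the-envelope calculation behind that 0.2% figure, with the assumptions (store count, daily customers, two-week window, even per-channel split) stated explicitly:

```python
# Back-of-the-envelope conversion estimate for the in-store channel.
# All inputs are the assumptions stated above, not measured values.
stores = 7000
daily_customers_per_store = 500     # from the cited data analysis
days_advertised = 14                # assumed two-week advertising window
potential_audience = stores * daily_customers_per_store * days_advertised

total_visits = 300_000
in_store_visits = total_visits / 3  # naive even split across the three channels

conversion = in_store_visits / potential_audience
print(f"Potential audience: {potential_audience:,}")        # 49,000,000
print(f"Estimated in-store conversion: {conversion:.2%}")   # roughly 0.20%
```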

3. Measure Each Stage In Detail

One of the most valuable things to do in any project is to measure each stage along the growth framework (a.k.a. funnel) and figure out the conversion rate at each stage. This shows where users are dropping off and also allows segmentation of the traffic/users so that insightful questions can be asked like "Which types of users are activating more often?" or "What source did our most engaged users come from?" or "Where should we start optimizing first?".


NOTE: The only data available is from Blast Radius and may not accurately measure the most representative proxy for each stage. 

Here are some things to consider when building these types of funnels and analyzing the results:

  • Counting Conversion: The funnel should measure each user independently, and any action they perform should only be counted once. Thus, if a single user sent out multiple social media posts, the virality stage should only count the first of those social posts, since converting multiple times is still just a single conversion. The reason this immediately stood out to me was the 87% conversion from the engagement to the virality stage. From my experience, this number is quite high, and I assume that it measures the total number of social posts rather than only the posts from engaged users, counted once per user.
  • Defining Engagement: The engagement stage took into account whether the user "played at least once", which may or may not be the right proxy for what should be considered an engaged user. Engagement is by far the hardest stage to measure; each business should measure it differently and constantly re-assess whether it is measuring the right thing. Many industry leaders have discovered what their leading indicators of engagement are, but these are hard to figure out without a comprehensive understanding of the customer and tested theories based on data analysis.
  • Funnel Creation: Given the growth framework above, it is very helpful to map each stage to a funnel step in an event-based analytics tool such as Mixpanel or KISSmetrics. I've written a post before about using Dave McClure's AARRR framework with Mixpanel, but here is a mocked-up version of the growth framework above mapped to a Mixpanel Funnel:
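
Alongside that mock-up, here is the same idea sketched in code: each stage mapped to an event count and the per-stage conversion computed. The event names and counts are invented for illustration and are not the actual SRCH data.

```python
# Sketch of mapping growth-framework stages to analytics events and computing
# per-stage conversion. Each user is counted at most once per stage; event
# names and counts are invented for illustration.
from collections import OrderedDict

funnel = OrderedDict([
    ("Acquisition (landed on page)", 300_000),
    ("Activation (registered)",       23_000),
    ("Engagement (played a clue)",    22_300),
    ("Virality (shared socially)",     8_000),
])

previous = None
for stage, users in funnel.items():
    step_rate = users / previous if previous else 1.0
    overall = users / next(iter(funnel.values()))
    print(f"{stage:32s} {users:>8,}  step: {step_rate:6.1%}  overall: {overall:6.1%}")
    previous = users
```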


4. Optimize The Funnel

Given the data above, the best place to start optimizing would be higher up in the funnel, where the largest drop-off was experienced (i.e. landed users who don't sign up). The reason for this is that a one-percentage-point increase in the sign-up rate has a much larger effect on the overall completion rate than the same increase in the engagement rate, which is already close to 100% (a short worked example at the end of this post makes this concrete). One thing to be careful of with this approach is that diminishing returns set in the moment you begin optimizing a step; at some point the effort required to discover a change that has a tangible effect is no longer worth the cost. Here are some ideas that could have been used for optimizing each step of the funnel:

  • Optimizing Acquisition: Inbound traffic came from the 3 channels mentioned above. Figuring out which of those channels brought in the "best" (most highly engaged) users, using Mixpanel's segmentation features (or Google Analytics), would allow resources to be reallocated to the acquisition channel that performed best and had the greatest potential for increases. For example, optimizing the retail in-store advertising for SRCH during the Scavenger Hunt would have been complex (in terms of logistics and the timing to roll out any changes), but changes could be tested at a single store and, if sufficient increases were noticed to justify changes across the other 7000 stores, the improved advertising could be rolled out. Essentially, a variety of in-store advertising combinations (placements, colours, QR codes vs. actual links, etc.) could be rapidly tested to see which single change or set of changes drove more traffic.
  • Optimizing Activation: The conversion page could be A/B tested (using something like Optimizely) to determine if there are any changes that would boost sign-ups. Social sign-up, wording, images, colours and layout can all be A/B tested, provided there is enough inbound traffic to support the tests (see Neil and Bronson's suggestions for conversion growth hacks). Changes should be statistically significant, as measured with an A/B split test calculator.
  • Optimizing Engagement: This is the core of a user's experience. As can be seen, there are a number of steps that a user must go through to get to this point, but once they are here they should be given what some call a "must-have experience" or "aha moment" if they are ever to come back and continue to use the product. Whether users have this experience is essentially the question of product-market fit. Without it, no growth hacking will be very effective over time, as the product will just bleed users over and over again until there are no more users left to acquire. Therefore, optimizing for engagement comes only after product-market fit has been found. If there is a clear understanding of how users are engaging with the product and there is a desire to boost engagement, a number of tactics are available. For the Starbucks SRCH Scavenger Hunt, email, SMS or push notifications could be used to alert users when the next clue has been released or when the first user solves a clue. SRCH was a game after all, so a competitive gamification layer (e.g. leaderboards pitting users against each other) could boost engagement with existing users.
  • Optimizing Virality: Increasing the number of users who post something about the product to their social graph requires trust, a value proposition and reduced friction. Thus, it is worth testing a number of combinations: where in the flow the user should be prompted to post, what copy should be used to encourage a user to post, and what copy should be used for the auto-populated post text. Additionally, adding a clear value-added benefit for posting (e.g. exclusive access, more game features, etc.) could also increase the number of users who decide to post something to their social graph.
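
Finally, a tiny sketch of the point made at the start of this section: a one-percentage-point improvement at the step with the largest drop-off lifts the overall completion rate far more than the same improvement at a step that already converts well. The baseline step rates below are invented.

```python
# Why optimizing higher in the funnel has more leverage: the same one-percentage-point
# improvement moves the overall completion rate far more when applied to the weakest step.
# Baseline step conversion rates are invented for illustration.
def overall_rate(steps):
    rate = 1.0
    for step in steps:
        rate *= step
    return rate

baseline       = [0.08, 0.60, 0.90]   # land -> sign-up, sign-up -> engage, engage -> share
improve_signup = [0.09, 0.60, 0.90]   # +1 percentage point at the weakest step
improve_share  = [0.08, 0.60, 0.91]   # +1 percentage point at the strongest step

print(f"Baseline completion:   {overall_rate(baseline):.2%}")
print(f"+1pp on sign-up step:  {overall_rate(improve_signup):.2%}")
print(f"+1pp on sharing step:  {overall_rate(improve_share):.2%}")
```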