Wednesday 18 December 2013

10 Steps For Running A/B Tests On Mobile Apps


There are a number of mobile architectures that support effective A/B testing within mobile apps. They range from rapid-prototyping approaches based on HTML5 components to feature-flag approaches that trigger different versions of native components. The trade-offs are between in-app performance, testing iteration time and the native look and feel of the app. The main goal of effective A/B testing is to produce as many valid experiments as possible in the shortest amount of time, so the longer this process takes, the longer it will take to discover which version(s) of the app perform best for various user segments. Whichever strategy is used, A/B tests should not depend on infrequent App Store releases to be effective.
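As a rough illustration of the feature-flag approach (not any particular vendor's SDK), deterministic hashing keeps a user in the same variant across sessions without requiring an App Store release; the experiment and variant names below are hypothetical.

import hashlib

def assign_variant(user_id: str, experiment: str, variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant for a given experiment.

    Hashing user_id together with the experiment name keeps assignments stable
    across sessions and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: the app asks the flag logic which checkout screen to render.
variant = assign_variant(user_id="user-42", experiment="checkout_redesign")
if variant == "treatment":
    pass  # render the new native checkout component
else:
    pass  # render the existing checkout component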

After setting up a new A/B testing framework, it's important to run an A/A test to confirm the framework is calibrated correctly. This type of A/A test should also be run periodically to make sure the A/B testing framework still works as expected and produces the correct statistical results.

Once a basic A/B testing framework is set up, here are the steps to run an effective A/B test:
  1. Define a goal that can be accurately measured. The effort in this step will reap dividends later in reducing the number of failed or ineffective tests. 
  2. Brainstorm ideas for how to satisfy the goal. These can come from a variety of places such as qualitative customer feedback, employee suggestions, behavioural economic theories, gut feelings about product improvements, etc. 
  3. Prioritize the list of ideas above based on ease of implementation, estimated improvement potential and relative position in the funnel. 
  4. Set up the necessary event-based analytics tracking for an individual user's flow through the entire app. These events should be wired together to produce a funnel so that it is clear what the conversion rates are at each step. Depending on what is being tested, the user's flow should begin from their entry point into the app (direct launch, push notification or website launch) through to the point of purchase and/or post-purchase follow-up. Another important strategy is to measure not only the success of the step being tested, but also the overall engagement of a user. 
  5. Capture a baseline set of metrics for how the app currently performs for various user segments before any testing is run. 
  6. Build the minimum viable test (MVT) and make sure to test it with a small set of beta users prior to releasing it in order to validate the initial metrics. 
  7. Decide on the proportion of users that will be exposed to the A/B test (e.g. new users, returning users, users who haven't purchased yet, 10% of all users, etc.) 
  8. Run the A/B test until the results become statistically significant at the required confidence level (usually 95%); one way to compute this is sketched after this list. Also ensure that the A/B test occurs during a time period of "usual" activity (e.g. don't A/B test on a Sunday if users don't often purchase on a Sunday). 
  9. Calculate which version of the test performs better. If the newly tested version is superior, make it the default version of the mobile app and release it into production for all users. 
    • If the newly tested version either performs poorly or no conclusion can be reached, record the details and possibly re-assess later. 
  10. Observe any other tangential effects that the A/B test may have caused such as increased support calls/emails, decreased retention, engineering complexity, etc. It may also be helpful to present some users with a brief survey asking them about their new experience in the mobile app. The results from this survey will add valuable qualitative feedback to the A/B test’s quantitative results. 
  11. Repeat the process by running another A/B test.
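Here is a minimal sketch of the significance check in step 8, using a two-proportion z-test; the conversion counts are made up.

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 4.0% vs 4.6% conversion on 10,000 users per variant.
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"z={z:.2f}, p={p:.3f}")  # significant at 95% confidence if p < 0.05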
Ultimately, executing A/B tests is about simplicity and speed. The faster the tests can be run and statistically significant winners declared, the more growth a product will see over time. 

The steps given above for running A/B tests relate to users who have already downloaded the mobile app. A/B testing can also be performed on users coming from specific growth channels. Due to mobile's inherently closed ecosystem, attribution is more complicated for mobile apps. However, once it is set up correctly, it is possible to track users from specific growth channels so that each channel's revenue potential can be calculated and optimized.

Monday 11 November 2013

Xamarin Features RESAAS Mobile App

One of the things I am passionate about at RESAAS is our mobile app for iPhone and Android. We are often exploring how our customers use the app differently from the browser experience and then optimizing the experience for that exact use case.

Xamarin, the company behind the cross-platform development framework that uses C#, recently featured the RESAAS App on their website: http://xamarin.com/apps/app/resaas_the_real_estate_social_network.

I've written previously about our App being showcased on the Appcelerator Titanium blog (from when we used their framework, before switching to Xamarin) as well as our initial app release back in April 2013.

A Growth Hacking Case Study on Starbucks SRCH


In 2011, while working at Blast Radius, a global digital agency, I was responsible for the technical development of the 'Starbucks SRCH Scavenger Hunt'. The following video describes the campaign.


Given my recent venture into the world of growth hacking and the way it now informs my thinking, I took another look at the Starbucks SRCH Scavenger Hunt from a growth hacking perspective.

I will present it here as a retrospective case study using the publicly available data. These are the numbers quoted by Blast Radius in their post:
  • 7000 Starbucks locations advertised the initial QR code for launch
  • 300k visits over 3 weeks
  • 23k registrations (97% played at least one clue)
  • Avg. time for 1st person to solve a clue was 21 min, indicating extremely high engagement with the brand
  • Over 20k posts from social channels regarding SRCH
  • Media coverage from Mashable, USA Today, CNN, PSFK and more

1. Use a Simple Framework

I've posted before about Dave McClure's Startup Metrics for Pirates: AARRR and Chamath Palihapitiya's growth framework. Neil Patel and Bronson Taylor have also created an even simpler three-stage framework influenced by Dave's ideas: Get Visitors, Activate Members and Retain Users. Any of these frameworks can be used to independently measure and analyze each stage a user progresses through as they go from never having heard of the product to being fully engaged and possibly paying for a premium version. In this case, I'm choosing Chamath's four-stage growth framework because it ignores the revenue stage (a consequence of Facebook's business model that also makes sense for SRCH, since it was a free product):

2. Start Acquiring Users

Paid media was not used for this project so all inbound traffic for SRCH's acquisition (300k visits) came from the following three sources: 
  1. Existing Starbucks Customers (via their 7000 retail locations)
  2. Traditional Media (Mashable, USA Today, CNN... etc)
  3. Social Media (Mostly Twitter & Facebook)

Things To Consider:
  • Unique Users: Using "visits" to quantify the acquisition stage is ill-advised. Visits, page views, downloads and the like are usually just vanity metrics and were most likely quoted here to bolster the numbers. What should be measured at this stage is the exact number of unique users reaching the landing page(s).
  • Conversion Rates: According to Terifs data analysis, Starbucks averaged somewhere around 500 daily customers per retail location in 2010-2011. Given this, if 7000 retail locations each had approximately 500 daily customers over the 2-week period during which they might have advertised the SRCH Scavenger Hunt, then there was a potential audience of 49 million customers (without factoring in repeat customers, which might in fact be quite high). If we split the 300k visits evenly across the three acquisition channels (stores, traditional media and social media), then the Starbucks locations brought in approximately 100k visits on their own. Thus 100k visits / 49M potential customers translates to a conversion rate of only about 0.2% (the arithmetic is spelled out in the short sketch below). Interestingly, this is in line with conversion rates for digital display advertising (i.e. banner ads), which are known to have very low click-through rates (CTR) compared to other advertising methods. Considering the logistics and development costs required to set up advertising across 7000 Starbucks stores, coupled with a conversion rate that approximates that of banner ads, it may have been more beneficial to spend time optimizing the in-store advertising of SRCH, or to switch to paid media to drive those visitors to Starbucks' landing pages.
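To make the back-of-the-envelope math explicit, here is the same estimate in a few lines of Python; every input is an assumption taken from the figures quoted above.

locations = 7_000          # Starbucks stores advertising the QR code
daily_customers = 500      # rough average customers per store per day
days = 14                  # assumed 2-week in-store advertising window
total_visits = 300_000     # visits reported by Blast Radius
store_share = 1 / 3        # naive equal split across the three channels

potential_audience = locations * daily_customers * days   # 49,000,000
store_visits = total_visits * store_share                  # ~100,000
conversion = store_visits / potential_audience
print(f"In-store conversion ~ {conversion:.2%}")            # ~0.20%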

3. Measure Each Stage In Detail

One of the most valuable things to do in any project is to measure each stage along the growth framework (a.k.a. funnel) and figure out the conversion rate at each stage. This shows where users are dropping off and also allows segmentation of the traffic/users so that insightful questions can be asked like "Which types of users are activating more often?" or "What source did our most engaged users come from?" or "Where should we start optimizing first?".


NOTE: The only data available is from Blast Radius and may not accurately measure the most representative proxy for each stage. 

Here are some things to consider when building these types of funnels and analyzing the results:

  • Counting Conversion: The funnel should measure each user independently, and any action they perform should only be counted once. Thus, if a single user sent out multiple social media posts, the virality stage should only count the first of those posts, since that is when the user "converted" to that stage of the funnel (i.e. converting multiple times is still just a single conversion). The reason this immediately stood out to me was the 87% conversion from the engagement to the virality stage. From my experience this number is quite high, and I assume it measures the total number of social posts rather than only those from engaged users.
  • Defining Engagement: The engagement stage took into account whether the "user played at least once", which may or may not be the right proxy for what should be considered an engaged user. Engagement is by far the hardest stage to measure; each business should measure it differently and constantly re-assess whether they are measuring the right thing. Many industry leaders have discovered what their leading indicators of engagement are, but these are hard to figure out without a comprehensive understanding of the customer and tested theories based on data analysis.
  • Funnel Creation: Given the growth framework above, it is very helpful to map each stage to a funnel step in an event-based analytics tool such as Mixpanel or Kissmetrics. I've written a post before about using Dave McClure's AARRR framework with Mixpanel; below is a sketch of the growth framework above mapped to Mixpanel events that line up as funnel steps:
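This is a minimal illustration using Mixpanel's Python library; the event names, properties and project token are made up, and a real integration would fire these from the product itself.

from mixpanel import Mixpanel  # pip install mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # hypothetical project token

def track_stage(user_id: str, stage: str, **props):
    """Send one event per growth-framework stage; a Mixpanel funnel chains them."""
    mp.track(user_id, stage, props)

# One event per stage, fired as the user moves through the flow:
track_stage("user-42", "Acquisition", source="in-store QR code")
track_stage("user-42", "Activation", signup_method="email")
track_stage("user-42", "Engagement", clues_played=1)
track_stage("user-42", "Virality", shared_to="twitter")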


4. Optimize The Funnel

Given the data above, the best place to start optimizing would be higher up in the funnel where the largest drop-off was experienced (i.e. landed users who don't sign-up). The reason for this is that a one percent increase in signed-up users has a much larger effect on the overall completion rate than the same percentage increase in engaged users. One thing to be careful of with this approach is that diminishing returns start setting in the moment you begin optimizing a step. At some point the effort required to discover a change that has a tangible effect is no longer worth the cost. Here are some ideas that could have been used for optimizing each step of the funnel:

  • Optimizing Acquisition: Inbound traffic came from the 3 channels mentioned above. Figuring out which of those channels brought in the "best" (most highly engaged) users, using Mixpanel's segmentation features (or Google Analytics), would allow resources to be reallocated to the acquisition channel that performed best and had the greatest potential for improvement. For example, optimizing the retail in-store advertising about SRCH during the Scavenger Hunt would have been complex (in terms of logistics and the timing to roll out any changes), but changes could be tested at a single store and, if the increases were sufficient to justify changes across the other 7000 stores, the improved advertising could be rolled out. Essentially, a variety of in-store combinations of advertising placements, colours, QR codes vs. actual links, etc. could be rapidly tested to see which single change or set of changes drove more traffic.
  • Optimizing Activation: The conversion page could be A/B tested (using something like Optimizely) to determine if there are any changes that would boost sign-ups. Social sign-up, wording, images, colours and layout can all be A/B tested, provided there is enough inbound traffic to support the tests (see Neil and Bronson's suggestions for conversion growth hacks). Changes should be statistically significant, as measured with an A/B split test calculator.
  • Optimizing Engagement: This is the core of a user's experience. As can be seen, there are a number of steps a user must go through to get to this point, but once they are here they should be given what some call a "must-have experience" or "aha moment" if they are ever to come back and continue to use the product. Whether or not a product delivers this is essentially the difference between having product-market fit and not. Without it, no growth hacking will be effective over time; the product will just bleed users over and over again until there are no more users left to acquire. Therefore, optimizing for engagement comes only after product-market fit has been found. If there is a clear understanding of how users are engaging with the product and a desire to boost engagement, a number of tactics are available. For the Starbucks SRCH Scavenger Hunt, email, SMS or push notifications could have been used to alert users when the next clue was released or when the first user solved a clue. SRCH was a game after all, so building in a gamification system based on competing users could have boosted engagement with existing users.
  • Optimizing Virality: Increasing the number of users who post something about the product to their social graph requires trust, a value proposition and reduced friction. Thus, a number of combinations could be tested: where in the flow the user is prompted to post, what copy is used to encourage them to post, and what copy is used for the auto-populated post text. Additionally, adding a clear value-added benefit for posting (i.e. exclusive access, more game features, etc.) could also increase the number of users who decide to post something to their social graph.

Saturday 7 September 2013

Examples of Mobile First Development

We all know that developing for mobile is different than developing for the desktop, but how exactly is it different? I'm deeply interested in how a mobile product needs to be structured in a fundamentally different way than a desktop product in order to thrive. It cannot simply be a slimmed-down, feature-minimal version of the desktop or tablet version. It should not feel as if it's missing critical features or useful add-ons simply because they couldn't fit into the mobile format.

Various concepts have emerged for how to approach mobile app development. One revolves around mobile apps being remote controls for real life. Another is about mobile apps being useful for a user while they wait for something (i.e. in a line-up or in an elevator). In order for that to happen a user should be able to launch the app and perform some task within 30 seconds to a minute. MG Siegler of CrunchFund also had this to say about building for mobile:
"Don’t build an app based on your website. Build the app that acts as if websites never existed in the first place. Build the app for the person who has never used a desktop computer. Because they’re coming. Soon."
Some companies have built very compelling business models that fit well with this mobile first, quick and effective/remote control paradigm. Users are responding well by engaging with and being retained by these mobile apps due to their simplicity. The following examples show how mobile apps can reduce inherently complicated tasks down to very simple actions which are fundamentally different from anything we've seen previously on the desktop.

1. Hotel Tonight


Hotel Tonight is a mobile-only app where users can book last-minute hotel deals. Hotel Tonight has simplified booking a hotel room down to only a few essential actions without degrading the experience to the point where it feels limited. Here is the user flow from initial launch through to booking confirmation:



1. During launch the mobile app retrieves the hotels on offer given the user's current location. Although this may take a few extra seconds, it presents the user with the exact information they are interested in right when the app loads (the user is not required to type anything).

2. The user is then presented with the hotel selections in a scrollable list with the 3 most essential details displayed: photo, price & location. The app also provides some other useful data like the type of hotel experience (Solid, Basic, Luxe, Charming, Hip) and a rating by other Hotel Tonight guests. This information makes it very easy to select an appropriate hotel for the night.

3. Once a hotel has been initially selected, some further details can be reviewed such as additional photos, information about the hotel itself and its exact location on a map along with the final price.

4. The final screen allows the user to easily confirm the dates, price and credit card to be used for the transaction.

From start to finish this process only requires 3 simple actions: a single hotel selection, an initial booking of the room and finally a confirmation of the booking details. Hotel Tonight has given travellers the ability to easily choose and book a hotel room from their mobile phones. The whole process feels uninhibited by the mobile form factor; it actually thrives within it, and it's because of this that the app continues to delight users.


2. Car2go


Car2go is a vehicle-sharing service, paid for by the minute, where vehicles can be picked up and dropped off in different locations. The primary way to find and reserve a vehicle is via its mobile app. Given the nature of the service offered, it needs to take seconds (not minutes) to book a vehicle via the app. Here is the user flow from initial launch through to vehicle reservation:


1. The mobile app launches fairly quickly, determines the user's current location in order to position the map correctly and starts retrieving vehicle locations. Again, this presents the user with the exact information they are looking for without any interaction after launching the app.

2. Vehicles begin populating on the map and the user can then zoom in and out to find the vehicle closest to them, or simply select a vehicle.

3. Once a vehicle has been selected, the blue vehicle marker expands to show 3 additional bits of information: license number, distance away and gas available (indicated as a percentage). Although this information may be helpful in some circumstances, it could be presented differently and this third step could potentially be removed completely:
  • The license number is unimportant to the vast majority of users unless a user has forgotten something in a vehicle and is trying to find it (but this is a very rare case).
  • Knowing the distance to a vehicle isn't as helpful as knowing the approximate time it would take to walk there. A vehicle may be 562m away, but how long would that take to walk? A separate interaction with the marker (such as a "2-second hold") could display the walking time from the user's current location, removing the need for the distance measurement in the expanded marker (a rough conversion is sketched after this list).
  • The gas available is the most helpful, but it could be displayed in a way that is easier to compare across all vehicles. Displaying something visually on each vehicle's non-expanded marker (possibly a textual percentage or even a level indicator) would again avoid the need to tap the marker to find out how much gas is available.
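Here is a rough sketch of that walking-time conversion; the 80 m/min pace is an assumed average walking speed, not something the app exposes.

def walking_time_minutes(distance_m: float, pace_m_per_min: float = 80.0) -> float:
    """Approximate walking time for a straight-line distance (~4.8 km/h pace)."""
    return distance_m / pace_m_per_min

# A vehicle shown as 562 m away is roughly a 7-minute walk.
print(f"{walking_time_minutes(562):.0f} min")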

4. The final screen allows the user to review the vehicle's available gas, cleanliness and street address before confirming the reservation.

Again, from start to finish this process only requires 3 simple actions: a single vehicle selection, reviewing some of the vehicle's specific information and finally a confirmation of the reservation details. As outlined above, this 3-step process could be further simplified to just 2 steps by removing the expandable marker step and simply going straight to the reservation confirmation screen. Having said this, even with 3 steps the Car2go mobile app is very efficient to use - a vehicle can be reserved in under 30 seconds.

3. Uber


Uber seamlessly connects a user needing transportation with a taxi driver. Uber, the company, does not own any vehicles and does not have any drivers on staff; instead, they provide ride-logistics to both users and drivers in order to match supply and demand more effectively. Ultimately Uber wants to get users a taxi in the shortest amount of time and give them the best experience, while helping drivers anticipate demand and therefore maximize their earnings per shift. Here is the user flow from initial launch through to taxi request:



1. During launch the mobile app determines the user's location in order to display the correct map along with taxis on the following screen. 

2. Taxis begin populating on the map but they also update in real-time as taxis move about through the streets or get requested by other users and are no longer available. This gives the user immediate feedback about each taxi's relative speed and direction along with the approximate supply of taxis in the given area. All a user has to do is position their pickup location, review the approximate wait time and then tap the "SET PICKUP LOCATION" button.

3. The final screen allows the user to review the pickup location, credit card to use and approximate wait time before confirming the request.

From start to finish this process only requires 2 simple actions: choosing a pickup location and confirming the taxi pickup request. Uber has made the process of booking a taxi on their mobile app as simple as possible and it works phenomenally well for users.

Friday 6 September 2013

Building Product Roadmaps with Kanban

The following back-of-the-napkin drawing is a great illustration of how to think about aligning an entire organization around building the best product(s). Kanban fits well with this illustration as it can be used continuously to build an evolving product roadmap, with the aid of everyone across the company, in a repeatable and consistent way.


Here are the major steps:
  1. Source: New product ideas should be sourced from anyone inside the company. Anyone from any department should be able to provide small or large ideas that could eventually make their way into the product.
  2. Filter: Have a single person (usually the product manager) filter and prioritize which ideas should be worked on. Filtering should be performed methodically, based on both quantitative and qualitative measures, so that the same selection criteria can be used over and over again.
  3. Recycle: Compost all the ideas that weren't selected and provide specific reasons why. This openness and transparency helps the originator of the idea(s) understand how to improve future ones so that they are more likely to be selected.
  4. Display: Display what product ideas will be worked on and in what order. This is where a Kanban board can come in very handy. Everyone in the organization should have access to this board where they can see the progress of previously selected product ideas, who is working on them and potentially add additional requirements as they come up.
The key is to make all of these steps publicly visible within the company and give everyone the ability to comment, argue and build on existing ideas. Kanban is about transparency: it gives everyone in the organization knowledge of what ideas are being proposed, which ones are being selected or dropped and why, and, finally, what should be worked on in the near future. This very simple yet transparent process helps to align an entire organization towards a common goal via a repeatable and consistent process.

Friday 30 August 2013

The Startup Curve and How Not to Die

Paul Graham's famous startup curve maps the trials and tribulations that a startup can go through on the road to success.


Of course, not every startup ends up hockey sticking after the "Wiggles of False Hope". Most startups end up failing (90% of tech startups according to Allmand Law), but Paul also ended his "How Not to Die" essay with these words:
"So I'll tell you now: bad shit is coming. It always is in a startup. The odds of getting from launch to liquidity without some kind of disaster happening are one in a thousand. So don't get demoralized. When the disaster strikes, just say to yourself, ok, this was what Paul was talking about. What did he say to do? Oh, yeah. Don't give up."

Thursday 29 August 2013

3 Responsibilities of Product Management


Adam Nash (former VP of Product Management at LinkedIn) describes what it takes to be a great product leader. He boils down a product manager's job description to three key responsibilities.

Responsibility #1: Product Strategy

Adam describes product strategy this way: "it’s the product manager’s job to articulate two simple things":

1. What game are we playing?
    • What is the vision of the product?
    • What value does it provide customers?
    • What is the differentiated advantage over competitors?
2. How do we keep score?

Clearly answering these questions synchronizes teams across the organization and helps them understand how to win effectively in the market. 

Responsibility #2: Prioritization

Different processes exist to handle the prioritization of features and tasks: Waterfall, RUP, Agile, Kanban, etc. But without a solid product strategy, prioritization becomes very difficult to do effectively. Adam describes a framework for product planning which he calls the Three Feature Buckets:
  1. Customer Requests: These are features that your customers are actively requesting.
  2. Metrics Movers: These are features that will move your target business & product metrics significantly.
  3. Customer Delight: These are features that customers haven't necessarily asked for, but literally delight them when they see them.
Features may fit into more than one bucket but rarely fit in all three. The benefit of classifying each feature this way is so that a team can be intellectually honest with themselves about why they should implement a particular feature.

Responsibility #3: Execution

Execution is all about shipping the product and getting it in front of users so that they can derive value from it. Sometimes this means ASAP and sometimes it requires timing the market effectively. Teams have many different approaches on how they execute ranging from light weight idea, test and deploy methodologies to full-blown specifications, sign-off, development, QA and release cycles. Adam describes the 4 parts of execution that are critical to its success:
  1. Product specification: The necessary level of detail to ensure clarity about what the team is building.
  2. Edge case decisions: Very often, unexpected and complicated edge cases come up. Typically, the product manager is on the line to quickly triage those decisions for potential ramifications to other parts of the product.
  3. Project management: There are always expectations for time/benefit trade-offs with any feature. A lot of these calls end up being forced during a production cycle, and the product manager has to be a couple steps ahead of potential issues to ensure that the final product strikes the right balance of time to market and success in the market.
  4. Analytics: In the end, the team largely depends on the product manager to have run the numbers, and have the detail on what pieces of the feature are critical to hitting the goals for the feature. They also expect the product manager to have a deep understanding of the performance of existing features (and competitor features), if any.

Top 5 Growth Hacks to Consider

1. The Minimal Homepage


Dropbox, Pinterest and Quora are famous for their minimal homepages. When you start out and envision designing a new homepage for your product with all of its features, it feels counterintuitive and hard to commit to a homepage with only one sentence, one photo and one call to action above the fold.

Anyone who hasn't measured this effect before will argue and argue that a minimal homepage like the one above will not convert as well, because people just won't understand enough about the product and will therefore not sign up. It's always a good idea to A/B test your own minimal homepage against other types like short- or long-form ones, but time and time again minimal wins out (and sometimes significantly). The reason for this is that the page is simple for anyone to understand and the call-to-action is really clear because it's the only one. We've all been to a homepage with 10 different products or large amounts of verbiage, and it's incredibly hard to know where to start or what to click on.

2. Send Push Notifications to Increase Retention


Using email to remind users about your product and re-engage them is a well-known growth hack. Email is definitely an old-school channel (not as sexy as social) but it can be very effective once you've taken the time to master it. Adam Nash (former VP of Product Management at LinkedIn) had this to say about email as a traffic source:
Email scales, and it's inherently personal in its best form. It's asynchronous, it can support rich content, and it can be rapidly A/B tested and optimized across an amazing number of dimensions. The best product emails get excellent conversion rates; in fact, the social web has led to the discovery that person to person communication gets conversion rates over 10x higher than traditional product emails.
Having said this, email is a saturated channel. Most companies are not doing email well, but they're doing it nonetheless, which adds to the sheer volume we all receive in our inboxes on a daily basis. The Law of Shitty Clickthroughs is a common problem we face when we all rush into a channel and saturate it, and recently it's been made slightly worse by Gmail's new tabbed inbox. MailChimp has some great data that shows a 0.5% to 1.0% drop in open rates because of this change.

Don't stop sending email, as it's still a great channel, but try sending push notifications if you have a mobile app. Relevant push notifications have a significant effect on retention rates. To get an idea of how push notifications compare to email open and click-through rates, this post gives us some insight into their effectiveness:

  • 30%-60% open rates
  • 4%-10% interaction rates (with spikes as high as 40%)

Pro Tip: Use Mixpanel (event-based analytics) to register users of your website and/or mobile app. Once you've done that you can send them emails, push notifications or text messages, manually or automatically, based on specific events a user performs within your application. Not only that, but you can build intuitive funnels to track the exact effectiveness of your email, push or text message campaigns. A rough sketch of the registration side is shown below.
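This sketch uses Mixpanel's Python library; the profile properties and event name are hypothetical, and the actual email/push delivery would be configured in Mixpanel (or a connected messaging tool) rather than in this code.

from mixpanel import Mixpanel  # pip install mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # hypothetical project token

def register_user(user_id: str, email: str, push_token: str):
    """Store the profile details needed to target this user later."""
    mp.people_set(user_id, {
        "$email": email,                # used for email campaigns
        "ios_push_token": push_token,   # hypothetical property for push targeting
    })

def record_abandoned_cart(user_id: str):
    """An event that a campaign could key off (e.g. send a reminder push)."""
    mp.track(user_id, "Cart Abandoned")

register_user("user-42", "jane@example.com", "abc123-device-token")
record_abandoned_cart("user-42")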

3. Kill a Feature


This growth hack is less about a quick win and more about getting to the core of what might actually help your product grow long term. It can also be used to make sure that your product is continually refined and made simpler to use, instead of leaving unused features hanging around to cause product bloat and therefore user confusion/frustration.

The above image is from a presentation by Dave McClure. It's pretty self-explanatory. Andrew Chen talks about this in a similar way when he asks: "Does your product suck? If so, stop adding features and 'zoom in' instead".

4. Embeddable Widgets


<iframe width="560" height="315" src="//www.youtube.com/embed/PRDtRTuMZtM" 
frameborder="0" allowfullscreen></iframe>

Embedded video, like the one above (along with its <iframe> HTML code) was YouTube’s famous growth hack that helped them scale to hundreds of millions of users. In return for free video hosting, blogs and websites promoted the YouTube brand by embedding YouTube-hosted videos on their sites. It was a win-win for everyone involved and YouTube capitalized by acquiring users organically.

Many companies know about this growth hack and have subsequently created their own widgets (Vimeo, SlideShare, Healthtap, etc.) that can be used in blogs or websites. Sometimes this works well (SlideShare, for instance) but it can also fall flat with your user base. Having an embeddable widget is necessary but not sufficient to harness this growth hack's potential. There has to be a strong incentive for users to spend the time and energy to embed your widget in their site, and a strong need to share the content delivered by the widget, in order for this growth hack to bear significant fruit.

5. Two-sided Referral Incentives



During Dropbox's growth, the company tried a number of marketing channels like long-tail search and paid advertising, but they didn't work out that well. Dropbox then began experimenting with a referral program, and it worked really well. The structure of that referral program is the most interesting part. We've all registered for a service, been asked to invite friends and been told that if they join we'll get some special offer. It feels pretty spammy to give a company your friend's email address when you get all the benefit. Dropbox knew this, so they devised their referral program to be two-sided: if you invite your friend and they join, BOTH of you get the benefit - each person receives 250 MB of additional storage. The psychology of this is great; it no longer feels spammy to invite your friends. You actually feel like you're helping them out by giving them more than they otherwise would have had without your invite.
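To make the two-sided structure concrete, here is a minimal sketch of the crediting logic, assuming a flat 250 MB bonus per completed referral; the cap and account structure are illustrative, not Dropbox's actual implementation.

REFERRAL_BONUS_MB = 250          # bonus granted to each side of the referral
MAX_REFERRAL_BONUS_MB = 8_000    # hypothetical cap on total referral bonus

def apply_referral(accounts: dict, inviter_id: str, invitee_id: str) -> None:
    """Credit BOTH the inviter and the new user when the invitee signs up."""
    for user_id in (inviter_id, invitee_id):
        account = accounts.setdefault(user_id, {"bonus_mb": 0})
        account["bonus_mb"] = min(account["bonus_mb"] + REFERRAL_BONUS_MB,
                                  MAX_REFERRAL_BONUS_MB)

accounts = {"alice": {"bonus_mb": 500}}
apply_referral(accounts, inviter_id="alice", invitee_id="bob")
print(accounts)  # both alice and bob gain 250 MB of bonus storage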

Tuesday 27 August 2013

Classifying Knowledge for Engineering and Product Development

When designing and building a software product there are many things that individuals or teams know, and there are also things that they don't know... this is pretty obvious. However, this simplified distinction misses some of the inherent complexity about knowledge that we can or cannot know. Donald Rumsfeld describes this larger epistemological concept, which can be quite complex, in a pretty straightforward way:
There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns – there are things we do not know we don't know.
These three ways of classifying knowledge can be very helpful during product development for both engineering and product teams. Not only do they help to communicate the various types of things that we may or may not know, they also showcase that there are things we don't even know that we don't know - i.e. concepts, tools, processes and best practices that might already exist but that we don't know about and therefore don't know to look for (which I will explain in more detail below).

1. Known Knowns

"Known knowns" are those things that each of us has previous experience with or those things that are held as best practises within the industry that we know about already. Here are some for engineering and product teams:
  • Engineering: Computer languages that are closer to the bare metal of a machine (e.g. C/C++) are generally faster than those languages that run on a virtual machine (e.g. Java, .NET, Python, etc.) given the same algorithm and data.
  • Product: When designing conversion pages/screens, each additional step (button click, user input, screen swipe, etc.) between the user's entry point and the objective they are to accomplish results in some trivial or significant amount of drop-off. This can easily be seen in a conversion funnel by measuring and then observing the individual percentage drop-offs between each successive step.
In the engineering case you usually don't need to write the exact same algorithm in multiple languages and measure the execution time, but you can be confident that, as a general rule of thumb, a C/C++ algorithm will be slightly (or maybe even significantly) faster than the same algorithm running in a virtual-machine-based language. Testing always makes sure this assumption is valid, and the results can sometimes conclude the opposite, but engineers make so many performance or functional assumptions when writing code that validating each and every one is not realistic.

In the product case you can be almost certain that the removal of a single non-essential step will increase the conversion rate by some amount (exactly how much is something that would have to be measured). For example, many conversion flows for social networks include the option to import contacts in order to find your friends/colleagues/acquaintances so that you can connect with or follow them (Facebook, LinkedIn and Twitter do this). In other products it's less about satisfying the core functionality of the product in terms of connections and more about being a virality growth hack to acquire more users. Importing contacts in the first case is obviously vital to engage and retain users in a social network, but in the second case the step could be removed at the cost of acquiring fewer users (i.e. no virality). Removal of this step would almost certainly increase the overall conversion rate (since the step would no longer cause drop-off), but usually the additional users gained from the contact import outweigh the users who drop off at that step. Therefore it's usually desirable to leave the contact import step in the conversion flow even though it decreases the overall conversion rate by a slight amount.
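Here is a small sketch of that per-step drop-off measurement, using made-up step counts for a signup flow.

# Hypothetical user counts at each successive step of a conversion flow.
funnel = [
    ("Landed on signup page", 10_000),
    ("Started form", 6_500),
    ("Imported contacts", 4_200),
    ("Completed signup", 3_900),
]

for (prev_step, prev_count), (step, count) in zip(funnel, funnel[1:]):
    drop_off = 1 - count / prev_count
    print(f"{prev_step} -> {step}: {drop_off:.1%} drop-off")

overall = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall:.1%}")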

As a final note, the better you are and the more experience you have, the more "known knowns" you accumulate. Because of this you work more efficiently, since you have access to a wider array of expertise than you otherwise would have had. According to Malcolm Gladwell's "10,000-Hour Rule", written about in his book Outliers, it may take 5 years or more (40 hours/week over 5 years is roughly 10,000 hours) to accumulate enough knowledge to become an expert in a given field.


2. Known Unknowns

"Known unknowns" are those things that we may have some theory for or gut feeling about but don't actually know what the correct approach or answer might be. It might even be that you know that something exists but don't know much else about it. Essentially its being aware of your own ignorance about something. Many good senior engineers or business analysts face these types of issues on a day to day basis and are pretty good at either asking questions and listening to others, performing research to uncover the right approach/answer or, if not available, they experiment or prototype until it becomes more evident what the right approach/answer might be. Once this is done "known unknowns" turn into "known knowns" which is obviously a good thing.

For many things, "known unknowns" are similar across engineering and product teams. A software engineer or business analyst might know that something exists which could help improve the product (say a new Python library or an integration with another product) but not really know much about it. All they have to do is begin researching it, and given sufficient time they will figure out what they need to.

There is of course a specific area where these teams differ. The worlds of engineering and product start to diverge for "known unknowns" when predictability is accounted for. Software systems are inherently predictable: they generate the same output given the same input - they're built this way to manage complexity. There are things that seem random, but these are usually the result of unexpected user input, actual random number/string generators and/or partial system failures that cause intermittent behaviour. The advantage of predictable systems is that various solutions can be re-used across domains; if something has been used once before, chances are it can be used again in a different context with the benefit of a lower learning curve. There is also the benefit of a tangible "done criteria" for software systems. If a module within an application is supposed to parse a CSV file and insert each row into a database, then it's pretty easy to determine when all the work is done, either by observing the results or by writing some unit/integration tests to verify things. The consequence of this can be seen in all the open-source libraries and tools that are built by one set of engineers and then used by thousands of other engineers for their own applications in a different context.
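As a sketch of that "done criteria" example: parse a CSV file, insert each row into a database and verify the row count matches (the file name and schema are made up).

import csv
import sqlite3

def load_csv_into_db(csv_path: str, conn: sqlite3.Connection) -> int:
    """Parse a CSV of users and insert each row; returns the number of rows inserted."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)")
    with open(csv_path, newline="") as f:
        rows = [(r["name"], r["email"]) for r in csv.DictReader(f)]
    conn.executemany("INSERT INTO users (name, email) VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

def test_load_csv_into_db(csv_path="users.csv"):
    """The tangible 'done criteria': every CSV row ends up in the table."""
    conn = sqlite3.connect(":memory:")
    inserted = load_csv_into_db(csv_path, conn)
    stored = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert stored == inserted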

For product teams the inherent challenge is that users behave differently and are very hard to predict. Users behave differently across different products, and even more unsettling is that their behaviour changes over time on the same product. Essentially, users are unpredictable, and things that have worked in the past may not work in the future. It's analogous to the Law of Shitty Clickthroughs: what works today may not work tomorrow as users learn and respond to your own product and all the other products they use. On top of this, it's never as easy as simply mimicking/copying an existing product's features and expecting the same results. Engineering teams can use the same open-source library and be pretty confident that the results will be the same. However, product teams who decide that a given feature might be worth mimicking/copying for their own product are really gambling on the idea that their users are the same type of users as the other product's - and this is rarely the case. For example: if you're building some type of social network you're most likely observing what Twitter, Facebook, LinkedIn, Pinterest, etc. are doing, but simply turning your UI into a Pinterest-type feel or adding hashtags may not have the same results as those products had, due to the fundamental difference that their users are not the same as yours.

Having said all of this, engineering teams are often able to turn "known unknowns" into "known knowns" more easily than product teams by researching and spec'ing out the design. Even before anything is built it's usually apparent that the given solution will work with enough engineering time (scalability aside). For product teams it's a constant challenge to start with "known unknowns" and turn them into "known knowns" after thoroughly researching possible options or coming up with theories of how users might behave. Until the intended product feature addition or change is fully implemented (or, if you're lucky, maybe just some small component of it that users interact with), it's almost impossible to know how users will behave.

Conclusion: for product teams, "known unknowns" usually stay that way until you've actually shipped your product and seen how users are using it. It's only at this stage that "known unknowns" become "known knowns".


3. Unknown Unknowns

This is the scary stuff. "Unknown unknowns" are the things you don't even know that you don't know about (although in a sense they're not that scary, because you're completely ignorant of them). They're not even on your radar and you don't know they exist. The dangerous part of "unknown unknowns" is that even with enough time or energy they won't magically turn themselves into "known unknowns". In the engineering case this is like not knowing that an entire field of academic study has worked on and solved a particular problem with an elegant algorithm that, if you just knew about it, you could use in your application, saving you and your team hundreds of hours working on your own solution. In the product case it's like being completely ignorant that another business somewhere in the world has a competing product that is vastly better than yours, even though you and your team might have done extensive market research.

So what do you do with "unknown unknowns"? Well here is the only thing Mike Gagnon thinks you can do: 
The best you can do with “unknown unknowns” is be aware that the category exists and maintain an open mind. This way when information presents itself to you, you can cognizantly realize that it was an “unknown unknown” and then you’ll either be in the “known knowns” or “known unknowns” category.
This is why being a prolific reader is so important. It feeds your mind with a ton of new information, ideas, concepts and solutions that you had no idea existed. Everything you read, are told about or experience first-hand has the opportunity to identify "unknown unknowns", which at that very moment no longer remain "unknown unknowns".

My grade-twelve English teacher once told our class that we should not spend the rest of our lives reading books that agree with our current belief system, but rather deliberately read ones that disagree with it so that we grow and change for the rest of our lives. I don't think I really had any idea what he was talking about back then, but somehow, more than a decade later, I still remember those words and now understand what he meant.

4. Unknown Knowns

This one wasn't mentioned by Donald Rumsfeld but it's worth mentioning briefly. Scholars of logic will have noticed that there are two words with two possibilities each, which translates into 4 possible outcomes. Mr. Rumsfeld mentioned three of them and obviously didn't talk about the fourth, but it's worth asking whether or not "unknown knowns" are even possible.

It turns out that the well-known psychoanalytic philosopher Slavoj Žižek extrapolated a fourth category, which he naturally called "unknown knowns". He said that "unknown knowns" are those things that we intentionally refuse to acknowledge that we know. Slavoj even wrote an essay on the matter, but it speaks more about politics than the underlying concept of unknowingly knowing something. Essentially, these are things that we actually do know but either vehemently deny or even go so far as to suppress the knowledge we have about them.

The engineering and product teams that I've managed have been highly collaborative and open to exploring new ideas, even if that meant redoing and/or throwing out inferior work. So when one person found a solution that could significantly change or alter currently held assumptions, they were always encouraged to bring it to the team. In light of this, there was never much incentive to suppress knowledge across the team in order to avoid telling the truth, even if it hurt to tell it.

There could of course be a case where an individual has some incentive to suppress the knowledge they have about something and therefore harbour an unknown known. This is obviously a reality, but building the right culture with the right people helps immensely in avoiding this behaviour outright, or at the very least discourages it.

Sunday 28 July 2013

What to do if you drank the Kool-Aid on bullshit metrics

I've always been interested in quantitative data and the ability to derive insights from that data. So when a VC Firm (Andreessen Horowitz) raised a ton of money ($10.25M) for an analytics start-up (Mixpanel), I took notice. Soon after raising the money, Marc Andreessen and Suhail Doshi came out swinging with a punchy line aimed at getting to the heart of a common problem in the technology industry:
Some people call page views and the like “vanity metrics,” but Marc Andreessen and Mixpanel founder Suhail Doshi have decided they want to raise the shame level by calling them “bullshit metrics.”
Andreessen told me in an interview last week, “People think they’re richer if they have Zimbabwean dollars than U.S. dollars.”
“We and other investors need to get more vocal,” Andreessen said. “Page views and uniques are a waste of time.” 
Andreessen said his firm won’t throw start-ups out the door if their pitches include bullshit metrics - but it’s perhaps something they might consider.
Liz Gannes @ AllThingsD

So if you drank the Kool-Aid and decided that random download stats or pageview metrics, that go up and to the right, are pretty much worthless, then where do you turn? What do you measure that is more insightful than these bullshit metrics?

I've written previously about Dave McClure's "Startup Metrics for Pirates: AARRR" and it's definitely worth starting there for an overarching framework for thinking about the whole customer lifecycle. Once you understand that lifecycle for your particular product, you can begin to integrate an analytics platform into your product that captures the information you need and can then act on. Event-based analytics (the kind of thing Mixpanel excels at) is based around capturing and then segmenting all the events that your users perform. By segmenting the aggregate of these events you are able to build awareness of and insight into who your customers are, how they were first introduced to your product and how they are currently using it. Segmentation is a great starting point, but once you have all your events set up there is another, even more valuable tool called cohort analysis. Cohort analysis allows you to measure customer retention so that you can answer the question of whether or not your customers love your product. Andrew Chen has a great blog post where he asks that very question.
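Here is a rough sketch of a monthly retention cohort computed from raw event timestamps with pandas; the sample data and column names are assumptions, not any particular tool's export format.

import pandas as pd

# Assumed raw events: one row per (user_id, event_time) pulled from your analytics store.
events = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3, 3, 1],
    "event_time": pd.to_datetime([
        "2013-05-03", "2013-06-10", "2013-05-20", "2013-07-02",
        "2013-06-15", "2013-07-20", "2013-07-25",
    ]),
})

events["event_month"]  = events["event_time"].dt.to_period("M")
events["cohort_month"] = events.groupby("user_id")["event_month"].transform("min")
events["months_since_signup"] = (
    (events["event_month"].dt.year  - events["cohort_month"].dt.year) * 12
    + (events["event_month"].dt.month - events["cohort_month"].dt.month)
)

cohort_sizes = events.groupby("cohort_month")["user_id"].nunique()
active = events.groupby(["cohort_month", "months_since_signup"])["user_id"].nunique()
retention = active.unstack(fill_value=0).divide(cohort_sizes, axis=0)
print(retention)  # rows: signup month, columns: months since first use, values: fraction retained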

So once you have a few months' worth of cohort data, how can you determine whether your product is above or below par? It turns out this is a very hard question to answer because it usually depends on the type of product, your customers and a variety of other factors (essentially there is no "standard" that works for every product). This doesn't stop some people/companies from speculating, so here are a few reference points:


So as a very general rule of thumb, a retention rate of 30% month after month seems like a decent number to benchmark against. But a word of caution: definitely don't consider that number to be some special threshold beyond which your product can be deemed successful in the marketplace. The matrix from Flurry above was created in Oct 2012, but an earlier version first appeared back in Sep 2009. If you look at how the retention rates for social networking apps changed over those 3 years it's startling: back in 2009 social networking apps had a 90-day retention rate of approx. 15%, as opposed to approx. 34% in 2012. 

As always, in the technology business, the goal posts continue to move every single year. Ben Horowitz, of VC firm Andreessen Horowitz, said this:
The technology business is fundamentally the innovation business. Etymologically, the word technology means “a better way of doing things.” As a result, innovation is the core competency for technology companies. Technology companies are born because they create a better way of doing things. Eventually, someone else will come up with a better way. Therefore, if a technology company ceases to innovate, it will die.

Friday 26 July 2013

RESAAS reblasts App Featured on Appcelerator Titanium Blog

In a previous post I showcased the iOS and Android App that my team and I at RESAAS released in March 2013 after only 2 months. The App was built using Appcelerator's Titanium cross-platform framework after we migrated from an older PhoneGap implementation. We chose to go with Titanium due to its (almost) write-once-run-everywhere approach.

Once we released the App, the folks over at Appcelerator took notice of it and loved the look & feel of the App, specifically the photo-heavy activity feed that showcases real estate professionals' listings. They subsequently asked me to respond to a number of questions they had about our App for an upcoming post on their developer blog. 


The official RESAAS blog also has a couple of posts about other features related to the reblasts App:

Mixpanel Implementation of Startup Metrics for Pirates: AARRR

Dave McClure of 500 Startups (a seed accelerator and investment fund) has a great slide presentation on SlideShare called Startup Metrics for Pirates: AARRR!!!. Don't let the old-school graphics fool you; it's packed with a ton of insight about how to think strategically about your startup in terms of quantifiable metrics.

In a previous post about Chamath Palihapitiya and focusing on the right things, I included his simple 4-stage growth framework with the following stages: Acquisition, Activation, Engagement & Virality.

Dave's metrics, called AARRR, have an additional stage (5 in total): Acquisition, Activation, Retention, Referral and Revenue. Dave's Referral stage is similar to Chamath's Virality stage, except that Dave goes into some detail about using it to effectively acquire new users on the back of your existing users. Chamath advocates quite passionately for not focusing on the concept of virality at all, due to its elusive nature and the distraction it can cause. There are some very subtle differences between virality and referral, but I will discuss those in an upcoming post.

Chamath's team at Facebook never spoke about virality or k-factor while building out their incredibly successful social network. He believed it was essential for his team to focus on the quality of the actual product and keep trying to make the overall experience better and better for users. Chamath says far too many people chase the holy grail of trying to make their "bad" product viral in some way, instead of finding ways to make the product better (or even just decent) so users actually keep using it over time.

Either way, Dave's Startup Metrics for Pirates (described in the presentation above) give a solid footing to any startup interested in building a quality product/service and a business around it. Measuring how successfully things are going during a startup's lifecycle is difficult due to the dynamic nature of a startup, but this simple framework and its associated metrics are generic enough to remain relevant as a startup morphs into a revenue-generating business. 

I am a huge fan of Mixpanel's event-based analytics platform. I use it extensively at RESAAS and love the ease of setup and the ability for both engineers and marketers to easily sift through tons of data, explore theories and then develop insights that impact future product decisions. If Dave's Startup Metrics for Pirates (AARRR) is something you decide to implement yourself, then I highly suggest trying Mixpanel as the analytics platform behind those metrics (other products you could use include Google Analytics, Kissmetrics, Woopra, Flurry, etc.). What's nice is that the Mixpanel team even put together a great blog post back in November 2012 about using Mixpanel to implement Dave's AARRR metrics.

Monday 1 July 2013

Growth and Distribution

One of the many joys of working in a technology company is seeing your product used by actual people. Right from the beginning, when the initial beta users start signing up, there is a sense that all the hard work (thinking + coding + coffee) might come to something. Then, as user concentration increases, the first few sparks of social interaction between users begin. Finally, when traction sets in, entire communities of people start engaging around an idea, and then the next idea, and on and on it goes.

I got my first real exposure to rapid scale and the interaction between a swarm of users when I built a scavenger hunt game called SRCH for Starbucks. The game was coded, tested and ready to go when Starbucks released their first clue. Within minutes, tens of thousands of people descended on our application, trying frantically to win one of the coveted prizes. They frequently turned to Twitter to communicate with each other, trading strategies, venting their frustration at losing or gloating when they won.

Since starting at RESAAS back in 2011 as employee #1, we've all worked really hard to build an enterprise social platform that works well within the dynamics of the real estate industry. From just a handful of beta users in the early days to a thriving community of real estate professionals today, RESAAS has grown by leaps and bounds. 

Great products need great technology, but even more than that they need great distribution. Simply finding low-cost channels that put your product in front of your intended audience is not enough. The competitive advantage belongs to those companies who are able to optimize the conversion and retention of newly acquired audiences. It requires deep analytical insight into actual user behaviour along with a rigorous process for testing, iterating and optimizing at a rapid pace. This is why we set up a Growth Team at RESAAS. Its purpose was, and still is, to accelerate user growth across the platform. To illustrate how far we've come from just a handful of beta users, the following real-time data shows RESAAS user activity across the US:

Wednesday 26 June 2013

Blogging for HousingWire for Inman Real Estate Connect SF 2013

I'll be blogging for HousingWire for the upcoming Inman Real Estate Connect San Francisco conference on July 10-12th, 2013.


I'll post some of the blog posts back here once they've been published by HousingWire.

-----

Update - Here is one of the posts: http://www.housingwire.com/rewired/2013/07/11/interoperability-between-trulia-zillow-realtorcom-platforms