Super.com Referrals
This project focused on exploring, defining, and then designing a novel two-sided referral program: a key value-add feature that benefited customers while advancing the core business objectives of a) lowering the cost to acquire customers, b) increasing net revenue from earnings, and c) growing the user base.
Objective
From the business point of view, there was an untapped opportunity to build a referral program wherein existing Super.com customers could be incentivized to bring in new customers. This would not only lower the company’s overall cost of acquiring customers, but also benefit downstream metrics such as engagement and retention, since referred users are generally more active.
For the initial launch, success was expected to be modest and was to be measured by:
A referral K-factor of 0.20 (20%)
3k net-new earners acquired
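To put that K-factor target in context, here is a rough reading based on the common definition of K-factor; the definition is assumed (it wasn’t spelled out in the brief), and the invite and conversion numbers below are purely illustrative, not program targets.

```latex
% K-factor: average invites sent per customer (i) times the conversion rate per invite (c).
% This is the common industry definition, assumed here rather than stated in the original brief.
\[ K = i \cdot c \]
% One illustrative way to hit the 0.20 target: each referrer sends 2 invites and 10% of them convert.
\[ K = 2 \times 0.10 = 0.20 \]
% In other words, every 5 existing customers would bring in roughly 1 new customer per referral cycle.
```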
From the customer point of view, Super.com customers are more dollar-conscious than average; they value every dollar they can keep in their pockets. The referral program was launched to serve that ongoing motivation: by referring other users to our earnings program, referrers would receive an ongoing, passive stream of 15% commissions on everything their referrals earned.
My Role
Design Management, UI/UX/Interaction Design, Information Architecture, Design Vision and Strategy, Project Management and Design Process, Content Strategy
I was responsible for directing a team of two other designers (an intern and a mid-level designer) through initial discovery and definition. In parallel, I articulated a design strategy and vision, then executed and delivered designs for an MVP of the Super.com referrals experience.
We designed a two-sided experience. For Referrers, my goal was to clearly articulate value and make sharing as seamless as possible. For the Referred user, my goal was to help these users experience value as soon as possible — which incidentally also benefited the Referrer. All, of course, while balancing deadlines, business and marketing partner needs, and technical limitations.
Discovery & Definition
Initial activities from kickoff were focused on competitive analysis and user journey mapping. Our earnings program competed directly with a few industry leaders, and we took inspiration from analyzing those competitors, which helped the team make a decision about the incentive program itself. User journey mapping exercises helped our product and business teams empathize with our users, in an effort to demonstrate and advocate for potential experience investments.
The competitive analysis was completed by our team’s intern, with direction from me to focus on Business Model and Experience, and to provide recommendations based on best practices and insights.
User journey mapping exercises were completed by our team’s junior designer, with direction from me to articulate the customer experience so we could gather insights from cross-functional stakeholders, identify the touchpoints we wanted to target, uncover additional areas of opportunity, and surface potential pain points.
This was also a necessary step to identify the core requirements for both sides of the experience: the Referrer and the Referred user.
The “Referrer” experience as a journey map.
The “Referred user” experience as a journey map.
Design strategy
Knowing how quickly our team moved, it was essential to understand our end state so we could make better-informed choices as we iterated. While the team worked on other Discovery & Definition activities, I developed an end-state concept based on competitive analyses, stakeholder interviews, and customer discovery (existing earnings research from our UXR team). This resulted in lo-fi wireframes that our team continued to leverage as we considered MVP scope, how we would scale the experience, and future phases of work.
Rapid usability testing
Knowing that our MVP experiment would gather useful quantitative data, I quickly planned, assembled, and launched a usability test of both the Referrer and Referred user experiences to gather qualitative insights. The primary objective of the testing was to identify how we might iterate in the future. The secondary objective was to identify any showstoppers that would necessitate a pause in the launch schedule.
I discovered a few really important insights that would help us in the next iteration. One immediately actionable insight for the MVP was making sure any illustrations of potential earnings (referee gets $100, you get $20) were clearly labeled as examples. Users expressed confusion about the examples we provided: they were unsure whether the figures were requirements or limitations of the program (neither being accurate), which would likely have negatively affected user experience and behavior. We addressed this before the MVP launch.
Rapid unmoderated usability testing to better understand the Referrer
Rapid unmoderated usability testing to better understand the Referred user
MVP Referrer experience
As we evaluated the potential scope of the experience, we decided to focus the Referrer experience on primary entry points and a primary landing page, so we could devote our resources to building a more effective Referred user experience. This led to an exercise that identified the areas of our app with the highest potential view and clickthrough rates.
In the end, with the support of the Product team lead, we decided to launch the MVP as an Amplitude experiment with an assortment of entry points, to identify which would achieve the highest conversion rates (CVRs). This would allow us to iterate quickly and then release to 100% of customers with targeted messaging and experiences for specific customer groups.
As design lead, I coordinated with our data lead to analyze pre-launch user behavior and identify the best initial placements for entry points. I then worked within our design system to design the referral entry point UI. Finally, I designed and delivered the MVP referral landing page so users could immediately start sharing and we could start gathering data.
The most successful placement — in terms of the CVR from clicking on a placement to sending a referral link — ended up being one I had to fight for.
Our primary Referrer use case was someone who would navigate the app in search of opportunities to earn money. That would naturally bring them into a specific touchpoint in the app called the Offerwall. Due to a number of factors, I wasn’t able to use existing design system components on the Offerwall, so this placement was initially scoped out of the release. However, I quickly concepted, designed, and proposed a new component for this placement, worked with my eng partner to build it quickly, and made it available for the MVP launch.
The resulting pop-up UI generated the second-highest clickthrough rate (CTR) of all placements and had the highest CVR.
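To make the distinction between the two funnel metrics concrete: CTR measures how often a placement is clicked once viewed, while CVR (as used here) measures how often a click turns into a sent referral link. A minimal sketch of how placements could be compared on both metrics is below; the placement names and event counts are hypothetical, not the actual experiment data.

```python
# Hypothetical sketch: comparing entry-point placements on CTR and CVR.
# Placement names and event counts are illustrative only, not real experiment results.
placements = {
    "offerwall_popup": {"views": 10_000, "clicks": 900, "links_sent": 270},
    "profile_banner":  {"views": 25_000, "clicks": 2_400, "links_sent": 480},
    "home_carousel":   {"views": 40_000, "clicks": 1_600, "links_sent": 320},
}

for name, counts in placements.items():
    ctr = counts["clicks"] / counts["views"]        # clickthrough rate: clicks per view
    cvr = counts["links_sent"] / counts["clicks"]   # conversion rate: referral links sent per click
    print(f"{name}: CTR={ctr:.1%}, CVR={cvr:.1%}")
```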
MVP Referred experience
The goal for our Referred user’s experience was to guide them to their first earnings opportunities. We focused on a linear onboarding flow based on an existing experience I had designed for our partner Earnings-focused team. That flow had proven very successful at driving a similar user cohort toward similar earning goals.
This guided onboarding incentivized specific activities, all chosen because existing data demonstrated they led to key setup, a-ha, and habit moments. A user would receive a bonus for installing their first game, earning their first $1 in rewards, and making their first referral.
It was essential that the Referred user move quickly from sign-up to earning, since this was how the Referrer would get paid. MVP data would guide us towards re-evaluating this focus, but at launch, we hypothesized it would be the best experience for Referred users and Referrers alike.
I designed all landing pages, tracking widgets, and the bonus infrastructure with celebration moments and program details.
MVP Launch Prototypes
Super.com is a data-driven, highly iterative company. I have never worked anywhere quite like it. The shipping cadence was incredibly fast and the release cycle was measured in days, not weeks or months. The final MVP lacked many of the features I would have wanted, but that wasn’t the goal of the initial release; the goal was to gather data to make more informed decisions for the next iteration. To that end, we immediately got to work identifying which user cohorts were most engaged, which entry points were most effective, and whether the program had the potential to scale.
The two prototypes below illustrate the two core use cases: the Referrer and the Referred.
Figma tip: Press R to restart the prototype
MVP Impact
A K-factor of 1.0+ means your product is viral, and that is ultimately what a referral program aims to achieve.
The primary KR for the MVP launch was a K-factor of 0.20 (20%) and we managed 0.022 (2.2%). This was a big miss.
Our secondary KR was to acquire 3k monthly earners (not just sign-ups) via the referral program and we acquired 774. Another miss.
But all was not lost! Our team knew these KRs were likely out of reach. What was important was that the results challenged a core built-in assumption: that the financial incentive alone — an unlimited, ongoing 15% commission — would be enough to motivate our customers to refer another user. That assumption was based primarily on existing user and market research and was validated in early usability testing. However, real user behavior is hard to predict, and this truly defied our expectations.
My own impact could be seen in identifying the touchpoints in the user journey that ended up with the highest CVRs; in leveraging relevant, existing experiences to reduce tech and design debt and help our team move faster; and in providing timely research insights.
Learnings
Surprise! This is actually not the end of this case study! The most exciting aspects of this story are the experiments and iterations that followed the MVP launch. At Super.com, we used the Reforge framework’s PV ≥ PC + F equation, which, applied to this problem, meant we had over-valued the Perceived Value, under-valued the Perceived Cost and/or Friction, or both. All subsequent experiments and iterations would consider how to increase Perceived Value (by modifying the referral incentive), decrease Perceived Cost (by thinking critically about which user cohorts we should be targeting), and decrease Friction (by making sharing even easier and nudging at the right time).
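For readers unfamiliar with the framework, here is a rough formalization of how I read that equation in this context; the notation and phrasing are mine, a paraphrase rather than Reforge’s own wording.

```latex
% Adoption condition (my paraphrase of the Reforge framing): a user refers only when
\[ PV \ge PC + F \]
% where PV = Perceived Value, PC = Perceived Cost, F = Friction.
% The MVP miss suggests the inequality was reversed for most would-be referrers:
\[ PV < PC + F \]
% so each follow-up experiment targeted one term: raise PV, lower PC, or lower F.
```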
For the follow-ups to this case study, reach out and let’s set up a time to chat!