An Experimental Approach to Product Discovery

At LX Berlin, Babbel’s Product Design team shares their method and unveils an exciting new product.

BERLIN — On 27 October 2022, Babbel hosted the ninth edition of its LX Berlin Meetup, a community that convenes every three months with a mission to share knowledge about topics and innovations in LX (Learning eXperience) with the education-technology community and friends. Topics at LX Berlin are open to all ed-tech domains and audiences, and include technology, design and educational methods. The series is hosted at Babbel’s Berlin headquarters. At the most recent Meetup, three Babbel experts — Markus Hauer, Director of Product Design; Anna Stutter Garcia, Senior Product Designer; and Matheus Winter, Product Designer — held an interactive presentation on the team’s application of scientific methods in Product Design and Discovery.

The Babbel Learning Experience

Director of Product Design Markus Hauer first familiarised the LX Berlin audience with the workings of Babbel’s Berlin-based campus and its 800 or so employees, as well as recent innovations in Babbel’s products. The Babbel app — which has sold 10 million subscriptions worldwide and counting — helps users learn a new language through self-study of short, dynamic lessons designed by over 150 experts in contemporary language learning. Babbel’s methods were recently recognised in a Yale University study, which found that 92 percent of users improved their fluency after two months of use.


In recent years, the Babbel team has also enriched the user experience with a holistic approach to language learning, providing a rich array of supplementary cultural content — such as podcasts and games — produced by native speakers of Babbel’s 14 languages of study. 

The Product Design team

Markus Hauer also offered a brief rundown of the 40-member Product Design team’s structure and function. Babbel’s team operates in ‘clusters’ that focus on specific areas of the product. Product Designers are permanently embedded in these clusters, whereas team members from User Insight (formerly user research, or UX) and Content Design are ‘semi-embedded’, moving between clusters as they are needed. Finally, several employees from the centralised teams of Design Systems, Experience Vision and Design Operations also operate in each cluster.

Design Systems, Markus explained, is a team in a rebuilding phase, while Experience Vision is an end-to-end team and Design Operations acts as the glue that pulls the whole cluster together, enabling the team as a whole to maintain a three-part focus on the user side, the business side and the technology side.

Each core member of the inner ‘cluster’ of a team — the Product Manager, Engineering Manager and Product Designer — represents a key component of a product or feature’s success: the Product Manager ensures viability; the Engineering Manager focuses on feasibility; and the Product Designer concentrates on desirability.

Markus also shared with the audience the Product Design team’s new venture on the content platform Medium, which features different designers from a variety of backgrounds discussing topics similar to the LX Berlin presentation, with the aim of generating conversation and inspiration, and open-sourcing what Babbel is doing.

An experimental approach to Product Discovery

Senior Product Designer Anna Stutter Garcia and Product Designer Matheus Winter then walked the audience through Babbel’s innovative, science-based approach to Product Discovery, which the team explained is a phrase used at Babbel to signify the process of identifying a problem that needs solving, and then determining how to solve it. 

Too often, teams rush to a solution straight away, “falling in love” with the first or second idea that comes along rather than going through a truly thorough process that would reveal what the real problems were all along. Rushing to a solution, Stutter Garcia and Winter emphasised, may result in a ‘solution’ that has little to no impact, or that fails to deliver in some important way, heavily demoralising the team that invested so many hours into what turned out to be a rushed process.

Employing the whimsical example of sending users to space, the team enumerated several assumptions a rushed process could gloss over, such as assuming users wanted a certain sort of window in their imaginary spacecraft, or even that they wanted to go to space in the first place. (This is an example of the desirability assumption.)

The team then sketched out what their Product Discovery method incorporates at present:

  • realisation,
  • framework,
  • application,
  • recognising mistakes, and
  • takeaways.

While the spacecraft example might have seemed unrealistic, it was still highly illustrative of exactly the sorts of assumptions that many teams make about their solutions. By not examining the assumptions themselves (and whether they are actually true), teams rush into the discovery process without really checking key components. Are these assumptions really true for users? (Again, do they even want to go to space?) Instead of rushing to a solution, the team advised developing a framework that gives the actual problems room (or, in keeping with the example, space) to emerge and define themselves clearly.

Once the team had realised that they needed to re-examine some assumptions, it was time to determine the framework for the testing they would need. Rather than develop a completely new framework from scratch (the so-called ‘triple diamond’ conceptualisation, which includes problem discovery and definition as well as solution discovery and concept validation), the team decided they only needed a ‘double diamond’ reconceptualisation of their framework that began at concept validation.

In the process of Product Discovery it is crucial to recognise the relative level of risk in introducing a new product to a user base, which is why the Babbel team recommended that the audience at LX Berlin begin their own testing processes with very low-fidelity experiments that test the veracity of the assumptions themselves. Then, as a team gains confidence and determines that the direction it is heading in is the right one, the fidelity of the experiments can increase. When a team performs, for example, concept and usability testing at the expense of assumption testing, more can potentially go wrong. The final component that Babbel’s team tried to incorporate into their framework was to be extremely systematic in tracking what they tested: whether assumptions held (and, if not, whether the team needed to pivot or drop the idea altogether), what action to take and how strict to be about it, and what expectations to set before looking at test results.

When it came time for the application of testing, Matheus Winter and Anna Stutter Garcia related several methods the team used to determine whether certain assumptions about a feature crucial to the app’s success were true. First, the team sought to test a particular (and ‘relatively simple’) desirability assumption, essentially determining whether Babbel users possessed a particular characteristic. For their first experiment, a quickly executed activity that took only about four hours, the team conceived of and released an in-app survey with just two questions, the purpose of which was to force the team to articulate what their underlying assumption about users really was.

Fortunately, the survey, which the team released with modest expectations, quickly garnered approximately 1,400 responses, and the team was able to discern fairly soon that their initial assumption had been true.

To test another assumption, the team took a different route and recruited five learners to work with a pen-and-paper-style prototype: a mural board with sticky shapes mimicking the look of a screen, paired with neutral components and an interview guide. How did the test users interact with this prototype interface? What did they talk about? What did they notice? And, just as importantly, what didn’t they notice?

Once again, the evidence the team collected demonstrated that their initial assumption had been true. Only then, after about two and a half weeks of experimentation, did the team feel confident in engaging the engineering team and moving forward with the product development. 

It wasn’t, however, all ‘sunshine and tater tots’ for this team. In identifying their mistakes and mishaps (illustrated in the slideshow with one of English’s most colourful curse words beginning with ‘s’), the team revealed, for example, that some key members of the cluster were not able to be present for parts of the development phase, that there were fairly restrictive time constraints on developing the product, and, finally, that the team’s own confirmation bias could absolutely have factored into their results.

Takeaways

The first key realisation that the team took away from this endeavor was the all-important reminder that the theoretical and practical worlds are different. Every project will look different, be paced differently, have a different team, etc. Therefore, it’s key that the framework of a successful Product Discovery process contain flexible parts. For example, one team’s order of operations might be completely different from another’s. 

The second key takeaway from this process reinforced Winter and Stutter Garcia’s original point: a team must fight the urge to rush to development without evidence that certain assumptions about a product or feature’s desirability, feasibility and viability are actually true.

The Babbel team’s final words of encouragement to the audience before engaging in a Q&A session were these: it is better to fail early by learning that an initial assumption wasn’t true than to fail by, so to speak, building the entire rocket ship on the strength of a single untested solution.

Click here to learn more about LX Berlin and join the community.
