More and more frequently, I hear of teams adopting design sprints into their product development process. While this is undoubtedly a step in the right direction for most product teams, my experience mentoring startups has shown me that these efforts often fall short because of two main failures in methodology: 1) teams front-load their focus, spending the majority of their time on the internal ideation and design steps (the fun parts) rather than on user validation, and 2) they don’t allow nearly enough time to collect thorough, actionable feedback from their users. As a result, teams miss out on the exact benefit they hoped to gain from their design sprints, namely a significant shortcut to learning without having to first build and launch.

Despite having had a lot of success with design sprints in the past, I’ve recently begun to move away from them in favor of a system that fits more naturally into the eclectic day-to-day of most product teams, aligns better with the time-intensive nature of high-quality design work, and commits the majority of its time to the user-facing validation step that so many teams short-change. We call these ‘Discovery’ sprints, and I genuinely believe every digital product team would benefit from including them in their process, either as a complement or a substitute to regularly scheduled design sprints. I’ll spend the rest of this article giving a very brief overview of design sprints for those who are unfamiliar, and then exploring the ins and outs of Discovery sprints, in particular how teams can use them to validate the impact of potential new features before spending precious time and money building them.

Design Sprints 101

Google’s design sprint diagram: design sprints provide a cheap shortcut to learning.

Popularized by Google, design sprints are a method for quickly and inexpensively answering critical business questions through a short, highly-focused process of designing, prototyping, and testing ideas with customers. Several variations have been proposed over the years, but the most popular, from Google Ventures’ design team, revolves around a 5-day work cycle, with each day generally focused on a single step in the process: 1) identifying the problem and setting a goal, 2) researching current solutions and sketching ideas for a new one, 3) critiquing proposed solutions and selecting one to pursue, 4) prototyping the solution and preparing for user interviews, and 5) interviewing customers and learning from their reactions to your prototype.

Working together in a sprint, you can shortcut the endless-debate cycle and compress months of time into a single week. Instead of waiting to launch a minimal product to understand if an idea is any good, you’ll get clear data from a realistic prototype. The sprint gives you a superpower: You can fast-forward into the future to see your finished product and customer reactions, before making any expensive commitments.

– Google Ventures

Discovery Sprints: Introduction & Case Study

Similar to a design sprint, a Discovery sprint aims to quickly and inexpensively validate the impact of ideas before a single line of code has been written, primarily by engaging users and collecting feedback much earlier in the process. The main differences are 1) the fluid, non-time-bound nature of Discovery sprints, and 2) the significantly greater focus on user validation. In Google’s design sprint, for example, the team spends 4 days working together internally to develop a prototype, and then 1 day testing that prototype with a manually curated group of users and learning from their reactions. In a Discovery sprint, by contrast, the team spends an undefined amount of time getting an idea to a presentable stage (i.e. as long as is needed), and then makes the idea/feature/prototype accessible to users (again, for as long as is necessary) until a certain threshold of feedback and degree of clarity has been achieved. Let’s unpack this a bit by looking at a very simple, real-world example.

Imagine you work at an ed-tech startup and have an idea for a new feature that will allow students to highlight any text they want in your app. You start by working with your team to quickly spec, wireframe, design, and perhaps even prototype this feature inside of InVision. Unfortunately, this process takes far longer than 4 days, and the delay has nothing to do with your team spending too much time on any single one of these steps. Why does it take so long, then? Because in the vast majority of real-life product teams, this potential new highlighting feature is just one of dozens of efforts competing for your attention, and you simply don’t have the luxury of shutting down shop to spend a single, highly-focused week determining its impact. Instead, your team makes progress on it – and many other things, including other potential features – as time and circumstance permit. In the end, it may only amount to 4 days of work total, but they most likely won’t be 4 consecutive days.

Eventually, though, your team has a presentable idea and is ready to start validating it with users. To avoid selection bias and get as accurate a picture of the feature’s potential as possible, you decide to collect feedback from your entire user base, not just a pre-selected cohort. To avoid bombarding everyone, however, you only surface the test when a user is actually on the screen in your app where the feature would one day live if built. This way, your test attracts a larger sample size without disrupting users unnecessarily, and has the added benefit of providing context about exactly where and how the feature would function.
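To make that gating concrete, here’s a minimal sketch in TypeScript of how a test prompt might be restricted to the relevant screen. Everything here (the `PreviewTest` shape, `maybeShowPreview`, the screen names) is invented for illustration, not part of any real SDK:

```typescript
// Only invite users into the test when they reach the screen
// where the feature would live if built.

type Screen = "reader" | "library" | "settings";

interface PreviewTest {
  id: string;
  targetScreen: Screen;      // where the feature would appear if built
  maxPromptsPerUser: number; // avoid bombarding any one user
}

const highlightTest: PreviewTest = {
  id: "highlight-text-v1",
  targetScreen: "reader",
  maxPromptsPerUser: 1,
};

// Called on every screen change; shows the preview prompt only in context.
function maybeShowPreview(
  test: PreviewTest,
  currentScreen: Screen,
  promptsShown: Map<string, number>
): boolean {
  const shown = promptsShown.get(test.id) ?? 0;
  if (currentScreen !== test.targetScreen) return false;
  if (shown >= test.maxPromptsPerUser) return false;
  promptsShown.set(test.id, shown + 1);
  return true; // render the "try this upcoming feature" prompt here
}

// Usage: a user navigates to the reader screen.
const promptsShown = new Map<string, number>();
console.log(maybeShowPreview(highlightTest, "reader", promptsShown)); // true
console.log(maybeShowPreview(highlightTest, "reader", promptsShown)); // false (capped)
```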

Now that you’re collecting feedback from a far larger population than an initial hand-selected group, you need a way of capturing your users’ reactions quickly and at scale, while still getting enough detail to understand exactly why they feel the way they do. So, for those users who participate in your test, you not only ask for their general reaction (positive, neutral, or negative), but, depending on their sentiment, immediately follow up with a short survey. If they’re neutral, do they simply not understand the feature? If they’re positive, how positive are they? Would they consider paying more for it? If they’re negative, why? Do they think it’s poorly executed? Do they think it shouldn’t be a priority? Or are they negative for some totally unrelated reason, such as your support team refusing to give them a refund? Your test aims to uncover all of this in as few questions as possible.
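A sentiment-branched survey like this can be modeled as a small data structure. The sketch below is purely illustrative: the question wording follows the examples above, and none of the types reflect an actual survey API:

```typescript
// After the initial reaction, serve only that sentiment's follow-up questions.

type Sentiment = "positive" | "neutral" | "negative";

interface Question {
  prompt: string;
  options?: string[]; // pre-defined answers; omit for an open-ended question
}

const followUps: Record<Sentiment, Question[]> = {
  positive: [
    { prompt: "How positive?", options: ["Nice to have", "Very useful", "Essential"] },
    { prompt: "Would you pay more for this?", options: ["Yes", "Maybe", "No"] },
  ],
  neutral: [
    { prompt: "What's unclear?", options: ["How it works", "Why I'd use it", "Nothing"] },
    { prompt: "Anything you'd add?" }, // open answer
  ],
  negative: [
    { prompt: "Why negative?", options: ["Poorly executed", "Wrong priority", "Unrelated issue"] },
    { prompt: "Tell us more (optional)." }, // open answer
  ],
};

function surveyFor(sentiment: Sentiment): Question[] {
  return followUps[sentiment];
}

console.log(surveyFor("neutral").map((q) => q.prompt));
```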

After the test has been live inside your app for a week or two, you see that the rate of new responses has decreased to the point where it’s no longer beneficial, so you decide to end the test and evaluate your results. What you discover is that only a very small percentage of respondents reacted negatively to the new highlighting feature, with the remainder evenly split between positive and neutral reactions. After reviewing the survey results for the neutral respondents, though, you learn that roughly half had simple clarifying questions (e.g. “Can I highlight in multiple colors?”), and the other half requested additions to the feature (e.g. “In addition to highlighting, can I also add my own notes to any text?”). Based on this analysis, your team designs a second version of the feature, now including both multi-color highlights and note-taking capabilities, and releases it back into your app for additional testing. A week later, you close the test with more than 90% of respondents having reacted positively. The feature can now be prepped for development.
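Evaluating results like these is mostly a matter of tallying sentiment and reading the follow-up answers. Here’s a hypothetical sketch, with invented response shapes:

```typescript
// Tally sentiment to decide whether to iterate (many neutrals) or build (mostly positive).

type Sentiment = "positive" | "neutral" | "negative";

interface Response {
  sentiment: Sentiment;
  answers: string[]; // survey answers, for reading the "why" behind each reaction
}

function breakdown(responses: Response[]): Record<Sentiment, number> {
  const counts: Record<Sentiment, number> = { positive: 0, neutral: 0, negative: 0 };
  for (const r of responses) counts[r.sentiment] += 1;
  return counts;
}

function shareOf(counts: Record<Sentiment, number>, s: Sentiment): number {
  const total = counts.positive + counts.neutral + counts.negative;
  return total === 0 ? 0 : counts[s] / total;
}

const counts = breakdown([
  { sentiment: "positive", answers: [] },
  { sentiment: "neutral", answers: ["Can I highlight in multiple colors?"] },
]);
console.log(shareOf(counts, "positive")); // 0.5
```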

Integrating Discovery Sprints into your Process

The example scenario above is a real experience from one of my prior companies, and it resulted in what was arguably our most successful feature ever in terms of engagement. What started out as an idea for basic highlighting culminated in a more robust, fully vetted feature allowing for both highlighting and notes as part of the same system. In total, the entire process took about 3 weeks, but only about a week of that (spent speccing, designing, and prototyping both versions of the proposed feature) required any of my team’s direct attention. All of the remaining time was committed to simply letting the feature exist in the wild, collecting unrestricted and unbiased feedback from our users. And during that time, my team was able to actively work on a handful of other Discovery sprints, in addition to our typical day-to-day responsibilities outside of new-feature ideation and design.

This is exactly why I’ve begun to shift away from design sprints in favor of Discovery sprints. Not only do they fit the often eclectic day-to-day nature of most product teams, allowing multiple projects to progress at the same time, but they also ensure enough time for the user-facing validation step that so many teams skimp on in their design sprints. By doing so, we’re able to thoroughly validate the potential impact of many ideas simultaneously, at a pace that doesn’t require us to close down shop for a week at a time.

The question now is how you can go about integrating Discovery sprints into your own team’s process. While Parlay was designed to support this methodology from start to finish, an industrious team can certainly achieve a similar outcome without it. Regardless of your system, here are a few quick suggestions for how to develop and time your own Discovery sprints:

  • The higher the fidelity of your idea, the clearer the feedback you can expect from your users, though with diminishing returns. A prototype is ideal, but I’ve seen excellent feedback on just a basic sketch and an accompanying text description.
  • Allow the feedback portion of your sprint to run until you see a drastic drop in responses, typically after 1-2 weeks (see the sketch after this list).
  • Aim for a high enough volume of responses to ensure a representative slice of users; 30 seems to be a good threshold.
  • Ask your users both for an initial sentiment (we recommend positive, neutral, and negative) and to complete a short follow-up survey.
  • Keep your surveys short, ideally around 5 questions with pre-defined answers; the last question can be open-ended.
  • Have fun working with your users. Discovery sprints allow you to treat them as collaborators, not just customers.
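
For the stopping rule in the second and third suggestions above, here’s one possible sketch; the thresholds are illustrative defaults, not hard rules:

```typescript
// Close the test once responses have slowed sharply and a minimum sample size is met.
function shouldCloseTest(
  dailyResponses: number[], // responses per day, oldest first
  minResponses = 30,        // representative-slice threshold
  dropFactor = 0.2          // "drastic drop": recent rate under 20% of peak
): boolean {
  const total = dailyResponses.reduce((a, b) => a + b, 0);
  if (total < minResponses) return false;
  const peak = Math.max(...dailyResponses);
  const recent = dailyResponses[dailyResponses.length - 1];
  return recent < peak * dropFactor;
}

console.log(shouldCloseTest([12, 18, 9, 4, 1])); // true: 44 responses, rate collapsed
```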


That’s all for now. If you have any questions, concerns, or suggestions, please let us know in the comments or by emailing me at keith@teamparlor.com.

