Vlad Shulman Shares the Key to '0→1 Product Development': Recalibrating Between Usability Sessions

Founder POV
Vlad Shulman - Participant of Cloud Zero's Founder Circle 2.0
Vlad Shulman at Cloud Zero's Founder Circle 2.0, an invite-only party for entrepreneurs in Seattle.
The secret to '0→1 product development' is what needs to happen between two real usability sessions.

A surprising realization from early product development has been about Minimum Viable Products (MVPs): no one manages to start with the MVP; instead, you stumble upon it after many iterations.

I'm now convinced the product-iteration-loop is the single most important process to set up correctly for 0→1 product development. Every product-related frustration I've had can be traced back to my own implementation failures in this process.

Consistently, a successful iteration-loop seems to come down to three areas:

  1. A channel for acquiring external usability testers.
  2. An agreement on what makes a real usability session.
  3. Recalibration of product design between two usability sessions.

1 / Without external users, product demos never seem to push beyond quality assurance and feature showcases. At best, internal demos catch critical-failure bugs that prevent completing a usage session end-to-end. At worst, internal demos get the build team hyped up to add impressive engineering and aesthetic designs.

External usability testers are the only way I’ve found to get a reality check. [A]

2 / What makes a good usability session is unintuitive. My worst formats have been interviews, product tours of features, and mock roleplay sessions demonstrating "how it's supposed to work". These formats netted zero useful feedback.

I started using the term "live fire session" to indicate a 1-on-1 format with an external usability tester where we use the prototype for real. A live-fire-session creates real data, involves real user behavior, and completes a real usage path (agreeing on what constitutes a real usage path took a while to figure out). [B]

3 / Acquiring external testers and designing appropriate sessions proved to be straightforward. The magic — and the crux of why outsourcing early product development often fails — is in what needs to happen between two usability sessions.

Specifically, between usability sessions I needed to sit down and recalibrate all my assumptions / product designs / usability expectations. Here are some prompts I found particularly useful for recalibration:

Jobs to be done: What isn't the user doing for themselves? What job(s) are they hiring our product to do for them? Which one job do we want to focus on for the next session? [C]

Win state: How would we know our product works? Why didn't this job get done in the previous session? What needs to happen in the next session that would prove to us our product accomplished that job?

Feature subtraction: Which existing features will distract from that job getting done in the next session? Can we temporarily hide that functionality? (See the sketch after these prompts.) [D]

New features: What existing products from other domains do that job well? Which of their features / designs can be cherry-picked to try in our next usability session?
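
For the technically inclined, here's what "temporarily hide that functionality" can look like in practice: a hand-edited feature-flag map toggled before each live-fire session. This is only a minimal sketch; the feature names (loosely borrowed from anecdote [D]) and the isEnabled helper are illustrative, not taken from my actual codebase.

```typescript
// Minimal sketch: a hand-edited feature-flag map, toggled before each session.
// Only the features needed for the session's one focal job stay on.

type FeatureName =
  | "dynamicForm"
  | "richTextEditor"
  | "countdownTimer"
  | "shareButton"
  | "analyticsDashboard"
  | "actionTracker";

// Edited by hand before each live-fire session.
const sessionFlags: Record<FeatureName, boolean> = {
  dynamicForm: true,
  richTextEditor: true,
  countdownTimer: true,
  shareButton: true,
  analyticsDashboard: false, // subtracted: distracts from the focal job
  actionTracker: false,      // subtracted: not needed for this session's usage path
};

export function isEnabled(feature: FeatureName): boolean {
  return sessionFlags[feature];
}

// Usage: render a piece of UI only when its flag is on, e.g.
// if (isEnabled("countdownTimer")) { /* render the timer */ }
```

The point isn't the mechanism; it's that subtraction is cheap and reversible, so there's no excuse to show testers a cluttered prototype.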

My product-design assumptions never started out right; they gradually became less wrong through many (many) iterations. It was also interesting to note that "feature subtraction" is the area I've most often seen others skip.

Quick note to non-technical founders: Outsourcing product development fails because of misaligned incentives. A consultant / freelancer aims to deliver the full scope of work and then recommend a scope expansion. A pure-play engineer / designer aims to get into a flow state and tackle increasingly complex features. Neither path results in a correct MVP.

0→1 product development is easier as a solo builder, but it limits the vision. Teams build greatness, but require mutual participation in the iteration-loop.

Takeaway: Make time between usability sessions to confirm something-that-people-want is getting built.

Anecdotes

[A] Two channels I'll continue using are a) posting on a job board for a $33/session part-time position, and b) using a paid usability testing service that offloads the lead gen + qualifying + appointment setting for ~$120/session.

[B] After many failed attempts, I finally settled on a 50-minute session where we took turns managing each other through a real frustration in personal / work life (guided by the prototype).

[C] Jobs To Be Done (JTBD) is my favorite product development book. My early product iterations were unsuccessful because I chose the wrong JTBD to digitize: first meeting agendas, then the meeting doc editing experience, then the asynchronous work in preparation for an upcoming meeting. It wasn't until I focused on "the prompts a manager needs to say during a 1-on-1 conversation to get to a decision" that the usability sessions became noticeably richer in feedback.

[D] My graveyard of subtracted features included: Notion-esque doc functionality to add / sort content with an expanded side peek, user login with a workspace management console, an analytics dashboard, an action tracker, a meeting agenda gamification tracker, an agenda check-in flow, and API integrations with external tools. The minimal set of features that remained: a dynamic form, a basic rich text editor, a countdown timer, and a share button.

Credits

Thanks to Nancy Xu for iterating on the external usability testing process, and to Shiku Wangombe for the charrette innovation.

