An Experiment with Livestream Tickets

This year, for the first time, we’re selling livestream tickets for The Lean Startup Conference, December 10 – 11. In the past, we’ve offered the livestream free to community groups. But over the years, based on attendee feedback, we’ve fine-tuned the livestream and are now confident we can provide you not only with a great live experience but with a very useful bundle of recordings, too. Selling livestream tickets means we can offer them to individuals, not just to groups, and we can guarantee better quality video, too.

But, of course, there’s more. We’ve spent years refining the way we shoot the video, the software we use, the tech support we provide viewers, the onscreen chat we moderate during the event, and the Q&A participation we offer. All of that means that livestreamers enjoy a lot of benefits. You will:

  • Catch all the main-stage talks and our most popular breakouts live on December 10 and 11. You get high-quality video of every talk in the Grand Ballroom. That includes all keynotes in the mornings, and our most popular breakouts in the afternoons.
  • Get a downloadable video recording of all conference sessions, including all breakouts that take place outside the Grand Ballroom. The recorded video is a special perk for livestreamers, and we’ll send it out shortly after the conference. (For in-person attendees, the recorded video is available only as part of our top-tier tickets.)
  • Participate fully in Q&As. We field questions via an online form, even for attendees in the room, so you have an equal opportunity to ask questions. Livestreamers are not second-class citizens for Q&A.
  • Connect with other livestream attendees in our special moderated chat. Our terrific livestream coordinator, Michele Kimble, moderates a dedicated chat session with all the livestreamers. You not only meet other remote attendees, but you also have Michele on hand to answer questions, make sure you have all the key info and troubleshoot any technical problems.
  • Meet other attendees via our conference social network. We have a dedicated social network for the event, which livestreamers and in-person attendees alike can use to connect with any conference participant.

The pricing is simple: A livestream ticket is $300, and you can register today. It covers both December 10 and 11, our main conference days. Plus, we’ve set it up so that you pay per screen, not per seat; if you have several people watching together, you pay for just one livestream ticket.

In addition, because we like helping build community, we’ll give you a $100 discount if you choose to be an official livestream host. Official hosts allow anyone local to attend your screening, and when you register for a livestream ticket, you must share your event URL with us so that we can list it on our site (here’s what that listing looked like last year; hosts mostly used EventBrite to let people reserve seats, though you can set up any web page you’d like). This year for the first time, we don’t require that official hosts guarantee a minimum number of attendees. Of course, you’re welcome to charge people for your local event.

(One caveat: We intend the livestream for people who can’t attend in person. If you’re based in the Bay Area, we ask you to join us at The San Francisco Fairmont, and we aren’t offering livestream tickets for local groups.)

Selling livestream tickets is an experiment for us, building on our previous test at Office Optional, where livestream tickets were far more popular than we expected. It also takes into account the fact that our international audience has grown considerably, and the logistics and cost of wrangling a free worldwide livestream have likewise increased; as a startup ourselves, those factors require us to adapt. Our hypothesis is that we can offer you really good value for a great livestream experience (plus video bundle), and we believe the per-screen pricing and group discount will make the livestream accessible to almost everyone. We’ll measure our success in ticket sales and attendee feedback. If you have questions or ideas for how we can make the livestream experience even better, don’t hesitate to email Michele Kimble, our livestream coordinator.

Not to put too fine a point on it, but while our livestream ticket sales are new, we have a great track record of delivering an amazing experience. Here are just a few of the many great comments our virtual attendees shared last year:

“Our simulcast was really well received. Even inside a big company, participants took away a lot of valuable ideas. Looking forward to next year!” — Microsoft

“We had wonderful attendance. This event helped to build our community and provide substantive content and topics for future Startup Evanston events.” — Startup Evanston

“Our community of entrepreneurs loved participating.” — Spain Lean Startup

Whether you’re new to Lean Startup or a veteran, whether you work in a brand-new startup or in an established organization, our conference is designed to improve your success and speed building new products and services. We hope you’ll join us in person or via livestream in December. Register today!




When you’re building an entirely new kind of product, how do you measure success?

And is that the same for every kind of product, in every sector—nonprofit, government, tech, whatever? Alistair Croll, Eric Ries, and Danielle Morrill discussed these questions in depth, during an hour-long webcast on October 2nd. Here’s a recording of their conversation, and it covers some fresh ways to approach metrics.

Three Tips That Will Improve Your Startup’s Success

You’ve told us clearly that you want more in-depth how-to sessions at The Lean Startup Conference. So this year, we’ve packed in more talks and workshops with detailed advice than ever before; check our program page for details on our initial batch of talks. Take advantage of these sessions by registering today for the best price possible; our fall sale ends on Friday, October 31, and it’s the last price break of the year.

To give you a taste of our how-to sessions, we asked three of our conference speakers for tips you can put into practice right now–whether you work at a startup or in an established organization.

1. How Best to Engage Customers Remotely

You understand the importance of engaging directly with your customers as you develop products for them, but what happens when your user base is very far away? “There are a ton of tools out there that allow you to communicate with someone remotely, and when choosing a tool there are a lot of things you need to keep in mind,” says user experience consultant Holly DeWolf, whose conference session will teach practical, cost-effective techniques for remote engagement. “Do you need to see your customer’s computer screen? Do you need to record the session to share it with your team? Is it user-friendly for you and your customer?”

When connecting with distant users, DeWolf recommends determining which tool is least invasive for your customer base. “The big thing for me is the usability of the tool itself for customers,” DeWolf says. “I try hard not to make a customer download a tool they don’t already have. Tools like Skype or Google Hangout are popular now, but not everyone has them downloaded already. [If they don’t have them], I prefer tools like GoToMeeting that make it easier for a customer to just click on a link and get on quickly [without having to download anything].”

DeWolf also recommends that you practice using a tool in advance to work out any kinks. She explains: “Ask a coworker, a friend, or even your grandma to get on so you can practice once or twice before getting on with a customer. Where are you going to keep a list of the questions you’re going to ask? Where are you going to take notes? Are you recording the session? Do you need a note to remind you to do that? There is going to be a lot going on, so as you start adding in little things, it makes it really worthwhile to have practiced beforehand.” In conducting our own customer interviews, we’ve found practice to be critically important for success–but easy to forget. Schedule it on your calendar to make sure it happens.

2. How to Interpret Customer Feedback

When you hold interviews with potential customers for your new product, it’s tempting to hear anything they say that isn’t overtly negative as a sign of enthusiasm. But be very careful in reading their tone. “If you’re talking to people, and they’re very polite and mild-mannered the whole time, that’s a sign that you’re not really solving a big problem,” said Cindy Alvarez, author of Lean Customer Development: Build Products Your Customers Will Buy and head of product design and user research for Yammer (a Microsoft company). In her session at the conference in December, Alvarez will lead entrepreneurs through live problem-solving to help you with the challenges of customer development and qualitative feedback.

“I’ve never seen an interview case where people who were enthusiastic customers did not express some sort of frustration or excitement,” Alvarez said. “If you don’t hear the variation, if you’re not putting exclamation points in your notes anywhere, then you’ve got a bunch of polite people who probably won’t buy your product. I’d say if you talk to five people and none of them seem particularly enthused, then try talking to a different type of five people.”

For loads of excellent, detailed advice on customer interviews, check out our in-depth Q&A with Alvarez, in which she helps a startup figure out why it can’t find potential customers to talk to.

3. How to Test the Right Aspect of Your Business

It’s deceptively easy to test the wrong aspect of your business or run the wrong test for what you need to learn, wasting time and money while you head down the wrong path. “Entrepreneurs always think they have the best idea. They do need that confidence to execute it and make it happen. But you have to balance that confidence with an awareness and open-mindedness that you could be wrong and don’t know until you test that idea with potential customers,” says Grace Ng, co-founder at QuickMVP. At the conference, she’ll lead a session, “Tactics for Truly Effective Experiment Design,” which includes a decision-making framework that guides teams to choose the right kind of experiment to run based on what they need to test.

When you create an experiment, Ng notes that constructing an accurate hypothesis to test should include clearly stating the problem you’re solving and who you’re solving it for. “A hypothesis statement is the building block of an effective experiment,” Ng says. “An accurate hypothesis will state, ‘I believe this solution of X will solve this problem of Y, and in order for this to be true, I need to see these results of Z happen.’” If your hypothesis doesn’t prove true, you learn a lot and can adjust accordingly; if you run a test with no hypothesis (“let’s just see what happens”), it’s much harder to figure out what step to take next.

For a whole lot more relevant how-to advice, we’ve collected some of our favorite talks below from previous Lean Startup Conferences. And for more in-depth discussions on building your new products and services, join us at this year’s Lean Startup Conference. Register today. Our fall sale ends Friday, October 31, and it’s the last price break of the year!

— Some of our favorite how-to talks from Lean Startup Conferences past —

On testing & customer feedback

On Lean UX

On building Lean Startup teams

On Lean Startup product strategy

On Lean Impact


A Growing Set of Lean Impact Resources

A gratifying aspect of teaching people to use Lean Startup methods is seeing the ideas take root in more and more sectors. For a couple of years now, Lean Startup has been spreading in mission-driven organizations, where people commonly refer to the principles as “Lean Impact.” If you work in government, in an educational institution, at a place like Kiva, or in a startup non-profit, we have a growing body of resources to meet your need for information about applying Lean Startup when profit isn’t the sole goal.

First off, we’re pleased to report that Leanne Pittsford—founder of Lean Impact, Start Somewhere, and Lesbians Who Tech—is leading a full-day workshop on Lean Impact essentials at this year’s Lean Startup Conference in December. She and several hand-picked guest speakers will explain how they’ve used Lean Startup principles to achieve greater social impact, and they’ll answer pressing questions about funding models for mission-driven organizations that need to experiment with new products and services. Our fall sale for conference tickets ends on October 31, and it’s the last price break of the year, so register today for discounted pricing. (We also offer scholarship passes for young and minimally funded non-profits; if that’s your org, apply right now.)

If you want a taste of what that workshop will cover, join us for a free webcast on October 28 at 10 a.m. PT: An Introduction to Lean Impact, in which I’ll talk with Leanne about how social-sector organizations can use the Lean Startup framework to meet their goals, and we’ll answer audience questions live.

To learn more about Leanne and Lean Impact, check out her talk at Netroots Nation from June: “How Lesbians in Tech Took Over the World and Built a 4,000-person Community in Less Than a Year.”

In addition to Leanne’s workshop, this year’s Lean Startup Conference features a number of other speakers from social-sector organizations:

  • Christie George, a socially-focused investor who recently wrote a great piece on funding alternatives, and Mitch Kapor, well known for his impact investing, will discuss measuring success beyond the bottom line.
  • Max Ventilla will explain how AltSchool is building an ambitious new network of customer-driven schools.
  • Julie Lorch will speak about building cross-functional teams.
  • Tiffani Bell will talk about learning quickly in order to address Detroit’s water crisis.
  • Margo Wright will speak about the nuances of customer development when you’re working with a population you know well already.
  • Reverend Ken Howard will speak about creating and sustaining Christian communities using the Lean Startup approach.

Those examples draw from our initial group of speakers, and we’ll be announcing more later this month.

In the meantime, we’ve gathered below a slew of videos from previous Lean Startup Conferences that have great lessons from mission-driven organizations. Enjoy—and we look forward to seeing you at our webcast on October 28 and at the 2014 Lean Startup Conference in December.



Customer development for the social sector





Cindy Alvarez | Photo: Cindy Alvarez

In our “Now What?” series, startups ask real-life questions, and we find experienced entrepreneurs to offer deep, relevant advice. If you have a startup challenge, and you’d like insight from an experienced entrepreneur, let us know in this short form. – Eds

Minute Sitter’s issue

We’re in the process of testing the market for an iPhone app that helps parents connect with local, trusted babysitters and after-school care providers. The product resonates very strongly with the parents we’ve interviewed in our target market so far; most have rated the problem of not being able to find a babysitter a 7 or 8 out of 10. The problem is that it’s very difficult to find more interviewees. We’ve exhausted our friend network, and when we ask parents at the end of the interview if they know of others we could speak to, they always say yes, but it doesn’t progress any further. We’ve tried connecting with parenting Facebook groups to no avail, and it seems the next option is to pay for interviews, which we don’t want to do, as that taints results. How do we find more interviewees?

About the expert, Cindy Alvarez

Cindy Alvarez is the author of Lean Customer Development: Build Products Your Customers Will Buy. She runs User Experience for Yammer (a Microsoft company) and has been helping companies build better products through intensely understanding their customers for over 14 years. Her background spans psychology, interaction design, product management, customer research, and Lean Startup tactics. She tweets and blogs.

About the reporter

April Joyner writes on business, entrepreneurship, and technology. She was previously a senior reporter at Inc., and she has also written for OZY and other publications. She lives in Brooklyn, New York, and enjoys playing the violin in her spare time. Follow April on Twitter.

Interview with Cindy Alvarez, September 2014. Edited and condensed here.

April Joyner: What’s the first thing this startup should do?

Cindy Alvarez: The first thing would be to make sure that within their team, they’re aligned on what their hypothesis and target customers are. You need to boil things down to a sentence that is as simple as I believe this kind of person has this kind of problem, which could be solved in this way. And those have to be fairly specific—specific enough that you might think of a person that meets those criteria. A lot of times, I’ve found that teams aren’t actually in agreement on the hypothesis, even if they think they are. Once you have that, you basically have an implied audience in there. So then you start thinking about, “Who is that person, and where are they likely to be?”

Steve Blank has this great pyramid of needs. It’s basically a person who has a problem, recognizes they have a problem, has the ability to solve that problem, and has tried to put together a solution out of bits and pieces. Those are the people you want to start with, because they’re the people who are the hungriest. This startup’s customer might be anyone who has a child, but that’s not necessarily your best market. That might be your eventual market, but the people you need to start with are the people who have met all those other criteria.

AJ: In this startup’s case, what might those criteria be?

CA: “Has a problem”: We define that as people who have children who need care. “Who recognize they have a problem”: Something in their life is frustrating to them because they don’t have care. Maybe they’re not having date nights, maybe they’re not going to networking events, maybe they’re not seeing their family, or they’re not able to play a sport. You might have kids and not have a care problem. In that case, maybe your life would be happier if you had a babysitter and did this extra thing, but you’re not feeling it right now.

The next thing would be ability to make a change: In this case, probably someone who can pay. Your very lowest-income client is probably not the best place to start, because care is expensive in any situation. Then, beyond the ability to deal with it would be someone who has actually tried something—someone who has tried to find babysitters before, especially via some kind of online solution. If someone says, “Oh, yeah, I’d like to have a babysitter,” but they have never made any attempt to acquire one, then they’re probably not the best customer for you.

Another thing that’s helpful is to do what I call a traits continuum, which is basically to write opposing traits, one on one side, one on the other, and figure out where you think people are. For example, for anything app-related, you might have from tech-savvy to not tech-savvy. Way over on the “not” side might be someone who’s never downloaded an app to their phone—probably not a good candidate here. A little further over might be someone whose significant other or kids put apps on their phone, but they don’t know how to do it. That’s still probably not the best market. Way at the other extreme is the kind of person who will try anything and download anything.

AJ: So once you know who your target customer is, what do you do next?

CA: A lot of times, if you have this list of traits, then you can say, “Oh, this person will be a great person to talk to.” You don’t necessarily need to identify every individual person you’ll speak to. But if you can read your list and it isn’t specific enough to make you think of even one unique person, you might be starting too wide. If you actually manage to tie it back to that one example person—like, “Oh, this sounds just like Pamela”—go talk to Pamela and say, “Do you use Craigslist? Do you use Sittercity? Do you use Care? How do you get babysitters today? What parenting groups do you subscribe to? Are there mailing lists? Are there children’s activity places that you go to?” Asking someone that level of information isn’t a strict customer development interview, but it’s saying, “Okay, we’ve identified a persona. Now let’s actually find that person and ask them where we should start looking.”

From there, try to figure out how you can convince that person to talk to you. In the software world, a lot of these pitches for interviews happen online. So, it might be constructing your email pitch. I typically recommend that people test it out first. Don’t blast out your email pitch as soon as you have it written. Send it to one or two people, maybe not even people who are your target customers, and say, “Read this, how does this sound?” A lot of times, the first draft email will not have the tone that you intend. A friend of yours might say, “This sounds arrogant,” or “This sounds too informal,” or “This sounds overly formal,” or, “This sounds like I’m not sure what you’re going to ask of me.” Other people are very good at picking up on those little weird bits of language that influence response rate. So I might write a pitch, send it to someone, get their feedback, change the language to take care of any issues, and then start sending it out to other people who I think are my real target customers.

AJ: To backtrack, since you mentioned email: when you’re brainstorming where customers might be, you’re actually gathering emails?

CA: It depends. There are some places where you might be able to contact people somewhat directly through the site, like LinkedIn or Quora, for example. That’s probably a less good option for a parenting-specific site. For that, the places where you find people might be, say, a parenting mailing list. A mailing list is a place where you’d be able to get someone’s email address fairly easily, but something like a parenting forum is not so much. Generally, posting to a forum to say, “Hey, do people want to talk to me about my business idea?” is seen as sort of a negative thing. That’s along the same lines as advertising your product, and no one really wants that. In those cases, you may need to become a contributing member of that community first, invest that time until you get to know people. At that point, you have a little more social acceptability to ask questions. Or you can make friends with individuals and get their contact information that way.

With parents, the real world is a very, very good place to find them. Let’s say you have friends who have friends who are parents. Then what you are probably going to do is ask your friend to forward them an email. But your friend isn’t necessarily going to want to do that unless you’ve made it very clear that you’re going to be a good actor. I wouldn’t ask you to introduce me to a friend, and then say, “Go ask your friend to help me move my house.” What’s typically easy is to send an email to someone saying, “Hi, I’d really like to talk to parents in order to learn blah, blah, blah. You have a friend who I’d be particularly interested in talking to. Would you be willing to forward this email to them to make the request? I promise it will be no more than a 20-minute conversation, and if they don’t respond, I won’t continue bothering them.” There, you’ve established very clear parameters of what you’re asking for.


AJ: So, with this startup, it sounds like they’ve talked to some people, but they’ve exhausted their network. In that particular instance, how can they find other people?

CA: To be honest, what this smacks of to me is not that they’ve run out of people to talk to, but that there’s some other problem. Personally, as a parent, I think it would be pretty easy to get me to agree to an interview about this topic, because it’s a pretty painful thing when parents have a hard time getting care. The fact that people aren’t champing at the bit suggests to me that either the problem isn’t being pitched in a way that’s resonating, or that the request for conversation sounds onerous in some way. Maybe it sounds like it’s going to take a really long time, or they might be trying to contact people via phone who would rather use email, or vice versa. They might be trying to call people during the dinner hour. If you try to get anything out of me between 6:30 and 8:30 pm, I am not receptive.

What’s tricky is that people usually will not tell you what the problem is, so you have to do a certain amount of troubleshooting. If someone was like, “Hey, my friend really wants to talk to you about this,” and I kept saying, “Oh, yeah, I keep meaning to talk to her,” that’s a sign that for some reason, it’s not valuable enough to me. So the founders need to take a look and make some guesses about what that problem is. Is it that their pitch is not compelling enough? Are they communicating with people in a way that’s somehow off-putting, but they may not realize it? Are they communicating with people at the wrong time? Are they communicating with people in a channel that’s not common to them? Are they coming off somehow as advertisers? There are lots of things like this that might actually be an issue.

AJ: OK, so where should you account for these things? Is this something you need to think about right when you define who your target customers are?

CA: Whenever you’re going to interview customers, there are a few things that you need to know about: what they value and what their limitations are. Typically, that’s what your network is for. You don’t necessarily have to be deeply embedded within your customer market, but you should at least be within arm’s reach of them. In this case, surely they have friends who are parents, or friends of friends who are parents. If they don’t actually have, say, five or six people that they can have a simple conversation with along the lines of, “What is the best way to reach you? Are there certain times that are a dead zone?” then I think that’s a very difficult place from which to start a business, and I would recommend that they start making some friends who fall into that category.

You’re obviously never going to predict all of these things. But what you can do is constantly iterate on the process. People who aren’t willing to talk to you may still be willing to answer a single question via email. If you say, “I’d love to talk to you about this solution,” and someone doesn’t respond, instead of continuing to try, you might say something like, “I’m just curious. We don’t have to have this conversation, but did I ask in a way that was inconvenient to you?” Or “Is there a way I could have phrased this better?” Or “Is there something I could have done to make this seem more appealing?”

That allows someone to give you a little bit of honesty without having to commit to a 20-minute phone conversation. You’re not going to get a whole lot out of one question that will stand in for a customer development interview. What you want to do is make sure that the next time you contact a parent, she actually says, “Yes, I want to talk to you.”


AJ: So this goes back to what you said earlier about developing a strategy to convince people to talk to you. Are there certain things that make people more or less likely to want to talk?

CA: Sure. You want people to talk to you; you want to recognize them as experts.

There are a few things that I can list that are turnoffs. One is when people feel like you’re trying to sell them something. You want to be really clear that you’re not doing that. At Yammer, I sometimes will start conversations with prospective customers by saying, “I’m not a salesperson. I couldn’t sell you this product even if I tried.” Eventually, of course, you’re going to ask people for their money, but when you’re doing the customer development interview, you want to remove that from the conversation. In fact, I don’t even like using the word customer with prospective customers, because I don’t want them in that buying mindset. So I’ll use words like you or your personal experience or in your life. I won’t say, “You seem like a prospective customer,” or “You might be a future customer,” because then I think they’re in that mindset of “At some point this person’s going to ask me for money.”

The second one is ego. People will say, “I’m a marketer with 20 years’ experience and blah, blah, blah.” If that’s the start of your pitch, then your email is basically saying, “Here’s a bunch of stuff about me.” So I’m not really convinced that you want my opinion, because you’ve just spent a paragraph telling me all about you. I know people do this to gain credibility. It seems like a very logical strategy, and yet it falls flat on its face. I’ve gotten unsolicited customer development interviews where people go on and on about their credentials. I’m just like, “Ugh.” I barely read on.

I’d say the third one is an unclear ask for your commitment of time. Sometimes I will get a pitch and someone clearly wants to learn something from me about a product, but I don’t know what they want from me. Someone reading your email pitch or hearing your verbal pitch should have a very clear sense of what you’re asking for. I think people try to be polite—we think, “I won’t come right out and ask for things because that seems rude.” But giving multiple options is actually more of a burden because now I have different decision points to consider. The best pitches are very straightforward: “Can I talk to you for 15 minutes on the phone?” That is incredibly clear. I know exactly what I’m committing to, I know the medium, and I know it’s not going to take that long. I’m very likely to say yes to a pitch like that.

I think it’s generally best for you to pick a modality that you like, and offer another one as a fallback. Someone might say, “Look, I’d love to talk to you but it’s really hard to get me on the phone.” Then, you can say, “Can we converse via email or via chat instead?” At the last couple of companies I’ve worked at, I’ve had a large number of international customers. Between bad phone connections and accents on either side, either me not understanding them or vice versa, sometimes people will say, “Let’s just do Gchat.”


AJ: When you’re asking people to do customer interviews, has there ever been anything that’s surprised you about the process?

CA: Well, I don’t know if it’s surprising, but I think something that catches me off guard is mobile. At this point, more than 60% of emails are opened first on a mobile device. That is an incredibly short amount of space in which to make your point. If I’m looking at something on my iPhone screen, I’m seeing maybe two sentences. Somewhere in that two sentences, you have to hook me, and it has to be really clear how with one thumb I can hit reply and say yes. If not, it goes into the read-but-not-replied-to depths of my inbox. That’s purgatory. So I’ll write what I think is a really good succinct pitch, and I’ll send it to myself and open it on my phone. And I’ll be like, “Oh, the ask is way below the fold. This is terrible.” So I have to go back and cut more words.

The other thing is that people who are less tech-savvy have a very itchy spam filter. In talking to a lot of Yammer’s customers about exploring new features, a lot of the folks who are outside of technology are very suspicious that things might be some kind of spam or phishing. A lot of times, we’ll send one email to someone and see if it gets picked up on, and then send a few more, versus trying to blast people all at once. I’ve been very surprised sometimes by people who write back saying, “Are you a real person?” And I’ll read the email, and I’m like, “I don’t know what they’re responding to.” It seems completely legitimate—it’s from a real person, I’ve written it in a very human tone of voice, but something tripped someone’s “Maybe this is a phishing attack” filter.

I think one big thing is sending an email from an account that’s not a real name. People are suspicious of things that don’t come from humans. If you send from “MinuteSitter Support,” that’s not a human. If the email says it was sent from “Cindy, MinuteSitter,” that’s slightly better, but that might be someone selling me something. Another thing is, if you’re using a service like MailChimp or CampaignMonitor, sometimes the way the “sent from” line is rendered looks suspicious, like if it says, “From X on behalf of Y.” I’ve found that when things seem like they’ve been emailed through an additional domain, non-tech-savvy people don’t understand what that means—they just think it’s probably bad. At the startup level, I would just send emails from my personal account.

AJ: How can you tell whether you’re not asking people for interviews the right way, or if your product just doesn’t resonate with people in your target market?

CA: The easiest thing is to find some other person, even outside the target market, pitch them, and ask for feedback. So if the startup has an email drafted, forward that email to someone completely outside of their organization and say, “What do you think about this email? Would you be likely to say yes? What do you think about the people who wrote it?” It’s so valuable to have a friend outside the building for things like this. I just have a couple of friends, or people I’ve worked with in the past—at any given time I might send them an email and then follow up via chat, and be like, “Did you get that email? Was there anything weird about it?” Just the ability to do that saves so much time. If you don’t have that person, find that person in the startup community.

This is also something you could do with a quick survey. We’ve done this for feature work at Yammer, when we’re trying to ascertain whether the tone of our copy is positive for people. We might show a screenshot that has a bunch of copy on it, and then on the next page of the survey just ask a couple of questions like, “What did you think this was asking for? What did you think was happening in this step?” and see how people respond. We’ve definitely had cases where certain words had a certain connotation that people were picking up on. So we might show a screenshot, and then on the next page people would say, “Oh, I thought this was going on because this term seemed very negative to me,” and it’s often very surprising.

If they actually got people to respond to their pitch, and no one identified any issues with it, then I would move on to the next easiest thing to validate, which is, “Does this solution make any sense to their target audience?”

AJ: How do you validate that?

CA: Once people have agreed to talk to you, you want to know what they’re doing today. One trap startups tend to fall into is to ask aspirational questions. It’s typical to say something like, “Would you be interested in a service that does X?” That’s an almost useless question. The odds are that you’re going to get a “yes” answer, because frankly, it’s free to say yes. There’s no commitment involved. If you say, “Would you like a service that delivers chocolate to your house every night?” I’d say, “Sure.” Never mind that I’d have to pay for it, or that my health might suffer. The other thing is that most people have things that they wish they would do. If you’re asking about future behavior—”If you had this service, would you do X?”—people are just terrible predictors. It’s not just that they’re likely to say yes; they’re likely to be wrong.

Instead of asking, “Would you like to use a service like this?” you want to take a step back and say, “Tell me about how you have found care for your children in the past.” Then you’re going to get answers like, “I’ve used this online service,” or “I’ve never used an online service,” or “I asked the person who lives next door because I know them, because I’ve lived next door to them for ten years,” or “I asked my friend who already has a babysitter how she found hers.” These are going to be useful bits of information, and you’re going to use them as a jumping-off point to figure out how, from that past behavior, you can shunt people into a new behavior.

A lot of times, by talking to people about what they’re currently doing, you can uncover their frustrations with what they’re currently doing. For example, someone might say, “Oh, I don’t have a problem getting a babysitter. I just ask my friend Joyce, who has a babysitter that she really trusts, and I just ask Joyce for that babysitter’s number.” The frustration might be that sometimes the babysitter’s already committed to Joyce, or frankly, Joyce is getting annoyed that you’re poaching her babysitter, and you don’t want to lose a friend over it, or that this babysitter’s great, but she doesn’t drive. Those little bits of frustration are where you can identify opportunities for providing a better solution.


AJ: How do you know you’re moving in the right direction, once you’ve tweaked your pitch and started talking to people?

CA: If you can get people to talk to you, you’re moving in the right direction. And once people have started talking to you, you should be listening for emotion. If you’re talking to people, and they’re very polite and mild-mannered the whole time, that’s a sign that you’re not really solving a big problem. I’ve never seen an interview case where people who were enthusiastic customers did not express some sort of frustration or excitement. Another big one is shame—people who feel like they ought to be doing something but they aren’t.

If you don’t hear the variation, if you’re not putting exclamation points in your notes anywhere, then you’ve got a bunch of polite people who probably won’t buy your product. I’d say if you talk to five people and none of them seem particularly enthused, then try talking to a different type of five people. It’s very unlikely that you’re going to strike out five times in a row, if you’ve really got a good market pitch.



AJ: Is there a certain number of interviews you need in order to figure out whether your product is resonating with people?

CA: The number of interviews people do is going to vary. A lot of times, I’ve said anywhere from 30 to 50 for this kind of scenario, where someone is just getting started. That person may be on the brink of making a big decision like, “I’m going to quit my day job,” or “We’re going to hire a full-time engineer,” or “We’re going to raise money.” Those are giant decisions. So 30 to 50 interviews is a lot, but if you’re deciding to quit your cushy day job and jump in with both feet, you want a sense of comfort first. If you’ve already started a company and made those big decisions, then to some degree, you’ve already taken on that risk, so you might do fewer.

If you are able to very rapidly put out a minimum viable product and get people using it, then again, you might do fewer interviews because you’re going to be actually building the solution. Certainly, I’ve known people who’ve done five to ten interviews, but within the next week, they were able to put out a minimum viable product and get real customers using it. So they say, “Well, we only need a few interviews, because now we’re actually watching people use the product, and people are giving us money,” which of course is the strongest possible signal.

If you have a startup challenge, and you’d like insight from an experienced entrepreneur, let us know in this short form. – Eds



Eric Ries | Photo: The Lean Startup Conference

In our “Now What?” series, experienced entrepreneurs discuss issues that real-life startups face. In this piece, Eric Ries talks about testing two-sided markets and gets real about usability testing, too (if you’re new to those terms, we’ve defined them below). If you have a startup challenge, and you’d like insight from an experienced entrepreneur, let us know in this short form. – Eds

The startup’s problem

We’re trying to create a marketplace for consumers (think: eBay, Etsy, or UrbanSitter—but a little more specialized). We’ve talked on the phone or in person to 100 buyers and 100 sellers who’ve told us they’d use the site. We’ve gotten 20 sellers to give us basic info on what they’re offering, and at what price. We’ve tested the idea by email, matching up two buyers with sellers; the transactions were completed, everyone had good experiences and was enthusiastic about using the service again, so we built a bare-bones site. But after having contacted 150 more potential buyers by email and after having run a Google ad to draw buyers from outside our own network, nobody is buying. Now what?

About Eric Ries

In addition to serving as Editor at Large for The How, Eric Ries is an entrepreneur and author of the New York Times bestseller The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, published by Crown Business. He graduated in 2001 from Yale University with a B.S. in Computer Science. While an undergraduate, he co-founded Catalyst Recruiting. Ries continued his entrepreneurial career as a senior software engineer, leading efforts in agile software development and user-generated content. He later co-founded and served as CTO of IMVU, his third startup. In 2007, BusinessWeek named Ries one of the Best Young Entrepreneurs of Tech. In 2008, he served as a venture advisor at Kleiner Perkins Caufield & Byers before moving on to advise startups independently. Today he serves on the board of directors for Code for America and on the advisory board of a number of technology startups and venture capital firms. In 2009, Ries was honored with a TechFellow award in the category of Engineering Leadership. In 2010, he was named entrepreneur-in-residence at Harvard Business School and is currently an IDEO Fellow. The Lean Startup methodology has been written about in the New York Times, the Wall Street Journal, Harvard Business Review, Inc., Wired, Fast Company, and countless blogs. He lives in San Francisco with his wife, Tara.

Interview with Eric Ries, August 2014. Edited and condensed here.

Sarah Milstein & Mercedes Kraus: What would be your first step here?

ER: We don’t have a lot of detail on this situation, so here’s what I’m going to assume: This entrepreneur thinks that they’re going to be able to replace eBay by creating a much better buying and selling experience for some kind of product category that they’re very passionate about. I’m going to make a further assumption that this is what we call a sticky-engine-of-growth business. The idea here is that once you start using this product, you pretty much can’t stop. Products that have that character to them have a very specific kind of growth pattern. They have network effects, just like viral products have network effects, and they’re theoretically very similar, but the phenomena that you measure in the world are very different.

In the case of a company like eBay, the network effect is that, once you start using it, you can’t stop because everybody else is there, and it becomes the de facto place where you buy or sell the product in question. So if you’re a Beanie Baby buyer, you can’t go anywhere else because all the product inventory is there. If you’re a Beanie Baby seller, then you can’t go anywhere else because all the customers are there. You’re stuck. But just because you buy your Beanie Babies on eBay doesn’t mean you’re going to go tell your friends about it. You may be very private about the fact that you’re a Beanie Baby collector, and that’s fine. No problem. A PayPal or a Facebook is very different; it doesn’t work if people you know don’t participate. So I’m going to assume that the eBay model is the goal: this product is for people who are obsessively buying and selling a collectible, like anime collectibles and Star Wars dolls, that kind of thing. Classic two-sided market.

Now, here’s the issue. Rule number one in a situation like this is always: Have you facilitated a transaction to show it can be done? We have, so we know that we can create some value. Now, what I want to know is: Can we get someone to stick to this, whatever the experience is that we’re trying to create? We want someone who’s going to use our product to say, “This is my place to be.” So we have to ask ourselves, “OK, now, what do I have to accomplish in order to make that a reality?”

As soon as the words came out of my mouth, I’m thinking, “I’m screwed.” Because buyers want the maximum inventory, and of course they’re going to check a lot of sites. How can we make this the place they want to go? We might try to figure out how to create a massive amount of inventory, so that they don’t need to go anywhere else. Here’s a great example: Reverb, a company I’m an investor in (in Chicago), sells musical equipment like vintage guitars, amps, pedals, and stuff. The founder of Reverb had a guitar store already. So when the site launched, it had unbelievable inventory that you couldn’t find anywhere else. But the best part is that the store also buys used equipment. So if you were a customer, and you listed something, he would buy it, and you’d have a great experience. If you were looking, you’d find cool stuff, and you’d have a great experience, too.

That actually might be a model we could do with this startup. We could say, “I want to be a general-purpose collectible site, but I can’t corner the market in all collectibles. But maybe I could corner the market in some specific kind where I could build up an inventory. I personally could put my own capital to work buying stuff.”

A lot of people who want to create two-sided markets are chicken. They want to do e-commerce, but they don’t want to hold inventory. So just get over it. Or you can try the Airbnb trick of finding existing inventory at Craigslist and porting it over. There are a million different ways to create that additional inventory, and you can go crazy with it. But what startups forget is the goal: We want one customer to feel like they have to stick to our product. This is the cool thing about network effects. Very few people, if any, in a network experience the whole network at once. My telephone has value for me because I can call the other people in the network. The larger the network, the more people I call, the more valuable it is. But, how many people do I actually call in a day as an individual customer? For me as a customer, the value of the network is the number of other nodes in the network that I actually interact with, which, in a lot of cases, can be very, very, very small.


It’s possible in the early days of a network-effects business to simulate the experience of network saturation for an individual customer, especially if you identify that customer in advance and cheat. You could imagine that we’re going to target this customer and try to make their life perfect. If you knew everyone I’m supposed to call tomorrow on a new phone network, and you went and signed them all up and made them available for me to call, I would have a great experience because everyone I need to call is there. I would be like, “Wow, this product is awesome.”

But most entrepreneurs are too chicken to actually do the work to create that good experience for the initial customer. If that’s your situation, you can cheat by penetrating an extremely dense network subnode [i.e., a small, tight-knit group within the larger population -Eds] and getting all those people signed up at once. That’s why so many people love college-campus products like Facebook. It can be incredibly valuable with just one school signed up. So back to our case. We don’t want to start by going after all collectibles. We’re going to go after vintage Star Wars dolls. There are only 25 people in the world who buy and sell those things because they’re psychotic collectors, and we’re going to go sign all 25 of them up and make this the place where they interact with each other.

SM & MK: OK, so our entrepreneur has some inventory, and let’s say they’ve cornered a small market, but nothing is closing. You were facing a similar problem at IMVU. What was the first thing you did? [IMVU, a company that Eric co-founded, lets users create 3D avatars and exchange virtual goods. Successful now—and the basis for much of Eric’s Lean Startup methodology—it started off poorly. Wikipedia has some history on it. – Eds]

ER: I hit my head against the wall for months, so I wouldn’t recommend that. It was so unbelievably hard. Like, you have on the order of 100 customers or something, and you’re not seeing traction. Starting small is no problem. But having 100 customers try your product and only three of them think it’s all right—that’s different. I’ve heard a lot about startups’ spending $5 a day on Adwords and bringing in 100 clicks with that, of which one person would stick. That feels awful. You just want to die pretty much.


So what I wish I’d understood then is that that means you have to pivot. It’s not working. It’s not ambiguous. People call me all the time: “Hey, I need to tell you my story, and then you can help me decide if it’s time to pivot.” It’s very easy: If you’re calling me, it’s time. When it’s working, you absolutely 100 percent will know that it’s working. You will not be asking an expert whether you should pivot or not. It will be clear. So something’s not right in the value proposition here. I can’t tell you what’s wrong. All you can do is investigate.

For us at IMVU, we eventually brought people in for usability tests and watched them use the product.

SM & MK: So in your case, that was bringing people in and having them physically sitting next to you and they’re trying to use the product?

ER: That’s right. Listen, when in doubt, that is always a good thing to do, because when you watch people trying to use your product, it’s extremely educational. And yet it took me months to see the problem. The reason I wasted so much time was that when people came in and tried to use our product and failed, I assumed we had a usability problem. So I tried to make the product easier to use. Now, making the wrong product easier to use just makes it easier for people to realize they don’t want to use it. So it’s the definition of a lose-lose-lose. It’s worse for the customer, worse for you, worse for the metrics. It’s bad, and it’s very frustrating.

Every entrepreneur is like, “I’ll just explain it better. I’ll get better marketing. If customers just weren’t quite so stupid, then all these good things would happen.” But unless you’re looking at the right non-vanity metrics, you can never figure it out. When I was having this experience, I didn’t know the term “vanity metrics.” I didn’t know about pivots. I didn’t know about any of this stuff. So I spent a lot of time banging my head against the wall, trying to get my vanity metrics to go up and failing.

SM & MK: So how did you refine your metrics?

ER: We could easily have solved this problem for ourselves if we’d had a very simple conversion-rate metric and a very simple retention metric. For example: What percentage of people who try to use the product succeed? And what percentage of those people came back two days later? That would have been enough to show us that our improvements were not making the situation better. We didn’t need some fancy analytic software. It could have been really very basic.
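Those two numbers really can be that basic. Here’s a minimal sketch in Python, with an entirely made-up event log (the field layout, user IDs, and dates are invented for illustration, not IMVU’s actual data):

```python
from datetime import date

# Hypothetical event log: (user_id, tried_on, succeeded, last_seen_on)
events = [
    ("u1", date(2014, 8, 1), True,  date(2014, 8, 3)),
    ("u2", date(2014, 8, 1), True,  date(2014, 8, 1)),
    ("u3", date(2014, 8, 1), False, date(2014, 8, 1)),
    ("u4", date(2014, 8, 1), True,  date(2014, 8, 4)),
]

# Conversion: what percentage of people who tried the product succeeded?
succeeded = [e for e in events if e[2]]
conversion_rate = len(succeeded) / len(events)

# Retention: of those who succeeded, who came back two or more days later?
retained = [e for e in succeeded if (e[3] - e[1]).days >= 2]
retention_rate = len(retained) / len(succeeded)

print(f"conversion: {conversion_rate:.0%}, retention: {retention_rate:.0%}")
# → conversion: 75%, retention: 67%
```

If improvements to the product don’t move either number, that’s the signal Eric describes: the changes aren’t making the situation better, no fancy analytics software required.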

SM & MK: Tell us more about your usability tests.

ER: Usability tests are in a category of qualitative testing, as opposed to quantitative testing. Quantitative testing is for validating a hypothesis. The classic entrepreneur belief is: I believe that everyone in the whole world is going to love my product. Easy experiment to run. If it’s true that everyone in the world will love your product, it’s also true that a hundred people will love your product. So you launch it to a hundred people, and if all one hundred love the product, then keep going. Have fun. People in that situation tend not to be very interested in qualitative testing because they’re too busy turning the crank. No problem.

But if you’re doing a usability test, it’s probably because something is not going as well as you expected. Like you thought everybody in the world would want to use your product and only 10 percent of people do. Or in the case here, we did 100 interviews and the first couple of customers seemed to work, and then it died. What people told you and what they’re doing are very different things. Happens all the time, because nobody wants to tell you your baby isn’t beautiful.

Now that we know people won’t use our product, we can have a usability test where we watch them very carefully to understand why not. You can ask them questions about what they’re doing. You can try to really understand their mindset at the time that the action takes place. We always say, “Metrics are people, too.” Everything we measure in Lean Startup is the behavior of an individual person. If you want to change that behavior, the most important thing to understand is: What is in the mind of the person the moment before the behavior happens? What were they thinking at the time? The same question applies whether the behavior is signing up, buying the product, or walking away.

If somebody uses your product every day, that means that there was literally a moment when they were looking at their phone, and there were hundreds of apps on their phone, and they chose to use your app. What were they thinking in the minute before? Those are the kind of things we want to understand, and that data is very useful if it’s around a specific quantitative result that we already know.

I remember one time when I was doing usability testing around a software product, and in the usability test, customers always succeeded at the task in question, yet the metrics said very few customers would succeed. It was a paradox. We knew from the data in the real world that nobody was able to figure out how to use this particular feature, but in the usability test everything seemed fine. Customers used it, no problem.

We finally realized that, in the focus groups before the quantitative testing, we gave them a nudge so small we didn’t even notice it. It was an unconscious tip like, “Hey, just look here.” Just very subtle, and that totally messed up the results. Once we resolved to sit there silently, knowing what we were looking for, we could ask, “What were you thinking during the 25 minutes you just sat there, not clicking any buttons?” And the customer’s like, “I’m pissed. I was afraid; I didn’t know what to do.” They would express what they were actually feeling.

SM & MK: So how long should we expect usability testing to take? Is this something that is days’ worth of work, and how long does an individual usability test run, anyway?

ER: An individual test is usually something like 20 minutes long. But how many you do is 100 percent context-dependent. You keep doing them until you get the result you need. Sometimes the answer is completely evident in one usability test. You say, “Eureka, I’ve found it.” You make the change, the numbers move, and you move on, and that’s a day’s work. Sometimes, though, you say, “Eureka. Ah, it’s so obvious.” You go fix the thing, and then you do another usability test and realize it didn’t make a damn difference. “Eureka, I have found it again and again and again.” I have gone through hundreds of Eurekas before I finally said, “Wait a minute.”

The reason I think that happens is that we’re optimistic, and we think that we’re almost there. So we often wind up focusing on micro-optimization when the problem is bigger. We’re improving usability instead of focusing on the core value proposition. This kind of testing is great because you eventually run out of things to test, and it forces you to take a step back and say, “Wait a minute, am I asking a big enough question about my strategy here?” Then you can get out of micro-optimization and enter a strategic conversation. But depending on how smart you are and how much experience you have, that could take days, weeks, or, unfortunately, months or years. It also depends on how much money you have, which is part of being a startup.

People say raising too much money is dangerous for startups. This is one of the mechanisms by which that danger becomes manifest: if you have unlimited runway, you often don’t have the motivation to think bigger about what’s actually going on. You just keep micro-optimizing your product.

SM & MK: On usability testing, what do you wish you’d done differently in the past?

ER: God. So many things. I wish I had understood this quantitative/qualitative thing I was just talking about. That would have saved me so much time. The number one biggest thing that I wish I had known in the past was how to reconcile vision with customer feedback. It sounds like a big picture, abstract, lofty, philosophical thing, but it’s where the rubber hits the road of the usability test. This is a place where if you don’t have confidence in your vision and an understanding of what that vision means, you just get so screwed.


Let’s say I just produced the most amazing album that’s going to be the biggest hit record of all time. I played it for one person, and they’re like, “It sucks. Your music sucks, dude.” What do you do? Do you give up? Are you not a musician anymore? If your goal is to produce pure art, then you can just say, “You don’t like my music, you’re a moron.” But most entrepreneurs want to have a very specific impact on the world. As a matter of fact, so do most artists. So you’ve got to find this place where you can say, “What is the right synthesis of what I believe and what reality will accept?” A usability test is the place where you’re the most emotionally challenged to do that. Okay, someone doesn’t like your product, but what does that really mean? Does it mean you have the wrong product? Does it mean they’re not the target market? If you don’t have solid conviction, you can wind up on a weekly cycle of pivoting from idea to idea to idea too soon.

But if you have too strong a conviction, you can stubbornly refuse to take in the feedback that you need, and therefore never get to the next level. How do you find the confidence that you’re on the right track, while also listening to the person to really understand what they’re saying?

The critical thing to understand is that feedback tells you about the person giving the feedback, not about yourself. If someone says your product sucks, that doesn’t mean anything about your product. You’ve learned zero about your product. All you’ve learned is that this person doesn’t like your product. Then the question is: What do I extrapolate from that? You have a data point—this kind of person doesn’t like your product very much—so let’s try again.

It used to take me dozens and dozens of interactions. Now I’ve gotten much better at it. But it used to be that, until I saw every kind of person in the world not like my product, I could still be like, “Well, that’s just a type thing. Once we find the right kind of person, then they’re going to like my product.” But when young and old, and every race, ethnicity, gender, and age all consistently didn’t like it, I started to be like, “Hmm, maybe I don’t have the right product after all.”

Now that I’ve done this a bit more, I’m much better at drawing more reasonable inferences. I can see, “You know what, even from just three data points here, I know something’s not right.”

SM & MK: What’s the pattern that you see now that helps you recognize that it is a pattern?

ER: That’s a great question. The pattern is simply that people say the same thing. This teenager and this 45-year-old—who have nothing in the whole world in common and can’t agree on anything—happen to agree that my product is terrible. That’s odd. In fact, it’s pretty impressive that they agree about the specific thing that’s wrong with it.

Of course, sometimes people give different reasons. For one person, it’s too expensive; for another person, it’s too hard to use. So it’s confusing, and you have to get more information. But if you wait to get a definitive answer, you’ll be waiting forever. You have to make the best decision you can on the basis of the information you have now.

The good news is, it’s a process of continuous experimentation. So your new hypothesis will immediately be put to the test tomorrow. And the cumulative sample size of the information you collect actually turns out to be quite large over the course of a month or a year. You can make relatively rash decisions based on small data points, knowing that if you make a wrong decision, you can backtrack.

SM & MK: With usability testing, what steps would somebody else be tempted to take that you’d say they can ignore?

ER: This is going to sound totally contradictory. The first is insisting that every person you do the test with absolutely matches your customer archetype: being a prima donna and not accepting anybody’s feedback. I know some teams that never do a usability test because they can never find a target customer. It’s like, “Hmmm. Maybe the fact that it’s so hard to find a target customer is indicative of a problem.” I mean, you can be wrong about your target customer. I once had a product where the kind of customer that was using it was different from my target customer, and I kept trying to kick them out, until somebody else pointed out I was being really dumb.

The opposite is accepting feedback indiscriminately from everybody. That’s also a mistake. Take a random person off the street and say, “Hey, here’s my new high-tech medical device.” And they’re like, “Huh?” Not valuable.

It all comes back to remembering that the true goal of all this testing and data collection is to validate the hypothesis that underlies the vision. If I have a strong belief that every doctor in the world will understand this new medical breakthrough, then it makes sense to talk to doctors and to validate and to take their information seriously. The fact that a person on the street does not understand it, that’s irrelevant. But if I really believe I have a product that’s for everybody, then I should accept the validity of everybody’s feedback.

You can scale the feedback you get by how close to the archetype a person is. If they’re not very close, you can say, “I’ll take this 10 percent seriously,” and require 10 times more data points from that kind of person before you consider it valid. But the other mistake I made (and this is common among engineers especially) was to dismiss whatever number of data points we had as not a statistically significant sample. That’s the ultimate excuse to get you out of any data. It’s almost always wrong.
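That scaling idea can be sketched in a few lines. The similarity scores and responses below are invented for illustration; the point is only that an off-the-street opinion counts for a tenth as much as a target customer’s:

```python
# Hypothetical feedback log: (archetype_similarity, liked_product).
# similarity 1.0 = matches the target archetype; 0.1 = person off the street.
feedback = [
    (1.0, False),
    (1.0, False),
    (0.1, True),
    (0.1, True),
    (0.5, False),
]

# Weight each data point by closeness to the archetype, so a loose match
# needs roughly 10x as many data points to carry the same weight.
total_weight = sum(sim for sim, _ in feedback)
positive_weight = sum(sim for sim, liked in feedback if liked)
weighted_approval = positive_weight / total_weight

print(f"weighted approval: {weighted_approval:.0%}")
```

Here the two raw “likes” from off-the-street respondents barely register once weighted: the target customers’ thumbs-down dominates the signal.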

SM & MK: Why?

ER: The intuition people have about sample size comes from things like presidential polling, where the thing we’re trying to detect is actually a relatively small change in preference. So we’re looking at the tenth of a percent because it matters a lot. A candidate that gets 50.1% of the vote wins; one that gets 49.9% loses. We’re really looking for minute changes, so we need a big sample to detect that kind of thing. In startups, what we need to know is: How big is the underlying signal? If I want to know whether people like to breathe air or not, I don’t need that big of a sample to figure that out, because it’s very, very, very obvious. Do people walk with their feet or with their hands? I don’t have to sample a million people to find that out, because the signal is quite strong. The kinds of tests we do in a startup are supposed to be high signal-to-noise ratio things. Like: Are we on the right track? Do people like our product at all? Given that we’re looking for high-signal things, if the signal is ever ambiguous, the answer is no. [To really understand this issue, check out Nate Silver’s book, The Signal and the Noise. – Eds]
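One hedged way to make the sample-size point concrete is an exact one-sided binomial test, assuming a null hypothesis that customers are indifferent (a 50/50 chance of liking the product). With a strong signal, even ten interviews is plenty:

```python
from math import comb

def binom_p_value(k: int, n: int, p: float = 0.5) -> float:
    """One-sided probability of seeing k or more positive responses
    out of n if customers were really indifferent (null: p = 0.5)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A strong signal: 9 of 10 interviewees love the product.
print(round(binom_p_value(9, 10), 4))  # → 0.0107
```

Roughly a 1% chance of that result arising from indifferent customers, from a sample of only ten. A polling-sized sample is needed only when the effect you’re hunting is tiny, which startup-stage questions shouldn’t be.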

I’ve never heard an entrepreneur say, “We have a small number of customers, but they absolutely adore us or think we’re awesome. But it’s an insignificant sample, so we need to do more testing.”
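Eric’s intuition about signal strength versus sample size can be made concrete with a one-sided binomial test. This sketch is our illustration, not something from the interview, and all the numbers are invented:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """One-sided tail: probability of seeing k or more successes in n
    trials if the true rate were p (the "no real signal" null)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Strong signal: 9 of 10 early users love the product. Even this tiny
# sample is very unlikely (~1.1%) if customers were really indifferent.
strong = binom_tail(9, 10)

# Weak signal: 520 of 1,000 poll respondents prefer a candidate.
# Despite 100x the sample, a 52% result is still ambiguous (~11%).
weak = binom_tail(520, 1000)

print(f"9/10 users:      p = {strong:.4f}")
print(f"520/1000 voters: p = {weak:.4f}")
```

A strong signal needs only a handful of data points to clear the bar; detecting a half-percent swing is what demands the enormous samples.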

To learn more about how to do usability testing yourself, check out Andre Glusman’s straightforward slide deck, “Lean Usability,” David Peter Simon’s useful post on guerrilla usability testing, and Laura Klein’s practical book, UX for Lean Startups.

If you have a story to share about the actionable metrics you use to measure your value to customers, join the discussion over here.

If you have a startup challenge, and you’d like insight from an experienced entrepreneur, let us know in this short form. – Eds

Sarah Milstein is Editor in Chief of The How.

Mercedes Kraus is Startup Managing Editor of The How.

Jargon, demystified

If there was a term you didn’t know that we haven’t defined, please let us know—we want to help! Also, if you have a better definition or an addition to a definition, shoot us a note.

Sticky, viral, and paid engines of growth. As Eric explains above: The idea with a sticky product is that once you start using it, you pretty much can’t stop. A news site that you check daily is sticky. In this handy post on engines of growth, David Link explains that the viral engine “depends on users acquiring and activating other users as a mere and necessary consequence of normal product use… Modern examples are Hotmail (with the viral hook being the footer in every e-mail), Facebook (with the viral hooks being the friend suggestions and others), and Zynga (with the viral hooks being the various opportunities for social interactions within its games).” The paid engine of growth is the other basic approach, and it relies on ads, referral bonuses, and other cash outlays. For more on engines of growth, check out David’s piece and this post from Eric.

Network effects. When a product or service becomes more valuable the more people use it. Common examples include the phone system, markets like eBay, and platforms like Twitter—all of which are fundamentally more useful when more people are connected to them. This Wikipedia article is a little geeky, but it gets into some good detail.

Two-sided market. A product or service that brings together two distinct groups of people for a shared transaction of some sort. eBay brings together buyers and sellers. Uber brings together riders and drivers. App stores bring together mobile developers and phone users. Etcetera. Harvard Business Review has a tidy summary of two-sided markets.

Pivot. A pivot is a change in strategy based on what you’ve learned so far. They’re super-common in startups, even though the stories aren’t always well known. For example, YouTube started as a video-dating site. When the dating part didn’t take off, the company pivoted to focus on video sharing, which seemed to hold promise. Here, Eric explains pivots in depth, and this Forbes piece has a nice rundown of common kinds of pivots.

Usability testing. “Usability testing refers to evaluating a product or service by testing it with representative users. Typically, during a test, participants will try to complete typical tasks while observers watch, listen, and take notes. The goal is to identify any usability problems, collect qualitative and quantitative data, and determine the participant’s satisfaction with the product.” That straightforward definition is, surprisingly, from a U.S. Department of Health & Human Services site, which has the clearest basic info around about usability testing.

Metrics. A fancy term for measurements. Actionable metrics—those you can make meaningful decisions around—measure specific customer behaviors and patterns. During the interview above, Eric described them like this: “Anything denominated on per customer or per human being basis tends to be the right thing. The percentage of customers who subscribe to our article and become long term readers. The percentage of customers who read the article today and come back to read a new article, versus the ones that come back and read a new article tomorrow. The average revenue per customer. Those kinds of numbers tend to be really useful. Say we have 10 customers come in, and three of them love our product. We have 10 more customers look at the next version. Four of them like it, and then five, and then six, then seven, then eight. Eventually 10 out of 10 people like our product. You can see progress is being made even if the total number of customers might only be 100, because we chunk them up 10 at a time.” (That kind of chunking up is called cohort analysis.) Here’s Eric’s cornerstone post on actionable vs vanity metrics.
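The cohort chunking Eric describes is easy to sketch in a few lines. This is a toy illustration with invented data (not his actual numbers), where 1 means a customer liked the version they saw and 0 means they didn’t:

```python
# Each cohort is the next batch of 10 customers, shown a newer version.
cohorts = [
    [1, 0, 0, 1, 0, 1, 0, 0, 0, 0],  # saw v1
    [1, 1, 0, 1, 0, 0, 1, 0, 0, 0],  # saw v2
    [1, 1, 1, 0, 1, 0, 1, 0, 0, 1],  # saw v3
    [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],  # saw v4
]

# Per-cohort "liked it" rate: the actionable trend Eric describes.
rates = [sum(c) / len(c) for c in cohorts]
print(rates)  # climbing rate shows progress, 10 customers at a time
```

The total customer count stays tiny, but the rising per-cohort rate is exactly the kind of per-customer signal that counts as actionable.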

Vanity metrics. Measurements that are appealing to look at—and that shout for attention—but don’t tell you anything meaningful about your value to customers. For example, it’s fun to watch your number of Twitter followers increase or focus on how much total revenue you’ve taken. But Twitter followers aren’t necessarily customers, and gross revenue without contextual information doesn’t tell you whether you’re looking at sustained growth or scattershot injections of cash.

During our interview (but not quoted above), Eric explained: “The shorthand is that vanity metrics are gross numbers and large quantities. So: total revenue, total customers, number of clicks—any number that’s big, the kind of thing you like to brag about. “Oh, my god, we have 2,000 page views! And now we’ve hit 20,000. Lo! We have 2 million page views!” That could be 2 million people looked at our site one time and hit close. Could be that one guy loves our product way too much. He just started hitting refresh, refresh, refresh. Or it could be anything in-between. You actually really don’t know what’s going on. It means that you are subject to all kinds of variation and gyration due to external factors, and those numbers are not necessarily correlated to anything that you did. So they’re totally worthless.”
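Eric’s refresh-happy user makes the point vividly; here is a minimal sketch of the same idea with a hypothetical page-view log (all data invented). The gross total is identical in both cases, while the per-human number tells completely different stories:

```python
# One entry per page view, keyed by user ID.
log_a = ["u1"] * 2000                    # one user hitting refresh
log_b = [f"u{i}" for i in range(2000)]   # 2,000 one-time visitors

gross_a, gross_b = len(log_a), len(log_b)              # the vanity number
unique_a, unique_b = len(set(log_a)), len(set(log_b))  # per-human view

print(gross_a, gross_b)    # identical, brag-worthy totals
print(unique_a, unique_b)  # wildly different realities
```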

Conversion rate. The percentage of people who perform a desired action, like filling out a form or completing a purchase. Mashable gives a good overview of the term.

Retention rate. The percentage of customers who return in a period. Retention takes into account churn—those who leave—so you can see overall growth. Inc explains how to calculate it.
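Both rates are simple divisions. This sketch uses one common retention formula (period-end customers, minus new sign-ups, over period-start customers); the numbers are invented for illustration:

```python
def conversion_rate(converted, visitors):
    """Share of visitors who completed the desired action."""
    return converted / visitors

def retention_rate(customers_at_start, customers_at_end, new_customers):
    """Share of period-start customers still around at period end;
    subtracting new sign-ups separates retention from growth."""
    return (customers_at_end - new_customers) / customers_at_start

print(conversion_rate(25, 1000))     # 25 purchases from 1,000 visitors
print(retention_rate(200, 210, 30))  # grew to 210, but 30 are new
```

Note how the second example grew overall (200 to 210 customers) while still losing 10 percent of its original base, which is exactly what churn hides if you only watch the gross total.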

Hypothesis. What you think will happen when customers come into contact with your product. The basic structure looks like this: “I believe [customers like this] will [behave like this] in [this measurable way].” “Validating a hypothesis” means you’re running experiments that prove it true; “invalidating a hypothesis” means your experiments are proving it false. Ben Yoskovitz has a clear write-up on how to craft a useful hypothesis.

Runway. The amount of time a startup has until it runs out of cash. The term is most often applied to companies that have cash in the bank from investors but bring in little or no revenue.
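The arithmetic behind runway is cash on hand divided by net monthly burn. A minimal sketch, with invented figures:

```python
def runway_months(cash_on_hand, monthly_expenses, monthly_revenue=0):
    """Months until cash hits zero at the current net burn rate."""
    net_burn = monthly_expenses - monthly_revenue
    if net_burn <= 0:
        return float("inf")  # at break-even or better, runway is unlimited
    return cash_on_hand / net_burn

# $500K in the bank, spending $60K/month, bringing in $10K/month:
print(runway_months(500_000, 60_000, 10_000))  # 10 months of runway
```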



Daina Burnes Linton | Photo: The Lean Startup Conference/Jakub Mosur and Erin Lubin

The Lean Startup method tells you to test your product idea with customers before you spend time and money building something you aren’t sure people want.

But how can you test a product without having the actual product? In her talk at the 2013 Lean Startup Conference, Daina Burnes Linton, CEO of Fashion Metric, explained how her company had done just that. As a result of their approach, Linton said, “Fashion Metric today is very different from what our original idea was”—and she considered that a very big success. Here’s how Fashion Metric started:

We challenged ourselves to really understand the real problem the customer was experiencing…. We thought clothing shoppers—shoppers in stores and malls—might have a very difficult time deciding what to buy. Maybe it’s because they’re alone, and they’re not with their friends, and they can’t get their opinions. So, we thought, “Well, wouldn’t it be great if there was a mobile app, and you could take a picture of what you’re trying to figure out what to buy, and you can gain access to a personal stylist that can give you advice in real time and help make your purchase decisions?” Sounds like it could be a reasonable idea, right? Of course, friends and family always say, “Oh, yeah, that’s a great idea. Build that. That sounds awesome.” But we weren’t sure if we were solving a real problem, and so, we decided to really understand: Is this a real problem that customers are experiencing when they’re shopping in stores? So, we talked to who we thought our customer was all over the country. We went to malls in Los Angeles, New York City, and San Francisco. We asked, “What’s the biggest problem that you have when you’re shopping for clothes?” Very open-ended. What we found when we did this exercise was that not a single person—not one person—gave us the natural response that they had a hard time deciding what to buy. Not one person. So we almost built an app to solve that problem that didn’t exist.

By doing targeted customer research before building their product, the Fashion Metric team gave themselves the equivalent of 20/20 hindsight, in advance.

Of course, if they had stopped there, the company wouldn’t exist today. Instead, they looked carefully at the data they were gathering and realized that 90 percent of the men they spoke with said that finding clothes that fit was a problem. So the team dug deeper and learned that men particularly struggled to find dress shirts that fit well. With a clear trend around a real problem, Fashion Metric now had data to start building. Rather than code up a site, however, the team decided to validate their idea with an MVP that consisted of two parts: 1. a landing page asking for potential customers’ email addresses, and 2. follow-up phone interviews to gather customer sizing information. (Note that Daina is an engineer, so holding off on the coding was no small act of will.)

Before writing a single line of code, all we had was that landing page, and we learned a lot. We had an accelerated understanding of the problem; we were no longer building an app to solve a problem that didn’t exist; we understood the depth of the problem. We were able to see how far customers were willing to go to solve it. We were starting to see some trends in the data, to understand what questions we could ask, and whether or not it was technically feasible to solve the problem. You would think at that point, “Okay, great. Build something,” right? “Build the whole thing.” But we didn’t.

To find out what Fashion Metric did next, scroll down to watch or listen to Daina’s 12-minute talk. We’ve also included the full, unedited transcript below.

In the comments, we’d love to hear your best advice for finding relevant customers to interview before you have a product to show. Please tell us about the idea you were testing and how you found people. B2B and B2C ideas equally welcome! – Eds

Daina Linton is the co-founder and CEO of Fashion Metric, a company that builds technologies to increase sales and reduce returns in apparel e-commerce by improving fit accuracy. Fashion Metric’s comprehensive data-driven technology calculates a customer’s full complement of measurements based on a seed of information provided by the customer. Fashion Metric then uses this algorithm output to fit customers in made-to-measure clothing, or integrates it with e-commerce platforms that carry “off-the-rack” sizes. A trained engineer, Daina pursued her degree while simultaneously holding several research internships at university-affiliated hospital research labs. Eventually, she left a PhD engineering program at UCLA to parlay her experience in data analytics and image-processing methodologies and ultimately launch Fashion Metric. Follow her on Twitter.

Sarah Gaviser Leslie is a corporate storyteller and executive communications consultant in Silicon Valley. 

Jargon, demystified

Includes terms from the talk that we didn’t quote above. If there was a term you didn’t know that we haven’t defined, please let us know—we want to help! Also, if you have a better definition or an addition to a definition, shoot us a note.

MVP. An MVP, or minimum viable product, is an experiment that helps you quickly validate—or often invalidate—a theory you have about the potential for a new product or service. (Although often a stripped-down version of your product, an MVP is different from a prototype, which is intended to test a product itself and usually answers design or technical questions.)

The minimum concept is key because, in order to move rapidly and definitively, you want your experiments to include only features that help you test your current theory about what will happen when your customers interact with your product. Everything else is not only a waste of time and money, but can also cloud your results. The viable concept is key, too, in that MVPs have to actually produce test results you can learn from, which means it has to work on some meaningful level for customers.

The classic example is that if you’re trying to test the demand for a new service, you might do that by putting up a one-page website where people can pre-order the service before you’ve spent any time developing the actual thing or hiring people to make it happen. By the same token, if you’re testing a hypothesis around customer use of a new jet engine, then you have to make sure it will fly safely—in order for it to be truly viable, or feasible.

Landing page. A web page with a form to collect visitor info. HubSpot has an opinionated take on landing pages.

Ideation. Fancy word for brainstorming or idea formation.

Concierge. A type of MVP in which you manually fulfill a service that you’re thinking of automating in your product. A concierge MVP usually doesn’t require building much, if anything, and the high-touch interaction with customers lets you learn a lot.

Intelligence engine. Software that analyzes raw data to create useful insights.




Dan Milstein | Photo: The Lean Startup Conference/Jakub Mosur and Erin Lubin

We all know that hard work and good luck are key to startups’ success. But what if that’s not true?

What if all startups have people who work hard? What if a bit of serendipity is fairly common? Let’s make it concrete: Have you worked at—or run—a startup where people were deeply committed and worked long hours, yet the company failed?

In his talk at the 2013 Lean Startup Conference, Dan Milstein explored what does make a difference for startups: Information. It’s worth real money, he emphasized, and the way to make more money is to more quickly gather information that helps you figure out the right things to work on.

This mindset is so critical, in fact, that you should be afraid of working on the wrong things. Dan:

If hard work and luck are important, but they don’t seem to really distinguish the startups that succeed from the ones that fail, then the choices of what we’re working on must be critical. What you choose to work on is actually your biggest lever, with a huge differential effect. You should be very, very scared of working on the wrong things. In fact, you should be terrified. I would say you should be so terrified that you actually don’t work. If you’re not sure that what you’re working on is the most valuable thing to your startup, you should stop working. I tell people this and they think I’m exaggerating, but I’m not. You should only work if what you’re working on is the most valuable thing.

Dan gives examples and does the math to show why working on the wrong thing is devastating for a startup. He also talks about the kind of information you want to gather at a startup: the kind that answers the riskiest or most uncertain questions. He explained: “You actually don’t get much information when you already know something; you get a lot when you’re uncertain. And then, what information is valuable depends on what decision you’re making.”

As you may have noticed in your own startup, identifying your biggest risk can be hard. Dan points out that it’s harder than you think, because risk shifts constantly. He tells this story about a software product, for hospitals, that used a public data set. Before selling or building it, the company’s biggest risk was that nobody would buy it. So the startup created a demo, and one hospital signed a $10-million contract for the product before it truly existed:

That’s great, you did the right thing. So now your sales team is out there trying to repeat that and sell the second one, and you’ve got a bunch of engineers now building that thing. And I want you to imagine something. I want you to imagine a junior developer, someone on the team, bright guy but young–guy or girl. And some morning—it’s a Thursday morning—and they were given a job of taking the demo app and turning it into a real production system. And they’re working with this public data set, and they discover, to their surprise, that it’s not as comprehensive as everyone thought it was. It worked well for the demo, but for the actual hospital, it’s actually not going to work. The whole product that the company has sold is actually not going to succeed the way they’ve done it. They have to do it some other way. In the moment after this person makes this discovery, the biggest risk for the startup has changed. The biggest risk is no longer: Can we repeat this sale? The biggest risk is: Can we actually build the thing that we promised in the first sale that we thought we could build, but we just discovered we were wrong?

If the biggest risk has changed, the thing you should be doing to gather the most information has changed. Because the way you gather the most information is by going after the biggest risk. Therefore, the thing that’s going to get you the most information, and therefore the most money, has changed. So, as long as the company is still doing what it was doing before that discovery was made, they’re doing the wrong thing. And one way to look at this is that, in order for your company to move fast (the entire organization), the thing that will limit them in how fast they can move and how fast they can make money is how fast they can respond to the changing nature of risk. Because it’s only by going after the biggest risk that you make the most money, and because risks are changing all the time, the entire organization has to be able to change direction. And this, really, nobody gets this.

Learn more about identifying risk, gathering information, and making money by watching or listening to Dan’s 20-minute talk, embedded below. We’ve also included the full, unedited transcript at the end of the post.

When have you realized your biggest risk had changed? Let us know in the comments. – Eds

Dan Milstein is a co-founder at Hut 8 Labs, a software consulting shop in Boston. He’s worked as a programmer, architect, team lead and product owner at a variety of startups over the last 15 years. He is fascinated by the interactions between complex systems and the humans who build and maintain those systems. He’s recently written on How To Survive a Ground-Up Rewrite Without Losing Your Sanity, and Coding, Fast and Slow: Developers and the Psychology of Overconfidence. Follow him on Twitter.

Mercedes Kraus is Startup Managing Editor for The How. 

Dan Milstein, Risk, Information, Time and Money (in 20 Minutes), The Lean Startup Conference 2013




Jargon, demystified

If there was a term you didn’t know that we haven’t defined, please let us know—we want to help! Also, if you have a better definition or an addition to a definition, shoot us a note.

Opportunity cost. Given more than one choice of things to do and limited resources, opportunity costs are the potential benefits you give up in the choices you don’t explore. For example, let’s say you have a customer who asks you to build a highly specialized product for them, even though you don’t generally do extensive custom work. If you take the project, you’ll get money from the customer and perhaps some intangible things like a stronger relationship. But because you don’t have unlimited time and people, taking the project means you’ll give up the opportunity to build something else—perhaps a product that you could sell to many customers. If this sounds like every decision you make has an opportunity cost, you’re right on. Opportunity cost is a central idea in business—and it’s why the value of information in making decisions is so great. We found that these examples from Inc., while a little stiff, help put the term in a wider context.

CRUD app. CRUD is short for create, read, update, and delete: the four basic functions of database applications. It’s the simplest, dumbest kind of app an engineer can make.
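To show just how simple those four operations are, here is a toy sketch with a plain dictionary standing in for the database table (the record contents are invented):

```python
# A dict stands in for the database table.
db = {}
next_id = 1

def create(record):
    """Insert a record and return its new ID."""
    global next_id
    record_id, next_id = next_id, next_id + 1
    db[record_id] = record
    return record_id

def read(record_id):
    """Fetch a record, or None if it doesn't exist."""
    return db.get(record_id)

def update(record_id, record):
    """Overwrite an existing record."""
    db[record_id] = record

def delete(record_id):
    """Remove a record if present."""
    db.pop(record_id, None)

rid = create({"name": "Ada"})
update(rid, {"name": "Ada Lovelace"})
print(read(rid))
delete(rid)
print(read(rid))  # the record is gone
```

Real CRUD apps swap the dict for a database and add a web layer, but the shape of the logic is no deeper than this.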

Chained risks. A sequence of interconnected risks, where the first risk suggests that other risks will arise. In the talk, Dan mentions an essential risk chain of startups: 1. Can we build it? (this question is often framed as technical or product risk); if so, 2. will they buy it? (often framed as customer or market risk).

Degree of surprise. We only get information when there’s uncertainty and risk; so, the less you know—and therefore the more surprised you are by new information—the more you are learning.

Information theory / Claude Shannon. A branch of applied math, electrical engineering, and computer science. The foundational ideas of information theory were developed by Shannon in order to examine the communication, compression, and storage of data. We like this profile of Shannon in Scientific American. For geeks, this paper [PDF] on the wider context of information theory in the digital age goes deep.
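Dan’s “degree of surprise” maps directly onto Shannon’s notion of self-information: the less probable the outcome you observe, the more bits you learn. A minimal sketch (the probabilities are illustrative):

```python
from math import log2

def surprisal(p):
    """Shannon self-information, in bits: how much you learn from
    observing an outcome you assigned probability p."""
    return -log2(p)

print(surprisal(0.99))  # confirming a near-certainty teaches almost nothing
print(surprisal(0.50))  # a true coin-flip question yields a full bit
print(surprisal(0.01))  # a big surprise teaches the most
```

This is the math behind “go after your biggest risk”: experiments whose outcomes you can already predict carry almost no information.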

Series A funding. The first round of major investment, usually $2 – $10 million, that a startup receives (it may not be the first investment, however; seed funding is generally the first money—sometimes the founding team’s own—used to get a startup just off the ground). The name itself refers to the Series A Preferred Stock that investors receive in exchange for buying in; subsequent rounds are referred to as Series B, Series C, and so forth. Venture capitalists (VCs) are generally the investors, though in a round of funding, several firms often invest, and sometimes individuals participate, too. Over at Entrepreneur, they’ve got a good picture of the whole funding timeline; the process doesn’t always look exactly like that, but it’ll give you a sense of how things can go.

Valuation. For startups, this is a kind of appraisal that assesses the company’s financial value, usually based on potential growth rather than current profits or assets. For example, if your company has started selling a service for $100 per year, and you have 100 initial customers, it’s likely worth a lot more to investors than the $10,000 you’ve taken in. If they believe you can gain many more customers rapidly, investors might project your future value in the millions, reflecting your company’s potential, and buy shares on that basis. For more background on valuations, check out this clear post from VC Brad Feld, this useful piece from Founders and Funders (though maybe skip the hectic infographic at the top), and this straightforward discussion from Investopedia.


Sarah Milstein | Photo: The Lean Startup Conference/Jakub Mosur and Erin Lubin

The How is a project of Lean Startup Productions, which also runs The Lean Startup Conference, among other things. Not coincidentally, my co-founder for the company is Eric Ries, author of The Lean Startup. People often ask us for basic information about the Lean Startup approach, and I’m pleased to publish this explainer on a site of our own. –Sarah Milstein, CEO of Lean Startup Productions and Editor in Chief of The How

Lean Startup is a method for creating and sustaining innovation in all kinds of organizations. It helps you get good at answering two critical questions:
  • Should we build this new product or service?
  • And how can we increase our odds of success in this new thing?

When you know those answers, you can reduce unnecessary failure and instead focus your time and money on ideas that have promise. The method is equally useful in brand-new companies, Fortune 500 enterprises, government agencies, educational institutions, and non-profit organizations. Although it has roots in the tech sector, it is not for tech alone and has been used profitably for nearly every kind of product or service you can imagine—from diesel turbines to middle-school math classes.

That sounds great, right? It is. But there’s a lot of confusion over what Lean Startup is, how it works, and how you can apply it. Herewith, a rundown of the essential ideas to clear things up and get you started. (From here on out, when I talk about products, I mean it as a catchall that includes products of all kinds—digital or physical—plus services or processes that an organization creates to sell to or serve customers. Those customers can include co-workers, or what people sometimes call internal customers.)

1. You say the method works for established organizations. Why is the method called Lean Startup?

The word “startup” often brings to mind an image of two people working in a garage in Silicon Valley. But there’s a more useful definition laid out by my business partner, Eric Ries, who coined the term Lean Startup: “A startup is a human institution designed to create a new product or service under conditions of extreme uncertainty.” I’ve emphasized the last two words, because I want to underscore that in this definition, what determines a startup are the unknowns a new product faces—not the age, size, or sector of the company.

In other words, in Lean Startup terms, a startup is a group of people working on a risky new product, even if that group of people works for Exxon or the US Marine Corps.

With that definition in mind, there are three areas in which a startup typically faces a very high degree of uncertainty—or risk:

  1. Technical risk, also known as product risk. You could think of this as the question: Can we build this thing at all? For example, if you’re seeking a cure for cancer, there’s a big risk that you’ll fail to find it. If you do find it, you’ll certainly have customers, so there’s no market risk.
  2. Customer risk, also known as market risk. This is the question: If we build this thing, will people use or buy it? Put another way: Should we build this thing? The story of Webvan illustrates this risk: At the turn of the millennium, the company spent $1 billion to build a series of high-performance warehouses and trucking fleets on the assumption that people would buy groceries online. Although it was technically possible to offer groceries online and deliver them to homes, customers weren’t interested in the service at the time, and Webvan folded after a couple of years and a lot of investor dollars down the drain.
  3. Business model risk. This amounts to the question: Can we create a way for this thing to make us money? Strong business models aren’t always obvious. For example, you know Google as a company that makes a lot of money selling search-related ads. But when the Google website launched, it wasn’t obvious that ad sales would become the killer business model, and it took a number of years before they hit on that approach.

If you’re wondering which kind of risk you face, let me help you out: It’s customer risk. Nearly always, it’s the biggest question, because you simply don’t know the value, if any, your new product has for potential customers. When I say, “Nearly always,” I mean: This is so often the case, you should assume it’s true every time.

The tricky part is that commonly, product risk looks more urgent. After all, if you’ve hit on an exciting new idea that you’re pursuing, you’re doing so because you believe other people will be interested in it, too. And if you assume the demand will exist, you’ll be tempted to make sure you can build the product before you offer it to people. But that’s a very big assumption, and many, many startups have failed after building cool stuff, because they relied on a framework of inaccurate assumptions about how customers would behave. Good news: There’s no reason you should put time and money toward a belief you haven’t proven. Below, I’ll talk more about assumptions and how you can avoid repeating a doomed history like Webvan’s.

Note that when you think in terms of risk, rather than company history, it becomes clear that lots of existing organizations have startups within them. For instance, if you’re Gillette and you add a 5th blade to your iconic razor, you have no risks: the product, the market, and the business model are all known. But Gillette’s parent company, Procter & Gamble, has R&D teams looking at new methods for hair removal. For those new ideas, everything is unknown. Which means the teams working on them are startups.

Incidentally, the biggest company we know of that’s systematically applying Lean Startup methods is GE. Ranked seventh largest in the world, as of May 2014 by Forbes, the company has trained 7,000 managers around the globe in Lean Startup principles and has used them to improve outcomes on things like diesel engines and refrigerators.

What If?

What if I face product risk and customer risk? Attack the customer risk first. When the founders of Lit Motors learned that the vast majority of car trips involve one person going just a few miles from home, they set out to make a new kind of car that would be more efficient for these kinds of rides. Based on a two-wheeled motorcycle chassis, their prototype looked pretty funky and had a new technology that kept it stable. While they were testing the technology, the chief technology officer (CTO) had a personal emergency and had to leave the company for an extended period.

Unable to pursue the prototype tests without the CTO’s expertise and unable to replace him, the remaining team realized they could reduce their customer risk ahead of their product risk, so they finished a showroom model that didn’t actually run and offered it for presale. The results astonished them: Nearly 16 percent of people who came in to see the non-working model put down money to buy one—adding up to dozens of pre-orders in a very short time. Here’s CEO Danny Kim telling the story at the 2013 Lean Startup Conference.

That’s an encouraging story because the company was able to find customers. You can easily imagine a story in which they prove the technology works, then get the vehicle government-approved for sale, then build a terrific manufacturing and distribution system for it—each step of which takes months or years and great expense—and then discover that nobody wants to buy a two-wheeled car. (Indeed, that’s pretty much the story of Segway.)

By guiding you to answer the question, Should we build this?, Lean Startup helps you avoid the spectacular and unnecessary failure of building a perfect product that nobody buys.

2. Why Lean Startup? Is this method for small budgets only?

The word “lean” usually brings to mind cheap or bootstrapped (i.e. self-funded) companies—and possibly Lean Cuisine. So it can be daunting if you think you have to build something with tons of unknowns on a ridiculous shoestring budget, and maybe you have to eat frozen diet meals along the way. Good news: you can—and should—let go of those ideas. Lean is not about cheapness.

What lean actually refers to is Toyota’s lean manufacturing revolution. For Toyota, “lean” didn’t refer to the money available but instead to a narrow focus on producing value for customers and eliminating everything else. This is surprising, so I’ll repeat: Lean is about focus.

In manufacturing, the business Toyota is in, the value you can provide is fairly clear: Customers want a product that’s assembled correctly. But for most startups, as we discussed above, your value to customers is unknown.

When you marry a focus on customer value (the lean part) with an extreme uncertainty about customers (the startup part), it becomes clear that learning what customers want and will pay for is your biggest priority. It’s the thing you want to do most quickly and effectively.

3. What’s an MVP? I’ve heard the term, and it has a nice ring to it.

We’ve just established that learning quickly what customers want and will pay for is the key activity for startups. An MVP, or minimum viable product, is a tool that helps you do that. Specifically, an MVP is an experiment that helps you validate—or often invalidate—a theory you have about the potential for a new product or service. (Even though an MVP is often a stripped-down version of your product, it’s different from a prototype, which is intended to test a product itself and usually answers design or technical questions.)

The minimum concept is key because, in order to move quickly and definitively, you want your experiments to include only features that help you test your current theory (also known as a hypothesis—or what you think will happen when customers come into contact with your product). Everything else is not only a waste of time and money, but can also cloud your results. The viable concept is key, too, in that this thing has to actually produce test results you can learn from, which means it has to work on some meaningful level for customers. The classic example here is that if you’re trying to test the demand for a new service, you might do that by putting up a one-page website where people can pre-order the service before you’ve spent any time developing the actual thing or hiring people to make it happen. By the same token, if you’re testing a hypothesis around customer use of a new jet engine, then you have to make sure it will fly safely—in order for it to be truly viable, or feasible.

Customer Development

I’ve also heard the term customer development. Where does that fit in? After you’ve identified your assumptions, the first test you run is often customer development—a term coined by serial entrepreneur Steve Blank—also known as talking to people. In other words, you do qualitative research by interviewing potential customers to learn more about their needs and behaviors in your product’s area. At this stage, you’ll likely discover, as startups commonly do, that your basic idea holds little appeal for your target customers, or that a key assumption about customers’ needs was simply wrong. Excellent. You’ve just saved yourself thousands of dollars and months of time. When you start consistently hearing the same thing from potential customers about their needs—and you have an idea about how to meet those needs—then it’s time to start running tests with a version of your product to validate (or invalidate) your idea.

For example, Fashion Metric had an idea for an app that would let clothing shoppers get feedback from a stylist on items they were thinking of buying. Before building even an MVP, the founding team went to stores in three cities and asked clothing shoppers about their greatest shopping problems. Not a single person mentioned a frustration that could be solved with Fashion Metric’s original idea. But the team did hear over and over that men had a hard time finding dress shirts that fit properly. Fashion Metric took that info and built a landing-page MVP to test a custom-shirt concept, and then went on from there. Here’s CEO Daina Burnes Linton telling the story at the 2013 Lean Startup Conference.

You use MVPs early and often to test out assumptions you have about your new product, thereby reducing the risk that you’ll spend time and money building the wrong thing. In Lean Startup terms, assumptions are unproven beliefs you have about why your plan will work—for instance, a belief that people will pay for your new product, or a belief that they’ll use it at all.

An MVP is central to what people call the build-measure-learn loop, which mimics the scientific method but tests business ideas rather than, say, theories of evolution. The process generally looks like this:

  1. Identify your assumptions
  2. Home in on the assumption that carries the biggest risk
  3. Determine how to test your assumption, often with a version of your product designed for this purpose—the MVP itself
  4. Figure out your hypothesis about the MVP
  5. Run the test
  6. Review the results
  7. Incorporate the results into your next test
  8. Iterate—also known as lather, rinse, repeat or, simply, do it all again—this time incorporating the new information you’ve collected
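The loop above can be sketched in code. This is purely illustrative—the names (`Assumption`, `run_test`, and so on) are hypothetical, not part of any Lean Startup tooling—but it captures the shape of the process: always test the riskiest assumption first, and fold what you learn back into the next round.

```python
# Illustrative sketch of the build-measure-learn loop described above.
# All names here are hypothetical, invented for this example.
from dataclasses import dataclass


@dataclass
class Assumption:
    description: str
    risk: int  # higher number = riskier assumption


def build_measure_learn(assumptions, run_test, max_rounds=10):
    """Repeatedly test the riskiest remaining assumption and record results."""
    learnings = []
    for _ in range(max_rounds):
        if not assumptions:
            break
        # Step 2: home in on the assumption that carries the biggest risk
        riskiest = max(assumptions, key=lambda a: a.risk)
        # Steps 3-6: run the test (the MVP) and review the result
        validated = run_test(riskiest)
        learnings.append((riskiest.description, validated))
        # Steps 7-8: incorporate the result and iterate on what remains
        assumptions.remove(riskiest)
    return learnings
```

The key design point is that the loop is ordered by risk, not by convenience: you spend your first (and cheapest) experiments on the beliefs most likely to sink the whole plan.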

On the front lines of entrepreneurship, there’s a lot of haziness about what MVPs really are, how they work, or whether they work at all. Nearly all of the confusion stems from focusing on the MVP and ignoring the other steps in the build-measure-learn loop. You can rise above the fray and get useful results by making sure that your MVPs are embedded in the larger process.

For instance, if you make sheet-metal screws, and you suspect there’s demand for a new kind of wood screw you’ve designed, your biggest risk might be whether your wholesale distributors will sell it to retailers for you. You have a hypothesis that two out of three distributors will get on board—enough to get the screws to market. To test the theory, you create a brochure that your salespeople use in conversations with distributors (important: you do this before you’ve created the screws themselves). If your salespeople are able to close enough deals for the screws, you’ll have solid information saying you should actually develop the product. In discussing this case, a lot of people would focus on the MVP in the story, which is the brochure. But that undercuts the nature of the process, which is not meant to help you churn out printed marketing material but is instead meant to help you learn whether your assumptions about your distributors were right. If you keep that in mind when you’re slinging around MVP ideas in your company, and you include a lot of discussion of hypotheses, you’ll be miles ahead of your competitors. (Here’s some more info from Eric on MVPs.)

Speaking of hypotheses, the most effective ones are usually quantitative. That’s because they give you a clear way to see whether your assumptions were right (i.e., people will spend at least three minutes per page reading articles on our new site; or, one of ten sales calls in the next month will lead to a signed contract for our new product). The basic structure for a hypothesis looks like this: “I believe [customers like this] will [behave like this] in [this measurable way].” “Validating a hypothesis” means you’re running experiments that prove it true; “invalidating a hypothesis” means your experiments are proving it false. Ben Yoskovitz has a clear write-up on how to craft a useful hypothesis.
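Because a good hypothesis is quantitative, checking it can be mechanical. Here’s a minimal sketch using the reading-time example from above; the observed numbers and the three-minute threshold are made up for illustration.

```python
# Hypothetical check of the hypothesis: "people will spend at least
# three minutes per page reading articles on our new site."
# The data below is invented for this sketch.
def validate_hypothesis(minutes_per_page, threshold=3.0):
    """Return True if average observed behavior meets the stated threshold."""
    average = sum(minutes_per_page) / len(minutes_per_page)
    return average >= threshold


observed = [2.5, 4.0, 3.5, 1.0, 4.5]  # minutes per page, one entry per visit
# The average here is 3.1 minutes, so this hypothesis would be validated.
```

The point isn’t the arithmetic—it’s that the hypothesis was stated precisely enough that a pass/fail answer exists at all.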

When you create a hypothesis, it can be around either the value potential or the growth potential for a product. A value hypothesis tests whether a product delivers value to customers once they’re using it, whereas a growth hypothesis tests how new customers discover a product. The usual mechanisms for growth are viral, sticky, and paid. We’ll discuss them in a future piece, but in the meantime, check out this post from Eric and this one from David Link.

For Example

Can you give me another MVP example? Sure. If your publishing company is thinking of putting out a coffee table book in the United States about cooking with insects, you might reasonably identify a big risk: Will anybody buy this thing? You know you can produce such a volume; in fact, you can pretty much apply all of the processes you already use for publishing lavish books of themed cake recipes for minor holidays. But, despite a recent New York Times piece on the bright future for insect-based cuisine, you haven’t been able to find a community of bug-eaters to test your idea with.

Here’s one way to find out if enough people will buy the book for you to get past break-even costs: Go ahead and publish it. Commission the writer and photographer, assign an editor to develop the book with them, find people to test the recipes, get a copyeditor to review the final text, have production people lay out the pages and correct the photos, hire a freelance indexer, get a pro to proofread the whole thing, and then ship it off to China for printing (and probably send a production expert to oversee the run). Oh, and your salespeople have to make sure bookstores will stock it, and your marketing and PR people will have to make sure readers know it exists. From the time you decide to find out if anybody will buy it to the time you’re able to actually test the idea using this approach is approximately two and a half to three years. Not to mention upwards of $200,000 in staff time and hard costs.

Or, you could MVP it with a full build-measure-learn loop. First, create a hypothesis. Based on your experience and any data you have available, make an educated guess at how many sales you’ll actually make on this book. Next, work with a writer to create a blog and attract a readership. Then, offer the book for pre-sale to those readers (which you can do before you’ve put even ten seconds of effort into creating a print volume), and thus test your hypothesis. Total time elapsed? Two to six months, and, as a bonus, the readers test the recipes for you. Note that this isn’t a free process. You may have to pay the writer and photographer, and perhaps you’ll spend some money on training the writer to use blogging software and social media tools that help them build a following. Generously, it might cost you $20,000. In other words, you could test ten book ideas for the cost of publishing one. And because you can run your tests simultaneously, you could learn in several months—rather than over the course of a decade—which ideas are worth investing more in.

4. What’s a pivot?

Let’s say you’re running lots of experiments, which is great. But your experiments are mostly invalidating your ideas, which is deflating. The good news is that in Lean Startup, you have a new move: You can pivot. A pivot is a change in strategy, based on what you’ve learned so far. They’re super-common in startups, even though the stories about them aren’t always well known. For example, YouTube started as a video-dating site. When the dating part didn’t take off, the company pivoted to focus on video sharing, which seemed to hold promise.

Pivots come in many forms. You might shift your product focus, as YouTube did, or you might realize that the product you’ve envisioned appeals to a very different set of customers than those you’d originally guessed. You might learn that the channel through which you’d planned to sell won’t work, but that another channel is a strong option. Etcetera, etcetera. This Forbes piece has a nice rundown of common kinds of pivots.

You can tell you’ve pivoted successfully when your new experiments are overall more productive than the old ones, which is a sign that you’re more closely aligned with your customers. Here, Eric discusses pivots in depth.

5. Where do metrics fit into this?

Glad you asked. First, let’s get clear that the word “metrics” is just a fancy term for measurements. Metrics often include things like the number of new customers in a certain time period, the number of customer visits (or other activity) per day or month, and the amount of revenue generated in a defined time period. Metrics are important because, as I noted earlier, quantitative information is key to really learning where you stand with customers. But not all metrics are created equal, and for startups, there are two kinds you need to know about: actionable metrics and vanity metrics.

Actionable metrics—those you can make meaningful decisions around—measure specific customer behaviors and patterns. For example, average revenue per customer tells you a lot about your value to customers. Even better, it lets you test features and other aspects of your product to determine whether what you do can increase the number. To really understand the impact of your actions on your customers, startups often measure them in groups, generally called a cohort analysis, in which you compare the behavior of a subset of your customers against another subset you’ve treated differently. Cohorts are most commonly defined by when they become your customers, so, for example, you might compare new customers from June and new customers from July—when you offered all new customers a coupon to try your premium service—to determine whether your average revenue per customer has gone up.
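The June/July comparison above can be sketched concretely. The customer records here are invented for the example, but the shape is the real thing: group customers by when they signed up, then compare average revenue per customer across the groups.

```python
# Illustrative cohort analysis for the June/July coupon example.
# All customer data below is made up for this sketch.
from collections import defaultdict


def average_revenue_by_cohort(customers):
    """Group (signup_month, revenue) records by month; average each cohort."""
    totals = defaultdict(lambda: [0.0, 0])
    for signup_month, revenue in customers:
        totals[signup_month][0] += revenue
        totals[signup_month][1] += 1
    return {month: total / count for month, (total, count) in totals.items()}


customers = [
    ("June", 20.0), ("June", 30.0),  # cohort before the coupon
    ("July", 45.0), ("July", 55.0),  # cohort offered the premium coupon
]
# Comparing the two cohorts' averages shows whether the coupon moved
# average revenue per customer--an actionable, decision-ready number.
```

Note what makes this actionable rather than vanity: the number is tied to a specific change you made (the coupon) and a specific group it applied to, so a difference between cohorts points at a decision.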

Vanity metrics, on the other hand, are broad measurements that are appealing to look at but don’t tell you anything meaningful about your value to customers. For example, it’s fun to watch your number of Twitter followers increase or to focus on how much total revenue you’ve taken in. But Twitter followers aren’t necessarily customers, and gross revenue without contextual information doesn’t tell you whether you’re looking at sustained growth, scattershot injections of cash, or customer behaviors you can affect.

One key way to home in on meaningful metrics is to recognize that, in a startup, your most important activity is gathering information to reduce unknowns—not meeting product-development deadlines or driving sales. For most of us, that’s a huge shift in focus, and it’s difficult. If you’re a leader, this idea is central, because you have to establish and look for appropriate signs that your new-product teams are learning about whether they’re providing value to customers and building on what they’ve learned. If you revert to deadlines and profit, which you’ll be tempted to do, you’ll sink your young ships.

For example, Facebook might not exist today if the company had tried to generate revenue at the outset. When the site first started, it was for college students, and it didn’t yet have ad sales. But the founders suspected they were onto something interesting because, each time they gave a new school access to the site, a spectacularly high percentage of students would sign up in a very short time. In addition, those students spent an unusually long time on the site, and they came back frequently. None of those metrics had to do with revenue initially, but they helped Facebook recognize that it was creating value for students and that it had ways to grow.

Metrics and learning milestones vary a lot based on your products and hypotheses, and we’ll talk more about them in future posts. In the meantime, here’s Eric’s cornerstone post on actionable vs. vanity metrics, and here are two ways to tell you’re moving in the right direction with any product:

  1. You cycle through the build-measure-learn loop much, much more rapidly over time. You go from six months to roll out one MVP and analyze it, to three months, to one month, to one week—and you learn more in the shorter rounds as you get better at formulating hypotheses.
  2. Your team starts to make decisions based on the data you gather, not on job titles.

Down the road, we’ll talk about other aspects of the Lean Startup method, like cross-functional teams and innovation accounting. But until then, tell us in a short note about the Lean Startup ideas you’re grappling with, or what’s most mystifying, important, and intriguing to you about your organization, product, or otherwise, and we’ll use those inquiries to guide our pieces. Send a short note with your idea to our editors.

Last Chance for Summer Sale Prices  

At this year’s Lean Startup Conference, we seek to answer the difficult questions you face as an entrepreneur. To give you a sense of how we’ll do that, we’re introducing you to three of our speakers—all of whom are appearing for the first time at The Lean Startup Conference, and all of whom have advice you can put to work today. Note that summer sale pricing for the conference ends on Monday night, so register now for the best deal possible.

Herewith, our introductions.

Ben Horowitz, Andreessen Horowitz. Frankly, Ben doesn’t need a ton of introduction. A well-known startup innovator, he’s co-founder of the leading VC firm Andreessen Horowitz and author of a new book, The Hard Thing About Hard Things: Building A Business When There Are No Easy Answers, which brims with unusually direct, useful advice for new and seasoned entrepreneurs alike.

Among the questions we’re seeking to address at this year’s conference is: What does the culture of a high-performance, high-growth team look like? In his book and on his blog, Ben tackles that question. We particularly like this post, Hiring Executives: If You’ve Never Done the Job, How Do You Hire Somebody Good?, in which he guides new entrepreneurs of growing companies through one of the more vexing challenges you’ll face, what he calls making “the lonely decision.” He starts with sharply observed pitfalls and offers specific steps you can take to avoid them, starting with a process for defining what you need in a new hire and then moving on to this step:

“Write down the strengths you want and the weaknesses that you are willing to tolerate. The first step is to write down what you want. In order to ensure completeness, I find it useful to include criteria from the following sub-divisions when hiring executives:

  • Will the executive be world class at running the function?
  • Is the executive outstanding operationally?
  • Will the executive make a major contribution to the strategic direction of the company? This is the “are they smart enough?” criteria.
  • Will the executive be an effective member of the team? Effective is the key word. It’s possible for an executive to be well liked and totally ineffective with respect to the other members of the team. It’s also possible for an executive to be highly effective and profoundly influential while being totally despised.
  • These functions do not carry equal weight for all positions. Make sure that you balance them appropriately. Generally, operational excellence is far more important for a VP of Engineering or a VP of Sales than for a VP of Marketing or CFO.”

Ben then gives more detail on how to turn the criteria into a real hire. At the conference, Eric will dive deep in an interview with Ben, asking him hard questions about hiring during growth and other shoals of entrepreneurship.

Melissa Bell, Vox.com. We’re really pleased to have Melissa join us. Vox.com has been one of the most closely watched media launches of the year, and as its Senior Product Manager and Executive Editor, Melissa was responsible for leading a lot of its success. One of our questions for this year’s conference is: How can we get products to market faster? So we were particularly intrigued when we learned that Melissa and her team took just nine weeks to develop the high-profile site; other Vox Media properties had taken eight months to roll out.

As explained in this post from Michael Lovitt, Vox’s VP of Engineering, Melissa and her team expedited their launch by sacrificing perfection and focusing their goals narrowly. Instead of spending months fine-tuning the website before presenting it to the world, they chose to “fail fast and iterate.” That phrase gets tossed around a lot these days, putting it in danger of losing its meaning. But Melissa backed it up with real processes, and rather than calling the unveiling of the site a “launch,” she instead wound up referring to it as a “deploy, the first of many.”

The team also committed to trusting their MVP, which had two foundational pieces. Michael explains:

“In order to meet our expectations for what a new Vox Media site must be, we would focus on two big things: the important early and foundational branding and visual design work; and a new, still-to-be-figured-out product feature for helping readers understand the news. By limiting the new big things to only those two, we could free ourselves to throw all of our creative energy into them, and do them well, and rely on the work done by our past selves to carry the rest of the site.

“Once everyone agreed to this plan, in every conversation about scope and the prioritization of site features, we were able to stay grounded by our shared sense of what was important to get right for launch, and what could wait for now.”

At The Lean Startup Conference, we’ll learn more from Melissa about how her team hewed to its early goals, what worked in developing the site, what she’d do differently next time, and how they’re tackling the site’s current growth and new challenges.

Seppo Helava, Nonsense Industry. We’re proud that The Lean Startup Conference brings you not only high-profile speakers and leaders from high-growth companies you already know about, but also excellent presenters you aren’t yet aware of. Indeed, we consider it our job to find relatively unknown people with great advice and experience to share. Seppo is one such speaker.

An accomplished game developer and company founder, Seppo has worked hard to figure out how to keep employees invested and productive—particularly in an environment where you’re running lots of experiments that don’t lead to profitable products. His application to speak at this year’s conference addressed this question: How can we keep up team morale when experiments invalidate a lot of our ideas? And he backed it up with a deep understanding of the problem and tangible ways to maintain co-workers’ enthusiasm.

Seppo laid out clearly something we all see pretty often: when you constantly test your ideas, you find that a lot of them don’t fly, and so you have to throw out work all the time. He went on to talk about the natural attachment that employees feel to their projects, particularly those they’ve polished carefully, and the resulting struggle to move on, even when those projects aren’t proving out. That dynamic generates a fear of experimentation—the opposite of what you want on your team.

At the conference, Seppo will talk about how his company now works to answer a question, rather than develop a product for presentation. He’ll discuss not only their approach in terms of training, teamwork and communication, but how it’s played out over a period of refinement.

To see these speakers and a slew of other entrepreneurs with incredible lessons to share, register today for The Lean Startup Conference. Prices go up on Monday night!