Author Archives: Mercedes Kraus

[Day 4/30] What do I do with my idea?

Photo by The Lean Startup Conference/Jakub Mosur and Erin Lubin

Over the past few days, we have defined what a Lean Startup is and learned the 5 basics of The Lean Startup. Today, we are sharing 5 tips on how to quickly test your idea.

“But despite a promising idea, we were nonetheless doomed from day one, because we did not know the process we would need to use to turn our product insights into a great company.” ~ Eric Ries, The Lean Startup

You have a great idea.

Now, what do you do with it?

5 Tips for Quickly Testing Your Idea

Today, we are sharing a post that originally appeared on Intuit Labs during our takeover of their blog. This post was written by Mercedes Kraus.

Because social media and analytics software have given entrepreneurs a wealth of options to find and connect with customers, running a marketing experiment is one of the fastest ways to test an idea. But where do you start, and, with all the options out there, how do you get clear answers from a wealth of data? Marketing experts Cindy Alvarez, Alistair Croll and Anita Newton gave us five tips for running successful tests, and a few pitfalls to avoid.

Author of Lean Customer Development: How to Build Products Your Customers Will Buy, Cindy Alvarez has been using customer development techniques for well over a decade, and today heads up product design and user research for Yammer, a Microsoft company. Speaker and author of several books on technology and business, including Lean Analytics, Alistair Croll has launched several companies, run the Year One Labs accelerator in Montreal, and today works on research and strategy at CloudOps. The VP of marketing at Adknowledge, a global ad tech company, Anita Newton also runs marketing at the startup Mighty Green Solutions and teaches marketing through the Kauffman Foundation’s Founders School and FastTrac’s Venture Program.

Tip 1: Before you run a test, get some initial information from your customers. Because you’re going to want to test your biggest risk.

Cindy Alvarez: You want to de-risk your idea. In general, the biggest risk is that no one cares, so you want to just put something in front of people, and see if you can get evidence that they care about it. Putting up a test marketing site, putting a prototype in front of someone, doing a Kickstarter, sending an email and seeing if anyone responds — these things are great ways of experimenting.

Another approach that doesn’t involve creating anything — and so it’s cheap and easy — is to look for analogs to what you’re doing. They don’t necessarily have to look a lot like what you’re doing, but think about the concerns you have and how you could extrapolate those concerns from someone else. For example, the experiment that we’re doing right now with Yammer is trying to get people to participate more. The other day, I read an article on Massive Open Online Courses, MOOCs, about how online participation in those courses doesn’t come naturally to most people, and the organizers were trying to figure out how to encourage online participation.

When I saw that, I thought, “What a great analog to what we’re trying to do.” So now I have a researcher who is talking to people who are enrolled in MOOCs, and asking about the experience.

Alistair Croll: When you’re too early in the business to have hard data, you need to base your assumptions on competitive analysis, industry baselines, and customer feedback from interviews. You’re after the riskiest assumption in the business model, because that’s the one you need to prove you can overcome.

Usually, the metric for that is tied to attention. We all think our products are unique, special snowflakes, but the hard truth is that nobody really cares about our products. So I’d begin with metrics around engagement and stickiness first.

Anita Newton: When I’m just starting, I’m not that structured — because you just don’t know what you don’t know. But start getting information with one-on-one conversations, and also through some sort of anonymous approach. For a B2B product, talk to salespeople: If I’m developing a content marketing plan, or a set of campaigns, whether it’s a webinar or a gift, I will walk them through my ideas, and they will rip apart two-thirds of them.

Tip 2: With some info from your customers, come up with a constrained hypothesis.

Alvarez: If you can make a tight hypothesis (through observation and customer interviews), it’s easier to get clear results. If you say something like, “I think these people would benefit from better software,” that’s very vague. Are you making it better in the right way? It could be that you’ve improved something about their experience, but it’s not the most important thing. It’s very hard to get clear results out of that sort of hypothesis. Also, don’t try to test three things at once.

Croll: You should always have a hypothesis that you’re either proving or disproving. We say in Lean Analytics that if a metric doesn’t change your behavior, it’s a bad metric. Know what you’ll do based on the results before you collect the data. People hate this. They want to just start — start building, start collecting, start measuring. But understanding why you’re doing something and how it will affect you is crucial.

Newton: Make sure you really understand what you’re testing and what you’re trying to learn. That sounds so obvious, but most people don’t do it. For instance, if you have two landing pages you’re testing, ask: What do I want to accomplish? Is it traffic, is it engagement, is it conversion? Maybe you could say, I want to change this landing page so I can see if that improves my conversion rate.

Tip 3: Other constraints, like time or money, will save you some headaches.

Croll: You should always focus. Remember high school math, when you solved an equation by getting one variable on the left and everything else on the right? You need to isolate one variable so you can change it.

We wrote a long post about scoring customer interviews [tool alert] on our blog. Usually the qualitative information gives you the “unknown unknowns,” exploratory insight you want to investigate. Then you need to find a way to quantify it. Let’s say that customer interviews give you five possible marketing angles. So set up five Google, Facebook, or LinkedIn campaigns and try out the five angles, and see which one works best. You don’t have to be selling a product — just ask people to fill out a survey, and see which angle or tagline gets the best results. As a bonus, you’ll have survey responses (which you can use for content marketing) and respondents (with whom you can set up more interviews).
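Croll’s five-campaign comparison boils down to picking the angle with the best conversion rate and checking that the winner isn’t just noise. Here’s a minimal sketch of that comparison — the campaign names and numbers are invented for illustration, and the final check is a standard two-proportion z-test, not anything specific to Lean Analytics:

```python
import math

# Hypothetical results from five ad campaigns, one per marketing angle:
# (clicks, survey completions)
campaigns = {
    "save-time":   (1200, 96),
    "save-money":  (1150, 61),
    "compliance":  (1300, 58),
    "team-morale": (1180, 47),
    "security":    (1250, 88),
}

rates = {name: conv / clicks for name, (clicks, conv) in campaigns.items()}
best, runner_up = sorted(rates, key=rates.get, reverse=True)[:2]

def two_proportion_z(c1, n1, c2, n2):
    """z-score for the difference between two conversion rates."""
    p_pool = (c1 + c2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return ((c1 / n1) - (c2 / n2)) / se

z = two_proportion_z(campaigns[best][1], campaigns[best][0],
                     campaigns[runner_up][1], campaigns[runner_up][0])
print(f"best angle: {best} ({rates[best]:.1%}), z vs. runner-up: {z:.2f}")
# |z| > 1.96 is roughly significant at the 95% level;
# otherwise, keep the campaigns running and collect more data.
```

With these made-up numbers the “save-time” angle leads, but the gap over the runner-up isn’t yet significant — exactly the situation where you’d let the test run longer rather than declare a winner.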

Newton: You need to have a budget. It doesn’t need to be a lot of money, but be clear on what that is. When I do a lot of testing on Facebook, we don’t spend a lot of money, but we do quantify that money, even if it’s just $25 or $50.

Tip 4: You’re going to get murky results. So, run your test again, take a harder look at your metrics, and sometimes, go with your gut.

Croll: The first results are always murky. You learn that you’re asking the wrong questions, of the wrong audience. Let’s say, for example, that you run an online survey and get answers that are all over the map. Then you slice and dice the data — by geography, by gender, by browser — and you notice that all the respondents who seemed positive used a recent version of MacOS. This immediately tells you something about their comfort with technology, socio-economic status, and so on. You could then use this to more tightly narrow your research, or to ask different questions, or to decide whether your user interface is too advanced or too simple, or to invest in customer support earlier, or to prioritize iOS versus Android. All from one piece of data you didn’t even know you were looking for.
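Croll’s slice-and-dice step is just grouping responses by each attribute and comparing positive rates within each group. A toy sketch, using invented survey rows and only the standard library:

```python
from collections import defaultdict

# Invented survey responses: (geography, gender, platform, positive?)
responses = [
    ("US", "F", "macOS 14",   True),
    ("US", "M", "Windows 10", False),
    ("CA", "F", "macOS 14",   True),
    ("US", "M", "macOS 14",   True),
    ("UK", "F", "Windows 11", False),
    ("CA", "M", "Windows 10", False),
]

def positive_rate_by(attr_index):
    """Share of positive responses within each value of one attribute."""
    counts = defaultdict(lambda: [0, 0])  # value -> [positives, total]
    for row in responses:
        counts[row[attr_index]][0] += row[3]
        counts[row[attr_index]][1] += 1
    return {k: pos / total for k, (pos, total) in counts.items()}

for name, idx in [("geography", 0), ("gender", 1), ("platform", 2)]:
    print(name, positive_rate_by(idx))
# In this made-up data, every positive respondent is on macOS 14 --
# the kind of pattern Croll describes spotting before narrowing the research.
```

Slicing by geography and gender shows nothing; slicing by platform does. That’s the whole technique: cut the same data along every attribute you collected and look for the cut where the groups diverge.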

Newton: With murky information, you have to go with your gut. Or when things are murky, a lot of times we’ll just do it again. Run an experiment a couple different times, a couple different ways. Setting up an experiment again is not as hard as setting it up from scratch.

Tip 5: Talking through tests with your team is where you will learn the most.

Alvarez: The greatest tool that we use is not a software tool, but is just the practice of storytelling. It’s just figuring out: When we learn something interesting, how do we make it memorable? How do we fit it together with the things we felt in the past? How do we update the oral tradition of what things people know within the company? It’s not very high-tech, but it’s incredibly effective, and it’s a way for what is, in our org, a fairly small team to take advantage of the lots and lots of smart people we have.

Newton: Once you get that initial view of your data set — survey or conversation or whatever — the real value initially is coming back to your team and talking through it. And if you don’t have a team, just bring in another person. You’ll get smarter.

It would be a mistake not to sit down and do a post-mortem. It’s very uncomfortable to do it, but you have to check your ego at the door and have the hard conversations, or else it’s not going to get better. When we sit down and talk about why we failed, it’s really great. And with that, if you’re bigger, marketing automation really works, but don’t hire an agency. Do it yourself: you’ll learn more and save so much money.

As a bonus, here’s Stephanie Hay with three techniques for testing marketing content, including checks on the language you use, what specific things to ask users, and how to get found by potential customers [5:08]:

In November at The Lean Startup Conference, you will have some great opportunities to go from inspiration to innovation.

Here are some examples of what you will learn:

From Inspiration to Innovation – Using Design Thinking to Build Creative New Products, Laura Klein and Christina Wodtke

In this hands on workshop, Christina Wodtke and Laura Klein will teach you useful Design Thinking techniques like Storyboarding, Empathy Mapping, and Generative Research that will help you better understand your users and build more innovative products.

How to Experiment Your Way To Good Ideas, Theresa Torres

We rarely have a shortage of ideas. Everyone from the receptionist to the CEO has an idea for how to improve the business. How do you know which ideas are worth pursuing? We can’t possibly run an experiment for every idea that anyone generates. In this session, you’ll learn a framework for evaluating each idea and determining when it warrants an experiment. Break the cycle of over-testing and identify good ideas faster.

There is more…

We will also feature a workshop From Ideas to Products, facilitated by Poornima Vijayashanker the founder of Femgineer. This year’s conference is the perfect place to create a plan and make movement on testing your ideas.

Cindy Alvarez, who is featured in this article, will be speaking on how to drive change in enterprises — including approaches that have worked (and not worked) to get teams across Microsoft to be more Lean and experimental.

If you’re ready to join us in November at The Lean Startup Conference, now is the perfect time to buy your ticket and save 10% off the full price.





This week, we’ve been at The Lean Startup Conference out in San Francisco—getting inspired and learning a ton.

So of course, we wanted to share with you a few of our favorite talks. These are from Wednesday’s lineup and are mostly about experimentation. We’ve got a list of favorites from Thursday’s lineup, too.

Test Your Way to the Right Answer by Anita Newton

Brand-new startups begin with almost zero customer data—a risky position from which to build a new product. But when you have very little money, how can you acquire critical information quickly? Anita Newton, advisor, investor, and marketer at Mighty Handle, reveals how her bootstrapped, non-technical startup did clever customer development online, and rapidly tested its way into the customer insights it needed to sell its consumer packaged goods to the largest retailer in the world.

Identify and Validate Your Riskiest Assumptions by Laura Klein

MVPs are great—unless you’re building them to test assumptions that aren’t really mission-critical. In this hands-on session, Laura Klein, author of UX for Lean Startups and head of product development for Hint Health, breaks down the kinds of assumptions you should look for and a process for developing hypotheses that reveal your true barriers to growth.

The Diesel Engine MVP: Cory Nelson in Conversation with Eric Ries

When you have long product cycles or you’re building big physical things—or both—you typically face significant risk, as a lot can go wrong between drawing board and customers. In theory, Lean Startup methods help you reduce that risk. But it’s not always obvious how you can apply them. Cory Nelson, Sr. Executive Product Manager at GE Distributed Power, talks with Eric Ries about how GE has used Lean Startup methods to develop a new diesel engine more quickly and with less risk than it had for similar products in the past.

How a 30-Year-Old Hardware Company Is Bringing Products to Market 3x Faster by Kevin Ellsworth

Hardware companies face particular challenges testing and iterating on their product ideas. It’s often cost-prohibitive to get an MVP in the hands of customers, and it can be seemingly impossible to ramp up production cycles. But you can push the boundaries of convention. Kevin Ellsworth, Product Manager at Cirris, explains how his team has built systems for consistent learning that have helped them release new products over a matter of months rather than years.


Get Comfortable Shipping Imperfect Products by Lauren Gilchrist

Top product managers must have great customer empathy—but too much of it can slow you down. On the one hand, you need empathy to understand your customers, so that you can build products that solve their problems. On the other hand, too much empathy can prevent you from releasing a product that doesn’t solve all of your customers’ needs at once. Lauren Gilchrist, Product Manager at Pivotal Labs, gives five tips for shipping less-than-perfect MVPs so that you can all learn from end users, fast.


Lessons from Experimentation at the Biggest Organization in the US by Todd Park

The US federal government is the country’s largest employer and does not have a reputation for moving quickly. But Todd Park, who served from 2012 to 2014 as United States Chief Technology Officer and Assistant to the President and is now a technology advisor to the administration in Silicon Valley, is bringing an entrepreneurial approach to government and continues to make real change. He and key U.S. technology leaders describe their most challenging projects and share advice for experimenting in large organizations.




When you’re building an entirely new kind of product, how do you measure success?

And is that the same for every kind of product, in every sector—nonprofit, government, tech, whatever? Alistair Croll, Eric Ries, and Danielle Morrill discussed these questions in depth, during an hour-long webcast on October 2nd. Here’s a recording of their conversation, and it covers some fresh ways to approach metrics.



Dan Milstein | Photo: The Lean Startup Conference/Jakub Mosur and Erin Lubin

We all know that hard work and good luck are key to startups’ success. But what if that’s not true?

What if all startups have people who work hard? What if a bit of serendipity is fairly common? Let’s make it concrete: Have you worked at—or run—a startup where people were deeply committed and worked long hours, yet the company failed?

In his talk at the 2013 Lean Startup Conference, Dan Milstein explored what does make a difference for startups: Information. It’s worth real money, he emphasized, and the way to make more money is to more quickly gather information that helps you figure out the right things to work on.

This mindset is so critical, in fact, that you should be afraid of working on the wrong things. Dan:

If hard work and luck are important, but they don’t seem to really distinguish the startups that succeed from the ones that fail, then the choices of what we’re working on must be critical. What you choose to work on is actually your biggest lever, with a huge differential effect. You should be very, very scared of working on the wrong things. In fact, you should be terrified. I would say you should be so terrified that you actually don’t work. If you’re not sure that what you’re working on is the most valuable thing to your startup, you should stop working. I tell people this and they think I’m exaggerating, but I’m not. You should only work if what you’re working on is the most valuable thing.

Dan gives examples and does the math to show why working on the wrong thing is devastating for a startup. He also talks about the kind of information you want to gather at a startup: the kind that answers the riskiest or most uncertain questions. He explained: “You actually don’t get much information when you already know something; you get a lot when you’re uncertain. And then, what information is valuable depends on what decision you’re making.”

As you may have noticed in your own startup, identifying your biggest risk can be hard. Dan points out that it’s harder than you think, because risk shifts constantly. He tells this story about a software product, for hospitals, that used a public data set. Before selling or building it, the company’s biggest risk was that nobody would buy it. So the startup created a demo, and one hospital signed a $10-million contract for the product before it truly existed:

That’s great, you did the right thing. So now your sales team is out there trying to repeat that and sell the second one, and you’ve got a bunch of engineers now building that thing. And I want you to imagine something. I want you to imagine a junior developer, someone on the team, bright guy but young–guy or girl. And some morning—it’s a Thursday morning—and they were given a job of taking the demo app and turning it into a real production system. And they’re working with this public data set, and they discover, to their surprise, that it’s not as comprehensive as everyone thought it was. It worked well for the demo, but for the actual hospital, it’s actually not going to work. The whole product that the company has sold is actually not going to succeed the way they’ve done it. They have to do it some other way. In the moment after this person makes this discovery, the biggest risk for the startup has changed. The biggest risk is no longer: Can we repeat this sale? The biggest risk is: Can we actually build the thing that we promised in the first sale that we thought we could build, but we just discovered we were wrong?

If the biggest risk has changed, the thing you should be doing to gather the most information has changed. Because the way you gather the most information is by going after the biggest risk. Therefore, the thing that’s going to get you the most information—and therefore, the most money—has changed. So, as long as the company is still doing what it was doing before that discovery was made, they’re doing the wrong thing. And one way to look at this is that, in order for your company to move fast (the entire organization), the thing that will limit them in how fast they can move and how fast they can make money is how fast they can respond to the changing nature of risk. Because it’s only by going after the biggest risk do you make the most money, and because risks are changing all the time, the entire organization has to be able to change direction. And this, really, nobody gets this.

Learn more about identifying risk, gathering information, and making money by watching or listening to Dan’s 20-minute talk, embedded below. We’ve also included the full, unedited transcript at the end of the post.

When have you realized your biggest risk had changed? Let us know in the comments. – Eds

Dan Milstein is a co-founder at Hut 8 Labs, a software consulting shop in Boston. He’s worked as a programmer, architect, team lead and product owner at a variety of startups over the last 15 years. He is fascinated by the interactions between complex systems and the humans who build and maintain those systems. He’s recently written on How To Survive a Ground-Up Rewrite Without Losing Your Sanity, and Coding, Fast and Slow: Developers and the Psychology of Overconfidence. Follow him on Twitter.

Mercedes Kraus is Startup Managing Editor for The How. 

Dan Milstein, Risk, Information, Time and Money (in 20 Minutes), The Lean Startup Conference 2013




Jargon, demystified

If there was a term you didn’t know that we haven’t defined, please let us know—we want to help! Also, if you have a better definition or an addition to a definition, shoot us a note.

Opportunity cost. Given more than one choice of things to do and limited resources, opportunity costs are the potential benefits you give up in the choices you don’t explore. For example, let’s say you have a customer who asks you to build a highly specialized product for them, even though you don’t generally do extensive custom work. If you take the project, you’ll get money from the customer and perhaps some intangible things like a stronger relationship. But because you don’t have unlimited time and people, taking the project means you’ll give up the opportunity to build something else—perhaps a product that you could sell to many customers. If this sounds like every decision you make has an opportunity cost, you’re right on. Opportunity cost is a central idea in business—and it’s why the value of information in making decisions is so great. We found that these examples from Inc., while a little stiff, help put the term in a wider context.

CRUD app. CRUD is short for create, read, update, and delete: the four basic functions of database applications. It’s the simplest, dumbest kind of app an engineer can make.
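As an illustration, the four verbs can be sketched as a tiny in-memory “app” (the `Store` class here is invented for the example; a real CRUD app would put a database behind the same four operations):

```python
class Store:
    """A minimal in-memory CRUD store keyed by integer id."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, data):
        row_id = self._next_id
        self._rows[row_id] = data
        self._next_id += 1
        return row_id

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        if row_id in self._rows:
            self._rows[row_id] = data

    def delete(self, row_id):
        self._rows.pop(row_id, None)

# The same four verbs map onto SQL (INSERT / SELECT / UPDATE / DELETE)
# and onto REST (POST / GET / PUT / DELETE).
store = Store()
uid = store.create({"name": "Ada"})
store.update(uid, {"name": "Ada Lovelace"})
print(store.read(uid))  # {'name': 'Ada Lovelace'}
store.delete(uid)
```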

Chained risks. A sequence of interconnected risks, where the first risk suggests that other risks will arise. In the talk, Dan mentions an essential risk chain of startups: 1. Can we build it? (this question is often framed as technical or product risk); if so, 2. will they buy it? (often framed as customer or market risk).

Degree of surprise. We only get information when there’s uncertainty and risk; so, the less you know—and therefore the more surprised you are by new information—the more you are learning.
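Shannon made “degree of surprise” precise: the information in an outcome of probability p is -log2(p) bits, so the rarer (more surprising) the outcome, the more you learn. A quick illustration:

```python
import math

def surprisal_bits(p):
    """Shannon self-information: bits carried by an outcome of probability p."""
    return -math.log2(p)

# An outcome you were almost sure of carries almost no information...
print(surprisal_bits(0.99))  # ~0.014 bits
# ...a fair coin flip carries exactly one bit...
print(surprisal_bits(0.5))   # 1.0 bit
# ...and a big surprise carries many bits.
print(surprisal_bits(0.01))  # ~6.64 bits
```

This is the math behind Dan’s point: when you already know the answer, confirming it yields almost nothing; attacking your most uncertain assumption yields the most.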

Information theory / Claude Shannon. A branch of applied math, electrical engineering, and computer science. The foundational ideas of information theory were developed by Shannon in order to examine the communication, compression, and storage of data. We like this profile of Shannon in Scientific American. For geeks, this paper [PDF] on the wider context of information theory in the digital age goes deep.

Series A funding. The first round of major investment, usually $2 – $10 million, that a startup receives (it may not be the first investment, however; seed funding is generally the first money—sometimes the founding team’s own—used to get a startup just off the ground). The name itself refers to the Series A Preferred Stock that investors receive in exchange for buying in; subsequent rounds are referred to as Series B, Series C, and so forth. Venture capitalists (VCs) are generally the investors, though in a round of funding, several firms often invest, and sometimes individuals participate, too. Over at Entrepreneur, they’ve got a good picture of the whole funding timeline; the process doesn’t always look exactly like that, but it’ll give you a sense of how things can go.

Valuation. For startups, this is a kind of appraisal that assesses the company’s financial value, usually based on potential growth rather than current profits or assets. For example, if your company has started selling a service for $100 per year, and you have 100 initial customers, it’s likely worth a lot more to investors than the $10,000 you’ve taken in. If they believe you can gain many more customers rapidly, investors might project your future value in the millions, reflecting your company’s potential, and buy shares on that basis. For more background on valuations, check out this clear post from VC Brad Feld, this useful piece from Founders and Funders (though maybe skip the hectic infographic at the top), and this straightforward discussion from Investopedia.
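To see how $10,000 of current revenue can support a valuation in the millions, here is a toy projection. The growth rate, time horizon, and revenue multiple are all invented for illustration — real valuations are negotiated, not computed this mechanically:

```python
price_per_year = 100
customers = 100
current_revenue = price_per_year * customers  # $10,000 today

monthly_growth = 0.20   # hypothetical 20% month-over-month growth
months = 24
projected_customers = customers * (1 + monthly_growth) ** months
projected_revenue = projected_customers * price_per_year

revenue_multiple = 10   # hypothetical multiple on projected revenue
implied_valuation = projected_revenue * revenue_multiple

print(f"projected annual revenue: ${projected_revenue:,.0f}")
print(f"implied valuation: ${implied_valuation:,.0f}")
```

With these assumed numbers, 100 customers compound to roughly 8,000 in two years, and a 10x multiple on that projected revenue lands near $8 million — which is why investors pricing growth can value a $10,000-revenue company far above its current income.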