During our May 15 webinar, “Opportunity Costs and The Cost of Opportunities”, the audience asked Shawn Williams, VP of R&D at Rogers Corporation, Udi Chatow, Business Management at Applied Materials, and me questions on how to leverage opportunity costs to drive the upside value of innovation projects and portfolios. I’ll address those questions in this ValuePoint.

Time

Questions related to time frames included:
  • Is there a time factor for how long an idea takes to mature?
  • When you focused on the short term (2-3 years) to increase revenue, didn’t it leave a developmental gap beyond 3 years?
  • What if Oyster projects take 20 years?

SmartOrg’s Innovation Screen by Difficulty compares projects by relative probability of success vs. expected return. Expected return is a net present value that incorporates time into its calculations through discounting of investment and return cash flows. SmartOrg’s Innovation Screen by Time compares projects by time to maturity vs. expected return, showing the distribution of short-term and long-term opportunities. It supports assessments of whether a project’s potential upside justifies a long-term investment and suggests where to pivot long-term projects into several short-term ones that accelerate returns. Additionally, SmartOrg’s projected P&L roll-ups expose troughs in future returns that could be filled by adding or time-shifting projects.
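
Because expected return is a discounted-cash-flow quantity, time to maturity directly affects it. As a rough illustration only (not SmartOrg’s actual model), the Python sketch below treats expected return as the probability of commercial success times the NPV of the return cash flows, minus the NPV of the development investment; the function names, the 10% discount rate, and the cash-flow figures are all assumptions made for the example.

# Rough sketch, not SmartOrg's model: expected return as a probability-weighted
# net present value of investment and return cash flows.

def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (index 0 = this year)."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

def expected_return(investment, returns_if_successful, p_success, discount_rate=0.10):
    """P(success) * NPV(returns) - NPV(investment); assumes the investment is
    spent whether or not the project ultimately succeeds."""
    return p_success * npv(returns_if_successful, discount_rate) - npv(investment, discount_rate)

# Example: three years of development spend, returns in years 3-6 ($M per year).
investment = [2.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
returns = [0.0, 0.0, 0.0, 3.0, 5.0, 6.0, 6.0]
print(round(expected_return(investment, returns, p_success=0.4), 2))  # ~0.47

Discounting is what makes time to maturity matter: pushing the same returns out additional years shrinks their present value, which is exactly the trade-off the Innovation Screen by Time makes visible.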

Clutter

Projects with low probability of success and/or low potential returns distract you from executing well on better opportunities.
  • Why would people want to waste their time on White Elephants?
  • Do the White Elephants evolve to Bread & Butters or Oysters, or just drop out of the portfolio?

You shouldn’t waste time on a White Elephant (a project with both a low probability of success and a low potential return) unless it’s forced on you, for example, by a specific demand from a key customer. Try to pivot a White Elephant to a Bread & Butter (higher probability of success) or an Oyster (much higher expected value). If such a pivot isn’t possible or practical, then cancel the White Elephant.

Evaluation and Comparison

SmartOrg has found that in evaluating projects and portfolios, accuracy is not as important as credibility and comparability.
  • What are the main KPIs we should look at when we do portfolio management?
  • How do you get people to objectively estimate the probability of success for a project?
  • How is the quality of the input data scrutinized and/or normalized to ensure a consistent and realistic assessment?
  • How do you manage uncertainty?
  • How do you assess projects at different levels of development (i.e., initial ideas vs. ideas with months of development)?
  • Where does a disruptive technology fall on the innovation chart?
KPIs for innovation and new product development portfolios usually include revenue growth and profitability. SmartOrg and Rogers Corporation developed a measure for a portfolio’s growth contribution called Portfolio Power. There are also project-level KPIs, and tied to these are Proof Points for guiding the development path of the project.
There is an art to objectively estimating the probability of success, often based on comparable past projects and what made them succeed or fail. Uncertainty gets built into estimates of projected commercial characteristics (e.g., market size, market share, units sold, selling price): the project team estimates the low, base and high values (i.e., the 10, 50 and 90 percent probability levels) of each characteristic, assuming development succeeds. Once these estimates are made for all the projects in the portfolio, there’s an important calibration step to ensure the estimates are comparable.
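
To show how a 10/50/90 assessment can be rolled into a single planning number, here is a small illustrative sketch using Swanson’s rule (mean ≈ 0.3·P10 + 0.4·P50 + 0.3·P90); the weights and the example figures are generic approximations, not SmartOrg’s calibration procedure.

# Illustrative only: collapsing a 10/50/90 estimate into an approximate mean
# with Swanson's rule. Not SmartOrg's calibration method.

def swanson_mean(p10, p50, p90):
    """Approximate mean of a skewed distribution from its 10/50/90 values."""
    return 0.3 * p10 + 0.4 * p50 + 0.3 * p90

# Example: units sold per year, assuming development succeeds.
low, base, high = 10_000, 40_000, 150_000
print(swanson_mean(low, base, high))  # 64,000 -- the upside pulls the average well above the base case

Capturing the full range rather than a single point estimate is what lets upside skew like this show up when projects are compared across the portfolio.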
A project’s stage of development and the disruptiveness of its technology both map to its estimated probability of success. Early-stage projects and disruptive technologies are usually Oysters that require detailed learning plans with well-structured proof points.

Proof Points

Learning plans that guide projects efficiently are built from proof points:

  • How do you identify your proof points?
  • What level of market analysis is done up-front?
  • Does a market’s speed of movement change the development plan?
  • How do you know when to pivot?
  • Is there a structure to capture post-launch evaluations and transfer the learnings quickly?
Different from phase gates, proof points are specific questions about a project’s feasibility and practicality that must be answered in the affirmative for the project to continue, such as “Does the technology work?” and “Can the technology be manufactured cost-effectively?” Address them in order from hardest to easiest: effort spent validating easy proof points first is wasted if a hard proof point later fails to work out.
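
To see why hardest-first ordering matters, here is a toy calculation (my framing, with invented costs and pass probabilities, not a SmartOrg tool): if work stops at the first proof point that fails, expected spend is lower when the riskiest proof point is tested first.

# Toy illustration with invented numbers: expected spend on a sequence of
# proof points when work stops at the first one that fails.

def expected_spend(proof_points):
    """proof_points: list of (cost, p_pass) tuples, tested in the given order."""
    total, p_still_going = 0.0, 1.0
    for cost, p_pass in proof_points:
        total += p_still_going * cost   # paid only if nothing has failed yet
        p_still_going *= p_pass
    return total

# (proof point, cost in $M, probability of passing)
points = [("does the technology work", 0.5, 0.4),
          ("can it be manufactured cost-effectively", 0.8, 0.7),
          ("will a channel partner sign on", 0.3, 0.9)]

hardest_first = sorted(points, key=lambda p: p[2])             # lowest p_pass first
easiest_first = sorted(points, key=lambda p: p[2], reverse=True)

for label, order in [("hardest first", hardest_first), ("easiest first", easiest_first)]:
    print(label, round(expected_spend([(c, p) for _, c, p in order]), 2))
# hardest first ~0.90 vs easiest first ~1.34

The total cost if everything passes is the same in either order; the difference is the money at risk of being wasted when a hard proof point fails late.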
If a proof point shows the current project plan will fail, look for ways to pivot and restructure the project to succeed. Failure of a proof point can be one of degree: if aiming for a low target makes the project unviable, the required pivot is to aim much higher. If a pivot can’t be found to salvage the project, cut the project and free up its corresponding opportunity cost. Post-launch, keep evaluating proof points and pivoting as needed.

Organization

Questions touching on organizational culture:
  • How do you create a culture where people don’t see getting their projects cut as a failure?
  • How might opportunity value be assessed on internal projects (e.g., manufacturing efficiency)?

The key to a culture that accepts the need to cut projects is credibility. When projects are evaluated objectively and compared consistently, stakeholders can agree the evaluations are fair. Even if a project owner doesn’t like the decision to cancel a project, that owner can accept the decision because it was made fairly on objective evidence.

Evaluate internal projects, such as cost-cutting programs, the same way: avoided-cost cash flows can be compared with new revenues on the basis of net present value, probability of success and time frame.