
Innovation risks

Drucker's Landmarks of Tomorrow lists three risks that arise out of innovation. The list is comprehensive and can quickly orient an enterprise towards tasks and outcomes around innovation. Even if outcomes are ambiguous and a function of many variables (an excuse innovation managers typically ride on to retain jobs and titles while not really "tasking"), you still have to act. That said, "tasking", or task orientation, is necessary and neither arbitrary nor ambiguous in any enterprise; that is a post for another day. These tasks arise after the enterprise decides to reduce its first risk in innovation.

1. Risk of exposure

Exposure risk is an inaction risk: while the business remains very successful in its chosen market, this risk makes the whole business irrelevant as newer models and innovations take over existing customers and create new ones. I visualize this risk on a slider bar, with NO on the left end and a YES commitment on the right; the level of YES determines the time and resources made available for innovation.

[Figure: risk of exposure slider bar]

This YES commitment on the risk of exposure does not mitigate risk; it leads to the next, new set of risks below. In any case, this first risk cannot be avoided. You can see examples today in education, where massive open online courses from Coursera and the like are disrupting incumbents even though every higher education player could very well have acted earlier, or in Kodak's famous miss on digital photography.
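To make the slider concrete, here is a minimal Python sketch; the linear mapping and all parameter names are my own illustrative assumptions, not part of Drucker's framework:

```python
def innovation_allocation(commitment: float, annual_budget: float,
                          team_hours: float) -> dict:
    """Map a YES-commitment level (0.0 = NO, 1.0 = full YES) on the
    exposure-risk slider to money and time set aside for innovation."""
    if not 0.0 <= commitment <= 1.0:
        raise ValueError("commitment must be between 0 (NO) and 1 (YES)")
    # Linear mapping is an assumption; real allocations are rarely linear.
    return {"budget": annual_budget * commitment, "hours": team_hours * commitment}

# A 30% YES frees 30% of the budget and hours for innovation work.
print(innovation_allocation(0.3, annual_budget=1_000_000, team_hours=2_000))
```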

2. Risk of failure in developing the innovation

When I heard Ravi Venkatesan at the recent Zinnov Confluence, he mentioned that "skunkworks are interesting to see in labs, but unless the whole organization aligns to an innovation, there is really no chance". I believe that by organization he means the "tasks" on business processes, budgeting, development, sales and marketing, service, legal, and so on, that are specialized and entrenched across departments but need to come together. Successful businesses ideally should not delay capex investments into innovation, and should commit to experimenting with the next set of revenue drivers. Experiments could be, for example:

  • small, like skunkworks or community-driven developments internally
  • larger, like taking ownership stakes in companies that are doing the development

Still, the structure has to commit itself to this developmental action and keep evaluating along the way, even if that means changing direction many times midway, to make sure the next risk, that of failure, is covered. With the possibility of crowdsourcing almost anything, this risk has reduced greatly; the model is already operational across platforms like Kickstarter (for investment) and NineSigma (for effort).

3. Risk of failure of the innovation itself

This is the biggest of all the risks and can be truly dramatic; we know many such stories from recent times. Drucker calls this taking 'responsibility for the consequences' of the failure itself while constantly acting on the opportunity. It is no longer chance but choice: choosing to resolve contradictions between global versus local, profit versus free, and so on, which makes it a value decision in itself. Most of us are aware of the commercial failure of much-touted innovations like the Segway.

[Figure: risks in innovation]


Evaluating Innovation

Evaluating innovation has always been a difficult job for innovators, investors, facilitators, and managers. With the increasing pace at which ideas are developed, it becomes critical to evaluate innovations effectively and quickly. Before I begin developing an innovation evaluation framework, I will define what I think an innovation is and draw out some characteristics. Innovations are:

  • purposeful actions that align with some personal or organizational vision
  • ideas under development that are perceived as new and valuable
  • impactful at scale, which may include financial, social, environmental, or life impact
  • investments that may lead to disproportionate returns

Innovations are evaluated for various purposes, such as:

  1. Qualifying for investment/grant/other resources
  2. Quantifying impact of the innovation
  3. Modifying the development process for a set of ideas

"While we recognize that the American economy is changing in fundamental ways, and that most of this change is related directly to innovation, our understanding remains incomplete… the centrality of the need to advance innovation measurement cannot be understated" – Carl Schramm, in the committee report to the Secretary of Commerce, 2008

At level 0, I believe the following facets have to be considered:

[Figure: evaluating innovations, level 0]

Evaluation includes the following phases and activities around data and reporting:

1. Data collection. Depending on the kind of evaluation, this may include quantitative and qualitative information. Data is typically collected from primary sources, i.e. the field, through surveys and direct interviews, or from secondary sources like agencies. Every collection effort should capture independent and dependent variables, and it is useful to segregate input variables from outcome variables. Units of measure for all variables have to be standardized, or at least convertible, and when comparing different variables you might want to consider some normalization process. Data quality standards are to be set before collection begins, and for any further analysis the data has to meet some agreed minimum quality (see the sketch after this list).

2. Analysis and data representation. Depending on the kind of data collected, analysis methods will vary. For example, financials will be represented in spreadsheets and charts, social data on maps, and stories as fitness landscapes. This is typically where any hypothesis is stated and tested, and where future-state predictions, such as forecasts based on models, are put forth; comparison with history or benchmarks happens at this stage as well (also covered in the sketch after this list).

3. Results of evaluation. The result should be an action or a recommendation. In most cases evaluation leads to decisions by parties other than the evaluator; if this party is not identified before the evaluation process starts, the effort is most likely to go to waste.
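To ground the first two phases, here is a minimal Python sketch covering unit standardization, normalization, a data-quality gate, and a naive benchmark test and forecast. Every field name, conversion rate, threshold, and figure is an assumption for illustration, not a prescribed standard:

```python
# Phase 1: collection checks.
RUPEES_PER_DOLLAR = 80.0  # assumed rate for standardizing cost units

def standardize_cost(value: float, unit: str) -> float:
    """Convert costs collected in mixed units to a single unit (USD)."""
    if unit == "USD":
        return value
    if unit == "INR":
        return value / RUPEES_PER_DOLLAR
    raise ValueError(f"unknown unit: {unit}")

def min_max_normalize(values: list[float]) -> list[float]:
    """Scale a variable to [0, 1] so different variables can be compared."""
    lo, hi = min(values), max(values)
    return [0.0] * len(values) if hi == lo else [(v - lo) / (hi - lo) for v in values]

def meets_quality_bar(records: list[dict], required: list[str],
                      min_complete: float = 0.8) -> bool:
    """Agreed minimum quality: enough records carry every required field."""
    complete = sum(all(r.get(f) is not None for f in required) for r in records)
    return complete / len(records) >= min_complete

records = [{"cost": 120.0, "unit": "USD"}, {"cost": 8000.0, "unit": "INR"},
           {"cost": None, "unit": "USD"}]  # one incomplete record
if meets_quality_bar(records, required=["cost"], min_complete=0.6):
    costs = [standardize_cost(r["cost"], r["unit"])
             for r in records if r["cost"] is not None]
    print(min_max_normalize(costs))

# Phase 2: a simple hypothesis test against a benchmark, plus a naive forecast.
def average_growth(series: list[float]) -> float:
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

adoption_by_year = [120.0, 180.0, 260.0, 390.0]  # hypothetical outcome variable
benchmark = 0.30                                 # hypothetical industry benchmark
g = average_growth(adoption_by_year)
print(f"avg growth {g:.0%}, beats benchmark: {g > benchmark}")
print(f"naive forecast for next year: {adoption_by_year[-1] * (1 + g):.0f}")
```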

"What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?" This typically gets answered casually: "We are going to use the evaluation to improve the program", without asking the more detailed questions: "What do we mean by improve the program? What aspects of the program are we trying to improve?" So a focus develops, driven by use. – Michael Quinn Patton

Once you have decided which facet of innovation you are trying to evaluate, you can adopt one of the many available methods for the actual evaluation. I will list some of them below, with links to external resources that I have found useful.

Impact: EPSIS provides a detailed framework that clearly distinguishes between output, impact, and performance, and supplies a set of indicators that can be used for direct measurements or indirect impact measurements. Stanford's work on social impact evaluation in philanthropy is a good place to start.

Investment: Investment-related evaluation includes both input costs and outcome returns to compare innovations. For example, we use something called t-shirt sizing for ideas at the first level, which gives a scale estimate of cost. Return on investment as a ratio is a good measure, but the underlying assumptions for predicting returns have to be clear, and the other common error is around data quality when predicting returns.
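A minimal sketch of t-shirt sizing feeding an ROI ratio; the size-to-cost scale and the example figures are assumptions for illustration:

```python
# Assumed size-to-cost scale; real scales are calibrated per organization.
TSHIRT_COST = {"S": 50_000, "M": 200_000, "L": 500_000, "XL": 1_000_000}

def roi(predicted_return: float, size: str) -> float:
    """Return on investment as a ratio; the denominator comes from the
    t-shirt scale estimate rather than a detailed cost model."""
    cost = TSHIRT_COST[size]
    return (predicted_return - cost) / cost

# An 'M' idea predicted to return 300k gives ROI = 0.5, but the figure is
# only as good as the assumptions and data quality behind the prediction.
print(roi(300_000, "M"))
```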

I personally use a value-investing check on fundamentals when getting into stocks; the factors checked are around stability, margin of safety, and historical dividends. Investment evaluation should reduce the impact of any failure and enhance experiment design. In many cases 'closed world' resources (freely available locally, with potential use) play a significant role in reducing investment.
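As a sketch, that fundamentals check could look like the following; the thresholds are my own illustrative assumptions and certainly not investment advice:

```python
def passes_fundamentals(debt_to_equity: float, price: float,
                        intrinsic_value: float, dividend_years: int) -> bool:
    """Screen a stock on the three factors mentioned above."""
    stable = debt_to_equity < 1.0             # stability check
    safe = price <= 0.7 * intrinsic_value     # margin of safety
    pays = dividend_years >= 5                # historical dividends
    return stable and safe and pays

print(passes_fundamentals(debt_to_equity=0.4, price=70.0,
                          intrinsic_value=110.0, dividend_years=8))
```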

Diffusion: The interdisciplinary classic in this field, Diffusion of Innovations by E. Rogers, lists different approaches and covers the broad range of research that has already happened in diffusion. I like the stages of innovation diffusion: awareness, persuasion, decision, and implementation. Data collected should focus on units of adoption (individual, community, user groups, etc.), rates of adoption over time, and other social aspects of adoption.
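Rates of adoption over time are often summarized with the Bass diffusion model, a standard technique in this literature rather than something Rogers' stages prescribe; this sketch simulates it with assumed coefficients:

```python
def bass_adopters(market_size: int, p: float, q: float, periods: int) -> list[float]:
    """Cumulative adopters per period under the Bass diffusion model,
    where p is the innovation coefficient and q the imitation coefficient."""
    cumulative = 0.0
    series = []
    for _ in range(periods):
        new = (p + q * cumulative / market_size) * (market_size - cumulative)
        cumulative += new
        series.append(cumulative)
    return series

# Assumed market of 10,000 units of adoption over 10 periods.
print([round(x) for x in bass_adopters(10_000, p=0.03, q=0.38, periods=10)])
```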

Model: In this facet of evaluation we focus only on the model of development used for generating and developing the innovation. It should cover the business model elements and how each of them is being addressed. Data collection would typically include metrics on needs, stages of development, partner structure, productivity, and so on (see the size, time, interface, and cost questions worksheet from NUS below). For example, Villgro, Kickstarter, and Google Ventures all operate distinct models for developing innovations.

[Worksheet: size, time, interface, and cost questions (NUS)]
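A sketch of what model-facet data collection might capture; the record fields are assumptions loosely inspired by the worksheet above, not a defined schema:

```python
from dataclasses import dataclass

@dataclass
class ModelMetrics:
    """Illustrative record for the model facet; all field names are assumptions."""
    developer: str         # e.g. an incubator, a crowdfunding platform, a VC
    needs_addressed: int   # count of validated needs the model targets
    stage: str             # e.g. "idea", "prototype", "pilot", "scale"
    partners: int          # size of the partner structure
    ideas_per_year: int    # simple productivity proxy

portfolio = [
    ModelMetrics("incubator", 12, "pilot", 5, 8),
    ModelMetrics("crowdfunding", 30, "prototype", 1, 150),
]
# Compare productivity across development models.
print(max(portfolio, key=lambda m: m.ideas_per_year).developer)
```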

Development: The entire field of developmental evaluation is dedicated to evaluating during innovation, and it applies to complex, highly social, non-linear situations. The McConnell Foundation's practitioner guide is probably the best you can get for free.

I will cover a few methods for selecting innovations, like the Pugh matrix and decision trees, possibly in another post. This will be my last post for the year 2012, and I hope to build on the momentum, covering deeper and more meaningful innovation topics in 2013. Happy new year…


Say no to ROI

Michael Mitchell recently made a presentation on travel industry trends. My key take-away from the session was on the fit between cultures and systems, and on RoI. Having been part of many M&As, made hard choices on systems, benchmarked systems, and decided where to invest for the travel industry, Michael is uniquely qualified, and his perspectives are unique as well.

He started from his experience with a couple of mergers and how the choice of systems is actually not about the systems themselves but about culture. An M&A does not necessarily mean the best systems will prevail; the systems that support the culture conducive to the business will be picked and sustained. Making the wrong choice means erosion of brand value (in industries like travel, brand development takes as much as 20 years) and of service to customers, both of which are closer to culture than to IT systems. On a follow-up question from Jas about how a conglomerate like SITA would develop and deploy across cultures, Michael reinforced the point that it is still a matter of choosing the culture that is dominant. So, as always, systems fit culture and not the other way around.

There was a trend of IT leadership in the industry showing a positive outlook on investing in initiatives with "shorter RoI cycles"; commenting on it, he said this was not the right thing to do. My question to him was when RoI ceases to be a measure of impact, and what the alternatives are. His response: if an investment is being made to reduce costs (typically the cost of transactions around a core service delivered), RoI is relevant, but the equation becomes murky when revenue is involved. Take advertising, for example: it is hard to quantify the revenue that came specifically from a marketing initiative, so applying RoI there is erroneous. Incremental revenue analysis was suggested as an alternative, but in my opinion that still has the problem of attributing credit.
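As a sketch of the contrast: RoI on a cost-reduction investment is a clean ratio, while the suggested incremental revenue analysis still leaves attribution open. All figures below are assumptions for illustration:

```python
def cost_reduction_roi(investment: float, annual_savings: float, years: int) -> float:
    """RoI is meaningful here: the savings are directly caused by the system."""
    return (annual_savings * years - investment) / investment

def incremental_revenue(revenue_after: float, revenue_before: float) -> float:
    """The suggested alternative: compare revenue before and after the
    initiative. It still cannot say how much of the delta the initiative
    itself caused (the attribution problem)."""
    return revenue_after - revenue_before

print(cost_reduction_roi(investment=500_000, annual_savings=200_000, years=3))  # 0.2
print(incremental_revenue(revenue_after=2_400_000, revenue_before=2_100_000))   # 300000
```

Note that the second function only measures the delta; splitting credit for that delta across initiatives remains a judgment call, which is exactly the attribution issue above.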
