
Corp innovation conundrum 5


Empowered here specifically means: do whatever you want in the name of innovation, with no support given as money, manpower, useful quick decisions, etc. And if the innovator does accomplish anything meaningful enough to be worthy of a press release, the sponsor will come pose for a photo with the innovator, talk about the same fantasies, and mention ideas that are actively being developed to make the fantasies more real…


Innovation risks

Drucker’s Landmarks of Tomorrow clearly lists three risks that arise out of innovation; the list is comprehensive and can quickly orient you towards tasks and outcomes around innovation. Even if outcomes are a function of many variables and are ambiguous (an excuse that innovation managers typically ride on to retain jobs/titles while not really “tasking”), you still have to act. That said, “tasking”/task orientation is necessary, and not arbitrary or ambiguous, in any enterprise; that’s a post for another day. These tasks, then, arise after the enterprise decides to reduce its first risk in innovation.

1. Risk of Exposure. Exposure risk is an inaction risk: while the business remains very successful in its chosen market, this risk makes the whole business irrelevant as newer models and innovations take over existing customers and create new ones. I visualize this risk on a slider bar, with a NO at the left end and a YES commitment at the right. Depending on the level of YES, time and resource availability for innovation is determined.

[Image: Risk of Exposure slider bar]

This YES commitment (on the risk of exposure) does not mitigate risk but leads to the next, new set of risks below. In any case, this risk cannot be avoided. You can see examples today in education, where massive open online courses offered by Coursera and the like took over while the incumbents, i.e. every higher-education player, could very well have acted earlier; or the well-known digital photography disruption missed by Kodak.

2. Risk of failure in developing the innovation. When I heard Ravi Venkatesan at the recent Zinnov Confluence, he mentioned that “skunkworks are interesting to see in labs, but unless the whole organization aligns to an innovation, there is really no chance”. I believe that by organization he meant the “tasks” on business processes, from budgeting through development, sales/marketing, service, legal, etc., that are specialized and entrenched across departments but need to come together. Successful businesses ideally should not delay capex investments into innovation, and should commit to experimenting with the next set of revenue drivers. Experiments could be, for example:

  • small, like skunkworks or community-driven developments internally
  • taking ownership stakes in companies that are doing the development

Still, the structure has to commit itself to this developmental action and evaluate all along, even if it means changing direction many times midway, to make sure the next risk of failure is covered. With crowdsourcing now possible for almost anything, this risk has been greatly reduced; the model is already operational across many platforms like Kickstarter (for investments) or NineSigma (for effort).

3. Risk of failure of the innovation itself

This is the biggest of all risks and can be really dramatic; we know many stories like these in recent times. It is what Drucker calls the ‘responsibility for the consequences’ of the failure itself, while constantly acting for the opportunity. It is no longer chance but choice: choosing to resolve contradictions between global versus local, profit versus free, etc., and thus it becomes a value decision in itself. Most of us are aware of the commercial failure of much-touted innovations like the Segway and others.

[Image: Risks in Innovation]


Evaluating Innovation

Evaluating innovation has always been a difficult job for innovators, investors, facilitators, and managers. With the increased pace of developing ideas, it becomes critical to evaluate innovations effectively and quickly. Before I begin developing an innovation evaluation framework, I will first define what I think an innovation is and draw out some characteristics. Innovations are:

  • purposeful actions that align with some personal or organizational vision
  • developed from ideas that are perceived as new and valuable
  • impactful at scale, which may include financial, social, environmental, and life impact
  • investments that may lead to disproportionate returns

Innovations are evaluated for various purposes, like:

  1. Qualifying for investment/grant/other resources
  2. Quantifying impact of the innovation
  3. Modifying the development process for a set of ideas

“While we recognize that the American economy is changing in fundamental ways – and that most of this change related directly to innovation – our understanding remains incomplete… centrality of the need to advance innovation measurement cannot be understated” – Carl Schramm, in the committee report to the Secretary of Commerce, 2008

At level 0, I believe the following facets have to be considered

[Image: Evaluating innovations, level 0 facets]

Evaluation includes the following phases/activities around data and reporting:

1. Data collection. Depending on the kind of evaluation, it may include quantitative and qualitative information. Typically data is collected from primary sources, i.e. the field, through surveys and direct interviews, or from secondary sources like agencies. Every collection effort should include independent and dependent variables, and it is useful to segregate input variables from outcome variables. Units of measure for all variables have to be standardized, or at least convertible. For comparison between different variables, you might want to consider some normalization process. Data quality standards are to be set before data collection begins, and for any further analysis the data has to meet some agreed minimum quality.
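To make the normalization point concrete, here is a minimal Python sketch, assuming z-score normalization; the variable names, figures, and units are hypothetical, not from any particular evaluation:

```python
# A minimal sketch of the normalization step, assuming z-score
# normalization. Variable names, figures, and units are hypothetical.
from statistics import mean, stdev

# Hypothetical input variable (R&D spend) and outcome variable
# (new-product revenue), collected per project in different units.
rnd_spend = [12.0, 30.0, 8.5, 22.0, 15.0]
new_revenue = [1.1, 4.0, 0.6, 2.8, 1.5]

def z_score(values):
    """Rescale a variable to mean 0 and unit variance so variables
    measured in different units become comparable."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

inputs = z_score(rnd_spend)
outcomes = z_score(new_revenue)

for i, (x, y) in enumerate(zip(inputs, outcomes), start=1):
    print(f"project {i}: input={x:+.2f}, outcome={y:+.2f}")
```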

2. Analysis and data representation. Depending on the kind of data collected, analysis methods will vary. For example, representations of financials will be in spreadsheets and charts, social data on maps, and stories as fitness landscapes. Typically this is where any hypothesis is stated and tested, and future-state predictions like model-based forecasts are put forth. Comparison with history or benchmarks happens at this stage as well.
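As a small illustration of benchmark comparison and a naive forecast, here is a hedged Python sketch; the adoption figures, the benchmark, and the simple linear-trend method are assumptions for illustration, not a prescribed analysis:

```python
# A minimal sketch of benchmark comparison and a naive forecast.
# The adoption figures, benchmark, and linear-trend method are
# assumptions for illustration only.
from statistics import mean

quarterly_adoptions = [120, 135, 160, 180]  # hypothetical collected data
benchmark = 140                             # hypothetical industry benchmark

# Compare the observed mean with the benchmark.
print(f"mean vs benchmark: {mean(quarterly_adoptions):.0f} vs {benchmark}")

# Naive linear-trend forecast: add the average quarter-on-quarter
# change to the last observation.
deltas = [b - a for a, b in zip(quarterly_adoptions, quarterly_adoptions[1:])]
forecast = quarterly_adoptions[-1] + mean(deltas)
print(f"next-quarter forecast: {forecast:.0f}")
```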

3. Results of evaluation should be an action or a recommendation. In most cases evaluation leads to decisions by parties other than the evaluator; if this party is not identified before the evaluation process begins, the effort is most likely to go to waste.

“What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?” This typically gets answered casually: “We are going to use the evaluation to improve the program” — without asking the more detailed questions: “What do we mean by improve the program? What aspects of the program are we trying to improve?” So a focus develops, driven by use. – Michael Quinn Patton

Once you have decided which facet of innovation you are trying to evaluate, you can adopt any of the many available methods for doing the actual evaluation. I will try to list some of them below, with links to external resources that I have found useful.

Impact: EPSIS provides a detailed framework that clearly distinguishes between output, impact, and performance, and offers a set of indicators that can be used for direct measurements or indirect impact measurements. Stanford’s work on social impact evaluation in philanthropy is a good place to start.

Investment: Investment-related evaluation includes both input costs and outcome returns to compare innovations. For example, at the first level we use something called t-shirt sizing for ideas, which gives a scale estimate of cost. Return on investment as a ratio is a good measure, but the underlying assumptions for predicting returns have to be clear; the other common error is around data quality when predicting returns.
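Here is a minimal Python sketch of that first-level screen, assuming a t-shirt-size-to-cost mapping and predicted returns that are entirely hypothetical; the point is only the shape of the calculation, not the numbers:

```python
# A minimal sketch of first-level investment screening: t-shirt sizes
# map to rough cost bands, and ROI is predicted return divided by cost.
# The size bands, idea names, and figures are all hypothetical.
TSHIRT_COST = {"S": 5, "M": 20, "L": 75, "XL": 250}  # assumed cost bands

ideas = [
    {"name": "field survey app", "size": "S", "predicted_return": 9},
    {"name": "chat support bot", "size": "M", "predicted_return": 60},
    {"name": "new billing platform", "size": "XL", "predicted_return": 400},
]

for idea in ideas:
    cost = TSHIRT_COST[idea["size"]]
    roi = idea["predicted_return"] / cost  # only as good as the prediction
    print(f"{idea['name']:<22} size={idea['size']:<2} ROI={roi:.1f}x")
```

Note that the ROI figure inherits every weakness of the return prediction, which is exactly why the assumptions behind it have to be stated up front.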

I personally use a value-investing check on fundamentals when getting into stocks; the factors checked are around stability, margin of safety, and historical dividends. Investment evaluation should reduce the impact of any failure and enhance experiment design. In many cases ‘closed world’ resources (freely available locally and with potential use) play a significant role in reducing investment.

Diffusion: The classic interdisciplinary work in this field, Diffusion of Innovations by Everett Rogers, lists different ways and covers a broad range of research that has already happened in diffusion. I like the stages around innovation diffusion: awareness, persuasion, decision, and implementation. Data collected should focus on units of adoption (individual, community, user groups, etc.), rates of adoption over time, and other social aspects of the adoption.
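To illustrate tracking rates of adoption over time, here is a small Python sketch that tabulates per-period and cumulative adoption and labels each period using Rogers’ well-known cumulative adopter categories; the population and adoption counts are made up:

```python
# A minimal sketch of tracking rates of adoption over time, labelling
# each period with Rogers' cumulative adopter categories (innovators
# 2.5%, early adopters to 16%, early majority to 50%, late majority
# to 84%, laggards beyond). Population and counts are hypothetical.
population = 1000
new_adopters = [10, 40, 150, 300, 300, 150, 50]  # per period, made up

def category(cum_share):
    """Map a cumulative adoption share to Rogers' adopter categories."""
    if cum_share <= 0.025:
        return "innovators"
    if cum_share <= 0.16:
        return "early adopters"
    if cum_share <= 0.50:
        return "early majority"
    if cum_share <= 0.84:
        return "late majority"
    return "laggards"

cumulative = 0
for t, n in enumerate(new_adopters, start=1):
    cumulative += n
    share = cumulative / population
    print(f"period {t}: rate={n}, cumulative={share:.0%} ({category(share)})")
```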

Model: In this facet of evaluation we focus only on what model of development was used for generating and developing the innovation; it should cover business model elements and how each of the elements is being looked at. Data collection would typically include metrics (see the size, time, interface, and cost worksheet from NUS below) on needs, stages of development, partner structure, productivity, etc. For example, Villgro, Kickstarter, and Google Ventures all operate distinct models for developing innovations.

[Worksheet: size, time, interface, and cost questions (NUS)]

Development: An entire field, developmental evaluation, is dedicated to evaluating during innovation, and it is applicable to complex, highly social, non-linear situations. The McConnell Foundation’s practitioner guide is probably the best you can get for free.

I will cover a few methods for selecting innovations, like the Pugh matrix and decision trees, possibly in another post; a minimal sketch of the Pugh matrix idea follows below.
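As a quick preview, here is a hedged, illustrative Python sketch of the Pugh matrix: each candidate is scored against a baseline (datum) per criterion as better (+1), same (0), or worse (-1), and the totals rank the candidates. The criteria, idea names, and scores are all hypothetical:

```python
# A minimal sketch of a Pugh matrix: each candidate is scored against
# a baseline (datum) per criterion as +1 (better), 0 (same), or -1
# (worse), and the totals rank the candidates. Criteria, idea names,
# and scores are all hypothetical.
criteria = ["cost", "time to market", "impact", "fit with vision"]

candidates = {  # scores relative to the baseline idea, per criterion
    "idea A": [+1, 0, +1, -1],
    "idea B": [-1, +1, +1, +1],
    "idea C": [0, -1, +1, 0],
}

for name, scores in sorted(candidates.items(),
                           key=lambda kv: sum(kv[1]), reverse=True):
    detail = ", ".join(f"{c}:{s:+d}" for c, s in zip(criteria, scores))
    print(f"{name}: total={sum(scores):+d}  ({detail})")
```

This will be my last post for the year 2012, and I hope to build on the momentum, covering deeper and more meaningful innovation topics in 2013. Happy new year…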


Cascading questions on Utility

"What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?" This typically gets answered casually: "We are going to use the evaluation to improve the program" — without asking the more detailed questions: "What do we mean by improve the program? What aspects of the program are we trying to improve?" So a focus develops, driven by use.”

– Michael Quinn Patton

In my opinion this cascading series of questions is the need of the hour, especially when it comes to KM programs. The level at which you stop asking the questions determines the level of utility of the program itself.
