
Evaluating Innovation

Evaluating innovation has always been a difficult job for innovators, investors, facilitators, and managers. With the increased pace at which ideas are developed, it becomes critical to evaluate innovations effectively and quickly. Before I begin developing an innovation evaluation framework, I will first define what I think an innovation is and draw out some characteristics. Innovations are

  • purposeful actions that align with some personal or organizational vision
  • developed ideas that are perceived as new and valuable
  • impactful at scale, where the impact may be financial, social, environmental, or on lives
  • investments that may lead to disproportionate returns

Innovations are evaluated for various purposes, such as

  1. Qualifying for investment/grant/other resources
  2. Quantifying impact of the innovation
  3. Modifying the development process for a set of ideas

“While we recognize that the American economy is changing in fundamental ways – and that most of this change is related directly to innovation – our understanding remains incomplete… centrality of the need to advance innovation measurement cannot be understated” – Carl Schramm, in the committee report to the Secretary of Commerce, 2008

At level 0, I believe the following facets have to be considered:

[Figure: Evaluating Innovations – level 0 facets]

Evaluation includes the following phases/activities around data and reporting:

1. Data Collection: depending on the kind of evaluation, it may include quantitative and qualitative information. Data is typically collected from primary sources (the field, through surveys and direct interviews) or from secondary sources such as agencies. Every collection effort should include independent variables and dependent variables, and it is useful to segregate input variables from outcome variables. Units of measure for all variables have to be standardized, or at least convertible. When comparing different variables, you might want to consider some normalization process (see the normalization sketch after this list). Data quality standards are to be set prior to beginning the data collection, and any further analysis should use data of an agreed minimum quality.

2. Analysis and Data Representation: depending on the kind of data collected, analysis methods will vary. For example, financials will be represented in spreadsheets and charts, social data on maps, and stories as fitness landscapes. This is typically where any hypothesis is stated and tested, and where future-state predictions, such as forecasts based on models, are put forth. Comparison with history or benchmarks happens at this stage as well (see the benchmark-comparison sketch after this list).

3. Results of Evaluation: the result should be an action or a recommendation. In most cases, evaluation leads to decisions by parties other than the evaluator. If this party is not identified prior to the evaluation process, the effort is most likely to go to waste.
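
Here is the normalization sketch referenced in item 1: a minimal example of min-max scaling in Python, so that variables collected in different units can sit on a common 0–1 scale. The variable names and sample values are hypothetical, and min-max is just one common choice among many normalization schemes.

```python
# Minimal sketch: min-max normalization brings variables measured in very
# different units onto a common 0-1 scale. All sample values are made up.

def min_max_normalize(values):
    """Rescale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:  # guard against a constant series
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# An input variable (cost per idea, in dollars) and an outcome variable
# (users reached) live on incomparable scales...
cost_per_idea = [12_000, 45_000, 8_500, 30_000]
users_reached = [150, 4_200, 90, 1_100]

# ...but after normalization they can be compared side by side.
print(min_max_normalize(cost_per_idea))   # ~[0.096, 1.0, 0.0, 0.589]
print(min_max_normalize(users_reached))   # ~[0.015, 1.0, 0.0, 0.246]
```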
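
Here is the benchmark-comparison sketch referenced in item 2: a minimal one-sample t-test checking whether a cohort of innovations beats a historical benchmark. The benchmark figure and cohort numbers are hypothetical, and scipy is just one of several libraries that could run this test.

```python
# Minimal sketch: test whether this cohort's return multiples genuinely
# differ from a historical benchmark. All numbers are hypothetical.
from scipy import stats

historical_benchmark = 1.8  # average return multiple from past cohorts
this_cohort = [2.1, 1.5, 2.8, 1.9, 2.4, 1.7, 2.2]  # observed multiples

# One-sample t-test: is the cohort mean different from the benchmark?
t_stat, p_value = stats.ttest_1samp(this_cohort, historical_benchmark)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value suggests the cohort really differs from history; a
# large one means the apparent improvement may just be noise.
```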

“‘What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?’ This typically gets answered casually: ‘We are going to use the evaluation to improve the program’ – without asking the more detailed questions: ‘What do we mean by improve the program? What aspects of the program are we trying to improve?’ So a focus develops, driven by use.” – Michael Quinn Patton

Once you have decided which facet of innovation you are trying to evaluate, you can adopt one of the many available methods for doing the actual evaluation. I will try and list some of them below, with links to external resources that I have found useful.

Impact: EPSIS provides a detailed framework that clearly distinguishes between output, impact, and performance, and offers a set of indicators that can be used for direct measurements or indirect impact measurements. Stanford's work on social impact evaluation in philanthropy is another good place to start.

Investment: Investment-related evaluation includes both input costs and outcome returns to compare innovations. For example, at the first level we use something called t-shirt sizing for ideas, which gives a scale estimate of cost (see the sketch below). Return on Investment as a ratio is a good measure, but the underlying assumptions for predicting returns have to be clear; the other common error is around data quality when predicting returns.
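
Here is a minimal sketch of how t-shirt sizing and an ROI ratio fit together. The bucket costs, idea names, and predicted returns below are invented for illustration, not our actual sizing scale.

```python
# Minimal sketch: t-shirt sizes map to rough cost estimates, and ROI is a
# plain ratio on top of them. All bucket values and returns are made up.

TSHIRT_COSTS = {"S": 10_000, "M": 50_000, "L": 200_000, "XL": 1_000_000}

def roi(predicted_return, size):
    """Return on investment as (return - cost) / cost."""
    cost = TSHIRT_COSTS[size]
    return (predicted_return - cost) / cost

ideas = [
    ("mobile self-service", "M", 140_000),  # (name, size, predicted return)
    ("warehouse sensors", "L", 260_000),
]
for name, size, predicted in ideas:
    # The ratio is only as good as the return prediction behind it.
    print(f"{name}: ROI = {roi(predicted, size):.0%}")
```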

I personally use a value-investing check of fundamentals when getting into stocks; the factors checked are around stability, margin of safety, and historical dividends. Investment evaluation should reduce the impact of any failure and enhance experiment design. In many cases 'closed world' resources (freely available locally, with potential use) play a significant role in reducing investment.

Diffusion: The interdisciplinary classic in this field, Diffusion of Innovations by E. Rogers, lists different approaches and covers a broad range of research that has already happened in diffusion. I like the stages of innovation diffusion: awareness, persuasion, decision, and implementation. Data collected should focus on units of adoption (individuals, communities, user groups, etc.), rates of adoption over time, and other social aspects of the adoption (a curve-fitting sketch follows).
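
Rogers shows cumulative adoption typically tracing an S-curve over time. As a sketch of working with rates of adoption, here is a logistic curve fitted to hypothetical monthly adopter counts; the data and the use of scipy's curve_fit are my own assumptions for illustration.

```python
# Minimal sketch: fit a logistic S-curve to cumulative adopter counts to
# estimate the adoption ceiling (K) and rate (r). The data is made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative adopters over time: ceiling K, growth rate r, midpoint t0."""
    return K / (1 + np.exp(-r * (t - t0)))

months = np.arange(1, 13)
adopters = np.array([5, 9, 18, 33, 60, 100, 150, 200, 240, 265, 280, 288])

# p0 gives the optimizer a rough starting guess for K, r, and t0.
(K, r, t0), _ = curve_fit(logistic, months, adopters, p0=[300, 0.5, 6])
print(f"ceiling ~{K:.0f} adopters, rate {r:.2f}, midpoint at month {t0:.1f}")
```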

Model: In this facet of evaluation we focus only on what model of development was used for generating and developing the innovation; it should cover the business model elements and how each of those elements is handled. Data collection would typically include metrics (see the size, time, interface, and cost worksheet from NUS below) on needs, stages of development, partner structure, productivity, etc. For example, Villgro, Kickstarter, and Google Ventures all operate distinct models for developing innovations.

[Worksheet from NUS: size, time, interface, and cost questions]

Development: An entire field, Developmental Evaluation, is dedicated to evaluating during innovation and is applicable to complex, highly social, non-linear situations. The McConnell Foundation's practitioner guide is probably the best free resource you can get.

I will cover a few methods for selecting innovations, like the Pugh matrix and decision trees, possibly in another post. This will be my last post for the year 2012, and I hope to build on the momentum, covering deeper and more meaningful innovation topics in 2013. Happy new year…


Long Endless

Funnel of Demand, Idea, Bonus, Recruits, and other concepts


5 essential meeting room filters

I feel meeting rooms have to be equipped with special filters to shorten meetings and make them meaningful and useful. I will start with the simple ones and gradually build up to the complex filters.

1. Clichés Filter: This filter simply cuts out all clichés uttered in the room, matched against the grand global database of clichés.

2. If Only Filter: “If only, …simple past blah blah blah” is almost like visioning in hindsight, with no possibility of ever changing anything in the present, yet it can go on through the meeting and its next 4 occurrences.

3. Me more me some more me Filter: Although this one can be avoided with a good time-keeper, there is an interesting instance of this filter in the “hide updates from this user without unfriending” feature on Facebook.

As we move from words to word collections to me me me, the filters get more complex, as below:

4. Plati Atti Servi tudes Filter: This is a seeded learning filter; its inputs vary across organizations due to varying limits of acceptable platitudes and attitudes, and the number of people adopting servitude in a specific meeting. It operates on many principles, like deepening organization hierarchy; its modes of operation range from total internal reflection of words (when the speaker starts to meditate in the meeting room, or simply sleeps off before completing the sentence) to substitution of words with antonyms, thus disrupting any units of conveyable meaning.

5. Blame the Culture Filter: The BTC filter operates on instances of not taking personal responsibility for failure and vaguely attributing it to a figureless culture. Like the “tudes” filter, this is also a seeded learning filter. Its performance varies based on initial org conditions, retirement age, hiring and firing volumes, and country of origin, among other variables.


Negative Space in Language

Negative spaces have always intrigued me. I encountered them first in my PwC days, with that classic vase/human-face picture, which was even part of our brand imagery for quite some time; more recently, in reading Betty Edwards and in the FedEx logo (note the forward arrow between the E and the x).

From a visual expression perspective it makes absolute sense to experience the gap and the space. But is it true for language as well?

I have 2 illustrations here. The first is from our office vending machine, named Starvend: there was a negative space that made me read it as “Starve end” instead of the usual “Star Vend”. I perceived the extra “e” and the space.

My usual brand of cigarettes has a tagline, “Honeydew Smooth”, which when read with the negative spaces sounds exactly like “I need you smooth”. I heard the whole, with the negative space, as a completely different phrase than what was written.

Can negative spaces be phonemic, and not just visual? Can they also be extended to the next natural level, that of a concept? If that were true, is this a reason why some people are more comfortable listening to incomplete ideas? Is this perception the reason why some people like the poetry of T.S. Eliot or Sylvia Plath more than others? Can we design this negative space for better perception? Just asking.


Defining Innovation in the Enterprise

I have over the years seen so many platitudes around innovation and leadership that I almost have ‘banner blindness’ for them. A welcome change today was reading A.G. Lafley's (CEO, P&G) 2008 letter to shareholders. This is by far one of the most comprehensive definitions of “innovation”; because it was written for shareholders, a far more diverse demography than employees, the language shows it. In case you are on the path to defining innovation, setting objectives, or launching innovation programs, this is a great place to start. Thanks, P&G…


Stage setting for Conflicts in Innovation Programs

Conflicts are natural, and they become nasty only when they are not anticipated. Running innovation programs is no different in setting the stage for all forms of conflict among agents. My idea is to list some of these stage settings so that, as a facilitator, you can prepare and go in with a plan to move action forward, rather than kill an idea with potential just because it is in conflict. The categorization is naive, but I hope you get a drift of how the stage gets set in different forms.

Selection Conflicts

  1. In larger enterprises, the key problem is detecting and selecting an innovator and a family of high-impact ideas. A badly designed selection process leads to sponsoring ideas that are not going to increase the firm's competitiveness. This can set the stage for considerable conflict and disengagement between those selected and those left out, and may result in exits carrying key information or high-potential ideas.
  2. While detection is not an issue in smaller enterprises, a high-potential idea can be in direct conflict with the ownership structure and the desired direction of the sole owner or promoter. If the high-potential idea is not sponsored directly or given independence internally, it leads to the creation of a competitor in close vicinity with the ideas.
  3. Hiding white-elephant projects (i.e. projects that hog both limelight and investments for a long time under some key sponsors, but produce nothing more than slide decks and platitudes for the business) within large innovation programs is a common stage for conflict, and it drives original innovators away from signing up for the programs.

Management Conflicts

  1. Innovators are usually at odds with their immediate supervisors, leading to conflict within teams and disengagement from ideas. When innovation programs are envisioned and developed at the top management level, it is not fair to expect the same level of understanding from the managers running the program, as their concerns sit at different levels. For an innovator working with such a manager, there is a conflict between what top management said in a town hall meeting and what happens on the ground. This conflict generally goes unresolved even when brought up in top management forums, and usually results in less-than-anticipated participation across the entire innovation program.

Strategy  / Process Conflicts

  1. A set strategy inhibits experiments that are directionally outside of it. This works against innovators who just like to play or experiment; if an experiment succeeds, innovators typically want to build on it further. Specifically, navigating organizational administrative systems to get approvals for strategically unaligned ideas is usually tougher. This sets the stage for conflict between weak or dated ideas that are sponsored and newer or unproven ideas that are left out. Seeking investment versus running in stealth is a choice only while capital needs remain small and conflicts with processes can be avoided; beyond a threshold, the need to go through the cumbersome budget and approval process, or to find a sponsor, leads to other conflicts.

Innovation Space for Hire

I had never thought space could be a source of revenue in innovation. My first encounter with the idea was at WhatIf Innovation, and I went on to search for similar businesses; it seems like there is a long tail now for everything. Anyway, my take is that educational institutes have greater potential to get into this business than others, specifically because they are at great locations and most of their infrastructure is underutilized. Talk about resources, it is just everywhere…
