
Evaluating Innovation

Evaluating innovation has always been a difficult job for innovators, investors, facilitators, and managers. With the increasing pace at which ideas are developed, it becomes critical to evaluate innovations quickly and effectively. Before I begin developing an innovation evaluation framework, I will define what I think an innovation is and draw out some characteristics. Innovations are:

  • purposeful actions that align with a personal or organizational vision
  • the development of ideas that are perceived as new and valuable
  • impactful at scale; the impact may be financial, social, environmental, or on people's lives
  • investments that may lead to disproportionate returns

Innovations are evaluated for various purposes, such as:

  1. Qualifying for investment, grants, or other resources
  2. Quantifying the impact of the innovation
  3. Modifying the development process for a set of ideas

“While we recognize that the American economy is changing in fundamental ways - and that most of this change is related directly to innovation - our understanding remains incomplete…centrality of the need to advance innovation measurement cannot be understated” – Carl Schramm, in the committee report to the Secretary of Commerce, 2008

At level 0, I believe the following facets have to be considered

[Figure: Evaluating Innovations – Level 0 facets]

Evaluation includes the following phases/activities around data and reporting

1. Data collection. Depending on the kind of evaluation, this may include quantitative and qualitative information. Data is typically collected from primary sources (the field, through surveys and direct interviews) or from secondary sources such as agencies. Every collection effort should include both independent and dependent variables, and it is useful to segregate input variables from outcome variables. Units of measure for all variables have to be standardized, or at least convertible, and when comparing different variables you may want to consider some normalization process (a sketch of one normalization option is given after this list of phases). Data quality standards should be set before collection begins, and any further analysis should use only data that meets the agreed minimum quality.

2. Analysis and data representation. The analysis methods will vary depending on the kind of data collected. For example, financials are represented in spreadsheets and charts, social data on maps, and stories as fitness landscapes. This is typically where any hypothesis is stated and tested, and where future-state predictions, such as forecasts based on models, are put forth. Comparison with history or benchmarks happens at this stage as well.

3. Results of the evaluation should take the form of an action or recommendation. In most cases the evaluation leads to decisions by parties other than the evaluator, and if that party is not identified before the evaluation process begins, the effort is likely to go to waste.
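
To illustrate the normalization step mentioned under data collection, here is a minimal Python sketch, assuming made-up variable names and a simple min-max scaling choice (neither comes from any specific evaluation framework), that rescales an input variable and an outcome variable with different units onto a common 0-1 range so they can be compared:

    def min_max_normalize(values):
        """Rescale a list of numbers to the 0-1 range so that variables
        measured in different units can be compared side by side."""
        lo, hi = min(values), max(values)
        if hi == lo:  # constant series: nothing to spread out
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    # Hypothetical example: an input variable (grant money, in USD)
    # and an outcome variable (households reached, a count).
    grant_usd = [50_000, 120_000, 80_000, 200_000]
    households_reached = [300, 900, 450, 1_100]

    print(min_max_normalize(grant_usd))           # approx. [0.0, 0.47, 0.2, 1.0]
    print(min_max_normalize(households_reached))  # [0.0, 0.75, 0.1875, 1.0]

Normalizing this way only makes the comparison mechanical; it does not remove the need for the agreed minimum data quality mentioned above.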

“What are we really going to do with this? Why are we doing it? What purpose is it going to serve? How are we going to use this information?” This typically gets answered casually: “We are going to use the evaluation to improve the program” — without asking the more detailed questions: “What do we mean by improve the program? What aspects of the program are we trying to improve?” So a focus develops, driven by use.  – Michael Quinn Patton

Once you have decided which facet of innovation you are trying to evaluate, you can adopt one of the many available methods for doing the actual evaluation. I will list some of them below, with links to external resources that I have found useful.

Impact: EPSIS provides a detailed framework that clearly distinguishes between output, impact, and performance, and offers a set of indicators that can be used for direct or indirect impact measurement. Stanford's material on social impact evaluation in philanthropy is a good place to start.

Investment: Investment-related evaluation includes both input costs and outcome returns, so that innovations can be compared. For example, we use something called t-shirt sizing for ideas at the first level, which gives a rough scale estimate of cost. Return on Investment as a ratio is a good measure, but the underlying assumptions for predicting returns have to be clear; the other common error is poor data quality when predicting returns.
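
To make the ratio concrete, here is a minimal sketch assuming hypothetical t-shirt-size cost buckets and predicted returns; the bucket values are illustrative only, not a standard:

    # Hypothetical t-shirt sizing buckets for first-level cost estimates (in USD).
    TSHIRT_COST = {"S": 10_000, "M": 50_000, "L": 200_000}

    def roi(predicted_return, size):
        """Return on Investment as a ratio: (return - cost) / cost."""
        cost = TSHIRT_COST[size]
        return (predicted_return - cost) / cost

    # Two candidate ideas compared on the same basis; the predicted returns
    # are only as good as the assumptions and data quality behind them.
    print(roi(predicted_return=80_000, size="M"))   # 0.6  -> a 60% return
    print(roi(predicted_return=90_000, size="S"))   # 8.0  -> an 800% return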

I personally use a value-investing check on fundamentals when getting into stocks; the factors checked are around stability, margin of safety, and historical dividends. Investment evaluation should reduce the impact of any failure and improve experiment design. In many cases ‘closed world’ resources (freely available locally, with potential use) play a significant role in reducing investment.

Diffusion: The interdisciplinary classic in this field, Diffusion of Innovations by E. Rogers, covers a broad range of research that has already happened on diffusion. I like its stages of innovation diffusion: awareness, persuasion, decision, and implementation. Data collected should focus on units of adoption (individuals, communities, user groups, etc.), rates of adoption over time, and other social aspects of the adoption.
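
As one way of looking at rates of adoption over time, here is a minimal sketch of a cumulative adoption S-curve using a logistic model; the population size and curve parameters are illustrative assumptions of mine, not figures from Rogers:

    import math

    def cumulative_adoption(t, population=1_000, midpoint=5.0, rate=1.2):
        """Logistic S-curve: cumulative adopters at time t (in years)."""
        return population / (1 + math.exp(-rate * (t - midpoint)))

    # The rate of adoption per period is the difference between consecutive totals.
    totals = [cumulative_adoption(t) for t in range(0, 11)]
    new_adopters = [later - earlier for earlier, later in zip(totals, totals[1:])]
    for year, (total, new) in enumerate(zip(totals[1:], new_adopters), start=1):
        print(f"year {year}: {total:7.0f} cumulative adopters, {new:5.0f} new")

Plotting the same numbers against the chosen unit of adoption (individuals, communities, user groups) gives the familiar diffusion picture.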

Model: In this facet of evaluation we focus only on the model of development used for generating and developing the innovation; it should cover the business model elements and how each of them is being addressed. Data collection would typically include metrics (see the size, time, interface, and cost questions worksheet from NUS below) on needs, stages of development, partner structure, productivity, etc. For example, Villgro, Kickstarter, and Google Ventures all operate distinct models for developing innovations.

[Worksheet: size, time, interface, and cost questions (NUS)]

Development: The entire field of Developmental Evaluation is dedicated to evaluating while the innovation is in progress, and it is applicable to complex, highly social, non-linear situations. The McConnell Foundation’s practitioner guide is probably the best you can get for free.

I will cover a few methods for selecting innovations, like the Pugh matrix and decision trees, possibly in another post. This will be my last post for the year 2012, and I hope to build on the momentum by covering deeper and more meaningful innovation topics in 2013. Happy new year…


EMC Trends 2013 TRIZ overlay

As eleven EMC executives offer their predictions of which technologies and trends will transform cloud computing, Big Data, and IT security the most in 2013, my aim is to find the underlying TRIZ trend in each and possibly push it one more evolution round. I assume a time frame of one year; the basis is the 8 TRIZ evolution trends (as applied to Mobile) applied here to the EMC executives' predictions.

Each item below has three parts: the prediction quote (emphasis and a few links are mine), my warped explanation of the underlying TRIZ trend, and what can happen next on this trend.

an intelligence-driven security model…will require multiple components including:  a thorough understanding of risk, the use of agile controls based on pattern recognition and predictive analytics, and the use of big data analytics to give context to vast streams of data from numerous sources to produce timely, actionable information

The Law of Completeness is exemplified here: the ENGINE is understood as risks and risk-related information originating across the board, the TRANSMISSION is visualized as streams of information flowing to the WORKING UNIT, where actions are initiated from that information to contain risks and their effects, with some CONTROL over the above elements, including analytics.

The Law of Uneven Development applied to the above means the four elements will evolve at different speeds.

Within the same time frame, I feel the engine element will evolve the slowest, with not many new risk categories getting added, but we may have to deal with a geometrically higher number of information streams, with big data analytics playing a super-system role. Governance at the working-unit level will go through changes as well, with many tasks getting automated.

For CIOs, the common theme is “now.” Rapid time to value is the leading driver. In many cases today, the business unit holds the money and determines the priorities, but they don’t care much about platforms, just the best solution for a specific problem…movement to cloud solutions is only going to escalate

Transition to the micro level will mean that instead of a single cloud solution at the enterprise level, each department or project will begin its own adoption independently. Budgeting and allocation, as always, and the experiments and trials of solutions will shift to the micro level in both size and time, i.e. smaller projects with shorter time cycles to try. You can check the free trial offers from HP, Google, AWS, and VMware to see how micro this has actually become already.

Adoption rates will most likely be at the beginning of the sharp rise of the S curve (with the X axis as linear time and the Y axis as cumulative % adoption of a cloud solution).

IT will begin its delayed policy-making role later in the year, with governance as the central goal, after many micro-level cloud solutions get adopted in the enterprise. It will possibly negotiate with the popular-choice vendors to support internal laggard/late-adopter needs.

The transformation to hybrid cloud environments, and the need to move data between corporate IT data centers and service providers, will accelerate. The concepts of both data and application mobility to enable organizations to move their virtual applications will become the norm.

Already the roles and responsibilities of the different channel entities are blurring. SIs are becoming resellers; resellers are becoming service providers; and even end users are becoming service providers. Over the next three years, it is probable that the traditional mix of end-user, channel, alliance and service organizations will change, merge or disappear.

Transition to the super system will mean aggregation and unification of the entire service procurement, including licensing, integration, migration, channel management, etc.

More subsystems and services of the past will move to the super system and will be on the path to becoming ubiquitous. Most partner ecosystems these days already include license offers, marketing support, education support, and account management for partners. Examples could be the Google and VMware partner programs.

The emergence of the Hadoop data ecosystem has made cost-effective storage and processing of petabytes of data a reality.  Innovative organizations are leveraging these technologies to create an entire new class of real-time data-driven applications

IT will continue to see abstractions with more intelligence in the data center moving to a software control plane that uses Web-based technologies to access compute, networking, and storage resources as a whole (e.g. software-defined data center). Cloud model tenets like efficiency and agility will expand to include simplicity as data centers look for easier ways to consume technology.

The Law of Conduction will mean the emergence of standards for data and application portability this year. A de-facto standard is my expectation, and de-jure standards, especially from the EU region, are also possible, as governments can step in to decide and declare norms out of the “jungle of standards”.

The Law of Harmonization will necessitate smooth transitions at the application and portfolio level, and will mean newer services, especially in migration and testing, to ensure business continuity.

Simplicity, agility, portability testing, or <<other cloud tenet>> services may emerge as key sellers from offshore.


Make more Money | Visualizing your Super System

I came to know of a colleague who started diversifying his sources of income simply by visualizing super-system elements sequentially.

Super System (Future): When he figured out how to develop land into dwelling units, this became a steady source of income as well. Most of his US-based friends now invest in real estate through him for development.
System (Present): He then found out he could actually train people and make some more money. When more people started coming to his trainings, he found that creating a rental space for trainees was worth it as well, so he built paying-guest accommodation for trainees.
Sub System (Past): To begin with, he was earning a regular day-job salary as an SAP consultant. He also learnt how to install SAP for training and implementation as part of that day job.

An inspiring story; if you think about it, much of it can be attributed to forcefully thinking beyond the HERE and the NOW.

It is also a great example of how a simple function of making money or value can be pushed to super-system levels over time using the system operator.


For all of you trying to use perception…

For all of you trying to use perception mapping, please understand it is not an effort to construct reality from perceptions surfaced in a meeting. Nobody wants to live in a world created from perceptions alone. Rather, the effort is to make sense of what more than one person in a group held as a perception, how it affects the thinking of the group, and to intervene only as much as is needed to change that. Reality changes perceptions, and rarely do perceptions change reality, but that is not what you are trying to do.
If there are collector-point perceptions, understand that it is groupthink; if there are contradictions, understand that the reality is itself contradictory; and understand that long loops are really imaginary if the group itself is not in any proximity. In any case, there is value in the mapping and in changing the picture or representation of reality that you always had.


What if I have to come up with doomsday scenarios exhaustively?

Balachandar Ramadurai | May 27, 2010
Introduction
This is a question that is unnerving to most of us, well, at least to me. We confront doomsday scenarios every day.
“We are going to run out of fossil fuels by 2050”
or
“Temperature of the earth is going up by 6 degrees, if yes, then…”
But, in a project context, how does one predict all of this? The idea is not to rumour-monger and create fear in people’s minds. If one is ready with the risks or scenarios, one can at least attempt to solve the problem and, at another level, be ready for the risk.
Now, there is a tool by the name of Functional Analysis, or Functional Attribute Analysis. When I am through explaining what it is, one may get the feeling that this is a distant cousin of the MindMap and a fraternal twin of the Concept Map. But the devil is in the detail, as they say. The details are given below:
Functional Analysis
1. Identify all possible elements or objects: NOT actions, but specific software modules, players, and machines.
For example, importing is an action, but the importing module is an object.
Tips
Use a spreadsheet to list down all the elements.
If the system is too complex, consider breaking it down into components.
Addition of objects can happen at a later stage as well, so don’t fret if you think you have not been exhaustive.
2. Interaction Matrix: In the above spreadsheet, the rows contain all the elements. Transpose them onto the corresponding columns as well. If two elements physically touch each other or there is some direct interaction, mark “+” in the corresponding cell (a small code sketch after this list illustrates steps 2 to 4).
For example, if the system is a water bottle: for Cap vs Bottle the interaction is “+”, even though the cap performs no actual function on the bottle.
3. The object with the maximum “+” signs should be drawn at the centre of the functional analysis diagram
4. Keep adding objects to the diagram in the order of the number of “+”s
5. Add relationships between the objects, if they exist. Indicate the direction of the relationship using an arrow.
6. Add verbs on these relationships.
For example, Importing module “imports” data file.
7. Force a verb onto every relationship, even if it is difficult to come up with one.
When you finish, what you have is a functional analysis diagram.
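As an illustration of steps 2 to 4, here is a minimal Python sketch using the water-bottle example; the object names and interactions are my own assumptions, made up only to show the mechanics:

    # Hypothetical objects and their direct interactions (the "+" cells of the matrix).
    objects = ["cap", "bottle", "water", "label", "hand"]
    interactions = {
        ("cap", "bottle"), ("bottle", "water"), ("bottle", "label"),
        ("hand", "cap"), ("hand", "bottle"),
    }

    def plus_count(obj):
        """Number of '+' cells for an object, counting either direction."""
        return sum(obj in pair for pair in interactions)

    # Steps 3 and 4: order objects by their number of '+' signs;
    # the first one is drawn at the centre of the diagram.
    ordered = sorted(objects, key=plus_count, reverse=True)
    print(ordered)                # ['bottle', 'cap', 'hand', 'water', 'label']
    print(ordered[0], "goes at the centre of the functional analysis diagram")
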
Scenario and Test Case Generation
1. Start with the key object. The key object is the one with the maximum number of connections, and hence its dependencies are extremely high.
2. For each of the relationships with the key object, ask yourself four questions/consider the possibilities in this order (a small sketch after this list shows how the questions combine with the relationships):
a. What if this object doesn’t exist? For example, what if the database crashes and is offline?
b. What if the relationship is insufficient? For example, what if a few fields in the record are missing?
c. What if the relationship is excessive? For example, what if there are 20,000 transactions in a single day?
d. What if the relationship or transaction is harmful? For example, what if a seller enters erroneous data into the signup forms (on eBay, say)?
Tip – Dwell on each possibility for some time to get different interpretations of that possibility for that relationship
3. Do this rigorously for all the relationships to generate exhaustive scenarios.
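To show how the four questions combine with the relationships from the diagram, here is a minimal sketch; the relationships and the wording of the generated prompts are illustrative assumptions of mine:

    # Hypothetical relationships around a key object ("importing module").
    relationships = [
        ("importing module", "imports", "data file"),
        ("importing module", "writes to", "database"),
    ]

    # The four doomsday questions applied to every relationship.
    questions = [
        "What if {obj} does not exist?",
        "What if '{subj} {verb} {obj}' is insufficient?",
        "What if '{subj} {verb} {obj}' is excessive?",
        "What if '{subj} {verb} {obj}' is harmful?",
    ]

    def generate_scenarios(rels):
        """Yield one scenario prompt per (relationship, question) pair."""
        for subj, verb, obj in rels:
            for question in questions:
                yield question.format(subj=subj, verb=verb, obj=obj)

    for scenario in generate_scenarios(relationships):
        print(scenario)   # 2 relationships x 4 questions = 8 scenario prompts

Each printed prompt is then dwelt on, as in the tip above, and turned into concrete scenarios and test cases.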
Happy Ending
All scenarios and corresponding test cases should now have been generated exhaustively. Whenever there is a change in specifications, all you need to do is identify, pictorially, which relationship the change affects, and you can generate test scenarios for the change.
This methodology is intended for Testing and Risk Analysis tasks. If you have any questions, please feel free to reach out to me.


TRIZ India Posts Sept 10

I have been posting back at trizindia.org.

http://trizindia.org/profiles/blogs/ifr-at-google
The above post is more of a stumbled-upon example of the Ideal Final Result.

http://trizindia.org/profiles/blogs/on-time-and-relationships
In this post I have raised an important question on problem-definition tools that has gone unheeded in both the trizindia and TRIZ Developers forums, except for Anoop Kurup acknowledging the problem.
