Department of Sociology
Center for Innovation

Evaluation of Science and Technology

Several articles have recently appeared (Arnold, 2004; Molas-Gallart and Davies, 2006) arguing for several changes in the kinds of S&T evaluations being made. Among other suggestions, the following seem to be of special merit: (1) a more macro and systemic focus; (2) concentrating on the processes of innovation; (3) using theory to guide the S&T evaluation; and (4) identifying blockages and obstacles, or what Arnold (2004) labels failures. One of the great advantages of concentrating on innovation processes is that it helps identify the causal chain connecting policy intervention and outcome, including eventual societal impact, a problem that Molas-Gallart and Davies (2006) identify in the more typical medium- and long-term evaluations. To this list of desiderata, we would add the perspective of the policy maker, who wants to know what policy reformulations should be made to correct the blockages and obstacles.

Given these observations, the Center for Innovation has developed the following theories and frameworks for evaluating innovation and knowledge production:

  • A three-level framework (micro, meso, and macro) for evaluating technological sectors, with three sets of indicators at each level for identifying blockages and obstacles.
  • Measures of radical innovation in science and technology.
  • A focus on evaluating health care in real time.

As an example of the first objective, we presented a new policy model at an international conference, and several papers were published on this framework. At the meetings of the American Evaluation Association we made a concrete proposal for a new kind of innovation evaluation, applied to the U.S., that would examine the three levels, micro, meso, and macro, and identify a number of blockages.

This presentation was expanded into a book, Restoring the Innovative Edge: Driving the Evolution of Science and Technology, published in May 2011 by Stanford University Press. It suggests that the following steps need to be taken:


  • Seize strategic opportunities for radical product/process innovations and scientific breakthroughs by knowing the processes of evolution
  • Select the appropriate kinds of complex research teams
  • Stimulate cross-fertilization learning within and between research teams
  • Construct transformational organizations
  • Connect the idea innovation network to overcome gaps
  • Continuously evaluate the innovation process
  • Promote public and private sector cooperation
  • Develop new kinds of data collection by funding agencies

Each of these solutions involves costs, because innovation takes place within a complex social system that has other barriers attached to it. Furthermore, the most appropriate solution for one kind of research organization, say a public research laboratory, is not necessarily the best for a private firm or a university. The book considers these various dilemmas.

As an example of successful approaches to radical innovation in science and technology, Hage, with a team of historians (Hollingsworth and Hollingsworth), selected some 276 radical innovations across a century, defined by eight major prizes or nominations. Only six research organizations accounted for one-fourth of these advances, one of which is the Institut Pasteur. Hage, working with Mote, identified the characteristics that made this organization so successful and published several papers [summarized in this presentation]. The Institut Pasteur created the field of biomedical research and transformed the French health care system, providing a way of defining institutional innovations.

A special focus of the Center for Innovation has been measuring radical innovation in the health care system and developing new metrics to evaluate the returns on R&D investment. The Canadian government asked Hage to develop a comprehensive framework for evaluating health care benefits. In a white paper, he proposed 15 measures of technical advance within a specific health category, based on examining the stages of health care intervention. In addition, three measures of increased knowledge were suggested. Special sections on various topics were included, among them one on measuring the quality of care and another on detecting gaps in medical research.

As an example of how this framework can be used, we received a grant from the National Science Foundation to measure the various outcomes of investments in medical research. Rather than focusing on specific tools and techniques, the intent is to evaluate the clinical research program in its capacity to reduce health care costs and mortality. This project examined four morbidities: melanoma, breast cancer, colorectal cancer, and frailty and falls. The melanoma report provides an example of what was accomplished.

We have expanded our evaluation concerns about health to developing countries. Having previously completed a project on the relative merits of networks of agencies versus single founded agencies in Nicaragua, the Center for Innovation was asked by the Liverpool School of Tropical Medicine to answer the question of why leadership in the Indian health care system was not responsive to recommendations to improve health care delivery. In developing countries, the current practice has been to use Lot Quality Assurance Sampling (LQAS) techniques combined with concrete indicators to measure the effectiveness of various interventions to reduce disease and mortality. The centralization/decentralization of the Indian health care system meant that LQAS evaluations produced little progress [see this presentation], while Uganda profited from its evaluations for a number of reasons [see this report].
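
LQAS classification rests on a simple binomial decision rule: sample n respondents in a supervision area and classify the area as adequate when at least d of them meet the indicator. Below is a minimal sketch in Python of how such a rule is chosen, assuming the sample size of 19 common in field LQAS surveys and illustrative coverage thresholds of 80% and 50%; the function names and thresholds are our own, not taken from the India or Uganda work described above.

```python
from math import comb


def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p); returns 0.0 when k < 0."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))


def lqas_rules(n: int, p_upper: float, p_lower: float, max_risk: float = 0.10):
    """Find every decision rule d (classify an area as adequate when at
    least d of the n sampled respondents meet the indicator) whose two
    misclassification risks both stay at or below max_risk:
      alpha: an area truly at coverage p_upper is classified inadequate,
      beta:  an area truly at coverage p_lower is classified adequate.
    """
    rules = []
    for d in range(n + 1):
        alpha = binom_cdf(d - 1, n, p_upper)        # P(X < d | p = p_upper)
        beta = 1.0 - binom_cdf(d - 1, n, p_lower)   # P(X >= d | p = p_lower)
        if alpha <= max_risk and beta <= max_risk:
            rules.append((d, alpha, beta))
    return rules


# Hypothetical thresholds for illustration: an 80% coverage target versus
# a 50% lower threshold, with the field-standard sample size of 19.
for d, alpha, beta in lqas_rules(n=19, p_upper=0.80, p_lower=0.50):
    print(f"d = {d}: alpha = {alpha:.3f}, beta = {beta:.3f}")
```

Run as written, this recovers the familiar 19/13 rule: with 19 respondents sampled per area, requiring at least 13 to meet the indicator keeps both misclassification risks below 10%.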

A major goal of the Center for Innovation is developing the concept of a Gross Domestic Innovation Benefit (GDIB) measure. Much of neo-classical economics has been constructed on the idea of GDP, which is based on various measures of productivity. The consequence has been the emergence of theories that in various ways improve productivity but at the same time have had unfortunate consequences, not only for innovation, which is poorly captured in productivity measures, but also for the health of individuals and the quality of the environment. Our measure of Gross Domestic Innovation Benefit would include the consequences of innovation not only for economic growth, employment, and positive trade balances, but also for increased knowledge, external power, and health and rehabilitation benefits.
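
As a purely illustrative sketch, not the Center's formulation, such a measure could take the form of a weighted composite of normalized benefit scores over the components just listed; the weights and scores below are hypothetical:

```latex
% Hypothetical illustration only: one way a GDIB index might aggregate
% the benefit components named in the text. The weights w_k and the
% normalized component scores B_k are assumptions, not the Center's.
\[
\mathrm{GDIB} = \sum_{k \in K} w_k B_k, \qquad \sum_{k \in K} w_k = 1,
\]
\[
K = \{\text{growth},\, \text{employment},\, \text{trade balance},\,
     \text{knowledge},\, \text{external power},\, \text{health}\}.
\]
```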

Recent Research Reports include:

  • Ruegg, Rosalie and Gretchen B. Jordan, 2011, Guide for Conducting Benefit-Cost Evaluation of Realized Impacts of Public R&D Programs, Prepared for U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Revised working draft, August
  • Report on the Investigation of New Requirements for STAR and Issues in the Leadership of Science, Technology and Innovation. 2010
  • Jordan, Gretchen, Pete Oelschlaeger, Alan Burns, Randy Watkins, and Tim Trucano. 2010. Description of the Sandia National Laboratories Science, Technology & Engineering Metrics Process, SANDIA REPORT Unlimited Release, SAND2010-0388, April
  • Report on the Third Wave of the Research Environment Survey Administered to STAR in 2009: Patterns of Stability and Change
  • Making the case for the Advanced Hyperspectral Sounder, for STAR. 2009
  • Examining multiple social networks at the Center for Satellite Applications and Research (STAR). 2009

Relevant Publications include:

  • Jordan, G.B., J. Hage, J. Mote. 2011. Research Profiles: Prolegomena to a Theory of the Management of Innovation, in "Technological, Managerial and Organizational Core Competencies: Dynamic Innovation and Sustainable Advantage," Eds. F.S. Nobre, D.S. Walker, and R.J. Harris, IGI Global, USA, in progress.
  • Jordan, G. 2010. A Theory-Based Logic Model for Innovation Policy and Evaluation, Research Evaluation, 19(4), October: 263-274.
  • McLaughlin, John A. and Gretchen B. Jordan. 2010. Using Logic Models. Handbook of Practical Program Evaluation, 3rd Edition, Wholey, J., Hatry, H., and Newcomer, K., Eds., Jossey Bass, 55-80.
  • Jordan, G., J. Hage and J. Mote. 2008. A Theories-Based Systemic Framework for Evaluating Diverse Portfolios of Scientific Work Part One: Micro and Meso Indicators. In C.L.S. Coryn and Michael Scriven (Eds.), Reforming the Evaluation of Research. New Directions for Evaluation, 118:7-24.

Recent Conference Papers include:

  • Mote, Jonathon, Aleia Clark and Wilbur Hadden. Panel Session: Understanding Knowledge Production Systems: Ecology, Context and Complexity in the National Laboratories. American Evaluation Association, Minneapolis, November 2012.
  • American Evaluation Association Multipaper Session. November 12, 2010. Jerald Hage, Chair.

    Session Title: Report on a Test of a General Method for Quick Evaluation of Medical Research by Morbidity

    • Nelson, Amber. "A Quick Evaluation of Alzheimer's Disease Research."
    • Nixon, Alice J. "A Quick Evaluation of Breast Cancer Research."
    • Waggle, Joseph. "A Quick Evaluation of Colorectal Cancer Research."
  • Jordan, Gretchen and Rosalie Ruegg. 2009. "A Credible Approach to Benefit-Cost Evaluation for Federal Research & Technology Programs: A U.S. Department of Energy Approach." Presented at the American Evaluation Association Annual Conference, Orlando, FL, November 11-14.
  • Hage, Jerald, Wilbur Hadden and Gretchen Jordan. 2009. "Measuring Technical Progress in Real Time." Presented at the American Evaluation Association Annual Conference, Orlando, FL, November 11-14.
  • Hage, Jerald, Jonathan Mote, and Aleia Clark. 2009. "Evaluating Non-existent Technologies: The Advanced Hyperspectral Sounder." Presented at the American Evaluation Association Annual Conference, Orlando, FL, November 11-14.
  • Jordan, Gretchen and Rosalie Ruegg. 2009. "A Credible Approach to Benefit-Cost Evaluation for Federal Research & Technology Programs: A U.S. Department of Energy Approach." Presented at the Atlanta Conference on Science and Technology Policy, Atlanta, GA, October 1-4.
  • Hage, Jerald, Wilbur Hadden and Gretchen Jordan. 2009. "Measuring Technical Progress in Real Time." Presented at the Atlanta Conference on Science and Technology Policy, Atlanta, GA, October 1-4.
  • Hage, Jerald, Jonathan Mote, and Aleia Clark. 2009. "Evaluating Non-existent Technologies: The Advanced Hyperspectral Sounder." Presented at the Atlanta Conference on Science and Technology Policy, Atlanta, GA, October 1-4.
  • Hage, Jerald, Jonathan Mote, and Gretchen Jordan. 2009. "Profiles of Innovativeness and Gaps in the Idea Innovation Network." Poster presented at the 15th Annual Coalition for National Science Funding Exhibition and Reception, Washington, DC, March 24.

Working Papers:

  • What are Good Jobs in Public Research Organizations: The Distinctive Job Characteristics and Rewards of Research Work (Mote, Hage, Clark and Jordan), 2012
  • Good Jobs and Work Satisfaction in STEM Occupations and Research Work (Hage, Mote, Lucas, and Hadden), 2012

Updated 12 December 2016