International Transactions in Operational Research 11 (2004) 139-154

Prospective evaluation of a cluster program for Finnish forestry and forest industries

A. Salo, T. Gustafsson and P. Mild

Systems Analysis Laboratory, Helsinki University of Technology, P.O. Box 1100, 02015 HUT, Finland

Received 30 September 2002; received in revised form 3 February 2003; accepted 25 February 2003

Abstract

In this paper, we report a prospective evaluation process for a major research program for the Finnish forestry and forest industries. To a significant extent, this process was based on fifteen participatory workshops where tools of operational research, most notably multi-criteria methods embedded in a group support system, were deployed to help representatives from industry, the research community, and public administration in the assessment of socio-economic impacts and the identification of further research topics. Drawing upon this case study, we also analyze the preconditions, advantages, and limitations of similar kinds of participatory processes in the implementation of formative and summative evaluations.

Keywords: decision support; multi-criteria methods; research evaluation; technology foresight

© 2004 International Federation of Operational Research Societies. Published by Blackwell Publishing Ltd.

1. Introduction

There is a widespread consensus among policy makers and researchers alike that the ability to innovate is vital to the well-being of knowledge-based societies (Caracostas and Muldur, 1998; Kuhlmann et al., 1999; Smits, 2001). As a result, science and technology (S&T) policies have grown in importance, in the realization that these policies are crucial for the fostering of innovative activities and economic growth (see, e.g., OECD, 1999, 2001). In many countries, large-scale research programs are among the key instruments for the implementation of S&T policies: in Finland, for example, the Academy of Finland and the National Technology Agency (Tekes) fund an extensive range of programs which not only provide an important source of project funding but promote networking activities and the exploitation of research results as well (see, e.g., Salo and Salmenkaita, 2002).

Briefly put, program evaluation can be understood as a systematic process which seeks to determine the relevance, efficiency, and effect of the program in terms of its objectives, implementation, and administrative management (Papaconstantinou and Polt, 1997). Evaluation is presently common practice in the management of publicly funded research programs, whereby

external evaluators are often engaged to secure impartial and objective findings (Luukkonen, 1998). From the policy makers' point of view, however, evaluation cannot be the only source of information: this is because evaluation tends to be backward-looking, while the preparation of new S&T policies and research programs must be driven by future challenges, too (Coates, 1985; MacLean, Anderson and Martin, 1998; Meyer-Krahmer and Reiss, 1992). Such challenges can be addressed through foresight activities which "seek to generate an enhanced understanding of possible scientific and technological developments and their impacts on economy and society, in order to support the shaping of sustainable S&T policies, the alignment of research and development (R&D) efforts with societal needs, the intensification of collaborative R&D activities and the systemic long-term development of innovation systems" (Salo and Cuhls, 2003).

Usually, foresight activities are organized through participatory processes which involve stakeholders from the research community, industry and public administration (see, e.g., Grupp and Linstone, 1999; Héraud and Cuhls, 1999; Martin, 1995; Martin and Johnston, 1999). The need for extensive participation is partly driven by complex feedback loops in innovation processes, wherefore an adequate understanding of the socio-economic impacts of research can be reached only by bringing several stakeholders into a constructive dialogue (Andersson, 1998).

Taken together, the above observations suggest that there is a demand for a hybrid instrument which couples elements of ex post evaluation with a systematic appraisal of future developments, in order to support S&T-oriented decision-making and the shaping of new policy instruments (see also Geurts and Joldersma, 2001; Smits, Leyten and den Hertog, 1995). Specifically, prospective evaluation is here defined as a systematic process which (1) broadens the scope of traditional evaluation activities (cf. above) by considering relevant scientific, technological and societal developments and (2) interprets the results of such an analysis in terms of implications for S&T decision-making.

In this paper, we report a prospective evaluation of Wood Wisdom, a national research program for the Finnish forestry and forest industries. Funded in 1998-2001, Wood Wisdom was unusually large in terms of its funding volume and scope of research topics. These and other program characteristics, together with the stated evaluation objectives and encouraging experiences from participatory approaches in related contexts (Salo and Gustafsson, 2003), suggested that a prospective evaluation based on workshops would be helpful. We therefore designed and implemented a participatory process which engaged the stakeholders in a structured discussion concerning past research accomplishments and future challenges. This process was novel in that it benefited from the deployment of decision aiding tools based on multi-criteria analysis (see, e.g., Beroggi, 1998; Hämäläinen, 1991). Thus, our contribution to the OR literature lies in (1) demonstrating the usefulness of selected OR tools in a challenging setting and (2) discussing the potential and limitations of these tools in related contexts, based on the lessons suggested by this case study.

This paper is organized as follows.
Section 2 gives an overview of Wood Wisdom, outlines the evaluation objectives, and infers methodological implications from them. Section 3 describes the

evaluation process and reports feedback from the workshop participants. Section 4 concludes by discussing the preconditions, advantages, and limitations of similar kinds of participatory evaluation processes.

2. Wood Wisdom cluster program

2.1. Program characteristics

In 1996, after the severe recession of the early 1990s, the Finnish Government took the decision to allocate some €500 million from the sales of state property to research and development (Prihti et al., 2000). Further to this decision, seven cluster programs were created as a new S&T policy instrument which was partly inspired by Porter's (1990) ideas. In all cluster programs, the rationale was to bring universities, research institutes, firms, and funding agencies into closer collaboration, whilst enabling concerted R&D efforts in support of long-term S&T-based competitiveness.

The largest of the cluster programs was Wood Wisdom (1998-2001), which covered practically all areas of research relevant to Finnish forestry and forest industries. Specifically, Wood Wisdom spanned the whole value chain from the production of raw materials to markets, in order to foster interdisciplinary collaboration among research groups and to enhance the market orientation of R&D efforts. Towards this end, Wood Wisdom contained a wide variety of basic and applied research projects, as well as product development activities, and even market studies.

The total funding of Wood Wisdom was about €33 million. The largest share of this (44%) was supplied by the National Technology Agency, which provided funds to applied technological research projects with a view towards the commercial exploitation of new S&T-based knowledge. Industrial companies, too, made a significant contribution (33%), while the Academy of Finland (15%) funded basic research projects. The Ministry of Agriculture and Forestry (7%) and the Ministry of Trade and Industry (2%) also provided funds. This collaboration among several funding organizations was one of the groundbreaking features of Wood Wisdom. In hindsight, it can be interpreted as an attempt to mitigate the risk of systemic failures caused by possible coordination problems (see, e.g., Salmenkaita and Salo, 2002).

For the purposes of administrative management, the 156 projects in Wood Wisdom were organized into four major research areas, which were further divided into 21 thematic groups and 34 research consortia. On average, the consortia contained about four projects with complementary objectives. The consortia were supervised by advisory boards which usually consisted of about ten S&T experts, industrial R&D managers, and representatives of funding organizations. The role of these advisory boards was to provide guidance to the projects and to support the later uptake of results.

The program as a whole had a steering group of twelve members, most of whom were leading industrialists or representatives of funding organizations. This steering group provided strategic guidance to the program and helped improve coordination among the institutions involved. A program director was appointed, with the remit of proposing and implementing a range of measures (e.g. seminars) in support of improved coordination and effective use of resources.

Preparations for the evaluation of Wood Wisdom were started by the funding organizations almost a year before the program drew to a close in late 2001. The main clients of the evaluation included (1) the funding organizations, who were intent on receiving information about how well the program had met its objectives and what its likely long-term impacts would be; (2) the program management, who were interested in learning, among other things, how appropriate the management structures and procedures of Wood Wisdom had been; and (3) the participating researchers, who were keen on obtaining constructive feedback on their work. The timing of the evaluation was planned so that the results would be ready by the end of 2001, allowing them to be presented at the final conference in February 2002. Our earlier work in the evaluation of large programs (Salo and Salmenkaita, 2002), coupled with promising results from methodologically oriented pilot projects (Salo and Gustafsson, 2003), were among the reasons why we were given the opportunity to work on this challenging evaluation project.

2.2. Evaluation objectives

The evaluation objectives were established in a series of meetings where the background, objectives, and implementation logic of Wood Wisdom were outlined by the program director and the funding organizations, most notably the National Technology Agency. Specifically, the unique role of Wood Wisdom in the development of the industrial cluster suggested that the following considerations would be important:

1. Assessment of socio-economic impacts. A key rationale in Wood Wisdom was that, to a considerable extent, research efforts should be motivated by their long-term socio-economic impacts. Thus, the evaluation was expected to give a better understanding of these impacts and to suggest actions through which they might best be realized.

2. Appraisal of networking benefits. During the early phases of Wood Wisdom, it was expected that considerable networking benefits could be reaped by transcending prevailing boundaries between the users and producers of knowledge. The appraisal of changes in networking structures and collaborative activities was therefore an important concern.

3. Guidance for the preparation of later programs. Because the cluster programs were first established during 1997-2001, the funding organizations were interested in knowing (1) to what extent these programs had fulfilled the expectations that had been placed on them and, moreover, (2) what focal research topics should be emphasized in further research programs.

2.3. Methodological implications

In the realization that the socio-economic impacts of Wood Wisdom would accrue from the exploitation of results from individual projects, it was felt that the evaluation should examine how successful the projects had been in producing knowledge that would be of interest to the targeted users of such knowledge. The management structure of Wood Wisdom, in which the advisory boards played an important role, suggested that the

evaluation should build on the expertise that had accumulated within the different consortia and research projects.

The objectives of Wood Wisdom and its wide scope, coupled with the need to complete the evaluation within the limited time frame and available resources, had several methodological implications:

1. Adoption of a participatory approach. Due to the wide range of socio-economic impacts, it was imperative to consult several stakeholders, in order to establish a broad enough knowledge pool for the elicitation of informed judgements on realized and anticipated impacts. Here, a participatory approach based on a series of workshops seemed pertinent, not least because it made it possible to assess networking benefits by consulting the very stakeholders who had taken part in these networks.

2. Simple, comprehensive and adaptable evaluation framework. Because Wood Wisdom was exceptionally large in terms of different research topics, there was a need for a comprehensive evaluation framework which would be applicable to different kinds of research topics and projects; yet, this framework had to be specific enough so that it could be meaningfully interpreted in the context of different projects.

3. Process repeatability. The requirement for an equitable treatment of consortia and projects meant that the evaluation framework had to be applied consistently in the same manner. This feature, repeatability, also supported comparisons between different research areas and entailed cost benefits, because supporting documentation (such as instructions) could be employed over and over again.

4. Balance between formal evaluation statements and informal comments. Due to the large scope of the program, the evaluation framework had to remain rather general (i.e., it could not be based on specific S&T topics, as there would have been too many of these; cf. Hartwich and Janssen, 2000). Hence, many substantive issues had to be addressed through informal discussions, which also helped validate formal evaluation statements.

5. Solicitation of anonymous inputs. To reduce the risk of undesirable characteristics of conventional face-to-face meetings (see, e.g., Janis, 1982; Mennecke and Valacich, 1998), it was felt that the participants should be given the opportunity to provide anonymous feedback as well. Here, a group support system (GSS) was seen to hold potential in terms of (1) expediting the process of obtaining evaluation inputs, (2) ensuring that all participants would have an equal chance of providing inputs and (3) extending the range of qualitative and quantitative information conveyed by these inputs (see, e.g., Bongers, Geurts and Smits, 2000; Mennecke and Valacich, 1998; Salo and Gustafsson, 2003; Zigurs and Buckland, 1998).

A further consideration was that the evaluation was carried out at a time when preparations for the sequel program were being started. The evaluation was consequently expected to catalyze informed discussions on what specific research topics should be pursued in the sequel program and to support the characterization of such topics. A participatory approach seemed suitable for such purposes, because it would have been rather difficult to promote a productive discussion through questionnaires or conventional semi-structured interviews, for example.

3. Evaluation of Wood Wisdom: framework and process

The design of the evaluation framework and ensuing process was driven by the selection of the two units of analysis that were subjected to a formal evaluation, i.e., (1) research consortia and (2) research projects. A major reason for this was that the consortia-specific advisory boards had the authority, S&T expertise, and competence to make statements on these two units of analysis, partly because they had worked with the consortia and the projects throughout the duration of Wood Wisdom. Moreover, both projects and consortia had clearly identifiable responsible persons (i.e., project managers and consortium coordinators) who could be invited to give introductory presentations at the workshops.

In effect, an attempt to evaluate the program with regard to the stated objectives by relying on other sources of expertise would probably have been difficult, because (1) Wood Wisdom was so large that practically all leading experts in Finland were involved in it, implying that such expertise was not readily available; (2) it would have been difficult to motivate such external experts to invest enough time and effort in the evaluation task, in order to reach the same level of knowledge that was already possessed by the advisory boards; and (3) the emphasis on knowledge production in view of later exploitation suggested that many evaluation tasks were best undertaken by those who would eventually play a key role in such exploitation (e.g., the industrial R&D managers represented on the advisory boards).

In parallel with the process described here, the Academy of Finland invited an international panel of scientific experts to review the scientific quality of the projects to which it had provided funds. Also, before the participatory workshops, we administered a survey in which the project participants were asked, among other topics, about the main objectives, accomplishments, and anticipated impacts of their projects. However, because the peer review and the survey were not particularly novel from a methodological viewpoint, we focus on the participatory workshops only.

3.1. Evaluation framework

While Wood Wisdom was motivated by the desire to produce positive socio-economic impacts in the long term, the projects had their own, more specific objectives of knowledge production. As these objectives had guided much of the work in the program, it seemed pertinent to evaluate the projects with regard to them rather than with regard to the more general socio-economic impacts, which were too uncertain and intangible to offer a reliable basis for evaluation. Moreover, responsibilities for the attainment of project-specific objectives could be justly ascribed to key persons (e.g., project manager and research staff), while the eventual future realization of socio-economic impacts would depend on many other factors (e.g., developments in the regulatory environment), few of which would be under the control of the project or the program. In this sense, the consideration of project-specific objectives ensured that sufficient attention would be given to the new knowledge that had been produced by the projects. Combined with the consideration of broader scientific, technological, industrial and societal developments through informal discussions, however, this approach provided insights into the long-term socio-economic impacts of the program as well.
The large scope of Wood Wisdom was one of the reasons why no attempt was made to develop new domain models describing all the relevant research areas. By necessity, any comprehensive

model would have contained dozens of research areas with succinct and mutually agreed definitions, the development of which would have been a major effort in itself. Alternatively, the adoption of a higher level of aggregation would have meant that, from the viewpoint of any one consortium, the model would have contained only one or two relevant research areas, wherefore the usefulness of the resulting model as an analytical tool in the workshops would have been severely compromised. Thus, further to discussions with the program director, it was decided to build on the existing program structure (e.g., the assignment of projects to different consortia) in the evaluation.

Motivated by the above considerations, the evaluation framework was structured around a multi-criteria decision model (see Fig. 1; Salo, Gustafsson and Ramanathan, 2003), organized in the form of a value tree, which consisted of attributes that were relevant to all projects regardless of their specific S&T content. In particular, a distinction was made between objectives that pertained to (1) the strengthening of different kinds of research through the additional resources from Wood Wisdom and those that dealt with (2) the development of various forms of collaborative networking, either through the creation of new networks or the strengthening of earlier ones. This distinction was motivated by the desire to highlight improved collaboration as an important objective in itself and to enable comparisons across the above two key dimensions of impacts.

[Fig. 1. The evaluation framework: a value tree for long-term industrial competitiveness with two top-level attributes, (1) strengthening of research resourcing, subdivided into S&T research (basic research, applied research, product development) and other research areas (economic, environmental and social research), and (2) development of research collaboration, subdivided into collaboration among domestic research organizations and collaboration between domestic industry and research organizations (each with extended collaboration within existing networks and creation of new networks as lowest-level attributes), plus international collaboration in general.]

Despite its simplicity, the value tree made it possible to ask, for instance: (1) to what extent the research projects had achieved their objectives, with regard to the different dimensions of knowledge production and research collaboration; (2) what measures (i.e., resources or networking) had contributed most to the attainment of these objectives; and (3) what kinds of research activities and collaborative networking should be emphasized in the future, as characterized by the attributes of the value tree.
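
To make the structure of Fig. 1 concrete, the value tree can be written down as a small nested data structure. The Python sketch below is our illustration only (the paper does not describe the workshop software at this level of detail): it encodes the attribute hierarchy and enumerates the lowest-level attributes against which projects were rated.

```python
# Illustrative sketch of the Fig. 1 value tree (not the original tooling).
# Inner dicts are branching attributes; lists hold the lowest-level attributes.
VALUE_TREE = {
    "Strengthening of research resourcing": {
        "S&T research": [
            "Basic research",
            "Applied research",
            "Product development",
        ],
        "Other research areas": [
            "Economic research",
            "Environmental research",
            "Social research",
        ],
    },
    "Development of research collaboration": {
        "Among domestic research organizations": [
            "Extended collaboration within existing networks",
            "Creation of new networks",
        ],
        "Between domestic industry and research organizations": [
            "Extended collaboration within existing networks",
            "Creation of new networks",
        ],
        "International collaboration": [
            "International collaboration in general",
        ],
    },
}


def leaf_attributes(tree, path=()):
    """Yield (path, attribute) pairs for every lowest-level attribute."""
    for name, children in tree.items():
        if isinstance(children, dict):
            yield from leaf_attributes(children, path + (name,))
        else:
            for attribute in children:
                yield path + (name,), attribute


if __name__ == "__main__":
    for path, attribute in leaf_attributes(VALUE_TREE):
        print(" / ".join(path), "->", attribute)
```

A representation of this kind makes it straightforward to apply the same framework in the same manner across all workshops, which is one way to meet the repeatability requirement noted in Section 2.3.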

Because it was rather general, however, the evaluation framework had to be specifically interpreted in the context of each consortium, to ensure that it would make sense to the advisory boards. Towards this end, questions such as "What do we mean by applied research in the context of this research area?" were addressed at the outset of the workshops. Also, in some cases, some attributes were entirely disregarded: for example, in the evaluation of projects which were concerned with econometric modeling, the first three attributes for S&T research (which subsumed natural sciences only) were not considered.

At the project level, the project managers and the members of the advisory boards were asked to evaluate (using a 5-point Likert scale) how well the project had attained the objectives that corresponded to the lowest-level attributes of the value tree. In addition, each project manager was asked to specify the most important objectives in their project. Towards this end, he or she was requested to assign weights to the attributes, first by dividing 100 points among the two highest-level attributes, then proceeding to the next lower level in the same way, until each attribute received a number of points which reflected its perceived importance. This approach produced subjective information about how important the benefits of collaborative networking had been, as opposed to the supply of additional resources for research efforts.

Within each consortium, the same weighting approach was employed in order to derive qualitative profiles of future research needs. Here, the workshop participants were requested to consider the relative importance of the attributes by dividing 100 points first among the highest-level attributes and then among the ones at the next lower level, until all the lowest-level attributes had received a weight. These points were solicited using a GSS consisting of ten laptop computers which were connected through a wireless local area network (see also Salo and Gustafsson, 2003). The results were synthesized on the spot and presented to the participants for purposes of immediate interpretation and validation.

[Fig. 2. Profile from the consortium on modified wood.]

For instance, the profile in Fig. 2, which is taken from the workshop on modified wood, suggests that there is a need to retain the focus on basic research (as opposed to applied research

and product development) while improved networking is also called for, especially at an international level. In this way, the weighting procedure yielded indicative profiles as to what kinds of research efforts and networking activities should be stressed. The resulting profiles were complemented by asking the workshop participants to submit through the GSS verbal descriptions of the specific research topics that should be pursued. In the workshop on modified wood, these topics included research into the environmental emissions from the thermal modification process, the treatment's effects on the long-term strength properties of wood, and the suitability of various surface treatments for thermally modified wood, among others. Taken together, the combined inputs from this procedure (i.e., specification of research profiles and identification of corresponding specific topics) provided a rich set of statements which was validated through a facilitated discussion among the participants.

In effect, while most multi-criteria applications are concerned with choice (comparative analysis of mutually exclusive options of which one is to be selected), the rationale for the multi-criteria framework for the analysis of research consortia was driven by an attempt to profile the kinds of activities that should be pursued in the future. The usefulness of the value tree framework was largely based on its ability to produce an aggregate representation of the participants' viewpoints and, by doing so, to provide constructive inputs to the discussion. Such a use of a decision model has analogies with decision analysis interviews (see, e.g., Marttunen and Hämäläinen, 1995) which, too, are mainly concerned with enhanced communication, rather than the selection of some alternative to the exclusion of others.

In addition to the value tree outlined in Fig. 1, evaluative statements were solicited from the participants on (1) how effective the advisory boards had been in guiding the projects and (2) what kinds of impacts the projects would probably lead to by the years 2005 and 2010, respectively. This latter question was structured around four dimensions (i.e., strengthening of competencies, economic impacts, environmental impacts, and societal impacts). In addition, the participants were allowed to specify other impacts as well. Although the assessment of these likely impacts yielded approximate profiles, most workshop participants found considerable difficulty with such an assessment, not least due to the high uncertainties that are inextricably linked to the emergence of future innovations and the difficulties of distinguishing between different kinds of impacts. This suggests that, instead of attempting to analyze likely socio-economic impacts in the aggregate, it is more meaningful to evaluate projects with regard to their specific objectives and to address ensuing long-term impacts through a process where an attempt is made to separate S&T achievements from the influences of the many intervening variables. Also, one may subject selected projects to closer scrutiny than is possible in a workshop setting, or seek insights from ex post analyses of comparable projects that finished several years before (see, e.g., Perrin, 2002).
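
To illustrate the hierarchical weighting procedure described above: since each participant splits 100 points at every branching of the tree, the global weight of a lowest-level attribute is the product of its normalized point shares along the path from the root. The sketch below is a hypothetical illustration with invented numbers over a pruned two-level tree, not data or code from the workshops.

```python
# Illustrative sketch: hierarchical 100-point weighting over a value tree.
# Each node is (points, children); sibling points should sum to 100, but the
# code normalizes by the actual sum to be robust to slight deviations.

def global_weights(nodes, parent_weight=1.0, path=()):
    """Return {attribute_path: global_weight} for lowest-level attributes."""
    weights = {}
    total = sum(points for points, _ in nodes.values())
    for name, (points, children) in nodes.items():
        weight = parent_weight * points / total
        if children:
            weights.update(global_weights(children, weight, path + (name,)))
        else:
            weights[path + (name,)] = weight
    return weights


# One participant's (invented) allocation over a pruned two-level tree.
allocation = {
    "Strengthening of research resourcing": (60, {
        "Basic research": (50, {}),
        "Applied research": (30, {}),
        "Product development": (20, {}),
    }),
    "Development of research collaboration": (40, {
        "Domestic networking": (30, {}),
        "International collaboration": (70, {}),
    }),
}

if __name__ == "__main__":
    for attribute, weight in sorted(global_weights(allocation).items(),
                                    key=lambda item: -item[1]):
        # e.g. Basic research: 0.60 * 0.50 = 0.30 of the total weight
        print(" / ".join(attribute), f"{weight:.2f}")
```

In the workshops, the individual profiles elicited in this way were synthesized on the spot, for instance by aggregating them across participants, and displayed for interpretation and validation.
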
3.2. Process implementation

Building on experiences from earlier workshops (Salo and Gustafsson, 2003), the evaluation process was implemented through a series of fifteen workshops. The themes for these workshops were selected by the funding organizations, based on their prior assessment of which consortia and projects would probably benefit most from such an evaluation. The number of projects

addressed at the workshops ranged from three to five and, with some exceptions, they came from the same research consortium. Several consortia and projects, selected at the discretion of the funding organizations, were excluded from the evaluation due to limited resources.

The workshops were usually attended by eight or so participants. The consortium coordinator and the project managers presented the consortium and the projects, respectively. The members of the advisory board acted as the evaluation panel, while the program director participated in the discussion and raised issues that might otherwise not have been dealt with. On the organizing side, the process facilitator gave a brief introduction to the workshop and facilitated the process, but abstained from making evaluative statements (see, e.g., Schein, 1987). The technical facilitator assisted participants in the use of the GSS, recorded the workshop and took the minutes. A rapporteur posed questions every now and then and took notes in view of developing the final evaluation report.

The workshop agenda consisted of six phases (see Table 1). Before the evaluation phase, short presentations (10-15 minutes) were made on the projects and consortia, followed by short discussions, in order to ensure that the workshop participants would have enough information on which to base their judgements. Evaluative statements on research projects were then solicited from the members of the advisory boards with the help of a GSS (i.e., ratings on a 1-5 scale with regard to the attributes in Fig. 1), whereafter these statements were aggregated and displayed to the entire group. Towards the end of the workshops, all participants were invited to submit profiles of future research needs (i.e., by assigning weights to the attributes), to identify promising research topics and to supply any relevant arguments.

Table 1. The workshop agenda

Phase 1. Opening (presentation: facilitator). Motivation and introduction of the agenda; presentation of facilitators and participants; elaboration of the evaluation frameworks.
Phase 2. Presentation of survey results. (a) Presentation: facilitator; (b) discussion: all.
Phase 3. Presentation of consortium (presentation: consortium coordinator). Summary of completed and planned activities.
Phase 4. Project appraisals, repeated for each project. Attainment of objectives; assessment of socio-economic impacts. (a) Presentation: project managers; (b) discussion: all; (c) assessment: advisory board.
Phase 5. Validation of project appraisals. Elaboration and validation of evaluation statements. (a) Display: facilitator; (b) discussion: all.
Phase 6. Consideration of research needs within consortia. Specification of research profiles; identification of focal research topics. (a) Assessment: project managers and advisory board; (b) display: facilitator; (c) discussion: all.
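
As a rough illustration of phases 4 and 5, the sketch below aggregates anonymous 1-5 ratings per lowest-level attribute and flags attributes on which the panel disagrees markedly, mimicking the display-then-discuss step. The ratings, the disagreement threshold, and the layout are all invented for illustration; the paper does not specify how the GSS synthesized the inputs.

```python
from statistics import mean, stdev

# Invented ratings: anonymous 1-5 scores from five advisory board members
# for one project, keyed by lowest-level attribute of the value tree.
ratings = {
    "Basic research": [4, 4, 5, 4, 3],
    "Applied research": [3, 2, 4, 3, 3],
    "Creation of new networks": [5, 4, 5, 2, 4],
}

DISAGREEMENT_THRESHOLD = 1.0  # arbitrary cut-off for prompting discussion

for attribute, scores in ratings.items():
    avg, spread = mean(scores), stdev(scores)
    note = "  <- discuss" if spread >= DISAGREEMENT_THRESHOLD else ""
    print(f"{attribute:26s} mean {avg:.2f}  sd {spread:.2f}{note}")
```

Displaying a measure of dispersion alongside the mean gives the facilitator a natural entry point for the validation discussion in phase 5.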

The structure of the above agenda was motivated by the following considerations. First, the application of any formal evaluation tool was to be preceded by presentations and associated discussions. Second, formal evaluative statements (based on the value tree framework) were elicited anonymously and simultaneously, whereby the participants were explicitly encouraged to motivate their statements with written informal comments. Third, all evaluation results were systematically validated by synthesizing and presenting them to the participants for subsequent discussion and elaboration. For instance, while the application of the evaluation framework was helpful in profiling future research needs, the specification of particular research topics was crucial in clarifying what the participants really meant by these profiles.

From the viewpoint of workshop dynamics, the interplay between presentations, use of the evaluation framework, and ensuing discussions engaged the participants in different kinds of activities and, moreover, ensured that each participant had a particular role at any given stage of the workshop. It seems that this variability made the workshops more interesting than would have been the case if the workshops had consisted of conventional presentations and discussions only. Also, the agreed agenda was helpful in that the facilitator could appeal to it, thus ensuring that the discussion would remain focused on relevant topics.

3.3. Workshop feedback

After the workshops, the participants were asked to fill in a questionnaire which contained statements about the results ("The workshop results were useful and well-founded"), the impacts of the GSS ("The workshops benefited from the GSS use"), and the potential of similar workshops in other research programs ("Similar workshops should be organized in future programs"), among others. The 84 responses which were obtained confirmed that the participants were quite satisfied. For example, on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree), the averages for the above three statements were 4.00, 4.20, and 3.91, respectively, while the percentage of respondents who agreed with them exceeded 80% (see also Salo, Gustafsson and Ramanathan, 2003). Thus, the feedback not only confirmed the usefulness of the GSS but was positive in other regards as well.

Furthermore, immediately after each workshop a semi-structured interview was carried out with a voluntary workshop participant. The results of these interviews were in line with the survey results and confirmed that the participants were satisfied with the workshops and the evaluation process at large. However, the interviews also revealed some sources of dissatisfaction, most notably the occasional absence of key persons in some workshops. This suggests that the presence of the right group of participants (in terms of competencies and stakeholder groups) is perhaps the single most important determinant of workshop success.
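
For concreteness, summary figures of the kind quoted above (means and shares of respondents who agreed, i.e., gave a rating of 4 or 5) can be reproduced from raw questionnaire data as in the following sketch; the responses below are invented, since the raw data are not published.

```python
from statistics import mean

# Invented Likert responses (1 = strongly disagree, 5 = strongly agree).
responses = {
    "The workshop results were useful and well-founded": [4, 5, 4, 3, 4, 5, 4, 4],
    "The workshops benefited from the GSS use": [5, 4, 4, 5, 4, 4, 5, 3],
}

for statement, scores in responses.items():
    share_agreed = sum(1 for s in scores if s >= 4) / len(scores)
    print(f"{statement}: mean {mean(scores):.2f}, agreed {share_agreed:.0%}")
```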

4. Discussion

The experiences from the evaluation of Wood Wisdom suggest that participatory processes for prospective evaluation hold promise even in other settings where strategic policy intelligence is required (see, e.g., Kuhlmann et al., 1999; Salo, 2001b). These include, for example, planning processes, evaluation activities, and foresight exercises, which typically benefit from (1) enhanced communication and collaboration among the stakeholders and (2) the generation of methodologically supported evaluative statements. Due to the unique organizational setting in which the Wood Wisdom workshops took place, however, the experiences reported here do not necessarily apply elsewhere. It is therefore pertinent to discuss contextual matters which cast further light on this case study, on the one hand, and the preconditions and limitations of related processes, on the other.

4.1. Gaining access to expertise

The prospective evaluation relied heavily on inputs from the advisory boards who were, thanks to their involvement with Wood Wisdom over its entire duration, well aware of the work that had been carried out by the consortia and research projects. This was one of the reasons why the advisory boards were motivated and able to provide informed evaluative statements. This would probably not have been the case had the board members been appointed just before the participatory workshops, without ensuring sufficient prior exposure to the projects.

To most participants, the process of appraising projects and emerging research needs in the light of a formal evaluation framework was new as a concept. Also, even though preparatory material was sent to the participants ahead of time, they developed a clear understanding of the participatory process only at the workshops, partly because some tools (e.g. the GSS) could not be fully explained in any preparatory material. Because a clear process description helps dispel possible prior apprehensions, it seems that every effort should be made to motivate and explain the process before the workshops, in order to ensure high participation rates. In this regard, the situation would be very different with groups which convene regularly.

Although vital in the assessment of industrial relevance and socio-economic impacts, reliance on advisory boards or other similar evaluation bodies presumes that the evaluators possess the required expertise and are willing to articulate their statements; these assumptions, however, cannot be taken for granted (see, e.g., Loveridge, 2001). Also, because industrial involvement in the management of research programs is often based on voluntary participation, the mere appointment of an external evaluation body may not lead to effective engagement unless adequate incentives are in place (such as learning, influence or compensation, for instance; see Salo, 2001a).

Related to the preceding observation, a potential pitfall in participatory evaluation is that if the members of the evaluation body are too closely involved with the projects, they may see themselves as partly responsible for the projects and may therefore supply optimistically biased statements. There is, in effect, an inherent tension in just how close evaluators can be to the object that is being evaluated: if they are too close, the risks of inadvertent collusion are higher; but if the evaluators act at a greater distance, they may lack much of the contextual knowledge that is needed to understand the project in its proper environment. This suggests that evaluation bodies should exhibit a balanced representation so as to counter possible problems due to this kind of proximity. The Wood Wisdom workshops benefited from the presence of the program director, who had substantive expertise in the research topics but who did not directly represent any of the funding organizations.
Thanks to her expertise, she could pose questions on specific details, thus complementing the role of the process facilitator who managed the overall workshop process.

Indeed, a senior R&D manager from industry noted that workshops of this kind would profit from the presence of a devil's advocate who would deliberately challenge commonly held truths, in order to break consensual thinking and overly polite remarks. Interestingly enough, this observation is aligned with the suggestion that two facilitators with dialectical roles can be helpful in the development of system dynamics models for controversial problem domains.

The prospective evaluation was based on the notion of a research area as the central unit of analysis (i.e., the research consortium). While successful in most regards, this approach had its limitations, too: for example, it did not support the systematic exploration of interfaces between different consortia, or the full explication of research topics that were relevant to several consortia. Also, the otherwise successful elaboration of new research themes might be criticized on the grounds that it was carried out by those who had taken part in the existing research program. Seen from a broader view, then, there seems to be a need for workshops which are purposely designed so as not to reflect prevailing structures of program management.

4.2. Linking different levels of decision-making

Experiences from the Wood Wisdom workshops can be better understood by realizing that the innovation system can be seen as a network of stakeholders who interact at several levels (see Table 2). By analogy to Martin (1995), the most relevant for the present discussion are (1) the macro-level, which is concerned with the development of general S&T policies and their implementation; (2) the meso-level, which aims to promote innovation within communities of practice, consisting of researchers and firms with interrelated and complementary interests in the context of specific research areas; and (3) the micro-level, where S&T knowledge is generated in the context of individual projects.

Table 2. Linking levels of an innovation system through summative and formative evaluation

Macro-level (in Wood Wisdom: the industrial cluster). Focus of activities: shaping of policies to enhance the performance of the innovation system. Summative evaluation: information on the socio-economic impacts of S&T policies. Formative evaluation: interpretation of policy objectives within specific research areas.

Meso-level (in Wood Wisdom: the research consortium). Focus of activities: strengthening of communities of practice and their ability to innovate. Summative evaluation: delivery of aggregate data on performance measures. Formative evaluation: setting of research priorities (e.g., formation of consortia).

Micro-level (in Wood Wisdom: the research project). Focus of activities: creation of new S&T knowledge and innovations. Summative evaluation: attainment vis-à-vis agreed objectives; information on S&T outputs (e.g., theses, patents). Formative evaluation: constructive feedback to research projects; preparation of collaborative networks and research activities.

Through their interactions, the stakeholders at these levels generate and assimilate information on how the innovation system is performing and what further measures are called for to foster innovation. This information, which may be called strategic policy intelligence (see Kuhlmann et al., 1999; Smits, 2001), is crucial for informed decision-making. Without sufficient information about the competencies of different research groups, for instance, those R&D managers and program managers who work at the meso-level would be ill-equipped in their attempts to initiate innovative projects.

Within the framework of Table 2, one of the functions that can be ascribed to prospective evaluation is that of transcending some of the gaps that may exist between these levels. For instance, decision-makers at the meso- and macro-levels are crucially dependent on information about how the innovation system is performing. Often, such information is provided through summative evaluations where numerical indicators are brought to the fore (see, e.g., Roessner, 1985). Although useful, such indicators are deficient in that they do not readily transmit relevant contextual information (e.g., gradual changes in attitudes or aptitudes). Also, the task of generating indicators may be perceived as a burden by those from whom they are collected, which in turn may foster evaluation fatigue and undermine the quality of the information that is generated.

Conversely, formative evaluations seek to give an impetus to the shaping of innovative activities, reflecting anticipated changes in the socio-economic environment in which innovations are created (see, e.g., OECD, 1997). By construction, formative evaluations have a strong future-oriented component and send specific signals to the researchers whose activities they tend to guide. Towards this end, however, formative evaluations must be rich in the sense that they reflect the diversity of organizational contexts in which innovations take shape (e.g., possible tensions between small and large firms). This suggests that quantitative methods may not suffice unless they are accompanied by a rich set of qualitative information.

More generally, it seems that participatory approaches, which combine elements of both summative and formative evaluations, can bridge gaps between the micro-, meso-, and macro-levels. From the viewpoint of summative evaluation, the presence of information asymmetries and the complexity of innovation processes suggest that no stakeholder group has all the requisite knowledge; thus, evaluation results have to be generated by consulting several stakeholders. Formative objectives, on the other hand, can be attained only if the process responds to the needs suggested by the stakeholders' emerging agendas; this calls for a consideration of future challenges, whereby participatory approaches can help in the preparation of shared action plans. Engaging stakeholders in a participatory process is also useful in that it (1) contributes to the timely dissemination of information and (2) makes it possible to generate information on a wider range of topics than would be possible by relying on narrowly defined numerical indicators only.

Overall, the Wood Wisdom workshops suggest that quantitative and qualitative methods fulfill complementary roles with somewhat different purposes: while quantitative methods are useful in the development of summative indicators, informal discussions and other qualitative approaches are needed to interpret them and lend more depth to the analysis of what such indicators signify

in specific contexts. In Wood Wisdom, for example, the value tree framework was helpful in profiling what kinds of research efforts and networking activities would be called for in future research; yet verbal descriptions, informal comments, and ensuing discussions were crucial in elaborating the specific research topics to match these profiles.

5. Conclusions

In this paper, we have reported a case study which suggests that the concept of prospective evaluation, implemented through a series of computer-aided participatory workshops, for example, can supply (1) summative information to policy makers and (2) formative statements which offer detailed guidance to the activities of innovating research groups. In addition, we have outlined a framework where summative and formative evaluations are linked to three levels of the innovation system. It is our belief that the explicit positioning of prospective evaluation within this framework can help in achieving an adequate balance between (1) formative and summative components, (2) the attendant use of quantitative and qualitative methods, and (3) the identification of participants whose expertise and informed judgements are required to support the process.

References

Andersson, T., 1998. Managing a Systems Approach to Technology and Innovation Policy. STI Review, 22, 9-29.
Beroggi, G.E.G., 1998. Decision Modeling in Policy Management: An Introduction to the Analytic Concepts. Kluwer Academic Publishers, Amsterdam.
Bongers, F.J., Geurts, J.L.A., Smits, R.E.H.M., 2000. Technology and Society: GSS-Supported Participatory Policy Analysis. International Journal of Technology Management, 19, 3-5, 269-287.
Caracostas, P., Muldur, U., 1998. Society, the Endless Frontier: A European Vision of Research and Innovation Policies for the 21st Century. European Commission, DG XII, Brussels.
Coates, J.F., 1985. Foresight in Federal Government Policymaking. Futures Research Quarterly, 2, 29-53.
Geurts, J.L.A., Joldersma, C., 2001. Methodology for Participatory Policy Analysis. European Journal of Operational Research, 128, 300-310.
Grupp, H., Linstone, H.A., 1999. National Technology Foresight Activities Around the Globe. Technological Forecasting and Social Change, 60, 85-94.
Hämäläinen, R.P., 1991. Facts or Values: How Do Parliamentarians and Experts See Nuclear Power? Energy Policy, 19, 5, 464-472.
Hartwich, F., Janssen, W., 2000. Setting Research Priorities: An Example from Agriculture Using the Analytic Hierarchy Process. Research Evaluation, 9, 3, 201-210.
Héraud, J.-A., Cuhls, K., 1999. Current Foresight Activities in France, Spain, and Italy. Technological Forecasting and Social Change, 60, 55-70.
Janis, I., 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes, 2nd edition. Houghton Mifflin, Boston.
Kuhlmann, S., Boekholt, P., Georghiou, L., Guy, K., Héraud, J., Larédo, P., Lemola, T., Loveridge, D., Luukkonen, T., Polt, W., Rip, A., Sanz-Menendez, L., Smits, R., 1999. Enhancing Distributed Intelligence in Complex Innovation Systems. Final Report of the Advanced Science and Technology Policy Planning Network. ISI-FhG, Karlsruhe.
Loveridge, D., 2001. Foresight: Seven Paradoxes. International Journal of Technology Management, 21, 7-8, 781-792.
Luukkonen, T., 1998. The Difficulties in Assessing the Impact of EU Framework Programs. Research Policy, 27, 6, 599-610.
MacLean, M., Anderson, J., Martin, B.R., 1998. Identifying Research Priorities in Public Sector Funding Agencies: Mapping Science Outputs to User Needs. Technology Analysis and Strategic Management, 10, 2, 139-155.
Martin, B.R., 1995. Foresight in Science and Technology. Technology Analysis and Strategic Management, 7, 2, 139-168.
Martin, B.R., Johnston, R., 1999. Technology Foresight for Wiring Up the National Innovation System. Technological Forecasting and Social Change, 60, 37-54.
Marttunen, M., Hämäläinen, R.P., 1995. Decision Analysis Interviews in Environmental Impact Assessment. European Journal of Operational Research, 87, 3, 551-563.
Mennecke, B.E., Valacich, J.S., 1998. Information Is What You Make It: The Influence of Group History and Computer Support System on Information Sharing, Decision Quality, and Member Perceptions. Journal of Management Information Systems, 15, 2, 173-197.
Meyer-Krahmer, F., Reiss, T., 1992. Ex Ante Evaluation and Technology Assessment: Two Emerging Elements of Technology Policy. Research Evaluation, 2, 47-54.
OECD, 1997. Policy Evaluation in Innovation and Technology: Towards Best Practices. OECD, Paris.
OECD, 1999. Boosting Innovation: The Cluster Approach. OECD, Paris.
OECD, 2001. Innovative Clusters: Drivers of National Innovation Systems. OECD, Paris.
Papaconstantinou, G., Polt, W., 1997. Policy Evaluation and Technology: An Overview. In: Policy Evaluation in Innovation and Technology: Towards Best Practices. OECD, Paris, pp. 9-14.
Perrin, B., 2002. How to (and How Not to) Evaluate Innovation. Evaluation, 8, 1, 13-28.
Porter, M.E., 1990. The Competitive Advantage of Nations. The Free Press, New York.
Prihti, A., Georghiou, L., Helander, E., Juusela, J., Meyer-Krahmer, F., Roslin, B., Santamäki-Vuori, T., Gröhn, M., 2000. Assessment of Additional Appropriation for Research. Sitra Reports, Sitra, Helsinki.
Roessner, J.D., 1985. The Multiple Functions of Formal Aids to Decision Making in Public Agencies. IEEE Transactions on Engineering Management, 32, 3, 124-128.
Salmenkaita, J.-P., Salo, A., 2002. Rationales for Government Intervention in the Commercialization of New Technologies. Technology Analysis and Strategic Management, 14, 2, 183-200.
Salo, A., 2001a. Incentives in Technology Foresight. International Journal of Technology Management, 21, 7-8, 694-710.
Salo, A., 2001b. Concluding Remarks on Strategic Intelligence. In: Tübke, A., Ducatel, K., Gavigan, J.P., Moncada-Paternò-Castello, P. (eds.), Strategic Policy Intelligence: Current Trends, the State of Play and Perspectives. Institute for Prospective Technological Studies, Joint Research Centre of the European Commission, Report EUR 20137 EN, December 2001, pp. 65-72.
Salo, A., Cuhls, K., 2003. Technology Foresight: Past and Future. Journal of Forecasting, 22, 2, 79-82.
Salo, A., Gustafsson, T., 2003. A Group Support System for Foresight Processes. International Journal of Technology Management (to appear).
Salo, A., Gustafsson, T., Ramanathan, R., 2003. Multicriteria Methods for Technology Foresight. Journal of Forecasting, 22, 2 (forthcoming).
Salo, A., Salmenkaita, J.-P., 2002. Embedded Foresight in RTD Programs. International Journal of Technology, Policy and Management, 2, 2, 167-193.
Schein, E.H., 1987. Process Consultation, Volume II: Lessons for Managers and Consultants. Addison-Wesley, New York.
Smits, R., 2001. Innovation Studies in the 21st Century: Questions from a User's Perspective. Technological Forecasting and Social Change, 69, 1-23.
Smits, R., Leyten, A., den Hertog, P., 1995. Technology Assessment and Technology Policy in Europe: New Concepts, New Goals, New Infrastructures. Policy Sciences, 28, 272-299.
Zigurs, I., Buckland, B.K., 1998. A Theory of Task/Technology Fit and Group Support Systems Effectiveness. MIS Quarterly, 22, 3, 313-334.