Science and Politics

 

Science–Policy Interfaces in Impact Assessment Procedures
Bäcklund, A.-K., Bousset, J.P., Brogaard, S., Macombe, C., Taverne, M., van Ittersum, M.K., 2010. Science–Policy Interfaces in Impact Assessment Procedures. In Brouwer, F., van Ittersum, M.K. (Eds.) Environmental and agricultural modelling: integrated approaches for policy impact assessment. Springer, Dordrecht, pp. 275-294.

The link below offers some thoughts on the ways, and the difficulties, of making a policy impact assessment tool (SEAMLESS-IF) serve as a science-policy interface that provides policy-makers with useful information about the likely impacts of a specific policy option.

www.seamless-ip.org/

Abstract:

Performing impact assessments (IAs) of policy proposals is gradually becoming an important instrument in European policy making. The IA procedure in the European Commission stems from a governance concept which assumes that policy programs should be the product of complex interaction between government and non-government organizations (researchers included), each seeking to influence the collectively binding decisions that have consequences for their interests.

The idea of IA is based on the assumption of ‘co-production of knowledge’. In order to make a system like SEAMLESS-IF applicable in a European decision-making process, interaction with potential users of the system is needed during its development. To make use of potential users’ opinions, different forms of interaction enabling user involvement in the development of the framework were organised. The lessons drawn from discussions with officers in the EU administration participating in the SEAMLESS User Forum, and with representatives of regional administrations setting up assessments in test situations in France, provided many valuable feedbacks.

Testing of the assessment procedures at the regional level revealed that scientists specializing in modelling or systems analysis can play an important role in facilitating the integration of different types of knowledge into the definition of a policy problem. Although the general experience from setting up test cases is promising, critical stages in the assessment procedure that have to be further developed became visible. Three particular steps in the process deserve attention: the framing process, the specification of the resolution space and the specification of the policy to be tested.

The most difficult step for science working within policy making is problem definition. Policy makers may express frustration with narrowly defined scientific problems, while scientists struggle to separate the many strands and pieces of complex policy problems.

An overall conclusion from the interactive encounters between policy makers and scientists is that scientists can have an important role in promoting the process of learning in policy. However, in order to define integrated science-policy problems, both science and policy must work in new ways. The scientists need to be able to recognize, understand, and value different forms of knowledge. Indeed, one reason for placing science within the policy process is for scientists to learn about social values, appreciate other ways of knowing, and recognize the limits of science in policy. A role for the scientist as an ‘institutionalized critic’ occurs naturally in a policy analysis process. It is the critical contribution of science to demonstrate when beliefs are supported by evidence and when conventional wisdom is simply wrong. Policy actors, on the other hand, must be prepared to engage in the rather time-consuming process which is necessary to arrive at a credible outcome of the problem assessed.

 

Introduction


Science and politics serve different purposes. Policy commonly refers to an institutional choice of values (Lee 1993, 2006). Policy options are ways (actions) to favour some values and disfavour others. A policy development process is driven by forces that can be characterised using three dimensions – salience, credibility and legitimacy (Cash and Buizer 2005). Salience relates to the relevance of the policy goals (the issue at stake). Credibility addresses the technical quality of the knowledge and information used (validity and accuracy). Legitimacy concerns society’s interest in the policy. Science, on the other hand, can be seen as the sum of knowledge produced by the application of systematic methods of inquiry – including experience or learning from actors and institutions, each guided by different epistemologies (what can be known) and different ontologies (what is knowable) (Feynman 1998).

Knowledge can be characterised by three dimensions – acquisition, application and ethics. Acquisition relates to what has been found out using enquiry methods. Application relates to how the findings can be put into practice (i.e. technology). Ethics relates to the right and wrong ways to find out new things or to use technologies.

But doing science is influenced by societal norms, which determine its legitimacy. And making a tool to support impact assessment is more than putting science and politics together. Hence, a modelling system like the SEAMLESS Integrated Framework (SEAMLESS-IF) should not be regarded as a strictly scientific/technical application, but as a tool for communication between science and policy. Such interaction is a discernible feature of impact assessment working procedures. It is therefore logical that interaction starts already with potential users, who can contribute to the development of the tool.

SEAMLESS is an integrated framework that enables simulation of effects of agricultural and rural development policies and innovations, with a strong emphasis on communication between scientists and stakeholders (Van Ittersum et al. 2008a). This means that impact assessments made by SEAMLESS-IF can be understood as social processes, including but not restricted to their formal end products. A social process perspective directs attention beyond the content of assessment reports and encompasses questions regarding participation, presentation, evaluation, and how the boundaries between the scientific and political dimensions of the assessment are negotiated and legitimized (Hoppe 2005).

A deliberative process is one in which values and facts are constructed within a process where ideas are reciprocally confronted with one another as a means for learning. In order for a deliberative process to be scientifically credible, it must consider all forms of knowledge – science, expertise, local and indigenous. It must also ensure the ‘accountability’ of each form of knowledge before relying upon it as a foundation for political choices. To make the process work there must be full and open access to knowledge by all participants. There must also be full transparency in how knowledge is created and used and how values are discovered and applied. In doing so, deliberative processes may address the twin problems of uncertainty and ambiguity. This is done by institutionalizing two basic principles of organizational learning – adaptation through experimentation and iterativeness through continuous monitoring, evaluation, and change (Casey 2005).

Following Habermas, we could state that SEAMLESS-IF can contribute to a deliberative impact assessment process, provided that the following conditions are met: (1) the values of the organisation requesting an assessment meet those of SEAMLESS-IF; (2) the problem definition can be debated between the involved parties; (3) the outcome of the assessments allows for a real debate with affected stakeholders. The third condition will normally not be in the hands of the “model owners” but controlled by the policy agencies that are using the SEAMLESS framework. To match the first condition, we examine whether the policy experts’ assumptions about the science-policy nexus are compatible with the hypotheses set in SEAMLESS-IF. To meet the second, we illustrate how the problem definition created a real debate between policy experts and integrative modellers in the regional test cases.

In which ways, then, can SEAMLESS-IF contribute to an institutionalisation of a deliberative impact assessment process? This chapter presents the features of the impact assessment system now being implemented in Europe. We will then present SEAMLESS-IF as a science-policy interface, and report on the ways scientific knowledge and policy values were confronted and deliberated in discussions with potential users and in test applications of SEAMLESS-IF.

Motive and Aim of the European Impact Assessment System


Performing impact assessments of policy proposals is gradually becoming an important instrument in European policy making. Since 2003 a formal impact assessment (IA) has been required for all regulatory proposals in the EC (EC 2002). The initiative can be traced back to the Lisbon Strategy of 2000, where the European Union set the goal of becoming the most competitive and dynamic knowledge-based economy in the world. In its endeavour to achieve this goal, a core priority is to implement a better law-making process in the Union and the Member States. One way to achieve a better knowledge base for new regulations is to submit policy proposals to impact assessment (EC 2006).


Impact Assessment as Part of the Dynamics of Policy


The IA procedure at the European Commission stems from a governance concept which assumes that policy programs are the product of complex interaction between government and non-government organizations (researchers included), each seeking to influence the collectively binding decisions that have consequences for their interests.

The idea of impact assessment is based on the assumption of the ‘co-production of knowledge’ (Callon 1999). Different kinds of knowledge negotiate their hybridization, a necessity for moving forward in managing the risk, complexity and uncertainty of policy implementation.

An assessment process that meets the ideal requirement is one that involves stakeholders throughout the work in a major political process, so that the suggestions put forward in the final proposal are anchored on all levels in the European community (Bäcklund 2009). Stakeholders can be consulted about different elements of the IA: the nature of the problem, policy options, impacts, etc. (EC 2005: 9).

Assessments are likely to be more thorough and also include sustainability aspects when they concern complex issues without any simple policy solutions. Such situations ‘open up windows and pressures for significant policy change, as well as a demand for new sources of knowledge’ (Turnpenny 2008).

It is also possible to postpone consultation until there is a full draft proposal, applying only the EU’s minimum standards for consultation. Although the latter procedure has been the most frequently used option, at some stage the results will always be subject to public examination. This examination will by nature be politically rather than scientifically motivated. The introduction of the IA instrument in European policy making is a helpful step towards making economic development more knowledge-based and rational, but economic development is nonetheless always a struggle between conflicting interests. We are considering here the ‘adversarial model’ of Hoppe (2005), where politics is a non-violent power struggle between political parties and/or organized interest groups that leads to temporary compromises of the public interest (Lindblom 1968). The EC and its administrations are under heavy pressure from influential spheres of power. Scientists and the use of scientific modelling tools will also be subordinate to this political interplay, in which scientific or other groups contributing to IAs run the risk of being targeted by political critique (Bäcklund 2009). The design of the assessment as well as its outcome can be severely criticised on both factual and political grounds (see, e.g. EEAC 2006: 17). Hence, there is a need for scientists to be aware of the political process in which they participate when consulted.

When the Commission proposes a new regulation, the DG’s argumentation for the policy recommendation made in the proposal has to come across convincingly to decision makers and the public. As it has to be explained how the assessment results support the policy option in the proposal, officers want simplicity and transparency in the modelling process. The models and the assumptions made have to be understood by the policy experts. If the modelling is not comprehensible and transparent, it is not politically useful for the administration (Bäcklund 2009). The need for simplification might be a source of conflict between the ambitions of a scientific modeller and a policy expert. The core of this question concerns a ‘cultural’ difference between researchers’ and practitioners’ ways of approaching a problem – especially practitioners in highly politicised administrations. The interactions between these cultures are therefore a main stake when developing and applying a system like SEAMLESS-IF.

 

IA as a Learning Process


The aim of introducing the IA procedure is wider than merely providing a knowledge base for decision making. IA is introduced as a rational and better way to political decision making, but also as a tool to improve internal communication in the Union and to restore an anticipated lack of confidence in European governance. The great aspirations placed on the IA system as a way to achieve increased communication and unity in European politics, to promote mutual learning between EU institutions and Member States, and to help in the “restoration of confidence in government” are expressed in the Mandelkern report preceding the final IA proposal (Mandelkern Group on Better Regulation 2001).

The political system in the EU is not based on a voting democracy but on a bargaining system in combination with deliberative democratic principles. Deliberation can be described as ‘collective decision making with the participation of all who will be affected by the decision or their representatives’, in combination with ‘decision making by means of arguments offered by and to participants who are committed to the values of rationality and impartiality’ (Elster 1998: 8).

A prerequisite for a true deliberative process to occur is that it emerges under the conditions of ‘free public reasoning among equals who are governed by the decisions’ (Cohen 1998: 186). The underlying science-policy nexus model is a model of ‘pure learning’. It assumes that scientists and policy-makers cooperate through shared concepts and strategies of innovation. It treats the policy process as a sort of research process, where the policy programme is viewed as a set of hypotheses about the causal links between acts and a desirable future state of affairs. Further, putting policy into practice can be seen as a case of social experimentation (Hoppe 2005: 211).

To ensure that the assessment work performed also serves its bargaining and deliberative purpose, consultations are made an integral part of the EU’s IA procedures, so that the work at the same time makes decision-makers and the public aware of likely policy impacts and serves as a tool for communication between them (EC 2002: 3).

 

Conceptualizing Science–Policy Interactions in SEAMLESS-IF Application

SEAMLESS-IF is designed with a set of procedures and computer tools supplied by scientists, which can provide ‘information’ to policy makers, in order to facilitate the assessment of likely short term impacts of a suggested policy, and the exploration of alternative strategies for achieving longer-term goals through influencing land management decisions. The SEAMLESS-IF impact assessment procedures can be viewed as applications of a specific technology – encapsulation of specific enquiry methods (simulation models) and data – for the acquisition of new knowledge. When searching for the domain of intersection between science and policy, we shall first look at the SEAMLESS-IF impact assessment procedure (technology) and then the outputs of the procedures (new knowledge).

The SEAMLESS-IF Impact Assessment Procedure


The SEAMLESS-IF impact assessment procedure includes three main phases: pre-modelling, modelling and post-modelling (Therond et al. 2009). During the pre-modelling phase the issue at stake and the associated spatial and temporal scales are defined. The future driving forces at a given time horizon are anticipated and indicators are selected.

The choices and selections made during this phase will express the interests, values and motivations of the organisation that has initiated the assessment of a policy or a technological change. The following example shows how policy options can steer an assessment in different directions. The objective of the policy to be tested could be to increase oil-producing crops. In order to achieve this goal, a specific group wants to support biofuel. To promote this, the group needs arguments in favour of biofuel. The group therefore suggests that new cropping systems should be tested and assessed on an indicator of biofuel energy balance. The group could also be interested in assessing the global performance of cropping systems that include oil-producing crops in comparison to cropping systems without such crops. The group might further like to get ideas about how oil crops could be introduced in new regions, for example at which price level these crops would be accepted among producers. More thought would be needed to suggest changes in policies or practices to assess this question.

The main concern for the group is the location of oil-producing crops and the political and economic elements that would favour or disadvantage their introduction, as these two factors have implications for the whole supply chain. During the next step, the modelling phase, the appropriate model chain is used with the relevant data. A consistent micro-macro analysis involves a bio-economic farm model (FSSIM) and an agricultural sector model (CAPRI). The FSSIM farm models are run for the major farm types in a sample of regions within the EU. These farm models are run for a range of prices of the commodities modelled with FSSIM and CAPRI. This results in modelled supplies of commodities for these farm types and regions at a certain price level. The price-supply relationships are then extrapolated to the farm types in regions for which no FSSIM model was run, using the econometric model EXPAMOD. Then the supply models in CAPRI are calibrated to these supply responses, allowing derivation of commodity market prices consistent with the farm-level behaviour in CAPRI. The prices simulated with CAPRI are then used for a final run of FSSIM, to simulate the farm behaviour (Van Ittersum et al. 2008a).
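The order of the steps in this micro-macro loop can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the functions stand in for the real FSSIM, EXPAMOD and CAPRI models (which are far more complex), and linear supply and demand curves with invented coefficients replace the actual agronomic and economic relationships.

```python
# Purely illustrative stand-ins for the SEAMLESS-IF model chain steps.

def farm_supply(price, slope=2.0, intercept=1.0):
    """Stand-in for an FSSIM run: supply of one commodity at a given price."""
    return max(0.0, intercept + slope * price)

def extrapolate_supply(sampled_points):
    """Stand-in for EXPAMOD: fit a linear price-supply response to sampled runs."""
    (p1, s1), (p2, s2) = sampled_points[0], sampled_points[-1]
    slope = (s2 - s1) / (p2 - p1)
    intercept = s1 - slope * p1
    return lambda price: max(0.0, intercept + slope * price)

def market_price(supply_fn, demand_fn, p_low=0.0, p_high=10.0, tol=1e-6):
    """Stand-in for CAPRI's market module: bisect for the market-clearing price."""
    while p_high - p_low > tol:
        mid = 0.5 * (p_low + p_high)
        if supply_fn(mid) < demand_fn(mid):
            p_low = mid      # excess demand -> raise the price
        else:
            p_high = mid     # excess supply -> lower the price
    return 0.5 * (p_low + p_high)

# 1. Run the "farm model" for a range of prices (the FSSIM step).
sampled = [(p, farm_supply(p)) for p in (1.0, 2.0, 4.0)]
# 2. Extrapolate the price-supply relationship (the EXPAMOD step).
supply_curve = extrapolate_supply(sampled)
# 3. Derive a consistent market price (the CAPRI step), with demand falling in price.
demand = lambda p: 10.0 - p
p_star = market_price(supply_curve, demand)
# 4. Final "farm model" run at the simulated market price.
final_supply = farm_supply(p_star)
print(round(p_star, 3), round(final_supply, 3))
```

What the sketch preserves is only the sequence: sample farm-level responses, extrapolate a price-supply curve, solve for a consistent market price, and feed that price back into a final farm-level run.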

During the post-modelling phase the impacts of tested policy options and technological changes are analysed and explored, including their institutional compatibility. The post-modelling phase may also include the transformation of the indicators into indices for the comparison between various policy options. Finally, the various results, newly constructed indicators and models are stored for future retrieval and reuse in a knowledge base, together with annotations and other process-related documents.

Both the pre- and post-modelling phases require comprehensive interaction between two types of actors: policy experts and integrative modellers. Policy experts express the problem to be assessed according to their interest and the political role of their organisation, and thus influence the assessment (definition of problem, policy options, indicators) in a way that supports their stake. Integrative modellers run the SEAMLESS framework. As providers of expert knowledge, data and analysis, they feed scientific knowledge into the policy process, based on their values. Thus the SEAMLESS-IF procedures can be regarded as a plan for the institutionalization of ‘deliberation’.

The term ‘deliberation’ means that values and facts are constructed within a participatory practice, where they are moulded in a process of mutual confrontation. In deliberative processes it is expected that people, through interrogation, will discover values – what is important to them, to others, and to society – and learn about their and society’s interests. It is this fundamental commitment to learning that characterizes the use of ‘confrontation’ and ‘interrogation’ among ideas and values. The institutionalization of ‘deliberation’ – i.e. the simultaneous interrogation of knowledge and values – can transform political decision-making (Smith 2000). This process is also described by Habermas as deliberative democracy (1996).

Habermas distinguishes three models for the relation between science and policy, among which “the pragmatic model” recognises that “… there is interdependence between values and facts … and the strict separation between the functions of the expert and the politician is replaced by a critical inter-relation” (Van den Hove 2007). Drawing on Habermas’ classification of science-policy relations, the SEAMLESS-IF procedure can be viewed as a pragmatic, process-oriented science-policy interface. The main reason for this stems from the fact that the SEAMLESS-IF procedures recognise the interdependence between values expressed by organisations and techniques to satisfy value-oriented needs. In the SEAMLESS-IF procedures, the strict separation between the functions of scientists and policy makers is replaced by a critical inter-relation.

In the assessment procedure, the choice of which scientific knowledge is required – i.e. which problem situations will be considered and which are left aside – involves dynamics that pertain to both the scientific and policy domains. The identification of the issue at stake, the choice of relevant disciplines, methodologies, scales, variables, and boundaries, and the strategies to articulate them are elements of the scientific process that are in no way isolated from the socio-political context. In other words, the framing of the question to be considered in the assessment involves value judgements and decisions about what will be considered and what will not – from policy experts – and about who will be involved and how – from integrative modellers. Hence the first domain of intersection between science and policy in the SEAMLESS-IF procedure is a result of the fact that processes of selecting, framing and addressing a problem, as well as the design of potential solutions, pertain to both the scientific and the political spheres.

The second domain of intersection between science and policy relates to the selection of models and indicators used to identify impacts of the policy to be tested. Value judgements are also embedded in decisions about which results will be used and how they are interpreted. The essential point here is that the SEAMLESS-IF selection process of models and indicators belongs both to the scientific process and to the policy process. Policy experts may influence the scientific validation process in the direction of the interests and values of their organisations. Integrative modellers might feed scientific knowledge into the policy process in accordance with the values of the scientific networks by which ‘scientists build up their cognitive authority’. Scientific networks may be used as ‘a powerful vehicle for channelling scientific input to policy-making’ (Van den Hove 2007: 813).

The SEAMLESS-IF Outputs as Science–Policy Interface


A SEAMLESS-IF integrated assessment can be used as input in an Impact Assessment which provides policy experts with information about the likely impacts of a specific policy option. This information is provided by indicators and indices, which are tested against a baseline scenario and a selected policy scenario. The baseline scenario can, for example, be constituted by the likely impacts of the CAP reform as it is anticipated to be implemented in 2013 (or any other defined time horizon), including already decided future legislative changes. The scenario is the point of reference for the interpretation of policy effects, or of various shocks coming from the ‘market level’ or the ‘farm level’. This means that the resulting SEAMLESS-IF foresights can be understood as interfaces between ‘system science’ (procedures and computer tools for supporting impact assessment) and ‘social significance’ (the tested policy options and the selected indicators).
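The comparison of a policy scenario against a baseline can be illustrated with a minimal sketch. The indicator names and values below are invented for illustration; SEAMLESS-IF works with its own indicator sets and scenario machinery.

```python
# Illustrative sketch (not SEAMLESS-IF code): comparing indicator values of a
# policy scenario against a baseline scenario, as the text describes.

baseline = {"farm_income": 100.0, "nitrate_leaching": 40.0, "employment": 12.0}
policy   = {"farm_income": 108.0, "nitrate_leaching": 33.0, "employment": 11.5}

def relative_change(baseline, scenario):
    """Per-indicator change of the scenario relative to the baseline, in percent."""
    return {k: 100.0 * (scenario[k] - baseline[k]) / baseline[k] for k in baseline}

for name, change in relative_change(baseline, policy).items():
    print(f"{name}: {change:+.1f}% vs baseline")
```

The baseline is purely a point of reference here: the sketch reports no absolute "truth" about the future, only the direction and size of each indicator's departure from the reference scenario.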

The analysis made by SEAMLESS-IF can be regarded as ‘social foresights’ (economic, social and ecological needs of society, and means to satisfy these needs – amongst which are technological means and institutional considerations) and consequently the foresights can be understood as science-policy interfaces for the construction of ‘strategic knowledge’. Drawing on Grunwald’s (2004) definition there are three reasons to regard the SEAMLESS-IF foresights as strategic knowledge as they are:
– Context-sensitive combinations of explanatory knowledge about the tested policies/innovative technologies
– Provisional and incomplete in their descriptive aspects due to the degree of uncertainty in given models and data
– Non-verifiable in nature since they do not give a representation of an empirical reality

The foresights are an explorative type of knowledge. Their aim is not to forecast the future – which is assumed to be plural – but rather to bound the uncertainty. By offering a basis for discussing policy impacts they can serve as communication devices.
Dealing with strategic knowledge for societal application requires a thorough reflection on the premises that are inherent in the integrated assessment models and on the sensitivity of the results to the hidden values and other assumptions of the models used (Morgan and Dowlatabadi 1996; Brugnach et al. 2007). It has been suggested (Van Ittersum et al. 2008b) that uncertainty shall be addressed in SEAMLESS-IF in four steps: identifying the sources of uncertainty in the selected model chain; communicating that uncertainty to the user; establishing insight into the effects and/or relative importance of the sources of uncertainty relevant to the users; and communicating these effects to the users.
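A minimal sketch of the identification and insight steps of such a scheme, using a one-at-a-time sensitivity analysis: each uncertain input is varied across its range while the others stay at their nominal values, showing how much of the output spread each source induces. The indicator function, parameter names and ranges are all hypothetical, not taken from SEAMLESS-IF.

```python
# Hedged sketch of a one-at-a-time sensitivity analysis on a toy indicator.

def indicator(price, yield_level, subsidy):
    """Toy stand-in for a model-chain output, e.g. a regional income indicator."""
    return price * yield_level + subsidy

# Step 1: identify sources of uncertainty as (nominal, low, high) ranges.
sources = {
    "price":       (2.0, 1.5, 2.5),
    "yield_level": (5.0, 4.0, 6.0),
    "subsidy":     (3.0, 2.0, 4.0),
}

nominal = {k: v[0] for k, v in sources.items()}
base = indicator(**nominal)

# Step 3: vary one source at a time and report the spread it induces, giving
# the user insight into the relative importance of each source (step 4 would
# be communicating this table to the users).
for name, (_, low, high) in sources.items():
    lo = indicator(**{**nominal, name: low})
    hi = indicator(**{**nominal, name: high})
    print(f"{name}: indicator ranges {min(lo, hi):.1f}-{max(lo, hi):.1f} "
          f"(nominal {base:.1f})")
```

One-at-a-time variation is the simplest possible choice; it ignores interactions between sources, which a fuller treatment (e.g. Monte Carlo sampling over all inputs at once) would capture.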

Problems and challenges for the SEAMLESS-IF science-policy interfaces can be summarized as follows (Table 12.1).

Table 12.1 Problems/challenges for science–policy interfaces in SEAMLESS-IF

Pre-modelling phase
- Problem definition: selection of problem situations looked at/left aside
- Resolution: technology (model-data)

Post-modelling phase
- Discussion: scientific networks as vehicles to channel scientific input to policy; policy issues’ influence on the education of scientists
- Strategic analysis: balance between economic, environmental and social impacts; completion of the database/complexity of indicators; model uncertainty/sensitivity analysis of the outputs based on the premises of the models

During the pre-modelling phase, two challenges must be addressed. As a result of making different selections during the process of problem definition, some issues will be tackled and others left aside. It is a main challenge to make this outcome evident for the policy makers. The “resolution” challenge relates to the outcome of an informed discussion between the policy maker and the researcher, resulting in the selection of suitable data, the choice of the right algorithms, and the relevant indicators.

During the post-modelling phase, researchers face discussions where results can be contested. In fact, scientific researchers never act alone. They belong to institutions and to networks. Without always being fully aware of it, the researcher brings the ideas of his or her networks into the work. Through the models, hypotheses are introduced into the formulation of the question to be assessed. In turn, policy makers influence scientific education. During the strategic analysis, other issues have to be addressed, such as how to balance economic, environmental and social impacts; how to select which outputs of the modelling, among all those available, shall be considered as ‘the results’; and how the potential causes of uncertainty shall be presented.

In conclusion, the EC’s Impact Assessment Guidelines suggest science-policy relations where scientists and experts share their knowledge and capabilities. The design of the procedure and the outputs given by SEAMLESS-IF (as far as we may deem it) come close to that model. The following section reports on the ways scientific knowledge and policy values were deliberated during development of the framework and in test applications.

 

Case Studies: Establishing a Debate


The two cases presented below report about interactions with potential users during development of the SEAMLESS-IF. The first case describes interaction with officers at the EC to explore their views on how the framework should be developed in order to suit assessment work in the DGs. The second case describes the interaction with representatives at regional level in France when setting up assessments in test situations.

Case: Interacting with the EC


Allocation of research funding is a way of manifesting political prioritisation and expanding knowledge in a certain direction. Given the importance attributed to impact assessment in European policy making, it is logical that several large projects with the task to develop frameworks for modelling tools are presently funded by the EU’s Framework Programme for Research and Technological Development (e.g. SEAMLESS, SENSOR, EFORWOOD). The development of such systems is clearly in line with the Commission’s aim to implement the impact assessment procedure in the EC and in the Member States. But the way work in the funded projects is organised is also part of a political progression to spread the idea of impact assessment as a political tool. In the research proposals and in the intermittent evaluations of the projects, the importance of interaction with potential users at different political levels during the development of the system has been forcefully underlined. Dissemination and stakeholder interaction are regular requirements in research funded by the EC, but in the recommendations to the SEAMLESS project they have been advocated in quite a pronounced way.

It was acknowledged already from the outset of the project that the success of the framework and its impact on EU policies depended on an efficient implementation and integration into user organisations. In the project proposal it was thus stipulated that the program should establish a permanent group of ‘advisors’ among potential ‘users’ that could help to develop the framework in a process of ‘co-evolution’. Users were defined as organisations/individuals that could potentially use SEAMLESS-IF to support impact assessment processes. For mainly practical reasons the users were divided into prime users (DGs like Agriculture and Environment) and other users, like national policy-making or implementation agencies and organisations advocating e.g. agricultural or environmental interests.

Forming a Platform for Interaction
As the aim was to develop a system that could serve the impact assessment work performed in the Commission, user interaction could very well have been arranged in such a way that the project from the start had been provided with a reference group of experienced assessment leaders in the EC. But this was not the case; instead, the researchers in the project had to work their way into the DGs, to establish contact with officers identified as potential users, and to evoke their interest in a modelling tool like SEAMLESS-IF.

We wanted to establish a User Forum (UF) with permanent members who were willing to follow the project during its development of the framework. In search of the relevant contacts, a large number of officers at different DGs of the Commission and in the European Parliament, as well as at national and regional administrations, were approached. We were in contact with more than 30 people by way of email correspondence, personal interviews and organised information meetings in both prime user and other organisations. As a result of this search process, information about the aim of the project and its background in the Commission’s Directive on Impact Assessment has been effectively spread by the researchers. The foregoing can be seen as a purely administrative process, but it can also be described as an interactive learning process. While investigating practitioners’ interest in and potential need for modelling tools, the researchers have at the same time advocated the Commission’s policy within its own administration, as well as in the European Parliament and in national administrations and interest organisations.

The actors involved, the researchers and the officers in the administrations, have participated in a process that may be described as interactive learning. This search process could also be viewed as researchers acting as lobbyists, anchoring the idea of ex-ante impact assessment in general, and assessment with the help of modelling tools in particular, among employees in the administrations.

When the right group of suitable and interested EC officers was finally identified, a User Forum could be set up with people who were willing to work with us. There has been some turnover of individuals over the years, but several people have followed the forums from the start. The UF meetings have been regularly attended by a core group of five to ten representatives from DG Agriculture, DG Environment and DG Economic and Financial Affairs, the EEA (European Environment Agency) and the JRC (Joint Research Centre). It has not been possible to include a representative from the Secretariat General in the UF, which was unfortunate as it has the overall responsibility for IA development in the EC. One of the lessons we learned during the initialisation phase was that, according to the ‘work culture’ in the services, the DGs preferred not to include representatives from organisations external to the Commission in this type of forum. Originally we had intended to create a UF with mixed participation, with representatives from the Commission services as well as other organisations such as farmers’ unions, environmental organisations and the OECD.
The seven meetings that have been held have resulted in a knowledge exchange which has been beneficial for the development of the framework.

A systematic review of the minutes from the meetings shows that the discussions have mainly revolved around three themes: (i) positive and negative feedback on the presented ideas concerning the structure and content of the system; (ii) demands on the technical performance or knowledge content of the tool; and (iii) strategic advice from the UF concerning the best ways to approach and attract the interest of the DGs and other users.

Positive and Negative Feedback from a User Perspective
The early phase of interaction focused on explaining the idea of the framework and its components in order to get feedback on its usefulness for the participants’ organisations. It became clear that an integrated framework for modelling policy effects was of great interest to the organisations, and that its introduction was timely, as the need for knowledge-based tools increased along with the growing importance of the IA procedure in the EC. Implementation of a system like SEAMLESS would imply a major change in procedure and quality compared to the present methods, in which answers are to a greater extent estimations based on expert judgements or on fragmented model analysis.

Features that particularly appealed to the participants were the open construction of the system, in which linkages to other models can be made, and the fact that the outcome can be presented with different visualization techniques. The importance of an open design of the framework is related to the anticipation that more general and open questions will be asked in the future, such as: What will happen if food security becomes more important? What happens in the EU if India and China develop into major food importers? Other reasons to keep the framework open would be to investigate, for example, issues related to water resources, which requires the integration of hydrological models, or to incorporate models that are already operational in the users’ organisations.

In parallel with the participants’ positive interest in the promising potential of a tool like SEAMLESS-IF, they demonstrated their role as practitioners concerning another question of openness, but this time openness was not seen as a benefit. From the very start the participants were concerned about not “wasting their time” on the development of a tool if the continuation of “the product” and its administration was not secured. This is an unavoidable difference in prime focus between researchers and practitioners. For researchers the development of knowledge is meaningful as such. For practitioners it is meaningful only if knowledge can be administrated and applied. Theoretically both parties understand each other’s viewpoint, but it would not have become an issue in the discussion between them if the encounter between the two had not been arranged, i.e. through the formation of a UF.

The issue raised was: “What will happen with SEAMLESS-IF after 2008, when the research project has ended? Who will maintain the models and the data?” The research team was of course well aware of the necessity to provide an answer to this question, but for obvious reasons it was impossible to give such guarantees before the development process had begun. The result of this issue being raised in the UF was that the process of finding forms for the administration of the system, which was already envisioned in the project description, got an early start.

This was also strongly endorsed by the Scientific Advisory Board of the project and by independent peer reviewers. As a result, the Board of the project developed a business plan, of which the establishment of a SEAMLESS Association was a cornerstone. The establishment of this association is now secured. The major aim of the Association is to maintain, extend and disseminate the major product of the project, i.e. SEAMLESS-IF and its models and data. New research and application projects feed improvements and extensions back to the Association, and the Association makes the latest versions of SEAMLESS-IF available.

Which Type of User Interface for Whom?
Another important issue in the early stage of the project was the question of how the user interface should be designed. Before this could be decided it was necessary to establish more precisely what kind of user, with what kind of knowledge, should be able to navigate the system for which purpose. In the early discussions it was suggested by UF participants that they would like to have a simple tool on their computer to quickly answer burning questions. Such a tool would of course be very convenient to have in their daily work.

It was, however, difficult to envisage that the system could ever be so user-friendly that any IA leader in a DG would be able to set up and run a question in the modelling system, while still providing a flexible framework for a diverse range of applications. The possibility to navigate through the system would be restricted to people with knowledge of modelling. There is obviously a tendency to do more in-house work on the IAs, and also an increasing number of people in the DGs who have knowledge about modelling, but the number of staff capable of performing modelling will continue to be limited. A deeper discussion of this issue in the UF also revealed that even people with knowledge of modelling would hardly have the possibility to spend the time it would take to set up modelling runs. An assessment leader has to devote her or his time to initiating and leading different parts of the impact assessment process and will therefore not be able to personally perform in-depth modelling and analysis of complicated issues. When modelling is used, it is usually outsourced.

The UF discussion, complemented with information from other DG officers who were currently leading impact assessment projects, led to the decision that we should aim for a system designed for use at different levels. To set up and run new questions, there will be a need for an integrative modeller, i.e. a person who knows the system and its components. The policy experts in the administrations should be provided with an interface where they can alter parameters in already modelled policy issues, test alternative answers with the help of minor changes in variables, and view and analyse results.

Strategic Motives Behind Requests for Technical Performance and Knowledge Content

We met a number of requests concerning the technical performance of the tool. These requests were usually concrete and seemingly minor. It was, for instance, said that it would be beneficial if the tool could be connected to the EC Guidelines for Impact Assessment. Another specific request of this kind was to show the outcome of a modelling run in relation to a policy target, such as a red flag indicating numerical targets that were not achieved. Such requests are rather easy to meet. The former request, about adaptation to the EC guidelines, had great influence on the design of the end-user interface, which is built around the three stages in the integrated assessment process: pre-modelling – modelling – post-modelling. The latter request can be met through the visualization techniques elaborated in the presentation part of the interface.
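The logic behind the “red flag” request can be sketched in a few lines: compare the indicator values produced by a modelling run against numerical policy targets and flag those that are not achieved. The indicator names, values and thresholds below are invented for illustration; this is not SEAMLESS-IF code.

```python
# Hypothetical sketch of the "red flag" request: flag indicators from a
# modelling run whose values miss the corresponding numerical policy target.
# All names and numbers are illustrative, not taken from SEAMLESS-IF.

def flag_missed_targets(results, targets):
    """Return the names of indicators whose modelled value misses the target.

    results -- dict mapping indicator name to modelled value
    targets -- dict mapping indicator name to (direction, threshold), where
               direction is 'min' (value must reach the threshold) or
               'max' (value must stay at or below the threshold)
    """
    flagged = []
    for name, (direction, threshold) in targets.items():
        value = results.get(name)
        if value is None:
            continue  # this run did not produce the indicator
        missed = value < threshold if direction == "min" else value > threshold
        if missed:
            flagged.append(name)
    return flagged

run_results = {"farm_income_eur": 28500, "nitrate_leaching_kg_ha": 41.0}
policy_targets = {
    "farm_income_eur": ("min", 30000),        # income should reach 30,000 EUR
    "nitrate_leaching_kg_ha": ("max", 50.0),  # leaching should stay below 50 kg/ha
}
print(flag_missed_targets(run_results, policy_targets))  # -> ['farm_income_eur']
```

A presentation layer would then render the returned names with a red-flag symbol next to the relevant indicator, which is the kind of visualization the participants asked for.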

The reasons for the two requests in the examples above can be traced directly back to the everyday practical assessment work of the services. They are also easily understood and translated by the researchers. Other requests had their background in more profound political dimensions of the IA work and were not so easily transformed into the tool, such as the questions of transparency and uncertainty of results.

The participants repeatedly expressed their concern about the lack of transparency of modelling systems in general. The demand for transparency stems from policy-making institutions’ basic need for their work to gain legitimacy. Legitimacy is achieved when the production of the knowledge that a policy is based on has been conducted in an unbiased way and has treated opposing interests in a fair manner. The modelling part of an assessment must not be suspected of advancing any political agenda. In the practical work situation, the issue of transparency means that assessment leaders and other policy makers have to be able to defend and explain modelling results to a critical group of stakeholders. If the modelling system is a black box, where assumptions and other critical parameters are not clear and understandable, the outcome might not be perceived as useful (salient) by the officers in charge.
The demand for transparency is met with comprehensive documentation of each of the components and of the entire framework. The development of a joint ontology for concepts that are shared across components also serves the aim of transparency and consistency (Janssen et al. 2009). Further, the need for transparency has triggered a large investment in the development of graphical user interfaces for two user types, to define and document the specifications of impact assessment experiments with SEAMLESS-IF.

We also learned that the issue of uncertainty of results was important. The closer the process gets to policy implementation, the more important the aspect of uncertainty becomes. Information about uncertainty must be based on what users find important, and the project therefore needed to gain a deeper understanding of this issue. This experience encouraged the project to adopt a user-oriented approach towards uncertainty analysis (Gabbert et al. 2009). As part of this approach, a questionnaire aiming to capture the most important aspects of uncertainty was distributed to participants of the User Forum and other contacts at various DGs, the JRC and the EEA. The questionnaire revealed which items of uncertainty potential users find most relevant. Although answers from respondents differ, they do give some guidance towards concrete uncertainty analyses in comprehensive model chains such as SEAMLESS-IF. A full-blown traditional and forward approach to uncertainty analysis is neither feasible nor effective; instead, a focus on relevant sources and aspects of uncertainty as identified by users and scientists is opted for (ibid.).

The political character of an IA process was also transformed into requests such as the need for a very flexible selection of indicators. A situation where policy makers would be completely free to suggest their own indicators would be optimal. The reason for this is that during the course of an IA, negotiations between several DGs and stakeholders are needed. If the selection of indicators is fixed in advance, such a negotiated selection of indicators is not possible to achieve. Our suggestion for an indicator framework was seen as a useful compromise (Alkan Olsson et al. 2009).

 

Concluding Remark on the UF Experience


The interactions with the potential users in the Commission served the development process of SEAMLESS-IF with specifications of requirements and the development of a strategy for maintenance beyond the project’s lifetime. They led to the identification of the need for different user interfaces and the possibility to adapt these according to procedural requirements in the EC’s assessment work. Clearly, the degrees of freedom in a large research project with many partners, and with a funding agency that requests a clear and detailed project proposal before starting, are somewhat restricted once the project has started. The interactions, however, confirmed from the outset that what was proposed and developed by the SEAMLESS project matched the potential users’ overall needs. Within the overall aims and methodology, the interactions assisted in priority setting within the project and greatly triggered the timely thinking about continuity, maintenance, extension and dissemination. In this way, the two main results of the interactions, i.e. evaluation and specification of requirements, and continuity, also serve each other. Requirements which cannot be satisfied (entirely) during the lifetime of the project are an important starting point for the work plan of the SEAMLESS Association.

 

Case: Interacting with Regional Organisations


The application cases are derived from the testing stage of SEAMLESS-IF prototype 2, involving modellers from the project with knowledge about database structures, data formats, quality measures and model components, and policy experts representing political or administrative bodies in France.

The testing of the pre-modelling phase included selecting and framing of the issue at stake for the organisation involved as well as defining the policy measures to be tested and the experiments to assess these measures. Framing the issue included specifying the ultimate and intermediate goals of the policy to be tested in terms of impact indicators. Defining potential measures to be tested involved specification of changes in the environment of the target actors of the policy (economic instruments, regulation instruments, voluntary instruments) and/or changes in the behaviour of given actors. Designing experiments involved defining the spatial scale of the issue at stake (EU, nation, region, farm-type, etc.), the temporal scale of the implementation of the measures, and the driving forces of the future not handled by the tested policy and that might influence the outcome (alternative economic and/or environmental scenarios).

Two different methods for interaction were applied: open discussions around key topics and discussions structured around templates to be filled in during the meeting.

Templates included pre-defined typologies of problems, spatial scales, policy options (economic, regulatory, voluntary instruments), scenarios (economic and environmental) and impact indicators (domain, scale). Based on 11 regional test cases involving 17 policy experts and nine integrative modellers (from six different research institutes) the following learning experiences can be reported.
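A minimal sketch of how such a template might be represented as a data structure follows; the field names and typology values are hypothetical and do not reproduce the actual SEAMLESS-IF templates.

```python
# Hypothetical sketch of a pre-modelling template with pre-defined
# typologies; all names and values are illustrative, not SEAMLESS-IF's own.
from dataclasses import dataclass, field

POLICY_OPTIONS = {"economic", "regulatory", "voluntary"}
SPATIAL_SCALES = {"EU", "nation", "region", "farm-type"}

@dataclass
class AssessmentTemplate:
    problem: str
    spatial_scale: str
    policy_option: str
    scenarios: list = field(default_factory=list)   # economic/environmental outlooks
    indicators: list = field(default_factory=list)  # (domain, scale) pairs

    def __post_init__(self):
        # Restricting entries to pre-defined typologies keeps the discussion
        # between policy experts and modellers comparable across test cases.
        if self.spatial_scale not in SPATIAL_SCALES:
            raise ValueError(f"unknown spatial scale: {self.spatial_scale}")
        if self.policy_option not in POLICY_OPTIONS:
            raise ValueError(f"unknown policy option: {self.policy_option}")

template = AssessmentTemplate(
    problem="nitrate leaching from intensive dairy farming",
    spatial_scale="region",
    policy_option="regulatory",
    scenarios=["baseline economy", "high energy prices"],
    indicators=[("environmental", "region"), ("economic", "farm-type")],
)
print(template.policy_option)  # -> regulatory
```

Filling in a structure like this during a meeting forces the problem framing to be explicit, which is what the participants reported as the templates’ main benefit.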

 

General Feedback on the Pre-modelling Process


Most of the policy experts appreciated the work steps in the pre-modelling phase and said that these helped them to clarify and make their problems explicit. The templates were perceived as good tools to support problem framing and to identify suitable policy options to be tested. The policy experts appreciated being asked for their opinion about the possible impacts of a policy. This gave them an opportunity to express their experience and knowledge, and gave the modellers a possibility to merge this information with scientific knowledge to specify impact indicators. This process of joint work can help prevent results from being rejected later on.
The modellers found the pre-modelling procedures to be an effective way to learn about the social values of policy experts and to include new knowledge. The test enabled them to better understand the policy experts’ opinions and thereby find ways to support them in formulating their aims and interests, and to translate these into a problem that can be assessed.

If the above reported experiences are generally valid, it means that the SEAMLESS-IF pre-modelling procedure can serve as a learning interface for policy experts. Although the general feedback was promising, some stages of the pre-modelling process came out as critical. These key situations are discussed below, together with suggestions for improvements that can facilitate the process.

The Framing Process
Problem framing is the process during which the involved actors state in more exact terms what they want to do. It requires an interpretation of the main aspects of the issue and of how they are connected. The factors and causal relationships will suggest possible policy options and external driving forces. The causal links will sort out which biophysical and agro-management contextual elements have to be considered, and also demonstrate which alternatives do not have to be considered (Bardach 2000). The combination of policy options, outlooks and contexts, which may be seen as an ‘experimentation plan’, stems from the problem framing (Janssen et al. 2009). If the problem definition de-emphasises or neglects important dimensions, this can ‘limit understanding and narrow analysts’ vision’, creating ‘blind spots in which people will not see potentially valuable alternatives’ (Stern 1986: 200).

During the problem definition phase the policy experts mainly expressed their problem in terms of organizational interests, values and motivations, while the modellers needed to know the specifications of policy goals, impact indicators and constraints in order to set up an experiment. As a result, it appeared that problem definition and specifications for the experiments needed several meetings between policy experts and modellers to be formulated. In some cases, no problem and experiment were defined, as the policy experts needed more time to specify the issues at stake. In other cases, the modellers needed to check the ability of the models to solve the problem or to find ways to make the problem addressable with SEAMLESS-IF.

As a result of discussions with policy makers, it was decided that SEAMLESS-IF should, for comparative purposes, offer interactive mapping tools to co-design the different experiment runs. In addition, it was suggested that example problems and specifications for experiments should be provided as guidelines. This would make it easier to transform interests and values into impact indicators and experiments to be run. An open discourse about research questions and policy questions at the beginning of the assessment process can help to ensure that a broad framing is achieved. In the test cases the framing procedure was essential to attract the interest of the actors, although the participants sometimes contested the frames. Even the idea that a problem needs to be assessed is a frame and can be contested.

Specification of the Policy Option
The regional policy experts were interested in assessing both EU policies (e.g. the reform of the Common Agricultural Policy) and regional policies designed by their own organisations (e.g. policies supporting the introduction of specific crops in new regions). But specifying the policy options to be tested proved to be another difficult step in the methodology, as there is still a lack of support tools to make this process effective. The most difficult step for science working within policy making is the problem definition. Policy makers express frustration with narrowly defined scientific problems, and scientists struggle to separate the many strands and pieces of complex policy problems into assessable options.

The specification of policy options is of key importance for a successful assessment and also crucial for the understanding between policy experts and modellers. A policy option is a set of measures (changes of economic and/or regulatory environment) planned to be applied in a given space to solve a problem (difference between actual situation and desired situation).

Concluding Remark on the Regional Test Experiences
To conclude, the test experiences suggest that SEAMLESS-IF can contribute to making integrated assessment a deliberative process between the scientific and political spheres. Although the general experience from setting up regional test cases is promising, critical stages in the assessment procedure that have to be further developed became visible. Three particular design steps deserve attention: the framing process, the specification of the resolution space and the specification of the policy to be tested. The tests also gave insights into the importance of pedagogical tools that can favour mutual understanding and debate among the actors and improve the process.

An additional recommendation is to sufficiently account for the political, economic, social and scientific context in which the assessment is embedded. For this it is essential to find out the needs of potential users, to take notice of past experience and institutional history, and to ensure appropriate participation in the assessment process. If not, there is a risk of not achieving stakeholders’ involvement (Hilborn 1979; Mintzberg 1980).

 

Conclusions


To perform impact assessments (IAs) of policy proposals is gradually becoming an important instrument in European policy making. The IA procedure in the European Commission stems from a governance concept which assumes that policy programs should be the product of complex interaction between government and non-government organizations (researchers included), each seeking to influence the collectively binding decisions that have consequences for their interest. The idea of IA is based on the assumption of ‘co-production of knowledge’.

In order to make a system like SEAMLESS-IF applicable in a European decision-making process, interaction with potential users of the system is needed during its course of development. With the objective to make use of potential users’ opinions, different forms of interaction to enable user involvement in the development of the framework have been performed. The learning that came out of discussions with officers in the EU administration participating in the SEAMLESS User Forum and representatives of regional administrations setting up assessments in test situations in France lead us to the following conclusions.

The discussions at the EU level confirmed that the proposed framework developed by the SEAMLESS project matches the potential users’ overall needs. The feedback on the prototypes served the development of SEAMLESS-IF with specifications of requirements on the interface and the visualisations needed. It also led to the identification of the need for different user interfaces and the adaptation of these according to procedural requirements in the EC’s assessment work. The interactions further assisted in priority setting within the project and greatly triggered the timely thinking about continuity, maintenance, extension and dissemination.

Testing of the assessment procedures at the regional level revealed that scientists specializing in modelling or systems analysis can play an important role in facilitating the integration of different types of knowledge into the definition of a policy problem. Although the general experience from setting up test cases is promising, critical stages in the assessment procedure which have to be further developed became visible. Three particular steps in the process deserve attention: the framing process, the specification of resolution space and the specification of the policy to be tested.

The most difficult step for science to work within policy making is the problem definition. Policy makers may express frustration with narrowly defined scientific problems and scientists struggle to separate the many strands and pieces of complex policy problems.
An overall conclusion from the interactive encounters between policy makers and scientists is that scientists can have an important role in promoting the process of learning in policy. However, in order to define integrated science-policy problems both science and policy must work in new ways. The scientists need to be able to recognize, understand, and value different forms of knowledge. Indeed, one reason for placing science within the policy process is for scientists to learn about social values, appreciate other ways of knowing, and recognize the limits of science in policy. A role for the scientist as an ‘institutionalized critic’ occurs naturally in a policy analysis process. It is the critical contribution of science to demonstrate when beliefs are supported by evidence and when conventional wisdom is simply wrong. Policy actors, on the other hand, must be prepared to engage in the rather time consuming process which is necessary to arrive at a credible outcome of the problem assessed.

 

References
Alkan Olsson, J., Bockstaller, C., Stapleton, L.M., Ewert, F., Knapen, R., Therond, O., Geniaux, G., Bellon, S., Pinto Correira, T., Turpin, N., & Bezlepkina, I. (2009). A goal oriented indicator framework to support integrated assessment of new policies for agri-environmental systems. Environmental Science and Policy (in press).
Bäcklund, A.-K. (2009). Impact assessment in the European commission – a system with multiple objectives. Environmental Science and Policy (in press).
Bardach, E. (2000). A practical guide to policy analysis: The eightfold path to more effective problem solving. New York: Chatham House.
Brugnach, M., Tagg, A., Keil, F., & de Lange, W. J. (2007). Uncertainty matters: Computer models at the science–policy interface. Water Resources Management, 21, 1075–1090.
Callon, M. (1999). The role of lay people in the production and dissemination of scientific knowledge. Science, Technology and Society, 4(1), 81–94.
Casey, A. (2005). Enhancing individual and organizational learning: A sociological model. Management Learning, 36(2), 131–147.
Cash, D., & Buizer, J. (2005). Knowledge-action systems for seasonal to inter-annual climate forecasting: Summary of a workshop. Washington, DC: The National Academies Press.
Cohen, J. (1998). Democracy and liberty. In J. Elster (Ed.), Deliberative democracy (pp. 185–231). Cambridge: Cambridge University Press.
EC. (2002). Communication on impact assessment, COM 2002(276). Retrieved Feb 15, 2007, from https://ec.europa.eu/governance/impact/docs/key_docs/com_2002_0276_en.pdf
EC. (2005). Impact assessment guidelines, SEC 2005(791). Retrieved Feb 15, 2007, from https://ec.europa.eu/governance/impact/docs/SEC2005_791_IA%20guidelines_annexes.pf
EC. (2006). A strategic review of better regulation in the European Union. COM 2006(689). Retrieved Feb 15, 2008, from https://www.ec.europa.eu/enterprise/regulation/better_regulation/docs/en_689.pdf
EEAC. (2006). Impact assessment of European commission policies: Achievements and prospects. Statement of the EEAC – Working Group on Governance. Retrieved Feb 15, 2007, from https://www.ecologic-events.de/eu-impact assessment/en/documents/EEAC_WG_Gov_IAstatement_background.pdf

Elster, J. (Ed.). (1998). Deliberative democracy (pp. 1–18). Cambridge: Cambridge University Press.
Feynman, R. (1998). The meaning of it all. Reading, MA: Perseus.
Gabbert, S., van Ittersum, M., Ewert, F., Kroeze, C., Stalpers, S., & Alkan-Olsson, J. (2009, March). Accounting for user's uncertainty information needs in Integrated Assessment models: The case of the SEAMLESS Integrated Framework. In M.K. Van Ittersum, J. Wolf, & H.H. Van Laar (Eds.), Proceedings of the Conference on Integrated Assessment of Agriculture and Sustainable Development: Setting the agenda for science and policy (AgSAP 2009), Egmond aan Zee, The Netherlands, 10–12 March 2009 (pp. 504–505). Wageningen, The Netherlands: Wageningen University and Research Centre.
Grunwald, A. (2004). Strategic knowledge for sustainable development: the need for reflexivity and learning at the interface between science and society. International Journal of Foresight and Innovation Policy, 1(1–2), 150–167.
Habermas, J. (1996). Between facts and norms. Cambridge: MIT Press.
Hilborn, R. (1979). Some failures and successes in applying systems analysis to ecological systems. Journal of Applied Systems Analysis, 6, 25–31.
Hoppe, R. (2005). Rethinking the science-policy nexus: from knowledge utilisation and science technology studies to types of boundary arrangements. Poiesis & Praxis, 3, 199–215.
Janssen, S., Andersen, E., Athanasiadis, I.N., & van Ittersum, M.K. (2009). A database for integrated assessment of European agricultural systems. Environmental Science and Policy (in press).
Lee, K. (1993). Compass and gyroscope. Washington, DC: Island Press.
Lee, N. (2006). Bridging the gap between theory and practice in integrated assessment. Environmental Impact Assessment Review, 26(1), 57–78.
Lindblom, Ch. E. (1968). The policy-making process. Englewood Cliffs, NJ: Prentice Hall.
Mandelkern Group on Better Regulation (2001). Final report. Retrieved Feb 10, 2007, from https://ec.europa.eu/governance/better_regulation/documents/mandelkern_report.pdf
Mintzberg, H. (1980). Beyond implementation: An analysis of the resistance to policy analysis. INFOR, 18(2), 100–138.
Morgan, M. G., & Dowlatabadi, H. (1996). Learning from integrated assessment of climate change. Climatic Change, 34(4), 337–368.
Smith, G. (2000). Toward deliberative institutions. In M. Saward (Ed.), Democratic innovation: Deliberation, representation and association (pp. 29–39). London: Routledge.
Stern, P. C. (1986). Blind spots in policy analysis: What economics doesn't say about energy use. Journal of Policy Analysis and Management, 5(2), 200–227.
Therond, O., Belhouchette, H., Janssen, S., Louhichi, K., Ewert, F., Bergez, J.-E., Wery, J., Heckelei, T., Alkan Olsson, J., Leenhardt, D., & van Ittersum, M. (2009). Methodology to translate policy assessment problems into scenarios: the example of the SEAMLESS Integrated Framework. Environmental Science and Policy (in press).
Turnpenny, J. (2008). Are we nearly there yet? Lessons for integrated sustainability assessment from EU environmental policy-making. International Journal of Innovation and Sustainable Development, 3(1/2), 33–47.
Van den Hove, S. (2007). A rationale for science-policy interfaces. Futures, 39(7), 807–826.
Van Ittersum, M., Ewert, F., Heckelei, T., Wery, J., Alkan Olsson, J., Andersen, E., et al. (2008). Integrated assessment of agricultural systems – A component-based framework for the European Union (SEAMLESS). Agricultural Systems, 96, 150–165.
Van Ittersum, M., Gabbert, S., & Ewert, F. (2008b). Uncertainty analysis in model chains for integrated assessment. SEAMLESS deliverable PD1.3.11.2, SEAMLESS Integrated Project, EU 6th Framework Programme, contract no. 010036-2, www.SEAMLESS-IP.org, p. 60.
Van-Camp, L., Bujarrabal, B., Gentile, A.-R., Jones, R.J.A., Montanarella, L., Olazabal, C., & Selvaradjou, S.-K. (2004). Reports of the Technical Working Groups established under the thematic strategy for soil protection. EUR 21319 EN/1, Official Publication of the European Communities, Luxembourg, from https://ec.europa.eu/environment/soil/pdf/vol1.pdf
