ESDN | European Sustainable Development Network

ESDN Quarterly Reports

The ESDN Quarterly Reports provide in-depth documentation of a selected topic on a quarterly basis. The current ESDN Quarterly Report is displayed below.


ESDN Quarterly Report September 2006

Evaluation and Review of National Sustainable Development Strategies

by Gerald Berger & Reinhard Steurer

This ESDN Quarterly Report gives an overview of different approaches to the evaluation and review of National Sustainable Development Strategies (NSDS) in Europe. In so doing, it concentrates on qualitative evaluations and reviews that assess the quality of process-related aspects of SD strategies, such as policy-making processes, policy instruments, implementation procedures, stakeholder involvement, coordination activities, etc. Taking the relevant presentations and discussions at the ESDN Conference 2006 in Salzburg into account, the report provides further details about evaluation and review experiences gained in Austria, France, Switzerland and the UK. It summarizes the different approaches by discussing their strengths and weaknesses.


Providing Feedback in SD Strategy Processes: An Overview

Clarification of Terms and Levels of Application

When looking through policy documents, reports and academic literature dealing with SD policy making, it is striking that terms like evaluation, assessment, review, audit or monitoring are frequently used synonymously, without proper clarification of how they differ from each other. Two examples: In a contribution to the journal Natural Resource Forum, an article states that “a variety of approaches have been used for assessing a country’s NSDS” and that “all these evaluations examine (…)” (George and Kirkpatrick, 2006: 147). A recent report prepared for the OECD (Dalal-Clayton and Bass, 2006) is concerned with “monitoring mechanisms for NSDS” and gives an overview of internal reviews, external auditing, etc. Again, the terms used are neither clarified nor comprehensibly distinguished from each other. We therefore suggest starting a reflection process about the meaning of the most commonly used concepts for providing feedback in an SD strategy process, such as:

  • evaluation,
  • review,
  • assessment,
  • audit, and
  • monitoring.

Some of the key characteristics of the different feedback mechanisms can be described as follows.

  1. Evaluations are characterised by a systematic analysis of a specific “evaluandum”, based on clearly defined criteria and scientific methods, and they include a knowledge-based judgement (“value”). Moreover, they often give guidance on how to improve actions or processes. Most evaluations are carried out externally by independent scientists or consultancies.
  2. Reviews are most often also aimed at improving actions or processes and are mainly undertaken in the context of a revision process of a strategy, policy or project. However, reviews are usually not based on scientific criteria or methods. Instead, they are often conducted either internally by the institution responsible for the review process (“internal review”) or by peers (“peer review”). One can speak of external reviews when they are conducted by institutions that are not responsible for or related to the delivery of the reviewed strategy, policy or project.
  3. Assessments refer to a determination of the effects of actions and processes, e.g. different forms of impact assessment, such as sustainability impact assessment (SIA) and strategic environmental assessment (SEA). They tend to focus on specific projects or policy instruments rather than on comprehensive strategy processes.
  4. Audits check compliance with a set of predefined criteria (e.g. standards) in a standardised way. They are conducted by a certified person or institution (auditor), without necessarily giving guidance on how to improve actions and processes. Audits are normally undertaken at regular intervals.
  5. Monitoring is an observation activity, mostly based on a set of quantitative indicators. The monitoring process does not challenge or judge the overall strategy, policy or project, but observes performance trends with regard to individual topics or specific objectives. It does not give any guidance on how to improve actions or processes. Examples are SD indicator monitoring (often documented in SD indicator reports) or aggregated monitoring tools, e.g. ecological footprint, or Green Gross Domestic Product (Green GDP).

On a general level, one can distinguish between process-oriented and performance-oriented approaches (Martinuzzi, 2004) to providing feedback in an SD strategy process. While approaches that focus on the processes of SD policy-making (e.g. policy coordination/integration activities, deployment of policy instruments, implementation procedures, stakeholder participation, etc.) tend to apply qualitative methods, those that measure the performance of SD strategies against policy objectives often use quantitative SD indicators.

Another important general distinction has to be made with regard to the levels of application of the different approaches:

Level of Application                         | Approach
Country performance                          | Ecological footprint, SD indicator monitoring, Green GDP
Strategic processes, such as NSDS            | Peer review, external evaluation, internal review
Individual policies, programmes and projects | SIA, SEA

This Quarterly Report focuses on qualitative evaluations and reviews of NSDS in selected European countries.

 

The Role of Evaluating and Reviewing an NSDS in the Policy Cycle

NSDS are strategic processes rather than strategy documents and need to adapt constantly to new situations and challenges. The various feedback mechanisms described above help them do so: by providing feedback to policy-makers, they form “the basis for coherent and self-reflective policy-making in a knowledge-based society” (Steurer and Martinuzzi, 2005). Additionally, evaluations become crucial at a time of increasing demand from various stakeholders to gain insights into the delivery process of NSDS objectives and to see that SD strategies make a difference in policy-making.

The report “A review of monitoring mechanisms for National Sustainable Development Strategies” by Dalal-Clayton and Bass (2006) outlines the following five strategic purposes of “strategy monitoring”:

  • systematically tracking changes that support or hinder SD;
  • supporting common learning;
  • providing strategy steering and adaptation;
  • enabling accountability of strategy stakeholders; and
  • building confidence that effective change is possible.

Overall, all types of feedback mechanisms in SD strategy processes must be seen as part of a broader “Sustainability Management System”. Evaluating, reviewing and monitoring SD strategies are not isolated tasks that judge the quality of the strategy process and/or measure the effectiveness and impacts of individual projects; rather, they are parts of an organised feedback process which should be an integral part of the governance of SD.

 

Recent reports and developments with regard to NSDS and evaluation mechanisms

A review of NSDS by the European Commission in 2004 revealed that NSDS vary considerably in their content, approach and level of implementation. Generally, the Member States (MS) face a number of common challenges in the design and implementation of their NSDS, such as:

  • setting up adequate institutional and procedural arrangements;
  • defining priority actions;
  • formulating a widely accepted vision of long term development;
  • transferring ownership to target groups;
  • finding the right link between the EU SDS and the NSDS; and
  • evaluating the progress of NSDS regarding its objectives, resulting in potential amendments and fine-tuning exercises of the respective SD strategy.

The report, “National Strategies for Sustainable Development”, published by the International Institute for SD (IISD) and the German Society for Technical Cooperation (GTZ) in 2004, pointed out that although important progress has been made, the 19 countries analysed in the report were only at “the early stages of learning toward effective strategic and coordinated action for SD” (IISD & GTZ, 2004, ix). One of the key challenges identified in the report is the development of feedback mechanisms, including monitoring, evaluation, learning and adaptation.

In early 2005, the European Environment and Sustainable Development Advisory Council (EEAC) published the study “Sustaining Sustainability” about the state-of-the-art of NSDS in nine EU MS. One general finding is that most countries characterise the SD process as a ‘learning process’. As a consequence, “SD strategies cannot be implemented like a ‘plan’, but need flexible approaches on the government side with at the same time firm and accountable objectives” (Niestroy, 2005, 11). The study identifies several future challenges, among them an in-depth comparative analysis of national priorities, targets and indicators outlined in the NSDS.

Steurer and Martinuzzi (2005) provide a comprehensive overview of strategy formation patterns and experiences with NSDS in Europe. They refer to NSDS as important policy strategy documents available to governments to systematically organise the SD policy-making process and list key characteristics and good practices of NSDS processes. As their analysis shows, virtually all European NSDS processes feature some sort of monitoring and/or evaluation mechanisms, many of them outlined in the strategy documents themselves.

As mentioned in a recent OECD report (2006) on good practices in the NSDS, these documents must not be understood as static plans, but should evolve over time as more information becomes available and the implementation process is evaluated and/or reviewed. The OECD report suggests that “learning, adaptation and continual improvement should be characteristics of national strategies. This requires a process to monitor strategy implementation, to report to government bodies and stakeholders, and to feed back information for adjustment and improvements” (OECD, 2006, p. 29). In this context, the guidelines developed for NSDS by various bodies, like the OECD, the UN and the International Institute for Environment and Development (IIED) highlight the need for evaluation in order to enable learning about whether a strategy is on the right path and if the objectives and targets set out are translated into action.

The topic is also addressed in the renewed EU SDS that includes a section on ‘implementation, monitoring and follow-up’ which states that “a progress report on the implementation of the SD strategy in the EU and the Member States” (EU SDS, p. 26, 33) will have to be submitted every two years.

 

Provisions for Evaluation, Reviews and Monitoring in the NSDS

All NSDS processes foresee some sort of evaluation, review or monitoring mechanism. For an overview of the mechanisms used in Europe, please go to the Country Profiles section of the ESDN website.

 

Practical Experiences with Evaluating and Peer Reviewing NSDS

This section provides an overview of the practical experiences with qualitative evaluations and reviews gained in selected European countries.

1) Peer Reviews

Peer reviews are most often associated with the Organisation for Economic Cooperation and Development (OECD), which began to use the peer review process in the 1960s. Since then, peer reviews have lain at the heart of international cooperation in the OECD, and the method has been adopted by various international organisations, such as the EU, the United Nations (UN), the International Monetary Fund (IMF) and the World Trade Organisation (WTO). In a 2002 OECD report, a peer review is described as “the systematic examination and assessment of the performance of a state by other states, with the ultimate goal of helping the reviewed state improve its policy making, adopt best practices, and comply with established standards and principles” (p. 4). Furthermore, it is highlighted that a peer review is conducted on a non-adversarial basis and relies heavily on mutual trust among the states involved as well as their shared confidence in the process. Lehtonen (2005, 177) argues that “peer reviews can be seen as a mechanism attempting to combine the functions of learning and accountability within a single evaluation framework”.

In the context of SD, the European Union addressed the issue with a proposal at the World Summit on SD in Johannesburg in 2002, where it suggested developing a system for promoting the sharing of experience with NSDS between countries. The need for more coherence between the various NSDS in Europe and for mechanisms to better pool experience and good practices remains an objective of the EU, especially in the face of the diversity of SD strategy approaches. This idea was taken up by the European Commission in its proposal for a revised EU SDS, where it suggests, in the chapter on ‘delivering results’, to “undertake a light peer review process, focusing on specific themes, and in particular, seeking to identify examples of good policies and practices that could be implemented by all” (p. 14). The uptake of peer reviews for NSDS was further specified and concretized in the renewed EU SDS from June 2006. In paragraph 37 it says, “with regard to the national level, the Commission report [biennial progress report on the implementation of the SD strategy in the EU and the MS, starting in September 2007] will build on Member States’ actions to implement the EU SDS and the results gained from completed Peer Reviews”. Paragraph 42 specifies the voluntary peer reviews of NSDS that “should start in 2006 with a first group of Member States”. The following points are mentioned in paragraph 42 about the execution of peer reviews:

  • peer reviews should involve officials and stakeholders from other MS and, where appropriate, international observers;
  • peer reviews should focus either on the strategies as a whole or on specific themes;
  • they should also serve to identify examples of good policies and practices;
  • a subsequent round of peer reviews could start in 2007 with the next group of Member States; and
  • peer reviews could be supported by scientific evidence through external evaluation.

The idea behind the peer reviews of the NSDS within the EU is to identify and share good practices in a process of mutual learning. The peer review of a national strategy is voluntary and will be undertaken upon the initiative of the MS concerned. The process should be a bottom-up exercise with participatory elements – involving stakeholders from all political levels – with no intention to ‘name and shame’. The peer reviews are intended to address all three SD pillars and the peer reviewed country is free to choose to undertake a review of the whole NSDS or focus on one or more specific issues.

The European Commission proposes to follow the peer reviews as an observer, providing methodological help if required and financial incentives during the two-year pilot phase. Regarding methodological help, the Commission published in February 2006 a “Guidebook for Peer Reviews of National Sustainable Development Strategies” that offers practical guidance. It is based on past experiences of evaluating NSDS, interviews with selected governmental and NGO representatives, as well as the experiences made with the French peer review process (see details below). Regarding financial incentives, the Commission has launched a call for proposals to provide financial support to peer reviewed MS. The call was open until 30 June 2006. The first country to undertake a peer review in the context of the renewed EU SDS is the Netherlands, with Germany and Finland being the most likely peer countries. The Dutch peer review process is intended to start in late 2006.

One non-EU country is also planning a peer review process: Norway intends to undertake a peer review as part of the revision process for its NSDS and as a review of the country’s environmental policy. Sweden will be the main peer country in this process. No concrete timetable for the peer review process has been agreed upon yet. Norway intends to consult the Guidebook published by the European Commission; however, it will not undertake a full peer review of the whole NSDS but will select certain topics for a special focus.

 

A Guidebook for Peer Reviews of NSDS

The guidebook on peer reviews has been produced for the European Commission by an independent project team. It presents an approach to mutual improvement and learning on NSDS that can be applied across all EU MS. It is furthermore intended to provide essential information needed for undertaking a peer review process in an accessible and easy-to-follow framework. Therefore, the guidebook is essentially a toolbox to support the exchange of good practices between MS and to improve the link between the EU and the national level. A common approach to NSDS peer reviews among EU MS should help to overcome common challenges and support the exchange of experiences, while fully respecting the diversity of national approaches, priorities, goals and targets.

Below is a graph outlining the key steps in the review process as proposed in the Guidebook for Peer Reviews:

Source: Guidebook for Peer Reviews of NSDS, 2006, p. 14

Experiences with peer reviewing in France

At the World Summit on SD in Johannesburg in 2002, the French President, Jacques Chirac, made a commitment that France would organise a peer review process for its NSDS. This political commitment followed a proposal made by the EU to develop a peer review process in order to promote the sharing of experiences. Accordingly, a project was initiated in 2004 by two French ministries, the Ministry of Ecology and SD and the Ministry of Foreign Affairs, with the objective of developing and testing a methodology for the ‘peer review’ of NSDS. The International Institute for Environment and Development (IIED) provided methodological help. Belgium, the UK, Ghana and Mauritius were chosen as peer countries for the peer review process (Brodhag and Talière, 2006). Below is a graph which shows the steps that were used in the French peer review process:

Source: French Ministry of Ecology and Sustainable Development & Ministry of Foreign Affairs, 2005, p.12

The four main steps of the French peer review process were:

  • Preparation of Background Report: Based on a questionnaire that was circulated among governmental and civil society actors and personal interviews undertaken with additional key stakeholders as well as an analysis of several NSDS, a background report was provided as a basic resource for the peers.
  • Methodology Workshop: The peer countries and several French stakeholders met for a two-day workshop in November 2004 to reflect and agree upon the methodology used in the peer review process. It was decided that the peer review should focus on four key strategy components: (i) process, (ii) content, (iii) outcomes, and (iv) monitoring and indicators.
  • Peer Review Workshop: One week in February 2005 was devoted to a peer review/shared learning workshop with two representatives from each of the peer countries (one from government, one from civil society), representatives from the UN, the EU and the International Organisation of La Francophonie, as well as 35 French stakeholders from governmental institutions and civil society. The French participants provided answers and commented on a set of key questions in the four focus areas that had been developed by the peers (on the basis of the background report and related documents). The peer countries also reflected upon their own experiences regarding the selected topics. Finally, the peers had one day to discuss their conclusions and recommendations.
  • Final Report: The final report of the French peer review includes 44 recommendations of the peer countries, grouped along the four selected topics.

The success of the French peer review process depended on three key issues: first, the strong political commitment to the whole process, especially the commitment made by the President of France; second, a clear objective regarding the reason for and scope of the peer review process; and third, a positive approach by, and relationship between, the peer countries.

Based on the experiences gained in France, there are two key stages for a peer review process:

  1. The preparatory stage that includes the development of a concise background report (approx. 20-30 pages). This report should provide an overview of the administrative structure and decision-making processes in the country as well as the development process of the NSDS.
  2. The peer review workshop, with participation from other countries (peer countries) and national/regional key stakeholders that are involved in SD policy-making and the implementation of the NSDS.

Due to the growing international interest, an Expert Group Meeting on reviewing NSDS was held at the UN headquarters in New York in October 2005. In this meeting, several of the participating countries expressed their interest in organising a peer review or a similar process of shared learning. Full documentation of the event can be found on the UN homepage.

 

2) External Evaluation

The main defining characteristic of external evaluations is that they are undertaken by institutions that have no direct responsibility for the development or implementation of the NSDS. This form of evaluation is, therefore, undertaken by external, government-independent evaluators (e.g. research institutes, consultants) from either within the country or from other countries. Several countries have gained experience with external evaluations:

Experiences with external evaluation in Austria

In February 2005, the Austrian Ministry of Agriculture, Forestry, Environment and Water Management (BMLFUW) invited a pre-selected number of institutions to submit proposals for the evaluation of the Austrian NSDS (adopted in 2002). In April 2005, an interdisciplinary group of independent researchers and consultants from Germany and Austria was appointed by the BMLFUW to undertake the evaluation and prepare a final report. The main objective was to evaluate the implementation instruments of the strategy, not, however, the strategy and its policy goals themselves. The requirement to undertake such an evaluation was set out in the NSDS, with the aim of improving the strategy’s impacts and institutional effectiveness.

The actual evaluation was undertaken between April and November 2005. Two bodies guided the evaluation process: for organisational purposes, the external evaluation team was supported by a steering group whose task was to coordinate the various organisational issues of the whole evaluation process. Moreover, a project board, consisting of various SD experts and stakeholders, accompanied the actual evaluation. It provided a forum for ongoing critical analysis and recommendations, which offered important inputs for the participative style of the evaluation process. The evaluation was based on a range of selected criteria, i.e. consistency, effectiveness, efficiency, appropriateness and transparency.

At the beginning of the evaluation process, the Austrian NSDS as well as the institutions responsible for coordination, implementation and monitoring were compared with the experiences of other OECD countries, based on the 2004 IISD/GTZ study. For the Austrian evaluation process, it was decided to focus on ‘institutions’ (e.g. Committee for a Sustainable Austria, Forum Sustainable Austria, working group of the regional SD coordinators, networks, etc) and ‘instruments’ (e.g. sustainability measures, work programmes, progress reports, indicators, etc.). For both of these, specific analytical criteria were developed. The following steps were most crucial in the Austrian evaluation process:

  • Right at the beginning, a short description of the evaluation process was distributed among the representatives of the Committee and the Forum in order to inform all important stakeholders in the country about the evaluation.
  • In May and June 2005, the evaluation of the sustainability measures began with the distribution of questionnaires to actors responsible for the implementation of individual measures. This was done in two rounds: A first round of questionnaires was distributed among 280 actors. For the second round, 60 measures were identified for a more in-depth questionnaire survey.
  • The first outcomes of the survey were presented to and discussed with stakeholders at a meeting of the Forum Sustainable Austria in June 2005.
  • From July until November 2005, the work programmes, progress reports and indicators were evaluated. In total, 31 experts were interviewed in order to gain a more in-depth analysis of the implementation of the NSDS. The majority were face-to-face interviews, complemented by some telephone interviews.
  • From August 2005, the implementation processes and institutions were evaluated. One important step in this part of the evaluation was the organisation of a two-day ‘Round Table Sustainable Development’ which allowed the evaluation team to undertake further interviews and to integrate stakeholders from the sub-national level into the evaluation process.
  • In September 2005, a one-day workshop with project leaders was held in order to gain comprehensive insights into the impacts of individual sustainability measures.
  • The final evaluation report was presented to the Austrian Government in November 2005. The recommendations made in the report were further discussed with various institutions during 2006 in order to strengthen the strategy process.

Two issues were considered as particularly important in the Austrian external evaluation process: First, to have an external point of view when evaluating the NSDS is an important added value. Second, to ensure that this external point of view is accepted at the political and administrative level, it is necessary to involve those actors in the evaluation process who are responsible for the implementation of the NSDS.

 

Experiences with external evaluation in Switzerland

An external evaluation of the Swiss NSDS from 2002 was conducted by two independent institutions in 2006. The two government-independent evaluators were private consulting companies, Interface-Institute for Policy Studies and Evaluanda. This external evaluation process was organised in the context of a planned revision of the NSDS in 2007. It comprised an evaluation of the strategy process, content and products (outputs), outcomes and impact aspects. The results of this external evaluation will be published in autumn 2006 and will then be available at the ESDN website.

 

3) Internal Reviews

Internal reviews of NSDS are undertaken by national governments in order to measure progress towards the commitments, targets and objectives that were set out in the strategy document. The review is usually undertaken by government-related bodies, i.e. ministries or other administrative bodies (e.g. National SD Councils), with little or no external input, and delivered in a report which is made publicly available. A number of European countries have gained experience with internal reviews over the last several years, e.g. the UK, Belgium most recently in 2005 (report available only in French and Dutch) and in 2002, Switzerland in 2004 and Finland in 2003. For further information, please also check the Country Profiles section on the ESDN website. Below, the experiences made with internal reviews in the UK are described in more depth.

Experiences with internal reviews in the UK

In 1999, the UK published its second NSDS, entitled “A Better Quality of Life”. As part of the implementation process of the strategy, annual progress reviews had to be produced by the Department for Environment, Food and Rural Affairs (DEFRA). The latest of these internal review reports was issued in March 2004. This review was conducted by the SD Unit within DEFRA and was part of a broader review process in the context of the development of a new NSDS in 2005. Therefore, the main objective of the report was not only to cover the key developments that occurred during 2003 (the focal year of the review), but also to take stock and review government action towards SD as well as the progress of NSDS implementation since the publication of the strategy.

The internal review process in the UK began in mid-2003 with a process to gather stakeholder views and to organise workshops in order to identify key themes and establish a set of specific objectives for the review. The aim was to be clear about what the review should cover and what it should contribute to the process that led to the new NSDS, rather than undertaking it merely because of a prior commitment. The aims of the review were then set out in a consultation document (Jones, n.d.):

  • improving delivery of SD outcomes;
  • increasing awareness of and engagement with SD;
  • building a sense of common purpose while supporting devolved, regional and local diversity;
  • embedding SD more effectively in government action and policy-making;
  • building on what has been achieved while challenging the government and other actors to do more;
  • involving stakeholders and those responsible for the delivery of the strategy at all levels; and
  • providing leadership through a clear vision and priorities.

The review process involved the use of questionnaires sent to government departments and agencies, case studies and data collection. A steering group (consisting of key members of the internal review team, communications staff, the head of the SD Unit in DEFRA, and key statisticians) then commented on a draft report. Another draft report, incorporating the suggestions of the steering group, was later submitted to the ministers for comments and formal approval by the Government. The final internal review report was announced by a ministerial statement in Parliament and published in March 2004. The process for the UK internal review 2004 took about five months. It involved six full-time policy officers and two part-time communication/information officers, as well as contributions requested from government departments across all policy sectors (IIED, 2006).

The internal review covers the following topics:

  • strategic direction of SD;
  • UK government action, including the actions undertaken by the devolved administrations (Scotland, Wales and Northern Ireland) as well as the English regions and local authorities;
  • indicators, covering progress towards the headline indicators and the use of ‘traffic lights’ to show how the indicators are changing;
  • progress made towards achieving four pre-selected topics: sustainable economy; sustainable communities; managing the environment and resources; and international cooperation and development.

After the publication of the internal review report, a consultation process was launched in April 2004. It included a website, events on specific issues, regional and local events as well as the training of facilitators for discussion in community groups (Jones, n.d.). This process was part of the development of the new UK NSDS. A draft of this new strategy was discussed in the cabinet in December 2004. The new NSDS was launched in March 2005. The emphasis of the new strategy is on delivery and the continuing involvement of those who deliver SD on the ground (Jones, 2006).

It was important for the UK internal review of the NSDS to involve the views of various stakeholders and those responsible for the delivery of the strategy early in the review process and to focus on the policy integration aspects of the strategy. However, the ‘traffic lights’ were considered to be inadequate for measuring strategy progress, mainly because they show progress on individual issues but not across the various topics (Jones, n.d.).

 

Conclusions: Pros and Cons of the Presented Evaluation Approaches

This concluding section offers a table with the pros and cons of the three qualitative evaluation approaches presented, namely peer reviews, external evaluations and internal reviews. The information provided in the table is based on the following three sources:

Peer Review

Pros:
  • close relationship and exchange of good practices between countries, enhancing relationships between peer countries;
  • peers (public administrators) have inside knowledge about political processes and administrative issues;
  • higher acceptance of outcomes among public administrators;
  • possibility of including a variety of actors and stakeholders;
  • facilitates intensive dialogue between those involved.

Cons:
  • time-intensive;
  • resource-intensive;
  • risk of peers not being critical and objective enough;
  • depends on high-level political commitment.

External Evaluation

Pros:
  • independence and outside perspective: evaluation undertaken by researchers or consultants;
  • scientific methods and knowledge-based judgement;
  • science-based evaluation can help policy-makers justify further actions to implement SD policies;
  • involves guidance on how to improve actions and processes.

Cons:
  • quality of the evaluation depends on the selection process for external evaluators;
  • external evaluators have less inside knowledge about strategy processes and implementation procedures;
  • possible influence from policy-makers or public administrators on external evaluators may dilute evaluation outcomes.

Internal Review

Pros:
  • undertaken by internal actors who know the strategy process and implementation procedures best;
  • easier involvement of important government departments;
  • considerable experience with this approach in several European countries may help in review design and offers the possibility of comparisons.

Cons:
  • if no external stakeholders are involved, danger of bias and lack of objectivity;
  • lack of an “outside view” can imply a lack of innovation (thinking “outside the box” is difficult);
  • these cons can be addressed by involving NCSDs, but many NCSDs lack the resources to participate in thorough reviews.

 

References

BMLFUW (Austrian Ministry of Agriculture, Forestry, Environment and Water Management) (2005) Evaluation Study on the Implementation of Austria's Sustainable Development Strategy, Vienna: BMLFUW, English Summary: http://www.nachhaltigkeit.at/strategie/pdf/summary_en_IU503_06-05-29.pdf, German Long Version: http://www.nachhaltigkeit.at/strategie/pdf/Evaluationsbericht_NStrat_Langfassung_06-05-11.pdf.

Brodhag, C. & Talière, S. (2006) “Sustainable Development Strategies: Tools for Policy Coherence”, Natural Resources Forum, 30: 136-145.

Dalal-Clayton, B. & Bass, S. (2006) A Review of Monitoring Mechanisms for National Sustainable Development Strategies, London: IIED, Report prepared for the OECD,
http://www.nssd.net/otherdocuments/OECD_Review_final.pdf.

DEFRA (Department for Environment, Food and Rural Affairs) (2004) Achieving a Better Quality of Life: Review of Progress Towards Sustainable Development, UK Government Annual Report 2003, London: DEFRA, http://www.sustainable-development.gov.uk/publications/pdf/ar2003.pdf.

European Commission (2006) A Guidebook for Peer Reviews of National Sustainable Development Strategies, http://ec.europa.eu/environment/pdf/nsds.pdf.

French Ministry of Ecology and Sustainable Development & Ministry of Foreign Affairs (2005) The French Strategy for Sustainable Development: Report on a Peer Review and Shared Learning Process, http://www.ecologie.gouv.fr/article.php3?id_article=4321.

George, C. & Kirkpatrick, C. (2006) “Assessing National Sustainable Development Strategies: Strengthening the Links to Operational Policy”, Natural Resources Forum, 30: 146-156.

IISD (International Institute for Sustainable Development) & GTZ (German Society for Technical Cooperation) (2004) National Strategies for Sustainable Development: Challenges, Approaches and Innovations in Strategic and Co-ordinated Action, http://www.iisd.org/pdf/2004/measure_nat_strategies_sd.pdf.

Jones, B. (2006) “Trying Harder: Developing a New Sustainable Strategy for the UK”, Natural Resources Forum, 30: 124-135.

Jones, B. (n.d.) NSDS: Progress in the UK, available from the ‘National Strategies for Sustainable Development’ (nssd) homepage: http://www.nssd.net/pdf/peer_review/English64.pdf.

Lehtonen, M. (2005) “OECD Environmental Performance Review Programme: Accountability (f)or Learning?”, Evaluation, 11(2): 169-188.

Martinuzzi, A. (2004) “Sustainable Development Evaluations in Europe – Market Analysis, Meta Evaluation and Future Challenges”, Journal of Environmental Assessment Policy and Management, 6 (4): 411-442.

Niestroy, I. (2005) Sustaining Sustainability: A Benchmark Study on National Strategies towards Sustainable Development and the Impact of Councils in Nine EU Member States, EEAC.

OECD (2006) Good Practices in the National Sustainable Development Strategies of OECD Countries, Paris: OECD, http://www.oecd.org/dataoecd/58/42/36655769.pdf.

OECD (2002) Peer Review – A Tool For Cooperation and Change: An Analysis of an OECD Working Method, Paris: OECD (compiled by Fabrizio Pagani), http://www.oecd.org/dataoecd/33/16/1955285.pdf.

Steurer, R. & Martinuzzi, A. (2005) “Towards a New Pattern of Strategy Formation in the Public Sector: First Experiences with National Strategies for Sustainable Development in Europe”, Environment and Planning C: Government and Policy, 23: 455-472.

 

 
