
Summary: Linear Infrastructure Planning Panel Meeting Note, Meeting #3, 27th September 2023


What social, environmental, and economic metrics should be used in linear infrastructure planning in the energy and water sectors?


The Chair noted that spatial planners are required to consider a vast range of social, environmental, and economic metrics when planning linear infrastructure (see Annex 1 in the briefing paper). An informal survey conducted with the group to gauge opinions on the most important metrics in these areas indicated that the key metrics of concern were:

  • Greenhouse gas (GHG) emissions and climate impacts

  • Biodiversity and habitats

  • Population and human health

  • Pollution and nuisances

  • Landscape, visual amenity, and culture/heritage: While scoring lower in the poll than the above impacts, this metric attracted significant attention in survey responses.


In addition to the above, cross-cutting metrics regarding sensitive areas and cumulative and distributional effects need consideration. Other factors, such as sector-specific and devolved requirements, are also clearly important. Although the focus of the meeting was not on financial metrics, global research shows significant cost overruns in many large infrastructure projects. The way in which the social, environmental, and economic impacts of a project are dealt with can clearly influence costs.


The complexity of the legal, policy, and regulatory landscape in infrastructure planning can make dealing with metrics challenging. It is not always clear how technical sector-specific infrastructure requirements sit alongside spatial requirements. Design principles and system modelling tools (e.g. Digital Twins) can help maintain focus on the desired outcomes and map how systems work, identifying associated dependencies and impacts. At the local level, metric development can help engage stakeholders in a tangible way.


Gaps in the data can make developing metrics more difficult, and even when data is available, it is not always robust. Data is frequently collected on a project-by-project basis; sharing common data sets could help increase efficiency. Breakout groups considered the following issues:


The challenges in developing social, environmental, and economic metrics


  • Clarity on the outcomes sought: This is important to identify the supporting actions and decisions that are needed. Given the complexity and uncertainty inherent in many of these areas, and the need for pace, clear outcomes can make the data collection and prioritisation task easier.

  • Data quality and granularity: An audit is needed of existing data, assessing its quality and availability. Poor data quality can lead to suboptimal decisions. Limits to the granularity of data can also be problematic (e.g. the Eco-systems Intactness Index was seen as valuable but only available at a large scale).

  • Data standardisation and consistency: Standardisation of data across different stakeholders, including local authorities, is considered important. Achieving geospatial consistency with data, especially in the context of linear infrastructure projects that span large areas, is a challenge. Gathering data across vast regions is different from collecting data for a single, localised site, such as a wind farm.

  • Data access: Access to data has improved, with more open data available. However, data availability still varies between different organisations and regions in the UK. While land cover data is readily accessible, information on other factors like soils may be less accessible. Obtaining data from multiple organisations can be particularly challenging, especially when it comes to biodiversity data. Biodiversity data tends to be held by localised groups, and this challenge is often addressed by assessing impacts on habitats rather than individual species.

  • Data regulation, management and use by decision-makers: Regulation is needed particularly in areas like carbon metrics and soil carbon measurement, where variations in data quality and outcomes are problematic. The need for ongoing reflection, monitoring, and dynamic data management was emphasised to ensure data robustness in the future. There was a suggestion that decision-making bodies should be required, possibly through regulation, to utilise data in their decision-making processes. However, the capacity to analyse such data may be limited, as seen in the example of biodiversity net gain where only a few councils have officers capable of working with such data.

  • Quantification: The discussion touched upon the challenge of quantifying certain aspects versus accepting that some factors will always remain subjective and unquantifiable. For example, visual amenity can be challenging to quantify, and some attendees thought existing metrics in this area may not be suitable. In terms of biodiversity, cultural value is attached to some species and not others. Some thought it essential to consider both empirical and non-empirical values (happiness as an example of something that could fall into both categories).

  • Boundaries and complexities: There are challenges in defining boundaries for metrics (e.g. global/national/local), understanding temporal impacts, and comprehending the whole system impacts in the context of GHG/climate metrics.

  • Data tools, models and integration: Multiple tools and models are relied upon to address climate change, indicating a complex landscape in this domain. Data integration platforms can help. For example, the NEVO platform (Natural Environment Valuation Online) illustrates how different indicators can be integrated into an analysis. Such platforms currently tend to be research-focused rather than operationally focused. Others suggested that the UN Sustainable Development Goals could provide a model of metric development. It was noted that ‘all models are wrong; some models are useful.’


Additional metrics that could be developed:


  • Equalities and health: Some attendees considered that there needs to be a focus on the intersection of equalities and health metrics. Public sector equality duties apply to planning decisions, and there's potential for these metrics to be used to help address structural inequalities. The need to define what is meant by a "healthy environment" was also flagged.

  • Speed and costs of inaction: This is particularly relevant in terms of GHG/climate.

  • Stakeholder and community engagement: Metrics to measure engagement would be helpful.

  • Natural capital: A need for further metrics in this area was considered important.

  • Embedded or displaced carbon: This could be important in specific contexts.

  • Supply chain feasibility: Whether a choice is deliverable from a supply chain perspective (e.g. people, skills, availability of rare earth metals, etc.).


Dealing with uncertainty in terms of what is required in linear infrastructure planning: law and environmental assessment. Dr Sue Chadwick, Pinsent Masons


Sue’s presentation (attached) highlighted that in any decision-making process related to development, it is crucial to account for future environmental impacts, even though these often involve a degree of uncertainty. While data and analytics play an increasing role in evaluating and predicting environmental impacts, they cannot completely eliminate uncertainty. The responsibility for assessing uncertainty typically falls on humans, whether it be a planning committee, an inspector, or the Secretary of State. At some point there has to be a human judgement about future impacts, and when others are unhappy with that decision, it may be challenged in the courts.


In court, the interpretation of statutes involves a contextual and purposive approach, considering the law's background, explanatory notes, consultations, and ministerial statements to discern its intended purpose. For example, in the Hopkins Homes case a developer and a local authority had a dispute about the interpretation of the words "policies relevant to homes." The Supreme Court's decision highlighted that policies are treated differently from statutes, with a more flexible approach to interpretation. In the Boswell Decision, the Court expressed its reluctance to intervene in the merits of climate decision-making, emphasising that the courts do not make judgments on the scientific merits of government policies related to climate change.


In a Good Law Project challenge regarding the government's use of WhatsApp and its instant deletion function, despite multiple policies, the Court did not find the government's actions invalid, highlighting that policies are not enforced as if they were laws. In the Mott Decision on limits on salmon fishing in the Severn estuary, the Court emphasised that its role is not to form views on technical matters but rather to assess the rationality of decisions. In the Baci Case, the courts reiterated their stance of not delving into the scientific integrity of the Environment Agency's assessment. The emphasis was on the rationality of the decision-making process. In the Sizewell C case, the Court emphasised its reluctance to remake scientific assessments, especially for predictions far into the future, reinforcing the principle that courts stand back from scientific assessments.


Statutes are legally binding, while policies offer more room for interpretation. Courts tend to exercise caution when it comes to policies, often deferring to the judgment of policymakers.


Regarding scientific evidence, courts typically refrain from making determinations about its validity. Instead, they concentrate on the fairness, legality, and rationality of the decision-making process. Factors such as due process and consideration of relevant aspects are assessed to determine the decision's legitimacy. In terms of the duty of fairness, they will consider whether both sides have had a say and access to the same information.


A critical aspect of court evaluation is rationality. Courts scrutinise whether decisions were rational and lawfully made. If a decision is deemed arbitrary, biased, or lacking in proper inquiry, it may be found legally flawed.


The presentation also introduced the concept of statistical uncertainty, particularly in the context of employing models for decision-making. The planning and legal professions don’t currently recognise statistical uncertainty. While models are valuable tools for predicting environmental impacts, Sue emphasised that they cannot entirely eliminate uncertainty. Models are surrounded by ‘a no-man’s land worth of factors that may bias the model but which are not included in it.’ This uncertainty arises from various sources, including the inherent complexity of environmental systems and data limitations.


To address this uncertainty, there are several strategies, including being explicit about areas of uncertainty, quantification of associated risks, comparison of risk profiles, assessment of different sources of information, and maintenance of a clear audit trail during the decision-making process. These strategies can help decision-makers and courts navigate and manage uncertainty more effectively.
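

To illustrate the ‘be explicit about uncertainty’ and ‘compare risk profiles’ strategies, the minimal sketch below shows how stated uncertainty ranges might be propagated into comparable risk profiles for two options. It is not from the presentation; the option names, ranges, and units are hypothetical.

```python
# Illustrative sketch only: propagating explicit uncertainty ranges into
# comparable risk profiles for two hypothetical options. All names, ranges,
# and units are invented for the example, not drawn from the meeting.
import random
import statistics

def risk_profile(cost_range, impact_range, n=10_000, seed=1):
    """Sample n scenarios from simple uniform ranges and summarise the spread."""
    rng = random.Random(seed)
    totals = [rng.uniform(*cost_range) + rng.uniform(*impact_range) for _ in range(n)]
    deciles = statistics.quantiles(totals, n=10)
    return {"mean": statistics.mean(totals), "p10": deciles[0], "p90": deciles[-1]}

# Hypothetical monetised cost and monetised environmental impact (same units).
options = {
    "route_a": risk_profile(cost_range=(80, 120), impact_range=(10, 40)),
    "route_b": risk_profile(cost_range=(90, 100), impact_range=(20, 30)),
}

for name, profile in options.items():
    print(f"{name}: mean={profile['mean']:.1f}, "
          f"p10={profile['p10']:.1f}, p90={profile['p90']:.1f}")
```

Recording the ranges, the sampling assumptions, and the resulting spread, rather than a single point estimate, also supports the clear audit trail described above.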


Transparency and public involvement in decision-making processes, even when algorithms or models are utilised, are important. Transparency fosters trust and accountability by making decisions accessible to the public and ensuring that the rationale behind them is well-documented. This transparency is crucial in promoting fairness and legitimacy in the decision-making process.


Points raised in the subsequent discussion included:


  • How to determine when there is too much uncertainty (e.g. through impact assessments, stop/start mechanisms etc).

  • The importance of recording what is included in the model, what is not, and the key metrics shaping it.

  • The need to be more rigorous in the use of language, particularly around the differences between uncertainty and risk.

  • Recognising that reducing uncertainty comes at a cost, in money and/or time. There is a trade-off between uncertainty and urgency, and choices may be needed between doing things perfectly and doing them on time. Acknowledging urgency requires space for making sub-optimal decisions, accompanied by mechanisms for course correction. Some considered that tools such as visual impact assessments could be too hard/a waste of time.

  • The potential of using hypothetical models to engage stakeholders in discussions on uncertainty before real decisions have been made.

  • The importance of being clear about the counterfactual and the implications of not doing things (recognising ‘the future isn’t what it used to be’). For example, the impact analysis of building a transmission line should be accompanied by the impacts of not doing it so that costs and values can be compared.

  • Recognising that new infrastructure will be needed just to maintain existing services.

How should the desired social, environmental, and economic metrics be used and applied in linear infrastructure planning in the energy and water sectors?


The Chair explored some of the issues involved in developing metrics in practice. These include putting values on things that can be difficult to monetise. A possible example is Defra’s guide prices for statutory credits for Biodiversity Net Gain. Although not perfect, these guidance tables at least enable a more consistent and systematic approach in this area and more informed engagement. Could this, building on the guidance that already exists in the HMT Green Book, be a potential model for the development of social metrics?


The need to consider social, environmental, and economic metrics ‘in the round’ was also explored. There can clearly be dependencies between different impacts. Planners need to understand these and take a holistic view. The financial sector’s work on integrated reporting, which considers company performance against different types of ‘capitals’ (natural, human, social and relationship, financial etc), could provide some useful lessons here. In planning terms, the need to consider system and multiple impacts requires decision makers to understand the connections between different factors and then make decisions around how different metrics should be ranked and weighted. Automated multi-criteria analysis (a simple illustration is sketched below) can support decision makers as they explore trade-offs and consider alternatives. Any method will need to recognise the challenges of ‘edge cases.’
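

As one illustration of what automated multi-criteria analysis can look like in practice, the sketch below combines ranked and weighted metrics into a single comparison of options. The metric names, weights, and scores are hypothetical and not taken from the meeting; a real analysis would also need sensitivity testing of the weights.

```python
# Illustrative sketch only: a weighted-sum multi-criteria comparison of options.
# Metric names, weights, and normalised scores are hypothetical.

# Decision-maker weights reflecting how metrics have been ranked/weighted.
weights = {"ghg": 0.35, "biodiversity": 0.30, "health": 0.20, "landscape": 0.15}

# Normalised scores per option (0 = worst credible impact, 1 = best), hypothetical.
options = {
    "route_a": {"ghg": 0.6, "biodiversity": 0.8, "health": 0.7, "landscape": 0.4},
    "route_b": {"ghg": 0.9, "biodiversity": 0.5, "health": 0.6, "landscape": 0.7},
}

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalised metric scores, normalising the weights themselves."""
    total_weight = sum(weights.values())
    return sum(weights[m] * scores[m] for m in weights) / total_weight

# Rank options from best to worst overall score to surface the trade-offs.
for option in sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True):
    print(option, round(weighted_score(options[option], weights), 3))
```

Changing the weights and re-running the comparison is a simple way to reveal which trade-offs actually drive the ranking, which is where ‘edge cases’ tend to surface.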


The briefing paper highlighted the following possible criteria that could be used to rank/weight metrics:


  1. The clarity of existing law and regulation surrounding that metric

  2. The availability of guidance on priorities or trade-offs in relevant existing policy frameworks etc relating to that metric

  3. Whether that metric is seen as novel or controversial

  4. Project specific information / impacts (e.g. location, degree of risk etc)

  5. Where project impacts sit in the mitigation hierarchy (avoid, minimise, restore, offset)

  6. Whether sufficient robust and standardised data, or accepted valuation guidance, is available


In discussing these criteria, breakout groups raised the following points:


  • Incorporating system effects into metrics: There is a need to include the effects on the broader system when defining metrics and their weighting. This is important to ensure that we consider not just individual data points (trees) but the overall system (forest) when making decisions. Examples given of a systems approach to metrics included the work in the water sector by Water Resources South East and the Ox-Cam Arc Integrated Water Management Framework. Others highlighted the proposed Strategic Spatial Energy Plan as an example, as this should play an important role in helping build boundaries and direction and, if endorsed, should help achieve focus. Some thought ‘metrics should follow not lead’ system effects, noting the construction industry can forget the systems impacts of a project. Others highlighted that to avoid things getting ‘too general/conceptual’ there is a need to draw boundaries around the systems included. Some considered that it may only be possible to develop systems maps once you have understood the individual components.

  • Flexibility and dynamic systems: The need for flexibility and dynamism in decision-making systems, and the importance of understanding the interaction between the criteria used, was raised. Continuous reviews and adjustments of how metrics are used are needed so they adapt to genuinely changing circumstances and project outcomes, not reacting to fashion or the loudest voice. One group expressed uncertainty about how frequently projects are reviewed after completion. They highlighted the need for a post-project review process to assess what actually happened and whether the project achieved its intended goals.

  • Resilience and intergenerational trade-offs: Resilience and future generations' interests may require additional considerations beyond the existing criteria.

  • Balancing policy, legal demands, and visionary thinking: The importance of striking a balance between adhering to policy and legal requirements, which are essential for project approval, and fostering visionary and creative thinking was raised. Creative methods and forward-thinking could help identify and develop alternative metrics, such as a happiness index.

  • Focused approach: To reduce complexity, develop only the metrics that are actually needed, and collect only the relevant data.

  • Credibility of metrics: One group suggested rewording the criterion related to "controversial" metrics to focus on the credibility of the metric. They proposed evaluating how widely a metric is used, whether it has global recognition, and whether it is generally acknowledged as a reliable measurement. Others considered “novel” should be kept as a criterion as many environmental impacts are novel and the science is rapidly changing.

  • Project-specific/local vs. general metrics: Need to get the right mix of project-specific metrics, which involve local considerations, and general metrics, including industry standards such as BIM or ISO. Local impacts are always likely to be important.


The breakout groups also considered the decision-making process and who should be responsible for decisions around ranking and weighting. The following points were made:


  • Who makes the decision: Need to distinguish between decision-making about the final outcome and decision-making about the procedures to reach that outcome. These aspects may involve different parties and considerations. Some decisions are technical or engineering problems, while others are social or political in nature. Decision-making isn't solely about following a hierarchy; it's also influenced by political factors. The democratic mandate and rational thinking need to coexist. There is a need to create a structure that is ‘democratic’ but not so democratic that decisions never actually converge or get made. Metrics can’t do everything. Who makes the decisions also depends on the type of project and its specific proposals/characteristics. There needs to be some connection between the decision maker and the consequences of the decision in terms of accountability, responsibility, beneficiary of the outcomes, who pays etc.

  • National vs. regional vs. local decisions: The question of whether decisions should be made at the national, regional or local level was raised. This issue is particularly relevant in the context of planning consultations where national and local plans may coexist. The need for clarity and certainty in the decision-making process and government priorities was considered of significant importance for infrastructure projects. There needs to be an acknowledgement of the different national/regional/local perspectives and the way they need to connect and create something coherent.

  • Principle-led approach: A principle-led approach that underpins all decision-making processes was seen as valuable and could establish clear goals and a framework to guide decision-makers and stakeholders.

  • Value of process in metrics: Certain metrics may hold value irrespective of their direct impact on outcomes. For instance, in some areas gathering local knowledge may not directly affect specific outcomes but can enrich the process and how things are done. Engaging stakeholders in the process of developing metrics can help people realise how hard it can be to build infrastructure that works, is resilient and has to deliver good climate/environmental, economic and social outcomes. Revealing trade-offs can often be illuminating.

  • Community Involvement: The importance of involving communities in the decision-making process was raised. Engaging with local communities can ensure that their voices are heard.

  • Role of third-party validation: Third parties, such as Ada or the Turing, could play a role in validating data and processes, similar to how environmental evidence is validated by experts.


Next steps


The next Panel meeting will be on 22nd November (10:00-12:30 on Zoom). Our deep dive will be on trust and new tools. We will explore: ‘Do the algorithmic calculations and advanced software assessments in new linear infrastructure planning tools do what they are meant to do?’ A draft scoping paper on this topic will be circulated for comment in October.


Attendee list


  • Dr Karen Barrass, Founder and Director, Climate Insights

  • Dr Melissa Bedinger, Centre for Future Construction, University of Edinburgh

  • Dustin Benton, Policy Director, Green Alliance

  • Eric Brown, Executive Adviser, Energy Systems Catapult

  • Dr Sue Chadwick, Strategic and Digital Planning Adviser, Pinsent Masons LLP

  • Dr David Clubb, Partner, Afallen LLP

  • Dave Costello, Environment and Consenting, Continuum Industries

  • Sharon Darcy, Linear Infrastructure Planning Panel Chair

  • Charline de Dorlodot, Communications, Continuum Industries

  • Mark Enzer, Strategic Advisor, Mott MacDonald; previously Director, Centre for Digital Built Britain

  • Alan Farquhar, Planning/Contaminated Land Manager, SEPA

  • Aiden Gill, Infrastructure Team, National Farmers’ Union

  • Diarmid Hearns, Scottish Environment LINK & Head of Public Policy, National Trust for Scotland

  • Paul Hickey, Ofwat/RAPID (Regulators’ Alliance for Progressing Infrastructure Development)

  • Ada Lee, Infrastructure Specialist, Royal Town Planning Institute

  • Andrew Lovett, Professor of Geography, University of East Anglia

  • Grzegorz Marecki, CEO, Continuum Industries

  • Rosie Pearson, Chairman, Community Planning Alliance and Founder, Pylons East

  • David Sigsworth, Adviser, Continuum Industries


Apologies


  • Darren Hemsley, Head of Supporting Good Development, NatureScot

  • Ragne Low, Deputy Director, Onshore Electricity, Scottish Government

  • Margaret Read, Director of Policy, National Infrastructure Commission

  • Phil Watson, Strategic Energy Policy Lead, Suffolk County Council


The views expressed in this note do not necessarily represent the views of all Panel members or of Continuum Industries.

