Harnessing AI in planning: opportunities and potential pitfalls

Harry Quartermain
December 16, 2024

Author's note: This content originally appeared in the Winter edition of the RTPI South West's Periodical Magazine: Branchout

--------------------------------------------------------------------------------------------------------------------------------------------------

Sound decision-making is the bedrock of the planning system. Whether it's determining an individual consent or crafting policy designations, decisions are made by humans, grounded in expert evidence. However, as planning increasingly leans on data-driven predictions, questions about the validity and transparency of such decisions are emerging.  

In this article, we look at the types of Artificial Intelligence (AI) commonly used in the planning process, how they work, where the opportunities and risks lie, and what you can do about those risks.

 

What are the main types of AI used in planning?

The two main types of AI most commonly used in planning and urban development are Large Language Models (LLMs) and Predictive AI.

Large Language Models (LLMs)

LLMs are designed to process and generate human-like text: they interpret and produce language, and summarise information. These models can analyse vast amounts of text, such as policy documents, research papers, and community feedback, to identify key themes and insights. For planners, LLMs can streamline tasks like drafting policy reports, summarising stakeholder consultations or generating plain-language explanations of complex planning rules.
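To make this concrete, below is a minimal sketch of the summarising step described above, assuming the OpenAI Python client is available; the model name, prompt wording, and sample submission are illustrative assumptions rather than a recommended setup.

```python
# Minimal sketch: asking an LLM to summarise one consultation response.
# The model name and prompt are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_submission(text: str) -> str:
    """Return the key themes raised in a single consultation response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your authority has approved
        messages=[
            {"role": "system",
             "content": "You summarise public consultation responses for a "
                        "planning officer. List the key themes and any material "
                        "planning considerations raised."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarise_submission(
    "The proposed access road would cross a public right of way "
    "and increase traffic past the primary school."
))
```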


Predictive AI

Predictive AI, on the other hand, focuses on analysing historical data to forecast future trends and outcomes. It uses techniques like regression analysis, clustering, and time-series modelling to make predictions. In urban planning, Predictive AI can forecast population growth, estimate the impact of proposed developments or model traffic patterns. It provides data-driven insights that support evidence-based decision-making, helping planners anticipate challenges and allocate resources effectively.
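To give a flavour of what sits underneath such forecasts, here is a minimal sketch of a regression-based population projection in Python using scikit-learn; the figures and the simple linear trend are invented for illustration, and real forecasting models are considerably more sophisticated.

```python
# Minimal sketch: fit a trend to historic population counts and project forward.
# All figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2001], [2006], [2011], [2016], [2021]])  # observation years
population = np.array([98_000, 101_500, 106_200, 109_800, 114_300])

model = LinearRegression().fit(years, population)

future = np.array([[2031], [2041]])
for year, forecast in zip(future.ravel(), model.predict(future)):
    print(f"{year}: ~{forecast:,.0f} residents")
```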

While the impact of LLMs is relatively new, the use of Predictive AI could be seen as an evolution of the kind of modelling that has underpinned traffic and climate assessment for years. However, the complexity of the computer modelling involved in this kind of analysis, along with growing awareness of the risks posed by data-borne biases, has led some to call into question the validity of data-driven decision-taking.

Large Language Models: efficiency vs. risk

LLMs offer significant resource-saving potential for local authorities. For planning officers, for example, who frequently face high workloads and tight deadlines, LLMs can streamline laborious or repetitive tasks, freeing up valuable time for other critical matters.

And with public consultations generating extensive volumes of feedback, including written submissions, survey responses, and meeting transcripts, LLMs can analyse these large datasets quickly and efficiently - identifying key themes, sentiments, and recurring concerns.

This capability significantly reduces the burden on planning or community consultation teams, allowing them to focus on interpreting findings and engaging with stakeholders rather than spending days or weeks sorting through submissions. 
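As an illustration of that aggregation step, the sketch below turns per-response theme lists (such as an LLM call like the one sketched earlier might produce) into a tally of recurring concerns; the data is invented.

```python
# Minimal sketch: tally recurring themes across many consultation responses.
# Themes are assumed to have been extracted already (e.g. by an LLM).
from collections import Counter

themes_per_response = [
    ["traffic", "road safety"],
    ["traffic", "loss of green space"],
    ["design", "traffic"],
]

tally = Counter(theme for themes in themes_per_response for theme in themes)
for theme, count in tally.most_common():
    print(f"{theme}: raised in {count} response(s)")
```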

At a time when quality planning officers are hard to come by, it's easy to see why this might be attractive. For resource-constrained local authorities, the adoption of LLMs represents an opportunity to enhance productivity, reduce delays, and improve the overall effectiveness of public consultations, ensuring better outcomes for communities with fewer administrative burdens.  

Although the technology and accuracy of LLMs are continuously improving, there is still some level of risk. With public consultations, for example, a single public submission letter that raises a previously unanswered material planning consideration is more significant than 1,000 generic letters of objection.

If overlooked, that objection could well be enough to result in a legal challenge, and even in the decision being quashed. For this reason, keeping expert humans in the loop is still valuable. Unless the LPA is certain that all the nuances of all the submissions have been given the “due consideration” required by law, there remains a risk that the cost of a Judicial Review could easily eclipse the cost savings offered by the AI.

In short, it might pay to remember that even if an LLM has written the text, the human author and their employer are still legally responsible for the content.

 

Predictive AI: who’s making the call and based on what evidence? 

While machines making planning judgments remains a distant prospect, data-driven predictions are increasingly shaping planning processes. These predictions influence decisions about future needs and impacts, but the growing reliance on algorithmic outputs raises important questions about transparency, accountability, and trust.  

This issue is particularly acute as people become more aware of the importance of the type and quality of the data on which algorithms are trained, and of the inherent behavioural or data biases (through inclusion or omission) in historic data.

In the realm of electronic databases, a common law presumption exists that computer records are correct. However, when these data are used within opaque models – so-called “black-box” algorithms – their application and the rationale behind their outputs can become impossible to explain. This opacity has already cast doubt on the validity of some data-driven decisions, with the recent Post Office scandal as the most prominent example.

The risks associated with black-box algorithms grow as reliance on them increases. These issues have been noted not only by the Science, Innovation and Technology Committee, who recently recommended stronger testing of AI algorithms, but also by the Nolan Committee, who have raised concerns about how the use of AI aligns with the seven ‘Nolan Principles’ of public life. 

In response to these challenges, new guidance is being developed by the Greater London Authority (GLA), advised by Dr. Sue Chadwick of top law firm Pinsent Masons, and with input from a range of industry experts, to create a governance framework for opaque algorithms.

The guidance specifically addresses the use of Predictive AI and opaque algorithms in planning processes. This document, designed as a dynamic and evolving resource, provides practical signposts to authoritative guidance on AI, AI assurance, and data ethics.

The guidance is intended for local authorities and private sector organisations using AI to support planning processes, whether through in-house models or commercially acquired software. Its primary aim is to foster transparency, mitigate risks, and establish a community of good practice for algorithmic decision-making in planning.  

The guidance framework focuses on five key areas to address these challenges:  

1. Transparency: Ensuring clear disclosure of algorithmic methods, following the ICO/Turing guidance.  
2. Risk assessment: Identifying potential compliance issues with GDPR, equality duties, and human rights standards.
3. Mitigation strategies: Using government AI assurance guidance to manage risks during and after implementation.  
4. Accuracy and monitoring: Testing for accuracy, maintaining human oversight, and recording interventions to ensure accountability.  
5. Public records: Documenting algorithmic use in government transparency templates to maintain public trust (sketched below).
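By way of illustration, here is a minimal sketch of the kind of public record point 5 envisages, loosely modelled on the UK's Algorithmic Transparency Recording Standard; the field names and values are hypothetical, not the official schema.

```python
# Hypothetical transparency record for an algorithmic planning tool.
# Field names and values are illustrative, not an official template.
transparency_record = {
    "tool_name": "Housing Need Forecaster",     # hypothetical tool
    "organisation": "Example Borough Council",  # hypothetical authority
    "purpose": "Forecast five-year housing need for local plan evidence",
    "technique": "Regression model trained on historic completions data",
    "human_oversight": "Outputs reviewed by a planning officer before use; "
                       "all overrides logged",
    "known_risks": ["historic under-delivery may bias forecasts downward"],
    "last_accuracy_review": "2024-11-01",
}
```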

This new guidance arrives amid a relative void of government legislation or guidance on the use of AI in planning. The Planning Inspectorate recently released its own guidance on the use of AI in casework, which includes requirements to:

  • Clearly label where you have used AI in the body of the content that AI has created or altered, and clearly state that AI has been used in that content in any references to it elsewhere in your documentation.  
  • Tell us whether any images or videos of people, property, objects or places have been created or altered using AI.  
  • Tell us whether AI has been used to change, augment, or remove parts of any original image or video, and identify which parts of the image or video have been changed (such as adding or removing buildings or infrastructure within an image).
  • Tell us the date that you used the AI. 
  • Declare your responsibility for the factual accuracy of the content.  
  • Declare your use of AI is responsible and lawful.  
  • Declare that you have appropriate permissions to disclose and share any personal information and that its use complies with data protection and copyright legislation. 

While some of these requirements seem sensible at first glance, others are practically difficult to adhere to fully, and suggest an incomplete understanding of the scope, capabilities, and availability of AI in everyday computer software (e.g. Microsoft Copilot).



The responsible use of AI in urban planning

The adoption of AI is coming in with the tide, and standing in the way to prevent or prohibit its use entirely is going to be futile in the long run. Instead, what we need is clear guidance on the potential pitfalls of the technology, so that people can use it correctly and not rely too blindly on generated text or predictive recommendations.

Dr. Chadwick notes: ‘We’re all waking up to the opportunities and risks of using emerging technologies; this is a great opportunity to maximise the potential for AI to improve planning, but with sound ethical guardrails.’

For planning, AI presents a pivotal opportunity to modernise processes, enhance decision-making, and achieve more sustainable urban outcomes. However, these benefits can only be realised through robust governance that ensures fairness, accountability, and transparency.  

To co-opt a phrase: the AI advises; the humans decide (and retain legal responsibility for that decision). 

By embracing a governance-first approach, planning professionals can harness AI as a transformative tool, while safeguarding public trust and maintaining a commitment to equitable and sustainable development. This balance will ensure that AI serves as a valuable ally in shaping the future of urban spaces.