Risk Management
AI could transform the UK public sector, if risks are managed effectively
Artificial intelligence (AI) is transforming the way we interact with technology. And the value organisations can extract from data.
Artificial intelligence (AI) is transforming the way we interact with technology. And the value organisations can extract from data. That’s good news for British business. But the potential impact on the public sector could be even greater. At a time when public funds have never been more stretched, the opportunity to trim costs and boost productivity through more judicious AI use is one the government is embracing with open arms.
Yet while AI adoption makes sense on paper, the public sector must also ensure any intended gains don’t also introduce excessive cyber risk. AI can expand the attack surface and expose organisations to data theft and extortion, among other things. To reap the intended benefits, the government needs to build security by design into everything it does, and measure and manage risk continuously across the entire attack surface.
Government gets serious about AI
The current government has recognised a one-in-a-generation opportunity to overhaul public services by deploying AI. Claiming the AI market could be worth over $1 trillion (£780bn) by 2035, it has committed to doubling digital transformation in the NHS to £3.4bn. In his pre-election budget, the chancellor claimed this could unlock £35bn in productivity savings over the course of the next parliament by:
- Potentially halving form-filling by doctors
- Digitising theatre processes to enable an extra 200,000 operations per year
- Reducing the 13 million hours lost by doctors every year because of legacy IT
- Using electronic health records by default for all patients
- Delivering test results faster for 130,000 patients annually thanks to AI-fitted MRI scanners
Yet AI has many more applications outside of the health service.
Empowering the education sector
In education, for example, AI could be used to free-up teacher time and provide personalised support to pupils. Use cases include:
- Virtual assistants designed to cut teacher workload by drafting curriculum plans and producing high-quality teaching resources
- Virtual tutors for pupils that create a bespoke learning plan based on marking and assessments from teachers
- AI-powered gamification to encourage greater pupil participation in learning exercises
- Automated AI-powered marking for tests and assessments
- Natural language processing use in educational tools such as writing assistants
The government is already backing such initiatives, with a £2m investment in Oak National Academy – an independent public body established to support teachers with tech-centric resources.
AI risk and how to handle it
The caveat with all of these examples is that AI should never be used in a vacuum. Human oversight is essential to optimise the output of these tools and ensure that it is accurate, safe and suitable for delivering public services. That’s especially true in the context of AI-related risk.
On the one hand, threat actors are using AI to increase the success rates of their campaigns – particularly by using generative AI (GenAI) tools to create highly convincing phishing campaigns. The volume and impact of ransomware and other threats will likely increase as a result, the National Cyber Security Centre (NCSC) has warned. On the other, expanded use of AI will increase the attack surface for public sector organisations. AI infrastructure, interfaces and models can all be targeted for different outcomes, including information theft, dataset poisoning to sabotage systems, and prompt injection attacks designed to circumvent internal governance guardrails.
That’s why the public sector, and any organisation using AI, needs to continuously and dynamically assess the risk and potential harms resulting from the technology. This is not an IT problem. It is a fundamental business-impacting risk which demands a security-by-design approach to help mitigate. This means baking security and data privacy into use of any new AI system. The NCSC has some useful guidelines for secure AI system design, development, deployment and operation – whether the organisation is creating models from scratch or building on top of tools and services provided by others.
But AI risk management can’t be carried out in isolation. It must be conducted as part of a more holistic process of understanding, quantifying and managing risk continuously across the entire attack surface. A single attack surface risk management (ASRM) platform can help here. Organisations must also consider the role played by suppliers – both to ensure they’re benefitting from the cutting edge of AI development, and that they understand the risks posed by the extended supply chain.
AI has the potential to transform the public sector. But only if we manage related risk effectively and continually, readily share information on best practices, and resist tick-box compliance. Budgets will always be tight. It’s critical that taxpayer money is well spent, but also that “good enough” does not become the default setting for government projects.