And if you do, make sure your cyber security is up to date. Kurt Frary, Head of IT/CTO at Norfolk County Council and Socitm President 2025-26, explains.
Artificial Intelligence (AI) is the disruptive technology trend of the moment. Barely a day passes without articles talking about the amazing possibilities, or the significant new risks that it poses, or how it has been used in the wrong way.
Approaches across the sector, individually and collectively, vary widely: from jumping straight in at one extreme to avoiding AI completely at the other.
Hopefully, you’re on the sensible and safe middle path. Using it, but with care, governance, guidelines and guardrails in place.
As we use it, we need to understand how to achieve the outcomes we want. AI offers a potential key to unlocking productivity beyond traditional boundaries – £45 billion, according to last year’s State of Digital Government Review.
Those of us using it are starting to explore new potential applications, or using it outside of work to support us.
Socitm, and most digital leaders, see AI as a transformational force for good if the risks are managed, concerns addressed, and it’s used ethically.
It’s still difficult to predict public sector AI adoption
Alongside great opportunity, there are risks – such as made-up content (hallucinations), bias and new cyber threats – and the pace is dictated by two key factors, identified in Digital Trends research.
1. Our capability and capacity to take on complex AI projects: given other pressing priorities and challenges and limited AI experience. Projects are emerging in specific and bounded applications, such as health diagnostics, social care and customer service.
2. Compliance and regulatory concerns: data quality, procurement policies, risks of bias, transparency of algorithms and concerns about liabilities.
Think your organisation is avoiding AI? You’re not.
Are you (or do you know) a Shadow AI user?
You and your colleagues could be using Shadow AI without realising it.
Shadow AI refers to AI tools or systems used within an organisation without official approval or oversight from IT or governance teams.
Salford City Council’s Afsha Zeb (Cyber Governance, Risk and Compliance Engineer) lists some real-world examples in her blog post ‘Shadow AI in the public sector: innovation without oversight?’. This includes: drafting case notes or internal documents using ChatGPT; copy-pasting sensitive or personal data into public-facing tools for analysis; using image or speech generators for accessibility without audit trails; and purchasing AI-enhanced tools without ICT or procurement involvement.
What can you do?
Afsha has tackled this exact question: “You can’t afford to ignore it. But banning AI won’t work, it will simply drive it further underground. Instead, we need a mature, strategic response: one that acknowledges the innovation already happening and guides it safely.”
Read her 6 steps to managing Shadow AI in your organisation.
What works when thinking about and using AI
1. To maintain public trust, avoid falling for supplier and consultant hype. AI takes time and effort. Start small (develop pilots internally before deploying at a large scale or externally), establish governance arrangements (including usage policies or guidance to define ownership), and understand AI’s impact in your organisation.
2. Develop in-house skills and senior/political awareness, plus a good understanding of opportunities, risks and the functioning of AI.
3. Conduct a risk analysis of AI (including new external cyber risks) and consider how those risks are best controlled and mitigated.
You’re using AI. How do you make it secure?
You handle sensitive data every day. When AI is introduced, the attack surface expands, making cyber security not just a technical concern but a public trust issue.
A single breach could compromise residents’ data and erode confidence in digital services.
Start with Cyber Essentials and the Cyber Assessment Framework (CAF).
Cyber Essentials is a government-backed certification offering baseline controls against the most common cyber threats.
Implementing just five key controls – firewalls, secure configuration, user access control, malware protection and security update management – reduces risk, builds protection and gives stakeholders verified assurance that you’re prioritising cyber security and meeting the UK minimum standard.
The CAF offers a systematic approach to assessing how well you manage cyber risks to your essential functions.
It provides requirements, principles and outcomes to evaluate and improve cyber security.
Published before CAF 4.0 went live, Socitm’s 10 key cyber security questions for public sector leaders are still essential reading.
Watch me and my colleagues talk about implementing the CAF at Norfolk County Council. It’s helped us take a broader view of our cyber resilience across areas not always covered by other frameworks.
Questions you must answer
In October, I attended the 2025 Major Cities of Europe Conference in Paris. While waiting for my presentation slot, Dr. Alan R. Shark spoke about AI cyber security, challenging us with the following questions.
What problem are we solving? AI might not be the right solution. Don’t use it just because you can. Define your use and outcome metrics and give examples – such as, reducing waiting times.
Is data ready, secure and governed? AI relies on clean, complete and unbiased data – but perfect data doesn’t exist. Instead, get your data to a good enough quality to work with.
How can we ensure transparency and accountability? Ethical frameworks provide citizens with recourse when AI affects them. Adopt explainable AI, with clear reporting and disclosure.
What are the risks and how will we mitigate them? Use pilots before widely rolling out AI solutions.
Do we have capacity to succeed? This means budgets (funding for tech, training and support) and people (AI literacy, change management, cross-functional collaboration). Communicate benefits and limitations to build trust.
Security isn’t just about firewalls; it’s about communication.
We should engage with residents, explain how AI is being used, and offer clear channels for feedback.
Transparency builds trust, and trust is the foundation of successful public service.
AI: a look ahead
We’re at a turning point in our preparations to exploit AI. It’s one of our most exciting, transformative and potentially challenging technology developments.
There are areas where AI can be at its most relevant and useful. For example, identifying the risks and benefits of early interventions – through linking and analysing complex data sets across systems, services and organisations.
It can be used to automate customer service journeys, connecting around individual needs, preferences and changing circumstances. Internal processes could be automated, especially as our software will often have AI capability pre-embedded.
It is also useful for analysing and demonstrating the wider effects of service decisions and risks. For example, connecting the impacts of decisions in measurable ways across domains (environment, health, social well-being and economic factors).
Local public services operate in complex environments. We manage sensitive situations and data, often related to vulnerable people with complex and diverse service needs.
With all relevant safeguards in place, there are areas where AI can offer significant future value – with new data insights, better risk management and joined-up service delivery.
Key takeaway: AI will complement, not replace, human activity
It can simplify processes, automate resource allocation, aid in decision-making, and trigger alerts for risks, leading to better-targeted resources and services.
AI has the potential to play a significant role across the whole spectrum of public services, becoming a partner rather than a de facto replacement.
Making cyber security a board responsibility
NCSC Annual Review 2025
AI and cyber security: what you need to know