As the NCSC’s CEO set out in his letter to the Financial Times last week, AI can ultimately be a good thing for cyber security. In the near term, however, AI is likely to expose weaknesses in organisations that have not taken appropriate steps to secure their systems. That is why the NCSC continues to urge organisations to improve their cyber security by implementing basic cyber hygiene.
At the same time, as we recently discussed, the NCSC considers the discovery and development of ways in which AI technologies could enhance cyber defence a priority. There is clear potential across a range of areas, including:
- threat detection, such as enhanced endpoint detection and response
- network and system vulnerability discovery, including scanning, penetration testing and red teaming
- software vulnerability research and remediation, including vulnerability discovery and patching
- automated system security management, for example within security operations centres (SOCs)
- incident response automation, supporting faster triage and containment
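As an illustration of the kind of triage automation the last point describes, the sketch below shows a minimal rule-based prioritisation step; in a real deployment an AI model might supply or refine the scores. The `Alert` fields and the scoring function are illustrative assumptions, not an NCSC specification or any particular product's API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # originating tool, e.g. "edr" or "ids" (illustrative)
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 .. 5, importance of the affected host

def triage_score(alert: Alert) -> int:
    """Combine alert severity and asset criticality into a single
    priority score, so the highest-risk alerts surface first."""
    return alert.severity * alert.asset_criticality

def prioritise(alerts: list[Alert]) -> list[Alert]:
    """Return alerts ordered for analyst attention, highest score first."""
    return sorted(alerts, key=triage_score, reverse=True)

alerts = [
    Alert("ids", severity=2, asset_criticality=5),
    Alert("edr", severity=5, asset_criticality=4),
    Alert("edr", severity=1, asset_criticality=1),
]
queue = prioritise(alerts)  # highest-priority alert first
```

Even a simple deterministic baseline like this is useful when evaluating AI-assisted triage: it gives defenders something transparent to validate model-driven prioritisation against.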
These developments are promising. But defender adoption will be complex and incremental. Frontier AI tools can perform some tasks extremely well, but they can also be unreliable, difficult to validate, and hard to integrate safely into existing environments. Adoption will take time, require the development of new capabilities and need careful oversight. That’s why, as Security Minister Dan Jarvis pointed out during his keynote speech at CYBERUK 2026, we are seeking collaboration to support adoption of AI for cyber defence.
Successfully adopting AI-enabled cyber defence will require managing a number of risks and challenges. We highlight these below to encourage future collaboration on how best to support defenders.