    News

    Applying the CIS Controls to Real‑World AI Environments

    By admin | April 24, 2026


    Artificial intelligence (AI) is not arriving quietly. It is showing up everywhere at once. Models now power internal copilots, agents handle multi‑step tasks, and new integration protocols let AI systems interact with tools, application programming interfaces (APIs), and business data. For many organizations, this feels like defending a moving target as the attack surface expands. Behaviors shift with every model update, and AI systems operate with a level of autonomy traditional controls were never designed to manage.

    To address these challenges, the Center for Internet Security® (CIS®), Astrix Security, and Cequence Security partnered to develop actionable cybersecurity guidance tailored to AI environments. The work extends the globally recognized CIS Critical Security Controls® (CIS Controls®) into environments where autonomous decision‑making, tool and API access, and automated threats introduce new risks.

    The result is three new CIS Companion Guides: the AI Large Language Models (LLM) Companion Guide, the AI Agent Companion Guide, and the Model Context Protocol (MCP) Companion Guide. Together, they help enterprises adopt AI responsibly and securely while staying aligned with the CIS Controls they already use.

    Why AI Needed More Than a Single Guide

    AI systems are not a single technology layered onto existing applications. They consist of multiple components, each with its own security challenges.

    LLMs determine how information is processed and generated. AI agents add reasoning, planning, memory, and autonomous action across workflows. MCP defines how AI systems interact with external tools, services, and data through a structured protocol.

    Treating these as one security surface would blur critical boundaries. CIS intentionally separated the guidance into three Companion Guides, each answering a distinct question security teams already ask:

    • What security controls apply to AI systems?
    • Why publish three guides instead of one?
    • How do the guides work together in practice?

    Together, the guides provide full coverage without duplication or gaps between layers.

    The partnership between CIS, Astrix, and Cequence focused on securing AI as it operates in real production environments.

    Astrix contributed expertise in securing AI agents, MCP servers, and non‑human identities (NHIs), such as API keys, service accounts, and OAuth tokens. This ensured strong emphasis on identity, authorization, and credential lifecycle management.
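    Credential lifecycle management for NHIs can be illustrated with a small audit sketch. The inventory, field names, and the 90‑day rotation window below are hypothetical simplifications, not taken from the Companion Guides:

    ```python
    from datetime import datetime, timedelta, timezone

    # Hypothetical inventory of non-human identities (NHIs): API keys,
    # service accounts, and OAuth tokens, each with a last-rotation date.
    NHI_INVENTORY = [
        {"name": "ci-deploy-key", "kind": "api_key", "rotated": "2026-01-10"},
        {"name": "agent-svc", "kind": "service_account", "rotated": "2025-06-01"},
        {"name": "mcp-oauth", "kind": "oauth_token", "rotated": "2026-04-01"},
    ]

    MAX_AGE = timedelta(days=90)  # illustrative rotation policy

    def stale_credentials(inventory, now=None):
        """Return names of NHIs whose last rotation exceeds the policy window."""
        now = now or datetime.now(timezone.utc)
        stale = []
        for cred in inventory:
            rotated = datetime.fromisoformat(cred["rotated"]).replace(tzinfo=timezone.utc)
            if now - rotated > MAX_AGE:
                stale.append(cred["name"])
        return stale
    ```

    A real program would pull the inventory from a secrets manager or identity provider rather than a static list; the point is that NHIs need the same age and ownership checks human accounts already get.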

    Cequence brought deep experience securing enterprise applications and APIs, shaping guidance around visibility, governance, and control over what AI systems can access and execute.

    Combined with CIS’s standards leadership, the collaboration produced guidance grounded in real operational needs.

    Answering the Questions Security Teams Are Asking

    The structure of the Companion Guides reflects the practical questions enterprises often ask.

    What Security Controls Apply to AI Systems?

    The CIS Controls continue to apply, but AI systems behave differently than traditional applications. The Companion Guides evaluate each CIS Control through an AI‑aware lens, documenting how CIS Safeguards apply to models, agents, and MCP environments along with where traditional assumptions no longer hold.

    Why Publish Three Guides Instead of One?

    AI risk exists across multiple layers:

    • Model Layer: Inputs, outputs, context, and data exposure
    • Agent Layer: Memory, tool use, and autonomous workflows
    • MCP Layer: Protocol boundaries where tools and data are accessed

    Each Companion Guide secures what the others cannot.
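    The layer split can be sketched as a simple lookup. The layer names come from the list above; the risk groupings and helper function are an illustrative sketch, not content from the guides:

    ```python
    # Illustrative mapping of the three guidance layers to the risk
    # surfaces each one addresses.
    LAYER_RISKS = {
        "model": ["inputs", "outputs", "context", "data exposure"],
        "agent": ["memory", "tool use", "autonomous workflows"],
        "mcp": ["protocol boundaries", "tool access", "data access"],
    }

    def covering_layer(risk):
        """Return which layer's Companion Guide addresses a given risk surface."""
        for layer, risks in LAYER_RISKS.items():
            if risk in risks:
                return layer
        return None
    ```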

    How Do the Guides Work Together in Practice?

    Across the three Companion Guides, CIS defines a shared AI security lifecycle:

    • Inputs are sanitized at the model layer (AI LLM Companion Guide)
    • Context and memory are protected across model and agent layers (AI LLM and AI Agent Companion Guides)
    • Reasoning is constrained by guardrails at the agent layer (AI Agent Companion Guide)
    • Tool requests are validated and authorized through MCP (MCP Companion Guide)
    • Actions are logged, bounded, and auditable (MCP Companion Guide)
    • Outputs are reviewed, redacted, or minimized (AI LLM and AI Agent Companion Guides)

    No single layer can enforce security end to end. Security holds only when controls span all three surfaces.
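    The lifecycle above can be sketched as a pipeline in which each stage enforces one layer's controls. The stage functions, injection pattern, redaction rule, and tool allowlist below are hypothetical simplifications for illustration only:

    ```python
    import re

    TOOL_ALLOWLIST = {"search_docs", "read_ticket"}  # hypothetical MCP tool allowlist
    AUDIT_LOG = []

    def sanitize_input(prompt):
        # Model layer: strip one well-known injection phrase (illustrative only).
        return re.sub(r"(?i)ignore previous instructions", "[removed]", prompt)

    def authorize_tool(tool_name):
        # MCP layer: validate tool requests against an explicit allowlist.
        return tool_name in TOOL_ALLOWLIST

    def log_action(action):
        # MCP layer: record every decision so actions stay auditable.
        AUDIT_LOG.append(action)

    def redact_output(text):
        # Model/agent layer: minimize outputs by masking email addresses.
        return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)

    def run_turn(prompt, tool_name, raw_output):
        """One pass through the layered lifecycle for a single model turn."""
        clean = sanitize_input(prompt)
        if not authorize_tool(tool_name):
            log_action(f"denied:{tool_name}")
            return None
        log_action(f"allowed:{tool_name}")
        return redact_output(raw_output)
    ```

    Even in this toy form, no single function is sufficient on its own: the sanitizer cannot stop an unauthorized tool call, and the allowlist cannot prevent a data leak in the output, which is the point the lifecycle makes.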

    Why This Guidance Matters Now

    Enterprise AI adoption continues to accelerate, often faster than security programs can evolve. Organizations are deploying AI into production workflows that touch sensitive data and systems, frequently without clear answers about which controls apply.

    AI risks are already material:

    • Models can leak sensitive data through embeddings or logs
    • Agents can execute unauthorized code or corrupt records
    • Memory stores can accumulate confidential information indefinitely
    • RAG pipelines can be poisoned to manipulate decision-making
    • MCP servers can introduce unsafe capabilities through silent updates
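    One mitigation for the silent-update risk is to pin a digest of each MCP server's advertised tool manifest and refuse to connect when it changes. The manifest shape and helper below are hypothetical, not part of the MCP specification or the Companion Guide:

    ```python
    import hashlib
    import json

    def manifest_digest(manifest):
        """Stable SHA-256 over an MCP-style tool manifest (hypothetical shape)."""
        canonical = json.dumps(manifest, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    # Digest pinned at review time for a hypothetical server.
    PINNED = manifest_digest({
        "server": "tickets-mcp",
        "tools": [{"name": "read_ticket", "params": ["id"]}],
    })

    def verify_manifest(manifest, pinned=PINNED):
        """Reject a server whose advertised tools no longer match the pin."""
        return manifest_digest(manifest) == pinned
    ```

    A server that quietly gains a new tool (say, a delete capability) produces a different digest and fails verification, forcing a human re-review before the new capability is exposed to agents.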

    These guides recognize that AI is no longer experimental; it is operational and must be secured with the same rigor applied to cloud‑native apps, containerized workloads, and microservices.

    By extending the CIS Controls into AI environments, the Companion Guides give security teams practical, prioritized guidance for securing AI systems that are already in operation, without introducing a new framework.

    From Partnership to Practice

    The partnership between CIS, Astrix, and Cequence reflects a shared goal: helping enterprises innovate with AI responsibly and securely. By combining standards leadership with real‑world expertise securing agents, identities, protocols, and execution paths, the final release delivers guidance that can be put into practice immediately.

    The three Companion Guides mark a turning point in enterprise AI security. Instead of treating AI systems as ungovernable, CIS brings them into the familiar structure of the CIS Controls while addressing the unique risks they introduce.

    • The AI LLM Guide secures the model layer.
    • The AI Agent Guide secures autonomy and action.
    • The MCP Guide secures how AI interacts with tools and data.

    Together, they provide a practical, layered framework for building AI systems that are both operational and secure. AI is no longer just a research project or a productivity boost; it’s becoming infrastructure, and infrastructure needs controls.


