In April 2015, a ring of seasoned thieves orchestrated a heist in London's Hatton Garden jewellery district, reportedly walking away with £14 million in valuables.
This heist wasn’t just a failure of locks; it was a failure of imagination. The safe deposit company took a reactive approach to security rather than embedding protection into the system’s foundation, being secure by design. This fundamental lapse produced systemic blind spots that did not account for new attack vectors (the lift shaft repurposed as an entry point) or new technology (high-powered drills that made traditional barriers obsolete).
The same kind of systemic blind spot exists in cybersecurity today. As AI models and systems evolve and proliferate, the adversaries are no longer a heist crew with power tools. They are agentic systems: autonomous AI agents that can shape-shift, blending into the enterprise landscape while they plan and act independently. Often backed by state actors with malicious intent, these systems, left unchecked, become both the attacker and the attack vector.
TCS and MIT Sloan Management Review research shows that Intelligent Choice Architectures (ICAs) operationalize the secure-by-design philosophy for an agentic age. Secure by design builds protection into the very foundations of a system rather than bolting it on later. It begins with threat modeling at the design stage, mapping data flows, trust zones, and potential attack surfaces, and embeds controls throughout the lifecycle with hardened infrastructure, least-privilege access, layered defenses, and continuous checkpoints.
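To make the design-stage mapping concrete, here is a minimal Python sketch under the assumption of a toy enterprise model; the components, trust zones, and data classifications are illustrative, not drawn from the research. Flows that cross a trust boundary are flagged as candidate attack surfaces that need an explicit control:

```python
# Minimal design-stage threat-modeling sketch; all names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    source: str               # component emitting the data
    destination: str          # component receiving the data
    source_zone: str          # trust zone of the source
    destination_zone: str     # trust zone of the destination
    classification: str       # e.g. "public", "internal", "restricted"

def attack_surface(flows):
    """Flag flows that cross a trust boundary: each one is a
    candidate attack surface that needs an explicit control."""
    return [f for f in flows if f.source_zone != f.destination_zone]

flows = [
    DataFlow("chatbot", "crm-api", "dmz", "internal", "restricted"),
    DataFlow("crm-api", "crm-db", "internal", "internal", "restricted"),
]
for f in attack_surface(flows):
    print(f"Boundary crossing: {f.source} -> {f.destination} "
          f"({f.classification}) requires a control")
```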
The true innovation of ICAs lies in their dynamic nature. ICAs continuously generate and evaluate security options, orchestrating defenses across agents, data, and tools and monitoring interactions in real time. They expose hidden trade-offs, surface anomalies through guardian agents, and provide oversight even where human supervision cannot scale. In doing so, ICAs transform secure by design from a principle into a living system, ensuring autonomous agents operate within governed boundaries while still driving innovation and building trust with customers, regulators, and partners.
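As an illustration of how an ICA might expose trade-offs, the following hypothetical sketch generates candidate access configurations, scores each on risk and task utility, and keeps only the options inside a governed risk budget; the scores and budget are invented for the example, not part of any ICA product:

```python
# Illustrative ICA-style loop: generate options, score them, and select
# the most useful option that stays inside a governed risk budget.
RISK_BUDGET = 0.3  # assumed policy ceiling set by governance

candidates = [
    {"name": "read-only, masked PII",    "risk": 0.10, "utility": 0.6},
    {"name": "read-only, full records",  "risk": 0.25, "utility": 0.8},
    {"name": "read-write, full records", "risk": 0.60, "utility": 0.9},
]

# Drop any option that exceeds the risk budget, then maximize utility.
governed = [c for c in candidates if c["risk"] <= RISK_BUDGET]
best = max(governed, key=lambda c: c["utility"])
print(f"Selected option: {best['name']} "
      f"(risk {best['risk']}, utility {best['utility']})")
```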
Generative AI agents thrive on context, but context can also be their greatest liability. To deliver meaningful outputs, even a simple chatbot needs access to relevant enterprise data; to remain secure, it must adhere to the time-tested principle of least privilege, with data access restricted to the bare minimum required to function. Intelligent trade-off design matters more than absolute restriction or reckless openness. ICAs step into this role by generating choices that balance access control against contextual requirements, dynamically determining which tools, APIs, and data sources the agent may invoke to achieve the desired outcomes while limiting the data it is fed and minimizing the attack surface.
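A minimal sketch of this brokering pattern, assuming a hypothetical support chatbot and invented scope names: the broker, not the agent, decides which tools a request may touch, and anything outside the agent's minimum grant is denied by default:

```python
# Least-privilege tool brokering for an agent; scopes and tools are
# hypothetical. The broker enforces the minimum grant, not the agent.
AGENT_SCOPES = {"support-bot": {"crm:read:tickets"}}  # bare-minimum grant

TOOL_REQUIREMENTS = {
    "read_ticket":   {"crm:read:tickets"},
    "read_customer": {"crm:read:customers"},
    "update_ticket": {"crm:write:tickets"},
}

def invoke(agent: str, tool: str, **kwargs):
    granted = AGENT_SCOPES.get(agent, set())
    required = TOOL_REQUIREMENTS[tool]
    if not required <= granted:
        # Deny by default: anything beyond the minimum grant is refused.
        raise PermissionError(f"{agent} lacks {required - granted} for {tool}")
    print(f"{agent} invoked {tool} with {kwargs}")

invoke("support-bot", "read_ticket", ticket_id=42)   # allowed
# invoke("support-bot", "read_customer", id=7)       # raises PermissionError
```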
If a single agent creates challenges, a constellation of them magnifies risk. Multi-agent systems can misuse tools, inherit excessive privileges, collude in ways their creators never anticipated, or develop emergent behaviors outside design parameters. Direct human oversight cannot scale in these environments.
What is needed is agentic threat modeling: a systematic examination of how agents plan, store memory, communicate, and act. Traditional threat-modeling frameworks were not built for AI and generative AI, let alone agentic AI, which introduces unique threat vectors such as agentic sprawl, control hijacking, goal and instruction manipulation, memory and context manipulation, collusion, and excessive agency. The threats identified during threat modeling then need to be mitigated with specific controls drawn from a combination of native hyperscaler capabilities, existing best-of-breed security tool investments, and specialized agentic AI security tools. Guardian agents add another defensive layer, serving as monitors that observe patterns, detect anomalies, and escalate issues when deviations arise. ICAs provide the scaffolding for these safeguards, transforming fragile multi-agent ecosystems into orchestrated, resilient networks where security is continuously negotiated rather than assumed.
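As a simplified illustration of the guardian-agent pattern, the sketch below watches a stream of agent actions and escalates when tool usage deviates sharply from an expected baseline; the events, fields, and thresholds are assumptions for the example, a crude proxy for detecting control hijacking or goal manipulation:

```python
# Simplified guardian agent over an assumed stream of action events.
from collections import Counter

BASELINE = {"read_ticket": 100, "update_ticket": 20}  # expected hourly volume

def guardian(events):
    """Yield an alert when an agent's tool usage deviates sharply from
    its baseline or touches a tool with no baseline at all."""
    observed = Counter(e["tool"] for e in events)
    for tool, count in observed.items():
        expected = BASELINE.get(tool, 0)
        if expected == 0 or count > 3 * expected:
            yield {"tool": tool, "observed": count, "expected": expected}

events = [{"tool": "update_ticket"}] * 90 + [{"tool": "export_data"}] * 5
for alert in guardian(events):
    print(f"ESCALATE: {alert}")  # hand off to a human or containment playbook
```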
Agentic security frameworks are evolving and gradually maturing, and while they mitigate some of these challenges, critical gaps persist: urgent realities shaping AI deployments are often overlooked.
Closing these gaps requires fundamentally different security strategies and scaled orchestration. Effective architectures must integrate hyperscaler-native controls, best-of-breed security tools from established vendors, and AI-specific safeguards tuned to the unique risks of generative and agentic systems. ICAs bring these strands together, mapping threats to controls in real time, aligning protection with business imperatives, and ensuring defenses adapt as quickly as the agents they are designed to govern.
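One way to picture this mapping is a simple registry pairing modeled threat vectors with layered controls; the control names below are placeholders for whichever hyperscaler, best-of-breed, or AI-specific tools an enterprise actually runs, and the gap-finding function shows where an ICA would need to generate new options:

```python
# Illustrative threat-to-control mapping; control names are placeholders.
CONTROL_MAP = {
    "control_hijacking":   ["identity federation (hyperscaler)",
                            "runtime policy engine (AI-specific)"],
    "memory_manipulation": ["context integrity checks (AI-specific)",
                            "SIEM correlation rules (best-of-breed)"],
    "excessive_agency":    ["scoped service accounts (hyperscaler)",
                            "guardian-agent review (AI-specific)"],
}

def coverage_gaps(modeled_threats):
    """Return modeled threats with no mapped control - the cue for an
    ICA to generate new options or tighten agent privileges."""
    return [t for t in modeled_threats if t not in CONTROL_MAP]

print(coverage_gaps(["control_hijacking", "collusion"]))  # ['collusion']
```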
Though still on the horizon, quantum computing threatens to upend today’s cryptographic foundations. Preparing now means treating crypto agility as a core design principle, not a future contingency.
Key steps in quantum-safe readiness include inventorying cryptographic assets across the estate, prioritizing data whose confidentiality must outlive the arrival of quantum attacks, adopting standardized post-quantum algorithms as they mature, and building crypto agility so algorithms can be swapped through configuration rather than redesign.
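Crypto agility in particular can be illustrated with a short sketch: callers name a policy rather than a cipher, so a post-quantum algorithm can later be swapped in by configuration rather than a code change. The registry and policy names are hypothetical, and the current algorithm is a standard-library HMAC used purely as a stand-in:

```python
# Crypto-agility sketch: callers reference a policy, not a cipher, so the
# algorithm behind the policy can be replaced without touching callers.
import hashlib
import hmac

REGISTRY = {
    "mac-current": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    # "mac-pqc": register a quantum-safe algorithm here when adopted
}
POLICY = {"default-mac": "mac-current"}  # one-line switch at rollout time

def mac(policy_name: str, key: bytes, msg: bytes) -> bytes:
    algo = REGISTRY[POLICY[policy_name]]  # indirection is the agility
    return algo(key, msg)

tag = mac("default-mac", b"secret-key", b"audit-event")
print(tag.hex()[:16], "...")
```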
ICAs elevate quantum-safe planning from a checklist to a living system. By continuously surfacing cryptographic options, mapping them against business risks, and adapting rollouts in real time, ICAs embed resilience as a design principle, future-proofing trust across the enterprise ecosystem.
Tomorrow’s breach will not look like Hatton Garden. It will be software against software: an autonomous agent misusing a tool, exploiting an API, slipping through an insecure interface, and exfiltrating data before anyone notices. These attacks will be adaptive; agents familiar with the enterprise will live off the land, blending in to near invisibility and exploiting granted trust relationships across distributed systems. ICAs can help adapt security to context, calibrate least privilege against the right data, and keep multi-agent swarms inside clear decision rights with dynamic accountability. The future of security will not be won by faster patches but by architecting intelligent choice environments where secure, governed, and adaptive decisions become inevitable. In the age of agentic AI, we cannot depend on static cybersecurity mechanisms alone; we must architect dynamic governance and risk management to build inherent trust and secure autonomous systems by design.