The greatest oncoming power shift in modern business isn’t about who — or what algorithm — makes decisions; it’s about shaping, framing, and contextualizing the choices driving those decisions. In a world of hypercomplexity, exponential advances in AI capabilities, and compounding uncertainty, strategic value no longer comes from human decision-making alone. It arises from architecting superior decision environments.
Intelligent choice architectures (ICAs) represent the vanguard of this transformation. These systems don’t just predict outcomes or automate processes — they actively collaborate with humans to create decision environments, reveal hidden opportunities and unexpected trade-offs, challenge entrenched assumptions, and generate breakthrough alternatives that expand the boundaries of possibility.
ICAs mark a decisive break from conventional uses of AI to support decision frameworks. Combining generative and predictive AI transforms artificial intelligence from a decision aid to a collaborative choice architect that better empowers human decision-making. What makes this shift revolutionary is the transition from systems that learn from decisions to systems that learn to improve the decision environment itself. Already, examples from multiple industries are demonstrating that ICAs are getting better at understanding how to shape the context in which decisions are made. As ICAs become more adept at improving decision environments, organizations must rethink how they approach decision rights, accountability, and value creation. In aspiration and effect, reliably better choices lead to reliably better decisions.
Our yearlong research into ICAs and their decision rights implications incorporated formal interviews with almost two dozen technical and business executives in industry verticals including technology, financial services, telecommunications, health care, retail, pharmaceuticals, media, and power generation and distribution.1 While the discussions were candid and wide-ranging, every participant stressed that their organization was in the early days of determining how best to integrate generative and predictive AI capabilities into their strategy, operations, and culture.
“We’ve stopped separating IT, OT, and AI. It’s all decision infrastructure now.”
ICAs are dynamic systems that combine generative and predictive AI capabilities to create and refine choices and present them to human decision makers. They actively generate novel possibilities, learn from outcomes, seek information, and influence the domain of available choices for decision makers. See the appendix for a detailed description of key ICA attributes.
We are witnessing the emergence of a new form of organizational intelligence, in which combinations — ensembles — of humans and machines shape how choices are developed, presented, and discussed. What constitutes an “agent” in an organization must be reconsidered when agency is distributed across human-AI networks rather than conferred on discrete individuals or entities.
The more power organizations give to AI choice architects, the more empowered human decision makers can become. This flips the traditional narrative about AI diminishing human agency on its head. When AI systems take on the cognitive load of a choice architecture, humans don’t cede their power to machines; rather, they become more capable of exercising meaningful judgment and strategic thinking. This isn’t just assistance, augmentation, or automation; it’s a new form of human-in-the-loop decision-making that challenges basic assumptions about organizational authority.
The application of ICAs is already reshaping operational and strategy-related practices:
These examples illustrate ICAs’ potential and reflect real maturity thresholds. That is, each of these organizations possesses the technical infrastructure, organizational readiness, and AI fluency needed to develop and integrate such systems. For many others, such capabilities remain aspirational.
The conceptual roots of ICA design lie in behavioral economics and cognitive science. Researchers such as Nobel laureates Herbert Simon, Daniel Kahneman, and Richard Thaler showed how human decision-making is bounded by attention, shaped by heuristics, and prone to systematic error, respectively. The popular introduction of the concept of “choice architecture” by Thaler and Cass Sunstein formalized a framework for structuring choices that guide human behavior without coercion.3
In their original form, however, choice architectures were static decision frameworks, applied by experts through designed interventions in public policy, marketing, and user experience, among other areas. ICAs represent a significant departure. They constitute a dynamic and computational form of choice architecture that adapts in real time, learns from interaction, and reshapes itself based on context and performance.
As ICAs evolve, they do more than recommend or predict — they begin to participate in the logic and language of choice. (See Table 1.) As that participation deepens, so must executive conversations about who holds decision rights and how those rights are distributed among individuals, teams, and intelligent systems.
From Model-Centric to Decision-Centric AI
This table shares quotations from leaders who spoke with us about their organizations’ ICA implementations and their perspectives on the need for more decision-centric uses of AI.
Speaker | Quote | Relevance
Monica Caldas, Global CIO, Liberty Mutual Insurance | “We realized we needed to shift the mindset from building models to engineering decisions.” | This is a crisp distillation of the intelligent choice architecture shift from model-centric to decision-centric AI.
Pierre-Yves Calloc’h, Chief digital officer, Pernod Ricard | “AI is not just about predicting consumer behavior — it’s about knowing which decisions matter most and helping our teams make them with confidence.” | This statement articulates the AI-as-choice-coach role, emphasizing internal enablement, not just external prediction.
Funmi Williamson, Chief customer officer, Southern California Edison (SCE) | “We stopped asking, ‘What can AI do?’ and started asking, ‘What choices are we making badly?’” | Choice-centric framing flips AI from capability to accountability — a radical reorientation toward decision quality.
Funmi Williamson, SCE | “We need choice architectures that invite better defaults, not just faster decisions.” | Speed is not the goal — better trade-offs are.
This evolution positions ICAs not only as complementary to agentic AI systems but as precursors and preconditions for their effective deployment. Intelligent agents — human or artificial — require intelligent decision environments. In this sense, ICAs are not just tools; they are infrastructure for human and machine agency.
ICAs’ promise comes with both obvious and subtle risks. Intelligent systems can generate misleading correlations, encode unexamined organizational biases, or suggest options misaligned with ethical, legal, or strategic norms. Beyond technical assurance, successful ICA adoption demands dynamic trust systems — trust rooted not merely in outcome accuracy but in cognitive comfort, explainability, and participatory validation.
Stakeholders need to feel growing confidence in how decision environments are framed, not just in the correctness of decisions. Effectively blending trust and verification will require ongoing executive vigilance. (See Table 2.)
These systems are not trivial to implement, given that they require sustained investment in data infrastructure, cross-functional talent, change management, and organizational design. Most legacy companies still struggle with fragmented data environments and siloed decision processes — foundational gaps that must be addressed before ICA adoption at scale is viable.
Moreover, ICA performance depends on cognitive data transformation: capturing perceptual patterns, tacit heuristics, and latent intention signals. Organizations overindexing on system data — volume, velocity, and accuracy — will miss the opportunity to align ICA framing logic with how humans perceive, prioritize, and decide.
A more realistic trajectory for most organizations will involve iterative progress: targeted pilots, partial deployments, and incremental learning. Early-stage ICA efforts may fail or underperform — not due to inherently flawed concepts but from bad data, cultural inertia, inadequate tooling, or expectations decoupled from executive commitment.
Building Trust in ICAs
Numerous executives we spoke with described wrestling with the need to build trustworthy, values-aligned, AI-driven decision systems that strengthen organizational culture and advance results.
Speaker | Quote | Relevance
Dr. Anjali Bhagra, Physician lead and chair, Automation Hub, and medical director, Office of Belonging, Mayo Clinic | “We have to trust AI with low-risk decisions before we can trust it with high-risk care.” | This statement captures the idea that building trust in AI and ICAs is a gradual process. It also links choice architecture design with risk stratification.
Emmanuel Frenehard, Chief digital officer, Sanofi | “Every AI use case starts as a governance problem and succeeds through cultural transformation.” | Brilliant in its simplicity, this remark connects decision rights, organizational learning, and cultural adaptability.
Ben Peterson, Vice president, People, Product & Design, Walmart | “Choice architectures are not neutral. It’s important to remember the models we build can be influenced by our own perceptions, and we have to be cognizant of that.” | This observation is a candid acknowledgment that decision environments are value-laden.
Philippe Rambach, Senior vice president and chief AI officer, Schneider Electric | “Explainability matters — but in the boardroom, consequence matters more.” | This is a reminder that AI governance must account for impact and accountability, not just model transparency.
These learning curves are not a reason for skepticism; they’re expected features of any strategic capability worth building.
Situational awareness and self-awareness are key. Organizations must consider whether their data and systems are ready for ICAs and, even more importantly, whether their people and incentives are aligned for intelligent choice. ICA success depends less on technical infrastructure and more on organizational introspection into five key questions:
To be clear, ICA readiness is not about AI literacy; it is about enterprise self-awareness. The deeper challenge is not whether or how increasingly intelligent systems can suggest progressively better choices but whether the organization is willing to see, understand, and act on them.
ICAs make possible not only better decisions but also a different model of decision-making — one that is more distributed, adaptive, and generative. In this model, choice becomes a shared computational and cognitive resource, not just an individual burden or executive prerogative.
ICAs reframe decision-making as a novel form of system design. They explicitly embed learning into the structure of choices. They turn decision environments into platforms that enable continuous improvements in enterprise intelligence — human, artificial, and collaborative.
Organizations that overlook or undervalue these shifts will effectively tether themselves to static decision frameworks in a market that demands adaptiveness. The strategic opportunity is not merely to make better decisions but to architect the conditions under which better decisions become probable and sustainable.
“You don’t scale AI. You scale trust in the system making the decisions.”
Intelligent choice architectures are not simply instruments of optimization or automation. They are a new medium, mechanism, and design principle for human empowerment. Even as generative, predictive, and agentic AI transform decision analytics and economics, ICAs restore and expand human agency by shaping decision environments in which better choices, deeper judgment, and broader imagination become possible.
Rather than displacing human decision makers, ICAs empower them by extending the frontier of viable options, illuminating unseen trade-offs, and framing uncertainty as an opportunity, not paralysis.
By framing uncertainty as an opportunity and thus forestalling decision paralysis, organizations that treat ICAs as empowerment architectures — not just decision technologies — point to the next era of human-machine collaboration. Empowerment is not a byproduct of ICAs; it is their highest and most strategic expression.
An ICA that merely presents choices without improving comprehension, confidence, or context isn’t empowering — it’s just another analytic input. Empowerment occurs when better choice environments unlock better judgment. Otherwise, it’s just better user experience masquerading as intelligence. Better choices don’t just make better decisions. They make better decision makers.
Autonomy isn’t empowerment; intentionally designed ICAs show how clarity, context, and consequence enable agency. When ICAs surface better choices — ones that are more relevant, accessible, and better framed — they don’t merely improve decisions; they reallocate cognitive power across human-machine teams, shifting how choices are perceived, weighed, and acted upon. Our interviews across a half-dozen industry verticals suggest that real empowerment means the following:
In decision environments increasingly saturated with predictive and generative AI, leaders need to reconsider decision rights — both upstream, in the design of intelligent choice architectures, and downstream, where managers can exploit their enhanced choice sets.
Consider a global logistics company where an AI system streamlines route planning by minimizing fuel consumption, which reduces the organization’s carbon footprint. While this “intelligent initiative” saves money and helps the company meet its sustainability goals, it inadvertently deprioritizes high-value customer deliveries. The decision about which trade-off mattered more was made invisibly upstream, while its downstream effects compounded through customer relationships, potentially undermining the very customer-centric efficiencies the system was originally designed to create. Meta decision rights — the ability to architect the choices available to managers — must become a greater leadership priority and privilege.
When supply chain AIs autonomously reprioritize supplier inputs during disruptions, the question isn’t who signed off on the choice — it’s “Who trained the framing logic?” Machine learning transforms executive accountability from point decisions to systems design. As health care systems increasingly incorporate AI for diagnostic triage, for example, subtle shifts in the weights assigned to patients’ symptoms can steer physicians toward certain treatment protocols. Who oversees the tension between medical judgment and clinical asset utilization? Who signs off when an AI’s recommendations conflict with credible human judgment? The authority battlegrounds shift from individual decisions to the architecture of decision environments.
Our research strongly suggests that organizations that don’t explicitly address meta decision rights will find their systems quietly becoming de facto policy makers, setting priorities and making trade-offs without any oversight or assurance of strategic alignment. Feedback loops that harvest and cultivate data from the exercise of decision rights must ensure that decision environments are continuously refreshed and aligned with strategic and operational objectives.
Most legacy enterprises treat decision rights as static governance artifacts set by leadership to forge clear lines of authority. Effective agentic environments require decision rights to become dynamic protocols — continuously allocated, contested, escalated, or deferred among humans and machines.
With apologies to the late Harvard Business School professor Michael Jensen, Decision Rights 2.0 introduces principles such as:
These aren’t just governance policies or challenges; they represent fundamental scaffolds for how knowledge, authority, and responsibility flow bidirectionally across agent networks. Accountable agentic AI requires these rights to be explicitly engineered, not tacitly assumed. Transparency becomes not just an imperative but an existential requirement for maintaining human agency in increasingly automated decision flows. (See Table 3.)
Evolving Decision Rights and Human-AI Collaboration
These quotes illustrate how executives are grappling with decision rights when using AI to make decisions.
Speaker | Quote | Relevance
Monica Caldas, Global CIO, Liberty Mutual Insurance | “The moment AI enters the workflow, the real question isn’t ‘What does the model say?’ It’s ‘Who gets to disagree with it, and how fast?’” | This statement points to the need for disagreement protocols, override rights, and reputation-aware dissent — things that intelligent agents must eventually encode or mediate and that are crucial for Decision Rights 2.0.
Monica Caldas, Liberty Mutual Insurance | “We use AI to inform decisions, not to automate them blindly. There’s always a human in the loop, but the loop is getting tighter.” | Human in the loop is evolving into choice in the loop — a subtle but critical transition.
Ben Peterson, Vice president, People, Product & Design, Walmart | “In retail, the most strategic decisions aren’t made by senior leaders anymore; they’re made by the systems we build.” | This is a provocative redefining of decision rights: It’s not just about redistribution but also reframing the value of human judgment versus machine judgment.
Philippe Rambach, Senior vice president and chief AI officer, Schneider Electric | “AI doesn’t replace decision-making — it reframes what decisions are worth making by humans.” | This calls out the invisible hand of intelligent systems — and why designing choice architectures is now strategic leadership.
Ragavan Srinivasan, Vice president, product, Meta | “AI can now recommend — but who has the right to say yes? We’re redesigning our approval processes around that question.” | This is Decision Rights 2.0 in action — a direct confrontation with legacy org charts and controls.
AI systems trained to improve decision environments often outperform their human counterparts in surfacing relevant options, identifying latent trade-offs, and optimizing for complex objectives. But what happens when systems learn faster than leadership structures adapt?
Global biotechnology company Danaher is starting to deploy ICAs to transform decision-making across its M&A, product strategy, and innovation road maps. The goal is to synthesize complex data into user-friendly “cockpits” that streamline decision processes. While Danaher’s leaders retain decision authority, the approach is designed to give them a “real-time ability to dive into data that would’ve taken analysts weeks to prepare,” says Martin Stumpe, Danaher’s chief data and AI officer. “One concrete example for this is supply chain optimization, where advanced analytics can lead to substantial gains.”
This creates inherent learning-authority dilemmas: The cockpit’s logic is performance-optimized, real-time, and empirically validated, yet its internal thresholds and prioritizations are often invisible to the leaders who rely on it. Managers still own the decisions, but they operate within environments that have quietly evolved to prefer certain kinds of outcomes over others. When performance exceeds permission, operations effectively decouple from strategic execution. The meta decision about what outcomes matter most may shift, subtly and silently, beyond conscious organizational control.
While legacy governance models like RACI (responsible, accountable, consulted, and informed) presume static roles, clear authority lines, and human-centric accountability, ICAs implicitly fracture these assumptions. In decision environments where AI proposes, evaluates, and even initiates action, accountability must become relational, distributed, and fluid. Orchestration supersedes delegation.
The goal is no longer “assigning the decider” but ensuring that human and machine intelligence are coordinated, orchestrated, and activated for the decision(s) at hand.
This involves five strategic shifts in decision rights.
From KPIs to KPAIs: Evolving Metrics and Systems Thinking
KPIs need to assess the quality of decision environments, not just outputs.
Speaker | Quote | Relevance
Pierre-Yves Calloc’h, Chief digital officer, Pernod Ricard | “We’re trying to make our metrics more intelligent, not just more granular. Intelligence is about usefulness more than precision.” | This offers a powerful reminder: Intelligent metrics serve decisions, not dashboards. They are anti-perfection and pro-action.
Philippe Rambach, Senior vice president and chief AI officer, Schneider Electric | “KPIs are evolving. They’re no longer just retrospective metrics — they’re becoming real-time negotiation tools.” | This is one of the best articulations of the measurement thesis: KPIs that learn, negotiate, and adapt can align in real time.
Ragavan Srinivasan, Vice president, product, Meta | “One of our biggest lessons: The same data-quality standards don’t apply when you’re optimizing for speed versus learning. They require different architectures.” | This is a potent insight into trade-off architecture — how choice design must differentiate between fast execution and strategic learning. KPAIs (key performance AI indicators) can operate across different time horizons and organizational tempos.
Intelligent choice architectures and the evolution of Decision Rights 2.0 go well beyond changing how decisions are made — they redefine and refine how performance gets measured. The architecture of the decision environment increasingly determines the shape of success: what counts, what improves, what feedback loops learn, and what scales.
There are three key factors that explain why traditional KPIs are structurally inadequate:
New measurement systems are needed to assess the quality of decision environments.5 (See Table 4.) In the context of ICAs, KPIs will measure outputs, as well as the system intelligence that, in some cases, created them. These KPAIs (key performance AI indicators) describe how well the decision environment learns, adapts, frames, and orchestrates.
Our research points to an enterprise decision-making future that is less speculative than operational; the data is already embedded in code, dashboards, and workflows, with AI models learning in real time. Organizations increasingly expect agentic AI to proactively automate what should be automated, assist where assistance is needed, and augment what should be augmented. Predictive and generative AI are no longer mere technologies; they are capabilities — ambient, infrastructural, and always on.
“We’re learning that AI forces conversations between teams who never talked before. That tension is where the value lives.”
The most consequential shift underway in business doesn’t replace human decision makers with “smarter” machines or enhanced algorithmic decision-making; it fundamentally revisits and rethinks the environments in which decisions are made — and who shapes those environments. Leaders will win not by making better choices but by building better environments, where better choices become algorithmically and operationally inevitable.
ICAs are not the next stage of automation; they represent the future of choice itself. They reframe choice-making as a design problem: structuring, surfacing, and expanding meta choices that influence outcomes before options are consciously considered. In other words, they offer a better way to deliver better choices. The real revolution lies not in faster decisions but in smarter decision environments, where humans and machines collaboratively curate options.
The strategic edge is no longer defined solely based on who decides but on how choices are structured, surfaced, and evaluated. Organizations that recognize this shift treat decision-making not as a fixed function of leadership but as a design problem — one that is continuously improved through intelligent systems that learn to improve. The solution to that design problem requires choice architecture literacy, governance fluency, intelligence orchestration, and system accountability.
The future is already being intelligently designed. The challenge now is to become intentional about how we govern it.
The table below outlines the capabilities of intelligent choice architectures to change decision environments.
Intelligent Choice Architecture (ICA) Capabilities | How ICA Capabilities Change Decision Environments
Elevating Decision Quality Through Expanded Choice Sets | ICAs bring a wider array of high-quality, contextually relevant choices to the forefront. Unlike traditional decision tools, which often present static or limited options, ICAs dynamically generate new alternatives based on evolving data patterns and contextual insights. This expansion means that decision makers are not confined to conventional or habitual choices; instead, they can consider innovative options that may have been previously hidden or overlooked. This boosts the quality of decisions by ensuring that people’s choices reflect a more comprehensive understanding of the decision context.
Anticipating Outcomes With Predictive Foresight | By integrating predictive modeling, ICAs provide decision makers with insights into potential outcomes for each option in real time. This anticipatory capacity helps decision makers weigh trade-offs and risks more effectively. For example, a retail manager assessing inventory decisions might see not only the immediate costs but also the projected downstream impacts on sales, supply chain dependencies, and seasonal trends. This predictive foresight helps decision makers align their choices with longer-term strategic goals rather than just short-term gains.
Adapting Choices Through Continuous Learning and Feedback | ICAs learn from previous outcomes, continuously refining their own architecture based on new data and feedback. This means that decision environments are not static; they evolve and improve over time, becoming more aligned with organizational goals and individual decision makers’ preferences. In a talent management scenario, for instance, an intelligent choice architecture might identify patterns in employee performance and turnover to adjust its recommendations for promotions, training, or transfers. This adaptability ensures that the system remains relevant and valuable as situations and objectives shift.
Enhancing Decision Confidence by Revealing Hidden Interconnections | ICAs expose the interdependencies between different choices, making it easier for decision makers to understand how one choice impacts others across the organization. This interconnected view is particularly valuable in complex environments where decisions in one area can have cascading effects in others. For example, a marketing manager at a global spirits company like Pernod Ricard could see how adjustments to campaign targeting affect inventory needs, distribution channels, and customer engagement. By making these connections transparent, ICAs help decision makers feel more confident and informed since they can see the broader implications of their choices.
Decentralizing Decision-Making With Tailored Choice Architectures | By providing context-specific guidance directly to individuals at all levels, not just top leaders, and tailoring decision environments to the needs of different roles, intelligent choice architectures enable more agile and decentralized decision-making across the organization.
Reducing Cognitive Load by Streamlining Complex Information | ICAs filter and prioritize information, presenting decision makers with the most relevant data and choices, which minimizes cognitive overload. Rather than wading through endless reports or raw data, decision makers receive streamlined insights and summaries that highlight essential patterns, anomalies, and recommended actions. For example, in supply chain management, an intelligent choice architecture could surface key inventory adjustments or supplier choices based on real-time demand fluctuations and historical trends, sparing managers from unnecessary complexity. By simplifying complex information, ICAs allow decision makers to focus their attention on critical decisions with clarity and confidence, improving both speed and accuracy in decision-making.
Personalizing and Interacting With Decision-Making Environments | ICAs create an interactive, engaging, and highly customized environment that adapts to each decision maker’s preferences, needs, and goals. Rather than offering a one-size-fits-all interface, these architectures adjust dynamically, using user interactions and feedback to shape how information and options are presented. For instance, a retail executive might prioritize metrics like customer lifetime value or churn predictions, while a store manager may need insights on daily inventory and staffing. ICAs can personalize dashboards and recommendations accordingly, making interactions feel more intuitive and responsive. Additionally, intelligent choice architectures can incorporate interactive tools like what-if scenarios, simulations, and decision trees, enabling decision makers to explore potential outcomes in real time and test various options before committing to a course of action. This interactive engagement not only makes the decision process more enjoyable but also boosts confidence, since users can see the immediate effects of adjustments and tailor their decision pathways to better align with strategic priorities.
Michael Schrage is a research fellow with the MIT Sloan School of Management’s Initiative on the Digital Economy. His research, writing, and advisory work focus on the behavioral economics of digital media, models, and metrics as strategic resources for managing innovation opportunity and risk.
David Kiron is the editorial director, research, of MIT Sloan Management Review and program lead for its Big Ideas research initiatives.
Todd Fitz, Kevin Foley, Vikrant Gaikwad, Siva Ganesan, Sarah Johnson, Ashok Krish, Abhinav Kumar, Michele Lee DeFilippo, Samantha Oldroyd, Stephanie Overby, Lauren Rosano, Allison Ryder, Serge Vatin-Perignon, Harrick Vin
The research and analysis for this report were conducted under the direction of the authors as part of an MIT Sloan Management Review research initiative in collaboration with and sponsored by Tata Consultancy Services.
To cite this report, please use: M. Schrage and D. Kiron, “Winning With Intelligent Choice Architectures,” MIT Sloan Management Review and Tata Consultancy Services, July 2025.
We thank each of the following individuals, who were interviewed for this article:
Anjali Bhagra
Physician lead and chair, Automation Hub, and medical director, Office of Belonging, Mayo Clinic
René Botter
CIO, ASML
Monica Caldas
Global CIO, Liberty Mutual Insurance
Pierre-Yves Calloc’h
Chief digital officer, Pernod Ricard
Emmanuel Frenehard
Chief digital officer, Sanofi
Bhushan Ivaturi
Former CIO, Enbridge
Earl Newsome
CIO, Cummins
Mark O’Flaherty
Interim managing director, digital data and AI, BT
Ben Peterson
Vice president, People, Product & Design, Walmart
Philippe Rambach
Senior vice president and chief AI officer, Schneider Electric
Ragavan Srinivasan
Vice president, product, Meta
Martin Stumpe
Chief data and AI officer, Danaher
Greg Ulrich
Chief data and artificial intelligence officer, Mastercard
Funmi Williamson
Chief customer officer, Southern California Edison
Shuyin Zhao
Vice president, product, GitHub Copilot
This report, developed in collaboration with Tata Consultancy Services, examines how leading organizations are integrating predictive and generative AI to develop improved choices and present them to human decision makers. Drawing on interviews conducted in 2024 and 2025 with senior leaders in six major industry groups, our research reveals the emergence of intelligent choice architectures — a new paradigm where AI systems proactively participate in structuring and shaping strategic decisions. The implications for organizational performance, decision rights, and strategic agility are significant, particularly as businesses navigate increasing complexity and compressed decision cycles.
At MIT Sloan Management Review (MIT SMR), we explore how leadership and management are transforming in a disruptive world. We help thoughtful leaders capture the exciting opportunities — and face down the challenges — created as technological, societal, and environmental forces reshape how organizations operate, compete, and create value.
MIT SMR’s Big Ideas Initiatives develop innovative, original research on the issues transforming our fast-changing business environment. We conduct global surveys and in-depth interviews with front-line leaders working at a range of companies, from Silicon Valley startups to multinational organizations, to deepen our understanding of changing paradigms and their influence on how people work and lead.