Decision Rights 2.0
The late Harvard Business School professor Michael C. Jensen devoted decades of research to determining how the distribution of decision rights drives corporate performance and what companies can do to allocate them most effectively. Jensen argued that decision rights allocation is “an extraordinarily difficult and controversial management task,” warning of the potential dangers of both the overcentralization and the overdemocratization of decision-making.3
As compound AI systems — systems that combine predictive and generative AI — learn to become more sophisticated choice architects, enterprises’ focus shifts from decision execution to decision design. Executives become accountable for the decision environments in which staff members operate, including defining when AI-generated nudges must be acted upon and when they can be overridden. Just how empowering or constraining should the intelligent choice sets generated for executive and managerial decisions be? Consider, for example, a trading algorithm that discovers a novel market pattern. Should it wait for human validation before acting? What about an ICA agent managing supply chain operations that identifies a more efficient logistics strategy: What permissions are required before implementing it? Under what conditions should the organization encourage human initiative versus obedience and compliance? These are questions leaders must consider.
ICA agents should reflect and respect an organization’s values and aspirations. In the Decision Rights 2.0 era, enterprises must determine who has the authority and responsibility to architect, deploy, and govern choice environments where human judgment and AI capabilities intersect. This authority carries explicit accountability for both immediate outcomes and the long-term effectiveness of decision architectures. This AI-driven redefinition elevates decision rights from a set of enterprise rules and practices regarding who can make specific decisions and what they can decide into strategic choices that shape how organizations harness the combined power of human judgment and artificial intelligence.
Indeed, ICA agents don’t just provide decision support — they create decision environments in which superior choices emerge from the interplay of machine intelligence and human judgment. Think of commercial aviation flight management systems that advise pilots: They don’t simply process navigation data; they adapt to different routes, weather patterns, and pilot preferences, all while operating within strict safety parameters. Similarly, enterprise ICA agents continuously learn while operating within clear operational, legal, and regulatory boundaries. This directly addresses the all-too-common fear that ever-smarter and more capable AI systems will render human judgment marginal or irrelevant. In fact, the opposite is true. As ICA agents take on the heavy lifting of data analysis, pattern recognition, and optimization, they free their human counterparts and collaborators to focus on higher-order challenges.
Liberty Mutual effectively created an ICA agent to help train new claims adjusters, delivering more tailored training drawn from 20,000 company knowledge articles. The agent helps adjusters more efficiently triage incoming customer calls and quickly resolve inquiries, and it is just one implementation of GenAI across the company. One year after the companywide deployment of LibertyGPT, the organization’s internal instance of OpenAI’s ChatGPT, Liberty Mutual has seen it improve and support employee productivity: The company has saved more than 200,000 person-hours compared with previous workloads, says Monica Caldas, Liberty Mutual’s global chief information officer.
With ICAs, significant corporate decisions depend as much on the nature and purpose of intelligent decision environments as they do on markets, products, culture, or strategy. A new focus on meta-decision rights emerges: the design and governance of the systems generating choices. This meta-decision imperative requires human leadership teams and intelligent algorithms to come together to determine how decision rights over decision rights themselves should be allocated. Ironically, leaders seeking to maximize value from AI have little choice but to meet these meta-decision obligations.