CIO Roundtable on Strategic AI Leadership: Redefining Decision-Making in the Age of Agentic Intelligence – AWS and IBM – November 10, 2025
Kapil Gupta, IBM
Leading in the era of AI requires a fundamental shift in mindset — from experimenting with isolated tools to orchestrating intelligence across the enterprise. Success will depend on how effectively organizations institutionalize AIOps, redefine systems of record, establish meaningful success metrics, and move beyond pilots into scaled production.
For most enterprises, the real challenge is evolving from initial efforts to integrate AI into operations toward a level of maturity where intelligent systems — including emerging capabilities such as agentic AI — are managed, measured, and continuously optimized for impact.
These were among the themes that came into sharp focus during a recent CIO.com executive dinner roundtable series in San Francisco and Chicago, co-hosted with leaders from Amazon Web Services (AWS) and IBM.
Executives across financial services, manufacturing, healthcare, publishing, higher education, technology, and the public sector gathered to discuss how autonomous AI systems are changing the fundamentals of leadership, control, and enterprise design.
As enterprises race to harness these capabilities, executives are discovering that the real challenge lies not in what generative and agentic AI can do — but in how organizations choose to govern, scale, and measure them.
Mahmoud Elmashni, IBM
From Governance to Growth: San Francisco Insights
In San Francisco, Ally Gardner and Sam Malik of AWS, along with Mahmoud Elmashni and Kapil Gupta from IBM, framed the discussion around one of the most pressing questions facing technology leaders today: how to make agentic AI both powerful and responsible.
Participants agreed that the rise of agentic AI requires a fundamental redefinition of decision rights. As one CIO put it, “autonomy without accountability is chaos.” Gupta concurred, noting that organizations are now developing tiered control models in which certain AI functions operate autonomously while human oversight remains mandatory for high-impact decisions.
This structured balance between automation and accountability is fast becoming a cornerstone of AI-era governance.
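To make the idea concrete, a tiered control model can be pictured as a small routing policy that decides which actions an agent may take on its own and which require a person. The tiers, action categories, and defaults in this Python sketch are illustrative assumptions, not a framework endorsed at the roundtable.

```python
from enum import Enum
from dataclasses import dataclass

class Tier(Enum):
    AUTONOMOUS = "autonomous"      # agent acts; the result is logged for audit
    HUMAN_REVIEW = "human_review"  # agent proposes; a human must approve
    HUMAN_ONLY = "human_only"      # agent may only recommend, never act

# Illustrative policy: map action categories to control tiers.
# A real organization would derive these from risk and compliance reviews.
POLICY = {
    "ticket_triage": Tier.AUTONOMOUS,
    "price_change": Tier.HUMAN_REVIEW,
    "credit_decision": Tier.HUMAN_ONLY,
}

@dataclass
class AgentAction:
    category: str
    description: str

def route(action: AgentAction) -> Tier:
    """Return the control tier for an action; unknown categories default to
    the most restrictive tier so autonomy never exceeds the written policy."""
    return POLICY.get(action.category, Tier.HUMAN_ONLY)

print(route(AgentAction("ticket_triage", "reassign ticket to network team")))  # Tier.AUTONOMOUS
print(route(AgentAction("credit_decision", "approve small-business loan")))    # Tier.HUMAN_ONLY
```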
“Agentic systems are incredibly fast at problem-solving, but leadership must determine where speed ends and judgment begins,” said Elmashni. “That’s where frameworks for responsibility and trust must evolve in parallel.”
What once was viewed as bureaucratic friction is now being reframed as an innovation accelerator. Gupta explained that adaptive governance frameworks — built to evolve alongside AI models — are allowing organizations to innovate confidently. “Governance isn’t about slowing things down anymore,” Gupta said. “It’s about providing a mechanism for continuous alignment between AI capability and business intent.”
Sam Malik, AWS
Attendees described emerging AI oversight architectures, including cross-functional ethics committees and data-review boards designed to track model performance, bias, and compliance. Far from constraining progress, these structures are enabling it — allowing enterprises to scale responsibly without losing control.
Another thread dominating the San Francisco conversation was AIOps — the convergence of automation, observability, and intelligent diagnostics as an enterprise discipline. Participants emphasized that as AI adoption expands, so too must the ability to maintain operational stability.
“You can’t scale what you can’t see,” said one technology leader. Building AIOps as an institutional capability — with unified visibility and AI-driven feedback loops — was cited as essential to sustaining reliability in increasingly autonomous environments.
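One way to picture an AI-driven feedback loop of that kind is a monitor that compares each new reading against a rolling baseline and raises a remediation signal when it drifts. The metric, window size, and threshold in this sketch are hypothetical placeholders, not a reference to any specific AIOps product.

```python
from collections import deque
from statistics import mean, stdev

class FeedbackLoop:
    """Minimal AIOps-style monitor: observe a metric, compare it to a rolling
    baseline, and emit a remediation signal when it drifts too far."""

    def __init__(self, window: int = 30, sigmas: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, value: float) -> str:
        signal = "ok"
        if len(self.history) >= 5:  # wait for a minimal baseline
            baseline, spread = mean(self.history), stdev(self.history)
            if spread and abs(value - baseline) > self.sigmas * spread:
                signal = "anomaly: open incident / trigger remediation"
        self.history.append(value)
        return signal

loop = FeedbackLoop()
for latency_ms in [102, 98, 105, 99, 101, 100, 103, 400]:  # simulated p95 latency
    print(latency_ms, loop.observe(latency_ms))
```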
Executives also explored how new systems of record are emerging as agents continuously generate and act on data. Traditional transactional databases are giving way to vector stores, dynamic knowledge graphs, and model-driven repositories that evolve in real time. This shift raises urgent questions about data ownership, lineage, and validation in a constantly changing information ecosystem.
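A minimal sketch of such a record, assuming a simple embedding-plus-lineage schema, shows how provenance can travel with every item an agent stores and how a lookup can still be traced back to a source system; the field names and vectors are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import math

@dataclass
class KnowledgeRecord:
    """Illustrative entry in an agent-maintained knowledge store: the payload
    travels together with its lineage metadata."""
    text: str
    embedding: list[float]
    source_system: str   # which system of record owns the original data
    derived_by: str      # which agent or model produced this entry
    validated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def nearest(query: list[float], store: list[KnowledgeRecord]) -> KnowledgeRecord:
    """Return the most similar record, so an answer can always be traced back
    to its source system and the agent that derived it."""
    return max(store, key=lambda r: cosine(query, r.embedding))

store = [
    KnowledgeRecord("Q3 churn summary", [0.1, 0.9], "crm", "summarizer-agent"),
    KnowledgeRecord("Warranty policy v4", [0.8, 0.2], "erp", "doc-ingest-agent"),
]
hit = nearest([0.2, 0.8], store)
print(hit.text, "| owner:", hit.source_system, "| derived by:", hit.derived_by)
```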
Ally Gardner, AWS
Perhaps the most candid exchange centered on metrics. CIOs acknowledged that legacy KPIs — such as uptime and SLA compliance — fail to capture the value of AI-driven systems. The new scorecard, several agreed, must focus on model precision, user adoption, process acceleration, and decision quality. As one participant summarized, “We can’t manage tomorrow’s intelligence with yesterday’s measurements.”
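That scorecard could be expressed as a handful of named measures rolled into a composite; the fields mirror the dimensions participants listed, while the weights and values below are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIScorecard:
    """Dimensions named at the roundtable; all values normalized to 0..1."""
    model_precision: float       # e.g. precision on a labeled evaluation set
    user_adoption: float         # share of target users active each month
    process_acceleration: float  # cycle-time reduction versus the old process
    decision_quality: float      # share of AI-assisted decisions later upheld

    def composite(self, weights=(0.3, 0.2, 0.25, 0.25)) -> float:
        dims = (self.model_precision, self.user_adoption,
                self.process_acceleration, self.decision_quality)
        return sum(w * d for w, d in zip(weights, dims))

print(AIScorecard(0.92, 0.61, 0.40, 0.85).composite())  # weighted 0..1 score
```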
And while enthusiasm for AI experimentation remains high, many organizations are struggling to operationalize their ideas. The key, participants concluded, is to define production-readiness criteria early — aligning use cases with measurable value and embedding risk, compliance, and finance teams into design from day one. “Operationalization must be a design principle, not an afterthought,” said AWS’s Malik.
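A production-readiness gate along those lines can start as a simple checklist that blocks launch until every stakeholder sign-off is in place; the criteria names below echo the discussion and are not a formal AWS or IBM checklist.

```python
# Hypothetical production-readiness gate; criteria names are illustrative.
READINESS_CRITERIA = {
    "measurable_value_defined": True,     # baseline and target KPI agreed
    "risk_review_complete": True,         # failure modes reviewed and accepted
    "compliance_review_complete": False,  # regulatory mapping still open
    "finance_model_approved": True,       # run-rate cost and funding confirmed
}

def ready_for_production(criteria: dict[str, bool]) -> bool:
    return all(criteria.values())

blockers = [name for name, met in READINESS_CRITERIA.items() if not met]
print("ready:", ready_for_production(READINESS_CRITERIA), "| blockers:", blockers)
```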
That operational mindset is also influencing fiscal strategy. CIOs and CFOs are now modeling the total cost of AI, factoring in model training, inference workloads, data governance, and even energy consumption.
According to Gardner, the most effective organizations are “moving from capital-intensive projects toward outcome-based operational investments that give them flexibility and financial discipline.”
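Modeling the total cost of AI can begin as straightforward arithmetic across the categories the group named; every figure in this sketch is a made-up placeholder intended only to show the cost lines side by side.

```python
# Hypothetical monthly figures (USD) for a single production use case.
training = 18_000         # periodic fine-tuning and retraining runs
inference = 42_000        # model serving at the expected request volume
data_governance = 9_000   # lineage tooling, reviews, compliance overhead
energy = 3_500            # estimated power attributable to the workload

total_cost = training + inference + data_governance + energy
value_delivered = 95_000  # e.g. hours saved priced at loaded labor rates

print(f"monthly total cost of AI: ${total_cost:,}")
print(f"net monthly value: ${value_delivered - total_cost:,}")
```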
Balu Angaian, IBM
Chicago: From Strategy to Structure
In Chicago, the discussion picked up where San Francisco left off — focusing on the organizational structures and frameworks needed to translate AI strategy into sustained enterprise performance.
Balu Angaian and Ambhi Ganesan from IBM, joined by Liz Burton, Thadious Fisher, and Seiji Shinozaki from AWS, emphasized that leadership must define clear boundaries of autonomy as AI systems assume more decision-making power.
“The cornerstone of responsible AI leadership is clarity about who — or what — decides,” said Angaian. Executives across sectors described creating tiered autonomy frameworks that specify when an AI agent can act independently, when it must escalate, and how accountability is enforced. As Ganesan noted, “Boundaries don’t slow you down — they keep you in control.”
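In practice, such boundaries often come down to a guard around every agent action that either executes or escalates, and logs the outcome so accountability is traceable. The dollar threshold and log format in this sketch are assumed for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
AUTONOMY_LIMIT_USD = 10_000  # assumed boundary; real limits come from policy

def execute_or_escalate(agent_id: str, action: str, impact_usd: float) -> str:
    """Act autonomously below the limit, escalate above it, and log either way
    so accountability traces back to a specific agent and decision."""
    outcome = "executed" if impact_usd <= AUTONOMY_LIMIT_USD else "escalated_to_human"
    logging.info("agent=%s action=%r impact=%.0f outcome=%s",
                 agent_id, action, impact_usd, outcome)
    return outcome

print(execute_or_escalate("procure-bot", "renew SaaS license", 4_200))
print(execute_or_escalate("procure-bot", "sign new vendor contract", 250_000))
```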
The conversation then turned to measurement. While financial returns remain important, participants agreed that Return on Efficiency (RoE) may be a more accurate way to capture AI’s value. As one banking executive observed, “True ROI in AI isn’t just about savings — it’s about how effectively intelligent systems can enhance productivity, accelerate outcomes, and improve the organization’s adaptability.”
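The roundtable did not define a formula for Return on Efficiency, so the calculation below is only a hypothetical illustration of the productivity-per-dollar idea behind the banking executive's remark; all inputs are invented.

```python
# All inputs are invented; the formula is one possible reading of
# "Return on Efficiency" and was not defined at the event.
hours_saved = 1_200          # analyst hours no longer spent on manual work
loaded_hourly_rate = 85      # USD per hour, fully loaded
cycle_time_reduction = 0.35  # 35% faster end-to-end process
ai_operating_cost = 60_000   # monthly serving, oversight, and tooling

efficiency_value = hours_saved * loaded_hourly_rate
roe = efficiency_value / ai_operating_cost  # value created per dollar of AI spend

print(f"efficiency value: ${efficiency_value:,}")
print(f"illustrative RoE: {roe:.2f}x, with cycle time down {cycle_time_reduction:.0%}")
```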
Ambhi Ganesan, IBM
This reframing aligns closely with a growing focus on organizational agility. Leaders described building cross-functional models that allow IT, finance, and compliance teams to dynamically adjust workloads, budgets, and risk models as AI capabilities evolve. Agility, they noted, is no longer a cultural trait — it’s a governance requirement.
Participants also underscored the importance of data-driven decision-making in selecting and prioritizing AI use cases. Too often, initiatives are launched based on executive enthusiasm rather than objective analysis. “Data, not instinct, must guide where we apply intelligence,” said Burton of AWS. “The credibility of AI in the enterprise depends on evidence-based value creation.”
Thadious Fisher, AWS
To that end, IBM and AWS described success stories in which AI projects began with measurable KPIs, clear baselines, and validation mechanisms that linked every deployment to a defined business outcome. Fisher pointed out that this discipline “builds trust across stakeholders — from engineers to the boardroom — by making AI performance transparent and defensible.”
Another major theme was rationalization — the need to simplify the growing sprawl of tools, models, and datasets scattered across the enterprise. Without consolidation, executives warned, oversight and cost management become impossible. Simplification, they agreed, is fast becoming a strategic management imperative essential for scalability and interoperability.
From a structural standpoint, the Chicago group drew lessons from Service-Oriented Architecture (SOA) and Object-Oriented Programming (OOP). The takeaway: modularity, reuse, and standardization should be revived for the AI era.
Elizabeth Burton, AWS
Several participants advocated for catalogs or libraries of validated agentic components — reusable elements that can be trusted, deployed, and monitored consistently across functions. This “à-la-carte” approach could bring order to experimentation and make scaling safer and faster.
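A catalog of validated agentic components could be as simple as a registry that records who owns and who approved each component and exposes only approved entries for deployment; the statuses and fields in this sketch are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentComponent:
    name: str
    version: str
    owner: str          # accountable team
    validated_by: str   # review board or ethics committee sign-off
    status: str         # "approved", "pilot", or "retired" (assumed states)

class ComponentCatalog:
    """Minimal registry: teams deploy only components the catalog marks approved."""

    def __init__(self):
        self._items: dict[tuple[str, str], AgentComponent] = {}

    def register(self, component: AgentComponent) -> None:
        self._items[(component.name, component.version)] = component

    def approved(self) -> list[AgentComponent]:
        return [c for c in self._items.values() if c.status == "approved"]

catalog = ComponentCatalog()
catalog.register(AgentComponent("invoice-matcher", "1.2.0", "finance-eng",
                                "ai-review-board", "approved"))
catalog.register(AgentComponent("contract-drafter", "0.3.1", "legal-eng",
                                "ai-review-board", "pilot"))
print([c.name for c in catalog.approved()])  # -> ['invoice-matcher']
```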
Finally, leaders converged on a pragmatic roadmap: start small, scale smart. Well-defined, low-risk projects can serve as learning laboratories for testing governance models, data pipelines, and operational workflows.
“You can’t transform responsibly if you don’t first learn responsibly,” Shinozaki said.
The Leadership Imperative
Across both cities, a clear consensus emerged: the AI revolution will not be won through algorithms alone. It will depend on leadership capable of bridging technology and accountability, innovation and oversight, speed and stability.
Seiji Shinozaki, AWS
Governance, measurement, and operational discipline are no longer back-office concerns — they are strategic levers. As one executive put it: “The question isn’t whether AI can transform the enterprise. It’s whether leadership can transform fast enough to match it.”