Rearchitecting the Digital Workspace: Risk, Resilience, and the Rise of the Autonomous Platform
A BizTechReports Executive Vidcast Interview with Christopher Reed, CTO of the Americas at Omnissa
In a time when enterprise IT leaders are facing a perfect storm of endpoint complexity, cloud sprawl, and heightened security risks, a new vision for the digital workspace is emerging. Omnissa, the end-user computing spinout from VMware following its acquisition by Broadcom, is aiming to redefine how organizations manage performance, productivity, and policy enforcement.
In this exclusive BizTechReports Executive Vidcast Q&A, Christopher Reed—CTO of the Americas at Omnissa—shares his insights on the evolution toward autonomous workspaces, the replatforming of legacy endpoint management, and how AI-powered decision engines are shaping the future of work.
Here is what he had to say:
BTR: Christopher, you and I recently hosted a roundtable dinner in Charlotte, North Carolina, with senior technology executives from across the metropolitan area, where we discussed how organizations are rethinking end-user computing. It was a fascinating discussion, and I wanted to follow up with you on how you have seen this topic evolve in recent years.
Reed: One of the most interesting takeaways from that dinner was realizing that many in the industry still think of end-user computing as simply “device management” or “virtualization.” They assume it’s about managing this type of device or that type of device. But what became clear is that today’s challenge is about creating a holistic, meaningful user experience across everything users do—desktops, laptops, mobile devices, even SaaS apps. People told me, “I use Product X because it’s included in my contract, but it’s not living up to what I need. I use Product Y just to manage Macs. Or Product Z for rugged devices.” Every one of those tools has valuable data, but there was no unified theme to say, “How is Chris Reed actually behaving?” rather than, “How is Chris’s iPhone behaving?” or “How is his Windows box behaving?”—and that disparity is the problem.
From a risk and security standpoint, we saw 10 different data sources feeding vulnerability notifications. From a user experience standpoint, nobody really knew how the user was performing. And from a productivity standpoint, users risked downtime or friction just trying to access the tools they needed. So the question became: Can AI pull all these disparate signals together and offer actionable insights? That’s our vision of an autonomous workspace—bringing all the data together to drive security, performance, and experience in a unified way.
BTR: You used the term “autonomous workspace.” Can you define that and explain how mature the concept is today?
Reed: When we talk about “autonomy” in the workspace, it’s akin to a self-driving car. A self-driving vehicle can “see” multiple lanes, understand traffic patterns, and decide when to change lanes without a human intervening—though you still need a level of trust before you take your hands off the wheel. Similarly, an autonomous workspace is one in which:
The endpoint protects itself during both offline and online use.
The endpoint enforces policies in real time (zero-touch provisioning and configuration).
When something goes wrong—say an app crashes or a user’s configuration drifts—the system detects abnormalities and automatically remediates them via policy enforcement (e.g., re-encrypt, re-provision, or lock down).
The autonomy comes from endpoints communicating with a cloud-based command-and-control system that has built-in AI/ML intelligence. That system interprets signals—authentication events, app launches, unusual behavior—and then triggers actions. You might lock a device, escalate multi-factor authentication, or push a policy update seamlessly. By contrast, today’s environments are highly fragmented: think of the “Bring Your Own Device” (BYOD) users who carry iPhones and Android tablets in addition to Windows laptops and Macs. Each device is often managed by a separate team or product. The endpoint is sending telemetry, but there’s no unified orchestration. That patchwork approach is simply unsustainable.
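To make that signal-to-action loop concrete, here is a minimal Python sketch of how a cloud-based decision engine might map normalized endpoint signals to remediation actions. The event names, risk thresholds, and action set are illustrative assumptions, not a description of any vendor's actual implementation.

```python
# Hypothetical sketch: a rules layer that turns endpoint signals into remediation actions.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP_MFA = "step_up_mfa"      # escalate multi-factor authentication
    RE_ENCRYPT = "re_encrypt_data"   # force re-encryption of local data
    LOCK_DEVICE = "lock_device"      # lock the endpoint outright

@dataclass
class Signal:
    user: str
    device: str
    event: str          # e.g. "auth_failure", "config_drift", "app_crash"
    risk_score: float   # 0.0 (benign) to 1.0 (critical), produced upstream by AI/ML scoring

def decide(signal: Signal) -> Action:
    """Map a normalized endpoint signal to a remediation action.

    The thresholds here are invented for illustration; in practice they would
    be policy-driven and the scoring model far richer.
    """
    if signal.risk_score >= 0.9:
        return Action.LOCK_DEVICE
    if signal.event == "config_drift":
        return Action.RE_ENCRYPT
    if signal.event == "auth_failure" and signal.risk_score >= 0.5:
        return Action.STEP_UP_MFA
    return Action.ALLOW

# Example: an unusual authentication event on a tablet triggers step-up MFA.
print(decide(Signal(user="chris", device="ipad-123", event="auth_failure", risk_score=0.6)))
```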
We’re working to deploy autonomy across all device types—mobile, desktop, rugged hardware, virtual instances, you name it. The key is normalizing the data so AI can make sense of it. Right now, many organizations build their own data lakes and then try to train AI. That’s costly, time-consuming, and error-prone. Instead, we believe in a single unified platform that natively ingests signals from all endpoints. That’s how you progress from “just managing a Mac” to “delivering a cohesive, risk-minimized, high-performance experience.”
BTR: What elements are required to move from fragmented device data to a truly unified, autonomous workspace?
Reed: The core requirement is consistent signal collection and normalization. In many organizations, you’ll find engineers who excel at managing one device family—say, Macs—but they do it in a silo. They know that operating system inside and out. Meanwhile, another team is focused on Windows endpoints. Another team handles mobile device management (MDM), and a different group looks at “ruggeds” or specialized hardware. Each of those tools emits telemetry in a proprietary format.
To deliver a holistic experience, you must bring all those signals—app usage, authentication events, security posture, performance metrics—into a single pane of glass. From there, you apply AI rules and policies to ask, for instance: “Chris just fetched a sensitive file on his Windows laptop, but then started editing it on his iPad while on a public Wi-Fi network. Should we quarantine the file or notify security? Should we force re-encryption?”
If data remains in multiple, unconnected silos, your guess is as good as mine. Organizations we spoke to at that Charlotte dinner had entire developer teams trying to stitch these data streams together—building homegrown data lakes, then attempting to train AI models. That’s capital- and labor-intensive, and it still ends up being firefighting rather than foresight. Instead, a unified platform normalizes the data at ingestion. You know immediately, “Okay, this is user Chris on Device A versus Device B versus Device C.” Once you have unified data, everything else becomes easier: risk scoring, experience analytics, automated remediation, and ultimately, autonomy.
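For readers who want a feel for what "normalizing at ingestion" can look like, the short Python sketch below maps telemetry records from two hypothetical per-platform feeds into one user-centric schema and then groups events by user. All field names and source formats are assumptions made for illustration.

```python
# Hypothetical sketch: normalize telemetry from per-platform tools into one user-centric record.
from datetime import datetime, timezone

def normalize_mdm_event(raw: dict) -> dict:
    """Mobile-management feed with keys like 'enrolledUser', 'udid', 'eventType' (assumed format)."""
    return {
        "user": raw["enrolledUser"],
        "device_id": raw["udid"],
        "platform": "ios",
        "event": raw["eventType"],
        "timestamp": raw["occurredAt"],
    }

def normalize_windows_event(raw: dict) -> dict:
    """Windows endpoint feed with keys like 'UserName', 'MachineGuid', 'Activity' (assumed format)."""
    return {
        "user": raw["UserName"].lower(),
        "device_id": raw["MachineGuid"],
        "platform": "windows",
        "event": raw["Activity"],
        "timestamp": datetime.fromtimestamp(raw["EpochSeconds"], tz=timezone.utc).isoformat(),
    }

def by_user(events: list[dict]) -> dict[str, list[dict]]:
    """Group normalized events so the question becomes 'how is Chris behaving?', not 'how is this device behaving?'."""
    grouped: dict[str, list[dict]] = {}
    for e in events:
        grouped.setdefault(e["user"], []).append(e)
    return grouped

# Illustrative inputs only.
events = [
    normalize_mdm_event({"enrolledUser": "chris", "udid": "IPAD-123",
                         "eventType": "file_open", "occurredAt": "2025-05-01T14:02:00Z"}),
    normalize_windows_event({"UserName": "Chris", "MachineGuid": "WIN-9F2",
                             "Activity": "file_download", "EpochSeconds": 1746108000}),
]
print(by_user(events))
```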
BTR: From a leadership perspective, whose responsibility is this? Is end-user computing still an IT/Procurement concern, or has it elevated to the C-Suite?
Reed: The most successful organizations treat end-user computing as a C-level imperative—specifically at the CIO, CTO, or Chief Digital Officer level. In the past, MDM was purely a procurement or asset-management function—“We need to manage a phone bill and ensure compliance.” But BYOD, cloud sprawl, and security breaches have made that mindset obsolete.
Today, every executive is aware that if the workforce can’t work seamlessly, productivity tanks—and every unpatched device is a potential breach vector. When you think about a digital transformation or a multi-year cloud migration, delivering a secure, high-performance, consistent user experience is equally critical. It directly impacts revenue, brand reputation, and compliance. So you no longer see device management relegated to siloed teams. Instead, senior leadership sets the vision: “We need an autonomous workspace that elevates experience and reduces risk.” That vision trickles down, and you then select the right integrated platform to execute it.
BTR: Many organizations hold onto legacy, point-solution thinking—“I already have Product X included in my contract, so let’s keep using it.” What drives a shift toward a platform approach?
Reed: It boils down to total cost of ownership (TCO), risk, and complexity. Financially, a lot of customers say, “Well, we already have this tool in our contract, so we’ll just bolt on a few add-ons to cover feature gaps.” In reality, what happens is you end up with 20+ point products, each licensed differently, each generating its own telemetry stream. You pay for them, but you still lose the unified management and effective automation that a single platform provides. One of our financial-services customers discovered they were paying for 23 separate products just to cover gaps left by their legacy solutions.
When you move to a holistic platform, you eliminate redundant licenses, streamline operations, and dramatically cut manual labor. Plus, the platform you choose typically offers AI/ML capabilities baked in, so you don’t have to hire multiple data scientists to integrate and normalize data.
By contrast, if you keep “Frankensteining” point solutions, you’ll spend twice as much money and six times as much effort troubleshooting.
BTR: Let’s dig into performance and risk. Does routing all traffic and telemetry through a centralized platform introduce latency or other bottlenecks?
Reed: That’s a fundamental question. In modern end-user computing, productivity is the only performance metric that matters. If an application runs slowly, that’s a problem. But “device performance” in isolation—CPU cycles, RAM usage—doesn’t matter as long as the experience stays seamless. Our approach is to optimize for “centrally distributed” computing rather than purely centralized or purely edge computing, a model in which:
Centralized workloads (think legacy VDI or data-center apps) run where they run best.
Edge workloads (modern, web-native progressive apps or AI inferencing) run on the device or at the closest edge node.
A true end-user computing platform gives you the flexibility to decide where an app should execute on a per-workload basis. If you have a CAD app that runs better in a Kubernetes cluster in a local datacenter, you route it there. If you have a collaborative web app that performs fine in a browser, you let it run locally on an employee’s laptop. By dynamically distributing workloads, you avoid both the “all-in on data center” latency problem and the “all-in on edge” performance limitations.
This approach also ties mobile device management and virtual app delivery together into a single “hub experience.” When you click on a Word document, the platform decides whether to open it in a local Progressive Web App, a native install, or a virtual desktop—based on policies, user context, and network conditions. That’s how you optimize productivity while enforcing security consistently.
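A rough way to picture that per-workload routing decision is a small policy function like the hypothetical Python sketch below; the delivery targets, policy flags, and bandwidth threshold are assumptions chosen for illustration rather than actual platform behavior.

```python
# Hypothetical sketch: choose where a workload runs based on policy, app traits, and network conditions.
def choose_delivery(app: dict, context: dict) -> str:
    """Return a delivery target: 'vdi', 'native', or 'pwa' (progressive web app)."""
    # Data-governance requirements or heavy compute (e.g., CAD) pin the workload to a virtual desktop.
    if app.get("requires_data_governance") or app.get("gpu_intensive"):
        return "vdi"
    # Poor or metered connectivity favors whatever already runs on the device itself.
    if context["bandwidth_mbps"] < 5 and app.get("installed_locally"):
        return "native"
    # Otherwise a lightweight, web-capable app can simply run in the local browser.
    return "pwa" if app.get("web_capable") else "native"

# Example: opening a document app on a well-connected laptop with no governance flag.
print(choose_delivery(
    {"name": "word_processor", "web_capable": True, "installed_locally": True},
    {"bandwidth_mbps": 120, "network": "corporate"},
))
```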
BTR: You mentioned a resurgence in virtualization (VDI). Is that accurate, and why is VDI seeing renewed interest?
Reed: Absolutely. Virtual Desktop Infrastructure struggled for years due to latency and bandwidth constraints. Ten years ago, VDI only worked acceptably in places with fiber-fast LANs, and even then, users complained of lag. But in 2025, major satellite networks can provide high-bandwidth, low-latency connectivity virtually anywhere on the planet. Combine that with continual protocol optimizations—adaptive codecs, network-aware streaming, and predictive caching—and VDI performance today is far superior to what it was in 2015.
As a result, organizations are rediscovering VDI, especially for tasks that require high compute or strict data governance: CAD modeling, high-end video editing, financial modeling, R&D labs, etc. With an end-user computing platform, VDI becomes one delivery option in your “hub.” The same seamless policy layer applies whether you run your desktop in the cloud, on a dedicated appliance in your data center, or directly on a laptop. And because the platform is self-optimizing, it automatically shifts workloads between local execution, VDI, or container-based sessions depending on where they run best.
BTR: At your dinner, a common theme was “Who owns end-user computing?” With so many stakeholders—IT ops, security, procurement, business units—how do you get everyone to align?
Reed: The most critical factor is governance. You have to define: “What is our ultimate objective? Are we merely managing devices? Or are we building a digital resource delivery system that drives productivity, reduces risk, and delights users?” If your objective is the latter, then you need C-level sponsorship—typically from the CIO, CTO, or a CDIO—because that’s not a line-of-business issue; it’s a strategic business imperative.
Once the vision is clear, you assemble a cross-functional steering committee to oversee and manage deployment, networking, and scale planning; compliance, breach mitigation, and zero-trust policies; licensing, contracts, and cost controls; and, finally, user feedback, adoption metrics, and change management.
This group decides whether to extend existing point solutions or embrace a unified platform. The key is executive alignment: If procurement pushes for “free” point solutions because they’re in contract, but security refuses due to data silos and risk, nobody wins. When leadership says, “We need an autonomous workspace,” everyone aligns their budgets and priorities toward that goal.
BTR: Finally, how do you measure success? What KPIs should organizations track as they evolve toward the autonomous workspace?
Reed: Great question. The first step in measuring success is to establish clear baseline metrics before any transformation begins. You need to understand how your organization is currently performing across four key dimensions: productivity, security, cost, and user satisfaction.
From a productivity standpoint, we look at factors such as how much time users spend troubleshooting, how frequently help-desk tickets are opened and resolved, and how long it takes on average to launch key applications. These indicators help assess whether your technology environment is enabling—or hindering—your workforce.
Security posture is another critical area. Metrics here include the number of unpatched endpoints across your environment, the mean time it takes to remediate vulnerabilities, and how quickly your teams can respond to incidents. A stronger posture should result in fewer breaches and faster containment when issues do arise.
On the financial side, cost efficiency can be measured by the total cost of ownership (TCO) per device or per user. You also want to track how much you save by consolidating redundant licenses and how much help-desk workload you can reduce through automation and better endpoint visibility.
Finally, user satisfaction is just as important as any technical metric. We monitor things like Net Promoter Scores, user feedback through surveys, and provisioning compliance—how often users receive the resources and tools they need without unnecessary friction.
Together, these indicators give you a multidimensional view of how your digital workspace strategy is performing—and whether it’s delivering tangible value across the organization.
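As a simple illustration of baselining, the hypothetical Python sketch below computes a few of the metrics Reed mentions (mean time to remediate, help-desk tickets per user, and TCO per user) from invented sample figures; real baselines would of course come from an organization's own telemetry and finance data.

```python
# Hypothetical sketch: compute baseline KPIs before a workspace transformation.
from statistics import mean

def mean_time_to_remediate(vulns: list[dict]) -> float:
    """Average days between a vulnerability being detected and being patched."""
    return mean(v["patched_day"] - v["detected_day"] for v in vulns)

def tickets_per_user_per_month(ticket_count: int, users: int, months: int) -> float:
    return ticket_count / users / months

def tco_per_user(license_cost: float, support_cost: float, users: int) -> float:
    return (license_cost + support_cost) / users

# Illustrative inputs only.
vulns = [{"detected_day": 0, "patched_day": 12}, {"detected_day": 3, "patched_day": 10}]
print(mean_time_to_remediate(vulns))              # 9.5 days
print(tickets_per_user_per_month(1800, 500, 3))   # 1.2 tickets per user per month
print(tco_per_user(250_000, 90_000, 500))         # 680.0 per user
```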
###