Compute Capacity Liquidity: Reframing HPC Infrastructure Strategy for AI-Driven Life Sciences — Parallel Works - May 14, 2026

A Conversation with Matthew Shaxted, CEO of Parallel Works

Artificial intelligence is reshaping the role infrastructure plays in life sciences research operations, exposing limitations in legacy high-performance computing (HPC) environments and accelerating a move toward more flexible, hybrid models. The impact is measurable: research timelines are compressing, competition for scarce compute resources is intensifying, and infrastructure strategy is emerging as a direct determinant of both scientific output and financial performance.

Workloads tied to drug discovery, genomics, and simulation now require higher levels of parallel processing and throughput than many conventional high-performance computing environments were originally designed to deliver. Traditional HPC systems have largely been built around central processing units (CPUs), which support general-purpose computing and parallel processing across a limited number of cores.

Graphics processing units (GPUs), by contrast, enable massively parallel execution, supporting thousands of simultaneous operations. This capability is essential for AI models, molecular simulations, and large-scale data analysis.

These dynamics are driving a rethinking of how compute resources are provisioned and consumed. In a recent BizTechReports executive vidcast interview, Matthew Shaxted, CEO of Parallel Works, described this evolution as “compute capacity liquidity”—the ability to dynamically allocate workloads across heterogeneous environments based on availability, cost, and policy constraints.

Here is what he had to say:

Q: How is AI changing the way life sciences organizations think about compute infrastructure?

Shaxted: Historically, most organizations designed workloads for a single system. You had access to a specific HPC cluster, and your workflows were tightly coupled to that environment. That model worked when infrastructure was relatively static and predictable.

That’s no longer the case. AI-driven workloads, especially in simulation and modeling, are more dynamic. They require different types of resources at different times, and they often need to scale quickly.

At the same time, organizations are introducing new infrastructure layers—GPU clusters, cloud environments, and specialized providers. That creates a distributed environment where compute resources are no longer centralized.

Compute capacity liquidity is a way to describe that change. Instead of treating compute as fixed infrastructure, organizations are treating it as a resource that can move. Workloads are expected to run wherever capacity is available, whether that’s on-premises, in a hyperscale cloud, or through another provider.

That flexibility is becoming a requirement.
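To make the idea concrete, here is a minimal sketch in Python. The site names, prices, and policy tags are invented for illustration and are not a description of the Parallel Works platform; the point is simply that a workload gets routed to whichever environment has capacity, satisfies its policy constraints, and costs the least.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_gpus: int           # GPUs currently available
    usd_per_gpu_hour: float  # current price in this environment
    region: str              # used for policy constraints such as data residency

@dataclass
class Workload:
    name: str
    gpus_needed: int
    allowed_regions: set     # policy constraint: where this data may be processed

def place(workload: Workload, sites: list) -> Site | None:
    """Pick the cheapest site that has capacity and satisfies policy."""
    candidates = [
        s for s in sites
        if s.free_gpus >= workload.gpus_needed and s.region in workload.allowed_regions
    ]
    return min(candidates, key=lambda s: s.usd_per_gpu_hour) if candidates else None

if __name__ == "__main__":
    pool = [
        Site("onprem-cluster", free_gpus=0,  usd_per_gpu_hour=1.10, region="us"),
        Site("hyperscaler-a",  free_gpus=64, usd_per_gpu_hour=3.50, region="us"),
        Site("neocloud-b",     free_gpus=32, usd_per_gpu_hour=1.80, region="eu"),
    ]
    job = Workload("docking-run", gpus_needed=16, allowed_regions={"us"})
    target = place(job, pool)
    print(f"{job.name} -> {target.name if target else 'queued (no capacity)'}")
```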

Q: How does this concept of compute capacity liquidity change strategic planning?

Shaxted: It changes how organizations think about investment and access. Instead of planning around a single system or environment, you’re planning around a pool of resources that can be accessed in different ways.

That has implications for how you design workloads, how you provision infrastructure, and how you manage cost. It also affects how quickly you can respond to new requirements.

For example, if a new GPU architecture becomes available, you want your teams to be able to use it immediately. You don’t want to wait six months for an internal deployment. That requires the ability to extend workloads across environments.

It also introduces optionality. You’re not locked into one provider or one system. You can choose where to run workloads based on cost, performance, or policy requirements.

Q: Is this transition happening uniformly across the life sciences sector?

Shaxted: No, it’s uneven. Some organizations are further along, especially those that have already invested in cloud and AI infrastructure. Others are still operating in more traditional HPC models.

A common challenge is that infrastructure is often managed by different groups. You might have one team managing on-prem systems, another managing cloud, and another managing AI platforms. Those environments aren’t always connected.

End users are also used to building workloads for specific systems. Changing that behavior takes time. It requires new tools and new processes.

What’s consistent is the direction. The expectation is changing. Organizations are recognizing that they need more flexibility in how they use compute resources.

Q: What are the primary operational challenges organizations face in this model?

Shaxted: Maintaining integrations across systems is one of the biggest challenges. You’re dealing with different schedulers, different cloud platforms, and orchestration frameworks like Kubernetes.

Those systems are constantly evolving. Updates to drivers, APIs, or underlying platforms can affect how everything works together. Keeping those integrations stable requires continuous effort.

It’s not that any one component is especially complex. It’s the combination of all of them, and the need to keep them working reliably over time.

That’s where organizations start to feel the operational burden.

Q: How does workload design need to change to support this environment?

Shaxted: Workloads need to be portable. They need to run across different environments without requiring significant changes.

Containerization is a big part of that. It allows you to package workloads in a way that can be deployed consistently across systems.

But portability is only one piece. You also need orchestration. You need a way to schedule workloads, manage dependencies, and ensure that they execute correctly regardless of where they run.

That requires coordination between infrastructure teams and end users. It’s not just a technical change—it’s an operational one.
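As a rough illustration of that combination of containerization and portability (the image name, resource fields, and helper functions below are hypothetical, not an actual product interface), the same containerized job description can be rendered for two different backends, a Slurm batch script and a Kubernetes Job manifest:

```python
from dataclasses import dataclass

@dataclass
class ContainerJob:
    name: str
    image: str    # container image holding the workload and its dependencies
    command: str  # entrypoint to run inside the container
    gpus: int
    hours: int    # requested walltime

def to_slurm(job: ContainerJob) -> str:
    """Render the job as a Slurm batch script that runs the container."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job.name}",
        f"#SBATCH --gres=gpu:{job.gpus}",
        f"#SBATCH --time={job.hours}:00:00",
        # Assumes the cluster provides a container runtime such as Apptainer.
        f"apptainer exec --nv docker://{job.image} {job.command}",
    ])

def to_k8s_job(job: ContainerJob) -> dict:
    """Render the same job as a Kubernetes Job manifest (as a plain dict)."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": job.name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": job.name,
                        "image": job.image,
                        "command": job.command.split(),
                        "resources": {"limits": {"nvidia.com/gpu": job.gpus}},
                    }],
                }
            }
        },
    }

if __name__ == "__main__":
    sim = ContainerJob("md-sim", "ghcr.io/example/gromacs:2024",
                       "gmx mdrun -deffnm run1", gpus=4, hours=12)
    print(to_slurm(sim))
    print(to_k8s_job(sim))
```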

Q: How is AI-driven workload creation affecting infrastructure operations?

Shaxted: It’s increasing demand. As it becomes easier to create complex workloads, more people do it.

You can describe a simulation in natural language, and AI tools can help generate the underlying code. That lowers the barrier to entry.

But those workloads are still compute-intensive. They can run for days across thousands of cores and GPUs. That puts pressure on infrastructure.

It also creates new challenges around scheduling and governance. You need to ensure fair access. You need to prevent resource contention. And you need visibility into what’s running and where.

Q: How is this evolution affecting the economics of compute?

Shaxted: Costs are increasing, particularly with GPU infrastructure. These systems are expensive, whether you’re purchasing them or renting capacity.

At the same time, utilization becomes more important. If you have systems that are underutilized, you’re not getting value from your investment.

Pooling resources helps address that. If workloads can move between systems, you can balance demand and improve utilization.

That has a direct impact on cost efficiency. You’re using your resources more effectively.
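A back-of-the-envelope view of the utilization argument, using invented numbers: compare GPU-hours consumed against GPU-hours available on each system, and then across the pool.

```python
# Illustrative utilization arithmetic with made-up numbers.
# utilization = GPU-hours consumed / GPU-hours available over the period.

systems = {
    # name: (gpus, hours_in_period, gpu_hours_consumed)
    "onprem-cluster": (64, 720, 18_000),   # ~39% utilized
    "hyperscaler-a":  (32, 720, 21_000),   # ~91% utilized, with jobs queueing
}

total_available = sum(gpus * hours for gpus, hours, _ in systems.values())
total_used = sum(used for _, _, used in systems.values())

for name, (gpus, hours, used) in systems.items():
    print(f"{name}: {used / (gpus * hours):.0%} utilized")

# If workloads can move between systems, demand queued on the busy system
# can run on the idle one, improving overall utilization and throughput.
print(f"pooled: {total_used / total_available:.0%} utilized")
```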

Q: How do hyperscalers compare with newer infrastructure providers?

Shaxted: Hyperscale cloud providers offer managed services and ease of use. They make it easier to get started and to scale.

But they can be expensive, especially for GPU workloads.

There are newer providers—what we call neo-cloud providers—that offer lower-cost, GPU-dense infrastructure. They don’t provide the same level of managed services, so you need more operational expertise.

Organizations are combining these options. They’re using hyperscalers for certain workloads and introducing other providers to reduce costs or access specific hardware.

That mix creates flexibility, but it also increases complexity.

Q: How should organizations think about cost optimization in this model?

Shaxted: Cost optimization is no longer just about choosing the cheapest option. It’s about balancing cost, performance, and operational effort.

Moving to a lower-cost provider might reduce your infrastructure spend, but it might increase your operational burden. You need to consider both.

Visibility is critical. You need to understand how resources are being used and what they cost. That allows you to make informed decisions.

You also need the ability to move workloads. If you can’t shift workloads between environments, you can’t take advantage of cost differences.
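One way to make that trade-off concrete, again with purely illustrative figures: compare the total cost of a run at each provider, including an estimate of the extra engineering effort a less-managed environment demands.

```python
# Illustrative cost comparison. All prices and effort estimates are made up;
# the point is that the cheapest per-GPU-hour rate is not always the cheapest run.

def run_cost(gpu_hours: float, usd_per_gpu_hour: float,
             ops_hours: float, usd_per_ops_hour: float) -> float:
    """Total cost of a run: compute spend plus the operational effort to support it."""
    return gpu_hours * usd_per_gpu_hour + ops_hours * usd_per_ops_hour

GPU_HOURS = 5_000   # a multi-day run across many GPUs
OPS_RATE = 120.0    # loaded cost of an engineer-hour

hyperscaler = run_cost(GPU_HOURS, 3.50, ops_hours=10, usd_per_ops_hour=OPS_RATE)
neocloud    = run_cost(GPU_HOURS, 1.80, ops_hours=40, usd_per_ops_hour=OPS_RATE)

print(f"hyperscaler: ${hyperscaler:,.0f}")  # $18,700
print(f"neo-cloud:   ${neocloud:,.0f}")     # $13,800 -- cheaper, but the extra
                                            # operational effort claws back part
                                            # of the headline saving
```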

Q: What role does orchestration play in enabling this model?

Shaxted: Orchestration is the layer that ties everything together. It provides a consistent way to access and manage resources across environments.

It handles workload scheduling, usage tracking, and policy enforcement. It also abstracts differences between systems, so users don’t have to manage them directly.

Without orchestration, you’re asking users to interact with each system independently. That doesn’t scale.

With orchestration, you can treat multiple environments as a single pool of resources.
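A minimal sketch of what that abstraction might look like (the class and backend names are hypothetical and not an actual Parallel Works API): users call a single submit method, and per-environment adapters hide the differences underneath.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Adapter interface: each environment implements the same small contract."""
    name: str

    @abstractmethod
    def has_capacity(self, gpus: int) -> bool: ...

    @abstractmethod
    def submit(self, job_name: str, gpus: int) -> str: ...

class SlurmBackend(Backend):
    name = "onprem-slurm"
    def has_capacity(self, gpus: int) -> bool:
        return gpus <= 8                      # stand-in for a real queue/quota query
    def submit(self, job_name: str, gpus: int) -> str:
        return f"sbatch {job_name} ({gpus} GPUs) on {self.name}"

class CloudBackend(Backend):
    name = "cloud-gpu"
    def has_capacity(self, gpus: int) -> bool:
        return True                           # cloud capacity treated as elastic here
    def submit(self, job_name: str, gpus: int) -> str:
        return f"cloud job {job_name} ({gpus} GPUs) on {self.name}"

class Orchestrator:
    """Single entry point: users submit once; the orchestrator picks a backend."""
    def __init__(self, backends: list):
        self.backends = backends

    def submit(self, job_name: str, gpus: int) -> str:
        for backend in self.backends:         # preference order: on-prem first, then cloud
            if backend.has_capacity(gpus):
                return backend.submit(job_name, gpus)
        return f"{job_name} queued: no capacity anywhere"

if __name__ == "__main__":
    orch = Orchestrator([SlurmBackend(), CloudBackend()])
    print(orch.submit("protein-fold", gpus=4))    # fits on-prem
    print(orch.submit("training-run", gpus=64))   # spills to cloud
```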

Q: What are the key requirements for governance in this environment?

Shaxted: Governance starts with visibility. You need to know who is using resources, how they’re being used, and what it costs.

That’s especially important with GPU infrastructure, which is expensive and often shared across teams.

You also need policies to manage access and prioritize workloads. Those policies need to balance flexibility with control.

If governance is too restrictive, it slows down research. If it’s too loose, you get inefficiencies and higher costs.

Finding that balance is critical.
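A toy example of pairing visibility with policy (team names, quotas, and numbers are invented): attribute GPU-hours to the teams that consume them, and check new requests against a quota instead of either blocking everything or allowing everything.

```python
from collections import defaultdict

# Invented monthly GPU-hour quotas per team.
QUOTAS = {"genomics": 10_000, "discovery": 25_000, "platform": 5_000}

usage = defaultdict(float)   # GPU-hours consumed so far this month, per team

def record(team: str, gpu_hours: float) -> None:
    """Visibility: attribute every completed run to the team that ran it."""
    usage[team] += gpu_hours

def admit(team: str, requested_gpu_hours: float) -> bool:
    """Policy: allow the run if it fits within the team's remaining quota."""
    return usage[team] + requested_gpu_hours <= QUOTAS.get(team, 0)

record("genomics", 9_200)
print(admit("genomics", 500))    # True  -- fits in the remaining 800 GPU-hours
print(admit("genomics", 1_500))  # False -- over quota; escalate or reschedule
print(f"genomics spend so far: {usage['genomics']:,.0f} GPU-hours")
```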

Q: How do organizations begin implementing this model?

Shaxted: Most organizations start by unifying access. They create a single interface where users can access different environments.

From there, they add capabilities like workload orchestration, usage tracking, and policy enforcement.

It’s an incremental process. You don’t replace everything at once. You build on what you have and connect it over time.

The goal is to create a system where users can access compute resources easily, and the organization can manage them effectively.

BizTechReports Conclusion:

Life sciences organizations are entering a phase where infrastructure strategy directly influences research outcomes, cost structures, and competitive positioning.

Shaxted's compute capacity liquidity concept reflects a move toward flexibility in how resources are accessed and utilized. Infrastructure is no longer defined by a single environment, but by the ability to allocate workloads dynamically across multiple environments.

Achieving this model requires investment in orchestration, governance, and workload portability, along with coordination across infrastructure, cloud, and AI teams.

Organizations that can operationalize this approach are positioned to accelerate discovery, improve utilization, and manage costs more effectively.

Those that cannot may find themselves constrained by fragmented infrastructure that limits both performance and visibility.

###

EDITOR’S NOTE: Click here to learn more about Parallel Works
