AWS Standout Flux7 Goes Back To Founder's Roots With New High-Performance Computing Practice

The cutting-edge cloud solutions implementer pulled back from high-performance computing years ago, sensing the market wasn't ready. A lot has changed since.

Flux7 has made a name for itself by implementing cutting-edge cloud solutions for large enterprises through its partnership with Amazon Web Services.

The Austin, Texas-based solution provider has been on the vanguard of bringing to market advanced cloud technology, from Internet of Things to DevOps to container-based micro-services. But one disappointment for founder and CEO Aater Suleman is that the high-performance computing (HPC) practice he envisioned when launching his company in 2013 never materialized.

Suleman's roots were in HPC—as an engineer at Intel, he helped design the Xeon Phi processors used to power those workloads. Flux7's first project involved HPC, but soon after that work it became clear the market "wasn't trending in a positive direction" for the technology, he told CRN.


[Related: Take A Look Inside Dell EMC’s HPC And AI Innovation Lab]

There were "slow, early adopters … and then a huge chasm," Suleman said. "So we shifted gears and went in another direction."

Now, years later, Suleman senses the HPC market is finally heating up. To seize the opportunity that previously eluded him, Flux7 is launching a practice that helps customers—from university researchers to microprocessor designers to artificial intelligence developers—offload compute-intensive batch workloads to Amazon's cloud infrastructure.

Through the new practice, Flux7 will assess the business case for adopting cloud to power resource-demanding workloads, design and architect a solution on AWS, implement HPC clusters in the public cloud and then pass operational knowledge to the customer's internal IT team.

"It’s a modular approach to start using HPC in the cloud," Suleman said. "We're trying to help these organizations start with an assessment of what they have so we can analyze and suggest where the cloud can help with their HPC clusters."

In the past few years, third-party software licensing has become friendlier to the cloud, and concerns around security have dissipated. That's an important development, as HPC workloads often involve sensitive intellectual property central to research and development and product road maps.

Those factors, along with natural hardware refresh cycles, could finally make the market ripe for born-in-the-cloud solution providers with a highly specialized skill set in implementing the technology that processes massive data sets and powers AI, converged modeling, simulation and analytics workloads.

Most intensive batch workloads are still executed on-premises, sometimes using supercomputing systems like those made by recent HPE acquisition Cray, other times on more traditional servers grouped together in powerful clusters.

Multiple compute cores aren't the only requirement for efficient HPC infrastructure. The systems also depend on high-performance storage, low-latency interconnects, specialized system software stacks and programming environments.

That means offloading to the cloud goes beyond provisioning Amazon's most powerful compute instances, Suleman told CRN.

Engineering challenges include managing data transfers and deciding whether GPUs or high-performance storage should be brought into play. Storage, in many ways, is a more significant differentiator than compute, and a partner must be able to help customers choose among services like Amazon Elastic File System (EFS) or AWS Storage Gateway.

As a consumer of dedicated HPC systems for modeling chip functionality in his past life, Suleman knows the pain points well—and how cloud can ease them.

Elasticity is a crucial selling point, he said, as HPC systems typically see tremendous variation in utilization—for example when universities are out of session and their scientists are less likely to be running advanced experiments and simulations.

"When you need them you really need them, and when you don’t, you don't," Suleman said of HPC resources.

The first step to drive adoption is enabling HPC customers to burst into the cloud. That way, "rather than having people waiting, they can get jobs done in time," Suleman said.
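The bursting model Suleman describes can be reduced to a simple scheduling policy: keep jobs on the local cluster while it can absorb the backlog, and spill overflow work to cloud capacity so nobody waits. The sketch below is purely illustrative — the function name, threshold and routing logic are assumptions for explanation, not Flux7's actual implementation.

```python
# Illustrative burst-scheduling policy (hypothetical; not Flux7's code).
# Real bursting would submit the "cloud" jobs to a service such as AWS Batch,
# but the routing decision itself is simple queue-depth arithmetic.

def route_job(pending_jobs: int, on_prem_slots: int,
              burst_threshold: float = 1.5) -> str:
    """Return 'on_prem' while the local cluster can absorb the backlog,
    'cloud' once pending work exceeds burst_threshold times local capacity."""
    if on_prem_slots > 0 and pending_jobs <= burst_threshold * on_prem_slots:
        return "on_prem"
    return "cloud"

# A 100-slot cluster with 80 queued jobs keeps work local; a backlog of
# 200 jobs (or no local capacity at all) bursts to the cloud.
```

The threshold value governs how much queueing pain an organization tolerates before paying for cloud capacity — the trade-off Suleman frames as "when you need them you really need them."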

Next comes moving some baseline capacity into the cloud. That leads some customers to finally turn off their on-premises HPC environments altogether.

Those second and third phases of maturity are still a long way off for most HPC practitioners, Suleman said.