How To Integrate High Performance Computing On the Cloud
12:32 PM EST Fri. Sep. 09, 2011
As security safeguards increase and attractive pricing models continue to emerge, cloud computing is becoming increasingly ubiquitous. Only a couple of years ago, cloud was simply a buzzword, and few companies understood its real value. In the following, the head of strategic sales and marketing at NIIT Technologies discusses how IT solution providers can help their clients focus not on whether to adopt the cloud but rather on how to integrate high performance computing (HPC) on the cloud. — Jennifer Bosavage, editor
High performance cloud computing is still in its infancy. Yet industry wisdom says that, just as commodity components crowded out specialized HPC architectures, cloud platforms will eventually edge out traditional HPC infrastructure. While there are many routes an IT solution provider can take, this article presents a few steps to successfully integrate HPC on the cloud.
First, in order to integrate high performance computing into the cloud and get a better return on investment (ROI), it is important that companies develop applications that are elastic in nature and can scale with relative ease. The real ROI of the cloud shows in how often you can scale usage, and therefore the invoice, down to zero.
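The "invoice down to zero" idea can be sketched as a simple autoscaling policy. The function below is a hypothetical illustration (the names, node capacity, and cap are assumptions, not any provider's API): it sizes a pool of worker nodes to the current job backlog and releases every node when the queue is empty, which is when the bill stops accruing.

```python
# Hypothetical autoscaling policy sketch: size the node pool to the
# backlog, and scale all the way to zero when there is no work queued.

def desired_nodes(queue_depth: int, jobs_per_node: int = 10, max_nodes: int = 100) -> int:
    """Return how many nodes to keep running for the current backlog."""
    if queue_depth == 0:
        return 0  # nothing queued: release all nodes, stop paying
    # Ceiling division: enough nodes to cover the backlog, capped at max_nodes.
    return min(max_nodes, -(-queue_depth // jobs_per_node))

if __name__ == "__main__":
    for depth in (0, 5, 95, 5000):
        print(depth, "->", desired_nodes(depth))
```

An application designed this way, with work expressed as queued jobs rather than long-lived servers, is what makes the elastic, pay-for-what-you-use economics possible.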
Second, because not all computing workloads can take advantage of an IaaS (infrastructure as a service) cloud environment, load balancing for scale has to be modular in order to optimize elasticity. Moreover, in order to integrate your apps into the cloud, it is important that the development team designs applications with such elasticity in mind. Specifically, applications should be built to expand or contract in order to increase efficiency and speed.
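To make the "modular load balancing" point concrete, here is a toy sketch (class and method names are invented for illustration): a round-robin dispatcher whose backend pool can grow or shrink at runtime, so the application can expand and contract without restarting the balancer.

```python
# Toy modular load balancer: backends can be added (scale out) or
# removed (scale in) while round-robin dispatch continues uninterrupted.

class ElasticPool:
    def __init__(self):
        self.backends = []
        self._next = 0  # rotating index for round-robin dispatch

    def add(self, backend):
        """Scale out: register a new backend node."""
        self.backends.append(backend)

    def remove(self, backend):
        """Scale in: retire a backend node."""
        self.backends.remove(backend)

    def pick(self):
        """Choose the next backend in round-robin order."""
        if not self.backends:
            raise RuntimeError("no backends: pool has scaled to zero")
        backend = self.backends[self._next % len(self.backends)]
        self._next += 1
        return backend

if __name__ == "__main__":
    pool = ElasticPool()
    pool.add("node-a")
    pool.add("node-b")
    print([pool.pick() for _ in range(4)])  # alternates between the two nodes
```

The design choice worth noting is that dispatch depends only on the current membership of the pool, which is what lets capacity change mid-flight.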
Third, applications need low latency requirements in order to integrate well. When high-security apps flow through the system, the mechanisms at work become extremely slow because each request must pass through access controls. Users who have run HPC internally have very high-end infrastructure to process this. Thus, in essence, a move to the cloud means that the cloud infrastructure needs to offer sufficient support, and companies, in turn, need dedicated infrastructure to facilitate such a move. However, dedicated infrastructure can spoil the economics for latency-sensitive apps. As a result, bursty, high-transaction workloads are preferred on the cloud.
Finally, as we are aware, IaaS cloud platforms provide a huge pool of commodity-class infrastructure that is shared by multiple users. As such, HPC workloads must share this infrastructure with the other applications running on it. Thus, ideal integration will occur when the HPC architecture is based on "loose coupling," wherein each computing node executes independently of the others and carries the storage and data it needs. From that point, a company can achieve scalability simply by increasing the number of nodes.
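The loose-coupling pattern described above can be sketched in a few lines. This is a minimal illustration, not a real HPC framework: the dataset is partitioned so that each "node" carries exactly the shard it needs, no node communicates with another, and here the nodes are simulated by a plain map over the shards (in a real deployment each shard would ship to a separate cloud instance).

```python
# Loose-coupling sketch: split the work into independent shards, one per
# node. Each shard carries all the data its node needs, so nodes never
# need to talk to each other.

def process_shard(shard):
    """Independent unit of work; a stand-in for a real HPC kernel."""
    return sum(x * x for x in shard)

def partition(data, num_nodes):
    """Round-robin the dataset so each node carries its own slice."""
    return [data[i::num_nodes] for i in range(num_nodes)]

def run(data, num_nodes):
    # Each shard is processed with no shared state; scaling out is
    # simply a matter of raising num_nodes.
    partials = map(process_shard, partition(data, num_nodes))
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    # The result is independent of how many nodes share the work.
    print(run(data, num_nodes=2), run(data, num_nodes=8))
```

Because no shard depends on another, adding nodes divides the per-node work without adding coordination overhead, which is exactly why this architecture maps well onto shared commodity cloud infrastructure.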
While the above pointers will help ensure successful integration of HPC in the cloud, it is also important to keep in mind that high performance computing may not become pervasive throughout the industry. Since supercomputing is so specialized (from hardware to service support to applications), and since the life cycles of these systems are so limited, HPC may in fact be a difficult push. However, because those concerns are well known to IT solution providers that offer cloud computing, they are likely to be systematically addressed as a greater number of users demand increased efficiency.
In the near future, whatever happens with HPC, private and public clouds should become the new normal in enterprise data centers. If that happens, IT solution providers should note that interoperability will be key, and API standards will need to evolve. Building an HPC-enabled cloud environment can be difficult, since HPC is highly specialized; infrastructure must include dedicated hardware to handle high response requirements. That is the key to a successful integration.