A couple of weeks ago it was announced that Peer 1 Hosting had become the first company to create a CPU-GPU hybrid cloud that supports HPC workloads. The service is built on Nvidia GPUs and comes preloaded with RealityServer, an application that delivers interactive, photorealistic 3D content remotely over the internet.
On the face of it, this sounds like a good idea and is part of the cloud's evolution for the HPC environment. First, GPUs are good, proven performers for computation (see my other posts on GPUs). Second, using applications like RealityServer makes sense because it lets customers interact with their data (specifically 3D models and content, in this situation) over the web.
However, for customers not using web-based applications like RealityServer, use of the cloud in the HPC environment can pose problems. First, do customers have the bandwidth available to take receipt of the data, which can run to multiple terabytes or even petabytes once processed? Second, does the cloud service provider, which could potentially have hundreds of customers each generating multiple terabytes of data, have the bandwidth available to send that data back to its customers? Finally, does the customer have the appropriate infrastructure to store the data created during the process?
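To see why the bandwidth question matters, a back-of-the-envelope calculation helps. The sketch below is illustrative only: the link speeds, dataset size, and the 70% efficiency factor (protocol overhead, contention) are assumptions, not figures from the announcement.

```python
def transfer_time_hours(data_terabytes: float, link_gbps: float,
                        efficiency: float = 0.7) -> float:
    """Rough hours needed to move `data_terabytes` over a link rated at
    `link_gbps`, achieving only `efficiency` of the raw line rate."""
    bits = data_terabytes * 8e12              # 1 TB = 8e12 bits (decimal)
    effective_bps = link_gbps * 1e9 * efficiency
    return bits / effective_bps / 3600

# Hypothetical example: pulling 10 TB of processed results back on-site.
slow = transfer_time_hours(10, 0.1)   # 100 Mbps line: ~317 hours (~2 weeks)
fast = transfer_time_hours(10, 1.0)   # 1 Gbps line: ~32 hours
```

Even on a dedicated gigabit link, a single 10 TB result set ties up the connection for over a day, which is why the question applies to the provider's aggregate outbound capacity as much as to each customer's line.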
These problems can be overcome; certainly, integrators like OCF can provide adequate storage facilities for customers. Nonetheless, customers must consider these questions before proceeding with a cloud service for HPC-related work.