Frequently Asked Questions about the HPC Experiment

If you can’t find the answer to your question below, you can contact us through our Q&A Form.

Q: What are the goals of the HPC Experiment?
A: To identify, test and document potential solutions to the known roadblocks in high performance computing as a service.

Q: Who can participate in the HPC Experiment?
A: The experiment is open to the entire community including international participants. Please fill in the registration form to participate.

We are looking forward to working with:
The industry end users: A typical example is a small or medium-size manufacturer in the process of designing and prototyping its next product.

The resource providers: This pertains to anyone who owns HPC resources, that is, computers and storage networked to the outside world.

The application software providers: This includes software owners of all stripes, including ISVs, public domain organizations and individual developers.

The HPC experts: This group includes individuals or companies with HPC expertise, especially in areas like cluster management.

Q: I can’t provide resources at this time, can I still participate in the experiment?
A: Yes. If you are interested but can’t actively participate, please select the option to “receive reports at the conclusion of the experiment”.

Q: Are there participation fees?
A: No, there are no fees for participation. Each active participant will be expected to contribute the necessary time and effort. Resource providers are expected to offer needed software licenses, compute resources and expert help to other active participants free of charge during the experiment. The resource providers will define their own level of participation and the limits of the resources provided.

Q: How much time will I be expected to spend on the experiment?
A: Each participant is free to define their own level of participation.

Q: I would like to participate, but can I remain anonymous?
A: Yes. Please check the related box as you fill in the registration form.

Q: When will the experiment start and end?
A: The experiment is conducted in three-month “rounds”.
Round 4 of the Experiment starts in July 2013.

Q: Will the data sets used in the experiment be publicly available?
A: No, the data sets used in the experiment will not be publicly available. Each data set will be made accessible by its owner, and only to the participants who need access to it. For example, the input data sets required for a workload will be uploaded to the compute resource provider’s systems by the owner of the data set.

Q: As an HPC expert, do I have to travel to the end-user’s site to work with him or her?
A: No, the idea is to form virtual teams who will collaborate remotely. Which media you use to communicate is completely up to the team.

Q: I want to participate as an HPC expert. How do I meet the end-users, and other participants?
A: After the Kick-off on July 20, and as soon as we have set up the teams by best fit, we will collect from the team members all the information necessary for the team to start. We then suggest a team kick-off via a convenient communication channel such as a telephone or Skype conference.

Q: Are there any established hashtags we can use on Twitter… to tweet bits of information?
A: Yes, #hpcexperiment, and you can follow us at https://twitter.com/HPCExperiment.

Q: What are the time commitment expectations from participants, can you walk through an example?
A: We got this question from multiple participants, and we have included a scenario in our kick-off packet. However, the short answer is that there is no specific requirement: each participant is free to set his or her own level of involvement to achieve success. We will not track the amount of time participants spend on the experiment.

Q: Can I have a list of the participants?
A: With the permission of our participants (or their organizations), we publish their names on the home page of the hpcexperiment website. However, at the request of some participants, we are not able to publish the full list.

Q: What are the capabilities of your HPC resources?
A: We have a number of HPC resource providers with a variety of HPC cluster architectures. We encourage our end-users to submit their specific requirements to allow us to match them with the best-suited HPC resource provider.

Q: Which application software vendors are participating in the experiment? Which packages can we use in the projects?
A: As an example, we have ANSYS and SIMULIA (Abaqus) on board right from the start. If your application is based on other ISV software, we are ready, as part of this experiment, to contact that vendor and ask them to participate.

Q: Are there any specific workloads that this experiment will be focused on?
A: Any application workload an industry end-user brings in is welcome. Most of the industry end-user applications we have received so far are in Computational Fluid Dynamics, Finite Element Analysis, and Bioinformatics.

If you are interested in the application of HPC resources in computational fluid dynamics, visit: www.cfdexperiment.com

If you are interested in the use of HPC resources in computational biology and life sciences, visit: www.compbioexperiment.com

Q: Can we use the presentation documents after the presentation?
A: Please use all documentation we send out to you for your own and your team’s use only, and do not forward it to people who are not participating in this HPC Experiment. However, the documents we publish in the public section of our website can be distributed to anybody.

Q: Is this experiment more focused on data analysis / data mining type of workloads?
A: The focus of this first experiment is on compute-intensive industry applications. But you can certainly bring in a data analysis application if it does not add extra complexity to the process, such as huge data transfers.

Q: How much storage will be available for a group?
A: This depends on the availability of storage at our resource providers’ HPC centers. Since for the purposes of this experiment we recommend moderate-size experimental workloads (and not full-production workloads) we expect moderate-size storage requirements.

Q: What is the difference between the HPTC model and HTC/GRID model?
A: The High-Performance Technical Computing (HPTC) model focuses on compute- and data-intensive workloads to be executed on an HPC system. The Grid model focuses on collaboration, communication, and computing in a distributed networked environment interconnecting different resources accessible jointly by the collaborators. This experiment does not include the Grid model, but focuses on single computing resources in an HPC Cloud or HPC Computing Center.

Q: In addition to “software providers”, will hardware accelerator and software/hardware co-design approaches be involved in the experiments?
A: We have participants who offer moderate-sized clusters with GPGPUs. Co-design approaches are not the focus of this experiment.

Q: Is Abaqus going to be available for the experiment?
A: Yes, thanks to SIMULIA.

Q: Is the network Ethernet or InfiniBand?
A: We have resource providers who offer HPC clusters with both Ethernet and InfiniBand interconnects.

Q: To follow up on a previous question regarding software/hardware co-design (rather than hardware accelerators by themselves): will a high-performance cloud computing Platform-as-a-Service approach be considered, to cross-layer optimize the architecture from parallel program development tools down to the execution layer?
A: This is certainly an important computer science challenge our community has to solve on our way to Exascale. However, this experiment will just try to solve existing industry end-user problems in existing HPC environments.

Q: Will there be a possibility to select the resources by the territorial principle?
A: Yes. In fact, this is one of our primary rules in setting up the teams. We first pick an industry end-user, then match the HPC expert. Once this selection is complete, the experiment organizers and the selected expert will look for the nearest suitable resource provider. If we can’t convince this provider (e.g. the HPC Center ‘around the corner’) to join our experiment, we look for a resource provider among our existing participants.

Q: Will there be a possibility to use more than one resource provider?
A: Absolutely. For example, after a team has successfully completed their task on an HPC Center’s resource (e.g. Rutgers, Indiana, SDSC), in a next step they can turn to a commercial HPC Cloud provider like Nimbix or Amazon.

Q: Will new hardware platforms such as MIC be available?
A: According to Intel, MIC will come out late in the year. Therefore, we will not be able to include MIC in this experiment.

Q: I am a researcher, can I participate in the experiment?
A: The Experiment focuses on industry projects where the commercial benefit to the end-user is clearly defined. As a researcher, if you have an industry partner, i.e. a company you are working with, they would join the Experiment as an end-user. Depending on the resource requirements, you personally may be one of the experts on the team.

Q: Can projects extend beyond a 3-month round?
A: We strongly encourage projects to be defined in a way that they can produce meaningful results and be completed within one three-month round. However, it is acceptable for the same project team to decide to create another project in the next round to achieve further goals.

Q: Will you send this Kickoff PPT to me?
A: Yes, every experiment participant (including those who could not attend the GoToWebinar) will receive the updated kick-off meeting packet, the slides, the link to the recorded webinar, and the link to a new CAE article (which just appeared in HPC in the Cloud) that highlights especially the benefits for industry end-users and for ISVs.

Q: Where can I get more information about HPC in the Cloud?
A: For more information, please visit the HPC in the Cloud links page.

Q: Can you give examples of the types of projects you are working on?
A: We have teams actively working on a diverse set of industry end-user projects ranging anywhere from car acoustics to turbine dynamics, from fastening capacity of anchor bolts to simulation of blood flow inside rotating micro-channels.

The collection of 25 case studies which was published in June 2013 is available for download at:
http://tci.taborcommunications.com/UberCloud_HPC_Experiment

Q: What type of computing resources are available in the UberCloud HPC Experiment for projects?
A: With over 40 resource providers participating in the HPC Experiment, we can find almost any specific computing resource requested by the end-user. There are tightly and loosely coupled systems, with Ethernet or InfiniBand interconnects, with or without GPGPUs, etc. As soon as the end-user provides his or her profile, including the application software, we are usually able to match the best-suited resource.

Q: Is the software selection limited to what’s available on the UberCloud Exhibit?
A: No. We currently have 47 software providers participating in the HPC Experiment. Although they include some of the largest ISVs, we still often have end-users with application software not present in the experiment. In that case we identify a resource provider who is willing to install this software on their resource, and then the team can run their jobs. For example, on Amazon AWS you can upload your software yourself and then run it.

Q: How are you transferring the data for running the experiment?
A: The most common solution is FTP (and secure FTP when needed). Other solutions, such as sending a USB drive, can also be an option where necessary. We encourage end-users to submit their projects, and we will work through the project-specific details such as file transfers; a minimal upload sketch is shown below.
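
As a hedged illustration of such a transfer, here is a minimal Python sketch using the standard-library ftplib to upload an input file over FTP with TLS (FTPS). The host name, account, and file names are hypothetical placeholders, not actual experiment infrastructure; a provider offering SFTP (over SSH) instead would need a different client.

```python
# Minimal sketch: uploading an input data set to a resource provider over FTPS.
# Host, credentials, and paths are hypothetical placeholders.
from ftplib import FTP_TLS

HOST = "ftp.example-provider.org"   # hypothetical provider host
USER = "team_account"               # hypothetical account
PASSWORD = "change-me"              # never hard-code real credentials in practice

def upload(local_path: str, remote_name: str) -> None:
    """Upload one file to the provider's incoming directory."""
    ftps = FTP_TLS(HOST)
    ftps.login(USER, PASSWORD)
    ftps.prot_p()                   # encrypt the data channel as well
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()

if __name__ == "__main__":
    upload("input_mesh.tar.gz", "input_mesh.tar.gz")
```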

Q: If the file size of the results is around 100 GB, how would the files be transferred back to the user?
A: Indeed, several of the past teams faced this challenge, for example with results from their Abaqus runs. In such cases we recommend remote visualization; where the final data is needed in-house, some teams have simply shipped the result file(s).

Team 26, for example, found that data transfer via the network was too slow, so they suggest that final results might be better transferred on an external USB hard drive via FedEx (see the back-of-the-envelope estimate below).
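
To see why shipping a drive can win, here is a back-of-the-envelope Python estimate of how long 100 GB takes to move over a few link speeds; the speeds are illustrative assumptions, not measurements from the experiment.

```python
# Rough transfer-time estimate for a 100 GB result set.
# Link speeds below are illustrative assumptions only.
RESULT_SIZE_GB = 100

ILLUSTRATIVE_LINKS_MBPS = {
    "10 Mbit/s uplink": 10,
    "100 Mbit/s line": 100,
    "1 Gbit/s link": 1000,
}

for name, mbps in ILLUSTRATIVE_LINKS_MBPS.items():
    megabits = RESULT_SIZE_GB * 8 * 1000        # GB -> gigabits -> megabits
    hours = megabits / mbps / 3600              # seconds at the full rate -> hours
    print(f"{name}: about {hours:.1f} hours")   # roughly 22 h, 2.2 h, 0.2 h
```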

Q: What is the maximum number of cores that an end user can request for an experiment?
A: Experiment projects usually have an upper bound of 1000 CPU-core hours, for good reasons: the resources are offered free of charge for the experiment, and we do not want to compete with our service partners. Exceptions are possible if more core hours are needed, but experiment jobs shouldn’t be production jobs. Within that budget, the number of cores simply depends on the application. If the application software is highly scalable, you can use 1000 cores, if that makes sense, and still run the job for one hour. If the application software is not scalable and runs best on 8 cores, you could do one long run of 120 hours, or, for example, 12 runs of 10 hours each. The sketch below works through this arithmetic.
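
As a quick worked check of that arithmetic, the Python sketch below computes the core hours consumed by the configurations mentioned above against the usual 1000 core-hour cap; the cap is the only number taken from the answer, and the scenarios are just the examples given there.

```python
# Check a few run configurations against the usual 1000 core-hour cap.
BUDGET_CORE_HOURS = 1000

def core_hours(cores: int, wall_hours: float, runs: int = 1) -> float:
    """Total core hours for `runs` runs on `cores` cores lasting `wall_hours` each."""
    return cores * wall_hours * runs

scenarios = [
    ("highly scalable code: 1000 cores x 1 hour", core_hours(1000, 1)),
    ("8-core code: one 120-hour run",             core_hours(8, 120)),
    ("8-core code: 12 runs of 10 hours each",     core_hours(8, 10, runs=12)),
]

for label, used in scenarios:
    status = "within" if used <= BUDGET_CORE_HOURS else "over"
    print(f"{label}: {used:.0f} core hours ({status} the {BUDGET_CORE_HOURS} cap)")
```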