In 1964, the Control Data Corporation delivered a mainframe computer, the CDC 6600, to the Lawrence Radiation Laboratory in California for high energy nuclear physics research. The CDC 6600 was the world's fastest computer at the time, firing the metaphorical starting pistol for the ongoing race to build the most powerful supercomputers.
Since then, universities around the world have embraced High Performance Computing (HPC) to conduct research in fields such as genomics, proteomics, computational chemistry, molecular dynamics, bioinformatics and more.
To conduct research that leads to new cures for diseases, better safety measures ahead of natural disasters, and more, university researchers need to dig into the troves of data collected during these experiments and draw actionable insights.
To date, roughly 20 to 30 academic institutions in the United States have built supercomputers for research. Whether the goal is to stay ahead of other universities in research or to gain valuable insight into solving the world's future problems, academic institutions large and small are seeing immediate benefits from HPC.
For instance, supercomputers and HPC provide the power that enables artificial intelligence (AI) to analyze data from sensors and tools that model, analyze, filter, classify and measure data. The convergence of AI automation, data analytics and HPC reduces the time spent manually searching through research data. Automating basic processes and using machine and deep learning to recognize correlations and patterns frees up teams for more important research tasks.
Addressing the challenges
The cost and complexity of building supercomputers have deterred some CIOs from pursuing this next-generation technology, but many of these challenges are surmountable and worth addressing for the benefits HPC provides. HPC requires a commitment from an institution, but it doesn't have to be a burden.
CIOs should consider external funding sources, such as grant programs. In 2018, the National Science Foundation (NSF) awarded a $60 million grant to the Texas Advanced Computing Center (TACC) for a new supercomputing system, Frontera, which provides researchers with advanced capabilities for science and engineering.
Frontera will provide research teams with many capabilities, such as predicting the trajectory of storms, designing infrastructure to withstand them and accelerating the development of new molecules for medicine using a combination of modeling and deep learning, among others.
While massive, multimillion-dollar grant projects for supercomputers of unprecedented power tend to make big headlines, the truth is that institutions of all sizes are using scalable HPC solutions to assist with research. Not every institution needs a TACC Frontera, SDSC Expanse or Comet supercomputer to reap the benefits. Excellent smaller-scale supercomputers are already benefiting hundreds of mid-tier research universities across the United States.
Using HPC, a team of UT Dallas researchers is working with a team of collaborators to explore microscopic pills that could potentially travel to an isolated location in a patient’s body to deliver a drug in the exact place it’s needed. For these explorations, researchers drew on supercomputing resources at TACC and two other institutions.
Texas Tech University (TTU) also provides access to two main clusters, Quanah and Hrothgar, in addition to specialty clusters and special-purpose resources. This enables virtual reality (VR) and augmented reality (AR) research for big data visualization and visual analytics. TTU is using an integrated approach combining visualization, human factors and data analysis to derive insight from massive, dynamic and ambiguous data.
The fast pace at which technology is evolving makes it unrealistic to expect every university IT team to have time to design, deploy and manage optimized HPC deployments. Often, a university's IT team has little experience with supercomputing, or only one person has expertise in the area.
Since those working to deploy and maintain HPC clusters may need up-front assistance, CIOs looking to bring HPC to campus should explore solutions that simplify the design, configuration and ordering of systems with standardized building blocks individually tested for research applications. Some vendors also offer services and support when and where they are needed, alongside the technology, to ensure a seamless deployment. Custom, scalable IT building blocks can provide a consolidated experience, bringing together the elements necessary to run AI, modeling, simulation and visualization workloads and easing the burden on the institution.
What’s next for HPC?
A Tractica report on enterprise HPC forecasts that the overall market for HPC hardware, software, storage and networking equipment will reach $31.5 billion annually by 2025, an increase from approximately $18.8 billion in 2017.
As we advance into the new decade, we'll see even more supercomputers appearing on college campuses nationwide. Advances in HPC have led universities of all sizes to use machine learning and AI for everything from climate and weather modeling to geosciences and oil and gas applications.
Supercomputers will only become more powerful, solving calculations many times faster than today's machines, and the applications for HPC will grow ever more far-reaching. HPC continues to propel research closer to the elusive answers research teams have pursued for years.