Distributed Collaborators & Data: Solving the World's Big Problems
August 28, 2020 | Susan Tussy
Big problems such as finding cures for COVID-19 and other diseases, increasing food production, and fighting climate change all involve tremendous amounts of data. Many of the most challenging problems are tackled by leveraging High Performance Computing (HPC), and a big part of HPC workflows involves sharing data with distributed collaborators.
Researchers must navigate policies across institutions and securely share data that commonly scales to terabytes and even petabytes. Oftentimes expensive scientific instruments are shared across many researchers and institutions, so making effective use of instrument time is essential. A researcher may get only one shot at capturing the data and transferring the derived results, which makes speed and reliability critical. Globus enables this workflow through task orchestration, so the researcher can quickly and reliably get a copy of the derived data, archive the original data, and then curate, publish, and cite it.
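To make the orchestration step concrete, here is a minimal sketch using the Globus Python SDK (globus-sdk) to submit a reliable transfer of derived data from an instrument endpoint to an archive. The endpoint UUIDs, paths, label, and access token are illustrative placeholders, not values from the webinar.

import globus_sdk

# Illustrative placeholders -- substitute real values for your deployment.
ACCESS_TOKEN = "<transfer-access-token>"        # Globus Transfer API token
INSTRUMENT_ENDPOINT = "<source-endpoint-uuid>"  # where derived data lands
ARCHIVE_ENDPOINT = "<archive-endpoint-uuid>"    # long-term storage endpoint

# Authenticate to the Globus Transfer service with an existing token.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(ACCESS_TOKEN)
)

# Describe the transfer; checksum verification gives a reliable copy,
# which matters when there is only one shot at the instrument data.
tdata = globus_sdk.TransferData(
    tc,
    INSTRUMENT_ENDPOINT,
    ARCHIVE_ENDPOINT,
    label="Archive derived data",
    sync_level="checksum",
    verify_checksum=True,
)
tdata.add_item("/instrument/run-042/derived/", "/archive/run-042/",
               recursive=True)

# Submit the task; the Globus service handles retries in the background,
# so the client does not need to stay connected.
task = tc.submit_transfer(tdata)
print("Submitted transfer, task_id:", task["task_id"])

# Optionally poll until the task completes (up to one hour here).
tc.task_wait(task["task_id"], timeout=3600, polling_interval=15)

In practice the token would come from a Globus Auth flow rather than a hard-coded string, and the same pattern extends to sharing derived data with collaborators or staging it for publication.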
HPC data management and storage require careful thought throughout the data lifecycle in order to create a frictionless environment, so that time is spent on science and discovery rather than on mundane tasks such as accessing, sharing, and making data discoverable. Tools like Globus can help address some of these challenges and thereby accelerate discovery.
Rachana Ananthakrishnan and Vas Vasiliadis from Globus, along with Adrian Herrera and Eric Dey from Caringo, address many of these challenges and considerations in this insightful webinar and provide some sound advice on how to tackle them.
Additional Information: