Numerical simulations face new challenges as they reach exascale: they generate petabyte-scale data that cannot be written to disk without interrupting the simulation due to I/O constraints. Data scientists must therefore be able to reduce, extract, and visualize the data while the simulation is running, a capability essential for both in transit and post hoc analysis. Next-generation supercomputing architectures include burst buffer technology, a layer of SSDs used primarily to checkpoint the simulation in case a restart is required. For turbulence simulations, these checkpoints provide an opportunity to perform analysis on the data without interrupting the simulation.
First, we present a method for extracting velocity data in high-vorticity regions. The method computes the vorticity of the entire dataset and identifies regions where the vorticity magnitude exceeds a specified threshold. Next, we build a 3D stencil from the values above the threshold and dilate it. Finally, we use the stencil to extract velocity data from the original dataset. The result is a dataset that is over an order of magnitude smaller yet contains all the data required to study extreme events and to visualize vorticity.
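A minimal sketch of the thresholding, dilation, and extraction steps, assuming the velocity field is stored as three NumPy arrays on a uniform grid; the grid spacing, threshold, and dilation width below are placeholders rather than values from the work:

```python
import numpy as np
from scipy import ndimage

def extract_high_vorticity(u, v, w, dx, threshold, dilation_iters=2):
    # Vorticity components from centered differences (axis order assumed x, y, z).
    du = np.gradient(u, dx)
    dv = np.gradient(v, dx)
    dw = np.gradient(w, dx)
    wx = dw[1] - dv[2]   # dw/dy - dv/dz
    wy = du[2] - dw[0]   # du/dz - dw/dx
    wz = dv[0] - du[1]   # dv/dx - du/dy
    vort_mag = np.sqrt(wx**2 + wy**2 + wz**2)

    # 3D stencil of cells above the threshold, dilated so that velocity
    # values surrounding each high-vorticity region are retained.
    stencil = ndimage.binary_dilation(vort_mag > threshold,
                                      iterations=dilation_iters)

    # Extract only the velocity data covered by the stencil.
    return u[stencil], v[stencil], w[stencil], stencil
```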
The next extraction uses the zfp lossy compressor to compress the entire velocity dataset. The compressed representation is an order of magnitude smaller than the raw simulation data and provides the researcher with approximate values for regions not captured by the velocity extraction. The error introduced is bounded, and the resulting dataset is visually indistinguishable from the original.
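A hedged sketch of an error-bounded zfp pass using the zfpy Python bindings; the grid size and tolerance here are illustrative, not the values used in the work:

```python
import numpy as np
import zfpy

# Smooth stand-in for one velocity component on a 128^3 grid.
x = np.linspace(0.0, 2.0 * np.pi, 128)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
velocity = np.sin(X) * np.cos(Y) * np.cos(Z)

tolerance = 1e-3                                  # absolute error bound (illustrative)
compressed = zfpy.compress_numpy(velocity, tolerance=tolerance)
recovered = zfpy.decompress_numpy(compressed)

# Fixed-accuracy mode bounds the pointwise error by the requested tolerance.
assert np.max(np.abs(recovered - velocity)) <= tolerance
print("compression ratio:", velocity.nbytes / len(compressed))
```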
Finally, we present Myrcene, a modular distributed parallel extraction system. It allows a data scientist to run the extraction algorithms described above on a distributed cluster of burst buffer nodes. The extraction algorithms are built as modules for the system and run in parallel on the burst buffer nodes, while a feature extraction coordinator synchronizes the simulation with the extraction process. A data scientist only needs to write one module that performs the extraction or visualization on a single subset of the data; the system then executes that module at scale on the burst buffers, managing all of the communication, synchronization, and parallelism required to perform the analysis.
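Purely as illustration of the module concept, the sketch below shows what a per-subdomain extraction module might look like; Myrcene's actual module API is not given in this abstract, so the class names, method signature, and coordinator hooks are assumptions:

```python
class ExtractionModule:
    """Hypothetical base class: one module processes one subdomain of the
    checkpoint on a single burst buffer node; the coordinator handles the rest."""

    def run(self, subdomain, output_dir):
        raise NotImplementedError

class VorticityExtraction(ExtractionModule):
    """Wraps the thresholding/dilation sketch above as a module (illustrative)."""

    def run(self, subdomain, output_dir):
        # 'subdomain' is assumed to expose this node's slice of u, v, w.
        u, v, w = subdomain.velocity()
        uh, vh, wh, stencil = extract_high_vorticity(u, v, w,
                                                     dx=subdomain.dx,
                                                     threshold=50.0)
        # Write the reduced data locally; aggregation across nodes is left
        # to the system's coordinator in this sketch.
        np.savez(f"{output_dir}/high_vorticity_{subdomain.rank}.npz",
                 u=uh, v=vh, w=wh, stencil=stencil)
```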
Speaker Biography
Stephen S. Hamilton is a Lieutenant Colonel in the US Army. He earned his bachelor's degree in computer science from West Point in 1998 and a Master of Science in Software Engineering from Auburn University in 2008. He taught at West Point from 2008 to 2011 and was promoted to Assistant Professor in 2010. He is a member of Upsilon Pi Epsilon and Phi Kappa Phi. In the summer of 2017, Stephen will join the Army Cyber Institute at West Point, NY, as a Research Scientist.