Robert Sisneros and David Pugmire
Particle advection is one of the foundational algorithms for visualization and analysis, and a primary means of gaining understanding of the vector fields produced by scientific simulations. Techniques built on particle advection, including streamlines, pathlines, stream surfaces, Finite Time Lyapunov Exponents (FTLE), and puncture plots, are central to many production visualization tools, including those used for HPC visualization and analysis. Particle advection, however, is notoriously difficult to parallelize efficiently due to its tendency toward load imbalance. Computational requirements are unknown a priori and are sensitive to a number of factors: data decomposition, vector field characteristics, hardware configuration, and tunable algorithm settings. The early "parallelize over data" (POD) algorithm found success by minimizing data movement, which is typically the bottleneck for visualization.
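As background, the serial kernel underlying all of these techniques is the numerical integration of a particle through a vector field. The sketch below, a minimal illustration rather than any production implementation, traces a single streamline with a fourth-order Runge-Kutta integrator through an analytic circular flow; real tools would instead interpolate velocities from a distributed, gridded simulation field, which is precisely where the parallelization and load-balance difficulties arise.

```python
import numpy as np

def velocity(p):
    """Analytic 2D vector field (a simple circular flow), a stand-in for
    sampled simulation data."""
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h):
    """One fourth-order Runge-Kutta advection step of size h."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def advect(seed, h=0.01, steps=1000):
    """Trace a streamline from a seed point; the returned polyline is
    what a streamline renderer ultimately draws."""
    traj = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        traj.append(rk4_step(traj[-1], h))
    return np.array(traj)

traj = advect([1.0, 0.0])
```

Because each step depends on the previous position, and the number of steps a particle takes (and which data block it wanders into) is unknown until it is integrated, the per-particle work varies unpredictably, which is the root of the load-imbalance problem the abstract describes.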
While a variety of parallelization algorithms have been developed, each typically addressing one of the factors impacting performance, POD remains the de facto standard as well as the algorithm most suitable for in situ deployment. This served as the motivation for a recent project revisiting methods for improved parallel particle advection. In this talk I will outline this work and discuss the considerations involved in ensuring that research advancements benefit in situ capabilities.