In its first version, Catalyst relied heavily on the VTK library, so using it required advanced knowledge of VTK and of how to link it to simulation codes. With time and use, we saw the drawbacks of this first approach and decided to create a new architecture, the Catalyst API.

With this new implementation, it is:

  • easier to implement in your simulation (less knowledge required)

  • easier to update from one version to another (fewer dependencies, binary compatibility)

  • possible to activate Steering mode, where ParaView can modify simulation parameters at runtime

Learn more about this evolution in the Background section.


This is the documentation for the ParaView implementation of the Catalyst API (Catalyst 2). If you are looking for information about the previous version of Catalyst, please check this manual.


Using a Catalyst-enabled simulation

As the simulation user, you just have to provide a standard ParaView Python script, as described in Getting started with pvpython. The easiest way to do this is to open a representative dataset in ParaView, then create a pipeline and visualization, and finally choose File > Save Catalyst State.

So you have access to the whole power of ParaView! Just remember to use Extractors to save the meaningful data.

When saving the script, you can also enable Live Visualization. In this mode, you can connect your ParaView application to the remote simulation and see live results, as if you had opened a file that updates automatically at each timestep. In Live Visualization, you can also configure some Steering parameters to take control of the running simulation.

And all of this happens without even writing a simulation data file to your disk!

Instrumenting a simulation

On the simulation side, the main work is to describe your data using the Conduit library. With this description, ParaView is able to wrap the simulation memory without any copy. Forward this description to ParaView by calling the catalyst_execute method each time you want the analysis to run, typically inside the main simulation loop.
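As a rough sketch of what this per-timestep call can look like with the Catalyst 2 C API and Conduit (the channel name `grid` and the field `pressure` are illustrative, not required names; the exact headers and node paths should be checked against your Catalyst version):

```c
#include <catalyst.h>   /* Catalyst 2 C API */
#include <conduit.h>    /* Conduit C API */

/* Hypothetical adaptor function, called each timestep from the main
 * simulation loop. `pressure` is a field owned by the simulation. */
void do_catalyst_execute(long cycle, double time,
                         double* pressure, long num_values)
{
  conduit_node* exec = conduit_node_create();

  /* Simulation state. */
  conduit_node_set_path_int64(exec, "catalyst/state/timestep", cycle);
  conduit_node_set_path_float64(exec, "catalyst/state/time", time);

  /* Describe the data on a channel (here named "grid").
   * The "_external" setter wraps the simulation memory: no copy. */
  conduit_node_set_path_char8_str(exec, "catalyst/channels/grid/type",
                                  "mesh");
  conduit_node_set_path_external_float64_ptr(
      exec, "catalyst/channels/grid/data/fields/pressure/values",
      pressure, num_values);
  /* ... the coordset and topology entries from the Conduit Mesh
     Blueprint would also go under catalyst/channels/grid/data ... */

  catalyst_execute(exec);
  conduit_node_destroy(exec);
}
```

The key point is that only lightweight metadata and pointers are placed in the Conduit tree; the heavy arrays stay where the simulation allocated them.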

During the initialization pass (catalyst_initialize), do not forget to forward the user-defined Python script.
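A minimal initialization sketch, under the same assumptions as above (the script entry name `script0` and the file path are placeholders; the `catalyst/mpi_comm` entry is only needed for MPI runs):

```c
#include <catalyst.h>
#include <conduit.h>
#include <mpi.h>

/* Hypothetical adaptor function, called once before the main loop. */
void do_catalyst_initialize(const char* script_path)
{
  conduit_node* init = conduit_node_create();

  /* Forward the user-defined ParaView Python script. */
  conduit_node_set_path_char8_str(
      init, "catalyst/scripts/script0/filename", script_path);

  /* Hand the MPI communicator to Catalyst (Fortran handle form),
     so the analysis runs distributed alongside the solver. */
  conduit_node_set_path_int64(init, "catalyst/mpi_comm",
      MPI_Comm_c2f(MPI_COMM_WORLD));

  /* Select the ParaView implementation of the Catalyst API. */
  conduit_node_set_path_char8_str(
      init, "catalyst_load/implementation", "paraview");

  catalyst_initialize(init);
  conduit_node_destroy(init);
}
```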

This script can run in a distributed environment and use MPI communication.

Want to run the analysis on dedicated MPI nodes? Just use the AdiosCatalyst variant and switch to in transit analysis. This can be useful, for instance, to dedicate GPU nodes to the visualization part.