The Operator is the main object used to create, transform, and stream data. It can be seen as an integrated circuit in electronics, with a range of input and output pins. When the operator is evaluated, it processes the input information to compute its output according to its description. The operator is made of:
Inputs: the input pins allow users to pass their data to the operator. DPF data container types, standard types, or other operators' outputs can be connected to the input pins (connecting an operator output to another operator input does not evaluate that upstream operator). The inputs let users choose the times/frequencies on which to evaluate a result, specify the files where a result can be found, or provide a field on which an operation should be computed. Optional input pins allow further customization of the operator's outputs. Some of the most common pins are illustrated below:
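As a minimal sketch with the PyDPF-Core Python API (the file path and scoping ids are placeholders chosen for illustration), a result provider typically exposes data_sources, time_scoping, and mesh_scoping pins:

```python
from ansys.dpf import core as dpf

# hypothetical result file path used for illustration
data_sources = dpf.DataSources("model.rst")

disp_op = dpf.operators.result.displacement()
disp_op.inputs.data_sources.connect(data_sources)  # where to find the result
disp_op.inputs.time_scoping.connect([1])           # which time/freq sets to evaluate
disp_op.inputs.mesh_scoping.connect(               # which entities to compute on
    dpf.Scoping(ids=[1, 2, 3], location=dpf.locations.nodal)
)
```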
Configurations: with configurations, users can optionally choose how the operator runs. This is an advanced feature used for deep customization. The different options can change the way loops are done or whether the operator needs to perform checks on its inputs. Some of the most common configuration options are illustrated below:
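As a sketch with PyDPF-Core, printing an operator's config lists the options it supports and their current values:

```python
from ansys.dpf import core as dpf

add_op = dpf.operators.math.add()
# printing the configuration shows the supported options,
# their accepted types, and their current values
print(add_op.config)
```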
Data transformation: this is the internal operation that occurs when an operator is evaluated. The operation returns outputs depending on the inputs and configurations given by the user. Each operator's description details the operation it applies.
Outputs: these are the results of the operation. An operator can have one or several outputs, which are usually DPF data containers.
Operators can be chained together to create workflows. To do so, the user only needs to connect some operators' outputs to another operator's inputs. With workflows, lazy evaluation is performed: when the last operator's outputs are requested by the user, all the connected operators are evaluated (and not before) to compute the requested result. All the inputs, outputs, and description information can be found by clicking on operators in the left panel of this documentation.
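The lazy evaluation can be sketched as follows with PyDPF-Core (the file path is a placeholder): connecting the operators triggers no computation, only requesting the final output does.

```python
from ansys.dpf import core as dpf

data_sources = dpf.DataSources("model.rst")  # placeholder path

disp_op = dpf.operators.result.displacement()
disp_op.inputs.data_sources.connect(data_sources)

norm_op = dpf.operators.math.norm_fc()
# connecting an output to an input does not evaluate disp_op
norm_op.inputs.fields_container.connect(disp_op.outputs.fields_container)

# both operators are evaluated only now, when the final output is requested
norm_fields = norm_op.outputs.fields_container()
```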
The workflow is built by chaining operators. It evaluates the data processing defined by the operators it contains. It needs input information, and it computes the requested output information. The workflow is used to create a black box performing data transformations of varying complexity. The different operators contained in a workflow can be internally connected together so that the end user does not need to be aware of its complexity. The workflow only needs to expose the necessary input and output pins. For example, a workflow could expose a "time scoping" input pin and a "data sources" input pin, expose a "result" output pin, and have very complex routines inside it. See workflow examples in the APIs tab.
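A sketch of such a workflow with PyDPF-Core (the exposed pin names "time_scoping", "data_sources", and "result" and the file path are chosen for the example):

```python
from ansys.dpf import core as dpf

disp_op = dpf.operators.result.displacement()
norm_op = dpf.operators.math.norm_fc(disp_op)

workflow = dpf.Workflow()
workflow.add_operators([disp_op, norm_op])
# expose only the pins the end user needs to know about
workflow.set_input_name("time_scoping", disp_op.inputs.time_scoping)
workflow.set_input_name("data_sources", disp_op.inputs.data_sources)
workflow.set_output_name("result", norm_op.outputs.fields_container)

# the internal routines stay hidden: connect the exposed pins, ask for the output
workflow.connect("data_sources", dpf.DataSources("model.rst"))  # placeholder path
workflow.connect("time_scoping", [1])
result = workflow.get_output("result", dpf.types.fields_container)
```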
The field is the main simulation data container. In numerical simulations, result data is defined by values associated with entities (scoping), and these entities are a subset of a model (support). In DPF, field data is always associated with its scoping and support, making the field a self-describing piece of data. A field is also defined by its dimensionality, unit, location... A field can, for example, describe a displacement vector or norm, stress and strain tensors, equivalent stress and strain, or the min/max over time of any result. Thanks to its scoping, it can be defined on a complete model or just on certain entities of the model. The data is stored as a vector of double values, and each elementary entity has a number of components given by the dimensionality (a displacement has 3 components, a symmetrical stress matrix 6...).
The scoping is a set of entity ids representing a subset of the model's support. Typically, a scoping can represent node ids, element ids, time steps, frequencies, joints... Its location indicates what kind of entity the scoping refers to. Scopings are used to identify the entities on which a field is defined or to choose (through an input pin) a subset on which an operator should compute its result.
The data sources is a container of the files in which the analysis results can be found.
Streams is an entity containing the data sources. Once the files in the streams are opened, they stay open and keep some data in cache to make subsequent evaluations faster. To close the files, release the streams.
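A minimal sketch with PyDPF-Core, assuming the streams_provider operator and the release_handles method (exact names may vary across versions):

```python
from ansys.dpf import core as dpf

streams_op = dpf.operators.metadata.streams_provider()
streams_op.inputs.data_sources.connect(dpf.DataSources("model.rst"))  # placeholder path
streams = streams_op.outputs.streams_container()

# close the files once the evaluations are done
streams.release_handles()
```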
The support describes the model. It can be the mesh, geometric entities, time or frequency domain...
The fields container is a container of fields, used mainly in transient, harmonic, modal, or multi-step static analyses, where there can be a field for each time step or each frequency. Consequently, the fields container can describe a complete analysis with all its details. The fields container is designed as a set of fields ordered through labels and ids. Labels identify how the fields are filtered. The most common fields containers have the label "time", with ids corresponding to each time set; the label "complex" allows separating real parts (id=0) from imaginary parts (id=1) in a harmonic analysis.
The meshed region is DPF's entity describing a mesh. Node and element scopings, element types, connectivity (list of node indices composing each element), and node coordinates are the fundamental entities composing the meshed region. It can also have materials, named selections...
The time freq support describes an analysis's temporal or frequency space. For a transient analysis, all the time sets' cumulative indices with their times are described. For a harmonic analysis, the real and imaginary frequencies, the RPMs, and the load steps are described.
The model is a helper designed to give users shortcuts to access a model's metadata and to instantiate result providers for this model. A Model is able to open a DataSources or a Streams to read the metadata and expose it to the user. The metadata is made of all the entities describing a model: its MeshedRegion, its TimeFreqSupport, and its ResultInfo. With the model, the user can easily access information about the mesh, about the time/frequency steps and substeps used in the analysis, and the list of available results.
The Scoping is a set of entity ids defined on a location (the location is optional).
The Scoping's location and ids can be accessed with:
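For example, with PyDPF-Core:

```python
from ansys.dpf import core as dpf

scoping = dpf.Scoping(ids=[1, 2, 3], location=dpf.locations.nodal)
print(scoping.location)  # 'Nodal'
print(scoping.ids)       # [1, 2, 3]
```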
The minimum requirements for a well-defined Field are a dimensionality (scalar, 3-component vector, 6-component symmetrical matrix...), a location ("Nodal", "Elemental", "ElementalNodal", "Timefrq"...), a data vector, and a scoping with ids. The user can also set the number of shell layers. If the field has one elementary data per entity (elementary data size = number of components for a "Nodal" or "Elemental" field, for example), then the data vector can be set directly. If a more complex field is required (an "ElementalNodal" Field, for example), the data can be set entity by entity.
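A minimal sketch of both cases with PyDPF-Core (the ids and values are placeholders):

```python
from ansys.dpf import core as dpf

# a 3-component vector field defined on 2 nodes
field = dpf.Field(nentities=2, nature=dpf.natures.vector, location=dpf.locations.nodal)
field.scoping.ids = [1, 2]
# one elementary data per entity: the data vector can be set directly
field.data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

# a more complex field is filled entity by entity
en_field = dpf.Field(nentities=1, nature=dpf.natures.vector,
                     location=dpf.locations.elemental_nodal)
en_field.append([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], scopingid=1)  # 2 nodes of element 1
```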
The Field's side information as well as the data itself can be accessed with:
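For example, with PyDPF-Core (a small field built in place for illustration):

```python
from ansys.dpf import core as dpf

field = dpf.Field(nentities=1)  # defaults to a 3D vector field with a "Nodal" location
field.scoping.ids = [1]
field.data = [1.0, 2.0, 3.0]

print(field.location)                  # 'Nodal'
print(field.scoping.ids)               # ids the data is scoped on
print(field.unit)                      # unit string, empty here
print(field.data)                      # the whole data vector
print(field.get_entity_data_by_id(1))  # elementary data of the entity with id 1
```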
The Fields Container is a vector of Fields, where all the Fields are ordered with labels and ids. Most commonly, the Fields Container is scoped on the "time" label, and the ids are the time or frequency sets. More generically, the Fields Container allows splitting results according to different criteria.
The Fields Container is the main output of result providers:
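A sketch with PyDPF-Core (the file path is a placeholder):

```python
from ansys.dpf import core as dpf

model = dpf.Model("model.rst")  # placeholder path
disp_provider = model.results.displacement()
disp_provider.inputs.time_scoping.connect([1, 2])

fields = disp_provider.outputs.fields_container()
print(fields.labels)                           # e.g. ['time']
field_at_set_1 = fields.get_field({"time": 1})
```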
Data Sources is the entity containing the different paths to the result files of an analysis. An extension key ('rst' for example) is used to choose which files represent result files, the others being accessory files. See more information on using Data Sources in Mechanical in the "How to use DPF's package / IPython" menu.
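For example, with PyDPF-Core (file names are placeholders):

```python
from ansys.dpf import core as dpf

data_sources = dpf.DataSources()
data_sources.set_result_file_path("model.rst")  # main result file, 'rst' key deduced
data_sources.add_file_path("model.dat")         # accessory file
```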
Users can create their own data to manipulate it with DPF. A Meshed Region can be created simply with:
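A minimal sketch with PyDPF-Core, assuming a single tetrahedron built from scratch:

```python
from ansys.dpf import core as dpf

# a mesh made of 4 nodes and 1 solid element
mesh = dpf.MeshedRegion(num_nodes=4, num_elements=1)
for node_id, coordinates in enumerate([[0.0, 0.0, 0.0],
                                       [1.0, 0.0, 0.0],
                                       [0.0, 1.0, 0.0],
                                       [0.0, 0.0, 1.0]], start=1):
    mesh.nodes.add_node(node_id, coordinates)
mesh.elements.add_solid_element(1, [0, 1, 2, 3])  # connectivity given as node indices
```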
A model is usually represented by a Meshed Region in DPF. The mesh provider operator gives access to an analysis's mesh. The user can then retrieve various information from the mesh, such as the coordinates of all the nodes and the connectivity between elements and nodes.
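A sketch with PyDPF-Core (the file path is a placeholder):

```python
from ansys.dpf import core as dpf

mesh_provider = dpf.operators.mesh.mesh_provider()
mesh_provider.inputs.data_sources.connect(dpf.DataSources("model.rst"))  # placeholder path
mesh = mesh_provider.outputs.mesh()

coordinates = mesh.nodes.coordinates_field         # node coordinates as a Field
connectivity = mesh.elements.connectivities_field  # node indices composing each element
```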
The time or frequency space of an analysis is described by the Time Freq Support entity in DPF. It gives access to real and imaginary sets. Users can create a time freq support to manage their own data:
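A minimal sketch with PyDPF-Core, assuming three frequency sets (the values are placeholders):

```python
from ansys.dpf import core as dpf

# frequencies of the analysis stored in a dedicated scalar field
frequencies = dpf.fields_factory.create_scalar_field(
    num_entities=3, location=dpf.locations.time_freq
)
frequencies.append([0.1], 1)
frequencies.append([0.2], 2)
frequencies.append([0.3], 3)

time_freq_support = dpf.TimeFreqSupport()
time_freq_support.time_frequencies = frequencies
```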
The Time Freq Support of a specific file can be accessed with the following methods:
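For example, with PyDPF-Core (the file path is a placeholder):

```python
from ansys.dpf import core as dpf

model = dpf.Model("model.rst")  # placeholder path
time_freq_support = model.metadata.time_freq_support

print(time_freq_support.n_sets)                 # number of time/freq sets
print(time_freq_support.time_frequencies.data)  # real times or frequencies
print(time_freq_support.complex_frequencies)    # imaginary frequencies, if any
```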
The Model is built with a DataSources that it opens (in a streams by default) to explore an analysis. Printing the model is a good way to see which results are available:
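For example, with PyDPF-Core:

```python
from ansys.dpf import core as dpf

model = dpf.Model("model.rst")  # placeholder path, opened in a streams by default
print(model)  # lists the available results, mesh statistics, and time/freq information
```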
In DPF, the operator is used to import and modify the simulation data. There are three main types of operators:
Operators importing/reading data
Operators transforming existing data
Operators exporting data
These operators allow reading data from solver files or from standard file types. Different solver formats are handled by DPF, such as rst/mode/rfrq/rdsp... for MAPDL, d3plot for LS-DYNA, cas.h5/dat.h5/res/flprj for CFX and Fluent, and odb for Abaqus. To read these, different readers have been implemented in plugins. Plugins can be loaded on demand in any of DPF's scripting languages with the "load library" methods. File readers can be used generically thanks to DPF's result providers, which means that the same operators can be used for any file type. For example, reading a displacement or a stress from any file is done with:
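A sketch with PyDPF-Core (the file path is a placeholder; the same calls work for other supported formats):

```python
from ansys.dpf import core as dpf

data_sources = dpf.DataSources("model.rst")  # placeholder path

displacement = dpf.operators.result.displacement()
displacement.inputs.data_sources.connect(data_sources)
disp_fields = displacement.outputs.fields_container()

stress = dpf.operators.result.stress()
stress.inputs.data_sources.connect(data_sources)
stress_fields = stress.outputs.fields_container()
```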
Result providers can be customized to read a specific time or frequency, or to provide results on a subset of the mesh:
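For example, with PyDPF-Core (path, time sets, and node ids are placeholders):

```python
from ansys.dpf import core as dpf

displacement = dpf.operators.result.displacement()
displacement.inputs.data_sources.connect(dpf.DataSources("model.rst"))  # placeholder path

# read only time sets 1 and 5
displacement.inputs.time_scoping.connect([1, 5])
# compute only on a subset of the mesh
displacement.inputs.mesh_scoping.connect(
    dpf.Scoping(ids=[10, 11, 12], location=dpf.locations.nodal)
)
fields = displacement.outputs.fields_container()
```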
Standard file format readers are also supported to import custom data. Fields can be imported from csv, vtk, or hdf5 files:
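A sketch with PyDPF-Core for the csv case, assuming the csv_to_field operator from the serialization category (the file name is a placeholder, and some readers require their plugin to be loaded first):

```python
from ansys.dpf import core as dpf

# data sources pointing to a csv file previously exported with DPF
csv_sources = dpf.DataSources()
csv_sources.set_result_file_path("field.csv", key="csv")

csv_reader = dpf.operators.serialization.csv_to_field()
csv_reader.inputs.data_sources.connect(csv_sources)
fields = csv_reader.outputs.fields_container()
```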
The field being the main data container in DPF, most of the operators transforming data take a field or fields container as input and return a transformed field or fields container as output. Analytic, averaging, or filtering operations can be performed on the simulation data:
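A sketch chaining one operator of each kind with PyDPF-Core (the file path and the threshold value are placeholders):

```python
from ansys.dpf import core as dpf

stress = dpf.operators.result.stress()
stress.inputs.data_sources.connect(dpf.DataSources("model.rst"))  # placeholder path

# analytic: norm of each field of the container
norm = dpf.operators.math.norm_fc(stress)
# averaging: bring ElementalNodal data to a Nodal location
to_nodal = dpf.operators.averaging.to_nodal_fc(norm)
# filtering: keep only the entities whose value exceeds a threshold
high_pass = dpf.operators.filter.field_high_pass_fc(to_nodal, threshold=0.5)

filtered_fields = high_pass.outputs.fields_container()
```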
After transforming or reading simulation data with DPF, the user might want to export the results in a given format, either to use them in another environment or to save them for future use with DPF. vtk, h5, csv, and txt (serializer operator) are examples of supported export formats. Export operators often have matching import operators, allowing users to reuse their data. The "serialization" operators menu lists the available import/export operators.
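A sketch of a vtk export with PyDPF-Core (file paths are placeholders, and pin names follow the vtk_export operator's description):

```python
from ansys.dpf import core as dpf

model = dpf.Model("model.rst")  # placeholder path
disp_fields = model.results.displacement().outputs.fields_container()

vtk_export = dpf.operators.serialization.vtk_export()
vtk_export.inputs.file_path.connect("results.vtk")  # placeholder output path
vtk_export.inputs.mesh.connect(model.metadata.meshed_region)
vtk_export.inputs.fields1.connect(disp_fields)
vtk_export.run()  # this operator has no output pin, so it is run explicitly
```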
To create more complex operations and customizable results, operators can be chained together to create workflows. This way, a result can be read from a solver result file and directly transformed in a single workflow. Examples can be found in the APIs/Workflow examples menu. Two syntaxes can be used to create and connect operators together:
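Both syntaxes sketched with PyDPF-Core (the file path is a placeholder):

```python
from ansys.dpf import core as dpf

data_sources = dpf.DataSources("model.rst")  # placeholder path

# syntax 1: instantiate the operators, then connect the pins explicitly
stress = dpf.operators.result.stress()
stress.inputs.data_sources.connect(data_sources)
eqv = dpf.operators.invariant.von_mises_eqv_fc()
eqv.inputs.fields_container.connect(stress.outputs.fields_container)
fields = eqv.outputs.fields_container()

# syntax 2: pass the upstream operator directly when instantiating the next one
eqv = dpf.operators.invariant.von_mises_eqv_fc(
    dpf.operators.result.stress(data_sources=data_sources)
)
fields = eqv.outputs.fields_container()
```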
Advanced users might want to configure an operator's behavior during its running phase. This can be done through the "config". For example, the "inplace" configuration chooses whether an operator can directly modify the input data container instead of creating a new one, and the "work_by_index" configuration chooses whether an operation between two fields should use their indices or their mesh ids. Each operator's description explains which configurations are supported.
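A sketch with PyDPF-Core, assuming the "work_by_index" option named in the text is supported by the chosen operator:

```python
from ansys.dpf import core as dpf

config = dpf.Config(operator_name="add")
# the option name below is taken from the text; availability depends on the operator
config.set_config_option("work_by_index", True)

add_op = dpf.Operator("add", config=config)
print(add_op.config)  # check the supported options and their current values
```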