hdf5dpf_generate_result_file#

Autogenerated DPF operator classes.

class ansys.dpf.core.operators.serialization.hdf5dpf_generate_result_file.hdf5dpf_generate_result_file(dataset_size_compression_threshold=None, h5_native_compression=None, export_floats=None, filename=None, mesh_provider_out=None, time_freq_support_out=None, ansys_unit_system_id=None, input_name1=None, input_name2=None, config=None, server=None)#

Generate a DPF result file from the provided information.

Parameters:
  • dataset_size_compression_threshold (int, optional) – Integer value that defines the minimum dataset size (in bytes) at which h5 native compression is used; applicable to arrays of floats, doubles, and integers.

  • h5_native_compression (int or DataTree, optional) – Integer value or DataTree that defines the h5 native compression to use. For integer input: 0: no compression (default); 1-9: gzip compression, where 9 provides maximum compression at the slowest speed. For DataTree input: {type: none / gzip / zstd; level: gzip (1-9) / zstd (1-20); num_threads: zstd (>0)}.

  • export_floats (bool, optional) – Converts doubles to floats to reduce file size (default is true).

  • filename (str) – Name of the output file that will be generated (UTF-8).

  • mesh_provider_out (MeshedRegion, optional) – Defines the MeshedRegion to export, as provided by a mesh provider.

  • time_freq_support_out (TimeFreqSupport, optional) – Defines the TimeFreqSupport to export, as provided by a time_freq_support_provider.

  • ansys_unit_system_id (int or ResultInfo, optional) – Defines the unit system the results are exported with. A ResultInfo can be input to also export the physics type and analysis type.

  • input_name1 (str or Any, optional) – Set of even and odd pins to serialize results. Even pins (4, 6, 8…) are strings representing the names of the results to be serialized. Odd pins (5, 7, 9…) are DPF types representing the results to be serialized. They should go in pairs (for each result name, there should be a result) and be connected sequentially.

  • input_name2 (str or Any, optional) – Set of even and odd pins to serialize results. Even pins (4, 6, 8…) are strings representing the names of the results to be serialized. Odd pins (5, 7, 9…) are DPF types representing the results to be serialized. They should go in pairs (for each result name, there should be a result) and be connected sequentially.

Returns:

data_sources – DataSources filled with the path of the generated h5 file.

Return type:

DataSources

Examples

>>> from ansys.dpf import core as dpf
>>> # Instantiate operator
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> # Make input connections
>>> my_dataset_size_compression_threshold = int()
>>> op.inputs.dataset_size_compression_threshold.connect(my_dataset_size_compression_threshold)
>>> my_h5_native_compression = int()
>>> op.inputs.h5_native_compression.connect(my_h5_native_compression)
>>> my_export_floats = bool()
>>> op.inputs.export_floats.connect(my_export_floats)
>>> my_filename = str()
>>> op.inputs.filename.connect(my_filename)
>>> my_mesh_provider_out = dpf.MeshedRegion()
>>> op.inputs.mesh_provider_out.connect(my_mesh_provider_out)
>>> my_time_freq_support_out = dpf.TimeFreqSupport()
>>> op.inputs.time_freq_support_out.connect(my_time_freq_support_out)
>>> my_ansys_unit_system_id = int()
>>> op.inputs.ansys_unit_system_id.connect(my_ansys_unit_system_id)
>>> my_input_name1 = str()
>>> op.inputs.input_name1.connect(my_input_name1)
>>> my_input_name2 = str()
>>> op.inputs.input_name2.connect(my_input_name2)
>>> # Instantiate operator and connect inputs in one line
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file(
...     dataset_size_compression_threshold=my_dataset_size_compression_threshold,
...     h5_native_compression=my_h5_native_compression,
...     export_floats=my_export_floats,
...     filename=my_filename,
...     mesh_provider_out=my_mesh_provider_out,
...     time_freq_support_out=my_time_freq_support_out,
...     ansys_unit_system_id=my_ansys_unit_system_id,
...     input_name1=my_input_name1,
...     input_name2=my_input_name2,
... )
>>> # Get output data
>>> result_data_sources = op.outputs.data_sources()
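The even/odd pin pairing used by input_name1 and input_name2 can be sketched in plain Python. The helper below is illustrative only and not part of the DPF API; it simply computes which pin numbers carry the n-th (result name, result) pair.

```python
# Illustrative helper (not part of the DPF API): compute the pin numbers
# carrying the n-th (result name, result) pair. Even pins (4, 6, 8, ...)
# hold the result names; odd pins (5, 7, 9, ...) hold the results.
def pins_for_result_pair(n: int) -> tuple[int, int]:
    """Return (name_pin, result_pin) for the n-th pair (n starts at 0)."""
    name_pin = 4 + 2 * n
    return name_pin, name_pin + 1

# With a live operator, the pairs would then be connected sequentially:
# for n, (name, result) in enumerate(pairs):
#     name_pin, result_pin = pins_for_result_pair(n)
#     op.connect(name_pin, name)
#     op.connect(result_pin, result)
```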
static default_config(server=None)#

Returns the default config of the operator.

This config can then be changed to fit the user's needs and used to instantiate the operator. The Configuration allows customizing how the operator processes the operation.

Parameters:

server (server.DPFServer, optional) – Server with channel connected to the remote or local instance. When None, attempts to use the global server.

property inputs#

Enables connecting inputs to the operator.

Returns:

inputs

Return type:

InputsHdf5DpfGenerateResultFile

property outputs#

Enables getting outputs of the operator by evaluating it.

Returns:

outputs

Return type:

OutputsHdf5DpfGenerateResultFile

property config#

Copy of the operator’s current configuration.

You can modify the copy of the configuration and then use operator.config = new_config or instantiate an operator with the new configuration as a parameter.

For information on an operator’s options, see the documentation for that operator.

Returns:

Copy of the operator’s current configuration.

Return type:

ansys.dpf.core.config.Config

Examples

Modify the copy of an operator’s configuration and set it as current config of the operator.

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.math.add()
>>> config_add = op.config
>>> config_add.set_work_by_index_option(True)
>>> op.config = config_add
connect(pin, inpt, pin_out=0)#

Connect an input on the operator using a pin number.

Parameters:
  • pin (int) – Number of the input pin.

  • inpt (str, int, double, bool, list[int], list[float], Field, FieldsContainer, Scoping, ScopingsContainer, MeshedRegion, MeshesContainer, DataSources, CyclicSupport, dict, Outputs, Operator, os.PathLike) – Object to connect to.

  • pin_out (int, optional) – If the input is an operator, the output pin of the input operator. The default is 0.

Examples

Compute the minimum of displacement by chaining the "U" and "min_max_fc" operators.

>>> from ansys.dpf import core as dpf
>>> from ansys.dpf.core import examples
>>> data_src = dpf.DataSources(examples.find_multishells_rst())
>>> disp_op = dpf.operators.result.displacement()
>>> disp_op.inputs.data_sources(data_src)
>>> max_fc_op = dpf.operators.min_max.min_max_fc()
>>> max_fc_op.inputs.connect(disp_op.outputs)
>>> max_field = max_fc_op.outputs.field_max()
>>> max_field.data
DPFArray([[0.59428386, 0.00201751, 0.0006032 ]]...
connect_operator_as_input(pin, op)#

Connect an operator as an input on a pin.

Parameters:
  • pin (int) – Number of the input pin.

  • op (ansys.dpf.core.dpf_operator.Operator) – Operator to connect as input.

eval(pin=None)#

Evaluate this operator.

Parameters:

pin (int) – Number of the output pin. The default is None.

Returns:

output – By default, returns the first output of the operator; when a pin is specified, returns the output of that pin. If no output is requested, only evaluates the operator.

Return type:

FieldsContainer, Field, MeshedRegion, Scoping

Examples

Use the eval method.

>>> from ansys.dpf import core as dpf
>>> import ansys.dpf.core.operators.math as math
>>> from ansys.dpf.core import examples
>>> data_src = dpf.DataSources(examples.find_multishells_rst())
>>> disp_op = dpf.operators.result.displacement()
>>> disp_op.inputs.data_sources(data_src)
>>> normfc = math.norm_fc(disp_op).eval()
get_output(pin=0, output_type=None)#

Retrieve the output of the operator on the pin number.

To activate the progress bar for server versions 3.0 or higher, use my_op.progress_bar=True.

Parameters:
  • pin (int, optional) – Number of the output pin. The default is 0.

  • output_type (ansys.dpf.core.common.types, type, optional) – Requested type of the output. The default is None.

Returns:

Output of the operator.

Return type:

type

static operator_specification(op_name, server=None)#

Documents an Operator with its description (what the Operator does), its inputs and outputs, and some properties.

property progress_bar: bool#

With this property, the user can choose to print a progress bar when the operator's output is requested. The default is False.

run()#

Evaluate this operator.

property specification#

Returns the Specification (or documentation) of this Operator

Return type:

Specification

class ansys.dpf.core.operators.serialization.hdf5dpf_generate_result_file.InputsHdf5DpfGenerateResultFile(op: ansys.dpf.core.dpf_operator.Operator)#

Intermediate class used to connect user inputs to hdf5dpf_generate_result_file operator.

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> my_dataset_size_compression_threshold = int()
>>> op.inputs.dataset_size_compression_threshold.connect(my_dataset_size_compression_threshold)
>>> my_h5_native_compression = int()
>>> op.inputs.h5_native_compression.connect(my_h5_native_compression)
>>> my_export_floats = bool()
>>> op.inputs.export_floats.connect(my_export_floats)
>>> my_filename = str()
>>> op.inputs.filename.connect(my_filename)
>>> my_mesh_provider_out = dpf.MeshedRegion()
>>> op.inputs.mesh_provider_out.connect(my_mesh_provider_out)
>>> my_time_freq_support_out = dpf.TimeFreqSupport()
>>> op.inputs.time_freq_support_out.connect(my_time_freq_support_out)
>>> my_ansys_unit_system_id = int()
>>> op.inputs.ansys_unit_system_id.connect(my_ansys_unit_system_id)
>>> my_input_name1 = str()
>>> op.inputs.input_name1.connect(my_input_name1)
>>> my_input_name2 = str()
>>> op.inputs.input_name2.connect(my_input_name2)
property dataset_size_compression_threshold#

Allows connecting the dataset_size_compression_threshold input to the operator.

Integer value that defines the minimum dataset size (in bytes) at which h5 native compression is used; applicable to arrays of floats, doubles, and integers.

Parameters:

my_dataset_size_compression_threshold (int) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.dataset_size_compression_threshold.connect(my_dataset_size_compression_threshold)
>>> # or
>>> op.inputs.dataset_size_compression_threshold(my_dataset_size_compression_threshold)
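The threshold acts as a size gate. A minimal sketch of the documented rule, in plain Python (illustrative only, not DPF code), assuming datasets at or above the threshold use native compression:

```python
# Illustrative only: mirrors the documented rule that the threshold is the
# minimum dataset size (in bytes) at which h5 native compression applies.
def uses_native_compression(dataset_nbytes: int, threshold_bytes: int) -> bool:
    return dataset_nbytes >= threshold_bytes
```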
property h5_native_compression#

Allows connecting the h5_native_compression input to the operator.

Integer value or DataTree that defines the h5 native compression to use. For integer input: 0: no compression (default); 1-9: gzip compression, where 9 provides maximum compression at the slowest speed. For DataTree input: {type: none / gzip / zstd; level: gzip (1-9) / zstd (1-20); num_threads: zstd (>0)}.

Parameters:

my_h5_native_compression (int or DataTree) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.h5_native_compression.connect(my_h5_native_compression)
>>> # or
>>> op.inputs.h5_native_compression(my_h5_native_compression)
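The documented DataTree option space ({type, level, num_threads}) can be sketched as a plain-Python validator. This helper is illustrative only, not part of the DPF API, and assumes the level and thread ranges stated in the parameter description:

```python
# Illustrative only: checks a settings dict against the documented option
# space: type none/gzip/zstd, gzip level 1-9, zstd level 1-20 with
# num_threads > 0.
def valid_compression_settings(settings: dict) -> bool:
    ctype = settings.get("type", "none")
    if ctype == "none":
        return True
    if ctype == "gzip":
        return 1 <= settings.get("level", 0) <= 9
    if ctype == "zstd":
        return (1 <= settings.get("level", 0) <= 20
                and settings.get("num_threads", 1) > 0)
    return False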
property export_floats#

Allows connecting the export_floats input to the operator.

Converts doubles to floats to reduce file size (default is true).

Parameters:

my_export_floats (bool) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.export_floats.connect(my_export_floats)
>>> # or
>>> op.inputs.export_floats(my_export_floats)
property filename#

Allows connecting the filename input to the operator.

Name of the output file that will be generated (UTF-8).

Parameters:

my_filename (str) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.filename.connect(my_filename)
>>> # or
>>> op.inputs.filename(my_filename)
property mesh_provider_out#

Allows connecting the mesh_provider_out input to the operator.

Defines the MeshedRegion to export, as provided by a mesh provider.

Parameters:

my_mesh_provider_out (MeshedRegion) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.mesh_provider_out.connect(my_mesh_provider_out)
>>> # or
>>> op.inputs.mesh_provider_out(my_mesh_provider_out)
property time_freq_support_out#

Allows connecting the time_freq_support_out input to the operator.

Defines the TimeFreqSupport to export, as provided by a time_freq_support_provider.

Parameters:

my_time_freq_support_out (TimeFreqSupport) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.time_freq_support_out.connect(my_time_freq_support_out)
>>> # or
>>> op.inputs.time_freq_support_out(my_time_freq_support_out)
property ansys_unit_system_id#

Allows connecting the ansys_unit_system_id input to the operator.

Defines the unit system the results are exported with. A ResultInfo can be input to also export the physics type and analysis type.

Parameters:

my_ansys_unit_system_id (int or ResultInfo) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.ansys_unit_system_id.connect(my_ansys_unit_system_id)
>>> # or
>>> op.inputs.ansys_unit_system_id(my_ansys_unit_system_id)
property input_name1#

Allows connecting the input_name1 input to the operator.

Set of even and odd pins to serialize results. Even pins (4, 6, 8…) are strings representing the names of the results to be serialized. Odd pins (5, 7, 9…) are DPF types representing the results to be serialized. They should go in pairs (for each result name, there should be a result) and be connected sequentially.

Parameters:

my_input_name1 (str or Any) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.input_name1.connect(my_input_name1)
>>> # or
>>> op.inputs.input_name1(my_input_name1)
property input_name2#

Allows connecting the input_name2 input to the operator.

Set of even and odd pins to serialize results. Even pins (4, 6, 8…) are strings representing the names of the results to be serialized. Odd pins (5, 7, 9…) are DPF types representing the results to be serialized. They should go in pairs (for each result name, there should be a result) and be connected sequentially.

Parameters:

my_input_name2 (str or Any) –

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> op.inputs.input_name2.connect(my_input_name2)
>>> # or
>>> op.inputs.input_name2(my_input_name2)
connect(inpt)#

Connect any input (an entity or an operator output) to any input pin of this operator. Searches for the input type corresponding to the output.

Parameters:
inpt (str, int, double, bool, list[int], list[float], Field, FieldsContainer, Scoping, ScopingsContainer, MeshedRegion, MeshesContainer, DataSources, CyclicSupport, Outputs, os.PathLike) – Input of the operator.

class ansys.dpf.core.operators.serialization.hdf5dpf_generate_result_file.OutputsHdf5DpfGenerateResultFile(op: ansys.dpf.core.dpf_operator.Operator)#

Intermediate class used to get outputs from hdf5dpf_generate_result_file operator.

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> # Connect inputs : op.inputs. ...
>>> result_data_sources = op.outputs.data_sources()
property data_sources#

Allows getting the data_sources output of the operator.

Returns:

my_data_sources

Return type:

DataSources

Examples

>>> from ansys.dpf import core as dpf
>>> op = dpf.operators.serialization.hdf5dpf_generate_result_file()
>>> # Connect inputs : op.inputs. ...
>>> result_data_sources = op.outputs.data_sources()