Incremental#
- class ansys.dpf.core.incremental.IncrementalHelper(start_op: ansys.dpf.core.dpf_operator.Operator, end_op: ansys.dpf.core.dpf_operator.Operator, scoping: ansys.dpf.core.scoping.Scoping, scoping_pin: int | None = None)#
Provides an API to transform an existing workflow into an incrementally evaluating one.
It works by plugging operators into an incomplete workflow.
Example
>>> from ansys.dpf import core as dpf
>>> from ansys.dpf.core import examples
>>> path = examples.find_msup_transient()
>>> ds = dpf.DataSources(path)
>>> scoping = dpf.time_freq_scoping_factory.scoping_on_all_time_freqs(ds)
>>>
>>> result_op = dpf.operators.result.displacement(data_sources=ds, time_scoping=scoping)
>>> minmax_op = dpf.operators.min_max.min_max_fc_inc(result_op)
>>>
>>> new_op = dpf.split_workflow_in_chunks(result_op, minmax_op, scoping, chunk_size=5)
>>> min_field = new_op.get_output(0, dpf.types.field)
>>> max_field = new_op.get_output(1, dpf.types.field)
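The helper class can also be used directly. A minimal sketch, reusing result_op, minmax_op, and scoping from the example above:
>>> from ansys.dpf.core.incremental import IncrementalHelper
>>> helper = IncrementalHelper(result_op, minmax_op, scoping)
>>> new_op = helper.split(chunk_size=5)  # evaluate 5 scoping IDs per iteration
>>> min_field = new_op.get_output(0, dpf.types.field)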
- estimate_size(max_bytes: int, _dict_inputs: Dict[int, Any] = {})#
Estimates the chunk size from the estimated number of bytes output in one iteration.
The estimation is based on the size of the output for a single ID of the given time_scoping, so the operator is run for only one iteration.
Only Field and FieldsContainer outputs are supported. For other output types, specify the chunk_size argument of the split() method instead.
- Parameters:
max_bytes (int) – Maximum allowed size, in bytes, of the output from the first operator for one iteration.
_dict_inputs (dict[int, Any]) – Dictionary mapping pin numbers to inputs, used to evaluate the output of one iteration.
- Return type:
int
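For example, to size chunks so that one iteration's output stays under one gigabyte, a sketch reusing the helper built in the example above:
>>> chunk_size = helper.estimate_size(max_bytes=1024**3)  # runs the operator for one trial iteration
>>> new_op = helper.split(chunk_size)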
- split(chunk_size: int, end_input_pin: int = 0, rescope: bool = False)#
Integrates the given operators into a new workflow that enables incremental evaluation.
Given a chunk size (a number of IDs from the given scoping), this method returns a new operator to retrieve outputs from and enables incremental evaluation, notably reducing peak memory usage.
- Parameters:
chunk_size (int) – Number of scoping IDs to process per iteration
end_input_pin (int, optional) – Pin number of the output to use from the first operator (default = 0)
rescope (bool, optional) – Rescope all the outputs based on the given scoping (default = False)
- Return type:
ansys.dpf.core.dpf_operator.Operator
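For instance, to also rescope the final outputs to the original scoping, reusing the helper from the example above:
>>> new_op = helper.split(chunk_size=5, rescope=True)
>>> min_field = new_op.get_output(0, dpf.types.field)
>>> max_field = new_op.get_output(1, dpf.types.field)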
- ansys.dpf.core.incremental.split_workflow_in_chunks(start_op: ansys.dpf.core.dpf_operator.Operator, end_op: ansys.dpf.core.dpf_operator.Operator, scoping: ansys.dpf.core.scoping.Scoping, rescope: bool = False, max_bytes: int = 1073741824, dict_inputs: Dict[int, Any] = {}, chunk_size: int | None = None, scoping_pin: int | None = None, end_input_pin: int = 0)#
Transforms a workflow into an incrementally evaluating one.
It wraps the functionality of the IncrementalHelper class, as well as the chunk size estimation, in a single function.
If no chunk_size is specified, the function will attempt to estimate the value by calling IncrementalHelper.estimate_size(max_bytes, dict_inputs).
If no scoping_pin is specified, the function will attempt to deduce the correct pin, which would be the first input pin matching a scoping type.
- Parameters:
start_op (Operator) – Initial operator of the workflow to convert
end_op (Operator) – Last operator of the workflow to convert
scoping (Scoping) – Scoping to split across multiple evaluations
rescope (bool, optional) – If enabled, rescopes the final outputs with the given scoping (default = False)
max_bytes (int, optional) – Maximum allowed size of the output from the first operator, in bytes (default = 1024**3)
dict_inputs (dict[int, any], optional) – Inputs to pass to the first operator, used only for the estimation run (default = {})
chunk_size (int, optional) – Maximum number of scoping elements to process in an iteration (default = None)
scoping_pin (int, optional) – The pin number on the first operator to bind the scoping (default = None)
end_input_pin (int, optional) – Pin number of the output to use from the first operator (default = 0)
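When chunk_size is omitted, the function estimates it from max_bytes before splitting. A sketch reusing result_op, minmax_op, and scoping from the example above:
>>> new_op = dpf.split_workflow_in_chunks(result_op, minmax_op, scoping)  # chunk size estimated automatically
>>> min_field = new_op.get_output(0, dpf.types.field)
>>> max_field = new_op.get_output(1, dpf.types.field)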