Working with DPF server configurations#

This tutorial demonstrates how to work with different DPF server types and configurations to optimize your workflow based on your specific needs.

DPF is based on a client-server architecture where PyDPF-Core acts as the Python client API communicating with a DPF Server. Understanding server types is essential for choosing the right configuration for your use case, whether you need maximum performance on a local machine, secure remote access with mTLS authentication, or distributed computation across a network.

Understanding DPF server types#

There are three main server configurations available in PyDPF-Core:

  • InProcessServer: Direct communication within the same Python process (fastest, default since Ansys 2023 R1). Requires compatible runtime dependencies between Python packages and DPF plugins.

  • GrpcServer: Network communication using gRPC protocol (enables remote and distributed computation). Process isolation prevents dependency conflicts with DPF plugins.

  • LegacyGrpcServer: Legacy gRPC communication for compatibility with Ansys 2022 R1 and earlier versions.

The choice of server type impacts performance, memory usage, dependency management, and distributed computing capabilities.
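
You can inspect the predefined configurations directly through the AvailableServerConfigs class. The short sketch below simply prints each configuration object; it assumes ansys-dpf-core is installed and importable.

# Import the ansys.dpf.core module as ``dpf``
from ansys.dpf import core as dpf

# Print the three predefined server configurations shipped with PyDPF-Core
print(dpf.AvailableServerConfigs.InProcessServer)
print(dpf.AvailableServerConfigs.GrpcServer)
print(dpf.AvailableServerConfigs.LegacyGrpcServer)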

Starting a local InProcess server#

The default and most efficient way to use PyDPF-Core is with an InProcessServer. This configuration runs the DPF server directly within your Python process, eliminating data transfer overhead and providing the fastest performance.

Note

While InProcessServer offers the best performance, it requires that all runtime dependencies are compatible between your Python environment and DPF plugins. If any Python dependency clashes with a DPF plugin dependency, that plugin will not be loaded, resulting in lost capabilities.

GrpcServer does not have this limitation because process isolation ensures dependency isolation between the client and server.

First, import the necessary modules:

# Import the ansys.dpf.core module as ``dpf``
from ansys.dpf import core as dpf

# Import the examples module
from ansys.dpf.core import examples

Start a local server using the default configuration:

# Start a local DPF server with default InProcess configuration
local_server = dpf.start_local_server()

# Display the server object
print(local_server)
DPF Server: {'server_ip': '', 'server_port': None, 'server_process_id': 3880, 'server_version': '12.0', 'os': 'nt', 'path': 'D:\\a\\pydpf-core\\pydpf-core\\dpf-standalone\\v271\\ansys\\dpf\\server_2027_1_pre0'}

The server is now ready to be used for creating DPF objects.
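
Note that start_local_server also registers the new server as the global default (its as_global argument defaults to True), so DPF objects created without an explicit server argument run on it as well. A minimal sketch:

# With the server registered as global, the ``server`` argument can be omitted
default_operator = dpf.operators.result.displacement()
print(default_operator)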

Using the local server#

Once you have started a local InProcessServer, you can pass it to any DPF object constructor to ensure operations run on that specific server.

Create an Operator on the local server:

# Instantiate a displacement Operator on the local server
local_operator = dpf.operators.result.displacement(server=local_server)

# Display the Operator
print(local_operator)
DPF U Operator: 
  Read/compute nodal displacements by calling the readers defined by the datasources. 
  Inputs:
         time_scoping (optional) [scoping, int32, vector<int32>, double, field, vector<double>]: time/freq values (use doubles or field), time/freq set ids (use ints or scoping) or time/freq step ids (use scoping with TimeFreq_steps location) required in output. To specify time/freq values at specific load steps, put a Field (and not a list) in input with a scoping located on "TimeFreq_steps". Linear time freq intrapolation is performed if the values are not in the result files and the data at the max time or freq is taken when time/freqs are higher than available time/freqs in result files. To get all data for all time/freq sets, connect an int with value -1. 
         mesh_scoping (optional) [scopings_container, scoping]: nodes or elements scoping required in output. The output fields will be scoped on these node or element IDs. To figure out the ordering of the fields data, look at their scoping IDs as they might not be ordered as the input scoping was. The scoping's location indicates whether nodes or elements are asked for. Using scopings container allows you to split the result fields container into domains 
         fields_container (optional) [fields_container]: Fields container already allocated modified inplace 
         streams_container (optional) [streams_container]: result file container allowed to be kept open to cache data 
         data_sources [data_sources]: result file path container, used if no streams are set 
         bool_rotate_to_global (optional) [bool]: Rotate the result to the global coordinate system if rotations are available (default true). Please check your results carefully if 'false' is used for Elemental or ElementalNodal results averaged to the Nodes when adjacent elements do not share the same coordinate system, as results may be incorrect. 
         mesh (optional) [abstract_meshed_region, meshes_container]: mesh. If cylic expansion is to be done, mesh of the base sector 
         read_cyclic (optional) [enum dataProcessing::ECyclicReading, int32]: if 0 cyclic symmetry is ignored, if 1 cyclic sector is read, if 2 cyclic expansion is done, if 3 cyclic expansion is done and stages are merged (default is 1) 
         expanded_meshed_region (optional) [abstract_meshed_region, meshes_container]: mesh expanded, use if cyclic expansion is to be done. 
         sectors_to_expand (optional) [vector<int32>, scoping, scopings_container]: sectors to expand (start at 0), for multistage: use scopings container with 'stage' label, use if cyclic expansion is to be done. 
         phi (optional) [double]: angle phi in degrees (default value 0.0), use if cyclic expansion is to be done. 
  Outputs:
         fields_container [fields_container] 

Create a Model on the local server:

# Define the result file path using an example file
result_file = examples.find_simple_bar()

# Instantiate a Model on the local server
local_model = dpf.Model(result_file, server=local_server)

# Display basic information about the Model
print(local_model)
DPF Model
------------------------------
Static analysis
Unit system: MKS: m, kg, N, s, V, A, degC
Physics Type: Mechanical
Available results:
     -  node_orientations: Nodal Node Euler Angles
     -  displacement: Nodal Displacement
     -  element_nodal_forces: ElementalNodal Element nodal Forces
     -  elemental_volume: Elemental Volume
     -  stiffness_matrix_energy: Elemental Energy-stiffness matrix
     -  artificial_hourglass_energy: Elemental Hourglass Energy
     -  kinetic_energy: Elemental Kinetic Energy
     -  co_energy: Elemental co-energy
     -  incremental_energy: Elemental incremental energy
     -  thermal_dissipation_energy: Elemental thermal dissipation energy
     -  element_orientations: ElementalNodal Element Euler Angles
     -  structural_temperature: ElementalNodal Structural temperature
------------------------------
DPF  Meshed Region: 
  3751 nodes 
  3000 elements 
  Unit: m 
  With solid (3D) elements
------------------------------
DPF  Time/Freq Support: 
  Number of sets: 1 
Cumulative     Time (s)       LoadStep       Substep         
1              1.000000       1              1               
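
To check that these objects work together on the local server, you can, for example, feed the model's data sources into the displacement operator created above and evaluate it. A minimal sketch reusing the variables from this section:

# Connect the model's data sources to the displacement operator and evaluate it
local_operator.inputs.data_sources.connect(local_model.metadata.data_sources)
local_displacements = local_operator.outputs.fields_container()

# Display the resulting fields container
print(local_displacements)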

Starting a gRPC server#

For distributed computation or remote access scenarios, use a GrpcServer. This configuration enables network communication using the gRPC protocol, allowing you to connect from different machines or leverage distributed computing capabilities.

Warning

Starting with Ansys 2026 R1 (DPF 2026.1.0) and PyDPF-Core 0.15.0, DPF Server gRPC connections default to using authenticated mTLS (mutual TLS) transport for enhanced security. This change also applies to service packs for Ansys 2025 R2 SP03 and SP04, 2025 R1 SP04, and 2024 R2 SP05.

For remote connections, you must configure mTLS certificates on both client and server machines. See Run DPF Server in Secure mode with mTLS for detailed information on certificate configuration.

Use the AvailableServerConfigs class to specify the server configuration:

# Get the GrpcServer configuration
grpc_server_config = dpf.AvailableServerConfigs.GrpcServer

# Start a local server with gRPC configuration
grpc_server = dpf.start_local_server(config=grpc_server_config)

# Display the server object
print(grpc_server)
DPF Server: {'server_ip': '127.0.0.1', 'server_port': 50054, 'server_process_id': 6876, 'server_version': '12.0', 'os': 'nt', 'path': 'D:\\a\\pydpf-core\\pydpf-core\\dpf-standalone\\v271\\ansys\\dpf\\server_2027_1_pre0'}

Retrieve the server connection information:

# Get the server IP address
server_ip = grpc_server.ip

# Get the server port
server_port = grpc_server.port

# Display connection information
print(f"Server IP: {server_ip}")
print(f"Server Port: {server_port}")
Server IP: 127.0.0.1
Server Port: 50054
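
Besides ip and port, the server object exposes other attributes that are useful when diagnosing connections, such as version and the info dictionary shown in the printed output above. A short sketch:

# Other attributes available on the server object
print(grpc_server.version)  # DPF server version, for example "12.0"
print(grpc_server.info)  # dictionary with IP, port, process ID, version, and OS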

Connecting to a remote gRPC server#

Once a GrpcServer is running, you can connect to it from another machine or process using the connect_to_server function. This enables distributed computation where data processing occurs on a remote server.

Connect to the gRPC server:

# Connect to the remote gRPC server
remote_server = dpf.connect_to_server(ip=server_ip, port=server_port, as_global=False)

# Display the connected server object
print(remote_server)
DPF Server: {'server_ip': '127.0.0.1', 'server_port': 50054, 'server_process_id': 6876, 'server_version': '12.0', 'os': 'nt', 'path': None}

Create DPF objects on the remote server:

# Instantiate an Operator on the remote server
remote_operator = dpf.operators.result.displacement(server=remote_server)

# Display the remote Operator
print(remote_operator)
DPF U Operator: 
  Read/compute nodal displacements by calling the readers defined by the datasources. 
  Inputs:
         time_scoping (optional) [scoping, int32, vector<int32>, double, field, vector<double>]: time/freq values (use doubles or field), time/freq set ids (use ints or scoping) or time/freq step ids (use scoping with TimeFreq_steps location) required in output. To specify time/freq values at specific load steps, put a Field (and not a list) in input with a scoping located on "TimeFreq_steps". Linear time freq intrapolation is performed if the values are not in the result files and the data at the max time or freq is taken when time/freqs are higher than available time/freqs in result files. To get all data for all time/freq sets, connect an int with value -1. 
         mesh_scoping (optional) [scopings_container, scoping]: nodes or elements scoping required in output. The output fields will be scoped on these node or element IDs. To figure out the ordering of the fields data, look at their scoping IDs as they might not be ordered as the input scoping was. The scoping's location indicates whether nodes or elements are asked for. Using scopings container allows you to split the result fields container into domains 
         fields_container (optional) [fields_container]: Fields container already allocated modified inplace 
         streams_container (optional) [streams_container]: result file container allowed to be kept open to cache data 
         data_sources [data_sources]: result file path container, used if no streams are set 
         bool_rotate_to_global (optional) [bool]: Rotate the result to the global coordinate system if rotations are available (default true). Please check your results carefully if 'false' is used for Elemental or ElementalNodal results averaged to the Nodes when adjacent elements do not share the same coordinate system, as results may be incorrect. 
         mesh (optional) [abstract_meshed_region, meshes_container]: mesh. If cylic expansion is to be done, mesh of the base sector 
         read_cyclic (optional) [enum dataProcessing::ECyclicReading, int32]: if 0 cyclic symmetry is ignored, if 1 cyclic sector is read, if 2 cyclic expansion is done, if 3 cyclic expansion is done and stages are merged (default is 1) 
         expanded_meshed_region (optional) [abstract_meshed_region, meshes_container]: mesh expanded, use if cyclic expansion is to be done. 
         sectors_to_expand (optional) [vector<int32>, scoping, scopings_container]: sectors to expand (start at 0), for multistage: use scopings container with 'stage' label, use if cyclic expansion is to be done. 
         phi (optional) [double]: angle phi in degrees (default value 0.0), use if cyclic expansion is to be done. 
  Outputs:
         fields_container [fields_container] 

# Instantiate a Model on the remote server
remote_model = dpf.Model(result_file, server=remote_server)

# Display basic information about the remote Model
print(remote_model)
DPF Model
------------------------------
Static analysis
Unit system: MKS: m, kg, N, s, V, A, degC
Physics Type: Mechanical
Available results:
     -  node_orientations: Nodal Node Euler Angles
     -  displacement: Nodal Displacement
     -  element_nodal_forces: ElementalNodal Element nodal Forces
     -  elemental_volume: Elemental Volume
     -  stiffness_matrix_energy: Elemental Energy-stiffness matrix
     -  artificial_hourglass_energy: Elemental Hourglass Energy
     -  kinetic_energy: Elemental Kinetic Energy
     -  co_energy: Elemental co-energy
     -  incremental_energy: Elemental incremental energy
     -  thermal_dissipation_energy: Elemental thermal dissipation energy
     -  element_orientations: ElementalNodal Element Euler Angles
     -  structural_temperature: ElementalNodal Structural temperature
------------------------------
DPF  Meshed Region: 
  3751 nodes 
  3000 elements 
  Unit: m 
  With solid (3D) elements
------------------------------
DPF  Time/Freq Support: 
  Number of sets: 1 
Cumulative     Time (s)       LoadStep       Substep         
1              1.000000       1              1               

Over the network, gRPC communication enables distributed computation across one or more DPF servers. For examples of distributed workflows, see the gallery of examples.

Configuring mTLS certificates for secure gRPC connections#

When connecting to a remote gRPC server (starting with DPF 2026 R1 and PyDPF-Core 0.15.0), you need to configure mTLS certificates for secure communication. This applies to both the server machine and the client machine.

Set the certificate location using an environment variable#

The location of mTLS certificates is specified using the ANSYS_GRPC_CERTIFICATES environment variable. This must be set on both the server machine and the client machine.

On Windows:

# Set the environment variable on Windows (Command Prompt)
#   set ANSYS_GRPC_CERTIFICATES=C:\path\to\certificates
# or in PowerShell:
#   $env:ANSYS_GRPC_CERTIFICATES = "C:\path\to\certificates"

On Linux:

# Set the environment variable on Linux (shell)
#   export ANSYS_GRPC_CERTIFICATES=/path/to/certificates
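
Alternatively, you can set the variable from the Python client itself through os.environ, provided it is set before the server is started or connected to. This is a sketch under that assumption; the certificate path is a placeholder to adapt to your setup:

# Set the certificate location from Python before starting or connecting to a server
import os

os.environ["ANSYS_GRPC_CERTIFICATES"] = "/path/to/certificates"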

For detailed information on generating mTLS certificates, see the Generating certificates for mTLS documentation.

Comparing server configurations#

You can explicitly choose different server configurations using the AvailableServerConfigs class. This is useful when you need to test performance or compatibility with different server types.

Start servers with different configurations:

# Get InProcessServer configuration
in_process_config = dpf.AvailableServerConfigs.InProcessServer

# Start an InProcess server
in_process_server = dpf.start_local_server(config=in_process_config, as_global=False)

# Display the InProcess server
print(f"InProcess server: {in_process_server}")
InProcess server: DPF Server: {'server_ip': '', 'server_port': None, 'server_process_id': 3880, 'server_version': '12.0', 'os': 'nt', 'path': 'D:\\a\\pydpf-core\\pydpf-core\\dpf-standalone\\v271\\ansys\\dpf\\server_2027_1_pre0'}

# Get GrpcServer configuration
grpc_config = dpf.AvailableServerConfigs.GrpcServer

# Start a gRPC server
grpc_server_2 = dpf.start_local_server(config=grpc_config, as_global=False)

# Display the gRPC server
print(f"gRPC server: {grpc_server_2}")
gRPC server: DPF Server: {'server_ip': '127.0.0.1', 'server_port': 50055, 'server_process_id': 7392, 'server_version': '12.0', 'os': 'nt', 'path': 'D:\\a\\pydpf-core\\pydpf-core\\dpf-standalone\\v271\\ansys\\dpf\\server_2027_1_pre0'}

# Get LegacyGrpcServer configuration (for compatibility with older versions)
legacy_grpc_config = dpf.AvailableServerConfigs.LegacyGrpcServer

# Start a legacy gRPC server
legacy_grpc_server = dpf.start_local_server(config=legacy_grpc_config, as_global=False)

# Display the legacy gRPC server
print(f"Legacy gRPC server: {legacy_grpc_server}")
Legacy gRPC server: DPF Server: {'server_ip': '127.0.0.1', 'server_port': 50056, 'server_process_id': 2592, 'server_version': '12.0', 'os': 'nt'}
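
When you are done comparing configurations, you can release the servers started in this section. A minimal sketch, assuming the shutdown_all_session_servers helper from the ansys.dpf.core.server module:

# Shut down every DPF server started by this Python session
dpf.server.shutdown_all_session_servers()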

Key takeaways#

The choice of DPF server configuration depends on your specific requirements:

  • Use InProcessServer for local computations requiring maximum performance and minimal memory overhead (default since Ansys 2023 R1)

    • Provides the fastest performance by eliminating data transfer between client and server

    • Limitation: Requires compatible runtime dependencies between Python packages and DPF plugins. Incompatibilities between dependencies can prevent plugins from loading

    • Best suited for environments with controlled dependencies and standard DPF plugins

  • Use GrpcServer when you need distributed computation, remote access, or when running DPF on a different machine (available since Ansys 2022 R2)

    • Process isolation ensures dependency isolation, avoiding clashes between Python environment and plugins

    • Starting with DPF 2026 R1, gRPC connections use mTLS authentication by default for enhanced security

    • Configure ANSYS_GRPC_CERTIFICATES environment variable on both client and server for mTLS

    • For more information, see Run DPF Server in Secure mode with mTLS

  • Use LegacyGrpcServer only for compatibility with Ansys 2022 R1 and earlier versions

All configurations use the same start_local_server function with different ServerConfig parameters, making it easy to switch between server types as your needs change.
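
If none of the predefined configurations fits your needs, you can also build a ServerConfig yourself. The sketch below assumes the CommunicationProtocols enumeration from the ansys.dpf.core.server_factory module and reproduces the gRPC configuration explicitly:

# Import the server_factory module for the CommunicationProtocols enumeration
from ansys.dpf.core import server_factory

# Build a configuration equivalent to AvailableServerConfigs.GrpcServer
custom_config = dpf.ServerConfig(
    protocol=server_factory.CommunicationProtocols.gRPC, legacy=False
)

# Start a server with the custom configuration
custom_server = dpf.start_local_server(config=custom_config, as_global=False)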