Python calculator


Description

The Python Calculator module allows you to apply a custom Python script to each input seismic gather. The script receives the gather data, performs any computation you define, and returns the modified gather as output. This gives you full flexibility to implement trace-by-trace or gather-level algorithms, custom filters, amplitude manipulations, header editing, or any other seismic processing operation that can be expressed in Python.

The module also offers an optional integration with the ChatGPT AI assistant, which can generate the Python script for you based on a natural-language description of the processing task you want to perform. Once the script is generated, you can review it, run it, and save it for future use. This makes it accessible even for users who are not experienced Python programmers.

Input data

Input gather

The seismic gather to be processed. The module iterates over each gather in the dataset and passes it to the Python script for processing. The gather includes all trace data and associated trace headers, which are accessible from within the script.

Parameters

User script

The path to the Python script file (.py) that will be executed on each gather. The script must implement the expected interface that accepts the gather data and returns the processed gather. You can write this script manually in any text editor or Python IDE, or you can generate it automatically using the ChatGPT integration via the Modify gather with ChatGPT custom action. The script is re-initialized whenever this path or the Python executable path changes.
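As a hedged illustration of what such a script might look like, the sketch below applies a trace-by-trace RMS normalization to a gather. The exact interface is defined by the host application; the function name `process_gather`, the argument layout (traces as a 2-D NumPy array, headers as a dict of arrays), and the return convention used here are illustrative assumptions only, not the module's documented API:

```python
# Hypothetical user-script sketch. The function name, argument layout,
# and return convention are assumptions for illustration; consult the
# module's actual script interface before use.
import numpy as np

def process_gather(traces, headers):
    """Normalize each trace of a gather to unit RMS amplitude.

    traces  -- 2-D float array, shape (n_traces, n_samples)  [assumed layout]
    headers -- dict mapping header names to per-trace arrays  [assumed layout]
    """
    # RMS amplitude per trace, kept as a column for broadcasting
    rms = np.sqrt(np.mean(traces ** 2, axis=1, keepdims=True))
    rms[rms == 0] = 1.0  # leave dead (all-zero) traces unchanged
    return traces / rms, headers

if __name__ == "__main__":
    demo = np.random.default_rng(0).normal(size=(4, 100))
    out, _ = process_gather(demo, {"offset": np.arange(4)})
    print(out.shape)
```

Keeping the computation vectorized with NumPy, as above, is usually much faster than looping over samples in pure Python.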

Python executable

The name or full path of the Python interpreter to use when running the script. The default value is pythonw.exe, which runs Python without a console window on Windows. If you have multiple Python environments installed (for example, Anaconda or a virtual environment), enter the full path to the desired Python interpreter (e.g., C:\Anaconda3\envs\myenv\pythonw.exe). Ensure that all Python packages required by your script are installed in the selected environment. Changing this field triggers reinitialization of the Python engine.

ChatGPT

This group contains settings for the optional ChatGPT AI assistant integration. When configured, you can use the Modify gather with ChatGPT custom action to open an interactive dialogue with ChatGPT, describe the processing operation you want in plain language, and receive a generated Python script that is automatically saved and linked to this module. An OpenAI API key is required to use this feature.

API key

Your personal OpenAI API key, used to authenticate requests to the ChatGPT service. You can obtain an API key by creating an account at platform.openai.com and generating a key in your account settings. This field is required to use the Modify gather with ChatGPT action. Keep your API key confidential and do not share your project files containing it with others.

Preprompting text file

An optional path to a plain text file (.txt) containing instructions or context that is sent to ChatGPT before your actual question. This preprompt can describe the data format, conventions used in your project, the expected Python script interface, or domain knowledge about seismic processing that helps the AI generate more accurate and relevant scripts. For example, you might include a description of the gather data structure or the available trace header fields. Leave this field empty if no additional context is needed.
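As a sketch, a preprompt file might read like the following; the interface details it mentions are placeholders that you would replace with the actual conventions of your own project and script interface:

```
You are generating Python scripts for a seismic processing module.
Each script receives one gather at a time: a 2-D array of trace
samples together with the associated trace headers. Return the
modified gather in the same format. Use only NumPy unless asked
otherwise, and keep the script self-contained in a single file.
```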

Max response timeout, sec

The maximum time in seconds to wait for a response from the ChatGPT service before the request times out. The default value is 120 seconds; the valid range is 0 to 1200 seconds. On slow network connections, or when asking complex questions that require longer AI processing time, increase this value to avoid premature timeouts. A value of 0 imposes no timeout limit.

Auto-run procedure on bot response

When enabled, the module automatically executes the generated Python script immediately after ChatGPT delivers its response, without requiring you to manually trigger a run. This is useful for rapid iterative development: you can ask ChatGPT to refine the script and immediately see the results on the seismic data. By default this option is disabled, allowing you to review the generated script before running it.

Settings

Execute on { CPU, GPU }

Selects whether processing is performed on the CPU or GPU. For Python scripts, execution always takes place via the Python interpreter on the CPU. GPU acceleration applies only if the script itself uses GPU-capable libraries such as CuPy or PyTorch.
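A common pattern for such a script is to use a GPU array library when one is available and fall back to NumPy otherwise. The sketch below shows this with CuPy; the function name and gather layout are illustrative assumptions, and `cupy` is only used if it happens to be installed:

```python
# Hedged sketch: optional GPU acceleration via CuPy with a NumPy fallback.
# The function name and the 2-D (n_traces, n_samples) layout are assumptions.
try:
    import cupy as xp   # GPU arrays, if CuPy is installed in the environment
except ImportError:
    import numpy as xp  # transparent CPU fallback with the same array API

def lowpass_gather(traces):
    """Crude low-pass: zero the upper half of each trace's spectrum."""
    spec = xp.fft.rfft(traces, axis=1)
    spec[:, spec.shape[1] // 2:] = 0          # discard high frequencies
    return xp.fft.irfft(spec, n=traces.shape[1], axis=1)
```

Because CuPy mirrors the NumPy API, the same script body runs unchanged on either backend; only the import determines where the arrays live.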

Distributed execution

Controls whether the processing job is distributed across multiple compute nodes in a cluster environment. When enabled, gathers are dispatched to remote worker nodes for parallel execution. The Python executable and the user script must be accessible on all worker nodes at the same path.

Bulk size

The minimum number of gathers to bundle into a single processing chunk when distributing work across nodes or threads. Larger values reduce scheduling overhead but may increase memory usage per chunk. Adjust this setting based on gather size and available memory on worker nodes.

Limit number of threads on nodes

When distributed execution is active, this setting restricts the number of parallel threads used on each remote compute node. Use this to prevent the Python calculator from monopolizing all CPU cores on a shared cluster node, leaving resources available for other running jobs.

Job suffix

An optional text label appended to the distributed job name. This helps distinguish multiple simultaneous Python calculator jobs running on the same cluster. Use a descriptive suffix (e.g., denoise_run1) so that job status and logs are easily identifiable in the cluster monitoring interface.

Set custom affinity

Enables manual specification of CPU core affinity for the processing threads. When enabled, the Affinity field becomes active. This is an advanced option for performance tuning on NUMA (Non-Uniform Memory Access) systems. Leave disabled for standard workstation usage.

Affinity

Specifies the set of CPU cores to which processing threads are pinned when Set custom affinity is enabled. Pinning threads to specific cores can improve cache utilization and reduce inter-core memory transfers on large multi-socket servers. Only relevant in advanced high-performance computing scenarios.

Number of threads

The number of parallel threads used to process gathers concurrently on the local machine. Each thread runs an independent instance of the Python interpreter, allowing multiple gathers to be processed simultaneously. Increase this value to make full use of multi-core workstations. Note that each thread launches a separate Python process, so memory consumption scales with thread count. Setting this too high on a machine with limited RAM may degrade performance.

Skip

When enabled, this module is bypassed and the input gather is passed through to the output unchanged. Use this option to temporarily disable the Python script without removing the module from the processing flow, for example when comparing results with and without the custom script applied.

Output data

Output gather

The processed seismic gather returned by the Python script. The output gather replaces the input gather in the processing flow and is passed downstream to the next module. The trace data and headers in the output gather are exactly as returned by your Python script, so the script has full control over the content of the output.

Information

Graphics

Custom actions

Modify gather with ChatGPT

Opens an interactive chat window connected to the ChatGPT AI service. You can describe the seismic processing operation you want to perform in plain language, and ChatGPT will generate a Python script that implements it. The generated script is automatically saved to a file and linked to the User script parameter. You can continue the dialogue to refine the script, request bug fixes, or ask for explanations. A valid API key and a configured Python executable are required before using this action. Optionally, the Preprompting text file can be used to provide the AI with domain context before the conversation begins, improving the relevance of the generated code.