Rebuild SP using FFID


Description

The Rebuild SP using FFID module corrects shot point (SP) numbering in seismic datasets where multiple distinct source activations (shots) have been assigned the same SP number. This situation commonly arises when vibroseis records or repeated shots at a single station are all tagged with one SP value in the field, even though each physical activation has a unique Field File ID (FFID). The module uses the FFID header value to disambiguate these cases and assigns a numerically distinct SP value to each separate shot activation within a given source line and station.

The corrected data is written to a new output file in .gsd format. For each source location (identified by survey ID, source line, and original SP), the module scans all FFIDs encountered in the data. The first FFID encountered retains the original SP number; each subsequent unique FFID receives a new SP value offset by 0.1 per additional shot (e.g., original SP 1000 becomes 1000.0, 1000.1, 1000.2, etc.). This ensures that geometry and trace attributes downstream correctly reflect the true number of distinct source activations.
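The renumbering rule above can be sketched in Python. This is a minimal illustration, not the module's actual implementation; trace headers are modeled as plain dicts with hypothetical keys `survey_id`, `source_line`, `sp`, and `ffid`:

```python
def rebuild_sp(traces):
    """Assign distinct SP values to distinct FFIDs at each source location.

    Each trace is a dict with hypothetical header keys 'survey_id',
    'source_line', 'sp', and 'ffid'. The first FFID seen at a location
    keeps the original SP; each subsequent unique FFID is offset by an
    additional 0.1 (e.g. SP 1000 -> 1000.0, 1000.1, 1000.2, ...).
    """
    # First pass: record the order in which unique FFIDs appear
    # at each source location (survey, source line, original SP).
    ffids_at_location = {}
    for tr in traces:
        key = (tr['survey_id'], tr['source_line'], tr['sp'])
        seen = ffids_at_location.setdefault(key, [])
        if tr['ffid'] not in seen:
            seen.append(tr['ffid'])

    # Second pass: rewrite SP using the FFID's position in that order.
    corrected = []
    for tr in traces:
        key = (tr['survey_id'], tr['source_line'], tr['sp'])
        idx = ffids_at_location[key].index(tr['ffid'])
        new_tr = dict(tr)
        new_tr['sp'] = round(tr['sp'] + 0.1 * idx, 1)
        corrected.append(new_tr)
    return corrected
```

Note that traces sharing an FFID map to the same corrected SP, so all traces of one shot record stay grouped under one value.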

Input data

Input traces data handle

Connect the seismic data file whose SP numbers need to be corrected. The input must be an open SEG-Y or .gsd data handle. The module reads all traces sequentially from this source to build the mapping between FFIDs and corrected SP values. The input file is not modified.

Parameters

Output file name

Specify the full file path and name for the output dataset. The output is written in .gsd format. This file will contain a complete copy of the input traces with updated SP header values reflecting the FFID-based correction. Make sure the target directory exists and is writable before running the module.

Bulk size

Controls the number of traces read and processed in each memory block. The default value of 1,000,000 traces is suitable for most datasets. Reduce this value if you encounter memory limitations on large surveys. Increasing the bulk size can improve throughput when processing very large files on machines with ample RAM.
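The block-wise reading that Bulk size controls can be sketched as follows. This is a hypothetical helper for illustration only; it shows how a fixed block size bounds the number of traces held in memory at once:

```python
def iter_bulks(total_traces, bulk_size=1_000_000):
    """Yield (start, count) trace ranges for block-wise processing.

    Illustrative sketch: a file of total_traces traces is consumed in
    blocks of at most bulk_size traces, so peak memory use scales with
    bulk_size rather than with the size of the whole dataset.
    """
    for start in range(0, total_traces, bulk_size):
        yield start, min(bulk_size, total_traces - start)
```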

Rewrite file

When set to true, the module will overwrite an existing output file at the specified path. When set to false (default), the module will stop with an error if a file already exists at the output path. Enable this option when re-running the module after adjusting parameters to avoid having to manually delete the previous result file.

Clone SEG-Y file

This parameter is reserved for future use and is not currently active in the processing workflow.

Settings

Execute on { CPU, GPU }

Selects the hardware used to run the processing: choose CPU for standard execution on the host processor, or GPU to offload the computation to a supported graphics device.

Distributed execution

Configures the module to run across multiple nodes in a distributed computing environment. When enabled, the workload is split across available processing nodes to reduce total execution time for large datasets.

Bulk size

Sets the minimum chunk size (in number of traces) assigned to each distributed processing node. Larger values reduce overhead from task scheduling; smaller values improve load balancing when node performance varies.
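The scheduling trade-off can be illustrated with a small sketch (a hypothetical partitioning scheme, not the module's actual scheduler), where the per-node chunk is the even split of the workload but never smaller than the configured minimum:

```python
import math

def node_chunks(total_traces, n_nodes, min_chunk):
    """Partition traces into per-node chunks of at least min_chunk traces.

    Illustrative sketch: a larger min_chunk yields fewer, bigger tasks
    (less scheduling overhead); a smaller min_chunk yields more tasks,
    which balance better across nodes of unequal speed.
    """
    chunk = max(min_chunk, math.ceil(total_traces / n_nodes))
    return [(start, min(chunk, total_traces - start))
            for start in range(0, total_traces, chunk)]
```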

Limit number of threads on nodes

When enabled in distributed mode, caps the number of CPU threads used per processing node. This is useful when nodes are shared with other jobs and full CPU utilization would interfere with concurrent workloads.

Job suffix

An optional text label appended to distributed job names to help identify this run in a cluster job scheduler or processing log. Useful when running multiple instances of the module simultaneously on the same cluster.

Set custom affinity

When enabled, allows you to manually specify CPU core assignments for this process using the Affinity parameter below. Disable this option to let the operating system allocate CPU resources automatically.

Affinity

Specifies the CPU core affinity mask for this module when Set custom affinity is enabled. This setting is intended for advanced users managing multi-process workloads on dedicated processing servers.
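As a reading aid for affinity masks in general (this decoder is illustrative, not part of the module), each bit of the mask enables one CPU core, so a mask of 0b0101 pins the process to cores 0 and 2:

```python
def cores_from_mask(mask):
    """Return the CPU core indices selected by an affinity bitmask.

    Bit i of the mask, when set, allows the process to run on core i.
    """
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]
```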

Number of threads

Sets the number of CPU threads used during execution. Increasing the thread count can speed up I/O and processing on multi-core systems. Set to match the number of available physical cores for best performance, or reduce if other processes require CPU resources concurrently.

Skip

When enabled, this module is bypassed entirely during workflow execution. Input data is passed through unchanged to the next step. Use this option to temporarily disable the SP rebuild step without removing it from the processing sequence.

Output data

Output traces data handle

The handle to the corrected output dataset written to the file specified in the Output file name parameter. This handle can be connected to subsequent modules in the processing flow for further geometry operations or quality control steps. All trace amplitudes and headers are preserved from the input; only the SP header values that required FFID-based correction are modified.
