Navigation: Math > Iterative deblending
Removing incoherent noise from repetitive shots iteratively
What Is Deblending?
Deblending is the process of separating overlapping seismic records that were acquired using simultaneous sources. When two or more sources fire at nearly the same time, their seismic wavefields overlap (“blend”). Deblending removes this interference and recovers individual shot gathers.
Why Do We Perform Simultaneous Source Acquisition?
Simultaneous source acquisition (also called blended acquisition) is used to:
Increase acquisition speed
•Multiple sources fire without waiting for each other, so surveys run much faster.
Reduce operational cost
•Less idle time
•Less fuel and vessel time (marine)
•Higher crew productivity (land)
Increase fold per day
•More shots recorded per unit time.
Improve sampling
•Denser shot coverage at lower cost.
Enable exploration in time-restricted zones
•Weather windows
•Fishing zones
•Military restricted periods
Why Is Deblending Necessary?
Because simultaneous firing creates overlapping wavefields, which causes:
•Cross-talk noise
•Interference
•Incorrect amplitudes
•Difficulty in picking first breaks
•Poor velocity analysis
•Poor imaging
To use blended data in normal seismic processing, we must separate each source’s contribution.
What Is Iterative Deblending and How Does It Work?
Iterative deblending is the most widely used technique.
Deblending relies on two facts:
1.Signal is coherent: events follow moveout and look like reflections.
2.Interference noise is incoherent: it appears random because the sources fire with varied time dithers.
So we use an iterative loop:
Iterative Deblending Workflow
Step 1 — Initial Estimate
Start by assigning blended data roughly to each source (simple split or a rough filter).
Step 2 — Apply a Coherency Constraint
Reflection energy is coherent in:
•f–k domain
•Radon domain
•Curvelet domain
Noise is not.
We keep the coherent energy and discard the incoherent noise.
Step 3 — Subtract Reconstructed Shot from Blended Data
This removes interference progressively.
Step 4 — Repeat (Iterate)
Each iteration:
•Improves signal
•Reduces cross-talk
•Enhances reconstruction
After ~30–40 iterations, sources are well separated.
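The four steps above can be sketched as a short Python loop. This is a minimal illustration under simplifying assumptions, not the module's actual implementation: `blend_op` and `adjoint_op` stand in for the forward blending operator (built from the shot times) and its adjoint, and the FK soft-threshold here acts on the whole gather rather than on overlapping patches.

```python
import numpy as np

def fk_threshold(gather, frac=0.05):
    """Coherency constraint: soft-threshold the FK spectrum so that only
    strong (coherent) energy survives."""
    spec = np.fft.fft2(gather)
    mag = np.abs(spec)
    thr = frac * mag.max()
    # Shrink magnitudes toward zero by thr while keeping the phase.
    shrunk = np.where(mag > thr, (mag - thr) * np.exp(1j * np.angle(spec)), 0)
    return np.fft.ifft2(shrunk).real

def deblend(blended, blend_op, adjoint_op, n_iter=50):
    """Iterative deblending: estimate -> constrain -> subtract -> repeat."""
    est = adjoint_op(blended)               # Step 1: rough initial estimate
    for _ in range(n_iter):
        est = fk_threshold(est)             # Step 2: coherency constraint
        residual = blended - blend_op(est)  # Step 3: subtract reconstruction
        est = est + adjoint_op(residual)    # Step 4: update and iterate
    return est
```

In practice the update is scaled by a step size derived from the blending operator's largest eigenvalue, and the threshold decays with iteration, as described in the parameter descriptions later on this page.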

Shot Times in Simultaneous Source Acquisition
In simultaneous acquisition, multiple sources fire without waiting for previous shots. But to make deblending possible, the shot times are randomized (a technique called dithering).
Types of dithering:
1.Random time dithering (most common)
2.Linear dithering
3.Variable time delays
4.Orchestrated firing patterns
These shot times create incoherent interference, which is much easier to separate from coherent reflections.
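Random time dithering is easy to picture with a few lines of Python. The nominal shot interval and dither range below are assumed example values, not recommended settings:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_shots = 5
nominal_interval = 10.0   # seconds between nominal shots (assumed value)
max_dither = 0.5          # +/- 500 ms random dither (assumed value)

nominal_times = np.arange(n_shots) * nominal_interval
dither = rng.uniform(-max_dither, max_dither, size=n_shots)
shot_times = nominal_times + dither   # actual (randomized) firing times
```

Because the dithers differ from shot to shot, interference from a neighboring source lands at a different relative time in every gather, which is exactly what makes it look incoherent.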
How Shot Times Are Created / Recorded
Shot times originate from the acquisition system and GPS clock.
Step 1: Synchronization
All sources and recording systems are synchronized to:
•GPS
•Rubidium clocks
•Precision timing units
This ensures microsecond accuracy.
Step 2: Shot firing command
The acquisition controller sends a firing signal to the source:
•Airgun controller (marine)
•Vibroseis controller (land)
•Explosive detonation unit (legacy)
Step 3: Exact firing time stamp recorded
The shot time is written into:
•Shot header files
•Observer logs
•SPS source files
•Navigation files
•SEG-D / SEG-B / SEG-Y headers
Step 4: Stored per-trace
Each trace stores:
•Source time
•Time since shot start
•Source sequence number
Role of Shot Times in Deblending
Deblending relies on the fact that:
Different sources fire at different (random) times → so their interference appears incoherent.
Using the recorded shot times, the algorithm builds the blending operator (the blended source matrix) that describes how the individual shots were mixed into the recorded data.
Deblending works by inverting this mixing.
Shot times define:
•How much overlap occurs
•How interference patterns appear
•How coherent energy is separated
Without shot times:
•Deblending becomes blind
•Shots become nearly impossible to separate correctly
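The mixing that deblending reverses can be sketched as summing each shot record into one continuous record at its firing time. This is a simplified single-channel illustration, not the module's internal code:

```python
import numpy as np

def blend(shots, firing_samples, n_samples):
    """Forward blending: add each shot record into a continuous record
    starting at its firing time (given in samples)."""
    record = np.zeros(n_samples)
    for gather, t0 in zip(shots, firing_samples):
        record[t0:t0 + len(gather)] += gather
    return record
```

With the firing times known, the adjoint of this operator (cutting windows back out of the continuous record) gives the pseudo-deblended starting point for the iteration; without them, neither direction can be formed.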
Shot Times in Marine vs Land
Marine (airguns)
•Very accurate (millisecond to microsecond)
•Stored in navigation files
•Used for source signature corrections
•Used for deblending
Land vibroseis
•Vibroseis sweep has start times
•Phase and phase-locking depend on accurate timing
•Used for correlation
•Used for deblending in simultaneous source vibroseis
Types of Shot Times Files in Acquisition
The shot times appear in:
•SPS files (UKOOA/SEG-P1)
•RPS (receiver) & SPS (source) files
•Marine navigation logs
•SEG-D headers
•Observer logs
•Field notes for explosive shooting
Each system logs:
•Shot number
•Shot timestamp
•Source ID
•Vessel position
•Delay time / dither
Each shot time file is a delimited text file (TXT, CSV, or DAT) that records the absolute firing time and source identity for every shot in the blended acquisition. The module reads one or more such files and builds an internal lookup table that maps each source (identified by source line and shot point number) to its precise firing time. This timing information is essential for computing the relative time shifts between simultaneously fired sources, which drive the blending and deblending operators. You must load at least one valid shot time file before the module can run.
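Such a lookup table might be built as follows. This is a sketch only: the column names `SLINE`, `SP`, and `TIME` are hypothetical placeholders, since the module lets you select the actual columns and separator explicitly:

```python
import csv
import io

def read_shot_times(text, sep=","):
    """Map (source line, shot point) -> absolute firing time.
    Column names below are hypothetical examples."""
    table = {}
    for row in csv.DictReader(io.StringIO(text), delimiter=sep):
        table[(row["SLINE"], row["SP"])] = float(row["TIME"])
    return table

sample = "SLINE,SP,TIME\n101,1,0.000\n101,2,10.437\n"
lookup = read_shot_times(sample)
```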

The FK thresholding is performed on overlapping patches of data. The trace window defines the lateral size of each patch in number of traces. The default is 25 traces. Larger values capture longer-wavelength coherent events more accurately, but increase computation time. For gathers with dense trace spacing, 20 to 30 traces is typically a good starting range.
This is the vertical (temporal) patch size used in the FK thresholding. It is specified in seconds. The default is 0.25 s (250 ms). Choose a time window large enough to include several reflection cycles at the dominant frequency of your data. If the window is too short, the frequency resolution in the FK domain will be poor and signal may be incorrectly suppressed.
This is a fractional value between 0 and 1. The module computes the maximum spectral amplitude across all FK patches, then sets the soft-threshold equal to that maximum multiplied by this percentage. The default is 0.05 (5%). A lower value (closer to 0) removes less noise but risks keeping interference; a higher value removes more but may damage weak reflections. Start with the default and adjust if residual noise is visible in the output gather.
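The threshold computation described above amounts to the following (a sketch assuming 2-D patches and NumPy's FFT):

```python
import numpy as np

def global_threshold(patches, percentage=0.05):
    """Soft-threshold level: global maximum FK amplitude over all
    patches, scaled by the user-specified percentage."""
    global_max = max(np.abs(np.fft.fft2(p)).max() for p in patches)
    return percentage * global_max
```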
The threshold level applied in each iteration is scaled by a decay function: (exp(exponent * iteration) + asymptote) / (1 + asymptote). The exponent should be a negative number. The default is -0.05. A more negative value (e.g., -0.1) produces a faster decay, meaning the threshold drops more steeply in the first few iterations and then stabilizes. A value closer to 0 gives a slower, more gradual decay. If deblending converges too quickly and leaves residual noise, try a slower decay (less negative exponent).
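Written out, the decay schedule is (a sketch of the stated formula, not the module's code):

```python
import numpy as np

def threshold_decay(iteration, exponent=-0.05, asymptote=0.2):
    """Per-iteration scale factor for the threshold; equals 1 at
    iteration 0 and decays toward asymptote / (1 + asymptote)."""
    return (np.exp(exponent * iteration) + asymptote) / (1 + asymptote)
```

With the defaults, the factor starts at 1.0 and levels off near 0.2 / 1.2, so late iterations still apply a gentle threshold.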
This sets the floor of the decay function, preventing the threshold from reaching zero at late iterations. The default is 0.2. A higher asymptote means the filter never fully relaxes and continues suppressing weak noise at the end of all iterations. A lower value allows the threshold to become very small, which may be appropriate if you need to recover very weak reflections after most of the interference has been removed.
The default is 50 iterations. Each iteration refines the estimate of the deblended gathers by applying the blending forward operator, computing the residual, and suppressing incoherent noise in the result. More iterations generally produce cleaner separation but add proportionally to processing time. For a quick quality-check run, 10 to 20 iterations may be sufficient; for final production, 50 to 100 iterations are typical. If the solver reaches the specified tolerance before completing all iterations, it stops early automatically.
The solver checks convergence at the end of each iteration by comparing the maximum absolute change in the output gather to the maximum amplitude multiplied by this tolerance. The default is 1e-6. When the change falls below this threshold, the solver stops early even if the specified number of iterations has not been reached. In practice, the iteration count usually governs runtime unless a very tight tolerance is set. The default value of 1e-6 is appropriate for most datasets.
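The stopping rule can be expressed in a single comparison (a sketch):

```python
import numpy as np

def converged(prev, curr, tol=1e-6):
    """Stop when the largest per-sample change falls below
    tol times the largest amplitude in the current estimate."""
    return np.max(np.abs(curr - prev)) < tol * np.max(np.abs(curr))
```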
Before the main iterative loop begins, the module estimates the optimal step size (learning rate) for the gradient update by computing the largest eigenvalue of the blending operator. This epsilon controls the convergence criterion for that eigenvalue power iteration. The default is 1e-6. This is an advanced parameter: the default value is appropriate for nearly all datasets, and there is rarely a reason to change it.
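The step-size estimation uses power iteration, which in general form looks like this (an illustration of the technique, not the module's implementation; `op` stands for the blending operator applied as a function):

```python
import numpy as np

def largest_eigenvalue(op, n, eps=1e-6, max_iter=1000, seed=0):
    """Power iteration: estimate the largest eigenvalue of the linear
    operator `op`, given as a function on length-n vectors."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    lam = 0.0
    for _ in range(max_iter):
        w = op(v)
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < eps * lam_new:  # relative change below eps
            break
        lam = lam_new
    return lam_new
```

In gradient schemes the safe step size is typically proportional to the inverse of this eigenvalue, so the epsilon only controls how precisely that bound is known; the default is more than sufficient.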
These parameters tell the module how to parse the shot time file. Before the column selectors (Source line column, Source SP column, Shot time column) become available, you must first load at least one shot time file in the Shot times files input above. The module will then read the header row of that file and populate the column dropdown lists automatically. Make sure that the separator and row numbers match the actual format of your file before executing.

No additional information is available for this module, so the user can ignore this section.
In this example workflow, we read a synthetic seismic gather containing blended shots, together with the corresponding shot time files.
There are no action items available for this module.
YouTube video lesson: [VIDEO IN PROCESS...]
Yilmaz, O., 1987, Seismic Data Processing: Society of Exploration Geophysicists.
* * * If you have any questions, please send an e-mail to: support@geomage.com * * *