Search Results for
Operator antialiasing


1-20 of 76 Search Results for

#### Operator antialiasing


Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2004 SEG Annual Meeting, October 10–15, 2004

Paper Number: SEG-2004-1135

... ABSTRACT

**Antialiasing** filters in Kirchhoff migration are designed to eliminate migration artifacts caused by **operator** aliasing. Unfortunately, they also reduce the amplitude of dipping events. This paper discusses the **antialiasing** effect on amplitudes and shows how to preserve the dipping...
Abstract

ABSTRACT Antialiasing filters in Kirchhoff migration are designed to eliminate migration artifacts caused by operator aliasing. Unfortunately, they also reduce the amplitude of dipping events. This paper discusses the antialiasing effect on amplitudes and shows how to preserve the dipping energy beyond aliasing frequencies.
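The amplitude loss described above follows from the per-sample frequency limit imposed along the operator. As a sketch of the commonly used criterion (not this paper's specific amplitude-preserving method), assume a zero-offset diffraction hyperbola T(x) = sqrt(t0^2 + (2x/v)^2) and the usual limit f_max = 1/(2 Δx |dT/dx|); the function name and parameters here are illustrative:

```python
import numpy as np

def max_unaliased_freq(x, t0, v, dx):
    """Highest frequency (Hz) that can be summed along a zero-offset
    diffraction hyperbola T(x) = sqrt(t0**2 + (2*x/v)**2) without
    operator aliasing, using the common criterion
    f_max = 1 / (2 * dx * |dT/dx|)."""
    t = np.sqrt(t0**2 + (2.0 * x / v) ** 2)
    dtdx = 4.0 * x / (v**2 * t)              # analytic operator dip dT/dx
    # Small epsilon keeps the flat apex (zero dip) from dividing by zero.
    return 1.0 / (2.0 * dx * np.abs(dtdx) + 1e-12)
```

Because the dip grows with offset from the apex, the limit tightens on the steep flanks, which is exactly where dipping-event energy is attenuated.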

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2000 SEG Annual Meeting, August 6–11, 2000

Paper Number: SEG-2000-0981

... ways to implement a parallel Kirchhoff prestack time migration. Three basic strategies are discussed, and their parallel scalability analyzed. The impact of

**operator** **antialiasing** on parallel performance is considered for two common methods of **antialiasing** protection. A timing model of the parallel...
Abstract

Summary Kirchhoff 3D prestack migration is a seismic data processing application which produces a subsurface image of the earth by integrating large amounts of recorded seismic data. It is an industry staple (particularly the "time migration" variant), and is very computationally intensive, requiring the use of supercomputers to produce results in a timeframe of weeks or months. One type of supercomputer is based on a clustering of many smaller computers, each with its own memory and some sort of network connection to the others.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1999 SEG Annual Meeting, October 31–November 5, 1999

Paper Number: SEG-1999-1496

... direction when forming 2-D common-offset sections by binning. SEG 1999 Expanded Abstracts: An alternative to cross-spreads for imaging wide-azimuth 3-D surveys. Deriving **operator** **antialiasing** conditions by converting integrals to summations: Equation (4) indicates how to accurately approximate two...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2386

... limits with just three multiply-add

**operations**. One pays a price in bandwidth, however, as the highest input frequency it passes is only half of temporal Nyquist in the event **antialiasing** is required. Linear interpolation, as used in Claerbout's original paper, lifts that restriction, but at a cost...
Abstract

Summary One quite well known method for antialiasing Kirchhoff migration is the compact and efficient triangle filter method first proposed by Claerbout (1992). In its most efficient Kirchhoff application (Lumley et al., 1994), it requires no extra memory and implements antialias frequency limits with just three multiply-add operations. One pays a price in bandwidth, however, as the highest input frequency it passes is only half of temporal Nyquist in the event antialiasing is required. Linear interpolation, as used in Claerbout's original paper, lifts that restriction, but at a cost in computation and some additional loss of fidelity. Supersampling can replace linear interpolation, but forces a tenfold memory increase to retain frequencies up to 90% Nyquist. In this abstract, I provide a memory efficient way to modify the original method to permit frequency limits all the way up to Nyquist, while retaining three multiply-add computational efficiency. Introduction Antialiasing, that is, selective low-pass filtering, is used in Kirchhoff migration to suppress energy leaking into the migrated image from coherent events cross-cutting the summation trajectory (see Fig. 1). Given a full description of the geometry of the input traces and the parameters that define the Kirchhoff summation trajectories, Mazzucchelli and Rocca (2001) provide a way of determining quite precisely where and how much antialiasing is required. Wang (2000, 2004) leverages migration wavelet stretch and dip estimates in an image space dealiasing method. Whatever method is selected to determine antialiasing requirements, the result is an upper frequency limit that will generally be different for every pair of input and output samples in the migration. To reduce the computational cost of this extra overhead, a number of approaches can be taken, either individually or in combination:

- Make multiple copies of each input trace, each filtered to a different maximum frequency, and sum samples from these copies according to the local antialias limits.
- Coarsen the calculation of antialias limits to, say, every 5th time sample and every 10th trace, so that the overhead of computing the limits (as distinguished from applying the limits) is small.
- Coarsen the application of antialias limits with small windows, say 4 to 8 samples long, of block extraction of samples from a selected frequency-limited input trace.
- Use a fast, albeit approximate, antialias limit application such as the Claerbout triangle filter method, which integrates each trace forward and backward and then applies a gapped second difference operator to construct each low-pass filtered sample on the fly.

Any attempt to tackle computational load still has to keep in mind two other factors. First, any sizable additional memory overhead will generally degrade migration efficiency, incurring more cache misses, page faults and disk I/O; computing antialiasing twice as fast does no good if the CPUs are idle twice as long. Second, the effects of interpolation incurred because summation trajectories pass between recorded trace samples are an issue: standard interpolation methods are implicitly lowpass filters, so efforts to preserve high frequencies in antialiasing will be wasted if the interpolation removes those frequencies in the Kirchhoff summation.
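The triangle-filter mechanics referenced in this abstract (running integrations followed by a gapped second difference) can be sketched as follows. This is a generic vectorized illustration using two forward cumulative sums, which yields the same triangle response; it is not the paper's memory-efficient Nyquist-preserving variant:

```python
import numpy as np

def triangle_filter(trace, half_width):
    """Triangle (Bartlett) low-pass of half-width n via Claerbout's
    double-integration trick: two cumulative sums (the running
    integrations) followed by a gapped second difference, so each
    output sample costs only a few adds regardless of n."""
    n = max(1, int(half_width))
    # Edge padding keeps the gapped difference in bounds at the ends.
    padded = np.pad(np.asarray(trace, dtype=float), n + 1, mode="edge")
    s = np.cumsum(np.cumsum(padded))        # double running integration
    t = np.arange(len(trace)) + n + 1       # indices of original samples
    # Gapped second difference, normalized so a constant is unchanged.
    return (s[t + n - 1] - 2.0 * s[t - 1] + s[t - n - 1]) / n**2
```

Applied to a unit impulse with half-width 2, this returns the expected triangle weights (0.25, 0.5, 0.25), and a larger half-width simply widens the triangle (lowers the cutoff) at no extra cost per sample.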

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1997 SEG Annual Meeting, November 2–7, 1997

Paper Number: SEG-1997-1086

... **Antialiasing** of Kirchhoff **operators** can be accomplished by aperture control and **operator** dip filtering, but this has the undesirable effect of suppressing steep-dip contributions to the image. We used three different implementations of **antialiasing** for 3-D prestack...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2000 SEG Annual Meeting, August 6–11, 2000

Paper Number: SEG-2000-0806

... the imaging quality. Therefore, the importance of

**antialiasing** has drawn increasing attention. The migration **antialiasing** dip **operators** are determined by spatial directional derivatives on diffraction hyperboloid surfaces or reflection ellipsoid surfaces depending on the different **antialiasing** algorithm...
Abstract

Summary Kirchhoff migration is a powerful imaging technique that accommodates the complexity of 3D seismic data. When implemented correctly, this method performs effectively on variable geometries, complex velocity models and steeply dipping events. However, aliasing noise often contaminates the imaging quality. Therefore, the importance of antialiasing has drawn increasing attention. The migration antialiasing dip operators are determined by spatial directional derivatives on diffraction hyperboloid surfaces or reflection ellipsoid surfaces, depending on the antialiasing algorithm. In this paper, we present a new formula that correctly expresses the dip value related to the trace spacing for 3D prestack Kirchhoff time migration.

Proceedings Papers

Publisher: NACE International

Paper presented at the CORROSION 2001, March 11–16, 2001

Paper Number: NACE-01291

... ABSTRACT Removing DC trends before calculating power spectral densities is a necessary

**operation**, but the choice of the method is probably one of the most difficult problems in electrochemical noise measurements. The procedure must be simple and straightforward, must effectively attenuate...
Abstract

ABSTRACT Removing DC trends before calculating power spectral densities is a necessary operation, but the choice of the method is probably one of the most difficult problems in electrochemical noise measurements. The procedure must be simple and straightforward, and must effectively attenuate the low frequency components without eliminating useful information or creating artifacts. Several procedures will be presented, including moving average removal, linear detrending, polynomial fitting, and analog or digital high-pass filtering, and their effect on electronic and electrochemical signals discussed. The results show that the best technique appears to be polynomial detrending. On the contrary, the recently proposed moving average removal method was found to have considerable drawbacks, and its use is not recommended. INTRODUCTION In the analysis of electrochemical noise (EN), when extracting statistical information from the time records of the fluctuations of the electrical quantities (current or voltage), one is often confronted with the problem that the signal sampled does not appear to be stationary, at least within the measurement time T. The signal is said to be drifting, and since the calculation of the power spectral density (PSD) or even of the standard deviation presupposes a stationary process, it is necessary to apply some procedure to the incoming signal so as to eliminate the contribution of what is commonly called its drift. The reasons for this behavior may be different and hard to know: for example, the signal may be stationary but contain frequency components lower than f0 = 1/T, or there may be some slow alteration of the system under study that causes the drift, whether linear or not. In corrosion studies, progressive deterioration of the electrodes, and therefore lack of stationarity, is to be expected in many cases. An illustrative example is given by a random signal superimposed on a linear drift.
In the implementation employed to generate Fig. 1a, white noise (2 mV p-p) in the range from 0 to 300 Hz, produced by a signal generator, was added to a ramp with a 1.6 mV/min slope. Both time records in Fig. 1a and 1b consist of 2048 points, but in the first the sampling rate is 10 Hz with a low-pass antialiasing filter at 3.3 Hz, while in the second it is 100 Hz, the cut-off frequency of the antialiasing filter being set at 33 Hz. For this case there is an analytical solution, and the expression for the PSD is:

$$\Psi_{x,\mathrm{comp}}(f) = \Psi_x(f) + \frac{a^2 T}{2\pi^2 f^2} \quad (1)$$

where $\Psi_{x,\mathrm{comp}}$ is the PSD of the composite signal, $\Psi_x$ that of the signal without drift, and the second term, which represents the effect of the drift, contains the slope, a, and the duration, T, of the time record. This component of the PSD, which gives a straight line of slope -2 in the logarithmic plot, is deterministic and not stochastic, so that the error is zero. For this reason, the low-frequency part of the spectra in Fig. 2, which are produced from the fast Fourier transform (FFT) of the two time records, is very smooth. Although the signal sampled in the two figures is the same, since Eq. 1 contains the total time T, which is different in the two cases, the PSD is different, as shown in Fig. 2. The fact that the two PSDs do not join is another indication that the signal is not stationary, so that one can only speak of pseudo-PSDs. If the sampling rate is increased to 1 kHz (antialiasing filter set at 330 Hz), curve c in Fig. 2 is a good representation of the PSD of white noise because the amplitude aT of the drift during the acquisition time is now small compared to the random fluctuations of 2 mV p-p. One has here the analytical proof of the experimentalist's common sense that if the drift is negligible during
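The drift term quoted above, a²T/(2π²f²), can be checked numerically with a periodogram of a pure ramp. A small sketch, with illustrative parameters (2048 points as in the time records described, arbitrary slope):

```python
import numpy as np

# Numerical check of the linear-drift term in the PSD expression above:
# for a pure ramp a*t of duration T, the one-sided pseudo-PSD at low
# Fourier frequencies approaches a**2 * T / (2 * pi**2 * f**2).
a, n, dt = 0.5, 2048, 0.1
T = n * dt
t = np.arange(n) * dt
x = a * t                                # drift only, no noise
X = np.fft.rfft(x) * dt                  # approximates the continuous transform
f = np.fft.rfftfreq(n, d=dt)
psd = 2.0 * np.abs(X) ** 2 / T           # one-sided periodogram
pred = a**2 * T / (2.0 * np.pi**2 * f[1:] ** 2)
```

At the lowest Fourier frequencies the periodogram and the analytic term agree to well under a percent, and the -2 log-log slope is apparent.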

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-2344

... that this is applicable to time as well as depth migration algorithms. Before the data are stacked along a diffraction surface, they are filtered for waveform phase shaping and pre–filtered in preparation for the

**antialiasing** **operation** applied during summation. Since the integration process shifts the waveform shape...
Abstract

INTRODUCTION SUMMARY Since 3D Prestack Kirchhoff Depth Migration (KPSDM) has become one of the leading imaging tools for hydrocarbon exploration, its accurate and precise handling of the kinematic and dynamic aspects of the wavefield has become center stage in R&D efforts worldwide. A separate paper in these proceedings by the same author describes a modified antialiasing filter weight that corrects for amplitude artifacts observable in earlier designs. Here we continue the effort of developing an efficient true-amplitude migration algorithm by suggesting a simplification of the traditional filtering done during this process that will improve the performance and precision of the results. In most Kirchhoff migration implementations, a triangular smoothing filter is used to avoid high-frequency aliasing along the migration operator. This filter is implemented in three steps: causal integration, anti-causal integration, and Laplace-type differentiation along the diffraction stacking surface. In addition, a derivative filter (known as the ρ-filter) is applied to the input data to correct for the wavelet phase rotation introduced by the Kirchhoff summation. We will find that the standard filtering sequence of applying the ρ-filter, causal integration, and anti-causal integration can be replaced by just an anti-causal integration. Kirchhoff migration provides one of the best imaging solutions when data are non-uniformly distributed in space. It also is fast and flexible when it comes to input/output geometries. In our quest to make 3D KPSDM better and faster we have found an alternative that achieves a comparable result by replacing the traditional filtering sequence, made of a ρ-filter, causal integration and anti-causal integration, with a single anti-causal integration filter. Note that this is applicable to time as well as depth migration algorithms.
Before the data are stacked along a diffraction surface, they are filtered for waveform phase shaping and pre-filtered in preparation for the antialiasing operation applied during summation. Since the integration process shifts the waveform shape, the ρ-filter works to restore it. The anti-aliasing filter is applied in three stages: causal integration, anti-causal integration, and then a Laplacian computation rolling along the stacking diffraction surface. By replacing these three filters with an anti-causal integration, the performance and accuracy of the algorithm can be greatly improved. The results presented here also incorporate the normalization factor in the anti-aliasing filter mentioned above and described elsewhere in these proceedings. These corrections eliminate azimuthally anisotropic amplitude behavior on the migration impulse response as well as amplitude distortions with time and offset introduced by the traditional scaling factor in Lumley et al. (1994) and Abma et al. (1999). THREE FILTERS IN ONE Differentiation in the frequency domain can be accomplished by a multiplication by -iω, where ω represents the circular frequency and Δt is the temporal sampling rate. As the sampling rate Δt goes to zero the approximation becomes an equality. From this it is straightforward to conclude that the three filters (ρ-filter, causal integration and anti-causal integration) should be equivalent to just an anti-causal integration.
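The "three filters in one" argument above reduces to an identity on idealized frequency responses: the derivative filter multiplies by -iω, causal integration divides by -iω, and anti-causal integration divides by +iω, so their product is the anti-causal integration alone. A minimal sketch, under one common Fourier sign convention:

```python
import numpy as np

# Idealized continuous-frequency responses on a discrete frequency grid.
n, dt = 1024, 0.004
w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
w[0] = 1.0                       # dummy value to avoid dividing by zero at DC
deriv = -1j * w                  # derivative (rho-type) filter
causal_int = 1.0 / (-1j * w)     # causal integration
anti_int = 1.0 / (1j * w)        # anti-causal integration
# The derivative cancels the causal integration, leaving anti_int.
combined = deriv * causal_int * anti_int
```

This is only the continuous-limit statement; the abstract's point is that the discrete approximation becomes exact as Δt goes to zero.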

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5852752

... are enhanced if there are complex multiple generators in the subsurface. In this paper, I propose a simple methodology to perform

**antialiasing**in the multiple contribution gather (MCG) domain to reduce the artifacts generated in the SRME predicted multiples. Introduction SRME is a prediction...
Abstract

Summary Surface related multiple elimination (SRME) is a powerful data-driven tool to remove surface related multiples. However, it has a very strict requirement on the acquisition geometry to obtain a satisfactory result. The shortcomings of the method due to inadequate acquisition are enhanced if there are complex multiple generators in the subsurface. In this paper, I propose a simple methodology to perform antialiasing in the multiple contribution gather (MCG) domain to reduce the artifacts generated in the SRME predicted multiples. Introduction SRME is a prediction and subtraction method (see, e.g., Verschuur and Berkhout, 1997, and Weglein et al., 1997). Surface related multiples are predicted using the appropriately preprocessed input data and then subtracted from the dataset without multiple attenuation. Written explicitly, the 3D SRME prediction is the sum of autoconvolutions of the data, given by an equation in which M is the predicted multiple for a trace with source location (x_s, y_s) and receiver location (x_g, y_g), D is the input data, and the summation is performed over the defined (x, y) aperture. The MCG contains traces generated by these individual autoconvolutions of the data D, prior to summation. SRME is a very popular and effective algorithm for removing surface related multiples. Ideally, this algorithm requires seismic sources at every receiver location. This prerequisite is not satisfied for field datasets. Figures 1 and 2 (modified from Verschuur (2006)) clearly demonstrate the artifacts generated in the predicted multiples due to inadequate acquisition. Figure 1a is the MCG generated for a single trace using an adequate source spacing. Figure 1b is the MCG generated for the same trace using double source spacing. The spatial aliasing is very noticeable in Figure 1b. Comparison of the predicted multiples in Figure 2, created by the summation of the MCG, clearly shows the artifacts generated due to inadequate sampling of the source.
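The prediction summation described above can be illustrated with a 2-D (single spatial axis) toy sketch: the multiple for a source/receiver pair is the sum over surface positions of the convolution of the two data traces, and the traces before summation form the MCG. Names and the co-located-grid assumption are illustrative, not the paper's implementation:

```python
import numpy as np

def predict_multiples(data):
    """Toy 2-D SRME prediction.  data[s, x, t] holds the trace recorded
    at surface position x for a source at position s; sources and
    receivers are assumed to share one grid.  The predicted multiple for
    a pair (s, g) is the sum over surface positions x of the temporal
    convolution D(s, x) * D(x, g); each x contributes one MCG trace."""
    ns, nx, nt = data.shape
    assert ns == nx, "toy sketch assumes co-located sources and receivers"
    m = np.zeros((ns, nx, nt))
    for s in range(ns):
        for g in range(nx):
            for x in range(nx):
                # One multiple-contribution trace, truncated to nt samples.
                m[s, g] += np.convolve(data[s, x], data[x, g])[:nt]
    return m
```

Convolution adds traveltimes, so a primary arriving at sample t1 on one leg and t2 on the other predicts a multiple at t1 + t2, and gaps in the x summation are what produce the aliasing artifacts the paper addresses.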

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1995 SEG Annual Meeting, October 8–13, 1995

Paper Number: SEG-1995-0041

... to different lower rates, synchronizing the free-running signatures with the seismic source, filtering out unwanted frequencies for improving S/N ratio and constructing software

**antialiasing** filters. These multifunctions are achieved simultaneously by a one-time FIR (finite impulse response) **operation** between...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1987 SEG Annual Meeting, October 11–15, 1987

Paper Number: SEG-1987-0729

... of noise, Proc. I.R.E., 37, 10-21. FIG. 1. Impulse response of the smear stack

**operator** (3 impulses) generated using Hale's method. FIG. 2. Impulse response of the partial modeling operator generated by transposing Hale's method. Spatial **antialiasing**...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1992 SEG Annual Meeting, October 25–29, 1992

Paper Number: SEG-1992-0995

... response including the directivity correction. This

**operator** will automatically cut off at θ = 90 degrees, i.e. cos(π/2) = 0, thus selecting the correct migration aperture. FK plots of the above are needed to demonstrate anti-aliasing and evanescent energy rejection properties. **Antialiased** Kirchhoff...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2003 SEG Annual Meeting, October 26–31, 2003

Paper Number: SEG-2003-1130

... Summary

**Operator**/Imaging aliasing introduced in Kirchhoff migration is often tackled by trace tapering, aperture truncation or time and offset-variant filtering. The latter approach is the most suitable. However, most implementations and published results using this technique are derived...
Abstract

Summary Operator/Imaging aliasing introduced in Kirchhoff migration is often tackled by trace tapering, aperture truncation or time- and offset-variant filtering. The latter approach is the most suitable. However, most implementations and published results using this technique are derived for Kirchhoff time migration and assume a constant-velocity medium. In this paper, we introduce an antialiasing filter for ray-based pre-stack depth migration and for general heterogeneous velocity models. We illustrate the benefits of such a scheme on a numerical example. Introduction Three kinds of spatial aliasing can occur during the migration process, all leading to poor and ambiguous images. The three categories, data, image and operator migration aliasing, have been extensively discussed by Lumley et al. (1994) and Biondo (2001).

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2512

... at similarities and differences in the implementation of the two methods, from deghosting, obliquity and

**antialiasing**, to memory requirements and out-of-core pipelining, and finally comparing their relative computational cost. Of particular note, I show that (1) no spatial FFT padding is required to suppress...
Abstract

Summary In this abstract I compare theoretical and practical aspects of the Delft and the Inverse Scattering surface-related multiple attenuation approaches. I first show the essential theoretical equivalence of the two using a simple three line mathematical argument. After that I look at similarities and differences in the implementation of the two methods, from deghosting, obliquity and antialiasing, to memory requirements and out-of-core pipelining, and finally comparing their relative computational cost. Of particular note, I show that (1) no spatial FFT padding is required to suppress wraparound artifacts and (2) Fourier spatial interpolation does nothing whatsoever to reduce aliasing artifacts. Introduction The fundamental principle of surface-related multiple prediction is that the upcoming energy recorded at any given trace location in a shot produces a downgoing reflected signal that is a secondary source whose response can be predicted by convolving that trace with a suitably tailored shot record whose shot position is at that given trace location. Because this basic convolution implicitly squares the source spectrum, deconvolution is used to flatten the source spectrum so that its square remains flat over the bandwidth of the data. Of course, additional adaptive shaping is needed to fine tune the imperfectly predicted multiples. Interpolation and extrapolation of the input data to fill in missing inner offsets is a necessary preprocessing step. In addition to attenuating edge effects, this step accounts for the fact that in a horizontally layered earth multiples at offset 2h arise from primaries at offset h. In addition, it is generally helpful to interpolate each shot record 2:1. This last preprocessing improvement is due to Bill Dragoset (pers. comm.) who notes that the convolution will take two dips p and q and turn it into a steeper dip p+q. 
Since the multiple estimate involves summation across such convolved traces, sufficiently aliased energy will stack in as background fuzz. By preinterpolating 2:1, an original dip of p msec/trace now becomes p/2 instead, suppressing its contribution to aliasing. Deghosting is generally needed to separate the up- and downgoing shot and recorded traces. This operation depends upon the source and receiver depths and the angles at which the seismic energy arrives at the surface. An obliquity correction can be applied at the same time as deghosting if done in the Fourier or τ-p domains. The cosine obliquity term compensates for the difference between amplitude and energy. Theory Let us initially assume that we have arranged our shots and receivers on a common grid, with the shot and receiver spacings both equal to the grid spacing. After a temporal Fourier transform to constant frequency slices, a nice way of visualizing the basic Delft (Verschuur, et al. 1988) operation of predicting multiples, M, is as a matrix multiplication of a data matrix, D, and a proxy primary matrix, Q, which is typically the input data itself or the output of a previous demultiple iteration.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-3662

...). One important approach to trace interpolation is prediction interpolating methods, mainly an extension of Spitz’s original method, which uses low-frequency non-aliasing data to extract

**antialiasing**prediction filters and then interpolates high frequencies beyond aliasing. Claerbout (1992) treated...
Abstract

SUMMARY Seismic data are often inadequately sampled along spatial axes. Spatially aliased data can produce imaging results with artifacts. We present a new adaptive prediction-error filter (PEF) approach based on regularized nonstationary autoregression, which aims at interpolating aliased seismic data. Instead of using patching, a popular method for handling nonstationarity, we obtain smoothly nonstationary PEF coefficients by solving a regularized least-squares problem. Shaping regularization is used to control the smoothness of adaptive PEFs. Finding the interpolated traces can be treated as another linear least-squares problem, which solves for data values rather than filter coefficients. Using benchmark synthetic and real data examples, we successfully apply this method to the problem of seismic trace interpolation. INTRODUCTION The spatial sampling interval is an important factor that controls seismic resolution. Too large a spatial sampling interval leads to aliasing problems, which can adversely affect migration and result in poor lateral resolution of subsurface images. An alternative to expensive dense spatial sampling is interpolation of seismic traces (Spitz, 1991). One important approach to trace interpolation is prediction interpolating methods, mainly extensions of Spitz's original method, which uses low-frequency non-aliased data to extract antialiasing prediction filters and then interpolates high frequencies beyond aliasing. Claerbout (1992) treated Spitz's method as a prediction-error filter in the original t-x domain. Porsani (1999) proposed a half-step prediction-filter scheme that makes the interpolation process more efficient. Wang (2002) extended f-x trace interpolation to higher dimensions, the f-x-y domain. Gulunay (2003) introduced an algorithm similar to f-x prediction filtering, which has an elegant representation in the f-k domain.

Naghizadeh and Sacchi (2009) proposed an adaptive f-x interpolation using exponentially weighted recursive least squares. Most recently, Naghizadeh and Sacchi (2010) used a prediction approach similar to Spitz's method, except that the curvelet transform is involved instead of the Fourier transform. Seismic data are nonstationary, but a standard PEF can only be used to interpolate stationary data (Claerbout, 1992). Patching is a common method to handle nonstationarity (Claerbout, 2010), although it occasionally fails in the assumption of piecewise constant dips. Crawley et al. (1999) proposed smoothly nonstationary PEFs with "micropatches" and radial smoothing, which typically produce better results than the rectangular patching approach. Fomel (2002) developed a plane-wave destruction (PWD) filter (Claerbout, 1992) as an alternative to the t-x PEF and applied the PWD operator to nonstationary trace interpolation. However, the PWD method depends on the assumption of a small number of smoothly variable seismic dips. In this paper, we use a two-step strategy, similar to that of Claerbout (1992) and Crawley et al. (1999), but calculate the adaptive PEF by using regularized nonstationary autoregression (Fomel, 2009) to deal with both nonstationarity and aliasing. Shaping regularization (Fomel, 2007) controls the locally smooth interpolation. We test the new method by using several benchmark synthetic examples. Results of applying the proposed method to a field example demonstrate that regularized adaptive PEF can be effective in trace interpolation problems, even in the presence of multiple variable dips.
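The core idea behind all of these PEF methods, estimating prediction coefficients by least squares and then using them to predict data, can be shown in its simplest stationary 1-D form. This is a generic sketch, far simpler than the adaptive, regularized PEFs of the paper; the function name is illustrative:

```python
import numpy as np

def estimate_pef(x, order):
    """Least-squares prediction filter: fit coefficients c so that
    x[t] is approximated by sum_k c[k] * x[t-1-k].  The corresponding
    prediction-error filter is (1, -c[0], ..., -c[order-1])."""
    rows = np.array([x[t - 1 - np.arange(order)] for t in range(order, len(x))])
    target = x[order:]
    c, *_ = np.linalg.lstsq(rows, target, rcond=None)
    return c

# A noise-free sinusoid obeys a two-term recursion
# x[t] = 2*cos(w)*x[t-1] - x[t-2], so a 2-point filter predicts it exactly.
t = np.arange(200)
x = np.sin(0.3 * t)
c = estimate_pef(x, 2)
pred = c[0] * x[1:-1] + c[1] * x[:-2]
```

The interpolation methods above exploit the same principle: coefficients estimated where the data are well sampled (or unaliased at low frequencies) are reused to predict the missing or aliased samples.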

Proceedings Papers

Paper presented at the International Petroleum Technology Conference, December 7–9, 2009

Paper Number: IPTC-13998-ABSTRACT

... is coarse. One of the key reasons for the loss of bandwidth with large midpoint bins is due to the anti-alias filtering of the migration

**operators** that must be done. As discussed by Abma et al. (1999), such filtering is needed to prevent the generation of artifacts. The resolution implications...
Abstract

This reference is for an abstract only. A full paper was not submitted for this conference.

Introduction

The word "resolution" is often assumed to refer to the specific case of temporal resolution. In that regard, Kallweit & Wood (1982) observed that when two octaves of bandwidth are present, the limit of temporal resolution can be expressed as 1/(1.4 x FMAX). Equally important, however, is the issue of spatial resolution. One of the methods proposed by Berkhout (1984) for quantifying spatial resolution is the "spatial wavelet". Such wavelets demonstrate that better temporal resolution leads to better spatial resolution. A key point in this paper, though, is that the relationship also works the other way: better spatial resolution leads to better temporal resolution. For instance, of great interest from the Gulf of Mexico to the Red Sea is the exploration for reservoirs beneath salt. For the migration process to produce high temporal frequencies in the images of reflections beneath salt, the corrugated nature of the top-salt boundary must be portrayed faithfully in the velocity model. If a smoothed version of that boundary is used instead (as would certainly be the case in the first round of tomography), the spatial resolution of the top salt is lost, and that loss leads to a forfeiture of subsalt temporal resolution.

Binning requirements

The formulas for spatial wavelets are derived from the analytic integration of continuous functions. However, seismic data are sampled in time and space, and the imaging calculations use discrete summations. This means the spatial resolution in real surveys is more limited than the spatial wavelets indicate, and the limitation worsens as the sampling becomes coarser. One of the key reasons for the loss of bandwidth with large midpoint bins is the anti-alias filtering that must be applied to the migration operators. As discussed by Abma et al. (1999), such filtering is needed to prevent the generation of artifacts. The resolution implications are demonstrated in Figure 1. A depth-varying velocity function from an onshore survey was used to model diffractions from two closely spaced points in the zone of interest. Those diffractions were then migrated and stacked. The results from two candidate survey designs are shown. The macro designs were identical, but the source and receiver intervals were selected to yield 40-ft (12 m) and 80-ft (24 m) CMP bin dimensions in the two surveys respectively. The 40-ft CMP bin design clearly resolves the two points that are 200 ft (61 m) apart; the 80-ft design does not. Analyses of spectra (not shown) also reveal that the temporal bandwidth for the 40-ft case is better than that of the 80-ft scenario, again confirming the inter-relationship of temporal and lateral resolution. The same situation arises in marine surveys. Such examples demonstrate not only the benefits of the more detailed structural interpretation that can be obtained from small-bin surveys, but also the more detailed identification of reservoir properties that can be derived from inversion.

Coordinate accuracy requirements

Hand-in-hand with the drive for greater spatial resolution should be the drive for greater accuracy in source and receiver coordinate information, which is understandably more challenging in the offshore case. To investigate this issue, modeling and migration tests similar to those performed for Figure 1 were executed for a marine survey design, using a velocity function from a field where the target was 6130 m deep. After the diffraction surfaces were modeled, the source and receiver coordinates were perturbed, so that the migration was conducted with inaccurate coordinate information. Three scenarios are featured in Figure 2. The panel on the left, used for reference, shows the result obtained with the correct coordinates. The panel in the middle shows the result when the receiver coordinates were perturbed using a Gaussian distribution with a 3-m standard deviation, similar to the accuracy available from leading-edge acoustic positioning systems. The panel on the right shows the result when the standard deviation was 20 m, akin to the accuracy available in early surveys that relied solely on compasses for streamer navigation data. The loss of resolution induced by the 3-m inaccuracy is of no great consequence: the two point diffractors separated by 30 m are easily resolved. However, those diffractors are not resolved when the standard deviation is 20 m. Note that in this exercise the bin dimensions are 5 m, so the right-most panel demonstrates that small bins by themselves are not sufficient for good resolution; accuracy in coordinates is required too.

Enabling technologies

Improved temporal and spatial resolution requires denser spatial sampling, which naturally implies that many more shots (via continuous recording techniques) and/or higher channel counts are required in acquisition. Such strategies would seem ideal for onshore programs in the Middle East and North Africa, where the desert environments place minimal restriction on access. In other regions, however, topography, vegetation, infrastructure, and many other factors often severely restrict where shot points can be placed; in those cases, the burden of denser spatial sampling falls primarily on the channel count. In any case, the quest for better sampling also implies that each shot should ideally be a point (as in the case of a single vibrator) and each receiver should be recorded by a separate channel; otherwise the signal is smeared. This is not to say that it would be sufficient simply to use more channels and more computers. An order-of-magnitude increase in the number of live channels requires paradigm shifts in data QC, data transfer, and processing, as well as improvements in positioning accuracy, as mentioned above. Assuming all hurdles are overcome, how many live channels would we like to have in each shot? Frankly, most geophysicists would probably take all they could get. Today, large "conventional" land and marine acquisition systems might have 4,000 to 5,000 channels. However, some single-sensor land systems have offered up to 30,000 live channels, with further advances to 150,000 channels recently launched. Similarly, marine single-sensor systems can record tens of thousands of live channels, with the main limitation being how many streamers the vessel can tow.

Final Remarks

What we have said here is that resolution is multifaceted. Good temporal resolution does not depend simply on how much high-frequency energy our seismic sources can pump into the ground; good temporal resolution in the 3D migrated image also requires good spatial sampling. Good spatial sampling requires high channel counts, and high channel counts require a paradigm shift in everything from QC procedures to final interpretation. The very definition of "sampling" also implies discrete sampling, not mixing. Finally, the big questions are just how small the bins must be and how many channels are needed; in other words, what field-design requirements are needed to meet the resolution requirements? Projects with which the authors have been involved employed bins as small as 3 m or so. Such density is certainly not yet required in most areas, but it might well be appropriate for specific instances ranging from the SAGD programs of the heavy-oil province in Canada to high-resolution surveys in heavily karsted zones of the Middle East. As a matter of practice, proper survey evaluation and design studies need to be conducted to answer these field-specific questions.

References

Abma, R., Sun, J., and Bernitsas, N., 1999, Antialiasing methods in Kirchhoff migration: Geophysics, 64, 1783–1792.

Berkhout, A. J., 1984, Seismic exploration - seismic resolution: a quantitative analysis of resolving power of acoustical echo techniques: Geophysical Press, London.

Kallweit, R. S., and Wood, L. C., 1982, The limits of resolution of zero-phase wavelets: Geophysics, 47, 1035–1046.
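The two quantitative ingredients of the abstract, the Kallweit-Wood temporal-resolution rule and the Gaussian coordinate perturbation behind Figure 2, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' modeling code; the receiver-line geometry and random seed are assumptions:

```python
import numpy as np

# Kallweit & Wood (1982): with two octaves of bandwidth present, the
# temporal resolution limit is approximately 1 / (1.4 * FMAX).
def temporal_resolution_limit(f_max_hz):
    return 1.0 / (1.4 * f_max_hz)

tr = temporal_resolution_limit(60.0)       # seconds, for a 60-Hz FMAX

# Coordinate-accuracy test in the spirit of Figure 2: perturb receiver
# coordinates with zero-mean Gaussian noise at the two standard
# deviations discussed in the text (3 m and 20 m).
rng = np.random.default_rng(0)
receivers = np.arange(0.0, 6000.0, 5.0)    # hypothetical line, 5-m spacing

perturbed_3m = receivers + rng.normal(0.0, 3.0, receivers.size)
perturbed_20m = receivers + rng.normal(0.0, 20.0, receivers.size)
```

Migrating with coordinates like `perturbed_20m` in place of the true positions is what degrades the right-most panel of Figure 2, while the 3-m case leaves the 30-m diffractor separation resolvable.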

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1997 SEG Annual Meeting, November 2–7, 1997

Paper Number: SEG-1997-1119

... ABSTRACT No preview is available for this paper.
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the SEG International Exposition and Annual Meeting, September 15–20, 2019

Paper Number: SEG-2019-3214151

... for the steep migration

**operator**summation trajectory. A reduced CDP interval in the migration image can mitigate this problem, but with additional computation and GPU memory cost. For better data quality, we implement the**antialiasing**with the triangle filter following Lumley et al. (1994), which was first...
Abstract

ABSTRACT Least-squares migration in the data domain can improve the illumination of the migration image and mitigate migration artifacts. However, it requires many iterations of migration and demigration to obtain the desired subsurface reflectivity model, which can limit its wide use in production. For Kirchhoff migration, the main purposes of the least-squares method are to remove migration artifacts and random noise and to improve image illumination and amplitude fidelity under complex velocity; these goals can be fully addressed by image-domain least-squares migration. Adding the Q effect into the least-squares migration process can compensate for amplitude loss due to attenuation in the gas zone and can correct the image phase. We investigate the application of image-domain least-squares Kirchhoff migration and least-squares Kirchhoff Q migration with adaptive curvelet-domain matching filters, used as an inverse Hessian operator. A practical workflow, together with a 3D field-data example from an offshore Ireland Atlantic dataset, demonstrates reduced image artifacts, improved amplitude behavior, and stable Q compensation. Presentation Date: Tuesday, September 17, 2019 Session Start Time: 8:30 AM Presentation Start Time: 8:55 AM Location: 214C Presentation Type: Oral
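The data-domain iteration that this abstract contrasts with the image-domain approach can be illustrated with a toy least-squares loop. Here a small dense matrix stands in for the demigration operator and its transpose for migration; this is a minimal sketch under those stand-in assumptions, not the curvelet-domain matching-filter method of the paper:

```python
import numpy as np

# Toy data-domain least-squares "migration": minimize ||L m - d||^2 by
# steepest descent, where L stands in for demigration (forward modeling)
# and L.T for migration (the adjoint).
rng = np.random.default_rng(1)
L = rng.normal(size=(80, 40))               # stand-in demigration operator
m_true = np.zeros(40)
m_true[[10, 25]] = 1.0                      # sparse "reflectivity" model
d = L @ m_true                              # synthetic observed data

m = np.zeros(40)
for _ in range(300):
    r = d - L @ m                           # residual: demigrate, subtract
    g = L.T @ r                             # migrate the residual (gradient)
    alpha = (g @ g) / ((L @ g) @ (L @ g))   # exact line-search step
    m = m + alpha * g
```

Each pass of the loop costs one migration plus one demigration, which is the per-iteration expense the abstract cites as the barrier to production use; the image-domain alternative replaces the loop with a single migration followed by an approximate inverse Hessian.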

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1998 SEG Annual Meeting, September 13–18, 1998

Paper Number: SEG-1998-1171

... the locality of the Kirchhoff migration

**operator**to restrict the processing to a portion of the whole model (target), according to the features of interest. This saves computing resources. The migration proceeds one shot gather at a time. We use a triangle**antialiasing****operator**(Claerbout, 1992). Traveltimes...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2003 SEG Annual Meeting, October 26–31, 2003

Paper Number: SEG-2003-1091

... factors in Kirchhoff migration: 62nd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 995–998. Sun, J., and Bernitsas, N., 1999,

**Antialiasing****operator**dip in 3D prestack Kirchhoff time migration - an exact solution: 61st Mtg., Eur. Assn. Geosci. Eng., Expanded Abstracts, Session: 1052...
Abstract

SUMMARY With the widespread adoption of wavefield-continuation methods for prestack migration, the concept of operator aliasing warrants revisiting. While zero-offset migration is unaffected, prestack migrations reintroduce the issue. Situations where this problem arises include subsampling the shot axes to save shot-profile migration costs and limited cross-line shot locations imposed by acquisition strategies. In this treatment, these problems are overcome by using an appropriate source function or by band-limiting the energy contributing to the image. We detail a synthetic experiment that shows the ramifications of subsampling the shot axis and the efficacy of our two approaches in addressing the problems introduced. Further, we explain how these methods can be tailored in some situations to include useful energy residing outside the Nyquist limits.
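The band-limiting remedy mentioned in the summary can be made concrete with the textbook Kirchhoff operator-antialiasing criterion, which caps the temporal frequency at 1/(2|p|dx) for local operator slope p and trace spacing dx. A hedged sketch: the hyperbola parameters below are invented for illustration, and this is the standard criterion rather than this paper's specific source-function scheme:

```python
import numpy as np

# Standard Kirchhoff operator-antialiasing criterion: energy summed along
# an operator with local slope p = d(traveltime)/dx over trace spacing dx
# is unaliased only for temporal frequencies f <= 1 / (2 * |p| * dx).
def max_unaliased_frequency(p_s_per_m, dx_m):
    p = abs(p_s_per_m)
    return np.inf if p == 0.0 else 1.0 / (2.0 * p * dx_m)

# Example slope from a zero-offset hyperbola t(x) = sqrt(t0^2 + (2x/v)^2)
t0, v, x = 2.0, 2000.0, 3000.0             # s, m/s, m (invented values)
p = (4.0 * x / v**2) / np.sqrt(t0**2 + (2.0 * x / v)**2)   # dt/dx in s/m
f_lim = max_unaliased_frequency(p, 25.0)   # Hz, for 25-m trace spacing
```

Frequencies above `f_lim` must either be filtered from the contribution of that trace (the usual antialiasing filter) or handled by schemes like those in this paper that deliberately recover energy beyond the Nyquist limits.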
