Theses in Geophysics (Doctorate) - CPGF/IG

Permanent URI for this collection: http://10.7.2.76:4000/handle/2011/2357

The Academic Doctorate belongs to the Graduate Program in Geophysics (CPGF) of the Institute of Geosciences (IG) at the Universidade Federal do Pará (UFPA).

Browse

Recent Submissions

Now showing 1 - 20 of 50
  • Item (Open Access)
    Solução da equação de Archie com algoritmos inteligentes
    (Universidade Federal do Pará, 2011) SILVA, Carolina Barros da; ANDRADE, André José Neves; http://lattes.cnpq.br/8388930487104926
    The Archie equation is a historical landmark of formation evaluation, establishing a relationship between the physical and petrophysical properties of reservoir rocks that makes possible the identification and quantification of hydrocarbons in the subsurface. Water saturation is the solution of the Archie equation obtained from the measured deep formation resistivity and the estimated porosity. However, solving the Archie equation is not trivial, since it depends on prior knowledge of the formation water resistivity and of the Archie exponents (cementation and saturation). This thesis introduces a set of new intelligent algorithms to solve the Archie equation. A modification of the competitive neural network, named the bicompetitive neural network, produces the log zonation. A new genetic algorithm, with an evolutionary strategy based on mushroom reproduction, produces estimates of the matrix density, the matrix transit time, and the matrix neutron porosity, which, associated with a new rock model, yield realistic porosity estimates that account for shale effects. A new competitive neural network model, named the angular competitive neural network, is able to accomplish the interpretation of the Pickett plot, supplying information about the formation water resistivity and the cementation exponent. All results of the methodology introduced here are presented using synthetic data, actual wireline logs, and core analysis results.
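    For reference, a minimal sketch of the water-saturation solution that the algorithms above target, assuming the classical Archie form with illustrative values for the tortuosity factor a and the exponents m and n (the quantities the thesis estimates):

```python
import numpy as np

def archie_sw(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Water saturation from the Archie equation:
    Sw = ((a * Rw) / (phi**m * Rt))**(1/n)."""
    rt, phi = np.asarray(rt, float), np.asarray(phi, float)
    return ((a * rw) / (phi**m * rt)) ** (1.0 / n)

# Example: 20 ohm.m deep resistivity, 25% porosity, Rw = 0.05 ohm.m
print(archie_sw(rt=20.0, phi=0.25, rw=0.05))  # 0.2, i.e. 20% water saturation
```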
  • Item (Open Access)
    Post-imaging analysis of pressure prediction in productive sedimentary basins for oil and gas exploration
    (Universidade Federal do Pará, 2015-05-26) VIEIRA, Wildney Wallacy da Silva; LEITE, Lourenildo Williame Barbosa; http://lattes.cnpq.br/8588738536047617
    This thesis addresses several aspects of the basin modeling problem in oil and gas exploration, organized in two general divisions: parameter estimation and pressure prediction. The first topic is related to velocity analysis and effective media, where we estimated a distribution for the P-wave velocity in time, performed the transformation to depth, and used an effective model for the density and S-wave velocity distributions. The reason for focusing initially on these estimations is that they represent some of the most basic information obtainable from the seismic domain, from which the other seismic parameters can be calculated, and on which the second part of this work is entirely based. The second topic is related to computing the stress, strain, and pressure distributions in the subsurface using the P- and S-wave velocity and density models, in order to localize areas of high and low pressure that act as natural suction pumps for the mechanics of oil and gas accumulation into productive zones and layers. We have highlighted this second part in the final presentation of the work, and call attention to the sensitivity of the pressure mapping to velocity and density variations. We classify the first division as dedicated to conventional seismic processing and imaging, and call the second division post-imaging stress-strain-pressure prediction. Since the final aim of geophysics is to create images of the subsurface in terms of different properties, the stress calculation only makes full sense for real data, which makes it mandatory that the acquired seismic data be three-component. As an important conclusion from the numerical experiments, we show that pressure does not have a trivial behavior, since it can decrease with depth and create the natural pumps that are responsible for accumulating fluids. The theory of porous media is based on integral geometry, because this mathematical discipline deals with collective geometrical properties of real reservoirs. It was shown that such collective properties are, namely, porosity, specific surface, average curvature, and Gaussian curvature. For example, cracked media have, as a rule, small porosity but a very large specific surface area, which creates an anomalously high ratio γ = vS/vP, implying a negative Poisson coefficient σ. Another conclusion is related to calculating the discontinuity in pressure between solid and fluid, which depends on the structure of the pore space.
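    The γ-σ relation invoked above follows from isotropic elasticity alone; a small sketch of the Poisson coefficient computed from the two velocities:

```python
import numpy as np

def poisson_ratio(vp, vs):
    """Poisson coefficient from P- and S-wave velocities:
    sigma = (vp**2 - 2*vs**2) / (2*(vp**2 - vs**2))."""
    vp, vs = np.asarray(vp, float), np.asarray(vs, float)
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

print(poisson_ratio(3000.0, 1500.0))  # gamma = 0.5  -> sigma = +1/3
print(poisson_ratio(3000.0, 2250.0))  # gamma = 0.75 -> sigma < 0 (cracked media)
```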
  • Item (Open Access)
    Estimativa de parâmetros em meios VTI usando aproximações de sobretempo não hiperbólicas
    (Universidade Federal do Pará, 2015-09-30) PEREIRA, Rubenvaldo Monteiro; CRUZ, João Carlos Ribeiro; http://lattes.cnpq.br/8498743497664023
    Transversely isotropic (TI) media are a more realistic model for processing seismic data, representing, for example, fractured media with a preferred fracture direction or media composed of periodic thin layers. In particular, TI media with a vertical symmetry axis (VTI) are widely used as models for P-wave propagation in shales, an abundant rock in hydrocarbon reservoirs. However, P-wave propagation in homogeneous VTI media depends on four stiffness parameters and exhibits an algebraically complicated phase velocity equation, a group velocity equation that is difficult to write explicitly, and a nonhyperbolic moveout equation. Therefore, several authors have presented parameterizations and obtained approximations to these equations that depend on only three parameters. Among these, the moveout approximations have been widely used in inverse methods to estimate lithological parameters in homogeneous VTI media. Such methods have generally been successful in estimating the stacking velocity vn and the anellipticity parameter η, since these are the only ones required for generating initial models for the time-domain steps of seismic processing. One of the most used methods for parameter estimation is semblance-based velocity analysis; however, because this method is limited to sections with a small offset-to-depth ratio, adaptations for anisotropic media that consider nonhyperbolic moveout approximations are required. In this thesis, based on the anelliptical shifted-hyperbola approximation, anelliptical rational approximations are presented for the phase velocity, the group velocity, and the nonhyperbolic moveout in homogeneous and horizontally layered VTI media. The validity of these approximations is assessed by calculating their relative errors and comparing them with other approximations known in the literature. A semblance-based velocity analysis is performed to measure the accuracy of the rational moveout approximations for estimating parameters in VTI media. The results demonstrate the great potential of the rational approximations in inverse problems. In order to adapt them to VTI media, we modify two semblance-type coherence measures that are sensitive to amplitude and phase variations. The accuracy and robustness of the adapted coherence measures are validated by the estimation of anisotropic parameters in VTI media.
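    As a point of reference for the moveout equations discussed above, a sketch of the widely used Alkhalifah-Tsvankin (1995) nonhyperbolic approximation (not the thesis' rational approximations, whose exact form is given in the thesis itself); parameter values are illustrative:

```python
import numpy as np

def moveout_at(x, t0, vn, eta):
    """Alkhalifah-Tsvankin nonhyperbolic P-wave moveout for VTI media:
    traveltime t(x) for offset x, zero-offset time t0, NMO velocity vn,
    and anellipticity parameter eta."""
    x = np.asarray(x, float)
    t2 = (t0**2 + x**2 / vn**2
          - 2.0 * eta * x**4 / (vn**2 * (t0**2 * vn**2 + (1.0 + 2.0 * eta) * x**2)))
    return np.sqrt(t2)

offsets = np.linspace(0.0, 4000.0, 5)                    # m
print(moveout_at(offsets, t0=1.0, vn=2000.0, eta=0.1))   # s
```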
  • Item (Open Access)
    Seismic amplitude analysis and quality factor estimation based on redatuming
    (Universidade Federal do Pará, 2015-04-25) OLIVEIRA, Francisco de Souza; FIGUEIREDO, José Jadsom Sampaio de; http://lattes.cnpq.br/1610827269025210
    Amplitude correction is an important task for restoring the seismic energy dissipated by inelastic absorption and geometrical spreading during acoustic/elastic wave propagation in solids. In this work, we propose a way to improve the estimation of the quality factor from seismic reflection data, with a methodology based on the combination of the peak frequency-shift (PFS) method and a redatuming operator. The innovation of this work lies in the way we correct traveltimes when the medium consists of many layers. In other words, the correction of the traveltime table used in the PFS method is performed using the redatuming operator. This operation, performed iteratively, allows the Q factor to be estimated layer by layer in a more accurate way. Redatuming is used to simulate the acquisition of data at new datum levels, avoiding distortions produced by near-surface irregularities related to either geometric or material-property heterogeneities. In this work, the application of the true-amplitude Kirchhoff redatuming (TAKR) operator in homogeneous media is compared with the conventional Kirchhoff redatuming (KR) operator, restricted to the zero-offset case. Our methodology combines the PFS method with the redatuming operator (TAKR with weight equal to 1). Applications to synthetic data and to real seismic (Viking Graben) and GPR (Siple Dome) data demonstrate the feasibility of our analysis.
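    A minimal sketch of the peak frequency-shift idea, assuming a Ricker source wavelet and a single constant-Q path (the thesis' layer-by-layer traveltime correction via redatuming is not reproduced here):

```python
import numpy as np

def q_from_peak_shift(fm, fp, t):
    """Quality factor from the peak frequency shift, assuming a Ricker
    source wavelet with peak frequency fm (Hz): attenuation exp(-pi*f*t/Q)
    moves the spectral peak down to fp after traveltime t (s). Setting
    d/df [2*ln(f) - f**2/fm**2 - pi*f*t/Q] = 0 at f = fp gives
    Q = pi * t * fp * fm**2 / (2 * (fm**2 - fp**2))."""
    return np.pi * t * fp * fm**2 / (2.0 * (fm**2 - fp**2))

# A 40 Hz Ricker whose peak has shifted to 32 Hz after 1.2 s of propagation:
print(q_from_peak_shift(fm=40.0, fp=32.0, t=1.2))  # Q ~ 168
```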
  • Item (Open Access)
    Caracterização de fraturas em imagens de amplitude acústica utilizando morfologia matemática
    (Universidade Federal do Pará, 2013) XAVIER, Aldenize Ruela; GUERRA, Carlos Eduardo; http://lattes.cnpq.br/7633019987920516; ANDRADE, André José Neves; http://lattes.cnpq.br/8388930487104926
    Fracture analysis is of particular interest in the characterization of carbonate reservoirs, since fractures are the classic geological setting for storing and producing hydrocarbons in these kinds of reservoirs. In Brazil in particular, interest in the characterization of carbonate reservoirs is growing with the recent pre-salt discoveries. Acoustic imaging tools provide valuable information about the amplitude of the waves reflected at the borehole wall, which can be interpreted to characterize fractures. However, problems arise because the interpretation of these images is qualitative, relying basically on the vision and experience of the interpreter. This work presents a methodology for fracture analysis in acoustic images that can be divided into three steps. The first presents the image modeling, which is used to infer the appearance of fractures in different geological settings. In the second step, mathematical morphology is used as an edge detector to perform fracture identification in the acoustic image. The last step deals with the extraction of geometric attributes of the fractures by fitting a fourth-degree polynomial according to the least-squares criterion. The evaluation of this methodology is performed with synthetic images generated by the presented modeling, which supports the characterization of fractures performed in real images.
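    A small illustration of mathematical morphology as an edge detector: the morphological gradient (dilation minus erosion), applied to a toy unrolled borehole image in which a fracture appears as a sinusoidal trace. Details such as the window size are illustrative:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_gradient(image, size=3):
    """Morphological gradient (dilation minus erosion): a basic
    mathematical-morphology edge detector of the kind used to
    highlight fracture traces in acoustic amplitude images."""
    img = np.asarray(image, float)
    return grey_dilation(img, size=(size, size)) - grey_erosion(img, size=(size, size))

# Toy "image" with one dark sinusoidal trace, as a planar fracture appears
# on an unrolled borehole wall:
az = np.arange(180)                                   # azimuth samples
img = np.ones((64, 180))
img[(32 + 10 * np.sin(np.radians(2 * az))).astype(int), az] = 0.0
edges = morphological_gradient(img)
print(edges.shape, edges.max())                       # edges light up along the trace
```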
  • Item (Open Access)
    Modelagem eletromagnética 2.5-D de dados geofísicos através do método de diferenças finitas com malhas não-estruturadas
    (Universidade Federal do Pará, 2014-10-23) MIRANDA, Diego da Costa; RÉGIS, Cícero Roberto Teixeira; http://lattes.cnpq.br/7340569532034401; HOWARD JUNIOR, Allen Quentin; http://lattes.cnpq.br/6447166738854045
    We present a 2.5-D electromagnetic formulation for modeling marine controlled-source electromagnetic (mCSEM) data using a finite-difference frequency-domain (FDFD) method. The formulation is in terms of secondary fields, thus removing the source-point singularities. The components of the electromagnetic field are derived from the solution for the magnetic vector potential and the electric scalar potential, evaluated over the entire problem domain, which must be completely discretized for the FDFD method. Finite-difference methods result in large sparse matrix equations that are efficiently solved by preconditioned iterative sparse-matrix methods. To overcome the limitations imposed by structured grids in the traditional FDFD method, the new method is based on unstructured grids, allowing a better delineation of the geometries. These meshes are completely adaptable to the models we work with, promoting a smooth representation of their structures, and may be refined only locally, in regions of interest. We also present the development of the RBF-DQ method (radial basis function differential quadrature), which combines the approximation of functions by linear combinations of radial basis functions (RBF) with the differential quadrature (DQ) technique for approximating derivatives. Our results show that the FDFD method with unstructured grids, when applied to geophysical modeling problems, yields modeled data of improved quality in comparison with the results obtained by traditional FDFD techniques.
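    A toy construction of RBF-DQ weights, assuming multiquadric basis functions: the derivative at a point is written as a weighted sum of function values at scattered nodes, with the weights made exact for each RBF (node positions and shape parameter are illustrative):

```python
import numpy as np

def rbf_dq_weights_dx(nodes, x0, eps=1.0):
    """RBF-DQ sketch: weights w_j such that df/dx(x0) ~ sum_j w_j * f(x_j),
    exact for multiquadric RBFs phi_k(x) = sqrt(1 + eps**2 * |x - x_k|**2)
    centered at the given 2-D nodes."""
    nodes = np.asarray(nodes, float)
    r2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    A = np.sqrt(1.0 + eps**2 * r2)               # A[k, j] = phi_k(x_j)
    d0 = x0 - nodes
    b = eps**2 * d0[:, 0] / np.sqrt(1.0 + eps**2 * (d0**2).sum(-1))  # dphi_k/dx at x0
    return np.linalg.solve(A, b)

pts = np.array([[0.0, 0.0], [0.5, 0.1], [-0.4, 0.3], [0.2, -0.5], [-0.3, -0.4]])
w = rbf_dq_weights_dx(pts, pts[0])

# Exactness check on one of the basis functions itself:
f = lambda X: np.sqrt(1.0 + ((X - pts[1]) ** 2).sum(-1))
print(w @ f(pts), (pts[0, 0] - pts[1, 0]) / f(pts[0:1])[0])  # agree by construction
```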
  • Item (Open Access)
    Inversão de velocidades por otimização global usando a aproximação superfície de reflexão comum com afastamento finito
    (Universidade Federal do Pará, 2016-08-25) MESQUITA, Marcelo Jorge Luz; CRUZ, João Carlos Ribeiro; http://lattes.cnpq.br/8498743497664023
    The recent geophysical literature has shown that building an accurate initial model is the most appropriate way to reduce the ill-posedness of Full Waveform Inversion, providing the necessary convergence of the misfit function toward the global minimum. Optimized models are useful as initial guesses for more sophisticated velocity inversion and migration methods. I developed an automatic P-wave velocity inversion methodology using pre-stack two-dimensional seismic data. The proposed inversion strategy is fully automatic, based on semblance measurements and guided by the paraxial traveltime approximation known as the Finite-Offset Common-Reflection-Surface. It is performed in two steps: first, using image rays and an a priori known initial velocity model, we determine the reflector interfaces in depth from the time-migrated section. The generated depth macro-model is used as input to the second step, where the velocity model is parameterized layer by layer, each layer separated from the others by smoothed interfaces. The inversion strategy is based on scanning semblance measurements in each common-midpoint gather, guided by the Finite-Offset Common-Reflection-Surface paraxial traveltime approximation. To begin the inversion in the second step, the finite-offset common-midpoint central rays are built by ray tracing through the velocity macro-model obtained in the first step. Using as objective function the arithmetic mean of the total semblance calculated over all common-midpoint gathers, layer after layer, a global optimization method called the Very Fast Simulated Annealing algorithm is applied to obtain the convergence of the objective function toward the global maximum. By applying the method to synthetic and real data, I showed the robustness of the inversion algorithm in yielding an optimized P-wave velocity macro-model from pre-stack seismic data.
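    A compact sketch of Very Fast Simulated Annealing in Ingber's style (Cauchy-like perturbations with temperature T_k = T0*exp(-c*k**(1/D))); maximizing the mean semblance corresponds to minimizing its negative in this form. The toy misfit and schedule constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def vfsa_minimize(f, m0, lo, hi, t0=1.0, c=1.0, n_iter=500):
    """Very Fast Simulated Annealing sketch: temperature-dependent,
    bounded model perturbations with Metropolis acceptance."""
    m, fm = np.array(m0, float), f(m0)
    D = m.size
    for k in range(1, n_iter + 1):
        T = t0 * np.exp(-c * k ** (1.0 / D))
        u = rng.random(D)
        step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        trial = np.clip(m + step * (hi - lo), lo, hi)
        ft = f(trial)
        if ft < fm or rng.random() < np.exp(-(ft - fm) / T):
            m, fm = trial, ft
    return m, fm

# Toy misfit with a single global minimum at (1.5, 2.5):
f = lambda m: (m[0] - 1.5) ** 2 + (m[1] - 2.5) ** 2
print(vfsa_minimize(f, [0.0, 0.0], lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0])))
```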
  • Item (Open Access)
    Inversão da forma de onda orientada ao alvo
    (Universidade Federal do Pará, 2016-09-16) COSTA, Carlos Alexandre Nascimento da; COSTA, Jessé Carvalho; http://lattes.cnpq.br/7294174204296739
    We propose a new target-oriented waveform inversion to estimate the physical parameters of a specific target in the subsurface from observed deviated-VSP or surface seismic data. Furthermore, we investigate a strategy to estimate the impulse responses of a local target in the subsurface from deviated-VSP or surface seismic data as an iterative sparse inversion, whose main feature is that all multiple scattering in the data is used to enhance the illumination at the target level. In these approaches we fit the upgoing wavefields observed at a specific level near the local target with the upgoing wavefields estimated at the same depth level through a convolution-type representation of the Green's function. The main feature of the target-oriented waveform inversion is that we only need to know the up- and downgoing wavefields at the depth level above the target area to estimate the physical parameters of the area of interest. We show through numerical tests that the iterative sparse inversion does not require dense source sampling to estimate the impulse responses of a target below a complex overburden, because of all the extra illumination provided by multiples. It is not necessary to know the physical parameters above the target area if we use data from a deviated-VSP acquisition geometry, but for surface seismic data we need a smooth model of the physical parameters above the target area to estimate the up- and downgoing wavefields at a depth level near the local target. For surface seismic data we used Joint Migration Inversion to estimate the up- and downgoing wavefields at a depth level near the target area.
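    A single-trace illustration of the convolution-type representation used above: if the upgoing wavefield at the datum is the target impulse response convolved with the downgoing wavefield, a damped frequency-domain division recovers it (the thesis' iterative sparse inversion over many sources is not reproduced):

```python
import numpy as np

def estimate_impulse_response(down, up, eps=1e-3):
    """Model the upgoing wavefield at the datum as the target impulse
    response convolved with the downgoing wavefield, and invert by a
    damped spectral division (single trace, circular convolution)."""
    D, U = np.fft.rfft(down), np.fft.rfft(up)
    R = U * np.conj(D) / (np.abs(D) ** 2 + eps)
    return np.fft.irfft(R, n=len(down))

# Synthetic check: a two-spike target response convolved with a random
# downgoing wavefield is recovered from the resulting upgoing data.
rng = np.random.default_rng(1)
down = rng.standard_normal(256)
true_r = np.zeros(256); true_r[20], true_r[55] = 1.0, -0.5
up = np.fft.irfft(np.fft.rfft(down) * np.fft.rfft(true_r), n=256)
est = estimate_impulse_response(down, up)
print(np.round(est[[20, 55]], 2))   # ~ [ 1.  -0.5]
```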
  • Item (Open Access)
    Structural constraints for image-based inversion methods
    (Universidade Federal do Pará, 2016-04-22) MACIEL, Jonathas da Silva; COSTA, Jessé Carvalho; http://lattes.cnpq.br/7294174204296739
    This thesis presents two methodologies of structural regularization for Wave-Equation Migration Velocity Analysis and Joint Migration Inversion: cross-gradient regularization and filtering with morphological operators. In Wave-Equation Migration Velocity Analysis, the cross-gradient regularization aims to constrain the velocity contrasts to the reflectivity map by enforcing parallelism between the velocity gradient vector and the image gradient vector. We propose cross-gradient versions of the objective functions Differential Semblance, Stack Power, and Partial Stack Power. We combine Partial Stack Power with its cross-gradient version in order to gradually increase the resolution of the velocity model without compromising the fit of its long wavelengths. In Joint Migration Inversion, we propose applying the morphological operators of erosion and dilation to precondition the velocity model at each iteration. The operators use the reflectivity map to mark regions with the same value of the physical property; they homogenize each geological layer and accentuate the velocity contrast at its edges. Structural constraints not only reduce the ambiguity in estimating a velocity model, but also make the migration/inversion methods more stable, reducing artifacts, favoring geologically plausible solutions, and accelerating the convergence of the objective function.
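    A minimal sketch of the cross-gradient function in 2-D; penalizing its squared norm enforces the parallelism of gradients described above (grid and fields are illustrative):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Cross-gradient function t = grad(m1) x grad(m2) (scalar in 2-D).
    Penalizing sum(t**2) drives the two gradients to be parallel, coupling
    the structure of the velocity model to that of the image."""
    m1x, m1z = np.gradient(np.asarray(m1, float), dx, dz)
    m2x, m2z = np.gradient(np.asarray(m2, float), dx, dz)
    return m1x * m2z - m1z * m2x

# Two fields with identical structure have zero cross-gradient everywhere:
z = np.linspace(0, 1, 50)[:, None] * np.ones((1, 60))
print(np.abs(cross_gradient(1500 + 1000 * z, 2 * z)).max())  # ~0.0
```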
  • Item (Open Access)
    Análise de processos oceanográficos no estuário do rio Pará
    (Universidade Federal do Pará, 2016-11-04) ROSÁRIO, Renan Peixoto; ROLLNIC, Marcelo; http://lattes.cnpq.br/6585442266149471
    This thesis investigated physical oceanographic processes in the Pará River estuary, focusing on saline intrusion and hydrodynamics. The choice of this topic arose from the motivation to consolidate the understanding of hydrodynamic and hydrographic issues in the Pará River estuary, since this region of the Amazon Coastal Zone is still a challenge to researchers. The first step was to define the methods and parameters needed to obtain the best coverage of the data in time and space. In this context, direct observations were conducted in the estuary during two periods, low and high river discharge, measuring velocity, longitudinal and vertical salinity profiles, and temperature profiles. Furthermore, in an unprecedented effort, salinity and water level (tide) were monitored for one year and ten months at strategic points of the estuary. The main conclusion obtained from this data set was the identification of saltwater intrusion into the Pará River estuary, reaching about 100 km from the mouth. The salinity intrusion is modulated by river discharge (seasonal variability) and by tidal energy (daily variability). The Stokes drift generated by tidal propagation in the estuary was responsible for the net landward salt flux. The innermost portion of the estuary (more than 60 km from the mouth) does not show gravitational circulation, and the salt transport there is performed entirely by turbulent diffusion; in the outer portion, the resulting current reverses with depth, and both advective and diffusive processes contribute significantly to the salt transport in the estuary.
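    A small illustration, on synthetic time series, of the tidally averaged salt-flux decomposition underlying the advection-versus-diffusion discussion above (numbers are illustrative, not observed values):

```python
import numpy as np

def salt_flux_decomposition(u, s):
    """Decompose the tidally averaged salt flux <u*s> into an advective
    part <u><s> and a tidal/turbulent correlation part <u'*s'>."""
    u, s = np.asarray(u, float), np.asarray(s, float)
    adv = u.mean() * s.mean()
    corr = ((u - u.mean()) * (s - s.mean())).mean()
    return adv, corr          # adv + corr == np.mean(u * s)

t = np.linspace(0.0, 2.0 * np.pi, 745)       # one idealized tidal cycle
u = 0.1 + 0.8 * np.sin(t)                    # along-channel velocity (m/s)
s = 5.0 + 2.0 * np.sin(t + 0.3)              # salinity, lagging the tide
print(salt_flux_decomposition(u, s))         # advective vs. correlation parts
```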
  • Item (Open Access)
    Atenuação de múltiplas pelo método WHLP-CRS
    (Universidade Federal do Pará, 2003-01-28) ALVES, Fábio José da Costa; LEITE, Lourenildo Williame Barbosa; http://lattes.cnpq.br/8588738536047617
    In the sedimentary basins of the Amazon region, the generation and accumulation of hydrocarbons is related to the presence of diabase sills. These rocks present a great impedance contrast with the host rocks, which causes the generation of internal and surface-related multiples with amplitudes similar to those of the primary events. These multiples can predominate over the information originating at deeper interfaces, complicating the processing, interpretation, and imaging of the seismic section. In the present research work, we performed multiple attenuation in synthetic common-source (CS) seismic sections by combining the Wiener-Hopf-Levinson prediction (WHLP) method and the common-reflection-surface (CRS) stack, a combination we call the WHLP-CRS method. The deconvolution operator is calculated trace by trace from the true amplitudes of the seismic section, a strategy that makes the multiple-attenuation process efficient. Multiple identification is carried out in the zero-offset (ZO) section simulated by the CRS stack, applying the periodicity criterion between a primary and its repeated multiples. The wavefront attributes obtained by the CRS stack are employed to position the moving windows in the time-space domain, and these windows are used to calculate the WHLP-CRS operator for the multiple attenuation carried out in the CS sections. The present research had several aims: first, to avoid the inconveniences of the processed ZO section; second, to design and apply operators in the CS configuration; third, to extend the WHL method to curved interfaces; and fourth, to build on the good results of the new CRS-stack technology, whose applications extend to migration, tomography, inversion, and AVO.
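    A toy sketch of Wiener-Hopf-Levinson predictive deconvolution, the WHLP ingredient above: the prediction filter is obtained from the trace autocorrelation via the Toeplitz normal equations, with the prediction distance set to the multiple period (filter length and trace are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, n_coef, lag):
    """Design a prediction filter of length n_coef at prediction distance
    `lag` (the multiple period, in samples) from the trace autocorrelation,
    then subtract the predicted (multiple) energy from the trace."""
    x = np.asarray(trace, float)
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:]          # autocorrelation r[0..]
    a = solve_toeplitz(r[:n_coef], r[lag:lag + n_coef])  # Toeplitz normal equations
    pred = np.zeros(n)
    for i, ai in enumerate(a):                           # predicted multiples
        pred[lag + i:] += ai * x[:n - lag - i]
    return x - pred

# Primary at sample 50 plus its first-order multiple (period 40 samples):
tr = np.zeros(200); tr[50] = 1.0; tr[90] = -0.5
out = predictive_decon(tr, n_coef=10, lag=40)
print(np.round(out[[50, 90]], 2))   # primary kept, multiple attenuated
```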
  • Item (Open Access)
    Tomografia eletromagnética para caracterização de reservatórios de hidrocarbonetos
    (Universidade Federal do Pará, 2003-10-03) BAPTISTA, João Júnior; RIJO, Luiz; http://lattes.cnpq.br/3148365912720676
    In oil production, monitoring reservoir parameters (permeability, porosity, saturation, pressure, etc.) is important for reservoir management. Changes in the dynamic reservoir parameters induce variations in reservoir flow, for example pressure losses, making the oil extraction process more difficult. Fluid injection increases the internal energy and the pressure of the reservoir, stimulating the movement of oil toward the extraction wells. The crosswell electromagnetic method can become an efficient technique for monitoring injection processes, given that the measured response is very sensitive to the percolation of conductive fluid through the sediments. This thesis presents the results of a very efficient electromagnetic tomography algorithm applied to synthetic data. The imaging scheme assumes cylindrical symmetry around a source consisting of a magnetic dipole. During the imaging process we used 21 transmitters and 21 receivers distributed within two wells 100 meters apart. For the forward problem, the finite element method was applied to the Helmholtz equation for the secondary electric field. It is demonstrated that the resulting algorithm is not subject to the restrictions imposed by the Born and Rytov approximations; therefore, it can be efficiently applied to electrical conductivity contrasts as large as 2 to 100, frequencies from 0.1 to 1000.0 kHz, and scatterers of any dimensions. The inverse problem was solved using the stabilized Marquardt scheme, which seeks the solution iteratively. The inverted synthetic data, with added Gaussian noise, are the vertical magnetic component, separated into its real (in-phase) and imaginary (quadrature) parts. Without constraints, the inverse tomography problem is totally unstable. To stabilize the inverse solution, absolute and relative constraints were used, allowing high-definition images to be produced. The results show that the resolution is better in the vertical direction than in the horizontal one and is also a function of the source operating frequency. The position and attitude of the target are well recovered. These results show that constraints can attenuate or eliminate the poor resolution.
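    A minimal sketch of one stabilized Marquardt iteration of the kind used for the inverse problem above, demonstrated on a tiny linear test problem with Gaussian noise (sizes and damping are illustrative):

```python
import numpy as np

def marquardt_step(jac, resid, m, lam):
    """One stabilized Marquardt iteration:
    m_new = m + (J^T J + lam*I)^(-1) J^T r,
    with J the Jacobian (sensitivities) and r the data residual."""
    JTJ = jac.T @ jac
    return m + np.linalg.solve(JTJ + lam * np.eye(JTJ.shape[0]), jac.T @ resid)

# Tiny linear test problem d = G m with Gaussian noise:
rng = np.random.default_rng(2)
G = rng.standard_normal((30, 5))
m_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
d = G @ m_true + 0.01 * rng.standard_normal(30)
m = np.zeros(5)
for _ in range(10):
    m = marquardt_step(G, d - G @ m, m, lam=0.1)
print(np.round(m, 2))   # converges to ~m_true
```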
  • Item (Open Access)
    Imageamento da porosidade através de perfis geofísicos de poço
    (Universidade Federal do Pará, 2004-01-27) MIRANDA, Anna Ilcéa Fischetti; ANDRADE, André José Neves; http://lattes.cnpq.br/8388930487104926
    Porosity images are graphical representations of the lateral distribution of rock porosity estimated from well log data. We present a methodology to produce this geological image entirely independently of interpreter intervention, with an interpretative algorithmic approach based on two types of artificial neural networks. The first is based on a competitive neural layer and is constructed to perform an automatic interpretation of the classical ρb-ΦN cross-plot, producing the log zonation and the porosity estimation. The second is a feed-forward neural network with radial basis functions, designed to perform spatial data integration in two steps: the first refers to well log correlation, and the second produces the estimate of the lateral porosity distribution. This methodology should aid the interpreter in defining the reservoir geological model and, perhaps more importantly, help in efficiently developing strategies for oil or gas field development. The resulting porosity images are very similar to conventional geological cross-sections, especially in depositional settings dominated by clastics, where a color map scaled in porosity units illustrates the porosity distribution and the geometric disposition of the geological layers along the section. The methodology is applied to actual well log data from the Lagunillas Formation, in the Lake Maracaibo basin, western Venezuela.
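    A one-dimensional illustration of radial-basis-function interpolation, the mapping a radial-basis network performs when spreading porosity laterally between wells (positions, porosities, and shape parameter are invented for the example):

```python
import numpy as np

def rbf_interpolator(wells_x, values, eps=0.5):
    """Gaussian radial-basis-function interpolation: solve for weights that
    reproduce the values at the well positions, then evaluate anywhere."""
    wells_x = np.asarray(wells_x, float)
    A = np.exp(-(eps * (wells_x[:, None] - wells_x[None, :])) ** 2)
    w = np.linalg.solve(A, np.asarray(values, float))
    return lambda x: np.exp(-(eps * (np.atleast_1d(x)[:, None] - wells_x)) ** 2) @ w

# Porosity (fraction) known at four well positions (km along the section):
phi_at = rbf_interpolator([0.0, 1.2, 2.5, 4.0], [0.22, 0.18, 0.25, 0.20])
print(np.round(phi_at(np.linspace(0, 4, 5)), 3))
```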
  • Item (Open Access)
    Paleomagnetismo de rochas vulcânicas do Nordeste do Brasil e a época da abertura do Oceano Atlântico Sul
    (Universidade Federal do Pará, 1983-12-28) GUERREIRO, Sonia Dias Cavalcanti; SCHULT, Axel
    In the first part of this work, palaeomagnetic and rock magnetism investigations were carried out on volcanic samples from the Northeast of Brazil. The ages of the samples span the Jurassic and Cretaceous periods. To accomplish this task, four areas were studied and a total of 495 samples from 56 sites were analyzed. A portable drilling machine with a 2.5 cm core diameter was used to collect the samples, which were oriented by means of a magnetic compass and a clinometer. The specimens were submitted to alternating-field demagnetization and, in a few cases, to thermal demagnetization. Giving unit weight to each site, the mean direction of the characteristic remanent magnetization of each of the studied areas was determined. The Jurassic volcanic rocks of the western part of the Maranhão Basin (Porto Franco - Estreito) yielded the mean direction: declination D = 3.9°, inclination I = -17.9°, with a 95% confidence circle α95 = 9.3°, precision parameter k = 17.9, and number of sites N = 15. All sites showed normal polarity. For this area a palaeomagnetic pole was determined with coordinates 85.3°N, 82.5°E (95% confidence circle A95 = 6.9°), which is situated near other known palaeomagnetic poles for this period. The Lower Cretaceous rocks of the eastern part of the Maranhão Basin (Teresina-Picos-Floriano) yielded a mean direction of the characteristic remanent magnetization with D = 174.7°, I = 6.0°, α95 = 2.8°, k = 122, N = 21. All sites showed reversed polarity. The calculated palaeomagnetic pole associated with these rocks has coordinates 83.6°N, 261.0°E (A95 = 1.9°) and is in agreement with other South American poles of the same age. In Rio Grande do Norte, a swarm of Lower Cretaceous tholeiitic dikes was studied, with a characteristic mean direction of D = 186.6°, I = 20.6°, α95 = 14.0°, k = 12.9, and N = 10. The sites in this area showed mixed polarity. The computed palaeomagnetic pole is located at 80.6°N and 94.8°E, with A95 = 9.5°. The study of the volcanic rocks of the magmatic province of Cape Santo Agostinho yielded the following values for the characteristic remanent magnetization: D = 0.4°, I = -20.6°, α95 = 4.8°, k = 114, N = 9. All sites showed normal polarity, and the calculated palaeomagnetic pole has the coordinates 87.6°N, 135°E, with A95 = 4.5°. The secular variation of the obtained directions was discussed to verify that each pole presented here is really a palaeomagnetic pole. The magnetic minerals of these samples were analyzed by thermomagnetic curves and by X-ray diffraction. In most cases the magnetic phase in the rocks is mainly titanomagnetite with low titanium content. Maghemite and sometimes hematite, usually products of weathering, did not obscure the initial thermoremanent magnetization of these rocks. Generally, the determined Curie temperatures lie between 500-600 °C. It was frequently observed that the exsolved titanomagnetite has one phase near magnetite and another phase rich in titanium, near ilmenite, as a result of high-temperature oxidation. The second part of this work deals with the determination of the time of the opening of the South Atlantic Ocean by means of palaeomagnetic data. Instead of using the polar wander paths of the continents (the usual method), statistical tests were applied that give the probability that a certain configuration of the two continents is consistent or not with the palaeomagnetic data for a chosen period.
    For the Triassic, Jurassic, Lower Cretaceous, and Middle-Upper Cretaceous periods, the palaeomagnetic poles for Africa were compared with the respective poles for South America in the pre-drift configuration by means of an F-test. Other configurations, indicating some separation between the two continents, were also tested. The results showed that the pre-drift reconstruction of Martin et al. (1981) is consistent with the palaeomagnetic data for the Triassic, but that there is a significant difference between the respective Jurassic, Lower Cretaceous, and Middle-Upper Cretaceous palaeopoles of the two continents, with a probability of error of less than 5%. Other pre-drift reconstructions were tested, with the same results. Comparing the pole positions for South America and Africa in a configuration indicating a small separation between the two continents, such as the one suggested by Sclater et al. (1977) for 110 m.y. B.P., one finds a significant difference for the Triassic data. For the Jurassic and Lower Cretaceous palaeomagnetic poles, which are, however, earlier than the suggested date of the reconstruction, the results are consistent with that separation of the continents, with a probability of error of less than 5%. The reconstruction for 80 m.y. B.P., after Francheteau (1973), indicating a larger separation between the continents, is consistent with the Middle-Upper Cretaceous palaeomagnetic poles. Assuming that crustal blocks move relative to each other as rigid blocks, the results of the F-test indicated that South America and Africa were close together during the Triassic. There was, nevertheless, a small separation between the continents in the Jurassic, probably due to an early rifting event, and this separation remained stationary until Lower Cretaceous time. This result differs from most of the papers that discuss the opening of the South Atlantic Ocean. The Middle-Upper Cretaceous data are compatible with a fast and significant spreading of the continents in that period.
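    A small sketch of the standard Fisher (1953) statistics behind the site means, k, and α95 values quoted above (the input directions are invented, not the thesis data):

```python
import numpy as np

def fisher_mean(decl_deg, incl_deg):
    """Fisher statistics for site-mean directions: mean declination and
    inclination, precision parameter k = (N-1)/(N-R), and the 95% confidence
    circle a95 from cos(a95) = 1 - (N-R)/R * ((1/0.05)**(1/(N-1)) - 1)."""
    D, I = np.radians(decl_deg), np.radians(incl_deg)
    xyz = np.c_[np.cos(I) * np.cos(D), np.cos(I) * np.sin(D), np.sin(I)]
    R = np.linalg.norm(xyz.sum(0))                # resultant vector length
    N = len(D)
    x, y, z = xyz.sum(0) / R
    k = (N - 1) / (N - R)
    a95 = np.degrees(np.arccos(1.0 - (N - R) / R * ((1 / 0.05) ** (1 / (N - 1)) - 1.0)))
    return np.degrees(np.arctan2(y, x)) % 360, np.degrees(np.arcsin(z)), k, a95

# Example: a tight cluster of normal-polarity site directions
print(fisher_mean([3.0, 5.5, 2.0, 4.1, 3.9], [-16.0, -19.0, -17.5, -18.2, -16.8]))
```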
  • Item (Open Access)
    Empilhamento sísmico por superfície de reflexão comum: um novo algoritmo usando otimização global e local
    (Universidade Federal do Pará, 2001-10-25) GARABITO CALLAPINO, German; CRUZ, João Carlos Ribeiro; http://lattes.cnpq.br/8498743497664023; HUBRAL, Peter; http://lattes.cnpq.br/7703430139551941
    Using an arbitrary source-receiver configuration and no knowledge of the velocity model, the recently introduced seismic stacking method called Common Reflection Surface (CRS) simulates a zero-offset (ZO) section from multi-coverage seismic reflection data. For 2-D acquisition, it provides as by-products three normal-ray parameters: 1) the emergence angle (β0); 2) the radius of curvature of the Normal-Incidence-Point wave (RNIP); and 3) the radius of curvature of the Normal wave (RN). The CRS stack is based on the hyperbolic paraxial traveltime approximation, which depends on β0, RNIP, and RN. This thesis presents a new CRS stack algorithm based on a two-parameter plus one-parameter search strategy, combining global and local optimization methods to determine the three parameters that define the stacking surface (or operator). This is performed in three steps: 1) a two-parameter search applying global optimization to determine β0 and RNIP; 2) a one-parameter search applying global optimization to determine RN; and 3) a three-parameter search applying local optimization to refine the three parameters, using as initial approximation the parameter triple from the two earlier steps. The Simulated Annealing (SA) algorithm is used in the first two steps and the Variable Metric (VM) algorithm in the third. To simulate conflicting dip events, for each ZO sample where intersecting events interfere, an additional parameter triple corresponding to a local minimum is determined. Stacking along the respective operator for each particular event allows their interference to be simulated in the ZO section by superposition. This new CRS stack algorithm was applied to synthetic data sets, providing high-quality simulated ZO sections and high-precision stacking parameters in comparison with the forward modeling. Using the hyperbolic traveltime approximation with identical radii of curvature, RNIP = RN, an algorithm called the Common Diffraction Surface (CDS) stack was developed to simulate ZO sections for diffracted waves. Similarly to the CRS stack procedure, this new algorithm also uses the SA and VM optimization methods to determine the optimal parameter pair (β0, RNIP) that defines the best CDS operator. The main features of the algorithm are data normalization, common-offset data, a large aperture of the CDS operator, and a positive search space for RNIP. The application of the CDS stack algorithm to a synthetic dataset containing reflected and diffracted wavefields provides as its main result a simulated ZO section with clearly defined diffracted events. The post-stack depth migration of the ZO section correctly locates the discontinuities of the second interface.
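    For concreteness, the 2-D hyperbolic CRS traveltime surface that the three parameters (β0, RNIP, RN) define, in its standard form (parameter values are illustrative):

```python
import numpy as np

def crs_traveltime(xm, h, x0, t0, v0, beta0, r_nip, r_n):
    """Hyperbolic CRS traveltime (2-D): stacking surface defined by the
    normal-ray parameters (beta0, R_NIP, R_N) for midpoint xm, half-offset h,
    central point x0, zero-offset time t0 and near-surface velocity v0."""
    dm = np.asarray(xm, float) - x0
    sb, cb2 = np.sin(beta0), np.cos(beta0) ** 2
    t2 = ((t0 + 2.0 * sb * dm / v0) ** 2
          + 2.0 * t0 * cb2 / v0 * (dm**2 / r_n + np.asarray(h, float) ** 2 / r_nip))
    return np.sqrt(t2)

print(crs_traveltime(xm=2100.0, h=200.0, x0=2000.0, t0=1.2, v0=1800.0,
                     beta0=np.radians(8.0), r_nip=1100.0, r_n=4000.0))
```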
  • Item (Open Access)
    Interpolação de dados de campo potencial através da camada equivalente
    (Universidade Federal do Pará, 1992-09-15) MENDONÇA, Carlos Alberto; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491
    The equivalent layer technique is a useful tool for incorporating, in the interpolation of potential field data, the constraint that the anomaly is a harmonic function. However, this technique can be applied only to surveys with a small number of data points, because it demands the solution of a least-squares problem involving a linear system whose order is the number of data. In order to make the application of the equivalent layer technique feasible for surveys with large data sets, we developed the concept of equivalent data and the EGTG method. Basically, the equivalent data principle consists in selecting a subset of the data such that the least-squares fit obtained using only this subset also fits all the remaining data within a threshold value. The selected data are called equivalent data and the remaining data, redundant data. This is equivalent to splitting the original linear system into two subsystems, the first related to the equivalent data and the second to the redundant data, in such a way that the least-squares solution of the first reproduces all the redundant data. This procedure enables fitting all the measured data using only the equivalent data (rather than the entire data set), reducing the amount of computation and the demand for computer memory. The EGTG method optimizes the evaluation of dot products in solving least-squares problems. First, the dot product is identified as a discrete integration of a known analytic integral; the evaluation of the discrete integral is then approximated by the evaluation of the analytic integral. This method should be applied when the evaluation of the analytic integral requires less computational effort than the discrete integration. To determine the equivalent data, we developed two algorithms, namely DOE and DOEg. The first identifies the equivalent data of the whole linear system, while the second identifies the equivalent data in subsystems of the entire linear system; each DOEg iteration consists of one application of the DOE algorithm to a given subsystem. The DOE algorithm yields an interpolating surface that fits all data points, allowing global interpolation. The DOEg algorithm, on the other hand, optimizes local interpolation, because it employs only the equivalent data, while other current algorithms for local interpolation employ all the data. The interpolation methods using the equivalent layer technique were comparatively tested against the minimum curvature method using synthetic data produced by a prismatic source model, with the interpolated values compared with the true values evaluated from the source model. In all tests, the equivalent layer method performed better than the minimum curvature method. In particular, for a poorly sampled anomaly, the minimum curvature method does not recover the anomaly at points where it presents high curvature, and for data acquired at different levels the minimum curvature method presented the worst performance, while the equivalent layer produced very good results. By applying the DOE algorithm, it was possible to fit, using an equivalent layer model, 3137 free-air gravity data from the marine Equant-2 Project and 4941 total-field anomaly data from the aeromagnetic Carauari-Norte Project. The DOEg algorithm was also applied to the same data sets, optimizing the local interpolation.
    It is important to stress that none of these applications would have been possible without the concept of equivalent data. The ratio between the CPU times required by the minimum curvature method and by the equivalent layer method (executing the programs with the same memory allocation) was 1:31 in global interpolation and 1:1 in local interpolation.
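    A toy equivalent-layer interpolation in one dimension: point sources at constant depth are fitted to the data by damped least squares and then used to predict the harmonic field anywhere on the line (geometry and anomaly are invented; the equivalent-data selection of DOE/DOEg is not reproduced):

```python
import numpy as np

def equivalent_layer_fit(x_obs, g_obs, x_src, z_src, mu=1e-8):
    """Fit unit point sources at depth z_src below the profile to
    gravity-like data by damped least squares; return a predictor that
    interpolates the fitted harmonic field along the line."""
    A = z_src / ((x_obs[:, None] - x_src[None, :]) ** 2 + z_src**2) ** 1.5
    w = np.linalg.solve(A.T @ A + mu * np.eye(len(x_src)), A.T @ g_obs)
    def predict(x):
        Ap = z_src / ((np.atleast_1d(x)[:, None] - x_src) ** 2 + z_src**2) ** 1.5
        return Ap @ w
    return predict

x_obs = np.linspace(0.0, 10.0, 15)                  # km (irregular in practice)
g_obs = 30.0 / ((x_obs - 4.0) ** 2 + 4.0)           # synthetic anomaly (mGal)
pred = equivalent_layer_fit(x_obs, g_obs, x_src=np.linspace(0, 10, 15), z_src=1.0)
print(np.round(pred(np.array([3.5, 4.0, 4.5])), 2)) # close to 30/((x-4)**2+4)
```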
  • Item (Open Access)
    Atenuação de múltiplas e compressão do pulso fonte em dados de sísmica de reflexão utilizando o filtro Kalman-Bucy
    (Universidade Federal do Pará, 2003-01-24) ROCHA, Marcus Pinto da Costa da; LEITE, Lourenildo Williame Barbosa; http://lattes.cnpq.br/8588738536047617
    The main objective of this work is the study and application of the Kalman-Bucy method in the processes of spike deconvolution and predictive deconvolution, treating the observed data as non-stationary. The data used in this work are synthetic, so this thesis has the character of numerical research. The spike deconvolution operator is obtained from the theory of Crump (1974), making use of the solution of the Wiener-Hopf equation presented by Kalman-Bucy in continuous and discrete forms for stationary processes. The prediction operator (KBCP) is based on the theories of Crump (1974) and Mendel et al. (1979). Its structure resembles the Wiener-Hopf filter; however, while the Wiener-Hopf coefficients are obtained through the autocorrelation, the KBCP coefficients are obtained from the function bi(k). The problem is defined in two steps: the first consists of the generation of the signal, and the second of its evaluation. The deconvolution performed is classified as statistical, being based on a model of the properties of the recorded signal and its representation. The methods were applied only to synthetic common-shot sections obtained from models with continuous interfaces and homogeneous layers.
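    A minimal scalar discrete Kalman filter, the recursive predict-update machinery the Kalman-Bucy approach builds on; this is not the thesis' deconvolution operator itself, and all constants are illustrative:

```python
import numpy as np

def kalman_filter_1d(z, q=1e-4, r=0.05):
    """Scalar discrete Kalman filter with a random-walk state model:
    predict, then update with the Kalman gain at every sample, so the
    operator adapts along the trace instead of being fixed like a
    stationary Wiener filter."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p = p + q                   # predict (random-walk model)
        k = p / (p + r)             # Kalman gain
        x = x + k * (zi - x)        # update with the innovation
        p = (1.0 - k) * p
        out[i] = x
    return out

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 3 * t) * (t < 0.5)     # non-stationary signal
noisy = signal + 0.2 * np.random.default_rng(3).standard_normal(400)
print(np.round(np.mean((kalman_filter_1d(noisy, q=5e-3) - signal) ** 2), 4))
```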
  • Item (Open Access)
    Imageamento homeomórfico de refletores sísmicos
    (Universidade Federal do Pará, 1994-10-06) CRUZ, João Carlos Ribeiro; HUBRAL, Peter; http://lattes.cnpq.br/7703430139551941
    This thesis presents a new seismic stacking technique called homeomorphic imaging, applicable to the imaging of seismic reflectors in a two-dimensional, inhomogeneous, isotropic medium. The technique is based on geometrical ray approximations and on topological properties of the reflection surfaces, using the concepts of wavefront, incidence angle, wavefront radius and caustic, and ray trajectory. Considering a circle as the geometrical approximation of the propagating wavefront, it is possible to define different homeomorphic imaging methods, depending on the processing configuration: 1) Common Source (Receiver) Element (CS(R)E), which relates to a set of seismograms with a single source (receiver) and considers a real reflected wavefront; 2) Common Reflecting Element (CRE), which relates to a set of seismograms with a single reflection point and considers a wavefront hypothetically generated at that reflection point; and 3) Common Evolute Element (CEE), which relates to a set of seismograms with each source-geophone pair located at the same point on the seismic line and considers a wavefront hypothetically generated at the center of curvature of the reflector. The first method produces a stacked seismic section along arbitrary central rays; the last two produce zero-offset seismic sections. These methods also yield two other sections, called the radiusgram and the anglegram: the former contains the radii of the wavefront, and the latter its emergence angles, at the moment it reaches the observation surface. The seismic stacking uses a local time correction applied to the traveltime of a ray that leaves the source and, after reflection, is registered as a primary reflection at a geophone, relative to the reference time given by the traveltime of the central ray. The formula used for the time correction depends on the radius and emergence angle of the wavefront and on the velocity, which is considered constant near the seismic line. It can be shown that in this new technique the registered signal is not subjected to stretch effects as a consequence of the time correction, and that there is no reflection-point dispersal for dipping reflectors, in contrast with techniques based on NMO/DMO. In addition, since no a priori knowledge of a macro-model is required beyond the velocity near the seismic line, homeomorphic imaging can be applied to inhomogeneous models without losing the rigor of the formulation.
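    One plausible reading of the circular-wavefront time correction, offered only as a geometric sketch: the wavefront is approximated by a circle whose center lies a distance r back along the central ray, and the correction is the extra path to that circle divided by the near-line velocity (the thesis' exact formula may differ):

```python
import numpy as np

def local_time_correction(x, x0, t0, r, beta, v0):
    """Moveout sketch under a circular-wavefront assumption: the wavefront
    emerging at x0 with angle beta (to the surface normal) and radius r has
    its center at (x0 + r*sin(beta), r*cos(beta)) with z positive down; the
    stacking time at surface position x is t0 plus the extra path over v0."""
    cx, cz = x0 + r * np.sin(beta), r * np.cos(beta)
    dist = np.sqrt((np.asarray(x, float) - cx) ** 2 + cz**2)
    return t0 + (dist - r) / v0     # equals t0 exactly at x = x0

x = np.linspace(900.0, 1100.0, 5)
print(local_time_correction(x, x0=1000.0, t0=1.0, r=800.0,
                            beta=np.radians(10.0), v0=2000.0))
```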
  • Item (Open Access)
    Região do espaço que mais influencia em medidas eletromagnéticas no domínio da frequência: caso de uma linha de corrente sobre um semi-espaço condutor
    (Universidade Federal do Pará, 1994-07-28) BRITO, Licurgo Peixoto de; DIAS, Carlos Alberto; http://lattes.cnpq.br/9204009150155131
    One of the major interpretation problems in geophysics is determining the region of the subsurface that generates the main part of the signal. In this thesis, the position and size of this region, hereinafter called the main zone, have been found by modelling an electromagnetic system in which the source is an infinite line of electric current extended over a conductive half-space. The earth has been modelled as a conductive half-space containing an inhomogeneity, either an infinite layer or a prism of infinite length in the direction of the source line. The signal at the receiver of an electromagnetic system over a homogeneous conductive half-space differs from the one taken over a half-space that includes an inhomogeneity. This difference is a function of the position of the inhomogeneity relative to the transmitter-receiver system, among other parameters. Therefore, with the other parameters fixed, there is a specific position where this difference is maximal. Since this position depends on the conductivity contrast, the inhomogeneity dimensions, and the source frequency, instead of a single position one has a region where the inhomogeneity gives the maximum contribution to the measured signal. This region is called the main zone. Once the main zone is identified, targets in the subsurface can be located more precisely. Usually they are conductive parts of the earth with some specific interest; exploration can be facilitated and production costs reduced if these conductors are well identified during prospecting. A detectability function (∆) has been defined to measure the contribution of the inhomogeneity to the signal. The ∆ function has been computed using the amplitude and phase of the magnetic field components Hx and Hz, respectively tangential and normal to the earth's surface. The size and position of the main zone have been identified using the extrema of the ∆ function, which change with the conductivity contrast and with the size and depth of the inhomogeneities. Electromagnetic fields for one-dimensional models were calculated in hybrid form, by numerically solving integrals that were obtained analytically; two-dimensional models were computed numerically by the finite element technique. The maximum values of the ∆ function computed with the amplitude of Hx were chosen to locate the main zone, as this choice gives more stable results than the other amplitude and phase components, for both one- and two-dimensional models, when physical properties and geometric dimensions are changed. For the one-dimensional model, in which the inhomogeneity is an infinitely extended horizontal layer, the depth of its central plane was found to be p0 = 0.17 δ0, where δ0 is the plane-wave skin depth in a homogeneous half-space of conductivity σ1 equal to that of the background, at the frequency ω corresponding to the maximum value of ∆ calculated with the amplitude of Hx. For two-dimensional inhomogeneities, the coordinates of the central axis of the main zone were found to be d0 = 0.77 r0, where d0 is the horizontal distance from this axis to the source and r0 is the source-receiver separation, and p0 = 0.36 δ0, where p0 is the depth of this central axis and δ0 is the skin depth under the same conditions as in the one-dimensional case. If the values of r0 and δ0 are known, it is therefore possible to determine (d0, p0).
    Associating each value of the ∆ function (calculated using the amplitude of Hx) with the values d = 0.77 r and p = 0.36 δ for each r and ω used to generate ∆, a method to locate the main zone is suggested: the isovalue curves of ∆ are plotted to construct sections of ∆, and these sections indicate the position of the conductors and provide helpful insight into their geometric forms where the values of ∆ get close to the maximum.
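    A small helper implementing the empirical 2-D result quoted above, d0 = 0.77 r0 and p0 = 0.36 δ0, with the plane-wave skin depth computed from first principles (input values are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi    # vacuum magnetic permeability (H/m)

def main_zone(r0, sigma1, freq):
    """Central axis of the main zone for the 2-D case: horizontal distance
    d0 = 0.77*r0 from the source and depth p0 = 0.36*delta0, with
    delta0 = sqrt(2/(mu0*sigma1*omega)) the plane-wave skin depth in the
    homogeneous half-space of conductivity sigma1 (S/m)."""
    delta0 = np.sqrt(2.0 / (MU0 * sigma1 * 2.0 * np.pi * freq))
    return 0.77 * r0, 0.36 * delta0

# 100 m source-receiver separation over a 0.01 S/m half-space at 1 kHz:
d0, p0 = main_zone(r0=100.0, sigma1=0.01, freq=1000.0)
print(round(d0, 1), round(p0, 1))   # ~77.0 m and ~57.3 m
```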
  • Item (Open Access)
    Uma nova abordagem para interpretação de anomalias gravimétricas regionais e residuais aplicada ao estudo da organização crustal: exemplo da Região Norte do Piauí e Noroeste do Ceará
    (Universidade Federal do Pará, 1989-12-18) BELTRÃO, Jacira Felipe; HASUI, Yociteru; http://lattes.cnpq.br/3392176511494801; SILVA, João Batista Corrêa da; http://lattes.cnpq.br/1870725463184491
    Despite its great importance to the study of global geologic structures, interpreting gravity anomalies is not a trivial task, because the observed gravity field is the resultant of every gravity effect produced by every elementary density contrast. Therefore, in order to isolate the effects produced by shallow sources from those produced by deep sources, I present a new method for regional-residual separation and methods for interpreting each isolated component. The regional-residual separation is performed by approximating the regional field by a polynomial fitted to the observed field by a robust method. The method is iterative, with the least-squares fit as its starting value, and it minimizes the influence, on the regional fit, of observations containing substantial contributions from the residual field. The computed regional field is transformed into a map of vertical distances relative to a given datum in two stages. The first is the downward continuation of the regional field, which is assumed to be produced by a smooth interface separating two homogeneous media, the crust and the mantle, with a presumably known density contrast. The second stage transforms the downward-continued field into a map of vertical distances relative to a given datum by means of simple operations. This method presents two difficulties. The first is the instability inherent in the downward continuation operation; the use of a stabilizer is therefore mandatory, leading to an inevitable loss of resolution of the mapped features. The second difficulty, inherent in the gravity method, is the impossibility of determining absolute interface depths; however, knowledge of the absolute depth at one single point of the interface, obtained by independent means, allows the computation of all absolute depths. The computed residual component is transformed into an apparent-density map. This transformation consists in calculating, by linear inversion, the intensities of several prismatic sources, assuming that the real sources are confined to a horizontal slab and have density contrasts varying only along the horizontal directions. The performance of the regional-residual separation method was assessed in tests with synthetic data, always producing better results than either polynomial fitting by least squares or the spectral analysis method. The method for interpreting the regional component was applied to synthetic data, producing interfaces very close to the true ones; the limit of resolution of the mapped features depends not only on the degree of the fitting polynomial, but also on the limitation imposed by the gravity method itself. Interpreting the residual component requires a priori information about the depth and thickness of the slab confining the true sources. However, tests with synthetic data showed that reasonable estimates of the horizontal limits of the sources can be obtained even when the depth and thickness of the slab are not known. The ambiguity involving depth to the top, thickness, and apparent density can be visualized by means of curves of apparent density as a function of the presumed depth to the top of the slab, each curve corresponding to a particular assumed slab thickness; an analysis of the configuration of these curves allows a semi-quantitative interpretation of the depths of the real sources.
    The sequence of all three methods described above was applied to gravity data from northern Piauí and northwestern Ceará states. As a result, a crustal organization model was obtained, consisting of crustal thickenings and thinnings related to a compressive event that raised dense lower-crust rocks to shallower depths. This model is consistent with surface geological information, and the gravity interpretation also suggests the continuity of the Northwestern Ceará Shear Belt for more than 200 km under the sedimentary cover of the Parnaíba Basin. Although the sequence of methods presented here was developed for the study of large-scale crustal structures, it could also be applied to the interpretation of smaller structures, such as the basement relief of a sedimentary basin where the sediments have been intruded by mafic rocks.
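    A compact sketch of the robust regional fit described above: iteratively reweighted least squares starting from the plain least-squares polynomial, so that stations dominated by residual anomalies lose influence (weighting scheme and synthetic fields are illustrative):

```python
import numpy as np

def robust_regional(x, g, degree=2, n_iter=10, eps=1e-6):
    """Robust polynomial fit for regional-residual separation: the first
    pass (unit weights) is the least-squares fit; subsequent passes
    reweight by 1/|residual| (L1-style IRLS), downweighting observations
    contaminated by the residual field."""
    V = np.vander(np.asarray(x, float), degree + 1)
    w = np.ones(len(x))
    for _ in range(n_iter):
        c = np.linalg.lstsq(w[:, None] * V, w * g, rcond=None)[0]
        w = 1.0 / np.maximum(np.abs(g - V @ c), eps)
    return V @ c

x = np.linspace(0, 1, 101)
regional_true = 30.0 - 12.0 * x                          # smooth deep trend (mGal)
residual_true = 5.0 * np.exp(-((x - 0.5) / 0.05) ** 2)   # narrow shallow anomaly
g = regional_true + residual_true
# Maximum deviation of the robust regional from the true trend (small):
print(np.round(np.max(np.abs(robust_regional(x, g, 1) - regional_true)), 2))
```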