White Papers Archives - Meta | Innovative AI Analytics and Training Software https://www.exploremetakinetic.com/blog/category/white-papers/ beyond interactive Thu, 13 Apr 2023 04:20:11 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.1 https://www.exploremetakinetic.com/wp-content/uploads/2020/08/cropped-Group-1215@2x-1-32x32.png White Papers Archives - Meta | Innovative AI Analytics and Training Software https://www.exploremetakinetic.com/blog/category/white-papers/ 32 32 From Density Logs To Vertical Stress Magnitude https://www.exploremetakinetic.com/blog/from-density-logs-to-the-vertical-stress-magnitude/?utm_source=rss&utm_medium=rss&utm_campaign=from-density-logs-to-the-vertical-stress-magnitude Mon, 06 Jul 2020 21:01:11 +0000 https://www.exploremetakinetic.com/?p=1912 Assuming that the overburden stress (Sv), is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be explained by four independent components: In the last two Geomechanics blog posts we covered how “Extended Leak-off Tests” can be used to estimate the minimum horizontal stress (Shmin) magnitude and how […]

The post From Density Logs To Vertical Stress Magnitude appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
Assuming that the overburden stress (Sv) is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be described by four independent components:

  • vertical stress (Sv) magnitude
  • maximum horizontal stress (SHmax) magnitude
  • minimum horizontal stress (Shmin) magnitude
  • and the SHmax orientation

In the last two Geomechanics blog posts we covered how “Extended Leak-off Tests” can be used to estimate the minimum horizontal stress (Shmin) magnitude and how Geomechanics experts use “Image Logs” to determine the orientation of the maximum horizontal stress. In this post we explore how to calculate the third component, the vertical stress (Sv) magnitude, using density logs, and how different processing techniques can affect this estimate.

How do we estimate the vertical stress magnitude?

Precise knowledge of the vertical stress magnitude has become very important in drilling operations, particularly in deepwater areas and in regions with high-magnitude or shallow overpressures. The vertical stress at a specified depth is defined as the pressure exerted by the weight of the overlying rocks:

Sv(z) = ∫ ρ(z) g dz, integrated from the surface (z = 0) to depth z

where ρ(z) is the density, z is depth, and g is the acceleration due to gravity (~9.81 m/s2).

In offshore wells, Sv is equal to the pressure exerted by the weight of the water column from the surface to the seabed plus the weight of the sediment column down to the depth of interest. Therefore, Sv at any depth is easily calculated by integrating the density log from the seabed:

Sv(z) = ρw g zw + ∫ ρ(z) g dz, integrated from the seabed (depth zsb) to depth z

where ρw is the density of seawater, zw is the water depth, and zsb is the depth of the seabed.
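As a back-of-the-envelope sketch of this integration (not the metaKinetic implementation), Sv can be computed from a density log by trapezoidal integration; the depths, densities, and water depth below are made-up illustrative values:

```python
G = 9.81  # acceleration due to gravity, m/s^2

def vertical_stress_mpa(depths_m, densities_kgm3, rho_water=1030.0, water_depth_m=0.0):
    """Estimate Sv (MPa) at the deepest sample: weight of the water column
    plus trapezoidal integration of rho(z)*g over the sediment column."""
    if len(depths_m) != len(densities_kgm3) or len(depths_m) < 2:
        raise ValueError("need matching depth/density arrays with >= 2 samples")
    sv_pa = rho_water * G * water_depth_m  # water column from surface to seabed
    for i in range(1, len(depths_m)):
        dz = depths_m[i] - depths_m[i - 1]
        rho_avg = 0.5 * (densities_kgm3[i] + densities_kgm3[i - 1])
        sv_pa += rho_avg * G * dz
    return sv_pa / 1e6  # Pa -> MPa

# Hypothetical offshore well: 500 m of water, density log from seabed to 2000 m below
depths = [0.0, 1000.0, 2000.0]        # metres below seabed
densities = [2100.0, 2300.0, 2500.0]  # kg/m^3
sv = vertical_stress_mpa(depths, densities, water_depth_m=500.0)  # ~50.2 MPa
```

Note that the density log is taken in kg/m3 here; a log recorded in g/cm3 must be multiplied by 1000 first.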

This seems straightforward; however, there are three issues that must be addressed before calculating the vertical stress.

3 issues that must be addressed before calculating vertical stress

1. The density log must be correctly formatted

  • The depth scale on well logs is generally recorded relative to the distance along the hole below the rotary table. In deviated wells the ‘measured’ depth is greater than the true vertical depth below the surface. Hence, the calculated vertical stress is overestimated if the effect of wellbore deviation and the height of the rotary table above the surface are not taken into account.

2. The average density from the surface to the top of the density log must be estimated

  • To save on costs, geophysical logs are not usually available over the whole well interval. Therefore, rock density needs to be estimated from the top of the density log up to the surface.

3. The density log must be filtered to remove spurious data

  • Various corrections, such as environmental corrections and data filtering, need to be applied to the raw density data to remove erroneous measurements. The density log is highly prone to errors, particularly in washout or rugose zones where there is poor contact between the pad of the density tool and the wellbore wall.

Here are some recommendations on how to remove unreliable data from density logs:

Use the density log correction curve (DRHO) to isolate spurious density measurements; density data can be assumed inaccurate where the corresponding DRHO value is greater than 0.2 g/cm3.

The caliper log provides an alternative measure of borehole rugosity. If the caliper reading deviates from the bit size by more than ±5%, the density data is assumed to be affected by the rugose hole and should be removed.

A combination of filtering methods can be applied to ‘de-spike’ and edit the density log (e.g. using a running median filter) and remove anomalous measurements. Note that poor logging conditions manifest as sharp low-density spikes, often associated with enlarged caliper readings.

Finally, the density log needs to be smoothed and re-sampled (e.g. using average and MOD functions) to fill gaps and ensure a consistent calculation of vertical stress.
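Putting the recipe above together, a minimal cleanup routine might look like the sketch below. The DRHO (0.2 g/cm3) and caliper (±5% of bit size) cutoffs follow the text; the function name, window length, and sample values are illustrative assumptions:

```python
import statistics

def clean_density_log(rhob, drho, caliper, bit_size,
                      drho_max=0.2, caliper_tol=0.05, median_window=5):
    """Reject samples failing the DRHO or caliper criteria, then de-spike
    the survivors with a running median. Returns None where data was rejected."""
    n = len(rhob)
    kept = [
        rhob[i] if (abs(drho[i]) <= drho_max and
                    abs(caliper[i] - bit_size) <= caliper_tol * bit_size)
        else None
        for i in range(n)
    ]
    half = median_window // 2
    out = []
    for i in range(n):
        if kept[i] is None:
            out.append(None)
            continue
        window = [v for v in kept[max(0, i - half):i + half + 1] if v is not None]
        out.append(statistics.median(window))  # running median de-spike
    return out

rhob    = [2.40, 2.42, 1.90, 2.41, 2.43]  # g/cm^3, with one washout spike
drho    = [0.01, 0.02, 0.35, 0.01, 0.02]  # correction curve flags the spike
caliper = [8.6, 8.6, 9.5, 8.6, 8.6]       # inches
cleaned = clean_density_log(rhob, drho, caliper, bit_size=8.5)  # spike removed
```

The gap-filling and re-sampling step (averaging/MOD) would follow as a separate pass.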

Mojtaba Rajabi, Scientific Advisor

Vertical stress calculator

Above is a visualization of the Vertical Stress Calculator found in the metaKinetic platform where you can practice cleaning up the density log before calculating the vertical stress magnitude at any given depth.

Want access to this simulation and more on the metaKinetic platform? Contact us!


]]>
Microseismic location error simply explained https://www.exploremetakinetic.com/blog/microseismic-location-error-simply-explained/?utm_source=rss&utm_medium=rss&utm_campaign=microseismic-location-error-simply-explained Wed, 24 Jun 2020 11:52:00 +0000 https://www.exploremetakinetic.com/?p=548 What is microseismic location error?

The post Microseismic location error simply explained appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
You hear about microseismic location errors everywhere, and how they affect the use and interpretation of these data streams. Have you ever wondered how this error is calculated?

The location of a microseismic event is determined by evaluating how consistent observations such as picks and hodograms are with a series of test locations. Through intelligent choices of these test locations, the hope is that the location algorithm will find the location that best fits the observations.

The consistency of a location with the measured data is evaluated through a misfit function; a simple example is the average difference between the measured and modelled traveltimes (the average traveltime residual) from a given test location to the receivers.
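A toy version of such a misfit function, assuming straight rays in a homogeneous velocity model; the receiver geometry, velocity, and picks below are invented for illustration:

```python
import math

V = 3000.0  # assumed homogeneous P-wave velocity, m/s

def traveltime(src, rec):
    """Straight-ray traveltime in a homogeneous medium."""
    return math.dist(src, rec) / V

def misfit(test_loc, receivers, picks):
    """Average absolute traveltime residual for a candidate location."""
    residuals = [abs(p - traveltime(test_loc, r)) for r, p in zip(receivers, picks)]
    return sum(residuals) / len(residuals)

# Receivers on a surface line; "true" event at (500, 0, 1500) m with exact picks
receivers = [(x, 0.0, 0.0) for x in (0.0, 400.0, 800.0, 1200.0)]
true_loc = (500.0, 0.0, 1500.0)
picks = [traveltime(true_loc, r) for r in receivers]

# Grid search over candidate depths: the misfit is smallest at the true depth
best_z = min(range(1000, 2001, 100),
             key=lambda z: misfit((500.0, 0.0, float(z)), receivers, picks))
```

In practice the picks carry noise, so the misfit never reaches zero; the location error is read off from how far the test location can move before the misfit grows significantly.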

Microseismic Location Error calculation using misfit function

A schematic diagram of how location error and misfit relate to one another is shown above. In the example, the picks corresponding to the best-fitting location are plotted on the waveforms at the top left, with the misfit function for that location illustrated below it. When the location is shifted, the picks no longer align with the onsets of the waveforms. When the misfit just exceeds what can be considered adequate, a measure of location error can be calculated: if the shift in location is just starting to significantly degrade the fit of the picks to the waveforms, then the distance shifted is an estimate of the location error.

Above 👆 is the “Surface locator” simulation in the metaKinetic platform. This simulation facilitates understanding of the parameters that affect location accuracy. Want access to this simulation so you can investigate these parameters on your own? Contact us!


]]>
Extended Leak-off Test 101 https://www.exploremetakinetic.com/blog/extended-leak-off-test-101/?utm_source=rss&utm_medium=rss&utm_campaign=extended-leak-off-test-101 Thu, 14 May 2020 01:46:44 +0000 https://www.exploremetakinetic.com/?p=1766 Assuming that the overburden stress (Sv), is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be explained by four independent components: In the previous Geomechanics blog post we explored how SHmax orientation can be estimated using image logs. In this post we explore how the Leak-off Test (LOT) […]

The post Extended Leak-off Test 101 appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
Assuming that the overburden stress (Sv) is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be described by four independent components:

  • vertical stress (Sv) magnitude
  • maximum horizontal stress (SHmax) magnitude
  • minimum horizontal stress (Shmin) magnitude
  • and the SHmax orientation

In the previous Geomechanics blog post we explored how SHmax orientation can be estimated using image logs. In this post we explore how the Leak-off Test (LOT) and its extended version can be used to estimate the minimum horizontal stress (Shmin) magnitude.

What is a Leak-off Test anyway?

Leak-off Tests involve increasing the mud pressure in an open, isolated section of wellbore below the casing shoe in order to create a small tensile fracture, thereby measuring the strength of the open formation.

What is the Extended Leak-off Test (XLOT)?

The Extended Leak-off Test (XLOT) is a more comprehensive, longer leak-off test in which pumping is not stopped immediately after the leak-off pressure is observed; XLOTs are primarily conducted to obtain a fracture closure pressure.

An Extended Leak-off Test (XLOT) consists of one, two, or more complete cycles of leak-off, formation breakdown, fracture propagation, shut-in, and fracture closure. The figure below shows an idealized XLOT, illustrating the different stages on a pressure-time plot.

How do we estimate Shmin accurately?

In general, the leak-off pressure is less reliable for estimating Shmin and tends to slightly over-estimate it. However, when Extended Leak-off Test (XLOT) data are available, both the Fracture Closure Pressure (FCP) and the Instantaneous Shut-in Pressure (ISIP) can be used to provide more reliable estimates of Shmin. The FCP from subsequent cycles is more reliable than that of the first cycle, because by the second or third cycle the effect of tensile rock strength has been removed, providing a more accurate estimate of the Shmin magnitude.
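A minimal sketch of a double-tangent FCP pick: fit straight lines to the early and late parts of the shut-in pressure decline and take their intersection as the closure pressure. The pressure record and split index here are synthetic, not field data:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def double_tangent_fcp(t, p, split):
    """Intersect tangents fitted before and after index `split`; the
    pressure at the intersection is taken as the closure pressure."""
    m1, b1 = fit_line(t[:split], p[:split])
    m2, b2 = fit_line(t[split:], p[split:])
    t_close = (b2 - b1) / (m1 - m2)
    return m1 * t_close + b1

# Synthetic shut-in decline: steep before closure, gentle after (MPa vs minutes)
t = [0, 1, 2, 3, 4, 5, 6, 7]
p = [30.0, 28.0, 26.0, 24.0, 23.6, 23.2, 22.8, 22.4]
fcp = double_tangent_fcp(t, p, split=4)  # ~24.0 MPa
```

Choosing where to split the record is itself an interpretation step; in real data the two tangents are picked by eye or by curvature analysis rather than a fixed index.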

Mojtaba Rajabi, Scientific Advisor

Below 👇 is a visualization of picking FCP on XLOT data to estimate the Shmin magnitude in the metaKinetic platform.

Want to have access to this simulation and practice estimating Shmin magnitude by picking FCP on Extended Leak-off Test (XLOT) data using a double-tangent method? Contact us!


]]>
What’s b-value after all? https://www.exploremetakinetic.com/blog/whats-b-value-after-all/?utm_source=rss&utm_medium=rss&utm_campaign=whats-b-value-after-all Sat, 11 Apr 2020 14:50:00 +0000 https://www.exploremetakinetic.com/?p=1042 Simply put, b-value is the slope of a log-normal distribution of passive seismic event sizes, namely the number of events versus their magnitudes. It is often referred to as the Gutenburg-Richter relationship and used to describe the nature of earthquake distributions in both space and time.  Initially it was believed that b-values could be used as […]

The post What’s b-value after all? appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
Simply put, the b-value is the slope of the frequency-magnitude distribution of passive seismic events, i.e. the logarithm of the number of events versus their magnitudes. This is often referred to as the Gutenberg-Richter relationship, and it is used to describe the nature of earthquake distributions in both space and time. Initially it was believed that b-values could be used as a predictor of large-magnitude event occurrences, but more commonly they have been used to describe the stress and fracture state of a rock volume.

Use cases of the b-value

  • Hydraulic fracturing response
    • Depending on fracture complexity, b-values can change. It is well known that for large-scale earthquakes associated with faults, b-values tend towards 1 globally. If we follow Aki’s approach with D = 2b, where D is the fractal dimension, a b-value of 1.5 would represent a complex interconnected network of fractures in three dimensions. This suggests that volumes where b approaches 1.5 would be a good proxy for the stimulated reservoir volume that leads to production.
  • Magnitude of completeness for passive seismic data
    • The flattening of the slope at lower magnitudes represents incomplete recording over the volume of interest. Similarly, a lack of larger events, resulting in slopes steeper than 1.5, is related either to insufficient recording time or to restrictions on event generation imposed by geologic elements.

Restricting the analysis to the magnitude range of interest using a b-value plot yields a robust, unbiased passive seismic dataset that describes reservoir behavior more accurately.
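One common estimator, sketched here: after cutting the catalogue at an assumed magnitude of completeness Mc, Aki's maximum-likelihood formula gives b = log10(e) / (mean(M) − Mc). The catalogue below is synthetic, drawn from a Gutenberg-Richter distribution with a true b of 1:

```python
import math
import random

def b_value(magnitudes, mc):
    """Aki-style maximum-likelihood b-value for events with M >= mc."""
    m = [x for x in magnitudes if x >= mc]
    if not m:
        raise ValueError("no events above the completeness magnitude")
    return math.log10(math.e) / (sum(m) / len(m) - mc)

# Synthetic catalogue: Gutenberg-Richter magnitudes are exponentially
# distributed above Mc with rate b * ln(10)
random.seed(0)
mc_true, b_true = -2.0, 1.0
mags = [mc_true + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
b_est = b_value(mags, mc=mc_true)  # close to 1.0
```

A refinement often applied to binned magnitudes subtracts half the bin width from the mean; it is omitted here for simplicity.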


Ted Urbancic, Scientific Advisor
Seismicity distribution simulation from the metaKinetic platform.

Above 👆 is the “Seismicity Distribution” simulation in the metaKinetic platform, which facilitates defining the magnitude of completeness for one stage of hydraulic fracturing and interpreting the rock volume characteristics.

Want to have access to this simulation and investigate b-values? Contact us!


]]>
On Predicting Physical Rock Properties https://www.exploremetakinetic.com/blog/on-predicting-physical-rock-properties/?utm_source=rss&utm_medium=rss&utm_campaign=on-predicting-physical-rock-properties Mon, 06 Apr 2020 16:02:15 +0000 https://www.exploremetakinetic.com/?p=1024 The physical properties of rocks – their density and elastic moduli – along with their porosity and the fluids filling them can be accurately measured by various methods, both in situ as in well-logs and from laboratory tests of well cores. But to predict those properties from just a knowledge of mineral and fluid composition […]

The post On Predicting Physical Rock Properties appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
The physical properties of rocks – their density and elastic moduli – along with their porosity and the fluids filling them can be accurately measured by various methods, both in situ as in well-logs and from laboratory tests of well cores. But to predict those properties from just a knowledge of mineral and fluid composition and porosity is a complex process. This is due mainly to the fact that rocks of the exact same composition can have different properties depending on how those constituents are put together.

Geoscientists and petrophysicists use different rock models to simplify and facilitate the prediction of rock properties.

The most general models predict the upper and lower bounds of the physical rock properties; the actual rock properties must lie between those two bounds. There are two well-known rock property models of this type:

  • Voigt-Reuss
  • Hashin-Shtrikman

Voigt-Reuss Model

This model treats a rock as hard, rigid plates separated by softer materials. If you measure properties along the plates, you get the Voigt (upper, stiffer) bound, in which the forces are supported by the rigid framework. If you measure properties at right angles to the plates, you get the Reuss (lower, softer) bound, in which the forces are supported by the soft materials.

Hashin-Shtrikman Model

The Hashin-Shtrikman upper and lower bounds are a trifle more sophisticated. Instead of layer cakes, one can think of balls encased in other balls, each having different properties. A hard ball inside a soft one will be easier to squeeze than a hard ball with a soft center. Then imagine many of these balls put together into a larger system.
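As a sketch, here are the Voigt and Reuss averages for an arbitrary mixture, plus the standard two-phase Hashin-Shtrikman bound on bulk modulus; the quartz and water moduli are representative values, not measurements:

```python
def voigt(fractions, moduli):
    """Voigt (stiff, arithmetic) average."""
    return sum(f * m for f, m in zip(fractions, moduli))

def reuss(fractions, moduli):
    """Reuss (soft, harmonic) average."""
    return 1.0 / sum(f / m for f, m in zip(fractions, moduli))

def hs_bulk(f1, k1, mu1, k2):
    """Two-phase Hashin-Shtrikman bound on bulk modulus. Taking the stiffer
    phase as phase 1 gives the upper bound, the softer gives the lower."""
    f2 = 1.0 - f1
    return k1 + f2 / (1.0 / (k2 - k1) + f1 / (k1 + 4.0 * mu1 / 3.0))

# 60% quartz (K = 37 GPa, mu = 44 GPa) + 40% water (K = 2.2 GPa, mu = 0)
f_qtz = 0.6
k_voigt = voigt([f_qtz, 1 - f_qtz], [37.0, 2.2])
k_reuss = reuss([f_qtz, 1 - f_qtz], [37.0, 2.2])
k_hs_upper = hs_bulk(f_qtz, 37.0, 44.0, 2.2)     # quartz as phase 1
k_hs_lower = hs_bulk(1 - f_qtz, 2.2, 0.0, 37.0)  # water as phase 1
```

The actual rock bulk modulus must fall between the Hashin-Shtrikman bounds, which always sit inside the wider Voigt-Reuss bounds.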

Depending on which model is used to calculate the upper and lower bounds for a particular system of minerals and fluids, different rock physics templates (curves) result. Below 👇 is the “Rock Physics Modeling” simulation in the metaKinetic platform, which computes and illustrates how a chosen rock physics template changes as the rock properties vary.

This simulation is designed to convey the concept of how rock properties change with rock type and how complex it can be to design a rock to exact specifications. 

Michael Burianyk, Scientific Advisor

Want to have access to this simulation and build your own rock type and rock physics templates? Contact us!


]]>
Image Logs Untangled https://www.exploremetakinetic.com/blog/image-logs-untangled-geomechanics-101/?utm_source=rss&utm_medium=rss&utm_campaign=image-logs-untangled-geomechanics-101 Thu, 06 Feb 2020 23:11:00 +0000 https://www.exploremetakinetic.com/?p=988 Assuming that the overburden stress (Sv), is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be explained by four independent components: But how do Geomechanics experts indicate the orientation of the maximum horizontal stress? Often they interpret borehole image logs for this purpose! Borehole image logs provide […]

The post Image Logs Untangled appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
Assuming that the overburden stress (Sv) is one of the three principal stresses, the stress tensor (representing the in-situ stress field) in the Earth’s crust can be described by four independent components:

  • vertical stress (Sv) magnitude
  • maximum horizontal stress (SHmax) magnitude
  • minimum horizontal stress (Shmin) magnitude
  • and the SHmax orientation

But how do Geomechanics experts determine the orientation of the maximum horizontal stress? Often they interpret borehole image logs for this purpose!

Borehole image logs provide a 360° image of the wellbore wall based on petrophysical property contrasts and provide a reliable source for interpretation of BOs and DIFs.  

But what are the BOs and DIFs?

Drilling a well in pre-stressed rock essentially creates a “stress-free surface” with no shear stress on the borehole wall. This locally perturbs the far-field stress and can lead to the formation of drilling-induced tensile fractures (DIFs) and wellbore breakouts (BOs).

Borehole breakouts form where the hoop stress acting on the wellbore wall exceeds the compressive strength of rock and causes shear-failure and spalling off of the rocks forming the wellbore wall. In a vertical wellbore, borehole breakouts cause an enlargement of the hole diameter in the Shmin orientation, which gives the borehole cross-section an approximately oval shape, with the long-axis of the ellipse aligned parallel to Shmin.

DIFs are created when the minimum principal effective stress in the disturbed stress zone around the wellbore becomes negative (tensile) and exceeds the tensile strength of the rock (circumferential stress < −T ≤ 0, where T is the tensile strength). This causes zones of tensile failure on the wellbore wall that are aligned with the SHmax orientation.
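The geometry of BOs and DIFs follows from the stress concentration around the hole. As a sketch, the Kirsch solution gives the hoop stress at the wall of a vertical well (ignoring pore pressure, mud-cake, and thermal effects); the stress magnitudes below are hypothetical:

```python
import math

def hoop_stress(shmax, shmin, pw, theta_deg):
    """Kirsch hoop stress (MPa) at the wall of a vertical wellbore;
    theta is measured from the SHmax azimuth."""
    th = math.radians(theta_deg)
    return shmax + shmin - 2.0 * (shmax - shmin) * math.cos(2.0 * th) - pw

SHMAX, SHMIN, PW = 60.0, 40.0, 30.0  # hypothetical stresses and mud pressure, MPa

at_shmax_azimuth = hoop_stress(SHMAX, SHMIN, PW, 0.0)   # minimum hoop stress: DIFs
at_shmin_azimuth = hoop_stress(SHMAX, SHMIN, PW, 90.0)  # maximum hoop stress: BOs
```

With these numbers the hoop stress is lowest (30 MPa) at the SHmax azimuth, where DIFs form if it drops into tension past −T, and highest (110 MPa) 90° away, where breakouts form if it exceeds the rock's compressive strength.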

Below 👇 is a visualization of picking BOs and DIFs on an image log and arriving at the SHmax orientation in the metaKinetic platform.

How to distinguish between BOs and DIFs?

In borehole image logs, breakouts are observed as relatively wide, ‘blobby’ (often poorly resolved) zones of either high conductivity or low-amplitude/high borehole radius appearing on opposite sides of the wellbore wall.  However, DIFs are interpreted as pairs of narrow sharply defined conductive fractures on borehole wall images that are generally parallel or slightly inclined to the borehole axis and separated by approximately 180°. 

Mojtaba Rajabi, Scientific Advisor

Want to have access to this simulation and sharpen your skills in interpreting image logs? Contact us!


]]>
Microseismic Monitoring: DAS vs. Conventional Geophones (Downhole & Surface) https://www.exploremetakinetic.com/blog/microseismic-monitoring-das-vs-conventional-geophones-downhole-surface/?utm_source=rss&utm_medium=rss&utm_campaign=microseismic-monitoring-das-vs-conventional-geophones-downhole-surface Mon, 06 Jan 2020 22:09:26 +0000 https://www.exploremetakinetic.com/?p=961 How does microseismic (MS) acquisition using DAS (Distributed Acoustic Sensing) compare with conventional Downhole or Surface using geophones? You may have had this question in mind, so here is a brief article to enable you to know the basics regardless of your expertise.  The main difference between “Surface MS Monitoring”, “Downhole MS Monitoring”, and “DAS […]

The post Microseismic Monitoring: DAS vs. Conventional Geophones (Downhole & Surface) appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
How does microseismic (MS) acquisition using DAS (Distributed Acoustic Sensing) compare with conventional downhole or surface acquisition using geophones? You may have had this question in mind, so here is a brief article covering the basics, regardless of your expertise.

The main difference between “Surface MS Monitoring”, “Downhole MS Monitoring”, and “DAS MS Monitoring” comes down to the sensitivity and the frequency band of the instrumentation used for monitoring.

Conventional Surface vs. Downhole MS Monitoring

First, let’s compare surface vs. downhole MS monitoring using 3C geophones (call it conventional). Although surface MS processing (mainly via stacking and migration) is often quicker than downhole data processing (first-break picking), downhole monitoring geophones (usually 15 Hz) provide a wider range of detectability in the reservoir since they resolve smaller events. However, 15 Hz geophones are unable to give accurate magnitude estimates for larger events (> Mw -1) due to saturation. That’s where surface MS acquisition comes in handy, using broadband sensors with a low-frequency cutoff as low as 0.01 Hz that can pick up the very low frequency content of larger events (Mw -1 and above).

Conventional Downhole vs. DAS MS Monitoring

In the case of DAS MS monitoring, since the sensitivity of fiber is lower than that of downhole geophones, the acquired dataset is usually sparse (i.e., fewer events are detected), and it therefore provides less information about dynamic reservoir behavior through advanced analytics. To improve sensitivity, signals from multiple points along the fiber can be stacked, but this decreases MS location accuracy. Using DAS for MS is also restrictive in determining the characteristics of microseismic events. Despite the appeal of simpler fiber deployment for hydraulic fracturing, more R&D work is needed for this technique to reach scientific maturity for MS monitoring.

Combining Monitoring Methods

Downhole monitoring for hydraulic fracturing always wins if you have to choose one method over another (conditional on achieving appropriate coverage). If you are combining Surface MS and DAS MS monitoring, you should know that both techniques will provide a dataset biased toward larger-magnitude events occurring closer to the treatment wells, because that is what the instruments are capable of resolving. Meanwhile, Surface MS Monitoring is complementary to conventional downhole monitoring, since the surface network extends the recording range of seismicity generated during a stimulation. The cost of adding a sparse surface network is generally minimal and should be considered for many plays.

If you are in a decision-making position, budgeting for DAS and MS surveys, you need to take as many details as possible into consideration to arrive at the monitoring solution that works best for your asset. Don’t underestimate the value of feasibility studies (modeling) before you make a monitoring service purchase decision to reduce the unexpected surprises with the acquired MS data.

Ellie Ardakani, CEO @ Meta

Above 👆 is a visualization of such a modeling exercise using 5 surface sensors in the metaKinetic platform.

There are certainly more aspects of these techniques that must be considered. Want access to this simulation to explore on your own, or have further questions? Contact us!


]]>
Top 4 qualifiers for choosing microseismic vendors https://www.exploremetakinetic.com/blog/top-4-qualifiers-for-choosing-microseismic-vendors/?utm_source=rss&utm_medium=rss&utm_campaign=top-4-qualifiers-for-choosing-microseismic-vendors Tue, 03 Sep 2019 15:40:19 +0000 https://www.exploremetakinetic.com/?p=649 You have business objectives and available budget in place to conduct a microseismic survey on your next frac job, now all you need is to find an appropriate vendor to provide you with the service. Here are four top qualifiers that you need to take into consideration when it comes to selecting a microseismic vendor […]

The post Top 4 qualifiers for choosing microseismic vendors appeared first on Meta | Innovative AI Analytics and Training Software.

]]>
You have business objectives and available budget in place to conduct a microseismic survey on your next frac job, now all you need is to find an appropriate vendor to provide you with the service. Here are four top qualifiers that you need to take into consideration when it comes to selecting a microseismic vendor for your project. 

1. Equipment

Crucial to any microseismic survey is the equipment used to run it. In traditional microseismic acquisitions, the sensors are some type of geophone or seismometer, deployed either downhole or on the surface. The sensitivity and bandwidth of the geophones inherently have a significant effect on the recorded signal quality and strength, which in turn affects the resulting microseismic attributes. Testing and maintenance of the equipment on a regular basis is also critical to ensure high data quality and efficient deployment.

For example, for downhole microseismic monitoring you need to know:

  • Wireline characteristics such as history, length, and strength
  • Deployability of the sensor arrays based on wellbore specifications
  • Whip connectors for extended interconnect (minimum 600 ft)
  • Digitization at the sensor (minimum 4kHz) with continuous recording
  • Different sensor types with sensitivity charts
  • Intra-sensor spacing capabilities and evidence of seal integrity
  • Clamping/coupling mechanism for improved vector fidelity
  • Functionality in high temperature
  • Tool maintenance program with evidence of post- and pre-acquisition scheduled equipment testing
  • Integration capability with broadband sensors

2. Deployment

You have to make sure the vendor has the ability to deploy different configurations (multi-array, vertical, horizontal, surface), which itself depends on the region/pad where the survey is conducted. A large number of successful projects in the basin where the frac job will be conducted is usually a good sign. Ask about possible/previous deployment failures in their systems, the downtime associated with those failures, and how they resolved the issues.

3. Operations

Real-time microseismic acquisition provides you with a wealth of knowledge about the deformation caused by hydraulic fractures within the reservoir as the project is happening. So it is important that the vendor has proven operational experience. They must be prepared for any tool or system failure that comes their way and perform smoothly under pressure. Data-streaming capability and effective automated real-time processing are the other side of this equation. At the end of the day, as an operator you need to make sure you have QC metrics in place that help you distinguish reliable from unreliable results received from the microseismic vendor.

4. Data Analytics and Integrated Interpretation

You want to make sure the vendor has the in-house technical competency to process the acquired data for microseismic attributes (including, but not limited to, locations and associated errors, magnitudes, stress release, energy, and moment tensor solutions) and provides you with QC statistics, so you know which parts of the data can be trusted and used in advanced analytics and integrated interpretation workflows.

The vendor should have the ability to slice and dice the data and provide you with insights that are applicable to your project and worth your investment (time and budget). The more projects the vendor has completed in the target basin/formation, the better, as they will be able to provide collective insight before and after project completion. Making sure you will be provided with information that is relevant and actionable in a timely manner is very important.

Don’t settle for data dumps.

Ted Urbancic, Scientific Advisor

If you need more details on any of the qualifiers discussed in this article, contact us; we would be more than happy to help. Subscribe to our mailing list for more technical tips and tricks.


]]>