SensorPx Example 2 (input data files are spe1*.dat, example2_1k.odat)

Far fewer runs may be required for optimizations than for forecasts; that is, a statistically significant set of scenarios may not be needed to make optimization decisions.  In this example we compare the performance of the operational options on identical sets of realizations of the unknowns, rather than on independent sets of random realizations for each option as in Example 1.  The probability that option c is better than option d is taken as the fraction of realizations in which option c outperforms option d.  We can examine the accuracy of that estimated probability by observing how it changes with the number of scenarios considered.
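
As a sketch of this convergence check, consider the Python fragment below.  Synthetic Bernoulli outcomes stand in for the per-realization comparison of simulator results; the value p_true = 0.58 is an illustrative assumption, not a number produced by Sensor:

 import random

 random.seed(1)
 # Synthetic stand-in for "option c outperforms option d" on each
 # realization; p_true = 0.58 is an illustrative assumption.
 p_true = 0.58
 wins = [random.random() < p_true for _ in range(1000)]

 # Running estimate of P(c > d): fraction of wins among the first n cases.
 for n in (100, 200, 500, 1000):
     print(f"first {n:4d} cases: P(c > d) ~= {sum(wins[:n]) / n:.3f}")

As n grows, the running estimate settles toward the underlying probability; the same behavior is what the batched results in Table 6.3 below exhibit.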

Makeoptdat.exe is a program included with Sensor that creates identical sets of realizations for each of the operational options represented in a set of existing Sensor data files, where those files use the Sensor Uncertain Inputs options to represent the unknown variables.  example2_1k.odat is the one-line Makeoptdat input data file for this example, specifying that 1000 realizations be created for the spe1 injection well completion options a, b, c, and d given in Example 1:

 spe1a.dat  spe1b.dat  spe1c.dat  spe1d.dat  1000
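
If several such one-line input files are needed, they can be generated programmatically.  A minimal Python sketch follows; the file name and contents are those shown above, and the whitespace separation is an assumption about Makeoptdat's free-form input:

 # Write the one-line Makeoptdat input file shown above.
 options = ["spe1a.dat", "spe1b.dat", "spe1c.dat", "spe1d.dat"]
 n_realizations = 1000

 with open("example2_1k.odat", "w") as f:
     f.write("  ".join(options) + f"  {n_realizations}\n")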

The workflow for this example is:

1. Run Makeoptdat.exe to create identical sets of 1000 realizations of the uncertain spe1 permeability and porosity distributions for each of the operational options a, b, c, and d given in Example 1.  Makeoptdat creates input data files for Makespx.exe, named ospe1a.mspx, ospe1b.mspx, ospe1c.mspx, and ospe1d.mspx, in a subdirectory with the same name as the Makeoptdat input data file (here example2_1k; this directory must be created before Makeoptdat is run).

2.  Run Makespx / runs.bat on each of the .mspx data files created in (1) for the operational options that you wish to compare (we will first compare options c and d here).  After each Makespx execution, rename the results subdirectory, and move the file sensor.stat to that subdirectory.

3.   Compare results given in the sensor.stat files saved in (2) for the different operational options.

This workflow simply represents partial automation of the manual optimization workflow that reservoir engineers have been using for over 50 years.

For command-line execution of this workflow in a work directory named Example2:

 mkdir Example2
 cd Example2
 copy "%sensordata%"\spe1*.dat .
 copy "%sensordata%"\example2_1k.odat .
 mkdir example2_1k
 makeoptdat.exe example2_1k.odat example2_1k.olog
 cd example2_1k
 ospe1c.mspx
 runs.bat
 move results results_spe1c
 move sensor.stat results_spe1c\c_sensor.stat
 ospe1d.mspx
 runs.bat
 move results results_spe1d
 move sensor.stat results_spe1d\d_sensor.stat
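
The per-option loop above can also be scripted.  Below is a minimal Python sketch, under the assumptions that this runs on Windows, that launching an .mspx file by name invokes Makespx through its file association (as in the commands above), and that the executable and file names match this example:

 import shutil
 import subprocess
 from pathlib import Path

 # Sketch of the per-option loop above (Windows assumed; file names are
 # those used in this example).  Launching the .mspx file relies on its
 # file association, as in the command-line workflow; adjust if Makespx
 # must be invoked explicitly.
 workdir = Path("example2_1k")

 for opt in ("c", "d"):
     subprocess.run(["cmd", "/c", f"ospe1{opt}.mspx"], cwd=workdir, check=True)
     subprocess.run(["cmd", "/c", "runs.bat"], cwd=workdir, check=True)
     results = workdir / f"results_spe1{opt}"
     (workdir / "results").rename(results)
     shutil.move(workdir / "sensor.stat", results / f"{opt}_sensor.stat")

Extending the comparison to options a and b only requires adding them to the tuple.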

Results

Table 6.3 gives statistics from the comparison of end-of-run cumulative oil production for options c (injector completed in all layers) and d (injector completed in the bottom two layers), as reported for all runs in the sensor.stat files in directories results_spe1c and results_spe1d, respectively.  We tabulate the fraction of scenarios in which option c outperforms option d, in sets of 100 and 500 cases, and for all 1000 cases.

Results for sets of 100 cases vary significantly (from 51% up to 74%), but the figures for the two sets of 500 cases are very close (58.2% for the first 500, 57.6% for the second).  That variation is small enough to estimate the probability that option c is better than option d as the average of those two figures, which is the average over all 1000 runs, or 57.9%.

We check our results by repeating the entire workflow, producing a second, different set of 1000 equally probable realizations.  From this second set of 1000 runs we compute a probability of 57.6% that option c will outperform option d, confirming that our estimate of 57.9% from the first 1000 runs (per option) is sufficiently accurate to make the operational decision with confidence.

However, Example 1 showed that 10,000 runs were required to accurately quantify probable production for any of the operational options.  Far fewer runs are generally needed to make operational decisions than to accurately quantify the probable absolute or relative performance of the options, as represented by the Px, Py, and Pz results obtained in Example 1.
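
A quick way to see why the pairwise comparison converges so much faster: the estimated probability is a binomial proportion, whose standard error shrinks as 1/sqrt(n).  The sketch below uses the figures from this example (p = 0.579, n = 1000):

 import math

 # Standard error of a binomial proportion estimated from n paired runs.
 p, n = 0.579, 1000
 se = math.sqrt(p * (1.0 - p) / n)
 print(f"standard error ~= {se:.4f}")                                 # ~0.0156
 print(f"95% interval   ~= {p - 1.96*se:.3f} .. {p + 1.96*se:.3f}")   # ~0.548 .. 0.610

Both 1000-run estimates (57.9% and 57.6%) fall well inside this interval, and the interval lies comfortably above 50%, which is all the c-versus-d decision requires; quantifying absolute production percentiles to comparable precision demands far more runs.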

 

 Table 6.3

Estimated Probability of Option C Outperforming Option D

 

Case numbers             Fraction Option C Outperforms D

1 - 100                  66/100
101 - 200                59/100
201 - 300                58/100
301 - 400                55/100
401 - 500                53/100

1 - 500                  291/500 (58.2%)

501 - 600                55/100
601 - 700                53/100
701 - 800                51/100
801 - 900                74/100
901 - 1000               56/100

501 - 1000               288/500 (57.6%)

1 - 1000                 579/1000 (57.9%)

1 - 1000 (second set)    576/1000 (57.6%)

These results for the first set of 1000 cases are taken from differences in end-of-run production reported in files c_sensor.stat and d_sensor.stat (open these text files with Notepad).  Results for the second set of 1000 cases are taken from differences in results reported in files c1_sensor.stat and d1_sensor.stat.

The sensor.stat files used here are written by Sensor and accumulate end-of-run results for all Sensor runs made in a given directory.
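
Since the exact layout of sensor.stat is not documented on this page, any script that post-processes it must be adapted to the actual file.  As a sketch only, assuming one run per line with whitespace-delimited fields and end-of-run cumulative oil in a known column (both assumptions to be verified against a real sensor.stat):

 def read_cum_oil(path, column):
     """Read end-of-run cumulative oil from a sensor.stat file.

     Assumes one run per line with whitespace-delimited numeric fields;
     `column` is the 0-based index of the cumulative-oil field.  Both
     assumptions must be checked against an actual sensor.stat file.
     """
     values = []
     with open(path) as f:
         for line in f:
             fields = line.split()
             try:
                 values.append(float(fields[column]))
             except (IndexError, ValueError):
                 continue  # skip headers or malformed lines
     return values

 c = read_cum_oil("results_spe1c/c_sensor.stat", column=1)
 d = read_cum_oil("results_spe1d/d_sensor.stat", column=1)
 assert len(c) == len(d) > 0
 wins = sum(ci > di for ci, di in zip(c, d))
 print(f"P(c > d) ~= {wins}/{len(c)} = {wins / len(c):.3f}")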

