SensorPx Example 1 (input data files are spe1*.mspx, spe1*.dat)
If you have not made Makespx the default program to open .mspx files and
SensorPx the default program to open .spx files, do so as indicated in
Section 5.1 after copying your data files to the work directory.
This example is based on spe1, a black oil case of gas injection in a
10 by 10 by 3 grid. We attempt to quantify uncertainty in oil recovery (and all other
output variables) due to specified uncertainties in the entire input
permeability and porosity fields (900 total unknowns), and then attempt to
optimize the well completion. The questions we wish to answer are:
1. How accurately can we quantify (forecast) production in stochastic
(Monte-Carlo) modeling, as a function of the number of scenarios considered,
for the given set of uncertainties?
2. Can we sufficiently quantify uncertainty in this example to determine
how to best complete the injector (vertically), given its areal location and
the uncertainties in the input reservoir description? Consider completion
options (a) as originally specified in layer 1 (for which it is assumed we
have log and core data), (b) bottom layer only, (c) all layers, and (d)
bottom two layers only.
The Sensor data files that are input to Makespx and that specify completion
options a, b, c, and d are spe1a.dat,
spe1b.dat,
spe1c.dat, and spe1d.dat, respectively.
These data files use the Sensor Uncertain Inputs features described below in
order to generate any desired number of cases having equally probable values
and combinations of the uncertain descriptive reservoir variables that are
randomly populated from their input probability distributions.
To copy all example data files to work directory ‘spe1’, and to run the
first set of cases (Makespx case spe1a1k.mspx) from a Command Prompt Window:
Click on Start and enter cmd.exe in the Search or Run box. In the Command
Prompt Window that appears, or in another program, enter the following
commands in the order given:
mkdir spe1
cd spe1
copy "%sensordata%"\spe1*.mspx .
copy "%sensordata%"\spe1*.dat .
spe1a1k.mspx
runs.bat
runspx.bat
Or, to run from Windows Explorer (to open, click on Start and enter
‘explorer.exe’ in the Search or Run box):
Create a work directory of any name. Copy spe1*.mspx and spe1*.dat from the
SensorDataSets subfolder of the Sensor installation directory given at the
beginning of this Chapter to your work directory. Double click on
spe1a1k.mspx. This executes Makespx and creates runs.bat and runspx.bat.
You may need to refresh Windows Explorer with F5 to show the created .bat
files. Double click on runs.bat to execute it, which makes the Sensor
runs. When runs.bat has finished execution, execute (double click on)
runspx.bat if it was generated by Makespx (for nsproc>1).
Upon completion of the above, the SensorPx output file spe1a.log will open
in Notepad, and binary output files spe1a.p10, spe1a.p50, and spe1a.p90 will
be written. The results of all Sensor runs made will be in the results
subdirectory of the work directory, named casen.out and fortn.61 for
n = nadd+1, ..., nadd+ncases, where ncases is the total number of Sensor cases
specified to be run under CASES, and nadd is the total number of existing
results files added with any ADD specifications in the .mspx data file.
The output files of any Sensor cases that fail to run to completion are
written to the badruns subdirectory, and corresponding error messages will
appear in the output SensorPx .log file.
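As a minimal illustration of this naming convention (a hypothetical Python
sketch, not part of SensorPx; the function and variable names are made up),
the result files expected in the results subdirectory for given values of
nadd and ncases can be listed as follows:

# Hypothetical sketch: enumerate the Sensor result files expected in the
# results subdirectory, per the casen.out / fortn.61 naming convention,
# for n = nadd+1, ..., nadd+ncases.
from pathlib import Path

def expected_result_files(workdir, nadd, ncases):
    results = Path(workdir) / "results"
    names = []
    for n in range(nadd + 1, nadd + ncases + 1):
        names.append(results / f"case{n}.out")
        names.append(results / f"fort{n}.61")
    return names

# Example: 1000 cases and no ADDed results -> case1.out ... case1000.out
for f in expected_result_files("spe1", nadd=0, ncases=1000)[:4]:
    print(f)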
The first Makespx input data file considered in this example is
spe1a1k.mspx. The data file specifies
that SensorPx will compute P10, P50, and P90 results for 1000 Sensor
executions of the Sensor data file spe1a.dat in 8 simultaneous sets of (1000
/ 8 =) 125 sequential runs:
10 50 90 8
CASES
spe1a.dat 1000
END
By default, exceedance probabilistic values are computed. The CUM
option can be used to instead compute cumulative probabilistic
values:
Exceedance Probability: There is at least an x% chance that the value of
the variable will be greater than or equal to its exceedance Px value (it is
greater than or equal to that value in at least x% of runs). The exceedance
P10 value of a variable is a high number, and P90 is a low number.
Cumulative Probability: There is at least an x% chance that the value of
the variable will be less than or equal to its cumulative Px value (it is
less than or equal to that value in at least x% of runs). The cumulative
P10 value of a variable is a low number, and P90 is a high number.
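As an illustrative sketch of these definitions (hypothetical Python, not
SensorPx's actual selection logic, which reports values taken from actual
cases), exceedance and cumulative Px values of a single output variable can
be estimated from a set of per-run results as follows:

# Hypothetical sketch: estimate exceedance and cumulative Px values from a
# list of per-run results for one output variable (e.g., CUMOIL at a time).
import math

def exceedance_px(values, x):
    # Value that is >= in at least x% of runs: P10 is high, P90 is low.
    v = sorted(values, reverse=True)       # largest first
    k = math.ceil(x / 100.0 * len(v))      # at least x% of the runs
    return v[k - 1]

def cumulative_px(values, x):
    # Value that is <= in at least x% of runs: P10 is low, P90 is high.
    v = sorted(values)                     # smallest first
    k = math.ceil(x / 100.0 * len(v))
    return v[k - 1]

results = [42.1, 46.6, 50.4, 44.9, 48.2, 47.0, 43.5, 49.8, 46.1, 45.3]
print(exceedance_px(results, 10), exceedance_px(results, 90))  # high, low
print(cumulative_px(results, 10), cumulative_px(results, 90))  # low, high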
The only differences between the spe1a.dat Sensor case and the original
spe1.dat case are the Uncertain Inputs specified in spe1a.dat for the
permeability and porosity distributions, and the change of the BHP of the
injector from 10000 to 9000, to avoid the effects of a negative oil
compressibility PVT data error. The originally specified porosity and
permeability values, constant by layer, are assumed to represent the mean
values. The value of r specified in the data shown below is the ratio of
the standard deviation to the mean, for a normally distributed variable. In
general (applying to both normal and log-normal distributions of a variable
X), the input value of r is defined as
    r = (Xmean - X-1sd) / Xmean                                        (5.1)
where X-1sd is the value of the variable at one standard
deviation below the mean (r was defined by Dykstra and Parsons as the
‘coefficient of permeability variation’ for log-normal distributions of
permeability).
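For a normally distributed variable, X-1sd = Xmean - sd, so Eq. (5.1)
reduces to r = sd / Xmean; the entered r times the entered mean gives the
implied standard deviation. A minimal arithmetic check (hypothetical Python,
not part of Sensor), using the layer 1 KX values entered below:

# Hypothetical check of Eq. (5.1) for a normal distribution, using the
# layer 1 KX distribution entered below (mean = 500 md, r = 0.5).
mean, r = 500.0, 0.5
sd = r * mean                    # 250 md: standard deviation implied by r
x_minus_1sd = mean - sd          # 250 md: value one standard deviation below the mean
r_check = (mean - x_minus_1sd) / mean
print(sd, x_minus_1sd, r_check)  # 250.0 250.0 0.5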
Each execution of the Sensor data file spe1a.dat will generate an equally
probable combination of the uncertainties that are populated according to
their specified distributions, using a random number generator. The well
block properties are assumed to be less uncertain (they have a much smaller
entered value of r) because of the assumption of existing log and core data
where they have been completed as originally specified:
C i1 i2 j1 j2 k1 k2 mean r min max
DISTRIBUTE KX KY 1 10 1 10 1 1 500 .5 0 10000
DISTRIBUTE KZ 1 10 1 10 1 1 50 .5 0 10000
DISTRIBUTE KX KY 1 10 1 10 2 2 50 .5 0 10000
DISTRIBUTE KZ 1 10 1 10 2 2 50 .5 0 10000
DISTRIBUTE KX KY 1 10 1 10 3 3 200 .5 0 10000
DISTRIBUTE KZ 1 10 1 10 3 3 19.23 .5 0 10000
DISTRIBUTE POROS 1 10 1 10 1 3 .3043 .2 0 1
C
ASSUME WELL BLOCK PROPERTIES ARE LESS UNCERTAIN
DISTRIBUTE KX KY 1 1 1 1 1 1 500 .01 0 10000
DISTRIBUTE KZ 1 1 1 1 1 1 50 .01 0 10000
DISTRIBUTE KX KY 10 10 10 10 3 3 50 .01 0 10000
DISTRIBUTE KZ 10 10 10 10 3 3 50 .01 0 10000
DISTRIBUTE POROS 1 1 1 1 1 1 .3043 .01 0 10000
DISTRIBUTE POROS 10 10 10 10 3 3 .3043 .01 0 10000
For the first POROS distribution specified above (normal by default),
applying to all but the well block porosities, the standard deviation is
(r*mean =) 0.06086, and about 99.7% of a large number of properly populated
porosity values should lie within 3 standard deviations of the mean, that
is, between 0.1217 and 0.4869.
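A quick way to see this (a hypothetical Python sketch, not Sensor's actual
random number generator; how Sensor enforces the entered min and max is not
specified here, so simple rejection sampling is assumed for illustration) is
to draw a large sample from this distribution and count the fraction falling
inside the 3-standard-deviation band:

# Hypothetical sketch: sample the first POROS distribution above
# (normal, mean 0.3043, r = 0.2, limited to [0, 1]) and check the
# 3-standard-deviation band.
import random

mean, r, lo, hi = 0.3043, 0.2, 0.0, 1.0
sd = r * mean                                   # 0.06086
samples = []
while len(samples) < 100000:
    x = random.gauss(mean, sd)
    if lo <= x <= hi:                           # rejection sampling assumed for min/max
        samples.append(x)

inside = sum(mean - 3*sd <= x <= mean + 3*sd for x in samples) / len(samples)
print(f"band: [{mean - 3*sd:.4f}, {mean + 3*sd:.4f}]  fraction inside: {inside:.4f}")
# Expect roughly 0.997, i.e. nearly all values between about 0.122 and 0.487.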
We wish to examine the accuracy of our probabilistic estimates as a function
of the number of cases considered. Since pore volume (porosity) is treated
as uncertain, and for the sake of simplicity, we will focus only on
cumulative oil recovery here, rather than on fractional recovery. In a more
complex example we might examine the variance of all probabilistic
cumulative production and injection values. Cumulatives are recommended,
rather than rates, which are averages over specific timesteps. Rates are
reported at the end of each timestep, but in any analysis they should
instead be taken from the rate of change of the cumulatives,
(Cn+1 - Cn)/(tn+1 - tn), and should be considered to apply at the middle of
each common reported time period (which may include more than one timestep in any given Sensor case,
since timestepping will generally vary by case). Ideally, we would consider
all cumulative production/injection figures and include an economic model to
compute Px,y,z results of net present value (NPV). That can easily be done
with a separate economics package, and an integrated economics model is
planned as a future enhancement of SensorPx.
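As a short illustration of the recommended treatment of rates (a
hypothetical post-processing sketch, not a SensorPx feature; the report
times and cumulative values are made up), rates can be recovered from
reported cumulatives as follows:

# Hypothetical post-processing sketch: convert reported cumulatives C(t) to
# rates (C[n+1]-C[n])/(t[n+1]-t[n]), assigned to the middle of each common
# reported time period rather than using end-of-timestep rates directly.
def rates_from_cumulatives(times, cums):
    mid_times, rates = [], []
    for n in range(len(times) - 1):
        dt = times[n + 1] - times[n]
        rates.append((cums[n + 1] - cums[n]) / dt)
        mid_times.append(0.5 * (times[n] + times[n + 1]))
    return mid_times, rates

# Example with made-up report times (days) and cumulative oil (mstb):
t = [0.0, 365.0, 730.0, 1095.0]
c = [0.0, 6100.0, 11800.0, 17100.0]
print(rates_from_cumulatives(t, c))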
Results
Figure 5.1 shows the results of 3 executions of
spe1a1k.mspx to compute P10, P50, and
P90 forecasts for 1000 runs of Sensor case spe1a.dat. The large deviations
in Px,y,z results for the 3 sets of runs indicate that many more runs are
needed to accurately quantify uncertainty in cumulative oil production.

Figure 5.1
Figures 5.2 through 5.5 show the results of 3 executions of
spe1a10k.mspx,
spe1b10k.mspx,
spe1c10k.mspx, and
spe1d10k.mspx to compute P10, P50, and
P90 forecasts for 10000 runs of Sensor cases spe1a.dat, spe1b.dat,
spe1c.dat, and spe1d.dat, respectively. The good reproducibility of Px,y,z
results for the 3 sets of runs in all cases indicates that accuracy may be
sufficient to differentiate between the probable performance of the
completion options a, b, c, and d. That of course depends on how strongly
the results are affected by the operational options. We can make reliable
optimizations only if the variations in probabilistic results due to
operational alternatives are significantly greater than the estimated errors
in those results. We can estimate those errors by examining the
reproducibility of probabilistic results computed from the results of
multiple sets of cases having different random realizations of the unknowns.

Figure 5.2 (option a)

Figure 5.3 (option b)

Figure 5.4 (option c)

Figure 5.5 (option d)

Figure 5.6
Comparison of P10, P50, and P90 cumulative oil production for options
a,b,c,d
Figure 5.6 compares probabilistic cumulative oil production for the last set
of 10000 runs made for each of the 4 options (the gold curves in Figures 5.2
– 5.5). It appears that option c offers slightly better probabilistic
performance than option d (completion in layers 2 and 3), which is the
second best option, and that the originally specified top layer completion
is the worst possible choice. But the differences between option c and d
probabilistic results are small, and are possibly within the margin of
error, based on these sets of 10000 runs. To verify that option c is
optimal, and to examine the further increase in accuracy of predictions (and
in our ability to optimize operational options having less effect on results
than these do), 3 sets of 100000 runs were made comparing options c and d
(Makespx data files spe1c100k.mspx and spe1d100k.mspx). The results are
shown in Figure 5.7 and confirm our conclusion that option c, completion of
the injector in all layers, is most likely the optimal choice. Within each
option, the probabilistic results of the 3 sets of runs are graphically
indistinguishable, while all case c results clearly exceed those for case d
by a small margin.
End-of-run Px,y,z figures are also given in Table 5.1. The Px,y,z figures at
end-of-run for case c differ by a maximum of 0.3%, while case d values vary
up to a maximum of 0.4%, and the case c values are all higher than the case
d values, by an average of about 1%. The averages of the end-of-run P90
figures indicate that the minimum cumulative oil production in the
highest-producing (best) 90% of the case c runs exceeds that in the case d
runs by about 1%, a significant amount of roughly 0.5 million barrels.

Figure 5.7
P10, P50, P90 cumulative oil production for 3 sets of 100000 runs, cases c
and d
The maximum deviations observed at end-of-run in our 3 sets of 10000 runs
for cases c and d (shown in Figs. 5.4 and 5.5 and also from Table 5.1) are
about 1.3% and 1.1%, respectively. So with an order of magnitude increase
in the number of runs, from 10000 to 100000, we achieved only about a 3-fold
increase in the accuracy of our probabilistic forecasts, with a
corresponding approximate 3-fold increase in our ability to distinguish
between the relative performance of operational options. This is consistent
with the roughly 1/sqrt(N) convergence expected of Monte Carlo estimates
(sqrt(10) is about 3.2). We saw
approximately the same increase in apparent accuracy (based on variation in
results of only a few sets of runs) of probabilistic results when going from
1000 to 10000 runs.
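This scaling can be illustrated with a small experiment (hypothetical
Python; the "CUMOIL" values are drawn from a made-up normal distribution,
not from Sensor output): the scatter among repeated exceedance-P90 estimates
shrinks by roughly a factor of sqrt(10) for each 10-fold increase in the
number of runs per set.

# Hypothetical sketch: show how the scatter of an exceedance-P90 estimate
# shrinks roughly like 1/sqrt(N) as the number of runs N per set increases.
import random, statistics

def exceedance_p90(values):
    v = sorted(values, reverse=True)
    return v[int(0.9 * len(v)) - 1]     # value exceeded in at least 90% of runs

random.seed(1)
for n in (1000, 10000, 100000):
    estimates = [exceedance_p90([random.gauss(46500.0, 3000.0) for _ in range(n)])
                 for _ in range(20)]    # 20 independent "sets of runs"
    print(n, round(statistics.stdev(estimates), 1))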
Table 5.1 gives data and output file names along with their predicted P10,
P50, and P90 values of cumulative oil recovery at end of run (10 years) for
all executions of Makespx/SensorPx in this example. These tabular
end-of-run results are taken from the “FINAL FIELD RESULTS” table in the
SensorPx output .log files. An example from the SensorPx output file
spe1a1ka.log from our first execution of spe1a1k.mspx / runs.bat /
runspx.bat is shown in Table 5.2 (your results will differ).
Table 5.1
End-of-Run Px,y,z CUMOIL values for all Example 1 Makespx/SensorPx Executions

Makespx Datafile | Sensor Datafile | SensorPx Output File | Number of Runs | CUMOIL P10, mstb, t=3650 | CUMOIL P50, mstb, t=3650 | CUMOIL P90, mstb, t=3650

spe1a1k.mspx   | spe1a.dat | spe1a1ka.log   |   1000 | 50391 | 46663 | 42134
               |           | spe1a1kb.log   |   1000 | 50896 | 46580 | 41057
               |           | spe1a1kc.log   |   1000 | 49760 | 46751 | 43262
spe1a10k.mspx  | spe1a.dat | spe1a10ka.log  |  10000 | 50352 | 46659 | 43066
               |           | spe1a10kb.log  |  10000 | 50583 | 46505 | 42357
               |           | spe1a10kc.log  |  10000 | 50093 | 46704 | 42803
spe1b10k.mspx  | spe1b.dat | spe1b10ka.log  |  10000 | 52821 | 49053 | 44010
               |           | spe1b10kb.log  |  10000 | 52747 | 49174 | 44559
               |           | spe1b10kc.log  |  10000 | 52967 | 48976 | 44679
spe1c10k.mspx  | spe1c.dat | spe1c10ka.log  |  10000 | 53747 | 49855 | 45586
               |           | spe1c10kb.log  |  10000 | 53753 | 49579 | 46112
               |           | spe1c10kc.log  |  10000 | 53744 | 49832 | 46216
spe1d10k.mspx  | spe1d.dat | spe1d10ka.log  |  10000 | 53435 | 49254 | 45373
               |           | spe1d10kb.log  |  10000 | 52715 | 49519 | 45627
               |           | spe1d10kc.log  |  10000 | 52972 | 49059 | 44862
spe1c100k.mspx | spe1c.dat | spe1c100ka.log | 100000 | 53648 | 49852 | 46096
               |           | spe1c100kb.log | 100000 | 53482 | 49972 | 46204
               |           | spe1c100kc.log | 100000 | 53663 | 49771 | 46116
spe1d100k.mspx | spe1d.dat | spe1d100ka.log | 100000 | 53372 | 49384 | 45270
               |           | spe1d100kb.log | 100000 | 53235 | 49393 | 45156
               |           | spe1d100kc.log | 100000 | 53191 | 49368 | 45253
Table 5.2
FINAL FIELD RESULTS
VARIABLE TIME PX PY PZ
(CASE) (CASE) (CASE)
------------------------------------------------------------------------
QOIL
0.3650000E+04 0.1088708E+05 0.8912476E+04 0.7593289E+04
36 384 518
QWAT
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
QGAS
0.3650000E+04 0.8936689E+05 0.8111063E+05 0.7603425E+05
454 132 420
QWI
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
QGI
0.3650000E+04 0.1000000E+06 0.1000000E+06 0.1000000E+06
183 1000 256
WCUT
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
GOR
0.3650000E+04 0.1197446E+05 0.9151039E+04 0.7008436E+04
523 312 271
CUMOIL
0.3650000E+04 0.5039073E+05 0.4666276E+05 0.4213393E+05
927 233 650
CUMWAT
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
CUMGAS
0.3650000E+04 0.1970040E+06 0.1707808E+06 0.1419334E+06
120 848 525
CUMWI
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
CUMGI
0.3650000E+04 0.3602181E+06 0.3496366E+06 0.3307256E+06
196 225 397
OILREC
0.3650000E+04 0.1796322E+02 0.1634342E+02 0.1486434E+02
121 635 650
GASREC
0.3650000E+04 -0.4398073E+02 -0.4905497E+02 -0.5189258E+02
102 104 268
PAVGHC
0.3650000E+04 0.8354913E+04 0.8259162E+04 0.8124512E+04
502 41 793
PAVG
0.3650000E+04 0.8354880E+04 0.8259124E+04 0.8124478E+04
502 41 793
QGLIFT
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
CUMGLIFT
0.3650000E+04 0.0000000E+00 0.0000000E+00 0.0000000E+00
0 0 0
The case number from which the computed Px,y,z results were taken is given
below each reported result. Note that except for the two virtually
identical average pressures, no two Px, Py, or Pz results are from the same
case. There is no such thing as a Px, Py, or Pz case: only the
probabilistic results themselves have significant meaning, and none of the
individual cases indicated can represent any probabilistic behavior or
description beyond the single Px, Py, or Pz result that they happen to
predict at the reported time(s).
However, we may
be able to find many cases that match P50 oil, gas, and water
production/injection results fairly well, and by tuning the Uncertain Inputs
to match the P50 results to historical production, we can maximize the rate
at which history matches can be found stochastically. Those found
cases will have very different descriptions, as demonstrated in Example 4.
Large numbers of history matches can then quantify uncertainty in
predictions that are made from them.