Questions & Answers


Q:  We have 2 wells completed at the same depth in the same reservoir, and one seems to be initially producing a gas condensate while the other is producing a black oil.  How can that be?  (recently asked in an SPE Reservoir Technical Group discussion)

A: There could be 2 separate reservoirs, as others have mentioned.  That would be indicated by unequal initial fluid contact levels and pressures (where Pc = 0).  Or, if initial contacts and pressures are the same and we assume that the formation is a single reservoir with continuous hydraulic connectivity at initial capillary-gravity equilibrium, then initial saturations (and therefore produced fluids) can vary areally at a given depth simply due to heterogeneity, capillary pressure, and phase transition zones.  For example, let's say that your well producing oil is in a low-permeability zone (0.1 md) compared to the condensate well (100 md), that porosity is constant, and that the Leverett J-function applies to Pcgo.  Say that the entered curve has Pcgomx = 10 psia at Sg = 1, and that the reference J-function value is Jref = sqrt(kref/phiref) = sqrt(1 md / 0.1) = 3.162278.

The "oil" well could be in (towards the bottom of) the oil-gas transition zone corresponding to the tight rock (.1 md) Pc curve, which is equal to the entered curve times Jref/J = 3.162278/sqrt(.1/.1) = 3.162278.  So max Pcgo at the oil well is equal to 10*3.162278 = 31.62278.  If the average phase gradients of oil and gas in the transition zones are say, .25 and .15 psia/ft, respectively, then the total height of the gas-oil transition zone above the GOC at the oil well is Pcgomx/(go-gg) = 31.62278/.1 = 316.2278 ft.

Pc at the "gas" well is equal to the entered curve times Jref/J = 3.162278/sqrt(100/.1) = 0.1 The total height of the gas-oil transition zone above the GOC at the gas well is Pcgomx/(go-gg) = 10*.1/.1 = 10 ft.

If both wells were completed 30 ft above the GOC, the "gas" well would initially produce all gas at reservoir conditions (from above the transition zone, at Sginit = 1 - Swc) while the "oil" well would initially produce mostly oil (So would be close to 1 - Swc near the bottom of the transition zone).  This can be demonstrated by initialization of a 2D x-z reservoir model with only two adjacent columns, one for each well.  The initial downhole oil phase produced from the oil well (all oil where Sg < Sgc) would be at or close to equilibrium with the initial downhole gas phase produced from the gas well.  Substitute your possible reservoir/fluid properties into reservoir models (making various assumptions as demonstrated) to check this and other possible explanations for the observations.  If the produced fluids are at or near initial equilibrium, then a single PVT representation can represent them.  Consult with a PVT expert for assistance with fluid characterization.


Q:  If Sensor is technically superior, why isn't it the market leader?

A: The main reason is that in the past we have not offered integrated reservoir engineering workflows, and have only provided the reservoir simulator.  Our clients have generally been expert users who have recognized the extreme advantages gained by building workflows with the best available components.  However, commercial pre/post-processing packages and workflow integration and optimization tools recently developed by our associates provide vastly superior automated and integrated deterministic and probabilistic reservoir modeling capabilities and workflows, and are rapidly increasing Sensor's market share.  See Simulation Goals.


Q:  In what ways does Sensor treat black oil and compositional problems differently?

A: Sensor treats black oil and compositional problems with the same logic. The code and input data are the same for the two cases, with the obvious exception of PVT treatment and PVT data.


Q:  What shortcuts, approximations, or sacrifices to accuracy, if any, explain the CPU times of Sensor?

A: Sensor makes no shortcuts, approximations, or data changes related to accuracy.  While Sensor was designed to provide accuracy equal to or greater than that of other models, the accuracies of different models' results are best judged by actual run results.  Simple examples are: (1) compare Sensor's one-iteration production well 6200 mcf/d rates of spe3.dat with rates from other models, (2) compare the initialization of phase identity and pressure in the critical-point black oil (compositional) test4a.dat (test4b.dat) with other models, (3) for any problem, test the 0-rate, no-change accuracy of Sensor initialization (this is a recommended test of initial equilibrium and properly isolated equilibrium regions; insert the two data lines TIME 10000 and END at the beginning of Recurrent Data for a few-second run with no wells), and (4) for symmetrical problems, check the symmetry of results.

CPU times can vary widely among different simulators, as indicated by several of the SPE Comparative Solution Project problems [37-43]*.  In one case, CPU times among five participants varied by a factor of up to 50 for the same dataset and the same machine [40].  Model efficiency depends upon many factors, including (1) choice of formulation (Impes, Implicit, Adaptive Implicit, etc.), (2) solver type (NF vs ILU-based methods) and the logic of well constraint term inclusion, (3) implicit well terms in Impes, (4) method of handling wellbore crossflow, (5) active-block coding in the solver as well as throughout the model, (6) Newton-Raphson vs successive-substitution methods in flash and phase stability calculations, (7) choice of variables in Impes compositional cases with large Nc, and many more.  Hundreds of published papers discuss these and other aspects of simulator methodology.  Different models' authors often disagree regarding the best choices and use different methods.  Implementation can be as important as the choice of methods.  For example, Adaptive Implicit should be more efficient than Impes, but in one case two models, one using Impes and the other using Adaptive Implicit, were run on the same data and machine, and the Impes run was six times faster.  A more extreme example is given on the Miscible page of this website.  Code efficiency can be a factor.  Sensor uses no pre-compilers or other code efficiency enhancers.

As in the case of accuracy, model speed is best judged by actual model runs.  In addition, the comparisons should be "fair".  We might run the same dataset on the same machine on Sensor and on Company A's Model A and find a CPU time of four hours for Model A.  However, if we asked Company A to run the problem, they might well make minor input parameter changes to obtain a two- or three-hour run.

* Reference numbers are those of the Sensor Manual.


Q:  Do parallel applications provide any overall performance advantage in simulation on multicore machines, or on any other hardware?

A:  Even though parallel applications may provide single-run speedup, the answer is no, except where available memory is not sufficient to run multiple serial cases (the current maximum single-process memory of 512 GB is enough for a 320 million cell case in Sensor, or 32 simultaneous ten million cell cases).  That exception now possibly includes only a few gigantic fields, and it won't exist for very long since available memory is doubling every year or two.  Single runs provide no value.  It is the evaluation of sensitivities to many control and uncertain variables, obtained from many runs, that allows predictive optimization.  Serial applications can make far more efficient use of multicore machines (or any other hardware with sufficient memory) than parallel applications can, since they don't have to pay for the parallel inefficiencies caused by coupled domains.  By running one simultaneous serial application per core or processor (on 64-bit hardware generally capable of providing far more than sufficient memory) to simultaneously evaluate multiple sensitivities, you can achieve the maximum run rate that the hardware and software are capable of.

Running one simultaneous serial application per core, or one per processor on single-core machines, maximizes hardware output in terms of the rate at which it can execute floating point operations (FLOPS).  Using Sensor (1) minimizes the floating point operations required per run, (2) avoids parallel sub-process communication and synchronization requirements, problems, and inefficiencies, and (3) maximizes the overall run rate.
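
As a rough illustration of the one-serial-run-per-core approach, the short Python sketch below launches several independent Sensor runs at once using only the standard library.  The case file names are placeholders, and we assume the sensor executable is on the path:

import concurrent.futures, os, subprocess

# Hypothetical independent sensitivity cases, one ordinary serial run per core.
# In practice each case would be run in its own work directory so that output
# files such as fort.61 do not collide.
cases = ["case01.dat", "case02.dat", "case03.dat", "case04.dat"]

def run_case(datafile):
    outfile = os.path.splitext(datafile)[0] + ".out"
    subprocess.run(["sensor", datafile, outfile], check=True)   # one serial run
    return outfile

# One worker per core keeps every core busy with an independent serial run.
with concurrent.futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    for finished in pool.map(run_case, cases):
        print("finished", finished)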

One of our clients was previously running a single porosity black oil field case using the leading parallel model on an 8-node cluster.  They can now run Sensor on each node 15 times faster than their previous parallel run time using all 8 nodes.  Overall, that's a speedup of 120, over two orders of magnitude.  This is much greater than the black oil speedup we usually see, even considering the huge parallel inefficiencies.  One of the causes (accounting for a factor of 2 or 3) is that for some reason the parallel model wouldn't run well in Impes, while Sensor's Impes formulation was significantly faster than its Implicit formulation.


Q:  In John Killough's original 1995 SPE9 paper (SPE 29110, referenced on your Who's Fastest? page), Sensor is reported as being only 30% faster than VIP on the same machine.  Yet your Who's Fastest? page shows that Sensor is currently 5 or 6 times faster than VIP on that problem.  How can that be?

A:  See SPE 29111 (originally presented at the same conference as SPE 29110, later published as SPE 50990 in SPEREE) - the CPU time reported there for Sensor on SPE9, on the same machine used to run Sensor and VIP in SPE 29110, shows that Sensor was about 2.5 times faster than VIP in 1995, when using our Nested Factorization solver that we had recently completed.  The Sensor SPE9 run times in SPE 29110 were obtained from an older version using a solver provided by Phillips that had been used during early Sensor development.  Since 1995, many improvements have been made and continue to be made to Sensor.


Q:  What size problems can be run on Sensor, and what is the hardware cost?

A: Problem size is limited only by addressable machine memory.  For any run, Sensor prints the required memory allocation at the top of the output file.  The required memory depends upon more factors than just the numbers of components and active grid blocks.  For example, the NF solver requires less storage than the default RBILU(0) solver.  The table below is a rough guide for Impes cases.  For the Implicit formulation, the memory requirement is higher and increases more rapidly with the number of components.

Active blocks    Components    Memory (MB)

      125,000     Black Oil         183
      250,000     Black Oil         365
    1,500,000     Black Oil        2400
    3,000,000     Black Oil        4800
       50,000         6             147
      100,000         6             295
       50,000         9             206
      100,000         9             412
       25,000        24             358

When the memory required by your run exceeds the available memory on your PC, the run elapsed (wall clock) time may significantly exceed CPU time due to paging (use of virtual memory).  Sensor elapsed and CPU times are about the same when available memory is not exceeded (see the times appearing at the bottom of the downloadable example problem output files).

The Sensor6k executable is a Windows 32-bit version, but will run on both Windows 32-bit and Windows x64 operating systems (through the WOW64 emulation layer).  On 32-bit systems, the maximum memory addressable by a process is 2 GB.  This allows black oil problems roughly on the order of a million blocks.

On Windows x64 operating systems (8, 7, Vista, XP, Server 2003, or Compute Cluster Server 2003), the maximum memory addressable by the 32-bit executable is 4 GB, allowing black oil problems on the order of 2 million blocks.  Our 64-bit version, SENSOR 64, allows access to more memory (up to the machine limit, theoretically 8 TB).  The maximum physical memory currently available on computers running Windows 8 (x64) is 512 GB (it is 192 GB for Windows 7), allowing Sensor black oil cases up to about 320 million cells.  Cost for those machines with the maximum 512 GB RAM may range into the tens of thousands of dollars, but currently available 64-bit machines with 16 to 64 GB of memory are inexpensive and are more than sufficient for most cases.  A PC with 16 GB can accommodate cases up to about 10 million cells, or 10 simultaneous million-cell cases (since Sensor Impes memory requirement varies linearly with problem size, as in the table above).  If the theoretical maximum memory addressable by 64-bit processes, 8 TB RAM, were available, Sensor could run Impes black oil cases up to about 5 billion cells on a single node.
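
As a rough rule of thumb taken from the table above (our extrapolation, not a Sensor formula), Impes black oil cases need on the order of 1.6 KB per active cell, which a few lines of Python can turn into a quick estimate:

# Rough memory estimate for Impes black oil runs, assuming the roughly linear
# scaling in the table above (~4800 MB for 3 million cells).  Always confirm
# with the allocation Sensor prints at the top of the output file.
MB_PER_MILLION_CELLS = 1600.0

def est_memory_mb(active_cells):
    return MB_PER_MILLION_CELLS * active_cells / 1.0e6

for n in (125_000, 3_000_000, 10_000_000, 320_000_000):
    print(f"{n:>12,} cells  ~{est_memory_mb(n):,.0f} MB")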

All estimated maximum problem sizes mentioned above apply to Impes black oil cases.  The user is cautioned that good efficiency (particularly in Impes cases) requires that grid block sizes (pore volumes) not be small.  If they are, timestep size limitations will, and should be expected to, cause prohibitively long run times, particularly for larger problems.  See the discussion of CPU time scaling with problem size later in this section.  Also, the ability to run very large problems is not by itself a good justification to do so.  Grid dimensions (number of gridblocks) should be chosen as the smallest required to control the effects of numerical dispersion on results while adequately describing the geology.  In general, very large problems are justified only by very large reservoirs.


Q:  Can you indicate in one place all the steps necessary to run the Sensor simulator and plot results with Plot2Excel? 

A: We illustrate for spe1.dat, and compare results with a modified spe1 case.
1. Create a work directory (folder), for example C:\sensor\testspe1, using Windows Explorer; or, to do so from a script or manually from a Command Prompt window (to open one, go to Start/Programs/Accessories and select Command Prompt), enter the following commands:

cd C:\

mkdir sensor

cd sensor

mkdir testspe1

cd testspe1

You have created and are now in the work directory.

2. Copy your data file into your work directory.  The spe1.dat file is provided in the directory given by the environment variable %sensordata%.  This is easy to do manually by copy/paste from Windows Explorer, but from a script or from the command line, following (1) above, you can enter:

copy "%sensordata%\spe1.dat" .

Notice the space followed by the period at the end (the period means 'here').  The quotation marks are required when the path name contains blanks, including the expanded name of any environment variables.

3.  If executing manually and the Command Prompt window is not still open from (1) and (2) above, open one (go to Start/Programs/Accessories and select Command Prompt), then go to the work directory by entering, for example:

cd C:\sensor\testspe1

4. Execute Sensor, from a script or by entering in the Command Prompt window:

sensor spe1.dat spe1.out

Move or rename fort.61 to (say) "f61spe1" (move fort.61 f61spe1).  Change the spe1.dat data file to allow no gas solution above the bubble point (as indicated in the comments in spe1.dat) and save the changed file as spe1a.dat.  Enter:

sensor spe1a.dat spe1a.out

Move or rename fort.61 to (say) "f61spe1a".   

5. In the work directory, construct the SensorPlot data file (say) "spe1.sp" as:

TITLE
SPE1 compare cases of gas going into sol'n
and not going into sol'n above 4014 psia bubble pt.
ENDTITLE

FILE
f61spe1     Rs'>0   ! Sensor result file and a
f61spe1a    Rs'=0   ! source name as a single word.

OUTPUTNAME
spe1       ! any name desired

FIELD
  QOIL   (Y2) GOR    (T) SPE1 EFFECT OF Rs SOLUTION GAS ABOVE BP
  CUMOIL (Y2) CUMGAS
  PAVG   (Y2) OILREC

END

Enter at the command prompt:  SensorPlot.exe.  You will be prompted for the name of the SensorPlot data file; enter "spe1.sp".  See the file "SensorPlot.inf" for the run summary.  SensorPlot will generate 2 files: "spe1.tab" with data tables and "spe1.plt" with plot-log records.

6. Open MS Excel and follow the instructions on page 10 of the SensorPlot Manual. It will guide you through Plot2Excel to build and view the requested plots.

Note: If you are making many runs and always want the same plots, you can construct the ".sp" file once with the file name "fort.61" entered under FILE.  Then after each run, when you want plots, you need only enter "SensorPlot.exe" and proceed from there.  The downloadable Sensor data files include SensorPlot data files with the ".sp" extension for spe1, spe2, spe3, spe5, spe7, spe9, spe10, test2, test3a and test16.
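
If the whole sequence is repeated often, it can also be scripted.  A minimal Python sketch, assuming sensor and SensorPlot.exe are on the path and that the spe1.sp file from step 5 is in the work directory (SensorPlot reads its data file name from the prompt):

import shutil, subprocess

# Run both cases and keep each result file under its own name.
for case in ("spe1", "spe1a"):
    subprocess.run(["sensor", f"{case}.dat", f"{case}.out"], check=True)
    shutil.move("fort.61", f"f61{case}")          # result file written by Sensor

# Answer SensorPlot's file-name prompt via stdin.
subprocess.run(["SensorPlot.exe"], input="spe1.sp\n", text=True, check=True)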


Q:  How do Sensor run times scale with problem size?

A:  We illustrate using spe1 as an example.  This case takes about 0.2 CPU seconds on our 2.8 GHz desktop.

If the numbers of blocks and wells are increased by a factor of n, while keeping the grid block sizes (dx, dy, dz) the same (i.e., the reservoir is replicated by a factor of n), average timestep size will remain about the same, and the larger case should take approximately n x (0.2 CPU seconds).  If spe1 is increased from 10 x 10 x 3 (300 blocks) to 800 x 800 x 3 (almost 2 million blocks), then n = 6400 and the larger case should take about 21 minutes on our machine.

However, if the total reservoir size is fixed, and the grid block sizes (dx, dy, dz) are decreased such that the total number of blocks is increased by a factor of n, average timestep size will be reduced by a factor of n, and the larger case should be expected to take approximately n x n x (0.2 CPU seconds).  Here, we are assuming that the number of wells and their boundary conditions and spatial locations remain fixed.  We would expect this 800 x 800 x 3 run (n = 6400) to take about 95 days on our machine.
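
The two scaling rules are easy to apply; a minimal Python sketch using the 0.2-second base time from the example:

base_cpu_s = 0.2                      # spe1, 300 blocks, on our 2.8 GHz desktop
n = (800 * 800 * 3) / (10 * 10 * 3)   # block-count ratio = 6400

replicated = n * base_cpu_s           # same block sizes, reservoir replicated n times
refined    = n * n * base_cpu_s       # same reservoir, blocks refined by a factor of n

print(f"replicated: ~{replicated / 60:.0f} minutes")   # ~21 minutes
print(f"refined:    ~{refined / 86400:.0f} days")      # ~95 days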

This illustrates the importance of not using block sizes that are significantly smaller than those required to control numerical dispersion while adequately describing geology.  The model can and should be used to determine those requirements by examining the sensitivity of results to the level of spatial discretization, using upscaling methods of the user's choice.

While most of the provided example cases are small, Sensor's performance advantage is not a function of the number of gridblocks.


Q:  What tuning is needed or recommended for high efficiency and stability in Sensor?

A:  Selections of formulation (default Impes, or the IMPLICIT card for fully implicit) and linear solution option (default ILU, or the NF card for Nested Factorization) can significantly affect performance.  While in some cases a particular formulation is called for (like Implicit for radial coning problems), in general we recommend test runs to determine the optimum selections.  Very rare cases require modification of timestep/run control or other solver control defaults.  Some options may improve performance.  We recommend that 2D and 3D Impes runs be tried first with an entry of CFL 2 for stable step control.  Reduction of the stable step size may be needed in some cases to eliminate oscillations.  The PERC card specifying percolation control in Impes runs sometimes gives good speedup for gassy oil problems.  See Section 7.1 in the Sensor Manual for further discussion of recommended run control.


Q:  Can unstructured grids be used with Sensor?

A:  Yes.  They are specified using the structured Cartesian format, in the form of transmissibilities and cell pore volumes and depths.  This is essentially an accounting problem and must be performed by an unstructured gridding program.  The numbers of blocks in the x, y, and z directions are set to include a sufficient number of cells.  Any extra cells are deactivated and cause no penalty.  The unstructured cells are ordered in some way, and the corresponding specified structured transmissibilities either represent a true unstructured connection or are zero where no unstructured connection exists.  All other unstructured connections are specified as non-neighbor connections (transmissibilities).  The connections are assumed k-orthogonal, i.e. Sensor does not employ a multipoint flux approximation.  Since Sensor's linear solvers are already coded in an unstructured manner, the unstructured nature of the grid does not cause performance penalties, unlike some other models with structured matrix representations.  Well intersections and perforation well indices are also computed by the unstructured gridding program.  Map visualization requires an unstructured viewer linked to the Sensor output and to the unstructured grid representation.
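
As an illustration of the bookkeeping involved (a simplified Python sketch, not Sensor input syntax), an unstructured grid can be laid along one row of a structured grid, with every connection that does not fall between consecutive cells carried as a non-neighbor connection:

# Hypothetical unstructured grid: pore volume and depth per cell, plus a list
# of (cell_i, cell_j, transmissibility) connections.  Cells are simply ordered
# 0..N-1 along a 1 x N x 1 structured grid; connections between consecutive
# cells map to structured transmissibilities, everything else becomes an NNC.
cells = {0: (1500.0, 8000.0), 1: (1200.0, 8010.0),
         2: (1800.0, 8005.0), 3: ( 900.0, 8020.0)}   # (pore volume, depth)
connections = [(0, 1, 2.5), (1, 2, 1.8), (0, 2, 0.9), (2, 3, 3.1)]

structured_trans = {}   # connections that fit the structured ordering
nncs = []               # all remaining connections
for i, j, t in connections:
    if abs(i - j) == 1:
        structured_trans[(min(i, j), max(i, j))] = t
    else:
        nncs.append((i, j, t))

print("structured connections:", structured_trans)
print("non-neighbor connections:", nncs)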

Sensor can also model dual porosity/dual permeability systems in unstructured grids, through entry of approximately equivalent rectilinear gridblock dimensions, along with rectilinear matrix block dimensions.


Q:  What about hybrid (mixed-type) grids?

A:  Sensor can handle any combination of grid systems that the gridding program is able to describe.


Q:  Can Sensor handle local grid refinement?

A: Yes.  But the refinements must be set up by a gridding program or pre-processing step in the context of structured and non-neighbor connections in Sensor's single xyz grid.  This is similar to the setup for unstructured grids described above.  RExcel now performs this function for Sensor (http://www.santecpe.com).


Q:  What are the limitations of Analytical and Numerical Simulation?

A: Analytical solutions are generally limited to simple homogeneous single phase or 2 phase immiscible systems, and do not generally apply to our real reservoirs. Perhaps the most complex flow problem with an analytical solution is the Buckley-Leverett problem, for two phase incompressible immiscible 1D flow in a homogeneous system. Numerical solutions are limited by several factors:

  • Hardware capabilities

  • Characterization ability

  • Model size (upscaling ability and control of numerical dispersion)

  • Model applicability (assumptions)

  • Uncertainty

See Novel Techniques for Reservoir Management and Requirements for Substantiation.

Upscaling is almost a lost art, because the non-orthogonal (corner point) grid systems most users rely on are not generally upscalable, as the fine and coarse grid system boundaries do not coincide.  Single-phase flow-based permeability upscaling can be quite effective on Cartesian grids - see our web pages on Spe10 and Gridding and Upscaling.
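
The idea behind single-phase flow-based permeability upscaling is to impose a pressure drop across a coarse block, solve steady single-phase flow through its fine cells, and back out the effective permeability that reproduces the same flux.  A minimal Python sketch for the simplest case, 1D series flow, where the answer reduces to the length-weighted harmonic average (flow parallel to layers would instead give the arithmetic average):

def upscale_kx_1d(fine_k, fine_dx):
    """Effective permeability of fine cells in series (1D, single phase):
    equal flux through each cell gives the length-weighted harmonic average."""
    total_length = sum(fine_dx)
    return total_length / sum(dx / k for k, dx in zip(fine_k, fine_dx))

# Example: four fine cells of equal length inside one coarse block
print(upscale_kx_1d([100.0, 10.0, 1.0, 50.0], [25.0, 25.0, 25.0, 25.0]))   # ~3.5 md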

Inability to upscale and control numerical dispersion severely limits our abilities in reservoir modeling. For example, parallel computing is a symptom of the problem. It is the opposite of what is needed to make progress in modeling, which is the continuous evaluation of large numbers of scenarios to account for uncertainty (geological and upscaled characterizations, history matches, and predictions), on an iterative basis, towards solving the automatic global predictive optimization problem.  See Simulation Goals.

Also see our Uncertainty Quantification and SensorPx pages.  Inability to account for uncertainty can be a severe limitation in reservoir modeling.  When the uncertainties have significant effects on results, probabilistic analysis is needed for optimization and forecasting.


Q: What are "machine-learning" algorithms?

A: "Machine-learning" is a misnomer or marketing term usually applied to optimization or function-matching methods.  Many apply the the term to almost any computerized algorithm (software program).  The intelligence behind any software program is in the minds of its developers.  Computers add speed, capacity, and memory to numerical solutions, but not intelligence.

Machines cannot learn, and there is no such thing as artificial intelligence.  Machines can only do what they are programmed to do and their only advantages over the human brain are memory and speed in communications and in performing massive and/or repetitive calculations.  Computers and computer programs are the most advanced tools ever developed by humans.  Their capacity for magnifying the ability of human intelligence to solve problems may seem like magic or like "artificial intelligence" but it is not.

We are aware of no reproducible example of any "machine learning" method that has been shown to be useful in reservoir modeling for optimizations or predictions (other than an applicable full-physics reservoir model or simulator along with manual or automated optimization methods).  Neither statistical analysis of data, curve-fitting or mapping, function matching, speech or image recognition or processing, nor any automated optimization method represents "machine learning" or "artificial intelligence".

Even our most advanced workflows in simulation do not qualify as "machine learning".  They are software tools that are programmed and run by humans on machines to solve problems (which seems to be the most commonly used definition of "machine learning" today).  We can automatically (iteratively) optimize any of the inputs to maximize or minimize any benefit function (such as NPV for predictive optimization, or a mismatch function for upscaling or history matching) defined by the outputs of any set of linked batch applications, using third-party workflow integration, automation, and optimization software (such as Pipe-it from Petrostreamz, which offers a number of different optimization methods and algorithms).

For example, we can automatically optimize well and fracture spacings and operational variables (i.e. determine optimal periods and constraints for depletion, water flooding, wag flooding, and blowdown) for simplified sets of parallel horizontal wells (given fracture geometry), assuming simultaneous operation of all wells in the field, to determine the simplest possible answer to the questions we have always asked: "where should we put our wells and how should we operate them to maximize NPV?"  Uncertainty may be included in the analysis with the additional capability of iteratively generating and evaluating an arbitrary number of equally probable realizations, along with simple and fast statistical analysis of their results, in SensorPx.  Uncertainties in reservoir descriptions can be partially accounted for by the Uncertain Inputs capabilities of Sensor, which can almost instantly create unlimited numbers of equally probable realizations of the uncertainties from their input probability distributions.
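
In outline, such an automated workflow is simply an optimizer looping around batch applications.  A deliberately simplified Python sketch (the toy objective and the single control variable are placeholders; in a real workflow each trial would write a Sensor data file, run it as a batch job, and read the objective from its output):

import random

def npv_of(spacing_ft):
    """Stand-in objective for illustration only.  In practice this would build
    the trial data file, run the simulator, and extract NPV from the results."""
    return -(spacing_ft - 1800.0) ** 2

best = None
for trial in range(200):                      # any optimizer could drive this loop
    spacing = random.uniform(500.0, 3000.0)   # one control variable, e.g. well spacing (ft)
    value = npv_of(spacing)
    if best is None or value > best[1]:
        best = (spacing, value)

print("best spacing:", round(best[0]), "ft")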


© 2000 - 2022 Coats Engineering, Inc.