Modeling
https://en.wikipedia.org/wiki/Scientific_modelling
Model
• A representation of a physical property or entity that can be used to make predictions or compare
observations with assumptions.
• Mathematical velocity models are commonly used to predict the depth to a formation of interest.
• Physical models, such as layers of clay or putty, can be used to simulate rock layers.
• As Sheriff (1991) points out, agreement between data and a model does not prove that the model is
correct, since there can be numerous models that agree with a given dataset.
• The act of constructing a model.
Forward modeling
• The practice of taking a model and calculating what the observed values should be, such as predicting
the gravity anomaly around a salt dome using a gravity model or predicting the traveltime of
a seismic wave from a source to a receiver using a velocity model.
• The technique of determining what a given sensor would measure in a given formation and environment by
applying a set of theoretical equations for the sensor response.
• Forward modeling is used to determine the general response of most
electromagnetic logging measurements, unlike nuclear measurements whose response is determined mainly
in laboratory experiments.
• Forward modeling is also used for interpretation, particularly in horizontal wells and complex environments.
• In this case, iterative forward modeling is used.
• The set of theoretical equations (the forward models) can be 1D, 2D or 3D.
• The more complex the geometry, the more factors can be modeled but the slower the computing time.
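As a toy illustration of the velocity-model prediction mentioned above, here is a minimal sketch that forward-models the vertical traveltime of a seismic wave through a stack of constant-velocity layers. All layer thicknesses and velocities are made up for illustration.

```python
# Hypothetical forward-modeling sketch: one-way vertical traveltime
# through a layered velocity model (thicknesses in m, velocities in m/s).

def traveltime(thicknesses, velocities):
    """One-way vertical traveltime through a stack of layers."""
    return sum(h / v for h, v in zip(thicknesses, velocities))

# Three layers: 500 m at 1500 m/s, 800 m at 2500 m/s, 1200 m at 3500 m/s
t = traveltime([500, 800, 1200], [1500, 2500, 3500])
print(f"Predicted one-way traveltime: {t:.3f} s")   # ~0.996 s
```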
Scientific modelling
• Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to
make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate.
• It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a
system with those features.
• Different types of models may be used for different purposes, such as conceptual models to better understand, operational
models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to
visualize the subject.
• Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific
types of modelling.
• The following was said by John von Neumann:
o ... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area.
• There is also increasing attention to scientific modelling in fields such as science education, philosophy of science, systems
theory, and knowledge visualization.
• There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.
Example of scientific modelling: a schematic of chemical and transport processes related to atmospheric composition.
Overview
• A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way.
• All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful.
• Building and disputing models is fundamental to the scientific enterprise.
• Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a
given task, e.g., which is the more accurate climate model for seasonal forecasting.
• Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way
logicians axiomatize the principles of logic.
• The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to
what is found in reality.
• Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific
models are true.
• For the scientist, a model is also a way in which the human thought processes can be amplified.
• For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize,
manipulate and gain intuition about the entity, phenomenon, or process being represented.
• Such computer models are in silico.
• Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue
culture).
Relation between a real and a formal system (or model)
Basics
Modelling as a substitute for direct measurement and experimentation
• Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can
directly measure outcomes.
• Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than
modeled estimates of outcomes.
• Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality,
shaped by physical, legal, and cognitive constraints.
• It is task-driven because a model is captured with a certain question or task in mind.
• Simplification leaves out all known and observed entities and their relations that are not important for the task.
• Abstraction aggregates information that is important but not needed in the same detail as the object of interest.
• Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of
reality.
• This perception is already a model in itself, as it comes with a physical constraint.
• There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive
constraints that limit what we are able to explain with our current theories.
• This model comprises the concepts, their behavior, and their relations in informal form and is often referred to as a
conceptual model.
• In order to execute the model, it needs to be implemented as a computer simulation.
• This requires more choices, such as numerical approximations or the use of heuristics.
• Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific
methods: theory building, simulation, and experimentation.
Simulation
• A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution.
• A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a
state exists).
• A dynamic simulation provides information over time.
• A simulation shows how a particular object or phenomenon will behave.
• Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be
represented by models.
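A minimal dynamic-simulation sketch, assuming a simple cooling law and illustrative parameters: the state is stepped forward in time, and its long-run limit is the equilibrium a steady-state simulation would report directly.

```python
# Dynamic simulation sketch: explicit Euler time-stepping of Newton's
# law of cooling, dT/dt = -k * (T - T_env). The steady state here is
# T -> T_env. Parameter values are illustrative only.

k, T_env = 0.1, 20.0    # cooling rate and ambient temperature
T, dt = 90.0, 0.5       # initial temperature and time step

for step in range(200):
    T += dt * (-k * (T - T_env))   # advance the state in time

print(f"Temperature after {200 * dt} time units: {T:.2f}")  # ~20.00
```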
Structure
• Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of
patterns and relationships of entities.
• From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the
concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.
Systems
• A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole.
• In general, a system is a construct or collection of different elements that together can produce results not obtainable by the
elements alone.
• The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are
differentiated from relationships of the set to other elements, and from relationships between an element of the set and
elements not a part of the relational regime.
• There are two types of system models (a short sketch contrasting the two follows the list):
1) discrete, in which the variables change instantaneously at separate points in time, and
2) continuous, in which the state variables change continuously with respect to time.
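The sketch below contrasts the two model types with illustrative numbers: a discrete state that jumps at event times, and a continuous state stepped with a small time increment.

```python
# 1) Discrete: the state changes instantaneously at separate points in
#    time (a counter that jumps at each arrival event).
arrivals = [0.4, 1.1, 1.7, 3.2]          # event times (illustrative)
queue_length = 0
for t in arrivals:
    queue_length += 1                     # instantaneous jump at time t

# 2) Continuous: the state changes continuously with time
#    (growth dx/dt = r*x, stepped with a small dt).
x, r, dt = 1.0, 0.5, 0.01
for _ in range(100):                      # integrate over one time unit
    x += dt * r * x

print(queue_length, round(x, 3))          # 4, ~1.647 (close to e^0.5)
```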
Generating a model
• Modelling is the process of generating a model as a conceptual representation of some phenomenon.
• Typically, a model will deal with only some aspects of the phenomenon in question, and two models of the same
phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a
simple renaming of components.
• Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences
among the modelers and to contingent decisions made during the modelling process.
• Considerations that may influence the structure of a model might be the modeler's preference for a reduced ontology,
preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc.
• In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.
• Building a model requires abstraction.
• Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory
of relativity assumes an inertial frame of reference.
• This assumption was contextualized and further explained by the general theory of relativity.
• A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its
assumptions do not hold.
• Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of
relativity works in non-inertial reference frames as well).
Evaluating a model
• A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible
observations must be modified or rejected.
• One way to modify the model is by restricting the domain over which it is credited with having high validity.
• A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive
phenomena of the universe.
• However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a
model include:
o Ability to explain past observations
o Ability to predict future observations
o Cost of use, especially in combination with other models
o Refutability, enabling estimation of the degree of confidence in the model
o Simplicity, or even aesthetic appeal
• People may attempt to quantify the evaluation of a model using a utility function.
Visualization
• Visualization is any technique for creating images, diagrams, or animations to communicate a message.
• Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the
dawn of man.
• Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary
methods of technical drawing for engineering and scientific purposes.
Space mapping
• Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal
or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities.
• In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine
model so as to avoid direct expensive optimization of the fine model.
• The alignment process iteratively refines a "mapped" coarse model (surrogate model).
Types
Applications
Example of the integrated use of modelling and simulation in Defence life cycle management. The modelling and simulation in this image is represented in the center of the image with the three containers.
Probabilistic programming
• Probabilistic programming (PP) is a programming paradigm in which probabilistic models are specified and inference for
these models is performed automatically.
• It represents an attempt to unify probabilistic modeling and traditional general purpose programming in order to make the
former easier and more widely applicable.
• It can be used to create systems that help make decisions in the face of uncertainty.
• Programming languages used for probabilistic programming are referred to as "probabilistic programming languages" (PPLs).
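PPLs automate the inference step. As a hand-rolled stand-in (deliberately not any particular PPL's API), the sketch below infers a coin's bias from observed flips by rejection sampling, which is the kind of work a PPL would do for you automatically.

```python
import random

# Toy stand-in for what a PPL automates: infer a coin's bias p from data
# by rejection sampling. All names and numbers are illustrative.

data = [1, 1, 0, 1, 1, 0, 1, 1]             # observed flips (6 heads of 8)
posterior = []
random.seed(0)

while len(posterior) < 2000:
    p = random.random()                      # prior: p ~ Uniform(0, 1)
    flips = [1 if random.random() < p else 0 for _ in data]
    if sum(flips) == sum(data):              # keep p only if it reproduces
        posterior.append(p)                  # the observed head count

print(f"Posterior mean bias: {sum(posterior) / len(posterior):.2f}")  # ~0.7
```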
Deterministic approach
• The simplest way of making such projections, and indeed the primary method used, is to look at best estimates.
• The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most
likely rate of inflation, and so on.
• The projections in engineering analysis usually use both the most likely rate and the most critical rate.
• The result provides a point estimate (the best single estimate of, for example, the company's current solvency position) or
multiple point estimates, depending on the problem definition.
• Selection and identification of parameter values are frequently a challenge to less experienced analysts.
• The downside of this approach is that it does not capture the fact that there is a whole range of possible outcomes, some
more probable than others.
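A minimal sketch of such a best-estimate projection, with purely illustrative figures: single most-likely assumptions are rolled forward to produce one point estimate.

```python
# Deterministic projection sketch: roll a fund forward under single
# best-estimate assumptions. All figures are illustrative.

balance = 1_000_000.0
best_estimate_return = 0.05     # most likely investment return
best_estimate_claims = 40_000   # most likely annual claims

for year in range(1, 11):
    balance = balance * (1 + best_estimate_return) - best_estimate_claims

print(f"Point estimate of balance after 10 years: {balance:,.0f}")
```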
Deterministic system
• In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the
development of future states of the system.
• A deterministic model will thus always produce the same output from a given starting condition or initial state.
Computational model
• A computational model uses computer programs to simulate and study complex systems using an algorithmic or
mechanistic approach and is widely used in a diverse range of fields spanning from physics, engineering, chemistry and
biology to economics, psychology, cognitive science and computer science.
• The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not
readily available.
• Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by
adjusting the parameters of the system in the computer, and studying the differences in the outcome of the
experiments.
• Operation theories of the model can be derived/deduced from these computational experiments.
• Examples of common computational models are weather forecasting models, earth simulator models, flight
simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural
network models.
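A small sketch of this kind of computational experiment: a classic nonlinear system, the logistic map, is run for several values of its growth parameter so the long-run outcomes can be compared. The parameter values are illustrative.

```python
# Computational-experiment sketch: vary the parameter r of the logistic
# map x -> r*x*(1-x) and compare long-run behaviour after a burn-in.

def run(r, x=0.2, burn_in=500, keep=4):
    xs = []
    for i in range(burn_in + keep):
        x = r * x * (1 - x)
        if i >= burn_in:
            xs.append(round(x, 4))        # record the last few states
    return xs

for r in (2.5, 3.2, 3.9):
    print(f"r={r}: long-run values {run(r)}")
# r=2.5 settles to a fixed point (0.6); r=3.2 oscillates on a 2-cycle;
# r=3.9 wanders chaotically.
```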
Born method
Formation Evaluation
• A method of analyzing the response of an induction logging tool that considers the contribution of each element of
the formation as a perturbation from the average background conductivity.
• The development of the solution is similar to the Born approximation in quantum mechanics, since the latter also involves a single
scattering.
• The Born response is valid for modest formation contrasts.
• The zero-conductivity Born response is identical to the geometrical factor.
Iterative forward modeling
Formation Evaluation
• The use of repeated forward modeling of a logging tool response to produce modeled logs that very closely match the
measured logs.
• The final model is then the log analyst's best estimate of the formation properties.
• Iterative forward modeling is a hand-operated inversion.
• The technique is used mainly for laterologs and induction logs when the formation or the environment is complex, so that
the environmental effects cannot be separated and treated individually by automatic inversion.
• Iterative forward modeling allows the log analyst to use local knowledge and petrophysics to select between the many
possible solutions that are mathematically correct.
• These cases occur most often in horizontal wells, or vertical wells with the combined effects of invasion and
large resistivity contrast between beds.
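A hedged sketch of the iterative loop: a one-parameter formation model is repeatedly perturbed until the forward-modeled log matches the measured log. Real workflows adjust many parameters with expert judgment; everything here, including the linear "tool response", is illustrative rather than a real tool model.

```python
# Iterative forward modeling sketch: nudge a model parameter until the
# forward-modeled response matches the measurement. Numbers are made up.

measured = 41.0                     # measured log value (illustrative)

def forward(rt):                    # toy tool-response model
    return 0.8 * rt + 1.0

rt = 20.0                           # initial guess of true resistivity
for _ in range(50):
    residual = measured - forward(rt)
    if abs(residual) < 1e-6:        # modeled log matches measured log
        break
    rt += 0.5 * residual            # nudge the model toward agreement

print(f"Best-estimate Rt: {rt:.2f} ohm-m")   # 50.00
```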
Stochastic modelling
• A stochastic model sets up a projection model that looks at a single policy, an entire portfolio or an entire
company.
• But rather than setting investment returns according to their most likely estimate, for example, the model uses random
variations to look at what investment conditions might be like.
• Based on a set of random variables, the experience of the policy/portfolio/company is projected, and the outcome is
noted.
• Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times.
• At the end, a distribution of outcomes is available which shows not only the most likely estimate but what ranges are
reasonable too.
• The most likely estimate is given by the center of mass of the distribution curve (formally known as the probability density
function), which is typically also the peak (mode) of the curve, but may be different, e.g., for asymmetric distributions.
• This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum.
• A deterministic simulation, with varying scenarios for future investment return, does not provide a good way of
estimating the cost of providing this guarantee.
• This is because it does not allow for the volatility of investment returns in each future time period or the chance that an
extreme event in a particular time period leads to an investment return less than the guarantee.
• Stochastic modelling builds volatility and variability (randomness) into the simulation and therefore provides a better
representation of real life from more angles.
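A minimal sketch of this kind of stochastic projection, with illustrative parameters: the yearly investment return is drawn at random, the 5% guarantee is applied, and the run is repeated many times so a whole distribution of outcomes (and the frequency of guarantee shortfalls) becomes visible.

```python
import random

# Stochastic counterpart of the deterministic projection: repeat the
# projection with random yearly returns. All parameters are illustrative.

random.seed(1)
GUARANTEE, TRIALS, YEARS = 0.05, 10_000, 10
shortfall_years = 0
outcomes = []

for _ in range(TRIALS):
    balance = 100.0
    for _ in range(YEARS):
        r = random.gauss(0.05, 0.08)       # volatile yearly return
        if r < GUARANTEE:                  # guarantee must top it up
            shortfall_years += 1
        balance *= 1 + max(r, GUARANTEE)
    outcomes.append(balance)

outcomes.sort()
print(f"Median outcome: {outcomes[TRIALS // 2]:.1f}")
print(f"Fraction of years hitting the guarantee: "
      f"{shortfall_years / (TRIALS * YEARS):.0%}")
```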
Computer modeling
• Computer modeling is a widely used technique that uses computer programs to simulate and study complex
systems.
• It is used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics,
psychology, cognitive science and computer science.
• A computer model is a representation of a real-life system or situation, such as the workings of a nuclear reactor
or the evacuation of a football stadium.
• Computer modelling often uses numerical analysis to approximate the real solution of the problem.
• A computer model is an abstract mathematical representation of a real-world event, system, behavior, or natural
phenomenon.
• A computer model is designed to behave just like the real-life system.
• The more accurate the model, the more closely it matches real life.
Model of computation
• In computer science, and more specifically in computability theory and computational complexity theory,
a model of computation is a model which describes how an output of a mathematical function is computed given
an input.
• A model describes how units of computations, memories, and communications are organized.
• The computational complexity of an algorithm can be measured given a model of computation.
• Using a model allows studying the performance of algorithms independently of the variations that are specific to
particular implementations and specific technology.
Models
Sequential models
• Sequential models include (a minimal finite-state-machine sketch follows the list):
• Finite state machines
• Post machines (Post–Turing machines and tag machines).
• Pushdown automata
• Register machines
• Random-access machines
• Turing machines
• Decision tree model
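A minimal sketch of the first of these: a finite state machine that tracks whether a bit string contains an even number of 1s. The state names and transition table are illustrative.

```python
# Finite state machine sketch: two states, transitions keyed by
# (current state, input symbol). Accepts strings over {'0', '1'}.

TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd",  ("odd", "1"): "even"}

def run_fsm(bits, state="even"):
    for b in bits:
        state = TRANSITIONS[(state, b)]   # one step per input symbol
    return state

print(run_fsm("10110"))   # 'odd'  (three 1s)
print(run_fsm("1001"))    # 'even' (two 1s)
```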
Functional models
Concurrent models
• Concurrent models include:
o Actor model
o Cellular automaton
o Interaction nets
o Kahn process networks
o Logic gates and digital circuits
o Petri nets
o Process calculus
o Synchronous Data Flow
• Some of these models have both deterministic and nondeterministic variants.
• Nondeterministic models correspond to limits of certain sequences of finite computers, but do not correspond
to any subset of finite computers; they are used in the study of computational complexity of algorithms.
• Models differ in their expressive power; for example, each function that can be computed by a finite state
machine can also be computed by a Turing machine, but not vice versa.
Turing machine
• A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols
on a strip of tape according to a table of rules.
• Despite the model's simplicity, it is capable of implementing any computer algorithm.
• The machine operates on an infinite memory tape divided into discrete cells, each of which can hold a single symbol
drawn from a finite set of symbols called the alphabet of the machine.
• It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state"
selected from a finite set of states.
• At each step of its operation, the head reads the symbol in its cell.
• Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and
moves the head one step to the left or the right, or halts the computation.
• The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on
a finite table that specifies what to do for each combination of the current state and the symbol that is read.
• Like a real computer program, it is possible for a Turing machine to go into an infinite loop which will never halt.
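The description above maps directly onto code. Here is a minimal sketch: a toy two-state machine that appends a 1 to a unary string, with the table of rules keyed by (state, symbol). The state names and alphabet are illustrative.

```python
# Turing machine sketch: rules map (state, symbol) -> (write, move,
# next_state); '_' is the blank symbol and 'done' the halting state.

RULES = {("scan", "1"): ("1", +1, "scan"),   # walk right over the 1s
         ("scan", "_"): ("1", +1, "done")}   # append one more 1, halt

def run_tm(tape, state="scan", head=0):
    tape = dict(enumerate(tape))             # sparse, unbounded tape
    while state != "done":
        symbol = tape.get(head, "_")         # read the cell under the head
        write, move, state = RULES[(state, symbol)]
        tape[head] = write                   # write, then move the head
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run_tm("111"))   # '1111': three 1s become four
```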
Nondeterministic Turing machine
• In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation
whose governing rules specify more than one possible action in some given situations.
• That is, an NTM's next state is not completely determined by its action and the current symbol it sees, unlike
a deterministic Turing machine.
• NTMs are sometimes used in thought experiments to examine the abilities and limits of computers.
• One of the most important open problems in theoretical computer science is the P versus NP problem, which
(among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic
computation with a deterministic computer.
Deterministic Turing machine
• In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed
for any given situation.
• A deterministic Turing machine has a transition function that, for a given state and symbol under the
tape head, specifies three things:
• the symbol to be written to the tape (it may be the same as the symbol currently in that position, or not
even write at all, resulting in no practical change),
• the direction (left, right or neither) in which the head should move, and
• the subsequent state of the finite control.
• For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one
position to the right, and switch to state 5.
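The worked example in the last bullet, written out as a single entry of such a transition function (state numbers and symbols exactly as in the example):

```python
# One entry of a DTM transition function, encoding the example above:
# in state 3 reading 'X' -> write 'Y', move right ('R'), go to state 5.
delta = {(3, "X"): ("Y", "R", 5)}

write, move, next_state = delta[(3, "X")]
print(write, move, next_state)   # Y R 5
```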
Numerical integration
• In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a
definite integral.
• The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical
integration", especially as applied to one-dimensional integrals.
• Some authors refer to numerical integration over more than one dimension as cubature; others take
"quadrature" to include higher-dimensional integration.
• The basic problem in numerical integration is to compute an approximate solution to a definite integral to a
given degree of accuracy.
• If f(x) is a smooth function integrated over a small number of dimensions, and the domain of integration is
bounded, there are many methods for approximating the integral to the desired precision.
• Numerical integration has roots in the geometrical problem of finding a square with the same area as a given
plane figure (quadrature or squaring), as in the quadrature of the circle.
• The term is also sometimes used to describe the numerical solution of differential equations.
Numerical integration is used to calculate a numerical approximation for the value S, the area under the curve defined by f(x).
Motivation and need
• There are several reasons for carrying out numerical integration, as opposed to analytical integration by finding the antiderivative:
1. The integrand f (x) may be known only at certain points, such as obtained by sampling.
Some embedded systems and other computer applications may need numerical integration for this reason.
2. A formula for the integrand may be known, but it may be difficult or impossible to find an antiderivative that is an elementary
function.
An example of such an integrand is f(x) = exp(−x²), the antiderivative of which (the error function, times a constant) cannot be written
in elementary form. See also: nonelementary integral
3. It may be possible to find an antiderivative symbolically, but it may be easier to compute a numerical approximation than to
compute the antiderivative.
That may be the case if the antiderivative is given as an infinite series or product, or if its evaluation requires a special function that is
not available.
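A minimal sketch of the basic problem, using the composite trapezoidal rule on the nonelementary integrand from item 2; the reference value (√π/2)·erf(1) is computed with the standard-library error function.

```python
import math

# Composite trapezoidal rule sketch for f(x) = exp(-x^2) over [0, 1].

def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)   # (sqrt(pi)/2) * erf(1)
print(f"trapezoid: {approx:.6f}  exact: {exact:.6f}")  # both ~0.746824
```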
Multidimensional integrals
• To compute integrals in multiple dimensions, one approach is to phrase the multiple integral as repeated one-
dimensional integrals by applying Fubini's theorem (the tensor product rule).
• This approach requires the function evaluations to grow exponentially as the number of dimensions increases.
• Three methods are known to overcome this so-called curse of dimensionality.
• A great many additional techniques for forming multidimensional cubature integration rules for a variety of
weighting functions are given in the monograph by Stroud.
• Integration on the sphere has been reviewed by Hesse et al. (2015).
Monte Carlo
• Monte Carlo methods and quasi-Monte Carlo methods are easy to apply to multi-dimensional integrals. They
may yield greater accuracy for the same number of function evaluations than repeated integrations using one-
dimensional methods.
• A large class of useful Monte Carlo methods are the so-called Markov chain Monte Carlo algorithms, which
include the Metropolis–Hastings algorithm and Gibbs sampling.
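A plain Monte Carlo sketch for a multi-dimensional integral: the volume of the unit ball in 5 dimensions, estimated by uniform sampling of the enclosing cube (the exact value is 8π²/15 ≈ 5.264). The sample count is arbitrary.

```python
import random

# Monte Carlo integration sketch: volume of the 5-D unit ball estimated
# as (cube volume) * (fraction of uniform samples landing inside).

random.seed(0)
DIM, N = 5, 200_000
hits = sum(
    1 for _ in range(N)
    if sum(random.uniform(-1, 1) ** 2 for _ in range(DIM)) <= 1.0
)
volume = (2 ** DIM) * hits / N
print(f"Estimated volume: {volume:.3f}")   # close to 5.264
```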
Sparse grids
• Sparse grids were originally developed by Smolyak for the quadrature of high-dimensional functions.
• The method is always based on a one-dimensional quadrature rule, but performs a more sophisticated combination of
univariate results.
• However, whereas the tensor product rule guarantees that the weights of all of the cubature points will be positive if
the weights of the quadrature points were positive, Smolyak's rule does not guarantee that the weights will all be
positive.
Bayesian quadrature
• Bayesian quadrature is a statistical approach to the numerical problem of computing integrals and falls under the field
of probabilistic numerics.
• It can provide a full handling of the uncertainty over the solution of the integral expressed as a Gaussian
process posterior variance.
Algorithm
• In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of mathematically rigorous instructions,
typically used to solve a class of specific problems or to perform a computation.
• Algorithms are used as specifications for performing calculations and data processing.
• More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated
decision-making) and deduce valid inferences (referred to as automated reasoning).
• In contrast, a heuristic is an approach to solving problems that do not have well-defined correct or optimal results.
• For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as
there is no truly "correct" recommendation.
• As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal
language for calculating a function.
• Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed,
proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final
ending state.
• The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms,
incorporate random input.
• Alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus
spoke Al-Khwarizmi".
• Around 1230, the English word algorism is attested; it was then used by Chaucer in 1391, and English later adopted the French term.
Flowchart of using successive subtractions to find the greatest common divisor of numbers r and s.
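The flowchart's method, written as code: the greatest common divisor of r and s by successive subtraction, Euclid's original formulation.

```python
# GCD by successive subtraction, as in the flowchart: repeatedly
# subtract the smaller number from the larger until they are equal.

def gcd_by_subtraction(r, s):
    while r != s:
        if r > s:
            r -= s
        else:
            s -= r
    return r

print(gcd_by_subtraction(1071, 462))   # 21
```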
Thank you