
PHASE 6 - SOLVE PROBLEMS BY APPLYING

THE ALGORITHMS OF UNIT 3

PRESENTED BY:

ALBERTO SATURNINO CORTES CAICEDO
COD. 11803969

LUZ ESTEFANI CAMACHO ALBADAN
COD. 1006718253

JORGE ANDRES SOSA
COD. 13571769

JHOHANNA GONZALEZ
COD. 52791653

GROUP: 212066-47

PRESENTED TO:
RICARDO JAVIER PINEDA
TUTOR

UNIVERSIDAD NACIONAL ABIERTA Y A DISTANCIA


ESCUELA DE CIENCIAS BÁSICAS E INGENIERÍA
PROGRAMA INGENIERÍA INDUSTRIAL
CEAD ACACIAS
NOVEMBER 21, 2018
INTRODUCTION

This document is the evidence of the individual work of each student in the group, compiled into a collaborative product by the four members of group 47. This phase consists of developing the problems of the third unit of the course, in which we are asked to solve problems related to the differentiation of algorithms, their characteristics, and their applications in different environments of risk and/or uncertainty, for decision making and the optimization of expected results. The problems take into account Markov chains (steady state, initial state multiplication), implementing Solver in Excel and following the guidelines of the course web conference.
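The steady state mentioned above can also be computed without Excel's Solver, by imposing π = πP together with the probabilities summing to 1. A minimal sketch with a hypothetical 2-state transition matrix (all numbers here are illustrative assumptions, not data from the course problems):

```python
# Hypothetical 2-state transition matrix (each row sums to 1).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# For two states, the balance equation pi0 * P[0][1] = pi1 * P[1][0]
# plus pi0 + pi1 = 1 gives the steady state in closed form.
pi1 = P[0][1] / (P[0][1] + P[1][0])
pi0 = 1.0 - pi1
print(round(pi0, 4), round(pi1, 4))  # 0.8333 0.1667
```

The same system can be set up in Excel by asking Solver to satisfy π = πP subject to the constraint that the probabilities sum to 1, which is the approach used in this phase.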
Student 1. Alberto Saturnino Cortes
Student 2. Luz Estefani Camacho
Student 3. Jorge Andrés Sosa Peinado

Problem 1. Markov chains (steady state):


Problem 2. Markov chains (Initial state multiplication):
Problem 3. Markov chains (Initial state multiplication):
Problem 4. Markov chains (Initial state multiplication):
Problem 5. Markov chains (Initial state multiplication):
Student 4. Jhohanna Gonzalez
Problem 2. Markov chains (Initial state multiplication):

In Colombia there are 5 main mobile operators, Tigo, Comcel, Movistar, ETB and Uff, which we will call states. The following table summarizes the probabilities that each customer stays with their current operator or switches to another company.
Problem 3. Markov chains (Initial state multiplication):

In Colombia there are 6 main mobile operators, Avantel, Tigo, Comcel, Movistar, ETB and Uff, which we will call states. The following table summarizes the probabilities that each customer stays with their current operator or switches to another company.
Problem 4. Markov chains (Initial state multiplication):

Suppose there are 4 types of soft drinks on the market: Colombiana, Pepsi Cola, Fanta and Coca Cola. When a person has bought Colombiana, there is a 40% probability that they continue consuming it, while 20% will buy Pepsi Cola, 10% will buy Fanta and 30% will consume Coca Cola. When the buyer currently consumes Pepsi Cola, the probability of continuing to buy it is 30%, with 20% switching to Colombiana, 20% to Fanta and 30% to Coca Cola. If Fanta is currently consumed, the probability of continuing to consume it is 20%, with 40% switching to Colombiana, 20% to Pepsi Cola and 20% to Coca Cola. If Coca Cola is currently consumed, the probability of continuing to consume it is 50%, with 20% switching to Colombiana, 20% to Pepsi Cola and 10% to Fanta.

At present, the brands Colombiana, Pepsi Cola, Fanta and Coca Cola hold the following market shares respectively (30%, 25%, 15% and 30%) during week 3.
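The transition probabilities above form a row-stochastic matrix, and the following week's shares follow from multiplying the week-3 share vector by it. A minimal sketch (the state ordering and variable names are assumptions):

```python
# Transition matrix for Problem 4; rows and columns are ordered
# Colombiana, Pepsi Cola, Fanta, Coca Cola (each row sums to 1).
P = [
    [0.40, 0.20, 0.10, 0.30],  # from Colombiana
    [0.20, 0.30, 0.20, 0.30],  # from Pepsi Cola
    [0.40, 0.20, 0.20, 0.20],  # from Fanta
    [0.20, 0.20, 0.10, 0.50],  # from Coca Cola
]
pi3 = [0.30, 0.25, 0.15, 0.30]  # market shares in week 3

# Initial-state multiplication: pi4 = pi3 · P
pi4 = [sum(pi3[i] * P[i][j] for i in range(4)) for j in range(4)]
print([round(x, 3) for x in pi4])  # [0.29, 0.225, 0.14, 0.345]
```

Note that the resulting shares still sum to 1, which is a useful sanity check on any transition matrix entered by hand.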
Problem 5. Markov chains (Initial state multiplication):

Suppose there are 6 brands of jeans on the Colombian market: Brand 1, Brand 2, Brand 3, Brand 4, Brand 5 and Brand 6. The following table shows the probabilities that a customer keeps using the same brand or switches to another.

At present, the brands hold the following market shares respectively (19%, 18%, 17%, 15%, 19% and 12%) during week 4.
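The same initial-state multiplication scales to the 6 brands with a small helper function. The transition table itself did not survive in this copy of the document, so the matrix below is a hypothetical stand-in (each brand keeps 70% of its customers and loses 6% to each rival); only the week-4 share vector is taken from the statement:

```python
def step(pi, P):
    """One week of initial-state multiplication: returns pi · P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Week-4 shares for Brands 1..6, as given in the problem statement.
pi = [0.19, 0.18, 0.17, 0.15, 0.19, 0.12]

# HYPOTHETICAL transition matrix standing in for the missing table:
# each brand retains 70% and loses 6% to each of the 5 rivals.
P = [[0.70 if i == j else 0.06 for j in range(6)] for i in range(6)]

pi5 = step(pi, P)  # hypothetical week-5 shares
assert abs(sum(pi5) - 1.0) < 1e-9  # still a probability distribution
```

Calling `step` repeatedly advances the chain week by week, which is also a simple way to observe convergence toward the steady state.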
CONCLUSIONS

Alberto Saturnino Cortes

I can conclude that this unit is very important for my future job performance as an industrial engineer, and that statistical data are an essential part of decision making; each of the exercises required a thorough investigation of the subject and continued my process of learning to characterize the states of a system. In my view, Markov chains are important because they allow me to analyze constant variations through previous states and support assertive decisions.

Luz Estefani Camacho

It has been a very practical and interesting phase, and very important for my training process. I learned to differentiate the algorithms, to know their characteristics, and to apply them in different environments of risk and/or uncertainty in the daily life of any company, since they help us make decisions and optimize expected results. I now feel equipped with tools such as Markov chains, which in a practical and easy way help me handle any decision under uncertainty that I have to face in my work environment.

Jorge Andrés Sosa

My conclusions for this collaborative work are: the main objective was to discuss and analyze the problems posed, together with a review of the algorithms that were applied; to differentiate the algorithms, their characteristics and their application in different environments of risk and/or uncertainty, for decision making and the optimization of expected results; and to solve problems (using Solver) by applying the algorithms of Unit 3, taking into account Markov chains (steady state, initial state multiplication).
We also learned that a solution apparently yields different probabilities when the figures are rounded to one decimal place: the rounding introduces large deviations, so the answers obtained are only apparent, and in real life it is unpredictable and unsafe to round this way.

Jhohanna Gonzalez

With the development of this topic on decision making under risk, we acquired knowledge based on a series of techniques and worked through an example that can be applied whenever decisions must be made under risk; we can apply this knowledge in the development of projects being executed, and also to determine the probabilities and opportunities for solving problems of an economic nature.

The concept of uncertainty implies a series of constraints on the human being, where decisions must be made as situations arise; risk implies that some kind of probability distribution can be assigned.
