Questions tagged [markov-process]

A stochastic process with the property that the future is conditionally independent of the past, given the present.

Overview

A Markov process is any stochastic process $Y_t$ such that the future is conditionally independent of the past, given the present; the distribution of the process depends only on where the process is, not on where it has been: $$ P(Y_{t+1}=y_{t+1} \mid Y_t = y_t, Y_{t-1} = y_{t-1}, \ldots, Y_1 = y_1) = P(Y_{t+1}=y_{t+1} \mid Y_t = y_t). $$ This property is known as the Markov property.
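For intuition, here is a minimal simulation sketch (Python, with an arbitrary two-state chain invented for illustration) showing that conditioning on the previous state does not change the empirical next-step distribution:

```python
# Simulate a two-state Markov chain and check that the empirical distribution
# of the next state depends only on the present state, not the previous one.
# The transition matrix P is an arbitrary choice for the demo.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])  # row-stochastic transition matrix

n, y = 100_000, [0]
for _ in range(n):
    y.append(rng.choice(2, p=P[y[-1]]))
y = np.array(y)

# P(Y_{t+1}=1 | Y_t=0) estimated two ways: also conditioning on Y_{t-1}
# should not change the answer, up to sampling noise.
cur0 = (y[1:-1] == 0)
print((y[2:][cur0] == 1).mean())                # ~0.1
for prev in (0, 1):
    mask = cur0 & (y[:-2] == prev)
    print(prev, (y[2:][mask] == 1).mean())      # both ~0.1
```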

References

The following threads on math.se provide references to resources on Markov processes:

1235 questions
32 votes, 3 answers

What is the difference between "limiting" and "stationary" distributions?

I'm doing a question on Markov chains, and the last two parts say this: Does this Markov chain possess a limiting distribution? If your answer is "yes", find the limiting distribution. If your answer is "no", explain why. Does this Markov chain…
Kaish • 1,135
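The distinction can be seen in a few lines; the following sketch uses an assumed two-state periodic chain (not the chain from the question) whose stationary distribution exists but is never a limit:

```python
# A stationary distribution solves pi = pi P; a limiting distribution also
# requires P^n to converge.  This period-2 chain has the former, not the latter.
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(pi)                                   # [0.5 0.5]

# Powers of P oscillate, so no limiting distribution exists.
print(np.linalg.matrix_power(P, 10))        # identity matrix
print(np.linalg.matrix_power(P, 11))        # swap matrix
```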
27 votes, 2 answers

Markov Process that depends on present state and past state

I would just like someone to confirm my understanding, or point out what I'm missing. The definition of a Markov process says the next step depends only on the current state and on no past states. So, let's say we had a state space of a,b,c,d and we go…
mentics • 373
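The standard resolution, sketched below with hypothetical names, is that a process depending on both the current and the previous state becomes Markov once the state is enlarged to the pair (previous, current):

```python
# Lifting a second-order rule to a first-order (Markov) rule on pairs.
# `second_order_step` and `lift` are illustrative names, not from the thread.
import itertools

states = ["a", "b", "c", "d"]
pair_states = list(itertools.product(states, states))  # 16 composite states

def lift(second_order_step):
    """Turn next ~ f(prev, cur) into a first-order rule on pair_states."""
    def first_order_step(pair, rng):
        prev, cur = pair
        nxt = second_order_step(prev, cur, rng)
        return (cur, nxt)                 # (prev, cur) -> (cur, next)
    return first_order_step
```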
26 votes, 2 answers

Real-life examples of Markov Decision Processes

I've been watching a lot of tutorial videos and they all look the same. This one, for example: https://www.youtube.com/watch?v=ip4iSMRW5X4 They explain states, actions and probabilities, which are fine. The person explains it OK, but I just can't seem…
18 votes, 6 answers

Check memoryless property of a Markov chain

I suspect that a series of observed sequences forms a Markov chain… $$X=\left(\begin{array}{ccccccc} A & C & D & D & B & A & C\\ B & A & A & C & A & D & A\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ B & C & A & D & A & B & E\\ …
HCAI • 779
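One common approach (an assumed sketch, not necessarily the thread's accepted answer) is to test, for each current state, whether the next-state counts depend on the previous state, via a chi-square test on the (previous, next) contingency table:

```python
# Chi-square check of the memoryless property: for each current state,
# compare next-state counts across different previous states.
import numpy as np
from collections import Counter
from scipy.stats import chi2_contingency

def markov_test(seqs):
    """seqs: list of observed state sequences, e.g. ['A','C','D','D','B','A','C']."""
    for cur in sorted({s for q in seqs for s in q}):
        table = {}                       # previous state -> Counter of next states
        for q in seqs:
            for a, b, c in zip(q, q[1:], q[2:]):
                if b == cur:
                    table.setdefault(a, Counter())[c] += 1
        if len(table) < 2:
            continue                     # nothing to compare for this state
        nxt = sorted({s for cnt in table.values() for s in cnt})
        if len(nxt) < 2:
            continue
        obs = np.array([[cnt[s] for s in nxt] for cnt in table.values()])
        chi2, pval, dof, _ = chi2_contingency(obs)
        print(f"current={cur}: p-value={pval:.3f}")  # small p => memory detected
```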
13 votes, 3 answers

Estimating Markov chain probabilities

What would be the common way of estimating a Markov chain transition matrix from an observed time series? Is there an R function for doing that?
user333 • 7,211
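The thread asks about R; the same idea in Python (with an illustrative function name and toy data) is short, since the maximum-likelihood estimate is just normalized transition counts:

```python
# MLE of a Markov chain transition matrix: count transitions, normalize rows.
import numpy as np

def estimate_transition_matrix(series, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Leave all-zero rows at zero instead of dividing by zero.
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

series = [0, 1, 1, 0, 2, 1, 0, 0, 1, 2, 2, 1]  # toy integer-coded time series
print(estimate_transition_matrix(series, 3))
```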
10 votes, 3 answers

Determine the communication classes for this Markov Chain

Say we have a Markov Chain with probability matrix $$ P = \begin{pmatrix} 0.25 & 0.25 & 0.5 & 0 & 0 \\ 0 & 0.66 & 0 & 0.33 & 0 \\ 0 & 0.25 & 0.25 & 0.25 & 0.25 \\ 0.5 & 0 & 0 & 0.25 & 0.25 \\ 0 & 0 & 0 & 0.2 & 0.8 \end{pmatrix} $$ I'm confused, it…
Dan • 101
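A mechanical way to get the classes (a sketch of one standard approach, not taken from the thread) is to compute the strongly connected components of the directed graph with an edge $i \to j$ whenever $P_{ij} > 0$:

```python
# Communication classes = strongly connected components of the support graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

P = np.array([[0.25, 0.25, 0.5,  0,    0   ],
              [0,    0.66, 0,    0.33, 0   ],
              [0,    0.25, 0.25, 0.25, 0.25],
              [0.5,  0,    0,    0.25, 0.25],
              [0,    0,    0,    0.2,  0.8 ]])

n, labels = connected_components(csr_matrix(P > 0), connection="strong")
print(n, labels)   # states sharing a label communicate with each other
```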
9 votes, 3 answers

Equilibrium distribution of Markov chain

The transition matrix is $$P =\begin{bmatrix} \frac12 & \frac12 & 0 & 0 \\ \frac12 & \frac12 & 0 & 0 \\ 0 & 0 & \frac13 & \frac23 \\ 0 & 0 & \frac13 & \frac23\end{bmatrix}$$ Now the question is how can I find all equilibrium distribution of this…
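Since this chain splits into two closed classes, there is a whole family of equilibrium distributions; the sketch below (an assumed worked example) verifies that every convex combination of the two per-class solutions satisfies $\pi P = \pi$:

```python
# pi is an equilibrium distribution iff pi P = pi, pi >= 0, sum(pi) = 1.
import numpy as np

P = np.array([[1/2, 1/2, 0,   0  ],
              [1/2, 1/2, 0,   0  ],
              [0,   0,   1/3, 2/3],
              [0,   0,   1/3, 2/3]])

pi1 = np.array([1/2, 1/2, 0, 0])   # stationary on the first closed class
pi2 = np.array([0, 0, 1/3, 2/3])   # stationary on the second closed class

for alpha in (0.0, 0.3, 1.0):
    pi = alpha * pi1 + (1 - alpha) * pi2
    assert np.allclose(pi @ P, pi)  # holds for every alpha in [0, 1]
print("every mixture of pi1 and pi2 is an equilibrium distribution")
```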
8 votes, 2 answers

Hidden Markov Model to predict the next state

I am learning to use HMMs and I am trying to solve the following problem. There is a robot moving around the nodes of a graph. The robot can move to adjacent nodes with certain probabilities. Each time the robot steps into a new "node", a (noisy)…
andreSmol • 537
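A minimal version of the usual recipe (with assumed toy parameters, not the thread's graph): filter the hidden state with the forward algorithm, then push the filtered distribution one step through the transition matrix:

```python
# Forward filtering + one-step prediction for a toy 2-state HMM.
import numpy as np

A = np.array([[0.7, 0.3],   # hidden-state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],   # emission probabilities: B[state, observation]
              [0.3, 0.7]])
pi0 = np.array([0.5, 0.5])  # initial hidden-state distribution

def predict_next_state(observations):
    alpha = pi0 * B[:, observations[0]]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()    # filtered P(state_t | obs_1..t)
    return alpha @ A            # predicted P(state_{t+1} | obs_1..t)

print(predict_next_state([0, 0, 1]))
```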
7 votes, 1 answer

Markov chain as sum of iid random variables

Suppose I have a sequence of iid random variables $Z_i$ for $i=0,1,2,3,\ldots$ such that $$\Bbb P(Z_i=z)= \begin{cases} p, &\text{if } z=1 \\ 1-p, & \text{if } z=0. \end{cases}$$ Define $S_k=\sum^k_{i=0}Z_i$ and $\prod_k=\prod^k_{i=0}Z_i$. So I have…
user255658
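The key point, demonstrated below with an assumed simulation, is that $S_{k+1} = S_k + Z_{k+1}$ with $Z_{k+1}$ independent of the past, so the increment distribution is the same whatever the history:

```python
# Empirical check that S_k = Z_0 + ... + Z_k is Markov: the probability of an
# increment is ~p regardless of whether the previous step incremented.
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 100_000
Z = (rng.random(n) < p).astype(int)
S = np.cumsum(Z)

inc = S[1:] - S[:-1]                     # the increments Z_1, ..., Z_{n-1}
print(inc.mean())                        # ~p overall
print(inc[1:][inc[:-1] == 1].mean())     # ~p after an increment
print(inc[1:][inc[:-1] == 0].mean())     # ~p after no increment
```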
7 votes, 2 answers

Is there a measure of how well a Markov chain allows movement between states?

Define $$ A = \left( \begin{matrix} .5 & .5 \\ .5 & .5 \end{matrix} \right),\; \; B = \left( \begin{matrix} .99 & .01 \\ .01 & .99 \end{matrix} \right), \; \; C = \left( \begin{matrix} .01 & .99 \\ .99 & .01 \end{matrix} \right)$$ Taken as Markov…
Bianca • 81
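One standard quantity (offered as an assumed sketch, not necessarily the thread's accepted answer) is the second-largest eigenvalue modulus: the closer it is to 0, the faster the chain moves between states:

```python
# Second-largest eigenvalue modulus (SLEM) as a mixing-speed measure.
import numpy as np

def slem(P):
    eig = np.sort(np.abs(np.linalg.eigvals(P)))
    return eig[-2]          # the largest modulus is always 1 for stochastic P

A = np.array([[0.5,  0.5 ], [0.5,  0.5 ]])
B = np.array([[0.99, 0.01], [0.01, 0.99]])
C = np.array([[0.01, 0.99], [0.99, 0.01]])
for name, P in [("A", A), ("B", B), ("C", C)]:
    print(name, slem(P))    # A: 0.0 (mixes instantly); B and C: 0.98
```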
6 votes, 2 answers

Markov chain (Absorption)

I have just started learning about Markov chains and I am clueless about how to solve this problem: A man rolls a boulder up a 40-meter-high hill. Each minute, with probability 1/3 he manages to roll the boulder 1 meter up, while with probability 2/3 the…
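First-step analysis turns this into a linear system; the sketch below assumes, since the excerpt is cut off, that with probability 2/3 the boulder slips back one meter (the actual problem statement may differ):

```python
# Expected minutes E[h] to reach 40 m from height h, under the ASSUMED rule:
# up 1 m w.p. 1/3, down 1 m w.p. 2/3 (staying at 0 when already at the bottom).
# E[h] = 1 + (1/3) E[h+1] + (2/3) E[max(h-1, 0)], with E[40] = 0.
import numpy as np

H = 40
A = np.zeros((H, H))
b = np.ones(H)
for h in range(H):
    A[h, h] = 1.0
    if h + 1 < H:
        A[h, h + 1] -= 1 / 3        # E[H] = 0, so the h = H-1 term drops
    if h >= 1:
        A[h, h - 1] -= 2 / 3
    else:
        A[h, h] -= 2 / 3            # from 0, "down" stays at 0

E = np.linalg.solve(A, b)
print(E[0])                         # expected minutes from the bottom
```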
6 votes, 1 answer

What are the potential functions of the cliques in a Markov random field?

I have been trying to understand the representation of the joint probability density of a Markov random field as a product of potential functions over cliques. I am finding it difficult to grasp the idea of potential functions and how we are…
user31820 • 1,501
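A tiny worked example sometimes helps: potentials are just non-negative compatibility scores on clique configurations, and the joint is their normalized product. The sketch below uses an assumed three-node chain MRF:

```python
# Joint distribution of a chain MRF x1 - x2 - x3 from clique potentials:
# P(x) = (1/Z) * psi(x1, x2) * psi(x2, x3).
import itertools
import numpy as np

psi = np.array([[4.0, 1.0],   # agreeing neighbors get score 4, others 1
                [1.0, 4.0]])

def unnormalized(x1, x2, x3):
    return psi[x1, x2] * psi[x2, x3]

Z = sum(unnormalized(*x) for x in itertools.product([0, 1], repeat=3))
for x in itertools.product([0, 1], repeat=3):
    print(x, unnormalized(*x) / Z)   # non-negative, sums to 1: a valid joint
```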
6 votes, 4 answers

Aperiodicity in a Markov chain

Given this transition matrix of a Markov chain \begin{bmatrix} \dfrac{1}{2} & \dfrac{1}{4} & \dfrac{1}{4}\\ 0 & \dfrac{1}{2} & \dfrac{1}{2} \\ 1 & 0 & 0 \end{bmatrix} representing the transitions between states $a,b,c$: $a$ has probability of…
joseph • 201
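The period of state $i$ is $\gcd\{n : (P^n)_{ii} > 0\}$; a brute-force check (an illustrative sketch) confirms this chain is aperiodic:

```python
# Compute the period of each state by taking gcds over return times.
import math
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [0,   1/2, 1/2],
              [1,   0,   0  ]])

def period(P, i, n_max=50):
    g, Pn = 0, np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            g = math.gcd(g, n)
    return g

print([period(P, i) for i in range(3)])   # [1, 1, 1]: aperiodic
```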
5 votes, 1 answer

Is the stationary distribution in a Markov chain just an average or will this probability distribution actually be reached?

So I know that a connected Markov chain has a stationary distribution $\pi$ that satisfies $$\lim_{t \rightarrow \infty} a_t = \pi,$$ where $a_t$ is the average probability distribution at time $t$. So this average converges to our stationary…
Elena • 51
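The subtlety is visible in a short experiment (an assumed example): for an irreducible but periodic chain the time-averaged distribution converges to $\pi$, while the distribution at time $t$ oscillates forever and is never reached:

```python
# Cesaro average vs. actual distribution for a period-2 chain.
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
p = np.array([1.0, 0.0])   # start in state 0

avg = np.zeros(2)
for t in range(1000):
    avg += p
    p = p @ P
print(avg / 1000)          # -> [0.5 0.5], the stationary distribution
print(p)                   # still a point mass: the limit is never attained
```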
5 votes, 1 answer

State space for Markov Decision Processes

I'm currently trying to formulate an MDP for a Reinforcement Learning (RL) task. Having read a variety of papers where RL has been applied, I've been left somewhat confused as to what can be considered part of the state space. I was always under the…
Barry • 533