Brownian Motion in 30 Seconds

Trying to write a non-rigorous Brownian motion cheatsheet, failing, and giving up…


I initially planned this as a reference sheet for quick facts about Brownian motion. However, I soon realized that it would always be unsatisfying and incomplete, as there’s almost too much to talk about.

With just some facts about the Normal distribution, we could compute passage times, conditional distributions, and the Brownian bridge, or talk about Brownian motion as a Gaussian/Markov process. But I don't think that would be a great approach, so I'll save the more rigorous treatment for future notes.

Univariate Normal Distribution

To understand Brownian motion as a Gaussian process (GP), we need to be clear on the normal and multivariate normal (MVN) distributions. A random variable $Z$ is said to have the standard normal distribution if its probability density function (PDF) $\phi$ is:

$$\phi(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$$

for $-\infty < z < \infty$.

Integrating, we have its cumulative distribution function (CDF) $\Phi$ as

$$\Phi(z) = \int_{-\infty}^{z} \phi(t)\, dt = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt,$$

which we usually leave written as is.

If $Z \sim N(0,1)$, then a random variable $X = \mu + \sigma Z$ for constants $\mu, \sigma$ is distributed $N(\mu, \sigma^2)$.

The CDF and PDF of $X$ would be

$$F(x) = \Phi\!\left(\frac{x - \mu}{\sigma}\right), \quad f(x) = \phi\!\left(\frac{x - \mu}{\sigma}\right)\frac{1}{\sigma}.$$

For a normally distributed r.v. $X$, we can also normalize to get back to the standard normal. That is, if $X \sim N(\mu, \sigma^2)$, the standardized version is

$$\frac{X - \mu}{\sigma} \sim N(0,1).$$
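The location-scale formulas above can be sketched in a few lines of Python; `math.erf` gives $\Phi$ in closed form, so no numerical integration is needed (the function names here are my own, just for illustration):

```python
import math

def phi(z):
    """Standard normal PDF: exp(-z^2/2) / sqrt(2*pi)."""
    return math.exp(-z**2 / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF, in closed form via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def normal_cdf(x, mu, sigma):
    """F(x) = Phi((x - mu) / sigma) for X ~ N(mu, sigma^2)."""
    return Phi((x - mu) / sigma)

def normal_pdf(x, mu, sigma):
    """f(x) = phi((x - mu) / sigma) / sigma."""
    return phi((x - mu) / sigma) / sigma
```

Standardizing $X \sim N(3, 4)$ at $x = 5$ gives $z = (5-3)/2 = 1$, so `normal_cdf(5, 3, 2)` equals `Phi(1)` exactly.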

Joint Distributions

The distribution of a random variable $X$ tells us about the probability of $X$ falling into any subset of the real line. The joint distribution of random variables $X, Y$ tells us about the probability of $(X, Y)$ falling into a subset of the plane.

The marginal distribution of $X$ tells us the distribution of $X$ ignoring $Y$. The conditional distribution of $X$ tells us the distribution of $X$ after observing $Y = y$.

For a continuous joint distribution, the CDF

$$F_{X,Y}(x,y) = P(X \leq x, Y \leq y)$$

must be differentiable w.r.t. $x, y$ such that the joint PDF

$$f_{X,Y}(x,y) = \frac{\partial^2}{\partial x\, \partial y} F_{X,Y}(x,y)$$

exists. We define the covariance between two random variables $X, Y$ as

$$Cov(X,Y) = E((X - EX)(Y - EY)).$$

We can think of this as measuring what happens when $X$ and/or $Y$ deviate from their expected values. For example, if large $X$ implies small $Y$, the covariance would be negative. Note that

$$\begin{align*} Cov(X,Y) &= E((X-EX)(Y-EY)) \\ &= E(XY - X\,EY - Y\,EX + EX\,EY) \\ &= E(XY) - E(X)\,EY - E(Y)\,EX + EX\,EY \\ &= E(XY) - E(X)E(Y). \end{align*}$$

Zero covariance means that two random variables are uncorrelated. We define the correlation as the covariance normalized between $-1$ and $1$:

$$Corr(X,Y) = \frac{Cov(X,Y)}{\sqrt{Var(X)\,Var(Y)}}.$$
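We can check the identity $Cov(X,Y) = E(XY) - E(X)E(Y)$ and the sign intuition empirically. A NumPy sketch, where the negative coupling $Y = -X + \text{noise}$ is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
# Negatively coupled pair: large X tends to go with small Y.
x = rng.normal(size=n)
y = -x + 0.5 * rng.normal(size=n)

# Cov(X, Y) = E(XY) - E(X)E(Y), estimated from the samples.
cov = np.mean(x * y) - np.mean(x) * np.mean(y)

# Correlation normalizes covariance into [-1, 1].
corr = cov / np.sqrt(np.var(x) * np.var(y))
```

Here `cov` lands near $-Var(X) = -1$, and `corr` is negative but bounded below by $-1$, as the normalization guarantees.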

Multivariate Normal Distribution

The $k$-dimensional random vector $X = (X_1, \ldots, X_k)$ has a multivariate normal distribution if every linear combination of the $X_j$ has a normal distribution.

The multivariate normal distribution is fully specified by the mean of the components and the covariance matrix.

A Gaussian process (GP) is a stochastic process such that every finite collection of its random variables has a multivariate normal distribution.
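To see the "fully specified by mean and covariance" and "every linear combination is normal" facts at work numerically, here's a sketch; the particular mean, covariance matrix, and weight vector $a$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# The MVN is fully specified by a mean vector and covariance matrix.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])

samples = rng.multivariate_normal(mu, Sigma, size=200_000)

# Any linear combination a^T X is again normal, with mean a^T mu
# and variance a^T Sigma a.
a = np.array([2.0, -1.0])
combo = samples @ a
theoretical_var = a @ Sigma @ a  # = 4 + 1 - 3.2 = 1.8 for this choice
```

The sample variance of `combo` should land close to `theoretical_var`, matching the closure of the MVN under linear maps.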

Brownian Motion

Standard Brownian motion (SBM) is a stochastic process $W$ with $W(0) = 0$ and the following properties.

Continuous paths

What does it mean for something to have “continuous paths”?

Recall that a stochastic process is a collection of random variables on a common probability space $(\Omega, \mathcal{F}, P)$, for $\Omega$ the sample space, $\mathcal{F}$ a $\sigma$-algebra, and $P$ a probability measure.

The sample space consists of all possible outcomes/trajectories. We can think of these as functions of time. For $X$ a stochastic process, we can view it in two ways. For fixed time $t$,

$$X_t(w) := X(t,w) : \Omega \rightarrow S,$$

where $X$ maps a realization $w$ to whatever value it takes at $t$ in the state space $S$. This is a random variable.

On the other hand, for fixed outcome $w \in \Omega$, we have

$$X^w(t) := X(t,w) : T \rightarrow S,$$

where $X$ maps a time $t$ to a value in the state space $S$. This is called a sample function or realization. In the case where $T$ is time, we call $X^w(t)$ a sample path.
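The two views are easy to see on a finite grid: store the process as an array indexed by (outcome, time), and the two slices below are exactly the random variable and the sample path. A toy sketch (the grid sizes and the cumulative-sum process are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy discrete-time process: rows index outcomes w, columns index
# times t, so X[w, t] plays the role of X(t, w).
n_outcomes, n_times = 500, 50
X = np.cumsum(rng.normal(size=(n_outcomes, n_times)), axis=1)

# Fixing a time t gives a random variable: one value per outcome w.
X_t = X[:, 10]   # shape (500,)

# Fixing an outcome w gives a sample path: one value per time t.
X_w = X[7, :]    # shape (50,)
```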

So, in Brownian motion, when we say “continuous paths,” we mean that the sample path $W(\cdot, w)$ is continuous with probability $1$:

$$P\left(w \in \Omega : W(\cdot, w) \in C\right) = 1,$$

where $C$ denotes the space of continuous functions.

Stationary & independent increments

Stationary: We want the distribution of $W(t) - W(s)$ to depend only on $t - s$.

Independent: For times $t_0 < t_1 < \cdots < t_n$, the increments $W(t_1) - W(t_0), W(t_2) - W(t_1), \ldots, W(t_n) - W(t_{n-1})$ are independent of each other.

$$W(t) - W(s) \sim W(t-s) - W(s-s) = W(t-s) \sim N(0, t-s).$$

The above shows us that the increments are normally distributed, too.
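These properties suggest a direct way to simulate SBM on a grid: start at $0$ and accumulate i.i.d. $N(0, \Delta t)$ increments. A NumPy sketch, where the grid size, number of paths, and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

n_paths, n_steps, T = 10_000, 1_000, 1.0
dt = T / n_steps

# W(0) = 0; each step adds an independent N(0, dt) increment.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)])

# Empirically, W(t) - W(s) should look like N(0, t - s);
# try s = 0.25, t = 0.75, so the variance should be near 0.5.
s_idx, t_idx = n_steps // 4, 3 * n_steps // 4
inc = W[:, t_idx] - W[:, s_idx]
```

Across the simulated paths, `inc` has mean near $0$ and variance near $t - s = 0.5$, matching $N(0, t-s)$.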

We can also understand Brownian motion as a Gaussian process with stationary and independent increments.


As mentioned in the beginning, there’s much more to discuss, but we’ll leave it to my other pages…