13 Jun 2015, 21:52

Gaussian Random Fields

Random fields are an example of a more formally defined noise process that can appear visually similar to some forms of procedurally generated noise like Perlin noise or simplex noise.

There are some really nice descriptions of random fields, and in particular Gaussian random fields, on Wikipedia. However, with a few exceptions, there doesn’t seem to be a great deal of literature available on how to generate such fields. Furthermore, while R has the RandomFields package on CRAN, there don’t appear to be any well-known Python packages.

In the search for a good library, I did find an amazing question on the Mathematica Stack Exchange, where a number of different techniques for generating these fields have been presented.

This Python snippet is a simplified version of the current top-voted answer:

import numpy as np
import matplotlib.pyplot as plt

def fftIndgen(n):
    # FFT frequency indices in numpy's ordering:
    # 0, 1, ..., n//2, -(n//2 - 1), ..., -1
    a = list(range(0, n // 2 + 1))
    b = [-i for i in range(n // 2 - 1, 0, -1)]
    return a + b

def gaussian_random_field(Pk = lambda k : k**-3.0, size = 100):
    def Pk2(kx, ky):
        if kx == 0 and ky == 0:
            return 0.0
        return np.sqrt(Pk(np.sqrt(kx**2 + ky**2)))
    noise = np.fft.fft2(np.random.normal(size = (size, size)))
    amplitude = np.zeros((size,size))
    for i, kx in enumerate(fftIndgen(size)):
        for j, ky in enumerate(fftIndgen(size)):            
            amplitude[i, j] = Pk2(kx, ky)
    return np.fft.ifft2(noise * amplitude)

for alpha in [-4.0, -3.0, -2.0]:
    out = gaussian_random_field(Pk = lambda k: k**alpha, size=256)
    plt.figure()
    plt.imshow(out.real, interpolation='none')

This produces figures like the ones below (one field for each value of alpha).
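As a quick sanity check (my addition, not from the original answer), the hand-rolled `fftIndgen` agrees with `np.fft.fftfreq` up to the sign of the Nyquist bin, and since the amplitude only depends on $k_x^2 + k_y^2$, either indexing produces the same field:

```python
import numpy as np

def fftIndgen(n):
    # same indices as in the snippet above: 0, 1, ..., n//2, -(n//2 - 1), ..., -1
    return list(range(0, n // 2 + 1)) + [-i for i in range(n // 2 - 1, 0, -1)]

n = 8
ours = np.array(fftIndgen(n))
numpy_freqs = np.fft.fftfreq(n) * n   # [0, 1, 2, 3, -4, -3, -2, -1] for n = 8
# The two differ only at the Nyquist bin (+4 here vs numpy's -4); the
# amplitude Pk2 depends only on kx**2 + ky**2, so the results are identical.
print(np.array_equal(np.abs(ours), np.abs(numpy_freqs)))  # True
```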

02 Jun 2015, 21:23

Snakes and Ladders

One of the simplest non-trivial examples of something interesting to do with Markov chains (that’s also easy to visualise and understand) is working out how many moves, on average, it takes to complete a game of snakes and ladders. This choice of example is somewhat intentional: it’s the same example as the Wikipedia entry for absorbing Markov chains, and it’s just hard enough to justify solving the problem properly.

A full description of absorbing Markov chains is available on Wikipedia, which is a near-verbatim transcription from Kemeny and Snell’s book (also available online). In short, a (discrete-time, finite-state) Markov chain describes the evolution of the probability of moving from one state to another in a single transition. Transitions between states are modelled using a stochastic matrix (a matrix whose rows each sum to one).
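As a minimal sketch of the idea (mine, not from the Wikipedia article): a two-state chain, its stochastic matrix, and one step of evolution:

```python
import numpy as np

# A two-state chain: from state 0 stay with probability 0.9, move with 0.1;
# from state 1, move back or stay with probability 0.5 each.
T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(T.sum(axis=1), 1)  # rows of a stochastic matrix sum to one

p = np.array([1.0, 0.0])   # start in state 0 with certainty
p = p @ T                  # distribution after one transition
print(p)                   # [0.9 0.1]
```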

The Wikipedia article on snakes and ladders quotes the solution as 39.6 for the Milton Bradley version of Chutes and Ladders. That’s a useful baseline for comparison, because the board layout is easy to find online. It’s also worth noting that the page for absorbing Markov chains quotes the solution as 39.2.

Each of the states in the chain represents a location on the board, and we’ll have one extra state (state 0) that represents the initial position off the board. That means we can conveniently number the states 0 - 100 (inclusive), where 0 is the initial state off the board and 100 is the terminal state.

Ignoring (initially) the chutes and ladders, a transition from a state advances you by the number shown on a six-sided die. From any state $x$, you can reach states $x+1, x+2, \ldots, x+6$, unless such a transition would take you off the board (beyond state 100), in which case the move is invalid and you stay where you are. The probability of each of these transitions is $\frac{1}{6}$, except near the end of the board where some moves are invalid - in those cases a small adjustment is needed to keep the player in that state.
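For example, here is a hand-built row of that matrix for square 97 (a small illustration of mine for the edge case; the full construction follows):

```python
import numpy as np

# From square 97 only rolls of 1, 2 or 3 are valid (reaching 98, 99, 100);
# rolls of 4-6 would overshoot square 100, so the player stays put.
row = np.zeros(101)
row[98:101] = 1 / 6   # one valid roll for each of 98, 99, 100
row[97] = 3 / 6       # three invalid rolls keep the player on 97
print(np.isclose(row.sum(), 1.0))  # True - a valid stochastic-matrix row
```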

import numpy as np

def snakes_and_ladders_matrix():
    n = 101
    A = np.zeros((n, n))
    # a die roll of 1-6 advances the player by that many squares
    for i in range(6):
        A += np.diag(np.ones(n - 1 - i), i + 1)

    # work out how many cells are reachable from every initial cell
    b = np.sum(A, axis=1)

    # add a self-edge for every missing (invalid) move
    A += np.diag(6 - b)

    # normalize
    A /= 6.

    return A

A = snakes_and_ladders_matrix()

We can trivially verify that the resulting matrix is a valid (right) stochastic matrix - and it’s always a good idea to enforce constraints like this when working with Markov chains.

np.testing.assert_allclose( 1, np.sum(A, axis=1) )

So for the game with no snakes or ladders, this is enough to work out how many moves it takes to finish the game. The mean number of steps can be written in terms of the fundamental matrix $N$, where $Q$ is the submatrix of the transition matrix restricted to the transient (non-terminal) states and $I$ is the identity matrix:

$$ N = (I-Q)^{-1} $$

The equation is then

$$ t = N \mathbf{1} $$

Where $\mathbf{1}$ is a column vector of ones. While it’s possible to solve the equation in this form by computing the inverse explicitly, it’s not an effective way to use a computer. Numerical linear algebra libraries solve equations of the form:

$$ A x = b $$

But our unknown is on the wrong side of the equation. Conveniently, left-multiplying both sides by $I-Q$ gives:

$$ (I-Q) t = \mathbf{1} $$

This means we can solve the equation:

Q = A[:-1, :-1]
I = np.identity(Q.shape[0])
o = np.ones((Q.shape[0], 1))
result = np.linalg.solve(I - Q, o)
print('mean turns to complete (without snakes and ladders)', result[0, 0])

Which gives 33.3, which is close enough to give confidence that we’re in the ballpark when we add the snakes and ladders.
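To make the $N \mathbf{1}$ versus linear-solve point concrete, here’s a tiny absorbing chain of my own where both routes give the same answer:

```python
import numpy as np

# A small absorbing chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
Q = P[:-1, :-1]
I = np.identity(2)

N = np.linalg.inv(I - Q)               # fundamental matrix, via explicit inverse
t_inv = N @ np.ones(2)                 # t = N 1

t_solve = np.linalg.solve(I - Q, np.ones(2))  # preferred: solve (I - Q) t = 1
print(t_solve)                         # [4. 2.]
assert np.allclose(t_inv, t_solve)     # both routes agree
```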

ladders = np.array([
    [1, 38],  [4, 14],  [9, 31],  [21,42],
    [28,84],  [36,44],  [51,67],  [71,91],
    [80,100]
]) 

snakes = np.array([
    [98, 78],  [95, 75],  [93, 73],  [87, 24],
    [64, 60],  [62, 19],  [56, 53],  [49, 11],
    [48, 26],  [16, 6],
]) 

def snakes_and_ladders_matrix():
    n = 101
    A = np.zeros((n, n))
    # a die roll of 1-6 advances the player by that many squares
    for i in range(6):
        A += np.diag(np.ones(n - 1 - i), i + 1)

    # work out how many cells are reachable from every initial cell
    b = np.sum(A, axis=1)

    # add a self-edge for every missing (invalid) move
    A += np.diag(6 - b)

    # redirect any move that lands on the start of a snake or
    # ladder to its destination
    edges = np.vstack([ladders, snakes])
    for src, dst in edges:
        b = A[:, src].copy()
        A[:, dst] += b
        A[:, src] = 0

    # normalize
    A /= 6.
    return A


A = snakes_and_ladders_matrix()
np.testing.assert_allclose( 1, np.sum(A, axis=1) )

Running the solver again gives the solution:

Q = A[:-1, :-1]
I = np.identity(Q.shape[0])
o = np.ones((Q.shape[0], 1))
result = np.linalg.solve(I - Q, o)
print('mean turns to complete', result[0, 0])

This gives 39.598, which matches the Wikipedia value of 39.6.

22 May 2015, 09:47

Markov chains and Directed Graphs

One of the simplest possible probabilistic concepts is the finite Markov chain. The basic principle of a Markov chain is that it describes how something (a state) changes over time: you start in a state, and after some time a transition occurs and the state updates. If this sounds familiar, it’s not surprising - Markov chains share many common features with directed graphs.

My interest lies more in their applications to statistical physics (a fancy term for the study of the motion of lots of little bits of stuff). For my application, Markov chains have been helpful as a tool for developing intuition into more complex systems.

So what are the similarities between Markov chains and Graphs?

| Markov chains | Graph theory |
| --- | --- |
| Markov chain | Directed graph with edges labelled with transition probabilities |
| State | Vertex |
| Transition | Edge |
| $T(i, j) = 0$ | No edge from $i$ to $j$ |
| $T(i, j) > 0$ | Edge from $i$ to $j$, with weight $T(i, j)$ |
| Accessible ($i \rightarrow j$) | Reachable |
| Communicating classes ($i \leftrightarrow j$) | Strongly connected components |
| Permutation of a Markov chain | Graph isomorphism |
| Transpose $T'$ | All edges reversed |
| Absorbing ($T(i, i) = 1$) | There exists a self-edge $(s, s)$, and the out-degree of $s$ is 1 |
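The communicating-classes correspondence can be checked directly with `scipy.sparse.csgraph` (my example; assumes SciPy is available):

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# A chain where states 0 and 1 communicate, but state 2 only leaks into {0, 1}.
T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
# Strongly connected components of the graph with an edge wherever T(i, j) > 0
n_classes, labels = connected_components(T > 0, directed=True, connection='strong')
print(n_classes)               # 2 communicating classes: {0, 1} and {2}
print(labels[0] == labels[1])  # True - states 0 and 1 are in the same class
```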