
Small introduction to Random Walking

Random Walking is a handy technique to have in your gamedev toolbelt and, despite the name, it is most useful for everything but actual walking. With random walking, we describe the output of a process as a sequence of random steps. The main difference from a plain random sequence is that each new value is statistically near the previous one. Imagine a gold price chart and assume the current price is 10€ for 1g of gold. We cannot guess what the price will be in one hour, but we can be sure it will be around 10€ per gram; maybe 9.5€, maybe 10.5€. What is certain is that a sudden drop to 1€ per gram would be deeply unlikely.

In this article, we will talk briefly about random walking and its ability to simulate many real-world processes such as resource prices, temperature, the position of floating objects over time, and much more.

A Simple Example

Suppose our hero reaches a town and wants to fill up her potion stack. It is better to prepare for a new adventure! However, potions do not grow on trees - or better - they do; but the trees needed for the potions are few, rare, and very dangerous to harvest. For some reason, we really care about the economic accuracy of our RPG, so we want the price of the potions to fluctuate similarly to how the price of oil barrels varies over time in the real world.

If the game is single player, we do not have enough agents to simulate the full world economy. Instead, we can use random walking to generate the potion price. After all, as we have seen in the previous example, if there are no external events, the price of a commodity is very well modeled by a random walking variable.

Therefore, we can have a potionPrice variable in the game that is gently nudged up or down every game cycle. When the player asks for a potion, we use this variable as the current potion price. Easy. Well, almost.
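As a minimal sketch, the per-cycle nudge could look like the following. The names (potionPrice, updatePotionPrice, priceSigma) are illustrative, and normal(mean, variance) stands for any Gaussian sampler; here it is implemented with the Box-Muller transform:

```javascript
// Gaussian sampler via the Box-Muller transform.
const normal = function(mean, variance) {
    const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
    const u2 = Math.random();
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    return mean + Math.sqrt(variance) * z;
};

let potionPrice = 10.0;   // starting price
const priceSigma = 0.2;   // step size: how volatile the market feels

// Called once per game cycle: nudge the price up or down a little.
const updatePotionPrice = function() {
    // Clamp to a small positive floor so the price never goes negative.
    potionPrice = Math.max(0.1, normal(potionPrice, priceSigma * priceSigma));
};
```

The clamp is a practical game-design touch, not part of the mathematical model: a pure random walk can wander below zero, which makes no sense for a price.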

A basic random walking implementation

Now that we know what random walking is and why it is useful, we may be tempted to implement a Random Walking algorithm for our game. After all, applying the algorithm is trivial.

$$ x_t \sim \mathcal{N}(x_{t-1},\sigma^2) $$

This recursive formula is self-explanatory: we start from an initial value \( x_0 \) and, at each step, we sample the new value from a Normal distribution centered on the current value, with a fixed variance.

This formula is easy, it produces good-looking output, and it is effortless to understand. Why, then, would we need anything else?

Imagine that the hero in the previous example goes back to the village after a year and wants to check the new price. How can we give a meaningful value? We need to give the player a value that agrees with the previous observations. Sure, you may say that I am nitpicking here: “who cares!” you say, “Just slap another Normal on top of the previous value and live with that!”.

However, imagine the hero returning the year after that: this time she wants the full price history since the village’s foundation. Sure, you can regenerate it on the fly, but you must make sure that the hero’s previous observations are still valid and present in the history!

Now that’s a much harder problem. You could record the price history regardless of whether players ask for it; but then you would need to generate and store prices for every commodity, in every village! Fortunately, there is a better solution.

Use the probability, fool!

Our best friend, the Central Limit Theorem, comes to the rescue! Instead of generating the random walk step by step, we can “jump” in time. The theorem tells us that, if at \( t_0\) we have a value \( x_0\), then, at a generic time \( t\), we will have a value \( x\) sampled from the following distribution:

$$p(x) \sim \mathcal{N}(x_0,|t-t_0|\sigma^2)$$

The rigorous derivation is not needed here and, honestly, not very interesting. To get a general sense of why the formula above is correct, imagine we take an infinitesimal step from \( x_0\). It is easy to see that this infinitesimal step will not land far from \( x_0\), a bit more or a bit less (which is why the new distribution’s mean is still \( x_0\)), while the variance will be strictly a bit greater than before. How much greater? Of course, the bigger the time step, the bigger the variance (hence the \( |t - t_0|\) factor in the new distribution).

To use this formula, we follow the same algorithm as before: we set an initial value, plug it into the distribution, sample one value, use it as \( x_1\), and so on. The example above shows a random walk generated with this formula; as you can see, the result is pretty sweet.

// normal(mean, variance) is assumed to sample a value from a Gaussian
// distribution (e.g., via the Box-Muller transform).
const generateSimpleRandomWalk = function(x0, sigma, n) {
    let points = [];
    let prevX = x0;
    for (let i = 0; i < n; i++) {
        // Delta time is 1 here, so the variance is just 1 * sigma * sigma.
        let x = normal(prevX, 1 * sigma * sigma);
        points.push([i, x]);
        prevX = x;
    }
    return points;
};

Here we are using one as the time delta and, in fact, in this case the formula falls back to our original implementation. However, now we can change the time delta depending on our needs! We can even go back in time! Most importantly, we can now jump, as shown by the next example.

As you can see, we put a jump there. We compute the values as usual until we reach the jump; then we compute the value after the jump and, finally, we continue with the usual computation. The value after the jump can still differ from a value simulated step by step but, statistically speaking, it is indistinguishable.
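A sketch of such a jump follows; the function name and parameters are illustrative, and normal(mean, variance) is a Gaussian sampler (implemented here with the Box-Muller transform):

```javascript
// Gaussian sampler via the Box-Muller transform.
const normal = function(mean, variance) {
    const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
    const u2 = Math.random();
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    return mean + Math.sqrt(variance) * z;
};

// Walk step by step, but cross [jumpStart, jumpEnd] in a single sample.
const generateWalkWithJump = function(x0, sigma, n, jumpStart, jumpEnd) {
    let points = [[0, x0]];
    let prevX = x0;
    let t = 0;
    while (t < n) {
        // Time delta: 1 for ordinary steps, the whole gap at the jump.
        const dt = (t === jumpStart) ? (jumpEnd - jumpStart) : 1;
        // The variance grows linearly with the time delta.
        const x = normal(prevX, dt * sigma * sigma);
        t += dt;
        points.push([t, x]);
        prevX = x;
    }
    return points;
};
```

The only change from the step-by-step version is that the variance is scaled by the time delta, exactly as in the distribution above.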

The interpolation problem, and a solution

At this point we have solved the jump, but what about interpolation? After all, that’s the tougher problem. Again, we can find an excellent (and, honestly, elegant) statistical solution.

$$Num = \frac{x_0}{(t-t_0)\sigma^2} + \frac{x_n}{(t_n - t)\sigma^2}$$

$$Den = \frac{1}{(t-t_0)\sigma^2} + \frac{1}{(t_n - t)\sigma^2}$$

$$p(x) \sim \mathcal{N} \left( \frac{Num}{Den},\frac{1}{Den} \right)$$

Whoa! That was rough! We already know \( x_0\), \( t_0\), and \( t\). To these, we add \( x_n\), the value toward which we want to interpolate the walk, and \( t_n\), the time when we want the interpolation to happen.

As an example, look at the following demo. No matter how many times you refresh the page, the random walk always passes through the black dot, even though everything is (and looks) random!

To generate this walk: first, we generate the part before the interpolation point using the formula above; at the interpolation point itself we use the fixed value directly (avoiding a nasty division by zero); then we continue with the standard formula.
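The whole procedure could be sketched as follows. Again, the names are illustrative and normal(mean, variance) is a Gaussian sampler (Box-Muller here); sampleBetween is a direct transcription of the Num/Den formulas above:

```javascript
// Gaussian sampler via the Box-Muller transform.
const normal = function(mean, variance) {
    const u1 = 1 - Math.random(); // in (0, 1], avoids log(0)
    const u2 = Math.random();
    const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    return mean + Math.sqrt(variance) * z;
};

// Sample the walk at time t, knowing the past value x0 (at time t0)
// and the future target xn (at time tn), using the formulas above.
const sampleBetween = function(x0, t0, xn, tn, t, sigma) {
    const num = x0 / ((t - t0) * sigma * sigma) + xn / ((tn - t) * sigma * sigma);
    const den = 1 / ((t - t0) * sigma * sigma) + 1 / ((tn - t) * sigma * sigma);
    return normal(num / den, 1 / den);
};

// Walk from x0 at time 0 toward the fixed point xn at time n.
const generateInterpolatedWalk = function(x0, xn, sigma, n) {
    let points = [[0, x0]];
    let prevX = x0;
    for (let t = 1; t < n; t++) {
        prevX = sampleBetween(prevX, t - 1, xn, n, t, sigma);
        points.push([t, prevX]);
    }
    points.push([n, xn]); // use the fixed value itself: avoids division by zero
    return points;
};
```

Note that the loop stops one step before the interpolation time, so \( t_n - t \) is always strictly positive inside sampleBetween.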


We are just scratching the surface here. There are several other applications for random walking in games, from procedural generation to artificial intelligence. However, there are two main takeaways:

  1. The statistical method for random walking is marginally harder to implement but has enormous benefits, such as statistically accurate jumps, fractal behavior (you can zoom in on any time interval), and effortless “time travel.”
  2. Statistical random walking offers a beautiful interpolation algorithm that allows a random value to converge to a fixed point in an apparently random way.
