Against Addiction and Gambling-like Mechanics in Free to Play Games

I want to take my cue from last week's massive Reddit thread on micro-transactions in Free2Play (F2P) games to give my opinion on the topic. I think it is important. We need to increase awareness that predatory practices in F2P games are incredibly close to gambling and share with it the same self-destructive and harmful addictive behavior. This is wrong in so many ways: it is dangerous for the victims, it is dangerous for the game itself, and it is dangerous for the entire F2P model.

The Reddit thread presents this as a new discovery, but it is not. The trend has been clear for a long time and there have been many discussions on the topic. Before we try to find a solution, let's look at the different faces of the problem.

The Micro-Gambling Addiction: The User Perspective

When we talk about predatory marketing behaviors in F2P games, we mean all the practices implemented to encourage/force people into spending more and more money in F2P games using excessive addiction/gambling-inspired techniques. Every developer tries to make people spend money on their game. However, sometimes F2P games push this too hard, using addictive design mechanisms with the goal of “trapping” people willing to put unhealthy amounts of money into the game. And they do nothing to prevent this.

A common misconception when talking about this is that people usually assume that the targets of predatory F2P games are children. After all, we usually look at children as vulnerable creatures that can be easily tricked with games. This is wrong. The targets of predatory F2P games are the so-called “whales”: people who can spend thousands of euros per month on a game. Children have no money; they cannot be whales.

In order to survive, F2P games need to catch some whales, and to do that they aim at vulnerable people using gambling-like techniques. We are talking about depressed people, people who have suffered a big loss, people who have problems at work or in social contexts, people not satisfied with their life. F2P games give these people a community, a goal, a deep sense of accomplishment. But in predatory F2P all this comes with a price: a price these people need to pay to continue playing, to not let down their community/clans, to be competitive and continue to feel accomplished.

It is hard for people without these problems to understand how a person without a job can spend $3000 in one session on MapleStory in order to craft a single weapon. But it happens, more often than you think.

Is F2P Addiction Like Gambling Addiction?

If this reminds you of something, you are right: gambling addiction. Let's look at the symptoms of gambling addiction (from here) and see if we can relate them to F2P addiction.

  • obsessing over any type of gambling – replace gambling with games and: check.
  • gambling to feel better about life – As I said before, check.
  • failing to control your gambling – This is clearly the case for people spending $10,000 on an F2P game.
  • avoiding work or other commitments to gamble – Check.
  • neglecting bills and expenses and using the money for gambling – Sadly, check.
  • selling possessions to gamble – I am not aware of such cases, but I am sure that is only because I didn't look hard enough.
  • stealing money to gamble – Check.
  • lying about your gambling habit – Check.
  • feeling guilty after a gambling session – Check.
  • taking bigger and bigger risks while gambling – This requires defining “risk” in F2P game spending, but I can assume that F2P whales tend to spend more and more as time passes.

In my opinion, the symptoms of gambling addiction map well onto my experiences with F2P-addicted people. But, unlike gambling, there is no warning, there is not the same public attention, and F2P addiction is definitely harder to get recognized as a real pathology.

The Micro-Gambling Addiction: The Game Perspective

This concept is extremely well explained in the video above. I'll try to summarize the key points here.

Because F2P games require whales to survive, and because a handful of whales can represent 95% of a game's revenue, games slowly shift their focus toward a whale-centric model. This is reflected in games becoming less and less fun for non-whale players. That's how Pay-2-Win games are born.

But that’s not all. Games need non-whale customers, too. They need them because they are the segment of the game population that feeds the whales’ sense of accomplishment. Why would I need to buy that amazing sword if there is no one to crush with it?

This forces the game to acquire new players faster than existing players quit. This, in turn, increases advertising costs and exposure. Now, instead of a pay-2-win game, we have an annoying pay-2-win game with advertising popping up everywhere.

The self-feeding loop continues until the game is no longer sustainable, leaving behind a battlefield of indifferent or disgusted players.

Did the game improve over time? Did it leave a good memory with its old players? Did it offer a refreshing and fun experience? No. It trapped the whales like a parasite and pissed off everybody else. These are not the kind of products developers are proud to work on. This is not what games should be.

The Micro-Gambling Addiction: The F2P Model Perspective

The last victim of this practice is the F2P model itself. The F2P model is a good model. There is nothing inherently bad about it. Actually, F2P has many benefits: it allows everybody to play, and it allows people willing to spend money to spend the amount they think the game is worth, and more.

However, predatory F2P is actively damaging the F2P model itself. If the trend does not stop, we will face the following problems:

  • Game studios that do not want to implement unethical marketing/design practices may go out of business, because the finite amount of money in the F2P market is absorbed by gambling/addiction-based games. This makes the F2P market less interesting for the majority of players.
  • F2P games in general may start to have a bad reputation among the core of the gaming community. In turn, this drives people away from the model. We can already see how F2P has become a term gamers use to “insult” games.
  • Governments may start acting in an indiscriminate way. Because we rarely see politicians who understand the technology they are going to regulate, I am sure that any regulation will hit the entire F2P model hard, regardless of how unethical each game's practices are.

We can already see some of these points in action and things are not going to improve in the foreseeable future.

Defending people from micro-gambling

Here comes the final question: how can we stop this? As individuals, we probably can't. We can avoid such horrible F2P games, but we will not make a difference. Only the “whales” matter.

But this is not a reason to stop trying. Here is a simple list of things you can do.

  • Spread awareness of the problem. For now, this is not perceived as a social issue. But we have seen it definitely is. There are clearly people out there putting themselves into financial trouble with some game. These people need help. And nobody will help them if everybody thinks there is no problem.
  • If you know people affected by this kind of addiction, try to help them. If they are not in such deep trouble, keep an eye on them. Small addictions easily snowball into much bigger ones. Don't be ashamed. We have reached the point where it is not a shame to ask for help for somebody spending too much at Poker. It is the same for people spending too much on some F2P game.
  • Every time it is possible, point out the connection between gambling and predatory F2P practices. Make clear that some games are using gambling-related techniques. As the original Reddit thread suggests, if we stop using the generic term “micro-transaction” for gambling-like mechanics in F2P games, it is easier to make people aware of the problem. Moreover, it will separate good F2P from bad F2P.

If you are a developer, instead, the obvious suggestion is to avoid putting heavy gambling-inspired techniques in your game. I know that sometimes they are accidental. But there are two easy tricks:

  • Try to make purchasable content not necessary for the game. Splendid examples are Team Fortress 2 and Dota 2. In TF2 everything can be dropped, or can be bought to save time. In Dota 2, everything on sale is just cosmetic. This removes the urgency for people to buy stuff to be competitive and, at the same time, allows people deeply involved in the game to buy something to look cool.
  • Try to put a limit on the amount of money needed to unlock content. If there is no more than X€ worth of content per year, it is impossible for vulnerable people to spend 3000€ in a month. For instance, you can release your game as F2P and, when a single user buys a certain amount of F2P content (e.g., 100€ or more), unlock the full game for them. This is done in Pokémon Picross, where the user can grind everything in an F2P fashion, or buy the equivalent of 15€ of stuff to get complete access to the game's content. There are many other examples of this.

I am open to hearing more solutions to this problem. If you have any other trick to help F2P-addicted people, send me a message and I will update this section!

P.S.: Hey, what about Trading-Card Games?

Oh dear God…

My definition of gambling-inspired micro-transactions (randomization of the outcome, unlimited spending, paid content required to be competitive) is quite broad. Writing this article, I realized that this definition includes games such as Magic: The Gathering or Hearthstone because of one of their core components: booster packs. Are booster packs gambling? Well… technically yes. It is, in fact, possible for people to get addicted to such a mechanic (personally, I think I had some physical addiction to the smell of freshly unpacked booster packs). The feedback of excitement and punishment/reward of opening a booster pack is exactly the same as spinning a slot machine.

I think there is a subtle difference, though. Booster packs always give you something of value that can be used or exchanged to get the card you really want. Even virtual TCG games implement such mechanisms in an attempt to attenuate the gambling aspects. For instance, Hearthstone allows you to disenchant duplicate cards and use the resulting materials to “craft” a more valuable card. Sure, it is not a zero-sum process, but it is better than a slot machine.

For this reason, I think they can cause the same problems in some cases, but in a lighter and more controllable form. They lie in a gray area.

Again, I do not want to ban gambling-like mechanics. I want to make people aware of the problem so that we can protect vulnerable people.


Randomness in PCG is about the result, not the parameters

I feel the urge to state the obvious: randomness in Procedural Generation refers to the perceived randomness of the outcome, not the randomness of the input parameters.

In some sense this is “obvious” but, at the same time, it is one of the most common mistakes I see when developers tackle procedural content generation in their games. It is an understandable mistake, though. There are two assumptions we subconsciously make when we approach randomness: 1) we think that uniformly random parameters produce uniformly random outputs (which is blatantly false), and 2) we think that uniformly random outputs yield a perception of uniform randomness in humans (which is even more false).

These are two easy points to get wrong at the design phase. As we will see later, even big companies discovered the “obviousness” of these two points only years after their product's release.

Randomness in the output

First, we need to convince ourselves that it is false that uniformly random input parameters produce uniformly random results. For this, we can imagine a PCG algorithm as a function that takes some parameters x_1,x_2,\ldots x_n as input and returns something y as output. The wrong assumption is that if we replace x_1,\ldots x_n with random variables X_1,\ldots X_n with a uniform distribution, then the function will return a random variable Y with a uniform distribution.

A trivial counterexample is the function f(x) = 0. I think it is clear that this function's output is not only non-uniform, it is not random at all. However, this example tells us nothing about the problem. A more interesting and famous example is a function returning a random point in a circle. For simplicity, we assume a circle of radius 1. The generator function will be something like the following:

f(\rho,\theta) = \langle \rho \cos(\theta), \rho \sin(\theta) \rangle

This is a standard polar coordinate system, where we use \rho in the interval [0,1] and \theta in the interval [0, 2\pi]. As is well known, for each pair (\rho,\theta) we get a unique point in the circle.

In order to generate uniformly distributed points in the circle, our first idea is to choose \rho and \theta with a uniform distribution. However, the result is far from uniform.

As you can see, the points cluster around the center. Why? It is easy to see that, given the way we are generating the points, half of them will fall in the \rho < 0.5 circle while the other half will fall in the \rho > 0.5 ring. However, the area of the small circle is just 25% of the total area of the circle! We are, indeed, pushing half the points into a quarter of the area.

What we want is to generate points uniformly over the area of the circle. In other words, we want half of the points to land in half the area (and the same for any other ratio). In the circle problem, this can easily be solved by taking the square root of \rho before generating the points. You can do the math, or look at the example below.
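A quick way to convince yourself is to count how many points fall inside the inner \rho < 0.5 circle under both strategies. A minimal sketch in Python (function names are mine, just for illustration):

```python
import math
import random

def naive_point(rng):
    # Uniform rho and theta: points cluster around the center.
    rho, theta = rng.random(), rng.random() * 2 * math.pi
    return rho * math.cos(theta), rho * math.sin(theta)

def uniform_point(rng):
    # Taking sqrt(rho) spreads the points uniformly over the area.
    rho, theta = math.sqrt(rng.random()), rng.random() * 2 * math.pi
    return rho * math.cos(theta), rho * math.sin(theta)

def inner_fraction(point_gen, n=100_000, seed=0):
    """Fraction of generated points falling inside the rho < 0.5 circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if math.hypot(*point_gen(rng)) < 0.5)
    return inside / n
```

With the naive generator, about 50% of the points end up in the inner circle, which covers only 25% of the area; with the square-root fix, about 25% do, as a uniform distribution requires.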

The circle example is a classic and well-known one, but we can apply the same reasoning to any PCG algorithm. When you uniformly pick the input parameters of your dungeon generator, are you generating uniformly distributed dungeons? If this does not work for simple circles, I think it is unlikely that it works for dungeons. Unfortunately, the solution there is much more complex, but it is worth keeping the problem in mind.

Randomness in the perception

True Randomness (left) vs. Perceived Randomness (right). The algorithm on the right avoids clustering and produces a more even distribution of points. Source.

This is another common problem. I talked a bit about this in a previous article, but the problem returns many times, even in different fields. It appears in games, when you want random abilities that seem fair to the player (that's why a lot of MOBAs use pseudo-random distributions), or in music players, when Apple iPods or Spotify shuffle their playlists. The problem is so common that we have a name for it: the gambler's fallacy. In short, we confuse randomness with “equally and evenly distributed events”.
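The “fair randomness” used by some MOBAs is typically a pseudo-random distribution: the chance of an event grows after every failure and resets after a success, so long unlucky streaks are impossible. A minimal sketch of the idea in Python (the constant and the linear growth rule are illustrative, not any specific game's actual values):

```python
import random

def prd_events(base_chance, n, rng=random):
    """Generate n True/False events where the success chance starts at
    base_chance and grows by base_chance after every failure.
    A success resets the chance, bounding streaks of bad luck."""
    chance = base_chance
    events = []
    for _ in range(n):
        if rng.random() < chance:
            chance = base_chance      # success: reset the odds
            events.append(True)
        else:
            chance += base_chance     # failure: raise the odds next time
            events.append(False)
    return events

# With base_chance = 0.25 a success is guaranteed by the 4th consecutive
# attempt, because the chance reaches 1.0 after three failures.
events = prd_events(0.25, 1000, random.Random(42))
```

True uniform randomness would occasionally produce streaks of ten or more failures; this scheme trades statistical purity for a distribution that players perceive as fair.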

The main problem here is that the final player does not always perceive true randomness as randomness. If your algorithm produces thousands of slightly different objects, even if they are technically random, and the player perceives them as “mostly the same thing”, your algorithm has failed to achieve its purpose from a PCG point of view.

As a consequence, even if you have an algorithm that produces uniform randomness in the output (see above), if your output space is not uniform from a perception point of view, you still fail to achieve “true perceived randomness”. Unfortunately, understanding what “perceived randomness” means in many PCG domains is even harder than producing uniform output. For dungeon generators, it could depend on many features of the dungeon: for instance, the topology (long maze-like corridors vs. large rooms connected by short hallways), the tiles, the enemies, or something else. In any case, it is not possible to guess this in advance. For this reason, it is important to analyze the players' feedback in the specific game.

Combining the two

We can then conclude that we can achieve “true randomness” in PCG with the combination of two properties:

  1. The algorithm should produce random results uniformly distributed in the output space.
  2. The output space should contain a uniform distribution of “perceived random” results.

Both these properties are just guidelines, because it is impossible to turn them into practical rules, or even verify them, in any PCG algorithm more complex than a random number generator. Nevertheless, these are two important aspects that a PCG developer should keep in mind.

If you are interested in this topic, in the literature the combination of these two aspects takes the name of Expressive Range (click on the link for the paper on Expressive Range in PCG). But for us, this is a story for another time.


Random calendar generation from planet orbital parameters

In this article I want to show you a small proof of concept in which we generate an alien calendar from scratch. The difference from other random calendar generators, which just output random days and random months, is that in this tool we can specify the orbital parameters of the planet and its satellite as input, and generate a calendar that makes sense. What would the calendar look like for a planet orbiting a super-massive star? What about a calendar for Mars or Venus? Every time I try to sketch a Sci-Fi world, I try to think of a calendar that makes sense for that particular planet. Because the calculations are long and boring, I built this little tool for myself. However, before going to the tool, I want to provide a small introduction to the problem.


Among the many things I am fascinated by, I like to imagine calendars for the inhabitants of distant worlds. A different year duration, a different moon (if any), a different history: every small detail can completely change the structure and rules of an alien calendar.

After all, our calendar is just the messy result of centuries of corrections and arbitrary changes. There are so many things we take for granted that, in reality, are just arbitrary decisions.

  • Why does the year start in the middle of Winter?
  • Why do we use a 7-day week?
  • Why do we use 12 months? And why are they 30/31 days long?
  • Why is February 28 days long?
  • Why is there a leap year every 4 years (approximately)?

Looking at calendars developed in other cultures, such as the Chinese Calendar and the Persian Calendar, we can see that these and many other properties are just conventions.

A Calendar: the Important Parts

What we really care about in a calendar is keeping track of the seasons and helping people keep track of time in their daily life. These things depend directly on the movement of the Earth and the other celestial bodies. This is the reason this generator uses orbital parameters as a starting point for calendar generation: I want to be sure that the generated calendar is aligned with “the real world”. But let's see which calendar aspect is connected to which orbital information.

  • Year: as we all know, the year depends on the orbital period of the Earth around its star (the Sun).
  • Day: the day is the time the Earth takes to do a full rotation on its axis (let's ignore the difference between the solar day and the sidereal day).
  • Month: months are borderline between convention and “real thing”. Here on earth, we have this big satellite (the Moon) and we decided to align the month duration to the revolution of the Moon around the Earth.
  • Leap Years: this is a way to fix the decimal part of the ratio between the year and the day duration. We will look at this in more detail later.

Then there are seasons. Seasons depend mainly on the axial tilt of the Earth (and, to a lesser extent, on the eccentricity of its orbit) but, for now, we ignore them.

Cool. We have a nice correspondence between the year/month/day durations and the orbits of the Earth around the Sun and of the Moon around the Earth. Therefore, we just need to compute these values.

Computing the Orbital Periods from the Orbital Parameters

The semi-major axis (in red) and the semi-minor axis (in blue) of an ellipse.

With orbital parameters we refer to all the information about two celestial objects that defines how the smaller object orbits around the other. Fortunately for us, there are just three parameters:

  • Mass M of the bigger object (e.g., the Sun)
  • Mass m of the smaller object (e.g., the Earth)
  • Semi-major axis of the orbit, a. That is, half the longer diameter of the ellipse (the major axis). For orbits, it is convenient to compute this value as the average of the minimum distance between the Earth and the Sun (perihelion) and the maximum distance (aphelion).

With these three pieces of data, we can compute everything about the orbits of the celestial objects. To do that, we will use Kepler's Third Law:

\frac{P^2}{a^3} = \frac{4 \pi^2}{G(M+m)}

Solving this equation for P gives us the orbital period. If we want to know the position of the planet on its orbit on a specific day, the formulas get much more complicated. We will talk about that when we try to add seasons and lunar phases to our calendar. For now, more information on this task can be found in this PDF (they are just my notes on the topic).

For now, we only need the orbital period. Using this formula, we can get the periods defining the year duration (Sun–Earth system) and the month duration (Earth–Moon system).
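As a sanity check, plugging the Sun–Earth values into Kepler's Third Law should give roughly one year. A minimal sketch in Python (the constants are rounded textbook values in SI units):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def orbital_period(M, m, a):
    """Orbital period in seconds from Kepler's Third Law:
    P^2 / a^3 = 4 pi^2 / (G (M + m))."""
    return math.sqrt(4 * math.pi**2 * a**3 / (G * (M + m)))

# Sun-Earth system: the period should be about one year.
SUN_MASS = 1.989e30          # kg
EARTH_MASS = 5.972e24        # kg
EARTH_SEMI_MAJOR = 1.496e11  # m, average of perihelion and aphelion

year_seconds = orbital_period(SUN_MASS, EARTH_MASS, EARTH_SEMI_MAJOR)
year_days = year_seconds / 86400  # roughly 365 days
```

The same function, fed with the Earth's and Moon's masses and the Moon's semi-major axis, gives the month duration.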

Computing the relevant calendar information from Orbital Periods

Now, we can compute the important aspects of a calendar.

For the year, we use the integer part of the orbital period between the planet and the star (e.g., the Earth and the Sun).

For the average month duration, we use the integer part of the orbital period between the satellite and the planet (e.g., the Moon and the Earth).

Leap years are a bit more complicated. First, we need to divide the year duration (in seconds) by the day duration (in seconds). For example, on Earth, each day is composed of 86400 seconds. If we take the orbital period of the Earth and divide it by this value, we get the well-known value of 365.242190402. Now, we take the decimal part, 0.242190402, and we look for a fraction that approximates this number well enough. Suppose, for instance, we take 0.25 = \frac{1}{4} as a good approximation. This tells us that we need to add 1 day every 4 years. If the fraction we choose is \frac{97}{400} (as is the one we use nowadays), it means that we need to add 97 days every 400 years.

Note that this does not tell us how to do it. We can choose to add 1 day every certain number of years, or add a week in some special years, and so on. We will talk more about this another time.

How can you compute this fraction? In my code I use the convergents of the continued fraction of the decimal number. You can use any other technique; it does not matter.
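For completeness, here is one way to compute those convergents. This is just a sketch of the idea, not the code used in the generator:

```python
from math import floor

def convergents(x, n=6):
    """First n convergents (p, q) of the continued fraction of x in (0, 1)."""
    result = []
    p_prev, p = 1, 0   # p_{-1}, p_0
    q_prev, q = 0, 1   # q_{-1}, q_0
    for _ in range(n):
        if x == 0:
            break
        a = floor(1 / x)          # next continued-fraction term
        x = 1 / x - a             # remaining fractional part
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        result.append((p, q))
    return result

# For Earth's 0.242190402, the early convergents include 1/4 (the Julian
# rule) and 8/33; later convergents approximate the ratio ever better.
fracs = convergents(0.242190402)
```

Each convergent is the best rational approximation of the target with a denominator of at most its size, which is exactly the trade-off (accuracy vs. simplicity of the rule) a calendar designer cares about.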

At this point, we have all the data we need to generate a random calendar. We know how many days there are in a year, how many months, how many days there are on average in a month, and we have a basic idea of how the “leap year” will work. Now, we only need to apply some randomization to these values, for instance by moving days from one month to another, and we will get a calendar structure for our planet.
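As an illustration of that last step, one possible randomization (hypothetical, not necessarily what the demo does) scatters the leftover days of the year over randomly chosen months:

```python
import random

def random_month_lengths(year_days, month_days, rng=random):
    """Split a year of year_days into months of roughly month_days each,
    distributing the leftover days among randomly chosen months."""
    n_months = year_days // month_days
    months = [month_days] * n_months
    for _ in range(year_days - month_days * n_months):
        months[rng.randrange(n_months)] += 1
    return months

# Earth-like input: 365 days and 29-day lunar months give 12 months
# plus 17 extra days to scatter around.
lengths = random_month_lengths(365, 29, random.Random(1))
```

Run several times with different seeds, this produces different but always consistent month layouts for the same planet.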

The Calendar Generator

The Calendar Generator interface in the demo project.

Now that we have the basic idea of how the various concepts are connected with each other, we can start looking at the code.

The actual implementation of the calendar generator is hosted here. In the demo, you can insert all the orbital parameters and the duration in seconds of the day of your planet, and generate a possible calendar for that configuration. The source code is hosted on GitHub.

An example output for version 0.1 of the calendar generator.

Obviously, this is an early proof of concept and there are still many things I can improve. First, I still use a weekly structure for the calendar. Weeks are a convenient way for the inhabitants of the planet to divide the months, but the 7 days of our calendar do not come from any orbital parameter. I would like to make the number 7 a user parameter and, in the future, provide alternatives to the concept of a week.

Second, many other calendars on Earth have multi-year structures such as cycles or eras. A common example is the 60-year-long cycle in the Chinese Calendar (and other eastern calendars). I would like to add the possibility to generate these kinds of structures as well.

Finally, I assume a planet with one moon. But what if there is none? What if there are two, three, ten satellites? What would a calendar for such exotic places look like? The same reasoning applies to multiple stars. These multi-object systems are so different that it is not easy to develop a calendar for them. It is a much more challenging problem that I think is worth exploring.

And then seasons, events, and much, much more. Playing with the calendar of a distant world tells us a lot about the life of its inhabitants before we even start thinking about them.

I hope you have found this article interesting. If you want, we can continue improving this on GitHub. Follow me on Twitter and ask me anything!

Most Promising Programming Languages of 2017

Another year, another 5 promising programming languages you should keep an eye on in 2017. As usual, I'd like to repeat the warning I put here every year: in this list, you will not find programming languages to learn for hiring purposes, but languages for very long-term investments and for pure programming fetish.

So, now that you know what I am talking about, here we go with the top 5 for 2017.

Top 5 Promising Programming Languages for 2017


Here we go again. You know, I will put Rust in this list again and again. And not only because the language is fascinating and interesting, but for all the other aspects too. Sure, Rust is one of the few super-low-level languages that can actually compete with C. But it also has one of the friendliest programming communities I have ever joined (ehm, lurked in).

I don’t think I have to spend too many words on Rust. Rust is a quite new memory-safe, non-garbage-collected low-level language. In the last year, Rust reached version 1.14 and it progresses at a steady pace, with a release every 6 weeks. Even if you do not have any plans for Rust in 2017, keep an eye on it. Following its evolution is quite fun (depending on your idea of “fun”, obviously).


Nim is another very interesting language. It is a bit less established than Rust, but its syntax is easy to grasp, and it is really efficient in many general-purpose and common use cases.

Now, before we go on, I have to say something. Usually, whenever I say that “unlike Rust, Nim is garbage collected”, somebody pops up from the tall grass reminding me that “in Nim you can disable garbage collection”. Well, let's be clear: in any GC language you can disable garbage collection. The difference is whether the language still works the same way. In Nim, you can disable the garbage collector but, at the time of writing, the standard library assumes the GC is running and, therefore, does not work. Moreover, if you want to use manual memory management in Nim, you face all the problems of C, because there are no compile-time safety checks (unlike Rust). If you do not trust me, trust the description of the Nim project on GitHub:

“Nim is a compiled, garbage-collected systems programming language […]”

That said, for many, many applications garbage collection is not a problem, so I do not understand why some Nim guys are so harsh on this point. Nim is an amazing language even if garbage collected. It is like Python, but compiled to multi-core native code. And it has an amazing system for generating expressions at compile time.

It is still a bit in the shadows, but if its time comes (and I hope it will), it will be great.


Elixir is another language that is gaining a lot of traction. It can appear like a bunch of syntactic-sugar macros over Erlang (and, probably, it was born that way), but it is more than that. I already talked about Elixir last year, but it is only during 2016 that I started digging into Elixir for real.

The big pros of Elixir are its Erlang back-end and its Ruby-like syntax. The first allows Elixir to be used for highly concurrent software; in fact, Erlang was born for that. Writing concurrent code in Elixir is as easy as using list comprehensions in Python. However, Erlang's syntax can appear strange and unfamiliar to many developers; for this reason, the Ruby-like syntax can definitely help Elixir.

However, it is a dynamically typed language (even if it supports some level of gradual typing), so keep this in mind. In any case, depending on how Elixir evolves in the next couple of years (and in particular its ecosystem), we may have another strong contender and alternative to Node.js.


Talking about Ruby-like languages, another very interesting language I have to point out this year is Crystal. In very few words, Crystal is Ruby compiled to machine code. As stated by the developers, Ruby syntax compatibility is not a top priority; the goal is to be as close as possible to Ruby's expressiveness and, at the same time, as close as possible to C's speed. In some sense, it is very similar to Nim, but with a more “Rubyist” syntax.

Other strong points of Crystal are the ability to call C code very easily and the fact that it is statically typed (though type inference allows you to avoid explicit type declarations in many situations).

There is a bit of hype around Crystal, even if the language is still in a very alpha state and the syntax itself is not stable. Crystal is still at version 0.20.3, but releases are fast and interesting to track. If you share the desire expressed in Crystal's goals, you should follow this project.


This is a long shot, not because we are talking about an obscure alpha language, but because Kotlin seems to be the language nobody really needed. In the last ten years, we have seen many JVM languages with the goal of becoming “a better alternative to Java”. But, in the end, everybody still uses Java.

So, why Kotlin? There is no killer feature here. The only thing that can push Kotlin as a solid replacement for Java is the company that is developing it: JetBrains.

Everybody knows JetBrains, and most of us love their IDEs. This can be enough to slowly push Kotlin into the wild, especially for Android development (Android Studio is based on a JetBrains IDE, after all). Is this enough? Probably not. We will see.

I'm not convinced by this language, but I think we will definitely see whether it can become something serious during this year. That's why I put it here.

Other Languages

Now a quick comment on some other languages I didn’t put in the top 5.


  • Clojure: I am definitely waiting for 1.9. Unfortunately, one year has passed since release 1.8, and 1.9 will not be available by the end of the year. This is not a problem, but 1.9 promises a lot and I want to see it in the field before pushing Clojure back into the top 5. Anyway, a solid sixth place.
  • TypeScript: I think we can agree that TypeScript is no longer just a promising language. With version 2.0, it is a solid and established product that kept its promises. Just like Go, it is time for TypeScript to leave this nest.
  • Eve: This is the strangest and boldest thing I've found this year. It is an IDE, it is a language, it is both. Eve is definitely a strange and innovative creature that feeds on developers' hype. In my opinion, Eve is too far ahead of its time. It will not be very successful, but it will be an inspiration for future languages (in the best scenario, it will be like Haskell).
  • Red: Red is another strange language, a modern evolution of an arcane specialized language named REBOL. I find this language interesting because it allows a very natural reactive programming style. Because reactive paradigms could be the “next big thing” of the new functional programming era, it is interesting to look at a real language built around them. However, note that the language is still in a very unstable pre-alpha state.

How to manage a Videogames Bibliography in LaTeX

There is a common question in academia among people working on videogames: “Is there a consensus on how to cite videogames in academic papers?”. Obviously not: there is no consensus, and probably there never will be. However, I will show you a solution to this problem that I have enjoyed a lot in the last months. It is clear, it is customizable, and it is the most formal way I’ve found to do it.

Use BibLaTeX

The first prerequisite is to use BibLaTeX, a replacement for BibTeX (the default bibliography manager for LaTeX). It is more flexible and powerful than BibTeX; therefore, even if it is a bit more complicated to set up, it is worth the switch in any case.

In order to switch to BibLaTeX you can follow the simple steps described here. It is not hard. There are just three things you have to do. First, load the BibLaTeX package with

\usepackage[style=<somebiblatexstyle>,<other options>]{biblatex}

Second, load your bibliography with

\addbibresource[<options for bib resources>]{<mybibfile>.bib}

And, at last, print your bibliography.

\printbibliography[<options for printing>]
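Putting the three steps together, a minimal document skeleton might look like this (the authoryear style, the biber backend, and the file name videogames.bib are just placeholder choices; use whatever fits your paper):

\documentclass{article}

% Step 1: load BibLaTeX (here with the authoryear style and the biber backend).
\usepackage[style=authoryear, backend=biber]{biblatex}

% Step 2: register the bibliography file.
\addbibresource{videogames.bib}

\begin{document}
Some text with a citation~\cite{bioshockinfinite}.

% Step 3: print the bibliography.
\printbibliography
\end{document}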

Create your videogame bibliography

Cool. Now, what we want to do is create our personal videogame bibliography. Ideally, we want a new LaTeX command \citegame{game_id} for managing videogame citations. First, however, we have to create our videogame.bib file. Inside this file, we will add an entry for each videogame in this way:

@software{bioshockinfinite,
  author = {{Irrational Games}},
  title = {BioShock: Infinite},
  url = {},
  version = {1.1.25},
  date = {2013-03-26},
}

Pro Tip. You may want to print your videogame bibliography in a separate section (different from the bibliography of your books and papers). You can do it like this:

% Print the standard bibliography (everything except software entries).
\printbibliography[nottype=software]
% Print your Videogame/Software Bibliography
\printbibliography[type=software, title={Videogames Bibliography}]

Add a custom cite command

Now it is time to implement our custom cite command! Insert this code in the preamble of your LaTeX file:

\DeclareCiteCommand{\citegame}
  {}
  {\ifentrytype{software}
     {\printtext{\printfield{title} (\printnames{author} \printfield{year})}}
     {\GenericError{}{Not a game entry}{}{}}}
  {}
  {}

This command creates a custom \citegame{game_id} that you can use to cite videogames. You can customize the format; in my documents I use the “Title (Developer Year)” format shown above. For instance, you can use the following code:

\caption{Elizabeth in \citegame{bioshockinfinite} is an AI companion which, 
among the other tasks, will randomly provide ammo, medikits and money for the player.}

To get a result like this:

[Figure: the rendered in-text videogame citation]

[Figure: the rendered videogame bibliography]
I hope you enjoyed this very small tutorial! Feel free to update and improve it. There are still some rough edges that I would like to remove. In any case, share it with your colleagues if you like!

Procedural Generation in the Post No Man’s Sky Era

I think the time is right to talk about Procedural Content Generation (PCG) in the post No Man’s Sky era. I’m talking about an “era” for a reason: No Man’s Sky’s huge failure will definitely mark an era in the history of PCG, and not in a good way. Players’ perception of PCG has been severely hurt by how badly NMS delivered its content. Probably, this will be the end of PCG as a marketing buzzword.

But perhaps this will be for the best. Without the hype around PCG, we can start reconsidering PCG for what it really is. However, we need to be sure not to make the same mistakes again.

What we did wrong

We, as a community of developers and PCG researchers, made (or allowed) several mistakes. Before NMS, PCG was a heavily overused marketing term. “In this game you can explore a gazillion different levels!”, “You can visit thousands of millions of new worlds!”. We have all heard phrases like these in the advertising material of every PCG-powered game.

This is what I said in more innocent times.

For people like me, deeply invested in PCG through passion and research, the real meaning of those phrases is clear. But for the average player, they build totally unrealistic expectations. When I read that a game offers 1,000,000 different worlds, I know this means “10 variations for each of 6 different parameters, most of which are similar to each other or partly broken”. The average player, however, takes it to mean exactly 1,000,000 different worlds! It is not surprising that the player’s expectations are never met!

The “Stool Threshold”

The problem here is the meaning of the word “different”. Even if 10 variations for each of 6 parameters are technically different from each other, they are not “different” to the human brain. Without entering into the realm of heavy math and philosophy, we can simply say that two things are “Really Different” from each other if and only if their difference is greater than the Stool Threshold.

But what is this Stool Threshold? I will try to explain it with an example. Imagine you have an algorithm that procedurally generates four-legged furniture. The algorithm randomizes three parameters: the two sides of the rectangular top and the length of the four legs. As you can imagine, we can generate a lot of tables; however, if the legs are tall enough and the top is small enough, we stop having a table and get a stool instead. Well, the Stool Threshold is exactly the blurry line that separates the stools from the tables.

The subtle difference between a table and a stool.

As you can see, this is not a formal definition, but it gives us an idea of what “noticeably different” means. It is based not only on how the table/stool looks, but also on its purpose (we use a table in a completely different way than a stool). So, even if we technically have thousands of different pieces of furniture, in the end we are offering the players just a bunch of similar tables and stools. That is exactly why people get hyped and then bored by PCG-centered games: you promise thousands when you are offering just two.
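The furniture example can be sketched in a few lines of Python. The parameter ranges and the threshold cut-offs below are invented for illustration: the generator emits thousands of technically different pieces, but a “Stool Threshold” classifier collapses them into just two Really Different kinds.

```python
import random

def generate_furniture(rng):
    """Randomly sample the three parameters of a four-legged piece (in cm)."""
    return {
        "width": rng.uniform(30, 200),       # top width
        "depth": rng.uniform(30, 200),       # top depth
        "leg_length": rng.uniform(20, 110),  # leg length
    }

def classify(piece):
    """A toy 'Stool Threshold': a small top on tall legs reads as a stool.

    The cut-off values (50 cm top sides, 45 cm legs) are made up for the example.
    """
    small_top = piece["width"] < 50 and piece["depth"] < 50
    tall_legs = piece["leg_length"] > 45
    return "stool" if (small_top and tall_legs) else "table"

rng = random.Random(42)
pieces = [generate_furniture(rng) for _ in range(10_000)]
kinds = {classify(p) for p in pieces}

print(f"{len(pieces)} technically different pieces,")
print(f"but only {len(kinds)} Really Different kinds: {sorted(kinds)}")
```

Counting `kinds` instead of `len(pieces)` is the whole point: the combinatorial number is huge, the perceptual one is tiny.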

The Future

Now that the bubble has exploded, people will probably be more aware of this issue, and PCG will stop being proposed as the core mechanic of a game. Hopefully, this means that we (developers and researchers) can go back to focusing on what PCG is, and on its limits and future, instead of hyping ourselves and others. To do that, we need to stop committing the same mistakes over and over again. Therefore, here is a small list of rules we should try to follow from now on.

  1. Stop using PCG to replace actual game-play! PCG is the palette, not the painting. If the game-play is based exclusively on watching PCG at work, you are in a very dangerous place.
  2. Use the “Stool Threshold”. When you evaluate your PCG elements, use the “Stool Threshold”, not the plain cold combinatorial value. How many Really Different dungeons can your game generate? How many Really Different weapons? I mean “meaningfully different” from each other. And if you do not get exciting numbers, do not worry! Have you ever seen a Diablo player say “I play Diablo because it has thousands of different weapons!”? No! Because we use PCG to support the game-play, not the other way around. If, instead, you think this is not good enough... well, then you probably have to add game-play elements, because you are dangerously close to point 1.
  3. Finally, aim for the meaningful, not the different. PCG is not always about generating billions of different things. Instead of generating vibrant worlds that then remain static for the whole game-play experience, use PCG to follow the player during the game, providing dynamism and changing the world in such a way that the player can feel an emotional attachment. An example I love is how in Dwarf Fortress the dwarves can use events previously experienced by the player to craft artifacts and books. A jar inscribed with the events in which your beloved dwarf almost killed your entire expedition is much, much more interesting than a stunning but completely random and abstract jar.


I hope this gives the PCG community some insight into the mistakes and misconceptions developers have when they talk about PCG. One day there will be technical solutions for generating Really Different PCG worlds. But until then, we should watch how we talk about PCG.