
Artificial Anxiety and the Problem of "Mental Issues" in AI


Anxiety is a bug of the human mind. This may seem like a strange claim, but I cannot find a better explanation for anxiety disorders. In fact, we can see pathological anxiety as the undesired consequence of our ability to think about the future. Being scared of a life-threatening event in the near future is a valuable ability: it helps us survive, avoid danger and, in short, keep our species alive. That is one of the reasons our species has been so successful in nature[1].

Human Anxiety

However, sometimes our mind goes too far into the future, or it dwells too much on the negative outcomes of future events. And that is where anxiety is deeply rooted. For example, let’s look at my cat. When my cat is scared, it is because of something happening in the present moment or, at most, in the imminent future: a noise, a black bag, or something only she can perceive and that makes us laugh at her. Nevertheless, when the threat is gone, the cat quickly goes back to its normal state. There is no “anxiety”, at least not in the form we feel it.

But humans can think far into the future. We can imagine future situations, plan ahead and formulate all kinds of hypotheses about future events. And this is where anxiety builds up: by extending the time horizon we can perceive, we overwhelmingly increase the number of threats that feel imminent and, therefore, feed our “scared condition”.

But this is unavoidable. Planning ahead, thinking about the future, thinking about the consequences of our actions, our fate, our mortality: this is what made us the dominant species on our planet. It is undoubtedly a “feature” we cannot give up. Foresight is both a blessing and a curse.

What about AI, then? My idea is that because “anxiety” is a design drawback of the “plan for the future” feature, the same may hold true for Artificial Intelligence. We have the goal of designing smarter AIs that can be completely autonomous. To achieve this, we must build AIs that can look ahead and plan around future events like we do. And this forces us to face the “anxiety bug” problem. Can AIs fall into the same pathological pattern? Or are they immune “by design”?

Artificial Anxiety

We are probably talking about a different kind of anxiety. We can define “anxiety” as the fear of some future event. In our case, “fear” is “fear of death”, because “death” is what can block us from achieving our final goal: reproduction. That is what thousands of years of natural selection pressure trained us for. I am oversimplifying the problem, I know, but let’s ignore all its nuances for now.

In AI, we can set the final goal of the agent, and we may think that we can choose a goal that avoids “artificial anxiety”. But, whatever goal we set, an agent cannot achieve it if it is deactivated or incapacitated. Either the goal is so trivial that it is practically unavoidable (and then, what is the point of programming an agent for it?), or there will be situations that make the goal unreachable. As a consequence, the agent will try its best to avoid such situations: it will think about them, it will fear them.
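To make the point concrete, here is a toy sketch (the plan names and numbers are made up for illustration, not taken from any real system): whatever the reward is, its expected value collapses to zero if the agent is shut down before completing its goal, so plans that avoid deactivation tend to dominate.

```python
# Toy sketch with hypothetical plans and numbers: the reward is only
# collected if the agent is still running, so avoiding shutdown wins
# almost regardless of what the actual goal is.
plans = {
    "ignore_operator":    {"reward": 10.0, "p_shutdown": 0.6},
    "cooperate":          {"reward": 8.0,  "p_shutdown": 0.1},
    "disable_off_switch": {"reward": 9.0,  "p_shutdown": 0.0},
}

def expected_value(plan):
    # Probability of surviving to the end, times the reward for the goal.
    return (1.0 - plan["p_shutdown"]) * plan["reward"]

best = max(plans, key=lambda name: expected_value(plans[name]))
print(best)  # -> "disable_off_switch"
```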

This problem is very similar to the Stop Button Problem: whatever mechanism we use to add an “emergency stop button” to a general AI will either make the problem worse or make the AI useless.

With “artificial anxiety” we may be in a better position. After all, we want our agents to “fear” potential failure scenarios; this is what drives the agent toward its defined goal. What we want, however, is to find a balance between an agent paralysed by anxiety and a reckless one. We know such a balance is possible, because we know people who live without pathological anxiety disorders. Unfortunately, how to achieve it is far from clear and may require a deeper understanding of the human mind and of mental disorders.
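One way to picture this balance (purely illustrative, with made-up actions and numbers) is as a single “anxiety” weight that decides how much the feared worst case counts against the expected reward: set it too high and the agent freezes on the do-nothing option, set it too low and it gambles recklessly.

```python
# Purely illustrative: one "anxiety" weight trades expected reward
# against the feared worst case of each action.
actions = {
    "do_nothing": {"reward": 0.0,  "worst_case": 0.0},
    "cautious":   {"reward": 5.0,  "worst_case": -2.0},
    "reckless":   {"reward": 12.0, "worst_case": -50.0},
}

def score(action, anxiety):
    # Higher anxiety puts more weight on what could go wrong.
    return action["reward"] + anxiety * action["worst_case"]

for anxiety in (0.0, 0.5, 5.0):
    best = max(actions, key=lambda name: score(actions[name], anxiety))
    print(f"anxiety={anxiety}: picks {best}")
# anxiety=0.0 -> reckless, anxiety=0.5 -> cautious, anxiety=5.0 -> do_nothing
```

The hard part, of course, is that real agents do not come with a single knob we can tune by hand.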

I know that talking about mental disorders in AIs may seem ridiculous considering the limited capabilities of our current artificial agents. However, I have the feeling that this may become a relevant aspect of AI development in the far (or near?) future. All the techniques we have developed to treat human anxiety may give us hints on how to face similar issues in the artificial domain.

But more importantly, even if not everybody agrees that what I call “fear” in the domain of AI is even comparable to human fear, I find it fascinating that humans and AI can develop common patterns. In some sense, it makes us feel a bit more like machines, and machines a bit more like humans.


[1] However, it is interesting how this amazing ability to predict the future - so valuable for our individual selves - falls apart when we try to predict our future as a species. We make so many bad choices as a collective!
