Category Artificial Intelligence
One surprising thing about LLMs nobody is talking about
Small reflections on the unexpected properties of syntactic networks.
Large Language Models (LLMs) may be controversial, but I am still amazed by their mathematical properties. In this article, I will reflect on my favorite one.
This website will always be open. For bots, too.
Because offering free and accessible information is the most important thing.
In a period where everybody seems to care about the use of our content, I decided to take a stance: I will always make my content available to everybody, human and non-human alike. Because copyright is already enough of a problem as it is.
The Freakout-Free Guide to Generative AI
I saw a bit of a heated debate around generative AI and AI art. So this is my “guide” for the general public, where I try to answer some common questions. What is generative AI? How does it work? What are its problems? What are its advantages? Do we all die because of it? (no).
Machine Consciousness is Inevitable
Can a robot become self-conscious? It looks like an interesting question, but it is not: machine consciousness is just inevitable. The truly interesting questions are hidden in the details of how such consciousness will emerge and what it will look like.
The Trolley Cart Problem is not an AI problem
Every time there is a discussion on the future of AI-powered Autonomous Vehicles, somebody puts the Trolley Cart Problem (TCP) on the table. And every time this happens, I am annoyed. However, recently, I saw some mutual followers studying AI and Computer Science talking about how TCP is a fundamental problem for the future of AI and autonomous vehicles. So I think it is time to say it out loud: the Trolley Cart Problem is not an AI problem! Let’s see why.
Overview of Three Techniques for Procedural Storytelling
Inspired by a recent paper I read this week, I decided to explain the three major “classic solutions” to the generative storytelling problem: Simulation, Planning, and Context-Free Grammars. Let’s see what they are and when to choose each.
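To give a flavor of the grammar-based approach, here is a minimal sketch of story generation by context-free grammar expansion. The grammar, symbol names, and `expand` function are my own hypothetical illustration, not taken from the paper: non-terminals map to alternative productions, and the generator recursively rewrites symbols until only terminals remain.

```rust
use std::collections::HashMap;

/// Recursively expand a symbol using a context-free grammar.
/// Non-terminals map to a list of alternative productions; `pick`
/// selects which alternative to use (a real generator would pick
/// at random, which is where the variety comes from).
fn expand(
    grammar: &HashMap<&str, Vec<Vec<&str>>>,
    symbol: &str,
    pick: &mut impl FnMut(usize) -> usize,
) -> String {
    match grammar.get(symbol) {
        // Terminal symbol: emit it as-is.
        None => symbol.to_string(),
        // Non-terminal: choose a production and expand each of its symbols.
        Some(alts) => {
            let production = alts[pick(alts.len())].clone();
            let mut parts = Vec::new();
            for s in production {
                parts.push(expand(grammar, s, pick));
            }
            parts.join(" ")
        }
    }
}
```

With a toy grammar and a deterministic `pick` (always the first alternative), `expand(&grammar, "STORY", &mut first)` produces one fixed story; swapping in a random choice yields a different tale on each run.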
Marginalia: Rebooting AI by Gary Marcus and Ernest Davis
With this new year, let’s try a new format. Marginalia will be a series in which I’ll share notes and comments on interesting books I read. The name is directly inspired by the old word for the small notes written in the margins of books.
It will be a chance to discuss my readings without the need to write a full-fledged article. I hope it will be interesting as a review of the book or as a discussion starter. So, let’s start.
The Winter of Virtual Assistants
Seven years and eight months have passed since the release of the first really popular commercial virtual assistant (VA). Yet, virtual assistants today do only marginally better.
Sure, they understand better, they speak better, they have learned some new tricks; but in the end, they are still a funny but useless experience. After the first fun moments of experimentation when you start talking to them – that is, when you keep asking them silly jokes or dumb questions – they quickly go back to being pretty dumb objects. I am pretty sure that the vast majority of users use a VA just for timers, weather and – occasionally – asking for the events on their calendar.
MovingAI pathfinding benchmark parser in Rust
You know I have worked a lot with pathfinding. In academia, the MovingAI benchmark created by the MovingAI Lab of the University of Denver is a must for benchmarking pathfinding algorithms. It includes synthetic maps and maps from commercial videogames.
Parsing the benchmark data, the maps, creating the map data structure and more, is one of the most boring things I needed to do for testing my algorithms. For this reason, I think a common library for working with the map specification is a must.
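For readers unfamiliar with it, here is a minimal sketch of such a parser, assuming the standard MovingAI .map layout (a `type`/`height`/`width` header, a `map` marker line, then the tile grid). The struct and method names are hypothetical, and I treat `.`, `G` and `S` as passable terrain, which is a simplification of the full terrain rules.

```rust
/// A parsed MovingAI map: declared dimensions plus the raw tile grid.
struct MovingAiMap {
    width: usize,
    height: usize,
    tiles: Vec<Vec<char>>,
}

impl MovingAiMap {
    /// Parse the textual .map format: header lines, a "map" marker,
    /// then `height` rows of `width` tile characters each.
    fn parse(text: &str) -> Result<Self, String> {
        let mut lines = text.lines();
        let mut width = 0;
        let mut height = 0;
        // Read header lines until the "map" marker.
        for line in lines.by_ref() {
            let line = line.trim();
            if line == "map" {
                break;
            }
            if let Some(v) = line.strip_prefix("height ") {
                height = v.parse().map_err(|_| "bad height".to_string())?;
            } else if let Some(v) = line.strip_prefix("width ") {
                width = v.parse().map_err(|_| "bad width".to_string())?;
            }
            // "type octile" and unknown header lines are ignored.
        }
        // The rest of the file is the tile grid itself.
        let tiles: Vec<Vec<char>> =
            lines.take(height).map(|l| l.chars().collect()).collect();
        if tiles.len() != height || tiles.iter().any(|r| r.len() != width) {
            return Err("grid does not match declared dimensions".to_string());
        }
        Ok(MovingAiMap { width, height, tiles })
    }

    /// Simplified passability check: '.', 'G' (ground) and 'S' (swamp)
    /// are walkable; '@', 'O', 'T', 'W' and anything else are not.
    fn is_passable(&self, x: usize, y: usize) -> bool {
        matches!(self.tiles[y][x], '.' | 'G' | 'S')
    }
}
```

A real library would also handle the companion .scen scenario files and the finer terrain distinctions (e.g. water being traversable only from water), but this covers the boring boilerplate the post is talking about.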
Questions about Deep Learning and the nature of knowledge
If there is something that can be assumed as a fact in the AI and Machine Learning domain, it is that recent years have been dominated by Deep Learning and other Neural Network based techniques. When I say dominated, I mean that it looks like the only way to achieve something in Machine Learning, and it is absorbing the greater part of AI enthusiasts’ energy and attention.
This is indubitably a good thing. Having a strong AI technique that can solve so many hard challenges is a huge step forward for humanity. However, like everything in life, Deep Learning, despite being highly successful in some applications, carries with it several limitations that, in other applications, make its use unfeasible or even dangerous.