Ciro Santilli
Video 1. "Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement Learning" by Ciro Santilli (2018) Source.
Figure 9. Screenshot of the basketball stage of Ciro's 2D continuous game. Big kudos to game-icons.net for the sprites.
Video 2. "Top Down 2D Discrete Tile Based Game with C++ SDL and Boost R-Tree for Reinforcement Learning" by Ciro Santilli (2017) Source.
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and Google DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magical aspect of the game world which cannot be explained by objects' locations and inertia alone". For example:
  • when you press a button here, a door opens somewhere far away
  • when you touch certain types of objects a chemical reaction may happen, but not when you touch other types
Much in the spirit of http://www.gvgai.net/, we have to do the following loop:
  • create an initial game
  • find an AI that beats it well
  • study the AI, and add a new mechanic that breaks the AI, but does not break a human!
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computation requirements, it is better to focus on a 2D world at first. Such a world, with the right mechanics, can break any AI, while still being much faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, which is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution written from scratch would be faster to develop for this use case, since Urho3D has a lot of human-gaming features that are not needed, and because in 2019 the Urho3D lead developers disagreed with the China censored keyword attack.
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Related projects:
Video 3. "DeepMind Has A Superhuman Level Quake 3 AI Team" published by Two Minute Papers (2018) Source. Commentary on Google DeepMind's 2019 Capture the Flag paper. DeepMind does some simulations similar to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research. Does this repo contain everything?
Video 4. "OpenAI Plays Hide and Seek... and Breaks The Game!" by Two Minute Papers (2019) Source. Commentary on OpenAI's 2019 hide and seek paper. OpenAI does some simulations similar to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research, made even worse by the fake "Open" in the name. Does this repo contain everything?
Video 5. "Simulating Foraging Decisions" by the Primer YouTube channel (2020) Source. This channel contains several 2D continuous simulations and explains the AI techniques used. Notably, they have several interesting multiagent game ideas. TODO once again, are all sources published? The channel claims to be Unity based, another downside, since it relies on a non-FOSS engine. Ciro became mildly jealous of this channel when he found out about it, because at 800k subscribers at the time, the creator is likely able to make a living off of it, something which Ciro thought impossible. A large part of its success hinges on the amazing 3D game presentation, well done.
