These are projects which Ciro is seriously considering doing, and which he believes could have a considerable impact on the world, given a few months of work.
They are sorted in order of "most likely to get done first".
Actual section at: Section "OurBigBook.com"
Video 1. Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement Learning by Ciro Santilli (2018) Source. Source code at: https://github.com/cirosantilli/Urho3D-cheat.
Figure 1. Screenshot of the basketball stage of Ciro's 2D continuous game. Source code at: https://github.com/cirosantilli/rl-game-2d-grid. Big kudos to game-icons.net for the sprites.
Video 2. Top Down 2D Discrete Tile Based Game with C++ SDL and Boost R-Tree for Reinforcement Learning by Ciro Santilli (2017) Source.
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magic aspect of the game world, which cannot be explained by object's location and inertia alone". For example:
  • when you press a button here, a door opens somewhere far away
  • when you touch certain types of objects, a chemical reaction may happen, but not other types of objects
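To make "magic" concrete, here is a minimal hypothetical sketch in C++ (the names are illustrative, not from Ciro's actual prototypes): the button-to-door link lives purely in game logic, so an agent watching positions and velocities alone cannot explain it, and has to discover the rule by experimenting.

```cpp
#include <cstdio>

// Hypothetical sketch: the button-to-door link is pure game logic,
// invisible to the physics simulation itself.
struct Door {
    bool open = false;
};

struct Button {
    Door *linked_door;  // the "magic" non-local link
    void press() { linked_door->open = true; }
};

int main() {
    Door door;
    Button button{&door};
    std::printf("door open: %d\n", door.open);  // 0
    button.press();                             // agent steps on the button...
    std::printf("door open: %d\n", door.open);  // 1: ...a distant door opens
    return 0;
}
```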
Much in the spirit of gvgai, we have to do the following loop:
  • create an initial game that a human can solve
  • find an AI that beats it well
  • study the AI, and add a new mechanic that breaks the AI, but does not break a human!
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computation requirements, it is better to focus on a 2D world at first. Such a world, with the right mechanics, can break any AI, while still being faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, which is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution written from scratch would be faster to develop for this use case, since Urho3D has many human-gaming features that are not needed, and because in 2019 Urho3D's lead developers disagreed with the China censored keyword attack.
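As a rough illustration of how small the core of such a from-scratch stack would be, here is a minimal sketch of the kind of deterministic Box2D stepping loop it would be built around, assuming the Box2D 2.4 C++ API (SDL input and OpenGL rendering omitted):

```cpp
#include <cstdio>
#include <box2d/box2d.h>

int main() {
    // World with downward gravity.
    b2Vec2 gravity(0.0f, -10.0f);
    b2World world(gravity);

    // One dynamic ball dropped from y = 4.
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(0.0f, 4.0f);
    b2Body *body = world.CreateBody(&bodyDef);
    b2CircleShape circle;
    circle.m_radius = 0.5f;
    body->CreateFixture(&circle, 1.0f /* density */);

    // Fixed timestep: the world advances deterministically, which is
    // what makes reinforcement learning runs reproducible.
    const float timeStep = 1.0f / 60.0f;
    for (int i = 0; i < 60; ++i) {
        world.Step(timeStep, 8 /* velocity iters */, 3 /* position iters */);
        b2Vec2 p = body->GetPosition();
        std::printf("%.2f %.2f\n", p.x, p.y);
    }
    return 0;
}
```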
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Ciro has always had a feeling that AI research in the 2020s is too unambitious. How many teams are actually aiming for AGI? When he then read Superintelligence by Nick Bostrom (2014) it said the same with more historical context, page 35:
There may, however, be a residual cultural effect on the AI community of its earlier history that makes many mainstream researchers reluctant to align themselves with over-grand ambition. Thus Nils Nilsson, one of the old-timers in the field, complains that his present-day colleagues lack the boldness of spirit that propelled the pioneers of his own generation:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.
Related projects:
Bibliography:
Video 3. DeepMind Has A Superhuman Level Quake 3 AI Team by Two Minute Papers (2018) Source. Commentary of DeepMind's 2019 Capture the Flag paper. DeepMind does some similar simulations to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research. Does this repo contain everything?
Video 4. OpenAI Plays Hide and Seek... and Breaks The Game! by Two Minute Papers (2019) Source. Commentary of OpenAI's 2019 hide and seek paper. OpenAI does some similar simulations to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research, and even worse due to the fake "Open" in the name. Does this repo contain everything?
Video 5. Simulating Foraging Decisions by Primer YouTube channel (2020) Source. This channel contains several 2D continuous simulations and explains the AI techniques used. Notably, they have several interesting multiagent game ideas. TODO once again, are all sources published? The channel claims to be Unity based, which is another downside, as it relies on a non-FOSS engine. Ciro became mildly jealous of this channel when he found out about it, because at 800k subscribers at the time, the creator is likely able to make a living off of it, something which Ciro thought impossible. A large part of that success hinges on the amazing 3D game presentation, which is well done.
It is unbelievable that you can't find easily on YouTube recreations of many of the key physics/chemistry experiments and of common laboratory techniques.
Experiments, the techniques required to do them, and the history of how they were first achieved, are the heart of the natural sciences. Without them, there is no motivation, no beauty, no nothing.
School gives too much emphasis to formulas. This is bad. It is much more important to understand how the experiments are done in greater detail.
The videos must be completely reproducible, indicating the exact model of every experimental element used, and how the experiment is set up.
A bit like what Ciro Santilli does with computers in his Stack Overflow contributions, by indicating precise versions of his operating system, software stack, and hardware whenever they may matter.
It is understandable that some experiments are just too complex and expensive to re-create. As an extreme example, say, a precise description of the Large Hadron Collider, anyone? But experiments up to the mid-20th century, before "big science"? We should have all of those nailed down.
We should strive to achieve the cheapest, most reproducible setup possible with currently available materials: recreating the original historic setup is cute, but not a priority.
That said, reproducing the original setups whenever possible is also desirable, in addition to having the most convenient modern setup.
Lists of good experiments to cover can be found at: the most important physics experiments.
This project is to a large extent a political endeavour.
Someone with enough access to labs has to step up and make a name for themselves through the huge effort of creating a baseline of amazing content without yet being famous, until it reaches a point where this person is actively sought out to create new material for others, and things snowball out of control. Maybe, if the Gods allow it, that person could be Ciro.
Tutorials with a gazillion photos and short videos are equally good as, or even better than, full videos; see for example Ciro's How to use an Oxford Nanopore MinION to extract DNA from river water and determine which bacteria live in it for an example that goes toward that level of perfection.
The Applied Science channel does well in that direction.
This project is one step that could be taken towards improving the replication crisis of science. It's a bit like what Hackster.io wants to do, really. But that website is useless, just use OurBigBook.com and create videos instead :-)
We're maintaining a list of experiments for which we could not find decent videos at: Section "Physics experiment without a decent modern video".
When Ciro Santilli first learnt the old Zermelo-Fraenkel set theory and the idea of formal proofs, his teenager mind was completely blown.
Finally, there it was: a proper and precise definition of mathematics, including a definition of integers, reals and limits!
Theorems are strings, proofs are string manipulations, and axioms are the initial strings that you can use.
Once proved, press a button on your computer, and the proof is automatically verified. No messy complicated "group of savants" reading it for 4 years and looking for flaws!
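As a toy illustration of "press a button and it verifies", here is a minimal machine-checked theorem in Lean 4 (an arbitrary choice of prover, just for illustration): the statement and the proof are plain text, and the kernel validates them automatically on compilation.

```lean
-- The theorem statement and its proof term are just strings of symbols;
-- the Lean kernel checks the proof mechanically, no committee of savants
-- required.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```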
There are a few proof assistant systems with several theorems in their Git tracked standard library. The hottest ones circa 2020 are:
And here are some more interesting links:
However, as expressed by the QED manifesto, it is unbelievable that there isn't one awesome and dominating website that hosts all those proofs, possibly with an in-browser editor, and which all mathematicians in the world use as the one golden reference of mathematics to rule them all!
Just imagine the impact.
Standard library maintainers don't have to deal with the impossible question of what mathematics is "beautiful" or "useful" enough to deserve being merged: users just push content to the online database, and star what they like!
We then just use GitHub-like namespaces for each person's theorems, e.g. "cirosantilli/fundamental-theorem-of-calculus" or "johndoe/fundamental-theorem-of-calculus", so that each person owns their own preferred definition IDs, which others can reuse.
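A minimal sketch of that idea in Lean 4 syntax (hypothetical usernames, Lean chosen just for illustration):

```lean
namespace cirosantilli
-- Ciro's preferred statement of commutativity of addition.
theorem add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
end cirosantilli

namespace johndoe
-- John prefers to derive it by flipping the library lemma around.
theorem add_comm (a b : Nat) : a + b = b + a := (Nat.add_comm b a).symm
end johndoe

-- Downstream users reuse whichever fully qualified ID they prefer:
#check cirosantilli.add_comm
#check johndoe.add_comm
```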
No more endless bikeshedding over what insane level of generality your analysis theorems need (Ciro Santilli attended a talk about Lean where the speaker mentioned this was a problem)!
This would move things more out of the "pull request and Git tracked code" approach, into a more "database with entries" version of things.
Furthermore, it is just a matter of time until the "single standard library" approach starts to break down, as the git clone becomes impossibly large. At this point, people have to start publishing separate packages. And when this happens, you would need to retest every package that you add to your project. This is why a centralized database is just inevitable at some point, it just scales better.
Interested in a conjecture? No problem: just subscribe to its formal statement + all known equivalents, and get an email in your inbox when it gets proved!
Are you a garage mathematician and have managed to prove a hard theorem, but no "real" mathematician will read your proof because you're unknown? Fuck that, just publish it on the system and let it get auto verified. Overnight fame awaits.
Notation incompatibility hell? A thing of the past, just automatically convert to your preferred representation.
Such a system would be the perfect companion to OurBigBook.com. Just like computer code offers the backbone of the Linux Kernel Module Cheat's Linux kernel tutorials, a formal proof system website would be the backbone of mathematics tutorials! You know what, if OurBigBook.com becomes insanely successful, Ciro is going to add this to it later on.
Furthermore, it would not be too hard to achieve this system!
All we would need would be something analogous to a package registry like PyPI or NodeJS' registry.
Then, each person can publish packages containing proofs.
Packages can rely on other packages that contain prerequisite definitions or theorems.
Packages are just regular git repos, with some metadata. One notable piece of metadata would be a human-readable description of the theorems the package provides.
The package registry would then, unlike most package registries, also contain a CI server that checks the correctness of all proofs and generates a web page showing each theorem.
All proofs can be conditional: the package registry simply shows clearly what axiom set a theorem is based on.
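For illustration, such a package manifest could look a bit like Lean's existing Lake format (every name below is imaginary):

```lean
import Lake
open Lake DSL

-- Hypothetical proof package manifest: "johndoe/arith" is an imaginary
-- prerequisite package containing definitions this package builds on.
-- The registry metadata would also carry a human-readable description
-- of the theorems provided.
package «cirosantilli-calculus»

require «johndoe-arith» from git
  "https://github.com/johndoe/arith" @ "v1.0.0"
```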
This is as close as we can get to Erdős' book.
Maybe Ciro will just stuff this into OurBigBook.com once that takes over the world.
This project could be seen as a more automated/less moderated version of ProofWiki.
