Given enough computational power per dollar, AGI is inevitable, but it is not certain that it will ever happen given the possible end of Moore's Law.
Alternatively, it could also be achieved with genetically modified biological brains + brain in a vat.
Imagine a brain the size of a building, perfectly engineered to solve certain engineering problems, and giving hints to human operators + taking feedback from cameras and audio attached to the operators.
This likely implies transhumanism, and mind uploading.
Ciro Santilli joined the silicon industry at one point to help increase our computational capacity and reach AGI.
Ciro believes that the easiest route to full AI, if any, could involve Ciro's 2D reinforcement learning games.
Due to the failures of earlier generations, which believed that they would quickly achieve AGI, leading to the AI winters, 21st century researchers have been very afraid of even trying it, for fear of being considered cranks, and have rather gone only for smaller subset problems like better neural network designs.
While there is fundamental value in such subset problems, keeping the general view of the final goal is also very important: we will likely never reach AI without it.
This is voiced for example in Superintelligence by Nick Bostrom (2014) section "Opinions about the future of machine intelligence" which in turn quotes Nils Nilsson:
There may, however, be a residual cultural effect on the AI community of its earlier history that makes many mainstream researchers reluctant to align themselves with over-grand ambition. Thus Nils Nilsson, one of the old-timers in the field, complains that his present-day colleagues lack the boldness of spirit that propelled the pioneers of his own generation:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.
It is hard to overstate how low the level of this conference seems to be at first sight. Truly sad.
It is a bit hard to decide if those people are serious or not. Feels scammy, but sometimes feels fun.
Video 1. Creativity and AGI by Charles Simon at AGI-22 (2022) Source. Sounds OK!
Not having a manipulator claw is a major issue with this one.
But they also have a co-simulation focus, which is a bit of a win.
Basically it looks like the dude got enough money after selling some companies, and now he's doing cooler stuff without much need of money. Not bad.
www.reddit.com/r/artificial/comments/b38hbk/what_do_my_fellow_ai_researchers_think_of_ben/ What do my fellow AI researchers think of Ben Goertzel and his research?
Ben Goertzel's fog computing project to try and help achieve AGI.
As highlighted e.g. in Human compatible by Stuart J. Russell (2019), AI alignment is intrinsically linked to the idea of utility in economics.
Basically, ensuring good AI alignment is what would allow us to survive the singularity.
There are two main ways to try and reach AGI:
Which one of them to take is one of the most important technological questions of humanity according to Ciro Santilli.
There is also an intermediate area of research/engineering where people try to first simulate the robot and its world realistically, use the simulation for training, and then transfer the simulated training to real robots, see e.g.: realistic robotics simulation.
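To make the sim-to-real idea more concrete, here is a minimal self-contained sketch with a made-up toy simulator (no real robotics library assumed): the simulator's physical parameters are randomized every episode, so a policy that does well on average over them has a better chance of surviving the transfer to a real robot whose exact parameters are unknown.
```python
import random

# Toy sketch of sim-to-real with domain randomization: everything here is
# made up for illustration only.

class Toy1DArmSim:
    """Crude 1D 'move the arm tip onto the target' simulator."""

    def __init__(self, rng):
        self.gain = rng.uniform(0.5, 1.5)      # randomized actuator strength
        self.friction = rng.uniform(0.0, 0.3)  # randomized velocity damping
        self.pos, self.vel = 0.0, 0.0
        self.target = rng.uniform(-1.0, 1.0)

    def step(self, torque):
        self.vel = (1.0 - self.friction) * self.vel + self.gain * torque
        self.pos += 0.1 * self.vel
        return -abs(self.pos - self.target)  # reward: negative distance to the target


def average_return(kp, kd, rng, episodes=20, steps=50):
    """Average return of a PD-style policy over many randomized simulator instances."""
    total = 0.0
    for _ in range(episodes):
        sim = Toy1DArmSim(rng)
        for _ in range(steps):
            torque = kp * (sim.target - sim.pos) - kd * sim.vel
            total += sim.step(torque)
    return total / episodes


# "Training" by naive random search over the policy parameters; the winner is
# what one would then try on the real robot.
rng = random.Random(0)
candidates = [(rng.uniform(0.0, 2.0), rng.uniform(0.0, 1.0)) for _ in range(200)]
best = max(candidates, key=lambda p: average_return(*p, rng))
print("policy (kp, kd) found in randomized simulation:", best)
```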
It doesn't need to be a bipedal robot. We can let Boston Dynamics worry about that walking balance crap.
It could very well instead be on wheels, like an arm on tracks.
Or something more like a factory with arms on rails as per:
An arm with a hand and a camera are however indispensable of course!
Has anybody done this seriously? Given a supercomputer, what amazing human-like robot behaviour can we achieve?
Ciro Santilli took a stab at it with Ciro's 2D reinforcement learning games, but he didn't sink enough time into that project.
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
The group kind of died circa 2020 it seems, a shame.
Or is real world data necessary, e.g. with robots?
Fundamental question related to Ciro's 2D reinforcement learning games.
Bibliography:
In 2019, OpenAI transitioned from non-profit to for-profit
so what's the point of "Open" in the name anymore??
The key takeaway is that setting an explicit value function for an AGI entity is a good way to destroy the world due to poor AI alignment. We are more likely to not destroy the world by creating an AI whose goal is to "do what humans want it to do", but in a way that it does not know beforehand what it is that humans want, and has to learn that from them.
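Here is a toy numerical sketch of that idea, loosely in the spirit of the book's discussion but not its formal model (scenario, names and numbers are all made up): the agent holds a probability distribution over candidate human utility functions, updates it from observed human behaviour, and only then maximizes expected utility.
```python
import math

# Toy sketch: instead of hard coding a value function, the agent keeps a
# probability distribution over what the human's utility might be, updates it
# from observed human choices, and only then maximizes expected utility.

# Three hypotheses about the human's utility for actions A, B and C.
hypotheses = {
    "likes_A": {"A": 1.0, "B": 0.2, "C": 0.0},
    "likes_B": {"A": 0.2, "B": 1.0, "C": 0.0},
    "likes_C": {"A": 0.0, "B": 0.2, "C": 1.0},
}
belief = {h: 1 / 3 for h in hypotheses}  # uniform prior: the agent does not know

def update(belief, observed_choice, beta=5.0):
    """Bayesian update assuming the human picks higher-utility actions more often."""
    posterior = {}
    for h, prior in belief.items():
        utils = hypotheses[h]
        z = sum(math.exp(beta * u) for u in utils.values())
        posterior[h] = prior * math.exp(beta * utils[observed_choice]) / z
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# The agent watches the human choose "B" twice and updates its belief.
for _ in range(2):
    belief = update(belief, "B")

# Act only under the remaining uncertainty: maximize expected utility.
expected = {a: sum(belief[h] * hypotheses[h][a] for h in hypotheses) for a in "ABC"}
print(belief)                                           # most mass on "likes_B" now
print("agent picks:", max(expected, key=expected.get))  # -> B
```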
Some other cool ideas:
  • a big thing that is missing for AGI in the 2010's is some kind of more hierarchical representation of the continuous input data of the world, e.g.:
    • when we behave, we do things in subroutines. E.g. life goal: solve hunger. Subgoal: apply for some grant. Sub-subgoal: eat, sleep, take a shower. Sub-sub-subgoal: move muscles to get me to the table and open a can. (A toy data structure sketch of this appears right after this list.)
    • we can group continuous things into higher objects, e.g. all these pixels I'm seeing in front of me are a computer. So I treat all of them as a single object in my mind.
  • game theory can be seen as part of artificial intelligence that deals with scenarios where multiple intelligent agents are involved
  • probability plays a crucial role in our everyday living, even though we don't usually think about it very explicitly. He gives a very good example of the cost/risk tradeoffs of planning a trip to the airport to catch a plane (a small worked calculation appears right after this list). E.g.:
    • should you leave 2 days in advance to be sure you'll get there?
    • should you pay an armed escort to make sure you are not attacked on the way?
  • economy, and notably the study of the utility, is intrinsically linked to AI alignment
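As a toy data structure sketch of the goal hierarchy idea above (the plan itself is just the made-up example from the list, not a proposal for how to actually build this):
```python
from dataclasses import dataclass, field

# Tiny sketch of a hierarchical goal / "subroutine" representation: a goal is
# either a directly executable action or decomposes into finer sub-goals.

@dataclass
class Goal:
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

    def walk(self, depth=0):
        print("  " * depth + self.name)
        for sub in self.subgoals:
            sub.walk(depth + 1)

plan = Goal("solve hunger", [
    Goal("apply for some grant", [
        Goal("stay functional", [
            Goal("eat", [
                Goal("move muscles to the table"),
                Goal("open a can"),
            ]),
            Goal("sleep"),
            Goal("take a shower"),
        ]),
    ]),
])
plan.walk()  # prints the nested plan; the leaves are the directly executable actions
```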
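And a small worked version of the airport tradeoff with made-up numbers, just to show that once the probabilities and costs are written down explicitly, both "leave 2 days in advance" and extreme safety measures lose to a middle ground:
```python
import math

# Made-up numbers for the airport example: the rational choice balances the
# cost of wasted waiting time against the probability-weighted cost of
# missing the flight.
MISSED_FLIGHT_COST = 400.0  # rebooking, lost day, etc.
WAIT_COST_PER_HOUR = 20.0   # value of one hour spent waiting at the airport

def p_miss(hours_before):
    """Made-up model: the chance of missing the flight drops fast as you leave earlier."""
    return math.exp(-1.5 * hours_before)

def expected_cost(hours_before):
    return p_miss(hours_before) * MISSED_FLIGHT_COST + hours_before * WAIT_COST_PER_HOUR

# Leaving 2 days (48 h) early is "safe" but clearly not the minimum expected cost.
for h in (1, 2, 3, 4, 6, 48):
    print(f"leave {h:2d} h early: expected cost ~= {expected_cost(h):7.2f}")
```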
Good points:
