Given enough computational power per dollar, AGI is inevitable, but it is not certain to ever happen given the end of Moore's Law.
Alternatively, it could also be achieved with genetically modified biological brains + brain in a vat.
Imagine a brain the size of a building, perfectly engineered to solve certain engineering problems, and giving hints to human operators + taking feedback from cameras and audio attached to the operators.
This likely implies transhumanism and mind uploading.
Ciro Santilli joined the silicon industry at one point to help increase our computational capacity and reach AGI.
Ciro believes that the easiest route to full AI, if any, could involve Ciro's 2D reinforcement learning games.
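To give a taste of what that means in practice, here is a minimal sketch of a 2D reinforcement learning setup, assuming a hypothetical 5x5 grid world and plain tabular Q-learning; this is just an illustration, not Ciro's actual games:

```python
# Minimal sketch: tabular Q-learning in a hypothetical 5x5 grid world.
# The agent starts at (0, 0) and learns to reach the goal at (4, 4).
import random

GRID = 5
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action, clamping to the grid; reward 1 only at the goal."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    new_state = (x, y)
    return new_state, (1.0 if new_state == GOAL else 0.0), new_state == GOAL

q = {}  # (state, action index) -> estimated long-term value
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(2000):
    state = (0, 0)
    done = False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        new_state, reward, done = step(state, ACTIONS[a])
        best_next = max(q.get((new_state, i), 0.0) for i in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        # standard Q-learning temporal difference update
        q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
        state = new_state
```

Ciro's games would then be the continuous, much richer versions of this kind of toy environment.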
Due to the failures of earlier generations, which believed they would quickly achieve AGI, leading to the AI winters, 21st century researchers have been very afraid of even trying it, going instead only for smaller subset problems like better neural network designs, lest they be considered cranks.
While there is fundamental value in such subset problems, keeping the final goal in view is also very important: we will likely never reach AGI without it.
This is voiced for example in Superintelligence by Nick Bostrom (2014) section "Opinions about the future of machine intelligence", which in turn quotes Nils Nilsson:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence.
https://www.reddit.com/r/artificial/comments/b38hbk/what_do_my_fellow_ai_researchers_think_of_ben/ What do my fellow AI researchers think of Ben Goertzel and his research?
Ben Goertzel's fog computing project to try and help achieve AGI.
As highlighted e.g. in Human Compatible by Stuart J. Russell (2019), AI alignment is intrinsically linked to the idea of utility in economics.
Basically, good AI alignment is what would allow us to survive the singularity.
Ciro Santilli took a stab at this with Ciro's 2D reinforcement learning games, but he didn't sink enough time into that project.
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
The group kind of died circa 2020 it seems, a shame.
Or is real world data necessary, e.g. with robots?
Fundamental question related to Ciro's 2D reinforcement learning games.
Bibliography:
In 2019, OpenAI transitioned from non-profit to for-profit,
so what's the point of "Open" in the name anymore?
The key takeaway is that setting an explicit value function for an AGI entity is a good way to destroy the world due to poor AI alignment. We are more likely to survive by creating an AI whose goal is to "do what humans want it to do", but in a way that it does not know beforehand what it is that humans want, and has to learn it from them.
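A minimal sketch of that "learn what humans want" idea, assuming a toy Bayesian model in which the human is noisily rational (a softmax choice model); the hypotheses and numbers are made up for illustration, this is not Russell's exact formalism:

```python
# Toy sketch of an agent that is *uncertain* about the human's reward
# function and updates its belief from observed human choices.
import math

# Candidate reward functions over two options the human can pick.
hypotheses = {
    "human likes A": {"A": 1.0, "B": 0.0},
    "human likes B": {"A": 0.0, "B": 1.0},
}
belief = {h: 0.5 for h in hypotheses}  # uniform prior over the hypotheses

def observe_choice(choice, beta=2.0):
    """Bayesian update assuming a noisily rational human who picks an
    option with probability proportional to exp(beta * reward)."""
    for h, rewards in hypotheses.items():
        z = sum(math.exp(beta * r) for r in rewards.values())
        belief[h] *= math.exp(beta * rewards[choice]) / z
    total = sum(belief.values())
    for h in belief:
        belief[h] /= total

observe_choice("A")
observe_choice("A")
print(belief)  # shifted toward "human likes A", but not to full certainty
```

The last line is the important part: the belief shifts toward what the human's choices reveal but never reaches full certainty, so the agent always has a reason to keep deferring to the human.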
Some other cool ideas:
  • a big thing that is missing for AGI in the 2010s is some kind of more hierarchical representation of the continuous input data of the world, e.g.:
    • when we behave, we do things in subroutines. E.g. life goal: solve hunger. Subgoal: apply for some grant. Subsubgoal: eat, sleep, take shower. Subsubsubgoal: move muscles to get me to the table and open a can (a toy decomposition of this is the first sketch after this list).
    • we can group continuous things into higher objects, e.g. all these pixels I'm seeing in front of me are a computer. So I treat all of them as a single object in my mind.
  • game theory can be seen as part of artificial intelligence that deals with scenarios where multiple intelligent agents are involved
  • probability plays a crucial role in our everyday living, even though we don't think too much about it explicitly. He gives a very good example of the cost/risk tradeoffs of planning a trip to the airport to catch a plane (the second sketch after this list works through a toy version). E.g.:
    • should you leave 2 days in advance to be sure you'll get there?
    • should you pay an armed escort to make sure you are not attacked on the way?
  • economics, and notably the study of utility, is intrinsically linked to AI alignment
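A toy sketch of the goal/subgoal decomposition from the first bullet, assuming a hypothetical hand-written goal tree whose names just mirror the example above:

```python
# Toy sketch of "we do things in subroutines": a tiny planner that
# recursively expands abstract goals into primitive actions.
# The goal tree is hand-written and hypothetical.
plans = {
    "solve hunger":    ["apply for grant"],
    "apply for grant": ["eat", "sleep", "take shower"],
    "eat":             ["move muscles to table", "open can"],
}

def expand(goal):
    """Depth-first expansion of a goal into a flat list of primitive actions."""
    if goal not in plans:
        return [goal]  # no known decomposition: treat as a primitive action
    actions = []
    for subgoal in plans[goal]:
        actions.extend(expand(subgoal))
    return actions

print(expand("solve hunger"))
# ['move muscles to table', 'open can', 'sleep', 'take shower']
```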
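And the airport example is expected-utility maximization in miniature; a sketch with made-up probabilities and costs:

```python
# Toy expected-cost version of the airport example.
# All probabilities and costs are made-up numbers for illustration.
COST_OF_MISSING = 100  # hour-equivalents lost if you miss the flight

options = {
    # option: (probability of missing the flight, upfront cost in hour-equivalents)
    "leave 2 hours early": (0.05, 2),
    "leave 2 days early":  (0.0001, 48),
    "pay an armed escort": (0.0001, 200),
}

for name, (p_miss, cost) in options.items():
    expected_cost = cost + p_miss * COST_OF_MISSING
    print(f"{name}: {expected_cost:.2f}")
# With these numbers, leaving 2 hours early wins: the small residual
# risk of missing the flight is much cheaper than the extreme precautions.
```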