Given enough computational power per dollar, AGI is inevitable, but it is not certain to ever happen given the possible end of Moore's Law.
Alternatively, it could also be achieved with genetically modified biological brains + brain in a vat.
Imagine a brain the size of a building, perfectly engineered to solve certain engineering problems, giving hints to human operators and taking feedback from cameras and microphones attached to those operators.
This likely implies transhumanism and mind uploading.
Ciro Santilli joined the silicon industry at one point to help increase our computational capacity and reach AGI.
Ciro believes that the easiest route to full AI, if any, could involve Ciro's 2D reinforcement learning games.
Due to the failures of earlier generations, which believed that AGI would be achieved quickly and thus led to the AI winters, 21st century researchers have been very afraid of even trying it at the risk of being considered cranks, and have instead gone only for smaller subset problems like better neural network designs.
While there is fundamental value in such subset problems, the general view towards the final goal is also very important: we will likely never reach AGI without it.
This is voiced for example in Superintelligence by Nick Bostrom (2014), section "Opinions about the future of machine intelligence", which in turn quotes Nils Nilsson:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence.
- https://towardsdatascience.com/four-ai-companies-on-the-bleeding-edge-of-artificial-general-intelligence-b17227a0b64a Top 4 AI companies leading in the race towards Artificial General Intelligence (2020)
- Douglas Hofstadter according to https://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/ The Man Who Would Teach Machines to Think (2013) by James Somers
- https://www.reddit.com/r/artificial/comments/b38hbk/what_do_my_fellow_ai_researchers_think_of_ben/ What do my fellow AI researchers think of Ben Goertzel and his research?
As highlighted e.g. in Human Compatible by Stuart J. Russell (2019), AI alignment is intrinsically linked to the idea of utility in economics.
Basically, good AI alignment is what would allow us to survive the singularity.
Ciro Santilli took a stab at it with Ciro's 2D reinforcement learning games, but he didn't sink enough time into that project.
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
The group seems to have died circa 2020, a shame.
Or is real world data necessary, e.g. with robots?
Fundamental question related to Ciro's 2D reinforcement learning games.
Bibliography:
- https://youtu.be/i0UyKsAEaNI?t=120 How to Build AGI? Ilya Sutskever interview by Lex Fridman (2020)
In 2019, OpenAI transitioned from non-profit to for-profit, so what's the point of "Open" in the name anymore?
- https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ "The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism."
- https://archive.ph/wXBtB How OpenAI Sold its Soul for $1 Billion
- https://www.reddit.com/r/GPT3/comments/n2eo86/is_gpt3_open_source/
The key takeaway is that setting an explicit value function for an AGI entity is a good way to destroy the world due to poor AI alignment. We are more likely to not destroy it by creating an AI whose goal is to "do what humans want it to do", but in such a way that it does not know beforehand what it is that humans want, and has to learn it from them.
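A minimal sketch of the shape of that proposal, assuming a toy world with a hand-coded list of candidate utility functions and a noisily rational human. Every name and number below is made up for illustration, and the update is just generic Bayesian inference, not any specific algorithm from the book:

```python
import math
import random

# Candidate hypotheses about what the human actually values.
# The agent does NOT get to hard-code one of them as "the" value function.
CANDIDATE_UTILITIES = {
    "paperclips": {"make_paperclips": 1.0, "make_tea": 0.0, "do_nothing": 0.1},
    "tea":        {"make_paperclips": 0.0, "make_tea": 1.0, "do_nothing": 0.1},
    "rest":       {"make_paperclips": 0.1, "make_tea": 0.1, "do_nothing": 1.0},
}
ACTIONS = ["make_paperclips", "make_tea", "do_nothing"]
TEMPERATURE = 0.3  # how noisy the human is: lower = more rational

def choice_probability(utility, action):
    """Probability that a noisily rational human picks this action (softmax)."""
    z = sum(math.exp(utility[a] / TEMPERATURE) for a in ACTIONS)
    return math.exp(utility[action] / TEMPERATURE) / z

def update_posterior(posterior, observed_action):
    """Bayes rule: hypotheses that make the observation look rational gain mass."""
    unnormalized = {
        name: p * choice_probability(CANDIDATE_UTILITIES[name], observed_action)
        for name, p in posterior.items()
    }
    total = sum(unnormalized.values())
    return {name: p / total for name, p in unnormalized.items()}

random.seed(0)
# Uniform prior: the agent starts out not knowing what the human wants.
posterior = {name: 1.0 / len(CANDIDATE_UTILITIES) for name in CANDIDATE_UTILITIES}
true_utility = CANDIDATE_UTILITIES["tea"]  # hidden from the agent
for _ in range(20):
    observed = random.choices(
        ACTIONS, weights=[choice_probability(true_utility, a) for a in ACTIONS])[0]
    posterior = update_posterior(posterior, observed)
print(posterior)  # most of the probability mass should now be on "tea"
```

The agent never commits to a hard-coded objective: observed human behavior is the evidence that gradually narrows down its belief about what the humans want.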
Some other cool ideas:
- a big thing that is missing for AGI in the 2010s is some kind of more hierarchical representation of the continuous input data of the world, e.g.:
- when we behave, we do things in subroutines, e.g. life goal: sate hunger; subgoal: apply for some grant; sub-subgoal: eat, sleep, take a shower; sub-sub-subgoal: move muscles to get to the table and open a can (see the toy goal stack sketch after this list)
- we can group continuous things into higher-level objects, e.g. all these pixels I'm seeing in front of me are a computer, so I treat them as a single object in my mind
- game theory can be seen as part of artificial intelligence that deals with scenarios where multiple intelligent agents are involved
- probability plays a crucial role in our everyday lives, even though we don't think about it very explicitly. Russell gives a very good example of the cost/risk tradeoffs of planning a trip to the airport to catch a plane (see the worked example after this list), e.g.:
- should you leave 2 days in advance to be sure you'll get there?
- should you pay an armed escort to make sure you are not attacked on the way?
- economics, and notably the study of utility, is intrinsically linked to AI alignment
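As a toy illustration of the "subroutines" idea above, here is a hypothetical hand-written goal decomposition table expanded depth-first. The hard, unsolved part is of course learning such a hierarchy from raw continuous data rather than writing it by hand:

```python
# Hypothetical hand-written goal -> subgoal table mirroring the example above.
DECOMPOSITION = {
    "sate_hunger":     ["apply_for_grant"],
    "apply_for_grant": ["eat", "sleep", "take_shower"],
    "eat":             ["move_to_table", "open_can"],
}

def execute(goal, depth=0):
    """Depth-first expansion: anything without a decomposition is a primitive action."""
    subgoals = DECOMPOSITION.get(goal)
    indent = "  " * depth
    if subgoals is None:
        print(indent + "primitive action: " + goal)
        return
    print(indent + "goal: " + goal)
    for subgoal in subgoals:
        execute(subgoal, depth + 1)

execute("sate_hunger")
```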
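And a worked toy version of the airport tradeoff: pick the departure time that minimizes expected cost. All probabilities and costs below are made-up numbers purely to show the shape of the calculation:

```python
COST_OF_MISSING_FLIGHT = 1000  # dollar-equivalent disutility of a missed plane
COST_PER_HOUR_WAITING = 20     # value of an hour wasted at the airport

# Made-up P(catch the flight | leave H hours before departure).
P_ON_TIME = {1: 0.80, 2: 0.96, 4: 0.99, 48: 0.999999}

def expected_cost(hours_early):
    p_miss = 1 - P_ON_TIME[hours_early]
    return p_miss * COST_OF_MISSING_FLIGHT + COST_PER_HOUR_WAITING * hours_early

for hours in sorted(P_ON_TIME):
    print(f"leave {hours:2d}h early: expected cost = {expected_cost(hours):7.2f}")
# Output shows why nobody leaves 2 days (48h) in advance: missing the flight
# becomes nearly impossible, but the cost of the wasted time dominates.
```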
Good points:
- Post mortem connectome extraction with microtome
- the idea of a singleton, i.e. one centralized power, possibly AGI-based, that decisively takes over the planet/reachable universe
- AGI research has become a taboo in the early 21st century, as discussed in section "Opinions about the future of machine intelligence"