Unlocking AGI Potential


Summary of my bookmarked links and GitHub repositories from May 13th, 2023

Links

  • Conversational Cognition: A New Measure for Artificial General Intelligence

    This article discusses the limitations of current research in Artificial Intelligence (AI), Deep Learning, and Artificial General Intelligence (AGI). It highlights the importance of social interaction and proposes a new measure of intelligence called Conversational Cognition. The article suggests that effective conversation, understanding context changes, and cooperative behavior are crucial for achieving AGI. It compares Conversational Cognition with other frameworks for cognition, emphasizing the need for a richer understanding of intelligence. Overall, the article advocates for a shift in focus towards conversation and social intelligence in AI research.

  • Achieving Artificial General Intelligence (AGI) via the Emergent Self

    The article discusses the progression of artificial general intelligence (AGI) and the importance of self-models in achieving it. It explores the concept of different self-models, such as bodily, perspectival, volitional, narrative, and social selves, and suggests an order in which they should be learned. The author argues that self-awareness may be a fundamental cognitive capability that needs to be discovered early on in AGI development. They propose an "Inside Out" architecture where the development of a bodily self-model is crucial. By incrementally building different aspects of the cognitive stack, they believe AGI can be achieved.

  • How Artificial General Intelligence might be created

    Artificial General Intelligence (AGI), the concept of creating machines that can think like humans, may become a reality in our lifetime. AGI refers to machines capable of understanding, rationalizing, and acting without sanitized input. It should possess situational awareness, understand users' communication, and detect subtleties like sarcasm. Familiar fictional examples of AGI include Jarvis from Iron Man and HAL 9000 from 2001: A Space Odyssey. Interest in AI has grown steadily since the 1950s, accompanied by exponential growth in processing power and the ubiquity of devices generating vast amounts of data. Communication between AGIs and the application of cryptocurrency principles, such as secure messaging and distributed consensus, could contribute to AGI development. The AGIs of tomorrow will be interconnected, continually learning, and capable of performing multiple tasks.

  • Building Artificial General Intelligence

    This blog post highlights two resources for understanding and building Artificial General Intelligence (AGI). The first resource is the book "On Intelligence" by Jeff Hawkins, which discusses the neocortex and proposes a theoretical framework called Hierarchical Temporal Memory (HTM) that combines neuroscience and machine intelligence. The second resource is "The Book of Why" by Judea Pearl, advocating for incorporating causality into AI through causal calculus. The author emphasizes the need to move beyond Artificial Narrow Intelligence (ANI) and explores the possibility of integrating causality into the HTM model. They invite readers to join the discussion on AGI and provide additional resources on their blog and LinkedIn profile.

  • Artificial General Intelligence (AGI)

    The article discusses the concept of Artificial General Intelligence (AGI) and the scientific crisis it is currently facing. AGI refers to human-level intelligence in machines. The author argues that current approaches to AI and AGI, such as deep learning, have not been successful in achieving human-like capabilities. The article introduces Patom Theory (PT) as a potential solution for AGI, which is based on a brain emulation model. PT emphasizes the importance of emulating human language, as it is a key aspect of human intelligence. The theory proposes a different paradigm for AI that focuses on understanding and multisensory integration.

  • Why General AI May Not Be Achieved in Our Lifetime: A Realistic Look at AI Progress

    The concept of artificial general intelligence (AGI) has long captured our imagination, but the reality is that we are currently limited to narrow AI. Narrow AI performs specific tasks by analyzing patterns and data, lacking the ability to think or reason beyond its programming. However, narrow AI offers valuable benefits, such as automating tedious tasks, making accurate predictions, and creating new job opportunities. While the development of general AI raises ethical concerns regarding consciousness, rights, and societal impact, it remains a distant possibility. Instead, we should focus on improving narrow AI and leveraging its potential within ethical boundaries.

  • Frontier AI: How far are we from artificial “general” intelligence, really?

    The article discusses the ongoing debate surrounding the development of artificial general intelligence (AGI) and its potential implications. While media coverage and public conversation raise concerns about the imminent arrival of AGI with unpredictable consequences, the author, an AI investor, shares a different perspective based on their interactions with AI entrepreneurs. They emphasize that building real-world AI products is still challenging, even with narrow AI applications. The article also highlights the significant increase in AI research and resources, including data and computing power, and mentions the involvement of industry labs and startups in AGI research. The discussion explores various AI algorithms, such as deep learning, unsupervised learning, GANs, and reinforcement learning, while considering their potential for AGI development. Finally, the author mentions transfer learning as a technique that could contribute to the progress of AGI.

  • The Next Step Towards Artificial General Intelligence

    DeepMind, in collaboration with Blizzard, has released the StarCraft II Learning Environment (SC2LE) to catalyze AI research in a game not specifically designed for that purpose. SC2LE offers tools such as a Machine Learning API, a dataset of 60,000+ game replays, and PySC2, a Python library. Initial findings showed that current intelligent systems, including DeepMind's Deep Reinforcement Learning algorithm, failed to complete even one full game of StarCraft II, indicating room for improvement. The release of SC2LE provides a baseline for future AI research, aiming to develop intelligent systems that can adapt principles learned from one game to another or different environments.

  • www.cantorsparadise.com

    Richard Feynman, Nobel Laureate, discussed artificial general intelligence (AGI) in a 1985 lecture. He questioned whether machines can think like humans and be more intelligent. Feynman acknowledged that machines can excel in specific tasks, like chess, but questioned the definition of intelligence. He compared naturally evolved locomotion to mechanically designed locomotion, emphasizing that machines won't think like humans but can perform tasks better. Feynman also discussed the problem of pattern recognition and the challenges of creating a filing system that mimics human recognition abilities. Overall, he highlighted the differences between machines and human thinking.

  • The Road to Artificial General Intelligence

    This article explores the challenge of building an AI system with computing capacity equal to the human brain. It notes that Tianhe-2, the fastest supercomputer at the time of writing, already surpasses the brain's raw computing power, but is not practical for widespread use. Citing Moore's Law and the exponential growth of computing power, it predicts that affordable computers will rival the brain's power by 2025. The article also discusses strategies for developing AI software, including mimicking the brain's structure through neural networks and whole brain emulation, or employing genetic algorithms and self-improvement. Despite seemingly slow progress, minor innovations can accelerate advancements in AI.
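
Of the software strategies the article lists, genetic algorithms are the easiest to sketch. The following toy example is my own illustration (the objective and parameters are arbitrary choices, not taken from the article): it evolves bit strings toward an all-ones target using selection, crossover, and mutation.

```python
import random

def fitness(bits):
    # Toy objective ("OneMax"): maximize the number of 1-bits.
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover + mutation to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The same loop structure applies to harder problems; only the genome encoding and the fitness function change.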

  • The Importance of Language in Human Cognition and Artificial General Intelligence

    The article explores the role of language in human cognition and its implications for developing artificial general intelligence (AGI). It discusses the concept of internal speech, the Sapir-Whorf hypothesis, and studies on language's influence on behavior and memory. The author suggests that understanding language is crucial for creating AGI that truly mimics human thought processes. They propose building a linguistic framework for AGI that includes dynamic language learning, the ability to engage in separate conversations, and internal dialogue. The article concludes by suggesting a test to compare AGI versions with and without robust language processing to evaluate their cognitive capabilities.

  • Is Artificial General Intelligence around the corner?

    Artificial General Intelligence (AGI) refers to a machine's ability to comprehend and perform various intellectual tasks like a human. While AI excels in specific domains, it lacks overall intelligence and versatility. Researchers believe that "transfer learning," where a model trained on one task is repurposed for another related task, is crucial for achieving AGI. Transfer learning allows AI systems to benefit from previous knowledge, resulting in improved performance, faster training, and reduced data requirements.
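
The transfer-learning idea described above can be sketched in a few lines. In this toy version (my own illustration, not from the article), a "pretrained" feature extractor is frozen and only a small task-specific head is trained on the new task, mirroring how previous knowledge reduces training effort.

```python
def extract_features(x):
    # Stand-in for a frozen, "pretrained" feature extractor; in practice
    # this would be the early layers of a network trained on a source task.
    return [x, x * x, abs(x)]

def train_head(data, lr=0.01, epochs=2000):
    # Only this small task-specific head is trained on the target task;
    # the shared features above are reused as-is.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = extract_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = y - pred
            for i, fi in enumerate(feats):
                w[i] += lr * err * fi
    return w

# Target task: y = x^2, learned quickly because the reused features
# already include x * x.
data = [(x, x * x) for x in (-2.0, -1.0, 0.5, 1.5, 2.0)]
w = train_head(data)
pred = sum(wi * fi for wi, fi in zip(w, extract_features(3.0)))
```

The benefit the article describes shows up here as a tiny head that converges with only five training points, because the heavy lifting was done by the reused features.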

GitHub repositories

  • sirxemic/xmastree-app

    The Xmas Tree Lights App is a small application for live coding and exporting effects for Matt Parker's Xmas tree experiment from 2021. You can access it online at https://sirxemic.github.io/xmastree-app/. To run it locally, make sure Node.js is installed on your machine; the app is then served at http://localhost:3000/. If you wish to use custom GIFT files, replace the existing src/coords.gift file with the appropriate data for a different tree.
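
The local setup maps onto the usual Node.js workflow. The exact script names are an assumption on my part (check the repository's package.json); a typical session looks like:

```shell
# Assumed commands for a standard Node.js project -- verify the script
# names against the repository's package.json before relying on them.
git clone https://github.com/sirxemic/xmastree-app.git
cd xmastree-app
npm install        # fetch dependencies
npm start          # serve the app, typically at http://localhost:3000/
```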

  • standupmaths/xmastree2021

    The "xmastree2021" repository provides the code and coordinates used for Matt's 2021 Christmas tree, which gained recognition in "My 500-LED xmas tree got into Harvard." The repository includes the coordinates of the tree in GIFT format (coords_2021.csv), the original source code for coordinate correction (light_fixer.py), and examples contributed by other users. If you wish to contribute, you can make small bug fixes directly to the code, create effects in CSV files in the examples folder, or propose larger projects in the Further Work section. Additional related projects include "MPTree - Matt Parker's Tree Emulator" and the "Xmas Tree Lights Live Coding App."
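
For contributors generating effect CSVs programmatically, here is one possible sketch. The column layout (FRAME_ID followed by R_i,G_i,B_i per LED) is my assumption about the repository's example format and should be checked against the files in the examples folder before submitting.

```python
import csv
import io

def make_solid_effect(num_leds, num_frames, rgb):
    # Build an effect table: one row per frame, one (R, G, B) triple per LED.
    # Assumed column layout: FRAME_ID, then R_i, G_i, B_i for each LED i.
    header = ["FRAME_ID"]
    for i in range(num_leds):
        header += [f"R_{i}", f"G_{i}", f"B_{i}"]
    rows = [[frame] + list(rgb) * num_leds for frame in range(num_frames)]
    return header, rows

def write_effect(fileobj, header, rows):
    # Write the table as CSV to any open file-like object.
    writer = csv.writer(fileobj)
    writer.writerow(header)
    writer.writerows(rows)

# Example: 10 frames of solid red on a 500-LED tree, written to a buffer
# (replace the buffer with open("examples/solid_red.csv", "w", newline="")
# to produce a real file).
header, rows = make_solid_effect(num_leds=500, num_frames=10, rgb=(255, 0, 0))
buf = io.StringIO()
write_effect(buf, header, rows)
```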