Limits of intelligence // is human/artificial intelligence limitless?
Nothing in this universe is unlimited; limits simply have to be discovered and defined.
Human intelligence is bounded by factors of human nature (biological limits). We are considering the fundamental capacity for evolution and new breakthroughs: not just individual human intelligence, but the collective intelligence of humanity as a whole.
“AI” is largely a marketing term: current AI systems are computing systems, tools for human intelligence. Computational brute force is their primary usage pattern, no matter how they are marketed.
In 1998, Fields Medalist Stephen Smale [S. Smale, Mathematical problems for the next century, The Mathematical Intelligencer, 20(2) (1998), 7–15] proposed his famous eighteen problems to the mathematicians of this century. The statement of his eighteenth problem is simple but very important: he asked, “What are the limits of intelligence, both artificial and human?” In this paper, we prove that human intelligence is limitless. Moreover, we provide justifications to state that artificial intelligence has limitations. Thus, human intelligence will always remain superior to artificial intelligence.
This paper introduces the concept of a “cognitive-consequence space” 𝒞 = (C, σ, I, Cn) to model the mental space of a human being, where C is the set of mental representations, σ describes concatenation rules on C, I gives meaning to representations, and Cn is a consequence operator (the standard axioms for such an operator are sketched after the summary below). The authors construct a “cognitive-consequence topological space” (C, τ) and study its properties. They discuss the cognitive limit of sequences of thoughts and define notions such as cognitive similarity distance, cognitive filters, and cognitive ideals. The paper argues that while artificial intelligence has limitations because it is based on axiomatic systems (as shown by Gödel’s incompleteness theorem), human intelligence is limitless, providing an answer to the 18th problem posed by Stephen Smale on the limits of intelligence. The key points are:
1. Introducing cognitive-consequence space and cognitive-consequence topology to model the human mind mathematically.
2. Studying convergence of sequences of thoughts to cognitive limits.
3. Defining cognitive filters and ideals to organize mental representations for problem-solving.
4. Existence of Gödel’s incompleteness “black holes” in solution spaces.
5. Proving that human intelligence is limitless, while providing justifications for the limits of axiom-based AI.
The paper combines ideas from topology, consequence operators, and Lewin’s topological psychology to analyze the nature of human and artificial intelligence mathematically.
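For orientation, the consequence operator Cn in such a space is presumably of the Tarski kind; the following is a sketch of the standard textbook axioms (the paper's own axiomatisation may differ in detail), together with the filter/ideal notions being adapted.

```latex
% Sketch: standard Tarski-style axioms for a consequence operator
% Cn : \mathcal{P}(C) \to \mathcal{P}(C) on the set C of mental
% representations (assumed formulation, not the paper's verbatim one).
\begin{align*}
  X &\subseteq Cn(X)
      && \text{(extensiveness: a thought entails itself)} \\
  X \subseteq Y &\implies Cn(X) \subseteq Cn(Y)
      && \text{(monotonicity)} \\
  Cn(Cn(X)) &= Cn(X)
      && \text{(idempotence: consequences are closed)}
\end{align*}
% A "cognitive filter" would then follow the usual filter axioms: a
% nonempty family of subsets of C, upward closed and closed under finite
% intersections; a "cognitive ideal" is the dual notion.
```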
“There are fundamental limits inherent in mathematics and, similarly, AI algorithms can’t exist for certain problems.” — Matthew Colbrook
Artificial intelligence (AI) systems struggle to recognize their own errors, unlike humans. This issue stems from a fundamental mathematical paradox, as demonstrated in a recent study by researchers from the University of Cambridge and the University of Oslo. The paradox, linked to the work of Alan Turing and Kurt Gödel, reveals that certain problems cannot be solved by algorithms, leading to inherent limitations in AI.
Neural networks, a key AI technology, face stability issues that make them unreliable in high-risk areas like disease diagnosis and autonomous driving. Despite their potential for high accuracy, stable and trustworthy neural networks cannot always be constructed, no matter the amount of training data. This instability is a significant liability, as many AI systems operate with unjustified confidence and lack the ability to recognize their own mistakes.
The researchers propose a classification theory to identify conditions under which neural networks can be made reliable. This effort parallels past mathematical breakthroughs, suggesting that a similar foundational theory could emerge for AI, enhancing our understanding of its limitations and guiding the development of more trustworthy systems.
Instability is the Achilles’ heel of modern artificial intelligence (AI) and a paradox, with training algorithms finding unstable neural networks (NNs) despite the existence of stable ones. This foundational issue relates to Smale’s 18th mathematical problem for the 21st century on the limits of AI. By expanding methodologies initiated by Gödel and Turing, we demonstrate limitations on the existence of (even randomized) algorithms for computing NNs. Despite numerous existence results of NNs with great approximation properties, only in specific cases do there also exist algorithms that can compute them. We initiate a classification theory on which NNs can be trained and introduce NNs that — under suitable conditions — are robust to perturbations and exponentially accurate in the number of hidden layers.
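The instability phenomenon is easy to reproduce in miniature. The toy network below (contrived weights chosen for illustration, not the authors' construction) has a large local Lipschitz constant, so a perturbation of 0.002 flips its classification:

```python
# Toy illustration of NN instability: a one-hidden-layer ReLU network with
# deliberately large weights, so a tiny input perturbation flips the sign
# of the output logit (i.e. the predicted class).
import numpy as np

W1 = np.array([80.0, -80.0])   # input dim 1 -> 2 hidden units
b1 = np.zeros(2)
W2 = np.array([1.0, -1.0])     # 2 hidden units -> scalar logit

def net(x: float) -> float:
    h = np.maximum(x * W1 + b1, 0.0)  # ReLU hidden layer
    return float(h @ W2)              # sign of logit = predicted class

x, eps = 0.001, 0.002
print(net(x))        # 0.08  -> class +1
print(net(x - eps))  # -0.08 -> class -1: decision flipped by a 0.002 shift
```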
Most applications of Artificial Intelligence (AI) are designed for a confined and specific task. However, there are many scenarios that call for a more general AI, capable of solving a wide array of tasks without being specifically designed for them. The term General Purpose Artificial Intelligence Systems (GPAIS) has been defined to refer to these AI systems. To date, the possibility of an Artificial General Intelligence, powerful enough to perform any intellectual task as if it were human, or even improve upon it, has remained an aspiration, a fiction, and is considered a risk for our society. Whilst we might still be far from achieving that, GPAIS is a reality and sits at the forefront of AI research. This work discusses existing definitions for GPAIS and proposes a new definition that allows for a gradual differentiation among types of GPAIS according to their properties and limitations. We distinguish between closed-world and open-world GPAIS, characterising their degree of autonomy and ability based on several factors, such as adaptation to new tasks, competence in domains not intentionally trained for, ability to learn from little data, or proactive acknowledgment of their own limitations. We then propose a taxonomy of approaches to realise GPAIS, describing research trends such as the use of AI techniques to improve another AI (commonly referred to as AI-powered AI) or (single) foundation models. As a prime example, we delve into generative AI (GenAI), aligning it with the terms and concepts presented in the taxonomy. Similarly, we explore the challenges and prospects of multi-modality, which involves fusing various types of data sources to expand the capabilities of GPAIS. Through the proposed definition and taxonomy, our aim is to facilitate research collaboration across different areas that are tackling general-purpose tasks, as they share many common aspects. Finally, with the goal of providing a holistic view of GPAIS, we discuss the current state of GPAIS, its prospects, implications for our society, and the need for regulation and governance of GPAIS to ensure their responsible and trustworthy development.
Many researchers are working on GPAIS, both to design new GPAIS and to define what they are. The principal goals of this work have been two-fold: (1) proposing a more comprehensive definition of GPAIS with a focus on their properties and functionalities, and (2) categorising different approaches to build them. In comparison with existing alternatives, our proposed definition allows for a more general view of GPAIS, considering different degrees of autonomy and expected capabilities. Alongside these two principal goals, we have briefly analysed GenAI as the most prominent family of foundation models, and multimodality as a crucial aspect for managing multiple inputs. Finally, we have discussed the prospects posed by GPAIS, the implications for our society, and the regulation and governance of these emerging systems. There are a multitude of approaches to making an AI system more general. In order to consolidate the most relevant ones in a cohesive manner, we have proposed a taxonomy of methods, sketched in the example after this paragraph. This taxonomy conceptually distinguishes between AI models that rely on other AI models to achieve generalisation abilities and those that utilise a single AI model. More classical multi-task learning approaches and foundation models have been categorised as AI systems in which a single AI model exists, while alternative approaches may use two or more AI systems, which we called AI-powered AI, to introduce generalisation capabilities. The field of GPAIS is continually evolving, and the proposed taxonomy establishes a robust foundation for understanding the diverse existing approaches. While LLMs are currently in the spotlight, there is a broad spectrum of approaches that can significantly contribute to the realisation of GPAIS. However, technical and ethical challenges must necessarily be discussed and addressed before stepping towards AGI. In the meantime, we must be mindful of the arrival of new open-world GPAIS models and tasks, manage the AI risks associated with current GPAIS, and work responsibly towards anticipating and mitigating such risks effectively via regulation and governance.
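As a reading aid, the taxonomy's main distinctions can be encoded as simple types; the names below are illustrative assumptions drawn from the prose, not the authors' formal schema.

```python
# Hedged sketch: the GPAIS distinctions from the survey rendered as plain
# Python types. Field and variant names are inferred from the abstract.
from dataclasses import dataclass
from enum import Enum, auto

class World(Enum):
    CLOSED = auto()   # operates within tasks/domains anticipated at design time
    OPEN = auto()     # adapts to tasks/domains not intentionally trained for

class Realisation(Enum):
    SINGLE_MODEL = auto()    # e.g. multi-task learning, one foundation model
    AI_POWERED_AI = auto()   # two or more AI systems, one improving another

@dataclass
class GPAISProfile:
    world: World
    realisation: Realisation
    learns_from_little_data: bool
    acknowledges_own_limits: bool   # proactive awareness of limitations

# Example: a single foundation model used beyond its training domains.
profile = GPAISProfile(World.OPEN, Realisation.SINGLE_MODEL,
                       learns_from_little_data=True,
                       acknowledges_own_limits=False)
print(profile)
```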
Existing approaches to Theory of Mind (ToM) in Artificial Intelligence (AI) overemphasize prompted, or cue-based, ToM, which may limit our collective ability to develop Artificial Social Intelligence (ASI). Drawing from research in computer science, cognitive science, and related disciplines, we contrast prompted ToM with what we call spontaneous ToM — reasoning about others’ mental states that is grounded in unintentional, possibly uncontrollable cognitive functions. We argue for a principled approach to studying and developing AI ToM and suggest that a robust, or general, ASI will respond to prompts and spontaneously engage in social reasoning.
Recent developments in Large Language Models (LLMs) have significantly expanded their applications across various domains. However, the effectiveness of LLMs is often constrained when operating individually in complex environments. This paper introduces a transformative approach by organizing LLMs into community-based structures, aimed at enhancing their collective intelligence and problem-solving capabilities. We investigate different organizational models — hierarchical, flat, dynamic, and federated — each presenting unique benefits and challenges for collaborative AI systems. Within these structured communities, LLMs are designed to specialize in distinct cognitive tasks, employ advanced interaction mechanisms such as direct communication, voting systems, and market-based approaches, and dynamically adjust their governance structures to meet changing demands. The implementation of such communities holds substantial promise for improving problem-solving capabilities in AI, prompting an in-depth examination of their ethical considerations, management strategies, and scalability potential. This position paper seeks to lay the groundwork for future research, advocating a paradigm shift from isolated to synergistic operational frameworks in AI research and application.
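To make the "flat community with voting" organisation concrete, here is a minimal sketch; the agent interface and majority rule are illustrative assumptions, not an implementation from the paper.

```python
# Minimal sketch of a flat LLM community with majority voting. An "agent"
# is any callable from prompt to answer; real deployments would wrap LLM
# API calls behind this interface.
from collections import Counter
from typing import Callable, List

Agent = Callable[[str], str]

def community_answer(agents: List[Agent], prompt: str) -> str:
    """Every agent answers independently; the community commits to the
    most common answer (ties broken by first occurrence)."""
    votes = Counter(agent(prompt) for agent in agents)
    return votes.most_common(1)[0][0]

# Usage with stand-in agents:
agents: List[Agent] = [lambda p: "4", lambda p: "4", lambda p: "5"]
print(community_answer(agents, "What is 2 + 2?"))  # -> "4"
```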
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet, the escalating demands on AI have highlighted the limitations of AI’s current offerings, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution. While existing works have summarized specific recent advancements of AI, they lack a comprehensive discussion of AGI’s definitions, goals, and developmental trajectories. Different from existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives. We start by articulating the requisite capability frameworks for AGI, integrating the internal, interface, and system dimensions. As the realization of AGI requires more advanced capabilities and adherence to stringent constraints, we further discuss necessary AGI alignment technologies to harmonize these factors. Notably, we emphasize the importance of approaching AGI responsibly by first defining the key levels of AGI progression, followed by the evaluation framework that situates the status quo, and finally giving our roadmap of how to reach the pinnacle of AGI. Moreover, to give tangible insights into the ubiquitous impact of the integration of AI, we outline existing challenges and potential pathways toward AGI in multiple domains. In sum, serving as a pioneering exploration into the current state and future trajectory of AGI, this paper aims to foster a collective comprehension and catalyze broader public discussions among researchers and practitioners on AGI.
Recent advancements in artificial intelligence have propelled the capabilities of Large Language Models (LLMs), yet their ability to mimic nuanced human reasoning remains limited. This paper introduces a novel conceptual enhancement to LLMs, termed the “Artificial Neuron,” designed to significantly bolster cognitive processing by integrating external memory systems. This enhancement mimics neurobiological processes, facilitating advanced reasoning and learning through a dynamic feedback loop mechanism. We propose a unique framework wherein each LLM interaction — specifically in solving complex math word problems and common sense reasoning tasks — is recorded and analyzed. Incorrect responses are refined using a higher-capacity LLM or human-in-the-loop corrections, and both the query and the enhanced response are stored in a vector database, structured much like neuronal synaptic connections. This “Artificial Neuron” thus serves as an external memory aid, allowing the LLM to reference past interactions and apply learned reasoning strategies to new problems.
Our experimental setup involves training with the GSM8K dataset for initial model response generation, followed by systematic refinements through feedback loops. Subsequent testing demonstrated a 15% improvement in accuracy and efficiency, underscoring the potential of external memory systems to advance LLMs beyond current limitations. This approach not only enhances the LLM’s problem-solving precision but also reduces computational redundancy, paving the way for more sophisticated applications of artificial intelligence in cognitive tasks. This paper details the methodology, implementation, and implications of the Artificial Neuron model, offering a transformative perspective on enhancing machine intelligence.
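The store-and-retrieve cycle described above can be sketched as follows; embed() is a stand-in for a real embedding model and the in-memory list stands in for the vector database, so this is an assumed shape of the mechanism rather than the authors' implementation.

```python
# Sketch of the "Artificial Neuron" memory cycle: store corrected
# (query, refined answer) pairs as embeddings; before answering a new
# query, retrieve the most similar past refinements as extra context.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding (hash-seeded, stable within a process);
    a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

memory: list[tuple[np.ndarray, str]] = []  # (embedding, refined answer)

def remember(query: str, refined_answer: str) -> None:
    """Record a corrected interaction, like strengthening a synapse."""
    memory.append((embed(query), refined_answer))

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k most similar stored refinements (cosine similarity,
    since embeddings are unit-normalised) to prepend to the next prompt."""
    q = embed(query)
    ranked = sorted(memory, key=lambda rec: -float(rec[0] @ q))
    return [answer for _, answer in ranked[:k]]

remember("Train A leaves at 3pm ...", "Set distance = rate * time, then ...")
print(recall("A train departs at 3pm ..."))
```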