The biological brain // used 100%, brain-inspired research

There is a myth that humans, and other creatures with brains, use only a fraction of them. In fact, we use 100%.

sbagency
7 min read · Jan 19, 2024
https://www.youtube.com/watch?v=NdLTEC6X3pk

There are no useless parts in nature, neither in the brain nor anywhere else. The question is how we use it, and for what. Most of the human brain's resources go to supporting the body and its physical processes, leaving relatively little for abstract thinking. We can't simply reconfigure the human brain to do what we want, for example devote 100% of it to pure thinking; that is physically impossible. But in sci-fi everything is possible. Perhaps human evolution will involve a transformation from physical beings to pure intelligence.

We can learn a lot from the natural brain.

https://arxiv.org/pdf/2401.06471.pdf

Concept learning is a fundamental part of human cognition and plays a crucial role in mental processes such as categorization, reasoning, memory and decision making. Researchers from different fields have consistently been interested in the issue of how people acquire concepts. To clarify the mechanism of concept learning in humans, this paper reviews the findings in computational neuroscience and cognitive psychology. They reveal that multisensory representation and text-derived representation are the two essential components of the brain’s representation of concepts. The two types of representations are coordinated by a semantic control system, and concepts are eventually learned. Inspired by this mechanism, this paper constructs a human-like computational model for concept learning based on spiking neural networks. The dilemmas of diverse sources and imbalanced dimensionality of the two forms of concept representations are effectively overcome to obtain human-like concept representations. Tests on similar concepts reveal that the model, built in the same way that humans learn concepts, yields representations that are also closer to human cognition.

Here is a summary of the key points from the research paper:

- Concept learning involves acquiring concepts through multisensory experiences (vision, hearing, touch, etc) and through language/text. These two forms of knowledge are represented and processed by distinct brain systems.

- Experiments in cognitive psychology have validated the existence of multisensory and linguistic representations of concepts in the human brain.

- The two types of concept knowledge are coordinated by a semantic control network in the brain to form integrated conceptual representations.

- The authors propose a computational model for human-like concept learning that mimics this brain architecture. It takes as input multisensory and text-derived concept representations.

- The model uses spiking neural networks to transform the inputs into spike trains and coordinates them spatially and temporally to generate integrated human-like concept representations.

- Evaluations show the model’s concept representations are more similar to human ratings than either modality alone or simply concatenating the inputs.

- The model provides a biologically grounded approach to acquire human-like semantic knowledge that could enable more human-like cognition in AI systems.
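The integration step described above can be made concrete with a toy sketch. This is not the authors' implementation: the dimensions, the random projections, and the simple rate-coded Poisson spiking are all illustrative assumptions. The idea is that two representations of unequal dimensionality (multisensory and text-derived) are projected into a shared space, encoded as spike trains, and their spike counts are merged into one concept vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spikes(rates, steps=200):
    """Rate-code a feature vector as a (steps, dim) binary spike train."""
    rates = np.clip(rates, 0.0, 1.0)  # spike probability per time step
    return (rng.random((steps, len(rates))) < rates).astype(float)

def integrate(multisensory, textual, common_dim=32):
    """Project two unequal-dimension representations into a shared space,
    spike-encode them, and average spike counts into one concept vector."""
    w_m = rng.normal(size=(len(multisensory), common_dim)) / np.sqrt(len(multisensory))
    w_t = rng.normal(size=(len(textual), common_dim)) / np.sqrt(len(textual))
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))  # squash to valid firing rates
    m_train = poisson_spikes(sigmoid(multisensory @ w_m))
    t_train = poisson_spikes(sigmoid(textual @ w_t))
    return (m_train.mean(axis=0) + t_train.mean(axis=0)) / 2.0

vision_touch = rng.normal(size=64)   # toy multisensory features
word_embed = rng.normal(size=300)    # toy text-derived features (e.g. an embedding)
concept = integrate(vision_touch, word_embed)
print(concept.shape)  # a single 32-dim concept representation
```

The paper's model coordinates the two streams spatially and temporally inside a spiking network; the averaging here only stands in for that coordination.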

https://arxiv.org/pdf/2401.03646.pdf

Large Language Models (LLMs) have experienced a rapid rise in AI, changing a wide range of applications with their advanced capabilities. As these models become increasingly integral to decision-making, the need for thorough interpretability has never been more critical. Mechanistic Interpretability offers a pathway to this understanding by identifying and analyzing specific sub-networks or ’circuits’ within these complex systems. A crucial aspect of this approach is Automated Circuit Discovery, which facilitates the study of large models like GPT4 or LLAMA in a feasible manner. In this context, our research evaluates a recent method, Brain-Inspired Modular Training (BIMT), designed to enhance the interpretability of neural networks. We demonstrate how BIMT significantly improves the efficiency and quality of Automated Circuit Discovery, overcoming the limitations of manual methods. Our comparative analysis further reveals that BIMT outperforms existing models in terms of circuit quality, discovery time, and sparsity. Additionally, we provide a comprehensive computational analysis of BIMT, including aspects such as training duration, memory allocation requirements, and inference speed. This study advances the larger objective of creating trustworthy and transparent AI systems in addition to demonstrating how well BIMT works to make neural networks easier to understand.

Here is a summary of the key points from the research paper:

- The paper evaluates Brain-Inspired Modular Training (BIMT), a method to introduce modularity in neural networks and enhance mechanistic interpretability.

- Mechanistic interpretability involves identifying and analyzing sub-networks or “circuits” within complex models to understand their functionality. Automating circuit discovery is crucial for studying large models.

- The research investigates how BIMT affects automated circuit discovery compared to other training methods like L1 regularization.

- Experiments are conducted on MLPs classifying MNIST digits. Circuits for detecting circles and straight lines are discovered.

- Results show BIMT allows discovering higher quality circuits (lower logit differences from original network) faster and with higher sparsity compared to other models. This demonstrates BIMT significantly improves automated circuit discovery.

- Additional analysis explores computational efficiency of BIMT. It requires more memory during training due to neuron swapping but inference times have minimal increase.

- BIMT emerges as an effective approach to enhance interpretability and transparency of neural networks through efficient automated circuit discovery.

- The research advances the goal of creating more trustworthy AI systems. It provides novel insights into using BIMT for mechanistic interpretability.
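Ablation-based circuit discovery, the general technique these experiments automate, can be sketched in a few lines. This is a hypothetical toy, not the paper's code: a tiny random MLP stands in for a trained model, every first-layer weight is scored by the logit difference its removal causes, and only the highest-scoring edges are kept as the "circuit".

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-layer MLP with fixed random weights (stand-in for a trained model).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))
x = rng.normal(size=(1, 8))

def forward(w1, w2):
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return h @ w2                # logits

base = forward(W1, W2)

def discover_circuit(keep_fraction=0.25):
    """Score each first-layer edge by the logit change its ablation causes,
    then keep only the top-scoring fraction of edges."""
    scores = np.zeros_like(W1)
    for i in range(W1.shape[0]):
        for j in range(W1.shape[1]):
            w = W1.copy()
            w[i, j] = 0.0  # ablate one edge
            scores[i, j] = np.abs(forward(w, W2) - base).sum()
    k = int(keep_fraction * scores.size)
    mask = scores >= np.sort(scores.ravel())[-k]
    return W1 * mask, mask

circuit_w, mask = discover_circuit()
logit_diff = np.abs(forward(circuit_w, W2) - base).sum()
sparsity = 1.0 - mask.mean()
print(f"edges kept: {int(mask.sum())}, sparsity: {sparsity:.2f}, logit diff: {logit_diff:.3f}")
```

A good circuit keeps the logit difference low at high sparsity; BIMT's contribution, per the paper, is that its modular training makes such circuits faster to find and sparser than with L1-regularized training.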

https://arxiv.org/pdf/2401.07538.pdf

It is shown that a Hopfield recurrent neural network, informed by experimentally derived brain topology, recovers the scaling picture recently introduced by Deco [1], according to which the process of information transfer within the human brain shows spatially correlated patterns qualitatively similar to those displayed by turbulent flows. Although both models employ a coupling strength which decays exponentially with the Euclidean distance between the nodes, their mathematical nature is widely different, Hopf oscillators versus Hopfield neural network. Hence, their convergence suggests a remarkable robustness of the aforementioned scaling picture. Furthermore, the present analysis shows that the Hopfield model brain remains functional by removing links above about five decay lengths, corresponding to about one sixth of the size of the global brain. This suggests that, in terms of connectivity decay length, the Hopfield brain functions in a sort of intermediate “turbulent liquid”-like state, whose essential connections are the intermediate ones between the connectivity decay length and the global brain size. This “turbulent-like liquid” appears to be more spiky than actual turbulent fluids, with a scaling exponent around 2/5 instead of 2/3.

Here is a summary of the key points from the research paper:

- The paper investigates scaling regimes in the Hopfield dynamics of a whole brain model informed by empirical brain topology data.

- It compares the Hopfield neural network model to a previous model by Deco et al. based on coupled oscillators. Despite the different nature of the models, they find similar spatially correlated patterns resembling turbulent flows.

- They confirm the existence of a scaling regime where neural activity correlations depend on the distance between brain regions. At the empirically derived decay length of 5.55 mm, they find a scaling exponent of ~2/5, close to Deco et al.’s value of 1/2.

- The scaling exponent increases with the connectivity decay length in the model. There is a steep dependence around 5–7 mm, suggesting an optimal connectivity length for functional non-smooth brain behavior.

- Removing long-range connections above 5–6 decay lengths (~25–30 mm) does not significantly affect the scaling regime. This indicates the model brain is sustained by intermediate interactions between local and global scales.

- The paper concludes that the Hopfield model supports the “turbulent-like” scaling picture of whole brain dynamics. The brain appears to function in an intermediate, spiky “turbulent liquid”-like state for optimal information transfer.
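The basic setup can be sketched in a few lines. This is a toy rate-based analogue rather than the paper's binary Hopfield model: node positions, decay length, gain, and noise level are all arbitrary assumptions. Couplings decay exponentially with Euclidean distance, and activity correlations are then compared for near versus far node pairs.

```python
import numpy as np

rng = np.random.default_rng(2)

N, lam = 200, 0.1                        # nodes, coupling decay length
pos = rng.random((N, 3))                 # random node positions in a unit cube
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
J = np.exp(-d / lam)                     # exponentially decaying coupling
np.fill_diagonal(J, 0.0)
J /= J.sum(axis=1, keepdims=True)        # normalize input to each node

gain, sigma, burn, steps = 0.9, 0.5, 200, 2000
x = rng.normal(size=N)
traj = np.empty((steps, N))
for t in range(burn + steps):            # noisy rate dynamics (tanh units)
    x = np.tanh(gain * (J @ x) + sigma * rng.normal(size=N))
    if t >= burn:
        traj[t - burn] = x

# correlation of node activity as a function of pairwise distance
c = np.corrcoef(traj.T)
near = c[(d > 0) & (d < lam)].mean()           # pairs within one decay length
far = c[(d > 3 * lam) & (d < 6 * lam)].mean()  # pairs several decay lengths apart
print(f"mean correlation: near={near:.2f}, far={far:.2f}")
```

Nearby nodes share most of their coupled neighbors, so their fluctuations correlate; far pairs are driven mostly by independent noise. Extracting a scaling exponent, as the paper does, would mean fitting how this correlation decays across the full range of distances.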

https://arxiv.org/pdf/2401.06005.pdf

Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of “inference” in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz’s sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
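The two conceptions can be contrasted in a toy one-dimensional world; this sketch is illustrative and not from the paper. A binary latent cause generates noisy data. Generative (Helmholtz-style) inference inverts p(x|z) with Bayes' rule; discriminative (feedforward-style) inference maps the data to the latent directly. In this Gaussian case, the optimal discriminative mapping happens to be exactly the logistic function obtained by inverting the generative model, which is one way the two views can coincide.

```python
import numpy as np

# Toy world: binary latent cause z generates a noisy 1-D "image" x.
# p(x | z) = Normal(mu[z], sigma),  p(z=1) = prior
mu = {0: -1.0, 1: +1.0}
sigma, prior = 1.0, 0.5

def generative_inference(x):
    """Helmholtz-style: evaluate the likelihood of each hypothesis under
    the generative model and combine with the prior via Bayes' rule."""
    lik = {z: np.exp(-(x - mu[z]) ** 2 / (2 * sigma**2)) for z in (0, 1)}
    return lik[1] * prior / (lik[1] * prior + lik[0] * (1 - prior))  # p(z=1 | x)

def discriminative_inference(x, w=2.0, b=0.0):
    """Feedforward-style: a direct mapping from data to latent. For this
    Gaussian world the optimal mapping is the logistic function with w=2."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

x = 0.7
print(generative_inference(x), discriminative_inference(x))  # identical posteriors
```

The interesting cases, and the paper's subject, are richer worlds where the direct mapping is hard to learn or compute, and where top-down predictions from a generative model may be needed to evaluate competing hypotheses.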


Written by sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.
