Neuro-symbolic learning and reasoning

System 1 & system 2 combination: the LLM as an NLP engine, with KGs & other knowledge representations for reasoning

sbagency
3 min read · Mar 27, 2024
https://www.youtube.com/watch?v=8S0QLhDIb3E

Here is a summary of the key points from the presentation:

The speaker discusses the importance of integrating symbolic reasoning and neural networks for more robust and interpretable AI systems, an approach known as neuro-symbolic AI. He highlights the long history of recognized connections between logic/symbolic systems and connectionist/neural network approaches, dating back to pioneers such as Turing and von Neumann and to early neural network research.

He argues that the current success of large language models and deep learning has exposed limitations such as poor interpretability and weak common-sense reasoning. Integrating symbolic techniques like logic, verification, and reasoning could help address these issues and lead to more robust, trustworthy AI systems.

The speaker describes his group’s work on neuro-symbolic models that represent and reason over logical fragments using neural networks, covering temporal logic, epistemic logic, and non-monotonic reasoning. He sees promising directions in graph neural networks and in the system 1/system 2 ideas from cognitive science.

Overall, he makes a case for neuro-symbolic AI as a principled way to combine the strengths of neural learning and symbolic reasoning to develop more advanced, trustworthy, and interpretable AI capabilities. Key open challenges include handling first-order logic, common sense reasoning, and developing effective human-AI communication models.
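The neural/symbolic split advocated in the talk can be sketched in a few lines: a "system 1" perception model outputs probability distributions over symbols, and a "system 2" logical layer reasons over them exactly. The digit distributions below are made up in place of a real classifier, and the symbolic step is the standard weighted-model-counting trick used in neuro-symbolic systems such as DeepProbLog — a toy illustration, not the speaker's actual models.

```python
from collections import defaultdict

# Stand-in for neural ("system 1") outputs: P(digit) for two images.
# A real system would get these from a trained classifier's softmax.
p_img1 = {3: 0.7, 8: 0.3}
p_img2 = {5: 0.6, 6: 0.4}

def sum_distribution(p1, p2):
    """Symbolic ("system 2") layer: compute P(a + b = s) exactly by
    enumerating all digit pairs and summing their joint probabilities."""
    out = defaultdict(float)
    for a, pa in p1.items():
        for b, pb in p2.items():
            out[a + b] += pa * pb
    return dict(out)

# The logical rule (addition) is exact; only perception is uncertain.
print(sum_distribution(p_img1, p_img2))
```

Because the reasoning step is exhaustive and deterministic, its output is interpretable and verifiable, while the uncertainty lives entirely in the neural perception layer.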

https://www.youtube.com/watch?v=Bcfu_1kIH_U

The presentation discusses Elemental Cognition’s approach to combining large language models (LLMs) with symbolic reasoning engines to solve complex reasoning problems. The key points are:

1. LLMs are good at natural language processing but struggle with complex reasoning tasks.

2. Symbolic reasoning engines are reliable for complex reasoning but lack accessible interfaces.

3. Elemental Cognition’s “LLM sandwich” approach uses LLMs to bridge the gap between natural language and a symbolic reasoning engine.

4. They developed a declarative language called Cogent that acts as an intermediate representation between LLMs and the reasoning engine.

5. Cogent is readable by English speakers but unambiguous for formal reasoning.

6. They provide tools like interactive model builders assisted by LLMs to help users define models in Cogent.

7. The resulting system can interact with users in natural language while providing reliable, provable reasoning results.

8. They have deployed applications in domains like airline travel planning and degree planning at universities using this approach.

9. Security is a concern as LLMs can be vulnerable, but their constrained input/output mitigates some risks.
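The "LLM sandwich" pattern above can be sketched with a toy travel-planning domain: the LLM layers only translate between natural language and a formal spec, while the itinerary search itself is plain deterministic code, so its answers are checkable and complete for the given data. The route graph and spec format below are invented for illustration — this is not Elemental Cognition's Cogent language or tooling.

```python
# Hypothetical route graph: origin city -> directly reachable cities.
FLIGHTS = {
    "NYC": ["CHI", "MIA"],
    "CHI": ["SEA", "NYC"],
    "MIA": ["SEA"],
    "SEA": [],
}

def plan(spec):
    """Symbolic core: exhaustive depth-first search over the route
    graph. Returns every itinerary satisfying the spec, so the result
    is provably complete for the given flight data."""
    routes = []

    def dfs(city, path):
        if city == spec["goal"] and len(path) - 1 <= spec["max_legs"]:
            routes.append(list(path))
        if len(path) - 1 >= spec["max_legs"]:
            return  # prune: leg budget exhausted
        for nxt in FLIGHTS.get(city, []):
            if nxt not in path:  # no revisiting cities
                dfs(nxt, path + [nxt])

    dfs(spec["start"], [spec["start"]])
    return routes

# In the deployed system, the LLM front-end would produce this spec
# from a request like "get me from NYC to SEA in at most two flights",
# and the LLM back-end would render the routes in natural language.
# Here we write the spec by hand.
spec = {"start": "NYC", "goal": "SEA", "max_legs": 2}
for route in plan(spec):
    print(" -> ".join(route))
```

The design point is the constrained interface mentioned in item 9: the LLM's output is a small structured spec rather than free-form code, which narrows the attack surface and makes the reasoning step's guarantees independent of the LLM's reliability.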

https://www.youtube.com/watch?v=-FMUt3OARy0
https://github.com/langchain-ai/langchain-extract

Written by sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.
