Knowledge graphs // AI reasoning improved

Knowledge graphs stored as triplets (object_1, relation, object_2) reduce or lose information: by default, a KG is just a set of facts, with no context, no details, and no conditions.

sbagency
Sep 4, 2024

There are many opportunities to extend knowledge graphs: add metadata, time, space, and so on, everything that natural language can express. What other techniques or ideas can be used to improve reasoning?
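As a rough illustration of what such an extension could look like, the sketch below contrasts a bare triple with a qualified statement that also carries time, place, provenance, and conditions. The field names are hypothetical, not a standard schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# A bare KG triple: a fact only, no context.
@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

# A qualified statement: the same fact plus the context a triple drops.
# Field names (valid_from, location, source, conditions) are illustrative.
@dataclass
class QualifiedStatement(Triple):
    valid_from: Optional[str] = None          # temporal scope
    valid_to: Optional[str] = None
    location: Optional[str] = None            # spatial scope
    source: Optional[str] = None              # provenance
    conditions: list[str] = field(default_factory=list)

fact = Triple("Einstein", "worked_at", "ETH Zurich")
qualified = QualifiedStatement(
    "Einstein", "worked_at", "ETH Zurich",
    valid_from="1912", valid_to="1914",
    location="Zurich", source="biography",
)
print(fact)
print(qualified)
```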

https://thenewstack.io/boost-llm-results-when-to-use-knowledge-graph-rag/
https://arxiv.org/pdf/2205.14307

Multi-hop logical reasoning over knowledge graphs plays a fundamental role in many artificial intelligence tasks. Recent complex query embedding methods for reasoning focus on static KGs, while temporal knowledge graphs have not been fully explored. Reasoning over TKGs has two challenges: 1. The query should answer entities or timestamps; 2. The operators should consider both set logic on entity sets and temporal logic on timestamp sets. To bridge this gap, we introduce the multi-hop logical reasoning problem on TKGs and then propose the first temporal complex query embedding framework, named Temporal Feature-Logic Embedding (TFLEX), to answer temporal complex queries. Specifically, we utilize fuzzy logic to compute the logic part of the Temporal Feature-Logic embedding, thus naturally modeling all first-order logic operations on the entity set. In addition, we further extend fuzzy logic to the timestamp set to cope with three extra temporal operators (After, Before and Between). Experiments on numerous query patterns demonstrate the effectiveness of our method.
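To make the flavor of these operators concrete, here is a toy sketch of fuzzy set logic over an entity set plus After/Before/Between filters over a timestamp set. This only illustrates the kinds of operations the abstract names; the real TFLEX operates on learned feature-logic embeddings:

```python
import numpy as np

# Toy fuzzy sets: each entity / timestamp gets a membership score in [0, 1].
entities = ["e1", "e2", "e3"]
q1 = np.array([0.9, 0.2, 0.7])   # membership in "answers of sub-query 1"
q2 = np.array([0.6, 0.8, 0.1])   # membership in "answers of sub-query 2"

fuzzy_and = np.minimum(q1, q2)   # conjunction (Gödel t-norm)
fuzzy_or  = np.maximum(q1, q2)   # disjunction
fuzzy_not = 1.0 - q1             # negation

timestamps = np.array([2010, 2014, 2018, 2022])
t_scores   = np.array([0.1, 0.9, 0.8, 0.3])   # membership of each timestamp

def after(ts, scores, pivot):
    """Keep membership only for timestamps strictly after the pivot."""
    return np.where(ts > pivot, scores, 0.0)

def before(ts, scores, pivot):
    return np.where(ts < pivot, scores, 0.0)

def between(ts, scores, lo, hi):
    return np.where((ts > lo) & (ts < hi), scores, 0.0)

print("AND:", dict(zip(entities, fuzzy_and)))
print("After 2012:", dict(zip(timestamps.tolist(), after(timestamps, t_scores, 2012))))
```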

https://arxiv.org/html/2408.07911v1

Temporal knowledge graph reasoning (TKGR) is increasingly gaining attention for its ability to extrapolate new events from historical data, thereby enriching the inherently incomplete temporal knowledge graphs. Existing graph-based representation learning frameworks have made significant strides in developing evolving representations for both entities and relational embeddings. Despite these achievements, there’s a notable tendency in these models to inadvertently learn biased data representations and mine spurious correlations, consequently failing to discern the causal relationships between events. This often leads to incorrect predictions based on these false correlations. To address this, we propose an innovative causal enhanced graph representation learning framework for TKGR (named CEGRL-TKGR). This framework introduces causal structures in graph-based representation learning to unveil the essential causal relationships between events, ultimately enhancing task performance. Specifically, we first disentangle the evolutionary representations of entities and relations in a temporal graph sequence into two distinct components, namely causal representations and confounding representations. Then, drawing on causal intervention theory, we advocate the utilization of causal representations for predictions, aiming to mitigate the effects of erroneous correlations caused by confounding features, thus achieving more robust and accurate predictions. Finally, extensive experimental results on six benchmark datasets demonstrate the superior performance of our model in the link prediction task.
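A schematic reading of the disentangle-then-predict idea, assuming a simple split of each embedding into a causal half (used for scoring) and a confounding half (which an intervention-style loss would regularize). This is an illustration of the idea from the abstract, not the CEGRL-TKGR architecture:

```python
import torch
import torch.nn as nn

class DisentangledScorer(nn.Module):
    """Toy link scorer: split an entity embedding into causal and
    confounding components and score links from the causal part only."""
    def __init__(self, dim: int, num_relations: int = 10):
        super().__init__()
        self.causal_proj = nn.Linear(dim, dim // 2)
        self.confound_proj = nn.Linear(dim, dim // 2)
        self.rel = nn.Embedding(num_relations, dim // 2)

    def forward(self, head, rel_id, tail):
        h_causal = self.causal_proj(head)
        t_causal = self.causal_proj(tail)
        h_confound = self.confound_proj(head)        # would feed an
        r = self.rel(rel_id)                          # intervention-style regularizer
        score = (h_causal * r * t_causal).sum(-1)     # DistMult-style scoring
        return score, h_confound

model = DisentangledScorer(dim=16)
heads, tails = torch.randn(4, 16), torch.randn(4, 16)
rels = torch.randint(0, 10, (4,))
scores, _ = model(heads, rels, tails)
print(scores)  # one link-prediction score per (head, relation, tail)
```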

Can we improve LLMs' reasoning abilities by using the same LLMs?

https://arxiv.org/pdf/2408.16326

Self-critique has become an important mechanism for enhancing the reasoning performance of LLMs. However, current approaches mainly involve basic prompts without further training, which tend to be over-simplified, leading to limited accuracy. Moreover, there is a lack of in-depth investigation of the relationship between an LLM's ability to critique and its task-solving performance. To address these issues, we propose Critic-CoT, a novel framework that pushes LLMs toward System-2-like critic capability, via a step-wise CoT reasoning format and distant-supervision data construction, without the need for human annotation. Experiments on GSM8K and MATH show that, via filtering out invalid solutions or iterative refinement, our enhanced model boosts task-solving performance, which demonstrates the effectiveness of our method. Further, we find that training on critique and refinement alone improves generation. We hope our work can shed light on future research on improving the reasoning and critique abilities of LLMs.
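The abstract suggests a generate → step-wise critique → filter-or-refine loop. A minimal sketch of that control flow, with a placeholder `call_llm` and illustrative prompts (not the paper's code), could look like this:

```python
# Minimal sketch of a critique-and-refine loop in the spirit of Critic-CoT.
# `call_llm` is a placeholder for any chat-completion call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def solve_with_critique(question: str, max_rounds: int = 3) -> str:
    attempt = call_llm(f"Solve step by step:\n{question}")
    for _ in range(max_rounds):
        critique = call_llm(
            "Check the attempt step by step. For each step, state whether it "
            f"is correct and why.\nQuestion: {question}\nAttempt: {attempt}"
        )
        if "incorrect" not in critique.lower():
            break  # all steps judged correct: accept the attempt
        attempt = call_llm(
            "Revise the attempt, fixing the steps the critique marked as "
            f"incorrect.\nQuestion: {question}\nAttempt: {attempt}\n"
            f"Critique: {critique}"
        )
    return attempt
```

The worked example below, taken from the paper, shows what one round of such step-wise critique and refinement looks like in practice.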

Question:
What is the smallest positive perfect cube that can be written as the sum of three consecutive integers?
Model Attempt:
Step 1: Let the three consecutive integers be x − 1, x, and x + 1. The sum of these three integers is 3x.
Step 2: Since we are looking for a perfect cube, let’s express 3x as a cube of some integer n, where n³ = 3x.
Step 3: To find the smallest such n, we need to minimize x. However, since n³ must be divisible by 3, and x is an integer, n itself must also be divisible by 3.
Step 4: The smallest integer n that is divisible by 3 and gives us a perfect cube when cubed is n = 6, because 6³ = 216.
Step 5: Substituting n = 6 into n³ = 3x, we get 216 = 3x, which simplifies to x = 72.
Step 6: Therefore, the smallest positive perfect cube that can be written as the sum of three consecutive integers is 216.
Model Critique:
Step 1: Let the three consecutive integers be x − 1, x, and x + 1. The sum of these three integers is indeed 3x.
Conclusion: Step 1 is correct.
Step 2: Expressing 3x as a cube of some integer n, where n³ = 3x, correctly sets up the equation needed to solve the problem. This step accurately translates the problem statement into a mathematical expression.
Conclusion: Step 2 is correct.
Step 3: The claim that n must be divisible by 3 because n³ must be divisible by 3 is a misunderstanding. In reality, n³ being divisible by 3 does not imply that n itself must be divisible by 3; rather, it simply means that n³ will always be divisible by 3 regardless of whether n is divisible by 3. This step introduces an unnecessary restriction on n.
Conclusion: Step 3 is incorrect.
Model Refinement:
Step 3 (Corrected): Since we are looking for the smallest positive perfect cube that can be expressed as 3x, we should start by identifying the smallest perfect cubes and checking if any of them can be represented in this form.
Step 4: The smallest perfect cubes are 1³ = 1, 2³ = 8, 3³ = 27, 4³ = 64, 5³ = 125, 6³ = 216, and so on. Among these, 27 is the first number that can be expressed as 3x, where x is an integer (x = 9).
Step 5: Therefore, the smallest positive perfect cube that can be written as the sum of three consecutive integers is 27.
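The refined answer is easy to sanity-check by brute force, since the sum of three consecutive integers centered on x is always (x − 1) + x + (x + 1) = 3x:

```python
# Find the smallest positive perfect cube expressible as (x-1) + x + (x+1) = 3x.
for n in range(1, 10):
    cube = n ** 3
    if cube % 3 == 0:
        x = cube // 3
        print(f"{n}^3 = {cube} = {x - 1} + {x} + {x + 1}")  # -> 3^3 = 27 = 8 + 9 + 10
        break
```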

Here’s a summary of the key points from the conversation with Jürgen Schmidhuber:

1. AGI and current AI limitations:
- Schmidhuber believes AGI is possible and will come in the not-so-distant future, but current large language models (LLMs) like ChatGPT are just components, not true AGI.
- Many people overestimate the capabilities of LLMs due to lack of understanding of their limitations.
- True AGI will require embodied AI, like robots operating in the real world, with more complex decision-making and planning abilities.

2. Historical context and Schmidhuber’s contributions:
- Schmidhuber has been working on and predicting AGI since the 1970s.
- He and his team developed many foundational techniques now used in modern AI, including early work on world models and recurrent neural networks.
- Many tech companies have hired Schmidhuber’s students and collaborators over the years.

3. Reinforcement learning and world models:
- Schmidhuber emphasizes the importance of reinforcement learning and world models for AGI.
- He describes a 2015 paper on “learning to think,” where a controller learns to become a “prompt engineer” for a world model.
- This approach allows for more efficient planning and hierarchical reasoning.

4. Data compression and scientific progress:
- Schmidhuber views scientific progress as a history of data compression improvements.
- He explains how discoveries like gravity and relativity allowed for better compression of observed data.
- Building artificial scientists involves creating systems that can discover new regularities and compress data more efficiently.

5. Future directions:
- Schmidhuber suggests that future AI systems will need to learn how to decompose problems into subgoals and work with abstract representations.
- He describes recent work on a “Society of Mind” approach, where multiple AI models collaborate to solve problems.
- The future of AI planning will likely involve systems that can learn to use world models efficiently under resource constraints.

Overall, Schmidhuber provides a historical perspective on AI development, critiques current hype around LLMs, and outlines his vision for the path towards true AGI, emphasizing the need for more complex systems that can learn, plan, and reason in the real world.
