Hallucination-free AI // is it possible?)

There are some techniques, like RAG, GraphRAG, knowledge graphs, etc., that can reduce hallucinations…

sbagency
5 min read · Jun 26, 2024

Current LLMs are not suitable for life-critical applications… but several approaches can reduce hallucinations:
- RAG, retrieval-augmented generation; // add relevant context (a minimal sketch follows this list)
- Structured-knowledge extraction & processing; // KGs, tables, etc.
- Code generation and execution; // transform the problem into code
- Agent workflows, chain-of-thought; // multi-hop reasoning
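A minimal sketch of the RAG idea from the first bullet, assuming the openai Python package; the docs list and the keyword-overlap retrieve() helper are toy stand-ins for a real vector store (e.g., Pinecone) and are purely illustrative.

# Minimal RAG sketch: retrieve relevant snippets first, then ask the model
# to answer strictly from that context (or admit it doesn't know).
from openai import OpenAI

client = OpenAI()

docs = [
    "Lexis+ AI is a legal research assistant made by LexisNexis.",
    "Westlaw AI-Assisted Research is a Thomson Reuters product.",
    "RAG grounds model answers in retrieved source documents.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank docs by keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low temperature discourages free-form invention
    )
    return resp.choices[0].message.content

print(answer("Who makes Westlaw AI-Assisted Research?"))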

https://arxiv.org/pdf/2405.20362

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law.

Is it an April Fools' Day joke?)

https://www.pinecone.io/blog/hallucination-free-llm/
https://www.linkedin.com/pulse/quest-hallucination-free-llms-reality-check-ben-haklai-zchhf

Various advancements have been made in recent months that reduce LLM hallucinations, most prominently by utilizing Retrieval Augmented Generation (RAG) and Retrieval Augmented Fine Tuning (RAFT). However, to date the most significant and promising reduction of hallucinations has been achieved and demonstrated with structured output (i.e., data or information generated in accordance with a predefined schema or format) or in “knowledge-intensive” scenarios where a user wants an LLM to satisfy an “information need.” In those domains RAG indeed shows great promise and can dramatically reduce LLM hallucinations [1].
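As a concrete illustration of the structured-output idea, the sketch below asks the model for JSON and rejects anything that does not validate against a predefined schema. It assumes the openai and pydantic packages; the CaseCitation schema and the extract_citation() helper are made-up examples, not an API from the cited article.

# Structured-output sketch: constrain generation to a predefined schema
# and discard responses that fail validation instead of trusting them.
import json
from openai import OpenAI
from pydantic import BaseModel, ValidationError

client = OpenAI()

class CaseCitation(BaseModel):
    case_name: str
    year: int
    court: str

def extract_citation(text: str) -> CaseCitation | None:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Extract the case citation from the text below as JSON with "
                "keys case_name, year, court. Respond with JSON only.\n\n" + text
            ),
        }],
        response_format={"type": "json_object"},  # JSON mode: syntactically valid JSON
    )
    try:
        return CaseCitation(**json.loads(resp.choices[0].message.content))
    except (json.JSONDecodeError, ValidationError):
        return None  # schema violation: treat the output as unreliable

print(extract_citation("In Riggs v. Palmer (1889), the New York Court of Appeals held..."))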

logprobs

To analyze token probabilities, you can request log probabilities at generation time, for example via Hugging Face model class methods for open models, or via the logprobs option of the OpenAI API shown below. Examining the probabilities assigned to generated tokens provides insight into how confident the model is in its choices and can help flag places where hallucinations may occur. [source]

from openai import OpenAI

client = OpenAI()

# Request token log-probabilities along with the completion
# (top_logprobs=2 also returns the two most likely alternatives per position).
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    logprobs=True,
    top_logprobs=2,
)
print(completion.choices[0].logprobs)
Example response object (the print above displays only the logprobs field):

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1702685778,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello!"
      },
      "logprobs": {
        "content": [
          {
            "token": "Hello",
            "logprob": -0.31725305,
            "bytes": [72, 101, 108, 108, 111],
            "top_logprobs": [
              {
                "token": "Hello",
                "logprob": -0.31725305,
                "bytes": [72, 101, 108, 108, 111]
              },
              {
                "token": "Hi",
                "logprob": -1.3190403,
                "bytes": [72, 105]
              }
            ]
          },
          {
            "token": "!",
            "logprob": -0.02380986,
            "bytes": [33],
            "top_logprobs": [
              {
                "token": "!",
                "logprob": -0.02380986,
                "bytes": [33]
              }
            ]
          }
        ]
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 2,
    "completion_tokens": 2,
    "total_tokens": 4
  },
  "system_fingerprint": null
}
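Continuing the snippet above, a small illustrative check converts each token's logprob back to a probability and flags low-confidence tokens; the 0.7 threshold is an arbitrary choice for the sketch, not a calibrated hallucination detector.

# Flag low-confidence tokens using the logprobs returned above.
import math

for item in completion.choices[0].logprobs.content:
    prob = math.exp(item.logprob)  # convert log-probability to probability
    flag = "  <-- low confidence" if prob < 0.7 else ""
    print(f"{item.token!r}: p={prob:.3f}{flag}")
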
https://arxiv.org/pdf/2406.06950v1

This paper focuses on the task of hallucination detection, which aims to determine the truthfulness of LLM-generated statements. To address this problem, a popular class of methods utilize the LLM’s self-consistencies in its beliefs in a set of logically related augmented statements generated by the LLM, which does not require external knowledge databases and can work with both white-box and black-box LLMs. However, in many existing approaches, the augmented statements tend to be very monotone and unstructured, which makes it difficult to integrate meaningful information from the LLM beliefs in these statements. Also, many methods work with the binarized version of the LLM’s belief, instead of the continuous version, which significantly loses information. To overcome these limitations, in this paper, we propose Belief Tree Propagation (BTProp), a probabilistic framework for LLM hallucination detection. BTProp introduces a belief tree of logically related statements by recursively decomposing a parent statement into child statements with three decomposition strategies, and builds a hidden Markov tree model to integrate the LLM’s belief scores in these statements in a principled way. Experiment results show that our method improves baselines by 3%-9% (evaluated by AUROC and AUC-PR) on multiple hallucination detection benchmarks. Code is available at https://github.com/UCSB-NLP-Chang/BTProp
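BTProp itself builds a belief tree and a hidden Markov tree model (see the linked repository). The sketch below only illustrates the simpler self-consistency intuition such methods start from, namely sampling several answers and treating disagreement as a hallucination signal; the consistency_score() helper and its exact-match agreement are a deliberately crude, hypothetical example, not the paper's method.

# Self-consistency sketch (NOT BTProp): sample several answers at nonzero
# temperature and measure agreement; low agreement hints at hallucination.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def consistency_score(question: str, n: int = 5) -> float:
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": question + " Reply with a single word or number."}],
            temperature=1.0,  # diversity is needed to expose inconsistency
        )
        answers.append(resp.choices[0].message.content.strip().lower())
    # Fraction of samples that agree with the most common answer.
    # (Exact-match agreement is crude; real methods compare statements semantically.)
    return Counter(answers).most_common(1)[0][1] / n

print(consistency_score("In what year was the U.S. case Riggs v. Palmer decided?"))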
