What’s wrong with LLMs? Hallucinating.
What’s wrong with AI-hype? Speculating.
In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers. link
Language models generate text much as calculators compute numbers; there is no intelligence involved. // actually, LLMs transform text into numbers and back
LLMs == text calculators; no AI/AGI here
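To make the "text calculator" point concrete, here is a toy sketch of the first and last steps of any LLM: text is mapped to numbers (token ids), processed, and mapped back. The vocabulary and whitespace splitting below are deliberately simplistic stand-ins for a real tokenizer.

```python
# Toy illustration (NOT a real tokenizer): an LLM's first step maps text
# to numbers (token ids); its last step maps numbers back to text.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
inv_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    """Map each whitespace-separated word to a token id."""
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

def decode(ids):
    """Map token ids back to words."""
    return " ".join(inv_vocab[i] for i in ids)

print(encode("the cat sat"))  # [0, 1, 2]
print(decode([0, 1, 2]))      # the cat sat
```

Everything between `encode` and `decode` in a real model is arithmetic on those numbers: matrix multiplications, nothing more mystical.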
Natural language is the natural way to prompt and program computing systems. But there is a risk of prompts being misunderstood and of hallucinations in the generated answers.
RLHF isn’t a solution, it’s a patch.
A neural network stores a kind of “knowledge graph” in its weights, with a confidence level attached to each node.
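The "confidence level" is not stored as an explicit graph; it surfaces as a probability distribution over next tokens. A minimal sketch, assuming hypothetical logits a model might assign to candidate completions of "The capital of France is":

```python
import math

def softmax(logits):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    m = max(logits)                 # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for next-token candidates (illustrative numbers only).
candidates = ["Paris", "Lyon", "banana"]
logits = [9.0, 3.5, -2.0]
probs = softmax(logits)
for tok, p in zip(candidates, probs):
    print(f"{tok}: {p:.3f}")
```

The model's "confidence" in "Paris" is just the largest value in that distribution; it is a statistic, not a belief.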
There is a growing trend for foundation models to serve as the fundamental building blocks of most future AI systems. However, incorporating foundation models into AI systems raises significant concerns about responsible AI, owing to their black-box nature and supposedly rapidly advancing super-intelligence.
GPT “agents” can prompt themselves and each other
AI hype is beyond all limits. Language models can generate prompts for themselves and for other models. This continuous process of generating new prompts can be extended with external scripts, plugins, and data sources. Is this super-intelligence? Actually no: just data processing, but with cool new capabilities.
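Stripped of the hype, an "agent" is a loop that feeds each answer back in as the next prompt. A minimal sketch: `call_model` below is a stand-in for any LLM API, implemented here as a canned stub so the example runs offline.

```python
# Minimal self-prompting loop. `call_model` is a HYPOTHETICAL stand-in for
# a real LLM API call; the canned replies below are illustrative only.
def call_model(prompt):
    """Stub LLM: returns a follow-up prompt for itself, then stops."""
    canned = {
        "Plan a blog post about LLMs.": "Next: draft an outline.",
        "Next: draft an outline.": "Next: write the intro.",
        "Next: write the intro.": "DONE",
    }
    return canned.get(prompt, "DONE")

def agent_loop(goal, max_steps=10):
    """Feed each answer back in as the next prompt: no magic, just a loop."""
    prompt, transcript = goal, []
    for _ in range(max_steps):
        answer = call_model(prompt)
        transcript.append((prompt, answer))
        if answer == "DONE":
            break
        prompt = answer  # the model prompts itself
    return transcript

for p, a in agent_loop("Plan a blog post about LLMs."):
    print(p, "->", a)
```

The `max_steps` cap matters: without it, two models prompting each other have no natural stopping point.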
For example, within this loop new code and scripts can be generated and executed as well. The process itself is subject to optimization and will evolve and improve over time. In this scenario, users just formulate high-level goals: no prompt engineering, no coding, etc.
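The generate-and-execute step can be sketched in a few lines. `generate_code` is a stub standing in for an LLM; real systems would sandbox execution, since calling `exec()` on untrusted model output is dangerous.

```python
# Sketch of generate-and-execute: the "model" emits code as text (stubbed
# here), and the host script runs it. Sandbox this in any real system.
def generate_code(goal):
    """Stub code generator standing in for an LLM (illustrative only)."""
    if goal == "sum 1..10":
        return "result = sum(range(1, 11))"
    return "result = None"

def run_generated(goal):
    namespace = {}
    exec(generate_code(goal), namespace)  # execute the generated script
    return namespace["result"]

print(run_generated("sum 1..10"))  # 55
```

Nothing in this loop requires understanding; the goal string selects a program, the host executes it, and the output can feed the next iteration.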
We are at a new stage of a significant paradigm shift in the way computing systems work…
// in progress