If superhuman intelligence is possible, it should ideally emerge from human intelligence, so that we need not replicate the entire evolutionary process from scratch. Human language is the most sophisticated form of knowledge representation we have. Evolution has transformed the function of language: it serves not only communication. Other forms of knowledge representation, such as audio, images, and video, complement natural language and can be translated to and from it.
Human language == the assembly language of knowledge
The emergence of superhuman AI will not be a single event. Progress will be gradual.
It will start with systems that can learn how the world works, like baby animals.
Then we’ll have machines that are objective-driven and that satisfy guardrails.
Then, we’ll have machines that can plan and reason to satisfy those objectives and guardrails.
Then we’ll have machines that can plan hierarchically.
At first, those machines will be barely smarter than a mouse or a rat.
Then we’ll scale up those machines to be as smart as a dog or a crow.
Then, we’ll adjust the guardrails to make those systems controllable and safe as we scale them up.
Then we’ll train them on a wide variety of environments and tasks.
Then we’ll fine-tune them on all the tasks we want them to accomplish.
At some point, we will realize that the systems we’ve built are smarter than us in almost all domains.
This doesn’t necessarily mean that these systems will have sentience or “consciousness” (whatever one means by that).
But they will be better than us at executing the tasks we set for them.
They will be under our control.