Sam Altman, the CEO of OpenAI, recently live-streamed an update, discussing various topics. He mentioned the unexpected success of ChatGPT and GPT-4, highlighting that the technology’s usefulness exceeded their initial expectations. Altman expressed that the general intelligence of these models is increasing, allowing for more valuable integration into various aspects of our lives.
Regarding the advancements in AI, Altman emphasized that the key focus is on the models becoming smarter over time. He anticipates that GPT-5 or future models will continue to show improvements in generalized intelligence, enabling them to handle longer, more complex problems with greater accuracy.
In terms of the upcoming year, Altman discussed the potential gains in AI, attributing progress to both more powerful models and increased development by a larger community. He mentioned improvements in real-time voice processing and the evolving nature of how people interact with computers, moving towards a more conversational and AI-driven experience.
Altman also addressed the challenges of AI models being used in elections, expressing concerns and emphasizing OpenAI’s commitment to securing democracy. He acknowledged the need for a tight feedback loop, careful monitoring, and collaboration with partners to address potential issues.
The conversation delved into OpenAI’s policy changes regarding the use of its models by militaries. Altman explained that the focus is on avoiding blanket restrictions and instead defining what the models can’t be used for. He highlighted the importance of allowing customization while ensuring certain ethical boundaries are respected.
Altman touched on the complexity of AI models adapting to different values globally. He acknowledged the need for global standards and customization based on user values. He also discussed the ongoing efforts to minimize the use of copyrighted content and the challenges associated with respecting opt-outs.
In the lightning round, Altman briefly addressed personal investments, the role of Ilya Sutskever at OpenAI, and the challenges and opportunities of incorporating generative AI into various industries. He concluded by emphasizing the need for infrastructure investments to deliver AI at the scale demanded by society.
OpenAI CEO Sam Altman said artificial general intelligence, or AGI, could be developed in the “reasonably close-ish future.”
AGI is a term used to refer to a form of artificial intelligence that can complete tasks at the same level as, or a step above, humans.
Altman said AI isn’t yet replacing jobs at the scale that many economists fear, and that it’s already becoming an “incredible tool for productivity.”
“I can’t believe I’m saying this… but I think AGI will be developed in the reasonably close-ish future, and it will change the world much less than we all think. It will change jobs much less than we all think,” Sam Altman said. Yes, it’s hard to believe.
Artificial General Intelligence (AGI) refers to a form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to that of a human being. Unlike conventional AI, which is designed for specific tasks, AGI aims to exhibit general cognitive abilities, enabling it to perform diverse activities without being limited to predefined domains. AGI seeks to emulate human-like intelligence, encompassing reasoning, problem-solving, perception, and adaptation to various contexts. The distinction lies in the broad and adaptable nature of AGI, contrasting with the more specialized and narrow focus of traditional AI systems.
John Carmack says that AGI could be created by a single individual.
Perhaps AGI needs a latent space of ideas, a continuous vector space capturing notions rather than image characteristics. If we had such a thing, we could iteratively refine conceptual responses to prompts and wordify them at the end of the process rather than the beginning.
Certainly, humans working on challenging problems operate in such a latent space, where thought refines a notional or visual model of an idea. Sometimes we frame our thought by trial ballooning it with language, but often we’re working at a purely conceptual level.
Definition from Wikipedia: “A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items resembling each other are positioned closer to one another. Position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances from the objects.”
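The core property in that definition — similar items sit closer together in the space — can be illustrated with a toy sketch. The vectors below are hand-picked stand-ins, not learned embeddings; real systems derive them from data, and all names here are invented for the example.

```python
import math

# Toy "latent space": hand-picked 3-D vectors standing in for learned
# embeddings. In a real system these coordinates emerge from training,
# not from manual assignment.
embeddings = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.75, 0.20],
    "car":   [0.10, 0.20, 0.95],
    "truck": [0.15, 0.25, 0.90],
}

def cosine_similarity(a, b):
    """Directional similarity of two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(item):
    """Return the other item positioned closest to `item` in the space."""
    return max(
        (other for other in embeddings if other != item),
        key=lambda other: cosine_similarity(embeddings[item], embeddings[other]),
    )

print(nearest("cat"))    # dog  -- resembling items are positioned closer
print(nearest("truck"))  # car
```

The point of the sketch is only the geometry: "cat" resolves to "dog" and "truck" to "car" because resemblance is encoded as proximity, which is the property the Wikipedia definition describes.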