GenAI productivity boost is overrated // leave AGI for sci-fi stories
MIT economist Daron Acemoglu is skeptical about the near-term impact of artificial intelligence (AI), predicting it will be a “great disappointment.” He acknowledges that generative AI technology is impressive but argues the hype has outrun reality, setting the stage for setbacks in 2024. Acemoglu singles out hallucination (models confidently generating false or inaccurate content) as a core problem. He doubts that supervised learning offers a quick fix, since the models’ architecture makes it difficult to anchor their predictions to known truths.
The economist is pessimistic about achieving artificial general intelligence (AGI), attributing the AGI narrative to storytelling in the AI era. In his view, expectations of exponential productivity improvements and progress toward AGI will go unmet, and businesses will blame the failures on faulty AI implementation. Acemoglu advocates a more practical approach: identify the human tasks that AI models can genuinely augment, and provide workers with appropriate training.
Despite these conflicting narratives, he notes that large language models such as ChatGPT are already widely used in social media and online search, increasing the monetization of personalized digital ads and fueling manipulation and misinformation online. Acemoglu doubts that antitrust action against major tech companies will be effective, predicting it will go nowhere for lack of courage from courts and policymakers. He expects the technology’s shortcomings to become apparent in 2024, prompting discussion of new laws and regulations that could attract bipartisan support.
Hype is hype
Sam Altman was briefly let go as CEO of OpenAI, but is now back after support from employees. He feels the team is more motivated and productive than ever.
Altman believes multimodality (speech, images, video) and improvements in reasoning ability and reliability will be the key milestones for AI over the next two years. Customizability and personalization will also be important.
He sees major productivity gains from AI in coding, healthcare and education. Eventually AI agents may be able to do entire jobs rather than just tasks.
Altman believes strong global cooperation and regulation will be needed for the most powerful AI systems to ensure safety. He supports the idea of an international regulatory body like the IAEA for nuclear energy.
Both agree the rapid pace of AI advancement could force societal adaptation faster than ever before. But they are optimistic humans will find new fulfilling problems and purposes, though it may look very different from today.
Driving down the cost of intelligence will help spread the benefits of AI more equitably. Altman believes they are on track to reduce costs enormously.
OpenAI remains a small company of around 500 employees. Altman credits its mission-driven talent for major breakthroughs, even as the company breaks the rules of typical startups.
Generative AI tools such as LLMs have been touted as a boon to collective productivity, but the authors argue that leaning too far into the hype could be a mistake. Assessments of productivity typically focus on the task level and on how individuals might use and benefit from LLMs; using such findings to draw broad conclusions about firm-level performance could prove costly. The authors argue that leaders need to understand two core problems of LLMs before adopting them company-wide: 1) their persistent tendency to produce convincing falsehoods, and 2) the likely long-term negative effects of LLM use on employees and internal processes. They close with a long-term perspective on LLMs and an outline of the kinds of tasks LLMs can perform reliably.
According to the latest report from Boston Consulting Group (BCG), released Friday, some 90% of executives are taking a “wait and see” approach, either holding off on trying GenAI or experimenting with it only in minor ways.