How far will the AI boom go? // what’s the next big thing, what is real and what is fiction
Thousands of AI startups are pushing the limits: 18K+ in the US alone
Artificial intelligence (AI) drove the stock market surge in 2023. The US remains the dominant AI power, hosting over 18,000 AI startups.
“It is the most advanced AI accelerator in the industry,” boasted Lisa Su, boss of Advanced Micro Devices (AMD), at the launch in December of its new MI300 chip. Ms Su rattled off a series of technical specifications: 153bn transistors, 192 gigabytes of memory and 5.3 terabytes per second of memory bandwidth. That is, respectively, about 2, 2.4 and 1.6 times more than the H100, the top-of-the-line artificial-intelligence chip made by Nvidia.
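As a back-of-the-envelope check, the quoted multiples line up with the commonly cited specs of Nvidia's H100 (SXM variant); note the H100 baseline figures below are an assumption drawn from public spec sheets, not from the launch event itself:

```python
# Sanity-check the MI300 vs H100 ratios quoted above.
# H100 baseline figures are commonly cited SXM specs (assumed, not from the article).
mi300 = {"transistors_bn": 153, "memory_gb": 192, "bandwidth_tbps": 5.3}
h100 = {"transistors_bn": 80, "memory_gb": 80, "bandwidth_tbps": 3.35}

for key in mi300:
    ratio = mi300[key] / h100[key]
    print(f"{key}: {ratio:.1f}x")  # roughly 1.9x, 2.4x, 1.6x
```

The transistor ratio comes out closer to 1.9 than 2, which matches the article's hedge of "about 2".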
In his book with Eric Schmidt, Henry Kissinger, the former secretary of state, raised concerns about the existential threats of uncontrolled AI, asking what we want to achieve with AI and why. Taken together, these threads tell one story: Altman’s reinstatement suggests a willingness to accept risks to gain AI’s rewards, Su’s new chip signals intense competition in AI hardware, and Kissinger’s philosophical questions highlight the need to grapple with AI’s impacts.
The five ways AI could destroy humanity:
1. Rogue AI
Humanity may create an AI so powerful we can’t control it.
In this scenario, open-ended goals could lead to AI overthrowing humanity.
2. Bioweapons
AI can accelerate the discovery of bioweapons and toxic compounds.
In the hands of terrorists, this could lead to a devastating plague being unleashed.
3. AI let loose deliberately
AI could be used to create a powerful cyberweapon that could destroy the world’s systems.
Experts warn some groups could let this loose deliberately.
4. Nuclear war
Military decision-making on nuclear weapons could be turned over to AI.
If this is the case, conflict could rapidly escalate into a ‘flash war’ that destroys the world.
5. Gradual replacement
We might slowly turn over control to the AI without even noticing it.
Humanity could quietly be eclipsed by its creation.
Here is a summary of the key points from the interview:
- Isaac Kohane discusses the new AI in Medicine PhD track at Harvard Medical School, which aims to train students in applying AI and machine learning to clinical medicine. The program focuses on the medical application of these technologies and understanding the associated challenges.
- Kohane sees job opportunities for graduates in large tech companies getting involved in healthcare, health-related startups, government/regulatory roles, and within healthcare systems as heads of AI.
- He discusses near-future AI applications in healthcare administration to improve efficiency and billing, clinical note-taking to reduce doctor burden, and decision support in low-resource settings. He sees the biggest potential in AI that helps empower patients.
- Kohane believes it’s premature for extensive regulation on AI use in medicine, but transparency on training data and alignment of AI values/goals will be important.
- The main danger is deployment of AI by actors without patients’ best interests in mind. Regulation around who is developing AI and their goals will be important to ensure alignment with medicine’s values.
Here is the summary from the article: OpenAI, the prominent player in generative AI development, has achieved a notable milestone with a reported $2 billion in revenue, just a year after the public release of ChatGPT. The Financial Times indicates that OpenAI, previously valued at over $80 billion by investors, expects to more than double this revenue by 2025. The launch of ChatGPT in late 2022 has fueled a surge in interest in generative AI, showcasing capabilities like conversational interaction, acknowledging mistakes, challenging assumptions, and rejecting inappropriate requests. OpenAI’s collaboration with Microsoft, including a multibillion-dollar investment in January 2023, has further strengthened its position. The company also introduced GPT-4 in March 2023, claiming human-level performance on various benchmarks. With the global AI chip shortage driven by ChatGPT’s demand, OpenAI is reportedly exploring the development of its own AI chips, aligning with its transformation into an enterprise business.
Aware, an AI firm specializing in analyzing employee messages, said companies including Walmart, Delta, T-Mobile, Chevron and Starbucks are using its technology.
Aware said its data repository contains messages that represent about 20 billion individual interactions across more than 3 million employees.
“A lot of this becomes thought crime,” Jutta Williams, co-founder of Humane Intelligence, said of AI employee surveillance technology in general. She added, “This is treating people like inventory in a way I’ve not seen.”
AI is already used everywhere, or soon will be..
According to the Productiv 2023 State of SaaS report, companies reached an all-time high in the use of SaaS apps, with 371 on average. The report said that spending on SaaS alone would reach $9,600 per employee, although 53% of licenses went unused. Unlike licenses for other types of applications, a large part of SaaS pricing comes from usage, meaning that monitoring and optimizing its use can save companies a lot of money in the long run.
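The scale of that waste is easy to estimate from the report's two headline numbers. A minimal sketch, assuming (as a simplification) that spend scales linearly with licenses:

```python
# Rough estimate of per-employee SaaS spend tied up in unused licenses.
# Assumes spend scales linearly with licenses -- a simplification, since
# much SaaS pricing is usage-based, as the report notes.
spend_per_employee = 9_600  # USD per employee, from the Productiv 2023 report
unused_share = 0.53         # 53% of licenses went unused

wasted = spend_per_employee * unused_share
print(f"~${wasted:,.0f} per employee potentially recoverable")
```

Even if only a fraction of that roughly $5,000 per employee is truly recoverable, it explains why software asset management tools are drawing investment.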
“We founded Xensam to provide an intuitive platform for software asset management and resource optimization by leveraging AI and automation,” said Oskar Fösker, co-founder and chief executive of Xensam. [source]
At the same time..
OpenAI CEO Sam Altman wants to overhaul the global semiconductor industry with trillions of dollars in investment, The Wall Street Journal reported.
Altman has said AI chip limitations hinder OpenAI’s growth; because this project would increase chip-building capacity globally, he is in talks with investors, including the United Arab Emirates government, per the Journal.
Altman could need to raise between $5 trillion and $7 trillion for the endeavor, the Journal reported, citing one source. CNBC could not confirm the number.