Data, compute & algorithms // “3 pillars of AI” — by Scale AI CEO Alexandr Wang
There is no doubt that data, compute, and ideas/algorithms are the three core pillars of AI technology.
Humanity’s Last Exam is a very interesting project for collecting hard questions for AI. What’s wrong with it? We can provide datasets of any difficulty level, but it looks like a Chinese Room approach: passing ever-harder question sets does not by itself demonstrate understanding.
Meet Alexandr Wang. His resume at 19: • Pro coder since 15 • Machine learning engineer at Quora • Solved AI problems PhD holders couldn’t
But Wang wasn’t satisfied. He saw a billion-dollar opportunity others missed. The AI boom was in full swing: • Alexa was in millions of homes • Self-driving cars were hitting the streets • ChatGPT was on its way But there was a massive problem no one was talking about.
Wang spotted it. And he was about to build an empire around the solution… The issue? Data. AI models were starving for high-quality, labeled data, but labeling is a tedious, error-prone, and expensive nightmare. It was the invisible bottleneck choking AI’s potential. Wang’s solution?
Turn data labeling into a global, AI-powered machine. In 2016, at 19, he co-founded Scale AI. Their revolutionary approach: • Global workforce for data labeling • AI enhances human work This human-AI combo set Silicon Valley on fire:
By 2019, Scale AI hit unicorn status. Wang was just 22. Scale AI’s client list is wild: • Uber • Pinterest • OpenAI … the list goes on. But Wang wasn’t done. He set his sights on an even bigger target… The U.S. government. In 2020, the Department of Defense came asking for help with: • Satellite imagery analysis • Drone footage processing • Predictive maintenance for vehicles Suddenly, Wang’s tech was safeguarding national security:
As AI grips the world, Scale AI has become indispensable: • Training Tesla’s cars to outsmart human drivers • Helping Nvidia see the world in 1s and 0s • Keeping Meta’s feed clean of digital toxins 300+ top companies now rely on Wang’s magic. Scale AI’s new valuation?
$14.3 billion. By 2024, Wang turned 27. His net worth: $2 billion. But money wasn’t the goal. Impact was. Wang’s vision was to accelerate humanity’s progress through AI. How? By tackling some of the world’s most pressing challenges:
Scale AI began working on: • Climate change models • Drug discovery • Disaster response optimization The question wasn’t if AI would change the world. It was how fast. And Wang was determined to push the boundaries. His journey teaches us:
1. Spot invisible problems 2. Scale solutions globally 3. Aim for world-changing impact The next big idea isn’t in a lab. It’s with someone ready to drop everything and build it. But here’s the most exciting part:
The AI revolution is just beginning. As you read this, Wang’s tech is: • Making roads safer • Protecting national security • Accelerating breakthroughs It started with a teen who dared to think differently. But Wang’s story isn’t just about one exceptional individual…
Alexandr discusses his company’s role as an infrastructure provider in AI, focusing on data as one of AI’s three key pillars (data, compute, and algorithms). While companies like OpenAI focus on algorithms and NVIDIA on compute, his company powers data for builders of large models such as OpenAI, Meta, and Microsoft. The recent financing round aims to strengthen their ability to serve the AI ecosystem by collaborating with companies across various layers — compute, models, and applications.
Their goal is to fuel AI’s progress towards AGI (artificial general intelligence) by generating frontier data, advancing complex reasoning, and multi-modality. They also emphasize the importance of human expertise in data production while addressing past concerns about fair labor practices for workers, especially in the global South.
Reflecting on their rapid growth, Alexandr highlights advancements in AI since 2019, including progress from GPT-2 to GPT-4, and expresses optimism for future technological developments and their potential societal impact.
Humanity’s Last Exam is a project by Scale AI and the Center for AI Safety (CAIS) aimed at measuring AI’s progress toward expert-level systems. The initiative seeks to create the world’s most difficult public AI benchmark by gathering challenging questions from experts across diverse fields. Contributors whose questions are accepted can earn prizes from a $500,000 pool and be co-authors on the paper detailing the dataset.
Scale’s Safety, Evaluations, and Alignment Lab (SEAL) is focused on robust evaluation methods for frontier AI models, while CAIS, a nonprofit dedicated to AI safety, aims to mitigate risks associated with AI advancements. As AI models increasingly excel at existing benchmarks, new, more difficult tests are needed to assess their true problem-solving capabilities.
Experts are invited to submit questions that would challenge current AI systems. The top 50 questions will receive $5,000 each, and the next 500 questions will earn $500 each. Questions must be original, difficult for non-experts, and not easily answerable through online searches. The submission deadline is November 1, 2024.
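As a quick sanity check, the two prize tiers above exactly account for the $500,000 pool mentioned earlier:

```python
# Prize-pool arithmetic for the two tiers described above.
top_tier = 50 * 5_000    # top 50 questions at $5,000 each -> $250,000
next_tier = 500 * 500    # next 500 questions at $500 each -> $250,000
total = top_tier + next_tier
print(total)  # 500000, matching the stated $500,000 pool
```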
Objective: The goal is to collect the hardest, broadest set of questions to evaluate AI systems’ capabilities. If you submit a question that stumps current AI models, your name will be associated with it, and you’ll be invited as a co-author on the resulting paper. The questions can be from any field, such as math, rocket engineering, or philosophy. Exceptional and niche questions from your research or exams are welcome.
Process:
1. Write a Difficult Question: Submit a question in English that AIs and average humans struggle to answer. The goal is to find questions only exceptional individuals could solve.
2. AI Difficulty Test: AI models will try to answer the question to gauge its difficulty.
3. Write a Solution: If AIs struggle, you’ll write a clear, concise answer.
4. Peer Review: Your question, solution, and rationale will undergo expert review.
5. Publish: Accepted questions will be credited to you in the paper, with top contributors highlighted. Prize money up to $5,000 is available for the best questions.
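The "AI Difficulty Test" step (step 2) can be sketched in a few lines. This is a minimal, hypothetical illustration, not Scale AI's actual pipeline: `ask_model` and `passes_difficulty_screen` are invented names, and a real implementation would call each model's API and use more robust answer matching than exact string comparison.

```python
# Hypothetical sketch of the screening step: a question advances only if
# current AI models fail to answer it. `ask_model` is a stand-in for a
# real model API call; here a "model" is any callable taking a question.

def ask_model(model, question: str) -> str:
    # In a real system this would call the model provider's API.
    return model(question)

def passes_difficulty_screen(question: str, reference_answer: str,
                             models, max_correct: int = 0) -> bool:
    """Return True if at most `max_correct` models answer correctly."""
    correct = sum(
        1 for m in models
        if ask_model(m, question).strip().lower()
           == reference_answer.strip().lower()
    )
    return correct <= max_correct
```

With `max_correct=0`, a single model producing the reference answer is enough to reject the question as too easy, which mirrors the project's goal of keeping only questions that stump current systems.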
Guidelines:
- Questions must be original, not copied or variants of each other.
- They should be challenging for non-experts and not answerable via simple online searches.
- Questions should be objective and include all necessary context within the question.
- Avoid questions related to weapons or virology.