AI // “Mission: Impossible” movie
Here is a summary of the key points from the article:
- President Biden recently signed an executive order on artificial intelligence after viewing the new “Mission: Impossible” movie, which features a villainous AI called “the Entity.”
- In the movie, the Entity goes rogue, destroys a Russian submarine, and threatens global intelligence agencies. Tom Cruise’s character tries to stop it.
- The movie seemed to unnerve Biden and highlight potential dangers of AI, according to his chief of staff. Biden was already concerned about AI issues like voice cloning.
- The executive order aims to ensure AI is “safe, secure, and trustworthy.” It directs the executive branch to enact its guidance within 365 days.
- The order comes after months of meetings on AI where Biden was “impressed and alarmed.” He saw fake images and poetry generated by AI, reinforcing worries.
In summary, concerns raised by the new “Mission: Impossible” movie, especially around rogue AI, appear to have influenced Biden’s decision to sign an executive order to regulate and safeguard AI technology.
Here is a summary of the key points in the article:
- Stories and cultural narratives have long influenced how we think about artificial intelligence (AI). Fictional tales of intelligent robots and automation reflect both hopes and fears about technological progress.
- Current debates on AI are dominated by opposing views — either AI presents an existential threat we must guard against, or it offers tremendous benefits we should pursue. This framing obscures other realities.
- Mainstream AI narratives ignore the material and resource costs, labor abuses, and current harms of AI systems. They focus instead on speculative future risks, positioning tech company leaders as the relevant authorities and excluding public interests.
- These narratives draw on familiar pro-business arguments (e.g., regulation stifles innovation; economic competition justifies reducing safeguards). This frames AI policy as a technical concern rather than a political balancing act.
- Fundamentally, AI needs vast amounts of data for training. Today’s data economy emerged from the same tech companies now pursuing AI. Data is treated as an individual asset rather than a collective good.
- To meaningfully engage the public on the root causes of AI’s harms, we need to build a concept of data as a common good. This will require including excluded voices and deeply analyzing existing narratives. It is a long-term project, but it could enable a fundamentally different way of managing AI and data.