Augmented research // Generative AI can be useful

Although generative AI is primarily statistical, has limited reasoning abilities, and relies on pre-trained patterns for generation, it nevertheless expands the capabilities of research.

sbagency
3 min read · Feb 1, 2024
https://www.microsoft.com/en-us/research/blog/microsoft-research-forum-new-series-explores-bold-ideas-in-technology-research-in-the-era-of-ai/

Here is a summary of the key points from the video:

- Peter Lee gave the first Microsoft Research Forum talk, discussing recent advances in AI like GPT-4 and their broader impacts.

- He drew a parallel to the disruption caused by the discovery of cell division in biology in the 1700s, which overturned the theory of cell crystallization.

- Microsoft Research studied GPT-4 in secret for several weeks before publicly releasing findings in the “Sparks of Artificial General Intelligence” paper. This led to collaborations on understanding GPT-4’s capabilities.

- Responsible AI has become a major focus at Microsoft, integrating it across the company. There is great interest in AI’s societal impacts like healthcare, education, law, etc.

- Microsoft open-sourced small AI models like Phi for transparency and for studying alignment issues. AutoGen enables orchestrating multiple models.

- The interplay between AI model specialization vs generalization is still not well understood. Specialization may cause losses in capability.

- AI is influencing much of Microsoft Research now, even areas like program analysis. A new “AI for Science” lab focuses on applications in science.

- Major advances made in areas like new battery materials and disease treatments show the promise of AI for science.

- Lee sees tremendous optimism in researchers working openly and responsibly to realize the benefits of AI while mitigating risks.
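The multi-agent orchestration idea mentioned above (AutoGen) can be illustrated with a minimal, self-contained sketch. This is plain Python with made-up names, not the actual AutoGen API: two "agents" stand in for separate models that take turns refining a message.

```python
# Toy sketch of multi-agent orchestration: two "agents" (stand-ins for
# separate models) take turns transforming a message. All names and
# logic here are illustrative assumptions, not the AutoGen API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Agent:
    name: str
    respond: Callable[[str], str]  # maps incoming message -> reply


def orchestrate(agents: List[Agent], task: str, rounds: int = 1) -> List[str]:
    """Pass the task through each agent in turn, collecting a transcript."""
    transcript = [f"task: {task}"]
    message = task
    for _ in range(rounds):
        for agent in agents:
            message = agent.respond(message)
            transcript.append(f"{agent.name}: {message}")
    return transcript


# Stand-in "models": a drafter that proposes and a critic that reviews.
drafter = Agent("drafter", lambda m: f"draft({m})")
critic = Agent("critic", lambda m: f"review({m})")

log = orchestrate([drafter, critic], "summarize GPT-4 capabilities")
```

In a real orchestration framework, each `respond` would be a call to a different model or tool, and the loop would include stopping conditions and role-aware routing; the pattern of a shared conversation state passed between agents is the core idea.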

“Sparks of Artificial General Intelligence” — sounds like sci-fi, not reality.

Here are some key points from the conversation:

- They are at an exciting inflection point in AI, seeing sparks of general intelligence but still facing limitations around real-world perception and action. Key aspirations are AI assistants that can operate reliably in the physical world and align with human intentions.

- Models like GPT-4 show promising reasoning when scaled up. Research like the Phi project explores achieving similar capabilities with smaller models trained on reasoning-dense, textbook-style data.

- Pre-training only gets you so far. Additional skills and alignment can be taught through post-training with tailored data. Multi-agent orchestration is another frontier for expanding capabilities.

- Evaluating progress is crucial but challenging. New benchmarks are needed that test end-to-end systems, not just individual models.

- Safety and responsibility are critical even at the research stage with rigorous processes before release. Engaging the global research community also helps build things better.

- The pace of research is accelerating, with shorter timelines from research to product. Organizationally, this calls for a focused mission and iterative building with the community.

- Staying grounded in perspectives outside the lab is important. Sharing work openly, learning from real-world use cases, and collaborating across disciplines all contribute to developing AI responsibly.

There is no real artificial general intelligence; nobody even knows what general intelligence is. Even human intelligence is specialized, not general.


Written by sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.
