AI agents // new update, agents can replace 99% of apps

sbagency
3 min read · Oct 23, 2023

https://twitter.com/ecardenas300/status/1716090957416816902
https://github.com/xlang-ai/openagents
https://twitter.com/bindureddy/status/1714445426114629744

Here is a summary of the key points about AI agents:

  • AI agents are software systems that combine multiple AI capabilities like LLMs, ML models, and code execution to perform tasks.
  • Examples include creating purchase order forecasts, analyzing contracts, and automating customer service responses.
  • Building an agent requires LLM prompt engineering, code execution, data transformation, orchestration, pipelines, input/output connectors, and monitoring (see the sketch after this list).
  • An end-to-end agent platform makes it easy to develop, test, iterate, and deploy agents.
  • As LLMs become more advanced, millions of AI agents will go into production to automate business processes like sales, accounting, and legal work.
  • Over time, AI agents will be able to handle more of the repetitive and routine work humans currently do.
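As a rough illustration of that list, here is a minimal, framework-agnostic agent loop in Python. Everything in it is an assumption made for illustration: `call_llm` stands in for whatever chat-completion API you use, and the single `calculator` tool plus the JSON action format are placeholders for real connectors and orchestration, not any specific framework's protocol.

```python
# A minimal agent loop: prompt the LLM, let it pick a tool, execute the tool,
# feed the observation back, repeat. `call_llm`, the tool registry, and the
# JSON action format are hypothetical placeholders.
import json
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    # Toy "code execution" tool; a real agent would expose data connectors,
    # search, SQL, etc. here.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(task: str, call_llm: Callable[[str], str], max_steps: int = 5) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        # Ask the model to reply with JSON: either a tool call or a final answer.
        reply = call_llm(
            history
            + 'Reply with JSON: {"action": "<tool name or final>", "input": "..."}'
        )
        step = json.loads(reply)
        if step["action"] == "final":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])  # tool / code execution
        history += f'Observation ({step["action"]}): {observation}\n'  # orchestration state
    return "Step budget exhausted without a final answer."
```

An end-to-end agent platform adds the pieces this sketch omits: testing, monitoring, pipelines, and deployment.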
https://github.com/geekan/MetaGPT
https://github.com/geekan/MetaGPT/blob/main/docs/resources/software_company_cd.jpeg
https://twitter.com/oanaolt/status/1715935977724842067
https://www.linkedin.com/pulse/autonomous-ai-agents-transforming-industries-enhancing-1f/
https://www.twitch.tv/videos/1952565105
https://twitter.com/MetaGPT_/status/1716143776236810566/photo/2

Instead of replacing workers, I believe in a future where AI Agents empower humans to be 100x more effective and productive. They are and will always be co-pilots. AI will not replace you. But another human who’s good at using AI will.

https://twitter.com/DrJimFan/status/1713955586310816210
https://selfrag.github.io/

The issue: Factual inaccuracies of versatile LLMs
Despite their remarkable capabilities, large language models (LLMs) often produce responses containing factual inaccuracies because they rely solely on the parametric knowledge they encapsulate. They frequently hallucinate, especially on long-tail facts; their knowledge becomes outdated; and their outputs lack attribution.

Is Retrieval-Augmented Generation a silver bullet?
Retrieval-Augmented Generation (RAG), an ad hoc approach that augments LMs with retrieval of relevant knowledge, reduces these issues and is effective on knowledge-intensive tasks such as QA. However, indiscriminately retrieving and incorporating a fixed number of passages, regardless of whether retrieval is necessary or whether the passages are relevant, diminishes LM versatility and can lead to unhelpful responses. Moreover, there is no guarantee that the generations are entailed by the cited evidence.
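For contrast, here is what that indiscriminate pattern looks like in code: a sketch of plain RAG that always retrieves a fixed top-k and stuffs it into the prompt, whether or not retrieval was needed. `search` and `generate` are hypothetical stand-ins for a vector index lookup and an LLM call.

```python
# Plain RAG sketch: always retrieve exactly k passages and prepend them to the
# prompt. `search` and `generate` are hypothetical stand-ins, not a real API.
from typing import Callable, List

def naive_rag_answer(
    question: str,
    search: Callable[[str, int], List[str]],  # top-k passage retrieval
    generate: Callable[[str], str],           # LLM completion
    k: int = 5,
) -> str:
    # Retrieval is unconditional and fixed-size: k passages are used even when
    # the question needs none of them, or when none are relevant.
    passages = search(question, k)
    context = "\n\n".join(passages)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)
```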

Self-RAG training consists of three models: a Retriever, a Critic, and a Generator.
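A highly simplified sketch of how those pieces interact at inference time is below. The actual Self-RAG generator emits reflection tokens ([Retrieve], ISREL, ISSUP, ISUSE) to make these decisions; here `needs_retrieval`, `retrieve`, `generate`, and `critique` are hypothetical callables standing in for the trained models, not the paper's exact interfaces.

```python
# Simplified Self-RAG-style inference: decide whether to retrieve, generate a
# candidate continuation per passage, score each with a critic, keep the best.
# All four callables are hypothetical stand-ins for illustration only.
from typing import Callable, List, Tuple

def self_rag_step(
    prompt: str,
    needs_retrieval: Callable[[str], bool],      # generator's retrieve/no-retrieve decision
    retrieve: Callable[[str], List[str]],        # retriever: candidate passages
    generate: Callable[[str, str], str],         # generator: continuation given (prompt, passage)
    critique: Callable[[str, str, str], float],  # critic: relevance + support + usefulness score
) -> str:
    if not needs_retrieval(prompt):
        # Skip retrieval entirely when the generator judges it unnecessary.
        return generate(prompt, "")
    scored: List[Tuple[float, str]] = []
    for passage in retrieve(prompt):
        continuation = generate(prompt, passage)
        scored.append((critique(prompt, passage, continuation), continuation))
    # Keep the continuation the critic rates as best supported by its evidence.
    return max(scored, key=lambda pair: pair[0])[1]
```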
