Data, compute & algorithms // "3 pillars of AI", by Scale AI CEO Alexandr Wang
Data, compute, and ideas/algorithms are undoubtedly the three core pillars of AI technology.
1d ago
OpenAI o1 alternatives // reasoning [datasets] is all you need
Chain-of-thought (CoT) and similar techniques are not new and are already implemented in open-source and scientific-community projects.
2d ago
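To illustrate the basic idea behind such reasoning techniques, here is a minimal sketch of chain-of-thought prompting. The worked example, the question, and the generate stub are illustrative stand-ins, not a reference to any specific project or API.

# Minimal chain-of-thought (CoT) prompting sketch.
# `generate` is a placeholder for any chat/completion backend (local or hosted);
# the few-shot example and questions are illustrative only.

COT_FEWSHOT = """Q: A train travels 60 km in 1.5 hours. What is its average speed?
A: Let's think step by step.
Speed = distance / time = 60 km / 1.5 h = 40 km/h.
The answer is 40 km/h.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example and ask the model to reason step by step."""
    return f"{COT_FEWSHOT}\nQ: {question}\nA: Let's think step by step.\n"

def generate(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. an open-source model)."""
    raise NotImplementedError("plug in your model backend here")

if __name__ == "__main__":
    print(build_cot_prompt("If 3 pens cost 12 dollars, how much do 7 pens cost?"))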
Prompt tuning + RAG // no fine-tuning needed
LLMs are good at in-context learning: more specific, precise, and parameterized prompts combined with RAG yield better results.
4d ago
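A minimal sketch of the idea, assuming a toy corpus and a toy lexical retriever in place of a real vector store: retrieve relevant snippets, fill a parameterized prompt template, and pass the result to any LLM instead of fine-tuning it.

# In-context learning + RAG sketch: retrieve context, fill a prompt template.
# The CORPUS and word-overlap scoring are stand-ins for a real retriever.

from typing import List

CORPUS = [
    "RAG augments prompts with retrieved documents.",
    "Prompt tuning adjusts soft tokens without touching model weights.",
    "Fine-tuning updates model weights on task-specific data.",
]

def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

PROMPT_TEMPLATE = (
    "You are a precise assistant.\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer using only the context above."
)

def build_prompt(question: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(question, CORPUS))
    return PROMPT_TEMPLATE.format(context=context, question=question)

if __name__ == "__main__":
    print(build_prompt("How does RAG differ from fine-tuning?"))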
Reduce LLM hallucinations // use of RIG/RAG
DataGemma is a series of fine-tuned Gemma 2 models that help LLMs access and incorporate reliable public statistical data from Data…
6d ago
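As a conceptual sketch of retrieval-interleaved generation (RIG), the snippet below has the model emit statistical claims as explicit lookup markers that are resolved against a trusted data source before the answer is shown. The marker syntax and the STATS table are invented for illustration and are not DataGemma's actual interface.

# RIG sketch: resolve lookup markers in a draft answer against trusted data.

import re

STATS = {("population", "France", 2023): "68.2 million"}  # stand-in data source

def resolve_markers(draft: str) -> str:
    """Replace [LOOKUP metric|place|year] markers with retrieved values."""
    def lookup(match: re.Match) -> str:
        metric, place, year = match.group(1).split("|")
        return STATS.get((metric, place, int(year)), "[value unavailable]")
    return re.sub(r"\[LOOKUP ([^\]]+)\]", lookup, draft)

draft_answer = "France had a population of [LOOKUP population|France|2023]."
print(resolve_markers(draft_answer))  # -> France had a population of 68.2 million.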
Reasoning models // how do they work, and why should we care
Reasoning is a core feature of intelligence, but how can it be imitated at the scale of cutting-edge hardware and large models?
6d ago
LLMs on FPGA // why? Cybersecurity
FPGAs are the most advanced hardware cybersecurity technology, and LLMs are the new processors of the future.
Sep 10
Future of AI // techno-dystopia, utopia, or something in between
Building fully autonomous self-driving cars is still a major challenge, even as AGI is widely discussed at the same time.
Sep 5
Knowledge graphs // AI reasoning improved
Representing knowledge graphs as triplets (object_1, relation, object_2) results in reduced or lost information: KGs become bare facts with no context, no…
Sep 4
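The sketch below illustrates the bare triplet format and what it drops: qualifiers such as time and provenance have nowhere to go unless the schema is extended. The entities, relation, and qualifier fields are made up for illustration.

# Bare KG triplet vs. a qualified fact that keeps the dropped context.

from typing import NamedTuple, Optional

class Triplet(NamedTuple):
    object_1: str
    relation: str
    object_2: str

class QualifiedFact(NamedTuple):
    """The same triplet plus the context a bare triplet loses."""
    triplet: Triplet
    valid_from: Optional[int] = None
    valid_to: Optional[int] = None
    source: Optional[str] = None

bare = Triplet("CompanyX", "headquartered_in", "Berlin")
rich = QualifiedFact(bare, valid_from=2015, valid_to=2021, source="annual report")

print(bare)  # a fact with no temporal or provenance context
print(rich)  # the same fact with the context needed for sound reasoning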
LLM alignment, fine-tuning, RAG // the most popular techniques
There are controversial questions without a clear answer, and many options to consider.
Sep 1
Next-gen AI must excel in math, reasoning, planning, etc. // System 2 cognitive functions
Current AI systems can reproduce (approximate) known sequences, but they are unable to synthesize entirely new knowledge.
Aug 28