Knowledge/data extraction/distillation from/with LLMs // Hack LLMs

For many real-world apps there is no need to run a large model in production, but LLMs can be used as a source of data/knowledge, and beyond.

sbagency · May 30, 2024
https://www.edgeimpulse.com/blog/llm-knowledge-distillation-gpt-4o/

The latest large language models (LLMs), such as GPT-4o, are remarkable for their multimodal capabilities, enabling them to interpret text, images, and audio in a human-like manner. However, deploying these large models in real-time applications is challenging because of their size and the latency of cloud-based processing.

Edge Impulse has introduced a new functionality that leverages LLMs to address these challenges. By using GPT-4o for knowledge distillation, Edge Impulse can train significantly smaller models that can run directly on edge devices. This approach enables real-time, on-device AI without the need for cloud services.
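To make the teacher side of that loop concrete, here is a minimal sketch of using GPT-4o to label raw image frames, assuming the official openai Python client (v1+); the question, label set, and label_frame helper are hypothetical illustrations, not Edge Impulse's actual implementation:

```python
import base64
from openai import OpenAI  # assumes the official openai-python client, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def label_frame(image_path: str, question: str, labels: list[str]) -> str:
    """Ask GPT-4o (the 'teacher') to pick one label from `labels` for a frame."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"{question} Answer with exactly one of: {labels}."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=5,
    )
    answer = resp.choices[0].message.content.strip().lower()
    return answer if answer in labels else labels[-1]  # fall back on parse failure
```

Each call labels one frame and is paid for once, at training time; the distilled student model then runs on-device with no per-inference cloud cost.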

For example, a factory manager could use an LLM to monitor safety compliance on the factory floor. While traditional LLMs are too large and slow for real-time use, Edge Impulse can distill the knowledge from an LLM to create a compact model that performs similar tasks locally, offering low latency and reduced costs.

Edge Impulse’s new feature allows users to automatically analyze and label visual data using the intelligence of an LLM, and then train a vision-based model small enough to run on edge devices. This was demonstrated in a project where a video of a living room was labeled by GPT-4o and used to train a model that accurately detected toys in real time on an Arduino Nicla Vision.
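The labeled frames can then train a student model small enough for a board like the Nicla Vision. Edge Impulse performs this step inside its own platform; the Keras transfer-learning sketch below is only one plausible stand-in, with an assumed data/<label>/ directory layout and illustrative hyperparameters:

```python
import tensorflow as tf  # assumes TensorFlow/Keras as a stand-in for Edge Impulse's trainer

IMG_SIZE = (96, 96)  # small input typical for microcontroller-class vision models

# Frames labeled by GPT-4o, arranged as data/<label>/<frame>.jpg (hypothetical layout)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32)

# A MobileNetV2 backbone with width multiplier 0.35 keeps the model edge-sized.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), alpha=0.35, include_top=False, weights="imagenet")
base.trainable = False  # train only the classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # two classes, e.g. toy / no_toy
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Export a size-optimized (dynamic-range quantized) TFLite model for the device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("student.tflite", "wb").write(converter.convert())
```

The resulting .tflite file is what would ultimately be deployed to the board, typically via a runtime such as TensorFlow Lite Micro.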

This innovative functionality is available to enterprise customers of Edge Impulse, providing a powerful tool for deploying efficient AI solutions on the edge. Users can sign up for an enterprise trial to explore these capabilities further.

https://arxiv.org/pdf/2403.08345

The conventional process of building ontologies and Knowledge Graphs (KGs) heavily relies on human domain experts to define entities and relationship types, establish hierarchies, maintain relevance to the domain, fill the ABox (i.e., populate it with instances), and ensure data quality (including, amongst others, accuracy and completeness). On the other hand, Large Language Models (LLMs) have recently gained popularity for their ability to understand and generate human-like natural language, offering promising ways to automate aspects of this process. This work explores the (semi-)automatic construction of KGs facilitated by open-source LLMs. Our pipeline involves formulating competency questions (CQs), developing an ontology (TBox) based on these CQs, constructing KGs using the developed ontology, and evaluating the resultant KG with minimal to no involvement of human experts. We showcase the feasibility of our semi-automated pipeline by creating a KG on deep learning methodologies by exploiting scholarly publications. To evaluate the answers generated via Retrieval-Augmented Generation (RAG) as well as the KG concepts automatically extracted using LLMs, we design a judge LLM, which rates the generated content based on ground truth. Our findings suggest that employing LLMs could potentially reduce the human effort involved in the construction of KGs, although a human-in-the-loop approach is recommended to evaluate automatically generated KGs.
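To give a flavor of the extraction step in such a pipeline, the sketch below asks a chat-style LLM for (subject, relation, object) triples restricted to a small allowed relation set and loads them into an rdflib graph. The namespace, relation list, and helper names are hypothetical, and the paper's full pipeline (CQ formulation, TBox development, the judge LLM) is considerably richer; GPT-4o stands in here for the open-source models the authors actually use:

```python
import json
from openai import OpenAI
from rdflib import Graph, Literal, Namespace, URIRef

client = OpenAI()
EX = Namespace("http://example.org/dl-kg/")  # hypothetical namespace for the demo

# A toy TBox slice: relation types the extractor may use (assumed, not from the paper).
RELATIONS = ["usesMethod", "evaluatedOn", "achievesMetric"]

def extract_triples(abstract: str) -> list[list[str]]:
    """Ask the LLM for (subject, relation, object) triples constrained to RELATIONS."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                'Extract triples from this abstract as JSON '
                '{"triples": [[subject, relation, object], ...]}. '
                f"Use only these relations: {RELATIONS}.\n\n{abstract}"
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)["triples"]

def build_kg(abstracts: list[str]) -> Graph:
    g = Graph()
    g.bind("ex", EX)
    for text in abstracts:
        for s, p, o in extract_triples(text):
            if p in RELATIONS:  # discard hallucinated relation types
                g.add((URIRef(EX + s.replace(" ", "_")), URIRef(EX + p), Literal(o)))
    return g
```

Filtering on RELATIONS is a crude stand-in for validating extractions against the TBox; the paper instead scores the generated content with a judge LLM against ground truth.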
