Reading your mind with AI // digital dystopia
Technically it’s possible; more data is required
The speaker, originally from Taiwan and now based in Australia, discusses the inefficiency of traditional methods for transferring thoughts into computers, such as typing or touchscreens. He highlights the challenge of capturing words from the mind effectively, emphasizing the importance of words in human communication. He introduces an AI-driven solution that decodes brain signals into words, bypassing the need for physical input.
The demonstration involves two team members, Charles and Daniel, using EEG sensors to decode silently spoken sentences. Although the system reaches only about 50% accuracy, it decodes parts of the intended sentences correctly, which is promising progress for brain-computer interface (BCI) technology. The speaker also demonstrates a second application in which Daniel selects objects by looking at them, with the system decoding his intent from EEG signals to varying degrees of success.
The technology uses AI, specifically deep learning, to decode brain signals and large language models to refine the output. Despite challenges like interference and individual differences in neural signatures, the speaker is optimistic about its potential applications, including aiding non-verbal communication and enhancing privacy.
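The talk did not specify an architecture, but the two-stage idea, a neural decoder over EEG followed by language-model refinement, can be sketched in a few lines. Everything below is illustrative: the toy vocabulary, the small convolutional decoder, and the `lm_score` stub standing in for a real LLM are assumptions of mine, not the speaker’s system.

```python
# A minimal sketch of the two-stage pipeline described above: a small
# convolutional network decodes an EEG window into word probabilities, and a
# language-model score (stubbed out here) re-ranks the candidate sentences.
import torch
import torch.nn as nn

VOCAB = ["select", "the", "red", "cup", "move", "left", "<pad>"]  # toy vocabulary

class EEGWordDecoder(nn.Module):
    """Maps a (channels x time) EEG window to logits over a small vocabulary."""
    def __init__(self, n_channels=8, n_words=len(VOCAB)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
            nn.Flatten(),
            nn.Linear(32, n_words),
        )

    def forward(self, eeg):  # eeg: (batch, channels, time)
        return self.net(eeg)

def lm_score(sentence: str) -> float:
    """Placeholder for the LLM refinement stage: a real system would ask a
    language model how probable the candidate sentence is."""
    return -len(sentence)  # stub heuristic, not a real language model

decoder = EEGWordDecoder()
eeg_windows = torch.randn(4, 8, 256)          # four fake one-word EEG windows
probs = decoder(eeg_windows).softmax(dim=-1)  # per-window word distributions
top2 = probs.topk(2, dim=-1).indices          # two word candidates per window

# Build one candidate sentence per rank; a real system would beam-search the
# cross product of candidates and let the LLM pick the most plausible one.
candidates = []
for rank in range(2):
    words = [VOCAB[top2[i, rank].item()] for i in range(eeg_windows.size(0))]
    candidates.append(" ".join(words))
print(max(candidates, key=lm_score))
```

In practice the decoder would be trained per user on recorded EEG, which is exactly where the individual differences in neural signatures the speaker mentions come in.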
However, he acknowledges concerns about privacy and ethics, emphasizing the need for careful consideration. He envisions a future where thoughts can be directly translated into text, revolutionizing how humans interact with computers and each other. The speaker concludes by challenging the audience to rethink what constitutes natural communication and inviting them to imagine a world where thinking alone can produce visible results on a screen.
Riccardo, a psychologist and PhD candidate, discusses the prevalence of lying in daily life and humans’ poor ability to detect lies. Studies show that people, even experts, can only detect lies with about 50% accuracy. Riccardo explores the potential of AI, specifically large language models (LLMs), in detecting deception. In his study, he fine-tuned an LLM called FLAN-T5 on datasets containing truthful and deceptive statements. The model achieved 70–80% accuracy when trained and tested within specific contexts but struggled to generalize across different contexts unless exposed to diverse examples during training.
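Riccardo’s exact training setup was not described beyond the model and task, but fine-tuning FLAN-T5 as a text-to-text classifier typically looks like the sketch below, using the Hugging Face transformers library. The prompt wording, the `flan-t5-small` checkpoint, and the two toy examples are my assumptions, not his data.

```python
# A minimal sketch of fine-tuning FLAN-T5 as a truthful/deceptive classifier
# in the text-to-text style. Illustrative only; not the study's actual setup.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy stand-ins for the truthful/deceptive statements used in the study.
examples = [
    ("I spent last weekend hiking with my sister.", "truthful"),
    ("I have never met the defendant in my life.", "deceptive"),
]

model.train()
for statement, label in examples:
    prompt = f"Is the following statement truthful or deceptive? {statement}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    labels = tokenizer(label, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: generate the label for a new statement.
model.eval()
query = "Is the following statement truthful or deceptive? I paid the invoice on time."
out = model.generate(**tokenizer(query, return_tensors="pt"), max_new_tokens=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The generalization problem Riccardo observed corresponds to training only on statements from a single context; mixing contexts into the training set is what “exposure to diverse examples” means in practice.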
Riccardo envisions a future where AI could enhance lie detection in areas like national security, politics, recruitment, and social media, making these domains safer and more trustworthy. However, he warns of the risks, such as blindly trusting AI outputs, losing human critical thinking, and eroding societal trust. He advocates for AI systems that provide clear explanations for their decisions, empowering humans to make informed judgments rather than relying solely on technology. The key is to use AI as a tool to enhance human capabilities while maintaining ethical standards and fostering trust.
Researchers at The University of Texas at Austin have developed an AI-based tool that can translate a person’s thoughts into continuous text without requiring them to comprehend spoken language. The advance builds on their earlier work, which required extensive training on a person’s brain activity while they listened to audio stories. The new method reduces the training time to just one hour by using silent videos instead, making it more practical for individuals with aphasia, a brain disorder that impairs language production and comprehension and affects about a million people in the U.S.
The tool uses a transformer model similar to ChatGPT and a converter algorithm to map a new person’s brain activity onto previously trained data, enabling faster and more efficient decoding. The researchers found that the brain processes stories similarly whether they are heard or seen, suggesting that the decoder taps into higher-level semantic representations rather than language itself. This approach could eventually help people with aphasia communicate more effectively.
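The converter algorithm itself is not spelled out here, but the underlying idea, aligning a new person’s responses to a reference subject’s response space using shared stimuli, can be illustrated with a simple ridge-regression mapping. The linear form, the dimensions, and the simulated data below are illustrative assumptions, not the researchers’ exact algorithm.

```python
# A hedged sketch of the "converter" idea: learn a linear map W from a new
# participant's brain responses to a reference participant's response space
# while both watch the same stimuli, then reuse the decoder trained on the
# reference participant. Ridge form and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, new_dim, ref_dim = 500, 120, 150

# Simulated, time-aligned responses of both subjects to the same silent video.
new_subject = rng.standard_normal((n_timepoints, new_dim))
mixing = rng.standard_normal((new_dim, ref_dim)) * 0.1
reference = new_subject @ mixing + 0.05 * rng.standard_normal((n_timepoints, ref_dim))

# Ridge-regularized least squares: W maps new-subject space -> reference space.
lam = 1.0
W = np.linalg.solve(
    new_subject.T @ new_subject + lam * np.eye(new_dim),
    new_subject.T @ reference,
)

# At decode time, project the new subject's activity into the reference space
# and feed it to the decoder that was trained on the reference participant.
projected = new_subject @ W
residual = ((reference - projected) ** 2).sum()
total = ((reference - reference.mean(axis=0)) ** 2).sum()
print(f"alignment R^2: {1 - residual / total:.3f}")
```

Once the map is fitted from an hour of shared stimuli, the decoder trained on the reference participant can be applied to the projected activity without retraining from scratch.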
The system requires cooperative participants: it cannot be used on someone against their will, because individuals can resist decoding simply by thinking other thoughts. So far it has been tested only on neurologically healthy individuals, but the team is now collaborating with aphasia researchers to adapt the tool for people with the condition. The work is supported by several foundations and aims to create user-friendly brain-computer interfaces that assist people with language impairments.