Neuromorphic Quantum Computing and Beyond // Hack the Limits

Quantum, neuro, crypto… scams or breakthroughs?

7 min read · Mar 7, 2024

The Quromorphic project is developing brain-inspired hardware with quantum functionality: superconducting quantum neural networks that serve as dedicated neuromorphic quantum machine learning hardware, with the goal of surpassing classical von Neumann architectures in the next generation of computing.

The project combines advances in machine learning and quantum computing into a novel technology. Unlike traditional machine learning on von Neumann hardware, neuromorphic quantum hardware can be trained on multiple batches of real-world data in parallel, which could yield a quantum advantage, and the approach appears to remain viable with only moderate fault tolerance. In the long term, neuromorphic hardware architectures are expected to be crucial for both classical and quantum computing, especially for distributed and embedded tasks where current architectures face scalability challenges.

Quromorphic aims to provide proof-of-concept demonstrations and a roadmap for the technology's exploitation. Planned work includes implementing feed-forward networks and non-equilibrium quantum annealers, with the ultimate goal of surpassing existing machine learning capabilities and achieving a quantum advantage; simulations will explore applications of the new technology to real-world problems in preparation for its future exploitation.

Researchers from Tohoku University have developed a theoretical framework for high-performance spin wave reservoir computing (RC) that leverages spintronics technology. This breakthrough brings us closer to realizing energy-efficient, nanoscale computing with unprecedented computational power.

Their findings, published in the journal npj Spintronics on March 1, 2024, unveil a path towards mimicking the brain’s processing capability, low power consumption, and adaptability through neuromorphic computing. This approach lets scientists work at the nanoscale with GHz speeds and low energy consumption.

While artificial neural networks have demonstrated remarkable performance in various tasks, current software-based technologies are limited by the constraints of conventional electric computers in terms of computational speed, size, and energy consumption.

Reservoir computing (RC) operates via a fixed, randomly generated network called the “reservoir,” which memorizes past input information and performs nonlinear transformations. This unique characteristic enables the integration of physical systems, such as magnetization dynamics, to perform tasks like time-series forecasting and speech recognition on sequential data.
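The reservoir idea is easy to see in code. Below is a minimal echo state network sketch in Python/NumPy, a purely classical toy (not the spintronics device): a fixed, randomly generated recurrent network supplies the memory and nonlinear transformation, and only a linear readout is trained, here on one-step-ahead forecasting of a sine wave. All sizes and constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed random recurrent network; only the readout is trained.
N_IN, N_RES = 1, 200
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect its nonlinear states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))  # memory + nonlinearity
        states.append(x.copy())
    return np.array(states)

# Toy sequential task: one-step-ahead forecasting of a sine wave.
t = np.linspace(0, 40, 1000)
u = np.sin(t)
X = run_reservoir(u[:-1])  # states seen up to time t
y = u[1:]                  # target: the next value

# Train only the linear readout (ridge regression in closed form).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

The key design choice mirrors the physical-RC argument: the reservoir itself is never trained, so any physical system with fading memory and nonlinearity (here a random tanh network; in the paper, spin-wave dynamics) can play its role.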

Although spintronics has been proposed as a means to realize high-performance devices, existing devices have failed to achieve high performance at nanoscales with GHz speed.

The study by Natsuhiko Yoshinaga and colleagues proposed a physical RC that harnesses propagating spin waves. Their theoretical framework utilizes response functions that link input signals to propagating spin dynamics, elucidating the mechanism behind the high performance of spin wave RC and highlighting the scaling relationship between wave speed and system size to optimize the effectiveness of virtual nodes.

By employing the unique properties of spintronics technology and drawing from various subfields, including condensed matter physics and mathematical modeling, the researchers have potentially paved the way for a new era of intelligent computing, bringing us closer to realizing a physical device that can be applied to weather forecasts, speech recognition, and other applications.

Physical implementation of neuromorphic computing using spintronics technology has attracted recent attention for future energy-efficient AI at the nanoscale. Reservoir computing (RC) is a promising route to neuromorphic computing devices: by memorizing past input information and transforming it nonlinearly, RC can handle sequential data and perform time-series forecasting and speech recognition. However, the performance of spintronics RC has so far been poor because its mechanism was not well understood. The authors demonstrate that nanoscale physical RC using propagating spin waves can achieve computational power comparable with other state-of-the-art systems. They develop a theory based on response functions to explain this high performance: wave-based RC generates a Volterra series of the input through delayed and nonlinear responses, with the delay originating from wave propagation. Scaling the system size with the propagation speed of the spin waves plays a crucial role in achieving high performance.

Researchers revisit low-energy electron diffraction to reconfirm prior surface analysis data on antiferromagnetic NiO

The study revisits and re-establishes the understanding of the surface properties of the antiferromagnetic (AF) crystal nickel oxide (NiO) using low-energy electron diffraction (LEED) analysis. The key points are:

1. Researchers from Sophia University carried out LEED analysis on NiO, which is experiencing a renaissance as a promising material for ultrafast spintronics due to its unique AF and spin properties.

2. They obtained the I-V spectra of the ‘half-order beam’ and observed a surface wave resonance (SWR) effect, providing insights into the energy-temperature dependence of LEED and coherent spin exchange scattering in NiO.

3. The study aimed to improve upon the old experimental techniques for deciphering coherent spin exchange scattering and to provide a reliable theoretical analysis using modern methods.

4. The researchers used LEED dynamical theory to interpret the experimental results and clearly reveal the SWR observed in the I-V curve.

5. The temperature dependence measured over a wide range enabled a more quantitative comparison with conventional molecular field theory.

6. The study reaffirms previous experimental data on surface-spin structure and magnetic properties of NiO, while providing new insights through the I-V spectra, SWR conditions, and wide temperature dependence analysis.

7. The research highlights the renewed interest in antiferromagnetic materials like NiO for potential applications in ultrafast spintronics and quantum computing.
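The comparison with molecular field theory in point 5 can be illustrated with the standard Weiss mean-field self-consistency relation. The sketch below uses the spin-1/2 form m = tanh(m·T_N/T) for simplicity; this is our simplification, since NiO's Ni ions would call for the Brillouin-function form in a real analysis. T_N ≈ 525 K is NiO's Néel temperature.

```python
import numpy as np

# Molecular (mean) field theory sketch: below the Neel temperature T_N, the
# sublattice magnetization m of an antiferromagnet solves the self-consistency
# relation m = tanh(m * T_N / T).  Spin-1/2 Weiss model -- a simplification of
# the Brillouin-function form appropriate for NiO's higher-spin Ni ions.

def sublattice_magnetization(T, T_N=525.0, tol=1e-10):
    """Solve m = tanh(m * T_N / T) by fixed-point iteration (T_N of NiO ~525 K)."""
    if T >= T_N:
        return 0.0        # paramagnetic phase: no spontaneous sublattice order
    m = 1.0
    for _ in range(10000):
        m_new = np.tanh(m * T_N / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Magnetization falls monotonically and vanishes at T_N.
for T in (100, 300, 500, 520):
    print(T, round(sublattice_magnetization(T), 4))
```

Measuring half-order LEED intensity over a wide temperature range and fitting it against a curve like this is what enables the "quantitative comparison with conventional molecular field theory" noted above.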

Next-gen ultra-low power LLM accelerator

The team, led by Professor Yoo Hoi-jun at the KAIST PIM Semiconductor Research Center, developed a “Complementary-Transformer” AI chip that processes GPT-2 at an ultra-low power consumption of 400 milliwatts and in as little as 0.4 seconds, according to the Ministry of Science and ICT.

The 4.5-mm-square chip, fabricated on Korean tech giant Samsung Electronics Co.’s 28-nanometer process, consumes 625 times less power than global AI chip giant Nvidia’s A100 GPU, which requires 250 watts of power to process LLMs (250 W / 0.4 W = 625), the ministry explained.

One of the open challenges in quantum computing is to find meaningful and practical methods to leverage quantum computation to accelerate classical machine learning workflows. A ubiquitous problem in machine learning workflows is sampling from probability distributions that we only have access to via their log probability. To this end, we extend the well-known Hamiltonian Monte Carlo (HMC) method for Markov Chain Monte Carlo (MCMC) sampling to leverage quantum computation in a hybrid manner as a proposal function. Our new algorithm, Quantum Dynamical Hamiltonian Monte Carlo (QD-HMC), replaces the classical symplectic integration proposal step with simulations of quantum-coherent continuous-space dynamics on digital or analogue quantum computers. We show that QD-HMC maintains key characteristics of HMC, such as maintaining the detailed balance condition with momentum inversion, while also having the potential for polynomial speedups over its classical counterpart in certain scenarios. As sampling is a core subroutine in many forms of probabilistic inference, and MCMC in continuously parameterized spaces covers a large class of potential applications, this work widens the areas of applicability of quantum devices.
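For orientation, here is plain classical HMC on a 1-D standard Gaussian. Per the abstract, QD-HMC swaps the leapfrog (symplectic integration) proposal below for simulated quantum-coherent continuous-space dynamics, while keeping the momentum inversion and Metropolis accept/reject step that preserve detailed balance. Everything in this sketch is the classical baseline, not the quantum algorithm; step size and trajectory length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: standard normal, log p(q) = -q^2/2, so -d/dq log p(q) = q.
def grad_neg_log_p(q):
    return q

def leapfrog(q, p, step=0.1, n_steps=20):
    """Symplectic integration of Hamiltonian dynamics (the proposal QD-HMC replaces)."""
    p = p - 0.5 * step * grad_neg_log_p(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p - step * grad_neg_log_p(q)
    q = q + step * p
    p = p - 0.5 * step * grad_neg_log_p(q)
    return q, -p  # momentum inversion makes the proposal reversible

def hmc(n_samples=5000):
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.normal()                        # resample auxiliary momentum
        H_old = 0.5 * q**2 + 0.5 * p**2         # potential + kinetic energy
        q_new, p_new = leapfrog(q, p)
        H_new = 0.5 * q_new**2 + 0.5 * p_new**2
        if rng.uniform() < np.exp(H_old - H_new):  # Metropolis correction
            q = q_new
        samples.append(q)
    return np.array(samples)

s = hmc()
print("mean ~ 0:", s.mean(), " var ~ 1:", s.var())
```

Note the division of labor: only `leapfrog` knows about the dynamics, so the proposal can be swapped (e.g. for quantum dynamics) without touching the accept/reject machinery that guarantees the correct stationary distribution.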

Companies operating in the neuromorphic computing market: Intel, IBM, BrainChip, Qualcomm, NVIDIA, Hewlett-Packard, Samsung, Accenture, Cadence Design Systems, Knowm + an army of startups.

The quantum kernel method has attracted considerable attention in the field of quantum machine learning. However, exploring the applicability of quantum kernels in more realistic settings has been hindered by the number of physical qubits current noisy quantum computers have, thereby limiting the number of features encoded for quantum kernels. Hence, there is a need for an efficient, application-specific simulator for quantum computing by using classical technology. Here we focus on quantum kernels empirically designed for image classification and demonstrate a field-programmable gate array (FPGA) implementation. We show that the quantum kernel estimation by our heterogeneous CPU–FPGA computing is 470 times faster than that by a conventional CPU implementation. The co-design of our application-specific quantum kernel and its efficient FPGA implementation enabled us to perform one of the largest numerical simulations of a gate-based quantum kernel in terms of features, up to 780-dimensional features. We apply our quantum kernel to classification tasks using the Fashion-MNIST dataset and show that our quantum kernel is comparable to Gaussian kernels with optimized hyperparameters.
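A quantum kernel of this general type can be simulated classically for small feature counts. The sketch below uses a simple per-qubit angle encoding, for which the state fidelity |⟨ψ(x)|ψ(x′)⟩|² factorizes into a product of cosines; this toy product encoding is our assumption and is far simpler than the paper's empirically designed kernel or its FPGA implementation.

```python
import numpy as np

# Fidelity-style quantum kernel, simulated classically.  Each feature x_i is
# angle-encoded on its own qubit as cos(x_i/2)|0> + sin(x_i/2)|1>; for such
# product states the kernel |<psi(x)|psi(x')>|^2 factorizes per qubit into
# prod_i cos^2((x_i - x'_i)/2).  (Toy encoding -- an assumption, not the
# paper's kernel.)

def quantum_kernel(x, xp):
    return np.prod(np.cos((x - xp) / 2.0) ** 2)

def gram_matrix(X):
    """Kernel (Gram) matrix for a batch of feature vectors."""
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = quantum_kernel(X[i], X[j])
    return K

rng = np.random.default_rng(3)
X = rng.uniform(0, np.pi, (8, 4))   # 8 samples, 4 "qubit" features
K = gram_matrix(X)
print(K.shape, K[0, 0])  # a state's kernel with itself is 1
```

A matrix like `K` is exactly what gets fed to a classical kernel machine (e.g. an SVM), which is why speeding up kernel estimation, as with the CPU–FPGA co-design above, accelerates the whole pipeline.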




Tech/biz consulting, analytics, research for founders, startups, corps and govs.