Projects

Brain2Speech: Decoding Brain Signals into Speech
Active
Lead: Yizi Zhang

This project focuses on non-invasive brain-to-speech decoding from MEG signals, mapping neural activity directly to auditory speech units using the LibriBrain dataset. Our goal is to advance MEG-based speech decoding and to identify the temporal and spatial patterns of brain activity that support the recovery of these speech units.
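
As a rough illustration of this decoding setup, the sketch below maps epoched MEG windows to speech-unit classes with a small convolutional network. The channel count, window length, number of classes, and the synthetic data are placeholders, not the actual LibriBrain configuration.

```python
# Minimal sketch: MEG window -> speech-unit classifier (assumed shapes:
# 306 channels, 200-sample windows, 40 phoneme-like classes).
import torch
import torch.nn as nn

N_CHANNELS, N_TIMES, N_CLASSES = 306, 200, 40

class MEGSpeechDecoder(nn.Module):
    """1-D convolutions over time, pooled, then a linear read-out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.readout = nn.Linear(128, N_CLASSES)

    def forward(self, x):              # x: (batch, channels, time)
        return self.readout(self.features(x).squeeze(-1))

# Synthetic stand-in for epoched MEG data and speech-unit labels.
meg = torch.randn(32, N_CHANNELS, N_TIMES)
labels = torch.randint(0, N_CLASSES, (32,))

model = MEGSpeechDecoder()
loss = nn.functional.cross_entropy(model(meg), labels)
loss.backward()                        # one optimizer step would follow here
print(loss.item())
```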

BCI, MEG, Speech Processing
Audio-Visual Brain Encoding
Active
Lead: Linyang He

This project investigates how visual speech, such as lip movements, contributes to brain activity during natural language comprehension. By integrating frame-level visual features from the mouth region with linguistic representations from deep language models, we build multimodal encoding models to test whether visual information provides unique predictive power beyond auditory and linguistic cues.
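
The comparison at the heart of this design can be sketched with ridge-regression encoding models: fit one model on language-model features alone and one on language plus mouth-region visual features, and treat the gain in held-out variance explained as the unique contribution of visual speech. The feature dimensions and simulated electrode responses below are placeholders for the real video, language-model, and ECoG data.

```python
# Minimal sketch of the encoding-model comparison on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_lang, n_vis, n_electrodes = 1000, 50, 20, 8

X_lang = rng.standard_normal((n_samples, n_lang))   # language-model embeddings
X_vis = rng.standard_normal((n_samples, n_vis))     # mouth-region visual features
# Simulated electrode responses that depend on both feature sets plus noise.
Y = (X_lang @ rng.standard_normal((n_lang, n_electrodes))
     + X_vis @ rng.standard_normal((n_vis, n_electrodes))
     + rng.standard_normal((n_samples, n_electrodes)))

def encoding_r2(X, Y):
    """Held-out variance explained by a ridge encoding model."""
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.2, random_state=0)
    pred = Ridge(alpha=10.0).fit(Xtr, Ytr).predict(Xte)
    return r2_score(Yte, pred, multioutput="uniform_average")

r2_lang = encoding_r2(X_lang, Y)
r2_full = encoding_r2(np.hstack([X_lang, X_vis]), Y)
# The gain of the joint model over the language-only model is the unique
# predictive power attributed to visual speech in this toy example.
print(f"language-only R2 = {r2_lang:.3f}, +visual R2 = {r2_full:.3f}")
```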

Brain Encoding, ECoG, Speech Processing, NeuroAI
BrainSight: Image-to-Brain Visual Encoding
Active
Lead: Pinyuan Feng

This project aims to understand how the human brain encodes visual information by building models that predict brain activity (e.g., fMRI, EEG signals) from naturalistic images. By linking state-of-the-art computer vision models with neural data, we seek to uncover which image features best explain activity in different brain regions and how artificial systems align or diverge from human visual processing.
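
A minimal version of such an encoding model extracts image features from a vision backbone and fits a regularized linear map to voxel responses. The sketch below uses an untrained ResNet-18 and simulated voxel responses purely to show the structure; a real analysis would use pretrained features and measured fMRI or EEG data.

```python
# Minimal sketch of an image-to-brain encoding model: CNN features per image
# are mapped to simulated voxel responses with ridge regression.
import numpy as np
import torch
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor
from sklearn.linear_model import Ridge

backbone = resnet18(weights=None)        # pretrained weights assumed in practice
extractor = create_feature_extractor(backbone, return_nodes={"avgpool": "feat"})
extractor.eval()

images = torch.rand(64, 3, 224, 224)     # stand-in for naturalistic stimuli
with torch.no_grad():
    feats = extractor(images)["feat"].flatten(1).numpy()   # (64, 512) features

rng = np.random.default_rng(0)
voxels = (feats @ rng.standard_normal((feats.shape[1], 100))
          + rng.standard_normal((64, 100)))                 # simulated 100-voxel responses

encoder = Ridge(alpha=100.0).fit(feats[:48], voxels[:48])
print("held-out encoding score:", encoder.score(feats[48:], voxels[48:]))
```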

Brain Encoding, Computational Vision, GenAI
Generative Decoders of Visual Cortical Representations
Completed
Lead: Michael Zhou

This project investigates how neural signals can be transformed back into visual images using deep learning. Leveraging the THINGS Ventral Stream Spiking Dataset (TVSD) — which records single-neuron activity from macaque visual areas V1, V4, and IT — the team developed a generative decoding pipeline that maps brain activity to perceived images. By integrating AlexNet, VDVAE, and Versatile Diffusion, the project revealed strong parallels between mid-level visual features in artificial networks and biological vision, advancing our understanding of how the brain encodes and reconstructs the visual world.
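
The feature-decoding stage of such a pipeline can be sketched as a regularized linear map from trial-wise firing rates to image-feature vectors. The firing rates, feature targets, and dimensions below are synthetic stand-ins for the TVSD recordings and AlexNet/VDVAE features, and the final image-generation step with a diffusion model is only indicated in a comment.

```python
# Minimal sketch of the feature-decoding stage: ridge regression maps
# per-trial firing rates onto image-feature vectors.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_neurons, n_feat = 500, 300, 256

rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)   # spike counts
W_true = rng.standard_normal((n_neurons, n_feat)) * 0.1
features = rates @ W_true + rng.standard_normal((n_trials, n_feat))  # target image features

decoder = Ridge(alpha=50.0).fit(rates[:400], features[:400])
pred_features = decoder.predict(rates[400:])                         # (100, 256)

# In the full pipeline, the predicted features would condition a pretrained
# generative model (e.g. Versatile Diffusion) to reconstruct the perceived images.
print("decoded feature matrix:", pred_features.shape)
```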

BCI, Diffusion Models, Single Neuron
Multiphase-EEG: SSVEP Speller
Completed
Lead: Matheu Campbell

This project develops a speller based on steady-state visually evoked potentials (SSVEP). The goal is to use EEG data collected from an OpenBCI headset to control an online keyboard interface. The system includes data acquisition, visual stimulus presentation, and SSVEP decoding pipelines, enabling users to type characters using only their brain activity.
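
A common decoding core for this kind of speller is canonical correlation analysis (CCA) between an EEG segment and sinusoidal references at each candidate flicker frequency. The frequencies, channel count, sampling rate, and synthetic trial below are illustrative, not the project's actual stimulus layout.

```python
# Minimal sketch of CCA-based SSVEP frequency detection on synthetic EEG.
import numpy as np
from sklearn.cross_decomposition import CCA

FS, DURATION, N_CH = 250, 2.0, 8                 # Hz, seconds, EEG channels
t = np.arange(0, DURATION, 1.0 / FS)
target_freqs = [8.0, 10.0, 12.0, 15.0]           # one flicker frequency per key group

def references(freq, n_harmonics=2):
    """Sine/cosine reference set at the flicker frequency and its harmonics."""
    return np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(n_harmonics) for f in (np.sin, np.cos)]
    )

def classify(eeg):
    """Return the flicker frequency whose references correlate best with the EEG."""
    scores = []
    for freq in target_freqs:
        refs = references(freq)
        u, v = CCA(n_components=1).fit(eeg, refs).transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return target_freqs[int(np.argmax(scores))]

# Synthetic trial: a 10 Hz SSVEP plus noise across channels.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t)[:, None] + 0.5 * rng.standard_normal((t.size, N_CH))
print("detected flicker frequency:", classify(eeg), "Hz")
```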

BCI, EEG, SSVEP, Speller
Drone: EEG-Controlled Flight
Completed
Lead: Matheu Campbell

This project develops a program that interprets brain signals from an EEG headset and converts them into control instructions for a drone in real time. The system integrates data collection, signal processing, and control interfaces to enable closed-loop brain-to-drone communication.
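
One way to sketch such a loop is to compute band power on sliding EEG windows and map it to discrete flight commands. The band, thresholds, command names, and simulated stream below are assumptions for illustration; a real system would read from the headset driver and send commands through a drone SDK.

```python
# Minimal closed-loop sketch: EEG band power -> discrete drone commands.
import numpy as np
from scipy.signal import welch

FS = 250                                          # EEG sampling rate (Hz)

def alpha_power(window):
    """Mean 8-12 Hz power across channels for one window of samples."""
    freqs, psd = welch(window, fs=FS, nperseg=window.shape[0], axis=0)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def to_command(power, low=0.5, high=2.0):
    """Map band power to a control instruction (thresholds are assumptions)."""
    if power > high:
        return "ascend"
    if power < low:
        return "descend"
    return "hover"

# Simulated 1-second windows of 8-channel EEG standing in for a live stream.
rng = np.random.default_rng(0)
for step in range(3):
    window = rng.standard_normal((FS, 8))         # (samples, channels)
    cmd = to_command(alpha_power(window))
    print(f"step {step}: {cmd}")                  # a real loop would send cmd to the drone
```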

BCI, EEG, Drone Control