🧠 Aura-Agent: Neural Guardian - Seizure Prediction via Teacher-Student Knowledge Distillation

#13
by Babajaan - opened

🧠 Aura-Agent: Neural Guardian

A proactive, mobile-first seizure prediction system with a 30-minute pre-ictal warning, built end-to-end with PyTorch and smolagents.

🔗 Live Demo: Babajaan/Aura-Agent-Neural-Guardian


What It Does

Aura-Agent predicts epileptic seizures 30 minutes before onset from raw 19-channel EEG signals and deploys to Android via an ultra-lightweight student model.

Architecture

| Component | Model | Params |
| --- | --- | --- |
| Teacher | Transformer EEG Classifier (4-layer, 4-head, GELU, attention pooling) | 661K |
| Student | RGF-Net: Ring-Buffer GRU + FiLM biometric conditioning | 37.5K (< 100K ✅) |
| Agent | smolagents CodeAgent with 4 custom tools | LLM-powered |
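The Params column is just a count of trainable tensors. A small helper reproduces it (the module below is a stand-in to demonstrate the count, not the actual Teacher or RGF-Net definition):

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total trainable parameters, as reported in the table above."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Stand-in module just to exercise the helper (not RGF-Net itself):
# Linear(19->32) = 19*32 + 32 = 640; GRU(32,32) = 2*(96*32) + 2*96 = 6336.
toy = nn.Sequential(nn.Linear(19, 32), nn.GRU(32, 32, batch_first=True))
print(f"{count_params(toy):,} params")  # 6,976 params
```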

Key Technical Highlights

1. RGF-Net Student Model (< 100K params, ~36.7 KB quantized)

  • Ring-Buffer GRU enables O(1) constant-memory streaming inference on-device
  • FiLM conditioning (Perez et al., 2018) modulates GRU features using patient biometrics: (1+γ)⊙h + β, where γ, β are generated from [age, sex, HR, HRV, seizure_history, medication_adherence]
  • EEGNet-inspired depthwise-separable convolutions for temporal + spatial feature extraction
  • Supports ONNX export and INT8 dynamic quantization for Android Edge-AI
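The FiLM modulation and the streaming hidden state can be sketched together in a few lines of PyTorch. The dimensions, names, and hidden size below are illustrative assumptions, not the actual RGF-Net code:

```python
import torch
import torch.nn as nn

class FiLMGRU(nn.Module):
    """Minimal FiLM-conditioned GRU sketch (illustrative, not RGF-Net itself)."""
    def __init__(self, n_channels=19, hidden=32, n_biometrics=6):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        # FiLM generator: biometrics -> (gamma, beta)
        self.film = nn.Linear(n_biometrics, 2 * hidden)

    def forward(self, eeg, biometrics, h0=None):
        # eeg: (B, T, 19); biometrics: (B, 6), e.g.
        # [age, sex, HR, HRV, seizure_history, medication_adherence]
        out, h = self.gru(eeg, h0)
        gamma, beta = self.film(biometrics).chunk(2, dim=-1)
        # (1 + gamma) ⊙ h + beta, broadcast over the time axis
        return (1 + gamma.unsqueeze(1)) * out + beta.unsqueeze(1), h

model = FiLMGRU()
chunk1, chunk2 = torch.randn(2, 64, 19), torch.randn(2, 64, 19)
bio = torch.randn(2, 6)
feats, h = model(chunk1, bio)      # first streaming chunk
feats, h = model(chunk2, bio, h)   # next chunk reuses the hidden state: O(1) memory
print(feats.shape)  # torch.Size([2, 64, 32])
```

Carrying `h` across calls is what the ring-buffer streaming relies on. For the INT8 step, one standard PyTorch route is `torch.ao.quantization.quantize_dynamic(model, {nn.GRU, nn.Linear}, dtype=torch.qint8)`.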

2. Three-Component Knowledge Distillation Loss

  • Soft-label KD (Hinton et al., 2015): temperature-scaled KL divergence (T = 4.0)
  • Hard-label CE: class-weighted cross-entropy for the pre-ictal/inter-ictal class imbalance
  • Feature distillation (TimeKD, 2025): SmoothL1 alignment of projected student embeddings to the teacher embedding space; critical for cross-architecture (Transformer → GRU) distillation
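The three terms above combine as in the sketch below. The term weights (`w_*`) and the student-to-teacher projection head are assumptions for illustration, not the repo's exact values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets,
            student_feat, teacher_feat, proj,
            T=4.0, class_weights=None, w_soft=0.5, w_hard=0.3, w_feat=0.2):
    """Illustrative three-term distillation loss (weights are assumed)."""
    # 1) Soft-label KD: temperature-scaled KL divergence (Hinton et al., 2015);
    #    the T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # 2) Hard-label CE, class-weighted for pre-ictal/inter-ictal imbalance.
    hard = F.cross_entropy(student_logits, targets, weight=class_weights)
    # 3) Feature distillation: SmoothL1 between projected student embeddings
    #    and teacher embeddings (bridges the Transformer->GRU dimension gap).
    feat = F.smooth_l1_loss(proj(student_feat), teacher_feat)
    return w_soft * soft + w_hard * hard + w_feat * feat

# Smoke test with random tensors (2 classes: inter-ictal vs pre-ictal).
proj = nn.Linear(32, 64)  # assumed student(32) -> teacher(64) embedding dims
loss = kd_loss(torch.randn(8, 2), torch.randn(8, 2), torch.randint(0, 2, (8,)),
               torch.randn(8, 32), torch.randn(8, 64), proj)
```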

3. smolagents CodeAgent for Autonomous Emergency Response

  • RGFNetSeizureRiskTool (a Tool subclass with lazy setup()) runs model inference
  • send_sos_alert, get_patient_gps, dispatch_first_aid (@tool decorators)
  • The agent follows a strict medical protocol: Assess Risk → if ≥ 0.75 → GPS → SOS → First-Aid
  • CodeAgent was chosen over ToolCallingAgent specifically for its if/else conditional branching
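The branching that motivates CodeAgent (which writes and executes Python rather than emitting flat tool calls) can be sketched in plain Python with stub tools; the real send_sos_alert / get_patient_gps / dispatch_first_aid are only mimicked here:

```python
def emergency_protocol(risk: float, get_gps, send_sos, dispatch_aid,
                       threshold: float = 0.75) -> str:
    """Sketch of the medical protocol's branching; in the live system the
    CodeAgent generates equivalent code and runs it against the real tools."""
    if risk >= threshold:              # Assess Risk
        location = get_gps()           # -> GPS
        send_sos(location)             # -> SOS
        return dispatch_aid(location)  # -> First-Aid
    return "monitoring"

# Stub tools standing in for the @tool-decorated functions.
log = []
result = emergency_protocol(
    risk=0.82,
    get_gps=lambda: "40.71N,74.01W",
    send_sos=lambda loc: log.append(("sos", loc)),
    dispatch_aid=lambda loc: f"first-aid dispatched to {loc}",
)
print(result)  # first-aid dispatched to 40.71N,74.01W
```

A ToolCallingAgent could invoke each tool, but the if/else gate on the risk score is exactly the control flow that generated code expresses naturally.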

Grounded in Literature

The architecture and training recipe draw from:

  • STAN (arXiv 2511.01275): spatio-temporal attention for seizure forecasting
  • FiLM (arXiv 1709.07871): Feature-wise Linear Modulation
  • Hinton KD (arXiv 1503.02531): knowledge distillation
  • TimeKD (arXiv 2505.02138): feature distillation for time series
  • Edge DL (arXiv 2012.00307): quantized models for neural implants

Interactive Demo Features

The Gradio Space includes 6 tabs:

  1. 🏗️ Architecture: Inspect both models with full parameter breakdowns
  2. 🎯 Train & Distill: Run the KD pipeline live with tunable hyperparameters
  3. 🔮 Live Inference: Test RGF-Net with synthetic EEG and adjustable patient biometrics
  4. 🤖 Agent Workflow: Visualize the smolagents CodeAgent architecture
  5. 📐 KD Loss: Mathematical walkthrough of the distillation loss
  6. 📁 Source Code: Project structure and references

Built with PyTorch, smolagents, and Gradio. Feedback welcome!
