## Table of Contents

- Overview
- Features
- System Architecture
- Installation
- Quick Start
- Hugging Face Deployment
- API Documentation
- Voice Processing
- Model Interpretability
- Data Processing Pipeline
- Contributing
- Contact
## Overview

NeuroLab is a multimodal analysis platform that combines EEG (electroencephalogram) data processing with voice emotion detection to provide comprehensive mental state classification. The system leverages machine learning to identify mental states such as relaxed, focused, and stressed, making it valuable for applications in mental health monitoring, neurofeedback, and brain-computer interfaces.
## Features

- Real-time EEG Processing: Stream and analyze EEG data in real-time
- Voice Emotion Detection: TensorFlow-based audio analysis with rule-based fallback
- Multimodal Analysis: Combine EEG and voice data for comprehensive assessment
- Multiple File Format Support: Compatible with .edf, .bdf, .gdf, .csv, WAV, MP3, and more
- Advanced Signal Processing: Comprehensive preprocessing and feature extraction
- Machine Learning Integration: TensorFlow/Keras models with graceful degradation
- NLP-based Recommendations: AI-driven personalized insights and recommendations
- RESTful API: FastAPI-powered endpoints for seamless integration
- Interactive Web UI: Gradio interface for easy testing and demonstration
- Scalable Architecture: Modular design for easy extension and maintenance
### Mental State Classification

- Relaxed (State 0): Calm, neutral emotional states
- Focused (State 1): Alert, positive, engaged states
- Stressed (State 2): Anxious, fearful, negative states
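For reference, these labels might be represented in code as a simple lookup table (a minimal sketch; the constant and function names below are illustrative, not part of the codebase):

```python
# Illustrative mapping of model output indices to mental state labels
MENTAL_STATES = {
    0: "relaxed",   # calm, neutral emotional states
    1: "focused",   # alert, positive, engaged states
    2: "stressed",  # anxious, fearful, negative states
}

def describe_state(index: int) -> str:
    """Return the human-readable label for a predicted state index."""
    return MENTAL_STATES.get(index, "unknown")
```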
## System Architecture

```
neurolab_model/
├── api/                    # API endpoints and routing
│   ├── auth.py             # Authentication endpoints
│   ├── training.py         # Model training endpoints
│   ├── voice.py            # Voice processing endpoints
│   └── streaming_endpoint.py
├── config/                 # Configuration files
│   ├── database.py
│   └── settings.py
├── core/                   # Core functionality
│   ├── config/
│   ├── data/
│   ├── ml/
│   ├── models/
│   └── services/
├── preprocessing/          # Data preprocessing modules
│   ├── features.py
│   ├── labeling.py
│   ├── load_data.py
│   └── preprocess.py
├── utils/                  # Utility functions
│   ├── ml_processor.py
│   ├── nlp_recommendations.py
│   ├── voice_processor.py
│   └── model_manager.py
├── data/                   # Raw data storage
├── processed/              # Processed data and trained models
├── main.py                 # Application entry point
├── requirements.txt        # Project dependencies
└── README.md
```
## Installation

### Prerequisites

- Python 3.8+
- pip package manager
- (Optional) MongoDB for data storage
- (Optional) InfluxDB for time-series data
### Setup

1. **Clone the repository**

   ```bash
   git clone https://github.com/neurolab-0x/ai.neurolab.git neurolab_model
   cd neurolab_model
   ```

2. **Create a virtual environment**

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

4. **Install additional audio libraries** (recommended for voice processing)

   ```bash
   pip install librosa soundfile
   ```

5. **Set up the environment**

   ```bash
   cp .env.example .env
   # Configure your .env file with appropriate settings
   ```

6. **Verify the installation**

   ```bash
   python -c "import tensorflow as tf; print(f'TensorFlow: {tf.__version__}')"
   python -c "import torch; print(f'PyTorch: {torch.__version__}')"
   ```
## Quick Start

### Start the API Server

```bash
uvicorn main:app --reload
```

The server will run on http://localhost:8000.

Access the API documentation:

- Interactive docs: http://localhost:8000/docs
- Alternative docs: http://localhost:8000/redoc
### Launch the Interactive Web UI

```bash
python gradio_app.py
```

The interface will run on http://localhost:7860.

Features:

- Manual EEG input with sliders
- Sample data generation and testing
- CSV file upload and analysis
- Model information and status
### Test EEG Analysis

```python
import requests

eeg_data = {
    "alpha": 10.5,
    "beta": 15.2,
    "theta": 6.3,
    "delta": 2.1,
    "gamma": 30.5
}

response = requests.post('http://localhost:8000/analyze', json=eeg_data)
print(response.json())
```

## Hugging Face Deployment

Deploy NeuroLab to Hugging Face Spaces for easy testing and API access.
1. **Install the Hugging Face CLI**

   ```bash
   pip install huggingface_hub
   huggingface-cli login
   ```

2. **Prepare the deployment**

   ```bash
   python scripts/prepare_hf_space.py
   ```

3. **Create and deploy the Space**

   ```bash
   cd neurolab-hf-space
   git init
   git add .
   git commit -m "Deploy NeuroLab"
   git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/neurolab-eeg-analysis
   git push -u origin main
   ```

4. **Access your Space** at https://huggingface.co/spaces/YOUR_USERNAME/neurolab-eeg-analysis
### Deployment Options

- Gradio Space: Interactive web interface (recommended for testing)
- Docker Space: Full FastAPI backend with all endpoints
- Model Hub: Upload trained models for inference

### Deployment Guides

- Quick Start Guide - Fast deployment in minutes
- Full Deployment Guide - Comprehensive instructions
- GitHub Actions - Automated deployment
### Use a Deployed Space via API

```python
from gradio_client import Client

client = Client("YOUR_USERNAME/neurolab-eeg-analysis")
result = client.predict(
    alpha=10.5, beta=15.2, theta=6.3, delta=2.1, gamma=30.5
)
print(result)
```

## API Documentation

### Core Endpoints

- `GET /health` - System health check and diagnostics
- `GET /` - API information and available endpoints
- `POST /upload` - Upload and process EEG files
  - Supports files up to 500MB
  - Returns mental state classification and analysis
- `POST /analyze` - Analyze EEG data
  - Real-time EEG data processing
  - Returns mental state, confidence, and metrics
- `POST /detailed-report` - Generate a comprehensive analysis report
  - Includes cognitive metrics
  - Provides NLP-based recommendations
  - Optional report saving
- `POST /recommendations` - Get personalized recommendations
  - Based on mental state analysis
  - NLP-powered insights
  - Customizable recommendation count
- `POST /calibrate` - Calibrate the model with new data
- `POST /train` - Train the model with a custom dataset (requires auth)
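For example, `/detailed-report` can be exercised with the same band-power payload that `/analyze` accepts (a minimal sketch; the exact request schema is an assumption, so check the interactive docs at http://localhost:8000/docs for the authoritative definition):

```python
import requests

# Band powers for a single EEG sample (same shape as the /analyze payload)
eeg_data = {"alpha": 10.5, "beta": 15.2, "theta": 6.3, "delta": 2.1, "gamma": 30.5}

# NOTE: assuming /detailed-report accepts the same body as /analyze;
# see the interactive docs at /docs for the actual schema.
response = requests.post('http://localhost:8000/detailed-report', json=eeg_data)
print(response.json())
```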
## Voice Processing

The voice processing module analyzes audio for emotion detection and maps emotions to mental states compatible with EEG analysis.
### Emotion to Mental State Mapping

- Angry → Stressed (State 2)
- Fear → Stressed (State 2)
- Sad → Stressed (State 2)
- Neutral → Relaxed (State 0)
- Calm → Relaxed (State 0)
- Happy → Focused (State 1)
- Surprise → Focused (State 1)
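In code, this mapping reduces to a simple lookup table (a sketch; the project's actual mapping may live in `utils/voice_processor.py` and differ in naming):

```python
# Emotion → mental state lookup (sketch; the key names are assumptions)
EMOTION_TO_STATE = {
    "angry": 2, "fear": 2, "sad": 2,   # Stressed
    "neutral": 0, "calm": 0,           # Relaxed
    "happy": 1, "surprise": 1,         # Focused
}
```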
### Voice Endpoints

- `GET /voice/health` - Check if the voice processor is initialized and ready
- `GET /voice/emotions` - List all supported emotions and their mental state mappings
- `POST /voice/analyze` - Upload and analyze an audio file for emotion detection
Example:

```python
import requests

with open('audio.wav', 'rb') as f:
    files = {'file': ('audio.wav', f, 'audio/wav')}
    response = requests.post('http://localhost:8000/voice/analyze', files=files)

result = response.json()
print(f"Emotion: {result['data']['emotion']}")
print(f"Mental State: {result['data']['mental_state']}")
print(f"Confidence: {result['data']['confidence']}")
```

- `POST /voice/analyze-batch` - Analyze multiple audio files with pattern analysis
Features:
- Process up to 50 files simultaneously
- Aggregate emotion distribution
- Calculate average mental state
- Identify dominant emotions
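A batch request might look like this (a sketch: the multipart field name `files` is an assumption, so check /docs for the actual schema):

```python
import requests

# NOTE: the multipart field name ('files') is an assumption; see /docs.
paths = ['clip1.wav', 'clip2.wav', 'clip3.wav']
files = [('files', (p, open(p, 'rb'), 'audio/wav')) for p in paths]

response = requests.post('http://localhost:8000/voice/analyze-batch', files=files)
print(response.json())
```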
- `POST /voice/analyze-raw` - Analyze raw audio data (base64 or bytes array)
Example:

```python
import base64
import requests

with open('audio.wav', 'rb') as f:
    audio_bytes = f.read()
audio_base64 = base64.b64encode(audio_bytes).decode()

payload = {
    "audio_data": {
        "data": audio_base64,
        "format": "base64"
    },
    "sample_rate": 16000
}

response = requests.post('http://localhost:8000/voice/analyze-raw', json=payload)
```

### Multimodal Analysis

Combine EEG and voice data for a comprehensive mental state assessment:
```python
import requests

eeg_data = {"alpha": 10.5, "beta": 15.2, "theta": 6.3, "delta": 2.1, "gamma": 30.5}

# Analyze EEG data
eeg_response = requests.post('http://localhost:8000/analyze', json=eeg_data)
eeg_state = eeg_response.json()['mental_state']

# Analyze voice data
with open('audio.wav', 'rb') as f:
    voice_response = requests.post('http://localhost:8000/voice/analyze',
                                   files={'file': f})
voice_state = voice_response.json()['data']['mental_state']

# Combine results (a simple average of the two state indices;
# weighting class probabilities would be a more principled fusion)
combined_state = (eeg_state + voice_state) / 2
print(f"Combined Mental State: {combined_state}")
```

## Model Interpretability

### SHAP Explanations

- Explains model predictions by attributing feature importance
- Identifies which EEG features contribute most to classifications
- Available via `/interpretability/explain?explanation_type=shap`

### LIME Explanations

- Provides local explanations for individual predictions
- Available via `/interpretability/explain?explanation_type=lime`
- Can be included in streaming responses with `include_interpretability=true`

### Confidence Calibration

- Ensures confidence scores accurately reflect true probabilities
- Methods: temperature scaling, Platt scaling, isotonic regression
- Available via `/interpretability/calibrate?method=temperature_scaling`
Usage example:

```python
from utils.interpretability import ModelInterpretability

interpreter = ModelInterpretability(model)

# Get SHAP explanations
shap_results = interpreter.explain_with_shap(X_data)

# Calibrate confidence
cal_results = interpreter.calibrate_confidence(X_val, y_val,
                                               method='temperature_scaling')

# Make predictions with calibrated confidence
predictions = interpreter.predict_with_calibration(X_test)
```

## Data Processing Pipeline

### EEG Pipeline

- Data Loading - File validation and format checking
- Preprocessing - Artifact removal, filtering, normalization
- Feature Extraction - Temporal, frequency domain, statistical features
- State Classification - Mental state prediction with confidence scoring
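As a rough illustration of the feature-extraction step, band powers can be computed from a raw signal with Welch's method (a sketch, not the project's actual preprocessing code; the sampling rate and band edges below are assumptions):

```python
import numpy as np
from scipy.signal import welch

# Hypothetical single-channel EEG segment: 10 s sampled at 256 Hz
fs = 256
signal = np.random.randn(fs * 10)

# Power spectral density via Welch's method
freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)

# Standard EEG frequency bands (Hz)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

# Integrate the PSD over each band to get band power
band_power = {
    name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                   freqs[(freqs >= lo) & (freqs < hi)])
    for name, (lo, hi) in bands.items()
}
print(band_power)
```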
### Voice Pipeline

- Audio Loading - Multiple format support (WAV, MP3, etc.) using scipy, soundfile, or fallback methods
- Preprocessing - Normalization, resampling to 16kHz
- Feature Extraction - RMS energy, zero-crossing rate, spectral centroid, spectral rolloff
- Emotion Detection - TensorFlow-based model or rule-based classification fallback
- State Mapping - Convert emotions to mental states (7 emotions → 3 states)
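The listed voice features can be computed with plain NumPy, roughly as follows (a sketch under assumed parameters, not the project's implementation):

```python
import numpy as np

# Hypothetical mono audio clip: 2 s at 16 kHz (the pipeline's target rate)
sr = 16000
audio = np.random.randn(sr * 2).astype(np.float32)

# RMS energy
rms = np.sqrt(np.mean(audio ** 2))

# Zero-crossing rate: fraction of adjacent samples that change sign
zcr = np.mean(np.abs(np.diff(np.sign(audio))) > 0)

# Magnitude spectrum
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)

# Spectral centroid: magnitude-weighted mean frequency
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

# Spectral rolloff: frequency below which 85% of spectral magnitude lies
cumulative = np.cumsum(spectrum)
rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]

print(f"RMS={rms:.4f} ZCR={zcr:.4f} centroid={centroid:.1f}Hz rolloff={rolloff:.1f}Hz")
```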
### Model Training Pipeline

- Data preparation and splitting
- Feature engineering
- Model selection and hyperparameter tuning
- Cross-validation
- Model calibration
- Performance evaluation
### Evaluation Metrics

- Accuracy
- Precision
- Recall
- F1 Score
- ROC-AUC
- Confidence calibration metrics
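The training and evaluation steps above can be sketched end to end with scikit-learn (illustrative only: the estimator, parameter grid, and synthetic data are assumptions, not the project's actual training code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import GridSearchCV, train_test_split

# Illustrative data: 5 band-power features per sample, 3 mental states
X = np.random.rand(300, 5)
y = np.random.randint(0, 3, 300)

# Data preparation and splitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Model selection and hyperparameter tuning with 5-fold cross-validation
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    {"n_estimators": [100, 200], "max_depth": [None, 10]},
                    cv=5)
grid.fit(X_train, y_train)

# Performance evaluation
y_pred = grid.predict(X_test)
y_proba = grid.predict_proba(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average='macro'))
print("Recall   :", recall_score(y_test, y_pred, average='macro'))
print("F1 Score :", f1_score(y_test, y_pred, average='macro'))
print("ROC-AUC  :", roc_auc_score(y_test, y_proba, multi_class='ovr'))
```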
## Gradio Web Interface

NeuroLab includes a user-friendly Gradio interface for easy testing and demonstration.
Manual Input Tab:
- Interactive sliders for each EEG frequency band
- Real-time analysis as you adjust values
- Visual feedback on mental state
Sample Data Tab:
- Pre-generated data for different mental states
- Quick testing without manual input
- Demonstrates expected outputs
CSV Upload Tab:
- Upload CSV files with EEG data
- Automatic processing and analysis
- Supports multiple rows (uses mean values)
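A minimal example of the expected CSV layout (the column names are an assumption based on the band-power fields used by the `/analyze` endpoint; each row is one sample):

```csv
alpha,beta,theta,delta,gamma
10.5,15.2,6.3,2.1,30.5
9.8,14.7,6.9,2.4,28.9
11.2,16.0,5.8,1.9,31.3
```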
Model Info Tab:
- View model status and configuration
- Check TensorFlow availability
- Model architecture details
Launch with:

```bash
python gradio_app.py
```

Access at: http://localhost:7860

## Troubleshooting
1. **TensorFlow GPU not detected**

   ```bash
   # Check GPU availability
   python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

   # Install CUDA-enabled TensorFlow if needed
   pip install tensorflow[and-cuda]
   ```

2. **Voice processing errors**

   ```bash
   # Install audio processing libraries
   pip install librosa soundfile scipy
   ```

3. **Model not found**

   - Ensure `./processed/trained_model.h5` exists for EEG analysis
   - Ensure `./model/voice_emotion_model.h5` exists for voice processing
   - The system will fall back to rule-based classification if models are missing

4. **Port already in use**

   ```bash
   # Use a different port
   uvicorn main:app --port 8001
   # or for Gradio: edit server_port in gradio_app.py
   python gradio_app.py
   ```

5. **Import errors**

   ```bash
   # Reinstall dependencies
   pip install -r requirements.txt --force-reinstall
   ```

## Additional Documentation

- Voice API Documentation - Detailed voice processing API guide
- Voice Setup Guide - Installation and troubleshooting
- API Documentation - Complete API reference
## Contributing

We welcome contributions! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Contact

AI Model Maintainer: Mugisha Prosper
Email: nelsonprox92@gmail.com
Project: Neurolabs Inc
Repository: GitHub
Built with ❤️ by the NeuroLab Team