# Speech Emotion Classification
## Overview

This repository contains a deep learning model for speech emotion classification. The model detects and classifies emotions from audio recordings by combining two acoustic feature sets, Mel-frequency cepstral coefficients (MFCCs) and mel-spectrograms, to analyze the emotional content of speech.
## Key Features

- Multi-modal Architecture: Combines CNN and MLP branches for comprehensive feature analysis
- Real-time Processing: Can process and analyze speech in real time
- Reported Performance: ~72% weighted precision on RAVDESS (see Performance Metrics below)
- Cross-platform Compatibility: Runs on Windows, macOS, and Linux
- Hugging Face Integration: Easy model sharing and deployment via the Hugging Face Hub (see the upload sketch below)
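As a minimal sketch of the Hub integration, a trained model file can be uploaded with `huggingface_hub` (the repo id and file name below are placeholders, and the repository's own `push_to_hub.py` may do this differently):

```python
from huggingface_hub import HfApi

# Assumes you are authenticated, e.g. via `huggingface-cli login`.
# The repo id and file name are placeholders -- adjust to your account.
api = HfApi()
api.upload_file(
    path_or_fileobj='model.keras',
    path_in_repo='model.keras',
    repo_id='your-username/speech_emotion_classification',
    repo_type='model',
)
```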
## Dataset

The model was trained on the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) dataset, which contains high-quality recordings of professional actors expressing different emotions. The dataset covers 8 distinct emotions (each encoded in the file name; see the helper sketch after this list):

- Neutral: Emotionless speech
- Calm: Calm and relaxed emotion
- Happy: Joyful and cheerful emotion
- Sad: Melancholic and sorrowful emotion
- Angry: Irritated and mad emotion
- Fearful: Scared and apprehensive emotion
- Disgust: Revolted and repulsed emotion
- Surprised: Astonished and amazed emotion
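RAVDESS file names encode the label as the third of seven dash-separated fields. A small helper (not part of this repository) to recover the emotion from a file name:

```python
# RAVDESS encodes the emotion as the third field of each filename,
# e.g. '03-01-06-01-02-01-12.wav' -> code '06' -> 'fearful'.
EMOTION_CODES = {
    '01': 'neutral', '02': 'calm', '03': 'happy', '04': 'sad',
    '05': 'angry', '06': 'fearful', '07': 'disgust', '08': 'surprised',
}

def emotion_from_filename(filename: str) -> str:
    return EMOTION_CODES[filename.split('-')[2]]
```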
## Performance Metrics
| Metric | Value |
|---|---|
| Test Accuracy | ~42.13% |
| Precision (weighted) | ~72.53% |
| Recall (weighted) | ~42.13% |
| F1-Score (weighted) | ~40.90% |
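The weighted metrics above can be recomputed from test-set predictions with scikit-learn; the label arrays below are illustrative stand-ins, not the real test split:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative labels; in practice y_true comes from the test split and
# y_pred from np.argmax(model.predict(...), axis=1).
y_true = np.array([0, 1, 2, 3, 4, 5, 6, 7, 0, 1])
y_pred = np.array([0, 1, 2, 2, 4, 5, 6, 7, 1, 1])

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='weighted', zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```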
## Installation

### Prerequisites

- Python 3.7 or higher
- pip package manager

### Setup

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/speech_emotion_classification.git
  cd speech_emotion_classification
  ```

- Create a virtual environment (recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  Or install the dependencies manually:

  ```bash
  pip install tensorflow numpy librosa scikit-learn huggingface_hub pandas matplotlib seaborn
  ```
## Usage

### 1. Load and Use the Model

```python
import librosa
import numpy as np
from tensorflow import keras

# Load the pre-trained model
model = keras.models.load_model('./path/to/model.keras')

# Load an audio file
audio_path = 'path/to/audio.wav'
y, sr = librosa.load(audio_path, sr=None)

# Extract features
mfcc_features = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
spectrogram_features = librosa.feature.melspectrogram(y=y, sr=sr)

# Normalize and reshape the features to match your preprocessing pipeline.
# The exact shapes depend on how the model was trained; the lines below only
# add a batch dimension (and a channel axis for the CNN branch) as placeholders.
mfcc_features_reshaped = mfcc_features[np.newaxis, ...]
spec_features_reshaped = spectrogram_features[np.newaxis, ..., np.newaxis]

# Make a prediction; the multi-modal model takes both feature arrays
predictions = model.predict([mfcc_features_reshaped, spec_features_reshaped])

# Get the emotion with the highest probability
emotion_labels = ['neutral', 'calm', 'happy', 'sad', 'angry', 'fearful', 'disgust', 'surprised']
predicted_emotion = emotion_labels[np.argmax(predictions)]
print(f"Predicted emotion: {predicted_emotion}")
```
### 2. Train Your Own Model

```bash
python auto_train.py
```

### 3. Test the Model

```bash
python test_prediction_pipeline.py
```

### 4. Run the App

```bash
streamlit run app.py
```
## Architecture

The model uses a multi-modal architecture (sketched in the code below):

- MFCC Branch: Processes Mel-frequency cepstral coefficients using dense neural network layers
- Spectrogram Branch: Processes mel-spectrogram features using convolutional layers
- Fusion Layer: Combines both feature representations before final classification
- Output Layer: Softmax layer for emotion classification across 8 emotional states
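A minimal Keras sketch of this layout (input shapes, layer widths, and other hyperparameters are assumptions for illustration; see `auto_train.py` for the model actually used):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input shapes -- the real ones depend on the preprocessing pipeline.
mfcc_input = keras.Input(shape=(13, 130), name='mfcc')         # MFCC branch (MLP)
spec_input = keras.Input(shape=(128, 130, 1), name='melspec')  # spectrogram branch (CNN)

# MFCC branch: flatten and pass through dense layers
x1 = layers.Flatten()(mfcc_input)
x1 = layers.Dense(128, activation='relu')(x1)

# Spectrogram branch: stacked convolutions with pooling
x2 = layers.Conv2D(32, 3, activation='relu')(spec_input)
x2 = layers.MaxPooling2D()(x2)
x2 = layers.Conv2D(64, 3, activation='relu')(x2)
x2 = layers.GlobalAveragePooling2D()(x2)

# Fusion layer: concatenate both representations
fused = layers.Concatenate()([x1, x2])
fused = layers.Dense(64, activation='relu')(fused)

# Output layer: softmax over the 8 emotion classes
outputs = layers.Dense(8, activation='softmax')(fused)

model = keras.Model([mfcc_input, spec_input], outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```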
## Project Structure

```
speech_emotion_classification/
├── app.py                # Streamlit web application
├── auto_train.py         # Automated training script
├── debug_labels.py       # Label debugging utilities
├── driver.py             # Main execution script
├── push_to_hub.py        # Hugging Face model upload script
├── split_model.py        # Model splitting utilities
├── test_*.py             # Test files
├── requirements.txt      # Project dependencies
├── README.md             # This file
└── ...
```
## Evaluation

To evaluate the model:

```bash
python test_prediction_pipeline.py
```

This runs the model on the test dataset and reports detailed performance metrics.
## Limitations

- Performance may vary with different accents and languages
- Audio quality (noise, clarity) can significantly affect accuracy
- Emotions expressed in speech can be culturally dependent
- Requires clear audio with minimal background noise for best results
- Shorter audio clips (5-10 seconds) typically work better than longer recordings (see the sketch below)
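For example, a clip can be trimmed or zero-padded to a fixed window before feature extraction (a sketch; the 10-second duration is an assumption, not necessarily what training used):

```python
import librosa

y, sr = librosa.load('path/to/audio.wav', sr=None)
# Trim to the first 10 seconds, or zero-pad if the clip is shorter.
y = librosa.util.fix_length(y, size=10 * sr)
```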
## Ethical Considerations
- This model should not be used to make critical decisions about individuals without their explicit consent
- Results should be interpreted with caution and not treated as definitive psychological assessments
- Consider privacy implications when processing audio of individuals
- Use responsibly and ethically, with appropriate consent when analyzing personal speech
- Be aware of potential bias in the training data and its impact on model predictions
## Reproducibility

To ensure reproducible results:

- Set random seeds:

  ```python
  import random

  import numpy as np
  import tensorflow as tf

  np.random.seed(42)
  tf.random.set_seed(42)
  random.seed(42)
  ```

- Use the same training data and preprocessing pipeline
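For stricter determinism, newer TensorFlow versions also offer op-level determinism (optional, and availability depends on your TF version):

```python
import tensorflow as tf

# Requires TensorFlow 2.9+; makes ops deterministic at some performance cost.
tf.config.experimental.enable_op_determinism()
```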
## Contributing
Contributions are welcome! Here's how you can contribute:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Please make sure to update tests as appropriate and follow the existing code style.
### Development Setup

```bash
git clone https://github.com/Rayyan9477/speech_emotion_classification.git
cd speech_emotion_classification
pip install -r requirements.txt
pip install -r requirements-dev.txt  # For development dependencies
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Citation

If you use this model in your research, please cite:

```bibtex
@software{speech_emotion_classification,
  author = {AI Research Team},
  title = {Speech Emotion Classification Model},
  year = {2025},
  url = {https://github.com/your-username/speech_emotion_classification}
}
```
## Acknowledgments
- The RAVDESS dataset creators for providing the high-quality emotional speech data
- The TensorFlow team for providing an excellent deep learning framework
- The Librosa team for audio processing capabilities
- The Hugging Face team for model sharing capabilities