This project implements a neural network in Python. It covers model architecture, training, evaluation, and optimization, and is structured to be modular and easily extensible.
- Customizable Model Architecture: Supports multiple layers with configurable activation functions and layer sizes.
- Training and Evaluation: Implements training loops with backpropagation and optimizers.
- Dataset: Loads and preprocesses the EMNIST dataset efficiently.
- Performance Metrics: Computes accuracy, precision, recall, and F1 score for classification tasks (see the metrics sketch after this list).
- Hyperparameter Tuning: Allows modification of learning rates, batch sizes, and optimization techniques.
- Report: Generates a detailed report with training and evaluation metrics.
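As a minimal sketch of the metrics computation, here is one way these scores could be obtained with scikit-learn (an assumption; the notebook may compute them differently). The `y_true` and `y_pred` arrays are hypothetical placeholders, not project data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical predictions and labels for a multi-class task; replace with
# the model's real outputs (EMNIST "balanced" has 47 classes).
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1])

# Macro averaging treats every class equally, which suits balanced datasets.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("f1       :", f1_score(y_true, y_pred, average="macro", zero_division=0))
```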
The core implementation consists of the following components (a minimal sketch follows the list):
- Input Layer
- Hidden Dense Layers (with ReLU activations)
- Dropout Layer (for regularization)
- Output Layer (for classification or regression tasks)
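The source does not name a framework, so here is one way the layer stack above could look in PyTorch (an assumption). The layer widths and dropout rate are illustrative, not the project's actual values.

```python
import torch.nn as nn

# Illustrative sizes for flattened 28x28 EMNIST images; the actual project
# may use different widths, depth, or dropout probability.
model = nn.Sequential(
    nn.Flatten(),              # input layer: flatten each image to a vector
    nn.Linear(28 * 28, 256),   # hidden dense layer
    nn.ReLU(),                 # ReLU activation
    nn.Dropout(p=0.5),         # dropout layer for regularization
    nn.Linear(256, 128),       # second hidden dense layer
    nn.ReLU(),
    nn.Linear(128, 47),        # output layer (47 classes in EMNIST "balanced")
)
```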
The training process involves:
- Forward propagation
- Loss computation (cross-entropy for classification)
- Backpropagation
- Parameter updates using the Adam optimizer
- Performance tracking
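A minimal sketch of one epoch of this loop, again assuming PyTorch; `model`, `train_loader`, and the learning rate below are hypothetical placeholders, not the project's actual values.

```python
import torch
import torch.nn as nn

def train_one_epoch(model, train_loader, optimizer, device="cpu"):
    """One epoch: forward pass, cross-entropy loss, backprop, Adam updates."""
    criterion = nn.CrossEntropyLoss()     # cross-entropy for classification
    model.train()
    total_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()             # clear gradients from the last step
        logits = model(images)            # forward propagation
        loss = criterion(logits, labels)  # loss computation
        loss.backward()                   # backpropagation
        optimizer.step()                  # parameter update (Adam)
        total_loss += loss.item()         # performance tracking
    return total_loss / len(train_loader)

# Hypothetical usage; the learning rate is illustrative only.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# avg_loss = train_one_epoch(model, train_loader, optimizer)
```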
The implementation includes visualization tools to analyze:
- Training and validation loss
- Accuracy trends
- Model performance metrics
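For example, the loss and accuracy curves could be plotted with matplotlib roughly as follows (a sketch; the history lists here are hypothetical placeholders for values recorded during training):

```python
import matplotlib.pyplot as plt

# Hypothetical per-epoch histories recorded during training.
train_loss = [1.2, 0.8, 0.6, 0.5]
val_loss   = [1.3, 0.9, 0.7, 0.65]
val_acc    = [0.55, 0.70, 0.78, 0.81]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Training vs. validation loss per epoch.
ax1.plot(train_loss, label="train loss")
ax1.plot(val_loss, label="val loss")
ax1.set_xlabel("epoch")
ax1.set_ylabel("loss")
ax1.legend()

# Validation accuracy trend.
ax2.plot(val_acc, label="val accuracy")
ax2.set_xlabel("epoch")
ax2.set_ylabel("accuracy")
ax2.legend()

plt.tight_layout()
plt.show()
```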
To run this project, first install the dependencies:
`pip install -r requirements.txt`
Then run the `neural_net.ipynb` notebook step by step to train and evaluate the neural network.
This project is licensed under the MIT License.