

Neural network optimizers module #13683

Closed
shretadas wants to merge 1 commit into TheAlgorithms:master from shretadas:master

Conversation

@shretadas

Neural Network Optimizers Module

This PR introduces a comprehensive neural network optimizers module that implements five widely used optimization algorithms for machine learning and deep learning.
The primary goal is to enhance the educational value of the repository by including well-documented, tested, and modular implementations.

Fixes #13662


What's Added

  • Add SGD (Stochastic Gradient Descent) optimizer

  • Add MomentumSGD with momentum acceleration

  • Add NAG (Nesterov Accelerated Gradient) optimizer

  • Add Adagrad with adaptive learning rates

  • Add Adam optimizer combining momentum and RMSprop

  • Include 61 comprehensive doctests (all passing)

  • Add abstract BaseOptimizer for a consistent interface (a rough interface sketch follows this list)

  • Include detailed mathematical documentation

  • Add educational examples and performance comparisons

  • Follow repository guidelines: type hints, error handling, and pure Python implementation
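
To illustrate the kind of consistent interface described above, here is a minimal sketch of what an abstract BaseOptimizer could look like; the class and method names below are assumptions for illustration and are not necessarily the ones used in this PR:

```python
from abc import ABC, abstractmethod


class BaseOptimizer(ABC):
    """Hypothetical common interface shared by all optimizers in the module."""

    def __init__(self, learning_rate: float = 0.01) -> None:
        if learning_rate <= 0:
            raise ValueError("learning_rate must be positive")
        self.learning_rate = learning_rate

    @abstractmethod
    def update(self, params: list[float], gradients: list[float]) -> list[float]:
        """Return updated parameters given the current gradients."""


class SGD(BaseOptimizer):
    """Plain stochastic gradient descent: theta = theta - lr * grad."""

    def update(self, params: list[float], gradients: list[float]) -> list[float]:
        return [p - self.learning_rate * g for p, g in zip(params, gradients)]
```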


Technical Details

Algorithms Implemented:

Algorithm | Key Concept
-- | --
SGD | θ = θ - α∇θ (basic gradient descent)
MomentumSGD | v = βv + (1-β)∇θ, θ = θ - αv
NAG | Uses lookahead gradients for better convergence
Adagrad | Adapts learning rates per parameter
Adam | Combines momentum and adaptive learning rates
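
As an illustration of the update rules in the table, here is a hedged, pure-Python sketch of MomentumSGD and Adam steps; the function names and signatures are assumptions, not the module's actual API:

```python
import math


def momentum_step(
    params: list[float],
    grads: list[float],
    velocity: list[float],
    lr: float = 0.01,
    beta: float = 0.9,
) -> tuple[list[float], list[float]]:
    """MomentumSGD as in the table: v = beta*v + (1 - beta)*grad, theta = theta - lr*v."""
    velocity = [beta * v + (1 - beta) * g for v, g in zip(velocity, grads)]
    params = [p - lr * v for p, v in zip(params, velocity)]
    return params, velocity


def adam_step(
    params: list[float],
    grads: list[float],
    m: list[float],
    v: list[float],
    t: int,
    lr: float = 0.001,
    beta1: float = 0.9,
    beta2: float = 0.999,
    eps: float = 1e-8,
) -> tuple[list[float], list[float], list[float]]:
    """One Adam step: momentum estimate m plus RMSprop-style scaling v (t starts at 1)."""
    m = [beta1 * mi + (1 - beta1) * g for mi, g in zip(m, grads)]
    v = [beta2 * vi + (1 - beta2) * g * g for vi, g in zip(v, grads)]
    m_hat = [mi / (1 - beta1**t) for mi in m]  # bias correction of the first moment
    v_hat = [vi / (1 - beta2**t) for vi in v]  # bias correction of the second moment
    params = [
        p - lr * mh / (math.sqrt(vh) + eps) for p, mh, vh in zip(params, m_hat, v_hat)
    ]
    return params, m, v
```

Adam combines the momentum estimate m with an RMSprop-style running average v of squared gradients; the bias correction above assumes the step counter t starts at 1.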

Directory Structure:

neural_network/optimizers/
├── __init__.py                # Package initialization
├── README.md                  # Comprehensive documentation
├── base_optimizer.py          # Abstract base class
├── sgd.py                     # Stochastic Gradient Descent
├── momentum_sgd.py            # SGD with Momentum
├── nag.py                     # Nesterov Accelerated Gradient
├── adagrad.py                 # Adagrad optimizer
├── adam.py                    # Adam optimizer
├── test_optimizers.py         # Comprehensive test suite
└── IMPLEMENTATION_SUMMARY.md  # Technical implementation details
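
Assuming __init__.py re-exports the optimizer classes (an assumption based on the file listing above, not verified from the PR), usage might look roughly like this:

```python
# Hypothetical usage; the import path and class name are assumed from the listing above.
from neural_network.optimizers import SGD

optimizer = SGD(learning_rate=0.1)
params = [1.0, -2.0]
grads = [0.5, -0.5]
params = optimizer.update(params, grads)  # expected [0.95, -1.95] under the sketch above
```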

Testing Coverage

  • 61 doctests with a 100% pass rate (an illustrative doctest follows this list)

  • Error handling for edge cases

  • Multi-dimensional parameter support

  • Performance comparison examples
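
As an illustration of the doctest style referenced above, a test for a single SGD step might look like the following; this example is hypothetical and is not taken from the submitted test suite:

```python
def sgd_update(params: list[float], grads: list[float], lr: float) -> list[float]:
    """
    Single SGD update: theta = theta - lr * grad.

    >>> sgd_update([1.0, 2.0], [0.1, 0.2], lr=0.5)
    [0.95, 1.9]
    >>> sgd_update([0.0], [1.0], lr=0.1)
    [-0.1]
    """
    return [p - lr * g for p, g in zip(params, grads)]
```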


Describe Your Change

  • Add an algorithm

  • Fix a bug or typo in an existing algorithm

  • Add or change doctests (note: no mixed code/test changes in a single PR)

  • Documentation change


Checklist

  • I have read CONTRIBUTING.md.

  • This pull request is all my own work — I have not plagiarized.

  • I know that pull requests will not be merged if they fail the automated tests.

  • This PR only changes one algorithm file. (For multiple algorithms, separate PRs are recommended.)

  • All new Python files are placed inside an existing directory.

  • All filenames are lowercase with no spaces or dashes.

  • All functions and variables follow Python naming conventions.

  • All functions include Python type hints.

  • All functions have doctests that pass automated testing.

  • All new algorithms include at least one URL to an authoritative explanation (e.g., Wikipedia).

  • If this pull request resolves one or more issues, the description includes the issue number(s) with a closing keyword: Fixes #13662.


@algorithms-keeper

Closing this pull request as invalid

@shretadas, this pull request is being closed as none of the checkboxes have been marked. It is important that you go through the checklist and mark the ones relevant to this pull request. Please read the Contributing guidelines.

If you're unsure how to mark a checkbox, please follow these instructions:

  • Read each point one at a time and decide whether it is relevant to this pull request.
  • If it is, mark it by putting an x between the square brackets, like so: [x]

NOTE: Only [x] is supported, so if you have put any other letter or symbol between the brackets, it will be marked as invalid. If that is the case, please open a new pull request with the appropriate changes.

@algorithms-keeper (bot) added the awaiting reviews label on Oct 22, 2025
@shretadas
Author

Thank you for the review and the guidance.

I have updated the PR description and checklist to reflect the items that are applicable to this submission. I understand only “[x]” is accepted for marking a point as completed.

I have two options and I’d like to know which you prefer:

  1. If possible, please re-open this PR so I can push a follow-up commit that only updates the checklist and minor docs; or
  2. If you prefer a fresh PR, I will open a new PR with the corrected description and the same implementation files.

Below is the updated PR description (it already includes Fixes #13662). I will update the PR body now; please let me know whether to reopen this PR or whether I should create a new one. Thank you for your help.


Labels

awaiting reviews, invalid


Development

Successfully merging this pull request may close these issues:

Add neural network optimizers module to enhance training capabilities
