A collection of numerical optimization algorithms implemented in Python (with some in R), with a focus on educational clarity and practical application in AI/ML. This repository aims to help students and practitioners learn optimization techniques through clear implementations and detailed explanations.
This repository serves as:
- A learning resource for understanding optimization algorithms
- A practical reference for implementing numerical methods
- A platform for experimenting with different optimization techniques
- A collaborative space for sharing knowledge and improvements
**Gradient-Based Methods**
- First-Order Methods (Gradient Descent and variants)
- Second-Order Methods (Newton and Quasi-Newton)
- Stochastic Methods
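
To make the first-order idea above concrete, here is a minimal gradient descent sketch (the function name and default parameters are illustrative, not this repository's API; the real implementations add line search, momentum, and stochastic variants):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Minimize a differentiable function given its gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # gradient ~ 0: stationary point
            break
        x = x - lr * g               # step in the steepest-descent direction
    return x

# Minimize f(x, y) = (x - 3)^2 + 2 * (y + 1)^2, whose minimum is at (3, -1).
grad_f = lambda v: np.array([2 * (v[0] - 3), 4 * (v[1] + 1)])
print(gradient_descent(grad_f, x0=[0.0, 0.0]))  # ~ [3. -1.]
```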
**Root Finding Methods**
- Newton-Type Methods
- Fixed-Point Iteration Methods
- Bracketing Methods
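
A bare-bones sketch of a Newton-type method and a bracketing method side by side (helper names and tolerances are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

def bisection(f, a, b, tol=1e-12):
    """Bracketing method: requires f(a) and f(b) to have opposite signs."""
    fa = f(a)
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        fmid = f(mid)
        if fa * fmid <= 0:
            b = mid            # root lies in [a, mid]
        else:
            a, fa = mid, fmid  # root lies in [mid, b]
    return (a + b) / 2

# Both find sqrt(2) as the positive root of f(x) = x^2 - 2.
f = lambda x: x * x - 2
print(newton(f, lambda x: 2 * x, x0=1.0))  # ~ 1.4142135623...
print(bisection(f, 1.0, 2.0))              # ~ 1.4142135623...
```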
**Linear Programming**
- Simplex Method
- Interior Point Methods
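
Before exploring the from-scratch implementations, a small LP can be sanity-checked with SciPy's `linprog`, which wraps the HiGHS solvers; a minimal usage sketch:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
res = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [1, 3]],
    b_ub=[4, 6],
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(res.x, -res.fun)  # optimal vertex ~ [4, 0], objective value ~ 12
```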
**Convex Optimization**
- Unconstrained Optimization
- Constrained Optimization
- Convex Programming Methods
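
One workhorse for simple constrained convex problems is projected gradient descent: take a gradient step, then project back onto the feasible set. A minimal sketch, assuming the projection is cheap to compute (names are illustrative):

```python
import numpy as np

def projected_gradient(grad, project, x0, lr=0.1, max_iter=500):
    """Projected gradient descent: a gradient step followed by a
    projection back onto the (convex) feasible set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = project(x - lr * grad(x))
    return x

# Minimize ||x - c||^2 subject to x >= 0; projecting onto the nonnegative
# orthant is simply clipping at zero, so the solution is max(c, 0).
c = np.array([2.0, -1.0])
sol = projected_gradient(
    grad=lambda x: 2 * (x - c),          # gradient of the squared distance
    project=lambda x: np.maximum(x, 0),  # Euclidean projection onto x >= 0
    x0=np.zeros(2),
)
print(sol)  # ~ [2. 0.]
```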
**Global Optimization**
- Direct Search Methods
- Population-Based Methods
- Trust Region Methods
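
As a quick illustration of a population-based method, SciPy's `differential_evolution` can locate the global minimum of the multimodal Rastrigin test function; a short sketch:

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=0)
print(result.x, result.fun)  # ~ [0, 0] with objective ~ 0
```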
Each implementation includes:
- Detailed mathematical explanations
- Step-by-step implementation notes
- Usage examples
- Visualization helpers
A curated list of resources we found helpful:
| Resource | Description | Level |
| --- | --- | --- |
| Convex Optimization (Stanford) | Comprehensive course by Stephen Boyd | Advanced |
| Numerical Optimization (Nocedal & Wright) | Standard reference text | Advanced |
| Root Finding Algorithms | Practical implementations by Oscar Veliz | Intermediate |
We welcome contributions! Whether you want to:
- Fix bugs
- Add new algorithms
- Improve documentation
- Share insights or use cases
Please see our Contributing Guidelines for more details.
| Name | Role | GitHub |
| --- | --- | --- |
| Saeed Ahmad | Maintainer | @saeedahmadicp |
| Izhar Ali | Maintainer | @ali-izhar |
This project is licensed under the MIT License - see the LICENSE file for details.
Built for learning and experimentation 📚