In scikit-learn's LogisticRegression docs, they write:

"This class implements regularized logistic regression using the ‘liblinear’ library, ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ solvers."
Logistic regression has no closed-form solution, so its coefficients must be found with an iterative optimization algorithm such as gradient descent or Adam. All we need are the partial derivatives of the loss with respect to the weights, and we should be good to go. So what are these "solvers", and where do they fit into the picture?
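To make the "partial derivatives plus an optimizer" idea concrete, here is a minimal sketch of fitting unregularized logistic regression with plain gradient descent. The function names (`sigmoid`, `fit_logistic_gd`), the learning rate, and the toy data are all my own illustration, not anything from scikit-learn; the gradient used is the standard one for the average log-loss, `X.T @ (p - y) / n`.

```python
import numpy as np

def sigmoid(z):
    # Logistic function, mapping scores to probabilities in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gd(X, y, lr=0.1, n_iter=1000):
    # Plain gradient descent on the average log-loss (hypothetical
    # helper for illustration; scikit-learn's solvers are smarter).
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)               # predicted probabilities
        grad = X.T @ (p - y) / len(y)    # gradient of the average log-loss
        w -= lr * grad
    return w

# Toy data: an intercept column plus one feature that separates the classes.
X = np.c_[np.ones(6), [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]]
y = np.array([0, 0, 0, 1, 1, 1])
w = fit_logistic_gd(X, y)
print((sigmoid(X @ w) > 0.5).astype(int))  # predictions should match y
```

This works, but it is slow and sensitive to the learning rate. The "solvers" in the docs quote (liblinear, newton-cg, sag, saga, lbfgs) are more sophisticated optimizers that use the same gradients (and sometimes curvature information) to converge in far fewer iterations.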