I've written a second answer, inspired by a newer question that asks how to fit a nonlinear model for tree growth: nls() singular gradient matrix at initial parameter estimates error.
On the surface, the study of how trees grow doesn't have much in common with the study of how students learn. However, if we flip a growth curve, we get a learning curve. Here is Figure 1 from Karlsson (2000).

K. Karlsson. Height growth patterns of Scots pine and Norway spruce in the coastal areas of western Finland. Forest Ecology and Management, 135(1):205–216, 2000.
The figure shows a variety of growth curves, some with more curvature than others. This is interesting because, as discussed in the thread about fitting the nonlinear growth model, the data might trend linearly (without any curvature). So we might prefer a model that allows for curvature if there is evidence for it in the data.
So in my second answer I fit four models:
- [blue] y ~ x (linear)
- [green] log(y) ~ x (log linear, suggested by @whuber)
- [yellow] segmented at the mid-point, suggested by me after eyeballing the data
$$
\begin{aligned}
y =
\begin{cases}
\beta_0 + \beta_1 x & x \leq 10\\
\beta^*_0 & x > 10
\end{cases}
\end{aligned}
$$
- [red] nonlinear learning curve = inverted growth curve
$$
\begin{aligned}
y =
\beta_0 - \beta_1\left\{1 - \exp(-\beta_2x)\right\}^{\beta_3}
\end{aligned}
$$
where $x$ is the question id and $y$ is the time to answer the question (in seconds). R code to reproduce the analysis is attached at the end.
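To get a feel for this family of curves, here is a quick sketch using made-up parameter values (not the fitted ones): the curve starts near $\beta_0$ and levels off towards $\beta_0 - \beta_1$.

```r
# Illustrative (not fitted) parameters: b0 = 40, b1 = 30, b2 = 0.2, b3 = 1.5.
learning_curve <- function(x, b0, b1, b2, b3) {
  b0 - b1 * (1 - exp(-b2 * x))^b3
}
y_demo <- learning_curve(seq(20), b0 = 40, b1 = 30, b2 = 0.2, b3 = 1.5)
round(range(y_demo), 1)
#> [1] 10.8 37.7
```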

For what it's worth, the segmented model has the smallest residual sum of squares (RSS). As @whuber points out in a comment, it's also the model most likely to be overfitted to the data. In fact, this model would only make sense if the experimental design supports it: for example, if the students got a break half-way through the exam, or if questions 11 to 20 are designed to have the same difficulty.
Another observation is that the nonlinear model suggests response times decrease rapidly during the first third of the test and then begin to level off more quickly than the log-linear model implies.
sum(resid(m_linear)^2)
#> [1] 391.5296
sum(m_log_linear$.resid^2)
#> [1] 341.9794
sum(resid(m_nonlinear)^2)
#> [1] 314.8828
sum(m_segmented$.resid^2)
#> [1] 285.9755
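As a rough way to weigh fit against complexity, here is an informal AIC-style comparison based on these RSS values. The parameter counts are my own tally, and the log-linear RSS is computed on the back-transformed scale rather than the scale the model was fit on, so treat this as indicative only:

```r
# RSS values from above; k is my tally of parameters per mean function
# (segmented: slope + intercept for x <= 10, plus a plateau level).
rss <- c(linear = 391.53, log_linear = 341.98, nonlinear = 314.88, segmented = 285.98)
k   <- c(linear = 2, log_linear = 2, nonlinear = 4, segmented = 3)
n   <- 20
# AIC up to an additive constant, assuming Gaussian errors: n * log(RSS / n) + 2k.
round(sort(n * log(rss / n) + 2 * k), 1)
```

The segmented model still comes out ahead on this score, with the log-linear model close behind, so the ranking by RSS alone is not purely an artifact of parameter count.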
R code to fit the four models, reproduce the figure, and compute the residual sums of squares:
# I've extracted the data with WebPlotDigitizer.
# https://automeris.io/WebPlotDigitizer/
x <- seq(20)
y <- c(40.5, 30.9, 28, 23.5, 26.6, 27.7, 25.8, 23.5, 20.8, 23.1, 12.8, 10.1, 12.6, 23.3, 17.6, 9.5, 17.3, 17.2, 9.7, 11.4)
library("tidyverse")
fit_nonlinear <- function(x, y) {
  # See this answer by @whuber for an explanation of how to fit
  # a nonlinear model with least squares:
  # https://stats.stackexchange.com/a/599097/237901

  # The model. (Shown for reference; nls() below takes the formula directly.)
  f <- function(x, beta) {
    b0 <- beta["b0"]
    b1 <- beta["b1"]
    b2 <- beta["b2"]
    b3 <- beta["b3"]
    exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3))
  }
  # Make a guess for an initial fit.
  soln0 <- c(b0 = log(40), b1 = log(40), b2 = sqrt(0.1), b3 = 1)
  # Polish this fit. The control object shows how to specify
  # some of the most useful aspects of the search.
  nls(
    y ~ exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3)),
    start = soln0,
    control = list(minFactor = 2^(-16), maxiter = 1e4, warnOnly = TRUE)
  )
}
fit_log_linear <- function(x, y) {
  tibble(x, y) %>%
    mutate(
      .fitted = exp(fitted(lm(log(y) ~ x))),
      .resid = y - .fitted
    )
}
fit_segments <- function(x, y) {
  m4 <- tibble(x, y)
  m4.le10 <- lm(y ~ x, data = subset(m4, x <= 10))
  m4.gt10 <- lm(y ~ 1, data = subset(m4, x > 10))
  m4 %>%
    mutate(
      .fitted = if_else(x <= 10,
        predict(m4.le10, newdata = .),
        predict(m4.gt10, newdata = .)
      ),
      .resid = y - .fitted
    )
}
m_linear <- lm(y ~ x)
m_log_linear <- fit_log_linear(x, y)
m_nonlinear <- fit_nonlinear(x, y)
m_segmented <- fit_segments(x, y)
plot(x, y, xlab = "Question", ylab = "Time")
legend(
  15, 40,
  legend = c("linear", "log linear", "nonlinear", "segmented"),
  col = c("#2297E6", "#61D04F", "#DF536B", "#F5C710"),
  lty = 1
)
abline(
  m_linear,
  lwd = 2, col = "#2297E6"
)
lines(
  x, m_log_linear$.fitted,
  lwd = 2, col = "#61D04F"
)
lines(
  x, fitted(m_nonlinear),
  lwd = 2, col = "#DF536B"
)
lines(
  x[x <= 10], m_segmented$.fitted[x <= 10],
  lwd = 2, col = "#F5C710"
)
lines(
  x[x > 10], m_segmented$.fitted[x > 10],
  lwd = 2, col = "#F5C710"
)
Created on 2022-12-26 with reprex v2.0.2