A table in Chapter 7 of Frank Harrell's course notes compares several approaches to analyzing longitudinal data like these. As you noticed, repeated-measures ANOVA has strict requirements. Other approaches provide more flexibility, although each has its own assumptions and no single method is best for all data.
Let's put your data together into a useful data frame. I assume that what's of interest is the number of trials attempted rather than time elapsed.
trialDat <- data.frame(
  timeToComplete = c(Individual1, Individual2, Individual3, Individual4, Individual5),
  ID = c(rep("1", length(Individual1)), rep("2", length(Individual2)),
         rep("3", length(Individual3)), rep("4", length(Individual4)),
         rep("5", length(Individual5))),
  trialNo = c(seq_along(Individual1), seq_along(Individual2), seq_along(Individual3),
              seq_along(Individual4), seq_along(Individual5))
)
Plotting the data is always a good idea. The lines are simple linear regression lines for each individual.
ggplot(trialDat,
       aes(x = trialNo, y = timeToComplete, group = ID, color = ID)) +
  geom_point() +
  geom_smooth(method = lm, formula = y ~ x)

With only 5 individuals you might simply treat ID as a fixed effect and allow for different slopes in the training curves among the individuals. That essentially recapitulates the fits shown in the plot above. Here trialNo is treated as linearly associated with timeToComplete; more flexible modeling of the time course might be called for in general.
lmMod <- lm(timeToComplete~ID*trialNo,data=trialDat)
That provides a different fit for each individual. You can use the Anova() function in the R car package to give a simple summary of the results.
car::Anova(lmMod)
# Anova Table (Type II tests)
#
# Response: timeToComplete
#             Sum Sq Df F value    Pr(>F)
# ID          600.51  4  34.050 2.358e-09
# trialNo    1040.35  1 235.954 1.389e-13
# ID:trialNo  218.24  4  12.374 1.634e-05
# Residuals   101.41 23
That suggests a significant association between timeToComplete and trialNo, and that the association differs among individuals (ID:trialNo interaction).
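If you want the per-individual slope estimates implied by that interaction, the emmeans package (not used above, so this is an assumption on my part) can extract them from the fitted model:

```r
# Hypothetical follow-up: estimated slope of timeToComplete vs trialNo
# for each ID, with confidence intervals, from the lmMod fit above.
library(emmeans)
emtrends(lmMod, ~ ID, var = "trialNo")
```

That gives one slope per individual, which is often easier to report than the raw interaction coefficients.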
A linear mixed model, as suggested by @utobi, models some regression coefficients as having Gaussian distributions among the individuals, rather than fitting separate lines for each. The lme4 package is often used for such modeling. This model allows both for different intercepts and different slopes among the individuals, with a correlation between slopes and intercepts. This site's lmer cheat sheet shows how to set up such models.
lmerMod <- lme4::lmer(timeToComplete ~ trialNo + (1 + trialNo | ID),
                      data = trialDat)
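Once that's fit you can see how each individual's curve combines the fixed effects with its random effects; for example (using the lmerMod fit above):

```r
summary(lmerMod)      # fixed effects plus random-effect variances and their correlation
lme4::ranef(lmerMod)  # each individual's deviations from the overall intercept and slope
coef(lmerMod)$ID      # the resulting per-individual intercepts and trialNo slopes
```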
Generalized least squares has many applications and is well suited to longitudinal data: you specify a form for the correlation of observations within an individual. The nlme package implements that approach. This example assumes a "continuous autoregressive" (CAR1) form for the correlation.
glsMod <- nlme::gls(timeToComplete ~ trialNo, data = trialDat,
                    correlation = nlme::corCAR1(form = ~ trialNo | ID))
The Harrell reference cited above pays particular attention to generalized least squares. His rms package has a Gls() interface to the nlme function.
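A minimal sketch of that interface, fitting the same model as the gls() call above (rms typically wants a datadist set up first):

```r
library(rms)
dd <- datadist(trialDat)
options(datadist = "dd")

rmsGls <- Gls(timeToComplete ~ trialNo, data = trialDat,
              correlation = nlme::corCAR1(form = ~ trialNo | ID))
rmsGls         # coefficients plus the estimated within-individual correlation
anova(rmsGls)  # Wald tests in the usual rms anova format
```

The advantage over plain nlme::gls() is that the fit then works with the rest of the rms toolkit (anova, summary, Predict, plotting).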