I am a data science intern and I have been tasked with testing the time scalability of the schedule builder. Basically, I have collected data and made a bunch of fit lines using the lmfit module. Now I need to run the schedule builder and see how well my fit lines predict the results. Unfortunately, in school I never had to take it further than making the fit lines, so I am a bit unclear on how to proceed. I have been reading about confidence intervals, but my statistics is a little shaky, and I am not after theoretical accuracy so much as testing my model against real results. All of my fit lines look like this:
output = (slope ± slope_error)*input + (intercept ± intercept_error)
So really what I need help with is finding the range of acceptable outputs for a specific input. Can I use normal error propagation? For instance: output = slope*input + intercept ± sqrt(slope_error^2 + intercept_error^2). If my result falls within this range, then my model predicted correctly.
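For what it's worth, here is a minimal sketch of standard first-order error propagation for a line y = slope*x + intercept, assuming the slope and intercept errors are independent. Note that the slope error enters scaled by the input, which differs slightly from the formula above. The function name and the numbers are placeholders, not anything from lmfit:

```python
import numpy as np

def propagated_interval(x, slope, slope_err, intercept, intercept_err, k=1.0):
    """Predicted value and propagated uncertainty band for y = slope*x + intercept.

    Assumes independent parameter errors; k widens the band
    (e.g. k=2 for roughly a 95% interval if the errors are 1-sigma).
    """
    y_pred = slope * x + intercept
    # First-order propagation: the slope error is scaled by x.
    y_err = np.sqrt((x * slope_err) ** 2 + intercept_err ** 2)
    return y_pred - k * y_err, y_pred + k * y_err

# Hypothetical fit values: does a measured runtime fall inside the band?
lo, hi = propagated_interval(x=120, slope=0.85, slope_err=0.03,
                             intercept=4.2, intercept_err=0.9)
measured = 105.0  # placeholder measured value
print(lo <= measured <= hi)
```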
My other thought is that we can use the errors directly to get the acceptable range. For example, the maximum possible value is: output = (slope + slope_error)*input + (intercept + intercept_error)
And the minimum value is: output = (slope - slope_error)*input + (intercept - intercept_error)
So if my result falls in between these two values, then the model is good (a sketch of this idea is below).
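Here is a sketch of that second idea. It evaluates the line at all four combinations of the extreme parameter values, so the bounds also come out right when the input is negative (where the plus and minus corners swap roles). Again, the function name and numbers are placeholders:

```python
def minmax_interval(x, slope, slope_err, intercept, intercept_err):
    """Acceptance range from the extreme parameter values of a fitted line."""
    corners = [
        (slope + slope_err) * x + (intercept + intercept_err),
        (slope + slope_err) * x + (intercept - intercept_err),
        (slope - slope_err) * x + (intercept + intercept_err),
        (slope - slope_err) * x + (intercept - intercept_err),
    ]
    return min(corners), max(corners)

# Hypothetical fit values.
lo, hi = minmax_interval(x=120, slope=0.85, slope_err=0.03,
                         intercept=4.2, intercept_err=0.9)
print(lo, hi)
```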
Or perhaps both of these are incorrect; I am a bit lost right now.

ypred = x*m + n and you will get the predicted value ypred, which you can compare with the real value yreal. The distance metric you use depends on the problem: L1, L2, Mahalanobis... – Elerium115 Dec 07 '21 at 15:29
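A minimal sketch of what this comment suggests, with placeholder data and fit values: compute the predictions, then compare them to the measured results with a distance metric such as L1 or L2 over the residuals.

```python
import numpy as np

# Placeholder inputs and measured runtimes.
x = np.array([10.0, 50.0, 120.0, 400.0])
y_real = np.array([12.1, 46.0, 106.3, 344.9])

m, n = 0.85, 4.2          # hypothetical fitted slope and intercept
y_pred = m * x + n        # predicted values

residuals = y_real - y_pred
l1 = np.sum(np.abs(residuals))       # L1 (total absolute error)
l2 = np.sqrt(np.sum(residuals**2))   # L2 (Euclidean error)
print(l1, l2)
```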