I'll preface this by saying that I haven't taken a stats class yet, so talk to me like a five-year-old. If it matters, the data I'm working with are execution times for a program, and I'm trying to determine a meaningful average execution time.
I have a set of data. My professor says that to remove outliers, I should calculate the standard deviation and remove all values outside the range (mean - 3*std_dev) to (mean + 3*std_dev). Then I should repeat that process on the remaining data (without those outliers) until no more outliers are found, and use the final data set to determine the average execution time. (I've put a sketch of my understanding of the procedure below.)
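In case it helps, here's a small Python/NumPy sketch of what I *think* the professor is describing. This isn't code I was given; the names `trim_outliers` and `execution_times` are just placeholders for illustration:

```python
import numpy as np

def trim_outliers(times):
    """Repeatedly drop values more than 3 standard deviations from the
    mean, recomputing mean and std dev each pass, until none remain."""
    data = np.asarray(times, dtype=float)
    while True:
        mean, std = data.mean(), data.std()
        mask = (data >= mean - 3 * std) & (data <= mean + 3 * std)
        if mask.all():        # no outliers left on this pass
            return data
        data = data[mask]     # drop the outliers and repeat

# usage (execution_times is my list of measured runtimes):
# trimmed = trim_outliers(execution_times)
# average = trimmed.mean()
```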
Why would I use this iterative approach rather than only applying the process once?