The sum of the mean and standard deviation of a non-normal distribution can exceed the value of the largest sample. For a good explanation of why, see "Can mean plus one standard deviation exceed maximum value?".
My related, "going-the-other-way" question is this: if a set of 15,000 samples, all ages between 9 and 20 for a certain criterion, and therefore with a mean plainly no smaller than 9, is claimed to have a standard deviation of 16, what can one conclude about the distribution of those samples?
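For reference, my own back-of-the-envelope check (my arithmetic, not anything stated in the paper) uses Popoviciu's inequality on variances: for any distribution confined to an interval $[a, b]$,

$$\sigma^2 \le \frac{(b-a)^2}{4} = \frac{(20-9)^2}{4} = 30.25, \qquad \text{so} \qquad \sigma \le 5.5,$$

with equality only when the values are split evenly between the two endpoints. If that reasoning is right, a standard deviation of 16 seems flatly impossible here, but I may be overlooking something.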
This is not a hypothetical example; it's taken from a paper published in the 1990s that I'm trying to understand and possibly find flaws in.