I often see people spend too little time planning the experiment and too much time evaluating a corrupted dataset. So I get suspicious and ask: why have you chosen these percentiles? Have you thought about the frequency of corrupted data points, and about their origin, before evaluating the dataset? Do the values 1% and 99% just strengthen the "argument" you are trying to make, or are you being conservative? You should ask yourself these questions and check whether the answers are satisfying.
As for the question itself: state clearly what you are using to evaluate the data. Do not say that you are evaluating the min and max values when you are in fact using the 1% and 99% percentiles. It is also good practice to repeat the evaluation with different cut-off values and to check that the result is robust to the subjective choice of (1%, 99%); see the sketch after the sample code below.
Other than that, I do not take issue with the analysis. Here is sample R code.
## generate fake data
nDays = 200 # we take data for 200 days
nData = 24*2 # we take one data point every 30min => 48 data points per day
data = rnorm(nDays*nData) # fake data
day = factor(rep(1:nDays, each=nData))
## store the data in a data frame
df = data.frame(data, day)
## calculate quantiles for each day
library(dplyr)
q01 = df %>% group_by(day) %>%
  reframe(q = quantile(data, 0.01))
q99 = df %>% group_by(day) %>%
  reframe(q = quantile(data, 0.99))
## plot them
dfq = data.frame(data = c(q01$q, q99$q),
                 grp  = factor(c(rep('1%', nDays), rep('99%', nDays))))
boxplot(data ~ grp, dfq)
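
To check the robustness mentioned above, you can simply repeat the per-day quantile calculation for a few different cut-offs and compare the results. A minimal sketch, continuing from the data frame df built above; the particular set of cut-offs (0.5%, 1%, 2.5%, 5%) is my own illustrative choice:

## sensitivity check: repeat the evaluation for several cut-off choices
## (the cut-offs below are illustrative; df is the data frame from above)
cuts = c(0.005, 0.01, 0.025, 0.05) # lower-tail cut-offs; the upper tail is 1 - cut
qsens = bind_rows(lapply(cuts, function(p)
  df %>% group_by(day) %>%
    reframe(cut = p,
            lo  = quantile(data, p),
            hi  = quantile(data, 1 - p))))
## compare the daily quantiles across cut-offs; if the summaries move a lot,
## the conclusion is sensitive to the subjective (1%, 99%) choice
qsens %>% group_by(cut) %>% summarise(mean_lo = mean(lo), mean_hi = mean(hi))

If these summaries (or whatever downstream conclusion you draw from the daily quantiles) barely change across the rows, the choice of (1%, 99%) is innocuous; if they change noticeably, that sensitivity should be reported.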