Finding the answer to your question is not overly difficult if one is used to proving PSPACE upper bounds. But I don't think an answer can be found in the literature, so here it is:
Given a regular expression r of alphabetic width n, i.e. with n occurrences of alphabet letters, you can enumerate all regular expressions of alphabetic width 1, 2, 3, and so on, in order of increasing width (see the paper by Gruber et al. 2012), and test for each such candidate expression c whether L(c) = L(r). The first expression for which the test succeeds is clearly of minimum alphabetic width.
Apart from the test for language equivalence, this can be implemented using "only" linear space (though it takes exponential time).
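To make this concrete, here is a minimal Python sketch of the enumeration loop. The helpers `alphabetic_width`, `expressions_of_width`, and `equivalent` are hypothetical placeholders, not taken from any existing library: `expressions_of_width(k)` would have to generate all regular expressions with exactly k alphabetic letters over the alphabet of r, and `equivalent` is the language-equivalence test discussed next.

```python
def minimize_regex(r, alphabetic_width, expressions_of_width, equivalent):
    """Return a regular expression of minimum alphabetic width denoting L(r).

    Hypothetical helpers (assumptions, not given in the answer above):
      alphabetic_width(r)      -- number of alphabet-letter occurrences in r
      expressions_of_width(k)  -- generator of all expressions with k letters
      equivalent(c, r)         -- language-equivalence test, L(c) == L(r)
    """
    n = alphabetic_width(r)
    # Enumerate candidates by increasing width; the first equivalent candidate
    # found is optimal by construction.  (The trivial languages with minimum
    # width 0, namely the empty language and {epsilon}, could be handled
    # separately as a special case.)
    for k in range(1, n + 1):
        for c in expressions_of_width(k):
            if equivalent(c, r):
                return c
    return r  # r itself has width n, so this line is only a safety net
```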
The regular expression equivalence test can be done by converting r and c each to a nondeterministic finite automaton and checking whether these automata are equivalent. The latter can be done in PSPACE as follows (this algorithm is mentioned in the paper by Bonchi and Pous 2013): for every word $w$ of length at most $2^m$, where $m$ is the total number of states of the two automata, test whether $w \in L(r)$ if and only if $w \in L(c)$. If all of these exponentially many tests succeed, then $L(r)=L(c)$, since a distinguishing word, if one exists, can always be found among the words of that length (it suffices to reach a distinguishing state in the product of the two determinized automata, which has at most $2^m$ states). To stay within polynomial space, the word is not stored explicitly: one guesses it letter by letter, maintaining only the current sets of reachable states of both automata together with a binary counter for the length, and makes this nondeterministic search deterministic via Savitch's theorem.
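Below is a rough Python sketch of the word-by-word test, under the assumption that an NFA is given as a triple `(delta, start, accept)`, where `delta` maps a pair (state, letter) to a set of successor states. Enumerating all words explicitly of course takes exponential time, and even writing down a word of exponential length needs exponential space, so this is the plain brute-force version of the argument, not the polynomial-space implementation sketched above.

```python
from itertools import product

def accepts(delta, start, accept, word):
    """Subset simulation: does the NFA accept the given word?"""
    current = set(start)
    for letter in word:
        current = {q for p in current for q in delta.get((p, letter), ())}
    return bool(current & accept)

def agree_on_short_words(nfa_r, nfa_c, alphabet, max_len):
    """Check that both NFAs accept exactly the same words of length <= max_len."""
    for length in range(max_len + 1):
        for word in product(alphabet, repeat=length):
            if accepts(*nfa_r, word) != accepts(*nfa_c, word):
                return False  # word distinguishes the two languages
    return True

# Example: both automata below accept (ab)*, so they agree on all short words.
nfa1 = ({(0, 'a'): {1}, (1, 'b'): {0}}, {0}, {0})
nfa2 = ({(0, 'a'): {1}, (1, 'b'): {2}, (2, 'a'): {1}}, {0}, {0, 2})
assert agree_on_short_words(nfa1, nfa2, 'ab', 6)
```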
Notes
The paper by Gruber et al. (2012) also covers the computation of the number of regular languages whose minimal regular expression is of size n, for small values of n; this actually uses an implementation of regular expression minimization. There, the test for language equivalence is not done in polynomial space as described above, but by computing the minimal DFA, which requires exponential space in the worst case. But there are more practical methods for testing the equivalence of NFAs (all of which still seem to use exponential space in the worst case); see
http://languageinclusion.org/
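For illustration, here is a small Python sketch of one standard exponential-space approach, using the same NFA representation as above: explore, on the fly, the product of the two subset (powerset) constructions and report a difference as soon as one component accepts while the other does not. This is only meant to convey the flavour of such methods; it is not the algorithm used by Gruber et al., and the tools behind languageinclusion.org as well as the Bonchi-Pous algorithm add substantial machinery (antichains, bisimulations up to congruence) to prune this kind of search.

```python
from collections import deque

def nfa_equivalent(nfa_r, nfa_c, alphabet):
    """Decide L(r) == L(c) by BFS over pairs of subset-construction states."""
    delta_r, start_r, accept_r = nfa_r
    delta_c, start_c, accept_c = nfa_c

    def step(delta, subset, letter):
        return frozenset(q for p in subset for q in delta.get((p, letter), ()))

    start = (frozenset(start_r), frozenset(start_c))
    seen, queue = {start}, deque([start])
    while queue:
        sub_r, sub_c = queue.popleft()
        if bool(sub_r & accept_r) != bool(sub_c & accept_c):
            return False  # some word reaches this pair and distinguishes them
        for letter in alphabet:
            succ = (step(delta_r, sub_r, letter), step(delta_c, sub_c, letter))
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return True
```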
References
- Hermann Gruber, Jonathan Lee, Jeffrey Shallit: Enumerating regular expressions and their languages. arXiv:1204.4982 [cs.FL], 2012.
- Filippo Bonchi, Damien Pous: Checking NFA equivalence with bisimulations up to congruence. POPL 2013.