A minimum does not exist. However, an infimum does. It follows from the fact that
The supremum of the variance of unimodal distributions defined on $[0,1]$ having mean $\mu$ is $\mu(2 - 3\mu)/3$ ($0 \le \mu \le 1/2$) or $(1-\mu)(3\mu-1)/3$ ($1/2\le \mu \le 1$).
The supremum is actually attained by a distribution that--although it does not have a density function--can still (in a generalized sense) be thought of as "unimodal"; it will have an atom at $0$ (when $\mu \lt 1/2$) or an atom at $1$ (when $\mu \gt 1/2$) but otherwise be uniform.
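Before sketching the argument, here is a quick empirical spot check (an illustration only; the choice of the Beta family and the helper names are mine): the variances of some unimodal Beta$(a,b)$ distributions on $[0,1]$ never exceed the claimed supremum for their mean, with the uniform Beta$(1,1)$ attaining it exactly at $\mu = 1/2$.

```python
# Spot check: unimodal Beta(a, b) distributions (a, b >= 1) on [0, 1]
# have variance at most the claimed supremum for their mean.
def bound(mu):
    # Claimed supremum of the variance for a unimodal distribution
    # on [0, 1] with mean mu.
    if mu <= 0.5:
        return mu * (2 - 3 * mu) / 3
    return (1 - mu) * (3 * mu - 1) / 3

for a, b in [(1, 1), (2, 5), (3, 3), (1.5, 4), (6, 2)]:
    mu = a / (a + b)                              # Beta mean
    var = a * b / ((a + b) ** 2 * (a + b + 1))    # Beta variance
    assert var <= bound(mu) + 1e-12
```

Note that Beta$(1,1)$ (the uniform) gives $\mu = 1/2$, variance $1/12$, which equals the bound $\mu(2-3\mu)/3 = 1/12$ exactly.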
I will sketch the argument. The question asks us to optimize a linear functional
$$\mathcal{L}_{x^2}: D[0,1] \to \mathbb{R}$$
subject to various equality and inequality constraints, where $D[0,1]$ is the set of (signed) measures on the interval $[0,1]$. For differentiable $F:[0,1]\to\mathbb{R}$ and any continuous $g:[0,1]\to\mathbb{R}$, define
$$\mathcal{L}_g[F] = \int_0^1 g(x)\,dF(x),$$
and extend $\mathcal{L}_g$ to all of $D[0,1]$ by continuity.
The equality constraints are
$$\mathcal{L}_1[F] = 1$$
and
$$\mathcal{L}_x[F] = \mu.$$
The inequality constraints are that the density $f = F'$ satisfies
$$f(x) \ge 0$$
and there exists $\lambda \in [0,1]$ (a "mode") such that for all $0\le x \le y \le \lambda$ and all $\lambda \le y \le x \le 1$,
$$f(x) \le f(y).$$
These constraints determine a convex domain $\mathcal{X}\subset D[0,1]$ over which $\mathcal{L}_{x^2}$ is to be optimized.
As with any linear program in a finite dimensional space, the extrema of $\mathcal{L}_g$ will be attained at vertices of $\mathcal{X}$. These vertices evidently are the measures, absolutely continuous with respect to Lebesgue measure, whose densities are piecewise constant, because the vertices are where almost all the inequalities become equalities: and most of those inequalities are associated with the unimodality of $F$ (the monotonicity of $f$ on either side of the mode).
In order to satisfy the two equality constraints we need make only a single break in the graph of $f$, say at a number $0\lt \lambda \lt 1$. Letting the constant value of $f$ on the interval $[0,\lambda)$ be $a$ and its constant value on $(\lambda, 1]$ be $b$, an easy calculation based on the equality constraints yields
$$a = \frac{1+\lambda-2\mu}{\lambda},\ b = \frac{2\mu-\lambda}{1-\lambda}.$$
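These formulas are easy to verify numerically (the helper name below is mine): for any admissible $(\lambda, \mu)$ the resulting piecewise-constant density integrates to $1$ and has mean $\mu$.

```python
# Check that the levels a and b satisfy the two equality constraints:
# total mass 1 and mean mu.
def levels(lam, mu):
    a = (1 + lam - 2 * mu) / lam      # level of f on [0, lam)
    b = (2 * mu - lam) / (1 - lam)    # level of f on (lam, 1]
    return a, b

for lam, mu in [(0.3, 0.4), (0.5, 0.45), (0.7, 0.6)]:
    a, b = levels(lam, mu)
    total = a * lam + b * (1 - lam)               # integral of f over [0, 1]
    mean = a * lam**2 / 2 + b * (1 - lam**2) / 2  # integral of x f(x)
    assert abs(total - 1) < 1e-12
    assert abs(mean - mu) < 1e-12
```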

This figure says it all: it graphs the piecewise-constant density of mean $\mu$ with at most a single break at $\lambda$. (The plot of $f_{(\lambda,\mu)}$ for $\mu \gt 1/2$ looks like the reversal of this one.)
The value of $\mathcal{L}_{x^2}$ at such measures (which I will denote $f_{(\lambda, \mu)}$, the density of a distribution $F_{(\lambda, \mu)}$) is just as readily computed to be
$$\mathcal{L}_{x^2}[f_{(\lambda, \mu)}] = \frac{1}{3}\left(2\mu + (2\mu - 1)\lambda\right).$$
This expression is linear in $\lambda$, implying it is maximized at $\lambda = 0$ (when $\mu \lt 1/2$), at $\lambda = 1$ (when $\mu \gt 1/2$), or at any value of $\lambda$ (when $\mu = 1/2$). However, except when $\mu=1/2$, the limiting measures are no longer absolutely continuous: the corresponding distribution $F = \lim_{\lambda\to 0} F_{(\lambda, \mu)}$ or $F = \lim_{\lambda\to 1} F_{(\lambda, \mu)}$ has a jump discontinuity at $0$ or at $1$ (but not both).
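As a sanity check (the helper name below is mine), one can evaluate the second moment of $f_{(\lambda,\mu)}$ by direct integration and compare it with the closed form just derived.

```python
# Compare the directly integrated second moment of f_(lambda, mu)
# with the closed form (2*mu + (2*mu - 1)*lam) / 3.
def second_moment(lam, mu):
    a = (1 + lam - 2 * mu) / lam      # level of f on [0, lam)
    b = (2 * mu - lam) / (1 - lam)    # level of f on (lam, 1]
    # integral of x^2 f(x) dx over [0, 1]
    return a * lam**3 / 3 + b * (1 - lam**3) / 3

for lam, mu in [(0.25, 0.4), (0.5, 0.45), (0.6, 0.55)]:
    closed_form = (2 * mu + (2 * mu - 1) * lam) / 3
    assert abs(second_moment(lam, mu) - closed_form) < 1e-12
```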

This figure graphs the optimal $F$ for a mean of $\mu \approx 2/5$.
Regardless, the supremum of the variance (for $\mu \le 1/2$) is
$$\sigma^2_\mu = \sup_\lambda \mathcal{L}_{x^2}[f_{(\lambda, \mu)}] - \mu^2 = \frac{2\mu}{3} - \mu^2 = \frac{1}{3}\mu(2 - 3\mu).$$
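One can confirm this value directly from the limiting distribution for $\mu \lt 1/2$: in the limit $\lambda \to 0$ the measure is an atom of mass $1 - 2\mu$ at $0$ plus a uniform density $2\mu$ on $(0,1]$ (the function name below is my own labeling).

```python
# Variance of the limiting distribution for mu < 1/2: an atom of mass
# 1 - 2*mu at 0 together with a uniform density 2*mu on (0, 1].
def limiting_variance(mu):
    mean = 2 * mu * 0.5   # the atom at 0 contributes nothing to the mean
    ex2 = 2 * mu / 3      # integral of x^2 * (2*mu) dx over [0, 1]
    return ex2 - mean ** 2

for mu in [0.1, 0.25, 0.4]:
    assert abs(limiting_variance(mu) - mu * (2 - 3 * mu) / 3) < 1e-12
```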
Consequently, the infimum of $\mu(1-\mu)/\sigma^2$ for $0\lt \mu \le 1/2$ is
$$\mu(1-\mu)/\sigma^2_\mu = \frac{3-3\mu}{2-3\mu},$$
with a comparable expression when $1/2\lt \mu \le 1$ (obtained by replacing $\mu$ by $1-\mu$).
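The resulting bound, as a function of $\mu$, can be tabulated with a few lines of code (the function name is mine; it uses the $\mu \leftrightarrow 1-\mu$ symmetry for $\mu \gt 1/2$). It increases from $3/2$ near the endpoints to $3$ at $\mu = 1/2$.

```python
# Infimum of mu*(1 - mu)/sigma^2 over unimodal distributions with mean mu,
# using the symmetry mu <-> 1 - mu for mu > 1/2.
def ratio_bound(mu):
    m = min(mu, 1 - mu)
    return (3 - 3 * m) / (2 - 3 * m)

assert abs(ratio_bound(0.5) - 3) < 1e-12                    # largest at mu = 1/2
assert abs(ratio_bound(0.25) - ratio_bound(0.75)) < 1e-12   # symmetric in mu
assert abs(ratio_bound(1e-9) - 1.5) < 1e-6                  # tends to 3/2
```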

This figure plots the infimum $\mu(1-\mu)/\sigma^2_\mu$ versus $\mu$.