To allow flexibility in the algebra, and to circumvent the fact that
$$\small
\begin{bmatrix}
\Sigma_{aa} & \Sigma_{ab}\\
\Sigma_{ba} & \Sigma_{bb}
\end{bmatrix}^{-1}
\neq
\begin{bmatrix}
\Sigma_{aa}^{-1} & \Sigma_{ab}^{-1}\\
\Sigma_{ba}^{-1} & \Sigma_{bb}^{-1}
\end{bmatrix}
$$
we replace $\Sigma^{-1}$ by the precision matrix $\boldsymbol\Lambda$:
\begin{align}\boldsymbol \Sigma^{-1} =
\begin{bmatrix}
\Sigma_{aa} & \Sigma_{ab}\\
\Sigma_{ba} & \Sigma_{bb}
\end{bmatrix}^{-1}
=
\begin{bmatrix}
\Lambda_{aa} & \Lambda_{ab}\\
\Lambda_{ba} & \Lambda_{bb}
\end{bmatrix}
\end{align}
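A quick numerical sketch makes the distinction concrete (NumPy; the block sizes, seed, and variable names are my own choices, not from the book): the blocks of $\Lambda$ are read off the full inverse of $\Sigma$, and they do not coincide with the inverses of the individual blocks of $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive-definite covariance matrix,
# partitioned into a 2x2 grid of sub-blocks (sizes are arbitrary).
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)

na = 2  # size of the "a" partition
Sigma_aa, Sigma_ab = Sigma[:na, :na], Sigma[:na, na:]
Sigma_ba, Sigma_bb = Sigma[na:, :na], Sigma[na:, na:]

# Precision matrix: Lambda_aa, Lambda_ab, ... are blocks of the FULL inverse.
Lambda = np.linalg.inv(Sigma)
Lambda_aa = Lambda[:na, :na]

# Inverting the block Sigma_aa on its own gives something different:
print(np.allclose(Lambda_aa, np.linalg.inv(Sigma_aa)))  # False in general
```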
Now we can expand the quadratic exponent of the joint pdf of the partitioned multivariate Gaussian ${\bf x} =\begin{bmatrix}{\bf x}_a & {\bf x}_b\end{bmatrix}^T$:
\begin{align}
-\frac{1}{2}({\bf x} - \boldsymbol\mu)^T\, \Sigma^{-1} \, ({\bf x} - \boldsymbol\mu)
&=
-\frac{1}{2}
\begin{bmatrix}
({\bf x}_a - \boldsymbol\mu_a)^T & ({\bf x}_b - \boldsymbol\mu_b)^T
\end{bmatrix}
\begin{bmatrix}
\Lambda_{aa} & \Lambda_{ab}\\
\Lambda_{ba} & \Lambda_{bb}
\end{bmatrix}
\begin{bmatrix}
{\bf x}_a - \boldsymbol\mu_a \\ {\bf x}_b - \boldsymbol\mu_b
\end{bmatrix}
\\
&=
-\frac{1}{2}\left[({\bf x}_a-\boldsymbol\mu_a)^T\, \Lambda_{aa}({\bf x}_a-\boldsymbol\mu_a) +
2\,({\bf x}_a-\boldsymbol\mu_a)^T\, \Lambda_{ab}({\bf x}_b-\boldsymbol\mu_b)+({\bf x}_b-\boldsymbol\mu_b)^T\, \Lambda_{bb}({\bf x}_b-\boldsymbol\mu_b)\right]
\\
&=
\color{blue}{-\frac{1}{2}({\bf x}_a-\boldsymbol\mu_a)^T\, \Lambda_{aa}({\bf x}_a-\boldsymbol\mu_a)} -
({\bf x}_a-\boldsymbol\mu_a)^T\, \Lambda_{ab}({\bf x}_b-\boldsymbol\mu_b)
-\frac{1}{2} ({\bf x}_b-\boldsymbol\mu_b)^T \Lambda_{bb}({\bf x}_b-\boldsymbol\mu_b)
\end{align}
where the two cross terms have been combined using $\Lambda_{ba} = \Lambda_{ab}^T$, which holds because $\Sigma^{-1}$ is symmetric. The exponent stays quadratic, so the conditional distribution will be Gaussian, and its mean and variance will fully characterize it. The blue color marks the only term that is quadratic in ${\bf x}_a$ (see below).
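To convince yourself that the expansion above loses nothing, both sides can be evaluated at a random point (again a NumPy sketch with arbitrary dimensions and seed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
Lam = np.linalg.inv(Sigma)

na = 2
mu = rng.standard_normal(5)
x = rng.standard_normal(5)
da, db = (x - mu)[:na], (x - mu)[na:]  # x_a - mu_a and x_b - mu_b

Laa, Lab = Lam[:na, :na], Lam[:na, na:]
Lbb = Lam[na:, na:]

lhs = -0.5 * (x - mu) @ Lam @ (x - mu)
# Expanded form: the cross term appears with a factor of 2
# because Lambda_ba = Lambda_ab^T for a symmetric precision matrix.
rhs = -0.5 * (da @ Laa @ da + 2 * da @ Lab @ db + db @ Lbb @ db)
print(np.allclose(lhs, rhs))  # True
```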
"Completing the square" is the step used here to find the mean and variance. In the book, however, the operation appears to be simply expanding the quadratic form and then matching the result against an $ax^2+bx+c$ polynomial form, as noted by @them:
$$\small -\frac{1}{2}({\bf x}- \boldsymbol\mu)^T \Sigma^{-1}({\bf x}-\boldsymbol\mu) =-\frac{1}{2}\left({\bf x}^T\Sigma^{-1}{\bf x} - \boldsymbol\mu^T\Sigma^{-1}{\bf x} - {\bf x}^T \Sigma^{-1} \boldsymbol\mu + \boldsymbol\mu^T\Sigma^{-1}\boldsymbol\mu \right)$$
and noting that ${\bf x}^T\Sigma^{-1}\boldsymbol\mu = \boldsymbol\mu^T\Sigma^{-1}{\bf x}$ (since $\Sigma^{-1}$ is symmetric),
$$\small -\frac{1}{2}({\bf x}- \boldsymbol\mu)^T \Sigma^{-1}({\bf x}-\boldsymbol\mu) =-\frac{1}{2}{\bf x}^T\Sigma^{-1}{\bf x} + {\bf x}^T \Sigma^{-1} \boldsymbol\mu -\frac{1}{2} \boldsymbol\mu^T\Sigma^{-1}\boldsymbol\mu $$
and since $-\frac{1}{2} \boldsymbol\mu^T\Sigma^{-1}\boldsymbol\mu$ does not depend on ${\bf x}$, we can absorb it into a constant $C$:
$$\begin{eqnarray}
\small -\frac{1}{2}({\bf x}- \boldsymbol\mu)^T \Sigma^{-1}({\bf x}-\boldsymbol\mu)
&=&\color{brown}{-\frac{1}{2}{\bf x}^T\Sigma^{-1}{\bf x}} + {\bf x}^T \Sigma^{-1} \boldsymbol\mu +C \qquad{(2.71)}\\
&=&-\frac{1}{2}\begin{bmatrix} {\bf x}_a^T & {\bf x}_b^T\end{bmatrix}\Lambda \begin{bmatrix} {\bf x}_a \\ {\bf x}_b\end{bmatrix} + {\bf x}^T \Sigma^{-1} \boldsymbol\mu +C\\
&=&\color{blue}{-\frac{1}{2}}\begin{bmatrix}\color{blue}{ {\bf x}_a^T} & {\bf x}_b^T\end{bmatrix}
\begin{bmatrix}
\color{blue}{\Lambda_{aa}} & \Lambda_{ab}\\
\Lambda_{ba} & \Lambda_{bb}
\end{bmatrix}
\begin{bmatrix} \color{blue}{{\bf x}_a} \\ {\bf x}_b\end{bmatrix} + {\bf x}^T \Sigma^{-1} \boldsymbol\mu + C
\end{eqnarray}$$
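As a sanity check on the rearrangement in (2.71), both sides can be evaluated at an arbitrary point (a minimal NumPy sketch; the dimension and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)
P = np.linalg.inv(Sigma)  # Sigma^{-1}

mu = rng.standard_normal(4)
x = rng.standard_normal(4)

lhs = -0.5 * (x - mu) @ P @ (x - mu)
C = -0.5 * mu @ P @ mu  # the x-independent constant
rhs = -0.5 * x @ P @ x + x @ P @ mu + C  # form (2.71)
print(np.allclose(lhs, rhs))  # True
```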
When conditioning on ${\bf x}_b$ (acting now as a constant), the term quadratic in ${\bf x}_a$ in the exponent $\small -\frac{1}{2}({\bf x}- \boldsymbol\mu)^T \Sigma^{-1}({\bf x}-\boldsymbol\mu)$, whose coefficient matrix is the inverse of the variance, will be given by the elements colored in blue (compare with the brown term in (2.71) above). This explains the mention of
$$\color{blue}{-\dfrac{1}{2}{\bf x}_a^T\Lambda_{aa}{\bf x}_a }$$
in the book and in the OP. In this expression $\boldsymbol \mu$ has been absorbed into $C$; otherwise we recover the blue-colored expression from the first expansion above.
Hence the variance of $f({\bf x}_a\vert {\bf x}_b)$ will be:
$$\Sigma_{a\vert b} = \Lambda_{aa}^{-1} $$
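This can be cross-checked numerically: $\Lambda_{aa}^{-1}$ equals the Schur complement $\Sigma_{aa}-\Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}$, the other standard expression for the conditional covariance (a sketch under the same arbitrary partitioning as above):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
na = 2

# Conditional covariance as the inverse of the Lambda_aa block.
Lambda = np.linalg.inv(Sigma)
Sigma_a_given_b = np.linalg.inv(Lambda[:na, :na])  # Lambda_aa^{-1}

# Schur complement form of the same conditional covariance.
Saa, Sab = Sigma[:na, :na], Sigma[:na, na:]
Sba, Sbb = Sigma[na:, :na], Sigma[na:, na:]
schur = Saa - Sab @ np.linalg.inv(Sbb) @ Sba

print(np.allclose(Sigma_a_given_b, schur))  # True
```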
At this point the book moves on to the mean.
This link provides the pertinent three pages in Pattern Recognition and Machine Learning by Christopher Bishop.
And here is a link to very pertinent material on completing the square as a technique to derive the marginal and conditional pdf of a multivariate Gaussian.