12

Suppose the matrix $A$ is given by $$A=\left[\begin{array}{ccc} 0.500 & -0.333 & -0.167\\ -0.500 & 0.667 & -0.167\\ -0.500 & -0.333 & 0.833\end{array}\right].$$ The product $G=A^TA$ is then $$G=\left[\begin{array}{ccc}0.750 & -0.333 & -0.417\\ -0.333 & 0.667 & -0.333\\ -0.417 & -0.333 & 0.750\end{array}\right],$$

where $G$ is a Laplacian matrix. Note that the matrices $A$ and $G$ are of rank 2, with the zero eigenvalue corresponding to the eigenvector $1_n=\left[\begin{array}{ccc}1 & 1 & 1\end{array}\right]^T$.

I wonder how one would obtain $A$ if only $G$ is given. I tried the eigendecomposition $G=UEU^T$ and then set $A'=UE^{1/2}$, but obtained a different result. I guess this has to do with the rank deficiency. Could someone explain this? Clearly, the above example is only for illustration; you could consider a general Laplacian matrix decomposition of the above form.


Since, for instance, a Cholesky decomposition could be used to find $G=LL^T$, the decomposition of $G$ could yield many solutions. I'm interested in the solution that can be expressed as $$A=I-1_nw^{T},$$ where $I$ is the $3\times 3$ identity matrix, $1_n=[1~1~1]^T$, and $w$ is some vector satisfying $w^T1_n=1$. If it simplifies matters, you may assume that the entries of $w$ are non-negative.
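To make the non-uniqueness concrete, here is a minimal numpy sketch (added for illustration; it assumes the decimals above are the rounded fractions $w=\left[\tfrac{1}{2}~\tfrac{1}{3}~\tfrac{1}{6}\right]^T$): the eigendecomposition-based factor reproduces $G$ exactly, yet differs from the structured $A$.

```python
import numpy as np

n = 3
w = np.array([1/2, 1/3, 1/6])             # assumed fractions behind the decimals
A = np.eye(n) - np.outer(np.ones(n), w)   # A = I - 1_n w^T
G = A.T @ A                               # the Laplacian above

# Candidate factor from the eigendecomposition G = U E U^T.
E, U = np.linalg.eigh(G)
E = np.clip(E, 0.0, None)                 # clear tiny negative round-off
B = np.diag(np.sqrt(E)) @ U.T             # B^T B = U E U^T = G

print(np.allclose(B.T @ B, G))            # True: a valid factor of G
print(np.allclose(B, A))                  # False: not the structured A
```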

usero
  • I think the comment you added about the representation of $A$ is only partially helpful. It assumes that there is exactly one eigenvalue equal to zero, but the non-determinacy is always there, isn't it? – Wolfgang Bangerth Apr 09 '12 at 21:17
  • @WolfgangBangerth I'm trying to figure out the meaning of "non-determinacy". If it means $\det(A)=0$, that holds for the above example, and I'm not sure whether it can be generalized for $A=I-1_nw^T$. However, except for $n=3$, I doubt that the solution would always exist. – usero Apr 10 '12 at 07:15
  • No, what I meant is that the solution to your problem isn't uniquely determined. I was pointing out that whether the matrix has a zero eigenvalue or not doesn't change the fact that the square root problem has no unique solution. – Wolfgang Bangerth Apr 10 '12 at 20:03

4 Answers

11

We have the Laplacian matrix $G=A^TA$, which for $G\in\mathbb{R}^{n\times n}$ has eigenvalues $\lambda_0\leq\lambda_1\leq\ldots\leq \lambda_{n-1}$, where we always know $\lambda_0 = 0$. Thus the Laplacian matrix is symmetric positive semi-definite but not positive definite, so we have to be careful when we discuss the Cholesky decomposition. The Cholesky decomposition exists for a positive semi-definite matrix, but it is no longer unique. For example, the positive semi-definite matrix $$ A=\left[\!\!\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \!\!\right] $$ has infinitely many Cholesky decompositions $$ A=\left[\!\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \!\right] = \left[\!\begin{array}{cc} 0 & 0 \\ \sin\theta & \cos\theta \end{array} \!\right] \left[\!\begin{array}{cc} 0 & \sin\theta \\ 0 & \cos\theta \end{array} \!\right]=LL^T. $$
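A quick numerical spot check of this one-parameter family (a sketch; any $\theta$ should pass):

```python
import numpy as np

for theta in np.linspace(0.0, 2.0 * np.pi, 9):
    L = np.array([[0.0,           0.0          ],
                  [np.sin(theta), np.cos(theta)]])
    # L L^T = diag(0, 1) for every theta, so each L is a valid factor.
    assert np.allclose(L @ L.T, np.diag([0.0, 1.0]))
```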

However, because we know that the matrix $G$ is a Laplacian matrix, we can actually avoid the more sophisticated linear algebra tools, like Cholesky decompositions or square roots of the positive semi-definite matrix $G$, and still recover $A$. For example, if we have the Laplacian matrix $G\in\mathbb{R}^{4\times 4}$, $$ G=\left[\!\begin{array}{cccc} 3 & -1 & -1 & -1\\ -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \\ \end{array}\!\right], $$ we can use graph theory to recover the desired matrix $A$. We do so by formulating the oriented incidence matrix. If we define the number of edges in the graph to be $m$ and the number of vertices to be $n$, then the oriented incidence matrix $A$ is the $m\times n$ matrix given by $$ A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w, \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w, \\ 0 & \textrm{otherwise}, \end{array} \right. $$ where $e=(v,w)$ denotes the edge which connects the vertices $v$ and $w$. If we take a graph for $G$ with four vertices and three edges, then we have the oriented incidence matrix $$ A = \left[\!\begin{array}{cccc} 1 & -1 & 0 & 0\\ 1 & 0 & -1 & 0 \\ 1 & 0 & 0 & -1 \\ \end{array}\!\right], $$ and we can verify that $G=A^TA$. For the matrix problem you describe, you would construct a graph for $G$ with the same number of edges as vertices; then you should be able to reconstruct the matrix $A$ when you are only given the Laplacian matrix $G$.
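A small sketch of this construction (the helper name `incidence_matrix` is mine, and vertices are indexed from 0):

```python
import numpy as np

def incidence_matrix(edges, n):
    # +1 at the smaller-index endpoint, -1 at the larger one,
    # following the sign convention defined above.
    A = np.zeros((len(edges), n))
    for e, (v, u) in enumerate(edges):
        A[e, min(v, u)] = 1.0
        A[e, max(v, u)] = -1.0
    return A

edges = [(0, 1), (0, 2), (0, 3)]        # the three edges of the example graph
A = incidence_matrix(edges, 4)
G = np.array([[ 3., -1., -1., -1.],
              [-1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [-1.,  0.,  0.,  1.]])
print(np.allclose(A.T @ A, G))          # True
```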

Update:

If we define the diagonal matrix of vertex degrees of a graph as $N$ and the adjacency matrix of the graph as $M$, then the Laplacian matrix $G$ of the graph is defined by $G=N-M$. For example, for the following graph (the star graph with center $v_1$, leaves $v_2$, $v_3$, $v_4$, and edges $e_1$, $e_2$, $e_3$)

we find the Laplacian matrix is $$ G=\left[\!\begin{array}{cccc} 3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array}\!\right] - \left[\!\begin{array}{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array}\!\right]. $$ Now we relate $G$ to the oriented incidence matrix $A$ using the edges and nodes of this graph. Again we find the entries of $A$ from $$ A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w, \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w, \\ 0 & \textrm{otherwise}. \end{array} \right. $$ For example, edge $e_1$ connects the nodes $v_1$ and $v_2$. To determine $A_{e_1,v_1}$ we note that the index of $v_1$ is less than the index of $v_2$ (i.e., we have the case $v<w$ in the definition of $A_{ev}$); thus $A_{e_1,v_1} = 1$. Similarly, by comparing indices, we find $A_{e_1,v_2} = -1$. We give $A$ below in a more explicit way, referencing the edges and vertices of the graph: $$ A = \begin{array}{c|cccc} & v_1 & v_2 & v_3 & v_4 \\ \hline e_1 & 1 & -1 & 0 & 0\\ e_2 & 1 & 0 & -1 & 0 \\ e_3 & 1 & 0 & 0 & -1 \\ \end{array}. $$

Next, we generalize the concept of the Laplacian matrix to a weighted undirected graph. Let $Gr$ be a finite undirected graph defined by $V$ and $E$, its vertex and edge sets respectively. To consider a weighted graph we define a weight function $$ w: V\times V\rightarrow \mathbb{R}^+, $$ which assigns a non-negative real weight to each edge of the graph. We denote the weight attached to the edge connecting vertices $u$ and $v$ by $w(u,v)$. In the case of a weighted graph we define the degree of each vertex $u\in V$ as the sum of all the weighted edges connected to $u$, i.e., $$ d_u = \sum_{v\in V}w(u,v). $$ From the given graph $Gr$ we can define the weighted adjacency matrix $Ad(Gr)$ as the $n\times n$ matrix with rows and columns indexed by $V$ whose entries are given by $w(u,v)$. Let $D(Gr)$ be the diagonal matrix indexed by $V$ with the vertex degrees on the diagonal; then we can form the weighted Laplacian matrix $G$ just as before: $$ G = D(Gr) - Ad(Gr). $$
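Programmatically the weighted Laplacian is straightforward to form; the helper below is a sketch (the function name is mine, and the weight matrix is taken from the example that follows):

```python
import numpy as np

def weighted_laplacian(W):
    # G = D(Gr) - Ad(Gr) for a symmetric weight matrix W with
    # W[u, v] = w(u, v); diagonal entries of W are loop weights.
    return np.diag(W.sum(axis=1)) - W

W = np.array([[1/2,  1/3, 5/12],
              [1/3,  1/3, 1/3 ],
              [5/12, 1/3, 1/6 ]])
G = weighted_laplacian(W)
print(G @ np.ones(3))   # ~[0 0 0]: rows of a Laplacian sum to zero
```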

In the problem from the original post we know $$ G=\left[\!\begin{array}{ccc} \tfrac{3}{4} & -\tfrac{1}{3} & -\tfrac{5}{12} \\ -\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{5}{12} & -\tfrac{1}{3} & \tfrac{3}{4} \\ \end{array}\!\right]. $$ From the comments we know that we seek a factorization $G=A^TA$ in which $A$ is of the form $A=I-1_nw^T$ with $w^T1_n=1$. For full generality, assume the matrix $A$ has no zero entries. Then, if we formulate a weighted oriented incidence matrix to find $A$, we want the weighted adjacency matrix $Ad(Gr)$ to have no zero entries either, i.e., the weighted graph will have loops. Actually calculating the weighted oriented incidence matrix seems difficult (although that may simply be a result of my inexperience with weighted graphs). However, we can find a factorization of the form we seek in an ad hoc way if we assume we know something about the loops in our graph. We split the weighted Laplacian matrix $G$ into the degree and adjacency matrices as follows: $$ G=\left[\!\begin{array}{ccc} \tfrac{5}{4} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{11}{12} \\ \end{array}\!\right]-\left[\!\begin{array}{ccc} \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{5}{12} \\ \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\ \tfrac{5}{12} & \tfrac{1}{3} & \tfrac{1}{6} \\ \end{array}\!\right] = D(Gr)-Ad(Gr). $$
Thus we know the loops on $v_1$, $v_2$, and $v_3$ have weights $1/2$, $1/3$, and $1/6$, respectively. If we collect the loop weights into a vector $w = \left[\tfrac{1}{2}~\tfrac{1}{3}~\tfrac{1}{6}\right]^T$, then we can recover the matrix $A$ we want in the desired form $$ A = I-1_nw^T = \left[\!\begin{array}{ccc} \tfrac{1}{2} & -\tfrac{1}{3} & -\tfrac{1}{6} \\ -\tfrac{1}{2} & \tfrac{2}{3} & -\tfrac{1}{6} \\ -\tfrac{1}{2} & -\tfrac{1}{3} & \tfrac{5}{6} \\ \end{array}\!\right]. $$
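A quick numerical check of this recovery (a sketch using the exact fractions above):

```python
import numpy as np

w = np.array([1/2, 1/3, 1/6])               # loop weights read off D(Gr) - Ad(Gr)
A = np.eye(3) - np.outer(np.ones(3), w)     # A = I - 1_n w^T
G = np.array([[ 3/4,  -1/3, -5/12],
              [-1/3,   2/3, -1/3 ],
              [-5/12, -1/3,  3/4 ]])
print(np.allclose(A.T @ A, G))              # True
```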

It appears that if we have knowledge of the loops in our weighted graph, we can find the matrix $A$ in the desired form. Again, this was done in an ad hoc manner (as I am not a graph theorist), so it may be a hack that worked only for this simple problem.

Andrew Winters
  • Recovering $A$ in the case of a Laplacian $G$ as you describe should not be a problem (in your example, only one row/column contains non-zero elements). I guess matters get complicated with a general "full" Laplacian, as in my example. Given the $O(n^2)$ degrees of freedom of $G$, I'm not sure whether a solution can be obtained subject to the constraints I give above. – usero Apr 09 '12 at 16:05
  • Right, the example $G$ I give is highly simplified, but it will be possible in the general case where $G$ is a full matrix. The graph theory problem becomes more complicated as you increase the number of edges and vertices. In essence, we replace a difficult matrix factorization problem with a difficult graph theory problem. – Andrew Winters Apr 09 '12 at 17:45
  • OK, I would appreciate it if you could try to reconstruct $A$ as given above from the given $G$. – usero Apr 09 '12 at 19:25
  • @AndrewWinters: Could you clarify how $A$ is determined from $G$? It's not clear to me how your algorithm proceeds in the general case. – Geoff Oxberry Apr 09 '12 at 19:35
  • @AndrewWinters: Just keep in mind the decomposition form I'm looking for, $A=I-1_nw^T$, $w^T1_n=1$. For $n>3$, one would have a hard time decomposing a given $G$ to get such an $A$. – usero Apr 10 '12 at 07:17
  • @usero I updated my post to address the problem in your original post. You're correct that getting the decomposition form may be difficult (I was only able to obtain it if I assumed I knew something about the structure of the weighted graph). – Andrew Winters Apr 10 '12 at 13:47
  • Still, with $\frac{1}{2}n(n-1)$ degrees of freedom in a general $G$, a solution with the $n$ degrees of freedom of $w$ might not always exist (or might it?). The decomposition you posted is fine; you could get it more easily by just considering $A$ from the very beginning of my question. Would you be able to do so for a general $G$? Is it hard, or not always possible? – usero Apr 10 '12 at 13:58
  • I don't think it will be possible for a general $G$, as it seems that the specific factorization form $A=I-1_nw^T$ will only exist for certain types of weighted graphs. So the Laplacian matrices $G$ that are of the form $G=A^TA=(I-1_nw^T)^T(I-1_nw^T)$ are a subset of all possible Laplacian matrices. – Andrew Winters Apr 10 '12 at 14:07
  • That's the same conclusion I reached. Thanks for the effort. – usero Apr 10 '12 at 14:10
  • Is there such a factorization for the cotangent Laplacian used in mesh analysis? – max Apr 22 '21 at 14:43
9

I think that you are confusing the unique matrix square root of a Hermitian positive semi-definite matrix $A$, i.e., the Hermitian positive semi-definite matrix $B$ satisfying,

$$ B^2 = A, $$

with the non-unique problem of finding a matrix $C$ satisfying

$$ C^H C = A, $$

where clearly the mapping $C \mapsto Q C$, for any unitary $Q$, preserves the identity. As you noticed, a Cholesky factorization provides one possible solution. However, note that Cholesky only works for Hermitian positive-definite matrices (with the possible exception of a Hermitian positive semi-definite matrix which is positive-definite if the last row and column are removed).
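A short numerical illustration of this unitary freedom (a sketch; the random real orthogonal $Q$ is a special case of a unitary matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
A = C.T @ C                                       # A = C^H C (real case)

Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal Q
print(np.allclose((Q @ C).T @ (Q @ C), A))        # True: Q C is another factor
```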

Lastly, one can constructively define the unique matrix square-root of a Hermitian positive semi-definite matrix through its eigenvalue decomposition, say

$$ A = U \Lambda U^H, $$

where $U$ is unitary and $\Lambda$ is diagonal with non-negative entries due to $A$ being positive semi-definite. The Hermitian matrix square-root can easily be identified as

$$ B = U \sqrt{\Lambda} U^H. $$
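In code, this construction takes only a few lines (a sketch; the helper name `psd_sqrt` and the $2\times 2$ test matrix are mine):

```python
import numpy as np

def psd_sqrt(A):
    # B = U sqrt(Lambda) U^H, the unique Hermitian PSD square root.
    lam, U = np.linalg.eigh(A)
    lam = np.clip(lam, 0.0, None)            # clear round-off negatives
    return (U * np.sqrt(lam)) @ U.conj().T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = psd_sqrt(A)
print(np.allclose(B @ B, A))                 # True
print(np.allclose(B, B.conj().T))            # True: B is Hermitian
```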

Jack Poulson
  • You're right about the matrix square root. Clearly, there are different ways to achieve the factorization, but can one guarantee that $A$ (from my question) can be written as I formulated? – usero Apr 10 '12 at 19:26
6

In essence, what you're asking is to find a square root $A$ of a matrix $G$, so that $$G = A^T A.$$ There are many ways to do that if $G$ is a symmetric matrix. For example, if $G$ is symmetric, then the Cholesky decomposition $G=L^TL$ provides you with one answer, $A=L$. But you already found another answer, namely the matrix $A$ you started with. What this simply means is that there are many "square roots" of the matrix $G$, and if you want one particular one, you need to rephrase the question in such a way that you specify the structural properties of the "branch" of the square root that you're interested in.

I would say that this situation is not dissimilar to taking square roots of real numbers: there, too, you generally have two roots, and you have to say which one you want in order to make the answer unique.

Wolfgang Bangerth
  • You're definitely right. Another way would be the spectral decomposition approach as I state above. I've made an edit to make the solution unique. Hopefully it won't complicate matters. – usero Apr 09 '12 at 11:56
  • Does a solution with the constraint I give above always exist? Perhaps it holds only in some cases, and not in general. – usero Apr 09 '12 at 14:23
  • Actually, Cholesky does not work in his case, as it (essentially) requires that the matrix is Hermitian positive-definite. – Jack Poulson Apr 10 '12 at 21:51
4

I think you can apply the $LDL^{T}$ factorization to your matrix $G$. Since your matrix has non-negative eigenvalues, the diagonal matrix $D$ will have non-negative entries along the diagonal. Then you can easily form $\hat{D} = \sqrt{D}$, and the factor $A = \hat{D}L^{T}$ gives you $A^TA = L\hat{D}\hat{D}L^{T} = LDL^{T} = G$. The eigendecomposition is not numerically stable, so I think you should avoid that kind of decomposition.
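A hedged sketch of this route in Python using `scipy.linalg.ldl` (an assumption on my part: like MATLAB's `ldl`, scipy's routine pivots and may return $2\times 2$ blocks in $D$, as the comments below discuss, so the square root is only taken when $D$ comes out diagonal and non-negative):

```python
import numpy as np
from scipy.linalg import ldl

G = np.array([[ 3/4,  -1/3, -5/12],
              [-1/3,   2/3, -1/3 ],
              [-5/12, -1/3,  3/4 ]])

L, D, perm = ldl(G)                          # G = L @ D @ L.T; L absorbs the pivoting
d = np.diag(D)
if np.allclose(D, np.diag(d)) and (d >= -1e-12).all():
    A = np.diag(np.sqrt(np.clip(d, 0.0, None))) @ L.T
    print(np.allclose(A.T @ A, G))           # True: A^T A = L D L^T = G
```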

Willowbrook
  • I have to disagree on two counts: (1) $LDL^T$ factorization does not work for singular matrices (his matrix is singular), and (2) eigenvalue decompositions of Hermitian matrices are considered stable, as their eigenvector matrices are unitary. – Jack Poulson Apr 11 '12 at 18:36
  • @JackPoulson I tried a singular matrix $A$ in MATLAB and ran ldl; it works. The zero eigenvalues correspond to the zeros on the diagonal of $D$. – Willowbrook Apr 11 '12 at 19:10
  • I think that you will find that MATLAB's ldl routine computes a block $LDL^T$ decomposition with pivoting, i.e., $PAP^T=LDL^T$, where $D$ is not required to be diagonal (it can have $2 \times 2$ blocks). In order to avoid division by zero due to the matrix being singular, the zero diagonal entries are pivoted to the bottom-right of the matrix. – Jack Poulson Apr 11 '12 at 19:31