I am struggling to understand why eigentriplets arise when decomposing a signal using singular spectrum analysis (SSA).
The term eigentriple refers to one triple of components from a singular value decomposition (U, S, V).
When two reconstructed components are almost identical, they are said to form an "eigentriplet", because their eigentriples are almost the same.
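To make this concrete, here is a minimal toy sketch (my own, separate from the code further down, with its own variable names) suggesting where such pairs come from: a single sinusoid already produces two eigentriples with nearly equal eigenvalues, because its lagged copies span a two-dimensional (sine/cosine) subspace.

```python
import numpy as np

def lag_cov_eigvals(series, window):
    """Descending eigenvalues of the lag-covariance matrix of a series."""
    K = len(series) - window + 1
    # Trajectory matrix: column m holds the series lagged by m samples
    X = np.column_stack([series[m:K + m] for m in range(window)])
    return np.linalg.eigvalsh(np.dot(X.T, X) / K)[::-1]

# A pure sinusoid (period 20) over 1000 samples, window length 100:
ev = lag_cov_eigvals(np.sin(2 * np.pi * np.arange(1000) / 20), 100)
# ev[0] and ev[1] come out nearly equal (roughly half the signal's
# energy each), while ev[2:] are close to zero -- one sinusoid,
# one pair of eigentriples.
```

This is only an illustration, not a proof, but it hints at why periodic components tend to show up as pairs.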
To clarify what is happening and why I am still confused, here is some example code and its output:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Number of elements in the series
N = 1000
# Series abscissa
n = np.linspace(0, 360*10, N)
"""
s0 is a linear trend going from 0 to 3.6
s1 is a periodic sinusoid with magnitude 2
s2 is a damped sinusoid starting at magnitude 2 and decaying
s is the sum of all of the above plus random noise
"""
s0 = n/1000
s1 = 2*np.sin(10*n)
s2 = 2*np.exp(-n/N)*np.sin(50*n)
s = s0 + s1 + s2 + np.random.normal(0, 1, N)

plt.figure()
plt.plot(n, s)
plt.xlabel("n")
plt.ylabel("Time Series")
plt.title("Example for Stack Exchange")
Then this signal is taken through the SSA algorithm with an (assumed optimal) window length of N/2. So it goes through this code:
# Window length L and lagged length K
L = 500-1
K = N - L + 1
# Constructing the time-lagged Hankel (trajectory) matrix
X = np.zeros((K, L))
for m in range(0, L):
    X[:, m] = s[m:K+m]
# Lag-covariance matrix of the trajectory matrix
Cemb = np.dot(X.T, X)/K
# Eigendecomposition (Cemb is symmetric, so eigh is the safe choice)
eigenValues, eigenVectors = np.linalg.eigh(Cemb)
idx = eigenValues.argsort()[::-1]
eigenValues = eigenValues[idx]
eigenVectors = eigenVectors[:, idx]
# Principal components
PC = np.dot(X, eigenVectors)
# Pre-allocating the reconstructed-component matrix
RC = np.zeros((N, L))
# Reconstruct from the elementary matrices without storing them:
# flip each elementary matrix, then average its diagonals
# (i.e., the anti-diagonals of the original matrix)
for i in range(L):
    myBuf = np.outer(PC[:, i], eigenVectors[:, i])
    myBuf = myBuf[::-1]
    RC[:, i] = [myBuf.diagonal(j).mean()
                for j in range(-myBuf.shape[0]+1, myBuf.shape[1])]
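As a sanity check on the diagonal-averaging step, here is a self-contained sketch (toy data and my own helper name, so it does not disturb the variables above) verifying that the RCs sum back to the original series exactly, since the elementary matrices sum to the trajectory matrix:

```python
import numpy as np

def ssa_rc_sum(series, window):
    """Build all RCs by diagonal averaging and return their sum."""
    N = len(series)
    K = N - window + 1
    X = np.column_stack([series[m:K + m] for m in range(window)])
    vals, vecs = np.linalg.eigh(np.dot(X.T, X) / K)
    PC = np.dot(X, vecs)
    RC = np.zeros((N, window))
    for i in range(window):
        # Flip the elementary matrix, then average its diagonals
        buf = np.outer(PC[:, i], vecs[:, i])[::-1]
        RC[:, i] = [buf.diagonal(j).mean()
                    for j in range(-buf.shape[0] + 1, buf.shape[1])]
    return RC.sum(axis=1)

toy = np.random.default_rng(0).normal(size=200)
print(np.allclose(ssa_rc_sum(toy, 50), toy))  # True
```

The reconstruction is lossless because the eigenvectors form an orthonormal basis, so the elementary matrices add up to the (Hankel) trajectory matrix, whose anti-diagonals are constant.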
# Plot the first 6 reconstructed components
fig, ax = plt.subplots(3, 2)
ax = ax.flatten()
for i in range(0, 6):
    ax[i].plot(RC[:, i])
    ax[i].set_title(str(i))
plt.tight_layout()
We get these results, where we start to see that components 1 and 2 are the same, as are components 3 and 4. I can also see that the magnitudes of s1 and s2 are split between those pairs. This is what I am not getting: why exactly is this happening?
Here is also the pairwise weighted correlation of these vectors:
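For readers who want to reproduce such a plot, here is a hedged sketch of the usual w-correlation computation (the helper name wcorr and its layout are my own); the weights count how often each time index appears on the anti-diagonals of the trajectory matrix:

```python
import numpy as np

def wcorr(RC, L):
    """Weighted correlation matrix between reconstructed components."""
    N = RC.shape[0]
    K = N - L + 1
    # w[k] = multiplicity of index k on the anti-diagonals:
    # rises to min(L, K), plateaus, then falls off
    w = np.minimum(np.minimum(np.arange(1, N + 1), min(L, K)),
                   np.arange(N, 0, -1))
    norms = np.sqrt(np.einsum('k,ki,ki->i', w, RC, RC))
    G = np.einsum('k,ki,kj->ij', w, RC, RC)  # weighted inner products
    return G / np.outer(norms, norms)
```

Calling wcorr(RC, L) on the RC matrix from the code above should produce the matrix behind a plot like this, with near-1 off-diagonal entries flagging the paired components.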
For references, please see:
https://en.wikipedia.org/wiki/Singular_spectrum_analysis
Any help is appreciated, thank you so much for reading! :D



The word eigentriplets is, for clarification, a special case of eigentriples, whereby the reconstructed components of the eigentriplets are the same (or nearly so). – Tino D Sep 09 '21 at 13:46

Then there are two ways to go further: either SVD or EVD. With SVD, we get three components (U, S, V). These are called eigentriples and are then used to build the reconstructed components. If two RCs are very similar (even though they were calculated from different eigentriples), they are called eigentriplets (twin eigentriples). What I am struggling with is understanding why eigentriplets arise in the first place. – Tino D Sep 10 '21 at 08:27
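On the SVD-vs-EVD point in the comments: the two routes agree, as a small self-contained sketch (toy data, my own names) can confirm; the squared singular values of the trajectory matrix, divided by K, match the eigenvalues of the lag-covariance matrix, so the eigentriples are the same either way.

```python
import numpy as np

def svd_evd_agree(series, window):
    """Compare singular values of X with eigenvalues of X.T X / K."""
    K = len(series) - window + 1
    X = np.column_stack([series[m:K + m] for m in range(window)])
    sing = np.linalg.svd(X, compute_uv=False)            # descending
    lam = np.sort(np.linalg.eigvalsh(np.dot(X.T, X) / K))[::-1]
    return np.allclose(sing**2 / K, lam)

toy = np.random.default_rng(1).normal(size=200)
print(svd_evd_agree(toy, 50))  # True
```

So whichever decomposition is used, the paired (near-equal) eigenvalues, and hence the twin eigentriples, appear identically.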