
I am struggling to understand why eigentriplets arise when decomposing a signal using singular spectrum analysis (SSA).

The term eigentriples refers to the components (U, S, V) of a singular value decomposition.
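
For concreteness, this is roughly what I understand an eigentriple to be (just a sketch, with a random stand-in matrix A in place of the trajectory matrix built below):

import numpy as np

A = np.random.randn(502, 499)              # random stand-in for a trajectory matrix
U, S, Vt = np.linalg.svd(A, full_matrices=False)
i = 0
eigentriple_i = (S[i], U[:, i], Vt[i, :])  # i-th eigentriple: (singular value, left and right singular vectors)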

When two reconstructed components are almost identical, the eigentriples that produced them are said to form an "eigentriplet" (a pair of twin eigentriples).

To clarify what is happening and where my confusion comes from, here is an example code and its output:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Number of elements in the series
N = 1000

# Series abscissa
n = np.linspace(0, 360*10, N)

# s0 is a linear vector with values going linearly from 0 to 3.6
# s1 is a periodic sinusoid with magnitude 2
# s2 is a damped sinusoid with values starting at 2 and decreasing
# s is the sum of all of the above signals plus random noise
s0 = n/1000
s1 = 2*np.sin(10*n)
s2 = 2*np.exp(-n/N)*np.sin(50*n)
s = s0 + s1 + s2 + np.random.normal(0, 1, N)

plt.figure()
plt.plot(n, s)
plt.xlabel("n")
plt.ylabel("Time Series")
plt.title("Example for Stack Exchange")

[Figure: the example time series]

This signal is then taken through the SSA algorithm with an optimal window length of N/2, using the following code:

# Getting window length L and lagged length K
L = 500-1
K = N - L + 1

# Constructing the time-lagged Hankel (trajectory) matrix
X = np.zeros((K, L))
for m in range(0, L):
    X[:, m] = s[m:K+m]

# Lag-covariance matrix
Cemb = np.dot(X.T, X)/K

# Eigendecomposition
eigenValues, eigenVectors = np.linalg.eig(Cemb)
idx = eigenValues.argsort()[::-1]
eigenValues = eigenValues[idx]
eigenVectors = eigenVectors[:, idx]
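
(As a side note, I believe the EVD route above carries the same information as the SVD eigentriples. A small sanity check, assuming the standard relation between the eigenvalues of Cemb and the singular values of X, would be something like this:)

# Sanity check (sketch): the eigenvalues of Cemb = X.T @ X / K should equal
# sigma_i**2 / K, and the eigenvectors should match the right singular
# vectors of X up to sign (sign flips are not checked here)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print(np.allclose(np.real(eigenValues), S**2 / K))  # should print True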

# Vectors of principal components

PC = np.dot(X,eigenVectors)

# Pre-allocating the reconstructed component (RC) matrix

RC = np.zeros((N, L))

# Reconstruct the elementary matrices without storing them (diagonal averaging)
for i in range(L):
    myBuf = np.outer(PC[:, i], eigenVectors[:, i].T)
    myBuf = myBuf[::-1]
    RC[:, i] = [myBuf.diagonal(j).mean()
                for j in range(-myBuf.shape[0]+1, myBuf.shape[1])]
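
(To convince myself that the diagonal averaging above does what I think, here is a tiny toy example of the same anti-diagonal averaging, just as a sketch:)

# Toy example of diagonal averaging (Hankelization): the reconstructed
# element at index j is the mean over the j-th anti-diagonal
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
flipped = A[::-1]
rec = [flipped.diagonal(j).mean()
       for j in range(-flipped.shape[0] + 1, flipped.shape[1])]
print(rec)  # [1.0, 2.5, 4.5, 6.0]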

# Plotting the first 6 RCs
fig, ax = plt.subplots(3, 2)
ax = ax.flatten()
for i in range(0, 6):
    ax[i].plot(RC[:, i])
    ax[i].set_title(str(i))
plt.tight_layout()

[Figure: the first six reconstructed components]

We get these results, where we start to see that components 1 and 2 are nearly the same, as are components 3 and 4. I can also see that the magnitudes of s1 and s2 are split between those pairs. This is what I am not getting: why exactly is this happening?

Here is also the pairwise weighted correlation of these vectors:

[Figure: heatmap of the pairwise weighted correlations of the RCs]
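
For reference, this is roughly how I computed the weighted correlation (a sketch, assuming the standard SSA weights w_i = min(i+1, min(L, K), N-i), i.e. the number of times each sample appears in the trajectory matrix):

# Weights: how often each time index appears in the trajectory matrix
tt = np.arange(N)
w = np.minimum(np.minimum(tt + 1, min(L, K)), N - tt)

# w-correlation between the first few reconstructed components
m = 6
norms = np.sqrt(np.array([np.sum(w * RC[:, i]**2) for i in range(m)]))
Wcorr = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        Wcorr[i, j] = np.sum(w * RC[:, i] * RC[:, j]) / (norms[i] * norms[j])

plt.figure()
sns.heatmap(Wcorr, annot=True)
plt.title("w-correlation of the first 6 RCs")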

For references, please see:

https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/58967/versions/2/previews/html/SSA_beginners_guide_v7.html

https://en.wikipedia.org/wiki/Singular_spectrum_analysis

Any help is appreciated, thank you so much for reading! :D

Tino D
  • You ask "why is this happening." The referent of "this" appears to be your original statement, "why eigentriplets arise." But isn't that a given? When you perform SVD, by construction it produces U, S, and V. What, then, are you looking for in an answer? – whuber Sep 09 '21 at 13:43
  • @whuber As far as I understand it, the term eigentriplets is only used when the reconstructed components are very similar. I want to know why the eigentriplets appear; is it due to harmonics?

    The word eigentriplets is, for clarification, a special case of eigentriples, whereby the reconstructed components of the eigentriplets are the same (or nearly so).

    – Tino D Sep 09 '21 at 13:46
  • I am having trouble finding any authoritative definition of "eigentriplet" (the word doesn't appear in either of your references) and your accounts are vague: would you mind explaining what you mean by "the same" and "reconstructed components"? Yes, I can see you produce some graphics that have qualitatively similar appearances, but what they mean and their relationships to SVD are obscure. I would guess you might be trying to ask something about degeneracies or near-degeneracies in an eigendecomposition, but that's only a guess. – whuber Sep 09 '21 at 13:54
  • SSA decomposes a signal into additive "reconstructed components" (RCs). To do this, one must first hankelize the data with a window.

    Then there are two ways to go further, either SVD or EVD.

    With SVD, one gets three components (U, S, V). These are called eigentriples, and they are then used to build the reconstructed components.

    If two RCs are very similar (even though they were calculated using different eigentriples), they are called eigentriplets (twin eigentriples).

    What I am struggling with is understanding why eigentriplets arise in the first place.

    – Tino D Sep 10 '21 at 08:27
  • Could you please explain what "very similar" means? How is similarity measured? And please tell us what you mean by "arise:" this sounds like you are asking for insight into whatever process it is that you are analyzing, rather than asking a statistical question. – whuber Sep 10 '21 at 13:43
  • From the pairwise weighted correlation I calculated and plotted afterwards, components 1 and 2 have a correlation of almost 1, as do 3 and 4 (see heatmap). My question is why the signal splits into two parts instead of one. This is the "arise" part. Maybe it has something to do with degeneracies, like you mentioned? – Tino D Sep 10 '21 at 16:40
