Questions tagged [linear-algebra]

A field of mathematics concerned with the study of finite-dimensional vector spaces, including matrices and their manipulation, which are important in statistics.

Overview

Linear algebra is the field of mathematics concerned with the study of finite-dimensional vector spaces. Matrices and their manipulation, which form the algorithmic part of linear algebra, are particularly important in statistics.

References

The following are introductory references:

A thread focused on linear algebra references useful for applied statistics is the following:

The following threads from math.se also have lists of references:

671 questions
7
votes
1 answer

Can I get a Cholesky decomposition from the inverse of a matrix?

I have the inverse of a giant covariance matrix from which I'd like to draw random instances. The way I know how to do this is to do a Cholesky decomposition on the covariance matrix and use it to transform a vector of independent Gaussians. So…
Chuck37
  • 71
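A minimal NumPy sketch of the standard trick for this situation (sizes and names here are illustrative, not from the question): if the precision matrix $\Lambda = \Sigma^{-1}$ has Cholesky factor $\Lambda = LL^T$, then solving $L^T x = z$ for standard-normal $z$ gives draws with covariance $\Sigma$, without ever inverting the giant matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small example: build an SPD covariance and take its
# inverse, mimicking a situation where only the precision is at hand.
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)      # target covariance (SPD)
Lambda = np.linalg.inv(Sigma)        # the precision matrix we start from

# Cholesky of the precision matrix: Lambda = L @ L.T
L = np.linalg.cholesky(Lambda)

# Draw samples by solving L.T @ x = z for standard-normal z:
# cov(x) = (L @ L.T)^{-1} = Lambda^{-1} = Sigma, no extra inversion.
z = rng.standard_normal((4, 200_000))
x = np.linalg.solve(L.T, z)

emp = np.cov(x)                      # empirical covariance of the draws
print(np.max(np.abs(emp - Sigma)))   # small
```

The same solve can be done with a triangular solver for speed; the point is that the Cholesky factor of the *precision* already encodes everything needed to sample.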
6
votes
1 answer

Interpreting a matrix calculation

I recently came across this problem; although trivial to compute by hand, it is a little challenging for me to interpret. Notably, we have three matrices: $$\vec{c}= \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix},\hspace{0.2in} \vec{x}= …
Workhorse
  • 163
  • 4
5
votes
2 answers

Is there a book on applied linear algebra?

I am trying to work through a book on linear algebra by Serge Lang. I see a whole bunch of rules, formulae, and theorems, but I don't understand how each of these things is applied in a real-world context. I am studying linear algebra only because I…
Victor
  • 6,565
4
votes
1 answer

Chebyshev inequality in terms of RMS

I'm self-studying the book Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares. On page 48, the author writes: "It says, for example, that no more than 1/25 = 4% of the entries of a vector can exceed its RMS value by more than…
H. Yong
  • 43
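The inequality being quoted is the Chebyshev-style bound from that book: the fraction of entries of a vector with $|x_i| \ge a$ is at most $(\mathrm{rms}(x)/a)^2$, so at most $1/k^2$ of the entries can exceed $k$ times the RMS value. A quick NumPy check (the data here is arbitrary, just to exercise the bound):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

rms = np.sqrt(np.mean(x**2))
k = 5  # threshold in units of the RMS value

# Bound: at most 1/k^2 of the entries can have |x_i| >= k * rms(x).
frac = np.mean(np.abs(x) >= k * rms)
bound = 1 / k**2
print(frac, bound)  # frac is guaranteed <= bound
```

With $k = 5$ the bound is $1/25 = 4\%$, matching the sentence the asker quotes.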
3
votes
2 answers

What's the intuition for dot product as it is used in statistics?

I know how to CALCULATE the dot product, but I still don't understand its MEANING when it is used in the context of statistics. I mean, if I have to explain vector addition, I would say that "the sum of two vectors gives you a third vector that has…
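One common statistical reading of the dot product, sketched in NumPy (the data is synthetic): for mean-centered vectors, the dot product is $(n-1)$ times the sample covariance, and the cosine of the angle between the centered vectors is exactly the sample correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = 0.6 * x + rng.standard_normal(500)

# Center both vectors; the dot product of centered vectors is
# (n - 1) times the sample covariance.
xc, yc = x - x.mean(), y - y.mean()
cov_via_dot = xc @ yc / (len(x) - 1)

# The correlation is the cosine of the angle between centered vectors.
corr_via_cos = (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(np.allclose(cov_via_dot, np.cov(x, y)[0, 1]))        # True
print(np.allclose(corr_via_cos, np.corrcoef(x, y)[0, 1]))  # True
```

So "how aligned are these two centered vectors" and "how correlated are these two variables" are the same computation.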
3
votes
1 answer

eigendecomposition of a covariance matrix

For a random variable $X = (x_1,x_2,\ldots,x_n)^T$, I understand that the entries of the covariance matrix would just be the covariance of $x_i$ and $x_j$, but how do I find the eigenvalues and eigenvectors after that, and how does that turn into…
d.zhu
  • 31
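A short NumPy sketch of the mechanics being asked about (the data and dimensions are illustrative): form the covariance matrix, then use the symmetric eigensolver; the eigenvectors are the principal directions and the eigenvalues the variances along them, which is what PCA builds on.

```python
import numpy as np

rng = np.random.default_rng(3)
# 1000 observations of a 3-dimensional random vector
X = rng.standard_normal((1000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                               [0.5, 1.0, 0.0],
                                               [0.0, 0.0, 0.3]])

C = np.cov(X, rowvar=False)   # C[i, j] = sample cov of x_i and x_j

# eigh is the right routine for symmetric matrices; it returns
# eigenvalues in ascending order and orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(C)

# Check the decomposition: C = V diag(w) V^T
assert np.allclose(eigvecs @ np.diag(eigvals) @ eigvecs.T, C)

# PCA reading: eigenvectors = principal directions,
# eigenvalues = variance explained along each direction.
print(eigvals[::-1])  # largest variance first
```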
3
votes
1 answer

Randomly sample bounds from many multi-dimensional points

Let $\mathbb{M}$ be an $m\times n$ matrix, with $m < n$. My data consists of a large list of points $\mathbf{x}_i\in\mathbb{R}^n$, $i=1,...,N$, where each point satisfies $\mathbb{M}\cdot\mathbf{x}_i = 0$ and $\mathbf{A} < \mathbf{x} < \mathbf{B}$,…
a06e
  • 4,410
  • 1
  • 22
  • 50
2
votes
1 answer

Linear algebra use case

I am taking a machine learning course, and I would like to know in which cases we use linear algebra and matrix algebra. Thank you. Kind regards
Poisson
  • 151
2
votes
1 answer

Alternative form for weighted least squares

Coefficients $\beta$ can be estimated from $y$ by weighted least squares with: $ \hat\beta = (X^T\Sigma^{-1}X)^{-1} X^T \Sigma^{-1} y $ where $\Sigma$ is the covariance matrix of the noise. Let $N$ be an orthonormal basis of the null space of the…
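The displayed estimator can be checked directly in NumPy (a synthetic sketch with a diagonal $\Sigma$; names and sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])

# Heteroscedastic noise with known diagonal covariance Sigma
sigma2 = rng.uniform(0.5, 2.0, n)
y = X @ beta_true + rng.standard_normal(n) * np.sqrt(sigma2)

# Weighted least squares:
# beta_hat = (X^T Sigma^{-1} X)^{-1} X^T Sigma^{-1} y
Sinv = np.diag(1 / sigma2)
beta_hat = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ y)
print(beta_hat)  # close to beta_true
```

In practice one solves the normal equations (as above) rather than forming the inverse explicitly.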
1
vote
1 answer

Weighted linear regression

I have a set of $n$ events. Each event has $m$ variables. At least 1 event produces an observation. It is possible for several events to occur simultaneously. E.g. 4 Events. 3 variables each: $E_{11}=0.4, E_{12}=-0.3, E_{13}=-0.4, E_{21}=0.3, ... ,…
1
vote
1 answer

What is the term for summing all of the elements of a vector to produce a scalar?

Is there a specific term for turning a vector into a scalar by summing all of the elements of the vector? I am trying to describe a part of a model that requires this.
socialscientist
  • 761
  • 5
  • 15
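In linear-algebra notation this operation is usually written as the inner product with the all-ones vector, $\mathbf{1}^T x$; a quick NumPy check (the values are arbitrary):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0])

# Summing the entries of a vector is the inner product with the
# all-ones vector: 1^T x. In code it is just a reduction.
ones = np.ones_like(x)
assert ones @ x == x.sum() == 10.0
```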
1
vote
0 answers

Machine Learning: Linear classifier and possibility to separate

Possible Duplicate: Machine Learning: Linear classifier. Suppose we have two kernel functions $K_1(x,y)$ and $K_2(x,y)$. We know that the dataset ($(x_1,y_1),\ldots,(x_l,y_l)$, $y_i \in \{-1,1\}$) is separable with the first one (that is, there…
Max
1
vote
1 answer

Fast Mahalanobis distance computation with singular covariance matrix

I'm trying to calculate the following Mahalanobis distance: $x^{T}\operatorname{pinv}(C)\,x$. Since the covariance matrix $C$ is singular, $\operatorname{pinv}(C)$ denotes the pseudo-inverse of $C$. However, my $C$ is very large, so it's very time-consuming to calculate $\operatorname{pinv}(C)$. Thus,…
regress
  • 107
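One standard speed-up, sketched in NumPy (dimensions and rank here are invented for illustration): eigendecompose $C$ once, drop the numerically zero eigenvalues, and evaluate $x^T\operatorname{pinv}(C)\,x = \sum_i (v_i^T x)^2 / w_i$ over the nonzero pairs — reusing the factorization for every distance instead of recomputing the pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 50, 10
B = rng.standard_normal((d, r))
C = B @ B.T                    # a singular (rank-10) covariance matrix
x = rng.standard_normal(d)

# Eigendecompose once; reuse for every distance evaluation.
w, V = np.linalg.eigh(C)
keep = w > w.max() * 1e-10     # drop numerically-zero eigenvalues

# x^T pinv(C) x = sum_i (v_i^T x)^2 / w_i over nonzero eigenvalues
proj = V[:, keep].T @ x
dist = proj @ (proj / w[keep])

# Same value as the direct pseudo-inverse route (matching cutoff)
direct = x @ np.linalg.pinv(C, rcond=1e-10) @ x
print(np.isclose(dist, direct))  # True
```

If the low-rank factor $B$ (with $C = BB^T$) is available to begin with, one can skip forming $C$ entirely and work with $B$'s thin SVD, which is cheaper still.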
0
votes
1 answer

Sum of multiple covariance matrices looks like identity matrix

Suppose $X$ and $Y$ are two $a \times b$ matrices, randomly sampled from the same normal distribution. I found an interesting phenomenon: if we sum $X X^T$ many times, with $X$ freshly sampled each time, the result $S$ will look like an identity…
Matthew Y
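A plausible explanation, assuming the entries of $X$ are iid standard normal: $E[XX^T] = b\,I$, so the running sum, once scaled by $1/(\text{reps}\cdot b)$, converges to the identity by the law of large numbers. A NumPy sketch (sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, reps = 5, 20, 5000

# E[X X^T] = b * I for iid N(0, 1) entries, so the average of
# X X^T / b over many draws converges to the identity matrix.
S = np.zeros((a, a))
for _ in range(reps):
    X = rng.standard_normal((a, b))
    S += X @ X.T
S /= reps * b

print(np.max(np.abs(S - np.eye(a))))  # small
```

Without the scaling, the raw sum looks like $\text{reps}\cdot b$ times the identity, which matches the observation in the question.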
0
votes
1 answer

Transformation to linearize dataset

How can I transform the following dataset to a more linear representation? R code:
library(ggplot2)
x <- 1:20
y <- c(101, 84, 81, 80, 73, 69, 67, 63, 63, 62, 62, 61, 61, 60, 59, 58, 57, 57, 57, 55)
d <- data.frame(x, y)
ggplot(d, aes(x, y)) + geom_point()
Here is…
Figaro
  • 1,152