Most Popular · 1,500 questions

8 votes · 1 answer

Computing geodesic distances with diffusion

I am trying to solve an APSP (All-Pairs Shortest Path) problem on a weighted graph. This graph is actually a 1-, 2-, or 3-dimensional grid, and the weights on each edge represent the distance between its two vertices. What I want to have is the geodesic…
asked by matthieu (131)

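Not the diffusion-based approach the question is heading toward, but as a baseline for comparison, exact all-pairs distances on a small weighted grid can be computed directly with SciPy's graph routines; a minimal sketch with made-up grid size and edge weights:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Hypothetical 2-D grid: nodes are (i, j) cells, edges connect 4-neighbours,
# edge weights are made-up positive distances.
nx, ny = 20, 20
n = nx * ny
idx = lambda i, j: i * ny + j

rng = np.random.default_rng(0)
G = lil_matrix((n, n))
for i in range(nx):
    for j in range(ny):
        if i + 1 < nx:
            G[idx(i, j), idx(i + 1, j)] = rng.uniform(0.5, 1.5)
        if j + 1 < ny:
            G[idx(i, j), idx(i, j + 1)] = rng.uniform(0.5, 1.5)

# Exact all-pairs geodesic (graph) distances; fine for small grids.
D = dijkstra(G.tocsr(), directed=False)
print(D.shape)  # (400, 400)
```
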
8 votes · 0 answers

Finding the smallest root of a function on $[0, \infty)$

I would like to find the smallest real root of a 1-D real-valued function $f(x)$ on the domain $x\in [0,\infty)$. In this problem, I can make the following guarantees on $f$: $f$ does have a root at some finite, positive $x$. $f$ is differentiable…
asked by Endulum (735)

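One common recipe (a sketch under the stated guarantees, not necessarily the best answer): step outward from $x=0$ with a growing step until $f$ changes sign, then refine the bracket with a standard solver such as scipy.optimize.brentq. The test function, initial step, and growth factor below are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # Placeholder for the actual black-box function.
    return np.cos(3.0 * x) - 0.2 * x

def smallest_root(f, x0=0.0, h0=1e-3, grow=2.0, xmax=1e6):
    """Step outward from x0 until f changes sign, then refine with Brent's method.
    Can miss roots if f touches zero without a sign change or if two roots fall
    inside one step, so the step-growth factor is a tunable assumption."""
    a, fa = x0, f(x0)
    h = h0
    while a + h <= xmax:
        b, fb = a + h, f(a + h)
        if fa == 0.0:
            return a
        if fa * fb < 0.0:
            return brentq(f, a, b)
        a, fa = b, fb
        h *= grow
    raise RuntimeError("no sign change found before xmax")

print(smallest_root(f))
```
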
8 votes · 2 answers

Why does sparse linear algebra have a low arithmetic intensity?

I often see the terms "low arithmetic intensity" and "memory-bound" associated with sparse matrix operations. However, my intuition is that a sparse matrix operation should be less memory-bound, if anything. After all, for a highly sparse matrix,…
asked by Sam Hatfield (108)

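A back-of-the-envelope count for CSR matrix-vector multiplication is where the "low arithmetic intensity" claim comes from; the sizes below are placeholders, and caching of the input vector is ignored, so treat the result as a rough bound:

```python
# Rough arithmetic-intensity estimate for y = A @ x with A in CSR format.
nnz = 10_000_000          # nonzeros (placeholder)
n   = 1_000_000           # rows/cols (placeholder)

flops = 2 * nnz                        # one multiply + one add per nonzero
bytes_values  = 8 * nnz                # matrix values (double precision)
bytes_colind  = 4 * nnz                # column indices (int32)
bytes_rowptr  = 4 * (n + 1)            # row pointers
bytes_vectors = 8 * 2 * n              # read x, write y (best case: each once)

ai = flops / (bytes_values + bytes_colind + bytes_rowptr + bytes_vectors)
print(f"arithmetic intensity ≈ {ai:.2f} flop/byte")   # ≈ 0.14 with these sizes
```

Dense GEMM, by contrast, can be blocked so that its intensity grows with the block size, which is why SpMV sits far below the balance point of current machines while GEMM does not.
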
8 votes · 1 answer

Bit-packing and compression of data structures in scientific computing

I recently came across this paper, which describes a scheme for saving memory in representing sparse matrices. A 32-bit integer can store numbers up to ~4 billion. But when you solve a PDE you've gone parallel and partitioned the problem long before…
asked by Daniel Shapero (10,263)

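One flavour of the idea, as a hedged sketch rather than the scheme from the linked paper: once the problem is partitioned, each rank's local block usually has far fewer than 2^16 columns, so local column indices fit in 16 bits and the index arrays of a CSR matrix shrink by half.

```python
import numpy as np
import scipy.sparse as sp

# Placeholder local block of a partitioned sparse matrix.
A = sp.random(30_000, 30_000, density=1e-4, format="csr", random_state=0)

# SciPy stores indices as int32/int64; if the local block has fewer than
# 2**16 columns, the column indices also fit in uint16.
assert A.shape[1] < 2**16
packed_indices = A.indices.astype(np.uint16)

full  = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
small = A.data.nbytes + packed_indices.nbytes + A.indptr.nbytes
print(f"CSR storage: {full/1e6:.2f} MB with int32 indices, "
      f"{small/1e6:.2f} MB with uint16")

# A custom SpMV kernel would widen the uint16 indices on the fly;
# SciPy's built-in routines expect int32/int64, so this only saves storage here.
```
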
8 votes · 3 answers

What is the most appropriate derivative-free optimization algorithm?

We can use random optimization / derivative-free / direct search methods to find the minimum of some black-box function $f$. If I have some 2D black-box function $f(x,y)$, which I know to be convex, what is the best derivative-free method to use? i.e.…
asked by user1887919 (233)

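As a hedged illustration of the kind of methods the answers discuss, SciPy's Nelder-Mead and Powell direct-search implementations both handle a smooth convex 2-D objective out of the box; the objective below is a placeholder:

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    # Placeholder convex black-box objective (anisotropic quadratic).
    x, y = p
    return 3.0 * (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * x * y

x0 = np.array([5.0, 5.0])
for method in ("Nelder-Mead", "Powell"):
    res = minimize(f, x0, method=method)
    print(f"{method:12s} x* = {res.x}, f* = {res.fun:.2e}, evals = {res.nfev}")
```
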
8 votes · 2 answers

Exploiting patterns in matrix for efficient matrix-vector multiplication

I have the following situation: I have a sequence of vectors $x_1, x_2, \dots$ and for each I want to compute the product $Ax_i$, where $A$ is fixed at the outset. Although there is no information about the structure of $x_i$, $A$ typically has a…
asked by Slug Pue (189)

8 votes · 0 answers

Speed and accuracy of Strassen vs Winograd matrix multiplication algorithms

I am doing work that requires matrix multiplication that is as fast as possible, and I just want to double-check with this community that the Winograd variant of Strassen's MM algorithm is the fastest practical (so no Coppersmith-Winograd) algorithm. I am…
asked by kreitz (91)

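For reference, the classic Strassen recursion looks as follows (this is plain Strassen, not the Winograd variant the question asks about, which performs the same seven block multiplies with fewer additions); the power-of-two sizes and the cutoff are simplifying assumptions:

```python
import numpy as np

def strassen(A, B, cutoff=128):
    """Classic Strassen recursion (7 multiplies); falls back to BLAS below `cutoff`.
    Assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
assert np.allclose(strassen(A, B), A @ B)
```
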
8 votes · 2 answers

Nonblocking version of MPI_Barrier in MPI 2

I have a bunch of MPI processes exchanging request messages back and forth. Processes do not know which other processes will send them messages, or how many. Given this situation, I want an efficient way to know whether all other processes…
asked by Geoffrey Irving (3,969)

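MPI-2 itself offers no nonblocking barrier; MPI-3 added MPI_Ibarrier, which enables the consensus-style pattern sketched below (NBX-like). This is a hedged mpi4py sketch assuming an MPI-3 implementation is available; destinations, tags, and message contents are placeholders.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Placeholder: each rank decides on a few destinations it wants to message.
dests = [(rank * 7 + k) % size for k in range(1, 3) if size > 1]
send_reqs = [comm.issend({"from": rank}, dest=d, tag=0) for d in dests]

received = []
barrier_req = None
done = False
while not done:
    # Drain any incoming request messages.
    while comm.iprobe(source=MPI.ANY_SOURCE, tag=0):
        received.append(comm.recv(source=MPI.ANY_SOURCE, tag=0))
    if barrier_req is None:
        # All of my synchronous sends have been matched -> enter the barrier.
        if MPI.Request.Testall(send_reqs):
            barrier_req = comm.Ibarrier()
    else:
        # The barrier completes only once every rank has entered it,
        # i.e. once every message in the system has been matched.
        done = barrier_req.Test()

print(f"rank {rank} received {len(received)} messages")
```
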
8 votes · 2 answers

How should I report profiling/timing information about my code?

I've seen a lot of publications in Computational Physics journals use different metrics for the performance of their code. Especially for GPGPU code, there seems to be a great variety of timing results people publish. In particular, I've…
asked by limes (405)

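Whatever metric ends up in the paper, the underlying measurement is usually repeated wall-clock timings plus a derived rate; a minimal sketch of such a harness, with a placeholder kernel and flop count:

```python
import time
import numpy as np

n = 2048
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def kernel():
    return A @ B            # placeholder for the code being benchmarked

kernel()                    # warm-up (caches, lazy allocation, JIT if any)
times = []
for _ in range(10):
    t0 = time.perf_counter()
    kernel()
    times.append(time.perf_counter() - t0)

times = np.array(times)
flops = 2.0 * n**3          # flop count of a dense matmul (placeholder)
print(f"median {np.median(times)*1e3:.1f} ms, "
      f"min {times.min()*1e3:.1f} ms, "
      f"{flops / np.median(times) / 1e9:.1f} GFLOP/s")
```
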
8 votes · 1 answer

Finite difference coordinate transformation for spherical polar coordinates

I have part of a problem that is described by the momentum conservation equation: $\frac{\partial \rho}{\partial t} + \frac{1}{\sin\theta} \frac{\partial}{\partial \theta}(\rho u \sin \theta) = 0$, where $u=f(\theta)$ and $\rho = f(\theta,t)$…
asked by Marm0t (181)

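A minimal method-of-lines sketch of that equation, assuming $u(\theta) > 0$, a cell-centred grid that stays away from the poles, and first-order upwind fluxes (all simplifying assumptions, not necessarily what the question's code does):

```python
import numpy as np

# Upwind sketch for  dρ/dt + (1/sinθ) ∂θ(ρ u sinθ) = 0  on a grid avoiding sinθ = 0.
N = 200
dth = np.pi / N
theta = (np.arange(N) + 0.5) * dth                 # cell centres in (0, π)
u = 0.1 + 0.05 * np.sin(theta)                     # placeholder u = f(θ) > 0
rho = np.exp(-((theta - np.pi / 2) / 0.2) ** 2)    # placeholder initial ρ

dt = 0.4 * dth / u.max()                           # CFL-limited time step
for _ in range(1000):
    F = rho * u * np.sin(theta)                    # transported flux F = ρ u sinθ
    dF = np.empty_like(F)
    dF[1:] = F[1:] - F[:-1]                        # first-order upwind (u > 0)
    dF[0] = F[0]                                   # crude zero-inflow boundary
    rho -= dt * dF / (dth * np.sin(theta))         # (1/sinθ) ∂F/∂θ update
```
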
8 votes · 2 answers

Lagrange multiplier space is too rich from a mathematical point of view

Background: the Lagrange multiplier method has been employed in numerous fields, such as contact problems, material interfaces, phase transformations, stiff constraints, or sliding along interfaces. It is well known that a bad choice or design of Lagrange…
asked by Wenjin Xing (321)

8 votes · 0 answers

Eigenvalue with largest imaginary part

Iterative eigensolvers such as ARPACK give the option to find a subset of the eigenvalues which have the largest imaginary part. My question is how these algorithms work. As I understand it, such algorithms generally use Krylov spaces and…
asked by as2457 (243)

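On the usage side (not an answer to how the algorithm works internally), SciPy exposes ARPACK's sorting criterion through the which argument, and which='LI' requests the eigenvalues of largest imaginary part; a hedged sketch on a made-up nonsymmetric sparse matrix:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Made-up nonsymmetric sparse matrix; its spectrum fills a disc in the complex plane.
n = 500
A = sp.random(n, n, density=0.02, format="csr", random_state=0)

# Ask ARPACK (implicitly restarted Arnoldi) for the eigenvalues with the
# largest imaginary part; a generous subspace size (ncv) helps convergence.
vals, vecs = eigs(A, k=3, which="LI", ncv=80, maxiter=5000)
print(vals)
```
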
8 votes · 2 answers

Eigenvalues of Small Matrices

I'm writing a small numerical library for 2x2, 3x3, and 4x4 matrices (real, unsymmetric). A lot of numerical analysis texts highly recommend against computing the roots of the characteristic polynomial and recommend using the double-shifted QR…
asked by CADJunkie (383)

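The warning is easy to reproduce with a clustered spectrum: round-tripping through characteristic-polynomial coefficients loses roughly half the digits near a near-double eigenvalue, while a QR-based solver (here LAPACK via numpy.linalg.eigvals, standing in for a hand-rolled double-shift QR) does not. The test matrix is made up:

```python
import numpy as np

# Two nearly equal eigenvalues, 1 and 1 + 1e-9, plus a well-separated one.
eps = 1e-9
exact = np.array([1.0, 1.0 + eps, 4.0])
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
A = S @ np.diag(exact) @ np.linalg.inv(S)   # similarity transform: same eigenvalues

def max_err(approx):
    # Distance from each exact eigenvalue to the nearest computed one.
    return max(min(abs(a - e) for a in approx) for e in exact)

lam_qr = np.linalg.eigvals(A)               # QR-type algorithm (LAPACK)
lam_poly = np.roots(np.poly(A))             # characteristic-polynomial coefficients, re-rooted

print("QR-based error        :", max_err(lam_qr))     # ~1e-15
print("char.-polynomial error:", max_err(lam_poly))   # ~1e-8 near the clustered pair
```
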
8 votes · 1 answer

Clenshaw-type recurrence for derivative of Chebyshev series

The naive summation of a Chebyshev series \begin{align*} f(x) = \frac{c_0}{2} + \sum_{k=1}^{n-1} c_{k}T_{k}(x) \end{align*} which employs the three-term recurrence for evaluation of the Chebyshev polynomials is known to be numerically unstable near…
asked by user14717 (2,155)

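For reference, this is the Clenshaw recurrence for the series as written (the stable baseline the question builds on, not the derivative recurrence it asks for), checked against NumPy's Chebyshev module; the coefficients are random placeholders:

```python
import numpy as np

def clenshaw(c, x):
    """Evaluate f(x) = c[0]/2 + sum_{k=1}^{n-1} c[k] T_k(x) by Clenshaw's recurrence."""
    b1 = b2 = 0.0
    for ck in c[:0:-1]:              # k = n-1, ..., 1
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return 0.5 * c[0] + x * b1 - b2

rng = np.random.default_rng(0)
c = rng.standard_normal(12)
x = 0.3
# numpy's chebval uses a full c_0, so halve it for the comparison.
ref = np.polynomial.chebyshev.chebval(x, np.r_[0.5 * c[0], c[1:]])
print(clenshaw(c, x), ref)
```

For the derivative itself, one option is to differentiate the coefficients first (numpy.polynomial.chebyshev.chebder) and reuse the same recurrence, though that sidesteps the Clenshaw-type derivative recurrence the question is after.
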
8 votes · 4 answers

Generate a set of orthogonal vectors to a given vector

I am looking for a robust alternative to Gram-Schmidt orthogonalization, but with one constraint: I have a unit vector $\mathbf{v}_1 \in \mathbb{R}^d$, i.e. $\mathbf{v}_1^T\mathbf{v}_1 = 1$, and I want to find a set of…
asked by Tolga Birdal (2,229)

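One robust construction, offered as a sketch rather than a summary of the answers: append generic columns to $\mathbf{v}_1$ and take a QR factorization; since the first column of $Q$ is $\pm\mathbf{v}_1$, the remaining columns form an orthonormal basis of its orthogonal complement (a single Householder reflector achieves the same thing).

```python
import numpy as np

def complement_basis(v1):
    """Return a d x (d-1) matrix with orthonormal columns, all orthogonal to v1."""
    d = v1.size
    M = np.empty((d, d))
    M[:, 0] = v1
    rng = np.random.default_rng(0)
    M[:, 1:] = rng.standard_normal((d, d - 1))   # generic fill; QR orthogonalizes it
    Q, _ = np.linalg.qr(M)
    return Q[:, 1:]                              # columns 2..d are orthogonal to v1

d = 6
v1 = np.random.default_rng(1).standard_normal(d)
v1 /= np.linalg.norm(v1)

B = complement_basis(v1)
print(np.abs(B.T @ v1).max())                    # ~1e-16: orthogonal to v1
print(np.allclose(B.T @ B, np.eye(d - 1)))       # orthonormal among themselves
```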