
I have some data defined in an array (an image) and I need to find the curl of a certain function.

Wikipedia has an integral definition of curl that I like; maybe it can be made discrete.

$$ (\nabla \times F) \cdot \hat{\mathbf{n}} = \lim_{|A| \to 0} \frac{1}{|A|} \oint_{\partial A} F \cdot d \mathbf{r}$$

Can we write this as some kind of finite difference? Let $F = (F_1, F_2, F_3)$. I can think of two different ways to write curl:

$$ \frac{1}{2\epsilon} \Big[ F_2(x+\epsilon, y, z) - F_2(x-\epsilon, y, z) - F_1(x, y+\epsilon, z) + F_1(x, y-\epsilon, z) \Big] $$

Hopefully I wrote that average correctly. What if I used the corners of the square instead? Do we still get curl?

$$ \frac{1}{(2\epsilon)^2}\Big[ F(x+\epsilon, y + \epsilon, z) + F(x+\epsilon, y - \epsilon, z) - F(x-\epsilon, y - \epsilon, z) - F(x-\epsilon, y + \epsilon, z) \Big] $$

I don't know what to call these two operators, so say $\nabla_1$ and $\nabla_2$. How do these operators differ as $\epsilon \ll 1$?


My second definition of curl breaks down a little bit. However, the points $(x,y,z) + \epsilon \,(\pm 1, \pm 1, 0)$ all lie on a circle of radius $\sqrt{2}\,\epsilon$.

$$ \frac{1}{ \pi \epsilon^2} \sum_{k=0}^{n-1} \frac{2\pi\epsilon}{n} \, F\big[ (x,y,z) + \epsilon( \cos \theta_k, \sin \theta_k, 0 )\big] \cdot (-\sin \theta_k, \cos \theta_k, 0), \qquad \theta_k = \frac{2\pi k}{n} $$

This operator should be proportional to curl for small $\epsilon$, I think, but it is harder to put into a computer.
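For what it's worth, here is a minimal sketch of that circle operator in plain Python. This is my own toy setup: it assumes $F$ can be evaluated at arbitrary points (so no interpolation from array data is needed), and each sample is weighted by the arc length $2\pi\epsilon/n$ so that the sum approximates the circulation integral.

```python
import math

def curl_circle(F, x, y, eps=1e-3, n=64):
    """Estimate the z-component of curl F at (x, y): the circulation of F
    around a circle of radius eps, divided by the enclosed area."""
    total = 0.0
    for k in range(n):
        th = 2 * math.pi * k / n
        fx, fy = F(x + eps * math.cos(th), y + eps * math.sin(th))
        # tangential component of F, times the arc-length element
        total += (-fx * math.sin(th) + fy * math.cos(th)) * (2 * math.pi * eps / n)
    return total / (math.pi * eps ** 2)

# rigid rotation F = (-y, x) has constant curl 2
print(curl_circle(lambda x, y: (-y, x), 0.3, 0.7))  # ≈ 2.0
```

For a linear field like this rigid rotation, the sums of $\sin\theta_k$ and $\cos\theta_k$ over equally spaced angles cancel exactly, so the estimate is essentially exact.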

john mangual

2 Answers


Continuous

It looks like you only need 2d curl, so let's start with a simpler continuous definition:

$$ \omega = \nabla \times \mathbf{u} = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} $$

where the 2d vector field is $\mathbf{u}=(u,v)$ (the same as your $\mathbf F = (F_1,F_2,F_3)$, dropping $F_3$). Note that curl is a vector; in the 2d version it points purely in the z direction, so it is customarily written as a scalar.

Discretization

We'll assume $(u,v)$ is sampled on a grid (i.e. at "vertices", where grid lines cross and the indices take integer values), with spacings $\Delta x$ and $\Delta y$ (equal for a square grid, but keeping them distinct helps "typecheck" the discretization).

Forward differences

The simplest Finite Difference discretization method is forward differences: $$ \omega_{i,j} = \frac{ v_{i+1,j} - v_{i,j} }{ \Delta x } - \frac{ u_{i,j+1}-u_{i,j} }{ \Delta y } $$
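A minimal numpy sketch of this formula (the test field $(u,v)=(-y,x)$ is my own choice; its exact curl is $2$ everywhere, and a linear field is reproduced exactly by the difference):

```python
import numpy as np

dx = dy = 0.1
# vertex-sampled field; with indexing="ij", axis 0 is i (x), axis 1 is j (y)
x, y = np.meshgrid(np.arange(0.0, 1.0, dx), np.arange(0.0, 1.0, dy), indexing="ij")
u, v = -y, x  # rigid rotation, exact curl 2

# omega[i, j] uses values at (i, j), (i+1, j) and (i, j+1),
# so the result is one point shorter in each direction
omega = (v[1:, :-1] - v[:-1, :-1]) / dx - (u[:-1, 1:] - u[:-1, :-1]) / dy
```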

Central differences

But central differences are more accurate (note the $2$ in the denominator, because we're differencing across 2 grid cells):

$$ \omega_{i,j} = \frac{ v_{i+1,j} - v_{i-1,j} }{ 2\Delta x } - \frac{ u_{i,j+1}-u_{i,j-1} }{ 2\Delta y } $$
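The same toy setup for central differences (again with the hypothetical test field $(u,v)=(-y,x)$, curl $2$):

```python
import numpy as np

dx = dy = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0, dx), np.arange(0.0, 1.0, dy), indexing="ij")
u, v = -y, x  # exact curl 2

# central differences live on the interior; a one-point border is lost
omega = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * dx) \
      - (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dy)
```

In practice `np.gradient` does the same thing (central differences inside, one-sided differences at the boundary), so `np.gradient(v, dx, axis=0) - np.gradient(u, dy, axis=1)` gives $\omega$ on the full grid.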

Finer central differences

If you look again at the forward differences method, you'll see it's the same as central differences, just for $\omega_{i+\frac{1}{2},j+\frac{1}{2}}$ instead of $\omega_{i,j}$. EDIT Actually not, because the two central differences are taken at different locations: the first at $\omega_{i+\frac{1}{2},j}$ and the second at $\omega_{i,j+\frac{1}{2}}$. Instead, for each term, we can average over the other co-ordinate to interpolate to the cell center. That is,

$$ \omega_{i+\frac{1}{2},j+\frac{1}{2}} = \frac{ \frac{ v_{i+1,j} - v_{i,j} }{ \Delta x } + \frac{ v_{i+1,j+1} - v_{i,j+1} }{ \Delta x } }{2} - \frac{ \frac{ u_{i,j+1}-u_{i,j} }{ \Delta y } + \frac{ u_{i+1,j+1}-u_{i+1,j} }{ \Delta y } }{2} $$ At this point, the advantages of the Calculus of Finite Differences (in Philip Roe's answer) become stark. The above is: $$ \omega_{i+\frac{1}{2},j+\frac{1}{2}} = \mu_y \delta_x v_{i,j} - \mu_x \delta_y u_{i,j} $$

Let me unpack the first term: $\mu_y$ averages the values at $j$ and $j+1$ in the y-direction (i.e. the two big expressions, divided by $2$), and $\delta_x$ takes the difference between values at $i$ and $i+1$ in the x-direction (i.e. each of those expressions, which are divided by $\Delta x$). I'm not sure what happens to the $\Delta x$; I think the idea is to set $\Delta x=1$ so it can be ignored.

Or (if you're better than me at remembering how the offset indices go) just: $$ \omega = \mu_y \delta_x v - \mu_x \delta_y u $$

These seem what you're going for with your $\nabla_1$ and $\nabla_2$.
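The cell-centered formula above can be sketched in numpy like this (same hypothetical test field as before, $(u,v)=(-y,x)$ with exact curl $2$):

```python
import numpy as np

dx = dy = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0, dx), np.arange(0.0, 1.0, dy), indexing="ij")
u, v = -y, x  # exact curl 2

# x-differences of v on the lower and upper edges of each cell, averaged in y
dv_dx = ((v[1:, :-1] - v[:-1, :-1]) + (v[1:, 1:] - v[:-1, 1:])) / (2 * dx)
# y-differences of u on the left and right edges of each cell, averaged in x
du_dy = ((u[:-1, 1:] - u[:-1, :-1]) + (u[1:, 1:] - u[1:, :-1])) / (2 * dy)

omega_center = dv_dx - du_dy  # one value per cell center (i+1/2, j+1/2)
```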

EDIT Now I think you're going for finite differences along diagonal lines for your $\nabla_2$ (the corners of a square). I'm not certain, but I think that would work; you'd need to take the component of $(u,v)$ perpendicular to each diagonal, and also adjust the lengths that you're dividing by (though I guess the Wikipedia integral version may take care of this, since it uses area, not lengths). Of course, like the above, it would be for the center of a cell, $\omega_{ i+\frac{1}{2},j+\frac{1}{2} }$, rather than at the data points.

Accuracy

I won't go into it here, but the accuracy of the above finite difference methods is usually analysed with Taylor series: forward differences are first order, $O(\Delta x)$, and central differences are second order, $O(\Delta x^2)$.
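These orders can be checked numerically: halve $h$ and watch how fast the error shrinks (a sketch, using $\sin$ as my own choice of test function):

```python
import math

def observed_order(scheme, f, dfdx, x, h=1e-2):
    """Estimate the order of accuracy p: error scales like h**p,
    so halving h should multiply the error by 2**-p."""
    e1 = abs(scheme(f, x, h) - dfdx(x))
    e2 = abs(scheme(f, x, h / 2) - dfdx(x))
    return math.log2(e1 / e2)

forward = lambda f, x, h: (f(x + h) - f(x)) / h
central = lambda f, x, h: (f(x + h) - f(x - h)) / (2 * h)

print(observed_order(forward, math.sin, math.cos, 1.0))  # ≈ 1
print(observed_order(central, math.sin, math.cos, 1.0))  # ≈ 2
```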

hyperpallium

This is made a lot easier by introducing the Calculus of Finite Differences.

If $u_{i,j}$ is a grid function defined for integer $i,j$, then the y-differencing operator $$\delta_y u=u_{i,j+1}-u_{i,j}$$ is defined at $i,j+1/2$, so there is no need to write half-integer indices. Similarly, $$\mu_x u=(u_{i,j}+u_{i+1,j})/2 $$ is the x-averaging operator and is defined at $i+1/2,j$. We complete the list of operators with $\delta_x$ and $\mu_y$. These operators are binary and the output is defined halfway between the inputs. In the examples so far, they live on the edges of cells whose primary quantities are at the vertices.

The operators can be combined according to most laws of algebra: added, multiplied, or raised to powers. For example, $\mu_x\delta_x$ is a central difference on the original grid, and $\mu_y\delta_x$ approximates an x-derivative at a cell center. An important rule, however, is that two operators can only be equated or combined if their outputs are at the same location.

A nice discrete representation of curl in the 2D case is $$\mu_y\delta_xv-\mu_x\delta_yu$$ at the cell centers if the data were at vertices or vice versa.
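A sketch of these operators as numpy helpers (my own encoding: axis 0 is $i$, axis 1 is $j$; the division by the grid spacings is restored at the end, since as defined the operators are undivided differences):

```python
import numpy as np

# undivided difference and average operators; each output lives halfway
# between its two inputs, shrinking that axis by one point
def d_x(a): return a[1:, :] - a[:-1, :]
def d_y(a): return a[:, 1:] - a[:, :-1]
def m_x(a): return (a[1:, :] + a[:-1, :]) / 2
def m_y(a): return (a[:, 1:] + a[:, :-1]) / 2

dx = dy = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0, dx), np.arange(0.0, 1.0, dy), indexing="ij")
u, v = -y, x  # vertex data with exact curl 2

# mu_y delta_x v - mu_x delta_y u, at the cell centers
omega = m_y(d_x(v)) / dx - m_x(d_y(u)) / dy
```

Because the operators commute, `m_y(d_x(v))` and `d_x(m_y(v))` give the same array.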

You could also use $$\mu_x\delta_xv-\mu_y\delta_yu$$ A drawback of this formula is that, if you imagine the array of data colored like a chessboard, it only uses data of one color. In some circumstances this leads to trouble.

Philip Roe
  • Some typos: $ around u_{i,j}; an = in the $\mu_y$ definition(?); "You can" (third last para). – hyperpallium Jul 31 '17 at 08:16
  • "These operators are binary" - I think of a binary operator as something like addition, taking two operands ($A+B$); I'm guessing the two operands here are the grid-function indices $i,j$? But you say "defined halfway between the inputs", which doesn't make sense for $i,j$... yet, the examples show they're defined between successive indices (e.g. $i,j$ and $i+1,j$). I think the "defined" part means you have to supply offset integer indices - they won't work for integers e.g. $i=1$; you need e.g. $i=\frac{1}{2}$ or $1\frac{1}{2}$ – hyperpallium Jul 31 '17 at 08:25
  • "$μ_y δ_y$ approximates an x-derivative at a cell center" - they have y subscripts, is that correct? (or maybe it should be for "a y-derivative"... but then it's the same as the previous case, of $\mu_x \delta_x$... or could it be assuming values at cell-centres?) I would think just $\delta_x$ would "approximate an x-derivative at a cell center", since the difference in values is the "rise" of the slope, and the "run" from $i$ to $i+1$ is $\Delta x=1$. – hyperpallium Jul 31 '17 at 08:34
  • What is the order of application of operators? I'm sure there's a standard convention in CFD, but I don't know it e.g. I assume $\mu_x\delta_x$ means $\mu_x(\delta_x(\mathbf{u}))$, that is, $\delta_x$ is applied first, and $\mu_x$ operates on the resulting function. – hyperpallium Jul 31 '17 at 08:41
  • Thanks for the typos! Imagine you have a chessboard. You can define things at the centers of the squares $i,j$, or at the vertical interfaces $i+1/2,j$, or at the horizontal interfaces $i,j+1/2$, or at the vertices $i+1/2,j+1/2$. The operators are binary, like $a+b$. The inputs are always at the same kind of location (cell, interface, vertex) and the output is halfway between them. The operators commute, which is very powerful. Every operator contracts the domain by one cell size in one direction. – Philip Roe Jul 31 '17 at 09:24
  • The operator $\mu_x\mu_y$ averages four cells meeting at a vertex, or four vertices forming a cell, or four horizontal edges surrounding one vertical edge. The standard five-point Laplacian is just $\delta_x^2+\delta_y^2$. The nine-point Laplacian is $\mu_y^2\delta_x^2+\mu_x^2\delta_y^2$. There are rules like $$\delta(uv)=\mu u \,\delta v+\mu v \,\delta u$$ – Philip Roe Jul 31 '17 at 09:26
  • A typo in my typo-report... I meant a = missing in the definition of $\mu_x$ (not $\mu_y$). i.e. is $\mu_x(u_{i,j}+u_{i+1,j})/2$ supposed to be something like $\mu_x u = (u_{i,j}+u_{i+1,j})/2$? – hyperpallium Jul 31 '17 at 12:37
  • This looks like a much simpler notation for finite differences (and maybe also less error-prone, if defined as helper functions in codes), but I have to keep expanding their definitions to understand them. Do you have any tips or tricks or intuitions for thinking in terms of them? Apart from practice... (BTW: The $\mu_y\delta_x$ one makes sense now!) – hyperpallium Jul 31 '17 at 12:43
  • Just try writing out a few FD expressions that you are familiar with and see how much simpler they are. The thing that takes a bit of getting used to is how the operations change the locations, but they must do so consistently.

    There is also a very nice tie-in to Fourier analysis: $\mu_x$ multiplies everything by $\cos(\theta_x/2)$ and $\delta_x$ by $2i\sin(\theta_x/2)$.

    – Philip Roe Jul 31 '17 at 13:05
  • Oh, I see: because they commute, the order of application doesn't matter (e.g. $\mu_x\delta_x = \delta_x\mu_x$). Please excuse my pedantry: these operators are applied to a function and produce another function, therefore (going by my discrete maths) they are unary operators. e.g. in the expression $\delta_y u$, the operand is $u$. Maybe it's different in numerics and CFD... but if they are binary operators, could you give an example expression, and identify the operands? (analogous to "in the expression $a+b$, the two operands are $a$ and $b$") – hyperpallium Aug 01 '17 at 06:24
  • Thanks, yes, these are much simpler; dropping $\Delta x$ and $\Delta y$ also helps. The ordinary finite differences have a straightforward derivation from differential expressions... whereas (it seems to me) you must first know what you're aiming at before you can get an equivalent expression in terms of these operators, which makes them an advanced topic. Or, is there a straightforward way to derive them from a differential expression? – hyperpallium Aug 01 '17 at 06:30
  • Finally, I've mentioned this a couple of times, but there seems to be an = missing from your second definition (of $\mu_x$). – hyperpallium Aug 01 '17 at 06:32
  • Oh, $\delta$ means "difference", $\mu$ means "average", and the subscript indicates the direction (like the partial derivative notation $\mathbf{u}_x = \frac{\partial \mathbf{u}}{\partial x}$). But what happens to the $\Delta x$ and $\Delta y$? Do you need to put them back in... or do you normalize, so that the grid function $u_{i,j}$ has $\Delta x = \Delta y = 1$? If so, would there be any scaling problems? – hyperpallium Aug 02 '17 at 09:51
  • Comparing my two FD answers, and your FD calculus answer, for center-cell $u_{i+\frac{1}{2}, j+\frac{1}{2} }$ vs. at a vertex $u_{i,j}$ (we both have them at that order): (1) mine doesn't give the proper averaged result at the centre like yours, because it only uses the value to the right and above of $u_{i,j}$. (2) mine can "leapfrog" to avoid the "checkerboard" problem of yours. – hyperpallium Aug 02 '17 at 13:34
  • Sorry, I'm wrong: my second has the same "chessboard" problem as yours, since they are the same. – hyperpallium Aug 02 '17 at 14:11
  • In case you didn't get a notification: "What happens to the $\Delta x$" etc? Esp. for second-order expressions, with $\Delta x^2$ BTW I needed that Product Rule yesterday (for 1d: $\delta_x(uv) = \mu_x v \delta_x u + \mu_x u \delta_x v $, though I note you dropped the subscripts - because they're all the same? Or, a shorthand for applying it to each component in turn?). However, I'm still not familiar enough with the half-indices to be comfortable with the short form. PS: great to add your comment extensions to the answer! (and maybe headings to separate the Calculus precis from the answer). – hyperpallium Aug 07 '17 at 07:36
  • @hyperpallium What you define in your answer is not $\omega_{i,j}$. It IS $\omega_{i+1/2,\,j+1/2}$. With data in cells, $\delta_x$ puts the output on vertical faces and $\mu_y$ takes it to a vertex. Then when you go to shorthand, I'm afraid you give expressions for the divergence. :( Needs an edit.

    You can also use what are called divided differences, where division by the mesh size is included in the differencing operators. These are usually not written differently, but an author must explain his convention. Works fine if $\Delta x$ and $\Delta y$ are both constant, but not otherwise.

    – Philip Roe Aug 07 '17 at 18:18
  • Thanks for the proofing! How do you, yourself, handle $\Delta x$ and $\Delta y$? (e.g. would it work to scale by $1/\Delta x$, to normalize, as if $\Delta x = 1$, so can ignore, including powers like $\Delta x^2$?). This checking of index offsets is tedious and error-prone (not just for this Calculus, but also for full FD, and "MAC" offset grid values in the first place)... it's something the computer should be doing, not us! I can see how to make the compiler type-check them (in Java) - do you think there would be interest in that? – hyperpallium Aug 08 '17 at 05:38
  • type-checker's working: 4 types for value-locations (center, vertex, mid-horiz, mid-vert); FD-ops toggle offsets for their direction; arithmetic ops require same-typed operands and result. Start with right value-location types (h means "half", for offset: u = new Cx_y(); v = new Cx_y();); ops are methods (e.g. curl = sub( u_y(d_x(v)), u_x(d_y(u))); check result-type (e.g. Cxh_yh curl = res). It accepts all your expressions; and caught both my mistakes. Grid value locations have been error-prone for me: i,j at center or vertex?; value locations?; complex FD's right? – hyperpallium Aug 10 '17 at 13:52
  • If someone used this answer as-is, and they had $\Delta x, \Delta y \neq 1$, they would get an incorrect result. Although you've mentioned "divided differences", what is the proper way for someone to use the answer, as given, to incorporate $\Delta x$ etc to get the correct result (other than just hacking it in, which loses simplicity)? Also for expressions like the Laplacian $\delta_x^2+\delta_y^2$, needing $\Delta x^2$ and $\Delta y^2$ divisors. – hyperpallium Aug 21 '17 at 06:10