It is well known that even in one dimension Newton-Raphson needn't converge at all; the iterates can hop all over the place. (Numerical Recipes has a good discussion of this behavior, as I recall.)
In more than one dimension, consider a function with a global optimum that nevertheless has regions where the Hessian is indefinite. Start the algorithm at a point in one of those regions. If the search eventually succeeds in finding the optimum, then at some point the signature of the Hessian must have changed. That gives a recipe for constructing counterexamples; $$f(x,y)=((x-1)^2+y^2)((x+1)^2+y^2)$$ comes to mind as being particularly simple. Anticipating the factors of $4$ that appear upon differentiating, I have divided its values by $4$ below.
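With that scaling, the gradient and Hessian work out to $$\nabla f(x,y) = \bigl(x(x^2+y^2-1),\ y(x^2+y^2+1)\bigr), \qquad H[f](x,y) = \pmatrix{3x^2+y^2-1 & 2xy \\ 2xy & x^2+3y^2+1};$$ these are the quantities computed by f.1 and f.2 in the R code at the end.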

These renderings of $f$ are by Wolfram Alpha. They show the principle of construction: each factor of $f$ is a paraboloid, one centered at $(-1,0)$ and the other at $(1,0).$ Their product creates a surface with two separated global minima (at those same points) but with a saddle in between (located at $(0,0)$).
The "saddle-shaped area" can be thought of as the region where the determinant of the Hessian is negative:

Thus, if any Newton-Raphson search should include points both inside and outside this region, the signs of some eigenvalues of the Hessian will change.
If we begin a Newton-Raphson search at, say, $(1/2, 0),$ it finds a critical point immediately: the very first Newton step lands exactly on the minimum at $(-1,0)$ (the negative eigenvalue sends the step away from the nearer minimum at $(1,0)$). At $x_0 = (1/2,0)$ the Hessian is $$H[f](1/2,0) = \pmatrix{-\frac{1}{4} & 0 \\ 0 & \frac{5}{4}}$$ with a signature of $(1,1)$ (one positive and one negative eigenvalue), whereas at $x_1 = (-1,0)$ the Hessian is $$H[f](-1,0) = \pmatrix{2 & 0 \\ 0 & 2},$$ with a signature of $(2,0).$
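To see why, note that a starting point on the $x$-axis stays on it (the gradient's $y$-component and the off-diagonal Hessian entries vanish there), so the first step reduces to one dimension: $$x_1 = x_0 - \frac{x_0(x_0^2-1)}{3x_0^2-1} = \frac{1}{2} - \frac{-3/8}{-1/4} = \frac{1}{2} - \frac{3}{2} = -1,$$ where the gradient is exactly zero.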
The reverse change in signature can occur, too. If you begin at $(0,2),$ for instance, where the Hessian has eigenvalues $13$ and $3$ (signature $(2,0)$), then the algorithm will progress straight towards the saddle point at $(0,0),$ eventually changing the signature. Indeed, after two steps the Hessian has eigenvalues $2.36$ and $-0.55$, giving a signature of $(1,1).$
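Because the gradient's $x$-component vanishes on the $y$-axis (and the Hessian is diagonal there), the iterates stay on that axis, where the Newton step reduces to $$y_{n+1} = y_n - \frac{y_n(y_n^2+1)}{3y_n^2+1} = \frac{2y_n^3}{3y_n^2+1}.$$ Starting from $y_0 = 2$ this gives $y_1 \approx 1.231$ and $y_2 \approx 0.673,$ where the Hessian's diagonal entries are $3y_2^2+1 \approx 2.36$ and $y_2^2-1 \approx -0.55.$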
The calculations were made with this R code.
# A function to optimize.
f <- function(p) {
  x <- p[1]; y <- p[2]
  ((x-1)^2 + y^2) * ((x+1)^2 + y^2) / 4
}
# The function's gradient.
f.1 <- function(p) {
  x <- p[1]; y <- p[2]
  c(4*x*(x^2+y^2-1), 4*y*(x^2+y^2+1)) / 4
}
# The function's Hessian.
f.2 <- function(p) {
  x <- p[1]; y <- p[2]
  matrix(c(4*(3*x^2+y^2-1), 8*x*y,
           8*x*y, 4*(x^2+3*y^2+1)), 2) / 4
}
#
# Quick-and-dirty Newton-Raphson iteration with diagnostic printing.
#
NR <- function(f, df, x.0, xtol=1e-8, ftol=1e-8, itermax=100) {
  # Here f is the function whose zero is sought (the gradient of the
  # objective) and df is its Jacobian (the Hessian of the objective).
  x <- x.0
  y <- f(x)   # Gradient at the current point
  dy <- df(x) # Hessian at the current point
  for (iter in 1:itermax) {
    cat(paste("Iteration", iter), ": ")
    cat(eigen(dy, only.values=TRUE)$values, "\n") # Print the Hessian's eigenvalues
    dx <- solve(dy, -y)            # Newton step: solve (Hessian) dx = -gradient
    if (sum(dx^2) <= xtol^2) break # Stop when the step is tiny
    x <- x + dx
    y <- f(x)
    dy <- df(x)
    if (sum(y^2) <= ftol^2) break  # Stop when the gradient is (nearly) zero
  }
  cat(paste("Iteration", iter), ": ")
  cat(eigen(dy, only.values=TRUE)$values, "\n")
  list(x = x, y = y, niter = iter+1)
}
#
# Examples.
#
NR(f.1, f.2, c(1/2, 0), itermax=12)
NR(f.1, f.2, c(0, 2), itermax=12)
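The signatures quoted above can also be checked directly from f.2; for example:
#
# Inspect the Hessian at the points discussed in the text.
#
eigen(f.2(c(1/2, 0)), only.values=TRUE)$values # 1.25 -0.25 : signature (1,1)
eigen(f.2(c(-1, 0)), only.values=TRUE)$values  # 2 2        : signature (2,0)
eigen(f.2(c(0, 2)), only.values=TRUE)$values   # 13 3       : signature (2,0)
det(f.2(c(0, 0)))                              # -1         : inside the region where det < 0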