
I have two data frames: one with 0.8 million rows of X and Y coordinates, and another with 70,000 rows of X and Y coordinates. I want the logic and R code to associate each point in data frame 1 with the closest point in data frame 2. Is there a standard package to do this?

I am currently running a nested for loop, but this is very slow: it iterates 0.8 million × 70,000 times, which is very time-consuming.

M--
    Please add some data (only a snippet, e.g. using `dput(head(your_data))`), code and your expected output. – Roman Oct 24 '16 at 09:26
  • For geospatial data see http://stackoverflow.com/questions/31766351/calculating-the-distance-between-points-in-different-data-frames, for euclidian distance see http://stackoverflow.com/questions/26720367/how-to-find-the-distance-between-two-data-frames and http://stackoverflow.com/questions/22231773/calculating-the-euclidean-dist-between-each-row-of-a-dataframe-with-all-other-ro. I found these by googling for `r calculate distance between two data.frames`. Also look through the other hits from that google search, there is quite a lot already available. – Paul Hiemstra Oct 24 '16 at 13:59

1 Answer


I found a faster way to get the expected result using the data.table library:

library(data.table)

time0 <- Sys.time()

Here is some random data:

df1 <- data.table(x = runif(8e5), y = runif(8e5))
df2 <- data.table(x = runif(7e4), y = runif(7e4))

Assuming (x, y) are coordinates in an orthonormal coordinate system, you can compute the squared distance (minimizing the squared distance gives the same nearest point as minimizing the distance itself, and avoids the square root) as follows:

# Returns the index of the row of df2 closest to the point (a, b).
# Note: this masks base R's dist(); df2 is taken from the enclosing environment.
dist <- function(a, b) {
  dt <- data.table((df2$x - a)^2 + (df2$y - b)^2)
  which.min(dt$V1)
}

Now you can apply this function to your data to get the expected result:

results <- df1[, j = list(Closest =  dist(x, y)), by = 1:nrow(df1)]

time1 <- Sys.time()
print(time1 - time0)

It took around 30 minutes to get the result on a slow computer.

EDIT:

As asked, I have tried several other solutions, using sapply and adply from the plyr package. I tested these solutions on smaller data frames to keep the benchmark fast.

library(data.table)
library(plyr)
library(microbenchmark)

########################
## Test 1: data.table ##
########################

dt1 <- data.table(x = runif(1e4), y = runif(1e4))
dt2 <- data.table(x = runif(5e3), y = runif(5e3))

dist1 <- function(a, b){
                dt <- data.table((dt2$x-a)^2+(dt2$y-b)^2)
                return(which.min(dt$V1))}

results1 <- function() return(dt1[, j = list(Closest =  dist1(x, y)), by = 1:nrow(dt1)])

###################
## Test 2: adply ##
###################

df1 <- data.frame(x = runif(1e4), y = runif(1e4))
df2 <- data.frame(x = runif(5e3), y = runif(5e3))

dist2 <- function(df){
                dt <- data.table((df2$x-df$x)^2+(df2$y-df$y)^2)
                return(which.min(dt$V1))}

results2 <- function() return(adply(.data = df1, .margins = 1, .fun = dist2))

####################
## Test 3: sapply ##
####################

df1 <- data.frame(x = runif(1e4), y = runif(1e4))
df2 <- data.frame(x = runif(5e3), y = runif(5e3))

dist2 <- function(df){
                dt <- data.table((df2$x-df$x)^2+(df2$y-df$y)^2)
                return(which.min(dt$V1))}

results3 <- function() return(sapply(1:nrow(df1), function(x) return(dist2(df1[x,]))))

microbenchmark(results1(), results2(), results3(), times = 20)

#Unit: seconds
#       expr      min       lq     mean   median       uq      max neval
# results1() 4.046063 4.117177 4.401397 4.218234 4.538186 5.724824    20
# results2() 5.503518 5.679844 5.992497 5.886135 6.041192 7.283477    20
# results3() 4.718865 4.883286 5.131345 4.949300 5.231807 6.262914    20

The first solution seems to be significantly faster than the other two, and the gap widens on larger datasets.
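Regarding the "standard package" part of the question: dedicated nearest-neighbour packages such as FNN (`get.knnx`) or RANN (`nn2`) build a k-d tree instead of computing all pairwise distances, which scales far better than any of the brute-force variants above. Here is a sketch (this assumes the FNN package is installed; it is not part of the original answer):

```r
library(FNN)

# Same shape of random data as in the benchmark above
df1 <- data.frame(x = runif(1e4), y = runif(1e4))
df2 <- data.frame(x = runif(5e3), y = runif(5e3))

# For each row of df1 (the query), find the single nearest row of df2 (the data)
nn <- get.knnx(data = df2, query = df1, k = 1)

# Index in df2 of the closest point to each row of df1
closest <- nn$nn.index[, 1]
```

The result `closest` plays the same role as the `Closest` column produced by the data.table solution, but the k-d tree search avoids the n1 × n2 distance computation entirely.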

Frank
Hugo