
Here is how the list looks:

> print(pelist)
[[1]]
     Power Type I Error
[1,]     1     0.024339

[[2]]
     Power Type I Error
[1,]   0.8     0.038095

[[3]]
     Power Type I Error
[1,]     1     0.032804

I can do it this way, but it quickly becomes impractical as the size of the list grows:

> rbind(pelist[[1]], pelist[[2]], pelist[[3]])
     Power Type I Error
[1,]   1.0     0.024339
[2,]   0.8     0.038095
[3,]   1.0     0.032804
mechanical_meat
qed
  • Does this answer your question? http://stackoverflow.com/questions/2851327/r-converting-a-list-of-data-frames-into-one-data-frame – sashkello Apr 29 '13 at 23:18
    Those list elements are actually matrices with column names. – IRTFM Apr 29 '13 at 23:36

2 Answers


The idiomatic approach is to use `do.call`:

do.call(rbind, pelist)
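To see this in context, here is a minimal sketch that rebuilds a list like the OP's `pelist` from made-up values (the `m` helper is just for illustration) and row-binds it in one call:

```r
# Helper (hypothetical) to build a 1x2 matrix with the OP's column names
m <- function(p, e) matrix(c(p, e), nrow = 1,
                           dimnames = list(NULL, c("Power", "Type I Error")))

pelist <- list(m(1, 0.024339), m(0.8, 0.038095), m(1, 0.032804))

# do.call passes each list element as a separate argument to rbind,
# equivalent to rbind(pelist[[1]], pelist[[2]], pelist[[3]])
do.call(rbind, pelist)
#      Power Type I Error
# [1,]   1.0     0.024339
# [2,]   0.8     0.038095
# [3,]   1.0     0.032804
```

Because the elements are matrices with matching column names, `rbind` stacks them into a single matrix and the column names are preserved.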
flodel
mnel

Given that your list elements all have the same length, you can also use:

 test_list <- list(matrix(c(1, 2), ncol = 2, nrow = 1),
                   matrix(c(3, 4), ncol = 2, nrow = 1),
                   matrix(c(5, 6), ncol = 2, nrow = 1))

 test_matrix <- matrix(unlist(test_list), ncol = 2, byrow = TRUE)

I am not sure, but this is probably faster than successive calls to `rbind`.

cryo111
    Both the OP and @mnel are doing a single call to `rbind`. And it being an .Internal function, you can bet it's not wasting resources. – flodel Apr 29 '13 at 23:38
  • BTW: You made me curious :) `library(rbenchmark); test_list=list(matrix(c(1,2),ncol=2,nrow=1),matrix(c(3,4),ncol=2,nrow=1),matrix(c(5,6),ncol=2,nrow=1)); bind1=function(x) matrix(unlist(x),ncol=2,byrow=TRUE); bind2=function(x) do.call(rbind,x); benchmark(replications=800000, bind1(test_list), bind2(test_list), columns=c("test", "elapsed", "replications"))` Same performance ;) – cryo111 Apr 30 '13 at 01:22