228

I have a list of pandas DataFrames that I would like to combine into one DataFrame. I am using Python 2.7.10 and pandas 0.16.2.

I created the list of dataframes from:

import pandas as pd
dfs = []
sqlall = "select * from mytable"

for chunk in pd.read_sql_query(sqlall, cnxn, chunksize=10000):
    dfs.append(chunk)

This returns a list of DataFrames:

type(dfs[0])
Out[6]: pandas.core.frame.DataFrame

type(dfs)
Out[7]: list

len(dfs)
Out[8]: 408

Here is some sample data

# sample dataframes
d1 = pd.DataFrame({'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one' : [5., 6., 7., 8.], 'two' : [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one' : [15., 16., 17., 18.], 'two' : [19., 10., 11., 12.]})

# list of dataframes
mydfs = [d1, d2, d3]

I would like to combine d1, d2, and d3 into one pandas dataframe. Alternatively, a method of reading a large-ish table directly into a dataframe when using the chunksize option would be very helpful.

Whitebeard

6 Answers

417

Given that all the dataframes have the same columns, you can simply concat them:

import pandas as pd
df = pd.concat(list_of_dataframes)
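A minimal sketch with the sample frames from the question; `ignore_index=True` renumbers the rows 0..n-1 instead of repeating each frame's original index:

```python
import pandas as pd

# Sample frames from the question
d1 = pd.DataFrame({'one': [1., 2., 3., 4.], 'two': [4., 3., 2., 1.]})
d2 = pd.DataFrame({'one': [5., 6., 7., 8.], 'two': [9., 10., 11., 12.]})
d3 = pd.DataFrame({'one': [15., 16., 17., 18.], 'two': [19., 10., 11., 12.]})

# Stack the frames vertically into a single 12-row DataFrame
df = pd.concat([d1, d2, d3], ignore_index=True)
```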
DeepSpace
14

Just to add a few more details:

Example:

list1 = [df1, df2, df3]

import pandas as pd
  • Row-wise concatenation & ignoring indexes

    pd.concat(list1, axis=0, ignore_index=True)
    

    Note: if the column names are not the same, NaN is inserted wherever a frame lacks a column.

  • Column-wise concatenation & want to keep column names

    pd.concat(list1, axis=1, ignore_index=False)
    

    If ignore_index=True, the column names are replaced with numbers from 0 to n-1, where n is the total number of columns across the frames.
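A quick sketch of both cases, using made-up frames whose columns only partly overlap:

```python
import pandas as pd

a = pd.DataFrame({'one': [1., 2.], 'two': [3., 4.]})
b = pd.DataFrame({'one': [5., 6.], 'three': [7., 8.]})

# Row-wise: columns are the union; NaN where a frame lacks a column
rows = pd.concat([a, b], axis=0, ignore_index=True, sort=False)

# Column-wise: frames sit side by side, original column names kept
cols = pd.concat([a, b], axis=1, ignore_index=False)
```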

rmswrp
10

If the dataframes DO NOT all have the same columns, try the following:

df = pd.DataFrame.from_dict(map(dict, df_list))
meyerson
    This solution doesn't work for me on Python 3.6.5 / Pandas v0.23.0. It errors with `TypeError: data argument can't be an iterator`. Converting to `list` first (to mimic Python 2.7) comes up with unexpected results too. – jpp Jul 16 '18 at 22:59
  • and if the all dataframes have the same column, how should we do ? – Thony Nadhir Mar 16 '20 at 20:37
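On Python 3 (per the comment above, the `map` iterator trips this approach up), one workaround sketch is to skip `from_dict` entirely: `pd.concat` already takes the union of differing columns. The frames below are made up for illustration:

```python
import pandas as pd

df_list = [pd.DataFrame({'one': [1.], 'two': [2.]}),
           pd.DataFrame({'one': [3.], 'three': [4.]})]

# Union of columns; cells missing from a frame become NaN
df = pd.concat(df_list, ignore_index=True, sort=False)
```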
7

You can also do it with functional programming:

from functools import reduce
reduce(lambda df1, df2: df1.merge(df2, "outer"), mydfs)
Jay Wong
    `from functools import reduce` to use `reduce` – nishant Apr 24 '20 at 12:38
  • 2
    Would not recommend doing a pairwise merge for multiple DataFrames, it is not efficient at all. See `pd.concat` or `join`, both accept a list of frames and join on the index by default. – cs95 Jun 29 '20 at 06:09
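A sketch of the list-accepting `join` the comment mentions, assuming the frames share an index and have non-overlapping column names (the frames here are made up):

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]})
b = pd.DataFrame({'y': [3, 4]})
c = pd.DataFrame({'z': [5, 6]})

# One call joins all frames, aligned on the index
joined = a.join([b, c])
```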
0

concat also works nicely with a list comprehension that pulls rows from an existing DataFrame using the .loc indexer:

df = pd.read_csv('./data.csv')  # i.e., a DataFrame read from a CSV file with a "userID" column

review_ids = ['1', '2', '3']  # i.e., ID values to grab from the DataFrame

# Get the rows of df whose userID matches and combine them
dfa = pd.concat([df.loc[df['userID'] == x] for x in review_ids])
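An equivalent one-step sketch uses `isin` instead of concatenating per-ID selections (the data below is made up, assuming userID holds strings):

```python
import pandas as pd

df = pd.DataFrame({'userID': ['1', '2', '3', '4'],
                   'score': [10, 20, 30, 40]})
review_ids = ['1', '2', '3']

# Single boolean mask instead of one .loc lookup per ID
dfa = df[df['userID'].isin(review_ids)]
```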
Lelouch
0

pandas' concat also works well together with functools:

from functools import reduce
import pandas as pd

chunks = list(pd.read_csv("http://www.aol.com/users/data.csv", chunksize=10000))
df = reduce(lambda a, b: pd.concat([a, b]), chunks)