7

I have this:

                                  items, name
0   { [{'a': 2, 'b': 1}, {'a': 4, 'b': 3}], this }
1   { [{'a': 2, 'b': 1}, {'a': 4, 'b': 3}], that }

But I would like the list of dictionary objects exploded (flattened?) into actual rows like this:

    a, b, name
0   { 2, 1, this}
1   { 4, 3, this}
0   { 2, 1, that}
1   { 4, 3, that}

I have been trying to use melt, but with no luck. Any ideas or suggestions?

Data to produce DataFrame:

data = {'items': [[{'a': 2, 'b': 1}, {'a': 4, 'b': 3}], [{'a': 2, 'b': 1}, {'a': 4, 'b': 3}]], 'name': ['this', 'that']}
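
Presumably the frame itself is built straight from that dict (assuming the usual import pandas as pd; the name df is used throughout the answers below):

import pandas as pd
df = pd.DataFrame(data)  # 'items' holds a list of dicts per row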
De La Brez

5 Answers

5

Another way to use concat perhaps more cleanly:

In [11]: pd.concat(df.group.apply(pd.DataFrame).tolist(), keys=df["name"])
Out[11]:
        a  b
name
this 0  2  1
     1  4  3
that 0  2  1
     1  4  3

In [12]: pd.concat(df.group.apply(pd.DataFrame).tolist(), 
                        keys=df["name"]).reset_index(level="name")
Out[12]:
   name  a  b
0  this  2  1
1  this  4  3
0  that  2  1
1  that  4  3
cs95
Andy Hayden
  • Looks good but I'm getting this `AttributeError: 'DataFrame' object has no attribute 'group'`? Maybe my version of pandas? – De La Brez Nov 07 '17 at 14:21
  • `pd.concat(df['items'].apply(pd.DataFrame).tolist(), keys=df["name"]).reset_index(level="name")` – De La Brez Nov 07 '17 at 14:28
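
Written out as a block, that adjusted call from the comment above (using the question's items column rather than the answer's group column) would be:

pd.concat(df['items'].apply(pd.DataFrame).tolist(),
          keys=df["name"]).reset_index(level="name")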
3
import numpy as np
# flatten every row's list of dicts into one frame of 'a'/'b' columns
ab = pd.DataFrame.from_dict(np.concatenate(df['items']).tolist())
lens = df['items'].str.len()
# repeat the remaining columns once per dict, then join side by side
rest = df.drop('items', axis=1).iloc[df.index.repeat(lens)].reset_index(drop=True)
ab.join(rest)

   a  b  name
0  2  1  this
1  4  3  this
2  2  1  that
3  4  3  that
piRSquared
3
# build a frame from each name-group's list of dicts, concat them with the
# names as keys, then move the keys back into columns and drop the inner level
pd.concat([pd.DataFrame(df1.iloc[0]) for x, df1 in df.groupby('name')['group']],
          keys=df.name).reset_index().drop('level_1', axis=1)
Out[63]: 
   name  a  b
0  this  2  1
1  this  4  3
2  that  2  1
3  that  4  3

Data Input

df = pd.DataFrame({ "group":[[{'a': 2, 'b': 1}, {'a': 4, 'b': 3}],[{'a': 2, 'b': 1}, {'a': 4, 'b': 3}]],
                   "name": ['this', 'that']})
BENY
2

Another solution is to set_index with "name" and explode "items" (Series.explode is available from pandas 0.25), then cast the resulting Series to a DataFrame.

s = df.set_index('name')['items'].explode()
out = pd.DataFrame(s.tolist(), index=s.index).reset_index()

Output:

   name  a  b
0  this  2  1
1  this  4  3
2  that  2  1
3  that  4  3
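
For reference, the intermediate s produced by explode should be a Series of dicts with a repeated name index, roughly:

s = df.set_index('name')['items'].explode()
# name
# this    {'a': 2, 'b': 1}
# this    {'a': 4, 'b': 3}
# that    {'a': 2, 'b': 1}
# that    {'a': 4, 'b': 3}
# Name: items, dtype: object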

It appears that set_index + explode + DataFrame is faster (at least for the OP's data) than all the options given in the other answers.

%timeit -n 1000 out = pd.concat(df['items'].apply(pd.DataFrame).tolist(), keys=df["name"]).reset_index()
%timeit -n 1000 ab = pd.DataFrame.from_dict(np.concatenate(df['items']).tolist()); lens = df['items'].str.len(); rest = df.drop('items', axis=1).iloc[df.index.repeat(lens)].reset_index(drop=True); out = ab.join(rest)
%timeit -n 1000 out = pd.concat([pd.DataFrame(df1.iloc[0]) for x,df1 in df.groupby('name')['items']],keys=df.name).reset_index().drop('level_1',axis=1)
%timeit -n 1000 s = df.set_index('name')['items'].explode(); out = pd.DataFrame(s.tolist(), index=s.index).reset_index()
2.5 ms ± 29.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.75 ms ± 12.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
3.82 ms ± 433 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
1.46 ms ± 68 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
0
# iterate over the rows, copy each dict and tag it with the row's name
tmp_list = []
for index, row in df.iterrows():
    for list_item in row['items']:
        tmp_list.append({**list_item, 'name': row['name']})
pd.DataFrame(tmp_list)

   a  b  name
0  2  1  this
1  4  3  this
2  2  1  that
3  4  3  that
Cero