220

The pandas drop_duplicates function is great for "uniquifying" a dataframe. However, the keyword arguments it offers are take_last=True or take_last=False (keep the last or the first occurrence), while I would like to drop all rows which are duplicates across a subset of columns. Is this possible?

    A   B   C
0   foo 0   A
1   foo 1   A
2   foo 1   B
3   bar 1   A

As an example, I would like to drop rows which match on columns A and C, so this should drop rows 0 and 1.

Georgy
Jamie Bull

6 Answers

315

This is much easier in pandas now with drop_duplicates and the keep parameter.

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.drop_duplicates(subset=['A', 'C'], keep=False)
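Applied to the example frame, this removes rows 0 and 1 (the two ("foo", "A") rows) and keeps the rest; a quick check:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# keep=False drops every member of a duplicated (A, C) group, so the
# two ("foo", "A") rows (labels 0 and 1) are removed entirely.
deduped = df.drop_duplicates(subset=["A", "C"], keep=False)
print(deduped.index.tolist())  # → [2, 3]
```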
renan-eccel
Ben
  • 2
    What if my columns are not explicitly labelled? How do I select the columns just based on their index? – Hamman Samuel Feb 09 '17 at 21:42
  • 4
    Maybe `df.reindex(df.iloc[:,[0,2]].drop_duplicates(keep=False).index)`? – Ben Feb 10 '17 at 23:09
  • 8
    you could try `df.drop_duplicates(subset=[df.columns[0:2]], keep = False)` – seeiespi Feb 14 '18 at 19:56
  • If your `subset` is just a single column like `A`, the `keep=False` will remove all rows. If you define `keep` as `first` or `last`, you will keep at least one record from all. It doesn't apply to the question, but if your subset is a single column (like my case), this information might be helpful when dealing with the `drop_duplicates` method: you might lose a lot of records, instead of just removing the duplicates. Regards :). – ivanleoncz Aug 05 '21 at 18:59
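On the comment question about unlabelled columns: one option (an illustrative sketch, not from the answers above) is to convert column positions into labels via `df.columns` and pass those labels as `subset`:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# Turn positions 0 and 2 into their labels, then hand those to subset.
cols = df.columns[[0, 2]]  # Index(['A', 'C'])
result = df.drop_duplicates(subset=cols, keep=False)
print(result.index.tolist())  # → [2, 3]
```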
100

Just want to add to Ben's answer on drop_duplicates:

keep : {‘first’, ‘last’, False}, default ‘first’

  • first : Drop duplicates except for the first occurrence.

  • last : Drop duplicates except for the last occurrence.

  • False : Drop all duplicates.

So setting keep to False will give you the desired answer.

DataFrame.drop_duplicates(*args, **kwargs) — Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Parameters:

  • subset : column label or sequence of labels, optional — Only consider certain columns for identifying duplicates; by default, use all of the columns.

  • keep : {‘first’, ‘last’, False}, default ‘first’ — first: drop duplicates except for the first occurrence; last: drop duplicates except for the last occurrence; False: drop all duplicates.

  • take_last : deprecated

  • inplace : boolean, default False — Whether to drop duplicates in place or to return a copy.

  • cols : kwargs-only argument of subset [deprecated]

Returns: deduplicated : DataFrame
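To make the three keep settings concrete on the question's data:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# Rows 0 and 1 form the duplicated ("foo", "A") pair on columns A and C.
first = df.drop_duplicates(["A", "C"], keep="first")  # keeps row 0
last = df.drop_duplicates(["A", "C"], keep="last")    # keeps row 1
none = df.drop_duplicates(["A", "C"], keep=False)     # keeps neither

print(first.index.tolist())  # → [0, 2, 3]
print(last.index.tolist())   # → [1, 2, 3]
print(none.index.tolist())   # → [2, 3]
```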

Jake
48

If you want the result to be stored in another dataset:

df.drop_duplicates(keep=False)

or

df.drop_duplicates(keep=False, inplace=False)

If the same dataset needs to be updated:

df.drop_duplicates(keep=False, inplace=True)

Note that with keep=False the examples above remove every row that has a duplicate, keeping none of them. To keep exactly one row per group, similar to DISTINCT * in SQL, use keep='first' (the default) or keep='last' instead.
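A minimal sketch of the copy-versus-in-place distinction, using keep='first' (which is what actually mirrors SQL's DISTINCT):

```python
import pandas as pd

df = pd.DataFrame({"A": ["x", "x", "y"], "B": [1, 1, 2]})

# inplace=False (the default) returns a new frame and leaves df alone.
deduped = df.drop_duplicates(keep="first")
print(len(df), len(deduped))  # → 3 2

# inplace=True modifies df itself and returns None.
df.drop_duplicates(keep="first", inplace=True)
print(len(df))  # → 2
```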

Ramanujam Allam
16

Use groupby and filter:

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.groupby(["A", "C"]).filter(lambda g: len(g) == 1)
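filter drops every group for which the predicate returns False, so only the (A, C) groups containing exactly one row survive, and the rows come back in their original order:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# Groups: ("foo","A") has 2 rows; ("foo","B") and ("bar","A") have 1 each.
result = df.groupby(["A", "C"]).filter(lambda g: len(g) == 1)
print(result.index.tolist())  # → [2, 3]
```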
HYRY
14

Try these variations:

df = pd.DataFrame({"A":["foo", "foo", "foo", "bar","foo"], "B":[0,1,1,1,1], "C":["A","A","B","A","A"]})

df.drop_duplicates("A", keep='first')

or

df.drop_duplicates(keep='first')

or

df.drop_duplicates(keep='last')
Priyansh gupta
10

Actually, dropping rows 0 and 1 only requires the following (any one observation containing matched A and C is kept):

In [335]: df['AC'] = df.A + df.C

In [336]: print(df.drop_duplicates('C', keep='last'))  # this dataset is a special case; in general, one may need to first drop_duplicates by 'C' and then by 'A'
     A  B  C    AC
2  foo  1  B  fooB
3  bar  1  A  barA

[2 rows x 4 columns]

But I suspect what you really want is this (one observation for each matched A and C pair is kept):

In [337]: print(df.drop_duplicates('AC'))
     A  B  C    AC
0  foo  0  A  fooA
2  foo  1  B  fooB
3  bar  1  A  barA

[3 rows x 4 columns]

Edit:

Now it is much clearer, therefore:

In [352]: DG = df.groupby(['A', 'C'])
          print(pd.concat([DG.get_group(item) for item, value in DG.groups.items() if len(value) == 1]))
     A  B  C
2  foo  1  B
3  bar  1  A

[2 rows x 3 columns]
CT Zhu
  • 1
    If that was what I wanted, I'd just use `df.drop_duplicates(['A','C'])`, as the default keeps one observation (the first or last), as I mentioned in the question - although I've just realised I had the keyword wrong, as I was writing from memory. What I want is to drop all rows which are identical on the columns of interest (A and C in the example data). – Jamie Bull May 15 '14 at 01:24