import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3': np.random.random(5)})

What is the best way to return the unique values of 'Col1' and 'Col2'?

The desired output is

'Bob', 'Joe', 'Bill', 'Mary', 'Steve'
user2333196

  • See also [unique combinations of values in selected columns in pandas data frame and count](https://stackoverflow.com/questions/35268817/unique-combinations-of-values-in-selected-columns-in-pandas-data-frame-and-count) for a different but related question. The selected answer there uses `df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})` – Paul Rougieux Jun 20 '19 at 09:34

8 Answers


pd.unique returns the unique values from an input array, or DataFrame column or index.

The input to this function needs to be one-dimensional, so multiple columns will need to be combined. The simplest way is to select the columns you want and then view the values in a flattened NumPy array. The whole operation looks like this:

>>> pd.unique(df[['Col1', 'Col2']].values.ravel('K'))
array(['Bob', 'Joe', 'Bill', 'Mary', 'Steve'], dtype=object)

Note that ravel() is an array method that returns a view (if possible) of a multidimensional array. The argument 'K' tells the method to flatten the array in the order the elements are stored in memory (pandas typically stores the underlying arrays in Fortran-contiguous order: columns before rows). This can be significantly faster than using the method's default 'C' order.
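To make the 'C' vs 'K' distinction concrete, here is a small sketch (the exact 'K' order depends on how pandas laid out the underlying block, as described above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve']})
arr = df[['Col1', 'Col2']].values

# 'C' order walks row by row, interleaving the two columns
print(arr.ravel())       # ['Bob' 'Joe' 'Joe' 'Steve' ...]

# 'K' order follows the memory layout; for a Fortran-ordered block
# this walks all of Col1 first, then all of Col2
print(arr.ravel('K'))
```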


An alternative way is to select the columns and pass them to np.unique:

>>> np.unique(df[['Col1', 'Col2']].values)
array(['Bill', 'Bob', 'Joe', 'Mary', 'Steve'], dtype=object)

There is no need to use ravel() here as the function handles multidimensional arrays. Even so, this is likely to be slower than pd.unique because it uses a sort-based algorithm rather than a hash table to identify unique values.
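A further behavioural difference worth noting: pd.unique preserves order of first appearance, while np.unique returns its result sorted. A minimal sketch:

```python
import numpy as np
import pandas as pd

vals = np.array(['Bob', 'Joe', 'Bill', 'Mary', 'Joe', 'Bob'], dtype=object)

# pd.unique keeps order of first appearance (hash-based)
print(pd.unique(vals))   # ['Bob' 'Joe' 'Bill' 'Mary']

# np.unique returns a sorted result (sort-based)
print(np.unique(vals))   # ['Bill' 'Bob' 'Joe' 'Mary']
```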

The difference in speed is significant for larger DataFrames (especially if there are only a handful of unique values):

>>> df1 = pd.concat([df]*100000, ignore_index=True) # DataFrame with 500000 rows
>>> %timeit np.unique(df1[['Col1', 'Col2']].values)
1 loop, best of 3: 1.12 s per loop

>>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel('K'))
10 loops, best of 3: 38.9 ms per loop

>>> %timeit pd.unique(df1[['Col1', 'Col2']].values.ravel()) # ravel using C order
10 loops, best of 3: 49.9 ms per loop
Alex Riley
  • How do you get a dataframe back instead of an array? – Lisle Jun 03 '16 at 14:57
  • @Lisle: both methods return a NumPy array, so you'll have to construct it manually, e.g., `pd.DataFrame(unique_values)`. There's no good way to get back a DataFrame directly. – Alex Riley Nov 08 '17 at 12:41
  • @Lisle since he has used pd.unique it returns a numpy.ndarray as a final output. Is this what you were asking? – Ash Upadhyay Sep 05 '19 at 12:15
  • @Lisle, maybe this one `df = df.drop_duplicates(subset=['C1','C2','C3'])`? – tickly potato Jun 15 '20 at 19:11
  • To get only the columns you need into a dataframe you could do `df.groupby(['C1', 'C2', 'C3']).size().reset_index().drop(columns=0)`. This does a group-by, which by default picks the unique combinations and counts the items per group. The `reset_index` converts the MultiIndex to a flat two-dimensional frame, and the final `drop` removes the count column. – andrnev May 03 '21 at 10:02
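Following up on the comment thread above, a minimal sketch of wrapping the array result back into a pandas object (the column name `name` is arbitrary):

```python
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve']})

unique_values = pd.unique(df[['Col1', 'Col2']].values.ravel('K'))

# wrap the NumPy array back into a single-column DataFrame
out = pd.DataFrame({'name': unique_values})
print(out)
```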

I have set up a DataFrame with a few simple strings in its columns:

>>> df
   a  b
0  a  g
1  b  h
2  d  a
3  e  e

You can concatenate the columns you are interested in and call the `unique` function:

>>> pandas.concat([df['a'], df['b']]).unique()
array(['a', 'b', 'd', 'e', 'g', 'h'], dtype=object)
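Applied to the question's columns, the same concat-then-unique approach preserves order of first appearance (a quick sketch):

```python
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve']})

# stack Col2 below Col1, then take the uniques of the combined Series
result = pd.concat([df['Col1'], df['Col2']]).unique()
print(result)   # ['Bob' 'Joe' 'Bill' 'Mary' 'Steve']
```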
Mike
  • This doesn't work when you have something like `this_is_uniuqe = { 'col1': ["Hippo", "H"], "col2": ["potamus", "ippopotamus"] }` – sixtyfootersdude Nov 11 '20 at 00:17
In [5]: set(df.Col1).union(set(df.Col2))
Out[5]: {'Bill', 'Bob', 'Joe', 'Mary', 'Steve'}

Or:

set(df.Col1) | set(df.Col2)
James Little

An updated solution: NumPy v1.13+ lets you specify an axis in np.unique when using multiple columns; without the axis argument the array is implicitly flattened.

import numpy as np

np.unique(df[['Col1', 'Col2']], axis=0)

This change was introduced Nov 2016: https://github.com/numpy/numpy/commit/1f764dbff7c496d6636dc0430f083ada9ff4e4be
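Worth noting: with axis=0, np.unique treats each row as one item and returns the unique (Col1, Col2) pairs; to get the unique scalar values, omit the axis so the array is flattened. A sketch using a fixed-width string array (object arrays do not always support the axis argument):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve']})

# convert to a fixed-width string array first
arr = df[['Col1', 'Col2']].to_numpy(dtype=str)

rows = np.unique(arr, axis=0)  # unique rows: 4 distinct (Col1, Col2) pairs
flat = np.unique(arr)          # flattened: the 5 unique names, sorted
print(rows)
print(flat)
```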

erikreed

For those of us who love all things pandas, apply, and of course lambda functions:

df['Col3'] = df[['Col1', 'Col2']].apply(lambda x: ''.join(x), axis=1)
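The same concatenation can be done without apply, using vectorized string addition (a small sketch):

```python
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe'], 'Col2': ['Joe', 'Steve']})

# element-wise string concatenation, equivalent to the apply/lambda above
df['Col3'] = df['Col1'] + df['Col2']
print(df['Col3'].tolist())   # ['BobJoe', 'JoeSteve']
```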
Lisle

Here's another way:


import numpy as np
set(np.concatenate(df.values))
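Note that df.values here pulls in Col3's floats as well; restricting the selection to the two name columns keeps only the strings (a sketch):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3': np.random.random(5)})

# select only the columns of interest so the floats in Col3 are excluded
names = set(np.concatenate(df[['Col1', 'Col2']].values))
print(names)
```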
muon

A solution using the built-in set():

import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1': ['Bob', 'Joe', 'Bill', 'Mary', 'Joe'],
                   'Col2': ['Joe', 'Steve', 'Bob', 'Bob', 'Steve'],
                   'Col3': np.random.random(5)})

print(df)

# Series.append was removed in pandas 2.0; pd.concat is the replacement
print(set(pd.concat([df.Col1, df.Col2]).values))

Output:

   Col1   Col2      Col3
0   Bob    Joe  0.201079
1   Joe  Steve  0.703279
2  Bill    Bob  0.722724
3  Mary    Bob  0.093912
4   Joe  Steve  0.766027
{'Steve', 'Bob', 'Bill', 'Joe', 'Mary'}
WitchGod
DataFrame.as_matrix() was removed in pandas 1.0; to_numpy() is its replacement:

list(set(df[['Col1', 'Col2']].to_numpy().reshape(-1).tolist()))

The output will be the five unique names, e.g. ['Mary', 'Joe', 'Steve', 'Bob', 'Bill'] (set iteration order is arbitrary).

smishra