
I have a pandas DataFrame that I want to separate into observations for which there are no missing values and observations with missing values. I can use dropna() to get rows without missing values. Is there any analog to get rows with missing values?

# Example DataFrame
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, np.nan, 3, 4, 5], 'col2': [6, 7, np.nan, 9, 10]})

# Get observations without missing values
df.dropna()
Gaurav Bansal

2 Answers


Check for nulls row-wise and filter with boolean indexing:

df[df.isnull().any(axis=1)]

#  col1 col2
#1  NaN  7.0
#2  3.0  NaN
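
For reference, a minimal end-to-end sketch with the DataFrame from the question (newer pandas versions require the axis to be passed as a keyword, and isna() is the modern alias for isnull()):

import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1, np.nan, 3, 4, 5], 'col2': [6, 7, np.nan, 9, 10]})

# Rows with at least one missing value
with_na = df[df.isna().any(axis=1)]

# Rows with no missing values, i.e. the dropna() complement
without_na = df.dropna()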
Psidom
    Or if you were super concerned about performance, `df[np.isnan(df.values).any(1)]` - the difference in `any`'s performance between an ndarray and a DataFrame has always seemed noticeable to me. – miradulo Oct 08 '17 at 01:46
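
A rough sketch of that NumPy-based variant (it assumes all columns are numeric, since np.isnan raises a TypeError on object dtypes; df.isna().to_numpy() is the safer drop-in for mixed frames):

import numpy as np

# Build the boolean mask on the underlying ndarray rather than the DataFrame
mask = np.isnan(df.to_numpy()).any(axis=1)
df[mask]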

`~` = the opposite (logical negation) :-)

df.loc[~df.index.isin(df.dropna().index)]

Out[234]: 
   col1  col2
1   NaN   7.0
2   3.0   NaN

Or

df.loc[df.index.difference(df.dropna().index)]
Out[235]: 
   col1  col2
1   NaN   7.0
2   3.0   NaN
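
The same idea also gives a tidy way to produce both pieces the question asks for in one pass (a small sketch; indexing by the complement of dropna()'s index keeps the two halves disjoint):

complete = df.dropna()
missing = df.loc[df.index.difference(complete.index)]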
BENY