
I've got a dataset with a large number of rows. Some of the values are NaN, like this:

In [91]: df
Out[91]:
 1    3      1      1      1
 1    3      1      1      1
 2    3      1      1      1
 1    1    NaN    NaN    NaN
 1    3      1      1      1
 1    1      1      1      1

And I want to count the number of NaN values in each row; the result would look like this:

In [91]: list = <somecode with df>
In [92]: list
Out[92]:
[0,
 0,
 0,
 3,
 0,
 0]

What is the best and fastest way to do it?
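For reference, a minimal sketch that reproduces the frame above (the column names are placeholders, since the question doesn't show any):

```python
import numpy as np
import pandas as pd

# Rebuild the example frame; columns a-e are assumed names.
df = pd.DataFrame(
    [[1, 3, 1, 1, 1],
     [1, 3, 1, 1, 1],
     [2, 3, 1, 1, 1],
     [1, 1, np.nan, np.nan, np.nan],
     [1, 3, 1, 1, 1],
     [1, 1, 1, 1, 1]],
    columns=list("abcde"),
)
```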

smci
Chernyavski.aa
  • Similar question for columns: [How do I get a summary count of missing/NaN data by column in 'pandas'?](http://stackoverflow.com/questions/22257527/how-do-i-get-a-summary-of-the-counts-of-missing-data-in-pandas) – smci Nov 17 '16 at 10:59
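The column-wise count from the linked question is the same idea with `axis=0`, which is the default for `sum()`; a small sketch on a made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan], "b": [np.nan, np.nan]})

# sum() defaults to axis=0, i.e. one NaN count per column.
print(df.isnull().sum())  # a -> 1, b -> 2
```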

1 Answer


You could first check whether each element is NaN with isnull(), and then take the row-wise sum with sum(axis=1):

In [195]: df.isnull().sum(axis=1)
Out[195]:
0    0
1    0
2    0
3    3
4    0
5    0
dtype: int64

And if you want the output as a list, chain tolist():

In [196]: df.isnull().sum(axis=1).tolist()
Out[196]: [0, 0, 0, 3, 0, 0]

Or use count(axis=1), which counts the non-NaN values in each row, and subtract it from the number of columns:

In [130]: df.shape[1] - df.count(axis=1)
Out[130]:
0    0
1    0
2    0
3    3
4    0
5    0
dtype: int64
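As a sanity check, a small sketch (on a made-up frame, not the one from the question) showing that both approaches agree:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 1, np.nan], "b": [np.nan, 2, np.nan]})

per_row_sum = df.isnull().sum(axis=1)           # count True values row-wise
per_row_count = df.shape[1] - df.count(axis=1)  # columns minus non-NaN per row

# Both Series carry the same per-row NaN counts.
assert per_row_sum.tolist() == per_row_count.tolist() == [1, 0, 2]
```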
Zero