I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with `if x in df['id']`. I thought this was working, except that when I fed it a value that I knew was not in the column, `43 in df['id']`, it still returned `True`. When I subset to a data frame only containing entries matching the missing id, `df[df['id'] == 43]`, there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this answer to a similar question.)
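A minimal sketch of the behaviour, with made-up data (my real `id` column holds different values):

import pandas as pd

# Made-up data: 43 never appears among the values,
# but the default RangeIndex does contain the label 43.
df = pd.DataFrame({'id': range(100, 200)})

print(43 in df['id'])            # True  -- yet 43 is not a value in the column
print(df[df['id'] == 43].empty)  # True  -- no rows actually match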
10 Answers
`in` of a Series checks whether the value is in the index:
In [11]: s = pd.Series(list('abc'))
In [12]: s
Out[12]:
0 a
1 b
2 c
dtype: object
In [13]: 1 in s
Out[13]: True
In [14]: 'a' in s
Out[14]: False
One option is to see if it's in unique values:
In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)
In [22]: 'a' in s.unique()
Out[22]: True
or a python set:
In [23]: set(s)
Out[23]: {'a', 'b', 'c'}
In [24]: 'a' in set(s)
Out[24]: True
As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to use `in` directly on the values:
In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)
In [32]: 'a' in s.values
Out[32]: True
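Applied to the question's hypothetical `df['id']` column, the check would be (a sketch):

43 in df['id'].values   # checks the values, not the index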
- I don't want to know whether it is unique necessarily, mainly I want to know if it's there. – Michael Jan 23 '14 at 21:47
- I think `'a' in s.values` should be faster for long Series. – DSM Jan 23 '14 at 21:48
- @AndyHayden Do you know why, for `'a' in s`, pandas chooses to check the index rather than the values of the series? In dictionaries they check keys, but a pandas series should behave more like a list or array, no? – Lei Nov 22 '17 at 17:55
- @LeiHuang a Series is a key-value data structure (I guess so is a DataFrame, which acts the same here); also, `in` can be _very_ efficient on an Index: it's `O(1)`, whereas on an array it'll be `O(n)`. But you're right, this is something one does find oneself doing occasionally... my claim is that generally there's a better way/alternative... – Andy Hayden Nov 22 '17 at 20:08
- Starting from pandas 0.24.0, using `s.values` and `df.values` is highly discouraged. See [this](https://pandas-dev.github.io/pandas-blog/pandas-extension-arrays.html). Also, `s.values` is actually much slower in some cases. – Qusai Alothman Feb 01 '19 at 07:09
- @QusaiAlothman neither `.to_numpy` nor `.array` are available on a Series, so I'm not entirely sure what alternative they're advocating (I don't read "highly discouraged"). In fact they're saying that `.values` may not return a numpy array, e.g. in the case of a categorical... but that's fine, as `in` will still work as expected (indeed more efficiently than its numpy array counterpart). – Andy Hayden Feb 01 '19 at 07:44
- Adding to the comment of @QusaiAlothman: use ``"a" in s.series``! – greeeeeeen Jan 17 '20 at 19:45
- This works for me. You can perform this on a dataframe column by executing: ``"a" in df["column_name"].values`` – Robvh Jan 29 '20 at 11:10
- Since the question is whether a column (not the index) contains a value and using the `.values` property is the most efficient way to do that, using that property should appear first in this answer. – Mr. Lance E Sloan Aug 20 '20 at 19:33
- Note that this doesn't work for the case `np.nan in s.values`: `s` may contain `np.nan` and it will still return `False`. – Stalpotaten Feb 02 '21 at 20:47
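A small sketch of how one might handle the `np.nan` caveat from the last comment (the values here are made up):

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
np.nan in s.values   # False, because NaN != NaN in the element-wise comparison
s.isna().any()       # True -- check for missing values explicitly instead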
You can also use `pandas.Series.isin`, although it's a little bit longer than `'a' in s.values`:
In [2]: s = pd.Series(list('abc'))
In [3]: s
Out[3]:
0 a
1 b
2 c
dtype: object
In [3]: s.isin(['a'])
Out[3]:
0 True
1 False
2 False
dtype: bool
In [4]: s[s.isin(['a'])].empty
Out[4]: False
In [5]: s[s.isin(['z'])].empty
Out[5]: True
But this approach can be more flexible if you need to match multiple values at once for a DataFrame (see `DataFrame.isin`):
>>> df = DataFrame({'A': [1, 2, 3], 'B': [1, 4, 7]})
>>> df.isin({'A': [1, 3], 'B': [4, 7, 12]})
A B
0 True False # Note that B didn't match 1 here.
1 False True
2 True True
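To reduce this to a single boolean for one value, the result can be combined with `any()` (a sketch against the question's hypothetical `df['id']` column):

df['id'].isin([43]).any()   # True only if 43 appears among the column's values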
- You could also use the [DataFrame.any()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html) function: `s.isin(['a']).any()` – thando Jul 06 '20 at 08:35
found = df[df['Column'].str.contains('Text_to_search')]
print(found.count())
`found.count()` will contain the number of matches, and if it is 0 it means the string was not found in the column.
- This approach worked for me, but I had to include the parameters `na=False` and `regex=False` for my use case, as explained here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html – Mabyn Oct 07 '19 at 22:24
- But `str.contains` does a substring search. E.g. if a value called "head_hunter" is present, passing "head" to `str.contains` matches and gives `True`, which is wrong. – karthikeyan Jun 08 '20 at 14:09
- @karthikeyan It's not wrong; it depends on the context of your search. What if you are searching for addresses or products? You'd need every product that fits the description. – Shahir Ansari Jun 27 '20 at 08:48
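A sketch of the two behaviours discussed in the comments above, reusing the answer's hypothetical `'Column'` name: exact equality avoids the substring pitfall, while `str.contains` with `na=False` and `regex=False` does a literal substring search.

exact   = df[df['Column'] == 'Text_to_search']                                     # whole-value match only
partial = df[df['Column'].str.contains('Text_to_search', na=False, regex=False)]   # substring match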
I did a few simple tests:
In [10]: x = pd.Series(range(1000000))
In [13]: timeit 999999 in x.values
567 µs ± 25.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [16]: timeit (x == 999999).any()
6.86 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: timeit x.eq(999999).any()
7.03 ms ± 33.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: timeit x.eq(9).any()
7.04 ms ± 60 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [15]: timeit x.isin([999999]).any()
9.54 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [17]: timeit 999999 in set(x)
79.8 ms ± 1.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Interestingly, it doesn't matter whether you look up 9 or 999999; it seems to take about the same amount of time using the `in` syntax (it must be using some vectorized computation):
In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [25]: timeit 9999 in x.values
647 µs ± 5.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [26]: timeit 999999 in x.values
642 µs ± 2.11 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [27]: timeit 99199 in x.values
644 µs ± 5.31 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [28]: timeit 1 in x.values
667 µs ± 20.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Seems like using `x.values` is the fastest, but maybe there is a more elegant way in pandas?
- It would be great if you changed the order of results from smallest to largest. Nice work! – smm Jan 10 '20 at 00:06
Or use `Series.tolist` or `Series.any`:
>>> s = pd.Series(list('abc'))
>>> s
0 a
1 b
2 c
dtype: object
>>> 'a' in s.tolist()
True
>>> (s=='a').any()
True
`Series.tolist` makes a list out of the Series; in the other case I'm just getting a boolean Series from a regular Series and then checking whether there are any `True`s in the boolean Series.
Simple condition:
if any(str(elem) in ['a','b'] for elem in df['column'].tolist()):
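A self-contained sketch of this condition, with a made-up `'column'` of string values:

import pandas as pd

df = pd.DataFrame({'column': ['a', 'c', 'd']})   # made-up data

if any(str(elem) in ['a', 'b'] for elem in df['column'].tolist()):
    print("at least one of 'a' or 'b' is present")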
Use
df[df['id']==x].index.tolist()
If `x` is present in `id`, it will return the list of index labels where it is present; otherwise it gives an empty list.
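For example, with the question's hypothetical `df` and `x = 43` (a sketch):

matches = df[df['id'] == 43].index.tolist()
if matches:
    print('found at index labels:', matches)
else:
    print('not found')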
Suppose your dataframe has a `filename` column and you want to check whether the filename "80900026941984" is present in the dataframe or not.
You can simply write:
if sum(df["filename"].astype("str").str.contains("80900026941984")) > 0:
print("found")