
I have some data from log files and would like to group entries by a minute:

 from datetime import datetime
 from numpy.random import randint  # numpy-style: randint(40) draws from [0, 40)
 from pandas import DataFrame, DateOffset

 def gen(date, count=10):
     while count > 0:
         yield date, "event{}".format(randint(1, 9)), "source{}".format(randint(1, 3))
         count -= 1
         date += DateOffset(seconds=randint(40))

 df = DataFrame.from_records(list(gen(datetime(2012, 1, 1, 12, 30))), index='Time', columns=['Time', 'Event', 'Source'])

df:

 Event  Source
 2012-01-01 12:30:00     event3  source1
 2012-01-01 12:30:12     event2  source2
 2012-01-01 12:30:12     event2  source2
 2012-01-01 12:30:29     event6  source1
 2012-01-01 12:30:38     event1  source1
 2012-01-01 12:31:05     event4  source2
 2012-01-01 12:31:38     event4  source1
 2012-01-01 12:31:44     event5  source1
 2012-01-01 12:31:48     event5  source2
 2012-01-01 12:32:23     event6  source1

I tried these options:

  1. df.resample('Min') is too high-level and wants to aggregate.
  2. df.groupby(date_range(datetime(2012,1,1,12, 30), freq='Min', periods=4)) fails with an exception.
  3. df.groupby(TimeGrouper(freq='Min')) works fine and returns a DataFrameGroupBy object for further processing, e.g.:

    grouped = df.groupby(TimeGrouper(freq='Min'))
    grouped.Source.value_counts()
    2012-01-01 12:30:00  source1    1
    2012-01-01 12:31:00  source2    2
                         source1    2
    2012-01-01 12:32:00  source2    2
                         source1    2
    2012-01-01 12:33:00  source1    1
    

However, the TimeGrouper class is not documented.

What is the correct way to group by a period of time? How can I group the data by a minute AND by the Source column, e.g. groupby([TimeGrouper(freq='Min'), df.Source])?

serguei

3 Answers


You can group on any array/Series of the same length as your DataFrame, even a computed factor that's not actually a column of the DataFrame. So to group by minute you can do:

df.groupby(df.index.map(lambda t: t.minute))

If you want to group by minute and something else, just mix the above with the column you want to use:

df.groupby([df.index.map(lambda t: t.minute), 'Source'])

Personally I find it useful to just add columns to the DataFrame to store some of these computed things (e.g., a "Minute" column) if I want to group by them often, since it makes the grouping code less verbose.
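For instance, a minimal runnable sketch of that column-based approach (the data here is hypothetical, just a few rows in the same shape as the question's example):

```python
import pandas as pd

# Hypothetical sample in the same shape as the question's DataFrame.
df = pd.DataFrame(
    {"Event": ["event3", "event2", "event6", "event4"],
     "Source": ["source1", "source2", "source1", "source2"]},
    index=pd.to_datetime(["2012-01-01 12:30:00", "2012-01-01 12:30:12",
                          "2012-01-01 12:30:29", "2012-01-01 12:31:05"]),
)

# Store the truncated timestamp as a regular column, then group on it.
df["Minute"] = df.index.floor("min")
counts = df.groupby(["Minute", "Source"]).size()
```

Unlike grouping on t.minute alone, flooring the full timestamp keeps events from different hours in separate groups.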

Or you could try something like this:

df.groupby([df['Source'], pd.TimeGrouper(freq='Min')])
BrenBarn
  • Thank you. I got the result I was looking for with this statement: df.groupby([df.index.map(lambda t: datetime(t.year, t.month, t.day, t.hour, t.minute)), df.Source, df.Event]).size().unstack(level=2) – serguei Jun 17 '12 at 19:15
  • How can I extend it to 30 minutes? – igauravsehrawat Aug 20 '14 at 07:32
  • pd.TimeGrouper can be used to group by multiples of time units: `df.groupby(pd.TimeGrouper(freq='30Min'))` – salomonvh Sep 16 '15 at 12:47
  • I want to group data by days, but my day ends at 02:00, not at 24:00. How can this be done? – Andi Anderle Feb 20 '18 at 11:57
  • TimeGrouper is deprecated since pandas 0.21 (https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.21.0.html#deprecations) - would pd.Grouper be a drop-in replacement here? – Mr_and_Mrs_D Jul 07 '19 at 13:43

Since the original answer is rather old and pandas has since introduced periods, a different solution nowadays is:

df.groupby(df.index.to_period('T'))

Additionally, you can resample; note that this returns a Resampler object that still needs an aggregation, for example:

df.resample('T').size()
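A minimal runnable sketch of both variants (hypothetical data; note that recent pandas versions prefer the frequency alias 'min' over 'T'):

```python
import pandas as pd

# Hypothetical sample: four timestamped events.
s = pd.Series(
    ["event3", "event2", "event6", "event4"],
    index=pd.to_datetime(["2012-01-01 12:30:00", "2012-01-01 12:30:12",
                          "2012-01-01 12:30:29", "2012-01-01 12:31:05"]),
)

by_period = s.groupby(s.index.to_period("min")).size()  # PeriodIndex keys
by_resample = s.resample("min").size()                  # DatetimeIndex keys
```

Both count events per minute; they differ only in the index type of the result.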
Quickbeam2k1

pd.TimeGrouper is now deprecated. Here is an update for pandas v1.0.5 using pd.Grouper:

df['Date'] = df.index

df.groupby(['Source', pd.Grouper(key='Date', freq='30min')])
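As a variation (a sketch with hypothetical data), pd.Grouper can also target a DatetimeIndex directly via level=0, which avoids copying the index into a Date column:

```python
import pandas as pd

# Hypothetical sample with a DatetimeIndex.
df = pd.DataFrame(
    {"Source": ["source1", "source2", "source1", "source2"]},
    index=pd.to_datetime(["2012-01-01 12:30:00", "2012-01-01 12:30:12",
                          "2012-01-01 12:31:05", "2012-01-01 12:31:48"]),
)

# Group by Source and by 30-minute bins of the index itself.
counts = df.groupby(["Source", pd.Grouper(level=0, freq="30min")]).size()
```

Here all four rows fall in the same 30-minute bin starting at 12:30.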
Prageeth Jayathissa