
I have one record per ID, with a start date and an end date:

id  age state   start_date  end_date
123 18  CA     2/17/2019    5/4/2019
223 24  AZ     1/17/2019    3/4/2019

I want to create a record for each day between the start and end date, so I can join daily activity data to it. The target output would look something like this:

id  age state   start_date
123 18  CA      2/17/2019
123 18  CA      2/18/2019
123 18  CA      2/19/2019
123 18  CA      2/20/2019
123 18  CA      2/21/2019
            …
123 18  CA      5/2/2019
123 18  CA      5/3/2019
123 18  CA      5/4/2019

And of course I'd do this for all IDs and their respective start dates in the dataset. Any help is much appreciated - thanks!
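
For reference, a sketch of the sample data above as a pandas DataFrame (the dtypes in the real dataset may differ):

import pandas as pd

df = pd.DataFrame({
    "id": [123, 223],
    "age": [18, 24],
    "state": ["CA", "AZ"],
    "start_date": ["2/17/2019", "1/17/2019"],
    "end_date": ["5/4/2019", "3/4/2019"],
})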

L Xandor

2 Answers


melt, GroupBy, resample & ffill

First we melt (unpivot) your two date columns into one. Then we resample per id on a daily basis and forward-fill:

# Unpivot start_date/end_date into a single 'date' column
melt = df.melt(id_vars=['id', 'age', 'state'], value_name='date').drop('variable', axis=1)
melt['date'] = pd.to_datetime(melt['date'])

# Per id, resample to daily frequency between the two dates, then forward-fill
melt = melt.groupby('id').apply(lambda x: x.set_index('date').resample('d').first())\
           .ffill()\
           .reset_index(level=1)\
           .reset_index(drop=True)

Output

          date     id   age state
0   2019-02-17  123.0  18.0    CA
1   2019-02-18  123.0  18.0    CA
2   2019-02-19  123.0  18.0    CA
3   2019-02-20  123.0  18.0    CA
4   2019-02-21  123.0  18.0    CA
..         ...    ...   ...   ...
119 2019-02-28  223.0  24.0    AZ
120 2019-03-01  223.0  24.0    AZ
121 2019-03-02  223.0  24.0    AZ
122 2019-03-03  223.0  24.0    AZ
123 2019-03-04  223.0  24.0    AZ

[124 rows x 4 columns]
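
A small follow-up sketch (not part of the original answer): because the resample inserts NaN rows before the forward-fill, id and age come back as floats in the output above; they can be cast back to integers if needed:

# Optional: restore integer dtypes after the forward-fill
melt = melt.astype({'id': 'int64', 'age': 'int64'})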

Edit:

I had to revisit this problem in a project, and it looks like using DataFrame.apply with pd.date_range and DataFrame.explode is almost 3x faster:

df["date"] = df.apply(
    lambda x: pd.date_range(x["start_date"], x["end_date"]), axis=1
)
df = (
    df.explode("date", ignore_index=True)
    .drop(columns=["start_date", "end_date"])
)

Output

      id  age state       date
0    123   18    CA 2019-02-17
1    123   18    CA 2019-02-18
2    123   18    CA 2019-02-19
3    123   18    CA 2019-02-20
4    123   18    CA 2019-02-21
..   ...  ...   ...        ...
119  223   24    AZ 2019-02-28
120  223   24    AZ 2019-03-01
121  223   24    AZ 2019-03-02
122  223   24    AZ 2019-03-03
123  223   24    AZ 2019-03-04

[124 rows x 4 columns]
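
Since the original goal was to join daily activity data onto this expanded frame, here is a minimal sketch of that last step, assuming a hypothetical activity DataFrame keyed by id and date (the activity column is made up for illustration):

# Hypothetical daily activity data; column names are illustrative only
activity = pd.DataFrame({
    "id": [123, 123, 223],
    "date": pd.to_datetime(["2019-02-17", "2019-02-18", "2019-03-01"]),
    "activity_count": [4, 7, 2],
})

# Left-join so every day is kept, even days with no recorded activity
daily = df.merge(activity, on=["id", "date"], how="left")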
Erfan
    damn, nice one. will be using this in future. – Umar.H Aug 02 '19 at 22:46
  • melt = melt.set_index('date').resample('d').first().ffill().reset_index() reduces the number of records (I was expecting it to explode by the number of days between the start and end date). In the prior step there are two records for each ID, but after that line only a small number of IDs are left, some with only one record even though they have multiple days between the start and end date. – L Xandor Aug 02 '19 at 23:18
  • Yes, you are right, I forgot about `groupby` per id. See edit, this should give correct output. @LXandor – Erfan Aug 02 '19 at 23:24
  • Works great. Thank you for the help – L Xandor Aug 06 '19 at 15:28

Use a list comprehension with pd.date_range on the values of columns start_date and end_date to create a list of dates for each record. Next, construct a new DataFrame from the result of the list comprehension and join it back to the other 3 columns of df. Finally, set_index, stack, and reset_index back:

# One DatetimeIndex per record, spanning start_date to end_date
a = [pd.date_range(*r, freq='D') for r in df[['start_date', 'end_date']].values]

# Join the ranges back, then stack so each day becomes its own row
df[['id', 'age', 'state']].join(pd.DataFrame(a)).set_index(['id', 'age', 'state']) \
                          .stack().droplevel(-1).reset_index()

Out[187]:
      id  age state          0
0    123   18    CA 2019-02-17
1    123   18    CA 2019-02-18
2    123   18    CA 2019-02-19
3    123   18    CA 2019-02-20
4    123   18    CA 2019-02-21
5    123   18    CA 2019-02-22
6    123   18    CA 2019-02-23
7    123   18    CA 2019-02-24
8    123   18    CA 2019-02-25
9    123   18    CA 2019-02-26
10   123   18    CA 2019-02-27
11   123   18    CA 2019-02-28
12   123   18    CA 2019-03-01
13   123   18    CA 2019-03-02
14   123   18    CA 2019-03-03
15   123   18    CA 2019-03-04
16   123   18    CA 2019-03-05
17   123   18    CA 2019-03-06
18   123   18    CA 2019-03-07
19   123   18    CA 2019-03-08
20   123   18    CA 2019-03-09
21   123   18    CA 2019-03-10
22   123   18    CA 2019-03-11
23   123   18    CA 2019-03-12
24   123   18    CA 2019-03-13
25   123   18    CA 2019-03-14
26   123   18    CA 2019-03-15
27   123   18    CA 2019-03-16
28   123   18    CA 2019-03-17
29   123   18    CA 2019-03-18
..   ...  ...   ...        ...
94   223   24    AZ 2019-02-03
95   223   24    AZ 2019-02-04
96   223   24    AZ 2019-02-05
97   223   24    AZ 2019-02-06
98   223   24    AZ 2019-02-07
99   223   24    AZ 2019-02-08
100  223   24    AZ 2019-02-09
101  223   24    AZ 2019-02-10
102  223   24    AZ 2019-02-11
103  223   24    AZ 2019-02-12
104  223   24    AZ 2019-02-13
105  223   24    AZ 2019-02-14
106  223   24    AZ 2019-02-15
107  223   24    AZ 2019-02-16
108  223   24    AZ 2019-02-17
109  223   24    AZ 2019-02-18
110  223   24    AZ 2019-02-19
111  223   24    AZ 2019-02-20
112  223   24    AZ 2019-02-21
113  223   24    AZ 2019-02-22
114  223   24    AZ 2019-02-23
115  223   24    AZ 2019-02-24
116  223   24    AZ 2019-02-25
117  223   24    AZ 2019-02-26
118  223   24    AZ 2019-02-27
119  223   24    AZ 2019-02-28
120  223   24    AZ 2019-03-01
121  223   24    AZ 2019-03-02
122  223   24    AZ 2019-03-03
123  223   24    AZ 2019-03-04

[124 rows x 4 columns]
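
As a small follow-up sketch (not part of the original answer), the stacked values column comes out labeled 0; it can be renamed at the end of the chain:

# Same chain as above, with the stacked column renamed to 'date'
out = (
    df[['id', 'age', 'state']].join(pd.DataFrame(a))
      .set_index(['id', 'age', 'state'])
      .stack().droplevel(-1).reset_index()
      .rename(columns={0: 'date'})
)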
Andy L.