
I learned from examples on the internet that when processing a time series with an RNN or LSTM, the series should be divided into overlapping time windows, like this:

[1,2,3,4,5,6] => [[1,2,3],[2,3,4],[3,4,5],[4,5,6]]
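For concreteness, here is a minimal sketch of this windowing step (the function name `sliding_windows` is my own, assuming NumPy is available):

```python
import numpy as np

def sliding_windows(series, window_size):
    """Split a 1-D series into overlapping windows of the given size."""
    return np.array([series[i:i + window_size]
                     for i in range(len(series) - window_size + 1)])

print(sliding_windows([1, 2, 3, 4, 5, 6], 3))
# [[1 2 3]
#  [2 3 4]
#  [3 4 5]
#  [4 5 6]]
```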

This was quite a surprise to me, since I thought that sequence learning was built into a recurrent network.

  1. Does this mean the topology above can only learn sequences of 3 elements?
  2. Does it mean I can feed the time windows in random order?
  3. If not, why bother splitting the sequence into time windows instead of simply feeding the net element by element?