
I have read that deep learning models outperform traditional machine learning models.

I have a time-series classification problem where the output is 0 or 1. I used an LSTM to classify my time series as follows.

from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(10, kernel_size=3, input_shape=(25, 4)))  # 25 time steps, 4 features per step
model.add(Conv1D(10, kernel_size=2))
model.add(GlobalMaxPooling1D())
model.add(Dense(10))
model.add(Dense(1, activation='sigmoid'))  # binary output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
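
The training step is not shown above; for completeness, here is a minimal sketch of how this model might be fit. The arrays below are only stand-ins whose shapes match the input_shape above, not my real data:

import numpy as np
X = np.random.rand(100, 25, 4)          # stand-in for the real windows of 25 time steps x 4 features
y = np.random.randint(0, 2, size=100)   # stand-in binary labels
model.fit(X, y, epochs=20, batch_size=16, validation_split=0.2)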

Unfortunately, my deep learning model gives very bad results (e.g., an accuracy of about 0.33), and I am worried about why this happens. I then tried a traditional machine learning model (a random forest), and it gave an accuracy of about 0.6.
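
For reference, a random-forest baseline on this kind of data is usually built by flattening each (25, 4) window into a single feature vector. Below is a minimal scikit-learn sketch with stand-in data and placeholder parameters; the specifics of my own pipeline are omitted:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import numpy as np

X = np.random.rand(100, 25, 4)                  # stand-in windows
y = np.random.randint(0, 2, size=100)           # stand-in binary labels
X_flat = X.reshape(len(X), -1)                  # flatten each (25, 4) window into 100 features
rf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(rf, X_flat, y, cv=10)  # 10-fold cross-validation accuracy
print(scores.mean())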

I am upset that the deep learning model gives such bad results. I would like to get your feedback on why this happens and whether there is a way to avoid it.

I am happy to provide more details if needed.

EmJ
  • 1) It looks like you used a convolutional neural network, not an LSTM. 2) Are you talking about performance on training data or on data not used to build the model?
  • – Dave Oct 31 '19 at 07:50
  • @Dave I am using 10-fold cross-validation. It would be really great if you could tell me how I can use a CNN for a classification problem, as I have never used a CNN before. I look forward to hearing from you. Thank you :) – EmJ Oct 31 '19 at 10:21
  • 1) I would be interested in the accuracy you get on training data. Do you know the term "overfitting"? 2) You just did use a CNN. If you want to use an LSTM, don't use a CNN. (Okay, they can be combined, but at least make sure to use an LSTM if you want to have long short-term memory in your network.) If you're unfamiliar with the idea behind CNNs, Brandon Rohrer has an excellent video on YouTube: https://www.youtube.com/watch?v=FmpDIaiMIeA. He also has a longer one that I have not seen: https://www.youtube.com/watch?v=JB8T_zN7ZC0.
  • – Dave Oct 31 '19 at 10:56
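
As the comment above suggests, if the goal is actually an LSTM rather than a CNN, a minimal sketch of an LSTM classifier for the same (25, 4) input could look like the following; the layer size here is an arbitrary placeholder, not a tuned recommendation:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(10, input_shape=(25, 4)))   # recurrent layer over 25 time steps of 4 features
model.add(Dense(1, activation='sigmoid'))  # binary output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])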