
I am using an LSTM to model time series data. My target variable is categorical, so I am one-hot encoding it. The goal is to predict the target class at a given time step. My dataset spans eight days.
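For context, one-hot encoding the integer class labels can be done with a simple identity-matrix lookup (the labels and class count below are toy values, not the actual data):

```python
import numpy as np

# Toy integer labels standing in for the real categorical target
labels = np.array([0, 2, 1, 1, 0, 2])
outputs = 3  # number of classes

# One-hot encode: row i is all zeros except a 1 at column labels[i]
one_hot = np.eye(outputs)[labels]
print(one_hot.shape)  # (6, 3)
```

`tensorflow.keras.utils.to_categorical(labels, num_classes=outputs)` produces the same array.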

   input_nodes = look_back = 10
   batch_size = 128
   train_generator = create_data_generator(train, look_back, outputs, batch_size, class_weights_dict)
   validation_generator = create_test_generator(test, look_back, outputs, batch_size)
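`create_data_generator` and `create_test_generator` are not shown in the post; a minimal sketch of a windowed batch generator compatible with the calls above might look like the following (the function body, and the use of per-sample weights derived from `class_weights_dict`, are assumptions, not the poster's actual code):

```python
import numpy as np

def create_data_generator(data, look_back, outputs, batch_size, class_weights_dict=None):
    """Yield (X, y) batches: X is a sliding window of look_back one-hot
    rows, y is the one-hot class that follows the window."""
    # data: 1-D array of integer class labels
    n = len(data) - look_back
    onehot = np.eye(outputs)[data]           # (n_samples, outputs)
    while True:                              # Keras generators must loop forever
        for start in range(0, n, batch_size):
            idx = range(start, min(start + batch_size, n))
            X = np.stack([onehot[i:i + look_back] for i in idx])
            y = onehot[[i + look_back for i in idx]]
            if class_weights_dict is not None:
                # Optional per-sample weights for imbalanced classes
                w = np.array([class_weights_dict[int(data[i + look_back])] for i in idx])
                yield X, y, w
            else:
                yield X, y
```

Each `X` batch has shape `(batch_size, look_back, outputs)`, matching the LSTM's `input_shape=(input_nodes, outputs)`.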

   model = Sequential()
   model.add(LSTM(50, input_shape=(input_nodes, outputs)))
   #model.add(Dense(50, activation='relu'))
   model.add(Dropout(0.5))
   model.add(Dense(outputs, activation='softmax'))
   model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

   # fit network
   history = model.fit_generator(train_generator,
                                 steps_per_epoch=math.ceil(len(train)/batch_size),
                                 epochs=25,
                                 validation_data=validation_generator,
                                 validation_steps=math.ceil(len(test)/batch_size))

Here are my results (training/validation accuracy and loss plots not shown).

Is there anything that I can try to improve my validation accuracy and loss?

  • Since you're using one-hot encoding of the categories, the categories must be mutually exclusive. Why are you using sigmoid instead of softmax as the output activation? Do you have 3 or more categories? – Sycorax Mar 17 '23 at 21:22
  • Oh, I guess I misunderstood then. I will switch back to softmax. The fluctuations are still there and the validation accuracy is still around 50%. – user2585933 Mar 17 '23 at 21:31

0 Answers