I've read on one site that "if val_loss starts increasing and val_acc also increases, it could be a sign of overfitting." I thought these results fit well; however, do the results below, in and of themselves, lead you to view this as overfitting?
model = classification_model()
# fit the model, using the test set for validation
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, verbose=2)
# evaluate the model
results = model.evaluate(X_test, y_test, verbose=0)
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
- 113s - loss: 0.1839 - accuracy: 0.9449 - val_loss: 0.0984 - val_accuracy: 0.9679
Epoch 2/10
- 104s - loss: 0.0774 - accuracy: 0.9759 - val_loss: 0.0834 - val_accuracy: 0.9742
Epoch 3/10
- 160s - loss: 0.0526 - accuracy: 0.9834 - val_loss: 0.0869 - val_accuracy: 0.9752
Epoch 4/10
- 133s - loss: 0.0395 - accuracy: 0.9875 - val_loss: 0.0705 - val_accuracy: 0.9789
Epoch 5/10
- 103s - loss: 0.0285 - accuracy: 0.9913 - val_loss: 0.0808 - val_accuracy: 0.9775
Epoch 6/10
- 99s - loss: 0.0258 - accuracy: 0.9917 - val_loss: 0.0733 - val_accuracy: 0.9804
Epoch 7/10
- 113s - loss: 0.0222 - accuracy: 0.9927 - val_loss: 0.0934 - val_accuracy: 0.9779
Epoch 8/10
- 111s - loss: 0.0213 - accuracy: 0.9928 - val_loss: 0.0909 - val_accuracy: 0.9792
Epoch 9/10
- 108s - loss: 0.0165 - accuracy: 0.9945 - val_loss: 0.1161 - val_accuracy: 0.9771
Epoch 10/10
- 143s - loss: 0.0171 - accuracy: 0.9948 - val_loss: 0.1040 - val_accuracy: 0.9752
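One way to make the question concrete is to compare the logged losses epoch by epoch: if val_loss bottoms out and then drifts upward while training loss keeps falling, the later epochs are fitting noise in the training set. A minimal sketch, using the loss values copied from the log above:

```python
# per-epoch losses transcribed from the training log above
train_loss = [0.1839, 0.0774, 0.0526, 0.0395, 0.0285,
              0.0258, 0.0222, 0.0213, 0.0165, 0.0171]
val_loss = [0.0984, 0.0834, 0.0869, 0.0705, 0.0808,
            0.0733, 0.0934, 0.0909, 0.1161, 0.1040]

# epoch (1-indexed) where validation loss is lowest
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__) + 1
print(best_epoch)  # -> 4

# after the best epoch, training loss keeps shrinking while
# validation loss trends upward: the gap between the two widens,
# which is the usual signature of (mild) overfitting
gap_at_best = val_loss[best_epoch - 1] - train_loss[best_epoch - 1]
gap_at_end = val_loss[-1] - train_loss[-1]
print(gap_at_best < gap_at_end)  # -> True
```

By this reading, the model generalizes best around epoch 4 and slowly overfits afterward, even though val_accuracy stays roughly flat. In practice this is the pattern that callbacks such as Keras's `EarlyStopping` (monitoring `val_loss`) are designed to catch automatically.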