I am running Python 3.6.8 (Anaconda Distribution) on a MacBook Pro 2017. I keep running into an issue where my kernel dies when I try to call the fit method of a Keras model. This occurs in Spyder as well as in Jupyter Notebook. The code I am trying to run is as follows:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import set_random_seed
from sklearn.datasets import make_circles

# Synthetic 2-D dataset: two noisy concentric circles
np.random.seed(1)
n = 16400
X, y = make_circles(n, noise=0.2, factor=0.8)

# Small fully connected binary classifier
set_random_seed(1)
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='Adam')
h = model.fit(X, y, batch_size=1024, epochs=50, verbose=2)
When running this code in Spyder, I get the following output:
Epoch 1/50
2019-01-24 15:24:13.535202: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-01-24 15:24:13.535417: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
Kernel died, restarting
The strange thing is that this code runs correctly when I set n=16300. With other types of synthetic datasets the threshold changes, but there is always some size above which the kernel crashes; in some cases it is as low as 400. For some datasets the code will run with batch_size=1, but the kernel crashes with batch_size=2.
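One way to get more information than Spyder's generic "Kernel died, restarting" message is to re-run the fit in a fresh child interpreter and capture its stderr. The sketch below does this with subprocess; the script name find_crash_threshold.py and the fit_succeeds helper are just illustrative names I made up, and the child snippet only repeats the parts of my code that matter for the crash:

# find_crash_threshold.py -- illustrative helper, names are made up.
# Re-runs the training snippet in a child interpreter for a given dataset
# size, so a hard crash (segfault, library abort, ...) only kills the child
# process and its stderr can be inspected directly.
import subprocess
import sys
import textwrap

TRAIN_SNIPPET = textwrap.dedent("""
    import sys
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from sklearn.datasets import make_circles

    n = int(sys.argv[1])  # dataset size passed in from the parent process
    np.random.seed(1)
    X, y = make_circles(n, noise=0.2, factor=0.8)

    model = Sequential()
    model.add(Dense(4, input_shape=(2,), activation='sigmoid'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='Adam')
    model.fit(X, y, batch_size=1024, epochs=1, verbose=0)
""")

def fit_succeeds(n):
    # Run the snippet with dataset size n in a fresh Python interpreter.
    result = subprocess.run(
        [sys.executable, "-c", TRAIN_SNIPPET, str(n)],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True)
    if result.returncode != 0:
        # Print the part of stderr that the notebook kernel normally hides.
        print("n=%d failed with return code %d" % (n, result.returncode))
        print(result.stderr[-2000:])
    return result.returncode == 0

if __name__ == "__main__":
    # Sizes around the thresholds I have seen so far
    for n in (400, 16300, 16400):
        print("n=%d: %s" % (n, "ok" if fit_succeeds(n) else "crash"))

Running this from a plain terminal (python find_crash_threshold.py) rather than inside Spyder or Jupyter also means any native crash output is printed straight to the terminal instead of being swallowed by the kernel restart.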
Any thoughts as to why this might be happening?
Thanks in advance.