I have written a simple neural network with keras (R API) as follows:
model <- keras_model_sequential() %>%
  # Dense embedding layer; outputs a 3D tensor
  # of shape (batch_size, sequence_length, output_dim)
  layer_embedding(input_dim = 100,
                  output_dim = 30,
                  input_length = ncol(ttrainx)) %>%
  # bidirectional(layer_lstm(units = 64)) %>%
  layer_lstm(units = 10) %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 2, activation = 'sigmoid')
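In case the training setup matters, the model is compiled and fitted roughly like this (the loss, optimizer, epochs, and the target matrix name ttrainy below stand in for my actual settings):

```r
library(keras)

# Compile: binary cross-entropy is the usual pairing with sigmoid outputs
model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = 'adam',
  metrics = c('accuracy')
)

# Fit on the training matrix and its two-column target
model %>% fit(
  ttrainx, ttrainy,
  epochs = 50,
  batch_size = 4
)
```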
The code above works without any issue; I am using just 10 sample instances with two outputs (the second is the lead of the first). However, the predictions for both outputs are always between 0 and 1, even when I train on a file with more than 100 samples, and even when I predict on the same samples I trained on:
> result
[,1] [,2]
[1,] 0.8137890 0.6513228
[2,] 0.8190086 0.6572025
[3,] 0.8033106 0.6475145
[4,] 0.7816594 0.6308075
[5,] 0.7592142 0.6084980
[6,] 0.7806532 0.6365856
[7,] 0.8263645 0.6561120
[8,] 0.8299966 0.6609393
[9,] 0.8048422 0.6523835
[10,] 0.7748901 0.6305881
I even used the tanh activation in the output layer, but the outputs are still between 0 and 1:
layer_dense(units = 2, activation = 'tanh')
The outputs are:
[,1] [,2]
[1,] 0.7627973 0.6377947
[2,] 0.7790936 0.6529062
[3,] 0.7719792 0.6425777
[4,] 0.7518294 0.6246758
[5,] 0.6936694 0.5419450
[6,] 0.7229034 0.6205049
[7,] 0.7928969 0.6572189
[8,] 0.7629467 0.6395736
[9,] 0.7811863 0.6578141
[10,] 0.7765120 0.6340105
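To rule out a mistake in my own reasoning, I checked in plain R (no keras involved) that the two activations really do have different ranges: sigmoid is bounded in (0, 1), while tanh spans (-1, 1) and can produce negative values:

```r
# Sigmoid: 1 / (1 + exp(-x)), bounded strictly inside (0, 1)
sigmoid <- function(x) 1 / (1 + exp(-x))

x <- seq(-5, 5, by = 0.1)

range(sigmoid(x))  # both endpoints strictly inside (0, 1)
range(tanh(x))     # spans negative and positive values, inside (-1, 1)
```

So a tanh output layer could in principle produce negative predictions, which is why the all-positive values above surprised me.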
Can anyone explain the cause of this behaviour?