A function used to quantify the difference between observed data and predicted values according to a model. Minimization of loss functions is a way to estimate the parameters of the model.
Questions tagged [loss-function]
533 questions
31
votes
6 answers
L2 loss vs. mean squared loss
I see some literature treats L2 loss (least squared error) and mean squared error loss as two different kinds of loss functions.
However, it seems to me that these two loss functions essentially compute the same thing (up to a 1/n factor…
Edamame
- 2,745
- 5
- 24
- 33
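A quick numerical sketch of the point in the question above: L2 loss (sum of squared errors) and MSE differ only by the factor 1/n, so they rank predictions identically and share the same minimizer.

```python
# L2 loss (sum of squared errors) vs. MSE on toy data: same quantity
# up to division by n, so minimizing one minimizes the other.

def l2_loss(y_true, y_pred):
    """Sum of squared errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

def mse_loss(y_true, y_pred):
    """Mean squared error: L2 loss divided by n."""
    return l2_loss(y_true, y_pred) / len(y_true)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 1.5, 3.5, 4.0]

print(l2_loss(y_true, y_pred))   # 0.25 + 0.25 + 0.25 + 0.0 = 0.75
print(mse_loss(y_true, y_pred))  # 0.75 / 4 = 0.1875
```

The 1/n factor does rescale the gradient, so a learning rate tuned for one may need adjusting for the other.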
5
votes
2 answers
Why use a partial derivative for the loss function?
What is the purpose of computing the partial derivative of the loss function in order to find the best parameters that minimize the error?
Considering the loss function of a linear model, we want to find the best parameters that minimize the error.…
PwNzDust
- 149
- 3
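To illustrate the question above: the partial derivatives of the loss tell us how the loss changes with each parameter, so stepping each parameter against its partial derivative reduces the error. A minimal gradient-descent sketch for a linear model with MSE loss (illustrative toy code, not the asker's setup):

```python
# Fit y = w*x + b by gradient descent on the MSE loss, using the
# partial derivatives dL/dw and dL/db derived by hand.

def fit_linear(xs, ys, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of L = (1/n) * sum((w*x + b - y)^2)
        dw = (2.0 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2.0 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw  # step against the gradient
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1: descent should recover w=2, b=1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear(xs, ys)
print(round(w, 3), round(b, 3))
```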
2
votes
1 answer
What is the unit of cross-entropy (loss)?
Cross-entropy (loss), $-\sum y_i\;\log(\hat{p_i})$, estimates the amount of information needed to encode $y$ using Huffman encoding based on the estimated probabilities $\hat{p}$. Therefore one could claim it should be considered to measure the…
Herbert
- 123
- 5
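A numeric illustration of the formula in the excerpt above: with a one-hot label, cross-entropy reduces to the negative log of the probability assigned to the true class, measured in nats with the natural log or in bits with log base 2.

```python
import math

# Cross-entropy -sum_i y_i * log(p_hat_i) for a one-hot label y.
def cross_entropy(y_onehot, p_hat):
    return -sum(y * math.log(p) for y, p in zip(y_onehot, p_hat) if y > 0)

y = [0, 1, 0]
p_hat = [0.2, 0.5, 0.3]
print(cross_entropy(y, p_hat))                # -log(0.5) ≈ 0.693 nats
print(cross_entropy(y, p_hat) / math.log(2))  # = 1.0 bit
```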
2
votes
2 answers
LogCoshLoss in PyTorch
Hi, I am currently testing multiple losses in my code using PyTorch, but when I stumbled on the log-cosh loss function I did not find any resources in the PyTorch documentation, unlike TensorFlow, which has it as a built-in function.
Does it exist in PyTorch with…
DCnoob
- 131
- 1
- 6
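As far as I know PyTorch has no built-in log-cosh loss, but it is a one-liner to define yourself. A pure-Python sketch of the function; in PyTorch the same idea would use tensor ops (e.g. `torch.mean(torch.log(torch.cosh(y_pred - y_true)))`) inside a custom loss.

```python
import math

# log-cosh loss: behaves like 0.5*x^2 for small errors and like
# |x| - log(2) for large ones, so it is smooth yet outlier-robust.
def log_cosh_loss(y_true, y_pred):
    n = len(y_true)
    return sum(math.log(math.cosh(p - t)) for t, p in zip(y_true, y_pred)) / n

print(log_cosh_loss([0.0], [0.1]))   # quadratic regime, ~0.005
print(log_cosh_loss([0.0], [10.0]))  # linear regime, ~10 - log(2)
```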
2
votes
2 answers
How to implement a GridSearchCV custom scorer that is dependent on a training feature?
I would like to code a custom scoring function using the make_scorer function, where my custom_function(y_true, y_pred) calculates the DAILY sumproduct of y_true and y_pred and outputs, say, the mean. The problem is that the timestamps…
momobz0
- 21
- 1
- 2
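The metric itself is easy to sketch independently of scikit-learn; the hard part the question raises is that `make_scorer` only passes `y_true` and `y_pred`, so the day labels must come in some other way (a common workaround is carrying the timestamps in the index of a pandas `y_true`, or closing over them). Names and data below are hypothetical:

```python
from collections import defaultdict

# Group by day, take the sumproduct of y_true and y_pred within each
# day, then average the daily results. `days` is the extra feature the
# scorer needs beyond (y_true, y_pred).
def daily_sumproduct_mean(y_true, y_pred, days):
    per_day = defaultdict(float)
    for t, p, d in zip(y_true, y_pred, days):
        per_day[d] += t * p
    return sum(per_day.values()) / len(per_day)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.0, 1.0, 2.0, 2.0]
days = ["d1", "d1", "d2", "d2"]
print(daily_sumproduct_mean(y_true, y_pred, days))  # ((1+2) + (6+8)) / 2 = 8.5
```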
2
votes
0 answers
Negative evidence lower bound?
As my title already says: can the ELBO be negative?
$ELBO_\lambda = KL[q_\lambda(w)||P(w)] - \mathbb{E}_q[\log P(\mathcal{D|w})] $
Can I theoretically adjust my prior $P(w)$ such that $KL=0$ and also $\mathbb{E}_q>0$? If yes, why call it a lower…
Andreas Look
- 921
- 5
- 14
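A note on sign conventions that may help with the question above: under the more common definition, the expression quoted in the excerpt is the *negative* ELBO (the variational loss), and the ELBO proper sits below the log evidence:

```latex
% Standard decomposition of the log evidence (sign conventions vary):
\log P(\mathcal{D})
  = \underbrace{\mathbb{E}_q[\log P(\mathcal{D}\mid w)]
      - KL[q_\lambda(w)\,\|\,P(w)]}_{\mathrm{ELBO}}
  + KL[q_\lambda(w)\,\|\,P(w\mid\mathcal{D})]
  \;\ge\; \mathrm{ELBO}
```

For discrete data $\log P(\mathcal{D}) \le 0$, so the ELBO is then necessarily negative; for continuous densities $\log P(\mathcal{D})$ can be positive, so the ELBO can take either sign.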
1
vote
1 answer
Gradient Descent - how many values are calculated in loss function?
I'm a little bit confused about how the loss function is calculated in neural network training. It is said that in theory, when using Grid Search or Monte Carlo methods, we can calculate all the possible loss function values. But obviously this requires…
Tauno
- 799
- 2
- 9
- 9
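On the "all possible loss values" point in the question above: with a grid of k candidate values per parameter and d parameters, an exhaustive search evaluates the loss k^d times, which is exponential in d and hopeless for neural networks with millions of parameters.

```python
# Number of loss evaluations for an exhaustive grid search:
# k candidate values per parameter, d parameters.
def grid_evaluations(k, d):
    return k ** d

print(grid_evaluations(10, 2))    # 100 -- feasible for 2 parameters
print(grid_evaluations(10, 100))  # 10**100 -- utterly infeasible
```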
1
vote
1 answer
Effects of L2 loss and smooth L1 loss
Can anyone tell me what the effects of $L_2$ loss and smooth $L_1$ loss (i.e. Huber loss with $\alpha = 1$) are, and when to use each of them?
HOANG GIANG
- 159
- 9
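A small sketch of the contrast the question above asks about: smooth L1 is quadratic near zero but linear for |e| > 1, so its gradient is bounded and large outliers pull the fit around far less than under L2.

```python
# L2 loss vs. smooth L1 (Huber with delta = 1) on a single error e.
def l2(e):
    return e * e

def smooth_l1(e):
    a = abs(e)
    # quadratic near zero, linear beyond |e| = 1
    return 0.5 * a * a if a < 1.0 else a - 0.5

print(l2(0.5), smooth_l1(0.5))    # 0.25 0.125 -- both quadratic near 0
print(l2(10.0), smooth_l1(10.0))  # 100.0 9.5  -- L2 explodes on outliers
```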
1
vote
1 answer
Meaning of subscript in min max value function
This is possibly a very stupid question, but I have not been able to find the answer on the internet and have no clue which keywords to use while searching.
What's the meaning of $\mathbb{E}_{x \sim p_{data}(h)} [...]$
Where ... is some…
deKeijzer
- 113
- 5
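On the notation asked about above: $\mathbb{E}_{x \sim p}[f(x)]$ simply means "the average of $f(x)$ when $x$ is sampled from the distribution $p$". A Monte Carlo sketch makes that concrete, estimating $\mathbb{E}_{x \sim \mathcal{N}(0,1)}[x^2]$, which equals 1 exactly (the variance of a standard normal):

```python
import random

# Estimate E_{x ~ N(0,1)}[x^2] by averaging f(x) = x^2 over samples
# drawn from the subscript distribution.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100000)]
estimate = sum(x * x for x in samples) / len(samples)
print(estimate)  # close to 1.0
```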
1
vote
0 answers
What if training loss is negative
I have designed a custom loss function where I have to maximize the KL divergence. So I took KL(P||Q) and negated it, and now my loss function is -KL(P||Q), and this loss function leads to negative values. I tried squaring the loss, which is resulting in a…
SS Varshini
- 239
- 5
- 13
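A sketch of why negative values are expected in the setup above: KL divergence is always non-negative (Gibbs' inequality), so the loss -KL(P||Q) is always at most zero, and a negative training loss here signals the model is doing what was asked, not a bug. Discrete toy distributions:

```python
import math

# KL(P||Q) for discrete distributions; -KL is then always <= 0.
def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]
loss = -kl(p, q)
print(kl(p, q) >= 0)  # True: KL is non-negative
print(loss <= 0)      # True: the loss -KL is non-positive
```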
0
votes
1 answer
Does SparseCategoricalCrossentropy(from_logits=True) internally apply softmax?
Regarding TensorFlow/Keras SparseCategoricalCrossentropy.
SparseCategoricalCrossentropy(from_logits=True) expects logits that have not been normalized by softmax. Does SparseCategoricalCrossentropy then apply softmax internally?
I suppose…
mon
- 711
- 2
- 10
- 19
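Yes: with `from_logits=True` the loss applies softmax internally (in a numerically stable fused form), so you pass raw scores. A plain-Python check that "softmax inside the loss" and "softmax outside, probabilities in" give the same value; no TensorFlow needed for the demonstration:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_ce_from_logits(logits, label):
    # what from_logits=True does: softmax, then negative log-likelihood
    return -math.log(softmax(logits)[label])

def sparse_ce_from_probs(probs, label):
    # what from_logits=False expects: probabilities already normalized
    return -math.log(probs[label])

logits = [2.0, 1.0, 0.1]
a = sparse_ce_from_logits(logits, 0)
b = sparse_ce_from_probs(softmax(logits), 0)
print(abs(a - b) < 1e-12)  # True: the two routes agree
```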
0
votes
1 answer
Function growing faster for negative inputs than for positives
I am working on a regression problem where I want to model the loss function in a way that it "punishes" to big errors much more than small errors (so I am in the realm of exponential functions) but also in a way that is punishes a negative error…
Moritz
- 103
- 2
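One classical loss with exactly this shape is the LINEX loss, $L(e) = \exp(ae) - ae - 1$: with $a < 0$ it grows exponentially for negative errors but only roughly linearly for positive ones. This is a named alternative worth knowing about, not the asker's own function:

```python
import math

# LINEX loss: asymmetric, exponential on one side, ~linear on the other.
# With a < 0, negative errors (e = y_pred - y_true < 0) are punished hard.
def linex(e, a=-1.0):
    return math.exp(a * e) - a * e - 1.0

print(linex(-2.0))  # ~4.39: negative error punished exponentially
print(linex(+2.0))  # ~1.14: positive error punished mildly
print(linex(0.0))   # 0.0: zero error, zero loss
```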
-1
votes
1 answer
Unable to reproduce the training loss while predicting
I use a CNN model for a regression problem with a custom loss:
def loss_M2(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    M2 = K.max(K.abs(K.cumsum(y_pred_f - y_true_f, axis=0)))
    return M2
ISSUE: when I…
TARBOUN
- 1
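A reference re-implementation of the loss above can help diagnose the mismatch. One likely cause (an assumption, since the excerpt is truncated): during training, Keras reports this loss averaged over batches, and the cumsum runs only within each batch, so the epoch-end training loss generally won't equal the loss computed once over the full prediction array. Pure-Python equivalent for offline checking:

```python
# Reference version of loss_M2: max absolute value of the running sum
# of the errors (y_pred - y_true), computed over the whole sequence.
def loss_M2_ref(y_true, y_pred):
    run, worst = 0.0, 0.0
    for t, p in zip(y_true, y_pred):
        run += p - t  # cumulative sum of the errors
        worst = max(worst, abs(run))
    return worst

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 1.5, 3.5, 4.0]
print(loss_M2_ref(y_true, y_pred))  # running sums 0.5, 0.0, 0.5, 0.5 -> 0.5
```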