Most Popular (1500 questions)
12 votes, 4 answers
Is overfitting always a bad thing?
DNNs can be used to recognize pictures. Great. For that usage, it's better if they are somewhat flexible, so that they recognize as cats even cats that do not appear in the pictures they were trained on (i.e. they avoid overfitting). Agreed. But when one uses a NN as…
ZakC (347)
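As a quick illustration of what "overfitting" means in this question (my own toy example, not taken from the question or its answers), a high-degree polynomial can fit a small noisy training set almost perfectly while typically doing worse than a lower-degree fit on held-out points:

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny noisy training set sampled from an underlying sine curve.
    x_train = np.linspace(0, 1, 8)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.shape)

    # Held-out points from the same (noise-free) curve.
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (3, 7):
        coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial of this degree
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

With 8 training points, the degree-7 fit interpolates the noise (near-zero training error) while its test error usually grows; that gap is the overfitting the question asks about.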
12 votes, 4 answers
What is the difference between self-supervised and unsupervised learning?
What is the difference between self-supervised and unsupervised learning? The terms logically overlap (and maybe self-supervised learning is a subset of unsupervised learning?), but I cannot pinpoint exactly what that difference is. What are the…
Robin van Hoorn (2,366)
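One way to make the distinction concrete (a sketch under my own framing, not taken from the answers): unsupervised learning operates on raw data with no labels anywhere, while self-supervised learning manufactures labels from the data itself via a pretext task and then uses ordinary supervised training. The rotation task below is just one common illustrative choice:

    import numpy as np

    rng = np.random.default_rng(0)
    images = rng.random((32, 8, 8))          # toy unlabeled "images"

    # Unsupervised: no labels at all, e.g. one assignment step toward clustering
    # the raw pixels around 4 randomly chosen centroids.
    flat = images.reshape(len(images), -1)
    centroids = flat[rng.choice(len(flat), size=4, replace=False)]
    assignments = np.argmin(((flat[:, None] - centroids[None]) ** 2).sum(-1), axis=1)

    # Self-supervised: invent labels from the data itself (a pretext task), here
    # "how many times was the image rotated by 90 degrees?", then train any
    # ordinary supervised classifier on (rotated_image, k).
    pretext_x, pretext_y = [], []
    for img in images:
        k = rng.integers(0, 4)
        pretext_x.append(np.rot90(img, k))
        pretext_y.append(k)                   # the label comes for free from the data
    pretext_x, pretext_y = np.stack(pretext_x), np.array(pretext_y)

    print(assignments[:8], pretext_y[:8])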
12 votes, 5 answers
Why is a bias parameter needed in neural networks?
I have read several resources, including previously asked questions such as this. I have also read arguments related to intercepts needed to separate linearly separable data. If my neural network can perform feature transformation, what is the need…
SajanGohil (326)
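A minimal version of the usual intercept argument (my own toy example, not from the linked question): without a bias, a single linear unit w*x is forced through the origin, so it cannot even represent f(x) = x + 5; adding a bias term removes that restriction.

    import numpy as np

    x = np.linspace(-3, 3, 50).reshape(-1, 1)
    y = x + 5                                   # target the unit should learn

    # Least-squares fit without a bias: y ~ w * x (forced through the origin).
    w_no_bias, *_ = np.linalg.lstsq(x, y, rcond=None)

    # Least-squares fit with a bias: y ~ w * x + b (append a constant-1 column).
    x_aug = np.hstack([x, np.ones_like(x)])
    w_bias, *_ = np.linalg.lstsq(x_aug, y, rcond=None)

    print("no bias  MSE:", np.mean((x @ w_no_bias - y) ** 2))    # large residual
    print("with bias MSE:", np.mean((x_aug @ w_bias - y) ** 2))  # essentially zero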
12 votes, 1 answer
What are the best known gradient-free training methods for deep learning?
As far as I know, the current state-of-the-art methods for training deep learning networks are variants of gradient descent or stochastic gradient descent. What are the best known gradient-free training methods for deep learning (mostly in visual tasks…
rkellerm (334)
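For contrast with the gradient-based baseline the question mentions, here is a minimal sketch (my own illustrative example) of one well-known gradient-free family, evolution strategies: perturb the weights randomly, score each perturbation, and move the weights toward the better-scoring ones, with no backpropagation anywhere.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: fit y = sin(x) with a tiny 1-16-1 network, no gradients used.
    x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
    y = np.sin(x)

    def unpack(theta):
        w1, b1, w2, b2 = np.split(theta, [16, 32, 48])
        return w1.reshape(1, 16), b1, w2.reshape(16, 1), b2

    def loss(theta):
        w1, b1, w2, b2 = unpack(theta)
        pred = np.tanh(x @ w1 + b1) @ w2 + b2
        return np.mean((pred - y) ** 2)

    theta = rng.normal(scale=0.1, size=49)      # 16 + 16 + 16 + 1 parameters
    sigma, lr, pop = 0.1, 0.05, 50

    for step in range(300):
        noise = rng.normal(size=(pop, theta.size))
        scores = np.array([loss(theta + sigma * n) for n in noise])
        # Standardize the scores and step against the direction of increasing loss.
        advantages = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta -= lr / (pop * sigma) * noise.T @ advantages

    print("final MSE:", loss(theta))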
12 votes, 2 answers
What are bottleneck features?
In the blog post Building powerful image classification models using very little data, bottleneck features are mentioned. What are the bottleneck features? Do they change with the architecture that is used? Are they the final output of convolutional…
Abhishek Bhatia (437)
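In that blog post, the bottleneck features are the activations of the pretrained convolutional base (VGG16 there) with the fully connected top removed, computed once and cached, with a small classifier trained on top. A minimal sketch, assuming TensorFlow/Keras is installed and using random stand-in images (the ImageNet weights are downloaded on first use):

    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    # Pretrained convolutional base with the fully connected "top" removed.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))

    # Stand-in for a small labelled dataset (random pixels, 2 classes).
    x = np.random.rand(8, 150, 150, 3).astype("float32")
    y = np.random.randint(0, 2, size=8)

    # The "bottleneck features": the conv base's output, computed once and reused.
    features = base.predict(x)          # (8, 4, 4, 512) for 150x150 inputs

    # A small classifier trained only on the cached features.
    clf = models.Sequential([
        layers.Input(shape=features.shape[1:]),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    clf.fit(features, y, epochs=2, batch_size=4)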
12 votes, 1 answer
Why does a transformer not use an activation function following the multi-head attention layer?
I was hoping someone could explain why, in the transformer model from the "Attention Is All You Need" paper, no activation is applied after either the multi-head attention layer or the residual connections. It seems to me that there are…
chasep255 (173)
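For reference when reading this question, here is a minimal single-head, post-norm sketch of a transformer encoder block in NumPy (my own simplification of the paper's layout, with the learnable layer-norm gain and bias omitted). The only explicit nonlinearities are the softmax inside attention and the ReLU in the feed-forward sublayer; neither the attention output nor the residual additions is followed by an activation:

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def layer_norm(x, eps=1e-5):
        return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

    d_model, d_ff, seq_len = 16, 64, 5
    rng = np.random.default_rng(0)
    x = rng.normal(size=(seq_len, d_model))

    # Self-attention sublayer (one head, to keep the sketch short).
    Wq, Wk, Wv, Wo = (rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model)) @ v   # softmax is the only nonlinearity here
    x = layer_norm(x + attn @ Wo)                    # residual add + norm, no activation

    # Position-wise feed-forward sublayer: this is where the ReLU lives.
    W1 = rng.normal(scale=0.1, size=(d_model, d_ff))
    W2 = rng.normal(scale=0.1, size=(d_ff, d_model))
    ff = np.maximum(0, x @ W1) @ W2
    x = layer_norm(x + ff)                           # again residual add + norm, no activation

    print(x.shape)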
12 votes, 5 answers
Why are deep neural networks and deep learning insufficient to achieve general intelligence?
Everything related to Deep Learning (DL) and deep(er) networks seems "successful", or at least to be progressing very fast, cultivating the belief that AGI is within reach. This is the popular imagination. DL is a tremendous tool for tackling so many problems,…
Eric Platon (1,510)
12 votes, 1 answer
How can Viv generate new code based on some user's query?
I have been looking into Viv, an artificially intelligent agent in development. Here is a demonstration of Viv (by Dag Kittlaus). Based on what I understand, this AI can generate new code and execute it based on a query from the user. What I am…
N. Chalifour (161)
12 votes, 1 answer
What are the fundamental differences between VAE and GAN for image generation?
Starting from my own understanding, and scoped to the purpose of image generation, I'm well aware of the major architectural differences: A GAN's generator samples from a relatively low-dimensional random variable and produces an image. Then the…
Alexander Soare (1,339)
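To make the architectural contrast in the excerpt concrete, here is a schematic sketch (illustrative shapes and losses only, assuming PyTorch; 8x8 "images" flattened to vectors): the GAN generator maps a low-dimensional z to an image and is trained against a discriminator, while the VAE pairs an encoder and decoder and is trained with a reconstruction term plus a KL term, with no discriminator.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    z_dim, img_dim = 16, 64

    # GAN: generator z -> image, plus a discriminator trained to tell real from fake.
    G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
    D = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    real = torch.rand(32, img_dim)                   # stand-in batch of real images
    fake = G(torch.randn(32, z_dim))
    d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(32, 1)) \
           + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(32, 1))
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(32, 1))

    # VAE: encoder -> distribution over z, decoder reconstructs from a sample of it.
    enc = nn.Linear(img_dim, 2 * z_dim)              # outputs mean and log-variance
    dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))

    mu, logvar = enc(real).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
    recon = dec(z)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    vae_loss = F.mse_loss(recon, real) + kl

    print(d_loss.item(), g_loss.item(), vae_loss.item())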
12 votes, 3 answers
Are neural networks the only way to reach "true" artificial intelligence?
Currently, most research done in artificial intelligence focuses on neural networks, which have been successfully used to solve many problems. A good example would be DeepMind's AlphaGo, which uses a convolutional neural network. There are many…
Eka (1,066)
12 votes, 1 answer
What exactly is the advantage of double DQN over DQN?
I started looking into the double DQN (DDQN). Apparently, the difference between DDQN and DQN is that in DDQN we use the main value network for action selection and the target network for outputting the Q values. However, I don't understand why…
Chukwudi (369)
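The mechanical difference the excerpt describes is easiest to see in the target computation. A minimal sketch, with hypothetical toy Q-values standing in for the two networks' outputs on the next state:

    import numpy as np

    gamma, reward = 0.99, 1.0

    # Hypothetical Q-values for the next state s' under each network.
    q_online = np.array([1.0, 3.0, 2.5])      # Q_online(s', a), the main network
    q_target = np.array([1.2, 2.0, 4.0])      # Q_target(s', a), the target network

    # Vanilla DQN: the target network both selects and evaluates the action.
    dqn_target = reward + gamma * q_target.max()

    # Double DQN: the online network selects the action, the target network evaluates it.
    a_star = int(q_online.argmax())
    ddqn_target = reward + gamma * q_target[a_star]

    print(dqn_target, ddqn_target)   # 4.96 vs 2.98: decoupling selection from
                                     # evaluation curbs the max-operator's overestimation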
12 votes, 10 answers
Could an AI feel emotions?
Assuming humans had finally developed the first humanoid AI based on the human brain, would it feel emotions? If not, would it still have ethics and/or morals?
MountainSide Studios (353)
12 votes, 1 answer
What is the difference between one-shot learning, transfer learning and fine tuning?
Lately, there have been lots of posts on one-shot learning. I tried to figure out what it is by reading some articles. To me, it looks similar to transfer learning, in which we can use pre-trained model weights to create our own model. Fine-tuning…
Hiren Namera (741)
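A minimal sketch of the transfer-learning and fine-tuning part of this question (assuming PyTorch and torchvision; the model, the weights argument, and the layer names are just one common choice, and newer torchvision versions accept the weights string shown, while older ones used pretrained=True): transfer learning reuses pretrained weights and trains only a new head, fine-tuning additionally unfreezes part of the backbone with a small learning rate, and one-shot learning is a different idea again, namely recognizing a new class from a single labelled example, usually via a learned similarity metric.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained on ImageNet
    for p in model.parameters():
        p.requires_grad = False                         # transfer learning: freeze the backbone
    model.fc = nn.Linear(model.fc.in_features, 5)       # new head for a 5-class target task
    # (training now updates only model.fc)

    # Fine-tuning: also unfreeze the last backbone stage and train it with a small
    # learning rate, so the pretrained features are nudged rather than relearned.
    for p in model.layer4.parameters():
        p.requires_grad = True
    optimizer = torch.optim.SGD(
        [{"params": model.layer4.parameters(), "lr": 1e-4},
         {"params": model.fc.parameters(), "lr": 1e-2}],
        momentum=0.9,
    )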
12 votes, 8 answers
What are the advantages of having self-driving cars?
What are the advantages of having self-driving cars? We will be able to have more cars on the road at the same time, but won't it also make more people choose to use cars, so that both traffic and public health actually become…
Jamgreen (309)
12 votes, 2 answers
Is anybody still using Conceptual Dependency Theory?
Roger Schank did some interesting work on language processing with Conceptual Dependency (CD) in the 1970s. He has since moved somewhat out of the field and works in education these days. There were some useful applications in natural language generation…
Oliver Mason (5,387)