Multicollinearity is a problem in linear regression mainly because of how the model is fitted. Assuming a unique solution exists, the parameters can be estimated via the normal equations, $\hat{\beta} = (X^TX)^{-1}X^Ty$, which require inverting $X^TX$. This is impossible under perfect collinearity, since the matrix is singular, and troublesome under near collinearity, since the inverse is inaccurate and unstable due to the matrix's large condition number.
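A minimal sketch (using NumPy, with made-up data) of both failure modes: a perfectly collinear column makes $X^TX$ singular, and a nearly collinear one gives it a huge condition number, so the estimated coefficients become extremely sensitive to noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)

# Perfect collinearity: x2 is an exact multiple of x1, so X^T X is singular.
X_perfect = np.column_stack([x1, 2.0 * x1])
print(np.linalg.cond(X_perfect.T @ X_perfect))  # effectively infinite
# np.linalg.inv(X_perfect.T @ X_perfect) would raise LinAlgError: Singular matrix

# Near collinearity: x2 = 2*x1 plus tiny noise; the inverse exists but is unstable.
X_near = np.column_stack([x1, 2.0 * x1 + 1e-6 * rng.normal(size=n)])
print(np.linalg.cond(X_near.T @ X_near))        # huge condition number

y = x1 + rng.normal(size=n)
beta = np.linalg.inv(X_near.T @ X_near) @ X_near.T @ y
print(beta)  # coefficients blow up relative to the true effect of x1
```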
For neural networks, multicollinearity is not a problem, because the parameters are fitted by backpropagation, which neither inverts any matrix nor assumes a unique solution exists (in fact, neural networks usually have more than one optimum). Note, for example, that in neural networks for image classification the input variables (neighboring pixel values) are usually very highly correlated.
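To illustrate, here is a sketch using plain NumPy gradient descent (standing in for backpropagation on a one-layer network) on perfectly collinear inputs. No matrix is inverted, so the fit still converges, to one of infinitely many parameter vectors achieving the same minimal loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, 2.0 * x1])          # perfectly collinear columns
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)

w = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = 2.0 / n * X.T @ (X @ w - y)       # gradient of mean squared error
    w -= lr * grad                           # gradient descent step, no inversion

print(w)                            # one of many solutions: any w with
print(np.mean((X @ w - y) ** 2))    # w[0] + 2*w[1] = 3 predicts equally well
```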