Since a neural network without activation functions is essentially a composition of linear operators, encoding a nominal variable with an arbitrary, unjustified order (e.g., Doctor: 1, Teacher: 2) will lead the model to treat higher-valued categories as if they carried more weight.
However, does this still hold once you add a non-linear activation function between layers? That makes the network non-linear, so would there then be no need to remove the artificial ordering of nominal variables by using, for example, a one-hot encoder?
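For concreteness, here is a minimal sketch of the two encodings I am comparing (the toy "job" column and the use of scikit-learn's `OrdinalEncoder` / `OneHotEncoder` are just illustrative choices):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Toy nominal feature with no inherent order
df = pd.DataFrame({"job": ["Doctor", "Teacher", "Engineer", "Doctor"]})

# Integer encoding: imposes an arbitrary order (alphabetical here),
# e.g. Doctor=0, Engineer=1, Teacher=2
ordinal = OrdinalEncoder().fit_transform(df[["job"]])
print(ordinal.ravel())   # [0. 2. 1. 0.]

# One-hot encoding: each category becomes its own binary column,
# so no ordering is implied between categories
one_hot = OneHotEncoder().fit_transform(df[["job"]]).toarray()
print(one_hot)           # shape (4, 3), rows are indicator vectors
```

The question is whether feeding the integer-encoded column into a network with non-linear activations is just as valid as feeding it the one-hot columns.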