Specifying a loss function is not sufficient to describe a machine learning algorithm; you must also describe the allowable shapes of the prediction surface.
By prediction surface, I mean the graph of the function
$$ x \mapsto \text{predicted_value}(x) $$
So, for example, for logistic regression the prediction surface is the graph of a function like:
$$ f(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}} $$
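To make this concrete, here is a minimal sketch of evaluating that prediction surface at a few points; the coefficients are made-up stand-ins for fitted values, not output from any actual fitting procedure:

```python
import numpy as np

# Made-up coefficients standing in for fitted values (assumptions for illustration).
beta_0 = -0.5
beta = np.array([1.2, -0.7])  # beta_1, beta_2 for a two-feature example

def predicted_value(x):
    """Evaluate the logistic prediction surface at a feature vector x."""
    linear = beta_0 + x @ beta
    return 1.0 / (1.0 + np.exp(-linear))

# Evaluate the surface at a few points.
for x in np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]]):
    print(x, predicted_value(x))
```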
and for a decision tree the prediction surface is a piecewise constant function, where the regions on which the prediction function is constant are rectangles with sides parallel to the coordinate axes.
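As a sketch, here is a hand-built tree with made-up thresholds and leaf values (not a fitted tree); because every split compares a single coordinate to a threshold, the constant regions are axis-aligned rectangles:

```python
def predicted_value(x):
    """Piecewise constant prediction surface of a small hand-built tree."""
    if x[0] <= 2.0:          # split on feature 0
        if x[1] <= 1.0:      # split on feature 1
            return 0.2       # constant on the rectangle x0 <= 2, x1 <= 1
        return 0.8           # constant on the rectangle x0 <= 2, x1 > 1
    return 0.5               # constant on the half-plane x0 > 2

print(predicted_value([1.5, 0.3]), predicted_value([1.5, 3.0]), predicted_value([4.0, 0.0]))
```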
For KNN the prediction surface is chosen to be constant on Voronoi cells, the polyhedral regions defined by the KNN condition: each region consists of all the points whose K nearest neighbours are the same K training data points. This decision is made outside the context of a loss function; it depends instead on the choice of a distance metric.
Within each Voronoi cell the choice of a loss function can guide how one should calculate the predicted value. For example, the mean squared error loss would compel us to choose the mean of the target values of the K nearest training points, and the log-loss (in the case of classification) would compel us to choose the proportion of the K nearest points labeled with the positive class.
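A minimal sketch of this, assuming Euclidean distance as the metric and a tiny made-up training set (all data below is illustrative):

```python
import numpy as np

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y_train = np.array([0, 1, 1, 0, 1])  # regression targets or 0/1 class labels
K = 3

def knn_predict(x, squared_error=True):
    """Predict at x by aggregating the targets of the K nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # the distance-metric choice
    nearest = np.argsort(dists)[:K]               # indices of the K nearest points
    if squared_error:
        return y_train[nearest].mean()            # mean minimizes squared error in the cell
    return (y_train[nearest] == 1).mean()         # proportion positive minimizes log-loss

print(knn_predict(np.array([0.5, 0.5])))
print(knn_predict(np.array([0.5, 0.5]), squared_error=False))
```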
This is the same situation as with decision trees, where the choice of a loss function determines how the predicted value in each terminal node is calculated from all the training data points that reside in that terminal node.
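A sketch of that leaf-value calculation, assuming the terminal-node assignments are already known (the assignments and targets below are made up for illustration):

```python
import numpy as np

leaf_of = np.array([0, 0, 0, 1, 1, 1])                  # terminal node of each training point
y_reg   = np.array([2.0, 3.0, 4.0, 10.0, 12.0, 14.0])   # regression targets
y_cls   = np.array([0, 0, 1, 1, 1, 1])                  # 0/1 class labels

for leaf in np.unique(leaf_of):
    mask = leaf_of == leaf
    mean_value = y_reg[mask].mean()        # squared error -> leaf mean
    pos_proportion = (y_cls[mask] == 1).mean()  # log-loss -> proportion of positives
    print(leaf, mean_value, pos_proportion)
```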