The fact that your approach returns 4 as the optimal number of clusters does not imply that there actually are 4 separate groups of observations. To test this empirically, you can generate a random dataset with 80,000 observations and 11 variables and repeat the procedure. I bet the function would still return an optimal number of clusters (maybe even 4), yet since the data was generated randomly, we would know that the actual number should be none (or 1).
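A minimal sketch of this sanity check, assuming scikit-learn is available (I use the silhouette score as the stand-in for "the procedure", and a smaller sample than 80,000 to keep the runtime short):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# purely random data: there is no real cluster structure here
X = rng.uniform(size=(5000, 11))

# score each candidate k exactly as one would on real data
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels, sample_size=1000, random_state=0)

# the criterion always nominates *some* "best" k, even for pure noise
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

The point is that a "best" k always comes out; what gives the game away is that the winning silhouette score stays close to zero, which it never would for genuinely separated groups.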
Furthermore, k-means is based on Euclidean distance, and observations are assigned to the nearest centroid, which means it implicitly assumes the clusters to be of roughly equal size. So if your data contains 2 huge clusters and 10 smaller ones, the smaller ones would likely not be differentiated into separate groups.
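You can see this failure mode on synthetic data (a sketch, again assuming scikit-learn; the cluster locations and sizes are made up for illustration). One large cluster dominates the within-cluster sum of squares, so k-means prefers to split it in two and lump the two small, nearby clusters together:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# one large, diffuse cluster and two small, close-together ones
big    = rng.normal(loc=[0, 0],  scale=1.0, size=(2000, 2))
small1 = rng.normal(loc=[5, 1],  scale=0.2, size=(30, 2))
small2 = rng.normal(loc=[5, -1], scale=0.2, size=(30, 2))
X = np.vstack([big, small1, small2])

# ask for the "true" number of clusters, 3
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# the big cluster typically gets carved into two labels,
# while both small clusters end up sharing a single label
print(set(labels[:2000].tolist()), set(labels[2000:].tolist()))
```

Splitting the big cluster reduces the objective far more than keeping the two small clusters apart would, so k-means merges them even at the correct k.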
You can try other methods for determining how many separated groups of points there are. What I would try first is principal component analysis: visualise the scatter of the first 3 components and look for areas of densely populated points separated by less densely populated borders.
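Such a visualisation could look like this (a sketch assuming scikit-learn and matplotlib; the random matrix stands in for your data, and the filename is arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))  # placeholder for your 11-variable dataset

# project onto the first 3 principal components
Z = PCA(n_components=3).fit_transform(X)

# 3-D scatter of the projected points; dense blobs separated by
# sparse regions would suggest genuine cluster structure
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(Z[:, 0], Z[:, 1], Z[:, 2], s=5, alpha=0.4)
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
ax.set_zlabel("PC3")
fig.savefig("pca_scatter.png")
```

For random data like the placeholder above you would see one diffuse blob; visible islands of points would be evidence for real groups.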
However, the point made by @Tim still stands - clustering is often a subjective procedure, and there might not be an objective way to select the number of clusters. As an example, consider the exercise of clustering animals in a zoo. We might cluster them by number of legs, or by colour, or by height, or by what they eat, or by their natural habitat, or by how long they live, and so on. All of these groupings would be different, yet all of them would also be valid. The same idea extends to the number of clusters. I might cluster animals based on whether they live in Europe, Africa, the Americas, or Australia. And you might divide these continents further into north/south/east/west - giving more clusters. Yet we would both be right in our own way.