I'm having difficulty finding an answer on my own because I don't know the jargon. I need somebody to point me in the right direction.
Assume I have 100000 points with coordinates in the unit square, so points like (0.55313, 0.236831).
I want to create a B/W picture of a given size in pixels (say 1000 x 1000) that shows these points.
The naive approach is to multiply the coordinates by the side length and round, so the point (0.55313, 0.236831) becomes pixel (553, 237), which is then turned on.
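In code, that naive approach would look something like this (a rough Python sketch, just to make my description concrete):

```python
import numpy as np

def naive_render(points, side=1000):
    """Naive rendering: round each coordinate and switch that pixel on."""
    img = np.zeros((side, side), dtype=bool)
    for x, y in points:
        px = min(int(round(x * side)), side - 1)  # clamp x == 1.0 to the last pixel
        py = min(int(round(y * side)), side - 1)
        img[py, px] = True  # row = y, column = x
    return img
```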
This is rather unsatisfactory: if there are many points, say millions, I could easily fill the whole area with a black blob that gives me zero information.
So what I thought of is a 1000 x 1000 array of floating-point numbers that I update somehow as I add each point; at the end I normalize the values and obtain a grayscale image, which will be "blacker" where there are more points.
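This is how I imagine the accumulation step, assuming for the moment that each point just increments a single cell (the "somehow" is exactly what I'm asking about):

```python
import numpy as np

def density_render(points, side=1000):
    """Accumulate a count per pixel, then normalize to grayscale."""
    acc = np.zeros((side, side), dtype=np.float64)
    for x, y in points:
        px = min(int(x * side), side - 1)
        py = min(int(y * side), side - 1)
        acc[py, px] += 1.0
    # normalize so the densest pixel maps to 1.0 (the "blackest")
    return acc / acc.max() if acc.max() > 0 else acc
```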
Now the question: which algorithms are used to add a "blurred" point? That is, I would like to take the real coordinates into account, so that if a point lies near the border of a "pixel", adding it to my array "smudges" the nearby pixels a little. Like when you draw a point in a drawing program and it actually adds a little disk of decreasing (linear? Gaussian?) intensity.
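To illustrate what I mean by "smudging", here is my guess at how a linear spread over the four nearest pixels might work; I don't know whether this is the standard technique or what it is called:

```python
import numpy as np

def splat_linear(acc, x, y):
    """Spread one point over the 2x2 block of pixels it overlaps,
    weighting each pixel by how close the point is to its centre."""
    side = acc.shape[0]
    # fractional pixel coordinates, taking pixel centres at (i + 0.5, j + 0.5)
    fx = x * side - 0.5
    fy = y * side - 0.5
    ix, iy = int(np.floor(fx)), int(np.floor(fy))
    dx, dy = fx - ix, fy - iy
    for j, wy in ((iy, 1 - dy), (iy + 1, dy)):
        for i, wx in ((ix, 1 - dx), (ix + 1, dx)):
            if 0 <= i < side and 0 <= j < side:
                acc[j, i] += wx * wy  # the four weights sum to 1
```

A point exactly at a pixel centre would then contribute only to that pixel, while a point on a border would split its weight evenly between neighbours, which is the behaviour I'm after.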
Surely this is a problem that has already been studied and solved; I just don't know what it is called...
thanks