I think the methods that require no modification to the environment (e.g. gmapping with a 2D lidar, or ORB-SLAM with a camera) are probably too computationally intensive to run on the Pi alone. You would need to run a separate server and connect over Wi-Fi. This approach works; I've actually gotten it to work over the internet rather than just a local network.
However, if you're willing to limit yourself to one room, you could try putting up some visual fiducials at known locations.
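Once a detector (an ArUco or AprilTag library, say) gives you the tag's pose relative to the robot, recovering the robot's world pose is just a frame composition. Here's a minimal 2D sketch; the function name and the tuple convention `(x, y, theta)` are my own, not from any particular library:

```python
import math

def robot_pose_from_tag(tag_world, tag_in_robot):
    """Recover the robot's world pose (x, y, theta) from one tag sighting.

    tag_world:    known (x, y, theta) of the fiducial in the world frame
    tag_in_robot: measured (x, y, theta) of the fiducial in the robot frame
                  (e.g. from an ArUco/AprilTag detector)
    """
    xt, yt, tt = tag_world
    xr, yr, tr = tag_in_robot
    # Heading: the tag's world orientation minus the orientation we observe.
    theta = tt - tr
    # Rotate the measured offset into the world frame, then subtract it
    # from the tag's known position to get the robot's position.
    x = xt - (math.cos(theta) * xr - math.sin(theta) * yr)
    y = yt - (math.sin(theta) * xr + math.cos(theta) * yr)
    return x, y, theta
```

With several tags visible at once you'd average (or filter) the individual estimates, which also smooths out detection noise.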
You could try an approach like the one Paul suggests in a comment: a bird's-eye-view camera that tracks the robot as it moves around the room and computes its location directly. With this approach you'll need an external computer anyway, so you might as well do visual SLAM with a camera mounted on the robot.
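If you do go the overhead-camera route, the core step is mapping the robot's pixel position to floor coordinates. A planar homography handles that: measure the floor positions of four or more points visible in the image, compute the 3x3 matrix once (e.g. with OpenCV's `cv2.findHomography`), and then apply it per frame. A sketch of the per-frame step, assuming `H` is that precomputed matrix stored as nested lists:

```python
def pixel_to_floor(H, u, v):
    """Map an image pixel (u, v) to floor coordinates via a 3x3
    homography H. Homogeneous coordinates: divide by the third row."""
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    x = (H[0][0] * u + H[0][1] * v + H[0][2]) / w
    y = (H[1][0] * u + H[1][1] * v + H[1][2]) / w
    return x, y

# Toy example: 100 px per meter, floor origin under pixel (320, 240).
H = [[0.01, 0.0, -3.2],
     [0.0, 0.01, -2.4],
     [0.0, 0.0, 1.0]]
```

For heading (not just position), put two distinguishable markers on the robot and take the angle between their floor positions.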
A camera is significantly cheaper than a lidar, though you can get a salvaged Neato XV 2D lidar on eBay for about $70.
All of this would be easier if you had wheel encoders to provide an odometry estimate.
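The encoder-to-odometry step is standard differential-drive dead reckoning. A minimal sketch (my own function name; wheel distances come from tick counts times meters-per-tick):

```python
import math

def update_odometry(pose, d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive pose from encoder deltas.

    pose:            current (x, y, theta)
    d_left, d_right: distance each wheel travelled since the last update
    wheel_base:      distance between the two wheels
    """
    x, y, theta = pose
    d = (d_left + d_right) / 2.0              # forward motion of the center
    dtheta = (d_right - d_left) / wheel_base  # heading change
    # Midpoint approximation: assume the motion happened at the average heading.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

On its own this drifts without bound, but as the prediction step feeding any of the localization schemes above (SLAM, fiducials, or the overhead camera) it helps a lot.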