I have a Truck table in my PostgreSQL DB, and I write a new row every minute. I want to get the latest element of each group by serial, but the table is huge and the query takes a long time. It currently has 470,000 rows. I tried this query:
rec_ = Trucks.objects.filter(serial='tsr1801').distinct('serial').order_by('serial', '-id')
It responds very slowly. I also tried the other methods shown below, but those queries are slow as well.
1) https://stackoverflow.com/a/19930802/7456750
My version: Trucks.objects.annotate(max_id=Max('id')).filter(id=F('max_id'))
2) https://stackoverflow.com/a/17887296/7456750
My version: Trucks.objects.values('serial', 'payload', 'datetime').annotate(id=Max('id'))
Is there a fast way to get the latest element of each group?
+----+---------+---------+---------------------+
| Id | Serial  | Payload | Datetime            |
+----+---------+---------+---------------------+
|  1 | tsr1801 | 24432   | 2018-11-01 12:00:00 |
|  2 | tsr1802 | 20234   | 2018-11-01 12:01:00 |
|  3 | tsr1802 | 21234   | 2018-11-01 12:01:00 |
|  4 | tsr1801 | 24332   | 2018-11-01 12:02:00 |
|  5 | tsr1801 | 21532   | 2018-11-01 12:03:00 |
|  6 | tsr1802 | 19234   | 2018-11-01 12:02:00 |
|  7 | tsr1801 | 18040   | 2018-11-01 12:04:00 |
|  9 | tsr1801 | 27452   | 2018-11-01 12:05:00 |
+----+---------+---------+---------------------+
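To make the expected result concrete, here is the latest-per-group logic sketched in plain Python over the sample rows above (just an illustration of the output I want, not the Django/SQL solution):

```python
# Sample rows from the table above, as plain dicts.
rows = [
    {"id": 1, "serial": "tsr1801", "payload": 24432, "datetime": "2018-11-01 12:00:00"},
    {"id": 2, "serial": "tsr1802", "payload": 20234, "datetime": "2018-11-01 12:01:00"},
    {"id": 3, "serial": "tsr1802", "payload": 21234, "datetime": "2018-11-01 12:01:00"},
    {"id": 4, "serial": "tsr1801", "payload": 24332, "datetime": "2018-11-01 12:02:00"},
    {"id": 5, "serial": "tsr1801", "payload": 21532, "datetime": "2018-11-01 12:03:00"},
    {"id": 6, "serial": "tsr1802", "payload": 19234, "datetime": "2018-11-01 12:02:00"},
    {"id": 7, "serial": "tsr1801", "payload": 18040, "datetime": "2018-11-01 12:04:00"},
    {"id": 9, "serial": "tsr1801", "payload": 27452, "datetime": "2018-11-01 12:05:00"},
]

latest = {}
for row in rows:
    # Keep, for each serial, the row with the highest id seen so far.
    if row["serial"] not in latest or row["id"] > latest[row["serial"]]["id"]:
        latest[row["serial"]] = row

# latest["tsr1801"]["id"] == 9 and latest["tsr1802"]["id"] == 6
```

So for this data the query should return the rows with id 9 (tsr1801) and id 6 (tsr1802).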