32

I have a task which needs to be run on 'most' objects in my database once every some period of time (once a day, once a week, whatever). Basically this means that I have some query that looks like this running in its own thread:

for model_instance in SomeModel.objects.all():
    do_something(model_instance)

(Note that it's actually a filter() not all(), but nonetheless I still end up selecting a very large set of objects.)

The problem I'm running into is that after running for a while the thread is killed by my hosting provider because I'm using too much memory. I'm assuming all this memory use is happening because, even though the QuerySet object returned by my query initially has a very small memory footprint, it ends up growing as the QuerySet caches each model_instance as I iterate through them.
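
(For illustration, a rough sketch of where that memory goes; _result_cache is the internal list a Django QuerySet uses to hold rows it has already evaluated:)

qs = SomeModel.objects.all()
for model_instance in qs:
    do_something(model_instance)
    # qs._result_cache keeps a reference to model_instance (and every earlier
    # instance), so memory grows until the whole result set is held in memory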

My question is, "what is the best way to iterate through almost every SomeModel in my database in a memory-efficient way?" or perhaps my question is "how do I 'un-cache' model instances from a Django queryset?"

EDIT: I'm actually using the results of the queryset to build a series of new objects. As such, I don't end up updating the queried-for objects at all.

Chris W.
  • 34,983
  • 34
  • 91
  • 128
  • You'll have to provide some hint as to what you're doing with the queryset. Django has rules, and a number of operations require loading the entire QuerySet into memory, where other operations merely process the rows one-at-a-time. http://docs.djangoproject.com/en/1.2/topics/db/queries/#querysets-are-lazy. Please provide some hint as to how you're using your QuerySet objects. – S.Lott Jan 31 '11 at 23:04
  • Sorry, I should specify that I am using the information from the QuerySet objects to create new objects (of a different type). So I am never actually updating the objects I'm querying for. – Chris W. Feb 01 '11 at 19:15

8 Answers

22

What about using Django core's Paginator and Page objects, documented here:

https://docs.djangoproject.com/en/dev/topics/pagination/

Something like this:

from django.core.paginator import Paginator
from djangoapp.models import SomeModel

paginator = Paginator(SomeModel.objects.all(), 1000) # chunks of 1000

for page_idx in range(1, paginator.num_pages + 1):  # page numbers are 1-based
    for row in paginator.page(page_idx).object_list:
        # here you can do what you want with the row
        do_something(row)
    print "done processing page %s" % page_idx
mpaf
  • 6,173
  • 5
  • 36
  • 40
  • 2
    Why isn't this the accepted answer? Since it is a native Django solution, it seems like the least effort and accomplishes the job – madhukar93 May 02 '18 at 08:54
  • 3
    Out of curiosity, is this not similar to `iterator()`? https://docs.djangoproject.com/en/2.1/ref/models/querysets/#iterator In fact, Paginator will call `count` first (`len` otherwise). Doesn't that make it slightly more inefficient? Why would this option be better than using iterator? – jbondia Apr 29 '19 at 10:41
  • 5
    Ought to be `for page_idx in range(1, paginator.num_pages+1):` or you'll skip the last page – Mickey Ristroph Jun 14 '19 at 12:34
17

So what I actually ended up doing is building something that you can 'wrap' a QuerySet in. It works by making a deepcopy of the QuerySet and slicing it--e.g., some_queryset[15:45]--and then making another deepcopy of the original QuerySet once that slice has been completely iterated through. This means that only the set of objects returned by 'this' particular slice is stored in memory.

import copy
import logging

logger = logging.getLogger(__name__)


class MemorySavingQuerysetIterator(object):

    def __init__(self,queryset,max_obj_num=1000):
        self._base_queryset = queryset
        self._generator = self._setup()
        self.max_obj_num = max_obj_num

    def _setup(self):
        for i in xrange(0,self._base_queryset.count(),self.max_obj_num):
            # By making a copy of the queryset and using that to actually access
            # the objects we ensure that there are only `max_obj_num` objects in
            # memory at any given time
            smaller_queryset = copy.deepcopy(self._base_queryset)[i:i+self.max_obj_num]
            logger.debug('Grabbing next %s objects from DB' % self.max_obj_num)
            for obj in smaller_queryset.iterator():
                yield obj

    def __iter__(self):
        return self

    def next(self):
        return self._generator.next()

So instead of...

for obj in SomeObject.objects.filter(foo='bar'):  # <-- something that returns *a lot* of objects
    do_something(obj)

You would do...

for obj in MemorySavingQuerysetIterator(SomeObject.objects.filter(foo='bar')):
    do_something(obj)

Please note that the intention of this is to save memory in your Python interpreter. It essentially does this by making more database queries. Usually people are trying to do the exact opposite of that--i.e., minimize database queries as much as possible without regard to memory usage. Hopefully somebody will find this useful though.

Chris W.
  • 34,983
  • 34
  • 91
  • 128
14

You can't use Model.objects.all().iterator() because it will fetch all the elements on your table at once. Neither can you use Model.objects.all()[offset:offset+pagesize], because it will cache the results. Either will exceed your memory limit.

I've tried to mix both solutions, and it worked:

offset = 0
pagesize = 1000
count = Model.objects.all().count()
while offset < count:
    for m in Model.objects.all()[offset : offset + pagesize].iterator():
        do_something(m)
    offset += pagesize

Change pagesize to fit your requirements, and optionally change the [offset : offset + pagesize] slice to the [offset * pagesize : (offset + 1) * pagesize] idiom if it fits you better (in that case, increment offset by 1 instead of by pagesize). Also, of course, replace Model with your actual model name.
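
For clarity, a sketch of that page-index variant (the loop counter then counts pages rather than rows):

page = 0
pagesize = 1000
count = Model.objects.all().count()
while page * pagesize < count:
    for m in Model.objects.all()[page * pagesize : (page + 1) * pagesize].iterator():
        do_something(m)
    page += 1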

Marcos Dumay
  • 165
  • 1
  • 3
  • 3
    What do you mean by "catch your results"? – Divick Dec 02 '16 at 06:47
  • 4
    cache I'd assume – Aaron McMillin Nov 16 '17 at 14:43
  • 2
    QuerySet.iterator() will use offsets to fetch the rows one by one without caching results, depending on the database. Postgres and Oracle are specifically called out in the docs for doing this. – Derek Feb 21 '20 at 16:36
  • 2
    I'm not sure why the parent answer decries the use of iterator because (as noted by @Derek) `.iterator(batch_size=N)` seems to do exactly what the question asked for...at least with Postgres' server side cursors? For example, I have a query which can return of the order of 1000 rows (model instances) where each row contains circa 20MB of JSON data. Without .iterator(), peak memory usage was well over 20GB. This was solved completely using .iterator(batch_size=N) which trades memory against database round trips. – Shaheed Haque Aug 20 '21 at 20:27
  • 1
    No. `queryset.iterator()` has the parameter `chunk_size` and will only fetch as many rows as defined per iteration. I have used it in one of my project and it really helps a lot to reduce memory consumption for large querysets. see: https://docs.djangoproject.com/en/3.2/ref/models/querysets/#iterator – Erik Kalkoken Jan 20 '22 at 02:27
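
For reference, a minimal sketch of the iterator(chunk_size=...) approach the comments above describe (the parameter exists in Django 2.0+; on databases with server-side cursor support, such as PostgreSQL, rows are streamed in chunks rather than loaded all at once):

for m in Model.objects.all().iterator(chunk_size=1000):
    do_something(m)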
10

Many solutions implement SQL OFFSET and LIMIT by slicing the queryset. As stefano notes, with larger datasets this becomes very inefficient. The proper way of handling this is to use a server-side cursor to keep track of the offset.

Native server-side cursor support is in the works for Django. Until it's ready, here is a simple implementation if you are using Postgres with the psycopg2 backend:

from django.db import connection, transaction


def server_cursor_query(Table):
    table_name = Table._meta.db_table

    # There must be an existing connection before creating a server-side cursor
    if connection.connection is None:
        dummy_cursor = connection.cursor()  # not a server-side cursor

    # Optionally keep track of the columns so that we can return a QuerySet. However,
    # if your table has foreign keys, you may need to rename them appropriately
    columns = [x.name for x in Table._meta.local_fields]

    cursor = connection.connection.cursor(name='gigantic_cursor')  # a server-side cursor

    with transaction.atomic():
        cursor.execute('SELECT {} FROM {}'.format(', '.join(columns), table_name))

        while True:
            rows = cursor.fetchmany(1000)

            if not rows:
                break

            for row in rows:
                fields = dict(zip(columns, row))
                yield Table(**fields)
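
Usage would look something like this (SomeModel standing in for your model class):

for obj in server_cursor_query(SomeModel):
    do_something(obj)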

See this blog post for a great explanation of memory issues from large queries in Django.

drs
  • 5,248
  • 3
  • 38
  • 64
  • 1
    Big +1 for using the correct tools for the job. It would be great seeing support in the Django ORM indeed. By the way, if you don't mind poking a bit through the internals, you can invoke `sql, params = queryset.query.get_compiler(using=queryset.db).as_sql()` to get the SQL query from a queryset. And you should use Table.from_db to turn it into an actual instance on recent Django versions. – spectras Jun 01 '15 at 14:10
  • @spectras, `.as_sql()` looks very useful. However, I would need to get the field names in the same order as the `.as_sql()` `SELECT` statement in order to create the Table instance at the end. Is there a way to do that without parsing the `.as_sql()` manually? – drs Jun 02 '15 at 11:06
  • 1
    Well, yes, if you don't mind the poking around, I put this here: https://gist.github.com/spectras/f22d303088e4b2c498de If you do use it, I'd recommend setting up some tests to ease Django upgrades. Support for select_related() could be added as well, that'd be 3 lines, but an additional tie to another ORM internal so... – spectras Jun 02 '15 at 13:05
5

There is a Django snippet for this:

http://djangosnippets.org/snippets/1949/

It iterates over a queryset in smaller "chunks" of the original queryset, yielding the rows one at a time. It ends up using significantly less memory while allowing you to tune for speed. I use it in one of my projects.
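
In outline, the snippet walks the queryset in primary-key order, one chunk at a time. This is a rough sketch of that pattern (not the snippet's exact code), assuming an orderable primary key:

import gc

def queryset_iterator(queryset, chunksize=1000):
    # Walk the queryset in primary-key order, one chunk at a time, so that
    # only `chunksize` model instances are held in memory at once.
    queryset = queryset.order_by('pk')
    last_pk = None
    while True:
        chunk = queryset if last_pk is None else queryset.filter(pk__gt=last_pk)
        chunk = list(chunk[:chunksize])
        if not chunk:
            break
        for row in chunk:
            yield row
        last_pk = chunk[-1].pk
        gc.collect()  # encourage Python to release the previous chunk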

Nick
  • 51
  • 1
  • 1
3

I'm continuing my research, and it kind of looks like I want to do the equivalent of an SQL OFFSET and LIMIT, which, according to the Django docs on Limiting QuerySets, means I want to use the slice syntax, e.g., SomeModel.objects.all()[15:25]

So now I'm thinking maybe something like this is what I'm looking for:

# Figure out the number of objects I can safely hold in memory
# I'll just say 100 for right now
number_of_objects = 100 
count = SomeModel.objects.all().count()
for i in xrange(0, count, number_of_objects):
    smaller_queryset = SomeModel.objects.all()[i:i+number_of_objects]
    for model_instance in smaller_queryset:
        do_something(model_instance)

By my reckoning this would make it so that smaller_queryset would never grow too large.

Chris W.
  • 34,983
  • 34
  • 91
  • 128
2

The following is a queryset iterator that splits the queryset into chunks and is not much slower than the basic iterator (it makes a linear number of database queries, as opposed to 1, but only one query per 1,000 rows). This function pages by primary key, which is necessary for an efficient implementation, since OFFSET is a linear-time operation in most SQL databases. Memory use is bounded, proportional to the page size.

def queryset_iterator(queryset, page_size=1000):
    if not queryset:
        return
    max_pk = queryset.order_by("-pk")[0].pk
    # Scale the page size up by the average density of primary keys in the queryset
    adjusted_page_size = int(page_size * max_pk / queryset.count())
    
    pages = int(max_pk / adjusted_page_size) + 1
    for page_num in range(pages):
        lower = page_num * adjusted_page_size
        page = queryset.filter(pk__gte=lower, pk__lt=lower + adjusted_page_size)
        for obj in page:
            yield obj

Use looks like:

for obj in queryset_iterator(Model.objects.all()):
    # do stuff

This code has two assumptions:

  1. Your primary keys are integers (this will not work for UUID primary keys).
  2. The primary keys of the queryset are at least somewhat uniformly distributed. If this is not true, the adjusted_page_size can end up too large and you may get one or several massive pages as part of your iteration.

To give a sense of the overhead, I tested this on a Postgres table with 40,000 entries. The queryset_iterator adds about 80% to the iteration time vs raw iteration (2.2 seconds vs 1.2 seconds). That overhead does not vary substantially for page sizes between 200 and 10,000, though it starts going up below 200.

Zags
  • 31,395
  • 12
  • 87
  • 123
2

The following approach doesn't use an expensive database OFFSET query and avoids having to calculate page counts up front, making it more efficient. Limitations are specified in the docstring.

def queryset_pk_iterator(queryset, batch_size=1000):
    """
    Iterator that splits the queryset into batches to reduce memory consumption.
    Useful in cases where the built-in .iterator() method of the queryset would skip the "prefetch_related" optimization.

    :param queryset: Queryset to iterate over. The supplied queryset must not specify order and limit/offset.
        Queryset objects must have a monotonically increasing, orderable primary key.
    :param batch_size: Size of the batches into which to split the queryset.
    :return: iterator object
    """
    pk = None
    while True:
        batch_queryset = queryset.order_by('pk')
        if pk is not None:
            batch_queryset = batch_queryset.filter(pk__gt=pk)
        batch_queryset = batch_queryset[:batch_size]
        obj = None
        for obj in batch_queryset:
            yield obj
        if obj is None:
            return
        pk = obj.pk
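
Usage mirrors the other answers (Model standing in for your own model class):

for obj in queryset_pk_iterator(Model.objects.all()):
    do_something(obj)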
dtatarkin
  • 951
  • 6
  • 6