
Here is the situation:

  • I get gzipped XML documents from Amazon S3

      import boto
      from boto.s3.connection import S3Connection
      from boto.s3.key import Key
      conn = S3Connection('access Id', 'secret access key')
      b = conn.get_bucket('mydev.myorg')
      k = Key(b)
      k.key = 'documents/document.xml.gz'
    
  • I read them into a file as

      import gzip
      f = open('/tmp/p', 'wb')
      k.get_file(f)
      f.close()
      r = gzip.open('/tmp/p', 'rb')
      file_content = r.read()
      r.close()
    

Question

How can I ungzip the streams directly and read the contents?

I do not want to create temp files; they don't feel clean.

Michal Charemza
daydreamer

4 Answers


Yes, you can use the zlib module to decompress byte streams:

import zlib

def stream_gzip_decompress(stream):
    dec = zlib.decompressobj(32 + zlib.MAX_WBITS)  # offset 32 to skip the gzip header
    for chunk in stream:
        rv = dec.decompress(chunk)
        if rv:
            yield rv
    # flush anything still held in the decompressor's internal buffers
    tail = dec.flush()
    if tail:
        yield tail

The offset of 32 tells zlib to expect a gzip header and to skip it automatically.
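
As a quick, self-contained sanity check of the generator above (the payload here is made up purely for illustration):

import gzip

# made-up sample payload, gzip-compressed in memory for the test
payload = b"<doc>hello</doc>" * 1000
compressed = gzip.compress(payload)

# feed the compressed bytes to the generator in small chunks,
# roughly the way they would arrive from a network stream
chunks = (compressed[i:i + 1024] for i in range(0, len(compressed), 1024))
assert b"".join(stream_gzip_decompress(chunks)) == payload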

The S3 key object is an iterator, so you can do:

for data in stream_gzip_decompress(k):
    # do something with the decompressed data
Martijn Pieters
  • Thank you for the reply, @MartijnPieters! Strangely, that doesn't seem to have solved the problem. (Apologies for the following 1 liner) `dec = zlib.decompressobj(32 + zlib.MAX_WBITS); for chunk in app.s3_client.get_object(Bucket=bucket, Key=key)["Body"].iter_chunks(2 ** 19): data = dec.decompress(chunk); print(len(data));` Seems to output 65505 then 0, 0, 0, 0, 0, .... could this be something to do with `iter_chunks()`? – WillJones Apr 05 '20 at 11:13
  • @WillJones: please post a separate question for that, this is not something we can hash out in comments. Sorry! – Martijn Pieters Apr 05 '20 at 13:18
  • Not a problem @Martijn! Will do so now. – WillJones Apr 05 '20 at 17:31
  • Just created the issue here: https://stackoverflow.com/questions/61048597/ungzipping-chunks-of-bytes-from-from-s3-using-iter-chunks Thank you for taking a look! – WillJones Apr 05 '20 at 19:52
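
For anyone on boto3 rather than the older boto (as in the comment thread above), the same generator can be fed straight from the streamed response body; a minimal sketch, with placeholder bucket and key names, reusing stream_gzip_decompress from the answer:

import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket='mydev.myorg', Key='documents/document.xml.gz')['Body']

# iter_chunks() yields the raw compressed bytes without loading the whole object
for data in stream_gzip_decompress(body.iter_chunks()):
    pass  # process the decompressed bytes here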

I had to do the same thing and this is how I did it:

import gzip
import StringIO  # Python 2; on Python 3 use io.BytesIO (see the sketch below)
f = StringIO.StringIO()
k.get_file(f)
f.seek(0)  # this is crucial: rewind the buffer before handing it to gzip
gzf = gzip.GzipFile(fileobj=f)
file_content = gzf.read()
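
On Python 3, where the StringIO module is gone, the same idea works with io.BytesIO; a minimal sketch, reusing the boto key k from the question:

import gzip
import io

f = io.BytesIO()
k.get_file(f)   # download the compressed object into the in-memory buffer
f.seek(0)       # rewind before handing the buffer to gzip
file_content = gzip.GzipFile(fileobj=f).read()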
Alex

For Python 3.x and boto3:

I used BytesIO to read the compressed object into a buffer, then zipfile to open that buffer as an archive and read its contents line by line.

import io
import zipfile
import boto3
import sys

s3 = boto3.resource('s3', 'us-east-1')


def stream_zip_file():
    obj = s3.Object(
        bucket_name='MonkeyBusiness',
        key='/Daily/Business/Banana/{current-date}/banana.zip'
    )
    # read the whole compressed object into an in-memory buffer
    buffer = io.BytesIO(obj.get()["Body"].read())
    print(buffer)
    z = zipfile.ZipFile(buffer)
    # open the first member of the archive as a file-like object
    member = z.open(z.infolist()[0])
    print(sys.getsizeof(member))
    line_counter = 0
    for _ in member:
        line_counter += 1
    print(line_counter)
    z.close()


if __name__ == '__main__':
    stream_zip_file()
Shek
  • I noticed that the memory consumption increases significantly when we do `buffer = io.BytesIO(obj.get()["Body"].read())`. However, reading a portion at a time with `read(1024)` keeps the memory usage low! – user 923227 Mar 19 '18 at 21:52
  • `buffer = io.BytesIO(obj.get()["Body"].read())` reads the whole file into memory. – Kirk Broadhurst May 11 '18 at 18:39
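
As the comments note, `obj.get()["Body"].read()` pulls the entire object into memory. For plain .gz objects (as in the question, rather than .zip archives), one lower-memory variant is to wrap boto3's streaming body directly, since gzip.GzipFile only needs a readable file-like object; a sketch with placeholder bucket and key names:

import gzip
import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket='mydev.myorg', Key='documents/document.xml.gz')['Body']

# GzipFile reads from the streaming body on demand instead of buffering it all
with gzip.GzipFile(fileobj=body) as gz:
    for line in gz:
        pass  # process one decompressed line at a time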

You can try a PIPE and read the contents without extracting the file to disk:

    import subprocess
    c = subprocess.Popen('zcat -c <gzip file name>', shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for row in c.stdout:
        print(row)

In addition, "/dev/fd/" + str(c.stdout.fileno()) gives you a FIFO (named pipe) path that can be passed to another program.
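
A hedged sketch of that last idea, handing the decompressed stream to another program (wc here, and the /tmp/p path is just the placeholder file from the question) without any temp copy of the uncompressed data:

    import subprocess

    zcat = subprocess.Popen(['zcat', '-c', '/tmp/p'], stdout=subprocess.PIPE)
    fifo = '/dev/fd/' + str(zcat.stdout.fileno())

    # pass_fds keeps the pipe's read end open in the child so it can open /dev/fd/N
    result = subprocess.run(['wc', '-l', fifo],
                            pass_fds=(zcat.stdout.fileno(),),
                            capture_output=True)
    zcat.stdout.close()
    zcat.wait()
    print(result.stdout.decode())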