
I'm guessing this is an easy fix, but I'm running into an issue where it takes nearly an hour to save a pandas DataFrame to a CSV file using the to_csv() function. I'm using Anaconda Python 2.7.12 with pandas 0.19.1.

import os
import glob
import pandas as pd

src_files = glob.glob(os.path.join('/my/path', "*.csv.gz"))

# 1 - Takes 2 min to read 20m records from 30 files
stage = pd.DataFrame()  # accumulate all files into one frame
for file_ in sorted(src_files):
    iter_csv = pd.read_csv(file_
                     , sep=','
                     , index_col=False
                     , header=0
                     , low_memory=False
                     , iterator=True
                     , chunksize=100000
                     , compression='gzip'
                     , memory_map=True
                     , encoding='utf-8')

    df = pd.concat([chunk for chunk in iter_csv])
    stage = stage.append(df, ignore_index=True)

# 2 - Takes 55 min to write 20m records from one dataframe
stage.to_csv('output.csv'
             , sep='|'
             , header=True
             , index=False
             , chunksize=100000
             , encoding='utf-8')

del stage

I've confirmed the hardware and memory are working, but these are fairly wide tables (~ 100 columns) of mostly numeric (decimal) data.

Thank you,

Kimi Merroll
  • Hardware bottleneck. Keep tabs on your disk throughput, and also check your free disk space. – Kartik Nov 17 '16 at 16:46
  • As I mentioned, I did check the disk space and can copy large files to the drive with expected speed. Also, I should have mentioned I'm writing to an SSD (Samsung 950) – Kimi Merroll Nov 17 '16 at 17:47
  • Try without the `chunksize` kwarg... It could be a lot of things, like quoting, value conversion, etc. Try to [profile](https://docs.python.org/3.5/library/profile.html) it and see where it spends most of its time (see the profiling sketch after these comments). – Kartik Nov 17 '16 at 18:07
  • Any update on that? I ran into a similar problem lately. – ℕʘʘḆḽḘ Apr 23 '17 at 18:50
  • I have an SSD on PCI Express and face the same issue; hardware should not be the bottleneck in this case... – PlagTag Jun 12 '17 at 15:16
  • See https://stackoverflow.com/a/54617862/6646912 for a benchmark showing how to speed up writing large DataFrames with mostly numeric data to a csv. – krassowski Feb 10 '19 at 15:52
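
A minimal profiling sketch along the lines of Kartik's comment above (the DataFrame here is synthetic and only roughly mirrors the shape described in the question; adjust it to your own data):

import cProfile
import pstats

import numpy as np
import pandas as pd

# Synthetic wide, mostly-numeric frame roughly like the one in the question.
df = pd.DataFrame(np.random.rand(100000, 100),
                  columns=['col_%d' % i for i in range(100)])

# Profile the write and show where the time goes.
cProfile.run("df.to_csv('profile_test.csv', sep='|', index=False)", 'to_csv.prof')
pstats.Stats('to_csv.prof').sort_stats('cumulative').print_stats(15)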

3 Answers


Adding my small insight since the 'gzip' alternative did not work for me: try the to_hdf method instead. This reduced the write time significantly! (Less than a second for a 100 MB file, whereas the CSV option took between 30 and 55 seconds.)

stage.to_hdf(r'path/file.h5', key='stage', mode='w')
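
For completeness, the file can be read back with read_hdf (both to_hdf and read_hdf need the PyTables package installed; the path and key below simply mirror the line above):

import pandas as pd

# Load the frame back from the HDF5 store written above.
stage = pd.read_hdf(r'path/file.h5', key='stage')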
Amir F
  • This solution works for me, while the .gz solution made no difference. The .to_hdf method wrote out 1.5GB in 13 seconds; .to_csv took too long to time, even with the changes suggested by Frane. – alliedtoasters Jun 25 '19 at 20:18
  • Yes, the .gz solution made no difference for a file size of 5GB. – Hardik Gupta Aug 30 '19 at 11:10
  • I went from 4 minutes with .to_csv to 8 seconds with .to_hdf! Thanks @amir-f! – Paul Oct 23 '20 at 14:07

You are reading compressed files and writing a plaintext file, so this could be an I/O bottleneck.

Writing a compressed file can speed up the write by up to 10x:

    stage.to_csv('output.csv.gz'
                 , sep='|'
                 , header=True
                 , index=False
                 , chunksize=100000
                 , compression='gzip'
                 , encoding='utf-8')

Additionally, you could experiment with different chunk sizes and compression methods ('bz2', 'xz'), as in the sketch below.
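
A rough timing sketch for such an experiment, assuming the `stage` DataFrame from the question ('xz' support may require a newer pandas than the 0.19.1 mentioned above, and timings depend heavily on the data and disk):

import time

# Usual file extension for each codec accepted by to_csv.
codecs = {None: '', 'gzip': '.gz', 'bz2': '.bz2', 'xz': '.xz'}

for comp, ext in codecs.items():
    start = time.time()
    stage.to_csv('output.csv' + ext, sep='|', index=False, compression=comp)
    print('%s: %.1f s' % (comp, time.time() - start))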

Frane

You said "[...] of mostly numeric (decimal) data." Do you have any columns with times and/or dates?

I saved an 8 GB CSV in seconds when it had only numeric/string values, but it took 20 minutes to save a 500 MB CSV with two date columns. So, what I would recommend is to convert each date column to a string before saving. The following command is enough:

df['Column'] = df['Column'].astype(str) 
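
If there are several date columns, a small sketch for converting them all at once (the DataFrame `df` and the datetime dtype selection are assumptions for illustration, not from the original post):

# Convert every datetime column to string before writing the CSV.
for col in df.select_dtypes(include=['datetime64[ns]']).columns:
    df[col] = df[col].astype(str)

df.to_csv('output.csv', index=False)

to_csv also accepts a date_format parameter (e.g. date_format='%Y-%m-%d') as another way to control how dates are written.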

I hope that this answer helps you.

P.S.: I understand that saving as an .hdf file solved the problem, but sometimes we really do need a .csv file anyway.

lucas F