
I have a very large data set and I can't afford to read it all into memory. So I'm thinking of reading only one chunk of it to train on, but I have no idea how to do it. Any thoughts will be appreciated.

– bensw

1 Answer


If you only want to read the first 999,999 (non-header) rows:

read_csv(..., nrows=999999)

If you only want to read rows 1,000,000 ... 1,999,999:

read_csv(..., skiprows=1000000, nrows=999999)

nrows : int, default None. Number of rows of file to read. Useful for reading pieces of large files.

skiprows : list-like or integer. Row numbers to skip (0-indexed) or number of rows to skip (int) at the start of the file.
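
As a fuller sketch (the file name `big.csv` is just a placeholder, not from the question), the two calls above could look like this; passing a list-like to `skiprows` keeps the header line from being skipped along with the data rows:

import pandas as pd

# First 999,999 data rows; the header (line 0) is read as usual.
head = pd.read_csv("big.csv", nrows=999999)

# Start reading data at file line 1,000,000 while keeping the header:
# skip lines 1 .. 999,999 but not line 0 (the header).
middle = pd.read_csv(
    "big.csv",
    skiprows=range(1, 1000000),
    nrows=999999,
)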

and for large files, you'll probably also want to use chunksize:

chunksize : int, default None. Return TextFileReader object for iteration.

pandas.io.parsers.read_csv documentation
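
For illustration only (the file name and the chunk size are assumptions, not from the question), a chunked read looks roughly like this:

import pandas as pd

# chunksize makes read_csv return an iterator of DataFrames instead of
# loading the whole file at once.
row_count = 0
for chunk in pd.read_csv("big.csv", chunksize=1000000):
    row_count += len(chunk)  # each chunk holds up to 1,000,000 rows
print(row_count)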

– smci
  • That's ok, they're slightly hidden. The doc could do with these examples. `chunksize` is a bit of a pain: you have to deal with unevenly-sized chunks. Also, preallocate your arrays/dataframes with the fixed size you know you'll need; don't dynamically concat/append whenever you can avoid it (a sketch follows these comments). – smci May 25 '14 at 09:00
  • ...and also, it's not like the interface is `nstart=,nend=...`. You have to do the arithmetic on `skiprows = nend - nrows` – smci May 25 '14 at 09:10
  • 1
    I guess that's just taken over from SQL: `LIMIT nstart, skiprows` :/ – FooBar May 25 '14 at 11:33
  • ...and don't forget off-by-n errors if you also use `header=n/list` – smci May 26 '14 at 07:13
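
A rough sketch of the preallocation advice from the comments above; the total row count, the file name, and the "value" column are assumptions made for illustration:

import numpy as np
import pandas as pd

n_rows = 5000000                          # total size known in advance (assumed)
out = np.empty(n_rows, dtype="float64")   # preallocate once

pos = 0
for chunk in pd.read_csv("big.csv", chunksize=1000000):
    vals = chunk["value"].to_numpy()
    out[pos:pos + len(vals)] = vals       # fill in place, no concat/append
    pos += len(vals)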