122

What's the Python way to read a CSV file into a pandas DataFrame (which I can then use for statistical operations, which can have differently-typed columns, etc.)?

My CSV file "value.txt" has the following content:

Date,"price","factor_1","factor_2"
2012-06-11,1600.20,1.255,1.548
2012-06-12,1610.02,1.258,1.554
2012-06-13,1618.07,1.249,1.552
2012-06-14,1624.40,1.253,1.556
2012-06-15,1626.15,1.258,1.552
2012-06-16,1626.15,1.263,1.558
2012-06-17,1626.15,1.264,1.572

In R we would read this file in using:

price <- read.csv("value.txt")  

and that would return an R data.frame:

> price <- read.csv("value.txt")
> price
        Date   price factor_1 factor_2
1 2012-06-11 1600.20    1.255    1.548
2 2012-06-12 1610.02    1.258    1.554
3 2012-06-13 1618.07    1.249    1.552
4 2012-06-14 1624.40    1.253    1.556
5 2012-06-15 1626.15    1.258    1.552
6 2012-06-16 1626.15    1.263    1.558
7 2012-06-17 1626.15    1.264    1.572

Is there a Pythonic way to get the same functionality?

smci
  • 29,564
  • 18
  • 109
  • 144
mazlor
  • 1,685
  • 4
  • 17
  • 20
  • will you comment on what's special with this dataframe and what statistical operations can you do with it? – LWZ Jan 16 '13 at 19:53
  • 3
    A dataframe can contain more than one type of data; for example, every column can be a list, and you can treat each column individually, applying functions to it. As for statistical operations, you can compute things like the mean, standard deviation, quartiles, ... – mazlor Jan 16 '13 at 20:29
  • Thanks! This is actually very useful to me. I've always loaded CSV files with the csv module, which gives me a list of lists. This data.frame sounds way better! – LWZ Jan 16 '13 at 21:13
  • @LWZ: see my edits and link, lmk if that succinctly answers your questions. That's as much as we can do here to 'sell' the benefits package. The wider question *"What are the benefits of using a pandas dataframe over a plain Python array/list-of-lists?"* has way too many answers to list here... – smci Jun 30 '19 at 19:49

9 Answers

191

pandas to the rescue:

import pandas as pd
print(pd.read_csv('value.txt'))

        Date    price  factor_1  factor_2
0  2012-06-11  1600.20     1.255     1.548
1  2012-06-12  1610.02     1.258     1.554
2  2012-06-13  1618.07     1.249     1.552
3  2012-06-14  1624.40     1.253     1.556
4  2012-06-15  1626.15     1.258     1.552
5  2012-06-16  1626.15     1.263     1.558
6  2012-06-17  1626.15     1.264     1.572

This returns a pandas DataFrame that is very similar to R's data.frame.
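
And like R's data.frame, it gives you typed columns and built-in statistical operations. A minimal sketch, assuming the value.txt file above (parse_dates is optional; it just turns the Date column into real datetimes):

import pandas as pd

df = pd.read_csv('value.txt', parse_dates=['Date'])

print(df.dtypes)            # Date is datetime64, the other columns are float64
print(df.describe())        # count, mean, std, quartiles for the numeric columns
print(df['price'].mean())   # operate on a single column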

root
  • 69,888
  • 24
  • 104
  • 118
18

To read a CSV file as a pandas DataFrame, you'll need to use pd.read_csv.

But this isn't where the story ends; data exists in many different formats and is stored in different ways, so you will often need to pass additional parameters to read_csv to ensure your data is read in properly.

Here's a table listing common scenarios encountered with CSV files, along with the appropriate argument to use for each. You will usually need some combination of the arguments below to read in your data.

┌───────────────────────────────────────────────────────┬─────────────────────────┬─────────────────────────────────────────────────────┐
│  Scenario                                             │  Argument               │  Example                                            │
├───────────────────────────────────────────────────────┼─────────────────────────┼─────────────────────────────────────────────────────┤
│  Read CSV with different separator¹                   │  sep/delimiter          │  read_csv(..., sep=';')                             │
│  Read CSV with tab/whitespace separator               │  delim_whitespace       │  read_csv(..., delim_whitespace=True)               │
│  Fix UnicodeDecodeError while reading²                │  encoding               │  read_csv(..., encoding='latin-1')                  │
│  Read CSV without headers³                            │  header and names       │  read_csv(..., header=None, names=['x', 'y', 'z'])  │
│  Specify which column to set as the index⁴            │  index_col              │  read_csv(..., index_col=[0])                       │
│  Read subset of columns                               │  usecols                │  read_csv(..., usecols=['x', 'y'])                  │
│  Numeric data is in European format (e.g., 1.234,56)  │  thousands and decimal  │  read_csv(..., thousands='.', decimal=',')          │
└───────────────────────────────────────────────────────┴─────────────────────────┴─────────────────────────────────────────────────────┘

Footnotes

  1. By default, read_csv uses a C parser engine for performance. The C parser can only handle single character separators. If your CSV has a multi-character separator, you will need to modify your code to use the 'python' engine. You can also pass regular expressions:

    df = pd.read_csv(..., sep=r'\s*\|\s*', engine='python')
    
  2. UnicodeDecodeError occurs when the data was stored in one encoding but is read in a different, incompatible one. The most common encoding schemes are 'utf-8' and 'latin-1'; your data is likely to fit into one of these.

  3. header=None specifies that the first row in the CSV is a data row rather than a header row, and names=[...] lets you specify the list of column names to assign to the DataFrame when it is created.

  4. "Unnamed: 0" occurs when a DataFrame with an un-named index is saved to CSV and then re-read after. Instead of having to fix the issue while reading, you can also fix the issue when writing by using

    df.to_csv(..., index=False)
    

There are other arguments I've not mentioned here, but these are the ones you'll encounter most frequently.
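
As a quick illustration of how some of these combine (the file name and column names here are hypothetical):

import pandas as pd

# hypothetical headerless, semicolon-separated file saved in latin-1
df = pd.read_csv('data_no_header.csv',
                 sep=';',                 # non-default separator
                 encoding='latin-1',      # avoid UnicodeDecodeError
                 header=None,             # the first row is data, not headers
                 names=['x', 'y', 'z'],   # supply the column names ourselves
                 index_col=0)             # use the first column ('x') as the index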

cs95
  • 330,695
  • 80
  • 606
  • 657
  • 1
    Can you please repaste/reformat the table-as-image as Unicode text using e.g. https://ozh.github.io/ascii-tables ? Otherwise it's hard to read, and won't be indexed for content search, or findable with plain old Ctrl-F text search. – smci Jun 30 '19 at 19:54
  • @smci I agree what you said about indexing (plus the fact that images cannot be edited by other people) but it does not look nearly as good and requires scrolling which makes it hard to follow. – cs95 Jun 30 '19 at 20:17
  • anyway you need to rewrite it to be user-friendly to a new user; instead of incomprehensible stuff like *'Avoid "Unnamed: 0"'* it should say something in plain English like *'`index_col`: tell pandas which column(s) to use as the index for your dataframe'* – smci Jun 30 '19 at 20:19
  • @smci Looks like ASCII tables don't play nicely with superscripts -- it messes up the table formatting. – cs95 Jun 30 '19 at 20:25
  • cs95: I recommended Unicode, not ASCII. Unicode works fine. So just change the default in the pulldown on https://ozh.github.io/ascii-tables – smci Jun 30 '19 at 20:26
  • @smci Yes, sorry, that's what I meant. I tried Unicode first, then some of the other options. Here's a [paste](https://pastebin.com/raw/efUibhWc) if you want to take a look. The formatting is messed up on rows which have a superscript. Trying to fix it will result in the column separators being out of alignment ever so slightly. – cs95 Jun 30 '19 at 20:28
  • cs95 Please just post (malformed) table already, we (or others) can finesse the appearance. If HTML `1` doesn't render great, you can use Unicode superscript chars ⁰¹²³⁴⁵⁶⁷⁸⁹ , or else fallback on `[1], [2], ...` or `[a], [b], ...` – smci Jun 30 '19 at 20:32
  • @smci I've edited. There's a few more posts like this with image tables, I'll convert them as necessary. – cs95 Jun 30 '19 at 20:40
  • Note also that the ```float_precision``` option only exists for the C engine, so there can be problems if you have a .csv file where for some things you'd need the python engine (e.g. to handle multi-character separators) and for others you'd need the C engine. – Holzner Sep 05 '19 at 14:10
  • What if the csv file is comma-separated "," and one of the columns contains two words (for example: first_name, last_name) separated by a comma "," too? – Ali F Apr 29 '22 at 14:18
10

Here's an alternative to the pandas library, using Python's built-in csv module.

import csv
from pprint import pprint

with open('value.txt', 'r', newline='') as f:
    reader = csv.reader(f)
    headers = next(reader)              # the first row holds the column names
    column = {h: [] for h in headers}   # one list of values per column
    for row in reader:
        for h, v in zip(headers, row):
            column[h].append(v)
    pprint(column)    # Pretty printer

will print

{'Date': ['2012-06-11',
          '2012-06-12',
          '2012-06-13',
          '2012-06-14',
          '2012-06-15',
          '2012-06-16',
          '2012-06-17'],
 'factor_1': ['1.255', '1.258', '1.249', '1.253', '1.258', '1.263', '1.264'],
 'factor_2': ['1.548', '1.554', '1.552', '1.556', '1.552', '1.558', '1.572'],
 'price': ['1600.20',
           '1610.02',
           '1618.07',
           '1624.40',
           '1626.15',
           '1626.15',
           '1626.15']}
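
Note that, unlike pandas, everything comes back as strings. If you want to run statistics on a column you have to convert it yourself; a minimal sketch using the standard-library statistics module:

from statistics import mean, stdev

prices = [float(p) for p in column['price']]   # csv gives strings, convert to numbers
print(mean(prices), stdev(prices))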
siddharthlatest
  • 2,173
  • 1
  • 19
  • 23
7
import pandas as pd
df = pd.read_csv('/PathToFile.txt', sep=',')

This will import your .txt or .csv file into a DataFrame.

Rishabh
  • 311
  • 3
  • 3
6

Try this:

import pandas as pd
data=pd.read_csv('C:/Users/Downloads/winequality-red.csv')

Replace the file path with the location of your data set on disk; for more background, refer to this url https://medium.com/@kanchanardj/jargon-in-python-used-in-data-science-to-laymans-language-part-one-12ddfd31592f
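
Note that the forward slashes in the path above work fine on Windows too. If you prefer backslashes, escape them or use a raw string so Python doesn't treat them as escape sequences (paths here are just illustrative):

import pandas as pd

data = pd.read_csv(r'C:\Users\Downloads\winequality-red.csv')    # raw string
data = pd.read_csv('C:\\Users\\Downloads\\winequality-red.csv')  # escaped backslashes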

Dulangi_Kanchana
  • 951
  • 7
  • 17
1
%cd C:\Users\asus\Desktop\python
import pandas as pd
df = pd.read_csv('value.txt')
df.head()
         Date    price  factor_1  factor_2
0  2012-06-11  1600.20     1.255     1.548
1  2012-06-12  1610.02     1.258     1.554
2  2012-06-13  1618.07     1.249     1.552
3  2012-06-14  1624.40     1.253     1.556
4  2012-06-15  1626.15     1.258     1.552
chahat
  • 21
  • 5
0

You can use the csv module found in the Python standard library to manipulate CSV files.

example:

import csv

with open('some.csv', 'r', newline='') as f:
    reader = csv.reader(f)
    for row in reader:
        print(row)    # each row is a list of strings
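
If you'd rather have each row keyed by column name (a bit closer to how a DataFrame row feels), the same module provides csv.DictReader; a minimal sketch using the asker's value.txt:

import csv

with open('value.txt', 'r', newline='') as f:
    for row in csv.DictReader(f):
        # each row is a dict, e.g. {'Date': '2012-06-11', 'price': '1600.20', ...}
        print(row['Date'], row['price'])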
KurzedMetal
  • 12,066
  • 5
  • 38
  • 64
-1

Not quite as clean, but:

import csv

with open("value.txt", "r", newline="") as f:
    csv_reader = csv.reader(f)
    num = '  '                       # blank label so the header row prints without a number
    for row in csv_reader:
        print(num, '\t'.join(row))
        if num == '  ':              # after the header row, switch to numeric row labels
            num = 0
        num = num + 1

Not as compact, but it does the job:

   Date price   factor_1    factor_2
1 2012-06-11    1600.20 1.255   1.548
2 2012-06-12    1610.02 1.258   1.554
3 2012-06-13    1618.07 1.249   1.552
4 2012-06-14    1624.40 1.253   1.556
5 2012-06-15    1626.15 1.258   1.552
6 2012-06-16    1626.15 1.263   1.558
7 2012-06-17    1626.15 1.264   1.572
Lee-Man
  • 358
  • 1
  • 8
-2
import pandas as pd    
dataset = pd.read_csv('/home/nspython/Downloads/movie_metadata1.csv')
Samsul Islam
  • 2,496
  • 2
  • 16
  • 21