For example, I have a file like the one below:
ex.txt
1 2
3 4 5
2
I want to read this file using pandas' read_table():
import pandas as pd
df = pd.read_table("ex.txt", sep=" ", header=None)
However, this code raises a tokenizing error. I want to keep all of the data, so I don't want to use error_bad_lines=False, which would just drop the problem rows.
I actually have 24 of these files, each over 1 GB in size, so I cannot fix them by hand.
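To be clear about the option I want to avoid, here is a minimal sketch (the exact spelling depends on the pandas version: error_bad_lines=False in older releases, on_bad_lines="skip" since 1.3). It drops the offending rows rather than keeping them, so the "3 4 5" line would be lost:

import pandas as pd
# Skipping bad lines drops the "3 4 5" row entirely, which is exactly what I don't want.
df = pd.read_table("ex.txt", sep=" ", header=None, on_bad_lines="skip")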
The result I'm expecting is:
1 2 NaN
3 4 5
2 NaN NaN
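For reference, the only workaround I can think of is a sketch along these lines (assuming the widest row's field count can be found with a cheap first pass over each file, and that the delimiter is always a single space): passing that many column names lets every row through and pads the short ones with NaN.

import pandas as pd

# First pass: find the maximum number of space-separated fields in the file
# (assumes a single-space delimiter, as in ex.txt).
with open("ex.txt") as f:
    max_cols = max(len(line.split(" ")) for line in f)

# Second pass: with enough column names, no row has "too many" fields,
# so nothing errors and short rows are padded with NaN.
df = pd.read_table("ex.txt", sep=" ", header=None, names=range(max_cols))
print(df)

For ex.txt this gives three columns (0, 1, 2) matching the output above, but reading each 1 GB file twice seems wasteful, so I'm not sure this is the right approach.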