
I am trying to read a large CSV file (approx. 6 GB) in pandas and I am getting a memory error:

MemoryError                               Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')
...
MemoryError:

Any help on this?

asked Sep 21, 2014 at 17:46 by Rajkumar Kumawat

16 Answers


The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the entire dataset in memory all at one time, one way to avoid the problem would be to process the CSV in chunks (by specifying the chunksize parameter):

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    # chunk is a DataFrame. To "process" the rows in the chunk:
    for index, row in chunk.iterrows():
        print(row)

The chunksize parameter specifies the number of rows per chunk. (The last chunk may contain fewer than chunksize rows, of course.)


pandas >= 1.2

read_csv with chunksize returns a context manager, to be used like so:

chunksize = 10 ** 6
with pd.read_csv(filename, chunksize=chunksize) as reader:
    for chunk in reader:
        process(chunk)

See GH38225.

answered Sep 21, 2014 at 17:54 by unutbu

21 Comments

You generally need 2X the final memory to read something in (from CSV, though other formats are better at having lower memory requirements). FYI, this is true for trying to do almost anything all at once. Much better to chunk it (which has constant memory usage).
@altabq: The problem here is that we don't have enough memory to build a single DataFrame holding all the data. The solution above tries to cope with this situation by reducing the chunks (e.g. by aggregating or extracting just the desired information) one chunk at a time -- thus saving memory. Whatever you do, DO NOT call DF.append(chunk) inside the loop. That will use O(N^2) copying operations. It is better to append the aggregated data to a list, and then build the DataFrame from the list with one call to pd.DataFrame or pd.concat (depending on the type of aggregated data).
@altabq: Calling DF.append(chunk) in a loop requires O(N^2) copying operations where N is the size of the chunks, because each call to DF.append returns a new DataFrame. Calling pd.DataFrame or pd.concat once outside the loop reduces the amount of copying to O(N).
@Pyderman: Yes, the chunksize parameter refers to the number of rows per chunk. The last chunk may contain fewer than chunksize rows, of course.
@Pyderman: Yes; calling pd.concat([list_of_dfs]) once after the loop is much faster than calling pd.concat or df.append many times within the loop. Of course, you'll need a considerable amount of memory to hold the entire 6 GB CSV as one DataFrame.
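A minimal sketch of the pattern recommended in these comments, assuming a hypothetical file with columns 'key' and 'value' (the aggregation itself is just an example):

import pandas as pd

chunksize = 10 ** 6
partial_sums = []  # collect small per-chunk aggregates, not the raw chunks

for chunk in pd.read_csv('data.csv', chunksize=chunksize):
    # hypothetical per-chunk reduction: sum of 'value' per 'key'
    partial_sums.append(chunk.groupby('key')['value'].sum())

# a single concat after the loop: O(N) copying instead of O(N^2)
totals = pd.concat(partial_sums).groupby(level=0).sum()
print(totals)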

Chunking shouldn't always be the first port of call for this problem.

  1. Is the file large due to repeated non-numeric data or unwanted columns?

    If so, you can sometimes see massive memory savings by reading in columns as categories and selecting required columns via the pd.read_csv usecols parameter (see the sketch after this list).

  2. Does your workflow require slicing, manipulating, exporting?

    If so, you can use dask.dataframe to slice, perform your calculations and export iteratively. Chunking is performed silently by dask, which also supports a subset of the pandas API.

  3. If all else fails, read line by line via chunks.

    Chunk via pandas or via the csv library as a last resort.
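A minimal sketch of point 1 above, assuming a hypothetical file where only two of many columns are needed and 'city' contains heavily repeated strings:

import pandas as pd

df = pd.read_csv(
    'big.csv',                    # hypothetical file
    usecols=['city', 'value'],    # drop unwanted columns at read time
    dtype={'city': 'category'},   # repeated strings are stored once each
)
print(df.memory_usage(deep=True))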

answered Jan 23, 2018 at 17:45 by jpp

2 Comments

It looks like chunks has the same meaning as "the number of lines", right?
@Belter, ..yes.

For large data I recommend you use the library "dask", e.g.:

# Dataframes implement the Pandas API
import dask.dataframe as dd

df = dd.read_csv('s3://.../2018-*-*.csv')

You can read more from the documentation here.

Another great alternative would be to use modin, because all the functionality is identical to pandas yet it leverages distributed dataframe libraries such as dask.
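A minimal sketch of the modin route, assuming a hypothetical file path (modin exposes a pandas-compatible namespace, so the read_csv call itself is unchanged):

import modin.pandas as pd  # drop-in replacement for the pandas namespace

df = pd.read_csv('large_data.csv')  # hypothetical path; same signature as pandas.read_csv
print(df.head())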

From my projects, another superior library is datatable.

# Datatable python library
import datatable as dt

df = dt.fread("s3://.../2018-*-*.csv")
answered Apr 17, 2018 at 11:21 by Simbarashe Timothy Motsi

6 Comments

Any benefits over pandas? Would appreciate adding a few more pointers.
I haven't used Dask for very long, but the main advantages in my use cases were that Dask can run in parallel on multiple machines, and it can also fit data as slices into memory.
Thanks! Is dask a replacement for pandas, or does it work on top of pandas as a layer?
Welcome, it works as a wrapper for Numpy, Pandas, and Scikit-Learn.
I've tried to tackle several problems with Dask and it always throws an error for everything. Even with chunks it throws memory errors too. See stackoverflow.com/questions/59865572/…

I proceeded like this:

chunks = pd.read_table('aphro.csv', chunksize=1000000, sep=';',
                       names=['lat', 'long', 'rf', 'date', 'slno'], index_col='slno',
                       header=None, parse_dates=['date'])

df = pd.DataFrame()
%time df = pd.concat(chunk.groupby(['lat', 'long', chunk['date'].map(lambda x: x.year)])['rf'].agg(['sum']) for chunk in chunks)
answered Sep 24, 2014 at 12:46 by Rajkumar Kumawat

1 Comment

Is there a reason you switched from read_csv to read_table?

You can read in the data as chunks and save each chunk as pickle.

import pandas as pd
import pickle

in_path = ""  # Path where the large file is
out_path = ""  # Path to save the pickle files to
chunk_size = 400000  # size of chunks relies on your available memory
separator = "~"

reader = pd.read_csv(in_path, sep=separator, chunksize=chunk_size,
                     low_memory=False)

for i, chunk in enumerate(reader):
    out_file = out_path + "/data_{}.pkl".format(i+1)
    with open(out_file, "wb") as f:
        pickle.dump(chunk, f, pickle.HIGHEST_PROTOCOL)

In the next step you read in the pickles and append each pickle to your desired dataframe.

import glob

pickle_path = ""  # Same path as out_path, i.e. where the pickle files are

data_p_files = []
for name in glob.glob(pickle_path + "/data_*.pkl"):
    data_p_files.append(name)

# read each pickle back and concatenate once (avoids the O(N^2) cost of
# appending in a loop; DataFrame.append was removed in pandas 2.0)
df = pd.concat((pd.read_pickle(f) for f in data_p_files), ignore_index=True)
answered Oct 24, 2018 at 8:40 by Lukas Humpe

2 Comments

If your final df fits entirely in memory (as implied) and contains the same amount of data as your input, surely you don't need to chunk at all?
You would need to chunk in this case if, for example, your file is very wide (like greater than 100 columns with a lot of string columns). This increases the memory needed to hold the df in memory. Even a 4GB file like this could end up using between 20 and 30 GB of RAM on a box with 64 GB RAM.

I want to make a more comprehensive answer based on most of the potential solutions that are already provided. I also want to point out one more potential aid that may help the reading process.

Option 1: dtypes

"dtypes" is a pretty powerful parameter that you can use to reduce the memory pressure ofread methods. Seethis andthis answer. Pandas, on default, try to infer dtypes of the data.

Referring to data structures, for every piece of data stored, a memory allocation takes place. At a basic level, refer to the values below (the table illustrates values for the C programming language):

The maximum value of UNSIGNED CHAR = 255
The minimum value of SHORT INT = -32768
The maximum value of SHORT INT = 32767
The minimum value of INT = -2147483648
The maximum value of INT = 2147483647
The minimum value of CHAR = -128
The maximum value of CHAR = 127
The minimum value of LONG = -9223372036854775808
The maximum value of LONG = 9223372036854775807

Refer to this page to see the matching between NumPy and C types.

Let's say you have an array of integers made of digits. You could both theoretically and practically assign it, say, a 16-bit integer type, but you would then allocate more memory than you actually need to store that array. To prevent this, you can set the dtype option on read_csv. You do not want to store the array items as long integers where you can actually fit them into an 8-bit integer (np.int8 or np.uint8).

Observe the following dtype map.

(dtype map image) Source: https://pbpython.com/pandas_dtypes.html

You can pass the dtype parameter to pandas read methods as a dict, like {column: type}.

import numpy as np
import pandas as pd

df_dtype = {
        "column_1": int,
        "column_2": str,
        "column_3": np.int16,
        "column_4": np.uint8,
        ...
        "column_n": np.float32
}

df = pd.read_csv('path/to/file', dtype=df_dtype)

Option 2: Read by Chunks

Reading the data in chunks allows you to access a part of the data in memory, and you can apply preprocessing to your data and preserve the processed data rather than the raw data. It'd be much better if you combine this option with the first one, dtypes; a sketch follows below.

I want to point out the pandas cookbook sections for that process; you can find them here.
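As an illustration, a minimal sketch combining Option 1 and Option 2; the column names, dtypes, and filter step are hypothetical:

import numpy as np
import pandas as pd

dtypes = {"store_id": np.int32, "amount": np.float32}  # hypothetical columns

reduced = []
for chunk in pd.read_csv('path/to/file', dtype=dtypes, chunksize=10 ** 6):
    # keep only the preprocessed result of each chunk rather than the raw rows
    reduced.append(chunk[chunk["amount"] > 0])

df = pd.concat(reduced, ignore_index=True)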

Option 3: Dask

Dask is a framework that is defined on Dask's website as:

Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love

It was born to cover the parts that pandas cannot reach. Dask is a powerful framework that gives you much more data access by processing it in a distributed way.

You can use Dask to preprocess your data as a whole; Dask takes care of the chunking part, so unlike pandas you can just define your processing steps and let Dask do the work. Dask does not apply the computations before they are explicitly pushed by compute and/or persist (see the answer here for the difference).
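A minimal sketch of that lazy flow, with a hypothetical file and columns:

import dask.dataframe as dd

ddf = dd.read_csv('large_data.csv')          # lazy: nothing is loaded yet
result = ddf.groupby('key')['value'].mean()  # still lazy: only the task graph is built

print(result.compute())                      # work happens here, chunk by chunk / in parallel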

Other Aids (Ideas)

  • ETL flow designed for the data. Keeping only what is needed from the raw data (see the sketch after this list).
    • First, apply ETL to the whole data with frameworks like Dask or PySpark, and export the processed data.
    • Then see if the processed data can fit in memory as a whole.
  • Consider increasing your RAM.
  • Consider working with that data on a cloud platform.
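As a rough sketch of the first idea in plain pandas (file names, columns, and the transform step are hypothetical):

import pandas as pd

first = True
for chunk in pd.read_csv('raw.csv', usecols=['id', 'value'], chunksize=10 ** 6):
    cleaned = chunk.dropna()                       # hypothetical transform step
    cleaned.to_csv('processed.csv',
                   mode='w' if first else 'a',     # overwrite on the first chunk, append afterwards
                   header=first, index=False)
    first = False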
answered Mar 10, 2020 at 11:17 by null



Before using the chunksize option, if you want to be sure about the process function that you want to write inside the chunking for-loop (as mentioned by @unutbu), you can simply use the nrows option.

small_df = pd.read_csv(filename, nrows=100)

Once you are sure that the process block is ready, you can put it in the chunking for-loop for the entire dataframe (see the sketch below).
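A minimal sketch of that workflow (process, the column name, and the file path are hypothetical):

import pandas as pd

filename = 'large.csv'  # hypothetical path

def process(df):
    # hypothetical per-chunk logic being prototyped
    return df[df['value'] > 0]

small_df = pd.read_csv(filename, nrows=100)   # quick trial run on a small sample
print(process(small_df))

# once the logic works, apply it chunk by chunk to the full file
results = [process(chunk) for chunk in pd.read_csv(filename, chunksize=10 ** 6)]
df = pd.concat(results, ignore_index=True)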

answered Mar 18, 2020 at 19:57 by sam



The functions read_csv and read_table are almost the same. But you must assign the delimiter "," when you use the function read_table in your program.

import pandas as pd

def get_from_action_data(fname, chunk_size=100000):
    reader = pd.read_csv(fname, header=0, iterator=True)
    chunks = []
    loop = True
    while loop:
        try:
            chunk = reader.get_chunk(chunk_size)[["user_id", "type"]]
            chunks.append(chunk)
        except StopIteration:
            loop = False
            print("Iteration is stopped")
    df_ac = pd.concat(chunks, ignore_index=True)
    return df_ac
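As a small sketch of the delimiter point above (read_table defaults to a tab separator, so it must be told about commas explicitly; the file name is hypothetical):

import pandas as pd

# two equivalent ways to read a comma-separated file
df1 = pd.read_csv('data.csv')             # sep=',' is the default for read_csv
df2 = pd.read_table('data.csv', sep=',')  # read_table defaults to sep='\t', so override it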
answered Apr 26, 2017 at 15:02 by Tyrion W

2 Comments

It would help if you stated what your question is in this post. Like "What is the difference between read_csv and read_table?" or "Why does read_table need a delimiter?"
It depends how your file looks. Some files have common delimiters such as "," or "|" or "\t", but you may see other files with delimiters such as 0x01, 0x02 (making this one up), etc. So read_table is more suited to uncommon delimiters, but read_csv can do the same job just as well.
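A small sketch of the point in this comment, assuming a hypothetical file that uses the 0x01 byte as its delimiter:

import pandas as pd

# read_csv handles unusual single-character delimiters as well
df = pd.read_csv('exotic_delimiter.dat', sep='\x01')  # hypothetical file separated by 0x01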

Solution 1:

Using pandas with large data

Solution 2:

TextFileReader = pd.read_csv(path, chunksize=1000)  # the number of rows per chunk

dfList = []
for df in TextFileReader:
    dfList.append(df)

df = pd.concat(dfList, sort=False)
answered Dec 5, 2018 at 8:25 by blacksheep

2 Comments

Here again we are loading the 6 GB file entirely into memory. Are there any options where we can process the current chunk and then read the next chunk?
Just don't do dfList.append; just process each chunk (df) separately.

Here follows an example:

chunkTemp = []
queryTemp = []
query = pd.DataFrame()

for chunk in pd.read_csv(file, header=0, chunksize=<your_chunksize>, iterator=True, low_memory=False):

    #REPLACING BLANK SPACES AT COLUMNS' NAMES FOR SQL OPTIMIZATION
    chunk = chunk.rename(columns = {c: c.replace(' ', '') for c in chunk.columns})

    #YOU CAN EITHER:
    #1)BUFFER THE CHUNKS IN ORDER TO LOAD YOUR WHOLE DATASET
    chunkTemp.append(chunk)

    #2)DO YOUR PROCESSING OVER A CHUNK AND STORE THE RESULT OF IT
    query = chunk[chunk[<column_name>].str.startswith(<some_pattern>)]

    #BUFFERING PROCESSED DATA
    queryTemp.append(query)

#!  NEVER DO pd.concat OR pd.DataFrame() INSIDE A LOOP
print("Database: CONCATENATING CHUNKS INTO A SINGLE DATAFRAME")
chunk = pd.concat(chunkTemp)
print("Database: LOADED")

#CONCATENATING PROCESSED DATA
query = pd.concat(queryTemp)
print(query)
answered May 27, 2019 at 6:12 by jonathask



You can try sframe, which has the same syntax as pandas but allows you to manipulate files that are bigger than your RAM.

answered Jan 7, 2017 at 13:22 by nunodsousa

2 Comments

"The data in SFrame is stored column-wise on the GraphLab Server side" is it a service or a package?

If you use pandas to read a large file in chunks and then yield row by row, here is what I have done:

import pandas as pd

def chunck_generator(filename, header=False, chunk_size=10 ** 5):
    for chunk in pd.read_csv(filename, delimiter=',', iterator=True, chunksize=chunk_size, parse_dates=[1]):
        yield chunk

def _generator(filename, header=False, chunk_size=10 ** 5):
    chunk = chunck_generator(filename, header=False, chunk_size=10 ** 5)
    for row in chunk:
        yield row

if __name__ == "__main__":
    filename = r'file.csv'
    generator = _generator(filename=filename)  # was generator(...), which is undefined
    while True:
        print(next(generator))
answered Nov 13, 2017 at 5:34 by paulg



In case someone is still looking for something like this, I found that this new library called modin can help. It uses distributed computing that can help with the read. Here's a nice article comparing its functionality with pandas. It essentially uses the same functions as pandas.

import modin.pandas as pd

pd.read_csv(CSV_FILE_NAME)
answered Apr 11, 2019 at 4:57 by Jaskaran

1 Comment

Can you comment on how this new module modin compares with the well-established dask.dataframe? For example, see move from pandas to dask to utilize all local CPU cores.

In addition to the answers above, for those who want to process CSV and then export to csv, parquet or SQL, d6tstack is another good option. You can load multiple files and it deals with data schema changes (added/removed columns). Chunked out-of-core support is already built in.

import glob
import d6tstack.combine_csv

def apply(dfg):
    # do stuff
    return dfg

c = d6tstack.combine_csv.CombinerCSV(['bigfile.csv'], apply_after_read=apply, sep=',', chunksize=1e6)
# or
c = d6tstack.combine_csv.CombinerCSV(glob.glob('*.csv'), apply_after_read=apply, chunksize=1e6)

# output to various formats, automatically chunked to reduce memory consumption
c.to_csv_combine(filename='out.csv')
c.to_parquet_combine(filename='out.pq')
c.to_psql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename')  # fast for postgres
c.to_mysql_combine('mysql+mysqlconnector://usr:pwd@localhost/db', 'tablename')  # fast for mysql
c.to_sql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename')  # slow but flexible
answered Oct 14, 2018 at 22:44 by citynorman



If you have a CSV file with millions of data entries and you want to load the full dataset, you should use dask_cudf:

import dask_cudf as dc

df = dc.read_csv("large_data.csv")
answered Jul 31, 2021 at 16:09 by Sudhanshu


def read_csv_with_progress(file_path, sep):
    import os
    import pandas as pd
    from tqdm import tqdm

    chunk_size = 50000  # Number of lines to read in each iteration

    # Estimate the total number of lines in the CSV file
    print("Calculating average line length + getting file size")
    counter = 0
    total_length = 0
    num_to_sample = 10
    for line in open(file_path, 'r'):
        counter += 1
        if counter > 1:
            total_length += len(line)
        if counter == num_to_sample + 1:
            break
    file_size = os.path.getsize(file_path)
    avg_line_length = total_length / num_to_sample
    avg_number_of_lines = int(file_size / avg_line_length)

    chunks = []
    with tqdm(total=avg_number_of_lines, desc='Reading CSV') as pbar:
        for chunk in pd.read_csv(file_path, chunksize=chunk_size, low_memory=False, sep=sep):
            chunks.append(chunk)
            pbar.update(chunk.shape[0])

    print("Concating...")
    df = pd.concat(chunks, ignore_index=True)
    return df
answered Aug 13, 2023 at 10:10 by Nathan G

