Dear Pandas-ai team,

We are currently encountering the issue noted in #82. Specifically, we are working with two large dataframes: one with 1992 rows and 3000 columns, and another with 3000 rows and 400 columns. Our Large Language Model (LLM) is producing errors indicating that we have exceeded the maximum number of tokens it can process. Here is the specific error message we are receiving:

The size of these dataframes appears to be causing the LLM to hit its token limit, resulting in processing failures. Is there any way to address this within the pandas-ai framework? Specifically, we are interested in a solution that splits the dataframes into smaller chunks or reduces the set of columns sent to the model. Any guidance or recommendations on how to handle this effectively would be greatly appreciated.

Thank you
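To make the request concrete, here is a minimal sketch of the kind of pre-processing we have in mind, using plain pandas only. The column names, chunk size, and the `select_relevant_columns` / `chunk_rows` helpers are hypothetical, and the final hand-off to pandas-ai is not shown, since it depends on which pandas-ai version and dataframe wrapper is in use:

```python
import pandas as pd

def select_relevant_columns(df: pd.DataFrame, keep: list) -> pd.DataFrame:
    """Keep only the columns a given question actually needs, shrinking the prompt."""
    return df[keep]

def chunk_rows(df: pd.DataFrame, chunk_size: int):
    """Yield row-wise chunks so each piece stays within the model's token budget."""
    for start in range(0, len(df), chunk_size):
        yield df.iloc[start:start + chunk_size]

# Scaled-down stand-in for the 1992 x 3000 dataframe described above
wide = pd.DataFrame({f"col_{i}": range(100) for i in range(30)})

# 1) Drop columns irrelevant to the question (hypothetical column names)
reduced = select_relevant_columns(wide, keep=["col_0", "col_1", "col_2"])

# 2) Split the remaining rows into manageable chunks
for piece in chunk_rows(reduced, chunk_size=25):
    # Each `piece` would then be handed to pandas-ai separately,
    # wrapped in whatever dataframe object the installed version expects.
    print(piece.shape)
```

Is this kind of chunking/column selection something pandas-ai could support natively, or is pre-processing on our side the recommended approach?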