Scalability issues #1726

aghozlane started this conversation in General
Apr 25, 2025 · 0 comments
Dear Pandas-ai team,

We are currently encountering an issue noted in #82. Specifically, we are working with two large dataframes: one with 1992 rows and 3000 columns, and another with 3000 rows and 400 columns. Our Large Language Model (LLM) returns errors indicating that we have exceeded the maximum number of tokens it can process.

Here is the specific error message we are receiving:

BadRequestError: Error code: 400 - {'error': {'message': 'litellm.BadRequestError: OpenAIException - max_tokens must be at least 1, got -3022.. Received Model Group=pixtral-large-2411-local\nAvailable Model Group Fallbacks=None', 'type': None, 'param': None, 'code': '400'}}

The size of these dataframes appears to push the prompt past the model's context window: the negative max_tokens value in the error (-3022) suggests that the serialized dataframes alone already exceed the limit, leaving no token budget for the completion and resulting in processing failures.
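For reference, here is a minimal sketch of how we estimate the token footprint of the serialized dataframe heads before they are sent to the model. The tokenizer is an approximation (cl100k_base rather than the actual pixtral-large-2411 tokenizer), and the dataframe names are placeholders for our real data:

```python
import pandas as pd
import tiktoken

# Approximate tokenizer; pixtral-large-2411 uses its own tokenizer,
# so cl100k_base only gives a rough estimate of prompt size.
enc = tiktoken.get_encoding("cl100k_base")

def estimate_tokens(df: pd.DataFrame, n_head: int = 5) -> int:
    """Rough token count of the dataframe head that would be serialized into the prompt."""
    serialized = df.head(n_head).to_csv(index=False)
    return len(enc.encode(serialized))

# Placeholder dataframes matching the shapes described above.
df_wide = pd.DataFrame(0, index=range(1992), columns=[f"c{i}" for i in range(3000)])
df_long = pd.DataFrame(0, index=range(3000), columns=[f"c{i}" for i in range(400)])

print("wide head tokens:", estimate_tokens(df_wide))
print("long head tokens:", estimate_tokens(df_long))
```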

Is there any way to address this issue within the pandas-ai framework? Specifically, we would like to either split the dataframes into smaller chunks or restrict the set of columns that is sent to the model (a rough sketch of what we have in mind follows below). Any guidance or recommendations on how to handle this effectively would be greatly appreciated.
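The sketch below is plain pandas only, assuming we would wrap each reduced frame with pandas-ai afterwards as usual; the column names, chunk size, and helper functions are placeholders, not an existing pandas-ai feature:

```python
import pandas as pd

def select_columns(df: pd.DataFrame, keep: list[str]) -> pd.DataFrame:
    """Keep only the columns that are relevant to the current question."""
    return df.loc[:, keep]

def iter_column_chunks(df: pd.DataFrame, chunk_size: int = 200):
    """Yield column-wise chunks so each piece stays within the token budget."""
    for start in range(0, df.shape[1], chunk_size):
        yield df.iloc[:, start:start + chunk_size]

# Placeholder frame matching the 1992 x 3000 case described above.
df_wide = pd.DataFrame(0, index=range(1992), columns=[f"c{i}" for i in range(3000)])

for i, chunk in enumerate(iter_column_chunks(df_wide, chunk_size=200)):
    # Each chunk (1992 rows x <=200 columns) would then be wrapped and
    # queried through pandas-ai separately, instead of the full frame.
    print(i, chunk.shape)
```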

Thank you

