ray-project/llmval-legacy


LLMVal is a tool for validating and benchmarking LLMs.

Validation: we send a simple query to the LLM and ensure the returned data is valid. In particular, it checks for inter-request cross-over (request A getting the response for request B).
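
As a rough illustration of the cross-over check (not necessarily how LLMVal implements it), each request can embed a unique marker and every response can be checked against all markers; query_llm below is a hypothetical helper that sends one prompt and returns the response text:

    # Sketch of a cross-over check (not LLMVal's actual implementation).
    # Each request embeds a unique marker; a marker from request B showing up
    # in request A's response indicates cross-over. `query_llm` is a
    # hypothetical helper: it sends one prompt and returns the response text.
    import uuid

    def check_cross_over(query_llm, n_requests=5):
        markers = [uuid.uuid4().hex[:8] for _ in range(n_requests)]
        prompts = [f"Repeat this token exactly: {m}" for m in markers]
        responses = [query_llm(p) for p in prompts]
        for i, (marker, response) in enumerate(zip(markers, responses)):
            assert marker in response, f"request {i}: own marker {marker} missing"
            others = [m for m in markers if m != marker and m in response]
            assert not others, f"request {i}: cross-over with {others}"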

Benchmarking: LLMVal measures time to first token (TTFT), inter-token latency (ITL), and requests that take longer than 3 seconds to start returning data.
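
A minimal sketch of how TTFT and ITL can be derived from a streamed response, assuming a timestamp is recorded for each received chunk (illustrative only, not LLMVal's exact bookkeeping):

    # Illustrative sketch: derive TTFT and mean inter-token latency from a
    # streamed response, given any iterator that yields chunks as they arrive.
    # Assumes the stream yields at least one chunk.
    import time

    def measure_streaming(stream):
        start = time.monotonic()
        token_times = []
        for _chunk in stream:
            token_times.append(time.monotonic())
        ttft = token_times[0] - start                        # time to first token
        gaps = [b - a for a, b in zip(token_times, token_times[1:])]
        itl = sum(gaps) / len(gaps) if gaps else 0.0         # mean inter-token latency
        slow_start = ttft > 3.0                              # took >3 s to start returning data
        return ttft, itl, slow_start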

Variation in input and output token lengths is a design parameter, since the benchmark is intended to be representative. Some optimizations (e.g. continuous batching) are known to work better with varying input and output lengths.
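
For illustration, varied request shapes can be produced by sampling a per-request input line count and output token budget; the ranges below are made-up placeholders, not LLMVal's defaults:

    # Illustrative only: each request samples its own input line count and
    # output token budget, so one run mixes short and long requests. The
    # ranges are placeholders, not LLMVal's defaults.
    import random

    def sample_request_shape(min_lines=4, max_lines=12, min_tokens=64, max_tokens=512):
        return {
            "input_lines": random.randint(min_lines, max_lines),
            "max_tokens": random.randint(min_tokens, max_tokens),
        }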

Supported endpoints

Currently supported endpoints include:

  • Any OpenAI-compatible endpoint, including Anyscale Endpoints, Anyscale Private Endpoints, OpenAI, Fireworks, Perplexity, etc. (see the sketch after this list)
  • Together
  • Vertex AI
  • SageMaker
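
For the OpenAI-compatible case, "compatible" essentially means the provider exposes a /v1/chat/completions route. The snippet below is a hedged sketch of such a call; OPENAI_API_BASE and OPENAI_API_KEY are assumed environment variable names (check env_sample.txt for the actual ones), the base URL is assumed to already end in /v1, and the model name is just an example:

    # Hedged sketch of a chat completion call against an OpenAI-compatible
    # endpoint. OPENAI_API_BASE / OPENAI_API_KEY are assumed variable names,
    # and the base URL is assumed to already include the /v1 prefix.
    import os
    import requests

    resp = requests.post(
        f"{os.environ['OPENAI_API_BASE']}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "meta-llama/Llama-2-70b-chat-hf",
            "messages": [{"role": "user", "content": "Hello"}],
            "max_tokens": 16,
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])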

Please see requirements.txt for more details on dependency requirements.

Upcoming refactor

This is prototype code. We are currently refactoring the code to be more extensible (including pluggable endpoints, varying traffic loads, etc.).

In addition we plan to:

  • Make the benchmark runnable not only from the command line, but also easy to integrate into CI/CD or job scheduling systems.
  • Control where the generated files and information go.
  • Automate report generation.

We expect this refactor to be complete some time in November 2023.

A note on rate limits

Many LLM providers have extremely low rate limits by default (e.g. Perplexity allows 3 requests per 90 seconds).

You can use the sleep parameter to overcome these difficulties, but it does affect the representativeness of the results.

Other systems do not have rate limits, but we consider the system overloaded if the TTFT exceeds 3 seconds for more than 5% of queries.
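
The overload rule above amounts to a simple check over per-request TTFTs, sketched here for clarity (ttfts is assumed to be a list of TTFT values in seconds):

    # The overload rule: more than 5% of requests with TTFT above 3 seconds.
    # `ttfts` is assumed to be a list of per-request TTFT values in seconds.
    def is_overloaded(ttfts, threshold_s=3.0, max_slow_fraction=0.05):
        slow = sum(1 for t in ttfts if t > threshold_s)
        return slow / len(ttfts) > max_slow_fraction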

Default values

Default values are the ones that we use for testing Anyscale Endpoints. The distribution of inputs and outputs roughly mirrors the input and output patterns we see there.

We recommend setting the seed (or using the provided seed) to reduce variability while still having randomization.
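
The point of a fixed seed is that the sequence of sampled request sizes is reproducible across runs, while individual requests within a run still vary; a purely illustrative example:

    # Purely illustrative: a fixed seed makes the sampled sizes reproducible
    # across runs, while requests within a run still differ from each other.
    import random

    random.seed(42)
    print([random.randint(8, 10) for _ in range(5)])  # same varied values on every run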

Run python llmval.py --help to see all options.

Usage

  1. Provide the API base and key in a .env file. Check out env_sample.txt.
  2. Test out Anyscale Endpoints with the following command, which sends 20 requests:
    python llmval.py -r 20 -m "meta-llama/Llama-2-70b-chat-hf"
  3. Control the number of input tokens with --min-lines/--max-lines, and the number of output tokens with --req-lines and --max-tokens:
    python llmval.py -r 20 -f openai -m "gpt-3.5-turbo" --min-lines 8 --max-lines 10
    python llmval.py -r 20 -f openai -m "gpt-3.5-turbo" --req-lines 3 --max-tokens 128
  4. Control the sleep between rounds to avoid hitting rate limits:
    python llmval.py -r 20 -f fireworks -m "accounts/fireworks/models/llama-v2-70b-chat" --sleep 10
  5. Output will be saved at framework-timestamp.json and framework-timestamp_raw.json.
  6. Use Jupyter with analyze-raw.ipynb to visualize and/or interact with the raw data (a quick scripted alternative is sketched after this list).
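
As a quick-look alternative to the notebook, the raw results file can be loaded and summarized directly. The filename and the "ttft" field below are assumptions about the raw output schema, so adjust them to match what your run actually produced:

    # Quick-look summary of the raw output without the notebook. The filename
    # and the "ttft" field are assumptions about the raw schema; adjust to
    # match your actual *_raw.json.
    import json
    import statistics

    with open("openai-1700000000_raw.json") as f:   # placeholder filename
        records = json.load(f)

    ttfts = [r["ttft"] for r in records]
    print("requests: ", len(ttfts))
    print("mean TTFT:", statistics.mean(ttfts))
    print("p95 TTFT: ", sorted(ttfts)[int(0.95 * len(ttfts))])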
