Genai Perf Client

Refer to the trtllm-serve documentation for starting a server.

Source: NVIDIA/TensorRT-LLM.

```bash
#! /usr/bin/env bash

genai-perf profile \
    -m TinyLlama-1.1B-Chat-v1.0 \
    --tokenizer TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --endpoint-type chat \
    --random-seed 123 \
    --synthetic-input-tokens-mean 128 \
    --synthetic-input-tokens-stddev 0 \
    --output-tokens-mean 128 \
    --output-tokens-stddev 0 \
    --request-count 100 \
    --request-rate 10 \
    --profile-export-file my_profile_export.json \
    --url localhost:8000 \
    --streaming
```
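After the run completes, the file passed to `--profile-export-file` can be post-processed offline. The sketch below is illustrative only: the actual JSON schema written by genai-perf varies by version, so the `experiments`/`requests`/`request_latency_ns` keys used here are assumptions standing in for whatever structure your export actually contains. Inspect `my_profile_export.json` and adjust the key names before reusing this.

```python
import json
import tempfile
from statistics import mean

# Illustrative stand-in for the export file; the real schema written by
# --profile-export-file depends on the genai-perf version, so these keys
# are hypothetical.
sample_export = {
    "experiments": [
        {
            "requests": [
                {"request_latency_ns": 120_000_000},
                {"request_latency_ns": 95_000_000},
                {"request_latency_ns": 110_000_000},
            ]
        }
    ]
}

# Write the sample to disk so the parsing path mirrors reading a real export.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample_export, f)
    export_path = f.name

with open(export_path) as f:
    data = json.load(f)

# Flatten per-request latencies (ns) across experiments and convert to ms.
latencies_ms = [
    req["request_latency_ns"] / 1e6
    for experiment in data["experiments"]
    for req in experiment["requests"]
]
print(f"requests: {len(latencies_ms)}, "
      f"mean latency: {mean(latencies_ms):.1f} ms")
```

With `--request-count 100` and `--request-rate 10` as in the command above, a real run would produce roughly 100 request records collected over about ten seconds of load.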