Curl Chat Client

Refer to the trtllm-serve documentation for starting a server.

Source: NVIDIA/TensorRT-LLM.

```shell
#! /usr/bin/env bash

curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "TinyLlama-1.1B-Chat-v1.0",
        "messages": [{"role": "system", "content": "You are a helpful assistant."},
                     {"role": "user", "content": "Where is New York?"}],
        "max_tokens": 16,
        "temperature": 0
    }'
```
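The same OpenAI-compatible chat-completions request can be issued from Python. The sketch below uses only the standard library's `urllib`; the URL, model name, and request body mirror the shell example above, and the response parsing assumes the standard OpenAI-style `choices[0].message.content` layout.

```python
import json
from urllib import request

URL = "http://localhost:8000/v1/chat/completions"  # same endpoint as the curl example


def build_payload(prompt: str) -> dict:
    # Identical JSON body to the one the curl command sends.
    return {
        "model": "TinyLlama-1.1B-Chat-v1.0",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 16,
        "temperature": 0,
    }


def chat(prompt: str, url: str = URL) -> str:
    # POST the JSON payload and return the first choice's message text.
    req = request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a server running as described in the trtllm-serve documentation, `chat("Where is New York?")` returns the model's reply as a string.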