# Curl Completion Client
Refer to the `trtllm-serve` documentation for starting a server.
Source: NVIDIA/TensorRT-LLM.
```shell
#! /usr/bin/env bash

curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "TinyLlama-1.1B-Chat-v1.0",
        "prompt": "Where is New York?",
        "max_tokens": 16,
        "temperature": 0
    }'
```
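The same request can be issued from Python using only the standard library. The sketch below is an assumption-laden companion to the curl example, not part of the official client: it presumes a `trtllm-serve` instance is listening on `localhost:8000` and that the response follows the OpenAI-style completions schema (`choices[0].text`). The helper names `build_payload` and `complete` are hypothetical.

```python
# Minimal Python equivalent of the curl request above (a sketch; assumes
# a trtllm-serve instance is listening on localhost:8000).
import json
import urllib.request

COMPLETIONS_URL = "http://localhost:8000/v1/completions"


def build_payload(prompt: str, max_tokens: int = 16, temperature: float = 0) -> dict:
    """Assemble the OpenAI-style completion request body."""
    return {
        "model": "TinyLlama-1.1B-Chat-v1.0",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(prompt: str) -> str:
    """POST the payload and return the first generated text choice."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        COMPLETIONS_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]


if __name__ == "__main__":
    print(complete("Where is New York?"))
```

Setting `temperature` to 0 makes generation greedy, so repeated runs of the same prompt should return the same text.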