`docs/source/blogs/Best_perf_practice_on_DeepSeek-R1_in_TensorRT-LLM.md` (3 additions, 2 deletions)
```diff
@@ -412,9 +412,10 @@ Generally, you should make sure that `max_batch_size` is not too low to bottlene
 
 For more details on `max_batch_size` and `max_num_tokens`, refer to [Tuning Max Batch Size and Max Num Tokens](../performance/performance-tuning-guide/tuning-max-batch-size-and-max-num-tokens.md).
 
-### Not supported: MLA chunked context support on Hopper
+### MLA chunked context
+
+MLA currently supports the chunked context feature on both Hopper and Blackwell GPUs. You can use `--enable_chunked_context` to enable it. This feature is primarily designed to reduce TPOT (Time Per Output Token). The default chunk size is set to `max_num_tokens`. If you want to achieve a lower TPOT, you can reduce the chunk size accordingly; note, however, that this will also decrease overall throughput, so a trade-off needs to be considered.
 
-MLA chunked context support has been added on Blackwell GPUs, while it's not supported on Hopper yet. On Hopper, note that `max_num_tokens` has to be at least larger than the max input sequence length of the samples in the dataset.
 
 For more details on `max_num_tokens`, refer to [Tuning Max Batch Size and Max Num Tokens](../performance/performance-tuning-guide/tuning-max-batch-size-and-max-num-tokens.md).
```
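The trade-off described in the new paragraph (a chunk size below the `max_num_tokens` default lowers TPOT but also lowers throughput) can be sketched as a small helper. This is purely illustrative: `pick_context_chunk_size`, the `fraction` parameter, and the assumed `tokens_per_block` default of 64 are not part of the TensorRT-LLM API.

```python
def pick_context_chunk_size(max_num_tokens: int,
                            fraction: float = 0.5,
                            tokens_per_block: int = 64) -> int:
    """Hypothetical helper: pick a context chunk size smaller than the
    default (max_num_tokens) to trade some throughput for lower TPOT.

    The result is rounded down to a multiple of tokens_per_block so it
    stays aligned with the KV cache block size; 64 is an assumed value.
    """
    target = max(int(max_num_tokens * fraction), tokens_per_block)
    # Round down to the nearest multiple of tokens_per_block.
    return (target // tokens_per_block) * tokens_per_block


# The default chunk size equals max_num_tokens; halving or quartering it
# targets lower TPOT at some throughput cost.
print(pick_context_chunk_size(8192))        # 4096
print(pick_context_chunk_size(8192, 0.25))  # 2048
```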
`examples/models/core/deepseek_v3/README.md` (1 addition, 1 deletion)
```diff
@@ -786,7 +786,7 @@ The converted checkpoint could be used as `<YOUR_MODEL_DIR>` and consumed by oth
 KV cache reuse is supported for MLA on SM90 and SM100. It is enabled by default. Due to extra operations like memcpy and GEMMs, GPU memory consumption may be higher and the E2E performance may regress in some cases. Users could pass `KvCacheConfig(enable_block_reuse=False)` to the LLM API to disable it.
 
 ### Chunked Prefill
 
-Chunked Prefill is supported for MLA only on SM100 currently. You should add `--enable_chunked_prefill` to enable it. The GPU memory consumption is highly correlated with `max_num_tokens` and `max_batch_size`. If encountering out-of-memory errors, you may make these values smaller. (`max_num_tokens` must be divisible by the KV cache's `tokens_per_block`.)
+Chunked Prefill is supported for MLA only on SM90 and SM100 currently. You should add `--enable_chunked_prefill` to enable it. The GPU memory consumption is highly correlated with `max_num_tokens` and `max_batch_size`. If encountering out-of-memory errors, you may make these values smaller. (`max_num_tokens` must be divisible by the KV cache's `tokens_per_block`.)
 
 More specifically, we can imitate what we did in the [Quick Start](#quick-start):
```
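The parenthetical constraint in the changed line (`max_num_tokens` must be divisible by the KV cache's `tokens_per_block`) can be expressed as a standalone sanity check. The names mirror the README's settings, but this checker itself is an illustrative sketch, not a TensorRT-LLM API.

```python
def check_chunked_prefill_config(max_num_tokens: int,
                                 tokens_per_block: int) -> None:
    """Hypothetical checker for the constraint quoted above: raise if
    max_num_tokens is not a multiple of the KV cache's tokens_per_block."""
    if max_num_tokens % tokens_per_block != 0:
        raise ValueError(
            f"max_num_tokens ({max_num_tokens}) must be divisible by the "
            f"KV cache's tokens_per_block ({tokens_per_block})")


check_chunked_prefill_config(8192, 64)   # passes: 8192 = 128 * 64
try:
    check_chunked_prefill_config(1000, 64)
except ValueError as err:
    print(err)  # 1000 is not a multiple of 64
```

Running a check like this before launch is cheaper than hitting the failure inside the engine after model load.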