Commit 6de58d3 (parent 46daf5d)

Commit message: readme

2 files changed, 10 insertions(+), 0 deletions(-)

mftcoder_accelerate/README.md

Lines changed: 5 additions & 0 deletions

@@ -345,6 +345,11 @@ Frequently used arguments are provided in ```configs/***_train_config``` and exp
 - **coba_update_interval**: The frequency at which CoBa updates weights. It is commonly set to 1, meaning weights are updated at every step.
 - **coba_sample_valid_num**: The number of validation batches to be sampled by CoBa at each step. Theoretically, when this value equals the total number of validation batches, the fitted convergence slope most closely approximates the actual situation. However, considering computational requirements, it is recommended to set it to 1.
 
+#### DPO Arguments Configuration
+- **xxpo**: Preference optimization type, "dpo" or "orpo".
+- **beta**: DPO beta; a smaller beta allows a larger distance between the DPO model and the reference model.
+- **rpo_alpha**: The coefficient of the ```chosen``` NLL loss added to the DPO loss.
+
 ## 4. Model Usage
 
 ### 4.1 Merge Adaptor weights
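Taken together, the CoBa and DPO arguments above are simply keys in the train configs under ```configs/```. Below is a minimal sketch of such a config fragment, written from Python only so the fields can carry inline comments: the key names follow the arguments documented in this diff, while the concrete values (beta = 0.1, rpo_alpha = 0.0) and the output file name are illustrative assumptions, not values taken from the repository.

```python
import json

# Hypothetical fragment of an MFTCoder *_train_config for a DPO run.
# Only the CoBa and DPO arguments documented above are shown; a real config
# under configs/ carries many more fields (model paths, batch size, lr, ...).
train_config = {
    # CoBa arguments
    "coba_update_interval": 1,   # update CoBa weights at every step
    "coba_sample_valid_num": 1,  # sample 1 validation batch per step

    # DPO arguments introduced by this commit
    "xxpo": "dpo",               # preference optimization type: "dpo" or "orpo"
    "beta": 0.1,                 # smaller beta -> DPO model may drift further from the ref model
    "rpo_alpha": 0.0,            # weight of the chosen-response NLL loss; 0 keeps plain DPO
}

# The train configs under configs/ are JSON files, so write the fragment out as JSON.
with open("dpo_train_config.json", "w") as f:
    json.dump(train_config, f, indent=4, ensure_ascii=False)
```

Per the argument descriptions, setting ```rpo_alpha``` to 0 reduces the objective to plain DPO, and lowering ```beta``` loosens the constraint keeping the DPO model close to the reference model.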

mftcoder_accelerate/README_cn.md

Lines changed: 5 additions & 0 deletions

@@ -287,6 +287,11 @@ _**The training arguments are configured in ```configs/*_train_config```; the main parameters
 - **coba_update_interval**: The frequency at which CoBa updates weights. It is generally set to 1, i.e., the weights are updated at every step.
 - **coba_sample_valid_num**: The number of validation batches CoBa samples at each step. In theory, the fitted convergence slope best approximates the real situation when this value equals the total number of validation batches, but considering the computational cost, it is recommended to set it to 1.
 
+#### DPO Arguments Configuration
+- **xxpo**: Preference alignment method, "dpo" or "orpo".
+- **beta**: DPO beta; the smaller the beta, the farther the aligned DPO model is allowed to drift from the reference model.
+- **rpo_alpha**: The coefficient of the ```chosen``` NLL loss added to the DPO loss; 0 corresponds to the original DPO.
+-
 ## 4. Model Usage
 
 ### 4.1 Weight Merging

0 commit comments