4 changes: 3 additions & 1 deletion docs/source/Instruction/Supported-models-and-datasets.md
@@ -1137,6 +1137,7 @@
|-|default|huge dataset|-|pretrain, quality|[allenai/c4](https://huggingface.co/datasets/allenai/c4)|
|[bespokelabs/Bespoke-Stratos-17k](https://modelscope.cn/datasets/bespokelabs/Bespoke-Stratos-17k)|default|16710|480.7±236.1, min=266, max=3556|chat, sft, cot, r1|[bespokelabs/Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)|
|-|default|huge dataset|-|pretrain, quality|[cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B)|
|[clip-benchmark/wds_voc2007_multilabel](https://modelscope.cn/datasets/clip-benchmark/wds_voc2007_multilabel)|default|2501|112.0±0.0, min=112, max=112|multilabel, multi-modal|[clip-benchmark/wds_voc2007_multilabel](https://huggingface.co/datasets/clip-benchmark/wds_voc2007_multilabel)|
|[codefuse-ai/CodeExercise-Python-27k](https://modelscope.cn/datasets/codefuse-ai/CodeExercise-Python-27k)|default|27224|337.3±154.2, min=90, max=2826|chat, coding, 🔥|-|
|[codefuse-ai/Evol-instruction-66k](https://modelscope.cn/datasets/codefuse-ai/Evol-instruction-66k)|default|66862|440.1±208.4, min=46, max=2661|chat, coding, 🔥|-|
|[damo/MSAgent-Bench](https://modelscope.cn/datasets/damo/MSAgent-Bench)|default<br>mini|638149|859.2±460.1, min=38, max=3479|chat, agent, multi-round|-|
@@ -1164,6 +1165,7 @@
|[modelscope/clue](https://modelscope.cn/datasets/modelscope/clue)|cmnli|391783|81.6±16.0, min=54, max=157|text-generation, classification|[clue](https://huggingface.co/datasets/clue)|
|[modelscope/coco_2014_caption](https://modelscope.cn/datasets/modelscope/coco_2014_caption)|train<br>validation|454617|389.6±68.4, min=70, max=587|chat, multi-modal, vision, 🔥|-|
|[modelscope/gsm8k](https://modelscope.cn/datasets/modelscope/gsm8k)|main|7473|88.6±21.6, min=41, max=241|qa, math|-|
|[open-r1/DAPO-Math-17k-Processed](https://modelscope.cn/datasets/open-r1/DAPO-Math-17k-Processed)|all|17398|122.3±65.2, min=41, max=1517|math, rlvr|[open-r1/DAPO-Math-17k-Processed](https://huggingface.co/datasets/open-r1/DAPO-Math-17k-Processed)|
|[open-r1/verifiable-coding-problems-python](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python)|default|35735|559.0±255.2, min=74, max=6191|grpo, code|[open-r1/verifiable-coding-problems-python](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python)|
|[open-r1/verifiable-coding-problems-python-10k](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python-10k)|default|1800|581.6±233.4, min=136, max=2022|grpo, code|[open-r1/verifiable-coding-problems-python-10k](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python-10k)|
|[open-r1/verifiable-coding-problems-python-10k_decontaminated](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python-10k_decontaminated)|default|1574|575.7±234.3, min=136, max=2022|grpo, code|[open-r1/verifiable-coding-problems-python-10k_decontaminated](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python-10k_decontaminated)|
@@ -1193,7 +1195,7 @@
|[swift/RedPajama-Data-V2](https://modelscope.cn/datasets/swift/RedPajama-Data-V2)|default|huge dataset|-|pretrain, quality|[togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)|
|[swift/ScienceQA](https://modelscope.cn/datasets/swift/ScienceQA)|default|16967|101.7±55.8, min=32, max=620|multi-modal, science, vqa, quality|[derek-thomas/ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA)|
|[swift/SlimOrca](https://modelscope.cn/datasets/swift/SlimOrca)|default|517982|405.5±442.1, min=47, max=8312|quality, en|[Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)|
|[swift/TextCaps](https://modelscope.cn/datasets/swift/TextCaps)|default<br>emb|huge dataset|-|multi-modal, en, caption, quality|[HuggingFaceM4/TextCaps](https://huggingface.co/datasets/HuggingFaceM4/TextCaps)|
|[swift/TextCaps](https://modelscope.cn/datasets/swift/TextCaps)|default<br>emb<br>rerank|huge dataset|-|multi-modal, en, caption, quality|[HuggingFaceM4/TextCaps](https://huggingface.co/datasets/HuggingFaceM4/TextCaps)|
|[swift/ToolBench](https://modelscope.cn/datasets/swift/ToolBench)|default|124345|2251.7±1039.8, min=641, max=9451|chat, agent, multi-round|-|
|[swift/VQAv2](https://modelscope.cn/datasets/swift/VQAv2)|default|huge dataset|-|en, vqa, quality|[HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2)|
|[swift/VideoChatGPT](https://modelscope.cn/datasets/swift/VideoChatGPT)|Generic<br>Temporal<br>Consistency|3206|87.4±48.3, min=31, max=398|chat, multi-modal, video, 🔥|[lmms-lab/VideoChatGPT](https://huggingface.co/datasets/lmms-lab/VideoChatGPT)|
5 changes: 5 additions & 0 deletions docs/source/Megatron-SWIFT/Command-line-parameters.md
@@ -218,6 +218,11 @@
- qk_head_dim: Dimension of the head in the QK projection. `q_head_dim = qk_head_dim + qk_pos_emb_head_dim`. Default is None and will be automatically read from config.json.
- qk_pos_emb_head_dim: Dimension of the position embedding in the QK projection. Default is None and will be automatically read from config.json.

**MTP Parameters**
- mtp_num_layers: Number of Multi-Token Prediction (MTP) layers. MTP extends the prediction scope at each position to multiple future tokens. This MTP implementation uses D sequential modules to predict D additional tokens. Default is None.
- Note: The value of mtp_num_layers is not automatically retrieved from config.json and must be set manually; refer to the `num_nextn_predict_layers` field in config.json when setting it. When using mcore-bridge, MTP weights are first loaded from the safetensors files; if they cannot be found, they are randomly initialized.
- mtp_loss_scaling_factor: Scaling factor of the Multi-Token Prediction (MTP) loss. The MTP losses are averaged across all depths and then multiplied by this factor to obtain the overall MTP loss, which serves as an additional training objective. Default is 0.1.

**Tuner Parameters**:
- train_type: Options are 'lora' and 'full'. Default is 'full'.
- 🔥freeze_llm: This parameter only takes effect for multimodal models. It can be used in both full-parameter and LoRA training, but with different effects. In full-parameter training, setting freeze_llm to True freezes the weights of the LLM part. In LoRA training with `target_modules` set to 'all-linear', setting freeze_llm to True prevents LoRA modules from being added to the LLM part. Default is False.
4 changes: 3 additions & 1 deletion docs/source_en/Instruction/Supported-models-and-datasets.md
@@ -1138,6 +1138,7 @@ The table below introduces information about the datasets integrated with ms-swi
|-|default|huge dataset|-|pretrain, quality|[allenai/c4](https://huggingface.co/datasets/allenai/c4)|
|[bespokelabs/Bespoke-Stratos-17k](https://modelscope.cn/datasets/bespokelabs/Bespoke-Stratos-17k)|default|16710|480.7±236.1, min=266, max=3556|chat, sft, cot, r1|[bespokelabs/Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)|
|-|default|huge dataset|-|pretrain, quality|[cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B)|
|[clip-benchmark/wds_voc2007_multilabel](https://modelscope.cn/datasets/clip-benchmark/wds_voc2007_multilabel)|default|2501|112.0±0.0, min=112, max=112|multilabel, multi-modal|[clip-benchmark/wds_voc2007_multilabel](https://huggingface.co/datasets/clip-benchmark/wds_voc2007_multilabel)|
|[codefuse-ai/CodeExercise-Python-27k](https://modelscope.cn/datasets/codefuse-ai/CodeExercise-Python-27k)|default|27224|337.3±154.2, min=90, max=2826|chat, coding, 🔥|-|
|[codefuse-ai/Evol-instruction-66k](https://modelscope.cn/datasets/codefuse-ai/Evol-instruction-66k)|default|66862|440.1±208.4, min=46, max=2661|chat, coding, 🔥|-|
|[damo/MSAgent-Bench](https://modelscope.cn/datasets/damo/MSAgent-Bench)|default<br>mini|638149|859.2±460.1, min=38, max=3479|chat, agent, multi-round|-|
@@ -1165,6 +1166,7 @@ The table below introduces information about the datasets integrated with ms-swi
|[modelscope/clue](https://modelscope.cn/datasets/modelscope/clue)|cmnli|391783|81.6±16.0, min=54, max=157|text-generation, classification|[clue](https://huggingface.co/datasets/clue)|
|[modelscope/coco_2014_caption](https://modelscope.cn/datasets/modelscope/coco_2014_caption)|train<br>validation|454617|389.6±68.4, min=70, max=587|chat, multi-modal, vision, 🔥|-|
|[modelscope/gsm8k](https://modelscope.cn/datasets/modelscope/gsm8k)|main|7473|88.6±21.6, min=41, max=241|qa, math|-|
|[open-r1/DAPO-Math-17k-Processed](https://modelscope.cn/datasets/open-r1/DAPO-Math-17k-Processed)|all|17398|122.3±65.2, min=41, max=1517|math, rlvr|[open-r1/DAPO-Math-17k-Processed](https://huggingface.co/datasets/open-r1/DAPO-Math-17k-Processed)|
|[open-r1/verifiable-coding-problems-python](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python)|default|35735|559.0±255.2, min=74, max=6191|grpo, code|[open-r1/verifiable-coding-problems-python](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python)|
|[open-r1/verifiable-coding-problems-python-10k](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python-10k)|default|1800|581.6±233.4, min=136, max=2022|grpo, code|[open-r1/verifiable-coding-problems-python-10k](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python-10k)|
|[open-r1/verifiable-coding-problems-python-10k_decontaminated](https://modelscope.cn/datasets/open-r1/verifiable-coding-problems-python-10k_decontaminated)|default|1574|575.7±234.3, min=136, max=2022|grpo, code|[open-r1/verifiable-coding-problems-python-10k_decontaminated](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python-10k_decontaminated)|
@@ -1194,7 +1196,7 @@ The table below introduces information about the datasets integrated with ms-swi
|[swift/RedPajama-Data-V2](https://modelscope.cn/datasets/swift/RedPajama-Data-V2)|default|huge dataset|-|pretrain, quality|[togethercomputer/RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)|
|[swift/ScienceQA](https://modelscope.cn/datasets/swift/ScienceQA)|default|16967|101.7±55.8, min=32, max=620|multi-modal, science, vqa, quality|[derek-thomas/ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA)|
|[swift/SlimOrca](https://modelscope.cn/datasets/swift/SlimOrca)|default|517982|405.5±442.1, min=47, max=8312|quality, en|[Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)|
|[swift/TextCaps](https://modelscope.cn/datasets/swift/TextCaps)|default<br>emb|huge dataset|-|multi-modal, en, caption, quality|[HuggingFaceM4/TextCaps](https://huggingface.co/datasets/HuggingFaceM4/TextCaps)|
|[swift/TextCaps](https://modelscope.cn/datasets/swift/TextCaps)|default<br>emb<br>rerank|huge dataset|-|multi-modal, en, caption, quality|[HuggingFaceM4/TextCaps](https://huggingface.co/datasets/HuggingFaceM4/TextCaps)|
|[swift/ToolBench](https://modelscope.cn/datasets/swift/ToolBench)|default|124345|2251.7±1039.8, min=641, max=9451|chat, agent, multi-round|-|
|[swift/VQAv2](https://modelscope.cn/datasets/swift/VQAv2)|default|huge dataset|-|en, vqa, quality|[HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2)|
|[swift/VideoChatGPT](https://modelscope.cn/datasets/swift/VideoChatGPT)|Generic<br>Temporal<br>Consistency|3206|87.4±48.3, min=31, max=398|chat, multi-modal, video, 🔥|[lmms-lab/VideoChatGPT](https://huggingface.co/datasets/lmms-lab/VideoChatGPT)|
6 changes: 6 additions & 0 deletions docs/source_en/Megatron-SWIFT/Command-line-parameters.md
@@ -231,6 +231,12 @@ For guidance on selecting parallelization strategies, please refer to the [Train
- qk_head_dim: Dimension of the head in the QK projection. `q_head_dim = qk_head_dim + qk_pos_emb_head_dim`. Default is None and will be automatically read from config.json.
- qk_pos_emb_head_dim: Dimension of the position embedding in the QK projection. Default is None and will be automatically read from config.json.


**MTP Parameters**
- mtp_num_layers: Number of Multi-Token Prediction (MTP) layers. MTP extends the prediction scope at each position to multiple future tokens. This MTP implementation uses D sequential modules to predict D additional tokens. Default is None.
- Note: The value of mtp_num_layers is not automatically retrieved from config.json and must be set manually; refer to the `num_nextn_predict_layers` field in config.json when setting it. When using mcore-bridge, MTP weights are first loaded from the safetensors files; if they cannot be found, they are randomly initialized.
- mtp_loss_scaling_factor: Scaling factor of the Multi-Token Prediction (MTP) loss. The MTP losses are averaged across all depths and then multiplied by this factor to obtain the overall MTP loss, which serves as an additional training objective. Default is 0.1.
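
As a rough illustration of how `mtp_num_layers` and `mtp_loss_scaling_factor` combine (a minimal sketch based on the description above, not Megatron's actual code; `overall_mtp_loss` is a hypothetical helper):

```python
import torch

def overall_mtp_loss(per_depth_losses, mtp_loss_scaling_factor=0.1):
    # One scalar loss per MTP depth (mtp_num_layers values in total).
    # Average across depths, then scale; the result is added to the
    # main language-modeling loss as an extra training objective.
    return mtp_loss_scaling_factor * torch.stack(per_depth_losses).mean()

# e.g. mtp_num_layers=2 -> two per-depth losses
print(overall_mtp_loss([torch.tensor(2.0), torch.tensor(2.4)]))  # tensor(0.2200)
```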

**Tuner Parameters**:

- train_type: Options are `'lora'` and `'full'`. Default is `'full'`.
13 changes: 13 additions & 0 deletions examples/infer/sglang/mtp.sh
@@ -0,0 +1,13 @@
CUDA_VISIBLE_DEVICES=0,1,2,3 \
swift infer \
--model ZhipuAI/GLM-4.5-Air \
--sglang_tp_size 4 \
--infer_backend sglang \
--val_dataset AI-ModelScope/alpaca-gpt4-data-zh#100 \
--sglang_context_length 8192 \
--max_new_tokens 2048 \
--sglang_mem_fraction_static 0.7 \
--sglang_speculative_algorithm EAGLE \
--sglang_speculative_eagle_topk 1 \
--sglang_speculative_num_steps 3 \
--sglang_speculative_num_draft_tokens 4
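
For reference, the three speculative-decoding flags are coupled: with `--sglang_speculative_eagle_topk 1` the draft is a single chain, so each of the `--sglang_speculative_num_steps` steps proposes one token and verification covers one extra position, which is why the script pairs 3 steps with 4 draft tokens. A minimal sanity check (this topk=1 relationship is inferred from the shipped examples, not a documented sglang rule):

```python
# EAGLE settings from the script above
speculative_num_steps = 3
speculative_eagle_topk = 1
speculative_num_draft_tokens = 4

# With topk=1, one draft token per step plus one extra verified position.
assert speculative_num_draft_tokens == speculative_num_steps * speculative_eagle_topk + 1
```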
7 changes: 5 additions & 2 deletions examples/megatron/lora/glm4_5_106b.sh
@@ -1,10 +1,13 @@
# thinking -> non-thinking
# demo: thinking -> non-thinking
# 4 * 70GiB; 40s/it
PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' \
NPROC_PER_NODE=4 \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
megatron sft \
--load GLM-4.5-Air-mcore \
--model ZhipuAI/GLM-4.5-Air \
--load_safetensors true \
--save_safetensors true \
--mtp_num_layers 1 \
--dataset 'swift/Chinese-Qwen3-235B-2507-Distill-data-110k-SFT' \
--load_from_cache_file true \
--train_type lora \
Expand Down
5 changes: 4 additions & 1 deletion examples/megatron/lora/qwen3_235b.sh
@@ -5,9 +5,12 @@ PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' \
NPROC_PER_NODE=8 \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
megatron sft \
--load Qwen3-235B-A22B-Instruct-2507-mcore \
--model Qwen/Qwen3-235B-A22B-Instruct-2507 \
--dataset 'swift/Chinese-Qwen3-235B-2507-Distill-data-110k-SFT#2000' \
'swift/self-cognition#1000' \
--load_safetensors true \
--save_safetensors true \
--merge_lora false \
--load_from_cache_file true \
--train_type lora \
--lora_rank 8 \
59 changes: 59 additions & 0 deletions examples/models/qwen3_next/mtp.sh
@@ -0,0 +1,59 @@
# 8 * 60GiB, 10s/it

PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' \
NPROC_PER_NODE=8 \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
megatron sft \
--model Qwen/Qwen3-Next-80B-A3B-Instruct \
--load_safetensors true \
--save_safetensors true \
--mtp_num_layers 1 \
--dataset 'swift/Chinese-Qwen3-235B-2507-Distill-data-110k-SFT#2000' \
'swift/self-cognition#1000' \
--load_from_cache_file true \
--train_type lora \
--lora_rank 8 \
--lora_alpha 32 \
--target_modules all-linear \
--expert_model_parallel_size 4 \
--moe_permute_fusion true \
--moe_grouped_gemm true \
--moe_shared_expert_overlap true \
--moe_aux_loss_coeff 1e-6 \
--micro_batch_size 2 \
--global_batch_size 16 \
--recompute_granularity full \
--recompute_method uniform \
--recompute_num_layers 1 \
--max_epochs 1 \
--finetune true \
--cross_entropy_loss_fusion true \
--lr 1e-4 \
--lr_warmup_fraction 0.05 \
--min_lr 1e-5 \
--save megatron_output/Qwen3-Next-80B-A3B-Instruct \
--eval_interval 200 \
--save_interval 200 \
--max_length 2048 \
--num_workers 8 \
--dataset_num_proc 8 \
--no_save_optim true \
--no_save_rng true \
--sequence_parallel true \
--attention_backend flash \
--model_author swift \
--model_name swift-robot


# CUDA_VISIBLE_DEVICES=0,1,2,3 \
# swift infer \
# --model megatron_output/Qwen3-Next-80B-A3B-Instruct/vx-xxx/checkpoint-xxx \
# --sglang_tp_size 4 \
# --infer_backend sglang \
# --sglang_context_length 8192 \
# --max_new_tokens 2048 \
# --sglang_mem_fraction_static 0.7 \
# --sglang_speculative_algorithm NEXTN \
# --sglang_speculative_eagle_topk 1 \
# --sglang_speculative_num_steps 3 \
# --sglang_speculative_num_draft_tokens 4
10 changes: 10 additions & 0 deletions swift/llm/argument/infer_args.py
@@ -61,6 +61,12 @@ class SglangArguments:
sglang_kv_cache_dtype: str = 'auto'
sglang_enable_dp_attention: bool = False
sglang_disable_custom_all_reduce: bool = True
# speculative decoding
# e.g. EAGLE, EAGLE3, NEXTN
sglang_speculative_algorithm: Optional[str] = None
sglang_speculative_num_steps: Optional[int] = None
sglang_speculative_eagle_topk: Optional[int] = None
sglang_speculative_num_draft_tokens: Optional[int] = None

def get_sglang_engine_kwargs(self):
kwargs = {
@@ -76,6 +82,10 @@ def get_sglang_engine_kwargs(self):
'kv_cache_dtype': self.sglang_kv_cache_dtype,
'enable_dp_attention': self.sglang_enable_dp_attention,
'disable_custom_all_reduce': self.sglang_disable_custom_all_reduce,
'speculative_algorithm': self.sglang_speculative_algorithm,
'speculative_num_steps': self.sglang_speculative_num_steps,
'speculative_eagle_topk': self.sglang_speculative_eagle_topk,
'speculative_num_draft_tokens': self.sglang_speculative_num_draft_tokens,
}
if self.task_type == 'embedding':
kwargs['task_type'] = 'embedding'
3 changes: 2 additions & 1 deletion swift/llm/infer/infer_engine/infer_engine.py
@@ -32,6 +32,7 @@ def _post_init(self, template=None):
self.max_model_len = self.model_info.max_model_len
self.task_type = self.model_info.task_type
self.config = self.model_info.config
self.max_tokens_offset = 0
if template is None:
ckpt_dir = get_ckpt_dir(self.model_dir, getattr(self, 'adapters', None))
logger.info('Create the default_template for the infer_engine')
@@ -220,7 +221,7 @@ def set_default_max_tokens(self, request_config: RequestConfig, inputs: Dict[str
max_model_len = 8192
logger.warning(
'The current model is unable to retrieve `max_model_len`. It is set to the default value of 8192.')
max_max_tokens = max_model_len - num_tokens
max_max_tokens = max_model_len - num_tokens + self.max_tokens_offset
if max_tokens is None:
request_config.max_tokens = max_max_tokens
elif max_max_tokens < request_config.max_tokens:
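
Together with the sglang engine change below (which sets `max_tokens_offset = -speculative_num_draft_tokens`), this offset reserves context-window room for speculative draft tokens when capping `max_tokens`. A worked example with assumed values:

```python
max_model_len = 8192                 # model context length
num_tokens = 1000                    # tokens already in the prompt
speculative_num_draft_tokens = 4     # from the sglang speculative settings
max_tokens_offset = -speculative_num_draft_tokens

# Without the offset the cap would be 7192 tokens; with it, 7188,
# leaving headroom for draft tokens proposed during speculative decoding.
max_max_tokens = max_model_len - num_tokens + max_tokens_offset
assert max_max_tokens == 7188
```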
10 changes: 10 additions & 0 deletions swift/llm/infer/infer_engine/sglang_engine.py
@@ -48,6 +48,10 @@
kv_cache_dtype: str = 'auto',
enable_dp_attention: bool = False,
disable_custom_all_reduce: bool = True,
speculative_algorithm: Optional[str] = None,
speculative_num_steps: Optional[int] = None,
speculative_eagle_topk: Optional[int] = None,
speculative_num_draft_tokens: Optional[int] = None,
log_level='error',
engine_kwargs: Optional[Dict[str, Any]] = None,
template: Optional[Template] = None,
@@ -88,6 +92,10 @@ def __init__(
kv_cache_dtype=kv_cache_dtype,
enable_dp_attention=enable_dp_attention,
disable_custom_all_reduce=disable_custom_all_reduce,
speculative_algorithm=speculative_algorithm,
speculative_num_steps=speculative_num_steps,
speculative_eagle_topk=speculative_eagle_topk,
speculative_num_draft_tokens=speculative_num_draft_tokens,
log_level=log_level,
skip_tokenizer_init=True,
trust_remote_code=True,
@@ -98,6 +106,8 @@
self.server_args.is_embedding = True
self.engine = sgl.Engine(server_args=self.server_args)
self._load_generation_config()
if speculative_num_draft_tokens is not None:
self.max_tokens_offset = -speculative_num_draft_tokens

def _load_generation_config(self) -> None:
generation_config_path = os.path.join(self.model_dir, 'generation_config.json')
4 changes: 4 additions & 0 deletions swift/megatron/argument/megatron_args.py
@@ -496,6 +496,10 @@ class MegatronArguments(ExtraMegatronArguments):
qk_head_dim: Optional[int] = None
qk_pos_emb_head_dim: Optional[int] = None

# mtp
mtp_num_layers: Optional[int] = None
mtp_loss_scaling_factor: float = 0.1

# fp8
fp8_format: Literal['e4m3', 'hybrid'] = None
fp8_recipe: Literal['tensorwise', 'delayed', 'mxfp8', 'blockwise'] = 'delayed'