
Add Qwen3 1.7B configs #3103

Open
rxu183 wants to merge 1 commit into AI-Hypercomputer:main from rxu183:richard/qwen1_7b

Conversation


rxu183 commented Feb 6, 2026

Description

Adds Qwen3 1.7B model configs. The new config closely follows the existing Qwen3 configs; relative to the 0.6B, only base_emb_dim and base_mlp_dim change. The documentation is also updated to list 1.7B as a supported model. A sketch of the key values is below.
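
For context, here is a sketch of what the 1.7B model config plausibly contains. Key names follow the existing Qwen3 configs in MaxText, and the values come from the published Qwen/Qwen3-1.7B architecture; treat it as illustrative rather than the literal diff in this PR:

```yaml
# Illustrative Qwen3 1.7B config sketch (not the exact PR diff).
# Only base_emb_dim and base_mlp_dim differ from the 0.6B config.
base_emb_dim: 2048            # 0.6B: 1024
base_mlp_dim: 6144            # 0.6B: 3072
base_num_decoder_layers: 28   # unchanged from 0.6B
base_num_query_heads: 16      # unchanged from 0.6B
base_num_kv_heads: 8          # unchanged from 0.6B
head_dim: 128
vocab_size: 151936
decoder_block: "qwen3"
```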

Tests

I tested this on a Google Colab TPU v6e-1 instance via the provided Qwen SFT demo notebook. The training cell reporting TFLOPs and loss succeeded, so I'm fairly confident the architecture mapping and parameter conversion are correct. To reproduce, the only change needed is pointing the notebook at the 1.7B model instead of the 0.6B (see the snippet below). However, I wasn't able to run the vLLM cells due to an IPython issue: ipykernel replaces sys.stdout with a stream that doesn't implement fileno(), and something in the rollout setup appears to call it (traceback below).
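
For reproducibility, the only notebook edit was the model selection. The variable names below are illustrative, not necessarily the notebook's actual identifiers:

```python
# Hypothetical one-line change in the Qwen SFT demo notebook
# (identifier names are illustrative; the HF repo matches the logs below).
model_name = "qwen3-1.7b"      # was "qwen3-0.6b" (MaxText model config name)
hf_repo = "Qwen/Qwen3-1.7B"    # was "Qwen/Qwen3-0.6B" (HuggingFace checkpoint)
```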

vLLM Error

---------------------------------------------------------------------------
UnsupportedOperation                      Traceback (most recent call last)
/tmp/ipython-input-2349923095.py in <cell line: 0>()
      4 
      5 tunix_model = TunixMaxTextAdapter(trainer.model)
----> 6 vllm_rollout = VllmRollout(
      7     model=tunix_model,
      8     tokenizer=tokenizer,

(17 intermediate frames elided)
/usr/local/lib/python3.12/dist-packages/ipykernel/iostream.py in fileno(self)
    309             return self._original_stdstream_copy
    310         else:
--> 311             raise io.UnsupportedOperation("fileno")
    312 
    313     def _watch_pipe_fd(self):

UnsupportedOperation: fileno

Logs

INFO:absl:Lazy loading DISABLED. Loading full HuggingFace model: Qwen/Qwen3-1.7B...
config.json: 100% 726/726 [00:00<00:00, 4.57MB/s]
model.safetensors.index.json: 25.6kB [00:00, 118MB/s]
Fetching 2 files: 0% 0/2 [00:00<?, ?it/s]
model-00002-of-00002.safetensors: 0% 0.00/622M [00:00<?, ?B/s]

...

Starting SFT Training...
Training: 100%
 500/500 [01:22<00:00, 12.23step/s, _train_loss=0.764, _train_perplexity=2.15, _train_steps_per_sec=12.1]
Per train step:
 Total TFLOPs: 10.93 
 split as 96.70% learnable weight flops and 3.30% attention flops
SFT Training Complete!


Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

