
Post History

60%
+1 −0
Q&A Why would the tokenizer for an encoder-decoder model for machine translation use bos_token_id == eos_token_id? How does it know when a sequence ends?

I see on this PyTorch model Helsinki-NLP/opus-mt-fr-en (HuggingFace), which is an encoder-decoder model for machine translation: "bos_token_id": 0, "eos_token_id": 0, in its config.json. ...

1 answer · posted 21d ago by Franck Dernoncourt · edited 19d ago by Alexei

Question pytorch machine-translation tokenization encoder-decoder
#7: Post edited by Alexei · 2025-04-23T06:31:20Z (19 days ago)
removed non software development related tag
#6: Post edited by Franck Dernoncourt · 2025-04-22T18:59:54Z (19 days ago)
reddit shitware removed my post, so reposting
#5: Nominated for promotion by Alexei · 2025-04-22T15:46:17Z (20 days ago)
#4: Post edited by Franck Dernoncourt · 2025-04-20T20:20:42Z (21 days ago)
#3: Post edited by Franck Dernoncourt · 2025-04-20T20:07:30Z (21 days ago)
#2: Post edited by Franck Dernoncourt · 2025-04-20T20:04:22Z (21 days ago)
#1: Initial revision by Franck Dernoncourt · 2025-04-20T20:03:03Z (21 days ago)
Why would the tokenizer for an encoder-decoder model for machine translation use bos_token_id == eos_token_id?
I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:

```
  "bos_token_id": 0,
  "eos_token_id": 0,
```

in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).

Why set bos_token_id == eos_token_id?

By comparison, I see that `facebook/mbart-large-50` uses a different `eos_token_id` in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):


```
  "bos_token_id": 0,
  "eos_token_id": 2,
```
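
For reference, here is a minimal sketch of how the declared special-token IDs can be inspected programmatically (it assumes `transformers` and `sentencepiece` are installed; the checkpoint files are downloaded from the Hub on first use):

```
from transformers import AutoConfig, AutoTokenizer

for name in ["Helsinki-NLP/opus-mt-fr-en", "facebook/mbart-large-50"]:
    cfg = AutoConfig.from_pretrained(name)
    tok = AutoTokenizer.from_pretrained(name)
    # Compare what config.json declares with what the tokenizer itself reports.
    print(name)
    print("  config:    bos =", cfg.bos_token_id,
          " eos =", cfg.eos_token_id,
          " pad =", cfg.pad_token_id,
          " decoder_start =", cfg.decoder_start_token_id)
    print("  tokenizer: bos =", tok.bos_token_id,
          " eos =", tok.eos_token_id,
          " pad =", tok.pad_token_id)
```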


----

Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):

```
{
  "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
  "_num_labels": 3,
  "activation_dropout": 0.0,
  "activation_function": "swish",
  "add_bias_logits": false,
  "add_final_layer_norm": false,
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bad_words_ids": [
    [
      59513
    ]
  ],
  "bos_token_id": 0,
  "classif_dropout": 0.0,
  "classifier_dropout": 0.0,
  "d_model": 512,
  "decoder_attention_heads": 8,
  "decoder_ffn_dim": 2048,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 59513,
  "decoder_vocab_size": 59514,
  "dropout": 0.1,
  "encoder_attention_heads": 8,
  "encoder_ffn_dim": 2048,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 0,
  "forced_eos_token_id": 0,
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "max_length": 512,
  "max_position_embeddings": 512,
  "model_type": "marian",
  "normalize_before": false,
  "normalize_embedding": false,
  "num_beams": 4,
  "num_hidden_layers": 6,
  "pad_token_id": 59513,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "transformers_version": "4.22.0.dev0",
  "use_cache": true,
  "vocab_size": 59514
}
```


Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):


```
{
  "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
  "_num_labels": 3,
  "activation_dropout": 0.0,
  "activation_function": "gelu",
  "add_bias_logits": false,
  "add_final_layer_norm": true,
  "architectures": [
    "MBartForConditionalGeneration"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "classif_dropout": 0.0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 12,
  "decoder_start_token_id": 2,
  "dropout": 0.1,
  "early_stopping": true,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 12,
  "eos_token_id": 2,
  "forced_eos_token_id": 2,
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "max_length": 200,
  "max_position_embeddings": 1024,
  "model_type": "mbart",
  "normalize_before": true,
  "normalize_embedding": true,
  "num_beams": 5,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "scale_embedding": true,
  "static_position_embeddings": false,
  "transformers_version": "4.4.0.dev0",
  "use_cache": true,
  "vocab_size": 250054,
  "tokenizer_class": "MBart50Tokenizer"
}
```
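
To see how decoding actually terminates with this configuration, one rough check (a sketch only, assuming `transformers`, `sentencepiece`, and `torch` are available) is to translate a short sentence with the Marian checkpoint and print the raw output IDs instead of only the decoded text:

```
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "Helsinki-NLP/opus-mt-fr-en"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

batch = tok(["Bonjour tout le monde."], return_tensors="pt")
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=32)

# The generated sequence begins with decoder_start_token_id (59513, the pad token
# for this checkpoint) and generation stops once eos_token_id (0 here) is emitted.
print(out[0].tolist())
print(tok.decode(out[0], skip_special_tokens=True))
print("eos_token_id:", model.config.eos_token_id,
      "decoder_start_token_id:", model.config.decoder_start_token_id)
```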