Post History
I see on this PyTorch model Helsinki-NLP/opus-mt-fr-en (HuggingFace), which is an encoder-decoder model for machine translation: "bos_token_id": 0, "eos_token_id": 0, in its config.json. ...
#4: Post edited
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
- Why set bos_token_id == eos_token_id? How does the model know when a sequence ends?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
- ---
- Crossposts:
- - https://ai.stackexchange.com/q/48427/4
- - https://qr.ae/pAAezR
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
- Why set bos_token_id == eos_token_id? How does the model know when a sequence ends?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
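- To see these values programmatically, here is a minimal sketch (assuming the `transformers` library is installed; the IDs in the comments simply restate the two `config.json` files quoted here):
- ```
- # Inspect the special-token IDs and watch generation stop on eos_token_id.
- from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer
- name = "Helsinki-NLP/opus-mt-fr-en"
- config = AutoConfig.from_pretrained(name)
- print(config.bos_token_id, config.eos_token_id)  # 0 0, per config.json
- tok = AutoTokenizer.from_pretrained(name)
- model = AutoModelForSeq2SeqLM.from_pretrained(name)
- batch = tok(["Bonjour le monde"], return_tensors="pt")
- out = model.generate(**batch)
- # generate() stops once the decoder emits eos_token_id, so the final id
- # of the generated sequence should be 0 for this model.
- print(out[0][-1].item())
- print(tok.decode(out[0], skip_special_tokens=True))
- ```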
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
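- As a sketch, the relevant IDs from both configs can also be compared side by side (the values in the comments are taken from the files quoted above):
- ```
- from transformers import AutoConfig
- # Print bos/eos/decoder-start/pad IDs for both models.
- for name in ["Helsinki-NLP/opus-mt-fr-en", "facebook/mbart-large-50"]:
-     c = AutoConfig.from_pretrained(name)
-     print(name, c.bos_token_id, c.eos_token_id,
-           c.decoder_start_token_id, c.pad_token_id)
- # Helsinki-NLP/opus-mt-fr-en 0 0 59513 59513
- # facebook/mbart-large-50 0 2 2 1
- ```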
- ---
- Crossposts:
- * https://ai.stackexchange.com/q/48427/4
- * https://qr.ae/pAAezR
- * https://redd.it/1k3vdl7
- * https://redd.it/1k3vddf
- * https://redd.it/1k3vdri
- * https://redd.it/1k3vdu5
- * https://redd.it/1k3vdwy
- * https://redd.it/1k3vdxy
- * https://redd.it/1k3ve37
- * https://redd.it/1k3ve6z
- * https://redd.it/1k3vedz
#3: Post edited
Why would the tokenizer for an encoder-decoder model for machine translation use bos_token_id == eos_token_id?
- Why would the tokenizer for an encoder-decoder model for machine translation use bos_token_id == eos_token_id? How does the model know when a sequence ends?
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
Why set bos_token_id == eos_token_id?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
- ---
- Crossposts:
- - https://ai.stackexchange.com/q/48427/4
- - https://qr.ae/pAAezR
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
- Why set bos_token_id == eos_token_id? How does the model know when a sequence ends?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
- ---
- Crossposts:
- - https://ai.stackexchange.com/q/48427/4
- - https://qr.ae/pAAezR
#2: Post edited
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
- Why set bos_token_id == eos_token_id?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
- I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
- ```
- "bos_token_id": 0,
- "eos_token_id": 0,
- ```
- in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
- Why set bos_token_id == eos_token_id?
- By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
- ```
- "bos_token_id": 0,
- "eos_token_id": 2,
- ```
- ----
- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
- ```
- {
- "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "add_bias_logits": false,
- "add_final_layer_norm": false,
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 59513
- ]
- ],
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 59513,
- "decoder_vocab_size": 59514,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_before": false,
- "normalize_embedding": false,
- "num_beams": 4,
- "num_hidden_layers": 6,
- "pad_token_id": 59513,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "transformers_version": "4.22.0.dev0",
- "use_cache": true,
- "vocab_size": 59514
- }
- ```
- Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
- ```
- {
- "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
- "_num_labels": 3,
- "activation_dropout": 0.0,
- "activation_function": "gelu",
- "add_bias_logits": false,
- "add_final_layer_norm": true,
- "architectures": [
- "MBartForConditionalGeneration"
- ],
- "attention_dropout": 0.0,
- "bos_token_id": 0,
- "classif_dropout": 0.0,
- "classifier_dropout": 0.0,
- "d_model": 1024,
- "decoder_attention_heads": 16,
- "decoder_ffn_dim": 4096,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 12,
- "decoder_start_token_id": 2,
- "dropout": 0.1,
- "early_stopping": true,
- "encoder_attention_heads": 16,
- "encoder_ffn_dim": 4096,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 12,
- "eos_token_id": 2,
- "forced_eos_token_id": 2,
- "gradient_checkpointing": false,
- "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
- },
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
- },
- "max_length": 200,
- "max_position_embeddings": 1024,
- "model_type": "mbart",
- "normalize_before": true,
- "normalize_embedding": true,
- "num_beams": 5,
- "num_hidden_layers": 12,
- "output_past": true,
- "pad_token_id": 1,
- "scale_embedding": true,
- "static_position_embeddings": false,
- "transformers_version": "4.4.0.dev0",
- "use_cache": true,
- "vocab_size": 250054,
- "tokenizer_class": "MBart50Tokenizer"
- }
- ```
- ---
- Crossposts:
- - https://ai.stackexchange.com/q/48427/4
- - https://qr.ae/pAAezR
#1: Initial revision
Why would the tokenizer for an encoder-decoder model for machine translation use bos_token_id == eos_token_id?
I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation:
```
"bos_token_id": 0,
"eos_token_id": 0,
```
in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json).
Why set bos_token_id == eos_token_id?
By comparison, I see that facebook/mbart-large-50 uses a different ID in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json):
```
"bos_token_id": 0,
"eos_token_id": 2,
```
----
Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en):
```
{
"_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "swish",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"MarianMTModel"
],
"attention_dropout": 0.0,
"bad_words_ids": [
[
59513
]
],
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 512,
"decoder_attention_heads": 8,
"decoder_ffn_dim": 2048,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 59513,
"decoder_vocab_size": 59514,
"dropout": 0.1,
"encoder_attention_heads": 8,
"encoder_ffn_dim": 2048,
"encoder_layerdrop": 0.0,
"encoder_layers": 6,
"eos_token_id": 0,
"forced_eos_token_id": 0,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 512,
"max_position_embeddings": 512,
"model_type": "marian",
"normalize_before": false,
"normalize_embedding": false,
"num_beams": 4,
"num_hidden_layers": 6,
"pad_token_id": 59513,
"scale_embedding": true,
"share_encoder_decoder_embeddings": true,
"static_position_embeddings": true,
"transformers_version": "4.22.0.dev0",
"use_cache": true,
"vocab_size": 59514
}
```
Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50):
```
{
"_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": true,
"architectures": [
"MBartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": 2,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"forced_eos_token_id": 2,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 200,
"max_position_embeddings": 1024,
"model_type": "mbart",
"normalize_before": true,
"normalize_embedding": true,
"num_beams": 5,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"scale_embedding": true,
"static_position_embeddings": false,
"transformers_version": "4.4.0.dev0",
"use_cache": true,
"vocab_size": 250054,
"tokenizer_class": "MBart50Tokenizer"
}
```