Why would the tokenizer for an encoder-decoder machine-translation model use bos_token_id == eos_token_id? How does it know when a sequence ends?
On the PyTorch model Helsinki-NLP/opus-mt-fr-en (HuggingFace), which is an encoder-decoder model for machine translation, I see the following in its config.json:

```json
"bos_token_id": 0,
"eos_token_id": 0,
```
Why set bos_token_id == eos_token_id? How does it know when a sequence ends?
By comparison, facebook/mbart-large-50 uses a different ID in its config.json:

```json
"bos_token_id": 0,
"eos_token_id": 2,
```
Entire config.json for Helsinki-NLP/opus-mt-fr-en:

```json
{
"_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "swish",
"add_bias_logits": false,
"add_final_layer_norm": false,
"architectures": [
"MarianMTModel"
],
"attention_dropout": 0.0,
"bad_words_ids": [
[
59513
]
],
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 512,
"decoder_attention_heads": 8,
"decoder_ffn_dim": 2048,
"decoder_layerdrop": 0.0,
"decoder_layers": 6,
"decoder_start_token_id": 59513,
"decoder_vocab_size": 59514,
"dropout": 0.1,
"encoder_attention_heads": 8,
"encoder_ffn_dim": 2048,
"encoder_layerdrop": 0.0,
"encoder_layers": 6,
"eos_token_id": 0,
"forced_eos_token_id": 0,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 512,
"max_position_embeddings": 512,
"model_type": "marian",
"normalize_before": false,
"normalize_embedding": false,
"num_beams": 4,
"num_hidden_layers": 6,
"pad_token_id": 59513,
"scale_embedding": true,
"share_encoder_decoder_embeddings": true,
"static_position_embeddings": true,
"transformers_version": "4.22.0.dev0",
"use_cache": true,
"vocab_size": 59514
}
```
Entire config.json for facebook/mbart-large-50:

```json
{
"_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_final_layer_norm": true,
"architectures": [
"MBartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": 2,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"forced_eos_token_id": 2,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"max_length": 200,
"max_position_embeddings": 1024,
"model_type": "mbart",
"normalize_before": true,
"normalize_embedding": true,
"num_beams": 5,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"scale_embedding": true,
"static_position_embeddings": false,
"transformers_version": "4.4.0.dev0",
"use_cache": true,
"vocab_size": 250054,
"tokenizer_class": "MBart50Tokenizer"
}
```
Crossposts:
- https://ai.stackexchange.com/q/48427/4
- https://qr.ae/pAAezR
- https://redd.it/1k3vdl7
- https://redd.it/1k3vddf
- https://redd.it/1k3vdri
- https://redd.it/1k3vdu5
- https://redd.it/1k3vdwy
- https://redd.it/1k3vdxy
- https://redd.it/1k3ve37
- https://redd.it/1k3ve6z -> https://redd.it/1k5efse
- https://redd.it/1k3vedz -> https://redd.it/1k3vdl7
1 answer
Not sure if this is really a coding question (the ML or AI Tech proposals might be a better fit).
Not having distinct begin and end tokens is not that unusual. For example, the GPT-2 tokenizer also uses only a single special token, <|endoftext|>, rather than multiple distinct tokens. It is simply a design choice that has to be made at the start of training; once the model has been trained, you have to stick with the tokenisation it was trained with.
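You can see this directly from the GPT-2 tokenizer (a small sketch, assuming the transformers package is installed):

```python
# Sketch: GPT-2 reuses the single special token <|endoftext|> for both bos and eos.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
print(tok.bos_token, tok.bos_token_id)  # <|endoftext|> 50256
print(tok.eos_token, tok.eos_token_id)  # <|endoftext|> 50256
```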
When you have multiple sequences in one input string, the end of the previous sequence is likely to be the beginning of the next one anyway. Therefore, I suspect that a distinct begin token does not bring much benefit for regular auto-regressive models. I expect it might be more advantageous for bi-directional models, where distinguishing between the end of the forward direction and the end of the reversed direction (i.e. the beginning of the forward sequence) might be helpful.
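As for knowing when a sequence ends: as far as I understand the transformers generation code, the decoder is seeded with decoder_start_token_id (59513 for this Marian model, which is also its pad token) and decoding stops once eos_token_id (0) is produced, so sharing an ID between bos and eos does not get in the way. A minimal sketch of this (assuming transformers and sentencepiece are installed):

```python
# Sketch: generation with the Marian model stops once eos_token_id (0) is emitted.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-fr-en"
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tok(["Bonjour le monde"], return_tensors="pt")
out = model.generate(**batch)
print(out[0])  # begins with decoder_start_token_id (59513), ends with eos_token_id (0)
print(tok.decode(out[0], skip_special_tokens=True))
```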