Post History
#1: Initial revision
Not sure if this is really a coding question (maybe the [ML](https://proposals.codidact.com/posts/289179) or [AI Tech](https://proposals.codidact.com/posts/289124) proposals might be better suited).

Not having distinct begin and end tokens is not that extraordinary. For example, the [GPT-2 tokenizer](https://github.com/openai/tiktoken/blob/main/tiktoken_ext/openai_public.py#L17C1-L30C6) also uses only a single special token, `<|endoftext|>`, rather than multiple distinct ones. It is simply a design choice that has to be made before training starts; once the model has been trained, you have to stick with the tokenisation it was trained with.

When you have multiple sequences in one input string, the end of the previous sequence is likely to be the beginning of the next one anyway, so I suspect a separate begin token does not offer much benefit for regular auto-regressive models. I expect it might be more advantageous for bi-directional models, where distinguishing between the end of the forward direction and the end of the reversed direction (i.e. the beginning of the forward sequence) might be helpful.
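To make the GPT-2 example concrete, here is a minimal sketch using the [`tiktoken`](https://github.com/openai/tiktoken) library (the repository the link above points into). It shows that the GPT-2 encoding ships with `<|endoftext|>` as its only special token, and that the same token serves as the boundary between two documents packed into one string:

```python
import tiktoken

# Load the GPT-2 byte-pair encoding defined in the linked file.
enc = tiktoken.get_encoding("gpt2")

# There is exactly one special token in this vocabulary.
print(enc.special_tokens_set)  # {'<|endoftext|>'}
print(enc.eot_token)           # 50256 -- the id of <|endoftext|>

# Two documents joined in one input: the single special token marks the
# end of the first document and, implicitly, the start of the second.
ids = enc.encode(
    "first document<|endoftext|>second document",
    allowed_special={"<|endoftext|>"},
)
print(ids)
```

Because every sequence boundary is the same token, the model only ever needs to learn one delimiter, which is exactly the design choice described above.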