


Building a language model completely from scratch

+3
−5

What I would like to do

I would like to try to build a language model 100% from scratch, if possible, as a learning experience. That means no external libraries and no pre-curated datasets.

  • It is ok if the performance is terrible.
  • If it is usable for anything, that might be a plus.

Languages

My best language is Python, but I'm open to doing this in a lower-level language like C or Rust, or possibly JavaScript, or even Haskell. In other words, it's the abstract structure of the program that interests me, not so much the language-specific code that implements it.

Rough idea

I would code a very simple version of the algorithm used in an LLM, presumably a transformer if possible. To generate the data on my own, I would either write my own web crawler, or, even better, train the model on language data I generate myself, perhaps by talking to it, even if that restricts it to a very small vocabulary.
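To make the scale concrete, here is the kind of toy I have in mind (purely illustrative: a character-level bigram model in plain Python, trained on whatever I type at it; the transformer version would obviously be far more involved):

```python
import random
from collections import defaultdict

# Character-level bigram "language model": count how often each
# character follows each other character, then sample from those counts.
counts = defaultdict(lambda: defaultdict(int))

def train(text):
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1

def generate(start, length=80):
    out = [start]
    for _ in range(length):
        nxt = counts[out[-1]]
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# "Talking to the model": every line I type becomes training data.
while True:
    line = input("> ")
    train(line + "\n")
    print(generate(line[0] if line else "\n"))
```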

What I need help with

Please give concrete information about the architecture of the code, rather than pointers to external reference materials.


2 comment threads

Question is not bad (3 comments)
Too broad? (1 comment)
Answer
+3
−1

This is not feasible as described.

To learn about LLMs, you can look at models like the 3B WizardLM. These are open source, and it should be possible to just train and run them as-is. The build may be quite complex, and consumer hardware may be insufficient (but the easy solution is to run it in the cloud).

You can also look at earlier language models like BERT or seq2seq models. Their architecture isn't quite as sophisticated as that of modern LLMs, but they rely on the same principles, such as encoders/decoders and attention.
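To give a flavour of what "attention" means concretely, here is a rough sketch of scaled dot-product attention in PyTorch (a simplified version of what these models do internally; the shapes and names are my own):

```python
import math
import torch

def attention(q, k, v):
    # q, k, v: (batch, seq_len, d) tensors.
    # Each output position is a weighted average of the values v,
    # with weights given by how well q matches each k.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

x = torch.randn(1, 5, 16)   # toy "sequence" of 5 token vectors
out = attention(x, x, x)    # self-attention: q = k = v = x
print(out.shape)            # torch.Size([1, 5, 16])
```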

These are all neural networks with a large number of nodes. The LLM jargon, like "encoders", just names particular styles of wiring up those nodes that turn out to work well in some cases. But you're saying no external libraries, so I take it you're also planning to re-implement your own neural network library, like PyTorch, from scratch. Note that PyTorch itself relies on some beefy linear algebra libraries built from optimized C code; presumably you're planning to implement those yourself too, since they're external?
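To give a sense of the scope of "your own PyTorch", the very first brick is an automatic-differentiation engine. A minimal scalar version looks something like this (a toy sketch supporting only + and *; real libraries do this over tensors with optimized kernels):

```python
class Value:
    """A scalar that remembers how it was computed, so gradients
    can flow backwards through the graph (the bare minimum of autograd)."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(2.0), Value(3.0)
z = x * y + x          # z = 2*3 + 2 = 8
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```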

The requirement not to use a pre-curated dataset is another problem. If you mean that you want to build your own training data: no, you can't. Training a language model takes a huge amount of data, and just collecting it is a megaproject in itself. If you mean that you just want a raw dataset, like a dump of Wikipedia, then yes, you can do that; in fact, many free, open-source models like WizardLM do. That alone noticeably hurts their performance. (I think WizardLM may have started curating its data better now, though.)
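As a rough illustration of the "raw dataset" route, something like this streams article text out of a locally downloaded, decompressed MediaWiki XML dump (the filename is hypothetical, the tag check is simplified, and real use needs a lot more cleanup, e.g. stripping wiki markup):

```python
import xml.etree.ElementTree as ET

# Stream the dump incrementally instead of loading it into memory,
# yielding the raw wikitext of each page.
def articles(path):
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag.endswith("}text") and elem.text:
            yield elem.text
        elem.clear()  # free memory as we go

for i, text in enumerate(articles("enwiki-latest-pages-articles.xml")):
    print(text[:80])
    if i >= 2:
        break
```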

What you are asking is like saying "I'll develop a program like Adobe Illustrator from scratch, including writing an OS and device drivers for it." It's tempting to imagine that you'll just work 10x faster than the average dev (LLMs are not made by "average" devs, but no matter), scope the problem down a little bit, and have it all sort of work out. It doesn't. In reality, thousands of people work for years to create these systems, and that sort of productivity multiplier is not realistic for an individual unless you cut the goal down so far that performance drops to zero. Just writing all the English UI text for Illustrator would probably take you months.

To learn about LLMs, I would say the best path is to pick some key element of them and use off-the-shelf components for everything else. For example, if you're interested in model bias, train WizardLM with a different dataset. If you're interested in encoders, look at implementing seq2seq with PyTorch.
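For that last suggestion, the skeleton of a seq2seq model in PyTorch is small enough to sketch (hyperparameters and shapes here are arbitrary; a real model adds attention, batching, and a training loop):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        # Encode the source sequence into a final hidden state...
        _, h = self.encoder(self.embed(src))
        # ...then decode the target sequence conditioned on it.
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)  # (batch, tgt_len, vocab_size) logits

model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (2, 7))  # batch of 2 source sequences
tgt = torch.randint(0, 1000, (2, 5))  # batch of 2 target sequences
print(model(src, tgt).shape)          # torch.Size([2, 5, 1000])
```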


1 comment thread

user253751 wrote 10 months ago:

OK, but asker is not asking how long it would take, just what the steps would be. A good answer is: You would have to do this, this, this, this and this. But that will probably take a very long time, so consider taking these shortcuts.