Post History

Q&A: Building a language model completely from scratch

Answer
#1: Initial revision by matthewsnyder‭ · 2024-02-15T16:07:23Z
This is not feasible as described.

To learn about LLMs, you can look at smaller open models like the 3B-parameter WizardLM. These are open source and should be possible to train and run as-is. The build may be very complex, and consumer hardware may be insufficient (but the easy solution is to run it in the cloud).
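
For instance, here is a minimal inference sketch using the Hugging Face `transformers` library. The model ID is illustrative, so substitute whichever open checkpoint you actually pick:

```python
# Minimal inference sketch with Hugging Face transformers.
# The model ID below is illustrative; substitute the actual
# WizardLM (or other open) checkpoint you want to try.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardLM-7B-V1.0"  # illustrative placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain attention in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```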

You can also look at earlier language models like BERT or seq2seq. Their architectures aren't quite as sophisticated as those of modern LLMs, but they rely on the same principles, such as encoders/decoders and attention.
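
To show how compact the shared core idea is, here is a sketch of scaled dot-product attention, the operation common to seq2seq-with-attention and transformer LLMs, in plain NumPy (names and sizes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted sum of values

# Toy example: 3 query positions, 4 key/value positions, dimension 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```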

These are all neural networks with a large number of nodes. The LLM jargon, like "encoders", just names particular styles of wiring those nodes that turn out to work well in some cases. But you're saying no external libraries, so I guess you're thinking of also re-implementing your own neural network library, like PyTorch, from scratch. PyTorch itself relies on some beefy linear algebra libraries that use optimized C code, by the way. I guess you're thinking of implementing those yourself too, since they're external?
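
To make that scope concrete, this is the kind of primitive you'd be signing up to rewrite: a naive pure-Python matrix multiply. The libraries under PyTorch replace this triple loop with cache-aware, vectorized C kernels that are orders of magnitude faster:

```python
# Naive pure-Python matrix multiply: the most basic primitive a
# "no external libraries" neural network would be built on.
# PyTorch dispatches this same operation to optimized BLAS kernels.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += A[i][t] * B[t][j]
            C[i][j] = s
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```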

The requirement to not use a pre-curated dataset is another problem. If you mean that you want to build your own curated training data: no, you can't. You need a huge amount of data to train a language model, and just collecting it is a megaproject in itself. If you mean that you just want a raw dataset, like a dump of Wikipedia, then yes, you can do that; in fact, many free, open-source models like WizardLM do exactly this. That alone actually hurts their performance. I think WizardLM may have started curating its data better now, though.
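
If you do go the raw-dump route, a sketch along these lines is a common starting point. The dataset name and config here are illustrative and change between releases, so check the Hugging Face Hub before running:

```python
# Sketch: stream a preprocessed Wikipedia dump as raw training text.
# Dataset name/config are illustrative and change between releases;
# check the current Wikipedia datasets on the Hugging Face Hub.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.en", split="train", streaming=True)
for i, article in enumerate(wiki):
    print(article["title"])
    if i >= 4:  # just peek at the first few articles
        break
```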

What you are asking is like saying "I'll develop a program like Adobe Illustrator from scratch, including writing an OS and device drivers for it". It's tempting to engage in a fantasy where you just work 10x faster than the average dev (LLMs are not made by "average" devs, but no matter), scope the problem down a little bit, and have it all sort of work out. It doesn't. In reality, thousands of people work for years to create these systems, and that sort of productivity multiplier is not realistic for an individual unless you reduce the goal so far that the result has essentially zero performance. Just writing all the English UI text for Illustrator would probably take you months.

To learn LLMs, I would say the best path is to pick some key element of them and use off-the-shelf components for everything else. For example, if you're interested in model bias, just train WizardLM with a different dataset. If you're interested in encoders, look at implementing seq2seq with PyTorch, as sketched below.
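
As a sketch of that last suggestion (all sizes and the vocabulary are arbitrary placeholders, and a real model would add attention, a training loop, and a tokenizer):

```python
# Minimal GRU-based seq2seq sketch in PyTorch. All sizes are
# arbitrary placeholders; this shows only the encoder/decoder wiring.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) of token ids
        _, hidden = self.gru(self.embed(src))
        return hidden  # final hidden state summarizes the source

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len); hidden: encoder's final state
        output, hidden = self.gru(self.embed(tgt), hidden)
        return self.out(output), hidden  # logits over the vocabulary

# Toy forward pass
enc, dec = Encoder(1000, 64), Decoder(1000, 64)
src = torch.randint(0, 1000, (2, 7))  # batch of 2 source sequences
tgt = torch.randint(0, 1000, (2, 5))  # batch of 2 target sequences
logits, _ = dec(tgt, enc(src))
print(logits.shape)  # torch.Size([2, 5, 1000])
```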