Conversation

@Cydral (Contributor) commented Jan 7, 2025

This example demonstrates a minimal implementation of a Very Small Language Model (VSLM) using Dlib's Transformer architecture.
The code showcases key features of the new Transformer layers, including attention mechanisms, positional embeddings, and a classification head, while maintaining a simple character-based tokenization approach.
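For illustration, here is a minimal sketch of what the character-based tokenization idea looks like in plain C++. The function and type names (`encode`, `decode`, `training_sample`, `make_samples`) are illustrative placeholders, not the names used in the actual dlib example; the point is only that each character maps directly to a token id and training samples are fixed-length windows labeled with the next character.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Character-level tokenization: the vocabulary is just the byte values, so each
// character maps directly to a token id and no vocabulary file is needed.
std::vector<int> encode(const std::string& text)
{
    std::vector<int> tokens;
    tokens.reserve(text.size());
    for (unsigned char c : text)
        tokens.push_back(static_cast<int>(c)); // token id == byte value
    return tokens;
}

std::string decode(const std::vector<int>& tokens)
{
    std::string text;
    for (int t : tokens)
        text.push_back(static_cast<char>(t));
    return text;
}

// Training samples are fixed-length windows of token ids, each labeled with the
// character that immediately follows the window (next-character prediction).
struct training_sample
{
    std::vector<int> input;  // sequence_length consecutive token ids
    int next_token;          // id of the character to predict
};

std::vector<training_sample> make_samples(const std::vector<int>& tokens,
                                          std::size_t sequence_length)
{
    std::vector<training_sample> samples;
    for (std::size_t i = 0; i + sequence_length < tokens.size(); ++i)
    {
        training_sample s;
        s.input.assign(tokens.begin() + i, tokens.begin() + i + sequence_length);
        s.next_token = tokens[i + sequence_length];
        samples.push_back(std::move(s));
    }
    return samples;
}
```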

Using Shakespeare's text as training data, the example illustrates both the training process and text generation capabilities, making it an excellent educational tool for understanding Transformer architecture basics.
The implementation is intentionally kept lightweight, with a small parameter count, so training and generation stay fast; it still achieves perfect memorization of the training sequences, demonstrating the effectiveness of attention mechanisms in sequence learning tasks.
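To give a feel for the generation side, the following is a rough sketch of a greedy autoregressive loop. It is not the code from the example: `predict_next` is a hypothetical stand-in for a forward pass through the trained dlib network (the real example runs the network object directly), and the windowing detail is an assumption about how the fixed-length input is fed.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Greedy autoregressive generation: repeatedly feed the last sequence_length
// token ids to the model and append the most likely next character.
// predict_next is a hypothetical placeholder for a forward pass through the
// trained network; it returns the id of the predicted next character.
std::string generate(const std::string& prompt,
                     std::size_t num_chars,
                     std::size_t sequence_length,
                     const std::function<int(const std::vector<int>&)>& predict_next)
{
    // Encode the prompt as one token id per character.
    std::vector<int> context(prompt.begin(), prompt.end());
    std::string output = prompt;

    for (std::size_t i = 0; i < num_chars; ++i)
    {
        // The model only ever sees the most recent sequence_length tokens.
        const std::size_t window_size = std::min(context.size(), sequence_length);
        std::vector<int> window(context.end() - window_size, context.end());

        const int next = predict_next(window); // greedy: take the argmax character
        context.push_back(next);
        output.push_back(static_cast<char>(next));
    }
    return output;
}
```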

Cydral and others added 3 commits January 7, 2025 13:02
`pkgutil.find_loader()` has been deprecated in Python 3.12 and will be removed in Python 3.14. Replace it with a direct call to `importlib.util.find_spec()`, the function that `pkgutil.find_loader()` was wrapping.
@davisking merged commit 8fdd2a6 into davisking:master Jan 19, 2025
10 checks passed
@davisking (Owner) commented

Nice, this is really cool. Sorry for taking so long to review it. Just played around with it for a while and it's awesome :D

@Cydral (Contributor, Author) commented Jan 20, 2025

Thanks. It was indeed a huge challenge to integrate this. I'm working on a second example that will give a more complete approach, with support for proper tokenisation.

@davisking (Owner) commented

Sweet, I'll look forward to it 😁
