My next blog post is dropping this week, and it’s a much deeper dive than usual.
I’ll be walking through how I fine-tuned Microsoft’s Phi-3-mini-4k-instruct (a 3.8B-parameter model) with LoRA on my Mac using MLX.
The experiment: can a 3.8B model that runs entirely locally be fine-tuned to "speak like me" by training it on my own blog posts?
I’ve already pushed the LoRA adapter weights to Hugging Face.
But more importantly, the post will share the entire process so that more technical folks can learn how to get started with fine-tuning:
- Preparing the training data
- Training the model and hyperparameters
- Evaluating results
- Publishing to Hugging Face
And I’ll share all the code required to do it yourself.
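As a preview of the data-preparation step, here's a minimal sketch of how blog posts might be converted into the JSONL format that mlx-lm's LoRA trainer consumes (one `{"text": ...}` object per line). The post bodies, the prompt string, and the output filename are all placeholders, not the actual data or setup from the upcoming post:

```python
import json

# Placeholder blog posts standing in for the real training corpus.
posts = [
    "First blog post body goes here...",
    "Second blog post body goes here...",
]

def to_training_records(posts, prompt="Write a blog post in my style."):
    # Wrap each post in Phi-3's chat template (<|user|> / <|assistant|> /
    # <|end|> markers) so the model learns to emit post-like text when
    # given an instruction.
    for body in posts:
        yield {
            "text": f"<|user|>\n{prompt}<|end|>\n<|assistant|>\n{body}<|end|>"
        }

# mlx-lm's LoRA trainer expects newline-delimited JSON, e.g. train.jsonl.
with open("train.jsonl", "w") as f:
    for record in to_training_records(posts):
        f.write(json.dumps(record) + "\n")
```

With a `train.jsonl` (and a matching `valid.jsonl`) in place, training can then be launched with mlx-lm's LoRA entry point, pointing it at the model and the data directory.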
