
Advanced LLM Fine-Tuning: Customization for Real-World Impact

This hands-on workshop is designed for developers who have a foundational understanding of Large Language Models (LLMs) and prompt engineering, and who are eager to take their expertise to the next level. Attendees will explore the flexibility and control that open-source LLMs offer, learning how to fine-tune these models and integrate them into intelligent, autonomous AI systems.


Suraj Subramanian

ML Engineer

Meta AI


Time and Location

April 16
9:00am - 3:30pm
Cobb Galleria

Curriculum

What You’ll Learn

Participants will gain practical experience with cutting-edge fine-tuning methods, including:
✅ Full-Parameter Fine-Tuning – Achieve state-of-the-art performance by updating all model weights (resource-intensive).
✅ Parameter-Efficient Fine-Tuning (PEFT) – Leverage techniques like LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA) to fine-tune LLMs efficiently on consumer GPUs (see the sketch after this list).
✅ Experiment Tracking & Evaluation – Learn best practices for logging, tracking, and evaluating fine-tuned models using tools like Weights & Biases (W&B).
✅ Agentic AI Development (time permitting) – Explore how to build AI agents capable of autonomous decision-making and task execution, leveraging fine-tuned models.
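
To give a sense of what the PEFT and experiment-tracking portions look like in practice, here is a minimal sketch of a LoRA fine-tuning run built on Hugging Face's transformers, peft, and datasets libraries, with metrics streamed to Weights & Biases. The model name, dataset, and hyperparameters are illustrative assumptions, not the workshop's official recipe.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"   # assumed example; any causal LM works here

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# LoRA injects small low-rank adapter matrices instead of updating all weights.
lora_config = LoraConfig(
    r=16,                                  # rank of the adapter matrices
    lora_alpha=32,                         # scaling factor applied to the adapters
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of all parameters

# Placeholder instruction-tuning data; swap in your own dataset.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")

def tokenize(example):
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-lora-demo",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    report_to="wandb",                      # stream loss and metrics to Weights & Biases
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-lora-demo/adapter")   # the saved adapter is only a few MB

The key idea: only the small adapter matrices receive gradients, so optimizer state stays small and the artifact you ship is a lightweight adapter rather than a full copy of the model.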

 

Why Attend?

🔹 Learn by Coding – This workshop prioritizes hands-on implementation, guiding attendees through practical coding exercises and fine-tuning recipes using Meta Llama models.
🔹 Work with Limited Resources – Discover how to fine-tune large models on a single consumer-grade GPU (e.g., 24GB VRAM) using efficient techniques like QLoRA; a configuration sketch follows this list.
🔹 Production-Ready Skills – Walk away with the ability to customize LLMs for business applications, research, or AI-driven automation.
🔹 Access Best Practices – Leverage Hugging Face's PEFT library (LoRA and QLoRA implementations), TorchTune, and Axolotl for scalable fine-tuning and seamless deployment.
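
As a companion to the single-GPU point above, here is a minimal QLoRA-style sketch: the frozen base weights are loaded in 4-bit with bitsandbytes while only small LoRA adapters are trained, which is what makes a 24GB-class GPU workable. The model name and settings are assumptions for illustration, not the exact workshop configuration.

import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = "meta-llama/Llama-3.1-8B"   # assumed example; tight on 24GB in 16-bit precision

# QLoRA: quantize the frozen base weights to 4-bit NF4; compute and adapters stay in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)   # enables gradient checkpointing, casts norms

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, the training loop is the same as the standard LoRA sketch above.

Because the base weights sit in GPU memory as 4-bit tensors and optimizer state is kept only for the adapters, memory use drops far below what full-precision fine-tuning would require.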

Workshop Requirements

  • Coming Soon

Who Is Your Instructor?

Suraj Subramanian is an ML Engineer at Meta AI, where he works on PyTorch, Llama, and Llama Stack. He builds educational material that makes the PyTorch ecosystem of SDKs easier to use.
