Our colleagues from EuroCC Austria are kicking off the new year with a training series titled Foundations of LLM Mastery. The first event in this series, Fine-tuning with One GPU, is scheduled for 22 January 2025. This event is organized in collaboration with the VSC Research Center and TU Wien. Please note that the registration deadline is 6 January 2025.
This 3.5-hour course is designed for industry professionals, from startups to large enterprises, and will provide practical techniques for fine-tuning large language models (LLMs) while maintaining model performance. Participants will explore key concepts such as quantization, a family of strategies for reducing the memory needed to deploy LLMs on resource-constrained hardware. They will also learn about parameter-efficient fine-tuning (PEFT) with LoRA, which adapts large models by updating only a small subset of parameters, reducing computational demands while still delivering high-quality results.
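To give a flavour of the two ideas mentioned above, here is a minimal, self-contained sketch (assumed for illustration, not taken from the course materials): symmetric 8-bit quantization of a few toy weights, and a back-of-the-envelope count of how few parameters a LoRA adapter trains compared with full fine-tuning. All values (the toy weights, the hidden size `d`, and the rank `r`) are hypothetical.

```python
# --- Quantization: map float32 weights to int8, cutting memory roughly 4x ---
weights = [0.42, -1.37, 0.05, 0.91]              # toy weight values (illustrative)
scale = max(abs(w) for w in weights) / 127       # symmetric int8 scaling factor
quantized = [round(w / scale) for w in weights]  # stored as 8-bit integers
dequantized = [q * scale for q in quantized]     # approximate reconstruction

# --- LoRA: train two low-rank factors instead of a full weight matrix ---
d, r = 4096, 8                  # hidden size and LoRA rank (illustrative values)
full_params = d * d             # parameters touched by full fine-tuning of one layer
lora_params = 2 * d * r         # parameters in the A (r x d) and B (d x r) adapters

print(f"LoRA trains {lora_params / full_params:.2%} of the layer's parameters")
```

With these example numbers, the LoRA adapter holds well under one percent of the layer's parameters, which is the intuition behind why PEFT makes single-GPU fine-tuning feasible.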