Here are my contacts:
The course focuses on introducing foundation models and multimodal models. We will begin by examining the building blocks of modern large language models: the transformer architecture, its self-attention mechanism, and its generative configuration. We will then look more closely at how a foundation model is trained, fine-tuned, and used during inference, highlighting some of the most popular optimisation techniques. Next, we will work through an extended hands-on exercise in which we take a pre-trained model and adapt it to a specific domain. If time allows, we will introduce agents.
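As a preview of the self-attention mechanism mentioned above, here is a minimal sketch of scaled dot-product self-attention in NumPy. All names, shapes, and weight initialisations are illustrative choices, not taken from the course materials:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model). Wq/Wk/Wv project the input into
    # query, key, and value spaces.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

# Tiny illustrative example with random weights.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualised vector per input position
```

A full transformer layer would add multiple attention heads, residual connections, layer normalisation, and a feed-forward block on top of this core operation; the course will cover those components in detail.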
Missing: