Courses
Graph Neural Networks
Instructor: Fabrizio Silvestri
This course offers a thorough introduction to Graph Neural Networks (GNNs), tailored to Ph.D. students with a scientific background. It presents a balanced view of both the theoretical aspects and the practical applications of GNNs. We will start with the basics of machine learning, graph theory, and neural networks to build a foundation for understanding GNNs. The course will cover essential GNN architectures, including Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs), along with recent developments in the field. At the end of the course, we will also cover the explainability of GNN models' predictions.
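To make the convolutional message-passing idea concrete, the following minimal sketch (our own illustration, not course material; it assumes NumPy) implements one GCN layer using the propagation rule H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W) popularized by Kipf and Welling:

    import numpy as np

    def gcn_layer(A, H, W):
        """One Graph Convolutional Network (GCN) layer.
        A: (n, n) adjacency matrix; H: (n, d_in) node features; W: (d_in, d_out) weights."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)                     # node degrees of A_hat
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
        # Symmetrically normalized neighborhood aggregation, then a ReLU nonlinearity.
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

    # Toy usage: a 3-node path graph with random 2-dimensional node features.
    A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    H = np.random.randn(3, 2)
    W = np.random.randn(2, 2)
    print(gcn_layer(A, H, W))                     # (3, 2) matrix of updated node embeddings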
Program analysis: from proving correctness to proving incorrectness
Instructors: Roberto Bruni, Roberta Gori
This course offers a focused exploration of formal methods in software development, with emphasis on the shift of perspective that followed Peter O'Hearn's influential paper on incorrectness logic. Instead of exploiting over-approximations to prove program correctness, as classical formal methods do, incorrectness reasoning exploits under-approximations to expose true bugs.
The overall goal of incorrectness methods is to develop principled techniques that assist programmers with timely feedback about the presence of true errors, with few or no false alarms.
The course will survey different approaches, such as program logics, pointer analysis, and abstract interpretation, for both over- and under-approximation, as well as their combination.
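As a concrete illustration of the contrast between the two styles of reasoning, consider the following hedged Python sketch (our own toy example, not taken from the course material):

    def average(xs):
        # Bug: raises ZeroDivisionError when xs is the empty list.
        return sum(xs) / len(xs)

    # Correctness (Hoare) reasoning is over-approximate:
    #     {xs is a non-empty list of numbers} average(xs) {result = sum(xs)/len(xs)}
    # The precondition excludes the empty list, so the proof is silent about
    # what happens when xs == [].
    #
    # Incorrectness reasoning is under-approximate:
    #     [xs == []] average(xs) [er: ZeroDivisionError]
    # Every state described by the error postcondition is actually reachable,
    # so the triple witnesses a true bug, with no false alarms by construction.

    try:
        average([])               # concrete witness of the incorrectness triple
    except ZeroDivisionError as e:
        print("true bug exposed:", e)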
Large Language Models
Instructor: Danilo Croce
In recent years, Large Language Models (LLMs) have revolutionized computational linguistics and computer science, offering new insights and capabilities in language processing. This course will guide participants from the foundational "Distributional Hypothesis" to the advent of word embeddings and the role of Transformer models. We will focus in particular on how models like GPT and LLaMA have evolved to support language inference tasks and can be fine-tuned for specific applications, culminating in sophisticated models such as ChatGPT, Alpaca, and Vicuna. To address their computational cost, we will explore efficiency techniques such as quantization and low-rank adaptation (LoRA), which make it feasible to run these models on standard hardware. The course will also highlight the development of a unified architecture whose versatility was demonstrated across EVALITA 2023's twenty-two semantic processing tasks, showcasing the practical application of LLMs to complex linguistic challenges. Attendees will leave with a clear understanding of how to train and adapt foundational models for a wide range of NLP tasks. All the material presented in the course is available at: https://github.com/crux82/BISS-2024
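To give a flavor of how low-rank adaptation works, here is a minimal, self-contained sketch (assuming PyTorch; the wrapper class, dimensions, and hyperparameters are illustrative assumptions, not the course's code). A pretrained linear layer is frozen and augmented with a trainable low-rank update W + (alpha/r) * B A, so only the small matrices A and B are optimized during fine-tuning:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update (LoRA)."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():      # freeze the pretrained weights
                p.requires_grad = False
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B = 0: no change at init
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    # Usage: wrap a projection inside a Transformer block, then train only A and B.
    layer = LoRALinear(nn.Linear(768, 768))
    y = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print("trainable parameters:", trainable)     # far fewer than 768 * 768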