Lecturers

Introduction to Machine Learning
Simon L. Gay, Université Grenoble Alpes, LCIS

Abstract: This lecture introduces the main concepts and principles used in Artificial Intelligence models, such as artificial neurons, neural networks, and deep convolutional networks. By introducing models step by step, from the simple neuron to complex deep networks, this lecture aims to open the black box that current AI models represent. In the proposed lab classes, you will implement a fully functional artificial neuron and a small convolutional network.
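
For a concrete flavour of what the lab classes build up to, here is a minimal sketch of a single artificial neuron (a weighted sum followed by a sigmoid activation); the weights and inputs below are made-up values for illustration, not course material.

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of inputs, then an activation."""
    z = np.dot(w, x) + b             # weighted sum plus bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation, output in (0, 1)

# Example with three inputs (hypothetical values)
x = np.array([0.5, -1.0, 2.0])  # input vector
w = np.array([0.8, 0.2, -0.4])  # weights, normally learned from data
b = 0.1                         # bias term
print(neuron(x, w, b))
```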

Lecturer's bio: Simon L. Gay received a degree in computer science engineering in 2010 from the ENSEEIHT engineering school, Toulouse, France, and the Ph.D. degree from Claude Bernard University, Lyon, France, in 2014. Since 2020, he has been an associate professor at Université Grenoble Alpes, France. His research interests include developmental learning, artificial knowledge construction, behavior emergence through interaction with the environment, spatial integration, and bio-inspired navigation. His current research focuses on extending these learning and navigation mechanisms to multi-agent contexts.

 

Decentralized machine learning as an enabler of decentralized online services
Sonia Ben Mokhtar, CNRS, LIRIS

Abstract: There is a strong momentum towards data-driven services at all layers of society and industry. This started with large-scale web-based applications such as Web search engines (e.g., Google, Bing), social networks (e.g., Facebook, TikTok, Twitter, Instagram) and recommender systems (e.g., Amazon, Netflix), and is becoming increasingly pervasive thanks to the adoption of handheld devices and the advent of the Internet of Things. Recent initiatives such as Web 3.0 come with the promise of decentralizing such services, empowering users to regain control over their personal data and preventing a few economic actors from over-concentrating decision power. However, decentralizing online services calls for decentralizing the machine learning algorithms on which they heavily rely. In this presentation, I will describe the work we carry out in our team towards decentralizing machine learning, focusing on two aspects: the impact of decentralization on personalization and on privacy.
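
As a rough illustration of one building block behind decentralized learning, the sketch below runs simple gossip-averaging rounds in which each node averages its model parameters with those of its neighbours; the topology, values, and update rule are illustrative assumptions, not the specific algorithms covered in the talk.

```python
import numpy as np

def gossip_round(params, neighbors):
    """One synchronous gossip round: each node replaces its parameter
    vector with the average of its own and its neighbors' vectors."""
    return {
        node: np.mean([theta] + [params[n] for n in neighbors[node]], axis=0)
        for node, theta in params.items()
    }

# Hypothetical 3-node fully connected topology with 1-d "models"
params = {0: np.array([1.0]), 1: np.array([5.0]), 2: np.array([9.0])}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(10):
    params = gossip_round(params, neighbors)
print(params)  # all nodes converge towards the global average (5.0)
```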

Lecturer's bio: Sonia Ben Mokhtar is a CNRS research director at the LIRIS laboratory, Lyon, France, and the head of the distributed systems and information retrieval group (DRIM). She received her PhD in 2007 from Université Pierre et Marie Curie before spending two years at University College London (UK). Her research focuses on the design of resilient and privacy-preserving distributed systems. Sonia has co-authored 70+ papers in peer-reviewed conferences and journals, has served on the editorial board of IEEE Transactions on Dependable and Secure Computing, and has co-chaired major conferences in the field of distributed systems (e.g., ACM Middleware, IEEE DSN). She has also served as chair of ACM SIGOPS France and as co-chair of GDR RSD, a national academic network of researchers in distributed systems and networks.

 

Towards Machine Learning Models that We Can Trust: Hacking and (properly) Testing AI
Maura Pintor, University of Cagliari, PRALab

Abstract: As current data-driven AI and machine-learning methods have not been designed with attackers and security in mind, it is important to evaluate these technologies properly before deploying them in the wild. To understand AI's sensitivity to such attacks and to counter their effects, machine-learning model designers craft worst-case adversarial perturbations and test them against the model they are evaluating. However, many of the proposed defenses have been shown to provide a false sense of security due to failures of the attacks rather than actual robustness. To this end, we will dive into the literature on machine-learning evaluation in the context of evasion attacks and analyze (and reproduce) failures of the past to prevent these mistakes from happening again.
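
To make the notion of worst-case adversarial perturbations concrete, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a logistic-regression model; the model, weights, and perturbation budget are hypothetical, and the attacks and defenses covered in the lecture are not limited to this setting.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model:
    move x in the direction that increases the cross-entropy loss,
    staying inside an L-infinity ball of radius eps."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                           # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)               # worst-case perturbation

w = np.array([1.5, -2.0]); b = 0.0  # hypothetical trained weights
x = np.array([1.0, 1.0]); y = 1.0   # a correctly classified example
x_adv = fgsm(x, y, w, b, eps=0.3)
print(x, "->", x_adv)               # adversarial example to test the model on
```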

Lecturer's bio: Maura Pintor is an Assistant Professor at the Pattern Recognition and Applications Laboratory (PRALab), in the Dept. of Electrical and Electronic Engineering of the University of Cagliari (Italy). She received her PhD in Electronic and Computer Engineering from the University of Cagliari in 2022. Her PhD thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. She was a visiting student at Eberhard Karls Universitaet Tuebingen, Germany, from March to June 2020 and at the Software Competence Center Hagenberg (SCCH), Austria, from May to August 2021. She is a reviewer for ACM CCS, ECCV, ICPR, IJCAI, ICLR, NeurIPS, ACSAC, ICCV, ARES, and for the journals IEEE TIFS, IEEE TIP, IEEE TDSC, IEEE TNNLS, TOPS. She is co-chair of the ACM Workshop on Artificial Intelligence and Security (AISec), co-located with ACM CCS.

 

Generative AI in Cybersecurity: Generating Offensive Code from Natural Language
Pietro Liguori, University of Naples Federico II, DESSERT group

Abstract: In this presentation, we will explore the role of AI code generators in cybersecurity, focusing on how these technologies enhance offensive strategies. Leveraging the capabilities of Generative AI, AI code generators translate natural language descriptions into executable code. The presentation will highlight practical applications of AI in cybersecurity, demonstrating how AI-generated code can be used to produce offensive security code, and will examine datasets that illustrate the generation of security exploits. We will also discuss evaluation methodologies for these models, emphasizing the importance of robustness and reliability in AI-generated code. A significant part of the presentation will focus on the assessment of AI-generated code, employing both automatic and human evaluation metrics to ensure accuracy and effectiveness. Finally, we will analyze the role of data processing and data augmentation techniques, showcasing how these methods enhance the performance and resilience of AI code generators in security contexts. By providing a comprehensive overview of AI's role in cybersecurity, the presentation will give participants valuable insights into the future of automated offensive security practices and the critical importance of thoroughly assessing AI-generated code.
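
As a toy illustration of automatic evaluation, the sketch below scores a generated snippet against a reference using a simple token-overlap ratio; both snippets and the metric itself are illustrative stand-ins for the evaluation methodologies discussed in the talk, not the actual datasets or metrics used.

```python
import difflib

def similarity(generated: str, reference: str) -> float:
    """Token-level similarity between generated and reference code,
    a simple stand-in for automatic code-evaluation metrics."""
    gen, ref = generated.split(), reference.split()
    return difflib.SequenceMatcher(None, gen, ref).ratio()

# Hypothetical example: model output vs. ground-truth snippet
reference = "xor eax, eax ; ret"
generated = "xor eax, eax ; nop ; ret"
print(f"similarity = {similarity(generated, reference):.2f}")
```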

Lecturer's bio: Pietro Liguori is an Assistant Professor at the Department of Electrical Engineering and Information Technology (DIETI) at the University of Naples Federico II, Italy. He holds a Ph.D. in Information Technologies and Electrical Engineering and is a member of the Dependable and Secure Software Engineering and Real-Time Systems (DESSERT) group. His research focuses on the security and robustness of AI code generators and the application of large language models in offensive security. On these topics, he has authored several papers that appeared in international journals and conferences. He has also published on topics including fault-injection testing, failure mode analysis, and runtime failure detection in cloud computing infrastructures.

 

Formal Methods for Machine Learning Pipelines
Caterina Urban, INRIA, ANTIQUE Team

Abstract: Formal methods offer rigorous assurances of correctness for both hardware and software systems. Their use is well established in industry, notably to certify the safety of critical applications subject to stringent certification processes. With the rising prominence of machine learning, the integration of machine-learned components into critical systems presents novel challenges for the soundness, precision, and scalability of formal methods.
This lecture serves as an introduction to formal methods tailored for machine learning pipelines, highlighting their strengths and limitations. We will present several approaches through the lens of different software properties, targeting software across all phases of a machine learning pipeline, with a focus on abstract interpretation-based techniques. We will conclude by offering perspectives on future research directions in this evolving context.
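
To make the abstract-interpretation angle concrete, here is a minimal sketch that propagates an interval (box) abstraction through one affine layer and a ReLU of a hypothetical network; the weights and input box are made-up values, and real verifiers use richer abstract domains than plain intervals.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer Wx + b
    using interval arithmetic: positive weights take the matching bound,
    negative weights take the opposite one."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds to bounds exactly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical 2x2 layer: bound the outputs for ALL inputs in [0,1]^2
W = np.array([[1.0, -1.0], [0.5, 2.0]]); b = np.array([0.0, -1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_affine(lo, hi, W, b))
print(lo, hi)  # sound over-approximation of every reachable output
```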

Lecturer's bio: Caterina is a research scientist in the Inria research team ANTIQUE (ANalyse StaTIQUE), working on static analysis methods and tools to enhance the reliability and our understanding of data science and machine learning software. She is Italian and studied for her Bachelor's (2009) and Master's (2011) degrees in Computer Science at the University of Udine. She then moved to France and completed her Ph.D. (2015) in Computer Science under the supervision of Radhia Cousot and Antoine Miné at École Normale Supérieure. Before joining Inria (2019), she was a postdoctoral researcher at ETH Zurich in Switzerland.

 

Multiagent trust management models
Laurent Vercouter, INSA Rouen, LITIS

Abstract: Trust management models follow a soft-security approach in which the behavior of (software) entities is observed, monitored, and analyzed to evaluate their trustworthiness. In decentralized and multiagent systems, trust evaluation and decision algorithms have to be adapted to the dynamics, scale, and distribution of these systems. The course begins with an introduction to the general motivations for trust in multiagent systems. The typical problems tackled by trust and reputation models are presented, as well as the main approaches, ranging from game-theoretical models to socio-cognitive approaches. The foundations of trust models are then explained: starting from sociological studies, we identify the main concepts involved in trust models. Clear definitions of concepts such as trust, image, reputation, and recommendation are given using the typology proposed by L. Mui, the functional ontology of reputation proposed by S. Casare, and the work of Conte and Paolucci.
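
As a minimal concrete example of a reputation update in this family of models, the sketch below computes expected trustworthiness under a simple Beta model of positive and negative interaction outcomes; the counts are hypothetical, and this is a generic illustration rather than any specific model presented in the course.

```python
def beta_trust(positive: int, negative: int) -> float:
    """Expected trustworthiness under a Beta(alpha, beta) reputation model,
    with alpha = positive outcomes + 1 and beta = negative outcomes + 1."""
    return (positive + 1) / (positive + negative + 2)

# An agent's view of a partner after 10 observed interactions
print(beta_trust(positive=8, negative=2))  # 0.75: fairly trustworthy
```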

Lecturer's bio: Laurent Vercouter has been a Full Professor in Computer Science at INSA Rouen Normandie and in the LITIS laboratory since 2011. He received his PhD in computer science in 2000 and was previously an Assistant Professor at the École Nationale Supérieure des Mines de Saint-Étienne. His research expertise is in the domain of multiagent systems, with a specific focus on decentralized trust management systems, multiagent reinforcement learning, and socio-technical systems. He is one of the founding members of the ART Testbed project, started in 2004 to provide a standard evaluation platform for multiagent trust models. He has led or participated in several international and national projects in his research fields and has published more than 100 papers.

 

The legal issues of AI
William Letrone and Ludovica Robustelli, Nantes University, DCS research center

Abstract: What are the legal issues and implications of artificial intelligence systems? The question is crucial considering the risks associated with AI and its pervasiveness in the digital landscape. There is a fast-growing body of work focused on regulating the technology. At the international level and within national jurisdictions, actors have recognized the need to adapt existing frameworks or enact new laws to ensure adequate safeguards against improper AI systems and their misuse. Indeed, while AI systems can be abused to perpetrate illegitimate activities such as disinformation campaigns, identity theft, personal data extraction, and mass surveillance, they are also under scrutiny for risks inherent to the way they operate. From privacy and data protection to self-determination, equality, and non-discrimination, a constellation of rights and values is exposed to improper use of AI systems. This lecture sheds light on the legal and fundamental-rights issues associated with AI technology. In doing so, it will highlight prominent legal issues raised by AI and provide an overview of emerging AI frameworks focused on mitigating the negative impact of AI on individuals and societies at large, such as the recent EU AI Act.

Lecturers' bio: Dr. William Letrone is a CNRS postdoctoral researcher at the DCS research center of Nantes University, France. In 2018, he obtained a Master's (M2) degree in International Security and Defense from Grenoble Alpes University, before moving to Japan, where he was sponsored by the Japanese government (MEXT program) to conduct doctoral research on the legal status of state-sponsored disinformation campaigns under international law. He received his juris doctor degree in 2023 from Kobe University, Japan. He is currently a member of the iPOP research program, working on AI law and comparing how privacy issues related to AI are handled across jurisdictions. Dr. Ludovica Robustelli is a CNRS postdoctoral researcher at the DCS research center of Nantes University, France. She is a member of the iPOP research program and focuses her work on generative AI and data protection.

 

ML for Cybersecurity in converged energy systems: saviour or villain?
Angelos K. Marnerides, University of Cyprus, KIOS Research and Innovation Centre of Excellence

Abstract: In today's networked systems, ML-based approaches are regarded as core functional blocks for a plethora of applications, ranging from network intrusion detection and unmanned aerial vehicles to medical applications and smart energy systems. Nonetheless, regardless of the capabilities demonstrated by such schemes, it has recently been shown that they are also prone to attacks targeting their intrinsic algorithmic properties. Attackers are nowadays capable of instrumenting adversarial ML processes, mainly by injecting noisy or malicious training data samples in order to undermine the learning process of a given ML algorithm. This talk aims to discuss and describe this relatively new problem and to demonstrate examples targeting Virtual Power Plant (VPP) applications.
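
To illustrate the kind of training-data poisoning mentioned above, the sketch below flips a fraction of training labels in a hypothetical two-class dataset and shows how a simple classifier degrades; the data, model, and poisoning rate are illustrative assumptions, not the VPP case studies from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical telemetry: two features, two classes (e.g., normal vs. fault)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clean = LogisticRegression().fit(X, y)

# Label-flipping poisoning: the attacker corrupts 30% of training labels
y_poisoned = y.copy()
idx = rng.choice(len(y), size=60, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression().fit(X, y_poisoned)

print("clean model accuracy:   ", clean.score(X, y))
print("poisoned model accuracy:", poisoned.score(X, y))  # typically degrades
```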

Lecturer's bio: Dr. Angelos K. Marnerides is a Professor (Asst.) of Cyber-Physical Systems Security at the University of Cyprus, in the Department of Electrical & Computer Engineering and the KIOS Research and Innovation Centre of Excellence. Previously, he was a Professor (Assoc.) at the University of Glasgow, leading the Glasgow Cyber Defence Group. His research focuses on applied security and resilience for Internet-enabled cyber-physical systems using data-driven approaches. Dr. Marnerides' research has received significant funding, in excess of €7M, from industry (e.g., Fujitsu, BAE, Raytheon, EDF), governmental bodies (e.g., EU, IUK, EPSRC), and UK national security and defence entities (e.g., NCSC, GCHQ, MoD Dstl). He is currently the project coordinator of the €5.8M COCOON project funded by the EU Horizon Innovation Action (IA), the first EU IA project coordinated by UCY KIOS and by UCY in general. A Senior Member of the IEEE (SMIEEE) and a member of the ACM since 2007, he has played significant roles in various IEEE conferences, earning IEEE ComSoc contribution awards in 2016 and 2018. He completed his PhD in Computer Science at Lancaster University in 2011 and has held lectureships and postdoctoral positions at institutions including Carnegie Mellon University, University of Porto, University College London, and Lancaster University.

 

Integrating AI technologies into mission-critical embedded automotive systems
Georg Macher, Graz University of Technology, Institute of Technical Informatics

Abstract: Despite the advances in AI-based functionalities, concerns regarding trust and reliability persist when integrating AI technologies into mission-critical embedded systems (e.g., critical industrial infrastructures, automotive systems, etc.). This talk explores the challenges and opportunities faced in the context of embedded systems in the automotive domain. By selecting suitable design patterns and development concepts that take into account the classification of AI technology classes and usages, the aim is to balance established methods with the properties inherent in AI technologies, enabling their applicability while achieving the necessary risk reduction. The lecture will address questions related to (i) dependability features of AI-based systems, (ii) roadblocks and socio-technological aspects of AI-based systems, and (iii) engineering concepts for integrating AI-based and cloud-based services.

Lecturer's bio: Georg Macher worked as a Project Manager R&D focusing on autonomous vehicle projects and safety & cyber-security in AVL's powertrain engineering R&D department. In September 2018, he joined the Institute of Technical Informatics as a Senior Scientist, where he leads the Industrial Informatics research group. His research activities include systems and software engineering, software technology, process improvement, functional safety, and cyber-security engineering. He is author or co-author of over 160 publications and a permanent member of the EuroSPI program committee, SafeComp workshop boards, and the SoQrates industrial working group (an automotive OEM & Tier working group focusing on safety, cyber-security, and engineering processes). He is also an industry consultant, coach, and trainer focusing on dependability engineering and the automotive domain.
