During this conference, you will learn about the regulatory and technical solutions for building trust and explainability into AI solutions, and for ensuring that your automated decision-making and "black box" systems are unbiased, ethical, fair and trustworthy.
|Full Programme of the Trustworthy AI Conference:|
|Registration with Coffee/Tea and Croissants, Networking and Exhibition|
|9h30||Words of Welcome by the Conference Chairmen Patrick Van Renterghem (CEO @ IT Works) and Hans Arents (Senior Advisor Digital Government at Vlaamse Overheid) (Rooms A + B)|
Opening Keynote: Patrick Van Eecke - AI is here, offering many opportunities and challenges: how Europe is planning to regulate Artificial Intelligence
Patrick Van Eecke will discuss the impact of AI on our business practices and how European policy makers plan to regulate the development and use of human-centric AI. His talk focuses on how Europe intends to regulate bias, fairness, ethics, trust and explainability of AI.
|10h45||Coffee/Tea, Networking and Exhibitions|
|11h15||Morning Presentations (3 x 30 minutes + Q & A)|
|Making AI Ethical and Fair|
Fairness and Transparency: Algorithmic Explainability, some Legal and Ethical Perspectives (Nazanin Gifani, Data Protection Officer, EURA NOVA)
In this presentation, some of the ethical and legal issues of automated decision making will be discussed, including algorithmic fairness, transparency and explainability. The big question here is: can AI help us make fairer decisions?
Ethical AI at VDAB (Vincent Buekenhout, Ethical AI Lead, VDAB)
Vincent looks at the various AI initiatives at VDAB, its AI4Good strategy, the way applications are designed, and above all the way ethics, measurement through KPIs, explainability and fairness play a role in this. Vincent will explain how ethics-by-design works at VDAB.
How to implement error-proof Machine Learning in business-critical processes with explainability and human-centric design (Deevid De Meyer, co-founder, Brainjar)
This presentation outlines how Brainjar uses human-centric design and explainability to create machine learning systems that work together with humans to improve efficiency while reducing error rate.
|13h00||Lunch, Networking and Exhibitions|
Responsible AI: An Example AI Development Process with Focus on Risks and Controls (Martijn Cuypers, Insurance Director, and Hugo Pires, Senior Manager at PwC)
Organisations need to make sure that they use AI in an appropriate way. Martijn and Hugo explain how to ensure that the developments are ethically sound and comply with regulations, how to have end-to-end governance, and how to address bias and fairness, interpretability and explainability, and robustness and security.
During the conference, we'll look at an example AI development process, focusing on the risks to be managed and the controls that can be established.
Obedient Digital Twins (Paul Valckenaers, Senior Researcher, UCLL)
Paul explains how intelligence can be added to a corresponding reality without introducing limitations into the world-of-interest. The outcome is obedience: a conflict with an obedient digital twin is a conflict with its real-world counterpart. The talk is illustrated with healthcare examples.
AI & Ethics: The Belgian Industry Vision & Initiatives (Jelle Hoedemaekers, Expert ICT Standardisation, Agoria)
Jelle will explain why Belgian companies are working on ethical AI, and provide an overview of Belgian and European AI Initiatives with a focus on ethics.
|Technical Track: Adding Explainability and Removing Bias in Machine Learning, NLP and Chatbots|
Introduction to Bias in ML (Machine Learning) (Matthias Feys, CTO, ML6)
In this talk, you will learn what bias in ML models actually means, gain insight into the complexity of the problem, and learn realistic ways to reduce bias.
He Said, She Said: Finding and Fixing Bias in NLP (Natural Language Processing) (Yves Peirsman, NLP Town)
Yves presents several instances where bias has posed a risk to the successful adoption of NLP systems, and discusses the techniques that exist to discover these biases before the systems are put into production.
Building Trust and Explainability into Chatbots: the Partena Ziekenfonds Business Case (Karel Kremer, CTO of Oswald)
Chatbots and conversational interfaces are taking customer service departments by storm. In many companies, they provide first-line support to customers. Based on the Partena Ziekenfonds business case, Karel will share a few critical success factors...
|15h45||Coffee/Tea, Networking and Room Switch|
|16h15||Panel Discussion with speakers and participants, moderated by Deevid De Meyer|
|17h15||End of the Conference, Start of the Networking Drink|