Open PhD Position researching the robustness of explainable AI
We are offering one full-time (100%) position funded by the German Federal Ministry of Education and Research, starting as soon as possible and running until the end of 2026. The goal of the project is to determine the mathematical robustness of xAI (explainable AI) approaches for AI-based decision-making systems. Together with two psychology grad students, the project leader Alexander Wilhelm, and two professors (a behavioral economist and me), we will try to understand how psychologically effective and mathematically robust xAI approaches are.

We are searching for kind, devoted, and ambitious researchers who like to delve into discrete structures and find the loopholes that enable manipulation of the results. We will analyze decision trees as a surrogate model and Shapley values as a measure of the importance of properties in a given decision. Both are used to determine which properties are deemed crucial for the result of an AI decision-making system; the idea is that issues such as discrimination can be identified with these methods.
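To give a flavor of the objects we study: Shapley values attribute a model's output to individual input properties by averaging each property's marginal contribution over all coalitions of the other properties. A minimal brute-force sketch in Python (the feature names and the payoff function `v` below are hypothetical, chosen purely for illustration):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by enumerating all coalitions.

    `features` is a list of feature names; `value` maps a frozenset of
    features to the payoff (model output) achieved by that coalition.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Hypothetical toy payoff: "income" alone contributes 2, "zip" alone 1,
# and the two together add an interaction bonus of 1.
def v(coalition):
    base = {"income": 2.0, "zip": 1.0}
    payoff = sum(base[f] for f in coalition)
    if {"income", "zip"} <= coalition:
        payoff += 1.0  # interaction term
    return payoff

print(shapley_values(["income", "zip"], v))  # → {'income': 2.5, 'zip': 1.5}
```

The enumeration is exponential in the number of features, which is exactly why practical xAI tools approximate Shapley values; those approximations are one place where robustness questions, and potential manipulation, arise.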
We have already shown that decision trees can be manipulated to hide obvious discrimination. Starting in 2025, our research will focus on Shapley values. Our results will likely influence the regulation of xAI requirements and thus have a potentially large impact on society. My group is a diverse team dedicated to better understanding the role of algorithmic decision-making systems and their consequences. We appreciate friendly and open people who like to work hard and contribute to the team.
If you are interested, please send your CV to the project leader, Alexander Wilhelm, at [email protected]