Responsible Decision Making in Dynamic Environments
Workshop at ICML 2022
Date: July 23, 2022
Venue: Ballroom I, Baltimore Convention Center
Livestream: https://icml.cc/virtual/2022/workshop/13453
Algorithmic decision-making systems are increasingly used in sensitive applications such as advertising, resume screening, employment, credit lending, policing, and criminal justice. The long-term promise of these approaches is to automate, augment, and eventually improve on human decisions, which can be biased or unfair, by leveraging machine learning to make decisions supported by historical data. Unfortunately, a growing body of evidence shows that current machine learning technology is vulnerable to privacy and security attacks, lacks interpretability, and can reproduce (or even exacerbate) historical biases or discriminatory behavior against certain social groups.
Most of the literature on building socially responsible algorithmic decision-making systems focuses on a static scenario in which algorithmic decisions do not change the data distribution. Real-world applications, however, involve nonstationarities and feedback loops that must be taken into account to measure and mitigate unfairness in the long term. These feedback loops may arise from the learning process itself, which can be biased by insufficient exploration, or from changes in the environment's dynamics due to strategic responses of the various stakeholders. From a machine learning perspective, such sequential processes are primarily studied through counterfactual analysis and reinforcement learning.
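To make the exploration point concrete, here is a minimal, purely illustrative sketch of such a feedback loop (the group names, the shared 0.7 repayment probability, the 0.5 approval threshold, and the 0.1 exploration rate are all assumptions for the example, not drawn from any particular paper). A lender only observes repayment outcomes for the loans it grants, so a greedy policy can lock a group out after a few unlucky early defaults, whereas a little epsilon-greedy exploration lets the estimates recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two applicant groups with identical true repayment probabilities;
# any long-run disparity comes from the feedback loop, not the data.
TRUE_REPAY = {"A": 0.7, "B": 0.7}
N_ROUNDS = 5000

def simulate(epsilon):
    """Lender that only observes repayment for the loans it grants."""
    stats = {g: [1.0, 1.0] for g in TRUE_REPAY}  # (repaid, defaulted) pseudo-counts
    approvals = {g: 0 for g in TRUE_REPAY}
    for _ in range(N_ROUNDS):
        group = str(rng.choice(list(TRUE_REPAY)))
        repaid, defaulted = stats[group]
        estimate = repaid / (repaid + defaulted)
        # Greedily approve when the estimated repayment rate clears the
        # threshold; with probability epsilon, explore by approving anyway.
        if estimate >= 0.5 or rng.random() < epsilon:
            approvals[group] += 1
            # The outcome is observed only because the loan was granted:
            # rejected applicants never produce a label (the feedback loop).
            if rng.random() < TRUE_REPAY[group]:
                stats[group][0] += 1
            else:
                stats[group][1] += 1
    return approvals

print("greedy (epsilon=0):  ", simulate(0.0))
print("epsilon-greedy (0.1):", simulate(0.1))
```

Depending on the random draws, the purely greedy run can end up approving almost no applicants from a group that happened to default early, while the exploring run approves both statistically identical groups at similar rates.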
The purpose of this workshop is to bring together researchers from both industry and academia working on the full spectrum of responsible decision-making in dynamic environments, from theory to practice. In particular, we encourage submissions on the following topics:
- Fairness,
- Privacy and security,
- Robustness,
- Conservative and safe algorithms,
- Explainability and interpretability.
Invited Speakers
Aaron Roth
University of Pennsylvania
Aaron Roth is the Henry Salvatori Professor of Computer and Cognitive Science in the Department of Computer Science at the University of Pennsylvania. He received his PhD from Carnegie Mellon University. His main interests are in algorithms and machine learning, specifically in the areas of private data analysis, fairness in machine learning, game theory and mechanism design, and learning theory.
Craig Boutilier
Google
Craig Boutilier is a Principal Scientist at Google. He was a Professor in the Department of Computer Science at the University of Toronto (on leave) and Canada Research Chair in Adaptive Decision Making for Intelligent Systems. His current research focuses on various aspects of decision making under uncertainty: preference elicitation, mechanism design, game theory and multiagent decision processes, economic models, social choice, computational advertising, Markov decision processes, reinforcement learning, and probabilistic inference.
Cynthia Rudin
Duke University
Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, mathematics, and biostatistics & bioinformatics at Duke University. She directs the Interpretable Machine Learning Lab, whose goal is to design predictive models with reasoning processes that are understandable to humans. Her lab applies machine learning in many areas, such as healthcare, criminal justice, and energy reliability. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (the "Nobel Prize of AI"). She is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the Association for the Advancement of Artificial Intelligence. Her work has been featured in many news outlets, including the New York Times, Washington Post, Wall Street Journal, and Boston Globe.
Finale Doshi-Velez
Harvard University
Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.
Masoud Mansoury
University of Amsterdam
Masoud Mansoury is a postdoctoral researcher at the Amsterdam Machine Learning Lab at the University of Amsterdam, Netherlands. He is also a member of the Discovery Lab, collaborating with the Data Science team at Elsevier in the area of recommender systems. Masoud received his PhD in Computer and Information Science from Eindhoven University of Technology, Netherlands, in 2021. He has published in top conferences such as FAccT, RecSys, and CIKM. His research interests include recommender systems, algorithmic bias, and contextual bandits.
Solon Barocas
Microsoft, Cornell University
Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research and an Adjunct Assistant Professor in the Department of Information Science at Cornell University. His research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference.
News
- Accepted papers and talks are now visible.
- Camera-ready and video submission deadlines have been updated under important dates.
- Notifications for accepted papers are out.
- Call for papers is out! The last date to submit is May 31, 2022. Please check the instructions on how to submit.
Contact us
The organizers may be reached at responsibledecisionmaking <AT> gmail <DOT> com
Related past workshops
- Socially Responsible Machine Learning (SRML) - ICLR, 2022
- Socially Responsible Machine Learning - ICML, 2021
- Learning in Presence of Strategic Behavior - NeurIPS, 2021
- Workshop on Responsible AI - ICLR, 2021
- Workshop on Consequential Decision Making in Dynamic Environments - NeurIPS, 2020
- Law & Machine Learning (LML) - ICML, 2020
- Workshop on Human Interpretability in Machine Learning (WHI) - ICML, 2020
- Reinforcement Learning for Real Life Workshop - ICML, 2021
- Safe and Robust Control of Uncertain Systems - NeurIPS, 2021
- Political Economy of Reinforcement Learning (PERLS) Workshop - NeurIPS, 2021