Objectives
The rapid development of AI technologies and their increasing ubiquity in society are transforming many aspects of our day-to-day lives and revolutionizing a range of domains such as transportation and healthcare. The widespread use of AI systems in society, however, has given rise to serious concerns about accountability, fairness, privacy, safety, and transparency.
Among the AI research community, there is a growing body of literature devoted to tackling these issues. Meanwhile, legal experts and policymakers are initiating greater efforts to regulate AI technologies. For example, the European Commission published a draft of the Artificial Intelligence Act in 2021. We believe it is imperative to bring together experts in AI and Law to discuss open questions and regulatory challenges, as well as to develop an interdisciplinary research agenda.
This bridge meeting targets two audiences: (i) AI researchers who are interested in engaging with legal perspectives on building AI systems with enhanced accountability, fairness, privacy, safety, and transparency, and (ii) legal scholars who are interested in engaging with AI research perspectives on proposing or reforming legal and regulatory governance models for emerging AI technologies. We envision that the bridge meeting will facilitate interdisciplinary dialogue, connect AI and legal researchers for potential collaborations, and lay a solid foundation for community building.
Topics
The bridge meeting will focus on the following topics, which have been drawing enormous attention and debate in both the AI and Law communities.
- Accountability and Safety: e.g. What are the mechanisms through which accountability of AI systems can be achieved? How should regulators assure the safety and efficacy of safety-critical AI systems such as medical devices or autonomous vehicles?
- Explainability and Transparency: e.g. What constitutes a sufficient explanation of what the AI system is doing? How should individuals be provided access to information such as the factors, the logic, and the techniques that produce an AI decision-making outcome?
- Fairness and Non-discrimination: e.g. What methods are available to detect and address potential unfairness in AI systems? How should AI developers mitigate bias in training data and in AI algorithms to avoid discriminatory impacts?
- Privacy: e.g. What types of privacy risks arise in the development and deployment of AI systems? How effective are data protection laws such as the General Data Protection Regulation (GDPR) at addressing those privacy risks?
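As a concrete illustration of the kind of unfairness-detection method the fairness topic alludes to, the sketch below computes the demographic parity difference, a standard disparity measure comparing favorable-outcome rates across groups. The data, group labels, and function names here are illustrative assumptions for exposition, not part of the meeting description.

```python
# Illustrative sketch (assumed data): demographic parity difference,
# i.e. the absolute gap in favorable-outcome rates between two groups.

def positive_rate(outcomes):
    """Fraction of outcomes that are favorable (coded as 1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups.

    0 indicates parity; larger values indicate greater disparity.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # approval rate 0.750
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # approval rate 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several competing fairness criteria (others include equalized odds and calibration); which criterion is legally or ethically appropriate is itself one of the open questions this meeting aims to discuss.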