This is a working resource page for the Ethics activities of the Bridge2AI program.
The Bridge2AI program will use this wiki to bring together resources relevant to the Ethics Modules within the Data Generation Projects and the Ethics Core within the BRIDGE Center.
- Pervasive Data Ethics for Computational Research
- The Consortium for the Science of Sociotechnical Systems Researchers (CSST)
- WHO outlines principles for ethics in health AI (June 30, 2021)
- Understanding Artificial Intelligence Ethics and Safety, The Alan Turing Institute
- Government's Role in AI, Brookings Institution, https://www.youtube.com/watch?v=PO08ECx8ru4
- A Closer Look: The Department of Defense AI Ethical Principles, The JAIC, Feb 24, 2020
- The Institute for Ethical AI & Machine Learning
- NIH BRAIN Initiative, Neuroethics - "Instilling a culture of ethical inquiry, not compliance"
- A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle
The Ethics of Data Collection -- Is your data ethically sourced?
Ethics: Fair Representation and Transparency
The core ethical principle is “do no harm.” Biased outcomes from AI tools stem from human factors that can be mitigated through ethical data curation and AI design principles. Biased human judgments can enter AI systems both through the data the systems learn from and through the way the algorithms are designed. Therefore, algorithms must be explainable, auditable, and transparent in order to mitigate potential biases resulting from historical patterns of discrimination.
In addition, risks of systemic “isms” (racism, sexism, and ageism) in medicine are often overlooked. Without appropriate precautions, AI systems may replicate patterns of racial, gender, and age bias in ways that deepen and appear to justify historical inequality. It is ethically imperative that data be representative of the actual population and account for the multitude of factors that contribute to, and significantly affect, health outcomes for all populations, especially those that are vulnerable and marginalized. Data sets must include minority and marginalized populations, with attention to historical biases. If historical context is not considered in data curation, in the construction and testing of algorithms, in machine training, and in application, the AI field risks replicating and perpetuating these biases and power imbalances.
Ethically, there needs to be greater transparency and more demographic data on the racial, ethnic, gender, and age profiles of new data sets, and secondary data sets should be used only when they contain adequate data on marginalized populations. Vulnerable research participants deserve special attention: they may face stigmatization, hold limited power, and have lower educational levels, fewer resources, or other constraints that limit their ability to protect and defend their own interests.
AI development needs to address bias and fairness in ways that go beyond technical debiasing and extend to biases in application, using a wider social analysis of how AI is used in context. This requires multidisciplinary expertise, including underrepresented non-technical users and community stakeholders. Cultural presuppositions must be considered throughout design, encoding, training, and application.
Standard ethical operating procedures (SOPs) should be developed to ensure ethical conduct in data curation, algorithm development, machine training, and application, in order to mitigate biases and ensure no harm.
Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
Scientific American, Feb 24, 2022: The Culture of Engineering Overlooks the People It’s Supposed to Serve