In the rapidly evolving field of artificial intelligence, the race to innovate often outpaces the imperative for ethical scrutiny.
As state and federal agencies increasingly integrate artificial intelligence (AI) into their operations, they should adopt rigorous review processes akin to those of academic Institutional Review Boards (IRBs). IRBs are designed to ensure that research involving human subjects meets ethical standards, protecting participants' rights and well-being.
At the Yale School of Public Health, where I studied biostatistics, I was taught the critical importance of ethics in research, underscored by the requirement to submit IRB proposals demonstrating that my work adhered to well-established ethical guidelines. The IRB process in academic institutions involves a comprehensive review of research proposals for ethical compliance, evaluating the purpose of the study, the methodology to be used, the nature and degree of any risks posed to participants, and the mechanisms in place for obtaining informed consent.
The institutional review board also considers data handling procedures, particularly how privacy and confidentiality will be maintained. This rigorous scrutiny ensures that ethical standards are upheld throughout the research lifecycle, from data collection to the dissemination of results.
This practice ensures that studies are scientifically sound and ethically responsible, protecting participants and maintaining public trust. As we delve into vast troves of data and harness advanced AI technologies, the implications of our findings and the methodologies we employ must be scrutinized with equal rigor.
AI systems often process data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.
The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems are biased or non-representative, the resulting algorithms can reinforce existing disparities.
For example, AI used in predictive policing or loan approval processes might disproportionately disadvantage minority communities if the training data reflect historical biases. Similarly, health care algorithms trained primarily on data from non-diverse populations may fail to accurately diagnose or treat conditions prevalent in minority groups, leading to unequal health care outcomes. Such outcomes underscore the critical importance of ensuring diversity and fairness in the dataset and rigorously defining study parameters to prevent the inadvertent perpetuation of discrimination and inequality.
Thus, I advocate for the establishment of dedicated ethical review boards — modeled on the IRB framework — for AI use across government. These boards would evaluate the ethical dimensions of AI projects, focusing on aspects such as data privacy, algorithmic transparency, and potential biases. They would also ensure that the AI systems are developed in a manner that respects human dignity and societal values.
The dual imperatives of innovation and ethics can coexist. By instituting a rigorous ethical review process, the AI community can foster a culture of responsibility and trust. This approach will not stifle innovation; rather, it will ensure that our societal advancements are groundbreaking and grounded in ethical practice. By aligning AI use with established ethical standards, we safeguard the well-being of all stakeholders and guide AI towards its most beneficial and equitable applications.
Josemari Feliciano is a former biostatistics student at Yale School of Public Health. The opinions expressed are solely his own and do not express the views or opinions of his employer or the federal government.