Dr. Reid Blackman is an ethicist who specializes in ethics related to new technologies, in particular artificial intelligence (AI). He is the founder and CEO of Virtue, an ethical risk consultancy for businesses and other organizations that use AI. He is also the author of “Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI”. We recently had a lively discussion with Reid about AI ethics in general and its application to human subjects research.
Tell us about yourself and how you got interested in ethics related to AI: I am an ethicist by training, with a PhD in philosophy and over 20 years of experience teaching at the college level, conducting research, and publishing on various ethics topics. One of my main interests is the ethics of emerging technologies, including AI. A few years ago, I decided to leave academia to start a consulting firm focused on business ethics. With the increased attention in recent years to privacy issues related to technology, such as Europe’s GDPR, and to social justice issues, it became clear to me that companies were at risk of going ethically astray in a very public way with their use of AI. My company advises businesses on how to be more responsible in their use of AI.
Let’s start out by defining what you mean by AI: When I refer to AI, I mean software that is designed to learn by example; a common AI technique is machine learning. AI software uses existing data to develop algorithms for a specific desired purpose or outcome. Two developments have made AI modeling increasingly feasible. One is the massive amount of data available from the increasing use of technology in all aspects of our lives. The other is that computers have become much better at processing large amounts of data quickly, making machine learning techniques easier to apply. AI software is taught to recognize mathematical patterns or associations in the data it is provided and then uses what it has “learned” to make decisions or predictions about new data. Examples of machine learning include facial recognition software and programs that decide who should get a bank loan. AI is also increasingly used in research; for example, to create prediction models for disease risk, medication compliance, and success in higher education.
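To make “learning by example” concrete, here is a minimal sketch of the workflow Reid describes, using the scikit-learn library and entirely synthetic, hypothetical data (the feature names and numbers are invented for illustration, not drawn from any real study):

```python
# A model is fit to existing data, then used to make predictions about new cases.
# Data here are synthetic; features stand in for hypothetical clinical measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: three features and a binary disease outcome
X = rng.normal(size=(1000, 3))
y = (X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # "learn" patterns from examples
print("held-out accuracy:", model.score(X_test, y_test))  # apply them to unseen data
print("predicted risk for a new case:", model.predict_proba(X_test[:1])[0, 1])
```

The key point of the sketch is that everything the model “knows” comes from the training data it is handed, which is why the choice of that data carries so much ethical weight in the discussion that follows.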
What do you believe are the main ethical issues with AI? There are three main categories of ethical risk for AI: “black box” models that lack explainability, biased models, and privacy violations.
These risks seem to be concerned with risks of the AI approach. But what about risks that lie with the AI developers, such as underlying assumptions of those involved in AI development? Yes, this is also a concern, particularly assumptions that drive the choice of training data or the overall goal of an AI model. As noted, if the training data used are biased, model outcomes will likewise be biased, and developers need to think carefully about the choice of training data. For example, if a model to predict whether a skin lesion could be melanoma is created using images only from light-skinned individuals, the resulting model will likely not recognize melanoma lesions in darker-skinned persons. Developers who use data that are convenient or that reflect only their own experience must consider how those data may be biased.
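One practical way to surface this kind of training-data bias is to compare model performance across subgroups rather than looking only at overall accuracy. The sketch below illustrates the idea with a tiny, hand-made table; the column names, groups, and numbers are hypothetical and exist only to show the mechanics of the check:

```python
# Compare per-subgroup sensitivity; a large gap suggests unrepresentative training data.
import pandas as pd
from sklearn.metrics import recall_score

# Held-out cases with true labels, model predictions, and a subgroup column (hypothetical)
df = pd.DataFrame({
    "skin_tone":  ["light"] * 4 + ["dark"] * 4,
    "melanoma":   [1, 1, 0, 0, 1, 1, 0, 0],
    "prediction": [1, 1, 0, 0, 0, 1, 0, 0],   # the model misses more cases in one group
})

for group, sub in df.groupby("skin_tone"):
    sensitivity = recall_score(sub["melanoma"], sub["prediction"])
    print(f"{group}: sensitivity = {sensitivity:.2f}")
```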
We need to recognize that risks can be introduced both by the approach and by the individuals involved in developing AI models. AI is increasingly being used for many different purposes, from decisions about university admissions, loans, and home ownership to disease prognosis and medication response. This increased reliance on AI means that we will continue to see the types of ethical problems we have been discussing.
Would these ethical risks apply in the use of AI for research? Yes, these really apply in all AI applications, including the use of AI in clinical and other human subjects research.
How should the ethical issues associated with AI be addressed and what advice would you give to investigators who want to use AI in their research? In my opinion, trying to get the data scientists developing AI models to recognize their own individual biases will be difficult. But they should recognize the need to understand the training data they propose to use and the impact of the goals of the model at the design stage. For example:
It sounds like a variety of outcomes should be considered, such as quality of life in the above example. Yes, it is important to consider the larger context in which these models would be used.
How can we address the concern researchers will likely have about the added time needed to consider AI ethical issues on top of already tight study timelines? Addressing AI ethics will require a commitment of time. But these issues can be managed more efficiently if there are processes in place to consider data quality, potential biases, and model goals. This would include determining where in the human subjects research study lifecycle AI ethics should be considered, who can help address these issues, and the processes by which they will be addressed. Establishing a general plan for AI research will make it easier to include this as a standard part of the development process.
When we are teaching grant writing, we always emphasize the need to have a broad research team that includes not only scientists with topical expertise but others like statisticians, to ensure a robust design, and community or patient representatives, to make sure the study will address major participant concerns. For studies involving AI, it sounds like we should also be advocating for the inclusion of someone with AI ethics expertise. How can we move the research community forward on this? I would certainly advocate getting AI ethics input on potential issues at the study design stage to help set model goals and select the training data to be used. This could include identifying the main ethical risks and potential mitigation strategies, as well as cost/benefit analyses of mitigation in terms of time and resources, to enable focus on feasible strategies for risks that could undermine the overall validity of a project. Researchers also need to be clear about the limitations of what a model can do.
A cautionary recent example is an AI model that was designed to help health care providers select patients who would benefit from additional medical care. The algorithm’s output was found to have a racial bias: it identified predominantly white patients to receive additional health services. Further analysis suggested that this was because the model used personal health expenditures as a proxy for health care needs, and expenditures are higher for white patients, who have more financial resources. Identifying ethical risks, such as the biased data in this example, requires critical consideration of counterfactuals, or alternative explanations for the data being used. In this case, socioeconomic status and race also explained the pattern of patient health care expenditures in the training data. Ethicists are experts at considering counterfactuals.
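The proxy-variable problem in that example can be audited directly: select patients by the proxy and then compare selection rates across groups at equal levels of underlying need. The sketch below uses a made-up six-row table purely to show the mechanics; the group labels, need scores, and expenditure figures are invented and do not come from the study Reid describes:

```python
# If "expenditure" stands in for "health need", groups with fewer financial resources
# can be under-selected even when their need is identical. Synthetic, illustrative data.
import pandas as pd

df = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B"],
    "need_score":  [0.9, 0.6, 0.3, 0.9, 0.6, 0.3],       # true health need (often unobserved)
    "expenditure": [9000, 6000, 3000, 5000, 3500, 1500],  # proxy shaped by access and resources
})

# Select the top half of patients by the proxy, then compare selection rates by group
threshold = df["expenditure"].median()
df["selected"] = df["expenditure"] > threshold
print(df.groupby("group")["selected"].mean())   # unequal selection despite identical need
```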
On the positive side, my sense is that the human subjects research community recognizes the risks inherent in AI, especially in health care and clinical applications. However, I would strongly encourage that this recognition be accompanied by the development of concrete risk management plans.
What guidance can you offer to IRBs reviewing research that involves AI? What should the IRB be looking for to ensure appropriate protections for research participants to prevent potential harm from AI processes? IRBs should establish procedures for evaluating proposed AI human subjects research. They would also likely benefit from expert input for the review of specific projects. Yes, IRBs often bring in substantive experts to help with the review of specific research proposals, so this is an approach most IRBs already utilize.
After a research project involving AI is underway, what monitoring is needed to ensure that AI processes remain ethical and do not unintentionally create harm for research participants? Ideally, there should be monitoring for AI bias and other risks throughout the project lifecycle, from study conception until the AI is no longer being used. As AI models continue to be used, new data are often used to further refine them. New data should be evaluated for biases to determine their impact on model output. For example, if there is a shift in the types of population subgroups using an AI model, the resulting output may become more heavily weighted toward this group, and performance in other subgroups may become less accurate.
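A lightweight version of that lifecycle monitoring can be automated. The sketch below assumes a fitted model, a pandas DataFrame of newly arriving cases, and hypothetical column names and thresholds; it is meant only to illustrate the two checks Reid describes, subgroup composition drift and per-subgroup performance, not a specific tool’s API:

```python
# Flag drift as new data batches arrive: (1) subgroup mix vs. the original training
# population, (2) model performance within each subgroup. All names are placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score

def monitor_batch(batch: pd.DataFrame, model, baseline_mix: dict,
                  mix_tolerance: float = 0.10, min_accuracy: float = 0.80) -> None:
    # 1) Has the subgroup composition shifted from the baseline population?
    mix = batch["subgroup"].value_counts(normalize=True)
    for group, share in mix.items():
        if abs(share - baseline_mix.get(group, 0.0)) > mix_tolerance:
            print(f"composition drift: {group} is now {share:.0%} of new data")
    # 2) Is the model still performing acceptably within each subgroup?
    for group, sub in batch.groupby("subgroup"):
        acc = accuracy_score(sub["outcome"], model.predict(sub[["age", "bp", "bmi"]]))
        if acc < min_accuracy:
            print(f"performance alert: accuracy {acc:.2f} for subgroup {group}")
```

In practice the thresholds, features, and alerting mechanism would be set as part of the risk management plan agreed on at the design stage.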
What are the most pressing needs currently for AI ethics? In my opinion, the biggest obstacle is the lack of willingness to seek solutions to these inherent ethical problems. There is an unfortunate perception that nothing can be done or that there is no one to help address AI ethical risks. Basic training to increase awareness of these issues and potential mitigation strategies is sorely needed for researchers and oversight groups like IRBs. But both groups also need to be willing to seek expert input and develop procedures to ensure that the risks of AI are appropriately considered and managed. This type of training is also needed in medical and graduate science education so that future researchers understand how and when to seek expert input for their research. Also needed is the recognition that risks must be evaluated not only before starting an AI project but also across the AI model lifecycle.
Everything we discussed today will be of tremendous value to our readers. But before we wrap up, we’d like to learn more about your company and also about your new book, “Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI”.
I launched my company, Virtue, to advise organizations on integrating ethical risk mitigation into the development and deployment of artificial intelligence and other emerging technologies. We have worked directly with a variety of companies such as Ernst & Young, Future Workplace, Opentext, and Kitu Super Coffee, and have also provided virtual training and publications on these topics.
The book arose out of my consulting experience, which identified a broad need for clear guidance around AI ethics. It focuses on the main AI risks, how they arise, and what can be done to mitigate them. The main intended audience is non-technical: CEOs, those serving on Boards of Directors, and computer science students. However, the content is broadly applicable to anyone in the AI space: those creating and those using AI, including those involved in the conduct and oversight of AI research. The book purposely avoids jargon and provides a clear roadmap for identifying and managing AI risks by establishing specific processes and getting expert input. The approaches described in the book are also useful beyond AI for considering ethical issues in other areas.
Any closing thoughts for researchers who work with AI? Researchers need to demonstrate due diligence in AI projects, not only at the start of a project, to identify key assumptions and potential biases in the training data, determine whether the model goals are appropriate, and evaluate outcomes, but also throughout the project lifecycle. Researchers will likely need assistance with this, and I would urge them to identify someone with AI ethics expertise who can work with their research team from the beginning of an AI research project. Such an expert can also help put into place standard processes for managing AI ethical risks in future projects.