Artificial Intelligence Ethics in a Nutshell

Charles Onstott

VP & Chief Technology Officer at CALIBRE Systems

Learning Objectives

Artificial intelligence presents great opportunities for companies and government organizations to increase value and mission effectiveness. With these opportunities comes ethical risk. Bias is probably the ethical risk top of mind for most executives, but there are others. Such risks may discourage leaders and organizations from using artificial intelligence. I recommend organizations embrace AI with ethics in mind, since doing so will perpetuate organizational core values, mitigate risk, and likely lead to better business and mission outcomes.


"It is possible that Robert is one of the first people in the United States to have been wrongly accused of a crime due to a false positive for facial recognition algorithm."

Charles Onstott

VP & Chief Technology Officer at CALIBRE Systems

Transcript

Hi, and welcome to my presentation on Artificial Intelligence Ethics in a Nutshell. Now, if you are like me, you are a technology executive in an organization that is considering or already using artificial intelligence in some capacity. You’ve also seen a lot of news about the promise of AI, and you have heard from some people that AI has the potential to destroy the human race as we know it. Now, if you’re like me, you really don’t think the robots will come to wipe us out anytime soon, but there may yet be something to the risks that AI can pose to your organization and to society more generally. Since I love philosophy and ethics, I’ve studied up on this subject, and I want to give you a sense of what some of the issues are and some of the things that can be done to help mitigate them. Of course, I could not possibly cover the entire topic of AI and ethics in 20 minutes. I’m also not a lawyer, so I’m not giving any kind of legal advice. My goal here is really just to give you enough to motivate you to learn a little bit more, find out what your organization is doing, and maybe even take a leadership role in driving and addressing these issues in your organization and more broadly.


My name is Charles Onstott, and I’m the Chief Technology Officer of CALIBRE Systems. We are a management consulting and digital transformation company focused primarily on the US government. I’ve worked in the technology industry for over 30 years. I just started here at CALIBRE a few weeks ago, actually. Prior to that, I worked for over 27 years at SAIC, which is also a large technology integrator focused on the US government. I started out as a Unix system administrator and worked my way up to the CTO position over a long career there. Prior to my role as their CTO, I ran the cyber, cloud, and data science services for the company, selling those services throughout the various US state and local and commercial markets that SAIC serves.


I have a master’s degree from the University of Chicago in religion, specifically the philosophy of religion, and my bachelor’s degree from Oklahoma State is also in philosophy. My main research interest in graduate school was the philosophy of religion and its intersection with the philosophy of technology. That in itself is a whole topic I’m happy to talk about anytime, but it does explain why I have a long-time interest in technology, ethics, and philosophy, and it leads me to my topic today.


I want to tell you about Robert Williams. He works for an auto parts store, and in January of last year, he received a phone call at work from the Detroit Police Department asking him to come to the station to talk to them about some stolen watches. He thought it was a prank call, so he ignored it. Later that day, he was greeted at his house by police officers who handcuffed him on his lawn in front of his wife and children, marched him across the lawn in front of his neighbors, and put him in the back of a police car. At the detention center, he was booked, which included having his mugshot, DNA, and fingerprints taken. He was held there for 30 hours before he was released. When he was interrogated, he was shown pictures of a black man looking at watches in a Shinola store from which watches had been stolen. It’s important to know that Robert Williams is also black. He was asked if those pictures were of him. Robert held one of the pictures up next to his face and said, “I hope you all don’t think that all black men look alike.” Robert was not the person in the photographs, nor the person who stole the watches, so how did the police come to suspect him? It turns out that a facial recognition AI algorithm incorrectly identified him as the same person as the one in the photographs. It is possible that Robert is one of the first people in the United States to have been wrongly accused of a crime due to a false positive from a facial recognition algorithm. He has since received a public apology from the police department, and my understanding is that his records will be expunged. In April of this year, he filed a lawsuit against the Detroit Police Department for violating his Fourth Amendment rights and Michigan civil rights laws.


Of course, such platforms are used today in many police departments as well as corporate security offices. They provide real benefits to police officers, including accurately identifying criminals, and they are routinely used to identify missing and sexually exploited children. Artificial intelligence, then, can, and in my view should, be used for good. I say “should” because, in contrast, some people who speak on this subject, including Robert Williams himself, believe that because of the known issues with AI, it should not be used in these situations. Personally, I don’t think we should limit the use of AI simply because of these problems. Instead, organizations must be mindful of its limitations and the issues it can pose so that it gets used responsibly. Robert Williams’ experience raises a whole host of questions about what went wrong. We’re going to revisit this throughout the presentation, but one of the key issues is that a face recognition system mistook him for another person. So it raises the question: are facial recognition systems biased in some way?


In December of 2019, NIST published a landmark study from a Face Recognition Vendor Test it conducted; part three of that study examined the demographic effects of facial recognition algorithms. NIST used data sets from four sources: domestic mugshots, photos of people applying for immigration benefits, visa photographs, and photos of people crossing the border into the United States. In all, they processed over 18 million images of nearly 8.5 million people through 189 commercial algorithms. The study determined, not surprisingly, that performance varies pretty widely from one algorithm to the next. They were particularly interested in determining the false negative and false positive error rates of these algorithms. A false negative is when the algorithm fails to detect that two images, the one you’re checking and one in the database you’re checking it against, are actually of the same person. For example, someone crossing the border might also be a wanted criminal, and the system fails to detect that the two are the same person; that would be a false negative. A false positive is when the algorithm indicates that two images are of the same person when in fact they’re not, which is what happened to Robert Williams. In most law enforcement applications, false positives will generally have higher consequences for individuals than false negatives.
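
To make those two error rates concrete, here is a minimal Python sketch of how false positive and false negative rates might be computed per demographic group. The match records and group names are purely hypothetical; this is not code or data from the NIST study.

```python
# Minimal sketch (hypothetical data, not from the NIST study): computing false
# positive and false negative rates of a face-matching system per demographic group.
from collections import defaultdict

# Each record: (demographic_group, same_person_ground_truth, algorithm_said_match)
results = [
    ("group_a", True,  True),
    ("group_a", False, True),   # false positive
    ("group_a", True,  False),  # false negative
    ("group_b", False, False),
    ("group_b", False, True),   # false positive
    ("group_b", True,  True),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, same_person, predicted_match in results:
    c = counts[group]
    if same_person:
        c["pos"] += 1
        if not predicted_match:
            c["fn"] += 1          # missed a true match
    else:
        c["neg"] += 1
        if predicted_match:
            c["fp"] += 1          # flagged two different people as the same person

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"{group}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```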


The study found that false positive rates were higher for West and East African people, East Asian people, women, the very young, and the elderly. News headlines often summarize results like these as “AI algorithms are biased, and this study proves it.” While that is true, it hides one of the important details of the findings: algorithms developed in the United States had higher false positive rates for Asian faces, but it turns out that algorithms developed in China had lower false positive rates for Asian faces. This suggests that the data and the algorithms really do matter. Understanding how an AI model was trained is critical to understanding how it is likely to perform. It is equally important to test the model to verify that it performs as expected. Now, while the NIST study concludes that it is imperative to know your algorithm and data, it’s also important to recognize that machine learning algorithms are part of a broader system. Therefore, it’s really important to pay attention to that broader system, as opposed to fixating just on the algorithm.


Here are a few things that are part of a facial recognition system. For example, there are cameras, and those cameras are taking the pictures that end up in a database for comparison. They’re not all the same camera: there can be lots of different cameras, some better than others, some taking pictures in black and white and some not. Then there’s the face recognition algorithm, which processes the images in the database to generate the face recognition model. The image database itself is an issue to take a look at. Does the image database used to train the model consist of, say, 80% white people because it reflects the general US population? If so, the 20% of people of other races are a minority in the database, and the facial recognition model is going to learn how to differentiate between white faces more so than faces of other races, simply because it has a lot more instances of white images to compare. There are also processes and procedures, like the police procedures that need to be followed. Then there’s human behavior; as we all know, just because you have policies and procedures doesn’t mean everybody’s following them all the time. Then there’s system performance, which can impact things, as well as your testing practices, and of course project deadlines put pressure on teams to do things that hopefully don’t result in cutting corners, but they could.
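
On that training data point, here is a hypothetical Python sketch of the kind of simple audit a team might run on the demographic makeup of a training image set before building a model. The group labels, counts, and the 15% flagging threshold are made-up numbers for illustration only.

```python
# Hypothetical sketch: auditing the demographic composition of a training image
# set before training. Labels, counts, and the threshold are illustrative only.
from collections import Counter

training_labels = ["white"] * 800 + ["black"] * 90 + ["asian"] * 70 + ["other"] * 40

counts = Counter(training_labels)
total = sum(counts.values())
min_share = 0.15  # illustrative threshold for flagging under-representation

for group, n in counts.most_common():
    share = n / total
    flag = "  <-- under-represented" if share < min_share else ""
    print(f"{group:>6}: {n:4d} images ({share:5.1%}){flag}")
```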


If we go back to Robert Williams, the man falsely accused of theft due to an AI algorithm, it turns out that the images used in this case were actually grainy and of poor quality. As for the police procedures and human behavior aspects, the police department conducted its own investigation, and its police chief publicly stated, and I quote, that Robert Williams’ arrest was the result of “shoddy police work.” We have already discussed that algorithms have higher false positive rates for black people. It’s important to recognize that the algorithm and model are part of a larger system that also includes photos taken by cameras, video footage, police procedures, and people, and while it is easy to blame the AI, it isn’t always just going to be about the AI model.


Likewise, when you’re thinking about AI systems in your organization, you can see that there are a number of ways in which issues can creep into the system that have nothing to do with the algorithm. When looking for ways to improve the performance and fairness of the outputs, the entire system should be considered. I shared Robert Williams’ story because law enforcement agencies use facial recognition and machine learning in many other contexts, but there are many similar stories in the commercial world as well.


I think all of us can relate to the difficulty of recruiting great talent for technology positions. There are a number of challenges, but one of them is that when you post a job opening, there might be over 100 applicants for that position, and most of those people aren’t even qualified. Our recruiters have to sift through all these resumes to get us the handful that actually are qualified for the job. In order to save them time, we’re always looking for ways to arm them with tools to process the resumes. One very large online retailer put together a team that trained an AI system to analyze resumes and determine which ones were the best fit for a given job description, but they also learned that the system they developed was actually favoring men over women for technical roles like computer programmer. That was because they trained the system using 10 years of resumes they had previously received, which were predominantly from men, because the IT industry is predominantly male. Effectively, the model learned that men were preferable for these roles.


Just as examples of that, it turned out that if the word “women’s,” as in “captain of the women’s chess team,” was in a resume, it did not get selected, or if the resume mentioned an all-women’s college, it did not get selected. As a result, they scrapped the system. I do not at all think this company had any malicious intent; they were simply trying to help their recruiters save time. But it’s a good example of how applying AI to certain tasks poses risks. In this case, talented women were likely not considered for roles where they might actually have been the best person for the job.


There are many other ways in which artificial intelligence can pose risks when it comes to making recommendations and decisions in operations. Loan decisions, for example, could give preferential treatment to one race over another. Similarly, there are risks in medicine. Londa Schiebinger of Stanford University was quoted in a recent blog post on Stanford’s Human-Centered AI website saying, quote, “the white body and the male body have long been the norm in medicine guiding drug discovery, treatment, and standards of care, so it’s important that we do not let AI devices fall into that historical pattern.” As a case in point, I talked with a researcher a couple of years ago who was using a digital twin of the human body to conduct research on the effects of radiation on the human body. He pointed out in that conversation that the only robust digital twin available is based on a white male, and that the results of his research would not necessarily apply to women or people of other races.


AI can also pose risks of privacy violations and of making decisions based on things like race and gender, even when the organization is being really conscious about minimizing those risks. The reason is that because these machine learning models are trained with hundreds or thousands of parameters and massive data sets, they can begin to create correlations between those parameters that effectively determine a person’s race. That can, in fact, create bias we may not have anticipated, simply because there are so many variables involved. Some AI ethicists also want companies to consider the environmental impacts of very large models, since the energy consumed to generate the models can be significant. If your organization’s core values include environmental protection, or even if your company has said it will be carbon neutral by some point in time, then this is one to pay attention to, because very large models are in fact big energy users that can contribute to environmental issues. Other ethicists are shining a light on the significant advantages that large, well-funded organizations have over ones that do not have as much money. This creates a digital divide between rich and poor countries, regions, and even universities. In fact, the National Defense Authorization Act for FY21 creates a national research cloud that will provide more equitable access to compute and storage resources for research. The many ethical issues that AI poses create a lot of tension around the use of machine learning in many government and commercial applications.
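
To illustrate how correlated variables can stand in for a protected attribute even when that attribute is excluded, here is a purely synthetic Python sketch; the groups, the “zone” feature, and the decision rule are all invented for illustration.

```python
# Purely synthetic illustration of a "proxy" feature: the decision rule never
# sees the protected attribute, but a correlated feature stands in for it.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Synthetic correlation: group A mostly lives in zone 1, group B in zone 2.
    if random.random() < 0.9:
        zone = 1 if group == "A" else 2
    else:
        zone = 2 if group == "A" else 1
    applicants.append((group, zone))

# A decision rule that only looks at the zone, never at the group...
decisions = [(group, zone == 1) for group, zone in applicants]

# ...still produces very different approval rates by group.
for g in ("A", "B"):
    outcomes = [approved for group, approved in decisions if group == g]
    print(f"group {g}: approval rate = {sum(outcomes) / len(outcomes):.1%}")
```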


Do you have teams currently looking at or using machine learning? Are you thinking about these potential ramifications? You might think, “Well, I’m the CIO and we only really use it for internal IT operations,” but does that include insider threat detection? Do you suspect that if your IT process is biased, it might pose a risk for your company? Do you support systems and functions in the company that are likely using machine learning for purposes like candidate selection, loan decision making, or drug discovery? If so, is it clear where your roles and responsibilities start and stop versus those of other people responsible for AI in your organization? As a corporate executive, do you know whether your peers and other leaders are thinking about this in order to protect the company as well as society more generally?


The promise of artificial intelligence is very high, and organizations should use AI, but they need to do so fully aware of the limitations and potential adverse issues it can create. I recommend organizations, at a minimum, do these things. First, develop a technology ethics statement based on the core values of the organization. If your organization’s core values include things like excellence and quality, then you would want to know that your models in fact reflect those values, and that should be in the technology ethics statement. If your organization values things like inclusion and diversity, social justice, or environmental protection, then those should also be reflected in your technology ethics. Then you want to develop policies that support the technology ethics and that are also in compliance with the law. I’m going to touch a little bit more on policies in a moment. Then you want to implement processes that ensure compliance with those policies, and these would span not only your design, development, and testing processes but also things like hiring, procurement, and contracting processes, as well as others. Finally, you want to educate your leadership and AI workforce on the company’s ethical stance and on adopting processes and controls to ensure compliance with the policies.


On the policy point, there’s a book called The Ethical Algorithm by Michael Kearns and Aaron Roth. It makes the point that there is always going to be a trade-off between the error rate of an algorithm and its fairness. It’s very unlikely that you will end up with a model that completely eliminates unfairness; you’ll almost always be in the position of making a trade between fairness and error. There are a lot of mathematical reasons for this that they go into in the book, and not everybody necessarily agrees that this is always the case, but by and large, I think it’s fair to say that this is a trade-off that will have to be made. One of the most important points here is that the artificial intelligence is not going to be able to tell you what that trade-off should be; it can’t advise you on that. That really has to come from your organization’s core values and the technology ethics and policies that are in place. That means organizations have to put a lot of thought into what is acceptable in terms of fairness versus performance. You should decide that based on your company’s core values, a predefined ethics policy, what the application is actually doing, and whether it’s going to impact people adversely if it makes mistakes or over-selects one group of people over another.
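
As a toy illustration of that trade-off (my own synthetic sketch, not the mathematics from The Ethical Algorithm), the Python snippet below compares a single shared decision threshold with group-specific thresholds chosen to equalize selection rates; closing the gap costs some accuracy, and nothing in the code can tell you which policy is acceptable. That judgment has to come from your values and policies.

```python
# Toy sketch with synthetic data (not the math from "The Ethical Algorithm"):
# one shared score threshold minimizes errors but leaves a gap in selection rates
# between two groups; per-group thresholds close the gap at the cost of accuracy.
# "Fairness" here means equal selection rates, which is only one possible definition.
import random

random.seed(1)

def make_group(n, qualified_rate):
    """Synthetic applicants: (truly_qualified, model_score).
    Scores depend only on qualification, but the groups differ in base rate."""
    data = []
    for _ in range(n):
        qualified = random.random() < qualified_rate
        data.append((qualified, random.gauss(0.7 if qualified else 0.4, 0.15)))
    return data

group_a = make_group(20_000, 0.6)
group_b = make_group(20_000, 0.4)

def evaluate(data, threshold):
    errors = sum((score >= threshold) != qualified for qualified, score in data)
    selected = sum(score >= threshold for _, score in data)
    return errors / len(data), selected / len(data)

def report(label, threshold_a, threshold_b):
    err_a, sel_a = evaluate(group_a, threshold_a)
    err_b, sel_b = evaluate(group_b, threshold_b)
    print(f"{label}: overall error {(err_a + err_b) / 2:.1%}, "
          f"selection-rate gap {abs(sel_a - sel_b):.1%}")

report("shared threshold    ", 0.55, 0.55)  # more accurate, larger gap
report("per-group thresholds", 0.55, 0.47)  # smaller gap, more errors overall
```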


You may be wondering, “What does a technology ethics statement look like?” Well, some companies, like Google, have already developed and published their ethical principles publicly. The Department of Defense published theirs in 2020, and they were affirmed department-wide just recently. Theirs look like this. Responsible: DoD personnel will take ownership of the fact that AI can have consequences, and they will use appropriate levels of judgment and care in using these technologies. Equitable: they will take deliberate steps to minimize unintended bias in these systems. Traceable, and this is really important: processes and procedures are documented and measurable, so that in case something goes wrong, it’s easier to figure out where it might have gone wrong. Reliable: capabilities should have explicit, well-defined uses, and their safety, security, and effectiveness will be subject to testing and assurance so that the system can be considered reliable. Finally, governable: capabilities will be designed and engineered to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences. This is a good example of ethical principles you might consider for your organization, but there are many others.


To close, I want you to imagine that the future of your organization looks like this: you have leveraged the power of AI to drive revenues, profits, or mission outcomes that are beyond your wildest dreams. It has done so without reproach, since you have created and implemented an ethical approach to AI and trained your workforce on it. It will obviously take time to get there, but this, like so many things, is a journey.


Here are a few next actions that will get you started on your journey. You’ll soon learn that many of your peers are doing these things too. Find out what your company is already doing in AI and what it is doing about AI ethics, if anything. Second, talk to your peers about what their companies are doing. Read some examples of AI policies from other companies and government institutions, as they will have already addressed many dimensions of AI ethics, to get a sense of what the issues are and how other organizations are thinking about them. Read The Ethical Algorithm or similar books as a way to get a deeper overview of the issues. I also recommend you take a look at the National Security Commission on Artificial Intelligence. This commission was chartered by Congress to look at the current state of AI in the United States and included industry luminaries like Eric Schmidt of Google, Safra Catz of Oracle, and Andy Jassy of Amazon. Its report gives a great overview of the landscape of AI in the United States. They also published a report called Key Considerations for Responsible Development and Fielding of Artificial Intelligence, which summarizes many of the things you would want to consider in how you carry out ethical AI in your organization. Finally, be aware that government regulation is on the horizon. We expect to see government acquisitions of AI include criteria requiring companies at least to be able to explain their ethical principles and how they adhere to them. Congress recently wrote a letter to NIST asking it to establish an ethical AI framework in conjunction with industry, similar to what it did with the Cybersecurity Framework. In Europe, the European Commission established the High-Level Expert Group on Artificial Intelligence, which published the Ethics Guidelines for Trustworthy AI. All of these things point to regulation coming down the line.


If you contemplate the use of AI in your business operations, services, or products, you’ll want, at a minimum, to pay attention and provide feedback to legislators, NIST, and other government agencies as appropriate as they establish their frameworks and regulations. I think doing this handful of things will at least get you started on your path to ethical AI, to the benefit of your company and of society more broadly.


Best of luck to you on your journey. Feel free to contact me using my contact information below. I’m always happy to talk to people about this, learn what you’re doing, and share what we’re doing as well. Thank you very much.

