We recently chatted with Merve Hickok, Founder of AIEthicist.org and Lighthouse Career Consulting. Take a listen to the podcast below or read the transcript. (Transcript lightly edited for clarity.)
Listen to all the Explainable AI Podcasts here
Fiddler: Welcome to the Fiddler Explainable AI podcast. My name is Anusha Sethuraman. Today I have with me Merve Hickok, who is an AI ethics expert and who will be talking to us about ethics in AI overall and what we can do to better implement ethics. Merve, thank you so much for joining us. Would you mind introducing yourself and telling us a little bit about your background?
Merve: Thanks, Anusha, I'm excited to be here. My name is Merve Hickok. I'm the founder of Lighthouse Career Consulting and the aiethicist.org website. I've worked for almost twenty years in different countries for a couple of Fortune 100 companies. I currently work at a social enterprise in Northern Nevada, and I also do advocacy work, training, and speaking engagements on ethical AI.
Fiddler: Wonderful. Thank you for that introduction. As a practitioner in the AI space, what is top of mind for you these days?
Merve: There are some great, encouraging examples of AI used for social good, and to create better opportunities and accessibility to resources for people. However, there are a couple of things that do worry me. The top one is the lack of any regulation and accountability on AI, especially in the US, when you're talking about systems that can have an adverse impact on social justice and the protection of privacy. I think we are past the point of birthing pains with AI and should start focusing on how to grow a healthy child.
Fiddler: I like that analogy.
Merve: I'm concerned with the adoption of commercial automated decision systems by local and federal agencies without much scrutiny of or insight into the systems: without understanding how a particular system was developed, what the consequences might be, or whether any corrective processes are in place.
The second issue on top of my mind is the nationalization of AI research and development. At my last count, about 37 countries had either published their AI strategies or made public their intention to do so. It worries me that the strategies of bigger powers, focused on control and militarization of AI technology, will eventually resemble the arms race we experienced during the Cold War.
Fiddler: What industries have you primarily worked with where AI regulation plays a big part, or is it across the board?
Merve: I would say the regulation, or lack thereof, is across the board in different industries. My background is in HR technologies and process improvement, and with my current job, HIPAA privacy and security. I've held those roles within investment banking, hospitality, and now at the social enterprise working on developing learning systems for individuals with disabilities. I don't see much difference in the need for regulation or accountability across industries.
Fiddler: How are businesses in these different industries using AI today?
Merve: For hospitality, it's a relatively new development, focused mostly on chatbots and virtual concierges, or augmented reality solutions where you can experience a destination before actually deciding to go. Since I'm in Nevada, I'll do a casino plug here: casinos are starting to use AI solutions and data analytics to predict consumer behavior, detect fraud, or personalize experiences within the casino.
For finance - one of the dirty secrets of finance and technology is that big financial institutions, like Bank of America Merrill Lynch where I spent a decade, or JP Morgan, employ more developers than most of the big tech companies. And they've been making that heavy investment in software developers for more than a decade. You can see the birth of whole sectors out of that, like FinTech and InsurTech. They use AI everywhere, from algorithmic or high-frequency trading to risk modeling, credit scoring, fraud detection, credit eligibility, and call centers. Within finance, the maturity level of AI use is much further down the road than in the rest.
In terms of services to people with disabilities, going back to the accessibility side, the focus of development has been more on accessibility: things like voice-to-text, image-to-voice, help with transportation and mobility, or communication devices like command assistants that let you do different tasks with voice or motion commands.
Fiddler: Makes sense. How do you think ethics plays into all these different projects today?
Merve: I like the fact that there's more conversation around ethics now than there was a few years ago. But the actual application of ethics in AI projects still leaves a lot to be desired. For me, ethics needs to be part of your culture, part of your organization, whether you're developing the AI products, on the sales and marketing team selling them, or the end customer or consumer looking to buy the products and implement them in your own company or government agency. Ethics should be part of every single step, not the afterthought it currently is. A lot of the time, cost-benefit and resource-efficiency calculations come before conversations about ethics.
Fiddler: Based on your knowledge and research here, why is ethics so important in AI today and why should we be paying so much attention to it?
Merve: Where I stand on these issues comes from my experience in HR – HR technologies – and the work that I do now to help individuals with disabilities. You always come across the conscious or unconscious biases of the people you work with or interact with, and how those biases interact with the rest of the world. A lot of the time we make decisions and then tend to rationalize them after the fact, if that makes sense.
You have to remember that algorithms, or any code that you write in algorithms and systems, are created by humans, for humans, with data about humans. There's always this human element in it, and ethics becomes so important in AI because these systems, with their data-processing power and their dependency on past data, can really magnify existing issues, injustices, or biases, and inflict exponential damage with their efficiency.
But on the flip side, I also think AI has the capacity to demonstrate the logical part of decisions and force people to think about the consequences; to magnify them, hold a mirror up to us, and give all of us involved a chance to debate. You're not going to have an AI fixing all issues of bias, discrimination, or structural injustice for humans. But I think with an ethical mindset, AI gives us a huge opportunity, as well as a tool, to analyze these problems, make better judgments, look at the data and the inferences we come up with, or that the AI itself comes up with, and have a chance to discuss and arrive at better solutions that adhere to ethical principles like fairness, beneficence, or accountability.
At the end of the day, you still have to ask the question: "Just because we can, should we use AI?" I love technology, I'll admit that. I'm a bit of a geek myself, and I think we can definitely use AI with an ethical mindset for the better.
Fiddler: AI has a lot of potential. It also has a lot of potential to harm, so it's about how we solve for that. You mentioned that AI ethics is not mainstream, that people are not thinking about it right from the start, and that it's an afterthought, so to speak. Why do you think this is? What are some of the challenges in implementing AI ethics?
Merve: Where to start. I think at the top of the list is that there is no single definition of ethics, or fairness for that matter, that everyone, or every culture where you would implement your solution, can agree on. You can't code it into your algorithms. But even if you pick something that the majority of people are okay with, or can work with, then one of the other challenges is the implementation of it.
In most of what's happening right now, or a lot of the examples I see, ethics is constantly an add-on that you can plug into your solution or your process at some point in time. But, as I mentioned before, ethics should cover the full span of a product or system life cycle. It needs to start at the very beginning, when you're asking the question, "Should I?", and then go through the whole cycle.
There is an ethical decision when you're creating your values and your problem statement in your algorithm. There are decisions when you are deciding who should be on your team, which data to use, and how you are going to train your system. Even when you're coming up with processes for how you're going to change things: how you're going to change your code, how you're going to test it, how you're going to monitor it. There are decisions with ethical implications at every step. You can't just say, "Okay, I'm done. Now I'm going to apply ethics to this part, or at the very end."
Fiddler: You're saying to think about ethics right from the start, foundationally, and implement it at every stage?
Merve: Absolutely. Whether it's a start-up or a bigger implementation, there are sometimes competing priorities like market share, profit, resources, deadlines, etc. An ethical culture or an ethical mindset should never be a tradeoff against those. That's a false dichotomy; it's not an either/or.
Fiddler: That's probably one of the challenges: revenue versus being ethical, and what do we focus on? How do you think we should go about closing some of these gaps, implementing ethics, and getting over these hurdles?
Merve: Like I said, culture is the biggest one, but discussions on accountability for automated decision systems, and the potential biases and concerns around them, have created this need for explainability and interpretability in these systems. Organizations, whether private or public, as well as users, will demand more information and explanation from developers going forward. Whether that demand is driven by ethical concerns, privacy reasons, or reducing liability is another question, but I think we should look at solutions for how to address those: how to explain your systems, how to interpret them correctly, and how to give comfort to your end consumers.
Another piece is obviously having clean and representative data sets. A lot of the time, organizations either collect data and ask questions later, or, again because of cost-benefit tradeoffs, they end up using data sets created by others for other reasons and try to flex that data to develop their own solutions. What you end up with is making inferences and decisions with the data you have, not necessarily the data you should have for your own purpose. That becomes a really convoluted situation.
I also want to add that lack of diversity is a challenge, whether in your team or in your impact assessments. One way of overcoming that is to always have more diverse teams, but also to talk with the more diverse populations that are impacted by these systems.
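To make the representativeness point concrete, here is a minimal sketch of the kind of pre-training check a team could run, comparing the group shares in a training set against the shares of the population the model is meant to serve. The file name, column name, reference shares, and 5% threshold are hypothetical placeholders, not details from any system discussed here.

```python
import pandas as pd

# Hypothetical training set; the file name, column name, and
# reference shares below are illustrative assumptions only.
train = pd.read_csv("training_data.csv")

# Assumed share of each group in the population the model will serve.
reference_shares = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

# Share of each group actually present in the training data.
observed_shares = train["demographic_group"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    observed = observed_shares.get(group, 0.0)
    flag = "  <-- under-represented" if observed - expected < -0.05 else ""
    print(f"{group}: expected {expected:.0%}, observed {observed:.0%}{flag}")
```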
Fiddler: You mentioned being able to explain your decisions, and obviously the data itself being comprehensive, but it's not always the case that you can actually get comprehensive data. Would you then be able to explain what's going on in the wild with your model in order to address some of these data issues? How does this resonate with you: why do you need something like explainability, and how important is it for enterprises or any business?
Merve: Extremely important. Explainable AI is obviously not the whole answer, but it is a very crucial step. When a human's life is involved, you can't just say, "The algorithm said so," or "The AI said so." There are still a lot of debates around legislation here in terms of liability, and who's liable.
I'll give a very basic example from personal experience: starting a new job. You don't immediately have the autonomy to make decisions when you start. Over a number of occasions, you have to explain yourself and show how you analyzed the data and came to your conclusions; you have to prove your competency in making these decisions. Then your supervisors and stakeholders provide feedback, you discuss the implications of those decisions, you learn more, you improve your data, you adjust, and you start asking better questions. It's only once you have established that experience and trust within your organization or structure that you get to make decisions that are implemented. But even at that point, it doesn't mean your supervisors, stakeholders, or customers will never ask you questions, provide feedback, or help you correct issues.
I always go back to this example: you should be able to walk people through your thought process and establish trust, but even then, that trust should not prevent anyone from interrogating the process, providing feedback, and improving your solutions.
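The "walk people through your thought process" idea maps naturally onto per-prediction explanations. As one illustration, here is a minimal occlusion-style attribution sketch on a toy scikit-learn model: each feature is scored by how much the predicted probability moves when that feature is swapped for its dataset mean. The model and data are synthetic, and this is just one simple explanation technique among many, not a method endorsed in the episode.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# A synthetic model and data set standing in for a real decision system.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def occlusion_explanation(model, X_background, x):
    """Score each feature by how much the predicted probability
    changes when that feature is replaced with its dataset mean."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(len(x)):
        masked = x.copy()
        masked[j] = X_background[:, j].mean()
        scores.append(base - model.predict_proba(masked.reshape(1, -1))[0, 1])
    return base, scores

base, scores = occlusion_explanation(model, X, X[0])
print(f"Predicted probability: {base:.2f}")
for j, s in enumerate(scores):
    print(f"feature_{j}: contribution {s:+.2f}")
```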
In a world with more and more sophisticated cyber-attacks and data poisoning, the other reason explainable AI resonates so much with me is that you can use it as a control component. By looking at your model, your ratings, your data, and all your data points, it can act as another control mechanism to ensure your data is not being manipulated. That's extremely crucial for businesses and organizations.
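As one way to picture explanations serving as a control, a sketch along these lines could compare a feature's (or an attribution score's) distribution at validation time against what the model sees in production, and flag large shifts for investigation. The data here is synthetic and the threshold is an assumption, not a recommendation.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: a feature (or attribution score) captured at
# validation time versus the same quantity observed in production.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted: simulates tampering

# A two-sample Kolmogorov-Smirnov test flags the distribution shift.
stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # arbitrary threshold chosen for illustration
    print(f"Possible drift or manipulation (KS={stat:.3f}, p={p_value:.2g})")
else:
    print("No significant shift detected")
```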
Fiddler: Thank you for that insight. What are the three key things that business teams, or any team building AI over the next few years, should focus on from an ethics perspective?
Merve: Internalizing ethics in your training, your product development, and your support services, going back to that life cycle. This would ensure that your product does good out in the world and doesn't deepen structural injustices. It would also give your workforce the tools to be aware of the impact of other systems around them, whether those are systems that impact them personally or your competitors' systems. You can't go wrong with more ethics and internalizing it.
Another focus should be building trust by creating more transparent, explainable, ethical AI with strong governance models. Without trust, I don't think we're going to get past a certain point.
I would also say securing and governing your data: what you collect and utilize for your business and your solution. With concerns growing over privacy and control of personal data, there will be more requirements for businesses to give people access to and control over their digital data. Just like we're seeing in the EU with GDPR, there will come a point in the US as well when you're going to have to explain what you're doing with this data, why a decision was made, etc. Businesses should be focusing on getting ready for those changes as well.
Fiddler: Yeah, we're already seeing it here in California with the CCPA, for example.
Merve: Demand is going to grow from that wave, from Europe to California; I think it's going to expand.
Fiddler: As we talk about the future, where do you think AI is going, especially in terms of ethics implementation?
Merve: This is the one where I'm not too optimistic about where it's going next. I would definitely like to see more AI solutions that improve social justice, tackle issues like the climate crisis and sustainability, or cultivate education. But when you look at the amount of investment going into autonomous systems, facial, voice, and object recognition, geospatial mapping, and the like, my prediction is that the expansion of AI will be within the defense and surveillance industries more than anywhere else. Even if you're not necessarily developing a solution for military purposes, if it uses those features of the technology, I think it will inevitably lead to solutions and developments for military and surveillance uses.
My only hope is that such developments will also expand the public debate around accountability, explainability, privacy, and how those should be handled. I think we're seeing an example of that debate now with facial recognition technology and its use across the US. I'm not predicting that it's headed toward social good, but I think that development is going to trigger more of the conversations we need to have.
Fiddler: Thank you so much, Merve, for joining us and sharing all of this insight. I really appreciate it and hope we get the chance to connect with you again soon.
Merve: Absolutely, Anusha. And again, thanks for having me.