We were delighted to host three brilliant AI minds working in the financial sector for a panel on our AI in Finance and RegTech Summit stage last week.
Speakers on this panel included Ronan Brennan, Strategy and Innovation Manager at NatWest, Mackenzie Wallace, Specialist in FinTech/RegTech at The World Bank, and David Bryan, Director of Presales at MANTA. The panel was broken down into the four themes listed below and finished with a Q&A.
- The question of fair & responsible AI
- The challenge of governance in Finance and RegTech
- The challenge of regulating AI
- The law brings it to life
MANTA also recently released a guide on how to achieve full compliance and build trust in the financial sector. Click here to access the free guide.
David Bryan, Director of Presales, MANTA
“When you think about credit reports or ratings, often it's a mystery to people how it’s composed or where it comes from, but there is a potential for this to become more mysterious as we use AI. The ability to validate this data and trust this data is becoming more and more important. There has to be trust in both collection and processing.”
“I think AI is a big factor in changing the path of information that we’re looking at. As the information world has become more sophisticated and with more channels being used, it’s more difficult to find sources of information and to be able to trace them. Similarly, it's hard to put governance on the sources of this information. We have to focus on trust in data, increase governance on the data. Now AI is helping deliver better information, there needs to be governance on how to discover it.”
“AI is relatively new to all of us and is creating a wealth of knowledge, but this needs to be governed. Data storage carries an increased responsibility too, to protect customer information. We need to create sophisticated tools that develop with the speed of AI. I have been trying to make the presentation of the information behind the scenes understandable, for transparency.”
POLL — How well do you understand the data used when making your personal credit decisions?
I’m confident I know how this works (33.3%)
I have a rough understanding (33.3%)
I could guess, but I’m not confident (16.67%)
I have no idea (16.67%)
“It’s very hard to regulate intent, I come down on the side of regulating results and trying to make the world a more open place.”
“AI is new to all of this (data governance), but we have to be able to govern the information AI creates. The regulations that have been introduced to protect personal info are well intended, but we need sophisticated tools to be able to do that.”
“If I can develop something with AI that might reasonably predict someone’s likelihood of defaulting on a payment, is that something we should be doing? I think that where we can start is being as open as possible, being transparent, the types of calculations & logic that are being applied (credit history & availability).”
Mackenzie Wallace, Specialist on FinTech/RegTech, The World Bank
“When deploying ML models, it needs to be well managed. There’s a big difference between building a model and then doing the due diligence, building trust, and then rolling it out. Model risk governance is at the core of how you would go about building models in an organization.”
“I think it’s important to also think about applications in relation to risk. What is the AI application we are talking about — NLP v ML etc, as there are different levels of maturity. Secondly, what is the business case you are applying it to, for example with underwriting, the margin for error is incredibly small, therefore we need to evaluate risk the entire way through.”
“As you mentioned, with Goldman Sachs (referencing the bias investigation), they are not using AI models for credit decisions, but this (the investigation) highlighted inadequacy in the processes we have today. As applications get more sophisticated, we need to understand where these biases exist and how we can reduce them; it’s not a question of IF there is bias, but how we can decrease it.”
“Model transparency becomes really important. AI-based models have large implications on fairness & inclusion — When AI is deployed correctly it can have a positive impact.”
POLL — How easy is it for you to determine the source of the numbers on your corporate financial reports?
Very easy (0%)
Can be determined with reasonable effort (71.43%)
Very difficult (14.29%)
Ronan Brennan, Strategy and Innovation Manager, NatWest
“The maturity of understanding where the risk lies (in using AI models in Finance) is gaining importance. When I was trying to build data governance policies, I found myself building them for the technological environment I was hoping to see in ten years’ time, instead of the environment we’re living in now.”
“One of the key challenges that has been faced is the maturity of understanding where the risk lies. If you find yourself talking to someone designing a model and they think their model does not discriminate against certain categories, that for me is one of the core challenges — if your model is really good, it’s going to figure out the categories you aren’t giving it, because models can deduce and build from unstructured data. How do you get to the point, at the business case level, where those potential ethical risks are addressed?”
POLL — How important is it to you to be able to determine the source of the numbers on your corporate financial reports?
Somewhat Important (12.5%)
Not Important (0%)
“People became interested in decision making through AI when women were receiving worse credit ratings than men from algorithms. What was interesting is that although cleared of legal wrongdoing, they (the credit companies) acknowledged that this imbalance is of concern and that there is more work to be done here. Then we started to ask how we can approach that risk of embedding structural biases in current and future regulations.”
“When I look at AI projects, I think about what a fair outcome is. For me, the requirement for fairness in finance increases with our technical capability — we can rightly expect more of ourselves as we can deliver greater degrees of fairness than we have historically.”
“It was thought the Apple card may have been discriminating against women, with lower credit limits than men, which got people really interested in how these decisions were made. The outcome was that Apple was found not to be discriminatory, but what was interesting was that the department took the time within its report to assess whether the credit decisions were discriminatory. Problems could have arisen from how their AI product was rolled out rather than the decisions it made.”
Question: In a sentence how would you define responsible use of data?
David Bryan: “Well I think everybody’s been focused on the responsible use of data, and we’ve seen a lot of regulation pop up — everybody’s familiar with the European regulation GDPR. Right now we’re focused on how we control the presence of personal information as it relates to transactions.”
Mackenzie Wallace: “To me responsible use is about a few things: 1) informed consent, 2) transparency and how it’s used, and 3) the accuracy of the underlying data. I think of it as a protection of that data from unauthorized use. I personally look to the CFPB’s top principles for consumer protection in this area.”
Question: AI is quite a new and shiny set of technologies that are actually relatively varied, so talking about them in a one-size-fits-all way could be problematic — how should regulators regulate? What should they regulate?
Mackenzie Wallace: “My view is that AI is the latest step in the long history of technological advancements — Rather than trying to regulate a very specific thing, I think regulating the principles is much more effective — Such as data protection, how the data is used, fairness & bias.”
Ronan Brennan: “My core worry is that we approach a slight tipping point where we have a set of technologies that potentially shine a light in places where sub-par decisions have been made previously, and if those aren’t addressed immediately, could cause problems down the line in terms of later generations of this technology.”
POLL — How important do you think the role of regulators will be in AI being a force for good in Financial services?
Most important (50%)
As important as banks and other FS institutions (37.5%)
Less important than banks and other FS institutions (12.50%)
Not important at all (0%)
Question: When considering responsible use of data, how can we approach the risk of continuing to embed structural biases?
Mackenzie Wallace: “The first step is to recognize that there are severe biases. As models get more sophisticated, we need to know that bias exists and our question should be How Can We Reduce It?”
David Bryan: “Sometimes we blame tech for things that already existed such as bias. It's equally important that we look at how these new technologies can help us improve these situations.”
Ronan Brennan: “When I looked into this broader topic in the past, I found that for all the potential of AI to exacerbate biases, it also has the capacity to reduce them. We should let companies use this tech to investigate the sources of those structural biases.”
POLL — Do you think AI adoption is likely to enhance Fairness in Financial Services?
Without doubt! (25%)
Probably, it’s progress (37.5%)
Possibly, but I’m not convinced (12.5%)
Highly unlikely (25%)
Question: How broad a definition of AI is useful for regulation?
Mackenzie Wallace: “I think it goes back to being risk proportionate, so I think this is a fun question to entertain, but when it comes to regulating activity you need to look at the risk & result of those models, and the regulation should be appropriate to that.”
David Bryan: “I think the earlier parts of our conversation pointed toward not trying to regulate the technology used to accomplish something, but instead regulating the purpose of that tech. I suggest we do not attempt to regulate by a definition of AI, but regulate by definition of the things we wish to control, such as race and gender.”
Panel Speaker Bios:
Ronan Brennan — Strategy and Innovation Manager — NatWest
Ronan is a Strategy & Innovation Manager at NatWest, with an academic background examining potential inequality outcomes from widespread “AI” adoption. He has previously helped break ground on AI Model Risk Governance, Emerging Technology Strategy, and Platform Business Models within Financial Services.
David Bryan — Director of Presales — MANTA
David directs the presales team at MANTA. He joined MANTA from BlackLine, a financial close management SaaS provider, where he was Vice President, Global Presales. Over a 20+ year career in the information technology sector, Mr. Bryan has guided technology teams at Computer Associates and Infor. Before focusing on technology, Mr. Bryan served as Chief Financial Officer in the services industry. Mr. Bryan received his Bachelor of Science degree from the University of Virginia.
Mackenzie Wallace, Specialist on FinTech/RegTech, The World Bank
Mackenzie Wallace is passionate about the use of data and technology, both as a product innovator and financial regulator. He is co-author of the World Bank’s 2021 technical note, “The Next Wave of Suptech Innovation: Suptech Solutions for Market Conduct Supervision,” on the growing use of such solutions by financial authorities globally. He is a former financial regulator and early employee of the U.S. Consumer Financial Protection Bureau (CFPB), where he helped pioneer the authority’s innovative consumer complaint system and public complaint database. He also served as FinTech Policy Advisor at USAID, where he helped create the RegTech for Regulators Accelerator (R2A), working with financial authorities globally to embed data and technology into supervision. He currently serves as Director and Head of Product at the FinTech MPOWER Financing, designing inclusive financial products to make higher education more accessible.