What’s in the box? A critical look at transparency in AI for banks
AI-powered automation is already being used by banks for processes such as KYC and AML. And more banks will inevitably follow suit as AI technology takes over an increasing number of repetitive banking tasks.
By Krik Gunning - CEO
But there are growing questions about the security of AI systems and how much human control should be put in place. Transparency is a key factor and involves several considerations such as:
The relevance and ethics of data sources.
The level of human intervention needed for an optimal balance between speed and control.
The ability to explain to regulators and customers how a decision was made.
The stakes are high. Banks are currently making big investments in growing their KYC and AML teams to ensure they are protected against legal issues, their clients are safe, and their own reputations remain intact. However, this resource-heavy approach comes at a high cost, is difficult to scale, and is unlikely to make our financial systems more secure.
In this article, we will look at some of the key considerations for banks around transparency in AI tooling to ensure they are well-equipped to make the best decisions for their unique situations.
Ensure training data is relevant and ethical
AI models need to be trained on data so they can recognize patterns and arrive at correct decisions. But the source of that training data is a critical consideration. Whether you are outsourcing your AI systems or building them in-house, the data needs to be representative of your users to ensure decisions are fair and accurate. And it must be gathered in a transparent and ethical way.
Sourcing data ethically is a big challenge for banks, particularly when it comes to complex questions around surveillance and racial bias (we explore this in more depth in our article about compliance, ethics, and security in AI). Ultimately, you want to avoid being associated with any unethical application of data, which can be both financially and reputationally damaging.
How Fourthline approaches training data
Fourthline uses biometric AI checks to assess videos and images for liveness, allowing for faster and more accurate KYC and AML processing flows. We take an ethical approach to gathering AI training data through our onboarding flow during which clients agree to share their biometric information to improve the service.
Safeguarding this data is our highest priority. We employ robust security measures around the clock: with 24/7 server and service surveillance, our teams receive real-time updates so they can promptly address any potential issues. Fourthline operates on AWS cloud servers, which comply with stringent IT and operational security standards. Together, these measures ensure our data cannot be used by third parties for illegitimate or unethical purposes.
Maintain an optimal balance of automation and human control
AI can do many tasks faster and better than humans. It also gives humans more time to do high-value strategic work, such as staying up to date with the latest fraud trends and optimizing processes.
But there are always risks and gray areas, and being too hands-on or too hands-off with AI can cause problems. With certain biometric checks, for example, you may take a conservative approach and block any partial or indeterminate match. The problem is that you will also block legitimate customers, damage your reputation, and lose revenue. Or you could be overly trusting and fail to notice fraud trends as they evolve. There always needs to be a human expert to fall back on when AI decisions are indeterminate.
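One way to picture this balance is as a pair of confidence thresholds: clear matches and clear mismatches are decided automatically, and everything in between is routed to a human expert rather than blanket-blocked. The sketch below is purely illustrative; the function name and threshold values are assumptions, not Fourthline's actual implementation.

```python
# Hypothetical sketch of threshold-based routing for biometric match scores.
# The thresholds and names here are illustrative, not Fourthline's real values.

def route_biometric_check(match_score: float,
                          approve_at: float = 0.95,
                          reject_at: float = 0.40) -> str:
    """Return an outcome for a biometric match score in the range [0, 1]."""
    if match_score >= approve_at:
        return "auto-approve"   # clear match: no human time spent
    if match_score < reject_at:
        return "auto-reject"    # clear mismatch: likely fraud
    # Indeterminate scores go to a human expert instead of a blanket block,
    # so legitimate customers near the threshold are not turned away.
    return "human-review"
```

Tuning the two thresholds is how a bank expresses its risk appetite: widening the gap between them sends more cases to humans, narrowing it automates more decisions.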
On the other hand, relying solely on manual processes is unsustainably expensive. Consider a remediation project in which years-old KYC data does not meet today’s standards: no selfie, expired passports, checks that were never performed, and so on. From a regulatory perspective, this is a risk, and the bank will be given a deadline to mitigate it. A common course of action at this stage is for the bank to hire consultants to update and organize the data manually. This is normally followed by an expensive, slow process in which forms are mailed to bank clients to fill out and return. This is not compatible with the digital banking standards clients now expect.
Alternatively, banks could leverage AI tools to initiate a KYC flow via the banking app. This would evaluate the legitimacy of ID documents and selfies and streamline the acquisition of the data points necessary to address the regulatory risks. And it would take a matter of minutes at a fraction of the cost.
How Fourthline approaches human control
Our AI models are trained on the same data domain as the one to which they are applied, reducing the risk of indeterminate or wrong decisions. Nonetheless, some risk remains, so Fourthline introduces human checks at key moments. These trained experts have access to a rich knowledge base that helps them process cases accurately, per regulatory requirements. Below is an example of Fourthline’s KYC flow combining AI and human input:
Fourthline’s AI agent performs automated checks on all incoming cases and assigns outcomes in line with the risk appetite and requirements of the business partner.
Open checks that require human review are flagged in the Fourthline Case Review Portal - an AI-powered solution where human teams can efficiently process and audit all types of cases, with a complete overview and evidenced audit trails.
There are two buttons next to each open case inside the portal. A green button marks the case as complete, and a red button either sends it back to request more information or rejects the whole case.
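The flow above — automated checks first, with low-confidence cases flagged into a review portal alongside an audit trail — can be sketched as follows. All names, fields, and thresholds here are hypothetical stand-ins, not Fourthline's actual API.

```python
# Illustrative sketch of an AI-plus-human KYC review flow. The Case fields,
# outcome labels, and risk-appetite parameter are assumptions for this
# example only, not Fourthline's real data model.

from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    ai_confidence: float                       # confidence from automated checks
    audit_trail: list = field(default_factory=list)
    outcome: str = "open"

def process_case(case: Case, risk_appetite: float = 0.9) -> Case:
    """Run automated checks; decide within appetite, else flag for humans."""
    case.audit_trail.append("automated checks run")
    if case.ai_confidence >= risk_appetite:
        # The AI agent assigns an outcome in line with the partner's appetite.
        case.outcome = "approved"
        case.audit_trail.append("auto-approved by AI agent")
    else:
        # Open checks are flagged for human review in a portal, with the
        # audit trail preserved as evidence for later audits.
        case.outcome = "flagged"
        case.audit_trail.append("flagged for human review")
    return case
```

The key design point is that every step, automated or human, appends to the same audit trail, which is what makes the final decision explainable to regulators later.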
Additionally, our experts review historical data and compare it to existing activities to see if things have changed. In doing so, we can add or amend our checks to ensure our partners stay ahead of trends or changes.
Guarantee you can explain decisions to regulators
It is a regulatory requirement in the EU that Identity Verification and AML solution providers are able to explain how their solutions work.
This is important because AI decision-making can resemble a "black box," which is frustrating for both regulators and clients. There are numerous examples of financial discrimination, including decisions based on the country prefix of an IBAN, or ethnically diverse clients being subject to more checks.
Put these things together, and you have a situation in which you may run into legal and reputational risks because you can’t explain how a decision was reached, even if your training data and solution are ethical and unbiased.
How Fourthline approaches explainability for regulators
If a regulator wants to know how decisions are reached and how the model works, we can provide an audit trail and a reasoning for every action. We also equip our business partners and external auditors with the tools to perform audits themselves. For example, for each Identity Verification case, we generate a specialized Client Due Diligence (CDD) report which contains:
A PDF report with all data and files we processed, all checks performed (both automated and human), the final outcome, and the risk score
An XML file containing all data and any corrections
All identity files
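For illustration, the three parts of such a report bundle can be modeled as a single structure with a basic completeness check before it is handed to auditors. The type and field names below are hypothetical, chosen for this sketch; they do not reflect Fourthline's actual report format.

```python
# Hypothetical shape of a CDD report bundle; all names are illustrative.

from dataclasses import dataclass

@dataclass
class CddReportBundle:
    pdf_report: bytes      # all processed data, checks performed, outcome, risk score
    data_xml: str          # all data points and any corrections
    identity_files: list   # e.g. document scans and selfie images

def bundle_is_complete(bundle: CddReportBundle) -> bool:
    """Minimal sanity check: every part of the bundle must be present."""
    return bool(bundle.pdf_report and bundle.data_xml and bundle.identity_files)
```

Bundling the evidence this way means a business partner or external auditor can verify a case end to end without access to the provider's internal systems.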
This complete set of reporting and audit trails includes all the information and proof points needed to comply with local 5AMLD regulations in every European country. Collaboration with clients on remediation projects (like the scenario explored in the section above) has helped banks save millions of dollars while staying compliant.
The impact that AI can have on KYC and AML today
Banks looking to adopt AI technology today should think critically about the big issues around transparency, including data integrity, explainability, and when to involve human expertise. With the right combination, they are perfectly positioned to drastically reduce the unsustainable overheads associated with compliance, and make smarter, more accurate decisions with KYC and AML. Doing so has an enormous potential impact in terms of accuracy and time saved.