While several types of economic crime exist, this article focuses only on money laundering, fraud, and corruption. The study that inspired this post compared the traditional methods of financial crime detection with the “new” methods that employ artificial intelligence, looking specifically at the anti-financial crime methodologies currently used in the United Kingdom.
Traditional methods of anti-financial crime
In the United Kingdom, the traditional approach to combating financial crime is based on audit and compliance. It is illustrative in this regard that, on the one hand, the Financial Conduct Authority (FCA) Handbook requires certain entities to maintain specified systems and controls and, on the other, the FCA Financial Crime Guide requires firms to have robust systems, controls, and governance arrangements.
Specifically, in relation to money laundering, in 1997 the Financial Services Authority (now the FCA) outlined a two-stage risk-based approach to combating money laundering. This approach required entities first to devise a list of products and services categorized by risk status and second, to put in place a new set of procedures to verify client identities. These methods placed risk-management and client-verification requirements on entities, but such requirements have not effectively addressed the money laundering problem. Even now, the FCA has noted that many firms are only in the early stages of money laundering risk assessment and that more emphasis must be placed on customer risk assessment and customer due diligence. The emphasis on more effective transaction surveillance systems can be seen in the recent Upper Tribunal case involving Linear Investments, where the FCA imposed a fine for Linear Investments’ failure “to take reasonable care to organize and control its affairs responsibly and effectively with adequate risk management systems in relation to the detection and reporting of potential instances of market abuse.”
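To make the two-stage approach concrete, here is a minimal sketch of how such a check might be encoded. All product names, risk tiers, and outcomes below are illustrative assumptions for this post, not the FSA’s actual 1997 categories:

```python
# Stage 1: categorise products and services by money-laundering risk.
# The products and tiers here are invented for illustration.
PRODUCT_RISK = {
    "current_account": "low",
    "savings_account": "low",
    "international_wire": "high",
    "private_banking": "high",
}

def required_due_diligence(product: str, identity_verified: bool) -> str:
    """Stage 2: map product risk and client verification status
    to a due-diligence outcome."""
    risk = PRODUCT_RISK.get(product, "high")  # unknown products treated as high risk
    if not identity_verified:
        return "reject_or_verify"             # client identity must be verified first
    return "enhanced_due_diligence" if risk == "high" else "standard_due_diligence"

print(required_due_diligence("international_wire", True))
print(required_due_diligence("current_account", False))
```

The rigidity of such rule tables is precisely the limitation discussed below: a fixed mapping cannot adapt as criminal behaviour evolves.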
As Professor Nicholas Ryder has argued, the most familiar regulatory mechanism for money laundering detection is the Suspicious Activity Reports (“SARs”) regime, often described as the most important weapon in the fight against money laundering. SARs are considered an effective tool in financial crime detection because, in the regulated sector, failure to submit such reports is criminalized under the Proceeds of Crime Act 2002. However, a survey has shown that the success of suspicious transaction monitoring is in decline (from 22% in 2016 to 10% in 2018).
As regards fraud, much of the UK government’s effort has focused on educating the public on how to avoid becoming victims of fraud. For example, the UK advocates the immediate reporting of any suspected fraud to Action Fraud [https://reporting.actionfraud.police.uk/login], the national reporting center for fraud. Organizations such as the Charity Finance Group have emphasized the importance of an ‘anti-fraud culture’ within organizations as a key strand of any counter-fraud strategy. Despite these initiatives, 50% of UK businesses are said not to have carried out a general fraud risk assessment. For those who do have fraud assessments in place, these tend to be static documents that do not respond to our complex and evolving environment – an environment where protection against cybercrime and cyber-attacks is paramount.
In relation to corruption, governments’ major responses include passing laws banning bribe-giving and cooperating across national boundaries. The US Foreign Corrupt Practices Act of 1977 served as the pioneering legislation outlawing bribery in international business. Other countries followed the United States once they realized that bribery and corruption had become global issues. The model used by the Foreign Corrupt Practices Act contained two principal mechanisms – an outright prohibition on payments to foreign officials, and accounting and recordkeeping requirements for the foreign operations of publicly held companies. Another mechanism used by states such as the United Kingdom is the requirement of transparency of transactions, such as the reporting of political donations above a certain amount. But such a transparency regime is inadequate due to the limited nature of the information required to be disclosed: while the prima facie parties to an exchange are disclosed, the reason the money was given is not always clear, nor is the reaction of the recipient apparent.
A report by Transparency International UK notes 237 corruption cases over the last 30 years in the UK’s offshore jurisdictions, in which 1,201 different registered companies aided corruption and bribery. The Corruption Perceptions Index 2018 shows the UK perceived as the 11th least corrupt state, yet a PwC survey found that 23% of surveyed UK organizations admitted to having experienced corruption in 2016/2017. This stark contrast between perception and reality underlines the rising difficulty of detecting bribery and corruption within UK organizations.
Is AI the future of Anti-financial crime?
Artificial intelligence (AI) is one of the most talked-about topics today. Despite immense growth and development in the field in the modern era, AI still has no standard definition. For the purposes of this study, the following definition of AI is used: “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.” In other words, AI takes external information from big data sources as input and uses machine learning to identify underlying rules and patterns in order to produce a probabilistic output. We encounter such AI applications in everyday activities such as Facebook’s facial recognition, the iPhone’s Siri voice recognition, and Tesla’s self-driving cars.
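The definition above can be made tangible with a toy example: a model that adjusts its internal weights from data and then maps new inputs to a probabilistic output. Every number here is invented for illustration; real systems learn from far richer data:

```python
import math

# Toy training data: (feature, label) pairs — e.g. 1 = "pattern present".
data = [(0.1, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
w, b = 0.0, 0.0

# "Flexible adaptation": repeatedly nudge the learned rule toward the data.
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # probabilistic output in [0, 1]
        w += 0.5 * (y - p) * x                # adjust weights toward observed pattern
        b += 0.5 * (y - p)

def predict(x):
    """Map a new input to a learned probability."""
    return 1 / (1 + math.exp(-(w * x + b)))

print(predict(0.2), predict(0.8))  # low probability vs high probability
```

The point is not the arithmetic but the shape of the process: data in, learned pattern, probabilistic judgment out — exactly the shape that anti-financial crime systems exploit at vastly larger scale.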
This ubiquity of AI technology is a double-edged sword. While the technology solves many of our everyday problems, the same AI technology is being used by cybercriminals to perpetrate global financial crimes, and the figures presented above show that legal systems are unable to keep pace with the tremendous speed at which it evolves. It would thus seem only natural to combat these crimes with the same or better technology.
AI experts believe that the best AI systems must contain three components: big data, computing power, and the work of AI algorithm engineers. They add that computing power and engineering talent have thresholds, so the quantity of data becomes the decisive factor in the overall power and accuracy of an AI system. Where can we find such amounts of data? We do not have to look far to find entities that hold and record mammoth amounts of data: banks and other financial institutions. It is no coincidence that these are also the entities involved in some of the biggest financial crime scandals. With millions of transactions processed each day, these entities are a goldmine for data – but also for financial crime.
Recognizing the wealth of transaction data held by banks and other financial institutions, AI companies like Quantexa and Ayasdi have already partnered with global banks such as HSBC to combat financial crime. For example, the technology introduced by Quantexa allows HSBC to spot potential money laundering activity by analyzing its customers’ transactional data. HSBC’s Chief Operating Officer acknowledges that AI technology is now being used to automate anti-money laundering tasks that traditionally required thousands of humans. Ayasdi, for its part, has pointed out that most traditional anti-money laundering processes conducted by banks are unable to detect unusual activity, resulting in wasted time and resources. With the use of AI technology, banks are seeing a significant increase in efficiency, which comes from the combination of a higher number of suspicious activity detections and a lower number of false alerts.
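The gain over fixed-threshold rules can be illustrated with a simple statistical sketch: flag a transaction only when it deviates strongly from that customer’s own history, rather than whenever it crosses a blanket limit. This z-score toy is a stand-in for the machine-learning systems vendors such as Quantexa deploy, and all figures are invented:

```python
from statistics import mean, stdev

def flag_unusual(history, new_amount, z_cutoff=3.0):
    """Flag a transaction whose amount deviates strongly from the
    customer's own historical spending pattern."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_cutoff

history = [120, 95, 130, 110, 105, 125, 98, 115]  # typical spend (invented)
print(flag_unusual(history, 118))    # consistent with past behaviour: not flagged
print(flag_unusual(history, 9500))   # far outside the pattern: flagged for review
```

Because the baseline is per customer, a £118 purchase generates no alert for this account while a £9,500 transfer does — the combination of more true detections and fewer false alerts described above.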
Similar technology is being used in fraud identification, where AI is focused on identifying complex fraud patterns and reducing the number of false positives by consolidating large volumes of data such as geolocation, tagging, IP addresses, phone numbers, and usage patterns. In 2018, a text-spoofing blocker was introduced in the UK to block scam texts by allowing banks to register their sender IDs. The same year saw the introduction of Vocalink’s Mule Insights Tactical Solution (MITS), which uses AI to identify ‘mule accounts’ by tracking suspicious payments as they move between bank and building society accounts, regardless of whether the payment amount is split between multiple accounts or whether those accounts belong to the same or different financial institutions.
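The core idea behind mule-account tracing can be sketched as a traversal of a payment graph: starting from a flagged account, follow the funds downstream even when an amount is split across several recipients. This plain breadth-first search is an illustration in the spirit of MITS, not Vocalink’s actual algorithm, and all accounts and amounts are invented:

```python
from collections import deque

# payments[account] = list of (recipient_account, amount); data invented.
payments = {
    "victim":  [("mule_A", 10_000)],
    "mule_A":  [("mule_B", 6_000), ("mule_C", 4_000)],  # split across accounts
    "mule_B":  [("cashout_1", 6_000)],
    "mule_C":  [("cashout_2", 4_000)],
}

def trace_funds(start):
    """Return every account reachable from `start` along payment edges,
    regardless of how the original amount was split along the way."""
    seen, queue = {start}, deque([start])
    while queue:
        acct = queue.popleft()
        for recipient, _amount in payments.get(acct, []):
            if recipient not in seen:
                seen.add(recipient)
                queue.append(recipient)
    return seen - {start}

print(sorted(trace_funds("victim")))
# ['cashout_1', 'cashout_2', 'mule_A', 'mule_B', 'mule_C']
```

The traversal is indifferent to which institution holds each account — it follows the edges of the payment graph, which is what allows split payments across multiple banks to be reconnected into a single trail.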
AI technology is also used to detect bribery and corruption. It relies on the analysis of multiple sources of information found in emails, phone calls, messaging, and expense reports. The peculiar nature of bribery and corruption, however, makes it harder for AI engineers to write algorithms for its detection. There are also issues of data privacy and confidentiality especially when personal emails or phone calls are involved.
Another overarching challenge for the introduction of AI technology, not just for bribery and corruption but for all financial crimes, is the general hesitation of entities such as regulated institutions to introduce new technologies within their organization. A 2018 survey showed that a quarter of respondents had no plans to use artificial intelligence for anti-fraud measures. Issues raised by firms include a lack of internal technology capabilities: their contention is that without being able to truly understand and operate these AI technologies themselves, they would have difficulty convincing regulatory authorities such as the FCA, which itself may not have the technical expertise required to appropriately evaluate an institution’s adoption of new AI technology solutions. The financial cost of introducing new technology is also seen as a major deterrent. For smaller firms, practical issues such as a limited compliance budget represent a big hindrance; for larger institutions doing business internationally, standardizing AI technology across compliance systems in multiple jurisdictions proves to be the challenge.
From an analysis of the current landscape, it would seem that traditional methods are inadequate, as they cannot keep up with the evolution of technology. To be clear, however, inadequate does not mean unnecessary. While criminals have become adept at ‘gaming’ the financial markets, no matter how sophisticated an AI system is, a degree of human control remains necessary: even the smartest AI systems are prone to mistakes and need to be corrected from time to time. There is also much value in the traditional methods of anti-financial crime, as these initiatives were carefully planned and have sound bases. Nor can the traditional mechanisms simply be abandoned, given the strong preference of entities to stick with the “old” ways and reject the “new.”
A compromise thus seems to be the way forward. As the number of money laundering cases continues to rise, companies are only now beginning to accept the use of AI systems to augment the current methods of combating financial crime, as seen in the introduction of AI systems in banks such as HSBC. Even the FCA believes that public-private partnership is key to reducing the harm caused by money laundering. To reiterate, there is nothing inherently wrong with the current regulations based on customer risk assessment and customer due diligence, but there is a problem of efficiency in the current compliance systems. An AI framework can thus be used to enhance the regulators’ current risk assessment approach. With the use of competitively priced and effective AI systems currently available in the market, companies can lower their compliance costs yet increase output efficiency in detecting money laundering. Additionally, there must be a specific focus on introducing AI anti-money laundering systems in banks, credit institutions, and gambling operators, as these have been identified as the primary sources of SARs.
Similarly, AI technologies such as Ayasdi’s tools for identifying complex fraud patterns and Vocalink’s MITS system for identifying mule accounts represent a significant improvement in the speed and efficiency of detecting fraud and identifying fraudsters. That increase in speed and efficiency can make the difference in preventing, or recovering, the billions of pounds lost through fraud. Using these technologies while continuously educating the public on fraud protection seems optimal.
So to answer the question “Is AI the future of anti-financial crime?”: AI will most likely be an integral part of that future. The optimal approach combines the best aspects of the traditional regulatory methods with the efficiencies of the new AI technologies. The solution is not the use of AI alone, but the use of AI together with existing law and regulation. Having said that, it is inevitable that everyone must face the future. Even if AI technology is at the forefront of anti-economic crime development, the regulations already in place should not be abandoned but merely tweaked to accommodate AI-based tools. New technologies will inevitably arrive, and it is up to the recipients of those technologies to decide how to use them – for good or evil. Regulation can thus still play a crucial role in enforcing the correct approach to the use of AI. AI technology is itself neutral, and humans cannot merely stand by as passive spectators: they must act for the good when using such technology. That is the challenge.
The views, opinions, and positions expressed within all posts are those of the author alone and do not represent those of the Corporate Social Responsibility and Business Ethics Blog or of its editors. The blog makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with the author.