A Case for The Responsible Use of Artificial Intelligence in Global Financial Services

As Klaus Schwab noted in his paper The Fourth Industrial Revolution: “We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before.”

The #AIFearFactor – the fear of AI technology gaining dominance over human beings – used to be a theoretical fixture within the realm of science fiction. Who can forget the confrontation scenes between HAL – the computer – and Dave – the human – in Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey?

However, what were once imaginary threats have now evolved into actual concerns, given recent advancements in the field of AI. The following words of Stephen Hawking are illustrative: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”

Although complex and broad philosophical and socioeconomic implications arise from the development of AI-based systems, this article will focus on the use of AI in the global financial services sector and, specifically, on its potential impacts on employees.

Corporations within the financial services sector have been among the earliest to adopt complex AI-focused organizational projects, and the use of artificial intelligence within this industry can be categorized into three levels:

Assisted Intelligence, which is widely available today, improves what people and organizations are already doing. It combines technology with human-driven policies to manage productivity and to minimize litigation, security, and other risks. Examples include monitoring employees’ emails, blocking access to certain websites, phone tapping, and GPS tracking.

Augmented Intelligence, which has emerged more recently, helps people and organizations do things they could not otherwise do. In this case, systems are taught – not programmed. Examples include chatbots, advisory services, and use cases in risk, regulation, compliance, and fraud prevention.

Autonomous Intelligence, expected to be deployed in the near future, centres on machines that act on their own. For instance, the Australian Westpac Banking Corporation is currently trialing artificial intelligence-powered video cameras that read the mood of staff and report back to managers.

In particular, significant corporate investments are being made by major financial institutions to advance AI for employee surveillance. Consequently, concerns are mounting, and several studies are focusing on the risks inherent in introducing AI into the workplace. From the employees’ perspective, “being seen” may connote being “under control” or giving up privacy. A glaring example of the risks of workplace surveillance is the AI from Humanyze, which is being trialed by some high street banks. Under this system, a tracker worn on a lanyard around the neck constantly gathers data to monitor employee location and body language and to assess stress levels through voice analysis of the tone of conversations. Though the system does not record the content of conversations, it tracks how much time an employee spends talking, who they talk to, the tone of their voice, their activity levels, and how often they interrupt others. It can also make predictions about how productive and happy employees are at work; it can say how well an employee sleeps at night, whether they interrupt colleagues too often, whether they take the lift instead of the stairs, or whether they aren’t “optimized” for morning meetings.
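
To make concrete what such a system actually computes, below is a minimal, purely illustrative Python sketch of how content-free interaction metadata might be aggregated into the metrics described above. The class and field names are hypothetical assumptions, not Humanyze’s actual design or API.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class InteractionEvent:
    """Metadata about one conversation segment; no content is recorded."""
    employee_id: str
    counterpart_id: str
    seconds_talking: float
    interruptions: int     # times the employee cut in on the counterpart
    tone_score: float      # e.g. 0.0 (calm) .. 1.0 (stressed), from voice analysis

def summarize(events):
    """Aggregate per-employee metrics of the kind described above."""
    totals = defaultdict(lambda: {"talk_s": 0.0, "interrupts": 0, "tones": []})
    for e in events:
        t = totals[e.employee_id]
        t["talk_s"] += e.seconds_talking
        t["interrupts"] += e.interruptions
        t["tones"].append(e.tone_score)
    return {
        emp: {
            "talk_minutes": round(t["talk_s"] / 60, 1),
            "interruptions": t["interrupts"],
            "avg_tone": round(sum(t["tones"]) / len(t["tones"]), 2),
        }
        for emp, t in totals.items()
    }

if __name__ == "__main__":
    day = [
        InteractionEvent("emp-001", "emp-007", 480.0, 3, 0.62),
        InteractionEvent("emp-001", "emp-012", 300.0, 1, 0.41),
    ]
    print(summarize(day))
```

Even this sketch shows why the privacy concern bites: no conversation content is stored, yet the aggregated metadata alone supports sensitive inferences about behaviour and mood.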

There is no doubt that using AI to monitor employees could represent a positive step toward preventing the corporate misconduct that typically occurs where controls are lacking. It is equally clear that the introduction of AI-driven surveillance systems may lead to a working environment resembling the dystopian society described in George Orwell’s novel Nineteen Eighty-Four.

In the absence of proper regulations and legal safeguards to protect workers’ rights, the adoption of adequate corporate social responsibility measures appears paramount. Given the skepticism with which employees may meet AI, financial institutions have to consider how trust can be built among all staff members. One solution is to increase transparency around the adoption of AI, so that employees know how it will be used, the functions it will perform, the decisions it will influence, and the opportunities it may bring.

A burning issue is the automatism that data gathered and analyzed by an AI-driven surveillance system may introduce into corporate decision-making. Where a disciplinary procedure or any other evaluation could be triggered or justified by such data, it will be crucial to establish processes in which the “human factor” prevails.
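
As a sketch of what keeping the “human factor” prevailing could mean in practice – an assumed process, not a prescribed standard – an AI flag might only ever open a case for human review, never trigger action by itself. All names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurveillanceFlag:
    """An alert produced by the monitoring system, e.g. a sustained stress score."""
    employee_id: str
    reason: str
    model_confidence: float

@dataclass
class ReviewCase:
    """A case opened for a human; the decision field is set only by a person."""
    flag: SurveillanceFlag
    reviewer: Optional[str] = None
    decision: Optional[str] = None

def open_review(flag: SurveillanceFlag) -> ReviewCase:
    # The AI's output never triggers disciplinary action directly;
    # it can only open a case for a named human to examine.
    return ReviewCase(flag=flag)

def record_decision(case: ReviewCase, reviewer: str, decision: str) -> ReviewCase:
    # The final, accountable decision is attributed to a human reviewer.
    case.reviewer = reviewer
    case.decision = decision
    return case
```

The design point is simple: the system’s data structures make it impossible for an evaluation to exist without a named human attached to it.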

The challenges of preparing for AI will be extremely complex for financial corporations around the world, so it is necessary for all financial institutions to develop strategic plans and appropriate responses ahead of implementation. It is worth noting Deloitte’s finding that only 17 percent of global executives are ready to manage people, robots, and AI working side by side – the lowest readiness level for a trend in five years.

One of the most significant challenges concerns the inherent ethical implications, as well as the potential for abuses to employees’ detriment: the vast amounts of sensitive and/or personally identifiable data that AI-driven programs may collect raise major concerns for employees’ personal privacy. Such programs may be seen as intrusive and therefore demand greater transparency around data acquisition, usage, sharing, and storage, with specific informed consent that can be given and withdrawn if necessary, with no repercussions to the individual.
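
One way to operationalize consent that can be given and withdrawn with no repercussions is an auditable, per-category consent record that collection pipelines must check before gathering anything. The sketch below is illustrative only, with hypothetical field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Tracks one employee's consent for one category of data collection."""
    employee_id: str
    data_category: str   # e.g. "location", "voice_tone", "email_metadata"
    granted: bool = False
    history: list = field(default_factory=list)  # (timestamp, action) audit trail

    def grant(self) -> None:
        self.granted = True
        self.history.append((datetime.now(timezone.utc), "granted"))

    def withdraw(self) -> None:
        # Withdrawal stops collection; it must never feed back into evaluations.
        self.granted = False
        self.history.append((datetime.now(timezone.utc), "withdrawn"))

def may_collect(record: ConsentRecord) -> bool:
    """A collection pipeline calls this before gathering any data point."""
    return record.granted
```

The audit trail matters as much as the flag itself: it gives employees and regulators a record of exactly when consent was granted or withdrawn.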

The EU’s General Data Protection Regulation (GDPR) will take effect in May 2018; however, it appears to leave organizations’ sign-up to binding codes of conduct non-obligatory, which may create loopholes, especially when employee data are transferred to third parties.

In any case, regulatory intervention would not be sufficient in itself. It is imperative that global financial institutions implement AI-driven projects responsibly. Corporations must be aware of the risks of trusting AI algorithms to take important decisions about their employees. The transparency of an AI algorithm – how it works – and its accountability – its ethics and compliance with the rule of law – should be clear. The ethical concern around transparency stems from the complexity of algorithms and of the data they use, which may leave that data inaccessible to the very people it describes. In some cases, even the engineers who create an algorithm may not understand its inner workings, as algorithms can evolve quickly; certain algorithms thus cannot be transparent by their very nature, a problem known as “black-boxing.”

This inscrutability in AI-driven processes challenges calls for transparency and is a major corporate social responsibility issue, raising serious questions that should be subject to thorough scrutiny. Another challenge is determining ownership of AI algorithms within global financial institutions. Employees will need guidance in working out who owns the AI – their line managers, HR, or AI administrators – as well as certainty about whether those responsible for the AI can be taken to court, and about who will pay if the AI’s consequences have harmed them.

The intention to use AI for employee surveillance in global financial services should not become a way of restricting citizens’ human rights, such as the right to a private life or the right to freedom of thought, conscience, and religion. To reassure and support their employees, therefore, financial services corporations have to rewrite their codes of conduct to reflect the use of AI: incorporating their core values into every AI product designed or deployed, and being transparent about the extent to which AI will be used – for all employees, directly or otherwise, and throughout the corporation’s departments and supply chains.

As PwC advocates, “Automation and Artificial Intelligence (AI) will affect every level of the business and its people, it’s too important an issue to leave to IT (or HR) alone.” The effort will be best served if the AI ethics, values, and behaviours adopted are shared across the entire company, not just among leaders. Such AI education matters in creating directional clarity, given AI’s complexity. Financial services corporations could also be required to review and publish these codes of conduct in their annual reports.

Already, 67% of CEOs think that AI and automation (including blockchain) will have a negative impact on stakeholder trust in their industry over the next five years. It therefore appears crucial for global financial institutions to adopt effective corporate social responsibility solutions, as espoused herein, to avoid reputational risks and other risks specific to the introduction of AI-based systems for employee surveillance. These may include: constructive dismissal claims, as monitoring may breach the duty of trust and confidence; discrimination claims, if employees feel they have been unfairly singled out for monitoring; human rights breaches, as surveillance may interfere with employees’ right to privacy; and breaches of data protection principles, which could lead to further fines in this sector.

12 thoughts on “A Case for The Responsible Use of Artificial Intelligence in Global Financial Services”

  1. Lola Ololade Durodola, a very interesting article indeed. However, I believe AI can be a force for good in the CSR dimension. First, using AI will promote accountability in corporate business operations, because there will be a substantial record of business information – a record that could be a vital tool in terms of access to corporate information for legal proceedings. The other side of the debate is transparency: how this information is utilised, who has access to it, and how valid it is for the business and for the social dynamics of our human rights. Finally, we will be faced with the issue of privacy under human rights law, whether at the international or the domestic level. So perhaps the issue is about finding the balance between transparency, regulation, and accountability, to promote CSR as a force for good? What is missing in business operations, whether in CSR or in legal proceedings in relation to accountability, is the availability of information to employees and litigants.

    1. Thank you for your comments, Emmanuel. I agree with you: AI can be a force for good, and in global financial corporations it is already proving useful in Assisted Intelligence and Augmented Intelligence. It is Autonomous Intelligence – where machines act completely on their own and make decisions that affect employees – that I believe is raising concerns. You are correct to say transparency will be crucial, as I explained in my post. Finding the balance will require global financial institutions to be responsible, and if some of the CSR solutions espoused in this post can be adopted, then I believe that balance can be struck.

  2. This is a very interesting article! I agree that responsibility is key to the meaningful use of AI. But I struggle to find any moral justification for using AI to monitor employees’ moods. Responsible HR management means you talk to people about their moods; you don’t track them. What is your stance?

    1. I am certainly glad that you find the paper interesting, Dorothea. Really kind of you to join the conversation on the blog from Twitter. I agree with you that the adoption of adequate corporate social responsibility measures by corporations within global financial services should be paramount. The current absence of proper regulations and legal safeguards to protect workers’ rights where artificial intelligence is deployed needs to be addressed, along with other important aspects, of which moral and ethical issues are right at the top of the agenda, as you correctly highlighted. There are not just moral but also legal and reputational risks where AI is used for employee surveillance. Debates such as this help raise awareness and, hopefully, prompt action; they are also vital for #AILiteracy, which can only help prepare us all for the future. For instance, only today in the United Kingdom there was a robust debate on #AIEthics and accountability at the All Party Parliamentary Group on AI, exploring the impact and implications of exactly the issues we are discussing here. The way we work and relate to one another in the workplace may increasingly be determined by AI, from recruitment processes to monitoring to corporate decision-making. I therefore advocated that financial institutions establish processes in which the ‘human factor’ prevails. These are only a few of the reasons why this topic is crucial and must remain at the forefront of conversations on artificial intelligence.

  3. The article provides a very comprehensive overview of the relationship between AI and corporations, and also raises important questions such as the accountability issue.

    1. Thank you for making a really valid comment, Jie Lu. The issues of accountability – here described as the ethics and rule of law of artificial intelligence, which includes the ability of financial institutions to explain clearly to employees how AI will be used, the functions it will perform, the decisions it will influence, and the opportunities it may bring – are indeed very important. This is why, in this paper, I advocated that all corporations in global financial services, in their preparation for #AI, remember to act responsibly, particularly as this will directly impact the lives of employees and, as a result, society at large. The inscrutability of AI-driven processes challenges calls for transparency and is a major corporate social responsibility (CSR) issue, which therefore raises serious questions that should be subject to thorough scrutiny.

    1. Thank you for your comments. I am grateful for your contribution and hope this paper helps to continue stimulating the conversation about the responsible use of artificial intelligence in global financial services, as well as about the impact #AI will have on employees in particular and the general populace at large.

  4. The article is well drafted, but I think AI in general is too complex for financial markets to manage without good strategic plans in place.

  5. Hi Lola, thank you for this fascinating article! I’m a grad student with backgrounds in philosophy and business analysis, so your point about the need for the “human factor” and transparency amid the frequently nebulous algorithms jumped out at me.

    I agree that, especially if an AI program would be making a judgement rather than simply collecting or monitoring data, a human touch would be vital. The desire for the human factor seems to stem from the fact that humans can make not only judgements, but prudential judgements. Prudence here, and perhaps even virtue in general, would be the differentiating quality which humans have and machines ostensibly lack.

    By virtue, I mean good habits which a person develops over time. One can pursue and excel in the natural virtues – prudence, justice, fortitude, and temperance – and societies at large seem to prize these moral habituations in their citizens. Empiricists like David Hume would argue that benefit to society is the reason or cause for moral action, while scholastics would say that humans should act virtuously because they more fully attain their nature by doing so. (Living one’s best life, to use the modern lingo.)

    Regardless of the philosophical underpinnings, societal norms do seem to condition (or intend to condition) people to act ethically. Machines and algorithms are wonderfully logical and sophisticated, and with machine learning they can even improve on their existing functions. However, they seem to lack in their nature the sort of prudential or ethical human factor per se, so it makes sense to desire that factor when executing judgements.

    It seems possible to program a wide variety of contingencies or scenarios, but do you think there will ever be a reliable means of conveying (through programming or learning) a genuine capacity for prudence or equity in AI akin to that achieved in humans through societal or natural conditioning? And if that might be possible, do you imagine that might be something that could ever become industry-standard?
