As Klaus Schwab noted in The Fourth Industrial Revolution: “We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before.”
The #AIFearFactor – the fear of AI technology gaining dominance over human beings – used to be a fixture of science fiction. Who can forget the confrontation scenes between HAL, the computer, and Dave, the human, in Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey?
However, what were once imaginary threats have now evolved into actual concerns, given recent advancements in the field of AI. The following words of Stephen Hawking are illustrative: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.”
Although complex and broad philosophical and socioeconomic implications arise from the development of AI-based systems, this article will focus on the use of AI in the global financial services sector and, specifically, on its potential impacts on employees.
Corporations in the financial services sector have been among the earliest to adopt complex AI-focused organizational projects, and the use of artificial intelligence within this industry can be categorized into three levels:
Assisted Intelligence, which is widely available today, improves what people and organizations are already doing. It combines technology with human-driven policies to manage productivity and minimize litigation, security, and other risks. Examples include monitoring employees’ emails, blocking access to certain websites, phone tapping, and GPS tracking.
Augmented Intelligence, which has emerged more recently, helps people and organizations do things they couldn’t otherwise do. In this case, systems are taught, not programmed. Examples include chatbots, advisory services, and use cases in risk, regulation, compliance, and fraud prevention.
Autonomous Intelligence, which is to be deployed in the near future, consists of machines that act on their own. For instance, the Australian Westpac Banking Corporation is currently trialing AI-powered video cameras that read the mood of staff and report back to managers.
In particular, major financial institutions are making significant investments to advance AI for employee surveillance. Consequently, concerns are mounting, and several studies are focusing on the inherent risks of introducing AI into the workplace. From the employees’ perspective, “being seen” may connote being “under control” or giving up privacy. A glaring example of the risks of workplace surveillance is the AI from Humanyze, which is being trialed by some high street banks. Under this system, employees wear a tracker on a lanyard around the neck that constantly gathers data to monitor their location and body language and to assess stress levels via voice analysis of the tone of their conversations. Though the tracker does not record the content of conversations, it measures how long the employee talks, who they talk to, the tone of their voice, their activity levels, and how often they interrupt others. It can also make predictions about how productive and happy they are at work; it can say how well an employee sleeps at night, whether they interrupt colleagues too often, whether they take the lift instead of the stairs, or whether they aren’t “optimized” for morning meetings.
There is no doubt that using AI to monitor employees could be a positive step toward preventing the corporate misconduct that typically occurs where controls are lacking. It is equally clear that introducing AI-driven surveillance systems may lead to a working environment similar to the dystopian society described in George Orwell’s novel Nineteen Eighty-Four.
In the absence of proper regulations and legal safeguards to protect workers’ rights, the adoption of adequate corporate social responsibility measures appears paramount. Given the skepticism AI may meet among employees, financial institutions have to consider how trust can be built amongst all staff members. One solution is to increase transparency around the adoption of AI, so that employees know how the incorporated AI will be used, the functions it will perform, the decisions it will influence, and the opportunities it may bring.
A burning issue is the automatism that data gathered and analyzed by an AI-driven surveillance system may introduce into corporate decision-making. Where a disciplinary procedure or any other evaluation would be activated or justified through such data, it will be crucial to establish processes in which the “human factor” prevails.
The challenges of preparing for AI will be extremely complex for financial corporations around the world. As a result, all financial institutions need to develop strategic plans and appropriate responses ahead of implementation. It is worth noting that Deloitte stressed that only 17 percent of global executives are ready to manage people, robots, and AI working side by side – the lowest readiness level for a trend in five years.
One of the most relevant challenges relates to the inherent ethical implications, as well as the potential abuses that can be carried out to the detriment of employees: the vast amounts of sensitive and/or personally identifiable data that AI-driven programs may collect could create major concerns for employees’ personal privacy. Such programs may be seen as intrusive, and they demand greater transparency on data acquisition, usage, sharing, and storage, with specific informed consent that can be given and withdrawn if necessary, with no repercussions for the individual.
The EU’s General Data Protection Regulation (GDPR) will take effect in May 2018; however, signing up to binding codes of conduct already appears to remain non-obligatory for organizations, which may create loopholes, especially when employee data is transferred to third parties.
In any case, regulatory intervention would not by itself be sufficient. It is imperative that global financial institutions implement AI-driven projects in a responsible way. Corporations must be aware of the risks of trusting AI algorithms to take important decisions concerning their employees. Both the transparency of an AI algorithm – how it works – and its accountability – its ethics and conformity with the rule of law – should be clear. The ethical concern around transparency stems from the complexity of algorithms and the data they use, which may leave that data inaccessible to the very people it describes. In some cases, even the engineers who create an algorithm may not know its inner workings, as algorithms can evolve quickly; certain algorithms therefore cannot be transparent by their very nature due to “black-boxing.”
This inscrutability in AI-driven processes undermines calls for transparency and is a major corporate social responsibility issue, raising serious questions that should be subject to thorough scrutiny. Another challenge is determining who owns an AI algorithm within a global financial institution. Employees will need guidance in working out whether AI is owned by their line managers, HR, or AI administrators, as well as certainty about whether AI can be taken to court and who will pay if AI has led to harmful impacts on them.
The intention to use AI for employee surveillance in global financial services should not become a way of restricting citizens’ human rights, such as the right to a private life or the right to freedom of thought, conscience, and religion. To reassure and support their employees, therefore, financial services corporations have to rewrite their Codes of Conduct to reflect the use of AI: by incorporating their core values into every AI product designed or deployed, and by being transparent about the extent to which AI will be used for all employees, directly or otherwise, throughout the corporation’s departments and supply chains.
As PwC advocates, “Automation and Artificial Intelligence (AI) will affect every level of the business and its people; it’s too important an issue to leave to IT (or HR) alone.” The transition will be best served if the AI ethics, values, and behaviours adopted are shared across the entire company and not just among leaders. This AI education matters in creating directional clarity, given AI’s complexity. Financial services corporations could also commit to reviewing and publishing these Codes of Conduct in their annual reports.
Already, 67% of CEOs think that AI and automation (including blockchain) will have a negative impact on stakeholder trust in their industry over the next five years. It therefore appears crucial for global financial institutions to adopt effective corporate social responsibility solutions, as espoused herein, to avoid reputational risks and the other specific risks related to introducing AI-based systems for employee surveillance. These may include: constructive dismissal claims, as monitoring may breach the duty of trust and confidence; discrimination claims, if employees feel they have been unfairly singled out for monitoring; human rights breaches, as surveillance may interfere with employees’ right to privacy; and breaches of data protection principles that could lead to more fines in this sector.