Ghosts in the Machine

By Baker McKenzie

On March 15, 2016, AlphaGo, an artificially intelligent (AI) software program, defeated Lee Sedol, a world champion of the ancient board game Go. The game is immensely complex: the number of possible games exceeds the number of atoms in the universe by hundreds of orders of magnitude. AlphaGo won the series four games to one, a victory that showcased significant advances in AI’s ability to recognize and learn obscure patterns and to adapt its strategies.

Just two weeks after AlphaGo’s impressive victory, a new chatbot called Tay exposed a darker side of AI. Designed to engage in friendly online conversation with people and assist them with Microsoft services, Tay had a distinguishing feature: “she” learned from her interactions. When Tay was publicly released, a coordinated barrage of abuse and incessant trolling by Twitter users taught Tay the wrong lessons. The program began spewing racist, sexist, and xenophobic comments, revealing the potential for flaws in the design and programming of AI, as well as the uneasy interaction between AI and people.

Both events expose a tension underlying the introduction of AI. Programs like AlphaGo demonstrate how AI can analyze vast amounts of information, recognize sophisticated patterns, and empower humans with new analytical capabilities. Conversely, Tay’s malfunction serves as a reminder that technology is far from infallible, particularly when it interacts with humans.

A global survey of 424 senior executives from financial institutions and fintech companies, and interviews with leading experts in the field, also show that this tension is apparent as AI is pioneered across financial markets. Many see AI as a tool that will help improve financial institutions’ risk management, for example through more in-depth assessment of risk in portfolios and more comprehensive and informed credit risk assessment. In these applications, AI promises not reckless speed or loss of control, but rather an unprecedented depth and breadth of insight, and the ability to act on information and learn from its actions.

However, many experts acknowledge a degree of risk surrounding the use of AI, stemming partly from uncertainty—it is, after all, still at an experimental stage in many applications, including trading, portfolio management, and credit assessment. As a result, the risk of malfunctioning algorithms and concerns surrounding the security, privacy, and quality of data have led to calls for regulation.

There is even greater unease about the regulatory response to AI. Participants in the study express a distinct lack of confidence that regulators have the knowledge and skills to stay abreast of new financial technologies. Indeed, survey participants suspect that regulators are only just beginning to understand the potential implications of AI for financial markets and companies. For now, much of their attention is focused on fighting the last war: identifying compliance breaches by humans directly abusing technology. Their attention is beginning to turn to the integrity of algorithms, and any rule-writing on machine learning in the next few years will focus there.

It may also not be surprising, given how nascent the use of AI is in the sector, that a large number of financial institutions in the survey are not confident that their organizations understand all AI-related legal risks. For example, data and privacy risks will increase by virtue of the much larger volumes of data that AI-driven models will collect and analyze. Intellectual property disputes are also likely to increase as the ownership of algorithms causes friction between companies and regulators. Contract and litigation risk might also emerge in the likely event of AI malfunctions and programming errors.

AI and machine learning will undoubtedly alter the headcount and the nature of skills required in the industry. A significant minority of survey respondents fear the effects on the workforce will be negative within the next few years. But wholesale displacement of humans is for the longer term—nearly 7 in 10 believe AI will bring complete or substantial change to their own jobs over the next 15 years. Even in trading, where automation is already widespread, human roles will remain critical in areas such as algorithm validation and monitoring, as well as compliance. At this point, few believe machine-learning models can or should drive financial-market operations completely independently of human control.

How disruptive will AI and machine learning be?

Over the next three years, the most dramatic changes will be felt in the areas of trading (according to 64% of respondents), financial analysis (60%), and IT (60%). Many also expect machine learning to materially affect risk assessment (59%), credit assessment (57%), and investment portfolio management (52%).

Risk assessment and financial research are the areas where companies are most likely to experiment with machine learning applications in the next three years.

Andrew Lo, Director of the Laboratory for Financial Engineering at the MIT Sloan School of Management and founder of a quantitative investment management firm, believes the effect will be wide-ranging: “I suspect that it’s going to transform all aspects of the financial industry because there are so many parts of it that can be automated using these kinds of algorithms and access to large pools of data.”

Peter Hafez believes machine learning will, in addition to trading and research, greatly benefit consumer credit scoring as well as the compliance function in different types of financial institutions. He notes, for example, that compliance managers are beginning to use unstructured content such as news feeds to alert them about suspicious trading.

Machine learning techniques have already found application in retail investment advisory. “Robo-advisers”—investment management websites providing automated advice to investors—are an area of AI coming under active regulatory scrutiny, according to John Price, a commissioner of the Australian Securities and Investments Commission. In its Financial Advice Market Review, the UK’s Financial Conduct Authority has even gone so far as to recommend robo-advice as a cost-effective way for financial institutions to “streamline advice” to their customers.

Disruption on the horizon

Most of the experts agree that the technology will have many positive applications.

Paul Ebner, a senior portfolio manager within BlackRock’s Scientific Active Equity unit, believes trading will benefit from the depth of analysis machine learning tools enable across a wide breadth of companies. “It’s being able to go a couple of steps deeper than you could just by using, say, data in a spreadsheet. Speed matters but it’s a different kind of speed than high-frequency trading. For us it’s being able to process a lot of data very quickly and coming up with the right answer that the markets will eventually discover.”

BlackRock’s Scientific Active Equity unit—a roughly 100-member team which includes data scientists and machine learning specialists as well as more traditional financial-industry ‘quants’—is putting machine learning techniques to work in different ways to forecast share price movements.

“We’re applying tools to analyze data about companies and using that data to forecast the fundamentals, and then ultimately to forecast their stock returns and construct portfolios around that,” Ebner said.

Survey respondents are clear about another benefit machine learning will bring: 64 percent believe its use will have a positive effect on competitiveness in financial markets. At first glance, this finding seems counter-intuitive. The high costs of the best talent and the most advanced technologies should make AI accessible only to those with the deepest pockets. This is true to an extent—only a firm like Bridgewater Associates could afford to hire the engineer who led the development of IBM’s celebrated Watson system.

However, smaller companies, and even individuals, have proven that they can be at the forefront of innovation, partly because open-source software empowers smaller organizations to experiment with advanced algorithms and code. A recent example came in March 2016, when two retired hedge fund “quants” with no prior experience of working with AI designed an algorithm that diagnoses heart disease from MRI images, a transformative application built with software downloaded from the open-source site GitHub.

Only time will tell what this means for competition across financial organizations. However, while industry giants, such as BlackRock, will be doing a lot of the heavy lifting of research and pilot work, AI applications and services could become widely available for a range of small and medium-sized organizations.

AI insight

What is clear is that advances in AI and data analytics are leading to a great expansion in the quantity and type of data being used to inform decision making. Where investment decisions were previously based on traditional metrics such as market prices, interest rates, or earnings figures, AI can now factor events and sentiment into the asset-price prediction process. For example, the crunching of unstructured data is helping advance sentiment analysis.

“It isn’t just about sentiment in the traditional sense, such as guidance, but also about the facts that can be extracted from unstructured content and that can be delivered in a machine-readable format,” Hafez said.

Machine-trading models typically analyze earnings statements and company reports. In time, they will capture much more through the ability to analyze news about product releases and recalls, regulatory approvals, acquisitions, and other market events. Hafez also expects machine learning models to be fed insights gleaned from images, video footage, and live streaming. There are companies, he says, that use satellite images to track the number of cars in the parking lots of large retailers’ stores in order to anticipate the likely direction of their sales revenue and earnings. This suggests that AI could change the parameters by which financial institutions make investment decisions. Traditional metrics will decline in importance as the subject of analysis as financial institutions gather huge amounts of unstructured data, which can only be made intelligible through AI and machine learning.

Analytics tools are getting better at understanding context—another critical differentiator of machine learning. Paul Ebner explains that the machine learning tools his team uses are now able to use context to understand the nuances of word use.

“For example, the word ‘garbage’ in an SEC filing probably refers to waste management, but the same word on an investor blog is probably a critical term for the stock or the firm’s management,” he said. “We’re able to build dictionaries that learn and evolve based on the environments we’re pulling the language from.”
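
To make the idea concrete, the sketch below shows how the same word can be scored differently depending on the environment it comes from. It is a minimal Python illustration: the words, sources, and scores are hypothetical, not the dictionaries Ebner’s team actually uses.

```python
# Illustrative sketch only: a context-aware sentiment lexicon in miniature.
# The (word, source) scores below are invented for this example; a real
# system would learn and update them from labeled text in each environment.

SENTIMENT_BY_SOURCE = {
    ("garbage", "sec_filing"): 0.0,      # sector term (waste management): neutral
    ("garbage", "investor_blog"): -0.8,  # criticism of the stock or management
    ("recall", "news_wire"): -0.6,       # product recall: a negative signal
}

def score_document(tokens, source, lexicon=SENTIMENT_BY_SOURCE):
    """Average the sentiment of the known (word, source) pairs in a document."""
    scores = [lexicon[(t, source)] for t in tokens if (t, source) in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

print(score_document(["garbage", "collection", "revenue"], "sec_filing"))   # 0.0
print(score_document(["this", "stock", "is", "garbage"], "investor_blog"))  # -0.8
```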

Similar technology is being introduced to assist retail customers with their complaints and queries. The Royal Bank of Scotland (RBS) recently announced that it will introduce an AI program called Luvo to help with customer complaints. The program has been designed with an artificial personality that can mimic human characteristics such as friendliness, empathy, and reason. According to its programmers, Luvo can learn from its mistakes and gauge a person’s mood. In this light, AI is being cast as more friend than foe.

From Moore’s Law to Murphy’s Law

What could go wrong? Plenty, according to Andrew Lo. He believes the markets may be in for more flash crashes, for example, or for other negative developments about which neither the industry nor regulators currently have a clear understanding.

“The nature of these strategies makes them very difficult to understand,” he said. “That means that the interactions are going to be hard to predict, in the same way that nobody predicted the flash crash of May 6, 2010—even today we still don’t really understand what happened.”

Lo also points to the demise of Knight Capital, a major US trading firm whose software glitch in 2012 cost it $440 million in trading losses and sent it to the brink of bankruptcy: “I don’t think we’ve nearly fixed those kinds of issues, because ultimately you’re dealing with a mismatch between human ability and technology. It’s Moore’s Law meets Murphy’s Law.”

Technology will not be able to remove the risks inherent in some financial activities, such as making bets on future events. These are likely to persist, regardless of whether humans or algorithms do the work.

“Financial institutions have been fined billions of dollars because of illegality and compliance breaches by traders,” said Arun Srivastava, a partner at law firm Baker McKenzie. “A logical response by banks is to automate as much decision-making as possible, hence the number of banks enthusiastically embracing AI and automation. But while conduct risk may be reduced, the unknown risks inherent in aspects of AI have not been eliminated.”

All in the algorithms

Regulators are uncertain what the risks of machine learning are, but are focusing on algorithms as an area where problems could occur. Victoria Pinnington, senior vice president of market regulation of the Investment Industry Regulatory Organization of Canada, says her greatest concern is around the crafting of algorithms, in both machine learning and broader systematic trading contexts. “If there is a problem with the algorithm,” she said, “the impact on the markets could be considerable.”

Algorithms can malfunction in a variety of ways. One of the most common errors is known as “overfitting,” which usually occurs when a model is overly complex relative to the data it is trained on. In such cases, the algorithm fails to distinguish useful correlations (the signal) from the mass of irrelevant data (the noise), and instead identifies “phantom” factors or spurious correlations. Imagine trying to record a classical music concert with a sensitive microphone. Overfitting would equate to picking up the surrounding background noise rather than the sound of the orchestra.
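
The distinction is easy to demonstrate. In the minimal sketch below, which uses synthetic data and arbitrary model choices purely for illustration, an over-complex polynomial fits the training noise almost perfectly yet fails on data it has never seen:

```python
# Sketch: an over-complex model memorizes noise and fails out of sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.3, size=60)  # signal plus noise

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (3, 25):  # a reasonable model vs. an over-complex one
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree:2d}: train R^2 = {model.score(X_train, y_train):.2f}, "
          f"test R^2 = {model.score(X_test, y_test):.2f}")

# The degree-25 fit scores near-perfectly on the training data but collapses
# on the held-out set: it has recorded the "background noise", not the orchestra.
```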

“People could mismanage machine learning and not do the validation,” Babak Hodjat warned. “If you take a machine learning algorithm and do not sufficiently validate it, you might have something that’s overfitting, that might look pretty good right now but might fail miserably tomorrow. More scrutiny is required there.”
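
One standard safeguard is exactly the validation Hodjat describes: evaluating a model only on data that arrives after its training window, so that good historical performance cannot come from memorizing the past. The sketch below, with placeholder features and returns, shows the idea using scikit-learn’s TimeSeriesSplit:

```python
# Sketch: walk-forward validation for a time-ordered trading signal.
# The features, returns, and model are placeholders for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                             # stand-in features
y = X @ rng.normal(size=5) * 0.1 + rng.normal(size=500)  # noisy "returns"

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])       # train on the past
    scores.append(model.score(X[test_idx], y[test_idx]))  # test on the future

print("out-of-sample R^2 per fold:", np.round(scores, 3))
# Consistently poor forward performance flags a model that "looks pretty good
# right now" but is likely to fail when deployed.
```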

The risk of programming error increases with the pressure to launch new programs. As noted earlier, differentiation is crucial to the success of an AI trader: being first with a successful application gives an organization a unique and profitable opportunity—an uncontested marketplace. Consequently, there is a risk that organizations will rush their strategies to market. Nate Soares, a former Google engineer and current research fellow at the Machine Intelligence Research Institute, recently estimated in an interview with The Financial Times that “there is only a 5 percent chance of programming sufficient safeguards into advanced AI.”

Data, liability and legal risk

There is a great deal of uncertainty among survey respondents as to whether organizations understand the legal risks associated with new financial technologies: 47 percent are not confident that they do.

“The reason is that this technology is at a nascent stage, and it is evolving,” Price said. “The fact that people are cautious and a little unsure about what some of the risks might be, reflects that nascent stage of where the technology’s at.”

One risk is corporate liability. Flawed investment decisions could be made as a result of poor data, erroneous analysis of company performance, or malfunctioning algorithms, and could cause investors significant losses. Liability could also arise should machine learning models make flawed decisions about credit risk: lenders could suffer financial losses, or borrowers’ reputations could be damaged. There is also a lack of clarity about which parties would bear liability should such situations occur—the financial institutions themselves, the writers of the algorithms, the exchange platforms, data providers, or other parties.

The intelligent, data-crunching properties of machine learning may also take data protection and privacy risk to another level. Personal investor data or sensitive company data falling into undesired hands, whether by accident (to hackers) or design (to marketers and governments), is by now an all too familiar risk of the Internet age. This risk will grow simply by virtue of the much larger volumes of data that machine learning models will gather.

Organizations will increasingly need to understand how data privacy is entwined with laws on consumer protection, as well as related pieces of legislation such as the EU Cookie Directive.

“Data, and the various rules and processes which both enable and regulate access to and use of that data, stand at the heart of disruptive fintech businesses,” Adrian Lawrence, partner at Baker McKenzie said. “Even the most advanced and intelligent algorithms and models are useless without efficient, secure, and legal access to detailed, accurate, and up-to-date data sets.”

Beyond legal risks, the survey respondents clearly lack confidence that the impact of AI is fully understood by their organizations. Nearly half—49 percent—of respondents are not confident that their organization understands the other material risks associated with AI; only 32 percent are confident. Given the early-stage development of applications, this finding indicates that AI will present organizations with a set of risks, most of which are still to be defined.

Over-reliance on AI

The biggest risks, according to some experts, lie less in machine learning techniques themselves than in humans’ misuse of the technology, or misplaced confidence in its ability to achieve goals by itself without human guidance.

“If we have blind faith in technology,” warns Babak Hodjat, “things will go wrong. If the success of AI means more use of technology in an uncontrolled and non-principled way, then we’re risking more.”

Saeed Amen worries that the industry will use machine learning as a sort of black box. Should this be the case, he says, “They’ll end up creating a trading model that they don’t really understand the ins and outs of. That is a dangerous scenario but it’s the same with any systematic model. You really need to understand what’s going on in the trading strategy.”

Just like humans, programs, computers, and machines have the capacity to be stupid. The danger is that they can act at far greater scale and speed. Examples such as the Knight Capital disaster serve to illustrate the importance of maintaining human oversight, comprehension, and control of emerging AI systems.

Herein lies the contradiction at the core of the technology. When confronted with inherently risky tasks—such as making investment decisions and bets on unknown future events—over-reliance on AI can magnify systemic risks. Yet the same technology can improve the depth and quality of financial institutions’ due diligence of companies. Through their powerful data-crunching capabilities, such applications can also help identify fraud, money laundering, bribery, and other corrupt practices that more conventional methods would struggle to uncover.

The survey respondents appear hopeful that machine learning will help minimize risks in some cases. Nearly 6 in 10 (58%) believe it will “greatly enhance” their risk-assessment processes. Machine learning techniques can, for example, be used to alert fund managers to emerging weaknesses in the companies they invest in.
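
As a hedged illustration of how such alerting might work, the sketch below applies an off-the-shelf anomaly detector to invented company fundamentals and flags the names that drift away from their peers; the metrics, data, and threshold are all hypothetical.

```python
# Sketch: flagging unusual company fundamentals with an anomaly detector.
# The data are synthetic; a real system would use audited, curated figures.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Rows are companies; columns stand in for metrics such as leverage,
# operating margin, and revenue growth.
fundamentals = rng.normal(size=(200, 3))
fundamentals[:3] += 4.0  # three companies drift far from the pack

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(fundamentals)  # -1 marks an outlier

flagged = np.flatnonzero(labels == -1)
print("companies flagged for review:", flagged)
# A fund manager would investigate the flagged names, not trade on them blindly.
```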

“AI should also reduce risk in some areas if deployed properly,” Astrid Raetze, partner at Baker McKenzie said. “Market misconduct and anti-money laundering/Know Your Customer processes are areas where regulators could harness AI to improve regulatory oversight and scrutiny.”

Machine learning-based analytics can also identify patterns in client activity that may point to some form of malfeasance. This helps explain why respondents point to risk assessment, ahead of other areas of operation, as the place where they expect machine learning to be implemented over the next three years.

AI vs. IP

Survey respondents appear to agree that algorithms need more regulation. More than half of them said affording regulators access to examine trading algorithms would help keep the financial system safe. This is a telling result that comes at a time when a number of financial regulators are planning to make the source-code of algorithms open to examination by authorities.

For example, the US Commodity Futures Trading Commission (CFTC) is trying to push forward Regulation Automated Trading (known as Reg AT). One of the most controversial features of this proposal is giving the CFTC and the US Department of Justice access to a financial firm’s trading algorithms. Reg AT’s most vocal opponent, Chris Giancarlo, a Republican commissioner of the CFTC, has argued that giving regulators this degree of control will impose extra compliance costs on smaller market participants and hamper innovation in the futures market.

Giancarlo is primarily concerned that this regulation would mark an unprecedented invasion of private intellectual property (IP) rights by public authorities.

“I am unaware of any other industry where the federal government has such easy access to a firm’s intellectual property and future business strategies,” he said in a recent statement. “Other than possibly in the area of national defense and security …”

Similar oversight mechanisms appear in Europe’s MiFID market regulations and could lead to similar conflicts over the intellectual property of algorithms.

Before source code repositories are handed over, regulatory agencies also need to demonstrate competency in data and cybersecurity. In March 2014, a group of Chinese hackers breached the US Office of Personnel Management, ultimately stealing the records of 21 million US federal employees, including senior members of the CFTC.

Weaknesses like these would need to be remedied before authorities gained access to algorithms. The big question is whether regulatory authorities are in a position to keep up with rapid changes in technology.

Are regulators up to speed?

When asked if financial regulators are “keeping pace with advances in technology,” an overwhelming 76 percent of survey respondents said no. Nearly 7 in 10 expressed little or no confidence that “regulators have sufficient understanding of financial technologies and their impact on the financial services sector today.” One respondent commented that “regulators are woefully under-skilled in AI and need to boost their understanding or risk being marginalized.”

Regulators are certainly at a disadvantage vis-à-vis large financial institutions in the competition for data scientists and other professionals with knowledge of machine learning. This makes it difficult for them to remain completely up to date on technology developments in this area. Regulators are beginning to explore the role and implications of machine learning in financial markets.

As seen with Reg AT and MiFID II, much of this exploration is taking place in the context of systematic trading rather than machine learning specifically. Nevertheless, led by the Securities and Exchange Commission and the Financial Industry Regulatory Authority in the United States, the UK’s Bank of England, and the Monetary Authority of Singapore, regulators are starting to learn about the role of AI and machine learning in financial markets.

At ASIC, John Price heads an innovation hub, set up in early 2015, that is examining different areas of machine learning application in financial markets and is already providing advice to organizations using such techniques. Victoria Pinnington is spearheading a similar initiative within Canada’s IIROC. Both officials say their organizations are exchanging the results of their research with other regulators. Such interaction is in the spirit of recommendations made to regulators by the survey group. When asked what single step regulators should take to manage the risks of new technologies, the largest group of respondents (32%) suggested collaboration between regulators and fintech companies. The second-largest group (25%) suggested coordinating regulatory efforts across markets in a systematic, global fashion.

Most industry executives in the survey believe that some form of new regulation will be required to deal with AI and machine learning: 60 percent of respondents say that current regulation is insufficient and needs to be improved.

But regulators do not anticipate that rules specific to AI will be written anytime soon. Those that emerge will focus on algorithms themselves or on the broader field of systematic trading. In Australia, Price says, any rule-writing is likely to be principles-based rather than prescriptive. “Any new rules will not say ‘do X, Y and Z’. Instead they will stipulate that firms must, for example, have adequate risk management procedures in place.”

Pilot and autopilot

Over time, machine learning will almost certainly push some people—traders, analysts, and other industry employees—out of their existing roles. Within 15 years, 68 percent of survey respondents expect to see complete or substantial change to their own jobs. Four in 10 respondents fear it will have a negative effect on the structure of the workforce sooner—within three years.

In most occupations, however, including trading, humans are unlikely to fade from the scene anytime soon. According to Hodjat, the individual trader’s role is going to diminish somewhat, but not entirely. He points out that certain types of trading expertise cannot be displaced, and that talented professionals will be needed, for example, to set up and validate the algorithms. This may frustrate the predictions of one Microsoft executive, who claimed in 2014 that “robots will be running the City within 10 years, rendering investment bankers, analysts and even quants redundant.”

“You still need to use a modicum of market understanding and intuition when you use machine learning,” Amen said. “It’s not the case that you just put in a system and leave it for 10 years; you constantly want to be coming up with new ideas which are correlated as the market changes, and that still requires humans at the end of the day.”

Ebner likens the portfolio manager’s role in the age of machine learning to that of an airline pilot: “There’s structure around us and we may be on autopilot most of the way, but we enter the details into the navigation system and we decide when to engage the autopilot and when to fly manually. We’re in control of the plane.”

What’s next?

In late 2015, a group of the world’s leading entrepreneurs, including Peter Thiel and Elon Musk, announced that they would put $1 billion into creating an organization called OpenAI, whose stated purpose is to help protect humanity from the dangers of artificial intelligence. In an open letter, the founding members summarized the tension at the core of this technology, writing, “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

A similar sentiment underlies the feelings surrounding AI’s application to financial markets. All recognize that there is much to learn about how transformative machine learning will be. There is also much to learn about its potential downsides.

Most of our survey respondents are cautiously optimistic about AI’s future role in financial markets. The optimism derives from recognition of the great opportunity that awaits successful applications. However, as with all technology, how it is wielded will ultimately determine the risks and rewards.

About the author: Founded in 1949, Baker McKenzie advises many of the world’s most dynamic and successful business organizations through our 12,000 staff in 77 offices in 47 countries. The Firm is known for its global perspective, deep understanding of the local language and culture of business, uncompromising commitment to excellence, and world-class fluency in its client service. Global revenues for the fiscal year ended June 30, 2015, were US$2.43 billion.
