Welcome to the fourth instalment in our interview series. Given that AI and machine learning are pervading many industries and professions, we thought we'd reach out to a few experts in various industries to find out more about their experience to date. Finance data has certainly been subject to much interrogation over the years, and is no stranger to machine learning. To bring us up to speed on the latest, we spoke to Michael Kollo, PhD - 'The Curious Quant'.
VL: Tell us a little bit about yourself.
MK: I'm a 'card-carrying' member of the quantitative community. I did my PhD in Finance at the London School of Economics and joined the investment management community, going through various roles and products over the course of my career.
VL: Given that rich finance background, how do you see the industry changing in the coming years?
MK: There are tectonic shifts reshaping the financial services industry, and they have been under way for many years. One major break came after the financial crisis, when, globally, there was an incredible regulatory response, both in the way financial services firms were regulated and in the amount of risk they took, but also in how financial markets functioned as quantitative easing took hold. These are long-term tectonic shifts. I'm afraid nothing happens quickly in financial services markets, but there are certainly evident trends toward a more transaction-based, lower-cost, and more scalable industry.
VL: To what extent then has AI started permeating your industry? What are some of the more exciting AI applications you've seen?
MK: Financial services has a long-standing love-hate relationship with data and forecasting models. It was probably one of the earlier adopters of standardised datasets, coming from equity markets and elsewhere, and has a multi-decade history of building risk, return, and various other models using statistical techniques. Over the years, one part of that industry, the quantitative finance part, has grown especially technical in its use of data and modelling techniques, and so its practitioners are probably the 'first responders on the scene' of the AI opportunity. Certainly within hedge funds, the more speculative parts of asset management, and market-making desks, reinforcement learning and some random forest techniques have been examined. Finance, however, is an extremely difficult domain in which to put these models into practice, for two primary reasons: the non-stationarity of relationships, and the strong forecasting (causation, not correlation) requirement.
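The non-stationarity Michael mentions can be illustrated with a small sketch. The data below is entirely synthetic (none of it comes from the interview): a 'signal' whose relationship with subsequent 'returns' flips sign halfway through the sample, so a model fitted on the first half learns the wrong sign for the second half.

```python
import numpy as np

# Synthetic illustration of non-stationarity: the signal/return
# relationship flips sign halfway through the sample.
rng = np.random.default_rng(0)
n = 1000
signal = rng.normal(size=n)
noise = rng.normal(scale=2.0, size=n)

# First half: positive relationship; second half: negative.
beta = np.where(np.arange(n) < n // 2, 0.5, -0.5)
returns = beta * signal + noise

# Fit on the first half, then check the second half.
first, second = slice(0, n // 2), slice(n // 2, n)
beta_hat = np.polyfit(signal[first], returns[first], 1)[0]
corr_second = np.corrcoef(signal[second], returns[second])[0, 1]
print(f"beta estimated in-sample:   {beta_hat:+.2f}")
print(f"correlation out-of-sample:  {corr_second:+.2f}")
```

The in-sample estimate is confidently positive while the out-of-sample correlation is negative - a toy version of why historically fitted relationships in finance can fail outright rather than merely degrade.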
VL: How then do you think professionals and companies in finance could be better prepared for technological change?
MK: Like any other industry, finance has different 'species' of professionals, from storytellers, to sales, to technical people. Unusually, I think it does have a very strong acceptance of the importance of data and analytics, though it still clings strongly to legacy narratives about how markets work, what matters, and how (un)certain financial markets are.
VL: Are there any particular companies you're watching in this space?
MK: I think there is enormous work going on under the hood of some of the big, well-known banks and financial firms, though you wouldn't know it. The companies that service banks have become increasingly 'fintech', although these companies were originally meant to be challengers, not enablers. It turned out that the major banks have a great deal of expertise and regulatory oversight, which has meant that technology-based challengers to finance companies have found it hard to succeed. That said, you can check out interesting companies like Quantopian, which have created quantitative data environments for anyone to create signals and investment products, though there is still very limited market penetration today.
VL: Have you worked with any data scientists or other AI practitioners? What has been your experience?
MK: I have. My experience has been that the underlying 'source of truth' for these professionals is the nature of data, and the patterns contained within it. There is perhaps an overly strong focus on a 'bottom-up' understanding of a process, i.e. the belief that with sufficient data and diversity of observation, the process can be well described through the relationships in the data. This is somewhat at odds with most of the profession, who see that data in finance is readily available but primarily time-series, and therefore limited in diversity. In other words, there is only one history. This means that data scientists can often run into domain-specific problems when plying their trade. Having said that, it is still a very interesting area in which to experiment with markets and trading.
VL: To change tack slightly, I know the ethics of data and AI is very close to your heart. What do you make of the current landscape regarding ethical AI?
MK: It is a field of great personal importance to me. I feel that there is a huge rush globally to build new models and structures to capture certain statistical relationships, and specifically to profile individuals for products, services, etc. In that sense, it is very exciting. However, I find that second-order effects, such as ethics and bias in models, are treated as secondary, especially when weighed against economic outcomes. In other words, if I can create a forecasting model that is economically effective but uses postcode to characterise people for loans, most organisations are unlikely to pay for the resources needed to monitor and consider bias in these algorithms. I hope this changes, but I feel some regulatory pressure will be needed.
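The kind of monitoring Michael argues organisations should fund can be surprisingly cheap to start. Below is a minimal sketch (all data synthetic, group labels and rates invented for illustration) of comparing a model's approval rates across a proxy attribute such as a postcode group - the 'demographic parity gap' in fairness terminology.

```python
import numpy as np

# Synthetic illustration of a basic bias check: compare approval rates
# across a proxy attribute (here, a binary postcode group).
rng = np.random.default_rng(1)
n = 10_000
postcode_group = rng.integers(0, 2, size=n)  # 0 or 1

# Hypothetical model that approves group 0 more often than group 1.
approved = rng.random(n) < np.where(postcode_group == 0, 0.70, 0.55)

rate_0 = approved[postcode_group == 0].mean()
rate_1 = approved[postcode_group == 1].mean()
parity_gap = rate_0 - rate_1
print(f"approval rate, group 0:  {rate_0:.2f}")
print(f"approval rate, group 1:  {rate_1:.2f}")
print(f"demographic parity gap:  {parity_gap:.2f}")
```

A gap this size would warrant investigation; the hard organisational questions - which attributes to monitor, what gap is acceptable, who acts on the result - are exactly the costs the interview suggests firms avoid paying.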
VL: Another thing that has risen to prominence recently is XAI, or explainable AI. Thinking from a quant perspective, how much do you care about how a model works, or is it only important that it works sufficiently?
MK: I think explanation is particularly important for highly uncertain problems, where the marginal gain from using a statistical forecasting model is relatively small, leaving a large unexplainable portion. In these cases especially, the explainability of the highly uncertain decisions being made is important, because the models may well be incorrect and fail, requiring a human to take ownership of an incorrect decision. Models without explainability are readily embraced when they work, but just as readily discarded as soon as they don't (even for random reasons). In that sense, the problem space for most forecasting problems in finance is highly uncertain. Equally, I am increasingly thinking that one of the keys to understanding bias in algorithms lies in the attribution of decisions for individuals. Sometimes the model can produce the same forecast but for different reasons - some of those reasons may lead you to conclude there is bias, while others won't. So attribution of the 'why' is critical, even for the same outcomes.
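The 'same forecast, different reasons' point is easiest to see with a linear model, where a score decomposes exactly into per-feature contributions (weight times feature value). The sketch below is purely illustrative - the feature names, weights, and applicants are invented - but it shows two applicants receiving identical scores for very different reasons, one of them materially driven by postcode.

```python
import numpy as np

# Hypothetical linear credit-score model; names and weights invented.
features = ["income", "debt_ratio", "postcode_risk"]
weights = np.array([0.6, -0.3, -0.4])

applicant_a = np.array([0.5, 0.1, 0.0])  # moderate income, no postcode effect
applicant_b = np.array([1.0, 0.3, 0.6])  # high income dragged down by postcode

# For a linear model, each score decomposes exactly into
# per-feature contributions: weight * feature value.
contrib_a = weights * applicant_a
contrib_b = weights * applicant_b
score_a, score_b = contrib_a.sum(), contrib_b.sum()

for name, ca, cb in zip(features, contrib_a, contrib_b):
    print(f"{name:>14}: A {ca:+.2f}   B {cb:+.2f}")
print(f"score A {score_a:+.2f}, score B {score_b:+.2f}")
```

Both applicants score the same, yet B's outcome depends heavily on the postcode contribution while A's does not - the attribution, not the forecast, is what would flag potential bias. For nonlinear models, methods such as SHAP aim to produce an analogous per-individual decomposition.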
VL: Any final words?
MK: "The race is long and in the end it's only with yourself." OK, but more seriously, if you are reading this, you are probably into data, modelling, ML, AI, or things like them. I would suggest: don't fall in love with a model, fall in love with the scientific process itself. Our collective goal is to understand the world better, and so these models should primarily be used to gain a better understanding and comprehension of an uncertain world. Today that may be ML/AI, but tomorrow it may be something quite different. Chances are, in your career, there will be many 'truths' in terms of approaches or 'fads', or genuine innovations. You'll probably get good at different ideas, but ultimately, it's a quest for some kind of truth, and being discerning and exercising 'optimistic scepticism' is essential for all researchers.