“Are you on mute?” is perhaps the catchphrase that most succinctly captures the post-Covid landscape of work. Yet, despite the plethora of video conferencing tools out there, there is still something to be said for being there in person. If anything, the sheer volume of “virtual” interactions means that a “real” interaction becomes more valuable. If you really want to be there for someone, it’s not going to be through a scheduled Zoom call, is it?
Indeed, attendance at in-person conferences seems to have rebounded to pre-Covid levels, at least by eye. When it comes to quant conferences, QuantMinds (and its predecessor Global Derivatives) is the biggest quant event of the year, and has been since I started attending over a decade ago. Over the past week QuantMinds returned to London to delve into what’s hot in quant. Whilst historically it has been focused on option pricing, these days many of the topics involve some element of machine learning, and also LLMs. There were a huge number of talks at the event, so it’s not possible to go through every one, but I’ll try to give a flavour of the event through the prism of a few talks and panel discussions described here.
LLMs, alt data and the quant stack
It really wouldn’t be a quant conference these days if LLMs did not feature heavily, and there were a number of sessions discussing them at QuantMinds. The “Quant stack of the future” panel, moderated by Nicole Konigstein (quantmate), with panellists Federico Fontana (XAI Asset Management) and Alisa Rusanoff (ettech.ai), touched upon LLMs. In particular, it was noted that it was possible to fine-tune LLMs, and that they could be useful for dealing with unstructured data about companies. However, it could also be necessary to use multiple LLMs, and to have a way to route your computation to the most appropriate one. Moreover, LLMs could be a tool to speed up the R&D cycle. At the same time, traditional ML could still be useful. It was also possible to make ML explainable, using tools like Shapley values (a short sketch below), as well as through dashboards.
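To make the Shapley value point a little more concrete, here is a minimal sketch (mine, not from the panel) using the shap library on a toy tree model; the features and data are entirely made up for illustration.

```python
# Minimal sketch: explaining a toy ML model with Shapley values via the shap library.
# The features and data below are synthetic, purely for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g. value, momentum, sentiment scores (made up)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)     # efficient Shapley values for tree models
shap_values = explainer.shap_values(X)    # one attribution per feature per sample

# Average absolute attribution gives a rough global feature importance ranking
print(np.abs(shap_values).mean(axis=0))
```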

Alisa Rusanoff (left), Nicole Konigstein (top right) & Federico Fontana (bottom right)
Rama Cont (Oxford University) moderated a panel on AI and the future of quant finance. The panellists were Dara Sosulski (HSBC), Andrey Chirikhin (Schonfeld), Hans Buehler (XTX) and Nicole Konigstein (quantmate). It was noted that there are many different types of quant jobs: some work with code, others more with documents. There were three major streams where AI seemed to be used in quant. First, using AI/machine learning more broadly to model non-linear relationships. Second, using it to help quants to code. However, this was still a challenge, because it was like learning a new library: with a large code base, you have historically needed to understand the architecture, and this is just as true for LLM-generated code. Third, LLMs could be used for text and trade entry.
There was a debate as to how it could impact the number of quants. It was noted that this could potentially be more of a challenge for junior quants than for senior ones. At the same time, more senior quants needed to challenge themselves, as they could be stuck in what they know well; they needed to hire people to disrupt their own thinking.
Often being a quant was about taking ownership of existing systems, and a lot of the job is now about engineering, say 80%, versus 20% about innovation. There were still skills that quants needed to know, like time series modelling, along with traditional statistics.
Historically, we might have written in assembler and C++, but with LLMs our job could shift, where 80% of your time might be spent debugging what an LLM has written. Quants needed to ask themselves: what is my value add? Quants need to be experts on the models they are creating. In a sense, it was about going more upstream. There were fewer differences between traders and quants these days. My favourite phrase from the panel was that LLMs could be a force multiplier.

Dara Sosulski, Andrey Chirikhin, Rama Cont, Hans Buehler and Nicole Konigstein (from left to right)
Sticking with LLMs, Alexander Sokol (Compatibl) attempted to decipher LLMs from a psychological perspective. He noted that in experiments AI displayed similar cognitive biases to humans, and that these behaviours were learnt from data. This could be seen when replicating experiments similar to those in the work of Daniel Kahneman. Kahneman noted that humans have two types of thought processes: system 1, which is intuitive and fast, and system 2, which thinks step by step and is much slower. AI could be made to think “fast” or alternatively “slow” depending upon the prompt you use. Sokol gave several examples of cognitive biases, such as the framing effect, the priming effect and semantic priming, where the precise way you prompt an LLM could induce the bias. He also noted that prompting an LLM once was the equivalent of using a single Monte Carlo path: just as it is necessary to compute many Monte Carlo paths, you need to run the same prompt many times.
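Sokol’s Monte Carlo analogy suggests a simple pattern: re-run the same prompt many times and aggregate the answers, rather than trusting a single completion. A rough sketch is below, where query_llm is a hypothetical placeholder for whichever LLM client you use (the function name and sampling setup are my assumptions, not from the talk).

```python
# Sketch: treat each LLM completion as one "Monte Carlo path" and aggregate.
# query_llm is a hypothetical placeholder for your own LLM client call.
from collections import Counter

def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder: call your LLM of choice and return its text answer."""
    raise NotImplementedError

def sample_answers(prompt: str, n_paths: int = 50) -> Counter:
    # Each call is one "path"; the distribution of answers is what we care about
    answers = [query_llm(prompt).strip() for _ in range(n_paths)]
    return Counter(answers)

# Example usage: take the modal answer across many paths
# counts = sample_answers("Is the framing of this headline positive or negative?")
# print(counts.most_common(1))
```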

Alexander Sokol
Later, there was a panel on data management, with the discussion focused on new data sources/alt data, as well as models and strategies. The moderator was Svetlana Borovkova (Bloomberg), with panellists Joe Hanmer (Fidelity International), Vincenzo Pola (Barclays) and Mark Fleming-Williams (CFM). It was noted how in the last years of the 2010s there had been a lot of excitement around alt data, whereas more recently the flow of new alt data coming to market has plateaued. At the same time, existing data types were getting better, for example through higher frequency. LLMs were also making text data more important; hence it was possible to ingest machine-readable data from sell-side research, and also to parse internal research on the buy side. Despite LLMs, it was still possible to use “traditional” NLP for tasks such as sentiment (a short sketch below). Despite the advent of LLMs, there has not been the rush of web-derived datasets which people had expected. Indeed, one important point was that any AI-created dataset could only start from today, so it could take time to build up a history.
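As a reminder of what “traditional” NLP looks like next to LLMs, a lexicon-based sentiment scorer is only a few lines, for example with NLTK’s VADER; the headline below is made up.

```python
# Sketch: lexicon-based ("traditional" NLP) sentiment scoring with NLTK's VADER.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-off download of the lexicon

sia = SentimentIntensityAnalyzer()
headline = "Company X beats earnings expectations and raises guidance"  # made-up example
print(sia.polarity_scores(headline))          # neg / neu / pos / compound scores
```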

Mark Fleming-Williams
Staying on the subject of NLP, Helyette Geman presented the Hype Index. The idea was to understand how unusual news coverage could be used to forecast market stress. She talked about how she normalised the news coverage, for example by sector weight and market cap.
Discussions on portfolio optimisation
Dilip Madan (Robert H Smith School of Business) discussed new ways of writing down objective functions for portfolio optimisation. He noted how, in the mean-variance objective function, the mean is in linear units whilst the variance is in squared units, so the two terms are on different scales. Expected utility theory was not used much in practice, given that it models dissatisfaction with making money – hardly a problem in practice for investors! He suggested the financial finance approach to try to fix these issues, addressing how much to invest in a particular situation.
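The scale mismatch Madan highlighted is visible in the standard mean-variance objective itself. Below is a tiny sketch with made-up numbers: the mean term is in return units, the variance term is in squared return units, so the risk-aversion parameter has to do double duty as a unit conversion.

```python
# Sketch: the classic mean-variance objective, max_w  mu'w - lambda * w' Sigma w.
# Numbers are made up; the point is only that the two terms live in different units.
import numpy as np

mu = np.array([0.05, 0.03])                    # expected returns (linear units)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])               # covariance (squared units)
lam = 3.0                                      # risk aversion, implicitly converting units

def objective(w):
    return mu @ w - lam * w @ Sigma @ w

w = np.array([0.6, 0.4])
print(objective(w))
```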

Dilip Madan
In their joint presentation, Marco Bianchetti (Intesa Sanpaolo) and Fabio Vitale (LENATI) also tackled portfolio optimisation, but from the perspective of retaining the general tilt/view of a portfolio whilst attempting to reduce the risk. The problem is computationally challenging: it is essentially trying to find a needle in a haystack, and a brute force strategy is not feasible given that the solution space of a large optimisation problem is too big. They suggested using particle swarm optimisation, which is inspired by the social behaviour of bird flocks (a generic sketch is below).
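For readers unfamiliar with it, particle swarm optimisation is straightforward to write down. Below is a minimal generic sketch on a toy objective; this is my own illustration, not the presenters’ implementation, and the hyperparameters are conventional defaults.

```python
# Minimal particle swarm optimisation sketch on a toy objective (sphere function).
# Generic illustration only, not the presenters' actual implementation.
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()              # best position seen by the swarm
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Each particle is pulled towards its own best and the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: np.sum(z ** 2), dim=5)   # toy stand-in for a portfolio objective
print(best_x, best_val)
```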
Raphael Douady (University of Paris I) talked about pansethetic analysis for portfolio construction involving extreme events. It involves evaluating how the environment impacts each object, where the environment is characterised by a series of stress tests and the analysis of future returns across all scenarios. He began his discussion by comparing AI with the human brain. He noted that we have a gigantic training set, but at the same time information is not the same thing as the amount of data. The human brain isn’t totally unsupervised, and indeed our thoughts interact with chemicals controlling emotions, pain etc. Intelligence is about rejecting useless information. From the perspective of our own decision making, we could either take the view that it’s deterministic and hence predetermined, or that it’s random. However, in both cases, it would imply that it isn’t our decision!

Raphael Douady
Applying quant to other asset classes, vol and news
Hamza Bahaji (Amundi) discussed structural factor investing in stocks and corporate liabilities. Historically, factor-based approaches have been mostly driven towards equities, and there has been somewhat of a methodology gap: it has largely been a case of non-causal empirical methods. Bahaji discussed how these approaches could be relevant to corporate bonds. The idea was to choose a structural model, identify the factors driving firm value, derive sensitivities to those factors and then build portfolios with optimal exposures to them.
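The canonical example of a structural model is Merton’s, where equity is a call option on the firm’s assets; the talk did not spell out the model used, but a minimal sketch along Merton lines, with made-up parameters, gives a feel for what “derive sensitivities to factors” can look like.

```python
# Sketch: Merton structural model, equity viewed as a call option on firm assets.
# Parameters are made up; this only illustrates deriving a sensitivity (delta).
import numpy as np
from scipy.stats import norm

def merton_equity(V, D, r, sigma_V, T):
    """Equity value and its sensitivity (dE/dV) to the firm asset value V."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    equity = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)
    delta = norm.cdf(d1)          # sensitivity of equity to firm value
    return equity, delta

print(merton_equity(V=120.0, D=100.0, r=0.03, sigma_V=0.25, T=1.0))
```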
Along similar lines, i.e. moving quant to a new asset class, Stas Melnikov (SAS) chaired a panel on quant credit, with panellists Paul Kamenski (Man Numeric) and Konstantin Nemnow (State Street). It was noted how fixed income has historically been largely discretionary. It was a matter of chicken and egg: in the past, the key issue was understanding slippage in execution, but we now have that execution data, so it is possible to estimate it. At the same time, in periods of market dislocation, like March 2020, when there was no liquidity, it was important for humans to assess the situation. Also, with new issues, by definition the data does not exist.
Sanae Houradi (HSBC) talked about extracting systematic alpha using S&P 500 short-dated options, which are comparatively new, with only around 3 years of trading history (when looking at expiries on every day of the week). On average, she noted that the volatility risk premium (i.e. the difference between implied and realised volatility) was positive. Typically, the Sharpe ratio tended to be higher for weekly options, and there was more premium for short-dated options. At the same time, the premium which was captured was a combination of both implied vs realised vol and gap risk.
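The volatility risk premium she refers to is simple to measure in principle: compare the implied vol at inception with the vol subsequently realised over the option’s life. A rough sketch with synthetic numbers is below (this is my illustration of the definition, not her methodology).

```python
# Sketch: volatility risk premium = implied vol minus subsequently realised vol.
# Data below is synthetic, purely to show the calculation.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=5)             # e.g. life of a weekly option
realised_vol = daily_returns.std(ddof=1) * np.sqrt(252)   # annualised realised vol
implied_vol = 0.18                                         # implied vol at inception (made up)

vol_risk_premium = implied_vol - realised_vol
print(f"VRP: {vol_risk_premium:.2%}")
```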

Barney Rowe
Barney Rowe (Fidelity International) discussed how to combine quantitative factors with a discretionary view/alpha capture to customise portfolios. It was important to condition on client requests around risk levels and tradable universe (and exclusions), as well as on their own factor tilt views. At the same time, the process needed to be explainable and shouldn’t be excessively sensitive. The approach used minimum distance optimisation to provide flexible and expressive portfolio construction (a rough sketch below).
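Minimum distance optimisation here means staying as close as possible to the model/discretionary portfolio while satisfying the client’s constraints. A minimal sketch with cvxpy is below; the weights and constraints are made up for illustration and are not Fidelity’s.

```python
# Sketch: minimum distance optimisation, staying close to a target portfolio
# subject to client constraints. Weights and constraints are made up.
import cvxpy as cp
import numpy as np

w_target = np.array([0.4, 0.3, 0.2, 0.1])   # model/discretionary portfolio
n = len(w_target)

w = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(w - w_target))   # "minimum distance" to the target
constraints = [
    cp.sum(w) == 1,          # fully invested
    w >= 0,                  # long only
    w[3] == 0,               # illustrative client exclusion of asset 4
]
cp.Problem(objective, constraints).solve()
print(w.value)
```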
News can be an important driver of markets. Vivek Anand (DB) and Luiz Silva (DB) discussed using news sentiment to understand markets, both from a macro top-down level (for market regime identification) and from an asset-specific bottom-up perspective. However, it is a multidimensional problem, with the news factor being one element alongside other more traditional factors like risk barometers and positioning & flow data. They also used news sentiment to create a transient risk factor in order to better understand price action from a risk perspective.
On my side, I gave a presentation about inflation forecasting based on the work we’ve done at Turnleaf Analytics. New material included a recent paper, where I discussed ways of trading US CPI on a high frequency basis using inflation forecasts, in particular looking at the high frequency behaviour of FX around US CPI prints.
Conclusion
Every year I’ve come back to QuantMinds, I’ve learnt something new, whether from the presentations, panels or discussions. Whilst quants have not always embraced machine learning, it now definitely seems part of the process, and they are experimenting with LLMs, though I don’t think those are yet fully embedded in the workflow.
Most importantly, given that pretty much “everyone” in quant is there, it was great to catch up with friends at the event, and to make new friends too!
