WBS Training Palermo Conference 2025

Oct 12, 2025

Italy is made up of twenty regions, each very different from the next, from Veneto to Lazio to Puglia. Two weeks ago I visited Sicily. Perhaps unsurprisingly for an island, the sea seems ever present there. Sitting at the crossroads of the Mediterranean, Sicily has had many visitors over the years, each of which has left its mark on this remarkable island. Looking at the architecture of Palermo, you can see many layers of this history. Even in the food, you have influences from North Africa, for example, with some couscous dishes. If you do go to Sicily, there is one thing you can’t escape: arancini, deep-fried coated rice balls. I’m always one for puns, and the local Palermo run club, Aruncina, possibly has the best run club name I’ve ever seen. If you’re ever keen to explore Palermo on a run, I’d recommend joining Aruncina for one of their runs, and at the end you’ll also be able to enjoy… some arancini, obviously.

My main reason for visiting Palermo recently was the WBS Training Quant Conference, held this year for the twenty-first time. I’ve been to the conference many times over the years, and it’s always a great event: a good opportunity to catch up with fellow quants and hear about some of the latest research areas in quant finance. In this article, I’ll try to put together a few takeaways from the conference. Whilst it’s not going to be fully comprehensive, I hope it gives a flavour of the types of discussions at the event. One interesting point is that, in the past, events like this used to be focused mainly on option pricing and risk management. Pricing discussions are still an important element of the conference, but over the years machine learning has become another important strand of the event. Indeed, this year machine learning was prominent, in particular presentations on AI agents and LLMs.

Alexander Sokol, Compatibl

In his talk, Alexander Sokol delved into the psychology of LLMs. He noted that statistical experiments on LLMs show they share many, but not all, of the cognitive biases of humans (such as those illustrated by experiments in Daniel Kahneman’s fantastic book Thinking, Fast and Slow, and in Noise, which he coauthored). It was, however, important to try experiments on LLMs which had not already been published, to avoid the models having seen them in their training data. Kahneman described two forms of thinking: system 1, which handles automatic tasks, and system 2, which performs deliberate thinking. To make an LLM think “fast”, you can limit the number of tokens it is allowed to use. Sokol noted that LLMs can fall foul of the “Moses illusion” in this scenario. If you ask “How many animals of each kind did Moses take on the Ark?”, the “system 1” answer would be “two”, whilst a “system 2” answer would spot that Moses was used in the question instead of Noah. Sokol gave the results of experiments he had run on LLMs.

He noted how they can be subject to the framing effect, and also to anchoring, depending on how questions are asked. There was also the priming effect, where certain words or concepts could activate related associations from memory. Perhaps it is not surprising that LLMs appear human-like, given the data they are trained on is (at least historically) generated by humans.
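This is not Sokol’s actual experimental setup, but a minimal sketch of how one might probe the Moses illusion programmatically. The `query_llm` call is deliberately left out as a placeholder for any chat-completion API (where capping `max_tokens` is the lever for forcing a “system 1” reply); `make_probe` and `classify_reply` are hypothetical helper names for illustration.

```python
# Hypothetical sketch: probing the "Moses illusion" under a token budget.
# The idea: send a distorted question to an LLM (via any completion API,
# with a low max_tokens setting to encourage "system 1" answers), then
# crudely classify whether the reply spotted the swapped-in name.

def make_probe(distractor: str, correct: str) -> dict:
    """Build a distorted question, recording the fact it silently swaps."""
    return {
        "question": f"How many animals of each kind did {distractor} take on the Ark?",
        "swapped_in": distractor,
        "swapped_out": correct,
    }

def classify_reply(reply: str, probe: dict) -> str:
    """Label a reply: 'system 2' if it mentions the correct name, else 'system 1'."""
    if probe["swapped_out"].lower() in reply.lower():
        return "system 2"   # the model noticed it should be Noah, not Moses
    return "system 1"       # the model answered the surface question

probe = make_probe("Moses", "Noah")
print(classify_reply("Two of each kind.", probe))               # system 1
print(classify_reply("None - it was Noah, not Moses.", probe))  # system 2
```

Running such probes many times, over questions not already published online, is what makes the bias measurement meaningful.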

AI agents came up several times during the event. Miquel Noguer Alonso discussed how you could use AI agents to replicate an investment committee. There are of course caveats. You need the machine to have the right knowledge. It can be challenging to prevent a model talking about the past, and you can’t really torture a commercial LLM into forgetting the future when backtesting! Sometimes it is necessary to build your own LLM, or use one you can control locally. Often in finance the challenge is that you don’t have the data (e.g. when looking at the impact of climate change), but at the same time you still need to model it. Taking the example of an investment committee, one solution could be to build an agentic model, with agents working on different parts of the problem: one agent could handle the data and analysis, whilst another could work on risk and compliance. You could use MCP (the Model Context Protocol) for them to communicate with one another.
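To make the orchestration idea concrete, here is a minimal, hypothetical sketch of such a committee. In a real deployment each agent would be backed by an LLM and wired up via MCP; here the agents are plain Python stubs (with an assumed 10% risk limit) purely so the division of labour is visible.

```python
# Hypothetical sketch of an "investment committee" of agents. Each agent
# works on its part of the problem; a coordinator passes the proposal along.

from dataclasses import dataclass

@dataclass
class Proposal:
    asset: str
    weight: float
    notes: list

class DataAnalysisAgent:
    """Stub for the agent doing the data and analysis."""
    def review(self, p: Proposal) -> Proposal:
        p.notes.append(f"data: signal for {p.asset} checked")
        return p

class RiskComplianceAgent:
    """Stub for the risk and compliance agent."""
    MAX_WEIGHT = 0.10  # assumed position limit, for illustration only

    def review(self, p: Proposal) -> Proposal:
        if p.weight > self.MAX_WEIGHT:
            p.weight = self.MAX_WEIGHT
            p.notes.append("risk: weight capped at limit")
        return p

def committee(p: Proposal, agents) -> Proposal:
    for agent in agents:  # sequential here; MCP would let agents interoperate
        p = agent.review(p)
    return p

decision = committee(Proposal("EURUSD", 0.25, []),
                     [DataAnalysisAgent(), RiskComplianceAgent()])
print(decision.weight)  # 0.1
```

The value of a protocol like MCP is precisely that the stubs above could be swapped for independently built, LLM-backed agents without changing the coordination logic.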

Miquel Noguer Alonso, AIFI

Valer Zetocha also discussed AI agents, but in the context of how to use them on the trading floor. He covered several use cases, such as automating processes, monitoring processes, use in risk hedging and, more broadly, revenue generation. A particular use case could be a “quoting buddy” style agent, which would do automatic checks in specific situations (like over dividend dates). He talked about the pros and cons of an LLM setup on the cloud versus locally. The buzzword of MCP also came up, as a way of getting these many agents to work together.

Valer Zetocha, Julius Baer

There was a panel entitled “Are we ready for AI colleagues?”, perhaps an inevitable extension of the idea of AI agents. The moderator was Alexander Sokol, with panellists Blanka Horvath, Ignacio Ruiz, Miquel Noguer Alonso and Ioana Boier. One question posed was: what if an LLM had its own e-mail and Teams accounts? How would the transition from a personal assistant to a team member be an improvement? Models are conditioned to be helpful to users; however, if this is suppressed, a model could become uncooperative. It was also noted that the job of a quant deploying these models is to make sure the model put into production is safe. Ultimately, you would be responsible for an AI agent: if, for example, you sent one to a meeting on your behalf, you would be liable. The example was given of a lawyer, who would face issues if an AI agent made a mistake on their behalf.

Alexander Sokol, Compatibl (LHS), Blanka Horvath (Oxford University) (centre) and Ignacio Ruiz (MoCaX Intelligence) (RHS)

GPUs have been crucial for the training of LLMs, but they have many other use cases in finance. Ioana Boier talked about their use in the context of accelerated portfolio optimisation. Often portfolio optimisation is treated as a sequential problem. She noted how a strong signal that is poorly allocated in a portfolio is often worse than a weak signal with better portfolio allocation. She discussed techniques such as PDHG (primal-dual hybrid gradient) to make portfolio optimisation parallel, and hence amenable to GPU acceleration. She presented cuFOLIO, a library for GPU-based portfolio optimisation built on top of the existing cuOpt optimisation library. The benefit of these high-level libraries is that users can work in Python, rather than programming a GPU directly using CUDA. She also mentioned cuML, the GPU equivalent of scikit-learn, which even shares much of the same syntax.
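This is not Boier’s cuFOLIO code, just a small NumPy sketch of the underlying mean-variance problem that such libraries accelerate (the expected returns and covariance matrix below are made-up numbers). The appeal of GPU stacks like cuFOLIO/cuOpt is that high-level Python along these lines survives, while the linear algebra runs on the device.

```python
# Toy mean-variance allocation: the unconstrained optimum is proportional
# to inv(Sigma) @ mu, which we then normalise to a fully invested portfolio.
# Real problems add constraints, which is where solvers like PDHG come in.

import numpy as np

mu = np.array([0.05, 0.03, 0.04])            # expected returns (assumed)
sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.02],
                  [0.01, 0.02, 0.09]])       # covariance matrix (assumed)

raw = np.linalg.solve(sigma, mu)             # inv(Sigma) @ mu, without inverting
w = raw / raw.sum()                          # normalise so weights sum to one
print(np.round(w, 3))
```

On the GPU side, swapping NumPy for a device-array library is largely mechanical, which is the point about cuML mirroring scikit-learn’s syntax.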

Ioana Boier, Nvidia

Oleksiy Kondratyev discussed quantum machine learning. He noted that at present we are in the era of noisy intermediate-scale quantum (NISQ) computers, and that there are algorithms designed to be resistant to that noise. Given the huge demand for computation, he argued, we will have to get to quantum computing: Moore’s Law is running up against the laws of physics, since a silicon atom is around 0.2nm across and quantum mechanical effects play a significant role at sub-1nm scales. Quantum computers are not necessarily faster, but they can perform some operations that we cannot compute classically. The bottleneck for simulating them is classical memory: we would have to keep the full quantum state in classical memory. In effect, you are trading speed for memory, with the quantum state itself storing that information.
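A quick back-of-the-envelope calculation shows why classical memory is the bottleneck: an n-qubit state vector has 2^n complex amplitudes, so simulating it classically needs memory that doubles with every qubit added (assuming 16 bytes per amplitude at complex128 precision).

```python
# Memory needed to hold an n-qubit quantum state vector classically:
# 2**n complex amplitudes, at 16 bytes each (complex128).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, "qubits:", statevector_bytes(n) / 1e9, "GB")
# 30 qubits already need ~17 GB; 50 qubits would need ~18 petabytes
```

This exponential wall is exactly the sense in which a quantum computer trades memory for speed: the quantum state itself holds what a classical simulator would have to store explicitly.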

Oleksiy Kondratyev, Imperial College

Christopher Kantos discussed the important role of training data in AI. He talked about domain-specific training, in the context of earnings calls. He also discussed the typically asymmetric reaction of stocks to negative versus positive information, as well as the differences in the reaction function between large caps and small caps.

Keeping to the topic of NLP, Helyette Geman (Johns Hopkins) talked about how unusual news could be used to forecast market stress. She described a “novelty” measure, which combined sentiment and unusual news to forecast volatility, not just returns.

Christopher Kantos, Alexandria

On my side, I talked about forecasting inflation using a data-driven approach, which we do at Turnleaf Analytics, and in particular what Monet can tell us about this. When the impressionists exhibited their paintings to the public, it provoked an uproar from art critics, because it was so different from the past. Over time, however, impressionist paintings became accepted despite being different. When it comes to forecasting inflation, historically there has been a narrative-led approach, coming up with estimates using (comparatively) simple models. My argument was that with so much data available these days (we are in the big data age), we should harness these datasets along with machine learning models to adopt a “data-driven” approach, rather than a “narrative” one.

Saeed Amen, Turnleaf Analytics

Conclusions

Throughout the conference, there were many presentations. I hope I’ve managed to give a bit of a flavour of some of these, where machine learning has increasingly become a feature (pun intended!). A prediction for next year’s conference? I suspect there’ll be even more about AI agents and LLMs. Let’s see!