Boosting Intel MKL on AMD Ryzen processors

Over the weekend I had the time to test a workaround which caught quite a bit of attention when it was first published in November 2019 on Reddit under the title “How to force Matlab to use a fast codepath on AMD Ryzen/TR CPUs”. Among MATLAB users it has long been known that the underlying Intel Math Kernel Library (MKL), which provides, among other things, the linear algebra libraries BLAS and LAPACK, is optimized for Intel processors and notoriously slow on AMD processors, regardless of whether the CPU supports efficient SIMD extensions. This drawback, also known as the “cripple AMD” routine, has existed for more than 10 years and also affects NumPy, as the Wikipedia article for the Math Kernel Library explains: “However, as long as the master function detects a non-Intel CPU, it almost always chooses the most basic (and slowest) function to use, regardless of what instruction sets the CPU claims to support. This has netted the system a nickname of “cripple AMD” routine since 2009. As of 2019, MKL, which remains the choice of many pre-compiled Mathematical applications on Windows (such as NumPy, SymPy, and MATLAB), still significantly underperforms on AMD CPUs with equivalent instruction sets.” Source: Wikipedia

In other words, while AMD CPUs come with full SSE4, AVX and AVX2 support, software vendors who claim that their software supports AVX, AVX2, SSE, or other SIMD extensions are not obliged to state that these fast code paths are executed only on Intel CPUs. In the case of the MKL, the library checks the vendor ID of the CPU and, if it reads “AuthenticAMD” rather than “GenuineIntel”, switches to a slow SSE fallback mode, ignoring the CPU’s actual capabilities. The workaround consists of merely four lines in a batch file and forces MKL in MATLAB to use AVX2, no matter which vendor ID is found. In my tests on an AMD Ryzen 7 3700X CPU I found an overall performance improvement, with particularly large gains in individual tests, e.g., a speed-up of more than 200% for the double-precision Cholesky factorization routine. I think this is fantastic news for MATLAB and NumPy users, as without the “cripple AMD” routine AMD’s Ryzen and Threadripper CPUs offer a very attractive price-performance ratio.
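The same trick applies to NumPy, provided the build links against MKL (e.g., the conda default channel builds): the Reddit recipe boils down to setting the undocumented environment variable MKL_DEBUG_CPU_TYPE=5, which selects the AVX2 dispatch branch regardless of the vendor ID. Here is a minimal sketch; the Cholesky benchmark and the matrix size are purely illustrative choices:

# Minimal sketch: force the AVX2 code path of an MKL-backed NumPy build.
# MKL_DEBUG_CPU_TYPE is an undocumented MKL variable; the value 5 selects
# the AVX2 dispatch branch regardless of the CPU vendor ID. It must be set
# before NumPy (and hence MKL) is loaded, i.e., before the import below.
import os
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import time
import numpy as np

# Quick benchmark of the double-precision Cholesky factorization mentioned
# above; the matrix size is an arbitrary choice.
n = 4000
rng = np.random.default_rng(42)
a = rng.standard_normal((n, n))
spd = a @ a.T + n * np.eye(n)  # symmetric positive definite test matrix

t0 = time.perf_counter()
np.linalg.cholesky(spd)
print(f"Cholesky of a {n}x{n} SPD matrix: {time.perf_counter() - t0:.3f} s")

For MATLAB, the batch file does essentially the same thing: it sets the variable and then launches matlab.exe, so that MKL picks up the value when it is loaded.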

Visiting Lectureship at Goethe-University

In the winter semester 2019/2020 I will be a visiting lecturer in Prof. Bastian von Harrach‘s group at the Institute of Mathematics at the Goethe-University in Frankfurt am Main, teaching a seminar on “Numerics for Implied Volatility Surface Construction”. For more information about the seminar, please visit the seminar’s website here or have a look at the announcement (both in German).

MathFinance Conference 2019 Recap

Over the weekend, I had time to reflect on how delightful this year’s MathFinance Conference in Frankfurt had been. While I felt that all of the conference’s talks were very interesting, these are my personal highlights:

The conference started on Monday with an insightful talk by Thorsten Schmidt (University of Freiburg) presenting new perspectives for understanding the concept of statistical arbitrage. Then followed a peek into the world of cryptocurrencies and crypto markets featuring four speakers from the Humboldt University of Berlin, including Wolfgang Härdle, whose talk was particularly instructive for me, as I had not heard about the CRIX index (developed by Härdle and Trimborn) before. I particularly liked the fact that all the datasets presented in the talk can be accessed online, so the listeners could start playing around with the data right away. From both this session and playing around with the data I learned that cryptocurrencies qualify as a new asset class in the sense that their distributional features differ strongly from those of the more traditional asset classes. However, as was pointed out in the afternoon panel discussion, there is no intrinsic value in, e.g., Bitcoin, and I would like to add that from my perspective its intrinsic value is in fact negative due to its disastrous carbon footprint (the creation of new bitcoins requires immense computational effort and therefore enormous amounts of energy). So, while blockchain technology is most probably here to stay, whether Bitcoin will become mainstream remains to be seen. I enjoyed the talk by Daniel Oeltz (Rivacon) on PAILAB (short for Python Artificial Intelligence Laboratory), an open-source pythonic workbench that will be released by the end of April; I am looking forward to it. I also gave my talk on our option-based indicator for stock price bubbles on Monday afternoon and particularly enjoyed the questions from the participants, which made for a lively and intriguing discussion.

The talks on Tuesday morning focused on modeling and numerics, and I very much enjoyed Adil Reghaï’s (Natixis) talk on stochastic local volatility (SLV), which was in some sense the continuation of his great talk from last year’s conference. Antoine Jacquier’s (Imperial College London) talk on VIX options in rough volatility models was my personal highlight of the conference, because afterwards I was convinced that at some point in the very near future the industry will adopt these models, which are currently mainly the subject of academic research. I will take this as an occasion to start studying the literature on rough volatility. In the afternoon session, Ingo Mainert (Allianz Global Investors) gave a thought-provoking keynote speech on “Current Challenges and Developments of the Investment Industry”. Moreover, Uwe Wystup (MathFinance) provided an information-packed talk on FX option greeks, and the conference was concluded by Rolf Poulsen’s (University of Copenhagen) highly enjoyable talk entitled “The Fed Isn’t Federal – And Other Odd Things in Finance”.

I would like to say a huge thank you to the whole MathFinance team, particularly the Wystup family, Ansua, Uwe, Bristi and Rittik, for organizing this amazing conference full of thought-provoking talks and wonderful people at the forefront of quantitative finance. I am already looking forward to the 20th edition in 2020.

SDEs in Finance Workshop at LUT

On my flight back to Frankfurt from the Workshop on SDEs with Applications in Finance at Lappeenranta University of Technology, I reflected on how enjoyable this workshop had been.

Among the many interesting talks (in fact, I felt that all of them were very interesting), there were two so close to my field of interest that I will definitely have to catch up by reading the corresponding papers very carefully. Eero Immonen gave a great presentation on dynamical modeling of efficient financial markets, based on a paper published in Physica A and another one published in the Journal of Mathematical Economics. Sebastian Springer gave a very interesting talk on correlation-integral likelihood for SDEs, based on joint work with Heikki Haario, Janne Hakkarainen and Ramona Maraia. On a personal note, I was greatly honored to present a practitioner’s perspective on our recent work on data-driven indicators for stock price bubbles (joint work with Lassi Roininen, Petteri Piiroinen and Tobias Schoden) in a keynote talk, which was also the very first talk in a brand-new seminar series on Mathematical Sciences at Lappeenranta University of Technology. In the second keynote talk, Petteri Piiroinen gave a great presentation on the theoretical aspects of our work, ranging from strict local martingales and geometry in the SABR plane to connections with Feynman path integrals.

Many thanks to both the local organizers and the participants of this workshop for providing such a friendly and stimulating atmosphere and for many interesting discussions and questions.

Asset Price Bubbles: An Option-Based Indicator

Together with Petteri Piiroinen (University of Helsinki), Lassi Roininen (University of Oulu) and my colleague Tobias Schoden at Deka Investment, I have developed an option-based indicator for short-term asset price bubbles. One of the findings in the corresponding publication (which can be found here) is that short-term bubbles are nowadays quite common in the tech stock sector.

An even easier introduction to CUDA

This blog post, written by Mark Harris, Chief Technologist for GPU Computing Software at NVIDIA, gives a brief and very accessible introduction to the CUDA parallel computing platform. Mark also provides some exercises touching on very recent developments, such as the new Unified Memory capabilities that came with the Pascal generation of GPUs. A good read for programmers planning to start CUDA coding, and a highly recommendable blog for anyone interested in GPGPU computing.

Article “Probabilistic interpretation of the Calderón problem” published

Jointly with Petteri Piiroinen from the University of Helsinki, I have continued our work on the probabilistic interpretation of the Calderón problem, which was first posed by Alberto Calderón in the now-famous paper that laid the foundations for the mathematical study of the inverse conductivity boundary value problem (the inverse problem of electrical impedance tomography). After deriving a probabilistic interpretation of the forward problem, with applications to problems with random background conductivities, in a previous work (namely our Annals of Applied Probability article From Feynman–Kac formulae to numerical …), we have now tackled the inverse problem, finding several interesting new formulations which might add to our understanding of the Calderón problem, which is (in its most general form) still unsolved. Here is the link to our paper, which was published in Inverse Problems and Imaging (AIMS).

HPC to your Desktop – Turbo Boosting MATLAB via the Techila Distributed Computing Solution in the Google Cloud

“3 day MATLAB computation done in 4 minutes with a cost of $10” is the message of a recent tweet by the folks at the Tampere-based Finnish company Techila Technologies Ltd, which first got me interested in their distributed computing solution. The Techila system is a distributed computing middleware and management solution for computing servers, clusters and cloud services. I am mostly interested in the latter variant; more precisely, I decided to have a closer look at Techila in combination with MATLAB on the Google Cloud Platform, which provides a full pay-as-you-go cloud solution together with Google’s state-of-the-art monitoring, logging and diagnostics tools. Sounds exciting? In my eyes it is even more appealing given that Google is offering Google Cloud Platform education grants, which give students free credits and the ability to learn on the platform during their university courses, thus having the potential to bring on-demand supercomputing to every student’s desk. Over the weekend, I tested this framework on a commonly used benchmark problem from mathematical finance, namely the pricing and sensitivity calculation of a swaption portfolio within a LIBOR market model using Monte Carlo simulation. For students with no or restricted access to a local supercomputer, it is no exaggeration to say that Techila might provide the biggest bang for the buck.
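What makes such Monte Carlo workloads ideal for distributed middleware is that they are embarrassingly parallel: path batches are completely independent, and only their averages need to be collected at the end. The sketch below illustrates that structure using Python’s standard multiprocessing module rather than the actual Techila API, and, for brevity, it prices only a single caplet in a one-rate lognormal LIBOR market model (under its forward measure the forward rate is a driftless lognormal martingale) with made-up parameters instead of a full swaption portfolio:

# Sketch of the embarrassingly parallel structure such middleware exploits:
# independent Monte Carlo batches farmed out to worker processes. This is
# NOT the Techila API; it uses Python's multiprocessing as a stand-in.
from multiprocessing import Pool

import numpy as np

# Toy parameters: initial forward rate, strike, volatility, expiry, accrual
# period and discount factor to the payment date (all made up).
F0, K, SIGMA, T, DELTA, DF = 0.02, 0.02, 0.30, 1.0, 0.5, 0.97


def caplet_batch(seed, n_paths=250_000):
    """Price one independent batch of paths; no communication is needed."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Under its forward measure the forward rate is a driftless lognormal
    # martingale, so F(T) can be sampled exactly in a single step.
    f_t = F0 * np.exp(-0.5 * SIGMA**2 * T + SIGMA * np.sqrt(T) * z)
    payoff = np.maximum(f_t - K, 0.0)
    return DF * DELTA * payoff.mean()


if __name__ == "__main__":
    seeds = range(16)  # one task per batch; middleware would distribute these
    with Pool() as pool:
        estimates = pool.map(caplet_batch, seeds)
    print(f"Caplet price estimate: {np.mean(estimates):.6f}")

A full swaption portfolio with sensitivities multiplies the work per path enormously, but the structure stays exactly the same, which is why the speed-ups scale so well with the number of cloud workers.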

Reflecting random walk on spheres article published

Together with my colleague Professor Sylvain Maire from the Université de Toulon, France, I have developed some extensions of the well-known random walk on spheres estimator to simulate reflecting and partially reflecting diffusion processes. In our recent article, which has been published in the Journal of Computational Physics, we adapt these techniques to the forward problem of electrical impedance tomography.

Here is the abstract: In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
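As background for readers unfamiliar with the base algorithm, here is a minimal sketch of the classical (purely absorbing) walk on spheres that our estimator extends: to evaluate the solution of a Dirichlet problem for the Laplace equation at a single point, one repeatedly jumps to a uniformly sampled point on the largest circle inscribed in the domain and stops within an eps-shell of the boundary, averaging the boundary data over the stopping points. The unit-disk domain, the boundary data and the tolerance below are made-up illustration choices; the reflecting and interface replacement steps from the article are not shown:

# Classical walk on spheres for the Laplace equation on the unit disk:
# u(x) equals the mean of u over any circle around x contained in the
# domain, so jumping to a uniform point on the largest inscribed circle
# yields an unbiased recursion, stopped within an eps-shell of the boundary.
import numpy as np

rng = np.random.default_rng(0)


def g(p):
    """Dirichlet data; its harmonic extension to the disk is u(x, y) = x*y."""
    return p[0] * p[1]


def walk_on_spheres(x0, eps=1e-4, n_walks=100_000):
    """Estimate the harmonic solution at the point x0 inside the unit disk."""
    x = np.tile(np.asarray(x0, dtype=float), (n_walks, 1))
    alive = np.ones(n_walks, dtype=bool)
    while alive.any():
        # Radius of the largest circle around each walker inside the disk.
        d = 1.0 - np.linalg.norm(x[alive], axis=1)
        moving = d > eps  # walkers outside the eps-shell keep jumping
        theta = rng.uniform(0.0, 2.0 * np.pi, size=d.size)
        step = d[:, None] * np.column_stack((np.cos(theta), np.sin(theta)))
        x[alive] += np.where(moving[:, None], step, 0.0)
        alive[alive] = moving
    # Average the boundary data over the (near-)boundary stopping points.
    return g(x.T).mean()


print(walk_on_spheres([0.3, 0.4]))  # exact solution: 0.3 * 0.4 = 0.12

Stopping within the eps-shell rather than exactly on the boundary introduces a small bias; controlling this kind of error in the far more delicate setting of mixed boundary and interface conditions is precisely what the bias analysis in the article is about.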

Monograph published

My monograph Anomaly Detection in Random Heterogeneous Media: Feynman-Kac Formulae, Stochastic Homogenization and Statistical Inversion, with a foreword by Lassi Päivärinta, has now been published (Springer-Verlag, 164 pages). The monograph is concerned with the analysis and numerical solution of a stochastic inverse anomaly detection problem in electrical impedance tomography (EIT). More precisely, we study the problem of detecting a parameterized anomaly in an isotropic, stationary and ergodic conductivity random field whose realizations are rapidly oscillating. For this purpose, we derive Feynman-Kac formulae to rigorously justify stochastic homogenization in the case of the underlying stochastic boundary value problem. We combine techniques from the theory of partial differential equations and functional analysis with probabilistic ideas, paving the way to new mathematical theorems which may be fruitfully used in the treatment of the problem at hand. Moreover, we propose an efficient numerical method in the framework of Bayesian inversion for the practical solution of the stochastic inverse anomaly detection problem.