Deepak Kanungo, founder and CEO of Hedged Capital LLC

Source | TensorFlow official WeChat account

Hedged Capital, a financial trading and advisory firm with an "AI-first" strategy, uses probabilistic models to trade the financial markets. In this article, we explore the three types of errors inherent in all financial models, illustrated with a simple model built in TensorFlow Probability (TFP). Note: TensorFlow Probability link www.tensorflow.org/probability…

Finance is not physics

Adam Smith, widely regarded as the father of modern economics, was in awe of Newton’s laws of mechanics and gravitation [1]. Ever since, economists have striven to make economics a science like physics. They aspire to formulate theories that accurately explain and predict economic activity at both the micro and macro levels. In the early 20th century, the work of economists such as Irving Fisher intensified this desire, culminating in the econophysics movement of the late 20th century.

For all the complex mathematics of modern finance, its theories are woefully inadequate, especially when compared with those of physics. Physics can predict the motion of the Moon and of the electrons in your computer with astonishing precision, and those predictions can be calculated by any physicist, anywhere, at any time. Market participants, by contrast, struggle to explain what moves markets from day to day, let alone predict a stock’s price at any given time, anywhere in the world.

That may be because finance is harder than physics. Unlike atoms and pendulums, humans are emotionally complex creatures with free will and potential cognitive biases. They tend to act inconsistently and constantly react to the actions of others. Moreover, market participants profit by exploiting or manipulating the system that regulates them.

After losing a fortune in the South Sea Company, Newton remarked, “I can calculate the motions of the heavenly bodies, but not the madness of the people.” Mind you, Newton was no financial naif. He served at the Royal Mint for nearly 31 years and helped put the British pound on the gold standard, where it remained for more than two centuries.

Are all financial models wrong?

We use models to simplify the complexity of the real world so that we can focus on the features of a phenomenon that interest us. Like a map, a model inevitably fails to capture the richness of the terrain it represents. Statistician George Box famously quipped that “all models are wrong, but some are useful.”

This idea applies particularly to finance. Some scholars even argue that financial models are not only wrong but dangerous; the veneer of physical science leads proponents of economic models to place false confidence in the accuracy of their predictive power. This blind faith has led to many disastrous consequences for its adherents and for society at large [1], [2]. Renaissance Technologies, the most successful hedge fund in history, has put its critical view of financial theory into practice. It prefers to hire physicists, mathematicians, statisticians, and computer scientists over people with backgrounds in finance or on Wall Street, and it trades the markets using quantitative models grounded in non-financial disciplines such as information theory, data science, and machine learning.

Whether financial models are based on academic theory or empirical data mining strategies, they are subject to the three types of modeling errors described below. As a result, all models need to quantify the uncertainty inherent in their predictions. Errors in analysis and prediction may arise from any of the following modeling problems [1], [2], [3], [4] : inappropriate functional forms, inaccurate input parameters, or inability to adapt to structural changes in the market.

Three kinds of modeling errors

1. Errors in model specifications:

Almost all financial theories use the normal distribution in their models. For example, the normal distribution underlies Markowitz’s modern portfolio theory and the Black-Scholes-Merton option pricing theory [1], [2], [3]. However, there is ample evidence that stocks, bonds, currencies and commodities all have fat-tailed return distributions [1], [2], [3]. In other words, extreme events occur far more frequently than the normal distribution predicts.
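To see how much the choice of distribution matters, here is a quick sketch (not from the original article) comparing tail probabilities under a normal distribution and a fat-tailed Student’s t distribution with 3 degrees of freedom, a common illustrative stand-in for fat-tailed returns:

```python
# Probability of a 4-sigma daily move under a normal distribution
# vs. a fat-tailed Student's t distribution (df=3). The t distribution
# is an illustrative stand-in for fat tails, not a calibrated model.
from scipy import stats

p_normal = stats.norm.sf(4.0)        # P(X > 4) under the normal
p_fat_tail = stats.t(df=3).sf(4.0)   # same tail under Student's t

print(p_normal)    # tiny: a "once in decades" event
print(p_fat_tail)  # hundreds of times more likely
```

Under the normal distribution a 4-sigma move is vanishingly rare; under the fat-tailed alternative it is orders of magnitude more likely, which is closer to what markets actually deliver.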

If asset price returns were normally distributed, none of the following financial catastrophes would have happened: Black Monday, the Mexican peso crisis, the Asian currency crisis, the collapse of Long-Term Capital Management (which, incidentally, was steered by two Nobel Prize-winning economists), or the flash crash. “Mini flash crashes” in individual stocks occur even more frequently than these large-scale events.

However, because of its simplicity and analytical tractability, the normal distribution continues to be used by financial textbooks, courses and professionals for asset valuation and risk modeling. Given today’s advanced algorithms and computing resources, these justifications no longer make sense. This reluctance to abandon the normal distribution is a classic example of the “drunkard’s search”: a principle named after the joke about a drunk who loses his keys in a dark park but frantically searches for them under a lamppost, because that is where the light is.

2. Errors in model parameter estimation:

Such errors arise partly because market participants have access to different levels of information at different speeds. They also differ in processing power and in cognitive biases. All of these factors lead to great uncertainty in their estimates of model parameters.

Let’s look at a specific example: interest rates. As the basis for the valuation of all financial assets, interest rates are used to discount an asset’s uncertain future cash flows and assess its present value. At the consumer level, for example, variable interest rates on credit cards are tied to a benchmark called the prime rate, which typically moves in tandem with the federal funds rate, an interest rate of critical importance to the U.S. and world economies.
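As a minimal illustration of discounting (with made-up cash flows and rates, not tied to any model in this article):

```python
# Discount a hypothetical asset's future cash flows at two different
# rates to show why rates anchor valuation: higher rate, lower value.
def present_value(cash_flows, rate):
    """Present value of a list of annual cash flows at a flat annual rate."""
    return sum(cf / (1.0 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

cash_flows = [100.0, 100.0, 100.0]       # three annual payments (hypothetical)
pv_low = present_value(cash_flows, 0.02)  # discounted at 2%
pv_high = present_value(cash_flows, 0.05) # discounted at 5%
print(round(pv_low, 2), round(pv_high, 2))
```

The same cash flows are worth less when discounted at the higher rate, which is why uncertainty about future rates translates directly into uncertainty about asset values.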

Suppose you want to estimate your credit card interest rate one year from now. Say the current prime rate is 2%, and your credit card company charges you the prime rate plus 10%. Given the strength of the economy, you believe the Fed is more likely to raise interest rates than to lower them. The Fed will meet eight times in the next 12 months and, at each meeting, will either raise the federal funds rate by 0.25% or leave it unchanged.

In the TFP code example below (head over to Colab to see the full code), we use a binomial distribution to model your credit card interest rate at the end of the 12 months. Specifically, we use the TensorFlow Probability Binomial distribution class with the following parameters: total_count = 8 (the number of trials, or meetings), and probs = {0.6, 0.7, 0.8, 0.9}, our estimated range of the probability that the Fed raises the federal funds rate by 0.25% at each meeting.

Note: Colab link github.com/tensorflow/…

TensorFlow Probability Binomial distribution link www.tensorflow.org/probability…

# First we encode our assumptions.
num_times_fed_meets_per_year = 8.
possible_fed_increases = tf.range(
    start=0.,
    limit=num_times_fed_meets_per_year + 1)
possible_cc_interest_rates = 2. + 10. + 0.25 * possible_fed_increases
prob_fed_raises_rates = tf.constant([0.6, 0.7, 0.8, 0.9])

# Now we use TFP to compute probabilities in a vectorized manner.
# Pad a dim so we broadcast fed probs against CC interest rates.
prob_fed_raises_rates = prob_fed_raises_rates[..., tf.newaxis]
prob_cc_interest_rate = tfd.Binomial(
    total_count=num_times_fed_meets_per_year,
    probs=prob_fed_raises_rates).prob(possible_fed_increases)

In the chart below, note how the probability distribution of your credit card interest rate over the 12-month period depends heavily on your estimate of the probability that the Fed raises rates at each of the eight meetings. For every 0.1 increase in your estimate of the per-meeting probability of a hike, your credit card’s expected interest rate 12 months out increases by about 0.2%.
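This sensitivity can be checked directly from the binomial mean: the expected number of hikes is 8 × probs, and each hike adds 0.25% to the rate. A quick back-of-the-envelope check in plain Python:

```python
# Expected year-end credit card rate = prime (2%) + margin (10%)
# + 0.25% per expected hike, where expected hikes = 8 * probs
# (the mean of the binomial distribution).
rates = [2.0 + 10.0 + 0.25 * 8 * p for p in [0.6, 0.7, 0.8, 0.9]]
print(rates)  # each step of 0.1 in probs adds 0.2 to the expected rate
```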

Even if all market participants used binomial distributions in their models, it is easy to see how their forecasts of future interest rates would differ, because their estimates of probs differ. And this parameter is genuinely hard to estimate. Many institutions have dedicated analysts, including former Fed employees, who pore over every Fed document, speech and event to try to estimate it.

Recall that we assumed the parameter probs stays constant across the next eight Fed meetings. How realistic is that? The members of the Federal Open Market Committee (FOMC), who set the federal funds rate, are not locked into a single stance. They can and do change their individual views as economic conditions evolve. Assuming that probs remains constant over the next 12 months is not only unrealistic but risky.
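If probs is allowed to vary from meeting to meeting, the number of hikes follows a Poisson-binomial rather than a binomial distribution. Here is a sketch, with purely hypothetical per-meeting probabilities, that builds the distribution by convolving the eight Bernoulli pmfs:

```python
import numpy as np

# Hypothetical per-meeting hike probabilities, drifting down over the year.
meeting_probs = [0.9, 0.9, 0.8, 0.7, 0.7, 0.6, 0.6, 0.5]

# Distribution of the total number of hikes (a Poisson-binomial),
# built by convolving one Bernoulli pmf per meeting.
pmf = np.array([1.0])
for p in meeting_probs:
    pmf = np.convolve(pmf, [1.0 - p, p])

# Possible year-end rates: prime (2%) + margin (10%) + 0.25% per hike.
possible_rates = 2.0 + 10.0 + 0.25 * np.arange(len(pmf))
expected_rate = float(np.dot(pmf, possible_rates))
```

The mean number of hikes is simply the sum of the per-meeting probabilities, so the expected rate follows directly; the full pmf also captures the spread around it.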

3. Errors caused by failure of the model to adapt to structural changes:

The stochastic process generating the underlying data varies over time, which means it is neither stationary nor ergodic. We live in a dynamic capitalist economy characterized by technological innovation and shifting monetary and fiscal policies. Time-varying distributions of asset values and risk are the rule, not the exception. For such distributions, parameter values estimated from historical data are bound to introduce errors into predictions.

In the example above, if the economy showed signs of recession, the Fed might adopt a more neutral stance at the fourth meeting, leading you to lower the probs parameter from 70% to 50% from that point on. This change to probs in turn alters your forecast of credit card interest rates.
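The effect of this hypothetical regime shift on the expected year-end rate is easy to quantify (assuming, for illustration, that meetings 1-3 use probs = 0.7 and meetings 4-8 use probs = 0.5):

```python
# Expected year-end rate under the regime shift vs. a constant probs = 0.7.
# Expected hikes = sum of per-meeting hike probabilities.
probs_per_meeting = [0.7] * 3 + [0.5] * 5   # hypothetical split of 8 meetings
expected_hikes = sum(probs_per_meeting)
expected_rate = 2.0 + 10.0 + 0.25 * expected_hikes   # shifted regime
constant_rate = 2.0 + 10.0 + 0.25 * 8 * 0.7          # if probs stayed at 0.7
print(expected_rate, constant_rate)
```

A half-year change of stance moves the expected rate noticeably, which is exactly why freezing probs for 12 months is risky.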

Sometimes a time-varying distribution and its parameters change continuously or abruptly, as in the Mexican peso crisis. For either kind of change, the models we use need to adapt to evolving market conditions. We may need a new functional form, with different parameters, to explain and forecast asset values and risks under the new regime.

Suppose that after the fifth meeting in our example, the US economy is hit by an external shock, say a new populist government in Greece decides to default on its debt obligations. Now the Fed is more likely to lower interest rates than to raise them. Given this structural change in the Fed’s stance, we would have to change the binomial probability distribution in our model to a trinomial distribution with appropriate parameters.
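A trinomial model can be sketched by enumerating the three possible outcomes at each remaining meeting; the outcome probabilities below are purely illustrative, since the article does not specify them:

```python
import itertools

# After the shock, each of the 3 remaining meetings has three outcomes:
# cut (-0.25%), hold (0%), or raise (+0.25%). Hypothetical probabilities
# with a cut now more likely than a raise.
outcomes = [-0.25, 0.0, 0.25]
outcome_probs = [0.5, 0.35, 0.15]
remaining_meetings = 3

# Distribution of the net rate change over the remaining meetings,
# by enumerating all 3**3 outcome paths.
net_change = {}
for path in itertools.product(range(3), repeat=remaining_meetings):
    change = sum(outcomes[i] for i in path)
    p = 1.0
    for i in path:
        p *= outcome_probs[i]
    net_change[change] = net_change.get(change, 0.0) + p
```

With these illustrative numbers, the expected net change is negative, reflecting the Fed’s new easing bias, something no binomial (raise-or-hold) model can express.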

Conclusion

Unlike physics, finance is not an exact forecasting discipline. The two are far apart. Therefore, we should not treat academic theories and financial models as if they were quantum mechanics.

All financial models, whether based on academic theory or data mining strategies, are subject to three types of modeling errors. While these three errors can be reduced with appropriate modeling tools, they cannot be completely eliminated. Information asymmetry and cognitive bias will always exist. Because of the dynamic nature of capitalism, human behavior and technological innovation, asset value and risk models change over time.

Financial models need a framework that quantifies the uncertainty inherent in predictions about time-varying stochastic processes. Equally important, the framework must continually update the model, its parameters, or both as material new data arrive. Moreover, such models must be trainable on small data sets, because the underlying environment may change too quickly for large amounts of relevant data to accumulate.

In the next article, we will discuss the need for a modeling framework to quantify and model the uncertainties generated by three types of financial modeling errors.

Acknowledgments

We would like to thank the TensorFlow Probability team, and in particular Mike Shwe and Josh Dillon, for their assistance with early drafts of this article.

References

[1] The Money Formula, David Orrell and Paul Wilmott, Wiley, 2017

[2] Nobels for Nonsense, J.R. Thompson, L.S. Baggett, W.C. Wojciechowski and E.E. Williams, Journal of Post Keynesian Economics, Fall 2006

[3] Model Error, Katerina Simons, New England Economic Review, November 1997

[4] Bayesian Risk Management, Matt Sekerke, Wiley, 2015