This is part 1 of a 3-part series analyzing a betting game. If you'd like to jump to another part, here are direct links:
- Intro Page
- Part 1 - A Simpler Problem (You are here)
- Part 2 - Analysis and Simulations
- Part 3 - Mathematical Restructuring
- Conclusion
An Even Simpler Game: An Introduction to the Kelly Criterion
Before we look at a situation with 2 coins, let's consider a game with just 1. Say you are allowed to make repeated even-money bets on a coin you know has a 60% chance of landing on heads. Obviously you'd want to bet on heads every time¹. But how much would you want to bet each round to maximize your profits?
This problem is well-studied: John Larry Kelly Jr. developed the Kelly criterion in 1956. Essentially, when faced with a betting situation like this, the criterion states that one should place bets that maximize the expected value of the logarithm of wealth (or, equivalently, maximize the geometric growth rate, which I personally find easier to think about).
This probably sounds extremely complicated. An intuitive approach is to think about how you might actually bet in such a situation. Every bet you make would be a percentage of how much you currently have - it doesn't make sense to put the same dollar amount on a bet whether you have $10,000 or $10,000,000. Therefore, repeated winnings and losses multiply with each other (rather than add and subtract), so a geometric approach makes more sense than an arithmetic one.
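The multiplicative nature of repeated bets is easy to see in a few lines of Python (a toy sketch; the $100 starting wealth and 20% bet size here are arbitrary illustrative choices):

```python
# Wins and losses multiply into the running wealth rather than adding to it,
# so the order of outcomes doesn't matter -- only the win/loss counts do.
f = 0.2          # fraction of current wealth bet each round (arbitrary here)
wealth = 100.0   # arbitrary starting wealth
for outcome in (1 + f, 1 - f, 1 + f):   # win, lose, win
    wealth *= outcome
print(round(wealth, 2))   # 100 * 1.2 * 0.8 * 1.2 = 115.2
```

Reordering the outcomes to lose-win-win lands at the same 115.2, which is exactly why only the counts of heads and tails matter below.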
You can find how much you could expect to win by finding how many bets you would expect to win and to lose. In 10 flips you should expect 6 heads and 4 tails to occur. If the fraction of your stake that you bet each time is $f$, then the most probable outcome is

$$S = S_0 (1+f)^6 (1-f)^4$$

where $S_0$ is your initial stake and $S$ is what you would expect your end stake to be. The rate, then, would be the 10th root, so

$$r = (1+f)^{6/10} (1-f)^{4/10} = (1+f)^{0.6} (1-f)^{0.4}$$

Our standard geometric growth rate has emerged (though in the future we will be substituting $p = 0.6$ and $q = 1 - p = 0.4$ for readability):

$$r = (1+f)^p (1-f)^q$$
In order to find what value of $f$ maximizes this function we need to take the derivative with respect to $f$ and set it equal to 0. The algebra is easier if we maximize $\log r$ instead of $r$, which we can do because the log is monotonic. We have

$$\log r = p \log(1+f) + q \log(1-f)$$

Thus the derivative is

$$\frac{d}{df} \log r = \frac{p}{1+f} - \frac{q}{1-f}$$

So, setting this equal to 0 and doing a bit of algebra, remembering $q = 1-p$, we have

$$f = p - q = 2p - 1$$
In our example where $p = 0.6$, $f = 0.6 - 0.4 = 0.2$, so according to the Kelly criterion you should bet 20% of the stake every time.
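As a sanity check on the calculus (a sketch, assuming the 60/40 coin from the example), a brute-force search over betting fractions lands on the same answer:

```python
# Brute-force check that f = 0.2 maximizes the per-flip growth rate
# r(f) = (1+f)^0.6 * (1-f)^0.4 for the 60/40 coin.
p, q = 0.6, 0.4

def growth_rate(f):
    """Per-flip geometric growth rate when betting fraction f."""
    return (1 + f) ** p * (1 - f) ** q

# Evaluate r on a fine grid of fractions in [0, 1) and keep the best.
best_f = max((i / 1000 for i in range(1000)), key=growth_rate)
print(round(best_f, 3))   # 0.2
```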
What was derived here was not rigorous and describes only a special case of the Kelly criterion². However, the above will be enough to motivate the rest of this discussion.
Finishing Out Our Example: Notes on the Differences Between Expected Value and Probable Values
Now that we know how much we are supposed to bet we can plug it back into our geometric growth rate formula to determine how much we can expect to make.
Before we said that

$$S = S_0 r^n$$

$n$ being the number of rounds played. Ignoring the fact that we may be betting small fractions of a penny, if we are able to bet on 10 flips and start with $1, we find

$$S = \$1 \times \left(1.2^{0.6} \times 0.8^{0.4}\right)^{10} \approx \$1.22$$

Not terrible, but what if we are given 100 flips?

$$S = \$1 \times \left(1.2^{0.6} \times 0.8^{0.4}\right)^{100} \approx \$7.49$$
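These most probable outcomes can be reproduced numerically (a sketch, using the Kelly fraction $f = 0.2$ from above and a $1 starting stake):

```python
# Compound the most probable per-flip growth rate r over n flips,
# starting from a $1 stake.
p, q, f = 0.6, 0.4, 0.2
r = (1 + f) ** p * (1 - f) ** q    # per-flip geometric growth rate, ~1.0203

for n in (10, 100):
    print(n, "flips:", round(1.0 * r ** n, 2))   # ~1.22 and ~7.49
```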
There is a problem, though. These figures are not actually the expected values of these scenarios; they are the most probable outcomes.
The expected value is defined as the summation, over all possible outcomes, of the value of the outcome multiplied by the probability of that outcome. In mathematical notation,

$$E[X] = \sum_i x_i P(x_i)$$
In this situation,

$$S_h = S_0 (1+f)^h (1-f)^{n-h}$$

the stake after obtaining $h$ heads (with $n-h$ tails), and the probability is

$$P(h) = \binom{n}{h} p^h q^{n-h}$$

the standard binomial distribution. With 100 flips, plugging the above in, we find that

$$E[S] = \sum_{h=0}^{100} \binom{100}{h} (0.6)^h (0.4)^{100-h} (1.2)^h (0.8)^{100-h} \approx \$50.50$$
considerably higher than our last value. As a side note, we can find this much more easily by using the arithmetic mean:

$$E[S] = S_0 \big(p(1+f) + q(1-f)\big)^n = \$1 \times 1.04^{100} \approx \$50.50$$

as the expected value is in fact the arithmetic mean over all possible outcomes.
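Both routes to the expected value can be checked directly (a sketch for the 100-flip, $f = 0.2$ case; `math.comb` supplies the binomial coefficient):

```python
from math import comb

# Expected stake after n flips, betting fraction f on the 60/40 coin,
# computed two ways: the full binomial sum over all heads counts h,
# and the closed form (p*(1+f) + q*(1-f))^n.
n, p, q, f = 100, 0.6, 0.4, 0.2

binomial_sum = sum(
    comb(n, h) * p**h * q**(n - h) * (1 + f)**h * (1 - f)**(n - h)
    for h in range(n + 1)
)
closed_form = (p * (1 + f) + q * (1 - f)) ** n   # 1.04 ** 100

print(round(binomial_sum, 2), round(closed_form, 2))  # both about 50.50
```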
But regardless, why is this new measure so much higher than the old one? Why did we build our optimization function around the geometric mean when the expected value uses the arithmetic?
To answer the first question, the expected value is much higher because there is a small chance of winning a very large amount (whereas you can only lose so much). These low-probability, high-payoff outcomes skew the average up. Still, if you could repeat the whole 100-flip experiment many times, your average winnings would in fact approach this expected value.
As to the latter, if we had instead chosen the arithmetic mean as our optimization function, the outcome is even more extreme: $p(1+f) + q(1-f) = 1 + 0.2f$ grows with $f$, so it is maximized at $f = 1$, i.e. betting everything every round. For any number of flips $n$, that leaves an impossibly small chance ($0.6^n$) of winning $2^n S_0$. Otherwise the payoff is 0.
The expected value of this strategy is higher still, the massive payoff of $2^n S_0$ making up for its minuscule probability of hitting. The key to the Kelly criterion, though, is that it delivers the best results in the long run: as the number of rounds grows, it outperforms any other fixed betting fraction with probability approaching 1.
Continue to Part 2, which explores the actual problem as proposed. It answers the basic question and includes some Python to simulate the problem and perform some calculations.
Footnotes
1. This might not be completely obvious to some. This was actually tested in a study: participants were given $25 and 30 minutes to make bets. They kept whatever they won (limited to a maximum of $250). Some of the participants went broke after betting their entire stake on tails. You can read the full paper here. ↩
2. This article is a fantastic resource for exploring more of the areas that the full criterion covers. I'm a big fan of the interactive components. ↩