I sent a link to my previous blog post on the optimal play of Farkle to Professor Todd Neller of Gettysburg College. (I thought he might be interested in it since it was largely based on his previous analysis of the simpler dice game Pig.) We ended up talking and decided to write a paper together on optimal Farkle play. Todd presented our paper at the 15th Advances in Computer Games Conference (ACG 2017) in Leiden, Netherlands, on July 4, 2017. Our paper was voted second place in the best paper competition.
The paper focuses on a more minimalist rule set (whereas my previous blog post solved for Facebook Farkle rules). The optimization equations are much simplified by using a pair of self-referential equations describing pre-roll and post-roll game states. The paper also includes a comparison of optimal play vs. max-expected-score play, a mechanism allowing a human to perfectly replicate max-expected-score play, and some simple techniques you can use to win over 49% of your games against an optimal player.
As of the time of this post, the proceedings from the conference have not yet been published, but a link to our paper is provided here for your convenience:
Optimal Play of the Farkle Dice Game
There are some POV-Ray images included in the paper that graphically show the game states from which you should bank. For your viewing pleasure, I've included below links to the images in their original 16-megapixel detail.
Image by Matt Busche using modified povray source by Piotr Borys
Neller and Presser modeled a simple dice game called Pig as a Markov Decision Process (MDP) and used value iteration to find the optimal game-winning strategy [1]. Inspired by their approach, I've constructed a variant of an MDP which can be used to calculate the strategy that maximizes the chances of winning two-player Farkle. Due to the three-consecutive-farkle penalty, an unfortunate or foolish player can farkle repeatedly to achieve an arbitrarily large negative score. For this reason the number of game states is unbounded and a complete MDP model of Farkle is not possible. To bound the problem, a limit on the lowest possible banked score is enforced. The calculated strategy is shown to converge exponentially to the optimal strategy as this bound on banked scores is lowered.
Each farkle turn proceeds by iteratively making a pre-roll banking decision, a (contingent) roll of the dice, and a post-roll scoring decision. I modified the classic MDP to include a secondary (post-roll) action to fit this turn model. A reward function that incentivizes winning the game is applied. A similarly modified version of value-iteration (that maximizes the value function for both the pre-roll banking decision, and the post-roll scoring decision) is then used to find an optimal farkle strategy.
With a lower bound of -2500 points for banked scores, there are 423,765,000 distinct game states, and so it is not convenient to share the entire strategy in printed form. Instead, I provide some general characterizations of the strategy. For example, if both players use this same strategy, the player going first will win 53.487% of the time. I also provide samples of complete single-turn strategies for various initial banked scores. Currently, only the strategy for Facebook Farkle has been calculated, but the strategy for other scoring variants of Farkle could easily be derived using the same software.
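To give a feel for the fixed-point computation, here is a toy sketch (my own illustration, not the actual Farkle model): a single state offers a sure "bank" action and a "roll" action whose win probability refers back to the same state. Value iteration repeatedly applies the self-referential equation, and the error shrinks geometrically, mirroring the exponential convergence described above.

```cpp
#include <cassert>
#include <cmath>

// Toy illustration (not the Farkle model itself): from one state you may
// either take a sure 1/2 chance of winning, or roll a die that wins on a
// 5 or 6, loses on a 1, and otherwise returns you to the same state.
// The rolling action's win probability is self-referential:
//     V = max( 1/2,  2/6 + (3/6) * V )
// Value iteration repeatedly applies the right-hand side; each sweep
// halves the remaining error, so the estimate converges exponentially
// to the true optimal value of 2/3.
double solveToyGame(int sweeps) {
    double v = 0.0;                               // initial guess
    for (int i = 0; i < sweeps; ++i) {
        double bank = 0.5;                        // sure coin flip
        double roll = 2.0 / 6.0 + 3.0 / 6.0 * v;  // self-referential roll
        v = (bank > roll) ? bank : roll;
    }
    return v;
}
```

The full computation iterates an analogous (but vastly larger) system over all game states until the win probabilities stop changing.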
In this post I share a useful programming technique I first saw used
twenty-some years ago while reading through some C code. The technique
combined aspects of an array and a linked-list into a single data construct.
I've here abstracted the technique into a reusable container I call a
partition. (If someone knows of some library in some language that offers a
similar container I'd appreciate a reference.)
A partition is a high performance container that organizes
a set of N sequential integer IDs (which together are called the domain) into
an arbitrary number of non-overlapping groups. (I.e., each ID of the
domain can be in at most one group.) The functionality of a partition
includes these constant time operations:
Given an ID, determine the group to which the ID belongs (if any).
Given a group, find an ID in that group (if any).
Given an ID, move that ID to a particular group (simultaneously removing
it from the group with which it was previously a member if any).
None of the above operations uses the heap, and each is considerably faster
than even a single push to a standard implementation of a doubly linked list.
A partition has not one, but two fundamental classes: Domain and Group.
The set of sequential IDs in a partition are defined by a Domain
object; and a Group is a list of IDs from a Domain. Domain and Group each
have two templatization parameters M and V. IDs can be bound to a
user-defined member object of type M and Groups can be bound to a user
defined value object of type V. Together these associations enable
mappings from objects of type M (with known IDs) to objects of type V, and
conversely from objects of type V (with known Groups) to a list of objects
of type M.
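To make the mechanics concrete, here is a minimal single-class sketch of the underlying array-plus-links technique (the names are mine, not the real Domain/Group API; it collapses the two classes into one and omits the M and V bindings). Each ID carries intrusive prev/next links stored in flat arrays, so every operation is a handful of array reads and writes:

```cpp
#include <cassert>
#include <vector>

// Minimal sketch (hypothetical names, not the real Domain/Group API).
// All per-ID and per-group state lives in flat arrays allocated once,
// so groupOf, anyIdIn, and moveTo are all O(1) and heap-free.
class PartitionSketch {
public:
    static constexpr int NONE = -1;

    PartitionSketch(int numIds, int numGroups)
        : group_(numIds, NONE), prev_(numIds, NONE), next_(numIds, NONE),
          head_(numGroups, NONE) {}

    // Group to which id currently belongs (NONE if ungrouped).
    int groupOf(int id) const { return group_[id]; }

    // Some member of group g (NONE if g is empty).
    int anyIdIn(int g) const { return head_[g]; }

    // Move id into group g, first unlinking it from its old group (if any).
    void moveTo(int id, int g) {
        unlink(id);
        next_[id] = head_[g];
        prev_[id] = NONE;
        if (head_[g] != NONE) prev_[head_[g]] = id;
        head_[g] = id;
        group_[id] = g;
    }

private:
    void unlink(int id) {
        int g = group_[id];
        if (g == NONE) return;
        if (prev_[id] != NONE) next_[prev_[id]] = next_[id];
        else head_[g] = next_[id];
        if (next_[id] != NONE) prev_[next_[id]] = prev_[id];
        group_[id] = NONE;
    }

    std::vector<int> group_, prev_, next_;  // per-ID state, indexed by ID
    std::vector<int> head_;                 // per-group list head
};
```

Because the links are intrusive and preallocated, moving an ID between groups never allocates, which is the property that makes the container faster than pushing onto a conventional heap-allocated doubly linked list.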
The partition is potentially useful any time objects need to be
organized into groups; and that happens all the time! In this post, I show
how you can use a partition to track the states of a set of like objects.
This is just one possible usage of a partition and is intended only as a
tutorial example of how you can use this versatile storage class.
Image by Matt Busche
I've implemented a set of backtrack algorithms to find solutions to various
polyomino and polycube puzzles (2-D and 3-D puzzles where you have to fit
pieces composed of little squares or cubes into a confined space). Examples
of such puzzles include the Tetris Cube, the Bedlam Cube, the Soma Cube, and
Pentominoes. My approach to the problem is perhaps unusual in that I've
implemented many different algorithmic techniques simultaneously into a single
puzzle solving software application. I've found that the best algorithm to
use for a problem can depend on the size and dimensionality of the puzzle.
To take advantage of this, when the number of remaining pieces reaches
configurable transition thresholds my software can turn off one algorithm and
turn on another. Three different algorithms are implemented: de Bruijn's
algorithm, Knuth's DLX, and my own algorithm which I call most-constrained-hole
(MCH). DLX is most commonly used with an ordering heuristic that picks the
hole or piece with the fewest fit choices, but other simple ordering heuristics
are shown to improve performance for some puzzles.
In addition to these three core algorithms, a set of constraints is woven
into the algorithms, giving great performance benefits. These include
constraints on the volumes of isolated subspaces, parity (or coloring)
constraints, fit constraints, and constraints that take advantage of rotational symmetry.
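As a sketch of the first of these constraints (my own simplification, not the actual solver code, and assuming all pieces have the same size, as in pentomino puzzles): after each placement, flood-fill the empty cells of the board, and if any isolated region's volume is not a multiple of the piece size, the current branch can be pruned immediately.

```cpp
#include <cassert>
#include <vector>

// Hedged sketch of an isolated-subspace volume constraint, assuming all
// pieces have the same size (as in pentomino puzzles). Cells with value
// 0 are empty; anything else is occupied.

// Flood-fill from (r, c), marking visited cells and returning the
// number of empty cells in this connected region.
static int regionSize(std::vector<std::vector<int>>& grid, int r, int c) {
    if (r < 0 || r >= (int)grid.size() ||
        c < 0 || c >= (int)grid[0].size() || grid[r][c] != 0)
        return 0;
    grid[r][c] = -1;  // mark visited
    return 1 + regionSize(grid, r + 1, c) + regionSize(grid, r - 1, c) +
               regionSize(grid, r, c + 1) + regionSize(grid, r, c - 1);
}

// True if every isolated empty region's volume is a multiple of
// pieceSize -- a necessary (not sufficient) condition for fillability.
// The grid is taken by value so the flood-fill marks stay local.
bool regionsFillable(std::vector<std::vector<int>> grid, int pieceSize) {
    for (int r = 0; r < (int)grid.size(); ++r)
        for (int c = 0; c < (int)grid[0].size(); ++c)
            if (grid[r][c] == 0 && regionSize(grid, r, c) % pieceSize != 0)
                return false;
    return true;
}
```

In the full solver this kind of check runs incrementally during backtracking, so a doomed branch is abandoned long before every placement has been tried.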
In this (rather long) blog entry I present the algorithms and techniques I've
implemented and share my insights into where each works well and where it
doesn't. You can also download my software source, an executable version of the
solver, and the solutions to various well known puzzles.
For ten years I've thought about replacing the decrepit deck behind my house. I wanted to do something unique to appease my sinful pride. This was the initial concept sketch for my deck, consisting of two 15-foot-diameter octagons placed side by side.
The octagon on the right is only about 14 inches above grade. The octagon on the left is one step up (maybe 7 inches higher) and surrounds an octagon-shaped hot tub. I wanted the decking for each octagon to be laid out circularly (as shown) to emphasize the octagonal shapes.
I modified my Zilch strategy generation software to model the scoring rules for the Super Farkle game available on Facebook. I wasn't previously a Facebook user, so I created an account just to try out my strategy. Over several days, I played about 180 games of Farkle and won about 55% of the time. But I'm not sure this means much, for a few reasons.
Zilch is a fun little dice game codified into an online game by Gaby
Vanhegan that can be played at
http://playr.co.uk/. Zilch is actually a
variation of the game Farkle, which goes by several other names including
Zonk, 5000, 10000, Wimp Out,
Greed, Squelch, and Hot Dice [1]. I've
worked out the strategy that maximizes your expected game score and wanted to
share the analysis, my strategy finder software, and the strategy itself.
Depending on whether you have zero, one, or two consecutive zilches from
previous turns, three successively more conservative turn-play strategies are
required to maximize your long term average score. Using these three strategies
you rack up an average of 620.855 points per turn, which is the best you
can possibly do.
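The flavor of this expected-score calculation can be shown with a hedged sketch on a much simpler game, Pig (the game Neller analyzed): roll a single die; a 1 ends the turn scoring nothing, any other face adds to the turn total, and you may bank at any time. Under a hypothetical "bank at h or above" threshold policy, the expected turn score satisfies a small self-referential system that can be evaluated from the largest reachable total downward:

```cpp
#include <cassert>
#include <vector>

// Hedged sketch: expected turn score in Pig (not Zilch itself) under a
// "bank at h or above" threshold policy. The value function satisfies
//   E(t) = t                                   for t >= h  (bank)
//   E(t) = (1/6)*0 + sum_{d=2..6} E(t+d) / 6   for t <  h  (keep rolling)
// and is computed from the largest reachable total downward.
double expectedTurnScore(int h) {
    std::vector<double> E(h + 6, 0.0);       // totals 0 .. h+5 are reachable
    for (int t = h + 5; t >= 0; --t) {
        if (t >= h) { E[t] = t; continue; }  // at or above threshold: bank
        double v = 0.0;                      // rolling a 1 contributes 0
        for (int d = 2; d <= 6; ++d) v += E[t + d] / 6.0;
        E[t] = v;
    }
    return E[0];
}
```

Scanning thresholds with this function recovers Pig's classic hold-at-20 rule for maximizing expected turn score; the same style of threshold reasoning, repeated across the zero-, one-, and two-zilch cases, yields the three strategies described above.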