Continuous Double Auction
Underpins many financial exchanges, such as NASDAQ and the NYSE.
- Buyers continuously submit bids (offers to buy)
- Sellers continuously submit asks (offers to sell)
Continuous because orders can arrive at any time, and double because both sides actively place prices.
This mechanism leads to rapid convergence on the theoretical equilibrium price.
Origin
- Smith demonstrated in 1962 that a small group of human traders in a laboratory CDA successfully converged on the equilibrium price [1].
- Gode and Sunder explored in 1993 how much intelligence traders require in order to converge on the theoretical equilibrium [2].
Zero Intelligence
Gode and Sunder simulated a CDA populated by two types of automated trader:
- zero-intelligence unconstrained (ZIU)
- zero-intelligence constrained (ZIC)
Both generated random quote prices, with ZIC additionally constrained never to trade at a loss.
Their paper demonstrated that these simple, non-adaptive algorithms performed similarly to human traders. They concluded that trader intelligence has little effect on efficiency in a CDA, proposing that Adam Smith’s invisible hand may have more agency in financial markets than expected.
Minimal Intelligence
In 1997, Cliff and Bruten extended Gode and Sunder’s work and showed that zero intelligence is not enough [3].
They identified market conditions under which ZIC fails to equilibrate, and proposed the zero-intelligence plus (ZIP) trader.
ZIP is known as a minimal-intelligence trader because it relies on a basic stochastic machine-learning algorithm for optimisation.
Another notable minimal-intelligence trader is Gjerstad and Dickhaut’s 1998 probabilistic model “GD” [4].
Minimal Intelligence > Human Intelligence
In 2001, IBM demonstrated that ZIP and GD repeatedly outperformed human traders in a market populated by both [5].
Their conclusion: efficient trading algorithms had business potential worth billions of dollars annually.
Many credit their study with initiating the rise of algorithmic trading in real financial markets, which has since spawned numerous new algorithmic traders. See [6], [7], [8], [9], and [10] for detailed explanations of some of these.
Differential Evolution
A population-based optimization algorithm for complex objective functions, especially when gradients aren’t available or the problem is noisy or non-smooth. Originally developed in 1995 [11] by Storn and Price. It maintains a population of candidate solutions which it evolves over generations through mutation, crossover, and selection operations.
Let the population at generation $g$ be

$$P_g = \{\mathbf{x}_{i,g} \mid i = 1, \dots, NP\}, \qquad \mathbf{x}_{i,g} \in \mathbb{R}^D$$

where:
- $NP$ is the population size (number of candidate solutions)
- $\mathbf{x}_{i,g}$ is the $i$-th candidate solution vector at generation $g$
- $D$ is the dimensionality of the problem
Algorithm Parameters:
- $F > 0$ is the mutation (or scaling) factor, typically $F \in [0.4, 1]$
- $CR \in [0, 1]$ is the crossover probability
Basic Operations
For each target vector $\mathbf{x}_{i,g}$, DE creates a trial vector $\mathbf{u}_{i,g}$ through:
1. Mutation: Generate a mutant vector $\mathbf{v}_{i,g}$ by combining randomly selected population members:
$$\mathbf{v}_{i,g} = \mathbf{x}_{r_1,g} + F \cdot (\mathbf{x}_{r_2,g} - \mathbf{x}_{r_3,g})$$
where $r_1, r_2, r_3 \in \{1, \dots, NP\}$ are distinct random indices, and $r_1, r_2, r_3 \neq i$.
2. Crossover: Create trial vector $\mathbf{u}_{i,g}$ by mixing components:
$$u_{j,i,g} = \begin{cases} v_{j,i,g} & \text{if } \operatorname{rand}_j \le CR \text{ or } j = j_{\text{rand}} \\ x_{j,i,g} & \text{otherwise} \end{cases}$$
where $\operatorname{rand}_j \sim U[0,1]$ and $j_{\text{rand}} \in \{1, \dots, D\}$ is a randomly chosen index ensuring at least one component is inherited from the mutant.
3. Selection: Choose the better vector for the next generation (here for minimisation):
$$\mathbf{x}_{i,g+1} = \begin{cases} \mathbf{u}_{i,g} & \text{if } f(\mathbf{u}_{i,g}) \le f(\mathbf{x}_{i,g}) \\ \mathbf{x}_{i,g} & \text{otherwise} \end{cases}$$
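The three operations above can be sketched end to end in Python. This is a minimal sketch of the generational DE/rand/1/bin scheme; the sphere objective, function name, and parameter values are illustrative, not from the source:

```python
import random

def de_rand_1(f, bounds, NP=20, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimise f over the given box bounds using generational DE/rand/1/bin."""
    rng = random.Random(seed)
    D = len(bounds)
    # Initialise the population uniformly at random within the bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        new_pop, new_fit = [], []
        for i in range(NP):
            # Mutation: three distinct random indices, all different from i.
            r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            # Binomial crossover with one guaranteed mutant component j_rand.
            j_rand = rng.randrange(D)
            u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            # Selection: the better of target and trial enters the next generation.
            fu = f(u)
            if fu <= fit[i]:
                new_pop.append(u); new_fit.append(fu)
            else:
                new_pop.append(pop[i]); new_fit.append(fit[i])
        pop, fit = new_pop, new_fit
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy run on a 3-dimensional sphere function.
sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = de_rand_1(sphere, [(-5.0, 5.0)] * 3)
```

On a smooth unimodal objective like the sphere, a few hundred generations are ample for the population to collapse onto the optimum.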
Variants
DE/rand/1
The basic DE variant, in which the base and difference vectors are chosen uniformly at random from the population. This is the formulation shown above in the mutation step.
Generational vs Steady-State
Generational: Maintains two populations simultaneously. Iterates through the first population and copies either the current solution or its mutated equivalent into the second population based on performance. At the end of iterating through the first population, the second population is complete, representing the next generation.
Steady-State: A single trial solution is selected at random from the population for each iteration. A mutated solution is created and evaluated against the trial solution. The better performing solution is immediately written back into the population, replacing the trial solution.
The steady-state implementation uses the same mutation formula as above, with the randomly selected solution playing the role of the target vector $\mathbf{x}_i$.
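A single steady-state iteration can be sketched as follows; this is a minimal sketch, and the function name and defaults are illustrative:

```python
import random

def steady_state_step(pop, fit, f, F=0.8, CR=0.9, rng=random):
    """One steady-state DE iteration: pick a random trial solution, build a
    mutated competitor, and immediately write the winner back into the population."""
    NP, D = len(pop), len(pop[0])
    i = rng.randrange(NP)  # the randomly selected trial solution
    # Same DE/rand/1 mutation and binomial crossover as the generational scheme.
    r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
    v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
    j_rand = rng.randrange(D)
    u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
         for j in range(D)]
    fu = f(u)
    if fu <= fit[i]:
        pop[i], fit[i] = u, fu  # immediate in-place replacement
```

Unlike the generational scheme, an improved solution becomes available to subsequent mutations immediately, rather than only at the next generation boundary.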
JADE: Adaptive Differential Evolution
JADE (proposed by Zhang and Sanderson, 2009) [12] is an adaptive variant that removes the need for trial-and-error tuning of $F$ and $CR$. It employs a greedier mutation strategy with mechanisms to prevent premature convergence.
DE/current-to-pbest/1
The mutation formula differs from DE/rand/1:

$$\mathbf{v}_i = \mathbf{x}_i + F_i \cdot (\mathbf{x}_{pbest} - \mathbf{x}_i) + F_i \cdot (\mathbf{x}_{r_1} - \mathbf{x}_{r_2})$$

where:
- $\mathbf{x}_i$ is the trial (current) solution
- $\mathbf{x}_{pbest}$ is randomly chosen from the top $100p\%$ best solutions in the population
- $\mathbf{x}_{r_1}, \mathbf{x}_{r_2}$ are randomly chosen: $r_1, r_2 \in \{1, \dots, NP\}$ with $r_1 \neq r_2 \neq i$
- $F_i$ is a unique mutation factor for each trial solution
- Typically $p \in [0.05, 0.2]$
With Archive
JADE can use an archive $A$ of replaced solutions to drive progress:

$$\mathbf{v}_i = \mathbf{x}_i + F_i \cdot (\mathbf{x}_{pbest} - \mathbf{x}_i) + F_i \cdot (\mathbf{x}_{r_1} - \tilde{\mathbf{x}}_{r_2})$$

where $\tilde{\mathbf{x}}_{r_2} \in P \cup A$ (selected from population or archive). When a mutated solution replaces its trial solution, the replaced solution is added to $A$. If $|A| > NP$, solutions are randomly removed until $|A| \le NP$.
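The archive-based mutation can be sketched as follows; this is a minimal sketch, with the function name, the size of the top subset, and the duplicate-index handling chosen for illustration:

```python
import random

def current_to_pbest_mutant(pop, fit, i, archive, F_i, p=0.1, rng=random):
    """Build a DE/current-to-pbest/1 mutant for solution i, drawing r2 from
    the union of the population and the archive of replaced solutions."""
    NP, D = len(pop), len(pop[0])
    # x_pbest: a random member of the top 100p% of the population by fitness.
    n_top = max(1, int(p * NP))
    top = sorted(range(NP), key=lambda k: fit[k])[:n_top]
    x_pbest = pop[rng.choice(top)]
    # r1 from the population; r2 from population-plus-archive.
    r1 = rng.choice([j for j in range(NP) if j != i])
    union = pop + archive
    r2 = rng.choice([j for j in range(len(union))
                     if union[j] is not pop[i] and union[j] is not pop[r1]])
    x, x_r1, x_r2 = pop[i], pop[r1], union[r2]
    return [x[j] + F_i * (x_pbest[j] - x[j]) + F_i * (x_r1[j] - x_r2[j])
            for j in range(D)]
```

Setting $F_i = 0$ collapses the mutant back onto $\mathbf{x}_i$, which makes the formula easy to sanity-check.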
Adaptive Mutation Factor
Each mutation factor $F_i$ is generated using a Cauchy distribution:

$$F_i \sim \operatorname{Cauchy}(\mu_F, 0.1)$$

The mean $\mu_F$ is updated after each generation (or every $NP$ evaluations in steady-state):

$$\mu_F \leftarrow (1 - c) \cdot \mu_F + c \cdot \operatorname{mean}_L(S_F)$$

where:
- $c$ is the adaptation rate
- $S_F$ is the set of successful $F$ values from the generation
- $\operatorname{mean}_L$ is the Lehmer mean:

$$\operatorname{mean}_L(S_F) = \frac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}$$

The Lehmer mean propagates larger mutation factors to improve progress rate.
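The sampling and update rules can be sketched as follows. This is a minimal sketch; the names are illustrative, and the handling of out-of-range Cauchy samples (truncate at 1, re-sample if non-positive) is an assumption following the common JADE convention:

```python
import math
import random

def sample_F(mu_F, rng=random):
    """Draw F_i from Cauchy(mu_F, 0.1): truncate to 1 if it overshoots,
    re-sample if it comes out non-positive."""
    while True:
        # Inverse-CDF sampling from a Cauchy(mu_F, 0.1) distribution.
        F = mu_F + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
        if F >= 1.0:
            return 1.0
        if F > 0.0:
            return F

def update_mu_F(mu_F, S_F, c=0.1):
    """mu_F <- (1 - c) * mu_F + c * mean_L(S_F), mean_L being the Lehmer mean."""
    if not S_F:
        return mu_F  # no successful F values this generation; keep the old mean
    lehmer = sum(F * F for F in S_F) / sum(F for F in S_F)
    return (1 - c) * mu_F + c * lehmer
```

Because the Lehmer mean weights larger values more heavily than the arithmetic mean, $\mu_F$ drifts toward the larger of the successful mutation factors.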
Parameter Selection
JADE introduces two control parameters:
- $c$: adaptation rate
- $p$: greediness of mutation
Important: The relationship between $p$ and population size $NP$ matters. If $p$ and $NP$ are both small, then $\lceil p \cdot NP \rceil = 1$, effectively implementing current-to-best rather than current-to-pbest. This degrades to a greedier, less reliable strategy. Choose $p$ and $NP$ such that multiple solutions are in the top $100p\%$ subset.
Master’s Paper Exploring JADE in Stock Exchange
Footnotes
1. Vernon L. Smith. “An Experimental Study of Competitive Market Behavior”. In Journal of Political Economy, pp. 111-137. 1962. ↩
2. Dhananjay K. Gode and Shyam Sunder. “Allocative Efficiency of Markets with Zero-Intelligence Traders: Markets as a Partial Substitute for Individual Rationality”. In Journal of Political Economy, vol. 101, no. 1, pp. 119-137. 1993. ↩
3. Dave Cliff and Janet Bruten. “Zero is Not Enough: On the Lower Limit of Agent Intelligence for Continuous Double Auction Markets”. 1997. ↩
4. Steven Gjerstad and John Dickhaut. “Price Formation in Double Auctions”. 1998. ↩
5. Rajarshi Das, James E. Hanson, Jeffrey O. Kephart, and Gerald Tesauro. “Agent-Human Interactions in the Continuous Double Auction”. 2001. ↩
6. Gerald Tesauro and Jonathan L. Bredin. “Strategic Sequential Bidding in Auctions Using Dynamic Programming”. 2002. ↩
7. Steven Gjerstad. “The Impact of Pace in Double Auction Bargaining”. 2003. ↩
8. Perukrishnen Vytelingum, Dave Cliff, and Nicholas R. Jennings. “Strategic Bidding in Continuous Double Auctions”. 2008. ↩
9. Jasmina Arifovic and John Ledyard. “A Behavioral Model for Mechanism Design: Individual Evolutionary Learning”. In Journal of Economic Behavior & Organization, vol. 78, pp. 374-395. ↩
10. Dave Cliff. “Parameterised-Response Zero-Intelligence Traders”. 2022. ↩
11. Rainer Storn and Kenneth Price. “Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces”. 1997. ↩
12. Jingqiao Zhang and Arthur C. Sanderson. “JADE: Adaptive Differential Evolution with Optional External Archive”. 2009. ↩