Tuesday, 18 November 2014

Genetic Optimizers

Imagine something powerful enough to solve all the problems inherent in the creation of a human being. That something surely represents the ultimate in problem solving and optimization. What is it? It is the familiar process of evolution. Genetic optimizers endeavor to harness some of that incredible problem-solving power through a crude simulation of the evolutionary process. In terms of overall performance and the variety of problems that may be solved, there is no general-purpose optimizer more powerful than a properly crafted genetic one.

Genetic optimizers are stochastic optimizers in the sense that they take advantage of random chance in their operation. It may not seem believable that tossing dice can be a great way to solve problems, but, done correctly, it can be! In addition to randomness, genetic optimizers employ selection and recombination. The clever integration of random chance, selection, and recombination is responsible for the genetic optimizer's great power. A full discussion of genetic algorithms, which are the basis for genetic optimizers, appears in Part II.
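The interplay of random chance, selection, and recombination can be sketched in a few lines of C++. The following is an illustrative toy only: the population size, the tournament selection scheme, the mutation rate, and the quadratic stand-in fitness function are all assumptions made for this example, and none of it reflects the internals of the C-Trader toolkit.

```cpp
// Toy genetic optimizer: evolves a pair of integer parameters (range 2..50,
// like the moving-average lengths discussed later) to maximize a fitness value.
#include <algorithm>
#include <random>
#include <vector>

struct Individual { int a, b; double fitness; };

// Stand-in fitness with a single peak at a = 12, b = 30 (a real trading
// application would substitute net profit or another performance measure).
double fitness(int a, int b) {
    return 1000.0 - (a - 12) * (a - 12) - (b - 30) * (b - 30);
}

Individual evolve(int generations, std::mt19937& rng) {
    std::uniform_int_distribution<int> gene(2, 50);
    std::uniform_int_distribution<int> coin(0, 1);

    // Random initial population (the "random chance" ingredient).
    std::vector<Individual> pop(40);
    for (auto& ind : pop) {
        ind.a = gene(rng); ind.b = gene(rng);
        ind.fitness = fitness(ind.a, ind.b);
    }

    for (int g = 0; g < generations; ++g) {
        std::vector<Individual> next;
        while (next.size() < pop.size()) {
            // Selection: the fitter of two randomly chosen individuals wins.
            auto pick = [&]() {
                const Individual& x = pop[rng() % pop.size()];
                const Individual& y = pop[rng() % pop.size()];
                return x.fitness > y.fitness ? x : y;
            };
            Individual p1 = pick(), p2 = pick();
            // Recombination: each gene inherited from either parent.
            Individual child{coin(rng) ? p1.a : p2.a, coin(rng) ? p1.b : p2.b, 0.0};
            // Mutation: occasionally replace a gene with a random value.
            if (rng() % 10 == 0) child.a = gene(rng);
            if (rng() % 10 == 0) child.b = gene(rng);
            child.fitness = fitness(child.a, child.b);
            next.push_back(child);
        }
        pop = next;
    }
    return *std::max_element(pop.begin(), pop.end(),
        [](const Individual& x, const Individual& y) { return x.fitness < y.fitness; });
}
```

Even this crude version illustrates why the method searches selectively: fit parameter combinations are preferentially chosen as parents, so the population concentrates in promising regions of the solution space rather than sampling it uniformly.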

Genetic optimizers have many highly desirable characteristics. One such characteristic is speed, especially when faced with combinatorial explosion. A genetic optimizer can easily be many orders of magnitude faster than a brute force optimizer when there are many rules, or parameters with many possible values, to manipulate. This is because, like user-guided optimization, genetic optimization can focus on important regions of solution space while mostly ignoring blind alleys. In contrast to user-guided optimization, the benefit of a selective search is achieved without the need for human intervention.

Genetic optimizers can swiftly solve complex problems, and they are also more immune than other kinds of optimizers to the effects of local maxima in the fitness surface or, equivalently, local minima in the cost surface. Analytic methods are worst in this regard: they almost always walk right to the top of the nearest hill or bottom of the nearest valley, without regard to whether higher hills or lower valleys exist elsewhere. In contrast, a good genetic optimizer often locates the globally best solution, quite an impressive feat when accomplished for cantankerous fitness surfaces, such as those associated with matrices of neural connection weights.

Another characteristic of genetic optimization is that it works well with fitness surfaces marked by discontinuities, flat regions, and other troublesome irregularities. Genetic optimization shares this characteristic with brute force, user-guided, annealing-based, and other nonanalytic optimization methods. Solutions that maximize such items as net profit, return on investment, and the Sharpe Ratio, which define difficult, nonanalytic fitness landscapes, can be found using a genetic optimizer. Genetic optimizers shine with difficult fitness functions that lie beyond the purview of analytic methods. This does not mean that they cannot be used to solve problems having more tractable fitness surfaces: although perhaps slower than analytic methods, they have the virtue of being more resistant to the traps set by local optima.

Overall, genetic optimizers are the optimizers of choice when there are many parameters or rules to adapt, when a global solution is desired, or when arbitrarily complex (and not necessarily differentiable or continuous) fitness or cost functions must be handled. Although special-purpose optimizers can outperform genetic optimizers on specific kinds of problems, for general-purpose optimization, genetic optimizers are among the most powerful tools available.

What does a genetic optimizer look like in action? The dual moving-average crossover system discussed earlier was translated to C++ so that the genetic optimizer in the C-Trader toolkit could be used to solve for the two system parameters, LenA and LenB. LenA, the period of the first moving average, was examined over the range of 2 through 50, as was LenB, the period of the second moving average. Optimization was for net profit so that the results would be directly comparable with those produced earlier by brute force optimization. Below is the C++ code for the crossover system:
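The original C-Trader toolkit listing is not reproduced in this excerpt; what follows is a minimal sketch of the crossover logic as described in the text. The function names, the one-unit position sizing, and the omission of commissions and slippage are assumptions made for this illustration.

```cpp
// Sketch of a dual moving-average crossover system: go long when the faster
// average (period LenA) crosses above the slower one (period LenB), reverse
// to short on the opposite cross. Returns total net profit in price points
// on closed trades, the quantity being optimized.
#include <algorithm>
#include <vector>

// Simple moving average of the last `len` closes ending at bar `i`.
double sma(const std::vector<double>& close, int i, int len) {
    double sum = 0.0;
    for (int k = i - len + 1; k <= i; ++k) sum += close[k];
    return sum / len;
}

double crossoverSystem(const std::vector<double>& close, int LenA, int LenB) {
    int maxLen = std::max(LenA, LenB);   // bars needed before trading starts
    double netProfit = 0.0;
    int position = 0;                    // +1 long, -1 short, 0 flat
    double entryPrice = 0.0;

    for (int i = maxLen; i < (int)close.size(); ++i) {
        double fastPrev = sma(close, i - 1, LenA);
        double slowPrev = sma(close, i - 1, LenB);
        double fast = sma(close, i, LenA);
        double slow = sma(close, i, LenB);

        if (fastPrev <= slowPrev && fast > slow && position <= 0) {
            // Upward cross: close any short, enter long.
            if (position < 0) netProfit += entryPrice - close[i];
            position = 1; entryPrice = close[i];
        } else if (fastPrev >= slowPrev && fast < slow && position >= 0) {
            // Downward cross: close any long, enter short.
            if (position > 0) netProfit += close[i] - entryPrice;
            position = -1; entryPrice = close[i];
        }
    }
    return netProfit;
}
```

A genetic optimizer would treat (LenA, LenB) as the chromosome and the returned net profit as the fitness value to be maximized.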

To solve for the best parameters, brute force optimization would require that 2,401 tests be performed (49 values of LenA times 49 values of LenB); in TradeStation, that works out to about 56 minutes of computing time, extrapolating from the earlier illustration in which a small subset of the current solution space was examined. Only 1 minute of running time was required by the genetic optimizer; in an attempt to put it at a significant disadvantage, it was prematurely stopped after performing only 133 tests.

The output from the genetic optimizer appears in Table 3-2. In this table, P1 represents the period of the faster moving average, P2 the period of the slower moving average, NET the total net profit, NETLNG the net profit for long positions, NETSHT the net profit for short positions, PFAC the profit factor, ROA% the annualized return on account, DRAW the maximum drawdown, TRDS the number of trades taken by the system, WIN% the percentage of winning trades, AVGT the profit or loss resulting from the average trade, and FIT the fitness of the solution (which, in this instance, is merely the total net profit). As with the brute force data in Table 3-1, the genetic data have been sorted by net profit (fitness), and only the 25 best solutions are presented. Comparison of the brute force and genetic optimization results (Tables 3-1 and 3-2, respectively) reveals that the genetic optimizer isolated a solution with a greater net profit ($172,725) than did the brute force optimizer ($145,125). This is no surprise since a larger solution space, not decimated by increments, was explored. The surprise is that the better solution was found so quickly, despite the handicap of a prematurely stopped evolutionary process. Results like these demonstrate the incredible effectiveness of genetic optimization.
