Most traders do not seek failure, at least not consciously. However, knowledge of the way failure is achieved can be of great benefit when seeking to avoid it. Failure with an optimizer is easy to accomplish by following a few key rules. First, be sure to use a small data sample when running simulations: The smaller the sample, the greater the likelihood it will poorly represent the data on which the trading model will actually be traded. Next, make sure the trading system has a large number of parameters and rules to optimize: For a given data sample, the greater the number of variables that must be estimated, the easier it will be to obtain spurious results. It would also be beneficial to employ only a single sample on which to run tests; annoying out-of-sample data sets have no place in the rose-colored world of the ardent loser. Finally, do avoid the headache of inferential statistics. Follow these rules and failure is guaranteed.
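To make the trap concrete, here is a minimal sketch, assuming only NumPy, of what optimizing many parameter combinations on a small sample produces. The moving-average rule, the parameter ranges, and the variable names are illustrative inventions, not anything from the text: the "market" here is pure noise, so no parameter set has any real edge, yet the best in-sample result looks attractive anyway.

```python
# Illustration only: scanning two hypothetical moving-average parameters
# over a small sample of pure noise still yields an impressive in-sample score.
import numpy as np

rng = np.random.default_rng(42)
n_days = 100                                  # deliberately small sample
returns = rng.normal(0.0, 0.01, n_days)      # zero true edge by construction

best_score, best_params = -np.inf, None
for fast in range(2, 20):                     # 288 parameter combinations
    for slow in range(20, 100, 5):
        signal = np.zeros(n_days)
        for t in range(slow, n_days):
            # Hypothetical rule: go long when the fast mean beats the slow mean.
            if returns[t - fast:t].mean() > returns[t - slow:t].mean():
                signal[t] = 1.0
        score = (signal * returns).sum()      # in-sample "profit"
        if score > best_score:
            best_score, best_params = score, (fast, slow)

print(f"best in-sample profit: {best_score:.4f} at {best_params}")

# Apply the winning parameters to fresh noise and the "edge" evaporates.
fast, slow = best_params
fresh = rng.normal(0.0, 0.01, n_days)
signal = np.zeros(n_days)
for t in range(slow, n_days):
    if fresh[t - fast:t].mean() > fresh[t - slow:t].mean():
        signal[t] = 1.0
print(f"same parameters out of sample: {(signal * fresh).sum():.4f}")
```

With 288 candidate parameter sets and only 100 days of data, the optimizer is all but guaranteed to find one that "works" by chance; the rerun on unseen data is what exposes it.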
What shape will failure take? Most likely, system performance will look great in tests, but terrible in real-time trading. Neural network developers call this phenomenon “poor generalization”; traders are acquainted with it through the experience of margin calls and a serious loss of trading capital. One consequence of such a failure-laden outcome is the formation of a popular misconception: that all optimization is dangerous and to be feared.
In actual fact, optimizers are not dangerous and not all optimization should be feared. Only bad optimization is dangerous and frightening. Optimization of large parameter sets on small samples, without out-of-sample tests or inferential statistics, is simply a bad practice that invites unhappy results for a variety of reasons.
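For contrast, here is a minimal sketch of the good practice the paragraph points to: judging a system only on data its optimizer never saw, backed by an inferential statistic. The one-sample t-test, the significance level, and the names `accept_system` and `oos_returns` are illustrative assumptions (SciPy ≥ 1.6 is assumed for the `alternative` keyword), not the author's specific method.

```python
# Illustration only: gate acceptance of a trading system on held-out returns
# plus a simple inferential test, rather than on in-sample performance.
import numpy as np
from scipy import stats

def accept_system(oos_returns: np.ndarray, alpha: float = 0.05) -> bool:
    """Accept only if returns the optimizer never saw show a significant edge.

    Null hypothesis: the true mean daily return is zero. The one-sided
    p-value must fall below alpha before the system is trusted.
    """
    _t_stat, p_value = stats.ttest_1samp(oos_returns, 0.0, alternative="greater")
    return bool(p_value < alpha)

# Example: 250 held-out daily returns simulated with a genuine edge.
rng = np.random.default_rng(0)
held_out = rng.normal(0.002, 0.01, 250)
print(accept_system(held_out))    # True only when the edge survives the test
```

The point of the gate is that it cannot be fooled by the in-sample search: however many parameter sets were tried, the held-out returns were never part of the optimization, so a spurious fit has no way to inflate them.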