The way to begin is to decide what kind of system you, personally -- very personally -- want. Define the characteristics of that system. How many trades a year (minimum or maximum), what is the minimum percentage of trades that should be winners, what is the minimum win to loss ratio, what is the maximum percentage system drawdown, and so forth. Combine everything that is important into one objective function. That objective function has a single value. Every trading system over every ticker over every time period can be evaluated using this objective function with the result that every alternative has a single number associated with it -- the objective function score.
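For what it's worth, here is a minimal Python sketch of that idea. The metric names, thresholds and formula below are purely illustrative assumptions on my part, not Howard's actual function:

```python
# Minimal sketch of a single-valued objective function, assuming we already
# have summary statistics from a backtest. All names and thresholds here
# are invented for illustration.

def objective(stats: dict) -> float:
    """Collapse the characteristics you personally care about into one score."""
    win_rate = stats["win_rate"]        # fraction of winning trades
    win_loss = stats["avg_win_loss"]    # average win / average loss
    max_dd = stats["max_drawdown"]      # fraction, e.g. 0.25 = 25%
    trades = stats["trades_per_year"]

    # Hard constraints from the personal spec: fail outright if violated.
    if trades < 20 or win_rate < 0.40 or max_dd > 0.30:
        return 0.0

    # Otherwise reward expectancy and penalise drawdown.
    return win_rate * win_loss / max(max_dd, 0.01)

# Every (system, ticker, period) combination reduces to a single number:
print(objective({"win_rate": 0.45, "avg_win_loss": 2.1,
                 "max_drawdown": 0.18, "trades_per_year": 35}))
```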
Taking advantage of an overall rising market does not count.
Every time any trading system makes a profitable trade, the market it trades becomes more efficient and more difficult to trade profitably.
I think the distinction between curve fitting and optimization is worth noting, though accurately distinguishing one from the other is certainly beyond me.
Finding the best parameters over an arbitrary length of time and then expecting them to hold true for the multitude of conditions that may be encountered in the future is, IMO, a fool's game. Optimizing chosen variables over a specified time period, with the expectation that the performance of those variables will decay into the future, is perhaps a more feasible approach.
I am aware there has been a fair amount of research into this area by various academics, but it would be fair to say the people who are profitably using this method prefer not to disclose. Bastards.
From what I have read, machine learning applications (genetic algorithms, etc.) are used to dynamically evaluate and update variables (or perhaps even overhaul the whole model) to optimize the next period's performance, based on some number of previous periods. Considering the calibre of individuals who subscribe to 'market cycles' and the like, I don't find it that hard to swallow.
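For anyone curious, a toy Python sketch of that genetic-algorithm idea. In practice you would re-run this each period with the fitness function backed by a backtest over a recent window; here it is a dummy surface so the file runs:

```python
# Toy genetic algorithm over (fast, slow) MA lengths: selection, crossover
# and mutation against a fitness function. fitness() is a stand-in.
import random

def fitness(fast: int, slow: int) -> float:
    return -abs(fast * 3 - slow)  # dummy: optimum wherever slow == 3*fast

def evolve(pop_size: int = 50, generations: int = 30):
    pop = [(random.randint(5, 50), random.randint(20, 200))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]               # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                     # crossover
            if random.random() < 0.2:                # mutation
                child = (max(2, child[0] + random.randint(-3, 3)),
                         max(10, child[1] + random.randint(-10, 10)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

print(evolve())  # best (fast, slow) found for the current window
```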
If I optimise parameters and/or variables, then I am unrealistically expecting results going forward to perform as well as the PAST optimised results. What I have in reality is a set of parameters chosen by optimisation rather than randomness, which, when applied forward, will not be the optimised "best" results looking BACK after a period of forward trading.
They are no better than random!
The resulting objective function score is determined by calculating the mark each feature earned and multiplying it by that feature's dollar allocation. Add them all up; the result will be a score between 0 and 100.
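If I've understood that right, a quick worked example (allocations and marks invented):

```python
# Allocations sum to 100; each feature earns a mark between 0 and 1.
allocation = {"drawdown": 40, "win_rate": 30, "trades_per_year": 30}
mark = {"drawdown": 0.8, "win_rate": 0.5, "trades_per_year": 0.9}

score = sum(allocation[f] * mark[f] for f in allocation)
print(score)  # 40*0.8 + 30*0.5 + 30*0.9 = 74.0, i.e. between 0 and 100
```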
Can anyone help out?
IMO, the reason for performing optimisation is to get some sort of feel as to what has performed best in the past. That is not to say that one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing available in any past data that would indicate what the future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data and there are no better alternatives than that.
The key really is to extract these optimised parameter values from an in-sample set of data and then verify them using out-of-sample data. By out-of-sample verification, I mean performing whatever Monte Carlo analysis you deem necessary using the optimised parameter settings on out-of-sample data. If the out-of-sample testing shows good robustness and figures relatively close to the optimised ones, then you may well have a very decent system. On the other hand, if the out-of-sample results are very poor, then there is a real problem with the system and it's back to the drawing board (a rough sketch of such a split follows below).
That, in a nutshell, is my perception of the role that optimisation plays: it is merely a starting step which would hopefully lead to the formulation of a robust system that is better than random.
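Here is a rough Python sketch of that in-sample/out-of-sample workflow. The `backtest` function and the data are hypothetical stand-ins for whatever engine you actually use:

```python
# Rough sketch of the in-sample / out-of-sample check described above.
from itertools import product

def backtest(data, fast, slow):
    """Hypothetical stand-in for a real backtest: return the objective
    function score for (fast, slow) over this slice of data."""
    return -abs(fast * 3 - slow) + 0.001 * len(data)  # dummy surface

data = list(range(1000))                    # stand-in for a price series
split = len(data) * 2 // 3
in_sample, out_of_sample = data[:split], data[split:]

# Optimise as freely as you like, but on the in-sample data only.
grid = product(range(10, 101, 5), range(50, 351, 10))
best = max(grid, key=lambda p: backtest(in_sample, *p))

is_score = backtest(in_sample, *best)
oos_score = backtest(out_of_sample, *best)
print(best, is_score, oos_score)
# If oos_score is in the same ballpark as is_score, the system may be
# robust; if it collapses, it's back to the drawing board.
```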
Sure. What do you think of these parameters?
Parameters:
Short Breakout, 2 -> 50 days, increment 1
Long Breakout, 20 -> 200 days, increment 1
Fast MA, 10 -> 100, increment 1
Slow MA, 50 -> 350, increment 1
ETA: 64 hours. I have a Core Duo, damn it! Might be my clunky code implementation, but I've stripped it back. It wants to do 2+ million iterations, which might also explain it. If we trim the parameters or increase the increments, I expect it could speed up dramatically.
ASX.G
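For what it's worth, a quick back-of-the-envelope count of that grid (assuming inclusive ranges swept independently). The full product at increment 1 is far more than 2 million, so presumably Amibroker is constraining the sweep somewhere; either way, the combination count is what drives the ETA:

```python
# Count the optimisation grid for the ranges quoted above. Coarsening the
# increment shrinks every factor, so the product falls off very quickly.
def steps(lo: int, hi: int, inc: int) -> int:
    return (hi - lo) // inc + 1

for inc in (1, 2, 5):
    n = (steps(2, 50, inc) * steps(20, 200, inc)
         * steps(10, 100, inc) * steps(50, 350, inc))
    print(f"increment {inc}: {n:,} combinations")
# increment 1: 242,930,779 combinations
# increment 2:  15,802,150 combinations
# increment 5:     428,830 combinations
```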
ASX.G, Amibroker is not dual-core aware by design.
I know, but at least one of the cores should be able to pump harder than this; it's 2007, don't you know!
Then we have liquidity, closing price, stop % and position sizing.
- 20 FPOs from the Materials Sector for the years 1998 & 1999.
  "An in-sample data set that is used to select the parameters for the trading system."
- The same 20 FPOs from the Materials Sector for the years 2000 & 2001.
  "An out-of-sample data set that is used very infrequently."
- Optimise all the parameters for the highest objective function score.
  "Search through the in-sample data set as much as you wish. Look for values of the parameters that maximize the value of the objective function; add rules and filters."
- If you are now confident with your model (system), then go live? (See the walk-forward sketch after this list.)
  "Beginning with the first in-sample period (the oldest in-sample period), find the optimum parameters, then test over the associated out-of-sample period, and record the results. Move on to the second in-sample period, optimize, test the second out-of-sample period, and record the results. After all the in-sample periods have been processed, concatenate the results from the out-of-sample periods. If those results are satisfactory, your confidence is increased."
- If the model is not achieving the same results as it did in testing, now I have a problem!
  "Eventually the characteristics of the market change and the model (system) is out-of-synch with the market. Maybe the market will return to its earlier state, but usually it will not, and a new model is needed, so we must reoptimize. That is, we must perform the next walk-forward iteration."
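As flagged in the list above, here is a bare-bones Python sketch of that walk-forward loop. `optimise` and `trade` are hypothetical stand-ins for your own tooling:

```python
# Walk-forward: optimise on each in-sample window, trade the chosen
# parameters over the following out-of-sample window, then judge the
# system only on the concatenated out-of-sample results.

IS_YEARS, OOS_YEARS = 2, 1

def optimise(is_data):
    """Hypothetical: grid-search your parameters for the best objective."""
    return {"fast": 20, "slow": 100}

def trade(oos_data, params):
    """Hypothetical: run the system with fixed params, return its result."""
    return 0.0

def walk_forward(data_by_year: dict, first_year: int, last_year: int):
    oos_results = []
    year = first_year
    while year + IS_YEARS + OOS_YEARS - 1 <= last_year:
        is_data = [data_by_year[y] for y in range(year, year + IS_YEARS)]
        oos_data = [data_by_year[y]
                    for y in range(year + IS_YEARS, year + IS_YEARS + OOS_YEARS)]
        params = optimise(is_data)                   # fit in-sample only
        oos_results.append(trade(oos_data, params))  # never re-fit on OOS
        year += OOS_YEARS                            # roll the window forward
    return oos_results

print(walk_forward({y: [] for y in range(1998, 2008)}, 1998, 2007))
```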
What software do you use, Howard?
The comments above are my interpretation of Howard Bandy's posts to date on this thread.
That's me done for now and I look forward to any feedback.