Australian (ASX) Stock Market Forum

System Robustness

IMO, the reason for performing optimisation is to get some sort of feel as to what has performed best in the past. That is not to say that one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing available in any past data that would indicate what the future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data and there are no better alternatives than that.

The key really is to extract these optimised parameter values from an in-sample set of data and then to verify them using out of sample data. By out of sample verification, I mean performing whatever Monte Carlo analysis you deem necessary using the optimised parameter settings on out of sample data. If the out of sample testing shows robust figures that are relatively close to the optimised ones, then you may well have a very decent system. On the other hand, if the out of sample testing shows very poor results, then there is a real problem with the system and it's back to the drawing board.
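In AmiBroker terms the mechanics might look something like this (a minimal sketch only; the fast/slow moving-average crossover and its parameter ranges are made up for illustration, not anyone's actual system):

// In-sample phase: let the optimiser search the parameter space.
fast = Optimize( "fast", 10, 5, 50, 5 );
slow = Optimize( "slow", 40, 20, 200, 10 );
Buy  = Cross( EMA( C, fast ), EMA( C, slow ) );
Sell = Cross( EMA( C, slow ), EMA( C, fast ) );
// Out-of-sample phase: freeze the winning values (e.g. fast = 10; slow = 40;)
// and re-run the backtest over data the optimiser has never seen.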

That, in a nutshell, is my perception of the role that optimisation plays: it is merely a starting step which will hopefully lead to the formulation of a robust system that is better than random.

Hi bingk6 --

Perhaps we are thinking the same things, but I prefer to say it a little differently.

1. Optimization simply means an organized search through a large number of alternatives, assigning each alternative a score so that the alternatives can be ranked (one way to make such a score explicit in AmiBroker is sketched after point 2). In my opinion, the reason we are optimizing is not specifically to find something that worked in the past (although we do find that in the process), but to find some general characteristics that precede profitable trading opportunities that will hopefully continue to work in the future.

2. Your comments imply that the Monte Carlo analysis is applied to the out-of-sample results. Am I misunderstanding? Monte Carlo analysis is usually applied to in-sample data to determine the robustness of the parameters -- the sensitivity to small changes in parameter values. The results from Monte Carlo runs are incorporated into the objective function used to assign the score to each alternative. Applying Monte Carlo analysis to the previously out-of-sample results is the start of another stage of model building using that previously out-of-sample data now as an in-sample data set. A new out-of-sample data set will be required to test for model validity.
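For concreteness, here is one way a custom score can be defined in AmiBroker and then chosen as the optimisation target - a sketch only, assuming the custom backtester interface is available in your version, with CAR per unit of maximum drawdown as a purely illustrative objective:

SetCustomBacktestProc( "" );
if ( Status( "action" ) == actionPortfolio )
{
    bo = GetBacktesterObject();
    bo.Backtest();                    // run the standard portfolio backtest
    st = bo.GetPerformanceStats( 0 ); // 0 = statistics for all trades
    car = st.GetValue( "CAR" );
    mdd = st.GetValue( "MaxSystemDrawdownPercent" ); // reported as a negative number
    bo.AddCustomMetric( "CAR/MDD", car / abs( mdd ) ); // the score used to rank alternatives
}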

Thanks,
Howard
 
I have been using an Excel spreadsheet for some Monte Carlo analysis. I anticipate less need for that when AmiBroker 5 is released.

Thanks,
Howard

So we should be seeing some Monte Carlo analysis in ver 5?

I wonder if we can invite Tomasz to this board, given a lot of us are big users of AmiBroker.
 
Hi Howard,

Hi rnr --

I believe that I said the out-of-sample data should be used very INfrequently.

I sincerely apologise as you are 100% correct - I've stuffed up with the cut & paste.
 
I think several extensions to AmiBroker are on the horizon. There have been a lot of requests for tools that help with trading system validation.

I am writing "Introduction to AmiBroker," which I hope to have ready in early 2008. Tomasz has suggested that I wait until Version 5 is out before writing several of the sections, and before taking screen images.

I'm not certain what is coming, so I'll leave it to Tomasz to make the announcements.

Tomasz posts regularly on the Yahoo boards. This is the main one:
http://finance.groups.yahoo.com/group/amibroker/

Thanks,
Howard
 
ASX.G
Have you looked at IO - Intelligent Optimizer?

I haven't used it for a while but it is very powerful - it should be on the AmiBroker yahoo site. I would think that the task you mention should reduce to less than 8 hours. IO was initially PSO - Particle Swarm Optimization.

regards
 
Nick, Howard, Tech et al;

Apologies if any of this has been covered earlier in the thread.

I think the distinction between curve fitting and optimization is worth noting,
though accurately distinguishing one from the other is certainly beyond me.

Finding the best parameters over an arbitrary stretch of time and then expecting them to hold true for the multitude of conditions that may be encountered in the future is, IMO, a fool's game. Optimising chosen variables over a specified time period, with the expectation that the performance of those variables will decay into the future, is perhaps a more feasible approach.

I am aware there has been a fair amount of research into this area by various academics, but it would be fair to say the people who are profitably using this method prefer not to disclose. Bastards :rolleyes:

From what I have read, machine learning applications (genetic algorithms, etc) are used to dynamically evaluate and update variables (or perhaps even overhaul the whole model) to optimize the next period's performance, based on some number of previous periods. Considering the calibre of individuals who subscribe to 'market cycles' and similar, I don't find it that hard to swallow.

Interested to hear others thoughts on this area.



I completely agree, though could this apply more to swing trading systems than trend following? What effect would the frequency of trades have on the effective life of a system?

Also, to Nick & Howard, what has your experience been with short term trading systems? Most of the discussion in this area seems to be on medium to longer time frame systems, have you seen short term systems employed profitably?

Hi Julius --

Thanks for your comments. The earlier part of this thread does cover some of the questions you raise.

About my comment that using a trading system changes the market being traded. There is considerable evidence that that is true. In the 1970s and early 1980s, Donchian-style breakout systems were very successful for trading futures and commodities. With the advent of inexpensive computers and historical price data, and the spread of the details of those techniques, they stopped being profitable. Many of the trend-style traders have fallen on hard times. The CTA I worked for found that their (primarily trend-following) systems stopped working. John Henry, a very large trend-follower based in the US who had been wildly successful for many years, now regularly posts the worst record for CTAs. See Futures Magazine, July 2007, page 18 -- Henry has four funds among the five worst records, year-to-date.

Deciding whether any market is in a trend at any particular time depends on the time period over which it is measured. A market that looks choppy when plotted as daily bars may have very reliable trends when plotted as 15 minute bars.

Whatever objective function is being used, most people and organizations are limited by the drawdown they are willing to absorb.

It is well known that the expected drawdown for any position is proportional to the square root of the time it is held. Doubling the average holding period automatically increases the expected drawdown by roughly 40%.
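A quick sanity check on that arithmetic, taking the square-root scaling as given:

\[ \frac{DD(2T)}{DD(T)} = \frac{k\sqrt{2T}}{k\sqrt{T}} = \sqrt{2} \approx 1.41 \]

so doubling the holding period multiplies the expected drawdown by about 1.41, an increase of roughly 40%.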

Shorter holding periods result in lower drawdowns.

Many institutions sell their services on the basis of low portfolio turnover and hold for longer periods. (In my opinion, the good performance most of them show is primarily due to the once-in-a-millennium bull market we have seen from 1982 to now.)

As for using end-of-day data: the more I study, research, and test trading systems, the more I prefer short holding periods -- a few days, perhaps a week. I believe they are less susceptible to failure due to overuse, but only time will tell.

It is interesting to note the rapid rise in the popularity of exchange traded funds that track popular indices, including ETFs that increase the leverage. Some days those ETFs account for over 40% of all trading activity on US stock exchanges, measured as dollar volume.

The counterparty to my trade is probably not one of you -- it is probably an automated trading system designed by one of the large, well-funded trading organizations, equipped with the fastest computers, cleanest data feeds, and smartest system developers money can buy.

Thanks for listening,
Howard
www.quantitativetradingsystems.com
 
1. Optimization simply means an organized search through a large number of alternatives, assigning each alternative a score so that the alternatives can be ranked. In my opinion, the reason we are optimizing is not specifically to find something that worked in the past (although we do find that in the process), but to find some general characteristics that precede profitable trading opportunities that will hopefully continue to work in the future.

IMO, the main objective of the optimization phase is to select what appears to be the most robust set of parameter settings, which may not necessarily be the most profitable. This relates specifically to the sensitivity of the parameter values, which, as ASX.G mentioned in a previous post, means looking for a relatively stable plateau as opposed to a sharp peak with steep fall-offs in all directions. The most robust settings would be right in the middle of the plateau. The less sensitive the parameter values, the greater the scope for them to change without significantly impacting performance. Therefore, outside of giving us valuable information regarding the “pockets” of outperformance within the in-sample data, I am not sure whether there is any other information that can be gleaned from the optimization exercise. If there are more “general characteristics that precede profitable trading opportunities” that can be extracted, I would certainly like to hear about them.


2. Your comments imply that the Monte Carlo analysis is applied to the out-of-sample results. Am I misunderstanding? Monte Carlo analysis is usually applied to in-sample data to determine the robustness of the parameters -- the sensitivity to small changes in parameter values. The results from Monte Carlo runs are incorporated into the objective function used to assign the score to each alternative. Applying Monte Carlo analysis to the previously out-of-sample results is the start of another stage of model building using that previously out-of-sample data now as an in-sample data set. A new out-of-sample data set will be required to test for model validity.

OK, some clarification here. The Monte Carlo analysis that I suggested being performed on the out of sample data is purely for verification, using the optimized parameter values produced by running the optimization and parameter-sensitivity testing on the in-sample data. At no stage am I advocating that we convert what was previously out of sample data into in-sample data by re-optimising it and extracting new optimized parameter values. Without that re-optimisation process, one really does not convert out of sample data into in-sample data.

As part of the walk forward process, we would use the optimized parameter values to test against out of sample data. The point is that while we are performing this walk forward, there is really nothing stopping us from performing Monte Carlo testing at the same time. If the system gives more signals than the trader can take, then Monte Carlo just gives the testing procedure more “credibility” by subjecting the out of sample data to a more comprehensive level of testing than a single walk-through could ever provide. This then provides a greater level of “confidence” should the results come out as expected.
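For anyone who wants to keep the in-sample / out-of-sample split inside the formula rather than in the Automatic Analysis date settings, something like this works - a sketch only, where the rule, the parameter "per" and the split date are all made up for illustration:

per      = Optimize( "per", 20, 5, 50, 5 );   // hypothetical parameter
rawBuy   = Cross( C, EMA( C, per ) );         // hypothetical entry rule
rawSell  = Cross( EMA( C, per ), C );         // hypothetical exit rule
inSample = DateNum() < 1060101;               // DateNum() is 1YYMMDD, so this is 1 Jan 2006
Buy  = rawBuy AND inSample;  // optimise on the in-sample portion only
Sell = rawSell;              // exits unrestricted so open positions can close
// For the out-of-sample pass, freeze "per" at the optimised value
// and use: Buy = rawBuy AND NOT inSample;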
 
IMO, the main objective of the optimization phase is to select what appears to be the most robust set of parameter settings, which may not necessarily be the most profitable. This relates specifically to the sensitivity of the parameter values, which, as ASX.G mentioned in a previous post, means looking for a relatively stable plateau as opposed to a sharp peak with steep fall-offs in all directions. The most robust settings would be right in the middle of the plateau. The less sensitive the parameter values, the greater the scope for them to change without significantly impacting performance. Therefore, outside of giving us valuable information regarding the “pockets” of outperformance within the in-sample data, I am not sure whether there is any other information that can be gleaned from the optimization exercise. If there are more “general characteristics that precede profitable trading opportunities” that can be extracted, I would certainly like to hear about them.

OK, some clarification here. The Monte Carlo analysis that I suggested being performed on the out of sample data is purely for verification, using the optimized parameter values produced by running the optimization and parameter-sensitivity testing on the in-sample data. At no stage am I advocating that we convert what was previously out of sample data into in-sample data by re-optimising it and extracting new optimized parameter values. Without that re-optimisation process, one really does not convert out of sample data into in-sample data.

As part of the walk forward process, we would use the optimized parameter values to test against out of sample data. The point is that while we are performing this walk forward, there is really nothing stopping us from performing Monte Carlo testing at the same time. If the system gives more signals than the trader can take, then Monte Carlo just gives the testing procedure more “credibility” by subjecting the out of sample data to a more comprehensive level of testing than a single walk-through could ever provide. This then provides a greater level of “confidence” should the results come out as expected.

Take a bow, son.
You've clearly made it.

Just one thing to add; there's no problem at all IMO with re-optimising your parameters with the out-of-sample data, in which case it becomes in-sample data, as long as you don't do this all the time, and as long as you still have "new" out-of-sample data (or several datasets) to test the robustness of the system. As Howard pointed out, preferably this data should only be used once, or as few times as possible.
 
Here's what I get with some of the params with optimization. I'm still learning amibroker, and my data isn't the best. I've done it only on the current ASX 300 over 10 years.

If it's any use I can adjust any of the optimizations or details.

The attached file is actually a .zip file renamed to .pdf to get around the file restrictions on the site. So you need to save it and rename.

// The Optimize params are, in order: default, min value, max value, step.
// So HighBreakOut will try 5, 10, 15, 20.
HighBreakOut = Optimize("HighBreakOut", 10, 5, 20, 5);
ShortEMA = Optimize("ShortEMA", 40, 10, 60, 10);
HighestHigh = Optimize("HighestHigh", 70, 30, 120, 20);
LongEMA = Optimize("LongEMA", 180, 120, 360, 30);

SetOption("CommissionMode", 2); // $ per trade
SetOption("CommissionAmount", 30);
SetOption("MaxOpenPositions", 10 );
SetOption("InitialEquity", 100000 );
PositionSize = -10; // always invest only 10% of the current equity

cond1 = Cross(H, Ref(HHV(H, HighBreakOut), -1)); // today's high crosses the highest high of the previous HighBreakOut (default 10) bars
cond2 = H > EMA(C, ShortEMA); // today's high is above the ShortEMA-day (default 40) EMA of closes
cond3 = HHVBars(H, HighestHigh) == 0; // today's high is the highest in HighestHigh (default 70) bars
cond4 = EMA(V * C, 21) > 500000; // ensure at least $500k of average daily money flow
cond5 = C < 10.00; // only trade stocks under $10
cond6 = C > O; // today's close is higher than the open

// the trigger: buy only when all conditions are satisfied
Buy = cond1 AND cond2 AND cond3 AND cond4 AND cond5 AND cond6;

// exit rules
ApplyStop( stopTypeLoss, stopModePercent, 10 ); // maximum-loss stop: exit at a 10% loss
Sell = Cross(Ref(EMA(L, LongEMA), -1), C); // exit when the close falls below yesterday's EMA of the lows

Bingk6

Similar to my thinking.
But is it really better than random?
Logic says it should be.
But there's no real reason why it will be.

I don't have AmiBroker so I don't have the facilities to find optimum variables over a portfolio.
I'd be interested in what they are for T/Trader; I could then test the results over data and with TradeSim. All I need is the optimum values.
My suspicion is that the edge, if any, wouldn't equate to much.

Anyone able to help out?

Interested in Nick's take.
 

Attachments

  • ttrader.pdf
    99.8 KB
Hi Julius --
The counterparty to my trade is probably not one of you -- it is probably an automated trading system designed by one of the large, well-funded trading organizations, equipped with the fastest computers, cleanest data feeds, and smartest system developers money can buy.
www.quantitativetradingsystems.com

Since the site is called "Aussie Stock Forums" many posters here would not trade the same markets that you would.

Are there any statistics on how much of the market is traded this way? I find it fascinating that this is happening. Is it possible that a lot of trading is done by a group of supercomputers with custom trading systems? I would assume that many would concentrate on the most liquid markets due to the volume of capital involved.
 
Buggalug
Here's what I get with some of the params with optimization. I'm still learning amibroker, and my data isn't the best. I've done it only on the current ASX 300 over 10 years.
Is this a variant of tech trader?

I dropped it into Excel for some 3D optimisation charts. It would be interesting to try;
LongEMA = Optimize("LongEMA", 100, 65, 125, 20); // or maybe increment by 10

or something similar, since RAR (or CAR for that matter) was highest at the lowest value of LongEMA looking at the 3D chart. You could fix the short EMA to 20 since it doesn't appear to vary that much.

It appears that the best returns also give the highest drawdown - not an unusual occurrence. So you can make more money (maybe) if you are prepared to accept more drawdown risk. It's all about tradeoffs and compromise.

By the way I haven't studied the code at all, only the results.

regards
stevo
 
Buggalug

Is this a variant of tech trader?

I dropped it into Excel for some 3D optimisation charts. It would be interesting to try;
LongEMA = Optimize("LongEMA", 100, 65, 125, 20); // or maybe increment by 10

or something similar, since RAR (or CAR for that matter) was highest at the lowest value of LongEMA looking at the 3D chart. You could fix the short EMA to 20 since it doesn't appear to vary that much.

It appears that the best returns also give the highest drawdown - not an unusual occurrence. So you can make more money (maybe) if you are prepared to accept more drawdown risk. It's all about tradeoffs and compromise.

By the way I haven't studied the code at all, only the results.

regards
stevo

Yeah it is; if you look in my quote, T/A asked if anyone would have a look. I hope I have it right - I found the base code on this site.

I've tried this, adding what you said, but leaving some of the range for comparison. I've also added Bang For Buck to pick which trade to take if more than one comes up.

HighBreakOut = Optimize("HighBreakOut", 10, 5, 20, 5);
ShortEMA = Optimize("ShortEMA", 40, 20, 40, 10);
HighestHigh = Optimize("HighestHigh", 70, 30, 120, 20);
LongEMA = Optimize("LongEMA", 100, 65, 225, 10);

SetOption("CommissionMode", 2); // $ per trade
SetOption("CommissionAmount", 30);
SetOption("MaxOpenPositions", 10 );
SetOption("InitialEquity", 100000 );
PositionSize = -10; // always invest only 10% of the current equity

// "bang for buck" score: roughly the average daily $ movement of a $10,000 position
// (shares bought with $10k, times the 200-day average daily range, scaled down by 100)
BangForBuck = ((10000 / C) * (MA(ATR(1), 200)) / 100);
PositionScore = BangForBuck; // when there are more signals than open slots, take the highest score first

cond1 = Cross(H, Ref(HHV(H, HighBreakOut), -1)); // today's high crosses the highest high of the previous HighBreakOut bars
cond2 = H > EMA(C, ShortEMA); // today's high is above the ShortEMA-day EMA of closes
cond3 = HHVBars(H, HighestHigh) == 0; // today's high is the highest in HighestHigh bars
cond4 = EMA(V * C, 21) > 500000; // ensure at least $500k of average daily money flow
cond5 = C < 10.00; // only trade stocks under $10
cond6 = C > O; // today's close is higher than the open

// the trigger: buy only when all conditions are satisfied
Buy = cond1 AND cond2 AND cond3 AND cond4 AND cond5 AND cond6;

// exit rules
ApplyStop( stopTypeLoss, stopModePercent, 10 ); // maximum-loss stop: exit at a 10% loss
Sell = Cross(Ref(EMA(L, LongEMA), -1), C); // exit when the close falls below yesterday's EMA of the lows

Same deal ... it's the current ASX 300 for 10 years, and you have to rename the attachment back to .zip.
 

Attachments

  • ttrader2.pdf
    105.7 KB
Perhaps ignorance is bliss considering some of the previous posts, but I would argue that EOD trend following systems on stocks work better than most systems and are more suitable for the average trader. Definitely at least in the Asian-Pacific markets, in my testing anyhow.

I have quite a few long term trend following systems that test well on the ASX 300 or All Ords constituent list, and the same systems also test just as well or better on a different market such as the HSCI (Hang Seng Composite Index) constituent list.

I would argue that leverage is not really required and perhaps should be avoided (unless sufficient capital is not available to trade a minimum number of stocks for a portfolio - about 5 to 10 is required to make these types of systems work ... and I would perhaps look at more boring types of leverage, such as margin lending, to do so).

These systems allow people to have a balance between trading and work, and not be glued to a screen all day or miss the one trade that will make the year. A steady income to pay the bills is important. When trading only one instrument, missing that trade could be doom and gloom, but it is not such an issue when trading stocks, where there are many opportunities to make it up.

My belief anyhow is that long term trend following systems do work on stocks. Their success is based on often-touted principles: compounding profits, trading with the trend, cutting losses, and using the system's edge simultaneously on a portfolio of shares.

Perhaps having these systems perform just as well on another market is validation that a system is robust? Monte Carlo results are the validation I often use, before testing on other markets.

The post above is focused on long term stock portfolio trading systems; however, I can see other types of trading systems being discussed, such as short-term and swing trading systems, which perhaps have a more limited focus on a portfolio of around 5-10 instruments. These types of systems I did not attempt to address.
 
Bugalug

Would love to have a look but I get this.
I've PM'd you my email address.
 

Attachments

  • Abode.gif
    9 KB
Tech/a, download and save it locally, rename it to a .zip file, unzip it and you'll find a .csv file which contains the optimisation output.

Good work buggalug.

ASX.G
 
Tech/a, download and save it locally, rename it to a .zip file, unzip it and you'll find a .csv file which contains the optimisation output.

Good work buggalug.

ASX.G

Everyone that's looking at this, just give it a little time before going too far. Bingk6 is getting some different results from me, so we're just cross-checking.

Cheers
 
I wanted to get the "Optimum" variables and code them into M/S then through Tradesim for some checking/testing/stuffing around myself.
 
I wanted to get the "Optimum" variables and code them into M/S then through Tradesim for some checking/testing/stuffing around myself.

Tech,

Did you get my email?

I'm going to leave one going overnight with optimization for the stops, the dollar limit (C < 10) and a few liquidity levels. Hopefully it's on the right track; I'll be interested in the results you get with your testing.
 
buggalug ,

My two cents: just looking at the parameters, I would not think you need to run it overnight to realise that these indicators, in any optimised combination, will not provide a robust system or a system that would get me excited enough to trade ... pls prove me wrong. Monte Carlo may find much variation in the results.

I would look at the indicators and determine a reason why they should indicate that a stock is performing strongly and would warrant an entry over other stocks ... don't get me wrong, I think MAs and breakouts are a strong foundation for long term trend following systems, but I don't think brute-forcing a simple system like this will yield much in the way of results.

I would spend some time eyeballing charts and trying to determine common characteristics of previously outperforming stocks.
 
buggalug
It's good to see some code posted. I ran it on the All Ords stocks rather than the ASX300.

Just a few things to consider;

1. It would be good to set delay on the entry / exit using;
SetTradeDelays( 1, 1, 1, 1 ); /* delay entry/exit by one bar */

2. What is the trade price you use, the open, the average or random for the day?

3. Setting position sizing to 10% of capital might mean that you are better off using % brokerage instead of fixed $30 brokerage, depending on your broker, otherwise brokerage costs could end up too low as the simulation progresses.

4. Using bang for buck in the PositionScore could give you results that are unrealistic. Possibly use GP's approach to Monte Carlo (i.e. randomly ignore some trades - see the sketch after this list) to get around this.

5. I also look at how the system handles larger amounts of money than $100,000 just because that is where we want to eventually be! Many systems cannot handle larger amounts of money very well due to liquidity issues and performance degrades. Check out how it goes with $1 million & $5 million.

6. Check what "limit trade size as % of entry bar volume" is set at. I wouldn't go above 10% of entry bar volume, and probably less.

7. I am not sure what price the ApplyStop is working on.

8. You mentioned that your data "isn't the best". You can waste a lot of time working with crappy data.
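On point 4, here is a rough sketch of the randomly-ignore-some-trades idea: a couple of lines that could be appended after the Buy assignment in buggalug's code. The 30% skip rate is purely illustrative, and this assumes AFL's Random() (uniform values in 0..1) gives a fresh draw each run.

skipChance = 0.30;                    // fraction of signals to drop each run (illustrative)
Buy = Buy AND Random() > skipChance;  // randomly discard ~30% of entry signals
// Re-run the backtest many times and look at the spread of CAR and drawdown
// figures, rather than trusting a single equity curve.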

CAR results look ok, although the % system drawdown is something that I could not live with going forward. I would also like to see the win rate higher. But these are my main criteria. The open equity curve is a little tough, especially through 2002/2003 - it would be better if the system stepped aside through some of this period.

Sorry for the length of the post, but when it comes to system testing there are a lot of things to consider. I haven't even scratched the surface. You can spend hours running opts only to find that something is set wrong, or that the basic starting point is all wrong.

Howard's book looks like it addresses a lot of system design issues, and from his posts above he knows what he is talking about. I haven't read it yet, but I will when I get time.

regards
 