
System Robustness

A system which is not robust is one which performs well in certain market conditions BUT when those conditions change slightly the results fall off dramatically.

Is a system considered robust if it performs well even when the start and end dates of the test are altered?

So if my system performs in a similar fashion when tested over different 10-year periods, say 1985-1995, 1986-1996, 1987-1997, 1988-1998, and so on, does that mean it is a robust system?
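In code, that rolling-window check is just this (a sketch only; backtest_window is a toy stand-in returning invented CAGR figures):

```python
# Run the SAME fixed system over shifted 10-year windows and compare.
# backtest_window() is a toy stand-in; replace it with a real simulation.

def backtest_window(start_year: int, end_year: int) -> float:
    return 12.0 + (start_year % 3) - (start_year % 2)   # pretend CAGR (%)

results = {f"{y}-{y + 10}": backtest_window(y, y + 10)
           for y in range(1985, 1996)}                  # 1985-1995 ... 1995-2005
print(results)
spread = max(results.values()) - min(results.values())
print("spread across windows:", spread)   # small spread = one tick for robustness
```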

What are other ways to measure and improve system robustness?

Any comments and discussion would be much appreciated.
Thanks.
 
IO

Perform a sensitivity analysis of the variables being optimized and utilize parameter sensitivity as a means of directing the optimization process towards a more robust set of variable values.
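A minimal sketch of that idea, assuming a hypothetical `backtest` function in place of a real simulator (here a toy formula so the example runs end to end; the parameter names are invented):

```python
# Sketch of a parameter sensitivity analysis. backtest() is a toy
# stand-in: a smooth "hill" of CAGR centred on ema_days=180 and
# atr_mult=3.0, purely so the example is runnable.

def backtest(params: dict) -> float:
    return (20.0
            - 0.001 * (params["ema_days"] - 180) ** 2
            - 4.0 * (params["atr_mult"] - 3.0) ** 2)

def sensitivity(params: dict, bump: float = 0.10) -> dict:
    """Bump each optimised variable by +/-10% and report how far the
    result moves. Small moves suggest a robust setting; large moves
    mean the system is balanced on a knife edge."""
    base = backtest(params)
    report = {}
    for name, value in params.items():
        moves = []
        for scale in (1 - bump, 1 + bump):
            tweaked = dict(params, **{name: value * scale})
            moves.append(abs(backtest(tweaked) - base))
        report[name] = max(moves)
    return report

print(sensitivity({"ema_days": 180, "atr_mult": 3.0}))
# The variable with the biggest number is the one the optimiser
# should treat with the most suspicion.
```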
 
SB

Rightly or wrongly I never optimise any parameters in a portfolio system.

Nizar
Sadly you'll have problems with survivorship in your example.
Testing over various markets and Monte Carlo testing the results in those markets also help.
It's all about averages and consistency of those averages!!
If the resultant numbers are similar then you could have robustness.
You can also test on various Universes (Liquid ones); this will also give an indication.
A robust system will also perform if the parameters of the system are altered slightly, without there being an appreciable difference in results. I'm sure you can see the logic in this. Fine-tuning should not be necessary in a great system.
Always remove the top 3 outlier trades, both winners and losers, to determine CONSISTENCY in returns (a sketch follows at the end of this post). You don't want to see wild swings.
Look for COMMON patterns ---good and bad!

* Look back at price shocks in the markets and periods you test in and see if their impact on RISK and PROFIT is similar.

Finally, don't EXPECT a long-only trading methodology to perform well in a bear market, and vice versa.

I have 7 bourses I use to test systems.
T/T works best with the Hong Kong market, strangely enough! (The whole market, not a margin list.)
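In code, the outlier check above might look something like this (the trade P&L figures are invented):

```python
# Strip the 3 biggest winners and 3 biggest losers from a trade list
# and see whether the average trade still looks similar.

def trimmed_average(trade_profits, n_outliers=3):
    ranked = sorted(trade_profits)
    trimmed = ranked[n_outliers:-n_outliers]
    return sum(trimmed) / len(trimmed)

trades = [850, -200, 120, 4100, -310, 95, 240, -2600, 60, 180, 75, -90]
full_avg = sum(trades) / len(trades)
print(f"all trades: {full_avg:.0f}  outliers removed: {trimmed_average(trades):.0f}")
# A big gap between the two (here 210 vs 73) means a handful of freak
# trades are carrying the system -- the wild swings you don't want.
```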
 
You technical traders amaze me with what you do. It's a complete other language. I respect it.
 
SB
Rightly or wrongly I never optimise any parameters in a portfolio system.

Tech,

Testing a system and pressing that Tradesim Start Simulation button to check Monte Carlo results, then adjusting and possibly adding another variable or two to get better results, is optimising.

That "IO" above sees how sensitive a system is to fine-tuning and walks forward automatically.

The walk forward is the main bit about "robust"! ;)
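A rough sketch of the walk-forward mechanics, with toy stand-ins for `optimise` and `backtest` and invented yearly returns:

```python
# Walk-forward: optimise on an in-sample window, then evaluate those
# parameters on the NEXT, unseen window, and keep rolling forward.

def optimise(train):
    # Toy stand-in for a real optimiser: pretend to fit a parameter.
    return {"ema_days": 150 + 10 * (round(sum(train)) % 5)}

def backtest(test, params):
    # Toy stand-in for a real simulation: pretend mean return is CAGR.
    return sum(test) / len(test)

def walk_forward(yearly_returns, in_years=4, out_years=2):
    results = []
    start = 0
    while start + in_years + out_years <= len(yearly_returns):
        train = yearly_returns[start : start + in_years]
        test = yearly_returns[start + in_years : start + in_years + out_years]
        params = optimise(train)                 # fitted on seen data only
        results.append(backtest(test, params))   # scored on unseen data
        start += out_years                       # roll the window forward
    return results

print(walk_forward([12, 8, -5, 20, 15, -10, 25, 7, 3, 18]))
# Consistent out-of-sample numbers across the windows is the robustness
# signal; one good window and three bad ones is not.
```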

SB
 
mB1

It's a complete other language.

Was to me once!

SB

Our definition of optimisation appears to be somewhat different.
How is curve fitting avoided? (Please don't say by walk forward analysis.)
 
How is curve fitting avoided? (Please don't say by walk forward analysis.)

IMO this is where Amibroker really rocks. You can use the variable optimisation and 3D graphing tool to see where different iterations of parameters sit for various CAGRs. It seems that when viewing a so-called robust system on the optimisation chart, the optimal variable settings don't just sit on the edge of a cliff or at the top of a pointy peak. Instead they're found at the top of a hill. Either side of the hill are suboptimal parameter settings. To be literal, let's take TechTrader (I can't run a test right now as my Amibroker is tied up testing something else, but I'll happily test it afterwards if people are interested to see).

You might find that by optimising the EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180 days. If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff, then it would seem the system is walking a very fine line between being effective and blowing up. I think in an ideally robust system the tops of your hills are nice and broad and somewhat rounded. They don't just spike up and fall away rapidly on either side.

By choosing the optimal parameter at the top of the hill, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting. If the new optimal is 160, and you're at 180, you're 20 away from optimal on the hill. But if you'd chosen 200, you're now 40 days away from optimal. I don't know if my explanation makes sense, but it was explained something like this in Way of the Turtle, and it made a lot of sense... albeit with a picture :)
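A sketch of the hill-versus-spike choice with invented optimisation output (a real version would read the CAGRs out of the optimisation report):

```python
# Instead of taking the parameter with the single best CAGR, smooth
# each result over its neighbours and take the best SMOOTHED value.
# Invented optimisation output: CAGR for each EMA length tested.

ema_days = [100, 120, 140, 160, 180, 200, 220, 240, 260]
cagrs    = [8.0, 10.0, 14.5, 15.5, 16.0, 15.0, 12.0, 7.0, 21.0]

def smoothed(values, i, width=2):
    window = values[max(0, i - width) : i + width + 1]
    return sum(window) / len(window)

raw  = max(range(len(cagrs)), key=lambda i: cagrs[i])
hill = max(range(len(cagrs)), key=lambda i: smoothed(cagrs, i))
print("raw peak:", ema_days[raw], "broad hill:", ema_days[hill])
# -> raw peak: 260 (an isolated spike), broad hill: 180 (the rounded top)
```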

Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (i.e. data not used in the optimisation) is to leave some amount of data out of the optimisation and forward test.
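In code, that hold-out idea is just this (toy stand-ins and invented data again):

```python
# Hold-out test: optimise only on older data, then see how that
# parameter fares on a reserved, never-touched recent slice.

def optimise(train):                      # toy stand-in
    return {"ema_days": 180}

def backtest(data, params):               # toy stand-in: mean return
    return sum(data) / len(data)

yearly_returns = [12, -4, 18, 7, 22, -9, 15, 6, 11, 3]   # invented
cut = int(len(yearly_returns) * 0.7)      # reserve the last ~30%
train, holdout = yearly_returns[:cut], yearly_returns[cut:]

params = optimise(train)
print("in-sample:", backtest(train, params),
      "out-of-sample:", backtest(holdout, params))
# A large gap between the two is the classic curve-fitting warning.
```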
 
You might find that by optimising the EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180 days. If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff, then it would seem the system is walking a very fine line between being effective and blowing up. I think in an ideally robust system the tops of your hills are nice and broad and somewhat rounded. They don't just spike up and fall away rapidly on either side.

By choosing the optimal parameter at the top of the hill, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting. If the new optimal is 160, and you're at 180, you're 20 away from optimal on the hill. But if you'd chosen 200, you're now 40 days away from optimal. I don't know if my explanation makes sense, but it was explained something like this in Way of the Turtle, and it made a lot of sense... albeit with a picture :)

Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (i.e. data not used in the optimisation) is to leave some amount of data out of the optimisation and forward test.

Worth repeating.
The crux of it is in bold.

Curtis Faith did explain it well in his book.

tech/a said:
Nizar
Sadly you'll have problems with survivorship in your example.

Testing over various markets and Monte Carlo testing the results in those markets also help.
It's all about averages and consistency of those averages!!
If the resultant numbers are similar then you could have robustness.
You can also test on various Universes (Liquid ones); this will also give an indication.
A robust system will also perform if the parameters of the system are altered slightly, without there being an appreciable difference in results. I'm sure you can see the logic in this. Fine-tuning should not be necessary in a great system.
Always remove the top 3 outlier trades, both winners and losers, to determine CONSISTENCY in returns. You don't want to see wild swings.
Look for COMMON patterns ---good and bad!

Tech -- why will I have problems with survivorship in my example?
Can you please elaborate?
I thought survivorship is a problem only if your data doesn't include delisted stocks.

The rest I understand and it's great stuff. Thanks.
 
You might find that by optimising the EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180 days. If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff, then it would seem the system is walking a very fine line between being effective and blowing up. I think in an ideally robust system the tops of your hills are nice and broad and somewhat rounded. They don't just spike up and fall away rapidly on either side.

Yes, I agree; I have made that point.

By choosing the optimal parameter at the top of the hill, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting. If the new optimal is 160, and you're at 180, you're 20 away from optimal on the hill. But if you'd chosen 200, you're now 40 days away from optimal. I don't know if my explanation makes sense, but it was explained something like this in Way of the Turtle, and it made a lot of sense... albeit with a picture :)

Sorry, can't agree. Why is it not possible that your choice in the "Future" finds itself closer to the optimal value than it was at the beginning, rather than further from it?
Optimisation leads to the question of when to "reset" the optimised values.
Weekly? Monthly? Six-monthly?

Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (i.e. data not used in the optimisation) is to leave some amount of data out of the optimisation and forward test.

(1) Why would you leave some out?
(2) At the end of the forward test period you would in all likelihood have a different variable as the optimum value.

This impedes robust development in my view.
Optimisation shouldn't be necessary to gain a positive expectancy.

When developing a portfolio method, expecting a group of stocks to "hold true" to an optimised value for more than 24 hrs is unrealistic and impractical.

There can be some benefit in optimising singular entities, however.
The two approaches are vastly different, I would argue.
Portfolio vs individual entity.
 
Worth repeating.
The crux of it is in bold.

Curtis Faith did explain it well in his book.



Tech -- why will I have problems with survivorship in my example?
Can you please elaborate?
I thought survivorship is a problem only if your data doesn't include delisted stocks.

The rest I understand and it's great stuff. Thanks.

Well Nizar, if you have a database which has the same stocks in it 30 yrs ago as it has in it now, I'd love a copy. Regardless of delistings, you'll have mergers and new listings, 100s of them, to contaminate results.

Having said that, if you're talking a singular entity like a Future OR an Index---different story!
 
Well Nizar, if you have a database which has the same stocks in it 30 yrs ago as it has in it now, I'd love a copy. Regardless of delistings, you'll have mergers and new listings, 100s of them, to contaminate results.

Isn't this why you pay for clean data?
I've got Premium Data.
I've checked it out myself; it seems to do all that for you: mergers, listings, share splits, consolidations, spin-offs, etc.

Below is from their website:
Free data is fine, if all you’re interested in is the latest price of a stock. But if you wish to run your data through a charting package, and make real use of it, you need to maintain it. For instance, you need to adjust it for capital re-constructions.

In the case of the ASX, about six stocks need to be adjusted on average per week. A further dozen will have been de-listed by the exchange or had a name or code change. In addition, new stocks that come on board need to be identified.

Premium Data handles all of these maintenance activities, and others, automatically, as part of database maintenance.

If I recall, even you yourself called it serious data for serious traders ;)

Maybe I've still missed the point?? :confused:
 
Isn't this why you pay for clean data?
I've got Premium Data.
I've checked it out myself; it seems to do all that for you: mergers, listings, share splits, consolidations, spin-offs, etc.

Nizar.
It's not about the data.
It's about what's in the data now and what WAS in the data 10/20/30 yrs ago.
Simply 3 sets of VERY different data universes.
E.g. News Limited was, now isn't. Davnet was, now isn't, etc. PNN is now but wasn't-----etc.
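To make the changing-universe point concrete (an illustration only, not tech's prescription; the listing/delisting dates below are invented): membership of the test universe is a function of the date, which is why a back-test needs listing and delisting dates for every symbol, dead ones included.

```python
# The universe is different on every date: a stock only belongs to
# the test universe between its listing and delisting dates.

from datetime import date

universe = {
    "NWS": (date(1985, 1, 1), date(2004, 11, 3)),   # was, now isn't
    "DVT": (date(1999, 2, 10), date(2003, 5, 30)),  # Davnet: was, now isn't
    "PNN": (date(2007, 3, 1), None),                # is now, but wasn't
}

def tradeable(symbol: str, on: date) -> bool:
    """True only if the stock was actually listed on that date."""
    listed, delisted = universe[symbol]
    return listed <= on and (delisted is None or on < delisted)

print(tradeable("DVT", date(2000, 6, 1)))   # True:  in the 2000 universe
print(tradeable("DVT", date(2007, 6, 1)))   # False: gone by 2007
print(tradeable("PNN", date(2000, 6, 1)))   # False: didn't exist yet
```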
 
Nizar.
Its not about the data.
Its about what's in the data now and what WAS in the data 10/20/30 yrs ago.
Simply 3 sets of VERY different data universes.
EG News limited was now isnt.Davnet was now isnt etc etc.PNN is now but wasnt-----etc.

Okay.
Let me just ask a straight question then, so I can get a straight answer.
How do I overcome this problem?
 
SB

Yeh----so you forward test 2 yrs, you then like it, then trade the optimised parameters of 2 yrs ago for a further 2 yrs forward, so that the reset of the optimisation isn't for another 4 yrs? At which time you test it and find that, if you include the last 2 yrs of trading, the variables that "should have" been used are vastly different from those actually used.

The only reason optimised results show a "Best" performance is that they are based upon variables optimised NOW, at the END of testing.

If attempting to do this with a portfolio, it's worse again.
 
Okay.
Let me just ask a straight question then, so I can get a straight answer.
How do I overcome this problem?

You can't in portfolio testing.
In singular entity testing you find the most data you can.
Shorter timeframes will have more available data: on a 5 min chart, 240 bars make up a day and over 1,000 a week.

You can see what I mean--can't you?
 
You can't in portfolio testing.
In singular entity testing you find the most data you can.
Shorter timeframes will have more available data: on a 5 min chart, 240 bars make up a day and over 1,000 a week.

You can see what I mean--can't you?

Yes I can.
So let's focus on where we can improve the system in this regard.
Anything further to say on this matter (robustness), please, tech?

Stevo or Nick, if you are around, some wisdom here would be much appreciated.
 
SB
The only reason optimised results show a "Best" performance is that they are based upon variables optimised NOW, at the END of testing.

If attempting to do this with a portfolio, it's worse again.

NO (I can use that bold too hehe!) you don't trade it; you test it for a number of years, as above, to see if the "COULD have been achieved" results are similar to the optimised results.

If the results are similar then possibly the system is robust.
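One reading of that check, sketched with invented numbers: for each forward period, compare what was actually achieved (trading parameters optimised on earlier data) against what a hindsight re-optimisation says could have been achieved.

```python
# Compare "achieved" (walked-forward) results against "could have
# been achieved" (hindsight-optimal) results, period by period.
# All figures below are invented.

periods        = ["2001-02", "2003-04", "2005-06"]
achieved_cagr  = [14.0, 11.5, 16.0]   # traded on earlier-period optima
hindsight_cagr = [16.5, 13.0, 17.5]   # re-optimised after the fact

for p, got, ideal in zip(periods, achieved_cagr, hindsight_cagr):
    print(f"{p}: achieved {got:.1f}% vs could-have-been {ideal:.1f}%"
          f" (gap {ideal - got:.1f}%)")
# Small, stable gaps suggest the optimised values carry forward.
# Large or erratic gaps suggest curve fitting.
```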
 
Sorry, can't agree. Why is it not possible that your choice in the "Future" finds itself closer to the optimal value than it was at the beginning, rather than further from it?

I don't know. The question sounds like, "Is it possible for my optimised variable to become MORE optimal?" I would say, of course. But in the event that it becomes less optimal due to a shift in market conditions (which is eventually inevitable IMO), assuming a robust system and using the trailing-stop EMA as another example, the nearer you were to optimal to begin with, the lower the risk that you will be much further from optimal given any kind of shift in conditions, i.e. toward favouring a longer MA or a shorter MA.

You could almost say that each element of your universe in a portfolio method is like a change in market conditions, since you can trade stocks in the future that didn't exist during back testing in the past.

When developing a portfolio method, expecting a group of stocks to "hold true" to an optimised value for more than 24 hrs is unrealistic and impractical.

I would think this is as good a reason as any to run an optimisation across your universe (portfolio) and to try to pick a spot on top of the broad hills to set your parameter values. Finding places where the distribution of results is thickest and least varied should improve the chances of your system being effective with the greatest number of elements in your universe.
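One way to express that numerically (a possible formulation, with invented CAGR grids): run each candidate setting over every member of the universe and prefer the setting whose results are high and tightly clustered, e.g. scoring by mean minus one standard deviation.

```python
# Prefer the parameter whose results across the whole universe are
# both good and consistent. CAGR-per-stock grids are invented.

from statistics import mean, stdev

results = {                 # ema_days -> CAGR per universe member
    160: [12.0, 14.5, 9.0, 13.0, 11.5],
    180: [13.0, 13.5, 12.0, 14.0, 12.5],
    200: [24.0, 4.0, 20.0, 2.5, 16.0],   # best raw average, wildly uneven
}

def score(cagrs):
    return mean(cagrs) - stdev(cagrs)    # reward consistency, not spikes

for p, r in sorted(results.items()):
    print(p, "mean:", round(mean(r), 2), "score:", round(score(r), 2))
print("most robust setting:", max(results, key=lambda p: score(results[p])))
# -> 180 wins on the score even though 200 has the highest raw mean.
```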
 