# System Robustness



## nizar (12 August 2007)

A system which is not robust is one which performs well in certain market conditions BUT when those conditions are changed _slightly_ then the results fall off _dramatically._

Is a system considered robust if it performs well even when the start and end dates of the test are altered?

So if my system performs in a similar fashion when testing it over different 10-year periods, say, between 1985-1995, 1986-1996, 1987-1997, 1988-1998, and so on....
Then does that mean it is a robust system?
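For illustration, that rolling-window check might be sketched like this in Python (the yearly return figures are invented, and a real test would compare full backtest stats, not just averages):

```python
# Sketch: compare average performance across overlapping 10-year windows.
# Hypothetical yearly returns (%) for a system, 1985-2004 (invented numbers).
yearly_returns = dict(zip(
    range(1985, 2005),
    [12, 8, -3, 15, 10, 7, 22, -5, 9, 14, 11, 6, 18, -2, 13, 9, 4, 16, 7, 10]))

def window_average(start, length=10):
    """Average yearly return over one test window starting at `start`."""
    rs = [yearly_returns[y] for y in range(start, start + length)]
    return sum(rs) / len(rs)

# Slide the 10-year window forward one year at a time: 1985-1995, 1986-1996, ...
averages = [window_average(s) for s in range(1985, 1996)]
spread = max(averages) - min(averages)

print([round(a, 1) for a in averages])
print(f"spread between best and worst window: {spread:.1f} points")
# A robust system should show a small spread across the windows.
```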

What are other ways to measure and improve system robustness?

Any comments and discussion would be much appreciated.
Thanks.


----------



## Sir Burr (12 August 2007)

IO

Perform a sensitivity analysis of the variables being optimized and utilize parameter sensitivity as a means of directing the optimization process towards a more robust set of variable values.
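As a rough illustration of that idea (the backtest scores below are invented, and this is not how IO itself works):

```python
# Sketch: score each candidate parameter value not just by its own backtest
# result, but by the average of its neighbourhood, penalising sharp spikes.

def backtest_score(param):
    """Stand-in for a real backtest; returns an invented CAGR-like score."""
    # Hypothetical response curve: a lone spike at 20, a broad hill near 50.
    table = {10: 5, 20: 30, 30: 8, 40: 18, 50: 20, 60: 19, 70: 12}
    return table.get(param, 0)

def robust_score(param, step=10):
    """Average the score over the parameter and its immediate neighbours."""
    neighbours = [param - step, param, param + step]
    return sum(backtest_score(p) for p in neighbours) / len(neighbours)

for p in (20, 50):
    print(p, backtest_score(p), round(robust_score(p), 1))
# The spike at 20 looks best on its own but collapses once its neighbours are
# averaged in; the broad hill at 50 survives the averaging.
```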


----------



## tech/a (12 August 2007)

*SB*

Rightly or wrongly I never optimise any parameters in a portfolio system.

*Nizar*
Sadly you'll have problems with survivorship in your example.
Testing over various markets and Monte Carlo testing the results in those markets also helps.
It's all about *averages* and *consistency* of those averages!!
If the resultant numbers are similar then you could have robustness.
You can also test on various universes (liquid ones); this will also give an indication.
A robust system will also perform if the parameters of the system are altered slightly *without* there being an appreciable difference in results. I'm sure you can see the logic in this. Fine-tuning *should not* be necessary in a great system.
Always remove the top 3 outlier trades, both winners and losers, to determine *CONSISTENCY* in returns. You don't want to see wild swings.
Look for COMMON patterns ---good and bad!
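A quick sketch of that outlier check, with an invented list of trade results:

```python
# Sketch: recompute the average trade result after dropping the 3 biggest
# winners and 3 biggest losers, to see how much outliers drive the system.
trades = [120, -40, 15, 300, -25, 10, 45, -180, 60, 22, -8, 500, 35, -60, 18]

def average(xs):
    return sum(xs) / len(xs)

trimmed = sorted(trades)[3:-3]     # drop the 3 worst and 3 best trades
full_avg = average(trades)
trimmed_avg = average(trimmed)

print(f"all trades:       {full_avg:.1f}")
print(f"outliers removed: {trimmed_avg:.1f}")
# A big gap between the two averages means a few trades carry the system.
```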

*** Look back at price shocks in the markets and periods you test in and see whether their impact is similar, and what their effects are on RISK and PROFIT.

Finally, don't EXPECT a long-only trading methodology to perform well in a bear market, and vice versa.

I have 7 bourses I use to test systems.
T/T works best with the Hong Kong market, strangely enough! (The whole market, not a margin list.)


----------



## mb1 (12 August 2007)

You technical traders amaze me at what you do. It's a whole other language. I respect it.


----------



## Sir Burr (12 August 2007)

tech/a said:


> *SB*
> Rightly or wrongly I never optimise any parameters in a portfolio system.




Tech,

Testing a system, pressing that Tradesim *Start Simulation* button to check Monte Carlo results, then adjusting and possibly adding another variable or two to get better results --- that is optimising.

That "IO" above sees how sensitive a system is to *fine-tuning* and walks forward automatically.

The walk forward is the main bit about "robust"! 

SB


----------



## tech/a (12 August 2007)

*mB1*



> It's a whole other language.




Was to me once!

*SB*

Our definitions of optimisation appear to be somewhat different.
How is curve fitting avoided? (Please don't say by walk forward analysis.)


----------



## theasxgorilla (12 August 2007)

tech/a said:


> How is curve fitting avoided? (Please dont say by walk forward analysis).




IMO this is where Amibroker really rocks.  You can use the variable optimisation and 3d graphing tool to see where different iterations of parameters sit for various CAGRs.  It seems that when viewing a so-called robust system on the optimisation chart, the optimal variable settings don't just sit on the edge of a cliff or at the top of a pointy peak.  Instead they're found at the top of a _hill_.  Either side of the _hill_ are suboptimal parameter settings.  To be literal, let's take TechTrader (I can't run a test right now as my Amibroker is tied up testing something else, but I'll happily test it after if people are interested to see).

You might find that by optimising for the 180-day EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180.  If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff then it would seem the system is walking a very fine line between being effective and blowing up.  I think in an ideally robust system the tops of your _hills_ are nice and broad and somewhat rounded.  They don't just spike up and fall away rapidly on either side.

By choosing the optimal parameter at the top of the _hill_, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting.  If the new optimal is 160, and you're at 180, you're 20 away from optimal on the _hill_.  But if you chose 200, you're now 40 days away from optimal.  I don't know if my explanation makes sense, but it was explained something like this in Way of the Turtle, and it made a lot of sense...albeit with a picture
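The hill-versus-spike idea can be put in a tiny numerical sketch (the CAGR numbers are invented, not from TechTrader):

```python
# Sketch: prefer the parameter whose neighbourhood average is best (a broad
# hill) over the single best point (which may be a narrow, fragile spike).
# Hypothetical CAGR (%) per EMA trailing-stop length.
cagr = {140: 9, 160: 14, 180: 16, 200: 15, 220: 13, 240: 7, 260: 24, 280: 6}

def hill_score(length, step=20):
    """CAGR averaged over a parameter value and its neighbours either side."""
    window = [length - step, length, length + step]
    return sum(cagr.get(l, 0) for l in window) / len(window)

best_point = max(cagr, key=cagr.get)    # the lone spike at 260 wins here
best_hill = max(cagr, key=hill_score)   # the broad hill around 180 wins here
print(best_point, best_hill)
```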

Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (ie. that not used in the optimisation) is to leave out some amount of data from the optimisation and forward test.


----------



## Sir Burr (12 August 2007)

tech/a said:


> *SB*
> How is curve fitting avoided? (Please dont say by walk forward analysis).




See above.

I didn't say it :


----------



## nizar (12 August 2007)

theasxgorilla said:


> You might find that by optimising for the 180-day EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180.  If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff then it would seem the system is walking a very fine line between being effective and blowing up.  *I think in an ideally robust system the tops of your hills are nice and broad and somewhat rounded.  They don't just spike up and fall away rapidly on either side.*
> 
> By choosing the optimal parameter at the top of the _hill_, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting.  If the new optimal is 160, and you're at 180, you're 20 away from optimal on the _hill_.  But if you chose 200, you're now 40 days away from optimal.  I don't know if my explanation makes sense, but it was explained something like this in Way of the Turtle, and it made a lot of sense...albeit with a picture
> 
> *Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (ie. that not used in the optimisation) is to leave out some amount of data from the optimisation and forward test.*




Worth repeating.
The crux of it is in bold.

Curtis Faith did explain it well in his book.



			
tech/a said:

> *Nizar
> Sadly you'll have problems with survivourship in your example.*
> Testing over various markets and Montecarlo testing the results in those markets also help.
> Its all about averages and consistency of those averages!!
> ...




Tech -- why will I have problems with survivorship in my example?
Can you please elaborate.
I thought survivorship is a problem only if your data doesn't include delisted stocks.

The rest I understand and it's great stuff. Thanks.


----------



## tech/a (12 August 2007)

theasxgorilla said:


> You might find that by optimising for the 180-day EMA trailing stop you discover the optimal parameter setting (highest CAGR) is 180.  If changing your EMA to 200 or 160 causes your CAGR to fall off a cliff then it would seem the system is walking a very fine line between being effective and blowing up.  I think in an ideally robust system the tops of your _hills_ are nice and broad and somewhat rounded.  They dont just spike up and fall away rapidly on either side.




Yes I agree I have made that point.



> By choosing the optimal parameter at the top of the _hill_, if/when market conditions change in the future, you'll still be closer to optimal under the new conditions than if you'd just chosen any old setting.  If the new optimal is 160, and you're at 180, your 20 away from optimal on the _hill_.  But if you chose 200, you're now 40 days away from optimal.  I don't know if my explanation makes sense, but it was  explained something like this in Way of the Turtle, and it made a lot of sense...albeit with a picture




Sorry, can't agree. Why is it not possible that your choice in the "Future" finds itself closer to the optimal value than it did at the beginning, rather than further from it?
Optimisation leads to the question of when to "reset" the optimised values.
Weekly/Monthly/6 mthly.



> Short of testing right up until today and waiting to see how the system reacts to future market conditions, the way to simulate the impact of an optimised parameter on new data (ie. that not used in the optimisation) is to leave out some amount of data from the optimisation and forward test.




(1) Why would you leave some out?
(2) At the end of the forward test period you would in all likelihood have a different variable as the optimum value.

This impedes robust development in my view.
Optimisation shouldn't be necessary to gain a positive expectancy.

When developing a portfolio method, expecting a group of stocks to "hold true" to an optimised value for more than 24 hrs is unrealistic and impractical.

There can be some benefit in optimising singular entities however.
The 2 approaches are vastly different, I would argue.
Portfolio/individual entity.


----------



## tech/a (12 August 2007)

nizar said:


> Worth repeating.
> The crux of it is in bold.
> 
> Curtis Faith did explain it well in his book.
> ...




Well Nizar, if you have a database which has the same stocks in it 30 yrs ago as it has in it now, I'd love a copy. Regardless of delistings, you'll have mergers and new listings --- 100s of them --- to contaminate results.

Having said that, if you're talking a singular entity like a Future OR an Index---different story!


----------



## nizar (12 August 2007)

tech/a said:


> Well Nizar if you have a database which has the same stocks in it 30 yrs ago as it has in it now I'd love a copy. Regardless of delistings, you'll have mergers,and new listings 100s of them to contaminate results.




Isn't this why you pay for clean data?
I've got Premium Data.
I've checked it out myself; it seems to do all that for you. Mergers, listings, share splits, consolidations, spin-offs, etc, etc.

Below is from their website


> Free data is fine, if all you’re interested in is the latest price of a stock. But if you wish to run your data through a charting package, and make real use of it, you need to maintain it. For instance, you need to adjust it for capital re-constructions.
> 
> In the case of the ASX, about six stocks need to be adjusted on average per week. A further dozen will have been de-listed by the exchange or had a name or code change. In addition, new stocks that come on board need to be identified.
> 
> Premium Data handles all of these maintenance activities, and others, automatically, as part of database maintenance.




If I recall, even you yourself called it serious data for serious traders 

Maybe I've still missed the point??


----------



## Sir Burr (12 August 2007)

tech/a said:


> Sorry cant agree.
> 
> Why is it not possible that your choice in the "Future" finds itself closer to the optimal value than it did at the beginning rather than further from it?


----------



## tech/a (12 August 2007)

> Isnt this why you pay for clean data?
> Iv got premium data.
> Iv checked it out myself, seems to do all that for you. Mergers, listings, share splits, consolidations, spin-offs, etc, etc.




Nizar.
It's not about the data.
It's about what's in the data now and what *WAS* in the data 10/20/30 yrs ago.
Simply 3 sets of VERY different data universes.
E.g. News Limited was, now isn't. Davnet was, now isn't, etc etc. PNN is now but wasn't-----etc.


----------



## nizar (12 August 2007)

tech/a said:


> Nizar.
> It's not about the data.
> It's about what's in the data now and what *WAS* in the data 10/20/30 yrs ago.
> Simply 3 sets of VERY different data universes.
> E.g. News Limited was, now isn't. Davnet was, now isn't, etc etc. PNN is now but wasn't-----etc.




Okay.
Let me just ask a straight question then, so I can get a straight answer.
How do I overcome this problem?


----------



## tech/a (12 August 2007)

SB

Yeh----so you forward test 2 yrs, you like it, then trade the optimised parameters of 2 yrs ago for a further 2 yrs forward, so that the reset of the optimisation isn't for another 4 yrs? At which time you test it and find that, if you include the last 2 yrs of trading, the variables that "should have" been used are vastly different from those actually used.

The only reason optimised results show a "Best" performance is that they are based upon optimised variables *NOW* at the *END* of testing.

If attempting to do this with a portfolio worse again.
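For readers following along, the walk-forward procedure being debated looks roughly like this (the `optimise`/`backtest` functions are placeholders with a made-up response surface, not Tradesim or IO calls):

```python
# Sketch of walk-forward testing: optimise on one window, trade the chosen
# parameter on the NEXT window, and collect only the out-of-sample results.

def backtest(param, window):
    """Placeholder backtest: score depends on the parameter and window start."""
    start, end = window
    return -abs(param - 10 - start % 40)   # invented response surface

def optimise(window):
    """Placeholder optimiser: pick the in-sample best parameter."""
    return max(range(10, 60, 10), key=lambda p: backtest(p, window))

years = list(range(1985, 2005, 2))          # 2-year steps, as in the example
out_of_sample = []
for i in range(len(years) - 2):
    train = (years[i], years[i + 1])        # optimise on this window...
    test = (years[i + 1], years[i + 2])     # ...then trade it on the next
    p = optimise(train)
    out_of_sample.append(backtest(p, test))

print(out_of_sample)
# The spread of these out-of-sample scores is what tells you whether the
# optimised values held up once the market moved on.
```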


----------



## tech/a (12 August 2007)

nizar said:


> Okay.
> Let me just ask a straight question then so i can get a straight answer.
> How do i overcome this problem?




You can't in portfolio testing.
In singular-entity testing you find the most data you can.
Shorter timeframes will have more available data: on a 5 min chart, 240 bars make up a day and over 1000 a week.

You can see what I mean--can't you?


----------



## nizar (12 August 2007)

tech/a said:


> You cant in portfolio testing.
> In singular entity testing you find the most data you can.
> Shorter timeframes will have more available data as in a 5 min chart 240 bars make up a day and over 1000 a week.
> 
> You can see what I mean--cant you?




Yes I can.
So let's focus on where we *can* improve the system in this regard.
Anything further to say on this matter (robustness), please, tech.

Stevo or Nick, if you are around, some wisdom here would be much appreciated.


----------



## Sir Burr (12 August 2007)

tech/a said:


> SB
> The only reason optimised results show a "Best" performance is that they are based upon optimised variables *NOW* at the *END* of testing.
> 
> If attempting to do this with a portfolio worse again.




*NO* (I can use that bold too hehe!) you don't *trade it*, you *test it* for a number of years as above to see *if* the "COULD have been achieved" results are similar to optimized results. 

If the results are similar then possibly the system is robust.


----------



## theasxgorilla (12 August 2007)

tech/a said:


> Sorry cant agree.Why is it not possible that your choice in the "Future" finds itself closer to the optimal value than it did at the beginning rather than further from it?




I don't know.  The question sounds like, "is it possible for my optimised variable to become MORE optimal?".  I would say, of course.  But in the event that it becomes less optimal due to a shift in market conditions (which is eventually inevitable IMO), assuming a robust system and using the trailing stop EMA as another example, the nearer you were to optimal to begin with, the lower the risk that you will be much further from optimal given any kind of shift in conditions, i.e. toward favouring a longer MA or a shorter MA.

You could almost say that each element of your universe in a portfolio method is like a change in market conditions, since you can trade stocks in the future that didn't exist during back testing in the past.



tech/a said:


> When developing a portfolio method expecting a group of stocks to "hold true" to an optimised value for more than 24 hrs is unrealistic and inpractical.




I would think this is as good a reason as any to run an optimisation across your universe (portfolio) and to try and pick a spot on top of the broad hills to set your parameter values.  Finding places where the distribution of results is thickest and least varied should improve the chances of your system being effective with the greatest number of elements in your universe.


----------



## tech/a (12 August 2007)

I'm only presenting my view/experiences.
I'm not here to convince anyone.

Nick's in Adelaide at the moment and not that well, unfortunately.
The thought of dinner with tech can be quite upsetting!


----------



## Sir Burr (13 August 2007)

Hi Nizar,

Here is an interesting page on "Robustness".

http://www.turtletrader.com/robust-trading-system.html

On the next page following Robust, on curve-fitting:-

_Trend Following parameters or rules work across a range of values. System parameters that work over a range of values are robust. If the parameters of a system are slightly changed and the performance adjusts drastically, beware. For example, if a system works great at 20, but does not work at 19 or 21 you have a system with poor robustness. On the other hand, if your system parameter is 50 and it also works at 40 or 60, your system is much more robust (and reliable)._

That optimizing "addon" to Amibroker runs a system across all of your parameters giving a visual result of whether a slight change in parameters could turn the system to crap.

The big evil optimizing/curve-fitting --- I agree 100% it's dangerous, but if you test on past data and then move forward, using those parameters in the following year (still using historical data), you can see what effect it may have. That software (IO) DOES NOT pick the best-case (curve-fitted) parameters either.

If you have tested a system, I'm sure you would have run multiple simulations to try and find better and better results (optimizing). This is just getting straight to the point quickly and seeing if it works in the future, and if any changes in parameters result in a system failure.

Also, the fewer parameters in a system, the better.


----------



## Sir Burr (14 August 2007)

tech/a said:


> Rightly or wrongly I never optimise any parameters in a portfolio system.




Tech,

Thinking, thinking, thinking. 

_OK, a system based on truly random entries and exits wouldn't be optimized._

SB


----------



## tech/a (14 August 2007)

I guess it boils down to our individual definitions of optimise.

To me it means testing a wide range of variables against an instrument in the attempt to find the variable which performs best.

The possibility that in a few weeks' time the tested and selected optimised variable is no longer the best choice renders (to me at least) the constant attempt to determine the best possible variable allocation pointless.

The situation is bad enough with a singular instrument (Stock/Future/Index etc), but then trying to apply it to a portfolio where the constituents are likely to move in an uncorrelated manner at any period in time---is (again, to me at least) a waste of time.

There is of course the argument that the resultant variables chosen are no better or worse than those chosen by any other means --- that the "blueprint", if it comes with an acceptable set of numbers, can be traded just as well as any other choice of variables with the same.

I'm yet to see a long term walked-forward test, either live or simulated, which shows an appreciable improvement in a system over a reasonable period of time (500 periods of the selected timeframe/2 yrs on a daily).

But I'm willing to be educated.


----------



## _ExY_ (15 August 2007)

Great Thread,

Sir Burr, correct me if I'm wrong, but your statement 'That software (IO) DOES NOT pick the best case (curve-fitted) parameters either' is not entirely correct.

IO is designed with two optimization modes, Swarm Optimization and Differential Optimization (please view this for an explanation of swarm: http://www.projectcomputing.com/resources/psovis/ --- I don't know differential optimization that well; perhaps someone could discuss it and maybe compare these two modes of optimization), and IO generally runs in Swarm Optimization.

Swarm Optimization basically works like this: you have a fitness space (like that in the link above) and you spawn a certain number of searchers randomly throughout the space. Then, recursively:

a) assess the value of each searcher (in the link above, this means how high each searcher is at its current location),

b) store the best value found so far by each individual searcher (its location and value), and the best value obtained by the whole swarm,

c) give each searcher a new location that is influenced by its own best value and the global best found so far.

This loop is performed until some specific criterion is met, then the location of the best value is returned.
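A minimal sketch of that loop in Python (a toy 1-D fitness landscape, not IO's actual internals):

```python
import random

# Toy particle swarm optimisation over a 1-D "fitness landscape".
def fitness(x):
    # Broad hill centred at 5.0; the swarm should converge near it.
    return -(x - 5.0) ** 2

random.seed(1)
n, iters = 12, 60
pos = [random.uniform(0, 10) for _ in range(n)]   # spawn searchers randomly
vel = [0.0] * n
pbest = list(pos)                                  # each searcher's best so far
gbest = max(pos, key=fitness)                      # the whole swarm's best

for _ in range(iters):
    for i in range(n):
        # New velocity pulled toward the personal best and the global best.
        vel[i] = (0.7 * vel[i]
                  + 1.4 * random.random() * (pbest[i] - pos[i])
                  + 1.4 * random.random() * (gbest - pos[i]))
        pos[i] += vel[i]
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i]
            if fitness(pos[i]) > fitness(gbest):
                gbest = pos[i]

print(round(gbest, 2))   # should land close to 5.0
```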

Now what I think you're trying to say is that the solution IO finds will be related to the 'curvature' of the surrounding landscape of the optimization space (along with other factors, such as the number of particles (searchers), the manner in which the particles search, and the nature of the search space itself --- I'm sure there's more!), and that because of this the solutions IO finds will be at the top of a hill that is reasonably rounded at the top. I also think you are pointing out that IO is unlikely to find maxima that simply stick straight out of the landscape (i.e. in the link above, the equivalent of seeing a tall, narrow, 'stick-like' shape pointing out of the landscape).

So to some extent your statement is true: IO is *unlikely* to find these stick-like hills, but beware --- this shouldn't be assumed, and sometimes IO might get lucky and find them.


----------



## nizar (15 August 2007)

Tech, using metastock/tradesim, how do I get the results of a simulation without the 10 best champions (and/or 10 worst dogs)?


----------



## tech/a (15 August 2007)

This is how I do it.

A bit long winded.
Go to the trade records sheet.
Find those trades you wish to remove (the codes).
Record them on a pad.

Go to your Metastock exploration.
Set up an exploration for the system.
Go to securities and untick those you DON'T wish
to be included.
Re-run the exploration and they won't be included.

*ExY*
Did you get an optimisation encyclopedia for Xmas!!


----------



## rnr (15 August 2007)

An alternative to Tech’s method is as follows:-

a) From the Trade Log Window note the unique trade numbers you wish to exclude from this simulation,

b) go to the Trade Database Window and remove the tick from the box of the trade numbers noted in a) above,

c) then press Start Simulation and the new results will now exclude those trades.


----------



## tech/a (15 August 2007)

*rnr*
Much better.
You've saved me time


----------



## nizar (15 August 2007)

Thanks RnR and tech.

Is anything wrong with a system if the results are much different when the top 10 winners and top 10 losers are taken out??

Richard Dennis did say:


> The worst mistake a trader can make is to miss a major profit opportunity. *95 percent of profits come from only 5 percent of the trades*.




Thoughts?

Surely techtrader was the same??


----------



## theasxgorilla (15 August 2007)

I think this is the essence of longer term trend following.  Often, < 40% winners, and of those 40% that win, < 20% win really big, eclipsing the 80% that meander or fail.  Another approach for testing system resilience that I saw GP and Stevo experimenting with recently was to randomly skip a percentage of trades during a run.  The more you can skip without affecting your stats, the less reliant your system is on the opportunity factor provided by future market conditions...or that was my understanding.


----------



## nizar (15 August 2007)

theasxgorilla said:


> I think this is the essense of longer term trend following.  Often, < 40% winners, and of those 40% that win < 20% win really big, eclipsing the 80% that meander or fail.  *Another approach for testing system resilience that I saw GP and Stevo experimenting with recently was to randomly skip a percentage of trades during a run. * The more you can skip without affecting your stats the less reliant your system is on the opportunity factor provided by future market conditions...or that was my understanding.




Great idea.

Does anybody know how to do this in metastock/tradesim?


----------



## tech/a (15 August 2007)

nizar said:


> Great idea.
> 
> Does anybody know how to do this in metastock/tradesim?





Never done it.
But I would think you could put in different start and stop dates in Tradesim.
Ask stevo.

On taking out the best and worst, I'd only bother with say the top 3.
You just don't want a few trades skewing the results one way or the other radically.


----------



## _ExY_ (15 August 2007)

theasxgorilla said:


> Another approach for testing system resilience that I saw GP and Stevo experimenting with recently was to randomly skip a percentage of trades during a run.  The more you can skip without affecting your stats the less reliant your system is on the opportunity factor provided by future market conditions...or that was my understanding.




How can the information gained by this approach be any more useful in testing the 'resilience' of the system?

This method seems to resemble Monte Carlo, except that instead of selecting trades at random you're simply rejecting them at random (is there any difference?!)


----------



## R0n1n (15 August 2007)

Guys, while I got the attention of all you Tradesim experts I might ask a quick question. 

My mate has offered to test my system on his TradeSim if I can send him my backtest report as a trt file ... 

Now Amibroker can export a csv file of the backtest, but I donno any other method of converting it to a trt file format. 

Your help would be much appreciated.


----------



## tech/a (15 August 2007)

Sorry, can't help.
BUT all he needs is the formulas in M/S format and your test settings, universe of stocks and time periods.


----------



## Sir Burr (15 August 2007)

R0n1n said:


> I donno any other method of converting it to a trt file format.




http://finance.groups.yahoo.com/group/amibroker/message/113685


----------



## R0n1n (15 August 2007)

I tried the Convert.exe (from Yahoo files) but it just created a 0 kb trt file.

With that procedure, where do I embed it into the AFL?


----------



## Sir Burr (15 August 2007)

_ExY_ said:


> Great Thread,
> 
> Sir Burr correct me if im wrong




No need to 

SB
*
Edit: R0n1n check your PM*


----------



## R0n1n (15 August 2007)

just replied SB. thanx a ton.


----------



## GreatPig (15 August 2007)

_ExY_ said:

> This method seems to resemble monte carlo whereas instead of randomly selecting trades at random your simply rejecting them(is there any difference?!)



It is a form of Monte Carlo testing. The way I see it, randomly rejecting trades is much the same as randomly picking trades from the available signals, as long as you don't reject so many that the sparsity of signals biases the results too much.

Normally trades are selected based on preference rules, when more than one is available at a time. Randomising which of those trades is selected goes part way towards Monte Carlo testing, but unless your system gives lots of signals, there may be certain signals which will always be picked up simply because they are the only signals available at that time, especially in a shorter-term system. If one that's always picked up is a huge winner, then all your Monte Carlo results will be biased by that same trade, as you may never get a backtest without that trade in it. By randomly rejecting signals though, you will eventually get backtests where that trade is rejected, and will thus see some results that don't include it.

Also, this allows you to test your selection preference rules by keeping those rules (rather than randomising the selection) and simply rejecting some trades at random to ensure you get a variety of portfolios. At least then if a signal is rejected, the next most preferred one will be chosen rather than just a random one.

No idea about MS/Tradesim programming, but in AmiBroker I set a variable called MonteCarlo to some percentage (actually a fraction between 0-1) and then use the formula:

Buy = Buy AND Random() > MonteCarlo;

The larger you make MonteCarlo, the less chance the random value will be larger than it and thus the more chance that the buy signal will be wiped out.

Note though that if you're using the scale-in or scale-out values in the Buy array, to retain those values you'd need to write it as:

Buy = IIf(Random() > MonteCarlo, Buy, False);

although I'm not sure that you'd want to randomly drop scale-out signals.

I'm not saying that this is a perfect way of doing Monte Carlo testing, but I think it's better than just randomising the signal selection at any bar (which in AmiBroker means  setting the PositionScore variable to Random).

Cheers,
GP


----------



## theasxgorilla (16 August 2007)

_ExY_ said:


> How can the information gained by this approach be anymore useful in testing the 'resilience' of the system?
> 
> This method seems to resemble monte carlo whereas instead of randomly selecting trades at random your simply rejecting them(is there any difference?!)




I actually still don't get Monte Carlo analysis...do you reject any trades when you do Monte Carlo analysis?  Or do you just reorder them and observe things like Max DD?

The answer to the first question...my thoughts only...if you reject trades at random, and you have a large enough sample of trades to be considered significant, and through rejecting say 5% of the trades you notice that your CAGR or Max DD or some other relevant measure consistently takes a big hit, then it could be the case that your system is too dependent upon the opportunity factor present in your sample data, and may not perform as well with a reduced opportunity factor in the future.
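That skip test can be sketched like this, with invented trade results (a real run would resample the actual backtest's trade list):

```python
import random

# Sketch: randomly skip 5% of trades over many runs and watch the average
# result. A system leaning on a handful of trades will degrade sharply.
random.seed(42)
trades = [random.gauss(1.0, 5.0) for _ in range(400)]   # invented R-multiples

def run(skip_fraction):
    """One Monte Carlo run: drop each trade with probability skip_fraction."""
    kept = [t for t in trades if random.random() > skip_fraction]
    return sum(kept) / len(kept)

baseline = sum(trades) / len(trades)
skipped = [run(0.05) for _ in range(1000)]              # 1000 Monte Carlo runs
avg_skipped = sum(skipped) / len(skipped)

print(round(baseline, 3), round(avg_skipped, 3))
# For a large, well-behaved sample the two averages stay close; a big,
# consistent hit from skipping 5% would be a warning sign.
```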

ASX.G


----------



## tech/a (16 August 2007)

theasxgorilla said:


> I actually still don't get monte carlo analysis...do you reject any trades when you do monte carlo analysis?  Or do you just reorder them and observe things like Max DD?




*ASX*
The best description I have come across is this.
Monte Carlo analysis:
Imagine giving 20000 people (as many as you wish to test) your trading method and all its rules. You give them all the same amount of money and send them off for X years (your test period) to trade your system. After that period you have them come back to you and you tabulate the results.




> The answer to the first question...my thoughts only...if you reject trades at random, and you have a large enough sample of trades to be considered significant, and through rejecting say 5% of the trades you notice that your CAGR or Max DD or some other relevant measure consistently takes a big hit, then it could be the case that your system is too dependent upon the opportunity factor present in your sample data, and may not perform as well with a reduced opportunity factor in the future.
> 
> ASX.G




Trades are not rejected. As with most systems, there is not enough capital to trade EVERY signal. So if I start trading your system today and someone else tomorrow, OR if, of the 5 trades selected, I take a different one to 4 other people, and each time my portfolio is buying a stock I choose differently to others, then the landscape of my portfolio and that of many others is likely to be radically different.
From constituents to time held to stopped-out trades---etc.

So rather than one singular portfolio of trades being tested, many 1000s can be tested. I have seen a system which has a 97% success rate (97% of portfolios are profitable, 3% are losers) over 20000 portfolios.
Would you trade it?
Why/why not?
(Question is to anyone interested).
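That "many traders, same rules" picture maps onto code as signal-selection Monte Carlo. A sketch with invented signals and a position limit (none of this is from Tradesim):

```python
import random

# Sketch: with more signals per day than capital allows, each simulated
# trader picks a different subset, producing thousands of distinct portfolios.
random.seed(7)
DAYS, SIGNALS_PER_DAY, MAX_POSITIONS = 250, 5, 2

# Invented per-signal returns: each day offers several candidate trades.
signal_returns = [[random.gauss(0.2, 2.0) for _ in range(SIGNALS_PER_DAY)]
                  for _ in range(DAYS)]

def one_portfolio():
    """One simulated trader: randomly choose which affordable signals to take."""
    total = 0.0
    for day in signal_returns:
        total += sum(random.sample(day, MAX_POSITIONS))
    return total

results = [one_portfolio() for _ in range(2000)]
profitable = sum(r > 0 for r in results) / len(results)
print(f"{profitable:.1%} of simulated portfolios were profitable")
```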


----------



## CFD (16 August 2007)

Although it's an extreme example, I guess it would still depend on the R/R, maxDD, how many years it took to achieve, and what else it involved. If it was back-fitted over two years of data, run it over a different two years of data.

Good to see you calling a system, a system. Now if only we could call optimisation something else.


----------



## nizar (16 August 2007)

tech/a said:


> So rather than one singular portfolio of trades being tested, many 1000s can be tested. I have seen a system which has a 97% success rate (97% of portfolios are profitable, 3% are losers) over 20000 portfolios.
> Would you trade it?
> Why/why not?
> (Question is to anyone interested).




Definitely not.
I was testing a system last night: 99.98% of portfolios profitable, 0.02% losers.
1 portfolio from the 5000 returned a loss over the testing period.

I wouldn't trade it if somebody paid me!

I could well be that unlucky bastard that doesn't make money!

When I test systems I look for the maximum max DD over the 5000 or 10000 portfolios, and the minimum return. If I am happy with both of these then I may consider that system.
I'm pretty hard to please though.
And of course I ignore the last 5 years in all backtests.
I don't wanna be kidding myself.


----------



## tech/a (16 August 2007)

> And of course I ignore the last 5 years in all backtests.




*Now why on earth would you do that?*
You're designing a LONG trading system.
You want it to catch trends and profit from them.
So you cut out the very period of trading you're wanting to be involved with.

That's like designing a boat and testing it on a freeway!!!


----------



## nizar (16 August 2007)

Tech/a.
See the last sentence of my previous post.


----------



## bingk6 (16 August 2007)

GreatPig said:


> Normally trades are selected based on preference rules, when more than one is available at a time. Randomising which of those trades is selected goes part way towards Monte Carlo testing, but unless your system gives lots of signals, there may be certain signals which will always be picked up simply because they are the only signals available at that time, especially in a shorter-term system. If one that's always picked up is a huge winner, then all your Monte Carlo results will be biased by that same trade, as you may never get a backtest without that trade in it. By randomly rejecting signals though, you will eventually get backtests where that trade is rejected, and will thus see some results that don't include it.




Hi GP,

Very sound logic, I think. The only thing that I could argue against it is that for a fully mechanical trader, who follows his buy and sell rules to the letter, in the situation that you described above, he'll always be in a position to take that huge winner because the funds will be available. Therefore the list of trades in each portfolio should always include that huge winner because in real life, with funds available, that trade would be taken. If you were trying to simulate how a number of portfolios would perform over a period of time, randomly discarding trades would in all probability give you unrealistic results (for better or for worse).

Having said that, where I believe randomly discarding some trades comes into its own is that it gives you a very good idea of how robust your system is, in terms of its ability to generate good returns across a range of stocks and a range of different time frames. It enables you to gauge whether your system's performance was improved significantly by some huge winners or adversely affected by a few shockers. It allows the extremes to be factored out somewhat and is good for evaluating the efficiency of the entries and exits on their own. However, it does not allow you to accurately simulate how a portfolio would perform over a period of time, which IMHO is a different kettle of fish again, as I explained in the first paragraph.


----------



## tech/a (16 August 2007)

*GP and Bingk6*

Good points from both. 

*Nizar*
Why do you think testing a bullish system through a bullish period is kidding yourself?
Surely you'd want to know if it outperformed the index:
Smoothness of curve
How long trades were held (did you catch most of the trend available)
Drawdowns/stops, leverage during bullish periods.

Are you suggesting that development of a short system using the last 5 yrs would be best?


----------



## GreatPig (16 August 2007)

bingk6 said:
			
		

> he'll always be in a position to take that hugh winner



Yes, but that's a historic winner which may not normally occur and may not occur again. If the system only ends up with positive expectancy because of that trade, then if such a trade doesn't occur again during his trading life, his system won't make money.

A lot of ifs and buts of course, but just something to be wary of. I think a robust system shouldn't be highly reliant on any particular trade or small number of trades. 

Cheers,
GP


----------



## nizar (16 August 2007)

GreatPig said:


> Yes, but that's a historic winner which may not normally occur and may not occur again. If the system only ends up with positive expectancy because of that trade, then if such a trade doesn't occur again during his trading life, his system won't make money.
> 
> A lot of ifs and buts of course, but just something to be wary of. I think a robust system shouldn't be highly reliant on any particular trade or small number of trades.




GP. 
Great post there.

Tech/a.
I see your point.
I am mainly testing across the period 1992-2002, so I miss out on the last 5 years. This is my main testing period.

The worst portfolio from the 5000 (the no. of Monte Carlo simulations I am performing) has to be positive in this period and has to provide a reasonable return. Max DD from the worst portfolio has to be acceptable as well.

When I then forward-traded the system, the results were astronomical, along the lines of 85% pa.

*The reason I do this is because any long-term trend-following system will obviously outperform in periods of outstanding bullishness (like 2003-2007).*

*I'd rather expect less from the system and for the results to surprise to the upside than the other way around.*

Will this bullish streak continue?
I don't know, but I don't want to give myself a false dream.

Those that were testing and designing systems during 2001-2002 were fortunate in the sense that the period in which they began trading hugely outperformed the period in which the system was tested.

1992-2002 was by no means bearish, but it was more slow and steady.

Though I must say at this stage the returns when tested from 1993-1998 are less than impressive.

*The stockmarket will have periods of bullishness and strong trends. The key is not to lose too much (or to make money from other systems??) in between those periods.*


----------



## tech/a (16 August 2007)

*GP*

Is this not covered by taking out the best and worst trades in the test?
Particularly if one trade, as you say, makes 20% or more of the profit.

*Nizar.*
There are only a few ways of investigating the possibility of a bull or bear market period.
I think the answer is to have various systems, and to load up when a bullish period occurs.
These generally last for prolonged periods, unlike corrective moves (when viewed against the backdrop of history).
I don't think you're kidding yourself if you realise what periods you're testing through.
What may happen though is a skewed result with regard to strings of winners/drawdowns and R/R (I don't see this skewing as being way over the top).


----------



## bingk6 (16 August 2007)

GreatPig said:


> Yes, but that's a historic winner which may not normally occur and may not occur again. If the system only ends up with positive expectancy because of that trade, then if such a trade doesn't occur again during his trading life, his system won't make money.
> 
> A lot of ifs and buts of course, but just something to be wary of. I think a robust system shouldn't be highly reliant on any particular trade or small number of trades.
> 
> ...




It is always better to check the performance of the system with the extremes removed, so that we are only checking the "bread and butter" stuff. Is there any way in AmiBroker to strip off, say, the top-performing 2% and bottom 2% of trades for any stock when you are performing portfolio-level testing? The only way I can think of is to exclude a particular symbol from testing altogether, but that's not really what I am looking for, as all trades for that symbol are removed.
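I'm not sure AmiBroker exposes this directly, but once the trade list is exported, stripping the extremes is only a few lines. A sketch, assuming `trade_profits` is a plain list of per-trade results pulled from the backtester:

```python
def trimmed_trades(trade_profits, pct=0.02):
    """Drop the top and bottom pct of trades (the extremes) and
    return the 'bread and butter' remainder."""
    ordered = sorted(trade_profits)
    k = int(len(ordered) * pct)          # number of trades to shave off each end
    return ordered[k:len(ordered) - k] if k else ordered
```

Re-running the summary stats on `trimmed_trades(profits)` against the full list shows how much of the result rests on a handful of extreme trades.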


----------



## GreatPig (16 August 2007)

tech/a said:
			
		

> Is this not covered by taking out the best and worst trades in the test?



Probably similar, but I think taking out random trades would be easier to code.

Cheers,
GP


----------



## _ExY_ (16 August 2007)

GreatPig said:


> I think a robust system shouldn't be highly reliant on any particular trade or small number of trades.






bingk6 said:


> It is always better to check for the performance of the system with the extremes removed, so that we are only checking the "bread and butter" stuff.




Why? This doesn’t really answer Nizar’s original question. Nizar’s question is more about the likelihood of a trading system replicating its results in a different period. So does it matter if you’re testing to see how these extremes are removed? Well, it’s not going to matter if they are there in the future. All of the tests proposed so far simply assess how the system will perform if these outliers aren’t there. They are not about assessing whether or not the system will be able to pick these up in the future, which should be the aim when designing these tests (if such tests are devised they will be able to measure, to some extent, the robustness of the system).

Nizar’s question is also about whether or not to deem a system’s behaviour statistically significant based upon its results. You are trying to assess whether or not these outliers (well, the behaviour of all trades, and from those trades the behaviour of the system) picked up by your system are something that is likely to be picked up in the future.



GreatPig said:


> If one that's always picked up is a huge winner, then all your Monte Carlo results will be biased by that same trade, as you may never get a backtest without that trade in it. By randomly rejecting signals though, you will eventually get backtests where that trade is rejected, and will thus see some results that don't include it.




I know that this may seem repetitive, but this still doesn’t tell us anything we didn’t already know, and doesn’t address Nizar’s original question. I could turn this line of thought around and say I have a system that has an expected mean < 0, but should I still trade it because it may pick up a winning trade that makes it suitable for trading? What I’m trying to say is: suppose we have a system that is profitable because of a small % of outlier trades, and we perform Monte Carlo analysis where several portfolios are produced, and one of these portfolios gives an unfavourable result. Now take this unfavourable portfolio and imagine that it was the actual result we got originally, prior to performing the Monte Carlo analysis where we reject N trades. Should I then reject the system? Because it may pick up some favourable trades in the following N trades.


----------



## nizar (16 August 2007)

rnr said:


> An alternative to Tech’s method is as follows:-
> 
> a) From the Trade Log Window note the unique trade numbers you wish to exclude from this simulation,
> 
> ...




GP.
It's pretty easy to remove the outliers; refer to rnr's post above.

ExY.
Great post.


----------



## tech/a (16 August 2007)

_ExY_ said:


> Why? This doesn’t really answer Nizar’s original question. Nizar’s question is more about the likelihood of a trading system replicating its results in a different period. So does it matter if you’re testing to see how these extremes are removed? Well, it’s not going to matter if they are there in the future. All of the tests proposed so far simply assess how the system will perform if these outliers aren’t there. They are not about assessing whether or not the system will be able to pick these up in the future, which should be the aim when designing these tests (if such tests are devised they will be able to measure, to some extent, the robustness of the system).




Speaking for myself,
I want a system which doesn't need an outlier move to make it profitable.
I'm looking for a smooth equity curve rather than one which has a spike.
If I take out the spike and the end result of further testing is a smooth curve with a strong set of numbers, I can be reasonably sure that the system will also trade similar outlier moves. If it doesn't, it's still acceptable.
If the outlier is the only reason for the system being profitable over a 10-year period, I won't want to trade it for 10 yrs in the hope another outlier turns the system to profit.



> Nizar’s question is also about whether or not to deem a system’s behaviour statistically significant based upon its results. You are trying to assess whether or not these outliers (well, the behaviour of all trades, and from those trades the behaviour of the system) picked up by your system are something that is likely to be picked up in the future.




See above.




> I know that this may seem repetitive, but this still doesn’t tell us anything we didn’t already know, and doesn’t address Nizar’s original question. I could turn this line of thought around and say I have a system that has an expected mean < 0, but should I still trade it because it may pick up a winning trade that makes it suitable for trading? What I’m trying to say is: suppose we have a system that is profitable because of a small % of outlier trades, and we perform Monte Carlo analysis where several portfolios are produced, and one of these portfolios gives an unfavourable result. Now take this unfavourable portfolio and imagine that it was the actual result we got originally, prior to performing the Monte Carlo analysis where we reject N trades. Should I then reject the system? Because it may pick up some favourable trades in the following N trades.




I see your point, and yes it could.
But do you really want to take a punt that the portfolio you're trading won't be the portfolio that makes a loss?
Systems development is about removing the "punt" factor.


----------



## GreatPig (16 August 2007)

_ExY_ said:
			
		

> So does it matter if you’re testing to see how these extremes are removed?



Randomly removing buy signals is not just about removing outliers. As it randomly removes _any_ signal, there will be some backtests where the outliers are still present. It's primarily to help ensure that different backtests will take different portfolio paths, and that after lots of backtests, a large variety of portfolios using the same signals will have been tested. As far as outliers go, it's mainly to help ensure that none of them are always included in every portfolio backtest.
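That random-rejection idea can be sketched as follows. This is a toy model of my own: it treats the trade list as independent results and ignores the capital constraints a real backtester would apply, so only the shape of the technique carries over.

```python
import random

def rejection_runs(trade_profits, reject_frac=0.05, runs=1000, seed=0):
    """Repeat the backtest many times, each time randomly discarding a
    fraction of the signals, and collect each run's net profit."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        # keep each trade with probability (1 - reject_frac)
        kept = [p for p in trade_profits if rng.random() > reject_frac]
        results.append(sum(kept))
    return results
```

If the distribution of `results` collapses in the runs where a single big winner happens to be excluded, the system is leaning on that one trade.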



> this still doesn’t tell us anything we already know and doesn’t address Nizars original question.



It wasn't a response to Nizar's original question, rather to your more-recent question.

GP


----------



## tech/a (16 August 2007)

GP

Never done that myself but can see the merits.



> It's primarily to help ensure that different backtests will take different portfolio paths,


----------



## _ExY_ (16 August 2007)

Just saw this.



tech/a said:


> So rather than one singular portfolio of trades being tested, many 1000s can be tested. I have seen a system which has a 97% success rate (97% of portfolios are profitable, 3% are losers) over 20000 portfolios.
> Would you trade it?
> Why/why not?




If the Sum((FinalCapital - InitialCapital) for all portfolios)>0 then you should trade it.



tech/a said:


> I see your point and yes it could.
> But do you really want to take a punt that the portfolio you're trading won't be the portfolio that makes a loss?
> Systems development is about removing the "Punt" factor.





Well, I guess in the example used by nizar you would Monte Carlo it and find out if Sum(all portfolio outcomes) > 0; if so, you should trade it.




GreatPig said:


> It wasn't a response to Nizar's original question, rather to your more-recent question.




Sorry for my hastiness, and thank you for disclosing the details of your technique.


----------



## GreatPig (16 August 2007)

_ExY_ said:
			
		

> If the Sum((FinalCapital - InitialCapital) for all portfolios)>0 then you should trade it.



I think I would want it better than that.

That's the same as saying if the average is > 0, but if it's too close to zero then that means you've roughly a 50-50 chance of making a profit (if the average is close to the median).

I'd be looking for at least one standard deviation below the mean to still be greater than zero, to be a bit more on the safe side - two standard deviations preferably. In fact, I'd rather want all backtest results to be greater than zero.
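As a sketch of that criterion (assuming `final_profits` holds the net result of each Monte Carlo portfolio; the function name and shape are mine, not any package's API):

```python
from statistics import mean, stdev

def passes(final_profits, sigmas=2.0):
    """Require the mean result minus a couple of standard deviations to
    still be above zero - and, stricter again, no losing portfolios at all."""
    mu, sd = mean(final_profits), stdev(final_profits)
    return mu - sigmas * sd > 0 and min(final_profits) > 0
```

The `min(...) > 0` term is the "all backtest results greater than zero" preference; drop it if the two-sigma test alone is the bar.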

Cheers,
GP


----------



## nizar (16 August 2007)

GreatPig said:


> I think I would want it better than that.
> 
> That's the same as saying if the average is > 0, but if it's too close to zero then that means you've roughly a 50-50 chance of making a profit (if the average is close to the median).
> 
> ...




Hi GP.

I do look for all backtest results to be greater than zero as well.

It's nice if the extreme negative outlier is substantially in profit, but just to see it in the green is enough for me.

Of course, that's as long as it's an outlier and the average result over the several thousands of simulations is much, much higher and something that I would be very pleased with.


----------



## tech/a (17 August 2007)

Something to aim at perhaps.

```
Monte Carlo Report

Trade Database Filename
C:\TradeSimData\Weekly 01.trb

Simulation Summary
Simulation Date:                                       17/08/2007
Simulation Time:                                       4:43:24 AM
Simulation Duration:                                   54.74 seconds

Trade Parameters
Initial Capital:                                       $100,000.00
Portfolio Limit:                                       100.00%
Maximum number of open positions:                      100
Position Size Model:                                   Fixed Percent Risk
Percentage of capital risked per trade:                2.00%
Position size limit:                                   14.50%
Portfolio Heat:                                        100.00%
Pyramid profits:                                       Yes
Transaction cost (Trade Entry):                        $30.00
Transaction cost (Trade Exit):                         $30.00
Margin Requirement:                                    50.00%
Magnify Position Size(& Risk) according to Margin Req: Yes

Trade Preferences
Trading Instrument:                                    Stocks
Break Even Trades:                                     Process separately
Trade Position Type:                                   Process long trades only
Entry Order Type:                                      Default Order
Exit Order Type:                                       Default Order
Minimum Trade Size:                                    $0.00
Accept Partial Trades:                                 No
Volume Filter:                                         Ignore Volume Information
Pyramid Trades:                                        No
Use Level Zero trades only:                            Yes

Simulation Stats
Number of trade simulations:                           10000
Trades processed per simulation:                       3171
Maximum Number of Trades Executed:                     166
Average Number of Trades Executed:                     150
Minimum Number of Trades Executed:                     132
Standard Deviation:                                    4.70

Profit Stats
Maximum Profit:                                        $21,609,425.47 (21609.43%)
Average Profit:                                        $7,856,738.16 (7856.74%)
Minimum Profit:                                        $1,860,311.35 (1860.31%)
Standard Deviation:                                    $2,673,221.18 (2673.22%)
Probability of Profit:                                 100.00%
Probability of Loss:                                   0.00%

Percent Winning Trade Stats
Maximum percentage of winning trades:                  59.44%
Average percentage of winning trades:                  50.39%
Minimum percentage of winning trades:                  41.51%
Standard Deviation:                                    2.35%

Percent Losing Trade Stats
Maximum percentage of losing trades:                   58.49%
Average percentage of losing Trades:                   49.61%
Minimum percentage of losing trades:                   40.56%
Standard Deviation:                                    2.35%

Average Relative Dollar Drawdown Stats
Maximum of the Average Relative Dollar Drawdown:       $97,590.10
Average of the Average Relative Dollar Drawdown:       $28,208.25
Minimum of the Average Relative Dollar Drawdown:       $9,935.73
Standard Deviation:                                    $8,859.15

Average Relative Percent Drawdown Stats
Maximum of the Average Relative Percent Drawdown:      4.7030%
Average of the Average Relative Percent Drawdown:      2.9768%
Minimum of the Average Relative Percent Drawdown:      1.9259%
Standard Deviation:                                    0.3768%

Maximum Peak-to-Valley Dollar Drawdown Stats
Maximum Absolute Dollar Drawdown:                      $1,804,845.84
Average Absolute Dollar Drawdown:                      $461,552.77
Minimum Absolute Dollar Drawdown:                      $109,716.28
Standard Deviation:                                    $175,929.42

Maximum Peak-to-Valley Percent Drawdown Stats
Maximum Absolute Percent Drawdown:                     45.9054%
Average Absolute Percent Drawdown:                     27.5437%
Minimum Absolute Percent Drawdown:                     21.9219%
Standard Deviation:                                    3.8668%
```


----------



## R0n1n (17 August 2007)

Here is mine. Nearly there, just need to fix that profit percentage lol...

But seriously guys, what area should I concentrate on?

The report below is thanks to the help received from *Sir Burr* in converting it from AB to TradeSim. Thanks mate.


```
Trade Parameters
Initial Capital:                                           $100,000.00
Portfolio Limit:                                               100.00%
Maximum number of open positions:                         100
Position Size Model:                                Equal Dollar Units
Trade Size ($ value):                                       $10,000.00
Pyramid profits:                                                    No
Transaction cost (Trade Entry):                                 $10.00
Transaction cost (Trade Exit):                                  $10.00
Margin Requirement:                                            100.00%

Trade Preferences
Trading Instrument:                                             Stocks
Break Even Trades:                                  Process separately
Trade Position Type:                                Process all trades
Entry Order Type:                                        Default Order
Exit Order Type:                                         Default Order
Minimum Trade Size:                                              $0.00
Accept Partial Trades:                                              No
Volume Filter:                               Ignore Volume Information
Pyramid Trades:                                                     No
Use Level Zero trades only:                                        Yes

Simulation Stats
Number of trade simulations:                                     20000
Trades processed per simulation:                                 12619
Maximum Number of Trades Executed:                                1286
Average Number of Trades Executed:                                1121
Minimum Number of Trades Executed:                                 533
Standard Deviation:                                                      22.29

Profit Stats
Maximum Profit:                                  $187,217.86 (187.22%)
Average Profit:                                    $85,839.44 (85.84%)
Minimum Profit:                                    -$7,854.81 (-7.85%)
Standard Deviation:                                $23,692.78 (23.69%)
Probability of Profit:                                99.97%
Probability of Loss:                                 0.03%

Percent Winning Trade Stats
Maximum percentage of winning trades:                           41.38%
Average percentage of winning trades:                           37.97%
Minimum percentage of winning trades:                           34.48%
Standard Deviation:                                                    0.81%

Percent Losing Trade Stats
Maximum percentage of losing trades:                            65.52%
Average percentage of losing Trades:                            62.03%
Minimum percentage of losing trades:                            58.62%
Standard Deviation:                                                   0.81%

Average Relative Dollar Drawdown Stats
Maximum of the Average Relative Dollar Drawdown:             $1,260.98
Average of the Average Relative Dollar Drawdown:               $969.36
Minimum of the Average Relative Dollar Drawdown:               $793.81
Standard Deviation:                                                         $58.51

Average Relative Percent Drawdown Stats
Maximum of the Average Relative Percent Drawdown:              1.4579%
Average of the Average Relative Percent Drawdown:              0.8324%
Minimum of the Average Relative Percent Drawdown:              0.5581%
Standard Deviation:                                                          0.0983%

Maximum Peak-to-Valley Dollar Drawdown Stats
Maximum Absolute Dollar Drawdown:                           $50,662.59
Average Absolute Dollar Drawdown:                           $28,027.32
Minimum Absolute Dollar Drawdown:                           $17,345.20
Standard Deviation:                                                $4,301.58

Maximum Peak-to-Valley Percent Drawdown Stats
Maximum Absolute Percent Drawdown:                            47.4975%
Average Absolute Percent Drawdown:                            23.3832%
Minimum Absolute Percent Drawdown:                            13.6915%
Standard Deviation:                                                    3.6614%
```


----------



## nizar (17 August 2007)

Very impressive tech.
I like the worst portfolio and max max DD.
Looks the goods.

What is the testing period for these results? I notice only 150 trades were executed.

50% winners is nice.


----------



## tech/a (17 August 2007)

It's a long-term weekly method.
10 yrs test period.
150 trades indicates that you're in trades for quite a while when they go.
I'll send a sim through of a single portfolio event with other figures for you to have a squiz over the weekend, as it's at home, not at the office.


----------



## nizar (17 August 2007)

R0n1n.
Just a few points.
1. Put a realistic brokerage fee in there. I tend to overestimate mine at $44 each way. I don't think you will get $10 anywhere for $10k parcels.
2. You will find that the profit statistics will increase significantly if you use an alternative position-size method. Either a % of total capital (eg. 10%) OR you fix your risk per trade at 1-2%.
3. Choose to pyramid.

Tech.
I'd imagine R/R for that system would be *phenomenal*.


----------



## tech/a (17 August 2007)

> I don't think you will get $10 anywhere for $10k parcels.




CFD's


----------



## R0n1n (17 August 2007)

Good point guys. Can you put a percentage in for brokerage fees?

Also, where do I get the cheapest historical data for ASX and US markets which is clean and fully adjusted? It used to be free somewhere on the net but I can't seem to find it. Thanks.

Ronin.


----------



## nizar (17 August 2007)

R0n1n said:


> Good point guys. Can you put a percentage in for brokerage fees?
> 
> Also, where do I get the cheapest historical data for ASX and US markets which is clean and fully adjusted? It used to be free somewhere on the net but I can't seem to find it. Thanks.
> 
> Ronin.




I don't know about cheapest, but I got my data for ASX and US markets from premiumdata.net.

I paid, I think, around AU$200.
Though annoyingly they don't yet have the list of delisted stocks for US markets. They told me it's coming soon though.

And yes, in TradeSim you can put in a % for brokerage.


----------



## _ExY_ (17 August 2007)

GreatPig said:


> I think I would want it better than that.
> 
> That's the same as saying if the average is > 0, but if it's too close to zero then that means you've roughly a 50-50 chance of making a profit (if the average is close to the median).
> 
> ...




Of course, but from a conceptual point of view, basically any system that has an average above zero will, on average, make more than zero. Now it comes down to user preference on how far above zero they actually want the system to be and what kind of portfolio-vs-return curve they want. But nonetheless, from a conceptual point of view there would be nothing rationally wrong with investing in a system that has a portfolio average above zero.


----------



## R0n1n (17 August 2007)

nizar said:


> R0nin.
> Just a few points.
> 1. Put a realistic brokerage fee in there. I tend to overestimate mine at $44 each way. $10 i dont think you will get that anywhere for $10k parcels.
> 2. You will find that the profit statistics will increase significantly if you use an alternative position size method. Either a % of total capital (eg. 10%) OR you fix your risk per trade at 1-2%.
> 3. Choose to pyramid.




Nizar,

Whereabouts would those changes go?


----------



## nizar (17 August 2007)

R0n1n said:


> Nizar,
> 
> Whereabouts would those changes go?




R0n1n,

Ok, it seems your "Fixed dollar risk" and "Fixed percent risk" options are greyed out (top left of the trade parameters screen). This is likely to be because you have no initial stop coded into the metastock exploration.

You can still select "Equal percent dollar units".

For pyramiding, you need to code this into your metastock exploration. 

Have a look at the TradeSim thread, tech/a has his code there and this has pyramiding and also an initial stop.

Also, if you uncheck "Favour pyramid trade" in the Preferences screen, the results should improve.

For brokerage as a %, I thought you could do it, but I must've been mistaken.


----------



## GreatPig (17 August 2007)

_ExY_ said:
			
		

> But nonetheless from a conceptual point of view there would be nothing rationally wrong with investing in a system that has a portfolio average above zero.



To me there would be if it was too close to zero.

Remember an average is just an average. If that average of just above zero is from testing over the last 20 years, then you have nearly a 50% chance of never making any money during the next 20 years.

I certainly wouldn't be investing in that scheme.

Cheers,
GP
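GP's point can be made concrete with a quick simulation. The sketch below is Python with an invented return distribution (nothing from any real system): when the average per-run profit is only just above zero relative to the spread, close to half of the simulated outcomes are still losses.

```python
import random

random.seed(1)

# Hypothetical per-run profits: mean just above zero, wide spread.
# These numbers are made up to illustrate GreatPig's argument.
runs = [random.gauss(0.5, 10.0) for _ in range(100_000)]  # mean 0.5%, sd 10%

mean_return = sum(runs) / len(runs)
losing_fraction = sum(1 for r in runs if r <= 0) / len(runs)

print(f"average return: {mean_return:.2f}%")
print(f"fraction of losing outcomes: {losing_fraction:.1%}")
```

With a mean of 0.5% against a standard deviation of 10%, roughly 48% of outcomes lose money, which is exactly the "nearly 50-50" risk GP describes.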


----------



## rnr (17 August 2007)

> For brokerage as a % i thought you could do it but i mustve been mistaken.




This can be done in the Preferences screen by choosing "fractional costs" from the Transaction Cost section.

Refer to TS screen shot in the post below.



> Quote:
> Originally Posted by nizar
> R0nin.
> Just a few points.
> ...


----------



## _ExY_ (17 August 2007)

GreatPig said:


> To me there would be if it was too close to zero.



That is personal preference (I acknowledge that I wouldn't invest in such a system), but what I'm trying to say is that a rational machine would look at these results, see that on average you should make money, and recognise that if you didn't make money then that is simply bad luck (given that the test itself isn't faulty).



GreatPig said:


> I certainly wouldn't be investing in that scheme.




But you would if this was the only scheme. Without introducing other possible systems (and this should be extended to other investment opportunities), this system would appear to be a suitable solution, so looking at the average is simply a relative measure. Like I said, it is user preference what type of portfolio-vs-return curve should be considered a tradeable system. Designers will take into account their own personal circumstances while weighing the possible outcomes of the system, such as failure. However, these details are considerations for the trader and not a concern of the system; if you start treating them as part of the system, then you start to extend your definition of "system". These details are what define the relative measure amongst systems, so that the designer can decide what kind of system is suitable for them.

I acknowledge that what I'm saying isn't really anything new, just common sense. I'm just trying to point out the grey areas when it comes to measurements and how they should be interpreted. The rest is either personal experience or personal preference, both of which are invaluable.


----------



## tech/a (17 August 2007)

> The rest is either personal experience or personal preference,




What you're saying is correct.
The ability to tailor a system to your own preferences is the goal of most retail traders (interested in systems trading).
Those preferences are of course developed through experience of what is acceptable to the individual and what is possible through design.
The whole idea is to eliminate, for the individual, those grey areas.
Each will have a different perception of grey.


----------



## theasxgorilla (17 August 2007)

GreatPig said:


> To me there would be if it was too close to zero.
> 
> Remember an average is just an average. If that average of just above zero is from testing over the last 20 years, then you have nearly a 50% chance of never making any money during the next 20 years.
> 
> I certainly wouldn't be investing in that scheme.




A random entry/exit system that I coded up in Amibroker, after 2500 runs using 10 years of data from 1/1/97 until 31/12/06, showed *a 95% chance of a CAGR > 7.7%* and *a 95% chance of experiencing a Max DD no worse than 33.3%*.  No single run had a negative CAGR or a Max DD greater than 55%.    This system used nothing more sophisticated than 10x10% position sizing and a 10% stoploss.  There was no leverage.

As mentioned, entry and exit were random, holding period was between 1 and 6 months, market exposure was 76%, average trades per run was 449...brokerage was included at $33 each way.  Average CAGR and Max DD was 12.2% and 27.4% respectively.  Dividends were not included, you might conservatively estimate an extra 2.5% CAGR if you include dividends.  

By adding a simple momentum filter (and I mean SIMPLE), *these results were improved to become 14.7% CAGR (still without dividends) and 25.6% Max DD*.  Market exposure was reduced to 61%.  No single run produced a CAGR of less than 2.5% or a Max DD greater than 50%.

My belief is that if you can't design a system that beats this system using the same data then you don't have a system with a sufficient edge to *beat the market*.
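For anyone wanting to reproduce the "95% chance of a CAGR > x" style of figure from their own Monte Carlo runs, it is just the 5th percentile of the per-run results. A minimal Python sketch, using an invented CAGR distribution rather than theasxgorilla's actual run data:

```python
import random

random.seed(42)

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1)))))
    return ordered[k]

# Stand-in for 2500 Monte Carlo runs of a random system; this CAGR
# distribution is invented purely to show the calculation.
cagrs = [random.gauss(12.2, 2.8) for _ in range(2500)]

# "95% chance of a CAGR greater than X" is the 5th percentile of the runs.
floor_cagr = percentile(cagrs, 5)
print(f"95% of runs achieved a CAGR above {floor_cagr:.1f}%")
```

The same function with `pct=95` on the Max DD results gives the "95% chance of a Max DD no worse than Y" figure.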


----------



## GreatPig (17 August 2007)

AG,

Do you mind posting your code for that system?

I just tried a random entry system based on the sort of time and parameters you mention and always get negative returns when run over all stocks (or most stocks).

I can get results similar to yours though over the current ASX200 or ASX300, but it seems to be very sensitive to trade price.

Cheers,
GP


----------



## tech/a (18 August 2007)

> My belief is that if you can't design a system that beats this system using the same data then you don't have a system with a sufficient edge to beat the market.




Your results are to be expected.
They simply mimic the market.
Nothing special there.
The equity curve would look like a chart of a composite of all those stocks in your universe.


----------



## theasxgorilla (18 August 2007)

tech/a said:


> Your results are to be expected.
> They simply mimick the market.
> Nothing special there.
> The equity curve would look like
> ...




It doesn't always mimic the market; that's the point of the study. It's the point of Monte Carlo too, isn't it? A single successful run through your data is a precarious basis for trading a system with real money. And it's not only the averages or the outliers but the distribution of the results.

*GP*, I forgot to mention, I used a universe of today's XAO500, and a liquidity filter. I don't want to swamp this thread with a mass of code, so I PMed you instead.


----------



## bingk6 (18 August 2007)

AG,

I'd like to have a look at the code myself. Is this a long-only system? I'd be surprised if the system managed those sorts of numbers if it traded both long and short.


----------



## theasxgorilla (18 August 2007)

GreatPig said:


> but it seems to be very sensitive to trade price.




What do you mean by this GP?


----------



## theasxgorilla (18 August 2007)

bingk6 said:


> AG,
> 
> I'll like to have a look at the code myself. Is this a long only system ? I'll be surprised if the system managed those sorts of numbers if it traded both long and short.




  I would too.  Long only.  I'll PM you the code too.


----------



## bingk6 (18 August 2007)

theasxgorilla said:


> I would too.  Long only.  I'll PM you the code too.




cheers, will take a look


----------



## GreatPig (18 August 2007)

theasxgorilla said:
			
		

> What do you mean by this GP?



Meaning with the quick example I tried, by changing the trade prices from buy/sell at close to buy high, sell low, the typical return went from around +10%pa to seriously negative.

Which to me means it's very sensitive to the actual price of the trade on the day, probably because it picks up a lot of very short trades with thin margins.

Anyway, I'll give your code a go soon. Thanks for that.

Cheers,
GP
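GP's fill-price experiment generalises into a simple robustness check: revalue the same set of trades under increasingly pessimistic fill assumptions and see how far the returns degrade. A toy Python sketch (the trade prices are made up; buying the high and selling the low is the worst case GP describes):

```python
# Each hypothetical trade records (buy_high, buy_close, sell_close, sell_low)
# for its entry day and exit day. The numbers are invented for illustration.
trades = [
    (10.40, 10.20, 10.90, 10.60),
    (5.15, 5.00, 5.25, 5.05),
    (2.10, 2.02, 2.14, 2.05),
]

def ret(buy, sell):
    """Percent return for one trade given its fill prices."""
    return (sell - buy) / buy * 100

for buy_high, buy_close, sell_close, sell_low in trades:
    optimistic = ret(buy_close, sell_close)   # fill at the close both ways
    pessimistic = ret(buy_high, sell_low)     # buy the high, sell the low
    print(f"close-to-close {optimistic:+.1f}% vs worst-case {pessimistic:+.1f}%")
```

A system whose edge survives the worst-case fills is far less sensitive to execution than one that only profits when filled at the close.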


----------



## nizar (18 August 2007)

GreatPig said:


> Meaning with the quick example *I tried, by changing the trade prices from buy/sell at close to buy high, sell low, *the typical return went from around +10%pa to seriously negative.
> 
> Which to me means it's very sensitive to the actual price of the trade on the day, probably because it picks up a lot of very short trades with thin margins.
> 
> ...




A great way to test for system robustness.


----------



## GreatPig (18 August 2007)

theasxgorilla said:
			
		

> after 2500 runs using 10 years of data from 1/1/97 until 31/12/06, showed a 95% chance of a CAGR > 7.7% and a 95% chance of experiencing a Max DD no worse than 33.3%. No single run had a negative CAGR or a Max DD greater than 55%.



I don't have an XAO500 list, but have tried some preliminary trials of 100-300 runs over the same time period with both an all-stocks list and an ASX300 list using your exact code (fewer runs on the all-stocks list since it takes longer). First I tried buy and sell on close, then buy high, sell low. You can see that it makes quite a lot of difference. There's also a significant difference between using all stocks and limiting it to the current ASX300.

ASX300 buy/sell on close:

Max Profit:  $646,118
Min Profit:  $21,970
Avg Profit:  $215,475 = *12.2% pa*
Median Profit:  $203,032
StdDev:  $107,748
Median-StdDev:  $95,283
Median-2*StdDev:  -$12,465

All stocks buy/sell on close:

Max Profit:  $369,977
Min Profit:  -$19,291
Avg Profit:  $118,653 = *7.6% pa*
Median Profit:  $107,479
StdDev:  $76,872
Median-StdDev:  $30,607
Median-2*StdDev:  -$46,265

ASX300 buy high, sell low:

Max Profit:  $136,079
Min Profit:  -$68,053
Avg Profit:  $10,742 = *1.03% pa*
Median Profit:  $4,664
StdDev:  $35,556
Median-StdDev:  -$30,892
Median-2*StdDev:  -$66,448

All stocks buy high, sell low:

Max Profit:  -$21,770
Min Profit:  -$87,851
Avg Profit:  -$63,038 = *-9.5% pa*
Median Profit:  -$64,831
StdDev:  $14,413
Median-StdDev:  -$79,244
Median-2*StdDev:  -$93,657

To me this highlights two points:

1. This particular system is rather sensitive to the trade price on the day. Perhaps if the system was modified to make it a longer-term system (eg. increase the minimum/average hold time) then it might become less sensitive.

2. Using the current version of an ASX300 or similar universe and then backtesting over earlier periods when those stocks may not have been in that universe provides a form of survivorship bias with a demonstrable advantage. If there were no advantage in that, then you'd expect the same tests of all stocks to give similar results.

Note that I'm using free data which has only been manually adjusted when I see obvious cases, so there may be some discrepancy between these results and those with clean data. However, I don't think that discrepancy would come close to accounting for the differences mentioned in the two points above.

Cheers,
GP


----------



## GreatPig (19 August 2007)

Tried another random trading system of my own with the following properties:

- Buy with random probability of 5% at each bar.
- Sell with random probability of 0.15% at each bar.
- Minimum hold time of 30 bars.
- No stops at all.
- Position score random.
- Initial capital $1m.
- Position size 2% of equity (ie. initially $20K).
- Number of shares < 10% of 100 day volume EMA.
- Number of shares < 20% of trade day volume.
- Minimum trade $5K.
- Maximum trade $100K.
- Minimum average daily turnover on trade day $50K.
- No minimum or maximum price.
- Trade price random between high and low on trade day.
- One bar trade delays.
- All current stocks universe (or most of them). Also tried ASX300.
- Commission of $30 per trade.
- No limit on number of open positions (set to 1000).
- 10 year period from 1/1/1997 to 31/12/2006.

The buy and sell probabilities and minimum hold time were set to try and emulate a medium to long term system. After a purchase, a position is held for a minimum of 30 bars and then has 0.15% probability of being sold on any subsequent bar. The volume tests are to ensure that the trades would have been reasonably possible on the day, based on available volume. The initial capital and position size is to allow a fairly large number of positions (typically 50 to start with) but still give a good chance that they'll be above the minimum trade value of $5K. That minimum has been chosen to try and avoid the commission becoming too significant a percentage of the trade (it's essentially what I use in real trading). The random buy and sell prices have been rounded to real trade values (nearest cent, half cent, or tenth of cent depending on price).
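For readers who want to try something similar without AmiBroker, a stripped-down Python sketch of the same idea is below. Everything here is invented (random-walk prices instead of real ASX data, and none of the volume or turnover filters listed above), so it only shows the mechanics of a random entry/exit Monte Carlo pass, not GP's actual results.

```python
import random

random.seed(7)

def random_price_series(n_bars=1000, drift=0.0003, vol=0.02):
    """A random-walk price series standing in for a real stock."""
    price, series = 1.0, []
    for _ in range(n_bars):
        price *= 1 + random.gauss(drift, vol)
        series.append(price)
    return series

def one_run(n_stocks=50, buy_prob=0.05, sell_prob=0.0015, min_hold=30):
    """One pass of a random system: random entries, random exits after a
    minimum hold. Returns the average sum of trade returns per stock (%)."""
    total = 0.0
    for _ in range(n_stocks):
        prices = random_price_series()
        entry, held = None, 0
        for price in prices:
            if entry is None:
                if random.random() < buy_prob:
                    entry, held = price, 0
            else:
                held += 1
                if held >= min_hold and random.random() < sell_prob:
                    total += (price - entry) / entry
                    entry = None
        if entry is not None:               # mark open positions to last bar
            total += (prices[-1] - entry) / entry
    return total / n_stocks * 100

results = sorted(one_run() for _ in range(10))
print(f"median run: {results[len(results) // 2]:+.1f}%  "
      f"worst: {results[0]:+.1f}%  best: {results[-1]:+.1f}%")
```

The distribution of `results` across many runs, rather than any single run, is what the Monte Carlo comparison is about.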

As a return comparison, the XAO averaged 8.82% pa over the same 10 year period, from 2424.6 to 5644.3.

The results over 600 runs across all stocks are:

Max Return: 20.86%
Min Return: 4.11%
Average Return: *11.18%*
Median Return: 10.68%
StdDev: 5.78%
Median-StdDev: 7.20%

Max MaxDD: -87.00% (system MaxDD)
Min MaxDD: -19.86%
Avg MaxDD: *-33.38%*
Median MaxDD: -32.30%
StdDev: 6.98%
Median-StdDev MaxDD: -39.28%

Max Avg bars held: 622.84
Min Avg bars held: 451.84
Avg Avg bars held: *532.20* (bit over 2 years)
Median Avg bars held: 530.91
StdDev: 28.01

Max % winners: 60.74%
Min % winners: 44.97%
Avg % winners: *53.31%*
Median % winners: 53.33%
StdDev: 2.55%

Max Win Avg % Profit: 192.22%
Min Win Avg % Profit: 61.44%
Avg Win Avg % Profit: *99.02%*
Median Win Avg % Profit: 95.40%
StdDev: 18.80%

Max Loss Avg % Loss: -44.28%
Min Loss Avg % Loss: -30.31%
Avg Loss Avg % Loss: *-36.88%*
Median Loss Avg % Loss: -36.81%
StdDev: 2.25%

The results for 600 passes using the (nearly) current ASX300:

Max Return: 23.75%
Min Return: 12.02%
Average Return: *17.52%*
Median Return: 17.35%
StdDev: 6.38%
Median-StdDev: 15.14%

Max MaxDD: -29.02% (system MaxDD)
Min MaxDD: -17.05%
Avg MaxDD: *-20.99%*
Median MaxDD: -20.52%
StdDev:  2.14%
Median-StdDev MaxDD: -22.66%

Max Avg bars held: 606.17
Min Avg bars held: 439.35
Avg Avg bars held: *527.25* (bit over 2 years)
Median Avg bars held: 526.30
StdDev: 27.17

Max % winners: 76.17%
Min % winners: 57.72%
Avg % winners: *68.58%*
Median % winners: 68.51%
StdDev: 2.5%

Max Win Avg % Profit: 221.73%
Min Win Avg % Profit: 64.28%
Avg Win Avg % Profit: *106.47%*
Median Win Avg % Profit: 104.54%
StdDev: 18.91%

Max Loss Avg % Loss: -33.41%
Min Loss Avg % Loss: -19.36%
Avg Loss Avg % Loss: *-25.95%*
Median Loss Avg % Loss: -25.87%
StdDev: 2.25%

As can be seen, there is a significant improvement in results when using the current ASX300 as the universe. The two backtest runs were otherwise identical. It is also interesting to note that the random system out-performed the XAO over the same period. Perhaps that's because of the survivorship bias of not having delisted stocks in the universe any more, or just because I need to do more runs (the system is a little slow, I think due to all the available signals and volume testing, so even 600 runs took a while).

Cheers,
GP


----------



## nizar (19 August 2007)

Amazing how not even 1 run from the 600 resulted in failure.


----------



## theasxgorilla (19 August 2007)

nizar said:


> Amazing how not even 1 run from the 600 resulted in failure.




This was exactly the point of the test: on the basis of a random system, how lucky or unlucky could a 'trader' with no edge have been over the last 10 years? The answer seems to be, not _that_ unlucky, and in some extreme cases _very_ lucky. First point: consider this when you see even 10 years of outstanding results from a fund manager. The interesting thing is that if you add in a few 'common sense' components like 10x10% position sizing, a 10% stoploss, and a rate-of-change momentum filter (arguably the most simple form of trend identification there is), you can enormously increase avg. CAGR, decrease avg. MaxDD and squeeze the distributions up such that the probability of results close to the averages goes up massively. You can see the shift in distributions from adding each of these components to my test results on my blog.

http://theasxgorilla.blogspot.com/2007/08/2500.html

It isn't that I'm trying to say there isn't any risk of survivorship bias in testing this way, because there clearly could be. But what should also be considered is that by testing delisted stocks you will introduce both positive and negative survivorship bias: not all stocks delist from the XAO due to failure. What are the chances that a stock that reaches the XAO delists due to failure, compared with, say, M&A activity? Impossible to know.

In any case I explained the discrepancies between the XAO and the testing by three things:

1. Survivorship bias (somewhat)
2. A 4 in 5 chance of _not_ picking an ASX100 stock.  Where do the fast moving, high _bang-for-buck_ shares live?
3. The index is market cap weighted...unless you also position size according to market cap there is a very high chance that you will place equal amounts on money into the fast moving issues as the slow moving ones, and it's the latter which ultimately determine the index.

Admittedly GP's system is quite different to mine. I tried to make my system buy and sell more frequently, as I wanted to mimic a more frantic trader who thinks they have an edge and are outsmarting the market, but clearly don't. I also didn't want long hold times, specifically because I knew people looking at the study would say, well, you're just imitating buy-and-hold, and we all know the market has gone up for 10 years.

I retain my original comment. If you are testing on similar data for the last 10 years and you can't outperform (ie. a tighter and better distribution of CAGR and Max DD from your Monte Carlo testing) a system that has no or minimal 'edge', then your system is quite likely more dependent upon market conditions than you realise.
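The "SIMPLE momentum filter" idea, only allowing new entries while a rate-of-change reading on the index is positive, can be sketched in a few lines. The index values below are made up purely to show the mechanics; theasxgorilla's actual filter and parameters may well differ.

```python
# Hypothetical index closes; invented data for illustration only.
index = [100, 101, 103, 102, 105, 107, 104, 101, 98, 96, 97, 99]

def roc(series, bar, lookback=5):
    """Percent rate of change over `lookback` bars (0 before enough data)."""
    if bar < lookback:
        return 0.0
    return (series[bar] - series[bar - lookback]) / series[bar - lookback] * 100

# Only allow new entries on bars where the index ROC is positive.
entries_allowed = [roc(index, i) > 0 for i in range(len(index))]
print(entries_allowed)
```

In this toy series, entries are only permitted on the two bars where the index shows positive 5-bar momentum; everywhere else the filter keeps the system out.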


----------



## theasxgorilla (19 August 2007)

GreatPig said:


> 1. This particular system is rather sensitive to the trade price on the day. Perhaps if the system was modified to make it a longer-term system (eg. increase the minimum/average hold time) then it might become less sensitive.




This is really unusual and interesting.

My testing was done on open prices, with a 1-day delay after the buy and sell signals. If you test with buying and selling closes, or buying highs and selling lows, what does this imply? Is buying the open an 'edge'? If we add a reasonable assumption for slippage and get similar results buying on open, can we conclude that buying on the open might be an edge?


----------



## GreatPig (19 August 2007)

I would expect a totally random long-term system over all stocks to roughly have the return of the market index. In my test, I deliberately left out any stops and went for a purely random sell, except for the minimum hold time, which is quite short compared to the average hold time anyway. For that reason, the maximum trade drawdown was nearly always 99%+, as there would always be some shares that got picked up at $5-$10 and later sold for a few cents. Typically there were a number of those in each run, but the return held despite that.

I also chose a medium to long term outlook in this test to try and give decent size moves per trade, to minimise the impact of brokerage (I could have also set brokerage to zero I guess). I'll try again later using a shorter term outlook, but that generates a lot more signals and slows the backtesting down considerably more.

Cheers,
GP


----------



## GreatPig (19 August 2007)

theasxgorilla said:


> If you test with buying and selling closes, or buying highs and selling lows, what does this infer?



I think it just shows how sensitive the system is to trade price. I would naturally expect shorter term systems to be more sensitive, as they would typically have smaller gains and thus a small difference in trade price could make a significant difference to the gain.

If in your tests you find that trading using opening prices gives an advantage over, say, closing prices, then provided you can get in during the opening auction each time and get the opening price, it should be a reasonable test. However, if you regularly miss the opening price and get some other price, then it could have a significant impact on the return. If the system can be made relatively insensitive to trade price, then it really wouldn't matter when you trade during the day. That's hard to do, though, I think, with a very short-term system.

Cheers,
GP


----------



## theasxgorilla (19 August 2007)

GreatPig said:


> I also chose a medium to long term outlook in this test to try and give decent size moves per trade, to minimise the impact of brokerage (I could have also set brokerage to zero I guess). I'll try again later using a shorter term outlook, but that generates a lot more signals and slows the backtesting down considerably more.




Brokerage was another reason I wanted shorter hold periods and more trades. Everyone says that the first barrier to success in the markets is overcoming the built-in costs. This is going to be more prevalent with the punter who trades more frequently. One of the flaws of my testing was actually that I used a fixed brokerage price. Can you use minimum and fixed in combination with Amibroker, so as to simulate the pricing structures of the online brokers? Something for me to look into.

I made 2500 runs each time and it took the better part of a week to process all system variations.  It was even worse when I made a mistake and had to re-run a test.


----------



## GreatPig (19 August 2007)

You can make brokerage quite flexible if you use the commission table option.
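The tiered structure stevo asked about (a minimum fee or a percentage of trade value, whichever is greater) reduces to a one-line function. A sketch with illustrative numbers only; check your own broker's schedule:

```python
def brokerage(trade_value, minimum=29.95, rate=0.001):
    """Tiered cost: a percentage of trade value with a floor, mimicking
    common online-broker pricing. The numbers here are made up."""
    return max(minimum, trade_value * rate)

for value in (5_000, 20_000, 50_000, 100_000):
    print(f"${value:,} parcel -> ${brokerage(value):.2f} each way")
```

Small parcels pay the flat minimum, while larger parcels scale with the percentage, which is exactly the effect a fixed-only brokerage assumption misses.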

From some quick preliminary tests of a shorter term random system, with average hold times of about 30 bars, I get about the same average return (approx 10% pa) but the variance goes through the roof. There are both big positive and big negative results.

Will do more testing and try adding stops, etc.

Cheers,
GP


----------



## nizar (19 August 2007)

ASX.G
My broker charges $22 per trade for parcels up to $50k.
Then $44 per trade for parcels up to $100k.

So I just put $44 each way as my transaction costs when backtesting, to overestimate it a bit.
I don't think I will be trading $100k+ parcels until some time down the track.

Also, with regard to your random testing.

I do agree with you that at least one of the reasons is that you have an 80% chance of picking a stock outside the index (ie. not a large cap) and thus have a greater chance of picking a stock with a higher bang for buck rating.

But with that said -- what do you think about GP's results, in which the test over the ASX300 outperforms the one over the whole market?

I think the minimum hold time does lift the result here.

And ASX.G,
I read your blog daily 

I wish I could get mine fixed though; the damn pics won't enlarge when you click them for some reason, which makes it really annoying as you can't read the charts properly.


----------



## GreatPig (20 August 2007)

The reason I think the return is better for the current ASX300 list when backtested over that many years is that you're only testing over the stocks that have now proven to be successful.

It's a bit like saying I know that PDN, ZFX, etc. have done really well over the last few years, so I'm going to restrict my universe to just them and leave out all the ones that I know haven't performed. Of course the results are going to be outstanding. Using the current ASX300 or whatever is a less-extreme example of that, the way I see it.

Cheers,
GP


----------



## nizar (20 August 2007)

GreatPig said:


> The reason I think the return is better for the current ASX300 list when backtested over that many years is that you're only testing over the stocks that have now proven to be successful.
> 
> It's a bit like saying I know that PDN, ZFX, etc. have done really well over the last few years, so I'm going to restrict my universe to just them and leave out all the ones that I know haven't performed. Of course the results are going to be outstanding. Using the current ASX300 or whatever is a less-extreme example of that, the way I see it.
> 
> ...




Yeh.
It's a shame that data providers don't have historical index constituents, a real shame.
It restricts backtesting to the entire market only.
Yeh, you can put in price and volume turnover filters, but that's not the same as making an index your universe.


----------



## tech/a (20 August 2007)

Radge keeps saying it.

The solution is simple.
Know *WHY* your system returns profit.

Know why it works as a trend following system
OR
Know why it works as a short system
OR 
Know why it works as a forward and reverse system
OR
Know why it works as a combination of systems.

Ever thought of investing some $$s in Radges "Building a System" course?


----------



## theasxgorilla (20 August 2007)

nizar said:


> But with that said -- what do you think about GPs results in which case the test over the ASX300 outperforms the one over the whole market?




Certainly, historical testing with a universe like the XAO or the ASX300 or ASX200 must be assumed to have some amount of survivorship bias. Interestingly, the ASX300 outperforms the XAO over the same 10-year period: 9.24% to 8.82%. But I don't think this accounts for that big a difference in the results.

Unfortunately no one seems to have comprehensive data, AND even if we had comprehensive data, we would need to make the universe dynamic to test accurately. So, in lieu of all the aforementioned, IMO the best we can do is use a random entry/exit system as a benchmark and aim to beat it. As I said initially, if your system can't beat a system that has no edge when tested on the same survivorship-biased data, then your system also has no edge, or a negative one.


----------



## R0n1n (20 August 2007)

Here is the testing of my long term system on the Nasdaq 100. Your comments please...
I have attached the testing report as a spreadsheet as it's much easier to read.


----------



## nizar (20 August 2007)

R0n1n said:


> Here is the testing of my long term system on the Nasdaq 100. Your comments please...
> I have attached the testing report as a spreadsheet as its much easier to read.




Hi R0n1n,

Looks the goods to me.
28% pa is nothing to scoff at.
Only 82 trades in 10 years, which is mad.
You're not doing much for 28% pa!!
Average holding time of almost 2.5 years for the winners.
50% winners is nice, but I would imagine the R/R would be something special.

Max DD is 52%, but it looks like this is only through 1 run.

Have you done a Monte Carlo analysis on this system, and how is it looking?


----------



## R0n1n (20 August 2007)

nizar said:


> Hi R0n1n,
> 
> Looks the goods to me.
> 28%pa is nothing to scoff at.
> ...





Thanks for the review, Nizar. I plan this system to be fairly long term, and the one I posted results for before to be short to medium term.
This is just the first version of the system, with heaps of modifications (I am up to mod version S). I haven't done a Monte Carlo analysis on it yet but plan to do so shortly.


----------



## stevo (21 August 2007)

Ronin
28% annualised gain sounds good, but with 52% drawdown I personally would find it hard to trade - but we all have our own parameters and limitations that shape the way we trade. That's what makes the market!

Some thoughts;

One of the problems of comparing / testing systems is that position size can make such a big difference to results. For example, tech's system had the maximum number of open trades set to 100, so there is not really any limit on the number of trades that would be taken; pyramiding is set to "Yes" and position sizing is 2% risk. If trades were limited to 10, the results could be totally different. Not saying that there is anything wrong with tech's test. The number of positions would probably be self-limiting, although since pyramiding is on it may be possible to have 20-plus trades at any one time even with a 14.5% position size limit.

But the point is that position sizing alone can easily double or halve returns and drawdown. So comparing systems can be difficult unless position sizing is kept constant for comparison purposes. One can tweak a system substantially by playing with position sizing, as many of us have already discovered.

With AB I use an #include<TestParams.afl> statement and put position sizing strategy in a separate AFL file so I can be sure the position sizing strategy doesn't vary.
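stevo's point that the sizing model alone can double exposure is easy to demonstrate. Below is a Python sketch of two sizing models discussed in this thread; the parameters are illustrative, not recommendations.

```python
def equal_percent_size(equity, pct=10.0):
    """Equal percent of equity per position, e.g. 10 x 10% parcels."""
    return equity * pct / 100

def fixed_risk_size(equity, entry, stop, risk_pct=2.0):
    """Risk a fixed percent of equity between entry and initial stop;
    returns the dollar value of the resulting position."""
    risk_dollars = equity * risk_pct / 100
    risk_per_share = entry - stop
    shares = int(risk_dollars / risk_per_share)
    return shares * entry

equity = 100_000
print(equal_percent_size(equity))            # equal-percent parcel
print(fixed_risk_size(equity, 5.00, 4.50))   # 2% risk with a 10% stop
```

With a 10% initial stop, 2% fixed risk produces a $20,000 position on $100,000 equity, twice the size of a 10% equal-percent parcel, which is exactly why results are not comparable unless the sizing strategy is held constant.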

I really like the concept of a random system as a benchmark - if a system isn't better than random why trade it? Also the random system approach shows the natural bias of the markets over the last 10 years or so - and probably much longer. 

A trader can use this natural bias in their favour. Adding some simple things, like ASX.G's simple momentum filter, or maybe a more dynamic stop, can narrow the range and slide the results up the X axis.

The idea of not optimising at all seems quite strange to me, but then optimising is probably easier to do in AmiBroker than in Metastock. Everyone optimises, even if just by looking at a chart and picking the values they think will work best. ASX.G's already mentioned the power of AmiBroker's 3D optimisation charts. Walk-forward testing is also a good approach to see how a system performs out of sample - it's even more valid when the walk forward is for real!

I usually test in AmiBroker using random entry and exit prices. Open price trades can be reasonably difficult to achieve, especially as trade size grows. 

The last month has been a pretty good test of any long-only system, as well as a humbling experience for us overconfident, over-optimised, mechanical system traders! A 10% to 20% drawdown doesn't look like much on paper, but give back thousands of dollars and you get a different perspective. I just think of the trades I will make over the next 20 years and everything falls into place.

regards
stevo


----------



## It's Snake Pliskin (21 August 2007)

tech/a said:


> Radge keeps saying it.
> 
> The solution is simple.
> Know *WHY* your system returns profit.
> ...




Yes good points tech.
Have you done his course?


----------



## tech/a (21 August 2007)

It's Snake Pliskin said:


> Yes good points tech.
> Have you done his course?





No.


----------



## R0n1n (21 August 2007)

stevo said:


> With AB I use an #include<TestParams.afl> statement and put position sizing strategy in a separate AFL file so I can be sure the position sizing strategy doesn't vary.




Stevo, very valid points. All taken onboard. 

Can you post a sample position-sizing AFL here or PM me please? It will save me some time in knocking one up (I am still learning AFL).

A few questions:

1) I'll probably start a thread on optimization, but how much should one optimise? Will over-optimization eventually turn into curve fitting?

2) Once you have tested a system and done Monte Carlo analysis and are happy with it, do you just start trading it, or do you paper trade it in real time to see how it performs? When do you switch it to production?

3) With one of my systems I like to see the charts and the scans in weekly format as well. I use the code below to switch it to weekly (it's AmiBroker):


```
TimeFrame = Param( "Weekly Timeframe? Y=1", 0, 0, 1, 1 );

if ( TimeFrame ) // switch to weekly bars
{
    TimeFrameSet( inWeekly );
}
```

Is this right for backtesting as well? I mean, are the backtesting results those of a weekly system? (This code is at the very beginning of the system.)

4) Is it better to have multiple systems (for example a long-term system, a short-term CFD system, and a short-term stock system), or just one system that you tweak according to the times?

cheers,

Ronin.


----------



## Temjin (21 August 2007)

I completely missed this thread, and now it has grown too long for me to have had time to read through it.



R0n1n said:


> A few questions:
> 
> 1) I'll probably start a thread on optimization, but how much should one optimise ? Will over optimization turn into curve fitting eventually ?




Over-optimisation = curve fitting. If you get unrealistic results on a specific set of data, and then get different (and worse) results on out-of-sample data, then you have indeed over-optimised your system.

How much should one optimise their system? The less optimisation you need to do, and the more robust the results are, the better.



> 2) Once you have tested a system and done Monte Carlo analysis and are happy with it, do you just start trading it or you paper trade it in real time to see how it performs ? When do you switch it to production ?




As a guide, you should either paper trade it first or under-trade it (at reduced size) for a few months to see if your system performs within its tested parameters. Then switch to full position sizing in production. 



> 4) Is it better to have multiple systems (for example a longterm, a short term CFD, a short term stock system) or just one system and tweak it according to times.




It is ALWAYS BETTER to have as many uncorrelated systems in your portfolio as possible. However, be aware that all trending systems tend to correlate with each other over time; counter-trend systems are less correlated with trending ones. 

Remember that all non-adaptive, parametric-only systems will eventually fail over time. Unless you have a well-developed plan and a good tweaking strategy that "re-adapts" your single system to current market conditions, you are best off having your systems adapt themselves automatically.
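The correlation point can be illustrated with a toy calculation (the monthly returns below are hypothetical, not anyone's actual results): compute the Pearson correlation between the period returns of each pair of systems.

```python
import math

def pearson(a, b):
    """Pearson correlation between two systems' period returns."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# Hypothetical monthly returns: two trend followers and a counter-trend system.
trend_1 = [0.04, -0.02, 0.05, -0.01, 0.03, -0.03]
trend_2 = [0.05, -0.01, 0.04, -0.02, 0.02, -0.02]
counter = [-0.01, 0.02, -0.02, 0.03, -0.01, 0.02]

print(round(pearson(trend_1, trend_2), 2))  # trending systems: high positive
print(round(pearson(trend_1, counter), 2))  # counter-trend: low or negative
```

A portfolio of systems whose pairwise correlations are low (or negative) smooths the combined equity curve more than adding another highly correlated trend follower would.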


----------



## Temjin (21 August 2007)

stevo said:


> Ronin
> One of the problems of comparing / testing systems is that position size can make such a big difference to results......
> 
> But the point is that position sizing alone can easily double / half returns and drawdown. So comparing systems can be difficult unless position sizing is kept constant for comparison purposes. One can tweak a system substantially playing with position sizing, as many of us have already discovered.




You are completely right here.

Personally, I treat position sizing as an "adjustable power dial" that proportionally increases both the return and the risk of a system when needed. If a given power level produces a historically tested risk that is unacceptable to the user, then he/she should reduce it proportionally. If the return at that acceptable risk level is still unacceptable, then he/she should further improve the system, either by finding better entries/exits (mainly exits) or by being creative with money management techniques and with the power dial itself. 

Systems should be compared based on their return/risk ratio. There are many definitions of return and risk, so how you define them is up to you.
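The "power dial" idea can be sketched with a toy fixed-fraction position sizing model (the trade results below are hypothetical R-multiples, and this is an illustration, not Temjin's actual method): turning the dial up scales return and drawdown together.

```python
def equity_curve(r_multiples, risk_fraction, start=100_000.0):
    """Compound equity, risking a fixed fraction of current equity per
    trade; each trade's P/L is risk_fraction * equity * its R-multiple."""
    equity, curve = start, [start]
    for r in r_multiples:
        equity += equity * risk_fraction * r
        curve.append(equity)
    return curve

def max_drawdown(curve):
    """Largest peak-to-valley decline as a fraction of the peak."""
    peak, worst = curve[0], 0.0
    for value in curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# Hypothetical trade results in R-multiples (profit divided by initial risk).
trades = [2.0, -1.0, -1.0, 3.0, -1.0, 1.5, -1.0, 4.0, -1.0, -1.0]

for risk in (0.01, 0.02):  # dial at 1% vs 2% of equity per trade
    curve = equity_curve(trades, risk)
    print(risk, round(curve[-1]), round(max_drawdown(curve), 4))
```

Doubling the risked fraction roughly doubles both the gain over the trade sequence and the worst drawdown, which is why comparing systems only makes sense with the dial held constant.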


----------



## stevo (21 August 2007)

R0n1n said:


> Stevo, very valid points. All taken onboard.
> 
> Can you out a sample position sizing AFL here or PM me please, it will save me some time in knocking one up( I am still learning AFL)
> 
> ...




Have a look at http://www.amibroker.com/library/detail.php?id=545 for an idea of the components to use in backtesting.

Curve fitting is more likely to occur if you don't have enough data. By portfolio testing over a large number of stocks, the limited-data problem becomes less of an issue. Monte Carlo analysis also helps by showing the range of possible results, rather than just one good run.
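A minimal sketch of what a Monte Carlo pass over a trade list does (an illustration only, not TradeSim's or AmiBroker's actual algorithm): reshuffle the order of the trades many times and look at the spread of drawdowns across the reorderings.

```python
import random

def monte_carlo_drawdowns(trade_returns, runs=1000, seed=42):
    """Shuffle the trade order many times; record each reordered
    equity curve's worst peak-to-valley drawdown."""
    rng = random.Random(seed)
    drawdowns = []
    for _ in range(runs):
        order = trade_returns[:]
        rng.shuffle(order)
        equity, peak, worst = 1.0, 1.0, 0.0
        for r in order:
            equity *= 1.0 + r
            peak = max(peak, equity)
            worst = max(worst, (peak - equity) / peak)
        drawdowns.append(worst)
    return min(drawdowns), sum(drawdowns) / len(drawdowns), max(drawdowns)

# Hypothetical per-trade returns taken from a single backtest run.
trades = [0.08, -0.02, -0.02, 0.12, -0.03, 0.05, -0.02, -0.02, 0.15, -0.04]
low, mean, high = monte_carlo_drawdowns(trades)
print(round(low, 3), round(mean, 3), round(high, 3))
```

Note that with purely multiplicative compounding the final equity is the same for every ordering; it is the drawdown (and therefore the pain at a given position size) that varies with trade order, which is exactly the range a single backtest run hides.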

I am hopeless at paper trading so I just trade real money to forward test a system. But I need to strongly believe that it will work before I put money on it - and I also need to be sure that I haven't done something dumb when backtesting.

I stay away from daily charts so I don't switch timeframes.

Multiple systems seem to be a good idea and the basic theory goes along the lines of having various uncorrelated systems working at the one time - just as long as they all make money over the long run. The problem is getting multiple systems that you would be happy to trade. I would struggle trading a short term system, yet others couldn't handle a long term system.

stevo


----------



## nizar (22 August 2007)

If you want to test a system over different timeframes, can you just go to the "Trade database Manager" window in TradeSim and adjust the dates at the bottom of the screen and run the simulation again?

OR

Do you have to run a separate metastock exploration each time you want to change the dates by adjusting this function:

```
ExtFml("Tradesim.SetStartRecordDate",1,09,1998);
ExtFml("Tradesim.SetStopRecordDate",19,05,2006);
```

Any ideas would be much appreciated.


----------



## tech/a (22 August 2007)

Change the dates as you have highlighted and run Metastock again.
That will create a database on those dates.

I've often thought about, but never tried, starting and stopping on multiple different dates, i.e. don't trade May each year.
Sorry, just thinking aloud.


----------



## stevo (22 August 2007)

tech/a said:


> Ive often thought but not tried starting and stopping on different multiple dates.
> IE dont trade May each year.
> Sorry just thinking.




Good idea - take a break for one month a year! 

stevo


----------



## tech/a (22 August 2007)

R0n1n said:


> Here is the testing of my long term system on the Nasdaq 100. Your comments please...




Now I've had a chance to comment.
I would like a Monte Carlo sim on both, as the buy and hold result would be dependent on what it was you bought and held.
96 trades for buy and hold? I'm not understanding. Once you buy, you hold, so how does a portfolio have 96 trades?
What's your position sizing model?

I'm sure all will be clearer once you explain.
I'm a bit thick!


----------



## nizar (22 August 2007)

R0n1n.

Does the equity curve for "buy and hold" mean the index?


----------



## R0n1n (22 August 2007)

Sorry for the late reply, guys. Here is the description of Buy & Hold; it should clear up some of the confusion that I caused:



> The Buy & Hold Equity is represented as a blue line.  The line is calculated by taking an equally-sized Position in each symbol of the WatchList at the start of the $imulation period, and holding the Positions until the end of the period.  The $imulator uses real-world rules for even the Buy & Hold Positions.  It bases the size of the Positions on the closing value of the first bar of data, and opens the Positions at the opening price of the next bar.  Because of this you'll notice that the $imulator's Buy & Hold Exposure level is usually never exactly 100%, but is typically within 1% of 100% (no margin assumed).
> 
> The $imulator does not close Buy & Hold positions. The outstanding profit is based on closing price of the last bar, but no exit commission is applied.


----------



## tech/a (23 August 2007)

Ron.

Thanks, but I still don't get it.
If the buy and hold only took the same number of positions that were triggered in your system, then OK. But it took more?
I presume it takes positions in every stock (as a comparison to those in your portfolio) every time a buy occurs, and even though you stop out or exit, the buy and hold continues.
While I understand the comparison, its practical application in the real world is not possible due to lack of capital. You couldn't replicate the results; you could only buy and hold the first X positions, and then you're fully capitalised.

Yes, I understand it's a benchmarking method!
But, as such, to me it's of little value.


----------



## Nick Radge (23 August 2007)

> I just think of the trades I will make over the next 20 years and everything falls into place.




If you wish to be a successful trader:

(1) Design a robust trend following system, 
(2) Control the risk,
(3) heed Stevo's words of wisdom above.


"Next 1000 trades" is my mantra. Tattoo it on your forehead.

In my experience many people have held the "holy grail" without knowing it. They discard it at the first hiccup. This recent correction will be a great example of people who perhaps gave back some open profits and feel a little pained by it. The pain will make them look elsewhere for something that doesn't offer pain. Of course all trading systems will induce some level of pain. What makes a successful trader is one that accepts some pain but totally understands that over the longer term the positive expectancy will continue to roll on unabated. Control the pain by controlling risk, then allow "time in" the system to work its magic.

Snake,
My course does not cover this stuff. It's all about "discretionary" trading. I have never put anything down on system trading.


----------



## howardbandy (23 August 2007)

Greetings --

I have written one book about trading system design, testing, and validation -- Quantitative Trading Systems -- and have another planned -- Trading System Validation.  I've read through the thread to date, and I'd like to add my two cents worth to the discussion of robustness.

1.  In order to measure robustness, there must be a metric, often called an objective function.  The metric is personal -- yours will probably be different than mine -- and it must be designed or chosen before the model development process begins in earnest.  Some people use compound annual rate of return, others have quite complex objective functions that include terms for trading frequency, annual rate of return, drawdown, holding period, and so forth.  The sensitivity of parameters to specific data can be addressed by perturbing the parameter values, measuring the associated objective value, and averaging.  Sensitivity to outliers can be addressed by limiting the profit or loss associated with them.  These, and other Monte Carlo techniques can be incorporated into the objective function.  

There is a very important point about the objective function -- if two trading systems or alternatives are compared over a given set of data, the one that scores highest according to the objective function must be the one that the person developing the system prefers.  If that is not the case, then there must be some factor that has not yet been included in the objective function.  That factor must be identified, quantified, and incorporated into the objective function before proceeding.  Further along in the trading system development process, we will not be able to see all the alternatives -- only the one alternative that scores highest will be used.
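A hypothetical example of such an objective function (the drawdown weight and the minimum-trade rule here are invented for illustration; Howard's point is that yours must encode your own preferences):

```python
def objective(car, max_dd, trades_per_year, dd_weight=2.0, min_trades=20):
    """Hypothetical objective function: reward compound annual return,
    penalise maximum drawdown, reject systems that trade too rarely."""
    if trades_per_year < min_trades:
        return 0.0  # too few trades to judge the system at all
    return car - dd_weight * max_dd

# The alternative scoring higher is, by definition, the one we must prefer;
# if we find we don't prefer it, a missing factor belongs in the function.
score_a = objective(car=0.25, max_dd=0.10, trades_per_year=40)
score_b = objective(car=0.35, max_dd=0.20, trades_per_year=40)
print(score_a > score_b)  # → True
```

Here the lower-return system scores higher because the function weights drawdown twice as heavily as return; a trader who would actually prefer system B has just discovered that the function does not yet capture their preferences.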

2.  The data being used must be divided into at least two data sets -- an in-sample data set that is used to select the parameters for the trading system, and an out-of-sample data set that is used very infrequently -- preferably exactly one time -- to measure the performance of the system on data that has never been seen before.  If the trading system is adjusted based on the results of the out-of-sample data, then that previously out-of-sample data has just become in-sample data, and a new out-of-sample data set must be used for validation.

3.  Search through the in-sample data set as much as you wish.  Look for values of the parameters that maximize the value of the objective function; add rules and filters.  When you are satisfied with the in-sample results, test on the out-of-sample data.  If the out-of-sample results are satisfactory, you have an indication that the system might be robust and might be tradable.

4.  To increase your level of confidence, perform the walk-forward test.  That is, select a length of time to use as the in-sample length and a length of time to use as the out-of-sample length.  The only arrangement of the two periods that is practical is to have the out-of-sample immediately follow the in-sample.  Start far enough ago that there is room for several out-of-sample periods before today.  Beginning with the first in-sample period (the oldest in-sample period), find the optimum parameters -- optimum is always determined by the highest value of the objective function -- then test over the associated out-of-sample period, and record the results.  Move on to the second in-sample period, optimize, test the second out-of-sample period, and record the results.  After all the in-sample periods have been processed, concatenate the results from the out-of-sample periods.  If those results are satisfactory, your confidence is increased.
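The walk-forward procedure in point 4 can be sketched as a harness (the `optimise` and `backtest` stand-ins below are toy placeholders, not a real trading system):

```python
def walk_forward(prices, in_len, out_len, optimise, backtest):
    """Optimise on each in-sample window, test on the out-of-sample
    window that immediately follows, and concatenate the OOS results."""
    oos_results = []
    start = 0
    while start + in_len + out_len <= len(prices):
        in_sample = prices[start:start + in_len]
        out_sample = prices[start + in_len:start + in_len + out_len]
        params = optimise(in_sample)   # highest objective score wins
        oos_results.extend(backtest(out_sample, params))
        start += out_len               # slide forward one OOS period
    return oos_results

# Toy placeholders so the harness runs end-to-end.
def optimise(window):
    return {"lookback": max(2, len(window) // 10)}

def backtest(window, params):
    return [(bar, params["lookback"]) for bar in window]

prices = list(range(100))
results = walk_forward(prices, in_len=40, out_len=10,
                       optimise=optimise, backtest=backtest)
print(len(results))  # 6 out-of-sample windows of 10 bars each
```

Only the concatenated out-of-sample results are judged; everything produced inside the in-sample windows is used solely to pick parameters and is then discarded.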

5.  Contrary to popular belief, there is no statistic or other measurement that can be taken from the in-sample results that gives any indication of the likelihood of a system being profitable in the future.  Thirty closed trades is not enough -- 30,000 closed in-sample trades is not enough.  The results achieved over the in-sample period are meaningless.

6.  I'll say that again -- the results achieved over the in-sample period are meaningless.  They are always good.  We do not stop searching until they are good.

7.  There is no guarantee that any trading system will be profitable in the future.  The best we can hope for is a high degree of confidence.  The only way to build that confidence is by repeating the optimization - out-of-sample testing process.

8.  If you do not have acceptable test results on a truly out-of-sample data period, then out-of-sample testing begins tomorrow with real money.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## nizar (23 August 2007)

howardbandy said:


> Greetings --
> 
> I have written one book about trading system design, testing, and validation -- Quantitative Trading Systems -- and have another planned -- Trading System Validation.  I've read through the thread to date, and I'd like to add my two cents worth to the discussion of robustness.
> 
> ...




Top post there; the bold and underlined sections are just what I thought was key.

Please Howard, visit us more often to share your wisdom 

I think I may have to get my hands on your book.


----------



## R0n1n (23 August 2007)

howardbandy said:


> 2.  The data being used must be divided into at least two data sets -- an in-sample data set that is used to select the parameters for the trading system, and an out-of-sample data set that is used very infrequently -- preferably exactly one time -- to measure the performance of the system on data that has never been seen before.  If the trading system is adjusted based on the results of the out-of-sample data, then that previously out-of-sample data has just become in-sample data, and a new out-of-sample data set must be used for validation.
> www.quantitativetradingsystems.com




So can I use ASX historical data as the in-sample data for system development and US historical data for performance measurement? Or vice versa?


----------



## nizar (23 August 2007)

R0n1n said:


> So can I use the ASX Historical data as an in-sample data for system development and the US Historical data for performance measurement ? Or the vice-versa ??




In my view, yes.

Another way is to do all your fine tuning and testing over, say, the period 1992-1999, and then once you are happy with the results, forward test it through 1999-2007.

I personally intend to do both.

i.e. I will test my system across different timeframes of the same market AND also across different markets (US stocks and other overseas bourses).


----------



## bingk6 (23 August 2007)

R0n1n said:


> So can I use the ASX Historical data as an in-sample data for system development and the US Historical data for performance measurement ? Or the vice-versa ??




Or alternatively, segregate the ASX stocks: e.g. the in-sample set may be the ASX 100, and the out-of-sample set the ASX 200 excluding the ASX 100, or the ASX 300 excluding the ASX 200, etc.

Howard, top post. I am curious as to the period one should allocate for the in-sample testing. Some have commented that you should have bearish, flattish and bullish conditions all within the in-sample period, which could potentially make it a fairly long period. I am interested in your views on the minimum period one should allocate for in-sample testing.


----------



## nizar (23 August 2007)

bingk6 said:


> I am interested in your views on the minimum period one should alocate for insample testing.




Me too.
I don't think there is a minimum period; it probably depends on how long-term you wish to make your system.

But im keen to hear Howard's thoughts on this matter.


----------



## howardbandy (23 August 2007)

Greetings all --

Thanks for the kind words and civilized response -- my views are upsetting to some people.

----

On the question of how to select the in-sample data and how to select the out-of-sample data.

If you are working with a single issue, say a major stock index or a specific commodity, then all of the in-sample data will be coming from one ticker and the question becomes "what periods of time should be used?"  The short answer is to use whatever periods of time give good results.  My feeling is that every market we model is dynamic -- non-stationary.  The characteristics of the market change over time.  The most obvious changes are easily measured -- the slope of a moving average, the average true range, the number of overnight gaps, and so forth.  Any single version of the trading system we build to model that market is static for the entire period we use it.  As long as the underlying market does not "move" too much, the model continues to accurately represent it, and the buy and sell signals continue to be profitable.  Eventually the characteristics of the market change and the model is out-of-synch with the market.  Maybe the market will return to its earlier state, but usually it will not, and a new model is needed, so we must reoptimize.  That is, we must perform the next walk-forward iteration.

The length of time that the model and the market stay in synch determines the maximum length of the out-of-sample period.  While there is no theoretical need that all out-of-sample periods be the same length, it is common for them to be.  The algorithms to perform the walk-forward testing automatically are easiest to implement when reoptimization is done after a set number of bars.  The most often reoptimization can take place is after every bar, and that is acceptable, although computationally intensive.  The least often reoptimization can take place is never, and that is acceptable as long as the model and the market stay sufficiently in synch that the trades are profitable.  

Which brings up another issue I'll address here briefly, but put off for a later posting.  "Will a trading system that once worked, but is now broken, ever return to profitability?"  My answer is that all trading systems eventually break and that broken systems very seldom return to profitability.  Trading systems are unique as models go (in the sense of statistical models of physical processes), in that the act of modeling changes the process being modeled -- every profitable trade made removes some of the inefficiency that the model has identified.  Every profitable trade actually made (paper trades do not count) makes it less likely that the next trade will be profitable.  For example, the Donchian-style breakout systems that worked so well in the 1970s and 80s no longer work, and will probably never work again.

Back to selecting the length of the in-sample period.  There must be a large enough number of data points such that the model can detect some general feature of the market.  The relationship between the number of data points and the number of parameters in the model is very similar to the relationship between the number of data points and the degree of a polynomial being fit to them.  As the degree of the polynomial increases, the goodness of the fit increases, but the accuracy of the prediction of the next data point does not necessarily increase.  The model becomes curve-fit to the data.  In some models that is desirable -- a model of the rotation of two stars around a common center of gravity.  But in trading systems, there is a peak in the fitness to the in-sample data where the general features of the market have been identified and learned, but the noise has not yet been learned.  To deal with this, some model development platforms use three data periods -- the first is the in-sample period used to pick the best parameter values, the second is another in-sample period used to guide the modeling process and determine when the fit has found its peak and stop, and the third is the out-of-sample period used to validate the model.
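Howard's polynomial analogy is easy to demonstrate with synthetic data (for illustration only): a high-degree polynomial fits the in-sample points better, yet predicts the held-out points far worse.

```python
import numpy as np

def in_and_out_error(degree, x_in, y_in, x_out, y_out):
    """Fit a polynomial in-sample; return (in-sample MSE, out-of-sample MSE)."""
    coeffs = np.polyfit(x_in, y_in, degree)
    fit = float(np.mean((np.polyval(coeffs, x_in) - y_in) ** 2))
    oos = float(np.mean((np.polyval(coeffs, x_out) - y_out) ** 2))
    return fit, oos

rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 0.5 * x + rng.normal(0.0, 2.0, size=20)  # a linear "market" plus noise

x_in, y_in = x[:15], y[:15]    # in-sample points
x_out, y_out = x[15:], y[15:]  # out-of-sample points

for degree in (1, 12):
    fit, oos = in_and_out_error(degree, x_in, y_in, x_out, y_out)
    print(degree, round(fit, 2), round(oos, 2))
```

The degree-12 fit has learned the noise: the in-sample error shrinks while the out-of-sample error explodes, which is the curve-fitting peak Howard describes.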

What is the practical solution?  For an end-of-day trading system, try out some different lengths for the in-sample period -- say two years, one year, six months, three months.  The best length will depend on the complexity of the model and the stability of that portion of the underlying market that your model is trying to match.  What length for the out-of-sample period?  Any length shorter than the amount of time it usually takes for the market to shift away from the model.  If a model is going to be accurate and profitable, it should immediately be accurate and profitable.  So try out-of-sample periods of one month, one week, one day.

I can hear someone saying "But wait, trying all these combinations and then picking the lengths is using results from out-of-sample testing to determine the model."  And they are correct.  But only the length of the in-sample period is being chosen by optimization, so only four data points are being tested -- two years, one year, six months, three months.  But at that, there is still a contamination of the purity of the out-of-sampleness.  So be careful.

----

Which leads directly to one of the other questions posted -- can a trading system be developed using one ticker and validated using other tickers?  That is, can the in-sample data be one stock and the out-of-sample data be another stock?  Yes, but there is a caution here as well.  

First, my opinion is that there is no requirement that a trading system must work on every data series in order for it to be considered robust -- that will just never happen.  For whatever fundamental reasons prices vary, I have no reason to expect that the share prices of a gold exploration company, an automobile manufacturer, a bank, a food producer, and a soft drink company all act the same.  But I should see similarities within sectors -- if a model works well for one insurance company, it should work reasonably well for most other insurance companies, and maybe even for banks.  

I can use one company as in-sample data to build a model, then other similar companies as out-of-sample data to validate it.  

But avoid what I call "optimizing the ticker space."  It is not valid to build a trading model using the data series from one ticker, then test it over, say, 500 different tickers and trade only those 10 of the 500 that were profitable.  That is just curve-fitting.  The 10 are probably profitable because they got lucky.  If 400 were profitable, there is hope.

There is a technique that helps determine which tickers to trade.  Develop a model using one ticker over one in-sample period of time, but reserve two out-of-sample periods of time.  Test the model on the universe of say 500 tickers for the first out-of-sample period and count the proportion of the 500 that are profitable.  Make a separate list of those that are profitable, and test them over the second out-of-sample period.  If the proportion that are profitable in the second out-of-sample period is about the same as the proportion that were profitable in the first out-of-sample period, then the model is not robust.  If the proportion that is profitable in the second out-of-sample period is high, then the model is probably robust.
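That two-period filtering technique might be sketched as follows (the ticker names and profit flags below are invented purely for illustration):

```python
def survival_check(oos1_profit, oos2_profit):
    """Two-period robustness check (sketch): take the tickers profitable
    in the first out-of-sample period and see what fraction remain
    profitable in the second.  Each argument maps ticker -> profit flag."""
    base_rate = sum(oos1_profit.values()) / len(oos1_profit)
    survivors = [t for t, profitable in oos1_profit.items() if profitable]
    survival_rate = sum(oos2_profit[t] for t in survivors) / len(survivors)
    return base_rate, survival_rate

# Hypothetical flags for six tickers across two out-of-sample periods.
oos1 = {"AAA": True, "BBB": True, "CCC": False, "DDD": True, "EEE": False, "FFF": True}
oos2 = {"AAA": True, "BBB": True, "CCC": True, "DDD": False, "EEE": False, "FFF": True}

base, survival = survival_check(oos1, oos2)
# survival well above base suggests robustness; survival ~ base suggests luck.
print(round(base, 2), round(survival, 2))
```

If the survivors' success rate in the second period is no better than the whole universe's rate in the first, selecting them was just curve-fitting the ticker space.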

----

Let me clarify one point.  Throughout this posting I have been using the term "is profitable."  That is shorthand for saying "score a high value using the objective function."  All measurements of the merit of a trading system are made by computing the score for the objective function -- not just the profitability.  

----

This has become longer, and a little more theoretical, than I intended.  I hope it is useful.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## R0n1n (24 August 2007)

howardbandy said:


> my views are upsetting to some people.




Why your views would upset anyone when they make sense is beyond me...


----------



## tech/a (24 August 2007)

R0n1n said:


> Why would your view upset someone when they make sense is beyond me.....





It's common -- known as tall poppy syndrome.
I've been on the receiving end of it myself,
and accused (sometimes rightly) of dishing it out.

I have some questions for Howard but wish to have the time to ask them in full. Perhaps tomorrow.


----------



## R0n1n (24 August 2007)

OK, here is another attempt at getting my system off the ground. The testing universe is the ASX 200; it was developed on the Nasdaq 100 (as Howard Bandy suggested). I have pasted the backtest reports from TradeSim and AmiBroker. The time period was 1 year.

Please feel free to comment, good and bad all welcome


----------



## howardbandy (24 August 2007)

Greetings --

The first question to ask is always "Are these results from in-sample or out-of-sample periods?"  

If the answer is in-sample, then ask to see the out-of-sample results.  The in-sample results give absolutely no indication of the likely profitability of trading the system.

If the answer is out-of-sample, then ask to see the equity curve from the concatenated out-of-sample runs.  If the equity curve looks like the kind that you would be comfortable to trade, then learn more about the system.

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## nizar (24 August 2007)

Well, this is (the workings of) my system tested on all ASX stocks from 1992-2002.

Still got a lot of work to do to minimise that drawdown 



> Monte Carlo Report
> 
> Trade Database Filename
> C:\TradeSimData\Version A.trb
> ...


----------



## nizar (24 August 2007)

And the same system tested over 1997-2007 again with the universe being all ASX stocks.

The drawdown is disgusting  



> Monte Carlo Report
> 
> Trade Database Filename
> C:\TradeSimData\version A2.trb
> ...


----------



## R0n1n (24 August 2007)

howardbandy said:


> Greetings --
> 
> The first question to ask is always "Are these results from in-sample or out-of-sample periods?"





*Howard*, the results are from out-of-sample data.


*nizar*, my other system too suffers from really bad drawdown. 

What can be done to bring it down? Help from anyone would be much appreciated.


----------



## howardbandy (25 August 2007)

Greetings --

Sometimes -- drawdowns can be reduced by shortening holding periods.    Or, maybe the entries are too early or too late.

Try this:  

Look at the length of your average winning trade and average losing trade.  Say they are each 10 days.  You will be using half that length in the next step.  If they have different lengths, try this with both lengths.

Leave all of your buy logic in place.  Comment out your sell logic.

Use this for your experimental sell logic:

```
Sell = BarsSince(Buy) >= 5;
```

Run your backtests.  

If your drawdown is now lower, that is an indication that your holding period is too long.  Make your exits sooner.

If your drawdown has not changed much or is higher, then your entries need work.

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## tech/a (25 August 2007)

howardbandy said:


> Greetings --
> 
> Sometimes -- drawdowns can be reduced by shortening holding periods.    Or, maybe the entries are too early or too late.
> 
> ...




Howard,
I'm struggling with this test.

By cutting the exit back to 50% of the average trading period, surely this alters the landscape of the system dramatically. The reason for the exit is no longer valid and instead becomes an arbitrary look-back period -- arbitrary because it will dramatically vary the lengths of trades.
Those trades which caught the trend (any trend) would be instantly depleted; all trends regardless of length would, on average, be cut back dramatically.
All you're doing is altering the exit criteria and then applying the results to a system which has a different exit criterion. Surely the results for both are independent of each other? To me it's like comparing one system to another and expecting them both to tell me something about each other. 



> If your drawdown has not changed much or is higher, then your entries need work.




I'm getting my head around this as well.
Wouldn't the number of exits caused by initial stops be important for entries?
In fact, the importance of an entry can quickly disappear the longer a stock is held. The original reason a stock is purchased becomes insignificant, and the reason you sell it becomes the most important focus.

Are we talking initial system drawdown or peak-to-valley drawdown?

Aren't initial drawdowns more dependent on the entry, and peak-to-valley drawdowns on the exit?
i.e. for initial drawdown I have found through testing that the optimum value is 7-12% of initial purchase price, regardless of how you set your exit (% or technical point).

Shorter exits tend to increase the number of stop-outs above 20%, which has an impact on staying with an emerging trend beyond the "noise".
Longer exits cut down stop-outs (20% will cut out around 95% in most cases) but tend to trap most stocks in long periods of no man's land -- the point between buy, stop and profit.
There is an opportunity cost as trades are stuck doing nothing.

It's been my experience that it is at the ENTRY end of the trade where INITIAL drawdowns are impacted, with virtually no impact on peak-to-valley drawdown, and at the EXIT end where peak-to-valley drawdowns are most affected and initial drawdown is virtually unaffected.

Interested in your views.


----------



## nizar (25 August 2007)

tech/a said:


> Its been my experience that its at the ENTRY end of the trade where INITIAL drawdowns have an impact and virtually no impact on Peak to Valley D/D
> and at the EXIT end where Peak to Valley drawdowns are most affected and initial drawdown virtually un affected.




Hmmm, yeah, I thought the same thing as well.
Does TradeSim tell you initial drawdown on its own?


----------



## tech/a (26 August 2007)

howardbandy said:


> Greetings --
> 
> I have written one book about trading system design, testing, and validation -- Quantitative Trading Systems -- and have another planned -- Trading System Validation.  I've read through the thread to date, and I'd like to add my two cents worth to the discussion of robustness.
> 
> ...




Although I have done this myself, I've never looked at it as succinctly as you have put it here.

Excellent



> 2.  The data being used must be divided into at least two data sets -- an in-sample data set that is used to select the parameters for the trading system, and an out-of-sample data set that is used very infrequently -- preferably exactly one time -- to measure the performance of the system on data that has never been seen before.  If the trading system is adjusted based on the results of the out-of-sample data, then that previously out-of-sample data has just become in-sample data, and a new out-of-sample data set must be used for validation
> 
> 
> 
> 3.  Search through the in-sample data set as much as you wish.  Look for values of the parameters that maximize the value of the objective function; add rules and filters.  When you are satisfied with the in-sample results, test on the out-of-sample data.  If the out-of-sample results are satisfactory, you have an indication that the system might be robust and might be tradable.




Could this simply be data from another bourse or commodity?



> 4.  To increase your level of confidence, perform the walk-forward test.  That is, select a length of time to use as the in-sample length and a length of time to use as the out-of-sample length.  The only arrangement of the two periods that is practical is to have the out-of-sample immediately follow the in-sample.  Start far enough ago that there is room for several out-of-sample periods before today. Beginning with the first in-sample period (the oldest in-sample period), *find the optimum parameters -- optimum is always determined by the highest value of the objective function *-- then test over the associated out-of-sample period, and record the results.  Move on to the second in-sample period, optimize, test the second out-of-sample period, and record the results.  After all the in-sample periods have been processed, concatenate the results from the out-of-sample periods.  If those results are satisfactory, your confidence is increased.




Objective function
By "highest value of the objective function", do you simply mean those conditions which you see as your main objectives being met? Can you elaborate on the part of the quote in bold?



> 5.  Contrary to popular belief, there is no statistic or other measurement that can be taken from the in-sample results that give any indication of the likelihood of a system to be profitable in the future.  Thirty closed trades is not enough -- 30,000 closed in-sample trades is not enough.  The results achieved over the in-sample period are meaningless.
> 
> 6.  I'll say that again -- the result achieved over the in-sample period are meaningless.  They are always good.  We do not stop searching until they are good.
> 
> 7.  There is no guarantee that any trading system will be profitable in the future.  The best we can hope for is a high degree of confidence.  The only way to build that confidence is by repeating the optimization - out-of-sample testing process.




I'm feeling like you mean optimisation of parameters.
Isn't an in-sample set of results -- and in particular a lengthy Monte Carlo test result -- supplying you with what I call a "blueprint"? If your trading falls within the blueprint's returned results, it should return you a profit within the range of profits returned in testing. Therefore, confidence.
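Tech/a's "blueprint" idea -- resampling historical trades to map out the range of plausible outcomes -- can be sketched in a few lines of Python. Everything below is illustrative: the trade returns and the percentile choices are hypothetical, not anyone's actual system.

```python
import random

def monte_carlo_band(trade_returns, n_trades, n_runs=5000, seed=1):
    """Resample historical per-trade returns (with replacement) to build
    a distribution of final equity multiples -- the 'blueprint' of
    outcomes the system might plausibly produce."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_runs):
        equity = 1.0
        for _ in range(n_trades):
            equity *= 1.0 + rng.choice(trade_returns)
        finals.append(equity)
    finals.sort()
    # 5th / 50th / 95th percentile final equity multiple
    return (finals[int(0.05 * n_runs)],
            finals[int(0.50 * n_runs)],
            finals[int(0.95 * n_runs)])

# Hypothetical trade history: 40% winners of +10%, 60% losers of -3%
trades = [0.10] * 4 + [-0.03] * 6
low, mid, high = monte_carlo_band(trades, n_trades=50)
```

If live trading then produces an equity path inside the low-to-high band, it is behaving consistently with the blueprint; falling below the low band is the warning sign.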



> 8.  If you do not have acceptable test results on a truly out-of-sample data period, then out-of-sample testing begins tomorrow with real money.
> 
> Thanks for listening,
> Howard
> www.quantitativetradingsystems.com




Think I'll become a client as well.
I don't like using cards over the net---been done once---is there another way? Cheque? Phone a card number through?


----------



## R0n1n (26 August 2007)

*optimisation of Parameters*

Is it better to optimise the parameters of a system's indicators separately, or to do the whole optimisation in one go? The reason I ask is that doing it separately is a bit faster and I can do it at various times, while doing the whole system takes considerable time.


----------



## tech/a (26 August 2007)

Ron.

I may be wrong, but I don't think Howard is talking about the *optimisation of variables.*
Optimisation of parameters, I am assuming, refers to the components of drawdown, string of losses, average time held, profit to loss, etc. -- whatever those parameters are that are important to you in your system's development.

But to your question.
Which comes with another question.
How do you avoid Curve fitting?
What makes you believe optimisation of variables will increase profitability "Walking forward"?


----------



## R0n1n (26 August 2007)

tech/a said:


> Ron.
> 
> I may be wrong, but I don't think Howard is talking about the *optimisation of variables.*
> Optimisation of parameters, I am assuming, refers to the components of drawdown, string of losses, average time held, profit to loss, etc. -- whatever those parameters are that are important to you in your system's development.
> ...




Tech, you just love to make my brain work hard on a Sunday 

To answer you: to avoid curve fitting, one can develop a system on one market and then test it on another market, without optimising it to that market.

To answer your last question with a question:
isn't optimisation of variables just a method to skew the numbers in your favour? Whereas curve fitting = when you skew the numbers to prove your point, to prove that you have nailed the system?


----------



## bingk6 (26 August 2007)

R0n1n said:


> OK, here is another attempt at getting my system off the ground. The testing universe is the ASX 200. It was developed on the Nasdaq 100 (as Howard Bandy mentioned). I have pasted the backtest reports from TradeSim and AmiBroker. The time period was 1 year.
> 
> Please feel free to comment, good and bad all welcome




Hi ROn1n,

If the results were from out-of-sample testing, you've got yourself a very decent system. The CARs are very good and the drawdowns more than acceptable. Also, the number of trade opportunities available to you over the one-year period is more than useful. You mentioned the test period of 1 year -- which specific period is that?

I am also assuming that both the TradeSim reports and AmiBroker results were from testing the same system over the same period. The thing that surprises me somewhat is that the AmiBroker report -- which would only be for a single run -- has produced results that exceeded the best results from 20,000 simulated runs using TradeSim. Possible of course, but highly unlikely, I would have thought.


----------



## theasxgorilla (26 August 2007)

R0n1n said:


> Time Period was 1 year.




I wouldn't think this is long enough.  Try and smash it up a bit...run it from 1/4/01 until 31/3/03 and see how it holds up.


----------



## howardbandy (26 August 2007)

Hi Tech/a --

You ask about "find the optimum parameters -- optimum is always determined by the highest value of the objective function "

I think that developing trading systems must always begin with the person or organization deciding what they want the outcome to be.  

For example, organizations that have enough capital to trade many markets may be able to trade a portfolio of systems, each of which has a low percentage of winning trades but with a high win to loss ratio for those trades taken.  Individuals may be uncomfortable with that and instead prefer systems that have smooth equity curves with a high percentage of winning trades, even though the win to loss ratio is about the same.

The statistics for those two examples might be:
1:  win 30%, w/l ratio 5:1
2:  win 70%, w/l ratio 1:1
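One way to compare the two profiles is per-trade expectancy, measured in units of the average loss. This is a standard calculation, not something Howard states here; the numbers simply plug in his two examples.

```python
def expectancy(win_rate, win_loss_ratio):
    """Expected profit per trade, in units of the average loss:
    win_rate * average_win - (1 - win_rate) * average_loss."""
    return win_rate * win_loss_ratio - (1.0 - win_rate) * 1.0

e1 = expectancy(0.30, 5.0)   # style 1: few winners, big winners
e2 = expectancy(0.70, 1.0)   # style 2: many small winners
# e1 = 0.8, e2 = 0.4 average-loss units per trade
```

Both are positive-expectancy systems; the difference shows up in the smoothness of the equity curve and in how the trader experiences the losing streaks.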

If a person is trying to trade a system that makes him or her nervous about calling in the order, and sometimes overriding the signal, that person is suffering cognitive dissonance.  In my opinion, those books that talk about the psychology of trading are often talking about ways for the trader to convince himself or herself to make the trades even though that person does not believe in the system.  

I think that is all wrong.  The way to begin is to decide what kind of system you, personally -- very personally -- want.  Define the characteristics of that system.  How many trades a year (minimum or maximum), what is the minimum percentage of trades that should be winners, what is the minimum win to loss ratio, what is the maximum percentage system drawdown, and so forth.  Combine everything that is important into one objective function.  That objective function has a single value.  Every trading system over every ticker over every time period can be evaluated using this objective function with the result that every alternative has a single number associated with it -- the objective function score.

Be realistic about the objective function.  Asking for a minimum of 80% winning trades with a minimum of 5 to 1 win to loss ratio will result in systems that take one trade every five years -- when the signal does come, you will not trust it.

Having chosen the objective function first, begin evaluating possible trading systems.  When choosing among alternatives, the alternative you prefer will have the highest objective function score.  If you find any case where you prefer a trading system that has a lower score to one that has a higher score, then the objective function must be modified so that the system you prefer has the higher score.  You must be willing to accept that the best, for you, system is the one at the top of the ranking.

Now begin optimizing.  After an optimization run that evaluates thousands of alternatives, the best set of values, for you, will be the set at the top of the list.  

Remember where we are headed -- automated walk-forward.  In every walk-forward step: the trading system is optimized over the in-sample data; the alternatives are ranked; the values of the optimized variable that rank highest are used to trade over the out-of-sample data; the results from all the out-of-sample periods are evaluated in one take -- trade it or discard it -- to decide whether to trade the system with real money.  The goal is to be able to click one button "Walk-forward," and see the concatenated out-of-sample equity curve appear on your screen.  If you like the looks of the equity curve, trade the system with real money.  That process only works if the objective function incorporates everything that is important to you and if alternatives with higher objective function scores are always preferred to alternatives with lower objective function scores.
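The automated walk-forward loop described above can be sketched roughly as follows. `objective` and `trade` are placeholder functions the user would supply, and the windowing details are one plausible arrangement, not the only one.

```python
def walk_forward(data, param_grid, is_len, oos_len, objective, trade):
    """Slide an in-sample window followed by an out-of-sample window
    across the data. In each step, pick the parameter set with the
    highest objective score in-sample, then record its out-of-sample
    returns. Judge the system only on the concatenated OOS results."""
    oos_returns = []
    start = 0
    while start + is_len + oos_len <= len(data):
        is_seg = data[start:start + is_len]
        oos_seg = data[start + is_len:start + is_len + oos_len]
        best = max(param_grid, key=lambda p: objective(p, is_seg))
        oos_returns.extend(trade(best, oos_seg))
        start += oos_len          # step forward by one OOS window
    return oos_returns

# Toy demonstration: the objective always prefers parameter 2, and
# 'trade' just echoes the out-of-sample segment, so the result is
# every out-of-sample bar, concatenated in order.
data = list(range(20))
result = walk_forward(
    data, param_grid=[1, 2, 3], is_len=4, oos_len=2,
    objective=lambda p, seg: -abs(p - 2),
    trade=lambda p, seg: list(seg))
# result == list(range(4, 20))
```

The one-button goal then amounts to plotting the equity curve implied by `oos_returns`.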

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (26 August 2007)

Hi Nizar --



nizar said:


> Me too.
> I don't think there is a minimum period.
> It probably depends on how long-term you wish to make your system.
> 
> But I'm keen to hear Howard's thoughts on this matter.




The question is how long to make the in-sample period.

The markets we are modeling are very dynamic, are non-stationary, and are changed by every profitable trade that is made.  What works keeps changing, and what used to work will never work again.  

The choice of how long to make the in-sample period is often tricky.  Begin by ignoring advice that by using a very long period the system will be able to recognize more possible conditions.  Very long in-sample periods result in systems that are unable to recognize anything.  Yes, you want your system to be able to recognize that a bear market has started and stop taking long trades.  But including data back to 1980 so that it can "see" October 1987 will not help.

Think about applying a standard moving average to a set of data -- add up the values and divide by the count.  The resulting number represents the average over the entire period and has a lag of one-half the number of data points.  If a trading system is based on moving averages, the longer the lag is, the later the signal will be.  If a moving average is being fit to a long data series, it will not fit any of it very closely -- if it is being fit to a short data series, the fit will be better.
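The half-window lag is easy to verify numerically: on a steadily trending series, an n-bar simple moving average reproduces the price from (n-1)/2 bars earlier. A minimal sketch:

```python
def sma(series, n):
    """Simple moving average; the value at bar i averages bars i-n+1 .. i."""
    return [sum(series[i - n + 1:i + 1]) / n
            for i in range(n - 1, len(series))]

# Price rises by 1 each bar; the 11-bar SMA at bar 10 equals the
# price at bar 5 -- a lag of (11 - 1) / 2 = 5 bars.
prices = list(range(100))
avg = sma(prices, 11)
# avg[0] (at bar 10) == 5, avg[-1] (at bar 99) == 94
```

Doubling the window doubles the lag, which is why signals built on long averages arrive late.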

So, my advice is to make the in-sample period as short as possible, consistent with producing good out-of-sample results.  Too long an in-sample period and the system will not perform well over any part of it.  Too short an in-sample period and the system will be curve-fit to the in-sample data and will not perform well out-of-sample.  

I am assuming that you are developing the model using a walk-forward process, so that you have more than one in-sample period.  If there is just one in-sample period, then there is just one out-of-sample period.  You will need to find additional out-of-sample data to validate the system.  You may be more disciplined than I am, but I cannot resist changing my models after I have seen the out-of-sample results.  It is OK to do that, provided you realize that you just transformed the out-of-sample data into in-sample data.  And in-sample results have no value in terms of predicting the out-of-sample performance.

Experiment until you find out what works.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (26 August 2007)

Hi Ronin --



R0n1n said:


> OK, here is another attempt at getting my system off the ground. The testing universe is the ASX 200. It was developed on the Nasdaq 100 (as Howard Bandy mentioned). I have pasted the backtest reports from TradeSim and AmiBroker. The time period was 1 year.
> 
> Please feel free to comment, good and bad all welcome




It looks like this is daily data.  Your winning trades average 44 days and your losing trades average 13 days.  Based on my analysis of US NASDAQ stocks, you would expect drawdowns to average about 15% in 44 days, and the distribution of those drawdowns is quite wide.  The drawdowns reported are in the 10% to 13% range, so it appears you are doing well.

Thanks,
Howard


----------



## nizar (26 August 2007)

Posts #143 and #144 are probably the best I've read on this board and others.
I think I'm going to have to get that book ASAP.
Thanks Howard.


----------



## theasxgorilla (26 August 2007)

howardbandy said:


> The choice of how long to make the in-sample period is often tricky.  Begin by ignoring advice that by using a very long period the system will be able to recognize more possible conditions.  Very long in-sample periods result in systems that are unable to recognize anything.  Yes, you want your system to be able to recognize that a bear market has started and stop taking long trades.  But including data back to 1980 so that it can "see" October 1987 will not help.




Hmmm, and what does one do when the system you've developed, optimised on data reflecting ideal market conditions, actually encounters less ideal market conditions?

Is it not possible that someone turning their system on today might be picking the beginning of the next great decline?  Wouldn't it be helpful to know how badly the system you've developed might fare in such a decline?  I bet a lot of people stopped trend following equities sometime in the latter half of 2002, probably much earlier on the NASDAQ.

ASX.G


----------



## R0n1n (27 August 2007)

*Bingk6* - it's for the last 365 days, including the correction we got recently.

Yes, both the reports are for the same system and the same period. I haven't got much experience with system backtesting, and hence I post it here to get good feedback.

*ASXG* - I will run it for the time period you mention as well. I did one backtest for the last 3 years and am posting the AB report. 

*Howard* - how would one go about fixing a massive drawdown problem in a system?  The previous report was for daily data; I am still working on a weekly system.

So does a system with a 70% win rate and a 5:1 w/l ratio, that gives a couple of trades a month, exist? Or is that the holy grail system urban myth, lol.

Here is the report for the last 3-year backtest. In-sample was the Nasdaq and out-of-sample was the ASX 200. One question: is that Max % system drawdown over the full three years, or is it per year? If it's 59.45% per year I'll have to go back to the drawing board.
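For what it's worth, the "Max % system drawdown" figure that backtesters typically report is a single peak-to-trough decline measured over the whole run, not an annual rate (worth confirming against your software's documentation). A minimal version of the calculation:

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline over the whole equity curve,
    as a fraction of the preceding peak -- one figure for the full
    test period, not a per-year rate."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

curve = [100, 120, 90, 110, 150, 100, 160]
dd = max_drawdown(curve)   # worst drop is 150 -> 100, i.e. one third
```

Note the drawdown is measured from the running peak, so a curve that ends at a new high can still carry a large drawdown in the middle.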


----------



## howardbandy (27 August 2007)

Hi Tech/a --



tech/a said:


> Think I'll become a client as well.
> I don't like using cards over the net---been done once---is there another way? Cheque? Phone a card number through?




We use PayPal to collect funds.  We never see either a PayPal account number or a credit card number.  When PayPal is satisfied that you have paid, they let us know and we ship the book.

I make purchases over the net literally every day using either a credit card or a PayPal account.  Great selection, no sales tax, fast shipping, no traffic jams, etc.  

I, personally, have never had a bad experience using PayPal.  To my knowledge, none of the customers for our books has had a data security problem using PayPal.

If you have a PayPal account with money already in it, the transaction takes about 30 seconds.

But, you do not need a PayPal account at all.  You can just use your credit card for the one purchase.

Both MasterCard and VISA have a system where they create a "sub-account" with a limited credit limit and a quick expiration.  If I am buying something on-line from a vendor I do not know, I always use a sub-account.  Say the XYZ company has a computer utility program for $45 that I want to buy.  I go to their web site.  If it does not have a "secure" indicator (small padlock, etc), I do not buy.  If it is secure, I fill out the address information, then run the sub-account routine and generate a credit card number with a maximum credit limit of $46 and an expiration of next month.  The purchase goes through as usual.

If XYZ is a scam, they cannot charge anything else on that account because the credit limit has been used up.  And if they do not send my computer utility, the credit card company will help me get a refund.  Credit card companies are always on the side of the consumer.  Any charge of less than US$1000 is immediately reversed whenever the customer asks it to be.

But --- if you are more comfortable mailing a check (US funds on a US bank, please), the information you need to do that is on the book's web site also.

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (27 August 2007)

Hi Ronin --



R0n1n said:


> *optimisation of Parameters*
> 
> Is it better to optimise the parameters of a system's indicators separately, or to do the whole optimisation in one go? The reason I ask is that doing it separately is a bit faster and I can do it at various times, while doing the whole system takes considerable time.




The procedure of optimizing one variable at a time is called "evolutionary operation" and is often used by industrial engineers.  It works when the relationship between the variables being optimized is "well behaved."  Those of you who have Quantitative Trading Systems can read about it on page 43.

Financial data is notoriously not well behaved.  

If it works (that is, if it finds the global peaks of objective function values you are looking for), use it -- it is much faster than optimizing everything at once.
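A toy sketch of the one-variable-at-a-time idea, using a deliberately well-behaved (separable) objective. Real trading objectives are rarely this cooperative, which is exactly the caveat above; the function and grids here are made up for illustration.

```python
def coordinate_search(objective, grids, passes=3):
    """Optimise one variable at a time, holding the others fixed.
    Much faster than an exhaustive grid search, but it only finds
    the global peak when the surface is well behaved (little
    interaction between the variables)."""
    current = [g[0] for g in grids]
    for _ in range(passes):
        for i, grid in enumerate(grids):
            current[i] = max(grid, key=lambda v: objective(
                current[:i] + [v] + current[i + 1:]))
    return current

# Smooth, separable toy surface peaking at (12, 30): the
# one-at-a-time search finds the same answer a full grid would.
f = lambda p: -(p[0] - 12) ** 2 - (p[1] - 30) ** 2
grids = [list(range(5, 21)), list(range(10, 51, 5))]
found = coordinate_search(f, grids)   # -> [12, 30]
```

The cost comparison is stark: here the full grid is 16 × 9 = 144 evaluations per pass versus 16 + 9 = 25 for the coordinate search, and the gap widens rapidly with more variables.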

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (27 August 2007)

Greetings --



tech/a said:


> Ron.
> 
> I maybe wrong but I dont think Howard is talking about the *optimisation of Variables.*
> Optimisation of parameters I am assuming are the components of Drawdown,String of Losses,Average time held Profit to loss etc,whatever those parameters are that are important to you in your systems developement.
> ...




There is exactly one non-optimized, non-walk-forward method of trading.  That is to throw darts at the newspaper stock page.

Every time any of us decides to use one method rather than a second, we are ranking two alternatives and choosing among them.  Why stop at two alternatives?  Use an optimizer and look at thousands.

If we read about a system in a magazine, we have absolutely no way to know whether that system is robust or not.  It could be highly curve-fit.  If it appears in an advertisement, you can be certain that it is not the poorest performing example they could find.  Simply because we are not doing the curve-fitting ourselves does not ensure that a system is not curve-fit.  On the other hand, if we do the development work ourselves, we can tell when a system is curve-fit and does not work out-of-sample.

Every time we make a real trade with real money we are performing a walk-forward test on out-of-sample data.  Why wait until real money is on the table?  Use good system development, testing, and validation techniques and gain some knowledge about how the system will work before calling the broker.

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (27 August 2007)

Hi Gorilla --



theasxgorilla said:


> Hmmm, and what does one do when the system you've developed has been optimised on data that recognises ideal market conditions actually encounters less ideal market conditions?
> 
> Is it not possible that someone turning their system on today might be picking the beginning of the next great decline?  Wouldn't it be helpful to know how badly the system you've developed might survive such a decline?  I bet a lot of people stopped trend following equities sometime in the latter half of 2002, probably much earlier on the NASDAQ.
> 
> ASX.G





Financial markets are often described as being in one of three states -- trending up, trading range, or trending down.  The definition of a trend often depends on the time frame.

Assume I want to trade all three conditions.  The market has been in an uptrend.  I know this because my trending system has been profitable.  It just exited a long position as prices turned down.  Is this the start of a downtrend, in which case I want to be short; the start of a trading range, in which case I want to buy into weakness; or a minor pullback in a continuing uptrend, in which case my exit was too early?

Identification of trend is The question.  I wish I knew.

(Trend following used to work fairly well for commodities, but no longer does, and probably never will again.  It has never worked very well for individual equities, at least for my objective function.  It does work for sector mutual funds and indices -- like FSELX and XLB.)


But, practically, every system must have a way of knowing it is wrong and exiting.  Once in a position, there are several ways to exit -- a signal, a time limit, a trailing stop, a loss limit stop, etc.  Build at least one exit method into every trading system.  

Note, certain to generate some interest -- stops hurt systems!  Try to design your systems so that very few exits are caused by a stop, particularly a maximum loss stop.

Systems very rarely perform better out-of-sample than in-sample.  So I should not expect to achieve the same results I saw while testing.

I do not need to know specifics of how my long-only system would have done in the market crash of 1929 or 1987.  All I need to know is that it would have exited.  

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## theasxgorilla (27 August 2007)

howardbandy said:


> I do not need to know specifics of how my long-only system would have done in the market crash of 1929 or 1987.  All I need to know is that it would have exited.




Have exited, yes, and stayed out.  How do you decide how it stays out?  Is it at the discretion of the system designer, or has it been built into the system?  If the former, fine, as long as it's recognised that intuition is an ingredient in the trading of said system.  If the latter, then I absolutely think it's vital to expose your system to worst-case conditions and see how it holds up.

ASX.G


----------



## howardbandy (27 August 2007)

Hi Ronin --



R0n1n said:


> *Bingk6* - its for the last 365 days, including the correction we got recently.
> 
> yes both the reports are for the same system and same period. I haven't got much experience system backtesting and hence I post it here to get a good feedback.
> 
> ...




This is a three year out-of-sample result?  If so, this looks pretty promising.  I find looking at the equity curve very helpful.  Can you post an image of it?

Thanks,
Howard


----------



## howardbandy (27 August 2007)

Greetings --

Here is the 3 year equity curve of a system that trades 2 times a month, has 70% winning trades, wins average 5%, losses average 1%.  Every $1 becomes $10 in three years.  Don't we all wish?
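Those figures check out with straight compounding of the expected trade mix (a back-of-the-envelope calculation that ignores the dispersion a real sequence of trades would show):

```python
# Sanity check of the hypothetical system: 72 trades in 3 years
# (2 a month), 70% winners at +5%, losers at -1%, fully compounded.
trades = 72
wins = round(0.70 * trades)      # 50 winning trades
losses = trades - wins           # 22 losing trades
final = (1.05 ** wins) * (0.99 ** losses)
# final is roughly 9.2x -- close to the tenfold growth on the chart
```

The exact multiple depends on the realised win count and the ordering of position sizing, but the order of magnitude is right: $1 into roughly $9-$10.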

Thanks,
Howard


----------



## howardbandy (27 August 2007)

Hi Gorilla --



theasxgorilla said:


> Have exited, yes, and stayed out.  How do you decide how it stays out?  Is it at the discretion of the system designer, or has it been built into the system?  If the former, fine, as long as it's recognised that intuition is an ingredient in the trading of said system.  If the latter, then I absolutely think its vital to expose your system to worst case conditions and see how it holds up.
> 
> ASX.G




Only mechanical systems can be tested and validated.  Discretionary trading is always a possibility, but that is outside my area of expertise.

If the system is a mean-reversion system, it will want to buy weakness.  In a serious bear market, there are strong rallies.  Your system may identify them and buy them, or may avoid all long positions until the bear market is over -- it is all up to you and your program code.  Use anything that you can design, test, and validate.  Validation is the key.

Feel free to test over 1987, but testing over data earlier than that used for development has little or no meaning.  In developing models for trading systems, the out-of-sample period must be more recent than the in-sample period.  Every time any trading system makes a profitable trade, the market it trades becomes more efficient and more difficult to trade profitably.  Testing over data that is earlier is misleadingly encouraging.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## theasxgorilla (27 August 2007)

howardbandy said:


> Every time any trading system makes a profitable trade, the market it trades becomes more efficient and more difficult to trade profitably.




For very short-term systems that co-exist in markets where edges are sought out and arb'd away via massive computing power, yes, I can appreciate that this is observable and real.

But are you saying that the next time greed overpowers fear on some stock somewhere, the further off into the future I take such a trade, the less likely it is that my current trend-following system parameters can actually profit from any trend that manifests?

Is the entire ASX or NASDAQ or NYSE a market, or is each individual stock its own market by your definition?

ASX.G


----------



## howardbandy (27 August 2007)

Hi Gorilla --



theasxgorilla said:


> For very short-term systems that co-exist in markets where edges are sought out and arb'd away via massive computing power, yes, I can appreciate that this is observable and real.
> 
> But are you saying that the next time greed overpowers fear on some stock somewhere, the further off into the future I take such a trade, the less likely it is that my current trend-following system parameters can actually profit from any trend that manifests?
> 
> ...




Yup.  Every profitable trade reduces the likelihood that the next trade will be profitable.

Howard


----------



## Nick Radge (27 August 2007)

> It has never worked very well for individual equities, at least for my objective function.




I can't disagree more.


----------



## tech/a (27 August 2007)

> (Trend following used to work fairly well for commodities, but no longer does, and probably never will again. It has never worked very well for individual equities, at least for my objective function.)




Worked fine for most of us over the last 5 yrs.
No longer does---from when?
Objective function---I'd be interested in yours.


----------



## bingk6 (27 August 2007)

howardbandy said:


> The way to begin is to decide what kind of system you, personally -- very personally -- want.  Define the characteristics of that system.  How many trades a year (minimum or maximum), what is the minimum percentage of trades that should be winners, what is the minimum win to loss ratio, what is the maximum percentage system drawdown, and so forth.  Combine everything that is important into one objective function.  That objective function has a single value.  Every trading system over every ticker over every time period can be evaluated using this objective function with the result that every alternative has a single number associated with it -- the objective function score.




Hi Howard,

Excellent discussion in progress, many thanks for your contributions to date. I have one further question relating to the allocation of the "objective function score". All of the system parameters used in evaluating a system must have a certain weighting towards the calculation of this objective function score. The weighting of each parameter would be dependent on the preferences of the trader.  

Given that these parameters all present their values in different magnitudes (some in percentages, some in dollar values, some as really large numbers, some really small, etc.), do you have any suggestions on how all of these system variables can be "normalised" (for want of a better word) so that the final objective score is not overly skewed one way or another? Presumably, you would also require various weights to be allocated to the different parameters, which could result in a lengthy mathematical formula used to calculate a *SINGLE* value that you would use as a basis for comparing systems.  Do you have suggestions as to the formulas others may be using to calculate the score?


----------



## howardbandy (28 August 2007)

Hi Bingk6 --

You are correct -- the objective function is created by combining all of the factors that are important, with each factor weighted according to its importance.

There are several methods for creating weighted objective functions.  

One that I like is to first list the important features -- even if they are thought to have only minor importance.

Go down the list of features and determine what metric already exists for that feature.  For example, average holding period is already in days or bars; compound annual rate of return is already in percent, and so forth.

Continuing with the list, for each feature decide what value should get full marks and what value should get no credit.  Draw a graph showing what value gets what percentage of full marks.  Full marks get a value of 1.0, no marks get a value of 0.0.  I have pasted an image I use in the book, Quantitative Trading Systems, below as an example.  The feature being scored is exposure -- full marks (1.00) for anything between 10 and 20 percent, linearly dropping to half marks (0.50) for anything over 40 percent.  To use the graph, look up the exposure for the run you are scoring -- go across the chart until you come to that value, then go up until you hit the line -- that is the score for this feature for this run.

At this point each feature has been evaluated and a chart drawn showing how the marks will be assigned.

Still with the list, imagine that you have 100 dollars to allocate to the entire objective function.  Go down the list and decide how many dollars each of the features is worth.

The resulting objective function is determined by calculating what mark a feature earned and multiplying by its dollar allocation.  Add them all up.  The result will be a score between 0 and 100.  

Now -- you probably didn't get it just right the first time -- I never do.  Run some backtests.  For each backtest, plot the equity curve, print out the statistical report, and calculate the objective score.  Spread the sheets on the floor and put them in order according to how you would rank them as you read the statistics and look at the equity curve.  The ones you rank highest should have the highest objective function score.  In fact, the objective function scores should go from highest to lowest with none out of order.  If that is the case, your objective function is complete. If that is not the case, you will need to add other features or re-weight the ones you have.  Repeat until the order you prefer is reflected in the objective function score.
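The marking-curve-plus-dollar-weights procedure might be sketched as follows. The exposure curve follows the example above; the other curves, the weights, and the shape below 10% exposure are made-up placeholders, not values from the book.

```python
def piecewise(x, points):
    """Linear interpolation through (value, mark) points, clamped at
    the ends. Each feature gets one such marking curve."""
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

# Exposure marking curve: full marks (1.0) from 10% to 20%, sliding
# linearly to half marks (0.5) at 40% and beyond.  (The rise from 0
# at 0% exposure is our assumption -- the original chart starts at 10%.)
exposure_curve = [(0, 0.0), (10, 1.0), (20, 1.0), (40, 0.5)]

# Dollar weights per feature, summing to 100 (illustrative values).
features = {
    "exposure": (exposure_curve, 30),
    "car":      ([(0, 0.0), (20, 1.0)], 50),   # compound annual return, %
    "drawdown": ([(5, 1.0), (30, 0.0)], 20),   # max drawdown %, lower is better
}

def objective_score(metrics):
    """Weighted sum of per-feature marks: a single 0-100 score."""
    return sum(weight * piecewise(metrics[name], curve)
               for name, (curve, weight) in features.items())

score = objective_score({"exposure": 15, "car": 25, "drawdown": 12})
```

Re-weighting then means editing the dollar amounts or curves until the ranking the scores produce matches the ranking you make by eye from the reports.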

In AmiBroker, you can do exactly what I have just described.  Then, you can tell AmiBroker that it should automatically compute the score for your custom objective function every time it does a backtest or optimization run, and include that number on the report.

For those of you who already have the book, Quantitative Trading Systems, the procedure is described in Appendix A, starting on page 317.

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (28 August 2007)

Hi Tech/a and Nick --

This will be interesting.

I was working as a research analyst for a Commodity Trading Advisor firm in the US.  The firm specialized in trend following systems.  They developed some of the first mechanical and computerized methods, and were very successful until the mid 1990s.  Almost all of the trading was commodity futures -- treasury notes, oil, and the like.  At one point, we were approached by a large fund who wanted us to manage an account for them in which we were to trade common stocks using trend following techniques.  We could not get it to work.  Breakouts did not work, moving average crossovers did not work.

By work, I mean develop, test and validate the models using good modeling techniques.  Optimizing over the symbol space does not count.   Taking advantage of an overall rising market does not count. 

Fast forward to 2007.  Trend following works great on market sectors, sector mutual funds, and sector exchange traded funds.  In fact, I describe several methods for building models based on the sector, then trading correlated equities.  

So -- If you are willing to discuss the details (if not, I understand completely) -- what common stock tickers do you find trade well using trend following techniques, and what entry methods do you find work?  

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## tech/a (28 August 2007)

Howard.

Given certain conditions I'm sure you can prove that trend following will fail, just as given certain conditions you can also prove that trend trading is profitable.

I have watched my son prove Black is White mathematically, something I'm sure helps one doing his Doctorate in Physics.

But a cursory glance at a 100 yr chart of the DJIA or the ASX reveals a steady climb.



> Taking advantage of an overall rising market does not count.




Just as all you need to know is that your system will get you out BEFORE a 1987 event,
you need to know your system will GET YOU IN a Y2000 bull run.

Below is a link to a very rudimentary and simple TREND FOLLOWING system that has been traded live for 5 yrs. It's served me well.
All entries and exits are there if interested.
We (those who took part in design and testing) are a long way from expert.

http://www.thechartist.com.au/forum/ubbthreads.php?ubb=postlist&Board=4&page=1


----------



## julius (28 August 2007)

Nick, Howard, Tech et al;

Apologies if any of this has been covered earlier in the thread,

I think the distinction between curve fitting and optimization is worth noting,
though accurately distinguishing one from the other is certainly beyond me.

Finding the best parameters over an arbitrary length of time and then expecting them to hold true for a multitude of conditions that may be encountered in the future is IMO a fool's game, but optimizing chosen variables over a specified time period with the expectation that the performance of these variables _should_ decay into the future is perhaps a more feasible approach.

I am aware there has been a fair amount of research into this area by various academics in the field, but it would be fair to say the people who are profitably using this method prefer not to disclose. Bastards 

From what I have read, machine learning applications (genetic algorithms, etc.) are used to dynamically evaluate and update variables (or perhaps even overhaul the whole model) to optimize the next period's performance, based on a previous number of periods. Considering the calibre of individuals who subscribe to 'market cycles' and similar, I don't find it that hard to swallow.

Interested to hear others thoughts on this area.



> Every time any trading system makes a profitable trade, the market it trades becomes more efficient and more difficult to trade profitably.




I completely agree, though could this apply more to swing trading systems than trend following? What effect would the frequency of trades have on the effective life of a system?

Also, to Nick & Howard, what has your experience been with short term trading systems? Most of the discussion in this area seems to be on medium to longer time frame systems, have you seen short term systems employed profitably?


----------



## tech/a (28 August 2007)

*Julius*  I don't think it's been covered specifically.



> I think the distinction between curve fitting and optimization is worth noting,
> though accurately distinguishing one from the other is certainly beyond me.
> 
> _Finding the best parameters over an arbitrary length of time and then expecting them to hold true for a multitude of conditions that may be encountered in the future is IMO a fool's game, but optimizing chosen variables over a specified time period with the expectation that the performance of these variables should decay into the future is perhaps a more feasible approach._
> ...




Like you I have pondered this topic at length, from a logic rather than a mathematical view. My thoughts were very similar to your own, which I have highlighted in italics.

In the end I have come to the following conclusion (not saying it's right or wrong, just my conclusion).

If I choose a set of parameters and assign fixed variables to them, test and accumulate results, I can expect that re-applying them on forward data will return results within the deviation bands returned from any Monte Carlo analysis, provided the data doesn't step outside the boundaries of that which it was tested against.

If I optimise parameters and/or variables then I am unrealistically expecting results going forward to perform as well as they have on PAST optimised results. What I have in reality is a set of parameters chosen by optimisation rather than "randomness", which when re-applied forward will not be the optimised "best" results looking BACK after a period of forward trading.
They are no better than random.

I have in fact tested this to the best of my ability by optimising over a period, then applying the optimised parameters over another forward period. The results are not as good as the original optimised results.
Re-optimising over THAT forward period returns very different variables than those selected
from the initial optimisation.

An endless, futile search for the very best selected variables.
You just won't find them, as they will always alter through time.
The best you can get is an expected performance based on the "given" parameters selected for testing, whether they are optimised or randomly selected: an allocation of parameters and variables tested against a data set, returning a likely set of performance results deviating high and low from the mean.

If acceptable you trade; if not, you keep searching.

I know the explanation is circular, but so is the exercise.
But hey, I'm happy for my LOGIC to be challenged!
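The experiment tech/a describes can be sketched on synthetic data. Everything below is a hypothetical stand-in (a seeded random walk and a simple above-the-moving-average rule), not his actual system or data:

```python
# Optimise a single moving-average length on one period, then apply it
# to the next period, and see what a fresh optimisation of that next
# period would have chosen instead.
import random

def strategy_return(prices, n):
    """Long one unit when price closes above its n-day SMA, flat otherwise."""
    total = 0.0
    for i in range(n, len(prices) - 1):
        avg = sum(prices[i - n:i]) / n
        if prices[i] > avg:
            total += prices[i + 1] - prices[i]
    return total

def best_length(prices, lengths):
    """Grid-search: the MA length with the highest total return."""
    return max(lengths, key=lambda n: strategy_return(prices, n))

# Deterministic pseudo-random walk (fixed seed) split into two periods.
random.seed(7)
prices = [100.0]
for _ in range(400):
    prices.append(prices[-1] + random.gauss(0.05, 1.0))
first, second = prices[:200], prices[200:]

lengths = range(5, 55, 5)
n_is = best_length(first, lengths)    # the optimised, in-sample choice
n_oos = best_length(second, lengths)  # what re-optimising forward picks
print(n_is, n_oos, strategy_return(second, n_is))
```

On most runs the two chosen lengths differ, and the in-sample winner is rarely the forward winner; either way, only the in-sample score was ever "optimal" by construction, which is exactly the futility described above.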


----------



## nizar (28 August 2007)

I haven't got much to contribute, but keep the excellent discussion coming guys. I'm thoroughly enjoying it.


----------



## bingk6 (28 August 2007)

tech/a said:


> If I optimise parameters and/or variables then I am unrealistically expecting results going forward to perform as well as they have on PAST optimised results. What I have in reality is a set of parameters chosen by optimisation rather than "randomness", which when re-applied forward will not be the optimised "best" results looking BACK after a period of forward trading.
> They are no better than random!




IMO, the reason for performing optimisation is to get some sort of feel as to what has performed best in the past. That is not to say that one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing available in any past data that would indicate what the future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to  trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data and there are no better alternatives than that.

The key really is to extract these optimised parameter values from an in-sample set of data and then to verify them using out-of-sample data. By out-of-sample data verification, I mean performing any Monte Carlo analysis you deem necessary etc. using the optimised parameter settings on out-of-sample data. If the out-of-sample testing shows good robustness in the figures and they are relatively close to the optimised figures, then you may well have a very decent system. On the other hand, if the out-of-sample testing shows results that are very poor, then there is a real problem with the system and it's back to the drawing board.

That, in a nutshell, is my perception of the role that optimisation plays, it is merely a starting step which would hopefully lead to the formulation of a robust system that is better than random.


----------



## bingk6 (28 August 2007)

howardbandy said:


> The resulting objective function is determined by calculating what mark a feature earned and multiplying by its dollar allocation.  Add them all up.  The result will be a score between 0 and 100.




Hi Howard,

That is a good methodology, I'll give it a shot. Thanks
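Howard's quoted scoring scheme can be sketched as follows; the feature names, dollar allocations, and marks below are hypothetical, purely for illustration:

```python
# Allocate 100 "dollars" across the features you care about, give each
# feature a mark between 0 and 1 for a given backtest, and sum
# mark * allocation. The result is a score between 0 and 100.

def objective_score(allocations, marks):
    """allocations: {feature: dollars}, summing to 100.
    marks: {feature: mark in [0, 1]}.  Returns a score in [0, 100]."""
    assert abs(sum(allocations.values()) - 100) < 1e-9
    return sum(allocations[f] * marks[f] for f in allocations)

# Hypothetical weighting of four features for one backtest:
allocations = {"smooth_equity": 40, "drawdown": 30, "win_rate": 20, "exposure": 10}
marks = {"smooth_equity": 0.9, "drawdown": 0.6, "win_rate": 0.75, "exposure": 0.5}
print(objective_score(allocations, marks))   # prints 74.0
```

The dollar allocations are where your own trading preferences enter; two traders scoring the same backtest with different allocations will rank systems differently, which is the point.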


----------



## tech/a (28 August 2007)

Bingk6

Similar to my thinking.
But is it really better than random?
Logic says it _should_ be.
But there's no real reason why it _will_ be.

I don't have AmiBroker so don't have the facilities to find optimum variables over a portfolio.
I'd be interested in what they are for T/Trader, and then to test the results over data and with TradeSim. All I need is the optimum values.
My suspicion is that the edge, if any, wouldn't equate to much.

anyone help out?

Interested in Nicks take.


----------



## theasxgorilla (28 August 2007)

tech/a said:


> anyone help out?




Sure.  What do you think of these parameters?

Parameters:

Short Breakout, 2 -> 50 days, increment 1
Long Breakout, 20 -> 200 days, increment 1

Fast MA -> 10 -> 100, increment 1
Slow MA -> 50 -> 350, increment 1

ETA, 64 hours.   I have a Core Duo damn it!  Might be my clunky code implementation, but I've stripped it back.  It wants to do 2+ million iterations, which might also explain it.  If we trim the parameter ranges or increase the increments I expect it could speed up dramatically.

ASX.G


----------



## nizar (28 August 2007)

bingk6 said:


> IMO, the reason for performing optimisation is to get some sort of feel as to what has performed best in the past. That is not to say that one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing available in any past data that would indicate what the future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to  trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data and there are no better alternatives than that.
> 
> *The key really is to extract these optimised parameter values from an in-sample set of data and then to verify them using out-of-sample data. By out-of-sample data verification, I mean performing any Monte Carlo analysis you deem necessary etc. using the optimised parameter settings on out-of-sample data. If the out-of-sample testing shows good robustness in the figures and they are relatively close to the optimised figures, then you may well have a very decent system. On the other hand, if the out-of-sample testing shows results that are very poor, then there is a real problem with the system and it's back to the drawing board.*
> 
> That, in a nutshell, is my perception of the role that optimisation plays, it is merely a starting step which would hopefully lead to the formulation of a robust system that is better than random.




I agree wholly with the above.


----------



## R0n1n (28 August 2007)

theasxgorilla said:


> Sure.  What do you think of these parameters?
> 
> Parameters:
> 
> ...




ASX.G Amibroker is not dual core aware by design.


----------



## theasxgorilla (28 August 2007)

R0n1n said:


> ASX.G Amibroker is not dual core aware by design.




I know, but at least one of the cores should be able to pump harder than this, it's 2007 don't u know  !


----------



## tech/a (28 August 2007)

Such small increments are ridiculous over longer timeframes.

For breakouts 5 days is ample, and for M/As 10.

Then we have liquidity, closing price, stop % and position sizing.

What software do you use *HOWARD*


----------



## R0n1n (28 August 2007)

theasxgorilla said:


> I know, but at least one of the cores should be able to pump harder than this, it's 2007 don't u know  !




lol... tell me about it... I have a dual-core XEON 3.4 GHz with 4 GB RAM and AmiBroker can still gun it to its limits.


----------



## theasxgorilla (28 August 2007)

tech/a said:


> Then we have liquidity, closing price, stop % and position sizing.




 A lot of stuff to look at, right??

This is a massive undertaking.  And when you optimise more than two variables at a time you lose the ability to see 3D optimisation charts that can help you spot the hot spots where variable placement is likely to be most robust.

The objective measure is not a bad idea, but it can tend to make the assessment of a system quite one-dimensional.  Looking at the equity curve probably helps.  But I think there is an element of common sense and introspection that comes into deciding how to optimise and select a set of variables.

A combination of what Ed Seykota refers to as the steam-roller and the hunt-and-peck approach.

ASX.G


----------



## rnr (28 August 2007)

The comments below are my interpretation of Howard Bandy's posts to date on this thread.

If you disagree with my conclusions then please let me know so that I can establish whether I am running with the pack or running in the opposite direction and consequently about to get trampled!

For simplicity purposes I will relate to Australian stocks only.

a)

> "An in-sample data set that is used to select the parameters for the trading system"

- 20 FPO's from the Materials Sector for the years 1998 & 1999.

b)

> "An out-of-sample data set that is used very frequently"

- the same 20 FPO's from the Materials Sector for the years 2000 & 2001.
  NB: 1) The period for the two samples does not have to be the same length, and 2) the choice leaves room to utilise additional in-sample & out-of-sample data sets before going live.

c)

> "Search through the in-sample data set as much as you wish. Look for values of the parameters that maximize the value of the objective function; add rules and filters."

- Optimise the parameters (all) that will result in the highest objective function value - now I can see why you chose Amibroker!!
  NB: This would not be referred to as curve fitting unless curve fitting was incorporated into your objective function.

d)

> "Beginning with the first in-sample period (the oldest in-sample period), find the optimum parameters, then test over the associated out-of-sample period, and record the results. Move on to the second in-sample period, optimize, test the second out-of-sample period, and record the results. After all the in-sample periods have been processed, concatenate the results from the out-of-sample periods. If those results are satisfactory, your confidence is increased."

- If you are now confident with your model (system) then go live?

e)

> "Eventually the characteristics of the market change and the model (system) is out-of-synch with the market. Maybe the market will return to its earlier state, but usually it will not, and a new model is needed, so we must reoptimize. That is, we must perform the next walk-forward iteration."

- Now I have a problem! If the model is not achieving the same results as before, and I have been using all the shares in the Materials Sector as my data set, then where do I find my "out-of-sample data set" unless I stop trading this model for some time? Dynamic optimisation must use an in-sample data set.

That's me done for now and I look forward to any feedback.
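The walk-forward procedure quoted in (d) can be sketched as a rolling loop. `optimise` and `backtest` below are toy stand-ins, not AmiBroker calls; the data and parameters are invented:

```python
# Optimise on each in-sample window, test on the following out-of-sample
# window, and concatenate the out-of-sample results.

def walk_forward(data, is_len, oos_len, optimise, backtest):
    oos_results = []
    start = 0
    while start + is_len + oos_len <= len(data):
        is_data = data[start:start + is_len]
        oos_data = data[start + is_len:start + is_len + oos_len]
        params = optimise(is_data)               # fit on in-sample only
        oos_results.extend(backtest(oos_data, params))
        start += oos_len                         # roll the window forward
    return oos_results                           # concatenated OOS record

# Toy stand-ins: "optimise" picks a threshold, "backtest" records wins.
data = [1, 3, 2, 5, 4, 6, 5, 8, 7, 9, 8, 11]
opt = lambda xs: sum(xs) / len(xs)
bt = lambda xs, p: [x > p for x in xs]
results = walk_forward(data, is_len=4, oos_len=2, optimise=opt, backtest=bt)
print(len(results))   # prints 8
```

Only the concatenated out-of-sample record is judged; the in-sample windows are never part of the verdict, which is what gives the procedure its validity.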


----------



## howardbandy (28 August 2007)

tech/a said:


> What software do you use *HOWARD*




AmiBroker for charting, backtesting, and optimization.  In my opinion, it is head and shoulders the best software available to individuals and small companies.  Tomasz Janeczko has done a masterful job.  Version 5.0 will be released in a few months and will probably have some very powerful extensions to aid trading system validation.

http://www.amibroker.com/

I have been using an Excel spreadsheet for some Monte Carlo analysis.  I anticipate less need for that when AmiBroker 5 is released.

I use Excel to format output for presentations -- particularly charts.

Thanks,
Howard


----------



## howardbandy (28 August 2007)

rnr said:


> The comments below are my interpretation of Howard Bandy's posts to date on this thread.
> 
> That's me done for now and I look forward to any feedback.




Hi rnr --

I believe that I said the out-of-sample data should be used very INfrequently.

Thanks,
Howard


----------



## howardbandy (28 August 2007)

bingk6 said:


> IMO, the reason for performing optimisation is to get some sort of feel as to what has performed best in the past. That is not to say that one can expect the same level of performance going forward using the optimised parameter values, but it is nonetheless a start. There is absolutely nothing available in any past data that would indicate what the future performance is likely to be. So for me, the only kind of "edge" (if you can even call it that) is to  trade a system that I know has performed well in the past, rather than one with random parameter settings. In my view, extracting what has worked in the past is pretty much all that is up for grabs when looking at past data and there are no better alternatives than that.
> 
> The key really is to extract these optimised parameter values from an in-sample set of data and then to verify them using out-of-sample data. By out-of-sample data verification, I mean performing any Monte Carlo analysis you deem necessary etc. using the optimised parameter settings on out-of-sample data. If the out-of-sample testing shows good robustness in the figures and they are relatively close to the optimised figures, then you may well have a very decent system. On the other hand, if the out-of-sample testing shows results that are very poor, then there is a real problem with the system and it's back to the drawing board.
> 
> That, in a nutshell, is my perception of the role that optimisation plays, it is merely a starting step which would hopefully lead to the formulation of a robust system that is better than random.




Hi bingk6 --

Perhaps we are thinking the same things, but I prefer to say it a little differently.

1.  Optimization simply means an organized search through a large number of alternatives, assigning each alternative a score so that the alternatives can be ranked.  In my opinion, the reason we are optimizing is not specifically to find something that worked in the past (although we do find that in the process), but to find some general characteristics that precede profitable trading opportunities that will hopefully continue to work in the future.

2.  Your comments imply that the Monte Carlo analysis is applied to the out-of-sample results.  Am I misunderstanding?  Monte Carlo analysis is usually applied to in-sample data to determine the robustness of the parameters -- the sensitivity to small changes in parameter values.  The results from Monte Carlo runs are incorporated into the objective function used to assign the score to each alternative.  Applying Monte Carlo analysis to the previously out-of-sample results is the start of another stage of model building using that previously out-of-sample data now as an in-sample data set.  A new out-of-sample data set will be required to test for model validity.

Thanks,
Howard


----------



## R0n1n (28 August 2007)

howardbandy said:


> I have been using an Excel spreadsheet for some Monte Carlo analysis.  *I anticipate less need for that when AmiBroker 5 is released.*
> 
> Thanks,
> Howard




So we should be seeing some Monte Carlo analysis in ver5 ? 

I wonder if we can invite Tomasz to this board, given a lot of us are big users of Amibroker.


----------



## rnr (28 August 2007)

Hi Howard,



> Hi rnr --
> 
> I believe that I said the out-of-sample data should be used very INfrequently.




I sincerely apologise as you are 100% correct - I've stuffed up with the cut & paste.


----------



## howardbandy (28 August 2007)

I think several extensions to AmiBroker are on the horizon.  There have been a lot of requests for tools that help with trading system validation.    

I am writing "Introduction to AmiBroker," which I hope to have ready in early 2008.  Tomasz has suggested that I wait until Version 5 is out before writing several of the sections, and before taking screen images.

I'm not certain what is coming, so I'll leave it to Tomasz to make the announcements.

Tomasz posts regularly on the Yahoo boards.  This is the main one:
http://finance.groups.yahoo.com/group/amibroker/

Thanks,
Howard


----------



## stevo (28 August 2007)

ASX.G
Have you looked at IO - Intelligent Optimizer?

I haven't used it for a while but it is very powerful - it should be on the AmiBroker Yahoo site. I would think that the task you mention should reduce to less than 8 hours. IO was initially PSO - Particle Swarm Optimization.

regards


----------



## howardbandy (29 August 2007)

julius said:


> Nick, Howard, Tech et al;
> 
> Apologies if any of this has been covered earlier in the thread,
> 
> ...




Hi Julius --

Thanks for your comments.  The earlier part of this thread does cover some of the questions you raise.

About my comment that using a trading system changes the market being traded.  There is considerable evidence that that is true.  In the 1970s and early 1980s, Donchian-style breakout systems were very successful for trading futures and commodities.  With the advent of inexpensive computers, historical price data, and spreading of the details of those techniques, they stopped being profitable.  Many of the trend-style traders have fallen on hard times.  The CTA I worked for found that their (primarily trend-following) systems stopped working.  John Henry, a very large trend-follower based in the US who had been wildly successful for many years, now regularly posts the worst record for CTAs.  See Futures Magazine, July 2007, page 18 -- Henry has four funds among the five worst records, year-to-date.

Deciding whether any market is in a trend at any particular time depends on the time period over which it is measured.  A market that looks choppy when plotted as daily bars may have very reliable trends when plotted as 15 minute bars.

Whatever objective function is being used, most people and organizations are limited by the drawdown they are willing to absorb.

It is well known that the expected drawdown for any position is proportional to the square root of the time it is held.  Doubling the average holding period automatically increases the expected drawdown by about 40% (a factor of √2 ≈ 1.41).

Shorter holding periods result in lower drawdowns.
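As a quick arithmetic check of the square-root-of-time claim above:

```python
# If expected drawdown scales with sqrt(holding time), multiplying the
# holding period by k multiplies the expected drawdown by sqrt(k).
import math

def drawdown_multiplier(holding_ratio):
    """Factor by which expected drawdown grows when the holding
    period is multiplied by `holding_ratio`."""
    return math.sqrt(holding_ratio)

print(round(drawdown_multiplier(2) - 1, 3))   # prints 0.414 (~40% larger)
print(round(drawdown_multiplier(4) - 1, 3))   # prints 1.0 (twice as large)
```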

Many institutions sell their services on the basis of low portfolio turnover and hold for longer periods.  (In my opinion, the good performance most of them show is primarily due to the once-in-a-millennium bull market we have seen from 1982 to now.)

As far as using end-of-day data, the more I study, research, and test trading systems, the more I prefer short holding periods -- a few days, perhaps a week.  I believe they are less susceptible to failure due to overuse, but only time will tell.

It is interesting to note the rapid rise in the popularity of exchange traded funds that track popular indices, including ETFs that increase the leverage.  Some days those ETFs account for over 40% of all trading activity on US stock exchanges, measured as dollar volume.  

The counterparty to my trade is probably not one of you -- it is probably an automated trading system designed by one of the large, well-funded trading organizations, equipped with the fastest computers, cleanest data feeds, and smartest system developers money can buy.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## bingk6 (29 August 2007)

howardbandy said:


> 1.  Optimization simply means an organized search through a large number of alternatives, assigning each alternative a score so that the alternatives can be ranked.  In my opinion, the reason we are optimizing is not specifically to find something that worked in the past (although we do find that in the process), but to find some general characteristics that precede profitable trading opportunities that will hopefully continue to work in the future.




IMO, the main objective of the optimization phase is to select what appears to be the most robust set of parameter settings, which may not necessarily be the most profitable setting. This relates specifically to the sensitivity of these parameter values, which, as ASXG mentioned in a previous post, means looking out for a relatively stable plateau or platform as opposed to a sharp point with steep fall-offs in all directions. The most robust settings would be right bang in the middle of the plateau. The less sensitive the parameter values, the greater the scope for these parameter values to change without significantly impacting performance. Therefore, outside of giving us valuable information regarding the “pockets” of outperformance within the in-sample data, I am not sure whether there is any information that can be gleaned from the optimization exercise. If there are more “general characteristics that precede profitable trading opportunities” that can be extracted, I would certainly like to hear about them.




howardbandy said:


> 2.  Your comments imply that the Monte Carlo analysis is applied to the out-of-sample results.  Am I misunderstanding?  Monte Carlo analysis is usually applied to in-sample data to determine the robustness of the parameters -- the sensitivity to small changes in parameter values.  The results from Monte Carlo runs are incorporated into the objective function used to assign the score to each alternative.  Applying Monte Carlo analysis to the previously out-of-sample results is the start of another stage of model building using that previously out-of-sample data now as an in-sample data set.  A new out-of-sample data set will be required to test for model validity.




OK, some clarification here. The Monte Carlo analysis that I suggested being performed on the out-of-sample data is purely for *verification* purposes, using optimized parameter values created by running the optimization and parameter-value sensitivity testing on the in-sample data. At no stage am I advocating that we convert what was previously out-of-sample data into in-sample data by re-optimising the previously out-of-sample data and extracting new optimized parameter values. Without the re-optimisation process, one does not really convert out-of-sample data to in-sample data.

As part of the walk-forward process, we would use optimized parameter values to test against out-of-sample data. The point is that while we are performing this walk-forward process, there is really nothing stopping us performing Monte Carlo testing at the same time. If the system gives more signals than the trader can take, then Monte Carlo gives the testing procedure more “credibility” by subjecting the out-of-sample data to a more comprehensive level of testing than a single walk-through could ever provide. This then provides a greater level of “confidence” should the results come out as expected …
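A minimal sketch of that verification step, assuming the out-of-sample run produced a simple list of per-trade percentage results (the numbers below are invented):

```python
# Resample the out-of-sample trade list many times (with replacement)
# to get a distribution of outcomes instead of a single walk-through.
import random

def monte_carlo(trades, n_runs=1000, seed=42):
    """Return the simulated total returns of n_runs resampled sequences."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        sample = [rng.choice(trades) for _ in trades]   # resample trades
        totals.append(sum(sample))
    return totals

trades = [2.1, -1.0, 3.4, -0.5, 1.2, -2.2, 4.0, 0.8]   # hypothetical OOS trades
totals = monte_carlo(trades)
totals.sort()
print(totals[50], totals[-50])   # a rough 5th / 95th percentile band
```

If the optimised in-sample figures fall well inside that band, the out-of-sample evidence supports the system; if they sit far outside it, it's back to the drawing board.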


----------



## nizar (29 August 2007)

bingk6 said:


> IMO, the main objective of the optimization phase is to select what appears to be the most robust set of parameter settings, which may not necessarily be the most profitable setting. This relates specifically to the sensitivity of these parameter values, which, as ASXG mentioned in a previous post, means looking out for a relatively stable plateau or platform as opposed to a sharp point with steep fall-offs in all directions. The most robust settings would be right bang in the middle of the plateau. *The less sensitive the parameter values, the greater the scope for these parameter values to change without significantly impacting performance.* Therefore, outside of giving us valuable information regarding the “pockets” of outperformance within the in-sample data, I am not sure whether there is any information that can be gleaned from the optimization exercise. If there are more “general characteristics that precede profitable trading opportunities” that can be extracted, I would certainly like to hear about them.
> 
> OK, some clarification here. The Monte Carlo analysis that I suggested being performed on the out-of-sample data is purely for *verification* purposes, using optimized parameter values created by running the optimization and parameter-value sensitivity testing on the in-sample data. At no stage am I advocating that we convert what was previously out-of-sample data into in-sample data by re-optimising the previously out-of-sample data and extracting new optimized parameter values. Without the re-optimisation process, one does not really convert out-of-sample data to in-sample data.
> 
> As part of the walk-forward process, we would use optimized parameter values to test against out-of-sample data. The point is that while we are performing this walk-forward process, there is really nothing stopping us performing Monte Carlo testing at the same time. If the system gives more signals than the trader can take, then Monte Carlo gives the testing procedure more “credibility” by subjecting the out-of-sample data to a more comprehensive level of testing than a single walk-through could ever provide. This then provides a greater level of “confidence” should the results come out as expected …




Take a bow, son.
You've clearly made it.

Just one thing to add: there are no problems at all IMO with re-optimising your parameters with the out-of-sample data, in which case it becomes in-sample data, as long as you don't do this all the time, and as long as you still have "new" out-of-sample data (or several data sets) to test the robustness of the system. As Howard pointed out, preferably this data should only be used once, or the least number of times possible.


----------



## buggalug (29 August 2007)

Here's what I get with some of the params with optimization. I'm still learning amibroker, and my data isn't the best. I've done it only on the current ASX 300 over 10 years.

If it's any use I can adjust any of the optimizations or details.

The attached file is actually a .zip file renamed to .pdf to get around the file restrictions on the site. So you need to save it and rename it.

// The optimize params mean in order default, min value, max value, step
// So HighBreakout will try 5, 10, 15, 20
HighBreakOut = Optimize("HighBreakOut", 10, 5, 20, 5);
ShortEMA = Optimize("ShortEMA", 40, 10, 60, 10);
HighestHigh = Optimize("HighestHigh", 70, 30, 120, 20);
LongEMA = Optimize("LongEMA", 180, 120, 360, 30);

SetOption("CommissionMode", 2); //$$ per trade
SetOption("CommissionAmount", 30);
SetOption("MaxOpenPositions", 10 );
SetOption("InitialEquity", 100000 );
PositionSize = -10; // always invest only 10% of the current Equity

cond1=Cross(H,Ref(HHV(H,HighBreakOut),-1)); // when todays high crosses last highest high over the last 10 periods
cond2=H > EMA(C,ShortEMA); // todays high is greater than the 40 day Exp MA of closes
cond3=HHVBars(H,HighestHigh) == 0; // todays high is the highest for 70 periods
cond4=EMA(V*C,21) > 500000; // ensure at least $500k of money flow
cond5=C < 10.00; // only trading in stocks less than $10
cond6=C > O; // todays close higher than open

// the following line is the trigger if all conditions satisfied
Buy=cond1 AND cond2 AND cond3 AND cond4 AND cond5 AND cond6;

// here we define variables used once in the trade
ApplyStop( stopTypeLoss, stopModePercent, amount=10 );
Sell= Cross(Ref(EMA(L,LongEMA),-1),C); // close crosses below yesterdays average of the low



tech/a said:


> Bingk6
> 
> Similar to my thinking.
> but is it really better than random.
> ...


----------



## stevo (29 August 2007)

howardbandy said:


> Hi Julius --
> The counterparty to my trade is probably not one of you -- it is probably an automated trading system designed by one of the large, well-funded trading organizations, equipped with the fastest computers, cleanest data feeds, and smartest system developers money can buy.
> www.quantitativetradingsystems.com




Since the site is called "Aussie Stock Forums" many posters here would not trade the same markets that you would.

Are there any statistics on how much of the market is traded this way? I find it fascinating that this is happening. Is it possible that a lot of trading is done by a group of supercomputers running custom trading systems? I would assume that many would concentrate on the most liquid markets due to the volume of capital involved.


----------



## stevo (29 August 2007)

Buggalug


buggalug said:


> Here's what I get with some of the params with optimization. I'm still learning amibroker, and my data isn't the best. I've done it only on the current ASX 300 over 10 years.



Is this a variant of tech trader?

I dropped it into Excel for some 3D optimisation charts. It would be interesting to try;
LongEMA = Optimize("LongEMA", 100, 65, 125, 20); // or maybe increment by 10

or something similar, since RAR (or CAR for that matter) was highest at the lowest value of LongEMA on the 3D chart. You could fix the short EMA to 20 since it doesn't appear to vary that much.

It appears that the best returns also give the highest drawdown - not an unusual occurrence. So you can make more money (maybe) if you are prepared to accept more drawdown risk. It's all about tradeoffs and compromise.

By the way I haven't studied the code at all, only the results.

regards
stevo


----------



## buggalug (29 August 2007)

stevo said:


> Buggalug
> 
> Is this a variant of tech trader?
> 
> ...




Yeah it is - if you look at my quote, T/A asked if anyone would have a look. I hope I have it right; I found the base code on this site.

I've tried this, adding what you said, but leaving some of the range for comparison. I've also added Bang For Buck to pick which trade to take if more than one comes up.

HighBreakOut = Optimize("HighBreakOut", 10, 5, 20, 5);
ShortEMA = Optimize("ShortEMA", 40, 20, 40, 10);
HighestHigh = Optimize("HighestHigh", 70, 30, 120, 20);
LongEMA = Optimize("LongEMA", 100, 65, 225, 10);

SetOption("CommissionMode", 2); //$$ per trade
SetOption("CommissionAmount", 30);
SetOption("MaxOpenPositions", 10 );
SetOption("InitialEquity", 100000 );
PositionSize = -10; // always invest only 10% of the current Equity

BangForBuck = ((10000/C)* (MA(ATR(1),200))/100);
PositionScore = BangForBuck;

cond1=Cross(H,Ref(HHV(H,HighBreakOut),-1)); // today's high crosses the highest high of the last HighBreakOut (default 10) bars
cond2=H > EMA(C,ShortEMA); // today's high is above the ShortEMA (default 40) day EMA of closes
cond3=HHVBars(H,HighestHigh) == 0; // today's high is the highest for HighestHigh (default 70) bars
cond4=EMA(V*C,21) > 500000; // ensure at least $500k of average money flow
cond5=C < 10.00; // only trade stocks under $10
cond6=C > O; // today's close higher than its open

// the following line is the trigger if all conditions satisfied
Buy=cond1 AND cond2 AND cond3 AND cond4 AND cond5 AND cond6;

// stop-loss and exit rules
ApplyStop( stopTypeLoss, stopModePercent, amount=10 ); // 10% maximum-loss stop
Sell= Cross(Ref(EMA(L,LongEMA),-1),C); // exit when the close crosses below yesterday's EMA of the lows

Same deal ... it's the current ASX 300 for 10 years, and you have to rename the attachment back to .zip.
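For anyone wondering what the Bang For Buck PositionScore is doing: it ranks competing signals by how many dollars a fixed $10,000 parcel moves on an average day, so the most volatile stock per dollar gets the slot. A rough Python rendering of the formula (the candidate tickers and numbers are made up for illustration):

```python
def bang_for_buck(close, avg_true_range):
    """Average daily dollar movement of a $10,000 parcel:
    (shares bought with $10k) * (average true range) / 100."""
    shares = 10000 / close
    return shares * avg_true_range / 100

# Hypothetical candidates that all triggered a buy signal on the same day:
candidates = {
    "AAA": bang_for_buck(close=0.50, avg_true_range=0.04),
    "BBB": bang_for_buck(close=5.00, avg_true_range=0.15),
    "CCC": bang_for_buck(close=9.00, avg_true_range=0.20),
}

# Highest score takes the position slot, as PositionScore does in AmiBroker.
best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 2))
```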


----------



## weird (29 August 2007)

Perhaps ignorance is bliss considering some of the previous posts, but I would argue that EOD trend-following systems on stocks work better than most systems and are more suitable for the average trader. Definitely at least in the Asia-Pacific markets, in my testing anyhow.

I have quite a few long-term trend-following systems that test well on the ASX 300 or All Ords constituent list, and the same systems also test just as well or better on a different market such as the HSCI (Hang Seng Composite Index) constituent list.

I would argue that leverage is not really required and perhaps should be avoided (unless sufficient capital is not available to trade a minimum number of stocks for a portfolio - about 5 to 10 are required to make these types of systems work - and I would perhaps look at more boring types of leverage such as margin lending to do so).

These systems allow people to have a balance between trading and work, and not be glued to a screen all day or miss the one trade that makes the year. A steady income to pay bills is important. When trading only one instrument, missing that trade could mean doom and gloom, but it is not such an issue when trading stocks, where there are many opportunities to make it up.

My belief anyhow is that long-term trend-following systems do work on stocks, and their success is based on often-touted principles: compound profits, trade with the trend, cut losses, and use the system's edge simultaneously across a portfolio of shares.

Perhaps having these systems perform just as well on another market is validation that a system is robust? Monte Carlo results are the validation I often use, before testing on other markets.

The post above is focused on long-term stock portfolio trading systems. I can see other types of trading systems being discussed, such as short-term and swing trading systems, which perhaps have a more limited focus than holding a portfolio of around 5-10 instruments. Those types of systems I did not attempt to address.
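The Monte Carlo validation mentioned above can be as simple as resampling the backtest's trade results many times and looking at the spread of outcomes: a system whose profit distribution stays tight across resamples is more believable. A minimal sketch of that idea (the per-trade returns here are invented for illustration):

```python
import random

def monte_carlo_profits(trade_returns, n_runs=1000, seed=42):
    """Resample the trade list with replacement and return the
    distribution of total compounded returns across runs."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        sample = [rng.choice(trade_returns) for _ in trade_returns]
        equity = 1.0
        for r in sample:
            equity *= (1 + r)
        results.append(equity - 1)
    return results

# Hypothetical per-trade returns from a backtest (+8%, -3%, ...):
trades = [0.08, -0.03, 0.12, -0.05, 0.04, -0.02, 0.15, -0.04, 0.06, -0.01]
runs = monte_carlo_profits(trades)
worst, best = min(runs), max(runs)
print(f"worst {worst:.1%}, best {best:.1%}")
```

A wide gap between worst and best runs is the warning sign that the headline backtest result depended on trade ordering and luck.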


----------



## tech/a (29 August 2007)

Buggalug

Would love to have a look but I get this.
I've PM'd you my email address.


----------



## theasxgorilla (29 August 2007)

Tech/a download and save it locally, rename it to a .zip file, unzip it and you'll find a .csv file which contains the optimisation output.

Good work buggalug.

ASX.G


----------



## buggalug (29 August 2007)

theasxgorilla said:


> Tech/a download and save it locally, rename it to a .zip file, unzip it and you'll find a .csv file which contains the optimisation output.
> 
> Good work buggalug.
> 
> ASX.G




Everyone looking at this, just give it a little time before going too far - Bingk6 is getting some different results than me, so we're just cross-checking.

Cheers


----------



## tech/a (29 August 2007)

I wanted to get the "Optimum" variables and code them into M/S then through Tradesim for some checking/testing/stuffing around myself.


----------



## buggalug (29 August 2007)

tech/a said:


> I wanted to get the "Optimum" variables and code them into M/S then through Tradesim for some checking/testing/stuffing around myself.




Tech,

Did you get my email?

I'm going to leave one going overnight with optimization for stops, the dollar limit (C < 10) and a few liquidity levels. Hopefully it's on the right track; I'll be interested in the results you get with your testing.


----------



## weird (29 August 2007)

buggalug  , 

My two cents: just looking at the parameters, I would not think you need to run it overnight to realise that these indicators, in any optimised combination, won't provide a robust system or a system that would get me excited enough to trade ... pls prove me wrong. Monte Carlo may find much variation in the results.

I would look at the indicators and determine a reason why they should indicate that a stock is performing strongly and warrants an entry over other stocks ... don't get me wrong, I think MAs and breakouts are a strong foundation for long-term trend-following systems, but I don't think brute-forcing a simple system like this will yield much.

I would spend some time eyeballing charts and try to determine common characteristics of previously outperforming stocks.


----------



## stevo (29 August 2007)

buggalug
It's good to see some code posted. I ran it on the All Ords stocks rather than the ASX300.

Just a few things to consider;

1. It would be good to set delay on the entry / exit using;
SetTradeDelays( 1, 1, 1, 1 ); /* delay entry/exit by one bar */

2. What is the trade price you use, the open, the average or random for the day?

3. Setting position sizing to 10% of capital might mean that you are better off using % brokerage instead of fixed $30 brokerage, depending on your broker, otherwise brokerage costs could end up too low as the simulation progresses.

4. Using Bang For Buck in the PositionScore could give you results that are unrealistic. Possibly use Gp's approach to Monte Carlo (ie randomly ignore some trades) to get around this.

5. I also look at how the system handles larger amounts of money than $100,000 just because that is where we want to eventually be! Many systems cannot handle larger amounts of money very well due to liquidity issues and performance degrades. Check out how it goes with $1 million & $5 million.

6. Check what the limit on trade size as a % of entry bar volume is set at. I wouldn't go above 10% of entry bar volume, and probably less.

7. I am not sure what price the ApplyStop is working on.

8. You mentioned that your data "isn't the best". You can waste a lot of time working with crappy data.
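On point 4, the "randomly ignore some trades" idea is easy to sketch: because a PositionScore makes the backtest deterministic, skipping a random subset of signals on each run shows how much of the result depended on getting those exact trades. A rough illustration (the trade returns are made up):

```python
import random

def skip_trade_runs(trade_returns, skip_prob=0.2, n_runs=500, seed=7):
    """On each run, randomly drop a fraction of the signals and
    compound what remains; return every run's total return."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        equity = 1.0
        for r in trade_returns:
            if rng.random() < skip_prob:
                continue  # this signal is ignored on this run
            equity *= (1 + r)
        totals.append(equity - 1)
    return totals

trades = [0.10, -0.04, 0.07, -0.03, 0.12, -0.05, 0.05, -0.02]
totals = skip_trade_runs(trades)
spread = max(totals) - min(totals)
print(f"{len(totals)} runs, result spread {spread:.1%}")
```

If the spread is large relative to the average, the headline result leaned heavily on a handful of trades the ranking happened to select.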

CAR results look ok, although the % system drawdown is something that I could not live with going forward. I would also like to see the win rate higher. But these are my main criteria. The open equity curve is a little tough, especially through 2002/2003 - it would be better if the system stepped aside through some of this period.

Sorry for the length of the post, but when it comes to system testing there are a lot of things to consider - I haven't even scratched the surface. You can spend hours running opts only to find that something is set wrong, or the basic starting point is all wrong.

Howard's book looks like it addresses a lot of system design issues, and from his posts above he knows what he is talking about - but I haven't read it yet. But I will when I get time.

regards


----------



## weird (29 August 2007)

buggalug, the last post was partly in jest, as TT has a proven track record as a forward-tested trend-following system based on simple mechanical parameters, and was posted on ReefCap ... my post was paired with a previous one arguing that trend-following systems can and do work; however, I would look at additional filters to improve the results. Good luck, and I look forward to seeing your results.


----------



## buggalug (30 August 2007)

Hi Guys,

I really just did this as an exercise to learn Amibroker, I'm not using a system along these lines. 

Stevo:
1. I've added this
2. Buy on next open, sell on next open
3. Good point - as most brokers charge a fixed amount then a percentage, I've just picked a value; anyone seriously using this might want to put some sort of conditions in.
4/5/6. You're probably right again - maybe an exercise for the future.
7. 10% by default; these results try a few values.
8. Right again - anyone going to pick a system based on this would want to check it with their own data.

So to stress again, this is just a learning experience for me, and some others may find the results interesting.

Here is the latest code and results:

HighBreakOut = Optimize("HighBreakOut", 20, 10, 40, 10);
ShortEMA = Optimize("ShortEMA", 30, 10, 70, 20);
HighestHigh = Optimize("HighestHigh", 30, 10, 100, 20);
LongEMA = Optimize("LongEMA", 65, 60, 200, 20);
Stop = Optimize("Stop", 10, 5, 20, 5);
DollarLimit = Optimize("DollarLimit", 10, 5, 20, 5);
Liquidity = Optimize("Liquidity", 500000, 500000, 500000, 200000);

SetOption("CommissionMode", 1); //% per trade
SetOption("CommissionAmount", 0.15);
SetOption("MaxOpenPositions", 10 );
SetOption("InitialEquity", 100000 );
SetTradeDelays( 1, 1, 1, 1 ); /* delay entry/exit by one bar */
PositionSize = -10; // always invest only 10% of the current Equity

BangForBuck = ((10000/C)* (MA(ATR(1),200))/100);
PositionScore = BangForBuck;

cond1=Cross(H,Ref(HHV(H,HighBreakOut),-1)); // today's high crosses the highest high of the last HighBreakOut bars
cond2=H > EMA(C,ShortEMA); // today's high is above the ShortEMA-day EMA of closes
cond3=HHVBars(H,HighestHigh) == 0; // today's high is the highest for HighestHigh bars
cond4=EMA(V*C,21) > Liquidity; // money-flow filter, threshold set by the Liquidity parameter
cond5=C < DollarLimit; // price filter, cap set by the DollarLimit parameter
cond6=C > O; // today's close higher than its open

// the following line is the trigger if all conditions satisfied
Buy=cond1 AND cond2 AND cond3 AND cond4 AND cond5 AND cond6;

// stop-loss and exit rules
ApplyStop( stopTypeLoss, stopModePercent, amount=Stop ); // maximum-loss stop, % set by the Stop parameter
Sell= Cross(Ref(EMA(L,LongEMA),-1),C); // exit when the close crosses below yesterday's EMA of the lows


ttrader3.zip


----------



## tech/a (30 August 2007)

Bugs

Thanks got your mail.

A question for all involved.
Bugs has now come up with some figures re optimisation of T/T.

I guess it's time to trawl the figures and come up with the optimised variables which suit what I want from the system - drawdown, string of losses, return etc.

Then run whatever systems testing is deemed necessary on those values.

I'll do that and get back with tradesim results.

Any other hints (as I've never worked with optimisation!) before I get into it?

Bugs, what *would be handy* is the English for some of the result terminology used by AmiBroker, so I know what it is I'm comparing.

Thanks for your effort.


----------



## buggalug (30 August 2007)

tech/a said:


> Bugs
> 
> Thanks got your mail.
> 
> ...




Tech/a,

My interpretation of the results is that the parameters produce similar outcomes - around 40-45% winners with 25-30% drawdown. Fiddling the numbers just means it locks in some stocks that happened to soar.

I've been looking for explanations of the columns myself with no luck; maybe someone else can help?

Out of curiosity I locked the short EMA to 30 and the long exit EMA to 120; with only two optimizations I get the below. So I'd be curious what you get with a 30 short EMA, 120 long EMA, 40 HighestHigh and 30 HighBreakout, just to see if my results are valid.


----------



## tech/a (30 August 2007)

Bugs.

Start date and end date.
Position size.
Stop I presume as I have it?
What other parameters?


----------



## buggalug (30 August 2007)

tech/a said:


> Bugs.
> 
> Start date and end date.
> Position size.
> ...




Tech:

Start Date : 2/1/1997
End Date : Today
Starting Equity: $100000
Max Open Positions : 10
Buy Next Open
Sell Next Open
Stop Loss : 10%
Brokerage: 0.15% (???)
Current ASX 300 (???)

EntryTrigger:= Cross(H,Ref(HHV(H,30),-1)) AND H > Mov(C,30,E) AND HHVBars(H,40)=0 AND Fml("Liquidity") > 500000 AND C < 10.00 AND C > O;

ExitTrigger:=Cross(Ref(Mov(L,120,E),-1),C); 

I haven't done this yet - I have a fixed 10%, so I guess I'll have to add it:
InitialStop:=If(Ref(C,-1)>0.90*EntryPrice,0.90*EntryPrice,Ref(C,-1));
Is that saying 10% or the previous close, whichever is more? So if it gains 20% that is the stop loss?
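Trying to work that line out for myself in Python (this is just my reading of the MetaStock code, with made-up numbers - happy to be corrected):

```python
def initial_stop(entry_price, prev_close):
    """My reading of If(Ref(C,-1) > 0.90*EntryPrice, 0.90*EntryPrice, Ref(C,-1)):
    the stop sits at whichever is LOWER - yesterday's close or 90% of entry."""
    if prev_close > 0.90 * entry_price:
        return 0.90 * entry_price
    return prev_close

# Entered at $1.00; yesterday closed at $0.98 -> stop at 90% of entry
print(initial_stop(entry_price=1.00, prev_close=0.98))  # 0.9
# Yesterday closed at $0.85, below the 90% level -> stop at yesterday's close
print(initial_stop(entry_price=1.00, prev_close=0.85))  # 0.85
```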

Do you have anything on which stock to choose if you have multiple signals?

Let me know if I missed anything.

Bugs


----------



## buggalug (30 August 2007)

buggalug said:


> Tech:
> 
> Start Date : 2/1/1997
> End Date : Today
> ...




Sorry that should be:

EntryTrigger:= Cross(H,Ref(HHV(H,20),-1)) AND H > Mov(C,30,E) AND HHVBars(H,80)=0 AND Fml("Liquidity") > 500000 AND C < 10.00 AND C > O;


----------



## nizar (2 September 2007)

tech/a said:


> Something to aim at perhaps.




Looks like I'm getting there


Monte Carlo Report	

Trade Database Filename	
C:\TradeSimData\Daily07.trb	

Simulation Summary	
Simulation Date:	2/09/2007
Simulation Time:	1:11:41 PM
Simulation Duration:	464.06 seconds

Trade Parameters	
Initial Capital:	$30,000.00
Portfolio Limit:	100.00%
Maximum number of open positions:	100
Position Size Model:	Fixed Percent Risk
Percentage of capital risked per trade:	1.50%
Position size limit:	100.00%
Portfolio Heat:	100.00%
Pyramid profits:	Yes
Transaction cost (Trade Entry):	$44.00
Transaction cost (Trade Exit):	$44.00
Margin Requirement:	100.00%
Magnify Position Size(& Risk) according to Margin Req:	No
Margin Requirement Daily Interest Rate (Long Trades):	0.0000%
Margin Requirement Yearly Interest Rate (Long Trades):	0.0000%
Margin Requirement Daily Interest Rate (Short Trades):	0.0000%
Margin Requirement Yearly Interest Rate (Short Trades):	0.0000%

Trade Preferences	
Trading Instrument:	Stocks
Break Even Trades:	Process separately
Trade Position Type:	Process all trades
Entry Order Type:	Default Order
Exit Order Type:	Default Order
Minimum Trade Size:	$500.00
Accept Partial Trades:	No
Volume Filter:	Reject Trades if Position Size is greater than
	10.00% of the maximum traded volume
Pyramid Trades:	Yes
Favour Trade Pyramid:	No
Start Pyramid at any level up to level:	N/A
Maximum Pyramid Level Limited to:	N/A
Maximum Pyramid Count Limited to:	N/A

Simulation Stats	
Number of trade simulations:	10000
Trades processed per simulation:	7172
Maximum Number of Trades Executed:	351
Average Number of Trades Executed:	301
Minimum Number of Trades Executed:	258
Standard Deviation:	9.37

Profit Stats	
Maximum Profit:	$5,813,301.56 (19377.67%)
Average Profit:	$2,018,626.98 (6728.76%)
Minimum Profit:	$552,351.05 (1841.17%)
Standard Deviation:	$438,117.73 (1460.39%)
Probability of Profit:	100.00%
Probability of Loss:	0.00%

Percent Winning Trade Stats	
Maximum percentage of winning trades:	54.36%
Average percentage of winning trades:	47.84%
Minimum percentage of winning trades:	40.84%
Standard Deviation:	1.72%

Percent Losing Trade Stats	
Maximum percentage of losing trades:	59.16%
Average percentage of losing Trades:	52.16%
Minimum percentage of losing trades:	45.64%
Standard Deviation:	1.72%

Average Relative Dollar Drawdown Stats	
Maximum of the Average Relative Dollar Drawdown:	$18,582.40
Average of the Average Relative Dollar Drawdown:	$6,354.48
Minimum of the Average Relative Dollar Drawdown:	$1,427.79
Standard Deviation:	$1,528.06

Average Relative Percent Drawdown Stats	
Maximum of the Average Relative Percent Drawdown:	1.8207%
Average of the Average Relative Percent Drawdown:	1.0783%
Minimum of the Average Relative Percent Drawdown:	0.7348%
Standard Deviation:	0.1249%

Maximum Peak-to-Valley Dollar Drawdown Stats	
Maximum Absolute Dollar Drawdown:	$626,104.23
Average Absolute Dollar Drawdown:	$154,098.20
Minimum Absolute Dollar Drawdown:	$26,341.77
Standard Deviation:	$38,179.05

Maximum Peak-to-Valley Percent Drawdown Stats	
Maximum Absolute Percent Drawdown:	26.2756%
Average Absolute Percent Drawdown:	13.6261%
Minimum Absolute Percent Drawdown:	6.4724%
Standard Deviation:	3.0385%


----------



## tech/a (2 September 2007)

Looks good Nizar.


----------



## bingk6 (7 September 2007)

howardbandy said:


> Hi Gorilla --
> 
> Yup.  Every profitable trade reduces the likelihood that the next trade will be profitable.
> 
> Howard




I am very intrigued by the concept that trading systems will over time lose their edge and ultimately start to fail. I have been playing around with a volatility breakout system and have done a great deal of testing with it using ASX data, and it has shown reasonable promise. This system is very very short term, most trades (99%)  lasting <5 days and therefore the overall exposure to the market is very small. 

Recently, I purchased some historical data from FTSE, just to see how the system would perform on an entirely fresh batch of data.

Enclosed below is the equity curve arising from running the system. Please note the following:

1.	I don’t have the actual codes of the stocks belonging to the FTSE 100, so just arbitrarily set up a very simple filter that calculated MA(V*C,100) for every stock on the FTSE and selected the top 100, just nice and quick.

2.	No optimisation has been done on the FTSE data. All the parameter values have been set using ASX data.

3.	The system appeared to run very well, starting from ~end of '86 (where my test data starts) all the way through to the middle of '99 (some 13 years), before it literally fell off a cliff; it has been in a downtrend ever since.


My questions are as follows:

1)	Has this system hit its use-by date?
2)	How can something that has worked so well for so long suddenly stop working altogether?
3)	Given the short duration of the trades involved, one would not think the professionals would have had the opportunity to study these volatility breakout strategies to any great degree and come up with counter-strategies to fade them???
4)	To date, I have not seen any such marked deterioration in the Australian market. Is this because the ASX is too small a market for the professionals to implement their fading strategies, as there are much bigger fish to fry elsewhere??


----------



## nizar (7 September 2007)

Interesting observation and analysis bingk6, I'd be keen to hear others' responses to your questions (those that are more expert than me!).



buggalug said:


> Sorry that should be:
> 
> EntryTrigger:= Cross(H,Ref(HHV(H,20),-1)) AND H > Mov(C,30,E) AND HHVBars(H,80)=0 AND Fml("Liquidity") > 500000 AND C < 10.00 AND C > O;




What does this mean?
HHVBars(H,80)=0

And also, looking forward, as a common-sense assumption, do you think it would be a good idea NOT to apply a price filter (C<10.00), because in the future it's likely that most listed companies will have a higher price on average than today?

It would be interesting to see what the average share price was 10 years ago as opposed to today.


----------



## rnr (7 September 2007)

> What does this mean?
> HHVBars(H,80)=0




Read it as "today's high is the HIGHEST HIGH of the last 80 bars".
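In other words, HHVBars(H, 80) counts how many bars ago the 80-bar highest high occurred, so a value of 0 means it is today. A quick Python equivalent (my own sketch with hypothetical highs, ignoring ties):

```python
def hhv_bars(highs, period):
    """Bars since the highest high of the last `period` bars (0 = today)."""
    window = highs[-period:]
    # Scan from the most recent bar backwards for the window's maximum.
    return window[::-1].index(max(window))

print(hhv_bars([1.00, 1.20, 1.50, 1.30, 1.40], 5))  # 2: the 1.50 high was 2 bars ago
print(hhv_bars([1.00, 1.10, 1.20, 1.30, 1.40], 5))  # 0: today's high is the highest
```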


----------



## tech/a (7 September 2007)

Today's high is the highest high over the last 80 periods.

Price filter (larger caps).

Is a stock below $10 *more likely* to double or treble than a stock over $10?
The price filter is based upon the ASX 300 (roughly), so the small caps are basically not included.

You can of course make your universe whatever you want.
If it tests OK then fine.
Personal preference, and a filter Radge suggested at the time of design.
It increased results and hasn't done badly in real time either.

Nizar.
Don't try to cover EVERY base!!
You won't have enough arms or legs!


----------



## buggalug (7 September 2007)

nizar said:


> What does this mean?
> HHVBars(H,80)=0
> 
> And also, looking forward, as a common sense assumption, do you think that it would be a good idea NOT to apply a price filter (C<10.00) because in the future its likely that most listed companies would have a higher price on average than today.
> ...




Nizar,

tech/a has explained his system a number of times here and on The Chartist; looking at those posts will explain what he is trying to achieve.

The $10 limit probably isn't valid going forward, and the same probably applies to the liquidity filter. I did a quick scan of the All Ords: I currently get 331 stocks that average more than $500k, while the same scan in 1997 (of course the All Ords would have been different back then) only gets 42.


----------



## rnr (7 September 2007)

> the same probably applies to the liquidity filter (I did a quick scan of the all ords I currently get 331 stocks that average more than $500k, the same scan in 1997 (of course the all ords would of been different back then) only gets 42.




The issue of liquidity when back-testing is exactly the one noted by buggalug.

Perhaps one should include a CPI-related turnover filter in their back-testing, when you take into consideration that $500K now equated to approximately $387K in Dec 1997.
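One way to implement that in a backtest is to scale the turnover threshold by a CPI index, so $500K of today's turnover is compared against its deflated equivalent in earlier years. A sketch of the idea - the index levels below are illustrative, back-solved from the $500K → $387K figure above, and a real backtest should use the published ABS series:

```python
# Hypothetical CPI index levels (base year 2007 = 100); back-solved so that
# 1997's threshold comes out near $387K. Use the official ABS series for real work.
cpi = {1997: 77.4, 2002: 88.0, 2007: 100.0}

def liquidity_threshold(year, base_amount=500_000, base_year=2007):
    """Deflate the turnover filter so it demands equivalent real liquidity."""
    return base_amount * cpi[year] / cpi[base_year]

for y in sorted(cpi):
    print(y, round(liquidity_threshold(y)))
```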


----------



## buggalug (7 September 2007)

rnr said:


> The issue of liquidity when back-testing does raise the issue as noted by buggalug.
> 
> Perhaps one should include a CPI related turnover filter in their back-testing.
> 
> When you take into consideration that $500K now equated to approx $387K in Dec 1997.




Maybe as a rough hack you could use something like XAO*100? That gives around $620k now and $240k back in 1997.


----------



## howardbandy (7 September 2007)

Greetings --

I am seeing code that restricts the purchase to stocks with a maximum price of $10.  And questions such as this one:

"Is a stock below $10 more likely to double or treble than a stock over $10?
The price filter is based upon the ASX300 (Roughly) so the small caps are basically not included."

I'd like to make two points.

One.  There is good evidence that lower price stocks gain a higher percentage than higher price stocks given conditions that cause both to rise.  

Two.  Historical prices cannot be relied upon unless those prices are Not adjusted for splits and dividends.  A stock that was $15 in 2004, then split 2:1 in 2006, will appear today to have been $7.50 in 2004.  But it was not, and the bias toward lower price stocks does not apply to a purchase made in 2004 in a backtest.
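To make the distortion concrete, here it is in a few lines of Python (hypothetical numbers matching the example above):

```python
actual_price_2004 = 15.00   # what a trader actually paid in 2004
split_ratio = 2             # 2:1 split in 2006

# Data vendors back-adjust: every pre-split price is divided by the ratio.
adjusted_price_2004 = actual_price_2004 / split_ratio  # appears as $7.50

price_filter = 10.00
passes_in_backtest = adjusted_price_2004 < price_filter  # True: the backtest buys it
passed_in_real_time = actual_price_2004 < price_filter   # False: a real trader saw $15

print(adjusted_price_2004, passes_in_backtest, passed_in_real_time)
```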

Thanks,
Howard
www.quantitativetradingsystems.com


----------



## theasxgorilla (8 September 2007)

howardbandy said:


> Two.  Historical prices cannot be relied upon unless those prices are Not adjusted for splits and dividends.  A stock that was $15 in 2004, then split 2:1 in 2006, will appear today to have been $7.50 in 2004.  But it was not, and the bias toward lower price stocks does not apply to a purchase made in 2004 in a backtest.




Really good point Howard.  It appears to mean that stats generated from testing a system on data flawed in the ways you describe must be dubious.  But no-one seems to have ASX data that isn't adjusted for splits, or with the dividends adjusted into the share price.  I've seen the Blackstar study and agree it would be far _nicer_ to have that kind of data to test with (not to mention a CPI-adjusted liquidity filter... this is obviously far easier to do).  Without this kind of data, what do you do?  This could be yet another reason why the big boys don't play in this part of the market... the Australian market isn't big enough to warrant _fixing_ 22 years of data for testing systems the way the Blackstar guys do in the US.


----------



## Sir Burr (8 September 2007)

tech/a said:


> Think I'll become a client as well.
> I don't like using cards over the net---been done once---is there another way? Cheque---phone a card number through?




Tech,

Way off topic  but I remembered your post when I saw this:

www.bopo.com.au

SB


----------



## tech/a (8 September 2007)

SB

Went BPAY in the end.
You've just reminded me to do step 2 of the setup.

Cheers.


----------



## howardbandy (8 September 2007)

Hi Gorilla --



theasxgorilla said:


> Really good point Howard.  It appears to mean that stats generated from testing a system on data flawed in the ways you describe must be dubious.  But no-one seems to have ASX data that isn't adjusted for splits or with the dividends adjusted into the share price. I've seen the Blackstar study and agree it would be far _nicer_ to have this kind of data to test with (not to mention a CPI adjusted liquidity filter...this is obviously far easier to do).  Without this kind of data, what do you do?  This could be yet another reason why the big boys don't play in this part of the market..the Aust market isn't big enough to warrant _fixing_ 22 years of data for testing systems like what the Blackstar guys do in the US.




The problem of finding high quality historical data is not unique to the Australian markets.  

Even for US stocks, historical data that lists the actual prices and volumes as traded, unadjusted for splits, distributions, and dividends, is very expensive.  

To remove survivor bias, the data should include delisted issues as well.

Once the data is located, the problem is further complicated because the data is unadjusted.  As most of you who have tried to clean your data know, it is often impossible to tell whether a 10 or 20 or 50% price change is a price change or a split.  The trading system software must be able to cope with the discontinuities, which few (none?) can.  For comparison, think about the way trading system packages designed to work with futures contracts handle the roll-over from one front month to the next.  They have an algorithm based on either a calendar or on volume that tells them when to switch, and they take the differential between the two contracts into account as they report the trading results.  Neither of those methods can be applied to stock distributions.

In an ideal world: we would have perfectly clean data; there would be a field for Actual price along with Open, High, Low, Close; there would be a field for a code that identifies what happened at every discontinuity; and our development software would be able to handle all of this.  

But that still does not enable us to handle situations where the fundamentals of the company change without the ticker changing and without a distribution.  For example, an automobile manufacturer sells its financing division.  Among US automakers, the financing division accounts for nearly all of the profit of the company.  The profitability and consequent price action change, but there is no way to determine what happened without reading the press releases, and there is no way to adjust previous prices to remove the contribution of the financing division.

I do not think we can completely, or even adequately, test for the effect of historical actual prices, splits, dividends, takeovers, spinoffs, or survivor bias.  

And I think it is Very dangerous to draw conclusions from long-only systems in the 1982 to 2007 time period.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## nizar (8 September 2007)

howardbandy said:


> And I think it is Very dangerous to draw conclusions from long-only systems in the 1982 to 2007 time period.




Why so?


----------



## Shane Baker (9 September 2007)

Long term secular bull market for stocks.


----------



## Sir Burr (9 September 2007)

howardbandy said:


> Note, certain to generate some interest -- stops hurt systems!  Try to design your systems so that very few exits are caused by a stop, particularly a maximum loss stop.
> 
> Thanks,
> Howard
> www.quantitativetradingsystems.com




Howard,

This is very interesting. In my testing, any addition to a timed stop "degrades" a system.

Adding the following stops (1 to 4) individually to the timed stop degrades the system, with 1 being least and 4 most degrading (profit, drawdowns), as you mentioned above.

Timed stop +

1. Trailing stop % - exit next bar
2. Trailing stop % - intra day
3. Max stop loss % - exit next bar
4. Max stop loss % - exit intra day

Do/*can* people here trade without the above stops as a security blanket? 

SB


----------



## tech/a (9 September 2007)

howardbandy said:
			
		

> And I think it is Very dangerous to draw conclusions from long-only systems in the 1982 to 2007 time period.




I find this an amazing statement.
Howard is not the first to make it - I've seen it often.

If that period had been a full-on bear market, would systems successfully developed around short trading be just as dangerous? Long-only systems are, and should be, designed to catch periods exactly like these. If one had been developed before 1982, or even 1990, you would have taken part in the periods for which it was designed. What's wrong with that?



			
				howardbandy said:
			
		

> Note, certain to generate some interest -- stops hurt systems! Try to design your systems so that very few exits are caused by a stop, particularly a maximum loss stop.




Not the first time I've seen this statement either, and I have proven it to be the case in my own testing of various ideas. BUT given

(1) The discussion previously along the lines of developing a method that suits the individual, AND
(2) The argument that trend systems are not to be trusted---the idea itself suggests that to be profitable you'd eventually need long trends in the direction you're trading, AND
(3) The psychological element of applying a system,

I would have thought that *most* would find a combination of capital preservation and positive expectancy, coupled with turbo-charged money management principles, preferable.


----------



## howardbandy (9 September 2007)

Greetings --

Just a few comments to points made in the last few days.

1.  I think the folks who suggest that we traders need to work on the psychology of trading so that we can accept the behavior of our systems have it backwards.  By designing our own objective function, we encapsulate our preferences and our comfort zone into the trading systems right from the start.  

If we prefer short holding periods, we should ask for short holding periods -- that is, we should put a term in our objective function that rewards short holding periods -- then trading systems that score well will already be biased toward short holding periods and we will not have to adapt ourselves to systems that have long holding periods.  Substitute any characteristic that is important to you for holding period -- RAR, equity curve slope, equity curve smoothness, Sharpe ratio, commissions paid, and so forth.
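A minimal sketch of what such an objective function might look like; the metric names and weights below are illustrative assumptions about what one trader might care about, not Howard's actual function:

```python
# A hedged sketch of the "objective function" idea: score a backtest by
# the characteristics you care about, so optimization favours systems
# you can actually live with. Metric names and weights are illustrative.

def objective(stats, max_hold=20):
    """Higher is better. stats is a dict of backtest summary metrics."""
    score = 0.0
    score += stats['cagr']                      # reward raw growth
    score -= 2.0 * stats['max_drawdown']        # penalise drawdowns heavily
    # reward short holding periods: full credit at or below max_hold days
    score += min(1.0, max_hold / stats['avg_hold_days'])
    score += 0.5 * stats['equity_smoothness']   # e.g. R^2 of a log-equity fit
    return score
```

Swapping terms in and out of this score is exactly the "encapsulate your preferences" step: the optimizer then ranks candidate systems by your comfort zone rather than by net profit alone.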

2.  I have no complaint about trend-following systems.  If they work, use them.  My caution is that the 1982 to 2007 period has been an exceptionally strong bull market in equities.  Any trading system that takes long-only positions should have some way of identifying that the market conditions are not right for it, and it should exit open trades and inhibit new trades.

Some skeptical friends of mine think that we will need to wait until after the next ice age to see a bull market like this again.

Whether that turns out to be true or not, prices of equities have risen far above their mean. When prices revert to the mean, they always pass through the mean and are extended to the other extreme by about the same amount.  If that happens to equity prices, the excursion could take broad market averages down by 80% or more from here.  If there is a market drop, and if trend-following systems work to profit from it, those systems will not be long-only.
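The "identify when conditions are not right and inhibit new trades" idea can be sketched as a simple regime filter. The 200-day moving-average test below is a common stand-in and an assumption on my part, not Howard's method:

```python
# A minimal regime filter sketch: a long-only system only takes new
# trades while a broad index sits above its long moving average.
# The 200-day lookback is an illustrative convention, not a recommendation.

def regime_allows_longs(index_closes, lookback=200):
    """True if the latest index close is above its lookback-day SMA."""
    if len(index_closes) < lookback:
        return False                  # not enough history: stand aside
    sma = sum(index_closes[-lookback:]) / lookback
    return index_closes[-1] > sma
```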

3.  Stops hurt systems.  That does not mean do not use stops.  It is just an observation that there are many ways to exit from a trade, and the worst of those ways is a maximum-loss stop.  

Exits can be triggered:
1.  By the action of an indicator or recognition of a pattern, similar to what caused the entry.
    a.  The parameters can be the same as those that caused the entry, but in the other direction.
    b.  The parameters can be different for exit than for entry.
    c.  Some other indicator can be used.
2.  By the price reaching a profit target.
3.  By the time in the trade reaching a maximum holding period.
4.  By the price falling back to the level of a trailing stop.
5.  By the price falling back to the level of a maximum-loss stop.

In my experience and research, exits caused by 1, 2, or 3 are preferable.  Exits caused by 5 are worst.

Thanks for listening,
Howard
www.quantitativetradingsystems.com


----------



## howardbandy (9 September 2007)

Greetings --

One more point.  Tech/a mentions expectancy and wants his trading systems to have a positive expectancy.  

Expectancy is the gain or loss expected from the average trade (or bet, if you are reading about expectancy as related to gambling).   

Tech/a is Absolutely correct!

In order for a trading system to have a chance of working profitably, it Must have a positive expectancy.  There is no technique for position sizing or money management that will convert a system that has a negative expectancy into a profitable system.  But, it is possible to apply inappropriate position sizing or money management to a system that has a positive expectancy and turn it into a losing system.

There are exactly two variables in the equation that will tell you what the final equity of a system will be: the number of trades and the expectancy expressed as a percentage.

final equity = initial equity * ((1+expectancy) ^ number of trades)

If expectancy is negative, the final equity will be less than the initial equity.
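The identity drops straight into code, and a quick check shows the compounding at work (a hypothetical 2% expectancy over 100 trades multiplies equity by roughly 7.24):

```python
# The compounding identity above, verbatim. Expectancy is the average
# per-trade gain or loss expressed as a fraction of equity.

def final_equity(initial_equity, expectancy, n_trades):
    return initial_equity * (1 + expectancy) ** n_trades
```

With any negative expectancy the factor `(1 + expectancy)` is below 1, so no number of trades, and no position-sizing scheme layered on top, can lift final equity above the initial stake.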

Thanks,
Howard


----------



## It's Snake Pliskin (9 September 2007)

howardbandy said:


> Greetings --
> 
> Just a few comments to points made in the last few days.
> 
> ...




Howard thanks for the informative post. I agree with exiting on some criteria like you posted.


----------



## Temjin (9 September 2007)

Howard, how would you compare the correlation between the Edge Ratio of an entry indicator and the overall expectancy of a system?

I understand that the expectancy of a system is largely a function of both entry and exit, while money management / position sizing is simply used to keep yourself in the game to exploit that expectancy, and/or to increase reward by increasing risk at the same time.



			
howardbandy said:

> 3. Stops hurt systems. That does not mean do not use stops. It is just an observation that there are many ways to exit from a trade, and the worst of those ways is a maximum-loss stop.
> 
> Exits can be triggered:
> 1. By the action of an indicator or recognition of a pattern, similar to what caused the entry.
> ...




This is quite interesting, as initially I thought exits 4 and 5 were the best for minimising loss. But after further research, and with my limited experience, I have become more favourably biased toward exits caused by 1 and 2.

My "belief", as yet untested, is that a premature exit degrades the whole system because you never let your winning trades run their course; in "theory", they should have a higher probability of a higher MFE.

That is, successful trades rarely go against you too far. The use of trailing stops may limit risk, but it will also limit the potential returns, which might reduce the overall performance of the system.


----------



## tech/a (10 September 2007)

Howard.
Your comments make complete sense.

I now realise that it is impossible for you and others speaking on such a topic to cover every aspect of your opinions and thoughts in a few paragraphs.
Like a good method it takes time to build the story so to speak.

Thanks again.


----------



## tech/a (10 September 2007)

Temjin.

I agree with your observations, and I too have pondered and tested similar thoughts (although I don't have an edge ratio).

In line with 1 & 2, and also 3, being caught in trades which wallow in small profit relative to drawdown of initial capital has a detrimental effect.
This also raises another question:
*When does an entry lose its validity?* You enter a trade on a signal, off it goes, then it retraces below your initial entry but above all stops. The entry signal is now gone, and you are in a trade which has failed to continue given its original conditions.
Should we stay in that trade in the hope it will continue in our direction? At what point is the entry signal invalid, and if it is, what are we doing holding the trade if it's NOT in profit?

Ideally, staying in trades which continue to accumulate open profit is what we attempt to do.
Trailing stops, in my view, should be widened as a trade matures in profit, BUT with time and normal exit criteria still in place.

This is where system design falls into a category of balance. You won't (well, I haven't) find perfect settings for every parameter. There will always be give and take.
Howard's concept of an objective function (although we all do this to a degree, I have never seen it spelled out so specifically) is a key element in what I, or you, would accept in our systems. Quite possibly different objectives, different settings, different money management in the SAME system.


----------



## Chorlton (10 September 2007)

Hello All,

After reading through the recent posts, can somebody explain what "edge ratio" is and how it should be applied to a system?


Many Thanks....


----------



## julius (10 September 2007)

BingK,

System performance rarely has a linear response to changes in the inputs, hence the 'prickly' appearance of the optimization graphs posted earlier: a small shift in the underlying market state can be the straw that breaks the camel's back.

It might be possible to find out more if you could post the trading stats from the system.

From what I know, automated trading is fairly common within the institutions, but mostly as a tool to stagger orders to minimise market impact. I have read about a few hedge funds that primarily trade 'black box' systems, but no specifics unfortunately.


----------



## Temjin (10 September 2007)

Chorlton said:


> Hello All,
> 
> After reading through the recent posts, can somebody explain what "edge ratio" is and how it should be applied to a system?
> 
> ...




Curtis Faith introduced the "Edge Ratio" quite recently, when he posted the idea and formula on his own company's software forum in Oct 2006. His latest book, "Way of the Turtle", also has a small section explaining the Edge Ratio and its application, with examples.

P.S: He initially called it the Excursion Ratio, but it was renamed the Edge Ratio in his book.

More information is available on this thread at the TradingBlox forum.

http://www.tradingblox.com/forum/viewtopic.php?t=3310

Here is the formula he used. 



> I developed a new ratio (new to me at least) I call the Excursion Ratio which is defined as:
> 
> Average Volatility-Adjusted Maximum Favorable Excursion
> -------------------------------------------------------------------
> ...




The most interesting aspect of this ratio is that his initial hypothesis was that, on average, a random entry should never have an edge of over 1.0 over a given period of time. His tests showed evidence that this is the case.
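The quoted formula is truncated above; a commonly cited form divides the average volatility-adjusted MFE by the average volatility-adjusted MAE over a fixed horizon, and the sketch below assumes that form (the ATR normalisation and horizon are my assumptions):

```python
# A hedged sketch of the e-ratio as commonly described: for each entry,
# measure the maximum favorable and maximum adverse excursion over a
# fixed horizon, normalise both by a volatility measure (ATR here),
# then take the ratio of the averages. The denominator of the quoted
# formula is truncated in the post, so this exact form is an assumption.

def e_ratio(entries, highs, lows, closes, atr, horizon=20):
    """entries: bar indices of (long) entries; atr: per-bar volatility."""
    mfes, maes = [], []
    for i in entries:
        window = range(i + 1, min(i + 1 + horizon, len(closes)))
        if len(window) == 0:
            continue
        entry = closes[i]
        mfe = max(highs[j] - entry for j in window)   # best move in our favour
        mae = max(entry - lows[j] for j in window)    # worst move against us
        mfes.append(max(mfe, 0.0) / atr[i])           # volatility-adjust
        maes.append(max(mae, 0.0) / atr[i])
    if not maes:
        return float('nan')
    avg_mae = sum(maes) / len(maes)
    avg_mfe = sum(mfes) / len(mfes)
    return avg_mfe / avg_mae if avg_mae > 0 else float('inf')
```

On this reading, a value near 1.0 means the entry gives no directional edge over that horizon, which matches the random-entry hypothesis described above.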



> This too confirmed my initial hypothesis that market prices after random entries would exhibit no tendency to move in any particular direction.




Definitely an interesting indicator to look at, and something I will spend more time on.

How much of an edge does one need to make a profit? I don't know if I can answer that question.

Though in recent threads/posts, there are claims that you can still make money (in theory) with a random entry and a random exit. Dr Van Tharp believed it is still possible to create a profitable system with a random entry, using well-defined exit and money management strategies.
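For illustration only, the mechanism behind that claim can be simulated: coin-flip entries on a series with upward drift, exited by a trailing stop. Every parameter below is arbitrary and the whole thing is a toy; it sketches the idea, it proves nothing about real markets:

```python
# An illustrative Monte Carlo of the random-entry idea: random entries
# on a drifting random walk, exited by a simple trailing stop. With a
# positive drift, an exit that cuts losses and lets winners run can
# harvest expectancy even from coin-flip entries. All parameters are
# arbitrary assumptions for the sketch.

import random

def simulate(n_bars=5000, drift=0.0005, vol=0.01, trail=0.05, seed=1):
    """Return the list of per-trade returns from one simulated run."""
    random.seed(seed)
    price = 100.0
    trades = []
    in_trade, entry, peak = False, 0.0, 0.0
    for _ in range(n_bars):
        price *= 1 + random.gauss(drift, vol)   # one bar of price action
        if not in_trade:
            if random.random() < 0.05:          # random entry, no analysis
                in_trade, entry, peak = True, price, price
        else:
            peak = max(peak, price)
            if price <= peak * (1 - trail):     # trailing stop exit
                trades.append(price / entry - 1)
                in_trade = False
    return trades

trades = simulate()
expectancy = sum(trades) / len(trades)          # average per-trade return
```

Re-running with `drift=0.0` is the interesting control: without the market's upward bias, the same random entries and exits have nothing to harvest.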


----------



## tech/a (10 September 2007)

> Though in recent threads/posts, there are claims that you can still make money (in theory) with a random entry and a random exit. Dr Van Tharp believed that it is still possible to create a profitable system with a random entry using well defined exit and money management strategies.




Now THAT I do believe is possible.


----------



## weird (10 September 2007)

Gary Smith - How I trade for a living,



> When you short, you are bucking the 200-year upward bias in the stock market. I don't like to bet against such
> odds. George Soros once told Victor Niederhoffer that he had lost more money on shorting stocks than on any
> other speculative activity. According to Niederhoffer, selling short stock is a ticket to the poorhouse. I'll
> leave shorting to the perennial bears. They seem to have some sadistic compulsion for failure anyway.




Random selection 'may' work, considering the upward bias of the market. 

If there were over 1000 bulls running up a hill, and a mate and I had a wager on the average number of our selected bulls to get to the top first, I would think that if I were given an additional selection criterion, whereby I could remove the lame and also the bulls running in the opposite direction... then I might have a slight advantage over his purely random choice.

To me, the success of random selection in a long-only system only reinforces the upward bias of the market... that is a very good thing to know, though looking over 200 years of the stock market can tell you that as well.

BTW, don't get hung up on TT as being 'the' example of a trend trading system. I personally was not over excited when I tested it on the ASX300 ... however this was not the constituent list it was originally designed for.


----------



## Nick Radge (10 September 2007)

> According to Niederhoffer, selling short stock is a ticket to the poorhouse




He should know...he blew up shorting S&P 500 puts.


----------



## tech/a (10 September 2007)

> BTW, don't get hung up on TT as being 'the' example of a trend trading system. I personally was not over excited when I tested it on the ASX300 ... however this was not the constituent list it was originally designed for.




Couldn't agree more; it's "AN" example of a long-term trend-following system.
Personally, I think as a system it's pretty simple.


----------



## howardbandy (10 September 2007)

Hi Julius --



julius said:


> From what I know automated trading is fairly common within the institutions, but mostly as a tool to stagger orders to minimise market impact.
> 
> I have read about a few of hedge funds that primarily trade 'black box' systems but no specifics unfortunately.




Not only do hedge funds use automated trading to stagger orders and hide the size of their orders, but you can use that software, too.
http://www.advisorpage.com/modules.php?name=News&file=article&sid=709

Hedge funds may trade automated systems that appear to be black boxes to us, but the details of those systems are well understood by them.

Thanks,
Howard


----------



## theasxgorilla (10 September 2007)

Temjin said:


> My "belief", yet untested, that *the premature exit of an entry will degrade the whole system because you never left your winning trade to run its course *because in "theory", they should have a higher probability of having a higher MFE.




There is a lot involved in this assumption. Many (like myself) who think in terms of 1R losses, multi-R wins, and 50/50 or lower win/loss percentages will say, "but it's necessary to let your winners run their course in order to achieve the positive expectancy of the system". Yes, from that point of view it is. But what if your system could hop off a winning trade and into another equal-probability winning trade? Maybe you could avoid large open-equity drawdowns? Maybe you reduce market exposure and Max DD for an acceptable decrease in another performance variable like CAGR?

I find that MAE/MFE analysis is best done when time-capped, e.g. measured daily out to say 100 days. Measuring the MAE vs MFE on something like QBE over the last five years probably isn't that useful, as provided you bought sometime after June 2002 it doesn't matter what entry method you used: the MAE/MFE ratio is going to be stupendously high and will skew the results.
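A sketch of that time-capped approach; the function name and shape are my own illustrative assumptions:

```python
# A sketch of time-capped MAE/MFE analysis: instead of one figure per
# trade, record the excursion profile day by day out to a fixed horizon,
# so one runaway multi-year trade cannot dominate the statistics.

def excursion_profile(entry_idx, highs, lows, closes, horizon=100):
    """Per-day running (MFE, MAE) from entry, capped at `horizon` days."""
    entry = closes[entry_idx]
    profile, mfe, mae = [], 0.0, 0.0
    for j in range(entry_idx + 1, min(entry_idx + 1 + horizon, len(closes))):
        mfe = max(mfe, highs[j] - entry)   # best excursion so far
        mae = max(mae, entry - lows[j])    # worst excursion so far
        profile.append((mfe, mae))
    return profile
```

Averaging these profiles across many entries gives the day-by-day edge picture described above, rather than a single ratio skewed by the longest winners.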

ASX.G


----------



## theasxgorilla (10 September 2007)

For those who have Amibroker and are interested, there is some code for computing the e-ratio at:

http://theasxgorilla.blogspot.com/2007/09/e-ratio-40w-moving-average-system-code.html

If anyone reviews this and finds errors please let me know as I've developed a winning system based on it and I'm in the final stages of re-mortgaging and going 98% LVR on my house to raise funds for trading...joking, but seriously, if you find errors please post a comment and I'll revise the code.


----------



## nizar (10 September 2007)

theasxgorilla said:


> If anyone reviews this and finds errors please let me know as I've developed a winning system based on it and I'm in the final stages of re-mortgaging and going 98% LVR on my house to raise funds for trading...joking,




LOL you almost gave me a heart attack!!


----------



## nizar (10 September 2007)

howardbandy said:


> Greetings --
> 
> Just a few comments to points made in the last few days.
> 
> ...




Hi Howard.

Firstly, thanks for contributing another top post; I especially liked the discussion on exits.
Points 2 and 3 on exits are something I'm going to start working on shortly.

Just one comment on what you said about trend-following systems (I've highlighted this section in your post).

Don't you think that if you have a good entry, like that used in the study by Blackstar (buying when a stock makes an all-time high), the system would still perform well in sideways or bearish periods? (Not outstanding, but perhaps beating the market?)

A stock making an all-time high in a bear market or sideways market is a rare find, but one that does so is likely to be a champion. Say in a bull market you have 50 stocks a year making all-time highs, and in a bear/sideways market you have only 3-4; if your entry is good enough to identify these winners, and you set your system to enable pyramiding, then there's a good chance you will find your portfolio full of large positions in a very few (champion) stocks.

The above is just my opinion; a lot of it still hasn't been tested.

Comments and further discussion very much appreciated.


----------



## julius (11 September 2007)

I should have been clearer: by 'black box' I was referring to purely mechanical systems.


----------



## tech/a (11 September 2007)

"Black box" means the mechanics of a system are undisclosed.
A purely mechanical system need not be a black box.


----------



## nizar (27 September 2007)

OK, I did a walk-forward analysis on my system and it performed well, as was expected given the bullish market conditions.

The initial period for testing and optimisation was 01-01-1998 until 31-12-2003 (6 years).

The 3.5-year period from 01-01-2004 until 30-06-2007 was left untouched (out-of-sample) until this morning, when the walk-forward analysis was conducted.

Now, the system is essentially a long-term trend-following system with an average holding time for winners of about 200 days (6-7 months). Note that several of the big winners usually last a year or longer.

To be able to compare the returns from my system to those of the market (XAO) on an annual basis, I told TradeSim to close out ALL TRADES at the end of the year (using the OR SetTriggerAtDate function as part of the ExitTrigger).

Now, while this gave me an idea of yearly performance, it also understates the results, because many of the winners are cut prematurely. This shows up when I test over 2-year blocks: the outperformance of the market is much more substantial.

These are the tests I did to try to increase system robustness:

*Delisted stocks are always included. In fact, excluding them results in slightly decreased performance!

*Run Monte Carlo simulations (20,000) on a year-by-year basis and in 2-year blocks, in both the in-sample and out-of-sample periods. This reduces start- and end-date biases.
 - In 2001, the average return was 16.16%, with 99.93% of portfolio tests profitable.
 - In 2002, the average return was 12.17%, with 99.97% of portfolio tests profitable.
 - In any 2-year block, the returns were always greater than the sum of the single years, with every portfolio profitable.

*Run the system over a period I like to call THE WORST: from 01-07-2001 until 01-03-2003, pretty much peak to trough of the closest thing we've had to a bear market in the last 15 years. The XAO was -18% over this period. My system on average gained 8%, with 82.4% of the portfolios in the green.

*Remove the best 3 and worst 3 trades from the trade database. The difference in results compared to when these are included is negligible.

*Model in worst-case slippage. Profitability drops by 40-50%. Is this acceptable?
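The outlier-removal check in the list could be sketched like this (function name, trade figures, and the trim count are illustrative):

```python
# A sketch of the outlier-sensitivity check: compare total return with
# and without the n best and n worst trades. A robust system's result
# should not hinge on a handful of outliers.

def outlier_sensitivity(trade_returns, n=3):
    """Total return with all trades, and with the n best and n worst removed."""
    full = sum(trade_returns)
    trimmed = sum(sorted(trade_returns)[n:-n])   # drop n worst and n best
    return full, trimmed
```

If `trimmed` collapses toward zero (or flips sign) while `full` looks healthy, the edge lives in a few lucky trades rather than in the system.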

Any further ideas on how to increase system robustness would be appreciated (before I get too excited!).

Thanks,
Nizar.


----------



## stevo (30 September 2007)

Nizar
One comment I would make, aside from noting a pretty thorough testing approach, is that closing out the trades prematurely on a long-term system introduces another exit criterion into the strategy.

I replicated the test period you used for some ideas I had recently, closing the trades as you did, as well as letting them run to conclusion. So buy orders were taken during the test period (2001-2002), but in one test the trades were closed at the end, whilst the other test let them run until the system exit criteria were met.

I found that the results, in this test and over this time period, improved slightly when trades were automatically closed at the end of the test period. The results from this approach would differ from actual trading and would depend on the end date chosen.

I attached the results from 2 tests. "ATR stop 2001 2002" closes trades at the end of the test period, whilst the other file doesn't. I guess the differences are not all that big, but my thoughts are that just closing the trades at the end of the test period, especially for long-term systems, will give a slightly distorted picture. With shorter-term systems this may not be an issue.

Also, if you step through 2-year periods, moving the start date forward 3 months at a time, the results will show the variability due to different start-up times. Start-up can be tough with a long-term system because sometimes you won't see any decent closed-trade profits for quite some time.
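That stepped-window idea could be sketched as follows (assumes month-start dates; day-of-month handling is deliberately simplified):

```python
# A sketch of stepped test windows: a fixed-length window moved forward
# three months at a time to expose start-date sensitivity. Dates, window
# length, and step size are illustrative.

from datetime import date

def rolling_windows(start, end, years=2, step_months=3):
    """Yield (window_start, window_end) pairs stepping forward in time."""
    windows = []
    s = start
    while True:
        e = date(s.year + years, s.month, s.day)   # window end, same day-of-month
        if e > end:
            break
        windows.append((s, e))
        m = s.month + step_months                  # advance the start date
        s = date(s.year + (m - 1) // 12, (m - 1) % 12 + 1, s.day)
    return windows
```

Backtesting each window separately and comparing the spread of results gives the start-date variability described above.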

regards
stevo


----------



## nizar (30 September 2007)

stevo said:


> Nizar
> One comment that I would make, aside from a pretty thorough testing approach, is that closing out the trades prematurely on a long term system is introducing another exit criteria into the strategy.
> 
> I replicated the test period you used for some ideas that I had recently, closing the trades as you did, as well as letting them run to conclusion. So buy orders were taken during the test period (2001-2002), but in one test the trades were closed at the end, whilst with the other test let them run until the system exit criteria were met.
> ...




Thank you for your thoughts, Stevo.


----------

