# Monte Carlo Simulation of trades in backtesting a la Van Tharp



## aarbee (29 January 2009)


I’ve been reading Van Tharp’s latest offering, “Definitive Guide to Position Sizing”, and revisited the concepts of expectancy, R-multiples etc. What I found new was the idea of performing Monte Carlo analysis on the trades that come up in backtesting, as a tool to really understand the probability of profits, drawdowns etc. He recommends calculating the R-multiple for every trade in the backtest, working out the expectancy and the standard deviation of R-multiples (to ascertain the variability of results), and then calculating the t-score, or what he calls the System Quality Number (SQN).
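As a sketch of that arithmetic (the R-multiple sample below is invented; the SQN formula used is Tharp's published one, square root of the number of trades times expectancy divided by the standard deviation of R):

```python
import math
import statistics

def system_quality_number(r_multiples):
    """Expectancy, variability and Van Tharp's SQN from a list of R-multiples.

    SQN = sqrt(N) * mean(R) / stdev(R), i.e. a t-score of the mean R-multiple.
    """
    n = len(r_multiples)
    expectancy = statistics.mean(r_multiples)    # average R earned per trade
    variability = statistics.stdev(r_multiples)  # sample std dev of R
    sqn = math.sqrt(n) * expectancy / variability
    return expectancy, variability, sqn

# Example: 100 hypothetical backtest trades expressed as R-multiples
trades = [2.0, -1.0, 3.5, -1.0, 0.5] * 20
e, sd, sqn = system_quality_number(trades)
print(f"Expectancy {e:.2f}R, StdDev {sd:.2f}R, SQN {sqn:.2f}")
```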

On this subject, I also went through the eBook “Trading Strategies – Using computer simulation to maximize profits and control risk” by Larry Sanders, which addresses probability, the marble game and Monte Carlo simulation. Larry Sanders also designed the program TradeSim, which he sells at www.tradelabstrategies.com; the book can be downloaded at this site for free. It also talks about Monte Carlo analysis of trades.

My question is to all the mechanical traders on the forum: have any of you actually used this type of Monte Carlo analysis to work out the expected future behaviour of your systems in terms of probabilities, and if so, what program did you use for the Monte Carlo analysis? I would also really appreciate some feedback on the use of Trade Lab Strategies’ TradeSim program (this is not Compuvision’s TradeSim, which most are familiar with). I am aware that Compuvision’s TradeSim can perform Monte Carlo simulation to cover the situation in backtesting when there are more triggers than can be taken on a particular day.

Cheers,


----------



## MS+Tradesim (29 January 2009)

aarbee said:


> My question is to all the mechanical traders on the forum: have any of you actually used this type of Monte Carlo analysis to work out the expected future behaviour of your systems in terms of probabilities, and if so, what program did you use for the Monte Carlo analysis? I would also really appreciate some feedback on the use of Trade Lab Strategies’ TradeSim program (this is not Compuvision’s TradeSim, which most are familiar with). I am aware that Compuvision’s TradeSim can perform Monte Carlo simulation to cover the situation in backtesting when there are more triggers than can be taken on a particular day.




I'm not sure what the question is here. I do heaps of MC analysis. I use Compuvision's Tradesim. Back when I was first looking at all this, I remember looking at TradeLab's Tradesim and rejecting it, but I can't remember why; I think it was something to do with the way it interacted with Metastock.


----------



## aarbee (29 January 2009)

MS+Tradesim said:


> I'm not sure what the question is here. I do heaps of MC analysis. I use Compuvision's Tradesim. Back when I was first looking at all this I remember looking at TradeLab's Tradesim and rejecting it, but I can't remember why - think it was something to do with the way it interacted with Metastock.




TradeLab's TradeSim is not a substitute for Compuvision's Tradesim. They are two different beasts.

Compuvision’s TradeSim:

Metastock generates a trade database comprising all trades that trigger on every day of the backtest period. This database is used by Tradesim to run a simulation.

In real-time trading, if your system gives 10 triggers today and you have equity to take only 2 trades, you have to disregard the other 8. Ideally, one should pick those 2 trades at random from the 10 triggers available. In MC backtesting with TradeSim, the simulator does 1000s of passes, picking trades at random from the available triggers, and spits out results showing the max/min/average of profits, drawdowns etc. If all the daily triggers throughout the backtest period could actually be taken, TradeSim does not use the MC tester at all.
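That random selection among same-day triggers amounts to something like this (an illustrative sketch, not TradeSim's actual code; the function and symbol names are hypothetical):

```python
import random

def pick_trades(triggers, capacity, rng=random):
    """Randomly choose which of today's triggers to take when equity
    only supports `capacity` positions; the rest are skipped."""
    if len(triggers) <= capacity:
        return list(triggers)  # enough funds for everything, no lottery needed
    return rng.sample(triggers, capacity)

# One day with 10 triggers but funds for only 2 positions
today = [f"STOCK{i}" for i in range(10)]
taken = pick_trades(today, 2)
print(taken)  # two randomly selected symbols
```

A Monte Carlo pass simply repeats this draw for every day of the backtest, and the whole backtest is then rerun thousands of times.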

TradeLab’s TradeSim:

I don’t think this simulator can do what is outlined above. My understanding is that one first gets the results of a backtest showing the profit/loss of the various trades, and TradeSim then performs a very detailed statistical analysis, using the Monte Carlo method to take these trades in 1000s of different combinations/orders and give you the results for analysis. This would give a trader a much more thorough idea of what to expect from the system in the future.

It is this type of MC analysis that I am seeking info on. Both the publications I mentioned in my previous post give details of this type of analysis and its importance.

Trust this clarifies.

Cheers,


----------



## tech/a (29 January 2009)

David Samborsky (Tradesim) has some pretty strong views on the way Tradesim handles Monte Carlo analysis and how Amibroker handles it.

He has a strong argument as to the poor nature of the Amibroker M/C testing.

I'll let him know about this thread and he may pop in to explain.


----------



## aarbee (29 January 2009)

tech/a said:


> David Samborsky (Tradesim) has some pretty strong views on the way Tradesim handles Monte Carlo analysis and how Amibroker handles it.
> 
> He has a strong argument as to the poor nature of the Amibroker M/C testing.
> 
> I'll let him know about this thread and he may pop in to explain.




Tech,
This is not about TradeSim vs Amibroker. 
This is about performing MC analysis on either actual past trades or trades generated by whichever backtester is in use. The backtest gives us one sample of trades - say 100 in number, stretching over months or years. We then calculate R-multiples for each of these trades and feed them into the MC simulator. The simulator randomly picks one R-multiple value from the sample and assumes it is the result of the first trade. For the next trade, it does the same thing and selects another trade from the population (it could even be the same trade). It keeps doing this 100 times, which forms one equity curve of R-multiples. The simulator can do 1000s of such runs, generating an equity curve for each. It then works out the results for drawdowns, profits, losing streaks, winning streaks etc and the statistical probabilities of each. This would be invaluable in getting an idea of what can be expected from the system in the future.
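The procedure above - resampling R-multiples with replacement to build many hypothetical equity curves - can be sketched in a few lines (an illustrative sketch only; the R-multiple sample is made up, and a real MC package would report far more statistics):

```python
import random
import statistics

def simulate_equity_curves(r_multiples, trades_per_run=100, runs=1000, seed=42):
    """Resample R-multiples with replacement (the 'marble game'), build one
    equity curve per run, and record the maximum drawdown (in R) of each."""
    rng = random.Random(seed)
    max_drawdowns = []
    for _ in range(runs):
        equity, peak, max_dd = 0.0, 0.0, 0.0
        for _ in range(trades_per_run):
            equity += rng.choice(r_multiples)    # draw one trade, with replacement
            peak = max(peak, equity)
            max_dd = max(max_dd, peak - equity)  # drawdown measured in R
        max_drawdowns.append(max_dd)
    return max_drawdowns

dds = simulate_equity_curves([2.0, -1.0, 3.5, -1.0, 0.5])
print(f"median max drawdown: {statistics.median(dds):.1f}R")
```

The same loop can just as easily collect final equity, longest losing streak, and so on for each run.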
Pretty sure David Samborsky would have researched Monte Carlo analysis relating to trading systems and I would welcome his views. I would also like to hear from other experienced systems designers who might be employing this methodology. I do have a lot of respect for Van Tharp who lays great emphasis on this type of testing. 

Look forward to learning from others. 

Cheers,


----------



## tech/a (29 January 2009)

> This is not about TradeSim vs Amibroker.




Yeah, I know, and I wasn't trying to swing it that way.

David's a pretty knowledgeable guy--you just need to see Tradesim.
His views, I thought, would be of interest to all who use or wish to use Monte Carlo analysis in their trading.


----------



## TradeSim (29 January 2009)

aarbee said:


> Tech,
> This is not about TradeSim vs Amibroker.
> This is about performing MC analysis on either the actual past trades or trades generated by whichever backtester in use. The backtest gives us one sample of trades - say 100 in number stretching over months or years. We then calculate R-Multiples for each  of these trades. These are fed into the MC simulator. The MC simulator randomly picks one R-multiple value from the sample and assumes it's the result of the first trade. For the next trade, it does the same thing and selects another (it could even be the same trade) trade from the population. It keeps doing a 100 times. This forms one equity curve of R-multiples. The simulator can do 1000s of such runs generating an equity curve for each. It then works out the results of DDs, profits, losing streaks, winning streaks etc and works out the statistical probabilities for each. This would be invaluable in getting an idea of what can be expected from the system in the future.




The only issue in doing this is the big assumption that the distribution of R-multiples is constant throughout time, i.e. a stationary process as opposed to a non-stationary one. This may be true for a single-security system but may not apply to a portfolio trading system, which has trades from many different securities.

In TradeSim for Metastock we avoid doing this type of MC analysis for that reason. That is not to say that it is not useful to do it but there are some underlying assumptions that have to be made which may or may not reflect what happens in the real world.

Then there is the other issue of when you have the possibility of multiple entry triggers as in a portfolio trading system. Which back test do you use to base the distribution of R-multiples on ?? If it's just a single security back test then there is only one possible back test result. However if you are analysing a portfolio trading system that uses multiple entry triggers with equal weighting then there is not one unique outcome for a back test. There are many !! 

This is similar to the issue of trying to optimize a portfolio trading system with multiple entry triggers which have equal weighting. Which back test do you try and optimize for because it is possible to have many different back tests from the one system and all are equally valid historical back tests yielding different results and a different set of metrics. Usually some kludge has to be made by forcing a certain sequence of trades to yield only one outcome using some kind of signal or trade ranking criteria but then there is the issue of how one determines this ranking criteria. Some assumptions are made here as well which may or may not represent the reality of the situation.


----------



## aarbee (30 January 2009)

Hi David,

Many thanks for your comments. Great food for thought. Here are some of my thoughts on your comments



TradeSim said:


> The only issue in doing this is the big assumption that the distribution of R-multiples is constant throughout time ie a stationary process as opposed to a non-stationary process. This may be true for a single security system but may not apply to a portfolio trading system which has trades from many different securities.



I am not sure what you mean by the above. The distribution of R-multiples would vary over different phases of the market, and it would vary for a single-security system as well as for a multiple-securities system.



TradeSim said:


> In TradeSim for Metastock we avoid doing this type of MC analysis for that reason. That is not to say that it is not useful to do it but there are some underlying assumptions that have to be made which may or may not reflect what happens in the real world.



There certainly would be assumptions that have to be made and recognized while interpreting the results. I suppose this would be a statistical analysis which would appeal to a trader who has a belief that trading is all about getting probability in his favour and takes the effort to understand and analyse it within the limitations of the assumptions made in the first place.



TradeSim said:


> Then there is the other issue of when you have the possibility of multiple entry triggers as in a portfolio trading system. Which back test do you use to base the distribution of R-multiples on ?? If it's just a single security back test then there is only one possible back test result. However if you are analysing a portfolio trading system that uses multiple entry triggers with equal weighting then there is not one unique outcome for a back test. There are many !!



In the case of multiple triggers, would it not be useful to take the entire population of qualifying trades and then conduct the MC in the manner described in my previous post, in order to derive the statistics required? After all, every trade in the population qualified, and any one of them could have been taken. In this type of MC analysis, we are not working on the exact sequence of trades that would have been taken in real trading. It is more akin to the marble game, where each marble is pulled for a trade and replaced in the bag before another is pulled.



TradeSim said:


> This is similar to the issue of trying to optimize a portfolio trading system with multiple entry triggers which have equal weighting. Which back test do you try and optimize for because it is possible to have many different back tests from the one system and all are equally valid historical back tests yielding different results and a different set of metrics. Usually some kludge has to be made by forcing a certain sequence of trades to yield only one outcome using some kind of signal or trade ranking criteria but then there is the issue of how one determines this ranking criteria. Some assumptions are made here as well which may or may not represent the reality of the situation.



In terms of optimizing a portfolio trading system with multiple entry triggers I personally rank them according to ATR volatility in systems which work better on high volatility phases of the market and on liquidity for others. There are other valid ways to rank too. I think doing the MC in the way outlined in my previous post is even more important in such a case. 

I would very much like your comments on above. 

Cheers


----------



## julius (30 January 2009)

If you're simulating based on in-sample data, what information does monte-carlo simulation provide about out-of-sample performance ?


----------



## aarbee (1 February 2009)

julius said:


> If you're simulating based on in-sample data, what information does monte-carlo simulation provide about out-of-sample performance ?




There are no guarantees about what to expect with out-of-sample data. However, with statistical MC analysis on either backtested data or historical trade data, the following should come to light, enabling better decision making on position sizing (including determination of risk per trade), on deciding that the system is broken and ceasing to trade it, etc.:


- Longest expected winning and losing streaks
- Highest expected drawdown
- Lowest expected equity

The above can be expressed as probabilities - for example, what is the highest DD at the 1st, 10th, 50th or 90th percentile level? I am still grappling with statistical probabilities, but all this does appear to be very useful information for objective decision making.
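Reading off those percentile levels is straightforward once the Monte Carlo runs are done; a minimal sketch (the drawdown figures below are randomly generated placeholders, standing in for the per-run maximum drawdowns a simulator would produce):

```python
import random

def percentile(values, p):
    """p-th percentile (0-100) of a sample, by nearest rank on the sorted data."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

# Hypothetical max-drawdown figures (in R) from 1000 Monte Carlo runs
rng = random.Random(7)
drawdowns = [abs(rng.gauss(8, 3)) for _ in range(1000)]
for p in (1, 10, 50, 90):
    print(f"P{p} max drawdown: {percentile(drawdowns, p):.1f}R")
```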

Considering the lack of responses to this thread, I can only surmise that this type of trading system analysis is not a methodology commonly used by system designers here. The purpose of initiating this thread was to learn from others' knowledge and experience. 

Cheers


----------



## TradeSim (1 February 2009)

aarbee said:


> Hi David,
> 
> Many thanks for your comments. Great food for thought. Here are some of my thoughts on your comments
> 
> ...




In the context of a trading system, and using marbles as an example, a stationary random process would be the situation where you draw a marble out of a hat, put it back, and draw again from the same hat at a later time. Each time, the same hat has the same statistical properties. In a non-stationary process, each time you draw a marble it is from a different hat, each with a different set of marbles and hence different statistical properties.

Assuming a stationary random process for a portfolio trading system might be making a rather bold assumption. Also the issue is further complicated by the fact that the process may not be completely random due to an underlying trend etc and so the data contains a deterministic bias which also varies with time.


----------



## TradeSim (1 February 2009)

julius said:


> If you're simulating based on in-sample data, what information does monte-carlo simulation provide about out-of-sample performance ?




Typically you would optimize a single-pass backtested trading system on in-sample data and then test it on out-of-sample data to see if the optimization was correct.

Monte Carlo analysis gives you more insight into the variance of a trading system than trying to optimize it using a single deterministic pass does. Essentially the two types of analysis are mutually exclusive, because it is almost impossible to optimize a trading system which has variance. For example, if your trading system has some sort of random selection algorithm for the entry criteria, then it is almost impossible to optimize it.

But as long as the statistical analysis from a thorough Monte Carlo analysis yields a robust trading system then it is not necessary to optimize anything as the worst case performance metrics will be the guiding factor on whether or not the trading system meets your requirements.


----------



## aarbee (1 February 2009)

TradeSim said:


> In the context of a trading system, and using marbles as an example, a stationary random process would be the situation where you draw a marble out of a hat, put it back, and draw again from the same hat at a later time. Each time, the same hat has the same statistical properties. In a non-stationary process, each time you draw a marble it is from a different hat, each with a different set of marbles and hence different statistical properties.
> 
> Assuming a stationary random process for a portfolio trading system might be making a rather bold assumption. Also the issue is further complicated by the fact that the process may not be completely random due to an underlying trend etc, and so the data contains a deterministic bias which also varies with time.




Thanks for clarifying. A very valid point. I suppose this is what Howard Bandy refers to in his "Quantitative Trading Systems" when he says that in walk-forward testing, "the time period is shortened in the hope that the market and the trading system remain in synchronization throughout - remain stationary, in statistical terms".

Can't say I am in full agreement with your view that single-path backtesting is irrelevant; however, be that as it may, I would be very interested in hearing your views on chapter 20 ("Walk Forward Testing") of Howard Bandy's aforementioned book.

I for one find the experts' varied views on Monte Carlo testing and analysis quite confusing. Perhaps one way of looking at it is that there is just that much more for me to research and learn.

Cheers


----------



## pilbara (2 February 2009)

aarbee said:


> He recommends calculating the R-Multiple for every trade in the backtest and then working out the Expectancy, Standard Deviation of R-Multiples (to ascertain variability of results)



I've found it useful to look at the distribution of all trades, to find outliers and remove them altogether. By outliers I mean trades with a vastly larger return than the average (greater than 10 times the average). Most systems use stop-loss techniques, so there shouldn't be any single trades with very large losses; however, there will be some very large profits, and these can wildly affect the equity curve. I think it's better to study the system without the outliers and treat them as a "bonus".
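A sketch of that filter (the trade figures are invented; note one assumption: the "average" here is the median of the winning trades, since a single huge winner would inflate an arithmetic mean enough to hide itself from a 10x cutoff):

```python
import statistics

def strip_outliers(returns, multiple=10):
    """Split trades into 'kept' and 'bonus' lists: a bonus trade is one whose
    profit exceeds `multiple` times the median winning trade."""
    typical = statistics.median(r for r in returns if r > 0)
    kept = [r for r in returns if r <= multiple * typical]
    bonus = [r for r in returns if r > multiple * typical]
    return kept, bonus

# Invented trade results in R: one 45R windfall among ordinary trades
trades = [1.2, -0.8, 2.0, 0.5, 45.0, -1.0, 1.5]
kept, bonus = strip_outliers(trades)
print("study these:", kept, "| treat as bonus:", bonus)
```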


----------



## howardbandy (3 February 2009)

Greetings all --

The posting as originally entered is too long.  This is part 1.

Some of my thoughts on using Monte Carlo techniques with trading systems.

First, some background.

Monte Carlo analysis is the application of repeated random sampling done in order to learn the characteristics of the process being studied.

Monte Carlo analysis is particularly useful when closed-form solutions to the process are not available, or are too expensive to carry out.  Even in cases where a formula or algorithm can supply the information desired, Monte Carlo analysis can often still be used.

Here is an example of Monte Carlo analysis.  Assume that a student is unaware of the formula that relates the area of a circle to its diameter.  A Monte Carlo solution is to conceptually draw a square with sides each one unit in length on a graph, with the origin at the lower left corner.  The horizontal side goes from 0.0 to 1.0 along the x-axis and the vertical side goes from 0.0 to 1.0 along the y-axis.  Draw a circle with a diameter of one unit inside the square.  The center of the circle will be at coordinates 0.5, 0.5.  The Monte Carlo process to compute the area of the circle is to generate many random points inside the square (each point a pair of numbers, with the x-coordinate and y-coordinate each drawn from a uniform distribution between 0.0 and 0.999999), then count the number of those points that are also inside the circle.  Since the circle's area is pi/4 and the square's area is 1, the ratio of the number of points inside the circle to the number of points drawn gives an estimate of pi/4, and multiplying it by four gives an estimate of the constant pi.  Running this experiment several times, each using many random points, allows application of statistical analysis techniques to estimate the value of pi to within some probable uncertainty.  The process being studied in that example is stationary.  The relationship between the area of the circle and the area of the square is always the same.
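The circle experiment translates almost line for line into code; a minimal sketch:

```python
import random

def estimate_pi(points=100_000, seed=1):
    """Monte Carlo estimate of pi: scatter random points in the unit square
    and count those inside the inscribed circle of radius 0.5 at (0.5, 0.5).
    Circle area / square area = pi/4, so pi is about 4 * (hits / points)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(points):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:  # inside the circle
            hits += 1
    return 4 * hits / points

print(estimate_pi())  # close to 3.14159, within sampling error
```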

When we are developing trading systems, the ultimate question we are most often asking is "What is the future performance of this trading system?"  Recall that the measure of goodness of a trading system is your own personal (or corporate) choice.  Some people want highest compounded annual return with little regard for drawdown.  Others value systems that have low drawdown, or infrequent trading, or whatever else may be important.  But, in all cases, the goal is to have the trading system be profitable.  Assume that many of us are trading a single issue over a period of several years, and that the price per share at the end of that period is the same as it was at the beginning of the period, with significant price variations in between.  If we ignore frictional costs -- the bid - ask spread of the market maker and the commission of the broker -- we are playing a zero-sum game.  Those of us who make money are taking it from those who lose money.  If, instead of the final price being the same as the beginning price, the final price is higher, then the price has an upward bias and more money is made than lost.  This is when we all get to claim it was our cleverness that made us money.  If the final price is lower, the price has a downward bias and more money is lost than made.  

The price data for the period we are trading has two components.  One is the information contained in the data that represents the reason the price changes -- the signal component.  The other is everything we cannot identify profitably -- the noise component.  Note that there may be two (or more) signal components.  Say one is a long term trend in profitability of the company, and the price follows profitability.  Say the other is cyclic price behavior that goes through two complete cycles every month for some unknown but persistent reason.  In every financial price series, there is always the random price variation that is noise.  The historical price data that we see consists, in this case, of trend plus cycle plus noise.  Each component has a strength that can be measured.  If the signal is strong enough, relative to the noise, our trading system can identify the signal and issue buy and sell signals to us.  If our trading system has coded into it logic that only recognizes changes in trend, the cycle component is noise as seen by that system.  That is -- anything that a trading system does not identify itself, even though it may have strong signal characteristics when analyzed in other ways, is noise.

Over the recent decades, analysis of financial data has progressed from simple techniques applied by a few people in a few markets using proprietary tools to sophisticated techniques applied by many people in many markets using tools that are widely available at low cost.  The techniques used successfully by Richard Donchian from the 1930s, and Richard Dennis and William Eckhart in the 1980s, were simple.  To the extent that the markets they traded did not have strong trends, every profitable trade they made was at the expense of another trader.  Today, every person hoping to have a profitable career in trading learns about techniques that did work at one time.  They are well documented and are often included in the trading system examples when a trading system development platform is installed.  

Assume that a data series is studied over a given date range.  Using hindsight, we can determine the beginning price and the ending price.  Continuing with hindsight, we can develop a trading system that recognizes the signal component -- some characteristic about the data series that anticipates and signals profitable trades.  By trying many combinations of logic and parameter values, we will eventually find a system that is profitable for the date range analyzed.  If we are lucky or clever, the system recognizes the signal portion of the data.  Or, the system may have simply been fit to the noise.  The data that was used to develop the system is called the in-sample data.  If the system does recognize the signal and a few of us trade that system, while all the rest of the traders make random trades, those of us who trade the system will make a profit.  On average, the rest lose.  As more and more people join us trading the system, each of us earns a lower profit.  In order to continue trading profitably, we must be earlier to recognize the signal, or develop better signal recognition logic and trade different signals or lower strength signals.  By the time the date range we have studied has passed, most of the profit that could have been taken out of that price series using that system has been taken.  Perhaps the future data will continue to carry the same signal in the same strength and some traders will make profitable trades using their techniques, or perhaps that signal changes, or perhaps so many traders are watching that system that the per-trade profit does not cover frictional costs.  

Data that was not used during the development of the system is called out-of-sample data.  But -- important point -- testing the profitability of a trading system that was developed using recent data on older data is guaranteed to over-estimate the profitability of the trading system.  

Financial data is not only time-series data, but it is also non-stationary.  There are many reasons related to profitability of companies and cyclic behavior of economies to explain why the data is non-stationary.  But -- another important point -- every profitable trade made increases the degree to which the data is non-stationary.  There is very little reason to expect that future behavior and profitability of well known trading systems will be the same as past behavior.  

Thanks for listening,
Howard


----------



## howardbandy (3 February 2009)

And Part 2 --

Which brings me to several key points in trading systems development.  

1.  Use whatever data you want to develop your systems.  All of the data that is used to make decisions about the logic and operation of the system is in-sample data.  When the system developer -- that is, you and me -- is satisfied that the system might be profitable, that conclusion was reached after thorough and extensive manipulation of the trading logic until it fits the data.  The in-sample results are good -- they are Always good -- we do not stop fooling with the system until they are good.  In-sample results have no value in predicting the future performance of a trading system.  None!  It does not matter whether the in-sample run results in three trades, or 30, or 30,000.  In-sample results have no value in predicting the future performance of a trading system.  Statistics gathered from in-sample results have no relationship to statistics that will be gathered from trading.  None!

The follow-on point, which relates to Monte Carlo analysis, is that rearranging the in-sample trades gives no insight into the future characteristics of the system.  Yes, you can see the effect of taking the trades in different orders.  But why bother?  They are still in-sample results and still have no value.

The Only way to determine the future performance of a trading system is to use it on data that it has never seen before.  Data that has not been used to develop the system is out-of-sample data.  

2.  As a corollary to my comments above, the out-of-sample data Must be more recent than the in-sample data.  The results of using earlier out-of-sample data are almost guaranteed to be better than the results of using more recent out-of-sample data.  Consequently, techniques known as boot-strap or jack-knife out-of-sample testing are inappropriate for testing financial trading systems.

So, when is Monte Carlo analysis useful in trading system development?

1.  During trading system development.  It may be possible to test the robustness of the system by making small changes in the values of parameters.  This can be done by making a series of in-sample test runs, each run using the central value of the parameter (such as the length of a moving average) adjusted by a random amount.  The values of the parameters can be chosen using Monte Carlo methods.  Note that this does not guarantee that the system that works with a wide range of values over the in-sample period will be profitable out-of-sample, but it does help discard candidate systems that are unstable due to selection of specific parameter values.

Note that this technique is not appropriate for all parameters.  For example, a parameter may take on a limited set of values, each of which selects a specific logic.  Such parameters, associated with what are sometimes called state variables, are only meaningful for a limited set of values.
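Generating those randomly adjusted parameter values might look like this (a hypothetical helper, not from any package discussed here, assuming a moving-average length of 50 jittered by up to 10%):

```python
import random

def perturbed_params(base_length=50, jitter=0.10, runs=20, seed=3):
    """Values for a series of in-sample robustness runs: the base parameter
    (e.g. a moving-average length) adjusted by a random amount within
    +/- `jitter`, drawn uniformly.  A floor of 2 keeps lengths sensible."""
    rng = random.Random(seed)
    return [max(2, round(base_length * (1 + rng.uniform(-jitter, jitter))))
            for _ in range(runs)]

print(perturbed_params())  # lengths between 45 and 55
```

Each value would drive one full in-sample test run; wildly varying results across runs flag a system that is unstable with respect to that parameter.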

2.  During trading system development.  It may be possible to test the robustness of the system by making small changes in the data.  Adding a known amount of noise may help quantify the signal to noise ratio.  When done over many runs, it may reduce (smooth out) the individual noise components and help isolate the signal components.

3.  During trading system development. It may be possible to investigate the effect of having more opportunities to trade than resources to trade.  If the trading system has all of the following conditions:
A.  A large number of signals are generated at exactly the same time.  For example, using end-of-day data, 15 issues appear on the Buy list.
B.  The entry conditions are identical.  For example, all the issues are to be purchased at the market on the open.  If, instead, the entries are made off limit or stop orders, these can and should be resolved using intra-day data -- as they would be in real time trading.
C.  The number of Buys is greater than can be taken with the available funds.  For example, you only have enough money to buy 5 of the 15.

If your trading system development platform provides a method for breaking ties, use it.  For example, you may be able to calculate a reward-to-risk value for each of the potential trades.  Take those trades that offer the best ratio.  AmiBroker, for example, allows the developer to include logic to compute what is known as PositionScore.  Trades that are otherwise tied will be taken in order of PositionScore for as long as there are sufficient funds.
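The idea of score-based tie-breaking can be sketched generically (this is not AmiBroker's actual PositionScore implementation, just the concept; the symbols and reward-to-risk figures are invented):

```python
def rank_signals(candidates, capacity):
    """Break ties among same-day entry signals by a score -- here a
    hypothetical reward-to-risk estimate -- taking the best-scoring
    trades until the available funds run out."""
    ranked = sorted(candidates, key=lambda c: c["reward_risk"], reverse=True)
    return ranked[:capacity]

# Three same-day Buy signals, funds for only two positions
signals = [
    {"symbol": "AAA", "reward_risk": 2.1},
    {"symbol": "BBB", "reward_risk": 3.4},
    {"symbol": "CCC", "reward_risk": 1.7},
]
print([s["symbol"] for s in rank_signals(signals, 2)])  # ['BBB', 'AAA']
```

The key design choice is that the ranking rule is deterministic and part of the system logic, so the backtest has a single reproducible outcome.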

Alternatively, Monte Carlo methods allow you to test random selection of issues to trade.  My feeling is that very few traders will make a truly random selection of which issue to buy from the long list.  I recommend quantifying the selection process and incorporating it into the trading system logic.

4.  During trading system validation.  After the trading system has been developed using the in-sample data, it is tested on out-of-sample data.  Preferably there is exactly one test, followed by a decision to either trade the system or start over.  Every time the out-of-sample results are examined and any modification is made to the trading system based on those results, that previously out-of-sample data has become in-sample data.  It takes very few (often just one will do it) peeks at the out-of-sample results followed by trading system modification to contaminate the out-of-sampleness and destroy the predictive value of the out-of-sample analysis.  

One possibly valuable technique that will help you decide whether to trade a system or start over is a Monte Carlo analysis of the out-of-sample results.  The technique is a reordering of trades, followed by generation of the trade statistics and equity curves that would have resulted from each trade sequence.  What this provides is a range of results that might have been achieved.  Note that this technique cannot be applied to all trading systems without knowledge of how the system works.  If the logic of the system makes use of earlier results, such as equity curve analysis or the sequence of winning or losing trades, then rearranging the trades will produce trade sequences that could never have happened, and the analysis is misleading and not useful.  Also note that most of the results produced by the Monte Carlo analysis could also be developed from techniques of probability and statistics without using Monte Carlo -- runs of wins and losses, distribution of drawdown, and so forth.
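The reordering technique can be sketched in a few lines of Python; the per-trade returns below are invented, not anyone's actual system:

```python
import random

# Synthetic per-trade returns (fraction of equity per trade); numbers invented.
trades = [0.05, -0.02, 0.08, -0.03, 0.04, -0.01, 0.10, -0.05, 0.03, -0.02]

def equity_from(sequence, start=1.0):
    """Compound an equity curve from a sequence of fractional returns."""
    curve = [start]
    for r in sequence:
        curve.append(curve[-1] * (1.0 + r))
    return curve

def max_drawdown(curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = curve[0], 0.0
    for e in curve:
        peak = max(peak, e)
        worst = max(worst, (peak - e) / peak)
    return worst

# Reorder the same trades many times; each shuffle is one "might have been".
random.seed(42)
drawdowns = sorted(
    max_drawdown(equity_from(random.sample(trades, len(trades))))
    for _ in range(2000)
)
median_dd, worst_dd = drawdowns[len(drawdowns) // 2], drawdowns[-1]
print(median_dd, worst_dd)
```

Final equity is identical for every ordering; it is the path statistics, drawdown especially, that spread out, and that spread is the range of results the technique reveals.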

In summary --

Monte Carlo analysis can be useful in trading system development.  But only in those cases described in items 1, 2, 3, and 4 above.  

Rearranging in-sample trades has no value.  

Obtaining meaningful results from Monte Carlo techniques requires large numbers -- thousands -- of additional test runs.  

If you decide to apply Monte Carlo techniques, I recommend that they be applied sparingly, primarily to test robustness of a likely trading system as in numbers 1 and 2 above, not in the early development stages. 

On the other hand -----

What is tremendously useful in trading system development is automated walk-forward testing.  I believe that is the only way to answer the question "How can I gain confidence that my trading system will be profitable when traded?"  But that is the subject of another posting.

Thanks,
Howard


----------



## nizar (4 February 2009)

Hi everyone,

Wow.
Excellent response Howard.

If I could just add something.
I think Monte Carlo analysis in NO way whatsoever replaces out-of-sample testing.
I, and surely many other traders, would use both in systems testing and design.

Monte Carlo analysis, in the way that I use it (TradeSim), is not re-arranging the trades (which would be largely useless) or perturbing small parameters in the system rules (which is also important); rather, it is testing the different possible paths through the sample of trades, each of which is equally likely in real-time trading.

Monte Carlo analysis is only important/critical in systems or methods in which the trading capital is not sufficient to take every possible trade. Otherwise there would obviously be only one possible path of trades, as you can simply take each one.

And while I agree that in-sample testing has no predictive power for future real-time trading performance, in my opinion a system has to trade well in-sample for me to even bother testing it on out-of-sample data (walk-forward analysis).

PositionScore as defined by AmiBroker is something which can also be used in TradeSim, where it is known as trade ranking; you can choose to rank trades by any criteria that you want.

Nizar.


----------



## nizar (4 February 2009)

pilbara said:


> I've found it useful to look at the distribution of all trades to look for outliers and remove them altogether.  By outliers I mean trades with vastly larger return than the average (greater than 10 times the average).  Most systems use stop loss techniques so there shouldn't be any single trades with very large losses, however there will be some very large profits and these can wildly affect the equity curve.  I think it's better to study the system without the outliers and treat them as a "bonus".




I agree.
I usually remove the top 3-5 winners and losers and then review the performance.


----------



## TradeSim (4 February 2009)

howardbandy said:


> And Part 2 --
> 
> If your trading system development platform provides a method for breaking ties, use it.  For example, you may be able to calculate a reward-to-risk value for each of the potential trades.  Take those trades that offer the best ratio.  AmiBroker, for example, allows the developer to include logic to compute what is known as PositionScore.  Trades that are otherwise tied will be taken in order of PositionScore for as long as there are sufficient funds.
> 
> Alternatively, Monte Carlo methods allow you to test random selection of issues to trade.  My feeling is that very few traders will make a truly random selection of which issue to buy from the long list.  I recommend quantifying the selection process and incorporating it into the trading system logic.




I'm curious to know how you derive your trade selection criteria, and what tests you used to ascertain whether or not your trade selection criteria always result in the optimum selection of trades? It would be a tall leap of faith to assume that a few formulas would somehow reveal that one stock was a better bet than another, even if it was only based on intuition. Without thorough statistical analysis your assumption and hypothesis could be completely wrong and misleading.

The reason I say this is that I have just recently tested one of these one-pass portfolio trading systems with trade ranking, and just by overriding the ranking and randomly selecting trades from multiple entry triggers I was able to generate a better outcome than the fixed selection based on a certain ranking criterion. Food for thought anyway

With TradeSim for Metastock, when you run enough simulations in a Monte Carlo analysis it will most likely cover the instance of the trade selection that was forced by the ranking criteria. However, this trade selection will be one of many (tens of thousands, in fact) and most likely won't be the optimal choice. It is possible that a completely random selection of trades could be a much better trade selection criterion than your ranking, which may be counterintuitive, but that is why it needs to be tested and investigated properly, which many people don't actually do.

Regards
David


----------



## howardbandy (5 February 2009)

Hi David --

I may be misunderstanding your question.  If so, correct me.  But I think these comments might help clarify.

The logic of the trading system is completely determined by the developer.

If the trading system creates a situation where all of the following conditions hold, then Monte Carlo simulation can be helpful:
A.  A lot of trades are signaled at exactly the same time.  For example, after the close of trading when using end-of-day data.
B.  The signals are for entry with exactly the same circumstances.  For example, at the next day's open.
C.  There are more signals than funds will allow to be taken.
This is just a repeat of the conditions I stated in my postings above.

If I have developed a trading system where this happens, then I will be faced with a decision as to which trades to take.  Say there are 15 buy signals and I have funds only for 5.  Which five do I take?  Do I rank them in some way or do I use a random selection process?

If I actually did use a random process, then Monte Carlo simulation would help me by showing me the distribution of results that might have occurred.  If I had a choice of 5 of 15 once a week for a year, I would be drawing random numbers to select which 5 each week for 52 times a year, and I would see an equity curve (and associated trade statistics) for each combination the Monte Carlo picked.  That would be helpful, and it might convince me that I should use a random method to select which 5 of 15.
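That weekly 5-of-15 drawing can be sketched directly. In the Python sketch below, the per-position outcomes are invented coin-flip returns, purely to show the sampling mechanics:

```python
import random

random.seed(1)

N_SIGNALS, N_TAKEN, N_WEEKS = 15, 5, 52

def run_one_year():
    """One Monte Carlo path: each week, randomly take 5 of 15 signals.
    Outcomes are invented (+2% or -1%, equally likely) just to produce
    an equity curve; they stand in for whatever the system would earn."""
    equity = 1.0
    for _ in range(N_WEEKS):
        chosen = random.sample(range(N_SIGNALS), N_TAKEN)  # which 5 of 15
        for _ in chosen:
            equity *= 1.02 if random.random() < 0.5 else 0.99
    return equity

# Each call is one possible year; many calls give the distribution of results.
finals = sorted(run_one_year() for _ in range(500))
low, median, high = finals[0], finals[len(finals) // 2], finals[-1]
print(low, median, high)
```

The spread between `low` and `high` is the information the simulation adds: it shows how much of the result depends on which 5 happened to be drawn.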

My point is that I, personally, do not use random selection.  I rank them.  What I want to do is create a trading system where that set of three criteria never happens -- where I never need to make random choices.  So what I should be doing is incorporating my ranking method into the trading system explicitly so the 15 signals generated are in order of preference according to my selection criteria.  That is, according to what is best when measured using my objective function.  If I have done that, there is then no need for a Monte Carlo analysis because all of the trades -- all 5 for all 52 weeks -- will have been determined by the trading system.

My point is that Monte Carlo analysis is helpful when there is some random component of the trading system.  If there is no random component, then there is no need for Monte Carlo analysis.  Further, applying Monte Carlo analysis to a system that is fully determined by its own logic adds no useful information, and may be distracting.

Echoing Nizar's comments -- whatever is done to study alternative in-sample results in no way reduces the need for rigorous out-of-sample testing.

Thanks for listening, 
Howard


----------



## aarbee (5 February 2009)

Hi Howard,

Many thanks for your detailed replies. I completely concur with your view on ranking of  trades in backtesting thus obviating the need for Monte Carlo and have been using the PositionScore to rank the trades.

Apropos of Monte Carlo analysis on the trade database: in your opinion, is there any value at all vis-à-vis stress testing the trading system? I am fully aware of your views on WFO etc. My next mission is to fully understand the automatic WFO methodology so as to use it effectively with my Objective Function in testing trading systems.

Thanks again for responding. I am sure most members of the forum would find your responses of immense benefit.

Cheers,


----------



## TradeSim (6 February 2009)

aarbee said:


> Hi Howard,
> 
> Many thanks for your detailed replies. I completely concur with your view on ranking of  trades in backtesting thus obviating the need for Monte Carlo and have been using the PositionScore to rank the trades.
> 
> ...




But why rank the trades if tossing a pair of dice and then selecting the trades according to the outcome could provide a better outcome for the trading system as a whole?

What is it about your trade ranking criteria that makes you think you are always selecting the best trades, and what tests did you use to verify this?

I'm not saying that trade ranking is a bad idea, but blindly accepting a particular ranking strategy as some sort of panacea for developing an optimum trading system may be a bit short-sighted without some sort of verification.


----------



## aarbee (6 February 2009)

TradeSim said:


> But why rank the trades if tossing a pair of die and then selecting the trades according to the outcome of the die could provide a better outcome for the trading system as a whole ??
> 
> What is it as part of your trade ranking criteria that makes you think that you are always selecting the best trades and what tests did you use to verify this ??
> 
> I'm not saying that trade ranking is not a bad idea but blindly accepting a particular ranking strategy as some sort of panacea for developing an optimum trading system may be a bit short sighted without some sort of verification.




I  refer you back to post #28 onwards in the following thread:
https://www.aussiestockforums.com/forums/showthread.php?t=2060&page=2&highlight=hometrader

Cheers,


----------



## tech/a (6 February 2009)

My understanding of Monte Carlo analysis is its use in combining as many variables applied to a data set as possible, to then find, with as much confidence as possible, whether any combination of variables will give a positive result.

Now I'm taking this beyond financial data and those variables WE set in our system. From what I understand, the variables which we set in a system fall way short of the POSSIBLE variables that can be applied to any data set.
As such, Monte Carlo analysis the way we use it evidently falls way short of a definitive result.

Which then brings us to the view of ranking.
All well and good today, but as not all possible variables are present in the test, it is highly likely that a test in a week or so will give an entirely different set of rankings.

If all possible variables were being viewed then the ranking would be more likely to continue.

Let's say I was testing the deflection in steel.
My data only gave me variables related to heat applied to the steel.
I could also use live or dead load, but I don't have that information available for running this Monte Carlo test on the steel.

Monte Carlo ranks my results and I find an optimum temperature.
I'm sure you can see that these results would vary if I then introduced other variables which are present but not used in my analysis. Even just with temperature.

So without ALL variables present, ranking seems pointless?


----------



## weird (6 February 2009)

In terms of the original question, it's already been covered by David.

To discuss the other topics,

Single path ranking lends itself to too much over-optimization of backtesting results.

While I respect the scientific or statistical use of out-of-sample data to confirm or validate in-sample results, I wonder if this is as valid for people with trading systems who test again and again and again on the 'same' static in- and out-of-sample database, until they find something that agrees between both.  Perhaps a smooth equity curve over both periods would suffice?  New data, such as actually trading, even better?

Personally I think the logic of performing out-of-sample testing to confirm in-sample testing, on the same static database of responses, again and again until something agrees between both, is flawed ... but I would like to be proven wrong.

After one run on out of sample, that data set then becomes in sample. I hope the first run is a good one, unless you have enough other out of sample periods - and not the same - to further test on.


----------



## Nick Radge (6 February 2009)

I agree Dave. It's much more productive to understand how to gain a positive expectancy regardless of the data being tested. In sample/out of sample is just an added 'confidence' booster. I think an innate appreciation for positive expectancy and probability theory will do a lot more for long term survival.


----------



## aarbee (7 February 2009)

Nick Radge said:


> I agree Dave. It's much more productive to understand how to gain a positive expectancy regardless of the data being tested. In sample/out of sample is just an added 'confidence' booster. I think an innate appreciation for positive expectancy and probability theory will do a lot more for long term survival.




Nick and Tech/a,

Thanks for your posts. Putting the issue of ranking and in/out of sample testing aside, I would like your opinion on the validity or usefulness of conducting a MC analysis on the R-multiples dataset obtained from backtesting towards gaining a knowledge of higher likelihood of system behaviour in the future. Again, I am referring to the methodology put forth by Van Tharp (Definitive Guide to Position Sizing) and Larry Sanders (tradelabstrategies.com) ebook.

Best regards


----------



## Nick Radge (7 February 2009)

aarbee,
Yes, Monte Carlo simulations are necessary. I'm not sure how Van does it as I have not seen his book, but a single test run will leave you in the dark as to where that run stands within a series of runs. We don't know if its at the lower side of the range, which could mean the system is incorrectly discarded, or whether the run is at the top end of the range, meaning one has high expectations that may never be realized.


----------



## bingk6 (7 February 2009)

aarbee said:


> Nick and Tech/a,
> 
> Thanks for your posts. Putting the issue of ranking and in/out of sample testing aside, I would like your opinion on the validity or usefulness of conducting a MC analysis on the R-multiples dataset obtained from backtesting towards gaining a knowledge of higher likelihood of system behaviour in the future. Again, I am referring to the methodology put forth by Van Tharp (Definitive Guide to Position Sizing) and Larry Sanders (tradelabstrategies.com) ebook.
> 
> Best regards




aarbee,

Just a few simple thoughts.

If you ran just a single pass through your in-sample data and the results are acceptable, it does not necessarily mean you have a robust system. You should run the Monte Carlo simulations (using your in-sample data) so that you know the high and low bounds, and where your single-pass result ranks in the overall scope of things. As an example, if you ran a simulation consisting of 50,000 possible runs and 50% of those simulations showed losses while your single pass was profitable, would you trade that system? I suspect not, but the point is that you would not have known that if you had not conducted the Monte Carlo test in the first place.

Alternatively, you may find that your single-pass result ranks in the bottom 10%, in which case you could legitimately question the effectiveness of your ranking strategy, when 90% of random selections are superior to it. Once again, you would not have known that if you had not conducted the Monte Carlo test in the first place. Monte Carlo analysis tells you a great deal about your system.
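That percentile-rank check is simple to compute. In this sketch every equity figure is invented; the point is only the calculation itself:

```python
# Final equities from a set of Monte Carlo random trade selections,
# plus one "single pass" that used a fixed ranking rule. Numbers invented.
mc_results = [0.92, 0.98, 1.01, 1.05, 1.08, 1.11, 1.15, 1.20, 1.24, 1.30]
single_pass = 1.10

# Fraction of random-selection runs that the ranked selection beat.
percentile = sum(r < single_pass for r in mc_results) / len(mc_results)
print(percentile)  # → 0.5: the ranking did no better than the median shuffle
```

If `percentile` came out near 0.1, the ranking rule would deserve the scrutiny described above; near 0.9, it would be earning its keep.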

The ideal situation is for the high and low bounds to be quite close together across a large number of simulation runs, with all of them sufficiently profitable; then you can have added confidence in your system because it has shown its ability to perform regardless of the actual trades taken. As far as possible, it's best to take the "luck" factor out of the equation and to be able to say that the system has demonstrated its robustness by performing at an acceptable level no matter which combination of trades it takes. This confidence can only increase with a larger number of Monte Carlo runs (in excess of 20,000), a large number of trades being evaluated, across a large universe, and finally with the test being run over a sufficiently long period so that the system can be evaluated across a range of market conditions.

I should preface the previous paragraph by saying that the system must generate sufficient signals to provide sufficient alternative trade paths for the Monte Carlo process to evaluate. If there are insufficient signals then most of the runs will, in all likelihood, share large chunks of trades and you’ll end up with a small variance in results from high to low bound, which defeats the purpose of performing Monte Carlo analysis, and invalidates the test.

I should also mention that the Monte Carlo simulations should only take place using in-sample data. As Howard has said on many occasions, perform as much analysis on the in-sample data as you wish and, when you are completely satisfied, try it out on the out-of-sample data and see how it goes. That is the ultimate test.

I would put forward the proposition that a well-structured testing methodology, encompassing comprehensive Monte Carlo analysis of in-sample data, will in all probability produce better out-of-sample results than development without any Monte Carlo analysis.


----------



## nizar (7 February 2009)

bingk6 said:


> I should also mention the Monte Carlo simulations should only take place using insample data.




Why so?

Just as there are multiple possible paths of trades in the in-sample data, there are many equally likely paths that are possible in the out-of-sample data.

If Monte Carlo analysis is required for in-sample testing, I don't see why it should not be used in the walk-forward test.

I agree with everything else you said, and that's a good point you made about how Monte Carlo analysis is required so you know how your trade ranking affected the performance (upper, mid, or lower end).


----------



## TradeSim (7 February 2009)

nizar said:


> Why so?
> 
> Just as there are multiple possible paths of trades in the in-sample data, there are many equally likely paths that are possible in the out-of-sample data.
> 
> ...




Yes, this is correct. In this case you would be comparing one distribution of statistics from the in-sample data with another distribution from the out-of-sample data, and hoping that the two are in some sort of agreement. This is in contrast to comparing a single set of metrics from the in-sample data to a single set of metrics from the out-of-sample data. Comparing distributions from the two different sample spaces would provide a much more conclusive comparison.

Regards
David


----------



## aarbee (7 February 2009)

tech/a said:


> My understanding of Monte Carlo analysis is its use in combining as many variables applied to a data set as possible, to then find, with as much confidence as possible, whether any combination of variables will give a positive result.
> 
> Now I'm taking this beyond financial data and those variables WE set in our system. From what I understand, the variables which we set in a system fall way short of the POSSIBLE variables that can be applied to any data set.
> As such, Monte Carlo analysis the way we use it evidently falls way short of a definitive result.
> ...




Let’s look at the following two scenarios:

AA  My system has filters for turnover, volatility, trend strength, etc. that many times give more signals on daily scans than the available capital would allow me to take.

BB  This other system has filters for turnover, volatility, trend strength, and a couple of others; it triggers far fewer daily trades than AA, and very rarely are there more signals than available capital to take them.

In AA the MC as excellently done in TradeSim is useful for all the reasons that other posters have so clearly outlined. 
In BB the MC would be quite redundant because there would only be a single path.

Just because there is a single path in BB, does the backtesting result become any less reliable than AA's? There aren't any more filters in BB, just the same as AA but more stringent.
If your answer is "Yes", I would like to hear the reason. If the answer is "No", then what's the problem with ranking? It's just another filter.

As for ranking changing every week, I am not sure I understand. The figure for any filter in the system would change and would be optimally different on a weekly or monthly basis. That per se doesn't invalidate its use in screening the stocks.

Cheers,


----------



## aarbee (7 February 2009)

Nick Radge said:


> aarbee,
> Yes, Monte Carlo simulations are necessary. I'm not sure how Van does it as I have not seen his book, but a single test run will leave you in the dark as to where that run stands within a series of runs. We don't know if its at the lower side of the range, which could mean the system is incorrectly discarded, or whether the run is at the top end of the range, meaning one has high expectations that may never be realized.




Hi Nick,
You are quite correct in pointing out the deficiency of picking a single run when many different runs are possible.
Van Tharp's recommended way of doing MC analysis is on the following lines. This is also the same as outlined on Larry Sanders site and his ebook "Trading Strategies".
In backtesting, a trade list is produced. This list can be used, or a list of actual past trades substituted. Let's say the backtest has given us one sample of trades, say 100 in number, stretching over months or years. We then calculate the R-multiple for each of these trades and feed them into the MC simulator.

The MC simulator randomly picks one R-multiple value from the sample and assumes it is the result of the first trade. For the next trade it does the same thing, selecting another trade from the population (it could even be the same one). It keeps doing this 100 times; this forms one run and gives one equity curve of R-multiples. The simulator then does, say, 20,000 such runs, generating an equity curve for each. It then works out the distributions of DDs, profits, losing streaks, winning streaks etc. and the statistical probabilities for each.

This would be invaluable in getting an idea of what can happen if the system is traded, and would assist in understanding the system and working out position sizing for the trades. I know that there are certain assumptions implicit in the above way of doing MC analysis. The whole purpose of this thread was to initiate discussion on the validity or usefulness, limited or otherwise, of conducting this kind of analysis. I am quite well aware of the usefulness of MC the way Compuvision's TradeSim does it, and the efficient way it does so. It's the MC on R-multiples that I need clarity about.
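A minimal Python sketch of that R-multiple resampling, with an invented set of R-multiples and an assumed fixed 1%-of-equity risk per trade (both are illustrative choices, not anyone's recommendation):

```python
import random

# R-multiples from a hypothetical backtest: profit or loss on each trade
# divided by the initial risk (1R). These ten values are invented.
r_multiples = [2.0, -1.0, 0.5, -1.0, 3.0, -1.0, 1.5, -0.5, -1.0, 4.0]

N_TRADES = 100         # trades per simulated run, as in the example above
N_RUNS = 5_000         # Tharp-style analysis uses 20,000+; reduced for speed
RISK_PER_TRADE = 0.01  # position sizing: risk 1% of current equity per trade

random.seed(7)

def one_run():
    """Draw N_TRADES R-multiples with replacement; track equity and drawdown."""
    equity, peak, max_dd = 1.0, 1.0, 0.0
    for _ in range(N_TRADES):
        r = random.choice(r_multiples)       # sample with replacement
        equity *= 1.0 + r * RISK_PER_TRADE   # an R of +2 earns 2% of equity
        peak = max(peak, equity)
        max_dd = max(max_dd, (peak - equity) / peak)
    return equity, max_dd

runs = [one_run() for _ in range(N_RUNS)]
drawdowns = sorted(dd for _, dd in runs)

def percentile(sorted_vals, p):
    """Value below which roughly p percent of the sorted sample falls."""
    return sorted_vals[min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))]

# e.g. the drawdown you would expect to stay under half / 90% of the time:
dd50, dd90 = percentile(drawdowns, 50), percentile(drawdowns, 90)
print(dd50, dd90)
```

The same `runs` list can be mined for profit percentiles, losing streaks and so on; drawdown is shown because it is usually the figure that decides the position-sizing question.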

Cheers


----------



## howardbandy (8 February 2009)

weird said:


> While I respect the scientific or statistical use of out of sample data, to confirm or validate an in sample data, I wonder if this is as valid to people with trading systems that test again and again and again on the 'same' static in and out of sample database, until they find something that agrees between the both.  Perhaps a smooth equity curve over both periods would suffice ? New data, such as actually trading, even better ?
> 
> Personally I think the logic of performing out of sampling to confirm in sampling,  on the same static database of responses, again and again and again until something agrees between both, is flawed ... but would like to be proven wrong.
> 
> After one run on out of sample, that data set then becomes in sample. I hope the first run is a good one, unless you have enough other out of sample periods - and not the same - to further test on.




Hi Dave, and all --

I think we are agreeing.  One of the points I make is that adjusting the trading system based on information obtained by an analysis of the out-of-sample results changes the previously out-of-sample data into in-sample data.  

There are modeling and simulation techniques that use three sets of data.  The first, in-sample data, is used to develop the model.  Then the system is run using the second set of data -- call it the "tuning data set", for want of a common name.  Based on the results from the tuning set, the model is changed.  The procedure goes back and forth between those two sets of data, often in an automated way such that the values of the tunable parameters are saved from each iteration.  The results will start at some level, rise to a peak, then drop.  The procedure notes that the peak has been reached and remembers the parameter values from the peak point.  Using those values, it makes one more test, this time on the third set of data, called the out-of-sample or validation data set.  Based on one evaluation of the out-of-sample data, use the model or start over.  The idea is the same -- do whatever you want using in-sample data, but use out-of-sample data sparingly -- ideally only once.
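The window bookkeeping behind a rolling walk-forward procedure can be sketched as follows; the bar counts and window lengths are invented, and the optimization step itself is omitted:

```python
# Rolling walk-forward windows: optimize on each in-sample slice, then test
# once on the adjacent out-of-sample slice. All sizes are illustrative.
N_BARS, IS_LEN, OOS_LEN = 1000, 200, 50

windows = []
start = 0
while start + IS_LEN + OOS_LEN <= N_BARS:
    in_sample = (start, start + IS_LEN)                       # fit here
    out_sample = (start + IS_LEN, start + IS_LEN + OOS_LEN)   # test once here
    windows.append((in_sample, out_sample))
    start += OOS_LEN  # step forward by one out-of-sample slice

# Stitching the out-of-sample slices together gives a contiguous record of
# performance on data the system had never seen at fitting time.
oos_bars = sum(b - a for _, (a, b) in windows)
print(len(windows), oos_bars)
```

Each window contributes one in-sample-to-out-of-sample transition, which is exactly the kind of data point described in the following paragraph.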

My larger point is this:  Tomorrow, when I plan to place trades based on my trading system, I want as much confidence as possible that the signals will be accurate.  Tomorrow is out-of-sample.  If I do make a trade tomorrow, I only get to make that trade one time.  I cannot see what happens, adjust my system, scratch that trade, and re-trade tomorrow.  The best -- the only -- way I have of estimating the performance on tomorrow's data is to follow a procedure that lets me see many transitions from in-sample development to out-of-sample performance.  Each transition is one data point -- one sample of what might happen tomorrow.  Since there is no way I can peek into tomorrow's data, I should not be peeking into the out-of-sample data during the development of the system.  Said differently -- every time I peek at the out-of-sample results that I get during development of the system and then make a modification to the trading system, I am reducing the probability that the signals I will receive in real time will be accurate and profitable.

Thanks for listening,
Howard


----------



## howardbandy (8 February 2009)

Nick Radge said:


> I agree Dave. It's much more productive to understand how to gain a positive expectancy regardless of the data being tested. In sample/out of sample is just an added 'confidence' booster. I think an innate appreciation for positive expectancy and probability theory will do a lot more for long term survival.




Hi Nick, and all --

I completely agree that the trading system must have a positive expectancy.  

The point I am making is that the expectancy when measured over the in-sample results is certain to be good -- and I should ignore those -- they will always be good, and they have no value in estimating the future performance of the system.  

It is only the expectancy when measured over truly out-of-sample results that is important.

Thanks,
Howard


----------



## howardbandy (8 February 2009)

Greetings all --

I know I am repeating myself here, but it is important to your wealth that proper, rigorous, valid modeling and simulation techniques are applied to the construction of trading systems.

If the trading system always has the same result when applied to the same data, then Monte Carlo techniques have value in these ways:
1.  Add random noise to the data.  This tests the sensitivity of the system to the precise data that was used.
2.  Add random perturbations to those parameter values where it makes sense to perturb them.  This tests the robustness of the system relative to its parameters.  It helps identify best solutions that are at the top of peaks in profitability, when areas that are lower in profit but more stable, more like plateaus than peaks, would be safer.
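Both perturbation ideas can be sketched on a toy system. Everything below is invented for illustration: the price series, the noise level, and the moving-average-cross rule standing in for a real trading system:

```python
import random

random.seed(3)

# Invented uptrending price series with a small recurring bump.
prices = [100 + i * 0.5 + (3 if i % 7 == 0 else 0) for i in range(60)]

def sma_cross_profit(series, fast=5, slow=20):
    """Toy long-only moving-average-cross profit -- just something that can
    be recomputed under perturbation, not a real system."""
    def sma(xs, n, i):
        return sum(xs[i - n + 1:i + 1]) / n
    profit, entry = 0.0, None
    for i in range(slow, len(series)):
        if sma(series, fast, i) > sma(series, slow, i) and entry is None:
            entry = series[i]
        elif sma(series, fast, i) < sma(series, slow, i) and entry is not None:
            profit += series[i] - entry
            entry = None
    if entry is not None:
        profit += series[-1] - entry  # close any open position at the end
    return profit

base = sma_cross_profit(prices)

# 1. Noise test: jitter each price by up to +/-0.5% and re-run.
noise_results = []
for _ in range(200):
    noisy = [p * (1 + random.uniform(-0.005, 0.005)) for p in prices]
    noise_results.append(sma_cross_profit(noisy))

# 2. Parameter test: perturb the lookback lengths and re-run.
param_results = [sma_cross_profit(prices, fast=f, slow=s)
                 for f in (4, 5, 6) for s in (18, 20, 22)]

spread = max(noise_results) - min(noise_results)
print(base, spread)
```

A small `spread` relative to `base`, and parameter results that sit on a plateau rather than a lone peak, are the robustness signs the two tests are looking for.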

I know how hard it is to keep from peeking at the out-of-sample results and adjusting the system.  That is why I continue to recommend that the best way to avoid peeking is to automate the parameter selection and out-of-sample process through use of automated walk-forward testing.  

Automated walk-forward testing is far, far more valuable in developing robust and likely-to-be-profitable trading systems than Monte Carlo applied to in-sample results. 

At the completion of the walk-forward testing, then use Monte Carlo analysis of the out-of-sample results to develop estimates of performance that might be expected when the system is traded.

Thanks for listening,
Howard


----------



## Wysiwyg (8 February 2009)

howardbandy said:


> Greetings all --
> 
> I know I am repeating myself here, but it is important to your wealth that proper, rigorous, valid modeling and simulation techniques are applied to the construction of trading systems.




I have been following this thread, and thank you for providing clear, concise explanations of the correct application. It really is a waste of time complicating these things, but regardless we have returned to the correct way.


----------



## tech/a (8 February 2009)

aarbee said:


> Let’s look at the following two scenarios:
> 
> AA  My system has  filters for Turnover, volatility, trend strength, etc that many times gives more signals on daily scans than the available capital would allow me to take.
> 
> ...




Firstly, I'd be trying other bourses to get more results. I personally wouldn't feel comfortable with a single-run type method. My answer to the second part of your question falls within my answer to your next question.



> As for ranking changing every week, I am not sure I understand. The figure for any filter in the system would change and would be optimally different on weekly or monthly basis. That per se doesn't invalidate their use in screening the stocks.
> 
> Cheers,





My opinions are based more on my logic than mathematical argument.

A 20,000-run Monte Carlo gives me the results of a "what if" on 20,000 portfolios.
How do I choose the best "what if" going forward? If your answer is optimisation, then that is quite different to Monte Carlo. I have a suspicion this is where the arguments cross.

If we are ranking systems then yes.

As for ranking altering:
Longer-term methods don't often have large alterations in the data set, i.e. market conditions are more prolonged.
Shorter methods have a chance of seeing a wider variety of swings within a data set: 10,000 one-minute bars could show a great diversity of bull, bear and flat conditions,
whereas 10,000 daily bars are not likely to produce this.
This is, in my opinion, why most traders, myself included, find it difficult to find excellent short-term methods if only trading bullish!

So ranking is likely to alter with market diversity.
The more diverse the data tested, the less likely the swings in ranking.

Notwithstanding Howard's explanations above, which I agree with, both on Monte Carlo results and smoothness of peaks and troughs, through to walk-forward analysis.


----------



## TradeSim (8 February 2009)

howardbandy said:


> Greetings all --
> 
> Automated walk-forward testing is far, far more valuable in developing robust and likely-to-be-profitable trading systems than Monte Carlo applied to in-sample results.




But what happens if I don't rank trades and there is variance in the trading system results due to multiple entry triggers? How do you deal with that situation, and how would you optimize it then?
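The variance David describes can itself be measured with a Monte Carlo pass: on each day that more triggers fire than capital allows, pick a random subset, then repeat the whole backtest many times and look at the spread of final equities. A hedged sketch (all names and the equal-weighting choice are illustrative assumptions, not any vendor's actual algorithm):

```python
import random

def random_selection_spread(daily_signals, max_positions, n_runs=1_000):
    """daily_signals: list of days, each a list of per-trade returns for the
    triggers that fired that day.  Each run randomly chooses which triggers
    to take (up to max_positions) and equal-weights them; the spread of
    final equities across runs shows the trade-selection variance."""
    finals = []
    for _ in range(n_runs):
        equity = 1.0
        for signals in daily_signals:
            if not signals:
                continue
            taken = random.sample(signals, min(max_positions, len(signals)))
            equity *= 1.0 + sum(taken) / len(taken)
        finals.append(equity)
    finals.sort()
    return finals[0], finals[len(finals) // 2], finals[-1]  # worst, median, best
```

If the worst and best runs differ substantially, the system's backtest result depends heavily on which of the simultaneous triggers happened to be taken.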

Regards
David


----------



## pilbara (9 February 2009)

TradeSim said:


> But what happens if I don't rank trades and there is variance in the trading system results due to multiple entry triggers? How do you deal with that situation, and how would you optimize it then?



If the triggers are simultaneous, maybe make a basket of them all, each an equal fraction, with total size equal to a single stake. This would have higher brokerage costs, though.
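The equal-fraction basket idea can be sketched in a few lines (a hypothetical illustration; the function and its parameters are not from any tool in this thread). Note how the brokerage grows with the number of legs while the stake stays fixed:

```python
def basket_allocation(stake, trigger_returns, brokerage_per_trade):
    """Split one stake equally across all simultaneous triggers.
    Returns (net_profit, total_brokerage); brokerage scales with the
    number of legs, which is the extra cost mentioned above."""
    n = len(trigger_returns)
    per_leg = stake / n
    gross = sum(per_leg * r for r in trigger_returns)
    total_brokerage = brokerage_per_trade * n
    return gross - total_brokerage, total_brokerage
```

For example, splitting a $10,000 stake over three triggers incurs three lots of brokerage instead of one, even though total exposure is unchanged.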


----------



## abattia (29 December 2011)

aarbee said:


> ...For example, what is the highest DD at 1, 10, 50, 90 percentile level etc...




Which levels (1, 10, 50, 90, etc.) do you apply in your decision making, and how?


----------



## abattia (29 December 2011)

bingk6 said:


> ... The ideal situation is for the high and low bounds to be *quite close together *across a large number of simulation runs, and they are all sufficiently profitable, then you can have added confidence in your system because it has shown its ability to perform regardless of the actual trades taken...




I am trying to understand what sort of "confidence intervals" (or percentiles, or number of standard deviations, for example) others use for evaluating whether or not in-sample/out-of-sample data and MC sim results are "in agreement".

So, what do you mean above by "quite close together"?


----------



## abattia (29 December 2011)

For anyone interested, here's an interesting paper on MC sims and their use in system design/analysis...

http://www.tradingblox.com/Files/MC_resampling_Nbars.pdf


For anyone who gets the chance to read through it, do you have any feel for how general the following conclusions from the paper are, or any other comments?

a) "Thus it is recommended to use ... 10 million resampled daily returns [for convergence]." (page 15)
... so if your actual curve has, say, 150 points, then about 67,000 MC simulations would be needed to give confidence of convergence?

b) "... as the portfolio size is reduced, serial correlation in the equity curve is also reduced." (page 20)
... so, from the perspective of how serial correlation of returns affects MC results if not taken into account, the effect is smaller when dealing with a single instrument than with a portfolio of instruments? -> MC sims for estimating max DD etc. are better on single instruments than on baskets? .... hmmm, but are instruments like ES or FDAX more like single instruments or portfolios from the serial-correlation perspective?
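Conclusion (a) reduces to simple arithmetic: if each run resamples one daily return per point on the equity curve, the number of runs needed to accumulate the paper's suggested ~10 million resampled returns is just the quotient (a hypothetical helper, just to make the back-of-envelope step explicit):

```python
import math

def runs_for_convergence(n_curve_points, target_returns=10_000_000):
    """How many MC resampling runs are needed so the total number of
    resampled daily returns reaches the paper's suggested ~10 million."""
    return math.ceil(target_returns / n_curve_points)
```

With a 150-point curve this gives 66,667 runs, matching the "67,000 MC simulations" figure above.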


----------

