tech/a (No Ordinary Duck) | Joined 14 October 2004 | Posts 20,447 | Reactions 6,477
No, it doesn't. You can have many different portfolios between runs. There are often several trade signals on a given day, but due to limited system equity your system may only be able to take, say, 1 or 2 of those signals. You can change the portfolio by making the system select a different set of valid signals each run.
When you alter your decision rules ('making the system select a different...') as above, you are creating a variant algo with different deterministic parameters. Keep these new parameters, run it a billion times, and you get the same outcome each time for this fixed set of parameters. You have altered the algo. What you are doing is varying decision parameters and calling it Monte Carlo. It isn't. It is an examination of outcomes based on varying input parameters.
That's not what he's doing or saying.
Then what is he saying or doing if not that?
The technique referred to as Monte Carlo by tech/a and Alter Ego is a simple trade-substitution exercise.
There are no parameters as such being changed and no parameter hyper-surface.
It's simply the path-dependence of a system that signals multiple (buy) trades when there is insufficient money to take all of them.
At least that's what I understand. Whether it's valid or useful is another matter.
For a fixed path, which is the factual historical market realisation, what exactly changes to produce different portfolio outcomes between runs, if not some change to a decision-rule parameter?
The technique referred to as Monte Carlo by tech/a and Alter Ego is a simple trade-substitution exercise.
What changes is which trades, of the many valid signals, are taken on a given day. Say you get 10 trade signals on a given day but have funds for 2. The first run may take trades A & B, the next run may take trades C & D, and so on. Then, as C & D may have different trade durations than A & B, this alters the trading equity available on following days, so subsequent trades will be different.
I'm not calling what I do Monte Carlo.
But it does show a different path through the same data using the same rules, so it gives a different equity curve, different stats, etc. That gives a better picture of the system's performance than looking at just 1 of the possible paths.
And then you'd want to check it on a different data set as well (a different time period or a different market) to see if it still works. And then, if all goes well, test it in the real world.
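For what it's worth, the mechanism being described can be sketched in a few lines of Python. This is purely illustrative, with made-up signals and returns (not AmiBroker's internals): same data, same rules, and only the random choice of which valid signals to take changes between runs.

import random

# A fixed "historical" stream of signals: for each of 250 days, a list of
# hypothetical trades, each a (total_return, holding_days) pair. The stream
# is identical for every run; only the selection below varies.
_data = random.Random(42)
SIGNALS = [
    [(_data.gauss(0.02, 0.10), _data.randint(2, 15))
     for _ in range(_data.randint(0, 10))]
    for _ in range(250)
]

def run_once(seed, max_positions=2, start_equity=100_000.0):
    """One pass over the same data with the same rules; only which of each
    day's valid signals get taken is randomised (fixed position size, no
    compounding, as in the AFL listing quoted later in the thread)."""
    rng = random.Random(seed)
    size = start_equity / max_positions
    cash = start_equity
    open_trades = []                                # (exit_day, proceeds)
    for day, todays in enumerate(SIGNALS):
        for t in [t for t in open_trades if t[0] <= day]:
            open_trades.remove(t)                   # trade exits: book proceeds
            cash += t[1]
        free = max_positions - len(open_trades)
        for ret, dur in rng.sample(todays, min(free, len(todays))):
            cash -= size                            # open a randomly chosen trade
            open_trades.append((day + dur, size * (1 + ret)))
    return cash + sum(p for _, p in open_trades)    # value open trades at exit

# Five runs, five different portfolios, five different outcomes:
for seed in range(5):
    print(f"run {seed}: final equity {run_once(seed):,.0f}")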
Then what is he saying or doing if not that?
This introduction of randomness by adding an algorithm to the base code returns a best and worst case scenario. In actual trading one hopes the results lie somewhere within that range. Tomasz J. uses the "Random" algorithm in this formula: http://traders.com/Documentation/FEEDbk_Docs/2009/03/TradersTips.html#TT3
He's not altering the algo.
He's running (I presume) the same algo on countless trading possibilities.
// General-purpose MC part
HowManyMCSteps = 20000; // adjust this to change the number of MC tests
PositionScore = 100 * mtRandomA();
// this is the single line that causes random picking of signals
Step = Optimize( "Step", 1, 1, HowManyMCSteps, 1 );
// this is a dummy variable, not used below
// The trading system itself
// ( you may enter your own system below instead of the one from the article )
NumPos = 8; // maximum number of open positions
SetOption( "MaxOpenPositions", NumPos );
SetPositionSize( GetOption( "InitialEquity" ) / NumPos, spsValue );
// as in the article - no compounding of profits
// SetPositionSize( 100 / NumPos, spsPercentOfEquity );
// uncomment this for compounding of profits
// signals
s = Signal( 12, 26, 9 );
m = MACD( 12, 26 );
Buy = Cross( s, m );
Sell = Cross( m, s );
SetTradeDelays( 1, 1, 1, 1 ); // trade with a one-bar delay at the open price
BuyPrice = Open;
SellPrice = Open;
—Tomasz Janeczko
amibroker.com
There has to be some code added to the base code so the program can "choose" different stocks from the candidates on every pass. If it is not random, then it is defined, which is adding a new "condition" from which to choose from the candidates. Which is not Monte Carlo analysis.
David Samborsky of TradeSim has Monte Carlo analysis which is superior.
There has to be some code added to the base code so the program can "choose" different stocks from the candidates on every pass. If it is not random, then it is defined, which is adding a new "condition" from which to choose from the candidates.
Amibroker has PositionScore, which ranks the candidates. That, to me, is the same as adding that position-score condition to the buy conditions, but the difference has never been explained to me.
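As I understand it, the difference is that the score never adds or removes a buy condition; it only orders the already-valid signals when there are more of them than free position slots. A toy Python sketch of that idea (illustrative only, not AmiBroker's internals):

import random

def select(signals, free_slots, score):
    """Take the `free_slots` highest-scoring of the already-valid signals."""
    return sorted(signals, key=score, reverse=True)[:free_slots]

signals = ["A", "B", "C", "D", "E"]        # five valid buys, two free slots

# Deterministic score (e.g. a momentum rank): the same portfolio every run.
print(select(signals, 2, score=ord))       # always ['E', 'D']

# Random score (what PositionScore = 100 * mtRandomA() does in the AFL
# listing above): a different subset each run, which is what generates
# the different portfolio paths.
for _ in range(3):
    print(select(signals, 2, score=lambda s: random.random()))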
He's not altering the algo.
He's running (I presume) the same algo on countless trading possibilities.
---
Let's say I have 10 triggers on day 1, 20 on day 2, 30 on day 3, etc.
That's 10 x 19 x 29 combinations of trades for a portfolio opportunity, ad infinitum, with a big enough universe.
Then some will come off: a trade gets stopped out or exited, all at very different times, so the new trades will all be infinitely different as each portfolio trades its own path.
From the combination of a huge number of portfolios you'll get a scatter of the results within that universe over the period tested.
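A rough back-of-the-envelope count of those pools, in Python. The "k new positions per day" framing is a simplifying assumption (it ignores how staggered exits free up capital, which only multiplies the count further):

from math import comb

pools = [10, 20, 30]                  # valid signals on days 1, 2 and 3
for k in (1, 2):
    total = 1
    for n in pools:
        total *= comb(n, k)           # ways to pick k trades from a pool of n
    print(f"{k} new trade(s) per day: {total:,} distinct portfolios")
# 1 per day -> 6,000 paths; 2 per day -> 3,719,250 paths; staggered exits
# that release equity at different times multiply this further.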
---
Whether that then computes to similar results on a different data set, indeed on any data set, and of course going forward, is not known.
But to argue against it is to suggest that there are no patterns or occurrences, other than flukes, in any data set that will result in a quantifiable edge that can be expressed as an algo.
Which gets back to what I think your argument is, and that's the Efficient Market Hypothesis.
Which means this thread is going to run for a few years of 2-60 min posts.
1. Past performance conducted by backtest, based on methods which are pretty much data-mining in the guise of pattern recognition, will have virtually no relationship with the future. The class of strategies most frequently written of (longitudinal, time-series oriented, entirely price-based) is a total minefield of disaster for this effect. Just dig around the cemetery (the threads) and you'll find the bodies. New plots are being dug by the day.
2. So you'll only backtest those companies which have survived and that you expect to survive. That reduces the extent of bias in the backtest that might be conducted? This might be true with perfect foresight. Without any foresight, it is obviously laden with bias. The basis under which you choose to forecast survivorship will introduce new bias. You will have a very hard time understanding the impact of this.
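A toy simulation of that second point (all numbers hypothetical): backtesting only on stocks known, after the fact, to have survived flatters the results.

import random

rng = random.Random(0)
universe = []
for _ in range(1000):
    drift = rng.gauss(0.00, 0.03)                             # each stock's true edge
    path = [drift + rng.gauss(0, 0.20) for _ in range(10)]    # 10 annual returns
    # "survivor" = never down 80% cumulatively (a crude toy criterion)
    survived = all(sum(path[:y + 1]) > -0.80 for y in range(10))
    universe.append((path, survived))

def mean_annual(stocks):
    rets = [r for path, _ in stocks for r in path]
    return sum(rets) / len(rets)

survivors = [s for s in universe if s[1]]
print(f"full universe:  {mean_annual(universe):+.2%} per year")
print(f"survivors only: {mean_annual(survivors):+.2%} per year")
# The survivor-only "backtest" looks better than anything that was actually
# investable; the failures were removed with hindsight.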
I guess you'll have to jump in live/sim and learn these things over time. Others can jump in and explain the cost of that education and offer ways to circumvent it. For everyone who is still here posting, there would be around twenty or more who started at the same time who didn't make it. Of course, you'll be able to figure out who, of those, will still be around in three years and only look at what they have done in the past...
I guess the key message here is that your data will be imperfect. Bias is very important to understand and avoid. The short-cut you are proposing is pretty material, for all the thought given to concepts like probability of survival, etc. Taking it introduces all sorts of hindsight and style biases of the type which is hard to observe and very fraught ("oh, I would have known that this measure would lead to a better outcome 10 years ago..."). Backtests are just a tool; yet with the type of thing usually attempted, where rules are developed on the basis of backtests and applied algorithmically in a time-series fashion, the key thing you will hopefully learn is that the mortality rate is through the roof. There will be a subset who attempt it and succeed. Of those, you will have to discern which were genuinely skillful and which were the result of type-2 error (survived despite negative odds). Your probability of doing so, whilst it may feel rock solid without experience, will be closer to pure chance.
There is a reason why brokers produce programmable platforms and signals for customer use... I'll let you guess why. Proceed with caution. Good on you, though, for posting questions.
Not "perfect" foresight just "probable" foresight.
I think I have my answers. Thanks for the feedback
Not "perfect" foresight just "probable" foresight.
I think I have my answers. Thanks for the feedback
...surely one could pick some companies that one could reasonably expect to survive in future for as far back as they are being backtested...