Australian (ASX) Stock Market Forum

Amibroker FAQ

Nasdaq stocks. I seem to be getting better results with the Russell 3000, but still nowhere near as good as with my usual 10c to $2 range on the ASX. I guess I need to understand the pricing of stocks over there, as I may be picking up some 'wrong uns' if selecting an unfavorable AmiBroker index watchlist.

Have any of you guys encountered any such issues? And do you test your strategies across different markets to check the robustness of the system?

Perhaps it may have something to do with volatility. I know with the Flipper I always got better results on the small caps because they had farther to run... but honestly I can't fathom off the top of my head why the NQ would be much different...

Let us know when you figure it out; I'm curious as to why it could be...

By adding some filters, either volume or volatility, you may be able to narrow it down.
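Something along these lines might do as a starting point (the thresholds are placeholders, not tested values):

Code:
// rough liquidity and volatility filters
VolumeFilter     = MA( V * C, 20 ) > 500000;   // average daily dollar turnover
VolatilityFilter = ATR( 14 ) / C  > 0.03;      // average daily range of at least ~3% of price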

Good luck...
 

Cheers. I'm seeing some horrendous drawdowns on the Russell Universe. Acceptable returns but with system drawdowns of 70%-90%! No idea why.

The whole reason why I thought I'd try one of the US markets was because of the poor spreads on the ASX in the micro sector. I too like the small end. But when the maximum position size, as a percentage of the volume bar, is reduced to an acceptable level, e.g. 10% max, there just aren't enough buy signals on the ASX for me, leaving me with a not-so-great annual return. Basically I'm having an issue maximizing the employment of capital.

Which exchanges do you guys find are good ones to trade to work around this problem? TSX? FTSE?
 

I think the problem rests with the price filter which would need to be considerably higher for both the Nasdaq & Russell.
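For example, something like this, where the US cut-offs are just placeholder numbers to illustrate the idea:

Code:
// ASX universe
PriceFilterASX = C >= 0.10 AND C <= 2.00;

// US universe (Nasdaq / Russell) -- floor lifted well clear of the penny-stock zone
PriceFilterUS  = C >= 5.00 AND C <= 50.00;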
 
Greetings --

In the US markets, stocks that are very low in price -- $3.00 or less -- are usually troubled companies. Stocks that are less than $1.00 are "penny stocks" and are often used by manipulators. Neither should be bought and held (unless you know the chief financial officer really, really well). US low priced stocks are very volatile and usually quite illiquid. The illiquidity shows up when you want to exit your position -- just when you need liquidity most. Depending on both your broker and the availability of shares to borrow, they are usually very hard to short effectively. There are seldom listed options on low priced stocks. (The US does not have CFDs.) So they are volatile, illiquid, and difficult to hedge -- perfect for disreputable "pump and dump" schemes. Best avoided.

Contrast that with the Australian market where -- and you will know better than I -- low price is less an indicator of a stock to avoid.

But, as always, frictional costs are proportionally higher for lower priced shares. There is a tradeoff with the relative ease of price movement for low priced shares.
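For example, a one cent spread is 2% of a fifty cent share price, but only 0.05% of a $20 share price.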

And, be aware of the distortion that splits have on historical price data. A stock that is now $20 and has split 2:1 three times has eight times as many shares today as it did before the first split. All things being equal, the price pre-split will be listed as $2.50, when it may never have traded below, say, $10.

The comments about the Russell index illustrate a general observation: the more developed the trading market, the more efficient prices are, and the more difficult it is to make money trading in it. The Russell 1000 is the (about) 1000 largest stocks (by capitalization) traded in the US. In total, Russell 1000 stocks comprise some 90% of total stock value in US companies. The Russell 2000 stocks are the next (about) 2000 companies, in order by capitalization. They comprise about 8% of total stock value. The Russell 2000, as an index or future, is a popular issue for traders. It is a compromise between too efficient and too small. Since it is the index that is being traded (using a surrogate such as a mutual fund, ETF, future, or option), the drawback of small capitalization illiquidity is pretty well eliminated.

Best regards,
Howard
 
Many thanks for your thoughts, Howardbandy (and RNR). :)

I just stepped up the index to the Russell 2000 and, lo and behold, the returns improved. I wasn't aware that corporate behaviour was so questionable at the low end, but I shouldn't really be surprised. I will now spend my time reading through this thread for more tips and ideas.
 

How does the index itself come up?
As a single entity, I mean.
 
The Russell 2000 Index ticker for Norgate Premium Data is $RUT
For Yahoo Finance it is ^RUT
The iShares Russell 2000 ETF is IWM
RWM is an inverse ETF
It also trades as a futures contract -- check your data vendor for specific symbol.
The CBOE lists options on the Russell 2000
The Yahoo symbol for the volatility index for the Russell 2000 is ^RVX

Etc

Best,
Howard
 
Hi. I have a couple of basic questions relating to filters which I can't get my head around. As always, any input is appreciated.

1. All of my criteria to generate a buy signal are written as in the following example:

OpClos = O > Ref( C, -1 );

Buy = OpClos;

So my question here is: what is the purpose of the 'Filter' code in AmiBroker? Can't conditions just be included in the Buy statement without the use of a filter? Should I be using a filter here?

2. When I run my exploration, I want to see only stocks which opened on that day higher than the previous day's high, i.e. only execute those trades and include them in the report (so not filtering on the scan itself, but on the following day). How do I achieve this? How do I filter tomorrow's entries down to only the ones which opened higher, as I'd only execute those trades?

Do I use Ref(..., +1) and include it in the filter?


Hope this makes sense. Thanks.
 

A filter to me is exactly the same as an "EntryTest" condition.

This is a basic example of how I would structure the code for back-testing:-

Code:
PriceFilter = C >=0.10 AND C<=2.00;
IndexFilter = True; // replace with your index filter condition
VolumeFilter = LLV(V,10) >= 250000;
PriceAtEntry = O > Ref(C,-1);

EntryTest = PriceFilter*IndexFilter*VolumeFilter*PriceAtEntry;

Buy = Ref(EntryTest,-1);

The code references EntryTest as at the day of entry (hopefully the correct code for AmiBroker) and will give you the number of potential trades with the PriceAtEntry (PaE) condition included. However, I will need to run another back-test excluding the PaE condition to determine the expected failure rate resulting from the use of that condition, which may be relevant to the system's "tradeability".
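One way to run both back-tests without editing the code each time is to make the PaE condition switchable. A rough sketch along those lines (the index filter is only an example, using the $RUT symbol mentioned elsewhere in the thread):

Code:
PriceFilter  = C >= 0.10 AND C <= 2.00;
VolumeFilter = LLV( V, 10 ) >= 250000;
PriceAtEntry = O > Ref( C, -1 );

// index filter example only -- index above its 200-day average, using Norgate's $RUT
IndexFilter  = Foreign( "$RUT", "C" ) > MA( Foreign( "$RUT", "C" ), 200 );

// toggle lets you run the back-test with and without the PaE condition
includePaE   = ParamToggle( "Include PriceAtEntry", "No|Yes", 1 );

EntryTest    = PriceFilter AND IndexFilter AND VolumeFilter AND ( NOT includePaE OR PriceAtEntry );

Buy = Ref( EntryTest, -1 );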
 

You're thinking too hard.

Filter just controls what shows in the results (it's a display function), in the format you have assigned with AddColumn, for whatever your filter expression is. In your case Filter = O > Ref( H, -1 ), i.e. today's open is greater than yesterday's high.

Backtest has nothing to do with the filter statement.
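As a bare-bones exploration, that would look something like this (the column names and formats are just examples):

Code:
// Exploration: list only stocks whose open today is above yesterday's high
Filter = O > Ref( H, -1 );

AddColumn( O, "Open", 1.3 );
AddColumn( Ref( H, -1 ), "Yesterday's high", 1.3 );
AddColumn( V, "Volume", 1.0 );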

You could always use the AmiBroker Yahoo group (see the AB website); that's where all the AB propeller heads hang out.
 
So I've been working on position sizing on a simple system (for position sizing test purposes). I have entered the following code:

SetOption("maxopenpositions", 5);
SetPositionSize(20, spsPercentOfEquity);


Then I've run 10 year Monte Carlo simulations many times over by including this code.


//MonteCarlo

PS = Optimize( "Position Score", 1, 1, 100, 1 );
PositionScore = Random()*PS;


Now what I am finding is that fixed position sizing is in fact significantly hindering my CAR, RAR, etc. In fact, it doesn't even noticeably improve maximum system drawdown when compared to random position sizing and capital allocation. I've done the whole in-sample/out-of-sample walk-forward process too.

This is even taking into account that there are numerous trades which randomly have 100% equity applied. My one restriction that I should mention is that my stop loss is kept at a fixed 7% of the entry point and exits are always after a week.
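For anyone wanting to replicate it, those exits can be coded along these lines in AFL (a sketch only; the five-bar 'week' is an approximation):

Code:
ApplyStop( stopTypeLoss, stopModePercent, 7, True );   // fixed 7% stop from the entry price
ApplyStop( stopTypeNBar, stopModeBars, 5 );            // time exit after 5 trading days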

Drawdowns always recover. But obviously the timing and extent of the initial drawdowns do have a significant bearing on the final equity balance.

Can anybody please explain the reason behind this, as it seems to go against what it is supposed to achieve? Is it fair to say that random position sizing and equity allocation over time outperforms fixed position sizing?

What have you guys discovered in this regard?


On a side note - I read a discussion on here about the number of losses and whether a system can be determined broken or not from this. Here's a piece of code which I found which gives a level of visibility on the frequency of winning and losing runs. Hope it can be of help.

// Consecutive winners/losers -- custom backtester metrics (append to an existing system formula)

SetCustomBacktestProc( "" );

function updateStats( winnersOrLosers, profit, longOrShortOrBoth, seriesLength )
{
    signum = 1;

    if ( winnersOrLosers == "Losers" )
        signum = -1;

    if ( signum * profit > 0 )
        seriesLength++;
    else
    {
        VarSet( longOrShortOrBoth + seriesLength, VarGet( longOrShortOrBoth + seriesLength ) + 1 );
        seriesLength = 0;
    }

    return seriesLength;
}

procedure calcStats( bo, type )
{
    metricName = type + "MaxConsecutive";

    // Retrieve the max number of consecutive winning (or losing) trades
    stat = bo.GetPerformanceStats( 0 ); // Get Stats object for all trades
    maxConsecutive = stat.GetValue( metricName );
    stat = bo.GetPerformanceStats( 1 ); // Get Stats object for long trades
    maxConsecutive = Max( maxConsecutive, stat.GetValue( metricName ) );
    stat = bo.GetPerformanceStats( 2 ); // Get Stats object for short trades
    maxConsecutive = Max( maxConsecutive, stat.GetValue( metricName ) );

    for ( i = 1; i <= maxConsecutive; i++ ) // Remember that "Dynamic variables are always global"
    {
        VarSet( "consecutive" + i, 0 );
        VarSet( "consecutiveLong" + i, 0 );
        VarSet( "consecutiveShort" + i, 0 );
    }

    consecutiveCounter = consecutiveLongCounter = consecutiveShortCounter = 0;

    for ( trade = bo.GetFirstTrade(); trade; trade = bo.GetNextTrade() ) // Loop through all closed trades
    {
        consecutiveCounter = updateStats( type, trade.GetProfit(), "consecutive", consecutiveCounter );

        if ( trade.IsLong )
            consecutiveLongCounter = updateStats( type, trade.GetProfit(), "consecutiveLong", consecutiveLongCounter );
        else
            consecutiveShortCounter = updateStats( type, trade.GetProfit(), "consecutiveShort", consecutiveShortCounter );
    }

    for ( i = 1; i <= maxConsecutive; i++ )
        bo.AddCustomMetric( "Consecutive " + type + " #" + i, VarGet( "consecutive" + i ), VarGet( "consecutiveLong" + i ), VarGet( "consecutiveShort" + i ), 0 ); // Add to results display
}

if ( Status( "action" ) == actionPortfolio )
{
    bo = GetBacktesterObject();  // Get backtester object
    bo.Backtest();               // Run backtest
    calcStats( bo, "Winners" );
    calcStats( bo, "Losers" );
}
 
I'll bite. If your system is being Monte Carlo'd/backtested over a long enough period and taking many trades in that time, then you probably should expect position sizing effects to average out. A smaller number of larger positions might even help by reducing ongoing commissions etc.

However, you're drastically increasing the likelihood that a string of losers will wipe you out, or do serious capital damage that is tough to recover from. You might expect that over many such runs you'll see many more outliers (lucky high-achieving runs, and "unlucky" runs with little or no capital growth). Howard covers this sort of thing, and the value of plotting runs on "straw broom diagrams", in his books.
 
Hi chipotle

Not necessarily based on your last post, although relevant to the system... what is your entry price code?
 
Hi Chipotle, and all --

You wrote:

So I've been working on position sizing on a simple system (for position sizing test purposes). I have entered the following code:

SetOption("maxopenpositions", 5);
SetPositionSize(20, spsPercentOfEquity);

Then I've run 10 year Monte Carlo simulations many times over by including this code.


//MonteCarlo

PS = Optimize( "Position Score", 1, 1, 100, 1 );
PositionScore = Random()*PS;

-----------------------

I Strongly recommend keeping All position sizing Out of your trading system logic. The maximum safe position size depends on system health -- on the current degree of synchronization between the model (logic, rule, parameters) and the data. It depends on the distribution of recent trades -- most importantly the number and magnitude of losing trades.

This information cannot be adequately determined from within the trading system.

Including position sizing within the trading system will introduce an unfavorable bias that Always over-estimates profit and under-estimates risk.

Rather, use this procedure:
1. Use fixed size trades -- not even compounding -- for all development, including validation.
2. Create a set of trades that you feel are the "best estimate" of future performance. A good source for these is the set of out-of-sample trades from an uncontaminated walk forward run. Augment this set with whatever you subjectively feel is likely to occur in the future but is under-represented in that set.
3. Determine your personal "statement of risk tolerance." An example is:
I am trading a $100,000 account and looking forward two years. I want to hold the probability of a drawdown from maximum account equity, marked-to-market daily, of 20% or greater to 5% or less.
4. Use the Monte Carlo simulation techniques I describe in my Modeling book to estimate the risk of drawdown for the two year horizon using the best estimate set of trades.
5. Calibrate the initial value of the position size so that risk is within your personal risk tolerance. It will almost certainly be less than full fraction.
6. Rerun the Monte Carlo at that position size and analyze the distribution of profit. Decide whether the system is worth trading. Look beyond the mean -- perhaps at the 5th to 95th inter-percentile range -- to estimate the probable range of CAR given your level of risk.

If you do decide to trade the system, after every additional completed trade, add that trade to the best estimate set and rerun steps 4, 5, and 6 above. Adjust position size for the next trade accordingly.

When the system health begins to deteriorate, and it will, this technique will automatically reduce position size in advance of account-destroying drawdown. If the system fails completely, you will already know that the correct position size for a system that is broken is zero.

There is a flowchart and brief discussion in Chapter 2 of my "Mean Reversion" book. You can download that chapter for free:
http://www.meanreversiontradingsystems.com/book.html

There is a deeper discussion in my forthcoming book, "Quantitative Technical Analysis."
http://www.quantitativetechnicalanalysis.com/

Best regards,
Howard
 
Hi Howard, Thanks for your reply. Somehow I missed it even though I receive notifications.

Great point on not compounding when testing. I'd missed that one completely.

I had put my development on hold for a few days while I read some of Van Tharp's 'TYW to FF'. I'm looking forward to your book (when is it being released?).

Actually the reading didn't take long. However, understanding his logic behind R-multiples still has me flummoxed.

Initially it makes sense, but there seems to be a whole dimension that he hasn't considered: trade frequency and the average length of time that a trade is kept open.

The system I am working on is a variation on a Bollinger Band breakout / sell-the-next-day system, so it's really short term. The classic expectancy calc is pretty good. However, when I apply Van Tharp's (as shown at the bottom of the image and taken from the ), the result is a negative expectancy of -0.11! I found the code for this in the AmiBroker guide online.

So naturally, I'm trying to figure out whether this figure should be considered for a short-term system, or the (Av Win x % Winners) - (Av Loss x % Losers). His calcs seem to penalize short-term traders.
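For clarity, the two calcs as I understand them: classic expectancy = (% winners x average win) - (% losers x average loss), whereas Tharp's expectancy is the average R-multiple, i.e. the average of (trade profit / initial risk per trade), which he then weighs against opportunity (how many trades the system generates in a given period).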

Any thoughts from all appreciated as always.
 
Hi Chipotle, and all --

I think you are discovering some of the difficulties that arise when attempting to put position sizing into the model. It is possible to precompute two exit prices -- one for a profit target and one for a loss -- take the ratio of those, and use that ratio to set position size. Dr. Tharp uses variations on this theme.
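For example, with invented numbers: enter at $10.00 with a precomputed stop at $9.30 and target at $11.40. The initial risk R is $0.70, the precomputed reward-to-risk ratio is 2:1, and a trade actually closed at $10.35 works out to +0.5R.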

There are three immediate difficulties.

One. Might be called tracking error -- the execution price is not the same as the precomputed price.
Two. Might be called synchronization slip -- the relationship between the model and the data changes, and the system does not perform consistently.
Three. Personal risk tolerance.

The approach I recommend, and outlined in my posting of a few days ago, moves the entire position sizing outside the model. It uses the actual execution results to measure the recent performance of the system and compute the best current position size for the next trade on a trade-by-trade basis.

Simplistic position sizing methods ignore differences in risk tolerance. One size does not fit all. An aggressive person trading his or her own account might be willing to take much more risk than a manager of other people's money. If the system continues to perform as expected based on development, that increased risk translates to increased position size, which translates to increased wealth after some period of time. But it comes at the expense of increased drawdown during the trading sequence. And -- importantly -- if that drawdown was an indication that the system is Not performing as expected, the increased risk translates to an increased real loss of trading capital.

Think about the relationship between a model and position size in the same way we think about an airplane or automobile and velocity. It is not possible to know the velocity of the vehicle from within it -- some external frame of reference is required. Similarly, from within the model, it is not possible to know the correct position size, because that depends on the current state of system health -- the state of synchronization between the model and the data.

The two factors -- individual risk tolerance and system health -- are critical to correct position sizing. And neither of them can be addressed from within the model -- the trading system code.

Techniques that compute position size within the model appear to be doing something useful, but are actually obscuring the more important issues.

Best regards,
Howard
 
I'm trying to do a backtest on particular sectors in the US stocks database. For some reason when I specify the sector, it just tells me there are no symbols to test, even if I specify the NYSE, or the S&P 500, etc...
 