# Backtesting based on fundamental data



## jet328 (28 April 2014)

Has anyone been successful in backtesting based on fundamental data (P/E ratio, dividend yield, debt ratio etc.) as opposed to more conventional backtesting on technical data (price, volume etc.)?

Let's say I wanted to compare:
- the 25% lowest P/E stocks vs the index, or
- the top 20 by debt yield vs the lowest in the ASX 200, or
- the top 20 stocks by shareholder yield vs the index.

Any ideas on how I'd go about sourcing the data? I have no problem doing some Excel manipulation/calcs, but I don't really want to source the raw data line by line.
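To make the first comparison concrete, here is a minimal sketch of the sort-and-compare step; the table layout and toy numbers are assumed, not from any real feed:

```python
# Hypothetical sketch (not from any real data source): compare the
# cheapest-quartile P/E bucket against an equal-weight "index" of the whole
# universe, given a table of (ticker, pe, fwd_return) already sourced.
def quartile_vs_index(rows):
    """rows: list of (ticker, pe, fwd_return) for one rebalance date."""
    valid = [r for r in rows if r[1] is not None and r[1] > 0]  # drop missing/negative P/E
    valid.sort(key=lambda r: r[1])                              # cheapest first
    cheapest = valid[: max(1, len(valid) // 4)]                 # bottom 25% by P/E
    bucket = sum(r[2] for r in cheapest) / len(cheapest)        # equal-weight bucket return
    index = sum(r[2] for r in valid) / len(valid)               # naive equal-weight "index"
    return bucket, index

rows = [("AAA", 8.0, 0.12), ("BBB", 15.0, 0.05),
        ("CCC", 22.0, 0.02), ("DDD", 30.0, -0.01)]
bucket, index = quartile_vs_index(rows)  # toy numbers, purely illustrative
```

The hard part, as the replies below make clear, is not this step but sourcing clean, point-in-time data to feed it.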


----------



## DeepState (28 April 2014)

jet328 said:


> Has anyone been successful in backtesting based on fundamental data (P/E ratio, dividend yield, debt ratio etc.) as opposed to more conventional backtesting on technical data (price, volume etc.)?
> 
> Let's say I wanted to compare:
> - the 25% lowest P/E stocks vs the index
> ...





Yes.  Bloody nice to meet you.

Data for commercial-grade systems is sourced from FactSet, Bloomberg and/or CapIQ.  Each of these runs around $15-25k per annum. Or you can peruse annual reports and spend the rest of your natural life entering data.  You will have next to no chance of getting much done in Excel beyond simple sorts and will need stronger systems to process the data.  If all you can achieve are simple sorts before moving to full signal processing, you run a high risk of thinking you have found something when you actually haven't, or the converse. This is very, very important.  As you build your database, you will need to be mindful of unique IDs and may have to create your own keys.
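The point about keys can be illustrated with a minimal sketch; the `SecurityMaster` class and the (code, listing date) composite key are hypothetical, but the underlying problem (ASX codes being re-used by unrelated companies) is the real one:

```python
# Hypothetical sketch: ASX ticker codes get re-used, so a bare code is not a
# safe primary key. One workaround is a surrogate integer id keyed on the
# assumed composite (code, listing_date).
class SecurityMaster:
    def __init__(self):
        self._next_id = 1
        self._by_code_date = {}

    def register(self, code, listed):
        """Return a stable internal id for this (code, listing date) pair."""
        key = (code, listed)
        if key not in self._by_code_date:
            self._by_code_date[key] = self._next_id
            self._next_id += 1
        return self._by_code_date[key]

sm = SecurityMaster()
old_abc = sm.register("ABC", "1998-03-01")   # original listing under code ABC
new_abc = sm.register("ABC", "2012-07-15")   # later, unrelated company re-uses ABC
```

In SQL terms the same idea is a surrogate integer primary key with a unique constraint on (code, listed).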

All the best with it.  Keep us posted with your ... um...posts.


----------



## pixel (28 April 2014)

jet328 said:


> Has anyone been successful in backtesting based on fundamental data (P/E ratio, dividend yield, debt ratio etc.) as opposed to more conventional backtesting on technical data (price, volume etc.)?
> 
> Let's say I wanted to compare:
> - the 25% lowest P/E stocks vs the index
> ...




As a first guess, I'd say you can't avoid extensive manual collation.
The problem with all the fundamentals you mention is that it's not sufficient to have, e.g., historic ex-div dates; you also need to consider *when exactly* the new figures became available. 
Take the example of TPC: ex-div 3c fully franked on May 1st, but it had been mooted several weeks earlier, and the punters who paid attention then would've gone in well below 10c. How do you assess that when you backtest in a year's time?
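This is essentially the point-in-time data problem. A minimal sketch, assuming each figure is stored with the date it became public (the dates and values below are made up):

```python
# Hypothetical sketch: a backtest must use the figure that was *known* on
# the trade date, not the one stamped with the period end. Each record
# carries the date it became public. Dates and values are made up.
def known_asof(history, asof):
    """history: list of (release_date, value); ISO date strings sort lexically."""
    known = [v for d, v in history if d <= asof]
    return known[-1] if known else None

eps_history = [("2013-08-20", 0.10),   # full-year result released
               ("2014-02-18", 0.06)]   # half-year downgrade released
```

Trading on 2014-01-05, only the August figure was public, so a point-in-time lookup must return 0.10 even though 0.06 is the "current" number in hindsight.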


----------



## luutzu (28 April 2014)

jet328 said:


> Has anyone been successful in backtesting based on fundamental data (P/E ratio, dividend yield, debt ratio etc.) as opposed to more conventional backtesting on technical data (price, volume etc.)?
> 
> Let's say I wanted to compare:
> - the 25% lowest P/E stocks vs the index
> ...






That's really clever.

Though you might also want to measure factors other than just P/E or dividend yield... say, see how investments or new product launches have impacted dividend yield, or profit margin.

But as retired young said, it's not easy or cheap to get this information... not now, anyway.


----------



## DeepState (28 April 2014)

pixel said:


> As a first guess, I'd say you can't avoid extensive manual collation.
> The problem with all the fundamentals you mention lies in the fact that it's not sufficient to have e.g. historic ex-div dates, but you'll need to consider *when exactly* new figures have become available.
> Take the example of TPC: ex-div 3cFF on May 1st; but it had been mooted several weeks ago and the punters that paid attention then would've gone in well below 10c. How do you assess that when you back-test in a year's time?




Well spotted.  Couple of things:

1. There are 'as reported' databases around.  These are useful for macro analysis, where announcements are often revised subsequently.  E.g. US GDP has three announcements for each quarter, each a revision of the previous one.  Most databases will print only the final figure for history.

2. For the dividend expectations example, these can be obtained from historical files of analyst expectations.  You will be able to get some of this from the data vendors.  But if you are serious, you will need contacts, or to pay money to the major brokers, for access to the data and to keep it current.  Access to this stuff is normally reserved for insto.  Hence the general punter asking such questions, who presumably cannot get analyst expectations data through history, can only work off reported historical data for the most part and hope that provides some sort of an edge.

As it turns out, it does.  That's for Jet to find out.  Enjoy the journey.
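The 'as reported' idea in point 1 can be sketched by keeping every vintage of a print and selecting whichever was current on the backtest date; the structure below is a hypothetical illustration of the advance/second/third revision pattern:

```python
# Hypothetical sketch of the 'as reported' idea: store every vintage of a
# macro print and select the one current on the backtest date, instead of
# the final revision most databases keep. Values are illustrative only.
vintages = {
    # quarter: [(publication_date, reported_value), ...] advance/second/third
    "2013Q4": [("2014-01-30", 3.2), ("2014-02-28", 2.4), ("2014-03-27", 2.6)],
}

def gdp_asof(quarter, asof):
    """Return the print that was current on `asof`, or None if none yet."""
    prints = [v for d, v in vintages[quarter] if d <= asof]
    return prints[-1] if prints else None
```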


----------



## KnowThePast (28 April 2014)

Welcome to my playground!

You can read my thread, where I've posted some of my backtesting results (page 10 for Price/Book and page 14 for Altman Z).

I am afraid I am not aware of any way that you can easily do it yourself.

I wrote the software from scratch. There was nothing like it in Australia that I could find. There are some kits in overseas markets, but they all had limitations I didn't like. You can do some of the functionality in Excel, but only for simpler tests.

On the data side, it gets worse: all the data feeds that I could find were ridiculously expensive, and most of them did not provide all the data that I needed. The best data available probably belongs to brokers/funds, where it is manually entered, but it is proprietary. In the end, I just used my own manually entered data. Lots of work, but it wasn't as bad for me, since I'd already recorded that data whenever I researched a new company.

In addition to my posts, here's a quick demo of my software:
https://www.youtube.com/watch?v=a85bD6iXBVc&feature=player_embedded

As far as timing issues go, yes, one certainly needs to be aware of them. There are a number of approaches:
1. Ignore them altogether. Just buy and sell on the numbers, ignoring any announcements, etc. This has more merit than may seem at first. Yes, some announcements provide guidance on changed fundamentals in the future, but some are just distraction. From an automation perspective, it's hard to tell which is which. 
2. Record them in your data. There have been some great studies done on announcements and market reaction. It is striking how different those reactions are for cheap vs expensive stocks. 
3. Limit buy/sell decisions to times of low announcement activity, e.g. not in the 1-2 months leading up to a report.
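Approach 3 can be sketched as a simple calendar filter; the specific blackout months below are an assumption (February/August reporting plus a one-month lead-in), not a recommendation:

```python
# Hypothetical sketch of approach 3: only act in months with low
# announcement activity. Assumed blackout: the month before and the month
# of each half-yearly report (Feb/Aug for most ASX companies).
BLACKOUT_MONTHS = {1, 2, 7, 8}

def can_trade(date_str):
    """date_str: 'YYYY-MM-DD'. True outside the assumed blackout windows."""
    month = int(date_str[5:7])
    return month not in BLACKOUT_MONTHS
```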

Feel free to ask any questions, this is one topic I love talking about!


----------



## DeepState (28 April 2014)

KnowThePast said:


> Welcome to my playground!
> 
> You can read my thread, where I've posted some of my backtesting results (page 10 for Price/Book and 14 for Alt Z).
> 
> ...





Who are you Batman?


----------



## KnowThePast (28 April 2014)

DeepState said:


> Who are you Batman?




???

Robin?


----------



## DeepState (28 April 2014)

KnowThePast said:


> ???
> 
> Robin?




Just checked out your YouTube.  Great stuff.  Just wondering what your background is and how you invest etc...  The average human being does not produce what you have done.  So I am curious.  Hats off to you.

Alfred


----------



## KnowThePast (28 April 2014)

DeepState said:


> Just checked out your YouTube.  Great stuff.  Just wondering what your background is and how you invest etc...  The average human being does not produce what you have done.  So I am curious.  Hats off to you.
> 
> Alfred




Thanks Alfred, you are too kind.

I've "played" in the stock market since the late '90s, but didn't put much effort into it until about 7-8 years ago, when I started managing my own super. I was always oriented more towards fundamentals, and most of my efforts have gone into finding undervalued companies from that perspective. I have achieved a return a few percent above the indexes in that time, which I was happy with, given the severe position restrictions my super imposed.

Automated strategies, backed by backtesting, are something I've always leaned towards, but didn't embrace fully until fairly recently. My experience kept telling me that even though my results were good, they would have been better if I hadn't tried to pick the best prospects, but had simply bought everything in my filters.

I look at what I've done as a product of multiple coincidences. 
1. I've been in the market for a long time, trying different things.
2. The way I conducted research, I already had a lot of data entered over the years.
3. My profession is software development, mostly in the financials area, quite a bit of it in storing and analysing financials. So writing software to store my research plus financials was not as difficult a task.
4. Initially, when I wrote the software, it wasn't for backtesting, but just a convenient place to store my entered data instead of Excel. I kept adding features to it, but because it happened over many years, it never felt like a huge amount of work.
5. A few lucky design decisions made the data flexible enough that it could be used for other purposes. So, when my backtesting idea came along, I already had 90% of it in place. 

If I hadn't had almost everything in place by the time I decided to implement backtesting, I would never have started: way too much work doing it from scratch for someone with as little capital as me. But, due to sheer luck, everything fell into place to make it happen.

I have recently decided to run a fully public portfolio, documenting all my investments in real time. You can follow my adventure in my thread:
https://www.aussiestockforums.com/forums/showthread.php?t=26890


----------



## luutzu (28 April 2014)

KnowThePast said:


> Welcome to my playground!
> 
> You can read my thread, where I've posted some of my backtesting results (page 10 for Price/Book and 14 for Alt Z).
> 
> ...





Is there any fundamentals-based software out there?
How much would an investor be willing to pay for a good one?

Seems there's charting software, and some so-called fundamentals packages that basically give you the figures... and by the look of it, those figures are bought from S&P or Morningstar and rearranged.


----------



## KnowThePast (28 April 2014)

luutzu said:


> Is there any fundamentals-based software out there?
> How much would an investor be willing to pay for a good one?
> 
> Seems there's charting software, and some so-called fundamentals packages that basically give you the figures... and by the look of it, those figures are bought from S&P or Morningstar and rearranged.




As a retail investor with a small budget, I couldn't find anything. 

I don't know about larger funds, but I suspect most of them don't use anything similar either.


----------



## DeepState (28 April 2014)

KnowThePast said:


> Thanks Alfred, you are too kind.
> 
> I've "played" in the stock market since late 90's, but didn't put much effort into it until about 7-8 years ago, when I started managing my own super. I was always oriented more towards fundamentals, and most of efforts have been to find undervalued companies from that perspective. I have achieved a return a few percent above the indexes in this time, which I was happy with, given the severe position restrictions my super imposed.
> 
> ...




You've accomplished great things and your thread is loaded with wisdom and contemplation from yourself and others.  What a catalyst you've been.  I am just so pleased to find you because you work in an evidence-based framework with hard data. Plus you've built a useful univariate analysis tool from scratch.  Not too shabby at all.  Bloody awesome in fact.

1. Did you work in a funds management environment as a developer?

2. What's your latest area of investigation?


----------



## DeepState (28 April 2014)

KnowThePast said:


> As a retail investor with a small budget, I couldn't find anything.
> 
> I don't know about larger funds, but I suspect most of them don't use anything similar either.




There are pre-canned systems out there which fill fundamental data and time series data and then allow you to manipulate it within reason for basics.  These will cost upwards of USD 50k per annum. And, KTP, that's just part of the reason why you are a genius in the flesh.


----------



## luutzu (28 April 2014)

DeepState said:


> There are pre-canned systems out there which fill fundamental data and time series data and then allow you to manipulate it within reason for basics.  These will cost upwards of USD 50k per annum. And, KTP, that's just part of the reason why you are a genius in the flesh.




I thought you wrote $50 per year... but it's 50k, as in $50,000, in US dollars.


Pre-canned results are no good. 
I've used them, but that's mainly because I don't know any better, and it's free. But to pay $50k... I'll stick to Excel.


----------



## jet328 (28 April 2014)

Thanks for all the replies, looks like it will be more difficult than I first thought.

I've been impressed with Meb Faber's work and it has given me a few ideas to play around with.

KnowThePast, very impressed with your software, some SERIOUS work has gone into that.

I've coded a really basic backtester in the past using a C#/SQL backend, but nothing that smooth (threading made my head spin) or anywhere near as advanced.

The best free Aussie data I found was Morningstar (http://financials.morningstar.com/ratios/r.html?t=XASX:CBA&region=AUS), which wouldn't be hard to scrape, but it only goes back 10 years.

The US Compustat looks very impressive (and pricey). I read somewhere on a forum that if you join an investing club you can get access, which I'll look into when I get some spare time.


----------



## DeepState (28 April 2014)

luutzu said:


> I thought you wrote $50 per year... but it's 50k, as in $50 000, in US dollars.
> 
> 
> Pre-canned results are no good.
> I've used them, but that's mainly because I don't know any better, and it's free. But to pay $50K... I'll stick to excel.





USD 50k pa, yep.  I was unclear.  The platform is pre-canned: you buy it and it's ready to go.  The results are whatever you happen to be researching. The platform gathers the data and manipulates it.  This data management and manipulation is hard to achieve, as you'll find if you are using Excel.   They don't provide pre-canned factors unless you are seeking factor returns from commercial risk systems; if you are, those will cost about USD 80k pa more.

Most of the above is sometimes used as a stepping stone towards privately built engines, which will cost millions to develop, often with proprietary data to boot.

It's pretty much out of the league of the average punter.  Maybe you can group up, share the costs and combine research?  If you work that out, it might make sense relative to the cost of the time you would spend developing and maintaining your own systems.


----------



## luutzu (29 April 2014)

DeepState said:


> USD 50k pa yep.  I was unclear.  The platform is pre-canned.  That is, you buy it and it's ready to go.  The results are whatever you happen to be researching. The platform gathers the data and manipulates it.  This data management and manipulation is hard to achieve and you'll find that if you are using Excel.   They don't provide pre-canned factors unless you are seeking factor returns from commercial risk systems.  If you are, these will cost about USD$80k pa more.
> 
> Most of the above is sometimes used as a step phase for privately built engines which will cost millions to develop, often with proprietary data to boot.
> 
> It's pretty much out of the league of the average punter.  Maybe you can group up and share the costs and combine research?  If you work this out, it might make sense relative to the cost of time that you would spend developing and maintaining your own systems.





What is proprietary data? Raw financial data they entered themselves?

Are you talking about the Bloomberg terminal or some other system?

How do you know so much about these? From your research, or work, RY?

---

I can imagine it costing a fortune to develop your own system. The big fund managers probably have them, we hope... not sure what the boutique guys use. 

Hey Batman, do you know?

I will check out your YouTube when my bandwidth's back... too blurry for me at the moment.


----------



## DeepState (29 April 2014)

luutzu said:


> 1. What is proprietary data? Raw financial data they entered themselves?
> 
> 2. Are you talking about the Bloomberg terminal or some other system?
> 
> 3. How do you know so much about these? From your research or work RY?




1. Data that you create internally and which is not available to the general public unless they did the same thing as you.  This stuff can be utterly amazing.  But that's what you get paid for and this is where the biggest edges are obtained from.  For example, when RMBS became a real phenom, most people used aggregate default rates in their valuations.  This was approximately right.  But, some investors went to the enormous effort of finding out default rates by suburb which gave them an ability to value them more accurately.  This info was not generally available, although the pieces are public.  It is not insider trading.  Today, this stuff has gone through the stratosphere.

2. CapIq, FactSet...

3. I used to use this type of stuff as a decision aid before I retired. My team built one of those multi-million dollar developments, which took two years to achieve (oh, the pain) and was state of the art at the time. But I'm not a developer.  Now I have my own baby version.


----------



## howardbandy (29 April 2014)

Greetings --

Visit this web page:
http://www.blueowlpress.com/WordPress/links/#books
Just a few entries down from the top of this section is a link to a paper -- 
"Bandy, Howard, Use of Fundamental Data in Active Investing, pdf file, 2009."
Click the link, and you will open a 30 page pdf file with some of my thoughts on using fundamental data.

The executive summary is that it is not useful because:
1.  Granularity of reports is too sparse.
2.  Delay between corporate or government action and reporting is too great.
3.  Revisions are too numerous.
4.  The agenda of the reporter is unknowable.

In addition, there are a few comments about data in the introductory chapter of my forthcoming book.  Visit this web page:
http://www.quantitativetechnicalanalysis.com/book.html
Click the link to Introduction, and you will open a 13 page pdf document.  

One of the conclusions of the analysis of risk -- not shown in that introductory chapter, but discussed in the full book and in my presentations at the ATAA in Melbourne in May 2014 -- is that the data series being traded inherently establishes the upper limit of profit potential, even before a trading system is applied to it.  Rapid response to changes in the data, along with short holding periods, is required to extract profit without encountering unacceptably high risk.  Such trades are typically much shorter than the period between reports of fundamental data, so fundamental data could not be used to generate signals for them.

Best regards,
Howard


----------



## KnowThePast (29 April 2014)

DeepState said:


> There are pre-canned systems out there which fill fundamental data and time series data and then allow you to manipulate it within reason for basics.  These will cost upwards of USD 50k per annum. And, KTP, that's just part of the reason why you are a genius in the flesh.




Thanks again RY, you are embarrassing me now 



DeepState said:


> You've accomplished great things and your thread is loaded with wisdom and contemplation from yourself and others.  What a catalyst you've been.  I am just so pleased to find you because you work in an evidence-based framework with hard data. Plus you've built a useful univariate analysis tool from scratch.  Not too shabby at all.  Bloody awesome in fact.
> 
> 1. Did you work in a funds management environment as a developer?
> 
> 2. What's your latest area of investigation?




1. I worked for a software company that wrote funds management software, which we then sold to the funds, but I haven't worked in a fund itself.

2. Lately I've been playing around with filters for underperformance: basically, strategies that allow me to pick companies to short. I haven't implemented any of it in my investing yet, but I am considering it more and more.

Another thing that I think has really not been covered much by formal research is backtesting of portfolio management strategies. Most published backtest results are done by buying the lowest/highest percentile group on a specific criterion, then rebalancing every year. This is perfectly fine if one wants to find out whether a specific fundamental filter performs above average, in general. But for practical purposes, I also very much want to know what kind of effect things like these may have:

- having a sell filter as well as a buy filter. For instance, buy @ P/B < 0.7, sell @ P/B > 2.0. The sell criterion could even be different from the buy criterion.
- having that sell filter then allows me not to rebalance my portfolio every year, but to see how funds would flow in and out in a more "natural" way.
- the cash rate on unused funds.
- averaging up/down.
- selling losers after a defined period.
- position sizing, max portfolio size, max holding size, etc.
- evaluating on a monthly/weekly/daily basis.
- limiting the number of trades per month/year/etc.

The way a portfolio is managed may have a huge impact on a winning (or losing) strategy. But I haven't yet read anything where this was properly backtested. Most of the literature on this kind of portfolio management deals with theoretical risk, not performance. 
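The buy/sell filter idea can be sketched as an event-driven holding rule rather than a fixed annual rebalance; the thresholds and the toy P/B series below are illustrative only:

```python
# Hypothetical sketch of a buy/sell filter: hold a stock from the first
# period its P/B drops below the buy threshold until the first period it
# rises above the sell threshold, instead of rebalancing on a schedule.
def holding_periods(pb_series, buy_below=0.7, sell_above=2.0):
    """pb_series: list of (date, pb). Returns list of (buy_date, sell_date)."""
    trades, entry = [], None
    for date, pb in pb_series:
        if entry is None and pb < buy_below:
            entry = date
        elif entry is not None and pb > sell_above:
            trades.append((entry, date))
            entry = None
    if entry is not None:
        trades.append((entry, None))   # still held at end of data
    return trades

series = [("2010", 0.9), ("2011", 0.6), ("2012", 1.1),
          ("2013", 2.3), ("2014", 0.5)]
```

On the toy series this yields one completed trade (entered 2011, exited 2013) and one position still open at the end of the data.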



jet328 said:


> KnowThePast, very impressed with your software, some SERIOUS work has gone into that.
> 
> I've coded a really basic backtester in the past using C#/SQL backend but nothing that smooth (threading made my head spin) or anywhere near as advanced.




Thank you for the kind words jet, and for starting this topic!

Believe it or not, my software is done with C#/SQL as well. For previous analytical software that I created, I used C++ for the engine and we created our own file format for the data. It worked really, really fast. My latest software I initially created for a different purpose, so it is not as optimised. But it handles what I need it for perfectly well, so there was no need to change it.



DeepState said:


> It's pretty much out of the league of the average punter.  Maybe you can group up and share the costs and combine research?  If you work this out, it might make sense relative to the cost of time that you would spend developing and maintaining your own systems.




I've considered making a website that hooks into my engine. People could subscribe to it for a monthly fee and get access. However, the cost of sourcing the data and running the site makes it a little too risky a proposition for my situation. 



DeepState said:


> 3. I used to use this type of stuff as decision aids before I retired, my team built one of those multi-million dollar developments which took two years to achieve - oh, the pain - and it was state of the art at the time. But I'm not a developer.  Now I have my own baby version.




RY, you seem incredibly knowledgeable on this. Could you please share your experience with this as well? An "I showed you mine, you show me yours" kind of thing.


----------



## KnowThePast (29 April 2014)

howardbandy said:


> Greetings --
> 
> Visit this web page:
> http://www.blueowlpress.com/WordPress/links/#books
> ...




Hi Howard,

It always amazes me that public internet forums regularly get such quality contributions. Thank you for your work, I've read it with great interest.

I can now start arguing with you 

While I fully agree that your reasons make perfect sense, does the data agree with them? After all, if one were to make trades based on fundamental data, the main thing should be that it works, not that it makes sense. Of course, making sense of it is very important. A very real danger of backtesting is that you find something that happened to work during that time period for that group of companies, rather than something that has a real chance of working ever again. 

Let's pick something safe. Ben Graham, 80 years ago, wrote about a strategy of buying companies under their asset value with a margin of safety; preferably under working capital, but let's stick to net assets. Numerous studies since then have shown that this strategy has produced a better-than-index result in almost any 5-year period in every developed market in the world. Not significantly better, but after 80 years and millions of trades, it cannot simply be dismissed. 

Furthermore, it has been shown that the lower the P/B, the better the performance; the higher, the worse. Also, no matter which measure of risk one preferred, lower P/B strategies did not measure as higher risk. 

While I do not use this strategy myself, I've run it in my backtester on the Australian market for the last 10 years and got precisely the same result as all the researchers have for the last 80 years. 



howardbandy said:


> The executive summary is that it is not useful because:
> 1.  Granularity of reports is too sparse.
> 2.  Delay between corporate or government action and reporting is too great.
> 3.  Revisions are too numerous.
> 4.  The agenda of the reporter is unknowable.




I agree, all the data is extremely suspect for these reasons, and probably more. But accurate predictions about a group of companies can nevertheless be made from it. A very high error rate for analysis of an individual company, yes. But as a group, it's a different story. 

My personal favourite explanation lies in behavioural economics. People are too pessimistic about the worst prospects and too optimistic about the best ones, which would explain the lower-than-rational prices being paid, on average, for the cheapest stocks. 

But one also can't ignore the predictive power of fundamentals on business performance itself, which has nothing to do with anyone's psychology. For instance, the Altman Z score has for decades been pretty accurate in predicting the chance of bankruptcy. 
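For reference, the original (1968, manufacturing-company) Altman Z formula with the conventional zone cut-offs; the sample inputs in the test are made up:

```python
# Original 1968 Altman Z (manufacturing) weights, with the conventional
# distress (< 1.81) and safe (> 2.99) zone cut-offs.
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """wc: working capital, re: retained earnings, mve: market value of
    equity, ta: total assets, tl: total liabilities."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

def zone(z):
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```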



howardbandy said:


> One of the conclusions of the analysis of risk -- not shown in that introductory chapter, but discussed in the full book and in my presentations at the ATAA in Melbourne May 2014 -- is that the data series being traded inherently establishes the upper limit of profit potential, even before a trading system is applied to it.  Rapid response to changes in the data, along with short holding periods, are required to extract profit without encountering unacceptably high risk.  These trades are typically much shorter than the period between reports of fundamental data.  Fundamental data could not be used to generate signals for them.




FWIW, I found that value strategies have an average holding period of 3-5 years for best performance, which seems consistent with prior research as well. What kind of periods did you test with your data?

I've always wondered why it was 3+ years. Perhaps it's the fact that most analysts concentrate on 2-year forecasts, or perhaps it has to do with the length of a typical business/credit cycle.

I highly recommend the Tweedy, Browne paper on this topic, which can be found here:
http://www.tweedy.com/resources/library_docs/papers/WhatHasWorkedFundVersionWeb.pdf

In addition to describing their investment approach, they describe the results of other similar studies in different markets.


----------



## howardbandy (30 April 2014)

Hi KTP --

Yes, projections about groups of companies have lower variance than projections about individual companies.  At the limit, just buy the benchmark index.  If you have faith that the market will behave as you hope.  

The risk is drawdown.  Of experiencing a drawdown greater than my risk tolerance.  Of having funds tied up in positions that carry large losses for long periods of time.  Of being forced to liquidate a position at a serious loss.  

Even America's most famous value-oriented investors had drawdowns of 50% or more in 2009.  They held their positions, having little alternative.  Their positions were too large to liquidate, and they were seen as patriotic for having faith in a recovery.  Perhaps their holding their positions did contribute to moderating a steeper decline.  But individual investors/traders do not have those limitations or those responsibilities.  Rather, my responsibility is to keep my family safe and financially sound.  

This recent recovery was as swift as it was, in my opinion, entirely because of government funding, which is distorting interest rates, distorting alternative uses of money, and forcing investment into stocks and real estate.  The government funding will end and/or its effect will lessen.  I expect a revisitation of the 2009 market lows, or lower, but without the follow-on swift recovery.  In my opinion, forecasts over holdings of several years are at risk of government action -- or, equivalently, of world reaction to government action or inaction.  I would not be surprised if it takes two generations -- an entire adult lifetime for many people -- for equity indexes to recover to today's levels following the next crisis.  It did take that long following 1930.

Rather, I prefer to work with fairly short holding periods, looking for high probability opportunities, retreating to cash regularly, reevaluating often, calculating confidence rather than relying on faith.

I wish you continued success.

Best regards,
Howard


----------



## DeepState (3 May 2014)

KnowThePast said:


> I've always wondered why it was 3 years+. Perhaps it's the fact that most analysts concentrate on 2 year forecasts, or perhaps it has to do with a length of a typical business/credit cycle.




MOM reverts after two to three years in the cross-section, instead of extrapolating.

Markets display excess volatility.  Earnings growth and P/E contraction/expansion are highly correlated.

There is no overwhelmingly great theory behind why the reversion occurs over that period.  Perhaps the most likely simple theory is that investors initially under-react to news, see the movement, hop on the trend, then overreact, and only then realise what happened.  But this theory does not suggest a time frame.  It is an observation that holds.

Behavioural science experiments undertaken in an agent-based setting produced these results in gross terms, so its rationale may be found in the exploration of agent-based dynamics.  More realistically, it also has to do with the time horizons and peer-relative nature of big blocks of the market.  Insto makes up around 60-70%, mostly fundies who can't take heat beyond 3 years, because their client base uses that sort of time period for assessment; they would look silly for holding a money-loser over that period, and make up reasons to support the action.  The alignment is all off kilter, but that's the world we live in.

I believe it is partly due to the business/credit-cycle rationale that you have put forward.

But... there are deeper reasons relating to within-market concerns that make the cross-section behave as it does.

Autocorrelation of daily ASX returns does not pass statistical tests at even the 10% level for a data sample of the last 10 years.  Hence MOM at the aggregate level does not seem present, although I did find cross-sectional MOM across markets many years ago for a naive signal.  This had modest predictive power.
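A test of this kind can be sketched as a lag-1 autocorrelation compared against the approximate significance band (z = 1.645 for the two-sided 10% level); a simplified sketch of the idea, not the actual test used:

```python
# Hypothetical sketch: lag-1 autocorrelation of a daily return series,
# compared against the approximate +/- z/sqrt(n) significance band
# (z = 1.645 for a two-sided 10% level).
import math

def lag1_autocorr(x):
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i - 1] - mean) for i in range(1, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def significant_at_10pct(x):
    return abs(lag1_autocorr(x)) > 1.645 / math.sqrt(len(x))
```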


----------



## DeepState (3 May 2014)

KnowThePast said:


> RY, you seem incredibly knowledgeable on this. Could you please share you experience in this as well? I showed you mine, you show me yours kind of thing




Sorry for the delayed reply, 

Back in the day, we had to build our own database because commercial systems miss a bunch of things and have problems with primary keys. Stock codes would be re-used etc.  We also had to classify stocks into the correct sectors too.

Fundamental data for 3-way accounts was obtained from a range of data vendors in historical terms.  These had to be compared and corrected. Estimates data was obtained from the brokers who were producing those estimates.  Proprietary data was sourced from wherever it needed to be sourced from.

The backtesting engine was built in MATLAB.  The database was SQL, visible through PowerBuilder as well as via SQL query; C# was in there as well.  MATLAB was also used to form portfolios by manipulating all this data into a trade list.  Optimisers in MATLAB, though awesome, were not awesome enough for these purposes.  Hence optimisation was linked to Lindo Pro, which provided commercial-grade stuff useful for backtesting and real portfolio construction.

All proposed trades were run through a compliance check via extraction of holdings from HiPortfolio files via PowerBuilder.  These were compared against client restrictions.  The HiPortfolio files were also the source of truth for portfolio construction, with information from back office relating to overnight cash received or intra-day cash receipts.  We also loaded parcel information where a client wanted tax-aware portfolio management, coded for the applicable tax rate and accounting convention.  These were considered in the trade-off between forecasted returns and tax costs, amongst other things.

Trade feeds via DFS IRESS allowed direct communication of trades and bookout without manual entry and intra day management of the trading activity of the day.

Then we sipped Pina Coladas and had long lunches. Or, sometimes, took a dive off the Barrier Reef or Bermuda.


----------



## craft (3 May 2014)

DeepState said:


> Then we sipped Pina Coladas and had long lunches. Or, sometimes, took a dive off the Barrier Reef or Bermuda.




So you’re now retired and could do those things all day every day, except they only retain their true appeal when done in moderation, and yet the market obsession can soak up endless time without losing its appeal.  So retired from employment but not the market is my guess.  The question then is: how are you going to approach things as a private investor/trader?

Do what you used to do but with fewer resources and without management fees from other people’s money – that seems an uphill road.

Try and find a statistical niche that is not being employed by the big end of town – from your posts this seems where you’re headed.

Use what you know about the big end against them.

What are the objectives – obviously a competitive spirit there, so I guess some sort of outperformance would be the goal. But how far do you want to reach, given limiting the downside limits the upside?  Are you looking for risk-adjusted outperformance, or are you prepared to put it all on the line to achieve the best absolute return you can?

What are the strategy and objectives – what are your competitive advantages now to compete against what you used to do when employed?  Retail traders/investors obviously don’t have the resources you describe from your working life – I’m interested in how an ex-insider intends to combat their obvious resource and scale advantages now he’s on the other side.

Off topic – maybe if you want to answer (don’t feel you need to) do so in another thread. It would be interesting to also get others’ thoughts on what they think their competitive advantages are over institutional/hedge etc managed money.


----------



## DeepState (3 May 2014)

craft said:


> So you’re now retired and could do those things all day every day except they only retain their true appeal when done in moderation and yet the market obsession can soak up endless time without losing its appeal.  So retired from employment but not the market is my guess – question then is how are you going to approach things being a private investor/trader?
> 
> Do what you used to do but with less resources and without management fees from other people’s money, that seems an uphill road.
> 
> ...




  I don't think I've actually had a Pina Colada for five years.  That world was theoretical.  It's not even aspirational. That's how interested I actually am in all that in real life....

The main advantages retail has over insto are, in my view:
+ Much reduced market impact;
+ Longer time horizon;
+ No need for concern over peer risk; 
+ No need for concern about career risk; and
+ No frictions between when you find a good idea and when you do it.

The main disadvantages:
+ You don't have monster, commercial grade rocking systems;
+ You don't have as much access to high grade people whose individual insights you can piece together into something that sparks your mind to find the next thing.  This is the part I miss the most.  It's harder to build on something and you have to be more self-reliant.  Given the variance in expertise that may be needed to produce a good investment outcome, this can be constraining;  and
+ It's alright now, but I loved working in a team that was unified in objective and free flowing.  Those were great days I shall always treasure.

You actually don't need commercial grade stuff to make money in retail.  Because market impact consumes 30-50% of raw alpha in equities for even medium sized investors and the pay-off for alpha search is diminishing, a smart investor like you and some others here are actually good enough to pull it off.

My objectives are different.  My floor is that, short of war or confiscation of assets...and even then, we will be alright for the next 60 years. I figure I'll be dust by then and want to make sure that my wife is fine beyond it.    So there is a liability structure which tries to achieve that as best as possible.  It's not sexy, you can see long term yields in nominal and real in local and foreign markets.  Some credit risk is acceptable.  Some equities (local and foreign), but not a dominating component (approx. 30%), are in the mix. Given these dominate absolute risk for the overall portfolio even at that level, they are actively hedged according to a secret sauce process to ensure that I can mentally handle things like GFC when they occur along the journey.  That's bed rock.  That's premium related long term investing, developed on scenarios and long term valuation, profiting from risk premiums, or otherwise serving as a hedge to economic outliers (ie commodities), which have and I believe will generate above CPI style returns over the very long term.

Then there are surplus assets which are available for risk.  These include equity l/s, commodity l/s and currency l/s.  Pure alpha on market neutral.  This is much tougher because equity l/s comes with such a financing hurdle that a big chunk goes to the financier.  The others are done more like futures, but the edge you get is smaller. I'll pass on methodology in here.

Then, be real about tax....so you have to do various things to sort that out, obviously.

Everything is slow turnover.  No day trading. All super-patient.  My statistical edge lies within the way in which stocks, commodities and FX edges are found.  Each of these is grounded in fundamentals of some sort.  They have some technical elements, but nothing like the sorts of strategies I am learning about here.  All highly diversified.


----------



## McLovin (3 May 2014)

craft said:


> Off topic – maybe if you want to answer (don’t feel you need too) do so in another thread. It would be interesting to also get others thoughts on what they think their competitive advantages are over institutional/hedge etc managed money.




I'd say it's a couple of things.

First off, I don't have someone breathing down my neck when I underperform for the week/month/quarter/year. I think that is a huge problem in the mindset of the institutional investor. The corollary of that is I'm able to take positions that have far more volatility because I have a loooong time horizon. This also comes back to temperament. The market shakes out those who don't have the stomach or the conviction behind their position. Which in itself is an advantage. My grandfather used to always tell me that the truth almost always lies somewhere in the middle.

Secondly, I think a fair few fund managers are hopeless business people. They're really just accountants (there is nothing wrong with being an accountant!). They make overly complex models when all you need to do is sit in a quiet room and think. The recent downgrades at CCL make that point really well. There were people on this forum talking about the price discounting at Coles and Woolies, but the fund managers just took the company line, then got burnt.

Finally, size. Having a few million to invest is a far simpler task than having a few billion. My universe is multiples of a large fund's. The sweet spot for me seems to be companies around $80m to $1b. Any higher and there's too many smarter minds working on it, any smaller and it's difficult to sort the noise from real competitive advantage.

Anyway. That's my two cents.


----------



## banco (3 May 2014)

DeepState said:


> You actually don't need commercial grade stuff to make money in retail.  Because market impact consumes 30-50% of raw alpha in equities for even medium sized investors and the pay-off for alpha search is diminishing, a smart investor like you and some others here are actually good enough to pull it off.
> 
> .




What do you mean by this?


----------



## DeepState (3 May 2014)

banco said:


> What do you mean by this?




Hi Banco

It means that when you are managing a lot of money, it's really hard because you move markets every time you trade.  When you do this, you get set at unfavourable prices.  This is called market impact.  Anything beyond crossing the spread is called market impact, as you move through the spread searching for liquidity.  For completeness, the spread component is called the 'half-spread' because the ideal point is right in the middle of the best bid and offer sitting at front priority.  Thus, if you are a buyer and you hit the sell price, you have crossed half the spread.
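
As a toy illustration of that decomposition (hypothetical prices and a made-up commission rate; real cost attribution is messier than this):

```python
def transaction_cost_bps(bid, ask, avg_fill, commission_bps, side="buy"):
    """Decompose a trade's cost versus the spread midpoint into
    half-spread, market impact (slippage beyond the touch) and
    commission, each in basis points of the mid price."""
    mid = (bid + ask) / 2.0
    touch = ask if side == "buy" else bid           # price you cross to
    sign = 1.0 if side == "buy" else -1.0
    half_spread = sign * (touch - mid) / mid * 1e4
    impact = sign * (avg_fill - touch) / mid * 1e4  # beyond the touch
    return {
        "half_spread": half_spread,
        "impact": impact,
        "commission": commission_bps,
        "total": half_spread + impact + commission_bps,
    }

# A buyer facing a 10.00/10.02 market whose fills average 10.03:
cost = transaction_cost_bps(bid=10.00, ask=10.02, avg_fill=10.03,
                            commission_bps=5.0)
```

Here the buyer pays roughly 10bps of half-spread, another 10bps of impact from walking past the touch, plus commission; an insto working a large order sees the impact term dominate.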

This market impact is much bigger for an insto of even medium size for anything which is trend following or has an element of this in it.   This takes away from your ability to produce an edge after t-cost (market impact, half-spread and commission).  

It doesn't work this way exactly, but alpha production has diminishing rates of return for effort.  It's much easier to find your first $1 of alpha than your $300 millionth.  After a while, try as you might, it's hard to squeeze out another dollar.  Meanwhile, as you get larger, all of this moves against you: market impact increases, and your ability to produce alpha doesn't keep up, becoming a smaller proportion of your growing asset pool.  This makes it very hard to manage money for large asset pools.

In contrast, retail may not quite find it as easy to squeeze out the first $1 or $10,000 or $100k...but it doesn't have to.  If you have edge: no two people in this forum that I have read do things too similarly, and if they did, it would be small relative to available liquidity.  Further, you would have next to no market impact for the most part.

Hence, when you are retail size, even if you aren't as geared up as insto, you can do better because you have fewer frictions relative to your ability to generate returns.

This is not to say that you can just rock up as retail and expect to make money, of course.  You still need an edge, and that is nowhere near as easy as the average trader, live in the market at any given time, seems to imagine it to be.  Bending distributions does not create an edge, for example.

Cheers


----------



## DeepState (3 May 2014)

McLovin said:


> I'd say it's a couple of things....
> 
> ...They make overly complex models




An example of the lack of predictive ability despite the 'complex models'. This is broker consensus EPS estimates for MSCI Australia from Thomson Reuters.  It equally weights the estimates of, dunno, maybe 15-20 brokers. These are then aggregated into an MSCI Australia EPS using what is called a 'single stock' method: treating the whole market as if it were a single stock.

In absolute terms, I don't know why they bother, other than they get asked to produce something.  People who have looked into it find virtually no forecasting power exists.  Same goes for market economists and strategists.  In fact, if anything, they are contrarian signals.  So flipping a coin is probably as good a method for stock forecasting as heavy analysis, for the most part.  And retail can flip coins as well as insto.  It's not quite the whole story, of course. However, here's a graph for your consumption:


----------



## DeepState (3 May 2014)

McLovin said:


> I'd say it's a couple of things.
> 
> 1. First off, I don't have someone breathing down my neck when I underperform for the week/month/quarter/year.
> 
> 2. My universe is multiples of a large fund's. The sweet spot for me seems to be companies around $80m to $1b.




1. Not married are you.... 

2. Please check out the below.  This is the aggregate performance of small cap insto managers participating in the Mercer surveys.  They are killing it too.  Who do you reckon is the patsy at the table?  I'm not saying you are.  Just wondering who do you think is?  This result has always puzzled me.


----------



## McLovin (3 May 2014)

DeepState said:


> 1. Not married are you....
> 
> 2. Please check out the below.  This is the aggregate performance of small cap insto managers participating in the Mercer surveys.  They are killing it too.  Who do you reckon is the patsy at the table?  I'm not saying you are.  Just wondering who do you think is?  This result has always puzzled me.
> 
> View attachment 57842




Not married, and the girlfriend is quite undemanding. At least when it comes to financial matters.

As to two, I'd argue that the small ords does a pretty poor job of measuring genuine small caps. TPG is an almost $5b company and is included in XSO, FBU is a NZ$6.5b company and included in XSO. Meanwhile, it doesn't capture anything under about $150m. I think you can see where I'm going with this. So it's really weighted to what are more mid caps, or large caps in the Australian sense.


----------



## DeepState (3 May 2014)

McLovin said:


> 1. Not married, and the girlfriend is quite undemanding. At least when it comes to financial matters.
> 
> 2. As to two, I'd argue that the small ords does a pretty poor job of measuring genuine small caps. TPG is an almost $5b company and is included in XSO, FBU is a NZ$6.5b company and included in XSO. Meanwhile, it doesn't capture anything under about $150m. I think you can see where I'm going with this. So it's really weighted to what are more mid caps, or large caps in the Australian sense.




1. Total sweet spot.  Congratulations.

2. Just checked it out.  ASX Smalls contains everything from ASX 101-300.  Correct on TPG total cap (now $5.3bn per below)  but it only has partial float.  The 31st company, Independence Group (IGO-AU)  falls into your $1bn zone and the smallest stock has a market cap of ~$50m.  Extract from S&P Index composition report:




So this stuff is in your indicated zone.  However the top caps would dominate the index - agreed.  Still, unless we are saying that the first 30 underperforms the rest systematically, the relative performance stats are reasonably valid as an indication of profit extraction.  At least - I think so.

Also, actively managed micro cap funds have similarly ridiculous outperformance in aggregate.  But I don't have a chart that I can show you.  I just know the results from the top funds, and they are way above any notion of an index in the micro range.

I'm not expressing doubt about your prowess or the wisdom of your strategy.  I was just looking at it and curious about it.  I'm just wondering who you think you are taking money off (if this is an alpha play) or are you positioning for long term growth in this sector generally which is a game that all can play and get along in?  The answer can validly be anything.  But you are clearly thoughtful and well considered.  Are you in a dog fight with other, less-skilled, retail?  Are you in a dog fight with foreign? etc.  

Part of the reason I ask is that funds management is supposed to be zero sum less expenses.  You know, the usual Vanguard argument.  But Australia seems to defy it for professional funds across the spectrum from large through to micro caps.  It's puzzled people for a long time. I still don't think we really know why. But it has been like this for decades. Perhaps you could guess - no-one else knows.  Who are you and the professional community taking alpha from?  In large cap, it's foreign.  Retail does alright.  But foreign in the small and micro doesn't seem right.

Anyway, penny for your thoughts.


----------



## maffu (4 May 2014)

jet328 said:


> Thanks for all the replies, looks like it will be more difficult than I first thought.
> 
> I've been impressed with Meb Faber's work and it had given me a few ideas to play around with.
> 
> ...




Morningstar (DatAnalysis Premium) is very good for Australian firms. It has all the financial statement data, but it is not as good with market data. Scraping it is very easy.
Thomson Reuters Datastream is excellent for market data around the world.
Compustat/CRSP are the best databases for US firms.

I think they are all way too expensive for a retail investor; I use them through my university.

All the fundamental screens you mentioned in your initial post have been used previously. Have a look on Google Scholar for results of those kinds of experiments. The general result is the bottom 10% of P/E firms will outperform the top 10% of P/E firms, and the result is stronger when using M/B than P/E. No one has a perfect explanation as to why; it may be a risk factor.
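
A minimal sketch of that kind of n-tile experiment, using an entirely made-up cross-section (in practice `pe` and `fwd_ret` would come from a point-in-time database, and you'd repeat the sort at every rebalance date):

```python
import pandas as pd

# Hypothetical snapshot: ticker, trailing P/E, next-12-month return.
data = pd.DataFrame({
    "ticker": [f"S{i:02d}" for i in range(20)],
    "pe":     [4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
               14, 15, 16, 18, 20, 25, 30, 40, 55, 80],
    "fwd_ret": [0.18, 0.15, 0.12, 0.14, 0.10, 0.11, 0.09, 0.08, 0.07, 0.08,
                0.06, 0.07, 0.05, 0.06, 0.04, 0.03, 0.02, 0.01, -0.02, -0.05],
})

# Bucket on P/E (quartiles here; use q=10 for deciles on a real universe).
data["pe_bucket"] = pd.qcut(data["pe"], q=4, labels=False)

bucket_ret = data.groupby("pe_bucket")["fwd_ret"].mean()
value_spread = bucket_ret.iloc[0] - bucket_ret.iloc[-1]  # cheap minus expensive
```

The numbers are invented to show the mechanics only; whether the cheap-minus-expensive spread is positive out of sample is exactly the empirical question the literature debates.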


----------



## DeepState (4 May 2014)

maffu said:


> 1. Morningstar (DatAnalysis Premium) is very good for Australian firms. It has all the financial statement data, but it is not as good with market data. Scraping it is very easy.
> Thomson Reuters Datastream is excellent for market data around the world.
> Compustat/CRSP are the best databases for US firms.
> 
> 2. All the fundamental screens you mentioned in your initial post have been used previously. Have a look on Google Scholar for results of those kinds of experiments. The general result is the bottom 10% of P/E firms will outperform the top 10% of P/E firms, and the result is stronger when using M/B than P/E. No one has a perfect explanation as to why; it may be a risk factor.




Hi Maffu and Jet 328

1. Watch out for survivorship.  Because you can only dump data for firms that are alive now, any database built from a web scrape will be inherently biased to survivors.  This will skew your results in material ways.

For example, look up CGJ-AU (Coles Myer) which was acquired by Wesfarmers in 2007.  The Morningstar portal can't access it.
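
A crude simulation of the effect, with an invented delisting rule, shows why the bias matters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical panel: 500 firms, 10 annual returns each.  Firms that end
# the decade below half their starting value are treated as delisted and
# therefore invisible to a present-day web scrape (a crude stand-in for
# real delisting and takeover rules).
n_firms, n_years = 500, 10
rets = np.clip(rng.normal(0.08, 0.30, size=(n_firms, n_years)), -0.95, None)
cum = np.cumprod(1.0 + rets, axis=1)
survived = cum[:, -1] > 0.5

full_sample_mean = rets.mean()           # what really happened
survivor_mean = rets[survived].mean()    # what a scraped database shows
bias = survivor_mean - full_sample_mean  # positive: survivors look better
```

Dropping the dead firms mechanically lifts the measured average return, which is exactly the trap a survivor-only backtest falls into.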

2. The success of Value has two key explanations: risk (Fama) and overreaction (Lakonishok).  Evidence exists for both.  Neither PE nor PB is perfect...how could they be?  But there is a premium to be earned for both.  Low n-tile earnings yield (PE inverted) can include loss-makers (if included in the measure) or companies that are marginally profitable.  You can't tell if they are ramping up or declining to oblivion. Higher n-tiles also suffer this, but it is less of an issue.  Dispersion within lower n-tiles is also larger, as if to highlight the uncertainty associated with such firms and the difficulty of translating this measure for such companies.  Some investors exclude the lower n-tiles when determining P/E-based measures for investment as a result.  The lowest n-tile of P/E (inverted) often exceeds the returns generated by higher n-tiles.  PB suffers less from these issues.  Given the n-tiles show monotonic return improvement for PB as we proceed down the valuation spectrum, with the same results showing through for other value metrics, it is less likely that the premium shown for the lowest n-tile of PE is a return to risk; it is simply noise which should be set aside for valuation purposes, because the measure ceases to have meaning in that universe.  Doing so improves the PE valuation measure's return.
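
The loss-maker screen described above can be sketched like this (hypothetical tickers and numbers throughout):

```python
import pandas as pd

# Hypothetical cross-section: price and trailing EPS, some negative.
df = pd.DataFrame({
    "ticker": ["A", "B", "C", "D", "E", "F", "G", "H"],
    "price":  [10.0, 20.0, 5.0, 8.0, 50.0, 12.0, 3.0, 15.0],
    "eps":    [1.0, 1.6, 0.1, -0.4, 2.0, 1.5, -0.05, 0.9],
})

df["earnings_yield"] = df["eps"] / df["price"]  # inverted P/E

# Loss-makers get a negative yield that a naive sort treats as "cheap";
# screening them out keeps the measure meaningful before ranking.
investable = df[df["eps"] > 0].sort_values("earnings_yield",
                                           ascending=False)
cheapest = investable.iloc[0]["ticker"]
```

Firms D and G drop out of the ranking entirely rather than polluting the cheap end of the earnings-yield sort.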

Strange thing to me is that you'd think a PE or PB measure can't be compared between a gold spec and WOW-AU, say.  This implies that grouping should be made on a like for like basis.  Retailers only compared to other retailers etc.  But Fama and Lakonishok do it across the whole market in actual funds management activity and both are successful.  I guess the factors work across industries as well as within for reasons as previously stated.


----------



## craft (4 May 2014)

McLovin said:


> I'd say it's a couple of things.
> 
> First off, I don't have someone breathing down my neck when I underperform for the week/month/quarter/year. I think that is a huge problem in the mindset of the institutional investor. The corollary of that is I'm able to take positions that have far more volatility because I have a loooong time horizon. This also comes back to temperament. The market shakes out those who don't have the stomach or the conviction behind their position. Which in itself is an advantage. My grandfather use to always tell me that the truth almost always lies somewhere in the middle.
> 
> ...




Thanks McLovin and RY for your thoughts on this.

I think my advantage – and it’s not just an advantage over institutional management but also over most of what I observe in other retail participants – is that I can disregard price because I’m valuation focused (defined as buying future discretionary cash flows for less than they are worth) whilst nearly everybody else is to some extent price focused, and that puts me on a road less travelled. No need to be the brightest and fastest; the fruit on that road hangs low and, most of the time, in abundance.

Even good value driven fund managers can’t ignore price – watching Peter Hall struggle with what needs to be done to retain funds under management is very telling.

And statistical/system-based valuation as discussed on this thread isn’t really a competing valuation approach as such, because the measure can never be what it needs to be, and that is price to ‘value’.  Value is subjective and based on good judgement.  Statistics can tell you which way people generally err, i.e. the low-percentile P/E’s outperforming high P/E’s etc, but that’s not valuation – that’s market prices assigned a value designation.  It doesn’t capture the essence of value, just a glimpse of how things on the whole are predominantly mis-valued.

For as long as valuation stays a skill/art based on judgement, requires effort and lacks herd reinforcement of your conclusion in the form of instant price validation, I suspect it will remain a road less travelled.


----------



## DeepState (4 May 2014)

craft said:


> I think my advantage and it’s not just an advantage against institutional management but also against most of what I observe in other retail participants is that I can disregard price because I’m valuation focused (defined as buying future discretionary cash flows for less than they are worth) whilst nearly everybody else is to some extent price focused and that puts me on a road less travelled. No need to be the brightest and fastest, the fruit on that road hangs low and most of the time in abundance.




And here is an indication of your profit opportunity based on rationally discounted dividends (I guess it's a decent proxy for FCF). (From Shiller, American Economic Review, 1981):




It's real.  Go well....


----------



## sinner (6 May 2014)

DeepState said:


> Who do you reckon is the patsy at the table?  I'm not saying you are.  Just wondering who do you think is?  This result has always puzzled me.




That's because you obviously haven't read Seth Klarman's "Margin of Safety", wherein he explains all of this at the beginning of the book. Some excerpts:

re smallcaps


> Most of the major money management firms consider only large-capitalization securities for investment. These institutions cannot justify analyzing small and medium-sized companies in which only modest amounts could ever be invested. To illustrate this point, consider a manager at a very large institution who oversees a $1 billion portfolio. To achieve reasonable but not excessive diversification, the manager may have a policy of investing $50 million in each of twenty different stocks. To avoid owning illiquid positions, investments might be limited to no more than 5 percent of the outstanding shares of any one company. In combination these rules imply owning shares of companies with a minimum market capitalization of $1 billion each (5 percent of $1 billion is $50 million).
> 
> At the beginning of 1991 there were only 559 companies with market capitalizations this large, a fairly small universe.
> 
> I refer to this type of limitation on institutional investors' behavior as a self-imposed constraint. This one is not, however, a completely arbitrary rule adopted by managers; the size of the portfolio dictates such a restriction. Unfortunately for the clients of large money managers, like the one in this example, thousands of companies are automatically excluded from investment consideration regardless of individual merit.
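
The sizing arithmetic in that excerpt can be verified in a couple of lines:

```python
# Klarman's example: a $1b portfolio split into 20 equal positions,
# capped at 5% of any one company's outstanding shares.
portfolio = 1_000_000_000
n_positions = 20
max_ownership = 0.05

position_size = portfolio / n_positions         # $50m per stock
min_market_cap = position_size / max_ownership  # implied $1b minimum cap
```

The ownership cap, not the position size itself, is what forces the $1b market-cap floor and shrinks the investable universe.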




But more importantly on the general "who is the patsy at the table that craft takes money off when he's winning":



> An important stock market development in the past several years has been the rush by institutional investors into indexing. Indeed this trend may be a major factor in the significant divergence between the performances of large-capitalization and small-capitalization stocks between 1983 and 1990.
> 
> Indexing is the practice of buying all the components of a market index, such as the Standard & Poor's 500 Index, in proportion to the weightings of the index and then passively holding them. An index fund manager does not look to buy or sell even at attractive prices. Even more unusual, index fund managers may never have read the financial statements of the companies in which they invest and may not even know what businesses these companies are in.




Klarman posits that the largest money in the market - instos - are indexing. This is literally antithetical to the idea of a stockmarket, i.e. capital formation for the companies with the highest ROC, where instead you invest in a company regardless of its ROC, or any fundamental factor whatsoever. In the index case whoever has the largest market cap (shares on issue * current price) is invested in most heavily. The more people who index, the less efficient the market becomes - stocks outside the index are underpriced and undertraded relative to those in it. You are often allocating more money to companies that destroy value than those who create it.

Not just that but there are plenty of other irrational, similar, reasons covered at the beginning of the book that explains where the largest market inefficiencies that a value investor can capture come from:

* Securities in the index are often playthings for speculators and wannabe arbitrageurs, resulting in significant overtrading relative to their fundamental value and this overtrading (while giving the appearance of liquidity) is often the source of return drag. Low liquidity stocks outperform high liquidity stocks across all market cap quantiles. Low liquidity value outperforms low liquidity growth, low liquidity smallcap value outperforms low liquidity large growth, and so on. 
* Funds don't necessarily choose when they sell - long only funds often prefer to retain earnings than sell a security looking for another representing better value and capital formation properties so they will still be holding stocks at "the top" when Klarman is in cash. Leveraged funds might sell an entire holding on a drawdown that's 1 tick beyond their pain point. and so on....

I suggest reading the book.


----------



## McLovin (6 May 2014)

Ahh thanks Sinner. 

In a half-arsed way I was trying to make that point by highlighting the nature of the index: that 50% of the small ords is also the XJO. So there would be plenty of index huggers in the top part of the small ords, while small cap fund managers tend to be stock pickers, not index followers, from my anecdotal observation, at least in Australia.



			
craft said:

> Even good value driven fund managers can’t ignore price – watching Peter Hall struggle with what needs to be done to retain funds under management is very telling.




The way he's having to deal with his SRX holding is a very interesting case study in funds management.


----------



## DeepState (6 May 2014)

sinner said:


> That's because you obviously haven't read Seth Klarmans "Margin of Safety" wherein he explains all of this at the beginning of the book. Some excerpts
> 
> re smallcaps
> 
> ...




Ha ha.  You probably read at 2000 words per minute Sinner!  Given the book has 76,736 words in it, you'd eat it in less than 45 mins!  Crikey dude...please let me know the name of your speed reading coach. I am awed by your depth of knowledge and I used to read over 100 books a year.  Obviously on stupid stuff. Sigh.

You're right.  I have not read Klarman's book, but I think the way he and, thus, Baupost invest is very interesting. His stock picking ability is out of this world.

Everything you have said is true.  I agree with all of it.  In terms of the efficiency of index vs active, I would add Grossman-Stiglitz (1980), "On the Impossibility of Informationally Efficient Markets", which also argues that alpha will remain available for harvesting.  It's just a matter of who is getting it.  It remains available for insto despite his criticisms of how they might, in aggregate, seek to obtain it.  He is highlighting peer/career/business risk concerns, and these do lead to additional frictions - no argument there.

So, these results stand in the US and just about everywhere else.  But they do not hold in Australia?  Well, the small cap value premium exists.  But I'm just talking about transfer of wealth defined as pure alpha - which is the thing that Baupost/Klarman is talking about.  The market is sub zero-sum in terms of transfers.  So that's the puzzling part.

I agree with Baupost and what you are saying.  But if it is true, why then are Aust insto doing so well vs index across the spectrum of cap?  I don't get it.  This result does not line up with Baupost/Klarman's thesis, which I also subscribe to.  That's why I am so puzzled.  It is a puzzle that no-one I know of has explained well, although some postulates have been put up.

In large cap, examination of tick-level data shows that the patsy is foreign insto.  Retail does alright.  Foreign insto gets routinely wrong-footed.  But why do they stay so stupid (partly it is the thesis above, but also the kinds of stocks they hold and the way they mis-time their market movements)?  As you go down the cap spectrum, this ceases to be a viable explanation because the ownership habitat changes.  If insto is doing very well on average - who is the patsy in this market?  Because it isn't insto.  It's foreign insto in large cap, but that is not likely as we move to smalls and micro - they are not active there.  However, since retail does alright in large (for reasons other than alpha directional plays, I would add), I am not currently prepared to say it is retail, and think I am missing something.  But all roads presently lead there by tautology.

Hence the question.  I don't know the answer and it seems to defy the postulates you have just added - which I agree with also.  Something is exceptional about the Australian market that seems to favour domestic insto.  So if the theory doesn't hold against the outcome...we need to look some more.  And these results have held for decades.

Do you have a view?


----------



## sinner (6 May 2014)

> So, these results stand in the US and just about everywhere else.  But they do not hold in Australia?  Well, the small cap value premium exists.  But I'm just talking about transfer of wealth defined as pure alpha - which is the thing that Baupost/Klarman is talking about.  The market is sub zero-sum in terms of transfers.  So that's the puzzling part.
> 
> I agree with Baupost and what you are saying.  But if it is true, why then are Aust insto doing so well vs index across the spectrum of cap?  I don't get it.  This result does not line up with Baupost/Klarman's thesis, which I also subscribe to.  That's why I am so puzzled.  It is a puzzle that no-one that I know of has explained well, although some postulates have been put up.
> 
> In large cap, examination of tick level data shows that the patsy is foreign insto.  Retail does alright.  Foreign insto gets routinely wrong footed.  But why do they stay so stupid?  Partly it is the thesis above, but it is also the kinds of stocks they hold and the way they mis-time their market movements.
> 
> But as you go down the cap spectrum, this ceases to be a viable explanation because the ownership habitat changes.  If insto is doing very well on average, who is the patsy in this market?  Because it isn't insto.  It's foreign insto in large cap, but it is not likely to be that as we move to smalls and micro - they are not active there.  However, since retail does alright in large (for reasons other than alpha directional plays, I would add), I am not currently prepared to say it is retail, and I think I am missing something.  But all roads presently lead there by tautology.
> 
> Hence the question.  I don't know the answer and it seems to defy the postulates you have just added - which I agree with also.  Something is exceptional about the Australian market that seems to favour domestic insto.  So if the theory doesn't hold against the outcome...we need to look some more.  And these results have held for decades.




I am not sure I understand what you're saying here?

There is this question: 



> why then are Aust insto doing so well vs index across the spectrum of cap?




Given the appropriate data you should be able to answer this question with a Principal Component Analysis, i.e. define a set of fundamental factor return paths and find which (or which combination) have explanatory power for the insto return paths. 
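A minimal sketch of this kind of factor-path analysis, using entirely synthetic data (the three factor paths, the loadings and the noise level are all invented for illustration):

```python
# Sketch of the suggestion above: project fundamental factor return paths
# onto principal components, then see how much of a (synthetic) insto
# return path those components explain.  All series here are invented.
import numpy as np

rng = np.random.default_rng(0)
T = 240  # 20 years of monthly observations

# Hypothetical fundamental factor return paths (say value, momentum, size)
factors = rng.normal(0.0, 0.02, size=(T, 3))

# Synthetic insto excess-return path: mostly momentum, some value, plus noise
insto = 0.2 * factors[:, 0] + 0.5 * factors[:, 1] + rng.normal(0, 0.005, T)

# PCA on the demeaned factor paths via SVD
X = factors - factors.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = X @ Vt.T  # factor paths expressed in principal-component space

# Regress the insto path on the components to gauge explanatory power
y = insto - insto.mean()
beta, res, _, _ = np.linalg.lstsq(components, y, rcond=None)
r2 = 1 - res[0] / (y ** 2).sum()
print(f"R^2 of factor components vs insto path: {r2:.2f}")
```

With real data the interesting part is the leftover: if factor exposures are roughly flat yet the mean excess return is positive, the unexplained residual is what gets attributed to stock selection.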

But I would ask:

* Which index is the benchmark (XAO or XJO)?
* Which instos do you include and exclude from this statement? 
* Are they actually outperforming on a Sharpe/Sortino ratio basis?

But also you seem to be asking another question about why certain market cap quantiles might outperform others?



> But as you go down the cap spectrum, this also ceases to become a viable response because the ownership habitat changes - if insto is doing very well on average - who is the patsy in this market?  Because it isn't insto. It's foreign insto in large cap, but it is not likely to be that as we move to smalls and micro - they are not active there




Can you clarify this?

To me the answer to the second one is simply an issue of liquidity, i.e. overtrading.


----------



## craft (6 May 2014)

sinner said:


> I am not sure I understand what you're saying here?




Me either, but I'll give you a view on what I think you are asking.



DeepState said:


> But I'm just talking about transfer of wealth defined as pure alpha - which is the thing that Baupost/Klarman is talking about. The market is sub zero-sum in terms of transfers.




That is only true of the entire market – it does not have to be true for the sub-indices.

So just restating what McLovin alluded to, because I think he is right.

The top 100 stocks in the Small ordinaries are simultaneously the bottom 100 stocks in the ASX200. 

Small funds measuring themselves against the Small Ordinaries don’t have to be zero sum (gross of fees) in total for outperformance against the XSO.  Their overall outperformance/alpha can be soaked up by funds measured against the XJO.  Given the difference in amounts benchmarked against each index, a reasonable overall outperformance by small cap managers may only register as a small underperformance for the funds (possibly foreign) benchmarked against the XJO.
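A rough numerical sketch of that soaking-up effect; the pool sizes and the alpha figure below are invented purely for illustration:

```python
# Toy arithmetic: alpha extracted by a small XSO-benchmarked pool can show
# up as only a tiny drag on the much larger XJO-benchmarked pool.
small_pool = 20e9    # hypothetical funds benchmarked to the Small Ordinaries
large_pool = 400e9   # hypothetical funds benchmarked to the XJO
small_alpha = 0.03   # assumed 3% p.a. outperformance by small cap managers

alpha_dollars = small_pool * small_alpha

# If that alpha is transferred via the overlapping ASX 100-200 names,
# the implied drag on the XJO-benchmarked pool is much smaller:
large_drag = alpha_dollars / large_pool
print(f"implied underperformance of the XJO pool: {large_drag:.2%}")  # 0.15%
```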


----------



## DeepState (6 May 2014)

sinner said:


> 1. I am not sure I understand what you're saying here?
> 
> 
> 2. Given the appropriate data you should be able to answer this question with a Principal Component Analysis, i.e. define a bunch of fundamental factor return paths and find which (or combination of which) have the explanatory power against the insto return paths.
> ...




Great questions.  

1. I am trying to understand why, in Australia, insto outperforms the cap weighted index for accounts managed against the ASX 200/300, the ASX Small Ords and even indices in micro cap.  It implies that they are in receipt of transfers of alpha from others in the market, in aggregate.  This has been going on for decades and defies what Baupost, you, Craft, McLovin and I think should happen.

2. PCA is used for explaining cross-sectional returns and the factors which emerge are blind, although you might be able to template them against fundamental factor return time series.  So the argument goes that insto may be holding a misweight to certain factors relative to index.  These factors must show a non-zero/positive mean.  It may be possible to find out what these factors pertain to, even though this will always be approximate.  Absolutely.

Sure.  I guess part of it may be due to permanent factor bias in the insto sample.  But, if you examine the universe, you have pretty much all the major styles accounted for (value/growth/quality/mom/cap/leverage/liquidity...).  In aggregate, these exposures are close to flat.  What is left, if you observe the smalls outperformance, is a really big excess return that cannot possibly fall within the ambit of a factor bias. It could be factor timing, but that's alpha.  It isn't due to permanent bias. Well, some of it might be, but no way to that extent.  In the large caps, which are benched to ASX 300 and ASX 200, the insto manager universe is massive.  There are nearly 100 managers surveyed and anyone with $20m or more who wants to show numbers in order to get and retain clientele is in there.  Selection bias has been shown to be very small.  Difference between equal weight and cap weight also is not a big thing.  The median manager routinely produces returns above the indices by ~2% per annum.  ie. the major factors are flat-ish, yet you have outperformance of index.

Hence, it largely falls to stock selection.  The bit not explained by key fundamental factors, or not explained by beta (if only using PCA for average manager vs index) or the higher order Eigenvalue PCA components if looking at the cross section of the index and the manager universe - although this is hard because the universe changes. So it's still weird.

3. Great question.  I do not know.  But apart from Low Vol and a few exceptions like this, most of these guys are seeking to outperform the index in absolute terms.  They have risk stats all over the place.  Aggregate beta is generally close to 1 although this varies a lot with some at 0.7 ex post, to others in the 1.1 range.  Risk adjusted or not, insto is taking money out of the market.  I guess you are arguing a segmentation approach may exist...insto seeks one type of risk-reward outcome and others in the market seek another and all can be happy because it's not win-lose but win-win.  That's certainly a viable explanation, but why is Australia out of whack with the rest of world?  Presumably the rest of world has the same segmentation arguments as well and their mix of objectives should not (I guess) be that different to what exists in Aust.  Aust managers are seeking to beat the index and, typically, each other.  Bottom line is that they are taking money out of the market vs others.  In general these others presumably don't intend to lose money vs rest of market on average.  Yet they are....in Australia.

4. Craft has argued that this may be due to a capitalisation banding issue.  McLovin has indicated that his edge lies in his participation in stocks below the radar on the basis of capitalisation.

Craft's argument says that the Small Cap managers might be outperforming.  Because of index overlap with large cap in the ASX 100-200, it is possible that the Small Cap managers take alpha off the Large Cap managers and yet both still outperform their respective indices (and the obverse, large cap taking money off small cap with both outperforming, can also exist).  Either way you take this argument, insto is taking money off others.  This extends to the micro caps as well - a range that extends below the level of even McLovin's search zone.  Hence, we really can't argue along the lines that it is below the radar.  At every capitalisation band, insto is taking money off the rest of the market.

5. Liquidity demand and t-cost may be a reason why others add to the zero sum nature of aggregate performance.  But frictions of this nature don't impact the market index; the index return does not factor in any t-costs.  Are you arguing that insto is providing liquidity to the rest of the market?  That would be a viable explanation, perhaps.  However, it has been found that insto generally demands liquidity.  It turns out that retail is the liquidity provider and gains from this.  Pretty amazing.  So liquidity provision is probably not a big explanatory factor for the level of outperformance seen.  The active exposure to the liquidity factor is not significant enough, nor the returns to the factor high enough, to make up for the difference in any way. 

So I'm still amazed and puzzled.  My best guess is that insto gets preferential treatment for seasoned offerings in bookbuilds which don't exceed about 10% of the existing float - depending on company rules and ASX rules.  Because of index construction rules, they are usually not added in index weight until much later.  The insto managers get to pick up the premium and the index doesn't show anything.  Insto has this access and it is not available to retail.  But but but...this stuff must happen overseas too.  Puzzled again.


----------



## craft (6 May 2014)

The chart you referenced below was for small cap manager’s excess return.

Are you now saying that all managers in aggregate produce an excess return over the total market return?

What’s the supporting evidence for that claim?

Private placements (Grrrrrr)  –  I’m not sure how their use here compares to other markets, but they seem to get used and abused relentlessly.  Maybe part of the answer????


----------



## DeepState (6 May 2014)

craft said:


> The chart you referenced below was for small cap manager’s excess return.
> 
> Are you now saying that all managers in aggregate produce an excess return over the total market return?
> 
> ...




Hi Craft

Yeah, right across the spectrum.  Maybe it wasn't clear enough in my post of 3 May, 11:10pm.

Yep, there are surveys around from, say, Mercer, which have recorded sector returns since the mid 1990s.  I don't have one to hand but they have shown consistent outperformance of the broad market index since their inception - for Australian equities.  The rest of the sectors are much more in line with what we'd think should happen.  Mercer is the "go-to" place for such things.  If you aren't on the database then, like the tree falling in the forest, you never really happened.  So the participants in the survey essentially comprise any insto seeking external assets.

I think SEOs are part of the answer.  They just aren't all of it, or even the majority of it.  The volumes and discounts offered just aren't large enough to make a decent impact on the gap we are looking at.  It's a pure benefit to instos for just turning up...and getting lunched.

Cheers


----------



## DeepState (6 May 2014)

Managed to drag one out.  Australian equity survey from Sept 2013.  Please note that the number of managers surveyed exceeds 100.  Although the surveys are point in time, where it might be considered that survivorship bias exists (see how the number of managers increases as the time period shortens), this has been investigated and not found to be material in skewing results.  The figures (median - index) to 2013 are weak relative to history but still show material outperformance in aggregate. This does not occur elsewhere in major markets - to my knowledge - for reasons as previously posted.  Nonetheless, here they are.  The columns are 3mth, 1yr, 2yr, 3yr and 5yr ended 30 Sept 2013 annualised for periods longer than a year:




Source: Mercer

Mercer takes other actions like limiting the length of historical data accepted into the survey to a very short period etc. to control for look back bias.  As managers move to extinction, they still tend to report right to the end in the hope of getting money and, whatever happens, the record is still preserved...etc..
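As a toy illustration of why those survivorship controls matter: if funds with zero true alpha simply drop out of a survey when they do badly, the surviving median looks positive.  Simulated numbers only:

```python
# Simulate 100 funds with zero true alpha over 5 years, then drop the
# worst-performing third as if they had closed and left the survey.
import numpy as np

rng = np.random.default_rng(1)
excess = rng.normal(0.0, 0.03, size=(100, 5))  # true alpha is zero

fund_means = excess.mean(axis=1)
full_median = np.median(fund_means)

# Survivors only: funds above the bottom-third cutoff
cutoff = np.quantile(fund_means, 1 / 3)
surv_median = np.median(fund_means[fund_means > cutoff])

print(f"median excess, all funds:  {full_median:+.2%}")
print(f"median excess, survivors:  {surv_median:+.2%}")
```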


----------



## sinner (6 May 2014)

DeepState said:


> Managed to drag one out.  Australian equity survey from Sept 2013.  Please note that the number of managers surveyed exceeds 100.  Although the surveys are point in time, where it might be considered that survivorship bias exists (see how the number of managers increases as the time period shortens), this has been investigated and not found to be material in skewing results.  The figures (median - index) to 2013 are weak relative to history but still show material outperformance in aggregate. This does not occur elsewhere in major markets - to my knowledge - for reasons as previously posted.  Nonetheless, here they are.  The columns are 3mth, 1yr, 2yr, 3yr and 5yr ended 30 Sept 2013 annualised for periods longer than a year:
> 
> View attachment 57871
> 
> ...




Hm. That seems quite odd, if I'm understanding the table correctly, to say the least. Not to mention, completely at odds with not just the global contemporary situation as you mention, but even my own understanding of the Australian market?

I present the following paper:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2339661


> We investigate the existence and sources of performance persistence among Australian equity funds, using monthly portfolio holdings data. We find that persistence exists, and that it relates to outperforming rather than underperforming funds, primarily stems from security selection skill, and is associated with exposure to growth and high momentum stocks. Further, persistence largely derives from existing holdings, while subsequent active trading contributes only moderately positive returns for both outperforming and underperforming funds. We also find that persistence fades beyond six-months after the initial ranking, and vanishes after 24-months. In summary, persistence amongst our sample largely stems from the medium-term security selection and portfolio construction skills of previously outperforming growth-orientated managers. Differences between these findings and those for U.S. equity funds imply that the existence and nature of persistence may depend on market context.



(the sections highlighted in red conform with my understanding)

and also "The Case for Indexing Australia" - which I guess should be taken with a grain of salt, since it's a Vanguard funded study and of course they'd be pro indexing, however the data is from Morningstar and I have no reason to assume they've fudged these results:

https://static.vgcontent.info/crp/i...se-for-Indexing-Australia.pdf?20140324|093000
The entire link is worth reading to see the differences between your understanding and the one presented within.  I also provide one of the many useful charts contained within.


----------



## DeepState (6 May 2014)

sinner said:


> Hm. That seems quite odd, if I'm understanding the table correctly, to say the least. Not to mention, completely at odds with not just the global contemporary situation as you mention, but even my own understanding of the Australian market?
> 
> I present the following paper:
> http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2339661
> ...




Hey hey...

Gee you dig well.

On the SSRN paper...I actually know those guys.  One used to broker to me and chat weekly even after I retired, another bought our stuff, another is one hell of an academic, another is also a darn fine academic, and I don't know the person who probably did all the work (the last name).  It's an all-star ensemble.

The research there is about relative performance persistence.  They find that there is persistence among the most successful funds.  Other studies conducted elsewhere show persistence among the underperforming funds (because we can figure out what people are holding over time and kill them; it's also for reasons of what fund managers do when under intense selling pressure).  Their findings are perfectly fine by me.  There is momentum in the Australian market.  It is the strongest factor, although it is much weaker elsewhere and negative in Japan.

They show autocorrelation *in the cross section*.  It does not explain why the mean is above zero.  It explains why funds rotate around the mean.
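A toy simulation of that distinction - fund excess returns that persist period to period but are forced to be zero-sum across funds show strong persistence with no aggregate alpha.  All parameters are invented:

```python
# Each fund's excess return follows an AR(1); the cross-sectional mean is
# removed every period, so in aggregate no alpha is being extracted.
import numpy as np

rng = np.random.default_rng(2)
n_funds, n_periods, rho = 50, 120, 0.8

excess = np.zeros((n_funds, n_periods))
for t in range(1, n_periods):
    excess[:, t] = rho * excess[:, t - 1] + rng.normal(0, 0.01, n_funds)
    excess[:, t] -= excess[:, t].mean()  # zero-sum: no aggregate alpha

# Persistence: correlation of each fund's excess with its previous value
persistence = np.corrcoef(excess[:, :-1].ravel(), excess[:, 1:].ravel())[0, 1]
agg_alpha = excess.mean()

print(f"period-to-period persistence: {persistence:.2f}")
print(f"aggregate alpha: {agg_alpha:.6f}")
```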

So the conundrum continues.

I also looked at the Vanguard figures.  The Morningstar data is net of fees. See the footnote (2) on page 4.  Whilst relevant, the question I have relates to the pure alpha extraction.  Before fees but net of frictions, domestic insto are extracting alpha vs rest of world.


----------



## sinner (6 May 2014)

Glad you appreciate the digging 

Last one from me, it's a stumper and I'm just not qualified to answer 

http://afrsmartinvestor.com.au/p/shares/what_fund_managers_won_tell_you_8MYjF4vobqdBitF1BSocaP

The message of this article is that fund managers underperform. It notes



> Countless studies have cast doubt on the benefits of active management in highly efficient markets, especially US equities, where competition is high and (in theory) prices immediately reflect all publicly available information.
> 
> While the Australian equity market appears similar, the evidence is less compelling but still gives much cause for scepticism.



(personally, my opinion is that the difference in how compelling the evidence is comes down mostly to lack of research and available sample size, as the market is that much smaller/younger)

but later in the article there is a snippet from Mercer:


> Australian Super’s chief executive officer Mark Delaney told Smart Investor there were maybe “two or three” active fund managers in Australia who can consistently beat the index by three or four percentage points.
> 
> Simon Eagleton, a senior partner at Mercer, argues there are a number of factors that allow funds management professionals in the Australian equities market to outperform compared to the US market.
> 
> ...




I can definitely see an interesting point there about the broad thematic investment strategies from overseas, perhaps foreign is more active in the space (via a less official/trackable investment vector?) than you thought?


----------



## DeepState (6 May 2014)

sinner said:


> I can definitely see an interesting point there about the broad thematic investment strategies from overseas, perhaps foreign is more active in the space (via a less official/trackable investment vector?) than you thought?




Delaney is referring to a very high hurdle.  I'm not going to argue that one. He's entitled to his view.

Simon's reference to dumb retail is actually not correct.  He does not have access to the data to discern what is going on there.  I thought so too until we did the work and found otherwise.  It's an easy assumption to make.

His references to foreign insto playing themes is correct.  They see Australia as a bunch of banks and resource companies.  Foreign insto invests in these thematically for the most part.  When they move in, though, they stick to the major names for the most part.  Just an anecdote but if you look at the ownership of NCM-AU it is loaded with offshore insto.  Look at KCN-AU and it is dominated by local insto.  Same theme, vastly different ownership profile.  Same sort of thing, but less strong, for WBC-AU vs BEN-AU.

So foreign insto as patsy in large cap is a contributor. Why it keeps happening is a mystery to me.  It is a less viable explanation when you move down the cap spectrum.  So I still don't know who the patsy is there.  If this goes on any longer, it's obvious it is me....


----------



## KnowThePast (6 September 2014)

DeepState said:


> Sorry for the delayed reply,
> 
> Back in the day, we had to build our own database because commercial systems miss a bunch of things and have problems with primary keys. Stock codes would be re-used etc.  We also had to classify stocks into the correct sectors too.
> 
> ...




I had a very busy summer here, neglected my favourite topic on the forum. Some very interesting discussion here and I wanted to bring it back closer to the thread title. 

RY, I love hearing everything you have to say.  Any other backtester developers on here - I'd love to hear from you too.

As an amateur, I wish I had the means to pay for data; it vastly increases the scope of what is possible to backtest, even though someone else's data may present challenges, such as duplicate or differing keys, as you pointed out. The sectors companies get allocated to are also often useless and better off being manually corrected.

At retail level, finding a consistent edge is not as much of a problem, as long as you stick to smaller, less liquid companies. At your end of town, however, how much of that development was ongoing? The edges that you found - how long would you normally have to exploit them before they were discovered by others? 

Some other thoughts on challenges/considerations for this kind of software. My backtesting software, by the way, is not the only one of its kind I worked on. I also built software that analysed company financials for credit risk, which, under the bonnet, is almost the same thing. Funny you mention HiPortfolio, I used to work for them, although mainly worked on HiTrust.

- Data structure is super important. The data may come from other sources, but if you don't make a copy and store it yourself, it will be slow. It can then be optimised for two purposes - speed of searching through all companies and speed of computation on one company. One will always be at the expense of the other: the data will either be fast to search or fast to compute on. All relative, of course.

- Data will be BIG. And when running a multi-year backtest over thousands of companies, if you need to search the entire database to match the best possible trade, it will be slow. So a highly denormalised data structure may be required, making it even bigger.

- Each company has many years of financials; each set of financials is made up of many figures, many of them computed. Some of those computed figures use the current price as an input, which means they need to be recalculated at every step.

- Database or flat file? It's 2014, yet this is one of the few cases where it is still a tough choice.

- survivorship bias is very important to guard against; results can be impacted very substantially by it. And it's not just delisted companies. There may be companies that you have no interest in investing in now, but that would have been fine candidates 10 years ago. It needs to be possible to include these, but only up to a certain point.

- how much data is enough? Would you trust backtest results if the strategy only resulted in 10 trades over 10 years in one market? What if all trades were in one year, with no matching opportunities in other 9? What if most trades were in one industry? 

- stock splits, consolidations, rights issues are a pain in the ### to handle. 

- there's an infinite number of data oddities that will screw up results. It is often better to just omit some companies from the backtest than to try and handle every combination.

- number of criteria is an important consideration. Designing buy/sell criteria that look at 20 different measures may produce a perfect result, but most likely means that you just fished for data that happened to work in that particular period. There's no good way to know whether that's the case or not. It's nice, as you pointed out before, if the criteria fit some plausible economic theory. Ideally they have been shown to work by other researchers in different markets at different times. The more researched they are, the less likely they are to produce above average results, however.

- combining two good backtest criteria into one does not always result in better strategy. 

- timing issues: record of results vs release of guidance, etc. This tends to be an issue for shorter term strategies. For strategies over many years, it tends to even out.

- selling criteria make a lot less difference than buying criteria for long term strategies.

- when backtesting a scenario with limited funds, being fully invested or not, and when, makes a massive difference. Therefore, percentage of funds you allocate to each position, and how many positions you would normally expect to have becomes a big consideration.

- searching for what makes companies improve revenue/earnings/etc. may be better than searching for ones that appreciate in price, although these usually overlap.

- doing backtests on some "common knowledge" makes you realise most of it is wrong. 
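To make the splits/consolidations point above concrete, a minimal backward price-adjustment sketch; the dates, prices and the 5-for-1 ratio are all invented:

```python
# Walk the price history backwards, scaling pre-split closes down by the
# cumulative split ratio so the whole series is comparable.
raw = [  # (date, close) as reported at the time -- invented data
    ("2010-06-30", 10.00),
    ("2011-06-30", 12.00),
    ("2012-06-30", 2.60),   # after a hypothetical 5-for-1 split
    ("2013-06-30", 3.10),
]
splits = {"2012-01-15": 5.0}  # date -> new shares per old share

def adjusted_closes(raw, splits):
    """Return split-adjusted closes, oldest first."""
    pending = sorted(splits.items())  # ascending (date, ratio)
    out, factor = [], 1.0
    for date, close in reversed(raw):
        # every split dated after this observation scales its price down
        while pending and pending[-1][0] > date:
            factor *= pending.pop()[1]
        out.append((date, round(close / factor, 4)))
    return list(reversed(out))

for date, adj in adjusted_closes(raw, splits):
    print(date, adj)
```

Consolidations are just ratios below 1.0 in the same table; rights issues need a dilution-adjusted factor rather than a simple ratio, which is part of why they are such a pain.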


Hopefully someone finds it useful or just interesting, I want to keep this thread alive.


----------

