Australian (ASX) Stock Market Forum

I would love to see AI beat a professional poker player and, to be honest, I'm not sure that it could. In poker, maths has very little to do with anything. Bluffing is everything. Good poker players can change the way the game is played to their advantage, i.e., get people to change their thinking. They could probably get an AI to modify its thinking too, right before they punish it.

Blackjack now has 8-deck shoes, so you have to track 416 cards. A fresh shoe is cut by a player and the portion behind the cut card is discarded, so now you have to estimate the number of discarded cards and assume an even distribution among them to guess the remainder. The edge from calculating a probability shift has been eroded by new casino strategies.
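For concreteness, the counting arithmetic alluded to here can be sketched in Python. This is a generic Hi-Lo count, not anything from the post itself; the tag values and the half-deck floor on the divisor are standard but assumed for illustration:

```python
# Sketch of the Hi-Lo count applied to an 8-deck (416-card) shoe.
# Tags: 2-6 count +1, 7-9 count 0, tens and aces count -1.
def hilo_value(rank):
    if rank in ("2", "3", "4", "5", "6"):
        return 1
    if rank in ("7", "8", "9"):
        return 0
    return -1  # "10", "J", "Q", "K", "A"

def true_count(seen_ranks, decks_in_shoe=8):
    """Running count divided by an estimate of decks remaining.

    The cut-off portion of the shoe is unseen, which is exactly the
    uncertainty described above: the divisor is only an estimate.
    """
    running = sum(hilo_value(r) for r in seen_ranks)
    decks_remaining = decks_in_shoe - len(seen_ranks) / 52
    return running / max(decks_remaining, 0.5)  # floor avoids divide-by-zero
```

With 26 low cards seen from an 8-deck shoe, `true_count` returns 26 / 7.5, about 3.5, which would be a strongly player-favorable count; the deeper the cut, the shakier the estimate.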
 
Roulette is not beatable. A true wheel has no memory. Between the table limits (which prevent full use of Martingale betting techniques) and the house advantage (green pockets) that creates a negative expectation, no player can win consistently.
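The negative expectation created by the green pockets is easy to make explicit. A minimal sketch (standard roulette arithmetic, nothing specific to this post):

```python
from fractions import Fraction

def even_money_ev(green_pockets):
    """Expected value per unit staked on an even-money bet (red/black)
    on a wheel with 36 numbered pockets plus `green_pockets` zeros."""
    total = 36 + green_pockets
    p_win = Fraction(18, total)
    return p_win * 1 + (1 - p_win) * (-1)

print(even_money_ev(1))  # European wheel (one zero): -1/37, about -2.7% per bet
print(even_money_ev(2))  # American wheel (0 and 00): -1/19, about -5.3% per bet
```

No staking scheme, Martingale included, can turn a sequence of negative-EV bets into a positive expectation; table limits just guarantee the doubling sequence ends while the expectation is still negative.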

His success with roulette was due to recognizing and taking advantage of an unfair (out of balance) roulette wheel. Neither he, nor anyone else, could or can predict which pocket the ball will land in on a fair roulette wheel.

Best, Howard

That does not make sense.

He had success, but it is not beatable?


That is like saying no one has red hair. It's rare, but I have seen it...



See assumptions

Fair wheel. You made an assumption.

That is the assumption that broke Monte Carlo.

and cost casinos around the world money.

Sounds a bit like the efficient market hypothesis, doesn't it?

Of course the casino will change the game: prevent increased bet sizing, add extra zeros, and stop people from winning. That is their bread and butter.

Of course the odds are in their favour. Basic EV shows that.

They check that the games are fair, so that they can win by offering worse odds.

But the point is the games were not fair all of the time

Just like the market. People think that it will follow maths and it will,

But only if the assumptions are correct.

That is the point.

We don't know the deck.

What I meant in blackjack was that the rules don't change. They state the rules, they follow the rules.
The deck is this, the draw is that, the rules are this.
Same in poker: a flush is a flush every game.

I didn't mean cards weren't taken out of the deck.

But the reason it is beatable is that the game is always the same.

The market is not always the same.

That is why a lot of the banks failed in the GFC: the quants' assumptions were wrong.

Then the people bailed them out.

Even the smartest quants forget about assumptions.
 
I would love to see AI beat a professional poker player and, to be honest, I'm not sure that it could. In poker, maths has very little to do with anything. Bluffing is everything. Good poker players can change the way the game is played to their advantage, i.e., get people to change their thinking. They could probably get an AI to modify its thinking too, right before they punish it.

Blackjack now has 8-deck shoes, so you have to track 416 cards. A fresh shoe is cut by a player and the portion behind the cut card is discarded, so now you have to estimate the number of discarded cards and assume an even distribution among them to guess the remainder. The edge from calculating a probability shift has been eroded by new casino strategies.

Greetings --

Just as with games like checkers, chess, and go, the machine plays many thousands of games, learning strategy, tactics, psychology, traits of individuals, etc. AI applications are winning against human card players, including bridge and poker.

In blackjack there are not only a lot of decks, but continuous shuffling. As soon as the dealer picks up discards from a hand, those cards are put back into the dealing shoe, which is continuously shuffling. A discard from one hand can be dealt in the next hand. The count is never far from neutral and never in the player's favor.

Some dealing shoes are automatic. Several decks are shuffled and placed in the shoe. Whenever a card is requested, the shoe releases one. The human dealer does not touch the cards as they are dealt. This is explained as removing an opportunity for a dishonest dealer to favor a compatriot. While illegal to do so in most locations, automatic dealing also allows the dealing shoe to recognize what cards have been dealt, compute the count, and reshuffle whenever it becomes favorable for players.

There are still a few casinos where single or double decks are dealt by hand and dealt nearly completely. They attract card counters, some of whom are very good. But the casino uses a different payout schedule -- particularly the odds for blackjack -- so it remains very difficult for the player to realize an advantage.

All of that said, casinos have little to fear from customers. Having a few big winners is seen as good publicity. And even when a blackjack deck can sometimes have a count in the player's favor, the mean gain per hand is low, the variance is high, it takes a large bankroll to withstand the drawdowns, and the casino must allow players to make big changes in bet size -- at least 10 to 1. Those teams that won large amounts did so by having a team member bet the minimum while counting, signaling another member when the deck was favorable, and having that new player make a large bet. The casinos caught on pretty quickly and that tactic no longer works.

Additionally, facial recognition software alerts casino security as soon as a known winner comes through the door.

But it is still fun to be a spectator and watch people who do not understand the math.

Best, Howard
 

https://www.pokernews.com/news/2015/05/man-vs-machine-pro-ahead-450k-21434.htm

Conclusion

With three out of four of the professional players ahead of the poker bot so far for a combined $458,902, it appears that man is still greater than machine when it comes to no-limit Texas hold'em. Even if "Claudico" is unable to mount a comeback, it proves that AI is stronger now than ever, as it did give some problems to Kim and is dominating Les.
 
I wish I was smart enough and had the Mathematics knowledge to do this stuff, but I fear I'd need to spend the next 10 years becoming an expert at Maths, then another 10 learning about systems and machine learning, then beyond that you need the know-how of actually having a strategy that works, still need the ideas. Hard to know what to concentrate on in life, especially when I'm stronger in other parts of my life, like creative things, music & photography, then you have 10 million people all fighting for those positions and thinking they're amazing at it.

Decisions, decisions :D Interesting discussion in here anyhow. It kind of points to programming knowledge being an essential skill in the future: so many things are being automated, and they all need code and maintaining. Those may be the only jobs left with the way things are going.

It's hard to imagine that us little guys can compete in this world of trading, surely anything we think of has already been thought of and tested etc. Much smarter minds out there working on this stuff than me so what chance have I got?
 
...especially when I'm stronger in other parts of my life, like creative things, music & photography, .....Much smarter minds out there working on this stuff than me so what chance have I got?

None, but it will be a long time before a robot can imbue a creative piece of work with soul. Perhaps never.
 


Oh yeah, the casinos stop people from winning.

But the rules are set and known.

Computers will win all of the games with set rules eventually.

In life we do not always know all of the rules.

Markets are a representation of this chaos.

That is why assumptions and rule of thumb have allowed our species to survive so long.

Until computers can philosophise about these assumptions, we are still OK for the moment.
 
Why would you not improve what you're doing with the inclusion
of quant analysis/programming/data analysis/systems testing?

You can still post a letter or use an email.

If you can use it I personally think you should.
 

Yeah agree.

If can use it do it.

But remember the humble assumptions.
 
Howard, what sort of % accuracy are you personally getting with your ML systems with Python? Looking around I've seen anywhere from 58% up to 95%, I know it would depend on what algorithm you use, but curious as to what you're getting from your efforts :)
 

Yes, the algorithm (model) makes a difference. And, for many models, accuracy is tunable. Often very high accuracy -- 90% or higher -- comes with many small gains. That is OK as long as the losses are about the same magnitude. One difficulty with taking small gains is that the losses can be excessively large.

The sweet spot is high accuracy and short holding period. Holding longer than about three days increases risk considerably, as does accuracy below about 65%.

The metric to maximize is risk-normalized compound annual rate of return -- CAR25.

That said, using close-to-close change one day ahead and state signals, you will be able to develop systems with accuracy in the 75 to 80% range, trades that last 1 or 2 days, and gains and losses with about the same distribution.

The key is to identify conditions unlikely to continue and take the position that gives a profit when the condition changes -- a mean-reversion entry, then trend following for the duration of the trade. Any fast oscillating function will work. Work with some that do not require integer lookback periods. RSI works. The detrended price oscillator works. A simple moving average is restricted to integer lookbacks, so it is less likely to work. In any given period of time, you want as many signals -- zero crossings or equivalent -- as trades. So long-lookback indicators [such as RSI(14) or MACD(12,26,9)] are too slow and will not work. Try to use only one indicator -- at most two. Additional indicators increase overfitting at the cost of generalization.
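As an illustration of the fast-oscillator, mean-reversion-entry idea, here is a minimal Python sketch of a short-lookback RSI with threshold signals. The simple-average form of RSI (rather than Wilder smoothing) and the 10/50 thresholds are illustrative assumptions, not Howard's actual system:

```python
import numpy as np

def fast_rsi(closes, lookback=2):
    """RSI over a very short window, using simple averages of gains
    and losses (Wilder smoothing omitted for brevity)."""
    closes = np.asarray(closes, dtype=float)
    deltas = np.diff(closes)
    rsi = np.full(len(closes), np.nan)
    for i in range(lookback, len(closes)):
        window = deltas[i - lookback:i]
        gain = window[window > 0].sum()
        loss = -window[window < 0].sum()
        rsi[i] = 50.0 if gain + loss == 0 else 100.0 * gain / (gain + loss)
    return rsi

def mean_reversion_signals(closes, entry_level=10.0, exit_level=50.0):
    """Enter long when the fast RSI is stretched low (a condition unlikely
    to continue); exit when it recrosses the midline."""
    r = fast_rsi(closes)
    return r < entry_level, r > exit_level  # entry flags, exit flags
```

A lookback of 2 produces many zero crossings per month, matching the "as many signals as trades" requirement, where RSI(14) would not.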

Pay very close attention to good learning practice -- in-sample fitting, followed by out-of-sample validation, followed by trade-by-trade management.

Best, Howard
 
This looks ok and has a high accuracy, but falls apart on walk forward. DAX/daily/5 contracts/$5comm per contract. Not sure what to do - any suggestions appreciated.

 
Really, there is no "competition". The trader is not competing against another trader but simply trading the price action as it unfolds. When a company/individual can provide a claim stating "historical results are indicative of future outcomes", then there will be no more auction.
 
This looks ok and has a high accuracy, but falls apart on walk forward. DAX/daily/5 contracts/$5comm per contract. Not sure what to do - any suggestions appreciated.

Greetings --

The short answer is that the model overfit the in-sample data during learning and does not fit the out-of-sample data.

Some explanation might be helpful.

The system development process is classical learning using the scientific method. I do not understand why, but the trading community has been super slow to recognize that systems to predict stock direction (trades) are similar in almost all respects to systems such as those that predict loan default. It is critical that the data processed for prediction has the same distribution, with respect to the signal being identified, as the data processed for model fitting.

To develop a system that will predict whether a borrower will repay a loan, the lender gathers data that hopefully has some predictive value from a large group of customers, some of whom repaid and others who have defaulted. If conditions for the period the data represents are relatively constant, the distribution of any randomly chosen subgroups will be the same as the distribution of the entire group and, importantly, of future customers. With respect to loan repayment, the data is stationary. The future resembles the past.

The data scientists develop the model that goes with the data by selecting a random subsample of the data and fitting the rules to it. This is the "training" data and fitting is the learning process. This is "supervised" learning -- each data point has values for the predictor variables (income, job history, etc.) and the loan repayment -- the target -- is known. The fitting process is a mathematical process of finding the best solution to a set of simultaneous equations, Xa = y, where X is a large array of values for the predictor variables and y is a column array of values for the target. The model is the array a -- the coefficients of the solution. There will be a solution -- there will be an a array -- whether the model has predictive capability or not.
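The point that a solution exists whether or not the model has predictive capability is easy to demonstrate: least squares happily returns coefficients for pure noise. A small sketch (the shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 200 "customers", 5 predictor variables
y = rng.normal(size=200)       # a target that is pure noise

# lstsq always produces a coefficient array "a" -- a best-fit solution to
# Xa ~= y exists regardless of whether X carries any information about y.
a, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(a.shape)  # (5,): one coefficient per predictor, even for random data
```

Whether those coefficients mean anything can only be checked on data the fit never saw.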

The model -- the rules -- may have identified one or more important features in the data that are consistently associated with probability of repayment. But the developer cannot tell by looking at the learning process. He or she has reserved a separate subsample that is not used at all in learning -- call it the "test" data. As a one-time test, the model is applied to the test data. The value of the target is known, but is only used for later reference. A predicted value for each test data point is computed using the a array. Comparison between the known target values and predicted target values lets the person building the model know whether the model learned general features or just fit the particular data it was given -- including all the randomness.

The scientific method insists on two phases -- learning and testing. Without independent testing, nothing can be said about the model or its predictions.

The trading system development profession has largely ignored the scientific method. Independent, one-time, testing using data that follows the training data and has not been used in development is seldom done. Trading system developers see the equity curve from the training portion, and assume completely without justification, that future results will be similar.

That may be true, but in order for it to be true, two conditions must hold.
1. The future must resemble the past. That is, the data used for learning and the data used for testing / trading must have the same distribution (with respect to the signal). This is stationarity.
2. The model must learn real, predictive signals rather than simply fit to the randomness of the data. This is learning.

When building models for stationary data such as loan repayment, some model algorithms produce in-sample results that can be used to give estimates of out-of-sample performance, while other model algorithms always overfit in-sample and have no value for estimating out-of-sample performance. Out-of-sample testing is always required before estimating future performance.
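The learn-then-test discipline described above can be demonstrated in a few lines, assuming scikit-learn is available. The data below is deliberately random, so the near-perfect in-sample score is pure overfitting and the out-of-sample score collapses to chance:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))    # predictors with no real signal
y = rng.integers(0, 2, size=500)  # random binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree memorizes the training data ("a solution always
# exists"), but the held-out test data exposes the lack of learning.
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("in-sample accuracy:    ", model.score(X_tr, y_tr))  # ~1.0
print("out-of-sample accuracy:", model.score(X_te, y_te))  # ~0.5, chance
```

The held-out set here plays the role of Howard's one-time "test" data: it is touched once, after all fitting decisions are final.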

Back to trading.

A system that works in-sample has fit a set of equations to a set of data. Whether there is true learning or not, there is always a solution -- a set of trades and an associated equity curve. Most are bad and are discarded. We test so many combinations that eventually one fits and the result looks good. It might be simply an overfit solution or it might be a representation of a truly predictive system. We cannot tell anything about the future performance without testing never-seen-before future data. When results of this out-of-sample test are poor, it is because one or both of the two conditions do not hold. Either the data is not stationary beyond the time period of the learning data; or the model has not learned.

I hope this helps.

Thanks for listening, Howard
 
Really, there is no "competition". The trader is not competing against another trader but simply trading the price action as it unfolds.

I have a different opinion. I believe there is a competition.

Share prices change from one reasonably stable level to another for reasons that cannot be assigned specific causes. There is a limited amount of profit available from a given price change. That profit goes to the traders in reasonable proportion to each trader's ability. The best traders get the most profit.

As an analogy. Imagine an airplane flies over a beach. Everyone at the beach can see it as it drops several hundred $20 bills in an area where there are few people. There is a dash toward the money -- a competitive event where the faster and more aggressive a person is, the more money he can gather.

I do not have to just be better than my naive brother-in-law. There are other players in the game, and they vary in ability. David Shaw, James Simons, and Goldman Sachs will always out-compete me. I still have to be good enough to beat most of the players.

Best, Howard
 
This looks ok and has a high accuracy, but falls apart on walk forward. DAX/daily/5 contracts/$5comm per contract. Not sure what to do - any suggestions appreciated.


Hi GB --

If this is in-sample, then my post of a few minutes ago applies -- only out-of-sample results have meaning in estimating future performance. Show us the OOS.

If it is out-of-sample, it looks pretty good. Dynamic position sizing will probably handle the drops.

Best, Howard
 

Which brings up a useful point.

There is no way to tell from published results alone whether those results are:
Real-time with real money.
Paper trade.
Out-of-sample from walk forward and good technique.
Pseudo out-of-sample after many passes at fitting and testing.
In-sample.
Hypothetical.
Intentionally misleading.

Best, Howard
 