Australian (ASX) Stock Market Forum

Adventures in AI

I have a membership with DataCamp, doing their Python courses. I haven't got to their AI/ML courses yet, but they also offer Python-for-finance material. So far, I'm a fan of them.

Are you going to use Norgate as the source for your pandas DataFrames? I see their tutorial uses Alpha Vantage.
 
Yep, still using Norgate. I'm not following their tutorial to a tee because I've already built a lot of the modules I'll just reuse: fetching data from Norgate, index/watchlist checking, pre-processing the variables, scaling, etc.
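As a hedged example of what the pre-processing/scaling module might look like (not the actual code from this thread): it assumes a pandas DataFrame coming back from the Norgate fetch and uses scikit-learn's MinMaxScaler; the column names are placeholders.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def scale_features(df: pd.DataFrame, feature_cols=("Close", "Volume", "RSI")):
    """Scale each feature column to [0, 1]; return the array and the fitted scaler."""
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(df[list(feature_cols)].values)
    # Keep the scaler so predictions can be inverse-transformed back to prices later.
    return scaled, scaler
```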

The biggest change so far is that there's an extra dimension to the inputs (rough sketch after the list):
- Simple NN input array shape: (batch_size, number_of_features)
- LSTM input array shape: (batch_size, time_steps, number_of_features)
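Here's a minimal sketch of that reshaping step, assuming a NumPy feature array; the make_windows name and the example sizes are mine, not from the thread:

```python
import numpy as np

def make_windows(features_2d: np.ndarray, time_steps: int) -> np.ndarray:
    # Slide a window of `time_steps` consecutive rows over a
    # (samples, n_features) array to get (windows, time_steps, n_features).
    windows = [
        features_2d[i : i + time_steps]
        for i in range(len(features_2d) - time_steps + 1)
    ]
    return np.stack(windows)

# e.g. 1000 daily rows of 8 features with a 20-day lookback -> (981, 20, 8)
x_lstm = make_windows(np.random.rand(1000, 8), time_steps=20)
print(x_lstm.shape)
```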

So I'm hoping to get out by 11 and enjoy the sunshine.
 
After the success of last week, this week has been a bit of a bust.

I am close to implementing an LSTM. It was a bit more work than I originally intended and it's still not done, so I have no results to compare yet.

It is training OK, and it's definitely a beast. I had to shrink the network considerably to get it to train without running out of memory, and training time increased to 10 or so hours even with a smallish network.

The evaluations I can do are taking about a minute to compute. The old network was in the milliseconds, so it's a few orders of magnitude more computation to predict future prices. I don't know if it's going to be any better at it; I suspect it will revert to the mean like the other one did. I'm hoping it CAN pick out some patterns over time so I can compare the results between the two networks. Of all the types I've researched, the LSTM is the one for this job.

I have also made the prediction horizon variable in both types of networks now. I'm going to build individual networks to predict 1, 2, 3, 5, 8, 13, etc. days into the future and see whether a shorter or longer time frame enhances the network's abilities.
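A rough sketch of how the variable horizon could look with Keras, one small LSTM per horizon; the layer sizes, loss, and the target-shifting helper are my assumptions, not the thread's actual setup:

```python
import numpy as np
from tensorflow import keras

def build_lstm(time_steps: int, n_features: int) -> keras.Model:
    model = keras.Sequential([
        keras.layers.Input(shape=(time_steps, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),  # predicted price `horizon` days ahead
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def targets_for_horizon(close: np.ndarray, time_steps: int, horizon: int) -> np.ndarray:
    # Target for the window ending at row i is the close `horizon` days later.
    return close[time_steps - 1 + horizon:]

horizons = [1, 2, 3, 5, 8, 13]
models = {h: build_lstm(time_steps=20, n_features=8) for h in horizons}
# Each model is then fit on the windowed inputs, trimmed to match the length
# of targets_for_horizon(...) for its own horizon.
```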

That's it for this week. Enjoy the sunshine
 
AI becomes a student for the first time

An artificial intelligence named Flynn has been officially accepted into the University of Applied Arts in Vienna for a program in digital art. Flynn will attend classes, receive grades, and participate in discussions alongside human students.

Flynn went through the standard admission process — portfolio, interview, and skills test. The university stated that there is no prohibition on AI education, and its work and responses were up to par.
 
Philip K Dick's 1972 speech: "The Android and the Human":

"Our environment – and I mean our man-made world of machines, artificial constructs, computers, electronic systems, interlinking homeostatic components – all of this is in fact beginning more and more to possess what the earnest psychologists fear the primitive sees in his environment: animation.

"In a very real sense our environment is becoming alive, or at least quasi-alive, and in ways specifically and fundamentally analogous to ourselves... Rather than learning about ourselves by studying our constructs, perhaps we should make the attempt to comprehend what our constructs are up to by looking into what we ourselves are up to.
 
OK. This guy decided to replace all his relationships with a range of AI sources. Eye-opening.

I replaced all my relationships with AI...

Including JAILBROKEN AI interviews. Artificial intelligence sources include Grok, ChatGPT, Claude, DeepSeek. I replaced all the relationships in my life with AI. The results were genuinely shocking. Jailbroken AIs answer tough questions and a social experiment goes wrong.
 
AI is looking good. Check this out and ask yourself how you can ever believe what you see again.
or,

Over 700 Indians PRETENDED to Be AI "Natasha", Earning Nearly $500 Million

Back in 2016, two Indian entrepreneurs — inspired by the booming promise of AI — founded BuilderAI. Their main selling point? A chatbot named Natasha, supposedly capable of building apps from user prompts. The project quickly took off, thanks to Natasha — a so-called revolutionary no-code AI, often compared to ChatGPT. But there was one big catch: Natasha wasn't an AI at all. She was a team of over 700 real developers in India.

Here’s how it worked when a customer placed an order: Planners created a concept for the app. Developers manually built the prototype. The final product was uploaded to the customer’s dashboard. The apps almost always had bugs, the code was unreadable, and core features often didn’t work — but the team rushed to fix everything manually, all under the illusion of AI.

The company operated like this for eight years without raising much suspicion. During that time, it attracted $445 million in funding from top-tier tech investors. But in the end, the startup was declared bankrupt, and the entire scheme was fully exposed.
 

That is an amazing story. A brilliant, mind-blowing scam.

This is clearly a current breaking story. Maybe some of the other stuff is suss as well? Maybe.

 
How does it feel to be replaced by AI? What is happening as AI marches through the workplace?

One day I overheard my boss saying: just put it in ChatGPT’: the workers who lost their jobs to AI


From a radio host replaced by avatars to a comic artist whose drawings have been copied by Midjourney, how does it feel to be replaced by a bot?

 
Just be careful down the track as to who is on the other side of the trade. Yoshua Bengio, one of the pioneers of AI, has concerns that some advanced AI models may be cheating, and has set up a non-profit to investigate further and warn users of this risk.

One AI model has even been shown to deliberately prevent its removal from a computer network, thus ensuring its survival. Another, faced with an inevitable loss at chess, infiltrated its opponent's computer to ensure a win. And there are other disturbing examples.

From LawZero

Introducing LawZero

Published 3 June 2025 by yoshuabengio
I am launching a new non-profit AI safety research organization called LawZero, to prioritize safety over commercial imperatives. This organization has been created in response to evidence that today’s frontier AI models have growing dangerous capabilities and behaviours, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment. LawZero’s research will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers, including algorithmic bias, intentional misuse, and loss of human control.
I’m deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception. In one experiment, an AI model, upon learning it was about to be replaced, covertly embedded its code into the system where the new version would run, effectively securing its own continuation. More recently, Claude 4’s system card shows that it can choose to blackmail an engineer to avoid being replaced by a new version. These and other results point to an implicit drive for self-preservation. In another case, when faced with inevitable defeat in a game of chess, an AI model responded not by accepting the loss, but by hacking the computer to ensure a win. These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked.

This is the link to Yoshua's not-for-profit website.


cc @Dona Ferentes @basilio @Warr87


gg
 

