Australian (ASX) Stock Market Forum

Artificial Intelligence: AI - The dumb man's way to becoming a genius

Came across this AI-generated story.

View attachment 177653



There's a joke about "shitting bricks" in there somewhere. Hmmm... ChatGPT, help me out.


ChatGPT
It looks like if you follow that advice, you'll literally be shitting bricks! Talk about a rocky situation!
Lame, ChatGPT. I won't be outsourcing for a while yet.

Google's AI sucks big time. Hopefully they start losing dominance.

Anyway, now all you have to do is take a photo and ChatGPT will answer. I was using this on my son's Year 12 maths homework. It gives you an answer and shows how to work it out. It has pretty much destroyed the tutoring industry.
Accounting will be next, along with bookkeeping.

I'm thinking there's probably an investment opportunity in getting into the larger companies focused on these areas.
 
AI video is coming along. Hollywood will soon be out of business.
 
A new paper came out: "Situational Awareness: The Decade Ahead".
Overview of it:

The paper is worth skimming.
The interesting part is on dictatorship becoming permanent once backed by AI superintelligence. It would be able to screen everybody for possible dissent, robotics could be used to round us up, and every move would be monitored.

You only have to look at situations like Canada, where they denied people access to their bank accounts and blacklisted them in the system, to realise how much we can lose in a digital world.
 

I do think it is extremely problematic.
I listened to a program on the ABC interviewing a number of serious AI developers who are seriously worried about the future, but, just like the development of the atomic bomb, we can't stop.

As he says above, which matches their take: automated AI research by 2026; by 2030 it's all over.

I am an SF fan and used to laugh off the premise of Terminator. After listening to these guys, I no longer do.
Human-level AI robots in factories and on battlefields.

We won't recognise the world in 15 years.
 
It opens up a lot of questions, including whether we all have to become Luddites.
Does bitcoin get hacked?
Does the stockmarket become impossible to trade?
Is digital banking security at risk?

Let's face it, a superintelligence that may have the ability to control everything is bad enough, but there are hundreds of these things being developed. Yes, the initial opportunities will be huge, but that greed will rush us into slavery.
 
I've been chatting to some techies. It seems a lot of the AI guys want to let AI loose. It's a bit like a religion at the moment, apparently.

I'd slowly begin moving away from digital assets and towards physical ones over the coming years. Honestly, it's hard to see which investments are safe if AI goes wrong.
 
My son was telling me of a recent experience.
The company he works for was looking to put more people on his team, so they asked him what sort of quals etc. he required.
As the role involved a fair bit of coding, my son said that once HR got through the usual guff, he would set the remaining candidates a test to make sure they really knew what they claimed to know.
One girl had outstanding quals, so he asked her to come in and do the test.
She said she was interstate and asked whether she could do the test online.
OK, said my son, but once the test started she was vague with her answers, the internet kept dropping out, and she would disappear for a while and then suddenly come back with the answer.
He was a little wary of this, as he suspected she was either using AI to write the code or being coached.
When he discussed it with HR, the HR person said that after interviewing the girl they knew nothing more about her than when they started.
She did not get the job, and they have decided that in future, if applicants cannot attend in-person sessions, they just won't go any further.
Mick
 
Well, I'm retired but I still maintain an interest in computer programming.

ChatGPT has been my constant companion over the last few weeks, writing some code itself and giving hints. I don't have to pretend to anyone that I can do a job; I just want to learn the things I need to know.

AI is great for piecemeal stuff, but it's no substitute for a well-rounded university course.
 
“You can make large language models do anything but do a science experiment. And in a weird way, even though the models are improving, they're meeting up against greater scepticism among clients because clients have tried them and [think] it's just a high school experiment.”
- Alexander Karp, CEO, Palantir Technologies Inc.
 
I think we've got a long way to go

[Screenshot of a Google AI answer]

⬆️
.. generative AI is going to confuse the innumerate

I think they mean 10 to the power of 9
 
buzzword du jour
.
Agentic AI is a type of artificial intelligence that can perform complex tasks autonomously, adapt to real-time changes, and solve problems based on context and objectives. Agentic AI systems are designed to operate with little to no direct human supervision or interaction.

Agentic AI systems have a number of capabilities, including:
  • Reasoning: Agentic AI can use sophisticated decision-making based on context.
  • Reinforcement learning: Agentic AI can evolve dynamically through interactions with its environment, receiving feedback from those interactions.
  • Language understanding: Agentic AI can comprehend and follow complex instructions.
  • Workflow optimization: Agentic AI can efficiently execute a multi-step process.
Agentic AI has the potential to reshape how businesses operate, potentially redefining roles, workflows, and human-AI collaboration models. Some examples of how agentic AI could be used include:
  • Customer service
    AI-powered chatbots could provide 24/7 customer support, answering product inquiries, assisting with returns, and troubleshooting.
  • Human resources
    AI agents could automate and enhance various HR processes, such as conducting initial candidate screenings and scheduling interviews.
  • Scientific research
    AI agents could autonomously design and run experiments, analyze results, and even formulate new hypotheses.
  • Finance
    AI agents could analyze market trends, make trading decisions, and dynamically adjust investment strategies based on real-time economic data and news events.
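For the technically minded, here is a minimal sketch of what such an "agentic" loop looks like in practice: the model is repeatedly asked what to do next, the requested tool is run, and the result is fed back in until the model declares it is finished. The call_llm() stand-in and the check_order_status tool below are purely illustrative placeholders, not any vendor's actual API (Python):

def check_order_status(order_id: str) -> str:
    # Placeholder "tool": a real system would query an order database here.
    return f"Order {order_id} shipped on Monday."

TOOLS = {"check_order_status": check_order_status}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call, so the loop can run without any external service.
    # It first "decides" to call a tool, then answers once it has seen an observation.
    if "Observation:" not in prompt:
        return "ACTION check_order_status 12345"
    return "FINAL Your order 12345 shipped on Monday."

def run_agent(goal: str, max_steps: int = 5) -> str:
    # The agentic loop: ask the model what to do, run the requested tool,
    # feed the observation back, and repeat until the model gives a final answer.
    prompt = f"Goal: {goal}"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("FINAL"):
            return reply[len("FINAL"):].strip()
        _, tool_name, argument = reply.split(maxsplit=2)
        observation = TOOLS[tool_name](argument)
        prompt += f"\nObservation: {observation}"
    return "Gave up after too many steps."

print(run_agent("Where is order 12345?"))

The point of the sketch is that the "autonomy" comes from that loop rather than from the model itself: swap the placeholders for a real model and real tools and you have something like the customer-service example above.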
 
It seems that in many ways, AI manages to emulate some of the stupid actions of humans.
From Jonathan Turley comes the news that he, along with others, some of whom we know about and some of whom we do not, has been "disappeared" or "ghosted" by ChatGPT.

Brian Hood, a mayor from Hepburn Springs in Victoria, had a slightly different experience with ChatGPT, but was equally unhappy with false information spread about him by it.
Now, after he contacted ChatGPT's management, rather than simply correcting the error, they have "erased" him.
If you type his name into ChatGPT, you get nothing.
It's almost as if it is being run by a political party!
According to ABC News
[Screenshot of the ABC News report]


Mick
 
An AI agent called Boardy can call you and introduce you to other professionals

Boardy Boardman, an AI agent created by New York-based start-up Boardy AI, has gone viral on LinkedIn over the past fortnight as hundreds of users recount their realistic interactions with it.

“I’d love to hear your story. What are you working on, and how did you get here?” Boardy inquired when The Australian Financial Review received its call back.

Boardy claims it can facilitate warm introductions between people who wouldn’t otherwise meet, but are worth knowing. Users send a LinkedIn direct message with their email address and phone number to Boardy to set a call back from the AI agent in motion.

.
....and it's insidious ... a bit of diversity in academia would help:
.
Toby Walsh, chief scientist at UNSW’s AI institute, was less enthusiastic, saying he feared Boardy’s popularity would start a rush of AI-led unsolicited phone calls.

“Our time is our most valuable asset. I fear for a future in which more AI bots consume that asset. Especially when they come with Aussie accents to fool some of us into thinking, perhaps only briefly, that they’re human and not AI,” Dr Walsh said.

“The AI is doubtlessly trained on LinkedIn data. If we’re not careful, such apps will perpetuate the white male patriarchy, connecting more white male CEOs to each other, and not breaking the gender and racial barriers."

!!
 

How DeepSeek’s ‘nerdy’ billionaire stunned Silicon Valley

Hedge fund manager and entrepreneur Liang Wenfeng built an AI model on a tight budget despite a US attempt to halt China’s high-tech ambitions.

Eleanor Olcott and Zijing Wu
28 Jan, 2025 – 8.12am

A small Chinese artificial intelligence lab stunned the world this month by revealing the technical recipe for its cutting-edge model, turning its reclusive leader into a national hero who has defied US attempts to stop China’s high-tech ambitions.
DeepSeek, founded by hedge fund manager Liang Wenfeng, released its R1 model last Monday, explaining in a detailed paper how to build a large language model on a bootstrapped budget that can automatically learn and improve itself without human supervision.
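To make "learn and improve itself without human supervision" a little more concrete, here is a toy sketch of the general idea of reinforcement learning from automatically checkable rewards: candidate outputs are scored by a rule (here, whether an arithmetic answer is correct) rather than by human labellers, and the policy drifts towards whatever scores well. This is only an illustration of the concept, in Python, not DeepSeek's actual method or code:

import random

# Toy illustration of learning from automatically checkable rewards:
# no human labels, just a rule that can verify whether an answer is right.

def reward(a: int, b: int, answer: int) -> float:
    # Rule-based verifier: 1.0 if the sum is correct, 0.0 otherwise.
    return 1.0 if answer == a + b else 0.0

# Two candidate "strategies" the learner can sample from (one is deliberately wrong).
STRATEGIES = {
    "add": lambda a, b: a + b,
    "concatenate": lambda a, b: int(f"{a}{b}"),
}

weights = {name: 1.0 for name in STRATEGIES}  # the "policy" being improved

random.seed(0)
for _ in range(200):
    a, b = random.randint(1, 9), random.randint(1, 9)
    # Sample a strategy in proportion to its current weight.
    name = random.choices(list(weights), weights=list(weights.values()))[0]
    r = reward(a, b, STRATEGIES[name](a, b))
    # Reinforce strategies that earn reward; decay those that don't.
    weights[name] *= 1.1 if r > 0 else 0.9

print(weights)  # "add" ends up dominating, with no human supervision involved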
US companies including OpenAI and Google DeepMind pioneered developments in reasoning models, a relatively new field of AI research attempting to make models match human cognitive capabilities. In December, the San Francisco-based OpenAI released the full version of its o1 model but kept its methods secret.
DeepSeek’s R1 release has sparked a frenzied debate in Silicon Valley about whether better-resourced US AI companies, including Meta and Anthropic, can defend their technical edge.
Meanwhile, Liang has become a focal point of national pride at home. This month, he was the only AI leader selected to attend a publicised meeting of entrepreneurs with the country’s second-most powerful leader, Li Qiang. The entrepreneurs were told to “concentrate efforts to break through key core technologies”.

‘Nerdy guy with a terrible hairstyle’

In 2021, Liang started buying thousands of Nvidia graphics processing units for his AI side project while running his quant trading fund High-Flyer. Industry insiders viewed it as the eccentric actions of a billionaire looking for a new hobby.
“When we first met him, he was this very nerdy guy with a terrible hairstyle talking about building a 10,000-chip cluster to train his own models. We didn’t take him seriously,” said one of Liang’s business partners.

Liang Wenfeng, the founder of Chinese AI startup DeepSeek, speaks behind a microphone at a government symposium this month.

“He couldn’t articulate his vision other than saying: I want to build this, and it will be a game changer. We thought this was only possible from giants like ByteDance and Alibaba,” the person said.

Liang’s status as an outsider in the AI field was an unexpected source of strength. At High-Flyer, he built a fortune by using AI and algorithms to identify patterns that could affect stock prices. His team became adept at using Nvidia chips to make money trading stocks. In 2023, he launched DeepSeek, announcing his intention to develop human-level AI.
“Liang built an exceptional infrastructure team that really understands how the chips worked,” said one founder at a rival LLM company. “He took his best people with him from the hedge fund to DeepSeek.”
After Washington banned Nvidia from exporting its most powerful chips to China, local AI companies have been forced to find innovative ways to maximise the computing power of a limited number of onshore chips – a problem Liang’s team already knew how to solve.

“DeepSeek’s engineers know how to unlock the potential of these GPUs, even if they are not state of the art,” said one AI researcher close to the company.
Industry insiders say DeepSeek’s singular focus on research makes it a dangerous competitor because it is willing to share its breakthroughs rather than protect them for commercial gains. DeepSeek has not raised money from outside funds or made significant moves to monetise its models.
“DeepSeek is run like the early days of DeepMind,” said one AI investor in Beijing. “It is purely focused on research and engineering.”

High pay for best talent

Liang, who is personally involved in DeepSeek’s research, uses proceeds from his hedge fund trading to pay top salaries for the best AI talent. Along with TikTok-owner ByteDance, DeepSeek is known for giving the highest remuneration available to AI engineers in China, with staff based in offices in Hangzhou and Beijing.
“DeepSeek’s offices feel like a university campus for serious researchers,” said the business partner. “The team believes in Liang’s vision: to show the world that the Chinese can be creative and build something from zero.”
DeepSeek and High-Flyer did not respond to a request for comment.

Liang has styled DeepSeek as a uniquely “local” company, staffed with PhDs from top Chinese schools, Peking, Tsinghua and Beihang universities rather than experts from US institutions.
In an interview with the domestic press last year, he said his core team “did not have people who returned from overseas. They are all local … We have to develop the top talent ourselves”. DeepSeek’s identity as a purely Chinese LLM company has won the start-up plaudits at home.
DeepSeek claimed it used just 2048 Nvidia H800s and $US5.6 million ($8.8 million) to train a model with 671 billion parameters, a fraction of what OpenAI and Google spent to train comparably sized models.
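For a rough sense of what that figure implies, here is a back-of-envelope calculation in Python, assuming a rental price of roughly US$2 per H800 GPU-hour (my assumption, not a figure from the article):

# Back-of-envelope check of the headline training cost.
# The ~US$2 per H800 GPU-hour rate is an ASSUMPTION, not from the article.
budget_usd = 5_600_000           # reported training budget
gpus = 2_048                     # reported cluster size
assumed_rate_per_gpu_hour = 2.0  # assumed rental price in USD

total_gpu_hours = budget_usd / assumed_rate_per_gpu_hour  # ~2.8 million GPU-hours
cluster_days = total_gpu_hours / gpus / 24                # ~57 days of wall-clock time

print(f"{total_gpu_hours:,.0f} GPU-hours, roughly {cluster_days:.0f} days on {gpus} GPUs")

In other words, the claim amounts to roughly two months of wall-clock time on the 2,048-GPU cluster, if that assumed hourly rate is in the right ballpark.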
Ritwik Gupta, AI policy researcher at the University of California, Berkeley, said DeepSeek’s recent model releases demonstrate that “there is no moat when it comes to AI capabilities”.
“The first person to train models has to expend lots of resources to get there,” he said. “But the second mover can get there cheaper and more quickly.”
Gupta added that China had a much larger talent pool of systems engineers than the US who understand how to get the best use of computing resources to train and run models more cheaply.
Industry insiders say that even though DeepSeek has shown impressive results with limited resources, it remains an open question whether it can continue to be competitive as the industry evolves.
Returns at High-Flyer, its big backer, lagged behind in 2024, which one person close to Liang blamed on the founder’s attention being mostly focused on DeepSeek.

Its US rivals are not standing still. They are building mega “clusters” of Nvidia’s next-generation Blackwell chips, creating the computing power that threatens to once again create a performance gap with Chinese rivals.

This week, OpenAI said it was creating a joint venture with Japan’s SoftBank, dubbed Stargate, with plans to spend at least $US100 billion on AI infrastructure in the US. Elon Musk’s xAI is massively expanding its Colossus supercomputer to contain more than 1 million GPUs to help train its Grok AI models.

“DeepSeek has one of the largest advanced computing clusters in China,” said Liang’s business partner. “They have enough capacity for now, but not much longer.”

Financial Times
 
DeepSeek allegedly stole their training data out of Berkeley. They apparently have the Nvidia chips as well but can't say anything due to the US sanctions.
There are also rumours of a guy who was killed.

I wouldn't trust anything they say about costs if they stole their training data.
But actual open source is nice. Sam Altman is a bit of a d1ck as well.
 