Australian (ASX) Stock Market Forum

CHIP Wars: Semiconductor mayhem

with the rise and rise of Nvidia, how are investors going to piggyback



Nvidia’s December quarter earnings report was even better than expected: revenue more than tripled to $US22.1 billion ($33.7 billion), earnings came in about 12 per cent ahead of market forecasts, and guidance for sales of $US24 billion in the March quarter blew past analyst expectations.

Nvidia surged 16.4 per cent, adding $US277 billion to its market capitalisation, which now stands at $US1.954 trillion. That’s a record single-day gain in value, and it’s particularly stunning because Nvidia had already risen 40 per cent since the start of the year, and 65 per cent since the bull rally that began on Wall Street on 1 November 2023.
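Those figures hang together, and a quick back-of-the-envelope check (a sketch only; the dollar amounts are the ones quoted above, the rounding is mine) confirms the size of the single-day move:

```python
# Back-of-the-envelope check of the Nvidia single-day move quoted above.
# Figures are taken from the article; rounding is approximate.
cap_after = 1.954e12   # market cap after the 16.4% jump, in USD
pct_gain = 0.164

cap_before = cap_after / (1 + pct_gain)   # implied market cap the day before
dollar_gain = cap_after - cap_before      # implied one-day gain in USD

print(f"Implied prior market cap: ${cap_before / 1e12:.3f} trillion")
print(f"Implied one-day gain:     ${dollar_gain / 1e9:.0f} billion")
```

The implied gain works out to roughly $US275 billion, in the same ballpark as the $US277 billion reported (the small gap is just rounding in the quoted percentage).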

The threat of irrationality lies in the halo effect Nvidia is having on other stocks. Every other member of the Magnificent Seven rose on Thursday night, and the semiconductor index, the Nasdaq and the S&P 500 all hit new records.


Sadly, there are no really direct ways to play the Nvidia/AI boom in Australia .... perhaps local investors can think about the AI boom in another way.

Picking local winners​

The expense of AI development is such that size matters. ASX investors have long recognised the advantaged position of big players in our relatively small market – BHP, Commonwealth Bank, CSL, NAB and Westpac, plus ANZ, Wesfarmers, Macquarie Group, Woodside, Goodman Group, Fortescue, Rio Tinto, Telstra, Transurban and Woolworths – as having the most potential.

... With three of the key ingredients that are likely to drive AI gains – massive data sets, deep pockets to invest in AI development, and an existing level of digital maturity – it becomes clearer that big banks, big retailers and big miners like those listed above have the most to gain from early AI adoption.

These companies are already using analytics to optimise operations – personalisation, automation, predictive operations and maintenance – but they have an opportunity to create an even bigger gap between themselves and smaller rivals that lack the ability to invest in AI. Potentially, AI investment in the next few years could set up an advantage for the next decade.
 
It's all the data centres being built for the upcoming AI. Big tech and other companies are spending billions on setting them up to handle the overflow.
Huge upcoming opportunities.
It's a bit like the tech bull run of the late 90s.
 

Morris Chang turned 55. Then he started the world’s most important company​


The world’s most valuable tech companies were founded in dorm rooms, garages and diners by entrepreneurs who were remarkably young. Bill Gates was 19. Steve Jobs was 21. Jeff Bezos and Jensen Huang were 30. But what might just be the world’s most important company was founded by Morris Chang when he was 55 years old.

Never has anyone so old created a business worth so much as Taiwan Semiconductor Manufacturing Company, known simply as TSMC....
Mr Chang had such a long career in the chip business that he would have been a legend of his field even if he’d retired in 1985 .... Instead he reinvented himself. Then he revolutionised his industry.

But he wasn’t successful despite his age. He was successful because of his age. As it turns out, older entrepreneurs are both more common and more productive than younger founders. And nobody personifies the surprising benefits of mid-life entrepreneurship better than Mr Chang, who had worked in the US for three decades when he moved to Taiwan with a singular obsession. “I wanted to build a great semiconductor company,” he [said].

What he built was unlike any existing semiconductor company. You probably use a device with a chip made by TSMC every day, but TSMC does not actually design or market those chips.

That would have sounded completely absurd before the existence of TSMC. Back then, companies designed chips that they manufactured themselves. Chang’s radical idea for a great semiconductor company was one that would exclusively manufacture chips that its customers designed. By not designing or selling its own chips, TSMC never competed with its own clients. In exchange, they wouldn’t have to bother running their own fabrication plants, or fabs, the expensive and dizzyingly sophisticated facilities where circuits are carved on silicon wafers.

The innovative business model behind his chip foundry would transform the industry and make TSMC indispensable to the global economy.
 
with the rise and rise of Nvidia, how are investors going to piggyback....
... and then ....

look, I don't even know where to start. Missed the Nvidia bus, maybe there's another one along soon.


Opinion

Alex Pollak is already investing in ‘the very next’ Nvidia​

A shift in where AI queries are being handled has opened up the investment field to more chipmakers, and to apps we haven’t even dreamed of yet.
John Davidson Columnist
29 May 2024

It can’t be a coincidence that, when asked at The Australian Financial Review AI Summit on Tuesday what artificial intelligence-related stocks the fund management firm Loftus Peak was investing in, its investment chief Alex Pollak immediately mentioned US chipmaker Qualcomm.
On stage at the Summit on Tuesday, Pollak had just finished talking about what he sees as the “next thing” in AI – AI PCs and AI phones – and that the chip systems made by Qualcomm, together with similar systems made by Apple, Google and Samsung, are now beginning to power that next thing.

Just how much that inclusion of Qualcomm, Apple, Google and Samsung in the new AI ecosystem will affect Nvidia, the current AI market darling, remains to be seen.
Over the past six months Pollak has halved Loftus Peak’s holding in Nvidia, expecting he’ll find better growth opportunities elsewhere. Even so, Nvidia climbed more than 7 per cent this week, on news that another big AI player, Elon Musk’s xAI, had been funded to build AI supercomputers using Nvidia chips.
But more AI data centres running yet more Nvidia chips doesn’t seem to be the race Pollak is interested in any more.


“The very next thing we are going to see, and the gate for the horses to run this race, opened a few days ago,” he said. “It’s the AI PC and the AI smartphone.”
Pollak was referring to the Copilot+ PCs announced by Microsoft last week, which the company is describing as a “new era” in computers. They are AI-centric laptops that, for the moment at least, run exclusively on Qualcomm chips.
Later this year they will also use chipsets from chipmakers such as Intel and AMD, but the first generation of Copilot+ PCs will all be Qualcomm-based when they go on sale in June.

Meanwhile, AI smartphones, which began to appear late last year when Google launched its Pixel 8 and accelerated early this year with Samsung launching its Galaxy S24, should reach a crescendo next month when Apple is expected to announce AI changes to its iPhones. These types of devices either use Qualcomm chips or, in the case of those three particular companies, run chips that are in the same family as Qualcomm chips.
But to understand what all this has to do with Nvidia, one first needs to understand the difference between the two big consumers of processor power in the AI era: training and inference.

Training an AI model, which in the case of the generative AI models being built by companies such as Google, OpenAI, Microsoft and Meta, involves hoovering up all the world’s data and looking for statistical relationships between things, is enormously compute intensive, and has seen customers lining up to fill their data centres with powerful Nvidia systems.
But inference, which involves taking a model and getting it to do something useful like writing an email for you, doesn’t require nearly as much compute. Inference, too, has typically been run centrally in huge data centres powered by Nvidia (or similar) chips, but that’s starting to change.
What the AI phones from Google, Samsung and (soon) Apple have in common – and also Microsoft’s Copilot+ PC – is that they all do AI inferencing locally, on low-powered chips inside the device rather than on high-powered chips in the cloud.

Neural processing units​

Training, by and large, is staying in the cloud, but inferencing is spreading out to the edge devices.
To qualify for the Copilot+ PC branding, for instance, laptops need to have an inferencing chip known as a neural processing unit on them, capable of running 40 trillion operations per second, or 40 TOPS.

The Qualcomm Snapdragon X Elite chipset on the first generation of Copilot+ PCs will actually be capable of more than that – 45 TOPS.
That’s not a lot of compute power compared to the 800 TOPS Nvidia’s laptop GPUs are capable of, but Microsoft is betting it’s enough for AI inferencing, even if it’s not enough for AI training.
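The TOPS figures quoted above can be lined up directly. This short sketch (the chip names and throughput numbers are simply the ones the article cites) checks which parts clear Microsoft's 40 TOPS Copilot+ bar:

```python
# NPU/GPU throughput figures as quoted in the article, in trillions of
# operations per second (TOPS).
COPILOT_PLUS_MIN_TOPS = 40  # Microsoft's stated Copilot+ PC requirement

chips = {
    "Qualcomm Snapdragon X Elite (NPU)": 45,
    "Nvidia laptop GPU": 800,
}

for name, tops in chips.items():
    verdict = "meets" if tops >= COPILOT_PLUS_MIN_TOPS else "misses"
    print(f"{name}: {tops} TOPS -> {verdict} the Copilot+ bar")
```

Both parts clear the bar, which is the article's point: the Snapdragon NPU only just qualifies, yet Microsoft is betting that margin is enough for inferencing.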
Indeed, to help inferencing run more effectively on consumer devices like AI PCs and AI phones, Microsoft, Google and others are training new, lightweight versions of their models that run fast on low-powered NPUs, but still have enough accuracy to satisfy consumers.
Microsoft’s Copilot+ PCs are going to have 40 different models of differing sizes, and similarly Google has multiple sizes for its Gemini model, some of which will be small enough to have their inferencing done “on device”, and some so big they still need to run in data centres in the cloud.
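The split described here, small models inferencing on-device and larger ones staying in the cloud, amounts to a simple routing rule. Everything in this sketch is hypothetical (the model names, parameter counts and the 10-billion-parameter cutoff are illustrative, not from the article); it only shows the shape of the decision:

```python
# Hypothetical sketch of on-device vs cloud routing for AI inference.
# Model names, parameter counts and the cutoff are illustrative only.
MODELS = [
    {"name": "tiny-3b", "params_b": 3},      # small enough for an NPU
    {"name": "mid-30b", "params_b": 30},
    {"name": "giant-300b", "params_b": 300},  # data-centre only
]

ON_DEVICE_LIMIT_B = 10  # assumed cutoff: larger models go to the cloud


def route(model: dict) -> str:
    """Return where inference for this model would run."""
    if model["params_b"] <= ON_DEVICE_LIMIT_B:
        return "on-device NPU"
    return "cloud data centre"


for m in MODELS:
    print(f"{m['name']} ({m['params_b']}B params) -> {route(m)}")
```

The trade-off the article goes on to describe falls straight out of this rule: whatever routes on-device gains speed, cost and privacy, at the price of the accuracy a larger cloud model would give.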
From an AI stock investment perspective, Loftus’ Pollak says it’s still very much an open question how much value this shift to NPU for inferencing will draw away from Nvidia, and hand to companies like Qualcomm.
But what it does do is open up the possibility of a whole new generation of apps that will take advantage of this local AI inferencing to produce results that were either impossible or impractical to achieve using the cloud.

Even if local inferencing of small models has the disadvantage of not being as accurate as cloud-based inferencing of large models, it has the distinct advantage of being fast, cheap and, above all, private.
Quizzed on which of those apps might be worth investing in, Pollak was reluctant to say. It’s early days, and we’re yet to see how app developers are going to take advantage of the new AI PCs and AI phones.
As it was with the dawn of the internet and the smartphone, most likely it will be apps no one has even thought of.
 
There are many AI-associated stocks out there which have the global stock pickers in a frenzy. Thanks @Dona Ferentes for posting John Davidson's summary of Alex Pollak's take. I just thought I'd post my experience with semiconductor stocks, with some background.

I was fortunate in that I became interested in AI via quantum computing and then LLMs about five years ago or more, purely to keep my mind active in retirement and not as an investment. Simultaneously, after much prevarication on my part, I opened a US trading account via IBKR from a trading/investing view and began trading, initially biotech and then defence, China-orientated and retail stocks. This was during Covid. Then I moved into the tech and semiconductor sector.

I experienced the start of NVIDIA's rise and have been in there since the beginning, adding along the way and as recently as three weeks ago. I have also traded the Magnificent 7 and a gaggle of IT stocks such as Palantir and Oracle, and stocks involved in production/lithography such as ASML. It is an interesting industry to follow from an engineering and even geopolitical view. I've never held TSMC, just in case the Chinese cousins become more restless.

Friends of mine who know of my interest in AI have asked me if it is too late to get into AI stocks. My answer is that I'm not adding any more atm because of a possible retracement, but I am not selling. I will, though, buy more when I feel the time is right from a fundamental and charting view, whether at a lower or higher price than now.

I am confident that AI is the future and that we are, as many state, at the beginning of a new era.

Luck, as ever, has played a part in my avoiding stocks that have had unexpected setbacks or strange market reactions to seemingly good news.

Two things I have learnt about trading US tech/chip/IT and IT retail stocks, and NVIDIA in particular, are that:

  1. Announcements are more transparent as to a company's fortunes, and price changes on fundamental news are more predictable than with Australian stocks.
  2. NVIDIA is not just a hardware/chip company. The software and systems side of the company was developed years ago and is still being developed and integrated into its present and future business models. It will contribute as much if not more to the bottom line once competitors have caught up as much as they can, or ever will, on the hardware side.

It is an enjoyable, interesting field with news by the hour if you really want to get into it. The profits so far have been good and it ain't finished yet. As ever it may all end in tears. Who knows what tomorrow holds.

gg
 
I will stick with what I understand:

copper, power grids (and utilities), networks (and telcos), and the willingness of IT hardware makers to over-promote their product
 

Semiconductor Australia 2024 highlights​

By Finance News Network | More Articles by Finance News Network


The inaugural Semiconductor Australia conference was held on Thursday 24 October at Deloitte, Quay Quarter Tower, Sydney.

Stefanie Winwood:
Semiconductor Australia 2024 brought together the nation's best and brightest semiconductor, quantum and photonics pioneers, who are advancing critical capabilities in the global chips race. Thank you in particular to our co-hosts, BluGlass (ASX:BLG), Sharecafe and the Semiconductor Sector Service Bureau, S3B. Thank you so much to our generous sponsors, each of whom are contributing to Australia's innovation ecosystem. We couldn't have done this without you, and we really appreciate your invaluable support in bringing together Australia's first semiconductor conference.
At Semiconductor Australia 2024, we were very privileged to hear from three exceptional keynote speakers, who each brought unique insights and perspectives to the stage.
Peter Barrett, with his visionary outlook, inspired us to think big about Australia's role in pioneering next-generation semiconductors and quantum.

Peter Barrett: We run on semiconductors, we run on computation, and so the paths forward are not just opportunities to create great companies, but to benefit everybody and create new opportunities for not only creating wealth, but creating abundance.

Stefanie Winwood: Cathy Foley, Australia's Chief Scientist, highlighted the vital role of technology in our lives and the need for collaboration to drive progress.

Dr Cathy Foley: We’ve got to a point where we’ve got food, air and water as often seen as the essentials of life. I’m saying now that semiconductors are actually part of that. And if we don’t have semiconductors, we are not going to be able to operate as a society.

Stefanie Winwood: And Jonathan Belz captivated us with his deep understanding of market trends, reinforcing the importance of strategic investment in our field.

Jonathan Belz: The journey of crossing over into new networks and knowledge reinforced my belief in technology’s power to alter neural pathways and open up new possibilities, just as semiconductors are redefining how we all interact with the world.

Stefanie Winwood: Throughout the event, we featured over a dozen company presentations, each showcasing their innovative business models and the critical importance of semiconductors within the Australian market, allowing investors and customers a platform for collaboration and growth.

We also hosted three esteemed expert panel sessions on Australia's semiconductor moonshot, funding and commercialising Australian innovation on the global stage, and Australia's role in the quantum revolution.

Donna Looney: So, whether we're talking about, you know, medical science or defence or value-add in renewable energy, all of those sectors, they're reliant on semiconductors to shift the dial, and the investment piece of that is absolutely critical.

Damian Kassabgi: We just took a trip to the US recently, and it was very exciting to see what we would call the political focus on semiconductors and this industry as well, that this is driven by government. There's been an act, the Semiconductor Act, by Biden, in 2022. The reason that the US focuses on it is because they’ve also worked out that this is about their economic prosperity. It's not just a national security matter. It's something that has contributed to a distinct economic prosperity for their citizens.

Rita Gatt: Some of the recent things that we've been looking at at Deloitte – it's predicted that by 2035, I believe, this industry will be worth at least $1 trillion in terms of impact to economies. I think that's still underquoted. I believe, in Australian dollars, it’s $6.1 billion by 2045.

Andrew Dzurak: And, you know, that is advanced manufacturing. That's what Australia has got to be focusing on. And I think we need to shift this misconception with a lot of the general public that manufacturing means big, dirty industries. You know, it's about high tech industries. That's where the real value is going to come.

Professor Michelle Simmons: I think the concept of manufacturing has got completely lost. People think cars or they think foundries of multi-billion dollars. There is niche, small-scale manufacturing where you can make high-end products, and you can make them in Australia. So, you know, Cochlear (ASX:COH) is a classic example of that. I think Australia is actually one of the best places in the world to build those niche manufacturing companies. The concept of actually manufacturing the chips is also something we can do here. So, people think, you know, like I say, multi-billion dollar foundries, you can make a foundry to make precision manufacturing here for less than $100 million. And then you control that whole base layer, you know, and then the rest of the world will come to that foundry where you can only get it manufactured in Australia. And so, you know, we shouldn't throw away the opportunity where we can actually manufacture the core technology, the fundamental in quantum-based technology here.

Stefanie Winwood: Additionally, we partnered with Main Sequence to bring together a VC pitch lab, affectionately known as the “Dolphin Tank”. This dynamic format allowed aspiring entrepreneurs to pitch their ideas in real time to real VC investors in Australia and the US, and fostered an environment of innovation and excitement.
We also streamed one-on-one interviews with all of the company CEOs and founders by Paul Sanger from Sharecafe. These were really popular during the day, and they will be made available online after the event.

Dr Nadia Court: We're at a pivotal moment in history where the convergence of technological advancement and geopolitical shifts is offering Australia a unique opportunity to invest in its ecosystem. As we move forward from today's discussions, I encourage everyone here to continue fostering connections and driving collaboration. By embracing our role in the global semiconductor value chain, we're not only future-proofing Australia's economy, but creating the opportunity to lead in some of the most advanced technologies shaping the world today.

Stefanie Winwood: Thank you all once again for your participation and support. Together we are shaping Australia's semiconductor future. And we can't wait to see you all again at next year's event. We'll be back.

Ends
 