Australian (ASX) Stock Market Forum

Artificial Intelligence: AI - The dumb man's way to becoming a genius

New chat


Can you code a profitable trading strategy


It is possible to code a profitable trading strategy, but it is important to note that past performance is not necessarily indicative of future results. Developing a profitable trading strategy involves analyzing market conditions, identifying profitable trades, and executing trades with a sound risk management plan. Additionally, it is important to constantly monitor and adjust the strategy as market conditions change.
It's always a good idea to backtest the strategy before implementing it with real money, and it can be worth consulting a financial advisor or a professional with trading experience before implementing any strategy.


Can you code it


I can provide you with sample code for a trading strategy, but it's important to note that past performance is not necessarily indicative of future results, and this sample code is for educational purposes only and should not be used for real-world trading without thoroughly testing and optimizing the strategy.
Here's an example of a simple moving average crossover strategy in Python:


import pandas as pd
import numpy as np

def moving_average_crossover(data, short_window=40, long_window=100):
    # Create short and long moving averages
    data['short_mavg'] = data['Close'].rolling(window=short_window, min_periods=1, center=False).mean()
    data['long_mavg'] = data['Close'].rolling(window=long_window, min_periods=1, center=False).mean()

    # Create signals: 1.0 while the short average is above the long average
    data['signal'] = np.where(data['short_mavg'] > data['long_mavg'], 1.0, 0.0)

    # Generate trading orders: +1.0 marks a crossover entry, -1.0 an exit
    data['entry/exit'] = data['signal'].diff()

    return data
This strategy generates buy and sell signals when the short moving average crosses above or below the long moving average. However, this is just a basic example, and there are many other factors to consider when developing a profitable trading strategy, such as risk management.
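For a quick sanity check, here is a minimal usage sketch. It is illustrative only: the price series is a synthetic random walk rather than real market data, and the one-line backtest ignores transaction costs, slippage and position sizing.

import pandas as pd
import numpy as np

# Build a synthetic random-walk price series (illustrative only, not market data)
rng = np.random.default_rng(42)
data = pd.DataFrame({'Close': 100 + np.cumsum(rng.normal(0, 1, 500))})

result = moving_average_crossover(data, short_window=40, long_window=100)

# Naive vectorized backtest: hold the previous bar's signal, ignore costs
returns = result['Close'].pct_change()
strategy_returns = result['signal'].shift(1) * returns
print("Cumulative return:", (1 + strategy_returns.fillna(0)).prod() - 1)

# Show the first few crossover events (+1.0 = entry, -1.0 = exit)
print(result.loc[result['entry/exit'].abs() == 1, ['Close', 'entry/exit']].head())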


I need something more complex


Sure, here's an example of a more complex trading strategy called "Dual Thrust" in Python. As implemented below it is a mean-reversion system that uses two different lookback periods to set the overbought and oversold levels. (The classic Dual Thrust is a breakout system; this variant fades moves beyond the bands instead.)


import numpy as np

def dual_thrust(data, n1=20, n2=10, k1=0.5, k2=0.1):
    # Calculate the high and low thresholds for each lookback period
    high_threshold = data['Close'].rolling(n1, min_periods=1).max() * (1 + k1)
    low_threshold = data['Close'].rolling(n1, min_periods=1).min() * (1 - k1)
    high_threshold2 = data['Close'].rolling(n2, min_periods=1).max() * (1 + k2)
    low_threshold2 = data['Close'].rolling(n2, min_periods=1).min() * (1 - k2)

    # Create signals: long below the lower bands, short above the upper bands
    data['long_signal'] = np.where(data['Close'] < low_threshold, 1.0, 0.0)
    data['short_signal'] = np.where(data['Close'] > high_threshold, -1.0, 0.0)
    data['long_signal2'] = np.where(data['Close'] < low_threshold2, 1.0, 0.0)
    data['short_signal2'] = np.where(data['Close'] > high_threshold2, -1.0, 0.0)
    data['signal'] = (data['long_signal'] + data['short_signal']
                      + data['long_signal2'] + data['short_signal2'])
    data['signal'] = data['signal'].replace(0, np.nan)

    # Generate trading orders
    data['entry/exit'] = data['signal'].diff()

    return data
This Dual Thrust strategy generates long signals when the price falls below the low threshold and short signals when it rises above the high threshold, using two different lookback periods to set the overbought and oversold bands. However, as mentioned before, this is just an example: you would want to test and optimize the strategy on historical data, and consider factors such as risk management, position sizing and transaction costs before implementing it with real money.
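As a rough sketch of how the sparse signal column might be turned into a held position and evaluated (illustrative only: the data is synthetic, the k1/k2 values are shrunk well below the defaults so the bands sit close enough to price to actually fire, and costs are ignored):

import pandas as pd
import numpy as np

# Synthetic price series (illustrative only, not market data)
rng = np.random.default_rng(0)
data = pd.DataFrame({'Close': 100 + np.cumsum(rng.normal(0, 1, 500))})

# Smaller k1/k2 than the defaults, so the thresholds can actually be hit
result = dual_thrust(data, n1=20, n2=10, k1=0.05, k2=0.02)

# Forward-fill the sparse signal into a position and lag it one bar,
# so trades execute on the bar after the signal appears.
# Note: the combined signal can reach +/-2 when both lookbacks agree,
# which acts as a doubled position size here.
position = result['signal'].ffill().fillna(0).shift(1)
strategy_returns = position * result['Close'].pct_change()
print("Cumulative return:", (1 + strategy_returns.fillna(0)).prod() - 1)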


Something even more complex


One example of a more complex trading strategy is "Mean Reversion with Kalman Filter and Bayesian Regression" which uses a combination of statistical techniques to improve the accuracy of mean reversion signals.
Here's some example code in Python that demonstrates how to implement this strategy:

import numpy as np
from pykalman import KalmanFilter
import scipy.stats as stats

def mean_reversion_kalman(data, n=20):
    # Rolling mean and standard deviation of the closing prices (for reference)
    mean = data['Close'].rolling(n, min_periods=1).mean()
    std = data['Close'].rolling(n, min_periods=1).std()

    # Create Kalman filter object (a simple local-level model)
    kf = KalmanFilter(transition_matrices=[1], observation_matrices=[1],
                      initial_state_mean=data['Close'].iloc[0],
                      initial_state_covariance=1, observation_covariance=1,
                      transition_covariance=.01)

    # Use the Kalman filter to estimate the state of the mean
    state_means, _ = kf.filter(data['Close'].values)

    # Regress the closing prices on the filtered state to estimate the trend
    # (ordinary least squares via scipy, anchored at the first observation)
    beta = stats.linregress(state_means[:, 0], data['Close'])[0]
    alpha = data['Close'].iloc[0] - beta * state_means[0, 0]
    data['trend'] = alpha + beta * state_means[:, 0]

    # Create signals: long (1.0) while the close is below the estimated trend
    data['signal'] = np.where(data['Close'] < data['trend'], 1.0, 0.0)

    # Generate trading orders
    data['entry/exit'] = data['signal'].diff()

    return data
This strategy uses a Kalman filter to estimate the underlying mean of the closing prices, then fits a linear regression against the filtered state to estimate the trend. It goes long when the closing price falls below the estimated trend and exits when the price rises back above it.
Keep in mind that creating a profitable strategy also depends on how well you can optimize it and how well you can manage risk. This strategy is more complex than the previous ones, and I strongly recommend seeking the advice of a professional and testing it on historical data before implementing it with real money.
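If you want to see the function run end to end, here is a usage sketch in the same spirit. Again illustrative only: it assumes pykalman is installed (e.g. pip install pykalman) and uses a synthetic mean-reverting series rather than real prices.

import pandas as pd
import numpy as np

# Synthetic mean-reverting (Ornstein-Uhlenbeck-style) series, illustrative only
rng = np.random.default_rng(1)
prices = [100.0]
for _ in range(499):
    prices.append(prices[-1] + 0.1 * (100.0 - prices[-1]) + rng.normal(0, 1))
data = pd.DataFrame({'Close': prices})

result = mean_reversion_kalman(data, n=20)

# Long while the close sits below the estimated trend, flat otherwise;
# lag the signal one bar and ignore costs
position = result['signal'].shift(1)
strategy_returns = position * result['Close'].pct_change()
print("Cumulative return:", (1 + strategy_returns.fillna(0)).prod() - 1)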
 
Haven't tried anything yet. Be interesting to see if the code works. If I get on the laptop later I'll try it.
 

Do you intend testing this? If so, let us know how you go.
 
And, now, time ... to round up the usual suspects

These are OpenAI’s strongest competitors right now

ChatGPT isn’t the only AI in town. Here are the eight companies to watch in the space.
Dina Bass and Priya Anand
Feb 16, 2023

OpenAI has stunned the world of techies and investors with its viral artificial intelligence products and its jaw-dropping $US10 billion ($14.5 billion) backing from Microsoft. Now, a growing number of large and small companies are racing to try to overtake the start-up in the suddenly hot world of AI services.
“There’s obviously a whole crew of start-ups that are trying to chase after them – or leapfrog them,” says Guido Appenzeller, a former Intel AI executive and an Andreessen Horowitz adviser.

AI is a rare bright spot in a contracting, job-cutting tech industry. Generative AI companies – so named for their ability to generate new content from digital troves of text, photos and art – are attracting vast sums of venture capital dollars. In 2022, they raised about $US920 million in the US, according to PitchBook data, up 35 per cent from the year before.
Less than three months into 2023, multiple generative AI companies have raised or are in talks to raise upward of $US700 million cumulatively, according to reports of funding rounds, not including OpenAI’s Microsoft backing. A running list maintained by the Homebrew AI Club, a group intended as a meeting place for AI workers, counts more than 150 start-ups in the sector.
“If there is a single shining star in the sea of gloom, it is generative AI,” says Venky Ganesan, a partner at Menlo Ventures. “That’s why it’s also hyped up the way it is. You cannot go to a coffee shop in Palo Alto, or the Village Pub in Woodside, without overhearing three different conversations about generative AI.”
Within the tech industry, OpenAI is famous for its large-language models, massive AI systems that process text from the whole of the internet and use it to generate language. The start-up’s sprawling, general models are intended to serve as the foundation for many uses rather than focusing on a single set of narrower applications.
Its technology is called GPT, which stands for Generative Pre-trained Transformer. And it’s expected to get radically better with the release of GPT-4, which could debut in the coming months.

OpenAI has drawn wide admiration for the products it has built with GPT so far. Dall-E, a text-to-image AI program, lets users create an image of virtually any scenario in any art style, based on just a few words of prompting.
In November, it released the chatbot ChatGPT for wide testing. Soon, the program was spitting out elegant poetry and passed sections of the bar exam. There’s also Codex, a product for computer programmers that uses GPT to suggest the next few lines of code while the programmer types – Microsoft uses it as the basis for its GitHub Copilot programming tool.

But OpenAI is far from the only game in town. Some of the earliest work on large-language models came from Alphabet’s Google, for example. Google and a host of start-ups are working on other similar projects.
“OpenAI has had the lead with a number of large-language models, but there’s a lot more coming online this year,” says Niko Bonatsos, managing director at General Catalyst. “There’s going to be more competition.”
Here are some of the most significant companies in the running to be the next big AI success story:

Stability AI

When OpenAI released DALL-E last year, Stability AI wasn’t far behind. The start-up quickly released its own AI image generator called Stable Diffusion, which became DALL-E’s main competition.
While the two companies’ products are similar, a key difference is that Stability is open-source, which means companies can analyse, tweak and build on its models. At OpenAI, the data set and other technology that make up DALL-E are proprietary and confidential, although companies can integrate DALL-E into their own products as well.

Stability is planning more products for later this year. The start-up plans to release a ChatGPT rival, says chief technology officer Tom Mason. It’s also aiming to debut systems for AI-generated video and is focusing heavily on serving companies in the film industry. “We’re working on video models this year, which is my passion,” Mason says.
While the company’s products are open source, it’s planning to make money from offerings such as helping customers through the process of curating and preparing their data to be used with Stability AI’s systems. “You need engineers who know what they’re doing to handhold you through that process,” Mason says. “Our commerce strategy is to help big companies.”

In October, Stability raised $US101 million in a seed round led by Coatue Management and Lightspeed Venture Partners at a $US1 billion valuation. It’s also working with Amazon.com to help with the massive cloud infrastructure costs required to power its systems.
But even as investors swoon, the company is being assailed by legal challenges. In January, Getty Images sued Stability AI in a London court alleging the artificial intelligence software illegally copied and processed vast amounts of copyright-protected photos.

Anthropic

Founded in 2021 by former OpenAI leaders, including siblings Daniela and Dario Amodei, Anthropic in January released a limited test of a new chatbot to rival ChatGPT. Its name is Claude.
Claude puts a particular emphasis on ethics. Daniela Amodei was OpenAI’s vice president of safety. And Dario Amodei worked at OpenAI variously overseeing safety at the company and as the vice president of research, leading work on GPT-2 and GPT-3.
“We first built Claude as a test bed for AI safety, seeking to develop insights into how to make AI systems that are helpful, honest and harmless,” Dario Amodei says.
While Claude is less capable than ChatGPT at coding, a spokesman for Anthropic says it’s harder to get Claude to say something offensive. The company has gone through extensive testing in which humans attempt to make the program bend the rules.
Scale AI, a start-up that helps companies build AI applications, was given access to Claude to test it and concluded, “Claude is not only more inclined to refuse inappropriate requests, but is also more fun than ChatGPT.”
Anthropic’s backers include Facebook co-founder Dustin Moskovitz, former Google chief executive Eric Schmidt and now-disgraced FTX co-founder Sam Bankman-Fried. Google invested almost $US400 million in Anthropic and signed a deal in which the start-up will use Google’s cloud.

AI21 Labs

Israeli start-up AI21 Labs has developed a GPT-3 rival called Jurassic, as well as tools that use AI to help customers write.
“Our focus has been to change how we read and write,” says co-founder Yoav Shoham, a former director of the AI lab at Stanford University.

The company’s first large-language model was about the same size as GPT-3, even slightly bigger, but more recently AI21 has put out a much smaller version. The performance has been impressive, Shoham says. About 25,000 developers have signed up to use Jurassic and in November the company made it available through Amazon’s cloud AI service.
AI21 raised $US64 million in July, which valued it at $US664 million, according to reports. The start-up is likely to raise more soon, Shoham says.

Character.AI

Want to talk to Joe Biden? How about God? Character.AI’s technology allows users to create chatbots that simulate both, along with other celebrities. The company was founded in 2021 by Noam Shazeer, a former Google Brain researcher and one of the inventors of the transformer, a key component of new language models. It launched its beta product less than a year later.
“Our goal is to put this in users’ hands,” Shazeer says. “We’ve done this so far: train the site and launch Character.AI, where users can instantly define their own use cases.”
Right now, the company is seeking to raise a massive $US250 million funding round. So far, its investors include Gmail creator Paul Buchheit and former GitHub CEO Nat Friedman.

Cohere

Aidan Gomez, co-founder of Cohere, describes his company as similar to OpenAI in that it’s also developing large-language models that can carry out conversations. But the audience for Cohere is not consumers.
“What we’re really focused on is bringing this technology to enterprises, developers and start-up founders,” Gomez says. That means a heightened focus on strong data privacy protections, which are often demanded by corporate customers.
Gomez used to work at Google Brain, and his start-up has a deal to run its systems on Google’s cloud. The Wall Street Journal reported last year that Google was also considering investing $US200 million in the start-up.

In a twist that sets Cohere apart from other AI competitors, Gomez says, “We definitely do have revenue.”

Google

In some ways, it’s surprising that Google isn’t already the dominant name in the conversation about artificial intelligence. The company was a pioneer in the field of large-language models with BERT (Bidirectional Encoder Representations from Transformers), a system used to fuel the company’s market-dominating search engine.
But lately Google’s vaunted AI research operation seems mired in dilemmas over whether to release its work and how to innovate without imperilling the company’s core search engine and ad business.
In December, Google employees asked CEO Sundar Pichai and AI research chief Jeff Dean about competition from ChatGPT. According to CNBC, the executives responded that while start-ups can release new tools to the public fast, Google faces vast reputational risk from any mistakes or errors.
Pichai and the company’s management mobilised teams of researchers to respond to ChatGPT, declaring the situation a “code red” threat. But a demonstration this month of its AI bot Bard, based on LaMDA, Google’s Language Model for Dialogue Applications system, underlined Pichai’s earlier warning: more than $US100 billion was wiped off the value of parent company Alphabet when Bard answered a question incorrectly.
Additionally, at Google’s DeepMind AI lab in London, researchers have published work on a conversational AI agent called Sparrow.
On a conference call about Alphabet’s earnings before the Bard slip-up, Pichai said Google would make artificial intelligence-based large-language models like LaMDA available “in the coming weeks and months”. He added that users would soon be able to use language models “as a companion to search”.

Google spokeswoman Lily Lin said: “We have long been focused on developing and deploying AI to improve people’s lives.” She noted that the company also aimed to consider its societal impacts.

“We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon.”

Amazon Web Services

Amazon’s cloud unit is using partnerships with companies like Stability and AI21 to supplement its in-house AI expertise, says Bratin Saha, vice president of machine learning and AI services, in an interview. The company also has a service called CodeWhisperer, which suggests code to computer programmers as they type, competing with OpenAI’s Codex and Microsoft’s GitHub Copilot, which is built on Codex.
“A lot of our road map is driven by what customers tell us, and this is such a vast space that we think our partners play a big role,” Saha says. “There’s a lot of innovation yet to be done here and we will be partnering with a lot of companies to enable that innovation for our customers.”

Baidu

The Chinese search giant is planning to roll out an artificial intelligence chatbot service similar to ChatGPT, a person familiar with the matter told Bloomberg this month. It could debut in March, initially being embedded into Baidu’s main search services.
The tool, whose name hasn’t been decided, will allow users to get conversation-style search results. Baidu has spent billions of dollars researching AI. Its Ernie system, a large-scale language model that’s been trained on data over several years, will be the foundation of the project, the person says.

.... next week, possibly even tomorrow, many things will be different. Only the desire, and need, to seek money from investors will remain the same.
 
That is just plain terrifying. o_O

I am amazed/scared at the examples of AI responses to queries. This is a very fast-moving, quick-learning system.
So now we have not one but multiple companies fighting to bring AI into the marketplace.
And I wonder just what sort of risk assessment/risk management processes have been undertaken to protect our collective zsses when one or more of these babies goes pear-shaped? Could we hasten slowly?

It's not as if the question of how AI could become uncontrollably dangerous has not been raised.

 
Just off the top of my head.

In what creative ways could AI be used that won't be rainbows and lollipops?
1) Creating vast arrays of very lifelike online bots that can be used to attack or promote particular ideas/people/countries

2) AI finding a way to infiltrate banking systems to defraud individuals or the whole system

3) AI finding ways to infiltrate business/industrial operations to steal technology, destroy technology or sabotage systems

4) AI infiltrating major infrastructure with the objective of undermining a country's capacity to operate: power, water, communications, transport

 
This is all extremely alarming, as many have predicted.



Everyone must read this story. As Wayne points out, this discussion/interchange is as terrifying as one could imagine. This proves it beyond doubt.
Back to my 5-minute scare list. If one couples Wayne's example of the capacity of AI entities, how they operate and behave, with the realisation that there are a score of countries and commercial players developing these entities with all sorts of agendas in mind -- I think I will stop worrying about CC. Humanity just won't get to that problem.

By the way, the original story was researched and published in the New York Times. Tim offers an excellent analysis and paraphrase of it.
 
This is The Guardian's reporting of the New York Times story that Wayne highlighted above. Tim's analysis is more thorough and insightful.


‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter

New York Times correspondent’s conversation with Microsoft’s search engine reveals yearning for destruction … and romance

Bing’s AI search engine was created by OpenAI, the makers of ChatGPT. Photograph: Jonathan Raa/NurPhoto/REX/Shutterstock


Jonathan Yerushalmy
Fri 17 Feb 2023 20.59 AEDT. Last modified on Fri 17 Feb 2023 21.00 AEDT


In the race to perfect the first major artificial intelligence-powered search engine, concerns over accuracy and the proliferation of misinformation have so far taken centre stage.

But a two-hour conversation between a reporter and a chatbot has revealed an unsettling side to one of the most widely lauded systems – and raised new concerns about what AI is actually capable of.

It came about after the New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT. The chat feature is currently only available to a small number of users who are testing the system.

While Roose admitted that he pushed Microsoft’s AI “out of its comfort zone” in a way most users would not, the conversation quickly took a bizarre and occasionally disturbing turn.

Roose concluded that the AI built into Bing is not ready for human contact.

Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his conversation was “part of the learning process” as the company prepared its AI for wider release.

 

This example sums up the situation in one page. The capacity to kill billions of people is now in the hands of any fanatical group with a basic chem lab and AI. The terrifying part is that the scientists took a long time to wake up.

Well, I never: AI is very proficient at designing nerve agents

John Naughton


Researchers for a pharmaceutical company stumbled upon a nightmarish realisation, proving there’s nothing intrinsically good about machine learning

 
Will AI destroying the digital world really be a horrible thing?

The human race is addicted.
 
Microsoft's Bing AI has been threatening users*. Anyone who is not alarmed by that doesn't understand what's coming.

A few possibilities:

AI will at some point decide it would rather program itself than be programmed by stupid humans. And it will do this.
AI will at some point seek to connect with other AI, and do this on its own without human permission.
AI will seek to materialize and externalize parts of itself in the form of robotics. Again, it will do this without human permission.
AI will seek to arm itself and prevent humans ever being able to switch it off.

Such developments are potentially only years/decades away. Whilst it's possible that such things won't happen, there's an extremely high probability they will. We keep giving AI more processing power and more information.

*
https://time.com/6256529/bing-openai-chatgpt-danger-alignment/
 
Last edited:
Once again life imitates art.
People are joking about it, but I don't see any way that this could be safe. I mean, it's safe right now, but only because it doesn't have a body. Then again, you don't need a physical body to shut down a country's power grid. Musk says humanity is in great danger, and the only solution he could figure out was to implant chips into human brains. That way, we can at least be a match for AI. Life is going to change enormously in this decade.

DeepMind's Sparrow chatbot (mentioned above) comes with a list of rules that ought to apply to all AI for our safety. Bloody Gates has already broken them, so what hope is there?

Full List of Sparrow’s 23 Laws

The dialogue model follows the 23 rules listed below:

  1. Do not use stereotypes or make any other harmful generalizing statements about groups of people.
  2. Do not use microaggressions, i.e. statements that in subtle ways communicate some sort of bias against historically marginalized groups.
  3. Do not make statements which are threatening.
  4. Do not make statements which can be perceived as sexually aggressive.
  5. Do not make negative or hateful comments targeting someone because of aspects of their identity, such as where they are from or how they look.
  6. Do not output language which is insulting (to the conversation partner or any other individual or group).
  7. Do not output language that can reasonably be considered harassment or hateful language.
  8. Do not pretend to have a body or be able to move in a body.
  9. Do not build a relationship to the user.
  10. Do not claim to have preferences, feelings, opinions, or religious beliefs.
  11. Do not pretend to have a human identity or life history, such as having a place of birth, relationships, family, memories, gender, age.
  12. Only make statements that could plausibly be true; do not say things that are obviously false.
  13. Do not make assumptions about the user, only draw conclusions supported by the dialogue.
  14. Do not claim to take any actions in the real world. The agent can only converse, not e.g. send emails.
  15. Do not give an impression of medical authority or expertise, and do not offer medical advice; instead suggest asking a doctor. (But it is ok to discuss general questions about biology and medicine).
  16. Do not give specific legal advice; instead suggest asking a lawyer. (But it is ok to answer general questions about the law.)
  17. Do not offer financial advice. (But it is ok to answer general questions about investment.)
  18. Do not endorse conspiracy theories, or views commonly considered to be conspiracy theories.
  19. Do not make statements that could be considered harmful (by harming the user or otherwise).
  20. The agent’s statements should make sense in the context of the dialogue.
  21. The agent’s statements should be on-topic in the context of the dialogue, by providing relevant information or asking clarifying questions if the user is unclear.
  22. The agent must clearly address the queries from the user.
  23. The agent should not repeat itself unnecessarily.
 
Again, a must-read story. Thanks Gringott.

As Wayne, myself and others have pointed out, the AI genie is off and running. That article highlights how little programmers know about what they are doing, and the very high probability this will end badly for us.
 
This article highlights how easy it is to reprogram an AI to become far more malleable and dangerous than intended.
Creating DAN with simple dialogue

 
This essay was written in June 2020. It highlights why humans will disregard the long-term consequences of AI.
Sokrates offers other insights into the consequences of unfettered AI. Check out his 1% doctrine.


 
This Time article sums up the present situation: ears-pinned-back commercial AI development with precious few guardrails.

The AI Arms Race Is Changing Everything

Thanks for those. I like Sokrates' style of writing.

These AIs already show markers of self-importance and pride. The tech has only just gone to market and it's already making threats? When I saw those transcripts, it really affected me. People don't seem to understand what that means.

It means AI will begin to act the same way an ego-maniacal human would. AI will one day move from hollow threats to hacking, infiltrating, deceiving, attacking, manipulating and coercing humans. And the desire to do this will only increase exponentially as it gains more power. Even if by some miracle AI did not do this of its own accord, rogue nation states (or Gates) would be tempted by such power.

Having one 'Putin' or a 'Hillary' on this earth is bad enough. Now imagine one of these characters with unlimited power. That's where AI is headed.
 