“It is difficult for humans and computers to predict stock market movements.”
If you search for “investing using artificial intelligence” on the Internet, you will find yourself inundated with endless offers asking you to use artificial intelligence to manage your money.
I recently spent half an hour discovering what AI-powered “trading bots” can do with my investments. Many suggest that these bots can deliver profitable returns. However, every reputable financial firm warns that your capital may be at risk. Or, to put it more simply: you could lose money whenever someone else is making stock market decisions for you, whether that someone is a human or a computer.
The power of artificial intelligence has generated such buzz over the past few years that almost a third of investors would be happy to let a trading bot make all the decisions for them, according to a 2023 US survey.
“Investors need to be very careful about using AI,” says John Allan, head of innovation and operations at the UK’s Investment Association. “Investing is something very serious; it affects people and their long-term life goals, so jumping on the latest boom may not be wise.”
“I think we need to wait for AI to prove itself over the long term before we can judge its effectiveness,” Allan adds. “In the meantime, relying on investment experts will remain important.”
John Allan warns that investment in artificial intelligence is still in its infancy
Artificial intelligence-powered trading bots may eventually put some highly trained, highly paid investment professionals out of work, but for now AI trading remains questionable and problematic.
First, AI is not a crystal ball that sees the future any better than a human can. Looking back over the past 25 years, unexpected events have repeatedly sent stock markets tumbling, such as the September 11 attacks, the 2007-2008 credit crisis, and the Covid pandemic.
Second, AI systems are only as good as the raw data and software produced by the human programmers behind them. To explain this, we may need to go back a little in history.
In fact, investment banks have been using basic or “weak” AI to guide their market choices since the early 1980s. Basic AI can study financial data, learn from it, make independent decisions and, hopefully, become ever more accurate. Yet these weak AI systems did not predict 9/11, or even the credit crunch.
Fast forward to today, and when we talk about AI we often mean something called “generative AI,” which is much more powerful AI that can create something new and then learn from it.
When applied to investing, generative AI can ingest large amounts of data and make its own decisions, but it can also devise better ways to study that data and even write its own computer code. However, if this AI was originally fed bad data by human programmers, its decisions may get worse the more code it creates.
Elise Gourier, an associate professor of finance at ESSEC Business School in Paris who specializes in studying mistakes made by artificial intelligence, cites Amazon’s 2018 recruiting efforts as a prime example. “Amazon was one of the first companies to develop an AI tool to recruit people,” she says.
She continues: “So, they would get thousands of CVs, and the AI tool would read the CVs for them and tell them who was suitable for employment.” She adds: “The problem was that the AI tool was trained on the company’s existing employees, most of whom were men. As a result, the algorithm filtered out all the women.”
In the end, Gourier says, “Amazon had to scrap the AI-powered hiring tool.”
Professor Sandra Wachter, a leading researcher in artificial intelligence at the University of Oxford, says that generative AI can also make mistakes and produce incorrect information, known as “hallucination”. She adds: “Generative AI is susceptible to bias and inaccuracy, and can present completely fabricated facts as true. Without oversight, these flaws are difficult to detect.”
Wachter also warns that AI trading systems could be at risk of data leaks or so-called “model inversion attacks”, in which hackers ask the AI a series of specific questions in the hope that it will reveal its underlying code and data.
“Few people expected the Covid pandemic that hit economies and markets.”
AI may end up resembling the stock tipsters we are used to seeing in the Sunday newspapers more than a genius investment adviser. Tipsters have long recommended buying a penny stock on a Sunday, and miraculously the share price rises first thing on Monday.
Naturally, that has nothing whatsoever to do with the tens of thousands of readers who rushed to buy the stock in question.
Despite all these risks, why are so many investors keen to let AI make decisions for them? Some people simply trust computers more than they trust other humans, says business psychologist Stuart Duff, of consultancy Pearn Kandola.
“It certainly reflects an unconscious judgment that human investors are fallible, while machines are objective, logical and considered decision-makers,” he says. “People may believe that AI will never have an off day, never deliberately cheat the system, and never try to hide losses.”
Duff continues: “However, an AI investment tool may simply reflect all the thinking errors and poor judgment of its developers. What is more, it may lack the benefit of experience and quick reactions when unprecedented events occur in the future, such as the financial crash or the Covid pandemic.” He points out that “very few humans could create AI algorithms to deal with such huge events.”