Time to replace your financial advisor with ChatGPT?
AI in news / AI teaching AI / AI stock picks / AI is thirsty / China limiting AI
Social
Public discourse about AI is shifting, and two big leaders in the tech space gave high-profile interviews this week. Elon Musk joined Tucker Carlson on Fox News to talk about AI. Earlier in the week, Google CEO Sundar Pichai sat down with 60 Minutes, also about AI.
What’s changing: The interviews and beliefs on display are quite different, but the prominence of AI in the media continues to grow. It highlights the gaps in understanding about AI as a technology and its impact on society. The flood of news and discussion about AI is also shaping public opinion and awareness, often filtered through each audience’s preferred media.
What’s next: Expect a lot more of these conversations, public appearances, and attempts to persuade people toward certain beliefs. In the many countries where polarization has been on the rise, look for that to play into the discourse. AI is not neutral, and given its impact I expect large players in AI to look for more ways to reach the public directly.
Technological
Can AI train itself? It’s been a focus of research for some time, and a researcher at Uber has developed a tool called Paired Open-Ended Trailblazer (POET) that trains bots to navigate simple landscapes of basic obstacles. “POET generates the obstacle courses, assesses bots’ abilities, and assigns their next challenge, all without human involvement.” In other words, it lets AI train itself, freeing humans from having to figure it all out directly.
What’s changing: We’ve handed over more and more responsibilities to computers, and the same trend continues with AI. While still in a research phase, this points to yet another way to offload the human effort and possibly get better results with less human understanding.
What’s next: If this became standard practice, then it would follow that human review of the resulting behaviors would be even more essential, and more challenging to assess. Efforts would likely need to move out of development expertise to oversight and calibration expertise. It may very well generate new types of outcomes and ‘thinking’ that could teach us something or be very hard for us to reason through.
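The generate-assess-assign loop in that quote can be sketched in heavily simplified form. This is a toy illustration, not Uber’s actual POET code: every name and number below is hypothetical, with an “environment” reduced to a single difficulty value and an “agent” to a skill level.

```python
import random

def evaluate(agent_skill, difficulty):
    """Toy score: agents do best on courses near their skill level."""
    return max(0.0, 1.0 - abs(agent_skill - difficulty))

def poet_loop(steps=100, seed=0):
    """Sketch of a POET-style curriculum: the system proposes harder
    courses and keeps only those that are neither trivial nor impossible
    for the current agent -- no human picks the next challenge."""
    rng = random.Random(seed)
    agent_skill = 0.0
    difficulty = 0.1
    for _ in range(steps):
        # Optimize: nudge the agent toward mastering the current course.
        agent_skill += 0.1 * (difficulty - agent_skill)
        # Generate: propose a harder course; assess it against the agent.
        candidate = difficulty + rng.uniform(0.0, 0.2)
        if 0.2 < evaluate(agent_skill, candidate) < 0.9:
            difficulty = candidate
    return agent_skill, difficulty
```

Run enough iterations and the course difficulty ratchets upward while the agent’s skill chases it, which is the core open-ended dynamic: the curriculum and the learner co-evolve without human involvement.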
Economic
A recent paper by Alejandro Lopez-Lira and Yuehua Tang of the University of Florida explores whether large language models can forecast stock prices from news headlines. They used ChatGPT to classify each headline as good, bad, or neutral news for the company, computed a score from those labels, and correlated the scores against the next day’s returns, finding the approach better than random at forecasting. They also reviewed other models like GPT-1 and BERT and found the less complex LLMs incapable, suggesting that “return predictability is an emerging capacity of complex models.”
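The scoring scheme is simple enough to sketch. Below, a keyword stub stands in for the actual ChatGPT prompt (which asks whether a headline is good, bad, or unknown for the company’s stock price); the company names, headlines, and keyword lists are all made up for illustration.

```python
from collections import defaultdict

SCORE = {"good": 1, "unknown": 0, "bad": -1}

def classify(headline):
    """Stub for an LLM call; the paper prompts ChatGPT here instead."""
    text = headline.lower()
    if any(word in text for word in ("beats", "record", "surge")):
        return "good"
    if any(word in text for word in ("misses", "lawsuit", "recall")):
        return "bad"
    return "unknown"

def daily_scores(headlines):
    """headlines: list of (ticker, headline) pairs for one trading day.
    Returns the average +1/0/-1 sentiment score per ticker, which the
    paper then correlates against next-day returns."""
    totals, counts = defaultdict(int), defaultdict(int)
    for ticker, text in headlines:
        totals[ticker] += SCORE[classify(text)]
        counts[ticker] += 1
    return {t: totals[t] / counts[t] for t in totals}

news = [
    ("ACME", "ACME beats quarterly earnings estimates"),
    ("ACME", "ACME faces recall of flagship product"),
    ("BOLT", "BOLT shares surge on record demand"),
]
# daily_scores(news) -> {"ACME": 0.0, "BOLT": 1.0}
```

The paper’s finding is essentially that when the classifier is a large model like ChatGPT rather than a keyword match or a smaller model, these averaged scores carry real predictive signal.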
What’s changing: Computers have been major players in the stock market for a while now, and adding LLMs to their toolkits may further improve their ability to predict (and therefore automate) financial outcomes. The automated systems used by big stock players are certainly confidential, but I would expect they already incorporate news, either manually or automatically. I suspect the main change here is the improving accuracy of sentiment analysis on news items, and its incorporation into more investors’ practices.
What’s next: The concept of a stock market is built on information availability. Stocks rise and fall on earnings, news reports from the industry, or even global pressures unrelated to the stock itself. We have a complex system where one company’s negative earnings can have outsized consequences on the rest of the market (and outside of it too). This type of approach reinforces the information flows we have today, and I suspect it leads to more instability as each player (human or digital) races to make the fastest moves. The SVB failure was viewed as a potential tipping event for the market, and I fear our future includes more brittle systems as these deeper integrations reinforce one another. On the other hand, as use of LLMs spreads, the advantage will diminish or have to evolve. The hunt for the next edge will continue.
Living
We know it takes a lot of energy to train AIs, but it also takes a lot of water. Training GPT-3 alone required an estimated 185,000 gallons of water to cool the data centers, according to researchers. They go further, estimating that ChatGPT consumes half a liter of water for every 20-50 question-and-answer exchanges.
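Taking the researchers’ figures at face value, the per-exchange cost is easy to bound with some back-of-envelope arithmetic:

```python
# Back-of-envelope bounds from the cited estimate: roughly half a liter
# of water per 20-50 ChatGPT question-and-answer exchanges.
ML_PER_BATCH = 500            # half a liter, in milliliters
LOW, HIGH = 20, 50            # exchanges covered by that half liter

ml_per_exchange_max = ML_PER_BATCH / LOW    # 25.0 mL per exchange
ml_per_exchange_min = ML_PER_BATCH / HIGH   # 10.0 mL per exchange
```

So each exchange costs on the order of 10-25 milliliters, a couple of teaspoons, which only matters because of the enormous number of exchanges happening every day.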
What’s changing: Details of the total impact of technologies can be hard to pin down, so research like this aims to raise awareness of previously hidden costs. Data centers use loads of water, but this quantifies the water requirements of your daily ChatGPT usage. For now, the change is limited to finding ways to account for water consumption in AI and raising awareness.
What’s next: Water is projected to be a resource we can’t afford to waste. Plenty of water stress events are happening around us, so any large consumer of water is a likely target of regulation and public outrage. Water stress has the potential to cause chaos in many systems, so we need to pay close attention.
Political
Chinese regulators are proposing restrictions on AI systems built inside the country, focused on preventing these systems from challenging the power of the state or the socialist system.
What’s changing: These kinds of regulations are not uncommon in China, but they would place liability and responsibility for outcomes on the AI creators. The proposal builds in concepts for designing a safe environment for users, something many other countries still lack from a regulatory perspective.
What’s next: Regulation generally aims to set the pace and tone of an industry. Meeting these requirements would likely be impossible with the way AI is developed today, and would likely cool the AI market in China dramatically. On the other hand, it might act as a mechanism that allows the leadership to steer the industry toward their preferred future.