<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Future of AI]]></title><description><![CDATA[Exploring the many futures of AI, and understanding the environment around it.]]></description><link>https://futureof.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!28Zi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3549bf6c-df28-4fa0-bae3-2ff9c982bb16_1280x1280.png</url><title>The Future of AI</title><link>https://futureof.ai</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 09:35:30 GMT</lastBuildDate><atom:link href="https://futureof.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jeremy Wilken]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[futureofai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[futureofai@substack.com]]></itunes:email><itunes:name><![CDATA[Jeremy Wilken]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jeremy Wilken]]></itunes:author><googleplay:owner><![CDATA[futureofai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[futureofai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jeremy Wilken]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Time to replace your financial advisor with ChatGPT?]]></title><description><![CDATA[AI in news / AI teaching AI / AI stock picks / AI is thirsty / China limiting AI]]></description><link>https://futureof.ai/p/time-to-replace-your-financial-advisor</link><guid isPermaLink="false">https://futureof.ai/p/time-to-replace-your-financial-advisor</guid><dc:creator><![CDATA[Jeremy 
Wilken]]></dc:creator><pubDate>Fri, 21 Apr 2023 01:40:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!28Zi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3549bf6c-df28-4fa0-bae3-2ff9c982bb16_1280x1280.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2>Social&nbsp;</h2><p>Public discourse on AI is changing, and two big leaders in the tech space gave major media interviews. <a href="https://www.forbes.com/sites/martineparis/2023/04/18/elon-musk-dishes-on-google-and-openai-over-ai-wars-on-fox-news/?sh=408eed7b6a8a">Elon Musk joined Tucker Carlson on Fox News</a> to talk about AI. Earlier in the week, Google CEO <a href="https://www.cbsnews.com/news/google-artificial-intelligence-future-60-minutes-transcript-2023-04-16/">Sundar Pichai gave an interview to 60 Minutes</a>, also about AI. </p><p><strong>What&#8217;s changing</strong>: The interviews and beliefs on display are quite different, but the prominence of AI in the media continues to grow. It highlights the gaps in understanding about AI as a technology and its impact on society. The flood of news and discussion about AI is also shaping public opinion and awareness, often through the lens of each audience&#8217;s preferred media.</p><p><strong>What&#8217;s next</strong>: Expect a lot more of these types of conversations, public appearances, and attempts to persuade toward certain beliefs. In many countries where polarization has been on the rise, look for that to play into the discourse. AI is not neutral, and given the impact, I would expect to see more attempts by large players in AI to find ways to reach the public.</p><h2>Technological&nbsp;</h2><p><a href="https://www.technologyreview.com/2021/05/27/1025453/artificial-intelligence-learning-create-itself-agi">Can AI train itself</a>? 
It&#8217;s been a focus of research for some time, and a researcher at Uber has developed a tool called Paired Open-Ended Trailblazer (POET) that trains bots to navigate a simple landscape of basic obstacles. &#8220;POET generates the obstacle courses, assesses bots&#8217; abilities, and assigns their next challenge, all without human involvement.&#8221; In other words, it lets AI train itself, removing the need for humans to figure it all out directly.</p><p><strong>What&#8217;s changing</strong>: We&#8217;ve handed over more and more responsibilities to computers, and the same trend continues with AI. While still in a research phase, this points to yet another way to offload human effort and possibly get better results with less human understanding.</p><p><strong>What&#8217;s next</strong>: If this became standard practice, human review of the resulting behaviors would become even more essential, and more challenging to perform. Efforts would likely need to shift from development expertise to oversight and calibration expertise. It may very well generate new types of outcomes and &#8216;thinking&#8217; that could teach us something or be very hard for us to reason through. </p><h2>Economic&nbsp;</h2><p>A recent paper by Alejandro Lopez-Lira and Yuehua Tang of the University of Florida explores <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4412788">the ability of large language models to forecast stock prices based on news headlines</a>. The authors used ChatGPT to label each news item as good, bad, or neutral, computed a score from those labels, and correlated the values against the next day&#8217;s returns, finding the approach better than random at forecasting. 
It also reviewed other models like GPT-1 and BERT and found the less complex models were incapable, suggesting that &#8220;return predictability is an emerging capacity of complex models.&#8221;</p><p><strong>What&#8217;s changing</strong>: Computers have been a major player in the stock market for a while now, and the introduction of LLMs into these systems may further push computers&#8217; ability to predict (and therefore automate) financial outcomes. The automated systems in use by big stock players are certainly confidential, but I would expect they already incorporate news, either manually or automatically. I suspect the main change here is the improving accuracy of sentiment analysis on news items, and its incorporation into more investors&#8217; practices.</p><p><strong>What&#8217;s next</strong>: The concept of a stock market is based around information availability. Stocks rise and fall based on their earnings, news reports from the industry, or even global pressures unrelated to the stock itself. We have a complex system where one company&#8217;s negative earnings can have outsized consequences on the rest of the market (and outside of the market too). This type of approach reinforces the information flows we have today, and I suspect it leads to more instability as each player (human or digital) tries to make the fastest moves to capture the gains. The SVB failure was viewed as a potential tipping event for the market, and I fear our future includes more brittle systems because of these deeper integrations that reinforce one another. On the other hand, as the use of LLMs spreads, the advantages will diminish or have to evolve. The hunt for the next advantage will continue.</p><h2>Living&nbsp;</h2><p>We know it takes a lot of energy to train AIs, but <a href="https://futurism.com/the-byte/chatgpt-ai-water-consumption">it also requires a lot of water</a>. 
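</p><p>A back-of-envelope way to put the researchers&#8217; figures in perspective (assuming roughly half a liter of cooling water per 20 to 50 question-and-answer exchanges; the function name is my own):</p>

```python
# Back-of-envelope sketch (assumption: ~0.5 L of cooling water per 20-50
# ChatGPT question-and-answer exchanges, per the researchers' estimate).
def water_use_liters(exchanges, liters_per_batch=0.5, batch_min=20, batch_max=50):
    """Return a (low, high) range of estimated liters for a number of exchanges."""
    low = exchanges / batch_max * liters_per_batch
    high = exchanges / batch_min * liters_per_batch
    return low, high

low, high = water_use_liters(1_000_000)
# a million exchanges: roughly 10,000 to 25,000 liters
```

<p>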
Training GPT-3 alone required an estimated 185,000 gallons of water to cool the data centers, according to researchers. They further estimate that ChatGPT requires half a liter of water for every 20 to 50 question-and-answer exchanges. </p><p><strong>What&#8217;s changing</strong>: Details of the total impact of technologies can be hard to pin down, so research like this aims to raise awareness of previously hidden costs. Data centers use loads of water, but this quantifies the water requirements of your daily ChatGPT usage. The change is still limited to finding ways to account for water consumption in AI and raising awareness.</p><p><strong>What&#8217;s next</strong>: Water is projected to be a resource we can&#8217;t afford to waste. There are plenty of water stress events happening around us, so any large consumers of water are likely targets of regulation and public outrage. Water stress has the potential to cause chaos in a lot of systems, so we need to pay close attention.</p><h2>Political</h2><p>Chinese regulators are <a href="https://artifact.news/s/5ovLX3_QZCU=">proposing restrictions on AI systems</a> built inside the country, focused on preventing these systems from challenging the power of the state or the socialist system. </p><p><strong>What&#8217;s changing</strong>: These kinds of regulations are not uncommon in China, but this one would place liability and responsibility for outcomes on the AI creators. It also builds in concepts for designing a safe environment for users, something lacking in many other countries from a regulatory perspective. </p><p><strong>What&#8217;s next</strong>: Regulation generally aims to set the pace and tone of an industry. Meeting these requirements would likely be impossible with the way AI is developed today, and would likely cool the AI market in China dramatically. 
On the other hand, it might act like a mechanism that allows the leadership to influence the industry toward their preferred future. </p>]]></content:encoded></item><item><title><![CDATA[Who’s driving the future of AI?]]></title><description><![CDATA[Tech has been leading, but who is pushing back?]]></description><link>https://futureof.ai/p/whos-driving-the-future-of-ai</link><guid isPermaLink="false">https://futureof.ai/p/whos-driving-the-future-of-ai</guid><dc:creator><![CDATA[Jeremy Wilken]]></dc:creator><pubDate>Thu, 13 Apr 2023 20:00:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!28Zi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3549bf6c-df28-4fa0-bae3-2ff9c982bb16_1280x1280.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>This week, I was struck by the number of regulatory events happening recently. The pushback against AI is showing up more often, but it&#8217;s not always an outright ban. Let&#8217;s explore some of the forces pushing back against AI this week.</p><h2>Social&nbsp;</h2><p>Social advocacy groups have submitted complaints and calls to <a href="https://fortune-com.cdn.ampproject.org/c/s/fortune.com/2023/03/30/openai-chatgpt-gpt-4-ftc-complaint-caidp-beuc-europe/amp/">US and EU regulators to change AI development and launch investigations into ChatGPT</a>. It is unclear if these complaints will drive action, and there were no responses as of this article.</p><p><strong>What&#8217;s changing</strong>: The fight over AI and its effect on society continues to ramp up, and will likely intensify as AI becomes more visible and prominent. The specific complaints target OpenAI and its public ChatGPT tool, and the possible harms it can cause users. It&#8217;s unclear if this will prompt any action, but we are seeing more voices entering the chaotic conversation.</p><p><strong>What&#8217;s next</strong>: Even OpenAI&#8217;s CEO has expressed concerns about where AI may go, despite adding some &#8220;safety limits.&#8221; The voices and conversation about AI, and LLMs in particular, need more diversity and inclusion of those most likely to be impacted by AI. I expect we&#8217;ll see other feedback loops go into action as people and groups look for ways to push back.</p><h2>Technological&nbsp;</h2><p>Tesla employs people to review footage and images from vehicles to understand vehicle behavior, but Reuters reports <a href="https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/">some of these media assets were shared internally, sometimes turned into memes</a>. Today, humans are still needed to review AI training, helping label and improve the quality of training data. Yet vehicles are a private space and park in private places, so this potentially violates privacy even if the images were not linked to a user (they might still include geolocation data). 
Ultimately, the amount of data (even beyond media) can reveal a lot about people, and the ramifications are complex. Yet the data is a requirement for how we build AI today.</p><p><strong>What&#8217;s changing</strong>: Perceptions about privacy and who owns the data are complex, especially across private and public spheres. Cars may record people walking by the vehicle, but so do security cameras at the store or traffic lights. Not a lot of privacy exists outside of your own home (or inside of it, depending on how you outfit it), but the change is how distributed data collection and ownership are becoming. This is a long and slow change, but there are plenty of examples of people being tracked through public camera systems.</p><p><strong>What&#8217;s next</strong>: Cars move between private and public spaces all the time, so what can someone reasonably expect in terms of privacy? What kind of regulation could address the multitude of concerns here? Regulators have thus far been unable to come to strong conclusions, so the current outlook is that it may take some major breaches of privacy before something can change. Humans just aren&#8217;t good at managing complex and distributed change.</p><h2>Economic&nbsp;</h2><p>The economics of growth have won in <a href="https://loksabha.nic.in/Questions/QResult15.aspx?qref=51601&amp;lsno=17">India, after a recent answer to parliament by the country&#8217;s Ministry of Electronics and IT</a>. It states the country is not currently considering laws or regulations to limit the growth of AI. Instead it wants AI to have a &#8220;kinetic effect&#8221; on innovation in the digital ecosystem.</p><p><strong>What&#8217;s changing</strong>: Nothing, actually, and that is the point. Given other events in this newsletter, India is trying to clarify that it will be friendly to AI development and desires growth in this sector. 
The notice points to previous work done in the 2018 National Strategy for AI as sufficient for current concerns. India wants to become a global leader in AI, and today&#8217;s innovation environment favors those with light oversight.</p><p><strong>What&#8217;s next</strong>: The notice also highlights some of the enabling requirements for AI as part of a larger strategy. India has multiple efforts underway to improve investment in data, infrastructure, and expertise. Given the size of India&#8217;s technology sector, this move makes sense for its objective. Yet will this open policy generate the value expected?&nbsp;</p><h2>Environmental&nbsp;</h2><p>Electric vehicles are the preferred platform for autonomous vehicles powered by AI, but there are cases and places where <a href="https://www.tomorrow.city/a/countries-that-are-reconsidering-electric-vehicles">EVs are being reconsidered or proposed for outright bans</a>. Switzerland is considering banning the use of EVs during power outages, and the US state of Wyoming has a proposed bill to phase out EVs by 2035.&nbsp;</p><p><strong>What&#8217;s changing</strong>: There are very different reasons behind what Switzerland and Wyoming are proposing. Switzerland is considering the implications of fully electrified vehicles, and there are some pragmatic issues that arise if we don&#8217;t have backup power sources. Wyoming&#8217;s bill is a political stunt, but it reminds us there are competing interests in current industries that would be affected.</p><p><strong>What&#8217;s next</strong>: Electric vehicles are a mixed bag for the environment: while they run clean, their production and often their charging (depending on the power grid mix) still involve major carbon emissions, a classic shifting-the-burden archetype. 
The dream to electrify everything will cause a large number of impacts (many of which we could anticipate if we had the desire to do something about them), and I have great doubts that we should electrify everything.&nbsp;</p><h2>Political</h2><p><a href="https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/">Italy recently temporarily banned ChatGPT</a> over concerns about how the tool meets EU data compliance rules. Italy is the first Western nation to impose such a ban on an AI-powered chatbot. Now other European nations are connecting with Italian regulators to understand the discussions with OpenAI (the company behind ChatGPT) that ultimately led to the ban.</p><p><strong>What&#8217;s changing</strong>: I have no doubt that regulators have been busy looking at ChatGPT and the many other AI tools, but once one nation takes a major step like this, the rest of the EU may follow. There is no guarantee any other country will, as Sweden has already declared it has no plans to. The EU AI Act is still in progress, and this likely makes it more complicated to finalize.</p><p><strong>What&#8217;s next</strong>: The reality is that regulations are going to shape the future, but there are a lot of questions about who will be driving them and how it will play out. AI is advancing in both technology and implications faster than any laws can be drafted. Existing frameworks, like Europe&#8217;s GDPR, will likely be called in to carry the load of missing regulation, and new interpretations will be required. Lawsuits will likely be a leading pathway for driving changes. 
</p>]]></content:encoded></item><item><title><![CDATA[How will the lag between innovation and regulation affect AI?]]></title><description><![CDATA[Death to humanity, speeding up hardware development, domain specific LLMs, climate change pressures, legal issues in AI]]></description><link>https://futureof.ai/p/how-will-the-lag-between-innovation</link><guid isPermaLink="false">https://futureof.ai/p/how-will-the-lag-between-innovation</guid><dc:creator><![CDATA[Jeremy Wilken]]></dc:creator><pubDate>Thu, 06 Apr 2023 19:14:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q_6v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cee5598-6620-480d-b4f3-d876f1733ffc_1792x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q_6v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cee5598-6620-480d-b4f3-d876f1733ffc_1792x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source
 type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q_6v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cee5598-6620-480d-b4f3-d876f1733ffc_1792x1024.jpeg" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q_6v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cee5598-6620-480d-b4f3-d876f1733ffc_1792x1024.jpeg" width="1456" height="832" class="sizing-normal" alt=""></picture></div></a><figcaption class="image-caption">Got to try out Adobe Firefly and was inspired to flip the script with a robot purchasing an item from a human. &#8220;Robot checkout out and buying a bagel at a bakery with a human server&#8221; by Adobe Firefly</figcaption></figure></div><h2>Social</h2><p><strong>To kick things off, let&#8217;s look at social dynamics:</strong> In a Time magazine op-ed, AI pioneer Eliezer Yudkowsky argues that the recent letter calling for a six-month pause on powerful AI systems does not go far enough: all development should stop. The basic argument is that we are incapable of knowing whether such systems are self-aware, or whether they will be concerned about other sentient life. We simply aren&#8217;t prepared to get artificial general intelligence right on the first try, and if we don&#8217;t, he argues, the outcome is the death of all humans.</p><p><strong>What&#8217;s changing:</strong> Opposition to AI and concern about digital overlords is not new; it&#8217;s a sci-fi trope going back decades and sits at the center of many Hollywood hits. Now the concerns are coming from vocal but knowledgeable experts and are entering mainstream awareness. 
This article appears in a major US magazine that generally publishes high-quality content, and it proposes a complete shutdown of all AI development.</p><p><strong>What&#8217;s possible:</strong> There are a lot of implications that may arise. If the narrative of &#8216;shut it down&#8217; becomes common, it could permeate political races and pundit circles. It could lead to protests against those responsible (hardware, software, research, anyone who uses AI). It could lead to asymmetric responses where some nations or companies halt but others continue. The assumption that AI leads to the death of humanity is pretty stark and difficult to prove, so the thing to watch here is sentiment.</p><h2>Technological</h2><p><strong>Next up, technology:</strong> NVIDIA announced the ability to improve the speed and power consumption of inverse lithography design by 40x (a step in chip making where software designs a critical piece for chip production, a process that normally requires a massive hardware stack and weeks of computation). 
This isn&#8217;t just running on a GPU instead of a CPU; this is optimizing the processing specifically for GPUs.</p><p><strong>What&#8217;s changing:</strong> As chips approach the limits of physics, software increasingly plays a role in developing advancements and making the big improvements needed to keep pace with the expected miniaturization and cost reduction. NVIDIA is working with foundries to allow for faster, cheaper, less carbon-intensive development of chips, which of course adds value. It also shows the value of the massive parallelization of work that GPUs excel at, and how specific use cases can be significantly improved when the right attention is applied.</p><p><strong>What&#8217;s possible:</strong> Look for more opportunities to reduce costs, improve quality, and find other powerful transformations that can come from hardware improvements. Training AI has always been better suited to GPUs, but what other opportunities (with the right technical approach) could be improved? </p><h2>Economic</h2><p><strong>In the economic category:</strong> <a href="https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/">Bloomberg has trained its own large language model</a> (LLM) specifically on finance-related content. This is detailed in a research paper, if you are interested in the technical details of the 50-billion-parameter model trained on a 363-billion-token dataset.</p><p><strong>What&#8217;s changing: </strong>There are multiple LLMs out there today, but so far they are mostly general purpose. This is an LLM dedicated specifically to the financial domain (though it almost certainly retains some generalized capabilities). To my knowledge, this is the first LLM of this size dedicated to a single domain. Its development by a major financial firm indicates interest in building custom LLMs for narrower applications rather than building on others&#8217; systems. 
</p><p><strong>What&#8217;s possible:</strong> There are a few factors at play here. The cost to develop a custom LLM is not out of reach for many large organizations if they have the technical interest and allocate the budget. The knowledge, tools, and awareness of how to develop an LLM are spreading, so I would expect to see more attempts at this. A lot of business models expect that most organizations will want to customize an existing system instead of developing one in house, which still seems safe for today, but watch the cost factors, tools, and data requirements at play. </p><h2>Environmental</h2><p><strong>Next up, the environment: </strong>Vanuatu, a Pacific island nation of approximately 300,000 people, has <a href="https://www.theguardian.com/world/2023/mar/30/un-vote-on-climate-justice-pacific-island-change-crisis-united-nations-vanuatu">submitted a United Nations resolution</a> &#8220;that should make it easier to&nbsp;hold polluting countries legally accountable&nbsp;for failure to act on the climate crisis.&#8221; The island nation has recently had two category-four hurricanes strike within days of one another, and is at high risk from rising sea levels.</p><p><strong>What&#8217;s changing:</strong> While not specifically about AI, this is potentially a major step towards increased international pressure on countries to not only limit their climate impacts but also be financially assessed for the harm they cause other nations. While international courts have limited impact, their rulings are potentially binding in nations that ratify certain treaties.</p><p><strong>What&#8217;s possible:</strong> This could be an avenue for future pressure on AI development, so it&#8217;s worth watching how this plays out. Data centers and technology are also a prime target for reducing climate impact. Carbon offsets are likely not going to be a shield forever (given their shaky track record), and the best approach is actual reduction. 
Power- and carbon-conscious systems may get a boost.</p><h2>Political</h2><p><strong>Last category is politics:</strong> <a href="https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/">Stable Diffusion is currently the target of several lawsuits</a>, generally concerning the rights to images used in training data. There are arguments on both sides, but the key thing I want to focus on is the framing of the arguments, since that is what will likely lead to a ruling. Certain prompts can generate nearly identical outputs to some of the source materials (in this case images), and it is unclear how to handle the disagreement between parties about this result. It is not the same technical image, nor is it directly copied, but it is generated. </p><p><strong>What&#8217;s changing:</strong> Software rights and asset licensing are being put to the test, and until these lawsuits progress further it&#8217;s unclear where they might go. These could become landmark court cases, or they could settle out of court, which would only defer the question for another time. Systems likely need to find ways to do their diligence and properly audit the content going into their training to minimize exposure, but a legal precedent might set more comprehensive limits or barriers.</p><p><strong>What&#8217;s possible:</strong> A likely outcome here is that the legal system tries to stabilize by leaning on past understandings of rights and usage, which could lead to major barriers to developing the massive datasets currently used to train AI systems. 
If rights to all materials are required, it also opens up questions about possible licensing costs and the ability to better audit that the source materials aren&#8217;t copied.</p>]]></content:encoded></item><item><title><![CDATA[Welcome to Future of AI]]></title><description><![CDATA[This is The Future of AI, a weekly newsletter about exploring the many futures of AI, and understanding the environment around it.]]></description><link>https://futureof.ai/p/coming-soon</link><guid isPermaLink="false">https://futureof.ai/p/coming-soon</guid><dc:creator><![CDATA[Jeremy Wilken]]></dc:creator><pubDate>Mon, 28 Mar 2022 18:16:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!28Zi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3549bf6c-df28-4fa0-bae3-2ff9c982bb16_1280x1280.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>This is The Future of AI</strong>, a weekly newsletter about exploring the many futures of AI and understanding the environment around it. Each week I share a set of meaningful events, changes, or analysis related to AI and the larger environment. 
They&#8217;ll be spread across different categories, to ensure we don&#8217;t just look at technology trends but also at things like social dynamics, political changes, and environmental impacts. </p><p>Subscribe now to get the first newsletter when it comes out in April 2023. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://futureof.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://futureof.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>