Fantastic blog post from a fellow HF Hawks poster
@statswatcher
www.wheresyoured.at
Goldman Sachs doesn't believe in it long term, which is pretty damning. The power consumption of this technology is already MASSIVE, and US power generation is expected to need to grow 40% to accommodate it. That would require a major redesign of our nation's power grid.
In the blog post they mentioned how they (GS) used an AI to do a valuable task for them. The AI did it well and quickly, but it cost them six times more than it would have cost to hire a human.
The technology isn't there yet and may never be. As someone who is entering this field soon, I look forward to seeing it grow into something useful. I really do think it will. But this notion that AI is going to spontaneously develop into AGI and take over the world, making us obsolete in the process, needs to die. It's very, very, very unrealistic. Human thought and problem solving cannot be reduced to throwing billions of dollars at processing power.
I'd be very wary about conflating generative AI with AI as a whole.
Generative AI has made incredible strides in the past five years. When I entered the field about five years ago, even the foremost researchers really struggled to keep their models from getting stuck in loops of repeated phrases, and the problem of drifting off topic after a few sentences lingered for a few years after that.
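For anyone curious what that repetition failure mode looks like, here's a minimal sketch using the original GPT-2 through Hugging Face's transformers library (my own example, not anything from the blog post). Greedy decoding on small models still loops, and knobs like no_repeat_ngram_size are the kind of blunt fix that got bolted on back then:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The weather today is", return_tensors="pt")

# Plain greedy decoding: small models tend to fall into a loop,
# repeating the same phrase over and over.
looped = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tok.decode(looped[0]))

# no_repeat_ngram_size=3 forbids any 3-gram from appearing twice,
# which crudely breaks the loop (at some cost to fluency).
fixed = model.generate(
    **inputs, max_new_tokens=40, do_sample=False, no_repeat_ngram_size=3
)
print(tok.decode(fixed[0]))
```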
But people in general have no idea what they should do with generative AI or what it is actually good for, so they are just throwing it at all sorts of problems and hoping it sticks. And the obvious problem, as that Goldman Sachs example shows, is that if you throw a general-purpose model like GPT-4 at a task, it will probably be able to solve it. The issue is that there is no guarantee that's a good use of money.
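To make the "solves it, but at what price" point concrete, here's a back-of-envelope sketch. Every number below is a hypothetical placeholder I made up for illustration, not a figure from the Goldman Sachs report; the point is only that per-token API pricing on a flagship model can quietly multiply into a cost several times the human baseline:

```python
# All figures are assumed placeholders, NOT from the GS report.
TASKS_PER_MONTH = 10_000
TOKENS_PER_TASK = 50_000       # prompt + completion tokens per task (assumed)
PRICE_PER_1K_TOKENS = 0.06     # USD, assumed flagship-model API rate
HUMAN_COST_PER_TASK = 0.50     # USD, assumed loaded labor cost per task

ai_cost = TASKS_PER_MONTH * TOKENS_PER_TASK / 1_000 * PRICE_PER_1K_TOKENS
human_cost = TASKS_PER_MONTH * HUMAN_COST_PER_TASK

print(f"AI:    ${ai_cost:,.0f}/month")     # $30,000/month
print(f"Human: ${human_cost:,.0f}/month")  # $5,000/month
print(f"Ratio: {ai_cost / human_cost:.1f}x")  # 6.0x
```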
I personally think there is something to the idea that consultants could save companies a ton of money just by looking over their plans for how they want to use AI and giving a thumbs up or thumbs down depending on whether the idea will actually save them time and money. Because right now people are just excited that this technology exists but don't have the expertise to know where it should be used. So you go to C-suite people with an idea involving AI, and because they only know the hype and nothing about the technology, they sign off on it.
But don't conflate it with AI in general. Automated systems that follow plans, usually with some form of statistical machine-learning component, are so ingrained in so many industries at this point that there is no going back. We have so much data at our fingertips, and are so willing and able to collect more, that even a rudimentary neural net can improve on human performance across a huge range of tasks. Combine it with whatever domain knowledge you have in your field, and whatever you were doing is likely to improve.
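By "rudimentary" I really do mean rudimentary. Here's a sketch of the kind of small learner that's quietly embedded in countless industry pipelines, with scikit-learn's synthetic data standing in for whatever domain data you'd actually have:

```python
# A small off-the-shelf neural net of the "rudimentary" kind described
# above. Synthetic data is a stand-in for real domain data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One small hidden layer: nothing fancy, trains in seconds on a laptop.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```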
But yeah, the idea that we're right on the verge of discovering AGI, and that it would take over (and also threaten) the world, always felt to me like it was pushed more by snake oil salesmen like Musk than by reliable researchers in the field. It was always going to be a boon for people like Musk if the average person thought AI was far more capable than it actually is, and so scary that other companies shouldn't be allowed to research it freely. Although there were, and still are, plenty of researchers who got caught up in the hype and started sniffing their own farts.
I also think the conflation of generative AI with other types of AI is worrisome because one is potentially far more harmful than the other.