A.I. Quietly Killing Itself?


x Tame Impala

HFBoards Sponsor
Sponsor
Aug 24, 2011
28,295
13,163

Read this interesting article today. I'm still very much a rookie in the field, but I've felt for years that, for reasons like the ones laid out in the article, this A.I. doom is going to end up being entirely overblown.

Long story short, these models are trained on enormous amounts of content scraped from the Internet using insanely high processing power. Wrench in the gears, though: an estimated 57% of content on the Internet is currently AI-generated, and that number could climb to 90+% by next year. So the training pipelines end up ingesting AI-generated content, which yields successively worse data to train on. Without accurate (human) data points, the models become less and less useful to us, eventually collapsing on themselves.
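
To see the feedback loop the article describes in miniature, here's a toy sketch I put together (my own illustration, not from the article): a simple Gaussian "model" repeatedly re-fit on its own samples, with no fresh human data coming in. The distribution drifts and the tails get forgotten, which is roughly the "collapsing on itself" idea.

Code:
# Toy illustration of model collapse: each "generation" is fit only on
# samples produced by the previous generation, with no new human data.
# (A hypothetical Gaussian stands in for the model; purely illustrative.)
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data drawn from the true distribution N(0, 1)
data = rng.normal(loc=0.0, scale=1.0, size=1_000)
mu, sigma = data.mean(), data.std()
print(f"gen 0: mu={mu:+.3f}, sigma={sigma:.3f}")

for gen in range(1, 10):
    # Train the next generation only on the previous generation's output
    synthetic = rng.normal(loc=mu, scale=sigma, size=1_000)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {gen}: mu={mu:+.3f}, sigma={sigma:.3f}")

# Typically sigma drifts downward and mu wanders away from 0:
# each generation loses a bit more of the original (human) signal.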
 

JMCx4

Welcome to: The Dumbing Down Era of HFBoards
Sep 3, 2017
14,717
9,608
St. Louis, MO
After the first two prompts, the answers steadily miss the mark, followed by a significant quality downgrade by the fifth attempt and a complete devolution to nonsensical pablum by the ninth consecutive query.
So my long-held belief is FINALLY validated ... HFBoard threads are solely creations of genAI ! 💡
 

Seedtype

Registered User
Sponsor
Aug 16, 2009
2,224
881
Ohio?!?!
So my long-held belief is FINALLY validated ... HFBoard threads are solely creations of genAI ! 💡
Heh, don't look up the Dead Internet conspiracy theory.

Edit: As for the AI stuff itself, it does seem to me that the hype has fallen off quite a bit, and honestly a lot of it seems like nasty stuff, what with all the deepfake crap going on.
The last thing that impressed me was a seamless translation of a French guy talking in a quick news clip about a storm or something. They were able to make him speak English with perfect lip sync, and you could even tell his personal accent was intact.

I think AI will be great at looking at stuff, both in coding and maybe medical images, but I think you will still need that human input and spark. I actually have a lot of doubts about automated cars as well.
 
Last edited:

Romang67

BitterSwede
Jan 2, 2011
30,164
22,873
Evanston, IL
This is happening in academia too: papers that are heavily aided by, or nearly entirely written with, large language models. Funnily enough, a considerable portion of those papers are also written ABOUT large language models.

But in general... eh. I think the usefulness and breadth of purpose of generative AI have been overstated both by developers and by the general public. But that's just the AI that people are actively talking about right now. How many people can say they work in an industry where there are no automated systems following a plan for classification, regression, or prediction purposes? That usage of AI systems isn't going away. If anything, it's growing behind the scenes while ChatGPT gets all the publicity.
Heh, don't look up the Dead Internet conspiracy theory.

Edit: As for the AI stuff itself, it does seem to me that the hype has fallen off quite a bit, and honestly a lot of it seems like nasty stuff, what with all the deepfake crap going on.
The last thing that impressed me was a seamless translation of a French guy talking in a quick news clip about a storm or something. They were able to make him speak English with perfect lip sync, and you could even tell his personal accent was intact.

I think AI will be great at looking at stuff, both in coding and maybe medical images, but I think you will still need that human input and spark. I actually have a lot of doubts about automated cars as well.
AI has always held more promise in augmenting people than in replacing them, IMO.
 

x Tame Impala

HFBoards Sponsor
Sponsor
Aug 24, 2011
28,295
13,163
Fantastic blog post from a fellow HF Hawks poster @statswatcher


Goldman Sachs doesn't believe in it long term, which is pretty damning. The power consumption of this technology is already MASSIVE, and accommodating it is expected to require US power capacity to grow by 40%, which would mean a major redesign of our nation's power grid.

In the blog post they mentioned how they (GS) used an AI to do a valuable task for them. The AI did it well and quickly, but it cost them 6x more than it would've cost to hire a human.

The technology isn't there yet and may never be. As someone who is entering this field soon, I look forward to seeing this grow into something useful. I really do think it will. But this notion that AI is going to spawn into AGI and take over the world, making us obsolete in the process, needs to die. It's very, very, very unrealistic. Human thought and problem solving cannot be reduced to throwing billions of dollars into processing power.
 
  • Like
Reactions: Seedtype

x Tame Impala

HFBoards Sponsor
Sponsor
Aug 24, 2011
28,295
13,163
I actually have a lot of doubts about automated cars as well.
I do as well. I went to San Francisco this past spring and saw all those self-driving Waymo cars everywhere, which was honestly really cool. I must've seen two dozen of them and only saw a human being in one of them. So there is a basis for this technology working, but I'm skeptical a computer will be able to handle the safety and decision-making that's needed in many circumstances.
 
  • Like
Reactions: Seedtype

Romang67

BitterSwede
Jan 2, 2011
30,164
22,873
Evanston, IL
Fantastic blog post from a fellow HF Hawks poster @statswatcher


Goldman Sachs doesn't believe in it long term, which is pretty damning. The power consumption of this technology is already MASSIVE, and accommodating it is expected to require US power capacity to grow by 40%, which would mean a major redesign of our nation's power grid.

In the blog post they mentioned how they (GS) used an AI to do a valuable task for them. The AI did it well and quickly, but it cost them 6x more than it would've cost to hire a human.

The technology isn't there yet and may never be. As someone who is entering this field soon, I look forward to seeing this grow into something useful. I really do think it will. But this notion that AI is going to spawn into AGI and take over the world, making us obsolete in the process, needs to die. It's very, very, very unrealistic. Human thought and problem solving cannot be reduced to throwing billions of dollars into processing power.
I'd be very wary about conflating generative AI with AI as a whole.

Generative AI has made incredible strides in the past 5 years. When I entered the field, even the foremost researchers struggled to keep their models from either getting stuck in loops of repeated phrases (about 5 years ago) or going off topic after a few sentences (a problem that lingered for a few years after that).

But people in general have no idea what they should do with generative AI or what it is actually good for, so they're just throwing it at all sorts of problems hoping it will solve them. And the obvious problem, as that Goldman Sachs example shows, is that if you chuck a multi-capable AI model like GPT-4 at a problem, it probably will be able to solve it. The issue is that there's no guarantee that's a good use of money.

I personally think there's something to the idea that consultants could save companies a ton of money just by looking over their ideas for how they want to use AI and giving a thumbs up or thumbs down depending on whether the idea will actually save them time and money. Because right now people are just excited that this technology exists but don't have the expertise to know where it should be used. So you go to C-suite people with an idea involving AI, and because they only know about the hype and don't know anything about the technology, they sign off on it.

But don't conflate it with AI in general. Automated systems that follow plans and usually have some form of statistical machine learning component are so ingrained in so many industries at this point that there is no going back from that. We have so much data available at our fingertips, and are so willing and able to collect more, that even a rudimentary neural net is going to improve on human capability basically everywhere. Combine it with whatever domain knowledge you have in your field, and whatever you were doing is likely going to improve.
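
To make that concrete, here's a throwaway sketch of the kind of unglamorous AI I mean: a small, ordinary neural net doing tabular classification. The data here is synthetic (made up for the example, not from any real pipeline); in practice you'd swap in whatever features your domain already collects.

Code:
# A rudimentary neural net doing plain tabular classification, i.e. the
# quiet, non-generative kind of AI that's been embedded in industry for years.
# Synthetic stand-in data; replace with your own domain features in practice.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Fake tabular data standing in for whatever your industry already collects
X, y = make_classification(n_samples=5_000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small, unremarkable network; nothing generative about it
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")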

But yeah, the idea that we're right on the verge of discovering AGI, and that it would take over the world (and also threaten it), always felt to me like it was pushed more by snake oil salesmen like Musk than by reliable researchers in the field. It was always going to be a boon for people like Musk if the average person thought AI was much more capable than it actually is, and also so scary that other companies shouldn't be allowed to research it freely. Although there were, and are, a ton of researchers who got, and are still getting, caught up in the hype and sniffing their own farts.

I also think the conflation of generative AI with other types of AI is a bit worrisome because I think one is considerably more potentially harmful than the other.
 

x Tame Impala

HFBoards Sponsor
Sponsor
Aug 24, 2011
28,295
13,163
But don't conflate it with AI in general. Automated systems that follow plans and usually have some form of statistical machine learning component are so ingrained in so many industries at this point that there is no going back from that
Oh absolutely. I made the same point in a different thread. The blog post specifically talks about Generative AI, but there are tons of already proven, very useful cases of artificial intelligence solving problems, dealing with vast amounts of data, and performing tasks more quickly. I'm all in on that type of artificial intelligence. I just don't think some version of ChatGPT in 2030 is going to rearrange society as a whole.

What field are you in, if you don't mind me asking? I recently started going back to school for a career change and am getting a BS in Computer Science. Artificial Intelligence seems like a must-know field going forward, and cybersecurity seems incredibly interesting to me as well, so I may end up in something combining the two.
 

Romang67

BitterSwede
Jan 2, 2011
30,164
22,873
Evanston, IL
Oh absolutely. I made the same point in a different thread. The blog post specifically talks about Generative AI, but there are tons of already proven, very useful cases of artificial intelligence solving problems, dealing with vast amounts of data, and performing tasks more quickly. I'm all in on that type of artificial intelligence. I just don't think some version of ChatGPT in 2030 is going to rearrange society as a whole.

What field are you in, if you don't mind me asking? I recently started going back to school for a career change and am getting a BS in Computer Science. Artificial Intelligence seems like a must-know field going forward, and cybersecurity seems incredibly interesting to me as well, so I may end up in something combining the two.
Yeah, I agree. I think we've hit a bit of a plateau in language generation. For as much hype as ChatGPT (both 3.5 and 4) got, I don't think they were THAT much more impressive than GPT-3 was, especially given the jump we saw going from GPT-2 to GPT-3. That's assuming we don't see another jump like we did in the early 2020s, which was arguably driven by the combination of a very powerful transformer architecture and the ability to consume a lot of data. The data consumption isn't going to dramatically increase (and the data quality isn't really improving), so we'd need some very impressive new architecture to see a big jump in the quality of generated language, IMO.

I'm in AI Safety (so basically, looking at different usages of AI systems and how they could be harmful -- NOT doom-speaking about the dangers of AGI). I bounced around a bit for a few years before landing in that area. You can ping me directly if you have any questions, and I might be able to answer!
 
