A.I. reads the HF main board and creates a podcast

That's like making a phone call, everyone hates it! Who answers the phone anyway? It's all about DMs
 
How far off are we from court cases with "evidence" made by AI?

It's not going to take long until you can insert anyone into a video as long as you have photos of them, or fabricate an audio recording as long as you have clips of their real voice.

The videos and audio will at some point become nearly impossible for the human eye and ear to detect as fake. Surely scammers will start exploiting people with these advancements. Law enforcement will have their work cut out for them, and in many countries they lag behind technological advancements due to bureaucracy and budget constraints.
 

Speaking of Leah Hextall or Cassie Campbell-Pascal, how long before this actually happens?
We had the beginnings of this in NHL 97 with selected play-by-play by Jim Hughson, and the next year added color commentary by Daryl Reaugh.
 
Yeah. One of those things that can be used for a lot of good, but the negatives will very much outweigh the positives.

Creeps will be having a field day, that's for sure, if they aren't already. A lot of people will lose their jobs.
The number of jobs that AI can replace at relatively low cost in the next few decades is going to be staggering. It seems the job market will experience mass unemployment, and tons of people will be screwed.

If your work can be done by AI it will get dicey fairly soon.
 
The number of jobs that AI can replace at relatively low cost in the next few decades is going to be staggering. It seems the job market will experience mass unemployment, and tons of people will be screwed.

If your work can be done by AI it will get dicey fairly soon.
Everyone will have to become rogue AI hunters.
 
Going to be really hard to discern what's real and what's fake anymore.

It already is. Meta has publicly announced they plan to release their own bots, run on LLMs, to help drive engagement on Instagram/Facebook. "Wow look at all the comments on my picture from this weekend!". Literal dopamine hacking.

Add that to 3rd party bots, and who knows how much of what we read comes from a human.

I think a place like HF is a little safer, because it's a smaller community compared to X, Reddit or Facebook, but still. Ya never know.
 
How far off are we from court cases with "evidence" made by AI?

It's not going to take long until you can insert anyone into a video as long as you have photos of them, or fabricate an audio recording as long as you have clips of their real voice.

The videos and audio will at some point become nearly impossible for the human eye and ear to detect as fake. Surely scammers will start exploiting people with these advancements. Law enforcement will have their work cut out for them, and in many countries they lag behind technological advancements due to bureaucracy and budget constraints.
Hi, I'm currently working on my dissertation in AI safety with a focus on information harms from large language models.

Not quite the same, but fictitious cases have already been cited in court by lawyers who used hallucinating language models.


This was caught by the court, because it's relatively easy to double-check whether a court case actually exists. Double-checking whether some piece of evidence is genuine will be nigh on impossible, unless you have evidence disproving the false evidence.

Scammers are using language models, and language models are persuasive.


https://hai.stanford.edu/policy/how-persuasive-ai-generated-propaganda (there is a ton of work in academia on this in the past couple of years, this is just one example)

The potential for misinformation and propaganda spread has exploded in the past few years, and that's just using the medium of text. You are correct to be worried about what happens when audio and video can be seamlessly woven together with that.

Unfortunately, AI regulation in the U.S. is moving in the opposite direction from the one it seems you want.


(edited link, another NIST related group had also been shut down in the last couple of months)

Don't let the propagandists trick you into believing that the key concern about AI is artificial general intelligence and its associated nebulous existential risks. It's a real concern, but by making people focus on that risk, which may or may not materialize in the next 50 years, they take eyes away from all the bad things that poorly implemented (and well implemented, but malicious) AI is doing to people now.

Call your congressperson and voice your concern.
 
Interesting, I suppose, but there really isn't any depth to the conversation. This stuff is in its infancy, so down the road it'll be interesting to see whether 15 minutes of listening to AI can be as informative and fulfilling as 15 minutes of reading the forum. I doubt it, since reading is much more efficient: you can skim or skip over useless posts and topics.

There are some YouTube bots now that are pretty freaking good at summing up busy chats during live broadcasts. Ryan Hall, for example, uses AI for that purpose, as well as a full time bot on one of his 24/7 weather channels. Freaky how "human" it is. I'll crawl back into my closet now and bite my nails.
 
Do people really prefer listening over reading? I don't like the pace being controlled by some third-party voice; I prefer controlling my time dimension myself.
 
Hi, I'm currently working on my dissertation in AI safety with a focus on information harms from large language models.

Not quite the same, but fictitious cases have already been cited in court by lawyers who used hallucinating language models.


This was caught by the court, because it's relatively easy to double-check whether a court case actually exists. Double-checking whether some piece of evidence is genuine will be nigh on impossible, unless you have evidence disproving the false evidence.

Scammers are using language models, and language models are persuasive.


https://hai.stanford.edu/policy/how-persuasive-ai-generated-propaganda (there is a ton of work in academia on this in the past couple of years, this is just one example)

The potential for misinformation and propaganda spread has exploded in the past few years, and that's just using the medium of text. You are correct to be worried about what happens when audio and video can be seamlessly woven together with that.

Unfortunately, AI regulation in the U.S. is moving in the opposite direction from the one it seems you want.


(edited link, another NIST related group had also been shut down in the last couple of months)

Don't let the propagandists trick you into believing that the key concern about AI is artificial general intelligence and its associated nebulous existential risks. It's a real concern, but by making people focus on that risk, which may or may not materialize in the next 50 years, they take eyes away from all the bad things that poorly implemented (and well implemented, but malicious) AI is doing to people now.

Call your congressperson and voice your concern.
Seems like something AI would say to convince us they aren't AI :sarcasm:
 
Can we replace Leah Hextall with this AI bot?
Dunno, the AI bot might sometimes stay on topic instead of just talking about her tea party with some NHL player while a goal is being scored, wouldn't be authentic.
 
The number of jobs that AI can replace at relatively low cost in the next few decades is going to be staggering. It seems the job market will experience mass unemployment, and tons of people will be screwed.

If your work can be done by AI it will get dicey fairly soon.
I'm a programmer. f*** me running. The writing is on the wall, and it was placed there by an AI assistant lol
 