All Purpose Trade/Roster Building Thread XIII - the '23 deadline approaches


Negan4Coach

Fantastic and Stochastic
Aug 31, 2017
6,017
15,275
Raleigh, NC
HOLY SHIT THIS IS AMAZEBALLS

Negan is over there wasting his time chasing ChatGPT down the LoNg MaRcH tHrU thE InStiTuTioNs while the Hammer is mining for ABSOLUTE COMIC GOLD

LOL, he clearly just wrote that himself.

Anyway, I've been treating this thing like throwing a Pollack in a round room and telling him to piss in the corner.

In reality- ChatGPT isn't even aware that the Canes won a cup...

45C07022-A846-42C7-9E1D-A5D994677296.jpeg
 

Navin R Slavin

Fifth line center
Jan 1, 2011
16,363
64,763
Durrm NC
It still very much claims that. I have no idea how it thought Aho was on the Habs, though.

The mistake that people make when they think about ChatGPT is that they assume it's an inference engine. It's not; it's a language modeling engine. Factual accuracy is secondary to plausible prediction of the next word, as learned from the public natural language processing (NLP) datasets used as inputs.

So what are those datasets? Well, here's a popular set:


In that list, we see two likely candidates for hockey knowledge/discussions: Wikipedia, and ALL REDDIT COMMENTS LOL. So when it's trying to answer a general question like "what kind of trade do the Hurricanes need" it pulls the closest related text from its datasets and assembles a response that is *plausible first* and *factual second*. ChatGPT can bloviate very effectively to look like it knows all about hockey while not actually knowing shit, because that's what hockey chat forums are full of!
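
If you want to see that "plausible first, factual second" behavior outside the chat window, here's a rough sketch using GPT-2, a small public predecessor of the model behind ChatGPT. (ChatGPT doesn't expose its internals, so the model and library here are just my stand-ins for illustration.)

Code:
# Toy demo of "predict the next plausible word": GPT-2 assembling a
# confident-sounding answer with nothing checking it against reality.
# Assumes: pip install torch transformers
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The kind of trade the Carolina Hurricanes need is"
ids = tok.encode(prompt, return_tensors="pt")
out = model.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))

It will hand you something hockey-flavored and confident every time; there is no step anywhere that checks whether it's true.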

Note that when you ask a *specific and simple factual question* it can often find the correct factual response, because the most plausibly predictive answer, in those cases, is almost always the actual fact.

Me: who does sebastian aho play for?

ChatGPT: Sebastian Aho currently plays for the Carolina Hurricanes of the National Hockey League (NHL).

Note also that when one primes ChatGPT with the correct facts, it is more likely to make the proper inferences using those facts in subsequent sessions:

me: Who won the Stanley Cup in 2006?

ChatGPT: The Carolina Hurricanes won the Stanley Cup in 2006.

(Subsequent session)

me: Have the Carolina Hurricanes ever won a Stanley Cup?

ChatGPT: Yes, the Carolina Hurricanes have won the Stanley Cup once, in 2005-2006 NHL season.

The more complex/vague the initial question is, the less likely the model is to give you the correct answers, because that requires inference, which is a skill that ChatGPT does not currently optimize for.
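
On the priming point: you can see the same effect through the API, where "context" is just text fed in ahead of your question. A rough sketch below, assuming the GPT-3-era openai Python package (0.x) and the InstructGPT-style text-davinci-003 model; that's my choice of model for illustration, not necessarily what the free chat page runs.

Code:
# Priming demo: put the fact in the prompt and the model conditions on it.
# Assumes: pip install openai (the 0.x API) and OPENAI_API_KEY in your env.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

primer = "Fact: The Carolina Hurricanes won the Stanley Cup in 2006.\n\n"
question = "Q: Have the Carolina Hurricanes ever won a Stanley Cup?\nA:"

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=primer + question,
    max_tokens=40,
    temperature=0,  # take the single most probable continuation
)
print(resp["choices"][0]["text"].strip())

Drop the primer and you're back to whatever was most common in the training text, which is exactly where the wrong answers come from.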

f***ing with ChatGPT is fun, but if you want to understand it, read the literature. It really is fascinating. Here's a good place to start:


(Note that there's a parallel model called InstructGPT that optimizes for more "truthful" results. That's discussed in the above paper.)
 

Navin R Slavin

Fifth line center
Jan 1, 2011
16,363
64,763
Durrm NC
Also, re: dataset time horizon, this is useful:

Me: How many Sebastian Ahos play in the NHL?

ChatGPT: There is only one Sebastian Aho who currently plays in the NHL and that is Sebastian Aho of the Carolina Hurricanes.

Me: Is there a defenseman named Sebastian Aho?

ChatGPT: Yes, there is a defenseman named Sebastian Aho who plays for Rögle BK of the Swedish Hockey League (SHL).
 

SvechneJerk

Christ is King
Jul 15, 2018
1,603
6,265
NC


One of the replies suggested

1st, Morrow, Suzuki, Rees

if that can get it done, I'm all for it


The people creating these graphics really need to make sure that they at least get what the attributed writer said correct.

Peng wrote in 2 consecutive articles that it would take:

A 1st and 1 Grade ‘A’ prospect
OR
A 1st and 2 Grade ‘B’ prospects
OR
One “good” prospect and a young, established NHL player.

👇🏻

E4B45272-EE69-40C3-B54A-91B4F5BA056F.jpeg


This is a screenshot from the 2nd of those articles.
 

DaveG

Noted Jerk
Apr 7, 2003
52,208
52,120
Winston-Salem NC
The people creating these graphics really need to make sure that they at least get what the attributed writer said correct.

Peng wrote in 2 consecutive articles that it would take:

A 1st and 1 Grade ‘A’ prospect
OR
A 1st and 2 Grade ‘B’ prospects
OR
One “good” prospect and a young, established NHL player.

👇🏻

View attachment 647086

This is a screenshot from the 2nd of those articles.
So basically:

1st + Morrow/Nikishin
or
1st + Suzuki + Bokk
or
KK + Suzuki or Bokk
 

Navin R Slavin

Fifth line center
Jan 1, 2011
16,363
64,763
Durrm NC
Thinking more about the Sebastian Aho blunder that ChatGPT made, and I know I should probably shut up about it, LOL, but it's just super interesting to me.

In the first question, ChatGPT was asked "what trade could the Canes make to improve" and ChatGPT do what it do: it comes up with a bunch of generalized stuff. "Hey, everyone needs a top line center" gets said a lot. "Hey, Sebastian Aho and Nathan MacKinnon are two good top line centers" gets said a lot. Plug in some other plausible stuff, done.

So then when asked "what might that trade look like?" it has to correlate with the facts it's already gathered, and since trading Sebastian Aho to the Carolina Hurricanes would no longer make predictive sense if he already played for the Carolina Hurricanes, the engine looks for the next most plausible completion of "Sebastian Aho of..." and naturally the second most plausible answer is "the Montreal Canadiens" for obvious reasons, and there we go.
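
For the curious, you can peek at that kind of "next most plausible" ranking yourself; ChatGPT won't show you its probabilities, so this sketch uses the small public GPT-2 model as a stand-in.

Code:
# Rank the most plausible next tokens after a prompt. The ranking reflects
# which words most often followed that phrase in the training text, not facts.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tok.encode("Sebastian Aho of the", return_tensors="pt")
with torch.no_grad():
    logits = model(ids).logits[0, -1]      # scores for the very next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tok.decode([i])!r}  {p:.3f}")

Whether the top candidate is the right team is beside the point as far as the model is concerned.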
 

Negan4Coach

Fantastic and Stochastic
Aug 31, 2017
6,017
15,275
Raleigh, NC
It still very much claims that. I have no idea how it thought Aho was on the Habs, though.

It routinely gets obscure information wrong. I have a ton more examples now.


The mistake that people make when they think about ChatGPT is that they assume it's an inference engine. It's not; it's a language modeling engine. Factual accuracy is secondary to plausible prediction of the next word, as learned from the public natural language processing (NLP) datasets used as inputs.

So what are those datasets? Well, here's a popular set:


In that list, we see two likely candidates for hockey knowledge/discussions: Wikipedia, and ALL REDDIT COMMENTS LOL. So when it's trying to answer a general question like "what kind of trade do the Hurricanes need" it pulls the closest related text from its datasets and assembles a response that is *plausible first* and *factual second*. ChatGPT can bloviate very effectively to look like it knows all about hockey while not actually knowing shit, because that's what hockey chat forums are full of!

Note that when you ask a *specific and simple factual question* it can often find the correct factual response, because the most plausibly predictive answer, in those cases, is almost always the actual fact.

Me: who does sebastian aho play for?

ChatGPT: Sebastian Aho currently plays for the Carolina Hurricanes of the National Hockey League (NHL).

Note also that when one primes ChatGPT with the correct facts, it is more likely to make the proper inferences using those facts in subsequent sessions:

me: Who won the Stanley Cup in 2006?

ChatGPT: The Carolina Hurricanes won the Stanley Cup in 2006.

(Subsequent session)

me: Have the Carolina Hurricanes ever won a Stanley Cup?

ChatGPT: Yes, the Carolina Hurricanes have won the Stanley Cup once, in 2005-2006 NHL season.

The more complex/vague the initial question is, the less likely the model is to give you the correct answers, because that requires inference, which is a skill that ChatGPT does not currently optimize for.

f***ing with ChatGPT is fun, but if you want to understand it, read the literature. It really is fascinating. Here's a good place to start:


(Note that there's a parallel model called InstructGPT that optimizes for more "truthful" results. That's discussed in the above paper.)

I mean- it sure as hell seems to make inferences. In fact- it claims to.

I'm not sure how they trained its AI- but if it has direct access to the correct information- then it should be able to recall it without error. Like when you asked it if the Canes had won a Stanley Cup, it got it correct. When I asked something more nebulous, like what the Canes' weakness is- it made a false statement that they had never won a cup. It inferred that from background material. But then it is able to acknowledge the mistake and pull up the source material.

In much the same way- it made incorrect inferences when I asked it to pretend to be the character Frank Booth from the movie Blue Velvet:

038524E8-C403-46A2-9984-BFBE7C30314B.jpeg


The first line- which is pretty well known- it nailed. The second? LOL, my god, that's certainly a good impression of him, but Frank never screams "You want to get f***ed" over and over.

I tried to prompt it to give another couple of lines, which it failed to do. I then asked it to walk me through its processes in detail, so that was quite lengthy (but fascinating), but at the end it admits that it extrapolates things based on indirect information. So this is more than NLP going on. Jumping to the end:

27CE0DE9-B09C-4F41-8C94-8BADAFC85695.jpeg
02670F5B-B3BA-4C36-A6E2-CCA5C2BA344C.jpeg
 

Navin R Slavin

Fifth line center
Jan 1, 2011
16,363
64,763
Durrm NC
It routinely gets obscure information wrong. I have a ton more examples now.
* * *
I mean- it sure as hell seems to make inferences. In fact- it claims to.

I'm not sure how they trained its AI- but if it has direct access to the correct information- then it should be able to recall it without error. Like when you asked it if the Canes had won a Stanley Cup, it got it correct. When I asked something more nebulous, like what the Canes' weakness is- it made a false statement that they had never won a cup. It inferred that from background material. But then it is able to acknowledge the mistake and pull up the source material.

Yes, but there's a difference between logical inference and pattern-based inference. When they say "inference" they mean Bayesian inference. When humans say "inference" we tend to mean logical inference.

Large Language Models can do pattern-based inference at truly mind-boggling scale because they have been built using massively parallelized processes going through unsupervised "fill in the missing word" cloze-style exercises trillions of times. But they fail at actual logical inference in shockingly basic cases. Because again: the LLM's entire goal is to predict *the next plausible word* given its many, many, *many* inputs. Plausibility, not accuracy. And since ChatGPT's remit is to respond, it will always respond. It won't say "gee, that's tricky, let me think about it." It will come up with the most plausible string of words and hand it over, even if it's a "hallucination" (an actual term used to describe wildly nonsensical output, and probably what your Blue Velvet exchange is).
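
If you want to poke at the cloze-style exercise directly, the masked-language cousins of these models are easy to play with. A small sketch below (BERT here, which does the literal fill-in-the-blank version; GPT-style models like ChatGPT's do the left-to-right equivalent, so this is an analogy, not their actual setup).

Code:
# Fill in the missing word, ranked purely by plausibility learned from text.
# Assumes: pip install torch transformers
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for guess in unmasker("Sebastian Aho plays center for the [MASK]."):
    print(f"{guess['token_str']:>12}  {guess['score']:.3f}")

Nothing in that loop knows or cares whether the top answer is true, which is the whole point.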

(If you ever played with the GPT-3 stuff that predated ChatGPT, you would have seen actual settings to allow you to play with confidence values. Interesting that ChatGPT doesn't expose those settings.)
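
For reference, those settings are basically temperature / top-p sampling knobs. Here's a toy illustration of what temperature does to the next-word odds; the numbers are made up, purely for illustration.

Code:
# Temperature rescales the next-word probabilities before sampling.
import numpy as np

def soften(logits, temperature):
    z = np.array(logits) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

scores = [5.0, 3.5, 1.0]   # pretend scores for "Hurricanes", "Canadiens", "Penguins"
for t in (0.2, 1.0, 2.0):
    print(t, np.round(soften(scores, t), 3))

Low temperature means you almost always get the single most plausible word; high temperature spreads the odds around, which is where the more "creative" (and more wrong) answers tend to come from.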

Note also that ChatGPT never notices its errors until the user calls attention to them; that calling out constitutes a new input that ChatGPT must then evaluate, and I suspect that if the user says "you are wrong about fact X" then ChatGPT must treat that input with 100% confidence.

It's also notable how strongly the quality of output you get from a ChatGPT session depends on which data you choose to pull from it first. I suspect that if you ask it a series of factual questions, and if you accept those responses as valid in subsequent conversation, its confidence in those new inputs goes way, way up and so they are weighted accordingly in follow-up questions in ways that they would not otherwise be weighted.

The consistency of certain responses about ChatGPT's limitations also implies, to me, a separate model/process that basically involves acknowledging its own error bars and explaining its internal mechanisms to users who are continually surprised by the rudimentary mistakes that it makes. You see the same answers over and over again: "I apologize for any confusion" is something that ChatGPT probably says a million times a day at this point.

Anyway. Truly fascinating shit.
 

Sigurd

Slavin, our Lord and Saver (AKA Extra Goalie)
Feb 4, 2018
1,852
5,312
North Carolina
So basically:

1st + Morrow/Nikishin
or
1st + Suzuki + Bokk
or
KK + Suzuki or Bokk
True, based on what those quotes said.

However, I wonder if San Jose asks for more based on what Horvat's trade got for the Canucks. Then again, I'm not sure how great the players were that the Islanders traded tbh.


Yeah, I saw that earlier, and let's hope it's true. New Jersey from a cap standpoint probably loves the idea of Boeser getting retention for multiple years versus paying Meier a huge contract long term. Speaking of which, I saw this interesting thought from Friedman:

"On Jan. 31, Sportsnet's Elliotte Friedman wrote that the New Jersey Devils were "very much" in the Meier sweepstakes. Re-signing him could be an issue, as Friedman thinks the Devils prefer not to have any forwards exceeding Jack Hughes' annual average value of $8 million."

 

CandyCanes

Caniac turned Jerkiac
Jan 8, 2015
7,627
26,545
I know we're all in on Meier. But I just had a scary thought. We're in huge trouble down the middle if Aho walks as a UFA in the 2024 offseason. Which I think is a possibility. We saw how his agent handled the last contract negotiations. I have to imagine he will hit the open market and we'll get into a bidding war to keep him here.

The Wings are on the fence about letting Larkin go, but I'd be throwing the same package deal at them that we're offering for Meier. If we can acquire Larkin, he would provide nice insurance if Aho does decide to leave us. Obviously we'd have to get Larkin locked up to a contract too, but let's get a deal done like the Isles did with Horvat. Also, freaking Aho / Larkin as our 1 & 2C would be absolutely deadly.
 