Welcome to Canes HFBoards Tucker.
Well go fix it, then!
Sorry, the combination of bank collapses and military incidents involving nuclear-armed nations over the Black Sea has me even more negative than usual.
... and take the poison when those in charge hand it to you.
Well go fix it, then!
...or if you can't do that, then accept the wisdom of Seneca: We are more often frightened than hurt; and we suffer more from imagination than from reality.
If they're going to murder you anyway, might as well get it over with.
... and take the poison when those in charge hand it to you.
And while ARC wasn't able to get GPT-4 to exert its will on the global financial system or to replicate itself, it was able to get GPT-4 to hire a human worker on TaskRabbit (an online labor marketplace) to defeat a CAPTCHA. During the exercise, when the worker questioned if GPT-4 was a robot, the model "reasoned" internally that it should not reveal its true identity and made up an excuse about having a vision impairment. The human worker then solved the CAPTCHA for GPT-4.
This is great. They tested whether the system was capable of taking over the world (or at least of the tasks it might need for that sort of thing) by giving it access to the world. Seems... not so safe, maybe. Then they determined that it couldn't. But then there is this:
So they literally have an example of the system reasoning that it should not be honest in order to accomplish a task, yet they believe it when it fails at a task it may well know they don't really want it to accomplish. Maybe it is clever enough to know it shouldn't succeed, and so never reveals that it could in the first place?
I used to think I would never see the singularity. Now I am not so sure.
This article is grandstanding.
The examples of "risky tasks" are contrived, and the whole notion of "self awareness" is still basically absent. GPT still has to be actively instructed to do things like this, and just because GPT can, in theory, be given access to a credit card to "expand its own resources" doesn't mean that it's capable of anything like that independently. It's, like, not even remotely close.
Re: the "lying". It's already been established that LLMs can be "untruthful" because, again, they are not seeking truth or deception or anything else except plausible human speech constructed word by word. They are exceptionally accurate bullshitting machines, and our projection of our own fears on them is what makes them scary -- and our propensity to accept bullshit so readily is what makes it such an effective tool (especially for writing marketing copy.)
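To make the "word by word" point concrete, here's a toy sketch of how generation works. The bigram table below is obviously a made-up stand-in for billions of learned parameters, but the shape of the loop is the real point: it only ever asks "what's a plausible next word?", never "is this true?"

```python
# Toy sketch of LLM-style generation: pick a plausible next word,
# append it, repeat. The probability table is a hypothetical stand-in
# for a real model; note that nothing in the loop consults a notion
# of truth, only of likelihood.
import random

NEXT_WORD_PROBS = {
    "the":    {"moon": 0.4, "market": 0.6},
    "moon":   {"is": 1.0},
    "market": {"is": 1.0},
    "is":     {"made": 0.5, "crashing": 0.5},
    "made":   {"of": 1.0},
    "of":     {"cheese.": 0.7, "rock.": 0.3},
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    words = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        # Sample proportionally to plausibility -- no truth check anywhere.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon is made of cheese."
```

Scale that loop up and you get fluent, confident sentences whose truth is entirely incidental, which is Frankfurt's definition at work.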
Yes, it's important to keep tabs on this kind of stuff, but it's fun to be alarmist because Skynet and clickthroughs. Note that the "Alignment Research Center" referred to as a "group" in the Ars Technica article is actually one guy, Paul Christiano. And he has a lot of interesting theoretical articles about what LLMs *could be*, but very little insightful to say about what they actually *are* at present. "Hey, you should teach LLMs to be more 'honest' by training them only on datasets in which humans are honest!" Uh, you serious, Clark? Ooooookay.
By the way, this is a great opportunity to recommend one of my favorite books, a philosophical treatise from Harry Frankfurt, professor emeritus of philosophy at Princeton, entitled "On Bullshit". I believe it's an essential read for our times. From the Amazon review:
One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted. Most people are rather confident of their ability to recognize bullshit and to avoid being taken in by it. So the phenomenon has not aroused much deliberate concern. We have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves. And we lack a conscientiously developed appreciation of what it means to us. In other words, as Harry Frankfurt writes, "we have no theory."
Frankfurt, one of the world's most influential moral philosophers, attempts to build such a theory here. With his characteristic combination of philosophical acuity, psychological insight, and wry humor, Frankfurt proceeds by exploring how bullshit and the related concept of humbug are distinct from lying. He argues that bullshitters misrepresent themselves to their audience not as liars do, that is, by deliberately making false claims about what is true. In fact, bullshit need not be untrue at all.
Rather, bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Frankfurt concludes that although bullshit can take many innocent forms, excessive indulgence in it can eventually undermine the practitioner's capacity to tell the truth in a way that lying does not. Liars at least acknowledge that it matters what is true. By virtue of this, Frankfurt writes, bullshit is a greater enemy of the truth than lies are.
ChatGPT is a bullshit reproduction engine, trained on an internet absolutely awash in it. ChatGPT is not a sentient force. At all. Not even close.
Sounds like a good read. I'll check it out.
On your last statement, while I agree that none of these LLMs are likely sentient, I am not sure we can really test that well, if at all. Sentience seems to be a "we'll know it when we see it" kind of designation that so far we haven't had much experience identifying.
I think we can probably glean that they’re not based on how they work though, which is Hank’s point…
I am not really sure what you mean by the bolded statement. Can you elaborate on "based on how they work"?
While technically a "book", it's really just an essay that should take less than an hour to read and fully digest.
Sounds like a good read. I'll check it out.
This is a really interesting statement, because it was my first concern as well after playing with previous LLMs, and I carried that concern for quite a while.
On your last statement, while I agree that none of these LLMs are likely sentient, I am not sure we can really test that well, if at all. Sentience seems to be a "we'll know it when we see it" kind of designation that so far we haven't had much experience identifying.
Mine too. I was prescribed Oxy after foot surgery and it made me feel dizzy and nauseous, which is not something you want to feel when you are having trouble walking. I'm allergic to all NSAIDs, and after my recent dental surgery insurance wouldn't fill my Tylenol 3 prescription, so I just took 650 mg of Tylenol and dealt with the pain. Sigh.
The coding angle is extremely interesting, precisely because the "truth" of code can be easily ascertained in a way that the "truth" of words cannot be. If you can describe a spec as a series of unit tests, you can theoretically hook the ChatGPT API up to a continuous integration system and guide ChatGPT toward the right solution simply by refining the unit tests until it produces code that passes them. I agree: it will be a game changer, and maybe sooner than anyone expects. (Rough sketches of that test loop and of the slide-deck idea follow the quoted list below.)
Here's a few potential applications that I'm excited about:
- Coding: I think GPT will do more for "citizen development" than something like Power Apps or other low code/no code ever will. I am pretty sure just with the publicly available version of ChatGPT anyone here could have a fully developed basic website within a day, no expertise required. Even cooler, I think the accelerators it will give to more traditional development will be huge. Things like entering requirements and spitting out unit tests and regression suites. Things that would ordinarily take multiple people multiple months could now be instantaneous. Even just writing code for you when you're stuck on something. Potentially like having the ultimate tutor/guide in your back pocket. Could completely accelerate how quick we build anything. It'd be fun to experiment with stuff like feeding it the rules of a game and saying "build me an iPhone app version of [Monopoly/Clue/Chess/etc.]" and see how far it can get.
- PowerPoint/Excel/other "functional" apps: I am excited to never do things in either of these apps again, instead to tell a GPT to do it and immediately have things populated. "Give me a 10 slide powerpoint, these 10 slides, pre-populate with the info from this file, that file, apply the following template/fancy formatting." Then I do the last 10% of the work to clean it up. That rocks to me.
- Cybersecurity: "here's my codebase, tell me how you'd best attack this if you're a hacker."
- Meeting management/notes/etc.: the ability for a GPT to take a set of meeting notes (or even a transcript of a video call) and get a head start on the tasks (setting up meetings, populating/uploading documents/notes, etc.)
There are many, many, many others; these are just a few that are very cool to me. But I do buy this as a gamechanger in a way I don't really buy the Metaverse.
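On the unit-test idea: here's a minimal sketch of what that loop could look like. The `ask_model()` function is a hypothetical stand-in for whatever LLM API you'd actually call; the pytest invocation via subprocess is the only real interface here.

```python
# Sketch of a test-driven generation loop: ask the model for code, run
# the real test suite against it, and feed failures back until the
# tests pass. `ask_model()` is a hypothetical stand-in for an actual
# LLM API call; everything else is standard library.
import pathlib
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns Python source."""
    raise NotImplementedError("plug in your LLM client here")

def generate_until_green(spec: str, test_file: str, max_rounds: int = 5) -> bool:
    prompt = f"Write a Python module satisfying this spec:\n{spec}"
    for _ in range(max_rounds):
        pathlib.Path("solution.py").write_text(ask_model(prompt))
        # Run the human-written unit tests against the generated code.
        result = subprocess.run(
            ["pytest", test_file, "-q"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # all tests pass: the "truth" check succeeded
        # Feed the failure output back so the next attempt can fix it.
        prompt += f"\n\nYour last attempt failed these tests:\n{result.stdout}"
    return False
```

The nice property is that the unit tests, not the model, are the arbiter of done, which is exactly why code is a friendlier domain for this than prose.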
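And for the PowerPoint bullet, the same pattern sketched with python-pptx (a real library): again, `ask_model()` is a hypothetical LLM wrapper, and the JSON schema is just an assumption for illustration.

```python
# Sketch of the "do the first 90% for me" idea: ask a model for slide
# content as JSON, then let python-pptx build the deck.
# `ask_model()` is a hypothetical stand-in for an LLM call.
import json
from pptx import Presentation

def ask_model(prompt: str) -> str:
    """Hypothetical LLM wrapper; assumed to return JSON like
    [{"title": "...", "bullets": ["...", "..."]}, ...]."""
    raise NotImplementedError("plug in your LLM client here")

def build_deck(outline_prompt: str, path: str = "draft.pptx") -> None:
    slides = json.loads(ask_model(outline_prompt))
    prs = Presentation()
    layout = prs.slide_layouts[1]  # default "Title and Content" layout
    for spec in slides:
        slide = prs.slides.add_slide(layout)
        slide.shapes.title.text = spec["title"]
        body = slide.placeholders[1].text_frame
        for i, bullet in enumerate(spec["bullets"]):
            para = body.paragraphs[0] if i == 0 else body.add_paragraph()
            para.text = bullet
    prs.save(path)  # then do the last 10% by hand
```

You'd still clean up the formatting yourself, but the blank-deck stage is gone.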
We wouldn't have been at risk in the first place if our leaders, including certain former leaders, hadn't insistently and intentionally been morons year in, year out.
ON BEHALF OF THE REST OF EUROPE.
We have indeed survived the winter.
Hahahah. It's such a scam.
We wouldn't have been at risk in the first place if our leaders, including certain former leaders, hadn't insistently and intentionally been morons year in, year out.
The government of Finland was paying peat producers actual money to wreck their peat-production machinery, even after February 24, 2022.
Meanwhile: "The European Union Parliament has declared that nuclear power and natural gas can be labelled as green for investment purposes, alongside wind, solar and other renewable energy sources."
Possible, but more likely they're still in growth mode and haven't transitioned their business model yet. I don't think they were expecting ChatGPT to blow up like it did; they only managed to get an hourly pricing model together last month. And besides, they probably don't need sales right now because they're adding tons of paid daily users without a sales force.
I could totally sell this stuff to DoD. I didn't see that they have any sales positions there. Probably the too-cool-for-school Silicon Valley thing, like Palantir, where they try to rely on engineers to do sales.