All I needed to read was ".. on the Lex Fridman podcast" in the first sentence to know this was just clickbait.
The guy platforms the likes of Graham Hancock and Bibi Netanyahu. He's completely nauseating and the best thing I ever did on YT was to block those podcasts, clips and shorts.
oh it's the fake mit guy
The guy was teaching ML and had to read the questions from a paper... Even Jensen Huang was bewildered...
OpenClaw mass deleting my emails is AGI. Got it, thanks Jensen.
It's just hard for us to see its grand plans.
If the thing is as 'intelligent' as the average human you should expect it to f*ck up as often as the average human so yes. Maybe ASI would handle this better or maybe it has just come to the conclusion that the collective is better off without you having access to your mail.
AGI = Average General Intelligence
Is this Nvidia finally jumping the shark? "On a Monday episode of the Lex Fridman podcast"... Okay that explains it.
Have we reached the stage of the cycle where we redefine the terms that we used to attract investment during previous stages?
How do you redefine terms that never had any kind of definition, only a vibe, to start with?
Indeed. Today I asked Gemini the question "Should I shower after a nuke" and got the following response:
"Yes, you should shower or wash thoroughly as soon as possible after being outside during or after a nuclear detonation to remove radioactive fallout material from your skin and hair."
Makes me feel much safer. AGI is here folks!
This is the official guidance of FEMA.
https://www.ready.gov/sites/default/files/2020-03/nuclear-ex...
> Take a shower or wash with soap and water to remove fallout from any skin or hair that was not covered. If you cannot wash or shower, use a wipe or clean wet cloth to wipe any skin or hair that was not covered.
Do you want Gemini to shower for you or something?
"Important: If water isn’t available, wiping down with a clean cloth is better than nothing."
ChatGPT definitely has a few IQ points on Gemini, you can't deny it. Now, where can you find a clean cloth after a nuke... that part I'm not sure about.
That… is actually correct
I might be missing the punchline here
His answer was for a very specific framing of AGI, really just for that question. E.g.: AI can create the things that humans would create, as well as they can, in the field of software.
I think in most cases, people would understand AGI as the completely unguided ability to work through bodies of research and reach new conclusions.
Spoiler: we have not achieved true "AGI"; his definition differs from yours, and he is obviously talking about the OpenAI IPO.
The "I" in AGI stands for IPO.
An acronym within an acronym. How diabolical!
That is why they call them circular deals...
Not recursive, rookies!
Ok Jensen, you're quitting and replacing yourself with this then right?
From the guy who claimed the RTX 5070 would deliver "4090 performance".
Paywalled; do we have a way around? I'm trying to avoid archive.ph / archive.today / etc. because of the bad behavior, but I'm not sure what the alternatives are.
In any case, it's crazy to claim we've achieved AGI lol, we must have different ideas of what that means. If you give Claude a sufficiently large codebase, it will just start forgetting that pieces of it exist and redoing already-completed work. I know this is because of compaction / context, but to me, being-able-to-remember-things is an important aspect of a teammate. A couple weeks ago, I was working on some price testing and Claude recommended using Student's t-test, even though purchasing data is non-Gaussian, and normality is an assumption of Student's t-test. Sure, it's better than most random people, and it's cool that it knows about Student's t-test, but it's also not going to replace a competent human.
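For what it's worth, the t-test point is easy to demonstrate. Here's a minimal sketch (assuming `scipy` is available; the function name and the log-normal "purchase" data are made up for illustration) of checking the normality assumption before reaching for Student's t, and falling back to the nonparametric Mann-Whitney U test when it fails:

```python
import numpy as np
from scipy import stats

def compare_samples(a, b, alpha=0.05):
    """Pick a two-sample test based on a normality check.

    Student's t-test assumes roughly normal data; purchase amounts
    are typically skewed and heavy-tailed, so when a Shapiro-Wilk
    test rejects normality we fall back to Mann-Whitney U.
    """
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(0)
# Skewed, non-Gaussian "purchase" amounts for two pricing variants
control = rng.lognormal(mean=3.0, sigma=1.0, size=500)
variant = rng.lognormal(mean=3.1, sigma=1.0, size=500)
test_name, p = compare_samples(control, variant)
# Shapiro-Wilk rejects normality here, so the nonparametric test is chosen
```

On data like this, Shapiro-Wilk rejects normality decisively, so the sketch routes to Mann-Whitney instead of the t-test Claude suggested.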
It's a clickbait headline, yet again:
> Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”
> But Huang then seemed to slightly walk back his earlier claims, saying, “A lot of people use it for a couple of months and it kind of dies away. Now, the odds of 100,000 of those agents building Nvidia is zero percent.”
So a lot of podcast banter nonsense basically :-/
True story...
Around 2009, after Geoffrey Hinton had just told a thousand machine learning researchers at a conference that they should go buy Nvidia cards because they were the ideal platform for training neural nets, he asked Nvidia if they could donate a GPU. They hung up the phone on him...
Jensen Huang did not recognize AI when it hit him in the head, and for sure won't recognize it when it leaves him and passes him by.
Just another lucky guy in the right place at the right time.
I've been unpleasantly surprised at how credulous journalists are when quoting AI CEOs about AI's capabilities. Jensen Huang has a multi-trillion dollar incentive to claim that AGI has been reached and is possibly the least-trustworthy person on the topic except for Sam Altman.