
Utopia Talk / Politics / Google Bard AI is dumb as fuck
murder
Member
Tue May 16 20:44:49
Does Bard know how many times ‘e’ appears in ‘ketchup’?

http://www...her-artificial-nor-intelligent

Nope.

Nimatzo
iChihuaha
Wed May 17 12:46:36
Well, when you were as young as Bard, you were even dumber.
tumbleweed
the wanderer
Wed May 17 13:17:16
why does she use "appear" in every test?

maybe it's just not understanding the use of the word in that context
Daemon
Member
Wed May 17 13:29:29
The EU protects me from Bard.
murder
Member
Wed May 17 15:17:13

"maybe it's just not understanding the use of the word in that context"

That's the impression I got too. She could have just asked the damn thing to spell it.

murder
Member
Wed May 17 15:22:19

"Well, when you were as young as bard you were even dumber."

Bard is powered by LaMDA which is at least 3 years old, and I can assure you that I could identify an "e" when I was 3. I was actually a very smart child. I didn't start getting dumb until later.

Nimatzo
iChihuaha
Wed May 17 15:22:40
Seems the problem is something else.


How many e in ketchup


ChatGPT
There are no occurrences of the letter "e" in the word "ketchup."
murder
Member
Wed May 17 15:24:34

"The EU protects me from Bard."

Every non-national entity should protect its citizens from stupidity. Artificial or otherwise.

Nimatzo
iChihuaha
Wed May 17 15:24:35
Are you sure?


ChatGPT
Apologies for the incorrect response. There is indeed one occurrence of the letter "e" in the word "ketchup."
Nimatzo
iChihuaha
Wed May 17 15:26:28
lol

How many t in ketchup?


ChatGPT
There are two occurrences of the letter "t" in the word "ketchup."
murder
Member
Wed May 17 15:27:42

"ChatGPT: There are no occurrences of the letter "e" in the word "ketchup.""

OK that's bizarre. You know I don't think much of artificial intelligence, but that kind of response leads me to believe that these things can't figure anything out.

I wonder what happens if you tell it the answer and then ask it the question.

Nimatzo
iChihuaha
Wed May 17 15:30:31
Murder
Yes, but could you explain quantum fields?
murder
Member
Wed May 17 15:31:02

"ChatGPT: There are two occurrences of the letter "t" in the word "ketchup."

"Hallucinating"?


"ChatGPT: Apologies for the incorrect response. There is indeed one occurrence of the letter "e" in the word "ketchup."

I wonder if that got scraped up recently due to that article I posted.

Maybe that's why it was able to answer correctly for "e" but not "t".

murder
Member
Wed May 17 15:32:35

"Murder: Yes, but could you explain quantum fields?"

No :o\

murder
Member
Wed May 17 15:32:46

Still can't.

Nimatzo
iChihuaha
Wed May 17 15:35:24
Murder
While these things have limitations, the quality of the response is contingent on the quality of the prompt. Learning how to "super prompt" is a skill we have to develop to get quality text out of them.
murder
Member
Wed May 17 15:40:27

I'm messing with the Bing thing (whatever the fuck that is) ...

---------------------------
How many consonants are there in the name Microsoft?

>> There are 7 consonants in the name Microsoft.
---------------------------

I think it's just guessing.
Nimatzo
iChihuaha
Wed May 17 15:46:42
Murder
Imagine the space that every particle exists in is like pixels on a screen that light up when a particle moves through them. Every pixel has the potential to express every force or interaction that can happen to a particle; all the laws of the universe exist in the fields/pixels and are mediated to particles. It's still a matter of debate whether space actually is quantized like my pixel example - it could just be one continuous giant pixel - but it works as an illustration. So space is actually a fundamental property of the universe. Every interaction between atoms bonding and breaking, and the fact that photons can travel through space, is because of these properties of these fields that no one knows about (outside quantum theory) and that we can see!
Nimatzo
iChihuaha
Wed May 17 15:47:35
*Can’t see.
williamthebastard
Member
Wed May 17 16:14:50
In these MAGA times, the fact that it can mechanically state the opposite of what it sees when its theory conflicts with practicality and the physical world might be its most human quality yet.
Nimatzo
iChihuaha
Wed May 17 16:22:02
Apparently I understand quantum theory better than ChatGPT; I caught the dumb bastard mixing general relativity and quantum theory trying to explain gravity.

“Are you mixing general relativity with quantum theory here?”

“Apologies if there was any confusion. You are correct that in my previous response, I mixed aspects of general relativity and quantum theory.”

That is a major weakness these things have: they don't know when they are bullshitting, even when they are bullshitting halfway through some Frankenstein version of a unified theory.


Seb
Member
Thu May 18 02:38:15
Nimatzo:

It's the token thing.

It deals with scraps of paper with words on them.

You are asking it to complete an analytical task of breaking the word ketchup down into characters, searching the word for particular letters, and counting them.

While it probably understands that words are made of letters, and that ketchup is a word, it hasn't got a way of "seeing" how many letters are in "ketchup" or what they are, unless someone has written *about* ketchup and the letters in it - which nobody would, because a human understands a word not as a single token but as a string of tokens (the individual letters), so something like "ketchup is a seven-letter word spelled k-e-t-c-h-u-p" would never need to be written down.
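
Here's roughly what that looks like from the model's side - a minimal sketch using OpenAI's tiktoken library (the exact split depends on the encoding, so run it to see):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding GPT-4 models use
ids = enc.encode("ketchup")
print(ids)                             # a short list of integers, not letters
print([enc.decode([i]) for i in ids])  # the word in one or two chunks, not k-e-t-c-h-u-p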

Seb
Member
Thu May 18 02:40:46
Nim:

This is part of what I mean about lack of world model being a problem for truly general abstraction.

It's like you might know that atoms exist but when someone put a small grey block in front of you and asks how many silicon atoms are in it, you aren't necessarily going to be able to answer that or even understand the question unless you have access to a bunch of facts including what that block is made out of.
Seb
Member
Thu May 18 02:47:48
A better way of expressing it: imagine you'd learned English purely orally and had never seen the Roman alphabet.

And I explained, orally, what writing was: how there are strings of symbols associated with sounds, and how you can put them together so that if you make the sounds you say a word.

And then I asked you how many "e"s were in ketchup.

You'd struggle.

The mistake here is to think that LLMs process text like we do. What they deal with are tokens - they see a bit of paper with a word on it (actually fragments of words, I believe; it's complicated) - and while they render tokens as strings of letters, for them it's a different medium. Writing/reading for them is as it would be for someone well educated about writing, but only through oral means.
Nimatzo
iChihuaha
Thu May 18 03:18:36
Seb
Yes, but how come it corrected itself? Did it just guess?
Seb
Member
Thu May 18 05:06:35
I don't know.

It might be some of the system 1 / system 2 thing - by giving it negative feedback you push it more towards some particular set of weights where there are patterns of tokens and relations that capture this information.

What it can't do is take the token you gave it "ketchup", break it down into letters, and analyse them - as we would.

Instead it needs to find a set of tokens that corresponds to the text of your answer, based on their correlations to the vector in phase space that the tokens you gave it represent.

I.e. it's not engaging with the medium in the same way we are, and the way it accesses and reasons is not very much like the way we do - except perhaps the way we do when we intuitively understand something (and we often intuit it wrong when we move slightly out of context).


When you go "thinking step by step" and use langchain or other plugins, what the langchain wrapper / application layer on top of the LLM does is search for and pull out sequences of tokens that map to an API call, execute the API call, and return the answer as a string of tokens.

So if you had a small string-parsing function plugin, you could probably say "how many letters e in the word ketchup, thinking step by step" and then it would go:
O: the user is asking me to search the string "ketchup" for instances of the letter e.
T: Run the function f = countif("e", "ketchup")
[API/Function call]
[Returns f=1]
R: There is 1 e in ketchup.
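
In plain Python the whole loop is trivial - hypothetical names here, just the shape of the pattern rather than the real langchain API:

def countif(letter, word):
    # the deterministic step the bare LLM can't do natively
    return word.lower().count(letter.lower())

# O: the user asked how many "e" in "ketchup"
# T / [API call]: run real code instead of guessing from token statistics
f = countif("e", "ketchup")
# R: hand the result back to the LLM as a string of tokens
print(f"There is {f} e in ketchup.")  # -> There is 1 e in ketchup.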
Seb
Member
Thu May 18 05:11:20
A really powerful LLM - and I'm not sure GPT-4 is at that stage - wrapped in something like AutoGPT might be able to write and execute a string-parsing function.

But it can't do it "natively".

Like how we "know" what calculation might be needed and how to perform it but need to reach for a calculator to work it out.

Nimatzo
iChihuaha
Thu May 18 06:01:17
Heh, yeah, the way these things fail is almost as fascinating :) I tried a different prompt and told it to count every letter in ketchup; it got everything right, but told me the letter T comes twice.

One thing that really impresses me is the ability to reason with GPT to get it to do things it refuses to do at first because it has a knee-jerk social justice response.

Have you heard about this concept of super prompting? Because it does seem some of the errors are in how we communicate with it.
Seb
Member
Thu May 18 06:13:16
100% certain it wasn't actually counting the number of letters.

It's just accessing knowledge from its database of weights. Somewhere in training there are probably loads of "lists of seven-letter words" - e.g. from Scrabble resources - that give some weak correlation between the token pair "seven letters" and "ketchup". It will take a bit of prompt engineering and feedback to get it to converge on that, but it is likely there.

If you give it a nonsense word you made up and ask it to count the letters it would never get it right without giving it the mechanism to break out to some ancillary function.
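
That's easy to test against ground truth, since a deterministic counter doesn't care whether the word exists (word below invented for the test, so no Scrabble list covers it):

made_up = "glerpatch"      # nonsense word, appears in no corpus
print(made_up.count("e"))  # 1 - trivially right for any string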
Seb
Member
Thu May 18 06:15:24
That's my hypothesis based on my understanding of how LLM tech works - I have to say, though, I've not done the same kind of deep dive as on other topics, so there may be subtleties about GPT and what else it might have besides a bare-bones LLM.
murder
Member
Thu May 18 17:54:44

"What it can't do is take the token you gave it "ketchup", break it down into letters, and analyse them - as we would."

Whatever the hell is running the Bing thing can spell it.

Right after I initially asked it how many "e" in ketchup and it replied that there were none, I asked it to spell ketchup ... which it did ... and then asked it how many "e" it used to spell ketchup ... and it replied that it had used one.

murder
Member
Thu May 18 17:59:15

I think part of the problem is their ability to understand the question. Kind of like a native English speaker using idioms when speaking to a non-native speaker.

Seb
Member
Fri May 19 01:46:13
Murder:

Maybe it can break it down into letters.

But I think you need to prompt it to do so. It's not engaging with the word ketchup in the same way we intuitively would - as an object self-evidently made of a string of letters.

Try asking, in a fresh session, "how many times does the letter 'e' occur in the word ketchup".
Seb
Member
Fri May 19 01:48:01
It might be struggling to understand that you mean the representation of ketchup as a string of letters, rather than what ketchup means to it (a token occupying a place in a parameter space close to mayonnaise).
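
That "close to mayonnaise" intuition is easy to probe with an off-the-shelf embedding model - a sketch using the sentence-transformers library (exact numbers vary by model; the expectation is only that the condiments land nearer each other than either does to an unrelated word):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(["ketchup", "mayonnaise", "galaxy"])
print(util.cos_sim(emb[0], emb[1]))  # ketchup vs mayonnaise: expect relatively high
print(util.cos_sim(emb[0], emb[2]))  # ketchup vs galaxy: expect lower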
murder
Member
Fri May 19 15:11:10

> The letter ‘e’ occurs once in the word ketchup. Is there anything else you would like to know?

Of course it may have learned, or been tweaked, or simply guessed correctly, or who knows.

murder
Member
Fri May 19 15:18:59

> There are 7 consonants in the name Microsoft. Is there anything else you would like to know?

...

> There are 3 vowels in the name Microsoft. Is there anything else you would like to know?

...

> There are 9 letters in the name Microsoft. Is there anything else you would like to know?

...


"If there are 9 letters in the name Microsoft, and 7 of them are consonants and 3 of them are vowels, then which letter in Microsoft is both a consonant and a vowel?"

> I apologize for any confusion in my previous messages. There are 9 letters in the name Microsoft, 7 of which are consonants and 2 of which are vowels. None of the letters in Microsoft are both a consonant and a vowel. Is there anything else you would like to know?

It seems pretty certain of itself.
murder
Member
Fri May 19 15:23:12

"Please list the consonants and the vowels in the name Microsoft."

> In the name Microsoft, the consonants are M, c, r, s, f, t and the vowels are i and o. Is there anything else you would like to know?


"You listed 6 consonants, so why do you reply that there are 7 consonants in the name Microsoft?"

> I apologize for the mistake in my previous message. There are 6 consonants in the name Microsoft: M, c, r, s, f and t. Is there anything else you would like to know?
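
For reference, a few lines of Python give the ground truth (not what Bing runs internally, just a check):

name = "Microsoft"
vowels = [c for c in name if c.lower() in "aeiou"]
consonants = [c for c in name if c.isalpha() and c.lower() not in "aeiou"]
print(len(name), len(consonants), len(vowels))  # 9 6 3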

murder
Member
Fri May 19 15:27:41

I think they are just hoping the public trains these things by giving a thumbs up for correct answers and a thumbs down for incorrect answers, because it's obviously not ready for prime time.

murder
Member
Fri May 19 15:29:05

This thread started out about Google Bard, but I don't think I've tried that one yet.

Seb
Member
Sun May 21 23:28:35
This explains a bit about how these models understand writing.

https://towardsdatascience.com/byte-pair-encoding-the-dark-horse-of-modern-nlp-eb36c7df4f10
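
For the impatient, a toy version of the core idea - real tokenizers learn their merge rules from a huge corpus, this just shows the mechanism on one word:

from collections import Counter

def merge_step(symbols):
    # fuse the most frequent adjacent pair into a single symbol
    pairs = Counter(zip(symbols, symbols[1:]))
    if not pairs:
        return symbols
    (a, b), _ = pairs.most_common(1)[0]
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
            out.append(a + b)
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

word = list("ketchup")   # start from single characters
for _ in range(3):
    word = merge_step(word)
print(word)              # letters fused into multi-character tokens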
