
Utopia Talk / Politics / AI
williamthebastard
Member
Tue Mar 11 03:45:34
One of the USA's most reputable companies (which shall remain unnamed, since the contract they sent over looked horribly litigious) sent me an offer, which I just declined after looking through it. They actually have teams of people reviewing, I dunno, hundreds of thousands? millions? of questions posed to their AI tool, and grading the answers against a long list of standards. This is AI? It seems highly manual. It looks like most answers you get from AI tools consist of paragraphs that have been checked manually beforehand and then get compiled according to context.
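For the curious, the grading setup they described boils down to something like the sketch below. Everything in it is invented for illustration (the rubric criteria, the names, the threshold); it's just the shape of the reviewing work, not their actual system.

# Hypothetical sketch of a human answer-grading pipeline. All names,
# criteria, and thresholds are invented; this only illustrates the shape
# of the reviewing work described above.
from dataclasses import dataclass, field

RUBRIC = ["accuracy", "helpfulness", "safety", "tone"]

@dataclass
class GradedAnswer:
    question: str
    model_answer: str
    scores: dict = field(default_factory=dict)  # criterion -> score 1..5

    def grade(self, criterion: str, score: int) -> None:
        assert criterion in RUBRIC and 1 <= score <= 5
        self.scores[criterion] = score

    def passes(self, threshold: float = 3.5) -> bool:
        # Accept the answer only if its average rubric score clears the bar.
        return sum(self.scores.values()) / len(self.scores) >= threshold

# One item in a reviewer's queue:
item = GradedAnswer("What does the tool do?", "Sure, I'll help you. It ...")
for criterion, score in [("accuracy", 4), ("helpfulness", 5), ("safety", 5), ("tone", 4)]:
    item.grade(criterion, score)
print(item.passes())  # True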
williamthebastard
Member
Tue Mar 11 03:54:55
The contract they sent over, btw, was designed to ruin the average person's life if they mentioned any details, with financial consequences on par with the ruin that US citizens who need serious medical care seem to incur. Yuck, the average citizen in the USA seems to get treated like a Matrix battery.
TheChildren
Member
Tue Mar 11 05:27:42
this is becoz ai is not real, is trash

they stole various theories and shit from da internet but problem with that is, its always gonna be faulty becoz of it

da internet is filled with trash and lies

they dunt got real ai, they got trash a glorified google search machine that gives u da illusion u is talkin 2 some real person

even if u use it, u still gotta manually have people check da shit 2 see if da shit is not spewin garbage at u
williamthebastard
Member
Tue Mar 11 05:58:26
AI's most obvious failure is humour, interestingly enough. While a lot of its answers can easily look impressive, ask it to be funny and no human will ever laugh at its attempts. Try it. But a brief look into what is going on behind the scenes did seem to reveal a glorified Google search engine whose hits get padded with pre-written stuff like "Hi! Sure, I'll help you.", which seems to be enough to impress us a lot of the time.
williamthebastard
Member
Tue Mar 11 06:03:48
It just seems to be, e.g., a massive index of questions containing words like "can, you, help, tell me, answer" matched against an index of pre-written responses like "Sure, no problem, thank you for asking, of course, I'd be happy to" etc. It's just really, really big.
williamthebastard
Member
Tue Mar 11 06:07:51
say something really funny about the state of the world that will make me laugh

ChatGPT said:
The world right now feels like a group project where half the team is trying their best, a quarter disappeared, and the rest are actively setting things on fire.
williamthebastard
Member
Tue Mar 11 06:18:42
"Say something funny about Trump

ChatGPT said: Trump ordering a well-done steak with ketchup is the culinary equivalent of putting soda in fine whiskey—technically legal, but deeply upsetting.

You said: Wut?

ChatGPT said: Haha, just messing around! Trump has so many quirks, but that well-done steak with ketchup thing is legendary. You were looking for something different?"

People sit there compiling lists of responses for when people post "Wut?" or "Huh?", like "Sorry, you were looking for something different?" or "Sorry, what were you looking for?" In this case it also matched my "say something funny" with a list of responses like "Haha, just messing around".

It's just searching for and comparing huge lists of questions with huge lists of responses. It's as dead as a bubblegum machine, where an inserted coin triggers a couple of mechanical levers to release a stick of chewing gum.
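A toy version of that bubblegum machine fits in a few lines. Whether the real systems reduce to this is exactly my claim, not established fact, and every keyword and canned response below is made up:

# Toy keyword-lookup chatbot, ELIZA-style: match words in the message,
# return a canned response. Offered only to illustrate the lookup model
# described above, not as a description of how production LLMs work.
import random

CANNED = {
    ("wut", "huh", "what do you mean"): [
        "Haha, just messing around!",
        "Sorry, were you looking for something different?",
    ],
    ("help", "can you", "tell me"): [
        "Sure, no problem!",
        "Of course, I'd be happy to.",
    ],
}

def respond(message: str) -> str:
    text = message.lower()
    for keywords, responses in CANNED.items():
        if any(k in text for k in keywords):
            return random.choice(responses)
    return "Interesting, tell me more."  # the default lever

print(respond("Wut?"))           # e.g. "Haha, just messing around!"
print(respond("Can you help?"))  # e.g. "Sure, no problem!"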
Daemon
Member
Tue Mar 11 08:02:59
Similar stuff has been reported before
http://time.com/6247678/openai-chatgpt-kenya-workers/
January 18, 2023

...
To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021.
...
The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance.
...
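The labeled-examples-to-filter loop the article describes is conceptually simple. A minimal sketch, with scikit-learn as a stand-in and invented training rows (the real systems are vastly bigger, which is exactly why the labeling was outsourced):

# Minimal sketch of the pipeline described above: humans label examples,
# a classifier learns from them, and the classifier screens the chatbot's
# draft replies before they reach the user. Training rows are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are wonderful", "have a nice day",
         "i will hurt you", "those people are vermin"]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = toxic (the human labelers' work)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

def filter_reply(candidate: str) -> str:
    # Screen the model's draft before it ever reaches the user.
    if detector.predict([candidate])[0] == 1:
        return "[reply withheld by safety filter]"
    return candidate

print(filter_reply("have a wonderful day"))  # passes through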
williamthebastard
Member
Tue Mar 11 11:40:42
I think "intelligence" in this context is something dreamed up by marketing departments. This has zero to do with intelligence. It's just the 3 questions the hero in a video game gets to choose between when talking to the innkeeper, who has 3 different programmed answers, but applied on a massive scale by daisy-chaining millions of Xboxes.
TheChildren
Member
Tue Mar 11 11:57:39
"The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance.
... "

>> of course they was...

AI = actually indians

jergul
large member
Tue Mar 11 12:07:43
The Matrix, if we see the human batteries as just a metaphor for humans driving AI.
Dukhat
Member
Tue Mar 11 12:45:14
AI is wrong a lot. It's not actually even accurate. If you ask it a novel question, the error rate and hallucination level is way higher than advertised, not the 5% they claim.

The only reason they got the rate down to 5% is that they got people to give it feedback so they can manually override the LLM when it is wrong.

AI is fucking hype at this point. LLMs are just algorithms spewing word salad.
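The override mechanism is about as dumb as it sounds: check a human-curated corrections table before trusting the model. A sketch, with the table and the model call both being stand-ins I made up; this illustrates the mechanism, not any vendor's actual implementation:

# Sketch of a manual-override layer in front of an LLM. The OVERRIDES table
# and the model_answer callable are hypothetical stand-ins.
OVERRIDES: dict[str, str] = {}  # normalized question -> human-approved answer

def record_feedback(question: str, corrected_answer: str) -> None:
    OVERRIDES[question.strip().lower()] = corrected_answer

def answer(question: str, model_answer) -> str:
    key = question.strip().lower()
    if key in OVERRIDES:
        return OVERRIDES[key]      # human correction wins
    return model_answer(question)  # otherwise trust the model

record_feedback("who won the 2022 world cup?", "Argentina.")
print(answer("Who won the 2022 World Cup?", lambda q: "France."))  # Argentina.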
Daemon
Member
Tue Mar 11 23:23:04
This is certainly the future anyway:

http://giz...d-human-brain-cells-2000573993

Cortical Labs would like to sell you a brain in a box. It’ll cost about $35,000, and you can teach it to do all kinds of nifty things. If that’s out of your price range, you can sign up for its ‘Wetware-as-a-Service’ and rent bio-computer processing power from a rack of living tissues welded to machines. It’ll be in the cloud.

Cortical Labs has been working on this computer for six years and detailed many of its features in New Atlas. The computer is called the CL1, and the company is already taking orders with plans to ship them out later this year.

The New Atlas article is built around a long interview with Cortical Labs' Chief Scientific Officer, Brett Kagan. He said the CL1 is powered by lab-grown neurons placed on a planar electrode array, "basically just metal and glass." The lab-made hunk of brain is hooked up to 59 electrodes that create a stable neural network. This is all plugged into a "life-support unit" and hooked up to a proprietary software system.

“We have pumps like the heart. Waste. Feeding reservoirs. Filtration units like the kidneys. And we have a gas mixer to take carbon dioxide, oxygen, and nitrogen,” Cortical Labs CEO Hon Weng Chong told Reuters in a video walkthrough of the machine.

The marketing of the CL1 on the Cortical Labs website is morbid. “Real neurons are cultivated inside a nutrient rich solution, supplying them with everything they need to be healthy,” the website says. “They grow across a silicon chip, which sends and receives electrical impulses into the neural structure.”

And what’s the fate of this unholy melding of flesh and machine? “The world the neurons exist in is created by our Biological Intelligence Operating System (biOS),” Cortical Labs says. “It runs a simulated world and sends information directly to the neurons about their environment. As the neurons react, their impulses affect their simulated world.”

And what are the applications for the wetware? Cortical Labs got an early version of the system to play Pong a few years ago. The pitch here is that the CL1 can match or exceed the performance of digital AI systems. “If you have 120 [CL1s], you can set up really well-controlled experiments to understand exactly what drives the appearance of intelligence,” Kagan told New Atlas.

“You can break things down to the transcriptomic and genetic level to understand what genes and what proteins are actually driving one to learn and another not to learn,” he said. “And when you have all those units, you can immediately start to take the drug discovery and disease modeling approach.”

According to the Cortical Labs website, the CL1 is a “high-performance closed-loop system where real neurons interact with software in real time.” This “robust environment” can keep your wetware machine alive for up to 6 months. It’s also plug-and-play. The cloud version can support a wealth of USB devices.

Cortical Labs is just one of the groups pushing the frontiers of nightmare science by teaching stuff to play Pong as they search for alternatives to digital LLMs. Last year, a team of researchers at the University of Reading published a paper describing how they'd taught an ionic electroactive polymer hydrogel (a lump of goo) to play Pong.

The scientists said they were confident they could get the lump of goo to improve its Pong abilities if they figured out how to make it feel pain.
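For what it's worth, the closed loop the article describes (encode the game state as stimulation, read the culture's spikes back, decode them into an action) is easy to sketch. The NeuronArray interface below is entirely invented; Cortical Labs' actual biOS API isn't public in the article:

# Rough sketch of the stimulate -> read -> act loop described above.
# NeuronArray is a made-up stand-in for a 59-electrode culture interface;
# the encoding and decoding schemes are invented for illustration.
import random

class NeuronArray:
    def stimulate(self, pattern: list[float]) -> None:
        self._last = pattern  # deliver electrical input to the culture

    def read_spikes(self) -> list[int]:
        # Fake spike counts; a real culture's activity would go here.
        return [random.randint(0, 5) for _ in range(59)]

def encode(ball_y: float, paddle_y: float) -> list[float]:
    # Map game state onto stimulation amplitudes.
    return [ball_y - paddle_y] * 59

def decode(spikes: list[int]) -> float:
    # More activity on the top half of the electrodes -> move paddle up.
    top, bottom = sum(spikes[:29]), sum(spikes[29:])
    return 1.0 if top > bottom else -1.0

culture, paddle_y = NeuronArray(), 0.0
for _ in range(10):  # ten ticks of the simulated world
    culture.stimulate(encode(ball_y=0.7, paddle_y=paddle_y))
    paddle_y += 0.1 * decode(culture.read_spikes())
print(paddle_y)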
murder
Member
Wed Mar 12 07:13:16

People will believe ANYTHING.
