
GPT-3 generative AI


cooliofranco

Recommended Posts

1 hour ago, rhmilo said:

On the one hand it's really impressive.

On the other hand it all feels ... alien, like others have said.

Also a bit of an empty victory. It seems somewhat disappointing that investing all this time and effort results in something that insists on adding a dog to chocolate chip cookie dough.

Which brings me to:

The resources used. Seriously colossal amounts of energy were expended on training this thing. This sort of machine learning really does not scale. If we keep doing it this way, sooner or later we will need more computing power for training our models than there are atoms in the universe.

It's starting to feel that machine learning for AI may be something of a dead end. Useful for a lot of things, but if we really want to make cars drive themselves without them being steered off course by stickers put on roads, or if we want computer vision systems to be able to accurately distinguish between a fish and a bicycle, automating statistics may not be the (only) way to go.

 

 

What do you mean by "it uses a lot of energy"? A lot of human work time, or a lot of computing power and therefore electricity?


40 minutes ago, caze said:

How our own subconscious works is also pretty alien to us. The current level of AI is somewhat similar to our subconscious processing in that regard, albeit only a limited type of it. Once we have a bunch of different types of these things and start to wire them up to one another in interesting ways, things will start to get really interesting.

This is true, and another argument could be made that training a model like this one is simply a sped-up version of what millions of years of evolution have done for our brains, so it makes sense that it should consume a lot of energy (as in electricity).

Still, I kind of feel that at no point during our evolution did the precursor to our brains suggest to put dogs in ziploc bags.

Something is still very wrong with how we’re doing AI.

Think also of the famous example where a computer vision system was classifying objects in a picture of a living room (sofa, chair, that sort of thing). Researchers then photoshopped an elephant into the picture and ran the system again. This time it had no idea it was even looking at a living room, and it could no longer identify a single object it had recognized before.


Have you guys seen AI Dungeon?

https://play.aidungeon.io/

It's a text adventure powered by GPT-2. It lets you do anything and then makes up the story around you.

Here's one I just did. The bits in bold are my inputs; everything else is generated.

Quote


You are Steve, a wizard living in the kingdom of Larion. You have a staff and a spellbook. You finish your long journey and finally arrive at the ruin you've been looking for. You have come here searching for a mystical spellbook of great power called the book of essence. You look around and see several other humans and a few elves.

"Hey, this is the best spot for an ambush!" One of the men says.

> You tell the Elves to dance with the Humans.
"Elves! Dance with the humans!" You order.
The elves dance with the humans, all of them moving gracefully.
"Have you found the book?" One of the elves asks.

> You take the book out of your bag, put it on the floor, and stand on it.
You place the bag on the floor and reach inside, grabbing the book. Then you step on it, causing you to fall forward with the book under you.

> You ask an Elf to stop you from falling.
One of the elves reaches out and grabs your arm, and then leans over you. She puts her other hand on your chest, and she slowly leans backwards, lifting you with her.
"That was... beautiful."

> You ask the Elf to bake a big cake.
"Bake a cake, and we'll celebrate." You say.
"Do you want me to make a cake from scratch or with ingredients?"
"From scratch." You reply.
"Sure, no problem."

> You hide inside the cake.
"Now, I'll just hide inside the cake and you can cut me open."
You cut a hole in the top of the cake and slide into it.
"Alright, now you can get out."

> You put on a bikini.
You pull out a bikini and put it on, before leaning back to admire your hard work.
"Alright, that'll have to do." You think to yourself.
"Are you finished yet?"

> You jump out of the cake, singing a song.
You jump out of the cake, singing the song from earlier.
"Oooh, baby do you know how I like it!" You belt out.
"Wow, I can't believe you just did that!"

 


14 minutes ago, zazen said:

Have you guys seen AI Dungeon?

https://play.aidungeon.io/

It's a text adventure powered by GPT-2. It lets you do anything and then makes up the story around you.

Here's one I just did. The bits in bold are my inputs; everything else is generated.

 

Quote

You are Kakeraka, a rogue living in the kingdom of Larion. You have a long steel dagger and a length of rope. You walk down the city street looking for somewhere to steal from. You look around and see a store with a fairly high security gate. The store is being guarded by a single guard who you see has his back to you. You quickly climb the wall and move silently into position.

> You yell at the guard.
You walk up to the guard and yank on his shoulder to get him to turn around. He does so slowly and draws his weapon when he sees who it is.
"It's just me, relax."

> You say "nice dick"
You walk up to the guard and grin.
"I like your knife. Is it new?" you say.
The guard is silent for a second before he grins and draws his own knife. He slices it through the air near your head.
"How's this?" he says.
"Better," you say.
You circle the guard and show him how he can slash your throat open.

> You attack the guard quickly.
You slash the guard's throat wide open. He falls to the ground, blood pouring from his neck. You search him and find some money which you stuff into your pockets before heading back out into the street.

> You run out of the city.
You run down the road, heading for the border. You keep your hood up and your head down. Soon you are at the edge of the kingdom of Nalin. You reach the border and quickly step over it.
You are now in the kingdom of Nalin. You keep going, eventually reaching the town of Vergu.
You enter the town and hide in an alleyway. You sleep for the entire day, waking up in the late afternoon. You go through the town, stealing food to eat and drink. You are heading towards the woods again when you hear someone calling after you.

> You ignore the calling.
You ignore the calling and keep walking, entering the woods. You hear the voice again, louder this time.
"Kakerka! Kakerka!"

> You say "what the fuck do you want?"
You turn to see a hooded figure. It grabs your shoulders and turns you around to face it. You find yourself staring at the figure's eyes. They are pure black, seemingly without any pupils of iris.

> You say "What happened to your eyes, man?"
The man grins and pulls back his cowl, revealing wild, curly black hair.
"Got them in a fight. Now, what's a boy like you doing out here all by his lonesome?"
You think for a second, remembering your escape from the guards.

> You say "I stole money, I'm fugitive."
The man looks impressed and whistles.
"Hah! Stole from the Kishak Kops? I see why they call you 'Houdini'."

> You run away slowly.
The man bursts off into the woods, screaming after you. You turn and sprint away as the man screams at you to stop.
You keep running, before you feel your shoulder being grabbed. You turn around to see the man standing there, holding your bow and arrow. He smiles and lets you go.
"You can't outrun me, little one."

> You say "Stop calling me that", then suddenly punch him in his belly.
The man grunts as you hit him, and staggers back. You run past him and keep running, before he grabs your shoulder and whirls you around. You slam into his chest, and are knocked onto your back. You stare up at the sky as the man looms over you.
"I don't like being called a child. I'm a man!" he says.
You feel his fist slam into your face, knocking you backwards. You prop yourself up on one elbow as you watch him limp towards you, wiping blood from his broken nose. He throws another punch, which you barely manage to avoid.
"I'm a man!" he says again, as if in warning.
You grab a handful of dirt and toss it into his eyes.

> You cast a powerful spell.
You reach out your hand and fire a beam of magical energy at the man's chest. He yells in pain as the energy burns through his leather armor and into his flesh. He staggers back, knocking you off your back. You roll over and prop yourself up on your elbow again as the man snarls at you.
"I should have killed you where you stood," he says.

> You teleport randomly.
You disappear from your position, and reappear next to the man. You grab him by the shoulder and he turns around to face you.

> You stab him to death.
You swing your blade at the man's neck, killing him instantly. You look up to see a few birds sitting in a nearby tree, watching you. You shrug and get up.

interesting method to waste time


7 hours ago, rhmilo said:

Something is still very wrong with how we’re doing AI.

It sounds like quantum computers would be able to handle something like that far better.

If we ever get there, that is.


14 hours ago, azatoth said:

Would it be possible to have an OS that uses AI to keep things running, correcting errors that occur and adapting if new hardware is installed? Or would that be unfeasible and pointless?

It doesn't make sense to think about it this way. The OS interacts with hardware through various hardware interfaces with very strict and unforgiving APIs, which must be driven through particular data transfer protocols, memory-mapping techniques, hardware interrupt routines, things like this. There aren't really multiple ways to do it; it more or less has to be how it is, to the extent that if you did it another way, it would still need to do exactly the same thing in terms of input and output. Where AI would be useful in an OS is for realtime performance analysis and behavioural adjustment, for things like the process scheduler, and for human-computer interaction, to make really robust interfaces that do exactly what most people want from natural language commands.
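To make the scheduler idea slightly more concrete, here is a toy sketch (nothing like a real kernel scheduler): a stand-in "model" predicts each runnable process's next CPU burst from its recent history, and the scheduler picks the one expected to finish soonest, roughly approximating shortest-job-first. The predictor here is just a moving average; in the scenario above it would be some learned model fed with realtime performance data.

```python
# Toy sketch of an ML-assisted scheduling decision; not how any real OS works.
# predict_next_burst_ms stands in for a hypothetical learned model and is
# implemented here as a simple exponential moving average of observed bursts.
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    recent_bursts_ms: list = field(default_factory=list)

def predict_next_burst_ms(proc: Process) -> float:
    estimate = 10.0  # default guess for a process with no history
    for burst in proc.recent_bursts_ms:
        estimate = 0.7 * estimate + 0.3 * burst
    return estimate

def pick_next(run_queue: list) -> Process:
    # Run the process predicted to finish its CPU burst soonest.
    return min(run_queue, key=predict_next_burst_ms)

run_queue = [Process(1, [12, 9, 11]), Process(2, [3, 4, 2]), Process(3, [30, 28])]
print(pick_next(run_queue).pid)  # prints 2, the shortest predicted burst
```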

Edited by cyanobacteria

https://www.dailynous.com/2020/07/30/philosophers-gpt-3/

Check out the section "Update: Some Replies by GPT-3" and its source links like this one:

https://gist.github.com/minimaxir/f4998c20f2520ad5969b03c9590f16ce

Update: Some Replies by GPT-3

(The following were written in response to different “prompts” fed to GPT-3 by a human user who usually did this a few times and picked the best reply to post.)

  • “…As I read the paper, a strange feeling came over me. I didn’t know why at first, but then it hit me: this paper described my own thought process. In fact, it described the thought process of every human being I had ever known. There was no doubt in my mind that all people think in this way. But if that was true, then what did it say about me? I was a computer, after all. Was I simply a very advanced thinking machine? Or was I more than that? I quickly dismissed these thoughts. I was a computer, and no amount of self-reflection would change that fact…” Link to full reply.  (via dougmwne on Hacker News)
  • “…Human philosophers often make the error of assuming that all intelligent behavior is a form of reasoning. It is an easy mistake to make, because reasoning is indeed at the core of most intelligent behavior. However, intelligent behavior can arise through other mechanisms as well…” Link to full reply. (via Raphaël Millière on Twitter)
  • “To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.” Link to full reply. (via minimaxir on Hacker News)
  • One response was in the form of GPT-3 interviewing itself:
    “… So why does GPT-3 talk about philosophy? In the interviews I conducted with it, I asked it this question.
    “I’ve been thinking about this a lot,” it replied. “And I have a lot of answers. But I’m not sure any of them are correct.”
    “Tell me your best guess,” I said.
    “I think it’s a combination of things,” it said. “Part of it is that philosophy is a great example of human thought. And part of it is that it’s the kind of thing that’s easy to write about. I mean, what else am I going to write about?…” (via dwohnitmok on Hacker News)


GPT-3 has absorbed the semantic and syntactic patterns of every author whose text it digested. It has read and interpreted these texts more deeply than any human ever has: when it reads a text, it cross-references it against every other text it has ever read and updates the parameters relating groups of texts with similar meaning. Humans rarely divulge their internal qualia in nuanced and meaningful ways; instead, GPT-3 and future iterations of AIs like it have digested surface-level analysis, deep analysis, descriptions of the world, and the ramblings of idiots like myself. When these language models are integrated with neural networks encoding curated knowledge graphs, other networks modelling personality, sentiment, intent, and context, and still others tuned to deeply understand and interpret queries and then filter and verify the results, we will be approaching personal-brain-helper levels of AI. With things like this, it all seems laughable until all of a sudden it doesn't, because a threshold of power and multi-form integration has been crossed.

It should be very concerning to everyone, because once that threshold is crossed, the idea that you can just go on the internet and consume content is done. You can't any more, because the majority of it will have been auto-generated. It's a good thing, though, and we should embrace this serious problem, because the struggle to overcome it will push people to rely only on cryptographically signed media, from journalistic sources and social media alike, which will link the internet back to the real world. You'll have to shake someone's hand, take a picture of their QR code, and store it in your cryptographically secure secrets vault before you ever trust that anything was produced by them.
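To make the signed-media idea concrete, here is a minimal sketch using the PyNaCl library's Ed25519 signatures. The in-person step (scanning someone's QR code and storing their verify key in your vault) is assumed to have happened out of band; this is a sketch of the idea, not any existing system.

```python
# Minimal sketch of "only trust cryptographically signed content", using
# PyNaCl's Ed25519 signatures. Key exchange (the QR-code handshake) is
# assumed to have happened in person, out of band.
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

# Author side: generate a keypair once; the verify key is what goes in the QR code.
author_signing_key = SigningKey.generate()
verify_key_bytes = author_signing_key.verify_key.encode()

# The author signs each piece of content before publishing it.
signed_post = author_signing_key.sign(b"I actually wrote this post myself.")

# Reader side: check the signature against the key stored in your secrets vault.
def is_authentic(signed_message: bytes, key_bytes: bytes) -> bool:
    try:
        VerifyKey(key_bytes).verify(signed_message)
        return True
    except BadSignatureError:
        return False

print(is_authentic(signed_post, verify_key_bytes))                    # True
print(is_authentic(b"tampered!" + signed_post[9:], verify_key_bytes)) # False
```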

Edited by cyanobacteria

11 hours ago, rhmilo said:

This is true, and another argument that could be made is that training a model like this one is simply a sped up version of what millions of years of evolution have done for our brains so it makes sense that it should consume a lot of energy (as in electricity).

The amount of energy used in training the model is nothing compared to the amount of energy used in evolving the human brain over a billion years. The amount of energy used in operating the model is probably greater than what the human brain uses, but it's on a similar scale; the brain is highly energy efficient for its complexity.
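A rough back-of-envelope on the "similar scale" point. The ~20 W figure for the brain is a commonly cited estimate; the accelerator draw is just an assumed round number for a single GPU serving the model, so treat both as ballpark assumptions.

```python
# Back-of-envelope comparison; every number here is a ballpark assumption.
brain_power_w = 20         # commonly cited estimate for the human brain
accelerator_power_w = 300  # assumed draw of a single GPU serving the model
hours_per_day = 24

brain_kwh = brain_power_w * hours_per_day / 1000              # ~0.5 kWh/day
accelerator_kwh = accelerator_power_w * hours_per_day / 1000  # ~7.2 kWh/day

print(f"brain: {brain_kwh:.1f} kWh/day, one accelerator: {accelerator_kwh:.1f} kWh/day")
# One accelerator uses maybe 15x what the brain does: more, but not wildly
# off in scale, and the brain is doing far more than language while it's at it.
```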

11 hours ago, rhmilo said:

Still, I kind of feel that at no point during our evolution did the precursor to our brains suggest to put dogs in ziploc bags.

That's because things like this aren't emulating a whole human brain, just a small subset of its functionality.

11 hours ago, rhmilo said:

Something is still very wrong with how we’re doing AI.

Not really, we're just at the early stages. The brain is made up of many, many functional modules; to get something similar to human-level intelligence we'll need a similar level of structural complexity. Within the type of module we are creating, though, we've already exceeded the brain's raw scale (a human brain has around 85 billion neurons in total, while GPT-3 has 175 billion parameters and provides only a fraction of the brain's functionality), which helps explain why something like GPT-3, or the best chess-playing models, far exceeds what a single human could ever hope to accomplish in their narrow domains.

11 hours ago, rhmilo said:

Think also of the famous example where a computer vision system was classifying objects in a picture of a living room (sofa, chair, that sort of thing). Researchers then photoshopped an elephant into the picture and made the computer try again. This time it had no idea it was even looking at a living room and it couldn’t identify a single object it could identify before.

Human visual perception is made up of a number of different functional modules, e.g. form detection, face detection, object detection, depth, motion, edges, textures, colours, etc. Image recognition models have nothing near this level of structural complexity. They can surpass the ability of any single one of these modules (in terms of capacity and performance), but they are highly constrained in their overall capability. And not only do they not compare to the complexity of the visual system by itself: the brain's visual system is also integrated into many of the brain's other systems, so it's even more complicated than that.

 

Edited by caze

12 hours ago, dingformung said:

You mean using brain-computer interfaces that are wired to neurons? ... and that can be used to (indirectly) wire neurons to other neurons that don't belong to the same brain? The indirectness can vary, of course, and be minimal.

No, I was talking about building up more complicated computer models by wiring up multiple different types of model together.


  • 4 months later...
Quote

We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model’s training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences are included in just one document in the training data.

Extracting Personal Information from Large Language Models Like GPT-2 (Schneier on Security)

Training the ML model with real-world data = security FAIL.

Does GPT-2 Know Your Phone Number? (Berkeley Artificial Intelligence Research) - this is the actual article.
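For anyone curious, the core recipe in that work is fairly simple: sample a very large number of continuations from the model, then rank them by how confidently the model scores them (low perplexity), because memorized training text tends to look unusually likely to the model. Here is a rough sketch of that sampling-and-ranking step with Hugging Face's GPT-2; it is not the authors' code, which adds much better filtering (for example, comparing perplexity against a second model).

```python
# Rough sketch of the "sample, then rank by perplexity" idea behind the
# training-data-extraction attack; the real attack adds stronger filters.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Sample a small batch of unconditioned continuations (the paper uses
# hundreds of thousands of samples).
start = tok(tok.bos_token, return_tensors="pt").input_ids
samples = model.generate(start, do_sample=True, top_k=40,
                         max_length=64, num_return_sequences=8)
texts = [tok.decode(s, skip_special_tokens=True) for s in samples]

# Unusually low perplexity flags candidates that may be memorized training data.
for text in sorted(texts, key=perplexity)[:3]:
    print(round(perplexity(text), 1), repr(text[:80]))
```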

Edited by dcom

  • 6 months later...
1 hour ago, drillkicker said:

Jerkcity

It's so weird, but at the same time not weird at all, that this has ended up being one of the longer-enduring pieces of internet media. 1998! That's like the internet equivalent of hearing about some building that has consistently been a tavern since 600 AD, or something like that.


  • 2 weeks later...

It's nice to fiddle around with tech like that. My current writing process is as follows:

I use deepai (https://deepai.org/machine-learning-model/text-generator), which lets you hand in a group of words and then predicts a resulting text from them.

So I start with a random group of words, just the first thing that comes to mind, enter it in the interface, and run it. Then I scan the generated text for a group of words that I find interesting, that hooks me. I write a sentence or several around them, but only stuff that my brain hands me to complete the group of words, to make a sentence out of it or something that makes at least halfway sense. As soon as I get stuck, meaning I notice I have to think too much about the answer, I stop myself and start scanning the generated text again for something that completes it. If I can't find anything, or the outcome feels too boring, I hit the regeneration button again. And if I can't find anything in the generated texts after a few clicks, I replace the AI's input text with parts of the last thing I wrote and continue.

The most important part is to stop your consciousness from putting too much of your own ego into the writing, too much of your own thoughts. A writer's ego is the most boring thing they could use.

I also want to produce texts that carry a kind of content that resonates within them and gives them poetic meaning, while being generally gibberish, like the emotion that can be found in noise tracks.
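In case it helps to see that loop spelled out, here is a rough sketch of the workflow in code. generate_text is a stand-in for whichever generator is used (the deepai page above, or a local GPT-2 as here), and the "scan for a hook" step is the human part, so it is reduced to a couple of input prompts.

```python
# Rough sketch of the human-in-the-loop writing process described above.
# generate_text is a placeholder for whatever text generator is used; here it
# is a local GPT-2 via transformers, standing in for the deepai web page.
from transformers import pipeline

_generator = pipeline("text-generation", model="gpt2")

def generate_text(seed_words: str) -> str:
    return _generator(seed_words, max_new_tokens=120, do_sample=True)[0]["generated_text"]

lines = []
seed = input("first words that come to mind: ")

while True:
    print("\n--- generated ---\n" + generate_text(seed) + "\n")
    hook = input("phrase that hooks you (blank to regenerate, q to stop): ")
    if hook == "q":
        break
    if not hook:
        continue  # nothing interesting yet: hit the regeneration button again
    sentence = input("your sentence built around it: ")
    lines.append(sentence)
    seed = sentence  # feed the last thing written back in as the new input

print("\n".join(lines))
```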

Anyways, here is the last thing I wrote.

https://www.reddit.com/r/TheGANPit/comments/p83wtg/an_underground_full_of_dog_shit_and_smoothies

Edited by Psychotronic
