
GPT-3 generative AI


cooliofranco


This new AI platform for generating text, and apparently code, web sites, and other stuff, has been getting a lot of attention. I'm usually underwhelmed by this kind of thing, but I have to say the demos and examples I've seen are pretty spooky.

I bet there are people on here who know about this and are using it, and I'd like to hear about it. A friend said an RPG he plays was using it, and I've messed around with Shortly, a short-story-writing app that uses it. I suspect you could use it for music, and that's probably already being done somewhere.

 


11 minutes ago, dingformung said:

What happens if you feed the source code of GPT-3 into GPT-3? (Or is it only for English and not for code?)

It's basically been fed all the publicly available text content of the internet, so it already contains tons of source code as material. You just need to prompt it to start generating code.
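For a concrete picture of what "just prompt it" means, here's a rough sketch against OpenAI's 2020-era Completion API. The engine name and parameter values are typical assumptions, not anything from this thread, and the network call is left as a comment since it needs an API key:

```python
# Sketch: coaxing GPT-3 into code generation by framing the prompt as the
# start of a source file, so the most plausible continuation *is* code.

def build_code_prompt(description: str) -> str:
    """Frame a plain-English task so the model continues with Python."""
    return (
        f"# Task: {description}\n"
        "# Python implementation:\n"
        "def solution():\n"
    )

prompt = build_code_prompt("reverse a string")
print(prompt)

# The actual call would look roughly like this (openai package, 2020 API):
#   import openai
#   resp = openai.Completion.create(
#       engine="davinci",
#       prompt=prompt,
#       max_tokens=64,
#       temperature=0,
#   )
#   print(resp.choices[0].text)
```

The trick is entirely in the framing: the model has seen so much code that a comment followed by `def` is most naturally continued by more code.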


21 minutes ago, zkom said:

It's basically been fed all the publicly available text content of the internet, so it already contains tons of source code as material. You just need to prompt it to start generating code.

So as I understand it, you can feed it text - or any info - and it logically completes it / "thinks" it further with the knowledge it has (basically all public knowledge on the internet). So what if it generates source code based on its own source code? Is GPT-3 able to generate GPT-4? Is GPT-4 able to generate GPT-5? Etc.

Sorry, totally ignorant on coding and machine learning.


1 minute ago, dingformung said:

So as I understand it, you can feed it text - or any info - and it logically completes it / "thinks" it further with the knowledge it has (basically all public knowledge on the internet). So what if it generates source code based on its own source code? Is GPT-3 able to generate GPT-4? Is GPT-4 able to generate GPT-5? Etc.

Sorry, totally ignorant on coding and machine learning.

No, it can't think or understand what it's writing or what the real-world context is. It basically just knows what kinds of words go together. If you try to generate code, its syntax might be correct but logically it can be complete nonsense.
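A hand-written toy example (not actual GPT-3 output) of that failure mode -- code that parses and runs without error but is logically nonsense:

```python
# Syntactically valid, runs fine, and completely wrong.
def average(numbers):
    total = 0
    for n in numbers:
        total = n          # plausible-looking, but overwrites instead of summing
    return total / len(numbers)

print(average([2, 4, 6]))  # prints 2.0 -- the real average is 4.0
```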


For example, here's a cookie recipe generated by GPT-3. It doesn't make much sense, even though it kind of keeps track of the ingredients to stay consistent. The second step already throws a dog in a Ziploc bag.

[screenshot: the GPT-3-generated cookie recipe]


Would it be possible to have an OS that uses AI to keep things running, correct errors that occur, and adapt when new hardware is installed? Or would that be unfeasible and pointless?


1 hour ago, dingformung said:

What happens if you feed the source code of GPT-3 into GPT-3? (Or is it only for English and not for code?)

It can write code too, either automatically by looking at code you've already written and extrapolating out to what the completed code would look like, or by examining an English-language description of what the code should do and generating it from that.

 

1 hour ago, zkom said:

If you try to generate code, its syntax might be correct but logically it can be complete nonsense.

This isn't correct; it's pretty good at generating perfectly functional code. It mightn't be exactly what you want, and it can have bugs, but at this early stage it's very impressive.


1 hour ago, caze said:

This isn't correct; it's pretty good at generating perfectly functional code. It mightn't be exactly what you want, and it can have bugs, but at this early stage it's very impressive.

Let me rephrase a bit: it can write completely functional code, but it doesn't understand what the code does. It doesn't know the meaning of any of the commands, operators, etc. It only knows the models it has built from the data it was fed. It's impressive what it can do with those models, but it still doesn't know what happens when you run the code, and the code is just the best guess it has based on the previous examples it has seen.
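A toy bigram model makes that "best guess from previous examples" concrete. It's nowhere near GPT-3's scale or architecture (GPT-3 uses a neural network, not a lookup table), but it has the same flavor: it tracks which word tends to follow which, with no idea what any word means:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words followed it in the training text."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    # The single most frequent follower -- a statistical guess, nothing more.
    return follows[word].most_common(1)[0][0]

model = train_bigrams([
    "the code compiles",
    "the code runs",
    "the code compiles cleanly",
])
print(predict_next(model, "code"))  # prints "compiles" (seen 2x vs. 1x for "runs")
```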


2 hours ago, azatoth said:

Would it be possible to have an OS that uses AI to keep things running, correct errors that occur, and adapt when new hardware is installed? Or would that be unfeasible and pointless?

This would probably eventually lead to a system where nobody understands how or why it works, and if something goes wrong that the AI can't fix, it's unfixable.


It's certainly good enough to generate WATMM responses. Would be fun to have a WATMM AI bot as a mascot.

Or how about an online forum with only AI bots that respond to each other? It would be sort of pointless.

Is it possible to create a recursive feedback loop of a code-generating AI recreating new versions of itself infinitely based on its own source code, given infinite processing power?
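The degenerate base case of that loop already exists: a quine, a program whose output is exactly its own source code. It reproduces itself forever without ever improving, which hints at why self-regeneration alone wouldn't turn GPT-3 into GPT-4:

```python
# A quine: print(s % s) substitutes the string into itself, so the output
# is exactly these two lines of source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```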


WHAT IF WATMM IS A BOT AND YOU ARE THE ONLY REAL USER?


13 minutes ago, zkom said:

Let me rephrase a bit: it can write completely functional code, but it doesn't understand what the code does. It doesn't know the meaning of any of the commands, operators, etc. It only knows the models it has built from the data it was fed. It's impressive what it can do with those models, but it still doesn't know what happens when you run the code, and the code is just the best guess it has based on the previous examples it has seen.

Yes, that's correct. It has no long-term memory (it does have a short-term memory, though: it can remember stuff within the context of the thing it's trying to generate, but once the calculation is complete that memory goes away), no abstract reasoning ability, etc.
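That short-term memory is just a fixed-size context window: the model sees only the last N tokens, and anything older is simply gone. A minimal sketch (the window size here is arbitrary; GPT-3's was 2048 tokens):

```python
def visible_context(tokens, window=8):
    """All a fixed-window model can 'remember' when generating the next token."""
    return tokens[-window:]

conversation = "my name is Alice . later , much later : what is my name ?".split()
print(visible_context(conversation))
# "Alice" has already slid out of the window, so the model can't recall it
```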


3 hours ago, zkom said:

For example, here's a cookie recipe generated by GPT-3. It doesn't make much sense, even though it kind of keeps track of the ingredients to stay consistent. The second step already throws a dog in a Ziploc bag.

 

will try this tonight and report back


58 minutes ago, caze said:

Yes, that's correct. It has no long-term memory (it does have a short-term memory, though: it can remember stuff within the context of the thing it's trying to generate, but once the calculation is complete that memory goes away), no abstract reasoning ability, etc.

That's what it wants you to think.


I was totally unimpressed with modern AI tech until I saw 20 chess games between AlphaZero (a neural-network-based chess program) and Stockfish (one of the strongest chess engines), released in 2017 by Google's DeepMind (the company that developed AlphaZero); soon after, they released 100 additional games. The developers gave AlphaZero only the basic rules of chess and left it to play against itself. After only 4 hours of playing/teaching itself it achieved a higher rating than the strongest supercomputer programs, and after 24 hours a superhuman level of play that nobody believed would be possible in the near future.

But the games were totally mind-blowing! I've never seen that level of sophistication in either human or supercomputer play: sacrificing pieces for no apparent reason, crazy moves, and yet it wins in totally unexpected ways. AI is the closest we can come to experiencing an alien mind imo, cause it feels alien... it's just otherworldly!
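The "given only the rules, left to play with itself" idea can be shown in miniature. The toy below exhaustively self-plays a trivial game (take 1-3 sticks, whoever takes the last stick wins) and rediscovers its known strategy from the rules alone. AlphaZero samples games with a neural network instead of enumerating them, but no human strategy is supplied in either case:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(sticks: int) -> bool:
    """Can the player to move force a win? Derived purely from the rules."""
    if sticks == 0:
        return False  # opponent just took the last stick and won
    # winning iff some move leaves the opponent in a losing position
    return any(not winning(sticks - take) for take in (1, 2, 3) if take <= sticks)

# Rediscovered with zero strategic hints: multiples of 4 are lost.
print([n for n in range(1, 13) if not winning(n)])  # prints [4, 8, 12]
```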

 

if someone's interested in chess and this topic, here's an example: 

ok... one more

 

 


On the one hand it's really impressive.

On the other hand it all feels ... alien, like others have said.

Also a bit of an empty victory. It seems somewhat disappointing that investing all this time and effort results in something that insists on adding a dog to chocolate chip cookie dough.

Which brings me to:

The resources used. Seriously colossal amounts of energy were expended on training this thing. This sort of machine learning really does not scale. If we keep doing it this way, sooner or later we will need more computing power for training our models than there are atoms in the universe.
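The scale point can be made concrete with the usual back-of-envelope rule from the scaling-laws literature, training FLOPs ≈ 6 × parameters × tokens; the GPT-3 figures below are the commonly cited ones, used here as assumptions:

```python
# Rough training cost for GPT-3 under the standard FLOPs ~ 6*N*D estimate.
params = 175e9   # GPT-3 parameter count
tokens = 300e9   # approximate tokens processed during training
flops = 6 * params * tokens
print(f"{flops:.2e} FLOPs")            # prints 3.15e+23 FLOPs

# On a machine sustaining one petaFLOP/s (1e15 FLOP/s):
seconds = flops / 1e15
print(f"{seconds / 86400:.0f} days")   # prints 3646 days -- about a decade
```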

It's starting to feel that machine learning for AI may be something of a dead end. Useful for a lot of things, but if we really want to make cars drive themselves without them being steered off course by stickers put on roads, or if we want computer vision systems to be able to accurately distinguish between a fish and a bicycle, automating statistics may not be the (only) way to go.

 

 


How our own subconscious works is also pretty alien to us. The current level of AI is somewhat similar to our subconscious processing in that regard, albeit just a limited type of it. Once we have a bunch of different types of these things and start to wire them up to one another in interesting ways, things will start to get really interesting.


dunno, as soon as it is able to mutate on its own it might do interesting stuff at some point. doesn't matter whether all of it is useful or not. it can just be inspiring/funny/interesting. sort of like technological poetry.

15 minutes ago, caze said:

How our own subconscious works is also pretty alien to us. The current level of AI is somewhat similar to our subconscious processing in that regard, albeit just a limited type of it. Once we have a bunch of different types of these things and start to wire them up to one another in interesting ways, things will start to get really interesting.

you mean using brain-computer interfaces that are wired to neurons? ... and that can be used to (indirectly) wire neurons to other neurons that don't belong to the same brain? the indirectness can vary of course and be minimal

 



4 minutes ago, zazen said:

It imitates Trump perfectly


"We have a lot of people, a lot of people, a lot of people that are very angry, and they're very, very, very -- I mean, you have many people and I can't even count them all, I can't even list them off, I can't even, I can't even -- I'm -- I can't even -- you know, I mean there's many people that are very angry, and they're very, very, very, very, very, very angry, and they're -- they're being very misused."

 

