
AI - The artificial intelligence thread



As far as AI is concerned, I'm not worried about the current generation, who are getting AI decades into their lives. I'm worried about the younger generations, who will never know anything except AI. We've seen the colossal effect smartphones alone have had on Gen Z. AI will be another beast entirely.


10 hours ago, auxien said:

I'm curious if it's better at teaching than a human, though. Even if the answer is no, it can do so at the user's whim (it's much more difficult to find a human available 24/7, of course). Still, the availability of video/interactive recorded human teaching would seem to trump a (possibly) flawed piece of AI.

I don't think it's better than a human at explaining topics when the human is an expert on that topic and a good teacher, but I think it's already extremely useful. I was copying large chunks of text from machine learning papers into it and asking it to briefly summarize those sections. It did a great job of this, but one time there was a complicated concept I had never heard of in the summary it gave me, so I asked it to explain it. The first response was correct but not very detailed, so I asked for more detail on a specific part of the explanation; you can do this recursively on exactly the parts you want explained further. Once all my questions were answered, I typed out my understanding of the concept and asked it to tell me whether I was getting it right, which it also did well. Next, I asked it to explain how that concept fits into the original summary of the paper, and it did so. This was a fairly niche and not-so-simple topic, and it actually helped me understand it quicker than if I had googled it.


Yeah, I actually think this technology will have a huge impact. Not in the sense of having created the magic AI or the singularity, or whatever Hollywood fantasy, but in a more functional sense. It's a new kind of steam engine, if you will, potentially a new industrial revolution: a steam engine for knowledge acquisition.

In a way, ChatGPT is a technique that can summarize all the knowledge on the internet, and in such a way that it can help people looking for knowledge. That's quite useful, as we're living in an increasingly knowledge-intensive civilization where even the experts can be overwhelmed by the amount of (new) information that is out there. Here's a tool you can ask anything you need to know, and poof, it presents you a summary based on everything that's out there. 42! 😉


  • 2 weeks later...

AI is just a name for advanced computer algorithms, which can be wielded for any arbitrary purpose. Right now the vast majority, 99%, of all AI algorithms being run exist purely to increase profits, which is decoupled entirely from any human-guided decisions. This absolute worship of capital distorts humanity's trajectory in a way that must be corrected with a renewed devotion to the earth and to nature, as well as a better perspective on our place in the universe.


52 minutes ago, ignatius said:

[attached image]

What do you think about this? For me, it's the first wave of things to come that will soon hit these people to a much larger extent. If he were clever he would use GPT himself instead of complaining, but on the other hand it's kind of sad to see these manual crafts go to the trash. Then again, the same thing happened when Photoshop was introduced.


So it seems ChatGPT probably has about a 4,000-token limit on what it can handle. A token is sort of part of a word, but pretend token = word if that's easier. That means if you ask it a question involving more than 4,000 tokens, or continue a conversation past 4,000 tokens, it will struggle. Some people have had longer conversations with it, though - perhaps it's smart enough to summarise/compress earlier parts of the conversation into fewer tokens so it can keep going.
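One way to picture the limit: treat the conversation as a budget that the oldest messages fall out of. A minimal sketch, approximating tokens as whitespace-separated words (real tokenizers split text more finely, so the counts here are only illustrative):

```python
# Naive sketch of a fixed token budget, approximating tokens as words.
# Real tokenizers split text differently, but the budgeting logic is the same.

TOKEN_LIMIT = 4000

def approx_tokens(text):
    """Crude stand-in for a real tokenizer: count whitespace-separated words."""
    return len(text.split())

def fits_in_context(conversation):
    """conversation: list of message strings; True if the whole thing fits."""
    return sum(approx_tokens(m) for m in conversation) <= TOKEN_LIMIT

def truncate_to_budget(conversation, limit=TOKEN_LIMIT):
    """Drop the oldest messages until the remainder fits the budget."""
    kept = list(conversation)
    while kept and sum(approx_tokens(m) for m in kept) > limit:
        kept.pop(0)  # the oldest message falls out of the context window
    return kept
```

This is the simplest possible policy (drop oldest first); the summarise/compress idea mentioned above would replace the dropped messages with something shorter instead of discarding them.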

So it's smart, but its 'working memory' (that's one way of describing it) is about 3,000-4,000 words.

(Note this is separate from its 'understanding of the world', which is probably static and would be the combined weights of all the connections in the trained model - probably enormous, terabytes.)
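For a rough sense of that scale (assuming a GPT-3-sized model of 175 billion parameters, a figure the post doesn't actually specify), the weights alone come to hundreds of gigabytes, approaching a terabyte at full precision:

```python
# Back-of-envelope storage estimate for a 175B-parameter model.
params = 175_000_000_000
gb_fp32 = params * 4 / 1e9  # 32-bit floats: 4 bytes per weight -> 700.0 GB
gb_fp16 = params * 2 / 1e9  # 16-bit floats: 2 bytes per weight -> 350.0 GB
print(gb_fp32, gb_fp16)
```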

Obviously we humans have much more working memory than ChatGPT; we can recall most of our lives in a fuzzy, summarised sort of way, and any of those recollections can be incorporated into the things we say.

So the way to make ChatGPT more like a person is to increase the token limit. That's probably mega difficult, but imagine a fantasy version with a million-token limit: it would be able to converse with you for a couple of weeks (say) before it ran out of tokens. Within that two-week conversation it would perfectly remember everything you'd asked or told it and could work that back into its responses. Or if you had it reading all your documents and emails at work, it would burn through the tokens faster, but it would probably manage a couple of days of shadowing you in your work and helping you decide what to write next.

So that's fantasy for now.

But what probably is within reach in the next year or so is an 8,000-token version that has 4,000 tokens for what's happening right now, and 4,000 tokens that it keeps to itself for recording ongoing context. So it kind of has a longer-term memory of 4,000 tokens that it updates with a summary of what is happening ("Tom is talking to me about his work project, it involves X people called a, b, c, and they are building some software to do Y; we are working on a presentation for next week; the issues in play are..."). That seems like it would be pretty smart: it could learn context over time (as long as it can compress that context down to 4,000 tokens) and use it to inform whatever the current request is. I wonder if that would work. It would really start to feel like a buddy/assistant that remembers what you were doing and helps out. (And/or maybe that would also be quite freaky.)
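That two-tier scheme is easy to sketch. This is purely hypothetical code: the summarize step is stubbed out with a hard truncation, whereas a real system would ask the model itself to compress the evicted messages:

```python
# Hypothetical sketch of a two-tier context: a live window of recent
# messages plus a running summary of everything that scrolled out of it.

WINDOW_BUDGET = 4000   # tokens for the live conversation
SUMMARY_BUDGET = 4000  # tokens for the private running summary

def approx_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def summarize(old_summary, evicted_messages):
    # Stub: a real system would ask the model to compress this text.
    combined = old_summary + " " + " ".join(evicted_messages)
    words = combined.split()
    return " ".join(words[:SUMMARY_BUDGET])  # hard-truncate to budget

class TwoTierMemory:
    def __init__(self):
        self.window = []   # recent messages, kept verbatim
        self.summary = ""  # compressed record of older context

    def add(self, message):
        self.window.append(message)
        # Evict old messages into the summary when over budget.
        while sum(approx_tokens(m) for m in self.window) > WINDOW_BUDGET:
            evicted = self.window.pop(0)
            self.summary = summarize(self.summary, [evicted])

    def context(self):
        """What the model would actually see when composing its next reply."""
        return (self.summary, list(self.window))
```

Whether a 4,000-token rolling summary stays coherent over days of use is exactly the open question in the post; the sketch just shows the bookkeeping.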

Edited by zazen

https://www.theverge.com/2023/1/20/23563851/google-search-ai-chatbot-demo-chatgpt

As recently as December, we’d heard Google execs were worried that despite investing heavily in AI technology, moving too fast to roll it out could harm the company’s reputation. But things are changing quickly. Earlier this morning, Google announced it’s laying off more than 12,000 employees and focusing on AI as a domain of primary importance.


This is quite interesting:

"Do Large Language Models learn world models or just surface statistics?"

https://thegradient.pub/othello/

Trying to answer the question of what goes on inside these things. So in this article:

  • they teach a GPT AI to play Othello, just by showing it chains of moves (A5 B6 F3, etc.), like a language
  • they analyse the AI's innards (using a second AI) and find structures which seem to represent the 8x8 Othello board. So the AI had built a model of the Othello board in its internal workspace (aka 'mind'), without ever being told about the board; all it's ever seen is A5 B6 F3, etc.
  • they then mess around with that Othello board model inside the AI, by switching values around, and observe that changing the model inside the AI does make it choose different next moves. This confirms that it is using its own model of the board to decide what to do
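A toy illustration of that intervention test (a hand-rolled stand-in, not the paper's actual probing setup): a greedy Othello policy whose move choice depends on an internal board array. If editing the board changes the chosen move, the board state is causally used, which is the signature the authors were looking for:

```python
# Toy illustration of the intervention test: a 'model' whose move choice
# depends on an internal board representation. If editing that board
# changes the chosen move, the representation is causally in use.

EMPTY, MINE, THEIRS = 0, 1, 2

def legal_flips(board, r, c):
    """Count opponent discs this move would flip (simplified Othello rule)."""
    if board[r][c] != EMPTY:
        return 0
    total = 0
    for dr, dc in [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]:
        run, rr, cc = 0, r + dr, c + dc
        while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == THEIRS:
            run, rr, cc = run + 1, rr + dr, cc + dc
        if run and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == MINE:
            total += run
    return total

def choose_move(board):
    """Greedy policy: pick the square that flips the most discs."""
    moves = [(legal_flips(board, r, c), (r, c))
             for r in range(8) for c in range(8)]
    best = max(moves)
    return best[1] if best[0] > 0 else None
```

Starting from the standard Othello opening, `choose_move` picks one square; flip a single cell of the internal board from THEIRS to MINE and it picks a different one, confirming (in this toy) that the move comes from the board representation.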


Just now, zazen said:

"Do Large Language Models learn world models or just surface statistics?"

https://thegradient.pub/othello/

That's fascinating. I've been wondering what goes on inside those models. They are so complex.

