
AI - The artificial intelligence thread


YO303



4 hours ago, cruising for burgers said:

what's the difference between that and a human being? I mean, look at this thread...

:trollface:

no but really, we do, think, too much of ourselves...

exactly. the irony is you can point the same criticisms towards humans. that's what makes these criticisms appear so naive. to me at least.

on the larger agi fears: as long as GPT doesn't tell us where to go next in science (unifying theory of physics?), there's nothing "autonomous" about it. currently it's a rather successful language model combined with the ability to summarise vast amounts of data/knowledge. it's an expert-system, if you will. but without the ability to be autonomous, as far as i can tell. a technology which will bring a new kind of industrial revolution. said it before and still think that.


10 hours ago, vkxwz said:

ChatGPT is only in a primitive stage but I personally have learnt a lot from it, especially using it in conjunction with a text on the topic you want to learn about. You can engage in a back-and-forth conversation where you can ask for more detail on whatever you don't understand and also ask it to verify whether your understanding (as you explain it) is correct, like a 1-on-1 tutor. I'm certain it'll be an extremely valuable learning tool in the future as it improves.

As for the therapist part, I agree that it feels dystopian, but I've already witnessed someone use it in a therapy-like way when they were in a dark place, and it genuinely helped them when they had no access to a professional (there is a shortage of them where I live). Imo it's in extremely early stages and is not meant to be used for therapy at this stage though.

I'd want to hear you expand on what you're saying about manufactured decisions though.

you have to sign contracts, register with subscriptions, and choose between various products, forms of training, jobs, etc., constantly weighing costs and benefits rather than having a baseline expectation of human treatment without compulsory labor. this absurd labyrinth of "consumer freedom" obscures the means of production itself and the nature of production and distribution, instituting enforced distribution according to price rather than value, and disallowing the value generated by workers collectively from being distributed back to them rather than to an owning class


10 hours ago, vkxwz said:

from that article: "it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all."

this is a bad take that seems to be spreading among people who have little to no understanding of the field. There is strong evidence that these systems contain internal models of the world that generated their training data; here's a nice article on one example of this:

https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world

"But despite the model just needing to predict the next move, it spontaneously learned to compute the full board state at each move - a fascinating result. A pretty hot question right now is whether LLMs are just bundles of statistical correlations or have some real understanding and computation! This gives suggestive evidence that simple objectives to predict the next token can create rich emergent structure (at least in the toy setting of Othello). Rather than just learning surface level statistics about the distribution of moves, it learned to model the underlying process that generated that data."

But I do think it is very important to acknowledge the fact that the GPT models do hallucinate; using it in place of Google is a bad idea. It's better used as a sort of reasoning engine.

its just replicating the structure of human language, which is internally deeply linked, and learned to talk like chess players; if you gave it false chess-player human language it would learn made up shit instead. its just an advanced markov chain
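For concreteness, the "advanced markov chain" being invoked here looks something like this word-level chain: a minimal sketch with a made-up toy corpus, nothing like a real language model in scale or mechanism.

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count word -> next-word transitions observed in a text corpus."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=10, seed=0):
    """Walk the chain: each next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

table = train_bigram("the cat sat on the mat the cat ate the fish")
print(generate(table, "the"))
```

The key limitation the thread is arguing about: a chain like this conditions on one word of state, whereas a transformer conditions on the whole context window, so "just a markov chain" is doing a lot of work in that sentence.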

23 minutes ago, Satans Little Helper said:

exactly. the irony is you can point the same criticisms towards humans. that's what makes these criticisms appear so naive. to me at least.

on the larger agi fears: as long as GPT doesn't tell us where to go next in science (unifying theory of physics?), there's nothing "autonomous" about it. currently it's a rather successful language model combined with the ability to summarise vast amounts of data/knowledge. it's an expert-system, if you will. but without the ability to be autonomous, as far as i can tell. a technology which will bring a new kind of industrial revolution. said it before and still think that.

are u kidding me it is NOT an expert system its just a language model


19 minutes ago, zlemflolia said:

its just replicating the structure of human language, which is internally deeply linked, and learned to talk like chess players; if you gave it false chess-player human language it would learn made up shit instead. its just an advanced markov chain

are u kidding me it is NOT an expert system its just a language model

Please amuse me by making an actual argument. it's not just another chatbot. or a markov chain.


20 minutes ago, Satans Little Helper said:

Please amuse me by making an actual argument. it's not just another chatbot. or a markov chain.

what value does it generate?  you defend it yet it gets basic questions wrong and can't even know that it's wrong.  it's not an expert system

An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

these don't have that
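For reference, the quoted two-part definition (knowledge base of facts and rules, inference engine deducing new facts) can be sketched as a toy in a few lines. The facts and rules here are made up purely for illustration.

```python
# Toy expert system: a knowledge base (facts + rules) and a
# forward-chaining inference engine that applies rules to known
# facts to deduce new facts.
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur"}, "is_mammal"),                    # if has_fur then is_mammal
    ({"is_mammal", "gives_milk"}, "can_nurse"),    # chained deduction
]

def infer(facts, rules):
    """Repeatedly fire any rule whose premises are all known,
    until no rule adds a new fact (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))  # adds is_mammal, then can_nurse
```

The point of contention in the thread maps onto this cleanly: an LLM has nothing structurally resembling the explicit `facts`/`rules` split or the traceable rule firings above.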


56 minutes ago, zlemflolia said:

Inference engines can also include explanation and debugging abilities.

There are people using ChatGPT to code: give it some requirements and ask for code in a specific language. And there are people asking it to explain things. There are examples of physicians putting in cases and asking what to do given the current literature and clinical guidelines. it is very much an expert system. Not sure what's so hard to understand here. It's trained on pretty much the entire internet and is, as such, a summary of all the knowledge on there. And the user can acquire that knowledge through what seems like a normal conversation.


35 minutes ago, Satans Little Helper said:

There are people using ChatGPT to code: give it some requirements and ask for code in a specific language. And there are people asking it to explain things. There are examples of physicians putting in cases and asking what to do given the current literature and clinical guidelines. it is very much an expert system. Not sure what's so hard to understand here. It's trained on pretty much the entire internet and is, as such, a summary of all the knowledge on there. And the user can acquire that knowledge through what seems like a normal conversation.

it's not knowledge, it's random text sampled as if it represents truth, creating a bastardized amalgamation of that meaning, transformed into whatever the AI's trainer wants it to say. most importantly, it will continue to be developed only for the needs of the owning class, since they control the means of production on which this software is made


[attached ChatGPT screenshots]

what is a significant truth that people don't yet realize?


 

As an AI language model, I don't have access to a universal truth that people don't yet realize, as everyone's perception and understanding of truth can vary depending on their beliefs, experiences, and cultural background.

However, one significant truth that some people may not fully realize is the impact of their actions on the environment and the interconnectedness of all living things. As humans, we are part of a complex ecosystem, and our actions can have a ripple effect on the health of the planet and all its inhabitants.

It's crucial to recognize that we are all responsible for creating a sustainable future, and taking small steps towards reducing our environmental impact can have a significant impact in the long run. This includes reducing our carbon footprint, conserving resources, and advocating for policies that protect the environment. By recognizing the importance of our individual actions and working together as a society, we can create a more sustainable future for generations to come.

Edited by trying to be less rude

great; so the 'truth' is just a compilation of the most visible and re-iterated information of our own making, delivered back to us by something we call 'sentient' and are therefore willing to believe as the word of our new god.


if you're giving an algorithm trained at replicating semantics the power to answer the biggest questions of the universe for you, you might as well ask the mailman


20 hours ago, vkxwz said:

There is strong evidence that these systems contain internal models of the world that generated their training data; here's a nice article on one example of this:

https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world

 

10 hours ago, zlemflolia said:

its just an advanced markov chain

My position is: you're both right.

It is a markov chain in the academic sense, because it is just a 'guess the next word' machine.

BUT (and it's a big but) it guesses the next word by looking at (up to...) the previous 2000 words, and using its vast neural net weights from its vast training material.

Something that can guess the next word by looking at the previous 2000 words has to go pretty deep.

It is a LANGUAGE MODEL. A MODEL of our LANGUAGE. And it is LARGE.

So it knows that a "Cat" might "Sit" on a "Mat". It doesn't have any external reference for what a Cat, Mat, or Sitting are, but it does have all the rest of its training data to cross-reference, so it knows that a "Cat" can "Purr", eats "Fish", has "Four" "Legs" etc. On and on, deeper and deeper, billions of carefully weighted connections between hundreds of thousands of words in different contexts.

So it knows (at a deep level) all the interconnections between all the words, without having an external reference for what any of them actually are. That's what the Model bit of LLM means.
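The 'guess the next word' mechanics described above can be sketched in a few lines. The scorer here is a toy stand-in for the neural net (it just counts word occurrences in the context), so everything below is illustrative only, not how a real model scores candidates.

```python
import math, random

def softmax(logits):
    """Turn arbitrary per-word scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def next_word(context, score, vocab, rng):
    """The model sees only the last N tokens (the 'context window'),
    scores every word in the vocabulary, and samples one."""
    window = context[-2000:]  # truncate to the context window
    probs = softmax([score(window, w) for w in vocab])
    return rng.choices(vocab, weights=probs)[0]

# toy scorer: favour words already seen in the context
def score(window, word):
    return float(window.count(word))

rng = random.Random(0)
ctx = "the cat sat on the".split()
word = next_word(ctx, score, ["the", "cat", "zebra"], rng)
```

The "goes pretty deep" claim lives entirely inside the scorer: replace the toy counter with billions of trained weights over the full window and you get the behaviour the post describes.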

Is it an 'expert system'? - sort of - it can tell you a lot of stuff. But sometimes it gets it wrong, and it's very hard to tell when it does. But in a sense it kicks the arse of any expert system we've previously been able to build (IBM's Watson and all that crap). So even though it's not meant to be an expert system, it comes closer to being one than anything else we've ever built. I guess if it can pass the bar exam you have to admit it knows some shit.

That OthelloGPT paper and the stuff about it building an internal model is fascinating.

Is it conscious? I don't think so (but the interesting thing is that we can't say for sure, because we don't know how consciousness works - who knows what happens in those milliseconds where billions of silicon components work in parallel to produce an output). but the shock to us humans is that just by scaling shit up we've been able to make huge leaps. Us humans are used to being the only competent users of language on the planet (granted, dolphins and monkeys etc, but they're not using language to the extent we are). then suddenly, BANG, there's another entity in the game that can competently use language, and we barely understand how it works. It hints at emergent properties. And it suggests that timelines until AGI are much shorter than we thought (and that also means us humans are a lot less special than we thought - perhaps our fabled consciousness is not such an amazing thing after all...)

I find it all endlessly fascinating because it's my interest in technology and my interest in consciousness and philosophy crashing into each other

 

Edited by zazen

Facebook has been bombarding me with AI music making software... most of them seem like you just throw up some shit on it and it immediately composes an idm track... generative music software, endless non repeating beat patterns/fx... u know, all the shit that usually would take months to compose takes 5 minutes top with these apps... where's the fun in all that?
 

Edited by cruising for burgers

15 minutes ago, cruising for burgers said:

Facebook has been bombarding me with AI music making software... most of them seem like you just throw up some shit on it and it immediately composes an idm track... generative music software, endless non repeating beat patterns/fx... u know, all the shit that usually would take months to compose takes 5 minutes top with these apps... where's the fun in all that?
 

I want one where I can hook up my synths, sequencers.. like 16 inputs, each channel gets its own machine learning path.. then it reiterates in realtime over and over in sync to whatever you have going/fed to it.. you press stop and scroll back to select bits that are timesynced to the last bpm.. from each track too. that'd be cool as shit

Edited by chronical

1 hour ago, zazen said:

 

My position is: you're both right.

It is a markov chain in the academic sense, because it is just a 'guess the next word' machine.

BUT (and it's a big but) it guesses the next word by looking at (up to...) the previous 2000 words, and using its vast neural net weights from its vast training material.

Something that can guess the next word by looking at the previous 2000 words has to go pretty deep.

It is a LANGUAGE MODEL. A MODEL of our LANGUAGE. And it is LARGE.

So it knows that a "Cat" might "Sit" on a "Mat". It doesn't have any external reference for what a Cat, Mat, or Sitting are, but it does have all the rest of its training data to cross-reference, so it knows that a "Cat" can "Purr", eats "Fish", has "Four" "Legs" etc. On and on, deeper and deeper, billions of carefully weighted connections between hundreds of thousands of words in different contexts.

So it knows (at a deep level) all the interconnections between all the words, without having an external reference for what any of them actually are. That's what the Model bit of LLM means.

Is it an 'expert system'? - sort of - it can tell you a lot of stuff. But sometimes it gets it wrong, and it's very hard to tell when it does. But in a sense it kicks the arse of any expert system we've previously been able to build (IBM's Watson and all that crap). So even though it's not meant to be an expert system, it comes closer to being one than anything else we've ever built. I guess if it can pass the bar exam you have to admit it knows some shit.

That OthelloGPT paper and the stuff about it building an internal model is fascinating.

Is it conscious? I don't think so (but the interesting thing is that we can't say for sure, because we don't know how consciousness works - who knows what happens in those milliseconds where billions of silicon components work in parallel to produce an output). but the shock to us humans is that just by scaling shit up we've been able to make huge leaps. Us humans are used to being the only competent users of language on the planet (granted, dolphins and monkeys etc, but they're not using language to the extent we are). then suddenly, BANG, there's another entity in the game that can competently use language, and we barely understand how it works. It hints at emergent properties. And it suggests that timelines until AGI are much shorter than we thought (and that also means us humans are a lot less special than we thought - perhaps our fabled consciousness is not such an amazing thing after all...)

I find it all endlessly fascinating because it's my interest in technology and my interest in consciousness and philosophy crashing into each other

 

i dont think it means humans are less special than we thought, because this machine requires massive and diverse quantities of human labor fed into it to construct contraptions of human engineering larger in scope and capabilities than anything we have ever made before in this field. i wouldnt attribute much of anything to the "ai" but rather to the people who made it. the people made the language, the pickaxes used to mine the materials to build it, the coal to power it. i think the idea that this is ai, or even a step towards ai, is people becoming confused and misled by smoke and mirrors, because we are innately trying to see patterns, and it gives us language patterns, the kinds to which we attribute intelligence. if this ai outputted 345 657 657 453 4352 342 654 675 76 instead, we would not be so convinced, yet it may contain the same patterns we're seeing but in new robes

chatgpt is narcissus' mirror to the texts humans already made and nothing more, when it gives a novel answer that i wouldnt consider spammy garbage i might change my mind.  if a human wrote the drivel it writes i wouldnt even be interested


2 hours ago, zlemflolia said:

i wouldnt attribute much of anything to the "ai" but rather to the people who made it. the people made the language, the pickaxes used to mine the materials to build it, the coal to power it. i think the idea that this is ai, or even a step towards ai, is people becoming confused and misled by smoke and mirrors, because we are innately trying to see patterns, and it gives us language patterns, the kinds to which we attribute intelligence. if this ai outputted 345 657 657 453 4352 342 654 675 76 instead, we would not be so convinced, yet it may contain the same patterns we're seeing but in new robes

I feel like you could say this bit about attributing nothing to the ai itself about most other technology. Why attribute anything to jet engines? the creation of them rests upon all these manufacturing and resource-gathering processes, so they don't really do anything themselves. But jet engines still produce something we value, flying, and I think it's the same with LLMs: of course a lot has gone into them, but they produce something that's valuable, just like flying is. I think the issue with our discussion here is that we obviously don't agree on a definition of intelligence. my personal definition is the ability to make models of a domain that are predictive, and chatGPT has definitely acquired models that allow it to solve the tasks required to predict the next word, based on a huge dataset.

Also the comparison to markov chains is super reductive: these systems compute the "meaning" of words in context in a recursive kind of way and then use this to compute a new algorithm to guess the next word. The only way it matches a markov chain is that it calculates the probabilities of the next state (word) based on its current state (prompt / all the previous tokens). I would argue that humans can be reduced to this sort of definition too: we simply compute the next state (the next contents of conscious experience + actions) based on the current state (the current contents of conscious experience), like a state machine. There is definitely a valid argument here about whether or not text is enough, but I think it's clear you can learn to model a domain even if you have a super limited and compressed stream of input from it (see othelloGPT, humans).
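The "only way it matches a markov chain" point can be made concrete: a classic chain conditions on a small fixed state, while an LLM effectively treats the entire prefix as its state. A tiny sketch of the difference, with purely illustrative toy lookup tables standing in for the two kinds of model:

```python
def order1_next(prefix, table):
    """Order-1 Markov chain: conditions only on the last token."""
    return table[prefix[-1]]

def full_context_next(prefix, table):
    """'State' = the whole prefix, the way an LLM sees its context."""
    return table[tuple(prefix)]

# two prefixes that end in the same word but mean different things
t1 = {"bank": "river"}  # the order-1 chain only knows the last word
t2 = {("sat", "by", "the", "bank"): "river",
      ("robbed", "the", "bank"): "yesterday"}

order1_next(["sat", "by", "the", "bank"], t1)    # 'river'
order1_next(["robbed", "the", "bank"], t1)       # 'river' - can't tell them apart
full_context_next(["robbed", "the", "bank"], t2) # 'yesterday'
```

So "it's a markov chain" is technically true only in the degenerate sense where the state space is every possible prompt, which is what makes the comparison reductive.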

People are putting these models in observation, thought, action loops to give them agency, which is pretty cool and possibly scary: (green is gpt generated)

[attached screenshot of a GPT agent transcript]
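The observe-think-act loop being described can be sketched like this. `call_llm` is a hard-coded stub standing in for a real language model, and the single calculator "tool" is made up, so the whole thing is a shape-of-the-idea sketch rather than any real framework.

```python
def call_llm(prompt):
    # hard-coded stub: a real agent would query a language model here
    return "ACTION: done" if "result: 4" in prompt else "ACTION: add 2 2"

def run_tool(action):
    # the "environment": a single calculator tool the agent can invoke
    if action.startswith("add"):
        _, a, b = action.split()
        return f"result: {int(a) + int(b)}"
    return "unknown tool"

def agent_loop(goal, max_steps=5):
    """Observe-think-act: the model reads the transcript so far,
    picks an action, and the tool's output is fed back in."""
    transcript = f"goal: {goal}\n"
    for _ in range(max_steps):
        thought = call_llm(transcript)               # think
        action = thought.replace("ACTION: ", "", 1)  # decide
        if action == "done":
            break
        transcript += run_tool(action) + "\n"        # act + observe
    return transcript
```

The loop structure (not the stub) is the point: once the model's text output is parsed into actions with real side effects, the "it's just autocomplete" framing stops capturing what the overall system does.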

Edited by vkxwz

"I would arge that humans can be reduced to this sort of definition too, we simply compute the next state(the next contents of conscious experience + actions) based on the current state(current contents of conscious experience), like a state machine"

speak for yourself

all this trash is an excuse not to make real AI manually, using curated sets of reliable information about the world. instead its just text patterns with no connection to reality, except coincidentally, when observed text patterns happen to relate to reality too

weve had chatbots for a long time, everyones obsessed with these new ones.  but they shouldnt be


42 minutes ago, Satans Little Helper said:

You talk like a markov chain!

markov chains that insult AI are COOL and GOOD actually

if (yoo) thot u coulddd sae eye'm uh markov chain gess agen bud, eyem braking ALL patturns


imagine a database of indisputable facts such as nutrient density distributions of plants, geography, scientific and mathematical facts, physics reasoning systems, all of which are known to be accurate and legitimate, combined into one system, capable of being controlled by an advanced query language which is learnable and simple. reasoning, inference, and research capabilities. the ability to spawn simple sub-AIs to do tasks based on loops of reasoning and analysis. verbal prompt capabilities, or vibrational or tactile for the blind and deaf. fully modular, programmable, FOSS, locally computed. this is the type of AI I want to see. not this bullshit.

the way to create AI is not by creating a big blob and trying to trim it down until it can kind of do some stuff. no, the way is to create sculptures, built into gardens, arranged into forests of AIs all communicating and collaborating alongside our own world to help us do the things we need to do.

there is nothing like this, software hasnt even begun to achieve its basic primitive forms, we are not even at the beginning of the technological transformation of humanity, so it's no surprise all we see is a bunch of charlatans trying to make money

