
AI - The artificial intelligence thread



I see one good by-product of this recent frenzy of AI news: it hopefully has people thinking more about the philosophy of life. Maybe future AI nightmare scenarios scare enough humans into waking up and really asking why we need an "artificial" intelligence at all. If there's fear/anxiety about it now, what do you think it will be like later when minds are locked in some AI-run dead mall / retail hell / matrix la-la land metaverse?

IMO we all need to remember we are animals that through evolution took thousands of years to get where we are today. We started from the elements, grew limbs, brains, then all the rest happened (capitalism lol). If all the electronics we're addicted to went away, we'd need to be fine with that. They're helpful yet distracting devices that IMO have done more harm than good to the human psyche. Can't stop it though. Just gotta keep technology from taking your mind to a dark place, best to keep that in check. And keep on surfing the reality wave, keep paying close attention to the present moment.

  • Like 3

On 4/2/2023 at 12:08 PM, zero said:

I see one good by-product of this recent frenzy of AI news: it hopefully has people thinking more about the philosophy of life. Maybe future AI nightmare scenarios scare enough humans into waking up and really asking why we need an "artificial" intelligence at all. If there's fear/anxiety about it now, what do you think it will be like later when minds are locked in some AI-run dead mall / retail hell / matrix la-la land metaverse?

IMO we all need to remember we are animals that through evolution took thousands of years to get where we are today. We started from the elements, grew limbs, brains, then all the rest happened (capitalism lol). If all the electronics we're addicted to went away, we'd need to be fine with that. They're helpful yet distracting devices that IMO have done more harm than good to the human psyche. Can't stop it though. Just gotta keep technology from taking your mind to a dark place, best to keep that in check. And keep on surfing the reality wave, keep paying close attention to the present moment.

Well said, love this. I'm pretty sick of my LinkedIn feed which is now just flooded with people parading, championing and cheering on AI no matter where it is going. I used to be more excited about it and soft AI is one thing, but learning more recently about where things are all headed makes me realize how fucked we are. And it's all capitalism's fault. No turning back now. 

  • Like 1


visual art ais are going to create epochs of aesthetics which will end up either not reproducible at all, or only reproducible through internet archeology to find the old models used to originally create them


The debate about whether LLMs are more than bullshit generators is an interesting one, and no one is wise to take a hard position either way.

I've for some time entertained the idea that a lot of humans don't really possess intelligence (whatever that is), but have learned to imitate intelligent behaviors very well, so well that there's no clear way to differentiate between the two (if there even is a difference).

Isn't that what makes humans stand out anyway? The ability to mimic and extrapolate. We apply the process of evolution behaviorally and socially: we copy, imitate, improve.

I'm sure everyone has met someone who is really good at appearing knowledgeable about a topic (or many) but, when probed further, reveals themselves as someone who has only learned to say the right words.

This is in essence what these LLMs are doing: they've learned *really well* the correct words to say in a ton of different contexts, without this implying they have an "understanding" of the topic. However, GPT-4 is starting to exhibit other emergent properties that make it harder to see it as just a "next best word" predictor. There's a paper out there written by people at Microsoft who were given full access to it and had a chance to test it extensively (LINK). I'm blown away by the example of it fixing an image that was given as code.

If these properties emerge from the complexity of neural nets, there's no telling what else will come, nothing's off the table. Even that thing we call consciousness.

However, I think improvement on these systems will soon plateau and reach a point of diminishing returns, where no significant improvement will be achieved without enormous cost and effort.
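
To make the "next best word" idea above a bit more concrete, here's a deliberately silly sketch of my own (toy made-up corpus and all, nothing to do with how GPT-4 works internally): a predictor that just picks whichever word it has most often seen following the previous one.

```python
# Toy illustration of "next best word" prediction: count which word most
# often follows each word in a tiny made-up corpus, then generate greedily.
# This is not how a real LLM works internally (no neural net), just the idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent word observed right after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # stitches a plausible-looking phrase out of the counts
```

Real models replace the counting with a huge neural net trained on enormous corpora, and they sample rather than always take the top word, but the interface is roughly the same: context in, distribution over next words out.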

Edited by GORDO

The ability of LLMs to tell you they can give you certain things and their ability to actually give them to you are two different things. Earlier GPTs weren't released because they were basically fake-news machines. Now they've gotten much better, but they still can't give you as much as you'd assume when first talking to them.


2 hours ago, zlemflolia said:

visual art ais are going to create epochs of aesthetics which will end up either not reproducible at all, or only reproducible through internet archeology to find the old models used to originally create them

Yep


In a way, GPTs have shown that a complex enough language model has an interesting side effect of also modelling knowledge. As an emergent property, if you will. As far as I'm concerned, that also says a lot about our own biological neural networks. Despite our experience being vastly different, we might actually work more similarly to how GPTs do than we might believe. We're just a bunch of biological robots. Some more so than others, of course!

 

  • Like 3

 

58 minutes ago, Satans Little Helper said:

In a way, GPTs have shown that a complex enough language model has an interesting side effect of also modelling knowledge. As an emergent property, if you will. As far as I'm concerned, that also says a lot about our own biological neural networks. Despite our experience being vastly different, we might actually work more similarly to how GPTs do than we might believe. We're just a bunch of biological robots. Some more so than others, of course!

 

Indeed, maybe the take-away is that an embedding of "knowledge" is a necessary condition to make a language model good.

Or more simply, that having knowledge is very helpful to understand language (duh) and so the network configures itself in the way it best achieves its goal.

I'm sure that contained within its billions of parameters and topological structure lie some embedded models for reasoning about a bunch of stuff, and they're all interconnected.

  • Like 1

i think it's interesting(?) that such specific discussions of intelligence, knowledge, consciousness, etc. have surrounded this technology. i've yet to see a truly compelling definition of intelligence in play w/r/t AI. it seems that people are just using definitions that have to fit something like "language model" which really involves a lot of question begging imo, very self-serving. i feel like we're all in some kind of philosophy 101 class on day 1 just throwing out our most deep thoughts maaaaaaan.

in a way i feel this is possibly a doomed discussion. the tech field has created a technology they have called "intelligence" and we all feel we must conceptualize this technology as such. then we're branching off into discussing whether the machine is conscious, does it understand, etc. i think this thrusts us into a kind of conceptual paralysis, perpetually back to square one, bc we do not really have a comprehensive picture of intelligence afaik. certainly, the scientific world has not seemed to produce one. and the tech world, well it's full of shit. 

in any case i generally see the discussion of intelligence on this topic to be rather one dimensional. it's something like intelligence is just some kind of linear computation in the brain, which is some kind of machine itself. and artificial intelligence just kind of emerges in/from a machine when you feed it enough bits. it's all taking place in a single conceptual dimension. seems to me a lot is left out here!

 

  • Thanks 1

27 minutes ago, Summon Dot E X E said:

GPT4 passed the bar exam with a high score, so call it whatever you want... it's more than I can do.

We can use this technology to improve our lives. It seems the sky's the limit in terms of finding creative ways to use this technology.

passing the bar exam defines intelligence? lol dude come on

it would be more fascinating if somehow this technology could not pass the bar exam. it can store like every conceivable case and legal text.

  • Like 2

25 minutes ago, Alcofribas said:

passing the bar exam defines intelligence? lol dude come on

it would be more fascinating if somehow this technology could not pass the bar exam. it can store like every conceivable case and legal text.

yeah.. like they used the supercomputer to examine all those MRI/scan results and it found way more cancer and problems than any doctor, because doctors don't have the time and the AI can look at everything and diagnose more accurately. i guess soon it'll cost more for an AI-assisted analysis of MRIs and stuff. "check this box for AI assistant at an additional cost of $489"

  • Like 3
  • Farnsworth 1

3 minutes ago, ignatius said:

yeah.. like they used the supercomputer to examine all those MRI/scan results and it found way more cancer and problems than any doctor, because doctors don't have the time and the AI can look at everything and diagnose more accurately. i guess soon it'll cost more for an AI-assisted analysis of MRIs and stuff. "check this box for AI assistant at an additional cost of $489"

yeah i imagine such a thing will be incorporated into practice exactly as we currently do it - charge more for the more advanced tech. 

many years ago (15+?) my gf went to urgent care bc she had severe stomach pain. we were there for several hours. the total time a doctor personally examined and spoke to her was maybe 5 minutes. the rest of the time they just ran tests, looked at screens, looked at sheets, etc. i'm imagining a future urgent care with precisely 0 seconds spent with a doctor. 

  • Like 1

5 hours ago, Alcofribas said:

i think it's interesting(?) that such specific discussions of intelligence, knowledge, consciousness, etc. have surrounded this technology. i've yet to see a truly compelling definition of intelligence in play w/r/t AI. it seems that people are just using definitions that have to fit something like "language model" which really involves a lot of question begging imo, very self-serving.

I don't understand the problem with using a term like language model. It's a technical term that's fairly concrete, as opposed to wishy-washy try-hard self-serving pseudo-philosophy lingo. A Markov chain based on some corpus is a language model. Or a statistical representation of a collection of texts, if you will. And it's perfectly relevant in the context of ChatGPT.

or, as wiki says:

Quote

A language model is a probability distribution over sequences of words.[1] Given any sequence of words of length m, a language model assigns a probability P(w_1, …, w_m) to the whole sequence. Language models generate probabilities by training on text corpora in one or many languages.

….

Since 2018, large language models (LLMs) consisting of deep neural networks with billions of trainable parameters, trained on massive datasets of unlabelled text, have demonstrated impressive results on a wide variety of natural language processing tasks. This development has led to a shift in research focus toward the use of general-purpose LLMs.


I would agree with being hesitant to fall into the trap of “theory of mind” kind of arguments when it comes to chatGPT, though. Or the “hey look, it’s sentient!” kind of response. But people are perfectly free to go there. 
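
For concreteness, here's a minimal sketch of the Markov-chain version of that definition, with a toy corpus I made up and unsmoothed counts (so nothing like the deep-net LLMs the quote goes on to mention): a bigram model that assigns a probability to a whole sequence of words.

```python
# Bigram (Markov-chain) language model built from a tiny made-up corpus:
# a probability distribution over word sequences, estimated from raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sequence_probability(words):
    """P(w1..wm) ~= P(w1) * product over i of P(wi | w(i-1)), no smoothing."""
    p = unigrams[words[0]] / len(corpus)
    for prev, nxt in zip(words, words[1:]):
        p *= bigrams[prev][nxt] / unigrams[prev]
    return p

print(sequence_probability("the cat sat on the rug".split()))  # small but nonzero
print(sequence_probability("rug the on sat cat".split()))      # 0.0, pairs never seen
```

Roughly speaking, swap the counts for a transformer with billions of parameters and the corpus for a big chunk of the internet and you're in LLM territory, but the object being learned is the same kind of thing.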


34 minutes ago, Satans Little Helper said:

It's a technical term that's fairly concrete, as opposed to wishy-washy try-hard self-serving pseudo-philosophy lingo

i'd actually really love to see a lot more philosophy in our society, and less slavish devotion to the completely morally bankrupt tech sector to solve all the problems they incessantly create

  • Like 5
  • Big Brain 1

7 hours ago, Alcofribas said:

i think it's interesting(?) that such specific discussions of intelligence, knowledge, consciousness, etc. have surrounded this technology. i've yet to see a truly compelling definition of intelligence in play w/r/t AI. it seems that people are just using definitions that have to fit something like "language model" which really involves a lot of question begging imo, very self-serving. i feel like we're all in some kind of philosophy 101 class on day 1 just throwing out our most deep thoughts maaaaaaan.

in a way i feel this is possibly a doomed discussion. the tech field has created a technology they have called "intelligence" and we all feel we must conceptualize this technology as such. then we're branching off into discussing whether the machine is conscious, does it understand, etc. i think this thrusts us into a kind of conceptual paralysis, perpetually back to square one, bc we do not really have a comprehensive picture of intelligence afaik. certainly, the scientific world has not seemed to produce one. and the tech world, well it's full of shit. 

in any case i generally see the discussion of intelligence on this topic to be rather one dimensional. it's something like intelligence is just some kind of linear computation in the brain, which is some kind of machine itself. and artificial intelligence just kind of emerges in/from a machine when you feed it enough bits. it's all taking place in a single conceptual dimension. seems to me a lot is left out here!

 

What's happening is that we have to reevaluate our previous definitions and assumptions, and hence go back to the drawing board and revisit every discussion about the topic.

A simple example would be the Turing test: there's no question modern systems can pass the Turing test. There's also no question that these systems haven't achieved "real" AI.

So what should the new test be? Would a human be able to pass it?

In summary: developments in the field of artificial intelligence spark discussions about intelligence

Edited by GORDO

17 minutes ago, GORDO said:

What's happening is that we have to reevaluate our previous definitions and assumptions, and hence go back to the drawing board and revisit every discussion about the topic.

A simple example would be the Turing test: there's no question modern systems can pass the Turing test. There's also no question that these systems haven't achieved "real" AI.

So what should the new test be? Would a human be able to pass it?

In summary: developments in the field of artificial intelligence spark discussions about intelligence

i haven't seen anything that indicates we have to reevaluate our definitions of intelligence and go back to the drawing board. seems to me we are mostly just bending over backwards trying to fit our definitions to machine learning.

the discussions are extremely banal for the most part. 

  • Like 1

2 hours ago, Alcofribas said:

i haven't seen anything that indicates we have to reevaluate our definitions of intelligence and go back to the drawing board. seems to me we are mostly just bending over backwards trying to fit our definitions to machine learning.

the discussions are extremely banal for the most part. 

Your own post is an indication of it. Maybe in the attempt to sound smart you're dumbing yourself down too much.

 


7 hours ago, Alcofribas said:

i'd actually really love to see a lot more philosophy in our society, and less slavish devotion to the completely morally bankrupt tech sector to solve all the problems they incessantly create

You sound conflicted. In your other post you seemed to criticize the use of concepts like knowledge, consciousness, intelligence, language model and what not. And at the same time you crave more philosophy in our society. Perhaps it's not philosophically savvy, but I see it everywhere. Quantity is not the issue, I'm guessing. But quality is?
 

Or rather, is it the slavish devotion to the morally bankrupt tech sector? But is that because of a lack of philosophy in our society? I'd love to take the opposite side of the argument!
Here's a bunch of silly thoughts:

- even if people were more philosophically savvy, they'd still show interest in tech.
- who's to say a more philosophical society isn't morally bankrupt? (Looks at ancient Greece)


there are animals, fungus, machines, they all do stuff, maybe you can call it all intelligence, maybe not, but intelligence is just a word we invented.  getting hung up on it is silly.  but when people start saying "well who knows maybe us humans ourselves are just large language models and our words are all just predictions made in our brains and..." then you have to push back and say fuck off, I'm not a computer, then throw their phone into a field

  • Like 1

GPT-4 has its strengths, but seeing it fail to deliver even small Excel functions, or produce coding errors in some more obscure languages, still shows that it promises much more than it can deliver.

