AI - The artificial intelligence thread


YO303


Quote

One of the big problems of AI - or actually machine learning, it's not "real" AI yet - is that no-one knows how they really work, they're black boxes and when a model is trained, it's not a deterministic process, the end result can seem functionally same, but internally they're not. They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.

Two points.

You’re making a broad statement about ML which is simply incorrect. When people mention those black boxes, they’re usually talking about deep neural nets (or neural networks more generally). There are plenty of ML techniques which produce deterministic models that aren’t black boxes; decision trees are the most straightforward example.
And with regard to the “black boxes”, I would also disagree that no-one knows how they work. Technically and functionally I’d argue we understand them about as well as the basic microwave oven in your kitchen. Sure, such a model is far more complex than a decision tree, and a language model like ChatGPT is more complex still. But that doesn’t mean no-one knows how they work. Or worse, it doesn’t mean it’s impossible to know how they work, regardless of the degree of non-determinism.
Which is similar to the language model inside our own heads, btw. That language model is equally a black box, I would say. And non-deterministic. (But is it really non-deterministic, or does it only appear that way?)
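To make the decision-tree point concrete, here's a toy sketch of my own (the scenario and rule thresholds are made up for illustration): a decision tree is just a chain of if/else tests, so every prediction can be traced to an exact rule path, and the same input always yields the same output.

```python
# A hand-built decision tree for a toy "should I go outside?" classifier.
# Unlike a deep neural net, it is fully interpretable: the reason for any
# prediction is simply the branch it took.

def predict(sample):
    """Classify a dict like {"raining": bool, "temp_c": float}."""
    if sample["raining"]:
        if sample["temp_c"] > 20:
            return "go out (warm rain)"
        return "stay in"
    # not raining
    if sample["temp_c"] > 5:
        return "go out"
    return "stay in"

# Deterministic: repeated calls on the same input always agree,
# in contrast to the stochastic training process of a large net.
sample = {"raining": False, "temp_c": 12.0}
assert all(predict(sample) == predict(sample) for _ in range(100))
print(predict(sample))  # -> go out
```

Real ML libraries learn such trees from data rather than hand-coding them, but the learned model has exactly this transparent structure.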

Which brings me to the second point, about the Chinese room argument a number of people have referred to. It’s an old argument, which imo wrongly assumes there’s such a thing as understanding (that we understand). It’s similar to free will, consciousness and all the other things related to “hard” AI. And the hard part of those things is, imo, that we don’t understand those concepts. But we all “experience” them. So they must be real, right? Well, not so fast. The world isn’t flat either, right? And the galaxy doesn’t revolve around the Earth either.

When it comes to the Chinese Room, one could argue that it doesn’t even matter what happens inside it. As long as it produces good (or good enough, as the tasks get more complex) results, we could simply assume there’s an understanding of what is right or wrong, even though we can’t properly pinpoint where that understanding exists (the person inside doesn’t understand). In this argument, understanding becomes a behavioral, almost zombie-like feature. Which is admittedly hard to swallow (thatswhatshesaid). But at this point in time, with our current understanding of consciousness and all that other hard stuff, it’s more an emotional argument than a rational one. We just don’t know. And worse, we might be surprised when we finally do. Perhaps we’re all Chinese rooms…


27 minutes ago, auxien said:

your premise is inherently biased towards a criminal actor in the first place. more concerning is that the AI could be spitting out artwork that's copying art without the user knowing it. the user could be profiting off of this in some way, until they're later sued by the originator of the art that was copied.

Good point. Seems like the only way is to treat AI-generated images as if they are "sampling" everything in their dataset; that makes sense to me.

29 minutes ago, auxien said:

Human brains are not Chinese rooms. i don't think we're magic or imbued with a soul/etc (there's no proof of this even if you disagree) but reasonably complex brains begin operating in ways that operate at a different level, in different ways, than 'lesser' brains. there's multi-level connections, cognition, and experiential understanding that simply isn't reducible.

https://en.wikipedia.org/wiki/China_brain I should have linked this in my other reply oops.

I feel like it's difficult to discuss this without a good definition of understanding. I think it's best defined as having a model of the thing you understand, one which is predictive of its future states, and in this way these ML models do have a type of understanding.
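A minimal illustration of that "understanding as a predictive model" definition (my own toy example, not from the thread): even a bigram model carries a model of a sequence that predicts its likely future states, which is "understanding" in this deflationary sense, however thin.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which token tends to follow which: a tiny predictive model."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, token):
    """Predict the most likely next token, or None if the token is unseen."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat
```

Whether that counts as understanding is exactly the philosophical question at issue; the code only shows that "has a predictive model of future states" is a property even trivial systems can satisfy.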

37 minutes ago, auxien said:

source? "just like our visual cortex" is a big statement.

The early layers of the visual cortex take in receptive fields that are like "pixels" from the retina and detect lines of different orientations. Each subsequent layer uses the features computed by the previous layer to detect more complex features (you can read about the functions of the different areas here: https://en.wikipedia.org/wiki/Visual_cortex). The same hierarchy can be observed in the filters of convolutional neural networks:
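As a toy sketch of the "oriented line detector" idea (my own example; the image and kernel are made up for illustration): convolving an image with a vertical-edge kernel, the kind of filter early CNN layers routinely learn, responds strongly exactly where a vertical edge sits.

```python
# Toy vertical-edge detector: a Sobel-like 3x3 kernel convolved over a
# tiny image whose left half is dark (0) and right half is bright (1).
KERNEL = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

def convolve(img, kernel):
    """Valid-mode 2D cross-correlation for small nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[di][dj] * img[i + di][j + dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

image = [[0, 0, 0, 1, 1, 1] for _ in range(5)]  # vertical edge at column 3
response = convolve(image, KERNEL)
print(response[0])  # -> [0, 4, 4, 0]: the filter fires only at the edge
```

In a trained CNN, filters like this emerge in the first layer from data, and later layers combine their outputs into corner, texture and shape detectors, which is the parallel to the cortical hierarchy being drawn above.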



13 hours ago, dcom said:

For the record, I did not say any of the above, you're combining things from my post and making inferences that are not there, then putting the words in my mouth. I did not talk about any paths towards AGI, I did not specifically mention ChatGPT, I did not say it's a dead end, and I definitely said that the reference to Markov chains was in jest.

What some of the ML models can do is indeed impressive, and yes, they are stepping stones towards truly useful things, maybe even AGI - but currently the instances that get the most publicity are just expensive toys to make people go "oooh".

Ok when I go back and re-read your post you had hedged it quite a bit (edit: and I don't even mean that you'd edited it, probably I had just read it too quickly and got the slightly wrong end of the stick). And I approve of hedging. But still your overall point was

Quote

They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.

and then you said

Quote

I'm not starting a conversation about the hard problem of consciousness or Panpsychism, Hylopathism or the like, I'm just nonchalantly dissing a technology fad. This too shall pass.

So it seems the overall shape of your argument is that 'the instances that get the most publicity' (do you mean ChatGPT or something else? please specify) 'seem intelligent but really they're not', and that's an ok position, philosophically, to take. But if you then say 'I'm not starting a conversation about the hard problem of consciousness', it's like you're trying to disqualify any opposing argument before it's even started, because the way to counter your argument is to talk about the hard problem of consciousness.

Edited by zazen

3 hours ago, zazen said:

Ok when I go back and re-read your post you had hedged it quite a bit. And I approve of hedging. But still your overall point was

Yes, I edit and re-edit my posts when there are no (significant) replies to them yet, and I might still edit while someone has replied, but I try not to modify my intentions after the fact. I do it ruthlessly, and without notice.

3 hours ago, zazen said:

So it seems the overall shape of your argument is that 'the instances that get the most publicity' (do you mean ChatGPT or something else? please specify) 'seem intelligent but really they're not', and that's an ok position, philosophically, to take. But if you then say 'I'm not starting a conversation about the hard problem of consciousness', it's like you're trying to disqualify any opposing argument before it's even started, because the way to counter your argument is to talk about the hard problem of consciousness.

Yes, I was intentionally trying to disqualify any opposing argument because I was trying to do a comment-and-run, and not get caught up in the minutiae of speculative arguments about real AI and consciousness. I think that the public instances of ML-portrayed-as-AI (e.g. GANs, GPTs, Stable Diffusion...) are new-technology-made-fun and, because of the long AI Winter before them, an impressive, sudden step up in research and application of new methods and models - but they're still just toys. I can discuss the hard problem, substrates, emergence, connectomes, embodied cognition etc. fine, but at the moment I don't have the time to argue with strangers on the internet about them, so yes, I was just trying to make my disdain towards the overexcitement about ML-portrayed-as-AI known, and get away with it.

I didn't.

Edited by dcom

https://www.cbr.com/ai-comic-deemed-ineligible-copyright-protection/

Quote

The United States Copyright Office (USCO) reversed an earlier decision to grant a copyright to a comic book that was created using "A.I. art," and announced that the copyright protection on the comic book will be revoked, stating that copyrighted works must be created by humans to gain official copyright protection.

In September, Kris Kashtanova announced that they had received a U.S. copyright on their comic book, Zarya of the Dawn, a comic inspired by their late grandmother that they created with the text-to-image engine Midjourney. Kashtanova referred to themselves as a "prompt engineer" and explained at the time that they sought the copyright so that they could “make a case that we do own copyright when we make something using AI.”

 


13 hours ago, auxien said:

potential for what? like, directly. what is the point of all this, i still do not think i've ever read a valid reason for creating AI like Chat GPT. (a good, useful reason. not one primarily for monetary gain)

exactly! this is what I always mentally defer back to when thinking on topics such as this. I know asking what's the point what's the point all the time can get annoying...but when it comes to a potential new technology being rolled out, which could start the snowball rolling that will some day lead to a catastrophic disaster for the human race, well, I think it warrants asking!

like you implied, this chatbot thing is a part of the whole capitalist shebang. come up with some tech, say it is there for some altruistic reason, then someone with a ton of money wants it, makes an offer too good to refuse, it sells, and then what...some nothingburger feature gets added in to hook the potential millions of "subscribers" that will pay to do something with it, and the money rolls right in! it all sounds like a giant money making scam operation to me, that will undoubtedly cause further confusion, and detach people even more from having to face reality.

or hey, the other easy answer is it could lead to sex. that's the other motivator besides money, right? dudes can't get chicks, so they think about building one with the potential to have a conversation with and screw. isn't one of the big proponents of VR pornhub? it starts with something harmless like Chat GPT...AI big breasted robot for sale when?


2 hours ago, ignatius said:

seems weird we're making artificial intelligence when we don't even have actual intelligence yet. 

That's right, but someone above us wants more control over our lives.

And what is happening now with this ambiguous copyright law on images will soon happen to music and to movies/TV shows as well.

The art of the future will be labeled either "artisan"-made or "generated" work, even as it becomes more difficult to distinguish between them.

I cautiously support AI technology, but it could be a Sword of Damocles.

Edited by Diurn

2 hours ago, ignatius said:

seems weird we're making artificial intelligence when we don't even have actual intelligence yet. 

maybe that's why we need the artificial kind?

we have a massive intelligence deficit


4 hours ago, zero said:

like you implied, this chatbot thing is a part of the whole capitalist shebang. come up with some tech, say it is there for some altruistic reason, then someone with a ton of money wants it, makes an offer too good to refuse, it sells, and then what...some nothingburger feature gets added in to hook the potential millions of "subscribers" that will pay to do something with it, and the money rolls right in! it all sounds like a giant money making scam operation to me, that will undoubtedly cause further confusion, and detach people even more from having to face reality.

ChatGPT has certain educational strengths; it's really adept at teaching you to code in various languages. It gave me a detailed step-by-step description of Advanced Trauma Life Support, which is not readily accessible by Google as it is a licensed and paywalled training curriculum, and it nails general emergency life support methods like cABCDE and how to use devices like capnometers and advanced CPR devices. I fully expect general practitioners will use this in the future, possibly in healthcare education too. Various forks of GPT could find their way into many societal and professional niches. It also seems proficient at games and worldbuilding, possibly a way to naturalize and extend conversation scripts in open-world games and RPGs (anyone remember SpookiTalk?)

Edited by chim

58 minutes ago, chim said:

which is not readily accessible by Google as it is a licensed and paywalled training curriculum, 

which is going to make google angry - https://www.businessinsider.com/google-management-issues-code-red-over-chatgpt-report-2022-12

google will of course then have to try and beat it:

Quote

In particular, teams in Google's research, Trust and Safety division among other departments have been directed to switch gears to assist in the development and launch of new AI prototypes and products, the Times reported. Some employees have even been tasked to build AI products that generate art and graphics similar to OpenAI's DALL-E used by millions of people, according to the Times. 

 

because google sees the $ loss:

Quote

Sridhar Ramaswamy, who oversaw Google's ad team between 2013 and 2018, said that ChatGPT could prevent users from clicking on Google links with ads, which generated $208 billion — 81% of Alphabet's overall revenue — in 2021, Insider reported.

which leads to competition. let's start the race to build more, and better than the other guy! capitalism prevails!

seriously tho, I hear you about the potential educational value that an AI chatbot brings to the table. the possibility that it could help share factual information is a positive. but I always question where this chatbot idea is coming from and what it is going to lead to...and it usually goes back to the root of all evil: make money at someone else's expense. and once the race for money starts, things are always going to go awry...


The Unhittable Man. Read left to right. The first one kinda starts in the middle. There are 8 parts. The 9th picture is just a BS academic article I had chatGPT write to justify the weirdness of this skit. Final two is me trying to make chatGPT flesh out the 'plot armorium' thing.

It's so absurd, and it gets better as it goes. I was thinking it would be cool to use chatGPT to build a rich lore with reasoning behind it, and then ask chatGPT to generate stories based on said lore. So you would flesh the lore out and have chatGPT keep track of all the variables of the lore, to spit out stories that follow the rules of the lore. AI is going to allow for some absolutely absurd and contrived things.

 

[Attached screenshots: "The Unhittable Man" ChatGPT skit (parts 1-8), the mock academic article, and the "How Jack can manipulate fields" exchanges]

Edited by Brisbot

I see it as pretty freaky and there's definitely a lot of potential for dystopian scenarios.

One scenario: your actual photos are ruled as AI-generated in court when they would support your side, but their AI-generated photos are accepted as real and run in the MSM.

I think there's no holding it back, aside from complete social collapse. Maybe there will be a Foundation scenario where there are these magical AI users hiding away in some far away place, manipulating world events.

They'll have the most GPUs and the best ones, you'll have fewer, weaker, more expensive ones. They'll have unfiltered results from their models and unlimited API access, you'll have heavily censored results in the name of "fairness" and "copyright protection". I agree with @chim that it's going to be abused by the powerful.


13 minutes ago, Summon Dot E X E said:

They'll have the most GPUs and the best ones, you'll have fewer, weaker, more expensive ones. They'll have unfiltered results from their models and unlimited API access, you'll have heavily censored results in the name of "fairness" and "copyright protection". I agree with @chim that it's going to be abused by the powerful.

The people that founded OpenAI were very aware of this, and OpenAI has a charter to try and counter that happening https://openai.com/charter/

the original point of OpenAI was to try and develop AI in an open-source kind of way so everyone would benefit

I think they have drifted somewhat and I'm not sure how open they are now


On 12/21/2022 at 2:47 PM, chim said:

ChatGPT has certain educational strengths, it's really apt at teaching you to code in various languages. It gave me a detailed step-by-step description of Advanced Trauma Life Support, which is not readily accessible by Google as it is a licensed and paywalled training curriculum, it nails general emergency life support methods like cABCDE, how to use devices like capnometers & advanced CPR devices.

i'm curious if it's better at teaching than a human, tho. even if the answer is no, there's its ability to do so at the user's whim (much more difficult to find a human available 24/7, of course). still, the availability of recorded video/interactive human teaching would seem to trump a (possibly) flawed piece of AI.

On 12/21/2022 at 2:47 PM, chim said:

It also seems proficent at games and worldbuilding, possibly a way to naturalize and extend conversation scripts in open-world games and RPG's (anyone remember SpookiTalk?)

there's a game dev i follow who does games that are 95% text, and he was talking about this exact thing. my immediate thought was '...isn't writing text/stories almost all of your passion/career? why do you want to replace any portion of the fun part?' i can see it making sense to perhaps add a bit of spice, but utilizing it to larger degrees would seem to me a bit counterintuitive in many arts/hobbies/etc.

On 12/21/2022 at 3:59 PM, zero said:

which leads to competition. let's start the race to build more, and better than the other guy! capitalism prevails!

that just reminds me of how fucking disgusting search has become for literally anything that could be monetized. everything is overfull of spammers/scammers/crapware. it's fucking useless searching google/whatever for...well, anything except pure information (and even that is full of misinfo of course)

On 12/21/2022 at 5:59 PM, Summon Dot E X E said:

I think there's no holding it back, aside from complete social collapse.

i'm starting to lean towards option B.

 

