
AI - The artificial intelligence thread



On 12/17/2022 at 8:59 PM, zazen said:

Riffusion - using stable diffusion AI trained on spectrogram images to generate music

The samples on that 'about' page are amazing, see the 'Looping and Interpolation' section, where it interpolates between 'typing' and 'jazz'

https://www.riffusion.com/about/typing_to_jazz.mp3

Another interpolation between 'church bells' and 'electronic beats'

https://www.riffusion.com/about/church_bells_to_electronic_beats.mp3

 

The second just sounded like spectral morphing (like something that could be done using Zynaptiq Morph https://www.zynaptiq.com/morph/ ), but that first one (Typing -> Jazz) is absolutely great and a darn catchy track to boot - could listen to an entire album like that!
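For the curious, here's a minimal sketch of how that interpolation trick presumably works, assuming the Hugging Face diffusers API and the publicly posted Riffusion checkpoint (the checkpoint id and prompts are assumptions, and the real app also converts the spectrogram image back into audio, which isn't shown): encode both prompts, linearly blend the text embeddings, and let the model denoise each blend into a spectrogram.

```python
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
# Checkpoint id is an assumption (the publicly posted Riffusion weights); any Stable
# Diffusion checkpoint demonstrates the same prompt-interpolation mechanics.
pipe = StableDiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1").to(device)

def embed(prompt: str) -> torch.Tensor:
    """Encode a text prompt into CLIP text embeddings."""
    tokens = pipe.tokenizer(prompt, padding="max_length", truncation=True,
                            max_length=pipe.tokenizer.model_max_length,
                            return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

a, b = embed("typing on a keyboard"), embed("smooth jazz saxophone")
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = torch.lerp(a, b, t)  # linear blend of the two prompt embeddings
    # recent diffusers versions accept precomputed prompt_embeds
    image = pipe(prompt_embeds=mixed, num_inference_steps=30).images[0]
    image.save(f"spectrogram_{t:.2f}.png")  # spectrogram image; audio reconstruction not shown
```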


The Artstation (and Deviantart) feed is currently going all Butlerian Jihad on the AI issue: https://www.artstation.com/?sort_by=trending. For future generations browsing this in the toxic hellscape of tomorrow, the feed is being flooded with uploads of this image, by pros and laymen alike:

[attached image: rodrygo-avila-hugo-gomez-ai-ban-please.jpg]

 

I'm conflicted: I dig the tech and the procedural/generative nature of this stuff, and I've had quite a bit of fun with the novelty of the image generators, but the current models encroach way too much on artists with their indiscriminate web crawling. Lots of AI users are utterly spiteful toward the very artists whose images they're leeching off in their datasets & prompts, hoping to put them out of business. That's also looking like a real possibility, with companies and clients relying more and more on AI for images.

The AI image generators might not be all the way there at the moment (the hands thing is pretty ubiquitous), but it's probably only a matter of time before the output gets close to indistinguishable from actual artists' work. Riffusion is harmless fun as it is now, but we might very well reach a point where you can endlessly generate further tracks from an artist's entire discography, or their entire genre, and pick any famous vocalist you want to sing on them. Even in a fairly simplistic state, that would likely kill most of the library/production music industry, and with it the hope of making a living in a business that's already very tough.

So yeah, nice things, can't have them. 

Edited by chim

They don't get that these tools aren't pulling current images from ArtStation, and that the images are generated from scratch by understanding the image itself, not by cutting and pasting their stuff.


13 hours ago, o00o said:

They don't get that these tools aren't pulling current images from ArtStation, and that the images are generated from scratch by understanding the image itself, not by cutting and pasting their stuff.

The AI doesn't understand jack shit. "Training on" versus copying is just elaborate semantics. 


13 hours ago, Summon Dot E X E said:

  "Let's stop this new technology from unfolding" is something which has never worked, and which will never work.

But legislation and content control haven't? People still pirate movies, but Netflix and Spotify are doing well. The idea is not to stop AI art outright but to control the dataset entries and allow creators to opt out of them. Right now it's a shady grey area of "fair use" that isn't being addressed.

13 hours ago, Summon Dot E X E said:

So all that's left will be... people who do it for sheer joy, rather than commercial success? Sounds like the new human-made music will be much more interesting, overall. Right now we have most artists iterating the same clichés into the ground for a buck, and then even deeper into oblivion. The subcultures associated with different musical genres are also all pretty bizarre. Like little cults.

This is the lousiest argument I've ever seen. Many artists might dream of commercial success (and the financial security it brings), but the vast, vast majority would be more than overjoyed simply to keep a reliable income equivalent to any regular job. Until we have our UBI conveyor-belt utopia, you can't subsist on the sheer joy of an economically worthless craft, and being forced to seek other ways to make a living will always detract from the quality of that craft and the time that can be allotted to it. Sacrificing other pathways to devote yourself to a craft then becomes a permanent lose-lose that automatically entails a lower quality of life, worse long-term health and a lower level of education. Shoulda just studied economics instead of learning to paint or sing. The very reason artists are reiterating all these clichés to scrape a living is that the whole spectrum of art is undervalued; this is compounding that trend.

13 hours ago, Summon Dot E X E said:

The industry I hope is most disrupted by this is Hollywood. Imagine if average people are able to generate movies tailored to their precise specifications, and start to prefer those to the entertainment shoved in everyone's faces by TPTB. It would be pretty computationally expensive with current technology, but if this tech singularity continues to accelerate I could see it happening.

Bigwigs like Hollywood will likely abuse it to cheat creators out of their livelihoods, not the other way around. We're already seeing this happen with studios laying off major chunks of their art dept. 


39 minutes ago, chim said:

The AI doesn't understand jack shit. "Training on" versus copying is just elaborate semantics. 

Can you elaborate on this? And also, how is this situation different from humans copying another artist's style, and why are those differences such a big problem?


36 minutes ago, vkxwz said:

how is this situation different from humans copying another artist's style, and why are those differences such a big problem?

i'm not going to try to speak for chim of course, but the argument i've seen most often for how this is different is that AI cannot be held accountable for its actions. a human artist (in theory) can. the AI's creators have a layer of separation/deniability built in if you try to hold them accountable for the AI's creations/potential thefts/etc., which can further complicate already muddy copyright and similar laws.

i'm not sure it's a great argument, but it rings at least partly of truth to me.


That's almost like a legal argument, isn't it? You cannot hold an AI accountable. You need to be human to be held accountable.

Which is a fair argument, btw. Don't get me wrong. But implicitly, there's also a fair argument behind vkxwz's question. Namely, that on a certain level there isn't much difference between an AI copying and a human copying. Some might argue the understanding part is what signifies the difference. But IMO, much of what people think or do is based on an unconscious layer. And there's only a small percentage of our neural activity that we are aware of. In this sense, there's not much difference between the two.

But, having made that argument, I'm well aware I've put myself in a difficult position from this legal perspective. One that is difficult or impossible to defend.


22 hours ago, Summon Dot E X E said:

  "Let's stop this new technology from unfolding" is something which has never worked, and which will never work.

correct. we know there is no stopping technological advancement. there is no way to stop advancements in AI, because of the driving force innate in humans to keep progressing, to keep the ball moving toward some future goal further and further away. what is that goal exactly? we are not entirely sure. I mean, yeah, the loosely defined goal with technology is to make things easier for humanity, to help humanity, I get that. but does having AI-created music/art, or having chats with an AI bot, really do anything to make human life "easier"? it's hard for me to see that at this time. seems to me it will only disconnect people further from humanity.

I get that AI advancements excite a lot of folks. hard to deny that humans have been a very destructive force on this planet, and perhaps AI will figure out a way to help us do this life thing better. the transhumanists like to believe it is a natural progression to have humans integrate with tech/machines, in hopes of making things better for those on this planet... but if you ask me, things keep getting shittier as we go forward. they say the best years are usually those behind us. so what does that say about this line of thinking that we need to push forward with AI? what is it going to make better exactly? probably short-term wins, but long-term losses. there is always a catch, a yin/yang. it will no doubt make things worse for all humans at some point, but there is nothing we can do to stop it.

 


Highly recommended reading on these issues:

Nick Bostrom: Superintelligence: Paths, Dangers, Strategies

Max Tegmark: Life 3.0: Being Human in the Age of Artificial Intelligence

Martin Ford: Architects of Intelligence: The truth about AI from the people building it

I'm very cautious about making any statements or predictions about AI, but one thing I'm sure of is that we're not even close to human-level AGI. I've worked with and been enthusiastic about technology all my life, and to many, what the various instances of GPT do seems very advanced, but then again, any sufficiently advanced technology is indistinguishable from magic. These are just trained algorithms that do a very good job of imitating things; in the grand scheme of things they're just very advanced Markov chains (I'm being sarcastic here, not factual). One of the big problems of AI - or actually machine learning, it's not "real" AI yet - is that no-one really knows how these models work: they're black boxes, training them is not a deterministic process, and the end results can seem functionally the same while internally they're not. They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.
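(As a toy illustration of that non-determinism point, here's a quick PyTorch sketch, nothing to do with any real deployed model: train the same tiny network twice from different random seeds, and the two copies agree closely on their outputs while their internal weights differ substantially.)

```python
import torch
import torch.nn as nn

def train(seed: int) -> nn.Sequential:
    """Fit a tiny MLP to y = sin(3x), starting from a given random seed."""
    torch.manual_seed(seed)
    x = torch.linspace(-1, 1, 256).unsqueeze(1)
    y = torch.sin(3 * x)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return net

a, b = train(0), train(1)
x = torch.linspace(-1, 1, 256).unsqueeze(1)
# Typically the output gap is tiny (functionally similar) while the weight gap is
# large (internally different).
print("mean output gap:", (a(x) - b(x)).abs().mean().item())
print("mean weight gap:", (a[0].weight - b[0].weight).abs().mean().item())
```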

Edited by dcom

10 hours ago, auxien said:

i'm not going to try to speak for chim of course, but the argument i've seen most often for how this is different is that AI cannot be held accountable for its actions. a human artist (in theory) can. the AI's creators have a layer of separation/deniability built in if you try to hold them accountable for the AI's creations/potential thefts/etc., which can further complicate already muddy copyright and similar laws.

i'm not sure it's a great argument, but it rings at least partly of truth to me.

Isn't the human that uses the system to copy accountable? Where is there ever not a human in the loop in this process? If you put in a prompt that specifies the style of a particular artist and then make money off of posting the result, the accountability is on you; it's just a tool.

1 hour ago, dcom said:

Highly recommended reading on these issues:

Nick Bostrom: Superintelligence: Paths, Dangers, Strategies

Max Tegmark: Life 3.0: Being Human in the Age of Artificial Intelligence

Martin Ford: Architects of Intelligence: The truth about AI from the people building it

I'm very cautious about making any statements or predictions about AI, but one thing I'm sure of is that we're not even close to human-level AGI. I've worked with and been enthusiastic about technology all my life, and to many, what the various instances of GPT do seems very advanced, but then again, any sufficiently advanced technology is indistinguishable from magic. These are just trained algorithms that do a very good job of imitating things; in the grand scheme of things they're just very advanced Markov chains (I'm being sarcastic here, not factual). One of the big problems of AI - or actually machine learning, it's not "real" AI yet - is that no-one really knows how these models work: they're black boxes, training them is not a deterministic process, and the end results can seem functionally the same while internally they're not. They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.

How are we so different from a Chinese room? individual neurons are not conscious; imo the takeaway from the Chinese room argument is that consciousness and intelligence are substrate independent, not that there is some magical property of biological neurons that imbues the signal processing with true understanding. As for the black box argument, I don't think that's really the case anymore: we have a pretty good understanding of the fundamental principles of how they work, and the tools for investigating how trained neural networks work internally are a lot better now (partly thanks to Max Tegmark, I think), especially for convolutional NNs, which break down data into high level and low level features just like our visual cortex.


1 minute ago, vkxwz said:

How are we so different from a Chinese room? individual neurons are not conscious; imo the takeaway from the Chinese room argument is that consciousness and intelligence are substrate independent, not that there is some magical property of biological neurons that imbues the signal processing with true understanding.

I'm not starting a conversation about the hard problem of consciousness or Panpsychism, Hylopathism or the like, I'm just nonchalantly dissing a technology fad. This too shall pass.


10 hours ago, vkxwz said:

Can you elaborate on this? And also, how is this situation different from humans copying another artist's style, and why are those differences such a big problem?

auxien and Satans Little Helper (great handle) have already provided some important points, and let's not discount dcom's terrific primer right above ^. I'm not nearly law- or tech-savvy enough for this topic, but a big mistake a lot of people are making right now is applying human qualities like "understanding" or "inspiration" to AI. It's a black box dependent on input conditions, and right now a big chunk of that input is copyrighted material. It cannot judge whether a specific output infringes copyright or is distinct enough. The input conditions are manipulated via prompts, and the output can be (somewhat) predictably steered by prompts that directly reference copyright-protected works. This essentially means that, in some way, the dataset is accessible through this process.

A JPEG of a copyrighted work is already a data string converted into RGB pixels at the output; is AI image generation functionally different?

AI companies are banking heavily on the nonprofit angle for whatever reasons, but they do have corporate sponsorship. Through tactics like this they are (most likely deliberately) obfuscating the inherent copyright issues. Courts are already running into those issues, with one recent response being that an AI piece cannot be copyrighted. This is a super-weird situation, as we all know generative music is most certainly copyrightable, even though the human-machine interaction is similar to AI image generation. But generative music does not take vast archives of recorded music as input. I would venture to say an AI production cannot be copyrighted if it is trained on copyrighted material, but nobody knows where we'll end up with that.

6 hours ago, Satans Little Helper said:

Namely, that on a certain level there isn't much difference between an AI copying and a human copying. Some might argue the understanding part is what signifies the difference. But IMO, much of what people think or do is based on an unconscious layer. And there's only a small percentage of our neural activity that we are aware of. In this sense, there's not much difference between the two.

This is a potent argument, but it's also on a philosophical level that's way beyond the scope of the immediate issue. At that point we're comparing machine data entry with human biological sense input. You don't just shove the data string of a JPEG up your butt. I'd argue that a human has near-infinitely more variables in play (visual input is a relatively small part of visual processing), and, importantly, tends to know when they are copying. An AI is not subject to optical illusions, color constancy, depth perception, anisotropy, etc. etc. When I'm drawing from a reference, I can barely even remember what it looks like as soon as I'm not looking directly at it. It's constantly being processed and influenced by ideas and memories, as well as my specific body's motor skills and tendencies.

The AI itself isn't really the issue; what's causing grief is that people are deliberately abusing the fact that the datasets contain famous artists and their platforms. And they're being real dicks about it.

Another way to react is to accept that this thing is happening. Not sure how optimistic I'll be about it, though. Much of our modern vernacular stems from jobs and skills that don't exist anymore.


12 minutes ago, dcom said:

I'm not starting a conversation about the hard problem of consciousness or Panpsychism, Hylopathism or the like, I'm just nonchalantly dissing a technology fad.

But you're doing the latter by skirting around the former.

You're saying ChatGPT is a dead end because it's just a fancy Markov chain and you see no path from there to AGI. Me and others are saying that we know it's just a fancy Markov chain, but it still seems to be kicking arse and exhibiting abilities no-one expected language models to have. Seems like there's some potential there as it continues to scale up.

 


Anyway, on a less serious note, I've been probing ChatGPT a bit. Inspired by a Valefisk video where a player is forced to speak only in Star Wars quotes, I tried playing a game based on that with GPT. It turns out it will bullshit you with various answers, pretending a made-up line is a relevant quote by a character and telling you which movie/novel it's from, and it won't admit it's made a mistake until you force it into a corner. Sometimes it'll give you an actual Han Solo quote or similar if you give it a very easy question/statement, but most likely it'll just repeat the bullshit process no matter how many times you try to refine & correct it.

[screenshot: ChatGPT presenting an invented line as a Star Wars quote]

(There is no such quote, and it later admitted so)



30 minutes ago, zazen said:

You're saying ChatGPT is a dead end because it's just a fancy Markov chain and you see no path from there to AGI.

For the record, I did not say any of the above; you're combining things from my post and making inferences that are not there, then putting words in my mouth. I did not talk about any paths towards AGI, I did not specifically mention ChatGPT, I did not say it's a dead end, and I definitely said that the reference to Markov chains was in jest.

What some of the ML models can do is indeed impressive, and yes, they are stepping stones towards truly useful things, maybe even AGI - but currently the instances that get the most publicity are just expensive toys to make people go "oooh".

Edited by dcom

3 hours ago, vkxwz said:

Isn't the human that uses the system to copy accountable? Where is there ever not a human in the loop in this process? If you put in a prompt that specifies the style of a particular artist and then make money off of posting the result, the accountability is on you; it's just a tool.

your premise is inherently biased towards a criminal actor in the first place. more concerning is that the AI could be spitting out artwork that's copying art without the user knowing it. the user could be profiting off of this in some way, until they're later sued by the originator of the art that was copied.

3 hours ago, vkxwz said:

How are we so different from a Chinese room?

human brains are not Chinese rooms. i don't think we're magic or imbued with a soul/etc. (there's no proof of that even if you disagree), but reasonably complex brains begin operating at a different level, in different ways, than 'lesser' brains. there's multi-level connections, cognition, and experiential understanding that simply isn't reducible.

3 hours ago, vkxwz said:

individual neurons are not conscious

has anyone been saying that? i've only read some of Superintelligence, which dcom linked, not the others at all tho. i don't believe i've seen any direct arguments stating that from anyone here. is that how you're interpreting the Chinese room thing?

3 hours ago, vkxwz said:

which break down data into high level and low level features just like our visual cortex.

source? "just like our visual cortex" is a big statement.

3 hours ago, dcom said:

I'm not starting a conversation about the hard problem of consciousness or Panpsychism, Hylopathism or the like, I'm just nonchalantly dissing a technology fad.

what parts of all these recent discussion topics are you considering fads?

2 hours ago, zazen said:

Seems like there's some potential there as it continues to scale up.

potential for what? like, directly. what is the point of all this? i still do not think i've ever read a valid reason for creating AI like ChatGPT. (a good, useful reason, not one primarily for monetary gain)

 

Edited by auxien

Quote

One of the big problems of AI - or actually machine learning, it's not "real" AI yet - is that no-one really knows how these models work: they're black boxes, training them is not a deterministic process, and the end results can seem functionally the same while internally they're not. They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.

Two points.

You're making a broad statement about ML which is simply incorrect. When people mention those black boxes, they're usually talking about deep neural nets (or neural networks more generally). There are plenty of ML techniques which produce deterministic models that aren't black boxes. Decision trees are the most straightforward example of these.
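A minimal sketch of that last point, using scikit-learn: a small decision tree is a deterministic model whose entire learned logic can be printed out as explicit if/then thresholds.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every decision the model will ever make is visible here as explicit feature thresholds.
print(export_text(tree, feature_names=load_iris().feature_names))
```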
And with regard to the "black boxes", I would disagree that no-one knows how they work as well. Technically and functionally, I'd argue we understand them about as well as the basic microwave oven in your kitchen. Sure, the models are way more complex than a decision tree, and a language model like ChatGPT is inherently more complex still. But that doesn't mean no-one knows how they work. Or worse, it doesn't mean it's impossible to know how they work, regardless of the degree of non-determinism.
Which is similar to the language model inside our own heads, btw. That language model is equally a black box, I would say. And non-deterministic. (But is it really non-deterministic, though? Or does it only appear that way?)

Which brings me to the second point, which is about the Chinese room argument a number of people have referred to. It's an old argument, which imo wrongly assumes there's such a thing as understanding (that we understand). It's similar to free will, consciousness and all the other things related to "hard" AI. And the hard part of those things is - imo - that we don't understand those concepts. But we all "experience" them. So it must be real, right? Well, not so fast. The world appears flat too, but it isn't. And the heavens appear to revolve around the Earth, but they don't.

When it comes to the Chinese room, one could argue that it doesn't even matter what happens inside it. As long as it produces good (or good enough, as the tasks get more complex) results, we could simply assume there's an understanding of what is right or wrong, even though we can't properly pinpoint where that understanding exists (the person inside doesn't understand). In this argument, understanding becomes some behavioral, almost zombie-like feature. Which is admittedly hard to swallow (thatswhatshesaid). But at this point in time, with our current understanding of consciousness and all that other hard stuff, it's more an emotional argument than a rational one. We just don't know. And worse, we might be surprised when we finally do. Perhaps we're all Chinese rooms…


27 minutes ago, auxien said:

your premise is inherently biased towards a criminal actor in the first place. more concerning is that the AI could be spitting out artwork that's copying art without the user knowing it. the user could be profiting off of this in some way, until they're later sued by the originator of the art that was copied.

Good point. Seems like the only way is to have AI-generated images be treated as if they are "sampling" everything in their dataset; that makes sense to me.

29 minutes ago, auxien said:

Human brains are not Chinese rooms. i don't think we're magic or imbued with a soul/etc. (there's no proof of that even if you disagree), but reasonably complex brains begin operating at a different level, in different ways, than 'lesser' brains. there's multi-level connections, cognition, and experiential understanding that simply isn't reducible.

https://en.wikipedia.org/wiki/China_brain I should have linked this in my other reply, oops.

I feel like it's difficult to discuss this without a good definition of understanding. I think it's best defined as having a model of the thing you understand that is predictive of its future states, and in this sense these ML models do have a type of understanding.

37 minutes ago, auxien said:

source? "just like our visual cortex" is a big statement.

The early layers of the visual cortex take in receptive fields that are like "pixels" from the retina and detect lines of different orientations. Each subsequent layer uses the features computed by the previous layer to detect more complex features (you can read about the functions of the different areas here: https://en.wikipedia.org/wiki/Visual_cortex). This can also be observed in the filters of convolutional neural networks:

[image: grid of learned first-layer convolutional filter visualizations]
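If anyone wants to eyeball that claim themselves, here's a rough sketch assuming torchvision's pretrained ResNet-18 (any pretrained convnet would do): pull the first convolutional layer's filters out of the model and plot them. They typically look like oriented edge and colour-blob detectors, loosely reminiscent of V1 receptive fields.

```python
import matplotlib.pyplot as plt
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()  # first conv layer, shape [64, 3, 7, 7]
filters = (filters - filters.min()) / (filters.max() - filters.min())  # scale to [0, 1] for display

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f.permute(1, 2, 0).numpy())  # channels-last for imshow
    ax.axis("off")
plt.show()
```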


13 hours ago, dcom said:

For the record, I did not say any of the above; you're combining things from my post and making inferences that are not there, then putting words in my mouth. I did not talk about any paths towards AGI, I did not specifically mention ChatGPT, I did not say it's a dead end, and I definitely said that the reference to Markov chains was in jest.

What some of the ML models can do is indeed impressive, and yes, they are stepping stones towards truly useful things, maybe even AGI - but currently the instances that get the most publicity are just expensive toys to make people go "oooh".

Ok when I go back and re-read your post you had hedged it quite a bit (edit: and I don't even mean that you'd edited it, probably I had just read it too quickly and got the slightly wrong end of the stick). And I approve of hedging. But still your overall point was

Quote

They're fun, yes, and they seem intelligent, but they're really not, they're Chinese Rooms.

and then you said

Quote

I'm not starting a conversation about the hard problem of consciousness or Panpsychism, Hylopathism or the like, I'm just nonchalantly dissing a technology fad. This too shall pass.

So it seems the overall shape of your argument is that 'the instances that get the most publicity' (do you mean ChatGPT or something else? please specify) 'seem intelligent but really they're not', and that's an ok position, philosophically, to take. But if you then say 'I'm not starting a conversation about the hard problem of consciousness', it's like you're trying to disqualify any opposing argument before it's even started. Because the way to counter your argument is to talk about the hard problem of consciousness.

Edited by zazen
Link to comment
Share on other sites

3 hours ago, zazen said:

Ok when I go back and re-read your post you had hedged it quite a bit. And I approve of hedging. But still your overall point was

Yes, I edit and re-edit my posts when there are no (significant) replies to them yet, and I might still edit while someone has replied, but I try not to modify my intentions after the fact. I do it ruthlessly, and without notice.

3 hours ago, zazen said:

So it seems the overall shape of your argument is that 'the instances that get the most publicity' (do you mean ChatGPT or something else? please specify) 'seem intelligent but really they're not', and that's an ok position, philosophically, to take. But if you then say 'I'm not starting a conversation about the hard problem of consciousness', it's like you're trying to disqualify any opposing argument before it's even started. Because the way to counter your argument is to talk about the hard problem of consciousness.

Yes, I was intentionally trying to disqualify any opposing argument because I was trying to do a comment-and-run, and not get caught up in the minutiae of speculative arguments about real AI and consciousness. I think that the public instances of ML-portrayed-as-AI (e.g. GANs, GPTs, Stable Diffusion...) are new-technology-made-fun, and because of the long AI Winter before them an impressive, sudden step up in research and the application of new methods and models - but they're still just toys. I can discuss the hard problem, substrates, emergence, connectomes, embodied cognition etc. just fine, but at the moment I don't have the time to argue with strangers on the internet about them, so yes, I was just trying to make my disdain towards the overexcitement about ML-portrayed-as-AI known, and get away with it.

I didn't.

Edited by dcom
