
AI - The artificial intelligence thread


YO303


AI-generated Joe Rogan podcast, and now he's spouting off warnings about AI.. finally Joe Rogan is weighing in on the debate... 

Watch: Completely AI-generated Joe Rogan podcast with OpenAI CEO Sam Altman

https://news.yahoo.com/watch-completely-ai-generated-joe-150416710.html

lols.. joe rogan issues a warning.. and "here's 3 stocks to invest in"

https://www.msn.com/en-us/money/technology/joe-rogan-just-issued-a-warning-about-artificial-intelligence-—-after-a-fake-version-of-his-podcast-was-created-100percent-through-ai-technology-here-are-3-stocks-to-capitalize/ar-AA19R4qO

 

 

  • Like 1
  • Farnsworth 1

I think there's been enough proof by now that furthering AI along is only going to lead to more things on the "bad" side for all of us, rather than the good. greedy humans will fuck it all up as usual, thinking only about how to utilize AI for personal gain/capital, with some half-baked BS about how this will somehow be helpful. yes I know we are not even at "real" AI yet. but by the time we get there, it will probably be similar to a fckin salesperson trying to sell us stupid crap we don't need... and maybe we won't even know if it is some actual AI consciousness or some AI-lite chatbot, because we will have all been so desensitized to this nonsense by then. and then of course we can't even define our own consciousness, so it's just a great fckin idea to invent an artificial version.

   

  • Like 2
  • Farnsworth 1

55 minutes ago, zero said:

I think there's been enough proof by now that furthering AI along is only going to lead to more things on the "bad" side for all of us, rather than the good. greedy humans will fuck it all up as usual, thinking only about how to utilize AI for personal gain/capital, with some half-baked BS about how this will somehow be helpful. yes I know we are not even at "real" AI yet. but by the time we get there, it will probably be similar to a fckin salesperson trying to sell us stupid crap we don't need... and maybe we won't even know if it is some actual AI consciousness or some AI-lite chatbot, because we will have all been so desensitized to this nonsense by then. and then of course we can't even define our own consciousness, so it's just a great fckin idea to invent an artificial version.

   

What could go wrong? FULL SPEED AHEAD

  • Farnsworth 1
  • Big Brain 1

1 hour ago, o00o said:

So the only reason to sign the petition to delay AI development was that Musk wanted to found a company himself: https://www.wsj.com/articles/elon-musks-new-artificial-intelligence-business-x-ai-incorporates-in-nevada-962c7c2f?mod=Searchresults_pos1&page=1

figures. odds seem to be favoring Alan Musk as the guy that starts skynet.

what evil scheme is he up to now - 

https://www.reuters.com/technology/elon-musk-plans-ai-startup-rival-openai-ft-2023-04-14/

Quote

Musk has secured thousands of graphics processing units, systems that power the computing required for intensive tasks such as AI and high-end graphics, from Nvidia Corp (NVDA.O), according to FT. Shares of the chip company, which declined to comment on the matter, gained on the news on Friday.

 

some sorta get-more-rich plan to pay back the Twitter $ loss, I'd say. that, and to rule the world with the other tech bro gang members

https://www.cnbc.com/2023/04/14/nvidias-h100-ai-chips-selling-for-more-than-40000-on-ebay.html

Quote

The prices for Nvidia’s H100 processors were noted by 3D gaming pioneer and former Meta consulting technology chief John Carmack on Twitter. On Friday, at least eight H100s were listed on eBay at prices ranging from $39,995 to just under $46,000. Some retailers have offered it in the past for around $36,000.

Musk + Meta bros + a bunch of GPUs / chips = skynet when


the main guy behind OpenAI is basically saying ChatGPT is a digital brain, btw, hence 'neural networks.' he's not making comparisons. he admits the tech is essentially superhuman in breadth of knowledge already, which has a bit of truth to it. he expects a singularity to occur based on this tech ("AI is going to become truly extremely powerful"), expects to be able to induce consciousness in AI, and has thoroughly thought it through (my assumption is that they're already working on this specifically). he states it probably needs some sort of large governmental regulation/control due to how powerful it's going to become, and soon. "for obvious reasons" he can't comment on the costs to develop ChatGPT.

nbd tho nothing to worry about here lol just keep memeing lmao 


and he got his PhD in compsci under this guy, Hinton:

from this interview:

he's trying to understand the brain, not specifically interested in AI/etc. he thinks there are distinctions between what AI is doing and what our brains are doing. 

notable quote regarding the changes happening with AI: "I think it’s comparable in scale with the Industrial Revolution or electricity — or maybe the wheel."

  • Like 2

2 hours ago, auxien said:

basically saying ChatGPT is a digital brain, btw, hence 'neural networks.'

This isn't news or a surprise, right? Not sure what the significance is. Neural networks have been around since the 70s. And since roughly the 2010s, I believe, the complexity of those networks has increased and the term "deep learning" has been put on top of it. It's not some dark art or anything. 
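
to be concrete about what "deep learning" means here: a minimal sketch (plain NumPy, random untrained weights, purely illustrative, not anyone's actual model) of a tiny feed-forward network. "deep" models are basically this same pattern with many more layers and with the weights learned from data instead of drawn at random:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity; without it, stacked layers collapse into one linear map.
    return np.maximum(0, x)

def forward(x, layers):
    # Pass the input through each (weight, bias) pair in turn.
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
# A toy 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# Real "deep" networks differ mainly in scale and in having learned weights.
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 8)), np.zeros(8)),
    (rng.normal(size=(8, 2)), np.zeros(2)),
]

print(forward(rng.normal(size=(1, 4)), layers))
```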

 

 


6 hours ago, Satans Little Helper said:

This isn't news or a surprise, right? Not sure what the significance is.

the significance is that it's not a metaphorical thing, or at least that their implied goal is to create a real, honestly analogous, but far, far more powerful, version of a brain/neural network. they're not trying to create a chatbot, or a search engine, or a test subject, or a mathematical reasoning tool, or a code-writing wizard, or a weapon. they are trying to create an all-powerful, singularity-inducing neural network. and they're stepping right out and saying it.


1 hour ago, auxien said:

the significance is that it's not a metaphorical thing, or at least that their implied goal is to create a real, honestly analogous, but far, far more powerful, version of a brain/neural network. they're not trying to create a chatbot, or a search engine, or a test subject, or a mathematical reasoning tool, or a code-writing wizard, or a weapon. they are trying to create an all-powerful, singularity-inducing neural network. and they're stepping right out and saying it.

this can also be a marketing/funding thing. i agree this stuff is all exciting and game changing but just know it’s still a game


1 hour ago, auxien said:

the significance is that it's not a metaphorical thing, or at least that their implied goal is to create a real, honestly analogous, but far, far more powerful, version of a brain/neural network. they're not trying to create a chatbot, or a search engine, or a test subject, or a mathematical reasoning tool, or a code-writing wizard, or a weapon. they are trying to create an all-powerful, singularity-inducing neural network. and they're stepping right out and saying it.

I wrote a whole thing on how I think that's misleading because even today we don't understand how NNs produce their outputs, but instead I'll just ask: what is an "all-powerful, singularity-inducing neural network"? 

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

like, what does that mean? What's the objective, and how do you know you've achieved it? Will it have real-time inputs like we do? 


1 hour ago, exitonly said:

this can also be a marketing/funding thing. i agree this stuff is all exciting and game changing but just know it’s still a game

of course yeah, very possible. but the returns on any additional funding are going to be minuscule if i’m understanding their ‘capped profit’ model correctly (about 2/3rds into the first video). it could still be marketing to ramp up ‘product’ sales but this guy doesn’t seem like the marketing type.

37 minutes ago, GORDO said:

I wrote a whole thing on how I think that's misleading because even today we don't understand how NNs produce their outputs, but instead I'll just ask: what is an "all-powerful, singularity-inducing neural network"? 

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

like, what does that mean? What's the objective, and how do you know you've achieved it? Will it have real-time inputs like we do? 

watch the second vid with Hinton if you’ve not…that’s his whole point in researching this stuff his entire career…to understand the human brain, not to create AGIs. but he realized that to fully understand the human (/any) brain you’ve got to be able to replicate it pretty accurately…so he’s spent decades developing tools to do that. 

his argument, ultimately the conclusion i agree with, is that we don’t understand how our brains really work, not to any true or complete degree, and yet obviously we’re here and self-aware, talking on the internet that humans have created. we don’t have to understand how the AGI becomes aware once it does, we just have to have the ability to recognize whether it’s real or not. i don’t think this shit is aware, yet, at least the stuff publicly shown. but even Hinton is scared by the speed with which stuff is progressing right now.

Edited by auxien
  • Like 1

30 minutes ago, GORDO said:

I wrote a whole thing on how I think that's misleading because even today we don't understand how NNs produce their outputs, but instead I'll just ask: what is an "all-powerful, singularity-inducing neural network"? 

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

like, what does that mean? What's the objective, and how do you know you've achieved it? Will it have real-time inputs like we do? 

 

It will be able to do anything in the domain of information, and, given a physical form, a lot of what humans can do. I hope there will no longer be a need for humans to mine or disarm explosives, for instance.

The information aspect of it is the most dangerous. Improperly managed, we may become completely dependent on it, or it may drive our planet into chaos, intentionally or not.

Managed by the powerful, it could be used to instantly devise flawless plans for the subjugation of the masses, and it could also work (through the Internet / social engineering) to execute them.

I used to think there would always need to be people to maintain these systems, but now I think other machines can maintain them.

It could be used for good, too, but who has unlimited access to this kind of computing power, or can afford to pay ML experts vast salaries? Not me.


3 hours ago, auxien said:

the significance is that it's not a metaphorical thing, or at least that their implied goal is to create a real, honestly analogous, but far, far more powerful, version of a brain/neural network. they're not trying to create a chatbot, or a search engine, or a test subject, or a mathematical reasoning tool, or a code-writing wizard, or a weapon. they are trying to create an all-powerful, singularity-inducing neural network. and they're stepping right out and saying it.

I think you're buying into the hype a bit too much. From the transformer architecture it's clear how important both the architecture and the data are to its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as ChatGPT (the internet) and expect something entirely different to pop out on the other side of learning. This brain "dream" requires a different kind of approach. 
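
to make the architecture point concrete: the core operation inside a transformer is scaled dot-product attention. a minimal NumPy sketch (toy random inputs, no training, purely illustrative and not any lab's actual code):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query scores every key; the scores become weights via softmax,
    # and the output is a weighted mix of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V

rng = np.random.default_rng(0)
# Toy example: a "sentence" of 5 tokens, each represented by a 16-dim vector.
Q = rng.normal(size=(5, 16))
K = rng.normal(size=(5, 16))
V = rng.normal(size=(5, 16))

print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 16)
```

a real transformer stacks many of these attention blocks with feed-forward layers and trains the whole thing on enormous amounts of text, which is exactly why both the architecture and the data matter so much.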

Edited by Satans Little Helper

6 minutes ago, Satans Little Helper said:

I think you're buying into the hype a bit too much. From the transformer architecture it's clear how important both the architecture and the data are to its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as ChatGPT (the internet) and expect something entirely different to pop out on the other side of learning. This brain "dream" requires a different kind of approach. 

The different kind of approach, already being worked on, is to connect many different, complementary ML models together, and to give them access to as much information as possible.

The stuff they are showing the public is just the tip of the iceberg - a PR move so people can begin to understand it.

Also, the question of whether it models a human brain or is conscious is irrelevant for practical consideration. It's what it can and will be used for that is most important. Autonomy can occur without consciousness, however we want to define consciousness.

  • Like 1

15 minutes ago, Summon Dot E X E said:

The different kind of approach, already being worked on, is to connect many different, complementary ML models together, and to give them access to as much information as possible.

The stuff they are showing the public is just the tip of the iceberg - a PR move so people can begin to understand it.

Also, the question of whether it models a human brain or is conscious is irrelevant for practical consideration. It's what it can and will be used for that is most important. Autonomy can occur without consciousness, however we want to define consciousness.

OK, so in other words, it is actually about functionality. Fair enough. I pretty much agree with this point.

Trick is, though, that you have to understand what kind of data you need for different kinds of functionality. If you want to make a chatbot, you can train a model on large amounts of text. If you want to create "a human brain" - which is obviously vague - you have to know what kind of data it needs. And whatever that is, it is not just the internet. Or text.

On the architecture side of things, I must say the complementary ML models idea seems far-fetched to me. Unless a transformer model is itself a collection of "many different complementary ML models together". There's a certain reason (logic) to the transformer model which makes it work. There needs to be one to make this artificial brain work.

And assuming we have such an architecture, I'm afraid you're going to be stuck with a "brain" in a bottle. Without a body it won't be particularly interesting. And yes, I'm fairly certain we're more than just a brain in a body. There's a bit of embodiment which is tied to our human intelligence, imo. That's why I'm not too hyped, btw.

Edited by Satans Little Helper

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

what does it mean to say a machine can "do anything in the domain of information?"

what does it mean to say something like chatgpt is a "brain," and is this terminology appropriate or meaningful?

 

i think there's a tremendous amount of fluff and bullshit coming out of the tech world about this. a lot of it seems founded on the premise that a brain is just a computer, knowledge is just data collection, and intelligence is just organizing this data. and here data is just whatever...from the internet? come on. the hype is constantly saying humans are dumb compared to a computer, bc a computer stores more website data. seems incredibly stupid, to me.

i'm not sure if anyone itt has messed with chatgpt but when you "test" it on subjects you're very familiar with, it consistently demonstrates that it is bordering on completely useless. constantly churning out gibberish, bullshit, disorganized information, absurd inaccuracy, etc. it doesn't even have the "intelligence" to simply say "lmao idk." it's a massive search engine designed to provide results in the form of "answering a question." 

i'm highly skeptical this will produce anything outside of massively destroying our minds with misinformation and being used by the ruling class to dominate and oppress society. 

  • Like 2
  • Thanks 1

tbh it’s tempting to see this as the natural outcome of an educational system that prioritizes memorization over learning and a society which denigrates “the humanities” as a bunch of irrelevant, unprofitable emo shit. 
 

you end up with a tech industry saying their search engine is a brain and it’s the most powerful intelligence ever created. bc it stores all the stuff from online. 

  • Like 1
  • Thanks 2
  • Burger 1

46 minutes ago, Satans Little Helper said:

I think you're buying into the hype a bit too much. From the transformer architecture it's clear how important both the architecture and the data are to its success. The claim to build a brain also needs both: an architecture and a lot of data (parameters). We're not there yet. Not by a long shot. You can't just train a different architecture on the same data as ChatGPT (the internet) and expect something entirely different to pop out on the other side of learning. This brain "dream" requires a different kind of approach. 

i don’t think i’m falling for any hype, i think i’m simply listening to the variety of information available and drawing conclusions. Hinton discusses other research happening, using different methodologies/etc., to achieve goals similar to OpenAI’s. 

transformer architecture? what architecture is needed that we don’t have? the data available to train on is already far vaster than you or i can comprehend (the entirety of human history and research as it exists digitally). babies are trained and grow into reasonably thinking adults on much, much, much less data (tho of a vastly different kind, given human input is largely physical and familial/community based).

 

42 minutes ago, Summon Dot E X E said:

Also, the question of whether it models a human brain or is conscious is irrelevant for practical consideration. It's what it can and will be used for that is most important. Autonomy can occur without consciousness, however we want to define consciousness.

agreed.

21 minutes ago, Alcofribas said:

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

it’s not a machine. it’s a set of algorithms designed to learn and be able to predict based on the information given. that’s not quite a toaster, and it’s not in and of itself a ‘brain’, but it’s getting closer and closer to one than the other.
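
a toy picture of ‘learn and predict from the information given’: a bigram model just counts which word follows which in its training text and then predicts the most likely next word. ChatGPT is vastly more sophisticated, but it’s the same learn-then-predict shape in miniature (purely illustrative sketch, not how any real system is built):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # "Learning": count how often each word follows each other word in the text.
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # "Predicting": return the most frequent follower seen in training, if any.
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the model learns from data and the model predicts from data"
model = train_bigram(corpus)
print(predict_next(model, "model"))  # -> 'learns' (first of the tied followers)
print(predict_next(model, "data"))   # -> 'and'
```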

  • Like 1

23 minutes ago, Alcofribas said:

i'm highly skeptical this will produce anything outside of massively destroying our minds with misinformation and being used by the ruling class to dominate and oppress society.

valid argument. possibly even more terrifying than the singularity.

  • Like 1

7 minutes ago, Alcofribas said:

why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.? 

what does it mean to say a machine can "do anything in the domain of information?"

what does it mean to say something like chatgpt is a "brain," and is this terminology appropriate or meaningful?

 

i think there's a tremendous amount of fluff and bullshit coming out of the tech world about this. a lot of it seems founded on the premise that a brain is just a computer, knowledge is just data collection, and intelligence is just organizing this data. and here data is just whatever...from the internet? come on. the hype is constantly saying humans are dumb compared to a computer, bc a computer stores more website data. seems incredibly stupid, to me.

i'm not sure if anyone itt has messed with chatgpt but when you "test" it on subjects you're very familiar with, it consistently demonstrates that it is bordering on completely useless. constantly churning out gibberish, bullshit, disorganized information, absurd inaccuracy, etc. it doesn't even have the "intelligence" to simply say "lmao idk." it's a massive search engine designed to provide results in the form of "answering a question." 

i'm highly skeptical this will produce anything outside of massively destroying our minds with misinformation and being used by the ruling class to dominate and oppress society. 

"why would feeding a machine a bunch of data from the internet produce intelligence, autonomy, consciousness, etc.?"

It doesn't need to produce that. It's about the applications that could be created to make use of such models. Or, since the applications themselves will be able to be written by ML, it's about people who just think of the ideas they want and have programs written for them, programs which make use of models designed and trained by the model itself.

Once you have that, you can gradually or suddenly just stop broadcasting that you have this technology, and just secretly use it to control people, like you said. It could be used for good purposes instead, maybe.

"what does it mean to say a machine can "do anything in the domain of information?""

I meant that the powers of a well-trained ML model in very specific domains can surpass the equivalent in humans. So a network of these types of models, a network perhaps run by an ML that was trained to run such a network, could in turn be controlled by a model which was trained to make decisions about policies for humans, which people could then choose to use, or not use. It might suggest some interesting things. That's one good thing it could do.

An autonomous ML model with control over a virtual machine with a fast connection, massive processing capacity, etc., could create social credit scores for everyone based on their posts on the internet, and could even punish people by manipulating their social media feeds, suppressing good news and showing a ton of depressing news, creating fictitious reports about a person to keep jobs away from them... I mean, think about what you can do on a computer. Now imagine what an ML agent could do on a VM.

These are just some hypotheticals, but look at how quickly the development of this stuff is accelerating... we are heading faster and faster towards the singularity.

3 minutes ago, Alcofribas said:

tbh it’s tempting to see this as the natural outcome of an educational system that prioritizes memorization over learning and a society which denigrates “the humanities” as a bunch of irrelevant, unprofitable emo shit. 
 

you end up with a tech industry saying their search engine is a brain and it’s the most powerful intelligence ever created. bc it stores all the stuff from online. 

Information is stored in the human brain. It can be encoded in digital form, or not. More and more information is being stored in that form. With that information, machines have already been shown to achieve a much higher degree of accuracy than humans in some areas, such as certain medical imaging models.

Think of all the different applications of ML you've seen over the years. Not just ChatGPT or Midjourney. ML models will be able to do "anything in the domain of information" if we continue in this direction.

  • Like 2
