
AI - The artificial intelligence thread



GPT plugins (e.g. connecting ChatGPT to Wolfram Alpha) seem like a big deal.
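The core loop is simple enough to hand-roll. Here's a minimal sketch - not how the official plugin mechanism actually works (that uses OpenAPI manifests the model reads), just the same idea in miniature, using the 2023-era openai Python library and Wolfram Alpha's Short Answers endpoint; the "WOLFRAM:" prompt convention is made up for illustration:

```python
import openai    # pip install openai (2023-era ChatCompletion API)
import requests

openai.api_key = "sk-..."   # your OpenAI key
WOLFRAM_APPID = "..."       # your Wolfram Alpha app id

# Made-up protocol: ask the model to emit "WOLFRAM: <query>" when it
# wants a computation, run the query for it, then let it answer.
SYSTEM = ("Answer the user's question. If you need a computation or a "
          "fresh fact, reply with exactly 'WOLFRAM: <query>' and nothing else.")

def ask(question):
    msgs = [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": question}]
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=msgs).choices[0].message.content
    if reply.startswith("WOLFRAM:"):
        query = reply[len("WOLFRAM:"):].strip()
        # The Short Answers endpoint returns a single plain-text result
        result = requests.get("https://api.wolframalpha.com/v1/result",
                              params={"appid": WOLFRAM_APPID, "i": query}).text
        msgs += [{"role": "assistant", "content": reply},
                 {"role": "user", "content": f"Tool result: {result}"}]
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=msgs).choices[0].message.content
    return reply

print(ask("What is the population of France divided by 7?"))
```

No magic in there: the model decides it needs a tool, you run the tool, and you hand the result back as more context.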

I've never seen so much happen in 4 months in tech. ChatGPT last November, GPT-4 now, the ChatGPT API being 100 times cheaper to use than anyone expected, MS picking up the whole thing and plonking it into Bing and Office365. And then earlier in the year all the Stable Diffusion stuff.

There's also a load of stuff happening with the Llama model and Alpaca - essentially people exploring smaller open-source versions of ChatGPT and asking "what can I run locally?", and it turns out you can do quite a lot.
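To give an idea of what "running locally" looks like - a minimal sketch with Hugging Face transformers, assuming you've already converted a LLaMA/Alpaca checkpoint to ./llama-7b (hypothetical path); the prompt follows Alpaca's instruction format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./llama-7b"  # placeholder: your locally converted checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # halves memory: ~14 GB for the 7B model
    device_map="auto",          # requires `pip install accelerate`
)

# Alpaca's instruction-tuned prompt template
prompt = ("Below is an instruction that describes a task. Write a response "
          "that appropriately completes the request.\n\n"
          "### Instruction:\nExplain what a language model is in two sentences.\n\n"
          "### Response:\n")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200,
                        do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

And that's the unoptimised route - the quantised llama.cpp builds people are passing around get the 7B model running on a laptop CPU.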

Usually with tech we see occasional big leaps and then incremental improvements, but I think the LLM stuff is such a wide, unexplored new frontier that the velocity is very fast. It's like opening up new territory: there are all kinds of directions people can explore. We're finding new stuff and then 8 weeks later figuring out how to run it 100 times more efficiently, etc.

And everyone is getting to play with this new frontier, rather than it being some mega project that no one gets to use. The OpenAI API is cheap, you can talk to ChatGPT for free, Bing is free, Stable Diffusion is open-source, Llama/Alpaca is free, etc.

Lots of people losing their shit on twitter

 


elon, woz, and a bunch of other dudes are asking nicely here - please sign this petition to stop AI from wiping out humanity:

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Quote

we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

hmmm...which will win....capitalistic AI labs...or the greater good for the rest of us. stay tuned!

I wonder what all the great governments on this planet will think about this "institute a moratorium". ha!


On 3/22/2023 at 8:28 AM, zero said:

I know there is no way to stop the flow of technological progress. humans have an innate drive to keep advancing toward something. but what? we don't know exactly. something about making life easier, a better life for the next generation...I suppose. why not stop all this AI advancement and just be happy with what we have achieved technologically so far? why do we have to keep going toward something newer and shinier? IMO humans need to focus more on reality, truthfulness, what is right in front of us, instead of a life online. the natural world... AI advancements are going to make it nearly impossible at some point to discern what is reality, and what is a computer-generated version of reality. I'm already really creeped out by some of this AI porn crap I see. this is going to set off even more of a myriad of psychological catastrophes. the last 25 years of internet/social media have shown millions of humans are unable to mentally handle a digital version of themselves. AI is going to fuel more of an addiction to the online fantasy world, less on the real.

but whatever man...this shit is all out of our control. next 40 years should be fun lol

 

progress towards what is the most relevant question

for some reason all this "progress" isn't towards:

-universal housing, food, water, education, solving climate change, etc

its mostly moneygrabs by corporations

its not progress at all its bullshit and its not even technology, its garbage

the idea of ai wiping out humanity is pure hubris and projection that the owning class may have something above them for the first time.  what they choose to ignore is that they are the ones creating this ai, not someone in a basement they pontificate about.  they have the massive datacenters and they are controlling huge workforces creating this stuff.  it's all them so their pretend cares towards Ai taking over humanity are full of shit, every billionaire is full of shit


1 hour ago, zlemflolia said:

the idea of ai wiping out humanity is pure hubris and projection that the owning class may have something above them for the first time.  what they choose to ignore is that they are the ones creating this ai, not someone in a basement they pontificate about.  they have the massive datacenters and they are controlling huge workforces creating this stuff.  it's all them so their pretend cares towards Ai taking over humanity are full of shit, every billionaire is full of shit

I'm with you on this. the idea of AI wiping out humanity is a bit of sci-fi humor to me. I see the irony in these tech-associated big wigs like Musky suddenly jumping on this bandwagon that AI development needs to be stopped or slowed down. and the idea that various governments across the planet will have to step in and regulate AI somehow is even more ludicrous to me.

and yes, technological progress has not solved any of those glaring problems like food shortages, housing, income inequality, or climate change. the overarching vibe regarding humanity is that shit keeps getting worse rather than better... but it's good to keep in mind that humans have been saying "shit's getting worse" for thousands of years now. of course now everyone can plaster their unhappiness/negativity all over the internet, so it's in our faces a hell of a lot more. staying above all that is all we can try and do.


1 hour ago, zlemflolia said:

its not progress at all its bullshit and its not even technology, its garbage

well, technically it is progress. i don't know what exactly is going on under the bonnet of this chatgpt, but you can't deny some sort of progress has been made there. if nothing else, it's a very efficient data mining algorithm. it's progress with a very narrow focus, that of the owning class (as you put it). and i guess i don't have to tell you what they are all about.

at first i thought that maybe a general access to this technology might even the playing field a bit, but i caught myself wanting to be naive again: even if openai is made public and accessible to all and yadayada, the investors will surely get the more powerful version of it. so...

i guess we, the citizens, should be more concerned about the money stolen from us. if the new financial crisis hits (has it already? idk), make sure we don't bail the fuckers out again. we should really be more proactively vigilant.


The Only Way to Deal With the Threat From AI? Shut It Down | Time

I heard of Eliezer Yudkowsky a few years ago, in the context of the Ex Machina movie - someone telling me that the movie is not actually about a Turing test, it's about Yudkowsky's "AI Box" experiment: the idea that a sufficiently smart AI would be very hard to contain.

He's been popping up a lot on twitter recently. And it turns out he's one of the founders of LessWrong, which I hadn't really heard of before but is kind of a community of like-minded rationalists who have been talking about all this stuff for a while.

Anyway, Yudkowsky is someone who's been thinking and writing about this stuff for a while, and while there are plenty of people who disagree with him, he's not a crank: he is generally careful about what he says and tries to back everything up with reasonable arguments.

He wrote this article for Time and he pulls no punches:

Quote

 

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

...

We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

...

To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long.

...

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

 



humanity coming together against capitalism? governments throwing out the ideas of their own dominance? businesses ignoring their own market gains over others to instead all decide not to exceed?

lol right okay sure


It's more like trying to contain weapons of mass destruction.

 

Not sure how much I agree with the article, but it's a surprising thing to see in a mainstream magazine.


This sounds more like businesses scared of losing their income to the competition, now trying to stop the revolution that might destroy them.


1 hour ago, zazen said:

It's more like trying to contain weapons of mass destruction.

Not sure how much I agree with the article, but it's a surprising thing to see in a mainstream magazine.

yeah we’re not so great at weapons of mass destruction containment obv. we could ruin every square inch of the world a few times over with the active nukes alone.

i agree with what of the article you posted. i just realize that’s a laughable pipe dream. people writing articles like that don’t want change, they want to feel smug in acting like they want change. real change isn’t helped by one article like that…if he really wants that to happen, he’s gotta privately get large swathes of the smart people who are doing these things on his side. doing it publicly is not a help, it’s just an ego trip.


53 minutes ago, auxien said:

people writing articles like that don’t want change, they want to feel smug in acting like they want change. real change isn’t helped by one article like that…if he really wants that to happen, he’s gotta privately get large swathes of the smart people who are doing these things on his side. doing it publicly is not a help, it’s just an ego trip.

Well, in his defense, he has sort of done that. In 2000 Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence - now called the Machine Intelligence Research Institute - and has, I'm given to understand, done quite a lot of work to make "AI safety" and "AI alignment" into research fields that people have heard of. He also founded the LessWrong forum, which is focused on AI safety. He's quite well known in the AI field. Here's Sam Altman (OpenAI boss) talking about him. They disagree, but my point is that people like Sam Altman are well aware of him.


 


oh that’s interesting then. i obv don’t know anything about the guy beyond what’s on this page…will sure be curious to read more about that stuff…anyone working (actually working) towards keeping that shit in check is sure to be worth some time. thanks zazen


On 3/30/2023 at 5:33 AM, zlemflolia said:

progress towards what is the most relevant question

for some reason all this "progress" isn't towards:

-universal housing, food, water, education, solving climate change, etc

its mostly moneygrabs by corporations

its not progress at all its bullshit and its not even technology, its garbage

the idea of ai wiping out humanity is pure hubris and projection that the owning class may have something above them for the first time.  what they choose to ignore is that they are the ones creating this ai, not someone in a basement they pontificate about.  they have the massive datacenters and they are controlling huge workforces creating this stuff.  it's all them so their pretend cares towards Ai taking over humanity are full of shit, every billionaire is full of shit

I think that in the medium term the promise of AI is A) a great teacher that is an expert in every topic at the level of the best human experts, B) an excellent therapist on par with some of the best in the world, and C) something that can help you as an individual make more informed and thought-out decisions in general life choices through its own reasoning and information-gathering skills - all for a low price compared to what even a tiny fraction of these benefits costs in the current world.

While this doesn't directly work on societal issues such as universal housing, climate change etc, giving people this power will indirectly lead to those areas being improved. Everyone already working on these issues will be more effective. AI can be thought of as the meta-problem - problem-solving applied to problem-solving itself - so if you have strong AI, your capacity to solve other problems is magnified.

And while I agree there is a lot of bullshit surrounding the safety discussion, I wouldn't rule out the risks of AGI; if you have a simple solution to the alignment problem, let us know.

On 3/30/2023 at 7:38 AM, cichlisuite said:

At first i thought that maybe a general access to this technology might even the playing field a bit, but i caught myself wanting to be naive again: even if openai is made public and accessible to all and yadayada, the investors will surely get the more powerful version of it. so...

OpenAI is capped-profit and is controlled by a non-profit. Obviously this doesn't mean that what you're talking about can never happen, but it helps.

And if it does go that way, it's not a zero-sum game - the general population can still get huge benefit from this stuff even if the "owning class" gets access to better models, or gets them sooner.


2 hours ago, vkxwz said:

I think that in the medium term the promise of AI is A) a great teacher that is an expert in every topic at the level of the best human experts, B) an excellent therapist on par with some of the best in the world, and C) something that can help you as an individual make more informed and thought-out decisions in general life choices through its own reasoning and information-gathering skills - all for a low price compared to what even a tiny fraction of these benefits costs in the current world.

While this doesn't directly work on societal issues such as universal housing, climate change etc, giving people this power will indirectly lead to those areas being improved. Everyone already working on these issues will be more effective. AI can be thought of as the meta-problem - problem-solving applied to problem-solving itself - so if you have strong AI, your capacity to solve other problems is magnified.

And while I agree there is a lot of bullshit surrounding the safety discussion, I wouldn't rule out the risks of AGI; if you have a simple solution to the alignment problem, let us know.

OpenAI is capped-profit and is controlled by a non-profit. Obviously this doesn't mean that what you're talking about can never happen, but it helps.

And if it does go that way, it's not a zero-sum game - the general population can still get huge benefit from this stuff even if the "owning class" gets access to better models, or gets them sooner.

i dont think its good for any of those things, and i think a lot of these "decisions" we all have to make are manufactured ways to force people into paths from which value can be extracted from them, by not letting them have freedom afterwards or taking things away from them if they choose another path.

there is nothing more dystopian than an AI therapist, in fact i dont think there would be any point talking to it since part of the main purpose of a therapist is having a sentient human to talk to


ai is powerless and useless without an embodied system performing work. yeah, maybe ai can "escape" and start executing sql injection against everyone, but humans can already do that kind of stuff, and at best ai will expose vulnerabilities that already exist, not supersede us to become some alien overlord.

those who control the computing clusters control the AI, and those people are the owning class, whose status as owning class must be eliminated and proletarianized so that the proletariat can then abolish itself and let us be humans again not wage slaves

all this AI shit is a distraction


Hey Zlemflolia this will be up your street:

ChatGPT Is a Bullshit Generator Waging Class War (vice.com)

Quote

ChatGPT isn't really new but simply an iteration of the class war that's been waged since the start of the industrial revolution. That allegedly well-informed commentators can infer that ChatGPT will be used for "cutting staff workloads" rather than for further staff cuts illustrates a general failure to understand AI as a political project. Contemporary AI, as I argue in my book, is an assemblage for automatising administrative violence and amplifying austerity. ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things. Instead of expressing wonder, we should be asking whether it's justifiable to burn energy at "eye watering" rates to power the world's largest bullshit machine.

I found it to be an interesting point of view, quite underrepresented in the current frenzy


I think the biggest risk right now with LLMs like GPT-x is people not understanding their limitations and treating their output as gospel.

Also, us being flooded with AI-generated "content" that will commoditise a lot of people's sources of income.

And a third is us being flooded with false and inaccurate information, making it even harder to tell fact from fiction (this loops back to the first point as well).

Rather than thinking about limiting AI, I think we should be talking about how to increase human intelligence - or rather, how to make better use of the intelligence we've got. We've got to raise our expectations of the things we consume, be more critical of the things we read, and be more diligent about how we come to trust a source.


18 hours ago, zlemflolia said:

i dont think its good for any of those things, and i think a lot of these "decisions" we all have to make are manufactured ways to force people into paths from which value can be extracted from them by not letting them have freedom afterwords or taking things away from them if they choose another path. 

there is nothing more dystopian than an AI therapist, in fact i dont think there would be any point talking to it since part of the main purpose of a therapist is having a sentient human to talk to

ChatGPT is only at a primitive stage, but I personally have learnt a lot from it, especially using it in conjunction with a text on the topic you want to learn about. You can engage in a back-and-forth conversation where you ask for more detail on whatever you don't understand, and also ask it to verify whether your understanding (as you explain it) is correct - like a 1-on-1 tutor. I'm certain it'll be an extremely valuable learning tool in the future as it improves.
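To make that concrete: the "back and forth" is literally just resending a growing message list every turn, so the model always sees the whole conversation. A minimal sketch with the 2023-era openai library (the tutor system prompt is my own wording):

```python
import openai  # pip install openai (2023-era ChatCompletion API)

openai.api_key = "sk-..."  # your key

# The conversation is a list of messages that grows each turn; the model
# sees the whole history, which is what makes it behave like a tutor.
messages = [{"role": "system", "content":
             "You are a patient tutor. Answer questions about the text the "
             "student is reading, and check their explanations for errors."}]

while True:
    question = input("you> ")
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("tutor>", answer)
```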

As for the therapist part, I agree that it feels dystopian, but I've already witnessed someone use it in a therapy-like way when they were in a dark place, and it genuinely helped them when they had no access to a professional (there is a shortage of them where I live). Imo it's at an extremely early stage and is not meant to be used for therapy at this point though.

I'd want to hear you expand on what you're saying about manufactured decisions though.


10 hours ago, zazen said:

Hey Zlemflolia this will be up your street:

ChatGPT Is a Bullshit Generator Waging Class War (vice.com)

I found it to be an interesting point of view, quite underrepresented in the current frenzy

from that article: "it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all."

This is a bad take that seems to be spreading among people who have little to no understanding of the field. There is strong evidence that these systems contain internal models of the world their training data describes; here's a nice article on one example of this:

https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world

"But despite the model just needing to predict the next move, it spontaneously learned to compute the full board state at each move - a fascinating result. A pretty hot question right now is whether LLMs are just bundles of statistical correlations or have some real understanding and computation! This gives suggestive evidence that simple objectives to predict the next token can create rich emergent structure (at least in the toy setting of Othello). Rather than just learning surface level statistics about the distribution of moves, it learned to model the underlying process that generated that data."

But I do think it's very important to acknowledge that the GPT models do hallucinate - using them in place of Google is a bad idea. They're better used as a sort of reasoning engine.
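For anyone wondering how a claim like "linear emergent world representation" even gets tested: you freeze the model, record its internal activations, and check whether a simple linear classifier can read the board state straight off them - if it can, the state was already encoded there. A minimal sketch with scikit-learn; the .npy files are hypothetical stand-ins for activations and board labels you'd record yourself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical recorded data: activations[i] is the model's residual-stream
# vector after move i; labels[i] is the true state of one board square
# (0 = empty, 1 = current player's piece, 2 = opponent's), derived from the
# game rules -- the "mine vs theirs" framing the article found works best.
activations = np.load("othello_activations.npy")  # shape (n_positions, d_model)
labels = np.load("square_labels.npy")             # shape (n_positions,)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0)

# A purely linear probe: if this classifies well, the board state is encoded
# linearly in the activations rather than being computed by the probe itself.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("held-out probe accuracy:", probe.score(X_test, y_test))
```

In practice you'd train one probe per board square, but the principle is the same.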


5 hours ago, vkxwz said:

The language model has no idea what it's talking about because it has no idea about anything at all.

what's the difference between that and a human being? I mean, look at this thread...

:trollface:

no but really, we do, think, too much of ourselves...

