
AI - The artificial intelligence thread


YO303


Posted (edited)
2 hours ago, cichlisuite said:

what i was getting at (and trying to understand myself) was it doesn't actually 'possess' knowledge, more like it fetches from its 'learned' repositories, or indexed registers + the user query sent from the prompt. meaning the real 'knowledge' it has, or is being equipped with, is data mining knowledge that can parse strings (also text recognition module for reading text in images), and recognize contexts and how contexts relate to each other on different levels. i.e. literature - classic - shakespeare - king lear, versus literature - scientific - physics - quantum electrodynamics, etc... meaning that it should understand these levels and the general knowledge trees must be strictly organized somewhere inside chatgpt hq servers.... or is it capable of making 'sense' of unstructured, rudimentary, redundant, scattered pieces of information?

it would be very interesting to have an insight into a sort of pseudo-code analysis of chatgpt's 'thinking' procedures, and how it stores and caches data. like how the neural-network topology recurses through itself in a way, idk.

that seems pretty impressive, however, if i understand correctly, these are run by separate, optimized versions of chatgpt that cannot relate to each other in terms of being one interconnected entity?

I see this as a common misunderstanding. These models are not databases, they don't 'look up' stuff. They're enormously complex formulas whose parameters are 'fitted' with data towards a target objective. It's no different conceptually than finding the best intercept and slope to fit a line through a bunch of points in a plane. The difference is that instead of two parameters you're finding the fit for billions of parameters in a humanly intractable formula. It's statistics on steroids, not databases.

This is why we can't have 'pseudo code' for how they produce their outputs and why we say we don't 'understand' how they work. While we do know *exactly* what the 'formula' is and we know the parameters, it's so immensely big and complicated that there's no point in trying to extract insight from it. There are, though, techniques to gain partial understanding of how variations in their inputs produce variations in the outputs, and so on.

Whenever I see the argument that they don't possess knowledge I'd argue back that whatever test you can come up with for a human to prove knowledge, this thing can do it too.
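(The line-fit analogy can be sketched in a few lines of Python. This is a toy illustration with made-up points, not anything from an actual model; the loop is the same 'evaluate the error, adjust the parameters' idea that training does with billions of parameters.)

```python
# Fit a slope and intercept through noisy points by gradient descent.
# Conceptually, training a neural network is this with billions of parameters.
points = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8), (4.0, 9.1)]

slope, intercept = 0.0, 0.0  # initial guess
lr = 0.01                    # learning rate: size of each adjustment

for step in range(5000):
    # Gradient of the squared error with respect to each parameter.
    g_slope = sum(2 * (slope * x + intercept - y) * x for x, y in points)
    g_intercept = sum(2 * (slope * x + intercept - y) for x, y in points)
    # Nudge both parameters downhill on the error surface.
    slope -= lr * g_slope
    intercept -= lr * g_intercept

print(round(slope, 2), round(intercept, 2))  # settles near 2.0 and 1.0
```

The point is that nothing is stored or looked up anywhere: whatever 'knowledge' there is ends up baked into the fitted parameter values.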

Edited by GORDO
  • Like 4

Posted (edited)
4 hours ago, cichlisuite said:

what i was getting at (and trying to understand myself) was it doesn't actually 'possess' knowledge, more like it fetches from its 'learned' repositories, or indexed registers + the user query sent from the prompt. meaning the real 'knowledge' it has, or is being equipped with, is data mining knowledge that can parse strings (also text recognition module for reading text in images), and recognize contexts and how contexts relate to each other on different levels. i.e. literature - classic - shakespeare - king lear, versus literature - scientific - physics - quantum electrodynamics, etc... meaning that it should understand these levels and the general knowledge trees must be strictly organized somewhere inside chatgpt hq servers.... or is it capable of making 'sense' of unstructured, rudimentary, redundant, scattered pieces of information?

it would be very interesting to have an insight into a sort of pseudo-code analysis of chatgpt's 'thinking' procedures, and how it stores and caches data. like how the neural-network topology recurses through itself in a way, idk.

that seems pretty impressive, however, if i understand correctly, these are run by separate, optimized versions of chatgpt that cannot relate to each other in terms of being one interconnected entity?

i probably jumped off of your question a bit too far, apologies. i'm not too great at staying in the lane these days (programmers are needing to update my software).

but really....GORDO explained things far better than i ever could've. (thanks GORDO)

i imagine there'll be some people focusing on studying how the procedures are actually happening, if not some already of course. but like GORDO said, that level of complexity is difficult to break down...i imagine it's akin to studying cosmology and creating these Big Bang-through-now models of the evolution of the universe...they're not trying to pinpoint the exact formation of exact stars or exact galaxies, there's just too many...so those researchers are just crafting models until they get distributions of stars/galaxies that largely match our current overview of the universe. or studying the exact trail of one bolt of lightning in the finest of detail...it's possible, sure, and it can tell you a lot about how lightning usually behaves in the conditions found for that weather/etc., but every bolt is vastly different from every other, and mostly unpredictable. i'm blabbering again...

i believe those versions that are analyzing images are a separate instance (or instances) of ChatGPT, but i'm not certain at all on that. i'm sure the 'main' GPT-4's basic language analysis and output abilities are incorporated into the visual instances even if the visual ones are trained separately....idk how the computing power/etc. is all sorted and accounted for when trying to decide how to do this stuff. i've seen multiple reports of how ridiculously computationally expensive it is for even simple queries with these programs...i think i remember estimates of the Bing GPT-powered chat actually costing Microsoft a few cents each time, far more than any ad revenue offset would be worth (at least, currently). all this shit is probably going to end up being as bad for eating up 'useless' power as BTC/crypto mining if it ain't already.

Edited by auxien
  • Like 1

12 hours ago, auxien said:

“We ARE the machines! We always were!!!”

in each era the most advanced technology is projected onto the conception of the mind, and of god

  • Like 2

saw this posted in my LinkedIn feed:

Quote

How can #ChatGPT effectively help and develop the #Manufacturing industry?

Artificial intelligence has impacted the social and working world to a high degree, and it’s everywhere! Everywhere I look, I see new articles and posts about the great benefits of ChatGPT- in almost every setting imaginable.

I thought I’d give it a try…

So, in its own words, how can ChatGPT help manufacturing:

1) Quality control: ChatGPT can analyse large amounts of data and help identify patterns and anomalies in the manufacturing process, which can help improve quality control and reduce defects.

2) Predictive maintenance: ChatGPT can analyze sensor data from machines and predict when they may require maintenance, allowing manufacturers to perform maintenance proactively, reducing downtime and improving productivity.

3) Supply chain optimization: ChatGPT can help optimize supply chain operations by analyzing data from suppliers, inventory, and logistics, identifying potential bottlenecks or inefficiencies, and suggesting ways to streamline the process.

4) Customer service: ChatGPT can help manufacturers provide better customer service by answering customer inquiries in real-time, providing product information, and resolving issues quickly and efficiently.

5) Innovation: ChatGPT can help manufacturers develop new products and processes by analyzing data and identifying new trends and opportunities in the industry.

Sounds great right?

Of course, there will always be limitations and concerns, like with any implementation of new #technology. However, the early signs are showing promise as many industry experts agree that the application of ChatGPT within manufacturing will allow businesses to make faster decisions, reduce running costs and improve product quality.

Learn more about ChatGPT and its potential relationship with the manufacturing sector below, and let me know your thoughts on AI in the comments

the people that commented were all cheering ChatGPT on.

the post mentions "in its own words"... so ChatGPT is dictating its advantages to the humans, and the humans are loving it.

"of course there are concerns" but blah blah blah it will reduce costs + improve product quality. yay!

this is how it happens. the slow adoption of AI tools into the workforce among the masses. we're already more or less there, I guess. it starts with an AI-lite tool like ChatGPT, then progresses to whatever next more enhanced version comes along. finally we get to the point where AI is viewed as 100% essential for business/society to function. AI at that point may have "woken up," therefore the machines will have the upper hand. again, I suppose we're already there, since we all rely on phones/computers/internet for just about everything.

yes this is more doom and gloom. but based on my observations of how human beings easily adapt to new technologies, are always hungry for more information, and have a tendency to get swept up in emotional decision making, there will be things that get overlooked, and will come back around with incredibly negative consequences. chances are the consequences will end up in more destruction of the planet we live on. and more societal division.

 

  • Like 1

Posted (edited)

I had a realization the other day.

ML models use GPUs or custom hardware like TPUs. This allows the massively parallel computation required.

Then I was remembering something about quantum computing. It's mostly good for parallel processing, right?

I asked Bing about it.

Quote

Tell me about work being done to run ML models on quantum computers.

Based on web search results, here is some information about work being done to run ML models on quantum computers:

Quantum machine learning is the integration of quantum algorithms within machine learning programs, with the aim of improving computational speed and data storage. Quantum machine learning can be divided into four categories, depending on whether the data and the device are classical or quantum:

  1. Machine learning with quantum computers refers to using quantum algorithms to analyze classical data, such as images or text. This can involve encoding the data into quantum states, performing quantum operations on them, and measuring the outcomes. Some examples of quantum algorithms for machine learning are quantum support vector machines, quantum principal component analysis, and quantum neural networks.
  2. Machine learning for quantum systems refers to using classical algorithms to analyze quantum data, such as measurements from quantum experiments or simulations. This can involve extracting features, finding patterns, or optimizing parameters from the quantum data. Some examples of classical algorithms for machine learning are deep neural networks, reinforcement learning, and Bayesian inference.
  3. Hybrid quantum-classical machine learning refers to using a combination of classical and quantum devices to perform machine learning tasks. This can involve outsourcing computationally hard subroutines to a quantum device, while performing the rest of the computation on a classical device. Some examples of hybrid quantum-classical algorithms are variational quantum eigensolver, quantum approximate optimization algorithm, and quantum generative adversarial networks.
  4. Quantum-inspired machine learning refers to using mathematical and numerical techniques from quantum physics to improve classical machine learning algorithms. This can involve exploiting analogies between physical systems and learning systems, such as neural networks or Boltzmann machines. Some examples of quantum-inspired algorithms are tensor networks, matrix product states, and restricted Boltzmann machines.

Quantum machine learning is an active area of research that explores the potential advantages and challenges of using quantum computers for machine learning. Some of the advantages include faster computation, better generalization, and access to more complex data. Some of the challenges include noise and errors, scalability and connectivity, and data privacy and security.

This is a weird time to be alive, and it's going to get even weirder.

Edited by Summon Dot E X E
added numbering to list
  • Like 1

The parallel computations used by GPUs in ML are the same kinds of parallel computations used by GPUs in graphics processing: linear algebra. They're only doing really big matrix multiplications.

I don't know much about quantum computing, but I don't think it would achieve gains in the speed or volume of these types of computations, which are straightforward; the edge of quantum computing lies in other problem spaces, I think.

But I could be wrong, maybe quantum computing would make training a deep learning model orders of magnitude faster, or maybe open the way for other types of parameter fitting, or maybe (I'm just wildly guessing rn) make the parameters diffuse and be a probability distribution rather than fixed real numbers; models would then be stochastic rather than deterministic.

  • Like 3

Quote

"The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves."

— Norbert Wiener, God & Golem, Inc. (Cambridge: MIT Press, 1964) p. 69.

 

  • Like 1

Posted (edited)
1 hour ago, iococoi said:

the future wars will be narrated by a vocal frying valley girl

"the allied forces LiTeRallY outflanked enemy positions, and the enemy was like, "oMG, what's even happening?!", but it was too late." #maneuverwarfare #aiassistedbattlefields

Edited by cichlisuite
  • Haha 3

On 5/5/2023 at 12:55 AM, GORDO said:

I see this as a common misunderstanding. These models are not databases, they don't 'look up' stuff. They're enormously complex formulas whose parameters are 'fitted' with data towards a target objective. It's no different conceptually than finding the best intercept and slope to fit a line through a bunch of points in a plane. The difference is that instead of two parameters you're finding the fit for billions of parameters in a humanly intractable formula. It's statistics on steroids, not databases.

This is why we can't have 'pseudo code' for how they produce their outputs and why we say we don't 'understand' how they work. While we do know *exactly* what the 'formula' is and we know the parameters, it's so immensely big and complicated that there's no point in trying to extract insight from it. There are, though, techniques to gain partial understanding of how variations in their inputs produce variations in the outputs, and so on.

Whenever I see the argument that they don't possess knowledge I'd argue back that whatever test you can come up with for a human to prove knowledge, this thing can do it too.

i didn't say they were databases, i asked if there are components to the whole system, which must(?) include databases... i mean there must be something structured that feeds into this neural network (for learning purposes), it's not gibberish (or is it?) so my question was (is), does it need a structured and organized data set to be able to learn to think of a relevant output, or can you feed it random strings and integers which are only chunks of real data, and from it, it is able to make sense of it by completing the logical gaps by itself? again, i'm asking about the learning process here.

and another thing: once a version of chatgpt is 'taught' something, is this knowledge referenced at some point when the user writes a question in the prompt, or is it performing the entire computation from scratch when the user writes a question in the prompt? i'm sorry if i'm prying, or sounding thick for not getting it...you can ignore me if i annoy you.

Quote

This is why we can't have 'pseudo code' for how they produce their outputs and why we say we don't 'understand' how they work. While we do know *exactly* what the 'formula' is and we know the parameters, it's so immensely big and complicated that there's no point in trying to extract insight from it. There are, though, techniques to gain partial understanding of how variations in their inputs produce variations in the outputs, and so on.

so the neural network by ways of structure and recursion is making 'the magic'? do you know what kind of operations (functions) are assigned to an individual node in that structure?

 

 


On 5/1/2023 at 1:41 PM, auxien said:

“When you see something that is technically sweet, you go ahead and do it.”

full quote: "When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success."

well put

  • Like 1

6 hours ago, cichlisuite said:

So the neural network by ways of structure and recursion is making 'the magic'? do you know what kind of operations (functions) are assigned to an individual node in that structure?

 

The neural networks are just chains of matrix multiplication, fairly straightforward math, so we understand what operations are being done. But the amount of computation happening is so massive, and the training process creates emergent structures in the operations (by modifying them automatically, gradually over time). The issue is that it's quite difficult to identify and understand this emergent structure, even though we can see that it works and know every step of the math done to compute the output. It's like trying to understand how a human brain generates thoughts based only on data about the firing patterns of all the neurons in the brain at all times. There is a whole science around how to set up the neural networks as an environment that the emergent functions can "evolve" in, but much less understanding of the nature of those functions.
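(A minimal concrete version of 'chains of matrix multiplication': a two-layer network's forward pass in numpy. The weights here are random stand-ins for what training would normally produce.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights standing in for trained parameters.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

def forward(x):
    h = np.maximum(0, x @ W1)  # matrix multiply, then ReLU nonlinearity
    return h @ W2              # another matrix multiply

x = np.array([1.0, -0.5, 0.3, 2.0])
print(forward(x))  # two output numbers: just the end of the chain
```

Every step is plainly visible, yet nothing in the code tells you what the hidden units "mean", which is exactly the interpretability problem, blown up to billions of parameters.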

  • Thanks 1

6 hours ago, cichlisuite said:

i didn't say they were databases, i asked if there are components to the whole system, which must(?) include databases... i mean there must be something structured that feeds into this neural network (for learning purposes), it's not gibberish (or is it?) so my question was (is), does it need a structured and organized data set to be able to learn to think of a relevant output, or can you feed it random strings and integers which are only chunks of real data, and from it, it is able to make sense of it by completing the logical gaps by itself? again, i'm asking about the learning process here.

and another thing: once a version of chatgpt is 'taught' something, is this knowledge referenced at some point when the user writes a question in the prompt, or is it performing the entire computation from scratch when the user writes a question in the prompt? i'm sorry if i'm prying, or sounding thick for not getting it...you can ignore me if i annoy you.

so the neural network by ways of structure and recursion is making 'the magic'? do you know what kind of operations (functions) are assigned to an individual node in that structure?

 

 

It needs structured data, yes: you have to provide it with the "right" answers and feed it a ton of them. Afaik chatgpt is a 'next best word' model, so the right answer for that is just the word that succeeded some other chain of words.

More generally, as I was saying before, what machine learning models do is fit a function: f(X) = y, and the process of learning is finding what exactly this 'f' is. For it to 'learn' you feed it examples of X and y, and the process consists of evaluating and adjusting iteratively a bunch of different functions until the 'error' is minimized in some way. Again, think of finding the best intercept and slope, it is truly no different from this, only exploded in complexity.

Afaik, or unless there's some other magic in chatgpt, it's doing the computation from scratch in each session, but everything that's been said in the session is fed as input, and that's why you can 'correct' or ask for changes.
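(A sketch of what 'from scratch each time, with the session fed back in' looks like; `generate` here is a hypothetical stand-in for the model, not a real API.)

```python
# The model itself keeps no memory between calls. A chat front-end fakes
# memory by re-sending the whole transcript as the prompt on every turn.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the model; just reports how much context it saw.
    return f"(reply given {len(prompt)} chars of context)"

transcript = []

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript)  # entire history, every single time
    reply = generate(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

chat("Summarize King Lear.")
chat("Shorter, please.")  # 'correcting' works only because turn 1 is re-sent
```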

Units in a NN can have pretty much any operation in them as long as it is differentiable (talking calculus here), but there are some common ones, referred to as activation functions; you're better off looking them up on wiki than me trying to list them.
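(For the curious, a few of the common activation functions written out; each is (sub)differentiable, which is what lets gradient-based training adjust the parameters feeding into it.)

```python
import math

def sigmoid(z):  # squashes any input into (0, 1)
    return 1 / (1 + math.exp(-z))

def tanh(z):     # squashes any input into (-1, 1)
    return math.tanh(z)

def relu(z):     # zero for negatives, identity for positives
    return max(0.0, z)

print(sigmoid(0.0), tanh(0.0), relu(-3.0), relu(3.0))  # 0.5 0.0 0.0 3.0
```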

 

  • Thanks 1

Posted (edited)

Supervised and unsupervised learning both exist. Supervised means the data is labeled. Unsupervised means it isn't.

The models themselves are "black boxes". Researchers have gained some visibility into them, but the high level of dimensionality limits human understanding. Emergent abilities in LLMs continue to surprise people as well, as has been discussed here... abilities the models weren't specifically trained to have but nevertheless possess.

Edited by Summon Dot E X E

Posted (edited)
21 hours ago, Summon Dot E X E said:

Structured and unstructured learning both exist. Structured means the data is labeled. Unstructured means it isn't.

The models themselves are "black boxes". Researchers have gained some visibility into them, but the high level of dimensionality limits human understanding. Emergent abilities in LLMs continue to surprise people as well, as has been discussed here... abilities the models weren't specifically trained to have but nevertheless possess.

No, you're talking about supervised and unsupervised learning. Both require data that's structured in some way to fit the problem at hand.

In this field, when we say unstructured data we are often talking about data that isn't a table, such as images, sounds or words.

Edited by GORDO
  • Thanks 1

2 hours ago, GORDO said:

No, you're talking about supervised and unsupervised learning. Both require data that's structured in some way to fit the problem at hand.

In this field, when we say unstructured data we are often talking about data that isn't a table, such as images, sounds or words.

Ah, you're right. I conflated the terms.


6 hours ago, cichlisuite said:

This is very well written, and I thoroughly agree. But will we be able to stop the giant grind stone?

no, we won't. the humans at the top of the chain running the planet don't like to hear the truth, because it doesn't fit in with their delusional view of reality. I agree the author of that article really nailed it on just about all points. this one especially so - 

Quote

There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

imo you always have to look at the psychology of any situation involving the human decision making process, take into account individuals' personal motivations when they are faced with important choices to make. this slow AI roll out is no different. it is being pushed out in an entirely capitalistic way, to a global society already totally addicted to the latest and greatest technology, by profit-motivated mega rich humans. and there is no end game planning here. just keep building it, keep competing with other companies, and see what eventually happens.

and kinda ironic that on an IDM forum, there are a lot of folks that seem to recognize this barrel of a loaded gun the collective us are staring down. 20 or so years ago when AI was still very far fetched, a lifetime-away sorta thing, I was all about the idea that AI should replace humans. now I dream of a society where computers were never invented. where there are no screens, no phones to stare at, telling us what to do.

  • Like 4

On 5/6/2023 at 4:27 PM, GORDO said:

It needs structured data, yes: you have to provide it with the "right" answers and feed it a ton of them. Afaik chatgpt is a 'next best word' model, so the right answer for that is just the word that succeeded some other chain of words.

More generally, as I was saying before, what machine learning models do is fit a function: f(X) = y, and the process of learning is finding what exactly this 'f' is. For it to 'learn' you feed it examples of X and y, and the process consists of evaluating and adjusting iteratively a bunch of different functions until the 'error' is minimized in some way. Again, think of finding the best intercept and slope, it is truly no different from this, only exploded in complexity.

Afaik, or unless there's some other magic in chatgpt, it's doing the computation from scratch in each session, but everything that's been said in the session is fed as input, and that's why you can 'correct' or ask for changes.

Units in a NN can have pretty much any operation in them as long as it is differentiable (talking calculus here), but there are some common ones, referred to as activation functions; you're better off looking them up on wiki than me trying to list them.

 

Thanks that was informative. Could you say it's a game of elimination of a sort, in a way that it first 'explodes' in contexts and possibilities, and then slowly starts picking up the strongest threads, and the array of contexts and possibilities shrinks, until only one possible solution remains...?

