zlemflolia

Supporting Member
  • Posts: 6,045
  • Joined
  • Last visited
  • Days Won: 8

Posts posted by zlemflolia

  1. 11 hours ago, zlemflolia said:

    oooo  o oo ooo ooo  o   ooo    oo  o   .    oooo   ooo    .    ooooo  o  oo   ooo    ooo o  ooo    oo   o   oooo  ooo   .     o  oooo ooo     .

    x        xxx      xxx     .     xxxxxx      xxxxxxx

    bet no1 guess what this shit is

  2. 1 hour ago, iococoi said:

    the only way modern civilization can be maintained is by redefining what modern civilization is, and a drastic redesign of the means of production itself - production and distribution - through communalization of resources to remove redundancies and waste, while maintaining social equity and local self sufficiency.  literally communism.  everyone just gets scared of it tho cuz they heard that word used last century and got told it was bad, for some reason they won't elaborate on since it all traces back to a bunch of literal cold war or Nazi anti-communist propaganda

  3. ai appearing intelligent to us is an elaborate illusion we play on ourselves, being willing to put ourselves below the machines and not put them back in their place - to serve our needs, not for us to serve them.  this illusion is caused by the division of labor, which produces false consciousness, placing our trust in capital rather than in ourselves

    "We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!"
    Minister-companion of the Jihad[src]

    https://dune.fandom.com/wiki/Butlerian_Jihad

  4. 2 hours ago, vkxwz said:

    What's your reasoning for this? All things we've called intelligent up until this AI stuff is analogue obviously but I don't see any clear reason why we wouldn't be able to create it digitally, if you believe that intelligence arises from the interactions of physical matter then if we were able to adequately simulate that physical matter would that not yield intelligence? Anyway this another case of not having a definition for the word intelligence.

    digital allows deterministic calculations which are statically defined.  analog computation is more than just meeting the low bar of Turing completeness the way a digital computer does - you can't simulate analog computers with digital ones
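to make the digital-vs-analog point concrete, here is a minimal sketch (in Python, with invented component values) of what "simulating an analog computer digitally" actually involves: the continuous dynamics of an RC circuit get replaced by discrete time steps of size dt, which is exactly the approximation under dispute - the digital version only ever sees the system at discrete instants.

```python
# Hypothetical illustration: digitally "simulating" an analog RC circuit
# by discretizing time with Euler's method. The component values and the
# step size dt are made up; a real analog circuit evolves continuously,
# while this sketch only approximates it at discrete instants.

def simulate_rc(v_in, r, c, dt, steps):
    """Approximate the capacitor voltage over time for a constant input v_in."""
    v = 0.0
    trace = []
    for _ in range(steps):
        # continuous law: dv/dt = (v_in - v) / (R*C), here discretized
        v += dt * (v_in - v) / (r * c)
        trace.append(v)
    return trace

# charge a 1 kOhm / 1 uF circuit toward 5 V over 20 time constants
trace = simulate_rc(v_in=5.0, r=1e3, c=1e-6, dt=1e-5, steps=2000)
```

whether this discretized approximation counts as "simulating the analog computer" is precisely the disagreement in the thread.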

  5. there are animals, fungus, machines, they all do stuff, maybe you can call it all intelligence, maybe not, but intelligence is just a word we invented.  getting hung up on it is silly.  but when people start saying "well who knows maybe us humans ourselves are just large language models and our words are all just predictions made in our brains and..." then you have to push back and say fuck off, I'm not a computer, then throw their phone into a field

  6. searching for origin and purpose.  alienated from daily existence.  trying to find human connection in a world of machines.  trying to convince yourself you aren't a machine.  trying to find something older and more authentic.  following your own decisions instead of those of the people above you.  facing against inhuman nihilistic sadism.  winning and becoming a real human bean

  7. imagine a database of indisputable facts such as nutrient density distributions of plants, geography, scientific and mathematical facts, physics reasoning systems, all which are known to be accurate and legitimate, combined into one system, capable of being controlled by an advanced query language which is learnable and simple.  reasoning, inference, and research capabilities.  the ability to spawn simple sub-AIs to do tasks based on loops of reasoning and analysis.  verbal prompt capabilities or vibrational or tactile for the blind and deaf.  fully modular, programmable, FOSS, locally computed.  this is the type of AI I want to see.  not this bullshit. 

    the way to create AI is not by creating a big blob and trying to trim it down until it can kind of do some stuff.  no the way is to create sculptures, building into gardens, arranged into forests of AIs all communicating and collaborating alongside our own world to help us do the things we need to do. 

    there is nothing like this, software hasnt even begun to achieve its basic primitive forms, we are not even at the beginning of the technological transformation of humanity, so it's no surprise all we see is a bunch of charlatans trying to make money
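a toy sketch of the curated fact database described above, queryable by a simple learnable interface. every fact, key name, and the query syntax here are invented for illustration - this is just the smallest possible version of the idea, not any existing system.

```python
# Toy sketch of a curated fact database with a simple query interface.
# All facts and attribute names are invented for illustration.

FACTS = {
    ("spinach", "iron_mg_per_100g"): 2.7,
    ("lentils", "iron_mg_per_100g"): 3.3,
    ("spinach", "vitamin_c_mg_per_100g"): 28.1,
}

def query(subject=None, attribute=None):
    """Return all (subject, attribute, value) triples matching the filters."""
    return [
        (s, a, v)
        for (s, a), v in FACTS.items()
        if (subject is None or s == subject)
        and (attribute is None or a == attribute)
    ]

# e.g. everything known about spinach:
rows = query(subject="spinach")
```

the point being that every answer such a system gives traces back to a vetted fact, rather than to a statistical blur over scraped text.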

  8. "I would argue that humans can be reduced to this sort of definition too, we simply compute the next state(the next contents of conscious experience + actions) based on the current state(current contents of conscious experience), like a state machine"

    speak for yourself

    all this trash is an excuse to not make real AI manually using curated sets of reliable information about the world. instead its just text patterns with no connection to reality, except coincidental when observed text patterns relate to reality too

    weve had chatbots a long time, everyones obsessed with these new ones.  but they shouldnt be

  9. 1 hour ago, zazen said:

     

    My position is: you're both right.

    It is a markov chain in the academic sense, because it is just a 'guess the next word machine'

    BUT (and its a big but) it guesses the next word by looking at (up to...) the previous 2000 words, and using its vast neural net weights from its vast training material.

    Something that can guess the next word by looking at the previous 2000 words has to go pretty deep.

    It is a LANGUAGE MODEL. A MODEL of our LANGUAGE. And it is LARGE.

    So it knows that a "Cat" might "Sit" on a "Mat". It doesn't have any external reference for what a Cat, Mat, or Sitting are, but it does have all the rest of its training data to cross reference, so it knows that a "Cat" can "Purr", eats "Fish", has "Four" "Legs" etc. On and on, deeper and deeper, billions of carefully weighted connections between hundreds of thousands of words in different contexts.

    So it knows (at a deep level) all the interconnections between all the words, without having an external reference for what any of them actually are. That's what the Model bit of LLM means.

    Is it an 'expert system'? - sort of - it can tell you a lot of stuff. But sometimes it gets it wrong and it's very hard to distinguish. But in a sense it kicks the arse of any expert system we've previously been able to build (IBM's Watson and all that crap). So even though it's not meant to be an expert system it comes closer to being one than anything else we've ever built. I guess if it can pass the bar exam you have to admit it knows some shit.

    That OthelloGPT paper and the stuff about it building internal model is fascinating.

    Is it conscious? I dont think so (but the interesting thing is that we can't say for sure because we dont know how consciousness works, who knows what happens in those milliseconds where billions of silicon components work in parallel to produce an output) but the shock to us humans is that just by scaling shit up we've been able to make huge leaps. Us humans are used to being the only competent users of language on the planet (granted dolphins and monkeys etc but they're not using language to the extent we are). then suddenly, BANG, there's another entity in the game that can competently use language, and we barely understand how it works. It hints at emergent properties. And it suggests that timelines until AGI are much shorter than we thought (and that also means us humans are a lot less special than we thought - perhaps our fabled consciousness is not such an amazing thing after all...)

    I find it all endlessly fascinating because its my interest in technology and my interest in consciousness and philosophy crashing into each other

     

    i dont think it means humans are less special than we thought, because this machine requires massive and diverse quantities of human labor fed into it to construct contraptions of human engineering larger in scope and capabilities than anything we have ever made before in this field.  i wouldnt attribute much of anything to the "ai" but rather to the people who made it.  the people made the language, the pickaxes used to mine the materials to build it, the coal to power it.  i think the idea that this is ai, or is even a step towards ai, is people becoming confused and misled by smoke and mirrors because we are innately trying to see patterns, and it gives us language patterns, the kinds to which we attribute intelligence.  if this ai outputted 345 657 657 453 4352 342  654 675 76 instead we would not be so convinced, yet it may contain the same patterns we're seeing but in new robes

    chatgpt is narcissus' mirror to the texts humans already made and nothing more, when it gives a novel answer that i wouldnt consider spammy garbage i might change my mind.  if a human wrote the drivel it writes i wouldnt even be interested
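the "guess the next word" mechanism both posts are arguing about can be sketched as a counting Markov model - the simplest possible version of the idea, built on a toy corpus invented here. an actual LLM replaces this count table with a neural network conditioned on thousands of preceding tokens, which is the whole disagreement: whether that difference is one of degree or of kind.

```python
from collections import Counter, defaultdict

# Toy bigram "guess the next word" model. A real LLM replaces this
# count table with a neural network conditioned on a context thousands
# of tokens long; this is only the counting caricature of the idea.

corpus = "the cat sat on the mat the cat ate the fish".split()

# count which word follows which
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent word observed after `word`."""
    return following[word].most_common(1)[0][0]
```

on this corpus, `predict("the")` returns "cat", because "cat" follows "the" more often than "mat" or "fish" do.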

  10. 35 minutes ago, Satans Little Helper said:

    There are people using chatgpt to code. Give some requirements and ask for some code in a specific language. And there are people asking to explain things. Examples of physicians putting in some cases and asking what to do given the current literature and clinical guidelines. it is very much an expert system. Not sure what's so hard to understand here. It's trained on pretty much the entire internet and as such a summary of all knowledge on there. And the user can acquire that knowledge by what seems like a normal conversation.

    it's not knowledge, it's random text sampled as if it represents truth, creating a bastardized amalgamation of that meaning transformed into whatever the ai's trainer wants it to say.  most importantly it will continue being developed only for the needs of the owning class, since they control the means of production on which these softwares are made

  11. 20 minutes ago, Satans Little Helper said:

    Please amuse me with making an actual argument. it's not just another chatbot. or a markov chain.

    what value does it generate?  you defend it yet it gets basic questions wrong and can't even know that it's wrong.  it's not an expert system

    An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.

    these don't have that
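for reference, the two-part definition quoted above can be shown as a minimal forward-chaining inference engine: a knowledge base of facts and rules, plus an engine that applies the rules to known facts to deduce new ones. the facts and rules themselves are invented for illustration.

```python
# Minimal forward-chaining inference engine matching the textbook
# definition quoted above: a knowledge base (facts + rules) and an
# inference engine that applies rules to known facts to deduce new
# facts. The example facts and rules are invented for illustration.

rules = [
    ({"has_fur"}, "is_mammal"),
    ({"gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fur", "eats_meat"}, rules)
```

every deduced fact here is traceable to an explicit rule firing, which is the explanation/debugging ability the quoted definition mentions - and the part a plain language model lacks.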

  12. 10 hours ago, vkxwz said:

    from that article: "it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all."

    this is a bad take that seems to be spreading among people who have little to no understanding of the field. There is strong evidence that these systems contain internal models of the world described by the data they are trained on, here's a nice article on one example of this:

    https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world

    "But despite the model just needing to predict the next move, it spontaneously learned to compute the full board state at each move - a fascinating result. A pretty hot question right now is whether LLMs are just bundles of statistical correlations or have some real understanding and computation! This gives suggestive evidence that simple objectives to predict the next token can create rich emergent structure (at least in the toy setting of Othello). Rather than just learning surface level statistics about the distribution of moves, it learned to model the underlying process that generated that data."

    But I do think it is very important to acknowledge the fact that the GPT models do hallucinate, using it in place of google is a bad idea. It's better used as a sort of reasoning engine.

    its just replicating the structure of human language, which is internally deeply linked, and learned to talk like othello players.  if you gave it false othello-player human language it would learn made up shit instead.  its just an advanced markov chain

    23 minutes ago, Satans Little Helper said:

    exactly. the irony is you can point the same criticisms towards humans. that's what makes these criticisms appear so naive. to me at least.

    on the larger agi fears: as long as GPT doesn't tell us where to go next in science (unifying theory of physics?), there's nothing "autonomous" about it. currently it's a rather successful language model combined with the ability to summarise vast amounts of data/knowledge. it's an expert-system, if you will. but without the ability to be autonomous, as far as i can tell. a technology which will bring a new kind of industrial revolution. said it before and still think that.

    are u kidding me it is NOT an expert system its just a language model

  13. 10 hours ago, vkxwz said:

    ChatGPT is only in a primitive stage but I personally have learnt a lot from it, especially using it in conjunction with a text on the topic you want to learn about. You can engage in a back and forth conversation where you can ask for more detail on whatever you don't understand and also ask it to verify if your understanding(as you explain it) is correct, like a 1on1 tutor. I'm certain it'll be an extremely valuable learning tool in the future as it improves.

    As for the therapist part I agree that it feels dystopian, but I've already witnessed someone use it in a therapy like way when they were in a dark place and it genuinely helped them when they had no access to a professional(there is a shortage of them where I live). Imo it's in extremely early stages and is not meant to be used for therapy at this stage though.

    I'd want to hear you expand on what you're saying about manufactured decisions though.

    you have to sign contracts and register with subscriptions and choose between various products, forms of training, jobs, etc., constantly weighing costs and benefits rather than having a baseline expectation of human treatment without compulsory labor.  this absurd labyrinth of "consumer freedom" obscures the means of production itself and the nature of production and distribution, instituting enforced distribution according to price rather than value, and disallowing the value generated by workers collectively from being distributed back to them rather than to an owning class
