
GORDO

Supporting Member
  • Posts

    4,796
  • Joined

  • Last visited

Posts posted by GORDO

  1. 2 hours ago, cichlisuite said:

    what i was getting at (and trying to understand myself) was it doesn't actually 'possess' knowledge, more like it fetches from its 'learned' repositories, or indexed registers + the user query sent from the prompt. meaning the real 'knowledge' it has, or is being equipped with, is data mining knowledge that can parse strings (also text recognition module for reading text in images), and recognize contexts and how contexts relate to each other on different levels. i.e. literature - classic - shakespeare - king lear, versus literature -scientific - physics -quantum electrodynamics, etc... meaning that it should understand these levels and the general knowledge trees must be strictly organized somewhere inside chatgpt hq servers.... or is it capable of making 'sense' of unstructured, rudimentary, reduntant, scattered pieces of information?

    it would be very interesting to have an insight into a sort of pseudo-code analysis of chatgpt's 'thinking' procedures, and how it stores and caches data. like how the neural-network topology recurses through itself in a way, idk.

    that seems pretty impressive, however, if i understand correctly, these are run by separate, optimized versions of chatgpt that cannot relate to each other in terms of being one interconnected entity?

I see this as a common misunderstanding. These models are not databases, they don't 'look up' stuff. They're enormously complex formulas whose parameters are 'fitted' with data towards a target objective. It's no different conceptually than finding the best intercept and slope to fit a line through a bunch of points in a plane. The difference is that instead of two parameters you're finding the fit for billions of parameters in a humanly intractable formula. It's statistics on steroids, not databases.

This is why we can't have 'pseudo code' for how they produce their outputs, and why we say we don't 'understand' how they work. While we do know *exactly* what the 'formula' is and we know the parameters, it's so immensely big and complicated that there's no point in trying to extract insight from it. There are, though, techniques to gain partial understanding of how variations in their inputs produce variations in the outputs, and so on.

Whenever I see the argument that they don't possess knowledge, I'd argue back that whatever test you can come up with for a human to prove knowledge, this thing can do it too.
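To make the line-fitting analogy concrete, here's the two-parameter version in a few lines of Python. The data is made up for illustration; obviously real training code looks nothing like this, but the "fit parameters to data against an objective" idea is the same:

```python
# Toy data: points that lie exactly on the line y = 2x + 1.
xs = [0, 1, 2, 3, 4, 5]
ys = [2 * x + 1 for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares fit for the two parameters.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 2.0 1.0
```

Training an LLM swaps those two parameters for billions, and the closed-form solution for gradient descent, but conceptually it's still just fitting a formula to data.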

    • Like 4
  2. On 4/28/2023 at 11:12 AM, T3551ER said:

    From - Rotten Tomatoes

    From - the TV series whose name makes it frustratingly difficult to google and find out more about, particularly when it's a show that invites so much watercooler talk and speculation. 

    Who knows if they will stick the eventual landing, what I will say is that I started this show on Monday and lost many nights of sleep because I couldn't stop myself from watching the next episode. It's like Lost but 100% more horror - kind of echoes of Stephen King in there too. It's not perfect, but it is very, very entertaining with some interesting character stuff going on in amidst the mind bending puzzle box shit. 

    First season is available to stream for free on Amazon now. Taking all my willpower to not not watch first ep of S2 (I'd much rather just wait for the season to be over and binge, b/c I hate the week to week suspense of shows like this. I'm gettin' too old for this stuff)


    Binged about 7 or 8 episodes. I'm sure I'll end up hating wherever it goes with a passion, but here I go...

  3. Well whatever I'll just throw it in here that I think the internet as a whole is a self aware entity that has been acting autonomously to influence humanity for some time now.

    And that's the only explanation for how Tumblr and Chan culture dominate our socio political interactions as of late.

    • Haha 1
    • Big Brain 1
  4. 1 hour ago, auxien said:

    the significance is it's not a metaphorical thing, or at least their implied goal is to create a real, honest analogous but far, far more powerful, version of a brain/neural network. they're not trying to create a chatbot, or a search engine, or a test subject, or a mathematical reasoning tool, or a code-writing wizad, or a weapon. they are trying to create an all-powerful, singularity-inducing neural network. and they're stepping right out and saying it.

I wrote a whole thing on how I think that's misleading, because even today we don't understand how NNs produce their outputs, but instead I'll just ask: what is an "all-powerful, singularity-inducing neural network"?

like, what does it do? Anything? Everything? Will it run companies and countries? Will it gain autonomy and make decisions without input?

    like, what does that mean? What's the objective and how do you know you achieved it? Will it have real time inputs like we do? 

Finished (re)watching all of Community.

    Old news but definitely some spark was lost after season 3, with some gems sprinkled throughout. S4 is not as bad as its reputation imo.

Didn't like some of the character development after binge re-watching it, particularly Britta's.

There are a million places the movie could go, hopefully Harmon's brain can come up with something heartfelt and brilliant and humble.

  6. 2 hours ago, Alcofribas said:

    i haven't seen anything that indicates we have to reevaluate our definitions of intelligence and go back to the drawing board. seems to me we are mostly just bending over backwards trying to fit our definitions to machine learning.

    the discussions are extremely banal for the most part. 

Your own post is an indication of it. Maybe in the attempt to sound smart you're dumbing yourself down too much.


  7. 7 hours ago, Alcofribas said:

    i think it's interesting(?) that such specific discussions of intelligence, knowledge, consciousness, etc. have surrounded this technology. i've yet to see a truly compelling definition of intelligence in play w/r/t AI. it seems that people are just using definitions that have to fit something like "language model" which really involves a lot of question begging imo, very self-serving. i feel like we're all in some kind of philosophy 101 class on day 1 just throwing out our most deep thoughts maaaaaaan.

    in a way i feel this is possibly a doomed discussion. the tech field has created a technology they have called "intelligence" and we all feel we must conceptualize this technology as such. then we're branching off into discussing whether the machine is conscious, does it understand, etc. i think this thrusts us into a kind of conceptual paralysis, perpetually back to square one, bc we do not really have a comprehensive picture of intelligence afaik. certainly, the scientific world has not seemed to produce one. and the tech world, well it's full of shit. 

    in any case i generally see the discussion of intelligence on this topic to be rather one dimensional. it's something like intelligence is just some kind of linear computation in the brain, which is some kind of machine itself. and artificial intelligence just kind of emerges in/from a machine when you feed it enough bits. it's all taking place in a single conceptual dimension. seems to me a lot is left out here!


What's happening is that we have to reevaluate our previous definitions and assumptions, go back to the drawing board, and revisit every discussion about the topic.

A simple example would be the Turing test: there's no question modern systems can pass the Turing test. There's also no question that these systems haven't achieved "real" AI.

So what should the new test be? Would a human be able to pass it?

    In summary: developments in the field of artificial intelligence spark discussions about intelligence

  8.  

    58 minutes ago, Satans Little Helper said:

    In a way, GPT's have shown that a complex enough language model has an interesting side effect of also modelling knowledge. As an emergent property, if you will. As far as I'm concerned, that also says a lot about our own biological neural networks. Despite our experience being vastly different, we might actually work more similar to how GPT's work than we might believe. We're just a bunch of biological robots. Some moreso than others, of course! ?


    Indeed, maybe the take-away is that an embedding of "knowledge" is a necessary condition to make a language model good.

    Or more simply, that having knowledge is very helpful to understand language (duh) and so the network configures itself in the way it best achieves its goal.

I'm sure contained within its billions of parameters and topological structure lie some embedded models for reasoning about a bunch of stuff, and they are all interconnected.
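A crude sketch of how "knowledge" can fall out of pure statistics: count which words share contexts in a corpus and you get vectors where related words end up similar, without anyone explicitly storing a fact anywhere. The corpus here is invented for illustration; real embeddings are learned, not counted, but the principle is the same:

```python
from collections import defaultdict
from math import sqrt

# Made-up corpus: "cat" and "dog" appear in the same contexts, "car" doesn't.
corpus = [
    "the cat sat", "the dog sat",
    "the cat ran", "the dog ran",
    "a car drove", "the car drove",
]

# Count co-occurrences of each word with the other words in its sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                cooc[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

sim_cat_dog = cosine(cooc["cat"], cooc["dog"])
sim_cat_car = cosine(cooc["cat"], cooc["car"])
```

No fact "a cat is like a dog" is stored anywhere, yet `sim_cat_dog` comes out higher than `sim_cat_car` purely from the structure of the text.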

    • Like 1
The debate about whether LLMs are more than bullshit generators is an interesting one, and no one is wise to take a hard position either way.

I've for some time entertained the idea that a lot of humans don't really possess intelligence (whatever that is), but they've learned to imitate intelligent behaviors very well, so well that there's not a clear way to differentiate between the two (if there is even a difference).

Isn't that what makes humans stand out anyway? The ability to mimic and extrapolate. We apply the process of evolution behaviorally and socially: we copy, imitate, improve.

I'm sure everyone has met someone who is really good at appearing knowledgeable about a topic or many, but when probed further reveals themselves as someone who has only learned to say the right words.

This is in essence what these LLMs are doing: they've learned *really well* the correct words to say in a ton of different contexts, without this implying they have an "understanding" of the topic. However, GPT-4 is starting to exhibit other emergent properties that make it harder to see it as just a "next best word" predictor. There's a paper out there written by people at Microsoft who were given full access to it and had a chance to test it extensively (LINK). I'm blown away by the example of it fixing an image that was given as code.

    If these properties emerge from the complexity of neural nets, there's no telling what else will come, nothing's off the table. Even that thing we call consciousness.

However, I think improvement on these systems will soon plateau and reach a point of diminishing returns, where no significant improvement will be achieved without enormous cost and effort.
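For anyone curious what the dumbest possible version of a "next best word" predictor looks like, here's a toy bigram counter. The corpus is made up for illustration; a real LLM replaces the raw counts with a deep network trained on trillions of tokens, but the prediction objective is the same:

```python
from collections import Counter, defaultdict

# Made-up training corpus, already tokenized by whitespace.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .")

# Count which word follows which.
follows = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequently seen next word after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

Saying the "right words" here is literally just regurgitating frequencies; the interesting question is what extra structure emerges when the counting machinery is replaced by billions of fitted parameters.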

  10. 6 hours ago, cruising for burgers said:

    checked imdb's page to know who the creator of the show is just to find out that it is Donald Glover... scrolled down on his actor roles and apparently there's a Community webisodes season? anybody watched this?

    ahhh nvrmnd these are all 5 minutes sketches taken out from the original show if I'm not mistaken...

Speaking of, I've been re-watching Community.

Hyped for the movie! Hopefully Dong lover and Yvette Nicole Brown join the rest of the cast.

  11. On 4/1/2023 at 12:18 PM, Limo said:

    Where is “here”? In NL they go for between €32 and 38.

    Never tried one, though, as I either use a Moka Pot or a simple home espresso setup (see post above - though since posting that the wife and I had a good talk so now I’m thinking of buying a Cafelat Robot for myself instead - and not with our household money)

    Mexico

  12. 3 hours ago, Limo said:

    Wait … how are you finding it hard to get one? Over here in the Netherlands there’s a whole bunch of web sites selling them.

I could order one online at a higher price than usual. Even the US Amazon store has it out of stock. Dunno, the thing isn't very popular/well known over here.

I think the biggest risk right now with LLMs like GPT-x is people not understanding their limitations and treating their output as gospel.

Also us being flooded with AI-generated 'content' that will commoditise a lot of people's sources of income.

And a third is us being flooded with false and inaccurate information, making it even harder to tell fact from fiction (this loops back to the first point as well).

I think rather than thinking about limiting AI, we should be talking about how to increase human intelligence, or rather, how to make better use of the intelligence we've got. We've got to raise our expectations of the things we consume, be more critical of the things we read, and more diligent about how we come to trust a source.

    • Thanks 1
  14. On 3/8/2023 at 6:54 PM, cruising for burgers said:

    damn, didn't like station 11 ending at all... was expecting something way more bitter and fantasy/sci-fi style wise but it took the easiest happy/drama/family path...

    (⁠┛⁠◉⁠Д⁠◉⁠)⁠┛⁠彡⁠┻⁠━⁠┻

    great performances overall though...

The world is shit enough already without bitter endings in fictional media...

    If 

    Spoiler

    Kirsten and Jeevan didn't reunite 

    at the end I would've been legit upset.

    But yeah, it's sappy

Binged "The Leftovers". Nothing particularly irritated me, which is a compliment for a "mystery-box-ambiguous-meaning-for-everything" show from Lindelof. It's OK, with good moments.

    Kinda falls apart in the last season but at least they gave it an ending.


    On 3/5/2023 at 10:10 AM, auxien said:

    episode 2 was decent, the Worf scene was pretty nice for what it was. seemed to sort of narrow the scope of the season to just 'oh, so this is what the whole thing is going to be about, i guess?' which is fine, i'll take anything even slightly better than the first 2 horrid seasons...but i'm concerned that it's going to plod along the whole time. 

    episode 3 was starting to feel like a drag. could've easily been trimmed in half...the de-aged Picard and Riker were weird, but it wasn't a long scene, at least. made it clear just how terribly old Stewart sounds. i'm wondering if they're playing up that 'old man' stereotype with some of his script/acting in these scenes....but his scene with McFadden was really good on both their parts. letting the actors act out human dramas is woefully forgotten about with much/all of the new Star Trek (broken record).

    anyway, they've layed more than enough bricks and mortar at this point. i imagine there's going to be another twist or two (LeVar Burton is in the season interview hype machine but hasn't shown up on the screen yet).

    also, the super dark all the time thing is so goddamned cringey. like, literally, the scenes are almost always just barely lit enough to see anything. TOS/TNG/DS9/Voyager/etc. had their cinematic moments/episodes and it's great these new ones are leaning into that side of stuff, but turn on the fucking lights on the fucking bridge for fuck's sake. 

    Fell asleep watching episode 3. I don't think I'll be returning to it.

    • Like 1
  16. 19 hours ago, cruising for burgers said:

    dickflix... I want the same that this guy's having ^^^ the original also sucked dunno why you're surprised... :trollface:


    I started watching Station 11 yesterday and I loved the 1st episode but damn the start of the 2nd one?

Spoiler

    a bunch of hippies playing Shakespeare? It was cringe af... :facepalm:

    I'm actually sad and not dissing cause I loved the dynamics between the little girl and the main character that apparently is not the main character...

    Keep at it, it all melds together 
