
GORDO

Supporting Member

Posts posted by GORDO

  1. On 7/12/2023 at 7:36 PM, GORDO said:

    So I'm watching American Gods. I had already seen season 1 way back when but never followed up on it.

    Up to season 3 now; it seems like a corpse of what it originally was. Is it worth finishing?

    Yeah... not worth it. Imma try and read the book now 'cause I quite liked the theme and setting; the show really blows after the first season.

  2. On 7/5/2023 at 2:47 PM, Stock said:

    Anyone spent some money on Steam sales yet? I don't know why, but I want to go and try a space 4X game (never played one before!). I'm torn between Stellaris and Endless Space 2, which both look awesome but also seem to need a shit ton of DLC to offer a complete experience.

    I got Pyre and Outer Wilds, both for very cheap.

  3. Across the Spider-Verse.

    Best animation ever? Love how it's highly stylized beyond its style, if that makes sense.

    Story and pacing not as good; some good moments, but

    Spoiler

    I'm guessing it was ruined by them making it a two-parter

     

    • Like 1
  4. 11 hours ago, T3551ER said:

    Yeah, I'm stuck! I flew through the first season and now I'm semi-invested in the characters (and the mysteries), but... if the endgame isn't there (they say they have it planned, but who tf knows) and/or if it's cancelled and they haven't been able to wrap it up, that'll blow chunks.

    Spoiler

    I like when Kenny throws down his badge and it feels all "fuck you, fuck you, fuck you... you're cool... fuck you, and I'm out" about it

     

    I've been watching (not binging) season 2 and I don't see myself returning for a third. I'm sure there'll be some fuck-you cliffhanger at the end, but it doesn't seem to be moving forward.

    It's reminding me of how The Walking Dead mostly turned into roommate drama, arguing about who left the pickle jar open.

    • Like 1
  5. Slowly going down the rabbit hole of coffee snobbery thanks to the AeroPress and that damned James Hoffmann. Upgraded my grinder and even got a scale; taking mental notes of every aspect of how brews turn out.

    If I ever find a conveniently located specialty coffee store imma be doomed.

    • Like 3
  6. Watched John Wick 4 as well.

    Should've stopped with the first one; it's still the best of the bunch.

    All the 'lore' is stupid; it was OK when it remained vague. Assassins have sanctuaries? They pay with gold coins? OK. They answer to some stupid all-powerful secret society with backwards rules? Laaaaaame.

    Ending's bad. Teaser scene cheap.

    Still has great choreographed fight scenes.

    • Like 1
    • Thanks 1
  7. So I was able to get an AeroPress for a regular price.

    While the first brew surprised me with its quality, I'm finding I need the same amount of beans for a single serving as I used in the moka pot for a triple shot, so I'm unsure if I'm doing things right.

    Also struggling to find the best ratio of beans/grind size/brew time/water temp for the ideal shot. At one point I was using such a fine grind that I felt I needed a hydraulic press.

    The moka pot was equally fiddly and inconsistent at first, but I had settled into a stable routine that yielded consistent results.

    But with this thing cleanup is so easy that I'm gonna stick with it.

    • Like 1
  8. 4 hours ago, cichlisuite said:

    Thanks, that was informative. Could you say it's a game of elimination of sorts, in that it first 'explodes' in contexts and possibilities, then slowly starts picking up the strongest threads, and the array of contexts and possibilities shrinks until only one possible solution remains...?

    It's an iterative process where the error is reduced slightly at each step by adjusting the parameters of the function.

    Again, think of slope and intercept. I start with a random pair (s0, i0), so f(x) = i0 + s0·x. I check how this function fits an observation from the problem I'm trying to predict and adjust the parameters based on the error, so now I've got (s1, i1) and f(x) = i1 + s1·x, and I keep repeating this until I run out of data or no longer gain any significant reduction in the error. Now imagine that instead of a pair of parameters I have billions, that f is a stupidly comprehensive formula allowing any kind of interaction between its variables, and that the problem is something more interesting than fitting a line through points (rough sketch of the loop below).

    Interestingly, there doesn't need to be a 'unique solution', since both the initialization and the method for adjusting the parameters can yield different overall results, the result being the specific parameters that produced the least error.
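    A minimal Python sketch of that loop, fitting f(x) = i + s·x by gradient descent; the data, learning rate, and step count are all made up for illustration:

    ```python
    import random

    # Toy observations from a known line (y = 1 + 2x) plus a little noise.
    data = [(x, 1.0 + 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(20)]

    s, i = random.random(), random.random()  # random starting pair (s0, i0)
    lr = 0.002                               # how big each adjustment is

    for epoch in range(500):
        for x, y in data:
            err = (i + s * x) - y            # how far off this observation is
            # nudge both parameters in the direction that shrinks the error
            s -= lr * err * x
            i -= lr * err

    print(f"fitted: f(x) = {i:.2f} + {s:.2f}x")  # should land near 1 + 2x
    ```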

    • Like 1
  9. 21 hours ago, Summon Dot E X E said:

    Structured and unstructured learning both exist. Structured means the data is labeled. Unstructured means it isn't.

    The models themselves are "black boxes". Researchers have gained some visibility into them, but the high level of dimensionality limits human understanding. Emergent abilities in LLMs continue to surprise people as well, as has been discussed here... abilities the model wasn't specifically trained to have but nevertheless possesses.

    No, you're talking about supervised and unsupervised learning. Both require data that's structured in some way to fit the problem at hand.

    In this field, when we say unstructured data, we're often talking about data that isn't tabular, such as images, sound, or text.
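    A toy contrast of the two, sketched with scikit-learn (the numbers are made up): both take a structured numeric matrix X, but only the supervised one gets labels.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    # Structured input either way: a numeric matrix, one row per example.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])

    # Supervised: every row comes with a labeled "right answer" y.
    y = np.array([2.1, 3.9, 6.2, 8.1])
    reg = LinearRegression().fit(X, y)

    # Unsupervised: no labels; the algorithm finds structure in X alone.
    km = KMeans(n_clusters=2, n_init=10).fit(X)

    print(reg.predict([[5.0]]), km.labels_)
    ```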

    • Thanks 1
  10. 6 hours ago, cichlisuite said:

    I didn't say they were databases, I asked if there are components to the whole system, which must(?) include databases... I mean, there must be something structured that feeds into this neural network (for learning purposes), it's not gibberish (or is it?). So my question was (is): does it need a structured and organized data set to be able to learn to produce a relevant output, or can you feed it random strings and integers which are only chunks of real data, and it is able to make sense of them by completing the logical gaps by itself? Again, I'm asking about the learning process here.

    And another thing: once a version of ChatGPT is 'taught' something, is this knowledge referenced at some point when the user writes a question in the prompt, or is it performing the entire computation from scratch each time? I'm sorry if I'm prying, or sounding thick for not getting it... you can ignore me if I annoy you.

    So the neural network, by way of structure and recursion, is making 'the magic'? Do you know what kind of operations (functions) are assigned to an individual node in that structure?

     

     

    It needs structured data, yes; you have to provide it with the 'right' answers and feed it a ton of them. Afaik ChatGPT is a 'next best word' model, so the right answer there is just the word that succeeded some other chain of words (crude toy below).
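    A crude toy version of 'next best word', nothing like what ChatGPT actually does (that's a neural network trained on a huge corpus), just counting which word follows which:

    ```python
    from collections import Counter, defaultdict

    # Count successors in a made-up corpus, then always emit the most
    # frequent follower: the crudest possible "next best word" model.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    word = "the"
    for _ in range(4):
        word = follows[word].most_common(1)[0][0]  # pick the "next best word"
        print(word, end=" ")                       # -> cat sat on the
    ```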

    More generally, as I was saying before, what machine learning models do is fit a function f(X) = y, and the process of learning is finding what exactly this 'f' is. For it to 'learn' you feed it examples of X and y, and the process consists of iteratively evaluating and adjusting a bunch of different functions until the 'error' is minimized in some way. Again, think of finding the best intercept and slope; it's truly no different from that, only exploded in complexity.

    Afaik, unless there's some other magic in ChatGPT, it's doing the computation from scratch in each session, but everything that's been said in the session is fed back in as input, which is why you can 'correct' it or ask for changes.

    Units in a NN can have pretty much any operation in them as long as it's differentiable (talking calculus here), but there are some common ones, referred to as activation functions; you're better off looking them up on Wikipedia than having me try to list them.
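    Still, to make it concrete, a couple of the standard ones (the weights and inputs below are arbitrary):

    ```python
    import math

    # Two common activation functions; each is differentiable, which is
    # what the error-minimizing training procedure requires.
    def relu(z):
        return max(0.0, z)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # What a single unit computes: a weighted sum of its inputs squeezed
    # through an activation function.
    weights, inputs, bias = [0.5, -0.3], [2.0, 1.0], 0.1
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # z = 0.8
    print(relu(z), sigmoid(z), math.tanh(z))
    ```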

     

    • Thanks 1
  11. The parallel computations used by GPUs in ML are the same kinds of parallel computations used by GPUs in graphics processing: linear algebra. They're mostly just doing really big matrix multiplications.
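    For instance, a dense layer applied to a batch of inputs is literally one matrix multiply; a tiny numpy stand-in (real dimensions run into the thousands):

    ```python
    import numpy as np

    batch = np.random.rand(32, 512)     # 32 examples, 512 features each
    weights = np.random.rand(512, 256)  # one layer mapping 512 -> 256
    out = batch @ weights               # one big matrix multiply = one layer
    print(out.shape)                    # (32, 256)
    ```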

    I don't know much about quantum computing, but I don't think it would achieve gains in the speed or volume of these types of computations, which are straightforward; quantum computing's edge lies in other problem spaces, I think.

    But I could be wrong: maybe quantum computing would make training a deep learning model orders of magnitude faster, or open the way for other types of parameter fitting, or maybe (I'm just wildly guessing rn) make the parameters diffuse into probability distributions rather than fixed real numbers; models would then be stochastic rather than deterministic.

    • Like 3
