Everything posted by GORDO

  1. Yeah... not worth it. Imma try and read the book now since I quite liked the theme and setting; the show really blows after the first season.
  2. So I'm watching American Gods. I had already seen season 1 way back when but never followed up on it. Up to season 3 now; it seems like a corpse of what it originally was. Is it worth finishing?
  3. Guardians of the Galaxy Vol. 3. Second best Marvel film after Guardians of the Galaxy. Could've shortened it a lil bit.
  4. Rewatched Tenet, this time paying more attention, so I actually understood what was going on. Cool concept, but still a bit meh.
  5. I got Pyre and Outer Wilds, both for very cheap.
  6. 'From' S2 was all filler. Don't think I'll watch a third. Binged the first season of 'Silo', it's good. Hoping for a second season although I don't see where it can go from here and still be good.
  7. Across the Spider-Verse. Best animation ever? Love how it's highly stylized beyond its style, if that makes sense. Story and the pacing not as good, some good moments but
  8. Only seen s01. Not interested in 2 atm. Is it redeemable?
  9. Yellowjackets starts off promising, then becomes so braindead.
  10. Beat this over the weekend. Couldn't get the game to actually sync properly but it's still pretty great!
  11. I've been watching (not binging) season 2 and I don't see myself returning for a third. I'm sure there'll be some fuck-you cliffhanger at the end, but it doesn't seem to be moving forward. It's reminding me of how The Walking Dead turned mostly into roommate drama, arguing about who left the pickle jar open.
  12. Slowly going down the rabbit hole of coffee snobbery thanks to the AeroPress and that damned James Hoffmann. Upgraded my grinder and even got a scale, taking mental notes of every aspect of how brews turn out. If I ever find a conveniently located specialty coffee store imma be doomed.
  13. Watched John Wick 4 as well. Should've stopped with the first one; it's still the best of the bunch. All the 'lore' is stupid; it was OK when it remained vague. Assassins have sanctuaries? They pay with gold coins? OK. They answer to some stupid all-powerful secret society with backwards rules? Laaaaaame. The ending's bad, the teaser scene cheap. Still has great choreographed fight scenes.
  14. Couldn't resist and still watching 'From' weekly. Last couple eps have been boring nothing burgers. I anticipate the season's gonna end on a cheap cliffhanger moving exactly nothing forward, and that'll piss me off.
  15. The Covenant. Good watch for a propaganda film. Sisu. Fun action stuff.
  16. posting this here as a bookmark https://www.lepoint.fr/sciences-nature/yuval-harari-sapiens-versus-yann-le-cun-meta-on-artificial-intelligence-11-05-2023-2519782_1924.php
  17. So I was able to get an AeroPress for a regular price. While the first brew surprised me with its quality, I'm finding I need the same amount of beans for a single serving that I used in the moka pot for a triple shot, so I'm unsure if I'm doing things right. Also struggling to find the best ratio of beans/grind size/brew time/water temp for the ideal shot. At one point I was using such a fine grind that I felt I needed a hydraulic press. The moka pot was equally fiddly and inconsistent at first, but I had settled into a stable routine that yielded consistent results. With this thing, though, cleanup is so easy that I'm gonna stick with it.
  18. It's an iterative process where the error is minimized slightly in each step by adjusting the parameters of the function. Again, think of slope and intercept. I start with a random pair (s0, i0), so f(x) = i0 + s0x. I check how this function fits an observation from the problem I'm trying to predict and adjust the parameters based on the error, so now I've got (s1, i1) and f(x) = i1 + s1x, and I keep repeating this until I run out of data or no longer gain any significant reduction in the error. Now imagine instead of a pair of parameters I have billions, and f is a stupidly comprehensive formula that allows for any kind of interaction between its variables, and the problem is something more interesting than fitting a line through points. Interestingly, there does not need to be a 'unique solution', since both the initialization and the method for adjusting parameters could yield different overall results, the result being the specific parameters that yielded the least error.
  19. No, you're talking about supervised and unsupervised learning. Both require data that's structured in some way to fit the problem at hand. In this field, when we say unstructured data, we're often talking about data that isn't a table, such as images, sounds or words.
  20. It needs structured data, yes: you have to provide it with the "right" answers and feed it a ton of them. Afaik ChatGPT is a 'next best word' model, so the right answer there is just the word that succeeded some other chain of words. More generally, as I was saying before, what machine learning models do is fit a function: f(X) = y, and the process of learning is finding what exactly this 'f' is. For it to 'learn' you feed it examples of X and y, and the process consists of iteratively evaluating and adjusting a bunch of different functions until the 'error' is minimized in some way. Again, think of finding the best intercept and slope; it is truly no different from this, only exploded in complexity. Afaik, unless there's some other magic in ChatGPT, it's doing the computation from scratch in each session, but everything that's been said in the session is fed as input, and that's why you can 'correct' it or ask for changes. Units in a NN can have pretty much any operation in them as long as it is differentiable (talking calculus here), but there are some common ones, referred to as activation functions; you're better off looking them up on the wiki than me trying to list them.
  21. The parallel computations used by GPUs in ML are the same kinds of parallel computations used by GPUs in graphics processing: linear algebra. They're only doing really big matrix multiplications. I don't know much about quantum computing, but I don't think it would achieve gains in the speed or volume of these types of computations, which are straightforward; the edge of quantum computing lies in other problem spaces, I think. But I could be wrong: maybe quantum computing would make training a deep learning model orders of magnitude faster, or maybe open the way for other types of parameter fitting, or maybe (I'm just wildly guessing rn) make the parameters diffuse and be a probability distribution rather than fixed real numbers; models would then be stochastic rather than deterministic.
  22. I see this as a common misunderstanding. These models are not databases; they don't 'look up' stuff. They're enormously complex formulas whose parameters are 'fitted' with data towards a target objective. It's no different conceptually than finding the best intercept and slope to fit a line through a bunch of points in a plane. The difference is that instead of two parameters you're finding the fit for billions of parameters in a humanly intractable formula. It's statistics on steroids, not databases. This is why we can't have 'pseudocode' for how they produce their outputs and why we say we don't 'understand' how they work. While we do know *exactly* what the 'formula' is, and we know the parameters, it's so immensely big and complicated that there's no point in trying to extract insight from it. There are, though, techniques to gain partial understanding of how variations of their inputs produce variations in the outputs and so on. Whenever I see the argument that they don't possess knowledge, I'd argue back that whatever test you can come up with for a human to prove knowledge, this thing can do it too.
  23. Binged about 7 or 8 episodes. I'm sure I'll end up hating wherever it goes with a passion, but here I go...
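The slope-and-intercept fitting loop described in post 18 can be sketched in a few lines of Python. Everything here is made up for illustration: the toy data (points near y = 2x + 1), the learning rate, and the number of passes.

```python
import random

random.seed(0)

# Toy "observations": points near the line y = 2x + 1, with a bit of noise.
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

# Start from a random pair (s0, i0), i.e. f(x) = i0 + s0*x.
s, i = random.random(), random.random()
lr = 0.002  # learning rate: how big each adjustment step is

# Check each observation, adjust (s, i) against the error, repeat many times.
for x, y in data * 300:
    err = (i + s * x) - y   # how far off the current f(x) is
    i -= lr * err           # nudge intercept to shrink the error
    s -= lr * err * x       # nudge slope to shrink the error

# After enough passes, (s, i) should land near (2, 1).
print(round(s, 2), round(i, 2))
```

Swap the pair (s, i) for billions of parameters and f for a far more complicated formula, and this same loop is essentially what training does.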
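As a rough illustration of the activation functions and differentiable units mentioned in post 20, here are a few common ones; the toy weights and inputs are invented for the example.

```python
import math

# Common activation functions (the per-unit nonlinearities).
def relu(z):
    # rectified linear unit: max(0, z)
    return max(0.0, z)

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # squashes any real number into (-1, 1)
    return math.tanh(z)

# A single NN unit: weighted sum of its inputs plus a bias, then an activation.
def unit(inputs, weights, bias, activation=relu):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

print(unit([1.0, 2.0], [0.5, 0.5], 0.0))  # -> 1.5
```

All three are differentiable (almost everywhere, in ReLU's case), which is what lets the error-adjustment step from post 18 propagate through them.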
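Post 21's point that GPUs are "only doing really big matrix multiplications" can be seen in miniature: one dense NN layer is literally a matrix product. A toy pure-Python sketch, with made-up matrices; a GPU does the same arithmetic over thousands of entries in parallel.

```python
def matmul(A, B):
    # A is m x n, B is n x p -> result is m x p
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# A layer with 3 inputs and 2 outputs, applied to a batch of 2 input vectors
# (stacked as the columns of X):
W = [[1, 0, -1],
     [0, 2,  1]]     # 2 x 3 weight matrix
X = [[1, 2],
     [3, 4],
     [5, 6]]         # 3 x 2: two input vectors side by side

print(matmul(W, X))  # -> [[-4, -4], [11, 14]]
```

Every entry of the result is an independent sum of products, which is exactly the kind of work that parallelizes well across GPU cores.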
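One simple version of the probing techniques mentioned in post 22 is finite-difference sensitivity: nudge one input at a time and watch how the output moves. The "fitted" parameters below are invented by hand for the example; in a real model they'd come out of training.

```python
# Toy "trained" model: pretend these parameters came out of a fitting process.
params = [0.5, -1.2, 2.0]

def model(x1, x2, x3):
    return params[0] * x1 + params[1] * x2 + params[2] * x3

# Central-difference sensitivity of f to its idx-th input around `inputs`.
def sensitivity(f, inputs, idx, eps=1e-4):
    lo, hi = list(inputs), list(inputs)
    lo[idx] -= eps
    hi[idx] += eps
    return (f(*hi) - f(*lo)) / (2 * eps)

base = [1.0, 1.0, 1.0]
print([round(sensitivity(model, base, k), 3) for k in range(3)])
# -> [0.5, -1.2, 2.0]
```

For this linear toy the sensitivities just recover the parameters; for a billion-parameter model the same probing gives local, partial insight without ever "reading" the formula itself.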