Everything posted by usagi

  1. bruh. that was a reference to the 1990 cyberpunk/horror b-movie Hardware. adding it in as a little homage was one of the best moves they pulled. the ending itself is fine, pretty much the same as the original. it's just the final fight that sucks.
  2. now that I cannot agree with at all. that's still my favourite of his and his most definitive imo.
  3. I mean, he's already done the Hateful 8. I kinda liked it but that seemed to be his least well-received film ever.
  4. I can't understand the mentality behind someone just looking at that blocky piece of shit and being like "yep, I want that". let alone whatever issues it might have with exploding or falling apart or whatever.
  5. nearing the end of this remake and have to say it's an extremely mixed bag. I've constantly gone through love/hate cycles with it. it looks and sounds great, some really slick finer details etc., and then it's marred by some stupid decisions, e.g. trying to force a Doom-style run-and-gun arena battle in the final room before SHODAN, when the mechanics of the game and the arena itself are just not built for it. the Diego boss battle sucked balls as well.
  6. all that bullshit aside... I wonder if any sample-spotters can say where the sample in the track Channel (off the Plan B album) comes from? it's the one of a guy saying "I don't care what anyone thinks of me, I know what I am, I know who I am, and I know what I do, and I can assure you, whatever I do, I do it better than you do it". guessing it must be obscure.
  7. whoever at Nightdive designed the puzzles in the System Shock remake should have their vidya-making licence revoked
  8. ^ I saw an article directly relevant to all that just earlier, which I am going to post here in full rather than just linking. this is from The Conversation (source).

--

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

The Israeli army used a new artificial intelligence (AI) system to generate lists of tens of thousands of human targets for potential airstrikes in Gaza, according to a report published last week. The report comes from the nonprofit outlet +972 Magazine, which is run by Israeli and Palestinian journalists.

The report cites interviews with six unnamed sources in Israeli intelligence. The sources claim the system, known as Lavender, was used with other AI systems to target and assassinate suspected militants – many in their own homes – causing large numbers of civilian casualties.

According to another report in the Guardian, based on the same sources as the +972 report, one intelligence officer said the system “made it easier” to carry out large numbers of strikes, because “the machine did it coldly”.

As militaries around the world race to use AI, these reports show us what it may look like: machine-speed warfare with limited accuracy and little human oversight, with a high cost for civilians.

Military AI in Gaza is not new

The Israeli Defence Force denies many of the claims in these reports. In a statement to the Guardian, it said it “does not use an artificial intelligence system that identifies terrorist operatives”. It said Lavender is not an AI system but “simply a database whose purpose is to cross-reference intelligence sources”.

But in 2021, the Jerusalem Post reported an intelligence official saying Israel had just won its first “AI war” – an earlier conflict with Hamas – using a number of machine learning systems to sift through data and produce targets. In the same year, a book called The Human–Machine Team, which outlined a vision of AI-powered warfare, was published under a pseudonym by an author recently revealed to be the head of a key Israeli clandestine intelligence unit.

Last year, another +972 report said Israel also uses an AI system called Habsora to identify potential militant buildings and facilities to bomb. According to the report, Habsora generates targets “almost automatically”, and one former intelligence officer described it as “a mass assassination factory”.

The recent +972 report also claims a third system, called Where’s Daddy?, monitors targets identified by Lavender and alerts the military when they return home, often to their family.

Death by algorithm

Several countries are turning to algorithms in search of a military edge. The US military’s Project Maven supplies AI targeting that has been used in the Middle East and Ukraine. China too is rushing to develop AI systems to analyse data, select targets, and aid in decision-making.

Proponents of military AI argue it will enable faster decision-making, greater accuracy and reduced casualties in warfare. Yet last year, Middle East Eye reported an Israeli intelligence officer said having a human review every AI-generated target in Gaza was “not feasible at all”. Another source told +972 they personally “would invest 20 seconds for each target”, being merely a “rubber stamp” of approval.

The Israeli Defence Force response to the most recent report says “analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law”.

[image: Israel’s bombing campaign has taken a heavy toll on Gaza. Maxar Technologies / AAP]

As for accuracy, the latest +972 report claims Lavender automates the process of identification and cross-checking to ensure a potential target is a senior Hamas military figure. According to the report, Lavender loosened the targeting criteria to include lower-ranking personnel and weaker standards of evidence, and made errors in “approximately 10% of cases”.

The report also claims one Israeli intelligence officer said that due to the Where’s Daddy? system, targets would be bombed in their homes “without hesitation, as a first option”, leading to civilian casualties. The Israeli army says it “outright rejects the claim regarding any policy to kill tens of thousands of people in their homes”.

Rules for military AI?

As military use of AI becomes more common, ethical, moral and legal concerns have largely been an afterthought. There are so far no clear, universally accepted or legally binding rules about military AI.

The United Nations has been discussing “lethal autonomous weapons systems” for more than ten years. These are devices that can make targeting and firing decisions without human input, sometimes known as “killer robots”. Last year saw some progress.

The UN General Assembly voted in favour of a new draft resolution to ensure algorithms “must not be in full control of decisions involving killing”. Last October, the US also released a declaration on the responsible military use of AI and autonomy, which has since been endorsed by 50 other states. The first summit on the responsible use of military AI was held last year, too, co-hosted by the Netherlands and the Republic of Korea.

Overall, international rules over the use of military AI are struggling to keep pace with the fervour of states and arms companies for high-tech, AI-enabled warfare.

Facing the ‘unknown’

Some Israeli startups that make AI-enabled products are reportedly making a selling point of their use in Gaza. Yet reporting on the use of AI systems in Gaza suggests how far short AI falls of the dream of precision warfare, instead creating serious humanitarian harms.

The industrial scale at which AI systems like Lavender can generate targets also effectively “displaces humans by default” in decision-making. The willingness to accept AI suggestions with barely any human scrutiny also widens the scope of potential targets, inflicting greater harm.

Setting a precedent

The reports on Lavender and Habsora show us what current military AI is already capable of doing. Future risks of military AI may increase even further.

Chinese military analyst Chen Hanghui has envisioned a future “battlefield singularity”, for example, in which machines make decisions and take actions at a pace too fast for a human to follow. In this scenario, we are left as little more than spectators or casualties.

A study published earlier this year sounded another warning note. US researchers carried out an experiment in which large language models such as GPT-4 played the role of nations in a wargaming exercise. The models almost inevitably became trapped in arms races and escalated conflict in unpredictable ways, including using nuclear weapons.

The way the world reacts to current uses of military AI – like we are seeing in Gaza – is likely to set a precedent for the future development and use of the technology.
  9. I've been away from this thread for too long.
  10. that narrative tho. and the atmosphere/visuals. never seen Beksiński-inspired hellscapes depicted so vividly in vidya before.
  11. I also caught this recently and was unexpectedly charmed by it. I thought it'd be boring normie Hollywood comedy garbage but it was genuine and made me laugh out loud a few times. big credit to Jeffrey Wright, who I've always liked; had it been someone else it might not have worked as well. similar shoutout to Sterling K. Brown, who I don't think I've really seen in anything else before.
  12. ongoing lols @ all the babybrained kneejerk reactions this has brought out of Israel's celebrity defenders living in the lap of luxury a world away from all the death and destruction perpetrated in their name. on the downside, a true disappointment to find that Rachel Riley, a woman I used to admire for her brains and humour and beauty altogether, is just as dumb as the rest of them.
  13. I finally hooked up my PC to my TV and played through Blasphemous 2 on it w/ Xbox kontrolar. great experience, great game. improves on the first and keeps what was good about it. moved on to Amnesia: Rebirth. I'm a huge Frictional fan but it's quite annoying that their games are essentially locked to 60FPS in that the engine/physics are dependent on it. this wasn't an issue for me before but on the high-end system I have now with a 240Hz main monitor it's been visually jarring and hard to get over.
  14. [AE_Live 2022] lol
  15. The Evil Within. this game should be up my alley but I'm finding it fucking annoying. gameplay enjoyment varies hugely, thanks to it being a random hodgepodge of disconnected setpieces, some of them quite idiotic. story's dumb and old hat, characters are flat and unengaging, the player physicality is handicapped relative to the environment and enemies in a way that reminds me a lot of my frustrations with Alan Wake, etc. it's not a total disappointment but it's not as good as it should be. I'm guessing the sequel (which is also in my library) has made some improvements.
  16. [Dune] but yeah I agree, Walken was hilariously just Walken. he may as well have been in 70s New York instead of far-flung space in 10,999 or whatever year it's meant to be.