
AI - The artificial intelligence thread


YO303


3 hours ago, Nebraska said:

 

no doubt they're on their way to take someone's job. 😉

future 10yr old kid to his classmate [sick burn] "sorry the sex robot took your mom's job, billy" 



It’s interesting how the development of military AI progresses. This is the most current example I found. I wonder if there’s something even more sophisticated.


There is this pilot on Reddit posting videos of "UFOs" he recorded. I wonder if these are some of the newer drones. If you don't have a pilot anymore, acceleration is a completely different topic, I guess:

 


2 hours ago, o00o said:


It’s interesting how the development of military AI progresses. This is the most current example I found. I wonder if there’s something even more sophisticated.

Goddamn reruns 


Have you guys heard about this chat AI vs. the number of r's in the word strawberry? Basically the AI firmly believed that there were only two r's in the word. It took the guy like five minutes to convince the AI that there are indeed three r's.

Lots of videos on TikTok about it, but I'm having trouble finding the original.


5 hours ago, YEK said:

Have you guys heard about this chat AI vs. the number of r's in the word strawberry? Basically the AI firmly believed that there were only two r's in the word. It took the guy like five minutes to convince the AI that there are indeed three r's.

Lots of videos on TikTok about it, but I'm having trouble finding the original.

The GPT subreddit is spammed with the topic to the point of annoyance. It's basically that LLMs don't see words as words but as tokens, so they have a hard time counting individual letters:

Quote

Large Language Models (LLMs) exhibit an interesting paradox when it comes to their capabilities. While they can perform complex tasks like generating code, they often struggle with seemingly simple operations such as counting the letters in words. This phenomenon can be attributed to a few key factors:

1. Token-level training: LLMs are typically trained at the token level rather than the character level. This means they process and understand language in chunks (tokens) rather than individual characters. As a result, they may not have a granular understanding of character composition within words. [^1]

2. Lack of explicit counting mechanism: LLMs don't have an inherent counting mechanism. They rely on patterns and associations learned during training to generate responses. Counting, which is a precise mathematical operation, doesn't align well with this probabilistic approach.

3. Abstraction vs. execution: LLMs are designed to understand and generate high-level concepts and patterns. They can describe how to count letters or even write code to do so, but they don't have the ability to execute these operations internally. This creates a disconnect between their ability to conceptualize a task and actually perform it. [^2]

4. Focus on semantic understanding: LLMs are primarily trained to understand and generate meaningful content based on context and semantics. Counting letters is a low-level task that doesn't necessarily contribute to this primary objective.

5. Lack of working memory: Unlike humans who can mentally keep track of counts, LLMs don't have a persistent working memory to store and manipulate such information during processing.

This limitation in counting letters highlights the difference between human cognition and the way LLMs process information. It's a reminder that while LLMs are powerful tools for language understanding and generation, they still have significant limitations when it comes to certain types of precise, quantitative tasks. [^2][^1]

To address this issue, researchers are exploring ways to enhance LLMs' abilities in tasks requiring precise manipulation of characters and numbers. This may involve developing hybrid models that combine the strengths of LLMs with more traditional computational approaches for specific tasks like counting.

[^1]: [Large Language Models Lack Understanding of Character ... - arXiv](https://arxiv.org/html/2405.11357v1#:~:text=However%2C%20large,within%20words.)
[^2]: [The Curious Case of LLMs: LLMs Can Code but Not Count - Medium](https://medium.com/@gcentulani/the-curious-case-of-llms-llms-can-code-but-not-count-14513d9532e1#:~:text=LLMs%20exhibit,tasks%20themselves.)
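The token-vs-character point can be sketched in a few lines of Python. The subword split and token IDs below are made up for illustration; real tokenizers (BPE and friends) learn their own vocabularies, but the effect is the same: the model sees opaque IDs, not letters.

```python
# Toy illustration of why a token-level model misses character counts.
# The vocabulary here is hypothetical, not a real tokenizer's.

def toy_tokenize(word):
    """Greedily split a word into known subword pieces, else single chars."""
    vocab = {"straw": 1001, "berry": 1002}  # pretend these were learned
    tokens = []
    rest = word
    while rest:
        for piece, tid in vocab.items():
            if rest.startswith(piece):
                tokens.append((piece, tid))
                rest = rest[len(piece):]
                break
        else:
            tokens.append((rest[0], ord(rest[0])))  # unknown: fall back to chars
            rest = rest[1:]
    return tokens

tokens = toy_tokenize("strawberry")
print(tokens)  # [('straw', 1001), ('berry', 1002)]
# The model only ever sees the IDs [1001, 1002] -- the three r's are
# invisible at that level. Counting is trivial on the raw string:
print("strawberry".count("r"))  # 3
```

So the model has to infer letter counts from patterns it saw in training text, which is exactly the kind of thing it gets confidently wrong.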

 


22 minutes ago, o00o said:

To address this issue, researchers are exploring ways to enhance LLMs' abilities in tasks requiring precise manipulation of characters and numbers. This may involve developing hybrid models that combine the strengths of LLMs with more traditional computational approaches for specific tasks like counting.

hm. maybe instead of trying to train the LLMs (large LANGUAGE models) to be counting computers the researchers should just build them little helper computers, separately. that's why we designed computers in the first place, really.

then the LLMs with their little helper computers could learn to code directly, and create their own little LLMs within those computers.

(i'm trying to point out how insane it is that our COMPUTERS have unlearned how to do COMPUTING)
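A toy sketch of that "little helper computer" idea: route the counting question to ordinary code instead of letting the model guess. The pattern match and tool registry below are made up for illustration; real systems do the routing step with structured function-calling APIs rather than a regex.

```python
# Hypothetical sketch: dispatch letter-counting to real code,
# leaving only the fallback answer to the language model.
import re

def count_letter(word, letter):
    # Ordinary computing: the thing computers never actually unlearned.
    return word.count(letter)

TOOLS = {"count_letter": count_letter}  # illustrative tool registry

def answer(question):
    # Crude pattern match standing in for a real function-calling API.
    m = re.match(r"how many (\w)'?s? in (\w+)", question, re.IGNORECASE)
    if m:
        letter, word = m.group(1).lower(), m.group(2)
        n = TOOLS["count_letter"](word, letter)
        return f"There are {n} {letter}'s in {word}."
    return "I'd have to guess."  # where an LLM would normally answer

print(answer("how many r's in strawberry"))  # There are 3 r's in strawberry.
```

The point of the design is that the model never needs to count; it only needs to recognize that a counting question should be handed off.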


On 8/29/2024 at 11:02 PM, Rubin Farr said:

DOOM running on a neural network with no game engine 

https://gamengen.github.io

A neural network of my brain would produce this instead

 

 

56 minutes ago, Rubin Farr said:

 

Is anyone actually asking for this kind of shit or will these dumbass techbro companies keep sucking up venture capital to generate useless crap?


23 minutes ago, EdamAnchorman said:

will these dumbass techbro companies keep sucking up venture capital to generate useless crap?

yes


41 minutes ago, EdamAnchorman said:

A neural network of my brain would produce this instead

 

 

Is anyone actually asking for this kind of shit or will these dumbass techbro companies keep sucking up venture capital to generate useless crap?

Have we implemented the 3 laws of robotics yet? Because these fuckers look strong.


1 hour ago, Rubin Farr said:

Have we implemented the 3 laws of robotics yet? Because these fuckers look strong.

let's hope not. anyone stupid enough to buy one deserves to have their tiny brains smashed on their fancy couch. a noble sacrifice for the greater good to learn from.


Have never seen this YouTuber before. No idea what his channel is about, but this little behind-the-scenes thing of a tech bro billionaire chatting about AI letting everyone make a TikTok (or any social media platform) of their own, using prompts to steal code/music/users etc., is pretty fucking weird.

 


Good news for all the conspiracy theorists at WAMM: ChatGPT can save you!

🤣 :flower:

Quote

Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.

https://www.science.org/doi/10.1126/science.adq1814

