
AI - The artificial intelligence thread


YO303


Quote

This year was crazy in many ways. There has been so much going on that it's no wonder AI lost some of the headlines. But the implosion of several flagship projects certainly gives a strong sense of deflation. I think the real AI winter will come once everyone finally realizes that self-driving cars are not coming anytime soon. Once that fact of life finally hits the portfolios of those heavily invested in the current AI bubble, the money stream will freeze for many years. And while the winter is needed to cool down some of the craziness, it will generally be harmful to legitimate researchers distancing themselves from the hype, and will probably slow progress towards real AI. But at this point, with all the bold promises already made, it seems inevitable.

AI Update, Late 2020 - dumpster fire (Piekniewski's blog: on limits of deep learning and where to go next with AI.)


  • 5 weeks later...
  • 2 weeks later...

The problem is the black keys lack depth in the animation, and the resulting actions correspond to that lack of depth. 
still, relatively impressive when you read that the one for “Soul” took less than 3 seconds to create. 


16 hours ago, chenGOD said:

The problem is the black keys lack depth in the animation, and the resulting actions correspond to that lack of depth. 
still, relatively impressive when you read that the one for “Soul” took less than 3 seconds to create. 

i have no idea what you mean but really, that looks like shit. really crappy. they got the fingering right, big fucking deal. apart from that, it's Polar Express level of realism. and i'm talking about the animation, not the graphics. poo poo


the motion is way too smooth and floaty. no force, twitchy stuff or human expression, but congrats to AI on hitting all the right keys. looks like flesh crabs.

Still think it's pretty impressive!

it was only last year that this was considered peak realism, and that wasn't even AI generated.


Edited by Silent Member

23 minutes ago, iococoi said:

the future looks bright


we are the ai, it's each of us. it's pictures of us, it's things we did and wrote.  it's our blinks training the neural networks that made that

if only it was being used for our benefit

Edited by cyanobacteria

  • 3 weeks later...
  • 3 weeks later...

Does anyone remember IBM Deep Blue vs. Kasparov? The stage is set for AI to beat humans at every task in the coming decades. You may not realise it, but our consciousness is, in some sense, hybridizing with machines. This sounds crazy now, but the impact lies in the future.

Edited by Diurn

  • 3 weeks later...
On 4/5/2021 at 12:35 PM, Diurn said:

Does anyone remember IBM Deep Blue vs. Kasparov? The stage is set for AI to beat humans at every task in the coming decades. You may not realise it, but our consciousness is, in some sense, hybridizing with machines. This sounds crazy now, but the impact lies in the future.

I really enjoyed that documentary, Thanks Diurn

here's some a.i. acid by user48736353001 that has a short circuit and saw'd 1.5 times

 


https://www.nvidia.com/en-us/omniverse/apps/audio2face/

Quote

Omniverse Audio2Face App is based on an original NVIDIA Research paper. Audio2Face is preloaded with “Digital Mark”— a 3D character model that can be animated with your audio track, so getting started is simple. Just select your audio and upload it into the app.  The technology feeds the audio input into a pre-trained Deep Neural Network and the output of the network drives the facial animation of your character in real-time.  Users have the option to edit various post-processing parameters to edit the performance of the character. The output of the network then drives the 3D vertices of your character mesh to create the facial animation. The results you see on this page are mostly raw outputs from Audio2Face with little to no post-processing parameters edited.

 

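The quoted description boils down to a simple pipeline: window the audio track, feed each window to a pre-trained network, and use the network's output to displace the vertices of the face mesh, frame by frame. Here is a toy sketch of that flow; every function here is a hypothetical stand-in (the feature extractor and "network" are trivially simple fakes), not NVIDIA's actual API:

```python
# Illustrative sketch of an audio-driven facial animation pipeline in the
# spirit of Audio2Face. All names and the tiny "network" below are made-up
# stand-ins; the real app uses a pre-trained deep neural network.

def extract_audio_features(samples, window=4):
    """Chop raw audio samples into fixed-size windows (toy feature stand-in)."""
    return [samples[i:i + window] for i in range(0, len(samples) - window + 1, window)]

def pretrained_network(features, n_vertices=3):
    """Stand-in for the pre-trained DNN: maps one feature window to
    per-vertex (x, y, z) displacements of the face mesh."""
    energy = sum(abs(s) for s in features) / len(features)
    # Louder audio -> jaw vertices pushed further down; purely illustrative.
    return [(0.0, -energy * 0.1 * (v + 1), 0.0) for v in range(n_vertices)]

def animate(samples, base_mesh):
    """Drive mesh vertices frame by frame from the audio track."""
    frames = []
    for win in extract_audio_features(samples):
        offsets = pretrained_network(win, n_vertices=len(base_mesh))
        frames.append([tuple(b + o for b, o in zip(vert, off))
                       for vert, off in zip(base_mesh, offsets)])
    return frames

# A three-vertex "jaw" mesh and eight audio samples -> two animation frames.
jaw_mesh = [(0.0, 0.0, 0.0), (0.1, -0.2, 0.0), (-0.1, -0.2, 0.0)]
frames = animate([0.0, 0.5, -0.5, 0.25, 0.1, 0.1, 0.0, 0.0], jaw_mesh)
print(len(frames))  # one mesh pose per audio window
```

The post-processing parameters the quote mentions would slot in between the network output and the mesh update, rescaling or smoothing the offsets before they are applied.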

