
Yes, androids do dream of electric sheep


zlemflolia


the problem is that when installing vagrant it insists on using C:, my SSD, as the temp drive for downloading and unpacking the linux build - more specifically C:\users\xxx\AppData\Local\Vagrant\tmp or something. i'm pretty short on space on my SSD, so the installation fails. i'm pretty sure that i can set an environment variable or change a config file to change where vagrant uses as a temp location, but it's one of those annoying things with OSS where i need to go digging to find out exactly how.

create a tmp folder on a drive you do have space on, then inside of C:\users\xxx\AppData\Local\Vagrant\ create a junction to that folder. you can either use mklink.exe from the command line to do it, or even handier, from inside explorer use this: http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html

 

so essentially create a linux-style symlink? i suppose that's a possibility. i'm pretty sure i can just edit a config file. i was trying to get it running in an hour before leaving home for a few days. and it's a 64gb ssd, so no porn on it, just the OS!
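The environment-variable route mentioned above does exist: Vagrant reads `VAGRANT_HOME` and keeps its data directory, including the `tmp/` area used for box downloads, under it (the default is `~/.vagrant.d`). A minimal sketch, with `D:\vagrant` as a hypothetical path on a roomier drive:

```python
# Relocate Vagrant's data dir (and its tmp/ download area) via VAGRANT_HOME.
# VAGRANT_HOME is Vagrant's documented env var; D:\vagrant is a made-up path.
import os

os.environ["VAGRANT_HOME"] = r"D:\vagrant"

# Any vagrant command launched from this process inherits the setting, e.g.:
# subprocess.run(["vagrant", "up"], env=os.environ)
```

Setting it system-wide (rather than per-process) would avoid the junction trick entirely.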


  • Replies 350
  • Created
  • Last Reply

haven't seen anything besides that one guy i posted earlier in the thread who managed to remove the 'dog face' effect completely. He did it right when the deepdream code was released but hasn't posted any new images since :(

the constantly automated one on Twitch seems to be running off better code or a better database than the one google released (anything you type starts to show up in the feed)


to produce something other than dogs requires one of two things:

 

very carefully chosen starter images that don't resemble dogs in any way deepdream can pick up on

an entirely new neural network, trained on other images. and that will take months.

 

because this was just google fooling about, they trained it on any old random thing they had lying about - like putting the Amen break into a breakchopper just to try it out.
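For the curious, what deepdream "picks up on" is mechanical: the released code runs gradient ascent on the input image so that a chosen layer's activations grow. A toy sketch of that loop, with numpy standing in for the real GoogLeNet network (the quadratic "activation" here is a stand-in, not the actual model):

```python
# Toy version of the deepdream loop: nudge the image so a "layer" fires more.
# numpy stands in for the real network; W is a made-up layer, not GoogLeNet.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))           # stand-in layer weights

def activation(img):
    return 0.5 * np.sum((W @ img) ** 2)     # how strongly the layer responds

def grad(img):
    return W.T @ (W @ img)                  # d(activation)/d(img)

img = rng.standard_normal(64)               # the "starter image"
a0 = activation(img)
for _ in range(50):
    g = grad(img)
    img += 0.1 * g / (np.abs(g).mean() + 1e-8)   # normalized ascent step
a1 = activation(img)                        # strictly larger than a0
```

A starter image whose gradients already point toward dog-like features will drift toward dogs, which is why the choice of starter image matters so much.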


but if that's the case why is that twitch feed able to show 'relief' of any image suggested in the chat? People will say things like 'paintbrush' or 'tarantula' and the entire zooming image starts showing deepdream stylings of those suggestions, and no dog faces. Also take a look at the images i posted earlier in the thread - a guy managed to get a very strong deepdream impression that looked like it was made mostly of bodies of ants (and again no dogs)

http://www.twitch.tv/317070

what you're saying makes some sense though. the 'training' part seems to take the most work and time. It was the same way when i finally sat down at a Kyma and tried out spectral morphing for the first time - to actually 'train' a sound so that it would sound good when morphed was extremely difficult. To this day i still haven't heard any Kyma morphs that sound as good as the ones that were packaged with the demo CD


the chat has very few people in it right now, so suggestions get through fairly easily. still looks more varied than most of the results people have been getting from the google code (also make sure to select 'high' - the default quality setting is medium)


the twitch one is on better code than mine...

 

 

you know that?

i know absolutely zero about code, so i actually don't know shit. I was just curious why the first real-time iteration of it (before the code was released) automatically makes better imagery than the code they released


it's not the code, it's the data. the twitch thing has a limited set of words that will trigger images as well, it's just a good few more than the snakes and birds and dogs and eyes of the other one (IIRC a hundred or so, can't remember exactly).


the twitch thing runs on the exact same image set, it's just able to recognize individual things better. it can recognize 110 words, and two at a time.

 

in other words there's still loads of dogs in there, but if you tell it 'corn + volcano', it will actively look for corn and volcanoes, even though it only has a limited knowledge of them. you could probably say 'malamute + alsatian' and it would find those because it has much better knowledge of dogs.
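That "actively look for corn and volcanoes" behaviour can be sketched as class-guided ascent: instead of boosting a whole layer, only the scores of the requested classes get pushed up. numpy stands in for the real 110-word classifier here, and the class indices are invented for illustration:

```python
# Sketch of "guided" dreaming: ascend only the scores of requested classes.
# numpy stands in for the real classifier; class indices are made up.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((110, 64))        # 110 "word" classes -> scores

def scores(img):
    return W @ img                        # one score per class

def dream_step(img, targets, lr=0.1):
    onehot = np.zeros(110)
    onehot[list(targets)] = 1.0           # push only the chosen classes
    g = W.T @ onehot                      # gradient of the summed target scores
    return img + lr * g

img = rng.standard_normal(64)
corn, volcano = 12, 87                    # hypothetical class indices
s0 = scores(img)[[corn, volcano]].sum()
for _ in range(25):
    img = dream_step(img, [corn, volcano])
s1 = scores(img)[[corn, volcano]].sum()   # grows with every step
```

The same dog-heavy dataset is still underneath; the guidance just steers which of its learned concepts get amplified.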


ah, ok. i thought the 110 words was related to the training set.

it is and it isn't.

 

it's important to realise that the images deepdream creates aren't any of the images from its training set.

the imageset trains it on, basically, 'the concept of a dog' (or a pagoda, or a vehicle).

it learns the relationships between pixels and colours that make a particular image a dog, or a vehicle. nothing from the imageset is recreated outright - just the relationships between pixels.


that twitch feed knows about 1000 or so things according to the description. ImageNet, which it apparently runs on, has around 14 million images.

As far as I know, google images also uses html tags and the text around an image to infer what the image is about, not necessarily this kind of DNN for every image. Training takes a long time, and you need supercomputers that run thousands of passes, using backpropagation to set the parameters so you get the output you want.
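The forward / backpropagate / update cycle described above, shrunk to a single linear layer in numpy. Real ImageNet-scale training runs this same loop over millions of images on GPU clusters; the data here is synthetic:

```python
# Minimal training loop: forward pass, loss, backpropagated gradient, update.
# One linear layer in numpy; the "labels" are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((32, 8))          # 32 toy "images", 8 features each
true_W = rng.standard_normal((8, 3))
Y = X @ true_W                            # synthetic labels to learn

W = np.zeros((8, 3))                      # parameters to train

def loss(W):
    return np.mean((X @ W - Y) ** 2)

l0 = loss(W)
for _ in range(200):
    err = X @ W - Y                       # forward pass + error
    gradW = 2 * X.T @ err / len(X)        # backpropagated gradient
    W -= 0.05 * gradW                     # gradient-descent update
l1 = loss(W)                              # much smaller than l0
```

Scale the layer count, data size, and pass count up by many orders of magnitude and you get the supercomputer-sized training runs mentioned above.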


I hope you don't mind me bumping my progress as i try to get the fidelity better

 

[attached image]

 

 

hmmm... I wonder if you could use this program to render a video game environment. I see potential for some kind of abstract, music-based, point-accumulation game, or just some kind of psychedelic sandbox


that's interesting, it definitely only had 100 when it began, maybe it's learning as it goes?

 

definite singularity shit happening here.


But also, personally, you know this is not meant as a creative tool. It's just for visualizing what the DNN is seeing, to get a better handle on what it's doing. Turns out it looks pretty freaking cool, but as far as I'm concerned it's still way too passive. All you're doing is adding an object into another image based on an algorithm that basically compares 2 images, and then the nuance comes from variation in the values that come out, along with a lot of random values. Deep learning is amazing for automatically categorizing objects in images, but I really think the power of this could be in, say, photoshop: an automatic tool to replace any object in a scene, or to add any object in another's place, and in video too. These kinds of active operations will need to be coded separately, on top of the basic ability to pick things out.


that's interesting, it definitely only had 100 when it began, maybe it's learning as it goes?

 

definite singularity shit happening here.

 

This is just my personal hobbyist opinion but... Nah man, no singularity. Imagenet has existed for a while and had those objects for a while, years even by now. I think it might just be delegation of resources to twitch or something.

 

And also, we are far away from real AI. As far as I can tell, this exploits one part of inspiration from the brain, but we are lacking everything else, including emotions, outputs, consciousness... Most of this is supervised learning, and even unsupervised learning would not be enough by itself: unsupervised learning just means the network can learn new abstractions on its own, whereas right now a lot of human engineers are doing backpropagation and other tweaks to the DNN to get the results they want. Don't get me wrong, it's super powerful and very nice, but AI? Call me a big skeptic for now. But who knows what could happen in the future and what they are working on right now. Usually someone stumbles over a solution to a problem, or something happens with hardware or some code, and that unleashes the unexpected breakthrough.


check this out, it's sorta the opposite thing

 

the top right is photos, the middle is deepdream working out what reality would look like

 

This is the craziness. Now you're talking about seriously being able to mess with someone's mind.
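That "opposite thing" is feature inversion: start from a blank image and optimize it until its network features match a photo's. A sketch under the same numpy stand-in assumptions as before (`feat` plays the role of a conv-layer response, not a real network):

```python
# Feature inversion: recover an image whose features match a photo's.
# numpy stands in for the network; W is a made-up feature extractor.
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((16, 64))         # stand-in feature extractor

def feat(img):
    return W @ img

photo = rng.standard_normal(64)
target = feat(photo)                      # features of the real photo

img = np.zeros(64)                        # blank starting canvas
d0 = np.sum((feat(img) - target) ** 2)
for _ in range(200):
    g = W.T @ (feat(img) - target)        # gradient of the feature mismatch
    img -= 0.005 * g                      # small step toward matching
d1 = np.sum((feat(img) - target) ** 2)    # mismatch shrinks
```

Dreaming ascends an activation; inversion descends a mismatch. Same machinery, opposite direction.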


 


Yeah, I wouldn't be surprised if they stumbled upon parts of it by accident.

 

The next big problem is tackling general-purpose problem solving; a lot has been achieved in specialized domains in the last decade or so. I don't think that will require emotion or consciousness or anything like that, but it will require innovative ways to control and integrate multiple specialized networks (one for object recognition, one for edge detection, one for depth, one for language, several mathematical models, geometrical systems, etc.).

 

Modeling things like desires, emotions, and intentionality (i.e. representational ideas) will also be key to any kind of strong AI, though. But once you've added all that to the mix, maybe you'll just get consciousness for free.


gQTzf3X.jpg

jlMKMDK.jpg

BPfftYC.jpg

^the xfiles "i want to believe" ufo photo

AevOzcq.jpg

^me in yosemite. the highest pagoda at the top is half dome.

as you can see the hyperdimensional machine elves gave me a golden chalice to drink their mercurial info smoothie.

