After 3 hours of Googling, I have to ask you guys. I'm looking for an app or command-line tool that can increase resolution using AI. Something like Let's Enhance, but free. I know about Alex J. C.'s neural-enhance, but my PC can't run Docker, and without Docker the installation is super complex. Also, I don't have an Nvidia graphics card that supports CUDA.
Is there a reason why there is no widespread app like this? Is it that computationally demanding? And is there any alternative solution?
That's exactly what I'm looking for but for common photos.
IIRC it can do both regular photos and anime; the actual site (which seems to be down for me atm) had an option for specifying the type.
I was searching for more about waifu2x and I finally found a site that lets you use that "AI engine" (or whatever it is...). It's http://waifu2x.me/
I'll upload the resulting image. Thank you.
Here is another site for the waifu2x algorithm: http://waifu2x.udp.jp/
A short list, ordered by output quality and setup time:
 1. SRGAN, super-resolution generative adversarial network (say that quickly 10 times 😅): I know you've said you want an alternative, but https://letsenhance.io/ is perfect for photos. Unfortunately it's not free – but at least you can try it out on 5 images. If you want an easy and comfortable way to solve your problem, $6.99 for 999 images is not that much. IMO SRGAN produces the most detailed results for 2-4x upscaling, sometimes even for 6-8x. If you want unlimited tries, you need to set it up yourself: https://github.com/topics/srgan – there are dozens of projects using TensorFlow, PyTorch and Torch. Other implementations: https://github.com/tensorlayer/srgan https://github.com/brade31919/SRGAN-tensorflow https://github.com/titu1994/Super-Resolution-using-Generative-Adversarial-Networks
 2. Neural Enhance: https://github.com/alexjc/neural-enhance/ Just had to mention it, because the results are awesome. I don't think there is an app or online tool for it, sorry. You don't need CUDA for it! It also runs pretty quickly on the CPU; you can download the finished training models and set it up in 20-30 min WITHOUT Docker (if you have some experience with GitHub and Python). If you need help with the installation, just look at the Issues section – there are people who can help you if you get stuck. Setting it up really isn't that hard, you just need some patience.
 3. Photoshop: The newest PS version (19.x, released October 2017) has a new upscaling method called "Preserve Details 2.0 Upscale" – but compared to SRGAN, the results clearly lack sharp, fine details. Still, you asked for an app, and PS is easy to use and can be automated.
Take your time to learn about downloading and setting up GitHub projects – it's worth it! Just look at the results of the following projects and maybe you'll get motivated to dig a little deeper into the super-res topic.
Overview of the most popular algorithms:
(VDSR, EDSR, DRCN, SubPixelCNN, SRCNN, FSRCNN, SRGAN)
Not in the list above:
Ok, I need to save this post and go through it all tomorrow haha. I know Let's Enhance. Yeah, it's perfect and the easiest way to solve my problem. I just don't want to upload all my family photos (500+) to their servers. I'd like to do it all offline.
I tried Alex J. C.'s Neural Enhance, but I can't run Docker, and without Docker it's too hard for me.
Photoshop seems to be a good solution. It's easy and fast, and I can pay for one month and then cancel the subscription. I'll have to see how good it is and whether it's worth it.
I've already used GitHub a couple of times. I'd like to dig deeper into the topic, but you need to know Python, and I don't know if it's worth learning Python just for this one task.
Thank you very much, I'll look at these GitHub projects and those algorithms and the differences between them.
To think that we are nearing the times when I won't need to ram my head into a wall out of sheer rage whenever someone says "enhance" on TV.
Haha, unless it's police using that function to gather evidence against someone
"Dr. Housman, what is your specialty?"
"I work in machine learning."
"That sounds sophisticated! What do your machines learn?"
"They enhance low-resolution photographs to present a high-resolution version of the image."
"So they are recovering data where it was lost?"
"Well, not exactly recovering it. They're filling in a statistical pattern of what probably was there."
"How probably are we talking here?"
"We have trained the data set on hundreds of thousands of images of human faces, so when the computer sees something like a face, it can usually guess what the face looked like."
"So it's just changing the original image based on a guess? That doesn't sound like science."
"It's interpolating--"
"Is that a change to the data?"
"--yes, interpolating based on high-confidence estimates."
"So let me make sure I understand you. The state acquired video images from the robbery, and then deliberately altered them using your Very Good Guess software, and now they look like the defendant?"
"Altered is not really the wor--"
"No further questions, Your Honor."
There was a Law & Order episode about something like this. Can't find the episode though.
Yes! It had Robin Williams as the suspect, representing himself. It was brilliant.
Amazing that you found it! I searched for a while and couldn't remember the exact circumstances of the episode. It's been a while since I saw it.
Thanks for finding it! Going to give it a watch.
But hey, crappy porn could be updated to not be so crappy and no one would be worse off for it.
And all those Hollywood movies from roughly 2000-2015 that were finished digitally at <=2K.
Sure you could go back to the 35mm negatives, scan them in >=4K and do the entire post production anew. But that's so much work that it won't happen much.
A great CSI plot would be them using this "enhance", but CSI unwittingly uses a Docker instance where the neural net was trained almost exclusively on one innocent person. The person who trained that neural net is the real killer, whom they find through his GitHub commits.
And the net the killer used was a MAN: Murderous Adversarial Network.
Here is the result of resizing a photo with http://waifu2x.me:
Look especially at that eye. It's incredible.
Downvote away, but I don't find it that great. The original is quite noisy and the model upscales that noise. Also, in general the image has little detail, and the result looks like a median filter applied to a 2x version. There are definitely more impressive results out there. Edit: since it's trained on anime only, that's expected behavior – you wouldn't find it inpainting skin detail, a feature that's absent from anime drawings (often, not always).
Yeah, that's what I was thinking. It looks like the original was just slightly blurred.
This is called super-resolution, and there are a number of projects on GitHub that do it. For instance, PyTorch comes with a super-resolution example as part of its documentation. But I don't think they include a trained model – you'll have to train it yourself, which I guess is hard on your computer.
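If you're curious what that kind of model looks like, here's a minimal ESPCN-style sub-pixel network in the same spirit as the PyTorch example – my own simplification, not the actual example code, and the layer widths are assumptions:

```python
import torch
import torch.nn as nn

# Minimal ESPCN-style super-resolution net: a few convolutions in
# low-resolution space, then PixelShuffle rearranges channels into
# extra spatial resolution. Layer widths here are illustrative.
class SubPixelNet(nn.Module):
    def __init__(self, upscale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # predict upscale**2 values per low-res pixel...
            nn.Conv2d(32, upscale ** 2, kernel_size=3, padding=1),
            # ...and fold them into an (H*upscale, W*upscale) image
            nn.PixelShuffle(upscale),
        )

    def forward(self, x):
        return self.body(x)

net = SubPixelNet(upscale=2)
lowres = torch.randn(1, 1, 64, 64)   # one grayscale 64x64 image
with torch.no_grad():
    highres = net(lowres)
print(highres.shape)                 # torch.Size([1, 1, 128, 128])
```

An untrained net like this just emits noise, of course – you'd have to train it on luminance patches of downscaled/original image pairs before it does anything useful, which is exactly the part that's heavy without a GPU.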
Maybe you can try Dmitry Ulyanov's Deep Image Prior. That doesn't require training a model in advance, although it's a little expensive to run in itself. If you can run IPython notebooks and understand what's going on, it should be easy to use super-resolution.ipynb on arbitrary images... but you'll have to modify it a little so it doesn't use CUDA. A bit short of an easy command-line tool.
I see there are a lot of apps for Android and iOS which offer superresolution, but I can't vouch for their effectiveness.
Deep Image Prior looks good until it comes to the modifying part. I can write a calculator in Python, but that's all. It looks like I'll have to learn to program in Python to play with AI. But thank you for your help.
You may or may not need all the power of a GPU-trained deep learning model. Sparse coding super resolution has been successful as well.
The 2018 Photoshop CC will increase resolution using AI: go to Image Size and make sure you select Preserve Details 2.0.
Wow, that's awesome! I don't want to pay for photoshop just because of one feature (I use Affinity), but finally something I don't need to know a programming language to work with haha.
Try this: https://github.com/titu1994/Image-Super-Resolution
It is based on Keras, which you can run with TensorFlow in CPU mode. I haven't used this program, but I have used Alex's neural-enhance, and it took barely a few minutes for a 4x zoom of a 509x335 image on an i7-4790K.
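One note on CPU mode: the plain `pip install tensorflow` package is the CPU build anyway (the GPU build ships separately as `tensorflow-gpu`), and even with a GPU build you can hide the GPU entirely. This is a general TensorFlow/CUDA trick, not something from that repo's README:

```python
import os

# Hiding all CUDA devices forces TensorFlow (and Keras on top of it)
# onto the CPU. This must be set BEFORE `import tensorflow` runs.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```

After that, Keras code runs exactly the same, just slower.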
Ok, I'm installing tensorflow. I'll try it. Thanks
Hi! I'm the author of that project. If you are on Windows, there is a folder called Windows Helper, which has a barebones compiled application that provides a GUI to manipulate the main.py script.
If you are looking for speed over quality, you should use the default "distilled_rnsr" model – a distilled version of the ResNet Super Resolution model.
If you want quality over speed, you should use the "rnsr" model. Though I have to warn you, it exhausts a lot of memory to use even for medium sized images.
If you're using TF, the setup is relatively straightforward.
Yeah, I see it. I'm on a Mac now; when I get home I'll try it on Windows. So I just need to install TensorFlow on Windows and then run that "helper"?
Oh, I think I forgot to mention this, but since your goal is to upscale a large number of pictures, you can use the helper and select multiple images at once – it will auto-upscale each one and save it in the same directory as the input files. You can then just filter the images by "(2x)" and it should select only the upscaled ones.
Turns out I created the helper script for a very similar purpose.
Yes, inside the helper folder there is an executable file called Image Super Resolution.exe. I use it when I'm prototyping quickly, but it's useful nevertheless.
Also, you will need to set up OpenCV as one of the requirements, since it does some post-processing to remove artifacts. On Windows, that might be a huge pain.
You're the man for setting that up. I wish more of these projects were easy to get running like that. Well done!
Some of these have been implemented to run in the browser, check it out: https://transcranial.github.io/keras-js
It'd be great to compare results.
Nice point, but it doesn't work. It says: "This browser doesn't support WebGL 2.0". I tried Chrome, Firefox and Opera; none of them worked.
If I'm able to run it, then yes, I'll post results.
I was interested in dealing with both noise and edge retention, so I tried this on a small 128x128 icon and upsized it to 512x512 with three different techniques: PS Bicubic (Automatic), OnOne (Genuine Fractals), and Waifu2x (twice). Looks pretty nice, for art anyway. Cool.
I’m really impressed with the Waifu2x results. That name though...
I know. I had to choke that down a little to bookmark it. I'd love to be able to run it locally, but I tried to learn Python once and just couldn't get it.
Why can't your PC run Docker? I mean that would be the simplest solution... I'm happy to help, just give some more details.
I know... It says that my CPU is not supported.
Yes, I did. But it's more an experiment than a useful app – you cannot save the image it generates.
I know about that. The problem is that you have to pay for it. That's not a big deal since it's relatively cheap, but I have 500+ family photos that I'd like to restore with the help of AI, and I don't want to send them anywhere. I know, I know... it's secure, they don't collect them... I'm just searching for another solution before I resort to their service.
Seconding this. Quality matches waifu2x but also trained on photo data (not sure if they’re using one of the GAN super res models?). Main downsides (last I checked) were wait times and account limits.
Yeah, I know enhance. I'm thinking about it.
Google uses it for image compression.
https://research.googleblog.com/2016/11/enhance-raisr-sharp-images-with-machine.html?m=1
I know about this, but it doesn't seem to be powered by AI. What I mean is that it can't add missing information to an image based on what it has learned. I think what it does is just make transitions and lines smoother – something like a blur without blurring the image.
It is, indeed, powered by AI. Check again, "adding missing information" is only one tool in the deep image prior toolkit. Enhancement is another one
Ok, thanks for the explanation. I was looking at this project and its website, but there is no tutorial on how to use it. I don't know what to do after installing those libraries. And if I got it right, it's not pre-trained, so I would have to train it myself, right?
madVR uses NNEDI3, which was trained as a video deinterlacer. You can upscale by 2x at a time.
Isn't this an example in pytorch? The super resolution model?
Wow, that looks so awesome! Is learning Python enough to work with TensorFlow, PyTorch and stuff like that?
If you want to get actually into machine learning, absolutely.
Tons. Here's the one that comes up as the first Google hit when you copy-paste your question.
It's quite good.
Yeah, I know this one, but it's too hard for me to install since I can't use Docker.