A few days ago I noticed a pretty cool photo on Facebook: a girl surrounded by trees, a surreal image. When you hovered over it with your mouse, or tilted your phone, you could actually see the trees behind her. The depth was so convincing it felt like you could reach out and touch it. In that very second I fell in love with 3D photography. You may know the format as Facebook 3D photos, but this goes far beyond anything you've seen there. Thanks, Richard Wakefield.
I started studying the effect and asking around: how was this even possible? I quickly learned that at the core of the effect is something called a depth map, a grayscale image (not quite 50 shades of grey) that encodes how far each part of the picture is from the viewer: whatever is further away is rendered in dark greys and black, whatever is closer in light greys and white.
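The idea is easy to see in code: take per-pixel distances and map them onto that grayscale convention, near pixels light and far pixels dark. Here's a minimal sketch using NumPy and Pillow; the distance values are made up purely for illustration:

```python
import numpy as np
from PIL import Image

# Hypothetical per-pixel distances in metres (a tiny 2x3 example);
# a real depth map would have one value per pixel of the photo.
depth = np.array([[1.0, 2.0, 8.0],
                  [1.5, 4.0, 8.0]])

# Invert and normalise: nearest point -> 255 (white), farthest -> 0 (black).
near, far = depth.min(), depth.max()
gray = ((far - depth) / (far - near) * 255).astype(np.uint8)

# Save as an 8-bit grayscale image, ready to use as a depth map.
Image.fromarray(gray, mode="L").save("photo_depth.png")
```

Hand-painting a depth map in Photoshop amounts to producing exactly this kind of image by eye, one brush stroke at a time.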
So I said to myself: this should be pretty easy. As a professional photographer who uses Photoshop on a daily basis, I'm quite familiar with the best practices, so all my work is neatly layered. I took a shot at creating a manual depth map of the objects in my pictures.
This was my first attempt:
And the depth map of the above looks like this:
I liked how the first attempt worked out, but I wanted more, so I took another shot at it:
And the depth map:
All fine and dandy, but I wasn't quite pleased with the result, and I knew that if I wanted to create a mind-blowing piece, I had to go much further than just editing layers and depth maps.
And further I went:
My first manual depth map was, of course, created in Photoshop (here's a super awesome tutorial: https://www.youtube.com/watch?v=DInWVvfPQm8).
But when I uploaded it to Facebook, the result was disastrous, to say the least. I tried tweaking it, to no avail; it was artistic garbage.
Then I got to thinking: there surely must be a better way to do this. I thought of the heat maps I'd seen in movies and the depth maps from 3D modelling apps, but apart from DOF PRO (a Photoshop plugin for Windows) I couldn't find anything even remotely relevant.
Jumping from video to video, I found this: Unsupervised Monocular Depth Estimation with Left-Right Consistency (https://www.youtube.com/watch?v=go3H2gU-Zck).
It was exactly what I was looking for, but it was just a video. Oh, cool, there's a link in the description. Bingo: it's Python. These guys are quite cool.
You see, I'm not only a photographer. For the better part of the past 20 years I've also coded the heck out of things. So naturally I set the code up on one of my servers, took one of my pics and created a depth map.
I tweaked Pongo's nose and Cruella's head, and the result, as you can probably see, is absolutely stunning. I didn't even have to convert the depth map to grayscale, as Facebook recognises this format directly.
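For anyone trying this themselves: Facebook pairs the photo and its depth map by filename, looking for a companion file with a `_depth` suffix (e.g. `photo.jpg` plus `photo_depth.jpg`, uploaded in the same post). A small helper along these lines can prepare such a pair; the resize step is my own assumption that the two images should match in dimensions:

```python
from pathlib import Path
from PIL import Image

def make_facebook_pair(photo_path: str, depth_path: str) -> str:
    """Convert the depth map to grayscale, resize it to the photo's
    dimensions, and save it next to the photo with the `_depth`
    suffix that Facebook's 3D photo feature looks for."""
    photo = Image.open(photo_path)
    depth = Image.open(depth_path).convert("L").resize(photo.size)
    out = Path(photo_path).with_name(Path(photo_path).stem + "_depth.jpg")
    depth.save(out)
    return str(out)
```

With the pair saved side by side, uploading both files together is enough for Facebook to build the 3D photo.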
And based on that, I’ve created a tool to help you all achieve the same results: Meet 3Dphoto.io.
3Dphoto.io is an experiment in artificial intelligence used for artistic purposes. We here at Gloobus are constantly pushing the boundaries of technology, not only for business purposes but also to give back and make the world a better place through art and experimentation.
As a matter of fact, an upcoming iteration of our flagship product, the GSB, is going to rely heavily on AI, and what better way to test our skills than to combine them with our passions?
3Dphoto.io is live today and free to use. Keep in mind that the process is entirely automated and unsupervised (the purest form of AI), but if you want better results than the automated ones, or even just to tweak them, I can personally help you: just drop me a line on Instagram or Facebook.
In the meantime, if you’re interested in our work in AI, travel and hospitality, drop us a line here.