In this post I will be exploring a more guerrilla approach to 3D scanning – single-camera photogrammetry. I have recently come to learn how useful this technique is; it is cost-effective and could be a life-saver if you work professionally in the world of 3D scanning – read on and I will explain…
My inspiration for trying out this technique came from a recent project where the scanning department were to 3D scan the heads of multiple high-profile clients. It was my job to turn the scanned data into digital models so that they could then be printed as casts for making them into statues. On the flight out the airline lost some important kit, so all the head models had to be captured using a ‘single camera workflow’. For anyone who is not familiar, the ideal way to capture a non-static subject (i.e. a human) is to use a synchronised camera array, where multiple cameras all fire at the same time. Until then I had never heard of anyone using only a single camera on a commercial shoot, as there are many factors that can cause you to fail. All complications aside, they managed to make it work. Processing the models was considerably more difficult than what I am used to, as they required more careful treatment to preserve all the facial details.
Hopefully I will be able to share this project on my blog at a later date, but in the meantime I will show you my own demonstration of this technique.
For my own set-up, the studio was located in our garden (image above), and my girlfriend Luizee kindly offered to stand in as the model. I shot the image set on my Canon 5D mk IV with a 35mm Canon L Series lens. Usually I would aim to use a 50mm lens; however, I was shooting at a relatively wide aperture (f/5.6), and at the same aperture a wider lens gives a larger depth of field, which helped keep Luizee’s face in focus.
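To put some rough numbers on that lens choice, here is a small sketch using the standard thin-lens hyperfocal formulas. The subject distance (~1.5 m) and circle of confusion (0.03 mm for full frame) are illustrative assumptions, not values from the shoot:

```python
# Rough depth-of-field comparison: 35mm vs 50mm, both at f/5.6 on full frame.
# Subject distance and circle of confusion are assumed, for illustration only.

def depth_of_field(focal_mm: float, f_number: float,
                   subject_mm: float, coc_mm: float = 0.03) -> float:
    """Total depth of field in mm, from the thin-lens hyperfocal formulas."""
    h = focal_mm**2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# Assumed head-and-shoulders working distance of ~1.5 m:
dof_35 = depth_of_field(35, 5.6, 1500)   # roughly 0.6 m
dof_50 = depth_of_field(50, 5.6, 1500)   # roughly 0.3 m
```

Under these assumptions the 35mm lens gives roughly twice the in-focus zone of the 50mm, which is why the wider lens is the safer choice when the aperture cannot be stopped down further.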
During the shoot we did two sets of images; the first in direct sunlight, which allowed for a faster shutter speed and a narrower aperture, but meant we were battling hard shadows across the face – not good for texturing purposes. The second set was shot in the shade, which removed the harsh shadows but meant I had to compensate for the lower light levels by bumping the ISO up to 500. I chose to process the second set as the lighting was more even across the face, and even lighting makes for cleaner results when processing images into 3D models.
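The sunlight-versus-shade trade-off is just exposure-value arithmetic, and can be sketched as follows. The EV figures used here (~15 for direct sun, ~12 for open shade) are typical textbook values, assumed for illustration rather than metered on the day:

```python
import math

def ev100(f_number: float, shutter_s: float) -> float:
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number**2 / shutter_s)

def iso_to_compensate(base_iso: float, stops_lost: float) -> float:
    """ISO needed to recover a given number of stops of light."""
    return base_iso * 2**stops_lost

# Assumed scene brightness: direct sun ~EV 15, open shade ~EV 12,
# i.e. moving into the shade costs about 3 stops of light.
stops_lost = 15 - 12
print(iso_to_compensate(100, stops_lost))  # ISO 800 recovers all 3 stops
```

By this rough maths, ISO 500 recovers only about 2.3 of the 3 lost stops, so the remainder would have to come from a slower shutter speed or a wider aperture – which is consistent with shooting the shade set at f/5.6.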
See below for a time-lapse of the shoot.
Above you can see the textured raw mesh in Agisoft Photoscan, which turned out much better than expected. The next step was to optimise the scanned data and clean up any noise in the geometry. Using a well-rehearsed cleanup workflow I have in ZBrush, I was able to extract all the skin-pore detail whilst preserving the general shape of Luizee’s face.
See below for the final turntable.
After completing the whole process, I decided to capture another head scan to make sure the first attempt wasn’t just a fluke. This time around, Luizee’s sister Kim offered to stand in as the model.
Halfway through the shoot I think I said something that made Kim laugh, which meant we had to stop the scan at 99 images. Instead of doing the process over again, I decided to run the set anyway and see if the results would be comparable to Luizee’s scan with only half the number of images. It turned out that the results were actually sharper, and unlike last time, none of the cameras misaligned from subtle head movements.