Adam Spring | Official Website

In this post I will be exploring a more guerrilla approach to 3D scanning – single camera photogrammetry. I have recently come to learn how useful this technique is; it is cost-effective and could be a lifesaver if you work professionally in the world of 3D scanning – read on and I will explain…


My inspiration for trying out this technique came from a recent project I worked on, where the scanning department were to 3D scan the heads of multiple high profile clients. It was my job to turn the scanned data into digital models so that they could then be printed as casts for making them into statues. When we flew out, the airline lost some important kit, so all the head models had to be captured using a ‘single camera workflow’. For anyone who is not familiar, the ideal way to capture a non-static subject (i.e. a human) is to use a synchronised camera array, where multiple cameras all fire at the same time. Until then I had never heard of anyone using only a single camera on a commercial shoot, as there are many factors that may cause you to fail. All complications aside, they managed to make it work. Processing the models was considerably more difficult than I am used to, as they required more careful treatment to preserve all the facial details.

Hopefully I will be able to share this project on my blog at a later date, but in the meantime I will show you my own demonstration of this technique.

 

For my own set-up, the studio was located in our garden (image above), and my girlfriend Luizee kindly offered to stand in as the model. I shot the image set on my Canon 5D mk IV with a 35mm Canon L Series lens. Usually I would aim to use a 50mm lens, however I was shooting at a relatively wide aperture (f/5.6), and a wider lens gives a larger depth-of-field at the same aperture, helping keep Luizee’s face in focus.
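To put rough numbers on the 35mm-versus-50mm trade-off, here is a minimal thin-lens depth-of-field sketch. The 1.5 m subject distance and the 0.030 mm circle of confusion (a common full-frame value) are my assumptions for illustration, not figures from the shoot:

```python
# Depth-of-field sketch: why a 35mm lens keeps more of the face in
# focus than a 50mm at the same aperture (thin-lens approximation).
# Assumed: 1.5 m subject distance, 0.030 mm circle of confusion.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.030):
    """Return total depth of field in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

dof_35 = depth_of_field(35, 5.6, 1500)  # roughly 0.6 m in focus
dof_50 = depth_of_field(50, 5.6, 1500)  # roughly 0.3 m in focus
print(f"35mm: {dof_35 / 1000:.2f} m, 50mm: {dof_50 / 1000:.2f} m")
```

Under these assumptions the 35mm lens roughly doubles the sharp zone around the subject, which is the margin you want when a head can drift during a long single-camera pass.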


During the shoot we did two sets of images; the first in direct sunlight, which allowed for a faster shutter speed and a narrower aperture, but meant we were battling with hard shadows across the face – not good for texturing purposes. The second image set was shot in the shade, which removed the harsh shadows but meant I had to compensate for the lower light levels by bumping the ISO up to 500. I chose to process the second set of images as the lighting was more even across the face. Even lighting makes for cleaner results when processing images into 3D models.
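The ISO bump can be reasoned about in stops: each doubling of ISO buys back one stop of light lost in the shade. A quick sketch, assuming the sunlit set was at a base ISO of 100 (my assumption, not stated in the post):

```python
import math

# Exposure arithmetic for the move from sun to shade: stops of
# sensitivity gained by raising ISO. Base ISO 100 is an assumption.

def iso_stops(iso_from, iso_to):
    """Stops of exposure gained by raising ISO from iso_from to iso_to."""
    return math.log2(iso_to / iso_from)

print(f"ISO 100 -> 500 gains {iso_stops(100, 500):.2f} stops")
```

So ISO 500 recovers a little over two stops, at the cost of some extra sensor noise in the textures.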

See below a time-lapse of the shoot.

After watching back the time-lapse I noticed that Luizee’s head drifts forwards slightly halfway through the shoot. Somehow Agisoft PhotoScan (the software used to process the images) was able to build a model using all 170 images despite this subtle head movement. Building the texture, however, was more complicated because of it. Using the time-lapse I was able to roughly identify the point at which she moved her head, so when texturing the model I could dismiss all images taken after that point. I never intended to use the time-lapse as a shooting reference, but seeing as it came in so useful I’ll make a point of shooting one every time in future…

Above you can see the textured raw mesh in Agisoft PhotoScan, which turned out much better than expected. The next step was to optimise the scanned data and clean up any noise within the geometry. Using a well-rehearsed cleanup workflow I have in ZBrush, I was able to extract all the skin-pore details whilst preserving the general shape of Luizee’s face.

See below the final turntable.

After having completed the whole process I decided to capture another head scan to make sure the first attempt wasn’t just a fluke. This time around Luizee’s sister Kim offered to stand in as the model.

Halfway through the shoot I think I said something that made Kim laugh, which meant we had to stop the scan at 99 images. Instead of doing the process over again I decided to run the test and see if the results would be comparable to Luizee’s scan with only half the number of images. It turned out that the results were actually sharper, and unlike last time, none of the cameras misaligned from any subtle head movements.


8 thoughts on “Single Camera Head Scanning | Photogrammetry”

  • Hi Adam, your article is very interesting. Could you share the process you used in PhotoScan to make such an amazing 3D reproduction? I’ve been trying multiple settings but the skin often results in some non-flat surfaces. Thanks, Alberto.

  • This is pretty cool, man… I am just getting into Photogrammetry and my next project will involve getting my own head scanned for the creation of a 3d printed helmet. I just have to find a friend with steady hands.


  • Impressive results Adam. The quality achieved is fantastic for a single camera shoot. I would have expected a greater variance in light levels but the difference looks fairly negligible.

    Thanks so much for writing this up- especially enjoyed your use of the timelapse to find where Luizee moved.

    1. Thanks Craig! Really glad you found it interesting. I am currently working on another single camera scanning post for a more in-depth character pipeline. If you’re interested, check back here in a month and with a bit of luck I will have published it by then.

  • Very impressive result, given such “low-budget” hardware.
    Any gotchas for cleaning up the geometry ?

Comments are closed.