Set-up & Capture
It’s been a while since my last shoe scanning post, and another pair of trainers just arrived, so it felt like a good time to write another one. The focus this time was to see whether it was possible to build a cross polarised scanning booth on a ‘shoe string’ budget (shameless pun intended). For those who are not familiar with cross polarised photography, it is a capture technique used to filter specular reflectance from an image. With the specular light bounce removed, you are left with flat diffuse colour, which is well suited to texturing and relighting 3D models.
One of the challenging aspects of scanning a shoe is figuring out how to suspend it in order to capture every angle. This can prove quite difficult, but with a recently discovered technique that I am calling ‘masked blending’, you can avoid it altogether. By ‘masked blending’ I mean a technique where you use difference masking in Agisoft to trick the software into believing two scans were captured in one continuous setup. This way the scans can be blended during the mesh calculation stage, rather than the alternative approach of hacking multiple scans together in ZBrush – I will explain in more detail further down.
As you can see in the image (right), my setup is pretty basic: a matte black backdrop (minus an iron, hence those shocking creases), a turntable and a couple of speedlite flashes. Despite all the creases, the black backdrop worked really well – especially when using the polarised filter, which did a great job of removing all light bounce across the creased areas. My light source was two speedlites pointed inwards at a 45-degree angle from the camera, each with a strip of polarising gel taped over the front.
When processing, I used Lightroom for batching and colour correcting the images. I then generated a set of masks in Agisoft, which helped isolate the subject during the processing stage. The matte black backdrop allowed me to generate a pretty good difference mask, and meant I could carefully flip the shoe midway through the scan but trick the software into thinking the camera position had changed instead – creating the ‘masked blending’ effect that I mentioned earlier. Using this technique I was able to move the shoe, get coverage on multiple axes, and then combine everything together in Agisoft. I did have to do some mask cleanup work in After Effects, which took 10 minutes or so, but once I saw those two perfectly perpendicular rings of camera alignment in Agisoft, it definitely felt like one of those eureka moments.
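For anyone who wants to prototype the difference-masking idea outside Agisoft, here is one way it could work – a minimal sketch, assuming a clean plate of the empty backdrop shot from the same locked-off camera position. The filenames and threshold values are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical inputs: a clean plate of the empty backdrop, and a
# frame with the shoe in place, shot from the same camera position.
background = cv2.imread("background_plate.jpg").astype(np.float32)
frame = cv2.imread("shoe_frame_001.jpg").astype(np.float32)

# Per-pixel absolute difference, collapsed to a single channel.
diff = np.abs(frame - background).max(axis=2)

# Threshold into a binary mask; the matte black backdrop keeps the
# difference low everywhere except on the subject. Tune to taste.
_, mask = cv2.threshold(diff.astype(np.uint8), 20, 255, cv2.THRESH_BINARY)

# Morphological open/close to clean up speckle before export.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("shoe_frame_001_mask.png", mask)
```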
Since doing this test I have learned that turntable setups can be fairly manual and tedious, but there are a few things you can do to speed them up. For example, using an intervalometer on your camera can make life easier. A motorised turntable would make a lot of sense too, although I only thought of that after watching back a time-lapse of me hunched over the turntable, waiting for the camera to fire every 5 seconds. See below a video of one of my earlier test shoots (worth noting that the time-lapse failed during the final scan, which was captured at night to help control lighting conditions).
Remodelling & Rendering
When rebuilding the trainer I decided to avoid remodelling the knitted surface with all its individual threading, as this would require either some very clever modelling tricks or a person with a lot of spare time on their hands. Instead I remodelled the basic forms in ZBrush and extracted high frequency details from the diffuse texture, which gave the illusion of a knitted surface.
The model is made up of ten different parts, all with polarised texture maps that support a PBR workflow, so they can be easily relit and rendered in Maya. All texture cleanup was done in Mari, apart from the laces, which were created using various tiled patterns in Photoshop. I created alphas for the logo imprints on the rubber sole, which were applied in ZBrush using the inflate tool in the deformation palette. To recreate the curvy shapes on the sole I had to keep the topology fairly clean, in order to retain the forms and remove any noise from the original scanned data.
Photometric Capture
For a while now I have wanted to improve my surface detail capture when shooting for photogrammetry, so I decided to look into some photometric techniques. I have recently been learning Substance Designer, a great piece of software for creating procedural textures, which also has some amazing tools that support 3D scanning and photometric workflows.
My first test was to try extracting specular information from an image using difference blending between polarised and non-polarised photos of the same subject. As you can see from the image (right), it seems to be working pretty well. The next step would be to calculate the specular difference from various angles of the trainer and re-project those images onto the mesh in Agisoft to see how well it combines them together. I’m curious as to whether this will actually work though, because specular reflections can vary depending on the material of the surface and the angle it is viewed from.
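To make the difference-blending step concrete, here is a minimal sketch of the idea in Python with OpenCV. The filenames are hypothetical, and it assumes the two frames are pixel-aligned (tripod, same exposure, only the polariser rotated between shots):

```python
import cv2
import numpy as np

# Hypothetical inputs: the same framing shot twice, once with the
# polariser rotated to kill specular ("cross") and once letting it
# through ("parallel").
cross = cv2.imread("shoe_cross.jpg").astype(np.float32)
parallel = cv2.imread("shoe_parallel.jpg").astype(np.float32)

# The cross-polarised frame is (approximately) pure diffuse, so the
# per-pixel difference leaves the specular contribution behind.
specular = np.clip(parallel - cross, 0, 255).astype(np.uint8)

cv2.imwrite("shoe_specular.png", specular)
```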
The surface details in the renders of my final trainer model look pretty good; however, they are not totally accurate to the original shoe. The current surface details were approximated using a ‘bump from diffuse’ approach, which uses the RGB data in the texture to create fine details on the surface of the model. This approach is inaccurate because it pushes darker values in the texture negatively and lighter values positively, which doesn’t always give the correct result. Skin pores and facial stubble are a good example of this issue: both are darker in colour, yet one requires a positive displacement and the other a negative.
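For illustration, here is roughly what a basic ‘bump from diffuse’ extraction might look like in Python with OpenCV – a minimal sketch with hypothetical filenames, and note it bakes in exactly the dark-equals-recessed assumption described above:

```python
import cv2
import numpy as np

# Hypothetical input: a patch of the polarised diffuse texture.
diffuse = cv2.imread("knit_diffuse.jpg")

# Work on luminance only; colour carries no height information.
lum = cv2.cvtColor(diffuse, cv2.COLOR_BGR2GRAY).astype(np.float32)

# High-pass: subtract a heavily blurred copy so only the fine detail
# (the knit weave) survives, then re-centre around mid grey. Dark
# texture values end up recessed, light values raised.
low = cv2.GaussianBlur(lum, (0, 0), sigmaX=8)
bump = np.clip((lum - low) + 127.0, 0, 255).astype(np.uint8)

cv2.imwrite("knit_bump.png", bump)
```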
To solve this problem, my second photometric test was to extract height (or depth) information from photographs to create surface details accurate to real life. I did this by capturing a set of images using multi-directional light sources, then loading them into Substance Designer to calculate normal values across the shoe, which were then converted into a height map.
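For anyone curious what that kind of calculation involves under the hood, below is a rough Python sketch of classic photometric stereo (a least-squares Lambertian fit), followed by a Fourier-domain integration (Frankot-Chellappa) to turn the recovered normals into a height map. To be clear, this is an illustrative approximation, not what Substance Designer actually runs internally, and the filenames and light directions are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical capture: four frames of the locked-off shoe, each lit
# from a different known direction (left, right, top, bottom).
files = ["lit_left.jpg", "lit_right.jpg", "lit_top.jpg", "lit_bottom.jpg"]
lights = np.array([
    [-0.7, 0.0, 0.7],
    [ 0.7, 0.0, 0.7],
    [ 0.0, 0.7, 0.7],
    [ 0.0, -0.7, 0.7],
], dtype=np.float32)

imgs = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        for f in files]
h, w = imgs[0].shape
I = np.stack([im.reshape(-1) for im in imgs])      # (n_lights, h*w)

# Lambertian model: I = L @ (albedo * normal). Solve for all pixels
# in one least-squares shot via the pseudo-inverse of the light matrix.
G = np.linalg.pinv(lights) @ I                     # (3, h*w)
albedo = np.linalg.norm(G, axis=0) + 1e-8
normals = (G / albedo).T.reshape(h, w, 3)

# Frankot-Chellappa: integrate the normals' gradients in the Fourier
# domain to recover a (relative) height field.
nz = np.clip(normals[..., 2], 1e-3, None)
p, q = -normals[..., 0] / nz, -normals[..., 1] / nz
wx = np.fft.fftfreq(w) * 2.0 * np.pi
wy = np.fft.fftfreq(h) * 2.0 * np.pi
WX, WY = np.meshgrid(wx, wy)
denom = WX**2 + WY**2
denom[0, 0] = 1.0                                  # dodge divide-by-zero at DC
Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
Z[0, 0] = 0.0                                      # zero-mean height
height = np.real(np.fft.ifft2(Z))

# Normalise to 16-bit for use as a displacement/height map.
height -= height.min()
height /= height.max() + 1e-8
cv2.imwrite("shoe_height.png", (height * 65535).astype(np.uint16))
```

Note the fftfreq import path assumes NumPy's FFT module; any FFT library with the same conventions would do.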
If you look at the rendered GIF you can see, during the clay wipe, that the white tick is slightly protruding from the surface of the model, which is not how it looks on the original. If I were to use the height map extracted in Substance Designer, I should be able to get a displacement true to the original.
To take this experiment further, similar to the specular extraction, I would like to create height maps from multiple angles of the shoe, and hopefully I can blend these height maps together across the model surface using the texture projection tool in Agisoft. In theory this height map could then be applied as a displacement in ZBrush to get much more accurate surface details across the high poly model. If I manage to get all this working, I will no doubt post the results on my blog. So for those of you who are interested, feel free to subscribe and watch this space!
21 thoughts on “Cross Polarised Scanning on a Shoe String | Photogrammetry”
Hey Adam, thanks a lot for sharing your knowledge. Your workflow is awesome and so is the final result. Congratulations!!! I would like to know more about your method, and in particular how you solved the internal part of the shoe… Did you have blotchy areas or surface holes in that part because of the occlusion?… Thanks in advance and congrats one more time.
Hi Alberto, thanks for the kind words : ) glad you liked the post! Most of the interior of the shoe has to be rebuilt, as it is very difficult to capture in the scan. I would advise removing the insole after you have scanned the shoe and shooting texture reference, so that you can model/texture it separately.
Spot on with this write-up, I really think this amazing site needs a great deal more attention. I’ll probably be back again to read through more, thanks for the info!
Thanks Granaries!!
Hi Adam, are you using a circular polarized filter for your camera?
I’m in the process of buying a similar setup so any info would be appreciated. Thanks,
Jeremy
Hi Jeremy, yes, for this test I used a Canon 24-105mm L series lens with this Hoya filter: https://www.amazon.co.uk/77mm-Digital-Filter-Circular-Polarizer/dp/B000KKVFD6/?tag=adamspring-21
If you have the option, I would advise shooting on a prime lens as the results will be much sharper!
This is amazing work, Adam. I never thought to try something like this for scanning objects. I recently started using a Nikon D3300 for Photogrammetry and it works so well. I started my first project scanning my bike helmet (after dusting it with baby powder) and the results were PERFECT, except having to rescale it to fit the real world scale… I am learning so much from your work
Ah thanks Kashif! Appreciate the feedback, and glad you are finding the blog useful. I’d be interested to see the bike helmet if you have a link. Also, I am currently working on a couple more blog posts that you might find interesting – hoping to publish them when I can find some spare time, probably in the next few weeks.
Hi Adam, really cool result! Like others, I’m trying to create a scanning setup and was just wondering what type of speedlites and filters you used on them?
Cheers Niall! So the speedlites are a cheapish brand called Yongnuo – pretty reliable and more affordable than Canon, Nikon etc. The filter for the lights is a small piece of linear polarising film – it’s important to make sure the orientation is kept the same when taping it over multiple light sources. You will then need a circular polarising filter to attach to your lens as well. Hope this helps.
I am relatively new to Substance Designer. How did you go about calculating the normals from multiple images? I have no idea where to start. Any advice would be greatly appreciated.
Hi Chris. Thanks for checking out the blog. For calculating normals from multiple images, you can use the ‘Multi-Angle to Normal’ node. This tutorial should help – https://www.youtube.com/watch?v=kWkbBxwg05Q
Incredible work Adam!
I’m also trying to recreate this, and was curious to know if I can use the same linear polarised film over my camera lens too? Or does it have to be a circular polarised filter? As for the lens, were you shooting on a 35mm equivalent on a full frame sensor?
Cheers William! I use a circular polarising filter on the lens as it allows you to quickly alternate between specular and polarised states without having to touch the lens too much. I have tested using the linear film over the lens and it appears to work well; the only thing you need to remember is to make sure the filter on the lens is turned 90 degrees to the filter on the light source(s) – otherwise your images will still contain spec. Regarding the lens, for this I think it was a 24-105mm L Series zoom set to 50mm on a Canon MkIV body (full frame). I would recommend shooting with prime lenses if you can – since doing this blog post I have picked up a 50mm and the images are looking so much sharper.
Hi Adam,
Amazing work! I’ve been reading through your blog with great interest. I get the polarisation of the lighting and bringing those images into Substance Designer, but I feel like I’m missing a step: I’m trying to figure out what output is used from there in Photoscan. Do you just use the outputted flat images to get a better 3D result, or do you somehow combine the normals and height data from Substance Designer with the 3D model Agisoft produces?
Hi. I’ve just begun exploring photogrammetry and I’m finding your blog invaluable. Thanks for sharing your knowledge. I’ve one question: I’ve looked into buying a polarising filter for my light source, but the filter sheets come in different degrees – 0, 45, 135 degrees, etc. What degree of filter did you use on your speedlites?
Great work! Can I ask what settings you’re using in Align Photos?
Hi Adam, so is the final output still made by modelling, rather than reality capture?
Hello there! My name is Kvein. I tried the masked blending in Agisoft several times without success, and I don’t know what the problem is. I have studied photogrammetry for a year, but I still know very little about it. Hope to learn from you.
Hi Adam
How did you trick Agisoft into thinking that you had different camera angles? I am having some problems when trying this method.