Set-up & Capture
It’s been a while since my last shoe scanning post, and another pair of trainers just arrived, so it felt like a good time to write another one. The focus this time was to see if it is possible to make a cross polarised scanning booth on a ‘shoe string’ budget (shameless pun intended). For those who are not familiar with cross polarised photography, it is a capture technique used to filter specular reflectance from an image. With specular light bounce removed from an image you are left with flat diffuse colour, which is well suited for texturing and relighting 3D models.
One of the challenging aspects of scanning a shoe is figuring out how to suspend it in order to capture every angle. This can prove to be quite difficult, but with a recently discovered technique that I am calling ‘masked blending’, you can avoid this altogether. When I say ‘masked blending’ I am referring to a technique where you can use difference masking in Agisoft to trick the software into believing two scans were captured in one continuous setup. This means blending the scans can happen during the mesh calculation stage, rather than the alternative approach of hacking multiple scans together in ZBrush – I will explain in more detail further down.
As you can see in the image (right), my setup is pretty basic; a matte black backdrop (minus an iron for those shocking creases), a turntable and a couple of speedlite flashes. Despite all the creases, the black backdrop worked really well – especially when using the polarised filter, which did a great job of removing all light bounce across the creased areas. My light source was two speedlites pointed inwards at a 45-degree angle from the camera, each with a strip of polarising gel taped over the front.
When processing I used Lightroom for batching and colour correcting the images. I then generated a set of masks in Agisoft which helped isolate the subject during the processing stage. The matte black backdrop allowed me to generate a pretty good difference mask, which meant I could carefully flip the shoe midway through the scan but trick the software into thinking the camera position had changed instead, creating the ‘masked blending’ effect that I mentioned earlier. Using this technique I was able to move the shoe and get coverage on multiple axes, then combine them all together in Agisoft. I did have to do some mask cleanup work in After Effects, which took 10 minutes or so, but once I saw those two perfectly perpendicular rings of camera alignment in Agisoft, it definitely felt like one of those eureka moments.
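To make the difference-masking idea concrete, here is a minimal numpy sketch – not Agisoft’s actual algorithm, and the function name, toy images and threshold value are all illustrative. The idea is simply that each photo is compared against a plate of the empty backdrop, and any pixel that differs enough from the backdrop is kept as subject:

```python
import numpy as np

def difference_mask(photo, background, threshold=0.08):
    """Binary subject mask: pixels that differ from the empty-backdrop
    plate by more than `threshold` are treated as subject."""
    diff = np.abs(photo.astype(np.float32) - background.astype(np.float32))
    # Collapse the RGB difference to one channel before thresholding.
    return (diff.max(axis=-1) > threshold).astype(np.uint8)

# Toy 4x4 "images": a dark backdrop with a bright subject patch.
background = np.full((4, 4, 3), 0.05, dtype=np.float32)
photo = background.copy()
photo[1:3, 1:3] = 0.8  # the "shoe"
mask = difference_mask(photo, background)
```

Because the backdrop stays identical between the two halves of the capture, the mask looks the same whether the shoe moved or the camera did – which is exactly why the software can be fooled.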
Since doing this test I have learned that turntable setups can be fairly manual and tedious, but there are a few things you can do to help speed things up. For example, using an intervalometer on your camera can make life easier. Also, a motorised turntable would make a lot of sense too, although I only thought about that after watching back a time-lapse of me hunching over the turntable waiting for the camera to fire every 5 seconds. See below a video of one of my earlier test shoots (worth noting that the time-lapse failed during the final scan, which was captured at night to help control lighting conditions).
Remodelling & Rendering
When rebuilding the trainer I decided to avoid remodelling the knitted surface with all the individual threading, as this would either require some very clever modelling tricks, or a person with a lot of spare time on their hands. Instead I remodelled the basic forms in ZBrush, and extracted high frequency details from the diffuse texture which gave the illusion of a knitted surface.
The model is made up of ten different parts, all with polarised texture maps that support a PBR workflow, so they can be easily relit and rendered in Maya. All texture cleanup was done in Mari, apart from the laces which were created using various tiled patterns in Photoshop. I created alphas for the logo imprints on the rubber sole that were applied in ZBrush using the Inflate tool in the Deformation palette. To recreate the curvy shapes on the sole I had to keep the topology fairly clean in order to retain the forms and remove any noise from the original scanned data.
For a while now I have wanted to improve my surface detail capture when shooting for photogrammetry, so I decided to look into some photometric techniques. I have recently been learning Substance Designer, which is a great piece of software for creating procedural textures, but it also has some amazing tools that support 3D scanning and photometric workflows.
My first test was to try extracting specular information from an image using difference blending between polarised and non-polarised photos of the same subject. As you can see from the image (right) it seems to be working pretty well. The next step would be to calculate the specular difference from various angles of the trainer and re-project those images onto the mesh in Agisoft to see how well it combines them together. I’m curious as to whether this will actually work though, because specular reflections can vary depending on the material of the surface, and the angle it is being viewed from.
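The polarised-difference idea can be sketched in a few lines of numpy. This is illustrative only – `specular_difference`, the toy images and the values are made up, and real captures would need exposure-matched, pixel-aligned frames. The cross-polarised shot keeps the flat diffuse colour, so subtracting it from an unpolarised shot of the same framing leaves roughly the specular component:

```python
import numpy as np

def specular_difference(unpolarised, cross_polarised):
    """Approximate specular-only image: the cross-polarised shot contains
    the diffuse colour, so subtracting it from the unpolarised shot
    leaves (roughly) the specular reflectance."""
    spec = unpolarised.astype(np.float32) - cross_polarised.astype(np.float32)
    return np.clip(spec, 0.0, 1.0)  # negative noise is clamped away

# Toy 2x2 frames: a flat diffuse base plus one specular highlight.
diffuse = np.full((2, 2, 3), 0.4, dtype=np.float32)  # cross-polarised shot
unpol = diffuse.copy()
unpol[0, 0] += 0.5                                   # a highlight texel
spec = specular_difference(unpol, diffuse)
```

The clamp matters in practice: sensor noise and slight misalignment produce small negative differences that would otherwise pollute the specular map.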
The surface details in the renders of my final trainer model appear to be pretty good, however they are not totally accurate to the original shoe. The current surface details have been approximated using a ‘bump from diffuse’ approach, which uses RGB data in the texture to create fine details on the surface of the model. This approach is not accurate as it uses darker values in the texture to negatively displace the surface and lighter values for positive displacement, and this doesn’t always give you the correct result. Skin pores and facial stubble are a good example of this issue as both are darker in colour, however one requires a positive displacement and the other a negative.
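The pore/stubble failure is easy to demonstrate with a toy numpy sketch – the function and values here are hypothetical, not a real tool’s implementation. A naive ‘bump from diffuse’ just maps luminance to height, so every dark texel recesses, including the stubble texel that should actually protrude:

```python
import numpy as np

def bump_from_diffuse(rgb):
    """Naive 'bump from diffuse': luminance is treated as height, so dark
    texels always recess and light texels always protrude, regardless of
    the real geometry."""
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    return luma - luma.mean()  # centre around zero displacement

# Toy texture: mid-grey base with a dark "pore" and a dark "stubble" texel.
tex = np.full((2, 2, 3), 0.5, dtype=np.float32)
tex[0, 0] = 0.1  # pore    - really is recessed
tex[1, 1] = 0.1  # stubble - really protrudes
height = bump_from_diffuse(tex)
```

Both dark texels come out with negative height, even though only the pore should – which is exactly the inaccuracy described above.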
To solve this problem, my second photometric test was to extract height (or depth) information from a photograph to create surface details accurate to real life. I did this by capturing a set of images using multi-directional light sources, then loading them into Substance Designer to calculate normal values across the shoe, which were then converted into a height map.
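For anyone curious what is happening under the hood, the classic Lambertian photometric-stereo estimate can be sketched in numpy – an illustrative toy, not Substance Designer’s implementation, and the light directions and flat test patch are made up. With known light directions, per-pixel normals and albedo fall out of a least-squares solve; converting those normals into a height map is then a separate integration step:

```python
import numpy as np

def photometric_normals(intensities, light_dirs):
    """Per-pixel normals from images lit by known directional lights,
    assuming a Lambertian surface: solve L @ (albedo * n) = I by least
    squares, then split the result into albedo and unit normal."""
    L = np.asarray(light_dirs, dtype=np.float32)    # (k, 3) unit vectors
    I = np.asarray(intensities, dtype=np.float32)   # (k, h, w) images
    k, h, w = I.shape
    g, *_ = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)
    g = g.T.reshape(h, w, 3)                        # albedo-scaled normals
    albedo = np.linalg.norm(g, axis=-1)
    normals = g / np.maximum(albedo[..., None], 1e-8)
    return normals, albedo

# Synthetic check: a flat patch (normal straight up) lit from 3 directions.
n_true = np.array([0.0, 0.0, 1.0], dtype=np.float32)
lights = np.array([[0, 0, 1], [0.6, 0, 0.8], [0, 0.6, 0.8]], dtype=np.float32)
I = np.tensordot(lights, n_true, axes=1)[:, None, None] * np.ones((1, 2, 2))
normals, albedo = photometric_normals(I, lights)
```

With at least three non-coplanar lights the solve is well determined; more lights simply make the least-squares fit more robust to noise and shadows.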
If you look at the rendered gif you can see during the clay wipe that the white tick is slightly protruding from the surface of the model, which is not how it looks on the original. If I were to use the height map extracted in Substance Designer I should be able to get a displacement true to the original.
To take this experiment further, similar to the specular extraction, I would like to create height maps from multiple angles of the shoe, and hopefully I can blend these height maps together across the model surface using the texture projection tool in Agisoft. In theory this height map could then be applied as a displacement in ZBrush to get much more accurate surface details across the high poly model. If I manage to get all this working, I will no doubt post the results on my blog. So for those of you who are interested, feel free to subscribe and watch this space!