Through a combination of trial and error, reading the Blender Python API documentation, reading the Blender developer web pages, and further research on the Internet, I was able to recreate Blender's skinning algorithm in a Python script. The advantage of using Blender's Python scripting was that I could run my script and compare its results directly against Blender's native pose deformation system.

Deriving the Inverse Pose Transform

Step 1: Recreating Blender's armature posing algorithm using a Python script

In pseudocode, the skinning algorithm is:

    For each vertex v of a mesh:
        originalv = Vector(v)    // save a (deep) copy of v
        For each bone b:
            delta = originalv * b.localMatrix - originalv
            v = v + delta * normalizedWeightOfBone_b_forVertex_v

Step 2: Converting the per vertex skinning algorithm to a mathematical function
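The skinning loop above can be sketched as a runnable function. This is my own NumPy reconstruction, not Blender's internal code; the 3×3 bone matrices, the row-vector convention, and the precomputed normalized weights are assumptions for illustration:

```python
import numpy as np

def skin_vertices(vertices, bone_matrices, weights):
    """Linear blend skinning as in the pseudocode above.

    vertices:      (n, 3) array of rest-pose vertex positions
    bone_matrices: list of (3, 3) per-bone local matrices (row-vector convention)
    weights:       (n, n_bones) normalized per-vertex bone weights (assumed precomputed)
    """
    skinned = vertices.copy()
    for i, original_v in enumerate(vertices):
        v = original_v.copy()                    # deep copy, like Vector(v)
        for b, M in enumerate(bone_matrices):
            delta = original_v @ M - original_v  # displacement bone b alone would apply
            v = v + delta * weights[i, b]        # blend by the normalized weight
        skinned[i] = v
    return skinned
```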

Each vertex goes through the following transformation:

    For each bone b:
        delta = originalv * b.localMatrix - originalv
        v = v + delta * normalizedWeightOfBone_b_forVertex_v

By substituting for delta, this can be rewritten:

    For each bone b:
        v = v + (originalv * b.localMatrix - originalv) * normalizedWeightOfBone_b_forVertex_v
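As a numerical sanity check (my own sketch, assuming 3×3 bone matrices and fixed normalized weights), the per-bone loop collapses into a single matrix applied to the original vertex:

```python
import numpy as np

def skin_vertex_loop(v0, bone_matrices, weights):
    """Per-vertex transformation exactly as in the pseudocode."""
    v = v0.copy()
    for M, w in zip(bone_matrices, weights):
        v = v + (v0 @ M - v0) * w
    return v

def skin_vertex_matrix(v0, bone_matrices, weights):
    """Equivalent closed form: v0 @ (I + sum_b w_b * (M_b - I))."""
    I = np.eye(3)
    A = I + sum(w * (M - I) for M, w in zip(bone_matrices, weights))
    return v0 @ A
```

Both functions produce identical results, which is what allows the loop to be rewritten as a summation.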

Replacing the for loop with a summation over all bones b, we get the final vertex coordinates in terms of the initial vertex coordinates:

    v_f = v_0 + Σ_b (v_0 * M_b - v_0) * W_bv

Where:

    v_f is the final skinned vertex
    v_0 is the vertex prior to skinning
    M_b is b.localMatrix
    W_bv is the normalized vertex weight of bone b for v_0 and v_f

Note that we have now expressed v_f as a function f of v_0; in other words, we have v_f = f(v_0).

Step 3: Finding the inverse skinning function g such that v_0 = g(v_f)

Below is a step by step solution (note that I is the identity matrix):

    v_f = v_0 + Σ_b (v_0 * M_b - v_0) * W_bv
    v_f = v_0 + v_0 * Σ_b (M_b - I) * W_bv
    v_f = v_0 * (I + Σ_b (M_b - I) * W_bv)
    v_0 = v_f * (I + Σ_b (M_b - I) * W_bv)^(-1)

We have now ended up with a formula expressing v_0 in terms of v_f. This is g(v_f).

Step 4: Converting the mathematical solution to an algorithm

By examining g(v_f), we can work from the inside of the mathematical expression outwards to build the code. At the beginning of an iteration of the outer loop (over the vertices) we start with v_f. At the end of the iteration, v_f is overwritten with v_0. Here is the final inverse transform pseudocode:

    For each vertex v of a mesh:
        sigma = Matrix filled with zeros    // the accumulator
        For each bone b:
            sigma = sigma + (b.localMatrix - I) * normalizedWeightOfBone_b_forVertex_v
        v = v * invert(I + sigma)
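Here is a runnable version of the inverse pseudocode, under assumptions of my own (NumPy, 3×3 bone matrices, row-vector convention, frozen normalized weights); it is a sketch, not Blender's API:

```python
import numpy as np

def unpose_vertices(skinned, bone_matrices, weights):
    """Inverse transform: recover v0 from v_f, per the pseudocode above.

    skinned:       (n, 3) array of posed (skinned) vertex positions
    bone_matrices: list of (3, 3) per-bone local matrices (row-vector convention)
    weights:       (n, n_bones) frozen normalized per-vertex bone weights
    """
    I = np.eye(3)
    unposed = skinned.copy()
    for i, vf in enumerate(skinned):
        sigma = np.zeros((3, 3))                    # the accumulator
        for b, M in enumerate(bone_matrices):
            sigma += (M - I) * weights[i, b]
        unposed[i] = vf @ np.linalg.inv(I + sigma)  # v0 = vf * (I + sigma)^-1
    return unposed
```

Note that np.linalg.inv raises an error if (I + sigma) is singular, which can occur for particular combinations of bone rotations and weights; the derivation implicitly assumes it is invertible.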

Envelopes Must be Frozen for the PSD to Work

In Blender, when the envelope option is active for a bone, an algorithm *dynamically* determines the amount of influence the bone exerts on each vertex, based on the bone's envelope size and its proximity to the vertex. After calculating the inverse transform, the unposed shape's vertices are in different locations; they therefore have different bone influences, and hence different weights. Bone weights must be constant for the inverse transform to be valid: otherwise, by the end of the calculation the starting conditions are no longer the same, and the solution is invalid. I believe that finding the inverse pose transform while (dynamic) envelopes are active is impossible unless the envelopes are made permanent, either by baking them into weights or by making envelopes apply only to the basis shape. A similar situation exists in LightWave and may be responsible for problems users have encountered when setting up smart-skinning for vertices influenced, in part or in full, by bone envelopes.
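To illustrate why the weights must be frozen, here is a toy NumPy experiment of my own (not Blender's envelope code): the weight is a simple falloff with distance from a bone pivot at the origin, mimicking a dynamic envelope. Inverting with the weight re-evaluated at the posed vertex fails to recover the original position, while the frozen weight inverts exactly:

```python
import numpy as np

M = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])       # 90-degree rotation in the xy-plane
I = np.eye(3)

def envelope_weight(v):
    # Hypothetical envelope: influence falls off with distance from the pivot.
    return 1.0 / (1.0 + np.linalg.norm(v))

def unpose(vf, w):
    # Single-bone inverse transform with a given (scalar) weight.
    return vf @ np.linalg.inv(I + w * (M - I))

v0 = np.array([1.0, 0.0, 0.0])
vf = v0 + envelope_weight(v0) * (v0 @ M - v0)   # forward skinning, weight taken at v0

frozen = unpose(vf, envelope_weight(v0))    # weight frozen at its rest-pose value
dynamic = unpose(vf, envelope_weight(vf))   # weight re-evaluated at the posed vertex

print(np.allclose(frozen, v0))    # True: frozen weights invert exactly
print(np.allclose(dynamic, v0))   # False: the re-evaluated weight differs, so g is wrong
```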