Mannequin head models

Hi,

We recently discussed our project with you via Skype, Toon, and we’ve had a chance to play with the Mannequin tool; it’s very impressive! I’m just interested in how it works. For example, I enter my head size data for a 95th percentile male: width, breadth and circumference. Does the tool then manipulate a 3D model to match these parameters, or does it match them to a real 3D scan in the database?

Thanks,
Phil

Hi Phil. Welcome to our User Forum!

We recently wrote a paper about DINED Mannequin:

Huysmans, T., Goto, L., Molenbroek, J., & Goossens, R. (2020). DINED Mannequin. Tijdschrift voor Human Factors, 45(1), 4-7.

In short, when you select a number of measures in the Mannequin tool, we calculate a linear regression between the values of those measures for each subject in the database and the shape coordinates of the 3D scans of those subjects. Shape coordinates are a compact set of numbers (40) that describe the body shape by making use of dimensionality reduction techniques: they replace the thousands of 3D mesh point coordinates with a much lower number, typically a few tens. These shape coordinates represent the typical shape modes present in the data, in decreasing order of importance. See this figure (from the paper above) for a visualisation of the first few shape modes:
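To make the idea of shape coordinates a bit more concrete, here is a minimal sketch of that kind of dimensionality reduction, assuming a PCA-like decomposition of scans that are in point-to-point correspondence. The array sizes, variable names and placeholder data are illustrative only and not how DINED is actually implemented.

```python
import numpy as np

# Hypothetical illustration of how shape coordinates could be obtained.
# Assume `vertices` holds N subjects' 3D scans in point-to-point
# correspondence: shape (N, V, 3), with V mesh vertices per subject.
rng = np.random.default_rng(0)
N, V = 100, 5000
vertices = rng.normal(size=(N, V, 3))          # placeholder data, not real scans

X = vertices.reshape(N, -1)                    # flatten each scan to a row of 3*V numbers
mean_shape = X.mean(axis=0)
Xc = X - mean_shape

# PCA via SVD: each principal component is a "shape mode".
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_modes = 40                                   # keep ~40 shape coordinates per subject
modes = Vt[:n_modes]                           # (40, 3*V) shape modes
shape_coords = Xc @ modes.T                    # (N, 40) shape coordinates

# A subject's mesh is approximately recovered from its 40 coordinates:
approx = mean_shape + shape_coords[0] @ modes
print(approx.reshape(V, 3).shape)              # (5000, 3)
```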

In that way, we build a linear model expressing 3D shape in terms of 1D measures. When you enter values in Mannequin for the persona, we use the linear regression model to go from the measure values to the shape coordinates. These shape coordinates are then used to recreate a 3D shape. Creating a 3D shape from a set of shape parameters (shown as dials) is illustrated here:
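As a rough, self-contained sketch of that measure-to-shape step (again an assumption of how it could look, not DINED’s actual code, with placeholder arrays standing in for the database):

```python
import numpy as np

# Minimal sketch of the measure -> shape-coordinate step. Placeholder arrays
# stand in for the database: `measures` holds the selected 1D measures per
# subject and `shape_coords` their 40 shape coordinates (see previous sketch).
rng = np.random.default_rng(0)
N, M, K = 100, 3, 40                                  # subjects, measures, coordinates
measures = rng.normal(size=(N, M))
shape_coords = rng.normal(size=(N, K))

# Least-squares linear regression from measures (plus an intercept) to coordinates.
A = np.hstack([measures, np.ones((N, 1))])            # (N, M+1)
coef, *_ = np.linalg.lstsq(A, shape_coords, rcond=None)   # (M+1, K)

# For a persona: the entered measure values are mapped to shape coordinates...
persona = np.array([[0.5, -0.2, 1.0]])                # illustrative entries
persona_coords = np.hstack([persona, [[1.0]]]) @ coef # (1, K)
print(persona_coords.shape)                           # (1, 40)

# ...which are then turned back into a 3D mesh via the shape modes, e.g.
# persona_mesh = mean_shape + persona_coords @ modes   (see the previous sketch)
```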

If you were to digitally measure those measures on the resulting persona’s 3D shape, you may see that they are not always in line with the values that you entered. The reason is twofold: (1) the linear model may not be able to fully capture the relation between the measures and the 3D shape (we plan to explore non-linear models in the future), and (2) the measures in the database may have been taken physically, and physical measures can differ from digital measures, as explained in e.g. this paper:

Han, H., Nam, Y., & Choi, K. (2010). Comparative analysis of 3D body scan measurements and manual measurements of size Korea adult females. International Journal of Industrial Ergonomics, 40(5), 530-540.
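To illustrate what a “digital measure” is in this context, here is a small hypothetical check that slices a head mesh at a given height and approximates the circumference of that slice. The function name, the convex-hull approximation and the tolerance are my own choices for the example, not part of DINED.

```python
import numpy as np
from scipy.spatial import ConvexHull

def digital_circumference(points, z, tol=2.0):
    """Approximate the circumference of a mesh at height z (same units as points)."""
    band = points[np.abs(points[:, 2] - z) < tol]       # thin horizontal band of vertices
    ring = band[ConvexHull(band[:, :2]).vertices, :2]   # 2D hull, counter-clockwise
    return np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1))

# Quick sanity check on a synthetic cylinder of radius 90 (mm):
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
cylinder = np.column_stack([90 * np.cos(theta), 90 * np.sin(theta), np.zeros(200)])
print(digital_circumference(cylinder, z=0.0))           # close to 2 * pi * 90 ~ 565.5

# On a reconstructed persona mesh, the value returned here could then be
# compared with the circumference that was entered in Mannequin.
```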

I hope this makes the internals of Mannequin a bit clearer.

Kind regards,

Toon.

BTW, I’m going to add the head as a separate dataset in Mannequin. That will further increase the accuracy, because once the heads are separated from the body they can be better aligned and modeled (the subjects do not always look straight ahead).

Thanks for the detailed response, Toon; this is some great info. Adding the head as a separate dataset would be really useful for comparisons in CAD.

We’ve been using the tool successfully lately; however, I think we still have issues with shape. I’m curious if you know of any study on head shape, and specifically on the shape of the head circumference?

Hi Phil,

There are certainly studies focussing on head circumference shape, e.g.

Could you, however, be a bit more specific about the problems you have with head shape? Is the product still not fitting? For which head shapes does it not fit, and how do those head shapes compare to the manikins from DINED that you used to size the product? What was your approach in using the manikins for sizing?

Your feedback can be useful for further improving the functionality of DINED.

Kind regards,

Toon.

We still feel we have some issues with fit with regard to the head shape. If we take the Hohenstein study as an example: if we took the extreme oval and the extreme round head shapes and overlaid them, we could create a ring shape that would optimally fit around both (obviously with some air gap, depending on the shape).

I guess if I used the DINED tool a little more wisely, by entering, let’s say, 95th percentile male head width and circumference and pairing that with an average or below-average breadth, it would give me the results I’m interested in.

Hi Phil,

I do think that the separate head dataset would be a solution here. The full body dataset focuses more on global body shape and is less suited for head analyses. The reason for this is that the subjects all have their heads in different (rotational) poses, which impairs the linear modeling. After separating the head from the body, all heads can be consistently aligned, e.g. via the Frankfurt plane.
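As an illustration of what such a consistent alignment could look like, here is a small sketch that builds a coordinate frame from three Frankfurt-plane landmarks (the two porion points and the left orbitale) and rigidly transforms a head scan into it. The landmark names, axis conventions and placeholder values are assumptions for the example, not DINED’s actual processing pipeline.

```python
import numpy as np

def frankfurt_frame(right_porion, left_porion, left_orbitale):
    """Rotation matrix whose rows are the axes of a Frankfurt-plane based frame."""
    x = left_porion - right_porion                     # ear-to-ear axis
    x /= np.linalg.norm(x)
    z = np.cross(x, left_orbitale - right_porion)      # normal of the Frankfurt plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                                 # completes a right-handed frame
    return np.vstack([x, y, z])

def align_head(vertices, right_porion, left_porion, left_orbitale):
    """Rigidly transform a head scan into the Frankfurt-based frame."""
    R = frankfurt_frame(right_porion, left_porion, left_orbitale)
    origin = (right_porion + left_porion) / 2.0        # midpoint between the ears
    return (vertices - origin) @ R.T

# Illustrative usage with made-up landmark positions and a random point cloud (mm):
rng = np.random.default_rng(0)
head = rng.normal(scale=80.0, size=(5000, 3))
aligned = align_head(head,
                     right_porion=np.array([70.0, 0.0, 0.0]),
                     left_porion=np.array([-70.0, 0.0, 0.0]),
                     left_orbitale=np.array([60.0, 80.0, -20.0]))
print(aligned.shape)                                   # (5000, 3)
```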

As an example, take the child head dataset, where the differences in head shape become clear with a simple analysis:

Kind regards,

Toon.

Thanks Toon,

That looks like exactly what we are after at the moment. Has the head feature been implemented?