
[hts-users:03076] Re: the order of CMLLR transforms being applied to models in the course of HTS engine generation


Hi,

2011/10/19 Hui LIANG <tshlmail-hts@xxxxxxxxx>:

> This is exactly my concern. According to my understanding of the
> estimation/training of a cascade of CMLLR transforms, the parent
> transforms (set_A) are applied to the features, and thus set_B should
> bridge the gap between the speaker-independent models and the adapted
> features. As a result, in the course of HTS engine generation, I feel
> that it is set_B that should be applied to the speaker-independent
> models first. In other words, set_B (or set_A) is "closer" to the
> model (or feature) side during training, so set_B should still be
> "closer" to the model side during HTS engine generation.
>
> Does swapping the two sets of CMLLR transforms make no/negligible
> difference? Or is my understanding wrong?

Ah, now I understand. I think you are correct. This is a bug, so it
should be fixed. If CMLLR transforms are cascaded and they are applied
to the models rather than to the features, the current transform should
be applied first, and then its parent transforms should be applied.
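
To make the order explicit, here is a minimal sketch of the algebra,
assuming the usual CMLLR convention that a transform acts on a feature
vector as \hat{o} = A o + b and on a model mean as
\hat{\mu} = A^{-1}(\mu - b); A_A, b_A and A_B, b_B below simply stand
for set_A and set_B. During training the parent set_A is applied to
the features first, then set_B:

    o'  = A_A o + b_A
    o'' = A_B o' + b_B = (A_B A_A) o + (A_B b_A + b_B)

so the cascade equals a single CMLLR transform with A = A_B A_A and
b = A_B b_A + b_B. Applying that single transform in model space to a
mean \mu gives

    \hat{\mu} = A^{-1} (\mu - b)
              = (A_B A_A)^{-1} (\mu - A_B b_A - b_B)
              = A_A^{-1} ( A_B^{-1} (\mu - b_B) - b_A )

i.e. set_B (the current transform) is applied to the model first and
set_A (its parent) second.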

Regards,

Heiga

-- 
Heiga ZEN (in Japanese)
Byung Ha CHUN (in Korean)
<heigazen@xxxxxxxxxx>

References
[hts-users:03071] the order of CMLLR transforms being applied to models in the course of HTS engine generation, Hui LIANG
[hts-users:03072] Re: the order of CMLLR transforms being applied to models in the course of HTS engine generation, Heiga ZEN (Byung Ha CHUN)
[hts-users:03073] Re: the order of CMLLR transforms being applied to models in the course of HTS engine generation, Hui LIANG