I've modified the training script of HTS-2.3 to use HERest's parallel mode for re-estimating the model parameters.
However, I'm not sure whether parallel processing can also be used for estimating the transforms for speaker adaptation.
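For reference, the standard HTK way to parallelise HERest re-estimation is its accumulator mode: each worker runs with -p <n> and dumps a binary accumulator (HER<n>.acc) instead of updating the model, and a final pass with -p 0 combines the accumulators. A minimal sketch, using a toy file list and the demo paths from this thread; the HERest commands are echoed rather than executed so the sketch runs without HTK installed (drop the HEREST="echo HERest" line to run them for real):

```shell
# Dry-run prefix: print the commands instead of invoking HERest.
HEREST="echo HERest"

# Toy train.scp so the sketch is self-contained (stand-in for the real list).
mkdir -p data/scp
printf 'utt%02d.cmp\n' $(seq 1 8) > data/scp/train.scp

# Split the file list into 4 roughly equal parts: part.aa .. part.ad (GNU split).
split -n l/4 data/scp/train.scp part.

# Each worker dumps an accumulator HER<n>.acc; no model update yet.
n=1
for scp in part.a?; do
  $HEREST -C configs/qst001/ver1/trn.cnf -p $n -S "$scp" \
    -I data/labels/full.mlf \
    -H models/qst001/ver1/cmp/re_clustered_all.mmf \
    models/qst001/ver1/cmp/tiedlist
  n=$((n + 1))
done

# Final pass: -p 0 reads the accumulators and performs one model update.
$HEREST -C configs/qst001/ver1/trn.cnf -p 0 \
  -H models/qst001/ver1/cmp/re_clustered_all.mmf \
  models/qst001/ver1/cmp/tiedlist HER*.acc
```

In real use the workers would run on separate machines or as background jobs, and the HTS duration model (-N) options would be carried along in the same way.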
This is supported in neither HTK nor HTS.
I implemented such a feature before joining Google, but unfortunately it has not been released.

Heiga
For example, one step of the "adaptation transform re-estimation" in the adaptation demo script is shown below.
I've tried splitting the training data and writing the output to separate transform files, but I don't know how to combine those files using the HTK tools.
I'd be very grateful for any helpful comments or solutions.

Thanks
HERest -A -B -C configs/qst001/ver1/trn.cnf -D -T 1 \
    -S data/scp/train.scp -I data/labels/full.mlf \
    -m 1 -u ada -w 5000 -t 1500 100 5000 -h */*%%%_* \
    -H models/qst001/ver1/cmp/re_clustered_all.mmf \
    -N models/qst001/ver1/dur/re_clustered_all.mmf \
    -C configs/qst001/ver1/adp.cnf \
    -K models/qst001/ver1/cmp/xforms SI+dec_feat1 \
    -H models/qst001/ver1/cmp/regTrees/dec.base \
    -H models/qst001/ver1/cmp/regTrees/dec.tree \
    -Z models/qst001/ver1/dur/xforms SI+dec_feat1 \
    -N models/qst001/ver1/dur/regTrees/dec.base \
    -N models/qst001/ver1/dur/regTrees/dec.tree \
    models/qst001/ver1/cmp/tiedlist models/qst001/ver1/dur/tiedlist
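One possible workaround, under an assumption rather than a documented HTS feature: since -h gives a per-speaker mask, the transforms written by -K/-Z are estimated per speaker from that speaker's data only. If that holds, splitting the scp by speaker and running independent HERest jobs should produce transform files for disjoint speakers, which can simply be collected into one xforms directory with no combine step. A sketch of the split, with a toy scp whose speaker ID is assumed to be the directory holding each utterance (the HERest invocations themselves would be the command above, once per part file):

```shell
# Toy adaptation scp (stand-in for the real list); the speaker ID is assumed
# to be the directory component holding the utterance, e.g. "spk1".
mkdir -p data/scp/parts
cat > data/scp/adapt.scp <<'EOF'
data/raw/spk1/a.cmp
data/raw/spk1/b.cmp
data/raw/spk2/a.cmp
data/raw/spk3/a.cmp
EOF

# One scp per speaker, keyed on the next-to-last path component.
awk -F/ '{ print > ("data/scp/parts/" $(NF-1) ".scp") }' data/scp/adapt.scp

ls data/scp/parts
```

Each job would then take one part file via -S and write into the same -K and -Z directories (or into per-job directories copied together afterwards). Whether the speaker grouping above matches the real -h mask in the demo is an assumption that would need checking against the corpus layout.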