
[hts-users:04532] HMM adaptation using VCTK database


Hi all!

Several days ago, I changed the speaker pattern in Config.pm so that HTS picks up the correct speaker name, and the results are normal now. Thanks for your help!
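
In case it is useful, the change was roughly along these lines (only a sketch; the mask below assumes utterance files named like p226_001, and the exact pattern depends on how the data files are actually named):

    # speaker name pattern in Config.pm (used as the speaker mask during adaptation);
    # each '%' marks one character of the speaker name within the file name
    $spkrPat = '"*/p%%%_*"';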

These days, I am using the VCTK database for this task. VCTK has 109 speakers, the quality of the waveforms is not very good, and the speakers have many different accents. I use 103 speakers as training speakers and 80 sentences of p226 as adaptation data. The speaker-independent (SI) model gives a medium-quality voice that is not clear; I cannot make out what it is saying. The SI+dec_feat3 result sounds like the target speaker p226, but I still cannot make out what he is saying.
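
For reference, the speaker split in Config.pm looks roughly like this (variable names follow the HTS adaptation demo; the training list below is shortened to a few placeholders rather than the full set of 103 speakers):

    # training vs. adaptation speakers in Config.pm
    @trainspkr = ('p225', 'p227', 'p228');   # extended to all 103 training speakers
    @adaptspkr = ('p226');                   # adaptation target, 80 sentences used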

The HTS adaptation procedure itself is correct, so I was wondering whether this is caused by a problem in the database, since if the differences between the speakers are too large, the adaptation effect may not be good enough.

Looking forward to hearing from you! 

Best regards,
zhen wei