* Welcome! [#k4f3be02]
> The HMM-based Speech Synthesis System (HTS) has been developed by the HTS working group and others (see [[Who we are]] and [[Acknowledgements]]).  The core system of HTS is implemented as a modified version of [[HTK:http://htk.eng.cam.ac.uk/]] and released in the form of a patch to HTK.  The patch code is released under [[a modified BSD-style license>License]].  Note, however, that &color(red){once you apply the patch to HTK, you must comply with the [[license of HTK:http://htk.eng.cam.ac.uk/docs/license.shtml]].};

> HTS version 2.0 now supports adaptation and adaptive training based on MLLR; MAP-based adaptation is also supported.  This version does not yet include a text analyzer, but the [[Festival Speech Synthesis System:http://www.festvox.org/festival/]] can be used as one.  The distribution includes demo scripts using the [[CMU ARCTIC database:http://www.festvox.org/cmu_arctic/]] (English).
Six HTS voices for Festival 1.95 are also released.  They are based on our small synthesis engine, which is included as a module of Festival, so each HTS voice can be used without any other HTS tools.
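As a rough sketch of what MLLR-based adaptation does (this is the standard maximum likelihood linear regression formulation, not a description of HTS internals): the Gaussian mean vectors of the trained model are mapped towards a target speaker by linear transforms that are shared across many distributions and estimated from a small amount of adaptation data,

 \hat{\mu} = A\mu + b = W\xi, \qquad \xi = [1,\ \mu^\top]^\top, \qquad W = [\,b \ \ A\,],

where W is chosen to maximize the likelihood of the adaptation data.  In adaptive training the same transforms are estimated jointly with the model itself, while MAP adaptation instead interpolates each mean between its prior value and the adaptation-data statistics.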

> For training Japanese voices, a demo script using the Nitech database is also provided.  Japanese voices trained by this script can be used with [[GalateaTalk:http://hil.t.u-tokyo.ac.jp/~galatea/]], the speech synthesis module of an open-source toolkit for anthropomorphic spoken dialogue agents developed in the [[Galatea project:http://hil.t.u-tokyo.ac.jp/~galatea/]].  An HTS voice for Galatea trained by the demo script is also released.
