
[hts-users:02067] Re: Audio-Visual speech synthesis


Hi,

Let me add a couple of older references.

Brand, M., "Voice Puppetry", Proc. ACM SIGGRAPH, ISBN 0-201-48560-5, pp. 21-28, August 1999.
http://www.cs.cmu.edu/~ph/869/papers/Brand-sigg99.pdf

Although it is not an audio-visual one, this paper
(trajectory HMM plus annealing!) is also nice:
http://www.mrl.nyu.edu/publications/style-machines/brand-hertzmann.pdf

Regards,
Junichi Yamagishi
CSTR

On 3 Jul 2009, at 11:14, Heiga ZEN (Byung Ha CHUN) wrote:

Hi,

Girish Malkarnenkar wrote:

I am trying to synthesise visual speech with HTS by replacing the MGC files with facial-parameter files. I would appreciate it if someone who has done something similar could share their experience.

Bailly, G., O. Govokhina, G. Breton, F. Elisei and C. Savariaux (2008). The trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge. Interspeech, Brisbane, Australia.

http://www.icp.inpg.fr/ICP/publis/synthese/_gb/gb_IS08.pdf
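On the practical side, swapping the MGC stream for facial parameters usually only requires writing the feature vectors in the same file format the HTS training scripts expect. A minimal sketch, assuming the headerless little-endian float32 layout used by the SPTK tools and the HTS demo's .mgc files (the function name and dimensions are hypothetical, for illustration only):

```python
import struct

def write_raw_floats(frames, path):
    """Write a frame-by-frame parameter sequence as headerless
    little-endian float32 (the raw layout SPTK tools read/write,
    assumed here for the HTS demo's .mgc files)."""
    with open(path, "wb") as f:
        for frame in frames:
            # one float32 per coefficient, frames stored back to back
            f.write(struct.pack("<%df" % len(frame), *frame))

# hypothetical 3-dimensional facial parameters for 2 frames
frames = [[0.1, 0.2, 0.3], [0.15, 0.25, 0.35]]
write_raw_floats(frames, "face.mgc")
```

The window (delta/delta-delta) and variance files would then be regenerated for the new feature dimension before retraining.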

Regards,

Heiga ZEN (Byung Ha CHUN)

--
--------------------------
Heiga ZEN (Byung Ha CHUN)
Speech Technology Group
Cambridge Research Lab
Toshiba Research Europe
phone: +44 1223 436975







Follow-Ups
[hts-users:02069] Re: Audio-Visual speech synthesis, Girish Malkarnenkar
References
[hts-users:02063] Audio-Visual speech synthesis, Girish Malkarnenkar
[hts-users:02066] Re: Audio-Visual speech synthesis, Heiga ZEN (Byung Ha CHUN)