
[hts-users:02064] Re: Audio-Visual speech synthesis

On 3 Jul 2009, at 09:05, Girish Malkarnenkar wrote:

Dear Sir/Madam,

I am trying to synthesise visual speech with HTS by replacing the MGC feature files with facial parameters. I would appreciate it if anyone who has done something similar could share their experience.
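As background for the substitution described above: HTS and the SPTK tools conventionally exchange parameter streams as headerless little-endian float binaries, one frame of coefficients after another, so facial parameters can be dropped in where MGC files are expected as long as the frame dimension is declared consistently in the training configuration. The sketch below is illustrative only — the function names and the 30-dimensional example are assumptions, not part of the HTS distribution.

```python
# Minimal sketch, assuming HTS/SPTK-style feature files: raw
# little-endian float32 values, frame after frame, no header.
# write_feature_file / read_feature_file are hypothetical helper names.
import struct

def write_feature_file(path, frames):
    """Write frames (a list of equal-length float vectors) as a raw
    little-endian float32 binary, the layout HTS tools typically read."""
    dim = len(frames[0])
    with open(path, "wb") as f:
        for frame in frames:
            # Every frame must share the dimension declared to HTS.
            assert len(frame) == dim
            f.write(struct.pack("<%df" % dim, *frame))

def read_feature_file(path, dim):
    """Read such a file back into a list of dim-length frames."""
    frames = []
    with open(path, "rb") as f:
        while True:
            buf = f.read(4 * dim)
            if not buf:
                break
            frames.append(list(struct.unpack("<%df" % dim, buf)))
    return frames

# Example: two frames of a hypothetical 30-dimensional facial
# parameter stream, written in place of an MGC file.
frames = [[0.01 * i for i in range(30)], [0.02 * i for i in range(30)]]
write_feature_file("/tmp/sample.fap", frames)
restored = read_feature_file("/tmp/sample.fap", dim=30)
```

Since the file carries no header, the frame dimension (and frame shift) must match whatever is stated in the HTS training scripts, or the data will be misparsed silently.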

Gregor Hofer, Junichi Yamagishi, and Hiroshi Shimodaira. Speech-driven lip motion generation with a trajectory HMM. In Proc. Interspeech 2008, pages 2314-2317, Brisbane, Australia, September 2008.

Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Speech-driven head motion synthesis based on a trajectory model. Poster at SIGGRAPH 2007, 2007.

Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Lip motion synthesis using a context-dependent trajectory hidden Markov model. Poster at SCA 2007, 2007.

All are available from http://www.cstr.ed.ac.uk/publications/users/s0343879.html

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

[hts-users:02065] Re: Audio-Visual speech synthesis, Keiichi Tokuda
[hts-users:02063] Audio-Visual speech synthesis, Girish Malkarnenkar