
[hts-users:00776] Re: Questions on training and flat pitch pattern


Hi,

lixiulin wrote (2007/08/09 10:31):

We noticed that the RAM and swap (3 GB in total) are used up when copying monophone models to full-context ones, and the process then fails to malloc memory.

On 32-bit Linux, 3 GBytes is the upper limit of memory a single process can address.

As a result, it can't finish this step or proceed to the following tree-based clustering procedures. We previously trained a model using 1,500 sentences, and it ran well. The problem may be how to reduce the memory consumption in the clone procedure.

It is very difficult to reduce the memory consumption of the clone procedure, because your model itself consumes more than 3 GBytes.
Internally we use some tricks to reduce memory consumption during the training process.
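To see why the clone step can blow past the 3 GByte limit, a rough back-of-envelope estimate helps: when every distinct full-context label receives its own copy of the monophone parameters, memory grows linearly with the number of labels. The sketch below is purely illustrative; the label count, state count, and per-state parameter count are hypothetical and not taken from this thread.

```python
# Back-of-envelope estimate of memory needed when each full-context
# label gets a private copy of the monophone HMM parameters.
# All numbers below are hypothetical, for illustration only.

def clone_memory_bytes(n_labels, n_states, params_per_state, bytes_per_param=8):
    # Each state stores a mean and a variance per parameter dimension,
    # hence the factor of 2; bytes_per_param assumes double precision.
    params_per_model = n_states * params_per_state * 2
    return n_labels * params_per_model * bytes_per_param

# e.g. 300,000 distinct full-context labels, 5 emitting states,
# 75 parameters per state (spectrum + F0 + deltas) -- hypothetical values
estimate = clone_memory_bytes(300_000, 5, 75)
print(f"{estimate / 2**30:.2f} GiB")  # prints "1.68 GiB"
```

Even before counting HTK's per-macro bookkeeping overhead, a large corpus with hundreds of thousands of full-context labels pushes the cloned model set toward, and past, the 32-bit address-space ceiling.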

Regards,

Heiga ZEN (Byung Ha CHUN)

--
------------------------------------------------
Heiga ZEN     (in Japanese pronunciation)
Byung Ha CHUN (in Korean pronunciation)

Department of Computer Science and Engineering
Nagoya Institute of Technology
Gokiso-cho, Showa-ku, Nagoya 466-8555 Japan

http://www.sp.nitech.ac.jp/~zen
------------------------------------------------

Follow-Ups
[hts-users:00782] files, Tamer Fares
References
[hts-users:00775] Reply: [hts-users:00772] Re: Questions on training and flat pitch pattern, lixiulin