Hi, javi p wrote:
I'm trying to train a large corpus with HTS 2.1, and at some point during the re-estimation of the full-context models I get the following error:

AllocBlock: Cannot allocate block data of 5000000 bytes

This didn't happen before with other data, although in this case the number of models in full.list is quite large (118294) and fullcontext.mmf is around 900 MB. The process reaches more than 2.9 GB of memory during the HERest call. The machine runs Linux, with 3 GB of RAM and 4 GB of swap.

I don't know whether this is a problem of having a lot of data or something else. I've checked the cmp files and they're fine.
I guess you're using 32-bit Linux. The upper memory limit per process on 32-bit Linux is 3 GB. See the link below for details:
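A quick way to confirm this is to check whether the HERest binary was built as a 32-bit executable, since a 32-bit ELF caps a Linux process at roughly 3 GB of user address space regardless of how much RAM and swap the machine has. A minimal sketch (it assumes HERest is on your PATH, and falls back to /bin/ls just to show the output format):

```shell
# Report the word size of the running userland: prints 32 or 64
getconf LONG_BIT

# Inspect the ELF class of the HERest binary; a 32-bit build is the
# likely cause of an AllocBlock failure once the process nears 3 GB.
# (/bin/ls is used as a stand-in if HERest is not installed.)
file "$(command -v HERest || echo /bin/ls)"
```

If `file` reports "ELF 32-bit", rebuilding HTK/HTS on a 64-bit system (or reducing the model set size) would be the way forward.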
http://www.dansdata.com/askdan00015.htm

Regards,
Heiga ZEN (Byung Ha CHUN)
--
Heiga ZEN (Byung Ha CHUN)
Speech Technology Group
Cambridge Research Lab
Toshiba Research Europe
phone: +44 1223 436975