
[hts-users:04159] Re: HHEd to process multiple workloads


Hi Dmitry,

Are you trying to parallelize the clustering process?
If so, I recommend using OpenMPI. It lets you create multiple processes (not threads) of HHEd, each clustering a different stream. I have done this before, but that code is under a commercial license, so I cannot share it. I hope this helps.
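Roughly what I mean, as a minimal untested sketch (the per-rank .hed scripts and output MMF names below are invented placeholders, not files from this thread):

    // Minimal sketch: each MPI rank launches its own HHEd process on a
    // different stream/workload.  The per-rank .hed scripts and output
    // names are placeholders.
    #include <mpi.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // One independent HHEd invocation per rank, e.g. one per stream.
        char cmd[1024];
        std::snprintf(cmd, sizeof(cmd),
                      "HHEd -A -B -C trn.cnf -D -T 1 -p -i "
                      "-H cmp.re_clustered.mmf -w cmp.out.%d.mmf "
                      "mku.stream%d.hed full.list", rank, rank);

        int rc = std::system(cmd);   // separate OS process, no shared state
        std::printf("rank %d: HHEd exited with %d\n", rank, rc);

        MPI_Finalize();
        return 0;
    }

Built with mpicxx and launched with something like "mpirun -np 4 ./cluster_streams", each rank clusters its stream in a separate process, so nothing inside HHEd ever has to be re-initialized.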

--
Xingyu

On 10/31/2014 12:53 PM, Dmitry Bakuntsev wrote:
Hi,



Summary:
I'm trying to modify HHEd so that it processes multiple workloads while being initialized only once. The error "ERROR [+9999]  AssignStructure: incompatible tree" occurs on the second workload.




Details:

The full original command line looks like the following:

HHEd -A -B -C trn.cnf -D -T 1 -p -i -H cmp.re_clustered.mmf -w cmp.re_clustered_all.mmf.1mix mku.cmp.hed full.list

... where"-w cmp.re_clustered_all.mmf.1mix" and "mku.cmp.hed" are the parameters that define "a workload". The goal is to initialize an instance of HHEd once using the rest of the command line parameters, and then process sequentially multiple sets of the "workloads". Examples of trn.cnf and mku.cmd.hed files are attached.


A simplified version of the source code is attached as HHEdPP.cpp. HHEd_Init() is called once, and HHEd_DoWork() is then called repeatedly with different parameters. On the second HHEd_DoWork() call, the error "ERROR [+9999]  AssignStructure: incompatible tree" is raised. Apparently some clean-up must be done after each HHEd_DoWork() call, but I have been unable to figure out how to do that correctly.

Any insight into how hset->swidth is initialized and how to clean it up properly for reuse is greatly appreciated!
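My guess is that some teardown and rebuild of the model heap and HMM set is needed between calls, roughly like the untested sketch below. Here hmmHeap, hset, hmmListFn, hmmDir and hmmExt are the globals I assume the HHEd code uses, the heap parameters are copied guesses, and the error code is arbitrary; everything should be checked against the real HHEd.c/HHEdPP.cpp:

    /* Untested sketch of per-workload clean-up, assuming HHEd keeps its
       models on one MSTAK heap (hmmHeap) and in one global HMMSet (*hset).
       Intended to sit inside HHEdPP.cpp, which already pulls in the usual
       HTK header chain (HShell.h, HMem.h, ..., HModel.h). */

    extern MemHeap hmmHeap;                   /* assumed: heap holding the HMM set */
    extern HMMSet *hset;                      /* assumed: HHEd's global HMM set    */
    extern char *hmmListFn, *hmmDir, *hmmExt; /* assumed: list / dir / ext args    */

    static void HHEd_ResetModels(void)
    {
        /* Throw away everything loaded for the previous workload. */
        DeleteHeap(&hmmHeap);
        CreateHeap(&hmmHeap, "Model Heap", MSTAK, 1, 1.0, 40000, 400000);

        /* Re-create the HMM set so hset->swidth etc. are set up from scratch. */
        CreateHMMSet(hset, &hmmHeap, TRUE);
        /* NOTE: any MMFs given with -H would also need to be re-registered
           with AddMMF(hset, fn) here, before MakeHMMSet/LoadHMMSet. */
        if (MakeHMMSet(hset, hmmListFn) < SUCCESS)
            HError(9999, "HHEd_ResetModels: MakeHMMSet failed");
        if (LoadHMMSet(hset, hmmDir, hmmExt) < SUCCESS)
            HError(9999, "HHEd_ResetModels: LoadHMMSet failed");
    }

Calling something like HHEd_ResetModels() at the end of each HHEd_DoWork() might avoid the stale-structure problem, but I have not verified this.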



Regards,
Dmitry.

HHEdPP.cpp
Download: http://u.163.com/t/bK3nRPwKx

Preview: http://u.163.com/t/PaxpmkUrN


trn.cnf
Download: http://u.163.com/t/vWvvFn


mku.cmp.hed
Download: http://u.163.com/t/qE5BqX2YKX





Follow-Ups
[hts-users:04160] Re: HHEd to process multiple workloads, Dmitry Bakuntsev
References
[hts-users:04158] HHEd to process multiple workloads, Dmitry Bakuntsev