Motivation: Regulation of gene expression in space and time directs its localization to a specific subset of cells during development. Systematic determination of the spatiotemporal dynamics of gene expression plays an important role in understanding the regulatory networks driving development. An atlas of the gene expression patterns of the fruit fly Drosophila melanogaster has been created by whole-mount in situ hybridization, and it documents the dynamic changes of gene expression patterns during Drosophila embryogenesis. The spatial and temporal patterns of gene expression are integrated by anatomical terms from a controlled vocabulary linking together intermediate tissues developed from one another. Currently, the terms are assigned to patterns manually. However, the number of patterns generated by high-throughput in situ hybridization is rapidly increasing, making it tempting to approach this problem with computational methods.

Results: In this article, we present a novel computational framework for annotating gene expression patterns using a controlled vocabulary. In the currently available high-throughput data, annotation terms are assigned to groups of patterns rather than to individual images. We propose to extract invariant features from images and construct pyramid match kernels to measure the similarity between sets of patterns. To exploit the complementary information conveyed by different features and to incorporate the correlation among patterns sharing common structures, we propose efficient convex formulations to integrate the kernels derived from the various features. The proposed framework is evaluated by comparing its annotations with those of human curators, and it achieves promising performance in terms of F1 score.

Contact: jieping.ye@asu.edu

Supplementary information: Supplementary data are available at Bioinformatics online.
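To make the kernel construction step concrete, below is a minimal sketch of an unnormalized pyramid match score between two sets of feature vectors, in the spirit of the pyramid match kernel the abstract refers to. It is not the authors' implementation: the feature range `[0, 1)^d`, the number of levels, and the halving weight per coarser level are all illustrative assumptions.

```python
import numpy as np
from collections import Counter

def _intersection(x, y, cell):
    """Histogram intersection after binning features into grid cells of side `cell`."""
    hx = Counter(map(tuple, np.floor(x / cell).astype(int)))
    hy = Counter(map(tuple, np.floor(y / cell).astype(int)))
    return sum(min(c, hy[b]) for b, c in hx.items() if b in hy)

def pyramid_match(x, y, levels=3):
    """Unnormalized pyramid match score between two sets of feature vectors,
    assumed to lie in [0, 1)^d. Matches found at finer grid resolutions
    receive higher weight; `levels=3` is an assumed default."""
    # intersections from finest bins (cell = 2**-levels) to coarsest (cell = 1)
    inter = [_intersection(x, y, 2.0 ** -l) for l in range(levels, -1, -1)]
    score = inter[0]  # matches already found at the finest level, weight 1
    for k in range(1, len(inter)):
        # matches first appearing at this coarser level, weight halved per step
        score += (inter[k] - inter[k - 1]) * 2.0 ** -k
    return score
```

The matrix of pairwise scores over all image groups yields one kernel per feature type; the convex integration step described in the abstract would then learn a weighted combination of these kernel matrices.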