Generating cartoon-style summary of daily life with multimedia mobile devices
IEA/AIE'07 Proceedings of the 20th international conference on Industrial, engineering, and other applications of applied intelligent systems
Notable developments in pervasive and wireless technology now make it possible to collect enormous amounts of sensor data from each individual. With context-aware technologies, these data can be summarized into context data that support an individual's reflection on his or her own memories and communication between individuals. To improve such reflection and communication, this paper proposes an automatic cartoon-generation method designed for fun. Cartoons are a suitable medium for reflecting on and communicating one's memories, especially their emotional aspects, and considering fun during generation further boosts this advantage. To make the generated cartoons funnier, both diversity and consistency are taken into account during generation. For the automated generation of diverse and consistent cartoons, context data representing the user's behavioral and mental state are exploited. From this context information and a predefined user profile, the similarity between each context and the candidate cartoon images is calculated; the images with the highest similarity are selected and merged into cartoon cuts. The selected cuts are then arranged under constraints that preserve the consistency of the cartoon story. Several operational examples are used to evaluate the diversity and consistency of the proposed method.
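The abstract's pipeline (compute context-to-image similarity from context data and a user profile, pick the most similar images as cuts, order the cuts for story consistency) can be sketched as follows. This is only an illustrative reconstruction: the feature encoding, the use of cosine similarity, and all names (`select_cuts`, `image_library`, the example feature vectors) are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of similarity-based cartoon-cut selection.
# Feature names, the cosine-similarity measure, and the toy data are
# assumptions for illustration, not the paper's actual method.
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def select_cuts(context_vectors, image_library):
    """For each context vector (the user's behavioral/mental state at one
    time step), pick the cartoon image with the most similar feature
    vector, and return the chosen cuts in chronological order so the
    resulting story stays consistent."""
    cuts = []
    for t, ctx in enumerate(context_vectors):
        best = max(image_library,
                   key=lambda img: cosine_similarity(ctx, img["features"]))
        cuts.append((t, best["name"]))
    return cuts


# Toy example: features encode (activity, mood, location) strengths.
library = [
    {"name": "walking_happy", "features": [0.9, 0.8, 0.1]},
    {"name": "working_tired", "features": [0.2, 0.1, 0.9]},
]
contexts = [[1.0, 0.9, 0.0], [0.1, 0.0, 1.0]]
print(select_cuts(contexts, library))
# → [(0, 'walking_happy'), (1, 'working_tired')]
```

Keeping the cuts indexed by time step is the simplest consistency constraint; the paper's actual arrangement step presumably applies richer story-level constraints than chronological order alone.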