Automatic referent resolution of deictic and anaphoric expressions
Computational Linguistics
User interactions with everyday applications as context for just-in-time information access
Proceedings of the 5th international conference on Intelligent user interfaces
Query-Free Information Retrieval
IEEE Expert: Intelligent Systems and Their Applications
Just-in-time information retrieval agents
IBM Systems Journal
World Wide Web
COLING '02 Proceedings of the 19th international conference on Computational linguistics - Volume 2
From Searching to Browsing through Multimodal Documents Linking
ICDAR '05 Proceedings of the Eighth International Conference on Document Analysis and Recognition
Online and off-line visualization of meeting information and meeting support
The Visual Computer: International Journal of Computer Graphics
Graphical representation of meetings on mobile devices
Proceedings of the 10th international conference on Human computer interaction with mobile devices and services
The AMI meeting corpus: a pre-announcement
MLMI'05 Proceedings of the Second international conference on Machine Learning for Multimodal Interaction
A Tangible Mixed Reality Interface for the AMI Automated Meeting Assistant
Proceedings of the Symposium on Human Interface 2009 and the Conference on Universal Access in Human-Computer Interaction, Part I: Held as Part of HCI International 2009
Recognition and understanding of meetings
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
The ACLD: speech-based just-in-time retrieval of meeting transcripts, documents and websites
Proceedings of the 2010 international workshop on Searching spontaneous conversational speech
Designing conversation-context recommendation display to support opportunistic search in meetings
Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia
The AMIDA Automatic Content Linking Device (ACLD) is a just-in-time document retrieval system for meeting environments. The ACLD listens to a meeting and displays information about the documents from the group's history that are most relevant to what is being said. Participants can view an outline or the full content of a document if it seems potentially useful at that point in the meeting. The ACLD proof-of-concept prototype places meeting-related documents and segments of previously recorded meetings in a repository and indexes them. During a meeting, the ACLD continually retrieves the documents that are most relevant to keywords extracted automatically from the current meeting speech. The current prototype simulates the real-time speech recognition that will be available in the near future. The software components required to achieve these functions communicate through the Hub, a client/server architecture for real-time annotation exchange and storage. Results and feedback for the first ACLD prototype are outlined, together with plans for its future development within the AMIDA EU integrated project. Potential users of the ACLD supported the overall concept, and provided feedback suggesting improvements to the user interface and access to documents beyond the group's own history.
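The retrieval pipeline the abstract describes (index a repository of meeting documents, then repeatedly query it with keywords spotted in the live speech) can be illustrated with a minimal TF-IDF sketch. This is not the ACLD implementation; the document names, texts, and function names below are hypothetical, and the scoring is a generic bag-of-words scheme standing in for whatever the actual system uses.

```python
import math
from collections import Counter

# Hypothetical mini-repository standing in for the ACLD's indexed
# documents and past-meeting segments (contents are illustrative only).
DOCS = {
    "design_spec.txt": "remote control design battery button interface sketch",
    "budget.txt": "project budget cost estimate battery supplier pricing",
    "meeting_jan.txt": "discussion of interface layout and button placement",
}

def build_index(docs):
    """Compute a TF-IDF weight for every (document, term) pair."""
    n = len(docs)
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    df = Counter()                      # document frequency per term
    for tokens in tokenized.values():
        df.update(set(tokens))
    index = {}
    for name, tokens in tokenized.items():
        tf = Counter(tokens)
        index[name] = {
            term: (count / len(tokens)) * math.log(n / df[term])
            for term, count in tf.items()
        }
    return index

def retrieve(index, keywords, k=2):
    """Rank documents by summed TF-IDF weight of the spotted keywords,
    mimicking one just-in-time retrieval step during a meeting."""
    scores = {
        name: sum(weights.get(kw.lower(), 0.0) for kw in keywords)
        for name, weights in index.items()
    }
    ranked = sorted((n for n in scores if scores[n] > 0),
                    key=lambda n: -scores[n])
    return ranked[:k]

index = build_index(DOCS)
# Keywords as they might be extracted from the current speech segment.
print(retrieve(index, ["battery", "button"]))
```

In the real system this query step would run continually against ASR output, with results pushed to the user interface via the Hub rather than printed.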