The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence so that it tightly couples with the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, to a dataset of musical score fragments hand-labeled by the listener on a semantic differential scale of bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, is then used to generate variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones out of the GA-obtained variants. Empirical results show that the system is 80.6% accurate on average in classifying the affective labels of the musical structures and that it is able to automatically generate musical pieces that stimulate four kinds of impressions, namely, favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.
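The genetic-algorithm step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the affect model is a stand-in for the induced FOIL/Diverse-Density classifier, the music-theory term is reduced to an in-scale check, and the integer pitch encoding, `affect_score`, and `theory_penalty` are all illustrative assumptions.

```python
import random

random.seed(0)

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C-major pitch classes (assumed encoding)

def affect_score(seq):
    """Stand-in for the learned affect model: here it simply rewards
    sequences whose pitch classes stay near the tonic triad."""
    triad = {0, 4, 7}
    return sum(1 for p in seq if p % 12 in triad) / len(seq)

def theory_penalty(seq):
    """Toy music-theory constraint: penalize out-of-scale tones."""
    return sum(1 for p in seq if p % 12 not in SCALE) / len(seq)

def fitness(seq):
    # Fitness combines the (stand-in) affect model with basic theory,
    # mirroring the paper's model-plus-music-theory fitness function.
    return affect_score(seq) - theory_penalty(seq)

def mutate(seq, rate=0.2):
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in seq]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed_seq, pop_size=30, generations=50):
    # Keep the original fragment in the population so elitism guarantees
    # the result is never worse than the seed.
    pop = [list(seed_seq)] + [mutate(seed_seq, rate=0.5)
                              for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

seed = [0, 4, 7, 12, 7, 4, 0, 5]  # an original fragment in C major
best = evolve(seed)                # a GA-obtained variant of the fragment
```

In the full system, `best` would then be elaborated with chordal and non-chordal tones; here it is simply the highest-fitness variant found.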