One useful feature missing from today's database systems is an explain capability that lets users seek clarification on unexpected query results. Two types of unexpected results are of interest: the presence of unexpected tuples, and the absence of expected tuples (i.e., missing tuples). Clearly, it would be very helpful if users could pose follow-up why and why-not questions to seek clarification on, respectively, unexpected and expected-but-missing tuples in query results. While why questions can be addressed with established data provenance techniques, the problem of explaining why-not questions has received very little attention. Two explanation models have been proposed for why-not questions so far. The first explains a missing tuple t in terms of modifications to the database such that t appears in the query result with respect to the modified database. The second explains by identifying the data manipulation operator in the query evaluation plan that is responsible for excluding t from the result. In this paper, we propose a new paradigm for explaining a why-not question based on automatically generating a refined query whose result includes both the original query's result and the user-specified missing tuple(s). In contrast to the existing explanation models, our approach goes beyond merely identifying the "culprit" query operator responsible for the missing tuple(s), and it is useful for applications where it is not appropriate to modify the database to obtain the missing tuples.
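To make the refinement paradigm concrete, here is a minimal sketch (not the paper's actual algorithm) of the idea for the simple case of a selection query with numeric range predicates: the predicate bounds are widened just enough that a user-specified why-not tuple enters the result, while every original answer is preserved. The table, column names, and tuple values are hypothetical.

```python
# Hedged illustration of why-not explanation via query refinement.
# Assumption: the query is a conjunction of numeric range predicates;
# refinement = minimally widening each range to cover the missing tuple.

def refine_query(preds, why_not):
    """preds: {column: (lo, hi)}; why_not: dict for the missing tuple.
    Returns relaxed predicates whose result is a superset of the original."""
    refined = {}
    for col, (lo, hi) in preds.items():
        v = why_not[col]
        # Widen each bound only as far as needed to admit the why-not tuple.
        refined[col] = (min(lo, v), max(hi, v))
    return refined

def evaluate(rows, preds):
    """Apply the conjunctive range predicates to a list of row dicts."""
    return [r for r in rows
            if all(lo <= r[c] <= hi for c, (lo, hi) in preds.items())]

# Hypothetical data: a product table and a query for well-rated, mid-priced items.
rows = [
    {"name": "A", "price": 120, "rating": 4},
    {"name": "B", "price": 300, "rating": 5},
    {"name": "C", "price": 80,  "rating": 3},
]
preds = {"price": (100, 400), "rating": (4, 5)}    # original query: returns A, B
missing = {"name": "C", "price": 80, "rating": 3}  # user's why-not tuple

refined = refine_query(preds, missing)
original_answers = {r["name"] for r in evaluate(rows, preds)}
refined_answers = {r["name"] for r in evaluate(rows, refined)}
# Refined result contains all original answers plus the missing tuple.
assert original_answers <= refined_answers and "C" in refined_answers
```

The refined predicates themselves (here, `price >= 80` and `rating >= 3`) serve as the explanation: they show the user which query conditions excluded the expected tuple and by how much, without touching the underlying database.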