[MUSIC] The method we’ve just covered was developed outside formal concept analysis, and admittedly, it is somewhat at odds with the usual FCA approach. Let's talk a little bit about why this query-based learning can be useful at all, apart from the fact that it tells us something interesting about certain computational properties. One scenario is when we don't have enough data about the domain we study, but there are domain experts willing to share their knowledge about the domain; we use queries to extract information from them. Another scenario is when there is lots of data, more than we can handle with our usual algorithms, and this data is organized in a sort of distributed database or maybe spread over the Internet, but there are mechanisms for efficiently querying it. Or maybe we work with a mathematical domain, one with an infinite number of objects, say, the domain of loopless directed graphs, and we have procedures that can automatically prove theorems about the domain or present counterexamples from this domain to our hypotheses. Can we use the algorithm from the previous video to learn valid implications in situations like these? Well, not easily. The biggest problem, perhaps, is that the algorithm needs negative counterexamples. These counterexamples are not part of the domain; they are attribute combinations that never occur. Maybe they are not even real objects: they are descriptions of something that may not exist at all. We can't really expect a human expert to easily produce such attribute combinations. A computer program can search a database or the Internet for a positive counterexample to a hypothesis, but it's much more difficult to find something that doesn't exist. It may not always be easy to construct a graph that violates a certain conjecture, but it seems much harder to construct a, so to speak, non-graph that satisfies the conjecture. 
So finding positive counterexamples to implications is relatively easy (at least, in some domains), but finding negative counterexamples is usually very hard. But even membership queries present a problem for any realistic domain expert, unless we work with so-called Horn domains, that is, domains where the set of object intents is closed under intersection. In that case, when asked a membership query about an attribute set C, the domain expert only needs to check whether an object with intent C exists. In the general case, however, the expert has to establish whether C is closed, in other words, whether it is an intersection of object intents, and this is a potentially nontrivial task. The problem here is that the set of models of all the implications valid in the domain can be much larger than the set of attribute combinations feasible in the domain, and the oracle must be able to answer membership and equivalence queries with respect to this larger set of models. In the next few videos, we'll talk about attribute exploration, an alternative technique for learning implications with queries; it doesn't have a polynomial-time implementation, but it is far less demanding of the oracle, or domain expert. [MUSIC]
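To make the Horn-domain point above concrete, here is a minimal Python sketch (the context representation and all function names are my own illustration, not part of the lecture). It checks whether an attribute set C is closed, i.e., whether C equals the intersection of all object intents containing it (with the convention that the empty intersection is the full attribute set M), and contrasts this with the cheaper check that suffices when the set of intents is closed under intersection.

```python
def closure(C, intents, M):
    """Compute C'' in the context: the intersection of all object
    intents containing C, or the full attribute set M if none do."""
    supersets = [g for g in intents if C <= g]
    result = frozenset(M)
    for g in supersets:
        result &= g
    return result

def membership(C, intents, M):
    """General membership oracle: C is a model of every implication
    valid in the domain iff C is closed, i.e., C == C''."""
    C = frozenset(C)
    return closure(C, intents, M) == C

def membership_horn(C, intents):
    """Horn-domain shortcut: when the intents are closed under
    intersection, C is closed iff some object has intent exactly C
    (ignoring the full attribute set M, which is always closed)."""
    return frozenset(C) in intents

# A context whose two intents are NOT closed under intersection:
# {a} = {a,b} ∩ {a,c} is closed, yet no object has intent {a},
# so the general oracle and the Horn shortcut disagree on {a}.
M = {"a", "b", "c"}
intents = {frozenset({"a", "b"}), frozenset({"a", "c"})}
print(membership({"a"}, intents, M))       # {a} is an intersection of intents
print(membership_horn({"a"}, intents))     # but no object has intent {a}
```

This is exactly the difficulty the lecture points out: in the general case the expert must reason about intersections of intents, not merely look up existing objects.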