[MUSIC] Let's look at an example. This is a slightly modified version of the well-known zoo dataset from the UCI Machine Learning Repository. It contains 101 animals described in terms of 15 attributes. The attributes record things like whether an animal has hair, whether it has feathers, whether it lays eggs, and so on. Here we can see only part of this dataset. If we build the concept lattice of this formal context, it looks like this, and, of course, it's completely unreadable. If you work with it in Concept Explorer or in any other software, you may explore the lattice part by part: you can look at the concepts you are interested in and see which combinations of attributes are possible. But if you want a general picture of the zoo dataset, this is really not good. You can't read it, and you can't really get a general understanding of your dataset from a picture like this.

One way to make this picture more readable is to concentrate on the upper part of the lattice, that is, to look only at the most general concepts. So let's fix a certain threshold on the extent size and keep only the concepts that cover at least that many objects. Of course, these concepts don't always form a lattice: if we set the threshold above zero, then, most likely, we won't get the bottom concept, and the set of all remaining concepts won't have an infimum. So we add the bottom concept, and what we get is a lattice, called an iceberg lattice. It's called that because, if you look at the entire lattice diagram, it looks like an iceberg of which only the upper part is above the water; the rest is under the water. So, for the zoo dataset, let's say we want to look only at concepts that cover at least half of all the objects, at least 50%. Then this is what we get.
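To make the filtering step concrete, here is a minimal Python sketch. It uses a tiny made-up animal context (not the actual zoo data), enumerates all formal concepts naively by closing every attribute subset, keeps only concepts whose extent covers at least 50% of the objects, and then adds the bottom concept so the result is again a lattice:

```python
from itertools import combinations

# A tiny hypothetical animal context (not the real zoo data).
context = {
    "lion":   {"predator", "breathes", "backbone"},
    "eagle":  {"predator", "breathes", "backbone", "lays_eggs"},
    "frog":   {"breathes", "backbone", "lays_eggs"},
    "salmon": {"backbone", "lays_eggs"},
    "bee":    {"breathes", "lays_eggs"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(objs):
    """A' : attributes shared by all objects in objs."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

def extent(attrs):
    """B' : objects having all attributes in attrs."""
    return {g for g in objects if attrs <= context[g]}

# Naive concept enumeration: close every subset of attributes.
# (Fine for a toy context; real algorithms like Next Closure scale better.)
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        ext = extent(set(combo))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

# Iceberg lattice: keep concepts whose extent covers at least 50% of the
# objects, then add the bottom concept to restore the lattice structure.
min_support = 0.5 * len(objects)
iceberg = {c for c in concepts if len(c[0]) >= min_support}
bottom = (frozenset(extent(attributes)), frozenset(attributes))
iceberg.add(bottom)

for ext, inten in sorted(iceberg, key=lambda c: -len(c[0])):
    print(sorted(ext), sorted(inten))
```

The names and the toy context here are invented for illustration; only the filtering idea comes from the lecture.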
Here we can see the concept of predators, the concept of animals that lay eggs, the concept of those that breathe, those that have a backbone, and also some combinations, like animals that breathe and have a backbone. So we see all the largest groups of animals, all the largest combinations. Sometimes this is exactly what we want, because we don't see any small concepts, concepts that may be due to noise. If our dataset contains one noisy object with a rather strange and untypical object intent, we're not going to see it in the iceberg lattice.

How do we compute iceberg lattices? Well, we can adapt standard concept lattice construction algorithms to build an iceberg lattice. This is not possible for all lattice construction algorithms, but for some of them it is. Here is how we can do it for Next Closure. In Next Closure, we're going to compute the large extent corresponding to the lectically next intent after C′, where C is a subset of G; this C is what the Next Closure algorithm receives as input. Then we compute the intent corresponding to C, that is, C′, which is called A in our pseudocode. The algorithm then works as usual until it reaches the line where we check whether B is actually the lectically next intent after C′, that is, after A. There we add an additional check: we verify that B′, the extent corresponding to B, is large enough, that is, whether it satisfies our threshold. If it does, we output B′; otherwise, we continue with the next iteration of the algorithm.

But this is maybe not the most efficient way to compute iceberg lattices. It turns out that there are lots of other algorithms developed in the community dealing with association rule mining. They have lots of algorithms generating so-called frequent closed itemsets, which are precisely the concept intents corresponding to large extents. We're going to talk a little bit about association rule mining later. [MUSIC]
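The modification of Next Closure described above can be sketched in Python. This is an illustrative version on a small made-up context, not the lecture's exact pseudocode: `next_closure` enumerates all intents in lectic order, and `frequent_intents` adds the extra extent-size check before outputting a result (here it outputs the frequent intents; outputting the corresponding extents B′ would be analogous):

```python
# Hypothetical toy context; the list attrs_order fixes the lectic order.
context = {
    "lion":   {"predator", "breathes", "backbone"},
    "eagle":  {"predator", "breathes", "backbone", "lays_eggs"},
    "frog":   {"breathes", "backbone", "lays_eggs"},
    "salmon": {"backbone", "lays_eggs"},
}
objects = set(context)
attrs_order = ["predator", "breathes", "backbone", "lays_eggs"]

def extent(B):
    """B' : objects having all attributes in B."""
    return {g for g in objects if B <= context[g]}

def closure(B):
    """B'' : the smallest intent containing B."""
    ext = extent(B)
    if not ext:
        return set(attrs_order)
    return set.intersection(*(context[g] for g in ext))

def next_closure(A):
    """Lectically next intent after A, or None when A is the last one."""
    for i in reversed(range(len(attrs_order))):
        m = attrs_order[i]
        if m in A:
            continue
        prefix = set(attrs_order[:i])          # attributes smaller than m
        B = closure((A & prefix) | {m})
        if not (B & prefix) - A:               # no smaller attribute was added
            return B
    return None

def frequent_intents(min_extent):
    """Enumerate all intents, but output only those with a large extent."""
    out = []
    A = closure(set())
    while A is not None:
        if len(extent(A)) >= min_extent:       # the extra frequency check
            out.append(frozenset(A))
        A = next_closure(A)                    # always continue enumerating
    return out

print(frequent_intents(2))
```

Note that the frequency check only filters the output; the enumeration itself still visits every intent, which is why dedicated frequent-closed-itemset algorithms from association rule mining can be faster.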