Now we will check how the different information systems should be implemented to be reliable, maintainable, and scalable. I will detail some issues regarding transactional applications with structured data, analytical applications with structured data, analytical applications with unstructured data, and streaming applications with a high volume of unstructured data in real time, and explain how each of these information systems should be implemented in order to be reliable, maintainable, and scalable.

In the case of transactional applications with structured data, they can be implemented under different architectures. In the first place, they can reside on a parallel relational database system. Remember that parallelism divides a big problem into many smaller ones to be solved in parallel. This is useful in applications that have to process an extremely large number of transactions per second, of the order of thousands of transactions per second. We can see a typical architecture of a parallel database system. In the second place, transactional applications can be implemented on a cluster. They can entail separate sets of nodes for clients, database servers, and SQL processing, as well as dedicated server and client software for management tasks. This architecture dramatically increases the available net processing power, reduces cost, enables more evenly balanced [inaudible], and delivers more scalable and reliable data management. In the third place, these transactional applications can be implemented on an in-memory relational database.

In the case of analytical applications with structured data, they might be implemented on a columnar database, an in-memory database, or an in-memory columnar database. In the case of a columnar database, SAP Sybase IQ has been used for business intelligence, data warehousing, and data marts. Each original relational table is divided into columns, and each column is stored in its own segment.
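As a minimal sketch of this column-wise layout, with made-up table data, each column of a relational table can be kept in its own segment, so a query over one column never reads the others:

```python
# Toy column store: each column of a relational table is kept in its
# own segment (here, a plain Python list). Table data is invented.
rows = [
    ("Alice", "Paris", 30),
    ("Bob",   "Lima",  25),
    ("Carol", "Oslo",  41),
]

# The row-oriented table split into one segment per column.
columns = {
    "name": [r[0] for r in rows],
    "city": [r[1] for r in rows],
    "age":  [r[2] for r in rows],
}

# A query over a single column only touches that column's segment.
average_age = sum(columns["age"]) / len(columns["age"])
print(average_age)  # 32.0
```

This is only an illustration of the storage idea; a real column store like SAP Sybase IQ adds per-segment compression and indexing on top of this layout.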
Each segment is compressed individually, which results in high compression rates. Queries accessing only a few columns, in contrast to a SELECT * query, do not have to touch the segments of the other columns.

An in-memory database is a database management system that primarily relies on main memory for computer data storage. In-memory databases are faster than disk-optimized databases because disk access is slower than memory access, and the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk. In the case of analytical applications, the in-memory database can also be columnar.

If we want the best of both worlds, we can implement an analytical application with structured data on an in-memory columnar database. A columnar in-memory database needs less main memory for its hot storage than a traditional database uses for its cache in main memory. SAP HANA gives the ability to keep the data either in memory or on disk in a columnar format. Data is not duplicated. Dynamic data tiering helps users choose memory for hot data and disk for warm data, helping to strike the right price-performance balance.

If we require analyzing not only structured but also unstructured data, a one-model database system would not be the best option. In the first place, an in-memory multi-model database can store and manage any kind of data. We can see in the following video an SAP HANA architecture, where any kind of device or application is integrated, supporting spatial data, text, Hadoop, unstructured data, transactional data, etc. In the second place, we can use the Hadoop framework to support big data and data mining. Apache Mahout provides analysis through clustering, classification, or recommendation data mining tasks in order to analyze big data on HDFS.
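Mahout itself runs these algorithms at scale on Hadoop; as a toy illustration of the clustering idea behind it, here is a minimal one-dimensional k-means on plain Python lists (all data and centroids are invented):

```python
# Toy k-means in pure Python, illustrating the kind of clustering task
# that Apache Mahout runs at scale on HDFS (this sketch is in-memory only).
def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious groups of points converge to two centroids.
print(kmeans_1d([1.0, 1.5, 2.0, 10.0, 11.0, 12.0], [0.0, 5.0]))  # [1.5, 11.0]
```

The point of frameworks like Mahout is that the same assignment and update steps are distributed across the cluster, so the data never has to fit on one machine.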
If we require streaming analytical applications with a high volume of unstructured data in real time, then we need to analyze which architecture really suits our specifications. For example, if we consider using Hadoop for big data and SAMOA, YARN, Spark, or any other streaming data processor to support data streaming, we need to check that the processing does not fail the response-time requirement, degrading the real-time expectation. Another architecture that can support streaming applications with a high volume of unstructured data in real time is an in-memory multi-model database, which can store and manage graphs, documents, spatial data, columnar, and structured data.

This week, we have learned different architectures in order to design scalable, maintainable, and reliable data-intensive applications. In the next course, it will be time to design a data-intensive application with all the knowledge we have learned from past courses. Good luck.