Introducing the IBM SPSS Modeler, this book guides readers through data mining processes and presents relevant statistical methods. There is a special focus on step-by-step tutorials and well-documented examples that help demystify complex mathematical algorithms and computer programs. The variety of exercises and solutions as well as an accompanying website with data sets and SPSS Modeler streams are particularly valuable. While intended for students, the simplicity of the Modeler makes the book useful for anyone wishing to learn about basic and more advanced data mining, and put this knowledge into practice.
This accessible text/reference provides a general introduction to probabilistic graphical models (PGMs) from an engineering perspective. The book covers the fundamentals for each of the main classes of PGMs, including representation, inference and learning principles, and reviews real-world applications for each type of model. These applications are drawn from a broad range of disciplines, highlighting the many uses of Bayesian classifiers, hidden Markov models, Bayesian networks, dynamic and temporal Bayesian networks, Markov random fields, influence diagrams, and Markov decision processes. Features: presents a unified framework encompassing all of the main classes of PGMs; describes the practical application of the different techniques; examines the latest developments in the field, covering multidimensional Bayesian classifiers, relational graphical models and causal models; provides exercises, suggestions for further reading, and ideas for research or programming projects at the end of each chapter.
In many decision problems, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision-maker. One common type is the monotonicity constraint, which states that the greater an input is, the greater the output must be, all other inputs being equal. Well-known examples include investment decisions, medical diagnosis, and selection and evaluation tasks. However, the models obtained by traditional data mining techniques alone often do not meet these constraints. This book therefore provides a thorough study of the incorporation of monotonicity constraints into the data mining process, improving knowledge discovery and facilitating decision making for end-users by deriving more accurate and plausible decision models. The main contributions include a novel procedure to test the degree of monotonicity of a data set, a greedy algorithm to transform non-monotone data into monotone data, and extended and novel approaches to building monotone decision models. The theoretical and empirical findings should be valuable to graduates, researchers and practitioners involved in the study and development of data mining systems.
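The book's own testing procedure is not reproduced here, but the underlying idea can be illustrated with a minimal sketch: for every pair of examples whose feature vectors are comparable (one dominates the other component-wise), check whether the labels respect the same order, and report the fraction of pairs that do. The function name and the naive pairwise loop are illustrative assumptions, not the book's algorithm.

```python
from itertools import combinations

def monotonicity_degree(X, y):
    """Fraction of comparable example pairs that respect monotonicity.

    A pair (i, j) is comparable when every feature of X[i] is <= the
    corresponding feature of X[j]; it is monotone when also y[i] <= y[j].
    Returns 1.0 when no pair is comparable.
    """
    comparable = monotone = 0
    for i, j in combinations(range(len(X)), 2):
        if all(a <= b for a, b in zip(X[i], X[j])):
            pass                      # already oriented: X[i] <= X[j]
        elif all(b <= a for a, b in zip(X[i], X[j])):
            i, j = j, i               # reorient so X[i] <= X[j]
        else:
            continue                  # incomparable pair, skip
        comparable += 1
        if y[i] <= y[j]:
            monotone += 1
    return monotone / comparable if comparable else 1.0

# Toy data set: 5 comparable pairs, 2 of which violate monotonicity.
X = [(1, 2), (2, 3), (2, 1), (3, 3)]
y = [1, 2, 2, 1]
print(monotonicity_degree(X, y))  # → 0.6
```

The O(n²) pairwise scan is fine for small data sets; for large ones, the same quantity would need a more efficient comparability structure.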
Today, reliable software systems are the basis of any business or company. The continuous further development of those systems is the central component of software evolution. It requires a huge amount of time, manpower, and financial resources. The challenges are the size, age, and heterogeneity of those software systems. Christian Wagner addresses software evolution: the inherent problems and uncertainties in the process. He presents a model-driven method which leads to a synchronization between source code and design. As a result, the model layer becomes the central part in further evolution and the source code becomes a by-product. For the first time, a model-driven procedure for the maintenance and migration of software systems is described. The procedure is composed of a model-driven reengineering phase and a model-driven migration phase. The application and effectiveness of the procedure are confirmed with a reference implementation applied to four exemplary systems.
Limited to a strict interpretation of its definition, open source consists of a set of rules which apply to a piece of software and which specify how the software and derivatives of it may be used. However, it is widely seen as much more than a simple licensing agreement: it is a "philosophy", a "production model", a "way of organizing projects", or even "a new innovation model". But how are open source projects organized, and how is work coordinated and distributed between their developers? This work contributes by examining actual source code changes, comparing 29 projects. Which developers collaborate in the same files, and which work exclusively in their own domain? Looking for patterns across projects, this work attempts to identify coordination styles in open source projects.
Data alone are worth almost nothing. While data collection is increasing exponentially worldwide, a clear distinction between retrieving data and obtaining knowledge has to be made. Data are retrieved while measuring phenomena or gathering facts. Knowledge refers to data patterns and trends that are useful for decision making. Data interpretation creates a challenge that is particularly present in system identification, where thousands of models may explain a given set of measurements. Manually interpreting such data is not reliable. One solution is to use data mining. This book thus proposes an integration of techniques from data mining, a field of research where the aim is to find knowledge from data, into an existing multiple-model system identification methodology. In addition to providing information about the candidate model space, data mining is found to be a valuable tool for supporting decisions related to subsequent sensor placement.
The revised edition of this book offers an extended overview of quantum walks and explains their role in building quantum algorithms, in particular search algorithms. Updated throughout, the book focuses on core topics including Grover's algorithm and the most important quantum walk models, such as the coined, continuous-time, and Szegedy's quantum walk models. There is a new chapter describing the staggered quantum walk model. The chapter on spatial search algorithms has been rewritten to offer a more comprehensive approach, and a new chapter describing the element distinctness algorithm has been added. There is a new appendix on graph theory highlighting its importance to quantum walks. As before, the reader will benefit from the pedagogical elements of the book, which include exercises and references to deepen the reader's understanding, and guidelines for the use of computer programs to simulate the evolution of quantum walks. Review of the first edition: "The book is nicely written, the concepts are introduced naturally, and many meaningful connections between them are highlighted. The author proposes a series of exercises that help the reader get some working experience with the presented concepts, facilitating a better understanding. Each chapter ends with a discussion of further references, pointing the reader to major results on the topics presented in the respective chapter." - Florin Manea, zbMATH.
This book introduces a computationally feasible, cognitively inspired formal model of concept invention, drawing on Fauconnier and Turner's theory of conceptual blending, a fundamental cognitive operation. The chapters present the mathematical and computational foundations of concept invention, discuss cognitive and social aspects, and further describe concrete implementations and applications in the fields of musical and mathematical creativity. Featuring contributions from leading researchers in formal systems, cognitive science, artificial intelligence, computational creativity, mathematical reasoning and cognitive musicology, the book will appeal to readers interested in how conceptual blending can be precisely characterized and implemented for the development of creative computational systems.
Hardware architecture and software solutions for automated deployment systems in the cloud, as well as monitoring of Internet multiservice systems in a globally distributed infrastructure, are presented. New integrated models and methods for auto-deployment, operations, change management, monitoring, and incident and problem management are proposed. Examples of real-world implementations in leading international telecommunications companies are provided. The monograph is both scientific and practical, and is recommended for IT specialists dealing with provisioning worldwide web services, globally distributed operating centers, big data traffic, fast-growing scalability, virtualization, and automation of deployments with 24/7 technical support.
This book constitutes the refereed proceedings of the 13th CCF Conference on Computer Supported Cooperative Work and Social Computing, ChineseCSCW 2018, held in Guilin, China, in August 2018. The 33 revised full papers presented along with the 13 short papers were carefully reviewed and selected from 150 submissions. The papers in this volume are organized in topical sections on: collaborative models, approaches, algorithms, and systems; social computing; and data analysis and machine learning for CSCW and social computing.