A decision procedure is an algorithm that, given a decision problem, terminates with a correct yes/no answer. Here, the authors focus on theories that are expressive enough to model real problems, but are still decidable. Specifically, the book concentrates on decision procedures for first-order theories that are commonly used in automated verification and reasoning, theorem-proving, compiler optimization and operations research. The techniques described in the book draw from fields such as graph theory and logic, and are routinely used in industry. The authors introduce the basic terminology of satisfiability modulo theories and then, in separate chapters, study decision procedures for each of the following theories: propositional logic; equalities and uninterpreted functions; linear arithmetic; bit vectors; arrays; pointer logic; and quantified formulas.
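To make the notion concrete (this sketch is not taken from the book), a brute-force satisfiability checker for propositional CNF formulas is itself a tiny decision procedure: it always terminates and always returns a correct yes/no answer, if very slowly.

```python
from itertools import product

def is_satisfiable(clauses, variables):
    """Decide satisfiability of a CNF formula by exhaustive enumeration.

    `clauses` is a list of clauses; each clause is a list of
    (variable, polarity) literals, e.g. ("x", True) for x and
    ("x", False) for not-x. Enumerating all 2^n assignments always
    terminates, so this is a (very naive) decision procedure for
    propositional logic.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return True
    return False

# (x or y) and (not-x or y) is satisfiable (take y = True)
print(is_satisfiable([[("x", True), ("y", True)],
                      [("x", False), ("y", True)]], ["x", "y"]))  # True
# x and not-x is unsatisfiable
print(is_satisfiable([[("x", True)], [("x", False)]], ["x"]))     # False
```

Real SAT solvers of the kind the book covers replace this exponential enumeration with far more sophisticated search, but the contract is the same: terminate with a correct answer.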
In many decision problems, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. One common type is the monotonicity constraint, which states that the greater an input is, the greater the output must be, all other inputs being equal. Well-known examples include investment decisions, medical diagnosis, and selection and evaluation tasks. However, the models obtained by traditional data mining techniques alone often do not meet these constraints. Therefore, this book provides a thorough study of the incorporation of monotonicity constraints into the data mining process, improving knowledge discovery and facilitating decision-making for end-users by deriving more accurate and plausible decision models. The main contributions include a novel procedure to test the degree of monotonicity of a data set, a greedy algorithm to transform non-monotone data into monotone data, and extended and novel approaches to building monotone decision models. The theoretical and empirical findings should be valuable to graduates, researchers and practitioners involved in the study and development of data mining systems.
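The book's own test procedure is not reproduced here, but a crude stand-in conveys the idea of "degree of monotonicity": measure the fraction of comparable example pairs whose labels respect the constraint. All names and data below are hypothetical.

```python
def monotonicity_degree(xs, ys):
    """Fraction of comparable example pairs that respect monotonicity.

    A pair (i, j) is comparable when xs[i] <= xs[j] componentwise;
    it violates monotonicity when ys[i] > ys[j]. Returns 1.0 for a
    fully monotone data set, lower values as violations accumulate.
    (A simple illustration, not the book's procedure.)
    """
    comparable = violations = 0
    n = len(xs)
    for i in range(n):
        for j in range(n):
            if i != j and all(a <= b for a, b in zip(xs[i], xs[j])):
                comparable += 1
                if ys[i] > ys[j]:
                    violations += 1
    return 1.0 if comparable == 0 else 1.0 - violations / comparable

# Hypothetical loan applicants: (income, credit score) -> approval grade
xs = [(1, 1), (2, 2), (3, 3)]
print(monotonicity_degree(xs, [0, 1, 1]))  # 1.0: fully monotone
```

A greedy relabeling step of the kind the book describes would then repair the pairs this measure flags as violations.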
During task composition, such as can be found in distributed query processing, workflow systems and AI planning, decisions have to be made by the system, and possibly by users, with respect to how a given problem should be solved. Although there is often more than one correct way of solving a given problem, these multiple solutions do not necessarily lead to the same result. Some researchers address this problem by providing data provenance information; others use expert advice encoded in a supporting knowledge base. However, users do not usually trust complete automation of decision-making in domains with natural variation, like biology; they need a way to control and/or intervene in the system's reasoning to verify parts of the process. This book provides a thorough analysis of the problem, presents a data-centric methodology for measuring decision criticality, and describes its potential use. We argue that agent technology is a natural fit for the design of distributed heterogeneous integration systems, particularly in bioinformatics, and we propose a multi-agent system design and architecture as the basis of our framework.
This textbook presents fundamental machine learning concepts in an easy-to-understand manner by providing practical advice, using straightforward examples, and offering engaging discussions of relevant applications. The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, neural networks, and support vector machines. Later chapters show how to combine these simple tools by way of "boosting," how to exploit them in more complicated domains, and how to deal with diverse advanced practical issues. One chapter is dedicated to the popular genetic algorithms. This revised edition contains three entirely new chapters on critical topics regarding the pragmatic application of machine learning in industry. The chapters examine multi-label domains, unsupervised learning and its use in deep learning, and logical approaches to induction. Numerous chapters have been expanded, and the presentation of the material has been enhanced. The book contains many new exercises, numerous solved examples, thought-provoking experiments, and computer assignments for independent work.
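For a flavor of the simplest tool in that list (a sketch, not code from the textbook), the 1-nearest-neighbor rule classifies a query point by copying the label of the closest training example:

```python
import math

def nearest_neighbor_classify(train, query):
    """1-nearest-neighbor rule: return the label of the training
    example closest to `query` in Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    _, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Hypothetical two-class toy data
train = [((0.0, 0.0), "neg"), ((1.0, 1.0), "pos"), ((0.9, 1.2), "pos")]
print(nearest_neighbor_classify(train, (1.0, 0.9)))  # "pos"
```

Its simplicity is the point: techniques like boosting, which the book covers later, build stronger classifiers out of weak building blocks such as this one.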
A basic primer for all employees on using Lean Six Sigma to meet your company's goals and your customers' needs. Lean Six Sigma combines the two most important and popular quality trends of our time: Six Sigma and Lean Production. In this plain-English guide, you'll discover how this remarkable quality improvement method will help you identify and eliminate waste, cut costs and grow revenue, enhance your job skills, and even make work more meaningful. What is Lean Six Sigma? reveals why companies are implementing this strategy, and walks you through the foundations of Lean Six Sigma, explaining the "four keys" and how they apply to your own job:
- Delight your customers with speed and quality
- Improve your processes
- Work together for maximum gain
- Base decisions on data and facts
Featuring charts, diagrams, and case studies of teams who have used these methods to improve their workplace, What is Lean Six Sigma? tells you what you need to know to make this strategy a success in your organization.
This accessible text/reference provides a general introduction to probabilistic graphical models (PGMs) from an engineering perspective. The book covers the fundamentals for each of the main classes of PGMs, including representation, inference and learning principles, and reviews real-world applications for each type of model. These applications are drawn from a broad range of disciplines, highlighting the many uses of Bayesian classifiers, hidden Markov models, Bayesian networks, dynamic and temporal Bayesian networks, Markov random fields, influence diagrams, and Markov decision processes. Features: presents a unified framework encompassing all of the main classes of PGMs; describes the practical application of the different techniques; examines the latest developments in the field, covering multidimensional Bayesian classifiers, relational graphical models and causal models; provides exercises, suggestions for further reading, and ideas for research or programming projects at the end of each chapter.
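To illustrate the first model class in that list (a hypothetical toy, not an example from the book), a Bayesian classifier under the naive independence assumption can be fit with simple counting plus Laplace smoothing:

```python
import math
from collections import defaultdict

def train_naive_bayes(examples):
    """Fit a naive Bayes model over binary features.
    `examples` is a list of (features_tuple, label) pairs."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        class_counts[label] += 1
        for i, f in enumerate(feats):
            feat_counts[label][(i, f)] += 1
    return class_counts, feat_counts, len(examples)

def predict(model, feats):
    """Return the class maximizing the smoothed log-posterior."""
    class_counts, feat_counts, total = model
    best, best_lp = None, -math.inf
    for label, c in class_counts.items():
        lp = math.log(c / total)  # log-prior
        for i, f in enumerate(feats):
            # Laplace smoothing: +1 to the count, +2 in the
            # denominator (two values per binary feature)
            lp += math.log((feat_counts[label][(i, f)] + 1) / (c + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical "spam" data: features = (contains_link, contains_offer)
data = [((1, 1), "spam"), ((1, 0), "spam"),
        ((0, 0), "ham"), ((0, 1), "ham"), ((0, 0), "ham")]
model = train_naive_bayes(data)
print(predict(model, (1, 1)))  # "spam"
```

The richer PGMs the book goes on to cover relax exactly this classifier's naive independence assumption by encoding dependencies in a graph.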
This practically-oriented textbook introduces the fundamentals of designing digital surveillance systems powered by intelligent computing techniques. The text offers comprehensive coverage of each aspect of the system, from camera calibration and data capture, to the secure transmission of surveillance data, in addition to the detection and recognition of individual biometric features and objects. The coverage concludes with the development of a complete system for the automated observation of the full lifecycle of a surveillance event, enhanced by the use of artificial intelligence and supercomputing technology. This updated third edition presents an expanded focus on human behavior analysis and privacy preservation, as well as deep learning methods. Topics and features: contains review questions and exercises in every chapter, together with a glossary; describes the essentials of implementing an intelligent surveillance system and analyzing surveillance data, including a range of biometric characteristics; examines the importance of network security and digital forensics in the communication of surveillance data, as well as issues of privacy and ethics; discusses the Viola-Jones object detection method, and the HOG algorithm for pedestrian and human behavior recognition; reviews the use of artificial intelligence for automated monitoring of surveillance events, and decision-making approaches to determine the need for human intervention; presents a case study on a system that triggers an alarm when a vehicle fails to stop at a red light, and identifies the vehicle's license plate number; investigates the use of cutting-edge supercomputing technologies for digital surveillance, such as FPGA, GPU and parallel computing. This concise and accessible work serves as a classroom-tested textbook for graduate-level courses on intelligent surveillance.
Researchers and engineers interested in entering this area will also find the book suitable as a helpful self-study reference.
This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches (which are based on optimization techniques) together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. All the major classical techniques are covered: mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods. So are the latest trends: sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling. Case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation) show how the theory can be applied.
MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code.
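The companion website's MATLAB code is not reproduced here, but as a language-neutral taste of the most classical technique in the list, a closed-form least-squares line fit might look like this (an illustrative sketch only):

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by ordinary least squares, using the
    closed-form solution for the slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 1.0 2.0, recovering y = 1 + 2x exactly
```

The book's treatment generalizes this to the vector case, to recursive (Kalman/online) variants, and to regularized and sparse formulations.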
Work with blockchain and understand its potential application beyond cryptocurrencies in the domains of healthcare, Internet of Things, finance, decentralized organizations, and open science. Featuring case studies and practical insights generated from a start-up spun off from the author's own lab, this book covers a unique mix of topics not found in others and offers insight into how to overcome real hurdles that arise as the market and consumers grow accustomed to blockchain-based start-ups. You'll start with a review of the historical origins of blockchain and explore the basic cryptography needed to make the blockchain work for Bitcoin. You will then learn about the technical advancements made in the surrounding ecosystem: the Ethereum virtual machine, Solidity, Colored Coins, the Hyperledger Project, Blockchain-as-a-Service offered through IBM, Microsoft and more. This book looks at the social, technological, economic and political consequences of machine-to-machine transactions using the blockchain. Blockchain Enabled Applications provides you with a clear perspective on the ecosystem that has developed around the blockchain and the various industries it has penetrated.
What You'll Learn:
- Implement the code base from Fabric and Sawtooth, two open-source blockchain efforts being developed under the Hyperledger Project
- Evaluate the benefits of integrating blockchain with emerging technologies, such as machine learning and artificial intelligence in the cloud
- Apply the practical insights provided by the case studies to your own projects or start-up ideas
- Set up a development environment to compile and manage projects
Who This Book Is For:
Developers who are interested in learning about the blockchain as a data structure, the recent advancements being made, and how to implement the code base. Decision makers within large corporations (product managers, directors, or CIO-level executives) interested in implementing the blockchain who need practical insights and not just theory.
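For readers wondering what "blockchain as a data structure" amounts to, the core idea can be sketched in a few lines (an illustration only, not code from Fabric or Sawtooth): each block commits to the hash of its predecessor, so tampering with history is detectable.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over a canonical JSON encoding of the block."""
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block whose `prev` field commits to the previous
    block's hash (the genesis block points at a zero hash)."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev": prev, "data": data})

def is_valid(chain):
    """Every `prev` link must match the actual hash of the block
    before it; changing any block breaks all later links."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "genesis")
add_block(chain, "alice pays bob 5")
print(is_valid(chain))           # True
chain[0]["data"] = "tampered"    # rewrite history...
print(is_valid(chain))           # False: the hash link no longer matches
```

Production systems such as those under the Hyperledger Project add consensus, signatures, and smart-contract execution on top of this basic tamper-evident chain.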
Follow step-by-step guidance to craft a successful security program. You will identify with the paradoxes of information security and discover handy tools that hook security controls into business processes. Information security is more than configuring firewalls, removing viruses, hacking machines, or setting passwords. Creating and promoting a successful security program requires skills in organizational consulting, diplomacy, change management, risk analysis, and out-of-the-box thinking.
What You Will Learn:
- Build a security program that fits neatly into an organization, changes dynamically to suit the organization's needs, and survives constantly changing threats
- Prepare for and pass common audits such as PCI-DSS, SSAE-16, and ISO 27001
- Calibrate the scope, and customize security controls to fit an organization's culture
- Implement the most challenging processes, avoiding common pitfalls and distractions
- Frame security and risk issues to be clear and actionable, so that decision makers, technical personnel, and users will listen to and value your advice
Who This Book Is For:
IT professionals moving into the security field; new security managers, directors, project heads, and would-be CISOs; and security specialists from other disciplines moving into information security (e.g., former military security professionals, law enforcement professionals, and physical security professionals)