Today, Computed Tomography (CT) is one of the most efficient and indispensable tools in clinical medicine. Due to hardware and software limitations, and the statistical uncertainty inherent in all physical measurements, unwanted noise appears in CT images. This noise is one of the main factors degrading the visual quality of CT images; because of the degradation, experts may be unable to extract the correct information from them. This book therefore investigates methods for noise reduction in CT images that preserve their main structures. The goal of the work is to improve the signal-to-noise ratio without loss of spatial resolution or structural detail. Image denoising is thus an essential processing step preceding both visual and automated analyses. The book is concerned with methods for CT image enhancement based on denoising techniques that preserve edges and maintain high visual quality.
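The denoising goal stated above, suppressing noise while keeping edges and structures intact, can be illustrated with a simple edge-preserving filter. The sketch below uses a plain median filter as a stand-in; it is an illustrative example only, not the specific method developed in the book.

```python
import numpy as np

def median_filter(img, k=3):
    """Simple k x k median filter: removes impulse noise while
    preserving sharp edges better than linear averaging would."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A step edge (0 -> 100) corrupted by one impulse-noise pixel (255):
# the filter removes the outlier but keeps the edge location intact.
img = np.array([[0,   0, 100, 100],
                [0, 255, 100, 100],   # 255 is the noise pixel
                [0,   0, 100, 100],
                [0,   0, 100, 100]], dtype=float)
clean = median_filter(img)
```

A linear mean filter on the same input would both blur the edge and smear the outlier into its neighbours, which is exactly the trade-off edge-preserving denoising tries to avoid.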
Market threats, opportunities, regulatory changes, and quality improvement programs are the main drivers of change in an organization. These changes are very often reflected in business processes and business rules. Propagating them to the underlying Information and Communication Technology (ICT) infrastructure is difficult due to a lack of available documentation and rigid application design. Given the frequency of business-process changes in today's competitive world, maintaining the supporting information systems becomes difficult. This book proposes an approach, "Business Process to Software Design (BP2SD)", that supports the automatic propagation of changes in a process model to the software design. Business processes of an organization are modeled in suitable modeling languages, and the constructs of these process models are linked to the corresponding software design elements. Whenever a change is introduced to a process model, it is automatically propagated to the linked software design element(s). Finally, a case study demonstrates the applicability of the proposed approach.
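The link-and-propagate idea described above can be sketched in a few lines: process-model constructs are registered against design elements, and a change notification on a construct is pushed to every linked element. All names here (the registry, the element classes, the construct identifiers) are hypothetical illustrations, not the actual BP2SD implementation.

```python
class DesignElement:
    """A software design element linked to one or more process constructs."""
    def __init__(self, name):
        self.name = name
        self.dirty = False          # set when a linked construct changes

    def on_process_change(self, construct_id, change):
        # In a real tool this would regenerate or adjust the design;
        # here we only flag the element as needing attention.
        self.dirty = True

class LinkRegistry:
    """Maps process-model construct ids to linked design elements."""
    def __init__(self):
        self.links = {}             # construct id -> [DesignElement]

    def link(self, construct_id, element):
        self.links.setdefault(construct_id, []).append(element)

    def propagate(self, construct_id, change):
        for element in self.links.get(construct_id, []):
            element.on_process_change(construct_id, change)

registry = LinkRegistry()
invoice_service = DesignElement("InvoiceService")   # hypothetical element
registry.link("task:ApproveInvoice", invoice_service)
registry.propagate("task:ApproveInvoice", "renamed")
```

The point of the mapping is traceability: because every construct knows its dependent design elements, a process change never silently leaves the design out of date.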
Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching, and merging records that correspond to the same entities across several databases, or even within a single database. Based on research in domains including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially in improving the accuracy of data matching and its scalability to large databases. Peter Christen's book is divided into three parts. Part I, "Overview", introduces the subject by presenting several sample applications and their special challenges, together with a general overview of a generic data matching process. Part II, "Steps of the Data Matching Process", then details its main steps: pre-processing, indexing, field and record comparison, classification, and quality evaluation. Part III, "Further Topics", deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open source systems available today. By covering a broad range of data matching concepts and techniques and touching on all aspects of the data matching process, this book helps researchers and students specializing in data quality or data matching to familiarize themselves with recent research advances and to identify open research challenges in the area. To this end, each chapter closes with a section providing pointers to further background and research material. Practitioners will better understand the current state of the art in data matching as well as the internal workings and limitations of current systems.
In particular, they will learn that it is often not feasible to simply deploy an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
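Two of the steps named above, field comparison and pair classification, can be sketched concisely. The example below uses Python's standard-library `difflib.SequenceMatcher` as a stand-in for specialised similarity measures such as Jaro-Winkler, and a simple average-score threshold as a stand-in for the classification step; the records, field names, and threshold are illustrative assumptions, not examples from the book.

```python
from difflib import SequenceMatcher

def field_similarity(a, b):
    """Approximate string similarity in [0, 1] (stdlib stand-in for
    dedicated measures such as Jaro-Winkler or edit distance)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify_pair(rec1, rec2, fields, threshold=0.8):
    """Compare two records field by field and classify the pair as a
    match when the average field similarity reaches the threshold."""
    score = sum(field_similarity(rec1[f], rec2[f]) for f in fields) / len(fields)
    label = "match" if score >= threshold else "non-match"
    return label, score

# Two records that refer to the same person despite a name variation.
r1 = {"name": "Jon Smith",  "city": "Sydney"}
r2 = {"name": "John Smith", "city": "Sydney"}
label, score = classify_pair(r1, r2, ["name", "city"])
```

Real systems replace each of these stand-ins: indexing (blocking) first restricts which pairs are compared at all, trained classifiers replace the fixed threshold, and per-field similarity functions are chosen to suit the data, which is exactly the kind of adaptation the blurb says off-the-shelf deployment cannot skip.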
This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and modeling of users' perception (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and who want to tackle the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.
Publication date: 01/2007. Medium: Book. Binding: Hardcover. Title: Quality Measures in Data Mining. Editors: Guillet, Fabrice // Hamilton, Howard J. Publisher: Springer-Verlag GmbH. Language: English. Keywords: Data Mining // Computing // Artificial Intelligence.