Although it dates from the 1970s, the C preprocessor is still widely used in practice in many projects to tailor systems to different platforms and application scenarios. In academia, researchers have criticized its lack of separation of concerns, its proneness to introducing subtle errors, and its obfuscation of the source code. To better understand the problems of using the C preprocessor, we conducted 40 interviews and a survey among 202 developers. We found that developers deal with three common problems in practice: configuration-related bugs, combinatorial testing, and code comprehension. To address these problems, this book presents strategies, based on variability-aware analysis and sampling, to detect bugs and bad smells in preprocessor-based systems. This work presents useful findings for C developers during their development tasks, helping to minimize the chances of introducing configuration-related bugs and bad smells, improve code comprehension, and guide developers in performing combinatorial testing.
Many current instant messaging applications, such as Telegram Secret Chat, transmit packets in plain text. This means that an intruder equipped with appropriate remote monitoring tools can sniff the traffic and obtain the raw packets being relayed across the network. However, some applications, such as WhatsApp and Facebook Messenger, have embraced end-to-end encryption, which protects data as it passes from one device to another over communication channels. Effectively, this prevents potential eavesdroppers, such as telecommunication providers, Internet service providers, or even the provider of the communication service itself, from obtaining the cryptographic keys needed to decrypt the conversation. Most messaging applications, however, encrypt data only between the user and the company's servers. The consequence is that the service provider can inspect the data crossing its network at any time and access the information being exchanged between the communicating parties.
This book examines the effectiveness of communication between the chief information officer (CIO) and chief executive officer (CEO) and its impact on the role of information technology (IT) in an organization. The book is empirically based on interviews with CIO/CEO pairs from twelve organizations in the manufacturing and retail industries. It examines how CIOs and CEOs can achieve effectiveness in their communication, including insights into antecedents and consequences of communication effectiveness. Based on the interview data, the authors develop a CIO/CEO communication model with which CIOs and CEOs can gain new insights into the effectiveness of their interactions, likely resulting in higher levels of shared understanding regarding the role of IT in their organization.
The Gaussian Mixture Model (GMM) is a probabilistic model that works well for classification and parameter estimation. In this work, Maximum Likelihood Estimation (MLE) based on Expectation Maximization (EM) is used for parameter estimation, and the estimated parameters are used for training and testing images for normality and abnormality. The mean and covariance computed as parameters are used in the GMM-based training of the classifier. The Support Vector Machine (SVM), a discriminative classifier, and the GMM, a generative classifier, are the two popular techniques employed in this work. The classification performance of both classifiers compares favourably with other classifiers. By combining the SVM and the GMM we can classify at a better level, since parameter estimation through the GMM requires only a small number of features, so no feature-reduction techniques are needed.
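As a minimal sketch of the generative-versus-discriminative setup described above, the following compares a per-class GMM (whose mean and covariance are estimated by EM) with an SVM. It uses scikit-learn and synthetic two-class data standing in for "normal" and "abnormal" samples; the book's actual features and implementation may differ.

```python
# Sketch: classify samples as normal (0) vs abnormal (1) with a per-class
# Gaussian Mixture Model (generative) and an SVM (discriminative).
# Assumptions: scikit-learn is available; the feature data is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(100, 2))    # class 0
X_abnormal = rng.normal(loc=3.0, scale=1.0, size=(100, 2))  # class 1
X = np.vstack([X_normal, X_abnormal])
y = np.array([0] * 100 + [1] * 100)

# Generative: fit one GMM per class (EM estimates mean and covariance),
# then assign each test point to the class with the higher log-likelihood.
gmm_normal = GaussianMixture(n_components=1, random_state=0).fit(X_normal)
gmm_abnormal = GaussianMixture(n_components=1, random_state=0).fit(X_abnormal)

def gmm_predict(X_test):
    return (gmm_abnormal.score_samples(X_test) >
            gmm_normal.score_samples(X_test)).astype(int)

# Discriminative: the SVM learns the decision boundary directly.
svm = SVC(kernel="rbf").fit(X, y)

x_test = np.array([[0.1, -0.2], [2.9, 3.1]])
print(gmm_predict(x_test))
print(svm.predict(x_test))
```

On well-separated data like this, both approaches agree; the practical difference noted in the abstract is that the GMM route works directly from a few estimated parameters per class, while the SVM learns a boundary from all training points.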
Discover the RESTful technologies, including REST, JSON, XML, JAX-RS web services, SOAP, and more, for building today's microservices, big data applications, and web service applications. This book is based on a course the author, an Oracle professional, teaches for UC Santa Cruz Silicon Valley, covering architecture, design best practices, and coding labs. Pro RESTful APIs: Design gives you all the fundamentals from the top down: from the top (architecture) through the middle (design) to the bottom (coding). This book is a must-have for any microservices or web services developer building applications and services.
What You'll Learn:
- Discover the key RESTful APIs, including REST, JSON, XML, JAX, SOAP, and more
- Use these for web services and data exchange, especially in today's big data context
- Harness XML, JSON, REST, and JAX-RS in examples and case studies
- Apply best practices to your solutions' architecture
Who This Book Is For: Experienced web programmers and developers.
This book is a comprehensive introduction to the methods and algorithms of modern data analytics. It provides a sound mathematical basis, discusses advantages and drawbacks of different approaches, and enables the reader to design and implement data analytics solutions for real-world applications. This book has been used for more than ten years in the Data Mining course at the Technical University of Munich. Much of the content is based on the results of industrial research and development projects at Siemens.
This book presents a comparison between two simulation methods, namely Discrete Event Simulation (DES) and Agent Based Simulation (ABS). In our literature review we identified a gap in comparing the applicability of these methods to modelling human-centric service systems. Hence, we have focused our research on reactive human behaviour and on proactive human behaviour at different levels of detail in service systems. The aim of the work is to establish a comparison of DES and ABS for modelling reactive behaviour and proactive behaviour at different levels of detail in service systems. To achieve this we investigate the similarities and differences both in model output performance and in model-building difficulty.
This self-contained introduction to modern cryptography emphasizes the mathematics behind the theory of public key cryptosystems and digital signature schemes. The book focuses on these key topics while developing the mathematical tools needed for the construction and security analysis of diverse cryptosystems. Only basic linear algebra is required of the reader; techniques from algebra, number theory, and probability are introduced and developed as required. This text provides an ideal introduction for mathematics and computer science students to the mathematical foundations of modern cryptography. The book includes an extensive bibliography and index; supplementary materials are available online. The book covers a variety of topics that are considered central to mathematical cryptography. Key topics include: classical cryptographic constructions, such as Diffie-Hellman key exchange, discrete logarithm-based cryptosystems, the RSA cryptosystem, and digital signatures; fundamental mathematical tools for cryptography, including primality testing, factorization algorithms, probability theory, information theory, and collision algorithms; an in-depth treatment of important cryptographic innovations, such as elliptic curves, elliptic curve and pairing-based cryptography, lattices, lattice-based cryptography, and the NTRU cryptosystem. The second edition of An Introduction to Mathematical Cryptography includes a significant revision of the material on digital signatures, including an earlier introduction to RSA, Elgamal, and DSA signatures, and new material on lattice-based signatures and rejection sampling. Many sections have been rewritten or expanded for clarity, especially in the chapters on information theory, elliptic curves, and lattices, and the chapter on additional topics has been expanded to include sections on digital cash and homomorphic encryption. Numerous new exercises have been included.
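To give a flavor of the classical constructions listed above, here is a textbook Diffie-Hellman key exchange in Python. The parameters below are tiny, made-up illustrative values chosen so the arithmetic can be checked by hand; real deployments use carefully chosen primes of 2048 bits or more.

```python
# Toy Diffie-Hellman key exchange (textbook version, illustrative
# parameters only -- far too small to be secure).
p = 23  # public prime modulus
g = 5   # public generator
a = 6   # Alice's secret exponent
b = 15  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes A = g^a mod p
B = pow(g, b, p)  # Bob publishes   B = g^b mod p

# Each party combines its own secret with the other's public value;
# both arrive at g^(ab) mod p without ever transmitting it.
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
print(shared_alice)  # → 2
```

The security of the scheme rests on the difficulty of recovering a from g^a mod p, the discrete logarithm problem treated at length in the book.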
An alternative approach for conducting tests in multiple-choice format is the e-Examination Platform, or simply Computer Based Testing (CBT). CBT is necessary considering the large population of students enrolled in Nigerian secondary and higher institutions of learning. However, the student body size and the unique nature of some departmental courses hinder its total implementation. This paper investigates the challenges of migrating the manual processes of conducting tests and exams to an automated system. The paper also examines the potential for using student feedback in the validation of assessments. The process was further modified to run in parallel, making it faster, and helped to minimize human errors, improve the accuracy of the CBT, and bring quality and transparency to the process. A survey of 230 students from various schools was taken to sample their opinions, by completing a carefully worded questionnaire. The data collected was then collated and analyzed. The analysis showed that more than 95% of the students surveyed were already competent in the use of computers and the CBT platform. Also, more than 90% of them found the platform easy to navigate and use.
This book presents modern techniques for the analysis of Markov chain Monte Carlo (MCMC) methods. A central focus is the study of the number of iterations of MCMC and its relation to indices such as the number of observations or the dimension of the parameter space. The approach in this book is based on the theory of convergence of probability measures for two kinds of randomness: observation randomness and simulation randomness. This method provides in particular optimal bounds for the random walk Metropolis algorithm and useful asymptotic information on the data augmentation algorithm. Applications are given to the Bayesian mixture model, the cumulative probit model, and some other categorical models. This approach yields new subjects, such as the degeneracy problem and the optimal rate problem of MCMC. Containing asymptotic results on MCMC from a Bayesian statistical point of view, this volume will be useful to practical and theoretical researchers and to graduate students in the field of statistical computing.
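For readers unfamiliar with the random walk Metropolis algorithm mentioned above, the following is a minimal sketch, not the book's analysis: a sampler targeting a standard normal density, which stands in for the Bayesian posteriors studied in the text. All names and parameters here are illustrative.

```python
# Minimal random walk Metropolis sampler targeting N(0, 1).
import math
import random

random.seed(0)

def log_target(x):
    # Log-density of N(0, 1), up to an additive constant.
    return -0.5 * x * x

def rwm(n_iter, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_iter):
        proposal = x + random.gauss(0.0, step)  # symmetric random walk proposal
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)  # on rejection, the current state is repeated
    return samples

samples = rwm(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```

How the number of iterations needed for such estimates to stabilise scales with the dimension of the parameter space and the number of observations is precisely the kind of question the book's convergence theory addresses.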