Communications and personal information posted online are usually accessible to a vast number of people. When personal data exist online, they may be searched, reproduced, and mined by advertisers, merchants, service providers, or even stalkers. Many users know what may happen to their information, yet they act as though their data were private or intimate: they expect that their privacy will not be infringed even as they willingly share personal information with the world via social network sites, blogs, and online communities. The chapters collected by Trepte and Reinecke address questions arising from this disparity, which has often been referred to as the privacy paradox. Works by renowned researchers from various disciplines, including psychology, communication, sociology, and information science, offer new theoretical models of how online intimacy and public accessibility function, and propose novel ideas on the how and why of online privacy. The contributing authors offer intriguing solutions to some of the most pressing issues and problems in the field of online privacy. They investigate how users abandon privacy to enhance social capital and to generate different kinds of benefits. They argue that trust and authenticity characterize the uses of social network sites. They explore how privacy needs affect users' virtual identities. Ethical issues of online privacy are discussed, as well as its gratifications and users' concerns. The contributors to this volume focus on the privacy needs and behaviors of a variety of groups of social media users, such as young adults, older users, and users of different genders. They also examine privacy in the context of particular online services, such as social network sites, mobile internet access, online journalism, blogs, and micro-blogs.
In sum, this book offers researchers and students working on issues related to internet communication not only a thorough and up-to-date treatment of online privacy and the social web, but also a glimpse of the future: it explores emergent issues concerning new technological applications and suggests theory-based research agendas that can guide inquiry beyond the current forms of social technologies.
This practically oriented textbook introduces the fundamentals of designing digital surveillance systems powered by intelligent computing techniques. The text offers comprehensive coverage of each aspect of the system, from camera calibration and data capture to the secure transmission of surveillance data, in addition to the detection and recognition of individual biometric features and objects. The coverage concludes with the development of a complete system for the automated observation of the full lifecycle of a surveillance event, enhanced by the use of artificial intelligence and supercomputing technology. This updated third edition presents an expanded focus on human behavior analysis and privacy preservation, as well as deep learning methods. Topics and features: contains review questions and exercises in every chapter, together with a glossary; describes the essentials of implementing an intelligent surveillance system and analyzing surveillance data, including a range of biometric characteristics; examines the importance of network security and digital forensics in the communication of surveillance data, as well as issues of privacy and ethics; discusses the Viola-Jones object detection method and the HOG algorithm for pedestrian and human behavior recognition; reviews the use of artificial intelligence for automated monitoring of surveillance events, and decision-making approaches to determine the need for human intervention; presents a case study on a system that triggers an alarm when a vehicle fails to stop at a red light and identifies the vehicle's license plate number; investigates the use of cutting-edge supercomputing technologies for digital surveillance, such as FPGA, GPU, and parallel computing. This concise and accessible work serves as a classroom-tested textbook for graduate-level courses on intelligent surveillance.
Researchers and engineers interested in entering this area will also find the book a helpful self-study reference.
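To give a flavor of the pedestrian-detection material covered in the book, the core step of the HOG (Histogram of Oriented Gradients) descriptor can be sketched in a few lines: gradient magnitudes within an image cell are accumulated into orientation bins. This is an illustrative sketch only, not code from the book; a full HOG implementation adds block normalization, interpolated voting, and a sliding detection window with a trained classifier.

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Orientation histogram for one cell of a grayscale patch.

    Illustrates the core idea of the HOG descriptor: each interior
    pixel votes with its gradient magnitude into an unsigned
    orientation bin over 0-180 degrees.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / bins)) % bins] += mag
    return hist

# A patch with a strong vertical edge: all gradients point
# horizontally, so every vote lands in the bin around 0 degrees.
patch = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(patch))
```

In a real detector, such histograms are computed over a dense grid of cells, normalized over overlapping blocks, concatenated into a feature vector, and fed to a classifier (originally a linear SVM) slid across the image.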
This edited volume explores the intersection between philosophy and computing. It features work presented at the 2016 annual meeting of the International Association for Computing and Philosophy. The 23 contributions to this volume neatly represent a cross section of 40 papers, four keynote addresses, and eight symposia as they cut across six distinct research agendas. The volume begins with foundational studies in computation and information, epistemology and philosophy of science, and logic. The contributions next examine research into computational aspects of cognition and philosophy of mind. This leads to a look at moral dimensions of man-machine interaction, as well as issues of trust, privacy, and justice. This multi-disciplinary, or, better yet, a-disciplinary investigation reveals the fruitfulness of erasing distinctions among, and boundaries between, established academic disciplines. This should come as no surprise: the computational turn itself is a-disciplinary, and no established discipline, whether scientific, artistic, or humanistic, has remained unchanged. Rigorous reflection on the nature of these changes opens the door to inquiry into the nature of the world, what constitutes our knowledge of it, and our understanding of our place in it. These investigations are only just beginning. The contributions to this volume make this clear: many encourage further research and end with open questions.
Data matching (also known as record or data linkage, entity resolution, object identification, or field matching) is the task of identifying, matching, and merging records that correspond to the same entities across several databases or even within one database. Based on research in various domains, including applied statistics, health informatics, data mining, machine learning, artificial intelligence, database management, and digital libraries, significant advances have been achieved over the last decade in all aspects of the data matching process, especially in improving the accuracy of data matching and its scalability to large databases. Peter Christen's book is divided into three parts: Part I, "Overview", introduces the subject by presenting several sample applications and their special challenges, as well as a general overview of a generic data matching process. Part II, "Steps of the Data Matching Process", then details its main steps, such as pre-processing, indexing, field and record comparison, classification, and quality evaluation. Lastly, Part III, "Further Topics", deals with specific aspects such as privacy, real-time matching, and matching unstructured data, and briefly describes the main features of many research and open source systems available today. By providing the reader with a broad range of data matching concepts and techniques and touching on all aspects of the data matching process, this book helps researchers as well as students specializing in data quality or data matching to familiarize themselves with recent research advances and to identify open research challenges in the area. To this end, each chapter of the book includes a final section that provides pointers to further background and research material. Practitioners will better understand the current state of the art in data matching, as well as the internal workings and limitations of current systems.
In particular, they will learn that it is often not feasible to simply deploy an existing off-the-shelf data matching system without substantial adaptation and customization. Such practical considerations are discussed for each of the major steps in the data matching process.
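The main steps the book details (pre-processing, indexing, field comparison, classification) can be sketched in miniature. The field names, blocking key, and similarity measure below are illustrative assumptions, not taken from the book; real systems use measures such as Jaro-Winkler or edit distance, and probabilistic or learned classifiers rather than a fixed threshold.

```python
def normalize(rec):
    # Pre-processing: lowercase and strip whitespace from every field.
    return {k: v.strip().lower() for k, v in rec.items()}

def block_key(rec):
    # Indexing (blocking): only records sharing a key are compared,
    # here the surname's first letter plus the zip code.
    return rec["surname"][:1] + rec["zip"]

def field_sim(a, b):
    # Field comparison: a crude character-overlap similarity in [0, 1].
    if not a and not b:
        return 1.0
    common = len(set(a) & set(b))
    return common / max(len(set(a)), len(set(b)))

def match(rec_a, rec_b, threshold=0.8):
    # Classification: average the field similarities, apply a threshold.
    ra, rb = normalize(rec_a), normalize(rec_b)
    if block_key(ra) != block_key(rb):
        return False  # records in different blocks are never compared
    sims = [field_sim(ra[f], rb[f]) for f in ("given", "surname")]
    return sum(sims) / len(sims) >= threshold

a = {"given": "Peter", "surname": "Christen ", "zip": "2600"}
b = {"given": "Pete",  "surname": "christen",  "zip": "2600"}
print(match(a, b))  # True
```

Even this toy version shows why off-the-shelf deployment rarely works unmodified: the blocking key, the per-field similarity measures, and the decision threshold all have to be tuned to the data at hand.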