This study was conceived with the aim of investigating the availability and utilization of information and communication technology (ICT) for accessing health information by medical professionals in Kenya. The study started from the premise that access to relevant information and knowledge is critical to the delivery of effective healthcare services. Although most health information is now delivered electronically, many healthcare professionals in developing countries, including Kenya, are disadvantaged by limited access to and use of ICT. The research was exploratory in nature and used Kenyatta National Hospital as a case study.
The Unified Modeling Language (UML) provides an environment for modeling complex systems. It supports a variety of diagrams for analyzing, designing, and implementing software systems. During the requirements phase, developers abstract concepts from the application domain and describe what the system is intended to do, not how it will do it. UML was adopted as a standard for object-oriented modeling by the Object Management Group in 1997 and has since been used in a wide range of software development projects. However, the continued success of any new technology depends a great deal on its usability. To predict the future success of a language like UML, it is important to address usability from the perspective of the language's users: software developers. This publication reports the results of an empirical study assessing the usability of UML for developing software requirements. It addresses the dimensions of ease of use, usefulness, and the usefulness of UML for communicating requirements to various project stakeholders.
Service-oriented computing has become one of the predominant factors in IT research and development efforts over the last few years. In spite of several standardization efforts that advanced from research labs into industrial-strength technologies and tools, there is still much human effort required in the process of finding and executing Web services. Here, Dieter Fensel and his team lay the foundation for understanding the Semantic Web Services infrastructure, aimed at eliminating human intervention and thus allowing for seamless integration of information systems. They focus on the currently most advanced SWS infrastructure, namely SESA and related work such as the Web Services Execution Environment (WSMX) activities and the Semantic Execution Environment (OASIS SEE TC) standardization effort. Their book is divided into four parts: Part I provides an introduction to the field and its history, covering basic Web technologies and the state of research and standardization in the Semantic Web field. Part II presents the SESA architecture. The authors detail its building blocks and show how they are consolidated into a coherent software architecture that can be used as a blueprint for implementation. Part III gives more insight into middleware services, describing the necessary conceptual functionality that is imposed on the architecture through the basic principles. Each such functionality is realized using a number of so-called middleware services. Finally, Part IV shows how the SESA architecture can be applied to real-world scenarios, and provides an overview of compatible and related systems. The book targets professionals as well as academic and industrial researchers working on various aspects of semantic integration of distributed information systems. They will learn how to apply the Semantic Web Services infrastructure to automate and semi-automate tasks, by using existing integration technologies. In addition, the book is also suitable for advanced graduate students enrolled in courses covering knowledge management, the Semantic Web, or integration of information systems, as it will educate them about basic technologies for Semantic Web Services and general issues related to integration of information systems.
With the significant growth of bio-molecular sequence data over the last decade, the need for algorithms that extract patterns and meaningful information from such data has been felt strongly. Alignment of sequences, in order to determine regions of common descent, has also been an important area of research, as it helps scientists trace the evolution of species. Another problem into which researchers are putting a great deal of effort is document summarization. As the lower bounds of computation are being reached for various algorithms, parallelization has become imperative to further expedite computing on large data sets. New multiprocessor architectures such as the Cell Broadband Engine have the potential to perform extensive calculations and act as mini-supercomputers; other applications for these include onboard aircraft fault diagnosis and prognosis. We take a look at some existing algorithms for these problems and propose novel algorithms, along with their implementations, to address these problems in the field of bioinformatics.
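The sequence-alignment problem mentioned above is classically solved by dynamic programming. As a rough illustration only (not code from the work itself, with arbitrary assumed scoring values), the sketch below computes a Needleman-Wunsch global alignment score in Python:

```python
# Illustrative sketch: minimal Needleman-Wunsch global alignment scoring.
# Scoring values (match/mismatch/gap) are arbitrary assumptions.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, cols):
        score[0][j] = j * gap          # b[:j] aligned against gaps only
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = score[i - 1][j] + gap     # gap inserted in b
            left = score[i][j - 1] + gap   # gap inserted in a
            score[i][j] = max(diag, up, left)
    return score[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # prints the alignment score
```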
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
In many decision problems, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision-maker. One common type is the monotonicity constraint, which states that the greater an input is, the greater the output must be, all other inputs being equal. Well-known examples include investment decisions, medical diagnosis, and selection and evaluation tasks. However, the models obtained by traditional data mining techniques alone often do not meet these constraints. Therefore, this book provides a thorough study of the incorporation of monotonicity constraints into the data mining process, to improve knowledge discovery and facilitate decision-making for end-users by deriving more accurate and plausible decision models. The main contributions include a novel procedure to test the degree of monotonicity of a data set, a greedy algorithm to transform non-monotone data into monotone data, and extended and novel approaches to building monotone decision models. The theoretical and empirical findings should be valuable to graduates, researchers and practitioners involved in the study and development of data mining systems.
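The monotonicity constraint described above can be checked directly on a data set. As a rough illustration under assumed toy data (not taken from the book), the sketch below counts pairwise monotonicity violations in Python:

```python
# Illustrative sketch: brute-force check of the monotonicity constraint --
# if one example dominates another on every input, its label must not be smaller.

def monotonicity_violations(X, y):
    """Count ordered pairs (i, j) where x_i >= x_j componentwise but y_i < y_j."""
    violations = 0
    n = len(X)
    for i in range(n):
        for j in range(n):
            dominates = all(a >= b for a, b in zip(X[i], X[j]))
            if dominates and y[i] < y[j]:
                violations += 1
    return violations

# Assumed toy credit-scoring data: (income, years employed) -> loan approved (1) or not (0)
X = [(50, 2), (60, 3), (40, 1)]
y = [1, 0, 0]   # (60, 3) dominates (50, 2) yet has a lower label -> one violation
print(monotonicity_violations(X, y))  # -> 1
```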
During task composition, such as is found in distributed query processing, workflow systems and AI planning, decisions have to be made by the system, and possibly by users, with respect to how a given problem should be solved. Although there is often more than one correct way of solving a given problem, these multiple solutions do not necessarily lead to the same result. Some researchers address this problem by providing data provenance information; others use expert advice encoded in a supporting knowledge base. However, users do not usually trust complete automation of decision-making in domains with natural variation, such as biology; they need a way to control or intervene in the system's reasoning to verify parts of the process. This book provides a thorough analysis of the problem, presents a data-centric methodology for measuring decision criticality, and describes its potential use. We argue that agent technology is a natural fit for the design of distributed heterogeneous integration systems, particularly in bioinformatics, and we propose a multi-agent system design and architecture as the basis of our framework.
This practically-oriented textbook introduces the fundamentals of designing digital surveillance systems powered by intelligent computing techniques. The text offers comprehensive coverage of each aspect of the system, from camera calibration and data capture, to the secure transmission of surveillance data, in addition to the detection and recognition of individual biometric features and objects. The coverage concludes with the development of a complete system for the automated observation of the full lifecycle of a surveillance event, enhanced by the use of artificial intelligence and supercomputing technology. This updated third edition presents an expanded focus on human behavior analysis and privacy preservation, as well as deep learning methods. Topics and features: contains review questions and exercises in every chapter, together with a glossary; describes the essentials of implementing an intelligent surveillance system and analyzing surveillance data, including a range of biometric characteristics; examines the importance of network security and digital forensics in the communication of surveillance data, as well as issues of privacy and ethics; discusses the Viola-Jones object detection method, and the HOG algorithm for pedestrian and human behavior recognition; reviews the use of artificial intelligence for automated monitoring of surveillance events, and decision-making approaches to determine the need for human intervention; presents a case study on a system that triggers an alarm when a vehicle fails to stop at a red light, and identifies the vehicle's license plate number; investigates the use of cutting-edge supercomputing technologies for digital surveillance, such as FPGA, GPU and parallel computing. This concise and accessible work serves as a classroom-tested textbook for graduate-level courses on intelligent surveillance. Researchers and engineers interested in entering this area will also find the book suitable as a helpful self-study reference.
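As one concrete illustration of the HOG-based pedestrian detection mentioned among the topics (a generic OpenCV sketch, not code from the book; the file names are placeholder assumptions), the snippet below runs the library's default HOG people detector on a single frame:

```python
# Illustrative sketch: pedestrian detection with OpenCV's built-in
# HOG + linear-SVM people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("surveillance_frame.jpg")      # assumed input image path
# Sliding-window detection over an image pyramid
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw detections

cv2.imwrite("detections.jpg", frame)              # assumed output path
```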
Teaches readers the basics of Python programming through simple game creation and describes how the skills learned can be used for more practical Python programming applications and real-world scenarios.
Problem solving in computing is referred to as computational thinking. The theory behind this concept is challenging in its technicalities, yet simple in its ideas. This book introduces the theory of computation from its inception to the modern study of complexity: from explanations of how the field of computer science was formed using classical ideas in mathematics by Gödel, to the conceptualization of the Turing Machine, to more recent innovations in quantum computation, hypercomputation, vague computing and natural computing. It describes the impact of these developments on academia, business and wider society, providing a sound theoretical basis for their practical application. Written for accessibility, Demystifying Computation provides the basic knowledge needed by non-experts in the field, undergraduate computer scientists, and students of information and communication technology and software development.