Software Reliability and Dependability: a Roadmap


Bev Littlewood and Lorenzo Strigini
Centre for Software Reliability, City University
Northampton Square, London EC1V 0HB, UK
b.littlewood@csr.city.ac.uk, l.strigini@csr.city.ac.uk

ABSTRACT

Software's increasing role creates both requirements for being able to trust it more than before, and for more people to know how much they can trust their software. A sound engineering approach requires both techniques for achieving reliability and sound assessment of the results achieved. Different parts of industry and society face different challenges: the need for education and cultural changes in some areas, the adaptation of known scientific results to practical use in others, and in others still the need to confront inherently hard problems of prediction and decision-making, both to clarify the limits of current understanding and to push them back. We outline the specific difficulties in applying a sound engineering approach to software reliability engineering, some of the current trends and problems, and a set of issues that we therefore see as important in an agenda for research in software dependability.

1 INTRODUCTION

Dependability is the property that allows us to place justifiable dependence upon the functioning of a system. It encompasses, among other attributes, reliability, safety, security and availability. These qualities are the shared concern of many sub-disciplines in software engineering (which deal with achieving them), of specialised fields like computer security, and of reliability and safety engineering. We will concentrate on the aspects that are the traditional concern of these last allied disciplines, and will mainly discuss reliability, but many of our remarks will also be of relevance to the other attributes. In this area, an important factor is the diversity of "the software industry", or rather, the diversity among the many industrial sectors that produce or use software.
Dependability in many industries is driven by market forces. The demand for software dependability varies widely between industrial sectors, as does the degree of adoption of systematic approaches to it. From many viewpoints, the two extremes of the range are found in mass-marketed PC software and in safety-critical software for heavily-regulated industries. A couple of decades ago there was a revolution in the dependability of consumer goods such as TVs, VCRs and automobiles, when companies realised that there was market advantage to be gained by demonstrating higher reliability than their competitors. There has not yet been a similar movement in the corresponding sectors of the software industry.

1.1 Why is our dependence on software increasing?

It is commonplace that software is increasingly important for society. The Y2K bug has just brought this to the attention of the public: not only was a huge expense incurred for assurance (verification and/or fixes) against its possible effects, but this effort affected all kinds of organisations and systems, including many that the public does not usually associate with computer software. It is useful to list various dimensions of this increased dependence. Software-based systems replace older technologies in safety- or mission-critical applications. Software has found its way into aircraft engine control, railroad interlocking, nuclear plant protection, etc. New critical applications are developed, such as automating aspects of surgery, or the steering and piloting of automobiles. Some of these applications imply ultra-high dependability requirements.
Others have requirements that are much more limited, but require the development of a computer dependability culture either in the vendors (e.g., equipment manufacturers without previous experience of using computers in safety-critical roles) or in customers and users (e.g., doctors and surgeons); Software moves from an auxiliary to a protagonist role in providing critical services. For example, air traffic control systems are being modernised to handle more traffic, and one aspect of this is increasing reliance on software. The software has traditionally been regarded as non-safety-critical, because humans using manual backup methods could take over its roles if it failed, but increasing traffic volumes mean that this fall-back capability is being eroded. Here the challenge is for a culture that has so far been successful to evolve to cope with higher dependability requirements under intense pressure to deploy new systems; Software becomes the only way of performing some function which is not perceived as critical, but whose failures would deeply affect individuals or groups. Thus, hospitals, supermarkets and pension offices depend on their databases and software for their everyday business; electronic transactions as the natural way of doing business are extending from the financial world to many forms of electronic commerce; Software-provided services become increasingly an accepted part of everyday life without any special scrutiny. For instance, spreadsheet programs are in widespread use as a decision-making aid, usually with few formal checks on their use, although researchers have found errors to be extremely frequent in the production of spreadsheets, and the spreadsheet programs themselves suffer from many documented bugs and come with no promise of acceptable reliability from their vendors; Software-based systems are increasingly integrated and interacting, often without effective human control. Larger, more closely-coupled systems are thus built, in which software failures can propagate their effects more quickly and with less room for human intervention. With increased dependence, the total societal costs of computer failures increase. Hence the need to get a better grip on the trade-offs involving dependability: in many cases to improve it, and generally to evaluate it better.

1.2 Why is there a problem with software reliability?

The major difference between software and other engineering artefacts is that software is pure design. Its unreliability is always the result of design faults, which in turn arise from human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components - the consequences of the perversity of nature. Some categories of hardware systems do fail through design and manufacturing defects more often than is desirable - for example, buildings in poor countries - but engineering knowledge is sufficient, at least in principle, to prevent these systematic failures. Reliability theories have been developed over the years which have successfully allowed hardware systems to be built to high reliability requirements, and the final system reliability to be evaluated with acceptable accuracy.
In recent years, however, many of these systems have come to depend upon software for their correct functioning, so that the reliability of software - its achievement and assessment - has become more and more important. The increasing ubiquity of software stems, of course, from its general-purpose nature. Unfortunately, however, it is precisely this that brings disadvantages from the point of view of achieving sufficient reliability, and of demonstrating its achievement. Rather informally, these problems stem from the difficulty and novelty of the problems that are tackled, the complexity of the resulting solutions, the need for short development timescales, as well as the difficulty of gaining assurance of reliability because of the inherently discrete behaviour of digital systems.

Novelty

Whereas in the past computer-based systems were often used to automate the solution of problems for which satisfactory manual solutions already existed, it is becoming increasingly common to seek computerised solutions for previously unresolved problems - often ones that would have been regarded as impracticable using other technology. This poses particular difficulties for systems with high reliability requirements, since it means that we can learn little from experience of previous systems. Other branches of engineering, by contrast, tend to have a more continuous evolution in successive designs. Changing from a non-digital electronic control system to a software-based system, for example, might be best regarded as a step-change in technology. Equivalent step changes in other branches of engineering are known to be risky - for example, the attempt to introduce new materials for turbine blades that led to insolvency and nationalisation for Rolls Royce in 1971.

Difficulty

There is a tendency for system designers to take on tasks that are intrinsically difficult when building software-based systems.
The fact of a system being based on software frees the designer from some of the constraints of a purely hardware system, and allows the implementation of sometimes excessive extra functionality. Thus in engineered systems there are examples of software being used to implement difficult functionality - e.g., enhanced support to pilots in fly-by-wire and unstable aircraft control, or dynamic control of safe separation between trains in "moving block" railway signalling - that would be inconceivable in older technologies. Most complex modern manipulations of information - e.g., the control of massive flows of funds around the world's banking systems, or the recent growth of e-commerce - would not be possible without software. The more difficult and novel the task, of course, the more likely it is that mistakes will be made, resulting in the introduction of faults which cause system failure when triggered by appropriate input conditions. In the worst cases, the overweening ambition of designers has resulted in systems being abandoned before completion, with consequent heavy financial loss.

Complexity

Most importantly, these trends to new and increased functionality in computer-based systems are almost unavoidably accompanied by increased complexity. Whilst there is no universally accepted measure of complexity, simple size will often give a rough-and-ready indication. The growth of complexity is then evident - see, for example, the growth in packages such as MS Office from one release to another. Great complexity brings with it many dangers. One of the greatest is difficulty of understanding: it is now common to have systems that no single person can claim to understand completely, even at a fairly high level of abstraction. This results in an associated uncertainty about the properties of the program - particularly its reliability and safety.
Control of unwarranted complexity is thus one of the most important results of good design: a system should be no more complex than it needs to be to deliver the needed functionality. Clearly, some of the trends discussed above militate against control of complexity. When complexity is needed, the challenge is to determine how much the added intellectual difficulty detracts from the dependability of the product.

Assurance

Finally, the inherent discreteness of behaviour of digital systems makes it particularly difficult to gain assurance of their reliability. In contrast to conventional mechanical and electrical systems, it is almost invariably impossible to extrapolate from evidence of failure-free operation in one context in order to claim that a system will perform acceptably in another context. It is, of course, almost always infeasible to test all such contexts (inputs). Knowing that software is sufficiently reliable is necessary before we can make intelligent decisions about its use. This is clear for safety-critical systems, where we need to be sure that software (and other) failures will not incur unacceptable loss of human life. It is less clear, but we believe also important, in more mundane applications where, for example, it must be decided whether the trade-off between new functionality and possible loss of reliability is cost-effective. There is abundant anecdotal evidence of financial losses from computer undependability: many users need better estimates both of the frequency and of the possible impact of computer failures. This is part of the general need for better assessment of the effectiveness of automation projects. It is this problem of assurance that has been at the centre of our own research interests; it will thus form a large part of the remainder of the paper.

1.3 Industry demand and concerns

These different factors are common to all software-related industries, but their combinations vary. The baleful impact of novelty is particularly marked in much of the software used for important everyday tasks, like office automation. This is developed and marketed in ways that are closer to fashion-driven consumer goods than to professional tools. Dependability takes very low priority.
New releases are frequent, and tend to include new features to displace the competition and lure customers into making a new purchase. Bugs reported in one release often persist in the next. The user's manual gives ambiguous descriptions of many functions of the software, and their semantics change between releases, or even between different parts of the same software suite. Many functions are used by small subsets of the user population, making many bugs difficult to find and economically uninteresting to fix. Furthermore, the platforms on which the applications run often do not enforce separation between the various applications and the software supporting them, so that failures propagate, reducing system reliability and complicating fault reporting and diagnosis. A feature-dominated development culture is part of a competitive situation in which the time-to-market for new features is perceived by producers as the dominant economic driver. The high salaries commanded by developers indicate that competence is scarce, and familiarity with new technologies for producing software commands a premium over experience and a reliability culture. Thanks to tools like application-specific languages, libraries of components, and spreadsheet and database programming packages, many more people can build complex software-based systems more quickly than was previously possible, often without a formal technical education and without an apprenticeship with more experienced professionals. Compared to more traditional software professionals, these new designers may be as effective at building apparently well-functioning systems, but they are unaware of the accumulated experience of risks and pitfalls in software design, and often lack the skills of abstraction and analysis which have traditionally been necessary in software design. In this kind of market, both producers and users have little scope for a rational approach to dependability.
Vendors do not offer suitable information for comparing products. The reliability of any one application, besides varying with the way it is used (the relative frequencies of the different types of functions it is required to perform and of the inputs to them), depends heavily on the other applications with which it coexists in the same computer. Even for performing very simple tasks we depend on complex software (e.g., to add two columns of numbers we may use a feature-rich spreadsheet program), and hence we obtain lower reliability than we could. Last but not least, cultures have developed in which excessive computer undependability is accepted as, and thus becomes, inevitable. Users of office software, for instance, often perceive the software's behaviour as only approximately predictable. They are often unable to discriminate between their own misunderstandings of procedures and software failures. This adds to the tendency of users to blame themselves rather than the designers of poor or poorly documented systems, and reduces useful feedback to developers. At the other end of the spectrum, software for safety-critical applications is subject to stringent development procedures intended to guarantee its reliability. Costs are much higher, times-to-market longer, innovation slower. Competitive pressures on these factors are resisted by a necessary conservatism in the regulator, the customers and/or the developers. However, little is known about the actual reliability gains from the various assurance techniques employed, about the actual reliability of new (and often even of mature) products, and about the dependability penalties implied by novel complex applications or new features. When regulators lack confidence about the reliability of a new product, licensing delays may ensue, with huge costs.
Different industrial sectors adhere to different standards, and some of the differences are historical accidents, but no scientific argument can be presented for choosing one "best" way of doing something as more effective than a sector's own, apparently satisfactory practices.

2 WHY PROBABILISTIC RELIABILITY?

People who are new to the problems of software reliability often ask why reliability needs to be expressed in terms of probabilities. After all, there is a sense in which the execution of a program is completely deterministic. It is either fault-free, in which case it will never fail; or it does contain faults, in which case any circumstances that cause it to fail once will always cause it to fail. This contrasts with hardware components, which will inevitably fail if we wait long enough, and which can fail randomly in circumstances in which they have previously worked perfectly. Reliability engineers often call failures due to software (and other failures arising from design defects) systematic, to distinguish them from random hardware failures. This terminology is somewhat misleading, inasmuch as it seems to suggest that in the one case an approach involving probabilities is inevitable, but that in the other we might be able to get away with completely deterministic arguments. In fact this is not so, and probability-based reasoning seems inevitable in both cases. When we use the word systematic here, it refers to the fault mechanism, i.e. the mechanism whereby a fault reveals itself as a failure, and not to the failure process. Thus it is correct to say that if a program failed once on a particular input (i.e. a particular set of input values and timings) it would always fail on that input until the offending fault had been successfully removed. It is from this rather limited determinism that the terminology arises. However, our interest really centres upon the failure process: what we see when the system under study - and in particular the software - is used in its operational environment. The software failure process arises from the random uncovering of faults during the execution of successive inputs. We cannot predict with certainty what all future input cases will be, and we do not know the program's faults. We would not know which inputs, of the ones we had not yet executed, would produce a failure if executed (if we did know this, we could use the information to fix the fault). There is inevitable uncertainty in the software failure process, then, for several reasons. This uncertainty can only be captured by probabilistic representations of the failure process: the use of probability-based measures to express our confidence in the reliability of a program is therefore inevitable.
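The distinction between a deterministic failure mechanism and a random failure process can be illustrated with a small simulation. This is a hypothetical sketch, not taken from any real system: the input space, the set of failure-causing inputs and the operational profile are all invented. The program below is perfectly deterministic, yet the failures observed when inputs are drawn at random from an operational profile form a stochastic process.

```python
import random

# A deterministic "program": a fixed (but, to the tester, unknown) set of
# failure-causing inputs in an input space of 10,000 possible inputs.
FAULTY_INPUTS = {17, 4_242, 9_001}

def program_fails(x: int) -> bool:
    """Deterministic: the same input always produces the same outcome."""
    return x in FAULTY_INPUTS

def observed_failures(n_demands: int, seed: int = 0) -> int:
    """Count failures when demands are sampled from a uniform profile."""
    rng = random.Random(seed)
    return sum(program_fails(rng.randrange(10_000)) for _ in range(n_demands))

# The failure count behaves like a binomial random variable with a per-demand
# failure probability of 3/10,000, even though each execution is deterministic.
print(observed_failures(100_000))   # close to 100_000 * 3e-4 = 30
```

The uncertainty lies entirely in which inputs will arrive, not in how the program responds to them, which is exactly why a probabilistic description of the failure process is unavoidable.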
The important point is that the language and mathematics of reliability theory are as appropriate (or inappropriate) for dealing with software reliability as they are for hardware and human reliabilities. This means that it is possible, during the construction of a system, to assign a probabilistic reliability target even when, in the most general case, the system is subject to random hardware failures, human failures, and failures as a result of software or hardware design faults.

3 WHAT LEVELS OF RELIABILITY ARE CURRENTLY ACHIEVABLE?

Clearly, the difficulty of achieving and demonstrating reliability will depend upon the level of reliability that is required. This varies quite markedly from one application to another, and from one industry to another. Some of the most stringent requirements seem to apply to applications involving active control. Software-based flight control systems ("fly-by-wire") in civil airliners fall under the requirement that catastrophic failures be not anticipated to occur over the entire operational life of all airplanes of one type, usually translated as a 10^-9 probability of failure per hour [5]; some railway signalling and train control systems have similarly stringent requirements on the probability of failure per hour [9]. By contrast, safety systems (systems that are only called upon when some controlled system gets into a potentially dangerous state), such as nuclear reactor protection systems, often have relatively modest requirements: for example, some nuclear protection systems have a requirement of 10^-4 probability of failure upon demand (pfd). The most stringent of these requirements look extremely difficult to satisfy, but there is some evidence from earlier systems that very high software reliability has been achieved during extensive operational use.
Reliability data for critical systems are rarely published, but, for instance, measurement-based estimates on some control and monitoring systems give very low failure rates per hour of operation for potentially safety-related functions [15]. An analysis [22] of FAA records (while pointing at the extreme difficulty of extracting trustworthy data) tentatively estimated failure occurrence rates in avionics software to vary in the range 10^-7 to 10^-8 per hour (very high reliability, but short of the 10^-9 level) for systems in which failures prompted the issue of FAA airworthiness directives, and a much lower bound for systems for which no such failures were reported. The AT&T telephone system historically had very high quality-of-service measures, achieved by focusing not only on component reliability but also on extensive redundancy and error detection and recovery capabilities; e.g., the 4ESS switches achieved observed downtime (from all causes) of less than 2 hours per 40 years, or an unavailability of about 5.7 x 10^-6 [4]; and a recent analysis [14] indicated that software failure accounts for only 2% of the telephone service outage time experienced by customers. It is interesting, but perhaps not surprising, that hard evidence about achieved levels of software reliability comes from those industries where the required levels are extremely high: typically these industries have reliability cultures that long preceded the introduction of computer systems. Figures from the newer IT industries are much harder to come by. However, there is much anecdotal evidence of low reliability from the users of PC software, and this viewpoint has not resulted in any authoritative rebuttal from the industry itself. It should be emphasised that the evidence, above, of having achieved extremely high reliability was only available after the event, when the systems had been in operational use for extremely long times.
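The quoted 4ESS downtime figure translates into an unavailability number with a one-line calculation; the only input is the "2 hours per 40 years" figure cited above.

```python
# Unavailability implied by the quoted 4ESS figure: 2 hours of downtime
# accumulated over 40 years of continuous operation.
HOURS_PER_YEAR = 24 * 365.25
unavailability = 2 / (40 * HOURS_PER_YEAR)
print(f"{unavailability:.1e}")   # → 5.7e-06, i.e. availability of about 99.9994%
```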
In fact, for most of these systems, particularly the safety-critical ones, the assurance that the reliability target has been met is needed before the systems are deployed. This remains one of the most difficult problems in software reliability.

4 HOW CAN WE MEASURE AND ASSURE RELIABILITY?

We now consider briefly the different types of evidence that can support pre-operational claims for reliability. In practice, particularly when high levels of reliability need to be assured, it will be necessary to use several sources of evidence to support reliability claims. Combining such disparate evidence to aid decision-making is itself a difficult task and a topic of current research.

4.1 Testing of software under operational conditions

An obvious way to estimate the reliability of a program is to simulate its operational use, noting the times at which failures occur. There has been considerable research on the statistical techniques needed to analyse such data, particularly when faults are removed as they are detected. This reliability growth modelling [2, 20] is probably one of the most successful techniques available: it is now generally possible, given the availability of appropriate data, to obtain accurate estimates of reliability and to know that they are accurate. There are, however, limitations to this approach. In the first place, it is often difficult to create a testing regime that is statistically representative of operational use. This regime can be specified explicitly, by analysing the probabilities of input series in future use, and/or implicitly, by simulation; testing under such a regime is sometimes adopted as advantageous for reliability growth. For reliability assessment, however, doubts will remain as to whether inaccuracies in the testing regime may invalidate the reliability predictions obtained. In some areas - e.g., general office products, management information systems - such experience is lacking, and indeed the products often change the way in which their users operate, so that the operational environment is not stable. Secondly, the reliability growth models tend to assume that fault removal is successful: they can be thought of as sophisticated techniques for trend fitting. They will not capture any short-term reversals of fortune, such as a failure to remove a fault or, worse, the introduction of a new fault. This has serious implications in critical applications, where the possibility that the last fix might have introduced a new fault may be unacceptable. This is the case in the UK nuclear industry, for example, where the conservative assumption is made that any change to a program creates a new program, which must demonstrate its reliability from scratch. Finally, the levels of reliability that can be assured from these kinds of data are quite limited.
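This limitation can be quantified with a simple calculation, a sketch of the standard statistical-testing argument rather than anything specific to the paper: if a program survives n statistically representative demands with no failures, then confidence C that its probability of failure on demand is below p requires (1 - p)^n <= 1 - C.

```python
import math

# Failure-free demands needed for confidence C that the probability of
# failure on demand (pfd) is below p, from the inequality (1 - p)^n <= 1 - C.
def demands_needed(pfd_bound: float, confidence: float) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(1 - pfd_bound))

print(demands_needed(1e-3, 0.99))   # → 4603
print(demands_needed(1e-4, 0.99))   # → 46050: roughly tenfold per decade of pfd
```

Each additional order of magnitude of required reliability multiplies the failure-free test duration by roughly ten, which is why this route to assurance becomes infeasible for the most stringent targets.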
It can be shown that to demonstrate a mean time between failures of x time units using the reliability growth models can require a test of duration several hundred times x time units [19]. Similarly, if we seek a conservative assessment by only considering testing after the last change to the software, then to have, e.g., 99% confidence in 10^-3 pfd would need about 4,600 statistically representative demands to be executed completely failure-free; 99% confidence in 10^-4 pfd would need about 46,000 such demands without failure, and so on. Increasing the reliability level to be demonstrated increases the length of the test series required, until it becomes infeasible.

4.2 Evidence of process quality

Since it is obvious that the quality of a process affects the quality of its product, it is accepted practice that the higher the dependability requirements on a system, the more stringent the quality requirements imposed on its development and validation process. For instance, standards for software for safety-critical systems link sets of recommended or prescribed practices to the level of required reliability. Having applied the recommended practices is then often used as a basis for claiming that the corresponding reliability level has been achieved. Unfortunately, there is no evidence that the former implies the latter. In a parallel development, in recent years there has been increasing emphasis on the contribution of strict control of the software development process to product quality. But again, whilst common sense tells us that it is unlikely for poor development procedures to produce highly reliable software, there is little or no evidence indicating how much benefit can be expected from the use of a good process. Indeed, it is clear that good process can sometimes result in very unreliable products.
Even if we had extensive experience of the relationship between process and product qualities on previous products, it seems likely that this relationship would contain large statistical variation, and thus preclude strong conclusions being drawn about a particular new product. There are similar problems in relating counts (or estimates) of software faults to reliability. Even if we could trust the statistical techniques that estimate the number of faults left in a program, which is doubtful [8], it would not be possible to use this information to obtain accurate reliability predictions. One reason for this is that the sizes of software faults seem to be extremely varied [1]: to know the reliability of a program it is necessary to know both the number of faults remaining and the contribution that each makes to unreliability.

4.3 Evidence from static analysis of the software product

Static analysis techniques clearly have an important role in helping to achieve reliability. It also seems intuitively obvious that they could increase confidence in the reliability of a program. For example, a formal proof that a particular class of fault is not present in a program should make us more confident that it will perform correctly: but how much more confident should we be? More precisely, what does such evidence contribute to a claim that a program has met its reliability target? At present, answers to questions like this are rather informal. For example, the largest MALPAS analysis ever conducted was for the safety system software of the Sizewell B nuclear reactor. This showed up some problems, but it was claimed that none of these had safety implications. On the other hand, certain parts of the system defeated the analysis tool because of their complexity. Thus, whilst some considerable comfort could be taken from the analysis, the picture was not completely clear.
At the end of the day, the contribution of this evidence to the safety case rested on the informed judgement of expert individuals.

4.4 Evidence from software components and structure

Structural models of reliability [3, 17] can allow the reliabilities of the software components of a system to be used to predict the system reliability, as is common in non-software engineering. When COTS components are being used, these component reliabilities can, in principle, be estimated from their previous operational use in other systems. The main difficulties here are actually obtaining such data (which are rarely recorded), and knowing whether the previous reliabilities will apply in the novel context.

5 TRENDS AND RESEARCH CHALLENGES FOR THE FUTURE

Among the challenges that we list here, only some are actually "hard" technical research topics. The difficulties in applying reliability and dependability techniques in current software engineering are quite often cultural rather than technical: a matter of a vast gap of incomprehension between most software experts and dependability experts. For an actual improvement in engineering practice, it is necessary to bridge this gap. This may require more than just goodwill, namely research into its economic, cultural and psychological causes and how to deal with them.

5.1 Focus on User-Centred, System-Level Dependability Qualities

All too often, reliability is described in terms of the compliance of a specific program with its written specifications. This may have paradoxical consequences: if a program was written with imprecise, largely unstated requirements, does this imply that we cannot state reliability requirements for it? The sensible way of approaching reliability is to define failure in terms of a system's effect on its user. For instance, in using a computer to write this article, I have a very clear perception of what would constitute a failure, e.g. the computer reacting to a command with an unexpected change to the text, or its crashing or corrupting a stored file. Measuring the reliability of individual components with respect to component-specific requirements is, in other areas of engineering, a convenient step towards assessing the satisfaction of the user's dependability requirements. It may also be useful for carefully structured software-based systems, ones in which, for instance, altering the options for my e-mail-reading software cannot destroy recent changes to my article. But component-based assessment is not the goal of reliability engineering. For the user, failures are classified by their consequences rather than their causes: it does not matter to me whether I lose my article because the word processor contains a bug, or because the platform allows an unrelated application to interfere with the word processor, or because the manual to the word processor does not explain the side-effects of a certain command.
Actually, most users cannot tell whether a certain undesired behaviour of a word processor is due to a bug or to their misunderstanding of the function of the software. The system I am using to produce the printed article includes the computer with its software as well as myself, and it is the reliability of this system that should be of concern to designers. User-oriented requirements have many dimensions. Thus, traditionally, telephone companies established multiple target bounds: for the frequency of dropped calls, for the frequency and total duration of outages, and so on. Users of an office package have distinct requirements regarding the risks of corruption to stored data, of unintended changes to an open file, or of interruptions of service. All these needs are served by attention to various aspects of design: reliability of application modules, robustness of the platform, support for proper installation and detection of feature interactions, effective detection of run-time problems, informative error messages, and design to facilitate recovery by the user. With an accent on integration rather than ex-novo design, and a climate of feature-dominated frequent upgrades, most system integrators and users find themselves using software whose reliability is difficult to assess and may turn out to be very poor in their specific environments. This increases the importance of resilience or fault tolerance: the ability of systems to limit the damage caused by any failure of their components. Propagating a culture of robust design, and exploring its application in modern processing environments, seems an essential part of improving dependability in the short term. Measuring robustness is essential for trusting systems built out of re-used components. Examples of attempts in this direction are [12, 23], but challenges remain in studying how to obtain robust or conservative estimates given the unknown usage pattern to which the software may be subjected.
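One way to make "measuring robustness" concrete is random-input testing of a component against an assumed usage profile. The sketch below is illustrative only, not any of the specific methods cited above: the component, the input generator and the pass/fail criterion are all invented for the example, and the resulting number is only as trustworthy as the assumed input distribution, which is exactly the difficulty the text points out.

```python
import random

def robustness_estimate(func, input_gen, trials=10_000, seed=0):
    """Crude robustness measure: the fraction of generated inputs
    that the component handles without raising an exception.
    input_gen encodes an assumed usage profile, the unknown that
    makes such estimates hard to trust."""
    rng = random.Random(seed)  # seeded, so the estimate is reproducible
    failures = 0
    for _ in range(trials):
        try:
            func(input_gen(rng))
        except Exception:
            failures += 1
    return 1 - failures / trials

# Toy component under test: extracts the day from a "YYYY-MM-DD" string.
def parse_day(s):
    year, month, day = s.split("-")
    return int(day)

def random_input(rng):
    # A mix of well-formed dates and arbitrary malformed strings.
    if rng.random() < 0.5:
        return f"{rng.randint(1990, 2010)}-{rng.randint(1, 12)}-{rng.randint(1, 28)}"
    return "".join(rng.choice("0123456789-x") for _ in range(rng.randint(0, 12)))

print(robustness_estimate(parse_day, random_input))
```

The seed makes a single estimate repeatable, but a different (equally plausible) `random_input` would yield a different number: "robustness of the component" is really robustness with respect to a usage profile.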
In all these areas, dependability in software in general could benefit from lessons learned in the area of safety: e.g., the need for systematic analysis of risks ("hazards" for the safety engineer) early on during specification and for prioritising dependability demands; the realisation that maintenance and transition phases are an essential and critical part of a system's life; the importance of human factors in both operation and maintenance; the need to understand the genesis of mistakes; and the necessity of fault tolerance (error detection and recovery) and of diversity.

5.2 COTS, re-use, open source

The trend towards greater utilisation of off-the-shelf software offers some promise for both better reliability and better ability to assess it. The effort of high-quality development and assurance activities becomes more affordable if spread over a wider population of users. This does not guarantee that this effort will be made, though: with mass-distributed consumer software, for instance, these economies of scale have been used instead for reducing prices or increasing profits. For dependability-minded customers, like the safety-critical industries, the quality of COTS products is now a major concern. Re-use of COTS components may also pose difficulties and reliability risks if, as is common, the components were not designed within a re-use strategy in the first place. This issue is open to empirical study. An advantage of widely used off-the-shelf components should also be that, if they have already seen much operational use, this experience could be used to forecast their dependability in a new context, allowing the application of the methods mentioned in section 4.4 [3, 17]. We know, however, that this extrapolation may be greatly inaccurate. Characterising differences between usage environments and their effects on reliability is an important research problem.
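By way of illustration, one textbook form that such an extrapolation from operational use could take (a standard reliability-engineering calculation, not one prescribed by this paper) bounds a component's constant failure rate from failure-free operating experience, then combines component bounds under a simple series-system structural model of the kind cited in section 4.4. All the numbers, and the assumptions of a constant failure rate, independent component failures and an unchanged usage environment, are invented for the example; the last assumption is exactly the one the text warns about.

```python
import math

def rate_upper_bound(hours_observed, confidence=0.95):
    """Upper confidence bound on a constant failure rate, given
    failure-free operation for hours_observed hours: with zero
    failures, the (1 - alpha) bound is -ln(alpha) / T (about
    3/T at 95% confidence)."""
    alpha = 1.0 - confidence
    return -math.log(alpha) / hours_observed

def series_system_reliability(rates, mission_hours):
    """Series-system structural model with independent,
    exponentially distributed component lifetimes: the
    component failure rates simply add."""
    return math.exp(-sum(rates) * mission_hours)

# Hypothetical COTS components with documented failure-free field experience.
experience_hours = [50_000, 120_000, 20_000]
bounds = [rate_upper_bound(h) for h in experience_hours]
print([f"{b:.1e}" for b in bounds])

# Conservative system reliability over a 100-hour mission.
print(f"{series_system_reliability(bounds, 100.0):.3f}")
```

Note how strongly the result depends on the documentation of past experience: without recorded operating hours per environment (rarely available, as the text notes) neither step can be carried out.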
Immediate research goals could be simple rules for conservative extrapolation, or criteria for when extrapolation is legitimate, as a function of the characteristics of components, architectures and system use. Another problem is that reliability data from previous uses of COTS items are seldom documented with sufficient accuracy and detail to allow confident predictions. Likewise, the existence of a large user base should help with problem reporting and product improvement, but again this potential would only be realised given sufficient economic incentives. The practical difficulties listed should apply less to software producers that cater to safety-critical applications. Here, there is also a trend towards standardisation and consolidation of product lines, so that developing new applications is increasingly a matter of customisation rather than ad-hoc design. With pressure from the customers, this trend has better chances of realising the promises of the "COTS movement" sooner than in the general market, using the wide diffusion of the same components both to improve the software faster and to measure achieved reliability. A need here is to develop practices for documenting past reliability records that can become accepted standards. Interestingly, many supporters of the "open source" approach claim that it produces improved reliability. It is difficult to verify these claims, and, assuming they are correct, to clearly account for the causes of higher reliability and to determine to how wide a range of products they could be extended. Tapping the expertise of users for diagnosing and even repairing faults is attractive. Customers with high reliability requirements may mistrust the apparently lax centralised control in the open-source process, but even for them disclosing source code offers more informed bug reporting and distributed verification. In a related area, many security experts believe that using secret algorithms is often a self-defeating move for designers, as it deprives them of the advantage of scrutiny by the vocal research community. Clarifying the advantages and disadvantages of the various aspects of the open-source approach on an empirical basis, and, more modestly, exploiting it as a source of data for other reliability research, are two necessary items on the agenda of research in software dependability.

5.3 Design for dependability assessment

The difficulties in assessing software dependability are due in part to the complexity of the functions that we require from software, but also for a large part to design cultures that ignore the need for validation.
Engineers have traditionally accepted that the need to validate a design (to demonstrate beforehand that the implemented system will be serviceable and safe) must constrain design freedom: structures have been limited to forms that could be demonstrated to be acceptably safe, either by extensive empirical knowledge or by the methods of calculation known at the time; the less a new design could be pre-validated on models and prototypes, the more conservative the design had to be; and so on. This restraint has been lost in a large part of the software industries. We list here design practices that have a potential for facilitating validation and thus reliability engineering.

Failure prevention

A generally useful approach is that of eliminating whole classes of failures. One method is proving that certain events cannot happen (provided that the software implementation preserves the properties of the formal description on which the proof is based). Another set of methods uses the platform on which the software runs to guarantee separation of subsystems. Memory protection prevents interference and failure propagation between different application processes. Guaranteed separation between applications has been a major requirement for the integration of multiple software services in a few powerful computers in modern airliners. We recommend [13] for a thorough discussion of separation and composability. It should be noted that these methods can support one another. For example, hardware-level separation between applications prevents some departures from the behaviour assumed in formal proofs of "correctness" based on high-level descriptions. Exploiting this synergy for dependability assessment is a possibility that has not been explored, although a suitable approach is described in [21]. These methods favour dependability engineering in multiple ways. First of all, they directly increase reliability by reducing the frequency or severity of failures.
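As a toy illustration of separation as failure containment (using ordinary operating-system process isolation, which is what a short sketch can show; the guaranteed partitioning used in avionics is far stronger), the caller below both survives and detects the failure of a component run in its own address space. The component and its failure are invented for the example.

```python
import multiprocessing as mp

def flaky_component():
    # Stands in for any component whose failure must be contained.
    raise RuntimeError("component failure")

def run_isolated(target):
    """Run a component in a separate process so that its failure
    cannot corrupt the caller's state; a non-zero exit code
    becomes a detected, contained failure."""
    proc = mp.Process(target=target)
    proc.start()
    proc.join()
    return proc.exitcode == 0

if __name__ == "__main__":
    survived = run_isolated(flaky_component)
    # The caller is still running, with its own state intact.
    print("component succeeded:", survived)
```

The same boundary that contains the failure also makes it observable (the exit code), which is the synergy between failure prevention and failure detection discussed in the text.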
Run-time protections may also detect faults before they cause serious failures. After failures, they make fault diagnosis easier, and thus accelerate reliability improvements. For dependability assessment, they reduce the uncertainties with which the assessor has to cope. The probability of some classes of failures becomes lower than the probability of, e.g., an error in a proof or a failure of a hardware protection mechanism, and thus often negligible in comparison to the probabilities of other software failure modes. So, for instance, sufficient separation between running applications means that when we port an application to a new platform, we can trust its failure rate to equal that experienced in similar use on a previous platform plus that of the new platform, rather than being also affected by the specific combination of other applications present on the new platform. This brings closer the possibility of applying to software the structure-based reliability models common in non-software engineering (cf. Section 4.4). Some difficulties typical of software would remain (failure dependence between subsystems, wide variation of reliability with the usage environment), but the range of applicability of structure-based models would certainly increase.

System monitoring

Testing for reliability assessment can also be aided by software designers. They can simplify the space of demands on the software, which testers need to sample, and simplify the set of variables that the test designer must understand in order to build a realistic sample of the usage profile of the software.
For instance, periodic resets limit the traces of operation of the software to finite lengths; subsystem separation reduces the number of variables affecting the behaviour of each subsystem; and elements of defensive and fault-tolerant programming (assertions for reasonableness checks, interface checks, auditing of data structures) improve the ability to detect errors and failures, so that failure counts from testing become more trustworthy (cf. [10]). Error detection techniques have an important role throughout the lifetime of systems. No matter how thoroughly a system has been assessed before use, information from its actual failure behaviour in use is precious. Reported errors and failures can lead to faults being corrected. For instance, the civil aviation industry has procedures for incident reporting and promulgation of corrections to equipment and procedures that contribute to its general safety. Besides improving dependability, monitoring is useful for improving its assessment. For instance, when a safety-critical system starts operation, the assurance of its being sufficiently safe is affected by various uncertainties. Even if it has been tested in realistic conditions, a prediction of the probability of future accidents is only possible with rather wide bounds, due both to the possibility that actual use will differ from predicted use, and to the fact that the period of test was limited. As operation continues, both factors of uncertainty are reduced (in a way that is easily captured by mathematical formulations for the latter, and requires more ad hoc, informal reasoning for the former). Monitoring requires a technical component (effective means for automatically detecting and logging problems) and an organisational component (procedures and incentives for the data thus logged to be collected and analysed). The technical means have been around for a long time. The organisational part is more difficult. Experience teaches that a vendor's dedication may not be enough, as users may be selective in reporting failures. However, given the will, a vendor of even, say, personal computer operating systems could reach the point of being able to advertise the reliability of the operating system using truthful measurements from the field. The technical means are there. All these approaches come together when we consider the "COTS problem". When integrating a COTS subsystem in a new system with explicit dependability requirements, it would seem natural for a designer to require assurance in some appropriate form: possibly, a documented proof of correctness from specified viewpoints, and certainly an indication of the forms of monitoring applied in previous uses and the reliability data thus collected. Thus, for instance, the price of COTS components could increase with documented experience as the risk of using them decreases, allowing more efficient cost-effectiveness decisions for dependability-minded designers.

5.4 Propagating awareness of dependability issues and the use of existing, useful methods

It is common for computer scientists to complain about the poor quality of current software, and for vendors to reply that their choices are dictated by their customers.
Without judging where the truth lies between these somewhat self-serving positions, it seems clear to us that society would benefit from greater awareness of software dependability problems. There is room for great improvements among users, both end users and system integrators.

Public perception of software dependability

On New Year's Day, 2000, media reports proclaimed that very few computer systems had failed, and thus that the huge "Y2K" expenditure had been wasted. These reports show ignorance of a few facts: computer failures need not be immediately obvious, like "crashes", and may be hard to detect; knowing the approximate form of a software fault (a "Y2K" fault) does not mean knowing when it will cause a failure; and, since computers are state machines, they may store an erroneous state now which will cause a failure after a long time of proper service. Last but far from least, knowledge about dependability is always uncertain, and investing in reducing this uncertainty is often worthwhile. Increased awareness of these issues would certainly allow users to better address system procurement, to prepare and defend themselves against the effects of failures, and to better report problems and requirements to vendors.

Design culture

With the blurring of the separation between professional software developers and users, these misperceptions increasingly affect system development. But even professional developers often lack education in dependability, both from academic learning and from their workplace environment. The RISKS archives are a useful source for new and old developers, users and educators. They document both useful lists of common problems, for those who wish to learn from historical memory, and the lack of this shared memory for many users and developers. Many reported problems stem from repeated, well-known design oversights (e.g., "buffer overflow" security vulnerabilities).
The same cultural problems show up again and again: lack of risk analysis and of provision of fall-backs and redundancy, and focus on a technical subsystem without system-level consideration of risks.

Management culture

Assessing dependability, and taking high-level engineering decisions to achieve it, run into different problems. Here we deal with uncertainty, requiring an understanding of probability and statistics applied to rather subtle questions. Managers who come from software design do not usually have an appropriate background. The errors in applying theoretical results to decision-making are often very basic: ignoring the limits of the methods (e.g., accepting clearly unbelievable predictions of ultra-high reliability [16], or trusting failure probability estimates to multiple significant digits); misusing one-number measures (e.g., using an MTTF comparison to choose a system for which the main requirement is availability over short missions: a serious error); embracing methods from the scientific literature which have been proven inadequate (e.g., requiring a vendor to estimate reliability by a specific method that errs in favour of the vendor). The knowledge that decision-makers need concerns the basic concepts of dependability and uncertainty, awareness of the misunderstandings that arise between software and reliability specialists, and the need to probe the bases of well-packaged decision-support methods. Perhaps the most serious challenge for the reliability engineer is in delimiting the role of probabilistic treatments of dependability: on the one hand, clarifying the limits of the possible knowledge of the future; on the other hand, pointing out that if we really want to examine what we know, some formalism is an indispensable support for rigorous thought. In some industries, labels like "10^-9 probability of failure" are now applied without much consideration of what evidence would really be required for claiming them.
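The MTTF pitfall mentioned above is easy to demonstrate with a toy calculation (both systems and all numbers are invented for illustration): a system with a ninety-fold higher MTTF can still be the wrong choice when the requirement is surviving a short mission.

```python
import math

# System A: exponentially distributed failures, MTTF = 100 hours.
# System B: a start-up defect makes it fail immediately with
#   probability 0.1; otherwise it runs for 10,000 hours, so
#   MTTF(B) = 0.9 * 10,000 = 9,000 hours -- 90 times MTTF(A).

def reliability_A(mission_hours):
    # Probability of surviving the mission, exponential lifetime.
    return math.exp(-mission_hours / 100.0)

def reliability_B(mission_hours):
    # Survives the mission iff the start-up defect does not strike.
    return 0.9 if mission_hours < 10_000 else 0.0

mttf_A, mttf_B = 100.0, 0.9 * 10_000

print(f"MTTF: A = {mttf_A} h, B = {mttf_B} h")
print(f"1-hour mission: A = {reliability_A(1.0):.3f}, B = {reliability_B(1.0):.3f}")
# A wins the short mission (0.990 vs 0.900) despite a far lower MTTF.
```

A single summary number hides the shape of the failure distribution; the decision has to start from the actual requirement (here, mission reliability), not from whichever measure happens to be advertised.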
In practice, this probabilistic labelling is a conventional exercise, even where there is the most serious attention to safety. The challenge is to make practitioners accept that a well-founded claim of "better than 10^-4" would be more useful to them, and to persuade the public not to see such a change as a change for the worse.

5.5 Diversity and variation as drivers of dependability

"Functional diversity" or, less frequently, "design diversity" are common approaches to ensuring the safety or reliability of critical systems. The study of their effectiveness is still open: while they have been shown to deliver reliability improvements, the evidence about their cost-effectiveness and their limits, compared to other techniques, is still as primitive as for most software engineering techniques. Further study is certainly

Citation: Littlewood, B. & Strigini, L. (2000). Software reliability and dependability: a roadmap. In: A. Finkelstein (Ed.), The Future of Software Engineering. ACM Press.

More information

ARGUING THE SAFETY OF MACHINE LEARNING FOR HIGHLY AUTOMATED DRIVING USING ASSURANCE CASES LYDIA GAUERHOF BOSCH CORPORATE RESEARCH

ARGUING THE SAFETY OF MACHINE LEARNING FOR HIGHLY AUTOMATED DRIVING USING ASSURANCE CASES LYDIA GAUERHOF BOSCH CORPORATE RESEARCH ARGUING THE SAFETY OF MACHINE LEARNING FOR HIGHLY AUTOMATED DRIVING USING ASSURANCE CASES 14.12.2017 LYDIA GAUERHOF BOSCH CORPORATE RESEARCH Arguing Safety of Machine Learning for Highly Automated Driving

More information

SAFETY CASES: ARGUING THE SAFETY OF AUTONOMOUS SYSTEMS SIMON BURTON DAGSTUHL,

SAFETY CASES: ARGUING THE SAFETY OF AUTONOMOUS SYSTEMS SIMON BURTON DAGSTUHL, SAFETY CASES: ARGUING THE SAFETY OF AUTONOMOUS SYSTEMS SIMON BURTON DAGSTUHL, 17.02.2017 The need for safety cases Interaction and Security is becoming more than what happens when things break functional

More information

Chapter IV SUMMARY OF MAJOR FEATURES OF SEVERAL FOREIGN APPROACHES TO TECHNOLOGY POLICY

Chapter IV SUMMARY OF MAJOR FEATURES OF SEVERAL FOREIGN APPROACHES TO TECHNOLOGY POLICY Chapter IV SUMMARY OF MAJOR FEATURES OF SEVERAL FOREIGN APPROACHES TO TECHNOLOGY POLICY Chapter IV SUMMARY OF MAJOR FEATURES OF SEVERAL FOREIGN APPROACHES TO TECHNOLOGY POLICY Foreign experience can offer

More information

Masao Mukaidono Emeritus Professor, Meiji University

Masao Mukaidono Emeritus Professor, Meiji University Provisional Translation Document 1 Second Meeting Working Group on Voluntary Efforts and Continuous Improvement of Nuclear Safety, Advisory Committee for Natural Resources and Energy 2012-8-15 Working

More information

Opinion-based essays: prompts and sample answers

Opinion-based essays: prompts and sample answers Opinion-based essays: prompts and sample answers 1. Health and Education Prompt Recent research shows that the consumption of junk food is a major factor in poor diet and this is detrimental to health.

More information

IEEE IoT Vertical and Topical Summit - Anchorage September 18th-20th, 2017 Anchorage, Alaska. Call for Participation and Proposals

IEEE IoT Vertical and Topical Summit - Anchorage September 18th-20th, 2017 Anchorage, Alaska. Call for Participation and Proposals IEEE IoT Vertical and Topical Summit - Anchorage September 18th-20th, 2017 Anchorage, Alaska Call for Participation and Proposals With its dispersed population, cultural diversity, vast area, varied geography,

More information

COMPETITIVE ADVANTAGES AND MANAGEMENT CHALLENGES. by C.B. Tatum, Professor of Civil Engineering Stanford University, Stanford, CA , USA

COMPETITIVE ADVANTAGES AND MANAGEMENT CHALLENGES. by C.B. Tatum, Professor of Civil Engineering Stanford University, Stanford, CA , USA DESIGN AND CONST RUCTION AUTOMATION: COMPETITIVE ADVANTAGES AND MANAGEMENT CHALLENGES by C.B. Tatum, Professor of Civil Engineering Stanford University, Stanford, CA 94305-4020, USA Abstract Many new demands

More information

Cognitive Systems Engineering

Cognitive Systems Engineering Chapter 5 Cognitive Systems Engineering Gordon Baxter, University of St Andrews Summary Cognitive systems engineering is an approach to socio-technical systems design that is primarily concerned with the

More information

Blade Tip Timing Frequently asked Questions. Dr Pete Russhard

Blade Tip Timing Frequently asked Questions. Dr Pete Russhard Blade Tip Timing Frequently asked Questions Dr Pete Russhard Rolls-Royce plc 2012 The information in this document is the property of Rolls-Royce plc and may not be copied or communicated to a third party,

More information

A Roadmap for Connected & Autonomous Vehicles. David Skipp Ford Motor Company

A Roadmap for Connected & Autonomous Vehicles. David Skipp Ford Motor Company A Roadmap for Connected & Autonomous Vehicles David Skipp Ford Motor Company ! Why does an Autonomous Vehicle need a roadmap? Where might the roadmap take us? What should we focus on next? Why does an

More information

Logic Solver for Tank Overfill Protection

Logic Solver for Tank Overfill Protection Introduction A growing level of attention has recently been given to the automated control of potentially hazardous processes such as the overpressure or containment of dangerous substances. Several independent

More information

Research strategy LUND UNIVERSITY

Research strategy LUND UNIVERSITY Research strategy 2017 2021 LUND UNIVERSITY 2 RESEARCH STRATEGY 2017 2021 Foreword 2017 is the first year of Lund University s 10-year strategic plan. Research currently constitutes the majority of the

More information

Must the Librarian Be Underdog?

Must the Librarian Be Underdog? RONALD W. BRADY Vice-President for Administration University of Illinois Urbana-Champaign, Illinois Negotiating for Computer Services: Must the Librarian Be Underdog? NEGOTIATING FOR COMPUTER SERVICES

More information

Quality Digest November

Quality Digest November Quality Digest November 2002 1 By Stephen Birman, Ph.D. I t seems an easy enough problem: Control the output of a metalworking operation to maintain a CpK of 1.33. Surely all you have to do is set up a

More information

Antenie Carstens National Library of South Africa. address:

Antenie Carstens National Library of South Africa.  address: Submitted on: 15/06/2017 Planning digitising projects with reference to acquiring appropriate equipment for the project and the quality management process using case studies in South Africa Antenie Carstens

More information

Design and technology

Design and technology Design and technology Programme of study for key stage 3 and attainment target (This is an extract from The National Curriculum 2007) Crown copyright 2007 Qualifications and Curriculum Authority 2007 Curriculum

More information

Pan-Canadian Trust Framework Overview

Pan-Canadian Trust Framework Overview Pan-Canadian Trust Framework Overview A collaborative approach to developing a Pan- Canadian Trust Framework Authors: DIACC Trust Framework Expert Committee August 2016 Abstract: The purpose of this document

More information

GLOBAL ICT REGULATORY OUTLOOK EXECUTIVE SUMMARY

GLOBAL ICT REGULATORY OUTLOOK EXECUTIVE SUMMARY GLOBAL ICT REGULATORY OUTLOOK 2017 EXECUTIVE SUMMARY EXECUTIVE SUMMARY Over past decades the world has witnessed a digital revolution that is ushering in huge change. The rate of that change continues

More information

UN-GGIM Future Trends in Geospatial Information Management 1

UN-GGIM Future Trends in Geospatial Information Management 1 UNITED NATIONS SECRETARIAT ESA/STAT/AC.279/P5 Department of Economic and Social Affairs October 2013 Statistics Division English only United Nations Expert Group on the Integration of Statistical and Geospatial

More information

Executive Summary Industry s Responsibility in Promoting Responsible Development and Use:

Executive Summary Industry s Responsibility in Promoting Responsible Development and Use: Executive Summary Artificial Intelligence (AI) is a suite of technologies capable of learning, reasoning, adapting, and performing tasks in ways inspired by the human mind. With access to data and the

More information

THE LABORATORY ANIMAL BREEDERS ASSOCIATION OF GREAT BRITAIN

THE LABORATORY ANIMAL BREEDERS ASSOCIATION OF GREAT BRITAIN THE LABORATORY ANIMAL BREEDERS ASSOCIATION OF GREAT BRITAIN www.laba-uk.com Response from Laboratory Animal Breeders Association to House of Lords Inquiry into the Revision of the Directive on the Protection

More information

Years 5 and 6 standard elaborations Australian Curriculum: Design and Technologies

Years 5 and 6 standard elaborations Australian Curriculum: Design and Technologies Purpose The standard elaborations (SEs) provide additional clarity when using the Australian Curriculum achievement standard to make judgments on a five-point scale. They can be used as a tool for: making

More information

Software Maintenance Cycles with the RUP

Software Maintenance Cycles with the RUP Software Maintenance Cycles with the RUP by Philippe Kruchten Rational Fellow Rational Software Canada The Rational Unified Process (RUP ) has no concept of a "maintenance phase." Some people claim that

More information

A Science & Innovation Audit for the West Midlands

A Science & Innovation Audit for the West Midlands A Science & Innovation Audit for the West Midlands June 2017 Summary Report Key Findings and Moving Forward 1. Key findings and moving forward 1.1 As the single largest functional economic area in England

More information

Rethinking Software Process: the Key to Negligence Liability

Rethinking Software Process: the Key to Negligence Liability Rethinking Software Process: the Key to Negligence Liability Clark Savage Turner, J.D., Ph.D., Foaad Khosmood Department of Computer Science California Polytechnic State University San Luis Obispo, CA.

More information

COPYRIGHTED MATERIAL. Introduction. 1.1 Important Definitions

COPYRIGHTED MATERIAL. Introduction. 1.1 Important Definitions 1 Introduction In modern, complex telecommunications systems, quality is not something that can be added at the end of the development. Neither can quality be ensured just by design. Of course, designing

More information

Transferring knowledge from operations to the design and optimization of work systems: bridging the offshore/onshore gap

Transferring knowledge from operations to the design and optimization of work systems: bridging the offshore/onshore gap Transferring knowledge from operations to the design and optimization of work systems: bridging the offshore/onshore gap Carolina Conceição, Anna Rose Jensen, Ole Broberg DTU Management Engineering, Technical

More information

Computer Science: Who Cares? Computer Science: It Matters. Computer Science: Disciplines

Computer Science: Who Cares? Computer Science: It Matters. Computer Science: Disciplines Computer Science: Who Cares? Computer Graphics (1970 s): One department, at one university Several faculty, a few more students $5,000,000 grant from ARPA Original slides by Chris Wilcox, Edited and extended

More information

Scenario Planning edition 2

Scenario Planning edition 2 1 Scenario Planning Managing for the Future 2 nd edition first published in 2006 Gill Ringland Electronic version (c) Gill Ringland: gill.ringland@samiconsulting.co.uk.: this has kept to the original text

More information

Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.

Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Editor's Note Author(s): Ragnar Frisch Source: Econometrica, Vol. 1, No. 1 (Jan., 1933), pp. 1-4 Published by: The Econometric Society Stable URL: http://www.jstor.org/stable/1912224 Accessed: 29/03/2010

More information

UNIT-III LIFE-CYCLE PHASES

UNIT-III LIFE-CYCLE PHASES INTRODUCTION: UNIT-III LIFE-CYCLE PHASES - If there is a well defined separation between research and development activities and production activities then the software is said to be in successful development

More information

Revolutionizing Engineering Science through Simulation May 2006

Revolutionizing Engineering Science through Simulation May 2006 Revolutionizing Engineering Science through Simulation May 2006 Report of the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science EXECUTIVE SUMMARY Simulation refers to

More information

THE FUTURE OF DATA AND INTELLIGENCE IN TRANSPORT

THE FUTURE OF DATA AND INTELLIGENCE IN TRANSPORT THE FUTURE OF DATA AND INTELLIGENCE IN TRANSPORT Humanity s ability to use data and intelligence has increased dramatically People have always used data and intelligence to aid their journeys. In ancient

More information

Instrumentation and Control

Instrumentation and Control Instrumentation and Control Program Description Program Overview Instrumentation and control (I&C) systems affect all areas of plant operation and can profoundly impact plant reliability, efficiency, and

More information

Managing the process towards a new library building. Experiences from Utrecht University. Bas Savenije. Abstract

Managing the process towards a new library building. Experiences from Utrecht University. Bas Savenije. Abstract Managing the process towards a new library building. Experiences from Utrecht University. Bas Savenije Abstract In September 2004 Utrecht University will open a new building for the university library.

More information

Copyright: Conference website: Date deposited:

Copyright: Conference website: Date deposited: Coleman M, Ferguson A, Hanson G, Blythe PT. Deriving transport benefits from Big Data and the Internet of Things in Smart Cities. In: 12th Intelligent Transport Systems European Congress 2017. 2017, Strasbourg,

More information

THE IMPLICATIONS OF THE KNOWLEDGE-BASED ECONOMY FOR FUTURE SCIENCE AND TECHNOLOGY POLICIES

THE IMPLICATIONS OF THE KNOWLEDGE-BASED ECONOMY FOR FUTURE SCIENCE AND TECHNOLOGY POLICIES General Distribution OCDE/GD(95)136 THE IMPLICATIONS OF THE KNOWLEDGE-BASED ECONOMY FOR FUTURE SCIENCE AND TECHNOLOGY POLICIES 26411 ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT Paris 1995 Document

More information

SAFETY CASE PATTERNS REUSING SUCCESSFUL ARGUMENTS. Tim Kelly, John McDermid

SAFETY CASE PATTERNS REUSING SUCCESSFUL ARGUMENTS. Tim Kelly, John McDermid SAFETY CASE PATTERNS REUSING SUCCESSFUL ARGUMENTS Tim Kelly, John McDermid Rolls-Royce Systems and Software Engineering University Technology Centre Department of Computer Science University of York Heslington

More information

Technology Readiness Level assessment and Capability Route Mapping for Special Engineered Structures for a Far Infrared Space Telescope

Technology Readiness Level assessment and Capability Route Mapping for Special Engineered Structures for a Far Infrared Space Telescope 1 Technology Readiness Level assessment and Capability Route Mapping for Special Engineered Structures for a Far Infrared Space Telescope Alison J. McMillan Glyndwr University 15-17 December 2015 FISICA

More information

Industrial Experience with SPARK. Praxis Critical Systems

Industrial Experience with SPARK. Praxis Critical Systems Industrial Experience with SPARK Roderick Chapman Praxis Critical Systems Outline Introduction SHOLIS The MULTOS CA Lockheed C130J A less successful project Conclusions Introduction Most Ada people know

More information

Executive Summary. Chapter 1. Overview of Control

Executive Summary. Chapter 1. Overview of Control Chapter 1 Executive Summary Rapid advances in computing, communications, and sensing technology offer unprecedented opportunities for the field of control to expand its contributions to the economic and

More information

SENSORS SESSION. Operational GNSS Integrity. By Arne Rinnan, Nina Gundersen, Marit E. Sigmond, Jan K. Nilsen

SENSORS SESSION. Operational GNSS Integrity. By Arne Rinnan, Nina Gundersen, Marit E. Sigmond, Jan K. Nilsen Author s Name Name of the Paper Session DYNAMIC POSITIONING CONFERENCE 11-12 October, 2011 SENSORS SESSION By Arne Rinnan, Nina Gundersen, Marit E. Sigmond, Jan K. Nilsen Kongsberg Seatex AS Trondheim,

More information

How Explainability is Driving the Future of Artificial Intelligence. A Kyndi White Paper

How Explainability is Driving the Future of Artificial Intelligence. A Kyndi White Paper How Explainability is Driving the Future of Artificial Intelligence A Kyndi White Paper 2 The term black box has long been used in science and engineering to denote technology systems and devices that

More information

Engaging UK Climate Service Providers a series of workshops in November 2014

Engaging UK Climate Service Providers a series of workshops in November 2014 Engaging UK Climate Service Providers a series of workshops in November 2014 Belfast, London, Edinburgh and Cardiff Four workshops were held during November 2014 to engage organisations (providers, purveyors

More information

Computer Science: Disciplines. What is Software Engineering and why does it matter? Software Disasters

Computer Science: Disciplines. What is Software Engineering and why does it matter? Software Disasters Computer Science: Disciplines What is Software Engineering and why does it matter? Computer Graphics Computer Networking and Security Parallel Computing Database Systems Artificial Intelligence Software

More information

Our digital future. SEPA online. Facilitating effective engagement. Enabling business excellence. Sharing environmental information

Our digital future. SEPA online. Facilitating effective engagement. Enabling business excellence. Sharing environmental information Our digital future SEPA online Facilitating effective engagement Sharing environmental information Enabling business excellence Foreword Dr David Pirie Executive Director Digital technologies are changing

More information

Infrastructure for Systematic Innovation Enterprise

Infrastructure for Systematic Innovation Enterprise Valeri Souchkov ICG www.xtriz.com This article discusses why automation still fails to increase innovative capabilities of organizations and proposes a systematic innovation infrastructure to improve innovation

More information

PROCESS-VOLTAGE-TEMPERATURE (PVT) VARIATIONS AND STATIC TIMING ANALYSIS

PROCESS-VOLTAGE-TEMPERATURE (PVT) VARIATIONS AND STATIC TIMING ANALYSIS PROCESS-VOLTAGE-TEMPERATURE (PVT) VARIATIONS AND STATIC TIMING ANALYSIS The major design challenges of ASIC design consist of microscopic issues and macroscopic issues [1]. The microscopic issues are ultra-high

More information

Years 9 and 10 standard elaborations Australian Curriculum: Design and Technologies

Years 9 and 10 standard elaborations Australian Curriculum: Design and Technologies Purpose The standard elaborations (SEs) provide additional clarity when using the Australian Curriculum achievement standard to make judgments on a five-point scale. They can be used as a tool for: making

More information

Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters

Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters Achieving Desirable Gameplay Objectives by Niched Evolution of Game Parameters Scott Watson, Andrew Vardy, Wolfgang Banzhaf Department of Computer Science Memorial University of Newfoundland St John s.

More information

Validation and Verification of Field Programmable Gate Array based systems

Validation and Verification of Field Programmable Gate Array based systems Validation and Verification of Field Programmable Gate Array based systems Dr Andrew White Principal Nuclear Safety Inspector, Office for Nuclear Regulation, UK Objectives Purpose and activities of the

More information

The Citizen View of Government Digital Transformation 2017 Findings

The Citizen View of Government Digital Transformation 2017 Findings WHITE PAPER The Citizen View of Government Digital Transformation 2017 Findings Delivering Transformation. Together. Shining a light on digital public services Digital technologies are fundamentally changing

More information

Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit April 2018.

Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit April 2018. Assessment of Smart Machines and Manufacturing Competence Centre (SMACC) Scientific Advisory Board Site Visit 25-27 April 2018 Assessment Report 1. Scientific ambition, quality and impact Rating: 3.5 The

More information

CHAPTER 8 RESEARCH METHODOLOGY AND DESIGN

CHAPTER 8 RESEARCH METHODOLOGY AND DESIGN CHAPTER 8 RESEARCH METHODOLOGY AND DESIGN 8.1 Introduction This chapter gives a brief overview of the field of research methodology. It contains a review of a variety of research perspectives and approaches

More information