Experiences in developing and applying a software engineering technology testbed


Empir Software Eng (2009) 14
INDUSTRY EXPERIENCE REPORT

Experiences in developing and applying a software engineering technology testbed

Alexander Lam · Barry Boehm

Published online: 11 November 2008
© Springer Science + Business Media, LLC 2008
Editor: Audris Mockus

Abstract A major problem in empirical software engineering is to determine or ensure comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed. The testbed was designed to ensure comparability of empirical data used to evaluate alternative software engineering technologies, and to accelerate technology maturation and transition into project use. The requirements for such software engineering technology testbeds include not only the specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments. The requirements and architecture for a particular software engineering technology testbed, built to help NASA evaluate its investments in software dependability research and technology, have been developed and applied to evaluate a wide range of technologies. The technologies evaluated came from the fields of architecture, testing, state-model checking, and operational envelopes. This paper presents for the first time the requirements and architecture of the software engineering technology testbed. The results of the technology evaluations are analyzed from the point of view of how researchers benefited from using the SETT; in their original findings, the researchers reported only how their technologies performed.
The testbed evaluation showed (1) that certain technologies were complementary and cost-effective to apply; (2) that the testbed was cost-effective for researchers to use within a well-specified domain of applicability; (3) that collaboration in testbed use by researchers and practitioners resulted in comparable empirical data and in actions to accelerate technology maturation and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the software engineering technology testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.

A. Lam · B. Boehm
University of Southern California, Los Angeles, CA 90089, USA
A. Lam e-mail: alexankl@usc.edu
B. Boehm e-mail: boehm@usc.edu

Keywords: Testbed · Software maturity · Software adoption · Technology evaluation · Technology transition

1 Introduction

A major problem in empirical software engineering is to determine or ensure comparability across multiple sources of empirical data. One approach is to provide a number of parameters to characterize each data source, and to use only data from sources comparable to one's decision situation. However, this may leave users with a relatively small set of relevant data upon which to base decisions. The U.S. National Aeronautics and Space Administration (NASA) recently faced a particularly strong challenge in this area. It placed a high priority on research and technology to improve software dependability for its space missions, and wished not only to compare the abilities of research products to improve software dependability, but also to accelerate the maturation and adoption of those research products. When an organization like NASA sends its software to space, the software has to be dependable. If the software performs the wrong function, the mission could end, which would be a huge loss to the organization. However, developing defect-free software is a complex problem. There are many technologies available to help software engineers identify defects, but choosing the right technology can be difficult. There are many questions to ask of a technology, such as "How does one know if the technology does what it claims?", "Is the technology mature enough for use?", and "Will the technology work on my system?" Furthermore, technology researchers face their own problems in getting their technology adopted. Dependability researchers have to prove that their technology can help decrease defects in the software system, that their technology is cost-effective with respect to alternative technologies, and that the technology will work for the end user in a real-mission situation.
This paper reports on experiences in developing for NASA a full-service, mission-relevant, dependability-technology-oriented software engineering technology testbed (SETT) to help researchers mature technologies and to help practitioners evaluate alternative software engineering technologies. The paper also addresses the use of the SETT in accelerating technology maturation and adoption, and provides the requirements and architecture for a SETT. For this paper, the primary dimension of dependability examined is the number and types of defects in a software system representative of NASA planetary exploration vehicles. The structure of this paper is as follows. Section 2 explains the obstacles to technology adoption and how SETTs address them. Section 3 explains the requirements and architecture of a software engineering technology testbed, which have not been discussed in our prior publications. Section 4 details how researchers would configure a SETT to evaluate a technology. Section 5 provides the results of using an instance of the software engineering technology testbed to evaluate several technologies, and the limitations of that instance; it presents a more detailed analysis of the evaluation results than prior publications of the SCRover work. Section 6 describes the benefits researchers received from using the SETT's features. Finally, conclusions are presented in Section 7.

2 Motivation

Redwine and Riddle state that it takes about 15 to 20 years to mature a technology to the point where it gets used by practitioners. They provide several critical factors that explain why technology maturation can take so long (Redwine and Riddle 1985). This section will

outline those critical factors and indicate how a software engineering technology testbed can address them in order to increase the speed of technology maturation and adoption. One of the critical factors in getting users to adopt new technologies is that there is little relevant data on prior experiences demonstrating positive feedback on a technology. This lack of an experience base makes software engineers hesitant to use new technologies. One requirement for a software engineering technology testbed would be to have an experience base of prior experiences, both positive and negative, for each technology, providing software engineers an indication of how well the technology worked on a representative software system. The experience base would contain information such as, but not limited to, the effectiveness of the technology at finding defects, what types of defects it found, the training time needed to learn the technology, and a description of the technology. By analyzing this information, a software engineer would be able to gauge how well the technology would work on their project and evaluate alternative software engineering technologies. At times, a practitioner may not know whether two or more technologies are complementary; the technologies may find the same set of defects. With an experience base, practitioners can decide whether two or more technologies are complementary. In addition, researchers who use the testbed to evaluate their technology would be able to add their experiences/results to the experience base for practitioners to view. Another critical factor according to Redwine and Riddle is conceptual integrity. By using the software engineering technology testbed, researchers can demonstrate that a technology is well developed by applying it to a representative software system and finding the (seeded) defects.
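The experience base described above can be pictured as a searchable collection of evaluation records. The sketch below is only an illustration under assumed field names (it is not the SCRover implementation or schema); it shows the kind of record the paper describes and how a practitioner might test whether two technologies are complementary.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationRecord:
    """One technology evaluation report in the experience base.
    Field names are hypothetical, chosen to mirror the paper's list:
    effectiveness, defect types found, training time, description."""
    technology: str
    description: str
    defect_types_found: set = field(default_factory=set)
    defects_found: int = 0
    defects_seeded: int = 0
    training_hours: float = 0.0

    def detection_yield(self) -> float:
        """Fraction of seeded defects the technology detected."""
        return self.defects_found / self.defects_seeded if self.defects_seeded else 0.0

def complementary(a: EvaluationRecord, b: EvaluationRecord) -> bool:
    """Two technologies are complementary if each finds defect
    types (categories) that the other misses."""
    return bool(a.defect_types_found - b.defect_types_found) and \
           bool(b.defect_types_found - a.defect_types_found)
```

A practitioner comparing two records would thus see not only each technology's yield but whether combining them covers more defect categories than either alone.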
If the technology is unable to find the seeded defects or significant additional defects in the representative system, then the researcher will need to develop/mature the technology further before it can be used by the technical community. A third challenge to technology adoption is showing a clear recognition of need for the researchers' (dependability) technology. By applying their dependability technology to the software engineering technology testbed, researchers can demonstrate to practitioners how well their technology can identify certain classes of defects in a system, thereby showing practitioners why they should use the dependability technology in their software development. The fourth critical factor outlined by Redwine and Riddle is tunability. The researcher needs to show that a technology can be tuned to fit the user's needs. By adapting the technology to the software engineering technology testbed, researchers are able to demonstrate what activities will need to be done to tune a technology to work on an organization's software systems. Finally, Redwine and Riddle mentioned lack of training for the new technology as an impediment to technology transition. The experience base mentioned previously should also collect and maintain training information, such as how the technology was applied to the testbed, how long it took to apply the technology, and how much training was involved before using the technology.

2.1 A Look at Other Testbeds

Before NASA funded the High Dependability Computing Program (HDCP) to determine how to evaluate a dependability technology, there were no established full-service software engineering testbeds in existence that could help an organization fully evaluate dependability technologies. A few testbeds existed, such as the RoboCup (2007)

competition and the DETER testbed (Benzel et al. 2007), that could evaluate technologies, but these testbeds have their disadvantages. RoboCup is a head-to-head soccer competition between two research teams; the team that scores the most goals wins. The disadvantages of using RoboCup as a SETT are numerous. First, it can be difficult to judge how technologies compare against each other. The soccer competition uses a playoff-type format to determine the winner. As Stone (2003) indicates, this can be an erroneous way to judge the value of one's research: perhaps the technology was not suited for soccer but would work better in other fields of application. Likewise, the winner of the competition may not be well suited to work in the practitioners' domain. Another limitation of RoboCup is that the problem scope of the competition is limited to just a few applications: soccer, search-and-rescue missions, and dance challenges. With its limited scope, this could deter many practitioners from using the technology, as they would be unsure how the technology would fit their needs, i.e., it is not clear the technology can be tuned to the users' needs (after all, how many companies are building soccer-playing robots?). Finally, RoboCup doesn't provide the full-service software engineering support a SETT does. Only a simulator and very little code are provided; no specifications or other types of support are provided. Thus, an architecture researcher would have a difficult time using RoboCup to evaluate architecture models, since none are provided. The cyber-defense Technology Experimental Research (DETER) testbed provides researchers support in evaluating computer/information security technologies.
While the DETER testbed has many advantages, such as providing a platform for repeatable experiments, a repository of complete experiments, and tools to help researchers gather and visualize their test data, there are a few disadvantages. First, the DETER testbed is built primarily to evaluate computer security technologies; researchers in other fields would find it difficult to use. Second, while the DETER testbed provides many strong support facilities for the researcher, it doesn't provide the full-service software engineering support a SETT does. For example, specifications as detailed as the SETT's are not provided by the DETER testbed. Also, it is unclear how a practitioner would be able to search the DETER repository to identify a technology that would meet their development needs, and if the practitioner does find a technology, no data is provided to indicate how difficult it will be to adapt. From the literature review, it seems few testbeds provide specifications that would aid in a software engineering technology evaluation. In addition, no experience base is usually provided for practitioners to determine how one technology fared against several others or to indicate how difficult it will be to learn/adapt the technology. With the SETT, practitioners will be able to use the many functionalities discussed in Section 3 to make a much more informed choice as to which technology is best suited for them.

3 SETT Domain Model, Requirements and Architecture

In order to generate the set of requirements and architecture for a SETT, the Domain-Specific Software Architecture (DSSA) approach was used. DSSA is "a process and infrastructure that supports the development of a Domain Model, Reference Requirements, and Reference Architecture for a family of applications within a particular problem domain" (Tracz 1995; Mettala and Graham 1992).

3.1 Domain Model

The domain model is composed of the customers' needs statements, scenarios, domain dictionary, context diagrams, entity/relationship diagrams, data flow models, state transition models, and object models. For brevity, only the scenarios will be shown. To generate the scenarios, interviews were conducted with various technology researchers and NASA software engineers to collect their needs statements, which were then used to generate the user scenarios shown in Fig. 1 and Fig. 2. As summarized in Fig. 1, if a technology researcher has developed an idea for a new technology, the researcher will begin by exploring the experience base for technologies similar to the idea, to make sure the technology has not been developed yet (the researcher may also search the literature for the same purpose). If similar technologies already exist, the researcher will keep refining the idea. If no existing technology appears to provide the new technology's capabilities, then the researcher will develop the technology. Once finished, the researcher will configure the software engineering technology testbed to evaluate the new technology. Once the evaluation is complete, the researcher will prepare a technology evaluation report and submit the report to the experience base, where practitioners looking for dependability help will view it. If the evaluation results are positive and a practitioner has been identified, the researcher may then proceed to collaborate with the practitioner to configure the technology for use on a live project. Once the configuration and application of the technology is performed on the live project, the experience base should be updated to reflect the new results. The evaluation report will be submitted to a testbed coordinator who is in charge of maintaining the testbed and its experience base.
If the testbed evaluation results are negative, the researcher should update the experience base with the current results to indicate to practitioners that the technology

Fig. 1 Researchers' user scenarios

Fig. 2 Practitioners' user scenarios

may not be ready for use yet in this particular context/application, and continue to refine the technology until the evaluation results are positive. The negative results could also indicate that the technology was not designed to work in the domain the testbed represents, and that the technology would work best in another application/domain. As summarized in Fig. 2, practitioners having dependability problems will first explore the experience base looking for technologies that fit their search criteria. The practitioner will read the technology evaluation reports to determine how well a technology performed on the testbed system or on past projects done for the organization. Once a technology has been identified as a promising candidate, the practitioner will explore using the technology on their project. This could involve collaborating with the technology provider to configure the technology for use. After applying the technology to their project, the practitioner will submit a technology evaluation report to the experience base detailing how well the technology worked for the project. If no proven candidates are identified, then the practitioner will contact the researcher of the technology and ask the researcher to perform an evaluation of the technology on the testbed.
The researcher will configure the testbed, perform an evaluation, and update the experience base with the results, which will then be viewed by the practitioner.

3.2 Requirements

From the domain model, a list of functional and non-functional requirements was generated.

3.2.1 Functional Requirements

F1: The testbed should provide specifications and code for the system to be developed.
F2: The testbed should allow an evaluator to instrument the system in order to collect data during the experiment.
F3: The testbed system should have a library of seeded defects (Mills 1972) that evaluators can use to seed defects into the specifications/code.
F4: The testbed should provide guidance to the evaluator on how to use the testbed.
F5: The testbed should have a mechanism for evaluators to submit their technology evaluation results (including technology adoption data) for practitioners to review.
F6: Evaluators should be able to run the missions/scenarios on a hardware platform or simulator to obtain or verify the technology evaluation results. Missions are various operational scenarios the system can perform; an example scenario is for a rover to locate an object of interest.
F7: The testbed should provide a mission/scenario generator that will produce multiple missions/scenarios from which the evaluator can select for their evaluation. This allows the researcher to test their technology under various scenarios.
F8: The testbed should provide project data, such as effort data, as part of the testbed system. For example, effort data includes the time developers spent designing, developing, and testing the system.
F9: The testbed should have a user interface that allows an evaluator to access the testbed.

3.2.2 Non-Functional Requirements

NF1: The testbed, including mission application specifications and code, should be representative of the systems the sponsoring organization develops and support a wide range of technologies. In addition, the specifications should follow good software engineering practices.
NF2: Specifications/code of the mission applications should be tailorable to meet the needs of multiple contributors/evaluators.
NF3: The testbed should be available for public use.
NF4: The testbed's results should be combinable with other evaluators' results and representative to the end user.
NF5: The defect library should be composed of actual defects incurred while implementing the mission applications.
NF6: Researchers should use a common (ideally low-cost) platform (i.e., the computer equipment the code runs on) to evaluate their technologies, allowing a fair comparison of technologies and allowing results to be combined.
NF7: The defect data classification used should be comparable to classifications used by the evaluators. In addition, evaluators should be able to classify the defects according to their own defect classification system.
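Requirement F7's mission/scenario generator can be pictured as a parameterized sampler over the testbed's operational scenarios and defect pool. The sketch below uses hypothetical names and is not the SCRover implementation; it illustrates one way a generator could produce varied but reproducible missions, which also supports NF6's goal of fair, combinable comparisons (two technologies can be run against an identical mission set by reusing the seed).

```python
import random
from dataclasses import dataclass

@dataclass
class Mission:
    """One operational scenario for the rover (illustrative fields)."""
    scenario: str            # e.g., "wall-following", "locate-object"
    waypoints: list          # coordinates the rover should visit
    seeded_defect_ids: list  # defects activated for this run

def generate_missions(n, scenarios, defect_pool, seed=0):
    """Produce n missions by sampling scenarios, waypoints, and
    seeded defects. A fixed seed makes the experiment repeatable."""
    rng = random.Random(seed)
    missions = []
    for _ in range(n):
        missions.append(Mission(
            scenario=rng.choice(scenarios),
            waypoints=[(rng.uniform(0, 10), rng.uniform(0, 10))
                       for _ in range(rng.randint(2, 5))],
            seeded_defect_ids=rng.sample(defect_pool,
                                         k=min(3, len(defect_pool))),
        ))
    return missions
```

Because the generator is seeded, an evaluator comparing two technologies can regenerate exactly the same missions for each run rather than tuning a technology to a single hand-picked scenario.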

3.3 Architecture

A brief description of each component in the testbed's architecture is given below, along with why the component is important to the architecture (Fig. 3).

Instrumentation: An instrumentation class helps evaluators collect data for their evaluation report. Without this feature, an evaluator would have to spend more time figuring out how to collect the data they need.

Seeded Defect Engine and Defect Pool: Seeded defects help evaluators estimate how well their technology finds defects, and they prevent evaluators from building their technology to just pass a certain test or operational scenario. In addition, seeded defects give evaluators a measurement of how well their technologies can find defects and what kinds of defects the technology can and cannot find.

Code: A software system is needed to evaluate the technology on.

Specifications: Specifications are needed if the organization wishes to test more than just technologies that work on code. Specifications provide the basis for the evaluation, along with the code. For example, architecture specifications are needed for architecture technology evaluations. Without specifications, evaluators whose technology requires them would not be able to use the testbed, or the evaluation would take longer, since they would have to produce the specifications on their own. Thus, without specifications, the scope and usefulness of the testbed would be limited.

Mission/Scenario Generator: Allows evaluators to do a more thorough evaluation of their technology, since it will be operating under several scenarios and not just one. It also prevents an evaluator from developing their technology to work only under a specific circumstance.

Fig. 3 SETT architecture

Simulator platform: A simulator is needed to do low-cost runs of the system. Running the system in simulation mode helps the evaluator acquire data and perform runs at low cost. For technologies that require code to be run, a simulator provides a free way to evaluate the system as many times as needed. Without a simulator, evaluators would have to buy hardware, which may not be possible on limited budgets.

Hardware platform: Hardware provided for evaluators to test their technology on an actual platform. Usually an evaluator would use the hardware after getting good results on the simulator, to be sure the simulated data matches the actual data obtained from the hardware.

Manual and Experimentation Guidelines: Used to guide evaluators in how to use the testbed and how to set up their experiments. Without this component, evaluators would spend more time setting up their experiments, and the probability of an experiment being conducted incorrectly would increase.

Effort Data: Effort data that went into the development of the testbed. This data can be used by technologies dealing with cost estimation.

Technology Evaluation Report: A report indicating how well a technology worked on the testbed system. It includes information such as how many and what types of defects the technology found, and technology adoption data such as how much effort it took to apply the technology to the testbed and where in the software lifecycle the technology was applied.

Experience Base: A database of technology evaluation reports that SETT users can search through. Without the experience base, practitioners would face a difficult time finding and choosing the right technology to solve their dependability problems.

4 How to Configure a SETT

Figure 1 provided an overview of how an evaluator would use a SETT; this section goes into more detail on how one would configure a testbed to evaluate a technology.
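To make the Instrumentation and Seeded Defect Engine components concrete, here is a minimal sketch with a hypothetical API (a Python toy, not the SCRover/MDS code): a defect pool records what was seeded and where, and instrumentation scores a technology run by separating the seeded defects it found from previously undiscovered ones.

```python
class DefectPool:
    """Library of known seeded defects, each tagged with the artifact
    it lives in and its classification. Illustrative only."""
    def __init__(self):
        self.defects = {}  # defect_id -> (artifact, category)

    def seed(self, defect_id, artifact, category):
        self.defects[defect_id] = (artifact, category)

class Instrumentation:
    """Records which defects a technology reports, then scores the
    run against the pool of seeded defects."""
    def __init__(self, pool):
        self.pool = pool
        self.reported = set()

    def report(self, defect_id):
        self.reported.add(defect_id)

    def score(self):
        seeded = set(self.pool.defects)
        found_seeded = self.reported & seeded
        new_defects = self.reported - seeded  # previously undiscovered
        return {
            "seeded_found": len(found_seeded),
            "seeded_total": len(seeded),
            "new_defects": len(new_defects),
        }
```

This is the bookkeeping behind evaluation results such as those in Section 5, where a run is summarized as the number of seeded defects detected plus any additional, previously undetected defects.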
To use the testbed, evaluators start by reading the experimentation guidelines, which provide a framework for conducting the evaluation and instructions on how to use the testbed. Next, they define the types of defects the technology is expected to detect and the kind of data the instrumentation class should collect. These definitions determine the data used to evaluate the performance of the technology. The next step is to define the appropriate operational scenario under which the technology will be evaluated. Then, based on the criteria defined, appropriate instrumentation and seeded defects are applied to the project artifacts associated with the selected operational scenario. Once the appropriate set of project specifications and code has been obtained, the evaluator applies the technology to the set of project artifacts. Next, the technology is executed. This step does not necessarily require code to be run; for example, if a new type of requirements peer review were being performed, then no code is necessary. After the execution of the technology, the evaluators use the data provided by the instrumentation to determine the percentage of seeded and unseeded defects of each type that were found. This enables an analysis of how well the technology performs in detecting, avoiding, or compensating for various classes of seeded and previously undiscovered defects, in comparison to alternative technologies. The data and the analysis are then stored in an experience base to be accessed by project managers interested in technology to increase the

dependability of their delivered systems. Section 5 provides two examples of how evaluators used the software engineering technology testbed.

5 SETT Instance: SCRover and Technologies Evaluated

Using the architecture outlined in Section 3, an instantiation of a SETT called SCRover was developed. The SCRover testbed provides an open, experimental framework representative of NASA planetary rovers that enables evaluators to determine the relative cost and effectiveness of a given software dependability technology on a NASA-like project. The testbed contains software; supporting information such as specifications, metrics, instrumentation, seeded defects, and guidelines; a robotic platform (both real and simulated); and a development environment. SCRover represents a family of planetary rovers. However, due to export restrictions by the U.S. government, and to ensure the testbed would be publicly available to most researchers, the SCRover team and the JPL team deemed it unwise for the USC team to develop an application too close to what NASA uses rovers for: looking for life on other planets. Thus, the SCRover and JPL teams decided that a similar but still NASA-representative mission would be for SCRover to explore buildings for potential chemical hazards. Like its NASA counterparts, SCRover would still roam an unknown territory and use its cameras and sensors to locate objects of interest. To enhance its representativeness of NASA planetary rover software, SCRover was built using the Mission Data System (MDS) technology created by the NASA Jet Propulsion Laboratory (JPL). The MDS goal has been to develop a set of closely matched tools and techniques to reduce development and debugging costs, promote reusability, and increase reliability throughout a project's lifecycle.
The principal MDS products include a systems engineering methodology called the State Analysis Process, a software framework, a goal-based operational methodology, and a cost estimation model based on COCOMO II (Dvorak et al. 2000). Once the SCRover system was built, several technologies were applied to the testbed. A total of six technologies were evaluated on SCRover, but this paper will discuss three of them: Mae (Roshandel et al. 2004a, b), AcmeStudio (Garlan et al. 2000), and ROPE (Fickas 2004).

5.1 Mae and AcmeStudio

Mae: According to Dr. Roshandel, the Mae technology serves as a step between the UML diagrams generated by developers and the implemented system. Mae is an extensible architectural evolution environment, developed on top of xADL 2.0, that provides functionality for capturing, evolving, and analyzing functional architectural specifications (Roshandel et al. 2004a, b; Boehm et al. 2004). Mae models the system's components and their behaviors. Once the model is complete and input into the Mae tool, the tool reports any mismatches between how a component is used and its intended usage. For example, Mae can detect signature defects, such as a component calling a function that has not been implemented yet, or a component passing the wrong parameters in a function call. In addition, Mae is able to detect errors in the pre- and post-conditions of a function call, for example, a component making a function call whose pre-condition has not yet been met. Using Mae, developers can find these types of defects faster and better than with traditional methods such as peer review.
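The interface and signature checks described above can be illustrated generically. The following sketch is not Mae's API or algorithm; it is a toy consistency check over a hypothetical architectural model, showing the kind of undeclared-operation and parameter-count mismatch detection that such architecture analysis tools automate.

```python
def check_architecture(components, calls):
    """Toy architectural consistency check. Flags calls to operations a
    component does not provide, and parameter-count mismatches.
    `components` maps component name -> {operation: expected_param_count};
    `calls` is a list of (caller, callee, operation, args_passed)."""
    defects = []
    for caller, callee, op, nargs in calls:
        provided = components.get(callee, {})
        if op not in provided:
            defects.append(f"{caller} calls {callee}.{op}, which is not provided")
        elif provided[op] != nargs:
            defects.append(f"{caller} passes {nargs} args to {callee}.{op}, "
                           f"which expects {provided[op]}")
    return defects
```

A real tool works over a richer model (connectors, behavioral pre-/post-conditions), but the principle is the same: compare each usage of a component against its declared interface and report every mismatch.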

Since the Mae evaluation was done at USC with the help of the SCRover developers, there was no need for Dr. Roshandel to deploy the SCRover system on her own; she used the same robotic and simulator platforms the SCRover developers used. For the evaluation, Dr. Roshandel used many features of the testbed. First, she read the SCRover manuals (and consulted with the SCRover development team) to determine how the experiment should be set up and what activities she needed to do before the evaluation could start. Next, she defined the types of defects she was looking for, and the instrumentation class was modified to collect the data she needed. The third step involved deciding which scenario she would perform her analysis under; the wall-following scenario was chosen. Next, she took the architecture specifications of the system and translated the provided UML models into xADL models. Roshandel used the class and sequence diagrams to determine the architectural configuration of SCRover in terms of components, connectors, and their interactions. The architecture specifications that were translated contained 38 seeded defects. The defects ranged from grammar defects to defects that could potentially cause harmful behaviors. Some of the defects were architectural in nature, while others were conceptual. A subset of the architectural defects concerned functional behaviors that Mae captured, while other defects were behaviors not identified by Mae. Re-seeding these defects into the Mae models helped identify the defects that Mae can and cannot detect. Knowing the defects Mae cannot detect is valuable for identifying complementary technologies necessary to detect additional classes of architectural defects.

Results: Roshandel input the xADL models (with the seeded defects) into the Mae tool, which performed the architecture analysis.
The analysis provided by Mae revealed several inconsistencies in the SCRover architecture as dictated by the MDS Framework rules. The tool identified 15 of the seeded defects plus an additional 6 previously undetected defects. According to Dr. Roshandel, these 6 additional defects concerned inconsistencies in the architecture specification; specifically, the Mae tool detected inconsistent specifications of interfaces and behaviors among interacting components, which could result in harmful system interactions. Figure 4 summarizes the number of seeded defects against the number of defects found by the Mae tool; these defects were classified under a categorization schema similar to Orthogonal Defect Classification (Chillarege et al. 1992). Afterwards, the SCRover system was run under the wall-following scenario. While the SCRover rover was following various wall configurations, the data Dr. Roshandel had defined in the first step was collected with the instrumentation class. With this data, she was able to analyze how many times each defect found by the Mae tool occurred in the system, which she then used to determine how much more reliable the SCRover system would have been if the Mae tool had been used in the architecture phase of the life-cycle.

[Fig. 4: Mae defect detection yield by type; bar chart of the number of seeded defects vs. the number of defects Mae detected, across the categories Interface, Class/Obj, Logic/Alg, Ambiguity, Data Values, Other, and Inconsistency.]

Costs The Mae evaluation on SCRover took approximately 160 h, most of which was spent translating the UML models into xADL models. This time is small compared to how long it would have taken the Mae team to build a testbed with a NASA-representative rover; the SCRover team spent approximately 6 months building the testbed.

AcmeStudio A similar experiment was conducted with Dr. David Garlan's AcmeStudio tool (Garlan et al. 2000), which is also designed to find defects in software architecture models. According to Garlan, AcmeStudio determines whether a system's architectural specifications comply with the relationships and constraints imposed by the architectural style chosen by the developers. In the SCRover testbed, AcmeStudio would evaluate whether the architecture models provided by the testbed follow the architecture rules set by the MDS Framework. The AcmeStudio researchers believe that using AcmeStudio will result in developers finding more architecture-violation defects than traditional methods such as peer review.

Results In the AcmeStudio experiment, only 18 of the original seeded defects were used, because only the architectural defects were analyzed. The results of the evaluation are summarized in Fig. 5. Peer review was used to find the set of original seeded defects, and the defects are classified by their architectural type. The figure indicates what type of defects each method can find. The results verified that Mae and AcmeStudio find different types of defects and are indeed complementary technologies that can be used jointly to find a greater number of defects. From the set of defects analyzed in Fig. 5, AcmeStudio is better at finding architecture mismatches than peer reviews; likewise, from Figs. 4 and 5, Mae is better than peer reviews at finding defects in pre- and post-conditions, as well as inconsistency defects.

[Fig. 5: Mae/AcmeStudio/peer review results. Source: Roshandel, Schmerl, et al., 2004.]

Costs The AcmeStudio evaluation on SCRover took approximately 120 h, most of which was spent translating the UML models into Acme models. As with Mae, this time is small compared to how long it would have taken the AcmeStudio team to build a testbed.

Further analysis of the Mae/AcmeStudio/peer review results can demonstrate to practitioners how using all three technologies in combination helps find a greater pool of defects, as shown in Fig. 6. For this study, the set of 32 defects shown in Fig. 5, as well as the 20 seeded defects that the Mae and AcmeStudio technologies could not find, are used. If only peer reviews were used to identify defects, only 38 defects would be found, leaving at least 14 more in the architecture specifications. However, if the Mae technology were applied after peer reviews, an additional 6 defects would be found; likewise, if the AcmeStudio technology were applied after Mae and peer reviews, an additional 8 defects would be found. On the other hand, if the AcmeStudio and Mae technologies were used without peer reviews, the two technologies would find 32 of the defects in the architecture specifications, leaving at least 20 more defects in the specifications. Many of the 20 defects that AcmeStudio and Mae did not find were not specifically architectural in nature (e.g., logic and algorithm defects), demonstrating the importance of peer reviews.

[Fig. 6: Architecture defects found vs. effort (hours), comparing orderings such as peer review first / Mae second / Acme third and Mae first / Acme second / peer review third.]
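The cumulative yields above can be tallied in a short script. The counts are taken directly from the text; the per-defect overlap structure is summarized rather than enumerated defect by defect.

```python
# Defect yields for the tool combinations reported in the text:
# peer review alone finds 38 of a 52-defect pool, Mae adds 6 more,
# AcmeStudio adds another 8, and the two tools without peer review find 32.

TOTAL_POOL = 52  # 32 defects of Fig. 5 plus 20 found by neither tool
found_by = {
    "peer review":                38,
    "peer review + Mae":          44,  # Mae adds 6 after peer review
    "peer review + Mae + Acme":   52,  # AcmeStudio adds 8 more
    "Mae + Acme (no peer review)": 32,
}

for combo, n in found_by.items():
    remaining = TOTAL_POOL - n
    print(f"{combo:28s} finds {n:2d} defects, leaving at least {remaining}")
```

The script reproduces the text's figures: at least 14 defects remain after peer review alone, and at least 20 remain when only the two tools are used.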

5.2 Implications for JPL Project Use of Mae, AcmeStudio, and SCRover Testbed Results

The results of the Mae and AcmeStudio experiments with the SCRover testbed were of considerable interest to JPL-MDS personnel, who had been experimenting individually with these tools' capabilities. The complementarity of their defect identification and avoidance capabilities, the relatively low effort needed to develop and analyze the specifications, and the prospect of combining the two toolsets opened up new prospects for using ADLs to supplement MDS's current state-oriented architectural approach. Potential benefits included stronger defect avoidance, detection, and diagnosis; stronger compositional modeling of MDS components and connectors; and an overall strong return-on-investment (ROI) potential for software architecture modeling and analysis compared to traditional but expensive engineering review processes. Individually, Dr. Garlan and Dr. Roshandel were also able to use their respective SCRover experiences to determine whether their tools were useful to practitioners and what improvements were needed to make them more useful, thus maturing the tools toward suitability for industry use. Furthermore, they were able to use the SCRover experience to demonstrate effectively to the JPL-MDS personnel how their respective research performed on the MDS technology. The SCRover experience gave the JPL-MDS personnel more confidence in using the tools and in working with the researchers to advance the tools and accelerate their adoption in industry.

5.3 ROPE

The purpose of the Reasoning about Operational Envelopes (ROPE) project is to determine, at runtime, the operational envelope of the environment for a given system. The operational envelope is defined as the environment or conditions in which the system will work dependably.
If the system moves outside the envelope, this indicates that there are dangers in the environment and that the system cannot be expected to work dependably. Unfortunately, no software system can be built to handle all possible environmental conditions. Ideally, the system's envelope should be identified before the system is placed in use. The ROPE technology helps define what the system's operational envelope is and indicates what the system's defects are, where the defects are described as environmental conditions in which the system will fail (Fickas et al. 2004). Unlike Dr. Roshandel, the ROPE researchers conducted their evaluation at the University of Oregon rather than at USC, which required the ROPE team to deploy the SCRover system on their own computers and robotic hardware. The ROPE team first obtained a copy of the whole SCRover testbed system, including all the specifications and code, and then followed the instructions to deploy the SCRover code. However, changes to the code were made first: since the ROPE team had a different rover and simulator that they wanted to use, they swapped the provided device drivers for their own robotic device drivers. Once the changes were made, the ROPE team was successfully able to run SCRover on their platforms. Afterwards, the ROPE team conducted their evaluation in a similar way as Dr. Roshandel did. This series of activities also demonstrated that the SCRover testbed is portable, being able to run on a different rover.

Results After the port, the first activity the ROPE team performed was defining the data they wanted to collect and configuring the platform and instrumentation class to collect it. Then the team picked the wall-following scenario to evaluate and used the SCRover specifications to develop their dependability model.
Afterwards, the ROPE team ran the wall-following scenario and was able to detect defects in the environment that would cause the rover to fail its mission.
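An operational-envelope check of the kind described above can be sketched as follows. This is a hypothetical illustration in the spirit of ROPE, not ROPE's actual dependability model; the envelope conditions and thresholds are invented for the wall-following scenario.

```python
# Hypothetical operational-envelope check: the envelope is the set of
# environmental conditions under which the rover works dependably.
# Condition names and thresholds are invented for illustration.

def inside_envelope(state):
    """Return (ok, reasons): whether the environment lets the rover work dependably."""
    reasons = []
    if state["wall_convergence_deg"] < 30:   # walls meeting at a tight angle can trap the rover
        reasons.append("converging walls too narrow")
    if state["battery_pct"] < 15:
        reasons.append("insufficient battery")
    if state["min_obstacle_dist_m"] < 0.2:
        reasons.append("obstacle too close")
    return (not reasons, reasons)

ok, why = inside_envelope(
    {"wall_convergence_deg": 20, "battery_pct": 80, "min_obstacle_dist_m": 1.0})
print(ok, why)  # the tight wall angle puts the rover outside its envelope
```

Reporting the violated conditions, rather than just a pass/fail flag, matches the idea of describing defects as environmental conditions under which the system fails.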

One of the seeded defects the ROPE team discovered was that if two walls converged at a small angle, and the rover was following one of the walls, the rover would eventually get stuck in the narrow corner. With the SCRover results, Dr. Fickas was able to demonstrate to the JPL-MDS team the advantages of using ROPE on a rover project.

Costs The ROPE team spent 80 h configuring the testbed system to work on their own rover and running the experiment.

5.4 Synergy between Technology Evaluations

Even though a wide range of technologies was applied to SCRover, each researcher used the evaluation process outlined in Section 4. Each researcher was able to configure the testbed to their specific needs (such as which defects they were looking for) and used only the artifacts they needed for the evaluation. In the Mae and ROPE evaluations, both research teams used all the components of the SCRover testbed except the project effort data. Both teams used the manuals and guidelines to determine how to conduct the evaluation, used the scenario/mission generator to select the scenario for the experiment, applied their respective technology to the set of project artifacts (including code and specifications), used seeded defects to determine how well their technologies performed, used the platforms to execute the code and collect instrumentation data, and finally used the technology evaluation reports to compare their technologies with similar ones. Since the teams were not doing research in software costs, they did not need the project effort data collected by the SCRover team. The teams did, however, record effort data on how long their evaluations took compared to other technologies, and stored that data in the technology evaluation reports.
The cost of each evaluation was one to two person-months per researcher, most of which was one-time learning and initial setup effort, indicating that testbeds can be a cost-effective method for evaluating technologies.

5.5 SCRover Limitations

While SCRover is representative of NASA's planetary rovers, not every researcher will be able to use the SCRover testbed. In some cases, the SCRover testbed may not have enough capabilities or requirements to satisfy a researcher's technology needs. However, SCRover does provide a representative instance of how software systems at NASA are developed, since the JPL-MDS methodology was followed. All project artifacts that MDS required are part of the SCRover testbed; these artifacts span the software lifecycle from inception to delivery, allowing a wide range of software engineering technologies to be evaluated. Furthermore, while the number of missions SCRover performs is small compared to a Mars planetary rover, the missions SCRover does execute are among the missions a Mars planetary rover performs as well. In fact, the first mission SCRover performs is the same mission the JPL-MDS group developed in their Mars rover prototype. Both SCRover and Mars planetary rovers do automated obstacle avoidance, battery power management and replanning, and use their cameras to search for and detect objects of interest. Thus, while SCRover may be less complex in capabilities than a Mars planetary rover, it is complete enough to provide researchers a model of how software for NASA is developed.

An example of where SCRover could not be used by researchers involved the STAR technology (Roshandel et al. 2006). STAR is a technology that analyzes state diagrams to estimate the reliability of a system. However, it was soon discovered that SCRover did not have enough states in its system for a solid evaluation of the STAR technology: STAR needed approximately 30 states, while SCRover provided about ten.

The following diagrams illustrate various dimensions of the SCRover testbed that will help researchers decide whether they can use the SCRover testbed for their technology evaluation. Figure 7 provides an overview of the SCRover testbed, while Figs. 8 and 9 provide information for researchers evaluating technologies in architecture and requirements engineering, respectively. Similar diagrams can be created for each technology family to be evaluated. The diagrams are not meant to include all possible dimensions along which the SCRover testbed can be measured, but they provide enough information for a researcher to decide whether the SCRover testbed is appropriate for them.

Figure 7 outlines the overall project characteristics for the SCRover testbed, giving researchers an overview of how complex the SCRover system is. For this diagram, we consider a complex Mars rover as having ratings at the high end of each axis. The Project Data axis describes how much data was collected during the development of the SCRover system. The SCRover team collected much data on defects, effort spent in developing the system, and SLOC count, data that the NASA JPL-MDS team collected as well. However, more data could have been collected, such as daily-build data and effort to fix defects, which kept the team from scoring a higher rating. Counting the lines of code that were part of the MDS Framework, SCRover had over 300 k LOC (of which about 5 k was the SCRover adaptation code and the rest the MDS Framework code), which is a fairly high number; the JPL-MDS team's count for their Mars prototype rover was in a similar range.
The number of Operational Capabilities was in the medium range compared to a Mars rover. The SCRover team spent much time developing a large amount of specifications for the SCRover system.

[Fig. 7: Project overview; radar chart with axes Project Data, Scenarios, Source Lines of Code (10 k / 50 k / 300 k), Platforms, Operational Capabilities, and Specifications, each rated from Low through Medium to High.]

The SCRover team used a well-instrumented version of the WinWin Spiral model called Model-Based (System) Architecting and Software Engineering (MBASE) (Boehm and Port 2001; Boehm and USC Center for Systems and Software Engineering 2003) for system and software development. MBASE involves the concurrent development of the system's operational concept, prototypes, requirements, architecture, and lifecycle plans, plus a feasibility rationale ensuring that the artifact definitions are compatible, achievable, and satisfactory to the system's success-critical stakeholders. MBASE shares many aspects with the Rational Unified Process (RUP) (Kruchten 2001), including the use of the Unified Modeling Language (UML) (Booch et al. 1999) and the spiral model anchor point milestones (Boehm 1996). In addition, all specifications that the MDS Framework required were generated as well. The number of platforms SCRover can run on is three (two hardware platforms and one simulator), which places it in the medium-high range. The JPL-MDS team planned to use MDS on several platforms, including one simulator; for their prototype, the MDS team likewise tested on a simulator and two different hardware platforms. Finally, the last axis covers the number of scenarios/missions the rover could perform. SCRover has three missions, placing it in the low-medium range, while a Mars rover would have a higher number of missions. However, since SCRover incorporated the MDS goal-driven scenario generator, some classes of additional scenarios can be straightforwardly added.

[Fig. 8: Homeground for architecture technologies; radar chart with axes Components, Scenarios, Architecture Diagrams, States, State Variables, and Classes, each rated from Low through Medium to High.]
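Goal-driven scenario generation of the kind mentioned above can be sketched as follows. This is a hypothetical illustration in the spirit of the MDS goal-based operational methodology; the goal names, device names, and composition scheme are invented, not MDS's actual interface.

```python
# Hypothetical sketch of goal-driven scenario composition: a scenario is a
# set of goals, and a goal can only be included if the devices it needs are
# available on the target platform. All names below are invented.

BASE_GOALS = {
    "follow_wall":    {"needs": ["camera", "ranger"]},
    "avoid_obstacle": {"needs": ["ranger"]},
    "manage_battery": {"needs": ["battery_monitor"]},
    "find_target":    {"needs": ["camera"]},
}

def make_scenario(name, goal_names, available_devices):
    """Compose a scenario from goals, rejecting goals whose devices are missing."""
    missing = [g for g in goal_names
               for dev in BASE_GOALS[g]["needs"] if dev not in available_devices]
    if missing:
        raise ValueError(f"scenario {name!r} includes goals with unavailable devices")
    return {"name": name, "goals": list(goal_names)}

# A wall-following mission with obstacle avoidance and battery management,
# in the style of the SCRover scenarios described in the text.
scenario = make_scenario("wall_following",
                         ["follow_wall", "avoid_obstacle", "manage_battery"],
                         {"camera", "ranger", "battery_monitor"})
print(scenario["goals"])
```

Composing scenarios from a goal catalog is what makes "some classes of additional scenarios" cheap to add: a new scenario is a new goal list, not new scenario code.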
Figure 8 is to be used by architecture researchers to help them decide what types of architecture analysis can be performed. The figure covers many dimensions that architecture researchers consider in their work, but by no means does it cover every dimension a researcher may work on. As in Fig. 7, we consider a complex Mars rover as having ratings at the high end of each axis. Components and State Variables are used in the context defined by the MDS Framework (Rinker 2002); States are the states of the system; Classes are the number of object-oriented classes the SCRover team developed; and Scenarios are the number of missions/scenarios SCRover could perform. For each of the five axes defined, SCRover falls in the medium range, as the rovers the JPL-MDS team built have greater numbers of scenarios, components, states, state variables, and classes. However, the number of architecture diagrams the SCRover team defined is in the high range, as the SCRover team produced architecture specifications similar to those the JPL-MDS team would produce.

[Fig. 9: Homeground for requirement engineering technologies; radar chart with axes Capability Requirements, Project Requirements, Level of Service Requirements, and Requirement Specifications, each rated from Low through Medium to High.]

Figure 9 is to be used by requirements engineering researchers to give them an idea of what types of analysis they could perform. The figure covers many dimensions that requirements researchers consider in their work, but by no means does it cover every dimension a researcher may work on. As in Fig. 7, we consider a complex Mars rover as having ratings at the high end of each axis. Capability Requirements are defined as capabilities the system can perform; for example, the rover should be able to use its camera to detect objects of interest. Project Requirements are defined as constraints placed upon the design team, e.g., solution constraints on the way the problem must be solved, such as a mandated technology; for example, requiring the MDS Framework to be used on the SCRover system. Project Requirements also summarize process-related considerations such as cost or schedule constraints, for example, developing the project by a specified date. Level of Service Requirements are defined as how well the system should perform a capability requirement; for example, the accuracy of reaching a target should have an error range of ±10% of the distance to the expected position of the target (Boehm and USC Center for Systems and Software Engineering 2003). For each of the requirements axes, SCRover falls in the medium range, as the JPL-MDS Mars rovers would have a higher number of requirements. However, the amount of requirement specifications the SCRover team defined is in the high range, as the SCRover team produced requirement specifications similar to those the JPL-MDS team would produce.

5.6 TSAFE Testbed

The HDCP program also funded another group to develop a testbed for evaluating technologies. The Fraunhofer Center at the University of Maryland (FC-MD) developed the TSAFE testbed (Lindvall et al. 2007), which includes many of the same architectural elements as a SETT. TSAFE provides a set of seeded defects or faults for technologies to find, a set of specifications for understanding the TSAFE system, instrumentation to monitor the TSAFE system and its faults, and various test cases. The main difference between the TSAFE and SCRover testbeds is that SCRover is representative of a rover mission that the NASA-JPL group would develop, while TSAFE was built for the US Air Traffic Control System. When USC and FC-MD developed their respective testbeds, both groups found that the features listed above were important to include in a testbed for evaluating dependability technologies. In addition, both groups discovered that providing a common platform for researchers to evaluate their technologies benefits both practitioners and researchers.
Researchers do not have to spend a lot of time developing their own testbeds, and practitioners are able to use a common testbed to evaluate different technologies.

6 Benefits for Researchers

This section summarizes the benefits the researchers obtained from using the software engineering technology testbed's components.

Specifications and Code: all researchers used the specifications and/or code to set up their evaluations. The specifications and code proved configurable, as the researchers used the same set of artifacts. Without common specifications and/or code, a comparative evaluation would be impossible, or at least much more difficult to perform.

Mission/Scenario Generator: more experiments will have to be conducted to determine the relative effectiveness of having a mission generator in the testbed, but, as with other scripting languages, its use would generally be more cost-effective than manually preparing each test scenario.

Instrumentation: Roshandel (Mae) was able to use the instrumentation to quickly gather data for her performance analysis. For researchers needing to collect data while the code runs, the instrumentation class provides a tool to gather data quickly.

Seeded Defects and Defect Pool: the seeded defect approach was effective in identifying the degree to which Mae could identify defects of various classes. However, after estimating nine likely remaining defects (with 38 seeded defects, Mae found 15 of them plus 6 unseeded defects, so the seeding estimate of 6 × 38/15 ≈ 15 unseeded defects suggests about nine beyond the six found), we found that AcmeStudio alone discovered eight remaining defects, five of which were in categories (style usage, completeness) not in our defect categorization scheme. Thus it appears that the seeded defect technique's maximum likelihood estimate is better considered a lower-bound estimate of the defects remaining in the categories constituting the current universe of defect sources. As an analogy, since the seeded defect technique derives from the use of fish tagging to estimate the total number of fish in a body of water, the technique can only estimate the number of fish catchable by the type of net used in catching tagged and untagged fish; there may be a number of smaller but significant fish (i.e., defects) swimming around undetected.

Platform (Rover and Simulator): researchers were able to use the simulator and/or rover to execute the code and collect data for their performance analyses. Without a platform to run the code, evaluation would be impossible for researchers working with code or needing code to run. In addition, for researchers using the simulator, it was a relatively low-cost way to evaluate the technology.

Experience Base/Technology Evaluation Results: Roshandel and Garlan were able to use the evaluation results to show that their technologies were complementary, and used them to demonstrate to JPL-MDS personnel that their technologies can work on NASA software systems, leading to further usage of their technologies at JPL.

Experimentation Guidelines and Manuals: researchers used the provided guidelines to learn how to conduct an experiment. The manuals were useful in teaching researchers about the MDS Framework and the SCRover testbed. Without the manuals, researchers such as the ROPE team would have spent a great deal of time asking NASA how the MDS technology worked.

Project Data: no researcher has used this testbed feature yet, but we believe the data will be valuable for researchers developing technologies to estimate cost and schedule, as it contains effort data from developing SCRover and defect data collected during the development phase. Ideally, the defect data would also contain the time it took to fix each defect and how this impacted the schedule.

Finally, one more benefit the researchers obtained from using SETTs is early feedback on their technologies. Both Dr. Roshandel and the ROPE team indicated that applying their technologies to the SCRover testbed first identified what improvements needed to be made before they tried to convince NASA JPL users to adopt the technologies. The SCRover experience provided a better idea of how their technologies should be applied to NASA-like systems, as opposed to the software systems they built themselves.

7 Conclusions

In Section 2, Redwine and Riddle's findings were cited as difficulties in getting practitioners to adopt new technologies (Redwine and Riddle 1985):

No collection of prior experiences demonstrating positive feedback on a technology: with the SCRover testbed, researchers such as Dr. Roshandel and Dr. Garlan were able to provide practitioners (in this case, NASA) a positive experience with their technologies on a NASA-like representative system. In addition, the SCRover testbed helped demonstrate to NASA that the Mae and AcmeStudio technologies were indeed complementary and could be used in combination to find a greater number of defects.

Conceptual integrity: by performing evaluations on SCRover, the researchers from Mae and ROPE were able to keep refining their technologies until they could demonstrate to NASA JPL that their technologies were well enough developed to work on NASA's systems.

Showing a clear recognition of need for the technology: with the SCRover experience, researchers such as the AcmeStudio team and Dr. Roshandel were able to demonstrate to NASA JPL practitioners how well their technologies could detect certain classes of defects in a representative software system, and how easy or difficult the technologies were to apply.

Tuneability: by applying their technologies to SCRover, researchers were able to indicate to NASA what activities would be needed to configure their technologies to work on a NASA-like system.

Lack of training for the new technology: with the SCRover experience, researchers were able to work with the NASA practitioners to identify what training needed to be provided, as well as to gather technology adoption data such as how long it took to apply the technology and how much training was required before using it.

In conclusion, this paper introduced the requirements, architecture, and concept of operation of a successfully used software engineering technology testbed. The experiences of three technology evaluations on an instance of the SETT called SCRover were reported. The results and benefits each researcher obtained from using SCRover were presented, as well as how a practitioner can interpret the data obtained from the evaluations. This paper also included several charts that define the current domain of applicability of the testbed. As a bottom line, the SCRover testbed provided a working example of how SETTs, through their ability to provide users with comparable empirical data, can overcome the challenges of technology adoption and maturation and increase the speed of the technology maturation and adoption process.

Acknowledgements This work was supported by NASA-HDCP contracts to CMU, JPL, and USC. It also benefited from support from the JPL-MDS team and the NSF HDC programs, including Dr. Roshanak Roshandel, Dr.
Steve Fickas and his graduate students, Dr. David Garlan and the AcmeStudio team, Dr. Gupta, Dr. Helmy, Ganesha Bhaskara, and Dr. Carolyn Talcott. In addition, we would like to acknowledge the USC graduate students who helped in developing SCRover.

References

Benzel T, Braden R, Kim D, Neuman C, Joseph A, Ostrenga R et al. Design, deployment, and use of the DETER testbed. Proceedings of the DETER Community Workshop on Cyber Security Experimentation and Test, August

Boehm B (1996) Anchoring the software process. IEEE Softw, July, pp 73-82

Boehm B, Port D (2001) Balancing discipline and flexibility with the Spiral Model and MBASE. CrossTalk, December 2001

Boehm B, USC Center for Software Engineering (2003) Guidelines for Model-Based (System) Architecting and Software Engineering

Boehm B, Bhuta J, Garlan D, Gradman E, Huang L, Lam A et al (2004) Using testbeds to accelerate technology maturity and transition: the SCRover experience. ACM-IEEE International Symposium on Empirical Software Engineering, August

Booch G, Rumbaugh J, Jacobson I (1999) The unified modeling language user guide. Addison Wesley, Reading

Chillarege R, Bhandari IS, Chaar JK, Halliday MJ, Moebus DS, Ray BK et al (1992) Orthogonal defect classification: a concept for in-process measurements. IEEE Trans Softw Eng 18(11)

Dvorak D, Rasmussen R, Reeves G, Sacks A (2000) Software architecture themes in JPL's Mission Data System. Proceedings of the 2000 IEEE Aerospace Conference

Fickas S, Prideaux J, Fortier A (2004) ROPE: Reasoning about OPerational Envelopes. uoregon.edu/research/mds/

Garlan D, Monroe RT, Wile D (2000) Acme: architectural description of component-based systems. In: Leavens GT, Sitaraman M (eds) Foundations of component-based systems. Cambridge University Press

Kruchten P (2001) The rational unified process, 2nd edn. Addison Wesley, Reading

Lindvall M, Rus I, Donzelli P, Memon A, Zelkowitz M, Betin-Can A et al (2007) Experimenting with software testbeds for evaluating new technologies. Empir Softw Eng 12(4)

Mettala E, Graham M (1992) The domain-specific software architecture program. Technical Report CMU/SEI-92-SR-9, CMU Software Engineering Institute

Mills H (1972) On the statistical validation of computer programs. IBM Federal Systems Division Report

Redwine S, Riddle W (1985) Software technology maturation. Proceedings of the 8th International Conference on Software Engineering (ICSE 1985)

Rinker G (2002) Mission Data Systems architecture and implementation guidelines. Ground System Architectures Workshop (GSAW 2002), El Segundo, California

RoboCup

Roshandel R, Schmerl B, Medvidovic N, Garlan D, Zhang D (2004a) Understanding tradeoffs among different architectural modeling approaches. Proceedings of the 4th Working IEEE/IFIP Conference on Software Architecture (WICSA 2004), Oslo, Norway

Roshandel R, van der Hoek A, Mikic-Rakic M, Medvidovic N (2004b) Mae: a system model and environment for managing architectural evolution. ACM Trans Softw Eng Methodol 11(2)

Roshandel R, Banerjee S, Cheung L, Medvidovic N, Golubchik L (2006) Estimating software component reliability by leveraging architectural models. 28th International Conference on Software Engineering (ICSE 2006), Shanghai, China, May

Stone P (2003) Multiagent competition and research: lessons from RoboCup and TAC. RoboCup-2002: Robot Soccer World Cup VI. Springer Verlag, Berlin

Tracz W (1995) DSSA (Domain-Specific Software Architecture) pedagogical example. ACM SIGSOFT Softw Eng Notes 20(3):49-62

Dr.
Alexander Lam received his PhD in Computer Science from the University of Southern California (USC). His research interests include software engineering, in particular software processes.

Dr. Barry Boehm is TRW Professor of Software Engineering in the Computer Science Department at USC and Director of the USC Center for Systems and Software Engineering. He received a B.A. in Math from Harvard in 1957 and a Ph.D. in Math from UCLA. Dr. Boehm served within the U.S. Department of Defense (DoD) from 1989 to 1992 as director of the DARPA Information Science and Technology Office and as director of the DDR&E Software and Computer Technology Office. He worked at TRW from 1973 to 1989, culminating as chief scientist of the Defense Systems Group, and at the Rand Corporation from 1959 to 1973, culminating as head of the Information Sciences Department. He entered the software field at General Dynamics. His current research interests involve recasting systems and software engineering into a value-based framework, including processes, methods, tools, and an underlying theory and process for value-based systems and software definition, architecting, development, validation, and evolution. His contributions to the field include the Constructive Cost Model (COCOMO) family of systems and software engineering estimation models, the Spiral Model and Incremental Commitment Model of the systems and software engineering process, and the Theory W (win-win) approach to systems and software management and requirements determination. He has received the ACM Distinguished Research Award in Software Engineering, the IEEE Harlan Mills Award, and an honorary ScD in Computer Science from the University of Massachusetts. He is a Fellow of the primary professional societies in computing (ACM), aerospace (AIAA), electronics (IEEE), and systems engineering (INCOSE), and a member of the U.S. National Academy of Engineering.


CONTENT PATTERNS Joint Panel. Finding Essentials from Cloud-based Systems and Big Data. Namics. CONTENT 2018. PATTERNS 2018. Joint Panel. Finding Essentials from Cloud-based Systems and Big Data. Namics. BARCELONA, SPAIN, 22ND FEBRUARY 2018 Hans-Werner Sehring. Senior Solution Architect. Agenda.

More information

Development of the Strategic Research Agenda of the Implementing Geological Disposal of Radioactive Waste Technology Platform

Development of the Strategic Research Agenda of the Implementing Geological Disposal of Radioactive Waste Technology Platform Development of the Strategic Research Agenda of the Implementing Geological Disposal of Radioactive Waste Technology Platform - 11020 P. Marjatta Palmu* and Gerald Ouzounian** * Posiva Oy, Research, Eurajoki,

More information

Modeling & Simulation Roadmap for JSTO-CBD IS CAPO

Modeling & Simulation Roadmap for JSTO-CBD IS CAPO Institute for Defense Analyses 4850 Mark Center Drive Alexandria, Virginia 22311-1882 Modeling & Simulation Roadmap for JSTO-CBD IS CAPO Dr. Don A. Lloyd Dr. Jeffrey H. Grotte Mr. Douglas P. Schultz CBIS

More information

PI: Rhoads. ERRoS: Energetic and Reactive Robotic Swarms

PI: Rhoads. ERRoS: Energetic and Reactive Robotic Swarms ERRoS: Energetic and Reactive Robotic Swarms 1 1 Introduction and Background As articulated in a recent presentation by the Deputy Assistant Secretary of the Army for Research and Technology, the future

More information

Migrating a J2EE project from IBM Rational Rose to IBM Rational XDE Developer v2003

Migrating a J2EE project from IBM Rational Rose to IBM Rational XDE Developer v2003 Copyright IBM Rational software 2003 http://www.therationaledge.com/content/aug_03/rdn.jsp Migrating a J2EE project from IBM Rational Rose to IBM Rational XDE Developer v2003 by Steven Franklin Editor's

More information