Is it extremely rare during the software lifecycle to jettison most of the automated test tools, and then?

Was there "tortious interference" on any software projects in 1994-1995?

  • Something stopped the following from happening in some projects in 1994-1995. What might that have been? (A small sketch of the capture-and-replay and regression-test ideas described in the excerpt follows after it.)

    <<Medical Device & Diagnostic Industry Magazine, originally published December 1994

    SOFTWARE

    A Maturity Model for Automated Software Testing
    Mitchel H. Krause

    Aside from their mandate to provide a safe and reliable product, manufacturers of computerized medical devices may have three very practical reasons for automating their software testing program: their product is too complicated to test manually, the time devoted to manual testing is cutting into potential profits, and current FDA requirements will be easier to satisfy with automated testing and documentation. If any of these factors motivates your company, this article will help you to sort out the issues to be considered and options available. Then, when the automated test program is in place, safer and more reliable products will follow.1

    The sorting instrument presented is a maturity model that plots four levels of testing maturity in terms of the resources required to move from one level to the next. The model can be used to determine the level that best fits your company and its products.

    THE SOFTWARE TESTING MATURITY MODEL

    The software testing maturity model, shown in Figure 1, is similar to a software process maturity model that is familiar to many software engineers. It has been described by Watts S. Humphrey in his book Managing the Software Process,2 and has been cited by Frank Houston, a former FDA staffer, and Steven Rakitin in presentations to the Health Industry Manufacturers Association.3,4 The version shown here as Figure 2 is adapted from Rakitin's presentation. The process model adapts well to automated software testing because effective software verification and validation programs grow out of development programs that are well planned, executed, managed, and monitored. A good software test program cannot stand alone; it must be an integral part of the software development process.

    Level 1: Accidental Automation. The first level of the software testing model--like level 1 in the software process model--is characterized by ad hoc, individualistic, chaotic attempts to get the job done. Important information (for example, what to test) is not documented and must be extracted from in-house experts. Test plans are sketchy. Test results are not documented consistently. Schedules slip. Either products are delayed or testing becomes a cursory, poorly documented exercise. Management is uninvolved or uninformed. This level has been designated Accidental Automation because the use of any automated tools or techniques comes about almost as if by accident and is not supported by process, planning, or management functions. Products released on the basis of such testing may well be accidents waiting to happen. Testing at this level may be appropriate only for a product that has no potential for harming the patient or user; it is never appropriate for a computerized medical device.

    Level 2: Beginning Automation. The second testing level corresponds directly to Level 2 (Repeatable) in the software process maturity model (see Figure 2). There are hundreds of capture-and-replay test tools on the market today that simply repeat the responses of a system under test.5 As in the process model, however, these tools have limited capabilities and lose their economic usefulness quickly as a product changes. Level 2 testing is still dependent on information locked in the minds of in-house experts, although documentation is beginning to appear in the form of software requirements specifications (SRSs) and test requirements specifications (TRSs). However, in most cases, large portions of these documents are written after the fact and used to meet regulatory requirements rather than to direct the development and test processes. Writing them does, however, provide good practice for moving to level 3.

    Level 3: Intentional Automation. At the third level, automated testing becomes both well defined and well managed. The TRSs and the test scripts themselves proceed logically from the SRSs and design documents. Furthermore, because the test team is now part of the development process, these documents are written before the product is delivered for testing. Consequently, schedules become more reliable. Level 3 is appropriate for many medical device manufacturers.

    Level 4: Advanced Automation. The highest testing maturity level is a practiced and perfected version of level 3 with one major addition: postrelease defect tracking. Defects are trapped and sent directly back through the fix, test creation, and regression test processes. The software test team is now an integral part of product development, and testers and developers work together to build a product that will meet test requirements. Any software bugs that do occur are caught early, when they are much less expensive to fix. When testing i
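    To make the level 2 and level 4 ideas concrete, here is a minimal sketch, not taken from the article, of what a capture-and-replay regression check might look like. The function compute_dose, the input cases, and the baseline file name are all hypothetical stand-ins for a real system under test; commercial tools of the kind the article mentions do this recording and comparison automatically.

        # Minimal capture-and-replay sketch (hypothetical example, not from the article).
        # "Capture" records the observed outputs of the system under test for a set of
        # inputs; "replay" re-runs the same inputs later and flags any divergence,
        # which is the essence of the level 2 tools described above.

        import json
        from pathlib import Path

        def compute_dose(weight_kg: float, concentration_mg_per_ml: float) -> float:
            """Hypothetical stand-in for device logic: dose volume in ml at 0.5 mg/kg."""
            return round(weight_kg * 0.5 / concentration_mg_per_ml, 2)

        CASES = [(70.0, 5.0), (52.5, 2.5), (3.2, 1.0)]      # assumed test inputs
        BASELINE = Path("capture_baseline.json")             # assumed baseline file

        def capture() -> None:
            """Record the current behaviour as the baseline (the 'capture' phase)."""
            baseline = [{"inputs": c, "output": compute_dose(*c)} for c in CASES]
            BASELINE.write_text(json.dumps(baseline, indent=2))

        def replay() -> list[str]:
            """Re-run the recorded inputs and report any output that has changed."""
            failures = []
            for case in json.loads(BASELINE.read_text()):
                actual = compute_dose(*case["inputs"])
                if actual != case["output"]:
                    failures.append(
                        f"{case['inputs']}: expected {case['output']}, got {actual}"
                    )
            return failures

        if __name__ == "__main__":
            if not BASELINE.exists():
                capture()
                print("Baseline captured.")
            else:
                print("Regression failures:", replay() or "none")

    The limitation the article points out falls straight out of this sketch: the baseline only encodes past behaviour, so it goes stale as the product changes. The level 4 loop would add each trapped postrelease defect as a new recorded case, so the fix is regression-tested automatically from then on.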

  • Answer:

    I'm sure there was.

Easter Bunny 72 at Yahoo! Answers
