Integration Testing: Why? What? & How?

Introduction:

As we covered in various articles in the Testing series, there are various levels of testing:

Unit Testing, Integration Testing, System Testing

Each level of testing builds on the previous level.

“Unit testing” focuses on testing a unit of the code.
“Integration testing” is the next level of testing. It focuses on testing the integration of “units of code”, or components.

How does Integration Testing fit into the Software Development Life Cycle?

Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.

Once unit-tested components are delivered, we integrate them together.
These “integrated” components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.

It is possible that different programmers developed different components.

A lot of bugs emerge during the integration step.

In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:

Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps:

Integration Testing typically involves the following Steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable, create scripts to run the test cases (a minimal sketch follows these steps)
Step 4: Once the components have been integrated, execute the test cases
Step 5: Fix any bugs and re-test the code
Step 6: Repeat the test cycle until the components have been successfully integrated
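
Where Step 3 applies, the script can be a short program that drives two integrated components together. Below is a minimal, hypothetical sketch in Python; the OrderService and InventoryService names are invented for illustration, not taken from any real application.

```python
# Hypothetical integration check: an OrderService that depends on an
# InventoryService. Both names are illustrative assumptions.

class InventoryService:
    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, item, quantity):
        if self._stock.get(item, 0) < quantity:
            raise ValueError("insufficient stock")
        self._stock[item] -= quantity


class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory  # the integration point under test

    def place_order(self, item, quantity):
        self.inventory.reserve(item, quantity)
        return {"item": item, "quantity": quantity, "status": "placed"}


def test_order_service_integrates_with_inventory():
    # Step 4: execute the test case against the integrated components.
    order = OrderService(InventoryService()).place_order("widget", 2)
    assert order["status"] == "placed"


if __name__ == "__main__":
    test_order_service_integrates_with_inventory()
    print("integration test passed")
```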

What is an ‘Integration Test Plan’?

As you may have read in the other articles in the series, this document typically describes one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary

How to write an Integration Test Case?

Simply put, a Test Case describes exactly how the test should be carried out.
The Integration test cases specifically focus on the flow of data/information/control from one component to the other.

So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.

The various Integration Test Cases clubbed together form an Integration Test Suite.
Each suite may have a particular focus. In other words different Test Suites may be created to focus on different areas of the application.

As mentioned before a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.

Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:
a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (One or more iterations of Integration testing may be performed)

Working towards Effective Integration Testing:

There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on the integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right versions of components. Integration testing may run through several iterations, and to fix bugs, components may undergo changes. Hence it is important that a good Software Configuration Management (SCM) policy is in place. We should be able to track the components and their versions, so that each time we integrate the application components, we know exactly what versions go into the build process (a minimal sketch of such a check follows this list).

2) Automate the Build Process Where Necessary: A lot of errors occur because the wrong version of a component was sent for the build, or because a component is missing. If possible, write a script to integrate and deploy the components, as sketched below; this helps reduce manual errors.

3) Document: Document the Integration process/build process to help eliminate the errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script and the Integration Testing will not yield correct results.

4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.
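
To make points 1) and 2) concrete, here is a minimal, hypothetical Python sketch of a scripted integration step that first verifies component versions against a build manifest and then deploys the artifacts. All component names, versions, and paths are invented for illustration.

```python
# Hypothetical scripted build/deploy step for integration testing.
# Component names, versions, and paths are illustrative assumptions.
import shutil
from pathlib import Path

EXPECTED_VERSIONS = {"order-service": "1.4.2", "inventory-service": "2.0.1"}
ARTIFACTS, DEPLOY = Path("artifacts"), Path("deploy")

def read_manifest():
    # In practice this would come from the build system; hard-coded here.
    return {"order-service": "1.4.2", "inventory-service": "2.0.1"}

def main():
    manifest = read_manifest()
    # Point 1: verify we are integrating exactly the versions we expect.
    for name, expected in EXPECTED_VERSIONS.items():
        actual = manifest.get(name)
        if actual != expected:
            raise SystemExit(f"{name}: expected {expected}, got {actual}")
    # Point 2: a scripted deployment reduces manual errors and omissions.
    DEPLOY.mkdir(exist_ok=True)
    for name in EXPECTED_VERSIONS:
        artifact = ARTIFACTS / f"{name}.jar"
        if not artifact.exists():
            raise SystemExit(f"missing component artifact: {artifact}")
        shutil.copy2(artifact, DEPLOY / artifact.name)
    print("verified versions and deployed all components")

if __name__ == "__main__":
    main()
```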

 


Q: What testing approaches can you tell me about?

A: Each of the following represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

Q: What is stress testing?

A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions.

For example, when a web server is stress tested, testing aims to find out how many users can be on-line, at the same time, without crashing the server. Stress testing tests the stability of a given system or entity.

Stress testing tests something beyond its normal operational capacity, in order to observe any negative results. For example, a web server is stress tested, using scripts, bots, and various denial of service tools.
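
As a hedged sketch of that kind of script (not a production load tool), the following Python fragment ramps up concurrent simulated users against a hypothetical local server. The URL and user counts are assumptions; real stress testing would use dedicated tools, and you should only run this against a server you own.

```python
# Rough sketch of ramping up concurrent users against a test server.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # hypothetical server under test

def one_user(_):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.getcode()
    except Exception:
        return None  # under stress, errors are the expected result

for users in (10, 100, 1000):  # keep raising the load until behavior degrades
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_user, range(users)))
    print(f"{users} concurrent users -> {results.count(None)} errors")
```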

Q: What is load testing?

A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems, client/server models, including web servers.

For example, the load placed on the system is increased above normal usage patterns, in order to test the system's response at peak loads.

Q: What is the difference between stress testing and load testing?

A: Load testing generally stops short of stress testing.

During stress testing, the load is so great that the expected results are errors, though there is gray area in between stress testing and load testing.

Load testing is a blanket term that is used in many different ways across the professional software testing community.

The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing.

Q: What is the difference between performance testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q: What is the difference between reliability testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q: What is automated testing?

A: Automated testing is a formally specified and controlled testing approach in which test cases are executed by software tools rather than by hand.

Q: What is the difference between volume testing and load testing?

A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.

Q: What is incremental testing?

A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers.

Q: What is software testing?

A: Software testing is a process that identifies the correctness, completeness, and quality of software. Actually, testing cannot establish the correctness of software. It can find defects, but cannot prove there are no defects.

Q: What is alpha testing?

A: Alpha testing is final testing before the software is released to the general public. First, (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs quickly.

Then, (and this is called second stage of alpha testing), the software is handed over to software QA staff for additional testing in an environment that is similar to the intended use.

Q: What is beta testing?

A: Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs.

Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q: What is the difference between alpha and beta testing?

A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public.

Q: What is gamma testing?

A: Gamma testing is testing of software that has all the required features, but did not go through all the in-house quality checks. Cynics tend to use "gamma testing" to describe such software releases.

Q: What is boundary value analysis?

A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
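
As an illustration, suppose a hypothetical function accepts ages from 0 to 120; boundary value analysis then selects values like these (the validator below is invented for the example):

```python
# Hypothetical validator: accepts ages 0..120 inclusive.
def is_valid_age(age):
    return 0 <= age <= 120

# Boundary value analysis: minimum, maximum, just inside, just outside,
# a typical value, and an error value.
cases = {
    -1: False,   # just outside the lower boundary
    0: True,     # minimum
    1: True,     # just inside the lower boundary
    35: True,    # typical value
    119: True,   # just inside the upper boundary
    120: True,   # maximum
    121: False,  # just outside the upper boundary
    "x": False,  # error value (wrong type)
}

for value, expected in cases.items():
    try:
        actual = is_valid_age(value)
    except TypeError:
        actual = False
    assert actual == expected, f"boundary case {value!r} failed"
print("all boundary cases passed")
```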

Q: What is ad hoc testing?

A: Ad hoc testing is a testing approach; it is the least formal testing approach.

Q: What is clear box testing?

A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q: What is glass box testing?

A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q: What is open box testing?

A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q: What is black box testing?

A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q: What is functional testing?

A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q: What is closed box testing?

A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q: What is bottom-up testing?

A: Bottom-up testing is a technique for integration testing. Because low-level components are tested first, a test engineer creates and uses test drivers that stand in for the higher-level components that have not yet been developed. The objective of bottom-up testing is to call and exercise the low-level components.
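
A hedged sketch, with invented names: here a low-level tax calculation component exists, but its higher-level caller does not yet, so a small test driver exercises the low-level component directly.

```python
# Low-level component, already developed (names are illustrative).
def calculate_tax(amount, rate=0.13):
    return round(amount * rate, 2)

# Test driver standing in for the not-yet-written checkout component.
def driver():
    assert calculate_tax(100.00) == 13.00
    assert calculate_tax(0.00) == 0.00
    print("low-level component passed its driver tests")

driver()
```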

Q: What is software quality?

A: Software quality varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See the quality standard ISO 9126 for more information on this subject.

Q: What is software fault?

A: A software fault is a hidden programming error. A software fault is an error in the correctness of the semantics of a computer program.

Q: What is software failure?

A: Software failure occurs when the software does not do what the user expects to see.

Q: What is a requirements test matrix?

A: The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle.

The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.

The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality.

The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.

The requirements test matrix is a representation of user requirements aligned against system testing.

Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
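
As a tiny illustration (the requirement and test case IDs below are invented), a requirements test matrix might look like this, with an "x" marking which test cases cover which requirement:

Requirement ID | Requirement Description | TC-001 | TC-002 | TC-003
R-01 | User can log in | x |  | x
R-02 | User can reset password |  | x |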

Q: Give me a requirements test matrix template!

A: For a simple requirements test matrix template, start with a basic table that you can use for cross-referencing purposes.

How do you create one? You can create a requirements test matrix template in the following six steps:

Step 1: Find out how many requirements you have.

Step 2: Find out how many test cases you have.

Q: What is the difference between a software fault and software failure?

A: A software failure occurs when the software does not do what the user expects to see. Software faults, on the other hand, are hidden programming errors. Software faults become software failures only when the exact computation conditions are met, and the faulty portion of the code is executed on the CPU. This can occur during normal usage. Other times it occurs when the software is ported to a different hardware platform, or when the software is ported to a different compiler, or when the software gets extended.

Q: What is a test engineer?

A: We, test engineers, are engineers who specialize in testing. We create test cases, procedures, and scripts, and generate data. We execute test procedures and scripts, analyze standards of measurement, and evaluate the results of system, integration, and regression testing.

Q: What is a QA engineer?

A: QA engineers are test engineers, but they do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization.

Communication skills and the ability to understand various sides of issues are important. A QA engineer is successful if people listen to him, if people use his tests, if people think that he's useful, and if he's happy doing his work.

I would love to see QA departments staffed with experienced software developers who coach development teams to write better code. But I've never seen it. Instead of coaching, QA engineers tend to be process people.

Q: What do test case templates look like?

A: Software test cases are documents that describe inputs, actions, or events and their expected results, in order to determine if all features of an application are working correctly.

A software test case template is, for example, a 6-column table, where column 1 is the "Test case ID number", column 2 is the "Test case name", column 3 is the "Test objective", column 4 is the "Test conditions/setup", column 5 is the "Input data requirements/steps", and column 6 is the "Expected results".

All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. It also helps in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.

Once a QA tester has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q: What is the role of the test engineer?

A: We, test engineers, speed up the work of the development staff, and reduce the risk of your company's legal liability.

We also give your company the evidence that the software is correct and operates properly.

We, test engineers, improve problem tracking and reporting, maximize the value of the software, and the value of the devices that use it.

We, test engineers, assure the successful launch of the product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool, and before employees get bogged down.

We, test engineers, help the work of the software development staff, so the development team can devote its time to build up the product.

We, test engineers, promote continual improvement.

We provide documentation required by FDA, FAA, other regulatory agencies, and your customers.

We, test engineers, save your company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We save the reputation of your company by discovering bugs and design flaws, before bugs and design flaws damage the reputation of your company.

Q: What are the QA engineer's responsibilities?

A: Let's say, an engineer is hired for a small software company's QA role, and there is no QA team. Should he take responsibility to set up a QA infrastructure/process, testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we QA engineers cannot assure quality. And because QA departments cannot create quality.

What we CAN do is to detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers that they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test department, they will slack off on their testing. We need to offer to help with quality assessment, only.

Q: What metrics can be used for software development?

A: Metrics refer to statistical process control. The idea of statistical process control is a great one, but it has only a limited use in software development.

On the negative side, statistical process control works only with processes that are sufficiently well defined AND unvaried, so that they can be analyzed in terms of statistics. The problem is, most software development projects are NOT sufficiently well defined and NOT sufficiently unvaried.

On the positive side, one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases completed with certain percentage passed, or when bug rate falls below a certain level. But, if these are project management tools, why should we label them quality assurance tools?

Q: What is role of the QA engineer?

A: The QA Engineer's function is to use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q: What metrics can be used for bug tracking?

A: Metrics that can be used for bug tracking include the total number of bugs, total number of bugs that have been fixed, number of new bugs per week, and number of fixes per week.
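
A hedged sketch of how these counts might be derived from a bug tracker's export; the record layout is an assumption for illustration, and real trackers export richer records.

```python
# Sketch of computing simple bug-tracking metrics from an exported bug list.
from collections import Counter

bugs = [
    {"id": 1, "status": "fixed", "week_opened": 14, "week_fixed": 15},
    {"id": 2, "status": "open", "week_opened": 15, "week_fixed": None},
    {"id": 3, "status": "fixed", "week_opened": 15, "week_fixed": 15},
]

total = len(bugs)
total_fixed = sum(1 for b in bugs if b["status"] == "fixed")
new_per_week = Counter(b["week_opened"] for b in bugs)
fixes_per_week = Counter(b["week_fixed"] for b in bugs if b["week_fixed"])

print(f"total bugs: {total}, fixed: {total_fixed}")
print(f"new bugs per week: {dict(new_per_week)}")
print(f"fixes per week: {dict(fixes_per_week)}")
```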

Other metrics in quality assurance include...

McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric (maint_severity), data reference severity metric (DR_severity), data complexity severity metric (DV_severity), global data severity metric (gdv_severity).
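
For orientation, the first of these, McCabe's cyclomatic complexity, is computed from a routine's control-flow graph as v(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine). As a worked example, a routine containing a single if/else has a control-flow graph with 4 nodes and 4 edges, so v(G) = 4 - 4 + 2 = 2, matching its two independent paths.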

Q: What metrics can be used for bug tracking? (Cont'd...)

McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and hierarchy quality (QUAL).

Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), weighted methods per class (WMC), Halstead software metrics program length, program volume, program level and program difficulty, intelligent content, programming effort, error estimate, and programming time.

Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines left blank.

Q: How do you perform integration testing?

A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.

Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q: What metrics are used for test report generation?

A: Metrics that can be used for test report generation include...

McCabe metrics: Cyclomatic complexity metric (v(G)), Actual complexity metric (AC), Module design complexity metric (iv(G)), Essential complexity metric (ev(G)), Pathological complexity metric (pv(G)), design complexity metric (S0), Integration complexity metric (S1), Object integration complexity metric (OS1), Global data complexity metric (gdv(G)), Data complexity metric (DV), Tested data complexity metric (TDV), Data reference metric (DR), Tested data reference metric (TDR), Maintenance severity metric (maint_severity), Data reference severity metric (DR_severity), Data complexity severity metric (DV_severity), Global data severity metric (gdv_severity).

McCabe object oriented software metrics: Encapsulation percent public data (PCTPUB), and Access to public data (PUBDATA), Polymorphism percent of unoverloaded calls (PCTCALL), Number of roots (ROOTCNT), Fan-in (FANIN), quality maximum v(G) (MAXV), Maximum ev(G) (MAXEV), and Hierarchy quality(QUAL).

Other object oriented software metrics: Depth (DEPTH), Lack of cohesion of methods (LOCM), Number of children (NOC), Response for a class (RFC), Weighted methods per class (WMC), Halstead software metrics program length, Program volume, Program level and program difficulty, Intelligent content, Programming effort, Error estimate, and Programming time.

Line count software metrics: Lines of code, Lines of comment, Lines of mixed code and comments, and Lines left blank.

Q: What is the "bug life cycle"?

A: Bug life cycles are similar to software development life cycles. At any time during the software development life cycle errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out.

The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and is no longer in existence.

What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.

Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations.

A variety of commercial, problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
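
As an illustrative sketch (state names vary across organizations and tracking tools), the bug life cycle can be modeled as a small state machine:

```python
# Sketch of a bug life cycle as a state machine; the state names are
# illustrative, not taken from any particular tracking tool.
TRANSITIONS = {
    "new": {"assigned"},
    "assigned": {"fixed"},
    "fixed": {"retested"},
    "retested": {"closed", "reopened"},  # regression check passes or fails
    "reopened": {"assigned"},
    "closed": set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "new"
for step in ("assigned", "fixed", "retested", "closed"):
    state = move(state, step)
print("bug life cycle completed:", state)
```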

Q: What is integration testing?

A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.

Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable / acceptable, based on client input.

Q: What do test plan templates look like?

A: The test plan document template describes the objectives, scope, approach and focus of a software testing effort.

Test document templates are often in the form of documents that are divided into sections and subsections. One example of this template is a 4-section document, where section 1 is the "Test Objective", section 2 is the "Scope of Testing", section 3 is the "Test Approach", and section 4 is the "Focus of the Testing Effort".

All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. It also helps in learning where information is located, making it easier for a user to find what they want. With standards and templates, information will not be accidentally omitted from a document.

Once a QA tester has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q: What is a software project test plan?

A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: When do you choose automated testing?

A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile.

Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often, that it becomes a very time-consuming task to continuously update the scripts.

Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q: What's the ratio between developers and testers?

A: This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the software is near the end of alpha testing, this ratio tends to be 1:1 or 1:2, in favor of testers.

Q: What is your role in your current organization?

A: I'm a QA Engineer. The QA Engineer's function is to use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q: How can I learn to use WinRunner, without any outside help?

A: I suggest you read all you can, and that includes reading product description pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on. Then the next step is actual practice, the gathering of hands-on experience on how to use WinRunner.

If there is a will, there is a way. You CAN do it, if you put your mind to it. You CAN learn to use WinRunner, with little or no outside help.

Q: Should I take a course in manual testing?

A: Yes, you want to consider taking a course in manual testing. Why? Because learning how to perform manual testing is an important part of one's education. Unless you have a significant personal reason for not taking a course, you do not want to skip an important part of an academic program.

Q: To learn to use WinRunner, should I sign up for a course at a nearby educational institution?

A: Free, or inexpensive, education is often provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools.

In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes. Classes, especially non-degree courses in community colleges, tend to be inexpensive.

Q: How can I become a good tester? I have little or no money.

A: The cheapest, i.e. "free", education is often provided on the job, by an employer, while one is getting paid to do a testing job, where one is able to use many different software testing tools.

Q: What software tools are in demand these days?

A: There is no good answer to this question. The answer can and will change from day to day. What is in demand today is not necessarily in demand tomorrow.

To give you some recent examples, some of the software tools on end clients' lists of requirements include LabView, LoadRunner, Rational Tools and Winrunner.

But, as a general rule of thumb, there are many-many other items on their lists, depending on the end client, their needs and preferences.

It is worth repeating: the answer to this question can and will change from one day to the next. What is in demand today may not be in demand tomorrow.

Q: Which of these tools should I learn?

A: I suggest you learn some of the most popular software tools (e.g. WinRunner, LoadRunner, LabView, and Rational Rose, etc.) with special attention paid to the Rational Toolset and LoadRunner.

Q: What is software configuration management?

A: Software Configuration management (SCM) relates to Configuration Management (CM).

SCM is the control, and the recording of, changes that are made to the software and documentation throughout the software development life cycle (SDLC).

SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, and changes made to them, and to keep track of who makes the changes.

We, test engineers, have experience with a full range of CM tools and concepts, and can easily adapt to an organization's software tool and process needs.

Q: What are some of the software configuration management tools?

A: Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS; and there are many others. Rational ClearCase is a popular software tool, made by Rational Software, for revision control of source code.

DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control software tool.

CVS, or "Concurrent Version System", is a popular, open source version control system to keep track of changes in documents associated with software projects. CVS enables several, often distant, developers to work together on the same source code.

PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that compares the difference between two text files.

Q: Which of these roles are the best and most popular?

A: In testing, Tester roles tend to be the most popular. The less popular roles include the roles of System Administrator, Test/QA Team Lead, and Test/QA Managers.

Q: What other roles are in testing?

A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.

Depending on the project, one person can, and often does, wear more than one hat. For instance, we Test Engineers often wear the hats of Technical Analyst, Test Build Manager and Test Configuration Manager as well.

Q: What's the difference between priority and severity?

A: The simple answer is, "Priority is about scheduling, and severity is about standards."

The complex answer is, "Priority means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior."

Q: What is documentation change management?

A: Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes.

A good QA tester has experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q: What is up time?

A: "Up time" is the time period when a system is operational and in service. Up time is the sum of busy time and idle time.

For example, if, out of 168 hours, a system has been busy for 50 hours, idle for 110 hours, and down for 8 hours, then the busy time is 50 hours, idle time is 110 hours, and up time is (110 + 50 =) 160 hours.

Q: What is upwardly compatible software?

A: Upwardly compatible software is compatible with a later or more complex version of itself. For example, upwardly compatible software is able to handle files created by a later version of itself.

Q: What is upward compression?

A: In software design, upward compression means a form of demodularization, in which a subordinate module is copied into the body of a superior module.

Q: What is usability?

A: Usability means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a software product.

Q: What is user documentation?

A: User documentation is a document that describes the way a software product or system should be used to obtain the desired results.

Q: What is a user manual?

A: User manual is a document that presents information necessary to employ software or a system to obtain the desired results.

Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

Q: What is the difference between user documentation and user manual?

A: When a distinction is made between those who operate and use a computer system for its intended purpose, a separate user documentation and user manual is created. Operators get user documentation, and users get user manuals.

Q: What is user friendly software?

A: A computer program is user friendly when ease of use is one of the primary objectives of its design.

Q: What is a user friendly document?

A: A document is user friendly when ease of use is one of the primary objectives of its design.

Q: What is a user guide?

A: User guide is the same as the user manual. It is a document that presents information necessary to employ a system or component to obtain the desired results.

Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

Q: What is user interface?

A: User interface is the interface between a human user and a computer system. It enables the passage of information between a human user and hardware or software components of a computer system.

Q: What is a utility?

A: Utility is a software tool designed to perform some frequently used support function. For example, a program to print files.

Q: What is utilization?

A: Utilization is the ratio of the time a system is busy to the time it is available. For example, a system that is busy 50 hours out of 160 available hours has a utilization of about 31 percent. Utilization is a useful measure in evaluating computer performance.

Q: What is V&V?

A: V&V is an acronym for verification and validation.

Q: What is variable trace?

A: Variable trace is a record of the names and values of variables accessed and changed during the execution of a computer program.

Q: What is value trace?

A: Value trace is the same as variable trace. It is a record of the names and values of variables accessed and changed during the execution of a computer program.

Q: What is a variable?

A: Variables are data items whose values can change. One example is a variable we've named "capacitor_voltage_10000", where "capacitor_voltage_10000" can be any whole number between -10000 and +10000.

Keep in mind, there are local and global variables.

Q: What is a variant?

A: Variants are versions of a program. Variants result from the application of software diversity.

Q: What is verification and validation (V&V)?

A: Verification and validation (V&V) is a process that helps to determine if the software requirements are complete and correct; if the software of each development phase fulfills the requirements and conditions imposed by the previous phase; and if the final software complies with the applicable software requirements.

Q: What is a software version?

A: A software version is an initial release (or re-release) of software, associated with a complete compilation (or recompilation) of the software.

Q: What is a document version?

A: A document version is an initial release (or a complete re-release) of a document, as opposed to a revision resulting from issuing change pages to a previous release.

Q: What is VDD?

A: VDD is an acronym. It stands for "version description document".

Q: What is a version description document (VDD)?

A: Version description document (VDD) is a document that accompanies and identifies a given version of a software product.

Typically the VDD includes a description, and identification of the software, identification of changes incorporated into this version, and installation and operating information unique to this version of the software.

Q: What is a vertical microinstruction?

A: A vertical microinstruction is a microinstruction that specifies one of a sequence of operations needed to carry out a machine language instruction. Vertical microinstructions are short, 12- to 24-bit instructions. They're called vertical because they are normally listed vertically on a page. Several of these 12- to 24-bit microinstructions are required to carry out a single machine language instruction.

Besides vertical microinstructions, there are also horizontal and diagonal microinstructions.

Q: What is a virtual address?

A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. They allow those locations to be accessed as though they were part of the main storage.

Q: What is virtual memory?

A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q: What is virtual storage?

A: Virtual storage is a storage allocation technique, in which auxiliary storage can be addressed as though it was part of main storage. Portions of a user's program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q: What is a waiver?

A: Waivers are authorizations to accept software that has been submitted for inspection, found to depart from specified requirements, but is nevertheless considered suitable for use "as is", or after rework by an approved method.

Q: What is the waterfall model?

A: Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase are performed in that order, possibly with overlap, but with little or no iteration.

Q: What are the phases of the software development process?

A: The software development process consists of the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase.

Q: What models are used in software development?

A: In the software development process, the following models are used: the waterfall model, the incremental development model, the rapid prototyping model, and the spiral model.

Q: What is SDLC?

A: SDLC is an acronym. It stands for "software development life cycle".

Q: What is the difference between system testing and integration testing?

A: System testing is a higher level of testing, and integration testing is a lower level of testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.

For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components.

For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.

The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.

The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

Q: What are the parameters of performance testing?

A: Performance testing verifies loads, volumes, and response times, as defined by requirements. Performance testing is a part of system testing, but it is also a distinct level of testing.

The term 'performance testing' is often used synonymously with stress testing, load testing, reliability testing, and volume testing.

Q: What types of testing can you tell me about?

A: Each of the following represents a different type of testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

Q: What is disaster recovery testing?

A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

Q: How do you conduct peer reviews?

A: Peer reviews, sometimes called PDRs, are formal meetings, more formalized than a walk-through, and typically consist of 3-10 people, including the test lead, the task lead (the author of whatever is being reviewed) and a facilitator (to take notes).

The subject of the PDR is typically a code block, release, feature, or document. The purpose of the PDR is to find problems and see what is missing, not to fix anything.

The result of the meeting is documented in a written report. Attendees should prepare for PDRs by reading through documents, before the meeting starts; most problems are found during this preparation.

Why are PDRs so useful? Because PDRs are a cost-effective method of ensuring quality: bug prevention is more cost-effective than bug detection.

Q: How do you test the password field?

A: To test the password field, we do boundary value testing.
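
For example, assuming a hypothetical policy that passwords must be 8 to 64 characters long, boundary value testing exercises lengths just inside and just outside those limits (the validator is invented for the example):

```python
# Hypothetical policy: passwords must be 8..64 characters.
def is_valid_password(pw):
    return 8 <= len(pw) <= 64

for length, expected in [(7, False), (8, True), (9, True),
                         (63, True), (64, True), (65, False)]:
    assert is_valid_password("a" * length) == expected, f"length {length} failed"
print("password boundary cases passed")
```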

Q: How do you check the security of your application?

A: To check the security of an application, we can use security/penetration testing. Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage.

This type of testing usually requires sophisticated testing techniques.

Q: When testing the password field, what is your focus?

A: When testing the password field, one needs to verify that passwords are encrypted.

Q: What is the objective of regression testing?

A: The objective of regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact.

A baseline set of data and scripts is maintained and executed, to verify that changes introduced during the release have not "undone" any previous code.

Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.
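
A minimal sketch of that baseline comparison, with invented test case IDs and results; in practice the baseline would be stored results from a previous, accepted release:

```python
# Sketch of a baseline comparison for regression testing.
baseline = {"TC-001": "PASS", "TC-002": "PASS", "TC-003": "PASS"}
current = {"TC-001": "PASS", "TC-002": "FAIL", "TC-003": "PASS"}

discrepancies = {tc: (baseline[tc], current.get(tc))
                 for tc in baseline if current.get(tc) != baseline[tc]}

for tc, (was, now) in discrepancies.items():
    print(f"{tc}: baseline {was}, now {now} -- must be accounted for")
if discrepancies:
    raise SystemExit("regression suite found discrepancies; do not proceed")
print("no regressions against the baseline")
```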

Q: What stage of bug fixing is the most cost effective?

A: Bug prevention, i.e. inspections, PDRs, and walk-throughs, is more cost effective than bug detection.

Q: What types of white box testing can you tell me about?

A: White box testing is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Clear box testing is a white box type of testing. Glass box testing is also a white box type of testing. Open box testing is also a white box type of testing.

Q: What black box testing types can you tell me about?

A: Black box testing is functional testing, not based on any knowledge of internal software design or code.

Black box testing is based on requirements and functionality. Functional testing is also a black-box type of testing geared to functional requirements of an application.

System testing is also a black box type of testing. Acceptance testing is also a black box type of testing. Closed box testing is also a black box type of testing. Integration testing is also a black box type of testing.

Q: Is regression testing performed manually?

A: It depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is normally performed manually.

Conversely, if the initial testing approach was automated testing, then the regression testing is normally performed by automated testing.

Q: How can I learn software testing?

A: I suggest you visit my web site, especially robdavispe.com/free and robdavispe.com/free2, and you will find answers to most questions on software testing. As to questions and answers that are not on my web site at the moment, please be patient. I will add more questions and answers, as soon as time permits.

I also suggest you get a job in software testing. Why? Because you can get additional, free education, on the job, by an employer, while you are being paid to do software testing. On the job, you will be able to use some of the more popular software tools, including WinRunner, LoadRunner, LabView, and the Rational Toolset. The tools you use will depend on the end client, their needs and preferences.

I also suggest you sign up for courses at nearby educational institutions. Classroom education, especially non-degree courses in local community colleges, tends to be inexpensive.

Q: What is your view of software QA/testing?

A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive, attainable and testable, and if schedules are realistic, and if there is good communication.

Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.

Q: What is your view of software QA/testing? (Cont'd...)

Software QA/testing is relatively easy, if testing is started early on, and if fixes or changes are re-tested, and if sufficient time is planned for both testing and bug fixing.

Software QA/testing is easy, if new features are avoided, and if one sticks to initial requirements as much as possible.

Q: How can I be a good tester?

A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We have a "test to break" attitude, a strong desire for quality, an attention to detail, and good communication skills, both oral and written.

Previous software development experience is also helpful as it provides a deeper understanding of the software development process.

Q: What is the difference between software bug and software defect?

A: A 'software bug' is a nonspecific term that means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program.

Other terms, e.g. software defect and software failure, are more specific.

Although there are many who believe the word 'bug' is a reference to insects that caused malfunctions in early electromechanical computers (in the 1950s and 1960s), the truth is that the word 'bug' has been part of engineering jargon for well over 100 years. Thomas Edison, the great inventor, wrote the following in 1878: "It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that "Bugs" — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached."

Q: How can I improve my career in software QA/testing?

A: Invest in your skills! Learn all you can! Visit my web site, and on http://robdavispe.com/free and http://robdavispe.com/free2, you will find answers to the vast majority of questions on testing, from software QA/testers' point of view.

Get additional education, on the job. Free education is often provided by employers, while you are paid to do the job of a tester. On the job, often you can use many software tools, including WinRunner, LoadRunner, LabView, and Rational Toolset. Find an employer whose needs and preferences are similar to yours.

Get an education! Sign up for courses at nearby educational institutes. Take classes! Classroom education, especially non-degree courses in local community colleges, tends to be inexpensive. Improve your attitude! Become the best software QA/tester! Always strive to exceed the expectations of your customers!

Q: How do you compare two files?

A: Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that compares the difference between two text files.

Q: What do we use for comparison?

A: Generally speaking, when we write a software program to compare files, we compare two files, bit by bit. For example, when we use "diff", a UNIX utility, we compare two text files.
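
As a sketch of the same idea using Python's standard library, difflib can reproduce a "diff"-style comparison after a bit-for-bit check; the file names below are placeholders:

```python
# Compare two files: bit-for-bit first, then a unified text diff.
import difflib
from pathlib import Path

a, b = Path("expected.txt"), Path("actual.txt")  # placeholder file names

if a.read_bytes() == b.read_bytes():
    print("files are identical")
else:
    # For text files, show a unified diff of the differences.
    diff = difflib.unified_diff(
        a.read_text().splitlines(keepends=True),
        b.read_text().splitlines(keepends=True),
        fromfile=str(a), tofile=str(b))
    print("".join(diff))
```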

Q: What is the reason we compare files?

A: We compare files because of configuration management, revision control, requirement version control, or document version control. Examples are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often distant, developers to work together on the same source code.

Q: When is a process repeatable?

A: If we use detailed and well-written processes and procedures, we ensure the correct steps are being executed. This facilitates a successful completion of a task. This is a way we also ensure a process is repeatable.

Q: What is test methodology?

A: One test methodology is a three-step process: creating a test strategy, creating a test plan/design, and executing tests. This methodology can be used and molded to your organization's needs.

A QA tester believes that using this methodology is important in the development and ongoing maintenance of his customers' applications.

Q: What does a Test Strategy Document contain?

A: The test strategy document is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required.

The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team.

The test plan may include test cases, conditions, the test environment, and a list of related tasks, pass/fail criteria and risk assessment.

Additional sections in the test strategy document include:

- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change request, technical, and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Q: How can I start my career in automated testing?

A: First, I suggest you read all you can, and that includes reading product description pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on.

Second, get hands-on experience with automated testing tools.

If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use WinRunner, and many other automated testing tools, with little or no outside help.

Q: What is monkey testing?

A: "Monkey testing" is random testing performed by automated testing tools. These automated testing tools are considered "monkeys", if they work at random.

We call them "monkeys" because it is widely believed, if we allow six monkeys to pound on six typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.

There are "smart monkeys" and "dumb monkeys".

"Smart monkeys" are valuable for load and stress testing, and will find a significant number of bugs, but they're also very expensive to develop.

"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product.

"Monkey testing" can be valuable, but they should not be your only testing.

Q: What is stochastic testing?

A: Stochastic testing is the same as "monkey testing"; "stochastic" is simply a more technical-sounding name for the same testing process.

Stochastic testing is black box testing, random testing, performed by automated testing tools. Stochastic testing is a series of random tests over time.

The software under test typically passes the individual tests, but our goal is to see if it can pass a large series of the individual tests.
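
The emphasis on passing a large series of tests over time can be sketched in Python as follows (an illustration, not from the original text): a long, seeded sequence of random operations is run against a stateful component, here a hypothetical Stack class, with a simple reference model checking every step.

import random

class Stack:
    # Hypothetical component under test.
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()

random.seed(7)  # seeding makes the long random series repeatable
stack, model = Stack(), []
for step in range(100_000):
    if model and random.random() < 0.5:
        # The component must agree with the reference model every time.
        assert stack.pop() == model.pop(), f"divergence at step {step}"
    else:
        value = random.randint(0, 999)
        stack.push(value)
        model.append(value)

print("passed a series of 100,000 random operations")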

Q: What is mutation testing?

A: In mutation testing, we create mutant versions of the software and try to make them fail, thus demonstrating the adequacy of our test cases.

When we create a set of mutants, each mutant differs from the original software by exactly one mutation, i.e. one single syntax change made to one of its program statements; in other words, each mutant contains exactly one fault.

When we apply test cases to the original software and to the mutant software, we evaluate if our test case is adequate.

Our test case is inadequate if the original software and all of the mutants generate the same output.

Our test case is adequate if it detects faults, or if at least one mutant generates a different output than the original software does for that test case.
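
As a toy illustration (not part of the original answer), the sketch below builds one mutant that differs from a hypothetical original function, is_adult, by a single syntax change, and then checks whether a test case kills it. Note that only the boundary value 18 distinguishes the two versions.

# Toy mutation-testing sketch; is_adult() is a hypothetical function.
def is_adult(age):
    return age >= 18   # original program statement

def is_adult_mutant(age):
    return age > 18    # one single mutation: >= changed to >

test_case = [17, 18, 21]  # 18 is the boundary value that kills the mutant
killed = any(is_adult(a) != is_adult_mutant(a) for a in test_case)
print("test case is adequate (mutant killed)" if killed
      else "test case is inadequate (mutant survived)")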

Q: What is PDR?

A: PDR is an acronym. In the world of software QA/testing, it stands for "peer design review", or "peer review".

Q: What is good about PDRs?

A: PDRs are informal meetings, and I do like all informal meetings. PDRs make perfect sense, because they're for the mutual benefit of you and your end client.

Your end client requires a PDR, because they work on a product, and want to come up with the very best possible design and documentation.

Your end client requires you to have a PDR, because when you organize a PDR, you invite and assemble the end client's best experts and encourage them to voice their concerns as to what should or should not go into the design and documentation, and why.

When you're a developer, designer, author, or writer, it's also to your advantage to come up with the best possible design and documentation.

Therefore you want to embrace the idea of the PDR: holding a PDR gives you a significant opportunity to invite and assemble the end client's best experts and have them work for you for an hour, for your own benefit, encouraging them to speak up and voice their concerns as to what should or should not go into your design and documentation, and why.

Q: Why is it that my company requires a PDR?

A: Your company requires a PDR, because your company wants to be the owner of the very best possible design and documentation. Your company requires a PDR, because when you organize a PDR, you invite, assemble and encourage the company's best experts to voice their concerns as to what should or should not go into your design and documentation, and why.

Remember, PDRs are not about you, but about design and documentation. Please don't be negative; please do not assume your company is finding fault with your work, or distrusting you in any way. There is a 90+ per cent probability your company wants you, likes you and trusts you, because you're a specialist.

245

Defect Tracking Tools

This is a list of defect tracking tools. Both commercial and freeware tools are included. The tools on this list are all available standalone, with the exception of a few that are integrated with a test management system. Tools that are only available as part of a bundled suite of tools, such as a configuration management toolset or a complex CASE tool, are not included. Tools that are better suited as call management tools than defect tracking tools are not included, though some tools that are listed claim to do both well.

Many of the tools on this page do not include a Software Description section. This was done to make it easier when I first set up the page. I am now accepting updates and new entries from vendors that include this section.

Current Listings

  • +1CR (+1 Software Engineering)
  • Aardvark (Red Gate Software Ltd.)
  • Abuky (freeware)
  • AceProject (Websystems Inc.)
  • AdminiTrack (AdminiTrack, Inc.)
  • Advanced Defect Tracking (Borderwave Software)
  • Alcea Fast BugTrack (Alcea Technologies Ltd.)
  • AllChange (Intasoft)
  • AQdevTeam (AutomatedQA Corp.)
  • Atlassian JIRA (Atlassian Software Systems)
  • BitDesk (PTLogica)
  • BMC Remedy Quality Management (BMC Software, Inc.)
  • BridgeTrak Suite (Kemma Software)
  • Bug Trail
  • Bug-Track.com
  • BugAware (Jackal Software Pty Ltd)
  • BugBase 2000 (Threerock Software)
  • BugBox (BugBox)
  • Bugcentral.com (Bugcentral Inc.)
  • Bug/Defect Tracking Expert (Applied Innovation Management, Inc.)
  • Buggit (freeware)
  • Buggy (Novosys EDV GmbH)
  • BugHost (Active-X.COM)
  • BugLister (Hajo Kirchhoff)
  • BugMonitor.com (BugMonitor.com, Inc.)
  • BugRat (freeware)
  • BugStation (Bugopolis LLC)
  • BUGtrack (ForeSoft Corporation)
  • Bugtrack (freeware)
  • Bug Tracker Server (Avensoft)
  • Bug Tracker Software (Bug Tracker Software)
  • BugUP
  • Bugzero (WEBsina)
  • Bugzilla (freeware)
  • Census Bug Tracking and Defect Tracking (Metaquest)
  • Change Commander (Lightspeed Software)
  • ClearDDTS (IBM Rational)
  • ClearQuest (IBM Rational)
  • CustomerFirst (Repository Technologies, Inc.)
  • Debian Bug Tracking System (freeware)
  • Defect Agent (Inborne Technology Corporation)
  • Defect Manager (Tiera Software, Inc)
  • Defect Tracker (New Fire)
  • defectX (defectX)
  • Deskzilla
  • DevTrack (TechExcel, Inc)
  • d-Tracker (Empirix)
  • elementool (elementool Inc.)
  • eQRP (Amadeus International Inc.)
  • ExtraView (Sesame Technology)
  • Flats Helpdesk (WarrinerWare)
  • FMAS (stag software private limited)
  • FogBUGZ (Fog Creek Software)
  • GNATS (freeware)
  • GRAN PM (GRAN Ltd.)
  • Helis (freeware)
  • icTracker (IC Soft, Inc.)
  • inControl (stag software private limited)
  • IOS/Track (Interobject Systems)
  • IssueNet Intercept
  • IssueView (IssueView.Com)
  • ITracker (Cowsultants.com)
  • JitterBug (freeware)
  • JTrac
  • LegendSoft SPoTS (LegendSoft Inc.)
  • Mantis (freeware)
  • McCabe CM - TRUEtrack (McCabe Software, Inc.)
  • OfficeClip Defect Tracker (OfficeClip, LLC)
  • OnTime Defect Tracker
  • Ozibug (Tortuga Technologies)
  • PloneCollectorNG (ZOPYX Software development and consulting Andreas Jung)
  • Problem Reporting System (Testmasters, Inc)
  • ProblemTracker (NetResults)
  • ProjectLocker (One Percent Software)
  • ProjectPortal (Most Media)
  • PR-Tracker (Softwise Company)
  • QAS.PTAR (Problem Tracking and Reporting)
  • QuickBugs (Excel Software)
  • RADAR (Cosmonet Solutions)
  • Razor/PT (Visible Systems Corporation)
  • RMTrack (RMTrack Issue Tracking Solutions Inc.)
  • Roundup (freeware)
  • Scarab (freeware)
  • SilkRadar (Segue Software, Inc.)
  • SourceAction
  • SourceCast (CollabNet, Inc.)
  • Support Tracker
  • SWBTracker (Software with Brains Inc.)
  • Squish (Information Management Services, Inc.)
  • T-Plan Incident Manager (T-Plan)
  • TeamTrack (TeamShare, Inc.)
  • Team Tracker (hs technologies pty ltd.)
  • Telelogic Change (Telelogic AB)
  • TestTrack Pro (Seapine Software)
  • Trac (Edgewall Software)
  • Trackem (Pikon Innovations)
  • Tracker (freeware)
  • TrackStudio Enterprise (TrackStudio, Ltd)
  • TrackWeb Defects (Soffront)
  • Trackgear (LogiGear)
  • TrackRecord (Compuware)
  • Trackwise (Sparta Systems)
  • Visual Intercept (Elsinore Technologies)
  • vManage
  • WebPTS (Zambit Technologies, Inc.)
  • yKAP - Your Kind Attention Please (DCom Solutions)
  • ZeroDefect (ProStyle Software Inc.)
   

Tools Listed Elsewhere

These tools are listed in a different category, but also offer features that are relevant for this page.

  • DuxQA
  • TestDirector
  • SpiraTest

Other Defect Tracking Tool Resources

  • Problem Management Tools Summary from the comp.software.config-mgmt FAQ
  • Call Center, Bug Tracking and Project Management Tools for Linux - also includes some useful terms and definitions
  • CASE tool index, from the Queen's University Software Engineering Archives
  • Phil Verghis' Help Desk FAQ

+1CR

Kind of Tool

Supports extensive problem report management capabilities that allow you to submit, list, view, query, print, and administer change requests.

Organization

+1 Software Engineering
http://www.plus-one.com/+1CR_fact_sheet.html

Platforms

Solaris

Entry added November 29, 2000.
Return to Listings


Aardvark

Kind of Tool

Browser-based bug tracking system

Organization

Red Gate Software Ltd.
http://www.red-gate.com/

Platforms

Hosted web-based service, or Aardvark in a Box for Windows NT 4.0 Server, Windows 2000 Server for the server

Entry added December 8, 2000.
Return to Listings


AllChange

Kind of Tool

Configuration/change management system

Organization

Intasoft
http://www.intasoft.co.uk/

Platforms

Server: Windows and UNIX

Entry added November 29, 2000.
Return to Listings


Bugcentral.com

Kind of Tool

Hosted bug tracking service

Organization

Bugcentral Inc.
http://www.bugcentral.com/

Software Description

Bugcentral.com is a fully hosted bug tracking service; there is no software to install. Companies such as the Financial Times of London, Ernst & Young, Accenture, EDS and other major corporations have used Bugcentral as their centralized bug tracking service to keep their distributed teams in sync.

Platforms

All platforms supported, accessible through any browser.

Entry updated March 21, 2008.
Return to Listings


Bug/Defect Tracking Expert

Kind of Tool

Web-based bug/defect tracking system.

Organization

Applied Innovation Management, Inc.
http://www.bug-defect-tracking-expert.com/

Platforms

Windows NT 4.0, 2000; Solaris, RedHat Linux

Entry added November 29, 2000.
Return to Listings


Buggit

Kind of Tool

Manages bugs and features. (freeware)

Organization

Pierce Business Systems
http://www.winsite.com/bin/Info?500000025751 - Access 2000
http://www.winsite.com/bin/Info?460 - Access 97

Software Description

Buggit manages bugs and features throughout the software development process. Testers, developers, and managers can all benefit greatly from the use of Buggit. They can enter and edit bugs/features, perform quick lookups of existing issues, print from a wide variety of powerful reports and graphs (see screen shot links at PBSystems webpage), administer new bug project databases, and much more. Buggit provides an unlimited number of central, multi-user databases, each capable of handling multiple concurrent users across the development team.

Buggit was developed by Pierce Business Systems, which is no longer in business. It is still distributed by WinSite.

Platforms

Windows with Access 97 or Access 2000

Entry updated August 27, 2007.
Return to Listings


Buggy

Kind of Tool

A multiuser database program designed specifically for keeping track of bugs in your programs.

Organization

Novosys EDV GmbH
http://www.novosys.de/Buggy/Buggy.html

Platforms

Server: Windows NT, 2000, XP, 2003. Clients: all 32-bit Windows OS

Entry added November 29, 2000.
Return to Listings


Bugzilla

Kind of Tool

Web-based database for bugs. (freeware)

Organization

Mozilla
http://www.bugzilla.org/

Software Description

A defect tracking system that allows individuals or groups of developers to keep track of outstanding bugs in their product effectively.

Platforms

Solaris, Linux, Win32, MacOS X, xBSD

Entry updated February 3, 2003.
Return to Listings


Census Bug Tracking and Defect Tracking

Kind of Tool

Bug tracking system

Organization

MetaQuest Software Inc.
http://www.metaquest.com/Solutions/BugTracking/BugTracking.html

Software Description

Census is a highly scalable Web-based bug tracking and defect tracking tool that can also track change requests, support calls, test cases, timesheets, and much more. Features include full customization capabilities, Visual SourceSafe integration, automatic e-mail notifications, user/group/field-level security, role-based workflow rules, custom Web views for different groups of users, built-in reporting, attachments, and change history tracking.

Platforms

All Windows, web-based

Entry updated September 28, 2004.
Return to Listings


Telelogic Change

Kind of Tool

An entirely Web-based and fully integrated change request tracking and reporting system that simplifies the process for change request management.

Organization

Telelogic AB
http://www.telelogic.com/products/change/

Platforms

Server: Windows Server 2003, Windows XP Professional, Solaris, HP-UX, IBM AIX, Redhat Enterprise Linux. Client: Internet Explorer, Mozilla Firefox.

Entry updated March 21, 2008.
Return to Listings


ClearDDTS

Kind of Tool

A change request management product for UNIX specifically designed to track and manage product defects and enhancement requests uncovered during product development and quality assurance testing.

Organization

IBM Rational Software
http://www-306.ibm.com/software/awdtools/clearddts/

Platforms

Server: Sun Solaris, HP-UX, DEC OSF1, IBM AIX, SGI IRIX

Entry added November 29, 2000.
Return to Listings


ClearQuest

Kind of Tool

A highly flexible defect and change tracking system that captures and tracks all types of change.

Organization

IBM Rational Software
http://www-306.ibm.com/software/awdtools/clearquest/

Platforms

Client: NT 4.0, 95/98, 2000. Server: NT 4.0, 95/98, 2000.

Entry added November 29, 2000.
Return to Listings


CustomerFirst

Kind of Tool

CustomerFirst contains an integrated defect tracking system that improves the communications and workflow between the support, development, and quality assurance departments.

Organization

RTI Software
http://www.rti-software.com/customerfirst.html

Platforms

Windows 95, 98, NT, 2000, Netware, OS/2, Unix

Entry added November 29, 2000.
Return to Listings


Debian Bug Tracking System

Kind of Tool

Problem report database (freeware)

Organization

Darren Benham
http://www.chiark.greenend.org.uk/~ian/debbugs/

Software Description

The Debian bug tracking system is a set of scripts which maintain a database of problem reports. All input and manipulation of reports is done by email; developers do not need on-line web access or accounts on the host system. Outstanding, recently closed and other listings of reports are made available via a webserver, and by an email request bot. Each report has a separate email address for submission of additional information. Core functions do not require CGI scripts.

Platforms

Unix

Entry updated February 3, 2003.
Return to Listings


Defect Tracker

Kind of Tool

Tracks and organizes defect reports.

247

CONFIGURATION MANAGEMENT (CM) GUIDELINES 

1. PURPOSE STATEMENT

This document describes recommended guidelines to follow in implementing configuration management for project configuration items. These items may include but are not limited to the following:

a. Developed project computer software configuration items (releases)
b. Project documentation and specifications
c. Web site content
d. Operational procedures
e. Training data
f. Computer system support resources (hardware and support software)

Configuration Management (CM) is a discipline which applies technical and administrative direction, surveillance, and control to all project configuration items. The scope of configuration management addressed in these guidelines has been developed using IEEE 1042‑1987 Guide to Software Configuration Management (ANSI) and IEEE 828‑1983 Standard for Software Configuration Management Plans (ANSI) as guidelines.

 Configuration Management includes the following:

a. Planning
b. Configuration item identification
c. Configuration control
d. Status accounting and reporting
e. Release to production process

The purpose of CM is to maintain the integrity of the product and the engineering effort, so that the contractual, functional, and performance requirements of the system will be met, and to provide a disciplined baseline change control process.

One of the key functional components of configuration management is change/version control. Use of an automated tool is recommended to facilitate implementation of change control as well as other CM procedures. It is recommended that a specific written procedure, incorporating the guidelines in this document, be documented for the use of any automated CM tool selected.

2. GUIDELINES

2.1 INTRODUCTION

Configuration management is the process of formally identifying and controlling project configuration items. The following definitions apply to this set of guidelines:

 Baseline ‑ A set of documents, specifications, and/or software products that have been formally reviewed and agreed upon, that thereafter serve as the basis for further development, and that can be changed only through formal change control procedures. A baseline is formally designated and fixed at specific times during the life cycle of a project configuration item or project. Baselines, plus approved changes from those baselines, constitute the current configuration identification.

 Computer Software Component (CSC) ‑ A functionally or logically distinct part of a computer software configuration item (CSCI), typically an aggregate of two or more computer software units (CSU).

 Computer Software Configuration Item (CSCI) ‑ The sum total or aggregation of the application software that is resident on a single computer or CPU. It is treated as a single entity in the configuration management process.

Project/System ‑ The primary physical parts of a system are Hardware Configuration Item(s) (HWCI), Computer Software Configuration Item(s) (CSCI), and documentation.

Hardware Configuration Item (HWCI) ‑ The computer hardware required to run the Project/System application software.

Release (Revision, Version) - Any change to any project configuration item. Revision numbers have three levels, x, y, and z (e.g., revision 1.2.3 implies level x=1, level y=2, and level z=3). When x changes, y and z must be set to zero. (The terms release, revision, or version may be used interchangeably as long as they are consistent within a given project.)

2.2  CONFIGURATION ITEM IDENTIFICATION AND NUMBERING

In order to have a consistent project CM process, nomenclature, and organization, the following numbering scheme is recommended for project configuration items:

   As a general guideline, changing 10 percent or more of the respective baselined project configuration item constitutes a major change - new release (revision/version). Critical/emergency releases are a separate category of mandatory changes that are time‑sensitive and cannot be postponed until the next planned release.  All other changes are considered minor changes.

 The following provides expanded details and examples:

a. Number the initial project configuration item Release/Revision/Version 1.0.0. Number major changes with sequential numbers in number level x (e.g., the first major change to Release/Revision/Version 1.0.0 is numbered 2.0.0).

b. Number minor changes to a project configuration item with sequential decimal numbers in number level y (e.g., the first minor change to Release/Revision/Version 1.0.0 is numbered 1.1.0, and the second minor change is numbered Version 1.2.0).

c. Number emergency changes to a project configuration item Release/Revision/Version with sequential decimal numbers in number level z (e.g., the first emergency change to Version 1.0.0 is numbered Version 1.0.1, and the second emergency change to Version 1.0.0 is numbered Version 1.0.2).

Computer Software Configuration Item (CSCI) Release example:

e.g., Release 1.0.0 => the first "MAJOR" release of a software project

Release 2.0.0 => the second "MAJOR" release of a software project; ~more than 10% of the previous release's functionality has been changed

Release 1.1.0 => the first "MINOR" release of the previous "MAJOR" Release 1.0.0; ~less than 10% of the previous release's functionality has been changed

Release 2.2.0 => the second "MINOR" release of the previous "MAJOR" Release 2.0.0; ~less than 10% of the previous release's functionality has been changed

Release 7.x => some future undetermined "MINOR" release; most likely a "MINOR" release of a previous "MAJOR" Release 7.n

Release 6.1.1 => the first "Emergency" release of "MINOR" Release 6.1.0; this implies quick-reaction development/testing/implementation for "show stopper" enhancements and incidents.
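
The numbering rules above lend themselves to a small helper function. The sketch below is illustrative only; bump_release is a hypothetical name, not part of any CM tool, and it simply encodes the x.y.z scheme described in this guideline (a major change resets y and z to zero, a minor change resets z to zero, an emergency change increments z).

# Sketch of the x.y.z release-numbering rules described above.
# bump_release() is a hypothetical helper, not part of any CM tool.
def bump_release(release, change):
    x, y, z = (int(n) for n in release.split("."))
    if change == "major":        # ~10% or more of the baseline changed
        return f"{x + 1}.0.0"    # y and z reset to zero
    if change == "minor":        # smaller planned change
        return f"{x}.{y + 1}.0"  # z resets to zero
    if change == "emergency":    # time-sensitive "show stopper" fix
        return f"{x}.{y}.{z + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump_release("1.0.0", "major"))      # -> 2.0.0
print(bump_release("1.0.0", "minor"))      # -> 1.1.0
print(bump_release("6.1.0", "emergency"))  # -> 6.1.1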

 Documentation project configuration item example:

e.g., Assign all documentation a unique title (for example, System Test Plan, Software Design Document), a release/revision/version date, and optionally a document release/revision/version number. The document title and date can be used as the simplest complete and unique identifier of a document, given that the date is changed each time ANY document content is changed. This uniquely identifies a release/revision/version of a document; however, it gives no indication of the revision history. A release number like the three-digit number used in the CSCI example above may also be used.

2.3 CHANGE CONTROL

Change control may be implemented as follows:

a. All project configuration items are maintained in an electronic file or automated tool repository for which access is controlled.

b. Working copies of baselined project configuration items may be "checked out" to staff engineers to be used in preparing new draft releases/revisions/versions. Only the project leader or designee can give permission to check out these working copies of baselined project configuration items of their respective projects. These working copies become part of an approved baseline only after approval is received from the project leader or designee. If multiple copies of a project configuration item are "checked out", care must be taken to assure all changes are incorporated when checking these items back into the repository. It is not recommended that multiple copies of a given project configuration item be "checked out" concurrently.

c. Any changes to the currently approved baseline of a project configuration item are made after receiving any required reviews and/or approvals. At this point the respective project configuration item is "checked in" to the project repository.

2.4 STATUS ACCOUNTING AND REPORTING

Configuration status accounting is the process used to record and report the status of changes to project configuration items under formal configuration management.

 For each project configuration item, the project leader should maintain an organized set of the configuration management records. The configuration management records should include the following:

a. Documentation records used to certify that project configuration items are ready for release for technical review or approval.

b. Documentation status records used to indicate project configuration item release, review, and approval schedule and status.

c. Status of project configuration item change proposals (e.g., Lotus Notes Incidents, Enhancements, Issues).

d. Communication/distribution of changes made and pending approval or implementation.

The project leader may submit a configuration management status report periodically to the designated department manager if any changes have been made to any respective project configuration item since the last report. This status report should contain the following:

a. A complete list of the latest project configuration items and associated release/revision/version numbers.
b. A summary statement of the most recent modification(s) to each project configuration item; a brief description of changes from the last report.
c. Dates any changes were incorporated.
d. Life cycle development progress records indicating the status of each project configuration item (e.g., "being revised", "final", "draft", etc.).