Defect & Test Management

PMIT 6111: ST & QA
What is a Defect?
• When the expected result of a test does not match the actual result of the application, the mismatch is called a defect.
• A defect may also be called an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers and architects.
• The following are common types of defects that occur during development:
• Arithmetic Defects
• Logical Defects
• Syntax Defects
• Multithreading Defects
• Interface Defects
• Performance Defects
Defect Classification
• Defects are classified from two perspectives:
• From the QA team's perspective, as Priority
• From the development team's perspective, as Severity (the impact of the defect and the complexity of the code needed to fix it)
• These two major classifications play an important role in the timeframe and the amount of work that goes into fixing defects.
Priority
• What is Priority?
• Priority is defined as the order in which the defects should be resolved.
• The priority status is usually set by the QA team when raising the defect against the development team, along with a timeframe for fixing it.
• The priority status is set based on the requirements of the end users.
• For example, if the company logo is placed incorrectly on the company's web page, the priority is high but the severity is low.
• Priority can be categorized in the following ways −
• Low − This defect can be fixed after the critical ones are fixed.
• Medium − The defect should be resolved in the subsequent builds.
• High − The defect must be resolved immediately because the defect affects the application to a
considerable extent and the relevant modules cannot be used until it is fixed.
• Urgent − The defect must be resolved immediately because the defect affects the application or
the product severely and the product cannot be used until it has been fixed.
Severity
• Severity is defined as the impact of the defect on the application and the complexity of the code needed to fix it. It is related to the development aspect of the product.
• Severity is decided based on how crucial the defect is for the system.
• The severity status gives an idea of the deviation in functionality caused by the defect.
• Example − For a flight operating website, a defect in generating the ticket number against a reservation is high severity and also high priority.
• Severity can be categorized in the following ways −
• Critical / Severity 1 − The defect impacts the most crucial functionality of the application, and the QA team cannot continue validating the application under test without a fix. For example, the app/product crashes frequently.
• Major / Severity 2 − The defect impacts a functional module; the QA team cannot test that particular module but can continue validating other modules. For example, flight reservation is not working.
• Medium / Severity 3 − The defect affects a single screen or a single function, but the system is still functioning and no functionality is blocked. For example, the ticket number is not rendered in the proper alphanumeric format, such as the first five characters being letters and the last five being digits.
• Low / Severity 4 − The defect does not impact functionality. It may be a cosmetic defect, a UI inconsistency for a field, or a suggestion to improve the end-user experience on the UI side. For example, the background colour of the Submit button does not match that of the Save button.
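As a minimal illustrative sketch (the names and comments are ours, not from any specific defect tracker), the two classifications can be modelled as independent enumerations, which makes it explicit that every defect carries both a priority and a severity:

```python
from enum import Enum

class Priority(Enum):
    """Order in which defects should be resolved; set by the QA team."""
    LOW = 1      # fix after the critical defects are fixed
    MEDIUM = 2   # resolve in subsequent builds
    HIGH = 3     # resolve immediately; affected modules are unusable
    URGENT = 4   # resolve immediately; the product is unusable

class Severity(Enum):
    """Impact of the defect on the application; a development-side view."""
    CRITICAL = 1  # e.g. the app/product crashes frequently
    MAJOR = 2     # a functional module cannot be tested
    MEDIUM = 3    # a single screen/function is affected; nothing is blocked
    LOW = 4       # cosmetic, e.g. mismatched button colours

# The misplaced-logo example from above: high priority, low severity.
logo_defect = (Priority.HIGH, Severity.LOW)
```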
What is Defect Logging and Tracking?
• Defect logging is the process of finding defects in the application under test or product, either by testing or by recording feedback from customers, and making new versions of the product that fix the defects or address the client feedback.
• Defect tracking is an important process in software engineering, as complex and business-critical systems have hundreds of defects. Managing, evaluating, and prioritizing these defects is challenging: the number of defects multiplies over time, and a defect tracking system is used to make the job of managing them easier.
• Examples − HP Quality Center, IBM Rational Quality Manager
Defect Tracking Parameters
• Defects are tracked based on various parameters such as the following (a minimal record sketch follows the list):
• Defect Id
• Priority
• Severity
• Created by
• Created Date
• Assigned to
• Resolved Date
• Resolved By
• Status
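A hypothetical record carrying these parameters might look like the sketch below; the field names simply mirror the list above and do not follow any particular tool's schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Defect:
    """One tracked defect, described by the parameters listed above."""
    defect_id: str                        # e.g. "DEF-1024" (made-up format)
    priority: str                         # Low / Medium / High / Urgent
    severity: str                         # Critical / Major / Medium / Low
    created_by: str
    created_date: date
    assigned_to: Optional[str] = None     # filled in once the lead assigns it
    resolved_date: Optional[date] = None
    resolved_by: Optional[str] = None
    status: str = "New"                   # initial state when the defect is logged

bug = Defect("DEF-1024", "High", "Major", "tester01", date.today())
```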
Defect Tracking Process
• After a defect has been found it must be reported to the development team so that they can fix
the issue.
• When a tester logs a defect, its initial state is 'New'. If a business analyst is available within the project, the tester assigns the defect to him or her; in the absence of a business analyst, the project lead or coordinator reviews the defect. If the defect is valid, it is assigned to the development team lead and the status becomes 'Assigned'; otherwise the defect is marked as 'Rejected'. The BA or lead can mark the defect as 'Duplicate' if the same bug was reported earlier, or as 'Deferred' if it is a future enhancement.
• Once the development team has fixed the issue and provides the test team with a new build, the defect status is changed to 'Fixed' and it is assigned back to the test lead. The test lead reviews it and assigns the defect back to the tester with the status 'Ready For Retest'. The tester verifies it once again; if the functionality works as specified, the tester marks the defect status as 'Closed' and also passes the relevant test case.
• On retesting, if the issue still persists, the tester changes the status to 'Reopened' and assigns the defect back to the lead (and the same cycle is followed).
What is Defect Life Cycle?
• The defect life cycle, also known as the bug life cycle, is the journey a defect goes through during its lifetime.
• It varies from organization to organization and from project to project, as it is governed by the software testing process and also depends upon the tools used.
Defect Life Cycle States:
• New - A potential defect that is raised and yet to be validated.
• Assigned - Assigned to a development team to address it, but not yet resolved.
• Active - The defect is being addressed by the developer and investigation is in progress. At this stage there are two possible alternative outcomes: Deferred or Rejected.
• Test - The defect is fixed and ready for testing.
• Verified - The defect has been retested and the fix has been verified by QA.
• Closed - The final state of the defect: it can be closed after QA retesting, or closed if the defect is a duplicate or considered NOT a defect.
• Reopened - When the defect is NOT fixed, QA reopens/reactivates the defect.
• Deferred - When a defect cannot be addressed in that particular cycle, it is deferred to a future release.
• Rejected - A defect can be rejected for any of three reasons: duplicate defect, NOT a defect, or non-reproducible.
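One way to read the life cycle above is as a state machine. The transition table below is an illustrative interpretation of this particular description; real workflows vary by organization and tool:

```python
# Allowed state transitions, read off the life cycle described above
# (illustrative only; actual workflows differ between organizations).
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred", "Closed"},
    "Assigned": {"Active"},
    "Active":   {"Test", "Deferred", "Rejected"},   # fixed, postponed, or rejected
    "Test":     {"Verified", "Reopened"},           # retest passes or fails
    "Verified": {"Closed"},
    "Closed":   {"Reopened"},                       # issue resurfaces later
    "Reopened": {"Assigned"},                       # same cycle is followed again
    "Deferred": {"Assigned"},                       # picked up in a future release
    "Rejected": set(),                              # terminal state
}

def move(current: str, target: str) -> str:
    """Validate a state change against the transition table."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

state = "New"
state = move(state, "Assigned")  # BA/lead found the defect valid
state = move(state, "Active")    # developer starts the investigation
```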
Defect Reporting
• The key to a good report is giving the development team as much information as necessary to reproduce the bug. This breaks down into the following points (a sketch follows the list):
• Give a brief description of the bug or defect
• List the steps that are needed to reproduce the issue
• Provide all the details on the test data used to get the issue
• Provide all the required screenshots of the defect to the development team
• Provide the opinion from a tester’s perspective
• Generally, the more information the test team shares, the easier it is for the development team to determine the problem and fix the issue. At the same time, a bug report is a case against a product, so it should be concise enough that someone who has never seen the system can follow the steps and reproduce the issue.
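As an illustration of these points, a report could be assembled as in the sketch below; the function and the example content are hypothetical:

```python
def format_report(summary, steps, test_data, screenshots, tester_notes):
    """Assemble a concise defect report that a stranger could reproduce."""
    lines = [f"Summary: {summary}", "Steps to reproduce:"]
    lines += [f"  {i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += [
        f"Test data: {test_data}",
        f"Screenshots: {', '.join(screenshots)}",
        f"Tester's notes: {tester_notes}",
    ]
    return "\n".join(lines)

print(format_report(
    summary="Ticket number not generated after flight reservation",
    steps=["Log in", "Reserve a flight", "Complete payment", "Confirm booking"],
    test_data="route DAC-LHR, 1 adult, test card ending 1111",
    screenshots=["confirmation_page.png"],
    tester_notes="Reproducible on every attempt; blocks the reservation flow.",
))
```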
Test Management
• Test management is an essential part of the Software Testing Life Cycle. To a great degree, test management depends on proper estimation and judgment. Many external and internal factors impact the project as well as the project timeline, and this is where test estimation and risk analysis meet. Estimation involves risk at every level, and mainly the following factors influence test estimation:
• Requirements
• Available resources
• Test Data
• Available funds
Test Estimation
• By definition, an estimate is something that can change, and it is the basis on which higher management makes decisions. This is why project managers continuously monitor the progress of the project, revise the estimate accordingly, and send updates to the respective stakeholders. A number of methods and metrics have been proposed and applied to estimate the size and effort of software projects, and many of them are also used to estimate the effort of software testing activities. Some of these estimation methods are discussed below:
• Function Point Analysis
• Test Case Point Analysis
Function Point Analysis
• Function point analysis is one of the most commonly used estimation techniques in the industry. It is an ISO-recognized method for measuring the functional size of an information system. The Function Point metric can be used to estimate the cost or effort required to design, code, and test the software or process.
• This analysis measures the software size based on five components:
1. external inputs (EIs),
2. external outputs (EOs),
3. external inquiries (EQs),
4. internal logical files (ILFs)
5. external interface files (EIFs)
• The size output, whose unit is the Function Point, is derived using an empirical relationship based on countable measures of the software's information domain and qualitative assessments of software complexity. The information domain values are defined by the five factors listed above. In addition, organizations apply a weighting factor (simple, average, or complex) to each entry in the information domain. The function points (FP) are then computed using the following relationship:
• FP = count_total × [0.65 + 0.01 × Σ Fi]
• where count_total is the sum of all weighted FP entries and Fi (i = 1 to 14) are the value adjustment factors (VAF), determined from responses to organization-specific questions about the developed process.
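A sketch of this relationship in code, assuming the weighting factor has already been chosen for each entry (the dictionary shapes are our own convention, not a standard API):

```python
def function_points(counts, weights, fi):
    """FP = count_total * [0.65 + 0.01 * sum(Fi)].

    counts  -- occurrences per domain value, e.g. {"EI": 2, "EO": 2, ...}
    weights -- the chosen weighting factor per domain value
    fi      -- the 14 value adjustment factor responses (each typically 0..5)
    """
    count_total = sum(counts[d] * weights[d] for d in counts)
    return count_total * (0.65 + 0.01 * sum(fi))
```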
Function Point Analysis − Example
• Now consider a small function in a software system that takes 2 external inputs and 2 external inquiries and uses 1 internal logical file. The function produces 2 external outputs and is connected with 3 external interface files. The total count is determined as per the table shown below:
                                         Weighting Factor
Information Domain Value          Count  Simple  Average  Complex  Weighted Count
External Inputs (EIs)               2      3        4        5           6
External Outputs (EOs)              2      2        4        6           4
External Inquiries (EQs)            2      3        5        8           6
Internal Logical Files (ILFs)       1      7        8        9           7
External Interface Files (EIFs)     3      5        7       10          15
Total count                                                             38
(All entries are weighted as Simple in this example, so Weighted Count = Count × Simple weight.)
• Here all the entries in the information domain are considered Simple, and we assume Σ Fi = 50 (a moderately complex system).
• Therefore the estimated number of FP = 38 × [0.65 + (0.01 × 50)] = 38 × 1.15 ≈ 44.
• Now suppose the organization's average productivity for software of this type is 4 FP per person-month, and assume an approximate labour rate of $500 per month. The estimated effort is then 44 / 4 = 11 person-months, and the cost of developing this module is 11 × $500 = $5,500. By adding resources, the total elapsed time can be reduced if required.
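Feeding the example's numbers into the function_points sketch above reproduces the estimate (the split of Σ Fi = 50 across the 14 factors is arbitrary here, since only the sum matters):

```python
counts  = {"EI": 2, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 3}
weights = {"EI": 3, "EO": 2, "EQ": 3, "ILF": 7, "EIF": 5}  # all "Simple"
fi = [5] * 10 + [0] * 4                                    # fourteen factors, sum = 50

fp = function_points(counts, weights, fi)  # 38 * 1.15 = 43.7, i.e. ~44
effort = round(fp) / 4                     # 11 person-months at 4 FP/month
cost = effort * 500                        # $5,500 at $500 per month
```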
Test Case Point Analysis
• The Test Case Point (TCP) is a measure for estimating software testing size and effort. Test Case Point Analysis takes test cases as input and generates a Test Case Point count for them.
• Any application can be divided into several modules, and any module can be classified as Simple, Average, or Complex based on the number and complexity of its associated requirements.
• The test cases associated with a particular requirement can also be classified as Simple, Average, or Complex based on the following factors:
• Complexity of the test case
• Number of verification points or check points
• Interface with other test cases
• Precondition and test data
• For simplicity, a test case can be considered complex if any of its checkpoints contains a calculation, or if any verification point interfaces or interacts with another application. Depending on the project, however, the complexity calculation may differ.
• Based on the type of test case, an adjustment factor is allocated to simple, average, and complex test cases.
• This adjustment factor is derived from a thorough study and analysis of historical data from many testing projects, and it is organization-specific.
• The weighting factor should be highest for complex and lowest for simple test cases. From this breakdown, we can easily identify the number of simple, average, and complex requirements associated with the test cases in the project.
• The Test Case Points for each type can be calculated as follows:
• Simple Test Case Points (A1) = Number of simple requirements in the project × Adjustment factor for simple requirements
• Average Test Case Points (A2) = Number of average requirements in the project × Adjustment factor for average requirements
• Complex Test Case Points (A3) = Number of complex requirements in the project × Adjustment factor for complex requirements
• Hence the total Test Case Points for the system under test = A1 + A2 + A3
• From an estimation perspective, if the productivity index is measured as the number of Test Case Points completed per person-month, then the total effort required can be calculated directly from this productivity index and the total Test Case Points.
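A sketch of the calculation, with the adjustment factors passed in as parameters since they are organization-specific:

```python
import math

def test_case_points(counts, adjustment_factors):
    """Total TCP = A1 + A2 + A3 across the complexity classes.

    counts             -- e.g. {"Simple": 50, "Average": 70, "Complex": 30}
    adjustment_factors -- organization-specific factors per class
    """
    return sum(n * adjustment_factors[c] for c, n in counts.items())

def effort_person_months(total_tcp, productivity_index):
    """Effort given a productivity index in TCP per person-month."""
    return math.ceil(total_tcp / productivity_index)
```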
Example −
Test Case Complexity   Number of Test Cases   Adjustment Factor
Simple                          50                   0.5
Average                         70                   1.5
Complex                         30                   2.5

TCP = (50 × 0.5) + (70 × 1.5) + (30 × 2.5) = 25 + 105 + 75 = 205
If 10 TCP are completed per person-month, the time required = 205 / 10 = 20.5 ≈ 21 person-months.
If the cost per person-month is 50,000/-, the total cost = 21 × 50,000 = 1,050,000/-.
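Feeding the table's numbers into the sketch above reproduces these figures:

```python
counts  = {"Simple": 50, "Average": 70, "Complex": 30}
factors = {"Simple": 0.5, "Average": 1.5, "Complex": 2.5}

tcp = test_case_points(counts, factors)  # 25 + 105 + 75 = 205
effort = effort_person_months(tcp, 10)   # ceil(205 / 10) = 21 person-months
cost = effort * 50_000                   # 1,050,000
```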
QA, QC & Testing
• Many people find it difficult to pin down the differences among Quality Assurance, Quality Control, and Testing. Although they are interrelated and can, to some extent, be considered the same activities, there are distinguishing points that set them apart. The following comparison lists the points that differentiate QA, QC, and Testing.
Quality Assurance (QA)
• Includes activities that ensure the implementation of processes, procedures, and standards in the context of verifying the developed software against the intended requirements.
• Focuses on processes and procedures rather than conducting actual testing on the system.
• Process-oriented activities.
• Preventive activities.
• A subset of the Software Test Life Cycle (STLC).

Quality Control (QC)
• Includes activities that ensure the verification of the developed software with respect to documented (or, in some cases, undocumented) requirements.
• Focuses on actual testing by executing the software, with the aim of identifying bugs/defects through the implementation of procedures and processes.
• Product-oriented activities.
• A corrective process.
• Can be considered a subset of Quality Assurance.

Testing
• Includes activities that ensure the identification of bugs/errors/defects in the software.
• Focuses on actual testing.
• Product-oriented activities.
• A preventive process.
• A subset of Quality Control.
Source & Reference
• https://www.tutorialspoint.com/software_testing_dictionary/
• https://www.tutorialspoint.com/software_testing/
• http://www.wideskills.com/software-testing-tutorial/
