Software Testing Practices, Test Methodologies and Test Competency Services


Welcome to the world of Software Testing & Best Test Practices.

All you wanted to know about Software Testing, Test Practices, Test Methodologies and building Test Competency Services!!


Monday, May 19, 2008

Software Testing

Software testing is a process used to identify the correctness, completeness and quality of developed computer software. Actually, testing can never establish the correctness of computer software, as this can only be done by formal verification (and only when there is no mistake in the formal verification process). It can be used to show the presence of defects, but never their absence. Software testing is a trade-off between budget, time and quality.

Testing is a process of technical investigation that is intended to reveal quality-related information about the product. It is oriented to detection.

The goal of the testing activity is to find as many errors as possible before the user of the software finds them. We can use testing to determine whether a program component meets its requirements. To accomplish its primary goal (finding errors) or any of its secondary purposes (meeting requirements), software testing must be applied in a systematic fashion.

An important point is that software testing should be distinguished from the separate discipline of software quality assurance, which encompasses all business process areas, not just testing.

Testing, in other words, is nothing but CRITICISM or COMPARISON. Here, comparison means comparing the actual value with the expected one. There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product—putting the product through its paces.
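
To make the "comparison" concrete, here is a minimal sketch of a test in Python (the apply_discount function is a made-up example, not from any particular product): the test compares the actual value returned by the code with the expected value.

    # A made-up function under test, used only to illustrate the idea.
    def apply_discount(price, percent):
        """Return the price after applying a percentage discount."""
        return round(price * (1 - percent / 100.0), 2)

    # The essence of testing-as-comparison: expected vs. actual.
    def test_ten_percent_discount():
        expected = 90.00
        actual = apply_discount(100.00, 10)
        assert expected == actual, f"expected {expected}, got {actual}"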

The quality of the application can and normally does vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. No matter the program, the beta testers find something that they feel needs to be fixed before the disc goes gold. Alas, there's never time to fix all the bugs, and the program gets shipped, warts and all. In a triage scenario, veteran software testers know that it's inevitable that something has to slip through. They just hope that the most heinous nasties are squashed before the final release candidate.

Once the program ships, it doesn't take long for the user base to find the bugs. The unwitting and most vocal tend to blame the Quality Assurance team and the beta software testing effort. "Why didn't you find and fix this?" they ask, without having the insight to consider the forces that come to bear. When it's gotta ship, it's gotta ship. Ready or not, here it goes... By the time the shrink wrap hits the street, those last-minute bugs may actually be fixed... but the fixes won't be on the CD that shipped in the first boxes. They may be slipstreamed into subsequent releases, with patches made available over the Internet. As a user (and a grizzled software tester), I've learned not to immediately jump on and install a release the day it comes out, when possible. I usually wait for the first round of fixes to ship before I roll the dice...

Requirements Analysis

After the project has been acquired and the contract has been signed, one of the first functions of the analysis team is the process of Requirement Definition. Let's explore this now :)

Requirements analysis encompasses those tasks that go into determining the requirements of a new or altered system, taking account of the possibly conflicting requirements of the various stakeholders, such as users. Requirements analysis is the stage where client requirements are gathered, and it is critical to the success of a project. This is done on the basis of information provided by the client in the form of documents, existing systems and process specs, on-site analysis interviews with end users, market research and competitor analysis. This stage has the following steps:

Requirements Elicitation: It is the process of gathering customer needs. This involves asking the customers, users and others about the objectives of the system, what is to be accomplished, how the system fits into the business needs and finally how the system will be used.


Analyzing requirements: Once the requirements have been gathered, they become the basis for "Requirements Analysis". Analysis categorizes requirements and organizes them into related subsets, explores each requirement in relation to others, examines requirements for consistency, omissions and ambiguity, and prioritizes requirements based on the needs of the customer. Rough development estimates are made and used to assess the impact of each requirement on project cost and delivery time. Using an iterative approach, requirements are eliminated, combined and modified so that each party achieves some measure of satisfaction. The requirements are used to generate business process flows, use-case models and data flow diagrams, which facilitate a clearer understanding of the requirements and their solution, for both the customer and the developer.

Requirements Specification: It is the process of describing what a system will do. It involves scoping the requirements so that they meet the customer's vision. The Requirements Specification serves as a foundation for software, hardware and database design. It describes the functions (functional and non-functional) and performance of the system and the constraints that will govern its development. It specifies the inputs and also describes the outputs. These specifications need to be

(i) Complete, Comprehensive, Consistent, Modifiable, Measurable
(ii) Unambiguous, Testable, Writable, Implementable

Requirements Management: It is the process that helps the project team identify, control and track changes to the requirements at any time as the project proceeds. Requirements verification, validation and traceability examine the specification to ensure that all system requirements have been stated unambiguously and that inconsistencies, omissions and errors have been detected and corrected (a tiny traceability sketch follows below).
Recording requirements: Requirements may be documented in various forms, such as natural-language documents, use cases, user stories, or process specifications.
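
As a hedged illustration of traceability, teams often keep a simple mapping from requirement IDs to the test cases that cover them; the IDs below are invented for the sketch.

    # Invented requirement and test-case IDs, for illustration only.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],  # no covering test yet -- flagged below
    }

    # Traceability makes gaps visible: requirements with no tests.
    untested = [req for req, tests in traceability.items() if not tests]
    print("Requirements with no covering tests:", untested)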

Refined V Model

From a cost and time point of view, the V-model is not practical for small- and medium-scale companies. These organizations usually maintain a refined form of the V-model.

Development starts with information gathering. After the requirements have been gathered, the BRS/CRS/URS (Business/Customer/User Requirements Specification) is prepared. This is done by the Business Analyst.

Requirements analysis: After all the requirements are analyzed, the S/wRS (Software Requirements Specification) is prepared. It consists of the functional (customer) requirements plus the system requirements (hardware and software). It is prepared by the System Analyst.

During the design phase, two types of design are produced: the High-Level Design (HLD) and the Low-Level Design (LLD).

The High-Level Design gives an overall view of how something should work and the top-level components that will comprise the proposed solution. It should have very little implementation detail, i.e. no explicit class definitions, and in some cases not even details such as database type (relational or object), programming language and platform. In the High-Level Design, the Technical Architect of the project studies the proposed application's functional and non-functional (qualitative) requirements and designs the overall solution architecture that can handle those needs.

A Low-Level Design contains the nuts-and-bolts detail, and it must come after the High-Level Design has been signed off by the users, since the high-level design is much easier to change than the low-level design.

During the coding phase, programs are developed by programmers.

Unit Testing: After the completion of design and the design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of white-box testing (WBT) techniques. WBT is also known as glass-box testing, clear-box testing or structural testing.

WBT is based on the code. Senior programmers conduct this testing on the programs; WBT is applied at the module level.

There are two types of WBT techniques (a small code sketch follows the list):

1. Execution Testing

  1. Basis path coverage (correctness of every statement's execution)
  2. Loop coverage (correctness of loop termination)
  3. Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: checks whether the software runs under the customer's expected environment platforms (OS, compilers, browsers and other system software).
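
As a small sketch of these white-box ideas (the total_positive function is invented for illustration): the tests below are derived from the code's structure, exercising each branch and running the loop zero, one and many times to check its termination.

    # Made-up function with one loop and one branch, to illustrate
    # basis path and loop coverage driven by the code's structure.
    def total_positive(values):
        """Sum only the positive numbers in a list."""
        total = 0
        for v in values:      # loop coverage: 0, 1 and many iterations
            if v > 0:         # branch coverage: both outcomes needed
                total += v
        return total

    def test_empty_list():        # loop body never executes
        assert total_positive([]) == 0

    def test_single_positive():   # one iteration, branch taken
        assert total_positive([5]) == 5

    def test_mixed_values():      # many iterations, both branch outcomes
        assert total_positive([3, -2, 7, 0]) == 10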

Integration Testing: After unit testing of the dependent modules is complete, developers concentrate on integration testing. During this test, programmers verify the integration of the modules with respect to the HLDD (which contains the hierarchy of modules).

There are two approaches to conducting integration testing (a minimal sketch follows the list):

(i) Top-down approach
(ii) Bottom-up approach
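
As a minimal sketch (all names invented): in the top-down approach, a not-yet-written lower-level module is replaced by a stub that returns canned data; in the bottom-up approach, a finished lower-level module is exercised by a test driver before its callers exist.

    # Top-down: the high-level module is real; its unfinished
    # dependency is replaced by a stub returning a canned answer.
    def get_tax_rate_stub(region):
        return 0.25  # canned response standing in for the real module

    def compute_invoice_total(amount, region, get_tax_rate):
        return amount * (1 + get_tax_rate(region))

    def test_invoice_total_with_stub():
        assert compute_invoice_total(100.0, "EU", get_tax_rate_stub) == 125.0

    # Bottom-up would invert this: the real get_tax_rate is finished
    # first and exercised by a simple driver before its callers exist.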


During system and functional testing, the actual testers get involved and conduct tests based on the S/wRS.

During UAT, people from the customer's site are also involved, and they perform tests based on the BRS.

Reviews during Analysis:

After the completion of information gathering and analysis, a review meeting is conducted to assess the following five factors regarding the requirements:

(i) Are they complete?
(ii) Are they correct?
(iii) Are they achievable?
(iv) Are they reasonable? ( with respect to cost & time)
(v) Are they testable?

Reviews during Design:

After the completion of the analysis of customer requirements and their reviews, technical support people (tech leads) concentrate on the logical design of the system. At this stage, they develop the HLDD and LLDD.

After completing these design documents, the tech leads review them for correctness and completeness. In this review, they can apply the factors below.

(i) Is the design good? (understandable and easy to refer to)
(ii) Is the design complete? (are all the customer requirements satisfied?)
(iii) Is the design correct? (is the design flow correct?)
(iv) Is the design followable? (is the design logic correct?)
(v) Does the design address error handling? (the design should specify the negative flows as well as the positive ones)

V-Model



Port Testing: This tests the installation process.

DRE (Defect Removal Efficiency):

DRE = A / (A + B)
A = total number of defects found by testers during testing
B = total number of defects found by the customer during maintenance
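
A quick worked example with invented numbers: if testers find 90 defects and the customer later reports 10 more, DRE = 90 / (90 + 10) = 0.9, i.e. testing removed 90% of the known defects.

    def dre(found_in_test, found_by_customer):
        """Defect Removal Efficiency: share of known defects caught in test."""
        return found_in_test / (found_in_test + found_by_customer)

    print(dre(90, 10))  # 0.9 -> testing caught 90% of the known defects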

Sunday, May 18, 2008

The Waterfall Model

The Waterfall model (a non-iterative model) is a sequential software development model in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The waterfall model thus maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are therefore discrete, and there is no jumping back and forth or overlap between them.

It is also called the Classic Life Cycle Model or the Linear Sequential Model.


The advantage of the waterfall model is that it allows for departmentalization and managerial control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process on that schedule. Other advantages include:

  1. Testing is inherent to every phase of the waterfall model
  2. It enforces a disciplined approach
  3. It is documentation-driven, that is, documentation is produced at every stage

The disadvantage of the waterfall model is that it does not allow for much reflection or revision (it is inflexible).

In practice, projects rarely follow its sequential flow, due to the inherent problems associated with its rigid format. Namely:

  1. It only incorporates iteration indirectly, so changes may cause considerable confusion as the project progresses.
  2. As the client usually has only a vague idea of exactly what is required from the software product, this model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
  3. The customer only sees a working version of the product after it has been coded, and this may result in disaster if any undetected problems have been carried through to this stage.

Friday, May 16, 2008

SOFTWARE DEVELOPMENT LIFE CYCLE

SDLC is the process of developing information systems through investigation, analysis, design, implementation and maintenance. It is also known as information systems development or application development.

Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (which was the original SDLC method); rapid application development (RAD); joint application development (JAD); the fountain model; the spiral model; build and fix; and synchronize-and-stabilize. Frequently, several models are combined into some sort of hybrid methodology. The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next.

System Development Life Cycle Model (SDLC Model): This is also known as the Classic Life Cycle Model, the Linear Sequential Model or the Waterfall Method. It has the following activities:

1. System/Information Engineering and Modeling

2. Software Requirements Analysis

3. Systems Analysis and Design

4. Code Generation

5. Testing

6. Maintenance


1) System/Information Engineering and Modeling: As software is always part of a larger system (or business), work begins by establishing requirements for all the system elements and then allocating some subset of these requirements to the software. This system view is essential when software must interface with other elements such as hardware, people and other resources. The system is the basic and very critical requirement for the existence of software in any entity. So if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system. (In short, this phase identifies and defines the need for the new system.)

2) Software Requirement Analysis: This is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document with specific recommendations for the candidate system, including personnel assignments, costs, project schedule and target dates. The requirements-gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer ("analyst") must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved. In other words, this phase looks into: what interfaces are required (will it run with Windows NT and Windows XP?), what functionality is required (should it be driven by mouse or keyboard commands?), what level of proficiency is required of the user, and whether a new room will be needed for the servers or equipment. (In short, this phase analyzes the information needs of the end users.)

3) System Analysis and Design: In this phase, the overall structure of the software and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design, etc. are all defined in this phase. A software development model is created. Analysis and design are very crucial in the whole development cycle; any glitch in the design phase can be very expensive to fix in a later stage of development, so much care is taken during this phase. The logical system of the product is developed in this phase. (In short, this phase creates a blueprint for the design with the necessary specifications for the hardware, software, people and data resources.)

4) Code Generation: The design must be translated into a machine-readable form. The code generation step performs this task. If the design is performed in a detailed manner, code generation can be accomplished without much complication. Programming tools like Compilers, Interpreters, and Debuggers are used to generate the code. Different high level programming languages like C, C++, Pascal, Java are used for coding. With respect to the type of application, the right programming language is chosen.

5) Testing: Once the code is generated, software testing begins. Different testing tools and methodologies are available to uncover the bugs introduced during the previous phases.

6) Maintenance: Software will definitely undergo change once it is delivered to the customer. Change can happen because of unexpected input values into the system, and the software should be developed to accommodate changes that may occur during the post-implementation period. The maintenance phase is usually the longest stage of the software's life. In this phase, the software is updated to:

  1. Meet changing customer needs.
  2. Adapt to changes in the external environment.
  3. Correct errors and oversights previously undetected in the testing phases.
  4. Enhance the efficiency of the software.

Kiran’s Conclusion:

All these different software development models have their own advantages and disadvantages. Nevertheless, in the commercial software development world, a fusion of these methodologies is usually what gets used. Timing is very crucial in software development: if a delay happens in the development phase, the market could be taken over by a competitor; but if a bug-filled product is launched quickly (sooner than the competitors'), it may damage the company's reputation. So there has to be a trade-off between development time and the quality of the product. Customers don't expect a bug-free product, but they do expect a user-friendly one.

What is Software Testing ???


Software testing is a process of verifying and validating that a software application or program

1. Meets the business and technical requirements that guided its design and development
2. Works as expected.

Software testing also identifies important bugs, flaws or errors in the application code that must be fixed. Generally speaking, an important bug is one that, from a customer's perspective, affects the usability or functionality of the application. The testing team cannot improve quality; they can only measure it, although it can be argued that activities such as designing tests before coding begins do improve quality, because the coders can then use that information while thinking about their designs and during coding and debugging.
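
As a hedged sketch of what "designing tests before coding" can look like (the normalize_username function is invented): the test is written first, pinning down the agreed behavior, and the implementation is then coded to make it pass.

    # Written first: the test records the agreed behavior so coders
    # can consult it while designing and debugging.
    def test_username_is_normalized():
        assert normalize_username("  Alice ") == "alice"

    # Written afterwards: a hypothetical implementation that satisfies
    # the behavior pinned down by the test.
    def normalize_username(raw):
        return raw.strip().lower()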

Software testing has three main purposes: verification, validation and bug finding.

The verification process confirms that the software meets its technical specifications. A "specification" is a description of a function in terms of a measurable output value given a specific input value under specific preconditions. A simple specification may be along the lines of "a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields ordered by month within 3 seconds of submission."
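
Because the specification is measurable, it can be checked mechanically. A sketch of such a verification test follows; the run_query callable, the field count and the assumption that the month is the first field are all illustrative, not from any real system.

    import time

    EXPECTED_FIELDS = 8    # the spec names eight fields per row
    MAX_SECONDS = 3.0      # ...returned within 3 seconds of submission

    def verify_account_summary(run_query, account_id):
        """Check a query result against the measurable spec above."""
        start = time.monotonic()
        rows = run_query(account_id)   # assumed callable issuing the SQL
        elapsed = time.monotonic() - start

        assert elapsed <= MAX_SECONDS, f"query took {elapsed:.2f}s"
        for row in rows:
            assert len(row) == EXPECTED_FIELDS
        months = [row[0] for row in rows]   # assume month is field 0
        assert months == sorted(months), "rows not ordered by month"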

The validation process confirms that the software meets the business requirements. A simple example of a business requirement is "After choosing a branch office name, information about the branch's customer account managers will appear in a new window. The window will present manager identification and summary information about each manager's customer base: …" Other requirements provide details on how the data will be summarized, formatted and displayed.

A bug is a variance between the expected and actual result. The defect’s ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.


NOTE: "A clever person solves a problem. A wise person avoids it."


Software testing answers questions that development testing and code reviews can’t.

♦ Does it really work as expected?
♦ Does it meet the users’ requirements?
♦ Is it what the users expect?
♦ Do the users like it?
♦ Is it compatible with our other systems?
♦ How does it perform?
♦ How does it scale when more users are added?
♦ Which areas need more work?
♦ Is it ready for release?

What can we do with the answers to these questions?
♦ Save time and money by identifying defects early
♦ Avoid or reduce development downtime
♦ Provide better customer service by building a better application
♦ Know that we’ve satisfied our users’ requirements
♦ Build a list of desired modifications and enhancements for later versions
♦ Identify and catalog reusable modules and components
♦ Identify areas where programmers and developers need training.


WHO DOES THE TESTING?

Software testing is not a one-person job. It takes a team, and the team may be larger or smaller depending on the size and complexity of the application being tested. The programmer(s) who wrote the application should have a reduced role in the testing if possible. The concern here is that they're already so intimately involved with the product and "know" that it works that they may not be able to take an unbiased look at the results of their labors. Testers must be cautious, curious, critical but non-judgmental, and good communicators. One part of their job is to ask questions that the developers might not be able to ask themselves, or that would be awkward, irritating, insulting or even threatening to the developers.

♦ How well does it work?
♦ What does it mean to you that “it works”?
♦ How do you know it works? What evidence do you have?
♦ In what ways could it seem to work but still have something wrong?
♦ In what ways could it seem to not work but really be working?
♦ What might cause it not to work well?

A good developer does not necessarily make a good tester and vice versa, but testers and developers do share at least one major trait—they itch to get their hands on the keyboard. As laudable as this may be, being in a hurry to start can cause important design work to be glossed over and so special, subtle situations might be missed that would otherwise be identified in planning.

Like code reviews, test design reviews are a good sanity check and well worth the time and effort. Testers are the only IT people who will use the system as heavily as an expert user on the business side. User testing almost invariably recruits too many novice business users, because they're available and because the application must be usable by them. The problem is that novices don't have the business experience that the expert users have and might not recognize that something is wrong. Testers from IT must find the defects that only the expert users would find, because the experts may not report problems if they've learned that it's not worth their time or trouble.

Let me introduce you guyz to what is meant by testing.

Scene 1: You are picnicking by a river. You notice someone in distress in the water. You jump in and pull the person out. The mayor is nearby and pins a medal on you. You return to your picnic. A few minutes later, you spy a second person in the water. You perform a second rescue and receive a second medal. A few minutes later, a third person, a third rescue, and a third medal. This continues throughout the day. By sunset, you are weighed down with medals and honors. You are a hero. Of course, somewhere in the back of your mind there is a sneaking suspicion that you should have walked upriver to find out why people were falling in all day. But, then again, that wouldn't have earned you as many awards.

Scene 2: You are sitting at your computer. You find a bug. Your manager is nearby and rewards you. A few minutes later you find a second bug. And so on. By the end of the day, you are weighed down with accolades and stock options.

If the thought pops up in your mind that maybe you should help prevent those bugs from getting into the system, you squash it—bug prevention doesn't have as much personal payoff as bug hunting.

What You Measure Is What You Get

B.F. Skinner told us fifty years ago that rats and people tend to perform those actions for which they are rewarded. It is still true today. In our world, as soon as testers find out that a metric is being used to evaluate them, they strive mightily to improve their performance relative to that metric—even if the resulting actions don't actually help the project. If your testers find out that you value finding bugs, you will end up with a team of bug-finders. If prevention is not valued, prevention will not be practiced. For instance, I once knew a team where testers were rewarded solely for the number of bugs they found and not for delivering good products to the customer. As a result, if testers saw a possible ambiguity in the spec, they wouldn't point it out to the development team. They would quietly sit on that information until the code was delivered to test, and then they would pounce on the code and file bugs galore. The testers were rewarded for finding lots of bugs, but the project suffered deeply from all the late churn and bug-fixing. That example sounds crazy, but it happened because the bug count metric supported it. On the flip side, I know of a similar project where testers worked collaboratively to deliver a high-quality product. They reviewed the spec and pointed out ambiguities, they helped prevent defects by performing code reviews, and they worked closely with development.

As a result, very few bugs were found in the code that was officially delivered to test, and high-quality software was delivered to the customer. Unfortunately, management was fixated on the bug-count metrics from the testing phase. Because the testers found few bugs during the official test phase, management decided that the developers must have done a great job, and they gave the developers a big bonus. The testing team didn't get a bonus. How many of those testers do you think supported prevention on the next project? It's not about finding bugs. It's about delivering great software. No customer ever said with a straight face, "Wow! You found and fixed 65,000 bugs—that must be really great software!"

So, why do so many teams currently use bug counts as a measurement tool? The answer is simple: bugs are just so darn countable that they are practically irresistible. They can be counted, tracked, and used for forecasting. And it is tempting to do numerical gymnastics with them, such as dividing them by KLOC (thousand lines of code), plotting their rate over time, or predicting their future rates. But all this ignores the complexities that underlie the bug count. Bugs are a useful barometer of your process, but they can't tell the whole story. They merely help you ask useful questions.

So What Should We Measure?

Here are some thoughts:
· How many staff hours are devoted to a project? This is a real cost that we care about. How effectively did your whole team (project managers, developers, and testers) go from concept to delivery? Instead of treating these groups as independent teams with clear-cut deliverables to each other, evaluate them as a unit that is moving from concept to code. Encourage the different specialties to work together. Have program management make the spec more crisp. Have development provide testability hooks. Have the test team supply early feedback and testing.

· How many bugs did your customer find? What are customers saying about your product? Have you looked through the support calls on your product? What is customer feedback saying to you about your software's behavior in the field?

· How many bugs did you prevent? Are you using code analysis tools to clean up code before it ever gets past compilation? Are you tracking the results?

· How effectively did your tests cover the requirements and the code? Coverage metrics can be a useful, though not comprehensive, indicator of how your testing is proceeding.

· Finally, a squishy but revealing metric: How many of your own people feel confident about the quality of the product? In some aircraft companies, after the engineers sign off on the project, they all get on the plane for a quick test flight. Assuming that none of your fellow engineers have a death wish, that's a metric you have to respect! It not only says that you found lots of bugs along the way, but that you are satisfied with the resulting deliverable. I recently saw an email signature that sums it up: We are what we measure. It's time we measure what we want to be.