In the rapidly evolving landscape of software development, mastering manual testing techniques in 2024 is crucial. This comprehensive guide covers the principles, methodologies, and practical aspects of manual testing, providing a solid foundation for quality assurance processes.
1.1. What is Testing?
Manual testing is like quality control for computer programs. Its main job is to find and fix problems in software before it’s used by people. When software doesn’t work right, it can cause big issues like losing money, wasting time, damaging a company’s reputation, or even causing accidents.
Testing isn’t just about running the software and checking if it passes certain tests. It involves different activities that happen throughout the making of the software. These activities aim to find mistakes (or “defects”) and make sure the software works well for the people who will use it.
There are two main types of testing: dynamic and static. Dynamic testing means actually using the software to see if it works as it should. Static testing, on the other hand, doesn’t involve running the software. It includes things like checking the code and having other people review it to find issues.
Testing isn’t just about making sure the software meets a list of requirements. It’s also about making sure it meets the needs of the people who will use it in the real world. This means considering what users and other important people actually need from the software.
Finally, testing isn’t only a technical thing. It requires planning, managing, estimating how much time it will take, and keeping an eye on it as the software is being made. All these steps help make sure the software works well when it’s finally used by people.
1.2. Why is Testing Necessary?
Testing ensures that the software meets specified goals within set time, quality, and budget constraints. It’s not just a task for the test team; any stakeholder can contribute using testing skills. Examining components, systems, and documentation helps identify software defects.
1.3. Testing Principles
- Find Defects, Not Certainties: Testing reveals defects but cannot guarantee absence of all issues.
- Testing Limits: It’s impossible to test everything; focus on techniques, prioritization, and risks.
- Early Testing, Big Savings: Identifying defects early reduces costs and prevents future failures.
- Defect Clustering: A few components usually contain most defects; helps prioritize testing efforts.
- Tests Wear Out: Repeating the same tests reduces their effectiveness over time; update or create new tests as needed.
- Context Matters: No one-size-fits-all approach to testing; varies based on different contexts.
- Don’t Assume Perfection: Testing alone doesn’t ensure system success; validate against user needs and business goals.
1.4. Test Activities, Testware and Test Roles
Testing has different approaches based on the situation. However, there are typical test tasks crucial for reaching testing goals. These form a test process. How these tasks work, when they happen, and their order are planned for each unique testing situation.
1.4.1. Test Activities and Tasks
- Test Planning: Define objectives and choose an approach considering project constraints.
- Test Monitoring and Control: Continuously check progress against plans and take actions to meet testing goals.
- Test Analysis: Evaluate test basis, identify testable features and risks, and plan what needs testing.
- Test Design: Develop test cases, decide how to test, create test data, and set up the test environment.
- Test Implementation: Prepare testware, organize test cases, build the test environment, and schedule execution.
- Test Execution: Run tests as per schedule, compare actual vs. expected results, log outcomes, and analyze anomalies.
- Test Completion: Resolve remaining defects, archive useful test materials, shut down the test environment, and report findings to stakeholders.
1.4.2. Test Process in Context
- Testing Context: Testing isn’t separate; it’s part of how organizations develop products. It’s funded by stakeholders to meet business needs.
- Factors Affecting Testing:
- Stakeholders: Their needs, cooperation, and expectations influence testing.
- Team: Skills, knowledge, availability, and experience shape testing approaches.
- Business Domain: Risks, market needs, and legal requirements impact testing importance.
- Technical Aspects: Software type, architecture, and technology affect testing methods.
- Project Constraints: Scope, time, budget, and resources dictate testing limitations.
- Organizational Setup: Policies and structure within an organization influence testing methods.
- Development Lifecycle: Engineering practices and methods affect how testing is conducted.
- Tools: Available tools and their usability play a role in testing decisions.
- Impact on Testing: These factors affect test strategy, techniques, automation, coverage, documentation, and reporting.
1.4.3. Testware
Testware refers to outputs from testing activities. Different organizations handle, name, and organize these differently. Configuration management ensures consistency and integrity.
Work Products Examples:
- Test Planning: Includes the test plan, schedule, risk register, entry/exit criteria.
- Monitoring and Control: Test progress reports, control directives, and risk info.
- Test Analysis: Prioritized test conditions, and defect reports for defects found in the test basis.
- Test Design: Prioritized test cases, charters, coverage items, data, environment requirements.
- Test Implementation: Procedures, automated scripts, suites, data, execution schedule, environment elements (like stubs, drivers).
- Test Execution: Test logs and defect reports.
- Test Completion: Completion report, action items for improvements, lessons learned, change requests.
These work products vary but ensure the thoroughness and manageability of testing activities.
1.4.4. Traceability between the Test Basis and Testware
Traceability in testing refers to the ability to link and track different elements throughout the testing process, like test requirements, test cases, results, and defects. This helps ensure that all aspects are connected and accounted for.
It’s essential because it allows us to:
- Verify Coverage: Linking test cases to requirements ensures that all requirements are tested.
- Evaluate Risks: Connecting test results to risks helps gauge the remaining risk in a product after testing.
- Manage Changes: It helps understand how changes impact different parts of testing.
- Facilitate Audits: Traceability aids in making audits easier by providing clear connections between elements.
- Support Governance: It helps meet IT governance criteria by providing a clear understanding of testing progress.
- Communicate Effectively: By showing the status of different testing elements, it makes reports understandable for stakeholders.
- Assess Quality and Progress: Traceability provides information to evaluate product quality, process effectiveness, and project progress towards business goals.
1.4.5. Roles in Testing
In testing, there are two main roles: test management and testing.
Test Management Role: This role oversees the entire testing process, team, and directs test-related activities like planning, monitoring, and completion. How it’s done can vary based on the project’s needs. In Agile, some tasks might be handled within the development team, while broader tasks involve test managers from outside the team.
Testing Role: This role focuses on the technical side of testing, covering activities like analyzing, designing, implementing, and executing tests. It’s about the hands-on engineering part of testing.
Different people can take on these roles at different times. For instance, team leaders, test managers, or development managers might handle test management. Sometimes, one person might handle both testing and test management at the same time.
2.1 Software Development Lifecycle
The Software Development Lifecycle (SDLC) outlines the stages and processes involved in developing software.
Each stage in the SDLC is crucial and contributes to the overall quality and success of the software. The lifecycle can follow various models (e.g., Waterfall, Agile, Iterative) with variations in the sequence and emphasis on different stages based on project requirements and methodologies. The main stages are:
- Planning:
- Requirement Analysis: Gathering and understanding client needs and expectations.
- Feasibility Study: Evaluating the project’s technical, operational, and economic feasibility.
- Project Planning: Outlining project scope, timelines, resources, and risks.
- Design:
- System Design: Defining system architecture, components, and their interactions.
- Detailed Design: Creating detailed specifications for modules, databases, and interfaces.
- Implementation:
- Coding: Writing code based on design specifications.
- Unit Testing: Testing individual components to ensure they function as intended.
- Testing:
- Integration Testing: Checking if different modules work together.
- System Testing: Evaluating the entire system’s functionality against requirements.
- Acceptance Testing: Validating software against user expectations.
- Deployment:
- Installation: Deploying the software in the production environment.
- Training: Providing training and documentation for users.
- Transition: Handing over the software from the development team to the operational team.
- Maintenance:
- Monitoring: Keeping an eye on the system’s performance and user feedback.
- Bug Fixes: Addressing issues and releasing patches or updates.
- Enhancements: Making improvements or adding new features based on user needs.
2.2. Test Levels and Test Types
2.2.1. Test Levels
Test levels are different stages in the testing process, each done at a specific point while making software. They start with testing individual parts, then move on to combining these parts, testing the whole system, making sure it works with other systems, and finally checking if it meets the user’s needs before it’s used.
Here are the five test levels explained in simpler terms:
- Component Testing (Unit Testing): Testing separate parts to make sure they work on their own. Usually done by developers in their own environments.
- Component Integration Testing: Checking how different parts work together. It follows an integration strategy such as bottom-up, top-down, or a combination of both.
- System Testing: Testing the entire system to see if it works well, both in terms of specific tasks and overall quality. This includes checking how easy it is to use.
- System Integration Testing: Making sure the system fits and works properly with other systems or services it interacts with. It needs a similar setup to what it will use in the real world.
- Acceptance Testing: Making sure the system meets the needs of the users before it’s fully used. Users might test it themselves or there could be different kinds of tests based on agreements or regulations.
Each of these testing stages checks different aspects of the software, moving from the smallest parts to the whole system, ensuring it works well in different situations.
2.2.2. Test Types
- Functional Testing: This checks if the software does what it’s supposed to do. It focuses on making sure all the functions work correctly, covering completeness, correctness, and suitability.
- Non-functional Testing: This evaluates how well the software behaves beyond just its functions. It looks at various quality aspects like performance, compatibility, usability, reliability, security, maintainability, and portability.
Types of manual testing:
- White Box Testing: This is when a developer examines every line of code to ensure it’s working correctly before passing it to a Test Engineer. Because the code is visible to the developer during testing, it’s called white box testing.
- Black Box Testing: Test Engineers check the functionality of software based on customer/client needs without seeing the code. They test how the software works without looking at its internal structure, hence the name black box testing.
- Gray Box Testing: This is a mix of both white box and black box testing. It’s done by someone who knows coding and testing. If one person does both white box and black box testing for an application, it’s called gray box testing. It involves having partial visibility into the code while focusing on how the software behaves externally.
- Accessibility Testing: Checking that mobile and web apps are usable for all users, including those with disabilities like vision or hearing impairment.
- Acceptance Testing: Making sure the software meets the goals set in the business requirements and is ready for delivery to customers.
- End to End Testing: Testing an application from start to finish to ensure everything works as it should in its complete workflow.
- Interactive Testing: Hands-on manual testing for those who don’t use automated methods, collecting results from external tests.
- Integration Testing: Testing that an entire system, both hardware and software, meets specific requirements when integrated together.
- Load Testing: Checking how software performs when accessed by many users simultaneously.
- Performance Testing: Testing speed, stability, scalability, and resource usage of software under specific workloads.
- Regression Testing: Checking that recent code changes have not broken the application or degraded its resource usage.
- Sanity Testing: Verifying that bugs are fixed and no new issues are introduced after bug fixes.
- Security Testing: Identifying vulnerabilities in a system to protect against threats or risks to data and reputation.
- Single User Performance Testing: Testing application performance without any system load, setting a benchmark for performance under load.
- Smoke Testing: Validating the basic functionality of a software build to ensure its stability.
- Stress Testing: Pushing software beyond normal capacity to observe how it behaves under extreme conditions.
- Unit Testing: Checking small pieces of code in isolation to ensure they work properly, speeding up testing processes (a minimal sketch follows this list).
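As a rough illustration of the unit-testing item above, here is a minimal pytest sketch; the `add_to_cart` function and its behaviour are hypothetical examples, not taken from any specific product.

```python
# Minimal unit-test sketch (hypothetical add_to_cart function).
import pytest

def add_to_cart(cart: dict, item: str, quantity: int) -> dict:
    """Add an item to a shopping cart; quantities must be positive."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

def test_add_new_item():
    # A new item appears in the cart with the requested quantity.
    assert add_to_cart({}, "book", 2) == {"book": 2}

def test_rejects_non_positive_quantity():
    # Invalid quantities are rejected with an error.
    with pytest.raises(ValueError):
        add_to_cart({}, "book", 0)
```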
3.1 Black-Box Test Techniques
Commonly used black-box test techniques discussed in the following sections are:
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table Testing
- State Transition Testing
Equivalence Partitioning:
Equivalence Partitioning (EP) is about organizing data into groups where each group is expected to be treated the same way by the software being tested. The idea is that if one test in a group finds a problem, other tests in that group should also uncover the same issue. So, one test from each group is usually enough to cover the whole group.
Imagine you have a program that accepts ages as input. With equivalence partitioning, you’d split ages into different groups: children (0-12), teenagers (13-19), adults (20-64), and seniors (65+). Testing just one age from each group can help catch problems that might affect all ages in that group.
These groups are called partitions. They should be clear-cut, without overlaps, and cover all possible scenarios for the data being tested. For example, if you’re testing a password input field, valid passwords and invalid ones (like an empty field) would be in separate partitions.
Coverage in EP is about making sure you test at least one thing from each group. So, if you’re testing a program that accepts different ages, you’d want to test at least one person from each age group to ensure the software behaves correctly across all age ranges.
When there are multiple things being tested (like age and gender), you’d aim to test each category within those things at least once. For instance, if you’re testing a system that uses both age and gender, you’d want to test at least one person from each age group and from each gender to cover all possibilities.
The main idea is to simplify testing by choosing representative examples from each group, making sure the software works well across all scenarios without having to test every single possibility.
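As a sketch of how equivalence partitioning might drive test selection for the age example above, consider the code below. The partition boundaries and the `classify_age` function are illustrative assumptions, not a prescribed implementation.

```python
# Equivalence partitioning sketch: one representative value per age partition.
import pytest

def classify_age(age: int) -> str:
    """Classify an age into the partitions used in the example above."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age <= 12:
        return "child"
    if age <= 19:
        return "teenager"
    if age <= 64:
        return "adult"
    return "senior"

# One representative test value chosen from each partition.
@pytest.mark.parametrize("age, expected", [
    (6, "child"),      # representative of 0-12
    (15, "teenager"),  # representative of 13-19
    (40, "adult"),     # representative of 20-64
    (70, "senior"),    # representative of 65+
])
def test_one_value_per_partition(age, expected):
    assert classify_age(age) == expected
```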
Boundary Value Analysis
Boundary Value Analysis (BVA) is a technique that focuses on testing the edges or limits of different groups of data. It’s especially useful when the data can be ordered, like numbers or dates.
Imagine you’re testing a system that accepts ages between 1 and 100. BVA would focus on testing values like 1, 100, and values right next to these limits, like 2 and 99. The idea is that mistakes in software code often happen at these boundary points.
There are two versions of BVA: 2-value and 3-value.
- 2-value BVA: For each boundary value, there are two things to test: the boundary value itself and its closest neighbor in the adjacent partition. So, if you’re testing ages from 1 to 100, you’d test 0, 1, 100, and 101.
- 3-value BVA: This version checks each boundary value and both of its neighbors. So, for ages 1 to 100, you’d test 0, 1, 2, 99, 100, and 101.
The goal is to catch mistakes made specifically at these critical points. For instance, if the system is supposed to allow ages 1 to 100, but someone mistakenly set it to accept ages 1 to 99, BVA could catch that error by testing the boundaries (like 100).
3-value BVA is more thorough than 2-value BVA and can sometimes find mistakes that 2-value BVA might miss. For example, if the software is supposed to allow numbers less than or equal to 10 but accidentally accepts only the number 10, 3-value BVA might find this error by testing numbers both below and above 10.
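The value sets below sketch how 2-value and 3-value BVA might be derived for the 1-100 age range discussed above; the `accepts_age` function is a hypothetical stand-in for the system under test.

```python
# Boundary value analysis sketch for a valid age range of 1-100.
def accepts_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 1 to 100 inclusive."""
    return 1 <= age <= 100

# 2-value BVA: each boundary plus its closest neighbour in the adjacent partition.
TWO_VALUE_BVA = [0, 1, 100, 101]

# 3-value BVA: each boundary plus both of its neighbours.
THREE_VALUE_BVA = [0, 1, 2, 99, 100, 101]

def test_boundaries_behave_as_specified():
    expected = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
    for age in THREE_VALUE_BVA:
        assert accepts_age(age) == expected[age]
```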
Decision Table Testing
Decision tables are like organized charts used to test how different combinations of conditions lead to different outcomes in a system. They’re handy for capturing complex rules or logic, like business rules.
In these tables, you list conditions and the actions the system should take based on those conditions. Each column represents a unique combination of conditions and the corresponding actions. There are two types of decision tables:
- Limited-entry decision tables: These show conditions and actions as either true or false (like checkboxes). They help simplify things but might not cover every possibility.
- Extended-entry decision tables: These can handle conditions and actions with multiple values, like ranges of numbers or specific values. They’re more flexible but can be more complex.
In these tables:
- Conditions marked as “T” mean they’re true or satisfied.
- “F” means the condition is false or not met.
- “-” means the condition doesn’t matter for the outcome.
- “N/A” means the condition can’t happen in a certain situation.
For actions:
- “X” means the action should happen.
- Blank means the action shouldn’t happen.
A full decision table covers every possible combination of conditions. But sometimes, columns with impossible combinations can be removed or merged to simplify the table.
In testing, the goal is to cover all the columns that contain possible combinations of conditions. This ensures thorough testing of all possibilities. Coverage is measured by how many of these columns are tested.
Decision table testing helps find all possible condition combinations, ensuring no scenario is overlooked. It’s useful for spotting gaps or conflicts in requirements. But when there are lots of conditions, testing every rule can be time-consuming. In such cases, a simplified table or a risk-based approach can be used to reduce the number of rules needing testing.
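To make this concrete, here is a small, hypothetical limited-entry decision table for a login feature, expressed directly in code; the conditions, actions, and rules are illustrative assumptions only.

```python
# Decision table sketch (limited-entry): each rule column is one test case.
# Conditions: valid_credentials, account_locked
# Actions:    grant_access, show_lock_message

def login_outcome(valid_credentials: bool, account_locked: bool) -> dict:
    """Hypothetical system behaviour matching the decision table below."""
    return {
        "grant_access": valid_credentials and not account_locked,
        "show_lock_message": account_locked,
    }

# Rules (columns) of the decision table: condition values -> expected actions.
DECISION_TABLE = [
    # valid_credentials, account_locked, grant_access, show_lock_message
    (True,  False, True,  False),  # Rule 1: normal successful login
    (True,  True,  False, True),   # Rule 2: correct password but locked account
    (False, False, False, False),  # Rule 3: wrong credentials
    (False, True,  False, True),   # Rule 4: wrong credentials and locked account
]

def test_every_rule_column():
    # Full coverage here means exercising every feasible rule (column).
    for valid, locked, grant, lock_msg in DECISION_TABLE:
        outcome = login_outcome(valid, locked)
        assert outcome == {"grant_access": grant, "show_lock_message": lock_msg}
```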
State Transition Testing
A state transition diagram is a visual representation showing how a system moves between different states based on events and conditions. When an event occurs, it triggers a transition from one state to another. These transitions are quick and might cause the system to take certain actions.
Each transition is labeled as “event [guard condition] / action”; the guard condition and the action might not always be present or relevant.
A state table is a way to represent the same information as the state transition diagram. Rows represent states, columns represent events (along with conditions), and the table entries show valid transitions between states. Empty cells indicate invalid transitions.
Test cases based on state transition diagrams or state tables are sequences of events that cause state changes in the system. A single test case often covers multiple state changes.
There are three main coverage criteria for state transition testing:
- All States Coverage: Making sure that all possible states in the system are visited during testing.
- Valid Transitions Coverage (0-switch coverage): Ensuring that all valid transitions between states are tested.
- All Transitions Coverage: Testing all transitions, both valid and invalid, listed in the state table.
All States Coverage focuses on visiting every state, Valid Transitions Coverage ensures testing all valid state changes, and All Transitions Coverage includes testing invalid state changes too. Achieving All Transitions Coverage guarantees coverage of both All States and Valid Transitions. For critical software, achieving All Transitions Coverage is essential to ensure thorough testing.
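The sketch below models a small, hypothetical state machine (a document workflow) as a state table and walks a test sequence through its transitions; the states and events are assumptions chosen for illustration.

```python
# State transition sketch: a hypothetical document workflow.
# State table: (current_state, event) -> next_state; missing keys are invalid transitions.
STATE_TABLE = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"): "draft",
    ("published", "archive"): "archived",
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise for an invalid transition."""
    key = (state, event)
    if key not in STATE_TABLE:
        raise ValueError(f"invalid transition: {key}")
    return STATE_TABLE[key]

def test_valid_transitions_coverage():
    # One test sequence exercising every valid transition (0-switch coverage).
    state = "draft"
    for event, expected in [("submit", "in_review"), ("reject", "draft"),
                            ("submit", "in_review"), ("approve", "published"),
                            ("archive", "archived")]:
        state = next_state(state, event)
        assert state == expected

def test_invalid_transition_is_rejected():
    # Part of "all transitions" coverage: invalid transitions should be refused.
    try:
        next_state("draft", "approve")
        assert False, "expected an invalid transition to be rejected"
    except ValueError:
        pass
```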
3.2 White-Box Test Techniques
This section focuses on two code-related white-box test techniques:
- Statement testing
- Branch testing
Statement testing: Statement testing focuses on making sure that all the lines of code that can be executed are actually tested at least once. The goal is to create test cases that run through every line of code to achieve a certain level of coverage.
This coverage is measured by how many lines of code were tested versus the total number of lines that can be executed, shown as a percentage.
When you achieve 100% statement coverage, it means that every line of code that can be run has been tested at least once. If there’s a defect in any line of code, a test case will trigger that line, potentially causing a failure that exposes the defect. However, testing every line doesn’t catch all types of defects, especially those that are data-specific, like a division by zero that only fails when a specific condition is met.
Additionally, having 100% statement coverage doesn’t guarantee that all the decision-making parts of the code have been fully tested. For instance, it might not cover all the different paths or branches within the code.
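As a small illustration of the limitation described above, the hypothetical function below can reach 100% statement coverage with a single test and still hide a data-specific defect (a division by zero).

```python
# Statement coverage sketch: one test executes every statement,
# yet a data-specific defect (division by zero) remains undetected.
def average_per_item(total: float, items: int) -> float:
    result = total / items      # defect: fails only when items == 0
    return round(result, 2)

def test_full_statement_coverage_but_defect_missed():
    # This single test runs every executable statement (100% statement coverage)
    # but never uses items == 0, so the division-by-zero defect is not exposed.
    assert average_per_item(10.0, 4) == 2.5
```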
Branch testing:
Branches in code represent different paths or decisions that can be taken while running a program. These branches can be either unconditional (just following a straight path) or conditional (where a decision needs to be made).
Branch testing aims to create test cases that cover these different branches in the code. The goal is to have test cases that go through each branch at least once. Coverage here is measured by the number of branches covered by test cases divided by the total number of branches, shown as a percentage.
Achieving 100% branch coverage means that all the different paths in the code, including both unconditional and conditional branches, have been tested by the created test cases. Conditional branches usually relate to outcomes from decisions like “if…then” statements or different choices in switch/case statements.
However, testing all branches doesn’t guarantee catching all defects. Some defects might only show up when a specific path in the code is executed, which might not happen in the created test cases.
It’s important to note that branch coverage includes coverage of individual statements in the code. So, if you achieve 100% branch coverage, you’ve also achieved 100% statement coverage, but not the other way around.
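To contrast with statement testing, the hypothetical discount function below needs two tests for 100% branch coverage; a single test that takes the `if` body would already give 100% statement coverage but would miss the false outcome of the decision.

```python
# Branch coverage sketch: both outcomes of the decision must be exercised.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.9   # 10% member discount
    return price

def test_true_branch():
    # Covers the "True" branch; alone, this already gives 100% statement coverage.
    assert apply_discount(100.0, True) == 90.0

def test_false_branch():
    # Needed for 100% branch coverage: the "False" outcome is also tested.
    assert apply_discount(100.0, False) == 100.0
```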
Welcome to the world of software testing, where precision and meticulousness reign supreme. In this guide, we will explore the fundamentals of manual testing, delve into essential concepts, and equip you with the knowledge needed to excel in this critical aspect of software development. Whether you’re a seasoned professional or a newcomer, this journey will enhance your understanding of manual testing and set you on the path to becoming an expert in the field.
Understanding the Basics
Manual Testing Demystified
Manual testing, as the name suggests, involves the careful examination of software without the aid of automation tools. It is a crucial step in the software development life cycle (SDLC) that ensures the application behaves as intended, meets user requirements, and is free of defects.
Test User: Navigating the Basics
Before delving into manual testing, let’s familiarize ourselves with some foundational terms. “Test user” refers to an individual or persona used to evaluate the functionality of a system. Understanding this concept is pivotal as we progress through various testing methodologies.
The Core of Manual Testing
Software Testing and Its Types
Software testing is a multifaceted discipline, encompassing various types to ensure a robust and error-free application. Here are some key terms to remember:
- Regression Testing: Verifying that recent changes haven’t adversely affected existing functionalities is crucial. Explore the significance of regression testing to maintain software integrity.
- Manual Testing vs. Automation Testing: While automation testing streamlines repetitive tasks, manual testing offers a nuanced approach. Learn when to employ each method for optimal results.
- Types of Software Testing: Gain insights into the diverse landscape of software testing, including functional testing, performance testing, and more.
Navigating Manual Testing Techniques
Test Case and Test Case Design Techniques
Creating effective test cases is an art. Understand the anatomy of a test case, its significance in the testing process, and various design techniques to ensure comprehensive coverage.
Types of Manual Testing
Dive into the different types of manual testing, such as black box testing, white box testing, and grey box testing. Each type plays a unique role in identifying defects and ensuring software reliability.
Exploring Manual Testing Tools
While automation tools have gained prominence, manual testing tools remain invaluable. Discover essential tools that aid manual testers in executing test cases efficiently.
Advancing Your Manual Testing Skills
Manual Testing Interview Questions
Prepare for success with a curated list of manual testing interview questions. Sharpen your knowledge and boost your confidence as you navigate the interview process.
Best Practices in Manual Testing
Explore time-tested best practices to elevate your manual testing game. From test planning to execution, adopt strategies that enhance efficiency and accuracy.
The Future of Manual Testing
Evolving Trends in Software Testing
Stay ahead of the curve by exploring emerging trends in software testing. From artificial intelligence to continuous testing, discover what the future holds for manual testing professionals.
Sanity, Smoke and Regression Testing
Sanity Testing
Sanity testing is a test execution done to check existing/previous functionality and its impact, but not thoroughly or in-depth.
A sanity test should only be done when you are running short of time, so never use it for your regular releases. Theoretically, this testing is a subset of regression testing.
Sanity testing is done at random to verify that each functionality is working as expected.
It verifies whether the requirements are met or not by checking all the features breadth-first.
It is not planned testing and is done only when there is a time crunch.
It mainly includes verification of business rules and functionality.
It mostly spans 1-2 days at most.
Regression Testing:
Regression testing is done to verify that the complete system and bug fixes are working fine.
This includes in-depth verification of functionality.
This mostly spans 2-3 days.
Test cases are generally automated, because they need to be executed again and again, and running the same test cases manually each time is time-consuming and tedious.
Test cases are re-executed to check that the previous functionality of the application is working fine and that the new changes have not introduced any bugs.
Defects are fixed as per the criticality/priority of the defect/bug, and the fixes are then re-verified.
Example:
Assume the login button on a login page is not working and a tester reports a bug stating that the login button is broken. Once the bug is fixed by the developers, the tester retests it to make sure the login button works as per the expected result. At the same time, the tester also tests other functionality related to the login button.
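Since regression checks like this are often automated (as noted above), here is a hedged sketch of what such an automated re-check might look like; the `LoginPage` class and its behaviour are hypothetical placeholders, not a real framework API.

```python
# Regression sketch: re-run the login checks after the login-button fix.
# LoginPage is a hypothetical, simplified stand-in for the real application.
class LoginPage:
    def __init__(self):
        self.logged_in_user = None

    def login(self, username: str, password: str) -> bool:
        # Simplified behaviour standing in for the fixed login button.
        if username and password == "secret":
            self.logged_in_user = username
            return True
        return False

def test_login_button_fix():
    # Re-test of the reported defect: a valid login must now succeed.
    page = LoginPage()
    assert page.login("alice", "secret") is True

def test_related_functionality_not_broken():
    # Regression check around the fix: invalid credentials must still be rejected.
    page = LoginPage()
    assert page.login("alice", "wrong") is False
    assert page.logged_in_user is None
```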
Smoke Testing:
This testing is a routine health check-up of an application build before taking it into in-depth testing.
This testing is conducted to ensure that the most crucial functions of a program are working.
Smoke testing is directly related to Build Acceptance Testing (BAT).
In BAT, we do the same testing: verifying that the build has not failed and that the system is working fine.
Sometimes, when a build is created, issues get introduced, and when it is delivered, the build doesn’t work for the QA team.
Smoke testing is ideally performed by the QA team.
It is used to test the critical functionality of the software. When the developers deliver a new build to the quality teams, smoke testing is done.
For example, if there are 4 environments/servers, regression is run each time the code moves from one server to another to make sure everything still works well.
SDLC
The Software Development Lifecycle is the process of building good software, and its stages ensure the quality and correctness of that software. Every stage of the lifecycle is important in itself; one wrong step in the lifecycle can lead to a serious mistake in the development of the software.
SDLC is a process followed for a software project, within a software organization.
Stages/phases in SDLC:
1. Requirement/information gathering
2. Analysis
3. Design
4. Coding/Implementation
5. Testing
6. Maintenance
Information/requirement gathering:
1. The Business Analyst (BA) is responsible for information gathering.
2. It involves gathering requirements from the customer.
3. Information gathering involves the Business Requirement Specification (BRS), which is prepared by the BA.
4. The BRS is the bridge between the client and the team (developers, testers).
Analysis:
1. The BA is involved in this process, and here the SRS document is made, i.e. the Software/System Requirement Specification.
2. It is made after the BRS.
3. The SRS is a detailed document.
BRS:
Requirement gathering example: a banking project
- sign-up page
- home page
- account info page
- contacts page
- etc.
This is the overall requirement gathering.
SRS:
Consider the above example:
- The sign-up page should have name, number, email, password, etc.
This is the detailed specification, which shows the smaller units of the software.
SRS documents include:
1. Functional Flow Diagram
- A functional flow diagram shows the flow of our tasks.
- It shows the relationship between each task.
- It gives the proper sequence of tasks.
- Example: Facebook or any other website.
- Overall, the functional flow diagram is a stepwise representation of the software.
2. Functional Requirement
- A functional requirement describes the attributes required to complete a specific function.
- For example, take the sign-up function.
- For sign-up, its requirements are:
first name
last name
mobile number
email
password
submit button
- Now, the requirements for the first name field are:
- The name should contain only letters.
- It should not contain numbers.
- It should not contain spaces.
- It should not contain special characters.
All such requirements are defined in this phase (a validation sketch is shown below).
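As a sketch of how the first-name rules listed above might translate into code, here is a hypothetical validation function; the exact rules and the function name are assumptions made for illustration.

```python
# First-name validation sketch based on the SRS rules listed above.
def is_valid_first_name(name: str) -> bool:
    """Accept only letters: no digits, no spaces, no special characters."""
    return name.isalpha()  # isalpha() is False for digits, spaces and symbols

def test_first_name_rules():
    assert is_valid_first_name("Asha") is True
    assert is_valid_first_name("Asha1") is False      # contains a number
    assert is_valid_first_name("Asha Rao") is False   # contains a space
    assert is_valid_first_name("Asha@") is False      # contains a special character
```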
3. Snapshot
- It is a visualization of the functionality before the product is developed.
- It is created by the BA.
- It is created with tools such as iRise.
- It gives the developer an idea of how the software will look.
- The SRS is sent to all stakeholders and the team (developers + testers).
- While developers do the coding, testers do test case design, i.e. write test cases.
Design:
Based on the requirements specified in the SRS, a Design Document Specification (DDS) or Technical Design Document (TDD) is proposed and documented.
The TDD (Technical Design Document) is divided into two levels:
1. HLD (High-Level Design)
2. LLD (Low-Level Design)
| High-Level Design (HLD) | Low-Level Design (LLD) |
|---|---|
| It contains the design of the working of the main module. | It includes the static logic of the sub-modules. |
| It describes what each module does and how it works. | For example, on a sign-up page, sign-up is the main module and fields such as first name, last name, email, etc. are the sub-modules. |
| It is created by the design architect. | It is created by the front-end developer. |
Implementation/Coding:
- Coding means programming.
- A single line of code is code.
- Multiple lines of code form a program.
- A set of programs written by developers creates the software.
- There are 2 types of developers:
1. Front-end developer: the UI, functionality, and process flow are developed by the front-end developer.
2. Back-end developer: data management, data gathering, and data security are handled by the back-end developer.
A developer who works as both a front-end and a back-end developer is called a full-stack developer.
Testing:
Testing is the process of checking completeness and correctness of the software.
Methods of testing:
1. White box
2. Black box
3. Grey box
1. White box testing:
- White box testing is done by the coder because code knowledge is required.
- It is also called code-level testing, unit testing, or clear box testing.
- In white box testing, whenever the coder completes writing the code, they check or compile it, and if any bug is found they have to solve it.
- The coder cannot send code to the tester without doing white box testing.
- The coder mostly checks or tests positive scenarios only.
- The purpose of white box testing is to test the correctness and completeness of the program.
2. Black box testing:
- Black box testing is also known as system and functional testing.
- This testing is done by the tester.
- The overall functionality gets checked in this type of testing.
- The tester checks the internal functionality based on the external functionality.
Example: When data is entered in the sign-up module and the user presses the sign-up button, the button’s processing stores the entered data. The tester checks whether the data is stored correctly or not.
So here the internal functionality is the storing of the data, and the external functionality is filling in the fields and the submit button’s processing.
- The tester tests both positive and negative scenarios.
A positive scenario means:
If we have a mobile number field that should accept 10 digits, then as a tester we check the field by entering a 10-digit number and verifying that it is accepted.
A negative scenario means:
For the same mobile number field, we check with 9 digits or fewer, or with more than 10 digits, and verify that these are not accepted (see the sketch below).
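A minimal sketch of these positive and negative scenarios for the 10-digit mobile number field, assuming a hypothetical `is_valid_mobile` check:

```python
# Positive and negative scenarios for a 10-digit mobile number field (sketch).
def is_valid_mobile(number: str) -> bool:
    """Hypothetical rule: exactly 10 digits, nothing else."""
    return number.isdigit() and len(number) == 10

def test_positive_scenario_ten_digits():
    assert is_valid_mobile("9876543210") is True

def test_negative_scenarios():
    assert is_valid_mobile("987654321") is False     # 9 digits
    assert is_valid_mobile("98765432101") is False   # 11 digits
    assert is_valid_mobile("98765abcde") is False    # non-digit characters
```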
3. Grey box testing:
- Grey box testing is a combination of both white box and black box testing.
- To do grey box testing, the tester needs programming knowledge.
- The role of a grey box tester: whenever the final software is handed over, the tester checks its functionality, and if any fault occurs in the output of a function, the tester does not send the system back to the developer; instead, the tester makes the changes in the code themselves. So knowledge of coding is required.
- Examples include API testing.
Deployment and Maintenance:
- Once the product is tested and ready to be deployed, it is released formally in the appropriate market (on the production server). Sometimes product deployment happens in stages, as per the organization's business strategy.
- Maintenance means providing service after delivery of the project (such as bug fixes, improvements, or enhancements).
- Any bug fix or enhancement that occurs after delivery comes under maintenance.
- Maintenance involves non-technical as well as technical support.
- Non-technical support is called BPO.
- Technical support is called KPO.
Software Testing Life Cycle : (STLC)
Just like developers follow the Software Development Life Cycle, testers also follow the Software Testing Life Cycle, which is called STLC.
The Software Testing Life Cycle is a testing process that is executed in a sequence.
This life cycle also has a number of phases, which are described below.
Each of these steps has Entry Criteria (the minimum set of conditions to enter a phase) as well as Exit Criteria (the minimum set of conditions to exit a phase).
Let us discuss each phase in detail:
Requirement Analysis:
The tester analyses the requirement documents of the SDLC (Software Development Life Cycle) to examine the requirements stated by the client.
After examining the requirements, the tester makes a test plan.
Test Plan Creation:
Test plan creation is the crucial phase of STLC where all the testing strategies are defined.
The tester determines the estimated effort and cost of the entire project.
This phase takes place after the successful completion of the Requirement Analysis Phase.
Test activities (Test case design and test case execution) can be started after the successful completion of Test Plan Creation.
- The Lead or PM implements the test plan.
- The PM prepares the test team.
- The PM/Lead distributes tasks or work to all team members.
- In planning, estimation and resource planning are done. Estimation means deciding which resource will work on a requirement and for how long.
Environment setup:
Setup of the test environment is an independent activity and can be started along with Test Case Development.
This is an essential part of the manual testing procedure, as without the environment, test execution is not possible.
The testing team is usually not involved in setting up the testing environment; it is the senior developers who create it.
Test case Execution:
Test case Execution can be done after the successful completion of test case design and planning.
In this phase, the testing team starts execution activity for the test cases.
The testing team records the status and actual results in the test case document after execution.
If any test case fails during execution, the testers log a bug against the failed test case.
The RTM (Requirement Traceability Matrix) is also prepared in this phase. The Requirement Traceability Matrix is an industry-standard format used for tracking requirements and test cases; each test case is mapped to a requirement specification. An example is shown below.
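As an illustration, a simple RTM might look like the table below; the requirement IDs, test case IDs, and defect ID are hypothetical.

| Requirement ID | Requirement Description | Test Case ID(s) | Status |
|---|---|---|---|
| REQ-001 | User can sign up with valid details | TC-001, TC-002 | Pass |
| REQ-002 | Mobile number field accepts exactly 10 digits | TC-003 | Fail (defect DEF-101 logged) |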
Test Cycle Closure:
Test Documentation:
- Test documentation is the report of the testing.
- Once the testing is completed, the tester prepares the test documentation, i.e. the test report.
- The tester sends this document to the team leader, or sometimes to the complete team.
- The team leader sends this document to the PM, and the PM sends it to the customer.
- This document includes:
Entry and exit criteria for STLC
| STLC Stage | Entry Criteria | Exit Criteria | Deliverables |
|---|---|---|---|
| Requirement Analysis | Requirements documents available (both functional and non-functional, if required), i.e. BRS and SRS; acceptance criteria defined | Signed-off RTM; SRS and BRS understood | RTM |
| Test Planning | Requirements documents (SRS and BRS); Requirement Traceability Matrix (RTM); test automation plan document (if required) | Approved test plan created | Test plan document |
| Test Case Development | Requirements documents (SRS/BRS); RTM and test plan; automation analysis report | Reviewed and signed test cases/scripts | Test cases/scripts |
| Test Environment Setup | Environment set-up plan is available | Environment setup is working as per the plan and all set-up is done | Environment ready |
| Test Execution | RTM, test plan, and test cases/scripts are available; test environment is ready; test data set-up is done | All planned tests are executed; defects logged | Completed RTM; test cases updated with results and status; defects logged for any failed test case |
| Test Cycle Closure | Testing has been completed; test results are available; defect logs are available | Test closure report signed off and shared with the team | Test closure report |
ACCEPTANCE TESTING [UAT]
⦁ Acceptance testing is done by end users.
⦁ Acceptance testing is end-to-end testing where real-time scenarios are implemented while testing the application.
⦁ Acceptance testing can also be called User Acceptance Testing (UAT).
⦁ Generally, UAT is done by the customer and checks whether the application works according to the given business scenarios and real-time scenarios.
⦁ UAT is done before production/go-live.
⦁ UAT is the process of collecting feedback from customers.
⦁ The test team, development team, and customer are involved in acceptance testing.
⦁ UAT starts after system and functional testing, i.e. after SIT (system integration testing).
⦁ The customer decides which user stories need to be executed.
⦁ The customer decides whether the build should go to production or not after UAT.
Acceptance testing is basically of two types:
Alpha Testing/ Internal Acceptance Testing
⦁ Alpha testing is the final stage of testing performed by your QA team to check that your application is ready for release outside your company.
⦁ The testing is coordinated in-house, structured and is usually done by your own test team.
⦁ Alpha testing happens for web-based applications.
⦁ Alpha testing can be done in front of testers, developers, and customers.
⦁ Real customers, mostly in the service-based industry (e.g., HDFC, IDBI), are involved in alpha testing.
⦁ The aim is to test every single user flow end to end. The idea is to ensure that your software is bug-free, stable, and functioning as expected.
Beta Testing/ External Acceptance Testing
⦁ Beta testing involves releasing the software to a limited number of real users. They are free to use it as they want.
⦁ However, the users give feedback about how the application performs.
⦁ It is done to get feedback from real users based on their experience.
⦁ Many product-based software companies use beta testing to get feedback on a new feature or improvement for a software product.
⦁ Developers and testers are less involved in it.
⦁ Customers here are companies like Microsoft, RuPay, Mastercard, etc.
⦁ In beta testing, when the developers and testers complete their work, the product is sometimes sent to different testers/users to collect feedback.
Difference between Alpha and Beta Testing
| Alpha Testing | Beta Testing |
|---|---|
| Alpha testing is performed by testers who are usually internal employees of the organization. | Beta testing is performed by end users or by QA members from a different team. |
| Alpha testing is performed at the developer's site. | Beta testing is performed at the end user's site. |
| Alpha testing is normally done in service-based organizations. | Beta testing is normally done in product-based organizations. |
| Developers can immediately address critical issues or fixes in alpha testing. | Most of the issues, feedback, or improvements collected from beta testing will be implemented in future versions of the product. |
Uses / Advantages of Acceptance Testing:
⦁ To find the defects missed during the functional testing phase.
⦁ To assess how well the product has been developed.
⦁ To confirm that the product is what the customers actually need.
⦁ Client satisfaction after UAT.
⦁ Feedback helps in improving the product's performance and user experience.
⦁ To minimize or eliminate issues arising in production.
Can system testing be done at any stage?
Answer: No, we cannot do system testing at any stage; it must start only when all modules are in place and working correctly, and it should be performed before UAT (User Acceptance Testing).
Distinguish between System Testing and UAT(User Acceptance Testing)
Answer: UAT: User Acceptance Testing (UAT) is the process of determining whether the product meets the needs of its users. It is done by the client to ensure all requirements are fulfilled.
System Testing: Also known as end-to-end testing, here the software/project is tested as a whole to find defects while the system is under test.
Exit criteria for UAT, or when can we stop UAT?
Before moving into production, the following need to be considered:
⦁ No critical defects open
⦁ Business requirements met
⦁ UAT sign-off meeting held with all stakeholders/the client, and sign-off obtained