- What are CMM and CMMI? What is the difference?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
The Capability Maturity Model Integration (CMMI) provides the guidance for improving your organization’s processes and your ability to manage the development, acquisition, and maintenance of products and services. CMM Integration places proven practices into a structure that helps your organization assess its organizational maturity and process area capability, establish priorities for improvement, and guide the implementation of these improvements.
The new integrated model (CMMI) uses Process Areas (known as PAs), which differ from those in the previous model, and it covers systems processes as well as software processes, rather than only software processes as in the SW-CMM.
- Do you have a favorite QA book? Why?
Effective Methods for Software Testing – Perry, William E.
It covers the whole software lifecycle, starting with testing the project plan and estimates and ending with testing the effectiveness of the testing process. The book is packed with checklists, worksheets and N-step procedures for each stage of testing.
- When should testing be stopped?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
– Deadlines (release deadlines, testing deadlines, etc.)
– Test cases completed with certain percentage passed
– Test budget depleted
– Coverage of code/functionality/requirements reaches a specified point
– Bug rate falls below a certain level
– Beta or alpha testing period ends
- When do you start developing your automation tests?
First, the application has to be tested manually. Automation development starts once manual testing is complete and a stable baseline has been established.
- What are positive scenarios?
Testing to see whether the application is doing what it is supposed to do.
- What are negative scenarios?
Testing to see whether the application refrains from doing what it is not supposed to do (for example, rejecting invalid input).
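The distinction between positive and negative scenarios can be sketched with two checks on a hypothetical `divide` function (the function and its error behaviour are illustrative, not from the text):

```python
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Positive scenario: the application does what it is supposed to do.
assert divide(10, 2) == 5

# Negative scenario: the application does not do what it is not supposed
# to do -- here, it must reject invalid input rather than return a result.
try:
    divide(1, 0)
except ValueError:
    pass  # expected rejection of invalid input
else:
    raise AssertionError("invalid input was accepted")
```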
- What is quality assurance?
The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.
- What is the purpose of the testing?
Testing provides information about whether a product meets its requirements.
- What is the difference between QA and testing?
Quality Assurance is the set of activities carried out to set standards and to monitor and improve performance so that the product delivered is as effective and as safe as possible. Testing provides information about whether a product meets its requirements, and it shows where the product fails to meet them.
- What are the benefits of test automation?
Repeatability, consistency, speed of execution, reusability across builds, and extensive reporting; automated suites make regression testing far cheaper to repeat.
- Describe some problems that you had with automation testing tools
One common problem with automation tools is object recognition: the tool fails to identify a control in the application under test, particularly custom or dynamically generated objects.
- Can test automation improve test effectiveness?
Yes, because of the advantages offered by test automation, which include repeatability, consistency, portability, and extensive reporting features.
- What are the main uses of test automation?
Regression testing is the main use; automation also suits load, stress, and performance testing, and repetitive tasks such as smoke tests.
- Does automation replace manual testing?
No, it does not. Several scenarios cannot be automated, or are simply so complicated that manual testing would be easier and more cost-effective. Furthermore, automation tools have several constraints regarding the environments in which they run and the IDEs they support.
- How will you choose a tool for test automation?
How do we decide which automation tool to use for regression testing?
· Based on risk analysis: personnel skills and the company's software resources
· Based on cost analysis
· Comparing the tool's features with the test requirements
· Support for the application's IDE and for the application environment/platform
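These criteria can be combined in a weighted decision matrix. The tool names, criteria weights, and 1–5 scores below are hypothetical, purely to illustrate the comparison:

```python
# Hypothetical weighted decision matrix for tool selection.
# Weights and scores are illustrative assumptions.
criteria_weights = {"cost": 0.3, "skills_fit": 0.3,
                    "feature_match": 0.2, "platform_support": 0.2}

tool_scores = {
    "Tool A": {"cost": 4, "skills_fit": 3, "feature_match": 5, "platform_support": 4},
    "Tool B": {"cost": 5, "skills_fit": 2, "feature_match": 3, "platform_support": 3},
}

def weighted_score(scores):
    """Sum of each criterion score multiplied by its weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(tool_scores, key=lambda t: weighted_score(tool_scores[t]))
print(best)
```

The weights force the team to state its priorities explicitly before comparing vendors.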
- What could go wrong with automation testing?
There are several things. For example, script errors can cause a genuine bug to go undetected, or can report a bug in the application when no bug actually exists.
- How will you describe testing activities?
Test planning, scripting, execution, defect reporting and tracking, and regression testing.
- What type of scripting techniques for test automation do you know?
Modular tests and data-driven tests.
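The data-driven technique keeps one generic script and feeds it many data rows. A minimal sketch, where the function under test and the data set are illustrative assumptions:

```python
# Data-driven scripting: one generic check, many data rows.
# The validation rule (alphanumeric, 3-12 chars) is hypothetical.
def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 12

test_data = [
    ("alice99", True),    # typical valid input
    ("ab", False),        # too short
    ("bad name!", False), # illegal characters
]

for value, expected in test_data:
    assert is_valid_username(value) == expected, value
```

Adding a new case means adding a data row, not writing a new script, which is the main maintenance advantage of the technique.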
- What are good principles for test scripts?
Scripts should be maintainable, reusable, and independent of one another; they should use clear naming, verify their own results, and clean up any test data they create.
- What type of document do you need for QA, QC and testing?
Following is the list of documents required by QA and QC teams
– Requirements document
– Test plan
– Test cases and test scripts
– Defect reports
– Test summary report
- What are the properties of a good requirement?
Understandable, clear, concise, and providing complete coverage of the application.
- What kinds of testing have you done?
Manual, automation, regression, integration, system, stress, performance, volume, load, white box, user acceptance, recovery.
- Have you ever written test cases or did you just execute those written by others?
Yes, I was involved in preparing and executing test cases in all the projects.
- How do you determine what to test?
Based on the user requirements document.
- How do you decide when you have ‘tested enough?’
Using the exit criteria document, we can decide when we have done enough testing.
- Realising you won’t be able to test everything, how do you decide what to test first? OR: What if there isn’t enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
· Which functionality is most important to the project’s intended purpose?
· Which functionality is most visible to the user?
· Which functionality has the largest safety impact?
· Which functionality has the largest financial impact on users?
· Which aspects of the application are most important to the customer?
· Which aspects of the application can be tested early in the development cycle?
· Which parts of the code are most complex, and thus most subject to errors?
· Which parts of the application were developed in rush or panic mode?
· Which aspects of similar/related previous projects caused problems?
· Which aspects of similar/related previous projects had large maintenance expenses?
· Which parts of the requirements and design are unclear or poorly thought out?
· What do the developers think are the highest-risk aspects of the application?
· What kinds of problems would cause the worst publicity?
· What kinds of problems would cause the most customer service complaints?
· What kinds of tests could easily cover multiple functionalities?
· Which tests will have the best high-risk-coverage to time-required ratio?
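The last consideration, the risk-coverage to time-required ratio, lends itself to a simple ranking. The test names, risk scores, and durations below are hypothetical:

```python
# Hypothetical prioritisation by risk-coverage-to-time ratio.
# Risk scores (1-10) and durations in minutes are illustrative.
tests = [
    {"name": "checkout_flow", "risk_covered": 9, "minutes": 3},
    {"name": "profile_page",  "risk_covered": 2, "minutes": 2},
    {"name": "payment_api",   "risk_covered": 8, "minutes": 4},
]

# Highest risk covered per minute of testing runs first.
ranked = sorted(tests, key=lambda t: t["risk_covered"] / t["minutes"],
                reverse=True)
print([t["name"] for t in ranked])
```

When time runs out, the tests at the bottom of the ranking are the ones deliberately left unexecuted.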