I was recently in a meeting where a testing process was being demonstrated. Out of sheer curiosity, I asked what defines success or failure in their system… and the answer turned into a 15-minute Q&A session rather than a description of a quality assurance process.
My view is that quality assurance is a matter of ascertaining the overall stability and security of a system.
This can be achieved simply by listing out all your function points and scoring each one in binary: ‘1’ for success, ‘0’ for failure.
If you had, say, 100 functionalities with up to 15 function points each, and each function point is given a 1 for success or a 0 for failure, you will have an overall score for the stability of the system.
This can be expressed as a relative measure, e.g. ‘98% stable’, which indicates that your system needs fixes on the 2% of function points that failed across some functionalities.
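A minimal sketch of that scoring idea, assuming results are kept as a simple mapping of (functionality, function point) pairs to a binary score; the names used here are hypothetical examples, not a prescribed format:

```python
# Hypothetical binary results: 1 = function point passed, 0 = failed.
results = {
    ("feedback_form", "create"): 1,
    ("feedback_form", "update"): 1,
    ("feedback_form", "delete"): 0,
    ("reports", "sort"): 1,
    ("reports", "export"): 1,
}

def stability(results):
    """Percentage of function points that scored 1."""
    return 100 * sum(results.values()) / len(results)

# 4 of 5 function points passed -> 80% stable
print(f"{stability(results):.0f}% stable")
```

The failed entries (those scoring 0) are exactly the "fixes needed" portion of the relative measure.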
By the way, if you are wondering what ‘functionality’ and ‘function point’ mean here: a functionality could be something like a feedback form. The form has some fields, each of which may have validations. The data entered into the form might be stored and listed in another functionality called reports. That data then has a lifecycle: it may be editable, deletable, or searchable, and further escalations may happen, such as assignment to someone.
So the function points here are:
- Create
- Retrieve
- Update
- Delete
- Assign
- Sort
- Export
- Import
- Approve
- Reject
If, with a given set of permissions, you are able to perform one or more of these actions that you should not, or unable to perform one or more that you should, then the test should record that as a failure.
Whereas if all of them behave as expected, both at the unit level and in integrated testing, the testing indicates success.
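The permission-based pass/fail rule above can be sketched as follows; the roles, actions, and permission matrix here are invented for illustration:

```python
# Hypothetical permission matrix: role -> set of allowed function points.
permissions = {
    "editor": {"create", "update", "sort"},
}

def check(role, action, succeeded):
    """Score 1 only when the observed outcome agrees with the permission:
    an allowed action must succeed, and a forbidden action must be blocked.
    An allowed action that fails, or a forbidden action that succeeds,
    both score 0 (a failed test)."""
    allowed = action in permissions.get(role, set())
    return 1 if succeeded == allowed else 0

print(check("editor", "create", True))   # allowed and it worked -> 1
print(check("editor", "delete", True))   # forbidden but it worked -> 0
```

Feeding these 1/0 outcomes into the same stability score keeps security checks and functional checks on one scale.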
This is comprehensive testing and something that can be taken as an assurance of quality.
If your testing falls short of this, generating reports that do not factor in such elements and only flag exceptions, then the testing process, and hence the assurance based on it, is inadequate.
Including automation is great; however, the software testing process should accommodate planning, measure the outcome from a usage point of view, and simply rate things as success or failure.
Original Source – The quintessential Quality Assurance