Quality Assurance

Test Case Development

The basic premise behind developing test cases is to cover the functional behavior of a piece of software and verify that its functionality, quality, and usability meet design requirements. A test case should define the inputs, preconditions, and a set of execution steps along with the expected behavior or results after completing those steps; test cases are typically grouped together into a test plan. A test case’s steps can be executed manually by a test engineer or automatically through procedural scripts or code. Test case development should include UX / UI testing alongside the various functional tests that can be performed manually or automatically. Testing the user experience is a vital part of application development because it reveals how users interact with your software and how they feel about it. If the user base has a generally negative feeling about the solution, long-term adoption will be much more difficult, with some people avoiding the solution altogether.

My primary experience in developing test cases and test suites has been with tools such as Azure DevOps / VSTS / TFS. I would execute those tests one by one and report on the results. As a Software Development Engineer in Test (SDET), I would often automate manual test cases, creating scripts or software code that tested the product in a similar and repeatable process each time for regression validation.

This list is not comprehensive but here are some key parts of a test case:

  • Test Case ID
  • Summary Title
  • Detailed description of the test case
  • Test Case Steps which can include Shared Steps and expected result
  • Related Automation Link or ID
  • Attached Files
  • Creator of the test case
  • Date Test case was Created
  • State of the test case
    • Design
    • Ready
    • Closed
  • Test Suite Relation Link or ID
  • User Story or Use Case Link or ID
  • Discussion log
  • Log of Pass / Fail of test case and when the test was executed
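The fields above can be sketched as a simple data model. This is an illustrative Python sketch, not an export of any real tool's schema; all type and field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class TestCaseState(Enum):
    DESIGN = "Design"
    READY = "Ready"
    CLOSED = "Closed"

@dataclass
class TestStep:
    action: str
    expected_result: str

@dataclass
class TestCase:
    case_id: str
    title: str
    description: str
    steps: List[TestStep]
    creator: str
    created_on: date
    state: TestCaseState = TestCaseState.DESIGN
    automation_id: Optional[str] = None   # related automation link/ID
    suite_id: Optional[str] = None        # test suite relation
    story_id: Optional[str] = None        # user story / use case link
    run_log: List[str] = field(default_factory=list)  # pass/fail history

# Example: a minimal test case with two steps
tc = TestCase(
    case_id="TC-1042",
    title="User can log in with valid credentials",
    description="Verify a registered user can sign in from the login page.",
    steps=[
        TestStep("Navigate to the login page", "Login form is displayed"),
        TestStep("Enter valid credentials and submit", "User lands on the dashboard"),
    ],
    creator="jdoe",
    created_on=date(2024, 1, 15),
)
```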

Filing Defects / Bugs

The process for filing defects or bugs depends on the company or department you are in. One of the most important aspects of filing a defect is communicating the steps necessary to replicate the issue that was found. As a Quality Assurance Engineer in software testing, you need to balance the level of detail so that you are not overly verbose while still providing enough clarity for the issue to be replicated step by step. Another benefit of writing clear reproduction steps is that they become the details for a new test case in your test suite.

Something I have also found that greatly helps in the remediation of a defect is the use of screen capture or video recording software during the testing cycle. Such tools make it perfectly clear what happened, how, and when. However, relying solely on video or screen capture is not sufficient. It is still important to convert the video reproduction into written steps so that you can add them to your test suite later, and the written word is often faster to follow than watching a video. Videos and screen captures serve to supply nuanced details that may be missed in written reproduction steps.

Lastly, as QA it was important to maintain a separation of concerns when triaging defects: QA was responsible for setting the severity of the defect, and the product owner set the priority for fixing it. Listed below is the basic severity breakdown we used when classifying defects.

  • Severity 1
    • The functionality is completely broken and the user is unable to proceed. There is a code stack trace with exception details, or the functionality does not work at all. This would also extend to translations that are offensive or completely wrong.
  • Severity 2
    • The functionality is badly broken from a UX / UI perspective. A workaround exists for the user, but it may involve support manually making changes in the database or working with the user one on one to get past the broken point.
  • Severity 3
    • The functionality is working but not optimally. There is a relatively easy workaround that the user can be trained to follow. This would also be the defect level for most UX / UI issues.
  • Severity 4
    • Minor UI / UX issue, including grammatical problems such as incorrect spelling or punctuation.
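The severity scale and the QA/product-owner split described above could be modeled as below. This is a hedged Python sketch with hypothetical names, not the schema of any actual defect tracker:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    SEV1 = 1  # completely broken, no way to proceed
    SEV2 = 2  # badly broken; workaround needs support intervention
    SEV3 = 3  # working but not optimal; easy user workaround
    SEV4 = 4  # minor UI/UX or grammatical issue

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: Severity   # set by QA at triage
    priority: int = 0    # set later, and independently, by the product owner

# QA files the defect and assigns severity only
bug = Defect("BUG-77", "Stack trace shown on checkout", Severity.SEV1)

# The product owner assigns fix priority as a separate concern
bug.priority = 1
```

Keeping severity and priority as separate fields mirrors the separation of concerns: a Sev 1 is not automatically Priority 1 if the broken feature is rarely used.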

Manual and Automated Testing

Manual Testing

This is the process of having an individual or a group of people test for defects in software. It can be done by an engineer trained in the discipline of Quality Assurance, or it could be performed by any person off the street. Typically, a trained engineer will act as if they were an end user, verifying that the application's features behave correctly and according to specification.

Ad hoc testing is another form of manual testing, often performed by going “off script” from the set of defined test cases in the hope of identifying defects within a software application. Clearly defined test cases validate the known user acceptance paths that gate the releasability of the software, whereas ad hoc testing will hopefully catch the logical paths that were not directly planned or accounted for.

As mentioned above, manual testing can be performed by an individual either trained or untrained in the Quality Assurance discipline. Typically, when an untrained individual is asked to perform quality assurance tasks, they are being asked to verify the usability of the application. This form of testing is often done before the software is complete and shipped so that UX / UI engineers can determine how a particular product should be designed to create the best experience for the customer.

Automated Testing

This is primarily the process of taking the manual steps developed in each test case and leveraging software to automate them in a repeatable fashion. The executable test steps can be written in any number of programming or scripting languages.

Developing and writing automated tests falls under two general categories: tests that perform actions on a User Interface (UI) and tests that execute actions entirely in code.

Robotic Process Automation (RPA) is the process of developing automation that interacts with an application, clicking and typing data into the application's fields and forms to perform the various action steps as a live person would.

Unit and integration tests are automated tests that execute software code in the backend of the application, validating individual methods and functional aspects of the code without launching the application's user interface. These tests are designed to exercise the positive and negative logic paths within a method or function to validate the behavior of individual code modules or blocks.
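A minimal example of exercising both positive and negative paths might look like the following. The function under test (`parse_severity`) and its behavior are hypothetical, invented here purely for illustration:

```python
import unittest

def parse_severity(value: str) -> int:
    """Hypothetical function under test: map 'Sev1'..'Sev4' to an integer."""
    if not value.startswith("Sev"):
        raise ValueError(f"unrecognized severity: {value!r}")
    level = int(value[3:])
    if level not in range(1, 5):
        raise ValueError(f"severity out of range: {level}")
    return level

class ParseSeverityTests(unittest.TestCase):
    def test_valid_value(self):
        # Positive path: a well-formed value maps to its level
        self.assertEqual(parse_severity("Sev2"), 2)

    def test_bad_prefix(self):
        # Negative path: a malformed value raises
        with self.assertRaises(ValueError):
            parse_severity("High")

    def test_out_of_range(self):
        # Negative path: a level outside 1-4 raises
        with self.assertRaises(ValueError):
            parse_severity("Sev9")
```

Each test targets a single logic branch of the method, which is what makes failures easy to localize without ever launching a UI.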

Selenium WebDriver

As test automation relates to my career and experience, Selenium WebDriver has been my primary platform for executing RPA against web applications. My preferred method of developing Selenium-based tests is to build what are called page maps or models first and then develop the actual test execution steps afterward. With page models in place, I can update or modify the automation code without re-recording test execution steps through an RPA recorder. This reduces the brittleness of the test suite and increases the reliability of the tests: by simply updating the CSS or XPath locator details in the page model, the test steps remain viable much longer without recoding the steps themselves.
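The page-model idea can be sketched as follows. My actual example is in C#; this is a simplified Python illustration in which `FakeDriver` stands in for a real Selenium WebDriver (so the sketch runs without a browser), and all locators and page names are hypothetical:

```python
class FakeDriver:
    """Stub with a WebDriver-like surface, for illustration only."""
    def __init__(self):
        self.typed = {}     # locator -> text entered
        self.clicked = []   # locators clicked, in order

    def find_element(self, by, locator):
        return FakeElement(self, locator)

class FakeElement:
    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def send_keys(self, text):
        self.driver.typed[self.locator] = text

    def click(self):
        self.driver.clicked.append(self.locator)

class LoginPage:
    # All locators live in one place. With real Selenium these would be
    # (By.CSS_SELECTOR, "...") tuples; if the page markup changes, only
    # these lines change, not the test steps below.
    USERNAME = ("css", "#username")
    PASSWORD = ("css", "#password")
    SUBMIT = ("css", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# A test step uses the page model's intent-level API, never raw locators
driver = FakeDriver()
LoginPage(driver).log_in("jdoe", "s3cret")
```

Because the test step calls `log_in()` rather than touching selectors directly, a changed locator is a one-line fix in the page model instead of a re-recording of every affected test.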

Please click the External Link to my GitHub account to see my Selenium code example developed in C#.


  • Custom Quality Assurance Frameworks
  • Defect Reporting and Tracking
  • MS SQL
  • Ranorex - C#
  • Selenium WebDriver - C#
  • Test Case Development
  • Test Plans
  • UX / UI validation

External Links

Github SeleniumExample >>