Code Coverage Alone Probably Won’t Ensure Your Code is Fully Tested.

For this week’s CS-443 self-directed professional development blog entry I read a post by Mark Seemann, a professional programmer and software architect from Copenhagen, Denmark. The post, entitled “Code coverage is a useless target measure,” is quite relevant to the material we’ve been discussing in class over the past couple of weeks, especially path testing and data-flow testing. In it, Seemann urges project managers and test developers not to set a “code coverage goal” as a means of measuring whether their code is completely tested. He argues that such a goal is a “perverse incentive”: as a project’s deadline approaches and the pressure increases, it can encourage developers to write bad unit tests whose only purpose is to cover code. He illustrates what this scenario might look like with examples in C#.

In his examples, Seemann shows that it is easy to achieve 100% code coverage for a class even when the code in that class is not sufficiently tested for correct functionality. In his first example, he writes an essentially useless test: a try/catch block with no assertions, existing solely to cover code. Next, he gives a test with an assertion that might seem legitimate, but as Seemann shows, “[the test] doesn’t prevent regressions, or [prove] that the System Under Test works as intended.” Finally, he presents a test using multiple boundary values and explains that, even though it is a much better test, it doesn’t increase code coverage over the previous two. Seemann concludes that showing software works as intended takes more than covering every line with unit tests: certain portions of code need multiple tests, and boundary-value inputs need to be checked for correct outputs.
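Seemann’s examples are in C#, but the pattern he describes translates to any language. Here is a minimal Python sketch of my own (the function and both tests are hypothetical, not taken from his post) contrasting a coverage-only test with a boundary-value test; both exercise every line of the function, but only the second can catch a regression:

```python
# Hypothetical function under test (not from Seemann's post).
def is_valid_age(age):
    """Return True if age is within a plausible human range."""
    return 0 <= age <= 130


def test_useless():
    # A "coverage-only" test: it executes the code (100% line coverage)
    # but asserts nothing, so it can never fail if the logic breaks.
    try:
        is_valid_age(25)
    except Exception:
        pass


def test_boundaries():
    # Same coverage, but these assertions pin down the intended
    # behavior at the edges of the valid range.
    assert is_valid_age(0) is True
    assert is_valid_age(130) is True
    assert is_valid_age(-1) is False
    assert is_valid_age(131) is False
```

If a change accidentally flipped the comparison inside `is_valid_age`, `test_useless` would still pass, while `test_boundaries` would fail immediately, which is exactly the distinction Seemann is drawing.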

The reason I chose Mark’s blog post for my entry this week is that it relates to the material we’ve been discussing recently in class, especially data-flow testing. It’s important for us to remember, when we use code-based testing techniques, that writing unit tests simply to cover the code is not sufficient to ensure software is fully functional. It’s probably a good idea, therefore, to use a combination of code-based and specification-based techniques when writing unit tests.


Finding & Testing Independent Paths

Since we have been going over path testing in class this past week, I decided to find a blog post related to that material. The post I found, titled “Path Testing: Independent Paths,” is a continuation of a couple of previous posts, “Path Testing: The Theory” and “Path Testing: The Coverage,” by the same author, Jeff Nyman. In it, Nyman explains what basis path testing is and how to determine the number of linearly independent paths through a chunk of code.

Nyman essentially describes a linearly independent path as any unique path through the graph that does not contain the same combination of nodes as any other linearly independent path. He also makes the point that even though path testing is mainly a code-based approach to testing, by assessing what the inputs and outputs of a piece of code should be, it is still possible “to figure out and model paths.” He gives the specific example of a function that takes in arbitrary values and determines their Greatest Common Divisor. Nyman uses the following diagram to show how he is able to determine each linearly independent path:
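To make the idea concrete, here is a hypothetical Python version of such a function (a standard Euclidean-algorithm sketch, not Nyman’s actual code), with comments marking the decision and process steps that would become nodes in the flow graph:

```python
# A simple GCD function annotated with the control-flow elements
# that give rise to distinct paths through its graph.
def gcd(a, b):
    while b != 0:        # decision node: loop taken or skipped
        a, b = b, a % b  # process node: reduce the problem
    return a             # exit node


# Inputs can be chosen to exercise different paths even before
# reading the code: gcd(5, 0) skips the loop entirely, while
# gcd(12, 8) iterates through it.
```

This mirrors Nyman’s point that thinking about inputs and outputs is enough to start modeling paths: one test input that never enters the loop and one that does already distinguishes two paths.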

I really liked how he was able to break down the logic into processes, edges, and decisions without looking at the code. Sometimes, when we build our graphs strictly from code, it’s easy to get confused and lose sight of the underlying logic that determines the number of tests necessary to ensure our code is completely tested. His approach also helped me understand how basis path testing should work and how it should be implemented.

Nyman goes on to show that he can calculate the number of independent paths using the above graph and the formula for cyclomatic complexity. First he points out that the number of nodes is equal to the sum of the number of decisions and the number of processes, which in this case is 6. Then, by plugging the numbers into the cyclomatic complexity formula, V(G) = e – n + 2p, Nyman obtains the following results:
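As a quick sketch of the arithmetic (using illustrative edge and node counts of my own, since I can’t reproduce Nyman’s diagram here), the formula is simple to compute:

```python
# Cyclomatic complexity: V(G) = e - n + 2p, where e = edges,
# n = nodes, and p = connected components (1 for the flow graph
# of a single function).
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components


# Illustrative example (not necessarily Nyman's exact graph):
# a flow graph with 7 edges and 6 nodes gives
# V(G) = 7 - 6 + 2 = 3, i.e. three linearly independent paths,
# and therefore at least three test cases to cover the basis set.
```

The value of V(G) is what connects the graph back to testing: it is the minimum number of test cases needed to exercise every linearly independent path once.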


Finally, Nyman ends the post by showing that the same results are obtained when working through the actual code of the Greatest Common Divisor function. He also shows that the same graph could apply to something like an Amazon shopping cart/wishlist program. My biggest takeaway from this post is that there is a strong relationship between cyclomatic complexity and testing: determining each linearly independent path and making sure each one produces the desired functionality is an effective way to prevent bugs.

October 1, 2017

-Caleb Pruitt


Could Robotics Process Automation (RPA) Be the Future of Testing?

According to blogger Swapnil Bhukan, robotic process automation is indeed the future of software testing. In his blog post “Robotic Process Automation(RPA) evolution and it’s impact on Testing,” he predicts that RPA will perform about “50 to 60% of testing tasks” by the year 2025. If you are not familiar with RPA, Bhukan has also written a post describing what it is, how it works, and some of its benefits. Essentially, RPA is a way to automate repetitive tasks using bots that are taught how to perform them. According to Bhukan, RPA currently performs only about 4% of software testing tasks, but that is sure to change as the technology advances. Today, the main use case for RPA is fairly basic data-entry work.

The reason I chose to write about Bhukan’s post is that I found it quite interesting, especially since I could relate the growth of RPA to my own past experience. Over the summer I had the opportunity to work an internship at an insurance company, and all the IT interns had the pleasure of sitting down with the EVP/Chief Innovation Technology Officer and asking him some questions. I asked him what new technologies the company was looking to invest in and which ones he was most excited about. His answer to both questions, given with hardly a moment’s thought, was RPA. Companies today are striving harder and harder to automate as many tasks as possible in order to save money.

One of the downsides to RPA, as Bhukan points out, is that it could put many software testers out of a job. Some of the things keeping RPA from taking over the field of software testing at the moment are budgetary issues (RPA software is pretty expensive), companies’ reluctance to adopt such new technology, and apprehension about losing customers if the tests aren’t done correctly. However, I believe software testers may simply have to realign their expertise as RPA technology evolves. By this I mean that software testing professionals and developers should begin learning how to teach these bots and leverage their usefulness in completing repetitive tasks; after all, the bots can only be as smart as those who teach them. I think Bhukan shares this view when he says at the end of his post, “sooner or later we (Software testing professionals) need to upgrade our skill set to train the Robots.”


September 24, 2017

-Caleb Pruitt

