Software Development: Measuring and Improving Your Code Coverage

Code coverage, also called test coverage, measures the amount (percentage) of code executed during a given test run. It gives you as a developer an overview of what share of the code has been tested, and thereby helps improve software stability and security.

What do you actually measure?

[Figure: control flow diagram]

Below you will find some common coverage metrics (a small code example follows the list):

  • Statement coverage – the number of statements in the software source that have been executed during the test divided by the total number of statements in the code
  • Line coverage – the number of lines of code that have been executed divided by the total number of lines (normally identical to statement coverage)
  • Basic block coverage – the share of basic blocks (straight-line sequences of code without branches) that have been executed, without considering the edges between them
  • Function coverage – takes into account only whether each function has been invoked
  • Edge coverage – all the edges of the control flow graph that are followed during the test divided by all edges (full edge coverage implies full basic block coverage)
  • Path coverage – takes whole paths within the control flow graph into account (stronger than edge coverage; the number of paths can be practically unbounded in the presence of loops)
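
To make the differences between these metrics concrete, here is a minimal C sketch (the function and values are made up for illustration):

    int classify(int a, int b)
    {
        int r = 0;       /* statement */
        if (a > 0)       /* branch: two outgoing edges */
            r += 1;      /* statement */
        if (b > 0)       /* branch: two outgoing edges */
            r += 2;      /* statement */
        return r;        /* statement */
    }

A single call classify(1, 1) already executes every statement and line (100% statement and line coverage), but it only follows the "true" edges. Adding classify(0, 0) covers the remaining edges (100% edge coverage), yet full path coverage still requires all four combinations of the two branches: (1,1), (1,0), (0,1) and (0,0).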

How does it work?

The most common use case for code coverage is to supply additional data to traditional software tests like unit, integration, or system tests. When such a test runs with code coverage enabled, the test suite generates data files containing information about the execution of functions, basic blocks, or lines of code.

In order for coverage information to be generated during a test, the software under test has to be modified. The process of inserting code responsible for generating coverage data into the software is called instrumentation. Usually this is done by the compiler, the program that translates source code written in a programming language into machine code. Modern compilers also have access to semantic and syntactic information such as the CFG (control flow graph). This makes it easy to identify functions, basic blocks, or edges and to add instrumentation in the right place. One notable example of such a compiler is GCC (the GNU Compiler Collection).
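
As a concrete example, the following small C program can be compiled with GCC's built-in coverage instrumentation (the file name test.c is a placeholder; --coverage is shorthand for -fprofile-arcs -ftest-coverage plus linking against the gcov runtime):

    /* test.c - build with: gcc --coverage test.c -o test */
    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; i < 3; i++)
            printf("run %d\n", i);
        return 0;
    }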

The next step is generating coverage data. One of the most common tools is gcov (the GNU coverage tool), which works only on code compiled with GCC. It analyzes the program and discovers untested parts. Using gcov it is possible to gather information on (see the example after this list):
– how often each line of code executes
– what lines of code are actually executed
– how much computing time each section of code uses
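
A typical session with the small program from above might look like this (the counts are illustrative and depend on the program):

    $ gcc --coverage test.c -o test
    $ ./test                 # running the binary writes test.gcda
    $ gcov test.c
    File 'test.c'
    Lines executed:100.00% of 5
    Creating 'test.c.gcov'

The generated test.c.gcov annotates every source line with its execution count, using '-' for non-executable lines and '#####' for lines that never ran.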

After the test execution, the coverage information can be found in the corresponding .gcno and .gcda files: the .gcno notes file is produced at compile time and describes the program's structure (its basic blocks and arcs), while the .gcda data file is written when the instrumented program runs and contains the actual execution counts. The names of these files are derived from the original object file by substituting the file suffix with either .gcno or .gcda. All of these files are placed in the same directory as the object file and contain data stored in a platform-independent format.

Sometimes it is necessary to produce more sophisticated reports, which can be done, for example, with the following tools:

LCOV is a graphical front-end for gcov. It collects gcov data for multiple source files and creates HTML pages containing the source code annotated with coverage information. It also adds overview pages for easy navigation within the file structure.
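
A typical LCOV workflow for the example above (directory and file names are placeholders) is:

    lcov --capture --directory . --output-file coverage.info
    genhtml coverage.info --output-directory coverage-html

The first command collects the .gcda data into an intermediate tracefile; genhtml then renders the annotated HTML report.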

Gcovr provides a utility for managing the use of gcov and generating summarized code coverage results. Gcovr produces compact human-readable summary reports, machine-readable XML reports, or a graphical HTML summary.
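For example (the project root and output file names are placeholders):

    gcovr -r .                          # compact text summary
    gcovr -r . --xml -o coverage.xml    # machine-readable XML report
    gcovr -r . --html -o coverage.html  # graphical HTML summary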

Alternative: coverage-based fuzzing 

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Modern fuzz-testing software heavily relies on coverage data to execute as many paths as possible in the software under test. The combination of high code coverage and randomized inputs is believed to trigger bugs that are not easily detected by using traditional methods.

In order to use coverage-based fuzzing, the user has to compile the software under test with special instrumentation and start the fuzzing process. The instrumentation process can be greatly enhanced using SanitizerCoverage, a feature of the Clang/LLVM compiler.
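
A minimal sketch of such a fuzz target, using the libFuzzer entry point shipped with Clang (the "bug" here is artificial, and the flags assume a reasonably recent Clang):

    /* fuzz_target.c - build with:
     *   clang -g -fsanitize=fuzzer,address fuzz_target.c -o fuzzer
     */
    #include <stddef.h>
    #include <stdint.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        /* Reached only once the fuzzer has assembled "FUZZ" byte by
         * byte, guided by the coverage feedback. */
        if (size >= 4 && data[0] == 'F' && data[1] == 'U' &&
            data[2] == 'Z' && data[3] == 'Z')
            __builtin_trap();   /* simulated bug */
        return 0;
    }

Running ./fuzzer typically finds the crashing input within seconds, precisely because each correctly guessed byte opens up a new edge in the coverage map.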

Sanitizers are tools for dynamic code analysis. Unlike classic code coverage, where the compiler adds instructions that write the coverage data to a designated memory region, with SanitizerCoverage the compiler inserts callbacks* that have to be implemented by the user. The instrumentation points at which these callbacks are placed can be chosen by the user (basic block, function, and edge coverage). It is also possible to instrument pieces of code that are not considered code coverage metrics in the traditional sense.
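
For illustration, here is what such user-implemented callbacks can look like with Clang's trace-pc-guard mode (a sketch following the documented SanitizerCoverage interface; the target itself would be built with -fsanitize-coverage=trace-pc-guard):

    #include <stdint.h>
    #include <stdio.h>

    /* Called once per instrumented module at startup; assigns each
     * coverage guard a unique ID. */
    void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop)
    {
        static uint32_t n;
        for (uint32_t *g = start; g < stop; g++)
            if (*g == 0)
                *g = ++n;
    }

    /* Called at every instrumented point in the control flow. */
    void __sanitizer_cov_trace_pc_guard(uint32_t *guard)
    {
        if (!*guard)
            return;                       /* already reported once */
        fprintf(stderr, "guard %u hit at %p\n", *guard,
                __builtin_return_address(0));
        *guard = 0;                       /* report each point only once */
    }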

To trace the data flow throughout a program's execution, comparisons, divisions, and some array operations can be instrumented, for example. When data-flow tracing for comparisons is enabled, the compiler adds a callback right before each comparison and supplies both compared values as parameters. The fuzzer can implement this callback and use the constant comparison value as a new input. This can dramatically reduce the number of runs needed to trigger a crash, because the fuzzer no longer has to guess magic values by chance.
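
With Clang this corresponds to adding trace-cmp to the instrumentation (e.g. -fsanitize-coverage=trace-pc-guard,trace-cmp); the compiler then emits hooks such as the following, which a fuzzer can implement (the logging here merely stands in for a real input dictionary):

    #include <stdint.h>
    #include <stdio.h>

    /* Invoked right before every 32-bit comparison in the target. */
    void __sanitizer_cov_trace_cmp4(uint32_t arg1, uint32_t arg2)
    {
        /* A real fuzzer would record these operands and splice them
         * into future inputs; here we only log them. */
        fprintf(stderr, "cmp4: %u vs %u\n", arg1, arg2);
    }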

(*A callback function is a function that is called through a function pointer. If you pass the pointer (address) of a function as an argument to another function, and that pointer is later used to call the function it points to, a callback is made.)

Coverage-based fuzzing – is it the answer?

You sometimes hear critical voices saying that even 100% code coverage does not automatically imply 100% quality and security of the software. The point is not just aiming at a percentage, but using valid and effective testing methods and scenarios. Coverage-based fuzzing using sanitizers provides valid results on code quality and thus helps to monitor and improve the software development process, even in agile environments.

