Welcome to the New Marketing to Convert Blog!

I have now moved from wordpress.com to a self-hosted blog and want to share my experience with the move. The main difficulties were:

  • Comparing hosting providers for WP blogs and identifying the important features an offer should include.
  • Coming up with a domain name for the blog. I had to analyze the broad spectrum of topics represented on this blog and came up with Marketing to Convert as a name conveying the essence and purpose of the articles.
  • Finding suitable and GDPR-compliant plugins for analytics, cookies, etc.
  • Figuring out how to re-implement the functionality of the previous blog (e.g. the WordPress follow button is now implemented through the email subscription widget).
  • Fine-tuning the theme and the layout (I kept the same theme as on allaroundmarketing.wordpress.com; however, it still looks different on a self-hosted blog). I chose this theme for its good color contrast and overall sleek, stylish look, but I am still not 100% satisfied with the font readability.
  • Setting up additional functionality, such as social sharing buttons, etc.
  • Selecting a suitable SEO plugin (I went with Yoast first but did not like its functionality, so I switched to Rank Math instead).
  • Updating the connected social media accounts, including my personal accounts.

There are still some tasks that remain open, such as changing the blog logo and further exploring the additional functionality now available. I am also still thinking about how to handle the existing (old) blog and make the transition as smooth as possible (e.g. checking for backlinks, handling subscriptions, etc.). There is also a lot of SEO optimization to be done both for the articles and for the blog as a whole. However, at this point, I am satisfied with the first results and motivated to work further on improving the visibility of the new website.

 

Software Development: Measuring and Improving Your Code Coverage

Code coverage, also called test coverage, measures the share (percentage) of code executed during a given test run. It gives you as a developer an overview of how much of the code has been tested and thus helps improve software stability and security.

What do you actually measure?

[Image: control flow diagram]

Below are some common coverage metrics; a small example follows the list:

  • Statement coverage – the number of statements in the source code that have been executed during the test, divided by the total number of statements
  • Line coverage – the number of lines of code that have been executed, divided by the total number of lines (normally almost identical to statement coverage)
  • Basic block coverage – considers the execution of whole basic blocks (straight-line code sequences), without taking edges into account
  • Function coverage – takes into account only the invocation of functions
  • Edge coverage – the number of edges of the control flow graph that have been followed during the test, divided by the total number of edges (this subsumes block coverage)
  • Path coverage – takes whole paths through the control flow graph into account (covers more than edge coverage; there is a potentially infinite number of paths)
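
To make the difference between these metrics concrete, here is a minimal C sketch (an illustrative example of my own, not taken from any particular project):

    #include <stdio.h>

    /* clamp_positive(-5) alone executes every statement in this function
       (100% statement coverage), yet the branch where x >= 0 is never
       taken, so edge coverage stays incomplete; a second test such as
       clamp_positive(3) is needed to cover the remaining edge. */
    int clamp_positive(int x) {
        int result = x;
        if (x < 0)
            result = 0;   /* only executed for negative inputs */
        return result;
    }

    int main(void) {
        printf("%d\n", clamp_positive(-5));  /* covers all statements */
        printf("%d\n", clamp_positive(3));   /* covers the remaining edge */
        return 0;
    }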

How does it work?

The most common use case for code coverage is to supply additional data to traditional software tests such as unit, integration or system tests. When such a test runs with code coverage enabled, the test suite will generate data files containing information about the execution of functions, basic blocks or lines of code.

In order for coverage information to be generated during a test, the software under test has to be modified. The process of inserting code responsible for generating coverage data into the software is called instrumentation. Usually this is done by the compiler. A compiler is a program that translates source code written in a programming language into machine code. Modern compilers also have insight into semantic and syntactic information such as the CFG (control flow graph). This makes it easy to identify functions, basic blocks or edges and to add instrumentation in the right places. One notable example is GCC (the GNU Compiler Collection).

The next step is generating coverage data. One of the common tools is gcov (the GNU coverage tool), which works only on GCC-generated code. It analyzes the program and discovers any untested parts. Using gcov it is possible to gather information on:
– how often each line of code executes
– what lines of code are actually executed
– how much computing time each section of code uses

After the test execution, the coverage information can be found in the corresponding .gcno and .gcda files. The names of these files are derived from the original object file by substituting the file suffix with either .gcno or .gcda; the .gcno notes file is written at compile time, while the .gcda data file is written when the instrumented program runs. All of these files are placed in the same directory as the object file and contain data stored in a platform-independent format.
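
As a minimal sketch of this workflow (assuming a GCC toolchain; file and program names are illustrative):

    /* cov_demo.c -- minimal program for a gcov walkthrough.
       Typical workflow:
         gcc --coverage -O0 cov_demo.c -o cov_demo   # instruments the code, writes cov_demo.gcno
         ./cov_demo 7                                # running the binary writes cov_demo.gcda
         gcov cov_demo.c                             # produces the annotated report cov_demo.c.gcov
    */
    #include <stdio.h>
    #include <stdlib.h>

    int sign(int n) {
        if (n < 0)
            return -1;   /* the .gcov report shows how often each line ran */
        return 1;
    }

    int main(int argc, char **argv) {
        int n = (argc > 1) ? atoi(argv[1]) : 0;
        printf("sign(%d) = %d\n", n, sign(n));
        return 0;
    }

The annotated cov_demo.c.gcov file then shows an execution count next to each source line.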

Sometimes it is necessary to produce more sophisticated reports, which can be done, for example, with:

Lcov is a graphical front-end for gcov. It collects gcov data for multiple source files and creates HTML pages containing the source code annotated with coverage information. It also adds overview pages for easy navigation within the file structure.

Gcovr provides a utility for managing the use of gcov and generating summarized code coverage results. Gcovr produces compact human-readable summary reports, machine-readable XML reports or a graphical HTML summary.
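
For example, typical invocations might look like this (paths and output names are illustrative):

    lcov --capture --directory . --output-file coverage.info   # collect the .gcda data
    genhtml coverage.info --output-directory coverage-html     # render annotated HTML pages
    gcovr -r . --xml -o coverage.xml                           # machine-readable XML summary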

Alternative: coverage-based fuzzing 

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as input to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Modern fuzz-testing software relies heavily on coverage data to execute as many paths as possible in the software under test. The combination of high code coverage and randomized inputs is believed to trigger bugs that are not easily detected by traditional methods.

In order to use coverage-based fuzzing, the user has to compile the software under test with special instrumentation and start the fuzzing process. The instrumentation process can be greatly enhanced using SanitizerCoverage.
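
As an illustrative sketch, a minimal fuzz target in the style of LLVM's libFuzzer could look like this (the planted bug is hypothetical):

    /* fuzz_target.c -- minimal coverage-guided fuzz target (libFuzzer style).
       Assumed build command: clang -g -fsanitize=fuzzer,address fuzz_target.c */
    #include <stddef.h>
    #include <stdint.h>

    /* libFuzzer calls this entry point repeatedly with mutated inputs,
       using coverage feedback to favor inputs that reach new edges. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        /* Hypothetical defect: crash on inputs starting with "BUG!". */
        if (size >= 4 &&
            data[0] == 'B' && data[1] == 'U' &&
            data[2] == 'G' && data[3] == '!') {
            __builtin_trap();   /* simulated crash for the fuzzer to discover */
        }
        return 0;
    }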

Sanitizers are tools for dynamic code analysis. Unlike classic code coverage, where the compiler adds instructions that write the coverage data to a designated memory region, with SanitizerCoverage the compiler inserts callbacks* that have to be implemented by the user. The instrumentation points at which these callbacks are placed can be chosen by the user (basic block, function and edge coverage). It is also possible to instrument pieces of code that are not considered code coverage metrics in the traditional sense.
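
For illustration, here is a minimal sketch of user-implemented callbacks for clang's trace-pc-guard mode, closely following the documented SanitizerCoverage interface (build flag assumed: -fsanitize-coverage=trace-pc-guard):

    /* Assumed build: clang -fsanitize-coverage=trace-pc-guard program.c callbacks.c */
    #include <stdint.h>
    #include <stdio.h>

    /* Called once at startup for each instrumented module; assigns every
       edge guard a unique id so later hits can be told apart. */
    void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
        static uint32_t next_id;
        if (start == stop || *start) return;   /* already initialized */
        for (uint32_t *guard = start; guard < stop; guard++)
            *guard = ++next_id;
    }

    /* Called on every instrumented edge; zeroing the guard makes each
       edge report only its first execution. */
    void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
        if (!*guard) return;
        fprintf(stderr, "new edge: guard %u\n", *guard);
        *guard = 0;
    }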

In order to trace the data flow throughout a program’s execution, comparisons, divisions and some array operations, for example, can be instrumented. When data flow tracing for comparisons is enabled, the compiler adds a callback right before each comparison and supplies both compared values as parameters. The fuzzer can implement the callback and use the constant comparison value as a new input. This can dramatically reduce the number of runs needed to trigger a crash.
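
A sketch of such comparison callbacks (assuming clang's trace-cmp instrumentation; depending on the code under test, further sibling callbacks such as the switch-tracing hook may also have to be provided):

    /* Assumed build: clang -fsanitize-coverage=trace-pc-guard,trace-cmp program.c */
    #include <stdint.h>
    #include <stdio.h>

    /* Invoked right before every 4-byte integer comparison with both
       operands; a fuzzer can record them and reuse the constant side
       as a new input candidate. */
    void __sanitizer_cov_trace_cmp4(uint32_t arg1, uint32_t arg2) {
        fprintf(stderr, "cmp4: %u vs %u\n", arg1, arg2);
    }

    /* Stubs for the other operand widths the instrumentation emits. */
    void __sanitizer_cov_trace_cmp1(uint8_t a, uint8_t b)    { (void)a; (void)b; }
    void __sanitizer_cov_trace_cmp2(uint16_t a, uint16_t b)  { (void)a; (void)b; }
    void __sanitizer_cov_trace_cmp8(uint64_t a, uint64_t b)  { (void)a; (void)b; }

    /* Comparisons against constants use separate callbacks. */
    void __sanitizer_cov_trace_const_cmp1(uint8_t a, uint8_t b)    { (void)a; (void)b; }
    void __sanitizer_cov_trace_const_cmp2(uint16_t a, uint16_t b)  { (void)a; (void)b; }
    void __sanitizer_cov_trace_const_cmp4(uint32_t a, uint32_t b)  { (void)a; (void)b; }
    void __sanitizer_cov_trace_const_cmp8(uint64_t a, uint64_t b)  { (void)a; (void)b; }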

(*A callback function is a function that is called through a function pointer. If you pass the pointer (address) of a function as an argument to another function, and that pointer is later used to call the function it points to, a callback is made.)

Coverage-based fuzzing – is it the answer?

You sometimes hear critical voices saying that even 100% code coverage does not automatically imply 100% quality and security of the software. The point is not to aim only at the percentage but to use valid and effective testing methods and scenarios. Coverage-based fuzzing using sanitizers provides valid results on code quality and thus helps to monitor and improve the software development process, even in agile environments.


Blog Redesign

The new year has brought a new design for my blog. The old theme (Spectrum) was too bright and, besides, did not render longer post titles correctly. I chose the new design for its simplicity and business-like color palette. The main blog colors are white and warm brown (#474747), with some elements in light sea green (#33acac). For the headings, I chose Gentium Book Basic, which looks like an old newspaper font and matches the informative purpose of this blog.

[Image: All Around Marketing blog logo]

My blog was also lacking a logo, which was especially noticeable on the social profiles linked to it. Basically, there are three types of logos: a symbol, a wordmark (consisting only of typed letters) and a combination logo. I chose the third type, which normally includes a symbol and some text. To create the logo, I got acquainted with Adobe Illustrator, particularly with its shape-creation tools. The logo includes four segments at the top that represent my main areas of interest.

Firstly, content is about creating and editing texts for websites, as well as researching and planning them. User interface (UI) has to do with the graphical elements of a website, such as colors, banners, CTAs, etc. Both content and UI are important for increasing the conversion rate of a website. Traffic is about attracting users to a website by promoting it online and offline. And finally, analytics is about evaluating the data on traffic and conversion and adjusting performance accordingly.

Below the four segments, the names of the blog and the author are included. The logo is completed by an arc-like shape that gives it an even appearance and picks up one of the main blog colors.

All in all, the design of the blog and the connected social media profiles has become sleeker, more professional and more uniform.

[Image: streamlined version of the logo]