Welcome to the New Marketing to Convert Blog!

I have now moved from wordpress.com to a self-hosted blog and want to share my experience with moving the site. The main difficulties were:

  • Comparing hosting providers for WP blogs and identifying important features that should be included in the offer.
  • Coming up with a domain name for the blog. I had to analyze the broad spectrum of topics represented on this blog and came up with Marketing to Convert as something conveying the essence and the purpose behind the articles.
  • Finding suitable and GDPR-compatible plugins for analytics, cookies, etc.
  • Finding how to re-implement the functionality of the previous blog (e.g. the WordPress following button is now implemented through the email subscription widget).
  • Fine-tuning the theme and the layout (I stayed with the same theme as on allaroundmarketing.wordpress.com; however, it still looked different on a self-hosted blog). I chose this theme for its good color contrast and overall sleek, stylish look, but I am still not 100% satisfied with the font readability.
  • Setting up additional functionality, such as social sharing buttons, etc.
  • Selecting a suitable SEO plugin (I went with Yoast first but did not like its functionality, so I switched to Rank Math instead).
  • Updating the connected social media accounts, including my personal accounts.

There are still some tasks that remain open, such as changing the blog logo and further exploring the additional functionality now available. I am also still thinking about how to handle the existing (old) blog and make the transition as smooth as possible (e.g. checking for backlinks, handling subscriptions, etc.). There is also a lot of SEO optimization to be done both for the articles and for the blog as a whole. However, at this point, I am satisfied with the first results and motivated to work further on improving the visibility of the new website.


Software Development: Measuring and Improving Your Code Coverage

Code coverage, also called test coverage, measures the amount (percentage) of code executed during a given test. It gives you as a developer an overview of what share of the code has been tested and thus helps improve software stability and security.

What do you actually measure?

Control flow diagram

Below you will find some common coverage metrics; a short example after the list shows how they differ:

  • Statement coverage – the number of statements in the software source that have been executed during the test divided by the total number of statements in the code
  • Line coverage – the number of lines of code that have been executed divided by the total number of lines (normally identical to statement coverage)
  • Basic block coverage – execution of all code lines within basic blocks (edges are not considered)
  • Function coverage – takes into account only whether each function has been invoked
  • Edge coverage – all the edges of the control flow graph that are followed during the test divided by all edges (this also covers basic blocks)
  • Path coverage – takes whole paths within the control flow graph into account (covers more than edge coverage; there is a potentially infinite number of paths)
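
To make the difference concrete, here is a small, hypothetical C function (a sketch, not taken from any particular test suite). A single test with a negative input executes every statement, so statement and line coverage reach 100%, yet the edge where the if-condition is false is never taken, so edge and path coverage remain incomplete:

    int abs_value(int x)
    {
        if (x < 0)    /* a test with x = -3 takes only the "true" edge       */
            x = -x;   /* executed, so statement coverage reaches 100%        */
        return x;     /* but the "false" edge (x >= 0) is never exercised,   */
    }                 /* so edge coverage stays below 100%                   */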

How does it work?

The most common use case for code coverage is to supply additional data to traditional software tests such as unit, integration, or system tests. When such a test runs with code coverage enabled, the test suite will generate data files containing information about the execution of functions, basic blocks or lines of code.

In order for coverage information to be generated during a test, the software under test has to be modified. The process of inserting code responsible for generating coverage data into the software is called instrumentation. Usually this is done in conjunction with the compiler. A compiler is basically a computer program that translates code written in a programming language (the source code) into machine code. Modern compilers also have insight into semantic and syntactic information such as the control flow graph (CFG). This makes it easy to identify functions, basic blocks or edges and to add instrumentation in the right place. One notable example of such a compiler is GCC (the GNU Compiler Collection).

The next step is generating coverage data. One of the most common tools is gcov (the GNU coverage tool), which works only on code compiled with GCC. It analyzes the program and discovers any untested parts. Using gcov it is possible to gather information on:
– how often each line of code executes
– what lines of code are actually executed
– how much computing time each section of code uses

After the test execution, the coverage information can be found in the corresponding .gcno and .gcda files. The names of these files are derived from the original object file by substituting the file suffix with either .gcno or .gcda. All of these files are placed in the same directory as the object file and contain data stored in a platform-independent format.
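
As a minimal sketch of this workflow (file and function names are my own illustrative choices, not part of any particular project): a small program is compiled with GCC's --coverage option, which writes the .gcno notes file; running the program writes the .gcda data file; and gcov then produces an annotated listing with per-line execution counts.

    /* cov_demo.c – illustrative example of the gcov workflow.
     *
     * Typical commands (shown here as comments):
     *   gcc --coverage cov_demo.c   compile with instrumentation; writes the .gcno notes file
     *   ./a.out                     run the program ("the test"); writes the .gcda data file
     *   gcov cov_demo.c             produces annotated cov_demo.c.gcov with execution counts
     */
    #include <stdio.h>

    /* Sums the even numbers in an array; the counts in the .gcov listing
     * show how often the loop body and the if-branch were executed. */
    static int sum_even(const int *values, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (values[i] % 2 == 0)
                sum += values[i];
        return sum;
    }

    int main(void)
    {
        int data[] = {1, 2, 3, 4};
        printf("%d\n", sum_even(data, 4));
        return 0;
    }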

Sometimes it is necessary to produce more sophisticated reports, which can be done, for example, with:

Lcov is a graphical front-end for gcov. It collects gcov data for multiple source files and creates HTML pages containing the source code annotated with coverage information. It also adds overview pages for easy navigation within the file structure.

Gcovr provides a utility for managing the use of gcov and generating summarized code coverage results. Gcovr produces either compact human-readable summary reports, machine readable XML reports or a graphical HTML summary. 

Alternative: coverage-based fuzzing 

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Modern fuzz-testing software heavily relies on coverage data to execute as many paths as possible in the software under test. The combination of high code coverage and randomized inputs is believed to trigger bugs that are not easily detected by using traditional methods.

In order to use coverage-based fuzzing, the user has to compile the software under test with special instrumentation and start the fuzzing process. The instrumentation process can be greatly enhanced using SanitizerCoverage.
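
For example, LLVM's libFuzzer follows exactly this pattern: the software under test only has to expose a single entry point, and compiling with clang's -fsanitize=fuzzer adds the coverage instrumentation together with the fuzzing driver. A minimal sketch (parse_header is a hypothetical stand-in for the real code under test):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical function under test: "parses" a 4-byte header. */
    static int parse_header(const uint8_t *data, size_t size)
    {
        if (size >= 4 && data[0] == 'H' && data[1] == 'D' && data[2] == 'R')
            return data[3];
        return -1;
    }

    /* libFuzzer entry point: called repeatedly with coverage-guided inputs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_header(data, size);
        return 0;   /* returning 0 tells the driver the input was handled */
    }

Built with, for example, clang -g -fsanitize=fuzzer,address fuzz_target.c -o fuzz_target, the resulting binary mutates inputs on its own and keeps those that reach new coverage.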

Sanitizers are tools for dynamic code analysis. Unlike classic code coverage, where the compiler adds instructions that write the coverage data to a designated memory region, with SanitizerCoverage the compiler inserts callbacks* that have to be implemented by the user. The instrumentation points at which these callbacks are placed can be chosen by the user (function, basic block, or edge coverage). It is also possible to instrument pieces of code that are not considered code coverage metrics in the traditional sense.
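
As an illustration, here is a minimal sketch of what such user-implemented callbacks can look like with clang's trace-pc-guard mode (compiled with -fsanitize-coverage=trace-pc-guard); the numbering and printing logic is a simplified assumption, not a complete coverage collector:

    #include <stdint.h>
    #include <stdio.h>

    /* Called once per instrumented module at startup; the compiler passes the
     * range of guard variables, one per instrumentation point (edge). */
    void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop)
    {
        static uint32_t id = 0;
        if (start == stop || *start) return;  /* initialize only once */
        for (uint32_t *g = start; g < stop; g++)
            *g = ++id;                        /* give every edge a non-zero id */
    }

    /* Called every time an instrumented edge is executed. */
    void __sanitizer_cov_trace_pc_guard(uint32_t *guard)
    {
        if (!*guard) return;                  /* edge already reported */
        printf("edge %u reached\n", *guard);
        *guard = 0;                           /* report each edge only once */
    }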

In order to trace the data flow throughout a program’s execution, comparisons, divisions and some array operations can, for example, be instrumented. When data flow tracing for comparisons is enabled, the compiler adds a callback right before each comparison and supplies both compared values as parameters. The fuzzer can implement the callback and use the constant comparison value as a new input, which drastically reduces the number of runs needed to trigger a crash.
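
Continuing the sketch above, comparison tracing in clang is enabled with -fsanitize-coverage=trace-cmp and inserts callbacks such as __sanitizer_cov_trace_cmp4 before every 4-byte comparison. What a fuzzer does with the two operands is up to its author; this simplified example only logs them:

    #include <stdint.h>
    #include <stdio.h>

    /* Invoked right before every 4-byte integer comparison in the instrumented
     * code; arg1 and arg2 are the two values being compared. A real fuzzer
     * would feed the observed constant back into its input corpus. */
    void __sanitizer_cov_trace_cmp4(uint32_t arg1, uint32_t arg2)
    {
        if (arg1 != arg2)
            fprintf(stderr, "cmp4: 0x%08x vs 0x%08x\n", arg1, arg2);
    }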

(*A callback function is a function that is called through a function pointer: if you pass the pointer (address) of a function as an argument to another function, and that pointer is later used to call the function it points to, a callback is made.)

Coverage-based fuzzing – is it the answer?

You sometimes hear critical voices saying that even 100% code coverage does not automatically imply 100% quality and security of the software. The point is not just aiming at a percentage but using valid and effective testing methods and scenarios. Coverage-based fuzzing using sanitizers provides valid results on code quality and thus helps to monitor and improve the software development process, even in agile environments.


Blog Redesign

The new year has brought a new design for my blog. The old theme (Spectrum) was too bright and, besides, did not render longer post titles correctly. I chose the new design because of its simplicity and business-like color palette. The main blog colors are white and warm brown (#474747), with some elements in light sea green (#33acac). As the font for the headings, I set Gentium Book Basic, which looks like an older newspaper font and matches the informative purpose of this blog.

All Around Marketing Blog Logo

My blog was also lacking a logo, especially on the social profiles linked to it. Basically, there are three types of logos: a symbol, a wordmark (consisting only of typed letters) and a combination logo. I chose the third type, which normally includes a symbol and some text. To create the logo, I got acquainted with Adobe Illustrator, particularly with its shape-creation tools. The logo includes four segments at the top that represent my main areas of interest.

Firstly, content is about creating and editing texts for websites, as well as researching and planning. User interface (UI) has to do with the graphical elements of a website, such as colors, banners, CTAs, etc. Both content and UI are important to increase the conversion of a website. Traffic is about attracting users to a website by promoting it on- and offline. And finally, analytics is about evaluating the data on traffic and conversion and adjusting the performance accordingly.

Below the four segments, the names of the blog and the author are included. The logo is concluded with an arc-like shape to give it an even appearance and underline one of the main blog colors.

All in all, the design of the blog and connected social media profiles has become more sleek and professional and also more uniform.

Logo – streamlined version

Trends in E-Learning

This post is a summary of my visit to the LEARNTEC Conference and contains a list of trends in digital learning.

The digitization of society has not left out the sphere of education. This post also draws on my background in adult education.

The conference basically covered two areas, school learning and learning within organizations, and presented such innovative products as:

  • An Internet platform for managing the communication between the school administration, teachers, parents and students (Comjell Company). The system allows teachers to enter marks, notify parents and gather signatures by e-mail.
  • A social network that makes it possible for teachers and students in school classes to communicate with each other and exchange information about school courses online (EDYOU).
  • A new open-source platform for producing and sharing online courses on any subject (Open Knowledge Worker).
  • A software tool for creating interactive learning content (Raptivity).
  • All-in-one software for creating content and tracking the students’ learning activity, used both by teachers and by parents (xAPI).
  • Virtual classroom systems (e.g. Reflact).
  • Mobile learning solutions, such as those from Fraunhofer Academy or Zettwerk.
  • Adaptive learning systems, e.g. Erudify (the system measures the learning progress and adapts the contents accordingly).
  • Various types of content production such as educational videos (Simpleshow), serious games (Zone 2 Connect or Yeepa), visualization tools (Dialogbild), etc.
  • New learning hardware such as PowerClicker (a wireless gadget for trainer-to-learner communication with light, vibration and sound), interactive multimedia desks, etc.

These technological products reflect the requirements for the educational systems of the future:

  • Flexible, independent of time and space used for learning;
  • Diverse in methodology and learning formats, not boring or repetitive;
  • Involving games and interactive activities to simplify the understanding and memorizing of the material (learning by doing);
  • Social, allowing for instant interaction between the instructor and the participants or among the participants;
  • Individual and learner-centered, adapting to the learner’s needs and progress;
  • Automated where appropriate, thus reducing the paperwork and simplifying classroom management;
  • Keeping the knowledge fluid and available to the community, allowing everyone to participate in knowledge creation.

The precondition for these changes to take place across our educational systems, however, is the open-mindedness of instructors and learners and their ability to embrace the changes in learning methods and tools. In short, the empowerment of lifelong learning.


14 Important Facts about Tacit Knowledge

In this blog post, I will share 14 important facts about tacit knowledge. Implicit or tacit knowledge cannot be (easily) codified, stored and transferred.

  1. Tacit knowledge or implicit knowledge is the type of knowledge that cannot be (easily) codified, stored and transferred, as opposed to explicit knowledge, or “normal” data found in books, company archives, etc.
  2. There are different degrees of “implicitness”: from subconscious knowledge, e.g. knowing how to ride a bike, to nearly explicit knowledge that could, under certain circumstances, be codified and transferred, for example unwritten workflow procedures in an organization.
  3. There exists a theory of the duality of knowledge, stating that explicit knowledge (such as information in a text) is almost always combined with implicit knowledge (“reading between the lines”).
  4. Tacit knowledge includes both common, everyday knowledge (e.g. cultural values) and knowledge applicable in a limited setting (e.g. knowledge existing in business organizations).
  5. Most tacit knowledge is never even considered for codification, as it is perceived as “obvious” by the people sharing it, and its presence is hard to detect for an outsider.
  6. A competitive advantage of an organization is almost always rooted in tacit knowledge (know-how as opposed to know-what). Thus, tacit knowledge does play a positive role in protecting the competitive advantage against imitation.
  7. Pools of tacit knowledge in an organization appear automatically as a result of organizational knowledge growth and interaction among the members of the organization. Some members may willingly accumulate “nearly tacit” knowledge in an effort to increase their importance or make themselves indispensable to the organization.
  8. Smaller companies often have a higher percentage of tacit knowledge, which is explained by the absence of codified procedures and a lower level of bureaucracy. In larger organizations, pools of tacit knowledge are likely to emerge within company departments, thus influencing the communication along the command lines and between the departments in an organization with a functional or divisional structure.
  9. Tacit knowledge can be shared by an indefinite number of people in an organization; however, it is not always passed on from one member of staff to another, nor is it automatically acquired by a new member of staff after formal training and the acquisition of explicit knowledge.
  10. Tacit knowledge inevitably influences processes and output in an organization; however, this influence is difficult to detect and measure. Furthermore, it is reflected in how members of an organization interact with each other on a daily basis and within completed projects. It often serves as an underlying reason why some projects do not result in productive team work or simply fail.
  11. An organization should strive to detect the areas where tacit knowledge has the largest influence and examine ways to control and measure this influence. Pools of tacit knowledge can be detected by observing the interactions between staff and management, analyzing the processes of internal knowledge transfer, comparing the outcomes of less and more successful projects where the reason for under-performance does not lie on the surface, as well as through a strategic analysis of the existing core competencies.
  12. After the pools of tacit knowledge have been detected, it must be decided whether to externalize the knowledge and make it explicit and available to a wider circle of recipients. In any case, it is important to monitor how tacit knowledge develops over time and whether its influence on the internal processes becomes overwhelming.
  13. Nonaka (1991) offers a conversion scheme for implicit knowledge, consisting of:
    1. socialization (observing and imitating the activity carried out with implicit knowledge);
    2. externalization (setting up a dialog between the knowledge carrier and knowledge receiver that uses verbal or visual sources for explanation);
    3. combination (arranging derived bits of explicit knowledge and making it transferable);
    4. internalization (translating the codified explicit knowledge into the individual implicit knowledge).
  14. Efforts towards the externalization of implicit knowledge should be undertaken with the idea of organizational knowledge sharing and knowledge growth in mind, not simply to codify and store data. The main problem is that the vast amounts of data in a modern organization are explicit by nature, but the ability to handle them is mostly implicit. Thus, the importance of tacit knowledge management must never be underestimated.
