Quality Assurance

We care about quality

We continuously improve our internal quality assurance system and streamline our procedures so that every product that reaches our customers is of the highest possible quality.
Our quality assurance system consists of:

  • a Quality Department (Quality Gate) – every item the Customer receives passes through it; the Quality Department must approve any new version of a website, a newly created system, or even a specification
  • test scenarios – descriptions of test sequences for Quality Department testers, maintained for all large systems
  • a bug-tracking system (Mantis)
  • automated-testing modules within systems (the concept of building quality into the software as it is developed)
    • unit tests run during every build – verifying that individual system elements behave correctly (JUnit, PHPUnit)
    • automated functional tests run at regular intervals on built systems or websites, simulating real user actions on the user interface (Selenium)
  • error-reporting mechanisms – if an error occurs during automated tests, a notification (e-mail or text message) is sent to designated people so that the error can be fixed quickly
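A unit test of the kind run during every build can be sketched in plain Java. In practice JUnit would supply the test runner and assertions; here everything is dependency-free and all class and method names are illustrative, not taken from an actual 3e system:

```java
// Dependency-free sketch of a unit test; JUnit's @Test and assertEquals
// would normally be used. The price calculator is purely illustrative.
public class PriceCalculatorTest {

    // The unit under test: adds VAT to a net price (integer arithmetic).
    static int grossPrice(int netPrice, int vatPercent) {
        return netPrice + netPrice * vatPercent / 100;
    }

    static void assertEquals(int expected, int actual, String testName) {
        if (expected != actual) {
            throw new AssertionError(testName + ": expected " + expected + ", got " + actual);
        }
        System.out.println("OK: " + testName);
    }

    public static void main(String[] args) {
        assertEquals(123, grossPrice(100, 23), "adds 23% VAT");
        assertEquals(100, grossPrice(100, 0), "leaves price unchanged at 0% VAT");
    }
}
```

A build fails as soon as one such assertion throws, which is what makes the build process itself a quality gate.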

We also run performance tests that simulate a given load on a system, service or database in a specified local or distributed environment (JMeter).
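JMeter generates such load in practice; the underlying idea – many concurrent clients issuing requests while completed calls and worst-case latency are recorded – can be sketched in plain Java. The `serviceCall` below is a hypothetical stand-in for a real request:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a load test: `users` concurrent clients each make
// `callsPerUser` calls while worst-case latency is tracked.
public class LoadTestSketch {

    // Hypothetical stand-in for a call to a system, service or database.
    static void serviceCall() throws InterruptedException {
        Thread.sleep(2); // simulate 2 ms of work
    }

    // Returns the number of completed calls; prints the worst latency seen.
    static long runLoad(int users, int callsPerUser) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicLong completed = new AtomicLong();
        AtomicLong maxLatencyNanos = new AtomicLong();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < callsPerUser; i++) {
                    long start = System.nanoTime();
                    try { serviceCall(); } catch (InterruptedException e) { return; }
                    maxLatencyNanos.accumulateAndGet(System.nanoTime() - start, Math::max);
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("worst latency: " + maxLatencyNanos.get() / 1_000_000 + " ms");
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed calls: " + runLoad(20, 10));
    }
}
```

A tool like JMeter adds the parts deliberately left out here: ramp-up schedules, percentile statistics, and distributed load generators.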
The most important element of software quality assurance is the concept of building quality into the software as it is developed. It is closely connected with Test-Driven Development: tests for a given element are written first, and only then does its implementation begin. These tests are maintained and run throughout the later stages of development, which helps ensure that no newly introduced change breaks already existing parts of the system.
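The test-first cycle described above can be illustrated with a minimal sketch. Conceptually, the checks in `main` were written before `isValidLogin` existed and initially failed; the implementation was then added until they passed. The login-validation rules here are invented for illustration:

```java
// TDD sketch: the tests (step 1) conceptually predate the implementation
// (step 2). All rules and names are illustrative, not a real 3e component.
public class LoginValidatorTdd {

    // Step 2: implementation, written to satisfy the pre-existing tests.
    static boolean isValidLogin(String login) {
        return login != null
                && login.length() >= 3
                && login.chars().allMatch(Character::isLetterOrDigit);
    }

    static void check(boolean condition, String testName) {
        if (!condition) throw new AssertionError("FAILED: " + testName);
        System.out.println("OK: " + testName);
    }

    // Step 1: the tests, written before the implementation existed.
    public static void main(String[] args) {
        check(isValidLogin("jkowalski"), "accepts a normal login");
        check(!isValidLogin("jk"), "rejects logins shorter than 3 characters");
        check(!isValidLogin("j kowalski"), "rejects logins containing spaces");
        check(!isValidLogin(null), "rejects a missing login");
    }
}
```

Because the tests are kept and re-run on every build, a later change to `isValidLogin` that breaks an old rule is caught immediately.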

Another important element of quality assurance is continuous integration. We use a continuous integration server that performs compilation/installation and full configuration (including databases) and runs the whole suite of automated tests after every change committed to the code repository. If the tests reveal any irregularities, we are notified immediately and the programmers fix them.
The described mechanisms not only give us constant control over the quality of our software, but also let us deliver new versions much more quickly.

Methods and practices for high quality and security

  • ITIL
  • Continuous integration practice
  • Functional tests (manual and automated)
  • Performance tests
  • Penetration tests
  • Our solutions comply with the regulations of the Inspector General for Personal Data Protection

Test Process Automation

Manual software testing is monotonous and time-consuming. Moreover, the need to run regression tests (which check that previously implemented functionality still works) alongside each new feature steadily increases the testers' workload, and these tests cannot be skipped without lowering testing quality. Automating the testers' tasks is therefore an ideal solution.

Fully implementing test automation at 3e was not a simple task and required changing the whole application build cycle. The first attempts involved just an automation tool (at that time Selenium 1.0.3) and were only partially successful, because the tests were difficult to build where the application's functionality was more complicated. Moreover, we had problems reconstructing a given state of the application. To solve these issues, we implemented database versioning mechanisms. For this we use the dbMaintain suite, which makes it simple to rebuild the test-environment database from scratch and to update the databases in the production and development environments. The new Selenium 2 release (the project merged with the WebDriver library) also helped streamline the process. It should be added that our company is the author of a popular open-source library for creating tests based on Selenium 2/WebDriver. The last step was to bind everything together in an Ant script and configure the Hudson continuous integration server. The final build and automated-testing process is outlined below:
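Before walking through that outline, the shape of such a binding Ant script can be sketched. Everything below – target names, paths, and the dbMaintain invocation – is an illustrative placeholder, not our actual build file:

```xml
<!-- Illustrative Ant build sketch: compile, rebuild the test database,
     run unit tests, then run functional tests. All names are hypothetical. -->
<project name="app" default="ci-build">

  <target name="compile">
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>

  <target name="reset-test-db">
    <!-- dbMaintain rebuilds the test database here from the versioned
         SQL scripts kept in the application's code structure -->
    <java jar="lib/dbmaintain.jar" fork="true">
      <arg value="updateDatabase"/>
    </java>
  </target>

  <target name="unit-tests" depends="compile">
    <junit haltonfailure="true">
      <classpath path="build/classes"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>

  <target name="functional-tests" depends="compile,reset-test-db">
    <!-- Selenium 2 / WebDriver tests run against the deployed test instance -->
    <junit haltonfailure="true">
      <classpath path="build/classes"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*FunctionalTest.class"/>
      </batchtest>
    </junit>
  </target>

  <target name="ci-build" depends="unit-tests,functional-tests"/>
</project>
```

A continuous integration server such as Hudson then only has to check out the repository and invoke the default target after every commit.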


All source code is kept in the code repository. Each programmer uploads their changes to the repository after finishing work. Because the repository server versions the source files, we know what was changed or added in the source code and by whom. The application build process is always based on the current code from the repository. This ensures that the tested version is the correct one and that the very same version will later be deployed to production.


The database is a dynamically changing element of the system. Its structure often changes during development, and the data is constantly modified as a result of testing. Most tests, whether automated or manual, need some initial data to start from – configuration for the system or for the process being tested. This is why, before each test run, it is important to bring the database to a defined state, in terms of both its current structure and its test data. The dbMaintain tool serves this purpose. It performs efficient versioning of the database structures and of the loaded test data, enforces keeping all SQL files in the application's code structure, and applies only the required scripts during an update. When rebuilding the whole database, dbMaintain first drops it and then loads the original structures, the test data, and each update one by one. Thanks to this approach, the same, most up-to-date state of the database is always obtained. What is more, the scripts that update previous versions of the database are tested as well, which matters when updating production databases.
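dbMaintain itself does the real work; the core idea – apply versioned scripts in order and skip the ones a given database has already executed – can be sketched as follows. The script names and the in-memory "executed scripts" set are illustrative (dbMaintain tracks this in a database table):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of incremental database versioning in the style of dbMaintain:
// scripts are kept sorted by version, and only those not yet executed
// against this database are applied.
public class MigrationRunnerSketch {

    static List<String> apply(List<String> allScripts, Set<String> executed) {
        List<String> appliedNow = new ArrayList<>();
        for (String script : allScripts) {           // sorted by version prefix
            if (executed.contains(script)) continue; // already on this database
            // here the SQL file would actually be run against the database
            executed.add(script);
            appliedNow.add(script);
        }
        return appliedNow;
    }

    public static void main(String[] args) {
        List<String> scripts = List.of(
                "001_create_tables.sql", "002_test_data.sql", "003_add_index.sql");
        // this database has already run the first script
        Set<String> executed = new HashSet<>(List.of("001_create_tables.sql"));
        System.out.println(apply(scripts, executed)); // only 002 and 003 run
    }
}
```

Rebuilding from scratch corresponds to starting with an empty `executed` set, which is why the rebuilt test database and an incrementally updated production database converge on the same state.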


Test automation is a key technology for building reliable internet and intranet applications. Automated testing mechanisms, much like an airplane's self-check systems, verify that the individual elements of the application work and that they operate correctly with one another. Unit tests exercise the internal components of the system, whereas functional tests simulate real user actions and verify that the results of those actions are the expected ones. The functional tests we create can be compared to a programmed robot that keeps clicking through the browser, checking individual application functions.
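The structure of such a "robot" – fill in a form, click, verify the result – looks the same whether Selenium 2/WebDriver drives a real browser or, as in this self-contained sketch, a tiny simulated page stands in for it. All element ids and the login flow are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Functional-test "robot" sketch. A real test would use Selenium 2 /
// WebDriver against a browser; a fake in-memory page stands in here
// so the sketch runs anywhere. Everything shown is illustrative.
public class FunctionalTestSketch {

    // Simulated page: typing into fields and clicking the login button.
    static class FakeLoginPage {
        private final Map<String, String> fields = new HashMap<>();
        void type(String fieldId, String text) { fields.put(fieldId, text); }
        String clickLogin() {
            boolean ok = "demo".equals(fields.get("user"))
                    && "secret".equals(fields.get("password"));
            return ok ? "Welcome, demo" : "Invalid credentials";
        }
    }

    static String runLoginScenario(String user, String password) {
        FakeLoginPage page = new FakeLoginPage();
        page.type("user", user);          // the robot fills in the form...
        page.type("password", password);
        return page.clickLogin();         // ...clicks, and reads the result
    }

    public static void main(String[] args) {
        // verify both the expected success and the expected failure path
        if (!runLoginScenario("demo", "secret").equals("Welcome, demo"))
            throw new AssertionError("valid login should succeed");
        if (!runLoginScenario("demo", "wrong").equals("Invalid credentials"))
            throw new AssertionError("invalid login should be rejected");
        System.out.println("login scenario passed");
    }
}
```

Swapping `FakeLoginPage` for a WebDriver-backed page object is what turns this sketch into a real browser-driving functional test.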


From the perspective of an IT project manager, there are many indicators that help determine both the level of project execution and the likely reliability of the application being built. These indicators are:

  • statistics of the automated tests (how many passed, how many failed, and whether the number of failures gradually decreases as implementation progresses),
  • the percentage of source code covered by unit tests (it is possible to check which parts of the code are exercised while the automated tests run; in general, the larger the portion of the source code covered by these tests, the lower the likelihood of errors in it; this obviously does not rule out manual testing),
  • the level of source code documentation (the percentage of classes and methods in the source code that have comments).
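The first two indicators above reduce to simple percentages over raw counts, as this sketch with made-up numbers shows:

```java
// Sketch: computing two quality indicators from raw counts.
// All numbers are invented for illustration.
public class QualityIndicatorsSketch {

    static double percent(int part, int total) {
        return total == 0 ? 0.0 : 100.0 * part / total;
    }

    public static void main(String[] args) {
        int testsPassed = 188, testsFailed = 12;   // automated test results
        int linesCovered = 8_420, linesTotal = 10_000; // unit-test coverage

        System.out.printf("tests passed: %.1f%%%n",
                percent(testsPassed, testsPassed + testsFailed));
        System.out.printf("unit-test coverage: %.1f%%%n",
                percent(linesCovered, linesTotal));
    }
}
```

Tracked build after build, the trend in these percentages is more informative than any single value.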

At 3e Software House, we use all the above-mentioned techniques to make sure that the software we develop is of the highest quality and that our Customers can rely on the applications we provide.