
Wednesday, October 30, 2013

Automation logging and reporting

Why write a post about logging?


As an automated tests developer, I often encounter confusion around two terms that might seem similar but are actually quite different. Automation engineers sometimes mix up "test reporting" and "test application logging" in their implementation of a testing framework. I don't blame them; reporting and logging may share common elements which are reflected externally. At certain points in the test's building blocks, one might wonder: "Where should I write this line of information? To the test's report, or to the log?"

Testing may be a tedious task, but it is an inseparable part of any development routine. Giving a reliable and easy-to-read status picture of the system under test is an essential requirement of every automated testing framework. Furthermore, decision making based on reliable testing results should blend seamlessly into the application lifecycle management process. Assuming that the information to be reflected is reliable, you still need to choose the right platform to expose it through. You wouldn't want business information in your implementation logs, and your manager wouldn't know what to do with exception details presented in the test report. The relevant information should be presented to the right eyes.


Definition of Terms


Let's start with automation logging. Just as in any regular, non-testing application, logging means writing technical information to a designated file (or to multiple files) during program execution. Each stage the application executes should produce a line of log information, so that problematic events can be traced later on if required. The data in the logs describes the implementation of the software being executed: method calls, the classes a flow uses, loops the application goes through, conditional branches, exceptions thrown, and so on.

As test developers, we usually would not place business data in the test logs (later on, I'll explain why I chose to write 'usually'). Information like "Test xyz passed/failed" has no meaning there and wouldn't help in any way when you debug your automation code.
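To make this concrete, here is a minimal sketch of implementation-level logging in Java, using java.util.logging; the class and the flow it drives are hypothetical, and only illustrate the kind of technical detail that belongs in a log rather than in a report.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LoginFlow {
        private static final Logger LOG = Logger.getLogger(LoginFlow.class.getName());

        // Hypothetical test-framework step; it logs implementation detail only.
        public void submitCredentials(String user) {
            LOG.fine("Entering submitCredentials()");
            try {
                // ... drive the UI or call the API under test ...
                LOG.fine("Credentials form located, invoking submit() for " + user);
            } catch (RuntimeException e) {
                // Exceptions and stack traces belong in the log, not the report.
                LOG.log(Level.SEVERE, "submit() threw while locating the form", e);
                throw e;
            }
            LOG.fine("Leaving submitCredentials()");
        }
    }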

Automation test reports, on the other hand, should include information about all AUT-related tests and checks executed. The reported data is business-oriented information about the actual purpose of the execution (which is to test a portion of a product), presented in a detailed manner. Most of the report should consist of pass/fail statuses, and its dominant colors should be green and red (preferably green, of course).

There's no point in placing implementation details (such as the classes used by the execution) in the test report, since they would mean nothing to the manager who receives the mail with the execution results. The automated tests are executed to test an application; therefore, the test report should include all data describing the execution and outcome of the automated tests.
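As an illustrative sketch, assuming TestNG, a listener can funnel only the business-level outcome into the report; the report sink below is a simple stand-in for a real HTML or email report writer.

    import org.testng.ITestResult;
    import org.testng.TestListenerAdapter;

    public class BusinessReportListener extends TestListenerAdapter {
        @Override
        public void onTestSuccess(ITestResult result) {
            // Green row: only the business outcome, no implementation detail.
            report(result.getName(), "PASS");
        }

        @Override
        public void onTestFailure(ITestResult result) {
            // Red row: what failed, from the product's point of view.
            report(result.getName(), "FAIL");
        }

        private void report(String testName, String status) {
            // Stand-in for a real report writer.
            System.out.println(testName + " ... " + status);
        }
    }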


Figure 1. A sketch representing the idea.

As seen in Figure 1 above (sorry for over-simplifying :)), the output of executing any application is its goal: whatever functionality was intended, perhaps a solution to a problem. With automated tests, however, the output of execution is the test report. The tests are executed in order to give us a reliable status picture of the application or system under test, and the test report should present this output. There is no other purpose to executing automated tests; it all comes down to running test scenarios on the AUT and reporting the outcome.

Logs, on the other hand, are a by-product of a test automation application, as of any other app. They are aimed mainly at developers (and also at the QA and support teams), but are never intended for decision makers, since logs contain technical data.


Figure 2. Which layer writes to the reports, and which one to the logs.

The opposite view- Do not split the data


There's an approach which holds that automated test logs interest only automation developers (and sometimes a specific test developer), and that there is therefore no need to split related data into two separate destination files. Supporters of this view claim that, as opposed to application logs, which may be analyzed by developers, testers, and support engineers, test logs could easily be merged into the reports, since the test information and the application's actions (loops, conditions, methods, classes, etc.) are linked.

When logs and reports are separated, you start troubleshooting a problem from the red label on the report. That directs you to the implementation logs to track down the cause, and you then need to match time and context across two data sources. This approach suggests putting the logs with the reports, or merging the reports into the logs, so that you don't exhaust yourself going back and forth between them while pinpointing a problem.
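Even when the files stay separate, the back-and-forth can be eased by stamping every log line with the current test's name. Below is a minimal sketch of one such approach, assuming TestNG and SLF4J's MDC; the base class is hypothetical, and the logging pattern is assumed to include %X{testName}.

    import java.lang.reflect.Method;

    import org.slf4j.MDC;
    import org.testng.annotations.AfterMethod;
    import org.testng.annotations.BeforeMethod;

    public abstract class CorrelatedTestBase {
        @BeforeMethod
        public void tagLogLines(Method testMethod) {
            // Every log line written during the test carries the test's name,
            // so a red report row can be grepped for directly in the log.
            MDC.put("testName", testMethod.getName());
        }

        @AfterMethod
        public void untagLogLines() {
            MDC.remove("testName");
        }
    }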


Continuous Integration solves the dilemma


Extending the limits of automation even further, continuous integration systems support common testing frameworks and can determine the status of a build based on the success or failure results reported by these frameworks (JUnit, TestNG, etc.). Though they originally and mainly targeted unit tests, these well-known and thoroughly debugged testing frameworks may just as well be leveraged for complete system / end-to-end tests. Information about the executed tests goes to the logs, while all exceptions and successes are visible in the CI tracking solution. If one insists on seeing reports as well, some CI systems offer APIs for extension plug-ins.

This basically solves our problem. With continuous integration, the discussion about test logs versus reports is no longer relevant. The main advantage of integrating end-to-end tests with CI systems is that the decision on the build's status is taken out of human hands (automation or not? :)). It is all automatic, hence it requires a reliable deployment procedure and much more robust, well-written tests. Just write all of your data to the logs, and if a problem occurs, the continuous integration mechanism will raise a failure flag and point you to the error in the logs.
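For illustration, here is a minimal JUnit 4 sketch of an end-to-end check a CI server can act on; the health URL is a placeholder for whatever endpoint the system under test exposes.

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    public class EndToEndSmokeTest {
        // Placeholder endpoint for the system under test.
        private static final String HEALTH_URL = "http://localhost:8080/health";

        @Test
        public void serverRespondsOk() throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(HEALTH_URL).openConnection();
            conn.setRequestMethod("GET");
            // A failed assertion (or any exception) turns the build red;
            // the stack trace lands in the console log the CI system archives.
            assertEquals(200, conn.getResponseCode());
        }
    }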





Tuesday, August 6, 2013

The Automation Era. Is this the end of manual testing?


Automation- the new magic word


The last few years have shown an increase in the prestige of testing in general, and of software automation in particular. More and more highly skilled graduates choose automation testing as their professional career. Automation tools and simulators have become more reliable and sophisticated. Test automation's advantages are too obvious and meaningful to miss out on. Every professional understands the importance of such tests to the organization, and to the elevated quality of the delivered product.

But what is it about automation testing that makes it so desirable to organizations and to (mainly new) team/group leaders? Well, when performed correctly, automated tests run in a precise and consistent manner on each execution. Automation can save time, and put that time to ideal use through constant, ongoing execution of tests. It can maximize the use of the machines and other setup peripherals available while covering a wide range of test cases. Automation allows tests to run at a resolution only a machine is capable of, thus avoiding human error.


Is automation really limitless?


Many testing terms could be used in the automation context. When talking about automation, one might hear expressions like 'Functional Tests', 'System Tests', 'Stress Tests', 'Acceptance Tests', 'Security Tests', 'Sanity Tests', White/Black Box Tests, etc. All of the above, and much more, can be performed by automated test scenarios in one way or another. Furthermore, for some testing methodologies, such as 'Regression Tests' or 'Load Tests', automation can be the ideal solution.

But there are test types which, by definition, automation cannot answer. Take 'Exploratory Testing' for instance: a testing style that is not predetermined and has no written, predefined scenario. It is a 'free-style' type of testing, in which the tester explores the tested application, performs actions invented on the fly, and finds bugs while walking through the AUT. It's important to mention that today's exploratory test can become part of tomorrow's functional tests, especially if a bug was found during its execution; in that case, the test scenario is documented and added to the legacy regression flows. More important is the fact that many errors in the AUT are found this way, just by strolling through the application and exploring it.

This kind of test (exploratory) could never be done by automation, since automated tests are predefined, documented flows, whereas exploratory testing is script-less, unexpected, and undocumented. Since many bugs are found by testing in this style, one cannot afford to leave them to be found by actual, real clients after the software product is released. Manual testers who are familiar with the tested application, and who know its pitfalls, are needed to cover the system by exploring it and revealing its hidden bugs.


What about Usability Testing? Could automated tests be the answer for this methodology? Usability testing is a style in which the system is checked for conceptual/logical/product-level/UX errors (often by users outside of QA). At this stage, software aspects like user friendliness, ease of use, and the general look & feel of the application are checked. One might not expect "traditional" bugs to be found in this phase, only "usability bugs", which require a higher level of system analysis.

Conventional tests are executed in strict correlation with the testing documentation. Success criteria are clearly defined, and test outcomes are measured with Pass/Fail annotations. This is almost the opposite of usability testing, in which the tester needs to state the problem and actually explain (and convince others of) what feels wrong with the system. It's not a 'black or white' situation. This is why automation is useless for usability testing: it is built for a dichotomous distinction between success and failure, whereas the outcome of usability testing is more descriptive, abstract, and sometimes arguable.


Where lies the real power of automation?


Leaving unit tests and performance tests aside (coded tests and automation tools are obviously very adequate for those), automated tests are ideally used for regression tests. Elements that are known to be operational and relatively stable still need to be checked, and this is where automation steps in. All legacy tests should be included in the regression, as well as new features which were tested and are now an integral part of the released version. To all of that, one needs to add the fixed-bug scenarios, which widen the scope of the regression over time. This is part of the reason that automated tests best fit this testing type. The released build is mature and doesn't change all the time (unlike new features and user stories in Scrum). The automated tests are developed against a static, stable product, so during test development, debugging is needed only for the automated tests, not for the tested application.
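As a minimal sketch, assuming TestNG, a 'regression' group lets fixed-bug scenarios join the suite by annotation alone; the test names and the inlined stand-in for the AUT are illustrative only.

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class RegressionSuiteSketch {
        // Inlined stand-in for the application under test.
        private boolean checkout(String item) {
            return item != null && !item.isEmpty();
        }

        @Test(groups = {"regression"})
        public void legacyCheckoutFlow() {
            // Long-stable behavior, re-checked on every run.
            Assert.assertTrue(checkout("basic-item"));
        }

        @Test(groups = {"regression", "bugfix"})
        public void emptyItemCheckoutRejected() {
            // Scenario added after a bug fix; the regression scope widens over time.
            Assert.assertFalse(checkout(""));
        }
    }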

Furthermore, covering the regression with reliable automated tests frees the manual testers to handle other types of tests (exploratory and usability, for instance :)), increases the range of tests, and hence makes the released version better. Manual testers would be freed from doing the same monotonous tasks over and over again, and that would surely have a positive impact on their motivation.


Standing still is going backwards


As I have tried to demonstrate in this article, there are quite a few good reasons not to neglect manual testing, even when automation is currently executed in a satisfactory manner. No organization would allow itself to release a product which had not been tested properly. Manual testers are essential to a better-quality delivery. Apart from exploratory and usability tests, these testers have many issues to cover, which is why QA engineers are the people most familiar with the released application.

That said, in this business everyone must keep up with advancements in technology. Any professional needs to catch up with the commonly used and evolving working trends. Even though manual testers will still be imperative to any software testing process for the foreseeable future, they need to realize that scripting, and coding in general, are increasingly important to their day-to-day work. Knowing your way around scripts will undoubtedly increase your efficiency while minimizing the time spent on repetitive setup and testing tasks. There is an increasing demand for testers with development capabilities at various levels of coding skill.

Manual testers have been unjustly underrated for years. They (we) were referred to as 'second best', as 'temporary' and 'dispensable' employees. No more!! If an answer to the question in the title is needed, then the bottom line is: "No. Manual testing is still a critical part of the overall testing routine." You're safe for now. Nevertheless, every professional in general, and QA engineers especially, must constantly aspire to take their work and knowledge to the next level, for their own professional future and for the benefit of everyone else.