Wednesday, October 30, 2013

Automation logging and reporting

Why write a post about logging?


As an automated test developer, I often encounter confusion regarding two terms which might seem similar but are actually quite different. Automation engineers sometimes mix up "test reporting" and "test application logging" in their implementation of a testing framework. I don't blame them; reporting and logging can have common elements which are reflected externally. At certain points in the test's building blocks, one might wonder: "Where should I write this line of information? To the test's report, or to the log?"

Testing may be a tedious task, but it is an inseparable part of any development routine. Providing a reliable, easy-to-read status picture of the system under test is an essential requirement of every automated testing framework. Furthermore, decision making based on reliable test results should blend seamlessly into the application life-cycle management process. Even assuming that the information to be reflected is reliable, you still need to choose the right platform to expose it through. You wouldn't want business information in your implementation logs, and your manager wouldn't know what to do with exception details presented in the test report. The relevant information should be presented to the right eyes.


Definition of Terms


Let's start with automation logging. Just as in any regular, non-testing-related application, logging means writing technical information to a designated file (or to multiple files) during program execution. At each stage of execution there should be a line of log information, so that problematic events can be traced later if required. The data in the logs describes the implementation of the software being executed: method calls, the classes the flow uses, loops the application goes through, conditional branches, exceptions thrown, and so on.

As test developers, we usually would not place business data in the test logs (later on, I'll explain why I chose to write 'usually'). Information like "Test xyz passed/failed" has no meaning there, and wouldn't help in any way when you debug your automation code.
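
To make this concrete, here is a minimal sketch (in Java, using the standard java.util.logging package) of the kind of technical, implementation-level information that belongs in the log; the CheckoutFlow class and its steps are hypothetical:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class CheckoutFlow {

        private static final Logger LOG = Logger.getLogger(CheckoutFlow.class.getName());

        public void run(String user) {
            LOG.info("Entering checkout flow for user: " + user);              // trace the flow
            try {
                LOG.fine("Opening cart page and locating the 'pay' button");   // implementation detail
                // ... driver / API actions would go here ...
                LOG.fine("Submitting the payment form");
            } catch (RuntimeException e) {
                // technical failure data belongs in the log, not in the business report
                LOG.log(Level.SEVERE, "Checkout flow threw an exception", e);
                throw e;
            }
            LOG.info("Checkout flow finished");
        }
    }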

Automation test reports, on the other hand, should include information about all AUT-related tests/checks executed. The reported data is business-oriented information about the actual purpose of the execution (which is to test a portion of a product), but in a detailed manner. Most of the report should consist of pass/fail statuses, and its dominant colors should be green and red (preferably green, of course).

There's no point in placing implementation details (such as the classes used by the execution) in the test report, since they wouldn't say anything to the manager who receives the mail with the execution results. The automated tests are executed to test an application; therefore, the test report should include all data describing the execution and outcome of the automated tests.
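
For contrast, here is an equally minimal sketch of a report writer that records only business-level results; the class name and the CSV format are illustrative, not taken from any particular framework:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class TestReport {

        private final Path reportFile;

        public TestReport(Path reportFile) {
            this.reportFile = reportFile;
        }

        // One business-level result line: test name, PASS/FAIL, and a short message.
        public void addResult(String testName, boolean passed, String message) throws IOException {
            String line = String.format("%s,%s,%s%n", testName, passed ? "PASS" : "FAIL", message);
            Files.writeString(reportFile, line, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

A manager reading the resulting file sees only lines like "Login with valid user,PASS,Landing page displayed", and nothing about classes, loops or exceptions.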


Figure_1. Here's a sketch representing the idea:
As seen in Figure_1 above (sorry for oversimplifying :)), the result of executing any application is its goal: whatever functionality was intended, perhaps a solution to a problem. With automated tests, however, the result of execution is the test report. The tests are executed in order to give us a reliable status picture of the application/system under test, and the test report should present that output. There is no other purpose to executing automated tests; it all comes down to executing test scenarios on the AUT and reporting the outcome.

Logs, on the other hand, are a by-product of the test automation application, just as they are of any other app. They are aimed mainly at developers' eyes (and also at QA & support teams), but are never intended to be seen by decision makers, since the logs contain technical data.


Figure_2. Which layer writes to the reports, and which one to the logs:

The opposite view- Do not split the data


There's an approach which holds that automated test logs interest only automation developers (and sometimes a specific test developer), and that there is therefore no need to split the related data into two separate destination file types. Supporters of this view claim that, as opposed to application logs, which may be analyzed by developers, testers and support engineers, test logs could easily be merged into the reports, since the test information and the application's actions (loops, conditions, methods, classes, etc.) are linked.

When logs and reports are separated, you start troubleshooting a problem from the red label on the report, which directs you to the implementation logs to track down the cause. You need to correlate time and context across two data sources. This approach suggests putting the logs with the reports, or merging the reports into the logs, so that you don't exhaust yourself going back and forth between the two while pinpointing a problem.
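
As a rough sketch of that merged approach (class and method names are mine, not from any specific tool), business-level results can simply be written through the same logger as the technical trace, so one file tells the whole story:

    import java.util.logging.Logger;

    public class MergedTrace {

        private static final Logger LOG = Logger.getLogger(MergedTrace.class.getName());

        // Technical, implementation-level line (what a log would normally hold).
        public void step(String detail) {
            LOG.fine("STEP   | " + detail);
        }

        // Business-level line (what a report would normally hold), kept in the same stream.
        public void result(String testName, boolean passed) {
            LOG.info("RESULT | " + testName + " -> " + (passed ? "PASS" : "FAIL"));
        }
    }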


Continuous Integration solves the dilemma


Extending the limits of automation even further, continuous integration systems support common testing frameworks (JUnit, TestNG, etc.) and can determine the status of a build based on the success or failure results reported by these frameworks. Though they originally targeted mainly unit tests, these well-known and thoroughly debugged testing frameworks can just as well be leveraged for complete system / end-to-end tests. Information about the executed tests goes to the logs, while all exceptions and successes are visible in the CI tracking solution. If one insists on seeing reports as well, some CI systems offer APIs for extension plug-ins.

This basically solves our problem. With continuous integration, the debate about test logs versus reports is no longer relevant. The main advantage of integrating end-to-end tests with CI systems is that the decision on the build's status is taken out of human hands (automation or not? :)). It is all automatic, and therefore requires a reliable deployment procedure and much more robust, well-written tests. Just write all of your data to the logs, and if a problem occurs, the continuous integration mechanism will raise a failure flag and point you to the error in the logs.
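
For illustration, here is a hedged TestNG sketch of such an end-to-end check (the test name and the confirmation-number format are made up); any assertion failure is recorded in the TestNG results, which a CI server such as Jenkins can use to mark the build as failed:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class CheckoutEndToEndTest {

        @Test
        public void completedOrderGetsConfirmationNumber() {
            // ... drive the AUT here (UI or API) and collect the actual outcome ...
            String confirmation = "ORD-12345";   // placeholder for the real result

            // business-level check; the technical trail stays in the logs
            Assert.assertTrue(confirmation.startsWith("ORD-"),
                    "Order confirmation number was not generated");
        }
    }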





Tuesday, August 6, 2013

The Automation Era. Is this the end of manual testing?


Automation- the new magic word


The last few years have shown an increase in the prestige of testing in general, and of software automation in particular. More and more highly skilled graduates choose test automation as their professional career. Automation tools and simulators have become more reliable and sophisticated. Test automation's advantages are too obvious and meaningful to miss out on. Every professional understands the importance of such tests to the organization and to the elevated quality of the delivered product.

But what is it about automation testing that makes it so desirable for organizations and for (mainly new) team/group leaders? When performed correctly, automation runs precisely and consistently on each execution. It can save time, and use that resource ideally, through constant, ongoing execution of tests. Automation can maximize the use of the machines and other setup peripherals available, while covering a wide range of test cases. It allows tests to be executed at a resolution only a machine is capable of, thus avoiding human errors.


Is automation really limitless?


Many testing terms can be used in the automation context. When talking about automation, one might hear expressions like 'Functional Tests', 'System Tests', 'Stress Tests', 'Acceptance Tests', 'Security Tests', 'Sanity Tests', White/Black Box Tests, etc. All of the above, and much more, can be performed by automated test scenarios in one way or another. Furthermore, for some testing methodologies, such as 'Regression Tests' or 'Load Tests', automation may be the ideal solution.

But there are test types/styles for which automation cannot be the answer, by definition. Take 'Exploratory Testing' for instance: a testing style that is not predetermined and has no written, predefined scenario. It is a 'free-style' testing type in which the tester explores the tested application, performs actions invented on the fly, and finds bugs while walking through the AUT. It's important to mention that today's exploratory test could become part of tomorrow's functional tests, especially if a bug was found during its execution; in that case the test scenario is documented and added to the legacy regression flows. More important is the fact that many errors in the AUT are found this way, just by strolling through the application and exploring it.

These kinds of tests (exploratory) could never be done by automation, since automated tests are predefined, documented flows, whereas exploratory testing is script-less, unpredictable and not documented. Since many bugs are found this way, one cannot afford to leave them to be found by actual, real clients after the software product is released. Manual testers who are familiar with the tested application, and who know its pitfalls, are needed to cover the system by exploring it and revealing its hidden bugs.


What about Usability Testing? Could automated tests be the answer for this methodology? Usability testing is a testing style in which the system is checked for conceptual/logical/product-level/UX errors (often by users outside QA). At this testing stage, software aspects like user friendliness, ease of use, and the general look & feel of the application are checked. One might not expect "traditional" bugs to be found in this phase, but only "usability bugs", which require a higher level of system analysis.

Conventional tests are executed in strict correlation with the testing documentation. Success criteria are clearly defined, and test outcomes are measured with Pass/Fail annotations. This is almost the opposite of usability testing, in which the tester needs to state the problem and actually explain (and convince others of) what he or she feels is wrong with the system. It's not a 'black or white' situation. This is why automation is useless for usability testing: it is built for a dichotomous distinction between success and failure, whereas the outcome of usability testing is more descriptive, abstract, and sometimes arguable.


Where lies the real power of automation?


Leaving unit tests and performance tests aside (it's obvious that coded tests and automation tools are very adequate for those), automated tests are ideally used for regression tests. Elements that are known to be operational and relatively stable still need to be checked, and this is where automation steps in. All legacy tests should be included in the regression, as well as new features which were tested and are now an integral part of the released version. To all of that, one needs to add the fixed-bug scenarios, which widen the scope of the regression over time. This is part of the reason that automated tests best fit this testing type. The released build is mature and doesn't change all the time (unlike new features and user stories in scrum). The automated tests are developed against a static, stable product, so during test development, debugging is needed only for the automated tests, not for the tested application.

Furthermore, covering the regression with reliable automated tests frees the manual testers to handle other types of tests (Exploratory & Usability, for instance :)), increases the range of tests, and hence makes the released version better. Manual testers would be freed from doing the same monotonous tasks over and over again, which would surely have a positive impact on their motivation.


Standing still is going backwards


As I have tried to demonstrate in this article, there are quite a few good reasons not to neglect manual testing, even if automation is currently executed in a satisfactory manner. No organization would allow itself to release a product which had not been tested properly. Manual testers are essential to the quality of the delivery. Apart from exploratory and usability tests, these testers have many issues to cover, which is why QA engineers are the people most familiar with the released application.

That said, in this business everyone must keep up with advances in technology. Any professional needs to catch up with commonly used and evolving working trends. Even though manual testers will remain imperative to any software testing process in the foreseeable future, they need to realize that scripting, and coding in general, are increasingly important to their day-to-day work. Knowing your way around scripts will undoubtedly increase your efficiency while minimizing the time spent on repetitive setup and testing tasks. There is an increasing demand for testers with development capabilities at various levels of coding skill.

Manual testers have been unjustly underrated for years. They (we) were referred to as 'second best', as 'temporary' and 'dispensable' employees. No more! If an answer to the question in the title is needed, then the bottom line is: "No. Manual testing is still a critical part of the overall testing routine." You're safe for now. Nevertheless, every professional in general, and QA engineers especially, must constantly aspire to take their work and knowledge to the next level, for their own professional future and for the benefit of everyone else.

Wednesday, July 24, 2013

Automation and Agile- do they mix?


Each one is great by itself


I can't praise the Agile development methodology enough. I think it's fantastic. It is flexible, dynamic, and very much compatible with our rapidly changing demands. Agile is an iterative approach which helps minimize risks and adjust quickly to changes. Furthermore, with 'Scrum', team members don't have as many meetings to attend (except the daily) as in the 'Waterfall' approach. Finally, you can get some work done.

While some people may argue about the advantages of the Agile methodology, the benefits of automation are indisputable. When performed correctly, automation saves time and money while noticeably increasing the quality of the product.


. . . Put them together . . .


Let's talk about the people joining the daily meetings ('stand-ups') for a moment. Who are these people? What are their roles? Leaving the 'chickens' and 'pigs' stuff aside for a moment, whom do we see each morning?
We have a few developers, a product owner/manager, a QA person, a project manager (if he/she feels like showing up.. ;)), and one of these people is also the scrum master... more or less.

Hold on for one second, what about automation? We had automation teams back in the Waterfall methodology; what should we do with automation developers now? The Scrum promoters must have said: "Let's just add the automation person to the daily mix, and it will be fine. Every role has its representation in the scrum team; one more functional member will surely fit the pattern, and it will be OK"... Will it really?


In every sprint, which is a period of 2-4 weeks, the scrum team works to realize the user stories discussed in the sprint planning. There are a few software companies in which every peer on the team is equal to the others and takes from the task board whatever needs to be worked on, whether it's a development, automation, or QA task; everybody does everything. But most companies have designated roles for each purpose. Developers are skilled to take dev tasks, and they are also divided by expertise (UI, DB, client/server side, and so on). The QA person in the scrum team is trained to check the outcome of the developers' work, and the automation person should automate the manual tests and add more on top of them, with the goal of covering regression and bugs found in the system. The general idea is that all development/QA/automation tasks are to be completed by the end of the sprint.

For each development task there should be a parallel QA task as well. Some tasks finish ahead of schedule, and others take longer than predicted. A few days before the sprint ends there's a "code freeze", which means that development is halted and no new code is merged into this sprint's delivery. This leaves time for testing, bug fixing, and preparation for the next sprint.


Automation in Scrum


Many automation tools have 'Record' and 'Play' features, which almost anyone can activate, perhaps make some modifications, and save as a new test. This can only serve as the first phase of creating automated tests. I would be lying if I said I never activated these features. On the few occasions that I did (back in my QTP days), it was only to check object recognition, or, as a beginner, to see the code generated from the recording.
Actual tests are written from scratch (both in 'Waterfall' and in 'Agile'), after carefully planning and designing the infrastructure. The testing framework is built in a way that implements code reuse, while the test writer keeps full control over the test flows. Such a framework enables composing tests relatively quickly, essentially by assembling the infrastructure's building blocks.
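
As a small, hypothetical sketch of that idea (TestNG syntax; LoginFlow stands in for a reusable infrastructure block), a test becomes little more than a composition of blocks the framework already provides:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    class LoginFlow {
        // Reusable step written once in the infrastructure; real code would drive the UI or an API.
        boolean loginAs(String user, String password) {
            // ... WebDriver or API calls would live here ...
            return user != null && password != null;   // placeholder outcome
        }
    }

    public class LoginTest {

        @Test
        public void validUserCanLogIn() {
            LoginFlow login = new LoginFlow();   // building block taken from the framework
            Assert.assertTrue(login.loginAs("qa.user", "secret"),
                    "Valid user failed to log in");
        }
    }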

There are basically two possible states for automation in the scrum methodology. The first is when a team starts a new project from scratch, and the automation code, like every other element of a new project, doesn't exist. In this state, the application code is just starting to take shape, there are no test steps outlined for the manual tester, and there is no automation infrastructure (and of course no tests). The other state is working in a routine, continuous manner, sprint after sprint. Most of us are in this state most of the time: some of the testing framework is ready, and the automation tasks involve both composing new test flows and adding to the automation infrastructure.



So far so good, so what's the problem?


I mentioned some of the development procedure above because getting to the point where your automation framework is ready takes time. It takes more than one or two sprints (and it can take much longer). How can an automation developer keep up with the sprint's tasks if he needs to close a gap of several sprints' worth of development?

If we take the first state mentioned above, a new project requires development to lay down the foundation of the application. Of course this initial code could be tested (I'm not referring to unit tests), but normally the first sprints don't have much development content in them, so automation is not desperately needed at this phase. The initial development period is also characterized by frequent changes both in the design and in the code, which would require many modifications to the automation framework as well. Every automation developer would agree that the power of automated tests is seen best when the tested application has a certain degree of stability. New applications are just not there yet.

And what about the ongoing development state (the second situation mentioned)? One would assume that in the middle of the development process, automation has most of what it needs from its framework, and reliable tests could be composed relatively fast. At this stage most of the automated test infrastructure is already built, and there shouldn't be any significant hurdle in creating and executing automated tests.

Well, that is all correct, but the full picture is that many times the delivered product does not work as expected. In these cases development uses the sprint's time up to the last minute, leaving a very narrow window for the QA and automation people to do their jobs. Bugs are found during the sprint, some of them critical or at least of high severity. These bugs force the developers in the scrum to shift their resources and pay attention to bug fixing, while there is no final build for QA and automation to work on. The manual QA tester has a very limited time frame at the end of the sprint to check the final implementation of the user stories, so you can only imagine the challenges the automation developer faces. He needs to develop automatic procedures and checks parallel to the manual tester's scenarios, and add more tests which cannot be performed by human testers (like API tests). And it all needs to be completed on a schedule identical to the manual QA tester's, which is at the end of the sprint.


One sprint ends and a new one immediately begins. In some scrum teams, the only QA representative is the automation developer. Since there are no manual testers in these scrums, we, as automation developers, are sometimes the "last line of defense" before the build goes out to alpha, or even to production. This is why no compromises should be made on the quality of the delivery. We need to provide automated tests for the user stories end to end, some integration tests of the application's layers/components, and also bug scenarios, all through automation, plus whatever is required to complete a full regression, in order to provide an adequate test package with a high level of commitment to its quality. The time frames and the working methodology of scrum make it very "challenging" for the automation developer, to say the least.


It's not all bad


There are some extenuating circumstances for mixing automation and Agile, though. First, since we're in a scrum, there are times when we work on a tight schedule (tighter than usual, I mean), and all team members join in to help with the QA and automation effort at the end of the sprint. The first week or two of every sprint is relatively 'quiet', and no deliveries are made then, since the developers are still working on the sprint's stories. This time is best used to prepare the automation building blocks, towards composing the actual tests.

Every few sprints there's a release, and day-to-day work made us realize that some preparation is needed in automation before actually writing the tests. This infrastructure is needed to make the tests reliable and robust. We (in my software company) came to the recognition that automated tests should be ready at least by the time of the release, and the sprints would be the time during which the automation framework is prepared. It is more likely that in the final stages of the release the tested application will be more stable, so some real flows can be executed at that time.


Epilogue


Automation developers are an integral part of scrum teams all over the Agile development world. Like it or not, it's the actual, day-to-day reality. There are some serious problems that go along with it, and the fact that it's our ongoing routine doesn't mean this mix is ideal.

Automation is part of the testing domain, and is partially in charge of the quality of the product. Tests need an application to be executed on. The released product is available for testing only at the end of the release period, and user stories are ready for testing only at the end of each sprint, leaving very limited and narrow time windows for testing.

The fact of the matter is that nobody will make any allowances for us, the automation developers. It is what it is, and we have to deal with it. As mentioned, we can find solutions (or at least leverage the 'extenuating circumstances') and still deliver quality test flows.

The connection between Agile and automation can definitely work, but is it a natural connection or an ideal mix? Well, the answer to that is... 'No'.




Related Links:

deadly sins of automated software testing

Saturday, June 29, 2013

How to start applying software automation tests in your company?


Test automation sounds "sexy" and attractive to any group/team leader, or to any QA engineer who wants to improve his testing capabilities. But taking the wrong approach can lead to fatal mistakes, which may eventually cause an unnecessary waste of resources for the organization and frustration for everyone involved. To avoid this, simple rules must be followed, and relevant (truthful) information should be presented to the decision makers. The goal is to successfully adopt automated tests and make them an integral part of the development process.


It's Overwhelming!


Implementing software test automation might seem overwhelming to any newbie. The wide range of concerns, starting with 'where to begin?', through 'what to automate?', 'whether and how to automate setup and configuration procedures?', 'how to check the results at each milestone?', 'how to log the tests', and notification issues such as 'what form should the reports take?', is a great deal to handle when you start from scratch.

Not to worry: Rome wasn't built in a day, and neither was any of the great existing automation solutions. Don't let the huge number of issues and tasks divert you from your main objective, which is to give a reliable picture of the current status of the tested application. You have to start somewhere, and better sooner than later, since development waits for no one and new features are piling up all the time.


The Big Question


One important thing before we start diving into the solution: it all comes down to one significant question, "What do you want?" Or in other words, what is the ideal state we want to be in (...testing-wise)? Surely I'm not going to leave this one open. My answer is "everything!", I want it all. I want the setup, the configuration, test execution, checks, and reports all fully automated.
As soon as a new build is ready, or even after every commit/check-in (new dev code merged into the application), a full-scale testing procedure should be executed, including sending result notifications. All of this, in my ideal world, would be done without any human touch. This is what we should see (or aspire to) at the end of the road.


Know Your Resources


Sorry for asking this, but what makes you think that manual testing skills are adequate for implementing an automation system?
Moving to automation will definitely require you, as a leader/manager, to seriously re-evaluate the capacity and abilities of the resources at your disposal. Automation is a different concept from manual test execution. It involves different tools and frameworks, and it certainly requires different skills than being capable of performing manual tests (no disrespect to all you manual testers out there).
If you're going to use your current staff, there's a learning curve you should be aware of. If you're going to bring in outsourced people or hire new, skilled automation developers, then there's another learning curve to take into account: that of getting to know the application under test. However capable and talented your new workers are, they can't write automated tests for a system they know nothing about.

In many cases, though, some of the manual testers show an interest in making their lives easier. They may start with scripts to automate their preparation and configuration tasks, but they are usually the ones who open the door for other automation tasks to be accomplished. Encouraging these people and giving them all they need to fulfill their curiosity and potential (time, guidance, compensation, etc.) would be a big step toward achieving your automation goals.

These were some insights regarding the human-resource aspect of starting to automate. But what about the time resource? I have already mentioned the learning curves, but even if you have a very talented new team of experienced automation programmers who know all there is to know about automation and all there is to know about the tested application, even then, if automation is done correctly, a testing infrastructure must be built. It should be assembled after careful, thorough design and proper brainstorming. This takes time. In fact, sometimes weeks or even months will pass from the moment one starts writing automation until the first test flow actually executes.


Choose Your Tool Wisely


This is critical!
If you don't want to write automated tests over and over, and change frameworks every month, this must be done properly. Characterize the application under test and analyze the test procedures. Examine all interfacing angles of the application being tested, and list the tools you currently use to check them. These could include the end-user UI, the admin console interface, the application's APIs, and more.
The chosen automated test framework must be able to support all (or most) of the listed interfaces, including the future outlook and vision of all interfaces to be used. If all tests are done using web browsers, then 'Selenium Web-Driver' might be the best tool to use. If the system under test includes Windows applications that are to be tested, then 'QTP' is almost certainly your tool. There are many more options, but these are the ones I have personally worked with.
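
To give a feel for the browser option, here is a minimal Selenium WebDriver sketch in Java; the URL and element locators are hypothetical and would have to match your own application under test:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginSmokeCheck {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://aut.example.com/login");                  // open the AUT
                driver.findElement(By.id("username")).sendKeys("qa.user");   // fill the form
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("loginButton")).click();

                // a crude check; a real suite would use a testing framework's assertions
                if (!driver.getTitle().contains("Dashboard")) {
                    throw new AssertionError("Login did not reach the dashboard");
                }
            } finally {
                driver.quit();                                               // always release the browser
            }
        }
    }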

Sometimes it's good to know (and acknowledge) that you don't know. It would be wise to seek the counsel of experts, especially on this matter. The required knowledge won't exist in the organization, since you have just started automating. The automation framework/tool that you choose will be used for a long time and will be the foundation of all test flows developed in the future, so it must be chosen after serious consideration and research.

One principle to follow is to set one 'marshaling' language/program/script from start to end. That is, it is best to have a single point of management, from which all actions and references of a test flow start and to which they return. The same script/language code that starts the setup procedure will execute the test and also send the report at the end: a single, 'Main'-like piece of code that serves as a coherent, organized marshaling point through which all test-related actions can be controlled.
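
A bare-bones sketch of such a marshaling point might look like the following; the three phases are placeholders that a real framework would replace with its own setup, execution and reporting code:

    public class TestRunMain {

        public static void main(String[] args) {
            setupEnvironment();                      // 1. setup & configuration
            boolean allPassed = runAllTestFlows();   // 2. execution + result checks
            sendReport(allPassed);                   // 3. the report goes out at the end

            // a non-zero exit code lets a scheduler or CI job detect a failed run
            System.exit(allPassed ? 0 : 1);
        }

        private static void setupEnvironment()     { System.out.println("Preparing DB and configuration..."); }
        private static boolean runAllTestFlows()   { System.out.println("Executing test flows..."); return true; }
        private static void sendReport(boolean ok) { System.out.println("Mailing report, all passed: " + ok); }
    }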

Another issue to consider when choosing a development framework is the R&D background and knowledge. This is especially important for teams new to automation and to code development in general. If R&D is using Java, then Selenium Web-Driver over Java would be preferable; if it's .NET, then you might consider the C# platform. The support of the application developers is important, not only when you're stuck and need advice (for that, online professional forums will do the job), but also for code and design reviews.

Just to set things straight: code changes all the time. Refactoring and redesigning are done continuously, so nobody is saying that once you set the automation framework design, that's it. But it would be extremely beneficial to everyone if, before writing the first line of automation code, you thought it through, consulted, and did some research, in order to minimize design mistakes, which can cost dearly.


Prioritize & Focus 


So, after exhausting so many clichés, you should divide your overall execution into segments. Basically, a full test flow, end to end, has the following parts (roughly):

• Setup of the testing environment (including DB preparation)
• Application configuration
• Execution
• Checking actual outcome (response) in comparison to the expected results
• Reports

The first thing you need to know is that you must adhere to the way you test today. As mentioned before, with so much to do in order to achieve a reliable automation system, focus and prioritization are desperately needed. Furthermore, quick results often have to be presented (to a stressed management) to justify the effort and the resource allocation. Generally speaking, all automated tests combine two elements: simulating a scenario (execution) and checking the results (the response of the application under test). So, of the five segments mentioned above, there is no doubt that the core two are the scenario 'execution' and 'checking the actual outcome'. If you want to start somewhere, start there.
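
To show how small that starting point can be, here is a minimal 'execute and check' sketch over an HTTP API, assuming a hypothetical /health endpoint on the application under test:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HealthCheck {
        public static void main(String[] args) throws Exception {
            // execution: simulate the scenario by calling the AUT
            URL url = new URL("http://aut.example.com/health");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            int actualStatus = conn.getResponseCode();
            conn.disconnect();

            // check: compare the actual outcome with the expected result
            int expectedStatus = 200;
            if (actualStatus != expectedStatus) {
                throw new AssertionError("Expected HTTP " + expectedStatus + " but got " + actualStatus);
            }
            System.out.println("Health check passed");
        }
    }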



Some Principles To Live By


Whether it's automated tests for a GUI application or for another software component, always strive to produce robust, scalable automation code, for maximum control in the future and in order to keep your independence and flexibility.
For starters, keep to the following rules:
  • Design the automation framework so that utilities are written once for common use
  • Design the framework so that object repositories are shared and maintained in one place (per version)
  • Plan before writing each test
  • Make design & code reviews part of the automation development process, for constant improvement
  • Keep the automation knowledge in-house (not in the hands of outsourced people), and share it
  • Use parameters so that a single piece of code supports multiple configurations & setups (see the sketch after this list)
  • Make your tests resilient to application changes, through robust object identification (RegEx and other techniques) and object synchronization.
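
Here is a small sketch of the parameterization rule above: one piece of test code driven by an external properties file (the file keys and defaults are hypothetical):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class TestConfig {

        private final Properties props = new Properties();

        public TestConfig(String path) throws IOException {
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);   // e.g. staging.properties or production.properties
            }
        }

        public String baseUrl()  { return props.getProperty("aut.baseUrl"); }
        public String browser()  { return props.getProperty("test.browser", "firefox"); }
        public int timeoutSec()  { return Integer.parseInt(props.getProperty("test.timeoutSec", "30")); }
    }

The same suite can then run against a different setup simply by pointing it at a different properties file.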
Good luck!


It's only fair to mention the following sources (there are many more), each of which expressed its own take on the same issue:
Good Practices For Automating Functional Tests
10 Tips you should read before automating your testing work
6 Tips to Get Started with Automated Testing
Getting started with automation testing