Each one is great by itself
I can't praise the Agile development methodology enough. I think it's fantastic: flexible, dynamic, and very much compatible with our rapidly changing demands. Agile is an iterative approach, which helps minimize risk and adjust quickly to change. Furthermore, with 'Scrum', team members don't have as many meetings to attend (except the daily stand-up) as in the 'Waterfall' approach. Finally, you can get some work done.
While some people may dispute the Agile methodology's advantages, the benefits of automation are indisputable. When done correctly, automation saves time and money while noticeably increasing the quality of the product.
. . . Put them together . . .
Let's talk about the people joining the daily meetings ('stand-ups') for a moment. Who are these people? What are their roles? Leaving the 'chickens' and 'pigs' metaphor aside, whom do we see each morning?
So, we have a few developers, a product owner/manager, a QA person, and a project manager (if he/she feels like showing up ;)), and one of them is also the Scrum Master... more or less.
Hold on for one second: what about automation? We had automation teams back in the Waterfall days, so what should we do with automation developers now? The Scrum promoters must have said: "Let's just add the automation person into the dish of the daily, and it will be fine. Every role has its representation in the scrum team; one more functional member would surely match the pattern, and it would be OK"... Would it really?
Every sprint, a period of 2-4 weeks, the scrum team works to realize the user stories discussed in the sprint planning. In a few software companies, each peer on the team is equal to the others and takes whichever task needs to be worked on from the task board, whether it's a development, automation, or QA task: everybody does everything. But most companies have designated roles for each purpose. Developers take the development tasks, and they are also divided by expertise (UI, DB, client/server side, and so on). The QA person in the scrum team is trained to check the outcome of the developers' work, and the automation developer should automate the manual tests and extend them, with the goal of covering regression and bugs found in the system. The general idea is that all development, QA, and automation tasks are to be completed by the end of the sprint.
For each development task, there should be a parallel QA task as well. Some tasks finish ahead of schedule, while others take longer than predicted. A few days before the sprint ends there's a "code freeze", which means that development is halted and no new code is merged into this sprint's delivery. This leaves time for testing, bug fixing, and preparation for the next sprint.
Automation in Scrum
Many automation tools have 'Record' and 'Play' features, which almost anyone can activate, perhaps make some modifications to, and save as a new test. This can only serve as the first phase of creating automated tests. I would be lying if I said I never used these features. The few times I did (back in my QTP days), it was only to check object recognition, or, as a beginner, to see the code generated from the recording.
Actual tests are written from scratch (in both 'Waterfall' and 'Agile'), after careful planning and design of the infrastructure. The testing framework is built to implement code reuse, while the test writer keeps full control over the test flows. Such a framework enables composing tests relatively quickly, essentially by using the infrastructure's building blocks.
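To make the "building blocks" idea concrete, here is a minimal sketch in the page-object style common in UI automation. All names here (`FakeDriver`, `LoginPage`, `Dashboard`, the locators) are hypothetical, and the driver is a stand-in so the example is self-contained; a real framework would wrap something like Selenium WebDriver.

```python
class FakeDriver:
    """Stand-in for a real UI driver; records actions instead of performing them."""
    def __init__(self):
        self.log = []

    def click(self, locator):
        self.log.append(f"click:{locator}")

    def type(self, locator, text):
        self.log.append(f"type:{locator}:{text}")


class LoginPage:
    """Building block: one class per screen, one method per user action."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type("user_field", user)
        self.driver.type("password_field", password)
        self.driver.click("login_button")
        return Dashboard(self.driver)  # actions return the next screen


class Dashboard:
    def __init__(self, driver):
        self.driver = driver

    def open_reports(self):
        self.driver.click("reports_tab")


# A test flow is then just a short composition of the blocks:
def test_user_can_open_reports():
    driver = FakeDriver()
    dashboard = LoginPage(driver).login("alice", "s3cret")
    dashboard.open_reports()
    assert "click:reports_tab" in driver.log
```

The point is that once the page classes exist, each new test is a few lines of composition rather than a new recording.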
There are basically two possible states for automation in the scrum methodology. The first is when a team starts a new project from scratch, and the automation code, like every other element of a new project, doesn't exist yet. In this state, the application code is just starting to form, there are no test steps outlined for the manual tester, and there is no automation infrastructure (and, of course, no tests). The second state is working in a routine, continuous manner, sprint after sprint. Most of us are in this state most of the time: part of the testing framework is ready, and the automation tasks involve both composing new test flows and extending the automation infrastructure.
So far so good, so what's the problem?
I described some of the development procedure above because getting to the point where your automation framework is ready takes time. It takes more than one or two sprints (and can take much longer). How can an automation developer keep up with the sprint's tasks if he needs to close a gap of several sprints of development?
If we take the first state mentioned above, a new project requires the developers to lay down the foundation of the application. Of course this initial code can be tested (I'm not referring to unit tests), but normally the first sprints don't contain much development content, so automation is not desperately needed at this phase. The initial development period is also characterized by frequent changes, both in the design and in the code, which would require many modifications to the automation framework as well. Every automation developer would agree that the power of automated tests shows best when the tested application has a certain degree of stability. New applications are just not there yet.
And what about the ongoing development state (the second situation mentioned)? One would assume that in the middle of the development process, automation has most of what it needs from its framework, and reliable tests can be composed relatively fast. At this stage most of the automated test infrastructure is already built, and there shouldn't be any significant hurdle in creating and executing automated tests.
Well, that is all correct, but the full picture is that the delivered product often doesn't work as expected. In these cases development uses the sprint's time up to the last minute, leaving a very narrow window for the QA and automation people to do their job. Bugs are found during the sprint, some of them critical, or at least of high severity. These bugs force the developers in the scrum to shift their resources toward bug fixing, while there is no final build for QA and automation to work on. The manual QA tester has a very limited time frame at the end of the sprint to check the final implementation of the user stories, so you can only imagine the challenges the automation developer faces. He needs to develop automated procedures and checks parallel to the manual tester's scenarios, and add further tests that can't be performed by human testers (like API tests). And it all needs to be completed on the same schedule as the manual QA tester's: by the end of the sprint.
One sprint ends and a new one immediately begins. In some scrum teams, the only QA representative is the automation developer. Since there are no manual testers in these scrums, we, as automation developers, are sometimes the "last line of defense" before the build goes out to alpha, or even to production. This is why no compromises should be made on the quality of the delivery. We need to provide automated tests for the user stories end to end, integration tests for the layers/components of the application, and bug-scenario tests as well, plus whatever else is required to complete a full regression, in order to provide an adequate test package with a high level of commitment to its quality. The time frames and working methodology of scrum make this very "challenging" for the automation developer, to say the least.
It's not all bad
There are some extenuating circumstances for mixing automation and Agile, though. First, since we're in a scrum, there are times when we work on a tight schedule (tighter than usual, I mean), and all team members join in to help the QA and automation effort at the end of the sprint. Second, the first week or two of every sprint is relatively 'quiet', and no deliveries are made, since the developers are still working on the sprint's stories. This time is best used to prepare the automation building blocks toward composing the actual tests.
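A typical building block that can be prepared in those quiet first days, before any testable build exists, is an infrastructure helper like the wait/retry utility sketched below. This is an illustrative sketch, not any specific framework's API; real UI drivers ship similar "explicit wait" helpers.

```python
import time


def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Tests use this instead of fixed sleeps, which makes them faster when the
    application is quick and more reliable when it is slow.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)


# Usage sketch: a test waits for the app to reach a state instead of sleeping.
counter = {"polls": 0}

def becomes_ready():
    counter["polls"] += 1
    return counter["polls"] >= 3  # simulated: "ready" on the third poll

assert wait_until(becomes_ready, timeout=2.0)
```

Helpers like this need no running application to write or unit-test, which is exactly why the start of the sprint is the right time to build them.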
Every few sprints there's a release, and the day-to-day work made us realize that some preparation is needed in automation before actually writing the tests. This infrastructure is what makes the tests reliable and robust. We (in my software company) came to the recognition that automated tests should be ready at least by the time of the release, and that the sprints would be when the automation framework is prepared. Most likely, in the final stages of the release the tested application will be more stable, so some real flows can be executed at that time.
Epilogue
Automation developers are an integral part of scrum teams all over the Agile development world. Like it or not, that's the actual day-to-day reality. There are serious problems that go along with it. However, the fact that it's our ongoing routine doesn't mean this mix is ideal.
Automation is part of the testing domain and is partially in charge of the quality of the product. Tests need an application to run on. The released product is available for testing only at the end of the release period, and user stories are ready for testing only at the end of each sprint, leaving very limited and narrow time windows for testing.
The fact of the matter is that nobody will make any allowances for us, the automation developers. It is what it is; we have to deal with it, and, as mentioned, we can find solutions (or at least leverage the 'extenuating circumstances') and still deliver quality test flows.
The connection between Agile and Automation can definitely work, but is it a natural connection or an ideal mix? Well, the answer to that is . . . 'No'.
Related Links:
deadly sins of automated software testing