I was recently tasked with integrating some automated testing into the development life cycle for one of my company’s products. The product has been developed using AngularJS, and has gone through several releases already, although we are still actively working on it: adding features, improving functionality, etc.

Currently, I have a basic Protractor suite up and running, with tests for navigating to most of the pages, checking URL patterns for various pages, checking that various buttons and widgets all work correctly, and that a number of ‘auto-populated’ fields are populated correctly. As it stands, these tests all pass as expected, although I had to do some tweaking to get the timings right: waiting for various elements to be displayed, for other tests to complete, and so on.
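For what it’s worth, the timing tweaks described above all boil down to one pattern: poll until a condition holds or a deadline passes. Here is a minimal plain-Node sketch of that pattern (the function name and timings are hypothetical, purely for illustration; in an actual Protractor spec you would normally reach for `browser.wait` with `protractor.ExpectedConditions` instead):

```javascript
// Minimal polling helper (plain Node.js, no Protractor needed): retry a
// synchronous condition until it returns true or a deadline passes.
// Names and timings here are hypothetical, purely for illustration.
function waitFor(condition, timeoutMs, intervalMs) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    (function poll() {
      if (condition()) return resolve(true);          // condition met
      if (Date.now() >= deadline) {
        return reject(new Error('Timed out after ' + timeoutMs + ' ms'));
      }
      setTimeout(poll, intervalMs);                   // try again shortly
    })();
  });
}

// Example: resolves once `ready` flips to true (after ~50 ms).
let ready = false;
setTimeout(() => { ready = true; }, 50);
waitFor(() => ready, 2000, 10).then(() => console.log('condition met'));
```

The key property is that the timeout is an upper bound, not a fixed delay: a fast machine moves on as soon as the condition holds, while a slow one simply waits a little longer.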

I have now been given another, more urgent task to work on, and have been asked to pass the development of the suite on to a colleague while I work on this other task, with the assumption that I will take it on again once my deadline has passed.

So, I made sure that my testing branch (the branch on which I had been developing the test suite) was in a fully functional state: I commented out the test I was in the middle of working on, leaving only the completed and working tests, and then pushed the branch up to GitHub for my colleague to pull and start working on.

However, shortly after my colleague pulled my testing branch from GitHub, he sent me a message saying that he was getting a number of failures when running the tests. The failure message he got was:

1) App should navigate to the Config Devices page


Expected 'http://abc.def.w.xyz/#/config/datalog' to be 'http://abc.def.w.xyz/#/config/device'.
Error: Failed expectation
    at C:\Users\…\Desktop\…-test\…\apps\…\frontend\testing\spec.js:424:36
    at ManagedPromise.invokeCallback_
    at TaskQueue.execute_ (C:\Users\…\AppData\Roaming\npm\node_modules\protractor\node_modules\selenium-webdrive
    at TaskQueue.executeNext_ (C:\Users\…\AppData\Roaming\npm\node_modules\protractor\node_modules\selenium-webd
    at asyncRun (C:\Users\…\AppData\Roaming\npm\node_modules\protractor\node_modules\selenium-webdriver\lib\prom
    at C:\Users\…\AppData\Roaming\npm\node_modules\protractor\node_modules\selenium-webdriver\lib\promise.js:668
    at process._tickCallback (internal/process/next_tick.js:188:7)

I suspect that the issues he’s having are timing problems (i.e. elements not loading before timeouts expire, or before they are referenced or otherwise used), with tests getting out of sync with the expectations being checked against them, since it took me a while to sort out similar timing issues while I was developing the scripts on my own computer.

We are both running the same versions of conf.js and spec.js against the same address (the IP address the tests are run against belongs to a third machine, another piece of hardware in our office, not either of our own computers). However, because we run the scripts from our own computers, which have slightly different specifications and performance, I suspect this may be what causes the script to fail when run from his computer, even though it passes when run from mine.

So, my question is: is there any way that I can ensure that any test scripts I develop with Protractor will be 100% portable? i.e. I want to be able to push my test suite to GitHub, and then anyone from the company should be able to download it and run it against either their own version of the software, or against any other piece of hardware (any other internal IP address) that’s running the software.
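One common way to make a suite portable is to stop hard-coding the target address in the specs and put it in the Protractor config instead. A minimal sketch of a conf.js along those lines, assuming the standard Protractor/Jasmine layout (`TARGET_URL` is a hypothetical environment-variable name, and all timeout values are illustrative):

```javascript
// conf.js (sketch): read the target host from an environment variable so the
// same suite can be pointed at any internal IP without editing the code.
exports.config = {
  framework: 'jasmine',
  specs: ['spec.js'],

  // TARGET_URL is a hypothetical variable name; the fallback is a placeholder.
  baseUrl: process.env.TARGET_URL || 'http://abc.def.w.xyz',

  // Give slower machines more headroom (values are illustrative):
  allScriptsTimeout: 30000,   // max wait for Angular to finish async tasks
  getPageTimeout: 30000,      // max wait for a page load
  jasmineNodeOpts: {
    defaultTimeoutInterval: 60000  // max time for a single spec
  }
};
```

Specs would then navigate with relative paths, e.g. `browser.get('/#/config/device')`, and build expected URLs from `browser.baseUrl` rather than a literal address, so anyone can run the suite against another machine with something like `TARGET_URL=http://10.0.0.5 protractor conf.js`.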

If not, how would people suggest I overcome this? Or is it just a case of designing the scripts to accommodate longer-than-usual waits at any point during execution (i.e. adding browser.wait() calls with much greater timeouts to accommodate the varying performance of different computers)?
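On that last point: rather than padding every step with long fixed delays, Protractor’s ExpectedConditions let you wait for a specific condition with a generous upper bound, so a fast machine proceeds immediately while a slow one simply waits longer. A sketch of how the failing navigation test might be hardened (the link text, route, and timeout are illustrative, not taken from the real suite; this needs a live browser session to run):

```javascript
// spec.js (sketch): wait for the route to actually change before asserting
// on it, instead of assuming navigation completes within a fixed delay.
const EC = protractor.ExpectedConditions;

it('should navigate to the Config Devices page', function () {
  element(by.linkText('Devices')).click();  // hypothetical navigation trigger

  // Block until the URL contains the expected route, up to 30s:
  browser.wait(EC.urlContains('/config/device'), 30000,
               'Timed out waiting for the Config Devices route');

  expect(browser.getCurrentUrl()).toContain('/#/config/device');
});
```

Because the wait resolves as soon as the condition holds, a 30-second ceiling costs nothing on a fast machine; it only buys slack on a slow one, which is exactly the cross-machine variability described above.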

Source link https://sqa.stackexchange.com/questions/30588/angularjs-testing-with-protractor-how-to-make-test-scripts-portable

