Introducing SchnauzerUI

SchnauzerUI is a DSL and framework for automated web UI testing. I created it out of my frustration with the complexities of a Java/Cucumber/Selenium/Excel based testing setup. In this post I'll give some background on what those issues were, how SchnauzerUI attempts to solve them, and how all this has affected my professional opinions on automated UI testing.

First though, let's include some useful links:

Github: https://github.com/bcpeinhardt/schnauzerUI

Narrative Docs: https://bcpeinhardt.github.io/schnauzerUI/

YouTube series (in which I create the language): https://youtube.com/playlist?list=PLK0mRy_gymKMLPlQ-ZAYfpBzXWjK7W9ER

The Problem: Complexity, Scope Creep, and Abstraction Rabbit Holes

When we set out to create automated E2E test suites for our products, we chose a combination of Selenium and the Cucumber framework. There were a number of reasons for this. The QA team had previously tried a tool called testim.io. The manual team never adopted it very extensively, and the truth is I don't fully know why it wasn't successful. It happened before my boss or I joined the team, and the reasoning wasn't written down. Turnover often results in knowledge loss like this.

When I joined the QA team, it was already decided we would proceed with Selenium and Cucumber, but I was asked by management if Java was a good language for the project. I put aside my personal preference and said what I thought a good engineer would say: "The language is secondary. Java has the largest Selenium user base and consequently the best support and most resources, so it's a great choice." Not yet realizing we'd be training non-programmers to take over, I actually preferred a statically typed, compiled language anyway, despite not loving Java the language myself.

So we set out to build a test suite. Development was done initially by a vendor, and then I worked with the vendor on and off to contribute to the test suite (I was periodically pulled away to work on API testing). The goal was to build out the test suite for several projects while the manual QA team trained to eventually take over the project. I didn't have much input into what features were tested. There was an existing initial scope, which we quickly began adding on to. Here's a short summary of the progression over the course of a few months:

  1. A basic Selenium/Cucumber/Java project with a plugin for reporting. We quickly added CI/CD to automatically run tests in Bitbucket and email the HTML reports to appropriate parties.
  2. As the number of test cases grew, we decided to split into two repositories (one for each of the products under test). But we wanted to retain some convenience code we were using on top of Selenium, so we extracted that into its own repository as well. Now we have three repos.
  3. We want to provide lots of test cases, so we start using Excel datatables with our growing feature files. Feature files become extremely generic, with steps like "And if user provided additional order details create order with below datatable..." followed by 30 or so variable names wrapped in <> brackets.
  4. We need to support multiple test suites, so we add another Excel sheet for specifying which feature files to run.
  5. We start doing whitebox testing, like validating data against the database, so now we have a MariaDB client and a bunch of hand-written SQL queries stored in Excel (sorry, security people).
  6. The DB client requires a VPN connection, so now we have a flag for toggling the DB portions of tests, if statements all over the place toggling those portions of the tests, and DB portions that don't run in CI/CD (because Bitbucket can't be connected to our VPN).

Keep in mind the goal is that manual testers (people who did not sign on to be programmers) will take over this project. The selling point of Cucumber is non-technical stakeholder participation. But here's the list of things they'd have to learn to fully understand the project:

  1. Programming concepts in general
  2. Java
  3. Cucumber and the Gherkin DSL
  4. SQL
  5. Maven, including custom arguments and local dependencies (our custom lib provided as a jar)
  6. Git
  7. Bitbucket
  8. Docker and Bitbucket Pipelines (for the CI/CD).

... You see, we built a perfectly decent test suite, but we completely failed to balance our desire for a robust automated test suite against the resources and skillset of the QA team. But there's a still larger problem: we made no attempt to adapt what we were building to existing processes.

Our manual testers do a lot of work. They have established meetings (part of scrum) for allocating, estimating, and reviewing QA work. They have good working relationships with developers, and their own preferences for how they perform their work. For them, the shift in process was going to be huge as well.

Needless to say, I was frustrated by this. I felt like the current manual QA testers were the customer, and we weren't providing a particularly pleasant experience for them. I was also personally frustrated with all the complexity. Understanding a test generally meant editing a minimum of two or three Java classes (depending on whether the database was being queried), a feature file, and two Excel files. This is partially because, as we extracted our test cases to Excel and accommodated many optional/conditional entries, feature files became largely unreadable and just plain long. Feature files could also lie (if the underlying Java didn't do what the feature file asserted). It was also a pain in the ass to develop on, because you have to run a huge test suite to make sure everything still works (we're talking a 30-minute test run to ensure you didn't break anything).

After a while, I needed catharsis, and began working on SchnauzerUI as a side project in my free time.

An attempted solution to complexity in UI testing, aka SchnauzerUI

I've been interested in DSLs for quite a while, and I knew web automation testing was a specific enough task that a DSL could be quite simple and still quite powerful. For those readers who don't know, a DSL is a Domain Specific Language. DSLs are little programming languages that focus on one very specific thing, and do that one thing "right". They're like the Five Guys Burgers and Fries of programming languages. SQL, HTML, and CSS are all examples of incredibly useful DSLs.

I wanted to make something that allowed users to automate web interactions with hardly any of the trappings of a software project. I wanted to abstract away as much of the complexity of a web automation project as possible, and hopefully, by being opinionated, create something that made web automation testing relatively straightforward.

In my opinion, the DSL I ended up with accomplishes this. Here's the code to test searching for cats on YouTube.

url "https://youtube.com"
locate "Search" and type "cats" and press "Enter"

This code is sufficient to launch a browser, navigate to YouTube, search for cats, and produce an HTML and JSON test report. Just put it in a file that ends in .sui and run it with sui -f myTest.sui.

The url command is pretty self-explanatory. The locate command finds web elements. Notice we don't save the element as a variable. Schnauzer will let you save some text as a variable for convenience, but not a web element. Instead, there is always an implicitly located web element in the background, and commands execute against that element until a different one is located. This is how we use websites manually. Find thing. Type into thing. Find other thing. Click. Find third thing. You get the gist.
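To make that concrete, here's a short sketch against a hypothetical login form (the labels "Username", "Password", and "Sign in" are made up for illustration):

url "https://example.com/login"
locate "Username" and type "ben"
locate "Password" and type "hunter2"
locate "Sign in" and click

Each locate swaps out the implicit element, and the type and click commands that follow act on whichever element was located last.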

"Search" is the placeholder of the search input. It's the text that's visible to the user. SchnauzerUI favors visible information over things like ids and class names. Nine times out of ten you don't have to look at the HTML to write a SchnauzerUI test, it just gets it. We can do this because we can make good assumptions about how HTML is structured, and about the kinds of elements testers want to use certain commands with. I always found it weird to test at the level of the HTML, because that's not what users interact with, at least not conciously. Of course SchnauzerUI lets you think in terms of HTML as well. The locate command will take an id, title, class name, or xpath and know what you mean as well.

The DSL does get a little more complicated. It supports some fairly programmery concepts, like rudimentary comments, variables, if statements, and error handling.

# Close popup located by xpath if present
save "//img[contains(@class, 'alert-x-thing')]" as close-popup-xpath
if locate close-popup-xpath then click

# Try twice for wonky JavaScript, but add a failure to the test report
catch-error: screenshot and refresh and try-again

But in truth, this is fairly uncommon. A SchnauzerUI script usually ends up being a very readable, straightforward description of how a user moves through a site. This is because SchnauzerUI doesn't let us abstract ourselves to death. In the Java/Cucumber/Excel project, you might read six files to understand what a test does. A SchnauzerUI test involves a maximum of two files: the .sui file and a .csv datatable, if one is used.

SchnauzerUI will let you inline existing code during REPL-driven development to achieve reusability, but it has no concept of referencing chunks of code saved somewhere else. This is because of a larger philosophical choice: SchnauzerUI does not encourage you to make big automated test suites.

Before we unpack that bombshell, I'd like to mention again that SchnauzerUI supports REPL-driven development. You can launch a browser, write one command at a time, keep what works and discard what doesn't, and save the result as a script that can be run by itself later. Compared with editing six files to get a new test to even run, the speed-up in development time is huge.

Ok, now for the controversial opinion... big automated UI test suites are more trouble than they're worth. Usually.

Before I get canceled by whoever is in charge of QA industry standards, just hear me out. QA test suites are real software projects. They're big and complicated and require engineers to work on them. They're so big they take hours or DAYS to run. QA runs them and investigates any errors, and then schedules a meeting or sends a report to a developer when required.

Sure, you can set up webhooks or custom steps in Bitbucket or GitHub for developers to hook into their development process, but they won't. You can't wait an hour to find out whether changing one line of code broke something. E2E test suites, like the ones created with Selenium, are just too slow to provide a feedback loop useful for the development process. And if the new feature involves new UI? It'll be a whole sprint before you have a test verifying that the feature works for all edge cases, unless your developers are also active contributors used to the structure of the Selenium test suite.

SchnauzerUI is just a CLI. The DSL is so simple that writing a script to test against any environment is trivially easy. The scripts are completely standalone, so they can be attached or even written inline in Teams, Slack, or Jira. SchnauzerUI lets you do automated UI testing without creating a great big software project to do it. Tests can be implemented within the existing manual test processes. They can be added to Testpad, or to Jira tickets. They can be used as regression or smoke tests, or be quickly written to reproduce bugs or even serve as product documentation.
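As an example, here's a sketch of the kind of smoke test you might paste straight into a ticket (the site, labels, and credentials are all made up; every command appears earlier in this post):

# Smoke test: log in and land on the dashboard
url "https://staging.example.com"
locate "Email" and type "qa@example.com"
locate "Password" and type "not-a-real-password" and press "Enter"

# Dismiss the first-run popup if it appears
if locate "Got it" then click
locate "Dashboard" and click

No repository, no build tool, no CI/CD configuration required; the script is the whole artifact.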

If you've made it this far, first of all, thanks. Thanks for taking the time to read this. I hope someone reads this article who is questioning the received wisdom on their team and decides to trust their gut and find a better way.

If you want to try out SchnauzerUI, I recommend starting with the narrative docs linked at the top of this article, but you can also file an issue on the GitHub page and I'll respond to help you get started!
