Season 5 Episode 3 May 26, 2021

The Future of Automation


Don't get testy about test automation


In this episode, we talk about test automation with Angie Jones, senior director of developer relations at Applitools, and creator of Test Automation University.


Ben Halpern

Forem - Co-founder

Ben Halpern is co-founder and webmaster of DEV/Forem.

Molly Struve

Netflix - Senior Site Reliability Engineer

Molly Struve is senior site reliability engineer at Netflix and former head of engineering at Forem. During her time working in the software industry, she has had the opportunity to work on some challenging problems and thrives on creating reliable, performant infrastructure that can grow as fast as a booming business. When not making systems run faster, she can be found fulfilling her need for speed by riding and jumping her show horses.


Angie Jones

Applitools - Senior Director of Developer Relations

Angie Jones is a senior director of developer relations who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as leading the online learning platform, Test Automation University. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style, which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

Show Notes

[00:00:01] What you build and where it takes you shouldn’t be limited by your database. CockroachDB, the most highly evolved distributed SQL database on the planet, helps developers build and scale apps with fewer obstacles, more freedom, and greater efficiency. So you can forget about the database and trust that it just works. Sign up for your forever free database and get a free T-shirt at


[00:00:27] Cloudways is a leading edge Managed Cloud hosting platform built for your PHP projects. If you simply wish to focus on your business, Cloudways is the way to go. They take over server management and security and free up time that you can dedicate to growing your business and acquiring new clients. If you want to give them a try, use promo code, DevDiscuss.


[00:00:48] RudderStack is the CDP for developers. It makes it easy to deploy event streaming, ELT, and reverse ETL pipelines. It’s open source and doesn’t store any of your customer data, and you can integrate it into your existing workflow with features like the Transformations API, which lets you run JavaScript functions on your data streams from a GitHub repo. Sign up for free at




[00:01:16] AJ: It’s not added to the curriculum unless you’re at like this master’s level, which I don’t get. Right? I think every developer needs to know how to test. So this should be something that’s built into the curriculum.


[00:01:41] BH: Welcome to DevDiscuss, the show where we cover the burning topics that impact all of our lives as developers. I’m Ben Halpern, a Co-Founder of Forem.


[00:01:49] MS: And I’m Molly Struve, Head of Engineering at Forem. Today, we are talking about test automation with Angie Jones, Senior Director of Developer Relations at Applitools, and Creator of Test Automation University. Thank you so much for joining us.


[00:02:04] AJ: Yeah. Thanks for having me.


[00:02:06] BH: For those who might not know your background, can you tell us a little bit about yourself?


[00:02:10] AJ: Sure. So I’m a Java champion. I’ve been a developer for a very long time, specializing in test automation. So I help lots of companies and engineers worldwide with test automation initiatives, whether that be workshops, conference talks, or consulting, and I write about the topic. So just helping developers understand this better because it’s something that we don’t get taught in school, in bootcamps, or when we are self-teaching.


[00:02:46] MS: So obviously, you work a lot with the community. Can you tell us a little bit about your role at Applitools and what you do there?


[00:02:54] AJ: Sure. So I am a senior director there, started out as just a developer advocate, and now I’m growing a team of developer advocates there. So our job is essentially to help the community, right? So help people understand how to test better. So we do that by producing content. So again, blog posts, videos, documentation, all of that good stuff that helps developers do their jobs better. And they don’t have to work for Applitools. So we just help the developer community in general.


[00:03:32] BH: Can we define what automation is sort of in the context of what you do?


[00:03:37] AJ: Yeah, that’s a great question. Because a lot of times when I talk about automated tests, developers automatically assume that that’s limited to just unit tests. Right? But I automate all kinds of tests. So unit tests, integration tests, end-to-end tests. I would say my strongest skill is in UI automation tests. So that’s just something that a lot of developers don’t necessarily have experience with because they mostly focus on the unit tests. And if they are so fortunate, they may have another team or people on the team that’s dedicated to doing the integration and the end-to-end UI tests. So that’s where I help out a lot in helping them understand how to do that.


[00:04:23] BH: So you’ve been around for a while. And the answer you just gave, do you get the idea that this is a different answer than you might have given five years ago or even before then? This is what test automation is today. Has it evolved much recently?


[00:04:41] AJ: So people used to not really do testing at all, or developers did it themselves, right? If you’re a smaller shop or a startup or something like that, you don’t have any tests at all really. That’s what I’m seeing a lot of the times. If you are a more mature shop, you might have had testing back in the day, but that was a dedicated kind of siloed team where feature development happened and then you kind of threw it over the fence and you had people who might manually test your product. And also, if you were really fortunate, you had people like me who are dedicated automation engineers. So I’ve worked on several teams where I’ve not done the feature development. My sole job was to automate the tests. And that’s changing nowadays because we want to deliver faster. Right? And so there’s not this luxury of developing your feature in a silo, handing it over and then waiting on someone like me to automate it next month or something like that. You want to ship this thing. You might want continuous deployment. And so you need these tests to be a part of your build process. And in order to do that, you need to automate as part of that feature delivery. So now I’m seeing more developers responsible for automating more than just the unit tests, or I’m seeing automation engineers who are no longer siloed off, but are now part of like sprint teams. And they may develop the automation in parallel with the feature development.


[00:06:25] MS: It’s definitely been interesting seeing kind of the change over time as we go from very manual QA to the more automated testing QA. So this might sound like an obvious question, but why is testing important?


[00:06:39] AJ: It’s not that obvious. I get questions all the time from developers, especially newer ones. Like I said, it’s not something they teach in school. So you learn how to build. Right? And once you build it, you check and you make sure that feature actually works. And for newer developers, it’s like, “I already made sure it works. Why on earth would I go through the trouble of automating that check?” Right? And what they don’t understand is when you continuously develop on this project, you have to continuously test, not just the new things that you’re developing, but the old things, and you have to make sure they work as well. So if you have like this laundry list of all of your features and all of the tests that you have checked manually, and every time you make a change in the app, you want to manually verify that all of this stuff is working, that’s not feasible. Right? And I think it comes down to experience, right? You haven’t been burned yet. So you don’t realize that this is something that needs to be done until it breaks. You write this cool new feature. You made sure it worked. You poked around. You thought of a couple of scenarios. It worked beautifully. Got it integrated. It’s now in production. And oops, our money generating feature is now broken because of this other feature that you checked in. That’s when the light bulb goes off, like, “Oh, we probably should be testing all of the important stuff every time we change the application.” Because there could be related failures or areas that kind of touch each other or whatever. So that’s why you need to test. If you want to continuously make sure that your application is working as you intended, not just the specific feature, but the entire application, every time you add something new or remove something or change something, automated tests are the way to go.
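The failure mode Angie describes, a new check-in silently breaking an older revenue-critical feature, is the core argument for automated regression tests. A minimal sketch, with all names hypothetical:

```python
# Illustrative sketch (all names hypothetical): a regression test protects an
# old, revenue-critical feature every time anything in the app changes.

def apply_discount(price, percent):
    """Older "money generating" feature: discount a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_discount_still_works():
    # Runs on every change to the app, not just when this feature first
    # shipped, so an unrelated check-in that breaks it is caught pre-release.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

test_discount_still_works()
print("regression check passed")
```

The point is not the discount logic; it is that the check runs automatically on every build instead of relying on someone re-verifying the feature by hand.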


[00:08:48] BH: How do you feel about the education around testing right now in terms of maybe colleges and bootcamps? Are they getting this into the curriculum?


[00:08:56] AJ: Not really. I’ve heard from a couple of bootcamp students that they’re starting to introduce some of the concepts, like unit testing or TDD. So that was really good. But generally, I would say no. So I did the CS path. And in undergrad, I didn’t learn about testing at all. I also have a master’s in computer science and that’s where they introduced testing. I was talking to my friend, Jez Humble, and he teaches computer science classes at Berkeley and he specifically teaches a TDD class. I was just so impressed, like, “Jez, you teach TDD in college? Wow! That’s amazing.” And I asked him, “Is this at the undergrad level?” And he said, “No, it’s a master’s.” Exactly, right? So for some weird reason, it’s not added to the curriculum unless you’re at like this master’s level, which I don’t get. Right? I think every developer needs to know how to test. So this should be something that’s built into the curriculum. I actually used to be an adjunct professor and I taught a Java programming course, and I know how tight the schedule is. There are so many things you want to teach. So it’s kind of hard to fit that testing in, but hey, that’s the same thing that we say about development, right? I only have two weeks to develop this feature. How am I going to fit in the testing? And so I took the same approach that I do when I develop things and I sprinkled it in. It’s part of that lecture. Right? So as I teach a concept, I also teach them like how to test whatever they’re building with that concept. So this was a big reason why I started Test Automation University. There are not a lot of courses out there available to help you learn how to test something. Right? And if there are, they’re really generic. So now Test Automation University is a free online platform and it provides courses on all things testing. So you can go pretty deep, for example, instead of, “Oh, let me learn about testing in general,” which is a huge subject.
You can pick certain things, like, “I need to learn about accessibility testing.” Okay. Great. Here’s a course with several chapters just on that. I need to learn about testing with Jest. Okay. Great. Here’s a Jest course. You know what I mean? There are courses that are aligned with very specific technologies, as well as testing areas, if you will. So hopefully, this has been really helpful in getting developers up to speed and filling in that gap.


[00:11:39] MS: So on Test Automation University, do you have any certain programs that you think are like the absolute most important that developers should know?


[00:11:49] AJ: One of my courses is the first course in every single track that we have. So we have like these learning paths. So if you wanted to go really deep in something like UI automation with Java, here are all of the courses you should take. Right? So at the top of each one of these tracks is my course on setting the foundation for test automation success. And here’s the thing, and this might not be something a lot of people realize if you don’t have these sophisticated automation setups in your shop: test automation projects fail a lot. So what I’m seeing is people will try to start it. “Oh, yeah. We should probably get some test automation.” You’re like, “Okay. Yeah. Yeah, let’s do it.” You jump into it and then the tests become unreliable. They’re failing. We can’t deploy anything because of these stupid unreliable tests, and you end up having to ignore the tests. You get to the point where it’s just like, “Just shut them all off.”


[00:12:54] MS: We’ve all been there.


[00:12:56] AJ: I know. Right? I know. So this is very, very common across the globe. So in this course, setting the foundation, I don’t get into any tools. I don’t show any code. This one is straight up about strategy because what people don’t realize is test code should be treated with the same care as production code. Right? It’s gating your production code. Why would you want it to be faulty? Right? And if you’re developing a very extensive suite, then you have to take care of this. This is basically another software development project. And people fall into the trap of not strategizing that, or throwing it to people who might not necessarily have strong coding skills, or, “Okay, we’re hiring somebody brand new. Let’s have them start the test initiative.” Would you do that with your development code? Of course you wouldn’t. So why would you do that with your test code? And that’s why we get into trouble when the tests are unreliable and they’re breaking all the time and they just become like this maintenance nightmare. So in that course, I go through how you strategize this, making sure you understand what the goal is, like, why are you doing this? And once you understand that, you can align your actions with those goals. How do you set up culture within the company to support this? Because when those tests are failing all the time and we have features to develop, are you going to get support from the business to put some time and effort and some TLC into those tests to get them back running? Or is it treated as a second-class citizen that, “No, no, no, just turn them off and we got to get back to the features”? So all of that comes into play.


[00:14:58] BH: So you’re speaking to some misconceptions around what testing is for I think partly among developers themselves and of course among the folks who pay the developers. There’s a component of communication and buy in there, but I’m wondering about misconceptions around careers in testing or a focus in testing for developers because I know when we’ve been looking for automation specialists, it’s hard to find.


[00:15:28] AJ: I know.


[00:15:28] BH: And sometimes we look for a generalist role and we get 500 applications. Can you kind of speak to what leads developers not to specialize in testing a little bit more than they could, especially newer developers?


[00:15:41] AJ: Ben, it’s so hard. I’ve been at companies where I’m leading the automation team. They give me head count and I want to hire five of these folks. Right? You can’t find them. It’s like, “What?” We’d lose the budget for it because I can’t fill these roles. That actually is the reason why I started writing and speaking, trying to level up the entire industry on testing and educate them on it. I think a lot of shops don’t have this type of setup. Right? I did a poll asking, “Do you have dedicated testers?” And the majority answered no. Right? So the shop is set up where developers either do their own testing or they don’t do any testing at all. Right? It’s that kind of sniff test stuff that I was talking about earlier. And so you don’t even realize that this is a thing. A lot of people don’t realize like, “Oh, wow! We can basically have people focus on testing or automation or whatever.” I think there’s that gap in knowledge there. Also, if you do know about it, you might treat it as like a lower level form of development. Right? And this was a misconception that even I had. So my first job was as a test automation engineer. And so I was doing like all of this coding, not just that. I’m like building entire projects from scratch. And so I’m like flexing all these architecture muscles and stuff. And even in my mind, I’m thinking, “Hmm, but I want to do feature development,” feeling like I wasn’t a real dev because I wasn’t focused on the feature dev. So I went into feature development and I hated it. I had to focus on my one little widget and then that’s it. Whereas when I was building this automation code, I just had the autonomy to like focus on the bigger project as a whole and focus on how our customers are going to use it. And that just made me such a better developer. Right? So I still do like some feature development now, but that’s not my main focus.
My main focus is on test code and I just feel that everybody should kind of get a taste for this because it just really, really enhances your development skills, and not just the coding aspect, but how you think about stuff. Like as I’m developing features, I have like this tester mindset that influences the way that I develop it.






[00:18:49] RudderStack is the CDP for developers. Its 150 integrations make it easy to connect your data sources to all of the tools used by your marketing, product, and data science teams. With RudderStack, you can say goodbye to the headache of managing custom built integrations and easily manage all of your customer data pipelines in one place. Sign up at to give it a try.


[00:19:12] Cloudways platform offers a choice of IaaS partners: AWS, Google Cloud, DigitalOcean, Linode, and Vultr. In addition, you get a performance optimized stack, managed backups, and a staging environment where you can test your code before pushing it to live servers. Best of all, Composer and Git come pre-installed so you can get your projects up and running quickly. All this power, simplicity, and peace of mind falls right in line with their brand slogan, moving dreams forward. If you want to give them a try, use promo code, DevDiscuss.




[00:19:48] MS: I really love how you kind of talk about testing. You’re really touching the entire application. You’re being the protector of production with those tests. So I’m curious, obviously, we’ve kind of talked about how much of an impact it can have. Why do you think developers are still kind of averse to the idea of testing or the idea of specializing and working on testing?


[00:20:12] AJ: Here’s the thing. So you’re given a feature to develop. You are excited about that. You go. You knock it out. When that feature is up and running, you’re ready to be done and you’re ready to move on to the next task. If you then have to like write up 15 unit tests for this feature, that’s not fun anymore. Not only that, they don’t really have time. So this is something that everyone kind of glosses over and it’s like, “Oh, just test your stuff.” And I might be guilty of that too. I pick on you all on Twitter, like, “Test your code.” But deadlines are a real thing. You got two weeks. You got a couple of features you got to knock out. You feel like you just don’t have the time for this. I’m actually a huge advocate of having specialists who can help out with this thing. Right? Yeah. I think developers definitely should test their code. Definitely. And your unit tests. But if I’m asking you to write the feature, write unit tests, write integration tests, I want some end-to-end tests, I want all of that in two weeks. That’s not that realistic. Right? So I think that’s where a lot of the pushback is. The other very real thing is maybe they just don’t know how to test. We’ve talked about this. No one taught them. It’s not their fault. If you didn’t learn that, think about your projects and stuff that you did, whether that be self-taught, whether that be a bootcamp, whether that be a degree, whatever it is, your path: you were given like a specific task, you wrote the code, you ran it. If it compiled or whatever and it ran, that was good enough. You considered that as complete. And then you get into the real world and now it’s like, “Oh, did you write tests?” And it’s like, “What? What’s that? How do I do that?” You don’t even know how, right? A lot of people are shy to say that.
You don’t want to say, “I don’t know how to write tests.” People are embarrassed to say that, but they tell me in private, like, “I don’t know how to write tests.” And so that’s why Test Automation University is really successful as well because you can kind of learn on your own without anybody knowing you don’t know how to write tests.


[00:22:37] BH: Let’s talk about testing principles that apply maybe across the gamut. 


[00:22:42] AJ: So certain things, you want to make sure like the structure of your tests. This can vary depending on like what layer you’re actually testing at, unit versus higher levels or whatever, but you want to make sure that your tests are focused on one specific thing. Sometimes we get into trouble when we kind of put too much noise in the test and it’s testing a whole bunch of different things. Another thing people don’t know how to do besides testing is debug, right? So you get this test that fails and you have no idea why because the test is so bloated. So that’s a key thing to make sure you’re focused on like one specific thing.
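One way to picture the "one focused thing per test" advice: a bloated test leaves you guessing which behavior broke, while focused tests point straight at the failure. A sketch using a hypothetical shopping-cart API:

```python
# Illustrative sketch (hypothetical cart API): focused tests make failures
# easy to diagnose; a bloated test mixes several behaviors into one verdict.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(price for _, price in self.items)

# Bloated: if this fails, was it add(), total(), or the item count?
def test_cart_everything():
    cart = Cart()
    cart.add(("book", 10.0))
    assert len(cart.items) == 1
    cart.add(("pen", 2.0))
    assert cart.total() == 12.0

# Focused: each test exercises one behavior, so a failure names the culprit.
def test_add_stores_item():
    cart = Cart()
    cart.add(("book", 10.0))
    assert cart.items == [("book", 10.0)]

def test_total_sums_prices():
    cart = Cart()
    cart.add(("book", 10.0))
    cart.add(("pen", 2.0))
    assert cart.total() == 12.0

test_add_stores_item()
test_total_sums_prices()
print("focused tests passed")
```

When `test_total_sums_prices` fails, you debug the totaling logic; when `test_cart_everything` fails, you first have to debug the test.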


[00:23:26] BH: How about like, I mean, understanding the difference between having test coverage and fully exercising some of the code and maybe thoughts around adding tests as you’re debugging? So like principles maybe around, I mean, I know I’ve tried to add tests like where the code needs to be fixed as sort of a rule of thumb. Like, “Oh, if I’m going to be debugging this anyway, I might as well add a test so that I can prove that…” because bugs might be near other bugs. That’s definitely a principle I think about not as much of a specialist as you.


[00:24:01] AJ: That’s a great one. Like, “Where are your bugs?” Because what that means is you probably didn’t think of that scenario or you didn’t test it, right? Or it regressed in some way. That’s another thing that I feel like developers are not the strongest at: thinking of all of the various ways that people are going to use the software that they develop. So that’s a great opportunity when you have a bug to say, “Oh, yeah, that’s a scenario I didn’t think about. Yeah, it’s a little too late at this point. Someone’s already discovered this.” But better late than never. Right? So you can always add tests that way if they’re of value. If this is some obscure scenario that one person is doing over in like Alaska or something, maybe you don’t have to write a scenario. And that’s another point that a lot of people don’t realize: you don’t have to automate a test for every single thing. Otherwise, you end up with thousands and thousands of tests that are slowing down your build process and you don’t really care about them. A good indicator of this is, let’s say this test fails. Right? Fails the build. We had to stop deployment. You went and looked at it and you said, “That’s all right.” And you went ahead with the deployment anyway. Take a step back at that moment to ask, like, “Do I really need this test? I don’t really care if it breaks.” If it’s low risk, maybe you do still want to know about it. So you can leave the test, but not as part of your build process. Maybe this runs once a day or something. And you can see all of the different issues with your application, but I don’t want to stop builds because of this. So that’s something to keep in mind as well.
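The "keep the test, but don't let it gate the build" idea can be sketched as a tiny tag-based runner. This is a toy stand-in (all names and tags hypothetical, not a real framework's API); real suites usually do this with test-runner markers or separate CI jobs:

```python
# Illustrative sketch (all names and tags hypothetical): tag low-risk tests
# "nightly" so they still run on a schedule but never gate a deployment.

TESTS = []

def test(tags=()):
    """Register a test function along with its tags."""
    def register(fn):
        TESTS.append((fn, set(tags)))
        return fn
    return register

@test()
def test_checkout_charges_card():
    assert 2 + 2 == 4  # stand-in for a high-value, deployment-gating check

@test(tags=["nightly"])
def test_obscure_locale_formatting():
    assert "ok" == "ok"  # low risk: informative, but shouldn't block a deploy

def run(exclude=()):
    """Run every test whose tags don't intersect `exclude`; return names run."""
    ran = []
    for fn, tags in TESTS:
        if not tags & set(exclude):
            fn()
            ran.append(fn.__name__)
    return ran

# The gating CI run excludes nightly tests; a scheduled job runs them all.
print(run(exclude=("nightly",)))  # → ['test_checkout_charges_card']
```

The same split falls out for free with, say, pytest markers selected per CI job; the sketch just makes the mechanism visible.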


[00:25:54] MS: So one thing I’m dying to know is what are some of your favorite resources and tools that you like to use to get your testing done?


[00:26:03] AJ: Yeah. So my favorite for web testing is Selenium WebDriver. This is an open source tool. I’m probably a little bit biased, but Applitools, my employer, is definitely a great one because it complements tools like Selenium and gives visual testing. So all of the automation tools that we know of, they kind of work by interrogating the DOM of the application. So that’s the Document Object Model. And so it’s looking under the covers. For example, you’ll ask Selenium like, “Is this button present?” And it says, “Yes, it is.” And it knows that by looking in the DOM and it sees that button is there in the DOM. Right? However, what if that button is like disabled and it’s not supposed to be, or it’s covered by some other element, or it’s bleeding off the edges? There’s a lot of things that could be wrong with it. Right? So Applitools gives the visual check to that. Right? So I can use Selenium for my interactions with the page, and then I can use visual testing to verify what it looks like, which really is one of the only reasons you should even be testing at the UI level: to test the display of things. Right? That’s what the UI is for, to display things, right? If you’re not testing that, go down to the business logic layer or the unit test layer.
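The DOM-versus-visual gap Angie describes can be sketched with a toy element model. The dictionary and both check functions below are hypothetical illustrations, not the Selenium or Applitools APIs:

```python
# Illustrative sketch (hypothetical element model): a DOM query can report a
# button "present" while the rendered page is still visually broken.

button = {
    "in_dom": True,       # a DOM-interrogation tool finds the element
    "enabled": False,     # ...but it's disabled when it shouldn't be
    "covered_by": "modal-overlay",  # ...and hidden behind another element
}

def dom_check(el):
    # What a DOM-level question answers: "Is this button present?"
    return el["in_dom"]

def visual_check(el):
    # What a visual check adds: is the button actually usable and visible?
    return el["in_dom"] and el["enabled"] and el["covered_by"] is None

assert dom_check(button)         # passes: the button exists in the DOM
assert not visual_check(button)  # fails: disabled and covered on screen
print("dom check passed, visual check caught the problem")
```

This is why the two kinds of checks pair well: the DOM check drives interactions, and the visual check verifies what the user would actually see.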


[00:27:42] BH: And you touched on this a little bit, but should developers write their own tests or should there always be a dedicated team of testers when possible?


[00:27:49] AJ: All right. So the very first blog post I wrote, I think this was my first one or one of the first ones. It was fairly controversial and the title was, “Developers should not lead the testing efforts.” And then all of the developers were in my mentions. “Oh!” All of a sudden, now they want to test, right? My point with this blog post was not that they can’t. I know that they can. It’s that they shouldn’t. And the reason for that is specialists like myself have made an entire career out of this. And by career, I’m not saying like, “No, no, no, hire me. I don’t want you to do it because I need a job.” No, that’s not what I’m saying. What I’m saying is this is a discipline, meaning there’s a whole world out there. There’s conferences dedicated just to testing. There’s big thick books dedicated just to testing. And this is what I study. Whereas a developer may be studying whatever, React or whatever tools that they’re using. Just like there’s a vast array of tools for web development and software development in general, same thing for testing. There’s a vast array of testing tools. This is a discipline. I don’t think that one person will put 100% into studying multiple disciplines. Right? I’m hard pressed to find developers that can name me like testing design patterns or classic testing books or things like that. That’s not their world. So I don’t think that they should lead it. Yes, I think they should participate, but not necessarily lead the testing effort unless they’re going to like focus on this discipline a hundred percent.


[00:30:05] MS: I could not agree more with that. I think it’s very similar too. The background I came from was SRE, and it’s a similar principle. Engineers can make sure their code performs, but you want to have that one person who’s focused on the big picture, and in this case, to have someone who’s got that big testing picture in their mind. It’s always, I think, more advantageous for companies. So the developers write their tests. They’re making sure each individual feature has them. But if you have one person who’s kind of making sure everything fits together nicely and is running smoothly, I think that’s like super beneficial for companies, to have those people when they can.


[00:30:43] AJ: Yep. Totally agree. And I often use a sports team analogy. It’s ridiculous to try to think that everyone should be like these generalists, right? No, no, no. We should all kind of know each other’s role, be able to pitch in where needed, but hey, if you are the quarterback, I need you focused on that. If somebody else is, what else? I don’t know sports that well, whatever else positions.


[00:31:09] MS: A linebacker.


[00:31:10] AJ: Yeah. If you’re the linebacker, you don’t need to be focused…


[00:31:14] MS: We’re on the football.


[00:31:16] AJ: Why would I expect you to? That’s the thing. I’m not going to expect the linebacker to also be 100% up-to-date on everything that the quarterback can do, for example. Right? But they work together as a team for one goal.


[00:31:33] BH: Let’s talk about tests that gate your CI deployment. Can you speak a little bit about your best practices for this?


[00:31:41] AJ: Yes, Ben. I’m so glad you asked that because tests that are part of like CI/CD, I consider CI/CD the big stage. Like, you go through these levels of maturity with your tests. When you first start off a test project, you don’t put them as part of CI, right? You kind of run them locally on your machine. We might check them in. So any developers who are about to check in a feature, they can run the tests locally, right? When you run these tests locally, something might fail. And you get scared for a moment. You check the test and it’s like, “Oh, yeah, that was just a fluke. That didn’t have anything to do with me.” And you go ahead and you check in your feature and everybody’s good. No one even knows that test gave you like this false answer. When you run them as part of the CI/CD process, you no longer have that cushion. Right? Whatever they say, that’s what goes the very first time. So if they say, “Nope, this check-in is faulty. We’re not deploying a thing,” then that’s what it is. And so your tests have to be really, really good if they’re going to gate your check-ins. Right? So there’s a couple of areas. There are actually four. And I wrote about this in a little DevOps guide ebook kind of thing, but there are four principles. One is the speed of the tests. So your tests have to be fast, right? You’re going to run dozens, maybe hundreds, maybe thousands of these things. They can’t take forever to run. Right? So you have to think about that when you’re developing them: “How do I make this as fast as possible?” There are other things of course you can do with your CI pipeline, like running the tests in parallel, but that also affects how you write the tests, right? If you write them in a way where they’re dependent on each other or they’re sharing test data and stuff like that, then you can’t run them in parallel. So this is something that you have to keep in mind when you’re developing the tests. Another thing is the reliability of the tests, right?
They have to be highly reliable because your build is dependent upon the state of these tests. They can’t be flaky. They can’t tell you yes when it’s really no. They can’t tell you no when it’s really yes. And there’s a lot of things that you can do to address like flakiness within the tests. That’s an entire talk or whatever. The quantity of them, right? So that goes back to the point I was making before. Don’t get carried away. I definitely want you to test your code, but I don’t want you writing tests for every single thing. Otherwise, it’s just a bunch of stuff in your pipeline that you don’t necessarily need. So think about what value this test is adding and if you actually want this gating your deployment. You might not. It might not be worth writing the test at all. And if it is, it might not be worth having it gate the deployment, but maybe you want to run that as a separate kind of nightly build just to get an overall status on your application. And then the final thing, this is a big one. Everybody, write this down: maintenance. So test code is code. Just like you have to maintain your feature code, you have to maintain your test code. When you write your test, it is written against the state of your application at that time. As your application evolves, your tests may need to change because the behavior of a certain feature has changed. Right? People seem to forget this, and it’s not included in a lot of estimates. When you say, “Okay, I have to write this feature,” you’re not thinking about the tests, or, if you change the behavior of the application, how you’re going to have to go back and update maybe some other tests as well.
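The parallelism point above comes down to test independence: tests that mutate shared state can't safely run concurrently. A minimal sketch, with all fixture names hypothetical:

```python
# Illustrative sketch (hypothetical fixtures): tests that build their own data
# can run in parallel; tests mutating a shared global user could not.

import threading

def make_user(name):
    # Each test creates its own fixture instead of touching shared state.
    return {"name": name, "logged_in": False}

def test_login():
    user = make_user("alice")
    user["logged_in"] = True
    assert user["logged_in"]

def test_logout_default_state():
    user = make_user("bob")  # independent data: order and timing don't matter
    assert not user["logged_in"]

# Because neither test touches shared state, concurrent execution is safe.
threads = [
    threading.Thread(target=t)
    for t in (test_login, test_logout_default_state)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parallel run passed")
```

If both tests instead logged a single shared user in and out, the outcome would depend on scheduling, which is exactly the flakiness that sinks a gating pipeline.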


[00:35:52] MS: So I heard through the grapevine that you’re currently working on some advanced level testing using machine learning. Can you talk a little bit about that?


[00:36:00] AJ: Yeah. That’s actually what Applitools does with visual testing. So visual testing isn’t exactly new in the industry, but the approach that’s been used is kind of pixel to pixel. So visual testing is basically where you take a screenshot, a picture of your application when it’s in its perfect state. Right? And then on every regression run, it takes another picture and compares the two. Well, when you compare these pixel to pixel, it doesn’t work so well. This is the flaky stuff that I’m talking about.


[00:36:31] MS: We’ve experienced that firsthand.


[00:36:34] AJ: This is the flakiness I’m talking about, like, “Oh, the cursor was blinking. It was solid in one and not the other. Fail!” Or, “This button was hovered over by chance. So fail!” So anyway, Applitools has come up with a new way to do this, and it’s using machine learning to mimic the human brain and eye and only detect the changes that we would care about as human beings. So it’s pretty cool stuff, and it’s really, really flexible, because this stuff can get complicated, testing in general, with test data and your application looking different on certain regression runs and stuff like that. But it’s really flexible: you can tell it how strict you want it to be, like, “No, I want it to be pretty much the same,” or, “No, I’m using some dynamic content, so don’t verify the content per se, but the structure of the page. Make sure nothing’s bleeding off or overlapping and stuff like that.” Or, “I’ve got an ad right here on my website, and that’s going to be different every time. So just ignore that whole portion of the page and verify everything else.” So it’s really cool.
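The flakiness of pixel-exact comparison, and why ignore regions help, can be shown with a toy sketch. This is not the Applitools algorithm; the “screenshots” here are just 2D lists of ints, invented to make the idea concrete:

```python
# A toy sketch of visual comparison. A pixel-exact diff fails on any
# difference at all (a blinking cursor, a hovered button), while an
# ignore region lets dynamic content (like an ad slot) change freely.

def pixel_exact_match(a, b):
    """Fails on any difference at all, however trivial."""
    return a == b

def match_with_ignore(a, b, ignore):
    """Compare every pixel except coordinates listed in `ignore`."""
    for y, (row_a, row_b) in enumerate(zip(a, b)):
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            if (x, y) in ignore:
                continue  # e.g., the ad slot: allowed to differ
            if pa != pb:
                return False
    return True

baseline = [[0, 0, 0],
            [0, 9, 0],   # 9 = ad content at coordinate (1, 1)
            [0, 0, 0]]
latest   = [[0, 0, 0],
            [0, 7, 0],   # the ad changed on this regression run
            [0, 0, 0]]

print(pixel_exact_match(baseline, latest))            # False: a flaky failure
print(match_with_ignore(baseline, latest, {(1, 1)}))  # True: ad region ignored
```

A real tool operates on rendered screenshots and learned perceptual models rather than coordinates, but the contract is the same: the test author declares what is allowed to vary.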


[00:37:53] BH: What makes you most excited about this functionality? Maybe not today, but over the next few years as you see it mature, where do you see this sort of replacing bigger chunks of what we do as testers?


[00:38:06] AJ: It gives you more coverage with less code. Right? And that’s exciting to me. As someone who writes tests, I feel a lot of pressure when I’m writing my assertions. I’ve got to think of everything that could possibly be here. Let’s just take a simple scenario, a shopping cart where I increase the quantity. I have two items in the shopping cart. On one of the items, I increase the quantity, right? So I changed it from one to two. What assertions do I add? Okay, so let me assert that it’s now two. It’s not one anymore. It’s now two. Is my test done? Some might say yes: that’s the thing I changed, I made sure that it was updated, and I’m good. Well, there are a lot of other things in that cart. Did the price change on that item? Did the total price change? Did you make sure that the other item was not affected by this? Did the tax change? What about the shipping cost, whether it’s a flat rate and shouldn’t have changed at all, or it’s based on how much you’re spending? There’s a lot that could happen there. So you start writing assertion after assertion after assertion and you’re like, “Oh my God!” And you still don’t know if you got them all, right? With this visual testing, I like to say a picture is worth a thousand assertions. I don’t have to think about any of that. I put my application in the state I want it in, I say take the picture, make sure it’s right, and every time, it’s making sure that everything that I want verified in that picture is actually verified. So that’s really exciting. Some newer things that we’re working on are like cross-browser testing. So not just testing on Chrome, like a lot of folks do. I’ve been bitten by this on my own web development journey. Everything I test in Chrome, everything is great. And then in Safari, there’s this bug, or in Firefox. You’re thinking, “Oh, we’re at the place where all of these browsers should kind of work the same.” Let me tell you, child, they don’t. Okay?
So you have to test across all of these different browsers, and don’t get me started on different viewport sizes and stuff. Things start really going haywire when you start changing your viewport size. And so we’re working on being able to visually test across all of these different configurations without needing to execute the test or have a device lab or anything like that. Instead, you run your tests on, let’s say, Chrome. It grabs the state of your application, so the DOM, the CSS, the JavaScript, and then it basically blasts that state onto all of these different configurations and takes all the screenshots and does the comparison. So it’s like magic to me. I love it. I think it’s amazing.
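The shopping-cart example can be sketched in plain Python. The cart model, prices, and tax rate below are all made up for illustration; the point is the contrast between enumerating every assertion you can think of and one snapshot-style comparison against a known-good baseline:

```python
# A sketch of "a picture is worth a thousand assertions": assertion-by-
# assertion checking versus one snapshot comparison. The cart model,
# prices, and tax rate are invented for illustration.

def render_cart(items, tax_rate=0.1):
    """Compute everything a cart page would display: lines, subtotal, tax, total."""
    subtotal = sum(price * qty for _, price, qty in items)
    tax = round(subtotal * tax_rate, 2)
    return {"lines": {name: (qty, price * qty) for name, price, qty in items},
            "subtotal": subtotal, "tax": tax, "total": subtotal + tax}

items = [("shoes", 50.0, 2), ("hat", 20.0, 1)]  # quantity of shoes bumped 1 -> 2
cart = render_cart(items)

# Assertion-by-assertion: every field you forget is a field you don't test.
assert cart["lines"]["shoes"] == (2, 100.0)  # the thing I changed
assert cart["lines"]["hat"] == (1, 20.0)     # was the other item affected?
assert cart["subtotal"] == 120.0             # did the total update?
assert cart["tax"] == 12.0                   # did you remember the tax?

# Snapshot-style: one comparison against a known-good baseline covers it all,
# the way a visual check covers the whole rendered page at once.
baseline = {"lines": {"shoes": (2, 100.0), "hat": (1, 20.0)},
            "subtotal": 120.0, "tax": 12.0, "total": 132.0}
assert render_cart(items) == baseline
```

If a future change breaks any field, the single snapshot comparison catches it, including the fields nobody thought to assert on individually.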


[00:41:14] MS: One thing, you know, whenever we’re talking about AI and machine learning, one topic that always comes up is bias. Is that something you kind of think about or worry about with these kinds of new programs that you’re working with and testing?


[00:41:29] AJ: Ooh, Molly! I have an entire keynote on testing these AI systems, right? Yes. The short answer to that question is definitely yes. What I’m seeing is teams that are utilizing AI for their features, they have this blind faith in them. They think that, “Oh, because it’s AI, it just works and we don’t have to test it.” I’ve literally heard that, “Oh, that’s the AI feature. We don’t have to do any testing for that.” And everybody, even the dedicated testers on the team go, “Oh! Oh, okay!” And move on, like nobody questions it. But I’ve seen like quite a few failures and biases and things like this that have occurred with these AI driven features. And so we definitely need to test them probably more so than the regular non-AI features. And not just test for functionality, but like you said, for bias as well.


[00:42:34] BH: Is all of this advanced machine learning going to cost us our jobs?


[00:42:40] AJ: Good question. So the way that I’ve seen AI used for testing specifically is as an assistant. So for example, when Applitools sees a difference, it doesn’t just mark the test as a failure. There’s this dashboard, and you go in there and you look at it, and it’s marked as unresolved. And I’m reading that as the bot saying, “Hey, human overlord, we detected some difference, but we’re not smart enough to determine if this is truly a failure or if your tests are just no longer aligned with the state of your application. Can you please be the judge?” And you go in there and you look at the pictures, and it might highlight the differences for you and stuff like that. But that’s a way of assisting you with the testing. Another cool app that I know of uses AI for testing, and this one will basically pick out which tests need to run based on what’s checked in. And so there are a lot of factors there, like, “Who checked this in? Is it the dude that’s always breaking stuff? Run all the tests then.” Or, “Is this a brand new area? Okay, well, maybe we need to run all the tests.” Or, “Is this a really solid area and this is a very experienced developer in this area? Okay, let’s just run the tests related to this area.” You know what I mean? So those are ways that I’ve seen AI be able to assist, but not take over. I don’t think we are at a place where AI is going to be able to take over what we do.
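The test-selection idea can be sketched as a simple heuristic. Real products use learned models; the rules, thresholds, and data below are invented just to mirror the factors Angie lists:

```python
# A toy sketch of "pick which tests to run" from check-in metadata,
# loosely following the factors from the episode. The threshold, fields,
# and sample data are all invented for illustration.

def select_tests(change, all_tests):
    """Return the subset of tests to run for a given check-in."""
    if change["author_break_rate"] > 0.3:  # "the dude that's always breaking stuff"
        return all_tests
    if change["area_is_new"]:              # brand new area: no history to trust
        return all_tests
    # Solid area, experienced author: only run the tests for that area.
    return [t for t in all_tests if t["area"] == change["area"]]

tests = [{"name": "test_cart",   "area": "cart"},
         {"name": "test_login",  "area": "auth"},
         {"name": "test_search", "area": "search"}]

risky = {"author_break_rate": 0.50, "area": "cart", "area_is_new": False}
safe  = {"author_break_rate": 0.05, "area": "cart", "area_is_new": False}

print(len(select_tests(risky, tests)))  # 3 -- run everything
print(len(select_tests(safe, tests)))   # 1 -- just the cart test
```

The assist-not-replace framing shows up in the structure: the heuristic narrows the work, but a human still owns what counts as “risky” and reviews the results.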


[00:44:31] MS: So what do you think the future of testing is going to be?


AJ: So I made this little cartoony thing, and it was actually based on a blog post that a friend of mine wrote. His name is Jason Arbon. Jason’s really brilliant. Right? He still works at Google and stuff and he has an AI company and all of this, but this future of testing I thought was really interesting. It was a scenario where you have a woman who’s a tester and she’s letting the bots basically generate the tests. Based on the features, they generate the tests. They execute them. They do all of the tedious stuff that the devs don’t want to do, and then they inform the team of what’s going on. Right? So as a dedicated tester, that person would take the results, or the input that they’re given from the bot, and then make these business decisions based on it. So that’s the prediction of what the future of testing will look like: the bots doing the stuff that humans don’t want to do.






[00:46:05] Scaling an SQL cluster has historically been a painful task. CockroachDB makes scaling your relational database much easier. The distributed SQL database makes it simple to build resilient, scalable applications quickly. CockroachDB is Postgres compatible, giving the same familiar SQL interface database developers have used for years. But unlike older databases, scaling with CockroachDB is handled within the database itself, so you don’t need to manage shards from your client application. And because the data is distributed, you won’t lose data if a machine or data center goes down. CockroachDB is resilient, adaptable to any environment, and Kubernetes native. Host it on-prem, run it in a hybrid cloud, and even deploy it across multiple clouds. The world’s largest banks, massive online retailers, popular gaming platforms, and developers from companies of all sizes trust CockroachDB with their most critical data. Sign up for your forever free database and get a free T-shirt at




[00:47:06] MS: Now we’re going to move into a segment where we look at responses that you, the audience, have sent us to a question we asked in relation to this episode.


[00:47:16] BH: The question we asked was, “What frustrates you about automation?” Our first response is from Cheecha. They say, “Buggy and slow third-party test software.”


[00:47:32] AJ: So I find that a lot of people beat up on these poor tools. For example, Selenium, like I said, that’s one of my favorite tools, and they get a bad rap. People start saying, “Oh, I’m not using Selenium. So slow and flaky. I’m going to use this other software.” Right? That’s my job, to be on top of this. I try out all of the other tools. I build stuff with them. I use them, and they all have the same issues. It’s not something that’s really specific to a particular tool. And a lot of times those issues, I hate to say this, but they’re like user issues, right? You’re not using the tool effectively. I don’t have those issues because of the way I utilize the tool. And some of that might be on the tool, making these preferred practices known or assisting with them and stuff like that. But I think a lot of it boils down to a lack of education. Like I said, there’s a whole world around this: design principles, patterns, software development patterns and stuff like that that people are not following, and they end up falling into these traps because of that.


[00:48:55] MS: So Mark says, “Too many people think it is a time saver when it is actually a time sink. ‘Let’s automate our tests because then our QA team will be less busy.’ Creating and maintaining a valuable suite of automated tests is difficult and represents a significant maintenance burden. This is a universal principle, but go for the low hanging fruit first. Identify small areas that represent good ROI, deliver opt-in, respect the automated testing pyramid by starting with unit tests and API tests. Keep the UI test suite small and focused, maybe starting with smoke tests.”


[00:49:33] AJ: All right. I agree with some of that. Definitely I’m all about respecting the pyramid. Definitely agree with the maintenance part. That’s what I said: the reason it feels so icky to folks is because they didn’t plan on it. It’s something you have to realize is going to occur. You have to maintain this stuff. And when you don’t plan on something and then it comes to you unexpectedly, it’s like, “Ah!” You’re upset about it. But if you go in knowing, “Yup, I’m going to have to write tests, I’m going to have to maintain them, and let me plan accordingly,” it’s not so bad. But definitely pay attention to what you’re automating, right? Again, if you try to automate everything, you’re just going to get yourself into this maintenance nightmare. Definitely, yup, focus on the ones that are going to give you a return on investment. I actually have another talk. It’s called “Which Tests Should We Automate?” That talk gives you a matrix. And it’s funny, because one of the columns is, “What is your gut feeling on this?” And a lot of times people will say, “Well, yeah, let’s go for the low hanging fruit.” But when you look at this matrix, there are four components to it: risk, value, cost efficiency, and historical data. And when you look at all of it together, and there’s a little exercise that goes with it, you realize that some of the things that are cost effective might be low value, or they might not be the ones you should be focusing on first.
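The four-factor matrix can be sketched as a scoring function. The 1-to-5 scale, the equal weighting, and the example scores below are my assumptions for illustration; Angie’s talk defines the real scale and exercise:

```python
# A sketch of the four-factor matrix (risk, value, cost efficiency,
# historical data). Equal weights and these example scores are invented;
# the point is that cheap "low hanging fruit" can still score poorly.

def automation_score(risk, value, cost_efficiency, history):
    """Average the four factors on a 1-5 scale; higher = better candidate."""
    return (risk + value + cost_efficiency + history) / 4

candidates = {
    "checkout flow": automation_score(risk=5, value=5, cost_efficiency=3, history=4),
    "footer links":  automation_score(risk=1, value=1, cost_efficiency=5, history=1),
}

# The cheap-to-automate footer links score below the riskier checkout flow,
# even though they look like the obvious low hanging fruit.
ranked = sorted(candidates, key=candidates.get, reverse=True)
print(ranked)  # ['checkout flow', 'footer links']
```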


[00:51:09] BH: The thing that frustrates Tim in this next comment is finding and updating all the relevant tests when requirements change, particularly one-to-one relationships changing to one-to-many, breaking a lot of assumptions. In a similar vein, shifting from exploratory work to actual implementation has a fuzzy boundary, where prototype code might get reused without having solid tests.


[00:51:35] AJ: Yeah. That goes back to that whole quantity principle I gave about being mindful when writing the tests. Let’s say you break a feature, Ben. You did it. You checked it in and you had 25 tests that failed. Right? And you’re like, “Oh my God! I broke 25 tests. This is two lines of code.” And you go and you have to look through all 25, and they’re all failing for the same reason. Why did you need 25 tests to tell you that? Wouldn’t one test have sufficed? So we get into the issue where we’re automating the same thing over and over again with some slight variation. Maybe we don’t have to do that. Right? And so those are the types of things that you should be thinking about when you’re deciding what you’re going to automate: “Do I have this test already?” There might be a slight variation that you can get away with just doing some manual or some sniff testing around, or pushing it down. So maybe I need one good test at the UI level, and then all of these other variations, maybe I can push those down to the unit tests if they’re absolutely needed. But otherwise, you don’t want all of that noise telling you the same thing.
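One way to “push variations down” is a parameterized table of unit-level cases under a single representative end-to-end check. The discount function and cases below are invented for illustration:

```python
# A sketch of pushing variations "down": one table of cheap unit-level
# cases instead of 25 near-identical UI tests that all fail for the same
# reason. The discount function and codes are made up for illustration.

def apply_discount(price, code):
    """The unit under test: apply a discount code, unknown codes do nothing."""
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Unit level: all the variations live in one parameterized table, so one
# broken rule produces one clear failure, not a wall of red.
cases = [(100.0, "SAVE10", 90.0),
         (100.0, "SAVE20", 80.0),
         (100.0, "BOGUS", 100.0)]
for price, code, expected in cases:
    assert apply_discount(price, code) == expected

# UI level: one good test of the happy path would sit on top of this,
# instead of driving every variation through the browser.
print("all variations passed")
```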


[00:53:04] MS: So another frustration comes to us from Yasin. They say it’s unnecessarily time consuming to set up and configure, and depending on what tool you use, it still doesn’t catch everything.


[00:53:19] AJ: They’re not wrong. I mean, you’ve got to set this stuff up. I hate setting stuff up. I despise it. So yeah, that’s a pain. And keeping up to date with the tools, again, that’s a whole discipline in and of itself. And the not catching everything, that’s a lot of what the visual testing addresses. I gave some specific examples of things that you’re not able to catch unless you write a million assertions, and even then, you’re still going to miss some things because you don’t have that set of eyes on it. Right? So visual testing actually helps with a lot of that.


[00:54:01] MS: Angie, thank you for joining us today.


[00:54:03] AJ: Yeah. Thanks so much for having me. This was a blast.


[00:54:15] BH: This show is produced and mixed by Levi Sharpe. Editorial oversight by Jess Lee, Peter Frank, and Saron Yitbarek. Our theme song is by Slow Biz. If you have any questions or comments, email [email protected] and make sure to join us for our DevDiscuss Twitter chats every Tuesday at 9:00 PM US Eastern Time, or if you want to start your own discussion, write a post on DEV using the #discuss tag. Please rate and subscribe to this show on Apple Podcasts.