I'm here with Anders Wallgren, CTO of Electric Cloud. Electric Cloud builds software for continuous delivery. Thanks for joining us, Anders. >> My pleasure.

>> Let's talk a little bit about functional testing, the sort of medium-sized tests. Before we do that, let's talk about behavior-driven development a little bit. Can you give us your layman's view of what it is and why it might be important?

>> Yeah, I think one of the really interesting things about behavior-driven development is that it tends to be implemented in terms of a domain-specific language, and it tends to be a little less programmer-driven. Part of the goal of behavior-driven development is to allow non-programmers to define the behaviors of the system in a way that can then be automatically tested, so it's really part of the test-first culture, in some sense. And I think where it gets really interesting is that it allows you to specify very ornate behaviors of the system, and it starts to get program managers, project managers, and product managers thinking in terms of: how do I express this for testability? How do I express this in a logical way so that I can then say yes, it works, or no, it doesn't? And so, from very early on in the design process, in the iteration process for the product, you actually end up with a set of tests that, once you connect them together, you can execute going forward.

>> And what tools have you seen people use? Cucumber, for instance? Is that the right answer? How do you identify the right infrastructure to build this on top of?

>> That's a great question. I wish I had a really good answer for it. Cucumber is definitely something that we're looking into right now. We've got some experiments going with it, trying to figure out where it fits into our processes and our tooling and all of those kinds of things. And to be honest, I'm sure there are lots of other great tools out there, but we haven't gotten far enough along in our investigation to figure those things out. But I think this is definitely something that's real and valuable, and that a lot of organizations can and should adopt.
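To make the idea concrete, here is a minimal sketch of what this division of labor can look like, using behave, a Cucumber-style BDD tool for Python. The feature text, the step names, and the little Cart class are all hypothetical illustrations, not anything discussed in the interview:

```python
# features/cart.feature -- the plain-language spec a product manager
# could write, in the Gherkin syntax used by Cucumber and behave:
#
#   Feature: Shopping cart
#     Scenario: Adding an item updates the total
#       Given an empty cart
#       When I add a book priced at 12.50
#       Then the cart total is 12.50

# features/steps/cart_steps.py -- step definitions binding each
# plain-language phrase to executable test code.
from behave import given, when, then

class Cart:
    """Hypothetical system under test."""
    def __init__(self):
        self.total = 0.0

    def add(self, price):
        self.total += price

@given("an empty cart")
def step_empty_cart(context):
    context.cart = Cart()

@when("I add a book priced at {price:f}")
def step_add_book(context, price):
    context.cart.add(price)

@then("the cart total is {expected:f}")
def step_check_total(context, expected):
    assert abs(context.cart.total - expected) < 1e-9
```

The point is that a non-programmer can write and read the scenario, while a developer wires each phrase to code once; the scenario then runs as an automated test on every build.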
>> Anders, let's talk about functional testing more generally. What practices, behaviors, and team configurations make it work well, versus hindering it and making it harder to do successfully?

>> Yeah, the higher you walk the complexity chain of testing, from unit tests at the least complex end up to system tests, integration tests, end-user tests, and performance and load testing at the upper end, the more you want to focus, as much as possible, on having good unit tests that are fast to run and that verify the behavior of the system as quickly and as completely as possible. But you can't really avoid having more integration-like or system-like tests at some point. Say your product talks to a database: at some point you're going to need to execute that code, whether it's SQL that you wrote or an ORM that you're using. Whatever you're doing, at some point you need to verify that the behavior works, and so you're going to need higher-level testing.

Now, one thing you've got to figure out is: do I want that kind of testing to be part of my CI process, or is it something I do in a follow-on to CI? Because there's going to be a trade-off as soon as you do anything that's not just a unit test. Your test times will go up, and the complexity of managing the tests will go up as well. Frequently, what people will do is run a quick CI pass that does only unit tests, and then have a follow-on after that which does a lengthier run. Maybe that runs once an hour; maybe even longer runs happen once a day, or something like that. Ideally you would run all of your tests instantaneously, but that's unfortunately not the way things work, so you've got to figure it out.

And I think another good behavior to get into is: whenever a bug escapes some stage in the pipeline, whether that means it escapes development and a QA person finds it, or a user finds it, you always have to ask yourself, what test could I have written to prevent that bug from escaping? And how far left in my process could I have written that test? Could it have been a unit test? Did it have to be an integration test? Did it have to be a system test? Once you get people thinking in that mode, where they're thinking about how to test a behavior in the most efficient manner, that's when you build up a really good base of tests that you can use moving forward.

>> Let's talk about interdisciplinary collaboration. What do you see working best in terms of the way that testers, developers, and business or product people work together with regard to functional testing?

>> Yeah, I think the most functional teams we see are the ones where there's a lot of cross-disciplinary cooperation: a lot of the same people in the room solving problems and bringing their own perspectives to it. I think it's important for development, for example, to realize that you have to design for testability, whether that means designing so that you can write unit tests or designing so that you can do test automation at later stages in the pipeline. The more you write for testability, the more you architect for testability, the better off you'll be. So it's not just one of those things where QA has to get better at testing; this is something where engineering has to get better at as well.

>> Anders, how do you create functional tests that are robust, and don't break a lot and require a lot of maintenance?

>> I think this is probably one of the trickiest areas that people get into as they start to write non-unit tests. Now you're dealing with behaviors between distributed systems. You may be talking to a database, or to some API server that lives on another endpoint, or something like that. I think it's always important to make tests as simple as possible and have as few dependencies as you possibly can. Reserve the more complex setups for the few complicated system tests that you'll have at the end, and even make those as simple as possible. The best way to write a reliable integration test is to not write one: write it as a unit test if you can. That sounds trite, but it's very important to keep in mind that the more you pile up expensive integration and system tests, the longer your test runs are going to be. If you can write them as unit tests, you definitely should, because they'll run a lot faster. But you do need integration and system tests.

So really what you want to focus on is making sure that you understand where the unpredictability in the system will be. If you're writing tests that assume all the processes are running on the same machine, that you're not on a VM on an overloaded host, that the network is always reliable, all of those kinds of things, those assumptions will catch up with you. When you're writing tests, you're in many ways learning the same lessons that people learn when they build distributed systems, which is that they're unreliable, and you need to take that into account. Don't assume that something will happen a hundred milliseconds later. It might happen 110 milliseconds later, and then you've written a test that breaks.
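One common way to take that into account, shown here as a rough sketch rather than anything prescribed in the interview, is to poll for the expected condition with a deadline instead of sleeping for a fixed interval. The job object in the usage comments is hypothetical:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll condition() until it returns true or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Flaky: breaks the day the operation takes 110 ms instead of 100.
#   time.sleep(0.1)
#   assert job.is_done()

# More robust: passes as soon as the work finishes, and fails only
# after a generous timeout that signals something is genuinely wrong.
#   assert wait_until(job.is_done, timeout=5.0)
```

The test still fails when the system is genuinely broken; it just stops failing when the system is merely ten milliseconds slower than usual.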
And I think it's important to be relentless about the quality of the tests that you're writing, and to make sure that when there are flaky tests, they get identified and disabled, and a P1 issue gets logged so that somebody goes and fixes that test and writes it better. Really treat those as failures of process, not just failures of testing, and have a culture of getting better at writing tests, because they are difficult to do. Having some of your senior people sit with less senior people and pair-program on tests is a great exercise; I think it's even more valuable than having them sit side by side programming features. How you write a good test is in some ways much more difficult to learn than how to use this algorithm or that hashmap or that API. You can look those up in a book, but the craft of writing good tests is something you have to learn by doing.

>> More than once I've spoken to somebody who said, we hit a wall with these automated functional tests; management said it's just taking too long, too much work. Why do you think teams come to that conclusion, and how do you avoid it or fix it?

>> Yeah, I think a lot of times you get into that situation when you've written your tests to test the code, not to test the intended behavior of the code. A typical symptom is that you can't touch a single line of code without breaking half a dozen tests; your unit tests are probably a little too brittle. You really want to be testing the intent of the code, not asserting step by step that it does this and then the next line does that. At that point you're really just taking a photocopy of your code, putting it into your unit test, and making sure that your code is the same as it was the last time you ran it, which isn't really the point. The point is to test the behavior. If you focus more on doing that, it tends to lead to less flaky behavior and more consistently green builds. And really, treat testing as something you have to be good at, in the same way that you have to be good at writing code.

>> Mm-hm, great. That's a very practical perspective on functional and medium-sized testing. Thanks, Anders, for making the time to talk to us. >> You're welcome.