In this episode we’ll test talk with Edaqa Mortoray, creator of the Leaf programming language. Edaqa currently works at Fuse, where he’s in charge of layout and animation and helps drive much of the firm’s testing efforts. He’ll cover testing from a developer’s perspective, as well as topics as far-ranging as “Will the automator be replaced by the automated?” You don’t want to miss it!
About Edaqa Mortoray
Edaqa is a multidisciplinary freelance developer with over 20 years of experience. His projects include graphic rendering, telecommunications, scientific applications, business processes, video games, financial platforms, and development products. Edaqa is also the creator of the Leaf programming language.
He's currently doing UI and animation programming at Fuse, a mobile app development product.
Quotes & Insights from this Test Talk
- I created Leaf because I've done so much already and I wanted the challenge of building a programming language. When it comes to testing, it's very different to test a programming language than other applications, but the types of things I test are much the same. The difference is that because everything is completely orthogonal, I don't have one use case to test. I have one feature, and it has to work every single way it's used; every combination you can imagine has to work somehow. This differs from, say, a business app, where you really only have the one or two use cases your test cases have to cover, because the user just isn't allowed to do anything else. In Leaf I have unit tests and a lot of high-level integration tests, and eventually, when I get far enough, there will be test apps, but it's just not far enough along for that.
- I prefer to call it use-driven development, as opposed to test-driven development, and I think that's because I work in the UI. Whenever I'm implementing a new feature, at Fuse for example, since we're UI-driven, I try to start with a small application: how is this thing actually going to be used? What are people going to be doing with it? I don't necessarily have it coded completely, but I have the outline, so I have a good idea of how it works, and then I start filling in the implementation from there.
- That is something I learned when I was on the QA side: complete testing is a myth; it's basically impossible. It's a cost-benefit tradeoff, and the goal is to find the optimum: what can we do for the least investment? I don't want to say for the least amount of money; you want the best coverage for the most cost-optimal investment of time and money. It's very tricky to manage, but you have to face the fact that you just can't test everything.
- For me, I always phrase it as a matter of pride, and as a developer it always was pride. It seems kind of silly that you'd want to release something when you really don't know whether it works or not. I don't understand why some programmers never found that embarrassing: they release something, you get the product, and the feature doesn't actually work. It's clear they didn't even go through one manual test, and that sort of boggles my mind. At this point it's kind of a work ethic thing: why would you want to do that? So for me, if you pride yourself on good work, the tests just have to be there. I know some people only do manual testing and don't write automated tests, but at least that work ethic should be there, and I don't know how to motivate people if it isn't, other than by showing it. So yeah, testing is in the developer's best interest.
- Part of the reason I don't see AI taking over is that developers and testers spend the vast majority of their time automating things. That's the whole goal of programming: to automate everything. Deployment is automated as far as possible, packaging too, and we want as many automated unit tests as we can get. Any chance we get to automate more, we'll gladly take it. Part of what I said in the article, though, is that the higher-level work, actually structuring the app and putting it together, is something I don't see being automated, because it's not the type of thing that automation, or deep learning, is good at. These AIs are very good at specific things, and if they can be applied to programming, I'm very happy; I'll definitely be using such products. But the overall structuring of apps and the communication with clients are things they just can't do yet, so I'm not really afraid of that. If we ever do have an AI that can communicate successfully with a customer and program for them, it's not just going to be programmers out of a job; it's going to be everybody.
- The one piece of actionable advice I would give is this: actually use the product you're developing, and make your test cases reflect how somebody is going to use it. Make sure your unit tests actually reflect that use case and that you're actually using that feature in the product. I know it sounds basic, but I think it's very important to focus on that aspect: the use aspect.
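Edaqa's advice can be sketched as a use-case-driven test: instead of asserting on one method in isolation, the test walks through a small scenario the way a real user would. A minimal sketch in Python's `unittest` (the `ShoppingCart` class and its methods are hypothetical, invented purely for illustration; they are not from the episode):

```python
import unittest

# Hypothetical feature under test, invented for illustration: a shopping cart.
class ShoppingCart:
    def __init__(self):
        self._items = {}

    def add(self, name, price, qty=1):
        item = self._items.setdefault(name, {"price": price, "qty": 0})
        item["qty"] += qty

    def remove(self, name):
        self._items.pop(name, None)

    def total(self):
        return sum(i["price"] * i["qty"] for i in self._items.values())


class TestCartAsUsed(unittest.TestCase):
    # Use-case-driven: mirror what a shopper actually does (add items,
    # change their mind, check out) rather than testing one call at a time.
    def test_add_change_mind_and_checkout(self):
        cart = ShoppingCart()
        cart.add("coffee", 8.50)
        cart.add("mug", 12.00)
        cart.add("coffee", 8.50)   # a second bag of coffee
        cart.remove("mug")         # shopper changes their mind
        self.assertEqual(cart.total(), 17.00)
```

Run with `python -m unittest`. The point is the shape of the test: it exercises the feature end to end in the way a user would, so a passing test means the use case works, not merely that each method runs.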
Connect with Edaqa Mortoray
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!