

Programming a deep learning model is not easy (I’m not going to lie), but testing one is even harder. That’s why most of the TensorFlow and PyTorch code out there does not include unit tests. But when your code is going to live in a production environment, making sure that it actually does what it is intended to do should be a priority. After all, machine learning is no different from any other software.

In this article, we are going to focus on how to properly test machine learning code, analyze some best practices for writing unit tests, and present a number of example cases where testing is pretty much a necessity. We will start with why we need unit tests in our code, then do a quick catch-up on the basics of testing in Python, and finally go over a number of practical, real-life scenarios. Note that this post is the third part of the Deep Learning in Production course, where we discover how to convert a notebook into production-ready code that can be served to millions of users.

When developing a neural network, most of us don’t care about catching all possible exceptions, finding every corner case, or debugging every single function; we just keep increasing the model’s accuracy until it reaches an acceptable point. That’s all good, but what happens when the model is deployed to a server and used in an actual public-facing application? Most likely it will crash, either because some users send malformed data or because of some silent bug that messes up our data preprocessing pipeline. We might even discover that our model was in fact corrupted all this time. Unit tests exist to prevent all of these things before they even occur.

Unit tests are tremendously useful because, among other things, they ensure that the code does what it is supposed to do. Don’t tell me that you don’t want at least some of that. Sure, testing can take up a lot of your precious time, but it’s 100% worth it.

To get a feel for the basics of testing in Python, suppose we have a test class that includes a “test_normalize” function as a method, as in the sketch below.
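Here is a minimal sketch of what such a test class could look like, using Python’s built-in unittest module. The `normalize` helper, the class name, and the exact assertions are hypothetical placeholders for whatever preprocessing code you actually want to cover:

```python
import unittest

import numpy as np


def normalize(image: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing step: scale pixel values from [0, 255] to [0, 1]."""
    return image.astype("float32") / 255.0


class ImagePreprocessingTest(unittest.TestCase):
    def test_normalize(self):
        # A tiny fake "image" keeps the test fast and deterministic.
        image = np.array([[0, 127, 255]], dtype="uint8")
        result = normalize(image)

        # Every value must land inside [0, 1] after normalization.
        self.assertTrue(((result >= 0.0) & (result <= 1.0)).all())
        # Spot-check an exact value to catch silent scaling bugs.
        self.assertAlmostEqual(float(result[0, 2]), 1.0, places=5)


if __name__ == "__main__":
    unittest.main()
```

Running `python -m unittest` (or `pytest`, which also discovers unittest-style test classes) executes every method whose name starts with `test_`, so a broken normalization step fails here instead of after deployment.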
