Creating a container to deploy the Machine Learning model

Task 01 👨🏻‍💻

Task Description 📄

👉 Pull the CentOS container image from Docker Hub and create a new container

👉 Install Python on top of the Docker container

👉 Inside the container, copy or recreate the machine learning model you built in a Jupyter notebook

What is Docker?

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.

Why use Docker in ML?

One of the simplest and most common uses of Docker, not just in ML but across the entire development cycle, is as a complete, self-contained development environment. Setting up a development environment can be very messy at times, and resolving dependencies in particular can be a hectic task. With Docker, it is as simple as a 'docker run' command in most cases. In the worst-case scenario, we have to build a custom Docker image with all the dependencies, and with a little shell scripting knowledge that can be achieved in a few lines.

A Dockerfile is a plain text file of instructions that lets us build our own custom Docker images. We can specify what the environment should consist of: all the dependencies and software that need to be installed, files to copy in, and so on. After that, executing a couple of commands gets your development environment ready to be spun up.
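For illustration, here is a minimal Dockerfile sketch that mirrors the environment we build step by step in the demo below; treat it as a starting point under those assumptions, not the exact file used for this task.

FROM centos:latest
# Install Python 3 plus the libraries the model needs
RUN yum install -y python36
RUN pip3 install scikit-learn pandas
# Copy the model script and dataset into the image root
COPY ml.py db.csv /
# Run the model when the container starts
CMD ["python3", "/ml.py"]

Building it with 'docker build -t mlops-model .' would produce a ready-to-run image in a single step.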

This saves a lot of time compared to untangling the dependency errors caused by different versions of applications on your system.

A good and simple example: suppose you want to build a simple classifier, but your system has outdated versions of libraries and applications that cannot be upgraded because other applications depend on them. The easiest way to deal with this without disturbing your current working setup is to download or build a Docker image whose environment has the latest versions of all the applications and libraries you need for your new classifier or application.

Besides, many applications and software packages now ship with official Docker images, which makes it effortless to install them on a local machine.

Practical Demo

  • We will set up a yum repository for docker and install docker
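One common way to do this, assuming the official Docker CE repository URL and the yum-utils package (which provides yum-config-manager):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo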

We will now install Docker and its requirements with the command:

yum install docker-ce --nobest

  • We will now start and enable the Docker service with:

systemctl start docker

systemctl enable docker
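A quick check such as 'systemctl status docker' confirms the daemon is running before we proceed.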

  • We have written our code, ml.py, on the base Linux OS; it predicts the marks scored from the number of hours studied (a minimal sketch of this script follows the docker run command below).
  • Run a Docker container with the command:

docker run -it --name mlops1 centos:latest
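For reference, here is a minimal sketch of what ml.py might look like. The column names 'hours' and 'marks' are assumptions, since the actual dataset layout is not shown in this post:

# ml.py -- minimal sketch; column names are assumed, adjust to the real db.csv
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.read_csv('/db.csv')   # dataset copied to the container root below
X = data[['hours']]             # feature: hours studied
y = data['marks']               # target: marks scored

model = LinearRegression().fit(X, y)

hours = float(input('Hours studied: '))
print('Predicted marks:', model.predict([[hours]])[0])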

Now in the docker container, we will install the prerequisites.

yum install python36

pip3 install --user scikit-learn

pip3 install --user pandas
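Optionally, a one-liner to verify that both libraries installed correctly:

python3 -c "import sklearn, pandas; print('ok')"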

Now we copy our dataset and the model script into the docker container:

docker cp ml.py mlops1:/

docker cp db.csv mlops1:/
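With both files in place, running 'python3 /ml.py' from the container's shell executes the model; equivalently, from the host (assuming the container is still running):

docker exec -it mlops1 python3 /ml.py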

Now we will use the docker commit command to create a container image of our model environment, so we can use it as and when required.
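A sketch of those commands; the image name mlops-model:v1 is just an example, not the actual tag used here:

docker commit mlops1 mlops-model:v1
docker run -it mlops-model:v1 python3 /ml.py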

Here is an actual working run of the model we created.

Our model has predicted the score based on the number of hours studied.

Conclusion

Docker is employed in various phases of the ML development life cycle, such as data gathering, data aggregation and preprocessing, data exploration, model training, predictive analysis, application deployment, and more. It is a breakthrough technology that is not limited to enterprise-level usage. The community edition is free, and one can easily set up and learn to use Docker. In fact, one of the easiest ways to set up a personal development environment is through Docker.

I automate things 😉