Make your ML deployment easy with Lightning AI

Tin Tin
4 min readNov 11, 2022


Make machine learning jobs lightning fast ⚡️

Table of contents

  • Overview
  • Why Lightning AI?
  • Lightning AI in a movie streaming scenario
  • Strengths and Limitations

Overview

Lightning AI is a tool that lets users build models, AI applications, and ML workflows with minimal effort. It allows users to focus on designing and developing AI products without handling the operational and deployment overhead.

Why Lightning AI?

In academic and research settings, students and researchers spend most of their time on the design and development of machine learning models. That is to say, tasks before deployment, including data exploration, data preprocessing, model training, tuning, and evaluation, are the priorities for stakeholders in this scenario. However, deploying the final deliverables successfully can be equally important: it is the most realistic and convincing way to present model performance and project results.

Transferring ML components into a Lightning app

Lightning AI provides a lightweight, low-code framework for moving existing machine learning components into production. Moreover, users can also build full-stack AI apps on top of those components with a few more lines of code.

As a student developer, tools such as Heroku, GitHub Pages, or AWS Elastic Beanstalk make it smooth and easy to put full-stack or backend projects into production. Similarly, Lightning AI serves as a powerful tool to move a machine learning project into the production stage. I would recommend this tool to anyone who has little time to learn topics such as CI/CD or the integration of ML lifecycles.

Lightning AI in a movie streaming scenario

In the movie streaming scenario, I apply the most common solution Lightning AI provides, called Lightning Apps. After embedding a few lines of code in your existing machine learning component, a Lightning app lets the project be deployed on any machine, including public cloud, private cloud, or on-premise.

First, I simplify our movie recommendation code into separate files to create a low-dependency pipeline.

# train.py: the model training script
# utils.py: stores the model artifact
# movie_logs_10k.csv: movie streaming log of user ratings
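For illustration, utils.py could look like the following minimal sketch, which persists the trained model artifact with Python's pickle module. The function names and file layout here are assumptions for illustration, not the exact contents of the original files:

```python
# utils.py (illustrative sketch): persist and restore a trained model artifact
import pickle
from pathlib import Path


def save_model(model, path="model_artifact.pkl"):
    """Serialize the trained model to disk so other scripts can reuse it."""
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return Path(path)


def load_model(path="model_artifact.pkl"):
    """Restore a previously saved model artifact."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

train.py would call save_model(...) after training, and evaluation or serving code would call load_model(...) later.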

After splitting up the movie recommendation model pipeline, we install the dependency using pip:

$ pip install lightning

I introduce another file for building the Lightning app, called app.py:

  • app.py: configures and runs the model training script using Lightning AI

# app.py
import lightning as L
from lightning.app.components.training import LightningTrainerScript

# run the script that trains Surprise with the Lightning Trainer
model_script = 'train.py'
component = LightningTrainerScript(
    model_script,
    num_nodes=2,
    cloud_compute=L.CloudCompute("cpu"),
)
app = L.LightningApp(component)

From the snippet above, we can see that Lightning AI provides several computing resources for this model training job. We can train distributively and choose CPU or GPU machines as the computing resource.
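For example, switching the same component to GPU machines is a one-line change to the cloud_compute argument. This is a sketch under the assumption that a "gpu" instance preset is available on your Lightning account; it is a variant of the snippet above, not code from the original project:

```python
# app_gpu.py (illustrative variant): same training script on two GPU nodes
import lightning as L
from lightning.app.components.training import LightningTrainerScript

component = LightningTrainerScript(
    'train.py',
    num_nodes=2,                          # distribute training across two machines
    cloud_compute=L.CloudCompute("gpu"),  # request GPU instances instead of CPU
)
app = L.LightningApp(component)
```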

After checking that the dependencies of each file are fulfilled, we can run a command depending on where we want to deploy our ML project.

# run the app locally
lightning run app app.py

# run the app on the cloud (--setup installs dependencies automatically)
lightning run app app.py --setup --cloud

We can check the Lightning AI dashboard to see whether our app (the model training script) is running.

In conclusion, Lightning AI is a lightweight, low-code solution for developers, researchers, and even data scientists to put existing machine learning projects into production. More features, such as building a frontend to evaluate the model or monitoring model performance with torchmetrics, remain to be explored!

Pros and Cons of Using Lightning AI

Strengths

  1. Low-code solution: Across most ML development cycles, only a small portion of people are familiar with low-level configuration of model serving. It is sophisticated, complicated work usually done by a machine learning engineer or site reliability engineer. With Lightning AI, users can bring deployment into existing code with little additional code.
  2. Fast prototyping: I personally believe it is a good enough tool for prototyping any ML project. Moreover, most projects in academic or research settings don't require large-scale deployment. The automatic configuration by Lightning Apps provides an abstraction layer that saves users additional effort in deployment.
  3. Full-stack visualization: Lightning Apps allow users to build frontend visualizations quickly with a few lines of code. This saves users from building a separate frontend app with a heavier framework such as React or Angular.

Limitations

  1. Limited horizontal scaling: As the flip side of fast prototyping, it grants users less access to configure low-level details of the deployment. Given this nature, it might not be a suitable tool once a project grows to a large scale.
  2. Error handling: While using this tool, I found it hard to debug errors that occur during deployment. The only way a user can detect and debug errors is through the Lightning dashboard. This can become a burden if the existing machine learning project produces many errors during deployment.
  3. Low compatibility: Since it is a one-stop solution for deploying a machine learning project, it gives less consideration to integrating different MLOps tools for a more complete solution.

Note

This post was made for an assignment of 17–645 Machine Learning in Production at Carnegie Mellon University.
