SKIL Documentation

Skymind Intelligence Layer

The community edition of the Skymind Intelligence Layer (SKIL) is free. It takes data science projects from prototype to production quickly and easily. SKIL bridges the gap between the Python ecosystem and the JVM with a cross-team platform for data scientists, data engineers, and DevOps/IT. It is an automation tool for machine-learning workflows that enables easy training on Spark-GPU clusters, experiment tracking, one-click deployment of trained models, model performance monitoring, and more.

Get Started

Import Models

Models from TensorFlow, Caffe, Keras, and Deeplearning4j can all be imported into SKIL.

SKIL model import allows data scientists to deploy deep learning models they have created elsewhere to the SKIL model server. SKIL supports models from Keras (with any backend), TensorFlow, Caffe, and Deeplearning4j.

A team with a trained TensorFlow model can host it on the model server for better integration with enterprise applications. The SKIL model server also lets data science and DevOps teams manage all deep learning models from one central location, no matter how or where those models were trained.
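Once a model is deployed, applications reach it over the model server's REST API. The sketch below assembles such a request with Python's standard library only; the endpoint URL and JSON payload shape are illustrative assumptions, not the exact SKIL schema, which depends on your deployment and model names.

```python
import json
import urllib.request

# Hypothetical endpoint URL: the real path depends on your SKIL host,
# deployment name, and model name.
ENDPOINT = "http://localhost:9008/endpoints/mydeployment/model/mymodel/default/classify"

def build_request(features):
    """Assemble (but do not send) a POST request for a deployed model.

    The {"inputs": [...]} payload shape is an assumption for illustration.
    """
    body = json.dumps({"inputs": [features]}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request([0.1, 0.2, 0.3])
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the model's prediction as JSON from a live SKIL instance.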

Using Model Imports

To import models, either create a new deployment or open an existing one. From there, click the "Import Model" button (below):

Model Import Button

Clicking this button brings up a modal dialog where you can select a model file to upload (below).

Import Model Modal Window

Here, you can upload a model from Keras, TensorFlow, or DL4J. Model import gives teams more flexibility in how they leverage the SKIL platform as a center of deep learning operations.

Importing from TensorFlow

Once imported into SKIL, a frozen-graph TensorFlow model can be deployed in the same manner as any other model. When it comes to deployment, SKIL is agnostic to how and where a model was built.

The screenshot below shows the import model dialog in SKIL; when the system detects a .pb file being imported, it displays additional options. Enter the names of the placeholders used as inputs and the names of the output nodes.
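If you are unsure which names to enter, a frozen graph's inputs are typically its Placeholder nodes, and its outputs are the nodes that no other node consumes. The sketch below applies that rule to a toy, hand-written stand-in for a GraphDef's node list; a real graph would be loaded and inspected with TensorFlow itself.

```python
def candidate_io_names(nodes):
    """Guess input/output node names for a frozen graph.

    nodes: list of (name, op, inputs) tuples -- a toy stand-in for the
    node list of a TensorFlow GraphDef.
    """
    # Strip tensor suffixes like "matmul:0" down to the node name.
    consumed = {inp.split(":")[0] for _, _, inputs in nodes for inp in inputs}
    inputs = [name for name, op, _ in nodes if op == "Placeholder"]
    outputs = [name for name, _, _ in nodes if name not in consumed]
    return inputs, outputs

# Toy graph: input_x -> MatMul(W) -> Add(b) -> Softmax
graph = [
    ("input_x", "Placeholder", []),
    ("W", "Const", []),
    ("b", "Const", []),
    ("matmul", "MatMul", ["input_x", "W"]),
    ("add", "Add", ["matmul", "b"]),
    ("output_probs", "Softmax", ["add"]),
]
print(candidate_io_names(graph))  # (['input_x'], ['output_probs'])
```

Here, "input_x" and "output_probs" are the names you would type into the dialog's input and output fields.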

SKIL Import Model Dialog Window when a TensorFlow model is detected

Press Enter after specifying each input or output name.

The name then appears inside a yellow 'chip', which you can drag to reorder or remove with the x button.

TensorFlow models require the names of both the input placeholders and the output nodes as part of their format for import and hosting.

Order is important

When a model has multiple inputs or outputs, the order of the chips determines the order in which SKIL expects inputs to be specified when calling the endpoint's REST API.
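In practice this means a client must serialize its input values in the same order as the chips, not by name. A minimal sketch, assuming hypothetical placeholder names and an illustrative `{"inputs": [...]}` payload shape (the exact SKIL schema may differ):

```python
import json

# Must match the order of the chips set at import time; SKIL matches
# inputs positionally. Names here are hypothetical examples.
input_names = ["input_image", "dropout_keep_prob"]

# Values keyed by name, as a client application might hold them.
input_values = {
    "dropout_keep_prob": [1.0],
    "input_image": [0.5, 0.2, 0.9],
}

# Reorder the values to follow the chip order before serializing.
ordered = [input_values[name] for name in input_names]
payload = json.dumps({"inputs": ordered})
print(payload)
```

Swapping the chip order in the dialog without also swapping the client's list would silently feed the wrong tensor to each placeholder, so keep the two in sync.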