The Learning Orchestra is a Machine Learning INTEGRATION tool
The Learning Orchestra tool integrates services from different layers, with the ML toolkits layer at its core. The data scientist can use existing ML toolkits, like TensorFlow and Scikit-learn, through a single REST API or the Learning Orchestra Python client. The goal is to facilitate and streamline the data scientist's development process. Learning Orchestra solves the deployment issues of these ML toolkits. Furthermore, it enables the development of stateful steps of an ML pipeline using these toolkits: the data scientist can run a transform step and have its output saved transparently into a container over a cluster of virtual machines on AWS or Google cloud environments. The next pipeline step, for instance a training step, can import the previously saved data and execute transparently. This behavior is a fundamental advantage over other ML pipeline tools, since it enables re-executing only some steps of a pipeline instead of always the entire pipeline.
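As a rough illustration of this stateful behavior (the `run_step` helper below is hypothetical and a local directory stands in for the cluster's storage, so this is a sketch of the idea, not Learning Orchestra code):

```python
# Illustrative sketch only: a local directory mimics Learning Orchestra's
# cluster storage; run_step is a hypothetical helper, not part of the real API.
import json
import pathlib
import tempfile

store = pathlib.Path(tempfile.mkdtemp())

def run_step(name, fn, *inputs):
    """Run a pipeline step, reusing its saved output if it already exists."""
    out = store / f"{name}.json"
    if out.exists():                    # stateful: skip already-executed steps
        return json.loads(out.read_text())
    result = fn(*inputs)
    out.write_text(json.dumps(result))  # persist for later steps/re-executions
    return result

raw = run_step("dataset", lambda: [1, 2, 3, 4])
scaled = run_step("transform", lambda xs: [x / max(xs) for x in xs], raw)
# A later step imports the previously saved transform output transparently:
model = run_step("training", lambda xs: {"mean": sum(xs) / len(xs)}, scaled)
```

In this sketch, deleting only `training.json` and re-running would re-execute the training step alone, which is the partial re-execution advantage described above.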
- Quick-start
- Learning Orchestra deployment
- Using the Learning Orchestra system
- About Learning Orchestra
- Frequently Asked Questions
The Learning Orchestra system provides two options to access its high level Machine Learning services: a REST API and a Python client.
REST API: We recommend using a REST API caller tool, like Postman or Insomnia.
Python client:
- Check the repository for more details.
🔔 This documentation assumes that the data scientist is familiar with a number of computer science technologies. We mention extra materials for better understanding and explain some extra concepts where the reader may ask for help. Frequently asked questions are answered in the FAQ.
The documentation on how to deploy the Learning Orchestra system WITHOUT TERRAFORM (using the Docker Swarm container manager) - installation docs
The documentation on how to deploy the Learning Orchestra system USING TERRAFORM (using the Kubernetes container manager) - [installation docs](https://github.com/learningOrchestra/deployment)
You just need to run `docker stack rm microservice`.
The cluster configuration depends on the Machine Learning model, but Learning Orchestra requires only a small cluster for simple pipeline settings. We have deployed the Learning Orchestra system on a cluster with only three virtual machines, and it runs models for datasets like Titanic, IMDb and MNIST. Several technologies are used by Learning Orchestra in the containers of each virtual machine. Details about them are in the requirements.
The Learning Orchestra API is organized into interoperable microservices. They offer access to third-party ML toolkits, like TensorFlow, Scikit-learn, PyTorch, MXNet and others, to gather data, transform data, explore data, train machine learning models, tune them, evaluate them, make predictions with them, and visualize data and results.
There are 11 services in the API:
- Dataset: Responsible for obtaining a dataset. External datasets are stored on MongoDB or on volumes using a URL. The Dataset service supports datasets in CSV or generic formats.
- Model: Responsible for loading machine learning models from existing repositories. It is useful for configuring an ML toolkit object with tuned and pre-trained models, like the pre-trained deep learning models provided by Google or Facebook, trained on huge instances. It is also useful for loading a customized/optimized neural network developed from scratch by a data scientist team.
- Transform: Responsible for a catalog of tasks, including embedding, normalization, text enrichment, bucketization, data projection and so forth. Learning Orchestra has its own implementations of some services and encapsulates other transform services from the existing ML toolkits.
- Explore: The data scientist must perform exploratory analysis to understand the data and see the results of executed actions. Learning Orchestra supports data exploration using the catalog provided by the existing ML toolkits, including histograms, clustering, t-SNE, PCA, and others. All outputs of this step are plottable.
- Tune: Performs the search for an optimal set of hyperparameters for a given model. This can be done through strategies like grid search, random search, or Bayesian optimization.
- Training: This is probably the most computationally expensive service of an ML pipeline, because models are trained to best learn the underlying patterns in the data. A diversity of algorithms can be executed, like Support Vector Machines (SVM), Random Forests, Bayesian inference, K-Nearest Neighbors (KNN), Deep Neural Networks (DNN), and many others.
- Evaluate: After training a model, it is necessary to evaluate its power to generalize to new unseen data. For that, the model performs inferences or classifications on a test dataset to obtain metrics that more accurately describe its capabilities. Some common metrics are precision, recall, F1-score, accuracy, mean squared error (MSE), and cross-entropy. This service is useful to describe the generalization power of a model and to detect the need for model calibration.
- Predict: The model must make predictions so we can understand how it behaves in real scenarios. Sometimes feedback and reinforcement are necessary.
- Builder: Responsible for executing entire pipelines of existing ML toolkits, like Spark MLlib, in Python. It offers a way to use Learning Orchestra purely as a deployment alternative rather than an environment for building ML workflows composed of pipelines.
- Observe: Represents a catalog of Learning Orchestra collections and a publish/subscribe mechanism. Applications can subscribe to these collections to receive notifications via observers. A collection holds the results of a pipeline step.
- Function: Responsible for wrapping a Python function, representing a wildcard for the data scientist when there is no Learning Orchestra support for a specific ML service. It differs from the Builder service in that it does not run an entire pipeline; instead, it runs a single Python function of an existing ML toolkit in a cluster container.
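To make the Evaluate metrics listed above concrete, here is a standalone sketch in plain Python (not a Learning Orchestra call) computing precision, recall and F1-score for a binary classifier:

```python
# Standalone illustration of three common evaluation metrics: precision,
# recall and F1-score for binary labels (1 = positive, 0 = negative).
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0   # correctness of positives
    recall = tp / (tp + fn) if tp + fn else 0.0      # coverage of positives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of both
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

Because F1 is the harmonic mean of precision and recall, it penalizes a model that is strong on one metric but weak on the other, which is why the Evaluate service reports several metrics rather than one.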
The REST API can be called by any client developed in any programming language, or by an API caller like Insomnia or Postman. Besides the REST API, there is a Python client that further simplifies the services explained above. The data scientist can choose either option.
Details about the REST API are in the Open API documentation.
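For illustration, a REST call could be prepared from Python as below. The endpoint path and JSON field names here are assumptions made for the sketch, not the documented API; check the Open API documentation for the real routes and schemas.

```python
# Hypothetical sketch of preparing a REST call to a Learning Orchestra
# microservice. The route and field names below are assumptions for
# illustration only; consult the Open API documentation for the real ones.
import json
from urllib import request

cluster_ip = "xx.xx.xxx.xxx"  # replace with your cluster's address
payload = {
    "datasetName": "titanic",                      # hypothetical field
    "url": "https://example.com/titanic.csv",      # hypothetical field
}

req = request.Request(
    f"http://{cluster_ip}/api/learningOrchestra/v1/dataset",  # hypothetical route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would submit it; an API caller such as Postman or
# Insomnia can send the same body to the same endpoint.
```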
learning-orchestra-client is a Python 3 package available at the Python Package Index. Install it with `pip install learning-orchestra-client`.
All the Python scripts must import the package and communicate with the Learning Orchestra backend (the cluster IP address). The following code snippet must be inserted:

```python
from learning_orchestra_client import *

cluster_ip = "xx.xx.xxx.xxx"
Context(cluster_ip)
```
Details about the Learning Orchestra Python client at package documentation.
To check the deployed microservices and machines of your cluster, see cluster-state.
The Learning Orchestra is developed by undergraduate students and developers worldwide.
Two undergraduate final reports:
As we can see in the figure above, the Learning Orchestra system will be composed of a set of integration layers. The ML toolkits layer is just one integration service among others.
The data scientist will be able to deploy the Learning Orchestra ML toolkits layer alone or in conjunction with other layers, like AutoML, Training, Monitoring and so forth. The idea is to give flexibility to the data scientist, since a specific tuning or distributed training tool is not always necessary. The ML toolkits layer is sufficient for developing the entire ML pipeline, but sometimes the data scientist requires extra services, like AutoML.
Thanks goes to these wonderful people (emoji key):
Gabriel Ribeiro 💻 🚧 💬 👀 |
Navendu Pottekkat 📖 🎨 🤔 |
hiperbolt 💻 🤔 🚇 |
Joubert de Castro Lima 🤔 📆 |
Lauro Moraes 🤔 📆 |
LaChapeliere 📖 |
Sudipto Ghosh 💻 |
Gustavo Amorim 💻 |
This project follows the all-contributors specification. Contributions of any kind are welcome!
As a final report of an undergraduate student. See link.
The documentation here.
There is no website.
See the contributors list.
The Learning Orchestra system was developed by undergraduate students as their final projects, so it is a voluntary initiative. The collaborators are also volunteers. The system is free and open source.
Use the Issues page of this repo.
This solution is distributed under the open source GPL-3 license.
You can copy, modify and distribute the code of the repository as long as you understand the license limitations (no liability, no warranty) and respect the license conditions (license and copyright notice, state changes, disclose source, same license).
Kaggle is a good data source for beginners.
The REST API can be called by any client, including the previously explained Learning Orchestra Python client.
The Learning Orchestra backend must be deployed on Linux-based clusters, specifically the Debian 10 distribution.
Theoretically you can, as long as every machine runs the Linux Debian 10 distribution.
If your cluster fails while a microservice is processing a dataset, the task is lost and must be manually re-submitted. Some failures might corrupt the storage system.
The Docker technology re-deploys the containers when they fail.
If the network connections between cluster instances fail, the Learning Orchestra tries to re-deploy the microservices on different instances.
You just run `docker stack rm microservice` in the manager instance of the Docker Swarm cluster.
There is an interoperable REST API, as well as a Python client.
The Learning Orchestra uses the Scikit-learn, TensorFlow and Spark MLlib solutions.
You can suggest new features by creating an issue in the Issues page. New contributors are also welcome.
The contributing guide.
If you are new to the open source initiative, consider the FirstTimersOnly material.
Yes. Currently, we need help in many directions, including documentation, text review, new pipeline use cases, videos and so forth. Please, check our Issues page for open tasks.