🍌 Banana Serverless

This repo provides a basic template for using GPT-JT on Banana's serverless GPU platform. Ready to be used for 1-Click deploy.

Quickstart:

The repo is already set up to run a basic HuggingFace GPT-JT model.

  1. Run `pip3 install -r requirements.txt` to download dependencies.
  2. Run `python3 server.py` to start the server.
  3. Run `python3 test.py` in a different terminal session to test against it.
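
If you want to poke at the running server directly, the snippet below is a minimal sketch of such a request. The port (8000) and the `prompt` field are assumptions based on the template convention; check `server.py` and `test.py` in this repo for the exact endpoint and schema.

```python
# Minimal smoke test against the server started by server.py.
# Port 8000 and the "prompt" field are assumptions taken from the template
# convention; adjust to match server.py/test.py if they differ.
import requests

model_inputs = {"prompt": "The meaning of life is"}

response = requests.post("http://localhost:8000/", json=model_inputs)
print(response.json())
```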

Make it your own:

  1. Edit `app.py` to load and run your model.
  2. Make sure to test with `test.py`!
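
As a rough guide, the Banana template convention is a global `init()` that loads the model once and an `inference(model_inputs)` function that handles each request. The sketch below assumes that convention and the HuggingFace `togethercomputer/GPT-JT-6B-v1` checkpoint; treat the function bodies and generation settings as placeholders and keep whatever this repo's `app.py` actually defines.

```python
# Hypothetical shape of app.py, following the Banana template convention
# of init() for one-time model loading and inference() per request.
import torch
from transformers import pipeline

model = None

def init():
    # Load GPT-JT once at startup so every request reuses the warm model.
    global model
    device = 0 if torch.cuda.is_available() else -1
    model = pipeline(
        "text-generation",
        model="togethercomputer/GPT-JT-6B-v1",
        device=device,
    )

def inference(model_inputs: dict) -> dict:
    # Read the prompt from the request body and return generated text.
    prompt = model_inputs.get("prompt")
    if prompt is None:
        return {"message": "No prompt provided"}
    result = model(prompt, max_new_tokens=64)
    return {"output": result[0]["generated_text"]}
```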

If deploying with Docker:

  1. Edit `download.py` (or the Dockerfile itself) with scripts that download your custom model weights at build time.
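
The point of doing this at build time is that the weights get baked into the image, so cold starts don't have to pull them from the HuggingFace Hub. A minimal sketch of `download.py`, assuming it only needs to populate the HuggingFace cache (the checkpoint name is an assumption; use whichever model your `app.py` loads):

```python
# download.py - run during the Docker build so model weights are cached in
# the image instead of being downloaded on the first request.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "togethercomputer/GPT-JT-6B-v1"  # assumed checkpoint; match app.py

def download_model():
    # Downloading the tokenizer and model populates the local HF cache.
    AutoTokenizer.from_pretrained(MODEL_NAME)
    AutoModelForCausalLM.from_pretrained(MODEL_NAME)

if __name__ == "__main__":
    download_model()
```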

Move to prod:

At this point, you have a functioning HTTP server for your ML model. You can use it as is, or package it up with our provided Dockerfile and deploy it to your favorite container hosting provider!

If Banana is your favorite GPU hosting provider (and we sure hope it is), read on!

🍌

Deploy to Banana Serverless:

  • Log in to the Banana App
  • Select your customized repo for deploy!

It'll then be built from the Dockerfile, optimized, and deployed on our Serverless GPU cluster, callable with any of our SDKs.
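
For example, a call from the `banana_dev` Python SDK (`pip install banana-dev`) might look like the sketch below. The API key and model key are placeholders you'd copy from your Banana dashboard, and the `run()` call shape follows the SDK's older helper; consult the current SDK docs for the exact signature.

```python
# Calling the deployed model through the banana_dev Python SDK.
# Keys below are placeholders from the Banana dashboard.
import banana_dev as banana

api_key = "YOUR_API_KEY"      # account API key
model_key = "YOUR_MODEL_KEY"  # key of the deployed model
model_inputs = {"prompt": "The meaning of life is"}

output = banana.run(api_key, model_key, model_inputs)
print(output)
```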

You can monitor build-time and runtime logs by clicking the logs button in the model view on the [Banana Dashboard](https://app.banana.dev).


Use Banana for scale.
