# Meta's Llama-3.1-8B Model with FastAPI

This repository contains code to run Meta's Llama-3.1-8B model locally behind a FastAPI server. The model weights are downloaded from Hugging Face.

- I wrote this project while learning the concepts and how to use them; a sketch of how the server might be wired up is shown below.
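The repository's actual `main.py` isn't reproduced here, so the following is only a minimal sketch of how such a server could look. It assumes the Hugging Face `transformers` pipeline API, the `meta-llama/Llama-3.1-8B` model ID, and a hypothetical `/generate` endpoint and payload shape:

```python
# main.py -- a minimal sketch; the endpoint name and request schema are assumptions,
# not necessarily what this repository implements.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Llama-3.1-8B is gated on Hugging Face: you must accept Meta's license
# and authenticate first (e.g. `huggingface-cli login`).
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B",
    device_map="auto",  # place weights on a GPU if one is available
)

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(prompt: Prompt):
    # Run the model and return only the generated text.
    result = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"output": result[0]["generated_text"]}
```

Note that an 8B-parameter model needs roughly 16 GB of memory in fp16, so a GPU (or quantization) is strongly recommended for usable latency.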

## How to Run Locally

1. Clone this repository to your local machine.
2. Install the required dependencies: `pip install -r requirements.txt`.
3. Start the FastAPI server: `uvicorn main:app --reload`.
4. The API is now available at `http://localhost:8000` (see the example request after this list).
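Once the server is up, a request like the following should return generated text. This assumes the hypothetical `/generate` endpoint and JSON schema from the sketch above:

```python
# Example client call -- assumes the /generate endpoint sketched earlier.
import requests

resp = requests.post(
    "http://localhost:8000/generate",
    json={"text": "Explain FastAPI in one sentence.", "max_new_tokens": 64},
)
resp.raise_for_status()
print(resp.json()["output"])
```

FastAPI also serves interactive docs at `http://localhost:8000/docs`, which is a convenient way to try the endpoint from a browser.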


Feel free to explore and modify the code as needed. Enjoy running Meta's Llama model locally!
