Releases: runpod/runpod-python

1.2.2

04 Oct 19:59
32a0af4

Added

  • User queries and mutations are now available in the Python API wrapper.
  • start_ssh added, defaulting to True when creating new pods.
  • network_volume_id can now be passed in when creating new pods; the correct data center is selected automatically.
  • template_id can now be passed in when creating new pods.
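A minimal sketch of creating a pod with the new arguments (the image, GPU type, network volume ID, and template ID below are placeholders, and the keyword names assume the current create_pod signature):

import runpod

runpod.api_key = "YOUR_API_KEY"

# start_ssh and support_public_ip default to True; shown here for clarity.
# Passing network_volume_id selects the matching data center automatically.
pod = runpod.create_pod(
    name="example-pod",
    image_name="runpod/base:0.0.0",        # placeholder image
    gpu_type_id="NVIDIA GeForce RTX 3090",
    start_ssh=True,
    support_public_ip=True,
    network_volume_id="your-network-volume-id",
    template_id="your-template-id",
)
print(pod["id"])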

Changes

  • Dependencies updated to the latest versions.
  • Reduced circular imports for version reference.
  • support_public_ip now defaults to True when creating new pods.

Fixed

  • Reduced pool_connections for ping requests to 10.
  • Doubled the timeout for ping requests.

Full Changelog: 1.2.1...1.2.2

1.2.1

22 Sep 21:17
ab8d2a7

Added

  • The library version is now reported when an error is returned in serverless.
  • Log level can be set with the RUNPOD_LOG_LEVEL environment variable.
  • A SIGTERM handler is initialized when starting the serverless worker to avoid hung workers.
  • Progress update method exposed: runpod.serverless.progress_update can be called with the job object and a string.
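A minimal sketch of reporting progress from a handler (the handler body and progress messages are illustrative):

import runpod

def handler(job):
    runpod.serverless.progress_update(job, "Preprocessing input...")
    result = job["input"]  # placeholder for real work
    runpod.serverless.progress_update(job, "Finalizing output...")
    return result

runpod.serverless.start({"handler": handler})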

Fixed

  • Region is included when using S3 storage via rp_upload; it is filled in automatically for Amazon S3 buckets and DigitalOcean Spaces.
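A rough sketch of the upload path this fix affects (the environment variable names are an assumption about how rp_upload is configured, and the endpoint and credentials are placeholders):

import os
from runpod.serverless.utils import rp_upload

# Placeholder bucket credentials; the region segment of the endpoint is now
# detected automatically for Amazon S3 and DigitalOcean Spaces.
os.environ["BUCKET_ENDPOINT_URL"] = "https://my-bucket.s3.us-east-1.amazonaws.com"
os.environ["BUCKET_ACCESS_KEY_ID"] = "YOUR_ACCESS_KEY"
os.environ["BUCKET_SECRET_ACCESS_KEY"] = "YOUR_SECRET_KEY"

url = rp_upload.upload_image("job-id-123", "/tmp/output.png")
print(url)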

Full Changelog: 1.2.0...1.2.1

1.2.0

30 Aug 03:16
836a43d

Added

  • Command Line Interface (CLI)
  • Can generate a credentials file from the CLI to store your API key.
  • get_gpu now supports gpu_quantity as a parameter.
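A small sketch of querying a GPU type with the new parameter (the GPU ID is an example value):

import runpod

runpod.api_key = "YOUR_API_KEY"

# Query a GPU type, specifying how many GPUs of that type you intend to use.
gpu = runpod.get_gpu("NVIDIA GeForce RTX 3090", gpu_quantity=2)
print(gpu)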

Changes

  • Minimized the use of pytest in favor of unittest.
  • Renamed api_wrapper to api for consistency.
  • The aiohttp_retry package replaced the rp_retry.py implementation.

Fixed

  • Serverless bug where a task was not removed if it failed to submit the results.
  • Added missing get_pod.
  • Removed an extra print statement when making API calls.

Full Changelog: 1.1.3...1.2.0

1.1.3

20 Aug 02:58
a421768

Full Changelog: 1.1.2...1.1.3

1.1.2

18 Aug 16:16
4118877

Full Changelog: 1.1.1...1.1.2

1.1.1

14 Aug 18:22
f8f194b

Full Changelog: 1.1.0...1.1.1

1.1.0

08 Aug 23:25
c393d9c

Bug Fix

Fixes a bug where the ping health monitor would stop if the handler contained a blocking function.

Full Changelog: 1.0.1...1.1.0

1.0.1

07 Aug 03:34
4e0cad0

Full Changelog: 1.0.0...1.0.1

1.0.0

03 Aug 15:28
102592a

We're thrilled to announce the release of RunPod 1.0.0! After refining our CI/CD pipeline and gathering valuable feedback from thousands of users, we are ready to introduce significant enhancements and fixes.

New Features

Multi-Job Concurrency

Workers are now smarter and more efficient. They can fetch and process multiple jobs in parallel, accelerating your workflows and productivity. Here's what's new:

  • Flexible job fetching: You can now fine-tune a worker's operations. When starting a worker, pass a function as the concurrency_controller; this function determines the number of jobs the worker should fetch in parallel.
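A rough sketch of wiring this up (the concurrency_controller key name comes from this release, but the function's exact signature and return value here are assumptions; adjust to what your worker expects):

import runpod

def handler(job):
    return job["input"]  # placeholder work

def concurrency_controller():
    # Illustrative policy: fetch up to 4 jobs in parallel.
    return 4

runpod.serverless.start({
    "handler": handler,
    "concurrency_controller": concurrency_controller,
})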

Job Streaming

Our platform offers a powerful streaming feature that allows users to receive real-time updates on job outputs. This is particularly useful when dealing with Language Model tasks. We support two types of streaming generator functions: regular generator and async generator.

Regular Generator Function:

def generator_streaming(job):
    for i in range(5):
        output = f"Generated token output {i}"
        yield output

Async Generator Function:

import asyncio

async def async_generator_streaming(job):
    for i in range(5):
        output = f"Generated async token output {i}"
        yield output
        await asyncio.sleep(1)  # Simulate an asynchronous task (e.g., LLM processing time).

Usage:

To enable streaming, use either a regular or async generator function to yield the output results. The generator continuously produces token outputs, which are streamed to the client in real time.
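A minimal sketch of registering a generator handler with a worker (this assumes a generator handler is passed to serverless.start the same way as a regular handler):

import runpod

def generator_streaming(job):
    for i in range(5):
        yield f"Generated token output {i}"

# Each yielded value is streamed back to the client while the job runs.
runpod.serverless.start({"handler": generator_streaming})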

How to Stream:

To utilize the generator function for streaming, you must use the /stream/{jobid} endpoint. Clients will receive the streaming token outputs by making an HTTP request to this endpoint. The jobid parameter in the endpoint URL helps the server identify the specific job for which the streaming is requested.
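For illustration, a client-side request against the stream endpoint could look like the following (the base URL, endpoint ID, job ID, and API key are placeholders, not taken from this release):

import requests

ENDPOINT_ID = "your-endpoint-id"
JOB_ID = "your-job-id"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Poll the stream endpoint for any token outputs yielded so far.
response = requests.get(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/stream/{JOB_ID}",
    headers=HEADERS,
    timeout=30,
)
print(response.json())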

Real-Time Updates:

Once streaming is initiated, the /stream/{jobid} endpoint continuously returns generated token outputs as the generator function yields them. This provides real-time updates to the client, ensuring access to the latest results throughout the job's execution.

By using generator-type handlers for streaming, our platform enhances the user experience by delivering dynamic and up-to-date information for tasks involving Language Models and beyond.

Updates & Bug Fixes

We've also made some crucial improvements and squashed a few bugs:

  • Improved Initialization: The worker implementation now leverages asyncio, significantly improving initialization times. Your workers are now ready to go faster than ever!
  • Cohesive File Naming: We've renamed some files to improve cohesion and understanding. Now it's even easier to understand the purpose of each file in the project.

Full Changelog: 0.10.0...1.0.0

0.10.0

01 Jul 15:20

New Features

Test API Server

  • We introduced the ability to quickly deploy a locally hosted API server with your worker code. This is accomplished by calling your handler file with the --rp_api_serve argument. This allows for faster, more flexible testing environments. Check out our blog post for an example.

Log Level

  • For better control over logging, we now allow setting the log level by calling your handler with --rp_log_level set to the desired level. If this argument is present, it overrides RUNPOD_DEBUG_LEVEL.

Updates

Logger

  • We've made significant improvements to our logger for better clarity and control:
    • logger.py has been refactored and renamed to rp_logger.py for consistency.
    • Logging level defaults to DEBUG. Note that when your worker runs on RunPod we set it to ERROR unless otherwise specified.
    • Breaking Change: RUNPOD_DEBUG no longer controls whether or not logs are printed. To prevent all logs from printing, set RUNPOD_DEBUG_LEVEL to 0 or call your handler file with the argument --rp_log_level="NOTSET".

RunPod API Python Language Library

  • We've added more options for creating Pods:
    • You can now specify data_center_id and country_code when creating Pods, allowing for more precise control over pod creation.
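A brief sketch of pinning a pod to a location (the image, GPU type, data center ID, and country code are example values):

import runpod

runpod.api_key = "YOUR_API_KEY"

pod = runpod.create_pod(
    name="example-pod",
    image_name="runpod/base:0.0.0",        # placeholder image
    gpu_type_id="NVIDIA GeForce RTX 3090",
    data_center_id="US-OR-1",              # example value
    country_code="US",
)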

Full Changelog: 0.9.12...0.10.0