@dependabot dependabot bot commented on behalf of github Oct 20, 2025

Bumps the all-python-packages group with 10 updates:

| Package | From | To |
| --- | --- | --- |
| [pydantic](https://github.com/pydantic/pydantic) | 2.12.0 | 2.12.3 |
| [torch](https://github.com/pytorch/pytorch) | 2.8.0 | 2.9.0 |
| [torchvision](https://github.com/pytorch/vision) | 0.23.0 | 0.24.0 |
| [transformers](https://github.com/huggingface/transformers) | 4.56.2 | 4.57.1 |
| [llama-index-core](https://github.com/run-llama/llama_index) | 0.14.4 | 0.14.5 |
| [langchain-core](https://github.com/langchain-ai/langchain) | 0.3.79 | 1.0.0 |
| [unsloth](https://github.com/unslothai/unsloth) | 2025.10.1 | 2024.8 |
| [pylint](https://github.com/pylint-dev/pylint) | 4.0.0 | 4.0.1 |
| [ruff](https://github.com/astral-sh/ruff) | 0.14.0 | 0.14.1 |
| [mkdocs-material](https://github.com/squidfunk/mkdocs-material) | 9.6.21 | 9.6.22 |
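To see how these bumps map onto dependabot's patch/minor/major labels, here is a minimal sketch of the classification logic (not dependabot's actual code; the function names are made up). Note that unsloth uses calendar versioning, so its 2025.10.1 to 2024.8 entry actually moves backwards:

```python
# Classify each bump as patch / minor / major, roughly mirroring the
# "update-type" labels dependabot attaches. Illustrative only; CalVer
# packages like unsloth don't follow semver, so a backwards move is
# reported as a downgrade rather than a major update.

def parse(version: str) -> tuple[int, ...]:
    """Parse a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def update_type(old: str, new: str) -> str:
    o, n = parse(old), parse(new)
    if n < o:
        return "downgrade"
    if n[0] != o[0]:
        return "major"
    if len(o) > 1 and len(n) > 1 and n[1] != o[1]:
        return "minor"
    return "patch"

bumps = {
    "pydantic": ("2.12.0", "2.12.3"),
    "torch": ("2.8.0", "2.9.0"),
    "langchain-core": ("0.3.79", "1.0.0"),
    "unsloth": ("2025.10.1", "2024.8"),
}
for name, (old, new) in bumps.items():
    print(f"{name}: {update_type(old, new)}")
```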

Updates pydantic from 2.12.0 to 2.12.3

Release notes

Sourced from pydantic's releases.

v2.12.3 2025-10-17

v2.12.3 (2025-10-17)

What's Changed

This is the third 2.12 patch release, fixing issues related to the FieldInfo class and reverting a change to the supported after model validator function signatures.

  • Raise a warning when an invalid after model validator function signature is used by @​Viicos in #12414. Starting in 2.12.0, using class methods for after model validators raised an error, but the error wasn't raised consistently. We decided to emit a deprecation warning instead.
  • Add FieldInfo.asdict() method, improve documentation around FieldInfo by @​Viicos in #12411. This also adds back support for mutations on FieldInfo instances that are reused as Annotated metadata. However, note that this is still not a supported pattern; instead, please refer to the added example in the documentation.

The blog post section on changes was also updated to document the changes related to serialize_as_any.
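The after-validator change above can be illustrated with a short sketch (assumes pydantic v2 is installed; the model name and fields are made up for illustration). A plain instance-method signature is the supported form; the classmethod-style signature that 2.12.0 briefly rejected now only triggers a deprecation warning per #12414:

```python
from pydantic import BaseModel, model_validator

class Interval(BaseModel):
    # Hypothetical model, for illustration only.
    lo: int
    hi: int

    @model_validator(mode="after")
    def check_order(self):
        # Supported signature: a plain instance method that returns self.
        if self.lo > self.hi:
            raise ValueError("lo must not exceed hi")
        return self

print(Interval(lo=1, hi=3))
```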

Full Changelog: pydantic/pydantic@v2.12.2...v2.12.3

v2.12.2 2025-10-14

v2.12.2 (2025-10-14)

What's Changed

Fixes

  • Release a new pydantic-core version, as a corrupted CPython 3.10 manylinux2014_aarch64 wheel got uploaded (pydantic-core#1843).
  • Fix issue with recursive generic models with a parent model class by @​Viicos in #12398

Full Changelog: pydantic/pydantic@v2.12.1...v2.12.2

v2.12.1 2025-10-13

v2.12.1 (2025-10-13)

GitHub release

What's Changed

This is the first 2.12 patch release, addressing most (though not yet all) regressions from the initial 2.12.0 release.

Fixes

New Contributors

... (truncated)


Commits
  • 1a8850d Prepare release 2.12.3
  • 09dbcf2 Add FieldInfo.asdict() method, improve documentation around FieldInfo
  • 5da4331 Improve documentation about serialize as any behavior
  • 9c86324 Raise a warning when an invalid after model validator function signature is r...
  • 36a73c6 Update pydantic-extra-types dependency to version >=2.10.6
  • 1e616a3 Prepare release v2.12.2
  • dc302e2 Fix issue with recursive generic models with a parent model class
  • 6876485 Bump pydantic-core to v2.41.4
  • b4076c6 Prepare release 2.12.1
  • b67f072 Bump pydantic-core to v2.41.3
  • Additional commits viewable in compare view

Updates torch from 2.8.0 to 2.9.0

Release notes

Sourced from torch's releases.

2.9 Release Notes

PyTorch 2.9.0 Release Notes

Highlights

For more details about these highlighted features, you can look at the release blogpost. Below are the full release notes for this release.

Backwards Incompatible Changes

... (truncated)

Commits
  • 0fabc3b CUDA aarch64 12.6 and 12.8 builds fix triton constraints (#165022)
  • 26e023a [MPS] Update OS version in error message (#164949)
  • 6f12be2 CUDA 13.0 builds fix on Amazon Linux 2023 (#164893)
  • 42f0c2c update the baseline data for the operator benchmark (#164789)
  • b015422 fix cpp extension distributed warning spew (#164785)
  • d4c4307 Fix docker build issue after 164575 (#164779)
  • 3b57315 [ROCm] Increase binary build timeout to 5 hours (300 minutes) (#164770)
  • c74f057 Pin conda version for Docker builds (#164579)
  • fd36458 [Cherry-Pick] Work Around exposing statically linked libstdc++ CXX11 ABI stro...
  • 2f6387e [CherrryPick][2.9] Cherry pick request for `Reapply "Make functionalization V...
  • Additional commits viewable in compare view

Updates torchvision from 0.23.0 to 0.24.0

Release notes

Sourced from torchvision's releases.

Torchvision 0.24 release

Improving KeyPoints and Rotated boxes support!

We are releasing a tutorial on how to use KeyPoint transformations in our Transforms on KeyPoints with a preview below!

[!NOTE] These features are still in BETA status. The APIs are unlikely to change, but there may be some rough edges, and we may make slight bug fixes in future releases. Please let us know if you encounter any issues!

Detailed changes

Improvements

  • [ops] Improve efficiency of the box_area and box_iou functions by eliminating the intermediate conversion to "xyxy" (#8992)
  • [ops] Update box operations to support arbitrary batch dimensions (#9058)
  • [utils] Add control for the background color of label text boxes (#9204)
  • [transforms] Add support for uint8 image format to the GaussianNoise transform (#9169)
  • [transforms] Accelerate the resize transform on machines with AVX512 (#9190)
  • [transforms] Better error handling in RandomApply for empty list of transforms (#9130)
  • [documentation] New tutorial for KeyPoints transforms (#9209)
  • [documentation] Various documentation improvements (#9186, #9180, #9172)
  • [code quality] Various code quality improvements (#9193, #9161, #9201, #9218, #9160)

Bug Fixes and deprecations

  • [transforms] Fix output of some geometric transforms for rotated boxes (#9181, #9175)
  • [transforms] Fix clamping for key points and add sanitization feature (#9236, #9235)
  • [datasets] Update download links to official repo for the Caltech-101 & 256 datasets (#9205)
  • [ops] Raise error in drop_block[2,3]d by enforcing odd-sized block sizes (#9157)
  • [io] Removed deprecated video_reader video decoding backend (#9208)
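The box_area/box_iou improvement above is about computing areas directly in each box format instead of converting to "xyxy" first. For reference, a minimal pure-Python sketch of IoU on "xyxy" boxes (not torchvision's implementation, which operates on batched tensors; the helper names here are made up):

```python
# Minimal IoU for axis-aligned boxes in "xyxy" format: (x1, y1, x2, y2).
# Illustrative only; torchvision's box_iou works on N x 4 tensors.

Box = tuple[float, float, float, float]

def box_area(box: Box) -> float:
    x1, y1, x2, y2 = box
    # Clamp to zero so degenerate (inverted) boxes have zero area.
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def box_iou(a: Box, b: Box) -> float:
    # Intersection rectangle of the two boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = box_area((ix1, iy1, ix2, iy2))
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # prints 1/7 ≈ 0.142857...
```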

Contributors

🎉 We're grateful for our community, which helps us improve Torchvision by submitting issues and PRs, and providing feedback and suggestions. The following persons have contributed patches for this release: @​alperenunlu, @​AndreiMoraru123, @​atalman, @​AntoineSimoulin, @​5had3z, @​dcasbol, @​GdoongMathew, @​hrsvrn, @​JonasKlotz, @​zklaus, @​NicolasHug, @​rdong8, @​scotts, @​get9, @​diaz-esparza, @​ZainRizvi, @​Callidior, and @​pytorch/xla-devs

Commits

Updates transformers from 4.56.2 to 4.57.1

Release notes

Sourced from transformers's releases.

Patch release v4.57.1

This patch most notably fixes an issue with an optional dependency (optax), which resulted in parsing errors with poetry. It contains the following fixes:

v4.57.0: Qwen3-Next, Vault Gemma, Qwen3 VL, LongCat Flash, Flex OLMO, LFM2 VL, BLT, Qwen3 OMNI MoE, Parakeet, EdgeTAM, OLMO3

New model additions

Qwen3 Next

The Qwen3-Next series represents the Qwen team's next-generation foundation models, optimized for extreme context length and large-scale parameter efficiency. The series introduces a suite of architectural innovations designed to maximize performance while minimizing computational cost:

  • Hybrid Attention: Replaces standard attention with the combination of Gated DeltaNet and Gated Attention, enabling efficient context modeling.
  • High-Sparsity MoE: Achieves an extremely low activation ratio of 1:50 in MoE layers, drastically reducing FLOPs per token while preserving model capacity.
  • Multi-Token Prediction (MTP): Boosts model performance during pretraining and accelerates inference.
  • Other Optimizations: Includes techniques such as zero-centered and weight-decayed layernorm, Gated Attention, and other stabilizing enhancements for robust training.

Built on this architecture, they trained and open-sourced Qwen3-Next-80B-A3B — 80B total parameters, only 3B active — achieving extreme sparsity and efficiency.

Despite its ultra-efficiency, it outperforms Qwen3-32B on downstream tasks — while requiring less than 1/10 of the training cost. Moreover, it delivers over 10x higher inference throughput than Qwen3-32B when handling contexts longer than 32K tokens.

For more details, please visit their blog Qwen3-Next (blog post).

Vault Gemma

VaultGemma is a text-only decoder model derived from Gemma 2, notably it drops the norms after the Attention and MLP blocks, and uses full attention for all layers instead of alternating between full attention and local sliding attention. VaultGemma is available as a pretrained model with 1B parameters that uses a 1024 token sequence length.

VaultGemma was trained from scratch with sequence-level differential privacy (DP). Its training data includes the same mixture as the Gemma 2 models, consisting of a number of documents of varying lengths. Additionally, it is trained using DP stochastic gradient descent (DP-SGD) and provides a (ε ≤ 2.0, δ ≤ 1.1e-10)-sequence-level DP guarantee, where a sequence consists of 1024 consecutive tokens extracted from heterogeneous data sources. Specifically, the privacy unit of the guarantee is for the sequences after sampling and packing of the mixture.

Qwen3 VL

Qwen3-VL is a multimodal vision-language model series, encompassing both dense and MoE variants, as well as Instruct and Thinking versions.

Building upon its predecessors, Qwen3-VL delivers significant improvements in visual understanding while maintaining strong pure text capabilities. Key architectural advancements include: enhanced MRope with interleaved layout for better spatial-temporal modeling, DeepStack integration to effectively leverage multi-level features from the Vision Transformer (ViT), and improved video understanding through text-based time alignment—evolving from T-RoPE to text timestamp alignment for more precise temporal grounding.

... (truncated)

Commits

Updates llama-index-core from 0.14.4 to 0.14.5

Release notes

Sourced from llama-index-core's releases.

v0.14.5

Release Notes

[2025-10-15]

llama-index-core [0.14.5]

  • Remove debug print (#20000)
  • safely initialize RefDocInfo in Docstore (#20031)
  • Add progress bar for multiprocess loading (#20048)
  • Fix duplicate node positions when identical text appears multiple times in document (#20050)
  • chore: tool call block - part 1 (#20074)

llama-index-instrumentation [0.4.2]

  • update instrumentation package metadata (#20079)

llama-index-llms-anthropic [0.9.5]

  • ✨ feat(anthropic): add prompt caching model validation utilities (#20069)
  • fix streaming thinking/tool calling with anthropic (#20077)
  • Add haiku 4.5 support (#20092)

llama-index-llms-baseten [0.1.6]

  • Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout Model APIs deprecation (#20042)

llama-index-llms-bedrock-converse [0.10.5]

  • feat: List Claude Sonnet 4.5 as a reasoning model (#20022)
  • feat: Support global cross-region inference profile prefix (#20064)
  • Update utils.py for opus 4.1 (#20076)
  • 4.1 opus bedrockconverse missing in function calling models (#20084)
  • Add haiku 4.5 support (#20092)

llama-index-llms-fireworks [0.4.4]

  • Add Support for Custom Models in Fireworks LLM (#20023)
  • fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue (#20028)

llama-index-llms-oci-genai [0.6.3]

  • Add support for xAI models in OCI GenAI (#20089)

llama-index-llms-openai [0.6.4]

  • Gpt 5 pro addition (#20029)
  • fix collecting final response with openai responses streaming (#20037)
  • Add support for GPT-5 models in utils.py (JSON_SCHEMA_MODELS) (#20045)
  • chore: tool call block - part 1 (#20074)

... (truncated)

Changelog

Sourced from llama-index-core's changelog.

llama-index-core [0.14.5]

  • Remove debug print (#20000)
  • safely initialize RefDocInfo in Docstore (#20031)
  • Add progress bar for multiprocess loading (#20048)
  • Fix duplicate node positions when identical text appears multiple times in document (#20050)
  • chore: tool call block - part 1 (#20074)

llama-index-instrumentation [0.4.2]

  • update instrumentation package metadata (#20079)

llama-index-llms-anthropic [0.9.5]

  • ✨ feat(anthropic): add prompt caching model validation utilities (#20069)
  • fix streaming thinking/tool calling with anthropic (#20077)
  • Add haiku 4.5 support (#20092)

llama-index-llms-baseten [0.1.6]

  • Baseten provider Kimi K2 0711, Llama 4 Maverick and Llama 4 Scout Model APIs deprecation (#20042)

llama-index-llms-bedrock-converse [0.10.5]

  • feat: List Claude Sonnet 4.5 as a reasoning model (#20022)
  • feat: Support global cross-region inference profile prefix (#20064)
  • Update utils.py for opus 4.1 (#20076)
  • 4.1 opus bedrockconverse missing in funcitoncalling models (#20084)
  • Add haiku 4.5 support (#20092)

llama-index-llms-fireworks [0.4.4]

  • Add Support for Custom Models in Fireworks LLM (#20023)
  • fix(llms/fireworks): Cannot use Fireworks Deepseek V3.1-20006 issue (#20028)

llama-index-llms-oci-genai [0.6.3]

  • Add support for xAI models in OCI GenAI (#20089)

llama-index-llms-openai [0.6.4]

  • Gpt 5 pro addition (#20029)
  • fix collecting final response with openai responses streaming (#20037)
  • Add support for GPT-5 models in utils.py (JSON_SCHEMA_MODELS) (#20045)
  • chore: tool call block - part 1 (#20074)

llama-index-llms-sglang [0.1.0]

  • Added Sglang llm integration (#20020)

... (truncated)

Commits

Updates langchain-core from 0.3.79 to 1.0.0

Release notes

Sourced from langchain-core's releases.

langchain-core==1.0.0

Release notes

langchain-core==1.0.0rc3

Initial release

  • release: joint rcs for core + langchain (#33549)
  • chore(langchain): allow injection of ToolRuntime and generic ToolRuntime[ContextT, StateT] (#33546)
  • chore: more sweeping (#33533)
  • release(core): 1.0.0rc2 (#33530)
  • docs(anthropic): update extended thinking docs and fix urls (#33525)
  • feat(core): support vertexai standard content (#33521)
  • style: more sweeping refs work (#33513)
  • style: more work for refs (#33508)
  • feat(core): include original block type in server tool results for google-genai (#33502)
  • release(core): 1.0.0rc1 (#33497)
  • chore(core): delete BaseMemory, move to langchain-classic (#33373)
  • docs: update package READMEs (#33488)
  • fix(core): propagate extras when aggregating tool calls in v1 content (#33494)
  • chore(core): delete items marked for removal in schemas.py (#33375)
  • fix(docs): Fix several typos and grammar (#33487)
  • chore(core): delete function_calling.py utils marked for removal (#33376)
  • chore(core): delete pydantic_v1/ (#33374)
  • feat(groq): support built-in tools in message content (#33459)
  • chore(core): delete get_relevant_documents (#33378)
  • style: llm -> model (#33423)
  • chore(langchain): remove arg types from docstrings (#33413)
  • style: fix tables, capitalization (#33417)
  • fix(core): handle parent/child mustache vars (#33345)
  • style: remove Defaults to None (#33404)
  • style: .. code-block:: admonition translations (#33400)
  • style: address Sphinx double-backtick snippet syntax (#33389)
  • chore(core): remove arg types from docstrings (#33388)
  • chore: update Sphinx links to markdown (#33386)
  • fix(core): override streaming callback if streaming attribute is set (#33351)
  • refactor(core): clean up sys_info.py (#33372)
  • style: remove more Optional syntax (#33371)
  • chore: drop UP045 (#33362)
  • refactor(core): remove keep-runtime-typing from pyproject.toml following dropping 3.9 (#33360)
  • style: monorepo pass for refs (#33359)
  • fix(core): don't print package if no version found (#33347)
  • chore: enrich pyproject.toml files with links to new references, others (#33343)
  • release(core): 1.0.0a8 (#33341)
  • fix(core): add back add_user_message and add_ai_message (#33340)
  • release(core): 1.0.0a7 (#33309)
  • fix(core,openai,anthropic): delegate to core implementation on invoke when streaming=True (#33308)
  • fix(core): fix string content when streaming output_version="v1" (#33261)
  • chore(infra): pdm -> hatchling (#33289)
  • style(core): drop python 39 linting target for 3.10 (#33286)
  • chore(core): docstring nits (#33285)

... (truncated)

Commits
  • 90346b8 release(core): 1.0.0 (#33562)
  • 2d5efd7 fix(core): support for Python 3.14 (#33461)
  • 1d22735 docs: more fixes for refs (#33554)
  • 9dd494d fix(langchain): conditional tools -> end edge when all client side calls retu...
  • 2fa07b1 chore(langchain_v1): relax typing on input state (#33552)
  • a022e3c feat(langchain_v1): Add ShellToolMiddleware and ClaudeBashToolMiddleware (#33...
  • e0e1142 feat(langchain): file-search middleware (#33551)
  • 34de8ec feat(anthropic): add more anthropic middleware (#33510)
  • 3d288fd release: joint rcs for core + langchain (#33549)
  • 055cccd chore(langchain): allow injection of ToolRuntime and generic `ToolRuntime[C...
  • Additional commits viewable in compare view
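Because langchain-core crosses a major version boundary here (0.3.79 to 1.0.0, which deletes deprecated APIs such as pydantic_v1/ and BaseMemory), downstream projects may want an explicit upper bound rather than a floating pin. A hypothetical pyproject.toml fragment (the project name is made up):

```toml
[project]
name = "example-app"  # hypothetical project for illustration
dependencies = [
    # Opt in to the 1.x line explicitly; 1.0 removes APIs deprecated
    # during 0.3.x (e.g. pydantic_v1/, BaseMemory, get_relevant_documents).
    "langchain-core>=1.0.0,<2.0.0",
]
```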

Updates unsloth from 2025.10.1 to 2024.8

Release notes

Sourced from unsloth's releases.

gpt-oss Reinforcement Learning + Auto Kernel Notebook

We’re introducing gpt-oss RL support and the fastest RL inference and lowest VRAM use vs. any implementation. Blog: https://docs.unsloth.ai/new/gpt-oss-reinforcement-learning

  • Unsloth now offers the fastest inference (~3x faster), lowest VRAM (50% less) and most context (8x longer) for gpt-oss RL vs. any implementation - with no accuracy loss.
  • Since RL on gpt-oss isn't yet vLLM compatible, we rewrote Transformers inference code to enable faster inference
  • gpt-oss-20b GSPO free Colab notebook
  • This notebook automatically creates faster matrix multiplication kernels and uses a new Unsloth reward function. We also show how to counteract reward-hacking which is one of RL's biggest challenges.
  • We previously released Vision RL with GSPO support
  • ⚠️ Reminder to NOT use Flash Attention 3 for gpt-oss as it'll make your training loss wrong.
  • DeepSeek-V3.1-Terminus is here and you can run locally via our GGUF Read how our 3-bit GGUF beats Claude-4-Opus (thinking) on Aider Polyglot here
  • Magistral 1.2 is here and you can run it locally here or fine-tune it for free by using our Kaggle notebook
  • Fine-tuning the new Qwen3 models including Qwen3-VL, Qwen3-Omni and Qwen3-Next should work in Unsloth if you install the latest transformers. The models are big however so ensure you have enough VRAM.
  • BERT is now fixed! Feel free to use our BERT fine-tuning notebook
  • ⭐ We’re hosting a Developer event with Mistral AI & NVIDIA at Y Combinator’s Office in San Francisco on Oct 21. Come say hello!
  • We’re also joining Pytorch and AMD for a 2 day Virtual AI Agents Challenge with prizes. Join Hackathon

Don't forget to also join our Reddit: r/unsloth 🥰

What's Changed

New Contributors

Full Changelog: unslothai/unsloth@September-2025-v2...September-2025-v3

Vision Reinforcement Learning + Memory Efficient RL

We're excited to support Vision models for RL and even more memory efficient + faster RL!

Unsloth now supports vision/multimodal RL with Gemma 3, Qwen2.5-VL and other vision models. Due to Unsloth's unique weight sharing and custom kernels, Unsloth makes VLM RL 1.5–2× faster, uses 90% less VRAM, and enables 10× longer context lengths than FA2 setups, with no accuracy loss. Qwen2.5-VL GSPO notebook Gemma 3 (4B) Vision GSPO notebook

Full details in our blogpost: https://docs.unsloth.ai/new/vision-reinforcement-learning-vlm-rl

  • This update also introduces Qwen's GSPO algorithm.
  • Our new vision RL support is now even faster and more memory efficient: new kernels and algorithms allow faster RL for text and vision LLMs with 50% less VRAM and 10× more context.
  • Introducing a new RL feature called 'Standby'. Previously, RL required splitting the GPU between training and inference; with Unsloth Standby you no longer have to. Standby uniquely limits speed degradation compared to other implementations and sometimes makes training even faster. Read our Blog

... (truncated)

Commits

Updates pylint from 4.0.0 to 4.0.1

Commits
  • 9a30350 Bump pylint to 4.0.1, update changelog (#10667)
  • 0ad9d26 [Backport maintenance/4.0.x] Check enums created with functional syntax again...
  • 60a01e4 [Backport maintenance/4.0.x] Improve conditionals (#10655)
  • e60b80e [Backport maintenance/4.0.x] Fix unused-variable false positive with `__all...
  • abcf2ed [Backport maintenance/4.0.x] Fix false-positive for bare-name-capture-pattern...
  • c13b2b6 [Backport maintenance/4.0.x] Fix reference in 4.0 whatsnew (#10645)
  • See full diff in compare view

Updates ruff from 0.14.0 to 0.14.1

Release notes

Sourced from ruff's releases.

0.14.1

Release Notes

Released on 2025-10-16.

Preview features

  • [formatter] Remove parentheses around multiple exception types on Python 3.14+ (#20768)
  • [flake8-bugbear] Omit annotation in preview fix for B006 (#20877)
  • [flake8-logging-format] Avoid dropping implicitly concatenated pieces in the G004 fix (#20793)
  • [pydoclint] Implement docstring-extraneous-parameter (DOC102) (#20376)
  • [pyupgrade] Extend UP019 to detect typing_extensions.Text (UP019) (#20825)
  • [pyupgrade] Fix false negative for TypeVar with default argument in non-pep695-generic-class (UP046) (#20660)

Bug fixes

  • Fix false negatives in Truthiness::from_expr for lambdas, generators, and f-strings (#20704)
  • Fix syntax error false positives for escapes and quotes in f-strings (#20867)
  • Fix syntax error false positives on parenthesized context managers (#20846)
  • [fastapi] Fix false positives for path parameters that FastAPI doesn't recognize (FAST003) (#20687)
  • [flake8-pyi] Fix operator precedence by adding parentheses when needed (PYI061) (#20508)
  • [ruff] Suppress diagnostic for f-string interpolations with debug text (RUF010) (#20525)

Rule changes

  • [airflow] Add warning to airflow.datasets.DatasetEvent usage (AIR301) (#20551)
  • [flake8-bugbear] Mark B905 and B912 fixes as unsafe (#20695)
  • Use DiagnosticTag for more rules - changes display in editors (#20758,#20734...
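Several of the 0.14.1 changes above are preview features, which ruff only applies when preview mode is enabled. A sketch of the opt-in, as a pyproject.toml fragment (rule code DOC102 comes from the notes above; treat the exact settings as an assumption to verify against ruff's docs):

```toml
[tool.ruff]
# Preview rules (e.g. pydoclint's DOC102, added in 0.14.1) require opt-in.
preview = true

[tool.ruff.lint]
extend-select = ["DOC102"]
```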

    Description has been truncated


Updates `pydantic` from 2.12.0 to 2.12.3
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](pydantic/pydantic@v2.12.0...v2.12.3)

Updates `torch` from 2.8.0 to 2.9.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.8.0...v2.9.0)

Updates `torchvision` from 0.23.0 to 0.24.0
- [Release notes](https://github.com/pytorch/vision/releases)
- [Commits](pytorch/vision@0.23.0...v0.24.0)

Updates `transformers` from 4.56.2 to 4.57.1
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.56.2...v4.57.1)

Updates `llama-index-core` from 0.14.4 to 0.14.5
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.14.4...v0.14.5)

Updates `langchain-core` from 0.3.79 to 1.0.0
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain-core==0.3.79...langchain-core==1.0.0)

Updates `unsloth` from 2025.10.1 to 2024.8
- [Release notes](https://github.com/unslothai/unsloth/releases)
- [Commits](https://github.com/unslothai/unsloth/commits)

Updates `pylint` from 4.0.0 to 4.0.1
- [Release notes](https://github.com/pylint-dev/pylint/releases)
- [Commits](pylint-dev/pylint@v4.0.0...v4.0.1)

Updates `ruff` from 0.14.0 to 0.14.1
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](astral-sh/ruff@0.14.0...0.14.1)

Updates `mkdocs-material` from 9.6.21 to 9.6.22
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](squidfunk/mkdocs-material@9.6.21...9.6.22)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-version: 2.12.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: torch
  dependency-version: 2.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-python-packages
- dependency-name: torchvision
  dependency-version: 0.24.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-python-packages
- dependency-name: transformers
  dependency-version: 4.57.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-python-packages
- dependency-name: llama-index-core
  dependency-version: 0.14.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: langchain-core
  dependency-version: 1.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: all-python-packages
- dependency-name: unsloth
  dependency-version: '2024.8'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: all-python-packages
- dependency-name: pylint
  dependency-version: 4.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: ruff
  dependency-version: 0.14.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
- dependency-name: mkdocs-material
  dependency-version: 9.6.22
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-python-packages
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python:uv Pull requests that update python:uv code labels Oct 20, 2025