
Conversation

@ParamThakkar123 (Contributor)

I am getting these errors quite frequently from the API after I fetched the latest version from main

[screenshot: API errors]

@codecov bot commented Nov 15, 2025

Codecov Report

❌ Patch coverage is 0% with 22 lines in your changes missing coverage. Please review.

Files with missing lines          | Patch % | Lines
transformerlab/shared/shared.py   | 0.00%   | 22 Missing ⚠️


@ParamThakkar123 (Contributor, author)

> I am getting these errors quite frequently from the API after I fetched the latest version from main
> [screenshot: API errors]

This one got fixed after plugin reinstallation

ParamThakkar123 and others added 2 commits November 16, 2025 16:28
…hrough an exception

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
@ParamThakkar123 (Contributor, author)

Depends on transformerlab/transformerlab-app#882

@deep1401 (Member)

> I am getting these errors quite frequently from the API after I fetched the latest version from main
> [screenshot: API errors]

This is probably caused by an older version of the plugin; the error will go away when you update it.

@deep1401 (Member) left a comment


The `tail` field in job data stored things correctly for me, but I think the initial startup of the subprocess is the part that takes time, and there is no console output during it. That's why you'll only see "Starting model..." for most plugins; only the ones whose actual process also takes a while will show these lines.

Maybe @dadmobile can help us resolve this, but I don't think this will solve the problem, right?

Also, side note: we shouldn't need to add a new server-logs route in the API, right? I thought you were already reading from job data directly on the frontend.

Just for reference, this is what `tail` had in job data:

{
  "tail": [
    "2025-11-17 07:55:03 | INFO | model_worker | Loading the model ['Qwen3-0.6B'] on workerb617e619, worker type: MLX worker...",
    "2025-11-17 07:55:03 | INFO | model_worker | Model architecture: Qwen3ForCausalLM",
    "2025-11-17 07:55:04 | ERROR | stderr | \rFetching 7 files:   0%|          | 0/7 [00:00<?, ?it/s]",
    "2025-11-17 07:55:04 | ERROR | stderr | \rFetching 7 files: 100%|██████████| 7/7 [00:00<00:00, 30551.64it/s]",
    "2025-11-17 07:55:04 | ERROR | stderr |",
    "2025-11-17 07:55:05 | INFO | stdout | Context length:  40960"
  ]
}
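A bounded `tail` like the one above can be kept with a simple ring buffer. This is a minimal sketch, not the actual transformerlab implementation; the cap and helper names are assumptions for illustration.

```python
import json
from collections import deque

TAIL_MAX_LINES = 100  # assumed cap; the real limit lives in the transformerlab code


def make_tail_buffer(maxlen=TAIL_MAX_LINES):
    """Bounded buffer: the oldest lines fall off as new ones arrive."""
    return deque(maxlen=maxlen)


def append_log_line(buf, line):
    """Append one line of subprocess output, stripping the trailing newline."""
    buf.append(line.rstrip("\n"))


def job_data_with_tail(buf):
    """Serialize the buffer the way the job data excerpt above stores it."""
    return json.dumps({"tail": list(buf)})


# Usage: stream subprocess output into the buffer as it arrives.
buf = make_tail_buffer(maxlen=3)
for raw in ["line 1\n", "line 2\n", "line 3\n", "line 4\n"]:
    append_log_line(buf, raw)

print(job_data_with_tail(buf))  # keeps only the last 3 lines
```

With `maxlen` set, `deque` drops the oldest entry automatically on append, so the stored job data never grows without bound.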

@ParamThakkar123 (Contributor, author)

Changes

I updated the code to use the JobData and removed the separate server-logs route
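Reading the tail straight from job data, rather than from a dedicated server-logs route, can be sketched like this. The polling/diffing logic is illustrative only, assuming the `tail` field shown earlier in the thread; the function name is hypothetical.

```python
def new_tail_lines(job_data, lines_already_shown):
    """Return tail lines not yet rendered, plus the new rendered count.

    `job_data` is the job's stored blob (with the `tail` list shown
    earlier in the thread). Index-based diffing is a simplification and
    would need care once old lines start falling off a bounded buffer.
    """
    tail = job_data.get("tail", [])
    start = min(lines_already_shown, len(tail))
    return tail[start:], len(tail)


# Usage: each poll of the job data renders only the delta.
job_data = {
    "tail": [
        "Loading the model...",
        "Model architecture: Qwen3ForCausalLM",
        "Context length: 40960",
    ]
}
fresh, shown = new_tail_lines(job_data, 1)
print(fresh)  # the two lines added since the last poll
```

Since the frontend already polls job data for status, piggybacking the tail on the same payload avoids a second request per poll, which matches the decision to drop the separate route.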

@deep1401 (Member)

To be ported into transformerlab/transformerlab-app#882

@ParamThakkar123 (Contributor, author)

@deep1401 I already added this code to the linked pull request on the app. We can probably close this :)

@deep1401 (Member)

Code has been ported to the App, so closing this

@deep1401 closed this Nov 18, 2025