
Skipping unknown extra network: lora #4

Open
Marootc opened this issue Apr 25, 2023 · 9 comments

Comments

Marootc commented Apr 25, 2023

I added `lora:xxxxx:0.7` to the prompt in this format, but the console prints `Skipping unknown extra network: lora`.
How can I fix this?

wolverinn (Owner) commented

Did you put the corresponding LoRA model into the models/Lora folder?

Marootc (Author) commented Apr 25, 2023

@wolverinn Yes, I already put the LoRA file in the specified folder! But when I use the LoRA I still get this message and it has no effect!

Marootc (Author) commented Apr 25, 2023

@wolverinn AUTOMATIC1111/stable-diffusion-webui#7984 is a similar issue with a proposed solution that I found, but I don't know how to apply it to this problem.

Noobmaster6978 commented

> @wolverinn AUTOMATIC1111/stable-diffusion-webui#7984 is a similar issue with a proposed solution that I found, but I don't know how to apply it to this problem.

I've been trying to fix this bug with that solution for the past few days, but it's not getting resolved.

wolverinn (Owner) commented

I changed the code following the fix described in that issue; you can give it a try.

Marootc (Author) commented Apr 26, 2023

@wolverinn Still the same:

```
Checkpoint chilloutmix_NiPrunedFp32Fix.safetensors [fc2511737a] not found; loading fallback v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights [6ce0161689] from /root/autodl-tmp/stable-diffusion-multi-user/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /root/autodl-tmp/stable-diffusion-multi-user/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 2.5s (create model: 0.5s, apply weights to model: 0.4s, apply half(): 0.3s, load VAE: 0.9s, move model to device: 0.4s).
System check identified some issues:
```

```
get request: {'prompt': 'A black long hair, white shirt and tie the girl wearing a black jacket and white shirt,<lora:taiwanDollLikeness_v10:0.7>', 'negative_prompt': '(realistic:0.1~1), (low quality, worst quality:1.4), bad image, bad anatomy', 'sampler_name': 'DPM++ SDE Karras', 'steps': 20, 'cfg_scale': 7, 'width': 512, 'height': 512, 'seed': -1, 'do_not_save_samples': True, 'do_not_save_grid': True, 'enable_hr': True}
Calculating sha256 for /root/autodl-tmp/stable-diffusion-multi-user/simple/../models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors: fc2511737a54c5e80b89ab03e0ab4b98d051ab187f92860f3cd664dc9d08b271
Loading weights [fc2511737a] from /root/autodl-tmp/stable-diffusion-multi-user/simple/../models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors
Applying cross attention optimization (Doggettx).
Weights loaded in 11.2s (calculate hash: 10.3s, apply weights to model: 0.5s, move model to device: 0.3s).
Loading VAE weights from unknown source: /root/autodl-tmp/stable-diffusion-multi-user/simple/../models/VAE/CamelliaMix.safetensors
Skipping unknown extra network: lora
100%|██████████| 20/20 [00:09<00:00, 2.02it/s]
100%|██████████| 20/20 [00:58<00:00, 2.90s/it]
[26/Apr/2023 10:02:01] "POST /txt2img/ HTTP/1.1" 200 1568953
100%|██████████| 40/40 [01:08<00:00, 2.60s/it]
```

After updating the code, the same problem still occurs.
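For context, the `Skipping unknown extra network: lora` line in the log above is what the webui prints when a `<type:...>` token in the prompt has no registered handler. A minimal sketch of that dispatch mechanism, with illustrative names (`extra_network_registry`, `activate_from_prompt` are not real webui identifiers):

```python
# Sketch of A1111-style extra-network dispatch (illustrative, not webui source).
import re

extra_network_registry = {}  # network type ("lora", "hypernet", ...) -> handler


def register_extra_network(name, handler):
    extra_network_registry[name] = handler


def activate_from_prompt(prompt):
    """Find <type:arg1:arg2> tokens and dispatch each to its registered handler."""
    results = []
    for match in re.finditer(r"<(\w+):([^>]+)>", prompt):
        network_type, args = match.group(1), match.group(2).split(":")
        handler = extra_network_registry.get(network_type)
        if handler is None:
            # The situation in the log above: the prompt contains a lora token,
            # but no handler was ever registered under "lora".
            results.append(f"Skipping unknown extra network: {network_type}")
        else:
            results.append(handler(*args))
    return results
```

So the message means the LoRA file itself was never even looked up; the "lora" handler is simply absent from the registry.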

Marootc (Author) commented Apr 26, 2023

@wolverinn I've solved the LoRA-skipping problem by modifying init_model.py as follows:
```python
import os
import sys
import time
import importlib
import signal
import re
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from packaging import version

import logging
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())

from modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints
from modules import extra_networks_hypernet, ui_extra_networks_hypernets, ui_extra_networks_textual_inversion
from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call

import torch

# Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors
if ".dev" in torch.__version__ or "+git" in torch.__version__:
    torch.__long_version__ = torch.__version__
    torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)

from modules import shared, devices, sd_samplers, upscaler, extensions, localization, ui_tempdir, ui_extra_networks
import modules.codeformer_model as codeformer
import modules.face_restoration
import modules.gfpgan_model as gfpgan
import modules.img2img

import modules.lowvram
import modules.paths
import modules.scripts
import modules.sd_hijack
import modules.sd_models
import modules.sd_vae
import modules.txt2img
import modules.script_callbacks
import modules.textual_inversion.textual_inversion
import modules.progress

import modules.ui
from modules import modelloader
from modules.shared import cmd_opts
import modules.hypernetworks.hypernetwork


def check_versions():
    if shared.cmd_opts.skip_version_check:
        return


def initialize():
    check_versions()
    extensions.list_extensions()
    localization.list_localizations(cmd_opts.localizations_dir)

    if cmd_opts.ui_debug_mode:
        shared.sd_upscalers = upscaler.UpscalerLanczos().scalers
        modules.scripts.load_scripts()
        return

    modelloader.cleanup_models()
    modules.sd_models.setup_model()
    codeformer.setup_model(cmd_opts.codeformer_models_path)
    gfpgan.setup_model(cmd_opts.gfpgan_models_path)
    modelloader.list_builtin_upscalers()
    modules.scripts.load_scripts()
    modelloader.load_upscalers()

    modules.sd_vae.refresh_vae_list()
    modules.textual_inversion.textual_inversion.list_textual_inversion_templates()

    try:
        modules.sd_models.load_model()
    except Exception as e:
        errors.display(e, "loading stable diffusion model")
        print("", file=sys.stderr)
        print("Stable diffusion model failed to load, exiting", file=sys.stderr)
        exit(1)

    shared.opts.data["sd_model_checkpoint"] = shared.sd_model.sd_checkpoint_info.title

    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
    shared.opts.onchange("sd_vae", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
    shared.opts.onchange("sd_vae_as_default", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
    shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)

    # shared.reload_hypernetworks()

    # ui_extra_networks.intialize()
    # ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())
    # ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())
    # ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())

    extra_networks.initialize()
    extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())
    modules.script_callbacks.before_ui_callback()

    # if cmd_opts.tls_keyfile is not None and cmd_opts.tls_keyfile is not None:
    #     try:
    #         if not os.path.exists(cmd_opts.tls_keyfile):
    #             print("Invalid path to TLS keyfile given")
    #         if not os.path.exists(cmd_opts.tls_certfile):
    #             print(f"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'")
    #     except TypeError:
    #         cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None
    #         print("TLS setup invalid, running webui without TLS")
    #     else:
    #         print("Running with TLS")

    # make the program just exit at ctrl+c without waiting for anything
    # def sigint_handler(sig, frame):
    #     print(f'Interrupted with signal {sig} in {frame}')
    #     os._exit(0)

    # signal.signal(signal.SIGINT, sigint_handler)
```

Noobmaster6978 commented

Can you tell me how you solved it? I'm a noob at coding.

wolverinn (Owner) commented

Looks like he added the line `modules.script_callbacks.before_ui_callback()` in init_model.py.
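That one-line fix makes sense if the built-in Lora extension registers its extra-network handler inside a before-UI callback, so the handler only exists after `before_ui_callback()` has run. A minimal sketch of that callback pattern, under that assumption (all names here are illustrative, not the actual webui source):

```python
# Sketch of a before-UI callback registry (illustrative, not webui source).
callbacks_before_ui = []   # callbacks queued by extensions at import time
extra_networks = {}        # network type -> handler


def on_before_ui(callback):
    """Extensions call this at import time to defer their setup work."""
    callbacks_before_ui.append(callback)


def before_ui_callback():
    """Run all deferred setup callbacks; in the webui this happens before the UI starts."""
    for cb in callbacks_before_ui:
        cb()


# What the Lora extension conceptually does when its script module is loaded:
def lora_before_ui():
    extra_networks["lora"] = lambda name, weight: f"applied {name} at {weight}"


on_before_ui(lora_before_ui)

# A startup path that never calls before_ui_callback() leaves extra_networks
# without a "lora" entry, producing "Skipping unknown extra network: lora".
before_ui_callback()
```

This would explain the bug: init_model.py built its own startup path that skipped the UI, and therefore skipped the callback stage where "lora" gets registered.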
