I've been trying to run this with no success. I tried the qwen32b model and then switched to the 14b model, but I still can't get past this point:
INFO: Calling LLM to generate ontology definition...
[backend] 172.18.0.1 - - [07/Apr/2026 16:12:24] "POST /api/graph/ontology/generate HTTP/1.1" 500 -
[backend] [16:12:26] INFO: === Starting ontology generation ===
[backend] [16:12:26] INFO: Project created: proj_fce167f17c91
[backend] [16:12:27] INFO: Text extraction completed, total 79423 characters
[backend] [16:12:27] INFO: Calling LLM to generate ontology definition...
[backend] 172.18.0.1 - - [07/Apr/2026 16:12:28] "POST /api/graph/ontology/generate HTTP/1.1" 500 -