ValueError: unsupported data types in input with Process_Query #77
Comments
Hi, could you please first update to popV 0.5.2? 0.5.0 contains a major rewrite of the code. It would also help to check that both adata.X matrices use the same sparse format before running popV. It looks like one is COO and the other CSR, and Scanpy has issues concatenating those. If you run into problems, you can always convert to dense manually.
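A minimal sketch of how the formats could be unified before calling Process_Query (variable names are taken from the report below; treat this as an illustration rather than a tested fix):

import scipy.sparse as sp

# cast both matrices to CSR so anndata.concat sees a single sparse format
for ad in (query_adata_intersection, ref_adata_intersection):
    if sp.issparse(ad.X):
        ad.X = sp.csr_matrix(ad.X)
    # or densify instead, if memory allows:
    # ad.X = ad.X.toarray()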
Hi @canergen, I tried to install the new version of popV but ran into an error during installation (building the tiledbsoma wheel fails). I tried several approaches without success; here is the relevant part of the error message:
× Building wheel for tiledbsoma (pyproject.toml) did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
Report
Hello, thanks for this great tool.
I got an error when I ran the Process_Query function. Here are my code and the error message:
from popv.preprocessing import Process_Query

adata = Process_Query(
    query_adata_intersection,
    ref_adata_intersection,
    query_labels_key=query_labels_key,
    query_batch_key=query_batch_key,
    ref_labels_key=ref_labels_key,
    ref_batch_key=ref_batch_key,
    unknown_celltype_label=unknown_celltype_label,
    save_path_trained_models='./',
    cl_obo_folder="/bhpublic/datas02/luolf/test/PopV/PopV-main/resources/ontology/",
    # cl_obo_folder=False,
    prediction_mode="retrain",  # 'fast' mode is quicker (skips BBKNN and Scanorama) but less accurate
    n_samples_per_label=n_samples_per_label,
    accelerator="cpu",  # use "cuda" for GPU
    devices=1,
    compute_embedding=True,
    hvg=None,
).adata
Sampling 500 per label
Traceback (most recent call last):
File "", line 1, in
File "/bhpublic/datas02/luolf/test/PopV/PopV-main/popv/preprocessing.py", line 211, in init
self._preprocess()
File "/bhpublic/datas02/luolf/test/PopV/PopV-main/popv/preprocessing.py", line 260, in _preprocess
self.adata = anndata.concat(
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 1344, in concat
raw = concat(
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 1298, in concat
X = concat_Xs(adatas, reindexers, axis=axis, fill_value=fill_value)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 1048, in concat_Xs
return concat_arrays(Xs, reindexers, axis=axis, fill_value=fill_value)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 819, in concat_arrays
[
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 820, in
f(as_sparse(a), axis=1 - axis, fill_value=fill_value)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 526, in call
return self.apply(el, axis=axis, fill_value=fill_value)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 539, in apply
return self._apply_to_sparse(el, axis=axis, fill_value=fill_value)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/anndata/_core/merge.py", line 659, in _apply_to_sparse
out = el @ idxmtx
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_base.py", line 695, in matmul
return self._matmul_dispatch(other)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_base.py", line 606, in _matmul_dispatch
return self._matmul_sparse(other)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_compressed.py", line 514, in _matmul_sparse
other = self.__class__(other)  # convert to this format
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_compressed.py", line 34, in init
arg1 = arg1.asformat(self.format)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_base.py", line 435, in asformat
return convert_method(copy=copy)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_coo.py", line 344, in tocsr
indptr, indices, data, shape = self._coo_to_compressed(csr_array._swap)
File "/data3/luolf/miniconda3/envs/mitoSplitter/lib/python3.9/site-packages/scipy/sparse/_coo.py", line 366, in _coo_to_compressed
coo_tocsr(M, N, nnz, major, minor, self.data, indptr, indices, data)
ValueError: unsupported data types in input
query_adata_intersection and ref_adata_intersection are datasets that have been subset to their common genes. query_adata was converted from RDS format to h5 format using DietSeurat, and afterwards I set query_adata.X = query_adata.raw.X.
When I set prediction_mode="inference", I got: AssertionError: Query dataset misses genes that were used for reference model training. Retrain reference model, set mode='retrain'. I therefore set prediction_mode="retrain", but then I hit the error above, so I suspect there is a data type mismatch somewhere.
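For reference, a short diagnostic sketch (variable names assumed from the code above) that prints the container type, sparse format, and dtype of both matrices, which should reveal a COO/CSR or dtype mismatch:

import scipy.sparse as sp

for name, ad in [("query", query_adata_intersection), ("ref", ref_adata_intersection)]:
    X = ad.X
    # sparse matrices expose .format; dense arrays do not
    print(name, type(X), getattr(X, "format", "dense"), X.dtype)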
Looking forward to your prompt reply.
Version information
Package Version Editable project location
absl-py 2.1.0
aiohappyeyeballs 2.4.4
aiohttp 3.11.9
aiosignal 1.3.1
alphashape 1.3.1
anndata 0.10.8
annoy 1.17.3
array_api_compat 1.9.1
astunparse 1.6.3
async-timeout 5.0.1
attrs 24.2.0
bbknn 1.6.0
beautifulsoup4 4.12.3
celltypist 1.6.3
certifi 2023.11.17
charset-normalizer 3.3.2
chex 0.1.87
click 8.1.7
click-log 0.4.0
contextlib2 21.6.0
contourpy 1.2.0
cycler 0.12.1
Cython 3.0.11
docrep 0.3.2
et_xmlfile 2.0.0
etils 1.5.2
exceptiongroup 1.2.2
fbpca 1.0
filelock 3.16.1
flatbuffers 24.3.25
flax 0.8.5
fonttools 4.45.1
frozenlist 1.5.0
fsspec 2024.10.0
gast 0.6.0
gdown 5.2.0
geosketch 1.3
get-annotations 0.1.2
google-pasta 0.2.0
grpcio 1.68.1
h5py 3.12.1
harmony-pytorch 0.1.8
huggingface-hub 0.26.3
humanize 4.11.0
idna 3.6
igraph 0.11.8
importlib_metadata 8.5.0
importlib-resources 6.1.1
intervaltree 3.1.0
jax 0.4.30
jaxlib 0.4.30
Jinja2 3.1.4
joblib 1.3.2
keras 3.7.0
kiwisolver 1.4.5
legacy-api-wrap 1.4.1
leidenalg 0.10.2
libclang 18.1.1
lightning 2.1.4
lightning-utilities 0.11.9
llvmlite 0.43.0
loompy 3.0.7
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 3.0.2
matplotlib 3.9.3
mdurl 0.1.2
ml-collections 0.1.1
ml-dtypes 0.4.1
mpmath 1.3.0
msgpack 1.1.0
mudata 0.2.4
multidict 6.1.0
multipledispatch 1.0.0
namex 0.0.8
natsort 8.4.0
nest-asyncio 1.6.0
networkx 3.2.1
numba 0.60.0
numpy 1.26.4
numpy-groupies 0.10.2
numpyro 0.15.0
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
obonet 1.1.0
OnClass 1.3
openpyxl 3.1.5
opt_einsum 3.4.0
optax 0.2.4
optree 0.13.1
orbax-checkpoint 0.6.4
packaging 23.2
pandas 1.5.3
patsy 0.5.3
Pillow 10.1.0
pip 24.3.1
PopV 0.4.2 /bhpublic/datas02/luolf/test/PopV/PopV-main
propcache 0.2.1
protobuf 5.29.0
psutil 6.1.0
Pygments 2.18.0
pynndescent 0.5.11
pyparsing 3.1.1
pyro-api 0.1.2
pyro-ppl 1.9.1
pysam 0.20.0
PySocks 1.7.1
python-dateutil 2.8.2
pytorch-lightning 2.4.0
pytz 2023.3.post1
PyYAML 6.0.2
RedeFISH 1.0.0
regex 2024.11.6
requests 2.31.0
rich 13.9.4
Rtree 1.1.0
safetensors 0.4.5
scanorama 1.7.4
scanpy 1.10.3
scikit-learn 1.0.2
scikit-misc 0.1.4
scipy 1.13.1
scvelo 0.2.5
scvi-tools 1.1.3
seaborn 0.13.2
sentence-transformers 3.3.1
session-info 1.0.0
setuptools 75.6.0
Shapely 1.8.5
six 1.16.0
sortedcontainers 2.4.0
soupsieve 2.6
statsmodels 0.14.0
stdlib-list 0.10.0
sympy 1.13.1
tensorboard 2.18.0
tensorboard-data-server 0.7.2
tensorflow 2.18.0
tensorflow-io-gcs-filesystem 0.37.1
tensorstore 0.1.69
termcolor 2.5.0
texttable 1.7.0
threadpoolctl 3.2.0
tokenizers 0.20.3
toolz 1.0.0
torch 2.5.1
torchaudio 2.5.1
torchmetrics 1.6.0
torchvision 0.20.1
tqdm 4.66.1
transformers 4.46.3
trimesh 4.0.5
triton 3.1.0
typing_extensions 4.8.0
tzdata 2024.2
umap-learn 0.5.5
urllib3 2.1.0
Werkzeug 3.1.3
wheel 0.45.1
wrapt 1.17.0
yarl 1.18.3
zipp 3.21.0