did you test a fresh compile from the source tree?
Compiling pgloader from sources is documented in the README. It's
easy to do, and if patches are to be made to fix your bug, you're going
to have to build from sources to get the fix anyway…
did you search for other similar issues?
how can I reproduce the bug?
Include a self-contained pgloader command file.
If you're loading from a database, consider attaching a database dump to
your issue. For MySQL, use mysqldump. For SQLite, just send over your
source file; that's easy. It doesn't have to be the one with your
production data, of course, just one with a sample of data that allows
me to reproduce your bug.
When using a proprietary database system as a source, consider creating
a sample database on some Cloud service or somewhere you can give me
access to, and see my email address on my GitHub profile to send me
the credentials. Still open a public issue for tracking and as
documentation for other users.
LOAD DATABASE
FROM postgresql://mydatabase
INTO postgresql://mydatabase
/* WITH include drop, drop schema */
/* WITH batch rows = 10000, batch size = 400 MB, prefetch rows = 10000, truncate */
WITH batch rows = 2500, batch size = 600 MB, prefetch rows = 10000, no truncate, include no drop, on error resume next, reset no sequences, disable triggers, create no tables
INCLUDING ONLY TABLE NAMES MATCHING ~/.*/ IN SCHEMA 'serverdb'
alter schema 'serverdb' rename to 'serverdb_new'
cast type tinyint to smallint drop typemod
;
pgloader output you obtain
I can provide additional details if needed, but this is the end of the log. I have an identical server with a (theoretically) identical schema where I don't get this error. I haven't run a comparison of the schemas, but they were the same a couple of months ago and should be in sync. If I run the last query in the log manually, I get 2 rows back.
-- filter-list-to-where-clause for including
-- excluding
-- filter-list-to-where-clause for excluding
with seqattr as
(
select adrelid,
adnum,
pg_get_expr(d.adbin, d.adrelid) as adsrc,
case when pg_get_expr(d.adbin, d.adrelid) ~ 'nextval'
then substring(pg_get_expr(d.adbin, d.adrelid)
from '''([^'']+)'''
)
else null
end as seqname
from pg_attrdef d
)
select nspname, relname, c.oid, attname,
t.oid::regtype as type,
case when atttypmod > 0
then substring(format_type(t.oid, atttypmod) from '\d+(?:,\d+)?')
else null
end as typmod,
attnotnull,
case when atthasdef
then pg_get_expr(def.adbin, def.adrelid)
end as default ,
case when s.seqname is not null then 'auto_increment' end as extra
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
left join pg_attribute a on c.oid = a.attrelid
join pg_type t on t.oid = a.atttypid and attnum > 0
left join pg_attrdef def on a.attrelid = def.adrelid
and a.attnum = def.adnum
and a.atthasdef
left join seqattr s on def.adrelid = s.adrelid
and def.adnum = s.adnum
where nspname !~ '^pg_' and n.nspname <> 'information_schema'
and relkind in ('r', 'f', 'p')
and ((n.nspname = 'serverdb' and c.relname ~ '.*'))
order by nspname, relname, attnum;
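The seqattr CTE in the query above detects auto-increment columns by extracting the quoted sequence name from a column default that calls nextval. The same extraction can be sketched in Python to check what the query would return for a given default expression (a hedged sketch of the regex logic only; this function is not part of pgloader):

```python
import re

def sequence_name(adsrc):
    """Mirror the seqattr CTE: if a column default expression calls
    nextval, pull out the quoted sequence name; otherwise return None.
    The regex is the Python equivalent of the SQL pattern '''([^'']+)'''
    used in the query above."""
    if 'nextval' not in adsrc:
        return None
    m = re.search(r"'([^']+)'", adsrc)
    return m.group(1) if m else None

print(sequence_name("nextval('serverdb.users_id_seq'::regclass)"))
# -> serverdb.users_id_seq
```

A column whose seqname comes back non-null is reported with the extra value 'auto_increment', matching the last case expression in the query.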
2021-05-28T15:01:59.553000-05:00 SQL -- params: including (table)
-- filter-list-to-where-clause for including
-- excluding (table)
-- filter-list-to-where-clause for excluding
-- including (ftable)
-- filter-list-to-where-clause for including
-- excluding (ftable)
-- filter-list-to-where-clause for excluding
select n.nspname, c.relname, nf.nspname, cf.relname as frelname,
r.oid,
d.refobjid as pkeyoid,
conname,
pg_catalog.pg_get_constraintdef(r.oid, true) as condef,
(select string_agg(attname, ',')
from pg_attribute
where attrelid = r.conrelid
and array[attnum::integer] <@ conkey::integer[]
) as conkey,
(select string_agg(attname, ',')
from pg_attribute
where attrelid = r.confrelid
and array[attnum::integer] <@ confkey::integer[]
) as confkey,
confupdtype, confdeltype, confmatchtype,
condeferrable, condeferred
from pg_catalog.pg_constraint r
JOIN pg_class c on r.conrelid = c.oid
JOIN pg_namespace n on c.relnamespace = n.oid
JOIN pg_class cf on r.confrelid = cf.oid
JOIN pg_namespace nf on cf.relnamespace = nf.oid
JOIN pg_depend d on d.classid = 'pg_constraint'::regclass
and d.objid = r.oid
and d.refobjsubid = 0
where r.contype = 'f'
AND c.relkind in ('r', 'f', 'p')
AND cf.relkind in ('r', 'f', 'p')
AND n.nspname !~ '^pg_' and n.nspname <> 'information_schema'
AND nf.nspname !~ '^pg_' and nf.nspname <> 'information_schema'
and ((n.nspname = 'serverdb' and c.relname ~ '.*'))
and ((nf.nspname = 'serverdb' and cf.relname ~ '.*'))
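The two conkey/confkey subqueries in the foreign-key query translate the column-position arrays stored in pg_constraint back into comma-separated column names via pg_attribute. A minimal Python sketch of that mapping (the sample oids, attnums, and names are made up for illustration):

```python
# pg_constraint stores FK columns as arrays of attribute numbers
# (conkey on the referencing table, confkey on the referenced one);
# the subqueries join each attnum back to pg_attribute and aggregate
# the names with string_agg(attname, ',').
pg_attribute = {  # (attrelid, attnum) -> attname
    (1001, 1): "id",
    (1001, 2): "owner_id",
    (2002, 1): "id",
}

def resolve_columns(attrelid, key):
    # Python analogue of: array[attnum::integer] <@ conkey::integer[]
    # followed by string_agg(attname, ',')
    return ",".join(pg_attribute[(attrelid, n)] for n in key)

conkey = [2]   # FK column positions on the referencing table (oid 1001)
confkey = [1]  # referenced column positions on the referenced table (oid 2002)
print(resolve_columns(1001, conkey))   # -> owner_id
print(resolve_columns(2002, confkey))  # -> id
```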
2021-05-28T15:01:59.560000-05:00 ERROR pgsql: The value
NIL
is not of type
PGLOADER.CATALOG:INDEX
2021-05-28T15:01:59.560000-05:00 LOG report summary reset
2021-05-28T15:01:59.589000-05:00 INFO Stopping monitor
2021-05-28T15:42:10.019000-05:00 NOTICE Starting pgloader, log system is ready.
2021-05-28T15:42:10.044000-05:00 LOG pgloader version "3.6.3047c9a"
2021-05-28T15:42:10.195000-05:00 LOG Migrating from #<PGSQL-CONNECTION pgsql://mydatabase {10082CB713}>
2021-05-28T15:42:10.196000-05:00 LOG Migrating into #<PGSQL-CONNECTION pgsql://mydatabase {10082CDFA3}>
2021-05-28T15:42:10.329000-05:00 ERROR pgsql: The value
NIL
is not of type
PGLOADER.CATALOG:INDEX
2021-05-28T15:42:10.329000-05:00 LOG report summary reset
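The error itself is a Common Lisp type check failing: some code path expects a PGLOADER.CATALOG:INDEX object and receives NIL, meaning a catalog lookup came back empty. A Python analogue of that failure mode (the names and the FK-to-index connection are assumptions for illustration, not pgloader's actual internals):

```python
from dataclasses import dataclass

@dataclass
class Index:
    oid: int
    name: str

# Hypothetical catalog of indexes keyed by oid, as fetched from pg_catalog.
catalog = {16385: Index(16385, "users_pkey")}

def resolve_pkey_index(pkeyoid):
    index = catalog.get(pkeyoid)  # None when the index wasn't fetched
    if not isinstance(index, Index):
        # Analogous to "The value NIL is not of type PGLOADER.CATALOG:INDEX"
        raise TypeError(f"The value {index!r} is not of type Index")
    return index
```

If the oid a foreign key refers back to (such as the pkeyoid column in the query above) is absent from the fetched catalog, the lookup yields None/NIL rather than an index object, and the type check raises exactly this class of error.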
MikeN123 pushed a commit to semestry/pgloader that referenced this issue on Aug 20, 2021.