0s autopkgtest [22:29:39]: starting date and time: 2024-07-30 22:29:39+0000
0s autopkgtest [22:29:39]: git checkout: fd3bed09 nova: allow more retries for quota issues
0s autopkgtest [22:29:39]: host juju-7f2275-prod-proposed-migration-environment-2; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.77_dkg9w/out --timeout-copy=6000 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:sphinx --apt-upgrade patroni --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=sphinx/7.3.7-4 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-2@bos01-s390x-22.secgroup --name adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 --image adt/ubuntu-oracular-s390x-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-2 --net-id=net_prod-proposed-migration -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com'"'"'' --mirror=http://us.ports.ubuntu.com/ubuntu-ports/
124s autopkgtest [22:31:43]: testbed dpkg architecture: s390x
125s autopkgtest [22:31:44]: testbed apt version: 2.9.6
125s autopkgtest [22:31:44]: @@@@@@@@@@@@@@@@@@@@ test bed setup
126s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [126 kB]
126s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [8548 B]
126s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [514 kB]
127s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [6368 B]
127s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [52.0 kB]
127s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x Packages [73.3 kB]
127s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x c-n-f Metadata [2112 B]
127s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x Packages [1368 B]
127s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x c-n-f Metadata [120 B]
127s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x Packages [433 kB]
128s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x c-n-f Metadata [8372 B]
128s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x Packages [3620 B]
128s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x c-n-f Metadata [120 B]
128s Fetched 1229 kB in 2s (580 kB/s)
128s Reading package lists...
135s Reading package lists...
135s Building dependency tree...
135s Reading state information...
135s Calculating upgrade...
136s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
136s Reading package lists...
136s Building dependency tree...
136s Reading state information...
136s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
137s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease
137s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease
137s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease
137s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease
138s Reading package lists...
138s Reading package lists...
138s Building dependency tree...
138s Reading state information...
138s Calculating upgrade...
138s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
138s Reading package lists...
139s Building dependency tree...
139s Reading state information...
139s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
147s autopkgtest [22:32:06]: testbed running kernel: Linux 6.8.0-31-generic #31-Ubuntu SMP Sat Apr 20 00:14:26 UTC 2024
147s autopkgtest [22:32:06]: @@@@@@@@@@@@@@@@@@@@ apt-source patroni
151s Get:1 http://ftpmaster.internal/ubuntu oracular/universe patroni 3.3.1-1 (dsc) [2851 B]
151s Get:2 http://ftpmaster.internal/ubuntu oracular/universe patroni 3.3.1-1 (tar) [1150 kB]
151s Get:3 http://ftpmaster.internal/ubuntu oracular/universe patroni 3.3.1-1 (diff) [23.1 kB]
151s gpgv: Signature made Tue Jul 2 12:54:38 2024 UTC
151s gpgv: using RSA key 9CA877749FAB2E4FA96862ECDC686A27B43481B0
151s gpgv: Can't check signature: No public key
151s dpkg-source: warning: cannot verify inline signature for ./patroni_3.3.1-1.dsc: no acceptable signature found
151s autopkgtest [22:32:10]: testing package patroni version 3.3.1-1
152s autopkgtest [22:32:11]: build not needed
153s autopkgtest [22:32:12]: test acceptance-etcd3: preparing testbed
155s Reading package lists...
155s Building dependency tree...
155s Reading state information...
155s Starting pkgProblemResolver with broken count: 0
155s Starting 2 pkgProblemResolver with broken count: 0
155s Done
156s The following additional packages will be installed:
156s etcd-server fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl
156s libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libpq5
156s libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni
156s patroni-doc postgresql postgresql-16 postgresql-client-16
156s postgresql-client-common postgresql-common python3-behave python3-cdiff
156s python3-click python3-colorama python3-coverage python3-dateutil
156s python3-dnspython python3-etcd python3-parse python3-parse-type
156s python3-prettytable python3-psutil python3-psycopg2 python3-six
156s python3-wcwidth sphinx-rtd-theme-common ssl-cert
156s Suggested packages:
156s etcd-client vip-manager haproxy postgresql-doc postgresql-doc-16
156s python-coverage-doc python3-trio python3-aioquic python3-h2 python3-httpx
156s python3-httpcore etcd python-psycopg2-doc
156s Recommended packages:
156s javascript-common libjson-xs-perl
156s The following NEW packages will be installed:
156s autopkgtest-satdep etcd-server fonts-font-awesome fonts-lato libio-pty-perl
156s libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl
156s libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni
156s patroni-doc postgresql postgresql-16 postgresql-client-16
156s postgresql-client-common postgresql-common python3-behave python3-cdiff
156s python3-click python3-colorama python3-coverage python3-dateutil
156s python3-dnspython python3-etcd python3-parse python3-parse-type
156s python3-prettytable python3-psutil python3-psycopg2 python3-six
156s python3-wcwidth sphinx-rtd-theme-common ssl-cert
156s 0 upgraded, 39 newly installed, 0 to remove and 0 not upgraded.
156s Need to get 33.4 MB/33.4 MB of archives.
156s After this operation, 111 MB of additional disk space will be used.
156s Get:1 /tmp/autopkgtest.qFf46z/1-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [760 B]
156s Get:2 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-lato all 2.015-1 [2781 kB]
159s Get:3 http://ftpmaster.internal/ubuntu oracular/main s390x libjson-perl all 4.10000-1 [81.9 kB]
160s Get:4 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-common all 261 [36.6 kB]
160s Get:5 http://ftpmaster.internal/ubuntu oracular/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB]
160s Get:6 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-common all 261 [162 kB]
160s Get:7 http://ftpmaster.internal/ubuntu oracular/universe s390x etcd-server s390x 3.4.30-1build1 [7777 kB]
163s Get:8 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB]
163s Get:9 http://ftpmaster.internal/ubuntu oracular/main s390x libio-pty-perl s390x 1:1.20-1build2 [31.3 kB]
163s Get:10 http://ftpmaster.internal/ubuntu oracular/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB]
163s Get:11 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB]
163s Get:12 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB]
163s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x libjs-sphinxdoc all 7.3.7-4 [154 kB]
163s Get:14 http://ftpmaster.internal/ubuntu oracular/main s390x libpq5 s390x 16.3-1 [144 kB]
163s Get:15 http://ftpmaster.internal/ubuntu oracular/main s390x libtime-duration-perl all 1.21-2 [12.3 kB]
163s Get:16 http://ftpmaster.internal/ubuntu oracular/main s390x libtimedate-perl all 2.3300-2 [34.0 kB]
163s Get:17 http://ftpmaster.internal/ubuntu oracular/main s390x libxslt1.1 s390x 1.1.39-0exp1build1 [170 kB]
163s Get:18 http://ftpmaster.internal/ubuntu oracular/universe s390x moreutils s390x 0.69-1 [57.4 kB]
163s Get:19 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-cdiff all 1.0-1.1 [16.4 kB]
163s Get:20 http://ftpmaster.internal/ubuntu oracular/main s390x python3-colorama all 0.4.6-4 [32.1 kB]
163s Get:21 http://ftpmaster.internal/ubuntu oracular/main s390x python3-click all 8.1.7-2 [79.5 kB]
163s Get:22 http://ftpmaster.internal/ubuntu oracular/main s390x python3-six all 1.16.0-6 [13.0 kB]
163s Get:23 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dateutil all 2.9.0-2 [80.3 kB]
163s Get:24 http://ftpmaster.internal/ubuntu oracular/main s390x python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB]
163s Get:25 http://ftpmaster.internal/ubuntu oracular/main s390x python3-prettytable all 3.10.1-1 [34.0 kB]
163s Get:26 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB]
163s Get:27 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psycopg2 s390x 2.9.9-1build1 [133 kB]
163s Get:28 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB]
164s Get:29 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-etcd all 0.4.5-4 [31.9 kB]
164s Get:30 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni all 3.3.1-1 [264 kB]
164s Get:31 http://ftpmaster.internal/ubuntu oracular/main s390x sphinx-rtd-theme-common all 2.0.0+dfsg-2 [1012 kB]
164s Get:32 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni-doc all 3.3.1-1 [497 kB]
164s Get:33 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-16 s390x 16.3-1 [1290 kB]
165s Get:34 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-16 s390x 16.3-1 [16.7 MB]
174s Get:35 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql all 16+261 [11.7 kB]
174s Get:36 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse all 1.20.2-1 [27.0 kB]
174s Get:37 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse-type all 0.6.2-1 [22.7 kB]
174s Get:38 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-behave all 1.2.6-5 [98.4 kB]
174s Get:39 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB]
174s Preconfiguring packages ...
175s Fetched 33.4 MB in 18s (1806 kB/s)
175s Selecting previously unselected package fonts-lato.
175s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 54832 files and directories currently installed.)
175s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ...
175s Unpacking fonts-lato (2.015-1) ...
175s Selecting previously unselected package libjson-perl.
175s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ...
175s Unpacking libjson-perl (4.10000-1) ...
175s Selecting previously unselected package postgresql-client-common.
175s Preparing to unpack .../02-postgresql-client-common_261_all.deb ...
175s Unpacking postgresql-client-common (261) ...
175s Selecting previously unselected package ssl-cert.
175s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ...
175s Unpacking ssl-cert (1.1.2ubuntu2) ...
175s Selecting previously unselected package postgresql-common.
175s Preparing to unpack .../04-postgresql-common_261_all.deb ...
175s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common'
175s Unpacking postgresql-common (261) ...
175s Selecting previously unselected package etcd-server.
175s Preparing to unpack .../05-etcd-server_3.4.30-1build1_s390x.deb ...
175s Unpacking etcd-server (3.4.30-1build1) ...
175s Selecting previously unselected package fonts-font-awesome.
175s Preparing to unpack .../06-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ...
175s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
175s Selecting previously unselected package libio-pty-perl.
175s Preparing to unpack .../07-libio-pty-perl_1%3a1.20-1build2_s390x.deb ...
175s Unpacking libio-pty-perl (1:1.20-1build2) ...
175s Selecting previously unselected package libipc-run-perl.
175s Preparing to unpack .../08-libipc-run-perl_20231003.0-2_all.deb ...
175s Unpacking libipc-run-perl (20231003.0-2) ...
175s Selecting previously unselected package libjs-jquery.
175s Preparing to unpack .../09-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
175s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
175s Selecting previously unselected package libjs-underscore.
175s Preparing to unpack .../10-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
175s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
175s Selecting previously unselected package libjs-sphinxdoc.
175s Preparing to unpack .../11-libjs-sphinxdoc_7.3.7-4_all.deb ...
175s Unpacking libjs-sphinxdoc (7.3.7-4) ...
175s Selecting previously unselected package libpq5:s390x.
175s Preparing to unpack .../12-libpq5_16.3-1_s390x.deb ...
175s Unpacking libpq5:s390x (16.3-1) ...
175s Selecting previously unselected package libtime-duration-perl.
175s Preparing to unpack .../13-libtime-duration-perl_1.21-2_all.deb ...
175s Unpacking libtime-duration-perl (1.21-2) ...
175s Selecting previously unselected package libtimedate-perl.
175s Preparing to unpack .../14-libtimedate-perl_2.3300-2_all.deb ...
175s Unpacking libtimedate-perl (2.3300-2) ...
175s Selecting previously unselected package libxslt1.1:s390x.
175s Preparing to unpack .../15-libxslt1.1_1.1.39-0exp1build1_s390x.deb ...
175s Unpacking libxslt1.1:s390x (1.1.39-0exp1build1) ...
175s Selecting previously unselected package moreutils.
175s Preparing to unpack .../16-moreutils_0.69-1_s390x.deb ...
175s Unpacking moreutils (0.69-1) ...
176s Selecting previously unselected package python3-cdiff.
176s Preparing to unpack .../17-python3-cdiff_1.0-1.1_all.deb ...
176s Unpacking python3-cdiff (1.0-1.1) ...
176s Selecting previously unselected package python3-colorama.
176s Preparing to unpack .../18-python3-colorama_0.4.6-4_all.deb ...
176s Unpacking python3-colorama (0.4.6-4) ...
176s Selecting previously unselected package python3-click.
176s Preparing to unpack .../19-python3-click_8.1.7-2_all.deb ...
176s Unpacking python3-click (8.1.7-2) ...
176s Selecting previously unselected package python3-six.
176s Preparing to unpack .../20-python3-six_1.16.0-6_all.deb ...
176s Unpacking python3-six (1.16.0-6) ...
176s Selecting previously unselected package python3-dateutil.
176s Preparing to unpack .../21-python3-dateutil_2.9.0-2_all.deb ...
176s Unpacking python3-dateutil (2.9.0-2) ...
176s Selecting previously unselected package python3-wcwidth.
176s Preparing to unpack .../22-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ...
176s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
176s Selecting previously unselected package python3-prettytable.
176s Preparing to unpack .../23-python3-prettytable_3.10.1-1_all.deb ...
176s Unpacking python3-prettytable (3.10.1-1) ...
176s Selecting previously unselected package python3-psutil.
176s Preparing to unpack .../24-python3-psutil_5.9.8-2build2_s390x.deb ...
176s Unpacking python3-psutil (5.9.8-2build2) ...
176s Selecting previously unselected package python3-psycopg2.
176s Preparing to unpack .../25-python3-psycopg2_2.9.9-1build1_s390x.deb ...
176s Unpacking python3-psycopg2 (2.9.9-1build1) ...
176s Selecting previously unselected package python3-dnspython.
176s Preparing to unpack .../26-python3-dnspython_2.6.1-1ubuntu1_all.deb ...
176s Unpacking python3-dnspython (2.6.1-1ubuntu1) ...
176s Selecting previously unselected package python3-etcd.
176s Preparing to unpack .../27-python3-etcd_0.4.5-4_all.deb ...
176s Unpacking python3-etcd (0.4.5-4) ...
176s Selecting previously unselected package patroni.
176s Preparing to unpack .../28-patroni_3.3.1-1_all.deb ...
176s Unpacking patroni (3.3.1-1) ...
176s Selecting previously unselected package sphinx-rtd-theme-common.
176s Preparing to unpack .../29-sphinx-rtd-theme-common_2.0.0+dfsg-2_all.deb ...
176s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-2) ...
176s Selecting previously unselected package patroni-doc.
176s Preparing to unpack .../30-patroni-doc_3.3.1-1_all.deb ...
176s Unpacking patroni-doc (3.3.1-1) ...
176s Selecting previously unselected package postgresql-client-16.
176s Preparing to unpack .../31-postgresql-client-16_16.3-1_s390x.deb ...
176s Unpacking postgresql-client-16 (16.3-1) ...
176s Selecting previously unselected package postgresql-16.
176s Preparing to unpack .../32-postgresql-16_16.3-1_s390x.deb ...
176s Unpacking postgresql-16 (16.3-1) ...
176s Selecting previously unselected package postgresql.
176s Preparing to unpack .../33-postgresql_16+261_all.deb ...
176s Unpacking postgresql (16+261) ...
176s Selecting previously unselected package python3-parse.
176s Preparing to unpack .../34-python3-parse_1.20.2-1_all.deb ...
176s Unpacking python3-parse (1.20.2-1) ...
176s Selecting previously unselected package python3-parse-type.
176s Preparing to unpack .../35-python3-parse-type_0.6.2-1_all.deb ...
176s Unpacking python3-parse-type (0.6.2-1) ...
176s Selecting previously unselected package python3-behave.
176s Preparing to unpack .../36-python3-behave_1.2.6-5_all.deb ...
176s Unpacking python3-behave (1.2.6-5) ...
176s Selecting previously unselected package python3-coverage.
176s Preparing to unpack .../37-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ...
176s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
176s Selecting previously unselected package autopkgtest-satdep.
176s Preparing to unpack .../38-1-autopkgtest-satdep.deb ...
176s Unpacking autopkgtest-satdep (0) ...
177s Setting up postgresql-client-common (261) ...
177s Setting up fonts-lato (2.015-1) ...
177s Setting up libio-pty-perl (1:1.20-1build2) ...
177s Setting up python3-colorama (0.4.6-4) ...
177s Setting up python3-cdiff (1.0-1.1) ...
177s Setting up libpq5:s390x (16.3-1) ...
177s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ...
177s Setting up python3-click (8.1.7-2) ...
177s Setting up python3-psutil (5.9.8-2build2) ...
178s Setting up python3-six (1.16.0-6) ...
178s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ...
178s Setting up ssl-cert (1.1.2ubuntu2) ...
178s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'.
179s Setting up python3-psycopg2 (2.9.9-1build1) ...
179s Setting up libipc-run-perl (20231003.0-2) ...
179s Setting up libtime-duration-perl (1.21-2) ...
179s Setting up libtimedate-perl (2.3300-2) ...
179s Setting up python3-dnspython (2.6.1-1ubuntu1) ...
179s Setting up python3-parse (1.20.2-1) ...
179s Setting up libjson-perl (4.10000-1) ...
179s Setting up libxslt1.1:s390x (1.1.39-0exp1build1) ...
179s Setting up python3-dateutil (2.9.0-2) ...
180s Setting up etcd-server (3.4.30-1build1) ...
180s info: Selecting UID from range 100 to 999 ...
180s 
180s info: Selecting GID from range 100 to 999 ...
180s info: Adding system user `etcd' (UID 107) ...
180s info: Adding new group `etcd' (GID 113) ...
180s info: Adding new user `etcd' (UID 107) with group `etcd' ...
180s info: Creating home directory `/var/lib/etcd/' ...
180s Created symlink '/etc/systemd/system/etcd2.service' → '/usr/lib/systemd/system/etcd.service'.
180s Created symlink '/etc/systemd/system/multi-user.target.wants/etcd.service' → '/usr/lib/systemd/system/etcd.service'.
181s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
181s Setting up python3-prettytable (3.10.1-1) ...
182s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ...
182s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-2) ...
182s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
182s Setting up moreutils (0.69-1) ...
182s Setting up python3-etcd (0.4.5-4) ...
182s Setting up postgresql-client-16 (16.3-1) ...
182s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
182s Setting up python3-parse-type (0.6.2-1) ...
182s Setting up postgresql-common (261) ...
183s 
183s Creating config file /etc/postgresql-common/createcluster.conf with new version
183s Building PostgreSQL dictionaries from installed myspell/hunspell packages...
183s Removing obsolete dictionary files:
183s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'.
184s Setting up libjs-sphinxdoc (7.3.7-4) ...
184s Setting up python3-behave (1.2.6-5) ...
184s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\['
184s _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE)
184s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d'
184s """Registers a custom type that will be available to "parse"
184s Setting up patroni (3.3.1-1) ...
184s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'.
185s Setting up postgresql-16 (16.3-1) ...
185s Creating new PostgreSQL cluster 16/main ...
185s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions
185s The files belonging to this database system will be owned by user "postgres".
185s This user must also own the server process.
185s 
185s The database cluster will be initialized with locale "C.UTF-8".
185s The default database encoding has accordingly been set to "UTF8".
185s The default text search configuration will be set to "english".
185s 
185s Data page checksums are disabled.
185s 
185s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok
185s creating subdirectories ... ok
185s selecting dynamic shared memory implementation ... posix
185s selecting default max_connections ... 100
185s selecting default shared_buffers ... 128MB
185s selecting default time zone ... Etc/UTC
185s creating configuration files ... ok
185s running bootstrap script ... ok
186s performing post-bootstrap initialization ... ok
186s syncing data to disk ... ok
189s Setting up patroni-doc (3.3.1-1) ...
189s Setting up postgresql (16+261) ...
189s Setting up autopkgtest-satdep (0) ...
189s Processing triggers for man-db (2.12.1-2) ...
190s Processing triggers for libc-bin (2.39-0ubuntu9) ...
194s (Reading database ... 58232 files and directories currently installed.)
194s Removing autopkgtest-satdep (0) ...
195s autopkgtest [22:32:54]: test acceptance-etcd3: debian/tests/acceptance etcd3
195s autopkgtest [22:32:54]: test acceptance-etcd3: [-----------------------
196s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation)
196s ### PostgreSQL 16 acceptance-etcd3 ###
196s ++ ls -1r /usr/lib/postgresql/
196s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/)
196s + '[' 16 == 10 -o 16 == 11 ']'
196s + echo '### PostgreSQL 16 acceptance-etcd3 ###'
196s + bash -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd3 PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts'
196s Jul 30 22:32:55 Feature: basic replication # features/basic_replication.feature:1
196s Jul 30 22:32:55 We should check that the basic bootstrapping, replication and failover works.
196s Jul 30 22:32:55 Scenario: check replication of a single table # features/basic_replication.feature:4
196s Jul 30 22:32:55 Given I start postgres0 # features/steps/basic_replication.py:8
200s Jul 30 22:32:59 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
200s Jul 30 22:32:59 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
200s Jul 30 22:32:59 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71
201s Jul 30 22:33:00 Then I receive a response code 200 # features/steps/patroni_api.py:98
201s Jul 30 22:33:00 When I start postgres1 # features/steps/basic_replication.py:8
205s Jul 30 22:33:04 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7
209s Jul 30 22:33:08 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23
209s Jul 30 22:33:08 And I add the table foo to postgres0 # features/steps/basic_replication.py:54
209s Jul 30 22:33:08 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
210s Jul 30 22:33:09 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
210s Jul 30 22:33:09 
210s Jul 30 22:33:09 Scenario: check restart of sync replica # features/basic_replication.feature:17
210s Jul 30 22:33:09 Given I shut down postgres2 # features/steps/basic_replication.py:29
211s Jul 30 22:33:10 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23
211s Jul 30 22:33:10 When I start postgres2 # features/steps/basic_replication.py:8
214s Jul 30 22:33:13 And I shut down postgres1 # features/steps/basic_replication.py:29
217s Jul 30 22:33:16 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
218s Jul 30 22:33:17 When I start postgres1 # features/steps/basic_replication.py:8
221s Jul 30 22:33:20 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
222s Jul 30 22:33:21 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
222s Jul 30 22:33:21 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
222s Jul 30 22:33:21 
222s Jul 30 22:33:21 Scenario: check stuck sync replica # features/basic_replication.feature:28
222s Jul 30 22:33:21 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71
222s Jul 30 22:33:21 Then I receive a response code 200 # features/steps/patroni_api.py:98
222s Jul 30 22:33:21 And I create table on postgres0 # features/steps/basic_replication.py:73
222s Jul 30 22:33:21 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93
223s Jul 30 22:33:22 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93
223s Jul 30 22:33:22 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64
223s Jul 30 22:33:22 And I load data on postgres0 # features/steps/basic_replication.py:84
224s Jul 30 22:33:23 Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23
227s Jul 30 22:33:26 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64
227s Jul 30 22:33:26 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
227s Jul 30 22:33:26 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142
227s Jul 30 22:33:26 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71
227s Jul 30 22:33:26 Then I receive a response code 200 # features/steps/patroni_api.py:98
227s Jul 30 22:33:26 And I drop table on postgres0 # features/steps/basic_replication.py:73
227s Jul 30 22:33:26 
227s Jul 30 22:33:26 Scenario: check multi sync replication # features/basic_replication.feature:44
227s Jul 30 22:33:26 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71
227s Jul 30 22:33:26 Then I receive a response code 200 # features/steps/patroni_api.py:98
227s Jul 30 22:33:26 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23
231s Jul 30 22:33:30 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
231s Jul 30 22:33:30 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
232s Jul 30 22:33:30 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71
232s Jul 30 22:33:31 Then I receive a response code 200 # features/steps/patroni_api.py:98
232s Jul 30 22:33:31 And I shut down postgres1 # features/steps/basic_replication.py:29
235s Jul 30 22:33:34 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23
236s Jul 30 22:33:35 When I start postgres1 # features/steps/basic_replication.py:8
240s Jul 30 22:33:39 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
240s Jul 30 22:33:39 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142
240s Jul 30 22:33:39 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142
240s Jul 30 22:33:39 
240s Jul 30 22:33:39 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59
240s Jul 30 22:33:39 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
242s Jul 30 22:33:41 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
242s Jul 30 22:33:41 When I sleep for 2 seconds # features/steps/patroni_api.py:39
244s Jul 30 22:33:43 And I shut down postgres0 # features/steps/basic_replication.py:29
245s Jul 30 22:33:44 And I run patronictl.py resume batman # features/steps/patroni_api.py:86
247s Jul 30 22:33:46 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
247s Jul 30 22:33:46 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105
266s Jul 30 22:34:05 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156
267s Jul 30 22:34:06 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12
267s Jul 30 22:34:06 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71
267s Jul 30 22:34:06 Then I receive a response code 200 # features/steps/patroni_api.py:98
267s Jul 30 22:34:06 When I add the table bar to postgres2 # features/steps/basic_replication.py:54
267s Jul 30 22:34:06 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
270s Jul 30 22:34:09 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
270s Jul 30 22:34:09 
270s Jul 30 22:34:09 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75
270s Jul 30 22:34:09 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54
270s Jul 30 22:34:09 And I start postgres0 # features/steps/basic_replication.py:8
270s Jul 30 22:34:09 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
284s Jul 30 22:34:23 When I add the table buz to postgres2 # features/steps/basic_replication.py:54
284s Jul 30 22:34:23 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93
284s Jul 30 22:34:23 
284s Jul 30 22:34:23 @reject-duplicate-name
284s Jul 30 22:34:23 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83
284s Jul 30 22:34:23 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13
288s Jul 30 22:34:27 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121
292s Jul 30 22:34:31 
292s Jul 30 22:34:31 Feature: cascading replication # features/cascading_replication.feature:1
292s Jul 30 22:34:31 We should check that patroni can do base backup and streaming from the replica
292s Jul 30 22:34:31 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4
292s Jul 30 22:34:31 Given I start postgres0 # features/steps/basic_replication.py:8
296s Jul 30 22:34:35 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
296s Jul 30 22:34:35 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7
300s Jul 30 22:34:39 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
301s Jul 30 22:34:40 And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18
301s Jul 30 22:34:40 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18
301s Jul 30 22:34:40 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
301s Jul 30 22:34:40 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7
305s Jul 30 22:34:44 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112
306s Jul 30 22:34:45 And there is a label with "postgres1" in postgres2 data directory # features/steps/cascading_replication.py:12
312s Jul 30 22:34:51 
312s SKIP FEATURE citus: Citus extenstion isn't available
312s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available
312s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available
312s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available
312s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available
312s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available
312s Jul 30 22:34:51 Feature: citus # features/citus.feature:1
312s Jul 30 22:34:51 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
312s Jul 30 22:34:51 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4
312s Jul 30 22:34:51 Given I start postgres0 in citus group 0 # None
312s Jul 30 22:34:51 And I start postgres2 in citus group 1 # None
312s Jul 30 22:34:51 Then postgres0 is a leader in a group 0 after 10 seconds # None
312s Jul 30 22:34:51 And postgres2 is a leader in a group 1 after 10 seconds # None
312s Jul 30 22:34:51 When I start postgres1 in citus group 0 # None
312s Jul 30 22:34:51 And I start postgres3 in citus group 1 # None
312s Jul 30 22:34:51 Then replication works from postgres0 to postgres1 after 15 seconds # None
312s Jul 30 22:34:51 Then replication works from postgres2 to postgres3 after 15 seconds # None
312s Jul 30 22:34:51 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None
312s Jul 30 22:34:51 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
312s Jul 30 22:34:51 
312s Jul 30 22:34:51 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16
312s Jul 30 22:34:51 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None
312s Jul 30 22:34:51 Then postgres1 role is the primary after 10 seconds # None
312s Jul 30 22:34:51 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None
312s Jul 30 22:34:51 And replication works from postgres1 to postgres0 after 15 seconds # None
312s Jul 30 22:34:51 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
312s Jul 30 22:34:51 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None
312s Jul 30 22:34:51 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None
312s Jul 30 22:34:51 Then postgres0 role is the primary after 10 seconds # None
312s Jul 30 22:34:51 And replication works from postgres0 to postgres1 after 15 seconds # None
312s Jul 30 22:34:51 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None
312s Jul 30 22:34:51 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None
312s Jul 30 22:34:51 
312s Jul 30 22:34:51 Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29
312s Jul 30 22:34:51 Given I create a distributed table on postgres0 # None
312s Jul 30 22:34:51 And I start a thread inserting data on postgres0 # None
312s Jul 30 22:34:51 When I run patronictl.py switchover batman --group 1 --force # None
312s Jul 30 22:34:51 Then I receive a response returncode 0 # None
312s Jul 30 22:34:51 And postgres3 role is the primary after 10 seconds # None
312s Jul 30 22:34:51 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None
312s Jul 30 22:34:51 And replication works from postgres3 to postgres2 after 15 seconds # None
312s Jul 30 22:34:51 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
312s Jul 30 22:34:51 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None
312s Jul 30 22:34:51 And a thread is still alive # None
312s Jul 30 22:34:51 When I run patronictl.py switchover batman --group 1 --force # None
312s Jul 30 22:34:51 Then I receive a response returncode 0 # None
312s Jul 30 22:34:51 And postgres2 role is the primary after 10 seconds # None
312s Jul 30 22:34:51 And replication works from postgres2 to postgres3 after 15 seconds # None
312s Jul 30 22:34:51 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
312s Jul 30 22:34:51 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None
312s Jul 30 22:34:51 And a thread is still alive # None
312s Jul 30 22:34:51 When I stop a thread # None
312s Jul 30 22:34:51 Then a distributed table on postgres0 has expected rows # None
312s Jul 30 22:34:51 
312s Jul 30 22:34:51 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50
312s Jul 30 22:34:51 Given I cleanup a distributed table on postgres0 # None
312s Jul 30 22:34:51 And I start a thread inserting data on postgres0 # None
312s Jul 30 22:34:51 When I run patronictl.py restart batman postgres2 --group 1 --force # None
312s Jul 30 22:34:51 Then I receive a response returncode 0 # None
312s Jul 30 22:34:51 And postgres2 role is the primary after 10 seconds # None
312s Jul 30 22:34:51 And replication works from postgres2 to postgres3 after 15 seconds # None
312s Jul 30 22:34:51 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None
312s Jul 30 22:34:51 And a thread is still alive # None
312s Jul 30 22:34:51 When I stop a thread # None
312s Jul 30 22:34:51 Then a distributed table on postgres0 has expected rows # None
312s Jul 30 22:34:51 
312s Jul 30 22:34:51 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62
312s Jul 30 22:34:51 Given I start postgres4 in citus group 2 # None
312s Jul 30 22:34:51 Then postgres4 is a leader in a group 2 after 10 seconds # None
312s Jul 30 22:34:51 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None
312s Jul 30 22:34:51 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None
312s Jul 30 22:34:51 Then I receive a response returncode 0 # None
312s Jul 30 22:34:51 And I receive a response output "+ttl: 20" # None
312s Jul 30 22:34:51 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None
312s Jul 30 22:34:51 When I shut down postgres4 # None
312s Jul 30 22:34:51 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None
312s Jul 30 22:34:51 When I run patronictl.py restart batman postgres2 --group 1 --force # None
312s Jul 30 22:34:51 Then a transaction finishes in 20 seconds # None
312s Jul 30 22:34:51 
312s Jul 30 22:34:51 Feature: custom bootstrap # features/custom_bootstrap.feature:1
312s Jul 30 22:34:51 We should check that patroni can bootstrap a new cluster from a backup
312s Jul 30 22:34:51 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4
312s Jul 30 22:34:51 Given I start postgres0 # features/steps/basic_replication.py:8
316s Jul 30 22:34:55 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
317s Jul 30 22:34:56 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
317s Jul 30 22:34:56 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6
322s Jul 30 22:35:01 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
323s Jul 30 22:35:02 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93
323s Jul 30 22:35:02 
323s Jul 30 22:35:02 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12
323s Jul 30 22:35:02 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54
323s Jul 30 22:35:02 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
324s Jul 30 22:35:03 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11
329s Jul 30 22:35:08 Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16
330s Jul 30 22:35:09 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93
336s Jul 30 22:35:15 
336s Jul 30 22:35:15 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1
336s Jul 30 22:35:15 We should check the basic dcs failsafe mode functioning
336s Jul 30 22:35:15 Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4
336s Jul 30 22:35:15 Given I start postgres0 # features/steps/basic_replication.py:8
340s Jul 30 22:35:19 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
341s Jul 30 22:35:20 Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23
341s Jul 30 22:35:20 When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71
341s Jul 30 22:35:20 Then I receive a response code 200 # features/steps/patroni_api.py:98
341s Jul 30 22:35:20 And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156
343s Jul 30 22:35:21 When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61
343s Jul 30 22:35:22 Then I receive a response code 200 # features/steps/patroni_api.py:98
343s Jul 30 22:35:22 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
343s Jul 30 22:35:22 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71
343s Jul 30 22:35:22 Then I receive a response code 200 # features/steps/patroni_api.py:98
343s Jul 30 22:35:22 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
343s Jul 30 22:35:22 Then I receive a response code 200 # features/steps/patroni_api.py:98
343s Jul 30 22:35:22 
343s Jul 30 22:35:22 @dcs-failsafe
343s Jul 30 22:35:22 Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20
343s Jul 30 22:35:22 Given DCS is down # None
343s Jul 30 22:35:22 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # None
343s Jul 30 22:35:22 And postgres0 role is the primary after 10 seconds # None
343s Jul 30 22:35:22 
343s Jul 30 22:35:22 @dcs-failsafe
343s Jul 30 22:35:22 Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26
343s Jul 30 22:35:22 Given DCS is up # None
343s Jul 30 22:35:22 When I do a backup of postgres0 # None
343s Jul 30 22:35:22 And I shut down postgres0 # None
343s Jul 30 22:35:22 When I start postgres1 in a cluster batman from backup with no_leader # None
343s Jul 30 22:35:22 Then postgres1 role is the replica after 12 seconds # None
343s SKIP Scenario check one-node cluster is functioning while DCS is down: it is not possible to control state of etcd3 from tests
343s SKIP Scenario check new replica isn't promoted when leader is down and DCS is up: it is not possible to control state of etcd3 from tests
343s Jul 30 22:35:22 
343s Jul 30 22:35:22 Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33
343s Jul 30 22:35:22 Given I start postgres0 # features/steps/basic_replication.py:8
343s Jul 30 22:35:22 And I start postgres1 # features/steps/basic_replication.py:8
347s Jul 30 22:35:26 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
347s Jul 30 22:35:26 And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23
348s Jul 30 22:35:27 And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156
348s Jul 30 22:35:27 When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61
348s Jul 30 22:35:27 Then I receive a response code 200 # features/steps/patroni_api.py:98
348s Jul 30 22:35:27 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98
348s Jul 30 22:35:27 And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe @slot-advance
348s Jul 30 22:35:27SKIP Scenario check leader and replica are functioning while DCS is down: it is not possible to control state of etcd3 from tests
348s SKIP Scenario check primary is demoted when one replica is shut down and DCS is down: it is not possible to control state of etcd3 from tests
348s SKIP Scenario check known replica is promoted when leader is down and DCS is up: it is not possible to control state of etcd3 from tests
348s Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46
348s Jul 30 22:35:27 Given I get all changes from physical slot dcs_slot_1 on postgres0 # None
348s Jul 30 22:35:27 Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # None
348s Jul 30 22:35:27 And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # None
348s Jul 30 22:35:27 And DCS is down # None
348s Jul 30 22:35:27 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # None
348s Jul 30 22:35:27 Then postgres0 role is the primary after 10 seconds # None
348s Jul 30 22:35:27 And postgres1 role is the replica after 2 seconds # None
348s Jul 30 22:35:27 And replication works from postgres0 to postgres1 after 10 seconds # None
348s Jul 30 22:35:27 When I get all changes from logical slot dcs_slot_0 on postgres0 # None
348s Jul 30 22:35:27 And I get all changes from physical slot dcs_slot_1 on postgres0 # None
348s Jul 30 22:35:27 Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # None
348s Jul 30 22:35:27 And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # None
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe
348s Jul 30 22:35:27 Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61
348s Jul 30 22:35:27 Given DCS is down # None
348s Jul 30 22:35:27 And I kill postgres1 # None
348s Jul 30 22:35:27 And I kill postmaster on postgres1 # None
348s Jul 30 22:35:27 Then postgres0 role is the replica after 12 seconds # None
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe
348s Jul 30 22:35:27 Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68
348s Jul 30 22:35:27 Given I kill postgres0 # None
348s Jul 30 22:35:27 And I shut down postmaster on postgres0 # None
348s Jul 30 22:35:27 And DCS is up # None
348s Jul 30 22:35:27 When I start postgres1 # None
348s Jul 30 22:35:27 Then "members/postgres1" key in DCS has state=running after 10 seconds # None
348s Jul 30 22:35:27 And postgres1 role is the primary after 25 seconds # None
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe
348s Jul 30 22:35:27 Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77
348s Jul 30 22:35:27 Given I start postgres0 # None
348s Jul 30 22:35:27 And I start postgres2 # None
348s Jul 30 22:35:27 Then "members/postgres2" key in DCS has state=running after 10 seconds # None
348s Jul 30 22:35:27 And "members/postgres0" key in DCS has state=running after 20 seconds # None
348s Jul 30 22:35:27 And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # None
348s Jul 30 22:35:27 And replication works from postgres1 to postgres0 after 10 seconds # None
348s Jul 30 22:35:27 And replication works from postgres1 to postgres2 after 10 seconds # None
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe @slot-advance
348s Jul 30 22:35:27 Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88
348s Jul 30 22:35:27 Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # None
348s Jul 30 22:35:27 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # None
348s Jul 30 22:35:27 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # None
348s Jul 30 22:35:27 When I get all changes from physical slot dcs_slot_1 on postgres1 # None
348s Jul 30 22:35:27 Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # None
348s Jul 30 22:35:27 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # None
348s Jul 30 22:35:27 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # None
348s Jul 30 22:35:27 
348s Jul 30 22:35:27 @dcs-failsafe
348s Jul 30 22:35:27 Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98
348s Jul 30 22:35:27 Given DCS is down # None
348s Jul 30 22:35:27 Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # None
348s Jul 30 22:35:27 Then postgres1 role is the primary after 10 seconds # None
348s Jul 30 22:35:27 And postgres0 role is the replica after 2 seconds # None
348s Jul 30 22:35:27 And postgres2 role is the replica after 2 seconds # None
348s SKIP Scenario scale to three-node cluster: it is not possible to control state of etcd3 from tests
348s SKIP Scenario make sure permanent slots exist on replicas: it is not possible to control state of etcd3 from tests
348s SKIP Scenario check three-node cluster is functioning while DCS is down: it is not possible to control state of etcd3 from tests
348s SKIP Scenario check that permanent slots are in sync between nodes while DCS is down: it is not possible to control state of etcd3 from tests
352s Jul 30 22:35:31 
352s Jul 30 22:35:31 @dcs-failsafe @slot-advance
352s Jul 30 22:35:31 Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107
352s Jul 30 22:35:31 Given replication works from postgres1 to postgres0 after 10 seconds # None
352s Jul 30 22:35:31 And replication works from postgres1 to postgres2 after 10 seconds # None
352s Jul 30 22:35:31 When I get all changes from logical slot dcs_slot_2 on postgres1 # None
352s Jul 30 22:35:31 And I get all changes from physical slot dcs_slot_1 on postgres1 # None
352s Jul 30 22:35:31 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # None
352s Jul 30 22:35:31 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # None
352s Jul 30 22:35:31 And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # None
352s Jul 30 22:35:31 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # None
352s Jul 30 22:35:31 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # None
352s Jul 30 22:35:31 
352s Jul 30 22:35:31 Feature: ignored slots # features/ignored_slots.feature:1
352s Jul 30 22:35:31 
352s Jul 30 22:35:31 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2
352s Jul 30 22:35:31 Given I start postgres1 # features/steps/basic_replication.py:8
356s Jul 30 22:35:35 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
358s Jul 30 22:35:36 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
358s Jul 30 22:35:36 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
358s Jul 30 22:35:36 Then I receive a response code 200 # features/steps/patroni_api.py:98
358s Jul 30 22:35:36 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156
358s Jul 30 22:35:36 When I shut down postgres1 # features/steps/basic_replication.py:29
359s Jul 30 22:35:38 And I start postgres1 # features/steps/basic_replication.py:8
362s Jul 30 22:35:41 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
363s Jul 30 22:35:42 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
364s Jul 30 22:35:43 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105
364s Jul 30 22:35:43 When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
364s Jul 30 22:35:43 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
364s Jul 30 22:35:43 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
364s Jul 30 22:35:43 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8
364s Jul 30 22:35:43 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8
364s Jul 30 22:35:43 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
364s Jul 30 22:35:43 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
364s Jul 30 22:35:43 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
364s Jul 30 22:35:43 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
364s Jul 30 22:35:43 When I start postgres0 # features/steps/basic_replication.py:8
368s Jul 30 22:35:47 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
368s Jul 30 22:35:47 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
368s Jul 30 22:35:47 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
370s Jul 30 22:35:48 When I shut down postgres1 # features/steps/basic_replication.py:29
372s Jul 30 22:35:50 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
373s Jul 30 22:35:51 When I start postgres1 # features/steps/basic_replication.py:8
376s Jul 30 22:35:55 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105
376s Jul 30 22:35:55 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23
377s Jul 30 22:35:56 And I sleep for 2 seconds # features/steps/patroni_api.py:39
379s Jul 30 22:35:58 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
379s Jul 30 22:35:58 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
379s Jul 30 22:35:58 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
379s Jul 30 22:35:58 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
379s Jul 30 22:35:58 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40
379s Jul 30 22:35:58 When I shut down postgres0 # features/steps/basic_replication.py:29
381s Jul 30 22:36:00 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23
382s Jul 30 22:36:01 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
382s Jul 30 22:36:01 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
382s Jul 30 22:36:01 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
382s Jul 30 22:36:01 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19
384s Jul 30 22:36:03 
384s Jul 30 22:36:03 Feature: nostream node # features/nostream_node.feature:1
384s Jul 30 22:36:03 
384s Jul 30 22:36:03 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3
384s Jul 30 22:36:03 When I start postgres0 # features/steps/basic_replication.py:8
388s Jul 30 22:36:07 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7
392s Jul 30 22:36:11 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23
393s Jul 30 22:36:12 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112
397s Jul 30 22:36:16 
397s Jul 30 22:36:16 @slot-advance
397s Jul 30 22:36:16 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10
397s Jul 30 22:36:16 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
397s Jul 30 22:36:16 Then I receive a response code 200 # features/steps/patroni_api.py:98
397s Jul 30 22:36:16 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
400s Jul 30 22:36:19 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
401s Jul 30 22:36:20 When I configure and start postgres2 with a tag replicatefrom postgres1 #
features/steps/cascading_replication.py:7 406s Jul 30 22:36:25 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 413s Jul 30 22:36:32 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 413s Jul 30 22:36:32 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 419s Jul 30 22:36:38 419s Jul 30 22:36:38 Feature: patroni api # features/patroni_api.feature:1 419s Jul 30 22:36:38 We should check that patroni correctly responds to valid and not-valid API requests. 419s Jul 30 22:36:38 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 419s Jul 30 22:36:38 Given I start postgres0 # features/steps/basic_replication.py:8 423s Jul 30 22:36:42 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 424s Jul 30 22:36:43 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 424s Jul 30 22:36:43 Then I receive a response code 200 # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 And I receive a response state running # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 And I receive a response role master # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 424s Jul 30 22:36:43 Then I receive a response code 503 # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 424s Jul 30 22:36:43 Then I receive a response code 200 # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 424s Jul 30 22:36:43 Then I receive a response code 503 # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 424s Jul 30 22:36:43 Then I receive a response code 503 # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 424s Jul 30 22:36:43 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 426s Jul 30 22:36:45 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 426s Jul 30 22:36:45 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98 426s Jul 30 22:36:45 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 426s Jul 30 22:36:45 Then I receive a response code 412 # features/steps/patroni_api.py:98 426s Jul 30 22:36:45 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 426s Jul 30 22:36:45 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 426s Jul 30 22:36:45 Then I receive a response code 400 # features/steps/patroni_api.py:98 426s Jul 30 22:36:45 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 427s Jul 30 22:36:46 Then I receive a response code 400 # features/steps/patroni_api.py:98 427s Jul 30 22:36:46 And I receive a response text "Failover 
could be performed only to a specific candidate" # features/steps/patroni_api.py:98 427s Jul 30 22:36:46 427s Jul 30 22:36:46 Scenario: check local configuration reload # features/patroni_api.feature:32 427s Jul 30 22:36:46 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 427s Jul 30 22:36:46 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 427s Jul 30 22:36:46 Then I receive a response code 202 # features/steps/patroni_api.py:98 427s Jul 30 22:36:46 427s Jul 30 22:36:46 Scenario: check dynamic configuration change via DCS # features/patroni_api.feature:37 427s Jul 30 22:36:46 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 427s Jul 30 22:36:46 Then I receive a response code 200 # features/steps/patroni_api.py:98 427s Jul 30 22:36:46 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156 432s Jul 30 22:36:49 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 432s Jul 30 22:36:49 Then I receive a response code 200 # features/steps/patroni_api.py:98 432s Jul 30 22:36:49 And I receive a response ttl 20 # features/steps/patroni_api.py:98 432s Jul 30 22:36:49 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 432s Jul 30 22:36:49 Then I receive a response code 200 # features/steps/patroni_api.py:98 432s Jul 30 22:36:49 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 432s Jul 30 22:36:49 And I sleep for 4 seconds # features/steps/patroni_api.py:39 434s Jul 30 22:36:53 434s Jul 30 22:36:53 Scenario: check the scheduled restart # features/patroni_api.feature:49 434s Jul 30 22:36:53 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 436s Jul 30 22:36:55 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 436s Jul 30 22:36:55 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 436s Jul 30 22:36:55 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 436s Jul 30 22:36:55 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 436s Jul 30 22:36:55 Then I receive a response code 202 # features/steps/patroni_api.py:98 436s Jul 30 22:36:55 And I sleep for 8 seconds # features/steps/patroni_api.py:39 444s Jul 30 22:37:03 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 444s Jul 30 22:37:03 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124 444s Jul 30 22:37:03 Then I receive a response code 202 # features/steps/patroni_api.py:98 444s Jul 30 22:37:03 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 451s Jul 30 22:37:10 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 452s Jul 30 22:37:11 452s Jul 30 22:37:11 Scenario: check API requests for the primary-replica pair in the pause mode # 
features/patroni_api.feature:63 452s Jul 30 22:37:11 Given I start postgres1 # features/steps/basic_replication.py:8 456s Jul 30 22:37:15 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 457s Jul 30 22:37:16 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 459s Jul 30 22:37:18 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 459s Jul 30 22:37:18 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 459s Jul 30 22:37:18 waiting for server to shut down.... done 459s Jul 30 22:37:18 server stopped 459s Jul 30 22:37:18 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 459s Jul 30 22:37:18 Then I receive a response code 503 # features/steps/patroni_api.py:98 459s Jul 30 22:37:18 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 461s Jul 30 22:37:20 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 464s Jul 30 22:37:23 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 464s Jul 30 22:37:23 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 465s Jul 30 22:37:24 And I sleep for 2 seconds # features/steps/patroni_api.py:39 467s Jul 30 22:37:26 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 474s Jul 30 22:37:26 Then I receive a response code 200 # features/steps/patroni_api.py:98 474s Jul 30 22:37:26 And I receive a response state running # features/steps/patroni_api.py:98 474s Jul 30 22:37:26 And I receive a response role replica # features/steps/patroni_api.py:98 474s Jul 30 22:37:26 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 474s Jul 30 22:37:30 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 474s Jul 30 22:37:30 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 474s Jul 30 22:37:30 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 474s Jul 30 22:37:31 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 474s Jul 30 22:37:31 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 476s Jul 30 22:37:35 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 476s Jul 30 22:37:35 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 476s Jul 30 22:37:35 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 477s Jul 30 22:37:36 477s Jul 30 22:37:36 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 477s Jul 30 22:37:36 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 479s Jul 30 22:37:38 Then I receive a response code 200 # features/steps/patroni_api.py:98 479s Jul 30 22:37:38 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 479s Jul 30 22:37:38 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 479s Jul 30 22:37:38 And postgres0 role is the secondary after 10 seconds # 
features/steps/basic_replication.py:105 484s Jul 30 22:37:43 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 485s Jul 30 22:37:43 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 486s Jul 30 22:37:44 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 486s Jul 30 22:37:45 Then I receive a response code 503 # features/steps/patroni_api.py:98 486s Jul 30 22:37:45 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 486s Jul 30 22:37:45 Then I receive a response code 200 # features/steps/patroni_api.py:98 486s Jul 30 22:37:45 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 486s Jul 30 22:37:45 Then I receive a response code 200 # features/steps/patroni_api.py:98 486s Jul 30 22:37:45 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 486s Jul 30 22:37:45 Then I receive a response code 503 # features/steps/patroni_api.py:98 486s Jul 30 22:37:45 486s Jul 30 22:37:45 Scenario: check the scheduled switchover # features/patroni_api.feature:107 486s Jul 30 22:37:45 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 488s Jul 30 22:37:47 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 488s Jul 30 22:37:47 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98 488s Jul 30 22:37:47 When I run patronictl.py resume batman # features/steps/patroni_api.py:86 490s Jul 30 22:37:48 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 490s Jul 30 22:37:48 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 491s Jul 30 22:37:50 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 491s Jul 30 22:37:50 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29 501s Jul 30 22:38:00 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 501s Jul 30 22:38:00 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 505s Jul 30 22:38:04 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112 505s Jul 30 22:38:04 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 506s Jul 30 22:38:05 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 506s Jul 30 22:38:05 Then I receive a response code 200 # features/steps/patroni_api.py:98 506s Jul 30 22:38:05 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 506s Jul 30 22:38:05 Then I receive a response code 503 # features/steps/patroni_api.py:98 506s Jul 30 22:38:05 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 506s Jul 30 22:38:05 Then I receive a response code 503 # features/steps/patroni_api.py:98 506s Jul 30 22:38:05 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 506s Jul 30 22:38:05 Then I receive a response code 200 # features/steps/patroni_api.py:98 510s Jul 30 22:38:09 510s Jul 30 22:38:09 Feature: permanent slots # 
features/permanent_slots.feature:1 510s Jul 30 22:38:09 510s Jul 30 22:38:09 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2 510s Jul 30 22:38:09 Given I start postgres0 # features/steps/basic_replication.py:8 514s Jul 30 22:38:13 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 515s Jul 30 22:38:14 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 515s Jul 30 22:38:14 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71 515s Jul 30 22:38:14 Then I receive a response code 200 # features/steps/patroni_api.py:98 515s Jul 30 22:38:14 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156 515s Jul 30 22:38:14 When I start postgres1 # features/steps/basic_replication.py:8 519s Jul 30 22:38:18 And I start postgres2 # features/steps/basic_replication.py:8 523s Jul 30 22:38:22 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7 527s Jul 30 22:38:26 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 527s Jul 30 22:38:26 And postgres0 has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80 527s Jul 30 22:38:26 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80 527s Jul 30 22:38:26 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 527s Jul 30 22:38:26 527s Jul 30 22:38:26 @slot-advance 527s Jul 30 22:38:26 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18 527s Jul 30 22:38:26 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 531s Jul 30 22:38:30 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 531s Jul 30 22:38:30 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 532s Jul 30 22:38:31 532s Jul 30 22:38:31 @slot-advance 532s Jul 30 22:38:31 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24 532s Jul 30 22:38:31 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 538s Jul 30 22:38:37 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 538s Jul 30 22:38:37 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 539s Jul 30 22:38:38 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 540s Jul 30 22:38:39 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres3 has a physical replication slot named 
test_physical after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 540s Jul 30 22:38:39 @slot-advance 540s Jul 30 22:38:39 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34 540s Jul 30 22:38:39 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80 540s Jul 30 22:38:39 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 540s Jul 30 22:38:39 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 540s Jul 30 22:38:39 540s Jul 30 22:38:39 @slot-advance 540s Jul 30 22:38:39 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45 540s Jul 30 22:38:39 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54 540s Jul 30 22:38:39 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70 540s Jul 30 22:38:39 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75 540s Jul 30 22:38:39 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 541s Jul 30 22:38:40 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51 543s Jul 30 22:38:42 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51 543s Jul 30 22:38:42 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 543s Jul 30 22:38:42 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 543s Jul 30 22:38:42 543s Jul 30 22:38:42 @slot-advance 543s Jul 30 22:38:42 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62 543s Jul 30 22:38:42 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96 543s Jul 30 
22:38:42 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96 543s Jul 30 22:38:42 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96 543s Jul 30 22:38:42 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102 543s Jul 30 22:38:42 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96 543s Jul 30 22:38:42 543s Jul 30 22:38:42 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69 543s Jul 30 22:38:42 Given I shut down postgres3 # features/steps/basic_replication.py:29 544s Jul 30 22:38:43 And I shut down postgres2 # features/steps/basic_replication.py:29 545s Jul 30 22:38:44 And I shut down postgres0 # features/steps/basic_replication.py:29 547s Jul 30 22:38:46 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 547s Jul 30 22:38:46 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80 547s Jul 30 22:38:46 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 550s Jul 30 22:38:48 550s Jul 30 22:38:48 Feature: priority replication # features/priority_failover.feature:1 550s Jul 30 22:38:48 We should check that we can give nodes priority during failover 550s Jul 30 22:38:48 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4 550s Jul 30 22:38:48 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 554s Jul 30 22:38:53 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7 558s Jul 30 22:38:57 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 558s Jul 30 22:38:57 When I shut down postgres0 # features/steps/basic_replication.py:29 560s Jul 30 22:38:59 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 562s Jul 30 22:39:01 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 562s Jul 30 22:39:01 When I start postgres0 # features/steps/basic_replication.py:8 565s Jul 30 22:39:04 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 567s Jul 30 22:39:06 567s Jul 30 22:39:06 Scenario: check higher failover priority is respected # features/priority_failover.feature:14 567s Jul 30 22:39:06 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 571s Jul 30 22:39:10 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7 575s Jul 30 22:39:14 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112 575s Jul 30 22:39:14 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112 576s Jul 30 22:39:15 When I shut down postgres0 # features/steps/basic_replication.py:29 579s Jul 30 22:39:18 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105 579s Jul 30 22:39:18 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my 
wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 579s Jul 30 22:39:18 579s Jul 30 22:39:18 Scenario: check conflicting configuration handling # features/priority_failover.feature:23 579s Jul 30 22:39:18 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131 579s Jul 30 22:39:18 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66 579s Jul 30 22:39:18 Then I receive a response code 202 # features/steps/patroni_api.py:98 579s Jul 30 22:39:18 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 580s Jul 30 22:39:19 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23 581s Jul 30 22:39:20 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71 582s Jul 30 22:39:20 Then I receive a response code 412 # features/steps/patroni_api.py:98 582s Jul 30 22:39:20 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98 582s Jul 30 22:39:20 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131 582s Jul 30 22:39:20 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66 582s Jul 30 22:39:21 Then I receive a response code 202 # features/steps/patroni_api.py:98 582s Jul 30 22:39:21 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 584s Jul 30 22:39:23 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23 585s Jul 30 22:39:24 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71 588s Jul 30 22:39:27 Then I receive a response code 200 # features/steps/patroni_api.py:98 588s Jul 30 22:39:27 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 592s Jul 30 22:39:31 592s Jul 30 22:39:31 Feature: recovery # features/recovery.feature:1 592s Jul 30 22:39:31 We want to check that crashed postgres is started back 592s Jul 30 22:39:31 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4 592s Jul 30 22:39:31 Given I start postgres0 # features/steps/basic_replication.py:8 596s Jul 30 22:39:35 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 597s Jul 30 22:39:36 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 597s Jul 30 22:39:36 When I start postgres1 # features/steps/basic_replication.py:8 601s Jul 30 22:39:40 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 601s Jul 30 22:39:40 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 602s Jul 30 22:39:41 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 602s Jul 30 22:39:41 waiting for server to shut down.... 
done 602s Jul 30 22:39:41 server stopped 602s Jul 30 22:39:41 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 603s Jul 30 22:39:42 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 603s Jul 30 22:39:42 Then I receive a response code 200 # features/steps/patroni_api.py:98 603s Jul 30 22:39:42 And I receive a response role master # features/steps/patroni_api.py:98 603s Jul 30 22:39:42 And I receive a response timeline 1 # features/steps/patroni_api.py:98 603s Jul 30 22:39:42 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 604s Jul 30 22:39:43 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 608s Jul 30 22:39:47 608s Jul 30 22:39:47 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20 608s Jul 30 22:39:47 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71 609s Jul 30 22:39:48 Then I receive a response code 200 # features/steps/patroni_api.py:98 609s Jul 30 22:39:48 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 609s Jul 30 22:39:48 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 609s Jul 30 22:39:48 waiting for server to shut down.... done 609s Jul 30 22:39:48 server stopped 609s Jul 30 22:39:48 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 612s Jul 30 22:39:51 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 615s Jul 30 22:39:54 615s Jul 30 22:39:54 Feature: standby cluster # features/standby_cluster.feature:1 615s Jul 30 22:39:54 615s Jul 30 22:39:54 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2 615s Jul 30 22:39:54 Given I start postgres1 # features/steps/basic_replication.py:8 619s Jul 30 22:39:58 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 620s Jul 30 22:39:59 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 620s Jul 30 22:39:59 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 620s Jul 30 22:39:59 Then I receive a response code 200 # features/steps/patroni_api.py:98 620s Jul 30 22:39:59 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156 620s Jul 30 22:39:59 And I sleep for 3 seconds # features/steps/patroni_api.py:39 623s Jul 30 22:40:02 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71 623s Jul 30 22:40:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 623s Jul 30 22:40:02 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 624s Jul 30 22:40:03 When I start postgres0 # features/steps/basic_replication.py:8 628s Jul 30 22:40:07 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 628s Jul 30 22:40:07 And replication works from postgres1 to postgres0 after 15 seconds # 
features/steps/basic_replication.py:112 629s Jul 30 22:40:08 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 629s Jul 30 22:40:08 Then I receive a response code 200 # features/steps/patroni_api.py:98 629s Jul 30 22:40:08 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 629s Jul 30 22:40:08 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 629s Jul 30 22:40:08 629s Jul 30 22:40:08 @slot-advance 629s Jul 30 22:40:08 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22 629s Jul 30 22:40:08 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 633s Jul 30 22:40:12 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 639s Jul 30 22:40:18 639s Jul 30 22:40:18 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26 639s Jul 30 22:40:18 When I shut down postgres1 # features/steps/basic_replication.py:29 641s Jul 30 22:40:20 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 641s Jul 30 22:40:20 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23 642s Jul 30 22:40:21 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 642s Jul 30 22:40:21 Then I receive a response code 200 # features/steps/patroni_api.py:98 642s Jul 30 22:40:21 642s Jul 30 22:40:21 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33 642s Jul 30 22:40:21 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23 645s Jul 30 22:40:24 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 646s Jul 30 22:40:25 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 646s Jul 30 22:40:25 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 646s Jul 30 22:40:25 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61 646s Jul 30 22:40:25 Then I receive a response code 200 # features/steps/patroni_api.py:98 646s Jul 30 22:40:25 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 646s Jul 30 22:40:25 And I sleep for 3 seconds # features/steps/patroni_api.py:39 649s Jul 30 22:40:28 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 649s Jul 30 22:40:28 Then I receive a response code 503 # features/steps/patroni_api.py:98 649s Jul 30 22:40:28 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61 649s Jul 30 22:40:28 Then I receive a response code 200 # features/steps/patroni_api.py:98 649s Jul 30 22:40:28 And I receive a response role standby_leader # features/steps/patroni_api.py:98 649s Jul 30 22:40:28 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 649s Jul 30 22:40:28 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12 653s Jul 30 22:40:32 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105 653s Jul 30 22:40:32 And postgres2 
is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52 653s Jul 30 22:40:32 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 653s Jul 30 22:40:32 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61 653s Jul 30 22:40:32 Then I receive a response code 200 # features/steps/patroni_api.py:98 653s Jul 30 22:40:32 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 653s Jul 30 22:40:32 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 653s Jul 30 22:40:32 653s Jul 30 22:40:32 Scenario: check switchover # features/standby_cluster.feature:57 653s Jul 30 22:40:32 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86 657s Jul 30 22:40:36 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 658s Jul 30 22:40:36 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52 660s Jul 30 22:40:38 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12 660s Jul 30 22:40:38 660s Jul 30 22:40:38 Scenario: check failover # features/standby_cluster.feature:63 660s Jul 30 22:40:38 When I kill postgres2 # features/steps/basic_replication.py:34 661s Jul 30 22:40:39 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44 661s Jul 30 22:40:40 waiting for server to shut down.... done 661s Jul 30 22:40:40 server stopped 661s Jul 30 22:40:40 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52 680s Jul 30 22:40:59 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 680s Jul 30 22:40:59 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 680s Jul 30 22:40:59 Then I receive a response code 503 # features/steps/patroni_api.py:98 680s Jul 30 22:40:59 And I receive a response role standby_leader # features/steps/patroni_api.py:98 680s Jul 30 22:40:59 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 681s Jul 30 22:41:00 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 685s Jul 30 22:41:04 685s Jul 30 22:41:04 Feature: watchdog # features/watchdog.feature:1 685s Jul 30 22:41:04 Verify that watchdog gets pinged and triggered under appropriate circumstances. 
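The watchdog scenarios that follow change the cluster ttl with patronictl edit-config and expect the kernel watchdog timeout to track it (a 15 second timeout while ttl is 20, 25 seconds after ttl is raised to 30). A minimal sketch of that relationship, not part of the test suite, assuming the Patroni REST API is reachable on 127.0.0.1:8008 and the default watchdog safety_margin of 5 seconds:

    # Hedged sketch: read the current ttl from Patroni's /config endpoint and print the
    # watchdog timeout one would expect the scenarios below to observe.
    import json, urllib.request

    with urllib.request.urlopen("http://127.0.0.1:8008/config") as resp:
        ttl = json.load(resp)["ttl"]

    # Patroni arms the watchdog at roughly ttl - safety_margin; the log is consistent
    # with the default margin of 5 seconds (ttl=20 -> 15 s, ttl=30 -> 25 s).
    print(f"expected watchdog timeout: {ttl - 5} s")
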
685s Jul 30 22:41:04 Scenario: watchdog is opened and pinged # features/watchdog.feature:4 685s Jul 30 22:41:04 Given I start postgres0 with watchdog # features/steps/watchdog.py:16 689s Jul 30 22:41:08 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 690s Jul 30 22:41:09 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 690s Jul 30 22:41:09 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 690s Jul 30 22:41:09 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34 690s Jul 30 22:41:09 690s Jul 30 22:41:09 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11 690s Jul 30 22:41:09 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86 692s Jul 30 22:41:11 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 692s Jul 30 22:41:11 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98 692s Jul 30 22:41:11 When I sleep for 4 seconds # features/steps/patroni_api.py:39 696s Jul 30 22:41:15 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34 696s Jul 30 22:41:15 696s Jul 30 22:41:15 Scenario: watchdog is disabled during pause # features/watchdog.feature:18 696s Jul 30 22:41:15 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 698s Jul 30 22:41:17 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 698s Jul 30 22:41:17 When I sleep for 2 seconds # features/steps/patroni_api.py:39 700s Jul 30 22:41:19 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 700s Jul 30 22:41:19 700s Jul 30 22:41:19 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24 700s Jul 30 22:41:19 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 700s Jul 30 22:41:19 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 702s Jul 30 22:41:21 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 702s Jul 30 22:41:21 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 703s Jul 30 22:41:22 703s Jul 30 22:41:22 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30 703s Jul 30 22:41:22 Given I shut down postgres0 # features/steps/basic_replication.py:29 705s Jul 30 22:41:24 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 705s Jul 30 22:41:24 705s Jul 30 22:41:24 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34 705s Jul 30 22:41:24 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 705s Jul 30 22:41:24 And I start postgres0 with watchdog # features/steps/watchdog.py:16 708s Jul 30 22:41:27 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 709s Jul 30 22:41:28 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52 709s Jul 30 22:41:28 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44 736s Jul 30 22:41:55 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.4736.XgCgVVZx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.4781.XRSbkWfx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.4827.XhHzxQhx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.4874.XvFgXlFx 737s Jul 30 22:41:56 Combined data file 
.coverage.autopkgtest.4919.XKrVkEFx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.4990.XnfJNksx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5039.XrOnWzwx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5042.XbRcnJnx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5127.XPuHMehx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5224.XvPSEiZx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5232.XroDJVnx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5275.XjfVNZyx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5329.XGntTGlx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5482.XjZMZUVx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5528.XlhpCusx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5586.XSGWXdZx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5672.XozdQYEx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5723.XYoqaAQx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5818.XbVDOFhx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5871.XMFszDmx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.5932.XZiJKAIx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6023.XKsgMwtx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6123.XhDWFQYx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6159.XGNSczvx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6231.XQrEUbfx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6265.XSrEKnBx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6441.XdXSzexx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6491.XcJscqdx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6508.XnsKHIwx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6546.XYklRjDx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6595.XymszZBx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6600.XRvzzrzx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6634.XNLgPfBx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6674.XQuGuvax 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6839.XYixaMlx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6842.XCONFhEx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6848.XYjRvzFx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.6990.XrxxTsUx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7036.XzGysorx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7085.XaduDdUx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7137.XDiGGJnx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7190.XEeubstx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7387.XGjHuOdx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7424.XFkhEuvx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7502.XNFcDJwx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7580.XDrNsYBx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7653.XyCjEjmx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.7968.XuIRLnpx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8014.XDmODGWx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8158.XjjDeBEx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8221.XvQZJTAx 737s Jul 30 22:41:56 
Combined data file .coverage.autopkgtest.8286.XpzrMmcx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8390.XbpZggIx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8504.XfLrMiKx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8636.XmUksdPx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8680.XKVbxHRx 737s Jul 30 22:41:56 Skipping duplicate data .coverage.autopkgtest.8682.XWooHJgx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8685.XkWyeVVx 737s Jul 30 22:41:56 Combined data file .coverage.autopkgtest.8697.XMQANtEx 740s Jul 30 22:41:59 Name Stmts Miss Cover 740s Jul 30 22:41:59 ------------------------------------------------------------------------------------------------------------- 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1072 596 44% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1225 982 20% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 740s Jul 30 22:41:59 
/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18% 740s Jul 
30 22:41:59 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/enum.py 72 46 36% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/message.py 809 662 18% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/name.py 620 427 31% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/node.py 118 71 40% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/query.py 536 462 14% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/serial.py 93 79 15% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/set.py 149 108 28% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/version.py 7 0 
100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/etcd/__init__.py 125 63 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/etcd/client.py 380 256 33% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/idna/core.py 293 258 12% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/__main__.py 199 63 68% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/api.py 770 286 63% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/config.py 371 94 75% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 78 88% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/dcs/etcd3.py 679 124 82% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 253 58% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/ha.py 1244 362 71% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 173 79% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 166 60% 740s Jul 30 22:41:59 
/usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 37 89% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/utils.py 350 106 70% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psutil/__init__.py 951 629 34% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/six.py 504 250 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 123 47% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 23 57% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/connection.py 324 99 69% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 124 64% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 96 63% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/response.py 562 280 50% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 15 53% 740s Jul 30 
22:41:59 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 49 72% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 72 65% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 740s Jul 30 22:41:59 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 740s Jul 30 22:41:59 patroni/__init__.py 13 2 85% 740s Jul 30 22:41:59 patroni/__main__.py 199 199 0% 740s Jul 30 22:41:59 patroni/api.py 770 770 0% 740s Jul 30 22:41:59 patroni/async_executor.py 96 69 28% 740s Jul 30 22:41:59 patroni/collections.py 56 15 73% 740s Jul 30 22:41:59 patroni/config.py 371 196 47% 740s Jul 30 22:41:59 patroni/config_generator.py 212 212 0% 740s Jul 30 22:41:59 patroni/ctl.py 936 411 56% 740s Jul 30 22:41:59 patroni/daemon.py 76 76 0% 740s Jul 30 22:41:59 patroni/dcs/__init__.py 646 269 58% 740s Jul 30 22:41:59 patroni/dcs/consul.py 485 485 0% 740s Jul 30 22:41:59 patroni/dcs/etcd3.py 679 346 49% 740s Jul 30 22:41:59 patroni/dcs/etcd.py 603 277 54% 740s Jul 30 22:41:59 patroni/dcs/exhibitor.py 61 61 0% 740s Jul 30 22:41:59 patroni/dcs/kubernetes.py 938 938 0% 740s Jul 30 22:41:59 patroni/dcs/raft.py 319 319 0% 740s Jul 30 22:41:59 patroni/dcs/zookeeper.py 288 288 0% 740s Jul 30 22:41:59 patroni/dynamic_loader.py 35 7 80% 740s Jul 30 22:41:59 patroni/exceptions.py 16 1 94% 740s Jul 30 22:41:59 patroni/file_perm.py 43 15 65% 740s Jul 30 22:41:59 patroni/global_config.py 81 18 78% 740s Jul 30 22:41:59 patroni/ha.py 1244 1244 0% 740s Jul 30 22:41:59 patroni/log.py 219 173 21% 740s Jul 30 22:41:59 patroni/postgresql/__init__.py 821 651 21% 740s Jul 30 22:41:59 patroni/postgresql/available_parameters/__init__.py 21 1 95% 740s Jul 30 22:41:59 patroni/postgresql/bootstrap.py 252 222 12% 740s Jul 30 22:41:59 patroni/postgresql/callback_executor.py 55 34 38% 740s Jul 30 22:41:59 
patroni/postgresql/cancellable.py 104 84 19% 740s Jul 30 22:41:59 patroni/postgresql/config.py 813 698 14% 740s Jul 30 22:41:59 patroni/postgresql/connection.py 75 50 33% 740s Jul 30 22:41:59 patroni/postgresql/misc.py 41 29 29% 740s Jul 30 22:41:59 patroni/postgresql/mpp/__init__.py 89 21 76% 740s Jul 30 22:41:59 patroni/postgresql/mpp/citus.py 259 259 0% 740s Jul 30 22:41:59 patroni/postgresql/postmaster.py 170 139 18% 740s Jul 30 22:41:59 patroni/postgresql/rewind.py 416 416 0% 740s Jul 30 22:41:59 patroni/postgresql/slots.py 334 285 15% 740s Jul 30 22:41:59 patroni/postgresql/sync.py 130 96 26% 740s Jul 30 22:41:59 patroni/postgresql/validator.py 157 52 67% 740s Jul 30 22:41:59 patroni/psycopg.py 42 28 33% 740s Jul 30 22:41:59 patroni/raft_controller.py 22 22 0% 740s Jul 30 22:41:59 patroni/request.py 62 6 90% 740s Jul 30 22:41:59 patroni/scripts/__init__.py 0 0 100% 740s Jul 30 22:41:59 patroni/scripts/aws.py 59 59 0% 740s Jul 30 22:41:59 patroni/scripts/barman/__init__.py 0 0 100% 740s Jul 30 22:41:59 patroni/scripts/barman/cli.py 51 51 0% 740s Jul 30 22:41:59 patroni/scripts/barman/config_switch.py 51 51 0% 740s Jul 30 22:41:59 patroni/scripts/barman/recover.py 37 37 0% 740s Jul 30 22:41:59 patroni/scripts/barman/utils.py 94 94 0% 740s Jul 30 22:41:59 patroni/scripts/wale_restore.py 207 207 0% 740s Jul 30 22:41:59 patroni/tags.py 38 11 71% 740s Jul 30 22:41:59 patroni/utils.py 350 177 49% 740s Jul 30 22:41:59 patroni/validator.py 301 215 29% 740s Jul 30 22:41:59 patroni/version.py 1 0 100% 740s Jul 30 22:41:59 patroni/watchdog/__init__.py 2 2 0% 740s Jul 30 22:41:59 patroni/watchdog/base.py 203 203 0% 740s Jul 30 22:41:59 patroni/watchdog/linux.py 135 135 0% 740s Jul 30 22:41:59 ------------------------------------------------------------------------------------------------------------- 740s Jul 30 22:41:59 TOTAL 53856 32395 40% 740s Jul 30 22:41:59 12 features passed, 0 failed, 1 skipped 740s Jul 30 22:41:59 46 scenarios passed, 0 failed, 14 skipped 740s Jul 30 22:41:59 466 steps passed, 0 failed, 119 skipped, 0 undefined 740s Jul 30 22:41:59 Took 8m10.571s 740s ### End 16 acceptance-etcd3 ### 740s + echo '### End 16 acceptance-etcd3 ###' 740s + rm -f '/tmp/pgpass?' 740s ++ id -u 740s + '[' 1000 -eq 0 ']' 740s autopkgtest [22:41:59]: test acceptance-etcd3: -----------------------] 741s acceptance-etcd3 PASS 741s autopkgtest [22:42:00]: test acceptance-etcd3: - - - - - - - - - - results - - - - - - - - - - 742s autopkgtest [22:42:01]: test acceptance-etcd-basic: preparing testbed 1512s nova [W] Using flock in scalingstack-bos01-s390x 1512s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 1512s nova [W] Using flock in scalingstack-bos01-s390x 1512s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 
1512s nova [E] nova boot failed (attempt #0): 1512s nova [E] DEBUG (extension:189) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3multifactor = keystoneauth1.loading._plugins.identity.v3:MultiFactor') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') 1512s DEBUG (session:517) REQ: curl -g -i -X GET http://keystone.infra.bos01.scalingstack:5000/v3/ -H "Accept: application/json" -H "User-Agent: nova keystoneauth1/4.0.0 python-requests/2.22.0 CPython/3.8.10" 1512s DEBUG (connectionpool:222) Starting new HTTP connection (1): keystone.infra.bos01.scalingstack:5000 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "GET /v3/ HTTP/1.1" 200 273 1512s DEBUG (session:548) 
RESP: [200] Connection: Keep-Alive Content-Length: 273 Content-Type: application/json Date: Tue, 30 Jul 2024 22:43:14 GMT Keep-Alive: timeout=5, max=100 Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (session:580) RESP BODY: {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://keystone.infra.bos01.scalingstack:5000/v3/", "rel": "self"}]}} 1512s DEBUG (session:946) GET call to http://keystone.infra.bos01.scalingstack:5000/v3/ used request id req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (base:182) Making authentication request to http://keystone.infra.bos01.scalingstack:5000/v3/auth/tokens 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "POST /v3/auth/tokens HTTP/1.1" 201 4363 1512s DEBUG (base:187) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "is_admin_project": false, "project": {"domain": {"id": "default", "name": "Default"}, "id": "3f3b771a247746688951a4c90bf16631", "name": "prod-proposed-migration_project"}, "catalog": [{"endpoints": [{"url": "http://10.189.0.40", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "7d31d2904b56461cb46c735fc00850b0"}, {"url": "http://10.189.0.40", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "931e03b1033c4992ac8d223599983801"}, {"url": "http://10.189.0.40", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "c703b3c5e7224cfd893f622a7def99d7"}], "type": "product-streams", "id": "6723640fcf314f1c84ab92b0b7b7d343", "name": "image-stream"}, {"endpoints": [{"url": "http://neutron-api.infra.bos01.scalingstack:9696", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "13475a253aba4a63883ad9da9631b1d3"}, {"url": "http://10.189.0.22:9696", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "63b2334803a742048e95cd48d39f1674"}, {"url": "http://10.189.0.22:9696", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9d19ce3dbfd544ef90e7694049018957"}], "type": "network", "id": "6a80a28849da43ce9839207bb1e98bfc", "name": "neutron"}, {"endpoints": [{"url": "http://10.189.0.20:5000/v3", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "51d5e1cea07c4644b44a8bf114268a27"}, {"url": "http://10.189.0.20:35357/v3", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "79c780094b2f40e5a70ee3a6353760a0"}, {"url": "http://keystone.infra.bos01.scalingstack:5000/v3", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cdf3486e4a94ca0a181e87bc1ff344f"}], "type": "identity", "id": "ad3a88bc8df3470b938f685304ad3ae9", "name": "keystone"}, {"endpoints": [{"url": "http://nova-api.infra.bos01.scalingstack:8778", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "83e5577919844e47bbf3dffc39f71e5f"}, {"url": "http://10.189.0.23:8778", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": 
"86cd7636126b4214a0c0de3c50936bb9"}, {"url": "http://10.189.0.23:8778", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "eb918cef1bd546fcaafc28133e511d6c"}], "type": "placement", "id": "af7144bdc8404803a159883c31910f75", "name": "placement"}, {"endpoints": [{"url": "http://10.189.0.23:8774/v2.1", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "202b55f38ce646fe8ec9e2b956672f07"}, {"url": "http://10.189.0.23:8774/v2.1", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "b29375d70fd748e699859503279177e3"}, {"url": "http://nova-api.infra.bos01.scalingstack:8774/v2.1", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "ff7b759bc23341fe911fedfc2cd9ae07"}], "type": "compute", "id": "e34360be9bc6484eb95832a381a2d650", "name": "nova"}, {"endpoints": [{"url": "http://glance.infra.bos01.scalingstack:9292", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "0bacddbfbda545f087dab7ef5745707d"}, {"url": "http://10.189.0.19:9292", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "0f69442c439d471b9761ccd46fc6ca2e"}, {"url": "http://10.189.0.19:9292", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cd58aadc9e94eea8783da595c3474f3"}], "type": "image", "id": "f29a943021f34b6682d21957ddc8acac", "name": "glance"}], "expires_at": "2024-07-30T23:43:15.000000Z", "user": {"password_expires_at": null, "domain": {"id": "default", "name": "Default"}, "id": "3afbd64474684647986f8a196316be34", "name": "prod-proposed-migration-s390x"}, "audit_ids": ["x2L1ftY1TzukeWDGxzjX2g"], "issued_at": "2024-07-30T22:43:15.000000Z"}} 1512s REQ: curl -g -i -X GET http://nova-api.infra.bos01.scalingstack:8774/v2.1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}56d61b22fc64d260198a6ec092c20ec243019681a297bf97e15574f039470966" 1512s DEBUG (session:517) REQ: curl -g -i -X GET http://nova-api.infra.bos01.scalingstack:8774/v2.1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}56d61b22fc64d260198a6ec092c20ec243019681a297bf97e15574f039470966" 1512s DEBUG (connectionpool:222) Starting new HTTP connection (1): nova-api.infra.bos01.scalingstack:8774 1512s DEBUG (connectionpool:429) http://nova-api.infra.bos01.scalingstack:8774 "GET /v2.1 HTTP/1.1" 302 0 1512s RESP: [302] Connection: keep-alive Content-Length: 0 Content-Type: text/plain; charset=utf8 Date: Tue, 30 Jul 2024 22:43:15 GMT Location: http://nova-api.infra.bos01.scalingstack:8774/v2.1/ X-Compute-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 X-Openstack-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 1512s DEBUG (session:548) RESP: [302] Connection: keep-alive Content-Length: 0 Content-Type: text/plain; charset=utf8 Date: Tue, 30 Jul 2024 22:43:15 GMT Location: http://nova-api.infra.bos01.scalingstack:8774/v2.1/ X-Compute-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 X-Openstack-Request-Id: req-f2ab9875-cf22-4955-aa0f-4enova [W] Using flock in scalingstack-bos01-s390x 1512s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 
1512s nova [E] nova boot failed (attempt #0): 1512s nova [E] DEBUG (extension:189) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3multifactor = keystoneauth1.loading._plugins.identity.v3:MultiFactor') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') 1512s DEBUG (session:517) REQ: curl -g -i -X GET http://keystone.infra.bos01.scalingstack:5000/v3/ -H "Accept: application/json" -H "User-Agent: nova keystoneauth1/4.0.0 python-requests/2.22.0 CPython/3.8.10" 1512s DEBUG (connectionpool:222) Starting new HTTP connection (1): keystone.infra.bos01.scalingstack:5000 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "GET /v3/ HTTP/1.1" 200 273 1512s DEBUG (session:548) 
RESP: [200] Connection: Keep-Alive Content-Length: 273 Content-Type: application/json Date: Tue, 30 Jul 2024 22:43:14 GMT Keep-Alive: timeout=5, max=100 Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (session:580) RESP BODY: {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://keystone.infra.bos01.scalingstack:5000/v3/", "rel": "self"}]}} 1512s DEBUG (session:946) GET call to http://keystone.infra.bos01.scalingstack:5000/v3/ used request id req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (base:182) Making authentication request to http://keystone.infra.bos01.scalingstack:5000/v3/auth/tokens 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "POST /v3/auth/tokens HTTP/1.1" 201 4363 1512s DEBUG (base:187) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "is_admin_project": false, "project": {"domain": {"id": "default", "name": "Default"}, "id": "3f3b771a247746688951a4c90bf16631", "name": "prod-proposed-migration_project"}, "catalog": [{"endpoints": [{"url": "http://10.189.0.40", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "7d31d2904b56461cb46c735fc00850b0"}, {"url": "http://10.189.0.40", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "931e03b1033c4992ac8d223599983801"}, {"url": "http://10.189.0.40", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "c703b3c5e7224cfd893f622a7def99d7"}], "type": "product-streams", "id": "6723640fcf314f1c84ab92b0b7b7d343", "name": "image-stream"}, {"endpoints": [{"url": "http://neutron-api.infra.bos01.scalingstack:9696", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "13475a253aba4a63883ad9da9631b1d3"}, {"url": "http://10.189.0.22:9696", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "63b2334803a742048e95cd48d39f1674"}, {"url": "http://10.189.0.22:9696", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9d19ce3dbfd544ef90e7694049018957"}], "type": "network", "id": "6a80a28849da43ce9839207bb1e98bfc", "name": "neutron"}, {"endpoints": [{"url": "http://10.189.0.20:5000/v3", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "51d5e1cea07c4644b44a8bf114268a27"}, {"url": "http://10.189.0.20:35357/v3", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "79c780094b2f40e5a70ee3a6353760a0"}, {"url": "http://keystone.infra.bos01.scalingstack:5000/v3", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cdf3486e4a94ca0a181e87bc1ff344f"}], "type": "identity", "id": "ad3a88bc8df3470b938f685304ad3ae9", "name": "keystone"}, {"endpoints": [{"url": "http://nova-api.infra.bos01.scalingstack:8778", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "83e5577919844e47bbf3dffc39f71e5f"}, {"url": "http://10.189.0.23:8778", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": 
"86cd7636126b4214a0c0de3c50936bb9"}, {"url": "http://10.189.0.23:8778", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "eb918cef1bd546fcaafc28133e511d6c"}], "type": "placement", "id": "af7144bdc8404803a159883c31910f75", "name": "placement"}, {"endpoints": [{"url": "http://10.189.0.23:8774/v2.1", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "202b55f38ce646fe8ec9e2b956672f07"}, {"url": "http://10.189.0.23:8774/v2.1", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "b29375d70fd748e699859503279177e3"}, {"url": "http://nova-api.infra.bos01.scalingstack:8774/v2.1", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "ff7b759bc23341fe911fedfc2cd9ae07"}], "type": "compute", "id": "e34360be9bc6484eb95832a381a2d650", "name": "nova"}, {"endpoints": [{"url": "http://glance.infra.bos01.scalingstack:9292", "interface": "nova [W] Using flock in scalingstack-bos01-s390x 1512s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 1512s nova [E] nova boot failed (attempt #0): 1512s nova [E] DEBUG (extension:189) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') 1512s DEBUG (extension:189) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3multifactor = keystoneauth1.loading._plugins.identity.v3:MultiFactor') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') 1512s DEBUG (extension:189) found extension 
EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') 1512s DEBUG (extension:189) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') 1512s DEBUG (session:517) REQ: curl -g -i -X GET http://keystone.infra.bos01.scalingstack:5000/v3/ -H "Accept: application/json" -H "User-Agent: nova keystoneauth1/4.0.0 python-requests/2.22.0 CPython/3.8.10" 1512s DEBUG (connectionpool:222) Starting new HTTP connection (1): keystone.infra.bos01.scalingstack:5000 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "GET /v3/ HTTP/1.1" 200 273 1512s DEBUG (session:548) RESP: [200] Connection: Keep-Alive Content-Length: 273 Content-Type: application/json Date: Tue, 30 Jul 2024 22:43:14 GMT Keep-Alive: timeout=5, max=100 Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (session:580) RESP BODY: {"version": {"status": "stable", "updated": "2018-02-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://keystone.infra.bos01.scalingstack:5000/v3/", "rel": "self"}]}} 1512s DEBUG (session:946) GET call to http://keystone.infra.bos01.scalingstack:5000/v3/ used request id req-75e1da0a-6398-4210-b440-a93c9e0a1215 1512s DEBUG (base:182) Making authentication request to http://keystone.infra.bos01.scalingstack:5000/v3/auth/tokens 1512s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "POST /v3/auth/tokens HTTP/1.1" 201 4363 1512s DEBUG (base:187) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "is_admin_project": false, "project": {"domain": {"id": "default", "name": "Default"}, "id": "3f3b771a247746688951a4c90bf16631", "name": "prod-proposed-migration_project"}, "catalog": [{"endpoints": [{"url": "http://10.189.0.40", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "7d31d2904b56461cb46c735fc00850b0"}, {"url": "http://10.189.0.40", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "931e03b1033c4992ac8d223599983801"}, {"url": "http://10.189.0.40", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "c703b3c5e7224cfd893f622a7def99d7"}], "type": "product-streams", "id": "6723640fcf314f1c84ab92b0b7b7d343", "name": "image-stream"}, {"endpoints": [{"url": "http://neutron-api.infra.bos01.scalingstack:9696", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "13475a253aba4a63883ad9da9631b1d3"}, {"url": "http://10.189.0.22:9696", "interface": "internal", 
"region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "63b2334803a742048e95cd48d39f1674"}, {"url": "http://10.189.0.22:9696", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9d19ce3dbfd544ef90e7694049018957"}], "type": "network", "id": "6a80a28849da43ce9839207bb1e98bfc", "name": "neutron"}, {"endpoints": [{"url": "http://10.189.0.20:5000/v3", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "51d5e1cea07c4644b44a8bf114268a27"}, {"url": "http://10.189.0.20:35357/v3", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "79c780094b2f40e5a70ee3a6353760a0"}, {"url": "http://keystone.infra.bos01.scalingstack:5000/v3", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cdf3486e4a94ca0a181e87bc1ff344f"}], "type": "identity", "id": "ad3a88bc8df3470b938f685304ad3ae9", "name": "keystone"}, {"endpoints": [{"url": "http://nova-api.infra.bos01.scalingstack:8778", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "83e5577919844e47bbf3dffc39f71e5f"}, {"url": "http://10.189.0.23:8778", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "86cd7636126b4214a0c0de3c50936bb9"}, {"url": "http://10.189.0.23:8778", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "eb918cef1bd546fcaafc28133e511d6c"}], "type": "placement", "id": "af7144bdc8404803a159883c31910f75", "name": "placement"}, {"endpoints": [{"url": "http://10.189.0.23:8774/v2.1", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "202b55f38ce646fe8ec9e2b956672f07"}, {"url": "http://10.189.0.23:8774/v2.1", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "b29375d70fd748e699859503279177e3"}, {"url": "http://nova-api.infra.bos01.scalingstack:8774/v2.1", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "ff7b759bc23341fe911fedfc2cd9ae07"}], "type": "compute", "id": "e34360be9bc6484eb95832a381a2d650", "name": "nova"}, {"endpoints": [{"url": "http://glance.infra.bos01.scalingstack:9292", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "0bacddbfbda545f087dab7ef5745707d"}, {"url": "http://10.189.0.19:9292", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "0f69442c439d471b9761ccd46fc6ca2e"}, {"url": "http://10.189.0.19:9292", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cd58aadc9e94eea8783da595c3474f3"}], "type": "image", "id": "f29a943021f34b6682d21957ddc8acac", "name": "glance"}], "expires_at": "2024-07-30T23:43:15.000000Z", "user": {"password_expires_at": null, "domain": {"id": "default", "name": "Default"}, "id": "3afbd64474684647986f8a196316be34", "name": "prod-proposed-migration-s390x"}, "audit_ids": ["x2L1ftY1TzukeWDGxzjX2g"], "issued_at": "2024-07-30T22:43:15.000000Z"}} 1512s REQ: curl -g -i -X GET http://nova-api.infra.bos01.scalingstack:8774/v2.1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}56d61b22fc64d260198a6ec092c20ec243019681a297bf97e15574f039470966" 1512s DEBUG (session:517) REQ: curl -g -i -X GET http://nova-api.infra.bos01.scalingstack:8774/v2.1 -H "Accept: application/json" -H 
"User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}56d61b22fc64d260198a6ec092c20ec243019681a297bf97e15574f039470966" 1512s DEBUG (connectionpool:222) Starting new HTTP connection (1): nova-api.infra.bos01.scalingstack:8774 1512s DEBUG (connectionpool:429) http://nova-api.infra.bos01.scalingstack:8774 "GET /v2.1 HTTP/1.1" 302 0 1512s RESP: [302] Connection: keep-alive Content-Length: 0 Content-Type: text/plain; charset=utf8 Date: Tue, 30 Jul 2024 22:43:15 GMT Location: http://nova-api.infra.bos01.scalingstack:8774/v2.1/ X-Compute-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 X-Openstack-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 1512s DEBUG (session:548) RESP: [302] Connection: keep-alive Content-Length: 0 Content-Type: text/plain; charset=utf8 Date: Tue, 30 Jul 2024 22:43:15 GMT Location: http://nova-api.infra.bos01.scalingstack:8774/v2.1/ X-Compute-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 X-Openstack-Request-Id: req-f2ab9875-cf22-4955-aa0f-4e1cbda58e12 1512s RESP BODY: Omitted, Content-Type is set to text/plain; charset=utf8. Only application/json responses have their bodies logged. 1512s DEBUG (session:580) RESP BODY: Omitted, Content-Type is set to text/plain; charset=utf8. Only application/json responses have their bodies logged. 1512s DEBUG (connectionpool:429) http://nova-api.infra.bos01.scalingstack:8774 "GET /v2.1/ HTTP/1.1" 200 407 1512s RESP: [200] Connection: keep-alive Content-Length: 407 Content-Type: application/json Date: Tue, 30 Jul 2024 22:43:15 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-50506067-01dd-48bb-9889-ce6a951c0ae0 X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-50506067-01dd-48bb-9889-ce6a951c0ae0 1512s DEBUG (session:548) RESP: [200] Connection: keep-alive Content-Length: 407 Content-Type: application/json Date: Tue, 30 Jul 2024 22:43:15 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-50506067-01dd-48bb-9889-ce6a951c0ae0 X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-50506067-01dd-48bb-9889-ce6a951c0ae0 1512s RESP BODY: {"version": {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href": "http://nova-api.infra.bos01.scalingstack:8774/v2.1/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}], "min_version": "2.1", "version": "2.60", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}], "id": "v2.1"}} 1512s DEBUG (session:580) RESP BODY: {"version": {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href": "http://nova-api.infra.bos01.scalingstack:8774/v2.1/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}], "min_version": "2.1", "version": "2.60", "media-types": [{"base": "applicationova [W] Using flock in scalingstack-bos01-s390x 1512s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 
autopkgtest [22:54:55]: testbed dpkg architecture: s390x 1517s autopkgtest [22:54:56]: testbed apt version: 2.9.6 1517s autopkgtest [22:54:56]: @@@@@@@@@@@@@@@@@@@@ test bed setup 1518s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [126 kB] 1519s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [514 kB] 1519s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [8548 B] 1519s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [52.0 kB] 1519s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [6368 B] 1519s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x Packages [73.3 kB] 1519s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x c-n-f Metadata [2112 B] 1519s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x Packages [1368 B] 1519s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x c-n-f Metadata [120 B] 1519s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x Packages [433 kB] 1519s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x c-n-f Metadata [8372 B] 1519s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x Packages [3620 B] 1519s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x c-n-f Metadata [120 B] 1520s Fetched 1229 kB in 1s (1161 kB/s) 1520s Reading package lists... 1523s Reading package lists... 1524s Building dependency tree... 1524s Reading state information... 1524s Calculating upgrade... 1524s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1524s Reading package lists... 1524s Building dependency tree... 1524s Reading state information...
1524s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1525s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 1525s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 1525s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 1525s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 1526s Reading package lists... 1526s Reading package lists... 1526s Building dependency tree... 1526s Reading state information... 1527s Calculating upgrade... 1527s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1527s Reading package lists... 1527s Building dependency tree... 1527s Reading state information... 1527s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1534s Reading package lists... 1534s Building dependency tree... 1534s Reading state information... 1534s Starting pkgProblemResolver with broken count: 0 1534s Starting 2 pkgProblemResolver with broken count: 0 1534s Done 1534s The following additional packages will be installed: 1534s etcd-server fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl 1534s libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl libpq5 1534s libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni 1534s patroni-doc postgresql postgresql-16 postgresql-client-16 1534s postgresql-client-common postgresql-common python3-behave python3-cdiff 1534s python3-click python3-colorama python3-coverage python3-dateutil 1534s python3-dnspython python3-etcd python3-parse python3-parse-type 1534s python3-prettytable python3-psutil python3-psycopg2 python3-six 1534s python3-wcwidth sphinx-rtd-theme-common ssl-cert 1535s Suggested packages: 1535s etcd-client vip-manager haproxy postgresql-doc postgresql-doc-16 1535s python-coverage-doc python3-trio python3-aioquic python3-h2 python3-httpx 1535s python3-httpcore etcd python-psycopg2-doc 1535s Recommended packages: 1535s javascript-common libjson-xs-perl 1535s The following NEW packages will be installed: 1535s autopkgtest-satdep etcd-server fonts-font-awesome fonts-lato libio-pty-perl 1535s libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl 1535s libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni 1535s patroni-doc postgresql postgresql-16 postgresql-client-16 1535s postgresql-client-common postgresql-common python3-behave python3-cdiff 1535s python3-click python3-colorama python3-coverage python3-dateutil 1535s python3-dnspython python3-etcd python3-parse python3-parse-type 1535s python3-prettytable python3-psutil python3-psycopg2 python3-six 1535s python3-wcwidth sphinx-rtd-theme-common ssl-cert 1535s 0 upgraded, 39 newly installed, 0 to remove and 0 not upgraded. 1535s Need to get 33.4 MB/33.4 MB of archives. 1535s After this operation, 111 MB of additional disk space will be used. 
1535s Get:1 /tmp/autopkgtest.qFf46z/2-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [768 B] 1535s Get:2 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-lato all 2.015-1 [2781 kB] 1536s Get:3 http://ftpmaster.internal/ubuntu oracular/main s390x libjson-perl all 4.10000-1 [81.9 kB] 1536s Get:4 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-common all 261 [36.6 kB] 1536s Get:5 http://ftpmaster.internal/ubuntu oracular/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB] 1536s Get:6 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-common all 261 [162 kB] 1536s Get:7 http://ftpmaster.internal/ubuntu oracular/universe s390x etcd-server s390x 3.4.30-1build1 [7777 kB] 1538s Get:8 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 1538s Get:9 http://ftpmaster.internal/ubuntu oracular/main s390x libio-pty-perl s390x 1:1.20-1build2 [31.3 kB] 1538s Get:10 http://ftpmaster.internal/ubuntu oracular/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB] 1538s Get:11 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 1538s Get:12 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 1538s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x libjs-sphinxdoc all 7.3.7-4 [154 kB] 1538s Get:14 http://ftpmaster.internal/ubuntu oracular/main s390x libpq5 s390x 16.3-1 [144 kB] 1538s Get:15 http://ftpmaster.internal/ubuntu oracular/main s390x libtime-duration-perl all 1.21-2 [12.3 kB] 1538s Get:16 http://ftpmaster.internal/ubuntu oracular/main s390x libtimedate-perl all 2.3300-2 [34.0 kB] 1538s Get:17 http://ftpmaster.internal/ubuntu oracular/main s390x libxslt1.1 s390x 1.1.39-0exp1build1 [170 kB] 1538s Get:18 http://ftpmaster.internal/ubuntu oracular/universe s390x moreutils s390x 0.69-1 [57.4 kB] 1538s Get:19 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-cdiff all 1.0-1.1 [16.4 kB] 1538s Get:20 http://ftpmaster.internal/ubuntu oracular/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 1538s Get:21 http://ftpmaster.internal/ubuntu oracular/main s390x python3-click all 8.1.7-2 [79.5 kB] 1538s Get:22 http://ftpmaster.internal/ubuntu oracular/main s390x python3-six all 1.16.0-6 [13.0 kB] 1538s Get:23 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dateutil all 2.9.0-2 [80.3 kB] 1538s Get:24 http://ftpmaster.internal/ubuntu oracular/main s390x python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 1538s Get:25 http://ftpmaster.internal/ubuntu oracular/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 1538s Get:26 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 1538s Get:27 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psycopg2 s390x 2.9.9-1build1 [133 kB] 1538s Get:28 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB] 1538s Get:29 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-etcd all 0.4.5-4 [31.9 kB] 1538s Get:30 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni all 3.3.1-1 [264 kB] 1539s Get:31 http://ftpmaster.internal/ubuntu oracular/main s390x sphinx-rtd-theme-common all 2.0.0+dfsg-2 [1012 kB] 1539s Get:32 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni-doc all 3.3.1-1 [497 kB] 1539s Get:33 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-16 s390x 16.3-1 [1290 kB] 1539s 
Get:34 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-16 s390x 16.3-1 [16.7 MB] 1541s Get:35 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql all 16+261 [11.7 kB] 1541s Get:36 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse all 1.20.2-1 [27.0 kB] 1541s Get:37 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse-type all 0.6.2-1 [22.7 kB] 1541s Get:38 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-behave all 1.2.6-5 [98.4 kB] 1541s Get:39 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 1541s Preconfiguring packages ... 1541s Fetched 33.4 MB in 6s (5210 kB/s) 1541s Selecting previously unselected package fonts-lato. 1541s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 54832 files and directories currently installed.) 1541s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 1541s Unpacking fonts-lato (2.015-1) ... 1542s Selecting previously unselected package libjson-perl. 1542s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ... 1542s Unpacking libjson-perl (4.10000-1) ... 1542s Selecting previously unselected package postgresql-client-common. 1542s Preparing to unpack .../02-postgresql-client-common_261_all.deb ... 1542s Unpacking postgresql-client-common (261) ... 1542s Selecting previously unselected package ssl-cert. 1542s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ... 1542s Unpacking ssl-cert (1.1.2ubuntu2) ... 1542s Selecting previously unselected package postgresql-common. 1542s Preparing to unpack .../04-postgresql-common_261_all.deb ... 1542s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 1542s Unpacking postgresql-common (261) ... 1542s Selecting previously unselected package etcd-server. 1542s Preparing to unpack .../05-etcd-server_3.4.30-1build1_s390x.deb ... 1542s Unpacking etcd-server (3.4.30-1build1) ... 1542s Selecting previously unselected package fonts-font-awesome. 1542s Preparing to unpack .../06-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 1542s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 1542s Selecting previously unselected package libio-pty-perl. 1542s Preparing to unpack .../07-libio-pty-perl_1%3a1.20-1build2_s390x.deb ... 1542s Unpacking libio-pty-perl (1:1.20-1build2) ... 1542s Selecting previously unselected package libipc-run-perl. 1542s Preparing to unpack .../08-libipc-run-perl_20231003.0-2_all.deb ... 1542s Unpacking libipc-run-perl (20231003.0-2) ... 1542s Selecting previously unselected package libjs-jquery. 1542s Preparing to unpack .../09-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 1542s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1542s Selecting previously unselected package libjs-underscore. 1542s Preparing to unpack .../10-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 1542s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 
1542s Selecting previously unselected package libjs-sphinxdoc. 1542s Preparing to unpack .../11-libjs-sphinxdoc_7.3.7-4_all.deb ... 1542s Unpacking libjs-sphinxdoc (7.3.7-4) ... 1542s Selecting previously unselected package libpq5:s390x. 1542s Preparing to unpack .../12-libpq5_16.3-1_s390x.deb ... 1542s Unpacking libpq5:s390x (16.3-1) ... 1542s Selecting previously unselected package libtime-duration-perl. 1542s Preparing to unpack .../13-libtime-duration-perl_1.21-2_all.deb ... 1542s Unpacking libtime-duration-perl (1.21-2) ... 1542s Selecting previously unselected package libtimedate-perl. 1542s Preparing to unpack .../14-libtimedate-perl_2.3300-2_all.deb ... 1542s Unpacking libtimedate-perl (2.3300-2) ... 1542s Selecting previously unselected package libxslt1.1:s390x. 1542s Preparing to unpack .../15-libxslt1.1_1.1.39-0exp1build1_s390x.deb ... 1542s Unpacking libxslt1.1:s390x (1.1.39-0exp1build1) ... 1542s Selecting previously unselected package moreutils. 1542s Preparing to unpack .../16-moreutils_0.69-1_s390x.deb ... 1542s Unpacking moreutils (0.69-1) ... 1542s Selecting previously unselected package python3-cdiff. 1542s Preparing to unpack .../17-python3-cdiff_1.0-1.1_all.deb ... 1542s Unpacking python3-cdiff (1.0-1.1) ... 1542s Selecting previously unselected package python3-colorama. 1542s Preparing to unpack .../18-python3-colorama_0.4.6-4_all.deb ... 1542s Unpacking python3-colorama (0.4.6-4) ... 1542s Selecting previously unselected package python3-click. 1542s Preparing to unpack .../19-python3-click_8.1.7-2_all.deb ... 1542s Unpacking python3-click (8.1.7-2) ... 1542s Selecting previously unselected package python3-six. 1542s Preparing to unpack .../20-python3-six_1.16.0-6_all.deb ... 1542s Unpacking python3-six (1.16.0-6) ... 1542s Selecting previously unselected package python3-dateutil. 1542s Preparing to unpack .../21-python3-dateutil_2.9.0-2_all.deb ... 1542s Unpacking python3-dateutil (2.9.0-2) ... 1542s Selecting previously unselected package python3-wcwidth. 1542s Preparing to unpack .../22-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 1542s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1542s Selecting previously unselected package python3-prettytable. 1542s Preparing to unpack .../23-python3-prettytable_3.10.1-1_all.deb ... 1542s Unpacking python3-prettytable (3.10.1-1) ... 1542s Selecting previously unselected package python3-psutil. 1542s Preparing to unpack .../24-python3-psutil_5.9.8-2build2_s390x.deb ... 1542s Unpacking python3-psutil (5.9.8-2build2) ... 1542s Selecting previously unselected package python3-psycopg2. 1542s Preparing to unpack .../25-python3-psycopg2_2.9.9-1build1_s390x.deb ... 1542s Unpacking python3-psycopg2 (2.9.9-1build1) ... 1542s Selecting previously unselected package python3-dnspython. 1542s Preparing to unpack .../26-python3-dnspython_2.6.1-1ubuntu1_all.deb ... 1542s Unpacking python3-dnspython (2.6.1-1ubuntu1) ... 1543s Selecting previously unselected package python3-etcd. 1543s Preparing to unpack .../27-python3-etcd_0.4.5-4_all.deb ... 1543s Unpacking python3-etcd (0.4.5-4) ... 1543s Selecting previously unselected package patroni. 1543s Preparing to unpack .../28-patroni_3.3.1-1_all.deb ... 1543s Unpacking patroni (3.3.1-1) ... 1543s Selecting previously unselected package sphinx-rtd-theme-common. 1543s Preparing to unpack .../29-sphinx-rtd-theme-common_2.0.0+dfsg-2_all.deb ... 1543s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 1543s Selecting previously unselected package patroni-doc. 
1543s Preparing to unpack .../30-patroni-doc_3.3.1-1_all.deb ... 1543s Unpacking patroni-doc (3.3.1-1) ... 1543s Selecting previously unselected package postgresql-client-16. 1543s Preparing to unpack .../31-postgresql-client-16_16.3-1_s390x.deb ... 1543s Unpacking postgresql-client-16 (16.3-1) ... 1543s Selecting previously unselected package postgresql-16. 1543s Preparing to unpack .../32-postgresql-16_16.3-1_s390x.deb ... 1543s Unpacking postgresql-16 (16.3-1) ... 1543s Selecting previously unselected package postgresql. 1543s Preparing to unpack .../33-postgresql_16+261_all.deb ... 1543s Unpacking postgresql (16+261) ... 1543s Selecting previously unselected package python3-parse. 1543s Preparing to unpack .../34-python3-parse_1.20.2-1_all.deb ... 1543s Unpacking python3-parse (1.20.2-1) ... 1543s Selecting previously unselected package python3-parse-type. 1543s Preparing to unpack .../35-python3-parse-type_0.6.2-1_all.deb ... 1543s Unpacking python3-parse-type (0.6.2-1) ... 1543s Selecting previously unselected package python3-behave. 1543s Preparing to unpack .../36-python3-behave_1.2.6-5_all.deb ... 1543s Unpacking python3-behave (1.2.6-5) ... 1545s Selecting previously unselected package python3-coverage. 1545s Preparing to unpack .../37-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ... 1545s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1545s Selecting previously unselected package autopkgtest-satdep. 1545s Preparing to unpack .../38-2-autopkgtest-satdep.deb ... 1545s Unpacking autopkgtest-satdep (0) ... 1545s Setting up postgresql-client-common (261) ... 1545s Setting up fonts-lato (2.015-1) ... 1545s Setting up libio-pty-perl (1:1.20-1build2) ... 1545s Setting up python3-colorama (0.4.6-4) ... 1545s Setting up python3-cdiff (1.0-1.1) ... 1545s Setting up libpq5:s390x (16.3-1) ... 1545s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1545s Setting up python3-click (8.1.7-2) ... 1545s Setting up python3-psutil (5.9.8-2build2) ... 1545s Setting up python3-six (1.16.0-6) ... 1545s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1545s Setting up ssl-cert (1.1.2ubuntu2) ... 1545s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'. 1545s Setting up python3-psycopg2 (2.9.9-1build1) ... 1546s Setting up libipc-run-perl (20231003.0-2) ... 1546s Setting up libtime-duration-perl (1.21-2) ... 1546s Setting up libtimedate-perl (2.3300-2) ... 1546s Setting up python3-dnspython (2.6.1-1ubuntu1) ... 1546s Setting up python3-parse (1.20.2-1) ... 1546s Setting up libjson-perl (4.10000-1) ... 1546s Setting up libxslt1.1:s390x (1.1.39-0exp1build1) ... 1546s Setting up python3-dateutil (2.9.0-2) ... 1546s Setting up etcd-server (3.4.30-1build1) ... 1546s info: Selecting UID from range 100 to 999 ... 1546s 1546s info: Selecting GID from range 100 to 999 ... 1546s info: Adding system user `etcd' (UID 107) ... 1546s info: Adding new group `etcd' (GID 113) ... 1546s info: Adding new user `etcd' (UID 107) with group `etcd' ... 1546s info: Creating home directory `/var/lib/etcd/' ... 1547s Created symlink '/etc/systemd/system/etcd2.service' → '/usr/lib/systemd/system/etcd.service'. 1547s Created symlink '/etc/systemd/system/multi-user.target.wants/etcd.service' → '/usr/lib/systemd/system/etcd.service'. 1547s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1547s Setting up python3-prettytable (3.10.1-1) ... 1547s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 
1547s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 1547s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 1547s Setting up moreutils (0.69-1) ... 1547s Setting up python3-etcd (0.4.5-4) ... 1548s Setting up postgresql-client-16 (16.3-1) ... 1548s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 1548s Setting up python3-parse-type (0.6.2-1) ... 1548s Setting up postgresql-common (261) ... 1548s 1548s Creating config file /etc/postgresql-common/createcluster.conf with new version 1548s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 1548s Removing obsolete dictionary files: 1549s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'. 1550s Setting up libjs-sphinxdoc (7.3.7-4) ... 1550s Setting up python3-behave (1.2.6-5) ... 1550s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\[' 1550s _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE) 1550s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d' 1550s """Registers a custom type that will be available to "parse" 1550s Setting up patroni (3.3.1-1) ... 1550s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'. 1551s Setting up postgresql-16 (16.3-1) ... 1551s Creating new PostgreSQL cluster 16/main ... 1551s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions 1551s The files belonging to this database system will be owned by user "postgres". 1551s This user must also own the server process. 1551s 1551s The database cluster will be initialized with locale "C.UTF-8". 1551s The default database encoding has accordingly been set to "UTF8". 1551s The default text search configuration will be set to "english". 1551s 1551s Data page checksums are disabled. 1551s 1551s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok 1551s creating subdirectories ... ok 1551s selecting dynamic shared memory implementation ... posix 1551s selecting default max_connections ... 100 1551s selecting default shared_buffers ... 128MB 1551s selecting default time zone ... Etc/UTC 1551s creating configuration files ... ok 1551s running bootstrap script ... ok 1552s performing post-bootstrap initialization ... ok 1552s syncing data to disk ... ok 1556s Setting up patroni-doc (3.3.1-1) ... 1556s Setting up postgresql (16+261) ... 1556s Setting up autopkgtest-satdep (0) ... 1556s Processing triggers for man-db (2.12.1-2) ... 1557s Processing triggers for libc-bin (2.39-0ubuntu9) ... 1569s (Reading database ... 58232 files and directories currently installed.) 1569s Removing autopkgtest-satdep (0) ... 
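The cluster creation logged above comes down to the single initdb invocation that postgresql-common prints; a minimal Python sketch of the same call (flags copied from the log output above, and assuming it is executed as the postgres user against an empty data directory) is:

    import subprocess

    # The initdb call printed in the log above; initdb refuses to run as root,
    # so this has to execute as the postgres user.
    subprocess.run(
        [
            '/usr/lib/postgresql/16/bin/initdb',
            '-D', '/var/lib/postgresql/16/main',
            '--auth-local', 'peer',
            '--auth-host', 'scram-sha-256',
            '--no-instructions',
        ],
        check=True,
    )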
1585s autopkgtest [22:56:04]: test acceptance-etcd-basic: debian/tests/acceptance etcd features/basic_replication.feature 1585s autopkgtest [22:56:04]: test acceptance-etcd-basic: [----------------------- 1586s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation) 1586s ○ etcd.service - etcd - highly-available key value store 1586s Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; preset: enabled) 1586s Active: inactive (dead) since Tue 2024-07-30 22:56:05 UTC; 10ms ago 1586s Duration: 39.230s 1586s Invocation: 942eca1738c54e319f8093542866c5df 1586s Docs: https://etcd.io/docs 1586s man:etcd 1586s Process: 2392 ExecStart=/usr/bin/etcd $DAEMON_ARGS (code=killed, signal=TERM) 1586s Main PID: 2392 (code=killed, signal=TERM) 1586s 1586s Jul 30 22:55:26 autopkgtest systemd[1]: Started etcd.service - etcd - highly-available key value store. 1586s Jul 30 22:55:26 autopkgtest etcd[2392]: set the initial cluster version to 3.4 1586s Jul 30 22:55:26 autopkgtest etcd[2392]: enabled capabilities for version 3.4 1586s Jul 30 22:56:05 autopkgtest etcd[2392]: received terminated signal, shutting down... 1586s Jul 30 22:56:05 autopkgtest etcd[2392]: stopping insecure grpc server due to error: accept tcp 127.0.0.1:2379: use of closed network connection 1586s Jul 30 22:56:05 autopkgtest systemd[1]: Stopping etcd.service - etcd - highly-available key value store... 1586s Jul 30 22:56:05 autopkgtest etcd[2392]: stopped insecure grpc server due to error: accept tcp 127.0.0.1:2379: use of closed network connection 1586s Jul 30 22:56:05 autopkgtest etcd[2392]: skipped leadership transfer for single voting member cluster 1586s Jul 30 22:56:05 autopkgtest systemd[1]: etcd.service: Deactivated successfully. 1586s Jul 30 22:56:05 autopkgtest systemd[1]: Stopped etcd.service - etcd - highly-available key value store. 1586s ++ ls -1r /usr/lib/postgresql/ 1586s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/) 1586s + '[' 16 == 10 -o 16 == 11 ']' 1586s + echo '### PostgreSQL 16 acceptance-etcd features/basic_replication.feature ###' 1586s ### PostgreSQL 16 acceptance-etcd features/basic_replication.feature ### 1586s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave features/basic_replication.feature | ts' 1588s Jul 30 22:56:07 Feature: basic replication # features/basic_replication.feature:1 1588s Jul 30 22:56:07 We should check that the basic bootstrapping, replication and failover works. 
1588s Jul 30 22:56:07 Scenario: check replication of a single table # features/basic_replication.feature:4 1588s Jul 30 22:56:07 Given I start postgres0 # features/steps/basic_replication.py:8 1592s Jul 30 22:56:11 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1593s Jul 30 22:56:12 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 1593s Jul 30 22:56:12 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71 1593s Jul 30 22:56:12 Then I receive a response code 200 # features/steps/patroni_api.py:98 1593s Jul 30 22:56:12 When I start postgres1 # features/steps/basic_replication.py:8 1598s Jul 30 22:56:16 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7 1602s Jul 30 22:56:21 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23 1602s Jul 30 22:56:21 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 1602s Jul 30 22:56:21 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1603s Jul 30 22:56:22 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 1603s Jul 30 22:56:22 1603s Jul 30 22:56:22 Scenario: check restart of sync replica # features/basic_replication.feature:17 1603s Jul 30 22:56:22 Given I shut down postgres2 # features/steps/basic_replication.py:29 1604s Jul 30 22:56:23 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23 1604s Jul 30 22:56:23 When I start postgres2 # features/steps/basic_replication.py:8 1607s Jul 30 22:56:26 And I shut down postgres1 # features/steps/basic_replication.py:29 1610s Jul 30 22:56:29 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1611s Jul 30 22:56:30 When I start postgres1 # features/steps/basic_replication.py:8 1614s Jul 30 22:56:33 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1615s Jul 30 22:56:34 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1615s Jul 30 22:56:34 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1615s Jul 30 22:56:34 1615s Jul 30 22:56:34 Scenario: check stuck sync replica # features/basic_replication.feature:28 1615s Jul 30 22:56:34 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71 1615s Jul 30 22:56:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 1615s Jul 30 22:56:34 And I create table on postgres0 # features/steps/basic_replication.py:73 1615s Jul 30 22:56:34 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93 1616s Jul 30 22:56:35 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93 1616s Jul 30 22:56:35 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64 1616s Jul 30 22:56:35 And I load data on postgres0 # features/steps/basic_replication.py:84 1617s Jul 30 22:56:36 Then "sync" key in DCS has 
sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23 1620s Jul 30 22:56:39 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64 1620s Jul 30 22:56:39 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1620s Jul 30 22:56:39 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1620s Jul 30 22:56:39 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 1620s Jul 30 22:56:39 Then I receive a response code 200 # features/steps/patroni_api.py:98 1620s Jul 30 22:56:39 And I drop table on postgres0 # features/steps/basic_replication.py:73 1620s Jul 30 22:56:39 1620s Jul 30 22:56:39 Scenario: check multi sync replication # features/basic_replication.feature:44 1620s Jul 30 22:56:39 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 1620s Jul 30 22:56:39 Then I receive a response code 200 # features/steps/patroni_api.py:98 1620s Jul 30 22:56:39 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1624s Jul 30 22:56:43 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1624s Jul 30 22:56:43 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1624s Jul 30 22:56:43 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 1624s Jul 30 22:56:43 Then I receive a response code 200 # features/steps/patroni_api.py:98 1624s Jul 30 22:56:43 And I shut down postgres1 # features/steps/basic_replication.py:29 1627s Jul 30 22:56:46 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1628s Jul 30 22:56:47 When I start postgres1 # features/steps/basic_replication.py:8 1632s Jul 30 22:56:51 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1632s Jul 30 22:56:51 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1632s Jul 30 22:56:51 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1632s Jul 30 22:56:51 1632s Jul 30 22:56:51 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 1632s Jul 30 22:56:51 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 1640s Jul 30 22:56:53 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1640s Jul 30 22:56:53 When I sleep for 2 seconds # features/steps/patroni_api.py:39 1640s Jul 30 22:56:55 And I shut down postgres0 # features/steps/basic_replication.py:29 1640s Jul 30 22:56:56 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 1640s Jul 30 22:56:58 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1640s Jul 30 22:56:58 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 1658s Jul 30 22:57:17 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 1661s Jul 
30 22:57:20 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 1661s Jul 30 22:57:20 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 1661s Jul 30 22:57:20 Then I receive a response code 200 # features/steps/patroni_api.py:98 1661s Jul 30 22:57:20 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 1661s Jul 30 22:57:20 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1664s Jul 30 22:57:23 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 1664s Jul 30 22:57:23 1664s Jul 30 22:57:23 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 1664s Jul 30 22:57:23 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 1664s Jul 30 22:57:23 And I start postgres0 # features/steps/basic_replication.py:8 1664s Jul 30 22:57:23 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1668s Jul 30 22:57:27 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 1668s Jul 30 22:57:27 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 1681s Jul 30 22:57:40 1681s Jul 30 22:57:40 @reject-duplicate-name 1681s Jul 30 22:57:40 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 1681s Jul 30 22:57:40 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 1683s Jul 30 22:57:42 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 1687s Jul 30 22:57:46 1687s Failed to get list of machines from http://[::1]:2379/v2: MaxRetryError("HTTPConnectionPool(host='::1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 1688s Failed to get list of machines from http://[::1]:2379/v2: MaxRetryError("HTTPConnectionPool(host='::1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 1688s Failed to get list of machines from http://127.0.0.1:2379/v2: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 1688s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4611.XLyaKWSx 1688s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4655.XYnykZpx 1688s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4701.XAWCiBox 1688s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4747.XNmQhMWx 1688s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4791.XzCGtayx 1689s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4863.XmJvLCux 1689s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4910.XCrFksqx 1689s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4913.XhlhKEHx 1689s Jul 30 22:57:47 Combined data file .coverage.autopkgtest.4996.XcIdyGqx 1689s Jul 30 22:57:47 
Combined data file .coverage.autopkgtest.5096.XcpvENMx 1691s Jul 30 22:57:50 Name Stmts Miss Cover 1691s Jul 30 22:57:50 ------------------------------------------------------------------------------------------------------------- 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1072 602 44% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1225 982 20% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 1691s Jul 30 
22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 688 15% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 124 23% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 629 21% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39% 1691s Jul 30 22:57:50 
/usr/lib/python3/dist-packages/dns/enum.py 72 46 36% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/message.py 809 662 18% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/name.py 620 427 31% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/node.py 118 71 40% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/query.py 536 462 14% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/serial.py 93 79 15% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/set.py 149 108 28% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/version.py 7 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87% 1691s Jul 30 22:57:50 
/usr/lib/python3/dist-packages/etcd/__init__.py 125 27 78% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/etcd/client.py 380 195 49% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100% 1691s Ju+ echo '### End 16 acceptance-etcd features/basic_replication.feature ###' 1691s + rm -f '/tmp/pgpass?' 1691s ++ id -u 1691s + '[' 0 -eq 0 ']' 1691s + '[' -x /etc/init.d/zookeeper ']' 1691s l 30 22:57:50 /usr/lib/python3/dist-packages/idna/core.py 293 258 12% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/__main__.py 199 67 66% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/api.py 770 430 44% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 19 80% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/config.py 371 110 70% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/daemon.py 76 6 92% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 149 77% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 180 70% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 9 79% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/global_config.py 81 4 95% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/ha.py 1244 616 50% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/log.py 219 71 68% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 239 71% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 91 64% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 256 69% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 7 91% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 13 68% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 12 87% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 92 46% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 200 52% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 174 48% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 1691s Jul 30 22:57:50 
/usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/tags.py 38 5 87% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/utils.py 350 140 60% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/validator.py 301 211 30% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 49 76% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 50 63% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psutil/__init__.py 951 636 33% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psutil/_compat.py 302 264 13% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 936 25% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 41 57% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/six.py 504 250 50% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 100 57% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 11 79% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/connection.py 324 100 69% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 130 63% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 101 61% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/response.py 562 318 43% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 55 68% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56% 1691s Jul 30 22:57:50 
/usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 68 67% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 1691s Jul 30 22:57:50 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 1691s Jul 30 22:57:50 patroni/__init__.py 13 2 85% 1691s Jul 30 22:57:50 patroni/__main__.py 199 199 0% 1691s Jul 30 22:57:50 patroni/api.py 770 770 0% 1691s Jul 30 22:57:50 patroni/async_executor.py 96 69 28% 1691s Jul 30 22:57:50 patroni/collections.py 56 15 73% 1691s Jul 30 22:57:50 patroni/config.py 371 196 47% 1691s Jul 30 22:57:50 patroni/config_generator.py 212 212 0% 1691s Jul 30 22:57:50 patroni/ctl.py 936 663 29% 1691s Jul 30 22:57:50 patroni/daemon.py 76 76 0% 1691s Jul 30 22:57:50 patroni/dcs/__init__.py 646 308 52% 1691s Jul 30 22:57:50 patroni/dcs/consul.py 485 485 0% 1691s Jul 30 22:57:50 patroni/dcs/etcd3.py 679 679 0% 1691s Jul 30 22:57:50 patroni/dcs/etcd.py 603 232 62% 1691s Jul 30 22:57:50 patroni/dcs/exhibitor.py 61 61 0% 1691s Jul 30 22:57:50 patroni/dcs/kubernetes.py 938 938 0% 1691s Jul 30 22:57:50 patroni/dcs/raft.py 319 319 0% 1691s Jul 30 22:57:50 patroni/dcs/zookeeper.py 288 288 0% 1691s Jul 30 22:57:50 patroni/dynamic_loader.py 35 7 80% 1691s Jul 30 22:57:50 patroni/exceptions.py 16 1 94% 1691s Jul 30 22:57:50 patroni/file_perm.py 43 15 65% 1691s Jul 30 22:57:50 patroni/global_config.py 81 23 72% 1691s Jul 30 22:57:50 patroni/ha.py 1244 1244 0% 1691s Jul 30 22:57:50 patroni/log.py 219 173 21% 1691s Jul 30 22:57:50 patroni/postgresql/__init__.py 821 651 21% 1691s Jul 30 22:57:50 patroni/postgresql/available_parameters/__init__.py 21 3 86% 1691s Jul 30 22:57:50 patroni/postgresql/bootstrap.py 252 222 12% 1691s Jul 30 22:57:50 patroni/postgresql/callback_executor.py 55 34 38% 1691s Jul 30 22:57:50 patroni/postgresql/cancellable.py 104 84 19% 1691s Jul 30 22:57:50 patroni/postgresql/config.py 813 698 14% 1691s Jul 30 22:57:50 
patroni/postgresql/connection.py 75 50 33% 1691s Jul 30 22:57:50 patroni/postgresql/misc.py 41 29 29% 1691s Jul 30 22:57:50 patroni/postgresql/mpp/__init__.py 89 21 76% 1691s Jul 30 22:57:50 patroni/postgresql/mpp/citus.py 259 259 0% 1691s Jul 30 22:57:50 patroni/postgresql/postmaster.py 170 139 18% 1691s Jul 30 22:57:50 patroni/postgresql/rewind.py 416 416 0% 1691s Jul 30 22:57:50 patroni/postgresql/slots.py 334 285 15% 1691s Jul 30 22:57:50 patroni/postgresql/sync.py 130 96 26% 1691s Jul 30 22:57:50 patroni/postgresql/validator.py 157 52 67% 1691s Jul 30 22:57:50 patroni/psycopg.py 42 28 33% 1691s Jul 30 22:57:50 patroni/raft_controller.py 22 22 0% 1691s Jul 30 22:57:50 patroni/request.py 62 6 90% 1691s Jul 30 22:57:50 patroni/scripts/__init__.py 0 0 100% 1691s Jul 30 22:57:50 patroni/scripts/aws.py 59 59 0% 1691s Jul 30 22:57:50 patroni/scripts/barman/__init__.py 0 0 100% 1691s Jul 30 22:57:50 patroni/scripts/barman/cli.py 51 51 0% 1691s Jul 30 22:57:50 patroni/scripts/barman/config_switch.py 51 51 0% 1691s Jul 30 22:57:50 patroni/scripts/barman/recover.py 37 37 0% 1691s Jul 30 22:57:50 patroni/scripts/barman/utils.py 94 94 0% 1691s Jul 30 22:57:50 patroni/scripts/wale_restore.py 207 207 0% 1691s Jul 30 22:57:50 patroni/tags.py 38 15 61% 1691s Jul 30 22:57:50 patroni/utils.py 350 246 30% 1691s Jul 30 22:57:50 patroni/validator.py 301 215 29% 1691s Jul 30 22:57:50 patroni/version.py 1 0 100% 1691s Jul 30 22:57:50 patroni/watchdog/__init__.py 2 2 0% 1691s Jul 30 22:57:50 patroni/watchdog/base.py 203 203 0% 1691s Jul 30 22:57:50 patroni/watchdog/linux.py 135 135 0% 1691s Jul 30 22:57:50 ------------------------------------------------------------------------------------------------------------- 1691s Jul 30 22:57:50 TOTAL 53177 33958 36% 1691s Jul 30 22:57:50 1 feature passed, 0 failed, 0 skipped 1691s Jul 30 22:57:50 7 scenarios passed, 0 failed, 0 skipped 1691s Jul 30 22:57:50 68 steps passed, 0 failed, 0 skipped, 0 undefined 1691s Jul 30 22:57:50 Took 1m34.658s 1691s ### End 16 acceptance-etcd features/basic_replication.feature ### 1692s autopkgtest [22:57:51]: test acceptance-etcd-basic: -----------------------] 1694s autopkgtest [22:57:53]: test acceptance-etcd-basic: - - - - - - - - - - results - - - - - - - - - - 1694s acceptance-etcd-basic PASS 1699s autopkgtest [22:57:58]: test acceptance-etcd: preparing testbed 1709s Reading package lists... 1709s Building dependency tree... 1709s Reading state information... 1710s Starting pkgProblemResolver with broken count: 0 1710s Starting 2 pkgProblemResolver with broken count: 0 1710s Done 1710s The following NEW packages will be installed: 1710s autopkgtest-satdep 1710s 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 1710s Need to get 0 B/768 B of archives. 1710s After this operation, 0 B of additional disk space will be used. 1710s Get:1 /tmp/autopkgtest.qFf46z/3-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [768 B] 1710s Selecting previously unselected package autopkgtest-satdep. 1710s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 
100% (Reading database ... 58232 files and directories currently installed.) 1710s Preparing to unpack .../3-autopkgtest-satdep.deb ... 1710s Unpacking autopkgtest-satdep (0) ... 1710s Setting up autopkgtest-satdep (0) ... 1712s (Reading database ... 58232 files and directories currently installed.) 1712s Removing autopkgtest-satdep (0) ... 1722s autopkgtest [22:58:21]: test acceptance-etcd: debian/tests/acceptance etcd 1722s autopkgtest [22:58:21]: test acceptance-etcd: [----------------------- 1722s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation) 1723s ○ etcd.service - etcd - highly-available key value store 1723s Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; preset: enabled) 1723s Active: inactive (dead) since Tue 2024-07-30 22:56:05 UTC; 2min 16s ago 1723s Duration: 39.230s 1723s Invocation: 942eca1738c54e319f8093542866c5df 1723s Docs: https://etcd.io/docs 1723s man:etcd 1723s Process: 2392 ExecStart=/usr/bin/etcd $DAEMON_ARGS (code=killed, signal=TERM) 1723s Main PID: 2392 (code=killed, signal=TERM) 1723s 1723s Jul 30 22:55:26 autopkgtest systemd[1]: Started etcd.service - etcd - highly-available key value store. 1723s Jul 30 22:55:26 autopkgtest etcd[2392]: set the initial cluster version to 3.4 1723s Jul 30 22:55:26 autopkgtest etcd[2392]: enabled capabilities for version 3.4 1723s Jul 30 22:56:05 autopkgtest etcd[2392]: received terminated signal, shutting down... 1723s Jul 30 22:56:05 autopkgtest etcd[2392]: stopping insecure grpc server due to error: accept tcp 127.0.0.1:2379: use of closed network connection 1723s Jul 30 22:56:05 autopkgtest systemd[1]: Stopping etcd.service - etcd - highly-available key value store... 1723s Jul 30 22:56:05 autopkgtest etcd[2392]: stopped insecure grpc server due to error: accept tcp 127.0.0.1:2379: use of closed network connection 1723s Jul 30 22:56:05 autopkgtest etcd[2392]: skipped leadership transfer for single voting member cluster 1723s Jul 30 22:56:05 autopkgtest systemd[1]: etcd.service: Deactivated successfully. 1723s Jul 30 22:56:05 autopkgtest systemd[1]: Stopped etcd.service - etcd - highly-available key value store. 1723s ++ ls -1r /usr/lib/postgresql/ 1723s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/) 1723s + '[' 16 == 10 -o 16 == 11 ']' 1723s + echo '### PostgreSQL 16 acceptance-etcd ###' 1723s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=etcd PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts' 1723s ### PostgreSQL 16 acceptance-etcd ### 1725s Jul 30 22:58:24 Feature: basic replication # features/basic_replication.feature:1 1725s Jul 30 22:58:24 We should check that the basic bootstrapping, replication and failover works. 
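As in the first run, several of the steps below reconfigure the cluster by PATCHing Patroni's /config endpoint (for example {"ttl": 20, "synchronous_mode": true}) and then assert a 200 response. A bare-bones sketch of that request with the standard library, assuming the leader's REST API is on 127.0.0.1:8008 as in this log, is:

    import json
    import urllib.request

    def patch_patroni_config(changes, url='http://127.0.0.1:8008/config'):
        # PATCH the dynamic configuration; Patroni merges the JSON document
        # into the cluster config stored in the DCS and replies 200 on success.
        req = urllib.request.Request(
            url,
            data=json.dumps(changes).encode(),
            headers={'Content-Type': 'application/json'},
            method='PATCH',
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.getcode()

    # e.g. the first PATCH in the scenario below:
    # patch_patroni_config({'ttl': 20, 'synchronous_mode': True})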
1725s Jul 30 22:58:24 Scenario: check replication of a single table # features/basic_replication.feature:4 1725s Jul 30 22:58:24 Given I start postgres0 # features/steps/basic_replication.py:8 1728s Jul 30 22:58:27 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1729s Jul 30 22:58:28 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 1729s Jul 30 22:58:28 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71 1729s Jul 30 22:58:28 Then I receive a response code 200 # features/steps/patroni_api.py:98 1729s Jul 30 22:58:28 When I start postgres1 # features/steps/basic_replication.py:8 1733s Jul 30 22:58:32 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7 1737s Jul 30 22:58:36 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23 1737s Jul 30 22:58:36 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 1737s Jul 30 22:58:36 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1737s Jul 30 22:58:36 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 1737s Jul 30 22:58:36 1737s Jul 30 22:58:36 Scenario: check restart of sync replica # features/basic_replication.feature:17 1737s Jul 30 22:58:36 Given I shut down postgres2 # features/steps/basic_replication.py:29 1738s Jul 30 22:58:37 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23 1738s Jul 30 22:58:37 When I start postgres2 # features/steps/basic_replication.py:8 1741s Jul 30 22:58:40 And I shut down postgres1 # features/steps/basic_replication.py:29 1744s Jul 30 22:58:43 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1744s Jul 30 22:58:43 When I start postgres1 # features/steps/basic_replication.py:8 1747s Jul 30 22:58:46 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1748s Jul 30 22:58:47 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1748s Jul 30 22:58:47 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1748s Jul 30 22:58:47 1748s Jul 30 22:58:47 Scenario: check stuck sync replica # features/basic_replication.feature:28 1748s Jul 30 22:58:47 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71 1748s Jul 30 22:58:47 Then I receive a response code 200 # features/steps/patroni_api.py:98 1748s Jul 30 22:58:47 And I create table on postgres0 # features/steps/basic_replication.py:73 1748s Jul 30 22:58:47 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93 1749s Jul 30 22:58:48 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93 1749s Jul 30 22:58:48 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64 1749s Jul 30 22:58:48 And I load data on postgres0 # features/steps/basic_replication.py:84 1750s Jul 30 22:58:49 Then "sync" key in DCS has 
sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23 1753s Jul 30 22:58:52 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64 1753s Jul 30 22:58:52 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1754s Jul 30 22:58:53 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1754s Jul 30 22:58:53 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 1754s Jul 30 22:58:53 Then I receive a response code 200 # features/steps/patroni_api.py:98 1754s Jul 30 22:58:53 And I drop table on postgres0 # features/steps/basic_replication.py:73 1754s Jul 30 22:58:53 1754s Jul 30 22:58:53 Scenario: check multi sync replication # features/basic_replication.feature:44 1754s Jul 30 22:58:53 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 1755s Jul 30 22:58:54 Then I receive a response code 200 # features/steps/patroni_api.py:98 1755s Jul 30 22:58:54 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1759s Jul 30 22:58:58 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1759s Jul 30 22:58:58 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1759s Jul 30 22:58:58 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 1759s Jul 30 22:58:58 Then I receive a response code 200 # features/steps/patroni_api.py:98 1759s Jul 30 22:58:58 And I shut down postgres1 # features/steps/basic_replication.py:29 1762s Jul 30 22:59:01 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 1763s Jul 30 22:59:02 When I start postgres1 # features/steps/basic_replication.py:8 1767s Jul 30 22:59:06 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1767s Jul 30 22:59:06 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 1767s Jul 30 22:59:06 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 1767s Jul 30 22:59:06 1767s Jul 30 22:59:06 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 1767s Jul 30 22:59:06 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 1769s Jul 30 22:59:08 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1769s Jul 30 22:59:08 When I sleep for 2 seconds # features/steps/patroni_api.py:39 1771s Jul 30 22:59:10 And I shut down postgres0 # features/steps/basic_replication.py:29 1772s Jul 30 22:59:11 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 1774s Jul 30 22:59:13 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 1774s Jul 30 22:59:13 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 1793s Jul 30 22:59:32 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 1795s Jul 
30 22:59:34 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 1795s Jul 30 22:59:34 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 1795s Jul 30 22:59:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 1795s Jul 30 22:59:34 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 1795s Jul 30 22:59:34 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 1798s Jul 30 22:59:37 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 1798s Jul 30 22:59:37 1798s Jul 30 22:59:37 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 1798s Jul 30 22:59:37 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 1798s Jul 30 22:59:37 And I start postgres0 # features/steps/basic_replication.py:8 1801s Jul 30 22:59:37 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1802s Jul 30 22:59:41 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 1802s Jul 30 22:59:41 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 1807s Jul 30 22:59:45 1807s Jul 30 22:59:45 @reject-duplicate-name 1807s Jul 30 22:59:45 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 1807s Jul 30 22:59:45 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 1809s Jul 30 22:59:47 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 1813s Jul 30 22:59:52 1813s Jul 30 22:59:52 Feature: cascading replication # features/cascading_replication.feature:1 1813s Jul 30 22:59:52 We should check that patroni can do base backup and streaming from the replica 1813s Jul 30 22:59:52 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4 1813s Jul 30 22:59:52 Given I start postgres0 # features/steps/basic_replication.py:8 1816s Jul 30 22:59:55 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1817s Jul 30 22:59:56 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7 1821s Jul 30 23:00:00 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 1822s Jul 30 23:00:01 And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18 1822s Jul 30 23:00:01 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18 1822s Jul 30 23:00:01 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 1822s Jul 30 23:00:01 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 1825s Jul 30 23:00:04 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112 1826s Jul 30 23:00:05 And there is a label with "postgres1" in postgres2 data directory # 
features/steps/cascading_replication.py:12 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Feature: citus # features/citus.feature:1 1832s SKIP FEATURE citus: Citus extension isn't available 1832s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extension isn't available 1832s SKIP Scenario coordinator failover updates pg_dist_node: Citus extension isn't available 1832s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extension isn't available 1832s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extension isn't available 1832s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extension isn't available 1832s Jul 30 23:00:11 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over 1832s Jul 30 23:00:11 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4 1832s Jul 30 23:00:11 Given I start postgres0 in citus group 0 # None 1832s Jul 30 23:00:11 And I start postgres2 in citus group 1 # None 1832s Jul 30 23:00:11 Then postgres0 is a leader in a group 0 after 10 seconds # None 1832s Jul 30 23:00:11 And postgres2 is a leader in a group 1 after 10 seconds # None 1832s Jul 30 23:00:11 When I start postgres1 in citus group 0 # None 1832s Jul 30 23:00:11 And I start postgres3 in citus group 1 # None 1832s Jul 30 23:00:11 Then replication works from postgres0 to postgres1 after 15 seconds # None 1832s Jul 30 23:00:11 Then replication works from postgres2 to postgres3 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None 1832s Jul 30 23:00:11 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16 1832s Jul 30 23:00:11 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None 1832s Jul 30 23:00:11 Then postgres1 role is the primary after 10 seconds # None 1832s Jul 30 23:00:11 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None 1832s Jul 30 23:00:11 And replication works from postgres1 to postgres0 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 1832s Jul 30 23:00:11 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None 1832s Jul 30 23:00:11 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None 1832s Jul 30 23:00:11 Then postgres0 role is the primary after 10 seconds # None 1832s Jul 30 23:00:11 And replication works from postgres0 to postgres1 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 1832s Jul 30 23:00:11 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29 1832s Jul 30 23:00:11 Given I create a distributed table on postgres0 # None 1832s Jul 30 23:00:11 And I start a thread inserting data on postgres0 # None 1832s Jul 30 23:00:11 When I run
patronictl.py switchover batman --group 1 --force # None 1832s Jul 30 23:00:11 Then I receive a response returncode 0 # None 1832s Jul 30 23:00:11 And postgres3 role is the primary after 10 seconds # None 1832s Jul 30 23:00:11 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None 1832s Jul 30 23:00:11 And replication works from postgres3 to postgres2 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1832s Jul 30 23:00:11 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None 1832s Jul 30 23:00:11 And a thread is still alive # None 1832s Jul 30 23:00:11 When I run patronictl.py switchover batman --group 1 --force # None 1832s Jul 30 23:00:11 Then I receive a response returncode 0 # None 1832s Jul 30 23:00:11 And postgres2 role is the primary after 10 seconds # None 1832s Jul 30 23:00:11 And replication works from postgres2 to postgres3 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1832s Jul 30 23:00:11 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None 1832s Jul 30 23:00:11 And a thread is still alive # None 1832s Jul 30 23:00:11 When I stop a thread # None 1832s Jul 30 23:00:11 Then a distributed table on postgres0 has expected rows # None 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50 1832s Jul 30 23:00:11 Given I cleanup a distributed table on postgres0 # None 1832s Jul 30 23:00:11 And I start a thread inserting data on postgres0 # None 1832s Jul 30 23:00:11 When I run patronictl.py restart batman postgres2 --group 1 --force # None 1832s Jul 30 23:00:11 Then I receive a response returncode 0 # None 1832s Jul 30 23:00:11 And postgres2 role is the primary after 10 seconds # None 1832s Jul 30 23:00:11 And replication works from postgres2 to postgres3 after 15 seconds # None 1832s Jul 30 23:00:11 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 1832s Jul 30 23:00:11 And a thread is still alive # None 1832s Jul 30 23:00:11 When I stop a thread # None 1832s Jul 30 23:00:11 Then a distributed table on postgres0 has expected rows # None 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62 1832s Jul 30 23:00:11 Given I start postgres4 in citus group 2 # None 1832s Jul 30 23:00:11 Then postgres4 is a leader in a group 2 after 10 seconds # None 1832s Jul 30 23:00:11 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None 1832s Jul 30 23:00:11 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None 1832s Jul 30 23:00:11 Then I receive a response returncode 0 # None 1832s Jul 30 23:00:11 And I receive a response output "+ttl: 20" # None 1832s Jul 30 23:00:11 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None 1832s Jul 30 23:00:11 When I shut down postgres4 # None 1832s Jul 30 23:00:11 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None 1832s Jul 30 23:00:11 When I run patronictl.py restart batman postgres2 --group 1 --force # None 1832s Jul 30 23:00:11 Then a transaction finishes in 20 seconds # 
None 1832s Jul 30 23:00:11 1832s Jul 30 23:00:11 Feature: custom bootstrap # features/custom_bootstrap.feature:1 1832s Jul 30 23:00:11 We should check that patroni can bootstrap a new cluster from a backup 1832s Jul 30 23:00:11 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4 1832s Jul 30 23:00:11 Given I start postgres0 # features/steps/basic_replication.py:8 1835s Jul 30 23:00:14 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1836s Jul 30 23:00:15 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 1836s Jul 30 23:00:15 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6 1840s Jul 30 23:00:19 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 1841s Jul 30 23:00:20 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93 1841s Jul 30 23:00:20 1841s Jul 30 23:00:20 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12 1841s Jul 30 23:00:20 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54 1841s Jul 30 23:00:20 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 1842s Jul 30 23:00:21 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11 1846s Jul 30 23:00:25 Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16 1847s Jul 30 23:00:26 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93 1853s Jul 30 23:00:32 1853s Jul 30 23:00:32 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1 1853s Jul 30 23:00:32 We should check the basic dcs failsafe mode functioning 1853s Jul 30 23:00:32 Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4 1853s Jul 30 23:00:32 Given I start postgres0 # features/steps/basic_replication.py:8 1856s Jul 30 23:00:35 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 1857s Jul 30 23:00:36 Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23 1857s Jul 30 23:00:36 When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71 1857s Jul 30 23:00:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 1857s Jul 30 23:00:36 And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156 1857s Jul 30 23:00:36 When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61 1857s Jul 30 23:00:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 1857s Jul 30 23:00:36 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98 1857s Jul 30 23:00:36 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71 1857s Jul 30 23:00:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 1857s Jul 30 23:00:36 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # 
features/steps/patroni_api.py:71 1858s Jul 30 23:00:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 1858s Jul 30 23:00:36 1858s Jul 30 23:00:36 @dcs-failsafe 1858s Jul 30 23:00:36 Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20 1858s Jul 30 23:00:36 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 1858s Jul 30 23:00:36 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 1864s Jul 30 23:00:43 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1864s Jul 30 23:00:43 1864s Jul 30 23:00:43 @dcs-failsafe 1864s Jul 30 23:00:43 Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26 1864s Jul 30 23:00:43 Given DCS is up # features/steps/dcs_failsafe_mode.py:9 1864s Jul 30 23:00:43 When I do a backup of postgres0 # features/steps/custom_bootstrap.py:25 1864s Jul 30 23:00:43 And I shut down postgres0 # features/steps/basic_replication.py:29 1866s Jul 30 23:00:45 When I start postgres1 in a cluster batman from backup with no_leader # features/steps/dcs_failsafe_mode.py:14 1870s Jul 30 23:00:48 Then postgres1 role is the replica after 12 seconds # features/steps/basic_replication.py:105 1870s Jul 30 23:00:48 1870s Jul 30 23:00:48 Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33 1870s Jul 30 23:00:48 Given I start postgres0 # features/steps/basic_replication.py:8 1873s Jul 30 23:00:52 And I start postgres1 # features/steps/basic_replication.py:8 1873s Jul 30 23:00:52 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1874s Jul 30 23:00:53 And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23 1874s Jul 30 23:00:53 And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156 1875s Jul 30 23:00:54 When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61 1875s Jul 30 23:00:54 Then I receive a response code 200 # features/steps/patroni_api.py:98 1875s Jul 30 23:00:54 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98 1875s Jul 30 23:00:54 And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98 1875s Jul 30 23:00:54 1875s Jul 30 23:00:54 @dcs-failsafe @slot-advance 1875s Jul 30 23:00:54 Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46 1875s Jul 30 23:00:54 Given I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75 1875s Jul 30 23:00:54 Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 1877s Jul 30 23:00:56 And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 1880s Jul 30 23:00:59 And DCS is down # features/steps/dcs_failsafe_mode.py:4 1880s Jul 30 23:00:59 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 1886s Jul 30 23:01:05 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1886s Jul 30 
23:01:05 And postgres1 role is the replica after 2 seconds # features/steps/basic_replication.py:105 1886s Jul 30 23:01:05 And replication works from postgres0 to postgres1 after 10 seconds # features/steps/basic_replication.py:112 1886s Jul 30 23:01:05 When I get all changes from logical slot dcs_slot_0 on postgres0 # features/steps/slots.py:70 1886s Jul 30 23:01:05 And I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75 1886s Jul 30 23:01:05 Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # features/steps/slots.py:51 1890s Jul 30 23:01:09 And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 1890s Jul 30 23:01:09 1890s Jul 30 23:01:09 @dcs-failsafe 1890s Jul 30 23:01:09 Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61 1890s Jul 30 23:01:09 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 1890s Jul 30 23:01:09 And I kill postgres1 # features/steps/basic_replication.py:34 1891s Jul 30 23:01:10 And I kill postmaster on postgres1 # features/steps/basic_replication.py:44 1891s Jul 30 23:01:10 waiting for server to shut down.... done 1891s Jul 30 23:01:10 server stopped 1891s Jul 30 23:01:10 Then postgres0 role is the replica after 12 seconds # features/steps/basic_replication.py:105 1893s Jul 30 23:01:12 1893s Jul 30 23:01:12 @dcs-failsafe 1893s Jul 30 23:01:12 Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68 1893s Jul 30 23:01:12 Given I kill postgres0 # features/steps/basic_replication.py:34 1894s Jul 30 23:01:13 And I shut down postmaster on postgres0 # features/steps/basic_replication.py:39 1895s Jul 30 23:01:13 waiting for server to shut down.... 
done 1895s Jul 30 23:01:13 server stopped 1895s Jul 30 23:01:13 And DCS is up # features/steps/dcs_failsafe_mode.py:9 1895s Jul 30 23:01:13 When I start postgres1 # features/steps/basic_replication.py:8 1898s Jul 30 23:01:17 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1899s Jul 30 23:01:18 And postgres1 role is the primary after 25 seconds # features/steps/basic_replication.py:105 1900s Jul 30 23:01:19 1900s Jul 30 23:01:19 @dcs-failsafe 1900s Jul 30 23:01:19 Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77 1900s Jul 30 23:01:19 Given I start postgres0 # features/steps/basic_replication.py:8 1904s Jul 30 23:01:23 And I start postgres2 # features/steps/basic_replication.py:8 1908s Jul 30 23:01:27 Then "members/postgres2" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 1909s Jul 30 23:01:28 And "members/postgres0" key in DCS has state=running after 20 seconds # features/steps/cascading_replication.py:23 1909s Jul 30 23:01:28 And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # features/steps/patroni_api.py:156 1909s Jul 30 23:01:28 And replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112 1910s Jul 30 23:01:29 And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112 1911s Jul 30 23:01:30 1911s Jul 30 23:01:30 @dcs-failsafe @slot-advance 1911s Jul 30 23:01:30 Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88 1911s Jul 30 23:01:30 Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 1911s Jul 30 23:01:30 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51 1917s Jul 30 23:01:36 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51 1918s Jul 30 23:01:37 When I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75 1918s Jul 30 23:01:37 Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51 1919s Jul 30 23:01:38 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 1919s Jul 30 23:01:38 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 1919s Jul 30 23:01:38 1919s Jul 30 23:01:38 @dcs-failsafe 1919s Jul 30 23:01:38 Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98 1919s Jul 30 23:01:38 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 1919s Jul 30 23:01:38 Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 1927s Jul 30 23:01:46 Then postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 1927s Jul 30 23:01:46 And postgres0 role is the replica after 2 seconds # features/steps/basic_replication.py:105 1927s Jul 30 23:01:46 And postgres2 role is the replica after 2 seconds # features/steps/basic_replication.py:105 1927s Jul 30 23:01:46 1927s Jul 30 23:01:46 @dcs-failsafe @slot-advance 1927s 
Jul 30 23:01:46 Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107 1927s Jul 30 23:01:46 Given replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112 1927s Jul 30 23:01:46 And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112 1928s Jul 30 23:01:47 When I get all changes from logical slot dcs_slot_2 on postgres1 # features/steps/slots.py:70 1928s Jul 30 23:01:47 And I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75 1928s Jul 30 23:01:47 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51 1929s Jul 30 23:01:48 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51 1929s Jul 30 23:01:48 And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51 1929s Jul 30 23:01:48 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 1929s Jul 30 23:01:48 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 1933s Jul 30 23:01:52 1933s Jul 30 23:01:52 Feature: ignored slots # features/ignored_slots.feature:1 1933s Jul 30 23:01:52 1933s Jul 30 23:01:52 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2 1933s Jul 30 23:01:52 Given I start postgres1 # features/steps/basic_replication.py:8 1936s Jul 30 23:01:55 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 1937s Jul 30 23:01:56 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 1937s Jul 30 23:01:56 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 1937s Jul 30 23:01:56 Then I receive a response code 200 # features/steps/patroni_api.py:98 1937s Jul 30 23:01:56 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156 1937s Jul 30 23:01:56 When I shut down postgres1 # features/steps/basic_replication.py:29 1939s Jul 30 23:01:58 And I start postgres1 # features/steps/basic_replication.py:8 1942s Jul 30 23:02:01 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 1943s Jul 30 23:02:02 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 1944s Jul 30 23:02:03 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105 1944s Jul 30 23:02:03 When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1944s Jul 30 23:02:03 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1944s Jul 30 23:02:03 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # 
features/steps/slots.py:8 1944s Jul 30 23:02:03 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1944s Jul 30 23:02:03 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8 1944s Jul 30 23:02:03 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1944s Jul 30 23:02:03 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1944s Jul 30 23:02:03 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1944s Jul 30 23:02:03 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1944s Jul 30 23:02:03 When I start postgres0 # features/steps/basic_replication.py:8 1947s Jul 30 23:02:06 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 1948s Jul 30 23:02:07 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1948s Jul 30 23:02:07 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 1953s Jul 30 23:02:12 When I shut down postgres1 # features/steps/basic_replication.py:29 1955s Jul 30 23:02:14 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 1956s Jul 30 23:02:15 When I start postgres1 # features/steps/basic_replication.py:8 1959s Jul 30 23:02:18 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 1959s Jul 30 23:02:18 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 1960s Jul 30 23:02:19 And I sleep for 2 seconds # features/steps/patroni_api.py:39 1962s Jul 30 23:02:21 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1962s Jul 30 23:02:21 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1962s Jul 30 23:02:21 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1962s Jul 30 23:02:21 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1962s Jul 30 23:02:21 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40 1963s Jul 30 23:02:21 When I shut down postgres0 # features/steps/basic_replication.py:29 1964s Jul 30 23:02:23 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 1965s Jul 30 23:02:24 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1969s Jul 30 23:02:24 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1969s Jul 30 23:02:24 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin 
after 2 seconds # features/steps/slots.py:19 1969s Jul 30 23:02:24 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 1969s Jul 30 23:02:27 1969s Jul 30 23:02:27 Feature: nostream node # features/nostream_node.feature:1 1969s Jul 30 23:02:27 1969s Jul 30 23:02:27 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3 1969s Jul 30 23:02:27 When I start postgres0 # features/steps/basic_replication.py:8 1971s Jul 30 23:02:30 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7 1974s Jul 30 23:02:33 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23 1975s Jul 30 23:02:34 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112 1980s Jul 30 23:02:39 1980s Jul 30 23:02:39 @slot-advance 1980s Jul 30 23:02:39 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10 1980s Jul 30 23:02:39 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 1980s Jul 30 23:02:39 Then I receive a response code 200 # features/steps/patroni_api.py:98 1980s Jul 30 23:02:39 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 1983s Jul 30 23:02:42 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 1984s Jul 30 23:02:43 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 1989s Jul 30 23:02:47 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 1994s Jul 30 23:02:52 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 1994s Jul 30 23:02:52 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 2000s Jul 30 23:02:59 2000s Jul 30 23:02:59 Feature: patroni api # features/patroni_api.feature:1 2000s Jul 30 23:02:59 We should check that patroni correctly responds to valid and not-valid API requests. 
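
The "patroni api" scenarios below lean on the role-specific monitoring endpoints: GET /primary or GET /replica answers 200 only when the node currently holds that role and 503 otherwise, which is exactly what the alternating 200/503 assertions in the log are checking. A small standard-library sketch of the same check, assuming only the API ports used in this run (8008 for postgres0, 8009 for postgres1):

# Role check against Patroni's monitoring endpoints, as exercised by the
# scenarios below: HTTP 200 means "this node currently has the role",
# HTTP 503 means it does not.
from http.client import HTTPConnection

def has_role(host: str, port: int, role: str = "primary") -> bool:
    """Return True if the Patroni node at host:port reports the given role."""
    conn = HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", f"/{role}")
        return conn.getresponse().status == 200
    finally:
        conn.close()

if __name__ == "__main__":
    # Ports taken from this test run; adjust for a different cluster layout.
    for port in (8008, 8009):
        for role in ("primary", "replica"):
            print(f"{port}/{role}:", has_role("127.0.0.1", port, role))
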
2000s Jul 30 23:02:59 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 2000s Jul 30 23:02:59 Given I start postgres0 # features/steps/basic_replication.py:8 2003s Jul 30 23:03:02 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2003s Jul 30 23:03:02 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 2003s Jul 30 23:03:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 And I receive a response state running # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 And I receive a response role master # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 2003s Jul 30 23:03:02 Then I receive a response code 503 # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 2003s Jul 30 23:03:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2003s Jul 30 23:03:02 Then I receive a response code 503 # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 2003s Jul 30 23:03:02 Then I receive a response code 503 # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 2003s Jul 30 23:03:02 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 2005s Jul 30 23:03:04 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 2005s Jul 30 23:03:04 Then I receive a response code 412 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 2005s Jul 30 23:03:04 Then I receive a response code 400 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 2005s Jul 30 23:03:04 Then I receive a response code 400 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 2005s Jul 30 23:03:04 Scenario: check local configuration reload # features/patroni_api.feature:32 2005s Jul 30 23:03:04 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 2005s Jul 30 23:03:04 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 2005s Jul 30 23:03:04 Then I receive a response code 202 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 2005s Jul 30 23:03:04 Scenario: check dynamic 
configuration change via DCS # features/patroni_api.feature:37 2005s Jul 30 23:03:04 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 2005s Jul 30 23:03:04 Then I receive a response code 200 # features/steps/patroni_api.py:98 2005s Jul 30 23:03:04 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156 2007s Jul 30 23:03:06 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 2007s Jul 30 23:03:06 Then I receive a response code 200 # features/steps/patroni_api.py:98 2007s Jul 30 23:03:06 And I receive a response ttl 20 # features/steps/patroni_api.py:98 2007s Jul 30 23:03:06 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 2008s Jul 30 23:03:06 Then I receive a response code 200 # features/steps/patroni_api.py:98 2008s Jul 30 23:03:06 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 2008s Jul 30 23:03:06 And I sleep for 4 seconds # features/steps/patroni_api.py:39 2012s Jul 30 23:03:10 2012s Jul 30 23:03:10 Scenario: check the scheduled restart # features/patroni_api.feature:49 2012s Jul 30 23:03:10 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 2013s Jul 30 23:03:12 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2013s Jul 30 23:03:12 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 2013s Jul 30 23:03:12 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 2014s Jul 30 23:03:12 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 2014s Jul 30 23:03:13 Then I receive a response code 202 # features/steps/patroni_api.py:98 2014s Jul 30 23:03:13 And I sleep for 8 seconds # features/steps/patroni_api.py:39 2022s Jul 30 23:03:21 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 2022s Jul 30 23:03:21 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124 2022s Jul 30 23:03:21 Then I receive a response code 202 # features/steps/patroni_api.py:98 2022s Jul 30 23:03:21 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 2028s Jul 30 23:03:27 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2029s Jul 30 23:03:28 2029s Jul 30 23:03:28 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63 2029s Jul 30 23:03:28 Given I start postgres1 # features/steps/basic_replication.py:8 2034s Jul 30 23:03:33 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2035s Jul 30 23:03:34 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 2037s Jul 30 23:03:36 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2037s Jul 30 23:03:36 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 2037s Jul 30 23:03:36 waiting for server to shut 
down.... done 2037s Jul 30 23:03:36 server stopped 2037s Jul 30 23:03:36 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2037s Jul 30 23:03:36 Then I receive a response code 503 # features/steps/patroni_api.py:98 2037s Jul 30 23:03:36 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 2038s Jul 30 23:03:37 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 2041s Jul 30 23:03:40 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2041s Jul 30 23:03:40 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2042s Jul 30 23:03:41 And I sleep for 2 seconds # features/steps/patroni_api.py:39 2047s Jul 30 23:03:43 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2047s Jul 30 23:03:43 Then I receive a response code 200 # features/steps/patroni_api.py:98 2047s Jul 30 23:03:43 And I receive a response state running # features/steps/patroni_api.py:98 2047s Jul 30 23:03:43 And I receive a response role replica # features/steps/patroni_api.py:98 2047s Jul 30 23:03:43 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 2048s Jul 30 23:03:47 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2048s Jul 30 23:03:47 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 2048s Jul 30 23:03:47 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 2049s Jul 30 23:03:48 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2049s Jul 30 23:03:48 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2054s Jul 30 23:03:52 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2054s Jul 30 23:03:52 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 2054s Jul 30 23:03:52 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 2055s Jul 30 23:03:53 2055s Jul 30 23:03:53 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 2055s Jul 30 23:03:53 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 2057s Jul 30 23:03:56 Then I receive a response code 200 # features/steps/patroni_api.py:98 2057s Jul 30 23:03:56 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 2057s Jul 30 23:03:56 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2058s Jul 30 23:03:57 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 2063s Jul 30 23:04:02 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 2063s Jul 30 23:04:02 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2063s Jul 30 23:04:02 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 2063s Jul 30 23:04:02 Then I receive a response code 503 # features/steps/patroni_api.py:98 2063s Jul 30 23:04:02 When I issue a GET request to 
http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2063s Jul 30 23:04:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 2063s Jul 30 23:04:02 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2063s Jul 30 23:04:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 2063s Jul 30 23:04:02 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2063s Jul 30 23:04:02 Then I receive a response code 503 # features/steps/patroni_api.py:98 2063s Jul 30 23:04:02 2063s Jul 30 23:04:02 Scenario: check the scheduled switchover # features/patroni_api.feature:107 2063s Jul 30 23:04:02 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 2065s Jul 30 23:04:04 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 2065s Jul 30 23:04:04 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98 2065s Jul 30 23:04:04 When I run patronictl.py resume batman # features/steps/patroni_api.py:86 2067s Jul 30 23:04:06 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2067s Jul 30 23:04:06 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 2069s Jul 30 23:04:08 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2069s Jul 30 23:04:08 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29 2079s Jul 30 23:04:18 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2079s Jul 30 23:04:18 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 2081s Jul 30 23:04:20 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112 2081s Jul 30 23:04:20 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2082s Jul 30 23:04:21 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 2082s Jul 30 23:04:21 Then I receive a response code 200 # features/steps/patroni_api.py:98 2082s Jul 30 23:04:21 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2082s Jul 30 23:04:21 Then I receive a response code 503 # features/steps/patroni_api.py:98 2082s Jul 30 23:04:21 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2082s Jul 30 23:04:21 Then I receive a response code 503 # features/steps/patroni_api.py:98 2082s Jul 30 23:04:21 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2082s Jul 30 23:04:21 Then I receive a response code 200 # features/steps/patroni_api.py:98 2089s Jul 30 23:04:25 2089s Jul 30 23:04:25 Feature: permanent slots # features/permanent_slots.feature:1 2089s Jul 30 23:04:25 2089s Jul 30 23:04:25 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2 2089s Jul 30 23:04:25 Given I start postgres0 # features/steps/basic_replication.py:8 2089s Jul 30 23:04:28 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2090s Jul 30 23:04:29 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2090s Jul 30 23:04:29 When I issue a PATCH request to 
http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71 2091s Jul 30 23:04:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 2091s Jul 30 23:04:29 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156 2091s Jul 30 23:04:30 When I start postgres1 # features/steps/basic_replication.py:8 2094s Jul 30 23:04:33 And I start postgres2 # features/steps/basic_replication.py:8 2097s Jul 30 23:04:36 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7 2101s Jul 30 23:04:40 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 2101s Jul 30 23:04:40 And postgres0 has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80 2101s Jul 30 23:04:40 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80 2101s Jul 30 23:04:40 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 2101s Jul 30 23:04:40 2101s Jul 30 23:04:40 @slot-advance 2101s Jul 30 23:04:40 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18 2101s Jul 30 23:04:40 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2105s Jul 30 23:04:44 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 2105s Jul 30 23:04:44 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 2106s Jul 30 23:04:45 2106s Jul 30 23:04:45 @slot-advance 2106s Jul 30 23:04:45 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24 2106s Jul 30 23:04:45 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 2112s Jul 30 23:04:51 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 2112s Jul 30 23:04:51 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 2113s Jul 30 23:04:52 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 2114s Jul 30 23:04:53 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 2114s Jul 30 23:04:53 @slot-advance 2114s Jul 30 23:04:53 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34 2114s Jul 30 23:04:53 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 
23:04:53 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80 2114s Jul 30 23:04:53 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 2114s Jul 30 23:04:53 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 2114s Jul 30 23:04:53 2114s Jul 30 23:04:53 @slot-advance 2114s Jul 30 23:04:53 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45 2114s Jul 30 23:04:53 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54 2114s Jul 30 23:04:53 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70 2114s Jul 30 23:04:53 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75 2114s Jul 30 23:04:53 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 2116s Jul 30 23:04:55 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51 2118s Jul 30 23:04:57 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51 2118s Jul 30 23:04:57 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 2118s Jul 30 23:04:57 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 2118s Jul 30 23:04:57 2118s Jul 30 23:04:57 @slot-advance 2118s Jul 30 23:04:57 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62 2118s Jul 30 23:04:57 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96 2118s Jul 30 23:04:57 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96 2118s Jul 30 23:04:57 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96 2118s Jul 30 23:04:57 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102 2118s Jul 30 23:04:57 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96 2118s Jul 30 23:04:57 2118s Jul 30 23:04:57 Scenario: check permanent physical replication slot 
after failover # features/permanent_slots.feature:69 2118s Jul 30 23:04:57 Given I shut down postgres3 # features/steps/basic_replication.py:29 2119s Jul 30 23:04:58 And I shut down postgres2 # features/steps/basic_replication.py:29 2120s Jul 30 23:04:59 And I shut down postgres0 # features/steps/basic_replication.py:29 2122s Jul 30 23:05:01 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 2122s Jul 30 23:05:01 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80 2122s Jul 30 23:05:01 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 2124s Jul 30 23:05:03 2124s Jul 30 23:05:03 Feature: priority replication # features/priority_failover.feature:1 2124s Jul 30 23:05:03 We should check that we can give nodes priority during failover 2124s Jul 30 23:05:03 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4 2124s Jul 30 23:05:03 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 2127s Jul 30 23:05:06 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7 2131s Jul 30 23:05:10 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2132s Jul 30 23:05:11 When I shut down postgres0 # features/steps/basic_replication.py:29 2134s Jul 30 23:05:13 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 2136s Jul 30 23:05:15 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 2136s Jul 30 23:05:15 When I start postgres0 # features/steps/basic_replication.py:8 2139s Jul 30 23:05:18 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2140s Jul 30 23:05:19 2140s Jul 30 23:05:19 Scenario: check higher failover priority is respected # features/priority_failover.feature:14 2140s Jul 30 23:05:19 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 2144s Jul 30 23:05:23 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7 2149s Jul 30 23:05:28 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112 2150s Jul 30 23:05:29 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112 2152s Jul 30 23:05:30 When I shut down postgres0 # features/steps/basic_replication.py:29 2154s Jul 30 23:05:33 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2155s Jul 30 23:05:34 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 2155s Jul 30 23:05:34 2155s Jul 30 23:05:34 Scenario: check conflicting configuration handling # features/priority_failover.feature:23 2155s Jul 30 23:05:34 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131 2155s Jul 30 23:05:34 And I issue an empty POST request to http://127.0.0.1:8010/reload # 
features/steps/patroni_api.py:66 2155s Jul 30 23:05:34 Then I receive a response code 202 # features/steps/patroni_api.py:98 2155s Jul 30 23:05:34 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 2156s Jul 30 23:05:35 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23 2157s Jul 30 23:05:36 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71 2157s Jul 30 23:05:36 Then I receive a response code 412 # features/steps/patroni_api.py:98 2157s Jul 30 23:05:36 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98 2157s Jul 30 23:05:36 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131 2157s Jul 30 23:05:36 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66 2157s Jul 30 23:05:36 Then I receive a response code 202 # features/steps/patroni_api.py:98 2157s Jul 30 23:05:36 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 2159s Jul 30 23:05:38 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23 2160s Jul 30 23:05:39 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71 2162s Jul 30 23:05:41 Then I receive a response code 200 # features/steps/patroni_api.py:98 2162s Jul 30 23:05:41 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2167s Jul 30 23:05:46 2167s Jul 30 23:05:46 Feature: recovery # features/recovery.feature:1 2167s Jul 30 23:05:46 We want to check that crashed postgres is started back 2167s Jul 30 23:05:46 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4 2167s Jul 30 23:05:46 Given I start postgres0 # features/steps/basic_replication.py:8 2170s Jul 30 23:05:49 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2170s Jul 30 23:05:49 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2170s Jul 30 23:05:49 When I start postgres1 # features/steps/basic_replication.py:8 2174s Jul 30 23:05:53 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 2174s Jul 30 23:05:53 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 2175s Jul 30 23:05:54 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 2176s Jul 30 23:05:55 waiting for server to shut down.... 
done 2176s Jul 30 23:05:55 server stopped 2176s Jul 30 23:05:55 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2178s Jul 30 23:05:57 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 2178s Jul 30 23:05:57 Then I receive a response code 200 # features/steps/patroni_api.py:98 2178s Jul 30 23:05:57 And I receive a response role master # features/steps/patroni_api.py:98 2178s Jul 30 23:05:57 And I receive a response timeline 1 # features/steps/patroni_api.py:98 2178s Jul 30 23:05:57 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 2179s Jul 30 23:05:58 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 2182s Jul 30 23:06:01 2182s Jul 30 23:06:01 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20 2182s Jul 30 23:06:01 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71 2182s Jul 30 23:06:01 Then I receive a response code 200 # features/steps/patroni_api.py:98 2182s Jul 30 23:06:01 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 2182s Jul 30 23:06:01 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 2182s Jul 30 23:06:01 waiting for server to shut down.... done 2182s Jul 30 23:06:01 server stopped 2182s Jul 30 23:06:01 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2184s Jul 30 23:06:03 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2188s Jul 30 23:06:07 2188s Jul 30 23:06:07 Feature: standby cluster # features/standby_cluster.feature:1 2188s Jul 30 23:06:07 2188s Jul 30 23:06:07 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2 2188s Jul 30 23:06:07 Given I start postgres1 # features/steps/basic_replication.py:8 2191s Jul 30 23:06:10 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2192s Jul 30 23:06:11 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2192s Jul 30 23:06:11 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 2192s Jul 30 23:06:11 Then I receive a response code 200 # features/steps/patroni_api.py:98 2192s Jul 30 23:06:11 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156 2192s Jul 30 23:06:11 And I sleep for 3 seconds # features/steps/patroni_api.py:39 2195s Jul 30 23:06:14 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71 2195s Jul 30 23:06:14 Then I receive a response code 200 # features/steps/patroni_api.py:98 2195s Jul 30 23:06:14 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 2196s Jul 30 23:06:15 When I start postgres0 # features/steps/basic_replication.py:8 2199s Jul 30 23:06:18 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2203s Jul 30 23:06:19 And replication works from postgres1 to 
postgres0 after 15 seconds # features/steps/basic_replication.py:112 2203s Jul 30 23:06:20 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 2203s Jul 30 23:06:20 Then I receive a response code 200 # features/steps/patroni_api.py:98 2203s Jul 30 23:06:20 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 2203s Jul 30 23:06:20 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 2203s Jul 30 23:06:21 2203s Jul 30 23:06:21 @slot-advance 2203s Jul 30 23:06:21 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22 2203s Jul 30 23:06:21 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 2206s Jul 30 23:06:25 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 2211s Jul 30 23:06:30 2211s Jul 30 23:06:30 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26 2211s Jul 30 23:06:30 When I shut down postgres1 # features/steps/basic_replication.py:29 2213s Jul 30 23:06:32 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2213s Jul 30 23:06:32 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23 2214s Jul 30 23:06:33 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 2214s Jul 30 23:06:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 2214s Jul 30 23:06:33 2214s Jul 30 23:06:33 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33 2214s Jul 30 23:06:33 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23 2217s Jul 30 23:06:36 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 2218s Jul 30 23:06:37 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 2218s Jul 30 23:06:37 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 2218s Jul 30 23:06:37 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61 2218s Jul 30 23:06:37 Then I receive a response code 200 # features/steps/patroni_api.py:98 2218s Jul 30 23:06:37 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 2218s Jul 30 23:06:37 And I sleep for 3 seconds # features/steps/patroni_api.py:39 2221s Jul 30 23:06:40 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2221s Jul 30 23:06:40 Then I receive a response code 503 # features/steps/patroni_api.py:98 2221s Jul 30 23:06:40 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61 2221s Jul 30 23:06:40 Then I receive a response code 200 # features/steps/patroni_api.py:98 2221s Jul 30 23:06:40 And I receive a response role standby_leader # features/steps/patroni_api.py:98 2221s Jul 30 23:06:40 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 2221s Jul 30 23:06:40 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12 2224s Jul 30 23:06:43 Then postgres2 role is the replica after 24 seconds # 
features/steps/basic_replication.py:105 2224s Jul 30 23:06:43 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52 2225s Jul 30 23:06:44 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 2225s Jul 30 23:06:44 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61 2225s Jul 30 23:06:44 Then I receive a response code 200 # features/steps/patroni_api.py:98 2225s Jul 30 23:06:44 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 2225s Jul 30 23:06:44 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 2225s Jul 30 23:06:44 2225s Jul 30 23:06:44 Scenario: check switchover # features/standby_cluster.feature:57 2225s Jul 30 23:06:44 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86 2229s Jul 30 23:06:48 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 2229s Jul 30 23:06:48 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52 2231s Jul 30 23:06:50 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12 2231s Jul 30 23:06:50 2231s Jul 30 23:06:50 Scenario: check failover # features/standby_cluster.feature:63 2231s Jul 30 23:06:50 When I kill postgres2 # features/steps/basic_replication.py:34 2232s Jul 30 23:06:51 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44 2232s Jul 30 23:06:51 waiting for server to shut down.... done 2232s Jul 30 23:06:51 server stopped 2232s Jul 30 23:06:51 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52 2251s Jul 30 23:07:10 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 2252s Jul 30 23:07:10 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2252s Jul 30 23:07:11 Then I receive a response code 503 # features/steps/patroni_api.py:98 2252s Jul 30 23:07:11 And I receive a response role standby_leader # features/steps/patroni_api.py:98 2252s Jul 30 23:07:11 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 2253s Jul 30 23:07:12 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 2257s Jul 30 23:07:16 2257s Jul 30 23:07:16 Feature: watchdog # features/watchdog.feature:1 2257s Jul 30 23:07:16 Verify that watchdog gets pinged and triggered under appropriate circumstances. 
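The watchdog scenarios that follow check a 15 second timeout at the cluster's initial TTL and a 25 second timeout once "patronictl.py edit-config batman -s ttl=30" has been applied. A minimal sketch of that arithmetic in Python, assuming the documented Patroni behaviour of arming the watchdog to fire safety_margin seconds (default 5) before the leader key TTL expires, and assuming the test cluster starts with ttl=20; this is an illustration, not Patroni's code:

    # Hypothetical helper, not Patroni's implementation: the watchdog is assumed
    # to be armed so that it fires safety_margin seconds before the leader key
    # TTL would expire.
    def watchdog_timeout(ttl: int, safety_margin: int = 5) -> int:
        return ttl - safety_margin

    assert watchdog_timeout(20) == 15  # "postgres0 watchdog has a 15 second timeout" (assumed initial ttl=20)
    assert watchdog_timeout(30) == 25  # "postgres0 watchdog has a 25 second timeout" after ttl is raised to 30

Whatever the exact initial TTL, the 10 second increase in the observed timeout tracks the 10 second increase in TTL.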
2257s Jul 30 23:07:16 Scenario: watchdog is opened and pinged # features/watchdog.feature:4 2257s Jul 30 23:07:16 Given I start postgres0 with watchdog # features/steps/watchdog.py:16 2260s Jul 30 23:07:19 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2261s Jul 30 23:07:20 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2261s Jul 30 23:07:20 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 2262s Jul 30 23:07:21 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34 2262s Jul 30 23:07:21 2262s Jul 30 23:07:21 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11 2262s Jul 30 23:07:21 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86 2264s Jul 30 23:07:23 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2264s Jul 30 23:07:23 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98 2264s Jul 30 23:07:23 When I sleep for 4 seconds # features/steps/patroni_api.py:39 2268s Jul 30 23:07:27 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34 2268s Jul 30 23:07:27 2268s Jul 30 23:07:27 Scenario: watchdog is disabled during pause # features/watchdog.feature:18 2268s Jul 30 23:07:27 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 2270s Jul 30 23:07:29 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2270s Jul 30 23:07:29 When I sleep for 2 seconds # features/steps/patroni_api.py:39 2272s Jul 30 23:07:31 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 2272s Jul 30 23:07:31 2272s Jul 30 23:07:31 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24 2272s Jul 30 23:07:31 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 2272s Jul 30 23:07:31 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 2274s Jul 30 23:07:33 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2274s Jul 30 23:07:33 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 2275s Jul 30 23:07:34 2275s Jul 30 23:07:34 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30 2275s Jul 30 23:07:34 Given I shut down postgres0 # features/steps/basic_replication.py:29 2277s Jul 30 23:07:36 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 2277s Jul 30 23:07:36 2277s Jul 30 23:07:36 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34 2277s Jul 30 23:07:36 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 2277s Jul 30 23:07:36 And I start postgres0 with watchdog # features/steps/watchdog.py:16 2280s Jul 30 23:07:39 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2281s Jul 30 23:07:40 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52 2281s Jul 30 23:07:40 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44 2304s Jul 30 23:08:03 2305s Failed to get list of machines from http://127.0.0.1:2379/v2: MaxRetryError("HTTPConnectionPool(host='127.0.0.1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 2305s Failed to get list of machines from 
http://[::1]:2379/v2: MaxRetryError("HTTPConnectionPool(host='::1', port=2379): Max retries exceeded with url: /v2/machines (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))") 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5401.XboZAIzx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5445.XGzTYFyx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5491.XejXnNNx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5540.XWGIljmx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5585.XKYoqinx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5655.XRVUhbex 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5703.XkkcBbpx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5706.XCpiVzNx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5791.XkxixJvx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5892.XFKQBikx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5901.XvGhLCbx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5944.XSvKXlHx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.5992.XICQhpPx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6168.XvwsSpux 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6213.XauTckVx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6267.XIQXGvGx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6352.XPBFSmXx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6666.XOCBazhx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6742.XXqZtuBx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.6795.XWaykvNx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7035.XFmRmjpx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7087.XywxzGzx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7147.XrydQjNx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7246.XjiWtukx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7351.XZiEWZEx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7387.XRqJthAx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7459.XDqHYtex 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7491.XcEESmRx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7671.XueZUNJx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7719.XgqKrcox 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7734.XhxNMTkx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7772.XYBBnEBx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7820.XjXntbUx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7825.XoWjJVEx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7859.XUJEynAx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.7901.XatRXQIx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8061.XMehbaOx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8063.XSsMYXCx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8068.XAoYKJCx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8199.XIomMzjx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8246.XsFDIvtx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8283.XfnQItfx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8324.XQMlmnbx 2305s Jul 30 
23:08:04 Combined data file .coverage.autopkgtest.8379.XlUqNSMx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8568.XRLbrgZx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8601.XKTjRMGx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8682.XNRbPDDx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8754.XgSnlarx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.8804.XpTYURHx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9169.XYSQQTXx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9211.XsrrjpLx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9346.XCssaGKx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9408.XgaCDivx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9472.XLEATvyx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9582.XrFHnQlx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9693.XbhNCCSx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9830.XTLQSAux 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9873.XPjfvuCx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9875.XcJPHExx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9878.XttbgBBx 2305s Jul 30 23:08:04 Combined data file .coverage.autopkgtest.9890.XrfTyfIx 2308s Jul 30 23:08:06 Name Stmts Miss Cover 2308s Jul 30 23:08:06 ------------------------------------------------------------------------------------------------------------- 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/OpenSSL/SSL.py 1072 596 44% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/OpenSSL/__init__.py 4 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/OpenSSL/_util.py 41 14 66% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/OpenSSL/crypto.py 1225 982 20% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/OpenSSL/version.py 10 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 2308s Jul 30 23:08:06 
/usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 35 73% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 81 42% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 58 58% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/utils.py 77 29 62% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46% 2308s Jul 30 23:08:06 
/usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/__init__.py 3 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/_asyncbackend.py 14 6 57% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/_ddr.py 105 86 18% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/_features.py 44 7 84% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/_immutable_ctx.py 40 5 88% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/asyncbackend.py 44 32 27% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/asyncquery.py 277 242 13% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/edns.py 270 161 40% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/entropy.py 80 49 39% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/enum.py 72 46 36% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/exception.py 60 33 45% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/flags.py 41 14 66% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/grange.py 34 30 12% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/immutable.py 41 30 27% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/inet.py 80 65 19% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/ipv4.py 27 20 26% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/ipv6.py 115 100 13% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/message.py 809 662 18% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/name.py 620 427 31% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/nameserver.py 101 54 47% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/node.py 118 71 40% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/opcode.py 31 7 77% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/query.py 536 462 14% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/quic/__init__.py 26 23 12% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rcode.py 69 13 81% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdata.py 377 269 29% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdataclass.py 44 9 80% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdataset.py 193 133 31% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdatatype.py 214 25 88% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/OPT.py 34 19 44% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/SOA.py 41 26 37% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/TSIG.py 58 42 28% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/ZONEMD.py 43 27 37% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/ANY/__init__.py 2 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/__init__.py 2 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rdtypes/svcbbase.py 397 261 34% 2308s Jul 30 23:08:06 
/usr/lib/python3/dist-packages/dns/rdtypes/util.py 191 154 19% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/renderer.py 152 118 22% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/resolver.py 899 719 20% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/reversename.py 33 24 27% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/rrset.py 78 56 28% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/serial.py 93 79 15% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/set.py 149 108 28% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/tokenizer.py 335 279 17% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/transaction.py 271 203 25% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/tsig.py 177 122 31% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/ttl.py 45 38 16% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/version.py 7 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/wire.py 64 42 34% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/xfr.py 148 126 15% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/zone.py 508 383 25% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/zonefile.py 429 380 11% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/dns/zonetypes.py 15 2 87% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/etcd/__init__.py 125 24 81% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/etcd/client.py 380 192 49% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/etcd/lock.py 125 103 18% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/idna/__init__.py 4 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/idna/core.py 293 258 12% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/idna/idnadata.py 4 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/idna/intranges.py 30 24 20% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/idna/package_data.py 1 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/__main__.py 199 63 68% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/api.py 770 279 64% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/config.py 371 94 75% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 77 88% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/dcs/etcd.py 603 119 80% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/ha.py 1244 320 74% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 175 79% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 2308s Jul 30 23:08:06 
/usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 34 67% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 214 74% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 163 61% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 34 90% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/request.py 62 6 90% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/utils.py 350 120 66% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psutil/__init__.py 951 615 35% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/six.py 504 250 50% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 100 57% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 9 83% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/connection.py 324 99 69% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 120 65% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/contrib/__init__.py 0 0 100% 2308s Jul 30 23:08:06 
/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py 257 96 63% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 85 64% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/response.py 562 310 45% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 42 36% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 47 73% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 78 56% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 14 80% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 68 67% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 10 62% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 18 63% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/parser.py 352 198 44% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/reader.py 122 34 72% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/scanner.py 758 437 42% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 2308s Jul 30 23:08:06 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 2308s Jul 30 23:08:06 patroni/__init__.py 13 2 85% 2308s Jul 30 23:08:06 patroni/__main__.py 199 199 0% 2308s Jul 30 23:08:06 patroni/api.py 770 770 0% 2308s Jul 30 23:08:06 patroni/async_executor.py 96 69 28% 2308s Jul 30 23:08:06 patroni/collections.py 56 15 73% 2308s Jul 30 23:08:06 patroni/config.py 371 196 47% 2308s Jul 30 23:08:06 patroni/config_generator.py 212 212 0% 2308s Jul 30 23:08:06 patroni/ctl.py 936 411 56% 2308s Jul 30 23:08:06 patroni/daemon.py 76 76 0% 2308s Jul 30 23:08:06 patroni/dcs/__init__.py 646 270 58% 2308s Jul 30 23:08:06 
patroni/dcs/consul.py 485 485 0% 2308s Jul 30 23:08:06 patroni/dcs/etcd3.py 679 679 0% 2308s Jul 30 23:08:06 patroni/dcs/etcd.py 603 224 63% 2308s Jul 30 23:08:06 patroni/dcs/exhibitor.py 61 61 0% 2308s Jul 30 23:08:06 patroni/dcs/kubernetes.py 938 938 0% 2308s Jul 30 23:08:06 patroni/dcs/raft.py 319 319 0% 2308s Jul 30 23:08:06 patroni/dcs/zookeeper.py 288 288 0% 2308s Jul 30 23:08:06 patroni/dynamic_loader.py 35 7 80% 2308s Jul 30 23:08:06 patroni/exceptions.py 16 1 94% 2308s Jul 30 23:08:06 patroni/file_perm.py 43 15 65% 2308s Jul 30 23:08:06 patroni/global_config.py 81 18 78% 2308s Jul 30 23:08:06 patroni/ha.py 1244 1244 0% 2308s Jul 30 23:08:06 patroni/log.py 219 173 21% 2308s Jul 30 23:08:06 patroni/postgresql/__init__.py 821 651 21% 2308s Jul 30 23:08:06 patroni/postgresql/available_parameters/__init__.py 21 3 86% 2308s Jul 30 23:08:06 patroni/postgresql/bootstrap.py 252 222 12% 2308s Jul 30 23:08:06 patroni/postgresql/callback_executor.py 55 34 38% 2308s Jul 30 23:08:06 patroni/postgresql/cancellable.py 104 84 19% 2308s Jul 30 23:08:06 patroni/postgresql/config.py 813 698 14% 2308s Jul 30 23:08:06 patroni/postgresql/connection.py 75 50 33% 2308s Jul 30 23:08:06 patroni/postgresql/misc.py 41 29 29% 2308s Jul 30 23:08:06 patroni/postgresql/mpp/__init__.py 89 21 76% 2308s Jul 30 23:08:06 patroni/postgresql/mpp/citus.py 259 259 0% 2308s Jul 30 23:08:06 patroni/postgresql/postmaster.py 170 139 18% 2308s Jul 30 23:08:06 patroni/postgresql/rewind.py 416 416 0% 2308s Jul 30 23:08:06 patroni/postgresql/slots.py 334 285 15% 2308s Jul 30 23:08:06 patroni/postgresql/sync.py 130 96 26% 2308s Jul 30 23:08:06 patroni/postgresql/validator.py 157 52 67% 2308s Jul 30 23:08:06 patroni/psycopg.py 42 28 33% 2308s Jul 30 23:08:06 patroni/raft_controller.py 22 22 0% 2308s Jul 30 23:08:06 patroni/request.py 62 6 90% 2308s Jul 30 23:08:06 patroni/scripts/__init__.py 0 0 100% 2308s Jul 30 23:08:06 patroni/scripts/aws.py 59 59 0% 2308s Jul 30 23:08:06 patroni/scripts/barman/__init__.py 0 0 100% 2308s Jul 30 23:08:06 patroni/scripts/barman/cli.py 51 51 0% 2308s Jul 30 23:08:06 patroni/scripts/barman/config_switch.py 51 51 0% 2308s Jul 30 23:08:06 patroni/scripts/barman/recover.py 37 37 0% 2308s Jul 30 23:08:06 patroni/scripts/barman/utils.py 94 94 0% 2308s Jul 30 23:08:06 patroni/scripts/wale_restore.py 207 207 0% 2308s Jul 30 23:08:06 patroni/tags.py 38 11 71% 2308s Jul 30 23:08:06 patroni/utils.py 350 196 44% 2308s Jul 30 23:08:06 patroni/validator.py 301 215 29% 2308s Jul 30 23:08:06 patroni/version.py 1 0 100% 2308s Jul 30 23:08:06 patroni/watchdog/__init__.py 2 2 0% 2308s Jul 30 23:08:06 patroni/watchdog/base.py 203 203 0% 2308s Jul 30 23:08:06 patroni/watchdog/linux.py 135 135 0% 2308s Jul 30 23:08:06 ------------------------------------------------------------------------------------------------------------- 2308s Jul 30 23:08:06 TOTAL 53177 32257 39% 2308s Jul 30 23:08:07 12 features passed, 0 failed, 1 skipped 2308s Jul 30 23:08:07 55 scenarios passed, 0 failed, 5 skipped 2308s Jul 30 23:08:07 524 steps passed, 0 failed, 61 skipped, 0 undefined 2308s Jul 30 23:08:07 Took 8m49.592s 2308s ### End 16 acceptance-etcd ### 2308s + echo '### End 16 acceptance-etcd ###' 2308s + rm -f '/tmp/pgpass?' 
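The acceptance-etcd suite above drives the cluster almost entirely through Patroni's REST API: behave steps PATCH /config, POST to /failover and /reload, and then poll for the expected response code. A minimal sketch of issuing one such request by hand with the Python standard library, reusing the permanent-slot payload and the 127.0.0.1:8008 endpoint shown in the log; this is illustrative only and is not the code behind features/steps/patroni_api.py:

    # Send the same logical-slot definition the "permanent_slots" scenarios use.
    import json
    import urllib.request

    payload = {
        "slots": {
            "test_logical": {
                "type": "logical",
                "database": "postgres",
                "plugin": "test_decoding",
            }
        }
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8008/config",          # Patroni REST API of postgres0, as in the log
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)                       # the scenarios expect a 200 response code

As in the scenarios above, the change is then confirmed by polling GET http://127.0.0.1:8008/config until the new slots appear.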
2308s ++ id -u 2308s + '[' 0 -eq 0 ']' 2308s + '[' -x /etc/init.d/zookeeper ']' 2308s autopkgtest [23:08:07]: test acceptance-etcd: -----------------------] 2317s autopkgtest [23:08:16]: test acceptance-etcd: - - - - - - - - - - results - - - - - - - - - - 2317s acceptance-etcd PASS 2318s autopkgtest [23:08:17]: test acceptance-zookeeper: preparing testbed 2572s autopkgtest [23:12:31]: testbed dpkg architecture: s390x 2573s autopkgtest [23:12:32]: testbed apt version: 2.9.6 2573s autopkgtest [23:12:32]: @@@@@@@@@@@@@@@@@@@@ test bed setup 2574s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [126 kB] 2575s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [52.0 kB] 2575s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [6368 B] 2575s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [8548 B] 2575s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [514 kB] 2575s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x Packages [73.3 kB] 2575s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x c-n-f Metadata [2112 B] 2575s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x Packages [1368 B] 2575s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x c-n-f Metadata [120 B] 2575s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x Packages [433 kB] 2575s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x c-n-f Metadata [8372 B] 2575s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x Packages [3620 B] 2575s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x c-n-f Metadata [120 B] 2575s Fetched 1229 kB in 1s (1517 kB/s) 2575s Reading package lists... 2579s Reading package lists... 2579s Building dependency tree... 2579s Reading state information... 2580s Calculating upgrade... 2580s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2580s Reading package lists... 2580s Building dependency tree... 2580s Reading state information... 2580s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2581s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 2581s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 2581s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 2581s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 2582s Reading package lists... 2582s Reading package lists... 2582s Building dependency tree... 2582s Reading state information... 2582s Calculating upgrade... 2583s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2583s Reading package lists... 2583s Building dependency tree... 2583s Reading state information... 2583s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2641s Reading package lists... 2641s Building dependency tree... 2641s Reading state information... 
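The acceptance-zookeeper testbed being prepared below pulls in zookeeperd and python3-kazoo, so the same behave suite runs against ZooKeeper instead of etcd as the DCS. A minimal connectivity sketch, assuming the zookeeperd default of a local server on 127.0.0.1:2181; this is not part of the test suite, just an illustration of what the kazoo dependency provides:

    # Check that a local ZooKeeper answers, using kazoo as the test dependencies
    # suggest; host and port are the zookeeperd defaults, assumed here.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start(timeout=10)           # raises if the server cannot be reached
    print(zk.get_children("/"))    # a fresh install typically shows ['zookeeper']
    zk.stop()
    zk.close()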
2642s Starting pkgProblemResolver with broken count: 0 2642s Starting 2 pkgProblemResolver with broken count: 0 2642s Done 2642s The following additional packages will be installed: 2642s adwaita-icon-theme at-spi2-common ca-certificates-java 2642s dconf-gsettings-backend dconf-service default-jre default-jre-headless 2642s fontconfig fonts-font-awesome fonts-lato gtk-update-icon-cache 2642s hicolor-icon-theme humanity-icon-theme java-common junit4 libactivation-java 2642s libapache-pom-java libapr1t64 libasm-java libasound2-data libasound2t64 2642s libatinject-jsr330-api-java libatk-bridge2.0-0t64 libatk1.0-0t64 2642s libatspi2.0-0t64 libavahi-client3 libavahi-common-data libavahi-common3 2642s libcairo-gobject2 libcairo2 libcares2 libcolord2 libcommons-cli-java 2642s libcommons-io-java libcommons-logging-java libcommons-parent-java 2642s libcups2t64 libdatrie1 libdconf1 libdrm-amdgpu1 libdrm-nouveau2 2642s libdrm-radeon1 libdropwizard-metrics-java libeclipse-jdt-core-java 2642s libel-api-java libepoxy0 liberror-prone-java libev4t64 2642s libfindbugs-annotations-java libgdk-pixbuf-2.0-0 libgdk-pixbuf2.0-common 2642s libgif7 libgl1 libgl1-mesa-dri libglapi-mesa libglvnd0 libglx-mesa0 libglx0 2642s libgtk-3-0t64 libgtk-3-common libguava-java libhamcrest-java libio-pty-perl 2642s libipc-run-perl libjackson2-annotations-java libjackson2-core-java 2642s libjackson2-databind-java libjaxb-api-java libjctools-java 2642s libjetty9-extra-java libjetty9-java libjffi-java libjffi-jni 2642s libjnr-constants-java libjnr-enxio-java libjnr-ffi-java libjnr-posix-java 2642s libjnr-unixsocket-java libjnr-x86asm-java libjs-jquery libjs-sphinxdoc 2642s libjs-underscore libjson-perl libjsp-api-java libjsr305-java liblcms2-2 2642s libllvm17t64 liblog4j1.2-java libmail-java libnetty-java 2642s libnetty-tcnative-java libnetty-tcnative-jni libpango-1.0-0 2642s libpangocairo-1.0-0 libpangoft2-1.0-0 libpcsclite1 libpixman-1-0 libpq5 2642s libservlet-api-java libslf4j-java libsnappy-java libsnappy-jni libsnappy1v5 2642s libspring-beans-java libspring-core-java libtaglibs-standard-impl-java 2642s libtaglibs-standard-spec-java libthai-data libthai0 libtime-duration-perl 2642s libtimedate-perl libtomcat9-java libvulkan1 libwayland-client0 2642s libwayland-cursor0 libwayland-egl1 libwebsocket-api-java libx11-xcb1 2642s libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 2642s libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 2642s libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxrandr2 2642s libxrender1 libxshmfence1 libxslt1.1 libxtst6 libxxf86vm1 libzookeeper-java 2642s moreutils openjdk-21-jre openjdk-21-jre-headless patroni patroni-doc 2642s postgresql postgresql-16 postgresql-client-16 postgresql-client-common 2642s postgresql-common python3-behave python3-cdiff python3-click 2642s python3-colorama python3-coverage python3-dateutil python3-dnspython 2642s python3-eventlet python3-gevent python3-greenlet python3-kazoo 2642s python3-kerberos python3-parse python3-parse-type python3-prettytable 2642s python3-psutil python3-psycopg2 python3-pure-sasl python3-six 2642s python3-wcwidth python3-zope.event python3-zope.interface 2642s sphinx-rtd-theme-common ssl-cert ubuntu-mono x11-common zookeeper zookeeperd 2642s Suggested packages: 2642s alsa-utils libasound2-plugins libatinject-jsr330-api-java-doc colord 2642s libavalon-framework-java libexcalibur-logkit-java cups-common gvfs 2642s libjackson2-annotations-java-doc jetty9 libjnr-ffi-java-doc 2642s libjnr-posix-java-doc 
libjsr305-java-doc liblcms2-utils liblog4j1.2-java-doc 2642s libmail-java-doc libbcpkix-java libcompress-lzf-java libjzlib-java 2642s liblog4j2-java libprotobuf-java pcscd libcglib-java libyaml-snake-java 2642s libaspectj-java libcommons-collections3-java tomcat9 libzookeeper-java-doc 2642s libnss-mdns fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho 2642s fonts-wqy-microhei | fonts-wqy-zenhei fonts-indic vip-manager haproxy 2642s postgresql-doc postgresql-doc-16 python-coverage-doc python3-trio 2642s python3-aioquic python3-h2 python3-httpx python3-httpcore 2642s python-eventlet-doc python-gevent-doc python-greenlet-dev 2642s python-greenlet-doc python-kazoo-doc python-psycopg2-doc 2642s Recommended packages: 2642s librsvg2-common alsa-ucm-conf alsa-topology-conf at-spi2-core 2642s libgdk-pixbuf2.0-bin libgl1-amber-dri libgtk-3-bin javascript-common 2642s libjson-xs-perl mesa-vulkan-drivers | vulkan-icd libatk-wrapper-java-jni 2642s fonts-dejavu-extra 2642s The following NEW packages will be installed: 2642s adwaita-icon-theme at-spi2-common autopkgtest-satdep ca-certificates-java 2642s dconf-gsettings-backend dconf-service default-jre default-jre-headless 2642s fontconfig fonts-font-awesome fonts-lato gtk-update-icon-cache 2642s hicolor-icon-theme humanity-icon-theme java-common junit4 libactivation-java 2642s libapache-pom-java libapr1t64 libasm-java libasound2-data libasound2t64 2642s libatinject-jsr330-api-java libatk-bridge2.0-0t64 libatk1.0-0t64 2642s libatspi2.0-0t64 libavahi-client3 libavahi-common-data libavahi-common3 2642s libcairo-gobject2 libcairo2 libcares2 libcolord2 libcommons-cli-java 2642s libcommons-io-java libcommons-logging-java libcommons-parent-java 2642s libcups2t64 libdatrie1 libdconf1 libdrm-amdgpu1 libdrm-nouveau2 2642s libdrm-radeon1 libdropwizard-metrics-java libeclipse-jdt-core-java 2642s libel-api-java libepoxy0 liberror-prone-java libev4t64 2642s libfindbugs-annotations-java libgdk-pixbuf-2.0-0 libgdk-pixbuf2.0-common 2642s libgif7 libgl1 libgl1-mesa-dri libglapi-mesa libglvnd0 libglx-mesa0 libglx0 2642s libgtk-3-0t64 libgtk-3-common libguava-java libhamcrest-java libio-pty-perl 2642s libipc-run-perl libjackson2-annotations-java libjackson2-core-java 2642s libjackson2-databind-java libjaxb-api-java libjctools-java 2642s libjetty9-extra-java libjetty9-java libjffi-java libjffi-jni 2642s libjnr-constants-java libjnr-enxio-java libjnr-ffi-java libjnr-posix-java 2642s libjnr-unixsocket-java libjnr-x86asm-java libjs-jquery libjs-sphinxdoc 2642s libjs-underscore libjson-perl libjsp-api-java libjsr305-java liblcms2-2 2642s libllvm17t64 liblog4j1.2-java libmail-java libnetty-java 2642s libnetty-tcnative-java libnetty-tcnative-jni libpango-1.0-0 2642s libpangocairo-1.0-0 libpangoft2-1.0-0 libpcsclite1 libpixman-1-0 libpq5 2642s libservlet-api-java libslf4j-java libsnappy-java libsnappy-jni libsnappy1v5 2642s libspring-beans-java libspring-core-java libtaglibs-standard-impl-java 2642s libtaglibs-standard-spec-java libthai-data libthai0 libtime-duration-perl 2642s libtimedate-perl libtomcat9-java libvulkan1 libwayland-client0 2642s libwayland-cursor0 libwayland-egl1 libwebsocket-api-java libx11-xcb1 2642s libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-randr0 2642s libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-xfixes0 libxcomposite1 2642s libxcursor1 libxdamage1 libxfixes3 libxi6 libxinerama1 libxrandr2 2642s libxrender1 libxshmfence1 libxslt1.1 libxtst6 libxxf86vm1 libzookeeper-java 2642s moreutils openjdk-21-jre openjdk-21-jre-headless 
patroni patroni-doc 2642s postgresql postgresql-16 postgresql-client-16 postgresql-client-common 2642s postgresql-common python3-behave python3-cdiff python3-click 2642s python3-colorama python3-coverage python3-dateutil python3-dnspython 2642s python3-eventlet python3-gevent python3-greenlet python3-kazoo 2642s python3-kerberos python3-parse python3-parse-type python3-prettytable 2642s python3-psutil python3-psycopg2 python3-pure-sasl python3-six 2642s python3-wcwidth python3-zope.event python3-zope.interface 2642s sphinx-rtd-theme-common ssl-cert ubuntu-mono x11-common zookeeper zookeeperd 2642s 0 upgraded, 179 newly installed, 0 to remove and 0 not upgraded. 2642s Need to get 156 MB/156 MB of archives. 2642s After this operation, 585 MB of additional disk space will be used. 2642s Get:1 /tmp/autopkgtest.qFf46z/4-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [764 B] 2642s Get:2 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-lato all 2.015-1 [2781 kB] 2643s Get:3 http://ftpmaster.internal/ubuntu oracular/main s390x libjson-perl all 4.10000-1 [81.9 kB] 2643s Get:4 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-common all 261 [36.6 kB] 2643s Get:5 http://ftpmaster.internal/ubuntu oracular/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB] 2643s Get:6 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-common all 261 [162 kB] 2643s Get:7 http://ftpmaster.internal/ubuntu oracular/main s390x ca-certificates-java all 20240118 [11.6 kB] 2643s Get:8 http://ftpmaster.internal/ubuntu oracular/main s390x java-common all 0.75+exp1 [6798 B] 2643s Get:9 http://ftpmaster.internal/ubuntu oracular/main s390x liblcms2-2 s390x 2.14-2build1 [172 kB] 2643s Get:10 http://ftpmaster.internal/ubuntu oracular/main s390x libpcsclite1 s390x 2.2.3-1 [23.8 kB] 2643s Get:11 http://ftpmaster.internal/ubuntu oracular/main s390x openjdk-21-jre-headless s390x 21.0.4+7-1ubuntu2 [44.0 MB] 2645s Get:12 http://ftpmaster.internal/ubuntu oracular/main s390x default-jre-headless s390x 2:1.21-75+exp1 [3094 B] 2645s Get:13 http://ftpmaster.internal/ubuntu oracular/main s390x libgdk-pixbuf2.0-common all 2.42.12+dfsg-1 [7888 B] 2645s Get:14 http://ftpmaster.internal/ubuntu oracular/main s390x libgdk-pixbuf-2.0-0 s390x 2.42.12+dfsg-1 [152 kB] 2645s Get:15 http://ftpmaster.internal/ubuntu oracular/main s390x gtk-update-icon-cache s390x 3.24.43-1ubuntu1 [52.4 kB] 2645s Get:16 http://ftpmaster.internal/ubuntu oracular/main s390x hicolor-icon-theme all 0.18-1 [13.5 kB] 2645s Get:17 http://ftpmaster.internal/ubuntu oracular/main s390x humanity-icon-theme all 0.6.16 [1282 kB] 2645s Get:18 http://ftpmaster.internal/ubuntu oracular/main s390x ubuntu-mono all 24.04-0ubuntu1 [151 kB] 2645s Get:19 http://ftpmaster.internal/ubuntu oracular/main s390x adwaita-icon-theme all 46.0-1 [723 kB] 2645s Get:20 http://ftpmaster.internal/ubuntu oracular/main s390x at-spi2-common all 2.52.0-1build1 [8674 B] 2645s Get:21 http://ftpmaster.internal/ubuntu oracular/main s390x libatk1.0-0t64 s390x 2.52.0-1build1 [56.4 kB] 2645s Get:22 http://ftpmaster.internal/ubuntu oracular/main s390x libxi6 s390x 2:1.8.1-1build1 [35.7 kB] 2645s Get:23 http://ftpmaster.internal/ubuntu oracular/main s390x libatspi2.0-0t64 s390x 2.52.0-1build1 [81.1 kB] 2645s Get:24 http://ftpmaster.internal/ubuntu oracular/main s390x libatk-bridge2.0-0t64 s390x 2.52.0-1build1 [66.9 kB] 2645s Get:25 http://ftpmaster.internal/ubuntu oracular/main s390x libpixman-1-0 s390x 0.42.2-1build1 [206 kB] 2645s Get:26 http://ftpmaster.internal/ubuntu 
oracular/main s390x libxcb-render0 s390x 1.17.0-2 [17.0 kB] 2645s Get:27 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-shm0 s390x 1.17.0-2 [5862 B] 2645s Get:28 http://ftpmaster.internal/ubuntu oracular/main s390x libxrender1 s390x 1:0.9.10-1.1build1 [20.4 kB] 2645s Get:29 http://ftpmaster.internal/ubuntu oracular/main s390x libcairo2 s390x 1.18.0-3build1 [589 kB] 2645s Get:30 http://ftpmaster.internal/ubuntu oracular/main s390x libcairo-gobject2 s390x 1.18.0-3build1 [127 kB] 2645s Get:31 http://ftpmaster.internal/ubuntu oracular/main s390x libcolord2 s390x 1.4.7-1build2 [151 kB] 2645s Get:32 http://ftpmaster.internal/ubuntu oracular/main s390x libavahi-common-data s390x 0.8-13ubuntu6 [29.7 kB] 2645s Get:33 http://ftpmaster.internal/ubuntu oracular/main s390x libavahi-common3 s390x 0.8-13ubuntu6 [24.1 kB] 2645s Get:34 http://ftpmaster.internal/ubuntu oracular/main s390x libavahi-client3 s390x 0.8-13ubuntu6 [27.2 kB] 2645s Get:35 http://ftpmaster.internal/ubuntu oracular/main s390x libcups2t64 s390x 2.4.7-1.2ubuntu9 [277 kB] 2645s Get:36 http://ftpmaster.internal/ubuntu oracular/main s390x libepoxy0 s390x 1.5.10-1build1 [224 kB] 2645s Get:37 http://ftpmaster.internal/ubuntu oracular/main s390x fontconfig s390x 2.15.0-1.1ubuntu2 [191 kB] 2645s Get:38 http://ftpmaster.internal/ubuntu oracular/main s390x libthai-data all 0.1.29-2build1 [158 kB] 2645s Get:39 http://ftpmaster.internal/ubuntu oracular/main s390x libdatrie1 s390x 0.2.13-3build1 [20.6 kB] 2645s Get:40 http://ftpmaster.internal/ubuntu oracular/main s390x libthai0 s390x 0.1.29-2build1 [20.7 kB] 2645s Get:41 http://ftpmaster.internal/ubuntu oracular/main s390x libpango-1.0-0 s390x 1.54.0+ds-1 [243 kB] 2645s Get:42 http://ftpmaster.internal/ubuntu oracular/main s390x libpangoft2-1.0-0 s390x 1.54.0+ds-1 [43.4 kB] 2645s Get:43 http://ftpmaster.internal/ubuntu oracular/main s390x libpangocairo-1.0-0 s390x 1.54.0+ds-1 [28.2 kB] 2645s Get:44 http://ftpmaster.internal/ubuntu oracular/main s390x libwayland-client0 s390x 1.23.0-1 [27.6 kB] 2645s Get:45 http://ftpmaster.internal/ubuntu oracular/main s390x libwayland-cursor0 s390x 1.23.0-1 [11.5 kB] 2645s Get:46 http://ftpmaster.internal/ubuntu oracular/main s390x libwayland-egl1 s390x 1.23.0-1 [5584 B] 2645s Get:47 http://ftpmaster.internal/ubuntu oracular/main s390x libxcomposite1 s390x 1:0.4.5-1build3 [6340 B] 2645s Get:48 http://ftpmaster.internal/ubuntu oracular/main s390x libxfixes3 s390x 1:6.0.0-2build1 [11.3 kB] 2645s Get:49 http://ftpmaster.internal/ubuntu oracular/main s390x libxcursor1 s390x 1:1.2.2-1 [22.7 kB] 2645s Get:50 http://ftpmaster.internal/ubuntu oracular/main s390x libxdamage1 s390x 1:1.1.6-1build1 [6156 B] 2645s Get:51 http://ftpmaster.internal/ubuntu oracular/main s390x libxinerama1 s390x 2:1.1.4-3build1 [6476 B] 2645s Get:52 http://ftpmaster.internal/ubuntu oracular/main s390x libxrandr2 s390x 2:1.5.4-1 [20.8 kB] 2645s Get:53 http://ftpmaster.internal/ubuntu oracular/main s390x libdconf1 s390x 0.40.0-4build2 [40.3 kB] 2645s Get:54 http://ftpmaster.internal/ubuntu oracular/main s390x dconf-service s390x 0.40.0-4build2 [28.6 kB] 2645s Get:55 http://ftpmaster.internal/ubuntu oracular/main s390x dconf-gsettings-backend s390x 0.40.0-4build2 [23.2 kB] 2645s Get:56 http://ftpmaster.internal/ubuntu oracular/main s390x libgtk-3-common all 3.24.43-1ubuntu1 [1201 kB] 2645s Get:57 http://ftpmaster.internal/ubuntu oracular/main s390x libgtk-3-0t64 s390x 3.24.43-1ubuntu1 [2908 kB] 2645s Get:58 http://ftpmaster.internal/ubuntu oracular/main s390x libglvnd0 s390x 
1.7.0-1build1 [110 kB] 2645s Get:59 http://ftpmaster.internal/ubuntu oracular/main s390x libglapi-mesa s390x 24.0.9-0ubuntu2 [65.9 kB] 2645s Get:60 http://ftpmaster.internal/ubuntu oracular/main s390x libx11-xcb1 s390x 2:1.8.7-1build1 [7826 B] 2645s Get:61 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-dri2-0 s390x 1.17.0-2 [7448 B] 2645s Get:62 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-dri3-0 s390x 1.17.0-2 [7616 B] 2645s Get:63 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-glx0 s390x 1.17.0-2 [26.0 kB] 2645s Get:64 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-present0 s390x 1.17.0-2 [6244 B] 2645s Get:65 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-randr0 s390x 1.17.0-2 [19.2 kB] 2645s Get:66 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-sync1 s390x 1.17.0-2 [9488 B] 2645s Get:67 http://ftpmaster.internal/ubuntu oracular/main s390x libxcb-xfixes0 s390x 1.17.0-2 [10.5 kB] 2645s Get:68 http://ftpmaster.internal/ubuntu oracular/main s390x libxshmfence1 s390x 1.3-1build5 [4772 B] 2645s Get:69 http://ftpmaster.internal/ubuntu oracular/main s390x libxxf86vm1 s390x 1:1.1.4-1build4 [9630 B] 2645s Get:70 http://ftpmaster.internal/ubuntu oracular/main s390x libvulkan1 s390x 1.3.283.0-1 [156 kB] 2645s Get:71 http://ftpmaster.internal/ubuntu oracular/main s390x libdrm-amdgpu1 s390x 2.4.121-2 [21.3 kB] 2645s Get:72 http://ftpmaster.internal/ubuntu oracular/main s390x libdrm-nouveau2 s390x 2.4.121-2 [18.1 kB] 2645s Get:73 http://ftpmaster.internal/ubuntu oracular/main s390x libdrm-radeon1 s390x 2.4.121-2 [22.2 kB] 2645s Get:74 http://ftpmaster.internal/ubuntu oracular/main s390x libllvm17t64 s390x 1:17.0.6-12 [31.0 MB] 2646s Get:75 http://ftpmaster.internal/ubuntu oracular/main s390x libgl1-mesa-dri s390x 24.0.9-0ubuntu2 [7077 kB] 2647s Get:76 http://ftpmaster.internal/ubuntu oracular/main s390x libglx-mesa0 s390x 24.0.9-0ubuntu2 [174 kB] 2647s Get:77 http://ftpmaster.internal/ubuntu oracular/main s390x libglx0 s390x 1.7.0-1build1 [32.2 kB] 2647s Get:78 http://ftpmaster.internal/ubuntu oracular/main s390x libgl1 s390x 1.7.0-1build1 [142 kB] 2647s Get:79 http://ftpmaster.internal/ubuntu oracular/main s390x libasound2-data all 1.2.12-1 [21.0 kB] 2647s Get:80 http://ftpmaster.internal/ubuntu oracular/main s390x libasound2t64 s390x 1.2.12-1 [408 kB] 2647s Get:81 http://ftpmaster.internal/ubuntu oracular/main s390x libgif7 s390x 5.2.2-1ubuntu1 [38.0 kB] 2647s Get:82 http://ftpmaster.internal/ubuntu oracular/main s390x x11-common all 1:7.7+23ubuntu3 [21.7 kB] 2647s Get:83 http://ftpmaster.internal/ubuntu oracular/main s390x libxtst6 s390x 2:1.2.3-1.1build1 [13.4 kB] 2647s Get:84 http://ftpmaster.internal/ubuntu oracular/main s390x openjdk-21-jre s390x 21.0.4+7-1ubuntu2 [234 kB] 2647s Get:85 http://ftpmaster.internal/ubuntu oracular/main s390x default-jre s390x 2:1.21-75+exp1 [922 B] 2647s Get:86 http://ftpmaster.internal/ubuntu oracular/universe s390x libhamcrest-java all 2.2-2 [117 kB] 2647s Get:87 http://ftpmaster.internal/ubuntu oracular/universe s390x junit4 all 4.13.2-4 [347 kB] 2647s Get:88 http://ftpmaster.internal/ubuntu oracular/universe s390x libcommons-cli-java all 1.6.0-1 [59.9 kB] 2647s Get:89 http://ftpmaster.internal/ubuntu oracular/universe s390x libapache-pom-java all 29-2 [5284 B] 2647s Get:90 http://ftpmaster.internal/ubuntu oracular/universe s390x libcommons-parent-java all 56-1 [10.7 kB] 2647s Get:91 http://ftpmaster.internal/ubuntu oracular/universe s390x libcommons-io-java all 2.16.1-1 [451 kB] 
2647s Get:92 http://ftpmaster.internal/ubuntu oracular/universe s390x libdropwizard-metrics-java all 3.2.6-1 [240 kB] 2647s Get:93 http://ftpmaster.internal/ubuntu oracular/universe s390x libfindbugs-annotations-java all 3.1.0~preview2-3 [49.2 kB] 2647s Get:94 http://ftpmaster.internal/ubuntu oracular/universe s390x libatinject-jsr330-api-java all 1.0+ds1-5 [5348 B] 2647s Get:95 http://ftpmaster.internal/ubuntu oracular/universe s390x liberror-prone-java all 2.18.0-1 [22.5 kB] 2647s Get:96 http://ftpmaster.internal/ubuntu oracular/universe s390x libjsr305-java all 0.1~+svn49-11 [27.0 kB] 2647s Get:97 http://ftpmaster.internal/ubuntu oracular/universe s390x libguava-java all 32.0.1-1 [2692 kB] 2647s Get:98 http://ftpmaster.internal/ubuntu oracular/universe s390x libjackson2-annotations-java all 2.14.0-1 [64.7 kB] 2647s Get:99 http://ftpmaster.internal/ubuntu oracular/universe s390x libjackson2-core-java all 2.14.1-1 [432 kB] 2647s Get:100 http://ftpmaster.internal/ubuntu oracular/universe s390x libjackson2-databind-java all 2.14.0-1 [1531 kB] 2647s Get:101 http://ftpmaster.internal/ubuntu oracular/universe s390x libasm-java all 9.7-1 [392 kB] 2647s Get:102 http://ftpmaster.internal/ubuntu oracular/universe s390x libel-api-java all 3.0.0-3 [64.9 kB] 2647s Get:103 http://ftpmaster.internal/ubuntu oracular/universe s390x libjsp-api-java all 2.3.4-3 [53.7 kB] 2647s Get:104 http://ftpmaster.internal/ubuntu oracular/universe s390x libservlet-api-java all 4.0.1-2 [81.0 kB] 2647s Get:105 http://ftpmaster.internal/ubuntu oracular/universe s390x libwebsocket-api-java all 1.1-2 [40.1 kB] 2648s Get:106 http://ftpmaster.internal/ubuntu oracular/universe s390x libjetty9-java all 9.4.54-1 [2787 kB] 2648s Get:107 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-constants-java all 0.10.4-2 [1397 kB] 2648s Get:108 http://ftpmaster.internal/ubuntu oracular/universe s390x libjffi-jni s390x 1.3.13+ds-1 [30.7 kB] 2648s Get:109 http://ftpmaster.internal/ubuntu oracular/universe s390x libjffi-java all 1.3.13+ds-1 [112 kB] 2648s Get:110 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-x86asm-java all 1.0.2-5.1 [207 kB] 2648s Get:111 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-ffi-java all 2.2.15-2 [627 kB] 2648s Get:112 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-enxio-java all 0.32.16-1 [33.7 kB] 2648s Get:113 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-posix-java all 3.1.18-1 [267 kB] 2648s Get:114 http://ftpmaster.internal/ubuntu oracular/universe s390x libjnr-unixsocket-java all 0.38.21-2 [46.9 kB] 2648s Get:115 http://ftpmaster.internal/ubuntu oracular/universe s390x libactivation-java all 1.2.0-2 [84.7 kB] 2648s Get:116 http://ftpmaster.internal/ubuntu oracular/universe s390x libmail-java all 1.6.5-2 [681 kB] 2648s Get:117 http://ftpmaster.internal/ubuntu oracular/universe s390x libcommons-logging-java all 1.3.0-1ubuntu1 [63.8 kB] 2648s Get:118 http://ftpmaster.internal/ubuntu oracular/universe s390x libjaxb-api-java all 2.3.1-1 [119 kB] 2648s Get:119 http://ftpmaster.internal/ubuntu oracular/universe s390x libspring-core-java all 4.3.30-2 [1015 kB] 2648s Get:120 http://ftpmaster.internal/ubuntu oracular/universe s390x libspring-beans-java all 4.3.30-2 [675 kB] 2648s Get:121 http://ftpmaster.internal/ubuntu oracular/universe s390x libtaglibs-standard-spec-java all 1.2.5-3 [35.2 kB] 2648s Get:122 http://ftpmaster.internal/ubuntu oracular/universe s390x libtaglibs-standard-impl-java all 1.2.5-3 [182 kB] 2648s Get:123 
http://ftpmaster.internal/ubuntu oracular/universe s390x libeclipse-jdt-core-java all 3.32.0+eclipse4.26-2 [6438 kB] 2648s Get:124 http://ftpmaster.internal/ubuntu oracular/universe s390x libtomcat9-java all 9.0.70-2 [6154 kB] 2649s Get:125 http://ftpmaster.internal/ubuntu oracular/universe s390x libjetty9-extra-java all 9.4.54-1 [1199 kB] 2649s Get:126 http://ftpmaster.internal/ubuntu oracular/universe s390x libjctools-java all 2.0.2-1 [188 kB] 2649s Get:127 http://ftpmaster.internal/ubuntu oracular/universe s390x libnetty-java all 1:4.1.48-10 [3628 kB] 2649s Get:128 http://ftpmaster.internal/ubuntu oracular/universe s390x libslf4j-java all 1.7.32-1 [141 kB] 2649s Get:129 http://ftpmaster.internal/ubuntu oracular/main s390x libsnappy1v5 s390x 1.2.1-1 [33.0 kB] 2649s Get:130 http://ftpmaster.internal/ubuntu oracular/universe s390x libsnappy-jni s390x 1.1.10.5-2 [6716 B] 2649s Get:131 http://ftpmaster.internal/ubuntu oracular/universe s390x libsnappy-java all 1.1.10.5-2 [83.7 kB] 2649s Get:132 http://ftpmaster.internal/ubuntu oracular/main s390x libapr1t64 s390x 1.7.2-3.2 [113 kB] 2649s Get:133 http://ftpmaster.internal/ubuntu oracular/universe s390x libnetty-tcnative-jni s390x 2.0.28-1build4 [36.8 kB] 2649s Get:134 http://ftpmaster.internal/ubuntu oracular/universe s390x libnetty-tcnative-java all 2.0.28-1build4 [24.8 kB] 2649s Get:135 http://ftpmaster.internal/ubuntu oracular/universe s390x liblog4j1.2-java all 1.2.17-11 [439 kB] 2649s Get:136 http://ftpmaster.internal/ubuntu oracular/universe s390x libzookeeper-java all 3.9.2-2 [1885 kB] 2649s Get:137 http://ftpmaster.internal/ubuntu oracular/universe s390x zookeeper all 3.9.2-2 [57.8 kB] 2649s Get:138 http://ftpmaster.internal/ubuntu oracular/universe s390x zookeeperd all 3.9.2-2 [6036 B] 2649s Get:139 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 2649s Get:140 http://ftpmaster.internal/ubuntu oracular/main s390x libcares2 s390x 1.32.3-1 [85.4 kB] 2649s Get:141 http://ftpmaster.internal/ubuntu oracular/universe s390x libev4t64 s390x 1:4.33-2.1build1 [32.0 kB] 2649s Get:142 http://ftpmaster.internal/ubuntu oracular/main s390x libio-pty-perl s390x 1:1.20-1build2 [31.3 kB] 2649s Get:143 http://ftpmaster.internal/ubuntu oracular/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB] 2649s Get:144 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 2649s Get:145 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 2649s Get:146 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x libjs-sphinxdoc all 7.3.7-4 [154 kB] 2649s Get:147 http://ftpmaster.internal/ubuntu oracular/main s390x libpq5 s390x 16.3-1 [144 kB] 2649s Get:148 http://ftpmaster.internal/ubuntu oracular/main s390x libtime-duration-perl all 1.21-2 [12.3 kB] 2649s Get:149 http://ftpmaster.internal/ubuntu oracular/main s390x libtimedate-perl all 2.3300-2 [34.0 kB] 2649s Get:150 http://ftpmaster.internal/ubuntu oracular/main s390x libxslt1.1 s390x 1.1.39-0exp1build1 [170 kB] 2649s Get:151 http://ftpmaster.internal/ubuntu oracular/universe s390x moreutils s390x 0.69-1 [57.4 kB] 2649s Get:152 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-cdiff all 1.0-1.1 [16.4 kB] 2649s Get:153 http://ftpmaster.internal/ubuntu oracular/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 2649s Get:154 http://ftpmaster.internal/ubuntu oracular/main s390x python3-click all 8.1.7-2 [79.5 kB] 2649s Get:155 
http://ftpmaster.internal/ubuntu oracular/main s390x python3-six all 1.16.0-6 [13.0 kB] 2649s Get:156 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dateutil all 2.9.0-2 [80.3 kB] 2649s Get:157 http://ftpmaster.internal/ubuntu oracular/main s390x python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 2649s Get:158 http://ftpmaster.internal/ubuntu oracular/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 2649s Get:159 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 2649s Get:160 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psycopg2 s390x 2.9.9-1build1 [133 kB] 2649s Get:161 http://ftpmaster.internal/ubuntu oracular/main s390x python3-greenlet s390x 3.0.3-0ubuntu5 [156 kB] 2649s Get:162 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB] 2649s Get:163 http://ftpmaster.internal/ubuntu oracular/main s390x python3-eventlet all 0.35.2-0ubuntu1 [274 kB] 2649s Get:164 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-zope.event all 5.0-0.1 [7512 B] 2649s Get:165 http://ftpmaster.internal/ubuntu oracular/main s390x python3-zope.interface s390x 6.4-1 [137 kB] 2649s Get:166 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-gevent s390x 24.2.1-1 [835 kB] 2649s Get:167 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-kerberos s390x 1.1.14-3.1build9 [21.4 kB] 2649s Get:168 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pure-sasl all 0.5.1+dfsg1-4 [11.4 kB] 2649s Get:169 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-kazoo all 2.9.0-2 [103 kB] 2649s Get:170 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni all 3.3.1-1 [264 kB] 2649s Get:171 http://ftpmaster.internal/ubuntu oracular/main s390x sphinx-rtd-theme-common all 2.0.0+dfsg-2 [1012 kB] 2650s Get:172 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni-doc all 3.3.1-1 [497 kB] 2650s Get:173 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-16 s390x 16.3-1 [1290 kB] 2650s Get:174 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-16 s390x 16.3-1 [16.7 MB] 2650s Get:175 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql all 16+261 [11.7 kB] 2650s Get:176 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse all 1.20.2-1 [27.0 kB] 2650s Get:177 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse-type all 0.6.2-1 [22.7 kB] 2650s Get:178 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-behave all 1.2.6-5 [98.4 kB] 2650s Get:179 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 2651s Preconfiguring packages ... 2651s Fetched 156 MB in 9s (18.3 MB/s) 2651s Selecting previously unselected package fonts-lato. 2651s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 54832 files and directories currently installed.) 
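
Note on the first archive fetched above (Get:1, 764 B, version 0): it does not come from the mirror. autopkgtest feeds apt a locally generated, essentially empty autopkgtest-satdep package whose Depends field lists the test's dependencies, so the whole set of 179 new packages is resolved and installed in one apt transaction. A small inspection sketch, assuming a shell on this testbed while it is being prepared; the .deb path is the one printed in the log:

dpkg-deb --info /tmp/autopkgtest.qFf46z/4-autopkgtest-satdep.deb   # empty payload; only its Depends field matters
dpkg -s autopkgtest-satdep   # installed at version 0 during setup; autopkgtest removes it again before the test runs
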
2651s Preparing to unpack .../000-fonts-lato_2.015-1_all.deb ... 2651s Unpacking fonts-lato (2.015-1) ... 2651s Selecting previously unselected package libjson-perl. 2651s Preparing to unpack .../001-libjson-perl_4.10000-1_all.deb ... 2651s Unpacking libjson-perl (4.10000-1) ... 2651s Selecting previously unselected package postgresql-client-common. 2651s Preparing to unpack .../002-postgresql-client-common_261_all.deb ... 2652s Unpacking postgresql-client-common (261) ... 2652s Selecting previously unselected package ssl-cert. 2652s Preparing to unpack .../003-ssl-cert_1.1.2ubuntu2_all.deb ... 2652s Unpacking ssl-cert (1.1.2ubuntu2) ... 2652s Selecting previously unselected package postgresql-common. 2652s Preparing to unpack .../004-postgresql-common_261_all.deb ... 2652s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 2652s Unpacking postgresql-common (261) ... 2652s Selecting previously unselected package ca-certificates-java. 2652s Preparing to unpack .../005-ca-certificates-java_20240118_all.deb ... 2652s Unpacking ca-certificates-java (20240118) ... 2652s Selecting previously unselected package java-common. 2652s Preparing to unpack .../006-java-common_0.75+exp1_all.deb ... 2652s Unpacking java-common (0.75+exp1) ... 2652s Selecting previously unselected package liblcms2-2:s390x. 2652s Preparing to unpack .../007-liblcms2-2_2.14-2build1_s390x.deb ... 2652s Unpacking liblcms2-2:s390x (2.14-2build1) ... 2652s Selecting previously unselected package libpcsclite1:s390x. 2652s Preparing to unpack .../008-libpcsclite1_2.2.3-1_s390x.deb ... 2652s Unpacking libpcsclite1:s390x (2.2.3-1) ... 2652s Selecting previously unselected package openjdk-21-jre-headless:s390x. 2652s Preparing to unpack .../009-openjdk-21-jre-headless_21.0.4+7-1ubuntu2_s390x.deb ... 2652s Unpacking openjdk-21-jre-headless:s390x (21.0.4+7-1ubuntu2) ... 2653s Selecting previously unselected package default-jre-headless. 2653s Preparing to unpack .../010-default-jre-headless_2%3a1.21-75+exp1_s390x.deb ... 2653s Unpacking default-jre-headless (2:1.21-75+exp1) ... 2653s Selecting previously unselected package libgdk-pixbuf2.0-common. 2653s Preparing to unpack .../011-libgdk-pixbuf2.0-common_2.42.12+dfsg-1_all.deb ... 2653s Unpacking libgdk-pixbuf2.0-common (2.42.12+dfsg-1) ... 2653s Selecting previously unselected package libgdk-pixbuf-2.0-0:s390x. 2653s Preparing to unpack .../012-libgdk-pixbuf-2.0-0_2.42.12+dfsg-1_s390x.deb ... 2653s Unpacking libgdk-pixbuf-2.0-0:s390x (2.42.12+dfsg-1) ... 2653s Selecting previously unselected package gtk-update-icon-cache. 2653s Preparing to unpack .../013-gtk-update-icon-cache_3.24.43-1ubuntu1_s390x.deb ... 2653s Unpacking gtk-update-icon-cache (3.24.43-1ubuntu1) ... 2653s Selecting previously unselected package hicolor-icon-theme. 2653s Preparing to unpack .../014-hicolor-icon-theme_0.18-1_all.deb ... 2653s Unpacking hicolor-icon-theme (0.18-1) ... 2653s Selecting previously unselected package humanity-icon-theme. 2653s Preparing to unpack .../015-humanity-icon-theme_0.6.16_all.deb ... 2653s Unpacking humanity-icon-theme (0.6.16) ... 2654s Selecting previously unselected package ubuntu-mono. 2654s Preparing to unpack .../016-ubuntu-mono_24.04-0ubuntu1_all.deb ... 2654s Unpacking ubuntu-mono (24.04-0ubuntu1) ... 2654s Selecting previously unselected package adwaita-icon-theme. 2654s Preparing to unpack .../017-adwaita-icon-theme_46.0-1_all.deb ... 2654s Unpacking adwaita-icon-theme (46.0-1) ... 
2654s Selecting previously unselected package at-spi2-common. 2654s Preparing to unpack .../018-at-spi2-common_2.52.0-1build1_all.deb ... 2654s Unpacking at-spi2-common (2.52.0-1build1) ... 2654s Selecting previously unselected package libatk1.0-0t64:s390x. 2654s Preparing to unpack .../019-libatk1.0-0t64_2.52.0-1build1_s390x.deb ... 2654s Unpacking libatk1.0-0t64:s390x (2.52.0-1build1) ... 2654s Selecting previously unselected package libxi6:s390x. 2654s Preparing to unpack .../020-libxi6_2%3a1.8.1-1build1_s390x.deb ... 2654s Unpacking libxi6:s390x (2:1.8.1-1build1) ... 2654s Selecting previously unselected package libatspi2.0-0t64:s390x. 2654s Preparing to unpack .../021-libatspi2.0-0t64_2.52.0-1build1_s390x.deb ... 2654s Unpacking libatspi2.0-0t64:s390x (2.52.0-1build1) ... 2654s Selecting previously unselected package libatk-bridge2.0-0t64:s390x. 2654s Preparing to unpack .../022-libatk-bridge2.0-0t64_2.52.0-1build1_s390x.deb ... 2654s Unpacking libatk-bridge2.0-0t64:s390x (2.52.0-1build1) ... 2654s Selecting previously unselected package libpixman-1-0:s390x. 2654s Preparing to unpack .../023-libpixman-1-0_0.42.2-1build1_s390x.deb ... 2654s Unpacking libpixman-1-0:s390x (0.42.2-1build1) ... 2654s Selecting previously unselected package libxcb-render0:s390x. 2654s Preparing to unpack .../024-libxcb-render0_1.17.0-2_s390x.deb ... 2654s Unpacking libxcb-render0:s390x (1.17.0-2) ... 2654s Selecting previously unselected package libxcb-shm0:s390x. 2654s Preparing to unpack .../025-libxcb-shm0_1.17.0-2_s390x.deb ... 2654s Unpacking libxcb-shm0:s390x (1.17.0-2) ... 2654s Selecting previously unselected package libxrender1:s390x. 2654s Preparing to unpack .../026-libxrender1_1%3a0.9.10-1.1build1_s390x.deb ... 2654s Unpacking libxrender1:s390x (1:0.9.10-1.1build1) ... 2654s Selecting previously unselected package libcairo2:s390x. 2654s Preparing to unpack .../027-libcairo2_1.18.0-3build1_s390x.deb ... 2654s Unpacking libcairo2:s390x (1.18.0-3build1) ... 2655s Selecting previously unselected package libcairo-gobject2:s390x. 2655s Preparing to unpack .../028-libcairo-gobject2_1.18.0-3build1_s390x.deb ... 2655s Unpacking libcairo-gobject2:s390x (1.18.0-3build1) ... 2655s Selecting previously unselected package libcolord2:s390x. 2655s Preparing to unpack .../029-libcolord2_1.4.7-1build2_s390x.deb ... 2655s Unpacking libcolord2:s390x (1.4.7-1build2) ... 2655s Selecting previously unselected package libavahi-common-data:s390x. 2655s Preparing to unpack .../030-libavahi-common-data_0.8-13ubuntu6_s390x.deb ... 2655s Unpacking libavahi-common-data:s390x (0.8-13ubuntu6) ... 2655s Selecting previously unselected package libavahi-common3:s390x. 2655s Preparing to unpack .../031-libavahi-common3_0.8-13ubuntu6_s390x.deb ... 2655s Unpacking libavahi-common3:s390x (0.8-13ubuntu6) ... 2655s Selecting previously unselected package libavahi-client3:s390x. 2655s Preparing to unpack .../032-libavahi-client3_0.8-13ubuntu6_s390x.deb ... 2655s Unpacking libavahi-client3:s390x (0.8-13ubuntu6) ... 2655s Selecting previously unselected package libcups2t64:s390x. 2655s Preparing to unpack .../033-libcups2t64_2.4.7-1.2ubuntu9_s390x.deb ... 2655s Unpacking libcups2t64:s390x (2.4.7-1.2ubuntu9) ... 2655s Selecting previously unselected package libepoxy0:s390x. 2655s Preparing to unpack .../034-libepoxy0_1.5.10-1build1_s390x.deb ... 2655s Unpacking libepoxy0:s390x (1.5.10-1build1) ... 2655s Selecting previously unselected package fontconfig. 2655s Preparing to unpack .../035-fontconfig_2.15.0-1.1ubuntu2_s390x.deb ... 
2655s Unpacking fontconfig (2.15.0-1.1ubuntu2) ... 2655s Selecting previously unselected package libthai-data. 2655s Preparing to unpack .../036-libthai-data_0.1.29-2build1_all.deb ... 2655s Unpacking libthai-data (0.1.29-2build1) ... 2655s Selecting previously unselected package libdatrie1:s390x. 2655s Preparing to unpack .../037-libdatrie1_0.2.13-3build1_s390x.deb ... 2655s Unpacking libdatrie1:s390x (0.2.13-3build1) ... 2655s Selecting previously unselected package libthai0:s390x. 2655s Preparing to unpack .../038-libthai0_0.1.29-2build1_s390x.deb ... 2655s Unpacking libthai0:s390x (0.1.29-2build1) ... 2655s Selecting previously unselected package libpango-1.0-0:s390x. 2655s Preparing to unpack .../039-libpango-1.0-0_1.54.0+ds-1_s390x.deb ... 2655s Unpacking libpango-1.0-0:s390x (1.54.0+ds-1) ... 2655s Selecting previously unselected package libpangoft2-1.0-0:s390x. 2655s Preparing to unpack .../040-libpangoft2-1.0-0_1.54.0+ds-1_s390x.deb ... 2655s Unpacking libpangoft2-1.0-0:s390x (1.54.0+ds-1) ... 2655s Selecting previously unselected package libpangocairo-1.0-0:s390x. 2655s Preparing to unpack .../041-libpangocairo-1.0-0_1.54.0+ds-1_s390x.deb ... 2655s Unpacking libpangocairo-1.0-0:s390x (1.54.0+ds-1) ... 2655s Selecting previously unselected package libwayland-client0:s390x. 2655s Preparing to unpack .../042-libwayland-client0_1.23.0-1_s390x.deb ... 2655s Unpacking libwayland-client0:s390x (1.23.0-1) ... 2655s Selecting previously unselected package libwayland-cursor0:s390x. 2655s Preparing to unpack .../043-libwayland-cursor0_1.23.0-1_s390x.deb ... 2655s Unpacking libwayland-cursor0:s390x (1.23.0-1) ... 2655s Selecting previously unselected package libwayland-egl1:s390x. 2655s Preparing to unpack .../044-libwayland-egl1_1.23.0-1_s390x.deb ... 2655s Unpacking libwayland-egl1:s390x (1.23.0-1) ... 2655s Selecting previously unselected package libxcomposite1:s390x. 2655s Preparing to unpack .../045-libxcomposite1_1%3a0.4.5-1build3_s390x.deb ... 2655s Unpacking libxcomposite1:s390x (1:0.4.5-1build3) ... 2655s Selecting previously unselected package libxfixes3:s390x. 2655s Preparing to unpack .../046-libxfixes3_1%3a6.0.0-2build1_s390x.deb ... 2655s Unpacking libxfixes3:s390x (1:6.0.0-2build1) ... 2655s Selecting previously unselected package libxcursor1:s390x. 2655s Preparing to unpack .../047-libxcursor1_1%3a1.2.2-1_s390x.deb ... 2655s Unpacking libxcursor1:s390x (1:1.2.2-1) ... 2655s Selecting previously unselected package libxdamage1:s390x. 2655s Preparing to unpack .../048-libxdamage1_1%3a1.1.6-1build1_s390x.deb ... 2655s Unpacking libxdamage1:s390x (1:1.1.6-1build1) ... 2655s Selecting previously unselected package libxinerama1:s390x. 2655s Preparing to unpack .../049-libxinerama1_2%3a1.1.4-3build1_s390x.deb ... 2655s Unpacking libxinerama1:s390x (2:1.1.4-3build1) ... 2655s Selecting previously unselected package libxrandr2:s390x. 2655s Preparing to unpack .../050-libxrandr2_2%3a1.5.4-1_s390x.deb ... 2655s Unpacking libxrandr2:s390x (2:1.5.4-1) ... 2655s Selecting previously unselected package libdconf1:s390x. 2655s Preparing to unpack .../051-libdconf1_0.40.0-4build2_s390x.deb ... 2655s Unpacking libdconf1:s390x (0.40.0-4build2) ... 2655s Selecting previously unselected package dconf-service. 2655s Preparing to unpack .../052-dconf-service_0.40.0-4build2_s390x.deb ... 2655s Unpacking dconf-service (0.40.0-4build2) ... 2655s Selecting previously unselected package dconf-gsettings-backend:s390x. 2655s Preparing to unpack .../053-dconf-gsettings-backend_0.40.0-4build2_s390x.deb ... 
2655s Unpacking dconf-gsettings-backend:s390x (0.40.0-4build2) ... 2655s Selecting previously unselected package libgtk-3-common. 2655s Preparing to unpack .../054-libgtk-3-common_3.24.43-1ubuntu1_all.deb ... 2655s Unpacking libgtk-3-common (3.24.43-1ubuntu1) ... 2655s Selecting previously unselected package libgtk-3-0t64:s390x. 2655s Preparing to unpack .../055-libgtk-3-0t64_3.24.43-1ubuntu1_s390x.deb ... 2655s Unpacking libgtk-3-0t64:s390x (3.24.43-1ubuntu1) ... 2655s Selecting previously unselected package libglvnd0:s390x. 2655s Preparing to unpack .../056-libglvnd0_1.7.0-1build1_s390x.deb ... 2655s Unpacking libglvnd0:s390x (1.7.0-1build1) ... 2655s Selecting previously unselected package libglapi-mesa:s390x. 2655s Preparing to unpack .../057-libglapi-mesa_24.0.9-0ubuntu2_s390x.deb ... 2655s Unpacking libglapi-mesa:s390x (24.0.9-0ubuntu2) ... 2655s Selecting previously unselected package libx11-xcb1:s390x. 2655s Preparing to unpack .../058-libx11-xcb1_2%3a1.8.7-1build1_s390x.deb ... 2655s Unpacking libx11-xcb1:s390x (2:1.8.7-1build1) ... 2655s Selecting previously unselected package libxcb-dri2-0:s390x. 2655s Preparing to unpack .../059-libxcb-dri2-0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-dri2-0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-dri3-0:s390x. 2655s Preparing to unpack .../060-libxcb-dri3-0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-dri3-0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-glx0:s390x. 2655s Preparing to unpack .../061-libxcb-glx0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-glx0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-present0:s390x. 2655s Preparing to unpack .../062-libxcb-present0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-present0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-randr0:s390x. 2655s Preparing to unpack .../063-libxcb-randr0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-randr0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-sync1:s390x. 2655s Preparing to unpack .../064-libxcb-sync1_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-sync1:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxcb-xfixes0:s390x. 2655s Preparing to unpack .../065-libxcb-xfixes0_1.17.0-2_s390x.deb ... 2655s Unpacking libxcb-xfixes0:s390x (1.17.0-2) ... 2655s Selecting previously unselected package libxshmfence1:s390x. 2655s Preparing to unpack .../066-libxshmfence1_1.3-1build5_s390x.deb ... 2655s Unpacking libxshmfence1:s390x (1.3-1build5) ... 2655s Selecting previously unselected package libxxf86vm1:s390x. 2655s Preparing to unpack .../067-libxxf86vm1_1%3a1.1.4-1build4_s390x.deb ... 2655s Unpacking libxxf86vm1:s390x (1:1.1.4-1build4) ... 2655s Selecting previously unselected package libvulkan1:s390x. 2655s Preparing to unpack .../068-libvulkan1_1.3.283.0-1_s390x.deb ... 2655s Unpacking libvulkan1:s390x (1.3.283.0-1) ... 2656s Selecting previously unselected package libdrm-amdgpu1:s390x. 2656s Preparing to unpack .../069-libdrm-amdgpu1_2.4.121-2_s390x.deb ... 2656s Unpacking libdrm-amdgpu1:s390x (2.4.121-2) ... 2656s Selecting previously unselected package libdrm-nouveau2:s390x. 2656s Preparing to unpack .../070-libdrm-nouveau2_2.4.121-2_s390x.deb ... 2656s Unpacking libdrm-nouveau2:s390x (2.4.121-2) ... 2656s Selecting previously unselected package libdrm-radeon1:s390x. 2656s Preparing to unpack .../071-libdrm-radeon1_2.4.121-2_s390x.deb ... 2656s Unpacking libdrm-radeon1:s390x (2.4.121-2) ... 
2656s Selecting previously unselected package libllvm17t64:s390x. 2656s Preparing to unpack .../072-libllvm17t64_1%3a17.0.6-12_s390x.deb ... 2656s Unpacking libllvm17t64:s390x (1:17.0.6-12) ... 2657s Selecting previously unselected package libgl1-mesa-dri:s390x. 2657s Preparing to unpack .../073-libgl1-mesa-dri_24.0.9-0ubuntu2_s390x.deb ... 2657s Unpacking libgl1-mesa-dri:s390x (24.0.9-0ubuntu2) ... 2657s Selecting previously unselected package libglx-mesa0:s390x. 2657s Preparing to unpack .../074-libglx-mesa0_24.0.9-0ubuntu2_s390x.deb ... 2657s Unpacking libglx-mesa0:s390x (24.0.9-0ubuntu2) ... 2657s Selecting previously unselected package libglx0:s390x. 2657s Preparing to unpack .../075-libglx0_1.7.0-1build1_s390x.deb ... 2657s Unpacking libglx0:s390x (1.7.0-1build1) ... 2657s Selecting previously unselected package libgl1:s390x. 2657s Preparing to unpack .../076-libgl1_1.7.0-1build1_s390x.deb ... 2657s Unpacking libgl1:s390x (1.7.0-1build1) ... 2657s Selecting previously unselected package libasound2-data. 2657s Preparing to unpack .../077-libasound2-data_1.2.12-1_all.deb ... 2657s Unpacking libasound2-data (1.2.12-1) ... 2657s Selecting previously unselected package libasound2t64:s390x. 2657s Preparing to unpack .../078-libasound2t64_1.2.12-1_s390x.deb ... 2657s Unpacking libasound2t64:s390x (1.2.12-1) ... 2657s Selecting previously unselected package libgif7:s390x. 2657s Preparing to unpack .../079-libgif7_5.2.2-1ubuntu1_s390x.deb ... 2657s Unpacking libgif7:s390x (5.2.2-1ubuntu1) ... 2657s Selecting previously unselected package x11-common. 2657s Preparing to unpack .../080-x11-common_1%3a7.7+23ubuntu3_all.deb ... 2657s Unpacking x11-common (1:7.7+23ubuntu3) ... 2657s Selecting previously unselected package libxtst6:s390x. 2657s Preparing to unpack .../081-libxtst6_2%3a1.2.3-1.1build1_s390x.deb ... 2657s Unpacking libxtst6:s390x (2:1.2.3-1.1build1) ... 2657s Selecting previously unselected package openjdk-21-jre:s390x. 2657s Preparing to unpack .../082-openjdk-21-jre_21.0.4+7-1ubuntu2_s390x.deb ... 2657s Unpacking openjdk-21-jre:s390x (21.0.4+7-1ubuntu2) ... 2657s Selecting previously unselected package default-jre. 2657s Preparing to unpack .../083-default-jre_2%3a1.21-75+exp1_s390x.deb ... 2657s Unpacking default-jre (2:1.21-75+exp1) ... 2657s Selecting previously unselected package libhamcrest-java. 2657s Preparing to unpack .../084-libhamcrest-java_2.2-2_all.deb ... 2657s Unpacking libhamcrest-java (2.2-2) ... 2657s Selecting previously unselected package junit4. 2657s Preparing to unpack .../085-junit4_4.13.2-4_all.deb ... 2657s Unpacking junit4 (4.13.2-4) ... 2657s Selecting previously unselected package libcommons-cli-java. 2657s Preparing to unpack .../086-libcommons-cli-java_1.6.0-1_all.deb ... 2657s Unpacking libcommons-cli-java (1.6.0-1) ... 2657s Selecting previously unselected package libapache-pom-java. 2657s Preparing to unpack .../087-libapache-pom-java_29-2_all.deb ... 2657s Unpacking libapache-pom-java (29-2) ... 2657s Selecting previously unselected package libcommons-parent-java. 2657s Preparing to unpack .../088-libcommons-parent-java_56-1_all.deb ... 2657s Unpacking libcommons-parent-java (56-1) ... 2657s Selecting previously unselected package libcommons-io-java. 2657s Preparing to unpack .../089-libcommons-io-java_2.16.1-1_all.deb ... 2657s Unpacking libcommons-io-java (2.16.1-1) ... 2657s Selecting previously unselected package libdropwizard-metrics-java. 2657s Preparing to unpack .../090-libdropwizard-metrics-java_3.2.6-1_all.deb ... 
2657s Unpacking libdropwizard-metrics-java (3.2.6-1) ... 2657s Selecting previously unselected package libfindbugs-annotations-java. 2657s Preparing to unpack .../091-libfindbugs-annotations-java_3.1.0~preview2-3_all.deb ... 2657s Unpacking libfindbugs-annotations-java (3.1.0~preview2-3) ... 2657s Selecting previously unselected package libatinject-jsr330-api-java. 2657s Preparing to unpack .../092-libatinject-jsr330-api-java_1.0+ds1-5_all.deb ... 2657s Unpacking libatinject-jsr330-api-java (1.0+ds1-5) ... 2657s Selecting previously unselected package liberror-prone-java. 2657s Preparing to unpack .../093-liberror-prone-java_2.18.0-1_all.deb ... 2657s Unpacking liberror-prone-java (2.18.0-1) ... 2657s Selecting previously unselected package libjsr305-java. 2657s Preparing to unpack .../094-libjsr305-java_0.1~+svn49-11_all.deb ... 2657s Unpacking libjsr305-java (0.1~+svn49-11) ... 2657s Selecting previously unselected package libguava-java. 2657s Preparing to unpack .../095-libguava-java_32.0.1-1_all.deb ... 2657s Unpacking libguava-java (32.0.1-1) ... 2657s Selecting previously unselected package libjackson2-annotations-java. 2657s Preparing to unpack .../096-libjackson2-annotations-java_2.14.0-1_all.deb ... 2657s Unpacking libjackson2-annotations-java (2.14.0-1) ... 2658s Selecting previously unselected package libjackson2-core-java. 2658s Preparing to unpack .../097-libjackson2-core-java_2.14.1-1_all.deb ... 2658s Unpacking libjackson2-core-java (2.14.1-1) ... 2658s Selecting previously unselected package libjackson2-databind-java. 2658s Preparing to unpack .../098-libjackson2-databind-java_2.14.0-1_all.deb ... 2658s Unpacking libjackson2-databind-java (2.14.0-1) ... 2658s Selecting previously unselected package libasm-java. 2658s Preparing to unpack .../099-libasm-java_9.7-1_all.deb ... 2658s Unpacking libasm-java (9.7-1) ... 2658s Selecting previously unselected package libel-api-java. 2658s Preparing to unpack .../100-libel-api-java_3.0.0-3_all.deb ... 2658s Unpacking libel-api-java (3.0.0-3) ... 2658s Selecting previously unselected package libjsp-api-java. 2658s Preparing to unpack .../101-libjsp-api-java_2.3.4-3_all.deb ... 2658s Unpacking libjsp-api-java (2.3.4-3) ... 2658s Selecting previously unselected package libservlet-api-java. 2658s Preparing to unpack .../102-libservlet-api-java_4.0.1-2_all.deb ... 2658s Unpacking libservlet-api-java (4.0.1-2) ... 2658s Selecting previously unselected package libwebsocket-api-java. 2658s Preparing to unpack .../103-libwebsocket-api-java_1.1-2_all.deb ... 2658s Unpacking libwebsocket-api-java (1.1-2) ... 2658s Selecting previously unselected package libjetty9-java. 2658s Preparing to unpack .../104-libjetty9-java_9.4.54-1_all.deb ... 2658s Unpacking libjetty9-java (9.4.54-1) ... 2658s Selecting previously unselected package libjnr-constants-java. 2658s Preparing to unpack .../105-libjnr-constants-java_0.10.4-2_all.deb ... 2658s Unpacking libjnr-constants-java (0.10.4-2) ... 2658s Selecting previously unselected package libjffi-jni:s390x. 2658s Preparing to unpack .../106-libjffi-jni_1.3.13+ds-1_s390x.deb ... 2658s Unpacking libjffi-jni:s390x (1.3.13+ds-1) ... 2658s Selecting previously unselected package libjffi-java. 2658s Preparing to unpack .../107-libjffi-java_1.3.13+ds-1_all.deb ... 2658s Unpacking libjffi-java (1.3.13+ds-1) ... 2658s Selecting previously unselected package libjnr-x86asm-java. 2658s Preparing to unpack .../108-libjnr-x86asm-java_1.0.2-5.1_all.deb ... 2658s Unpacking libjnr-x86asm-java (1.0.2-5.1) ... 
2658s Selecting previously unselected package libjnr-ffi-java. 2658s Preparing to unpack .../109-libjnr-ffi-java_2.2.15-2_all.deb ... 2658s Unpacking libjnr-ffi-java (2.2.15-2) ... 2658s Selecting previously unselected package libjnr-enxio-java. 2658s Preparing to unpack .../110-libjnr-enxio-java_0.32.16-1_all.deb ... 2658s Unpacking libjnr-enxio-java (0.32.16-1) ... 2658s Selecting previously unselected package libjnr-posix-java. 2658s Preparing to unpack .../111-libjnr-posix-java_3.1.18-1_all.deb ... 2658s Unpacking libjnr-posix-java (3.1.18-1) ... 2658s Selecting previously unselected package libjnr-unixsocket-java. 2658s Preparing to unpack .../112-libjnr-unixsocket-java_0.38.21-2_all.deb ... 2658s Unpacking libjnr-unixsocket-java (0.38.21-2) ... 2658s Selecting previously unselected package libactivation-java. 2658s Preparing to unpack .../113-libactivation-java_1.2.0-2_all.deb ... 2658s Unpacking libactivation-java (1.2.0-2) ... 2658s Selecting previously unselected package libmail-java. 2658s Preparing to unpack .../114-libmail-java_1.6.5-2_all.deb ... 2658s Unpacking libmail-java (1.6.5-2) ... 2658s Selecting previously unselected package libcommons-logging-java. 2658s Preparing to unpack .../115-libcommons-logging-java_1.3.0-1ubuntu1_all.deb ... 2658s Unpacking libcommons-logging-java (1.3.0-1ubuntu1) ... 2658s Selecting previously unselected package libjaxb-api-java. 2658s Preparing to unpack .../116-libjaxb-api-java_2.3.1-1_all.deb ... 2658s Unpacking libjaxb-api-java (2.3.1-1) ... 2658s Selecting previously unselected package libspring-core-java. 2658s Preparing to unpack .../117-libspring-core-java_4.3.30-2_all.deb ... 2658s Unpacking libspring-core-java (4.3.30-2) ... 2658s Selecting previously unselected package libspring-beans-java. 2658s Preparing to unpack .../118-libspring-beans-java_4.3.30-2_all.deb ... 2658s Unpacking libspring-beans-java (4.3.30-2) ... 2658s Selecting previously unselected package libtaglibs-standard-spec-java. 2658s Preparing to unpack .../119-libtaglibs-standard-spec-java_1.2.5-3_all.deb ... 2658s Unpacking libtaglibs-standard-spec-java (1.2.5-3) ... 2658s Selecting previously unselected package libtaglibs-standard-impl-java. 2658s Preparing to unpack .../120-libtaglibs-standard-impl-java_1.2.5-3_all.deb ... 2658s Unpacking libtaglibs-standard-impl-java (1.2.5-3) ... 2658s Selecting previously unselected package libeclipse-jdt-core-java. 2658s Preparing to unpack .../121-libeclipse-jdt-core-java_3.32.0+eclipse4.26-2_all.deb ... 2658s Unpacking libeclipse-jdt-core-java (3.32.0+eclipse4.26-2) ... 2658s Selecting previously unselected package libtomcat9-java. 2658s Preparing to unpack .../122-libtomcat9-java_9.0.70-2_all.deb ... 2658s Unpacking libtomcat9-java (9.0.70-2) ... 2658s Selecting previously unselected package libjetty9-extra-java. 2658s Preparing to unpack .../123-libjetty9-extra-java_9.4.54-1_all.deb ... 2658s Unpacking libjetty9-extra-java (9.4.54-1) ... 2658s Selecting previously unselected package libjctools-java. 2658s Preparing to unpack .../124-libjctools-java_2.0.2-1_all.deb ... 2658s Unpacking libjctools-java (2.0.2-1) ... 2658s Selecting previously unselected package libnetty-java. 2658s Preparing to unpack .../125-libnetty-java_1%3a4.1.48-10_all.deb ... 2658s Unpacking libnetty-java (1:4.1.48-10) ... 2658s Selecting previously unselected package libslf4j-java. 2658s Preparing to unpack .../126-libslf4j-java_1.7.32-1_all.deb ... 2658s Unpacking libslf4j-java (1.7.32-1) ... 
2658s Selecting previously unselected package libsnappy1v5:s390x. 2658s Preparing to unpack .../127-libsnappy1v5_1.2.1-1_s390x.deb ... 2658s Unpacking libsnappy1v5:s390x (1.2.1-1) ... 2658s Selecting previously unselected package libsnappy-jni. 2658s Preparing to unpack .../128-libsnappy-jni_1.1.10.5-2_s390x.deb ... 2658s Unpacking libsnappy-jni (1.1.10.5-2) ... 2658s Selecting previously unselected package libsnappy-java. 2658s Preparing to unpack .../129-libsnappy-java_1.1.10.5-2_all.deb ... 2658s Unpacking libsnappy-java (1.1.10.5-2) ... 2658s Selecting previously unselected package libapr1t64:s390x. 2658s Preparing to unpack .../130-libapr1t64_1.7.2-3.2_s390x.deb ... 2658s Unpacking libapr1t64:s390x (1.7.2-3.2) ... 2658s Selecting previously unselected package libnetty-tcnative-jni. 2658s Preparing to unpack .../131-libnetty-tcnative-jni_2.0.28-1build4_s390x.deb ... 2658s Unpacking libnetty-tcnative-jni (2.0.28-1build4) ... 2659s Selecting previously unselected package libnetty-tcnative-java. 2659s Preparing to unpack .../132-libnetty-tcnative-java_2.0.28-1build4_all.deb ... 2659s Unpacking libnetty-tcnative-java (2.0.28-1build4) ... 2659s Selecting previously unselected package liblog4j1.2-java. 2659s Preparing to unpack .../133-liblog4j1.2-java_1.2.17-11_all.deb ... 2659s Unpacking liblog4j1.2-java (1.2.17-11) ... 2659s Selecting previously unselected package libzookeeper-java. 2659s Preparing to unpack .../134-libzookeeper-java_3.9.2-2_all.deb ... 2659s Unpacking libzookeeper-java (3.9.2-2) ... 2659s Selecting previously unselected package zookeeper. 2659s Preparing to unpack .../135-zookeeper_3.9.2-2_all.deb ... 2659s Unpacking zookeeper (3.9.2-2) ... 2659s Selecting previously unselected package zookeeperd. 2659s Preparing to unpack .../136-zookeeperd_3.9.2-2_all.deb ... 2659s Unpacking zookeeperd (3.9.2-2) ... 2659s Selecting previously unselected package fonts-font-awesome. 2659s Preparing to unpack .../137-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 2659s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 2659s Selecting previously unselected package libcares2:s390x. 2659s Preparing to unpack .../138-libcares2_1.32.3-1_s390x.deb ... 2659s Unpacking libcares2:s390x (1.32.3-1) ... 2659s Selecting previously unselected package libev4t64:s390x. 2659s Preparing to unpack .../139-libev4t64_1%3a4.33-2.1build1_s390x.deb ... 2659s Unpacking libev4t64:s390x (1:4.33-2.1build1) ... 2659s Selecting previously unselected package libio-pty-perl. 2659s Preparing to unpack .../140-libio-pty-perl_1%3a1.20-1build2_s390x.deb ... 2659s Unpacking libio-pty-perl (1:1.20-1build2) ... 2659s Selecting previously unselected package libipc-run-perl. 2659s Preparing to unpack .../141-libipc-run-perl_20231003.0-2_all.deb ... 2659s Unpacking libipc-run-perl (20231003.0-2) ... 2659s Selecting previously unselected package libjs-jquery. 2659s Preparing to unpack .../142-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 2659s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 2659s Selecting previously unselected package libjs-underscore. 2659s Preparing to unpack .../143-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 2659s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 2659s Selecting previously unselected package libjs-sphinxdoc. 2659s Preparing to unpack .../144-libjs-sphinxdoc_7.3.7-4_all.deb ... 2659s Unpacking libjs-sphinxdoc (7.3.7-4) ... 2659s Selecting previously unselected package libpq5:s390x. 2659s Preparing to unpack .../145-libpq5_16.3-1_s390x.deb ... 
2659s Unpacking libpq5:s390x (16.3-1) ... 2659s Selecting previously unselected package libtime-duration-perl. 2659s Preparing to unpack .../146-libtime-duration-perl_1.21-2_all.deb ... 2659s Unpacking libtime-duration-perl (1.21-2) ... 2659s Selecting previously unselected package libtimedate-perl. 2659s Preparing to unpack .../147-libtimedate-perl_2.3300-2_all.deb ... 2659s Unpacking libtimedate-perl (2.3300-2) ... 2659s Selecting previously unselected package libxslt1.1:s390x. 2659s Preparing to unpack .../148-libxslt1.1_1.1.39-0exp1build1_s390x.deb ... 2659s Unpacking libxslt1.1:s390x (1.1.39-0exp1build1) ... 2659s Selecting previously unselected package moreutils. 2659s Preparing to unpack .../149-moreutils_0.69-1_s390x.deb ... 2659s Unpacking moreutils (0.69-1) ... 2659s Selecting previously unselected package python3-cdiff. 2659s Preparing to unpack .../150-python3-cdiff_1.0-1.1_all.deb ... 2659s Unpacking python3-cdiff (1.0-1.1) ... 2659s Selecting previously unselected package python3-colorama. 2659s Preparing to unpack .../151-python3-colorama_0.4.6-4_all.deb ... 2659s Unpacking python3-colorama (0.4.6-4) ... 2659s Selecting previously unselected package python3-click. 2659s Preparing to unpack .../152-python3-click_8.1.7-2_all.deb ... 2659s Unpacking python3-click (8.1.7-2) ... 2659s Selecting previously unselected package python3-six. 2659s Preparing to unpack .../153-python3-six_1.16.0-6_all.deb ... 2659s Unpacking python3-six (1.16.0-6) ... 2659s Selecting previously unselected package python3-dateutil. 2659s Preparing to unpack .../154-python3-dateutil_2.9.0-2_all.deb ... 2659s Unpacking python3-dateutil (2.9.0-2) ... 2659s Selecting previously unselected package python3-wcwidth. 2659s Preparing to unpack .../155-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 2659s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 2659s Selecting previously unselected package python3-prettytable. 2659s Preparing to unpack .../156-python3-prettytable_3.10.1-1_all.deb ... 2659s Unpacking python3-prettytable (3.10.1-1) ... 2659s Selecting previously unselected package python3-psutil. 2659s Preparing to unpack .../157-python3-psutil_5.9.8-2build2_s390x.deb ... 2659s Unpacking python3-psutil (5.9.8-2build2) ... 2659s Selecting previously unselected package python3-psycopg2. 2659s Preparing to unpack .../158-python3-psycopg2_2.9.9-1build1_s390x.deb ... 2659s Unpacking python3-psycopg2 (2.9.9-1build1) ... 2659s Selecting previously unselected package python3-greenlet. 2659s Preparing to unpack .../159-python3-greenlet_3.0.3-0ubuntu5_s390x.deb ... 2659s Unpacking python3-greenlet (3.0.3-0ubuntu5) ... 2659s Selecting previously unselected package python3-dnspython. 2659s Preparing to unpack .../160-python3-dnspython_2.6.1-1ubuntu1_all.deb ... 2659s Unpacking python3-dnspython (2.6.1-1ubuntu1) ... 2659s Selecting previously unselected package python3-eventlet. 2659s Preparing to unpack .../161-python3-eventlet_0.35.2-0ubuntu1_all.deb ... 2659s Unpacking python3-eventlet (0.35.2-0ubuntu1) ... 2659s Selecting previously unselected package python3-zope.event. 2659s Preparing to unpack .../162-python3-zope.event_5.0-0.1_all.deb ... 2659s Unpacking python3-zope.event (5.0-0.1) ... 2659s Selecting previously unselected package python3-zope.interface. 2659s Preparing to unpack .../163-python3-zope.interface_6.4-1_s390x.deb ... 2659s Unpacking python3-zope.interface (6.4-1) ... 2659s Selecting previously unselected package python3-gevent. 
2659s Preparing to unpack .../164-python3-gevent_24.2.1-1_s390x.deb ... 2659s Unpacking python3-gevent (24.2.1-1) ... 2659s Selecting previously unselected package python3-kerberos. 2659s Preparing to unpack .../165-python3-kerberos_1.1.14-3.1build9_s390x.deb ... 2659s Unpacking python3-kerberos (1.1.14-3.1build9) ... 2659s Selecting previously unselected package python3-pure-sasl. 2659s Preparing to unpack .../166-python3-pure-sasl_0.5.1+dfsg1-4_all.deb ... 2659s Unpacking python3-pure-sasl (0.5.1+dfsg1-4) ... 2659s Selecting previously unselected package python3-kazoo. 2659s Preparing to unpack .../167-python3-kazoo_2.9.0-2_all.deb ... 2659s Unpacking python3-kazoo (2.9.0-2) ... 2659s Selecting previously unselected package patroni. 2659s Preparing to unpack .../168-patroni_3.3.1-1_all.deb ... 2659s Unpacking patroni (3.3.1-1) ... 2660s Selecting previously unselected package sphinx-rtd-theme-common. 2660s Preparing to unpack .../169-sphinx-rtd-theme-common_2.0.0+dfsg-2_all.deb ... 2660s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 2660s Selecting previously unselected package patroni-doc. 2660s Preparing to unpack .../170-patroni-doc_3.3.1-1_all.deb ... 2660s Unpacking patroni-doc (3.3.1-1) ... 2660s Selecting previously unselected package postgresql-client-16. 2660s Preparing to unpack .../171-postgresql-client-16_16.3-1_s390x.deb ... 2660s Unpacking postgresql-client-16 (16.3-1) ... 2660s Selecting previously unselected package postgresql-16. 2660s Preparing to unpack .../172-postgresql-16_16.3-1_s390x.deb ... 2660s Unpacking postgresql-16 (16.3-1) ... 2660s Selecting previously unselected package postgresql. 2660s Preparing to unpack .../173-postgresql_16+261_all.deb ... 2660s Unpacking postgresql (16+261) ... 2660s Selecting previously unselected package python3-parse. 2660s Preparing to unpack .../174-python3-parse_1.20.2-1_all.deb ... 2660s Unpacking python3-parse (1.20.2-1) ... 2660s Selecting previously unselected package python3-parse-type. 2660s Preparing to unpack .../175-python3-parse-type_0.6.2-1_all.deb ... 2660s Unpacking python3-parse-type (0.6.2-1) ... 2660s Selecting previously unselected package python3-behave. 2660s Preparing to unpack .../176-python3-behave_1.2.6-5_all.deb ... 2660s Unpacking python3-behave (1.2.6-5) ... 2660s Selecting previously unselected package python3-coverage. 2660s Preparing to unpack .../177-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ... 2660s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 2660s Selecting previously unselected package autopkgtest-satdep. 2660s Preparing to unpack .../178-4-autopkgtest-satdep.deb ... 2660s Unpacking autopkgtest-satdep (0) ... 2660s Setting up postgresql-client-common (261) ... 2660s Setting up libxcb-dri3-0:s390x (1.17.0-2) ... 2660s Setting up liblcms2-2:s390x (2.14-2build1) ... 2660s Setting up libtaglibs-standard-spec-java (1.2.5-3) ... 2660s Setting up libpixman-1-0:s390x (0.42.2-1build1) ... 2660s Setting up libev4t64:s390x (1:4.33-2.1build1) ... 2660s Setting up libjackson2-annotations-java (2.14.0-1) ... 2660s Setting up libx11-xcb1:s390x (2:1.8.7-1build1) ... 2660s Setting up libslf4j-java (1.7.32-1) ... 2660s Setting up fontconfig (2.15.0-1.1ubuntu2) ... 2662s Regenerating fonts cache... done. 2662s Setting up libdrm-nouveau2:s390x (2.4.121-2) ... 2662s Setting up fonts-lato (2.015-1) ... 2662s Setting up libxdamage1:s390x (1:1.1.6-1build1) ... 2662s Setting up libxcb-xfixes0:s390x (1.17.0-2) ... 2662s Setting up libjsr305-java (0.1~+svn49-11) ... 
2662s Setting up hicolor-icon-theme (0.18-1) ... 2662s Setting up libxi6:s390x (2:1.8.1-1build1) ... 2662s Setting up java-common (0.75+exp1) ... 2662s Setting up libxrender1:s390x (1:0.9.10-1.1build1) ... 2662s Setting up libdatrie1:s390x (0.2.13-3build1) ... 2662s Setting up libcommons-cli-java (1.6.0-1) ... 2662s Setting up libio-pty-perl (1:1.20-1build2) ... 2662s Setting up python3-colorama (0.4.6-4) ... 2663s Setting up libxcb-render0:s390x (1.17.0-2) ... 2663s Setting up python3-zope.event (5.0-0.1) ... 2663s Setting up python3-zope.interface (6.4-1) ... 2663s Setting up libdrm-radeon1:s390x (2.4.121-2) ... 2663s Setting up libglvnd0:s390x (1.7.0-1build1) ... 2663s Setting up libxcb-glx0:s390x (1.17.0-2) ... 2663s Setting up python3-cdiff (1.0-1.1) ... 2663s Setting up libgdk-pixbuf2.0-common (2.42.12+dfsg-1) ... 2663s Setting up libasm-java (9.7-1) ... 2663s Setting up x11-common (1:7.7+23ubuntu3) ... 2664s Setting up libpq5:s390x (16.3-1) ... 2664s Setting up python3-kerberos (1.1.14-3.1build9) ... 2664s Setting up liblog4j1.2-java (1.2.17-11) ... 2664s Setting up libel-api-java (3.0.0-3) ... 2664s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 2664s Setting up libxcb-shm0:s390x (1.17.0-2) ... 2664s Setting up python3-click (8.1.7-2) ... 2664s Setting up libjnr-x86asm-java (1.0.2-5.1) ... 2664s Setting up libcairo2:s390x (1.18.0-3build1) ... 2664s Setting up libcolord2:s390x (1.4.7-1build2) ... 2664s Setting up python3-psutil (5.9.8-2build2) ... 2664s Setting up libeclipse-jdt-core-java (3.32.0+eclipse4.26-2) ... 2664s Setting up libxxf86vm1:s390x (1:1.1.4-1build4) ... 2664s Setting up libsnappy1v5:s390x (1.2.1-1) ... 2664s Setting up libxcb-present0:s390x (1.17.0-2) ... 2664s Setting up libtaglibs-standard-impl-java (1.2.5-3) ... 2664s Setting up libdconf1:s390x (0.40.0-4build2) ... 2664s Setting up libjctools-java (2.0.2-1) ... 2664s Setting up libdropwizard-metrics-java (3.2.6-1) ... 2664s Setting up python3-six (1.16.0-6) ... 2665s Setting up libasound2-data (1.2.12-1) ... 2665s Setting up libasound2t64:s390x (1.2.12-1) ... 2665s Setting up libllvm17t64:s390x (1:17.0.6-12) ... 2665s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 2665s Setting up libfindbugs-annotations-java (3.1.0~preview2-3) ... 2665s Setting up libepoxy0:s390x (1.5.10-1build1) ... 2665s Setting up ssl-cert (1.1.2ubuntu2) ... 2665s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'. 2666s Setting up libxfixes3:s390x (1:6.0.0-2build1) ... 2666s Setting up libxcb-sync1:s390x (1.17.0-2) ... 2666s Setting up libapache-pom-java (29-2) ... 2666s Setting up libavahi-common-data:s390x (0.8-13ubuntu6) ... 2666s Setting up libatinject-jsr330-api-java (1.0+ds1-5) ... 2666s Setting up libatspi2.0-0t64:s390x (2.52.0-1build1) ... 2666s Setting up libwebsocket-api-java (1.1-2) ... 2666s Setting up python3-greenlet (3.0.3-0ubuntu5) ... 2666s Setting up libxinerama1:s390x (2:1.1.4-3build1) ... 2666s Setting up libcares2:s390x (1.32.3-1) ... 2666s Setting up libxrandr2:s390x (2:1.5.4-1) ... 2666s Setting up python3-psycopg2 (2.9.9-1build1) ... 2666s Setting up libipc-run-perl (20231003.0-2) ... 2666s Setting up libpcsclite1:s390x (2.2.3-1) ... 2666s Setting up libactivation-java (1.2.0-2) ... 2666s Setting up libtomcat9-java (9.0.70-2) ... 2666s Setting up libhamcrest-java (2.2-2) ... 2666s Setting up libglapi-mesa:s390x (24.0.9-0ubuntu2) ... 2666s Setting up libjsp-api-java (2.3.4-3) ... 
2666s Setting up libvulkan1:s390x (1.3.283.0-1) ... 2666s Setting up libtime-duration-perl (1.21-2) ... 2666s Setting up libtimedate-perl (2.3300-2) ... 2666s Setting up libxcb-dri2-0:s390x (1.17.0-2) ... 2666s Setting up libgif7:s390x (5.2.2-1ubuntu1) ... 2666s Setting up libxshmfence1:s390x (1.3-1build5) ... 2666s Setting up libmail-java (1.6.5-2) ... 2666s Setting up at-spi2-common (2.52.0-1build1) ... 2666s Setting up python3-dnspython (2.6.1-1ubuntu1) ... 2666s Setting up libnetty-java (1:4.1.48-10) ... 2666s Setting up libxcb-randr0:s390x (1.17.0-2) ... 2666s Setting up python3-parse (1.20.2-1) ... 2666s Setting up libapr1t64:s390x (1.7.2-3.2) ... 2666s Setting up libjson-perl (4.10000-1) ... 2666s Setting up libxslt1.1:s390x (1.1.39-0exp1build1) ... 2666s Setting up libservlet-api-java (4.0.1-2) ... 2666s Setting up libjackson2-core-java (2.14.1-1) ... 2666s Setting up libthai-data (0.1.29-2build1) ... 2666s Setting up python3-dateutil (2.9.0-2) ... 2667s Setting up libgdk-pixbuf-2.0-0:s390x (2.42.12+dfsg-1) ... 2667s Setting up libcairo-gobject2:s390x (1.18.0-3build1) ... 2667s Setting up libjffi-jni:s390x (1.3.13+ds-1) ... 2667s Setting up libwayland-egl1:s390x (1.23.0-1) ... 2667s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 2667s Setting up ca-certificates-java (20240118) ... 2667s No JRE found. Skipping Java certificates setup. 2667s Setting up python3-prettytable (3.10.1-1) ... 2667s Setting up libsnappy-jni (1.1.10.5-2) ... 2667s Setting up libxcomposite1:s390x (1:0.4.5-1build3) ... 2667s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 2667s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 2667s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 2667s Setting up libdrm-amdgpu1:s390x (2.4.121-2) ... 2667s Setting up libjnr-constants-java (0.10.4-2) ... 2667s Setting up libwayland-client0:s390x (1.23.0-1) ... 2667s Setting up libjaxb-api-java (2.3.1-1) ... 2667s Setting up libjffi-java (1.3.13+ds-1) ... 2667s Setting up gtk-update-icon-cache (3.24.43-1ubuntu1) ... 2667s Setting up libjetty9-java (9.4.54-1) ... 2667s Setting up moreutils (0.69-1) ... 2667s Setting up libatk1.0-0t64:s390x (2.52.0-1build1) ... 2667s Setting up openjdk-21-jre-headless:s390x (21.0.4+7-1ubuntu2) ... 2667s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/java to provide /usr/bin/java (java) in auto mode 2667s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/jpackage to provide /usr/bin/jpackage (jpackage) in auto mode 2667s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode 2667s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode 2667s update-alternatives: using /usr/lib/jvm/java-21-openjdk-s390x/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode 2667s Setting up python3-pure-sasl (0.5.1+dfsg1-4) ... 2667s Setting up libxtst6:s390x (2:1.2.3-1.1build1) ... 2667s Setting up libxcursor1:s390x (1:1.2.2-1) ... 2667s Setting up postgresql-client-16 (16.3-1) ... 2667s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 2667s Setting up libgl1-mesa-dri:s390x (24.0.9-0ubuntu2) ... 2667s Setting up libcommons-parent-java (56-1) ... 2667s Setting up libavahi-common3:s390x (0.8-13ubuntu6) ... 2667s Setting up libcommons-logging-java (1.3.0-1ubuntu1) ... 
2667s Setting up dconf-service (0.40.0-4build2) ... 2667s Setting up python3-gevent (24.2.1-1) ... 2668s Setting up libjackson2-databind-java (2.14.0-1) ... 2668s Setting up libthai0:s390x (0.1.29-2build1) ... 2668s Setting up python3-parse-type (0.6.2-1) ... 2668s Setting up python3-eventlet (0.35.2-0ubuntu1) ... 2668s Setting up libnetty-tcnative-jni (2.0.28-1build4) ... 2668s Setting up python3-kazoo (2.9.0-2) ... 2668s Setting up postgresql-common (261) ... 2669s 2669s Creating config file /etc/postgresql-common/createcluster.conf with new version 2669s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 2669s Removing obsolete dictionary files: 2670s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'. 2670s Setting up libjs-sphinxdoc (7.3.7-4) ... 2670s Setting up libwayland-cursor0:s390x (1.23.0-1) ... 2670s Setting up python3-behave (1.2.6-5) ... 2670s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\[' 2670s _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE) 2670s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d' 2670s """Registers a custom type that will be available to "parse" 2670s Setting up libsnappy-java (1.1.10.5-2) ... 2670s Setting up patroni (3.3.1-1) ... 2670s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'. 2671s Setting up libavahi-client3:s390x (0.8-13ubuntu6) ... 2671s Setting up libjnr-ffi-java (2.2.15-2) ... 2671s Setting up libatk-bridge2.0-0t64:s390x (2.52.0-1build1) ... 2671s Setting up libglx-mesa0:s390x (24.0.9-0ubuntu2) ... 2671s Setting up postgresql-16 (16.3-1) ... 2671s Creating new PostgreSQL cluster 16/main ... 2671s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions 2671s The files belonging to this database system will be owned by user "postgres". 2671s This user must also own the server process. 2671s 2671s The database cluster will be initialized with locale "C.UTF-8". 2671s The default database encoding has accordingly been set to "UTF8". 2671s The default text search configuration will be set to "english". 2671s 2671s Data page checksums are disabled. 2671s 2671s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok 2671s creating subdirectories ... ok 2671s selecting dynamic shared memory implementation ... posix 2671s selecting default max_connections ... 100 2671s selecting default shared_buffers ... 128MB 2671s selecting default time zone ... Etc/UTC 2671s creating configuration files ... ok 2672s running bootstrap script ... ok 2672s performing post-bootstrap initialization ... ok 2672s syncing data to disk ... ok 2675s Setting up libglx0:s390x (1.7.0-1build1) ... 2675s Setting up libspring-core-java (4.3.30-2) ... 2675s Setting up dconf-gsettings-backend:s390x (0.40.0-4build2) ... 2675s Setting up libcommons-io-java (2.16.1-1) ... 2675s Setting up patroni-doc (3.3.1-1) ... 2676s Setting up libpango-1.0-0:s390x (1.54.0+ds-1) ... 2676s Setting up libjnr-enxio-java (0.32.16-1) ... 2676s Setting up libgl1:s390x (1.7.0-1build1) ... 2676s Setting up postgresql (16+261) ... 2676s Setting up libpangoft2-1.0-0:s390x (1.54.0+ds-1) ... 2676s Setting up libcups2t64:s390x (2.4.7-1.2ubuntu9) ... 2676s Setting up libgtk-3-common (3.24.43-1ubuntu1) ... 
2676s Setting up libjnr-posix-java (3.1.18-1) ... 2676s Setting up libpangocairo-1.0-0:s390x (1.54.0+ds-1) ... 2676s Setting up libspring-beans-java (4.3.30-2) ... 2676s Setting up libjnr-unixsocket-java (0.38.21-2) ... 2676s Setting up libjetty9-extra-java (9.4.54-1) ... 2676s Setting up libguava-java (32.0.1-1) ... 2676s Setting up adwaita-icon-theme (46.0-1) ... 2676s update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode 2676s Setting up liberror-prone-java (2.18.0-1) ... 2676s Setting up humanity-icon-theme (0.6.16) ... 2676s Setting up ubuntu-mono (24.04-0ubuntu1) ... 2676s Processing triggers for man-db (2.12.1-2) ... 2685s Processing triggers for libglib2.0-0t64:s390x (2.81.0-1ubuntu2) ... 2685s Setting up libgtk-3-0t64:s390x (3.24.43-1ubuntu1) ... 2686s Processing triggers for libc-bin (2.39-0ubuntu9) ... 2686s Processing triggers for ca-certificates-java (20240118) ... 2686s Adding debian:ACCVRAIZ1.pem 2686s Adding debian:AC_RAIZ_FNMT-RCM.pem 2686s Adding debian:AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem 2686s Adding debian:ANF_Secure_Server_Root_CA.pem 2686s Adding debian:Actalis_Authentication_Root_CA.pem 2686s Adding debian:AffirmTrust_Commercial.pem 2686s Adding debian:AffirmTrust_Networking.pem 2686s Adding debian:AffirmTrust_Premium.pem 2686s Adding debian:AffirmTrust_Premium_ECC.pem 2686s Adding debian:Amazon_Root_CA_1.pem 2686s Adding debian:Amazon_Root_CA_2.pem 2686s Adding debian:Amazon_Root_CA_3.pem 2686s Adding debian:Amazon_Root_CA_4.pem 2686s Adding debian:Atos_TrustedRoot_2011.pem 2686s Adding debian:Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem 2686s Adding debian:Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem 2686s Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem 2686s Adding debian:BJCA_Global_Root_CA1.pem 2686s Adding debian:BJCA_Global_Root_CA2.pem 2686s Adding debian:Baltimore_CyberTrust_Root.pem 2686s Adding debian:Buypass_Class_2_Root_CA.pem 2686s Adding debian:Buypass_Class_3_Root_CA.pem 2686s Adding debian:CA_Disig_Root_R2.pem 2686s Adding debian:CFCA_EV_ROOT.pem 2686s Adding debian:COMODO_Certification_Authority.pem 2686s Adding debian:COMODO_ECC_Certification_Authority.pem 2686s Adding debian:COMODO_RSA_Certification_Authority.pem 2686s Adding debian:Certainly_Root_E1.pem 2686s Adding debian:Certainly_Root_R1.pem 2686s Adding debian:Certigna.pem 2686s Adding debian:Certigna_Root_CA.pem 2686s Adding debian:Certum_EC-384_CA.pem 2686s Adding debian:Certum_Trusted_Network_CA.pem 2686s Adding debian:Certum_Trusted_Network_CA_2.pem 2686s Adding debian:Certum_Trusted_Root_CA.pem 2686s Adding debian:CommScope_Public_Trust_ECC_Root-01.pem 2686s Adding debian:CommScope_Public_Trust_ECC_Root-02.pem 2686s Adding debian:CommScope_Public_Trust_RSA_Root-01.pem 2686s Adding debian:CommScope_Public_Trust_RSA_Root-02.pem 2686s Adding debian:Comodo_AAA_Services_root.pem 2686s Adding debian:D-TRUST_BR_Root_CA_1_2020.pem 2686s Adding debian:D-TRUST_EV_Root_CA_1_2020.pem 2686s Adding debian:D-TRUST_Root_Class_3_CA_2_2009.pem 2686s Adding debian:D-TRUST_Root_Class_3_CA_2_EV_2009.pem 2686s Adding debian:DigiCert_Assured_ID_Root_CA.pem 2686s Adding debian:DigiCert_Assured_ID_Root_G2.pem 2686s Adding debian:DigiCert_Assured_ID_Root_G3.pem 2686s Adding debian:DigiCert_Global_Root_CA.pem 2686s Adding debian:DigiCert_Global_Root_G2.pem 2686s Adding debian:DigiCert_Global_Root_G3.pem 2686s Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem 2686s Adding 
debian:DigiCert_TLS_ECC_P384_Root_G5.pem 2686s Adding debian:DigiCert_TLS_RSA4096_Root_G5.pem 2686s Adding debian:DigiCert_Trusted_Root_G4.pem 2686s Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem 2686s Adding debian:Entrust_Root_Certification_Authority.pem 2686s Adding debian:Entrust_Root_Certification_Authority_-_EC1.pem 2686s Adding debian:Entrust_Root_Certification_Authority_-_G2.pem 2686s Adding debian:Entrust_Root_Certification_Authority_-_G4.pem 2686s Adding debian:GDCA_TrustAUTH_R5_ROOT.pem 2686s Adding debian:GLOBALTRUST_2020.pem 2686s Adding debian:GTS_Root_R1.pem 2686s Adding debian:GTS_Root_R2.pem 2686s Adding debian:GTS_Root_R3.pem 2686s Adding debian:GTS_Root_R4.pem 2686s Adding debian:GlobalSign_ECC_Root_CA_-_R4.pem 2686s Adding debian:GlobalSign_ECC_Root_CA_-_R5.pem 2686s Adding debian:GlobalSign_Root_CA.pem 2686s Adding debian:GlobalSign_Root_CA_-_R3.pem 2686s Adding debian:GlobalSign_Root_CA_-_R6.pem 2686s Adding debian:GlobalSign_Root_E46.pem 2686s Adding debian:GlobalSign_Root_R46.pem 2686s Adding debian:Go_Daddy_Class_2_CA.pem 2686s Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem 2686s Adding debian:HARICA_TLS_ECC_Root_CA_2021.pem 2686s Adding debian:HARICA_TLS_RSA_Root_CA_2021.pem 2686s Adding debian:Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem 2686s Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem 2686s Adding debian:HiPKI_Root_CA_-_G1.pem 2686s Adding debian:Hongkong_Post_Root_CA_3.pem 2686s Adding debian:ISRG_Root_X1.pem 2686s Adding debian:ISRG_Root_X2.pem 2686s Adding debian:IdenTrust_Commercial_Root_CA_1.pem 2686s Adding debian:IdenTrust_Public_Sector_Root_CA_1.pem 2686s Adding debian:Izenpe.com.pem 2686s Adding debian:Microsec_e-Szigno_Root_CA_2009.pem 2686s Adding debian:Microsoft_ECC_Root_Certificate_Authority_2017.pem 2686s Adding debian:Microsoft_RSA_Root_Certificate_Authority_2017.pem 2686s Adding debian:NAVER_Global_Root_Certification_Authority.pem 2686s Adding debian:NetLock_Arany_=Class_Gold=_Főtanúsítvány.pem 2686s Adding debian:OISTE_WISeKey_Global_Root_GB_CA.pem 2686s Adding debian:OISTE_WISeKey_Global_Root_GC_CA.pem 2686s Adding debian:QuoVadis_Root_CA_1_G3.pem 2686s Adding debian:QuoVadis_Root_CA_2.pem 2686s Adding debian:QuoVadis_Root_CA_2_G3.pem 2686s Adding debian:QuoVadis_Root_CA_3.pem 2686s Adding debian:QuoVadis_Root_CA_3_G3.pem 2686s Adding debian:SSL.com_EV_Root_Certification_Authority_ECC.pem 2686s Adding debian:SSL.com_EV_Root_Certification_Authority_RSA_R2.pem 2686s Adding debian:SSL.com_Root_Certification_Authority_ECC.pem 2686s Adding debian:SSL.com_Root_Certification_Authority_RSA.pem 2686s Adding debian:SSL.com_TLS_ECC_Root_CA_2022.pem 2686s Adding debian:SSL.com_TLS_RSA_Root_CA_2022.pem 2686s Adding debian:SZAFIR_ROOT_CA2.pem 2686s Adding debian:Sectigo_Public_Server_Authentication_Root_E46.pem 2686s Adding debian:Sectigo_Public_Server_Authentication_Root_R46.pem 2686s Adding debian:SecureSign_RootCA11.pem 2686s Adding debian:SecureTrust_CA.pem 2686s Adding debian:Secure_Global_CA.pem 2686s Adding debian:Security_Communication_ECC_RootCA1.pem 2686s Adding debian:Security_Communication_RootCA2.pem 2686s Adding debian:Security_Communication_RootCA3.pem 2686s Adding debian:Security_Communication_Root_CA.pem 2686s Adding debian:Starfield_Class_2_CA.pem 2686s Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem 2686s Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem 2686s Adding debian:SwissSign_Gold_CA_-_G2.pem 2686s Adding 
debian:SwissSign_Silver_CA_-_G2.pem 2686s Adding debian:T-TeleSec_GlobalRoot_Class_2.pem 2686s Adding debian:T-TeleSec_GlobalRoot_Class_3.pem 2686s Adding debian:TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem 2686s Adding debian:TWCA_Global_Root_CA.pem 2686s Adding debian:TWCA_Root_Certification_Authority.pem 2686s Adding debian:TeliaSonera_Root_CA_v1.pem 2686s Adding debian:Telia_Root_CA_v2.pem 2686s Adding debian:TrustAsia_Global_Root_CA_G3.pem 2686s Adding debian:TrustAsia_Global_Root_CA_G4.pem 2686s Adding debian:Trustwave_Global_Certification_Authority.pem 2686s Adding debian:Trustwave_Global_ECC_P256_Certification_Authority.pem 2686s Adding debian:Trustwave_Global_ECC_P384_Certification_Authority.pem 2686s Adding debian:TunTrust_Root_CA.pem 2686s Adding debian:UCA_Extended_Validation_Root.pem 2686s Adding debian:UCA_Global_G2_Root.pem 2686s Adding debian:USERTrust_ECC_Certification_Authority.pem 2686s Adding debian:USERTrust_RSA_Certification_Authority.pem 2686s Adding debian:XRamp_Global_CA_Root.pem 2686s Adding debian:certSIGN_ROOT_CA.pem 2686s Adding debian:certSIGN_Root_CA_G2.pem 2686s Adding debian:e-Szigno_Root_CA_2017.pem 2686s Adding debian:ePKI_Root_Certification_Authority.pem 2686s Adding debian:emSign_ECC_Root_CA_-_C3.pem 2686s Adding debian:emSign_ECC_Root_CA_-_G3.pem 2686s Adding debian:emSign_Root_CA_-_C1.pem 2686s Adding debian:emSign_Root_CA_-_G1.pem 2686s Adding debian:vTrus_ECC_Root_CA.pem 2686s Adding debian:vTrus_Root_CA.pem 2686s done. 2686s Setting up openjdk-21-jre:s390x (21.0.4+7-1ubuntu2) ... 2686s Setting up junit4 (4.13.2-4) ... 2686s Setting up default-jre-headless (2:1.21-75+exp1) ... 2686s Setting up default-jre (2:1.21-75+exp1) ... 2686s Setting up libnetty-tcnative-java (2.0.28-1build4) ... 2686s Setting up libzookeeper-java (3.9.2-2) ... 2686s Setting up zookeeper (3.9.2-2) ... 2686s warn: The home directory `/var/lib/zookeeper' already exists. Not touching this directory. 2686s warn: Warning: The home directory `/var/lib/zookeeper' does not belong to the user you are currently creating. 2686s update-alternatives: using /etc/zookeeper/conf_example to provide /etc/zookeeper/conf (zookeeper-conf) in auto mode 2686s Setting up zookeeperd (3.9.2-2) ... 2687s Setting up autopkgtest-satdep (0) ... 2701s (Reading database ... 74724 files and directories currently installed.) 2701s Removing autopkgtest-satdep (0) ... 2707s autopkgtest [23:14:46]: test acceptance-zookeeper: debian/tests/acceptance zookeeper "-e dcs_failsafe_mode" 2707s autopkgtest [23:14:46]: test acceptance-zookeeper: [----------------------- 2717s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation) 2717s ++ ls -1r /usr/lib/postgresql/ 2717s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/) 2717s + '[' 16 == 10 -o 16 == 11 ']' 2717s + echo '### PostgreSQL 16 acceptance-zookeeper -e dcs_failsafe_mode ###' 2717s + su postgres -p -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=zookeeper PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave -e dcs_failsafe_mode | ts' 2717s ### PostgreSQL 16 acceptance-zookeeper -e dcs_failsafe_mode ### 2717s Jul 30 23:14:56 Feature: basic replication # features/basic_replication.feature:1 2717s Jul 30 23:14:56 We should check that the basic bootstrapping, replication and failover works. 
2717s Jul 30 23:14:56 Scenario: check replication of a single table # features/basic_replication.feature:4 2717s Jul 30 23:14:56 Given I start postgres0 # features/steps/basic_replication.py:8 2721s Jul 30 23:14:59 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2722s Jul 30 23:15:00 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2722s Jul 30 23:15:00 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71 2722s Jul 30 23:15:01 Then I receive a response code 200 # features/steps/patroni_api.py:98 2722s Jul 30 23:15:01 When I start postgres1 # features/steps/basic_replication.py:8 2725s Jul 30 23:15:04 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7 2728s Jul 30 23:15:07 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23 2728s Jul 30 23:15:07 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 2728s Jul 30 23:15:07 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 2729s Jul 30 23:15:08 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 2729s Jul 30 23:15:08 2729s Jul 30 23:15:08 Scenario: check restart of sync replica # features/basic_replication.feature:17 2729s Jul 30 23:15:08 Given I shut down postgres2 # features/steps/basic_replication.py:29 2730s Jul 30 23:15:09 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23 2730s Jul 30 23:15:09 When I start postgres2 # features/steps/basic_replication.py:8 2733s Jul 30 23:15:12 And I shut down postgres1 # features/steps/basic_replication.py:29 2736s Jul 30 23:15:15 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 2736s Jul 30 23:15:15 When I start postgres1 # features/steps/basic_replication.py:8 2739s Jul 30 23:15:18 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2739s Jul 30 23:15:18 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 2739s Jul 30 23:15:18 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 2739s Jul 30 23:15:18 2739s Jul 30 23:15:18 Scenario: check stuck sync replica # features/basic_replication.feature:28 2739s Jul 30 23:15:18 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71 2739s Jul 30 23:15:18 Then I receive a response code 200 # features/steps/patroni_api.py:98 2739s Jul 30 23:15:18 And I create table on postgres0 # features/steps/basic_replication.py:73 2739s Jul 30 23:15:18 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93 2740s Jul 30 23:15:19 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93 2740s Jul 30 23:15:19 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64 2740s Jul 30 23:15:19 And I load data on postgres0 # features/steps/basic_replication.py:84 2741s Jul 30 23:15:20 Then "sync" key in DCS has 
sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23 2744s Jul 30 23:15:23 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64 2744s Jul 30 23:15:23 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 2745s Jul 30 23:15:24 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142 2745s Jul 30 23:15:24 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 2745s Jul 30 23:15:24 Then I receive a response code 200 # features/steps/patroni_api.py:98 2745s Jul 30 23:15:24 And I drop table on postgres0 # features/steps/basic_replication.py:73 2745s Jul 30 23:15:24 2745s Jul 30 23:15:24 Scenario: check multi sync replication # features/basic_replication.feature:44 2745s Jul 30 23:15:24 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 2745s Jul 30 23:15:24 Then I receive a response code 200 # features/steps/patroni_api.py:98 2745s Jul 30 23:15:24 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 2749s Jul 30 23:15:28 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 2750s Jul 30 23:15:28 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 2750s Jul 30 23:15:29 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 2750s Jul 30 23:15:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 2750s Jul 30 23:15:29 And I shut down postgres1 # features/steps/basic_replication.py:29 2753s Jul 30 23:15:32 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 2754s Jul 30 23:15:33 When I start postgres1 # features/steps/basic_replication.py:8 2757s Jul 30 23:15:36 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2757s Jul 30 23:15:36 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 2757s Jul 30 23:15:36 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 2757s Jul 30 23:15:36 2757s Jul 30 23:15:36 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 2757s Jul 30 23:15:36 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 2758s Jul 30 23:15:37 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2758s Jul 30 23:15:37 When I sleep for 2 seconds # features/steps/patroni_api.py:39 2760s Jul 30 23:15:39 And I shut down postgres0 # features/steps/basic_replication.py:29 2761s Jul 30 23:15:40 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 2763s Jul 30 23:15:42 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2763s Jul 30 23:15:42 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 2784s Jul 30 23:16:03 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 2784s Jul 
30 23:16:03 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 2784s Jul 30 23:16:03 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 2784s Jul 30 23:16:03 Then I receive a response code 200 # features/steps/patroni_api.py:98 2784s Jul 30 23:16:03 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 2784s Jul 30 23:16:03 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 2784s Jul 30 23:16:03 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 2784s Jul 30 23:16:03 2784s Jul 30 23:16:03 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 2784s Jul 30 23:16:03 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 2784s Jul 30 23:16:03 And I start postgres0 # features/steps/basic_replication.py:8 2784s Jul 30 23:16:03 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 2791s Jul 30 23:16:10 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 2791s Jul 30 23:16:10 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 2791s Jul 30 23:16:10 2791s Jul 30 23:16:10 @reject-duplicate-name 2791s Jul 30 23:16:10 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 2791s Jul 30 23:16:10 Given I start duplicate postgres0 on port 8011 # features/steps/basic_replication.py:13 2793s Jul 30 23:16:12 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # features/steps/basic_replication.py:121 2798s Jul 30 23:16:17 2798s Jul 30 23:16:17 Feature: cascading replication # features/cascading_replication.feature:1 2798s Jul 30 23:16:17 We should check that patroni can do base backup and streaming from the replica 2798s Jul 30 23:16:17 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4 2798s Jul 30 23:16:17 Given I start postgres0 # features/steps/basic_replication.py:8 2801s Jul 30 23:16:20 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2802s Jul 30 23:16:21 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7 2805s Jul 30 23:16:24 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2806s Jul 30 23:16:25 And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18 2806s Jul 30 23:16:25 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18 2806s Jul 30 23:16:25 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 2806s Jul 30 23:16:25 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 2809s Jul 30 23:16:28 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112 2810s Jul 30 23:16:29 And there is a label with "postgres1" in postgres2 data directory # 
features/steps/cascading_replication.py:12 2815s Jul 30 23:16:34 2815s SKIP FEATURE citus: Citus extenstion isn't available 2815s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available 2815s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available 2815s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available 2815s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available 2815s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available 2815s Jul 30 23:16:34 Feature: citus # features/citus.feature:1 2815s Jul 30 23:16:34 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over 2815s Jul 30 23:16:34 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4 2815s Jul 30 23:16:34 Given I start postgres0 in citus group 0 # None 2815s Jul 30 23:16:34 And I start postgres2 in citus group 1 # None 2815s Jul 30 23:16:34 Then postgres0 is a leader in a group 0 after 10 seconds # None 2815s Jul 30 23:16:34 And postgres2 is a leader in a group 1 after 10 seconds # None 2815s Jul 30 23:16:34 When I start postgres1 in citus group 0 # None 2815s Jul 30 23:16:34 And I start postgres3 in citus group 1 # None 2815s Jul 30 23:16:34 Then replication works from postgres0 to postgres1 after 15 seconds # None 2815s Jul 30 23:16:34 Then replication works from postgres2 to postgres3 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None 2815s Jul 30 23:16:34 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 2815s Jul 30 23:16:34 2815s Jul 30 23:16:34 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16 2815s Jul 30 23:16:34 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None 2815s Jul 30 23:16:34 Then postgres1 role is the primary after 10 seconds # None 2815s Jul 30 23:16:34 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None 2815s Jul 30 23:16:34 And replication works from postgres1 to postgres0 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 2815s Jul 30 23:16:34 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None 2815s Jul 30 23:16:34 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None 2815s Jul 30 23:16:34 Then postgres0 role is the primary after 10 seconds # None 2815s Jul 30 23:16:34 And replication works from postgres0 to postgres1 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 2815s Jul 30 23:16:34 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None 2815s Jul 30 23:16:34 2815s Jul 30 23:16:34 Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29 2815s Jul 30 23:16:34 Given I create a distributed table on postgres0 # None 2815s Jul 30 23:16:34 And I start a thread inserting data on postgres0 # None 2815s Jul 30 23:16:34 When I run 
patronictl.py switchover batman --group 1 --force # None 2815s Jul 30 23:16:34 Then I receive a response returncode 0 # None 2815s Jul 30 23:16:34 And postgres3 role is the primary after 10 seconds # None 2815s Jul 30 23:16:34 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None 2815s Jul 30 23:16:34 And replication works from postgres3 to postgres2 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 2815s Jul 30 23:16:34 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None 2815s Jul 30 23:16:34 And a thread is still alive # None 2815s Jul 30 23:16:34 When I run patronictl.py switchover batman --group 1 --force # None 2815s Jul 30 23:16:34 Then I receive a response returncode 0 # None 2815s Jul 30 23:16:34 And postgres2 role is the primary after 10 seconds # None 2815s Jul 30 23:16:34 And replication works from postgres2 to postgres3 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 2815s Jul 30 23:16:34 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None 2815s Jul 30 23:16:34 And a thread is still alive # None 2815s Jul 30 23:16:34 When I stop a thread # None 2815s Jul 30 23:16:34 Then a distributed table on postgres0 has expected rows # None 2815s Jul 30 23:16:34 2815s Jul 30 23:16:34 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50 2815s Jul 30 23:16:34 Given I cleanup a distributed table on postgres0 # None 2815s Jul 30 23:16:34 And I start a thread inserting data on postgres0 # None 2815s Jul 30 23:16:34 When I run patronictl.py restart batman postgres2 --group 1 --force # None 2815s Jul 30 23:16:34 Then I receive a response returncode 0 # None 2815s Jul 30 23:16:34 And postgres2 role is the primary after 10 seconds # None 2815s Jul 30 23:16:34 And replication works from postgres2 to postgres3 after 15 seconds # None 2815s Jul 30 23:16:34 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 2815s Jul 30 23:16:34 And a thread is still alive # None 2815s Jul 30 23:16:34 When I stop a thread # None 2815s Jul 30 23:16:34 Then a distributed table on postgres0 has expected rows # None 2815s Jul 30 23:16:34 2815s Jul 30 23:16:34 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62 2815s Jul 30 23:16:34 Given I start postgres4 in citus group 2 # None 2815s Jul 30 23:16:34 Then postgres4 is a leader in a group 2 after 10 seconds # None 2815s Jul 30 23:16:34 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None 2815s Jul 30 23:16:34 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None 2815s Jul 30 23:16:34 Then I receive a response returncode 0 # None 2815s Jul 30 23:16:34 And I receive a response output "+ttl: 20" # None 2815s Jul 30 23:16:34 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None 2815s Jul 30 23:16:34 When I shut down postgres4 # None 2815s Jul 30 23:16:34 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None 2815s Jul 30 23:16:34 When I run patronictl.py restart batman postgres2 --group 1 --force # None 2815s Jul 30 23:16:34 Then a transaction finishes in 20 seconds # 
None 2815s Jul 30 23:16:34 2815s Jul 30 23:16:34 Feature: custom bootstrap # features/custom_bootstrap.feature:1 2815s Jul 30 23:16:34 We should check that patroni can bootstrap a new cluster from a backup 2815s Jul 30 23:16:34 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4 2815s Jul 30 23:16:34 Given I start postgres0 # features/steps/basic_replication.py:8 2818s Jul 30 23:16:37 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2819s Jul 30 23:16:38 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 2819s Jul 30 23:16:38 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6 2823s Jul 30 23:16:42 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 2824s Jul 30 23:16:43 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93 2824s Jul 30 23:16:43 2824s Jul 30 23:16:43 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12 2824s Jul 30 23:16:43 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54 2824s Jul 30 23:16:43 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 2825s Jul 30 23:16:44 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11 2829s Jul 30 23:16:48 Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16 2830s Jul 30 23:16:49 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93 2836s Jul 30 23:16:55 2836s Jul 30 23:16:55 Feature: ignored slots # features/ignored_slots.feature:1 2836s Jul 30 23:16:55 2836s Jul 30 23:16:55 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2 2836s Jul 30 23:16:55 Given I start postgres1 # features/steps/basic_replication.py:8 2839s Jul 30 23:16:58 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2840s Jul 30 23:16:59 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2840s Jul 30 23:16:59 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 2840s Jul 30 23:16:59 Then I receive a response code 200 # features/steps/patroni_api.py:98 2840s Jul 30 23:16:59 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156 2840s Jul 30 23:16:59 When I shut down postgres1 # features/steps/basic_replication.py:29 2842s Jul 30 23:17:01 And I start postgres1 # features/steps/basic_replication.py:8 2845s Jul 30 23:17:04 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 2845s Jul 30 23:17:04 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 2846s Jul 30 23:17:05 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105 2846s Jul 30 23:17:05 When I create a logical replication slot unmanaged_slot_0 on postgres1 with 
the test_decoding plugin # features/steps/slots.py:8 2847s Jul 30 23:17:06 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 2847s Jul 30 23:17:06 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 2847s Jul 30 23:17:06 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 2847s Jul 30 23:17:06 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8 2847s Jul 30 23:17:06 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2847s Jul 30 23:17:06 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2847s Jul 30 23:17:06 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2847s Jul 30 23:17:06 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2847s Jul 30 23:17:06 When I start postgres0 # features/steps/basic_replication.py:8 2850s Jul 30 23:17:09 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 2851s Jul 30 23:17:10 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 2851s Jul 30 23:17:10 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 2852s Jul 30 23:17:11 When I shut down postgres1 # features/steps/basic_replication.py:29 2854s Jul 30 23:17:13 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 2855s Jul 30 23:17:14 When I start postgres1 # features/steps/basic_replication.py:8 2858s Jul 30 23:17:17 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 2858s Jul 30 23:17:17 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 2858s Jul 30 23:17:17 And I sleep for 2 seconds # features/steps/patroni_api.py:39 2860s Jul 30 23:17:19 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2860s Jul 30 23:17:19 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2860s Jul 30 23:17:19 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2860s Jul 30 23:17:19 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2860s Jul 30 23:17:19 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40 2860s Jul 30 23:17:19 When I shut down postgres0 # features/steps/basic_replication.py:29 2862s Jul 30 23:17:21 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 2863s Jul 30 23:17:22 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 
seconds # features/steps/slots.py:19 2863s Jul 30 23:17:22 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2863s Jul 30 23:17:22 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2863s Jul 30 23:17:22 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 2865s Jul 30 23:17:24 2865s Jul 30 23:17:24 Feature: nostream node # features/nostream_node.feature:1 2865s Jul 30 23:17:24 2865s Jul 30 23:17:24 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3 2865s Jul 30 23:17:24 When I start postgres0 # features/steps/basic_replication.py:8 2868s Jul 30 23:17:27 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7 2871s Jul 30 23:17:30 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23 2872s Jul 30 23:17:31 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112 2877s Jul 30 23:17:36 2877s Jul 30 23:17:36 @slot-advance 2877s Jul 30 23:17:36 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10 2877s Jul 30 23:17:36 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 2877s Jul 30 23:17:36 Then I receive a response code 200 # features/steps/patroni_api.py:98 2877s Jul 30 23:17:36 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2880s Jul 30 23:17:39 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 2881s Jul 30 23:17:40 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 2886s Jul 30 23:17:44 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 2892s Jul 30 23:17:51 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 2892s Jul 30 23:17:51 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 2897s Jul 30 23:17:56 2897s Jul 30 23:17:56 Feature: patroni api # features/patroni_api.feature:1 2897s Jul 30 23:17:56 We should check that patroni correctly responds to valid and not-valid API requests. 
2897s Jul 30 23:17:56 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 2897s Jul 30 23:17:56 Given I start postgres0 # features/steps/basic_replication.py:8 2900s Jul 30 23:17:59 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2901s Jul 30 23:18:00 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 2901s Jul 30 23:18:00 Then I receive a response code 200 # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 And I receive a response state running # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 And I receive a response role master # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 2901s Jul 30 23:18:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 2901s Jul 30 23:18:00 Then I receive a response code 200 # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2901s Jul 30 23:18:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 2901s Jul 30 23:18:00 Then I receive a response code 503 # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 2901s Jul 30 23:18:00 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 2903s Jul 30 23:18:02 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 2903s Jul 30 23:18:02 Then I receive a response code 412 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 2903s Jul 30 23:18:02 Then I receive a response code 400 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 2903s Jul 30 23:18:02 Then I receive a response code 400 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 2903s Jul 30 23:18:02 Scenario: check local configuration reload # features/patroni_api.feature:32 2903s Jul 30 23:18:02 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 2903s Jul 30 23:18:02 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 2903s Jul 30 23:18:02 Then I receive a response code 202 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 2903s Jul 30 23:18:02 Scenario: check dynamic 
configuration change via DCS # features/patroni_api.feature:37 2903s Jul 30 23:18:02 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 2903s Jul 30 23:18:02 Then I receive a response code 200 # features/steps/patroni_api.py:98 2903s Jul 30 23:18:02 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156 2905s Jul 30 23:18:04 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 2905s Jul 30 23:18:04 Then I receive a response code 200 # features/steps/patroni_api.py:98 2905s Jul 30 23:18:04 And I receive a response ttl 20 # features/steps/patroni_api.py:98 2905s Jul 30 23:18:04 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 2905s Jul 30 23:18:04 Then I receive a response code 200 # features/steps/patroni_api.py:98 2905s Jul 30 23:18:04 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 2905s Jul 30 23:18:04 And I sleep for 4 seconds # features/steps/patroni_api.py:39 2909s Jul 30 23:18:08 2909s Jul 30 23:18:08 Scenario: check the scheduled restart # features/patroni_api.feature:49 2909s Jul 30 23:18:08 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 2911s Jul 30 23:18:10 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2911s Jul 30 23:18:10 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 2911s Jul 30 23:18:10 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 2911s Jul 30 23:18:10 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 2911s Jul 30 23:18:10 Then I receive a response code 202 # features/steps/patroni_api.py:98 2911s Jul 30 23:18:10 And I sleep for 8 seconds # features/steps/patroni_api.py:39 2919s Jul 30 23:18:18 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 2919s Jul 30 23:18:18 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124 2919s Jul 30 23:18:18 Then I receive a response code 202 # features/steps/patroni_api.py:98 2919s Jul 30 23:18:18 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 2926s Jul 30 23:18:25 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2927s Jul 30 23:18:26 2927s Jul 30 23:18:26 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63 2927s Jul 30 23:18:26 Given I start postgres1 # features/steps/basic_replication.py:8 2930s Jul 30 23:18:29 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2931s Jul 30 23:18:30 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 2933s Jul 30 23:18:32 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2933s Jul 30 23:18:32 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 2933s Jul 30 23:18:32 waiting for server to shut 
down.... done 2933s Jul 30 23:18:32 server stopped 2933s Jul 30 23:18:32 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2933s Jul 30 23:18:32 Then I receive a response code 503 # features/steps/patroni_api.py:98 2933s Jul 30 23:18:32 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 2934s Jul 30 23:18:33 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 2937s Jul 30 23:18:36 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2937s Jul 30 23:18:36 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2938s Jul 30 23:18:37 And I sleep for 2 seconds # features/steps/patroni_api.py:39 2940s Jul 30 23:18:39 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2940s Jul 30 23:18:39 Then I receive a response code 200 # features/steps/patroni_api.py:98 2940s Jul 30 23:18:39 And I receive a response state running # features/steps/patroni_api.py:98 2940s Jul 30 23:18:39 And I receive a response role replica # features/steps/patroni_api.py:98 2940s Jul 30 23:18:39 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 2944s Jul 30 23:18:43 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2944s Jul 30 23:18:43 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 2944s Jul 30 23:18:43 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 2945s Jul 30 23:18:44 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 2945s Jul 30 23:18:44 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2948s Jul 30 23:18:47 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2948s Jul 30 23:18:47 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 2948s Jul 30 23:18:47 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 2949s Jul 30 23:18:48 2949s Jul 30 23:18:48 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 2949s Jul 30 23:18:48 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 2951s Jul 30 23:18:50 Then I receive a response code 200 # features/steps/patroni_api.py:98 2951s Jul 30 23:18:50 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 2951s Jul 30 23:18:50 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2951s Jul 30 23:18:50 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 2956s Jul 30 23:18:55 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 2956s Jul 30 23:18:55 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2957s Jul 30 23:18:56 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 2957s Jul 30 23:18:56 Then I receive a response code 503 # features/steps/patroni_api.py:98 2957s Jul 30 23:18:56 When I issue a GET request to 
http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2957s Jul 30 23:18:56 Then I receive a response code 200 # features/steps/patroni_api.py:98 2957s Jul 30 23:18:56 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2957s Jul 30 23:18:56 Then I receive a response code 200 # features/steps/patroni_api.py:98 2957s Jul 30 23:18:56 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2957s Jul 30 23:18:56 Then I receive a response code 503 # features/steps/patroni_api.py:98 2957s Jul 30 23:18:56 2957s Jul 30 23:18:56 Scenario: check the scheduled switchover # features/patroni_api.feature:107 2957s Jul 30 23:18:56 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 2959s Jul 30 23:18:58 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 2959s Jul 30 23:18:58 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98 2959s Jul 30 23:18:58 When I run patronictl.py resume batman # features/steps/patroni_api.py:86 2961s Jul 30 23:19:00 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2961s Jul 30 23:19:00 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117 2962s Jul 30 23:19:01 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 2962s Jul 30 23:19:01 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29 2972s Jul 30 23:19:11 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 2972s Jul 30 23:19:11 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 2975s Jul 30 23:19:14 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112 2975s Jul 30 23:19:14 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 2976s Jul 30 23:19:15 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 2977s Jul 30 23:19:15 Then I receive a response code 200 # features/steps/patroni_api.py:98 2977s Jul 30 23:19:15 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 2977s Jul 30 23:19:16 Then I receive a response code 503 # features/steps/patroni_api.py:98 2977s Jul 30 23:19:16 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 2977s Jul 30 23:19:16 Then I receive a response code 503 # features/steps/patroni_api.py:98 2977s Jul 30 23:19:16 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 2977s Jul 30 23:19:16 Then I receive a response code 200 # features/steps/patroni_api.py:98 2981s Jul 30 23:19:20 2981s Jul 30 23:19:20 Feature: permanent slots # features/permanent_slots.feature:1 2981s Jul 30 23:19:20 2981s Jul 30 23:19:20 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2 2981s Jul 30 23:19:20 Given I start postgres0 # features/steps/basic_replication.py:8 2984s Jul 30 23:19:23 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 2984s Jul 30 23:19:23 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 2984s Jul 30 23:19:23 When I issue a PATCH request to 
http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71 2984s Jul 30 23:19:23 Then I receive a response code 200 # features/steps/patroni_api.py:98 2984s Jul 30 23:19:23 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156 2984s Jul 30 23:19:23 When I start postgres1 # features/steps/basic_replication.py:8 2987s Jul 30 23:19:26 And I start postgres2 # features/steps/basic_replication.py:8 2990s Jul 30 23:19:29 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7 2993s Jul 30 23:19:32 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 2993s Jul 30 23:19:32 And postgres0 has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80 2993s Jul 30 23:19:32 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80 2993s Jul 30 23:19:32 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 2993s Jul 30 23:19:32 2993s Jul 30 23:19:32 @slot-advance 2993s Jul 30 23:19:32 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18 2993s Jul 30 23:19:32 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 2997s Jul 30 23:19:36 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 2998s Jul 30 23:19:36 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 2999s Jul 30 23:19:38 2999s Jul 30 23:19:38 @slot-advance 2999s Jul 30 23:19:38 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24 2999s Jul 30 23:19:38 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 3004s Jul 30 23:19:43 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3004s Jul 30 23:19:43 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 3005s Jul 30 23:19:44 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 3006s Jul 30 23:19:45 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 3006s Jul 30 23:19:45 @slot-advance 3006s Jul 30 23:19:45 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34 3006s Jul 30 23:19:45 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 
23:19:45 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80 3006s Jul 30 23:19:45 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 3006s Jul 30 23:19:45 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 3006s Jul 30 23:19:45 3006s Jul 30 23:19:45 @slot-advance 3006s Jul 30 23:19:45 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45 3006s Jul 30 23:19:45 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54 3006s Jul 30 23:19:45 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70 3006s Jul 30 23:19:45 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75 3006s Jul 30 23:19:45 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51 3010s Jul 30 23:19:49 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40 3011s Jul 30 23:19:49 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40 3011s Jul 30 23:19:49 3011s Jul 30 23:19:49 @slot-advance 3011s Jul 30 23:19:49 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62 3011s Jul 30 23:19:49 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96 3011s Jul 30 23:19:49 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96 3011s Jul 30 23:19:49 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96 3011s Jul 30 23:19:49 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102 3011s Jul 30 23:19:49 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96 3011s Jul 30 23:19:49 3011s Jul 30 23:19:49 Scenario: check permanent physical replication slot 
after failover # features/permanent_slots.feature:69 3011s Jul 30 23:19:49 Given I shut down postgres3 # features/steps/basic_replication.py:29 3011s Jul 30 23:19:50 And I shut down postgres2 # features/steps/basic_replication.py:29 3012s Jul 30 23:19:51 And I shut down postgres0 # features/steps/basic_replication.py:29 3014s Jul 30 23:19:53 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80 3014s Jul 30 23:19:53 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80 3014s Jul 30 23:19:53 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80 3016s Jul 30 23:19:55 3016s Jul 30 23:19:55 Feature: priority replication # features/priority_failover.feature:1 3016s Jul 30 23:19:55 We should check that we can give nodes priority during failover 3016s Jul 30 23:19:55 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4 3016s Jul 30 23:19:55 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 3019s Jul 30 23:19:58 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7 3022s Jul 30 23:20:01 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 3023s Jul 30 23:20:02 When I shut down postgres0 # features/steps/basic_replication.py:29 3025s Jul 30 23:20:04 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 3027s Jul 30 23:20:06 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 3027s Jul 30 23:20:06 When I start postgres0 # features/steps/basic_replication.py:8 3030s Jul 30 23:20:09 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3031s Jul 30 23:20:10 3031s Jul 30 23:20:10 Scenario: check higher failover priority is respected # features/priority_failover.feature:14 3031s Jul 30 23:20:10 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7 3034s Jul 30 23:20:13 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7 3037s Jul 30 23:20:16 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112 3039s Jul 30 23:20:18 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112 3040s Jul 30 23:20:19 When I shut down postgres0 # features/steps/basic_replication.py:29 3042s Jul 30 23:20:21 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3043s Jul 30 23:20:22 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 3043s Jul 30 23:20:22 3043s Jul 30 23:20:22 Scenario: check conflicting configuration handling # features/priority_failover.feature:23 3043s Jul 30 23:20:22 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131 3043s Jul 30 23:20:22 And I issue an empty POST request to http://127.0.0.1:8010/reload # 
features/steps/patroni_api.py:66 3043s Jul 30 23:20:22 Then I receive a response code 202 # features/steps/patroni_api.py:98 3043s Jul 30 23:20:22 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121 3044s Jul 30 23:20:23 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23 3045s Jul 30 23:20:24 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71 3045s Jul 30 23:20:24 Then I receive a response code 412 # features/steps/patroni_api.py:98 3045s Jul 30 23:20:24 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98 3045s Jul 30 23:20:24 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131 3045s Jul 30 23:20:24 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66 3045s Jul 30 23:20:24 Then I receive a response code 202 # features/steps/patroni_api.py:98 3045s Jul 30 23:20:24 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121 3046s Jul 30 23:20:25 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23 3047s Jul 30 23:20:26 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71 3050s Jul 30 23:20:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 3050s Jul 30 23:20:29 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3054s Jul 30 23:20:33 3054s Jul 30 23:20:33 Feature: recovery # features/recovery.feature:1 3054s Jul 30 23:20:33 We want to check that crashed postgres is started back 3054s Jul 30 23:20:33 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4 3054s Jul 30 23:20:33 Given I start postgres0 # features/steps/basic_replication.py:8 3057s Jul 30 23:20:36 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3058s Jul 30 23:20:37 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 3058s Jul 30 23:20:37 When I start postgres1 # features/steps/basic_replication.py:8 3061s Jul 30 23:20:40 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 3062s Jul 30 23:20:40 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 3063s Jul 30 23:20:41 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 3063s Jul 30 23:20:42 waiting for server to shut down.... 
done 3063s Jul 30 23:20:42 server stopped 3063s Jul 30 23:20:42 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3065s Jul 30 23:20:44 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 3065s Jul 30 23:20:44 Then I receive a response code 200 # features/steps/patroni_api.py:98 3065s Jul 30 23:20:44 And I receive a response role master # features/steps/patroni_api.py:98 3065s Jul 30 23:20:44 And I receive a response timeline 1 # features/steps/patroni_api.py:98 3065s Jul 30 23:20:44 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 3066s Jul 30 23:20:45 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 3069s Jul 30 23:20:48 3069s Jul 30 23:20:48 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20 3069s Jul 30 23:20:48 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71 3069s Jul 30 23:20:48 Then I receive a response code 200 # features/steps/patroni_api.py:98 3069s Jul 30 23:20:48 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 3069s Jul 30 23:20:48 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44 3069s Jul 30 23:20:48 waiting for server to shut down.... done 3069s Jul 30 23:20:48 server stopped 3069s Jul 30 23:20:48 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 3071s Jul 30 23:20:50 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3075s Jul 30 23:20:54 3075s Jul 30 23:20:54 Feature: standby cluster # features/standby_cluster.feature:1 3075s Jul 30 23:20:54 3075s Jul 30 23:20:54 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2 3075s Jul 30 23:20:54 Given I start postgres1 # features/steps/basic_replication.py:8 3078s Jul 30 23:20:57 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 3079s Jul 30 23:20:58 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 3079s Jul 30 23:20:58 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 3079s Jul 30 23:20:58 Then I receive a response code 200 # features/steps/patroni_api.py:98 3079s Jul 30 23:20:58 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156 3079s Jul 30 23:20:58 And I sleep for 3 seconds # features/steps/patroni_api.py:39 3082s Jul 30 23:21:01 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71 3082s Jul 30 23:21:01 Then I receive a response code 200 # features/steps/patroni_api.py:98 3082s Jul 30 23:21:01 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 3083s Jul 30 23:21:02 When I start postgres0 # features/steps/basic_replication.py:8 3086s Jul 30 23:21:05 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3087s Jul 30 23:21:06 And replication works from postgres1 to 
postgres0 after 15 seconds # features/steps/basic_replication.py:112 3088s Jul 30 23:21:07 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 3088s Jul 30 23:21:07 Then I receive a response code 200 # features/steps/patroni_api.py:98 3088s Jul 30 23:21:07 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 3088s Jul 30 23:21:07 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 3088s Jul 30 23:21:07 3088s Jul 30 23:21:07 @slot-advance 3088s Jul 30 23:21:07 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22 3088s Jul 30 23:21:07 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 3091s Jul 30 23:21:10 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3098s Jul 30 23:21:17 3098s Jul 30 23:21:17 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26 3098s Jul 30 23:21:17 When I shut down postgres1 # features/steps/basic_replication.py:29 3100s Jul 30 23:21:19 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3100s Jul 30 23:21:19 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23 3101s Jul 30 23:21:20 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 3101s Jul 30 23:21:20 Then I receive a response code 200 # features/steps/patroni_api.py:98 3101s Jul 30 23:21:20 3101s Jul 30 23:21:20 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33 3101s Jul 30 23:21:20 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23 3104s Jul 30 23:21:23 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 3104s Jul 30 23:21:23 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 3104s Jul 30 23:21:23 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 3104s Jul 30 23:21:23 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61 3104s Jul 30 23:21:23 Then I receive a response code 200 # features/steps/patroni_api.py:98 3104s Jul 30 23:21:23 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 3104s Jul 30 23:21:23 And I sleep for 3 seconds # features/steps/patroni_api.py:39 3107s Jul 30 23:21:26 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 3107s Jul 30 23:21:26 Then I receive a response code 503 # features/steps/patroni_api.py:98 3107s Jul 30 23:21:26 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61 3107s Jul 30 23:21:26 Then I receive a response code 200 # features/steps/patroni_api.py:98 3107s Jul 30 23:21:26 And I receive a response role standby_leader # features/steps/patroni_api.py:98 3107s Jul 30 23:21:26 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 3107s Jul 30 23:21:26 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12 3110s Jul 30 23:21:29 Then postgres2 role is the replica after 24 seconds # 
features/steps/basic_replication.py:105 3110s Jul 30 23:21:29 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52 3110s Jul 30 23:21:29 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 3110s Jul 30 23:21:29 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61 3110s Jul 30 23:21:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 3110s Jul 30 23:21:29 And I receive a response replication_state streaming # features/steps/patroni_api.py:98 3110s Jul 30 23:21:29 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 3110s Jul 30 23:21:29 3110s Jul 30 23:21:29 Scenario: check switchover # features/standby_cluster.feature:57 3110s Jul 30 23:21:29 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86 3114s Jul 30 23:21:33 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 3114s Jul 30 23:21:33 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52 3116s Jul 30 23:21:35 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12 3116s Jul 30 23:21:35 3116s Jul 30 23:21:35 Scenario: check failover # features/standby_cluster.feature:63 3116s Jul 30 23:21:35 When I kill postgres2 # features/steps/basic_replication.py:34 3117s Jul 30 23:21:36 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44 3117s Jul 30 23:21:36 waiting for server to shut down.... done 3117s Jul 30 23:21:36 server stopped 3117s Jul 30 23:21:36 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52 3138s Jul 30 23:21:57 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142 3138s Jul 30 23:21:57 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61 3138s Jul 30 23:21:57 Then I receive a response code 503 # features/steps/patroni_api.py:98 3138s Jul 30 23:21:57 And I receive a response role standby_leader # features/steps/patroni_api.py:98 3138s Jul 30 23:21:57 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112 3139s Jul 30 23:21:58 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12 3144s Jul 30 23:22:03 3144s Jul 30 23:22:03 Feature: watchdog # features/watchdog.feature:1 3144s Jul 30 23:22:03 Verify that watchdog gets pinged and triggered under appropriate circumstances. 
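[editor's note] The scenarios above repeatedly assert on Patroni's REST health endpoints: /primary, /replica and /standby_leader answer 200 or 503 depending on the role the node currently holds, and steps such as "Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds" simply poll until the expected code appears. A minimal standalone sketch of that polling pattern using only the Python standard library; the helper name and retry interval are illustrative assumptions, not the packaged step implementations from features/steps/patroni_api.py:

import time
import urllib.error
import urllib.request

def wait_for_status(url: str, expected: int, timeout: float, interval: float = 1.0) -> bool:
    """Poll url until it answers with the expected HTTP code or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                code = resp.status
        except urllib.error.HTTPError as exc:
            code = exc.code          # e.g. the 503 from /primary on a replica lands here
        except urllib.error.URLError:
            code = None              # REST API not reachable yet
        if code == expected:
            return True
        time.sleep(interval)
    return False

# e.g. a healthy replica answers 200 on /replica and 503 on /primary
wait_for_status("http://127.0.0.1:8009/replica", 200, timeout=10)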
3144s Jul 30 23:22:03 Scenario: watchdog is opened and pinged # features/watchdog.feature:4 3144s Jul 30 23:22:03 Given I start postgres0 with watchdog # features/steps/watchdog.py:16 3147s Jul 30 23:22:06 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3148s Jul 30 23:22:07 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3148s Jul 30 23:22:07 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 3149s Jul 30 23:22:08 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34 3149s Jul 30 23:22:08 3149s Jul 30 23:22:08 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11 3149s Jul 30 23:22:08 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86 3150s Jul 30 23:22:09 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3150s Jul 30 23:22:09 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98 3150s Jul 30 23:22:09 When I sleep for 4 seconds # features/steps/patroni_api.py:39 3154s Jul 30 23:22:13 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34 3154s Jul 30 23:22:13 3154s Jul 30 23:22:13 Scenario: watchdog is disabled during pause # features/watchdog.feature:18 3154s Jul 30 23:22:13 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 3156s Jul 30 23:22:15 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3156s Jul 30 23:22:15 When I sleep for 2 seconds # features/steps/patroni_api.py:39 3158s Jul 30 23:22:17 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 3158s Jul 30 23:22:17 3158s Jul 30 23:22:17 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24 3158s Jul 30 23:22:17 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 3158s Jul 30 23:22:17 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 3159s Jul 30 23:22:18 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3159s Jul 30 23:22:18 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21 3159s Jul 30 23:22:18 3159s Jul 30 23:22:18 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30 3159s Jul 30 23:22:18 Given I shut down postgres0 # features/steps/basic_replication.py:29 3161s Jul 30 23:22:20 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29 3161s Jul 30 23:22:20 3161s Jul 30 23:22:20 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34 3161s Jul 30 23:22:20 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39 3161s Jul 30 23:22:20 And I start postgres0 with watchdog # features/steps/watchdog.py:16 3164s Jul 30 23:22:23 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3165s Jul 30 23:22:24 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52 3165s Jul 30 23:22:24 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44 3192s Jul 30 23:22:51 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.10070.XKqZjeKx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.10119.XSoGqNVx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.10127.XYTOFycx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.10132.XUnZxKrx 3193s Jul 30 
23:22:52 Combined data file .coverage.autopkgtest.10148.XWaELabx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6305.XpVPsBOx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6354.XMdYFpSx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6398.XFBHYCEx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6468.XBDfFckx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6516.XGpprXox 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6590.XrxpNgvx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6639.XMSrJxbx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6644.XAMPdtQx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6725.XoeUcgXx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6817.XgeFPrsx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6835.XNWPvyFx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6880.XErzrqYx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.6928.XMseulQx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7058.XTiFWSpx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7105.XzgjcRpx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7161.XCDbDKix 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7258.XRjTTArx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7315.XZKfIXYx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7381.XfwhGXMx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7484.XIoGLgLx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7588.XtoElhIx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7623.XrNXBsmx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7701.XfbHlcsx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7733.XqPySBAx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7868.XNtFZplx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7919.XCeHUTmx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7939.XawSPTgx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.7980.XfSRZHGx 3193s Jul 30 23:22:52 Skipping duplicate data .coverage.autopkgtest.8031.XySkqecx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8038.XSVVMIWx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8076.XnLXyubx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8119.XHDyzNfx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8283.XiGtslyx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8287.XqgSvfnx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8295.XSKfmzGx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8438.XBBEkVux 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8485.XevrHQEx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8524.XAgMRAHx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8568.XOvvnDFx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8615.XPWANQFx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8812.XMmrstDx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8847.XOKteLsx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.8935.XRytCFpx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9020.XXzibtox 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9064.XSlnhFIx 3193s Jul 30 
23:22:52 Combined data file .coverage.autopkgtest.9388.XIoRtvMx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9433.XJqDbOIx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9574.XHLpFrcx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9639.XQnRIbKx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9693.XlrsgIWx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9807.XIuhKSKx 3193s Jul 30 23:22:52 Combined data file .coverage.autopkgtest.9929.XESDiTOx 3195s Jul 30 23:22:54 Name Stmts Miss Cover 3195s Jul 30 23:22:54 -------------------------------------------------------------------------------------------------------- 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/__init__.py 1 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/client.py 629 266 58% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/exceptions.py 110 1 99% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/handlers/__init__.py 0 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/handlers/threading.py 94 15 84% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/handlers/utils.py 222 74 67% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/hosts.py 18 4 78% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/loggingsupport.py 1 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/protocol/__init__.py 0 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/protocol/connection.py 485 176 64% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/protocol/paths.py 33 8 76% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/protocol/serialization.py 316 111 65% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/protocol/states.py 49 9 82% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/python2atexit.py 32 19 41% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/__init__.py 0 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/barrier.py 97 80 18% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/counter.py 49 36 27% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/election.py 16 10 38% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/lease.py 54 36 33% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/lock.py 
295 242 18% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/partitioner.py 155 120 23% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/party.py 62 43 31% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/queue.py 157 126 20% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/recipe/watchers.py 172 138 20% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/retry.py 60 9 85% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/security.py 58 35 40% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/kazoo/version.py 1 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/__main__.py 199 63 68% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/api.py 770 286 63% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/config.py 371 92 75% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 92 86% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/dcs/zookeeper.py 288 68 76% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/ha.py 1244 373 70% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/log.py 219 67 69% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 173 79% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 85% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 34 67% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 214 74% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 85 50% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 166 60% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 34 90% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/request.py 62 7 89% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100% 
3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/utils.py 350 123 65% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 46 77% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psutil/__init__.py 951 615 35% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psutil/_common.py 424 212 50% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 924 26% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 34 65% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/puresasl/__init__.py 21 2 90% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/puresasl/client.py 71 47 34% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/puresasl/mechanisms.py 363 263 28% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/six.py 504 249 51% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 128 45% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 23 57% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/connection.py 324 110 66% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 136 61% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 88 62% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/response.py 562 334 41% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 9 86% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 52 50% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 52 70% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 75 58% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 3195s Jul 30 23:22:54 
/usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 19 73% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 78 62% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 18 31% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 38 22% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/parser.py 352 180 49% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/reader.py 122 30 75% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/scanner.py 758 415 45% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 3195s Jul 30 23:22:54 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 3195s Jul 30 23:22:54 patroni/__init__.py 13 2 85% 3195s Jul 30 23:22:54 patroni/__main__.py 199 199 0% 3195s Jul 30 23:22:54 patroni/api.py 770 770 0% 3195s Jul 30 23:22:54 patroni/async_executor.py 96 69 28% 3195s Jul 30 23:22:54 patroni/collections.py 56 15 73% 3195s Jul 30 23:22:54 patroni/config.py 371 194 48% 3195s Jul 30 23:22:54 patroni/config_generator.py 212 212 0% 3195s Jul 30 23:22:54 patroni/ctl.py 936 411 56% 3195s Jul 30 23:22:54 patroni/daemon.py 76 76 0% 3195s Jul 30 23:22:54 patroni/dcs/__init__.py 646 271 58% 3195s Jul 30 23:22:54 patroni/dcs/consul.py 485 485 0% 3195s Jul 30 23:22:54 patroni/dcs/etcd3.py 679 679 0% 3195s Jul 30 23:22:54 patroni/dcs/etcd.py 603 603 0% 3195s Jul 30 23:22:54 patroni/dcs/exhibitor.py 61 61 0% 3195s Jul 30 23:22:54 patroni/dcs/kubernetes.py 938 938 0% 3195s Jul 30 23:22:54 patroni/dcs/raft.py 319 319 0% 3195s Jul 30 23:22:54 patroni/dcs/zookeeper.py 288 152 47% 3195s Jul 30 23:22:54 patroni/dynamic_loader.py 35 7 80% 3195s Jul 30 23:22:54 patroni/exceptions.py 16 1 94% 3195s Jul 30 23:22:54 patroni/file_perm.py 43 15 65% 3195s Jul 30 23:22:54 patroni/global_config.py 81 18 78% 3195s Jul 30 23:22:54 patroni/ha.py 1244 1244 0% 3195s Jul 30 23:22:54 patroni/log.py 219 173 21% 3195s Jul 30 23:22:54 patroni/postgresql/__init__.py 821 651 21% 3195s Jul 30 23:22:54 patroni/postgresql/available_parameters/__init__.py 21 3 86% 3195s Jul 30 23:22:54 patroni/postgresql/bootstrap.py 252 222 12% 3195s Jul 30 23:22:54 patroni/postgresql/callback_executor.py 55 34 38% 3195s Jul 30 23:22:54 patroni/postgresql/cancellable.py 104 84 19% 3195s Jul 30 23:22:54 patroni/postgresql/config.py 813 698 14% 3195s Jul 30 23:22:54 patroni/postgresql/connection.py 75 50 33% 3195s Jul 30 23:22:54 patroni/postgresql/misc.py 41 29 29% 3195s Jul 30 23:22:54 patroni/postgresql/mpp/__init__.py 89 21 76% 3195s Jul 30 23:22:54 
patroni/postgresql/mpp/citus.py 259 259 0% 3195s Jul 30 23:22:54 patroni/postgresql/postmaster.py 170 139 18% 3195s Jul 30 23:22:54 patroni/postgresql/rewind.py 416 416 0% 3195s Jul 30 23:22:54 patroni/postgresql/slots.py 334 285 15% 3195s Jul 30 23:22:54 patroni/postgresql/sync.py 130 96 26% 3195s Jul 30 23:22:54 patroni/postgresql/validator.py 157 52 67% 3195s Jul 30 23:22:54 patroni/psycopg.py 42 28 33% 3195s Jul 30 23:22:54 patroni/raft_controller.py 22 22 0% 3195s Jul 30 23:22:54 patroni/request.py 62 6 90% 3195s Jul 30 23:22:54 patroni/scripts/__init__.py 0 0 100% 3195s Jul 30 23:22:54 patroni/scripts/aws.py 59 59 0% 3195s Jul 30 23:22:54 patroni/scripts/barman/__init__.py 0 0 100% 3195s Jul 30 23:22:54 patroni/scripts/barman/cli.py 51 51 0% 3195s Jul 30 23:22:54 patroni/scripts/barman/config_switch.py 51 51 0% 3195s Jul 30 23:22:54 patroni/scripts/barman/recover.py 37 37 0% 3195s Jul 30 23:22:54 patroni/scripts/barman/utils.py 94 94 0% 3195s Jul 30 23:22:54 patroni/scripts/wale_restore.py 207 207 0% 3195s Jul 30 23:22:54 patroni/tags.py 38 11 71% 3195s Jul 30 23:22:54 patroni/utils.py 350 228 35% 3195s Jul 30 23:22:54 patroni/validator.py 301 215 29% 3195s Jul 30 23:22:54 patroni/version.py 1 0 100% 3195s Jul 30 23:22:54 patroni/watchdog/__init__.py 2 2 0% 3195s Jul 30 23:22:54 patroni/watchdog/base.py 203 203 0% 3195s Jul 30 23:22:54 patroni/watchdog/linux.py 135 135 0% 3195s Jul 30 23:22:54 -------------------------------------------------------------------------------------------------------- 3195s Jul 30 23:22:54 TOTAL 39824 23842 40% 3195s Jul 30 23:22:54 11 features passed, 0 failed, 1 skipped 3195s Jul 30 23:22:54 44 scenarios passed, 0 failed, 5 skipped 3195s Jul 30 23:22:54 444 steps passed, 0 failed, 61 skipped, 0 undefined 3195s Jul 30 23:22:54 Took 7m11.420s 3195s + echo '### End 16 acceptance-zookeeper -e dcs_failsafe_mode ###' 3195s ### End 16 acceptance-zookeeper -e dcs_failsafe_mode ### 3195s + rm -f '/tmp/pgpass?' 3195s ++ id -u 3195s + '[' 0 -eq 0 ']' 3195s + '[' -x /etc/init.d/zookeeper ']' 3195s + /etc/init.d/zookeeper stop 3195s Stopping zookeeper (via systemctl): zookeeper.service. 
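[editor's note] The "Combined data file" entries and the Name/Stmts/Miss/Cover table above come from coverage.py merging one data file per Patroni process before printing a single report. A minimal sketch of reproducing that combine-and-report step by hand with the coverage API, assuming the .coverage.* files sit in the current working directory; the options shown are illustrative, not the exact harness invocation:

import coverage

cov = coverage.Coverage()
cov.combine()           # merges the .coverage.<host>.<pid>.<suffix> data files
total = cov.report()    # prints the Name/Stmts/Miss/Cover table to stdout
print(f"TOTAL {total:.0f}%")

This mirrors what the command-line pair "coverage combine" followed by "coverage report" would do.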
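[editor's note] Several of the acceptance scenarios above also drive the Patroni REST API directly, e.g. 'I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"}'. A minimal standard-library sketch of that call; the endpoint and payload are taken from the log, while the surrounding code is an illustrative assumption rather than the packaged step implementation:

import json
import urllib.request

payload = json.dumps({"leader": "postgres0", "candidate": "postgres1"}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:8008/switchover",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    # the scenario above expects a 200 response once the switchover is accepted
    print(resp.status, resp.read().decode())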
3196s autopkgtest [23:22:55]: test acceptance-zookeeper: -----------------------] 3196s autopkgtest [23:22:55]: test acceptance-zookeeper: - - - - - - - - - - results - - - - - - - - - - 3196s acceptance-zookeeper PASS 3197s autopkgtest [23:22:56]: test acceptance-raft: preparing testbed 3389s autopkgtest [23:26:08]: testbed dpkg architecture: s390x 3389s autopkgtest [23:26:08]: testbed apt version: 2.9.6 3389s autopkgtest [23:26:08]: @@@@@@@@@@@@@@@@@@@@ test bed setup 3390s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [126 kB] 3391s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [6372 B] 3391s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [8548 B] 3391s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [52.0 kB] 3391s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [509 kB] 3392s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x Packages [73.3 kB] 3392s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x c-n-f Metadata [2112 B] 3392s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x Packages [1368 B] 3392s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x c-n-f Metadata [120 B] 3392s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x Packages [426 kB] 3393s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x c-n-f Metadata [8372 B] 3393s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x Packages [4256 B] 3393s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x c-n-f Metadata [120 B] 3393s Fetched 1217 kB in 3s (424 kB/s) 3393s Reading package lists... 3395s Reading package lists... 3395s Building dependency tree... 3395s Reading state information... 3396s Calculating upgrade... 3396s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3396s Reading package lists... 3396s Building dependency tree... 3396s Reading state information... 3396s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3397s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 3397s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 3397s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 3397s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 3398s Reading package lists... 3398s Reading package lists... 3398s Building dependency tree... 3398s Reading state information... 3398s Calculating upgrade... 3398s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3399s Reading package lists... 3399s Building dependency tree... 3399s Reading state information... 3399s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3403s Reading package lists... 3403s Building dependency tree... 3403s Reading state information... 
3403s Starting pkgProblemResolver with broken count: 0 3403s Starting 2 pkgProblemResolver with broken count: 0 3403s Done 3404s The following additional packages will be installed: 3404s fonts-font-awesome fonts-lato libio-pty-perl libipc-run-perl libjs-jquery 3404s libjs-sphinxdoc libjs-underscore libjson-perl libpq5 libtime-duration-perl 3404s libtimedate-perl libxslt1.1 moreutils patroni patroni-doc postgresql 3404s postgresql-16 postgresql-client-16 postgresql-client-common 3404s postgresql-common python3-behave python3-cdiff python3-click 3404s python3-colorama python3-coverage python3-dateutil python3-parse 3404s python3-parse-type python3-prettytable python3-psutil python3-psycopg2 3404s python3-pysyncobj python3-six python3-wcwidth sphinx-rtd-theme-common 3404s ssl-cert 3404s Suggested packages: 3404s etcd-server | consul | zookeeperd vip-manager haproxy postgresql-doc 3404s postgresql-doc-16 python-coverage-doc python-psycopg2-doc 3404s Recommended packages: 3404s javascript-common libjson-xs-perl 3404s The following NEW packages will be installed: 3404s autopkgtest-satdep fonts-font-awesome fonts-lato libio-pty-perl 3404s libipc-run-perl libjs-jquery libjs-sphinxdoc libjs-underscore libjson-perl 3404s libpq5 libtime-duration-perl libtimedate-perl libxslt1.1 moreutils patroni 3404s patroni-doc postgresql postgresql-16 postgresql-client-16 3404s postgresql-client-common postgresql-common python3-behave python3-cdiff 3404s python3-click python3-colorama python3-coverage python3-dateutil 3404s python3-parse python3-parse-type python3-prettytable python3-psutil 3404s python3-psycopg2 python3-pysyncobj python3-six python3-wcwidth 3404s sphinx-rtd-theme-common ssl-cert 3404s 0 upgraded, 37 newly installed, 0 to remove and 0 not upgraded. 3404s Need to get 25.4 MB/25.4 MB of archives. 3404s After this operation, 83.0 MB of additional disk space will be used. 
3404s Get:1 /tmp/autopkgtest.qFf46z/5-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [752 B] 3404s Get:2 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-lato all 2.015-1 [2781 kB] 3409s Get:3 http://ftpmaster.internal/ubuntu oracular/main s390x libjson-perl all 4.10000-1 [81.9 kB] 3409s Get:4 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-common all 261 [36.6 kB] 3409s Get:5 http://ftpmaster.internal/ubuntu oracular/main s390x ssl-cert all 1.1.2ubuntu2 [18.0 kB] 3409s Get:6 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-common all 261 [162 kB] 3409s Get:7 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 3410s Get:8 http://ftpmaster.internal/ubuntu oracular/main s390x libio-pty-perl s390x 1:1.20-1build2 [31.3 kB] 3410s Get:9 http://ftpmaster.internal/ubuntu oracular/main s390x libipc-run-perl all 20231003.0-2 [91.5 kB] 3410s Get:10 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 3410s Get:11 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 3411s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x libjs-sphinxdoc all 7.3.7-4 [154 kB] 3411s Get:13 http://ftpmaster.internal/ubuntu oracular/main s390x libpq5 s390x 16.3-1 [144 kB] 3411s Get:14 http://ftpmaster.internal/ubuntu oracular/main s390x libtime-duration-perl all 1.21-2 [12.3 kB] 3411s Get:15 http://ftpmaster.internal/ubuntu oracular/main s390x libtimedate-perl all 2.3300-2 [34.0 kB] 3411s Get:16 http://ftpmaster.internal/ubuntu oracular/main s390x libxslt1.1 s390x 1.1.39-0exp1build1 [170 kB] 3411s Get:17 http://ftpmaster.internal/ubuntu oracular/universe s390x moreutils s390x 0.69-1 [57.4 kB] 3412s Get:18 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-cdiff all 1.0-1.1 [16.4 kB] 3412s Get:19 http://ftpmaster.internal/ubuntu oracular/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 3412s Get:20 http://ftpmaster.internal/ubuntu oracular/main s390x python3-click all 8.1.7-2 [79.5 kB] 3412s Get:21 http://ftpmaster.internal/ubuntu oracular/main s390x python3-six all 1.16.0-6 [13.0 kB] 3412s Get:22 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dateutil all 2.9.0-2 [80.3 kB] 3412s Get:23 http://ftpmaster.internal/ubuntu oracular/main s390x python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 3412s Get:24 http://ftpmaster.internal/ubuntu oracular/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 3412s Get:25 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 3412s Get:26 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psycopg2 s390x 2.9.9-1build1 [133 kB] 3413s Get:27 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pysyncobj all 0.3.12-1 [38.9 kB] 3413s Get:28 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni all 3.3.1-1 [264 kB] 3413s Get:29 http://ftpmaster.internal/ubuntu oracular/main s390x sphinx-rtd-theme-common all 2.0.0+dfsg-2 [1012 kB] 3415s Get:30 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni-doc all 3.3.1-1 [497 kB] 3416s Get:31 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-client-16 s390x 16.3-1 [1290 kB] 3418s Get:32 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql-16 s390x 16.3-1 [16.7 MB] 3444s Get:33 http://ftpmaster.internal/ubuntu oracular/main s390x postgresql all 16+261 [11.7 kB] 3444s Get:34 
http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse all 1.20.2-1 [27.0 kB] 3444s Get:35 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-parse-type all 0.6.2-1 [22.7 kB] 3444s Get:36 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-behave all 1.2.6-5 [98.4 kB] 3445s Get:37 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 3445s Preconfiguring packages ... 3445s Fetched 25.4 MB in 41s (617 kB/s) 3445s Selecting previously unselected package fonts-lato. 3445s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 54832 files and directories currently installed.) 3445s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 3445s Unpacking fonts-lato (2.015-1) ... 3446s Selecting previously unselected package libjson-perl. 3446s Preparing to unpack .../01-libjson-perl_4.10000-1_all.deb ... 3446s Unpacking libjson-perl (4.10000-1) ... 3446s Selecting previously unselected package postgresql-client-common. 3446s Preparing to unpack .../02-postgresql-client-common_261_all.deb ... 3446s Unpacking postgresql-client-common (261) ... 3446s Selecting previously unselected package ssl-cert. 3446s Preparing to unpack .../03-ssl-cert_1.1.2ubuntu2_all.deb ... 3446s Unpacking ssl-cert (1.1.2ubuntu2) ... 3446s Selecting previously unselected package postgresql-common. 3446s Preparing to unpack .../04-postgresql-common_261_all.deb ... 3446s Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common' 3446s Unpacking postgresql-common (261) ... 3446s Selecting previously unselected package fonts-font-awesome. 3446s Preparing to unpack .../05-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 3446s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 3446s Selecting previously unselected package libio-pty-perl. 3446s Preparing to unpack .../06-libio-pty-perl_1%3a1.20-1build2_s390x.deb ... 3446s Unpacking libio-pty-perl (1:1.20-1build2) ... 3446s Selecting previously unselected package libipc-run-perl. 3446s Preparing to unpack .../07-libipc-run-perl_20231003.0-2_all.deb ... 3446s Unpacking libipc-run-perl (20231003.0-2) ... 3446s Selecting previously unselected package libjs-jquery. 3446s Preparing to unpack .../08-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 3446s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 3446s Selecting previously unselected package libjs-underscore. 3446s Preparing to unpack .../09-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 3446s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 3446s Selecting previously unselected package libjs-sphinxdoc. 3446s Preparing to unpack .../10-libjs-sphinxdoc_7.3.7-4_all.deb ... 3446s Unpacking libjs-sphinxdoc (7.3.7-4) ... 3446s Selecting previously unselected package libpq5:s390x. 3446s Preparing to unpack .../11-libpq5_16.3-1_s390x.deb ... 3446s Unpacking libpq5:s390x (16.3-1) ... 3446s Selecting previously unselected package libtime-duration-perl. 
3446s Preparing to unpack .../12-libtime-duration-perl_1.21-2_all.deb ... 3446s Unpacking libtime-duration-perl (1.21-2) ... 3446s Selecting previously unselected package libtimedate-perl. 3446s Preparing to unpack .../13-libtimedate-perl_2.3300-2_all.deb ... 3446s Unpacking libtimedate-perl (2.3300-2) ... 3446s Selecting previously unselected package libxslt1.1:s390x. 3446s Preparing to unpack .../14-libxslt1.1_1.1.39-0exp1build1_s390x.deb ... 3446s Unpacking libxslt1.1:s390x (1.1.39-0exp1build1) ... 3446s Selecting previously unselected package moreutils. 3446s Preparing to unpack .../15-moreutils_0.69-1_s390x.deb ... 3446s Unpacking moreutils (0.69-1) ... 3446s Selecting previously unselected package python3-cdiff. 3446s Preparing to unpack .../16-python3-cdiff_1.0-1.1_all.deb ... 3446s Unpacking python3-cdiff (1.0-1.1) ... 3446s Selecting previously unselected package python3-colorama. 3446s Preparing to unpack .../17-python3-colorama_0.4.6-4_all.deb ... 3446s Unpacking python3-colorama (0.4.6-4) ... 3446s Selecting previously unselected package python3-click. 3446s Preparing to unpack .../18-python3-click_8.1.7-2_all.deb ... 3446s Unpacking python3-click (8.1.7-2) ... 3446s Selecting previously unselected package python3-six. 3446s Preparing to unpack .../19-python3-six_1.16.0-6_all.deb ... 3446s Unpacking python3-six (1.16.0-6) ... 3446s Selecting previously unselected package python3-dateutil. 3446s Preparing to unpack .../20-python3-dateutil_2.9.0-2_all.deb ... 3446s Unpacking python3-dateutil (2.9.0-2) ... 3446s Selecting previously unselected package python3-wcwidth. 3446s Preparing to unpack .../21-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 3446s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 3446s Selecting previously unselected package python3-prettytable. 3446s Preparing to unpack .../22-python3-prettytable_3.10.1-1_all.deb ... 3446s Unpacking python3-prettytable (3.10.1-1) ... 3446s Selecting previously unselected package python3-psutil. 3446s Preparing to unpack .../23-python3-psutil_5.9.8-2build2_s390x.deb ... 3446s Unpacking python3-psutil (5.9.8-2build2) ... 3446s Selecting previously unselected package python3-psycopg2. 3446s Preparing to unpack .../24-python3-psycopg2_2.9.9-1build1_s390x.deb ... 3446s Unpacking python3-psycopg2 (2.9.9-1build1) ... 3446s Selecting previously unselected package python3-pysyncobj. 3446s Preparing to unpack .../25-python3-pysyncobj_0.3.12-1_all.deb ... 3446s Unpacking python3-pysyncobj (0.3.12-1) ... 3446s Selecting previously unselected package patroni. 3446s Preparing to unpack .../26-patroni_3.3.1-1_all.deb ... 3446s Unpacking patroni (3.3.1-1) ... 3446s Selecting previously unselected package sphinx-rtd-theme-common. 3446s Preparing to unpack .../27-sphinx-rtd-theme-common_2.0.0+dfsg-2_all.deb ... 3446s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 3446s Selecting previously unselected package patroni-doc. 3446s Preparing to unpack .../28-patroni-doc_3.3.1-1_all.deb ... 3446s Unpacking patroni-doc (3.3.1-1) ... 3446s Selecting previously unselected package postgresql-client-16. 3446s Preparing to unpack .../29-postgresql-client-16_16.3-1_s390x.deb ... 3446s Unpacking postgresql-client-16 (16.3-1) ... 3446s Selecting previously unselected package postgresql-16. 3446s Preparing to unpack .../30-postgresql-16_16.3-1_s390x.deb ... 3446s Unpacking postgresql-16 (16.3-1) ... 3447s Selecting previously unselected package postgresql. 3447s Preparing to unpack .../31-postgresql_16+261_all.deb ... 
3447s Unpacking postgresql (16+261) ... 3447s Selecting previously unselected package python3-parse. 3447s Preparing to unpack .../32-python3-parse_1.20.2-1_all.deb ... 3447s Unpacking python3-parse (1.20.2-1) ... 3447s Selecting previously unselected package python3-parse-type. 3447s Preparing to unpack .../33-python3-parse-type_0.6.2-1_all.deb ... 3447s Unpacking python3-parse-type (0.6.2-1) ... 3447s Selecting previously unselected package python3-behave. 3447s Preparing to unpack .../34-python3-behave_1.2.6-5_all.deb ... 3447s Unpacking python3-behave (1.2.6-5) ... 3447s Selecting previously unselected package python3-coverage. 3447s Preparing to unpack .../35-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ... 3447s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 3447s Selecting previously unselected package autopkgtest-satdep. 3447s Preparing to unpack .../36-5-autopkgtest-satdep.deb ... 3447s Unpacking autopkgtest-satdep (0) ... 3447s Setting up postgresql-client-common (261) ... 3447s Setting up fonts-lato (2.015-1) ... 3447s Setting up libio-pty-perl (1:1.20-1build2) ... 3447s Setting up python3-pysyncobj (0.3.12-1) ... 3447s Setting up python3-colorama (0.4.6-4) ... 3447s Setting up python3-cdiff (1.0-1.1) ... 3447s Setting up libpq5:s390x (16.3-1) ... 3447s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 3448s Setting up python3-click (8.1.7-2) ... 3448s Setting up python3-psutil (5.9.8-2build2) ... 3448s Setting up python3-six (1.16.0-6) ... 3448s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 3449s Setting up ssl-cert (1.1.2ubuntu2) ... 3449s Created symlink '/etc/systemd/system/multi-user.target.wants/ssl-cert.service' → '/usr/lib/systemd/system/ssl-cert.service'. 3449s Setting up python3-psycopg2 (2.9.9-1build1) ... 3450s Setting up libipc-run-perl (20231003.0-2) ... 3450s Setting up libtime-duration-perl (1.21-2) ... 3450s Setting up libtimedate-perl (2.3300-2) ... 3450s Setting up python3-parse (1.20.2-1) ... 3450s Setting up libjson-perl (4.10000-1) ... 3450s Setting up libxslt1.1:s390x (1.1.39-0exp1build1) ... 3450s Setting up python3-dateutil (2.9.0-2) ... 3450s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 3450s Setting up python3-prettytable (3.10.1-1) ... 3450s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 3450s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 3450s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 3450s Setting up moreutils (0.69-1) ... 3450s Setting up postgresql-client-16 (16.3-1) ... 3451s update-alternatives: using /usr/share/postgresql/16/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode 3451s Setting up python3-parse-type (0.6.2-1) ... 3451s Setting up postgresql-common (261) ... 3451s 3451s Creating config file /etc/postgresql-common/createcluster.conf with new version 3451s Building PostgreSQL dictionaries from installed myspell/hunspell packages... 3451s Removing obsolete dictionary files: 3452s Created symlink '/etc/systemd/system/multi-user.target.wants/postgresql.service' → '/usr/lib/systemd/system/postgresql.service'. 3452s Setting up libjs-sphinxdoc (7.3.7-4) ... 3452s Setting up python3-behave (1.2.6-5) ... 
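(Illustrative aside, not part of the captured log.) The SyntaxWarning entries that follow are emitted while python3-behave is byte-compiled: string literals in its modules contain backslash sequences such as '\[' and '\d' that are not valid string escapes, and Python 3.12 flags these at compile time. A minimal stand-alone reproduction, with a hypothetical pattern name, is:

import re
import warnings

# "\[" and "\d" are not recognised string escape sequences; Python 3.12
# reports them as SyntaxWarning at compile time, which is what the two
# warnings from the behave modules below are about.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile('PATTERN = "\\x1b\\[\\d+[mA]"', "<demo>", "exec")
for w in caught:
    print(w.category.__name__, w.message)

# The usual fix is a raw string, which keeps the backslashes literal:
ANSI_ESCAPE_PATTERN = re.compile(r"\x1b\[\d+[mA]")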
3453s /usr/lib/python3/dist-packages/behave/formatter/ansi_escapes.py:57: SyntaxWarning: invalid escape sequence '\[' 3453s _ANSI_ESCAPE_PATTERN = re.compile(u"\x1b\[\d+[mA]", re.UNICODE) 3453s /usr/lib/python3/dist-packages/behave/matchers.py:267: SyntaxWarning: invalid escape sequence '\d' 3453s """Registers a custom type that will be available to "parse" 3453s Setting up patroni (3.3.1-1) ... 3453s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'. 3453s Setting up postgresql-16 (16.3-1) ... 3454s Creating new PostgreSQL cluster 16/main ... 3454s /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main --auth-local peer --auth-host scram-sha-256 --no-instructions 3454s The files belonging to this database system will be owned by user "postgres". 3454s This user must also own the server process. 3454s 3454s The database cluster will be initialized with locale "C.UTF-8". 3454s The default database encoding has accordingly been set to "UTF8". 3454s The default text search configuration will be set to "english". 3454s 3454s Data page checksums are disabled. 3454s 3454s fixing permissions on existing directory /var/lib/postgresql/16/main ... ok 3454s creating subdirectories ... ok 3454s selecting dynamic shared memory implementation ... posix 3454s selecting default max_connections ... 100 3454s selecting default shared_buffers ... 128MB 3454s selecting default time zone ... Etc/UTC 3454s creating configuration files ... ok 3454s running bootstrap script ... ok 3454s performing post-bootstrap initialization ... ok 3454s syncing data to disk ... ok 3458s Setting up patroni-doc (3.3.1-1) ... 3458s Setting up postgresql (16+261) ... 3458s Setting up autopkgtest-satdep (0) ... 3458s Processing triggers for man-db (2.12.1-2) ... 3459s Processing triggers for libc-bin (2.39-0ubuntu9) ... 3462s (Reading database ... 57824 files and directories currently installed.) 3462s Removing autopkgtest-satdep (0) ... 3464s autopkgtest [23:27:23]: test acceptance-raft: debian/tests/acceptance raft 3464s autopkgtest [23:27:23]: test acceptance-raft: [----------------------- 3464s dpkg-architecture: warning: cannot determine CC system type, falling back to default (native compilation) 3464s ### PostgreSQL 16 acceptance-raft ### 3464s ++ ls -1r /usr/lib/postgresql/ 3464s + for PG_VERSION in $(ls -1r /usr/lib/postgresql/) 3464s + '[' 16 == 10 -o 16 == 11 ']' 3464s + echo '### PostgreSQL 16 acceptance-raft ###' 3464s + bash -c 'set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=raft PATH=/usr/lib/postgresql/16/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin behave | ts' 3470s Jul 30 23:27:29 Feature: basic replication # features/basic_replication.feature:1 3470s Jul 30 23:27:29 We should check that the basic bootstrapping, replication and failover works. 
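(Illustrative sketch, not part of the captured log; the command, environment variables and PATH are taken from the acceptance-raft entries above, everything else is an assumption.) The run above is started through the shell pipeline "set -o pipefail; ETCD_UNSUPPORTED_ARCH=s390x DCS=raft PATH=/usr/lib/postgresql/16/bin:... behave | ts". A rough stand-alone Python equivalent of that pipeline, using ts from the moreutils package installed earlier, would be:

import os
import subprocess

env = dict(os.environ)
env["ETCD_UNSUPPORTED_ARCH"] = "s390x"  # allow etcd binaries on s390x
env["DCS"] = "raft"                      # run the suite against the Raft-based DCS
env["PATH"] = "/usr/lib/postgresql/16/bin:" + env.get("PATH", "")

# behave | ts, failing if behave itself fails (the bash wrapper uses pipefail)
behave = subprocess.Popen(["behave"], env=env, stdout=subprocess.PIPE)
subprocess.run(["ts"], stdin=behave.stdout, check=True)
behave.stdout.close()
if behave.wait() != 0:
    raise SystemExit("behave acceptance suite failed")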
3470s Jul 30 23:27:29 Scenario: check replication of a single table # features/basic_replication.feature:4 3470s Jul 30 23:27:29 Given I start postgres0 # features/steps/basic_replication.py:8 3479s Jul 30 23:27:38 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3480s Jul 30 23:27:39 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 3480s Jul 30 23:27:39 When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true} # features/steps/patroni_api.py:71 3480s Jul 30 23:27:39 Then I receive a response code 200 # features/steps/patroni_api.py:98 3480s Jul 30 23:27:39 When I start postgres1 # features/steps/basic_replication.py:8 3484s Jul 30 23:27:43 And I configure and start postgres2 with a tag replicatefrom postgres0 # features/steps/cascading_replication.py:7 3488s Jul 30 23:27:47 And "sync" key in DCS has leader=postgres0 after 20 seconds # features/steps/cascading_replication.py:23 3488s Jul 30 23:27:47 And I add the table foo to postgres0 # features/steps/basic_replication.py:54 3488s Jul 30 23:27:47 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 3489s Jul 30 23:27:48 Then table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93 3489s Jul 30 23:27:48 3489s Jul 30 23:27:48 Scenario: check restart of sync replica # features/basic_replication.feature:17 3489s Jul 30 23:27:48 Given I shut down postgres2 # features/steps/basic_replication.py:29 3490s Jul 30 23:27:49 Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds # features/steps/cascading_replication.py:23 3490s Jul 30 23:27:49 When I start postgres2 # features/steps/basic_replication.py:8 3494s Jul 30 23:27:52 And I shut down postgres1 # features/steps/basic_replication.py:29 3497s Jul 30 23:27:55 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 3498s Jul 30 23:27:56 When I start postgres1 # features/steps/basic_replication.py:8 3501s Jul 30 23:28:00 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3502s Jul 30 23:28:01 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 3502s Jul 30 23:28:01 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 3502s Jul 30 23:28:01 3502s Jul 30 23:28:01 Scenario: check stuck sync replica # features/basic_replication.feature:28 3502s Jul 30 23:28:01 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}} # features/steps/patroni_api.py:71 3502s Jul 30 23:28:01 Then I receive a response code 200 # features/steps/patroni_api.py:98 3502s Jul 30 23:28:01 And I create table on postgres0 # features/steps/basic_replication.py:73 3502s Jul 30 23:28:01 And table mytest is present on postgres1 after 2 seconds # features/steps/basic_replication.py:93 3503s Jul 30 23:28:02 And table mytest is present on postgres2 after 2 seconds # features/steps/basic_replication.py:93 3503s Jul 30 23:28:02 When I pause wal replay on postgres2 # features/steps/basic_replication.py:64 3503s Jul 30 23:28:02 And I load data on postgres0 # features/steps/basic_replication.py:84 3504s Jul 30 23:28:03 Then "sync" key in DCS has 
sync_standby=postgres1 after 15 seconds # features/steps/cascading_replication.py:23 3507s Jul 30 23:28:06 And I resume wal replay on postgres2 # features/steps/basic_replication.py:64 3507s Jul 30 23:28:06 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 3507s Jul 30 23:28:06 And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds # features/steps/patroni_api.py:142 3507s Jul 30 23:28:06 When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}} # features/steps/patroni_api.py:71 3507s Jul 30 23:28:06 Then I receive a response code 200 # features/steps/patroni_api.py:98 3507s Jul 30 23:28:06 And I drop table on postgres0 # features/steps/basic_replication.py:73 3507s Jul 30 23:28:06 3507s Jul 30 23:28:06 Scenario: check multi sync replication # features/basic_replication.feature:44 3507s Jul 30 23:28:06 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2} # features/steps/patroni_api.py:71 3507s Jul 30 23:28:06 Then I receive a response code 200 # features/steps/patroni_api.py:98 3507s Jul 30 23:28:06 Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds # features/steps/cascading_replication.py:23 3511s Jul 30 23:28:10 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 3511s Jul 30 23:28:10 And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 3511s Jul 30 23:28:10 When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1} # features/steps/patroni_api.py:71 3511s Jul 30 23:28:10 Then I receive a response code 200 # features/steps/patroni_api.py:98 3511s Jul 30 23:28:10 And I shut down postgres1 # features/steps/basic_replication.py:29 3514s Jul 30 23:28:13 Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds # features/steps/cascading_replication.py:23 3515s Jul 30 23:28:14 When I start postgres1 # features/steps/basic_replication.py:8 3519s Jul 30 23:28:18 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3519s Jul 30 23:28:18 And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds # features/steps/patroni_api.py:142 3519s Jul 30 23:28:18 And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds # features/steps/patroni_api.py:142 3519s Jul 30 23:28:18 3519s Jul 30 23:28:18 Scenario: check the basic failover in synchronous mode # features/basic_replication.feature:59 3519s Jul 30 23:28:18 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86 3522s Jul 30 23:28:21 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3522s Jul 30 23:28:21 When I sleep for 2 seconds # features/steps/patroni_api.py:39 3524s Jul 30 23:28:23 And I shut down postgres0 # features/steps/basic_replication.py:29 3525s Jul 30 23:28:24 And I run patronictl.py resume batman # features/steps/patroni_api.py:86 3526s Jul 30 23:28:25 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3526s Jul 30 23:28:25 And postgres2 role is the primary after 24 seconds # features/steps/basic_replication.py:105 3546s Jul 30 23:28:44 And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds # features/steps/patroni_api.py:156 3547s Jul 
30 23:28:46 And there is a postgres2_cb.log with "on_role_change master batman" in postgres2 data directory # features/steps/cascading_replication.py:12 3547s Jul 30 23:28:46 When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0} # features/steps/patroni_api.py:71 3547s Jul 30 23:28:46 Then I receive a response code 200 # features/steps/patroni_api.py:98 3547s Jul 30 23:28:46 When I add the table bar to postgres2 # features/steps/basic_replication.py:54 3547s Jul 30 23:28:46 Then table bar is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93 3551s Jul 30 23:28:50 And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156 3551s Jul 30 23:28:50 3551s Jul 30 23:28:50 Scenario: check rejoin of the former primary with pg_rewind # features/basic_replication.feature:75 3551s Jul 30 23:28:50 Given I add the table splitbrain to postgres0 # features/steps/basic_replication.py:54 3551s Jul 30 23:28:50 And I start postgres0 # features/steps/basic_replication.py:8 3551s Jul 30 23:28:50 Then postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 3560s Jul 30 23:28:59 When I add the table buz to postgres2 # features/steps/basic_replication.py:54 3560s Jul 30 23:28:59 Then table buz is present on postgres0 after 20 seconds # features/steps/basic_replication.py:93 3560s SKIP Scenario check graceful rejection when two nodes have the same name: Flaky test with Raft 3577s Jul 30 23:29:16 3577s Jul 30 23:29:16 @reject-duplicate-name 3577s Jul 30 23:29:16 Scenario: check graceful rejection when two nodes have the same name # features/basic_replication.feature:83 3577s Jul 30 23:29:16 Given I start duplicate postgres0 on port 8011 # None 3577s Jul 30 23:29:16 Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds # None 3577s Jul 30 23:29:16 3577s Jul 30 23:29:16 Feature: cascading replication # features/cascading_replication.feature:1 3577s Jul 30 23:29:16 We should check that patroni can do base backup and streaming from the replica 3577s Jul 30 23:29:16 Scenario: check a base backup and streaming replication from a replica # features/cascading_replication.feature:4 3577s Jul 30 23:29:16 Given I start postgres0 # features/steps/basic_replication.py:8 3581s Jul 30 23:29:20 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3582s Jul 30 23:29:21 And I configure and start postgres1 with a tag clonefrom true # features/steps/cascading_replication.py:7 3586s Jul 30 23:29:25 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 3587s Jul 30 23:29:26 And I create label with "postgres0" in postgres0 data directory # features/steps/cascading_replication.py:18 3587s Jul 30 23:29:26 And I create label with "postgres1" in postgres1 data directory # features/steps/cascading_replication.py:18 3587s Jul 30 23:29:26 And "members/postgres1" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23 3587s Jul 30 23:29:26 And I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 3591s Jul 30 23:29:30 Then replication works from postgres0 to postgres2 after 30 seconds # features/steps/basic_replication.py:112 3592s Jul 30 23:29:31 And there is a label with "postgres1" in 
postgres2 data directory # features/steps/cascading_replication.py:12 3608s Jul 30 23:29:47 3608s SKIP FEATURE citus: Citus extenstion isn't available 3608s SKIP Scenario check that worker cluster is registered in the coordinator: Citus extenstion isn't available 3608s SKIP Scenario coordinator failover updates pg_dist_node: Citus extenstion isn't available 3608s SKIP Scenario worker switchover doesn't break client queries on the coordinator: Citus extenstion isn't available 3608s SKIP Scenario worker primary restart doesn't break client queries on the coordinator: Citus extenstion isn't available 3608s SKIP Scenario check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node: Citus extenstion isn't available 3608s Jul 30 23:29:47 Feature: citus # features/citus.feature:1 3608s Jul 30 23:29:47 We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over 3608s Jul 30 23:29:47 Scenario: check that worker cluster is registered in the coordinator # features/citus.feature:4 3608s Jul 30 23:29:47 Given I start postgres0 in citus group 0 # None 3608s Jul 30 23:29:47 And I start postgres2 in citus group 1 # None 3608s Jul 30 23:29:47 Then postgres0 is a leader in a group 0 after 10 seconds # None 3608s Jul 30 23:29:47 And postgres2 is a leader in a group 1 after 10 seconds # None 3608s Jul 30 23:29:47 When I start postgres1 in citus group 0 # None 3608s Jul 30 23:29:47 And I start postgres3 in citus group 1 # None 3608s Jul 30 23:29:47 Then replication works from postgres0 to postgres1 after 15 seconds # None 3608s Jul 30 23:29:47 Then replication works from postgres2 to postgres3 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds # None 3608s Jul 30 23:29:47 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 3608s Jul 30 23:29:47 3608s Jul 30 23:29:47 Scenario: coordinator failover updates pg_dist_node # features/citus.feature:16 3608s Jul 30 23:29:47 Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force # None 3608s Jul 30 23:29:47 Then postgres1 role is the primary after 10 seconds # None 3608s Jul 30 23:29:47 And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds # None 3608s Jul 30 23:29:47 And replication works from postgres1 to postgres0 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 3608s Jul 30 23:29:47 And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds # None 3608s Jul 30 23:29:47 When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force # None 3608s Jul 30 23:29:47 Then postgres0 role is the primary after 10 seconds # None 3608s Jul 30 23:29:47 And replication works from postgres0 to postgres1 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds # None 3608s Jul 30 23:29:47 And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds # None 3608s Jul 30 23:29:47 3608s Jul 30 23:29:47 Scenario: worker switchover doesn't break client queries on the coordinator # features/citus.feature:29 3608s Jul 30 23:29:47 Given I create a distributed table on postgres0 # None 3608s Jul 30 23:29:47 And I start a thread inserting data on postgres0 # None 3608s Jul 30 
23:29:47 When I run patronictl.py switchover batman --group 1 --force # None 3608s Jul 30 23:29:47 Then I receive a response returncode 0 # None 3608s Jul 30 23:29:47 And postgres3 role is the primary after 10 seconds # None 3608s Jul 30 23:29:47 And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds # None 3608s Jul 30 23:29:47 And replication works from postgres3 to postgres2 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 3608s Jul 30 23:29:47 And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds # None 3608s Jul 30 23:29:47 And a thread is still alive # None 3608s Jul 30 23:29:47 When I run patronictl.py switchover batman --group 1 --force # None 3608s Jul 30 23:29:47 Then I receive a response returncode 0 # None 3608s Jul 30 23:29:47 And postgres2 role is the primary after 10 seconds # None 3608s Jul 30 23:29:47 And replication works from postgres2 to postgres3 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 3608s Jul 30 23:29:47 And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds # None 3608s Jul 30 23:29:47 And a thread is still alive # None 3608s Jul 30 23:29:47 When I stop a thread # None 3608s Jul 30 23:29:47 Then a distributed table on postgres0 has expected rows # None 3608s Jul 30 23:29:47 3608s Jul 30 23:29:47 Scenario: worker primary restart doesn't break client queries on the coordinator # features/citus.feature:50 3608s Jul 30 23:29:47 Given I cleanup a distributed table on postgres0 # None 3608s Jul 30 23:29:47 And I start a thread inserting data on postgres0 # None 3608s Jul 30 23:29:47 When I run patronictl.py restart batman postgres2 --group 1 --force # None 3608s Jul 30 23:29:47 Then I receive a response returncode 0 # None 3608s Jul 30 23:29:47 And postgres2 role is the primary after 10 seconds # None 3608s Jul 30 23:29:47 And replication works from postgres2 to postgres3 after 15 seconds # None 3608s Jul 30 23:29:47 And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds # None 3608s Jul 30 23:29:47 And a thread is still alive # None 3608s Jul 30 23:29:47 When I stop a thread # None 3608s Jul 30 23:29:47 Then a distributed table on postgres0 has expected rows # None 3614s Jul 30 23:29:53 3614s Jul 30 23:29:53 Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node # features/citus.feature:62 3614s Jul 30 23:29:53 Given I start postgres4 in citus group 2 # None 3614s Jul 30 23:29:53 Then postgres4 is a leader in a group 2 after 10 seconds # None 3614s Jul 30 23:29:53 And "members/postgres4" key in a group 2 in DCS has role=master after 3 seconds # None 3614s Jul 30 23:29:53 When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force # None 3614s Jul 30 23:29:53 Then I receive a response returncode 0 # None 3614s Jul 30 23:29:53 And I receive a response output "+ttl: 20" # None 3614s Jul 30 23:29:53 Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds # None 3614s Jul 30 23:29:53 When I shut down postgres4 # None 3614s Jul 30 23:29:53 Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds # None 3614s Jul 30 23:29:53 When I run patronictl.py restart batman postgres2 --group 1 --force # None 3614s Jul 30 23:29:53 Then a transaction 
finishes in 20 seconds # None 3614s Jul 30 23:29:53 3614s Jul 30 23:29:53 Feature: custom bootstrap # features/custom_bootstrap.feature:1 3614s Jul 30 23:29:53 We should check that patroni can bootstrap a new cluster from a backup 3614s Jul 30 23:29:53 Scenario: clone existing cluster using pg_basebackup # features/custom_bootstrap.feature:4 3614s Jul 30 23:29:53 Given I start postgres0 # features/steps/basic_replication.py:8 3617s Jul 30 23:29:56 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3619s Jul 30 23:29:58 When I add the table foo to postgres0 # features/steps/basic_replication.py:54 3619s Jul 30 23:29:58 And I start postgres1 in a cluster batman1 as a clone of postgres0 # features/steps/custom_bootstrap.py:6 3623s Jul 30 23:30:02 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16 3624s Jul 30 23:30:03 Then table foo is present on postgres1 after 10 seconds # features/steps/basic_replication.py:93 3624s Jul 30 23:30:03 3624s Jul 30 23:30:03 Scenario: make a backup and do a restore into a new cluster # features/custom_bootstrap.feature:12 3624s Jul 30 23:30:03 Given I add the table bar to postgres1 # features/steps/basic_replication.py:54 3624s Jul 30 23:30:03 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25 3625s Jul 30 23:30:04 When I start postgres2 in a cluster batman2 from backup # features/steps/custom_bootstrap.py:11 3631s Jul 30 23:30:10 Then postgres2 is a leader of batman2 after 30 seconds # features/steps/custom_bootstrap.py:16 3631s Jul 30 23:30:10 And table bar is present on postgres2 after 10 seconds # features/steps/basic_replication.py:93 3649s Jul 30 23:30:28 3649s Jul 30 23:30:28 Feature: dcs failsafe mode # features/dcs_failsafe_mode.feature:1 3649s Jul 30 23:30:28 We should check the basic dcs failsafe mode functioning 3649s Jul 30 23:30:28 Scenario: check failsafe mode can be successfully enabled # features/dcs_failsafe_mode.feature:4 3649s Jul 30 23:30:28 Given I start postgres0 # features/steps/basic_replication.py:8 3653s Jul 30 23:30:32 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3654s Jul 30 23:30:33 Then "config" key in DCS has ttl=30 after 10 seconds # features/steps/cascading_replication.py:23 3654s Jul 30 23:30:33 When I issue a PATCH request to http://127.0.0.1:8008/config with {"loop_wait": 2, "ttl": 20, "retry_timeout": 3, "failsafe_mode": true} # features/steps/patroni_api.py:71 3654s Jul 30 23:30:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 3654s Jul 30 23:30:33 And Response on GET http://127.0.0.1:8008/failsafe contains postgres0 after 10 seconds # features/steps/patroni_api.py:156 3654s Jul 30 23:30:33 When I issue a GET request to http://127.0.0.1:8008/failsafe # features/steps/patroni_api.py:61 3654s Jul 30 23:30:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 3654s Jul 30 23:30:33 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98 3654s Jul 30 23:30:33 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}},"slots":{"dcs_slot_1": null,"postgres0":null}} # features/steps/patroni_api.py:71 3654s Jul 30 23:30:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 3654s Jul 30 23:30:33 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots": {"dcs_slot_0": {"type": "logical", "database": "postgres", "plugin": 
"test_decoding"}}} # features/steps/patroni_api.py:71 3654s Jul 30 23:30:33 Then I receive a response code 200 # features/steps/patroni_api.py:98 3654s Jul 30 23:30:33 3654s Jul 30 23:30:33 @dcs-failsafe 3654s Jul 30 23:30:33 Scenario: check one-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:20 3654s Jul 30 23:30:33 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 3654s Jul 30 23:30:33 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 3658s Jul 30 23:30:37 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3658s Jul 30 23:30:37 3658s Jul 30 23:30:37 @dcs-failsafe 3658s Jul 30 23:30:37 Scenario: check new replica isn't promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:26 3658s Jul 30 23:30:37 Given DCS is up # features/steps/dcs_failsafe_mode.py:9 3658s Jul 30 23:30:37 When I do a backup of postgres0 # features/steps/custom_bootstrap.py:25 3659s Jul 30 23:30:38 And I shut down postgres0 # features/steps/basic_replication.py:29 3661s Jul 30 23:30:40 When I start postgres1 in a cluster batman from backup with no_leader # features/steps/dcs_failsafe_mode.py:14 3664s Jul 30 23:30:43 Then postgres1 role is the replica after 12 seconds # features/steps/basic_replication.py:105 3664s Jul 30 23:30:43 3664s Jul 30 23:30:43 Scenario: check leader and replica are both in /failsafe key after leader is back # features/dcs_failsafe_mode.feature:33 3664s Jul 30 23:30:43 Given I start postgres0 # features/steps/basic_replication.py:8 3668s Jul 30 23:30:47 And I start postgres1 # features/steps/basic_replication.py:8 3668s Jul 30 23:30:47 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3669s Jul 30 23:30:48 And "members/postgres1" key in DCS has state=running after 2 seconds # features/steps/cascading_replication.py:23 3669s Jul 30 23:30:48 And Response on GET http://127.0.0.1:8009/failsafe contains postgres1 after 10 seconds # features/steps/patroni_api.py:156 3673s Jul 30 23:30:52 When I issue a GET request to http://127.0.0.1:8009/failsafe # features/steps/patroni_api.py:61 3673s Jul 30 23:30:52 Then I receive a response code 200 # features/steps/patroni_api.py:98 3673s Jul 30 23:30:52 And I receive a response postgres0 http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:98 3673s Jul 30 23:30:52 And I receive a response postgres1 http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:98 3673s Jul 30 23:30:52 3673s Jul 30 23:30:52 @dcs-failsafe @slot-advance 3673s Jul 30 23:30:52 Scenario: check leader and replica are functioning while DCS is down # features/dcs_failsafe_mode.feature:46 3673s Jul 30 23:30:52 Given I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75 3673s Jul 30 23:30:52 Then physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3677s Jul 30 23:30:56 And logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3677s Jul 30 23:30:56 And DCS is down # features/steps/dcs_failsafe_mode.py:4 3677s Jul 30 23:30:56 Then Response on GET http://127.0.0.1:8008/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 3683s Jul 30 23:31:02 Then postgres0 role is the primary after 10 seconds # 
features/steps/basic_replication.py:105 3683s Jul 30 23:31:02 And postgres1 role is the replica after 2 seconds # features/steps/basic_replication.py:105 3683s Jul 30 23:31:02 And replication works from postgres0 to postgres1 after 10 seconds # features/steps/basic_replication.py:112 3683s Jul 30 23:31:02 When I get all changes from logical slot dcs_slot_0 on postgres0 # features/steps/slots.py:70 3683s Jul 30 23:31:02 And I get all changes from physical slot dcs_slot_1 on postgres0 # features/steps/slots.py:75 3683s Jul 30 23:31:02 Then logical slot dcs_slot_0 is in sync between postgres0 and postgres1 after 20 seconds # features/steps/slots.py:51 3690s Jul 30 23:31:09 And physical slot dcs_slot_1 is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51 3690s Jul 30 23:31:09 3690s Jul 30 23:31:09 @dcs-failsafe 3690s Jul 30 23:31:09 Scenario: check primary is demoted when one replica is shut down and DCS is down # features/dcs_failsafe_mode.feature:61 3690s Jul 30 23:31:09 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 3690s Jul 30 23:31:09 And I kill postgres1 # features/steps/basic_replication.py:34 3691s Jul 30 23:31:10 And I kill postmaster on postgres1 # features/steps/basic_replication.py:44 3691s Jul 30 23:31:10 waiting for server to shut down.... done 3691s Jul 30 23:31:10 server stopped 3691s Jul 30 23:31:10 Then postgres0 role is the replica after 12 seconds # features/steps/basic_replication.py:105 3693s Jul 30 23:31:12 3693s Jul 30 23:31:12 @dcs-failsafe 3693s Jul 30 23:31:12 Scenario: check known replica is promoted when leader is down and DCS is up # features/dcs_failsafe_mode.feature:68 3693s Jul 30 23:31:12 Given I kill postgres0 # features/steps/basic_replication.py:34 3694s Jul 30 23:31:13 And I shut down postmaster on postgres0 # features/steps/basic_replication.py:39 3694s Jul 30 23:31:13 waiting for server to shut down.... 
done 3694s Jul 30 23:31:13 server stopped 3694s Jul 30 23:31:13 And DCS is up # features/steps/dcs_failsafe_mode.py:9 3694s Jul 30 23:31:13 When I start postgres1 # features/steps/basic_replication.py:8 3697s Jul 30 23:31:16 Then "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3698s Jul 30 23:31:17 And postgres1 role is the primary after 25 seconds # features/steps/basic_replication.py:105 3700s Jul 30 23:31:19 3700s Jul 30 23:31:19 @dcs-failsafe 3700s Jul 30 23:31:19 Scenario: scale to three-node cluster # features/dcs_failsafe_mode.feature:77 3700s Jul 30 23:31:19 Given I start postgres0 # features/steps/basic_replication.py:8 3704s Jul 30 23:31:23 And I start postgres2 # features/steps/basic_replication.py:8 3710s Jul 30 23:31:29 Then "members/postgres2" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3710s Jul 30 23:31:29 And "members/postgres0" key in DCS has state=running after 20 seconds # features/steps/cascading_replication.py:23 3710s Jul 30 23:31:29 And Response on GET http://127.0.0.1:8008/failsafe contains postgres2 after 10 seconds # features/steps/patroni_api.py:156 3710s Jul 30 23:31:29 And replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112 3711s Jul 30 23:31:30 And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112 3713s Jul 30 23:31:32 3713s Jul 30 23:31:32 @dcs-failsafe @slot-advance 3713s Jul 30 23:31:32 Scenario: make sure permanent slots exist on replicas # features/dcs_failsafe_mode.feature:88 3713s Jul 30 23:31:32 Given I issue a PATCH request to http://127.0.0.1:8009/config with {"slots":{"dcs_slot_0":null,"dcs_slot_2":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 3713s Jul 30 23:31:32 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51 3718s Jul 30 23:31:37 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51 3719s Jul 30 23:31:38 When I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75 3719s Jul 30 23:31:38 Then physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51 3720s Jul 30 23:31:39 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 3720s Jul 30 23:31:39 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 3720s Jul 30 23:31:39 3720s Jul 30 23:31:39 @dcs-failsafe 3720s Jul 30 23:31:39 Scenario: check three-node cluster is functioning while DCS is down # features/dcs_failsafe_mode.feature:98 3720s Jul 30 23:31:39 Given DCS is down # features/steps/dcs_failsafe_mode.py:4 3720s Jul 30 23:31:39 Then Response on GET http://127.0.0.1:8009/primary contains failsafe_mode_is_active after 12 seconds # features/steps/patroni_api.py:156 3724s Jul 30 23:31:43 Then postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3724s Jul 30 23:31:43 And postgres0 role is the replica after 2 seconds # features/steps/basic_replication.py:105 3724s Jul 30 23:31:43 And postgres2 role is the replica after 2 seconds # features/steps/basic_replication.py:105 3724s Jul 30 23:31:43 3724s Jul 30 23:31:43 @dcs-failsafe @slot-advance 3724s 
Jul 30 23:31:43 Scenario: check that permanent slots are in sync between nodes while DCS is down # features/dcs_failsafe_mode.feature:107 3724s Jul 30 23:31:43 Given replication works from postgres1 to postgres0 after 10 seconds # features/steps/basic_replication.py:112 3724s Jul 30 23:31:43 And replication works from postgres1 to postgres2 after 10 seconds # features/steps/basic_replication.py:112 3725s Jul 30 23:31:44 When I get all changes from logical slot dcs_slot_2 on postgres1 # features/steps/slots.py:70 3725s Jul 30 23:31:44 And I get all changes from physical slot dcs_slot_1 on postgres1 # features/steps/slots.py:75 3725s Jul 30 23:31:44 Then logical slot dcs_slot_2 is in sync between postgres1 and postgres0 after 20 seconds # features/steps/slots.py:51 3731s Jul 30 23:31:50 And logical slot dcs_slot_2 is in sync between postgres1 and postgres2 after 20 seconds # features/steps/slots.py:51 3731s Jul 30 23:31:50 And physical slot dcs_slot_1 is in sync between postgres1 and postgres0 after 10 seconds # features/steps/slots.py:51 3731s Jul 30 23:31:50 And physical slot dcs_slot_1 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 3731s Jul 30 23:31:50 And physical slot postgres0 is in sync between postgres1 and postgres2 after 10 seconds # features/steps/slots.py:51 3743s Jul 30 23:32:02 3743s Jul 30 23:32:02 Feature: ignored slots # features/ignored_slots.feature:1 3743s Jul 30 23:32:02 3743s Jul 30 23:32:02 Scenario: check ignored slots aren't removed on failover/switchover # features/ignored_slots.feature:2 3743s Jul 30 23:32:02 Given I start postgres1 # features/steps/basic_replication.py:8 3747s Jul 30 23:32:06 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 3748s Jul 30 23:32:07 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41 3748s Jul 30 23:32:07 When I issue a PATCH request to http://127.0.0.1:8009/config with {"ignore_slots": [{"name": "unmanaged_slot_0", "database": "postgres", "plugin": "test_decoding", "type": "logical"}, {"name": "unmanaged_slot_1", "database": "postgres", "plugin": "test_decoding"}, {"name": "unmanaged_slot_2", "database": "postgres"}, {"name": "unmanaged_slot_3"}], "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71 3748s Jul 30 23:32:07 Then I receive a response code 200 # features/steps/patroni_api.py:98 3748s Jul 30 23:32:07 And Response on GET http://127.0.0.1:8009/config contains ignore_slots after 10 seconds # features/steps/patroni_api.py:156 3748s Jul 30 23:32:07 When I shut down postgres1 # features/steps/basic_replication.py:29 3750s Jul 30 23:32:09 And I start postgres1 # features/steps/basic_replication.py:8 3753s Jul 30 23:32:12 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29 3756s Jul 30 23:32:15 And "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 3757s Jul 30 23:32:16 And postgres1 role is the primary after 20 seconds # features/steps/basic_replication.py:105 3757s Jul 30 23:32:16 When I create a logical replication slot unmanaged_slot_0 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 3757s Jul 30 23:32:16 And I create a logical replication slot unmanaged_slot_1 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 3757s Jul 30 23:32:16 And I create a logical replication slot unmanaged_slot_2 on postgres1 with the test_decoding plugin # 
features/steps/slots.py:8 3757s Jul 30 23:32:16 And I create a logical replication slot unmanaged_slot_3 on postgres1 with the test_decoding plugin # features/steps/slots.py:8 3757s Jul 30 23:32:16 And I create a logical replication slot dummy_slot on postgres1 with the test_decoding plugin # features/steps/slots.py:8 3757s Jul 30 23:32:16 Then postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3757s Jul 30 23:32:16 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3757s Jul 30 23:32:16 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3757s Jul 30 23:32:16 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3757s Jul 30 23:32:16 When I start postgres0 # features/steps/basic_replication.py:8 3761s Jul 30 23:32:20 Then "members/postgres0" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 3761s Jul 30 23:32:20 And postgres0 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 3761s Jul 30 23:32:20 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 3763s Jul 30 23:32:22 When I shut down postgres1 # features/steps/basic_replication.py:29 3765s Jul 30 23:32:24 Then "members/postgres0" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 3766s Jul 30 23:32:25 When I start postgres1 # features/steps/basic_replication.py:8 3769s Jul 30 23:32:28 Then postgres1 role is the secondary after 20 seconds # features/steps/basic_replication.py:105 3769s Jul 30 23:32:28 And "members/postgres1" key in DCS has role=replica after 10 seconds # features/steps/cascading_replication.py:23 3769s Jul 30 23:32:28 And I sleep for 2 seconds # features/steps/patroni_api.py:39 3771s Jul 30 23:32:30 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3771s Jul 30 23:32:30 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3771s Jul 30 23:32:30 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3771s Jul 30 23:32:30 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3771s Jul 30 23:32:30 And postgres1 does not have a replication slot named dummy_slot # features/steps/slots.py:40 3771s Jul 30 23:32:30 When I shut down postgres0 # features/steps/basic_replication.py:29 3773s Jul 30 23:32:32 Then "members/postgres1" key in DCS has role=master after 10 seconds # features/steps/cascading_replication.py:23 3774s Jul 30 23:32:33 And postgres1 has a logical replication slot named unmanaged_slot_0 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3774s Jul 30 23:32:33 And postgres1 has a logical replication slot named unmanaged_slot_1 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3774s Jul 30 23:32:33 And postgres1 has a logical replication slot named unmanaged_slot_2 with the test_decoding plugin 
after 2 seconds # features/steps/slots.py:19 3774s Jul 30 23:32:33 And postgres1 has a logical replication slot named unmanaged_slot_3 with the test_decoding plugin after 2 seconds # features/steps/slots.py:19 3782s Jul 30 23:32:41 3782s Jul 30 23:32:41 Feature: nostream node # features/nostream_node.feature:1 3782s Jul 30 23:32:41 3782s Jul 30 23:32:41 Scenario: check nostream node is recovering from archive # features/nostream_node.feature:3 3782s Jul 30 23:32:41 When I start postgres0 # features/steps/basic_replication.py:8 3786s Jul 30 23:32:45 And I configure and start postgres1 with a tag nostream true # features/steps/cascading_replication.py:7 3790s Jul 30 23:32:49 Then "members/postgres1" key in DCS has replication_state=in archive recovery after 10 seconds # features/steps/cascading_replication.py:23 3790s Jul 30 23:32:49 And replication works from postgres0 to postgres1 after 30 seconds # features/steps/basic_replication.py:112 3795s Jul 30 23:32:54 3795s Jul 30 23:32:54 @slot-advance 3795s Jul 30 23:32:54 Scenario: check permanent logical replication slots are not copied # features/nostream_node.feature:10 3795s Jul 30 23:32:54 When I issue a PATCH request to http://127.0.0.1:8008/config with {"postgresql": {"parameters": {"wal_level": "logical"}}, "slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71 3795s Jul 30 23:32:54 Then I receive a response code 200 # features/steps/patroni_api.py:98 3795s Jul 30 23:32:54 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 3798s Jul 30 23:32:57 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19 3799s Jul 30 23:32:58 When I configure and start postgres2 with a tag replicatefrom postgres1 # features/steps/cascading_replication.py:7 3803s Jul 30 23:33:02 Then "members/postgres2" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23 3810s Jul 30 23:33:09 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40 3810s Jul 30 23:33:09 And postgres2 does not have a replication slot named test_logical # features/steps/slots.py:40 3826s Jul 30 23:33:25 3826s Jul 30 23:33:25 Feature: patroni api # features/patroni_api.feature:1 3826s Jul 30 23:33:25 We should check that patroni correctly responds to valid and not-valid API requests. 
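(Illustrative sketch, not part of the captured log.) The patroni api scenarios below drive the member REST API on 127.0.0.1:8008 and 8009: simple GET probes such as /, /health, /primary and /replica answer 200 or 503 depending on the member's current role, and PATCH /config applies dynamic configuration changes. A minimal standard-library sketch of the kinds of requests these steps issue (endpoints and the example payload are copied from the steps; the helper itself is hypothetical):

import json
import urllib.error
import urllib.request

BASE = "http://127.0.0.1:8008"  # REST API of the postgres0 member in these scenarios

def call(method, path, payload=None):
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as exc:  # e.g. 503 from /replica on the leader
        return exc.code, exc.read().decode()

print(call("GET", "/"))         # state and role of this member
print(call("GET", "/health"))   # 200 while PostgreSQL is running
print(call("GET", "/replica"))  # 503 on the leader, 200 on a healthy replica
# Dynamic configuration change, as in the "check dynamic configuration change via DCS" scenario:
print(call("PATCH", "/config", {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}}))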
3826s Jul 30 23:33:25 Scenario: check API requests on a stand-alone server # features/patroni_api.feature:4 3826s Jul 30 23:33:25 Given I start postgres0 # features/steps/basic_replication.py:8 3830s Jul 30 23:33:29 And postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29 3830s Jul 30 23:33:29 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61 3830s Jul 30 23:33:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 And I receive a response state running # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 And I receive a response role master # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 When I issue a GET request to http://127.0.0.1:8008/standby_leader # features/steps/patroni_api.py:61 3830s Jul 30 23:33:29 Then I receive a response code 503 # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 When I issue a GET request to http://127.0.0.1:8008/health # features/steps/patroni_api.py:61 3830s Jul 30 23:33:29 Then I receive a response code 200 # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61 3830s Jul 30 23:33:29 Then I receive a response code 503 # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 When I issue a POST request to http://127.0.0.1:8008/reinitialize with {"force": true} # features/steps/patroni_api.py:71 3830s Jul 30 23:33:29 Then I receive a response code 503 # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 And I receive a response text I am the leader, can not reinitialize # features/steps/patroni_api.py:98 3830s Jul 30 23:33:29 When I run patronictl.py switchover batman --master postgres0 --force # features/steps/patroni_api.py:86 3833s Jul 30 23:33:31 Then I receive a response returncode 1 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:31 And I receive a response output "Error: No candidates found to switchover to" # features/steps/patroni_api.py:98 3833s Jul 30 23:33:31 When I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0"} # features/steps/patroni_api.py:71 3833s Jul 30 23:33:32 Then I receive a response code 412 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 And I receive a response text switchover is not possible: cluster does not have members except leader # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 When I issue an empty POST request to http://127.0.0.1:8008/failover # features/steps/patroni_api.py:66 3833s Jul 30 23:33:32 Then I receive a response code 400 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 When I issue a POST request to http://127.0.0.1:8008/failover with {"foo": "bar"} # features/steps/patroni_api.py:71 3833s Jul 30 23:33:32 Then I receive a response code 400 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 And I receive a response text "Failover could be performed only to a specific candidate" # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 3833s Jul 30 23:33:32 Scenario: check local configuration reload # features/patroni_api.feature:32 3833s Jul 30 23:33:32 Given I add tag new_tag new_value to postgres0 config # features/steps/patroni_api.py:137 3833s Jul 30 23:33:32 And I issue an empty POST request to http://127.0.0.1:8008/reload # features/steps/patroni_api.py:66 3833s Jul 30 23:33:32 Then I receive a response code 202 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 3833s Jul 30 23:33:32 Scenario: check dynamic 
configuration change via DCS # features/patroni_api.feature:37 3833s Jul 30 23:33:32 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "postgresql": {"parameters": {"max_connections": "101"}}} # features/steps/patroni_api.py:71 3833s Jul 30 23:33:32 Then I receive a response code 200 # features/steps/patroni_api.py:98 3833s Jul 30 23:33:32 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 11 seconds # features/steps/patroni_api.py:156 3835s Jul 30 23:33:34 When I issue a GET request to http://127.0.0.1:8008/config # features/steps/patroni_api.py:61 3835s Jul 30 23:33:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 3835s Jul 30 23:33:34 And I receive a response ttl 20 # features/steps/patroni_api.py:98 3835s Jul 30 23:33:34 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61 3835s Jul 30 23:33:34 Then I receive a response code 200 # features/steps/patroni_api.py:98 3835s Jul 30 23:33:34 And I receive a response tags {'new_tag': 'new_value'} # features/steps/patroni_api.py:98 3835s Jul 30 23:33:34 And I sleep for 4 seconds # features/steps/patroni_api.py:39 3839s Jul 30 23:33:38 3839s Jul 30 23:33:38 Scenario: check the scheduled restart # features/patroni_api.feature:49 3839s Jul 30 23:33:38 Given I run patronictl.py edit-config -p 'superuser_reserved_connections=6' --force batman # features/steps/patroni_api.py:86 3841s Jul 30 23:33:40 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3841s Jul 30 23:33:40 And I receive a response output "+ superuser_reserved_connections: 6" # features/steps/patroni_api.py:98 3841s Jul 30 23:33:40 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 5 seconds # features/steps/patroni_api.py:156 3841s Jul 30 23:33:40 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"role": "replica"} # features/steps/patroni_api.py:124 3841s Jul 30 23:33:40 Then I receive a response code 202 # features/steps/patroni_api.py:98 3841s Jul 30 23:33:40 And I sleep for 8 seconds # features/steps/patroni_api.py:39 3849s Jul 30 23:33:48 And Response on GET http://127.0.0.1:8008/patroni contains pending_restart after 10 seconds # features/steps/patroni_api.py:156 3849s Jul 30 23:33:48 Given I issue a scheduled restart at http://127.0.0.1:8008 in 5 seconds with {"restart_pending": "True"} # features/steps/patroni_api.py:124 3849s Jul 30 23:33:48 Then I receive a response code 202 # features/steps/patroni_api.py:98 3849s Jul 30 23:33:48 And Response on GET http://127.0.0.1:8008/patroni does not contain pending_restart after 10 seconds # features/steps/patroni_api.py:171 3856s Jul 30 23:33:55 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3857s Jul 30 23:33:56 3857s Jul 30 23:33:56 Scenario: check API requests for the primary-replica pair in the pause mode # features/patroni_api.feature:63 3857s Jul 30 23:33:56 Given I start postgres1 # features/steps/basic_replication.py:8 3861s Jul 30 23:34:00 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 3862s Jul 30 23:34:01 When I run patronictl.py pause batman # features/steps/patroni_api.py:86 3864s Jul 30 23:34:03 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3864s Jul 30 23:34:03 When I kill postmaster on postgres1 # features/steps/basic_replication.py:44 3864s Jul 30 23:34:03 waiting for server to shut 
down.... done 3864s Jul 30 23:34:03 server stopped 3864s Jul 30 23:34:03 And I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 3864s Jul 30 23:34:03 Then I receive a response code 503 # features/steps/patroni_api.py:98 3864s Jul 30 23:34:03 And "members/postgres1" key in DCS has state=stopped after 10 seconds # features/steps/cascading_replication.py:23 3865s Jul 30 23:34:04 When I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86 3869s Jul 30 23:34:08 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3869s Jul 30 23:34:08 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 3870s Jul 30 23:34:09 And I sleep for 2 seconds # features/steps/patroni_api.py:39 3872s Jul 30 23:34:11 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61 3872s Jul 30 23:34:11 Then I receive a response code 200 # features/steps/patroni_api.py:98 3872s Jul 30 23:34:11 And I receive a response state running # features/steps/patroni_api.py:98 3872s Jul 30 23:34:11 And I receive a response role replica # features/steps/patroni_api.py:98 3872s Jul 30 23:34:11 When I run patronictl.py reinit batman postgres1 --force --wait # features/steps/patroni_api.py:86 3876s Jul 30 23:34:15 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3876s Jul 30 23:34:15 And I receive a response output "Success: reinitialize for member postgres1" # features/steps/patroni_api.py:98 3876s Jul 30 23:34:15 And postgres1 role is the secondary after 30 seconds # features/steps/basic_replication.py:105 3877s Jul 30 23:34:16 And replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112 3877s Jul 30 23:34:16 When I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86 3881s Jul 30 23:34:20 Then I receive a response returncode 0 # features/steps/patroni_api.py:98 3881s Jul 30 23:34:20 And I receive a response output "Success: restart on member postgres0" # features/steps/patroni_api.py:98 3881s Jul 30 23:34:20 And postgres0 role is the primary after 5 seconds # features/steps/basic_replication.py:105 3882s Jul 30 23:34:21 3882s Jul 30 23:34:21 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90 3882s Jul 30 23:34:21 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71 3884s Jul 30 23:34:23 Then I receive a response code 200 # features/steps/patroni_api.py:98 3884s Jul 30 23:34:23 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29 3884s Jul 30 23:34:23 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105 3884s Jul 30 23:34:23 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105 3890s Jul 30 23:34:29 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112 3890s Jul 30 23:34:29 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23 3891s Jul 30 23:34:30 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61 3891s Jul 30 23:34:30 Then I receive a response code 503 # features/steps/patroni_api.py:98 3891s Jul 30 23:34:30 When I issue a GET request to 
3882s Jul 30 23:34:21 Scenario: check the switchover via the API in the pause mode # features/patroni_api.feature:90
3882s Jul 30 23:34:21 Given I issue a POST request to http://127.0.0.1:8008/switchover with {"leader": "postgres0", "candidate": "postgres1"} # features/steps/patroni_api.py:71
3884s Jul 30 23:34:23 Then I receive a response code 200 # features/steps/patroni_api.py:98
3884s Jul 30 23:34:23 And postgres1 is a leader after 5 seconds # features/steps/patroni_api.py:29
3884s Jul 30 23:34:23 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
3884s Jul 30 23:34:23 And postgres0 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
3890s Jul 30 23:34:29 And replication works from postgres1 to postgres0 after 20 seconds # features/steps/basic_replication.py:112
3890s Jul 30 23:34:29 And "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
3891s Jul 30 23:34:30 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
3891s Jul 30 23:34:30 Then I receive a response code 503 # features/steps/patroni_api.py:98
3891s Jul 30 23:34:30 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
3891s Jul 30 23:34:30 Then I receive a response code 200 # features/steps/patroni_api.py:98
3891s Jul 30 23:34:30 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
3891s Jul 30 23:34:30 Then I receive a response code 200 # features/steps/patroni_api.py:98
3891s Jul 30 23:34:30 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
3891s Jul 30 23:34:30 Then I receive a response code 503 # features/steps/patroni_api.py:98
3891s Jul 30 23:34:30
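
The scenario above shows that a switchover is just a POST to /switchover naming the current leader and a candidate, and that Patroni's health checks are role aware: /primary answers 200 only on the leader and /replica only on a healthy replica, which is why the 200/503 codes swap between ports 8008 and 8009 once the roles change. A sketch of the same calls with the requests library, reusing the member addresses from the log:

    import requests

    # Trigger a switchover from postgres0 to postgres1; the API validates both member names.
    r = requests.post("http://127.0.0.1:8008/switchover",
                      json={"leader": "postgres0", "candidate": "postgres1"})
    print(r.status_code, r.text)

    # Role-aware health checks: 200 from /primary only on the leader,
    # 200 from /replica only on a running replica, 503 otherwise.
    for port in (8008, 8009):
        for endpoint in ("primary", "replica"):
            code = requests.get(f"http://127.0.0.1:{port}/{endpoint}").status_code
            print(port, endpoint, code)
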
3891s Jul 30 23:34:30 Scenario: check the scheduled switchover # features/patroni_api.feature:107
3891s Jul 30 23:34:30 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
3894s Jul 30 23:34:33 Then I receive a response returncode 1 # features/steps/patroni_api.py:98
3894s Jul 30 23:34:33 And I receive a response output "Can't schedule switchover in the paused state" # features/steps/patroni_api.py:98
3894s Jul 30 23:34:33 When I run patronictl.py resume batman # features/steps/patroni_api.py:86
3896s Jul 30 23:34:35 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
3896s Jul 30 23:34:35 Given I issue a scheduled switchover from postgres1 to postgres0 in 10 seconds # features/steps/patroni_api.py:117
3898s Jul 30 23:34:37 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
3898s Jul 30 23:34:37 And postgres0 is a leader after 20 seconds # features/steps/patroni_api.py:29
3908s Jul 30 23:34:47 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
3908s Jul 30 23:34:47 And postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
3911s Jul 30 23:34:50 And replication works from postgres0 to postgres1 after 25 seconds # features/steps/basic_replication.py:112
3911s Jul 30 23:34:50 And "members/postgres1" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
3912s Jul 30 23:34:51 When I issue a GET request to http://127.0.0.1:8008/primary # features/steps/patroni_api.py:61
3912s Jul 30 23:34:51 Then I receive a response code 200 # features/steps/patroni_api.py:98
3912s Jul 30 23:34:51 When I issue a GET request to http://127.0.0.1:8008/replica # features/steps/patroni_api.py:61
3912s Jul 30 23:34:51 Then I receive a response code 503 # features/steps/patroni_api.py:98
3912s Jul 30 23:34:51 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
3912s Jul 30 23:34:51 Then I receive a response code 503 # features/steps/patroni_api.py:98
3912s Jul 30 23:34:51 When I issue a GET request to http://127.0.0.1:8009/replica # features/steps/patroni_api.py:61
3912s Jul 30 23:34:51 Then I receive a response code 200 # features/steps/patroni_api.py:98
3923s Jul 30 23:35:02
3923s Jul 30 23:35:02 Feature: permanent slots # features/permanent_slots.feature:1
3923s Jul 30 23:35:02
3923s Jul 30 23:35:02 Scenario: check that physical permanent slots are created # features/permanent_slots.feature:2
3923s Jul 30 23:35:02 Given I start postgres0 # features/steps/basic_replication.py:8
3926s Jul 30 23:35:05 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
3928s Jul 30 23:35:07 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
3928s Jul 30 23:35:07 When I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_physical":0,"postgres0":0,"postgres1":0,"postgres3":0},"postgresql":{"parameters":{"wal_level":"logical"}}} # features/steps/patroni_api.py:71
3928s Jul 30 23:35:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
3928s Jul 30 23:35:07 And Response on GET http://127.0.0.1:8008/config contains slots after 10 seconds # features/steps/patroni_api.py:156
3928s Jul 30 23:35:07 When I start postgres1 # features/steps/basic_replication.py:8
3932s Jul 30 23:35:11 And I start postgres2 # features/steps/basic_replication.py:8
3936s Jul 30 23:35:15 And I configure and start postgres3 with a tag replicatefrom postgres2 # features/steps/cascading_replication.py:7
3941s Jul 30 23:35:20 Then postgres0 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
3941s Jul 30 23:35:20 And postgres0 has a physical replication slot named postgres1 after 10 seconds # features/steps/slots.py:80
3941s Jul 30 23:35:20 And postgres0 has a physical replication slot named postgres2 after 10 seconds # features/steps/slots.py:80
3941s Jul 30 23:35:20 And postgres2 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
3941s Jul 30 23:35:20
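
The PATCH body in the scenario above declares permanent replication slots in the dynamic configuration and raises wal_level to logical so that logical slots can be added later. A sketch of the same configuration change with the requests library, mirroring the slot names and the leader address from the log:

    import requests

    config_patch = {
        # Permanent physical slots, written exactly as the test sends them.
        "slots": {"test_physical": 0, "postgres0": 0, "postgres1": 0, "postgres3": 0},
        # Logical decoding needs wal_level=logical on every member.
        "postgresql": {"parameters": {"wal_level": "logical"}},
    }
    r = requests.patch("http://127.0.0.1:8008/config", json=config_patch)
    print(r.status_code, r.json().get("slots"))
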
3941s Jul 30 23:35:20 @slot-advance
3941s Jul 30 23:35:20 Scenario: check that logical permanent slots are created # features/permanent_slots.feature:18
3941s Jul 30 23:35:20 Given I run patronictl.py restart batman postgres0 --force # features/steps/patroni_api.py:86
3946s Jul 30 23:35:25 And I issue a PATCH request to http://127.0.0.1:8008/config with {"slots":{"test_logical":{"type":"logical","database":"postgres","plugin":"test_decoding"}}} # features/steps/patroni_api.py:71
3946s Jul 30 23:35:25 Then postgres0 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
3947s Jul 30 23:35:26
3947s Jul 30 23:35:26 @slot-advance
3947s Jul 30 23:35:26 Scenario: check that permanent slots are created on replicas # features/permanent_slots.feature:24
3947s Jul 30 23:35:26 Given postgres1 has a logical replication slot named test_logical with the test_decoding plugin after 10 seconds # features/steps/slots.py:19
3953s Jul 30 23:35:32 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
3953s Jul 30 23:35:32 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
3954s Jul 30 23:35:33 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
3955s Jul 30 23:35:34 And postgres1 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres2 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres3 has a physical replication slot named test_physical after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34
3955s Jul 30 23:35:34 @slot-advance
3955s Jul 30 23:35:34 Scenario: check permanent physical slots that match with member names # features/permanent_slots.feature:34
3955s Jul 30 23:35:34 Given postgres0 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres1 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres1 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres2 has a physical replication slot named postgres0 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres2 has a physical replication slot named postgres3 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres2 has a physical replication slot named postgres1 after 2 seconds # features/steps/slots.py:80
3955s Jul 30 23:35:34 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
3955s Jul 30 23:35:34 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
3955s Jul 30 23:35:34
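
The slot assertions above (features/steps/slots.py) ultimately come down to querying pg_replication_slots on each member. A standalone check with psycopg2, which the test dependencies already pull in, might look like this; the connection parameters are illustrative rather than taken from the log, since the suite runs its own throwaway clusters:

    import psycopg2

    # Connection details are illustrative; adjust host/port/user for a real cluster member.
    conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="postgres", user="postgres")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT slot_name, slot_type, plugin, confirmed_flush_lsn"
                    " FROM pg_replication_slots")
        for slot_name, slot_type, plugin, confirmed_flush_lsn in cur.fetchall():
            # Physical slots report no plugin/confirmed_flush_lsn; logical slots report both.
            print(slot_name, slot_type, plugin, confirmed_flush_lsn)
    conn.close()
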
3955s Jul 30 23:35:34 @slot-advance
3955s Jul 30 23:35:34 Scenario: check that permanent slots are advanced on replicas # features/permanent_slots.feature:45
3955s Jul 30 23:35:34 Given I add the table replicate_me to postgres0 # features/steps/basic_replication.py:54
3955s Jul 30 23:35:34 When I get all changes from logical slot test_logical on postgres0 # features/steps/slots.py:70
3955s Jul 30 23:35:34 And I get all changes from physical slot test_physical on postgres0 # features/steps/slots.py:75
3955s Jul 30 23:35:34 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Physical slot test_physical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Logical slot test_logical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Physical slot test_physical is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Logical slot test_logical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Physical slot test_physical is in sync between postgres0 and postgres3 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Physical slot postgres1 is in sync between postgres0 and postgres2 after 10 seconds # features/steps/slots.py:51
3957s Jul 30 23:35:36 And Physical slot postgres3 is in sync between postgres2 and postgres0 after 20 seconds # features/steps/slots.py:51
3959s Jul 30 23:35:38 And Physical slot postgres3 is in sync between postgres2 and postgres1 after 10 seconds # features/steps/slots.py:51
3959s Jul 30 23:35:38 And postgres1 does not have a replication slot named postgres2 # features/steps/slots.py:40
3959s Jul 30 23:35:38 And postgres3 does not have a replication slot named postgres2 # features/steps/slots.py:40
3959s Jul 30 23:35:38
3959s Jul 30 23:35:38 @slot-advance
3959s Jul 30 23:35:38 Scenario: check that only permanent slots are written to the /status key # features/permanent_slots.feature:62
3959s Jul 30 23:35:38 Given "status" key in DCS has test_physical in slots # features/steps/slots.py:96
3959s Jul 30 23:35:38 And "status" key in DCS has postgres0 in slots # features/steps/slots.py:96
3959s Jul 30 23:35:38 And "status" key in DCS has postgres1 in slots # features/steps/slots.py:96
3959s Jul 30 23:35:38 And "status" key in DCS does not have postgres2 in slots # features/steps/slots.py:102
3959s Jul 30 23:35:38 And "status" key in DCS has postgres3 in slots # features/steps/slots.py:96
3959s Jul 30 23:35:38
3959s Jul 30 23:35:38 Scenario: check permanent physical replication slot after failover # features/permanent_slots.feature:69
3959s Jul 30 23:35:38 Given I shut down postgres3 # features/steps/basic_replication.py:29
3960s Jul 30 23:35:39 And I shut down postgres2 # features/steps/basic_replication.py:29
3961s Jul 30 23:35:40 And I shut down postgres0 # features/steps/basic_replication.py:29
3963s Jul 30 23:35:42 Then postgres1 has a physical replication slot named test_physical after 10 seconds # features/steps/slots.py:80
3963s Jul 30 23:35:42 And postgres1 has a physical replication slot named postgres0 after 10 seconds # features/steps/slots.py:80
3963s Jul 30 23:35:42 And postgres1 has a physical replication slot named postgres3 after 10 seconds # features/steps/slots.py:80
3975s Jul 30 23:35:54
3975s Jul 30 23:35:54 Feature: priority replication # features/priority_failover.feature:1
3975s Jul 30 23:35:54 We should check that we can give nodes priority during failover
3975s Jul 30 23:35:54 Scenario: check failover priority 0 prevents leaderships # features/priority_failover.feature:4
3975s Jul 30 23:35:54 Given I configure and start postgres0 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
3979s Jul 30 23:35:58 And I configure and start postgres1 with a tag failover_priority 0 # features/steps/cascading_replication.py:7
3983s Jul 30 23:36:02 Then replication works from postgres0 to postgres1 after 20 seconds # features/steps/basic_replication.py:112
3984s Jul 30 23:36:03 When I shut down postgres0 # features/steps/basic_replication.py:29
3986s Jul 30 23:36:05 And there is one of ["following a different leader because I am not allowed to promote"] INFO in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
3988s Jul 30 23:36:07 Then postgres1 role is the secondary after 10 seconds # features/steps/basic_replication.py:105
3988s Jul 30 23:36:07 When I start postgres0 # features/steps/basic_replication.py:8
3991s Jul 30 23:36:10 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
3995s Jul 30 23:36:14
3995s Jul 30 23:36:14 Scenario: check higher failover priority is respected # features/priority_failover.feature:14
3995s Jul 30 23:36:14 Given I configure and start postgres2 with a tag failover_priority 1 # features/steps/cascading_replication.py:7
4000s Jul 30 23:36:19 And I configure and start postgres3 with a tag failover_priority 2 # features/steps/cascading_replication.py:7
4010s Jul 30 23:36:29 Then replication works from postgres0 to postgres2 after 20 seconds # features/steps/basic_replication.py:112
4011s Jul 30 23:36:30 And replication works from postgres0 to postgres3 after 20 seconds # features/steps/basic_replication.py:112
4012s Jul 30 23:36:31 When I shut down postgres0 # features/steps/basic_replication.py:29
4015s Jul 30 23:36:34 Then postgres3 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4016s Jul 30 23:36:35 And there is one of ["postgres3 has equally tolerable WAL position and priority 2, while this node has priority 1","Wal position of postgres3 is ahead of my wal position"] INFO in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
4016s Jul 30 23:36:35
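
failover_priority and nofailover are per-member tags from each node's Patroni YAML: a priority of 0 behaves like nofailover, and among candidates with equally good WAL positions the higher number wins, which is exactly what the two scenarios above assert. A sketch of how such a tags section could be generated with PyYAML (already a Patroni dependency); the values here are illustrative, and combining the two tags inconsistently triggers the "Conflicting configuration" warning exercised in the next scenario:

    import yaml

    # Per-member tags as they would appear in that node's patroni.yml.
    tags_section = {
        "tags": {
            "failover_priority": 2,  # 0 means "never promote"; higher values win ties
        }
    }
    print(yaml.safe_dump(tags_section, default_flow_style=False))
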
4016s Jul 30 23:36:35 Scenario: check conflicting configuration handling # features/priority_failover.feature:23
4016s Jul 30 23:36:35 When I set nofailover tag in postgres2 config # features/steps/patroni_api.py:131
4016s Jul 30 23:36:35 And I issue an empty POST request to http://127.0.0.1:8010/reload # features/steps/patroni_api.py:66
4016s Jul 30 23:36:35 Then I receive a response code 202 # features/steps/patroni_api.py:98
4016s Jul 30 23:36:35 And there is one of ["Conflicting configuration between nofailover: True and failover_priority: 1. Defaulting to nofailover: True"] WARNING in the postgres2 patroni log after 5 seconds # features/steps/basic_replication.py:121
4018s Jul 30 23:36:37 And "members/postgres2" key in DCS has tags={'failover_priority': '1', 'nofailover': True} after 10 seconds # features/steps/cascading_replication.py:23
4019s Jul 30 23:36:38 When I issue a POST request to http://127.0.0.1:8010/failover with {"candidate": "postgres2"} # features/steps/patroni_api.py:71
4020s Jul 30 23:36:39 Then I receive a response code 412 # features/steps/patroni_api.py:98
4020s Jul 30 23:36:39 And I receive a response text "failover is not possible: no good candidates have been found" # features/steps/patroni_api.py:98
4020s Jul 30 23:36:39 When I reset nofailover tag in postgres1 config # features/steps/patroni_api.py:131
4020s Jul 30 23:36:39 And I issue an empty POST request to http://127.0.0.1:8009/reload # features/steps/patroni_api.py:66
4020s Jul 30 23:36:39 Then I receive a response code 202 # features/steps/patroni_api.py:98
4020s Jul 30 23:36:39 And there is one of ["Conflicting configuration between nofailover: False and failover_priority: 0. Defaulting to nofailover: False"] WARNING in the postgres1 patroni log after 5 seconds # features/steps/basic_replication.py:121
4021s Jul 30 23:36:40 And "members/postgres1" key in DCS has tags={'failover_priority': '0', 'nofailover': False} after 10 seconds # features/steps/cascading_replication.py:23
4022s Jul 30 23:36:41 And I issue a POST request to http://127.0.0.1:8009/failover with {"candidate": "postgres1"} # features/steps/patroni_api.py:71
4024s Jul 30 23:36:43 Then I receive a response code 200 # features/steps/patroni_api.py:98
4024s Jul 30 23:36:43 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4038s Jul 30 23:36:57
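
The scenario above shows the manual-failover path of the API: an empty POST to /reload makes a member re-read its local configuration (202 = accepted), and a POST to /failover naming a candidate is refused with 412 while the candidate is ineligible (here because its conflicting tags resolved to nofailover: True) and succeeds with 200 once a good candidate is offered. A sketch of the same calls with the requests library, using the member addresses from the log:

    import requests

    # Ask a member to re-read its local configuration after editing its tags.
    print(requests.post("http://127.0.0.1:8010/reload").status_code)

    # Rejected: postgres2 resolved to nofailover=True, so it is not a valid candidate.
    r = requests.post("http://127.0.0.1:8010/failover", json={"candidate": "postgres2"})
    print(r.status_code, r.text)   # expect 412 and an explanatory message

    # Accepted: postgres1 is eligible, so the failover proceeds.
    r = requests.post("http://127.0.0.1:8009/failover", json={"candidate": "postgres1"})
    print(r.status_code, r.text)
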
4038s Jul 30 23:36:57 Feature: recovery # features/recovery.feature:1
4038s Jul 30 23:36:57 We want to check that crashed postgres is started back
4038s Jul 30 23:36:57 Scenario: check that timeline is not incremented when primary is started after crash # features/recovery.feature:4
4038s Jul 30 23:36:57 Given I start postgres0 # features/steps/basic_replication.py:8
4047s Jul 30 23:37:06 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
4049s Jul 30 23:37:08 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
4049s Jul 30 23:37:08 When I start postgres1 # features/steps/basic_replication.py:8
4053s Jul 30 23:37:12 And I add the table foo to postgres0 # features/steps/basic_replication.py:54
4053s Jul 30 23:37:12 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
4054s Jul 30 23:37:13 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
4054s Jul 30 23:37:13 waiting for server to shut down.... done
4054s Jul 30 23:37:13 server stopped
4054s Jul 30 23:37:13 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4057s Jul 30 23:37:16 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
4057s Jul 30 23:37:16 Then I receive a response code 200 # features/steps/patroni_api.py:98
4057s Jul 30 23:37:16 And I receive a response role master # features/steps/patroni_api.py:98
4057s Jul 30 23:37:16 And I receive a response timeline 1 # features/steps/patroni_api.py:98
4057s Jul 30 23:37:16 And "members/postgres0" key in DCS has state=running after 12 seconds # features/steps/cascading_replication.py:23
4058s Jul 30 23:37:17 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
4060s Jul 30 23:37:19
4060s Jul 30 23:37:19 Scenario: check immediate failover when master_start_timeout=0 # features/recovery.feature:20
4060s Jul 30 23:37:19 Given I issue a PATCH request to http://127.0.0.1:8008/config with {"master_start_timeout": 0} # features/steps/patroni_api.py:71
4060s Jul 30 23:37:19 Then I receive a response code 200 # features/steps/patroni_api.py:98
4060s Jul 30 23:37:19 And Response on GET http://127.0.0.1:8008/config contains master_start_timeout after 10 seconds # features/steps/patroni_api.py:156
4060s Jul 30 23:37:19 When I kill postmaster on postgres0 # features/steps/basic_replication.py:44
4060s Jul 30 23:37:19 waiting for server to shut down.... done
4060s Jul 30 23:37:19 server stopped
4060s Jul 30 23:37:19 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
4062s Jul 30 23:37:21 And postgres1 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4073s Jul 30 23:37:32
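
master_start_timeout is part of the dynamic configuration: with its default Patroni first tries to restart a crashed primary on the same timeline, while 0 makes it fail over to a replica immediately, which is the difference between the two recovery scenarios above. Setting it is a one-line PATCH against /config; a sketch with the requests library, assuming the leader API on 127.0.0.1:8008 as in the log:

    import requests

    # 0 = do not wait for a crashed primary to come back; fail over at once.
    r = requests.patch("http://127.0.0.1:8008/config", json={"master_start_timeout": 0})
    print(r.status_code)

    # Confirm the value landed in the dynamic configuration.
    print(requests.get("http://127.0.0.1:8008/config").json().get("master_start_timeout"))
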
4073s Jul 30 23:37:32 Feature: standby cluster # features/standby_cluster.feature:1
4073s Jul 30 23:37:32
4073s Jul 30 23:37:32 Scenario: prepare the cluster with logical slots # features/standby_cluster.feature:2
4073s Jul 30 23:37:32 Given I start postgres1 # features/steps/basic_replication.py:8
4077s Jul 30 23:37:36 Then postgres1 is a leader after 10 seconds # features/steps/patroni_api.py:29
4077s Jul 30 23:37:36 And there is a non empty initialize key in DCS after 15 seconds # features/steps/cascading_replication.py:41
4077s Jul 30 23:37:36 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"pm_1": {"type": "physical"}}, "postgresql": {"parameters": {"wal_level": "logical"}}} # features/steps/patroni_api.py:71
4077s Jul 30 23:37:36 Then I receive a response code 200 # features/steps/patroni_api.py:98
4077s Jul 30 23:37:36 And Response on GET http://127.0.0.1:8009/config contains slots after 10 seconds # features/steps/patroni_api.py:156
4077s Jul 30 23:37:36 And I sleep for 3 seconds # features/steps/patroni_api.py:39
4080s Jul 30 23:37:39 When I issue a PATCH request to http://127.0.0.1:8009/config with {"slots": {"test_logical": {"type": "logical", "database": "postgres", "plugin": "test_decoding"}}} # features/steps/patroni_api.py:71
4080s Jul 30 23:37:39 Then I receive a response code 200 # features/steps/patroni_api.py:98
4080s Jul 30 23:37:39 And I do a backup of postgres1 # features/steps/custom_bootstrap.py:25
4081s Jul 30 23:37:40 When I start postgres0 # features/steps/basic_replication.py:8
4085s Jul 30 23:37:44 Then "members/postgres0" key in DCS has state=running after 10 seconds # features/steps/cascading_replication.py:23
4086s Jul 30 23:37:45 And replication works from postgres1 to postgres0 after 15 seconds # features/steps/basic_replication.py:112
4087s Jul 30 23:37:46 When I issue a GET request to http://127.0.0.1:8008/patroni # features/steps/patroni_api.py:61
4087s Jul 30 23:37:46 Then I receive a response code 200 # features/steps/patroni_api.py:98
4087s Jul 30 23:37:46 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
4087s Jul 30 23:37:46 And "members/postgres0" key in DCS has replication_state=streaming after 10 seconds # features/steps/cascading_replication.py:23
4087s Jul 30 23:37:46
4087s Jul 30 23:37:46 @slot-advance
4087s Jul 30 23:37:46 Scenario: check permanent logical slots are synced to the replica # features/standby_cluster.feature:22
4087s Jul 30 23:37:46 Given I run patronictl.py restart batman postgres1 --force # features/steps/patroni_api.py:86
4090s Jul 30 23:37:49 Then Logical slot test_logical is in sync between postgres0 and postgres1 after 10 seconds # features/steps/slots.py:51
4095s Jul 30 23:37:54
4095s Jul 30 23:37:54 Scenario: Detach exiting node from the cluster # features/standby_cluster.feature:26
4095s Jul 30 23:37:54 When I shut down postgres1 # features/steps/basic_replication.py:29
4097s Jul 30 23:37:56 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
4097s Jul 30 23:37:56 And "members/postgres0" key in DCS has role=master after 5 seconds # features/steps/cascading_replication.py:23
4098s Jul 30 23:37:57 When I issue a GET request to http://127.0.0.1:8008/ # features/steps/patroni_api.py:61
4098s Jul 30 23:37:57 Then I receive a response code 200 # features/steps/patroni_api.py:98
4098s Jul 30 23:37:57
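
The remaining standby-cluster scenarios start a second cluster, batman1, whose leader replicates from the primary cluster instead of promoting; the behave step configures this through the standby_cluster section of the new cluster's bootstrap configuration. A sketch of such a fragment generated with PyYAML; the keys follow Patroni's documented standby_cluster options, but the host, port and replica method values here are illustrative rather than read from the log:

    import yaml

    bootstrap_fragment = {
        "bootstrap": {
            "dcs": {
                "standby_cluster": {
                    "host": "127.0.0.1",          # remote primary to follow (illustrative)
                    "port": 5432,                  # remote primary port (illustrative)
                    "create_replica_methods": ["basebackup"],
                }
            }
        }
    }
    print(yaml.safe_dump(bootstrap_fragment, default_flow_style=False))
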
4098s Jul 30 23:37:57 Scenario: check replication of a single table in a standby cluster # features/standby_cluster.feature:33
4098s Jul 30 23:37:57 Given I start postgres1 in a standby cluster batman1 as a clone of postgres0 # features/steps/standby_cluster.py:23
4101s Jul 30 23:38:00 Then postgres1 is a leader of batman1 after 10 seconds # features/steps/custom_bootstrap.py:16
4104s Jul 30 23:38:03 When I add the table foo to postgres0 # features/steps/basic_replication.py:54
4104s Jul 30 23:38:03 Then table foo is present on postgres1 after 20 seconds # features/steps/basic_replication.py:93
4104s Jul 30 23:38:03 When I issue a GET request to http://127.0.0.1:8009/patroni # features/steps/patroni_api.py:61
4104s Jul 30 23:38:03 Then I receive a response code 200 # features/steps/patroni_api.py:98
4104s Jul 30 23:38:03 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
4104s Jul 30 23:38:03 And I sleep for 3 seconds # features/steps/patroni_api.py:39
4107s Jul 30 23:38:06 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
4108s Jul 30 23:38:06 Then I receive a response code 503 # features/steps/patroni_api.py:98
4108s Jul 30 23:38:06 When I issue a GET request to http://127.0.0.1:8009/standby_leader # features/steps/patroni_api.py:61
4108s Jul 30 23:38:07 Then I receive a response code 200 # features/steps/patroni_api.py:98
4108s Jul 30 23:38:07 And I receive a response role standby_leader # features/steps/patroni_api.py:98
4108s Jul 30 23:38:07 And there is a postgres1_cb.log with "on_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
4108s Jul 30 23:38:07 When I start postgres2 in a cluster batman1 # features/steps/standby_cluster.py:12
4112s Jul 30 23:38:11 Then postgres2 role is the replica after 24 seconds # features/steps/basic_replication.py:105
4112s Jul 30 23:38:11 And postgres2 is replicating from postgres1 after 10 seconds # features/steps/standby_cluster.py:52
4112s Jul 30 23:38:11 And table foo is present on postgres2 after 20 seconds # features/steps/basic_replication.py:93
4112s Jul 30 23:38:11 When I issue a GET request to http://127.0.0.1:8010/patroni # features/steps/patroni_api.py:61
4112s Jul 30 23:38:11 Then I receive a response code 200 # features/steps/patroni_api.py:98
4112s Jul 30 23:38:11 And I receive a response replication_state streaming # features/steps/patroni_api.py:98
4112s Jul 30 23:38:11 And postgres1 does not have a replication slot named test_logical # features/steps/slots.py:40
4112s Jul 30 23:38:11
4112s Jul 30 23:38:11 Scenario: check switchover # features/standby_cluster.feature:57
4112s Jul 30 23:38:11 Given I run patronictl.py switchover batman1 --force # features/steps/patroni_api.py:86
4116s Jul 30 23:38:15 Then Status code on GET http://127.0.0.1:8010/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
4116s Jul 30 23:38:15 And postgres1 is replicating from postgres2 after 32 seconds # features/steps/standby_cluster.py:52
4118s Jul 30 23:38:17 And there is a postgres2_cb.log with "on_start replica batman1\non_role_change standby_leader batman1" in postgres2 data directory # features/steps/cascading_replication.py:12
4118s Jul 30 23:38:17
4118s Jul 30 23:38:17 Scenario: check failover # features/standby_cluster.feature:63
4118s Jul 30 23:38:17 When I kill postgres2 # features/steps/basic_replication.py:34
4119s Jul 30 23:38:18 And I kill postmaster on postgres2 # features/steps/basic_replication.py:44
4119s Jul 30 23:38:18 waiting for server to shut down.... done
4119s Jul 30 23:38:18 server stopped
4119s Jul 30 23:38:18 Then postgres1 is replicating from postgres0 after 32 seconds # features/steps/standby_cluster.py:52
4138s Jul 30 23:38:37 And Status code on GET http://127.0.0.1:8009/standby_leader is 200 after 10 seconds # features/steps/patroni_api.py:142
4138s Jul 30 23:38:37 When I issue a GET request to http://127.0.0.1:8009/primary # features/steps/patroni_api.py:61
4138s Jul 30 23:38:37 Then I receive a response code 503 # features/steps/patroni_api.py:98
4138s Jul 30 23:38:37 And I receive a response role standby_leader # features/steps/patroni_api.py:98
4138s Jul 30 23:38:37 And replication works from postgres0 to postgres1 after 15 seconds # features/steps/basic_replication.py:112
4139s Jul 30 23:38:38 And there is a postgres1_cb.log with "on_role_change replica batman1\non_role_change standby_leader batman1" in postgres1 data directory # features/steps/cascading_replication.py:12
4154s Jul 30 23:38:53
4154s Jul 30 23:38:53 Feature: watchdog # features/watchdog.feature:1
4154s Jul 30 23:38:53 Verify that watchdog gets pinged and triggered under appropriate circumstances.
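
The watchdog scenarios that follow exercise Patroni's optional software-watchdog support: while a member holds the leader lock it keeps the watchdog device pinged, and the device timeout tracks the cluster ttl minus a safety margin, which is why the log shows a 15 second timeout with ttl=20 and a 25 second timeout after ttl is raised to 30. Watchdog support is configured per member; a sketch of the configuration fragment with PyYAML, using keys from Patroni's documented watchdog section (the device path and mode here are illustrative, and the test suite substitutes a mocked device):

    import yaml

    watchdog_fragment = {
        "watchdog": {
            "mode": "required",         # "automatic" tolerates a missing device, "required" does not
            "device": "/dev/watchdog",  # path illustrative; the tests use a mocked device
            "safety_margin": 5,         # timeout is derived from ttl minus this margin (20 -> 15, 30 -> 25)
        }
    }
    print(yaml.safe_dump(watchdog_fragment, default_flow_style=False))
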
4154s Jul 30 23:38:53 Scenario: watchdog is opened and pinged # features/watchdog.feature:4
4154s Jul 30 23:38:53 Given I start postgres0 with watchdog # features/steps/watchdog.py:16
4163s Jul 30 23:39:02 Then postgres0 is a leader after 10 seconds # features/steps/patroni_api.py:29
4164s Jul 30 23:39:03 And postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4164s Jul 30 23:39:03 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
4164s Jul 30 23:39:03 And postgres0 watchdog has a 15 second timeout # features/steps/watchdog.py:34
4164s Jul 30 23:39:03
4164s Jul 30 23:39:03 Scenario: watchdog is reconfigured after global ttl changed # features/watchdog.feature:11
4164s Jul 30 23:39:03 Given I run patronictl.py edit-config batman -s ttl=30 --force # features/steps/patroni_api.py:86
4166s Jul 30 23:39:05 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
4166s Jul 30 23:39:05 And I receive a response output "+ttl: 30" # features/steps/patroni_api.py:98
4166s Jul 30 23:39:05 When I sleep for 4 seconds # features/steps/patroni_api.py:39
4170s Jul 30 23:39:09 Then postgres0 watchdog has a 25 second timeout # features/steps/watchdog.py:34
4170s Jul 30 23:39:09
4170s Jul 30 23:39:09 Scenario: watchdog is disabled during pause # features/watchdog.feature:18
4170s Jul 30 23:39:09 Given I run patronictl.py pause batman # features/steps/patroni_api.py:86
4172s Jul 30 23:39:11 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
4172s Jul 30 23:39:11 When I sleep for 2 seconds # features/steps/patroni_api.py:39
4174s Jul 30 23:39:13 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
4174s Jul 30 23:39:13
4174s Jul 30 23:39:13 Scenario: watchdog is opened and pinged after resume # features/watchdog.feature:24
4174s Jul 30 23:39:13 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
4174s Jul 30 23:39:13 And I run patronictl.py resume batman # features/steps/patroni_api.py:86
4176s Jul 30 23:39:15 Then I receive a response returncode 0 # features/steps/patroni_api.py:98
4176s Jul 30 23:39:15 And postgres0 watchdog has been pinged after 10 seconds # features/steps/watchdog.py:21
4176s Jul 30 23:39:15
4176s Jul 30 23:39:15 Scenario: watchdog is disabled when shutting down # features/watchdog.feature:30
4176s Jul 30 23:39:15 Given I shut down postgres0 # features/steps/basic_replication.py:29
4178s Jul 30 23:39:17 Then postgres0 watchdog has been closed # features/steps/watchdog.py:29
4178s Jul 30 23:39:17
4178s Jul 30 23:39:17 Scenario: watchdog is triggered if patroni stops responding # features/watchdog.feature:34
4178s Jul 30 23:39:17 Given I reset postgres0 watchdog state # features/steps/watchdog.py:39
4178s Jul 30 23:39:17 And I start postgres0 with watchdog # features/steps/watchdog.py:16
4181s Jul 30 23:39:20 Then postgres0 role is the primary after 10 seconds # features/steps/basic_replication.py:105
4183s Jul 30 23:39:22 When postgres0 hangs for 30 seconds # features/steps/watchdog.py:52
4183s Jul 30 23:39:22 Then postgres0 watchdog is triggered after 30 seconds # features/steps/watchdog.py:44
4211s Jul 30 23:39:50
4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4509.XCTjbWgx
4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4512.XpsDZRyx
4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4557.XaHUBVFx
4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4604.XYrMsthx
4213s Jul 30 23:39:52
Combined data file .coverage.autopkgtest.4652.XHGosfWx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4696.XKArpNux 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4765.XLwfHlBx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4814.XjxuWSSx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4818.XUyrqNnx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.4907.XWMQheyx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5009.XkdZVXQx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5013.XNMfLkJx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5058.XlqgdTex 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5107.XMZqFjfx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5239.XPdFGLVx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.5243.XaBypocx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5246.XfAbDzNx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5292.XSYoUXfx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5351.XBPuZHkx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5438.XKPABUDx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5441.XDMbfWNx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5763.XbrWHtzx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5838.XUkYcDEx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.5892.XGzkVTZx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.6153.XLQIUUOx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6156.XtlgFRcx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6209.XEvuQRGx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6271.XsqxbHnx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6365.XLqxFrUx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6459.XcnFEHRx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6462.XIvAAIox 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6499.XylNuqIx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6571.XnLVJLCx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6605.XqzGjhnx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.6759.XOZbZDGx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6762.XPcWWwex 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6812.XkrjVHvx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6828.XnxBHOFx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6868.XTkhNrUx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6915.XIvAXjgx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6921.XlJNrftx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.6957.XTZKOVTx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7002.XNCiPPWx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7166.XlAOurrx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7169.XNoSSnxx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7176.XoLLBoyx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.7311.XLbHFKxx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7317.XdJDHfmx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7365.XzXMljmx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7413.XKSqXBdx 4213s Jul 30 
23:39:52 Combined data file .coverage.autopkgtest.7452.XoPJJsex 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7503.XWQbWcsx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.7668.XpPnEFpx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7671.XxAHhUmx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7705.XUNAHhZx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7787.XKFDkdYx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7869.XGvRkOEx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.7945.XSfgwlwx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.8274.Xfaoacux 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8277.XPsHVxDx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8321.XWWFUTTx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8464.XvaJyXex 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8467.XTJJDwPx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8529.XpoeHYZx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8582.XHaaUnzx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8687.XPjoWkYx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8801.XaPJlnwx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.8932.XScaWbhx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8936.Xdcdlxsx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8983.XCgErAdx 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.8986.XiuJIjrx 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.8990.XQuZZfex 4213s Jul 30 23:39:52 Combined data file .coverage.autopkgtest.9003.XqQTZYix 4213s Jul 30 23:39:52 Skipping duplicate data .coverage.autopkgtest.9068.XJcJCVrx 4215s Jul 30 23:39:54 Name Stmts Miss Cover 4215s Jul 30 23:39:54 ------------------------------------------------------------------------------------------------------------- 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/_distutils_hack/__init__.py 101 96 5% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/__about__.py 5 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/__init__.py 3 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/exceptions.py 26 5 81% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/fernet.py 137 54 61% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/__init__.py 2 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/_oid.py 126 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/__init__.py 5 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/__init__.py 3 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/aead.py 114 96 16% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/backend.py 397 257 35% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/backends/openssl/ciphers.py 125 50 60% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/__init__.py 0 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/__init__.py 0 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/_conditional.py 50 23 54% 4215s Jul 30 
23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/bindings/openssl/binding.py 62 12 81% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/__init__.py 0 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_asymmetric.py 6 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_cipheralgorithm.py 17 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/_serialization.py 79 35 56% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/__init__.py 0 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dh.py 47 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/dsa.py 55 5 91% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ec.py 164 17 90% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed448.py 45 12 73% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/ed25519.py 43 12 72% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/padding.py 55 23 58% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/rsa.py 90 38 58% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/types.py 19 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/utils.py 14 5 64% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x448.py 43 12 72% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/asymmetric/x25519.py 41 12 71% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/__init__.py 4 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/algorithms.py 129 30 77% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/base.py 140 59 58% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/ciphers/modes.py 139 50 64% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/constant_time.py 6 3 50% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hashes.py 127 20 84% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/hmac.py 6 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/kdf/__init__.py 7 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/kdf/pbkdf2.py 27 5 81% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/padding.py 117 27 77% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/__init__.py 5 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/base.py 7 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/pkcs12.py 82 49 40% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/hazmat/primitives/serialization/ssh.py 758 602 21% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/utils.py 77 23 70% 4215s Jul 
30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/__init__.py 70 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/base.py 487 229 53% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/certificate_transparency.py 42 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/extensions.py 1038 569 45% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/general_name.py 166 94 43% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/name.py 232 141 39% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/oid.py 3 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/cryptography/x509/verification.py 10 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/__init__.py 13 4 69% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/_common.py 25 15 40% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/_version.py 11 2 82% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/parser/__init__.py 33 4 88% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/parser/_parser.py 813 436 46% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/parser/isoparser.py 185 150 19% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/relativedelta.py 241 206 15% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/tz/__init__.py 4 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/tz/_common.py 161 121 25% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/tz/_factories.py 49 21 57% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/tz/tz.py 800 626 22% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/dateutil/tz/win.py 153 149 3% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/__init__.py 13 2 85% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/__main__.py 199 65 67% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/api.py 770 288 63% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/async_executor.py 96 15 84% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/collections.py 56 6 89% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/config.py 371 98 74% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/config_generator.py 212 159 25% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/daemon.py 76 3 96% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/dcs/__init__.py 646 83 87% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/dcs/raft.py 319 39 88% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/dynamic_loader.py 35 7 80% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/exceptions.py 16 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/file_perm.py 43 8 81% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/global_config.py 81 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/ha.py 1244 309 75% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/log.py 219 69 68% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/__init__.py 821 170 79% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/available_parameters/__init__.py 21 1 95% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/bootstrap.py 252 62 75% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/callback_executor.py 55 8 
85% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/cancellable.py 104 41 61% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/config.py 813 216 73% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/connection.py 75 1 99% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/misc.py 41 8 80% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/mpp/__init__.py 89 11 88% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/postmaster.py 170 82 52% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/rewind.py 416 163 61% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/slots.py 334 34 90% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/sync.py 130 19 85% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/postgresql/validator.py 157 23 85% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/psycopg.py 42 16 62% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/request.py 62 6 90% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/tags.py 38 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/utils.py 350 123 65% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/validator.py 301 208 31% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/version.py 1 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/watchdog/__init__.py 2 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/watchdog/base.py 203 42 79% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/patroni/watchdog/linux.py 135 35 74% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psutil/__init__.py 951 624 34% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psutil/_common.py 424 207 51% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psutil/_compat.py 302 263 13% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psutil/_pslinux.py 1251 915 27% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psutil/_psposix.py 96 38 60% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psycopg2/__init__.py 19 3 84% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psycopg2/_json.py 64 27 58% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psycopg2/_range.py 269 172 36% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psycopg2/errors.py 3 2 33% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/psycopg2/extensions.py 91 25 73% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/__init__.py 2 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/atomic_replace.py 4 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/config.py 80 1 99% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/dns_resolver.py 51 10 80% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/encryptor.py 17 2 88% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/fast_queue.py 21 1 95% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/journal.py 193 37 81% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/monotonic.py 77 70 9% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/node.py 49 10 80% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/pickle.py 52 32 38% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/pipe_notifier.py 24 2 92% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/poller.py 87 
41 53% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/serializer.py 166 133 20% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/syncobj.py 1045 392 62% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/tcp_connection.py 250 40 84% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/tcp_server.py 56 12 79% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/transport.py 266 57 79% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/utility.py 59 7 88% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/version.py 1 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/pysyncobj/win_inet_pton.py 44 31 30% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/six.py 504 250 50% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/__init__.py 50 14 72% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/_base_connection.py 70 52 26% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/_collections.py 234 108 54% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/_request_methods.py 53 15 72% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/_version.py 2 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/connection.py 324 104 68% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/connectionpool.py 347 136 61% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/exceptions.py 115 37 68% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/fields.py 92 73 21% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/filepost.py 37 24 35% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/poolmanager.py 233 88 62% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/response.py 562 336 40% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/__init__.py 10 0 100% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/connection.py 66 9 86% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/proxy.py 13 6 54% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/request.py 104 49 53% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/response.py 32 17 47% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/retry.py 173 49 72% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/ssl_.py 177 75 58% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/ssl_match_hostname.py 66 54 18% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/ssltransport.py 160 112 30% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/timeout.py 71 19 73% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/url.py 205 78 62% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/util.py 26 9 65% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/urllib3/util/wait.py 49 38 22% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/__init__.py 165 109 34% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/composer.py 92 17 82% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/constructor.py 479 276 42% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/cyaml.py 46 24 48% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/dumper.py 23 12 48% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/emitter.py 838 769 8% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/error.py 58 42 28% 4215s Jul 30 23:39:54 
/usr/lib/python3/dist-packages/yaml/events.py 61 6 90% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/loader.py 47 24 49% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/nodes.py 29 7 76% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/parser.py 352 180 49% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/reader.py 122 30 75% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/representer.py 248 176 29% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/resolver.py 135 76 44% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/scanner.py 758 415 45% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/serializer.py 85 70 18% 4215s Jul 30 23:39:54 /usr/lib/python3/dist-packages/yaml/tokens.py 76 17 78% 4215s Jul 30 23:39:54 patroni/__init__.py 13 2 85% 4215s Jul 30 23:39:54 patroni/__main__.py 199 199 0% 4215s Jul 30 23:39:54 patroni/api.py 770 770 0% 4215s Jul 30 23:39:54 patroni/async_executor.py 96 69 28% 4215s Jul 30 23:39:54 patroni/collections.py 56 15 73% 4215s Jul 30 23:39:54 patroni/config.py 371 189 49% 4215s Jul 30 23:39:54 patroni/config_generator.py 212 212 0% 4215s Jul 30 23:39:54 patroni/ctl.py 936 411 56% 4215s Jul 30 23:39:54 patroni/daemon.py 76 6 92% 4215s Jul 30 23:39:54 patroni/dcs/__init__.py 646 268 59% 4215s Jul 30 23:39:54 patroni/dcs/consul.py 485 485 0% 4215s Jul 30 23:39:54 patroni/dcs/etcd3.py 679 679 0% 4215s Jul 30 23:39:54 patroni/dcs/etcd.py 603 603 0% 4215s Jul 30 23:39:54 patroni/dcs/exhibitor.py 61 61 0% 4215s Jul 30 23:39:54 patroni/dcs/kubernetes.py 938 938 0% 4215s Jul 30 23:39:54 patroni/dcs/raft.py 319 73 77% 4215s Jul 30 23:39:54 patroni/dcs/zookeeper.py 288 288 0% 4215s Jul 30 23:39:54 patroni/dynamic_loader.py 35 7 80% 4215s Jul 30 23:39:54 patroni/exceptions.py 16 1 94% 4215s Jul 30 23:39:54 patroni/file_perm.py 43 15 65% 4215s Jul 30 23:39:54 patroni/global_config.py 81 18 78% 4215s Jul 30 23:39:54 patroni/ha.py 1244 1244 0% 4215s Jul 30 23:39:54 patroni/log.py 219 93 58% 4215s Jul 30 23:39:54 patroni/postgresql/__init__.py 821 651 21% 4215s Jul 30 23:39:54 patroni/postgresql/available_parameters/__init__.py 21 1 95% 4215s Jul 30 23:39:54 patroni/postgresql/bootstrap.py 252 222 12% 4215s Jul 30 23:39:54 patroni/postgresql/callback_executor.py 55 34 38% 4215s Jul 30 23:39:54 patroni/postgresql/cancellable.py 104 84 19% 4215s Jul 30 23:39:54 patroni/postgresql/config.py 813 698 14% 4215s Jul 30 23:39:54 patroni/postgresql/connection.py 75 50 33% 4215s Jul 30 23:39:54 patroni/postgresql/misc.py 41 29 29% 4215s Jul 30 23:39:54 patroni/postgresql/mpp/__init__.py 89 21 76% 4215s Jul 30 23:39:54 patroni/postgresql/mpp/citus.py 259 259 0% 4215s Jul 30 23:39:54 patroni/postgresql/postmaster.py 170 139 18% 4215s Jul 30 23:39:54 patroni/postgresql/rewind.py 416 416 0% 4215s Jul 30 23:39:54 patroni/postgresql/slots.py 334 285 15% 4215s Jul 30 23:39:54 patroni/postgresql/sync.py 130 96 26% 4215s Jul 30 23:39:54 patroni/postgresql/validator.py 157 52 67% 4215s Jul 30 23:39:54 patroni/psycopg.py 42 28 33% 4215s Jul 30 23:39:54 patroni/raft_controller.py 22 1 95% 4215s Jul 30 23:39:54 patroni/request.py 62 6 90% 4215s Jul 30 23:39:54 patroni/scripts/__init__.py 0 0 100% 4215s Jul 30 23:39:54 patroni/scripts/aws.py 59 59 0% 4215s Jul 30 23:39:54 patroni/scripts/barman/__init__.py 0 0 100% 4215s Jul 30 23:39:54 patroni/scripts/barman/cli.py 51 51 0% 4215s Jul 30 23:39:54 patroni/scripts/barman/config_switch.py 51 51 0% 4215s Jul 30 23:39:54 patroni/scripts/barman/recover.py 37 37 0% 4215s Jul 30 
23:39:54 patroni/scripts/barman/utils.py 94 94 0% 4215s Jul 30 23:39:54 patroni/scripts/wale_restore.py 207 207 0% 4215s Jul 30 23:39:54 patroni/tags.py 38 11 71% 4215s Jul 30 23:39:54 patroni/utils.py 350 215 39% 4215s Jul 30 23:39:54 patroni/validator.py 301 215 29% 4215s Jul 30 23:39:54 patroni/version.py 1 0 100% 4215s Jul 30 23:39:54 patroni/watchdog/__init__.py 2 2 0% 4215s Jul 30 23:39:54 patroni/watchdog/base.py 203 203 0% 4215s Jul 30 23:39:54 patroni/watchdog/linux.py 135 135 0% 4215s Jul 30 23:39:54 ------------------------------------------------------------------------------------------------------------- 4215s Jul 30 23:39:54 TOTAL 44230 24973 44% 4215s Jul 30 23:39:54 12 features passed, 0 failed, 1 skipped 4215s Jul 30 23:39:54 54 scenarios passed, 0 failed, 6 skipped 4215s Jul 30 23:39:54 522 steps passed, 0 failed, 63 skipped, 0 undefined 4215s Jul 30 23:39:54 Took 9m40.036s 4215s ### End 16 acceptance-raft ### 4215s + echo '### End 16 acceptance-raft ###' 4215s + rm -f '/tmp/pgpass?' 4215s ++ id -u 4215s + '[' 1000 -eq 0 ']' 4215s autopkgtest [23:39:54]: test acceptance-raft: -----------------------] 4217s acceptance-raft PASS 4217s autopkgtest [23:39:56]: test acceptance-raft: - - - - - - - - - - results - - - - - - - - - - 4218s autopkgtest [23:39:57]: test test: preparing testbed 4363s autopkgtest [23:42:22]: testbed dpkg architecture: s390x 4364s autopkgtest [23:42:23]: testbed apt version: 2.9.6 4364s autopkgtest [23:42:23]: @@@@@@@@@@@@@@@@@@@@ test bed setup 4365s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [126 kB] 4365s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [8548 B] 4366s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [509 kB] 4366s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [52.0 kB] 4366s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [6372 B] 4366s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x Packages [73.3 kB] 4366s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x c-n-f Metadata [2112 B] 4366s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x Packages [1368 B] 4366s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/restricted s390x c-n-f Metadata [120 B] 4366s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x Packages [426 kB] 4367s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/universe s390x c-n-f Metadata [8372 B] 4367s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x Packages [4256 B] 4367s Get:13 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse s390x c-n-f Metadata [120 B] 4367s Fetched 1217 kB in 2s (556 kB/s) 4367s Reading package lists... 4371s Reading package lists... 4371s Building dependency tree... 4371s Reading state information... 4372s Calculating upgrade... 4372s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4372s Reading package lists... 4372s Building dependency tree... 4372s Reading state information... 4372s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4373s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 4373s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 4373s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 4373s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 4374s Reading package lists... 4374s Reading package lists... 
4374s Building dependency tree... 4374s Reading state information... 4374s Calculating upgrade... 4375s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4375s Reading package lists... 4375s Building dependency tree... 4375s Reading state information... 4375s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4379s Reading package lists... 4380s Building dependency tree... 4380s Reading state information... 4380s Starting pkgProblemResolver with broken count: 0 4380s Starting 2 pkgProblemResolver with broken count: 0 4380s Done 4380s The following additional packages will be installed: 4380s fonts-font-awesome fonts-lato libcares2 libev4t64 libjs-jquery 4380s libjs-jquery-hotkeys libjs-jquery-isonscreen libjs-jquery-metadata 4380s libjs-jquery-tablesorter libjs-jquery-throttle-debounce libjs-sphinxdoc 4380s libjs-underscore libpq5 patroni patroni-doc python3-aiohttp 4380s python3-aiosignal python3-async-timeout python3-boto3 python3-botocore 4380s python3-cachetools python3-cdiff python3-click python3-colorama 4380s python3-consul python3-coverage python3-dateutil python3-dnspython 4380s python3-etcd python3-eventlet python3-flake8 python3-frozenlist 4380s python3-gevent python3-google-auth python3-greenlet python3-iniconfig 4380s python3-jmespath python3-kazoo python3-kerberos python3-kubernetes 4380s python3-mccabe python3-mock python3-multidict python3-packaging 4380s python3-pluggy python3-prettytable python3-psutil python3-psycopg2 4380s python3-pure-sasl python3-pyasn1 python3-pyasn1-modules python3-pycodestyle 4380s python3-pyflakes python3-pysyncobj python3-pytest python3-pytest-cov 4380s python3-pyu2f python3-requests-oauthlib python3-responses python3-rsa 4380s python3-s3transfer python3-six python3-wcwidth python3-websocket 4380s python3-yarl python3-zope.event python3-zope.interface 4380s sphinx-rtd-theme-common 4380s Suggested packages: 4380s postgresql etcd-server | consul | zookeeperd vip-manager haproxy 4380s python3-tornado python3-twisted python-coverage-doc python3-trio 4380s python3-aioquic python3-h2 python3-httpx python3-httpcore etcd 4380s python-eventlet-doc python-gevent-doc python-greenlet-dev 4380s python-greenlet-doc python-kazoo-doc python-mock-doc python-psycopg2-doc 4380s Recommended packages: 4380s javascript-common python3-aiodns pyflakes3 4380s The following NEW packages will be installed: 4380s autopkgtest-satdep fonts-font-awesome fonts-lato libcares2 libev4t64 4380s libjs-jquery libjs-jquery-hotkeys libjs-jquery-isonscreen 4380s libjs-jquery-metadata libjs-jquery-tablesorter 4380s libjs-jquery-throttle-debounce libjs-sphinxdoc libjs-underscore libpq5 4380s patroni patroni-doc python3-aiohttp python3-aiosignal python3-async-timeout 4380s python3-boto3 python3-botocore python3-cachetools python3-cdiff 4380s python3-click python3-colorama python3-consul python3-coverage 4380s python3-dateutil python3-dnspython python3-etcd python3-eventlet 4380s python3-flake8 python3-frozenlist python3-gevent python3-google-auth 4380s python3-greenlet python3-iniconfig python3-jmespath python3-kazoo 4380s python3-kerberos python3-kubernetes python3-mccabe python3-mock 4380s python3-multidict python3-packaging python3-pluggy python3-prettytable 4380s python3-psutil python3-psycopg2 python3-pure-sasl python3-pyasn1 4380s python3-pyasn1-modules python3-pycodestyle python3-pyflakes 4380s python3-pysyncobj python3-pytest python3-pytest-cov python3-pyu2f 4380s python3-requests-oauthlib python3-responses python3-rsa python3-s3transfer 4380s python3-six 
python3-wcwidth python3-websocket python3-yarl 4380s python3-zope.event python3-zope.interface sphinx-rtd-theme-common 4380s 0 upgraded, 69 newly installed, 0 to remove and 0 not upgraded. 4380s Need to get 16.9 MB/16.9 MB of archives. 4380s After this operation, 157 MB of additional disk space will be used. 4380s Get:1 /tmp/autopkgtest.qFf46z/6-autopkgtest-satdep.deb autopkgtest-satdep s390x 0 [784 B] 4380s Get:2 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-lato all 2.015-1 [2781 kB] 4384s Get:3 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 4385s Get:4 http://ftpmaster.internal/ubuntu oracular/universe s390x libjs-jquery-hotkeys all 0~20130707+git2d51e3a9+dfsg-2.1 [11.5 kB] 4385s Get:5 http://ftpmaster.internal/ubuntu oracular/main s390x fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 4386s Get:6 http://ftpmaster.internal/ubuntu oracular/main s390x libcares2 s390x 1.32.3-1 [85.4 kB] 4386s Get:7 http://ftpmaster.internal/ubuntu oracular/universe s390x libev4t64 s390x 1:4.33-2.1build1 [32.0 kB] 4386s Get:8 http://ftpmaster.internal/ubuntu oracular/universe s390x libjs-jquery-metadata all 12-4 [6582 B] 4386s Get:9 http://ftpmaster.internal/ubuntu oracular/universe s390x libjs-jquery-tablesorter all 1:2.31.3+dfsg1-4 [192 kB] 4386s Get:10 http://ftpmaster.internal/ubuntu oracular/universe s390x libjs-jquery-throttle-debounce all 1.1+dfsg.1-2 [12.5 kB] 4386s Get:11 http://ftpmaster.internal/ubuntu oracular/main s390x libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 4386s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/main s390x libjs-sphinxdoc all 7.3.7-4 [154 kB] 4386s Get:13 http://ftpmaster.internal/ubuntu oracular/main s390x libpq5 s390x 16.3-1 [144 kB] 4387s Get:14 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-cdiff all 1.0-1.1 [16.4 kB] 4387s Get:15 http://ftpmaster.internal/ubuntu oracular/main s390x python3-colorama all 0.4.6-4 [32.1 kB] 4387s Get:16 http://ftpmaster.internal/ubuntu oracular/main s390x python3-click all 8.1.7-2 [79.5 kB] 4387s Get:17 http://ftpmaster.internal/ubuntu oracular/main s390x python3-six all 1.16.0-6 [13.0 kB] 4387s Get:18 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dateutil all 2.9.0-2 [80.3 kB] 4387s Get:19 http://ftpmaster.internal/ubuntu oracular/main s390x python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 4387s Get:20 http://ftpmaster.internal/ubuntu oracular/main s390x python3-prettytable all 3.10.1-1 [34.0 kB] 4387s Get:21 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psutil s390x 5.9.8-2build2 [195 kB] 4387s Get:22 http://ftpmaster.internal/ubuntu oracular/main s390x python3-psycopg2 s390x 2.9.9-1build1 [133 kB] 4387s Get:23 http://ftpmaster.internal/ubuntu oracular/main s390x python3-dnspython all 2.6.1-1ubuntu1 [163 kB] 4387s Get:24 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-etcd all 0.4.5-4 [31.9 kB] 4387s Get:25 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-consul all 0.7.1-2 [21.6 kB] 4388s Get:26 http://ftpmaster.internal/ubuntu oracular/main s390x python3-greenlet s390x 3.0.3-0ubuntu5 [156 kB] 4388s Get:27 http://ftpmaster.internal/ubuntu oracular/main s390x python3-eventlet all 0.35.2-0ubuntu1 [274 kB] 4388s Get:28 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-zope.event all 5.0-0.1 [7512 B] 4388s Get:29 http://ftpmaster.internal/ubuntu oracular/main s390x python3-zope.interface s390x 6.4-1 [137 kB] 4388s Get:30 
http://ftpmaster.internal/ubuntu oracular/universe s390x python3-gevent s390x 24.2.1-1 [835 kB] 4389s Get:31 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-kerberos s390x 1.1.14-3.1build9 [21.4 kB] 4389s Get:32 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pure-sasl all 0.5.1+dfsg1-4 [11.4 kB] 4389s Get:33 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-kazoo all 2.9.0-2 [103 kB] 4389s Get:34 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-multidict s390x 6.0.4-1.1build1 [33.5 kB] 4389s Get:35 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-yarl s390x 1.9.4-1 [72.8 kB] 4389s Get:36 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-async-timeout all 4.0.3-1 [6412 B] 4389s Get:37 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-frozenlist s390x 1.4.1-1 [49.1 kB] 4390s Get:38 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-aiosignal all 1.3.1-1 [5172 B] 4390s Get:39 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-aiohttp s390x 3.9.5-1 [294 kB] 4390s Get:40 http://ftpmaster.internal/ubuntu oracular/main s390x python3-cachetools all 5.3.3-1 [10.3 kB] 4390s Get:41 http://ftpmaster.internal/ubuntu oracular/main s390x python3-pyasn1 all 0.5.1-1 [57.4 kB] 4390s Get:42 http://ftpmaster.internal/ubuntu oracular/main s390x python3-pyasn1-modules all 0.3.0-1 [80.2 kB] 4390s Get:43 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pyu2f all 0.1.5-2 [22.8 kB] 4390s Get:44 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-responses all 0.25.3-1 [54.3 kB] 4390s Get:45 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-rsa all 4.9-2 [28.2 kB] 4390s Get:46 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-google-auth all 2.28.2-3 [91.0 kB] 4390s Get:47 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-requests-oauthlib all 1.3.1-1 [18.8 kB] 4390s Get:48 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-websocket all 1.7.0-1 [38.1 kB] 4390s Get:49 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-kubernetes all 30.1.0-1 [386 kB] 4391s Get:50 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pysyncobj all 0.3.12-1 [38.9 kB] 4391s Get:51 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni all 3.3.1-1 [264 kB] 4391s Get:52 http://ftpmaster.internal/ubuntu oracular/main s390x sphinx-rtd-theme-common all 2.0.0+dfsg-2 [1012 kB] 4393s Get:53 http://ftpmaster.internal/ubuntu oracular/universe s390x patroni-doc all 3.3.1-1 [497 kB] 4393s Get:54 http://ftpmaster.internal/ubuntu oracular/main s390x python3-jmespath all 1.0.1-1 [21.3 kB] 4393s Get:55 http://ftpmaster.internal/ubuntu oracular/main s390x python3-botocore all 1.34.46+repack-1ubuntu1 [6211 kB] 4401s Get:56 http://ftpmaster.internal/ubuntu oracular/main s390x python3-s3transfer all 0.10.1-1ubuntu2 [54.3 kB] 4401s Get:57 http://ftpmaster.internal/ubuntu oracular/main s390x python3-boto3 all 1.34.46+dfsg-1ubuntu1 [72.5 kB] 4401s Get:58 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-coverage s390x 7.4.4+dfsg1-0ubuntu2 [147 kB] 4401s Get:59 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-mccabe all 0.7.0-1 [8678 B] 4401s Get:60 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pycodestyle all 2.11.1-1 [29.9 kB] 4401s Get:61 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pyflakes all 3.2.0-1 [52.8 kB] 4401s Get:62 
http://ftpmaster.internal/ubuntu oracular/universe s390x python3-flake8 all 7.1.0-1 [43.8 kB] 4401s Get:63 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-iniconfig all 1.1.1-2 [6024 B] 4401s Get:64 http://ftpmaster.internal/ubuntu oracular/main s390x python3-packaging all 24.1-1 [41.4 kB] 4401s Get:65 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pluggy all 1.5.0-1 [21.0 kB] 4401s Get:66 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pytest all 7.4.4-1 [305 kB] 4402s Get:67 http://ftpmaster.internal/ubuntu oracular/universe s390x libjs-jquery-isonscreen all 1.2.0-1.1 [3244 B] 4402s Get:68 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-pytest-cov all 5.0.0-1 [21.3 kB] 4402s Get:69 http://ftpmaster.internal/ubuntu oracular/universe s390x python3-mock all 5.1.0-1 [64.1 kB] 4402s Fetched 16.9 MB in 22s (776 kB/s) 4402s Selecting previously unselected package fonts-lato. 4402s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 54832 files and directories currently installed.) 4402s Preparing to unpack .../00-fonts-lato_2.015-1_all.deb ... 4402s Unpacking fonts-lato (2.015-1) ... 4403s Selecting previously unselected package libjs-jquery. 4403s Preparing to unpack .../01-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 4403s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 4403s Selecting previously unselected package libjs-jquery-hotkeys. 4403s Preparing to unpack .../02-libjs-jquery-hotkeys_0~20130707+git2d51e3a9+dfsg-2.1_all.deb ... 4403s Unpacking libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ... 4403s Selecting previously unselected package fonts-font-awesome. 4403s Preparing to unpack .../03-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 4403s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 4403s Selecting previously unselected package libcares2:s390x. 4403s Preparing to unpack .../04-libcares2_1.32.3-1_s390x.deb ... 4403s Unpacking libcares2:s390x (1.32.3-1) ... 4403s Selecting previously unselected package libev4t64:s390x. 4403s Preparing to unpack .../05-libev4t64_1%3a4.33-2.1build1_s390x.deb ... 4403s Unpacking libev4t64:s390x (1:4.33-2.1build1) ... 4403s Selecting previously unselected package libjs-jquery-metadata. 4403s Preparing to unpack .../06-libjs-jquery-metadata_12-4_all.deb ... 4403s Unpacking libjs-jquery-metadata (12-4) ... 4403s Selecting previously unselected package libjs-jquery-tablesorter. 4403s Preparing to unpack .../07-libjs-jquery-tablesorter_1%3a2.31.3+dfsg1-4_all.deb ... 4403s Unpacking libjs-jquery-tablesorter (1:2.31.3+dfsg1-4) ... 4403s Selecting previously unselected package libjs-jquery-throttle-debounce. 4403s Preparing to unpack .../08-libjs-jquery-throttle-debounce_1.1+dfsg.1-2_all.deb ... 4403s Unpacking libjs-jquery-throttle-debounce (1.1+dfsg.1-2) ... 4403s Selecting previously unselected package libjs-underscore. 4403s Preparing to unpack .../09-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 
4403s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 4403s Selecting previously unselected package libjs-sphinxdoc. 4403s Preparing to unpack .../10-libjs-sphinxdoc_7.3.7-4_all.deb ... 4403s Unpacking libjs-sphinxdoc (7.3.7-4) ... 4403s Selecting previously unselected package libpq5:s390x. 4403s Preparing to unpack .../11-libpq5_16.3-1_s390x.deb ... 4403s Unpacking libpq5:s390x (16.3-1) ... 4403s Selecting previously unselected package python3-cdiff. 4403s Preparing to unpack .../12-python3-cdiff_1.0-1.1_all.deb ... 4403s Unpacking python3-cdiff (1.0-1.1) ... 4403s Selecting previously unselected package python3-colorama. 4403s Preparing to unpack .../13-python3-colorama_0.4.6-4_all.deb ... 4403s Unpacking python3-colorama (0.4.6-4) ... 4403s Selecting previously unselected package python3-click. 4403s Preparing to unpack .../14-python3-click_8.1.7-2_all.deb ... 4403s Unpacking python3-click (8.1.7-2) ... 4403s Selecting previously unselected package python3-six. 4403s Preparing to unpack .../15-python3-six_1.16.0-6_all.deb ... 4403s Unpacking python3-six (1.16.0-6) ... 4403s Selecting previously unselected package python3-dateutil. 4403s Preparing to unpack .../16-python3-dateutil_2.9.0-2_all.deb ... 4403s Unpacking python3-dateutil (2.9.0-2) ... 4403s Selecting previously unselected package python3-wcwidth. 4403s Preparing to unpack .../17-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 4403s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 4403s Selecting previously unselected package python3-prettytable. 4403s Preparing to unpack .../18-python3-prettytable_3.10.1-1_all.deb ... 4403s Unpacking python3-prettytable (3.10.1-1) ... 4403s Selecting previously unselected package python3-psutil. 4403s Preparing to unpack .../19-python3-psutil_5.9.8-2build2_s390x.deb ... 4403s Unpacking python3-psutil (5.9.8-2build2) ... 4403s Selecting previously unselected package python3-psycopg2. 4403s Preparing to unpack .../20-python3-psycopg2_2.9.9-1build1_s390x.deb ... 4403s Unpacking python3-psycopg2 (2.9.9-1build1) ... 4403s Selecting previously unselected package python3-dnspython. 4403s Preparing to unpack .../21-python3-dnspython_2.6.1-1ubuntu1_all.deb ... 4403s Unpacking python3-dnspython (2.6.1-1ubuntu1) ... 4403s Selecting previously unselected package python3-etcd. 4403s Preparing to unpack .../22-python3-etcd_0.4.5-4_all.deb ... 4403s Unpacking python3-etcd (0.4.5-4) ... 4403s Selecting previously unselected package python3-consul. 4403s Preparing to unpack .../23-python3-consul_0.7.1-2_all.deb ... 4403s Unpacking python3-consul (0.7.1-2) ... 4403s Selecting previously unselected package python3-greenlet. 4403s Preparing to unpack .../24-python3-greenlet_3.0.3-0ubuntu5_s390x.deb ... 4403s Unpacking python3-greenlet (3.0.3-0ubuntu5) ... 4403s Selecting previously unselected package python3-eventlet. 4403s Preparing to unpack .../25-python3-eventlet_0.35.2-0ubuntu1_all.deb ... 4403s Unpacking python3-eventlet (0.35.2-0ubuntu1) ... 4403s Selecting previously unselected package python3-zope.event. 4403s Preparing to unpack .../26-python3-zope.event_5.0-0.1_all.deb ... 4403s Unpacking python3-zope.event (5.0-0.1) ... 4403s Selecting previously unselected package python3-zope.interface. 4403s Preparing to unpack .../27-python3-zope.interface_6.4-1_s390x.deb ... 4403s Unpacking python3-zope.interface (6.4-1) ... 4403s Selecting previously unselected package python3-gevent. 4403s Preparing to unpack .../28-python3-gevent_24.2.1-1_s390x.deb ... 
4403s Unpacking python3-gevent (24.2.1-1) ... 4403s Selecting previously unselected package python3-kerberos. 4403s Preparing to unpack .../29-python3-kerberos_1.1.14-3.1build9_s390x.deb ... 4403s Unpacking python3-kerberos (1.1.14-3.1build9) ... 4403s Selecting previously unselected package python3-pure-sasl. 4403s Preparing to unpack .../30-python3-pure-sasl_0.5.1+dfsg1-4_all.deb ... 4403s Unpacking python3-pure-sasl (0.5.1+dfsg1-4) ... 4403s Selecting previously unselected package python3-kazoo. 4403s Preparing to unpack .../31-python3-kazoo_2.9.0-2_all.deb ... 4403s Unpacking python3-kazoo (2.9.0-2) ... 4403s Selecting previously unselected package python3-multidict. 4403s Preparing to unpack .../32-python3-multidict_6.0.4-1.1build1_s390x.deb ... 4403s Unpacking python3-multidict (6.0.4-1.1build1) ... 4403s Selecting previously unselected package python3-yarl. 4404s Preparing to unpack .../33-python3-yarl_1.9.4-1_s390x.deb ... 4404s Unpacking python3-yarl (1.9.4-1) ... 4404s Selecting previously unselected package python3-async-timeout. 4404s Preparing to unpack .../34-python3-async-timeout_4.0.3-1_all.deb ... 4404s Unpacking python3-async-timeout (4.0.3-1) ... 4404s Selecting previously unselected package python3-frozenlist. 4404s Preparing to unpack .../35-python3-frozenlist_1.4.1-1_s390x.deb ... 4404s Unpacking python3-frozenlist (1.4.1-1) ... 4404s Selecting previously unselected package python3-aiosignal. 4404s Preparing to unpack .../36-python3-aiosignal_1.3.1-1_all.deb ... 4404s Unpacking python3-aiosignal (1.3.1-1) ... 4404s Selecting previously unselected package python3-aiohttp. 4404s Preparing to unpack .../37-python3-aiohttp_3.9.5-1_s390x.deb ... 4404s Unpacking python3-aiohttp (3.9.5-1) ... 4404s Selecting previously unselected package python3-cachetools. 4404s Preparing to unpack .../38-python3-cachetools_5.3.3-1_all.deb ... 4404s Unpacking python3-cachetools (5.3.3-1) ... 4404s Selecting previously unselected package python3-pyasn1. 4404s Preparing to unpack .../39-python3-pyasn1_0.5.1-1_all.deb ... 4404s Unpacking python3-pyasn1 (0.5.1-1) ... 4404s Selecting previously unselected package python3-pyasn1-modules. 4404s Preparing to unpack .../40-python3-pyasn1-modules_0.3.0-1_all.deb ... 4404s Unpacking python3-pyasn1-modules (0.3.0-1) ... 4404s Selecting previously unselected package python3-pyu2f. 4404s Preparing to unpack .../41-python3-pyu2f_0.1.5-2_all.deb ... 4404s Unpacking python3-pyu2f (0.1.5-2) ... 4404s Selecting previously unselected package python3-responses. 4404s Preparing to unpack .../42-python3-responses_0.25.3-1_all.deb ... 4404s Unpacking python3-responses (0.25.3-1) ... 4404s Selecting previously unselected package python3-rsa. 4404s Preparing to unpack .../43-python3-rsa_4.9-2_all.deb ... 4404s Unpacking python3-rsa (4.9-2) ... 4404s Selecting previously unselected package python3-google-auth. 4404s Preparing to unpack .../44-python3-google-auth_2.28.2-3_all.deb ... 4404s Unpacking python3-google-auth (2.28.2-3) ... 4404s Selecting previously unselected package python3-requests-oauthlib. 4404s Preparing to unpack .../45-python3-requests-oauthlib_1.3.1-1_all.deb ... 4404s Unpacking python3-requests-oauthlib (1.3.1-1) ... 4404s Selecting previously unselected package python3-websocket. 4404s Preparing to unpack .../46-python3-websocket_1.7.0-1_all.deb ... 4404s Unpacking python3-websocket (1.7.0-1) ... 4404s Selecting previously unselected package python3-kubernetes. 4404s Preparing to unpack .../47-python3-kubernetes_30.1.0-1_all.deb ... 
4404s Unpacking python3-kubernetes (30.1.0-1) ... 4404s Selecting previously unselected package python3-pysyncobj. 4404s Preparing to unpack .../48-python3-pysyncobj_0.3.12-1_all.deb ... 4404s Unpacking python3-pysyncobj (0.3.12-1) ... 4404s Selecting previously unselected package patroni. 4404s Preparing to unpack .../49-patroni_3.3.1-1_all.deb ... 4404s Unpacking patroni (3.3.1-1) ... 4404s Selecting previously unselected package sphinx-rtd-theme-common. 4404s Preparing to unpack .../50-sphinx-rtd-theme-common_2.0.0+dfsg-2_all.deb ... 4404s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 4404s Selecting previously unselected package patroni-doc. 4404s Preparing to unpack .../51-patroni-doc_3.3.1-1_all.deb ... 4404s Unpacking patroni-doc (3.3.1-1) ... 4404s Selecting previously unselected package python3-jmespath. 4404s Preparing to unpack .../52-python3-jmespath_1.0.1-1_all.deb ... 4404s Unpacking python3-jmespath (1.0.1-1) ... 4404s Selecting previously unselected package python3-botocore. 4404s Preparing to unpack .../53-python3-botocore_1.34.46+repack-1ubuntu1_all.deb ... 4404s Unpacking python3-botocore (1.34.46+repack-1ubuntu1) ... 4405s Selecting previously unselected package python3-s3transfer. 4405s Preparing to unpack .../54-python3-s3transfer_0.10.1-1ubuntu2_all.deb ... 4405s Unpacking python3-s3transfer (0.10.1-1ubuntu2) ... 4405s Selecting previously unselected package python3-boto3. 4405s Preparing to unpack .../55-python3-boto3_1.34.46+dfsg-1ubuntu1_all.deb ... 4405s Unpacking python3-boto3 (1.34.46+dfsg-1ubuntu1) ... 4405s Selecting previously unselected package python3-coverage. 4405s Preparing to unpack .../56-python3-coverage_7.4.4+dfsg1-0ubuntu2_s390x.deb ... 4405s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 4405s Selecting previously unselected package python3-mccabe. 4405s Preparing to unpack .../57-python3-mccabe_0.7.0-1_all.deb ... 4405s Unpacking python3-mccabe (0.7.0-1) ... 4405s Selecting previously unselected package python3-pycodestyle. 4405s Preparing to unpack .../58-python3-pycodestyle_2.11.1-1_all.deb ... 4405s Unpacking python3-pycodestyle (2.11.1-1) ... 4405s Selecting previously unselected package python3-pyflakes. 4405s Preparing to unpack .../59-python3-pyflakes_3.2.0-1_all.deb ... 4405s Unpacking python3-pyflakes (3.2.0-1) ... 4405s Selecting previously unselected package python3-flake8. 4405s Preparing to unpack .../60-python3-flake8_7.1.0-1_all.deb ... 4405s Unpacking python3-flake8 (7.1.0-1) ... 4405s Selecting previously unselected package python3-iniconfig. 4405s Preparing to unpack .../61-python3-iniconfig_1.1.1-2_all.deb ... 4405s Unpacking python3-iniconfig (1.1.1-2) ... 4405s Selecting previously unselected package python3-packaging. 4405s Preparing to unpack .../62-python3-packaging_24.1-1_all.deb ... 4405s Unpacking python3-packaging (24.1-1) ... 4405s Selecting previously unselected package python3-pluggy. 4405s Preparing to unpack .../63-python3-pluggy_1.5.0-1_all.deb ... 4405s Unpacking python3-pluggy (1.5.0-1) ... 4405s Selecting previously unselected package python3-pytest. 4405s Preparing to unpack .../64-python3-pytest_7.4.4-1_all.deb ... 4405s Unpacking python3-pytest (7.4.4-1) ... 4405s Selecting previously unselected package libjs-jquery-isonscreen. 4405s Preparing to unpack .../65-libjs-jquery-isonscreen_1.2.0-1.1_all.deb ... 4405s Unpacking libjs-jquery-isonscreen (1.2.0-1.1) ... 4405s Selecting previously unselected package python3-pytest-cov. 
4405s Preparing to unpack .../66-python3-pytest-cov_5.0.0-1_all.deb ... 4405s Unpacking python3-pytest-cov (5.0.0-1) ... 4405s Selecting previously unselected package python3-mock. 4405s Preparing to unpack .../67-python3-mock_5.1.0-1_all.deb ... 4405s Unpacking python3-mock (5.1.0-1) ... 4405s Selecting previously unselected package autopkgtest-satdep. 4405s Preparing to unpack .../68-6-autopkgtest-satdep.deb ... 4405s Unpacking autopkgtest-satdep (0) ... 4405s Setting up python3-iniconfig (1.1.1-2) ... 4405s Setting up libev4t64:s390x (1:4.33-2.1build1) ... 4405s Setting up fonts-lato (2.015-1) ... 4405s Setting up python3-pysyncobj (0.3.12-1) ... 4406s Setting up python3-cachetools (5.3.3-1) ... 4406s Setting up python3-colorama (0.4.6-4) ... 4406s Setting up python3-zope.event (5.0-0.1) ... 4406s Setting up python3-zope.interface (6.4-1) ... 4406s Setting up python3-cdiff (1.0-1.1) ... 4407s Setting up python3-pyflakes (3.2.0-1) ... 4407s Setting up libpq5:s390x (16.3-1) ... 4407s Setting up python3-kerberos (1.1.14-3.1build9) ... 4407s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 4407s Setting up libjs-jquery-throttle-debounce (1.1+dfsg.1-2) ... 4407s Setting up python3-click (8.1.7-2) ... 4407s Setting up python3-psutil (5.9.8-2build2) ... 4408s Setting up python3-multidict (6.0.4-1.1build1) ... 4408s Setting up python3-frozenlist (1.4.1-1) ... 4408s Setting up python3-aiosignal (1.3.1-1) ... 4408s Setting up python3-mock (5.1.0-1) ... 4408s Setting up python3-async-timeout (4.0.3-1) ... 4409s Setting up python3-six (1.16.0-6) ... 4409s Setting up python3-responses (0.25.3-1) ... 4409s Setting up python3-pycodestyle (2.11.1-1) ... 4409s Setting up python3-packaging (24.1-1) ... 4409s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 4409s Setting up python3-pyu2f (0.1.5-2) ... 4410s Setting up python3-jmespath (1.0.1-1) ... 4410s Setting up python3-greenlet (3.0.3-0ubuntu5) ... 4410s Setting up libcares2:s390x (1.32.3-1) ... 4410s Setting up python3-psycopg2 (2.9.9-1build1) ... 4410s Setting up python3-pluggy (1.5.0-1) ... 4410s Setting up python3-dnspython (2.6.1-1ubuntu1) ... 4411s Setting up python3-pyasn1 (0.5.1-1) ... 4411s Setting up python3-dateutil (2.9.0-2) ... 4411s Setting up python3-mccabe (0.7.0-1) ... 4411s Setting up python3-consul (0.7.1-2) ... 4412s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 4412s Setting up libjs-jquery-hotkeys (0~20130707+git2d51e3a9+dfsg-2.1) ... 4412s Setting up python3-prettytable (3.10.1-1) ... 4412s Setting up python3-yarl (1.9.4-1) ... 4412s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 4412s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-2) ... 4412s Setting up python3-websocket (1.7.0-1) ... 4412s Setting up python3-requests-oauthlib (1.3.1-1) ... 4412s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 4412s Setting up python3-pure-sasl (0.5.1+dfsg1-4) ... 4412s Setting up python3-etcd (0.4.5-4) ... 4412s Setting up python3-pytest (7.4.4-1) ... 4413s Setting up python3-aiohttp (3.9.5-1) ... 4413s Setting up python3-gevent (24.2.1-1) ... 4414s Setting up python3-flake8 (7.1.0-1) ... 4414s Setting up python3-eventlet (0.35.2-0ubuntu1) ... 4414s Setting up python3-kazoo (2.9.0-2) ... 4414s Setting up python3-pyasn1-modules (0.3.0-1) ... 4415s Setting up libjs-jquery-metadata (12-4) ... 4415s Setting up python3-botocore (1.34.46+repack-1ubuntu1) ... 4415s Setting up libjs-jquery-isonscreen (1.2.0-1.1) ... 4415s Setting up libjs-sphinxdoc (7.3.7-4) ... 
4415s Setting up libjs-jquery-tablesorter (1:2.31.3+dfsg1-4) ... 4415s Setting up python3-rsa (4.9-2) ... 4415s Setting up patroni (3.3.1-1) ... 4415s Created symlink '/etc/systemd/system/multi-user.target.wants/patroni.service' → '/usr/lib/systemd/system/patroni.service'. 4416s Setting up patroni-doc (3.3.1-1) ... 4416s Setting up python3-s3transfer (0.10.1-1ubuntu2) ... 4416s Setting up python3-pytest-cov (5.0.0-1) ... 4416s Setting up python3-google-auth (2.28.2-3) ... 4417s Setting up python3-boto3 (1.34.46+dfsg-1ubuntu1) ... 4417s Setting up python3-kubernetes (30.1.0-1) ... 4419s Setting up autopkgtest-satdep (0) ... 4419s Processing triggers for man-db (2.12.1-2) ... 4419s Processing triggers for libc-bin (2.39-0ubuntu9) ... 4424s (Reading database ... 60907 files and directories currently installed.) 4424s Removing autopkgtest-satdep (0) ... 4426s autopkgtest [23:43:25]: test test: [----------------------- 4426s running test 4427s ============================= test session starts ============================== 4427s platform linux -- Python 3.12.4, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3 4427s cachedir: .pytest_cache 4427s rootdir: /tmp/autopkgtest.qFf46z/build.qnL/src 4427s plugins: cov-5.0.0 4434s collecting ... collected 646 items 4434s 4434s tests/test_api.py::TestRestApiHandler::test_RestApiServer_query PASSED [ 0%] 4434s tests/test_api.py::TestRestApiHandler::test_basicauth PASSED [ 0%] 4434s tests/test_api.py::TestRestApiHandler::test_do_DELETE_restart PASSED [ 0%] 4434s tests/test_api.py::TestRestApiHandler::test_do_DELETE_switchover PASSED [ 0%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET PASSED [ 0%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_cluster PASSED [ 0%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_config PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_failsafe PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_history PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_liveness PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_metrics PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_patroni PASSED [ 1%] 4435s tests/test_api.py::TestRestApiHandler::test_do_GET_readiness PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_HEAD PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_OPTIONS PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_PATCH_config PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_citus PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_failover PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_failsafe PASSED [ 2%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_mpp PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_reinitialize PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_reload PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_restart PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_sigterm PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_POST_switchover PASSED [ 3%] 4435s tests/test_api.py::TestRestApiHandler::test_do_PUT_config PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_check_access PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_get_certificate_serial_number PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_handle_error PASSED [ 4%] 
4435s tests/test_api.py::TestRestApiServer::test_process_request_error PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_process_request_thread PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_query PASSED [ 4%] 4435s tests/test_api.py::TestRestApiServer::test_reload_config PASSED [ 5%] 4435s tests/test_api.py::TestRestApiServer::test_reload_local_certificate PASSED [ 5%] 4435s tests/test_api.py::TestRestApiServer::test_socket_error PASSED [ 5%] 4435s tests/test_async_executor.py::TestAsyncExecutor::test_cancel PASSED [ 5%] 4435s tests/test_async_executor.py::TestAsyncExecutor::test_run PASSED [ 5%] 4435s tests/test_async_executor.py::TestAsyncExecutor::test_run_async PASSED [ 5%] 4435s tests/test_async_executor.py::TestCriticalTask::test_completed_task PASSED [ 6%] 4435s tests/test_aws.py::TestAWSConnection::test_aws_bizare_response PASSED [ 6%] 4435s tests/test_aws.py::TestAWSConnection::test_main PASSED [ 6%] 4435s tests/test_aws.py::TestAWSConnection::test_non_aws PASSED [ 6%] 4436s tests/test_aws.py::TestAWSConnection::test_on_role_change PASSED [ 6%] 4436s tests/test_barman.py::test_set_up_logging PASSED [ 6%] 4436s tests/test_barman.py::TestPgBackupApi::test__build_full_url PASSED [ 6%] 4436s tests/test_barman.py::TestPgBackupApi::test__deserialize_response PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test__ensure_api_ok PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test__get_request PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test__post_request PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test__serialize_request PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test_create_config_switch_operation PASSED [ 7%] 4436s tests/test_barman.py::TestPgBackupApi::test_create_recovery_operation PASSED [ 8%] 4436s tests/test_barman.py::TestPgBackupApi::test_get_operation_status PASSED [ 8%] 4436s tests/test_barman.py::TestBarmanRecover::test__restore_backup PASSED [ 8%] 4436s tests/test_barman.py::TestBarmanRecoverCli::test_run_barman_recover PASSED [ 8%] 4436s tests/test_barman.py::TestBarmanConfigSwitch::test__switch_config PASSED [ 8%] 4436s tests/test_barman.py::TestBarmanConfigSwitchCli::test__should_skip_switch PASSED [ 8%] 4436s tests/test_barman.py::TestBarmanConfigSwitchCli::test_run_barman_config_switch PASSED [ 8%] 4436s tests/test_barman.py::TestMain::test_main PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test__initdb PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test__process_user_options PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test_basebackup PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test_bootstrap PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test_call_post_bootstrap PASSED [ 9%] 4436s tests/test_bootstrap.py::TestBootstrap::test_clone PASSED [ 10%] 4436s tests/test_bootstrap.py::TestBootstrap::test_create_replica PASSED [ 10%] 4436s tests/test_bootstrap.py::TestBootstrap::test_create_replica_old_format PASSED [ 10%] 4436s tests/test_bootstrap.py::TestBootstrap::test_custom_bootstrap PASSED [ 10%] 4436s tests/test_bootstrap.py::TestBootstrap::test_post_bootstrap PASSED [ 10%] 4436s tests/test_callback_executor.py::TestCallbackExecutor::test_callback_executor PASSED [ 10%] 4436s tests/test_cancellable.py::TestCancellableSubprocess::test__kill_children PASSED [ 10%] 4436s tests/test_cancellable.py::TestCancellableSubprocess::test_call PASSED [ 11%] 4436s tests/test_cancellable.py::TestCancellableSubprocess::test_cancel 
PASSED [ 11%] 4436s tests/test_citus.py::TestCitus::test_add_task SKIPPED (Citus not tested) [ 11%] 4436s tests/test_citus.py::TestCitus::test_adjust_postgres_gucs SKIPPED (C...) [ 11%] 4436s tests/test_citus.py::TestCitus::test_bootstrap_duplicate_database SKIPPED [ 11%] 4436s tests/test_citus.py::TestCitus::test_handle_event SKIPPED (Citus not...) [ 11%] 4436s tests/test_citus.py::TestCitus::test_ignore_replication_slot SKIPPED [ 12%] 4436s tests/test_citus.py::TestCitus::test_load_pg_dist_node SKIPPED (Citu...) [ 12%] 4436s tests/test_citus.py::TestCitus::test_on_demote SKIPPED (Citus not te...) [ 12%] 4436s tests/test_citus.py::TestCitus::test_pick_task SKIPPED (Citus not te...) [ 12%] 4436s tests/test_citus.py::TestCitus::test_process_task SKIPPED (Citus not...) [ 12%] 4436s tests/test_citus.py::TestCitus::test_process_tasks SKIPPED (Citus no...) [ 12%] 4436s tests/test_citus.py::TestCitus::test_run SKIPPED (Citus not tested) [ 13%] 4436s tests/test_citus.py::TestCitus::test_sync_meta_data SKIPPED (Citus n...) [ 13%] 4436s tests/test_citus.py::TestCitus::test_wait SKIPPED (Citus not tested) [ 13%] 4436s tests/test_config.py::TestConfig::test__process_postgresql_parameters PASSED [ 13%] 4436s tests/test_config.py::TestConfig::test__validate_and_adjust_timeouts PASSED [ 13%] 4436s tests/test_config.py::TestConfig::test__validate_failover_tags PASSED [ 13%] 4436s tests/test_config.py::TestConfig::test_configuration_directory PASSED [ 13%] 4436s tests/test_config.py::TestConfig::test_global_config_is_synchronous_mode PASSED [ 14%] 4436s tests/test_config.py::TestConfig::test_invalid_path PASSED [ 14%] 4436s tests/test_config.py::TestConfig::test_reload_local_configuration PASSED [ 14%] 4436s tests/test_config.py::TestConfig::test_save_cache PASSED [ 14%] 4436s tests/test_config.py::TestConfig::test_set_dynamic_configuration PASSED [ 14%] 4436s tests/test_config.py::TestConfig::test_standby_cluster_parameters PASSED [ 14%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_16 PASSED [ 15%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_16_connect_from_env PASSED [ 15%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_generate_config_running_instance_errors PASSED [ 15%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_generate_sample_config_16 PASSED [ 15%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_generate_sample_config_pre_13_dir_creation PASSED [ 15%] 4436s tests/test_config_generator.py::TestGenerateConfig::test_get_address PASSED [ 15%] 4436s tests/test_consul.py::TestHTTPClient::test_get PASSED [ 15%] 4436s tests/test_consul.py::TestHTTPClient::test_put PASSED [ 16%] 4436s tests/test_consul.py::TestHTTPClient::test_unknown_method PASSED [ 16%] 4436s tests/test_consul.py::TestConsul::test__get_citus_cluster PASSED [ 16%] 4436s tests/test_consul.py::TestConsul::test_cancel_initialization PASSED [ 16%] 4436s tests/test_consul.py::TestConsul::test_create_session PASSED [ 16%] 4436s tests/test_consul.py::TestConsul::test_delete_cluster PASSED [ 16%] 4436s tests/test_consul.py::TestConsul::test_delete_leader PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_get_cluster PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_initialize PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_referesh_session PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_reload_config PASSED [ 17%] 4436s 
tests/test_consul.py::TestConsul::test_set_config_value PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_set_failover_value PASSED [ 17%] 4436s tests/test_consul.py::TestConsul::test_set_history_value PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_set_retry_timeout PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_sync_state PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_take_leader PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_touch_member PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_update_leader PASSED [ 18%] 4436s tests/test_consul.py::TestConsul::test_update_service PASSED [ 19%] 4436s tests/test_consul.py::TestConsul::test_watch PASSED [ 19%] 4436s tests/test_consul.py::TestConsul::test_write_leader_optime PASSED [ 19%] 4436s tests/test_ctl.py::TestCtl::test_apply_config_changes PASSED [ 19%] 4436s tests/test_ctl.py::TestCtl::test_ctl PASSED [ 19%] 4436s tests/test_ctl.py::TestCtl::test_dsn PASSED [ 19%] 4436s tests/test_ctl.py::TestCtl::test_edit_config PASSED [ 19%] 4436s tests/test_ctl.py::TestCtl::test_failover PASSED [ 20%] 4436s tests/test_ctl.py::TestCtl::test_flush_restart PASSED [ 20%] 4436s tests/test_ctl.py::TestCtl::test_flush_switchover PASSED [ 20%] 4437s tests/test_ctl.py::TestCtl::test_format_pg_version PASSED [ 20%] 4437s tests/test_ctl.py::TestCtl::test_get_all_members PASSED [ 20%] 4438s tests/test_ctl.py::TestCtl::test_get_any_member PASSED [ 20%] 4438s tests/test_ctl.py::TestCtl::test_get_cursor PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_get_dcs PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_get_members PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_history PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_invoke_editor PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_list_extended PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_list_standby_cluster PASSED [ 21%] 4438s tests/test_ctl.py::TestCtl::test_load_config PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_members PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_output_members PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_parse_dcs PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_pause_cluster PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_query PASSED [ 22%] 4438s tests/test_ctl.py::TestCtl::test_query_member PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_reinit_wait PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_reload PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_remove PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_restart_reinit PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_resume_cluster PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_show_config PASSED [ 23%] 4438s tests/test_ctl.py::TestCtl::test_show_diff PASSED [ 24%] 4438s tests/test_ctl.py::TestCtl::test_switchover PASSED [ 24%] 4438s tests/test_ctl.py::TestCtl::test_topology PASSED [ 24%] 4438s tests/test_ctl.py::TestCtl::test_version PASSED [ 24%] 4438s tests/test_ctl.py::TestPatronictlPrettyTable::test__get_hline PASSED [ 24%] 4438s tests/test_ctl.py::TestPatronictlPrettyTable::test__stringify_hrule PASSED [ 24%] 4438s tests/test_ctl.py::TestPatronictlPrettyTable::test_output PASSED [ 25%] 4438s tests/test_etcd.py::TestDnsCachingResolver::test_run PASSED [ 25%] 4438s tests/test_etcd.py::TestClient::test___del__ PASSED [ 25%] 4438s tests/test_etcd.py::TestClient::test__get_machines_cache_from_dns PASSED [ 25%] 4438s tests/test_etcd.py::TestClient::test__get_machines_cache_from_srv 
PASSED [ 25%] 4438s tests/test_etcd.py::TestClient::test__load_machines_cache PASSED [ 25%] 4438s tests/test_etcd.py::TestClient::test__refresh_machines_cache PASSED [ 26%] 4438s tests/test_etcd.py::TestClient::test_api_execute PASSED [ 26%] 4438s tests/test_etcd.py::TestClient::test_create_connection_patched PASSED [ 26%] 4438s tests/test_etcd.py::TestClient::test_get_srv_record PASSED [ 26%] 4438s tests/test_etcd.py::TestClient::test_machines PASSED [ 26%] 4438s tests/test_etcd.py::TestEtcd::test__get_citus_cluster PASSED [ 26%] 4438s tests/test_etcd.py::TestEtcd::test_attempt_to_acquire_leader PASSED [ 26%] 4438s tests/test_etcd.py::TestEtcd::test_base_path PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_cancel_initializion PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_delete_cluster PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_delete_leader PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_get_cluster PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_get_etcd_client PASSED [ 27%] 4438s tests/test_etcd.py::TestEtcd::test_initialize PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_last_seen PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_other_exceptions PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_set_history_value PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_set_ttl PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_sync_state PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_take_leader PASSED [ 28%] 4438s tests/test_etcd.py::TestEtcd::test_touch_member PASSED [ 29%] 4438s tests/test_etcd.py::TestEtcd::test_update_leader PASSED [ 29%] 4438s tests/test_etcd.py::TestEtcd::test_watch PASSED [ 29%] 4438s tests/test_etcd.py::TestEtcd::test_write_leader_optime PASSED [ 29%] 4438s tests/test_etcd3.py::TestEtcd3Client::test_authenticate PASSED [ 29%] 4438s tests/test_etcd3.py::TestKVCache::test__build_cache PASSED [ 29%] 4438s tests/test_etcd3.py::TestKVCache::test__do_watch PASSED [ 30%] 4438s tests/test_etcd3.py::TestKVCache::test_kill_stream PASSED [ 30%] 4438s tests/test_etcd3.py::TestKVCache::test_run PASSED [ 30%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__ensure_version_prefix PASSED [ 30%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__handle_auth_errors PASSED [ 30%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__handle_server_response PASSED [ 30%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__init__ PASSED [ 30%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__restart_watcher PASSED [ 31%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test__wait_cache PASSED [ 31%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test_call_rpc PASSED [ 31%] 4438s tests/test_etcd3.py::TestPatroniEtcd3Client::test_txn PASSED [ 31%] 4438s tests/test_etcd3.py::TestEtcd3::test__get_citus_cluster PASSED [ 31%] 4438s tests/test_etcd3.py::TestEtcd3::test__update_leader PASSED [ 31%] 4438s tests/test_etcd3.py::TestEtcd3::test_attempt_to_acquire_leader PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_cancel_initialization PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_create_lease PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_delete_cluster PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_delete_leader PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_delete_sync_state PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_get_cluster PASSED [ 32%] 4438s tests/test_etcd3.py::TestEtcd3::test_initialize PASSED [ 33%] 4438s 
tests/test_etcd3.py::TestEtcd3::test_refresh_lease PASSED [ 33%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_config_value PASSED [ 33%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_failover_value PASSED [ 33%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_history_value PASSED [ 33%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_socket_options PASSED [ 33%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_sync_state_value PASSED [ 34%] 4438s tests/test_etcd3.py::TestEtcd3::test_set_ttl PASSED [ 34%] 4438s tests/test_etcd3.py::TestEtcd3::test_take_leader PASSED [ 34%] 4438s tests/test_etcd3.py::TestEtcd3::test_touch_member PASSED [ 34%] 4438s tests/test_etcd3.py::TestEtcd3::test_watch PASSED [ 34%] 4438s tests/test_exhibitor.py::TestExhibitorEnsembleProvider::test_init PASSED [ 34%] 4438s tests/test_exhibitor.py::TestExhibitorEnsembleProvider::test_poll PASSED [ 34%] 4438s tests/test_exhibitor.py::TestExhibitor::test_get_cluster PASSED [ 35%] 4438s tests/test_file_perm.py::TestFilePermissions::test_set_permissions_from_data_directory PASSED [ 35%] 4438s tests/test_file_perm.py::TestFilePermissions::test_set_umask PASSED [ 35%] 4438s tests/test_ha.py::TestHa::test__is_healthiest_node PASSED [ 35%] 4438s tests/test_ha.py::TestHa::test_abort_join PASSED [ 35%] 4438s tests/test_ha.py::TestHa::test_acquire_lock PASSED [ 35%] 4438s tests/test_ha.py::TestHa::test_acquire_lock_as_primary PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_after_pause PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_as_standby_leader PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_from_another_member PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_initialize_lock_failed PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_initialized_new_cluster PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_not_running_concurrently PASSED [ 36%] 4439s tests/test_ha.py::TestHa::test_bootstrap_release_initialize_key_on_failure PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_bootstrap_release_initialize_key_on_watchdog_failure PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_bootstrap_waiting_for_leader PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_bootstrap_waiting_for_standby_leader PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_bootstrap_without_leader PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_check_failsafe_topology PASSED [ 37%] 4439s tests/test_ha.py::TestHa::test_coordinator_leader_with_lock PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_crash_recovery PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_crash_recovery_before_rewind PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_delete_future_restarts PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_demote_after_failing_to_obtain_lock PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_demote_because_not_having_lock PASSED [ 38%] 4439s tests/test_ha.py::TestHa::test_demote_because_not_healthiest PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_demote_because_update_lock_failed PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_demote_immediate PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_disable_sync_when_restarting PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_effective_tags PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_empty_directory_in_pause PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_enable_synchronous_mode PASSED [ 39%] 4439s tests/test_ha.py::TestHa::test_evaluate_scheduled_restart PASSED [ 40%] 4439s 
tests/test_ha.py::TestHa::test_failed_to_update_lock_in_pause PASSED [ 40%] 4439s tests/test_ha.py::TestHa::test_failover_immediately_on_zero_primary_start_timeout PASSED [ 40%] 4439s tests/test_ha.py::TestHa::test_fetch_node_status PASSED [ 40%] 4439s tests/test_ha.py::TestHa::test_follow PASSED [ 40%] 4439s tests/test_ha.py::TestHa::test_follow_copy PASSED [ 40%] 4439s tests/test_ha.py::TestHa::test_follow_in_pause PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_follow_new_leader_after_failing_to_obtain_lock PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_follow_new_leader_because_not_healthiest PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_follow_triggers_rewind PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_get_node_to_follow_nostream PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_inconsistent_synchronous_state PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_is_healthiest_node PASSED [ 41%] 4439s tests/test_ha.py::TestHa::test_is_leader PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_leader_race_stale_primary PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_leader_with_lock PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_leader_with_not_accessible_data_directory PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_long_promote PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_lost_leader_lock_during_promote PASSED [ 42%] 4439s tests/test_ha.py::TestHa::test_manual_failover_from_leader PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_from_leader_in_pause PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_from_leader_in_synchronous_mode PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader_in_pause PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_process_no_leader_in_synchronous_mode PASSED [ 43%] 4439s tests/test_ha.py::TestHa::test_manual_failover_while_starting PASSED [ 43%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_from_leader PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_from_leader_in_pause PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_from_leader_in_synchronous_mode PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader_in_pause PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_manual_switchover_process_no_leader_in_synchronous_mode PASSED [ 44%] 4440s tests/test_ha.py::TestHa::test_no_dcs_connection_primary_demote PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_no_dcs_connection_primary_failsafe PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_no_dcs_connection_replica_failsafe PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_no_dcs_connection_replica_failsafe_not_enabled_but_active PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_no_etcd_connection_in_pause PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_notify_citus_coordinator PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_permanent_logical_slots_after_promote PASSED [ 45%] 4440s tests/test_ha.py::TestHa::test_post_recover PASSED [ 46%] 4440s tests/test_ha.py::TestHa::test_postgres_unhealthy_in_pause PASSED [ 46%] 4440s tests/test_ha.py::TestHa::test_primary_stop_timeout PASSED [ 46%] 4440s tests/test_ha.py::TestHa::test_process_healthy_cluster_in_pause PASSED [ 46%] 4440s 
tests/test_ha.py::TestHa::test_process_healthy_standby_cluster_as_cascade_replica PASSED [ 46%] 4440s tests/test_ha.py::TestHa::test_process_healthy_standby_cluster_as_standby_leader PASSED [ 46%] 4440s tests/test_ha.py::TestHa::test_process_sync_replication PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_process_unhealthy_standby_cluster_as_cascade_replica PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_process_unhealthy_standby_cluster_as_standby_leader PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_promote_because_have_lock PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_promote_without_watchdog PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_promoted_by_acquiring_lock PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_promotion_cancelled_after_pre_promote_failed PASSED [ 47%] 4440s tests/test_ha.py::TestHa::test_readonly_dcs_primary_failsafe PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_former_primary PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_raft PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_replica_failed PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_unhealthy_leader_in_standby_cluster PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_unhealthy_unlocked_standby_cluster PASSED [ 48%] 4440s tests/test_ha.py::TestHa::test_recover_with_reinitialize PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_recover_with_rewind PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_reinitialize PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_restart PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_restart_in_progress PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_restart_matches PASSED [ 49%] 4440s tests/test_ha.py::TestHa::test_restore_cluster_config PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_run_cycle PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_schedule_future_restart PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_scheduled_restart PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_scheduled_switchover_from_leader PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_shutdown PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_shutdown_citus_worker PASSED [ 50%] 4440s tests/test_ha.py::TestHa::test_start_as_cascade_replica_in_standby_cluster PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_start_as_readonly PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_start_as_replica PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_start_primary_after_failure PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_starting_timeout PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_sync_replication_become_primary PASSED [ 51%] 4440s tests/test_ha.py::TestHa::test_sysid_no_match PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_sysid_no_match_in_pause PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_touch_member PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_unhealthy_sync_mode PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_update_cluster_history PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_update_failsafe PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_update_lock PASSED [ 52%] 4440s tests/test_ha.py::TestHa::test_wakup PASSED [ 53%] 4440s tests/test_ha.py::TestHa::test_watch PASSED [ 53%] 4441s tests/test_ha.py::TestHa::test_worker_restart PASSED [ 53%] 4441s tests/test_kubernetes.py::TestK8sConfig::test_load_incluster_config PASSED [ 53%] 4441s tests/test_kubernetes.py::TestK8sConfig::test_load_kube_config PASSED [ 53%] 4441s tests/test_kubernetes.py::TestK8sConfig::test_refresh_token 
PASSED [ 53%] 4441s tests/test_kubernetes.py::TestApiClient::test__do_http_request PASSED [ 54%] 4441s tests/test_kubernetes.py::TestApiClient::test__refresh_api_servers_cache PASSED [ 54%] 4441s tests/test_kubernetes.py::TestApiClient::test_request PASSED [ 54%] 4441s tests/test_kubernetes.py::TestCoreV1Api::test_create_namespaced_service PASSED [ 54%] 4441s tests/test_kubernetes.py::TestCoreV1Api::test_delete_namespaced_pod PASSED [ 54%] 4441s tests/test_kubernetes.py::TestCoreV1Api::test_list_namespaced_endpoints PASSED [ 54%] 4441s tests/test_kubernetes.py::TestCoreV1Api::test_list_namespaced_pod PASSED [ 54%] 4441s tests/test_kubernetes.py::TestCoreV1Api::test_patch_namespaced_config_map PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test__get_citus_cluster PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test__wait_caches PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_attempt_to_acquire_leader PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_cancel_initialization PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_delete_cluster PASSED [ 55%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_delete_leader PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_citus_coordinator PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_cluster PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_get_mpp_coordinator PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_initialize PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_manual_failover PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_reload_config PASSED [ 56%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_set_config_value PASSED [ 57%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_set_history_value PASSED [ 57%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_take_leader PASSED [ 57%] 4441s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_touch_member PASSED [ 57%] 4442s tests/test_kubernetes.py::TestKubernetesConfigMaps::test_watch PASSED [ 57%] 4442s tests/test_kubernetes.py::TestKubernetesEndpointsNoPodIP::test_update_leader PASSED [ 57%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test__create_config_service PASSED [ 58%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test__update_leader_with_retry PASSED [ 58%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test_delete_sync_state PASSED [ 58%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test_update_leader PASSED [ 58%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test_write_leader_optime PASSED [ 58%] 4442s tests/test_kubernetes.py::TestKubernetesEndpoints::test_write_sync_state PASSED [ 58%] 4442s tests/test_kubernetes.py::TestCacheBuilder::test__build_cache PASSED [ 58%] 4442s tests/test_kubernetes.py::TestCacheBuilder::test__do_watch PASSED [ 59%] 4442s tests/test_kubernetes.py::TestCacheBuilder::test__list PASSED [ 59%] 4442s tests/test_kubernetes.py::TestCacheBuilder::test_kill_stream PASSED [ 59%] 4442s tests/test_kubernetes.py::TestCacheBuilder::test_run PASSED [ 59%] 4442s tests/test_log.py::TestPatroniLogger::test_dateformat PASSED [ 59%] 4442s tests/test_log.py::TestPatroniLogger::test_fail_to_use_python_json_logger PASSED [ 59%] 4442s 
tests/test_log.py::TestPatroniLogger::test_interceptor PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_invalid_dateformat PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_invalid_json_format PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_invalid_plain_format PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_json_list_format PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_json_str_format PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_patroni_logger PASSED [ 60%] 4442s tests/test_log.py::TestPatroniLogger::test_plain_format PASSED [ 61%] 4442s tests/test_mpp.py::TestMPP::test_get_handler_impl_exception PASSED [ 61%] 4442s tests/test_mpp.py::TestMPP::test_null_handler PASSED [ 61%] 4442s tests/test_patroni.py::TestPatroni::test__filter_tags PASSED [ 61%] 4442s tests/test_patroni.py::TestPatroni::test_check_psycopg PASSED [ 61%] 4442s tests/test_patroni.py::TestPatroni::test_ensure_unique_name PASSED [ 61%] 4442s tests/test_patroni.py::TestPatroni::test_failover_priority PASSED [ 62%] 4442s tests/test_patroni.py::TestPatroni::test_load_dynamic_configuration PASSED [ 62%] 4442s tests/test_patroni.py::TestPatroni::test_no_config PASSED [ 62%] 4442s tests/test_patroni.py::TestPatroni::test_nofailover PASSED [ 62%] 4443s tests/test_patroni.py::TestPatroni::test_noloadbalance PASSED [ 62%] 4443s tests/test_patroni.py::TestPatroni::test_nostream PASSED [ 62%] 4443s tests/test_patroni.py::TestPatroni::test_nosync PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_patroni_main PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_patroni_patroni_main PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_reload_config PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_replicatefrom PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_run PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_schedule_next_run PASSED [ 63%] 4443s tests/test_patroni.py::TestPatroni::test_shutdown PASSED [ 64%] 4443s tests/test_patroni.py::TestPatroni::test_sigterm_handler PASSED [ 64%] 4443s tests/test_patroni.py::TestPatroni::test_validate_config PASSED [ 64%] 4443s tests/test_postgresql.py::TestPostgresql::test__do_stop PASSED [ 64%] 4443s tests/test_postgresql.py::TestPostgresql::test__get_postgres_guc_validators PASSED [ 64%] 4443s tests/test_postgresql.py::TestPostgresql::test__load_postgres_gucs_validators PASSED [ 64%] 4443s tests/test_postgresql.py::TestPostgresql::test__query PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test__read_postgres_gucs_validators_file PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test__read_recovery_params PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test__read_recovery_params_pre_v12 PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test__wait_for_connection_close PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test__write_recovery_params PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test_call_nowait PASSED [ 65%] 4443s tests/test_postgresql.py::TestPostgresql::test_can_create_replica_without_replication_connection PASSED [ 66%] 4443s tests/test_postgresql.py::TestPostgresql::test_check_for_startup PASSED [ 66%] 4443s tests/test_postgresql.py::TestPostgresql::test_check_recovery_conf PASSED [ 66%] 4443s tests/test_postgresql.py::TestPostgresql::test_checkpoint PASSED [ 66%] 4443s tests/test_postgresql.py::TestPostgresql::test_controldata PASSED [ 66%] 4443s 
tests/test_postgresql.py::TestPostgresql::test_effective_configuration PASSED [ 66%] 4443s tests/test_postgresql.py::TestPostgresql::test_follow PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_get_major_version PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_get_postgres_role_from_data_directory PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_get_primary_timeline PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_get_server_parameters PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_handle_parameter_change PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_is_healthy PASSED [ 67%] 4444s tests/test_postgresql.py::TestPostgresql::test_is_primary PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_is_primary_exception PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_is_running PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_latest_checkpoint_location PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_move_data_directory PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_pgpass_is_dir PASSED [ 68%] 4444s tests/test_postgresql.py::TestPostgresql::test_postmaster_start_time PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_promote PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_query PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_received_timeline PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_reload PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_reload_config PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_remove_data_directory PASSED [ 69%] 4444s tests/test_postgresql.py::TestPostgresql::test_replica_cached_timeline PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_replica_method_can_work_without_replication_connection PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_resolve_connection_addresses PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_restart PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_restore_configuration_files PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_save_configuration_files PASSED [ 70%] 4444s tests/test_postgresql.py::TestPostgresql::test_set_enforce_hot_standby_feedback PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_start PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_stop PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_sysid PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_terminate_starting_postmaster PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_timeline_wal_position PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_validator_factory PASSED [ 71%] 4444s tests/test_postgresql.py::TestPostgresql::test_wait_for_port_open PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql::test_wait_for_startup PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql::test_write_pgpass PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql::test_write_postgresql_and_sanitize_auto_conf PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql2::test_available_gucs PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql2::test_cluster_info_query PASSED [ 72%] 4444s tests/test_postgresql.py::TestPostgresql2::test_load_current_server_parameters PASSED [ 73%] 
4444s tests/test_postmaster.py::TestPostmasterProcess::test_from_pid PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_from_pidfile PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_init PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_read_postmaster_pidfile PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_signal_kill PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_signal_stop PASSED [ 73%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_signal_stop_nt PASSED [ 74%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_start PASSED [ 74%] 4444s tests/test_postmaster.py::TestPostmasterProcess::test_wait_for_user_backends_to_close PASSED [ 74%] 4444s tests/test_raft.py::TestTCPTransport::test__connectIfNecessarySingle PASSED [ 74%] 4444s tests/test_raft.py::TestDynMemberSyncObj::test__SyncObj__doChangeCluster PASSED [ 74%] 4444s tests/test_raft.py::TestDynMemberSyncObj::test_add_member PASSED [ 74%] 4444s tests/test_raft.py::TestDynMemberSyncObj::test_getMembers PASSED [ 75%] 4445s tests/test_raft.py::TestKVStoreTTL::test_delete PASSED [ 75%] 4449s tests/test_raft.py::TestKVStoreTTL::test_expire PASSED [ 75%] 4451s tests/test_raft.py::TestKVStoreTTL::test_on_ready_override PASSED [ 75%] 4451s tests/test_raft.py::TestKVStoreTTL::test_retry PASSED [ 75%] 4452s tests/test_raft.py::TestKVStoreTTL::test_set PASSED [ 75%] 4452s tests/test_raft.py::TestRaft::test_init PASSED [ 76%] 4454s tests/test_raft.py::TestRaft::test_raft PASSED [ 76%] 4454s tests/test_raft_controller.py::TestPatroniRaftController::test_patroni_raft_controller_main PASSED [ 76%] 4454s tests/test_raft_controller.py::TestPatroniRaftController::test_reload_config PASSED [ 76%] 4454s tests/test_raft_controller.py::TestPatroniRaftController::test_run PASSED [ 76%] 4454s tests/test_rewind.py::TestRewind::test__check_timeline_and_lsn PASSED [ 76%] 4454s tests/test_rewind.py::TestRewind::test__get_local_timeline_lsn PASSED [ 76%] 4454s tests/test_rewind.py::TestRewind::test__log_primary_history PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_archive_ready_wals PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_can_rewind PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_check_leader_is_not_in_recovery PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_cleanup_archive_status PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_ensure_checkpoint_after_promote PASSED [ 77%] 4454s tests/test_rewind.py::TestRewind::test_ensure_clean_shutdown PASSED [ 78%] 4454s tests/test_rewind.py::TestRewind::test_execute PASSED [ 78%] 4454s tests/test_rewind.py::TestRewind::test_maybe_clean_pg_replslot PASSED [ 78%] 4454s tests/test_rewind.py::TestRewind::test_pg_rewind PASSED [ 78%] 4454s tests/test_rewind.py::TestRewind::test_read_postmaster_opts PASSED [ 78%] 4454s tests/test_rewind.py::TestRewind::test_single_user_mode PASSED [ 78%] 4454s tests/test_slots.py::TestSlotsHandler::test__ensure_logical_slots_replica PASSED [ 78%] 4454s tests/test_slots.py::TestSlotsHandler::test_advance_physical_slots PASSED [ 79%] 4454s tests/test_slots.py::TestSlotsHandler::test_cascading_replica_sync_replication_slots PASSED [ 79%] 4454s tests/test_slots.py::TestSlotsHandler::test_check_logical_slots_readiness PASSED [ 79%] 4454s tests/test_slots.py::TestSlotsHandler::test_copy_logical_slots PASSED [ 79%] 4454s tests/test_slots.py::TestSlotsHandler::test_fsync_dir PASSED [ 79%] 4454s 
tests/test_slots.py::TestSlotsHandler::test_get_slot_name_on_primary PASSED [ 79%] 4454s tests/test_slots.py::TestSlotsHandler::test_nostream_slot_processing PASSED [ 80%] 4454s tests/test_slots.py::TestSlotsHandler::test_on_promote PASSED [ 80%] 4454s tests/test_slots.py::TestSlotsHandler::test_process_permanent_slots PASSED [ 80%] 4454s tests/test_slots.py::TestSlotsHandler::test_should_enforce_hot_standby_feedback PASSED [ 80%] 4454s tests/test_slots.py::TestSlotsHandler::test_slots_advance_thread PASSED [ 80%] 4454s tests/test_slots.py::TestSlotsHandler::test_sync_replication_slots PASSED [ 80%] 4454s tests/test_sync.py::TestSync::test_pick_sync_standby PASSED [ 80%] 4455s tests/test_sync.py::TestSync::test_set_sync_standby PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_enable_keepalive PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_polling_loop PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_unquote PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_validate_directory_couldnt_create PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_validate_directory_is_not_a_directory PASSED [ 81%] 4455s tests/test_utils.py::TestUtils::test_validate_directory_not_writable PASSED [ 82%] 4455s tests/test_utils.py::TestUtils::test_validate_directory_writable PASSED [ 82%] 4455s tests/test_utils.py::TestRetrySleeper::test_copy PASSED [ 82%] 4455s tests/test_utils.py::TestRetrySleeper::test_deadline PASSED [ 82%] 4455s tests/test_utils.py::TestRetrySleeper::test_maximum_delay PASSED [ 82%] 4455s tests/test_utils.py::TestRetrySleeper::test_reset PASSED [ 82%] 4455s tests/test_utils.py::TestRetrySleeper::test_too_many_tries PASSED [ 82%] 4455s tests/test_validator.py::TestValidator::test_bin_dir_is_empty PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_bin_dir_is_empty_string_excutables_in_path PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_bin_dir_is_file PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_complete_config PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_data_dir_contains_pg_version PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_data_dir_is_empty_string PASSED [ 83%] 4455s tests/test_validator.py::TestValidator::test_directory_contains PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_empty_config PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_failover_priority_int PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_json_log_format PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_one_of PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_pg_version_missmatch PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_pg_wal_doesnt_exist PASSED [ 84%] 4455s tests/test_validator.py::TestValidator::test_validate_binary_name PASSED [ 85%] 4455s tests/test_validator.py::TestValidator::test_validate_binary_name_empty_string PASSED [ 85%] 4455s tests/test_validator.py::TestValidator::test_validate_binary_name_missing PASSED [ 85%] 4455s tests/test_wale_restore.py::TestWALERestore::test_create_replica_with_s3 PASSED [ 85%] 4455s tests/test_wale_restore.py::TestWALERestore::test_fix_subdirectory_path_if_broken PASSED [ 85%] 4455s tests/test_wale_restore.py::TestWALERestore::test_get_major_version PASSED [ 85%] 4455s tests/test_wale_restore.py::TestWALERestore::test_main PASSED [ 86%] 4455s tests/test_wale_restore.py::TestWALERestore::test_run PASSED [ 86%] 4455s 
tests/test_wale_restore.py::TestWALERestore::test_should_use_s3_to_create_replica PASSED [ 86%] 4455s tests/test_watchdog.py::TestWatchdog::test_basic_operation PASSED [ 86%] 4455s tests/test_watchdog.py::TestWatchdog::test_config_reload PASSED [ 86%] 4455s tests/test_watchdog.py::TestWatchdog::test_exceptions PASSED [ 86%] 4455s tests/test_watchdog.py::TestWatchdog::test_invalid_timings PASSED [ 86%] 4455s tests/test_watchdog.py::TestWatchdog::test_parse_mode PASSED [ 87%] 4455s tests/test_watchdog.py::TestWatchdog::test_timeout_does_not_ensure_safe_termination PASSED [ 87%] 4455s tests/test_watchdog.py::TestWatchdog::test_unsafe_timeout_disable_watchdog_and_exit PASSED [ 87%] 4455s tests/test_watchdog.py::TestWatchdog::test_unsupported_platform PASSED [ 87%] 4455s tests/test_watchdog.py::TestWatchdog::test_watchdog_activate PASSED [ 87%] 4455s tests/test_watchdog.py::TestWatchdog::test_watchdog_not_activated PASSED [ 87%] 4455s tests/test_watchdog.py::TestNullWatchdog::test_basics PASSED [ 88%] 4455s tests/test_watchdog.py::TestLinuxWatchdogDevice::test__ioctl PASSED [ 88%] 4455s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_basics PASSED [ 88%] 4455s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_error_handling PASSED [ 88%] 4455s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_is_healthy PASSED [ 88%] 4455s tests/test_watchdog.py::TestLinuxWatchdogDevice::test_open PASSED [ 88%] 4455s tests/test_zookeeper.py::TestPatroniSequentialThreadingHandler::test_create_connection PASSED [ 89%] 4455s tests/test_zookeeper.py::TestPatroniSequentialThreadingHandler::test_select PASSED [ 89%] 4455s tests/test_zookeeper.py::TestPatroniKazooClient::test__call PASSED [ 89%] 4455s tests/test_zookeeper.py::TestZooKeeper::test__cluster_loader PASSED [ 89%] 4455s tests/test_zookeeper.py::TestZooKeeper::test__get_citus_cluster PASSED [ 89%] 4455s tests/test_zookeeper.py::TestZooKeeper::test__kazoo_connect PASSED [ 89%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_attempt_to_acquire_leader PASSED [ 89%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_cancel_initialization PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_delete_cluster PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_delete_leader PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_get_children PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_get_citus_coordinator PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_get_cluster PASSED [ 90%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_get_mpp_coordinator PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_get_node PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_initialize PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_reload_config PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_set_config_value PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_set_failover_value PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_set_history_value PASSED [ 91%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_sync_state PASSED [ 92%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_take_leader PASSED [ 92%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_touch_member PASSED [ 92%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_update_leader PASSED [ 92%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_watch PASSED [ 92%] 4455s tests/test_zookeeper.py::TestZooKeeper::test_watcher PASSED [ 92%] 
4455s tests/test_zookeeper.py::TestZooKeeper::test_write_leader_optime PASSED [ 93%] 4455s patroni/__init__.py::patroni.parse_version PASSED [ 93%] 4455s patroni/api.py::patroni.api.check_access PASSED [ 93%] 4455s patroni/collections.py::patroni.collections.CaseInsensitiveDict.__len__ PASSED [ 93%] 4455s patroni/collections.py::patroni.collections.CaseInsensitiveDict.__repr__ PASSED [ 93%] 4455s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__len__ PASSED [ 93%] 4455s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__repr__ PASSED [ 93%] 4455s patroni/collections.py::patroni.collections.CaseInsensitiveSet.__str__ SKIPPED [ 94%] 4455s patroni/collections.py::patroni.collections._FrozenDict.__len__ PASSED [ 94%] 4455s patroni/ctl.py::patroni.ctl.format_pg_version PASSED [ 94%] 4455s patroni/ctl.py::patroni.ctl.parse_dcs PASSED [ 94%] 4455s patroni/ctl.py::patroni.ctl.parse_scheduled PASSED [ 94%] 4456s patroni/ctl.py::patroni.ctl.watching PASSED [ 94%] 4456s patroni/utils.py::patroni.utils.compare_values PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.convert_int_from_base_unit PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.convert_real_from_base_unit PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.convert_to_base_unit PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.deep_compare PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.maybe_convert_from_base_unit PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.parse_bool PASSED [ 95%] 4456s patroni/utils.py::patroni.utils.parse_int PASSED [ 96%] 4456s patroni/utils.py::patroni.utils.parse_real PASSED [ 96%] 4456s patroni/utils.py::patroni.utils.split_host_port PASSED [ 96%] 4456s patroni/utils.py::patroni.utils.strtod PASSED [ 96%] 4456s patroni/utils.py::patroni.utils.strtol PASSED [ 96%] 4456s patroni/utils.py::patroni.utils.unquote PASSED [ 96%] 4456s patroni/dcs/__init__.py::patroni.dcs.Cluster.__len__ PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Cluster.timeline PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.ClusterConfig.from_node PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Failover PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Failover.__len__ PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Leader.checkpoint_after_promote PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Member.from_node PASSED [ 97%] 4456s patroni/dcs/__init__.py::patroni.dcs.Member.patroni_version PASSED [ 98%] 4456s patroni/dcs/__init__.py::patroni.dcs.SyncState.from_node PASSED [ 98%] 4456s patroni/dcs/__init__.py::patroni.dcs.SyncState.matches PASSED [ 98%] 4456s patroni/dcs/__init__.py::patroni.dcs.TimelineHistory.from_node PASSED [ 98%] 4456s patroni/dcs/kubernetes.py::patroni.dcs.kubernetes.Kubernetes.subsets_changed PASSED [ 98%] 4456s patroni/postgresql/bootstrap.py::patroni.postgresql.bootstrap.Bootstrap.process_user_options PASSED [ 98%] 4456s patroni/postgresql/config.py::patroni.postgresql.config.parse_dsn PASSED [ 99%] 4456s patroni/postgresql/config.py::patroni.postgresql.config.read_recovery_param_value PASSED [ 99%] 4456s patroni/postgresql/misc.py::patroni.postgresql.misc.postgres_major_version_to_int PASSED [ 99%] 4456s patroni/postgresql/misc.py::patroni.postgresql.misc.postgres_version_to_int PASSED [ 99%] 4456s patroni/postgresql/sync.py::patroni.postgresql.sync.parse_sync_standby_names PASSED [ 99%] 4456s patroni/scripts/wale_restore.py::patroni.scripts.wale_restore.repr_size PASSED [ 99%] 4459s 
patroni/scripts/wale_restore.py::patroni.scripts.wale_restore.size_as_bytes PASSED [100%]
4459s
4459s ---------- coverage: platform linux, python 3.12.4-final-0 -----------
4459s Name Stmts Miss Cover Missing
4459s -----------------------------------------------------------------------------------
4459s patroni/__init__.py 13 0 100%
4459s patroni/__main__.py 199 1 99% 395
4459s patroni/api.py 770 0 100%
4459s patroni/async_executor.py 96 0 100%
4459s patroni/collections.py 56 3 95% 50, 99, 107
4459s patroni/config.py 371 0 100%
4459s patroni/config_generator.py 212 0 100%
4459s patroni/ctl.py 936 0 100%
4459s patroni/daemon.py 76 0 100%
4459s patroni/dcs/__init__.py 646 0 100%
4459s patroni/dcs/consul.py 485 0 100%
4459s patroni/dcs/etcd3.py 679 0 100%
4459s patroni/dcs/etcd.py 603 0 100%
4459s patroni/dcs/exhibitor.py 61 0 100%
4459s patroni/dcs/kubernetes.py 938 0 100%
4459s patroni/dcs/raft.py 319 0 100%
4459s patroni/dcs/zookeeper.py 288 0 100%
4459s patroni/dynamic_loader.py 35 0 100%
4459s patroni/exceptions.py 16 0 100%
4459s patroni/file_perm.py 43 0 100%
4459s patroni/global_config.py 81 0 100%
4459s patroni/ha.py 1244 2 99% 1925-1926
4459s patroni/log.py 219 2 99% 365-367
4459s patroni/postgresql/__init__.py 821 0 100%
4459s patroni/postgresql/available_parameters/__init__.py 21 0 100%
4459s patroni/postgresql/bootstrap.py 252 0 100%
4459s patroni/postgresql/callback_executor.py 55 0 100%
4459s patroni/postgresql/cancellable.py 104 0 100%
4459s patroni/postgresql/config.py 813 0 100%
4459s patroni/postgresql/connection.py 75 0 100%
4459s patroni/postgresql/misc.py 41 0 100%
4459s patroni/postgresql/mpp/__init__.py 89 0 100%
4459s patroni/postgresql/mpp/citus.py 259 122 53% 49, 52, 62, 66, 135-144, 149-162, 183-186, 205-227, 230-234, 255-271, 274-299, 302-320, 330, 338, 343-346, 360-361, 369-380, 395-399, 437, 458-459
4459s patroni/postgresql/postmaster.py 170 0 100%
4459s patroni/postgresql/rewind.py 416 0 100%
4459s patroni/postgresql/slots.py 334 0 100%
4459s patroni/postgresql/sync.py 130 0 100%
4459s patroni/postgresql/validator.py 157 0 100%
4459s patroni/psycopg.py 42 16 62% 19, 25-26, 42, 44-82, 120
4459s patroni/raft_controller.py 22 0 100%
4459s patroni/request.py 62 0 100%
4459s patroni/scripts/__init__.py 0 0 100%
4459s patroni/scripts/aws.py 59 1 98% 86
4459s patroni/scripts/barman/__init__.py 0 0 100%
4459s patroni/scripts/barman/cli.py 51 1 98% 240
4459s patroni/scripts/barman/config_switch.py 51 0 100%
4459s patroni/scripts/barman/recover.py 37 0 100%
4459s patroni/scripts/barman/utils.py 94 0 100%
4459s patroni/scripts/wale_restore.py 207 1 99% 374
4459s patroni/tags.py 38 0 100%
4459s patroni/utils.py 350 0 100%
4459s patroni/validator.py 301 0 100%
4459s patroni/version.py 1 0 100%
4459s patroni/watchdog/__init__.py 2 0 100%
4459s patroni/watchdog/base.py 203 0 100%
4459s patroni/watchdog/linux.py 135 1 99% 36
4459s -----------------------------------------------------------------------------------
4459s TOTAL 13778 150 99%
4459s Coverage XML written to file coverage.xml
4459s
4459s
4459s ======================= 632 passed, 14 skipped in 32.82s =======================
4460s autopkgtest [23:43:59]: test test: -----------------------]
4465s autopkgtest [23:44:04]: test test: - - - - - - - - - - results - - - - - - - - - -
4465s test PASS
4465s autopkgtest [23:44:04]: @@@@@@@@@@@@@@@@@@@@ summary
4465s acceptance-etcd3 PASS
4465s acceptance-etcd-basic PASS
4465s acceptance-etcd PASS
4465s acceptance-zookeeper PASS
4465s acceptance-raft PASS
4465s test PASS
4476s
base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.10", "links": [{"href": "http://keystone.infra.bos01.scalingstack:5000/v3/", "rel": "self"}]}} 4476s DEBUG (session:946) GET call to http://keystone.infra.bos01.scalingstack:5000/v3/ used request id req-75e1da0a-6398-4210-b440-a93c9e0a1215 4476s DEBUG (base:182) Making authentication request to http://keystone.infra.bos01.scalingstack:5000/v3/auth/tokens 4476s DEBUG (connectionpool:429) http://keystone.infra.bos01.scalingstack:5000 "POST /v3/auth/tokens HTTP/1.1" 201 4363 4476s DEBUG (base:187) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}], "is_admin_project": false, "project": {"domain": {"id": "default", "name": "Default"}, "id": "3f3b771a247746688951a4c90bf16631", "name": "prod-proposed-migration_project"}, "catalog": [{"endpoints": [{"url": "http://10.189.0.40", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "7d31d2904b56461cb46c735fc00850b0"}, {"url": "http://10.189.0.40", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "931e03b1033c4992ac8d223599983801"}, {"url": "http://10.189.0.40", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "c703b3c5e7224cfd893f622a7def99d7"}], "type": "product-streams", "id": "6723640fcf314f1c84ab92b0b7b7d343", "name": "image-stream"}, {"endpoints": [{"url": "http://neutron-api.infra.bos01.scalingstack:9696", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "13475a253aba4a63883ad9da9631b1d3"}, {"url": "http://10.189.0.22:9696", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "63b2334803a742048e95cd48d39f1674"}, {"url": "http://10.189.0.22:9696", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9d19ce3dbfd544ef90e7694049018957"}], "type": "network", "id": "6a80a28849da43ce9839207bb1e98bfc", "name": "neutron"}, {"endpoints": [{"url": "http://10.189.0.20:5000/v3", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "51d5e1cea07c4644b44a8bf114268a27"}, {"url": "http://10.189.0.20:35357/v3", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "79c780094b2f40e5a70ee3a6353760a0"}, {"url": "http://keystone.infra.bos01.scalingstack:5000/v3", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "9cdf3486e4a94ca0a181e87bc1ff344f"}], "type": "identity", "id": "ad3a88bc8df3470b938f685304ad3ae9", "name": "keystone"}, {"endpoints": [{"url": "http://nova-api.infra.bos01.scalingstack:8778", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "83e5577919844e47bbf3dffc39f71e5f"}, {"url": "http://10.189.0.23:8778", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "86cd7636126b4214a0c0de3c50936bb9"}, {"url": "http://10.189.0.23:8778", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "eb918cef1bd546fcaafc28133e511d6c"}], "type": "placement", "id": "af7144bdc8404803a159883c31910f75", "name": "placement"}, {"endpoints": [{"url": "http://10.189.0.23:8774/v2.1", "interface": "internal", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": 
"202b55f38ce646fe8ec9e2b956672f07"}, {"url": "http://10.189.0.23:8774/v2.1", "interface": "admin", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "b29375d70fd748e699859503279177e3"}, {"url": "http://nova-api.infra.bos01.scalingstack:8774/v2.1", "interface": "public", "region": "scalingstack-bos01", "region_id": "scalingstack-bos01", "id": "ff7b759bc23341fe911fedfc2cd9ae07"}], "type": "compute", "id": "e34360be9bc6484eb95832a381a2d650", "name": "nova"}, {"endpoints": [{"url": "http://glance.infra.bos01.scalingstack:9292", "interface": "nova [W] Using flock in scalingstack-bos01-s390x 4476s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 4476s nova [W] Using flock in scalingstack-bos01-s390x 4476s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)... 4476s nova [W] Using flock in scalingstack-bos01-s390x 4476s Creating nova instance adt-oracular-s390x-patroni-20240730-222939-juju-7f2275-prod-proposed-migration-environment-2-9c812c00-e2c5-4ac3-88ea-2f1ba23b7bb8 from image adt/ubuntu-oracular-s390x-server-20240730.img (UUID 8a3353f5-c393-44e6-a278-878d68f67811)...