luna-nightly-rpm-pipeline 236 failed

Luna nightly pipeline failed:

https://ci.theforeman.org/job/luna-nightly-rpm-pipeline/236/

foreman-pipeline-luna-nightly-centos7-install (passed)
foreman-pipeline-luna-nightly-centos7-upgrade (failed)

Not sure why this happens:

[2020-09-21T01:03:10.783Z] not ok 3 sync proxy
[2020-09-21T01:03:10.783Z] # (in test file fb-proxy.bats, line 26)
[2020-09-21T01:03:10.783Z] #   `hammer capsule content synchronize --id=$PROXY_ID' failed with status 70
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.0/1, 0%, elapsed: 00:00:00
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.0/1, 0%, 0.0/s, elapsed: 00:00:02
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:04, ETA: 00:00:39
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:06, ETA: 00:00:59
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:08, ETA: 00:01:19
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:11, ETA: 00:01:39
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:13, ETA: 00:01:59
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:15, ETA: 00:01:59
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:17
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:20
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:22
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:24
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:26
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:28
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:30
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:33
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:35
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:37
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:39
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:41
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:44
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:46
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:49
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:52
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.1/1, 10%, 0.0/s, elapsed: 00:00:56
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.21/1, 21%, 0.0/s, elapsed: 00:00:58, ETA: 00:01:52
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.63/1, 63%, 0.0/s, elapsed: 00:01:01, ETA: 00:00:52
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.63/1, 63%, 0.0/s, elapsed: 00:01:03, ETA: 00:00:16
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 0.63/1, 63%, 0.0/s, elapsed: 00:01:05, ETA: 00:00:16
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b running: 1.0/1, 100%, 0.0/s, elapsed: 00:01:07
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b warning: 1.0/1, 100%, 0.0/s, elapsed: 00:01:10
[2020-09-21T01:03:10.783Z] # Task 214b8880-63a6-403e-94e7-cb220ae4b59b warning: 1.0/1, 100%, 0.0/s, elapsed: 00:01:10
[2020-09-21T01:03:10.783Z] # Error: DKR1020: Image download(s) from https://pipeline-up-luna-nightly-centos7.n14.example.com/v2/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564 failed. Sync task has failed to prevent a corrupted repository.
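
The failing step is just the capsule sync, so as a first check I'd re-run it by hand to see whether the 502 is transient. A minimal sketch (the PROXY_ID fallback and the attempt count are my own placeholders, not pipeline values):

#!/bin/bash
# Re-run the capsule sync a few times to see whether the failure is transient.
# PROXY_ID must be the capsule's real ID; 2 here is only a placeholder.
PROXY_ID=${PROXY_ID:-2}
for attempt in 1 2 3; do
  echo "sync attempt ${attempt}"
  # the bats test saw this exact command exit with status 70
  if hammer capsule content synchronize --id="${PROXY_ID}"; then
    echo "succeeded on attempt ${attempt}"
    exit 0
  fi
done
echo "failed on all attempts" >&2
exit 1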

This popped up once before, on rpm content, not docker. I wasn't able
to reproduce it locally.

It appears to be some sort of intermittent issue between apache and the
pulp content app. I'm writing it out here mainly to help myself think
through it.

On the proxy we see:

Sep 21 00:59:23 pipeline-up-luna-proxy-nightly-centos7 pulp: urllib3.connectionpool:INFO: Starting new HTTPS connection (1): pipeline-up-luna-nightly-centos7.n14.example.com
Sep 21 00:59:23 pipeline-up-luna-proxy-nightly-centos7 pulp: urllib3.connectionpool:INFO: Starting new HTTPS connection (2): pipeline-up-luna-nightly-centos7.n14.example.com
Sep 21 00:59:23 pipeline-up-luna-proxy-nightly-centos7 pulp: urllib3.connectionpool:INFO: Starting new HTTPS connection (3): pipeline-up-luna-nightly-centos7.n14.example.com
Sep 21 00:59:23 pipeline-up-luna-proxy-nightly-centos7 pulp: urllib3.connectionpool:INFO: Starting new HTTPS connection (4): pipeline-up-luna-nightly-centos7.n14.example.com
Sep 21 00:59:24 pipeline-up-luna-proxy-nightly-centos7 pulp: nectar.downloaders.threaded:INFO: Download succeeded: https://pipeline-up-luna-nightly-centos7.n14.example.com/v2/test_organization-test_product-fedorassh/blobs/sha256:ad6a3ff29626fff703315619e98929eec1290fadeba867fef73ee2f9a171520f.
Sep 21 00:59:26 pipeline-up-luna-proxy-nightly-centos7 pulp: nectar.downloaders.threaded:INFO: Download failed: Download of https://pipeline-up-luna-nightly-centos7.n14.example.com/v2/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564 failed with code 502: Bad Gateway
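
To rule the proxy side in or out, one could replay the exact download nectar reported as failed, from the proxy box. A sketch; whether auth is required depends on the registry settings, so the bare curl here is an assumption:

# Replay the blob download that nectar saw 502 on, from the proxy.
BLOB_URL="https://pipeline-up-luna-nightly-centos7.n14.example.com/v2/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564"
# -k because the pipeline certs are throwaway; show only request/response lines
curl -skv -o /dev/null "$BLOB_URL" 2>&1 | grep '^[<>] '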

In apache on the main server we see:

192.168.121.6 - - [21/Sep/2020:00:59:24 +0000] "GET /pulp/container/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564?validate_token=f82b5ab167ad19052cf44fe21c3d6162ee2c905eb975e25af4f9c75839fb68d1:cb6cc1123254edd9353361a4e7d6ea0c2e016890f4842262eca7d7ca907b2ba5 HTTP/1.1" 502 645 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.5p157"
"GET /v2/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564 HTTP/1.1" 502 645 "-" "pulp/2.21.3"
[Mon Sep 21 00:59:24.894077 2020] [proxy:error] [pid 30521] [client 192.168.121.6:37168] AH00898: Error reading from remote server returned by /pulp/container/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564
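
Correlating by timestamp seems like the next step: did the content app see anything at all in that second? A sketch (the log path and unit name are CentOS 7 / Katello defaults as I understand them, so treat them as assumptions):

# The proxy-read errors apache recorded around the failure...
grep AH00898 /var/log/httpd/error_log
# ...versus whatever the content app emitted in the same window
journalctl -u pulpcore-content --since '2020-09-21 00:59:20' --until '2020-09-21 00:59:30'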

And on the pulpcore-api side, we see:

Sep 21 00:59:24 pipeline-up-luna-nightly-centos7 pulpcore-api: 127.0.0.1 - - [21/Sep/2020:00:59:24 +0000] "GET /v2/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564 HTTP/1.1" 302 0 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.5p157"

which redirects the client to the content app, via apache. However, I
don't see any such request in the content app's logs, even though it
shows up in apache.
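
To double-check that absence, grepping for the blob digest across everything the content app logs would settle it; a sketch, assuming it logs via syslog as the entries here suggest:

# Any content-app hit for this blob digest, successful or not (path assumed)
grep 9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564 /var/log/messages | grep pulpcore-content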

This same request went through fine five minutes earlier, when bats
tested the docker client explicitly:

pulpcore-content: 127.0.0.1 [21/Sep/2020:00:54:34 +0000] "GET /pulp/container/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564?validate_token=a4c5f382937690ae40b5e009888c1fff3586a7af669ac0f1718811d33735e46c:634b43c2520d26f4a1fb60e8634c5a91c5307946afc458e98e2f9ce310ff739e HTTP/1.1" 200 877 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.5p157"

So either:

  1. apache isn't properly proxying the request and the pulp content app
     isn't receiving it, or

  2. the pulp content app is erroring in such a way that nothing is
     logged, or nothing is logged before the logs are gathered.

Since the failure happens within a second or two, it doesn't appear to
be any sort of timeout I can think of.
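
One way to tell the two apart would be to replay the redirect target twice from the main server: once through apache and once directly against the content app, bypassing apache entirely. A sketch, assuming the content app listens on 127.0.0.1:24816 (what I believe the default is), and with a placeholder token, since validate_tokens are minted per request:

# Capture a fresh validate_token from the Location header of a new 302
# off the /v2/ endpoint; the old one has likely expired.
BLOB='/pulp/container/test_organization-test_product-fedorassh/blobs/sha256:9504711983ecdacdeb879a4f1ed046f42a19a081058cddf08f587e32b878e564'
TOKEN='<fresh validate_token>'
# 1) through apache, the path the real client takes:
curl -skv -o /dev/null "https://pipeline-up-luna-nightly-centos7.n14.example.com${BLOB}?validate_token=${TOKEN}"
# 2) straight at the content app, skipping apache (port assumed):
curl -sv -o /dev/null "http://127.0.0.1:24816${BLOB}?validate_token=${TOKEN}"
# A 200 from (2) with a 502 from (1) points at option 1 (apache);
# no content-app log entry from either points at option 2.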
