Problem:
I'm evaluating Katello as a distributed container registry. Logging in, pulling, and pushing images against the main Katello server works fine.
When using a smart proxy, however, the login fails, so no images can be pulled via the smart proxy.
See the logs below. It seems the main server returns a valid auth token, but it cannot be used in the subsequent calls.
What is going wrong? Is this a config issue?
We have published all lifecycle environments, including Library, to the smart proxy in "on demand" mode.
Expected outcome:
Logging in to the smart proxy with podman should succeed.
Foreman and Proxy versions:
Foreman 3.12 - Katello 4.14
Foreman and Proxy plugin versions:
Distribution and version:
Other relevant data:
INFO[0000] podman filtering at log level debug
DEBU[0000] Called login.PersistentPreRunE(podman --log-level=DEBUG login proxy.myserver.com)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using boltdb as database backend
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is being used
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] No credentials matching proxy.myserver.com found in /run/containers/0/auth.json
DEBU[0000] No credentials matching proxy.myserver.com found in /root/.config/containers/auth.json
DEBU[0000] No credentials matching proxy.myserver.com found in /root/.docker/config.json
DEBU[0000] No credentials matching proxy.myserver.com found in /root/.dockercfg
DEBU[0000] No credentials for proxy.myserver.com found
Username: admin
Password:
DEBU[0007] Looking for TLS certificates and private keys in /etc/docker/certs.d/proxy.myserver.com
DEBU[0007] GET https://proxy.myserver.com/v2/
DEBU[0007] Ping https://proxy.myserver.com/v2/ status 401
DEBU[0007] GET https://proxy.myserver.com/v2/token?account=admin&service=proxy.myserver.com
DEBU[0008] GET https://proxy.myserver.com/v2/
DEBU[0008] error logging into "proxy.myserver.com": unable to retrieve auth token: invalid username/password: authentication required
Error: logging into "proxy.myserver.com": invalid username/password
DEBU[0008] Shutting down engines
HTTP log on the smart proxy:
x.x.x.x - - [16/Oct/2024:13:15:27 +0200] "GET /v2/ HTTP/1.1" 401 12 "-" "containers/5.29.3 (github.com/containers/image)"
x.x.x.x - - [16/Oct/2024:13:15:27 +0200] "GET /v2/token?account=admin&service=proxy.myserver.com HTTP/1.1" 200 174 "-" "containers/5.29.3 (github.com/containers/image)"
x.x.x.x - - [16/Oct/2024:13:15:28 +0200] "GET /v2/ HTTP/1.1" 401 12 "-" "containers/5.29.3 (github.com/containers/image)"
HTTP log on the main Katello server:
y.y.y.y - - [16/Oct/2024:13:15:27 +0200] "GET /v2/token?account=admin HTTP/1.1" 200 174 "-" "Ruby"
y.y.y.y - - [16/Oct/2024:13:15:27 +0200] "GET /v2/_catalog?account=admin HTTP/1.1" 200 342 "-" "Ruby"
Can you send some logs from the time of the login? From what I understand, the smart proxy handles user authentication in a cached way, where the initial authentication still happens on the Foreman server. Are there any errors on Foreman at the time of the podman login?
The attached logs seem to say the username/password was incorrect. Not sure if that's accurate?
DEBU[0008] error logging into "proxy.myserver.com": unable to retrieve auth token: invalid username/password: authentication required
Error: logging into "proxy.myserver.com": invalid username/password
DEBU[0008] Shutting down engines
Katello smart proxy log (not really helpful; the timestamps differ because it is from a different login attempt):
2024-10-16T13:46:34 daebf622 [I] Started GET /container_gateway/v2/token account=admin&service=katello-smart-vg.vitecmm.de
2024-10-16T13:46:35 daebf622 [I] Finished GET /container_gateway/v2/token with 200 (598.07 ms)
2024-10-16T13:46:35 [D] Rack::Handler::WEBrick is invoked.
2024-10-16T13:46:35 a3565620 [I] Started GET /container_gateway/v2/
2024-10-16T13:46:35 a3565620 [I] Finished GET /container_gateway/v2/ with 401 (1.3 ms)
Katello main server log:
2024-10-16T13:44:18 [I|app|7f244be3] Started GET "/v2/token?account=admin" for 10.1.90.151 at 2024-10-16 13:44:18 +0200
2024-10-16T13:44:18 [I|app|7f244be3] Processing by Katello::Api::Registry::RegistryProxiesController#token as HTML
2024-10-16T13:44:18 [I|app|7f244be3] Parameters: {"account"=>"admin"}
2024-10-16T13:44:18 [I|kat|7f244be3] Authorized user admin(Admin User)
2024-10-16T13:44:18 [I|aud|7f244be3] PersonalAccessToken (2) update event on expires_at 2024-10-16 11:21:27 UTC, 2024-10-16 11:50:18 UTC
2024-10-16T13:44:18 [I|app|7f244be3] Completed 200 OK in 262ms (Views: 0.3ms | ActiveRecord: 41.1ms | Allocations: 25165)
2024-10-16T13:44:18 [I|app|3d3d00a5] Started GET "/v2/_catalog?account=admin" for 10.1.90.151 at 2024-10-16 13:44:18 +0200
2024-10-16T13:44:18 [I|app|3d3d00a5] Processing by Katello::Api::Registry::RegistryProxiesController#catalog as HTML
2024-10-16T13:44:18 [I|app|3d3d00a5] Parameters: {"account"=>"admin"}
2024-10-16T13:44:18 [I|kat|3d3d00a5] Authorized user admin(Admin User)
2024-10-16T13:44:18 [I|app|3d3d00a5] Completed 200 OK in 217ms (Views: 0.3ms | ActiveRecord: 19.4ms | Allocations: 12241)
2024-10-16T13:46:34 [I|app|aa66b272] Started GET "/v2/token?account=admin" for 10.1.90.151 at 2024-10-16 13:46:34 +0200
2024-10-16T13:46:34 [I|app|aa66b272] Processing by Katello::Api::Registry::RegistryProxiesController#token as HTML
2024-10-16T13:46:34 [I|app|aa66b272] Parameters: {"account"=>"admin"}
2024-10-16T13:46:35 [I|kat|aa66b272] Authorized user admin(Admin User)
2024-10-16T13:46:35 [I|aud|aa66b272] PersonalAccessToken (2) update event on expires_at 2024-10-16 11:50:18 UTC, 2024-10-16 11:52:35 UTC
2024-10-16T13:46:35 [I|app|aa66b272] Completed 200 OK in 281ms (Views: 0.3ms | ActiveRecord: 35.6ms | Allocations: 25024)
2024-10-16T13:46:35 [I|app|4c771fe0] Started GET "/v2/_catalog?account=admin" for 10.1.90.151 at 2024-10-16 13:46:35 +0200
2024-10-16T13:46:35 [I|app|4c771fe0] Processing by Katello::Api::Registry::RegistryProxiesController#catalog as HTML
2024-10-16T13:46:35 [I|app|4c771fe0] Parameters: {"account"=>"admin"}
2024-10-16T13:46:35 [I|kat|4c771fe0] Authorized user admin(Admin User)
2024-10-16T13:46:35 [I|app|4c771fe0] Completed 200 OK in 199ms (Views: 0.5ms | ActiveRecord: 25.2ms | Allocations: 12257)
I think I found the underlying issue: it is a Sequel timestamp issue.
The token is added to the DB and immediately removed again.
I'm in timezone CEST (currently UTC+2).
If it is 11:00:00 local time, the token's expiry is written to the DB as 09:05:00 (a UTC value in a naive column), so the query
database.connection[:authentication_tokens].where { expire_at < Sequel::CURRENT_TIMESTAMP }.delete
immediately deletes the token, because CURRENT_TIMESTAMP resolves to the local time (11:00:00) when compared against the naive column.
The Postgres datetime column is created as "timestamp without time zone".
Changing the column type to "timestamp with time zone" fixes the issue, but it seems this is not wanted by the Sequel developers, see also
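For illustration, here is a minimal Sequel sketch of the suspected interaction. The connection URL, table layout, token_checksum column, and five-minute expiry are assumptions made for the example, not the actual container gateway schema:

# Minimal reproduction sketch, assuming a Postgres database and a schema
# loosely resembling the authentication_tokens table (names and the
# five-minute expiry are illustrative only).
require 'sequel'

DB = Sequel.connect('postgres://localhost/container_gateway_test')

DB.create_table?(:authentication_tokens) do
  primary_key :id
  String :token_checksum
  DateTime :expire_at   # becomes "timestamp without time zone" in Postgres
end

# The expiry is computed as a UTC time. On a CEST (UTC+2) host at 11:00:00
# local time this stores 09:05:00, because the naive column drops the offset.
DB[:authentication_tokens].insert(token_checksum: 'abc', expire_at: Time.now.utc + 300)

# CURRENT_TIMESTAMP is evaluated in the server's local time zone (11:00:00)
# for the comparison, so the freshly inserted token already looks expired
# and is deleted immediately.
DB[:authentication_tokens].where { expire_at < Sequel::CURRENT_TIMESTAMP }.delete

With the column defined as "timestamp with time zone", Postgres keeps the offset and the comparison behaves as expected, which matches the fix described above.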
Thank you, we'll take a look and hopefully get to it shortly since it's not too complex. Once we have a PR up, you should get notified since you created an issue. Any testing help on the pull request will be appreciated and will speed up merging.