This is more of a “best practice” question than a bug report.
We’re trying to support bare-metal deployment of both Ubuntu 16 and Ubuntu 18 using PXE booting.
Both operating systems use the same installation medium.
From my observations, when building a host, the following files get downloaded to /var/lib/tftpboot/boot, where yeHqISwqdX is a random ID.
It seems to me that every time one builds a host, these files get downloaded.
On top of this, hosts for both Ubuntu 16 and Ubuntu 18 use the same files (because of the shared medium?), which has several consequences:
- Many redundant downloads
- Sometimes corruption of files
- No safe way to deploy Ubuntu 16 and Ubuntu 18 hosts at the same time
So my questions:
- Is it supposed to work to have different OS versions share one installation medium?
- Am I missing something obvious or misconfiguring something?
An obvious workaround is to download the files manually (or via scripts) and edit the templates, which I can do; I was just wondering whether there is a better way.
Thanks a lot !
Forgot to mention: we’re using Foreman 1.20.1.
It’s not random but rather a SHA hash of the installation media URL, encoded in base64 format.
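As a rough sketch of how such an ID could be derived (the exact hashing and truncation Foreman uses may differ; the URL, the URL-safe base64 alphabet, and the 10-character cut below are all assumptions for illustration):

```shell
# Sketch: derive a short, filesystem-safe ID from an installation media URL.
# Assumptions: SHA-1 as the digest, URL-safe base64, truncated to 10 chars.
url="http://archive.ubuntu.com/ubuntu"

id=$(printf '%s' "$url" \
  | openssl dgst -sha1 -binary \
  | base64 \
  | tr '+/' '-_' | tr -d '=' \
  | cut -c1-10)

echo "$id"   # the same URL always yields the same ID
```

The point is that the name is deterministic per media URL, so two operating systems pointing at the same URL collide on the same files.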
It should not download it every time; the smart proxy spawns
wget --tries=3 --no-check-certificate -nv -c so it only downloads what’s missing (usually nothing) and returns quickly. Unless you changed the URL, of course; then it’s a re-download.
We are tracking an issue to improve this; -c can cause problems when the file changes and can easily corrupt files. From the manpage:
> Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:
>
>     wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
>
> If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
>
> Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.
>
> Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
>
> If you use -c on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and overwrite the existing file entirely.
>
> Beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because "continuing" is not meaningful, no download occurs.
>
> On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file.
>
> However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an "incomplete download" candidate.
>
> Another instance where you'll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a "transfer interrupted" string into the local file. In the future a "rollback" option may be added to deal with this case.
>
> Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.
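That failure mode can be reproduced without any network at all. This sketch (file names and contents are made-up placeholders) imitates what a resume does when the remote file has changed rather than grown:

```shell
# Reproduce the "-c garbles a changed file" failure mode with no network:
# the local file is NOT a prefix of the new remote file, yet a resume-style
# download just fetches the "missing" tail and appends it.
cd "$(mktemp -d)"
printf 'OLD-CONTENT'        > local.bin    # stale copy from a previous run
printf 'NEW-CONTENT-LONGER' > remote.bin   # the file changed upstream

local_len=$(wc -c < local.bin)
# What "wget -c" effectively asks the server for: bytes local_len..end,
# appended onto the local file.
tail -c +"$((local_len + 1))" remote.bin >> local.bin

cat local.bin   # prints OLD-CONTENT-LONGER: neither the old nor the new file
```

The result is exactly the kind of garbled boot file that breaks deployments when two operating systems keep overwriting the same names.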
Oh, I think I see where the problem is. Ubuntu has an installation medium whose URL contains no variables
(http://archive.ubuntu.com/ubuntu), and the operating system class has a mechanism for finding the correct path for it. Filed an issue: Bug #25733: Media provider unique ID does not work for Debian-based distros - Foreman
Patch incoming, you can try to apply it.
@lzap, so the file names are not random but a hash of the media URL? Then it indeed makes sense.
Thanks a lot for your reply, for filing a bug, and the rest!
How would I test a patch? I’m only familiar with installing the Foreman from packages and configuring it via the installer.
Find the files on your filesystem that match the names on GitHub, back them up, and change them. Or you can do this:
patch -p1 < 6*.patch
systemctl restart httpd
To revert it, just do:
patch -Rp1 < 6*.patch
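If you want to rehearse the apply/revert mechanics on something disposable first, here is a toy run-through (the file name, directory layout, and diff contents are invented for the demonstration; they are not the actual Foreman patch):

```shell
# Toy demonstration of patch(1) apply/revert on a throwaway tree --
# same mechanics as applying the real fix from your Foreman root.
cd "$(mktemp -d)"
mkdir -p app
printf 'old line\n' > app/config.txt

# A unified diff, like one saved from the GitHub pull request.
cat > fix.patch <<'EOF'
--- a/app/config.txt
+++ b/app/config.txt
@@ -1 +1 @@
-old line
+new line
EOF

patch -p1 < fix.patch     # apply
cat app/config.txt        # -> new line
patch -Rp1 < fix.patch    # revert
cat app/config.txt        # -> old line
```

The `-p1` strips the leading `a/`/`b/` path component from the diff, which is why the command is run from the directory containing the patched tree.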
This bug should be fixed in 1.22.1, but we still suffer from it. Today Debian 9.12 and 10.3 were released, and we weren’t able to install both. The boot images weren’t updated, nor were new ones downloaded. We just deleted them, and Foreman downloaded them again and saved them under the same names.
I have fixed this with: https://github.com/theforeman/foreman/pull/7432
For the record, fixed in version 2.0.