Now, having read @lzap's post: why not ask the user to provide the actions required, instead of guessing what is best?
curl THE_URL | bash -s -- --help
curl THE_URL | bash -s -- --install-rhsm
curl THE_URL | bash -s -- --install-rhsm=6.7
curl --insecure THE_URL | bash -s -- --install-cert-anchor --use-installed=ansible
curl --insecure THE_URL | bash -s -- --install-cert-anchor --install=ansible
curl THE_URL | bash -s -- --register=basic
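A minimal sketch of the dispatch such a script might do (the handler functions here are hypothetical names of mine; the flags are the ones from the examples above):
for arg in "$@"
do
    case "${arg}" in
        --help)                 usage; exit 0 ;;
        --install-rhsm)         install_rhsm "" ;;
        --install-rhsm=*)       install_rhsm "${arg#*=}" ;;
        --install-cert-anchor)  install_cert_anchor ;;
        --install=*)            install_provider "${arg#*=}" ;;
        --use-installed=*)      use_installed "${arg#*=}" ;;
        --register=*)           register_host "${arg#*=}" ;;
        *)  echo "unknown option: ${arg}" >&2; exit 64 ;;
    esac
done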
While some options will take time to support on a given OS (certificate installation on non-Red Hat or non-Linux OSes, for example), a simple generic script should be doable. Adding lots of complexity will make the script un-auditable by admins, who will then likely not use it, or will hack on it themselves (creating a local maintenance issue).
Start the script by including a snippet with a clear and hopefully simple case "$(uname -X):$(uname -Y)" in statement, and establish clean variables to make further conditional code easier to maintain. Example (aside: these names come from a separate, older project of mine and are illustrative only): OS_FAMILY=(linux|bsd|sunos|…), OS_FAMILY_LINEAGE=(redhat|debian|freebsd|netbsd|solaris|illumos|…), OS_MAJOR=…, OS_MINOR=…, SYSTEM_ARCH=(X86|ARM|SPARC|…), SYSTEM_OS_BITS=(32|64), …
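A sketch of such a snippet (uname -s and -m are one concrete choice of fields; the case arms cover only a few systems, and on Linux a second pass, here over os-release(5), fills in the lineage and version):
# os-facts.sh - derive clean, exported variables from uname(1).
case "$(uname -s):$(uname -m)" in
    Linux:x86_64)   OS_FAMILY=linux; SYSTEM_ARCH=X86;   SYSTEM_OS_BITS=64 ;;
    Linux:aarch64)  OS_FAMILY=linux; SYSTEM_ARCH=ARM;   SYSTEM_OS_BITS=64 ;;
    FreeBSD:amd64)  OS_FAMILY=bsd;   SYSTEM_ARCH=X86;   SYSTEM_OS_BITS=64 ;;
    SunOS:sun4*)    OS_FAMILY=sunos; SYSTEM_ARCH=SPARC; SYSTEM_OS_BITS=64 ;;
    *)              OS_FAMILY=unknown ;;
esac
if [ "${OS_FAMILY}" = linux ] && [ -r /etc/os-release ]
then
    . /etc/os-release                    # defines ID, VERSION_ID, ...
    OS_FAMILY_LINEAGE="${ID}"            # e.g. rhel, debian
    OS_MAJOR="${VERSION_ID%%.*}"
    OS_MINOR="${VERSION_ID#*.}"
fi
export OS_FAMILY OS_FAMILY_LINEAGE OS_MAJOR OS_MINOR SYSTEM_ARCH SYSTEM_OS_BITS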
The main script would have a simple 'basic' registration type available, which registers a host using uname(1) or similar derived values (from that included snippet).
Given this script would likely be an ERB template, you could embed the URL (and, as previously stated, the certificate) in it, and separate the more complex parts (installing providers like RHSM, Puppet, Ansible, …) out into scripts (or modules) downloaded on demand. Those would also be templates, but since the fetch URL can now contain uname(1) or similar derived data, some of the complexity of deciding which implementation to use can be handled on the Foreman server (with templates, host or other variables, fallback defaults, etc.).
This would keep provider logic out of the main script template: it downloads modules (RHSM, Ansible, Puppet, …) only as required (that is, only when asked for via the command line or CLI defaults), and that fetch, as stated previously, gives the Foreman server enough detail to return the correct script/module for your OS, or to fall back to a generic script/module for that feature which tells the user to develop the functionality for their OS. Remember to keep each module succinct: package installation is very OS-lineage or version specific, but using ansible(1) once it is installed is likely common code. Always remember the DRY (Don't Repeat Yourself) principle.
This approach would also mean that, if the scripts/modules for, say, RHSM-6.X and RHSM-7.X differ considerably, the user has told you which version they want (or, if they didn't, the default can be controlled by the Foreman server), and the code downloaded can be the best script/module for the task, with that logic living on the Foreman server and not in the script. Again, the KISS principle (Keep It Simple, Stanley).
Supporting multiple OSes and versions of software is a pain! Separating as much conditional code as possible into modules which can be downloaded, executed, and have their exit values checked provides a simple interface to code to, and a way to search for the correct module to provide, without huge amounts of conditional logic or masses of code that may not be relevant to the developer or reader. A sketch of that interface follows.
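As an illustration only, the main-script side might look something like this (run_module, FOREMAN_URL, and the /bootstrap/module/ URL layout are hypothetical names of mine, not existing Foreman endpoints):
# Fetch a feature module from the server and run it as a child process;
# the uname(1)-derived variables travel to the module via the environment.
run_module() {
    module="${1}"; shift
    tmp="$(mktemp "/tmp/${module}.XXXXXX")" || return 70
    # The query string lets the server pick the best implementation.
    curl --silent --fail --output "${tmp}" \
        "${FOREMAN_URL}/bootstrap/module/${module}?os=${OS_FAMILY_LINEAGE}-${OS_MAJOR}" \
        || { rm -f "${tmp}"; return 69; }
    sh "${tmp}" "$@"
    status=${?}
    rm -f "${tmp}"
    return "${status}"
}

run_module RHSM --version=6.7 || echo "RHSM module failed" >&2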
This can be especially simple if the logic which provides the script/module on the Foreman server can use search paths (think PATH, MANPATH, RUBYPATH, PERLLIB, …) and search for the first instance of a matching template. Example: given a bunch of uname(1)-based data, construct a search path and search for the file (or, alternatively, construct a list of filenames to look for and return the first one found). The search path, or the list of search filenames, then just encodes a most-specific to least-specific ordering of implementations of that module, with a guaranteed fallback implementation.
While separating this code out into modules means more files to manage, each file should be MUCH simpler and will not need a huge amount of conditional code. Testing a module implementation as a standalone script on each target platform will be easier, and common code can still be included via the ERB template expansion process.
Furthering my fictitious example of --install-rhsm=6.7: the Foreman server knows the provided uname(1) or similar data, the module (RHSM), and the requested version (if provided), and can search for a module that can do that, or provide a fallback implementation that tells the user the operation is not supported. If using constructed filenames as the search mechanism (cf. PATH searching), then such a search might look, in order, for:
RHSM-EL${EL_VERSION}-${RHSM_MAJOR}.${RHSM_MINOR}
RHSM-EL${EL_VERSION}-${RHSM_MAJOR}
RHSM-${RHSM_MAJOR}.${RHSM_MINOR}
RHSM-${RHSM_MAJOR}
RHSM (the guaranteed fallback)
That is just one possible implementation.
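Expressed as code, that lookup is just a loop over the candidate names. A sketch, shown in shell for brevity (on the Foreman server it would more naturally live in the template-resolution Ruby code; MODULE_DIR and the .erb suffix are illustrative):
# Return the first (most specific) matching module template.
find_module() {
    for candidate in \
        "RHSM-EL${EL_VERSION}-${RHSM_MAJOR}.${RHSM_MINOR}" \
        "RHSM-EL${EL_VERSION}-${RHSM_MAJOR}" \
        "RHSM-${RHSM_MAJOR}.${RHSM_MINOR}" \
        "RHSM-${RHSM_MAJOR}" \
        "RHSM"
    do
        if [ -f "${MODULE_DIR}/${candidate}.erb" ]
        then
            echo "${MODULE_DIR}/${candidate}.erb"
            return 0
        fi
    done
    return 1    # cannot happen if the generic RHSM fallback is installed
}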
While some variants of a module (support for a given OS, or for a given provider on an OS) may be missing or outdated, in my opinion (and it is just that) reducing the complexity, by providing the implementation with clear divisions of responsibility and delegating specifics, will make that initial script readable, maintainable, and testable; the same will apply to the delegated modules.
One IMPORTANT thing (I just thought of, for this environment): those uname(1)-derived constants/variables should all live in an include file pulled into the main script template. Why? Because then the OS/product-specific modules' test harness (or developers on the command line) can load the same constants, and these are available to the modules when they are tested on their respective OSes without trying to register hosts. Again, this gives a defined environment of constants that a module can expect to be available to it, and a set of return codes to exit with, providing a richer and better-defined API between the main script and the implementation of each module.
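Concretely, a developer on the target OS could then exercise a module by hand under the exact environment the main script would provide (a sketch, with hypothetical file names, reusing the os-facts include from earlier):
# Load the same constants the main script's include would establish,
# exported so the module (a child process) inherits them.
. ./os-facts.sh
# Flip one value to exercise a different conditional branch, without
# needing a second machine:
OS_MAJOR=9 sh ./modules/RHSM --install=6.7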
I hope these approaches, which I have used in many projects (related to automation and platform management), provide food for thought: a way to create something for the Foreman project (and its more diverse community) and for your likely immediate goal of something for the Satellite ecosystem (which has a lot of non-Foreman requirements, like subscription management), with a set of templates (a code base) that is simple but flexible enough for all consumers. And don't be afraid to factor out any common code (functions) and constants into appropriate small include files that the main program and the sub-modules can re-use (especially with the suggested fork()/exec() model). Note: don't include the constants computation in the sub-modules; have the constants available via environment variables. It makes the code easier to test, especially on the command line: if the constants are loaded (and exported) into the current shell, as they would be in the main program, then developers can modify a value and run a changed module to see how the code behaves under different conditions (example: did the conditional statement they just introduced work the way they intended?).
That's probably enough of a brain dump for now,
Peter
PS: OK, one more thing, while I'm suggesting approaches to shell script development. I can't remember how I learnt the following (reading, or my own design), but I have been using it for 20+ years: try what I call DO=echo (and DO=:) shell script development. All non-destructive actions are written as in a regular shell script, but destructive actions (any command which changes permanent state, so mkdir(1) falls into that category) are prefixed with ${DO}, so that when the script executes with DO not set, the script runs as normal; when run with DO set, as in the following examples:
DO=echo ./myScript my1stArg ....
DO=: ./myScript my1stArg ....
The first invocation shows all the destructive commands that would have been executed, without actually running them. The second shows what the output would look like to the user: again the destructive commands are not executed, but this time they don't pollute the script's output. There is only one downside to this approach, and that is destructive commands that require pipes or I/O redirection. Basically you need to repeat the destructive command inside an if block, and quote all the special characters in the DO-being-non-empty case. Given how most scripts are written, it's not all that common that the destructive actions involve pipes and/or I/O redirections. Hopefully a simple (contrived) example will assist with understanding.
#!/bin/sh
TEMP_AREA="/tmp/$(basename "${0}").$$"
${DO} rm -rf "${TEMP_AREA}"
${DO} mkdir "${TEMP_AREA}"
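# A destructive command with a pipe/redirection: repeat it, quoting the
# shell metacharacters in the DO-set branch (see note above).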
if [ -n "${DO}" ]
then
${DO} command1 \| command2 \> ${TEMP_AREA}/pass1
else
command1 | command2 > ${TEMP_AREA}/pass1
fi
# ...
${DO} rm -rf "${TEMP_AREA}"
Note: technically, this introduces a huge security hole if a user is allowed to execute the command with elevated privileges, so don't use this for SUID/SGID shell scripts (but thankfully no one would ever do something so silly, given the history of SUID/SGID shell scripts). If you have elevated privileges and need to execute such a script, explicitly set DO='' in the environment passed to the script.
PPS: Can anyone tell I'm a "real" programmer who has had to write a lot of shell scripts? Let's not dwell on that too much, especially when they have had to be compatible with most implementations of /bin/sh!