FDI build jenkins job does not work

Hey, I am hitting a regression after the recent Jenkins job change.

I am trying to build the 3.5 release of FDI via the recently changed http://ci.theforeman.org/job/foreman-discovery-image-publish job, but I am hitting:

    hudson.plugins.git.GitException: Command "/usr/bin/git fetch --tags --progress https://github.com/{repoowner}/foreman-discovery-image +refs/heads/*:refs/remotes/origin/*" returned status code 128:
    stdout:
    stderr: fatal: unable to access 'https://github.com/{repoowner}/foreman-discovery-image/': The requested URL returned error: 400

Trying with the following inputs:

  • proxy_repository: 1.18
  • output_dir: releases/3.5
  • repoowner: theforeman
  • branch: master

Any ideas? @komidore64

Looks like there were two typos in the SCM module. I've submitted a PR fixing both.

Thanks.

We've switched to the OpenStack provider for Vagrant.

Thanks folks, were you able to run the job successfully? I don't see any results uploaded to:

http://downloads.theforeman.org/discovery/releases/

I'd expect a 3.5 subdirectory there. Shall I run it myself?

Still does not work. https://github.com/theforeman/foreman-infra/pull/707#issuecomment-399904125

Let's discuss this at our internal RELENG meeting about FDI.

I am trying to change the Vagrantfile; I got libvirt working, but OpenStack keeps giving me:

    Either 'image' or 'volume_boot' configuration must be provided (VagrantPlugins::Openstack::Errors::MissingBootOption)

But I am providing it…

    SHELLARGS = []
    SHELLARGS << (ENV['repoowner'] || '')
    SHELLARGS << (ENV['branch'] || '')
    SHELLARGS << (ENV['proxy_repo'] || '')

    Vagrant.configure("2") do |config|
      config.ssh.username = "vagrant"
      config.ssh.pty = true

      config.vm.define "el7", primary: true do |machine|
        machine.vm.hostname = "fdi-builder-vm"
        machine.vm.provision :shell, :path => 'build_image.sh', :args => SHELLARGS

        config.vm.provider :libvirt do |lv|
          lv.server_name = machine.vm.hostname
          lv.vm.box = "centos/7"
          lv.memory = 1024
        end

        # openstack needs credentials in ~/.vagrant.d/Vagrantfile and existence
        # of a dummy box:
        # vagrant box add dummy https://github.com/cloudbau/vagrant-openstack-plugin/raw/master/dummy.box
        config.vm.provider :openstack do |os|
          os.vm.box = 'dummy'
          os.server_name = machine.vm.hostname
          os.flavor = /m1.tiny/
          os.image = /CentOS 7/
        end
      end
    end

I am not able to get the FDI job running via the Vagrant OpenStack plugin. Why did we migrate to it again?

Is there any alternative? Perhaps running livecd-creator directly on the nodes? I am not very familiar with our Jenkins infra, but I know we have CentOS and Debian nodes, so it could either run only on the CentOS nodes or we could use a container.
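For illustration, a direct run on a CentOS node would be roughly this (untested sketch; the kickstart name, ISO label and cache path are placeholders, not our actual setup):

    sudo livecd-creator --config=fdi-image.ks \
        --fslabel=fdi-3.5.0 \
        --cache=/var/cache/live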

What do the @packaging folks say about building FDI in brew? Is that doable?

This is becoming an issue I am unable to solve without further assistance. Who owns our Jenkins? Can you help me with that?

How feasible is building the livecd in our koji, @packaging? It should only be a matter of setting up tags and permissions IMHO; the only issue I can see is the space limitation. We could work around that by deleting livecd builds/scratch builds earlier.
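For reference, my understanding is the setup would be roughly this (a sketch only; the tag, target and account names are made up, not our real configuration):

    koji add-tag fdi-el7-build
    koji add-tag fdi-el7-candidate
    koji add-target fdi-el7 fdi-el7-build fdi-el7-candidate
    # koji has a dedicated 'livecd' permission for spin-livecd submitters
    koji grant-permission livecd jenkins-builder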

@ekohl Does any of this ring a bell from the transition to the vagrant-openstack provider?

I have tried to submit a livecd task:

    kojikat spin-livecd --scratch --background --repo=http://yum.theforeman.org/latest/el7/x86_64 foreman-discovery-image 3.5.0 foreman-nightly-rhel7 x86_64 fdi-image.ks

But it failed with LiveCDError: LiveCD functions not available.

http://koji.katello.org/koji/taskinfo?taskID=113509

After reading some docs and code, it looks like one dependency is missing on the koji builder.

We already have pykickstart and python-hashlib present. Is it possible to do a short-term koji maintenance, installing the missing package and restarting kojid:

    yum install pycdio
    systemctl restart kojid
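And a quick sanity check afterwards, just to confirm the package landed and the builder came back up:

    rpm -q pycdio
    systemctl status kojid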

The missing package is in EPEL7, and since this does not upgrade anything it should be low-risk. That should enable submitting livecd tasks. It would be much nicer to build FDI in koji rather than in OpenStack, I think. If I get this working, the only problem left to solve is cleaning up build artifacts sooner - that should be just an easy cronjob with a find command.
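Something along these lines would probably do (a sketch only; the path and the retention period are assumptions, not our actual koji layout):

    #!/bin/sh
    # prune livecd ISOs from scratch builds older than 14 days
    find /mnt/koji/scratch -type f -name '*.iso' -mtime +14 -delete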

Any assistance would be really appreciated; 1.18 came out without an FDI build and the 1.19 RC is out. I am still unable to build FDI on our infra.

And I have crazy ideas on my mind, like building it locally and uploading it to our download site. I had dreams about it yesterday… :wink:

Because the old plugin kept us on an ancient version and it was unmaintained.

The problem is the incorrect override. The dummy box is no longer needed; that was only the case with vagrant-rackspace. I also had to change the flavor because Rackspace has no m1.tiny flavor.

    # the first block argument is the provider config, the second is the
    # machine override -- server_name/flavor/image go on the provider
    config.vm.provider :openstack do |p, os|
      os.vm.box = nil
      p.server_name = machine.vm.hostname
      p.flavor = /4GB/
      p.image = /CentOS 7/
    end

That said, I think building in koji may be the better solution, but fixing the vagrant setup is a good short-term one.

@ekohl Can you elaborate? The snippet on its own won't work; I obviously need some credentials so I can test this locally. How did you get it running through OpenStack then? I need to test this first.
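From the plugin README, I guess I need something like this in ~/.vagrant.d/Vagrantfile (untested sketch; every value below is a placeholder):

    cat >> ~/.vagrant.d/Vagrantfile <<'EOF'
    Vagrant.configure('2') do |config|
      config.vm.provider :openstack do |os|
        os.openstack_auth_url = 'https://keystone.example.org:5000/v2.0'
        os.username           = 'builduser'
        os.password           = 'secret'
        os.tenant_name        = 'foreman-ci'
      end
    end
    EOF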

We really need the 3.5.0 build; we are so late with this.

@ehelms any chance of getting pycdio installed on koji and kojid restarted? According to the documentation, that should enable building livecds.

That is what I had to change compared to what you posted in FDI build jenkins job does not work. I can guide you offline on how to test it.


Thanks, that'd be great. Or if you have it working, just open a PR I can test and merge. I tried to adapt my snippet above, but I am still getting:

    Either 'image' or 'volume_boot' configuration must be provided (VagrantPlugins::Openstack::Errors::MissingBootOption)

I see you still have a PR open in the FDI repo, but it does not work for me.