>
>
>
>
>>
>> I've now been bitten by this in the opposite way. I'm starting a new Foreman
host dedicated to an HPC cluster, and wanted to remove the entries from my
departmental Foreman server as more of a housekeeping exercise. I went to
the new Foreman host's entry, set it to be "Unmanaged", then deleted the
host. I was somewhat shocked to find that it then also deleted the VM
entry in libvirt, AND the VM's virtual disk, essentially erasing every
trace of the new Foreman host. This isn't a huge issue, as everything I
had done on the new host was in the new Puppet master's manifests, but it
seemed a bit too easy to completely destroy every trace of a VM by removing
its entry in Foreman.
>
>
> I guess we can simply change it to do nothing when the VM is not
managed; please create an issue (and if possible send a patch too ;))
I'll see if I can throw something together to patch this.
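
Off the top of my head I'm picturing a guard along these lines (the class,
hook, and method names are just illustrative, not the actual Foreman
orchestration code):

  # Sketch only: skip the compute resource entirely for unmanaged hosts,
  # so deleting the Foreman record leaves the VM and its disks alone.
  class Host < ActiveRecord::Base
    before_destroy :destroy_compute_instance

    def destroy_compute_instance
      return true unless managed? && compute_resource.present?
      compute_resource.destroy_vm(uuid)
    end
  end

If something roughly like that looks sane I'll file the issue and attach it.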
>>
>>
>> Is there a non-destructive way to remove a host from Foreman when the
host was created via a Compute Resource? Does the compute resource have to
also be removed to ensure it can't communicate, or will that prevent
deletion of the host once the CR is gone? My goal is to transition these hosts
from one Foreman host to another. I realize the same Foreman host can
manage all resources, but this HPC system needs to be as isolated as
possible from the rest of the hosts I manage, including separate Puppet
masters and possibly different versions of the same Puppet modules.
>
>
> Did you consider using Orgs (a new feature)? If you don't need real
hardware separation, that could (hopefully) provide the separation you need.
>
The Foreman server this happened on is running the latest stable RPM release;
is this new feature something in 1.1 RC1?
> Note that if you import a VM on another Foreman, it would be visible as
bare metal (as Foreman has no idea about the CR used).
> We are looking at adding support for fixing this via the UI/API, but at the
moment the only way to reattach them is to create the CR in the new
Foreman and update the record with the CR id and instance UUID.
>
Is the current method to update the CR id via the console rather than the UI?
My 1.0.1 install is the only one with CRs, and so far the ability to modify a
CR seems to be available only in the 'new' action.
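
If it is console-only, I'm guessing it would look something like this (the CR
name, hostname, and UUID below are made up, and the column names are just my
assumption of how the host record is wired up):

  # Recreate the CR on the new Foreman first, then point the host record at it.
  cr   = ComputeResource.find_by_name('hpc-libvirt')       # hypothetical CR name
  host = Host.find_by_name('node01.hpc.example.com')       # hypothetical host
  host.compute_resource_id = cr.id
  host.uuid = 'instance-uuid-from-libvirt'                 # UUID of the existing VM
  host.save!

Is that roughly it, or is there more bookkeeping involved?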
>
> Ohad
>>
>>
>> Thanks
>> - Trey
>>
>>
>>
>>>
>>>
>>>
>>>
>>>>
>>>> I have an issue right now where I cannot delete a host because the
compute resource (oVirt) it was assigned to no longer exists. The oVirt
setup was for testing, which has had to be temporarily suspended, and
when I try to delete the host I get this error…
>>>>
>>>> Failed to destroy a compute Wehner (oVirt) instance
dc-aggregator-old.tamu.edu: No route to host - connect(2)
/usr/lib/ruby/gems/1.8/gems/rbovirt-0.0.12/lib/rbovirt.rb:131:in `handle_fault'
>>>>
>>>> This wouldn't really be a problem, but I've since re-created
"dc-aggregator.tamu.edu" on a libvirt compute resource, and whenever the
new host reports to Foreman it updates the old record's reports, not the
new host's. That is when I renamed the old record to "dc-aggregator-old",
which did nothing.
>>>>
>>>> As a temporary solution, I did the following…
>>>>
>>>> # Detach the host from the unreachable compute resource so it can be deleted:
>>>> @host = Host.find_by_name('dc-aggregator-old.tamu.edu')
>>>> @host.compute_resource = nil
>>>> @host.save!
>>>>
>>>> This allowed me to delete the host successfully. Please let me know
if this could leave stale data behind in the DB.
>>>
>>>
>>> It won't leave anything in the DB, but it might leave a VM running
somewhere.
>>>
>>> The idea was to not let you delete it from Foreman, since otherwise
someone would forget to remove their EC2 instance and keep paying money.
However, I agree that deleting a CR and leaving you stuck like this is wrong
too; can you open a bug about it?
>>>
>>> Should we disallow deleting a CR while it still has hosts attached?
>>>
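That seems reasonable to me. Something like the sketch below is what I would
picture (again just an illustration, not actual Foreman code):

  # Sketch: refuse to delete a compute resource that still has hosts attached.
  class ComputeResource < ActiveRecord::Base
    has_many :hosts
    before_destroy :ensure_no_hosts

    private

    def ensure_no_hosts
      return true if hosts.empty?
      errors.add(:base, "#{hosts.count} host(s) still use this compute resource")
      false  # returning false halts the destroy
    end
  end
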
>>>>
>>>>
>>>> To prevent others from running into this, I'd be interested to know if
this is a known issue, or on the roadmap, so that hosts don't become
un-removable once their corresponding compute resource is inaccessible.
This also leads to another issue I've run into: when live-migrating
between KVM hosts, the compute resource cannot be updated / changed to
reflect the migration.
>>>
>>>
>>> I know… this is a limitation of the libvirt model. After all, I don't
want Foreman to manage the VM life cycle too (hence there is no migrate
option in the UI, even though it would be trivial).
>>>
>>> Ideally, if you have a large set of hypervisors, you should not use
libvirt (but maybe oVirt instead).
>>> But I'm more than open to suggestions on how to handle this
scenario…
>>>
>>> Thanks,
>>> Ohad
>>>>
>>>>
>>>> Thanks
>>>> - Trey
>>>>
>
>
···
On Dec 8, 2012 1:51 PM, "Ohad Levy" wrote:
> On Sat, Dec 8, 2012 at 12:39 AM, treydock wrote:
>> On Sunday, November 11, 2012 3:20:31 AM UTC-6, ohad wrote:
>>> On Fri, Nov 9, 2012 at 7:55 PM, treydock wrote: