[katello 2.1] Pulp node sync never finishes

Hi All,

When I synchronize content for deployment through a remote capsule server,
the following task ends up hung and suspended:

Actions::Pulp::Consumer::SyncNode

Is this a known issue? I've seen some bug reports, but I'm unclear whether
there's a resolution or workaround other than cancelling the "SyncNode"
task. One Bugzilla in particular, though it is several months
old: https://bugzilla.redhat.com/show_bug.cgi?id=1134594

I only see this issue when using a capsule server. Content synchronizes
fine between the katello server and the capsule server, but the tasks just
hang at this action.

Thanks!


There are plenty of reasons why a capsule won't sync. You need to look on
the capsule itself: make sure the goferd service is started, and look
at the logs (e.g. /var/log/messages or journalctl) to see what happens
there when you initiate a sync.
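
If you want to script those first checks, here is a minimal sketch in Python
(goferd itself is Python) that verifies goferd is running and pulls recent
gofer/qpid errors out of syslog. It assumes an EL6-style capsule (the "service"
command and /var/log/messages; adjust for systemd/journalctl) and needs to run
as root so the log is readable:

#!/usr/bin/env python
# Quick capsule sanity check: is goferd running, and what has it logged lately?
# Sketch only: assumes an EL6-style box (service + /var/log/messages); run as root.
import subprocess

def goferd_running():
    # "service goferd status" exits 0 when the daemon is up
    return subprocess.call(['service', 'goferd', 'status']) == 0

def recent_gofer_errors(path='/var/log/messages', limit=20):
    # Return the last `limit` syslog lines that look like gofer/qpid errors
    hits = []
    with open(path) as log:
        for line in log:
            if 'ERROR' in line and ('gofer' in line or 'qpid' in line):
                hits.append(line.rstrip())
    return hits[-limit:]

if __name__ == '__main__':
    print('goferd running: %s' % goferd_running())
    for line in recent_gofer_errors():
        print(line)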

··· On Tue, Mar 17, 2015 at 05:14:49AM -0700, GD wrote:




What was the resolution to this issue? I am currently experiencing it. I
have upgraded to the latest nightly 2.3 on the Katello and Capsule servers.
However, the Actions::Pulp::Consumer::SyncNode action immediately goes to a
Suspended state. The task initially kicked off and was syncing; it got through
about 30 GB before stopping.

The main error that I see is KeyError: 'pop(): dictionary is empty'

Here are some of the logs. This error just keeps repeating itself.

> Mar 17 12:25:47 capsulename goferd: [ERROR][worker-0] gofer.transport.qpid.endpoint:98 - Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/endpoint.py", line 95, in __pop
>     ssn.acknowledge()
>   File "<string>", line 6, in acknowledge
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 708, in acknowledge
>     self._ecwait(lambda: not [m for m in messages if m in self.acked])
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 50, in _ecwait
>     result = self._ewait(lambda: self.closed or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 580, in _ewait
>     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 219, in _ewait
>     self.check_error()
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 212, in check_error
>     raise e
> InternalError: Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 652, in write
>     op.dispatch(self)
>   File "/usr/lib/python2.6/site-packages/qpid/ops.py", line 84, in dispatch
>     getattr(target, handler)(self, *args)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 867, in do_session_detached
>     sst = self._sessions.pop(dtc.channel)
> KeyError: 'pop(): dictionary is empty'
> Mar 17 12:25:47 capsulename goferd: [ERROR][worker-0] gofer.agent.rmi:324 - send (progress), failed
> Mar 17 12:25:47 capsulename goferd: [ERROR][worker-0] gofer.agent.rmi:324 - Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/gofer/agent/rmi.py", line 320, in report
>     details=self.details)
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/producer.py", line 85, in send
>     return send(self, destination, ttl, **body)
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/producer.py", line 59, in send
>     sender = endpoint.session().sender(address)
>   File "<string>", line 6, in sender
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 606, in sender
>     sender._ewait(lambda: sender.linked)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 814, in _ewait
>     result = self.session._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 580, in _ewait
>     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 219, in _ewait
>     self.check_error()
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 212, in check_error
>     raise e
> InternalError: Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 652, in write
>     op.dispatch(self)
>   File "/usr/lib/python2.6/site-packages/qpid/ops.py", line 84, in dispatch
>     getattr(target, handler)(self, *args)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 867, in do_session_detached
>     sst = self._sessions.pop(dtc.channel)
> KeyError: 'pop(): dictionary is empty'
> Mar 17 12:25:47 capsulename pulp: requests.packages.urllib3.connectionpool:INFO: Starting new HTTPS connection (1): servername.fqdn
> Mar 17 12:25:48 capsulename goferd: [ERROR][worker-0] gofer.transport.qpid.endpoint:98 - <Session 9c88c7e5-76b9-43c1-b383-cb4345845434:20>
> Mar 17 12:25:48 capsulename goferd: [ERROR][worker-0] gofer.transport.qpid.endpoint:98 - Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/endpoint.py", line 95, in __pop
>     ssn.acknowledge()
>   File "<string>", line 6, in acknowledge
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 708, in acknowledge
>     self._ecwait(lambda: not [m for m in messages if m in self.acked])
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 50, in _ecwait
>     result = self._ewait(lambda: self.closed or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 580, in _ewait
>     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 219, in _ewait
>     self.check_error()
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 212, in check_error
>     raise e
> InternalError: Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 652, in write
>     op.dispatch(self)
>   File "/usr/lib/python2.6/site-packages/qpid/ops.py", line 84, in dispatch
>     getattr(target, handler)(self, *args)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 867, in do_session_detached
>     sst = self._sessions.pop(dtc.channel)
> KeyError: 'pop(): dictionary is empty'
> Mar 17 12:25:48 capsulename goferd: [ERROR][worker-0] gofer.agent.rmi:324 - send (progress), failed
> Mar 17 12:25:48 capsulename goferd: [ERROR][worker-0] gofer.agent.rmi:324 - Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/gofer/agent/rmi.py", line 320, in report
>     details=self.details)
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/producer.py", line 85, in send
>     return send(self, destination, ttl, **body)
>   File "/usr/lib/python2.6/site-packages/gofer/transport/qpid/producer.py", line 59, in send
>     sender = endpoint.session().sender(address)
>   File "<string>", line 6, in sender
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 606, in sender
>     sender._ewait(lambda: sender.linked)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 814, in _ewait
>     result = self.session._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 580, in _ewait
>     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 219, in _ewait
>     self.check_error()
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 212, in check_error
>     raise e
> InternalError: Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 652, in write
>     op.dispatch(self)
>   File "/usr/lib/python2.6/site-packages/qpid/ops.py", line 84, in dispatch
>     getattr(target, handler)(self, *args)
>   File "/usr/lib/python2.6/site-packages/qpid/messaging/driver.py", line 867, in do_session_detached
>     sst = self._sessions.pop(dtc.channel)
> KeyError: 'pop(): dictionary is empty'
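
As far as I can tell, the KeyError itself is just Python dict behaviour rather
than the real problem: qpid's driver runs sst = self._sessions.pop(dtc.channel)
when the broker reports a detached session, and under Python 2 (the capsule
runs python2.6) pop() on an already-empty dict raises exactly that message. A
tiny illustration of the mechanism, not a fix:

# Python 2: pop() on an empty dict raises KeyError('pop(): dictionary is empty')
sessions = {}            # the driver's channel -> session map, already emptied
try:
    sessions.pop(20)     # what driver.py:867 does for a stray "session detached"
except KeyError as err:
    print(err)           # 'pop(): dictionary is empty'
sessions.pop(20, None)   # with a default, pop() returns None instead of raising

So goferd is being told about a detach for a session it no longer tracks; the
interesting question is why the sessions are being torn down in the first place.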

Thanks.

I forgot to mention that yes, goferd is running (and I've tried restarting
it). I've restarted the capsule as well but this had no effect.

Does anyone have any ideas for this? It's still occurring.

Thanks

What is the Capsule doing? Is there any process pegged at 100% CPU? Does
syslog show any activity?

··· On Fri, May 29, 2015 at 2:19 PM, Matthew Ceroni wrote:




Do you have external Capsules? Have you re-registered any of them? Any
qpid-related errors in /var/log/messages?

This can occur for a number of reasons. Some happen when the Capsule is
unreachable, and some can occur if the representation of the Capsule gets
out of sync. For 2.2 we have changed the logic to account for a number of
these, and there is an open issue for 2.2.1 to better handle timing out
when the Capsule is unreachable.

Eric

··· On Wed, May 6, 2015 at 11:42 AM, GD wrote:




Hi Eric,

Thanks for the reply. Yes, I have one external Capsule, in a different
geographic location. I don't see any qpidd related errors in
/var/log/messages (with a grep -i qpid) or anything related to the external
capsule.

Do you recommend re-registering the capsule? Should I follow a specific
procedure for re-registering in order to ensure it syncs properly? Are
there any "resync" commands I can run to test this sort of thing?

Thanks.

Re-registration in 2.1 can lead to issues – that is to say, don't
re-register. Can you find the task for that node sync on the tasks page?
What state is it in? What step is it stuck on?
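
If the Tasks page is awkward to search, you can also ask the foreman-tasks API
directly for paused tasks. A rough, untested sketch with python-requests; I'm
assuming the /foreman_tasks/api/tasks endpoint and its "search" parameter here,
so verify them against the apidoc on your own server:

# List paused tasks so you can spot the stuck node sync (illustrative only;
# endpoint and search syntax assumed, check /apidoc on your Katello server).
import requests

FOREMAN = 'https://katello.example.com'   # hypothetical hostname
AUTH = ('admin', 'changeme')              # hypothetical credentials

resp = requests.get(FOREMAN + '/foreman_tasks/api/tasks',
                    params={'search': 'state = paused'},
                    auth=AUTH, verify=False)  # or point verify= at your CA bundle
resp.raise_for_status()
for task in resp.json().get('results', []):
    print('%s %s %s %s' % (task.get('id'), task.get('label'),
                           task.get('state'), task.get('result')))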

Eric

··· On Wed, May 6, 2015 at 3:04 PM, GD wrote:




OK, I registered the capsule a while ago in an attempt to fix this issue… I
probably made it worse.

I have a bunch of Actions::Katello::Repository::NodeMetadataGenerate tasks
stuck at 75% (action Actions::Pulp::Consumer::SyncNode). Any time I see
a task with an action of SyncNode, it hangs and pauses or throws an error.
It never finishes unless I cancel the SyncNode task.

So, the step where the issue manifests itself is "SyncNode."

Thanks.

Do you have multiple content hosts with the same hostname where said
hostname is the hostname of your Capsule?

··· On Wed, May 6, 2015 at 3:38 PM, GD wrote:




Nope. There is only one content host with the same hostname as the capsule,
because they are one and the same.

Can a capsule be safely and cleanly removed, rebuilt, and then re-added in
2.1?

Thanks.

Yes. Just make sure you first either delete or unregister the capsule's
content host, and also remove the smart proxy entry for it.
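
Roughly, from the Katello box, that cleanup could be driven over the API before
you rebuild the capsule. Untested sketch; the paths are my recollection of the
2.1-era API (content hosts are still "systems" there, and smart proxies live
under the Foreman API), so check them against the apidoc on your server first:

# Sketch: unregister the capsule's content host, then delete its smart proxy,
# before rebuilding and re-adding the capsule. Endpoints assumed, verify first.
import requests

SERVER = 'https://katello.example.com'   # hypothetical hostname
AUTH = ('admin', 'changeme')             # hypothetical credentials
CONTENT_HOST_ID = 'abc123'               # the capsule's content host (system) UUID
SMART_PROXY_ID = 2                       # the capsule's smart proxy ID

# 1) remove the capsule's content host registration
requests.delete('%s/katello/api/systems/%s' % (SERVER, CONTENT_HOST_ID),
                auth=AUTH, verify=False).raise_for_status()

# 2) remove the smart proxy entry for the capsule
requests.delete('%s/api/smart_proxies/%s' % (SERVER, SMART_PROXY_ID),
                auth=AUTH, verify=False).raise_for_status()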

Eric

··· On Thu, May 7, 2015 at 10:13 AM, GD wrote:




OK, I just ran another content publish and got a different error; it
errored out at 99%. I expect this is related, but this is the first
time I didn't see the sync-node error.

Actions::Katello::Foreman::ContentUpdate

Output:

{}

Exception:

NoMethodError: undefined method `values' for []:Array

Backtrace:

/usr/share/foreman/app/services/puppet_class_importer.rb:78:in `new_classes_for'
/usr/share/foreman/app/services/puppet_class_importer.rb:39:in `changes'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/services/katello/puppet_class_importer_extensions.rb:22:in `update_environment'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/katello/foreman.rb:36:in `update_puppet_environment'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/katello/foreman/content_update.rb:32:in `finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:465:in `block (2 levels) in execute_finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:31:in `block in finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:57:in `block (2 levels) in as_remote_user'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/katello/util/thread_session.rb:84:in `pulp_config'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:43:in `as_pulp_user'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:56:in `block in as_remote_user'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/katello/util/thread_session.rb:91:in `cp_config'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:38:in `as_cp_user'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:55:in `as_remote_user'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/remote_action.rb:31:in `finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action/progress.rb:30:in `with_progress_calculation'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action/progress.rb:22:in `finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/keep_locale.rb:27:in `block in finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/keep_locale.rb:34:in `with_locale'
/opt/rh/ruby193/root/usr/share/gems/gems/katello-2.1.2/app/lib/actions/middleware/keep_locale.rb:27:in `finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/world.rb:30:in `execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:464:in `block in execute_finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in `block in with_error_handling'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in `catch'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:365:in `with_error_handling'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:463:in `execute_finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/action.rb:230:in `execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:9:in `block (2 levels) in execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract.rb:152:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract.rb:152:in `with_meta_calculation'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:8:in `block in execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:22:in `open_action'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/execution_plan/steps/abstract_flow_step.rb:7:in `execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:72:in `run_step'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:57:in `dispatch'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:64:in `block in run_in_sequence'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:64:in `each'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:64:in `all?'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:64:in `run_in_sequence'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:53:in `dispatch'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:28:in `block (2 levels) in finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:41:in `finalize_phase'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:41:in `finalize_phase'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:26:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:16:in `pass'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware.rb:41:in `finalize_phase'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/stack.rb:22:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/middleware/world.rb:30:in `execute'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:27:in `block in finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/database_statements.rb:192:in `transaction'
/opt/rh/ruby193/root/usr/share/gems/gems/activerecord-3.2.8/lib/active_record/transactions.rb:208:in `transaction'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/transaction_adapters/active_record.rb:5:in `transaction'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/sequential_manager.rb:24:in `finalize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/worker.rb:23:in `block in on_message'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:859:in `block in assigns'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in `tap'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in `assigns'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:138:in `match_value'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:116:in `block in match'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in `each'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in `match'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/parallel/worker.rb:17:in `on_message'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:82:in `on_envelope'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:72:in `receive'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `block (2 levels) in run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `loop'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `block in run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `catch'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:13:in `block in initialize'
/opt/rh/ruby193/root/usr/share/gems/gems/logging-1.8.1/lib/logging/diagnostic_context.rb:323:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/logging-1.8.1/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
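
If I'm reading that right, the NoMethodError itself just means the Puppet class
importer was handed an (empty) Array where it expected a Hash; Ruby Arrays have
no `values' method. The same mismatch expressed in Python terms (an analogy
only, the actual code is Ruby):

# Rough analogue of the failure: the importer expected a mapping, got a list.
classes = []                     # what came back
try:
    classes.values()             # lists have no .values(); Ruby raises NoMethodError
except AttributeError as err:
    print(err)
expected = {'production': ['apache', 'ntp']}   # a hash is what .values makes sense on
print(list(expected.values()))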

I'd like to add that my problem went away when I updated to Katello 2.2.
Good Work!

I think it was not the update to Katello 2.2 that fixed your problem; it was
running "katello-installer --upgrade" that fixed it.

I had the exact same issue with Katello 2.2 already installed and up and
running, when a yum update of puppet and puppet-server from version 3.7.5 to
3.8.1 broke the content-view publish task. As you say, it stopped at 99
percent progress with the same error message as above.

Running "katello-installer --upgrade" again on a local console (remote login
is not recommended for katello-installer upgrades or installs) fixed the
problem and restored content-view publishing to the expected behaviour.

Martin

··· On Tuesday, May 12, 2015 at 15:22:23 UTC+2, GD wrote:

That sounds to me like the puppet upgrade overwrote some existing config
files, which re-running katello-installer then fixed?

-Justin

··· On 05/29/2015 09:58 AM, Martin Leweling wrote:


