Katello 3.0.2 (Saison) Released

Katello 3.0.2 has been released, supplying fixes for bugs and major upgrade
issues found by some awesome users. Please see the changelog for more
information:
https://github.com/Katello/katello/blob/KATELLO-3.0/CHANGELOG.md

Installation
============

For installation, please see the instructions at:

Server: http://www.katello.org/docs/3.0/installation/index.html
Capsule: http://www.katello.org/docs/3.0/installation/capsule.html

Bug reporting
=============

If you come across a bug in your testing, please file it and note the
version of Katello that you’re using in the report and set the release
to 3.0.2.

http://projects.theforeman.org/projects/katello/issues/new


Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

Hi

I still cannot install the new Katello because of this issue:
Bug #15507: Katello 3.0.1 installation fails - Crane: Failed to configure CA certificate chain! (http://projects.theforeman.org/issues/15507)

It fails every time… Does anyone know a workaround?

Edgars


Hi Eric

Did you have a chance to look into this issue?

Edgars


I will check the certs again, though they are standard certs used across our
infra.

The Crane config is using the default CA certs, not my custom ones:

SSL directives

SSLEngine on
SSLCertificateFile "/etc/pki/katello/certs/katello-apache.crt"
SSLCertificateKeyFile "/etc/pki/katello/private/katello-apache.key"
SSLCertificateChainFile "/etc/pki/katello/certs/katello-default-ca.crt"
SSLCACertificatePath "/etc/pki/tls/certs"
SSLCACertificateFile "/etc/pki/katello/certs/katello-default-ca.crt"


Hi Eric

I re-generated all the certs and re-downloaded the CA cert, and now it works!
I have no idea what was wrong with them. Sorry about all the noise :)

Edgars


Hello

I seem to be having some odd behavior with this version. With a fresh
install on CentOS 7 I have set up a product, which completes normally, but
when I discover a repo and save it I get these metadata tasks that seem
to just wait forever. Any ideas what the culprit could be?

Action:

Actions::Pulp::Repository::DistributorPublish

State: waiting for Pulp to start the task
Input:

{"pulp_id"=>"test-centos-6_updates_x86_64",
"distributor_type_id"=>"yum_distributor",
"source_pulp_id"=>nil,
"dependency"=>nil,
"remote_user"=>"admin",
"remote_cp_user"=>"admin",
"locale"=>"en"}

Output:

{"pulp_tasks"=>
[{"exception"=>nil,
"task_type"=>"pulp.server.managers.repo.publish.publish",
"_href"=>"/pulp/api/v2/tasks/a40815d5-9ba4-463a-8216-338cdcc4b1cc/",
"task_id"=>"a40815d5-9ba4-463a-8216-338cdcc4b1cc",
"tags"=>
["pulp:repository:test-centos-6_updates_x86_64", "pulp:action:publish"],
"finish_time"=>nil,
"_ns"=>"task_status",
"start_time"=>nil,
"traceback"=>nil,
"spawned_tasks"=>[],
"progress_report"=>{},
"queue"=>"None.dq",
"state"=>"waiting",
"worker_name"=>nil,
"result"=>nil,
"error"=>nil,
"_id"=>{"$oid"=>"579f32aa95c48c6a54257674"},
"id"=>"579f32aa95c48c6a54257674"}],
"poll_attempts"=>{"total"=>100, "failed"=>0}}

Edgars,

I will test this today and report back to you.

Eric



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

I have tried to reproduce this myself to no avail. Anything special about
your custom certs? Can you check whether the CA cert being used by Crane is
your custom one? Also the server cert? The filenames won't reflect that, so
check the contents.
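
For example, a minimal check along these lines (paths taken from the Crane config quoted earlier in this thread; adjust to wherever your custom CA lives):

# Print subject, issuer and fingerprint of the CA file Crane points at...
openssl x509 -in /etc/pki/katello/certs/katello-default-ca.crt -noout -subject -issuer -fingerprint
# ...and of the custom CA passed to the installer.
openssl x509 -in /etc/pki/tls/certs/CompanyInternalCA.crt -noout -subject -issuer -fingerprint
# Matching fingerprints mean Crane has your CA; different ones mean it is
# still using the default CA regardless of the filename.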


No worries! Certs don't always make it easy -- glad to hear it's working!

Eric



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University


Are there any related errors in /var/log/messages?


Edgars,

I tested this scenario today and could not duplicate your results. Is there
anything special about your custom certificates? Wildcard? Attributes
special to them? This is my test scenario:

https://github.com/Katello/forklift/pull/247/files



Eric D. Helms
Red Hat Engineering
Ph.D. Student - North Carolina State University

Sounds like an SELinux issue. I ran into something similar when I copied /
generated certificates in my home directory and then moved them to /etc/httpd.
If you have the files backed up, you can try running 'ls -alZ' in the
directory to check the file contexts.
Running 'restorecon <filename>' on your certificate files would have
resolved it if SELinux was the issue.
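
A minimal sketch of that check and fix (the directory is a placeholder; point it at wherever your certs ended up):

# Show SELinux contexts; files moved out of a home directory often keep
# user_home_t instead of cert_t, which httpd cannot read.
ls -alZ /etc/pki/katello/certs/
# Restore the default contexts for everything in the directory.
restorecon -Rv /etc/pki/katello/certs/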



I do see this issue in /var/log/messages, but I'm not sure if it's related:

Aug 1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416) beat
raised exception <class 'qpid.messaging.exceptions.Timeout'>:
Timeout('Connection attach timed out',)
Aug 1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416)
Traceback (most recent call last):
Aug 1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416) File
"/usr/lib/python2.7/site-packages/celery/apps/beat.py", line 112, in
start_scheduler
Aug 1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416)
beat.start()



That is the likely culprit :)

Next time your task hangs, check the "/about" page on your Katello
instance and ensure everything under "Backend System Status" says "OK"
with no further message.

If there are pulp errors, a possible quick fix is to ensure qpidd is
still running, then restart pulp_workers, pulp_celerybeat and
pulp_resource_manager. I suspect your task will get picked up after that.

Also, please check dmesg for out-of-memory errors. There are some other
possible things we can check, but I would be curious first about the
backend system status output.
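
As a concrete sketch of those checks (systemd unit names as used elsewhere in this thread):

# Make sure the broker is still up.
systemctl status qpidd
# Restart the Pulp services so the waiting task can get picked up.
systemctl restart pulp_workers pulp_celerybeat pulp_resource_manager
# Look for out-of-memory kills that could explain dead workers.
dmesg | grep -iE 'killed process|out of memory'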


No, there is nothing special about our certificates. No wildcard, not even a
SAN. 2048 bits.

Why is Crane needed? Can I disable it? Can I disable everything related to
Puppet as we don't need that functionality?

I just tested it and it failed again, here is my full install command:

sudo foreman-installer --scenario katello \
--certs-server-cert="/etc/pki/tls/certs/katello.tld.crt" \
--certs-server-cert-req="/etc/pki/tls/csr/katello.tld.csr" \
--certs-server-key="/etc/pki/tls/private/katello.tld.key" \
--certs-server-ca-cert="/etc/pki/tls/certs/CompanyInternalCA.crt" \
--foreman-admin-email="name@company.tld" \
--foreman-admin-first-name="Name" \
--foreman-admin-last-name="LastName" \
--foreman-admin-password="SomeCustomPassword" \
--foreman-initial-organization="Company" \
--katello-num-pulp-workers="24" \
--katello-proxy-url="http://corporate.proxy.tld" \
--katello-proxy-port="8080" \
--verbose

Errors:
[ERROR 2016-07-08 10:48:00 verbose] Could not start Service[httpd]:
Execution of '/usr/share/katello-installer-base/modules/service_wait/bin/service-wait
start httpd' returned 1: Redirecting to /bin/systemctl start httpd.service
[ INFO 2016-07-08 10:48:00 verbose] Job for httpd.service failed because
the control process exited with error code. See "systemctl status
httpd.service" and "journalctl -xe" for details.
[ERROR 2016-07-08 10:48:00 verbose] /Stage[main]/Apache::Service/Service[
httpd]/ensure: change from stopped to running failed: Could not start
Service[httpd]: Execution of '/usr/share/katello-installer-base/modules/service_wait/bin/service-wait
start httpd' returned 1: Redirecting to /bin/systemctl start httpd.service
[ERROR 2016-07-08 10:48:18 verbose] /Stage[main]/Foreman::Database/Foreman
::Rake[db:seed]/Exec[foreman-rake-db:seed]: Failed to call refresh: /usr/
sbin/foreman-rake db:seed returned 1 instead of one of [0]
[ERROR 2016-07-08 10:48:18 verbose] /Stage[main]/Foreman::Database/Foreman
::Rake[db:seed]/Exec[foreman-rake-db:seed]: /usr/sbin/foreman-rake db:seed
returned 1 instead of one of [0]
[ERROR 2016-07-08 10:49:15 verbose] /Stage[main]/Foreman_proxy::Register/
Foreman_smartproxy[katello.tld]: Failed to call refresh: Proxy katello.tld
cannot be registered (Could not load data from https://katello.tld
[ INFO 2016-07-08 10:49:15 verbose] - is your server down?
[ INFO 2016-07-08 10:49:15 verbose] - was rake apipie:cache run when using
apipie cache? (typical production settings)): N/A
[ERROR 2016-07-08 10:49:15 verbose] /Stage[main]/Foreman_proxy::Register/
Foreman_smartproxy[katello.tld]: Proxy katello.tld cannot be registered (
Could not load data from https://katello.tld
[ INFO 2016-07-08 10:49:15 verbose] - is your server down?
[ INFO 2016-07-08 10:49:15 verbose] - was rake apipie:cache run when using
apipie cache? (typical production settings)): N/A
[ INFO 2016-07-08 10:49:15 verbose] /usr/share/ruby/vendor_ruby/puppet/util/
errors.rb:106:in `fail'
[ INFO 2016-07-08 10:49:19 verbose] Executing hooks in group post
Something went wrong! Check the log for ERROR-level output

sudo cat /var/log/httpd/crane_error_ssl.log
[Fri Jul 08 10:48:00.480289 2016] [ssl:emerg] [pid 13049] AH01903: Failed
to configure CA certificate chain!
[Fri Jul 08 10:57:44.197492 2016] [ssl:emerg] [pid 13508] AH01903: Failed
to configure CA certificate chain!

Edgars


The backend services all say OK, but when I run a katello-service status I
can see that pulp_celerybeat fails its status check. If I restart the service
and immediately check the status it says running, but checking the status
again shows it timed out.

No memory errors noted.

[root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
● pulp_celerybeat.service - Pulp's Celerybeat
Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service;
enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2016-08-01 08:34:31 CDT; 18s ago
Process: 5887 ExecStart=/usr/bin/celery beat
--app=pulp.server.async.celery_instance.celery
--scheduler=pulp.server.async.scheduler.Scheduler (code=exited,
status=0/SUCCESS)
Main PID: 5887 (code=exited, status=0/SUCCESS)

Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL:
(5887-79264) raise Timeout("Connection attach timed out")
Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL:
(5887-79264) Timeout: Connection attach timed out
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: celery beat v3.1.11
(Cipater) is starting.
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: __ - ... __ - _
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: Configuration ->
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . broker ->
qpid://dscaprv01.corp.acxiom.net:5671//
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . loader ->
celery.loaders.app.AppLoader
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . scheduler ->
pulp.server.async.scheduler.Scheduler
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . logfile ->
[stderr]@%INFO
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . maxinterval ->
now (0s)



Is this the sequence of events?

  • service pulp_celerybeat start (outputs success)
  • service pulp_celerybeat status (outputs success)
  • wait some number of seconds
  • service pulp_celerybeat status (outputs error)
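
If so, something like the following should reproduce it from a shell (a sketch; the 30-second wait is a guess at when the qpid attach times out):

service pulp_celerybeat start    # reports success
service pulp_celerybeat status   # still running here
sleep 30                         # give the broker connection time to fail
service pulp_celerybeat status   # now inactive (dead)?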

What is the journalctl -xn output after you try to start httpd manually?
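
Something along these lines should capture it (a sketch, assuming the httpd unit name):

systemctl start httpd            # expected to fail as in the installer run
journalctl -xn --unit httpd      # most recent journal entries for httpd, with explanations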


Thank you for your help on this.

Yes, here is the actual capture.

Command-line output:
[root@dscaprv01 tmp]# systemctl restart pulp_celerybeat.service
[root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
● pulp_celerybeat.service - Pulp's Celerybeat
Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service;
enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-08-01 08:34:20 CDT; 7s ago
Main PID: 5887 (celery)
CGroup: /system.slice/pulp_celerybeat.service
└─5887 /usr/bin/python /usr/bin/celery beat
--app=pulp.server.async.celery_instance.celery
--scheduler=pulp.server.async.scheduler.Scheduler

Aug 01 08:34:20 dscaprv01.corp.acxiom.net systemd[1]: Started Pulp's
Celerybeat.
Aug 01 08:34:20 dscaprv01.corp.acxiom.net systemd[1]: Starting Pulp's
Celerybeat…
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:INFO:
beat: Starting…
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]:
pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]:
pulp.server.async.scheduler:INFO: Worker Timeout Monitor Started
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]:
pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Aug 01 08:34:26 dscaprv01.corp.acxiom.net pulp[5887]:
pulp.server.db.connection:INFO: Write concern for Mongo connection: {}
[root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
● pulp_celerybeat.service - Pulp's Celerybeat
Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service;
enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2016-08-01 08:34:31 CDT; 18s ago
Process: 5887 ExecStart=/usr/bin/celery beat
--app=pulp.server.async.celery_instance.celery
--scheduler=pulp.server.async.scheduler.Scheduler (code=exited,
status=0/SUCCESS)
Main PID: 5887 (code=exited, status=0/SUCCESS)

Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL:
(5887-79264) raise Timeout("Connection attach timed out")
Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL:
(5887-79264) Timeout: Connection attach timed out
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: celery beat v3.1.11
(Cipater) is starting.
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: __ - ... __ - _

Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: Configuration ->
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . broker ->
qpid://dscaprv01.corp.acxiom.net:5671//
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . loader ->
celery.loaders.app.AppLoader
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . scheduler ->
pulp.server.async.scheduler.Scheduler
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . logfile ->
[stderr]@%INFO
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . maxinterval ->
now (0s)

/var/log/messages output

==> /var/log/messages <==
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) beat
raised exception <class 'qpid.messaging.exceptions.Timeout'>:
Timeout('Connection attach timed out',)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
Traceback (most recent call last):
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/celery/apps/beat.py", line 112, in
start_scheduler
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
beat.start()
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/celery/beat.py", line 462, in start
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
interval = self.scheduler.tick()
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/pulp/server/async/scheduler.py", line
265, in tick
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) ret
= self.call_tick(self, celerybeat_name)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/pulp/server/async/scheduler.py", line
230, in call_tick
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) ret
= super(Scheduler, self).tick()
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/celery/beat.py", line 220, in tick
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
next_time_to_run = self.maybe_due(entry, self.publisher)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/utils/init.py", line 325, in
get
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
value = obj.__dict__[self.__name__] = self.__get(obj)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/celery/beat.py", line 342, in publisher
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
return self.Publisher(self._ensure_connected())
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/celery/beat.py", line 326, in
_ensure_connected
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
_error_handler, self.app.conf.BROKER_CONNECTION_MAX_RETRIES
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 369, in
ensure_connection
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
interval_start, interval_step, interval_max, callback)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/utils/init.py", line 246, in
retry_over_time
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
return fun(*args, **kwargs)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 237, in connect
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
return self.connection
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 741, in
connection
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
self._connection = self._establish_connection()
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 696, in
_establish_connection
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) conn
= self.transport.establish_connection()
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/transport/qpid.py", line 1600, in
establish_connection
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) conn
= self.Connection(**opts)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/kombu/transport/qpid.py", line 1261, in
__init__
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
self._qpid_conn = establish(**self.connection_options)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 112,
in establish
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
conn.open(timeout=timeout)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"<string>", line 6, in open
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 323,
in open
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
self.attach(timeout=timeout)
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"<string>", line 6, in attach
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) File
"/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 343,
in attach
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)
raise Timeout("Connection attach timed out")
Aug 1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) Timeout:
Connection attach timed out
Aug 1 08:45:03 dscaprv01 celery: celery beat v3.1.11 (Cipater) is starting.
Aug 1 08:45:03 dscaprv01 celery: __ - … __ - _
Aug 1 08:45:03 dscaprv01 celery: Configuration ->
Aug 1 08:45:03 dscaprv01 celery: . broker ->
qpid://dscaprv01.corp.acxiom.net:5671//
Aug 1 08:45:03 dscaprv01 celery: . loader -> celery.loaders.app.AppLoader
Aug 1 08:45:03 dscaprv01 celery: . scheduler ->
pulp.server.async.scheduler.Scheduler
Aug 1 08:45:03 dscaprv01 celery: . logfile -> [stderr]@%INFO
Aug 1 08:45:03 dscaprv01 celery: . maxinterval -> now (0s)
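
Given the broker URL in that output (qpid://dscaprv01.corp.acxiom.net:5671//), a couple of quick connectivity checks may help, as a sketch:

# Is qpidd actually listening on the SSL port the broker URL points at?
ss -tlnp | grep 5671
# Is the broker process itself healthy?
systemctl status qpidd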
