
Revert "Make sure that after toggling ssl on/off all services are restarted" #3365

Open

jgrassler wants to merge 1 commit into master

Conversation

jgrassler (Contributor)

This reverts commit 9d523e7.

Reverting this should fix HA job breakage.

@dirkmueller (Contributor)

No, the problem is not the maintenance mode; the problem is neutron-ha-tool crashing because it is not being restarted. We can fix that by deleting the fail counter as I did previously, but that approach was rejected in the previous review: #3286 (review)
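For reference, a minimal sketch of what the fail counter removal could look like on a Pacemaker cluster node. The resource and node names below are illustrative assumptions, not taken from this PR:

```sh
# Query the accumulated fail count for the (hypothetical) resource/node pair.
crm_failcount --query --resource neutron-ha-tool --node controller1

# Delete the fail count so Pacemaker no longer treats the resource as failed.
crm_failcount --delete --resource neutron-ha-tool --node controller1

# Alternatively, clear the resource's whole failure/operation history.
crm resource cleanup neutron-ha-tool
```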

@dirkmueller (Contributor)

I am working on a patch to fix the root cause; if you don't have time to wait for that, we can use the crm_failcount removal hack as an interim solution.

@dirkmueller (Contributor)

crowbar/crowbar-openstack#2112 is the work-in-progress fix for this.

@jgrassler (Contributor, Author)

So why did this only start to fail after #3286 was merged? (That is why I opened this PR: to make sure it really isn't the culprit.) Over the weekend before April 15th there was a whole series of successful HA job runs, and for cloudsource=susecloud9 they are still green. So somewhere there must be code that didn't make it into susecloud9 and that began to cause trouble from April 15th onward...

@dirkmueller (Contributor)

@jgrassler Very simple: neutron-ha-tool needs to crash repeatedly, several times in a row (around 15 times or so), before Pacemaker logs a failure. In the previous code the crash simply happened later, interrupting Tempest and other things, rather than clearly causing a single reproducible failure. Now it waits and you trigger it all the time; before, it only happened when you were particularly unlucky.
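A sketch of the Pacemaker behaviour described above, assuming the crash threshold is governed by the standard migration-threshold meta attribute (the resource agent name and the value 15 are hypothetical, taken only from the ballpark figure in this comment):

```sh
# Hypothetical resource definition: Pacemaker increments the resource's
# fail count on every failed operation, but only gives up on the resource
# once the count reaches migration-threshold.
crm configure primitive neutron-ha-tool ocf:openstack:neutron-ha-tool \
    op monitor interval=60s \
    meta migration-threshold=15

# Watching the count climb while neutron-ha-tool crashes repeatedly:
crm_failcount --query --resource neutron-ha-tool
```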

@dirkmueller (Contributor)

Also, in most cases the failcount check had already been executed before the failure was logged (and it runs only once).

@dirkmueller (Contributor) left a comment


Please spend your review electrons here: crowbar/crowbar-openstack#2112
