When deploying phxpaxos on cloud container machines that use libfaketime, we found that moving the faketime clock backwards causes the phxpaxos process to hang on exit (usually accompanied by high CPU usage), stuck at the m_oSendQueue.peek(poData, 1000) call in UDPSend::run().
The cause is that the call to _cond.wait_for in the phxpaxos code below computes time with std::chrono::system_clock::now(), which is out of sync with the clock faked by libfaketime. As a result, wait_for abnormally returns no_timeout immediately, every time (see the __wait_until_impl interface of condition_variable in libstdc++); since m_oSendQueue is of course empty at that point, the following code spins forever:
```cpp
bool peek(T& t, int timeoutMS) {
    while (empty()) {
        // Under the clock desync, wait_for returns no_timeout immediately,
        // so this loop never exits while the queue stays empty.
        if (_cond.wait_for(_lock, std::chrono::milliseconds(timeoutMS))
                == std::cv_status::timeout) {
            return false;
        }
    }
    t = _storage.front();
    return true;
}
```
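For reference, here is a rough sketch of how libstdc++'s wait_for / __wait_until_impl logic arrives at its status (an approximation for illustration, not the verbatim library source). The key point is that the timeout status is decided by re-reading system_clock after the wait, which is exactly what goes wrong when libfaketime fakes system_clock::now() but not the kernel clock used by the underlying pthread_cond_timedwait:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Approximate shape of libstdc++'s wait_for logic, simplified.
template <typename Rep, typename Period>
std::cv_status wait_for_sketch(std::condition_variable& cv,
                               std::unique_lock<std::mutex>& lk,
                               const std::chrono::duration<Rep, Period>& rel) {
    // 1. The relative timeout becomes an absolute system_clock deadline.
    //    Under libfaketime, now() is the *faked* (earlier) time, so the
    //    deadline is already in the past relative to the real CLOCK_REALTIME.
    auto deadline = std::chrono::system_clock::now() + rel;

    // 2. The actual timed wait runs against the kernel's real clock,
    //    so it returns essentially immediately.
    cv.wait_until(lk, deadline);

    // 3. The status is computed by re-reading system_clock::now(). The faked
    //    clock has barely advanced and is still < deadline, so this reports
    //    no_timeout -- every single time. Hence the busy spin in peek().
    return std::chrono::system_clock::now() < deadline
               ? std::cv_status::no_timeout
               : std::cv_status::timeout;
}
```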
This makes it inconvenient to drive tests by adjusting libfaketime in a cloud container environment. How does everyone handle this problem? Switch to a libstdc++ built with clock_gettime support enabled? Or would hooking std::chrono::system_clock::now() directly carry any risks?
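For what it's worth, one way to make the queue robust regardless of which libstdc++ build or clock hook is in play is to bound the wait loop with std::chrono::steady_clock, which is monotonic by contract and, depending on how libfaketime is built (e.g. with its DONT_FAKE_MONOTONIC option), may be left untouched. Below is a minimal sketch of such a peek; the member names _lock, _cond, and _storage are carried over from the snippet above, and the rest of the class is assumed:

```cpp
// Sketch of a jump-tolerant peek, bounded by a monotonic deadline.
// Even if _cond.wait_for() keeps returning no_timeout immediately
// (the libfaketime desync described above), the loop still exits
// once the steady_clock deadline has passed.
bool peek(T& t, int timeoutMS) {
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::milliseconds(timeoutMS);
    while (empty()) {
        const auto now = std::chrono::steady_clock::now();
        if (now >= deadline) {
            return false;  // genuine timeout, measured on the monotonic clock
        }
        // Wait for the remaining time; a spurious or early wakeup simply
        // loops back and re-checks the monotonic deadline.
        _cond.wait_for(_lock, deadline - now);
    }
    t = _storage.front();
    return true;
}
```

Under the desync this degrades to a bounded spin of at most timeoutMS rather than an infinite loop. The re-check against steady_clock inside the loop is what makes it safe: if I recall correctly, older libstdc++ builds convert even a steady_clock-based wait_until into a system_clock wait internally, whereas GCC 10+ with glibc 2.30+ can use pthread_cond_clockwait against CLOCK_MONOTONIC directly. Hooking std::chrono::system_clock::now() itself would also affect every other caller in the process, a much wider blast radius than fixing the one wait loop.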