Allow pull request for clang-format changes #1

Open · wants to merge 2 commits into master

Conversation

mydeveloperday (Owner)

Remove repo lockdown on clang-format specific directories
mydeveloperday pushed a commit that referenced this pull request Aug 9, 2023
TSan reports the following data race:

  Write of size 4 at 0x000109e0b160 by thread T2 (mutexes: write M0, write M1):
    #0 NativeFile::Close() File.cpp:329
    #1 ConnectionFileDescriptor::Disconnect(lldb_private::Status*) ConnectionFileDescriptorPosix.cpp:232
    #2 Communication::Disconnect(lldb_private::Status*) Communication.cpp:61
    #3 process_gdb_remote::ProcessGDBRemote::DidExit() ProcessGDBRemote.cpp:1164
    #4 Process::SetExitStatus(int, char const*) Process.cpp:1097
    #5 process_gdb_remote::ProcessGDBRemote::MonitorDebugserverProcess(...) ProcessGDBRemote.cpp:3387

  Previous read of size 4 at 0x000109e0b160 by main thread (mutexes: write M2):
    #0 NativeFile::IsValid() const File.h:393
    #1 ConnectionFileDescriptor::IsConnected() const ConnectionFileDescriptorPosix.cpp:121
    #2 Communication::IsConnected() const Communication.cpp:79
    #3 process_gdb_remote::GDBRemoteCommunication::WaitForPacketNoLock(...) GDBRemoteCommunication.cpp:256
    #4 process_gdb_remote::GDBRemoteCommunication::WaitForPacketNoLock(...) GDBRemoteCommunication.cpp:244
    #5 process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponseNoLock(llvm::StringRef, StringExtractorGDBRemote&) GDBRemoteClientBase.cpp:246

The problem is that WaitForPacketNoLock's run loop checks whether the
connection is still connected. This races with the
ConnectionFileDescriptor disconnecting. Most (but not all) accesses to the
IOObject in ConnectionFileDescriptorPosix are already gated by the mutex.
This patch protects IsConnected in the same way, as sketched below.
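
A minimal sketch of the idea, assuming hypothetical member names `m_io_sp` (the IOObject) and a mutable `std::recursive_mutex m_mutex` already taken by `Disconnect`, rather than the actual lldb source:

```cpp
// Sketch only: take the same lock that Disconnect() holds while it closes
// m_io_sp, so the IsValid() read in IsConnected() can no longer race with
// NativeFile::Close().
bool ConnectionFileDescriptor::IsConnected() const {
  // m_mutex must be declared mutable, since this member function is const.
  std::lock_guard<std::recursive_mutex> guard(m_mutex);
  return m_io_sp && m_io_sp->IsValid();
}
```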

Differential revision: https://reviews.llvm.org/D157347
mydeveloperday pushed a commit that referenced this pull request Jan 31, 2024
…ass template explicit specializations (llvm#78720)

According to [[dcl.type.elab]
p2](http://eel.is/c++draft/dcl.type.elab#2):
> If an
[elaborated-type-specifier](http://eel.is/c++draft/dcl.type.elab#nt:elaborated-type-specifier)
is the sole constituent of a declaration, the declaration is ill-formed
unless it is an explicit specialization, an explicit instantiation or it
has one of the following forms [...]

Consider the following:
```cpp
template<typename T>
struct A 
{
    template<typename U>
    struct B;
};

template<>
template<typename U>
struct A<int>::B; // #1
```
The _elaborated-type-specifier_ at `#1` declares an explicit
specialization (which is itself a template). We currently (incorrectly)
reject this, and this PR fixes that.

I moved the point at which _elaborated-type-specifiers_ with
_nested-name-specifiers_ are diagnosed from `ParsedFreeStandingDeclSpec`
to `ActOnTag` for two reasons: `ActOnTag` isn't called for explicit
instantiations and partial/explicit specializations, and it's where we
determine whether a member specialization is being declared.
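
For concreteness, here are two hypothetical declarations (mine, not from the patch) of the kinds that take the other paths, building on the definition of `A` above:

```cpp
template struct A<int>;     // explicit instantiation
template<> struct A<void>;  // explicit specialization of the primary template
```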

With respect to diagnostics, I am currently issuing the diagnostic
without marking the declaration as invalid or returning early, which
results in more diagnostics than I think are necessary. I would like
feedback regarding what the "correct" behavior should be here.
mydeveloperday pushed a commit that referenced this pull request Apr 30, 2024
The builder alerted me to the failing test; attempt #1 in the blind.