Driver not failing over to readers #1075

Closed
Milesy opened this issue Aug 2, 2024 · 7 comments
Labels
bug Something isn't working

Comments

Milesy commented Aug 2, 2024

Describe the bug

When failing over a writer, read-only connections are not failing over to the available readers. The failover takes approximately 20 seconds while it waits for the new writer node to start up.

Expected Behavior

I expect the failover to happen within seconds and my read-only transactions to be routed to the reader nodes.

What plugins are used? What other connection properties were set?

ConnectTimeConnectionPluginFactory,ExecutionTimeConnectionPluginFactory,IamAuthConnectionPluginFactory,AuroraInitialConnectionStrategyPluginFactory,AuroraConnectionTrackerPluginFactory,ReadWriteSplittingPluginFactory,FailoverConnectionPluginFactory,HostMonitoringConnectionPluginFactory
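
For reference, here is a rough sketch of how a plugin chain and failover mode along these lines might be expressed as connection properties on the wrapper. The wrapperPlugins codes are my best-guess mapping of the factories listed above, and the endpoint, database, and credentials are placeholders; treat this purely as an illustration, not the exact configuration used here.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class WrapperConfigSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");
        // No password set: the IAM plugin is expected to supply auth tokens.

        // Best-guess plugin codes for the factories listed above; verify against the wrapper docs.
        props.setProperty("wrapperPlugins",
            "connectTime,executionTime,iam,initialConnection,auroraConnectionTracker,"
                + "readWriteSplitting,failover,efm");

        // Reader-only failover preference, matching failoverMode=STRICT_READER in the logs below.
        props.setProperty("failoverMode", "strict-reader");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:aws-wrapper:postgresql://my-cluster.cluster-XYZ.us-east-1.rds.amazonaws.com:6160/mydb",
                props)) {
            conn.setReadOnly(true);
            // ... run read-only queries ...
        }
    }
}
```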

Current Behavior

Using the driver with Aurora PostgreSQL Serverless, I have configured failover with internal connection pooling and read-write splitting enabled.

I have also set the failover mode to STRICT_READER (as can be seen in the logs), and my tests are all against read-only connections (also visible in the logs).

2024-08-02 19:01:34.965 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [ scheduling-1] s.a.j.p.f.FailoverConnectionPlugin : [] failoverMode=STRICT_READER

2024-08-02 19:01:34.965 TRACE [ddTraceId-:ddSpanId-] 5488 --- [ scheduling-1] s.a.jdbc.plugin.DefaultConnectionPlugin : [] Executing method: 'Connection.setReadOnly'

I have 1 WRITER and 3 READERs.

2024-08-02 19:01:03.781 TRACE [ddTraceId-:ddSpanId-] 5488 --- [ scheduling-1] s.a.j.h.RdsHostListProvider : [] [From cache] Topology:
HostSpec[host=aurora-reader-node-1-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=4523, 2024-08-02 19:00:34.0]
HostSpec[host=aurora-reader-node-3-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=2537, 2024-08-02 19:00:34.0]
HostSpec[host=aurora-reader-node-2-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=3315, 2024-08-02 19:00:34.0]
HostSpec[host=aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, WRITER, AVAILABLE, weight=43, null]

Whenever I fail over, the WRITER always fails over to node 2 and then back to node 0, never to any other nodes.

However, during failover my read-only connections still appear to follow the WRITER's failover every single time.

When I fail node 0 and node 2 then takes over as the WRITER, the following is seen in the logs:

org.postgresql.util.PSQLException: Connection to aurora-reader-node-2-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com:6160 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.Driver$ConnectThread.getResult(Driver.java:394)
at org.postgresql.Driver.connect(Driver.java:304)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:733)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:712)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:842)
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:554)
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:602)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
at java.base/java.net.Socket.connect(Socket.java:633)
at org.postgresql.core.PGStream.createSocket(PGStream.java:243)
at org.postgresql.core.PGStream.<init>(PGStream.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:132)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:258)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:54)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:263)
at org.postgresql.Driver.makeConnection(Driver.java:443)
at org.postgresql.Driver.access$100(Driver.java:63)
at org.postgresql.Driver$ConnectThread.run(Driver.java:353)
... 1 common frames omitted

This just repeats in a loop for about 20 seconds until the new WRITER is back up, and then my transactions continue as normal.

Right up to the last moment, the node is still starting up:

2024-08-02 19:01:49.876 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] org.postgresql.Driver : [] Connection error:

org.postgresql.util.PSQLException: FATAL: the database system is starting up
at org.postgresql.Driver$ConnectThread.getResult(Driver.java:394)
at org.postgresql.Driver.connect(Driver.java:304)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:733)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:712)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:842)

Back up, and node 2 is now the writer:

2024-08-02 19:01:50.398 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : [] dataSource-aurora-reader-node-2-pyxdlwfq-READER - Added connection org.postgresql.jdbc.PgConnection@7a391279
2024-08-02 19:01:50.409 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] org.postgresql.Driver : [] Connecting with URL: jdbc:postgresql://aurora-reader-node-2-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com:6160/fusion_identity_local?tcpKeepAlive=false&connectTimeout=10&socketTimeout=5&loginTimeout=10
2024-08-02 19:01:51.005 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : [] dataSource-aurora-reader-node-2-pyxdlwfq-READER - Added connection org.postgresql.jdbc.PgConnection@4a8683e3
2024-08-02 19:01:52.437 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] org.postgresql.Driver : [] Connecting with URL: jdbc:postgresql://aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com:6160/fusion_identity_local?tcpKeepAlive=false&connectTimeout=10&socketTimeout=5&loginTimeout=10
2024-08-02 19:01:53.273 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : [] dataSource-aurora-cluster-node-0-pyxdlwfq-WRITER - Added connection org.postgresql.jdbc.PgConnection@5f30aa8f
2024-08-02 19:01:53.273 TRACE [ddTraceId-:ddSpanId-] 5488 --- [ scheduling-1] s.a.j.p.f.FailoverConnectionPlugin : [] Changes:
Host 'aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com:6160/': [WENT_UP, NODE_CHANGED]
2024-08-02 19:01:53.284 DEBUG [ddTraceId-:ddSpanId-] 5488 --- [onnection adder] org.postgresql.Driver : [] Connecting with URL: jdbc:postgresql://aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com:6160/fusion_identity_local?tcpKeepAlive=false&connectTimeout=10&socketTimeout=5&loginTimeout=10
2024-08-02 19:01:53.375 TRACE [ddTraceId-:ddSpanId-] 5488 --- [ scheduling-1] s.a.j.h.RdsHostListProvider : [] Topology:
HostSpec[host=aurora-reader-node-1-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=2223, 2024-08-02 19:01:51.0]
HostSpec[host=aurora-reader-node-3-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=2224, 2024-08-02 19:01:51.0]
HostSpec[host=aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, READER, AVAILABLE, weight=1844, null]
HostSpec[host=aurora-reader-node-2-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, WRITER, AVAILABLE, weight=16, 2024-08-02 19:01:52.0]

Reproduction Steps

Unable to provide at present due to internal policy.

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

2.3.7

JDK version used

17

Operating System and version

Windows Server

Milesy added the bug label on Aug 2, 2024
Milesy commented Aug 5, 2024

My initial thought, after some more debugging, is that this is possibly happening in this scenario:

  1. A @Transactional(readOnly = true) method executes.
  2. A read operation is performed against a reader node.
  3. Failover happens.
  4. Spring attempts to set readOnly = false again on the underlying connection (DataSourceUtils.resetConnectionAfterTransaction is invoked).
  5. The ReadWriteSplittingPlugin then tries to swap back to the writer.

My priority is always fast failover for reader connections over writers. So maybe my solution here is to configure my read and write data sources independently, rather than relying on read/write splitting?
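
To illustrate that idea, here is a rough sketch (not from my actual setup) of two independent Hikari pools: one pinned to the cluster writer endpoint and one to the reader endpoint. Endpoint names, credentials, and the failoverMode values are placeholders and should be checked against the wrapper documentation.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class SplitDataSources {

    // Writer pool: points at the cluster (writer) endpoint.
    static HikariDataSource writerPool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:aws-wrapper:postgresql://my-cluster.cluster-XYZ.us-east-1.rds.amazonaws.com:6160/mydb"
            + "?wrapperPlugins=failover,efm&failoverMode=strict-writer");
        cfg.setUsername("dbuser");
        cfg.setPassword("dbpassword");
        cfg.setConnectionTimeout(1500); // fail fast so driver-level failover can kick in
        return new HikariDataSource(cfg);
    }

    // Reader pool: points at the reader endpoint and never promotes to the writer.
    static HikariDataSource readerPool() {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:aws-wrapper:postgresql://my-cluster.cluster-ro-XYZ.us-east-1.rds.amazonaws.com:6160/mydb"
            + "?wrapperPlugins=failover,efm&failoverMode=strict-reader");
        cfg.setUsername("dbuser");
        cfg.setPassword("dbpassword");
        cfg.setReadOnly(true);          // connections from this pool are read-only
        cfg.setConnectionTimeout(1500);
        return new HikariDataSource(cfg);
    }
}
```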

aaron-congo commented:

Hi @Milesy, thank you for submitting this ticket. We are looking into this to try and understand what is occurring and how we can find a good solution for you. For starters, can you please confirm the following?

  1. What URL are you providing for your initial connection? Is it the reader cluster endpoint (e.g., mycluster.cluster-ro-XYZ.us-east-1.rds.amazonaws.com)?
  2. Are you using a multi-AZ cluster?
  3. When driver failover occurs, can you tell whether writer failover or reader failover is triggered? If writer failover is triggered, your logs should contain this line: "Starting writer failover procedure.". If reader failover is triggered, your logs should contain this line: "Starting reader failover procedure.".
  4. When driver failover completes, does it connect you to the new writer, or to a reader? You should see a line in your logs like this, which will allow you to answer this question:
software.amazon.jdbc.plugin.failover.FailoverConnectionPlugin failoverWriter    <---- (might say "failoverReader" instead)
FINE: Connected to: HostSpec[host=aurora-cluster-node-0-pyxdlwfq.c4q03196xb4j.us-east-1.rds.amazonaws.com, port=6160, WRITER, AVAILABLE, weight=43, null]

Milesy commented Aug 9, 2024

@aaron-congo Good morning Aaron.

I have tried multiple configuration setups. This setup was against the cluster endpoint, with read-write splitting in place.

But in my view now, read-write splitting adds too much overhead with the constant swapping of connections back and forth.

My primary use case is to support read operations during a failover, so this is where I am trying to concentrate my efforts.

The cluster is made up of 4 nodes across two AZs. Nodes 0 and 2 are in us-east-1c, and nodes 1 and 3 are in us-east-1d.

I don't see a log entry saying failoverReader or failoverWriter.

With some tweaking I see two different scenarios (connecting to the RO endpoint just now):

  1. Failover happens during a read operation: the failed node is detected, and failover takes about 7 seconds to complete.
  2. Failover happens just before an operation: the failure is detected within the Hikari internal pool and results in the thread being blocked for about 20 seconds until the failover completes.

aaron-congo commented Aug 10, 2024

Hi @Milesy,

Thank you for the information. You are correct that the read/write splitting plugin regularly switches between the writer and a reader when using Spring's @Transactional(readOnly = true). This occurs because Spring automatically calls setReadOnly(true) at the beginning of the transaction (which triggers a switch to the reader) and setReadOnly(false) at the end of the transaction (which triggers a switch back to the writer).
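
For anyone following along, here is a small sketch of the kind of Spring service that triggers this sequence (class and query names are placeholders; the behavior comes from DataSourceTransactionManager / DataSourceUtils as described above):

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    private final JdbcTemplate jdbc;

    public ReportService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Spring calls Connection.setReadOnly(true) when this transaction begins
    // (so the read/write splitting plugin switches to a reader) and resets the
    // flag when the transaction completes (switching back to the writer).
    @Transactional(readOnly = true)
    public Integer countUsers() {
        return jdbc.queryForObject("SELECT count(*) FROM users", Integer.class);
    }
}
```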

It sounds like you are now using the driver without the read/write splitting plugin. Under this new driver configuration, can you clarify what behavior you are seeing that is unexpected and that you would like to fix?

Additionally, can you please tell me whether you are using the read/write splitting plugin and/or internal connection pools for point "2." in your recent response? The beginning of your response makes it sound like you are not using the read/write splitting plugin, but in point "2." you mention that the failure is detected within the Hikari internal pool, and internal connection pools are a read/write splitting plugin feature.

aaron-congo commented Aug 23, 2024

Bit of an update: I've chatted with @Milesy and he is no longer using the read/write splitting plugin, because Spring automatically calls setReadOnly before/after each transaction. However, he did encounter a scenario which was problematic for him: if internal connection pools are in use and AwsWrapperDataSource#getConnection is called while server failover is occurring, the driver does not fail over, because:

  1. The failover plugin currently does not perform failover when trying to connect; it only performs failover when trying to execute methods against an already established connection.
  2. When Hikari tries to grab a connection from its pool, it first tries to validate it. If the connection is invalid (which is the case while server failover is occurring), it tries to reconnect to the given URL for an amount of time determined by Hikari's connectionTimeout setting. If server failover completes before this timeout, Hikari will successfully connect and return the connection; otherwise it will throw an error.

The solution to this issue is to:

  • adjust Hikari's connectionTimeout to something small (e.g. HikariConfig#setConnectionTimeout(1500)) so that Hikari does not spend a long time trying to validate the connection. When the timeout is hit, Hikari will throw an exception that allows the wrapper to initiate driver failover.
  • adjust the cluster topology refresh rate so that entries are kept in the cache for longer (e.g. targetDataSourceProps.setProperty("clusterTopologyRefreshRateMs", String.valueOf(TimeUnit.MINUTES.toMillis(2)))). This ensures that the topology info is still in the cache when AwsWrapperDataSource#getConnection is called (provided that the topology entry has been updated within the last 2 minutes by a previous connection).
  • add logic to the failover plugin allowing users to enable failover behavior while connecting via an "enableConnectFailover" connection property (see the sketch after this list). I've created a PR to provide this functionality.
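
For illustration, here is a rough sketch combining the first two adjustments plus the proposed property from the third. The clusterTopologyRefreshRateMs and enableConnectFailover names are taken from the comments above and may differ in released versions; endpoints and credentials are placeholders, and the wrapper properties are forwarded through Hikari's dataSourceProperties, which assumes the wrapper data source picks them up that way.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class FailoverOnConnectSketch {
    public static void main(String[] args) {
        // Properties passed through to the wrapper / target driver.
        Properties targetDataSourceProps = new Properties();
        // Keep topology entries cached longer so they are still available
        // when getConnection is called while server failover is in progress.
        targetDataSourceProps.setProperty("clusterTopologyRefreshRateMs",
                String.valueOf(TimeUnit.MINUTES.toMillis(2)));
        // Allow the failover plugin to fail over while connecting
        // (property name as referenced in the comment above).
        targetDataSourceProps.setProperty("enableConnectFailover", "true");

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:aws-wrapper:postgresql://my-cluster.cluster-XYZ.us-east-1.rds.amazonaws.com:6160/mydb");
        config.setUsername("dbuser");
        config.setPassword("dbpassword");
        // Fail connection validation fast so the wrapper can start driver failover.
        config.setConnectionTimeout(1500);
        config.setDataSourceProperties(targetDataSourceProps);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // ds.getConnection() ...
        }
    }
}
```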

aaron-congo commented:

Hi @Milesy, I believe we have addressed the original issue, which turned out to be that failover was not occurring on connect. Are you okay with me closing this ticket? If I don't hear anything in the next few days I will go ahead and do so, but please feel free to reopen it or let me know if you encounter any other problems. Thanks!

Milesy commented Oct 24, 2024 via email
