
Single big file upload with desktop-client fails if OIDC token expires #1205

Closed · Fixed by cs3org/reva#1941

exalate-issue-sync bot opened this issue Jan 13, 2021 · 33 comments

Labels: Category:Defect · OCIS-Fastlane · Platform:Desktop · Platform:oCIS · Priority:p2-high · Status:Bug-Analysis

Comments

@exalate-issue-sync

The issue seems to be that the upload aborts after KONNECTD_ACCESS_TOKEN_EXPIRATION, which defaults to 10 minutes.

Tested with the latest desktop sync-client (testpilot daily build).

  • Start the ocis server with KONNECTD_ACCESS_TOKEN_EXPIRATION=60 (1 minute)
  • Set upload throttling to 10 kB/s
  • fallocate -l 4G 4G.bin inside the client's data dir
  • The upload never completes and is restarted from the beginning on every failure (see the rough calculation below).
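For context, a rough back-of-the-envelope check using only the numbers from the steps above shows why the upload can never finish within one token lifetime (illustrative Go, not part of any component):

package main

import "fmt"

// Rough arithmetic for the repro above: with a ~10 kB/s throttle, a 4 GiB file
// needs several days to transfer, while the test access token expires after
// 60 s (and the KONNECTD_ACCESS_TOKEN_EXPIRATION default after 600 s).
func main() {
	const fileSize = 4 << 30 // 4 GiB test file created with fallocate
	const throttle = 10_000  // ~10 kB/s upload throttle, in bytes per second
	const tokenTTL = 60      // seconds, KONNECTD_ACCESS_TOKEN_EXPIRATION=60

	fmt.Printf("upload needs ~%d s, token expires after %d s\n", fileSize/throttle, tokenTTL)
}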

Client logs (this probably leads to the sync abort, as the request is presumably made with an already expired token):

12-02 16:51:40:442 [ warning sync.networkjob.propfind ]:	*not* successful, http result code is 401 ""
12-02 16:51:40:442 [ debug sync.networkjob ]	[ OCC::AbstractNetworkJob::slotFinished ]:	Network job OCC::PropfindJob finished for "/"
12-02 16:51:40:443 [ info sync.httplogger ]:	"5424b26a-9be6-462c-817c-8f036df39da8: Response: POST 0 https://localhost:9090/remote.php/dav/files

See attached mitmproxy log

Server logs:

2020-12-02 12:34:49.162768 I | http: proxy error: readfrom tcp 127.0.0.1:44974->127.0.0.1:9140: unexpected EOF
2020-12-02 12:34:49.165016 I | http: proxy error: unexpected EOF
2020-12-02T12:34:49Z ERR error doing GET request to data service error="Patch \"https://ocis.owncloud.works/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do\": context canceled" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8
2020-12-02T12:34:49Z WRN srv/app/pkg/mod/github.com/cs3org/[email protected]/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c
2020-12-02T12:34:49Z ERR srv/app/pkg/mod/github.com/cs3org/[email protected]/internal/grpc/services/storageprovider/storageprovider.go:530 > permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c
[tusd] 2020/12/02 12:34:49 event="ChunkWriteComplete" id="30999212-21b7-43b6-bfee-e5683f628f27" bytesWritten="632799232" 
[tusd] 2020/12/02 12:34:49 event="ResponseOutgoing" status="204" method="PATCH" path="/30999212-21b7-43b6-bfee-e5683f628f27" requestId="" 
2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=POST pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537634144018 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/remote.php/dav/files/einstein/ url=/remote.php/dav/files/einstein/
2020-12-02T12:34:49Z ERR error doing PATCH request to data service error="Patch \"http://localhost:9155/data/tus/30999212-21b7-43b6-bfee-e5683f628f27\": unexpected EOF" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8
2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=PATCH pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537608042622 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do url=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do
2020-12-02T12:34:49Z WRN srv/app/pkg/mod/github.com/cs3org/[email protected]/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage traceid=dc2750ce6e0925812342b158aee9ea3b
2020-12-02T12:34:49Z ERR srv/app/pkg/mod/github.com/cs3org/[email protected]/internal/grpc/services/storageprovider/storageprovider.go:530 > permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=dc2750ce6e0925812342b158aee9ea3b

@exalate-issue-sync bot added the OCIS-Fastlane and p3-medium labels on Jan 13, 2021
@exalate-issue-sync
Author

Jörn Friedrich Dreyer commented: Is tus enabled or not?
If tus is enabled, the client should ask the server how many bytes are already available and resume the upload from there.

If tus is disabled, there is not much we can do other than fall back to the old chunking algorithms. The new chunking should be able to list the progress as well; for v1 I'm not sure.
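For reference, with tus the "how many bytes are available" question is a HEAD request whose answer comes back in the Upload-Offset header. A minimal sketch of such a resume check (not the desktop client's actual code; uploadURL and token are placeholders):

package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// currentOffset asks a tus server how many bytes of an upload it has already
// stored, so a client could resume the PATCH from that offset instead of
// restarting from zero.
func currentOffset(uploadURL, token string) (int64, error) {
	req, err := http.NewRequest(http.MethodHead, uploadURL, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// The tus protocol reports the number of received bytes in Upload-Offset.
	return strconv.ParseInt(resp.Header.Get("Upload-Offset"), 10, 64)
}

func main() {
	offset, err := currentOffset("https://example.test/tus-upload-url", "TOKEN")
	fmt.Println(offset, err)
}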

@exalate-issue-sync
Author

Michael Stingl commented: There is no TUS upload in the mitmproxy capture. Please provide new mitmproxy flows or the test setup.

@exalate-issue-sync
Author

Ilja Neumann commented: Same behaviour with the token cache disabled in the proxy.

@exalate-issue-sync
Author

Ilja Neumann commented: Setup with docker-compose, bound to the local LAN IP:

  • docker network create -o "com.docker.network.bridge.host_binding_ipv4"="192.168.0.33" bridge2  # use your LAN IP
  • sudo ufw allow 443
  • Modify the ocis_traefik example from ocis/deployments as shown here: https://gist.github.com/IljaN/413833ad66c4f29f3cdc444f5a022bb2
  • Add to /etc/hosts: 192.168.0.33 ocis.owncloud.remote  # your LAN IP
  • Start mitmproxy -k -p 9999
  • Start the sync client with the proxy set to localhost:9999 and throttle the connection to 150
  • Connect to ocis.owncloud.remote
  • Create a large file: fallocate -l 3G 3G.bin

The upload is restarted after a 401 and never finishes.

@exalate-issue-sync
Author

Michael Barz commented: please check status

@micbar added the Category:Defect, Platform:Desktop, Platform:oCIS and Status:Bug-Analysis labels on Jan 13, 2021
@michaelstingl
Contributor

@IljaN can we simplify the setup? Why do we need the desktop client? Can we reproduce with curl? Can we make the OIDC token expire faster? With curl --limit-rate we can slow down the requests, and maybe curl -v can already tell us something about the problem. Otherwise we need to get our hands dirty with Wireshark…

@stale

stale bot commented Mar 14, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status:Stale label Mar 14, 2021
@stale stale bot closed this as completed Mar 25, 2021
@micbar micbar reopened this Mar 25, 2021
@stale stale bot removed the Status:Stale label Mar 25, 2021
@settings settings bot removed the p3-medium label Apr 7, 2021
@stale

stale bot commented Jun 6, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status:Stale label Jun 6, 2021
@micbar
Contributor

micbar commented Jun 7, 2021

@IljaN which version did you test this on in January?

@micbar
Contributor

micbar commented Jun 21, 2021

Could be fixed by #2188

@micbar
Contributor

micbar commented Jun 21, 2021

Needs to be tested during #2197

@settings settings bot removed the p3-medium label Jul 1, 2021
AndreasSko added a commit to AndreasSko/infrastructure that referenced this issue Jul 4, 2021
Currently, big file uploads fail because of expiration
of the token after 10m. Until this is fixed
(see owncloud/ocis#1205),
we can just increase the expiration time to 24h.
@wkloucek
Contributor

Still fails if the upload takes longer than the token lifetime:

07-23 10:38:33:862 [ info sync.accessmanager ]:	2 "" "https://ocis.team.owncloud.works/ocs/v2.php/apps/notifications/api/v1/notifications?format=json" has X-Request-ID "81f11d53-c726-4f01-a594-cf5fe2ddcc70"
07-23 10:38:33:862 [ debug sync.cookiejar ]	[ OCC::CookieJar::cookiesForUrl ]:	QUrl("https://ocis.team.owncloud.works/ocs/v2.php/apps/notifications/api/v1/notifications?format=json") requests: ()
07-23 10:38:33:862 [ info sync.networkjob ]:	OCC::JsonApiJob created for "https://ocis.team.owncloud.works/" + "ocs/v2.php/apps/notifications/api/v1/notifications" "OCC::ServerNotificationHandler"
07-23 10:38:33:920 [ warning sync.networkjob ]:	QNetworkReply::AuthenticationRequiredError "Host requires authentication" QVariant(int, 401)
07-23 10:38:33:920 [ info gui.account.state ]:	Invalid credentials for "https://ocis.team.owncloud.works/" asking user
07-23 10:38:33:920 [ info gui.account.state ]:	AccountState state change:  "Connected" -> "Asking Credentials"
07-23 10:38:33:920 [ debug gui.activity ]	[ OCC::ActivitySettings::slotRefresh ]:	Do not check as last check is only secs ago:  0
07-23 10:38:33:920 [ info gui.socketapi ]:	Sending SocketAPI message --> "UNREGISTER_PATH:/home/kloucek/Desktop/Team oCIS" to QLocalSocket(0x561856d5cc90)
07-23 10:38:33:920 [ info gui.socketapi ]:	Sending SocketAPI message --> "UNREGISTER_PATH:/home/kloucek/Desktop/Team oCIS" to QLocalSocket(0x5618570d1ff0)
07-23 10:38:33:921 [ info gui.socketapi ]:	Sending SocketAPI message --> "UNREGISTER_PATH:/home/kloucek/Desktop/Team oCIS" to QLocalSocket(0x7facb0005e40)
07-23 10:38:33:923 [ debug sync.database.sql ]	[ OCC::SqlQuery::bindValue ]:	SQL bind 1 3
07-23 10:38:33:925 [ debug sync.database.sql ]	[ OCC::SqlQuery::exec ]:	SQL exec "SELECT path FROM selectivesync WHERE type=?1"
07-23 10:38:33:925 [ info gui.folder.manager ]:	Account "[email protected]" disconnected or paused, terminating or descheduling sync folders
07-23 10:38:33:925 [ info gui.folder ]:	folder  "2"  Terminating!
07-23 10:38:33:925 [ info sync.engine ]:	Aborting sync
07-23 10:38:33:925 [ debug gui.account.manager ]	[ OCC::AccountManager::saveAccount ]:	Saving account "https://ocis.team.owncloud.works/"
07-23 10:38:33:926 [ info gui.account.manager ]:	Saving  0  unknown certs.
07-23 10:38:33:926 [ info gui.account.manager ]:	Saving cookies. "/home/kloucek/.config/ownCloud/cookies1.db"
07-23 10:38:33:926 [ debug sync.cookiejar ]	[ OCC::CookieJar::save ]:	"/home/kloucek/.config/ownCloud/cookies1.db"

@micbar added the Priority:p2-high label on Jul 23, 2021
@wkloucek
Contributor

If the desktop client is already using TUS, there would be no need to abort the upload, because it is not authenticated via the OIDC token but by the TUS upload token (valid for roughly one week).
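The transfer token's lifetime can be checked directly, since it is the JWT embedded in the /data/<token> URLs in the server logs above. A small illustrative sketch (it only decodes the unverified payload; nothing oCIS-specific is assumed):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// printTokenLifetime decodes the (unverified) payload of a JWT, such as the
// transfer token embedded in the /data/<token> URLs above, and prints its
// iat/exp claims to show how long the token stays valid.
func printTokenLifetime(jwt string) error {
	parts := strings.Split(jwt, ".")
	if len(parts) != 3 {
		return fmt.Errorf("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return err
	}
	var claims struct {
		Iat int64 `json:"iat"`
		Exp int64 `json:"exp"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return err
	}
	fmt.Printf("issued at %d, expires at %d, lifetime %d s\n", claims.Iat, claims.Exp, claims.Exp-claims.Iat)
	return nil
}

func main() {
	// Paste the full token copied from a /data/<token> URL; placeholder here.
	if err := printTokenLifetime("<jwt-from-data-url>"); err != nil {
		fmt.Println(err)
	}
}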

@michaelstingl
Contributor

michaelstingl commented Jul 23, 2021

Please always report the exact client version:

[Screenshot: Bildschirmfoto 2021-07-23 um 11 21 29]

TUS requires 2.9 daily builds:
https://download.owncloud.com/desktop/ownCloud/daily/2.9/
https://download.owncloud.com/desktop/testpilotcloud/daily/2.9/

Please enable HTTP logging for your tests:

[Screenshot: Bildschirmfoto 2021-07-23 um 11 17 44]

or

QT_LOGGING_RULES='sync.httplogger=true' \
  /PATH/TO/CLIENT \
  --logdebug --logfile <file>

More information:
https://doc.owncloud.com/desktop/appendices/troubleshooting.html#control-log-content

@wkloucek
Contributor

@michaelstingl here is the log of one sync run
sync-run.log

@michaelstingl
Contributor

@michaelstingl here is the log of one sync run
sync-run.log

Strange, it says 2.8.2, but it tries TUS. Could you post the version information from the settings window?

07-23 11:37:12:816 [ info sync.httplogger ]:	"18059704-3e08-4329-8ecb-0ff94f7b310b: 
Request: POST https://ocis.team.owncloud.works/remote.php/dav/files/wkloucek/ 
Header: { X-OC-Mtime: 1627027821,
 Content-Type: application/offset+octet-stream,
 Content-Length: 1816560956,
 Upload-Offset: 0,
 Tus-Resumable: 1.0.0,
 Upload-Metadata: filename L2hlaWMxNTAyYS50aWY=,checksum U0hBMSAyNjlmOWExNjY5YTY5NjY3Yjc3NTA1OTZhN2U2OWQ1N2RjYjYxZmYw,
 Upload-Length: 1816560956,
 Authorization: Bearer [redacted],
 User-Agent: Mozilla/5.0 (Linux) mirall/2.8.2 (ownCloud,
 manjaro-5.13.1-3-MANJARO ClientArchitecture: x86_64 OsArchitecture: x86_64),
 Accept: */*,
 X-Request-ID: 18059704-3e08-4329-8ecb-0ff94f7b310b,
 }
Data: [1816560956 bytes of application/offset+octet-stream data]"

Also strange: the log doesn't contain a response for this request (X-Request-ID: 18059704-3e08-4329-8ecb-0ff94f7b310b).

@michaelstingl
Contributor

@felix-schwarz could you try to upload a 2GB file from iOS and post the related log? (http, timing etc)
ocis.ocis-keycloak.latest.owncloud.works
(einstein:relativity)

@wkloucek
Contributor

@michaelstingl
[Screenshot]

@felix-schwarz

felix-schwarz commented Jul 23, 2021

@michaelstingl Uploading a 2 GB file in a single request over the course of 42 minutes results in a 500 Internal Error:

Request:

2021-07-23 13:00:26.335000+0200 ownCloud[16582:415306] ⚪️ | [HTTP, Request, …] Sending request:
# REQUEST ---------------------------------------------------------
URL:         https://ocis.ocis-keycloak.latest.owncloud.works/remote.php/dav/files/einstein/
Error:       -
Req Signals: coreOnline, authAvailable
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
POST /remote.php/dav/files/einstein HTTP/1.1
Host: ocis.ocis-keycloak.latest.owncloud.works
Content-Length: 2147483654
Tus-Resumable: 1.0.0
X-Request-ID: ECBC4F31-7C9B-4B14-9685-5035B712C443
Original-Request-ID: ECBC4F31-7C9B-4B14-9685-5035B712C443
Content-Type: application/offset+octet-stream
User-Agent: ownCloudApp/11.7.0 (App/197; iOS/14.5; iPhone)
Authorization: Bearer [redacted]
Accept-Language: en
Upload-Metadata: filename MmdiLnRlc3Q=,checksum U0hBMSBiZGQ2OTU1N2RmOWYzNDI2MjBmYjJhOTk5MWFkNGRhMTNiNzJhNWM4
Upload-Length: 2147483654
Upload-Offset: 0

[Contents from /Users/felix/Library/Developer/CoreSimulator/Devices/9D9DE66B-C996-4B38-BAE2-089F1BF81DA1/data/Containers/Shared/AppGroup/64F26195-0BCB-4E64-9D81-872275E4EBA4/Vaults/C5323898-E184-40DE-BEE4-E995292D85FA/TUS/CBA3CB2C-D474-4871-828C-3CA8008DD03A/006CE8C3-EB93-4AB5-998E-C2B925CB89B5]
----------------------------------------------------------------- [… POST, RequestID:ECBC4F31-7C9B-4B14-9685-5035B712C443, URLSessionTaskID:1] [OCHTTPPipeline.m:1182|FULL]

Response:

2021-07-23 13:42:50.833000+0200 ownCloud[16582:415306] ⚪️ | [HTTP, Response, …] Received response:
# RESPONSE --------------------------------------------------------
Method:      POST
URL:         https://ocis.ocis-keycloak.latest.owncloud.works/remote.php/dav/files/einstein/
Request-ID:  ECBC4F31-7C9B-4B14-9685-5035B712C443
Error:       -
Req Signals: coreOnline, authAvailable
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
500 INTERNAL SERVER ERROR
upload-offset: 2147483654
Access-Control-Allow-Origin: *
content-security-policy: default-src 'none';
x-xss-protection: 1; mode=block
x-download-options: noopen
access-control-expose-headers: Tus-Resumable, Location
access-control-allow-headers: Tus-Resumable, Upload-Length, Upload-Metadata, If-Match
x-permitted-cross-domain-policies: none
Location: https://ocis.ocis-keycloak.latest.owncloud.works/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjc4ODc4MDI2LCJpYXQiOjE2MjcwMzgwMjYsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy81MmI2YWRlOS1hYjEyLTRkOTUtOGJjZi1lNTlmNDE0YWU1OGIifQ.SX-RnP2dvFdhe-SbmLPLjGI18RrhbbEVIPR6da4fSHg
Date: Fri, 23 Jul 2021 11:42:51 GMT
tus-resumable: 1.0.0
x-robots-tag: none
Content-Length: 0
x-content-type-options: nosniff
Vary: Origin
x-frame-options: SAMEORIGIN
----------------------------------------------------------------- [… POST, RequestID:ECBC4F31-7C9B-4B14-9685-5035B712C443, URLSessionTaskID:1] [OCHTTPPipeline.m:1299|FULL]

Interestingly, tus-resumable, upload-offset and Location headers are returned regardless. And upload-offset: 2147483654 implies the file has been fully received.

@wkloucek
Contributor

The oCIS log for @felix-schwarz's problem:

[tusd] 2021/07/23 11:00:26 event="RequestIncoming" method="PATCH" path="/52b6ade9-ab12-4d95-8bcf-e59f414ae58b" requestId="8931569e-a06d-400c-b775-e4e5484e47fb" 
[tusd] 2021/07/23 11:00:26 event="ChunkWriteStart" id="52b6ade9-ab12-4d95-8bcf-e59f414ae58b" maxSize="2147483654" offset="0" 
[tusd] 2021/07/23 11:42:33 event="ChunkWriteComplete" id="52b6ade9-ab12-4d95-8bcf-e59f414ae58b" bytesWritten="2147483654" 
[tusd] 2021/07/23 11:42:51 event="ResponseOutgoing" status="204" method="PATCH" path="/52b6ade9-ab12-4d95-8bcf-e59f414ae58b" requestId="8931569e-a06d-400c-b775-e4e5484e47fb" 
{"level":"error","service":"storage","pkg":"rgrpc","traceid":"b6e3c1a4ac85d6988e5930c293522aff","user-agent":"grpc-go/1.39.0","from":"tcp://127.0.0.1:58342","uri":"/cs3.storage.registry.v1beta1.RegistryAPI/GetStorageProviders","start":"23/Jul/2021:11:42:51 +0000","end":"23/Jul/2021:11:42:51 +0000","time_ns":303127,"code":"PermissionDenied","time":"2021-07-23T11:42:51Z","message":"unary"}
{"level":"error","service":"storage","pkg":"rgrpc","traceid":"b6e3c1a4ac85d6988e5930c293522aff","error":"gateway: error calling GetStorageProvider: rpc error: code = PermissionDenied desc = auth: core access token is invalid","time":"2021-07-23T11:42:51Z","caller":"/go/pkg/mod/github.com/cs3org/[email protected]/pkg/rgrpc/status/status.go:163","message":"gateway: stat ref: path:\"/home/2gb.test\" :gateway: error calling GetStorageProvider: rpc error: code = PermissionDenied desc = auth: core access token is invalid"}
{"level":"error","service":"storage","pkg":"rhttp","traceid":"b6e3c1a4ac85d6988e5930c293522aff","path":"/home/2gb.test","status":{"code":15,"message":"gateway: stat ref: path:\"/home/2gb.test\" :gateway: error calling GetStorageProvider: rpc error: code = PermissionDenied desc = auth: core access token is invalid","trace":"b6e3c1a4ac85d6988e5930c293522aff"},"time":"2021-07-23T11:42:51Z","message":"grpc request failed"}
{"level":"error","service":"storage","pkg":"rhttp","traceid":"b6e3c1a4ac85d6988e5930c293522aff","host":"127.0.0.1","method":"POST","uri":"/remote.php/dav/files/einstein/","url":"/remote.php/dav/files/einstein/","proto":"HTTP/1.1","status":500,"size":0,"start":"23/Jul/2021:11:00:26 +0000","end":"23/Jul/2021:11:42:51 +0000","time_ns":2544444942760,"time":"2021-07-23T11:42:51Z","message":"http"}

@wkloucek
Contributor

@michaelstingl the log of ownCloud-2.9.0-daily20210723.4714:
sync-run.log

I didn't see any progress during the sync run:
[Screenshot]

But there was network ingress on the oCIS instance.

@felix-schwarz

(quoting the oCIS log from the comment above)

That looks like oCIS waits for the complete request to arrive before internally handing it down to the tus backend, at which point the token has already expired and the upload fails.
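A simplified illustration of that suspected failure mode (this is not the actual reva/oCIS code): if the access token is only validated after the request body has been fully consumed, any upload that takes longer than the token lifetime fails even though every byte arrived:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// uploadHandler sketches the suspected failure mode: the whole request body is
// read first (which can take longer than the token lifetime for a large, slow
// upload), and only afterwards is the access token validated.
func uploadHandler(tokenStillValid func(string) bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		n, err := io.Copy(io.Discard, r.Body) // receive the complete upload first
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// The token check happens *after* the transfer finished. If the upload
		// took longer than the token TTL, it now fails, matching the
		// "core access token is invalid" errors in the logs above.
		if !tokenStillValid(r.Header.Get("Authorization")) {
			http.Error(w, "core access token is invalid", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Upload-Offset", fmt.Sprint(n))
		w.WriteHeader(http.StatusNoContent)
	}
}

func main() {
	expiry := time.Now().Add(time.Minute) // pretend the token expires in one minute
	http.Handle("/upload", uploadHandler(func(string) bool { return time.Now().Before(expiry) }))
	_ = http.ListenAndServe(":8080", nil)
}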

@wkloucek
Contributor

(quoting @felix-schwarz's analysis above)

You're right, we need to investigate this.


@micbar
Contributor

micbar commented Jul 23, 2021

OK, make it configurable; the 10 min default is not good. We need to check other places.

The requirement from PM is "NEVER introduce ridiculous timeouts" 😄

@wkloucek
Contributor

wkloucek commented Jul 26, 2021

I got it wrong on Friday... The token is valid for 864000 minutes, since we set it in oCIS to a higher value. I will continue to investigate

EDIT:
see here

&cli.IntFlag{
	Name:        "transfer-expires",
	Value:       flags.OverrideDefaultInt(cfg.Reva.TransferExpires, 24*60*60), // one day
	Usage:       "Transfer token ttl in seconds",
	EnvVars:     []string{"STORAGE_TRANSFER_EXPIRES"},
	Destination: &cfg.Reva.TransferExpires,
},

and https://github.com/cs3org/reva/pull/1924/files

@wkloucek wkloucek changed the title Single big file upload with dekstop-client fails if OIDC token expires Single big file upload with desktop-client fails if OIDC token expires Jul 29, 2021
@AndreasSko

I noticed similar issues on my setup when uploading a larger file (~2 GB) and also saw similar error messages, especially core access token is invalid. This could be unrelated (if so, sorry for the noise 😅), but I noticed that after raising the JWT expiration time in the AccountResolver here

"expires": int64(60),

the issue seemed to be gone.

@wkloucek
Contributor

(quoting @AndreasSko's comment above)

Thanks for digging into this :-) There are dedicated upload and download tokens, which are valid for a much longer timespan and should be sufficient in this case. But there seems to be a codepath for uploads that still uses the core access token (even though, from my understanding, it shouldn't).

@wkloucek
Contributor

Possible fix: cs3org/reva#1941

@wkloucek
Contributor

After cs3org/reva#2151, clients shouldn't get a 500 anymore after a successful upload, even if the token has expired. They will instead get a 200, but without the file information, which can then be obtained via a PROPFIND; the desktop client handles this correctly.
If the token didn't expire during the upload, the file information is included in the response.
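On the client side, the PROPFIND fallback described above could look roughly like this (a sketch with a placeholder URL and token, not the desktop client's implementation):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// statAfterUpload fetches the file metadata with a depth-0 PROPFIND, the
// fallback for when the upload response is a 200 without file information.
func statAfterUpload(fileURL, token string) (string, error) {
	body := `<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:"><d:prop><d:getetag/><d:getcontentlength/></d:prop></d:propfind>`

	req, err := http.NewRequest("PROPFIND", fileURL, strings.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Depth", "0")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	xml, err := io.ReadAll(resp.Body)
	return string(xml), err
}

func main() {
	props, err := statAfterUpload("https://example.test/remote.php/dav/files/einstein/big.bin", "TOKEN")
	fmt.Println(props, err)
}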

@wkloucek
Contributor

Current situation (oCIS 1.13.0):

  • REVA tokens are now valid for 24h (previously 1 min) -> no more expiry expected
  • Both REVA tokens and transfer tokens are checked -> only transfer tokens should be checked: fix TUS uploads with transfer token only cs3org/reva#1941
  • The max chunk size can now be set in the capabilities: STORAGE_FRONTEND_UPLOAD_MAX_CHUNK_SIZE
  • With an unlimited chunk size, clients upload the whole file to the WebDAV endpoint in a single POST request. If the REVA token were to expire (assuming one lowers the token lifetime), the client would now receive a 200 OK after the upload instead of a 500 Internal Server Error, as before. The only downside is that we cannot return file information in that case, but clients fall back to a separate PROPFIND.

@michaelstingl
Contributor

  • Max chunk size can now be set in the capabilities: STORAGE_FRONTEND_UPLOAD_MAX_CHUNK_SIZE

This approach? tus/tus-resumable-upload-protocol#93

@wkloucek
Contributor

  • Max chunk size can now be set in the capabilities: STORAGE_FRONTEND_UPLOAD_MAX_CHUNK_SIZE

This approach? tus/tus-resumable-upload-protocol#93

No :-( It's in the capabilities -> max_chunk_size

{
  "ocs": {
    "data": {
      "capabilities": {
        "files": {
          "tus_support": {
            "version": "1.0.0",
            "resumable": "1.0.0",
            "extension": "creation,creation-with-upload",
            "max_chunk_size": 0,
            "http_method_override": ""
          }
        }
      }
    }
  }
}
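For reference, a client could read this capability roughly as follows (a sketch; the struct mirrors the JSON above, and max_chunk_size = 0 is taken to mean "no chunk-size limit", i.e. a single-request upload, as described earlier):

package main

import (
	"encoding/json"
	"fmt"
)

// maxChunkSize extracts files.tus_support.max_chunk_size from an OCS
// capabilities document like the one shown above.
func maxChunkSize(capabilitiesJSON []byte) (int64, error) {
	var doc struct {
		Ocs struct {
			Data struct {
				Capabilities struct {
					Files struct {
						TusSupport struct {
							MaxChunkSize int64 `json:"max_chunk_size"`
						} `json:"tus_support"`
					} `json:"files"`
				} `json:"capabilities"`
			} `json:"data"`
		} `json:"ocs"`
	}
	if err := json.Unmarshal(capabilitiesJSON, &doc); err != nil {
		return 0, err
	}
	return doc.Ocs.Data.Capabilities.Files.TusSupport.MaxChunkSize, nil
}

func main() {
	raw := []byte(`{"ocs":{"data":{"capabilities":{"files":{"tus_support":{"max_chunk_size":0}}}}}}`)
	size, err := maxChunkSize(raw)
	fmt.Println(size, err) // 0 <nil> -> no limit advertised, whole file in one request
}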

I'm not sure where to plan / schedule things like contributing to the TUS protocol, but I think it would benefit everyone to have this in TUS itself, even if the discussion looks quite dead. @dragotin @micbar is that something for an oCIS sprint?
