
Error reading files larger than 512KB #81

Open
karbowiak opened this issue Jul 6, 2014 · 56 comments

@karbowiak

Hi.

I'm having an issue where uploading files larger than 512KB works great, but reading them back fails.

I can read the first 512KB of the file, and then it just errors out.
Below 512KB, however, it works fine.

Error logs:
gdrivefs with debug

$ gdfs /root/.gdfscache /mnt/gdfs -d
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.22
flags=0x0000f7fb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40
# COPYING FILE UNDER 512KB
unique: 2, opcode: LOOKUP (1), nodeid: 1, insize: 49, pid: 1845
LOOKUP /32kb.png
getattr /32kb.png
   NODEID: 2
   unique: 2, success, outsize: 144
unique: 3, opcode: OPEN (14), nodeid: 2, insize: 48, pid: 1845
open flags: 0x8000 /32kb.png
   open[2] flags: 0x8000 /32kb.png
   unique: 3, success, outsize: 32
unique: 4, opcode: READ (15), nodeid: 2, insize: 80, pid: 1845
read[2] 32768 bytes from 0 flags: 0x8000
   read[2] 31793 bytes from 0
   unique: 4, success, outsize: 31809
unique: 5, opcode: GETATTR (3), nodeid: 2, insize: 56, pid: 1845
getattr /32kb.png
   unique: 5, success, outsize: 120
unique: 6, opcode: FLUSH (25), nodeid: 2, insize: 64, pid: 1845
flush[2]
   unique: 6, success, outsize: 16
unique: 7, opcode: RELEASE (18), nodeid: 2, insize: 64, pid: 0
release[2] flags: 0x8000
   unique: 7, success, outsize: 16
# COPYING FILE OVER 512KB
unique: 8, opcode: LOOKUP (1), nodeid: 1, insize: 59, pid: 1857
LOOKUP /Mining The EVE.pdf
getattr /Mining The EVE.pdf
   NODEID: 3
   unique: 8, success, outsize: 144
unique: 9, opcode: OPEN (14), nodeid: 3, insize: 48, pid: 1857
open flags: 0x8000 /Mining The EVE.pdf
   open[3] flags: 0x8000 /Mining The EVE.pdf
   unique: 9, success, outsize: 32
unique: 10, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 0 flags: 0x8000
   read[3] 131072 bytes from 0
   unique: 10, success, outsize: 131088
unique: 11, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 131072 flags: 0x8000
   read[3] 131072 bytes from 131072
   unique: 11, success, outsize: 131088
unique: 12, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 262144 flags: 0x8000
   read[3] 131072 bytes from 262144
   unique: 12, success, outsize: 131088
unique: 13, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 393216 flags: 0x8000
   read[3] 131072 bytes from 393216
   unique: 13, success, outsize: 131088
unique: 14, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 524288 flags: 0x8000
   read[3] 1 bytes from 524288
   unique: 14, success, outsize: 17
unique: 15, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 655360 flags: 0x8000
   unique: 15, error: -5 (Input/output error), outsize: 16
unique: 16, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 786432 flags: 0x8000
   unique: 16, error: -5 (Input/output error), outsize: 16
unique: 17, opcode: GETATTR (3), nodeid: 3, insize: 56, pid: 1857
getattr /Mining The EVE.pdf
   unique: 17, success, outsize: 120
unique: 18, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 524288 flags: 0x8000
   read[3] 1 bytes from 524288
   unique: 18, success, outsize: 17
unique: 19, opcode: READ (15), nodeid: 3, insize: 80, pid: 1857
read[3] 131072 bytes from 655360 flags: 0x8000
   unique: 19, error: -5 (Input/output error), outsize: 16
unique: 20, opcode: FLUSH (25), nodeid: 3, insize: 64, pid: 1857
flush[3]
   unique: 20, success, outsize: 16
unique: 21, opcode: RELEASE (18), nodeid: 3, insize: 64, pid: 0
release[3] flags: 0x8000
   unique: 21, success, outsize: 16
^C#       

gdrivefs.log

# COPYING FILE UNDER 512KB
# COPYING FILE OVER 512KB
2014-07-06 17:46:13,116 [FsUtility ERROR] There was an exception in [read]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (655360) exceeds length of data (524289).
2014-07-06 17:46:13,116 [gdrivefs.gdfs.gdfuse ERROR] Could not read data.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/gdfuse.py", line 243, in read
    return opened_file.read(offset, length)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (655360) exceeds length of data (524289).
2014-07-06 17:46:13,117 [FsUtility ERROR] FUSE error [FuseOSError] (5) will be forwarded back to GDFS from [read]: [Errno 5] Input/output error
2014-07-06 17:46:13,117 [FsUtility ERROR] There was an exception in [read]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (786432) exceeds length of data (524289).
2014-07-06 17:46:13,117 [gdrivefs.gdfs.gdfuse ERROR] Could not read data.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/gdfuse.py", line 243, in read
    return opened_file.read(offset, length)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (786432) exceeds length of data (524289).
2014-07-06 17:46:13,117 [FsUtility ERROR] FUSE error [FuseOSError] (5) will be forwarded back to GDFS from [read]: [Errno 5] Input/output error
2014-07-06 17:46:13,118 [FsUtility ERROR] There was an exception in [read]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (655360) exceeds length of data (524289).
2014-07-06 17:46:13,119 [gdrivefs.gdfs.gdfuse ERROR] Could not read data.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/gdfuse.py", line 243, in read
    return opened_file.read(offset, length)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/fsutility.py", line 77, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gdrivefs-0.13.14-py2.7.egg/gdrivefs/gdfs/opened_file.py", line 585, in read
    (offset, buffer_len))
IndexError: Offset (655360) exceeds length of data (524289).
2014-07-06 17:46:13,119 [FsUtility ERROR] FUSE error [FuseOSError] (5) will be forwarded back to GDFS from [read]: [Errno 5] Input/output error
^C

I tried using the latest version released on pip; the only thing I haven't tried is the source repo, but I assume 0.13.14 is the latest there as well.
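The IndexError in the log ("Offset (655360) exceeds length of data (524289)") suggests the opened-file buffer holds only the first 524,289 bytes, and any read past it raises instead of returning a short read. A minimal sketch of the distinction (`read_chunk` is a hypothetical helper, not GDriveFS's actual code): a FUSE read handler should clamp at end-of-data and return an empty result, since an uncaught exception is what becomes the EIO (-5) seen above.

```python
def read_chunk(data, offset, length):
    """Return up to `length` bytes of `data` starting at `offset`.

    Clamps at end-of-data: a request past the end yields b'' (a normal
    short read) instead of raising, which FUSE would surface as EIO.
    """
    if offset >= len(data):
        return b''  # EOF: empty read, not an error
    return data[offset:offset + length]

# Mirror the log: a buffer truncated to 524,289 bytes.
buf = b'x' * 524289
assert len(read_chunk(buf, 524288, 131072)) == 1  # the 1-byte read at 524288
assert read_chunk(buf, 655360, 131072) == b''     # past EOF: empty, no IndexError
```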

@fasterthanlime

Can reproduce with 0.13.14. Tried copying a file with 'dd', and I'm getting only the first 524,288 bytes (512 KiB).

output of gdfs -d:

[snip]
unique: 498, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 65536 bytes from 49152 flags: 0x8000
   read[8] 65536 bytes from 49152
   unique: 498, success, outsize: 65552
unique: 499, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 131072 bytes from 114688 flags: 0x8000
   read[8] 131072 bytes from 114688
   unique: 499, success, outsize: 131088
unique: 500, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 131072 bytes from 245760 flags: 0x8000
   read[8] 131072 bytes from 245760
   unique: 500, success, outsize: 131088
unique: 501, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 131072 bytes from 376832 flags: 0x8000
   read[8] 131072 bytes from 376832
   unique: 501, success, outsize: 131088
unique: 502, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 131072 bytes from 507904 flags: 0x8000
   read[8] 16385 bytes from 507904
   unique: 502, success, outsize: 16401
unique: 503, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 131072 bytes from 638976 flags: 0x8000
   unique: 503, error: -5 (Input/output error), outsize: 16
unique: 504, opcode: GETATTR (3), nodeid: 85, insize: 56, pid: 22536
getattr /some/file/here/anonymized.dat
   unique: 504, success, outsize: 120
unique: 505, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 16384 bytes from 524288 flags: 0x8000
   read[8] 1 bytes from 524288
   unique: 505, success, outsize: 17
unique: 506, opcode: GETATTR (3), nodeid: 85, insize: 56, pid: 22536
getattr /Music/Jars of Clay/1997 - Much Afraid/01 - Overjoyed.mp3
   unique: 506, success, outsize: 120
unique: 507, opcode: READ (15), nodeid: 85, insize: 80, pid: 22536
read[8] 16384 bytes from 524288 flags: 0x8000
   read[8] 1 bytes from 524288
   unique: 507, success, outsize: 17
unique: 508, opcode: FLUSH (25), nodeid: 85, insize: 64, pid: 22536
flush[8]
   unique: 508, success, outsize: 16
unique: 509, opcode: RELEASE (18), nodeid: 85, insize: 64, pid: 0
release[8] flags: 0x8000
   unique: 509, success, outsize: 16

Every read operation works until the 512KB limit, then there's an error -5 and dd gives up.

@fasterthanlime

I've been able to reproduce the issue with versions 0.13.13, 0.13.12, 0.13.10, 0.13.9, 0.13.8, 0.13.5, and 0.13.4, and then I've given up testing.

At this point, I'm thinking something's wrong with one of the dependencies of gdrivefs, or Google changed their API...

@karbowiak

I was working under the same assumption, so I tried to upgrade the google-api package, but that didn't do diddly squat :(

@karbowiak

After a bit of fiddling, and getting it to output debug logging instead of just warning logging, I've gotten a bit more information out of it (a LOT more, in fact).

I removed the huge chunk of data that fuse dumped, everything else is as is output to the log:
http://pastebin.com/kw5Yxq2P

@karbowiak

@dsoprea I tried clearing out the tmp directory between each test, in the hope the log would show something different, but it was the same.

Worst part, I suck at Python, so I can't even understand half of it /o\

@dsoprea

dsoprea commented Jul 6, 2014

Thanks for getting the ball rolling on this.

I'll have to look into it. I don't think I've ever had this problem (I've downloaded files larger than that).

Neither of you happen to have experience with Google's OAuth 2.0 Playground, do you?

There's a script called tools/gdfsdumpentry, which will allow us to see exactly what information Google is returning for that file/entry. Unfortunately, that script isn't pushed into the executable path, and, because of the way the package was configured, that file will therefore not be included when installing from pip. I'd fix that, right now, except that I'd be unable to test it (I'm currently on a Mac).

If you're using a clone of the project, you'll have that script, though. It should work. We could definitely get some direction if you can run it with a problematic file, and paste the output.

@karbowiak

@dsoprea sadly I don't. I've tried running gdfsdumpentry, but I'm not entirely sure how to tell it to do something.

So far I've tried ./gdfsdumpentry /root/.gdfscache --bypath mte.pdf (mte.pdf is a file in the dir), and also without --bypath and with {mte.pdf, 0} - pretty sure I'm doing it wrong tho :D

@dsoprea

dsoprea commented Jul 6, 2014

Try adding a prefixing slash, or the whole relative path if it's not in the root.

@karbowiak

Here we go..

./gdfsdumpentry /root/.gdfscache bypath /mte.pdf did the job

Data dumped

$ ./gdfsdumpentry /root/.gdfscache bypath /mte.pdf
<NORMAL ID= [0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B] MIME= [application/pdf] NAME= [mte.pdf] URIS= (1)>

[original]

mimeType: application/pdf
appDataContents: False
thumbnailLink: https://lh6.googleusercontent.com/1brWHkkNKWfMou_E3rAsrPpdwKUUsUlYLnB78H2kKZBNOe7TWozP3bq9WfAXftFj6Tg0Ej99OW2nYVLJLzYjhkOtIpQ=s220
labels: {u'restricted': False, u'starred': False, u'viewed': False, u'hidden': False, u'trashed': False}
etag: "gROZn7NlhxVF8deqr1tB7t1xA6k/MTQwNDY3MTc5MDk5Nw"
lastModifyingUserName: Michael Karbowiak
writersCanShare: True
id: 0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B
lastModifyingUser: {u'picture': {u'url': u'https://lh5.googleusercontent.com/-KiWZpEPjcdU/AAAAAAAAAAI/AAAAAAAAJco/3cSgTIzhQjU/s64/photo.jpg'}, u'kind': u'drive#user', u'displayName': u'Michael Karbowi                                                                       ak', u'permissionId': u'17074781870819237039', u'isAuthenticatedUser': True, u'emailAddress': u'[email protected]'}
title: mte.pdf
ownerNames: [u'Michael Karbowiak']
version: 361786
parents: [{u'isRoot': True, u'kind': u'drive#parentReference', u'id': u'0ACQzTx-QEtpBUk9PVA', u'selfLink': u'https://www.googleapis.com/drive/v2/files/0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B                                                                       /parents/0ACQzTx-QEtpBUk9PVA', u'parentLink': u'https://www.googleapis.com/drive/v2/files/0ACQzTx-QEtpBUk9PVA'}]
shared: False
originalFilename: Mining The EVE.pdf
webContentLink: https://docs.google.com/uc?id=0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B&export=download
editable: True
kind: drive#file
fileExtension: pdf
fileSize: 9333398
createdDate: 2014-05-20T16:11:21.000Z
md5Checksum: f98e28f9696060169112002df0bfaacc
iconLink: https://ssl.gstatic.com/docs/doclist/images/icon_10_pdf_list.png
owners: [{u'picture': {u'url': u'https://lh5.googleusercontent.com/-KiWZpEPjcdU/AAAAAAAAAAI/AAAAAAAAJco/3cSgTIzhQjU/s64/photo.jpg'}, u'kind': u'drive#user', u'displayName': u'Michael Karbowiak', u'per                                                                       missionId': u'17074781870819237039', u'isAuthenticatedUser': True, u'emailAddress': u'[email protected]'}]
alternateLink: https://docs.google.com/file/d/0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B/edit?usp=drivesdk
copyable: True
modifiedByMeDate: 2014-07-06T18:36:30.997Z
downloadUrl: https://doc-14-50-docs.googleusercontent.com/docs/securesc/gj7rpq4ahpt8uhuhl7igudjj2i4lp51v/bmq82c1tocebfe9aelbatqt6a3380hfa/1404669600000/17074781870819237039/17074781870819237039/0ByQzT                                                                       x-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B?h=16653014193614665626&e=download&gd=true
userPermission: {u'kind': u'drive#permission', u'etag': u'"gROZn7NlhxVF8deqr1tB7t1xA6k/1dQ4vV1RvdMKKpEUPX7BoWcsn14"', u'role': u'owner', u'type': u'user', u'id': u'me', u'selfLink': u'https://www.goog                                                                       leapis.com/drive/v2/files/0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B/permissions/me'}
quotaBytesUsed: 9333398
headRevisionId: 0ByQzTx-QEtpBMVRoNFR3QW9JTUk1bkRwS3luUUVFdjBkMHRNPQ
selfLink: https://www.googleapis.com/drive/v2/files/0ByQzTx-QEtpBSHJhUWRldzV2VjRsMmpjeW4wQWE0V3BmQ19B
modifiedDate: 2014-07-06T18:36:30.997Z

[extra]

is_visible: True
is_directory: False
modified_date: 2014-07-06 18:36:30.997000+00:00
mtime_byme_date: 2014-07-06 18:36:30.997000+00:00
parents: [u'0ACQzTx-QEtpBUk9PVA']
mtime_byme_date_epoch: 1404668190.0
download_types: [u'application/pdf']
atime_byme_date: None
atime_byme_date_epoch: None
modified_date_epoch: 1404668190.0

@dsoprea

dsoprea commented Jul 6, 2014

Assuming that 9333398 is the correct size, it definitely looks like a problem with GDFS.

Dustin


@karbowiak

File is indeed 9.1MB in size.
9333398 bytes according to Windows (9334784 bytes on disk)

http://files.karbowiak.dk/2014-07-06_21-06-38.png

@dsoprea

dsoprea commented Jul 6, 2014

I believe you. I'll let you know when I have something.


@karbowiak

Awesome, thanks for taking a look at it :)

@dsoprea

dsoprea commented Jul 6, 2014

Happy to help.


@fasterthanlime

This is a complete shot in the dark, but from my googling online I've seen that to upload files to GDrive using the API, you need to send it in 512KB chunks - could it be that they've changed the download endpoints to also work in 512KB chunks? For uniformity? (And also perhaps to keep their response handling times low)

I'm pretty sure the GDrive sync desktop client (for Windows) is able to work in 512KB chunks for both download & upload - then again it isn't the most stable thing in the world, and they may be using private/undocumented APIs.
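The 512KB-chunk hypothesis above is easy to model. Below is a minimal sketch (not GDriveFS code; `fetch` is a hypothetical callback standing in for a range-limited download, which against the real API would issue an HTTP `Range` request) showing how arbitrary reads can be stitched together from fixed 512 KiB windows, so a client never depends on a single response covering the whole file:

```python
CHUNK = 512 * 1024  # 512 KiB window, matching the hypothesized server chunk size

class WindowedReader(object):
    """Serve arbitrary (offset, length) reads from a source that only
    hands out fixed-size chunks. `fetch(start, size)` returns the bytes
    at [start, start+size); here it is backed by an in-memory blob."""

    def __init__(self, fetch, total_size):
        self.fetch = fetch
        self.total = total_size

    def read(self, offset, length):
        out = []
        end = min(offset + length, self.total)  # clamp at EOF, never raise
        while offset < end:
            chunk_start = (offset // CHUNK) * CHUNK  # align to a window
            chunk = self.fetch(chunk_start, CHUNK)
            lo = offset - chunk_start
            out.append(chunk[lo:lo + (end - offset)])
            offset += len(out[-1])
        return b''.join(out)

# Demo against an in-memory "remote" file larger than one chunk:
blob = bytes(bytearray(i % 251 for i in range(3 * CHUNK + 123)))
r = WindowedReader(lambda s, n: blob[s:s + n], len(blob))
assert r.read(0, len(blob)) == blob                       # spans chunk edges
assert r.read(CHUNK - 10, 20) == blob[CHUNK - 10:CHUNK + 10]
```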

@karbowiak

Any headway on this? :)

@dsoprea

dsoprea commented Jul 11, 2014

Sorry, no. I'll try to work on this, this weekend.


@karbowiak

I completely forgot to check in, and when I remembered it, I was all kinds of happy: "YAY IT'LL WORK"...
And now I'm just back at sadness :P

@dsoprea

dsoprea commented Jul 20, 2014

Son of a gun. Okay... I'm working on it, today.


@flowsworld

Any news on this? Would it speed up solving this issue if I donated $50?

I don't want to hijack this issue, but I don't want to open a new issue for this. Files containing an umlaut/a special character (üäöß) in their name can't be read.

@dsoprea

dsoprea commented Jul 23, 2014

I'm spread thin. However, the more people that indicate they're experiencing the same issue, the more of a fire it becomes.

The unicode problem that you're experiencing probably relates to something I observed a while ago, but no one else has brought up: #2. Please copy your second comment there.


@flowsworld

Ok. To be fair to you, there are even more people waiting to get this resolved. You can find our intention here: https://forums.plex.tv/index.php/topic/113159-an-alternative-to-bitcasa-google-drive-for-work/

@dsoprea

dsoprea commented Jul 23, 2014

I read the thread. My bad. I'll get it done ASAP.

@karbowiak That's been a favorite clip (Jim Carrey on Letterman) of mine for a long time. This is the original, for those that are unfamiliar: https://www.youtube.com/watch?v=dNE69WyL0U0

Dustin


@karbowiak

@dsoprea haha, I was looking for an image to convey my happiness, and it fit perfectly - but now that I've seen the actual footage, it's just that much better!

Do let me/us know if you need testing done; I'll be more than happy to ruin some data to help the cause :)

@ghost

ghost commented Aug 9, 2014

I also would love to contribute my logs to this issue. Currently this is an issue with most programs that mount Google Drive, including:

NetDrive
NetDrive2
google-drive-ocamlfuse
GDriveFS
Visual Subst

So far ExpanDrive is the only mount program I've found that can get around the streaming issue. Sadly it only works for Mac and Windows. I would love to see GDriveFS be able to blow past this limitation!

@dsoprea

dsoprea commented Aug 9, 2014

I desperately want to work on this for you guys, but I just lack availability until I finish up a map-reduce solution that I've been working on. Then, I'll be all over this. It's for my corporate build and deployment process, but if I do it in my free time, then I get to open-source it. That's what it's all about, right?

Dustin


@ghost

ghost commented Aug 10, 2014

Of course! Don't let anyone tell you different :P

@dellipse

I, too, am having the same issue. However, I am not a programmer, nor do I play one on TV.

I will be patiently waiting. Just remember your legions of fans that are awaiting your magic.

Thank you for all the hard work!

@karbowiak

Hey man!

How's it going with your corporate map-reduce project? :)
Just wanted to "bump" the issue, in case it had been forgotten - not that I think you could forget our lovely faces :P

@dsoprea

dsoprea commented Aug 27, 2014

The corporate use-cases gave me a practical reason to commission it within the off-hours. It's finishing up now, with perhaps another week or two and a few last discrete tasks remaining. I've already reserved the time after that for the GDriveFS stuff.

Dustin


@ghost

ghost commented Sep 21, 2014

Hello,

would just like to ask if you have had time to check this problem out, or if anyone found a workaround to this problem?

@JamieKitson

  1. Does this occur with files uploaded via the Google UI?

Yes.

  2. Is the size of the problematic files correct in the Google UI?

Yes.

  3. Is it a problem with every file larger than that size?

All that I have seen.

I am now getting "Input/output error" when I try to copy files from my gdrive.

@saqebakhter

Running into the same issue here as well

@karbowiak

Any update on this issue, @dsoprea? :)

If wanted, I'm sure we can pile some money into it via Bountysource.
https://www.bountysource.com/trackers/387733-dsoprea-g-drive-fs

@dsoprea

dsoprea commented Nov 1, 2014

Sorry for not being as responsive as I should be. I love this project, and there are so many people that are now supporting it that I need to get back on it. I'll be working on this issue, specifically, tomorrow.

Thanks for your patience. I appreciate it more than you know.

@dsoprea

dsoprea commented Nov 2, 2014

@karbowiak Alright. I relent. I've placed a donation icon at the top of the documentation.

@karbowiak

@dsoprea awesome, I look forward to reading from GDrive! <3

I'll make sure to send a few bucks your way once my credit card arrives. Managed to get it stolen last week while shopping /o\

@reallistic

@dsoprea I received the following error when attempting to donate:

Your payment can't be completed because one of the receivers can't accept payments at this time.

@karbowiak

@dsoprea I just received the same message that @rxsegrxup received:

Your payment can't be completed because one of the receivers can't accept payments at this time. /o\

@dsoprea

dsoprea commented Nov 8, 2014

It's Paypal. They flagged me as a potential terrorist, and at least four of the verification methods are broken. I had to switch accounts. Should be fine, now.

@karbowiak

@dsoprea as a potential terrorist? lol.. Fucking Paypal

@dsoprea

dsoprea commented Nov 8, 2014

Thank you for your donation and support, Michael.

@karbowiak

No problem, @dsoprea - I would've donated more, but Christmas is right around the corner; I shall revisit the donate link once December rolls around :)

@reallistic

When I first read @dsoprea's comment I thought to myself "You're welcome, but I didn't donate yet" LOL, completely forgetting how common the name "Michael" is. I know a guy that works for PayPal - I'll definitely be asking him about this lol

@dsoprea

dsoprea commented Nov 8, 2014

Thank you for your donation and support, @rxsegrxup.

@dsoprea

dsoprea commented Nov 8, 2014

I was taken away from this for a couple of days, but I'm currently rebuilding the way that we manage open-files. I have a feeling that the size problem was a consequence of this.

@dsoprea

dsoprea commented Nov 24, 2014

This problem is likely fixed (in the "development" branch). There are a couple of remaining issues that keep me from merging it, but it's very close. I'd say that I'd try to get it done this week, but Thanksgiving is going to interfere.

@reallistic

@dsoprea any chance you could provide a high-level explanation of those bugs?

@dsoprea

dsoprea commented Nov 24, 2014

There are some anomalies with how we process the change-events that we subscribe to from GD. I have to debug why change-processing interferes with normal operations that are happening concurrently. I generally debug with this turned off, to make it easier to control what's going on. But when I reenabled it after making the other changes, something happening in the change-thread started interrupting the file-upload.

The other issue has to do with the vim editor not saving files correctly. I often use vim to validate the current design because it has such an immensely complicated file-usage pattern... it opens, closes, modifies, and unlinks so many files when opening and closing files that it's impossible for a general usage bug to sneak by. For some reason none of its operations fail, but the file that was created ends up empty. This needs to be investigated.

@dsoprea

dsoprea commented Dec 9, 2014

This should be fixed in this release: https://github.com/dsoprea/GDriveFS/releases/tag/0.14.0

Please confirm.
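Since the gdfsdumpentry output earlier in the thread reports an md5Checksum for each file, a simple way to confirm the fix is to copy a file larger than 512 KiB off the mount and compare hashes. A hedged sketch (the mount path in the comment is hypothetical; the checksum shown is the one from the dump above):

```python
import hashlib
import os
import tempfile

def md5_of(path, blocksize=131072):
    """Stream a file through MD5 in 128 KiB reads (the same request
    size FUSE used in the logs above), so large files never need to
    fit in memory."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(blocksize), b''):
            h.update(block)
    return h.hexdigest()

# In practice, compare against the md5Checksum that gdfsdumpentry
# reported, e.g. (path hypothetical):
#   md5_of('/mnt/gdfs/mte.pdf') == 'f98e28f9696060169112002df0bfaacc'

# Self-check with a local temporary file just over 512 KiB:
payload = b'x' * (600 * 1024)
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(payload)
tmp.close()
assert md5_of(tmp.name) == hashlib.md5(payload).hexdigest()
os.unlink(tmp.name)
```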

@dsoprea

dsoprea commented Dec 11, 2014

Can someone confirm the fix?

@JamieKitson

Just did a quick test. When I copy a jpg out, it looks the right size but is reported to be corrupted.

How can I check what version I'm running?

@dsoprea

dsoprea commented Dec 15, 2014

Sorry. I didn't see this.

$ pip freeze | grep gdrivefs
gdrivefs==0.14.2

$ pip search gdrivefs
gdrivefs                  - A complete FUSE adapter for Google Drive.
  INSTALLED: 0.14.2 (latest)

$ python
Python 2.7.5 (default, Mar  9 2014, 22:15:05) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gdrivefs
>>> gdrivefs.__version__
'0.14.2'

@dsoprea

dsoprea commented Dec 15, 2014

Please keep in contact. I did a check and my hashes agree.

@JamieKitson

Yes it looks like I am using 0.14.2:

$ pip2 freeze | grep gdrivefs
Warning: cannot find svn location for apsw==3.8.4.3-r1
gdrivefs==0.14.2

$ pip2 search gdrivefs
gdrivefs                  - A complete FUSE adapter for Google Drive.
  INSTALLED: 0.14.2 (latest)

$ python2
Python 2.7.9 (default, Dec 11 2014, 04:42:00) [GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gdrivefs
>>> gdrivefs.__version__
'0.14.2'

@shauder

shauder commented Feb 7, 2015

I also get this problem

root@debian:/mnt# cp /mnt/gdrive/www-data/big6.mp4 /home/shauder/
cp: cannot open `/mnt/gdrive/www-data/big6.mp4' for reading: Input/output error

root@debian:/mnt# pip search gdrivefs
gdrivefs                  - A complete FUSE adapter for Google Drive.
INSTALLED: 0.14.2 (latest)
root@debian:/mnt# python
Python 2.7.3 (default, Mar 13 2014, 11:03:55)
