ls: Input/output error #161

@zyxmon

I have built GDriveFS for Entware - https://github.com/Entware-ng/Entware-ng
(I used pip install for testing).
The FUSE mount itself seems to work: gdfs(/opt/var/cache/gdfs.creds) on /opt/gdrive type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
When I try to access the mounted folder I get an Input/output error (a small script to pin down the failing call follows after the log).
Here is the debug log:

>GD_DEBUG=1 gdfs /opt/var/cache/gdfs.creds /opt/gdrive
2016-07-06 06:42:14,275 [gdrivefs.utility INFO] No extension-mapping was found.
2016-07-06 06:42:14,611 [__main__ DEBUG] Mounting GD with creds at [/opt/var/cache/gdfs.creds]: /opt/gdrive
2016-07-06 06:42:14,615 [gdrivefs.gdfs.gdfuse DEBUG] FUSE options:
{}
2016-07-06 06:42:14,619 [gdrivefs.gdfs.gdfuse DEBUG] PERMS: F=777 E=666 NE=444
2016-07-06 06:42:14,673 [gdrivefs.gdtool.oauth_authorize INFO] Credentials have expired. Attempting to refresh them.
2016-07-06 06:42:14,675 [gdrivefs.gdtool.oauth_authorize INFO] Doing credentials refresh.
2016-07-06 06:42:14,679 [oauth2client.client INFO] Refreshing access_token
2016-07-06 06:42:15,282 [gdrivefs.gdtool.drive DEBUG] Getting authorized HTTP tunnel.
2016-07-06 06:42:15,287 [gdrivefs.gdtool.drive DEBUG] Got authorized tunnel.
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.18
flags=0x0000047b
max_readahead=0x00020000
2016-07-06 06:42:16,522 [gdrivefs.gdfs.fsutility DEBUG] --------------------------------------------------
2016-07-06 06:42:16,524 [gdrivefs.gdfs.fsutility DEBUG] >>>>>>>>>> init(23) >>>>>>>>>> (0)
2016-07-06 06:42:16,528 [gdrivefs.gdfs.fsutility DEBUG] DATA: path= [/]
2016-07-06 06:42:16,530 [gdrivefs.gdfs.gdfuse INFO] Activating change-monitor.
2016-07-06 06:42:16,876 [gdrivefs.gdfs.fsutility DEBUG] <<<<<<<<<< init(23) (0)
2016-07-06 06:42:16,879 [gdrivefs.gdtool.drive DEBUG] Getting authorized HTTP tunnel.
2016-07-06 06:42:16,883 [gdrivefs.gdtool.drive DEBUG] Got authorized tunnel.
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 10738
getattr /
2016-07-06 06:42:43,640 [gdrivefs.gdfs.fsutility DEBUG] --------------------------------------------------
2016-07-06 06:42:43,642 [gdrivefs.gdfs.fsutility DEBUG] >>>>>>>>>> getattr(4) >>>>>>>>>> (10738)
2016-07-06 06:42:43,644 [gdrivefs.gdfs.fsutility DEBUG] DATA: fh= [None]  raw_path= [/]
2016-07-06 06:42:43,659 [gdrivefs.cache.cacheclient_base DEBUG] CacheClientBase(CacheClientBase,28800)
2016-07-06 06:42:43,662 [gdrivefs.cache.cache_agent INFO] Starting cache-cleanup thread: <gdrivefs.cache.cache_agent.CacheAgent object at 0xf65858>
2016-07-06 06:42:43,670 [gdrivefs.cache.cache_agent INFO] Cache-cleanup thread running: <gdrivefs.cache.cache_agent.CacheAgent object at 0xf65858>
2016-07-06 06:42:43,669 [gdrivefs.gdtool.drive INFO] Getting client for parent-listing.
2016-07-06 06:42:43,675 [gdrivefs.gdtool.drive INFO] Listing entries over child with ID [0AP09GPqvAIMZUk9PVA].
2016-07-06 06:42:44,902 [gdrivefs.gdtool.drive DEBUG] (1) entries were retrieved.
2016-07-06 06:42:44,908 [gdrivefs.cache.volume DEBUG] Recursively pruning entry with ID [0AP09GPqvAIMZUk9PVA].
2016-07-06 06:42:44,928 [gdrivefs.gdfs.fsutility DEBUG] <<<<<<<<<< getattr(4) (10738)
   unique: 2, success, outsize: 120
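
To pin down which call actually returns the error, something like the following could be run on the router. This is a minimal sketch of my own, not part of gdrivefs; the mount point is taken from the mount output above. EIO (errno 5) is what FUSE typically returns when the filesystem process raises an unhandled exception or has died.

```python
import errno
import os

MOUNT_POINT = '/opt/gdrive'  # path from the mount output above

# ls normally does a stat() followed by an opendir()/readdir();
# checking both separately shows which one FUSE rejects.
try:
    os.stat(MOUNT_POINT)
    print('stat ok')
except OSError as e:
    print('stat failed: errno=%d (%s): %s'
          % (e.errno, errno.errorcode.get(e.errno, '?'), e.strerror))

try:
    entries = os.listdir(MOUNT_POINT)
    print('readdir ok, %d entries' % len(entries))
except OSError as e:
    # errno 5 (EIO) here would match the ls error.
    print('readdir failed: errno=%d (%s): %s'
          % (e.errno, errno.errorcode.get(e.errno, '?'), e.strerror))
```

The log above ends right after a successful getattr on /, so the EIO presumably comes from a later opcode that never made it into the log.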

I am running it on a mipsel device where part of the filesystem is read-only. Maybe it is a cache issue?
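
If it is the cache, here is a quick writability check; the candidate directories below are guesses on my part, I have not confirmed where gdrivefs keeps its cache or temporary files (on this router only /opt is persistent and writable):

```python
import os
import tempfile

# Hypothetical candidates; adjust to wherever gdrivefs actually writes.
CANDIDATES = ['/tmp', '/var/tmp', os.path.expanduser('~'), '/opt/var/cache']

for path in CANDIDATES:
    try:
        # Creating and deleting a temp file proves the directory is writable.
        with tempfile.NamedTemporaryFile(dir=path):
            print('%-20s writable' % path)
    except OSError as e:
        print('%-20s NOT writable (%s)' % (path, e.strerror))
```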

BTW, I have no problems running davfs2 and encfs on this router.
