Handle timeout exception from selenium and still return the page #58

Open
Description

@michelts

Hi @clemfromspace

I'm using `wait_time` and `wait_until` to wait for a page to be rendered, but sometimes the page renders in a way I'm not expecting. Without `wait_time` I can still see the rendered content (if it rendered fast enough), but with a wait time, Selenium raises a timeout exception and Scrapy never gets to parse the result at all.
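For reference, this is roughly how I'm issuing the request. It is only an illustration: the URL, the callback and the CSS selector are placeholders, not my real spider.

```python
import scrapy
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC


class ExampleSpider(scrapy.Spider):
    # Placeholder spider showing how wait_time / wait_until are passed
    # to SeleniumRequest.
    name = 'example'

    def start_requests(self):
        yield SeleniumRequest(
            url='https://example.com/page',       # placeholder URL
            callback=self.parse_result,
            wait_time=10,                          # seconds given to WebDriverWait
            wait_until=EC.presence_of_element_located(
                (By.CSS_SELECTOR, '#content')      # placeholder selector
            ),
        )

    def parse_result(self, response):
        # Today this callback is never reached if the wait condition times out.
        yield {'html': response.text}
```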

I'm not sure whether the current behavior is useful as it is. I think the approach should be the opposite: handle the exception and still return whatever content was found to Scrapy, so I can at least inspect the screenshot or the HTML. A sketch of what I mean is below.
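Something along these lines in the middleware is what I have in mind. This is only a sketch against my understanding of `SeleniumMiddleware.process_request`; the real method has more to it (scripts, cookies, etc.), and the exact structure may differ:

```python
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait


def process_request(self, request, spider):
    """Sketch: if the wait condition times out, keep the page anyway."""
    self.driver.get(request.url)

    if request.wait_until:
        try:
            WebDriverWait(self.driver, request.wait_time).until(request.wait_until)
        except TimeoutException:
            # The condition never became true, but the driver still holds
            # whatever was rendered; fall through and return it to Scrapy
            # instead of letting the exception abort the request.
            spider.logger.warning('wait_until timed out for %s', request.url)

    if request.screenshot:
        request.meta['screenshot'] = self.driver.get_screenshot_as_png()

    body = str.encode(self.driver.page_source)
    request.meta.update({'driver': self.driver})

    return HtmlResponse(
        self.driver.current_url,
        body=body,
        encoding='utf-8',
        request=request,
    )
```

That way the callback always gets a response, and the timeout becomes a warning rather than a lost page.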
