
Conversation

@donovanhide
Contributor

Just some comments, feel free to ignore :-)
Owner


I originally thought so, but I've changed my mind. Different event types require that sources are stateful - that they have a concept of history and know which documents they've sent out previously. While journalisted has this info, other sources won't. I think we should be aiming to keep the burden of scrapers as low as possible.
It also implies that the consumer of the document stream is in sync with the source, which isn't necessarily the case (e.g. journalisted rescraping and sending out an update for an old article which churnalism doesn't have).
So, while it's more work on the consumer (i.e. churnalism) side to check if incoming docs are already in the system, it makes scrapers vastly simpler, and we plan to write a lot more scrapers than consumers...
As long as scrapers are consistent with their document ordering and lastEventId scheme, the checking on the consumer side should be minimal - if the consumer gives a "lastEventId" to a source when it connects, it can be confident that it won't be served up some huge backlog of documents it's already been sent.
Blah. Not sure I'm being clear about this. I'll write an eventsource adaptor for a scraperwiki press release scraper to try and illustrate things a bit better.
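To make the lastEventId scheme concrete, here's a minimal sketch of the source side (all names here are hypothetical, not from an actual adaptor): the scraper keeps its documents in a consistent order, stamps each one with an event id, and on reconnect replays only the documents after the consumer's Last-Event-ID. That's the entirety of the "state" a scraper needs.

```python
import json

def format_sse_event(event_id, doc):
    """Serialise one document as a Server-Sent Events message with an id line,
    so the consumer can record it as its lastEventId."""
    return "id: {}\ndata: {}\n\n".format(event_id, json.dumps(doc))

def events_since(documents, last_event_id=None):
    """Yield SSE events for every document after last_event_id.

    `documents` is an ordered list of (event_id, doc) pairs. The only burden
    on the scraper is keeping this ordering consistent between connections;
    no per-consumer history is required. If last_event_id is None (a fresh
    consumer), everything is sent.
    """
    seen = last_event_id is None
    for event_id, doc in documents:
        if seen:
            yield format_sse_event(event_id, doc)
        elif event_id == last_event_id:
            seen = True  # resume from the event after this one

# A consumer reconnecting with lastEventId "2" only receives document "3",
# rather than the whole backlog it has already been sent.
docs = [("1", {"url": "a"}), ("2", {"url": "b"}), ("3", {"url": "c"})]
resumed = list(events_since(docs, last_event_id="2"))
```

One caveat this sketch glosses over: if the consumer presents an id the source no longer recognises (e.g. after the scraper has been rebuilt), the source would need a policy such as replaying from the start, which is where the consumer-side "is this doc already in the system?" check earns its keep.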

