9 changes: 8 additions & 1 deletion jl/hacks/jlhose.go
@@ -34,7 +34,10 @@ import (
"encoding/json"
"flag"
"fmt"
_ "github.com/bmizerany/pq"
_ "github.com/lib/pq"
// More up-to-date version of the library,
// including discussion of LISTEN/NOTIFY:
// https://github.com/lib/pq/pull/106
"github.com/donovanhide/eventsource"
"net"
"net/http"
@@ -67,10 +70,12 @@ func (art *articleEvent) Id() string {
// not too bad in practice - should always be ascending.
// But because stuff can be rescraped, use lastscraped as id?
// or date + id concatenation?
// Maybe MAX(created,lastscraped) as unixtime with id concatenated?
return strconv.Itoa(art.id)
}

func (art *articleEvent) Event() string {
// This probably should be more specific, i.e. "add", "update" and maybe even "delete"
Owner:
I originally thought so, but I've changed my mind. Different event types require that sources are stateful - that they have a concept of history and know which documents they've sent out previously. While journalisted has this info, other sources won't. I think we should be aiming to keep the burden on scrapers as low as possible.
It also implies that the consumer of the document stream is in sync with the source, which isn't necessarily the case (e.g. journalisted rescraping and sending out an update for an old article which churnalism doesn't have).
So, while it's more work on the consumer (i.e. churnalism) side to check whether incoming docs are already in the system, it makes scrapers vastly simpler, and we plan to write a lot more scrapers than consumers...
As long as scrapers are consistent with their document ordering and lastEventId scheme, the checking on the consumer side should be minimal - if the consumer gives a "lastEventId" to a source when it connects, it can be confident that it won't be served up some huge backlog of documents it's already been sent.
Blah. Not sure I'm being clear about this. I'll write an eventsource adaptor for a scraperwiki press release scraper to try and illustrate things a bit better.
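
To make the lastEventId point concrete, here is a minimal consumer-side sketch. It assumes the donovanhide/eventsource client API (Subscribe, which takes a lastEventId) plus a hypothetical endpoint URL, port and stored ID - none of this is part of the diff:

package main

import (
	"fmt"
	"log"

	"github.com/donovanhide/eventsource"
)

func main() {
	// Hypothetical values: the consumer's stored position and the jlhose endpoint.
	lastEventID := "12345"
	stream, err := eventsource.Subscribe("http://localhost:9999/new", lastEventID)
	if err != nil {
		log.Fatal(err)
	}
	for ev := range stream.Events {
		// Because lastEventID was supplied, the source should only replay
		// documents after that point, so the duplicate check here stays cheap.
		fmt.Printf("id=%s event=%s bytes=%d\n", ev.Id(), ev.Event(), len(ev.Data()))
	}
}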

return "article"
}

@@ -93,6 +98,7 @@ func findLatest(db *sql.DB) string {
}

// pumpArticles streams out a batch of articles starting just after lastEventID
// If you implement the registry interface, this function might become a bit simpler and just process a single article at a time
Owner:
You mean the Repository interface?
It didn't feel like the right interface to an SQL backend with potentially millions of documents behind it... I need to think about it a little more. Ideally I'd like to be able to use this eventsource stuff for bulk transfers as well as realtime(ish) updates.
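
Purely to illustrate that trade-off, a per-document interface versus a cursor/batch style might look something like the sketch below. The names and signatures are made up for this comment and exist nowhere in the code:

package jlhose // hypothetical package name

import "github.com/donovanhide/eventsource"

// Simpler for the pump loop, but awkward over an SQL table with millions of rows.
type DocumentSource interface {
	// NextAfter returns the single document following lastEventID, or nil when caught up.
	NextAfter(lastEventID string) (eventsource.Event, error)
}

// Maps more naturally onto a batched SQL query, and could serve bulk transfers too.
type BatchSource interface {
	// BatchAfter returns up to limit documents following lastEventID.
	BatchAfter(lastEventID string, limit int) ([]eventsource.Event, error)
}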

func pumpArticles(lastEventID string, db *sql.DB, eventServer *eventsource.Server) string {
batchSize := 1000

@@ -146,6 +152,7 @@ func main() {

srv := eventsource.NewServer()
defer srv.Close()
// Channel should probably be "/article" if you decide to use the event field to dictate the action required by the client
http.HandleFunc("/new", srv.Handler("article"))
l, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
if err != nil {