Scrape The Facts #27
slifty added a commit that referenced this issue on Sep 27, 2018:

The credible content model is going to store... credible content! Basically, this will hold scraped text from publications that we want to apply truth goggles to. This is the very first schema and model we've tried to make. Issue #27 Scrape The Facts
slifty added a commit that referenced this issue on Sep 27, 2018:

The schema links up the previously specified model to GraphQL. It makes it possible to look up the objects. Issue #27 Scrape The Facts
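The commit itself isn't shown here, but a GraphQL schema that exposes a lookup for a model like the one described might be sketched as follows. This is purely illustrative: the type and field names are assumptions, not the project's actual schema.

```graphql
# Hypothetical sketch only -- field names are assumptions,
# not the project's real CredibleContent schema.
type CredibleContent {
  id: ID!
  publication: String
  text: String!
}

type Query {
  # Look up a single stored piece of credible content by id.
  credibleContent(id: ID!): CredibleContent
}
```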
slifty added a commit that referenced this issue on Sep 28, 2018:

node-schedule is a cron-like package that we're going to use to regularly scrape the Share The Facts API. You can read more about it [over here](https://www.npmjs.com/package/node-schedule). Issue #27 Scrape The Facts
slifty added a commit that referenced this issue on Oct 4, 2018:

node-schedule is a cron-like package that we're going to use to regularly scrape the Share The Facts API. You can read more about it [over here](https://www.npmjs.com/package/node-schedule). Issue #27 Scrape The Facts
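node-schedule accepts standard five-field cron strings (`minute hour day-of-month month day-of-week`) to decide when a job fires. As a stdlib-only illustration of what such a schedule means — not the package's internals — here is a tiny matcher that handles only `*` and plain numbers:

```javascript
// Sketch of cron-style matching, the scheduling idea behind a package
// like node-schedule. Only supports '*' and bare numbers; real cron
// syntax also allows ranges, lists, and steps.
function cronMatches(expr, date) {
  const [min, hour, dom, month, dow] = expr.split(/\s+/);
  const fields = [
    [min, date.getMinutes()],
    [hour, date.getHours()],
    [dom, date.getDate()],
    [month, date.getMonth() + 1], // cron months are 1-12
    [dow, date.getDay()],         // cron day-of-week: Sunday is 0
  ];
  return fields.every(([spec, value]) => spec === '*' || Number(spec) === value);
}
```

A schedule like `'0 3 * * *'` would then match any date whose time is 03:00, which is the kind of "scrape every night" cadence the commit describes.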
This has become a bit less important now that we are focusing on #17, and the wireframes are now more focused on entering raw text BEFORE it gets published.
We now have access to the Share The Facts API -- we need to locally cache them for our mad experiments.
For now the scraping should be done by something that COULD be automated, but it's OK for it to be triggered by a yarn script for now.
This will probably require completion of #26 so we have a place to store the results.
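As a sketch of the "locally cache" step described above: the Share The Facts API returns ClaimReview-style entries, so a minimal, idempotent merge into a local cache might look like the following. The field names `claimReviewed` and `url` follow the ClaimReview schema, but the cache shape and function names here are assumptions, not the project's actual model.

```javascript
// Hypothetical sketch of caching scraped fact-checks locally.
// Normalize one scraped entry into the shape we want to store.
function normalizeClaim(raw) {
  return {
    claim: (raw.claimReviewed || '').trim(),
    url: raw.url || null,
    scrapedAt: raw.scrapedAt || new Date().toISOString(),
  };
}

// Merge newly scraped claims into the cache, keyed by URL, so that
// re-running the scrape (e.g. from a yarn script) is idempotent.
function mergeIntoCache(cache, scraped) {
  const byUrl = new Map(cache.map((c) => [c.url, c]));
  for (const raw of scraped) {
    const entry = normalizeClaim(raw);
    byUrl.set(entry.url, entry);
  }
  return [...byUrl.values()];
}
```

Keying on URL means the trigger can be re-run safely by hand or by a scheduler without duplicating entries, which fits the "COULD be automated" requirement.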