darius-codes/django-scrapping
Home task 22 (Django app & Scrapy)

1: Initial Setup

Clone the project into a new directory:

cd path/to/a/new/directory
git clone https://github.com/MaksNech/pylab2018_ht_22.git

2: Getting Started

Start the backend:

Inside the project directory, create a virtual environment:

virtualenv -p python3 env

Then activate the virtual environment:

source env/bin/activate

Install the packages listed in the requirements.txt file using pip:

pip install -r requirements.txt

Inside the project directory, run the app with the terminal command:

python3 manage.py runserver

Admin:

Username: admin
Email address: [email protected]
Password: 123

Start scraping data:

Inside the project directory, run Celery and Scrapy with these terminal commands:

celery -A store worker -l info

scrapy crawl net_a_porter_bags

Then open the site in a browser and press the 'scrapping' button.
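The repository's wiring between the button, Celery, and Scrapy is not shown here, but the flow can be sketched as a worker task that shells out to the crawler. This is a minimal sketch under that assumption; `build_crawl_command` and `run_spider` are hypothetical names, not taken from this project:

```python
import subprocess

def build_crawl_command(spider_name):
    """Build the argv list for `scrapy crawl <spider_name>`."""
    return ["scrapy", "crawl", spider_name]

def run_spider(spider_name="net_a_porter_bags"):
    """Run the spider in a subprocess so the web process is not blocked.

    In the real project this would likely be wrapped in a Celery task
    (e.g. decorated with `@app.task`) and enqueued by the view behind
    the 'scrapping' button via `.delay()`.
    """
    subprocess.run(build_crawl_command(spider_name), check=True)
```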

Clear Redis:

Enter the Redis CLI:

redis-cli

Clear the current database (FLUSHDB affects only the selected database; use FLUSHALL to clear all of them):

FLUSHDB

Show keys in the current database:

keys *
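`KEYS` accepts a glob pattern, so you can inspect a subset of keys (e.g. `keys celery*`) instead of everything. For intuition, Redis-style matching behaves roughly like Python's case-sensitive `fnmatch` globs — a sketch of the semantics, not the Redis implementation, and the key names below are made up:

```python
from fnmatch import fnmatchcase

def match_keys(keys, pattern):
    """Approximate Redis `KEYS <pattern>` matching: *, ?, and [...]
    behave much like case-sensitive fnmatch globs."""
    return [k for k in keys if fnmatchcase(k, pattern)]

keys = ["celery", "scrape:bags:1", "scrape:bags:2", "session:abc"]
print(match_keys(keys, "scrape:*"))  # → ['scrape:bags:1', 'scrape:bags:2']
```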
