
Feature/visualisation #9

Open - wants to merge 2 commits into master

Conversation

schneegor

No description provided.

obs_date__lte=end)

# TODO: create n subplots for all relevant parameters, or merge them into one plot
# TODO: how to loop over all elements and extract the name along the way?
Member

It might be easiest to just consume the CSV output produced by csv_observation_request. If you read the response into a StringIO buffer you could pass it to pandas via read_csv, and you'd have the data as a DataFrame without having to manipulate the observations directly. There'd be no network bottleneck because there'd be no HTTP communication - we'd just reuse the response objects.

The content would be accessible like this:

csv = csv_observation_request(request, station, start, end).content
# Continue processing
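For illustration, a minimal sketch of the StringIO-plus-pandas approach described above; the CSV payload here is a hand-made stand-in for the response's .content, since there's no running Django app in this snippet:

```python
from io import StringIO

import pandas as pd

# Stand-in for csv_observation_request(...).content (a bytes payload).
content = b"obs_date,temperature\n2015-10-11 00:00,12.3\n"

# Read the response body into a DataFrame without any HTTP round trip.
df = pd.read_csv(StringIO(content.decode("utf-8")))
```

From here the observations are ordinary DataFrame columns, so plotting n subplots or one merged plot is a matter of iterating over df.columns.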

erget (Member) commented Oct 11, 2015

Before the merge we need to add the following packages to install_requires in setup.py:

  • matplotlib
  • pySide
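Sketched as a fragment, the change would add the two packages to the dependency list (only the list is shown here; the rest of the setup() call is elided):

```python
# Hypothetical excerpt of setup.py - only the dependency list is shown.
install_requires = [
    "matplotlib",
    "pySide",
]
```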

url(r'^csv/stations/$', csv_stations),
r'(?P<end>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2})/', include([  # End date
    url(r'csv$', csv_observation_request),
    url(r'png$', png_observation_request),
])),
Member

It would be nice to build this so that png and csv are at the beginning of the URL, not at the end. I'm not sure how best to implement that - off the top of my head I'm thinking of putting those up front and then passing the harvested request type on to a date parser or similar, so that it knows which function to call. I prefer that because it would be more consistent with line 31, where the pattern begins with the request type. Before the merge we'll need to find a DRY way of expressing that, or modify the API for requesting a station list.

I'm thinking something like this:

data_patterns = [
    url(r'^(?P<content_type>[a-z]+)/'  # Content type
        r'(?P<station>[0-9]+)/'  # Station ID
        r'(?P<start>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2})'  # Start date
        r' - '
        r'(?P<end>[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2})/$',  # End date
        data_request),
]

And then in views.py something like:

DATA_TYPES_MAP = {"csv": csv_observation_request,
                  "png": png_observation_request}
def data_request(request, content_type, station, start, end):
    """Route a data request."""
    return DATA_TYPES_MAP[content_type](request, station, start, end)

Just a very rough prototype.
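The routing idea above can be tried outside Django; in this sketch the two view functions are trivial stand-ins that just echo their arguments, so only the dispatch-by-content-type mechanism is real:

```python
# Stand-ins for the Django views; the real ones take a request object
# and return an HttpResponse.
def csv_observation_request(request, station, start, end):
    return f"csv:{station}:{start}-{end}"

def png_observation_request(request, station, start, end):
    return f"png:{station}:{start}-{end}"

DATA_TYPES_MAP = {"csv": csv_observation_request,
                  "png": png_observation_request}

def data_request(request, content_type, station, start, end):
    """Route a data request to the handler for its content type."""
    return DATA_TYPES_MAP[content_type](request, station, start, end)
```

An unknown content_type raises KeyError here; in the real view that case should probably return an HTTP 404 instead.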
