Speeding up the data processing file (about 300 times faster) and hyper parameter adjustments #2
First of all, thanks for providing us with the baseline code.
This pull request contains an enhancement to the data-processing ipynb file. The previous code took a long time because it searched for a specific value and processed the data row by row. The adjustment here rests on the assumption that the pre-processed file holds a 2D array indexed by x-position and time stamp, both starting from 0 (which is intuitive), so each value can be looked up by direct indexing instead of searching.
Based on this, the time spent on data processing has been reduced to roughly 1/300 of the original (about 300 times faster).
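The idea behind the speedup can be sketched as below. This is a minimal illustration, not the actual notebook code: the array shape, the `lookup_slow`/`lookup_fast` names, and the random data are all hypothetical, assuming only that the array is indexed by (x-position, time stamp) starting from 0.

```python
import numpy as np

# Hypothetical pre-processed data: a 2D array indexed by
# (x-position, time stamp), both starting from 0.
rng = np.random.default_rng(0)
data = rng.random((100, 200))

def lookup_slow(arr, x, t):
    """Row-by-row search for the matching indices (the old approach, sketched)."""
    for i in range(arr.shape[0]):
        if i == x:
            for j in range(arr.shape[1]):
                if j == t:
                    return arr[i, j]
    return None

def lookup_fast(arr, x, t):
    """Direct indexing: valid precisely because indices start from 0."""
    return arr[x, t]

# Both approaches return the same value; only the cost differs.
assert lookup_slow(data, 42, 17) == lookup_fast(data, 42, 17)
```

Because the assumed indexing scheme maps (x-position, time stamp) straight onto array indices, the linear scan is unnecessary, which is where the order-of-magnitude speedup comes from.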
The pull request also includes some adjustments to the hyperparameters, based on a comparison with the original paper.