@pdurbin and I discussed possible ways to reduce the effort each Dataverse client developer spends creating and maintaining tests. It might be nice if
- there was a common bank of (sub)dataverses and files that covered a good spread of scenarios, used to assert that each client library (i.e., the two Pythons, one Java, one JavaScript, and R) downloads/uploads/searches/processes correctly. For a download test, the test suite confirms that the client returns a file matching a specific pre-existing file. For a metadata test, the test suite confirms that the client returns a dataset matching the pre-existing reference (e.g., a csv).
- a manifest file enumerates these files along with certain expected characteristics (e.g., md5, approximate file size). For now, I think a csv adequately meets this flat (un-hierarchical) need, where each row represents a file to be tested.
- a client's test suite doesn't code specifically for each file. It probably just loops over the manifest file. To add a new condition, only the manifest file and the file bank are modified.
- the manifest file and the expected files are eventually stored somewhere central that's easily accessible to the client developers. When someone hits a weird case (e.g., the pyDataverse developer finds a problem when processing a csv with a "txt" extension), they add that case to the test bank.
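To make the idea concrete, here's a minimal sketch of the manifest-driven loop. The manifest columns (`filename`, `md5`, `approx_size_bytes`), the inline manifest contents, and the `download` callable are all hypothetical, invented just for illustration; a real suite would fetch files through the client library under test and read the manifest from the shared bank.

```python
import csv
import hashlib
import io

# Hypothetical manifest: each row describes one file in the shared test bank.
# The md5 values here are the well-known digests of b"" and b"a".
MANIFEST = """filename,md5,approx_size_bytes
basic.csv,d41d8cd98f00b204e9800998ecf8427e,0
weird_extension.txt,0cc175b9c0f1b6a831c399e269772661,1
"""


def check_file(row: dict, content: bytes) -> list[str]:
    """Compare downloaded bytes against the manifest's expectations."""
    problems = []
    if hashlib.md5(content).hexdigest() != row["md5"]:
        problems.append(f"{row['filename']}: md5 mismatch")
    return problems


def run_manifest(download) -> list[str]:
    """Loop over every manifest row; `download(name) -> bytes` stands in
    for whatever the client library under test does to fetch a file."""
    problems = []
    for row in csv.DictReader(io.StringIO(MANIFEST)):
        problems += check_file(row, download(row["filename"]))
    return problems
```

The point of the sketch is the shape, not the details: adding a new weird case means appending a manifest row and dropping the file into the bank, with no per-file test code in any client.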
@skasberger, @rliebz, @tainguyenbui, and any others, please tell me if this isn't worth it, or there's a better approach, etc.
(This is different from #4 and #29, which involve the battery of tests/comparisons; #40 deals with the deployment of API keys used in testing.)