The current test suite uses the `Double_0_1` type, which has an `Arbitrary` instance that generates `Double` values between `0` and `1`. This is used whenever some dynamic randomness is needed in QuickCheck tests, e.g., picking an arbitrary file from the filesystem, where the list of files cannot be statically known.
The corresponding implementation of `shrink` is defined in terms of `shrink` for `Double`, which uses `shrinkDecimal`, which picks numbers with a shorter decimal representation. I think that's probably the wrong behaviour for this use-case. If it were biased towards smaller numbers, it would cause us to select files from earlier in the list of files or bits from earlier in the file; neither of these is really what we want when shrinking. However, it's not even biased towards smaller numbers but towards less-precise numbers, and I can't really predict what that does in this case.
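For reference, here is a minimal sketch of what the `Double_0_1` trick presumably looks like; the exact definitions (and the `pickByFraction` helper) are assumptions for illustration, not the real code from the test suite:

```haskell
import Test.QuickCheck

-- Hypothetical reconstruction of the Double_0_1 wrapper described above.
newtype Double_0_1 = Double_0_1 Double
  deriving (Show, Eq)

instance Arbitrary Double_0_1 where
  -- Generate a Double in the unit interval.
  arbitrary = Double_0_1 <$> choose (0, 1)
  -- The shrinker discussed above: it reuses Double's shrink
  -- (which is based on shrinkDecimal), keeping only values in [0, 1].
  shrink (Double_0_1 x) =
    [ Double_0_1 y | y <- shrink x, 0 <= y, y <= 1 ]

-- Typical use: turn the unit-interval value into an index into a list
-- whose length is only known at run time (e.g. files found on disk).
pickByFraction :: Double_0_1 -> [a] -> Maybe a
pickByFraction _ [] = Nothing
pickByFraction (Double_0_1 x) xs =
  Just (xs !! min (length xs - 1) (floor (x * fromIntegral (length xs))))
```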
If we want to clean up the `Double_0_1` trick, we would have two options:
1. Arbitrary generators. We can add a wrapper around QuickCheck's generators and permit tests to ask for an arbitrary random generator, either by returning the actual generator or by generating a random integer seed for a new generator. In that case we should not permit any shrinking, as there is no well-defined notion of shrinking randomness. (A sketch of this option follows the list.)
2. Embrace index selection. We can embrace that this is only used for index selection and generate a whole number that we interpret dynamically modulo the list length. In this case we would have more control over the semantics of shrinking, though I would argue that we still don't want to allow any shrinking: due to the interpretation modulo the list length, we wouldn't be able to ensure that shrinking causes us to select earlier files. (A sketch of this option also follows the list.)
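A sketch of what option 1 might look like; the `ArbitrarySeed` name and its details are illustrative assumptions, not existing code:

```haskell
import Test.QuickCheck
import System.Random (StdGen, mkStdGen)

-- Hypothetical wrapper: a random integer seed from which a test can build
-- its own generator. Deliberately no shrinking, since there is no
-- well-defined notion of "shrinking randomness".
newtype ArbitrarySeed = ArbitrarySeed Int
  deriving (Show, Eq)

instance Arbitrary ArbitrarySeed where
  arbitrary = ArbitrarySeed <$> arbitrary
  shrink _  = []          -- never shrink the seed

-- Tests turn the seed into a real generator when they need one.
seedToGen :: ArbitrarySeed -> StdGen
seedToGen (ArbitrarySeed s) = mkStdGen s
```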
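And a sketch of option 2, where we generate a non-negative whole number and interpret it modulo the list length at the use site; again, the names here are hypothetical:

```haskell
import Test.QuickCheck

-- Hypothetical wrapper for dynamic index selection. The stored number is
-- only meaningful modulo the length of whatever list it is applied to, so
-- shrinking it would not reliably select "earlier" elements; hence shrink
-- is disabled here as well.
newtype ArbitraryIndex = ArbitraryIndex Int
  deriving (Show, Eq)

instance Arbitrary ArbitraryIndex where
  arbitrary = ArbitraryIndex . getNonNegative <$> arbitrary
  shrink _  = []

-- Interpret the index against a list whose length is only known at run time.
pickByIndex :: ArbitraryIndex -> [a] -> Maybe a
pickByIndex _ [] = Nothing
pickByIndex (ArbitraryIndex i) xs = Just (xs !! (i `mod` length xs))
```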