Add support for friedman test #390

Merged
rempsyc merged 8 commits into main from report_table_friedman on Sep 23, 2023
Conversation

@rempsyc (Member) commented on Aug 27, 2023

Addresses #309, part 2 (the Friedman test).


This fixes report() for chisq.test() and friedman.test() objects, which was broken for both.

library(report)
library(effectsize)
packageVersion("report")
#> [1] '0.5.7.11'

m <- as.table(rbind(c(762, 327, 468), c(484, 239, 477)))
dimnames(m) <- list(gender = c("F", "M"), party = c("Democrat", "Independent", "Republican"))
x <- chisq.test(m)

report(x)
#> Effect sizes were labelled following Funder's (2019) recommendations.
#> 
#> The Pearson's Chi-squared test testing the association between gender and party
#> suggests that the effect is positive, statistically significant, and small
#> (chi2 = 30.07, p < .001; Adjusted Cramer's v = 0.10, 95% CI [0.07, 1.00])
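
A possible cross-check (not part of the reprex above): the adjusted Cramer's V
that report() attaches here can also be computed directly with effectsize. The
call below is a sketch, and its argument names may differ across effectsize
versions.

# Illustrative only: bias-adjusted Cramer's V for the same contingency table;
# the value should be close to the 0.10 reported by report(x) above.
cramers_v(m, adjust = TRUE)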
  

wb <- aggregate(warpbreaks$breaks,
                by = list(w = warpbreaks$wool,
                          t = warpbreaks$tension),
                FUN = mean)
wb
#>   w t        x
#> 1 A L 44.55556
#> 2 B L 28.22222
#> 3 A M 24.00000
#> 4 B M 28.77778
#> 5 A H 24.55556
#> 6 B H 18.77778

x <- friedman.test(wb$x, wb$w, wb$t)

report_table(x)
#> Friedman rank sum test
#> 
#> Parameter1 | Parameter2 | Chi2(1) |     p | Kendall's W |        W  CI
#> ----------------------------------------------------------------------
#> wb$x, wb$w |       wb$t |    0.33 | 0.564 |        0.11 | [0.11, 1.00]

report(x)
#> Effect sizes were labelled following Landis' (1977) recommendations.
#> 
#> The Friedman rank sum test testing the difference in ranks between wb$x, wb$w
#> and wb$t suggests that the effect is positive, statistically not significant,
#> and slight agreement (chi2 = 0.33, p = 0.564; Kendall's W = 0.11, 95% CI [0.11,
#> 1.00])

Created on 2023-08-27 with reprex v2.0.2
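
For reference (again, not part of the PR itself), the Kendall's W printed above
can also be obtained directly from effectsize; the argument order below mirrors
the friedman.test() call and is an assumption that may need adjusting for your
effectsize version.

# Illustrative cross-check: Kendall's coefficient of concordance for the same
# response/groups/blocks design; should be close to the 0.11 reported above.
kendalls_w(wb$x, wb$w, wb$t)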

@rempsyc added the bug 🐛 (Something isn't working) label on Aug 27, 2023
@codecov bot commented on Aug 27, 2023

Codecov Report

Merging #390 (802a78e) into main (0b1046d) will increase coverage by 0.23%.
Report is 2 commits behind head on main.
The diff coverage is 85.43%.

❗ Current head 802a78e differs from pull request most recent head a5b86ea. Consider uploading reports for the commit a5b86ea to get more accurate results

@@            Coverage Diff             @@
##             main     #390      +/-   ##
==========================================
+ Coverage   72.25%   72.48%   +0.23%     
==========================================
  Files          47       48       +1     
  Lines        3316     3391      +75     
==========================================
+ Hits         2396     2458      +62     
- Misses        920      933      +13     
Files Changed               Coverage Δ
R/report_htest_friedman.R   66.66% <66.66%> (ø)
R/report.htest.R            91.42% <100.00%> (+2.07%) ⬆️
R/report_effectsize.R       67.27% <100.00%> (+0.60%) ⬆️
R/report_htest_chi2.R       88.23% <100.00%> (+6.01%) ⬆️
R/report_participants.R     93.19% <100.00%> (+0.41%) ⬆️

... and 1 file with indirect coverage changes

@rempsyc requested a review from @strengejacke on August 28, 2023 at 00:22
@rempsyc (Member, Author) commented on Sep 23, 2023

Some of the tests are failing again because of a 0.01 difference between the local and GitHub Actions snapshot values... It is annoying to have to modify them by hand to match the GitHub values; maybe skipping those tests is better? What's your opinion, @IndrajeetPatil?

@IndrajeetPatil (Member) commented:
Does the difference in values come from R-release vs. R-devel? In that case, you can skip the offending case on R-devel.
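
For illustration only (this is not code from the PR), such a skip could look
roughly like the sketch below; the way R-devel is detected and the test name
are assumptions.

library(testthat)
library(report)

# wb is the aggregated warpbreaks data from the reprex above
wb <- aggregate(warpbreaks$breaks,
                by = list(w = warpbreaks$wool, t = warpbreaks$tension),
                FUN = mean)

test_that("report() snapshot for friedman.test", {
  # On R-devel, R.version$status is "Under development (unstable)"; on a
  # release it is "". This heuristic is an assumption, not the package's code.
  skip_if(
    grepl("devel", R.version$status, ignore.case = TRUE),
    "snapshot values differ slightly on R-devel"
  )
  expect_snapshot(report(friedman.test(wb$x, wb$w, wb$t)))
})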

@rempsyc (Member, Author) commented on Sep 23, 2023

It seems to fail even on R-release (in the random-test-order and test-coverage checks), so it is not only devel...

@rempsyc (Member, Author) commented on Sep 23, 2023

OK, for some reason the snapshots were not updating locally, so I deleted them and ran the tests again, and now it seems fixed :)
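
(As an aside, and not what was actually run here, testthat also has helpers for
refreshing stale snapshots without deleting the _snaps files by hand; the file
name below is a guess.)

# Illustrative only: review and then accept the changed snapshots for one file.
testthat::snapshot_review("report.htest-friedman")
testthat::snapshot_accept("report.htest-friedman")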

@rempsyc (Member, Author) commented on Sep 23, 2023

Only the test-coverage check is still failing, but I don't understand this error, since it does not come up in the other checks.

Error: Error: Failure in `C:/Users/runneradmin/AppData/Local/Temp/Rtmp06bbNH/R_LIBS88c7cc2be/report/report-tests/testthat.Rout.fail`
        │ └─base::eval(expr, envir)
 14.               │   └─base::eval(expr, envir)
 15.               └─brms::do_call(rstan::stan_model, args)
 16.                 └─brms:::eval2(call, envir = args, enclos = envir)
 17.                   └─base::eval(expr, envir, ...)
 18.                     └─base::eval(expr, envir, ...)
 19.                       └─rstan (local) .fun(model_code = .x1)

@rempsyc merged commit 0a72e99 into main on Sep 23, 2023
25 of 26 checks passed
@rempsyc deleted the report_table_friedman branch on September 23, 2023 at 22:48
Labels
bug 🐛 Something isn't working
3 participants