Study 1: Distraction from smartwatches

We consider a within-subject design from Brodeur et al. (2021), who ran an experiment in a virtual reality environment to compare how much different devices, including smartwatches, distract drivers. The response is the number of road safety violations committed during the task.

Code
library(coin, quietly = TRUE)
data(BRLS21_T3, package = "hecedsm")
str(BRLS21_T3)
Classes 'tbl_df', 'tbl' and 'data.frame':   124 obs. of  3 variables:
 $ task      : Factor w/ 4 levels "phone","watch",..: 1 2 3 4 1 2 3 4 1 2 ...
 $ nviolation: int  8 4 10 12 1 5 0 7 5 6 ...
 $ id        : Factor w/ 31 levels "1","2","3","4",..: 6 6 6 6 11 11 11 11 26 26 ...
Code
xtabs(~ task + id, data = BRLS21_T3)
         id
task      1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
  phone   1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
  watch   1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
  speaker 1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
  texting 1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
         id
task      27 28 29 30 31
  phone    1  1  1  1  1
  watch    1  1  1  1  1
  speaker  1  1  1  1  1
  texting  1  1  1  1  1

A quick inspection reveals that the data are balanced, with four tasks and 31 individuals; each participant completed every task exactly once. We can therefore view the within-subject design with a single replication as a complete block design, with id as the block and task as the experimental manipulation.

How could we compare the different tasks? The data are clearly very far from normally distributed and there are notable outliers among the residuals, as evidenced by Figure 1. Conclusions probably wouldn't be affected by using an analysis of variance, but it may be easier to convince reviewers that the findings are solid by resorting to nonparametric procedures.

Figure 1: Normal quantile-quantile plot of the residuals of the block design; there are many outliers.
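To see where Figure 1 comes from, here is a minimal sketch, assuming the residuals are those of the two-way analysis of variance for the complete block design (the aov call is our assumption, not the authors' code).

Code
# Fit the complete block design (task = treatment, id = block) and draw
# a normal quantile-quantile plot of the residuals, as in Figure 1.
mod <- aov(nviolation ~ task + id, data = BRLS21_T3)
qqnorm(resid(mod))
qqline(resid(mod))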

Both the Friedman and the Quade tests are obtained by computing ranks within each block (participant) and then performing a two-way analysis of variance. The Friedman test is less powerful than Quade's when the number of groups is small. Both apply to block designs with a single experimental factor. We can also obtain an effect size for the rank test, termed Kendall's \(W\). A value of 1 indicates complete agreement in the rankings: here, this would occur if every participant ranked the tasks in the same order by number of violations.

Code
# Friedman and Quade tests: response ~ treatment | block
friedman <- coin::friedman_test(
  nviolation ~ task | id,
  data = BRLS21_T3)
quade <- coin::quade_test(
  nviolation ~ task | id,
  data = BRLS21_T3)
# Kendall's W effect size for the rank test
eff_size <- effectsize::kendalls_w(
  x = "nviolation", 
  groups = "task", 
  blocks = "id", 
  data = BRLS21_T3)

The Friedman test is obtained by replacing each observation by its rank within the block (so rather than the number of violations per task, we use each participant's ranking of the four tasks). Friedman's test statistic is \(18.97\); compared to a benchmark \(\chi^2_3\) distribution, it yields a \(p\)-value of \(3\times 10^{-4}\). The estimated agreement (effect size) is \(0.2\).
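To make the rank construction concrete, the sketch below computes the within-participant ranks by hand and runs base R's friedman.test, which should broadly agree with the asymptotic version from coin (up to the handling of ties).

Code
# Rank the four tasks separately within each participant
rk <- with(BRLS21_T3, ave(nviolation, id, FUN = rank))
head(cbind(BRLS21_T3, rank = rk))
# Base R analogue of the asymptotic Friedman test
friedman.test(nviolation ~ task | id, data = BRLS21_T3)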

The test reveals significant differences in the number of road safety violations across tasks. We could therefore compare all pairs of tasks using the signed-rank test and adjust the \(p\)-values to account for the six hypothesis tests performed.

To do this, we reshape the data to wide format (each line corresponds to an individual) and compute the paired differences, here for phone vs watch. We could proceed likewise for the five other pairwise comparisons and then adjust the \(p\)-values; a sketch looping over all six pairs appears after the output below.

Code
# Reshape to wide format: one row per participant, one column per task
smartwatch <- tidyr::pivot_wider(
  data = BRLS21_T3,
  names_from = task,
  values_from = nviolation)
# Signed-rank test for the phone vs watch paired differences
coin::wilcoxsign_test(phone ~ watch,
                      data = smartwatch)

    Asymptotic Wilcoxon-Pratt Signed-Rank Test

data:  y by x (pos, neg) 
     stratified by block
Z = 0.35399, p-value = 0.7233
alternative hypothesis: true mu is not equal to 0

You can think of the test as performing a paired \(t\)-test on the signed ranks \(R_i = \mathsf{sign}(D_i)\,\mathsf{rank}(|D_i|)\) of the 31 paired differences \(D_i\), testing whether their mean is zero. The \(p\)-value obtained by doing this after discarding zero differences is \(0.73\), essentially the same as the more complicated asymptotic approximation.
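The sketch below checks this claim for the phone vs watch contrast, then loops over all six pairwise comparisons; the Holm correction is our choice of adjustment, not necessarily the authors'.

Code
# Paired t-test on the signed ranks (zeros discarded), as described above
d <- with(smartwatch, phone - watch)
d <- d[d != 0]
sr <- sign(d) * rank(abs(d))
t.test(sr) # p-value close to the 0.73 reported above

# All six pairwise signed-rank tests, with Holm-adjusted p-values
tasks <- c("phone", "watch", "speaker", "texting")
pairs <- combn(tasks, 2)
pvals <- apply(pairs, 2, function(pr) {
  coin::pvalue(coin::wilcoxsign_test(
    reformulate(pr[2], response = pr[1]), data = smartwatch))
})
names(pvals) <- paste(pairs[1, ], pairs[2, ], sep = " vs ")
p.adjust(pvals, method = "holm")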

Study 2: Online vs in-person meetings

Brucks & Levav (2022) measured the attention of participants in each experimental condition using an eye tracker.

We compare here the time spent looking at the partner by experimental condition (face-to-face or videoconferencing). The authors used a Kruskal–Wallis test but, with only two groups, it is equivalent to Wilcoxon's rank-sum test.
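As a quick sanity check of this equivalence (with two groups, the Kruskal–Wallis statistic is the square of the tie-corrected rank-sum \(Z\)-statistic), the sketch below should give essentially the same asymptotic \(p\)-value as the rank-sum test that follows.

Code
# With two groups, kruskal.test and the rank-sum test coincide asymptotically
data(BL22_E, package = "hecedsm")
kruskal.test(partner_time ~ cond, data = BL22_E)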

Code
data(BL22_E, package = "hecedsm")
# Rank-sum test with a confidence interval for the location shift
mww <- coin::wilcox_test(
  partner_time ~ cond, 
  data = BL22_E, 
  conf.int = TRUE)
# Welch two-sample t-test (a confidence interval is always reported,
# so no conf.int argument is needed)
welch <- t.test(partner_time ~ cond, 
  data = BL22_E)
mww

    Asymptotic Wilcoxon-Mann-Whitney Test

data:  partner_time by cond (f2f, video)
Z = -6.4637, p-value = 1.022e-10
alternative hypothesis: true mu is not equal to 0
95 percent confidence interval:
 -50.694 -25.908
sample estimates:
difference in location 
               -37.808 

In addition to the \(p\)-value for the null hypothesis that both median times are the same, the output of the test includes a confidence interval for the time difference (in seconds). This is obtained by computing all pairwise differences between observations from the two groups. The Hodges–Lehmann estimate of location, the median of these pairwise differences, is \(-37.81\) seconds, with a 95% confidence interval for the difference of \([-50.69, -25.91]\) seconds.
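As a sketch, the Hodges–Lehmann estimate can be reproduced by brute force; the condition labels f2f and video are taken from the output above.

Code
# Brute-force Hodges–Lehmann estimate: median of all pairwise differences
f2f   <- BL22_E$partner_time[BL22_E$cond == "f2f"]
video <- BL22_E$partner_time[BL22_E$cond == "video"]
diffs <- outer(f2f, video, FUN = "-") # all pairwise differences
median(diffs) # matches the reported difference in location, -37.81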

These results can be compared with the usual Welch two-sample \(t\)-test, which allows for unequal variances. The estimated mean difference is \(-39.69\) seconds for face-to-face versus group video, with a 95% confidence interval of \([-52.93, -26.45]\).

Either way, it is clear that videoconferencing translates into more time spent gazing at the partner than in-person meetings do.

References

Brodeur, M., Ruer, P., Léger, P.-M., & Sénécal, S. (2021). Smartwatches are more distracting than mobile phones while driving: Results from an experimental study. Accident Analysis & Prevention, 149, 105846. https://doi.org/10.1016/j.aap.2020.105846
Brucks, M. S., & Levav, J. (2022). Virtual communication curbs creative idea generation. Nature, 605(7908), 108–112. https://doi.org/10.1038/s41586-022-04643-y