
Designing, Running, and Analyzing Experiments (University of California San Diego)

About this Course

You may never be sure whether you have an effective user experience until you have tested it with users. In this course, you’ll learn how to design user-centered experiments, how to run such experiments, and how to analyze data from these experiments in order to evaluate and validate user experiences. You will work through real-world examples of experiments from the fields of UX, IxD, and HCI, understanding issues in experiment design and analysis. You will analyze multiple data sets using recipes given to you in the R statistical programming language — no prior programming experience is assumed, but you will need to read, understand, and modify the code snippets provided to you. By the end of the course, you will be able to knowledgeably design, run, and analyze your own experiments that give statistical weight to your designs.

Instructors

Scott Klemmer
Professor
Cognitive Science & Computer Science

Jacob O. Wobbrock
Professor
The Information School

Offered by

University of California San Diego

UC San Diego is an academic powerhouse and economic engine, recognized as one of the top 10 public universities by U.S. News and World Report. Innovation is central to who we are and what we do. Here, students learn that knowledge isn’t just acquired in the classroom—life is their laboratory.

Syllabus – What you will learn from this course

Week 1

Basic Experiment Design Concepts

In this module, you will learn basic concepts relevant to the design and analysis of experiments, including mean comparisons, variance, statistical significance, practical significance, sampling, inclusion and exclusion criteria, and informed consent. You’ll also learn to describe a usability experiment in terms of its participants, apparatus, procedure, and design & analysis. This module covers lecture videos 1-2.
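
For readers who want an early taste of the R used later in the course, here is a minimal sketch, with made-up numbers rather than any course dataset, of comparing the means and variances of two samples:

    # Made-up task completion times (seconds) for two hypothetical designs
    a <- c(12.1, 10.4, 13.2, 11.8, 12.5)
    b <- c(14.0, 13.1, 15.2, 12.9, 14.6)

    mean(a); mean(b)   # compare sample means
    var(a);  var(b)    # compare sample variances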

Week 2

Tests of Proportions

In this module, you will learn how to analyze user preferences (or other tallies) using tests of proportions. You will also get up and running with R and RStudio. Topics covered include independent and dependent variables, variable types, exploratory data analysis, p-values, asymptotic tests, exact tests, one-sample tests, two-sample tests, Chi-Square test, G-test, Fisher’s exact test, binomial test, multinomial test, post hoc tests, and pairwise comparisons. This module covers lecture videos 3-9.
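
As a rough illustration of what a test of proportions looks like in base R, here is a short sketch; the functions (binom.test, chisq.test, fisher.test) are standard base R, but the tallies are invented, not the course’s data:

    # Made-up one-sample tally: 60 of 100 participants preferred website A over B
    binom.test(60, 100, p = 0.5)       # exact one-sample test of proportions

    # Made-up two-sample tally: preference counts for A vs. B in two user groups
    prefs <- matrix(c(40, 20,
                      25, 35), nrow = 2, byrow = TRUE)
    chisq.test(prefs)                  # asymptotic Chi-Square test
    fisher.test(prefs)                 # exact alternative for small counts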

Week 3

The T-Test

In this module, you will learn how to design and analyze a simple website A/B test. Topics include measurement error, independent variables as factors, factor levels, between-subjects factors, within-subjects factors, dependent variables as responses, response types, balanced designs, and how to report a t-test. You will perform your first analysis of variance in the form of an independent-samples t-test. This module covers lecture videos 10-11.
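
An illustrative sketch of an independent-samples t-test in R, using simulated A/B timing data rather than the course’s dataset, might look like this:

    # Simulated page-view times (seconds) from a hypothetical website A/B test
    set.seed(1)
    ab <- data.frame(
      Site = factor(rep(c("A", "B"), each = 15)),      # between-subjects factor
      Time = c(rnorm(15, mean = 50, sd = 8),            # response for site A
               rnorm(15, mean = 44, sd = 8))            # response for site B
    )
    t.test(Time ~ Site, data = ab, var.equal = TRUE)    # independent-samples t-test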

Week 4

Validity in Design and Analysis

In this module, you will learn how to ensure that your data are valid through the design of your experiments, and that your analyses are valid by understanding and testing for certain assumptions. Topics include how to achieve experimental control, confounds, ecological validity, the three assumptions of ANOVA, data distributions, residuals, normality, homoscedasticity, parametric versus nonparametric tests, the Shapiro-Wilk test, the Kolmogorov-Smirnov test, Levene’s test, the Brown-Forsythe test, and the Mann-Whitney U test. This module covers lecture videos 12-15.
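
For a sense of what these assumption checks look like in R, here is an illustrative sketch with simulated data; it assumes the car package is installed, and the data are invented rather than taken from the course:

    # Simulated (deliberately skewed) completion times for two between-subjects groups
    set.seed(2)
    d <- data.frame(
      Group = factor(rep(c("X", "Y"), each = 15)),
      Time  = c(rexp(15, rate = 1/10), rexp(15, rate = 1/12))
    )
    shapiro.test(d$Time[d$Group == "X"])   # Shapiro-Wilk test of normality, per group
    shapiro.test(d$Time[d$Group == "Y"])
    car::leveneTest(Time ~ Group, data = d, center = median)  # Brown-Forsythe variant of Levene's test
    wilcox.test(Time ~ Group, data = d)    # Mann-Whitney U test, a nonparametric fallback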

Week 5

One-Factor Between-Subjects Experiments

In this module, you will learn about one-factor between-subjects experiments. The experiment examined will be a between-subjects study of task completion time with various programming tools. You will understand and analyze data from two-level factors and three-level factors using the independent-samples t-test, Mann-Whitney U test, one-way ANOVA, and Kruskal-Wallis test. You will learn how to report an F-test. You will also understand omnibus tests and how they relate to post hoc pairwise comparisons with adjustments for multiple comparisons. This module covers lecture videos 16-18.
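
A sketch of these one-factor between-subjects analyses in base R, with simulated times and hypothetical tool names rather than the course’s data, could look like this:

    # Simulated task completion times (minutes) for three hypothetical tools
    set.seed(3)
    tools <- data.frame(
      Tool = factor(rep(c("ToolA", "ToolB", "ToolC"), each = 12)),
      Time = c(rnorm(12, 30, 5), rnorm(12, 27, 5), rnorm(12, 34, 5))
    )
    m <- aov(Time ~ Tool, data = tools)
    summary(m)                                  # omnibus one-way ANOVA (F-test)
    kruskal.test(Time ~ Tool, data = tools)     # nonparametric alternative
    pairwise.t.test(tools$Time, tools$Tool,
                    p.adjust.method = "holm")   # post hoc pairwise comparisons, corrected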

Week 6

One-Factor Within-Subjects Experiments

In this module, you will learn about one-factor within-subjects experiments, also known as repeated measures designs. The experiment examined will be a within-subjects study of subjects searching for contacts in a smartphone contacts manager, including the analysis of times, errors, and effort Likert-type scale ratings. You will learn counterbalancing strategies to avoid carryover effects, including full counterbalancing, Latin Squares, and balanced Latin Squares. You will understand and analyze data from two-level factors and three-level factors using the paired-samples t-test, Wilcoxon signed-rank test, one-way repeated measures ANOVA, and Friedman test. This module covers lecture videos 19-23.
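
As an illustration with simulated data, not the course’s contacts-manager dataset, the two-level and three-level within-subjects analyses might be sketched in base R as:

    # Simulated within-subjects search times (s): each of 12 subjects used both designs
    set.seed(4)
    scroll_time <- rnorm(12, mean = 20, sd = 4)      # times with a scrolling design
    search_time <- rnorm(12, mean = 15, sd = 4)      # times with a search-box design
    t.test(scroll_time, search_time, paired = TRUE)        # paired-samples t-test
    wilcox.test(scroll_time, search_time, paired = TRUE)   # Wilcoxon signed-rank test

    # For a three-level within-subjects factor: Friedman test on a subjects x conditions matrix
    ratings <- matrix(sample(1:7, 36, replace = TRUE), nrow = 12, ncol = 3)
    friedman.test(ratings)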

Week 7

Factorial Experiment Designs

In this module, you will learn about experiments with multiple factors and factorial ANOVAs. The experiment examined will be text entry performance on different smartphone keyboards while sitting, standing, and walking. Topics include mixed factorial designs, interaction effects, factorial ANOVAs, and the Aligned Rank Transform as a nonparametric factorial ANOVA. This module covers lecture videos 24-27.
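
A rough sketch of a mixed factorial analysis in R, with invented data and assuming the ARTool package is installed for the Aligned Rank Transform, might look like this:

    # Simulated mixed factorial design: Keyboard between subjects, Posture within subjects
    set.seed(5)
    kb <- expand.grid(Subject = factor(1:20), Posture = factor(c("Sit", "Stand", "Walk")))
    kb$Keyboard <- factor(ifelse(as.integer(as.character(kb$Subject)) <= 10, "Qwerty", "Swipe"))
    kb$WPM <- rnorm(nrow(kb), mean = 25, sd = 5)     # made-up text entry speeds

    # Parametric mixed factorial ANOVA (Posture is the repeated factor)
    summary(aov(WPM ~ Keyboard * Posture + Error(Subject/Posture), data = kb))

    # Nonparametric factorial analysis via the Aligned Rank Transform (ARTool package)
    m <- ARTool::art(WPM ~ Keyboard * Posture + (1 | Subject), data = kb)
    anova(m)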

Week 8

Generalizing the Response

In this module, you will learn about analyses for non-normal or non-numeric responses for between-subjects experiments using Generalized Linear Models (GLM). We will revisit three previous experiments and analyze them using generalized models. Topics include a review of response distributions, nominal logistic regression, ordinal logistic regression, and Poisson regression. This module covers lecture videos 28-29.
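
An illustrative sketch with simulated data, assuming the MASS package for ordinal logistic regression (the counts and ratings below are invented, not the course’s datasets), could look like this:

    # Simulated between-subjects data: error counts and ordinal ratings per tool
    set.seed(6)
    g <- data.frame(
      Tool   = factor(rep(c("A", "B"), each = 20)),
      Errors = c(rpois(20, lambda = 2), rpois(20, lambda = 4)),
      Rating = factor(sample(1:5, 40, replace = TRUE), ordered = TRUE)
    )
    summary(glm(Errors ~ Tool, data = g, family = poisson))    # Poisson regression for counts
    summary(MASS::polr(Rating ~ Tool, data = g, Hess = TRUE))   # ordinal logistic regression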

Week 9

The Power of Mixed Effects Models

In this module, you will learn about mixed effects models, specifically Linear Mixed Models (LMM) and Generalized Linear Mixed Models (GLMM). We will revisit our prior experiment on text entry performance on smartphones, but this time keeping every single measurement trial as part of the analysis. The full set of analyses covered in this course will also be reviewed. This module covers lecture videos 30-33.
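
A minimal sketch of a Linear Mixed Model in R, assuming the lme4 package and using simulated trial-level data rather than the course’s dataset, might look like this:

    # Simulated trial-level text entry data, keeping every measurement trial
    set.seed(7)
    tr <- expand.grid(Subject = factor(1:20),
                      Posture = factor(c("Sit", "Stand", "Walk")),
                      Trial   = 1:5)
    tr$Keyboard <- factor(ifelse(as.integer(as.character(tr$Subject)) <= 10, "Qwerty", "Swipe"))
    tr$WPM <- rnorm(nrow(tr), mean = 25, sd = 5)

    # Linear Mixed Model with a random intercept per subject
    m <- lme4::lmer(WPM ~ Keyboard * Posture + (1 | Subject), data = tr)
    anova(m)   # F statistics; p-values require an add-on such as lmerTest or car::Anova

    # A GLMM swaps lmer for glmer and adds a family, e.g., family = poisson for count responses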
