Lectures and Details

Meetings
Tuesday/Thursday, 1:15-2:40pm, Searles 126
Instructor
Office Hours
Searles 224, Tuesdays 11:00-11:30am (as the CS Faculty Meeting ends) and 12:00-1:00pm, and by appointment (including via Zoom)
Course Forum

Overview

This seminar course covers the intersection of software topics (e.g., reading, writing, and debugging programs) and human factors (e.g., perception, bias, cognition). It explores classic topics in psychology in a manner approachable to students without prior psychology experience. We will relate these topics to activities in computer science and discuss the resulting implications for both academic and industrial practice. Students will be graded on in-class discussions and short presentations on the material. Group discussions will focus on constructive conversations that summarize the work, analyze its strengths, and brainstorm ways in which it might be improved in the future.

Special emphasis will be placed on instructor-led in-person discussions of contextual aspects of papers that may be less evident to students (e.g., based on the instructor's first-hand knowledge of the authors or research situation). These may include:

Such discussions can help students to view papers as part of a collaborative, human activity that takes place over a span of time. Conversations about how to construct well-designed follow-on work (e.g., balancing risk and reward, lifting input assumptions, etc.), at the level of the graduate Preliminary Exam, will be encouraged.

Software and Cognition

This class will include papers that explicitly cover human cognitive aspects of software engineering and programming languages, such as studies that make use of eye tracking or medical imaging.

It is not expected that incoming students have any expertise in such cognitive aspects. Instead, relevant cognitive aspects will be discussed (including in the third paper, below) such that students can interpret their relevance to computer science.

Structure and Presentations

Randomized Presentations

For each paper discussion I will choose up to three students at random to present.

Each selected student will give a five-minute in-person presentation that at least (1) summarizes the work, (2) lists its strengths, and (3) lists ways in which it might be improved in a subsequent publication. Including other information, such as your opinion about the work or its relation to other projects, is recommended.

The goals of this randomized approach are to encourage all participants to read the material thoroughly in advance, to provide jumping-off points for detailed discussions, and to allow me to evaluate participation.
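The random selection described above can be sketched as follows; this is a hypothetical illustration, and the roster names and `pick_presenters` function are assumptions for the example, not course policy.

```python
import random

def pick_presenters(roster, k=3):
    """Select up to k distinct students uniformly at random from the roster."""
    # random.sample draws without replacement, so no student is picked twice
    # for the same paper; min() handles rosters smaller than k.
    return random.sample(roster, k=min(k, len(roster)))

roster = ["Avery", "Blake", "Casey", "Devon"]
print(pick_presenters(roster))  # e.g. ['Casey', 'Avery', 'Devon']
```

Because the draw is independent for each paper, being selected once does not lower the chance of being selected again, which is what makes preparing every paper worthwhile.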

There is no project component to this course — your primary responsibility is to prepare five minutes of cogent discussion for each paper.

Grading Rubric

Grading will be based on in-person participation.

  • 40% Paper Presentations
  • 40% Discussions
  • 20% Professionalism

In more detail, Paper Presentations will be graded on the summarization of the work (20%), the enumeration of strengths (30%), and areas and mechanisms for improvement (50%).
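The weights above compose as a simple weighted sum. The sketch below is an illustrative calculation only (the function names and 0-100 scoring scale are assumptions, not part of the official rubric):

```python
def presentation_score(summary, strengths, improvements):
    """Presentation sub-weights: 20% summary, 30% strengths,
    50% areas and mechanisms for improvement (each scored 0-100)."""
    return 0.20 * summary + 0.30 * strengths + 0.50 * improvements

def course_grade(presentations, discussions, professionalism):
    """Overall weights: 40% presentations, 40% discussions, 20% professionalism."""
    return 0.40 * presentations + 0.40 * discussions + 0.20 * professionalism

# Example: a strong improvements analysis outweighs a weaker summary.
p = presentation_score(summary=80, strengths=90, improvements=95)  # 90.5
print(round(course_grade(p, discussions=85, professionalism=100), 1))  # → 90.2
```

Note that half of the presentation grade rides on proposed improvements, so that is where preparation time pays off most.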

The Discussion component will be assessed by noting when students contribute to the conversation and analysis of a paper (outside of their presentations).

The Professionalism component will be assessed in terms of helping to maintain a welcoming environment for everyone and demonstrating our shared values. Participants are assumed to be professionals by default, but may lose points in certain negative circumstances. Examples of less-professional conduct include:

Informally, one sufficient condition for an "A" grade is to do a good job whenever you are called on to present a paper, to interject insightful comments into the discussions of other papers, and to treat others respectfully.

The grading cutoffs are:

Missed Meetings

Students who miss a meeting for College-recognized reasons and who are selected to present that day can make up the presentation assessment in Office Hours (preferred) or via Zoom (potentially). In such a case you may be asked to present your summary of any recent paper (not necessarily the one discussed while you were away), and your summary may be held to a slightly higher standard (though not much higher) because you know for certain that the presentation is coming. That additional assessment will likely take the form of a more back-and-forth conversation about the paper, similar to the discussions in class but limited to you and the professor.

First Class Meeting

The first meeting will take place on Tuesday, January 23rd. You are responsible for reading the first two papers in advance.

However, I will get the discussion rolling for Paper #1 ("Producing wrong data ..."), so you do not need to prepare a presentation for it; that discussion will probably be short.

I will begin randomly calling on students when we advance to Paper #2. You should prepare your five-minute presentation for that paper.

Reading List

Read Two Ahead

For any given class meeting, you are responsible for reading and preparing the next two not-yet-discussed papers.

On average, we will discuss three papers a week (devoting perhaps 50 minutes to each paper). However, some papers may merit more or less discussion.

It is often possible to find presentation slides or video recordings associated with a paper. You are welcome to use those as part of your preparation.

The next paper to discuss may be shown below a highlighted line. You should read and prepare at least the next two to three papers before each meeting.

    Software Research and Threats to Validity

  1. Todd Mytkowicz, Amer Diwan, Matthias Hauswirth, Peter F. Sweeney: Producing wrong data without doing anything obviously wrong! ASPLOS 2009: 265-276
  2. Janet Siegmund, Norbert Siegmund, Sven Apel: Views on Internal and External Validity in Empirical Software Engineering. ICSE 2015: 9-19 (distinguished paper award)

    Neuroscience and Language

  3. Jingyuan E. Chen, Gary H. Glover: Functional Magnetic Resonance Imaging Methods. Neuropsychology Review 25: 289-313 (2015)
  4. Ioulia Kovelman, Stephanie A. Baker, Laura-Ann Petitto: Bilingual and Monolingual Brains Compared: A Functional Magnetic Resonance Imaging Investigation of Syntactic Processing and a Possible "Neural Signature" of Bilingualism. J. Cogn. Neurosci. 20(1): 153-169 (2008)

    Software Engineering and the Brain

  5. Janet Siegmund, Christian Kästner, Sven Apel, Chris Parnin, Anja Bethmann, Thomas Leich, Gunter Saake, André Brechmann: Understanding understanding source code with functional magnetic resonance imaging. ICSE 2014: 378-389
  6. Benjamin Floyd, Tyler Santander, Westley Weimer: Decoding the representation of code in the brain: an fMRI study of code review and expertise. ICSE 2017: 175-186 (distinguished paper award)
  7. Yu Huang, Xinyu Liu, Ryan Krueger, Tyler Santander, Xiaosu Hu, Kevin Leach, Westley Weimer: Distilling neural representations of data structure manipulation using fMRI and fNIRS. ICSE 2019: 396-407 (distinguished paper award)
  8. Ryan Krueger, Yu Huang, Xinyu Liu, Tyler Santander, Westley Weimer, Kevin Leach: Neurological Divide: An fMRI Study of Prose and Code Writing. ICSE 2020
  9. Zachary Karas, Andrew Jahn, Westley Weimer, Yu Huang: Connecting the Dots: Rethinking the Relationship between Code and Prose Writing with Functional Connectivity. ESEC/FSE 2021

    Software Expertise and the Brain

  10. Norman Peitek, Annabelle Bergum, Maurice Rekrut, Jonas Mucke, Matthias Nadig, Chris Parnin, Janet Siegmund, Sven Apel: Correlates of programmer efficacy and their link to experience: a combined EEG and eye-tracking study. ESEC/FSE 2022: 120-131
  11. Ikutani Y, Kubo T, Nishida S, Hata H, Matsumoto K, Ikeda K, Nishimoto S: Expert Programmers Have Fine-Tuned Cortical Representations of Source Code. eNeuro 8(1): ENEURO.0405-20.2020 (2021)
  12. Hammad Ahmad, Madeline Endres, Kaia Newman, Priscila Santiesteban, Emma Shedden, Westley Weimer: Causal Relationships and Programming Outcomes: A Transcranial Magnetic Stimulation Experiment. ICSE 2024

    Psychoactive Substances and Neurodiversity

  13. Madeline Endres, Kevin Boehnke, Westley Weimer: Hashing It Out: A Survey of Programmers' Cannabis Usage, Perception, and Motivation. ICSE 2022
  14. Kaia Newman, Madeline Endres, Westley Weimer, Brittany Johnson: From Organizations to Individuals: Psychoactive Substance Use By Professional Programmers. ICSE 2023
  15. Meredith Ringel Morris, Andrew Begel, Ben Wiedermann: Understanding the Challenges Faced by Neurodiverse Software Engineering Employees: Towards a More Inclusive and Productive Technical Workforce. ASSETS 2015: 173-184
  16. Wenxin He, Manasvi Parikh, Westley Weimer, Madeline Endres: High Expectations: An Observational Study of Programming and Cannabis Intoxication. ICSE 2024

    Implications — Code Comprehension

  17. R. W. Proctor, D. W. Schneider: Hick's law for choice reaction time: A review. Quarterly Journal of Experimental Psychology 71(6): 1281-1299 (2018)
  18. Sarah Fakhoury, Devjeet Roy, Yuzhan Ma, Venera Arnaoudova, Olusola O. Adesope: Measuring the impact of lexical and structural inconsistencies on developers' cognitive load during bug localization. Empir. Softw. Eng. 25(3): 2140-2178 (2020)
  19. Norman Peitek, Sven Apel, Chris Parnin, André Brechmann, Janet Siegmund: Program Comprehension and Code Complexity Metrics: An fMRI Study. ICSE 2021: 524-536 (distinguished paper award)

    Guest Speaker

  • Visit by Kaia Newman

    Implications — Code Review and Trust

  20. Tyler J. Ryan, Gene M. Alarcon, Charles Walter, Rose F. Gamble, Sarah A. Jessup, August A. Capiola, Marc D. Pfahler: Trust in Automated Software Repair - The Effects of Repair Source, Transparency, and Programmer Experience on Perceived Trustworthiness and Trust. HCI (29) 2019: 452-470
  21. Yu Huang, Kevin Leach, Zohreh Sharafi, Nicholas McKay, Tyler Santander, Westley Weimer: Biases and Differences in Code Review using Medical Imaging and Eye-Tracking: Genders, Humans, and Machines. ESEC/FSE 2020

    Guest Speaker

  • Visit by Madeline Endres

    Implications — Learning and Teaching

  22. Ryan Shaun Joazeiro de Baker, Sidney K. D'Mello, Ma. Mercedes T. Rodrigo, Arthur C. Graesser: Better to be frustrated than bored: The incidence, persistence, and impact of learners' cognitive-affective states during interactions with three different computer-based learning environments. Int. J. Hum. Comput. Stud. 68(4): 223-241 (2010)
  23. Naser Al Madi, Cole S. Peterson, Bonita Sharif, Jonathan I. Maletic: From Novice to Expert: Analysis of Token Level Effects in a Longitudinal Eye Tracking Study. ICPC 2021: 172-183
  24. Hammad Ahmad, Zachary Karas, Kimberly Diaz, Amir Kamil, Jean-Baptiste Jeannin, Westley Weimer: How Do We Read Formal Claims? Eye-Tracking and the Cognition of Proofs about Algorithms. ICSE 2023: 208-220
  25. Nischal Shrestha, Colton Botta, Titus Barik, Chris Parnin: Here we go again: why is it difficult for developers to learn another programming language? Commun. ACM 65(3): 91-99 (2022) (distinguished paper award)
  26. Madeline Endres, Zachary Karas, Xiaosu Hu, Ioulia Kovelman, Westley Weimer: Relating Reading, Visualization, and Coding for New Programmers: A Neuroimaging Study. ICSE 2021
  27. Optional: Madeline Endres, Madison Fansher, Priti Shah, Westley Weimer: To Read or To Rotate? Comparing the Effects of Technical Reading Training and Spatial Skills Training on Novice Programming Ability. ESEC/FSE 2021
  28. Chris Parnin, Alessandro Orso: Are automated debugging techniques actually helping programmers? ISSTA 2011: 199-209
  29. E. Soremekun, L. Kirschner, M. Böhme, M. Papadakis: Evaluating the Impact of Experimental Assumptions in Automated Fault Localization. ICSE 2023: 159-171

    Software Engineering and Deep Learning

  30. Ru Zhang, Wencong Xiao, Hongyu Zhang, Yu Liu, Haoxiang Lin, Mao Yang: An empirical study on program failures of deep learning jobs. ICSE 2020: 1159-1170 (distinguished paper award)
  31. Hung Viet Pham, Shangshu Qian, Jiannan Wang, Thibaud Lutellier, Jonathan Rosenthal, Lin Tan, Yaoliang Yu, Nachiappan Nagappan: Problems and Opportunities in Training Deep Learning Software Systems: An Analysis of Variance. ASE 2020: 771-783 (distinguished paper award)