CHI 2023 registration is now open
TIME | Sunday | Monday | Tuesday | Wednesday | Thursday | Friday |
---|---|---|---|---|---|---|
09:00 | Workshops & Symposia / Doctoral Consortium | Opening Keynote | Session 1 | Session 1 | Session 1 | Workshops & Symposia |
10:30 | | Coffee Break | DC / Posters / LBW / Comps | DC / Posters / LBW / Comps | Coffee Break | |
11:00 | | Session 1 | Session 2 | Session 2 | Closing Keynote | |
12:35 | | Lunch Break | Lunch Break | Lunch Break | Lunch Break | |
14:30 | | Session 2 | Session 3 | Session 3 | Session 2 | |
15:55 | | Coffee Break | Coffee Break | Coffee Break | Coffee Break | |
16:35 | | Session 3 | Session 4 | Session 4 | Session 3 | |
18:00 | | SIGCHI Welcome Reception | Reception / Opening of Interactivity | | | |
*NB: Presentation time for papers, journals, alt.chi, and case studies will be 10 minutes plus 3 minutes of Q&A. Presentation videos for these venues can be 7-10 minutes long.
Explore Hamburg (Tuesday, April 25, from 6pm)
The German HCI community has created a collection of things to do in Hamburg. Please have a look at the web page of the German HCI community – germanhci.de – note that this is not part of the official CHI2023 program.
We’re excited that registration for CHI2023 is now open for both the in-person and online conference!
You can register here. The early registration deadline is March 20th, 2023 (extended from the original March 13th, 2023).
We have – as in previous years – different pricing by geographic region; see the list of countries in each category at the end of this post. We also offer options for onsite as well as online-only participation. An overview of all options is given on the first page of the registration site.
The registration fees for the CHI conference have remained the same for over 10 years: this year's prices (before tax) are the same as in Paris in 2013. In the European Union there is a value-added tax (VAT; in Germany this is 19%), which is added on top of the registration prices.
If you want to understand the rationale behind the registration fees in more detail, this blog post by Aaron Quigley and Yoshifumi Kitamura (CHI 2021 general chairs) explains the budget and the reasoning behind the fee levels.
We hope that COVID will become less of an issue over the coming months. Nevertheless, we have included further information on COVID risks and potential countermeasures in the registration form.
The program is taking shape. There are so many exciting contributions! We are looking forward to seeing you in Hamburg or online in just a few months!
The list of conference hotels and further information about travel and the venue is online.
Albrecht Schmidt and Kaisa Väänänen, CHI 2023 General Chairs,
generalchairs@chi2023.acm.org
Frequently Asked Questions
- What should I do if I run into problems with registration? Please contact our registration team at chiregistration@executivevents.com.
- How do I get a visa support letter for a visa application? If you need a visa support letter, you will have an opportunity to request one when registering. The registration team will email you the letter within two business days after your registration is confirmed.
Categories (country list)
Category C
All countries not listed in category H or I.
Category H
- Albania
- Algeria
- Angola
- Argentina
- Armenia
- Azerbaijan
- Belarus
- Belize
- Bosnia and Herzegovina
- Botswana
- Brazil
- Bulgaria
- Colombia
- Cook Islands
- Costa Rica
- Cuba
- Dominica
- Dominican Republic
- Ecuador
- Fiji
- French Guiana
- Gabon
- Georgia
- Grenada
- Guadeloupe
- Guatemala
- Guyana
- Iran
- Iraq
- Jamaica
- Jordan
- Kazakhstan
- Kosovo
- Lebanon
- Libya
- North Macedonia
- Malaysia
- Maldives
- Marshall Islands
- Mauritius
- Mexico
- Montenegro
- Namibia
- Paraguay
- Peru
- Romania
- Russian Federation
- Saint Lucia
- Samoa
- Serbia
- South Africa
- Sri Lanka
- St. Vincent
- Suriname
- Thailand
- Tonga
- Tunisia
- Turkey
- Turkmenistan
- Tuvalu
- Venezuela
Category I
- Afghanistan
- Bangladesh
- Benin
- Bhutan
- Bolivia
- Burkina Faso
- Burundi
- Central African Republic
- Cambodia
- Cameroon
- Cape Verde
- Chad
- China
- Comoros
- Congo
- Congo, Democratic Republic
- Djibouti
- Egypt
- El Salvador
- Eritrea
- Eswatini
- Ethiopia
- Federated States of Micronesia
- Gambia
- Ghana
- Guinea
- Guinea-Bissau
- Haiti
- Honduras
- India
- Indonesia
- Ivory Coast
- Kenya
- Kiribati
- Kyrgyzstan
- Lesotho
- Liberia
- Madagascar
- Malawi
- Mali
- Mauritania
- Mongolia
- Morocco
- Mozambique
- Myanmar
- Nepal
- Nicaragua
- Niger
- Nigeria
- North Korea
- Pakistan
- Palestine
- Papua New Guinea
- Lao People’s Democratic Republic
- Philippines
- Republic of Moldova
- Rwanda
- Sao Tome and Principe
- Senegal
- Sierra Leone
- Solomon Islands
- Somalia
- South Sudan
- Sudan
- Swaziland
- Syria
- Tajikistan
- Tanzania
- Timor-Leste
- Togo
- Uganda
- Ukraine
- Uzbekistan
- Vanuatu
- Viet Nam
- Yemen
- Zambia
- Zimbabwe
Investigating the Quality of Reviews, Reviewers, and their Expertise for CHI2023

Max Wilson (Papers Co-Chair 2023 and 2024)
In this blog post, I look at data from the CHI2023 reviewing process, particularly in phase 1 (the first reviews that every paper gets). I analyse the difference between reviews that get marked as ‘high quality’ and those that do not. I examine length in relation to recommendation, decision outcome from phase 1, and reviewer role. I examine whether authors do actually serve as reviewers in practice. And finally I examine the expertise of reviewers and ACs in the different subcommittees.
Review Quality
A passion of mine is good quality reviews. I teach a course on good reviewing (running in person at CHI2023!), and I’m excited that community efforts exist to help people understand the ever-evolving process at CHI. Different people have also collected resources on reviewing (such as Aaron Quigley’s list). Good reviews typically a) summarise the reviewer’s understanding of the paper’s contribution, b) address both the strengths and weaknesses of the paper for each of the criteria listed by the venue, c) (perhaps most importantly) reason directly about the recommendation being made (so that authors and other reviewers can understand how the review text relates to the recommendation), and d) list minor details (editorial changes, links to missing references, etc.) that are largely irrelevant to the recommendation but useful.
We do not have an exact way to examine all reviews for these factors, but length is a first-step proxy. Reviewers cannot achieve all four of those effectively in under 50 words, for example. And short reviews that say ‘this should be accepted’ or ‘this should be rejected,’ with very little other detail or reasoning, are not constructive or helpful to either authors or the senior reviewers that we call Associate Chairs (ACs). My analysis below looks at review length, at least, as an investigation into one aspect of review quality. Note: in the analyses below, some of the data is not available to me for papers that I am conflicted with.
Overall, the average length of reviews in phase 1 was 580.4 words (median 497). The longest was 3723 words (for an RRX recommendation) and the shortest 33 words (for an Accept recommendation). There were 90 reviews of less than 100 words: 9 Accepts (A), 7 Accept or Revise & Resubmit (ARR), 13 Revise & Resubmit (RR), 9 Reject or Revise & Resubmit (RRX), and 52 Rejects (X), excluding desk and quick rejects. One paper received two of these (including one from a 2AC!), which led to an email complaint from the authors – as I would complain too. Out of ~12,000 reviews (a little hard to tell exactly with desk rejects and conflicted data), 1621 reviews were less than 250 words and 5976 were less than 500 words.
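To make the length proxy concrete, here is a minimal sketch (not the actual analysis pipeline) of how such word-count statistics can be computed; the function name and the (recommendation, text) input format are hypothetical:

```python
from statistics import mean, median

def length_stats(reviews):
    """Simple word-count statistics as a first-step proxy for review quality.

    reviews: hypothetical list of (recommendation, review_text) pairs,
    e.g. [("A", "This paper ..."), ("X", "The method ..."), ...].
    """
    lengths = [len(text.split()) for _, text in reviews]
    return {
        "count": len(lengths),
        "min": min(lengths),
        "max": max(lengths),
        "avg": round(mean(lengths), 1),
        "median": median(lengths),
        # counts of short reviews, mirroring the thresholds discussed above
        "under_100": sum(1 for n in lengths if n < 100),
        "under_250": sum(1 for n in lengths if n < 250),
        "under_500": sum(1 for n in lengths if n < 500),
    }

print(length_stats([("A", "Clear contribution and a solid method."), ("X", "Reject.")]))
```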
One bit of data we do have is a ‘Special Recognition’ tick that the 1AC of a paper (the lead AC) can give to a review. This data is fallible in that some ACs may have neglected to consider giving these ticks, but looking at the roughly 1 in 10 reviews that were given a recognition mark is interesting. The histogram and data table below show the breakdown of reviews by whether or not they were given a Special Recognition mark of quality by the 1AC. We can see that the average length of ‘good quality reviews’ is much closer to 1000 words (984.9) than the average of 622.9 for the remaining reviews.
*[Figure: histogram of review lengths, split by whether the review received a Special Recognition mark]*
 | Quality Mark Given | Not Given |
---|---|---|
Count | 938 | 7824 |
Min | 165 | 33 |
Avg (stdev) | 984.9 (472.7) | 622.9 (333.9) |
Median | 890 | 553 |
Max | 3723 | 3239 |
Length by Recommendation (Excluding Desk and Quick Rejects)
The histogram below shows the number of reviews (vertical axis) in brackets by number of words (horizontal axis). The green line shows the spread of Accept reviews, ranging between 33 words and 2101 words (avg 460.4 words, median 413). The purple line shows the ARRs. The blue line shows the RR recommendations – we are starting to see that there were substantially more negative reviews than positive ones in round 1 of CHI2023 (something we see every year). The yellow line shows RRX reviews as being longer, overall, than the Reject (X) reviews in red. In general, looking at the data table, we see that reviews were longer and more detailed where there was more reason to reject the paper, but with slightly fewer words where a paper was a clear reject (X) compared to a less clear one (RRX).
*[Figure: histogram of review lengths by recommendation (A, ARR, RR, RRX, X)]*
 | A | ARR | RR | RRX | X |
---|---|---|---|---|---|
Min | 33 | 47 | 53 | 60 | 41 |
Avg (stdev) | 460.4 (264.9) | 516.6 (299.2) | 566.7 (338.8) | 637.9 (372.6) | 558.5 (386.2) |
Median | 413 | 448 | 490 | 559 | 460 |
Max | 2101 | 2340 | 2969 | 3723 | 3392 |
Review Length by Round 1 Outcome, and Type of Reviewer
We see below the average lengths of reviews given in round 1 by people in different roles. We see again that the average length of reviews is slightly longer for rejects (in blue) than for papers that received a revise and resubmit decision. 1AC meta-reviews are notably shorter than 2AC and external reviews; however, 1AC meta-reviews serve a different function and have different expectations for their content. One decision taken in 2018 was that ACs would also write full reviews – they would have ~7 papers as 1AC (the primary AC writing meta-reviews) and another ~7 papers as 2AC writing normal reviews. This decision was taken because of concern that many of the most experienced reviewers in the field were acting only as meta-reviewers, and not contributing their expertise by reviewing papers. With this 1AC/2AC balance, every paper also receives a review from the typically more experienced ACs. Notably, though, we see a 120-word difference in the average length of reviews between 2ACs and reviewers, indicating that reviews from 2ACs are shorter than those provided by external reviewers. This could be due to load, with ACs having 14 papers to deal with in total, 7 of those requiring full reviews as 2AC.
*[Figure: average review length by round 1 outcome and reviewer role (1AC, 2AC, reviewer)]*
Contribution of Reviewers
Excluding ACs, the average number of reviews contributed by external reviewers was 1.72, with a maximum of 9. 53 people contributed more than 5 reviews, and 11 of those contributed 7 or more. Their reviews (as shown in the table below), however, were not insignificant in length. These reviewers were a mix of PhD students and faculty members at different levels, but for anonymity’s sake I have not listed career stage, especially as there was no obvious relationship between career stage and average review length.
Type | #Reviews | Avg. Length (Words) |
---|---|---|
reviewer | 9 | 404.4 |
reviewer | 8 | 1130.9 |
reviewer | 8 | 1027.8 |
reviewer | 8 | 648.4 |
reviewer | 8 | 639.1 |
reviewer | 8 | 496.5 |
reviewer | 8 | 421.5 |
reviewer | 8 | 378.9 |
reviewer | 7 | 921.3 |
reviewer | 7 | 805.6 |
reviewer | 7 | 721.9 |
Did Authors Actually Review?
We expect, as part of the agreement when submitting, that authors contribute back as reviewers. Of course, this is a slightly simplified request for a complicated situation, as some authors are first-time authors with no track record of expertise to review. Equally, there are out-of-field authors making partial contributions to papers, and likely some, e.g., senior professors unknowingly named as part of an extended supervision committee. Further, the authors of entirely out-of-scope papers that were desk rejected also do not have expertise in HCI to review. Regardless, like previous CHIs, I decided to investigate whether reviewers were authors and whether authors were reviewers at CHI2023. Of interest: CHI papers have a lot of authors. Of 3182 CHI2023 submissions, one had 25 authors, 104 had 10+ authors, and 857 had 5+ authors. Conversely, only 82 papers had a single author.
Of 3921 people involved in reviewing full papers (including subcommittee chairs and the papers chairs), 2170 were authors, out of 10026 authors in total. That leaves 7856 authors who did not review a paper (or act as a committee member), including 79 who authored 5+ papers and 9 who authored 10+ papers. Conversely, the other 20 or so authors of 10+ papers did get involved in the review process. Next, I considered individual authors, the reviewer cost they generated, and the reviews they contributed (including committee member contributions). For every paper submitted, I gave each author a cost of 1/n for a paper with n authors. An individual author of several papers would generate a total cost equal to the sum of these contributions: for example, 0.333 for an author named on only one paper as one of 3 authors, or something like 4.666 for an author named on many papers. I then subtracted the number of reviews that each author completed to investigate whether they were in review-debt. The histograms below (in linear and log scale) show people’s review-debt scores, taking into account their involvement in the review process. Note: the conference organising committee and subcommittee chairs (whose contribution is not measured in number of reviews done) sit together with an arbitrarily chosen fixed review-credit of -15.
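As a rough sketch of that cost calculation (with hypothetical data structures and function names, not the actual PCS export), the review-debt score per author could be computed like this:

```python
from collections import defaultdict

def review_debt(papers, reviews_done):
    """Review-debt per author: sum of 1/n per authored paper, minus reviews done.

    papers: list of author lists, one per submission, e.g. [["ann", "bob", "cat"], ...]
    reviews_done: dict mapping author -> number of completed reviews.
    Positive scores mean review-debt; negative scores mean review-credit.
    """
    debt = defaultdict(float)
    for authors in papers:
        for author in authors:
            debt[author] += 1.0 / len(authors)  # each author owes 1/n of the paper's cost
    for author, n in reviews_done.items():
        debt[author] -= n  # completed reviews pay down the debt
    return dict(debt)

# An author named on one 3-author paper who wrote no reviews owes 0.333...
print(review_debt([["ann", "bob", "cat"]], {"ann": 0})["ann"])
```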
*[Figure: histogram of review-debt scores (linear scale)]*
*[Figure: histogram of review-debt scores (log scale)]*
The ~400 ACs involved in the process are in green and largely (but not exclusively) sit to the left of the divider, in review-credit. Any ACs still in review-debt were likely brought in as ACs for specific papers (rather than general AC duty). Authors that were external reviewers (n=1778) were also largely, but not exclusively, to the left of the divide (in review-credit). Authors that did not review sit to the right of the divider, with 6644 sitting between 0 and 0.34, named as, e.g., 1 of 3+ authors on one paper. These will likely include the first-time authors not yet experienced enough to review, although there are only ~3000 papers, so that’s perhaps 2+ per paper. In total, the people to the right of the divider include some 7934 authors in review-debt, and it is perhaps the 1253 with review-debt scores between 0.34 and 5 (only 33 with a review-debt of more than 1.5), whose scores imply they are submitting several papers without providing reviews, who are of interest for expanding the reviewer pool. Some authors (typically full professors) generated a cost between 1 and 5 from submitting, e.g., 8, 15, and even 21 papers, and did not contribute a commensurate number of reviews. Perhaps they are volunteering at other conferences and/or in reviewer pools, although anecdotally some experienced professors tell us that they are typically not invited to review because people assume they are busy.
Do the Authors of Each Paper Produce 3 (or 4) Reviews?
The analysis above is still a somewhat oversimplified view. Each paper generates a demand for 3 reviews, and the authors between them could be covering this review-debt. In fact, it also generates a need for a 4th artefact: a meta-review. Since the data below includes the efforts of ACs, I will present numbers based on 3 reviews per paper (with the numbers for 4 reviews in brackets). Running through the data (which, as a reminder, is slightly inaccurate as I cannot see data for papers that I am conflicted with), the authors of 1473 of 3180 submissions did not produce 3 reviews between them (1687 did not produce 4); some did contribute some reviews, as only 901 papers led to 0 reviews from their authors. Notably, that is just shy of a third of papers not producing any reviews, and around half not producing enough. This calculation is also naive, as an author may be on many papers: if reviewer X contributed 3 reviews but was an author on 10 papers, then all 10 papers were marked as having an author that contributed 3 reviews. Instead, I removed 3 reviews from each reviewer’s total of reviews done for each of their authored papers that was covered (note that this total was typically 14 for an Associate Chair, managing 7 papers as 1AC and 7 as 2AC). Although this approach is still potentially suboptimal (there may be more optimal distributions of contributed reviews to papers – my algorithm started with the papers with the fewest authors, and within each paper started with the first author (often the most junior author), allocating review-credit to those papers and stopping when a paper’s authors had produced at least 3 reviews), the results show that the authors of 1885 papers did not generate 3 reviews (2156 did not generate 4), with 1278 contributing zero reviews (1366 if based on allocating 4 reviews). These numbers largely confirm the findings of Jofish Kaye’s analysis after CHI2016 – that the authors of approximately 2/3 of papers are not pulling their weight in terms of review effort.
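A minimal sketch of that greedy allocation (hypothetical data structures and function name; the real analysis worked on the actual submission records) might look like this:

```python
def papers_short_of_quota(papers, reviews_done, per_paper=3):
    """Greedily allocate authors' completed reviews to their papers.

    Papers are processed starting with those that have the fewest authors;
    within a paper, authors are taken in order (first author first). Each
    paper tries to draw `per_paper` reviews from its authors' remaining
    review counts, and counts as unmet if the quota cannot be covered.
    """
    remaining = dict(reviews_done)  # reviews each author still has left to allocate
    unmet = 0
    for authors in sorted(papers, key=len):
        needed = per_paper
        for author in authors:
            take = min(remaining.get(author, 0), needed)
            remaining[author] = remaining.get(author, 0) - take
            needed -= take
            if needed == 0:
                break
        if needed > 0:
            unmet += 1  # this paper's authors could not cover the quota
    return unmet

# A 2-author paper whose authors wrote 2 and 1 reviews meets the quota of 3.
print(papers_short_of_quota([["ann", "bob"]], {"ann": 2, "bob": 1}))  # 0
```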
These numbers still paint a naive picture. Authors may be general chairs, technical program chairs, papers chairs, subcommittee chairs, or chairs of other venues like Late-Breaking Work. Really, contributing to the review pool includes contributing in many different ways across the committee. Unfortunately, I only had immediate access to organising committee and subcommittee chair data for this analysis. Considering these indirect committee contributions, the authors of 1769 papers did not generate 3 reviews (2007 did not generate 4), with 1217 not generating any reviews (1288 if allocating 4). Of course, these authors may yet contribute reviews to Late-Breaking Work, which has not yet begun, or in other ways, such as being a papers chair or AC at a different SIGCHI conference. In practice, a full community analysis would need to consider all the ways of volunteering across all SIGCHI conferences, and even HCI journals, to get a full picture of where community effort is going. Have fun trying to gather all that and quantify levels of volunteering.
Expertise in the Subcommittees
One bit of data we would still like to investigate one day is the influence of AC experience. For interest’s sake, we examined how many of the ACs had been an AC in the last 5 years (using recent historical data that we had access to – a nod to Faraz Faruqi (Assistant to the Papers Chairs), who processed all of the data). This is extremely fallible data (including data entry errors in name spellings from 100+ spreadsheet contributors over 5 years), and some ACs may be very experienced, just not within the last 5 years. Recruiting reviewers (and ACs) has seemingly become harder lately, with both the COVID-19 pandemic and the expectations of revise and resubmit processes influencing people’s decisions to volunteer in the review process. It was often hard for subcommittee chairs to recruit experienced ACs. It will be hard to measure the impact of the spread of AC experience, but we can see that some subcommittees (rows randomised and left unlabelled on purpose) have more than 50% of ACs taking the role for the first time. As Papers Chairs, we highlighted this kind of data to the subcommittee chairs at recruitment time; however, some were confident in the process, and others simply had to go with whoever would say yes despite the turnover.
5 years | 4 years | 3 years | 2 years | 1 year | First Time |
---|---|---|---|---|---|
0% | 0% | 21% | 29% | 21% | 29% |
0% | 0% | 7% | 43% | 7% | 43% |
0% | 3% | 14% | 3% | 21% | 59% |
0% | 6% | 12% | 6% | 35% | 41% |
0% | 0% | 0% | 46% | 29% | 25% |
0% | 0% | 0% | 8% | 15% | 78% |
0% | 0% | 0% | 0% | 29% | 71% |
0% | 0% | 18% | 29% | 29% | 24% |
0% | 3% | 10% | 23% | 47% | 13% |
0% | 7% | 7% | 17% | 30% | 40% |
0% | 9% | 9% | 22% | 22% | 39% |
0% | 20% | 20% | 13% | 33% | 13% |
0% | 11% | 11% | 50% | 11% | 17% |
0% | 5% | 0% | 18% | 50% | 27% |
0% | 0% | 0% | 4% | 30% | 65% |
0% | 0% | 4% | 4% | 21% | 71% |
4% | 0% | 9% | 26% | 13% | 48% |
0% | 0% | 42% | 11% | 26% | 21% |
0% | 9% | 13% | 35% | 17% | 26% |
In terms of the expertise that people self-assigned when providing a review (a confidence/expertise score out of 4, submitted with the review), we can see from the following table that the process maintained a good level of expertise in each of the subcommittees. The average self-assigned expertise in each subcommittee ranges from 3.03 to 3.34, where 3 represents ‘knowledgeable’ and 4 represents ‘expert.’ The table below also shows the total number of people involved in each subcommittee and the number of papers they handled.
Subcommittee | Papers | People Involved | Avg. Expertise |
---|---|---|---|
Accessibility & Aging | 193 | 311 | 3.3 |
Specific Applications Areas | 151 | 287 | 3.03 |
Computational Interaction | 178 | 317 | 3.08 |
Critical Computing, Sustainability, and Social Justice | 177 | 348 | 3.11 |
Design | 239 | 442 | 3.1 |
Building Devices: Hardware, Materials, and Fabrication | 94 | 151 | 3.34 |
Games and Play | 131 | 209 | 3.11 |
Health | 206 | 338 | 3.22 |
Interaction Beyond the Individual | 151 | 278 | 3.06 |
Interacting with Devices: Interaction Techniques & Modalities | 217 | 380 | 3.14 |
Learning, Education, and Families | 187 | 323 | 3.14 |
Understanding People – Mixed and Alternative Methods | 158 | 276 | 3.13 |
Understanding People – Qualitative Methods | 152 | 277 | 3.05 |
Understanding People – Quantitative Methods | 148 | 280 | 3.05 |
Privacy and Security | 132 | 215 | 3.2 |
Blending Interaction: Engineering Interactive Systems & Tools | 198 | 344 | 3.15 |
User Experience and Usability | 244 | 424 | 3.1 |
Visualization | 151 | 245 | 3.29 |
Interestingly, unlike ACs, reviewers did not exclusively review for specific subcommittees. Of the 607 reviewers who provided 3 or more reviews, the average number of subcommittees they reviewed for was 2.68, with a maximum of 6 different subcommittees and a minimum of 1. As an example, the average number of subcommittees reviewed for by reviewers who produced exactly 3 reviews was 2.22, implying that most reviewed each paper for a different subcommittee. The table below shows the overlap of reviewers across the different subcommittees.
 | Access | Apps | CompInt | Critical | Design | Devices | Games | Health | Ibti | IntTech | Learning | People Mixed | People Qual | People Quant | Privacy | Systems | UX | Viz |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Access | – | 17 | 18 | 19 | 21 | 5 | 11 | 30 | 16 | 10 | 21 | 20 | 20 | 13 | 12 | 17 | 25 | 7 |
Apps | 17 | – | 17 | 29 | 24 | 8 | 4 | 22 | 17 | 13 | 25 | 21 | 19 | 6 | 13 | 22 | 24 | 10 |
CompInt | 18 | 17 | – | 10 | 18 | 8 | 7 | 12 | 21 | 30 | 21 | 26 | 14 | 18 | 3 | 46 | 21 | 22 |
Critical | 19 | 29 | 10 | – | 44 | 1 | 10 | 25 | 29 | 2 | 10 | 28 | 33 | 10 | 9 | 8 | 10 | 3 |
Design | 21 | 24 | 18 | 44 | – | 18 | 11 | 25 | 16 | 25 | 13 | 27 | 20 | 13 | 15 | 25 | 32 | 8 |
Devices | 5 | 8 | 8 | 1 | 18 | – | 0 | 8 | 0 | 23 | 6 | 5 | 2 | 2 | 0 | 19 | 5 | 1 |
Games | 11 | 4 | 7 | 10 | 11 | 0 | – | 10 | 10 | 8 | 11 | 10 | 6 | 11 | 3 | 5 | 22 | 2 |
Health | 30 | 22 | 12 | 25 | 25 | 8 | 10 | – | 13 | 7 | 18 | 26 | 26 | 14 | 7 | 8 | 24 | 7 |
Ibti | 16 | 17 | 21 | 29 | 16 | 0 | 10 | 13 | – | 7 | 21 | 30 | 27 | 16 | 8 | 13 | 15 | 6 |
IntTech | 10 | 13 | 30 | 2 | 25 | 23 | 8 | 7 | 7 | – | 13 | 11 | 5 | 15 | 2 | 43 | 53 | 13 |
Learning | 21 | 25 | 21 | 10 | 13 | 6 | 11 | 18 | 21 | 13 | – | 19 | 10 | 11 | 11 | 23 | 23 | 8 |
People Mixed | 20 | 21 | 26 | 28 | 27 | 5 | 10 | 26 | 30 | 11 | 19 | – | 25 | 20 | 14 | 11 | 27 | 10 |
People Qual | 20 | 19 | 14 | 33 | 20 | 2 | 6 | 26 | 27 | 5 | 10 | 25 | – | 16 | 9 | 16 | 12 | 5 |
People Quant | 13 | 6 | 18 | 10 | 13 | 2 | 11 | 14 | 16 | 15 | 11 | 20 | 16 | – | 9 | 14 | 27 | 7 |
Privacy | 12 | 13 | 3 | 9 | 15 | 0 | 3 | 7 | 8 | 2 | 11 | 14 | 9 | 9 | – | 5 | 12 | 2 |
Systems | 17 | 22 | 46 | 8 | 25 | 19 | 5 | 8 | 13 | 43 | 23 | 11 | 16 | 14 | 5 | – | 27 | 28 |
UX | 25 | 24 | 21 | 10 | 32 | 5 | 22 | 24 | 15 | 53 | 23 | 27 | 12 | 27 | 12 | 27 | – | 19 |
Viz | 7 | 10 | 22 | 3 | 8 | 1 | 2 | 7 | 6 | 13 | 8 | 10 | 5 | 7 | 2 | 28 | 19 | – |
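A table like this can be derived directly from per-reviewer subcommittee assignments. Here is a minimal sketch, assuming a hypothetical dict mapping each reviewer to the set of subcommittees they reviewed for (not the actual committee data):

```python
from collections import defaultdict
from itertools import combinations

def subcommittee_overlap(assignments):
    """Count reviewers shared by each unordered pair of subcommittees.

    assignments: dict mapping reviewer -> set of subcommittee names.
    """
    overlap = defaultdict(int)
    for subs in assignments.values():
        for a, b in combinations(sorted(subs), 2):
            overlap[(a, b)] += 1  # this reviewer contributes to the (a, b) cell
    return dict(overlap)

# One reviewer active in both Access and Health adds 1 to that cell.
print(subcommittee_overlap({"r1": {"Access", "Health"}, "r2": {"Access"}}))
```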
Conclusions
It’s clear from the analysis that there is a wide range in how long reviews are. Length alone obviously does not equal quality, but reviews judged as ‘good’ by 1ACs are typically longer (approaching 1000 words) rather than being less than 500 words. I’m in awe of the role models that wrote 8 reviews with an average above 1000 words – in a way, I hope this post helps raise awareness of what good practice appears to be. One challenge seems to be workload, however, as ACs typically wrote shorter reviews – this is something for future papers chairs to consider regarding the workload of ACs. Of course, to reduce this we need more ACs, which creates a different problem (variability, managing a large crowd in the PC meeting, etc.). One thing ACs (typically) do is review much more than their generated review cost (from submitting papers), so increasing the size of the AC group would mean more people covering their review-debt and a reduced individual workload. Most notably, only 1/5 of authors are covering their review-debt, including those working in committees. This is scary. 3/5 of authors, however, are named on just one paper, and are perhaps new authors or out-of-field authors – we don’t necessarily want everyone to review yet. Looking at it another way, the authors of 1/3 of papers generated a commensurate number of reviews, but the authors of 2/3 of papers did not. To me, this is more of a concern than individual authors. That said, there are many ways in which people volunteer for the conference, and indeed the wider SIGCHI community, and so perhaps these authors are giving back elsewhere.
Overall, it’s clear to me that many people give massively generous amounts of time to the conference (typically as part of the expectations of their job), and there are too many people for me to name and thank everyone individually for doing an awesome job this year – so thank you! My biggest hope is that this post helps us as a community to calibrate our understanding of what good reviews should look like, and of how much we all volunteer.
SV T-Shirt Design Competition
Hello everyone,
It’s t-shirt design time! Every year we call on students to design the wonderful t-shirt that our SVs wear! If your design is selected, you get a free SV spot! That means you move off the waitlist, or, if you’re already accepted, you can give the spot to a friend (as long as they are also a student). This year the deadline is Monday, January 9th, 2023, and your submissions should be sent to svchair@chi2023.acm.org with the subject: CHI 2023 SV T-Shirt Design Contest
Design details
You may want to connect your design to the location (Hamburg, Germany), or not – it doesn’t matter, as long as it’s respectful of local culture, fun, interesting, and can stand out a bit in the crowd.
Please send us front/back designs, noting that we cannot print on sleeves or the extreme edges of the shirts. Designs should be ONE color. In general, this means a black or white design on a colored shirt.
The imprint size is roughly 11″ wide and 13″ high front or back.
You can find the CHI 2023 logo information here: [CHI2023 Design Package]
Submissions details
Mock-ups should be sent as PDF, JPG, or PNG in medium resolution. If your design is selected as a winning design, we will require the final version in an .eps or .ai format.
You may submit several designs or variations on a single design, should you so desire.
Please use the following naming convention for each of your designs: lastname_firstname_tshirtdesign.ext
The deadline to submit your designs to svchair@chi2023.acm.org with the subject “T-Shirt Design Contest” is Monday, January 9th, 2023 at 23:59 AoE. We will select a winner in the week following the end of the contest and notify the winner as well as everyone who submitted designs.
Here are some photos from previous SV T-shirts, courtesy of our wonderful past chair Haley MacLeod:
*[Photos of previous SV t-shirts]*
Thank you and we’re looking forward to seeing your creativity!
Ciabhan Connelly, Julia Dunbar, and Maximiliane Windl
SV Chairs CHI 2023, Hamburg, Germany
CHI 2023: Quick Look at How to Interpret Your Reviews

Julie R. Williamson (Papers Co-Chair 2022 and 2023)
The first round of the review process is complete, and authors will now know if they have been invited to revise and resubmit (at least one reviewer recommends Revise and Resubmit or better). This short post will help you understand how to interpret your reviews and decide whether you want to revise and resubmit for CHI 2023.
In 2023, we received 3182 submissions.
- 48.9% were invited to revise and resubmit
- 43.7% were rejected after review
- 7.4% were not sent for external review (withdrawn, desk rejected, quick rejected).
A significant proportion of papers invited to revise and resubmit will not be accepted to CHI 2023. Estimating based on previous years, we expect about half of the invited papers to eventually be rejected during the PC meeting in January 2023.
Our analysis of 2022 review process data demonstrates that papers that do not have strongly supportive reviews do not have a good chance of being accepted. This post will give some updated numbers to help you interpret your reviews and decide if you want to participate in revise and resubmit. Authors do not need to notify the programme committee about their decision to revise and resubmit.
Review Scales
Before we go into the reviews, it’s important to remember what scales were used during the CHI 2023 review process. Reviewers and ACs provide a recommendation (one of 5 recommendation categories) and can further contextualise their recommendation based on originality, significance, and rigour (each on a 5-point ordinal scale).
Recommendation
Short Name | On Review Form | Threshold for Revise and Resubmit |
---|---|---|
A | I recommend Accept with Minor Revisions | Yes |
ARR | I can go with either Accept with Minor Revisions or Revise and Resubmit | Yes |
RR | I recommend Revise and Resubmit | Yes |
RRX | I can go with either Reject or Revise and Resubmit | No |
X | I recommend Reject | No |
Ordinal Scales
Ordinal scales are used to better contextualise reviewer recommendations and should be considered secondary to the reviewer recommendation.
Order | On Review Form |
---|---|
5 | Very high |
4 | High |
3 | Medium |
2 | Low |
1 | Very low |
Proportion of Supportive Reviews
The proportion of supportive reviews (recommendations of RR or better) was a good indicator of paper success in 2022. Below, we provide bar charts showing a few ways of counting “proportion of supportive reviews.” In all cases, supportive means the actual recommendation of the reviewer, not the text of the review.
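As an illustration of the counting (a sketch with hypothetical inputs and function name, not the programme committee’s actual tooling), the proportion of supportive reviews for one paper is simply:

```python
SUPPORTIVE = {"A", "ARR", "RR"}  # recommendations of RR or better

def supportive_proportion(recommendations):
    """Fraction of a paper's reviewers recommending Revise and Resubmit or better.

    recommendations: list of recommendations drawn from {"A", "ARR", "RR", "RRX", "X"}.
    """
    return sum(1 for r in recommendations if r in SUPPORTIVE) / len(recommendations)

print(supportive_proportion(["A", "RR", "RRX", "X"]))  # 0.5
```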
Figure 1 shows the proportion of reviewers recommending RR or better. Papers that have only one or two supportive reviews are very unlikely to be accepted, and authors should consider whether they want to revise and resubmit.
*[Figure 1: proportion of reviewers recommending RR or better, by outcome]*
Another way to look at how supportive reviews are is to consider how many reviewers recommend ARR or better. Papers where no reviewer recommends ARR or better are unlikely to be accepted.
*[Figure 2: proportion of reviewers recommending ARR or better, by outcome]*
A very positive way to look at how supportive reviews are is to consider how many reviewers recommend A, which is the most positive recommendation possible. However, 86% of papers have no reviewer recommending A. This is an unfortunate statistic for our community, as our review process might fairly be criticised for being overly negative. This also means a fair number of papers where no reviewer recommends A in the first round will be accepted after the PC meeting in January 2023, showing the positive impact a revision cycle can have. It’s worth reflecting on how we can improve the quality of submissions and the tone of reviews moving forward.
*[Figure 3: proportion of reviewers recommending A, by outcome]*
Subcommittees
Each year, there is some variation between subcommittees. We provide this data for transparency and reflection on how different subcommittees are running their review process this year.
*[Figure 4: breakdown of QR, X, and RR per subcommittee]*
Conclusion
This short overview of the review data should give you some additional context when analysing your reviews and deciding whether you want to revise and resubmit. These numbers give some indications, but all decisions are reached after discussion at the PC meeting. There are no deterministic positive or negative outcomes; all decisions are human decisions made by the programme committee.
Good luck with your paper revisions, or if you are not revising and resubmitting, good luck with your future plans for your work. We hope the review process has provided something helpful in improving your papers and working towards your next publications.
Data Tables
Note some numbers may not add up to official totals due to conflicts, late reviews, and other missing data at time of writing.
Figure 1
Figure Description: Proportion of reviewers recommending RR or better. Papers with at least 75% of reviewers recommending RR or better make up the top 32% of submissions. See data table:
 | Rejected | Revise and Resubmit |
---|---|---|
0.0% | 1624 | 0 |
25% | 0 | 246 |
50% | 0 | 298 |
75% | 0 | 473 |
100% | 0 | 541 |
Figure 2
Figure Description: Proportion of reviewers recommending “ARR” or better. Papers where at least half of the reviewers recommend “ARR” or better represent the top 14% of submissions. See data table:
 | Rejected | Revise and Resubmit |
---|---|---|
0.0% | 1624 | 585 |
25% | 0 | 538 |
50% | 0 | 211 |
75% | 0 | 118 |
100% | 0 | 106 |
Figure 3
Figure Description: Proportion of reviewers recommending A. Papers where all reviewers recommend A are rare, representing just 0.5% of submissions. See data table:
 | Rejected | Revise and Resubmit |
---|---|---|
0.0% | 1624 | 1181 |
25% | 0 | 287 |
50% | 0 | 55 |
75% | 0 | 19 |
100% | 0 | 16 |
Figure 4
Figure Description: Breakdown of QR, X, and RR for all subcommittees. See data table:
 | QR (proportion) | X (proportion) | RR (proportion) |
---|---|---|---|
Accessibility and Aging A, Accessibility joint | 0.0 | 0.463918 | 0.536082 |
Accessibility and Aging B, Accessibility joint | 0.020833 | 0.375000 | 0.593750 |
Blending Interaction: Engineering Interactive Systems & Tools | 0.025253 | 0.469697 | 0.505051 |
Building Devices: Hardware, Materials, and Fabrication | 0.010638 | 0.329787 | 0.659574 |
Computational Interaction | 0.016854 | 0.443820 | 0.533708 |
Critical and Sustainable Computing | 0.033898 | 0.338983 | 0.627119 |
Design A, Design joint | 0.033058 | 0.545455 | 0.421488 |
Design B, Design joint | 0.016949 | 0.432203 | 0.550847 |
Games and Play | 0.053435 | 0.473282 | 0.473282 |
Health | 0.116505 | 0.451456 | 0.432039 |
Interacting with Devices: Interaction Techniques & Modalities | 0.018433 | 0.470046 | 0.511521 |
Interaction Beyond the Individual | 0.059603 | 0.417219 | 0.523179 |
Learning, Education and Families A, Learning joint | 0.063158 | 0.463158 | 0.463158 |
Learning, Education and Families B, Learning joint | 0.032609 | 0.543478 | 0.423913 |
Privacy & Security | 0.068182 | 0.409091 | 0.522727 |
Specific Application Areas | 0.046358 | 0.496689 | 0.456954 |
Understanding People: Mixed and Alternative Methods | 0.113924 | 0.481013 | 0.405063 |
Understanding People: Qualitative Methods | 0.078947 | 0.381579 | 0.539474 |
Understanding People: Quantitative Methods | 0.074324 | 0.459459 | 0.466216 |
User Experience and Usability A, User Experience and Usability joint | 0.081967 | 0.557377 | 0.360656 |
User Experience and Usability B, User Experience and Usability joint | 0.049180 | 0.508197 | 0.442623 |
Visualization | 0.046358 | 0.377483 | 0.576159 |
Student Volunteer
Become a Student Volunteer
The student volunteer organization is what keeps CHI running smoothly throughout the conference. You must have had student status for at least one semester during the academic year before CHI. We are more than happy to accept undergraduate, graduate, and PhD students. We need friendly, enthusiastic volunteers to help us out.
The SV lottery will open on Wednesday, October 12, 2022, at new.chisv.org and will close on Monday, January 16, 2023. Approximately 180 students will be chosen as SVs for this year’s conference. All other students who registered will be assigned a position on the waitlist. To learn how the SV lottery works, please check the Student Volunteers page for more details. To sign up for the lottery, please visit new.chisv.org, select the appropriate conference, and follow the steps to enroll.
We will mainly be accepting IN-PERSON SVs for this year’s conference; however, we may end up needing a limited number of SVs to complete remote-only tasks. SVs can state their preference during enrollment, and the registration form can be updated at any time before the lottery is run. We encourage all applicants to update the form once their participation mode becomes clearer later in the year.
The lottery results will be announced on Monday, January 23, 2023. Once you have a confirmed spot and registration is open, you will be required to register, usually within two weeks. You will receive instructions on how to do this, with a special code that will waive your registration fee for the conference. You will still be responsible for course/workshop fees.
Important Dates
All times are in Anywhere on Earth (AoE) time zone. When the deadline is day D, the last time to submit is when D ends AoE. Check your local time in AoE.
- SV lottery registration open: Wednesday, Oct 12, 2022
- Close lottery: Monday, January 16, 2023
- Announce results: Monday, January 23, 2023
What Will I Do When I Volunteer?
As a CHI2023 SV, you will agree to a volunteer contract, in which you agree to:
- In-person SVs: Work at least 20 hours
- Show up on time to tasks
- Attend an orientation session
- Arrive at the conference by Sunday morning at the latest (in person SVs only)
In return we commit to:
- Waive your registration fee
- Provide 2 meals a day on site (breakfast and lunch)
- Free SV t-shirt to be collected on site
- Our fabulous SV thank-you party on Friday night (April 28, 2023). When you are planning your travel, we highly recommend leaving on Saturday or Sunday so that you can attend the party. There is always food, drinks, dancing, and fun!
- More SV benefits TBA…
If you need to reach us, please always use the svchair@chi2023.acm.org address so that the three of us receive it. Reply-to-all on our correspondence so we all stay in the loop and can better help you.
A CHI 2023 note: With the rapidly changing situation and CHI 2023 staying hybrid, there may be changes to the way the SV program operates this year. We, the SV chairs, are monitoring the situation and will keep the community up to date on any changes to the SV program. If you have any comments or concerns, please feel free to email us at svchair@chi2023.acm.org.
Frequently Asked Questions
We get a lot of emails with the same kinds of questions; this is not a made-up FAQ.
Q: I know the deadline for the lottery is passed, but I really, really want to be a student volunteer. Can you get me in?
A: You may go to new.chisv.org at any time after the lottery has opened, or even after it has run, to put your name in the running. If the lottery has already been run, your name will simply be added to the end of the waiting list. If you will be attending CHI anyway, there is always a chance you may be added at the last minute – you never know.
Q: I want to skip orientation, or work way less than 20 hours, or arrive on Monday, can I still be an SV?
A: No, sorry – these are minimum expectations we have of everyone.
If extenuating circumstances appear after you commit (like volcanoes erupting and other strange things), please communicate with us (at svchair@chi2023.acm.org). All we ask is that you tell us what your circumstances are as early as you realize a situation has come up.
Q: I didn’t get your emails and/or forgot to register by the deadlines you guys sent us and I lost my spot as an SV, can I get it back?
A: If this is due to you just not reading your emails, not taking care of your responsibilities, not keeping your email up to date in our system, forgetting, or similar things, then the answer is NO, you may not. If there are extenuating circumstances, please communicate with us (at svchair@chi2023.acm.org). All we ask is that you tell us what your circumstances are as early as you realize a situation has come up. (Yes, we’ll repeat this often.)
Q: I was nominated for an SV spot by someone and got in, will I have to do the same kind of work as other SVs?
A: Yes, the obligations are the same.
We are looking forward to meeting all of you!
Ciabhan Connelly, Georgia Institute of Technology, Atlanta, U.S.
Julia Dunbar, University of Washington, Seattle, Washington, U.S.
Maximiliane Windl, LMU Munich, Munich, Germany
Email: svchair@chi2023.acm.org