
Anonymous account for obvious reasons. I am a researcher in TCS with several publications in SODA/STOC/FOCS. I've recently become so disgruntled with the way these conferences are run that I wanted to voice my frustration into the void. It seems like the organizers collect "feedback" every year and throw it straight in the bin. In particular,

  1. In what universe does the single-chair model still make sense? How can one chair accurately supervise the reviewing process for the entire PC? Every other discipline I know of has switched to having several area chairs for different subdisciplines. Why are theory conferences lagging behind?

  2. What steps are being taken to prevent crap reviews? I've seen the SODA 2024 reviews for one of my students' papers, and they were a hot steaming pile of shit. The reviewers listed "Expert" confidence when it was clear as daylight that the reviewers (external or PC members) had no idea what the paper was about. Some reviews were 2-3 lines long, and I have no idea how that wasn't caught by the PC. What steps is the community taking to weed out low-quality reviews and reviewers? Has any reviewer been banned from reviewing in (say) the last 5 years for the quality of their reviews?

  3. I am aware of a PC member who has served on the committee several times in the last 9 years and who notoriously reads the papers in their pile and writes all their reviews on the last day. Their reviews are utter garbage, and they get into flame wars defending their BS reviews. I know they've sunk several papers that deserved to get in. Why is this person still being invited to serve on the committee when they have shown such utter disregard for the seriousness of their task of objectively reviewing papers?

  4. I was optimistic when conferences started implementing double-blind review, but I am now convinced it's complete and utter BS. I've received emails from well-known researchers in my area who publicized their papers after the SODA rebuttal period, saying they "hoped to see it in SODA despite the poor scores". These papers, surprise surprise, got into this year's and last year's SODA. Not to mention a few bad actors I am familiar with who search arXiv for the title of the paper they've been asked to review, despite the instructions specifically asking them not to. Do the conferences actually take double-blind review seriously, or is it just lip service?

  5. Who is doing quality assurance on the papers that get in? For example, there are two papers in last year's FOCS by two "superstars" where I was able to prove the claimed statements in an hour; these papers claimed that their main contribution was a "non-trivial" proof of those statements. Meanwhile, the papers I root for get weeded out and have to go through several rejection cycles.

  6. Why are poster sessions not a thing yet? Conferences complain that they can "only accept so many papers" because of physical constraints, but I believe this is utter BS. ML/data-mining conferences accept every "good" paper and categorize the accepted papers as poster, oral, or spotlight; poster papers are presented during a poster session, while oral and spotlight papers get talk slots.

  7. What is with the obsession with acceptance rates? This is just driving people away from TCS. The theory community keeps blabbering on about wanting to be more inclusive of new researchers, but I know of at least three researchers (including two of my own students) who are so sick of the BS in reviewing, and of waiting through several rejection cycles for a paper to get in, that they have switched to publishing almost exclusively in ML conferences. Those same papers are still cited by SODA/STOC/FOCS papers and are clearly relevant to the TCS community. Can we drop the facade of "acceptance rates" and just focus on accepting solid, good papers?

As I near the end of my career, I'm filled with nothing but frustration at the way TCS conferences are run. I actively encourage all my students to switch fields, including the really talented ones. Every time I discuss this with a TCS researcher, they share my frustration but argue that "this is how it's always been".

  • About acceptance rates: they are much higher for top theory conferences than for top conferences in almost any other CS subfield. Also, coming from publishing in AAAI/IJCAI during my master's to publishing in theory during my PhD, theory definitely compares favorably in the quality of reviews and the seriousness of the reviewing process, in my experience. – Laakeri Nov 11 '23 at 05:32
  • @Laakeri That's not true. The NeurIPS acceptance rate was 25.6% in 2022 and 26.1% in 2023. SIGMOD (for data mining) had an acceptance rate of 29.3% in 2022 and 41.8% in 2021 (I can't find figures for 2023). Compare this to, for example, SODA (28.1%, 30%), STOC (25.7%, 26.9%), and FOCS (37%, 34%). – DisgruntledTCSResearcher Nov 11 '23 at 05:36
  • @Laakeri Also, I don't think it's constructive to compare acceptance rates across disciplines. The average ML submission is not the same as the average TCS submission; there is a ton of self-selection in TCS. The currently enforced acceptance rates are causing extremely solid, good papers to be rejected, and that should not be happening. – DisgruntledTCSResearcher Nov 11 '23 at 05:39
  • Agreed that acceptance rates are not comparable because of self-selection. I think I misunderstood your point 7 a bit. – Laakeri Nov 11 '23 at 06:53
  • As for crap reviews, this is inherent in the conference publication model: there is no reliable way to get quality reviews on a ridiculously short deadline of a few weeks. Theoretical computer scientists should abandon this model and start publishing in journals, as all other scientists do. – Emil Jeřábek Nov 11 '23 at 08:00
  • @DisgruntledTCSResearcher You state that we shouldn't compare acceptance rates between disciplines, yet several of your points compare TCS and ML conferences. I'm not saying all is rosy in TCS (it's not), but I haven't met a single researcher in ML (theory or not) who doesn't say the ML conference reviewing process is utterly broken. – Clement C. Nov 11 '23 at 08:54
  • I would point out that many TCS researchers also try to publish in ML venues, where the work can sit somewhere between theory and practice. While those conferences have their own set of issues, they are at least quite open to papers that combine theory and experiments. For me, at least, they are a much more appealing venue than STOC/FOCS/SODA. – Sandeep Silwal Nov 11 '23 at 17:22
  • Is this a Theory A vs. Theory B issue? I have served on a bunch of PCs covering topics such as logic in computer science, semantics of programming languages, type theory, and category-theoretic aspects of TCS (LICS, ICFP, POPL, LCS, TACL). I have seen some bad reviews, but nowhere near as bad as what you are describing, and for the most part reviewers do as good a job as can be expected. PC chairs seem to have a good handle on things, too. (Acceptance rates and conferences-as-journals are another matter.) – Andrej Bauer Nov 17 '23 at 13:46
  • BTW, SIGPLAN conferences (ICFP, OOPSLA, POPL) have moved to a model where each publication undergoes a journal-style review process, and the proceedings are published as a journal (https://dl.acm.org/journal/pacmpl). – Andrej Bauer Nov 17 '23 at 13:48
  • I would be so happy to be banned from refereeing for computer science conferences ... – user313032 Nov 17 '23 at 21:24
  • As much as I agree with many of the points in the question, and with many other ways the academic system is broken (cf. https://a3nm.net/work/research/wrong/), I'm quite surprised by point 5. In my experience, accepted papers at major conferences can have many problems, but I have never seen results as trivial as you describe. Since you are anonymous anyway, could you point us to these specific FOCS papers? That would be a compelling argument against the perceived prestige of these conferences. – a3nm Nov 17 '23 at 22:17
  • As someone who usually publishes at AAAI/IJCAI, I can say that trash reviews are very common there as well. However, TCS conferences are smaller, so I see no excuse for failing to catch obviously bad reviews. – Nicola Gigante Nov 21 '23 at 05:59

0 Answers