For a purely experimental CV paper with no proofs at all, should we write "no" for the checklist item "Does this paper make theoretical contributions? (yes/no)"?
If there's not even a bit of theoretical contribution, how can you still publish a paper like that?
Even if you’re not good at math, you still want to pad a paper, huh?
If it's pure experiments with no proofs, where is the novelty or the significance of the work? Doesn't the researcher basically become a graphics-card operator? (I haven't done CV, so I'm not sure whether CV has a tradition of judging only by experiments.)
As someone who was just desk-rejected by NeurIPS over exactly this, I can say it's happening again this year. Such a rip-off.
Bro, what workflow are you talking about? Do we need to fill in the paper's information?
I proposed a new architecture/module but didn't back it up with mathematical proofs, and honestly I don't quite understand what those theory papers are even about.
Everyone: the checklist is attached after the references, which pushes the paper past 9 pages. Is that OK?
Do we need some tool to strip metadata before submitting the PDF? What do people usually use?
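If you compile with hyperref (most conference templates already load it), one option is to blank the metadata fields at compile time. A minimal sketch, assuming a standard pdfLaTeX/hyperref setup; the empty values are placeholders:

% Blank out identifying PDF metadata before building the submission PDF.
\usepackage{hyperref} % omit if the template already loads hyperref
\hypersetup{
    pdfauthor={},    % strip author names
    pdftitle={},     % strip the title string
    pdfsubject={},
    pdfkeywords={},
    pdfcreator={},   % strip the producing application
    pdfproducer={}
}

Command-line tools like exiftool can also inspect and edit the metadata of the finished PDF if you'd rather check it after compiling.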
Submitted full paper.
XJTU: still trash

https://arxiv.org/abs/2103.14030
https://arxiv.org/abs/2306.01567
https://arxiv.org/abs/2403.18271
I just randomly picked a few top-conference papers; they're all like this.
A paper's contribution is not merely stacking formulas; it's using mathematical language to explain why a method works. I looked at those papers and they are not purely experimental: the theory is simple but the performance is ample, and they all pursue interpretability. CV feels a bit like building blocks, roughly: explain why this block, or why this plug-in, can go here, what role it plays, and whether future researchers can plug the new block into other toys. Achieving that already constitutes a theoretical contribution; it doesn't have to be as demanding as the formula-bombing papers at ICML, whose assumptions are often so strong they seem made just to stack formulas.
Maybe because I work on interpretability, exactly how a module's numbers compare to others' is often not my primary focus; changing the dataset or tweaking an old method's settings can easily dethrone the SOTA. I'm more interested in what problem it solves, whether its way of solving the problem is worth following, and whether the work opens a new avenue or simply closes an existing one. When I reviewed last year, if a paper merely stacked metrics without clearly explaining the underlying principle, I could give it at most a weak accept (WA). Some papers hadn't fully surpassed the SOTA, but if I thought the idea was excellent, I would gladly defend them and give an accept (AC).
Can the checklist be placed in the supplementary materials? … I didn’t notice this the first time I submitted…
Asked the chair, and the reply was yes.
May I ask what the maximum size of an uploaded archive is? If mine is 25 GB, does that mean I can only provide it as an anonymous download link?
Maximum 50 MB
Is it allowed to resize an image in LaTeX by setting its width? I only used \includegraphics[width=0.9\columnwidth], for example:
\begin{figure}[t]
    \centering
    \includegraphics[width=0.9\columnwidth]{figures/x.pdf}
    \caption{xxx}
\end{figure}
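Scaling with the width key of \includegraphics is the standard way to size figures; what matters is that the result stays inside the column. For completeness, a small sketch of the variant that spans both columns, assuming a two-column template (figures/x.pdf and the caption are just the placeholders from the post above):

% A figure spanning both columns: starred environment + \textwidth
\begin{figure*}[t]
    \centering
    \includegraphics[width=0.9\textwidth]{figures/x.pdf}
    \caption{xxx}
\end{figure*}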
I want to ask everyone: can you post to arXiv during the anonymity period? I couldn't find any relevant information on the official website.