Over on Bluesky yesterday, in a gap between all the shouting about politics, somebody asked for an explanation of what the Arxiv is, which I responded to with a long thread (click through for the whole thing):
A lot of that is hitting familiar themes of mine— the historical connection to letter-writing networks, the more collegial nature of physics compared to some other fields, the uneven adoption of preprint culture— but it’s probably worth repeating. It also ties in to a tab I’ve had open for a good while because I’ve been meaning to say something about it, namely this video from Sabine Hossenfelder:
In the same way that my thread is hitting familiar themes for me, this is hitting familiar themes for her: that academia is prone to groupthink and clannishness, and that the current approach to physics is a waste of time and effort. It’s probably a little more directly combative than normal— which is really saying something, if you’re familiar with her work— and that’s why it stuck out enough for me to keep the tab open.
Where do these two things overlap? Well, the obvious place would be toward the end of my thread, where I explicitly name-check her in talking about a possible downside of the adoption of the Arxiv by theoretical physics. I think that overall it’s a net win for science to have all this work freely available, and that the switch from a closed network to an open website has done a lot to expand and democratize the field. At the same time, though, the speed and relative ease with which theoretical papers can be disseminated via the Arxiv encourages a kind of faddishness and a frenetic pace that I think certainly increases the ambient stress level for researchers, particularly those early in their careers, and is arguably unhealthy for theoretical physics as a whole.
Specifically, I tend to agree with Hossenfelder regarding the common phenomenon where some “tantalizing” hint of new physics in experimental data— a two-sigma discrepancy between experiment and theory, say— triggers an avalanche of preprints “explaining” it in terms of the lead author’s favorite hypothetical new particle, only to see the whole thing invalidated a few months later when more data come in. The muon g-2 saga is not the greatest example of this— the calculations involved are complex enough that only a small number of groups can do them, so there’s less of an avalanche effect— but this article from Physics World is really good, so I’ll use this thin excuse to plug it. You really see it with the regular reports of an excess in some LHC experiment at some energy, one of which comes along every few months, or things like the mistaken claim of superluminal neutrinos from the OPERA experiment.
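As a rough illustration of why those two-sigma hints evaporate so reliably, here's a quick back-of-the-envelope simulation in Python (the channel count and every other number here are assumptions made up for illustration, not anything drawn from a real analysis) of how often pure statistical noise hands you at least one two-sigma excess when an experiment reports results in many independent channels:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration (numbers are made up, not tied to any real
# experiment): an analysis that reports results in many independent
# channels will fairly often show a 2-sigma "hint" somewhere even when
# there is no new physics at all.
n_channels = 30      # assumed number of independent measurements
n_trials = 100_000   # simulated repetitions of the whole experiment

# Each channel's deviation from the theoretical prediction, in units of
# its own uncertainty; here it's pure statistical noise, no signal.
z = rng.standard_normal((n_trials, n_channels))

# Fraction of simulated experiments whose largest excess reaches 2 sigma.
frac = np.mean(z.max(axis=1) >= 2.0)
print(f"Chance of at least one 2-sigma excess from noise alone: {frac:.2f}")
# With 30 channels this comes out around 0.5, i.e. roughly a coin flip.
```

With a few dozen places to look, it's roughly a coin flip that something somewhere pokes above two sigma by chance alone, which is a big part of why these hints so rarely survive the next round of data.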
This is borderline pathological behavior, driven by the incentives of modern academic science— “publish or perish” and all that— to rush into (pre)print with a tenuous explanation in hopes of establishing priority regarding a major breakthrough. It also probably sucks energy away from a search for more novel approaches that may ultimately be more fruitful— if, as Hossenfelder has long advocated, we need a radically different approach to the fundamentals of theoretical physics, that’s something that will come from longer-term contemplation of alternatives. That sort of deep thought isn’t really compatible with a rapid churn of papers “explaining” the anomaly of the week.
The perhaps less obvious connection between her video and my thread is in my repeated use of and emphasis on “theoretical” in relation to both the benefits and the costs of the Arxiv. The large-scale adoption of preprint publication as a norm is a much bigger deal for theoretical work than experimental work, because the distribution of effort is different: the time and effort required to write a paper and shepherd it through peer review is much less of a bottleneck in experimental fields. Not because experimentalists are better writers, or kinder peer reviewers, but because there’s so much time and effort and expense involved in building up a physical apparatus to acquire data.
That means the bottleneck to producing experimental physics research will always be somewhere other than the publication process. Which means that the Arxiv doesn’t have the same democratizing effect, but also that you don’t generally get an avalanche of bad experimental preprints in the same way that you do with theoretical papers. (With the possible exception of situations like the LK-99 “superconductivity” saga, where there’s a controversial claim about a relatively easy-to-synthesize material that can be tested with “off-the-shelf” apparatus.) It takes months or years to build up a new experiment to test a given claim, and then more months or years to acquire the data. That doesn’t allow wild-goose-chasing to the same degree.
Now, from the video and other things she’s written, Hossenfelder clearly also believes that a lot of experimental effort in fundamental physics is being wasted on testing misguided theoretical approaches. And that’s probably my biggest area of disagreement with her, both in this video and in her earlier writing: while I’m sympathetic to the idea that some radical shift of approach may be needed on the theory side of fundamental physics, I am much less convinced that experimental effort toward testing even deeply flawed theories is actually wasted.
Again, this has to do with the nature of the research bottlenecks. Not only does it take months or years to build up an apparatus to test a particular model, it takes years or decades to establish the expertise needed to design and build these experiments. Often the earliest experiments in a given subfield don’t have any prayer of seeing anything even in the most optimistic interpretation of the theory they aim to test. That doesn’t make them a waste of time, though— on the contrary, they’re an essential step in the process of developing new techniques that will eventually be useful.
I am much less concerned about the state of experimental physics than Hossenfelder is, because, as I see it, it’s inherently a process with a much longer lead time. And even if the current theoretical paradigms are fundamentally misguided and will eventually be replaced by something better, whatever that something better is will need sophisticated techniques to test its predictions (I say that confidently because if they could be tested in simple ways, we’d already have an answer). I think it’s perfectly reasonable to regard most current experimental efforts in fundamental physics as groundwork-laying: at worst, the experimentalists are developing new tools and techniques that will come in handy whenever the theorists get their shit together.
Note that this is not a “CERN invented the World Wide Web” argument that we’ll get some kind of massive windfall from some unspecifiable spin-off technology— I’ve never been a fan of those. I’m specifically arguing that these efforts are a benefit to physics, expanding the toolkit we can use to study nature in new ways. If we happen to enable flying cars in the process, that’s great, but the real point is to develop tools and train people to use them.
It’s also not a completely open-ended argument— I wouldn’t sign on to an effort to build a trillion-dollar particle collider on the Moon without some more solid justification than current models can provide. There is a price point somewhere between the LHC and the Moon collider that I would consider excessive even from an infrastructure-development perspective; I just don’t agree that we’ve reached it yet. The experimental stuff that we’re spending tax dollars on now is, to my mind, worth the effort even if it won’t revolutionize physics directly, because it’s building up a base that we’ll need down the road.
But again, this is a common refrain from me: whenever you hear someone talking about a crisis in physics, they’ve almost always elided the words “theoretical” and “particle.” Experimental physics is a different game, especially once you’re outside the particle subfield, and the situation there is nowhere near as dire.
I feel a bit like I’m repeating myself here, but it’s probably been long enough since the last time that nobody else will remember. If you’d like the hipster cred of recognizing this the next time I come around to it, here’s a button:
And if you’d like to argue with any of my characterizations of things, the comments will be open:
Another very fine article. I especially like the distinctions you draw between your views and Sabine Hossenfelder's. I agree in particular with your sense that building hardware to do an experiment, even if done so initially in response to a theory that will turn out to be of little use, is on balance a good thing.
That said, I did want to encourage anyone else who has clicked over to watch SH's video to also read the article she buried in her notes for the video. (This also applies to those who do not care to watch a video.) The article has the same thrust as the video, but is more measured. I don't completely agree with it, especially after having thought about it for a few hours, but it is thought-provoking. Here is the link she posted:
https://www.dropbox.com/scl/fi/5o31k2jovu4nmyy219tzh/nphys4079-1.pdf?rlkey=f5y07dj0i6ob29fuq01zgkibs&e=1&st=xtv22uph&dl=0