I’m no life sciences person either, but it seems to me that their equivalent is the need to replicate studies at very large scale to truly be sure of ANYTHING they are seeing. The literal cost may not be in the same realm as colliders, but it seems like there is an equivalent logistical cost.
Yeah, in the pharma branch of biomedical science, the big cost/logistics issue is with producing stuff at scale.
"In the case of astronomy, [being an observational science] very directly motivates the creation of at least two comparable instruments, because you want one that can see the Northern Hemisphere sky and one that can see the Southern Hemisphere sky."
I'd add that another motivation for building comparable instruments is to improve resolution by networking the telescopes and doing interferometry. But could interferometry networks be thought of as a sort of consolidation as well? For example, the Event Horizon Telescope that produced the famous M87 black hole image a few years ago consists of about a dozen individual telescopes that were all built independently by various groups, and now they're working together as one.
-GS
That's certainly an ingenious use case for those scopes, though I think it's also the case that a lot of them are useful in their own right, not just as a contributor to an interferometric array.
Going rather afield, but weather and climate modeling are encountering the bigness problem. It comes from two different directions. First, on the science-as-an-abstract-endeavour side, the size of the codes involved has passed anything a small group can deal with (whether to learn in full or to replace). Ballpark 2 million lines of code for a more or less modern system. Steve Easterbrook (CS, U. Toronto) pointed out some years ago that any one of these models represents an investment comparable to a largish, if not the largest, particle collider. Say $500 million, IIRC.
The second side, IMNSHO, is greatly under-considered: namely, the computational cost of demonstrating that you have an improvement has been increasing far faster than the cost of the benchmark/standard runs (a daily weather model, a CMIP run for climate). As the models get better, it becomes harder to show a net improvement (it has long since become impossible to show universal improvement). As a consequence, while the computational cost of the standard run increases as X^4 (X being the factor of improvement in space-time resolution, assuming a fully four-dimensional model), the cost of an experiment to show improvement increases as X^6. Where it used to be a simple matter of a grad student tweaking a subroutine and seeing what happened, that experiment is now itself a team effort and requires a significant computing allocation on a large system. So again, convergence to fewer platforms (models/particle colliders) running fewer experiments (weather + climate models, at least) on fewer (hardware) platforms (few now being large enough).
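Just to put numbers on that scaling (a quick back-of-the-envelope sketch, using the X^4 and X^6 exponents stated above; the particular values of X and the little helper function are purely illustrative):

# Rough illustration of the scaling described above, not a real cost model:
# the standard run grows as X^4 and the improvement experiment as X^6,
# where X is the factor of improvement in space-time resolution.
def relative_costs(x):
    standard_run = x ** 4            # e.g. the daily forecast or a CMIP run
    improvement_experiment = x ** 6  # what it takes to demonstrate a net improvement
    return standard_run, improvement_experiment

for x in (1, 2, 4):
    std, exp = relative_costs(x)
    print(f"X={x}: standard run {std}x, improvement experiment {exp}x")

# Doubling the resolution (X=2) makes the standard run 16x more expensive,
# but the experiment showing you've actually improved things 64x more expensive.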
This is one reason that I'm not too happy with "economizing" on large projects. I don't really think the money not spent on the SSC, for example, went to other science projects.
In that specific case, I think there was a trade-off with the space shuttle. It didn't go into any other physics programs, though. In general, it's very much a "capital budget vs. operating budget" sort of situation -- the money spent on Big Science projects is really only available for those, and can't be split up into a thousand $5 million grants.