Harvard-2017: S4 Vision and CDT Synchronization


Back to Harvard-2017 main page

S4 Vision and CDT Synchronization (Moderator: Mark Devlin; Scribe: Akito Kusaka)

Post talks here.

  • Overview -- charge & status of deliverables -- Mark Devlin [PDF]
  • Key Science, Parameter Targets -- Raphael Flauger [PDF]
  • Clusters and baryonic physics -- Nick Battaglia PDF
  • Surveys -- Tom Crawford File:Cdt surveys.pdf
  • Open Discussion -- All
  • Strawman Instrument Design(s) (emphasizing trades) -- Adrian Lee PDF
  • R&D -- Kent Irwin PDF
  • Action items from session -- Akito Kusaka [[File: ]]

Notes from session

Overview (Mark)

(see Mark's slides for the content of the talk)

Discussions

Suzanne: any suggestion on where to stop on systematics? M. Devlin: it will be a continuous/iterative process. Optics simulations have gotten a lot better in the last decade, but our requirements may be more stringent. We could change what to put in the focal plane as we move forward and make progress in estimating systematics.

J. Bartlett: Lobbying. Give talks. Advertise to the broader audience. M. Devlin: agreed. Bringing in neutrino science is great for DOE; for NSF/the decadal, we need to get buy-in from astronomers.

Brad Johnson: do DOE and NSF fund it all together? M. Devlin: DOE does not fund the telescope or infrastructure; it funds the camera. There is a dividing line, but we should go as a package. Adrian: we can have an entire science portfolio, but the selling emphasis can be different for the DOE community and the NSF community.

Nathan W.: good to have a talks wiki page etc. John C.: we need to improve the web site. Talks, job postings, etc.

Key Science Goals (Raphael)

Jim B.: the need for high frequency channels has to be carefully addressed. Dust is there. Simulations are needed. Raphael: we know dust is there; the question is how many frequencies. Intense simulation work is ongoing.

Natalie R.: when the requirement is set for sigma(r), does it include statistics and systematics? How do you define this? Raphael: it depends on the method. Take 1000 simulations and take the spread. The goal is to keep the systematics below 20%. But sigma(r)=5e-4 is only statistical.
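
A minimal sketch of that procedure, with purely illustrative numbers (the simulation ensemble, the recovered-r values, and the 20% systematics-budget check are assumptions for illustration, not the actual analysis pipeline):

  import numpy as np

  # Illustrative only: estimate sigma(r) as the spread of r values recovered
  # from an ensemble of ~1000 simulations, then check that any systematic
  # bias stays below 20% of that statistical spread.

  rng = np.random.default_rng(0)

  # Stand-in for r values recovered from 1000 simulated skies; in practice
  # these would come from the full map-based analysis of each simulation.
  r_recovered = rng.normal(loc=0.0, scale=5e-4, size=1000)

  sigma_r_stat = np.std(r_recovered, ddof=1)   # statistical sigma(r)
  bias_r = np.mean(r_recovered)                # residual systematic offset

  budget = 0.2 * sigma_r_stat                  # 20% systematics budget
  print(f"sigma(r)_stat = {sigma_r_stat:.2e}")
  print(f"bias = {bias_r:.2e}, budget = {budget:.2e}")
  print("within budget" if abs(bias_r) < budget else "exceeds budget")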

Blake: remember the cross correlation science - calibrating LSST, sample variance cancellation. Raphael: I assume they will be covered in the next two talks. Cora: in the parallel session? Jim B.: be clear who is responsible for that; make sure we cover it.

Brad B.: on Neff, how much are we improving? Natalie R.: to add, it is expensive and needs to be very well motivated. It is in the ... John C.: Stage 3 is sigma(Neff) ~ 0.06. Mark D.: DESI? Natalie: not to this level. Raphael: this method of using an optical survey is not demonstrated yet.

Clusters and Baryons (Nick)

Lloyd: lensing cross correlation / LSS. Blake & Lloyd to start plugging into this group.

Nathan W.: Having two independent neutrino mass measurements is extremely valuable.

John C.: what would be the big driver if high-ell science is to be a science driver? Nick B.: it depends on the community. For astro, galaxy formation is really a big thing. Though underrepresented in the S4 community, the galaxy formation community is excited about S4. Mark D.: the X-ray community is also excited. Nick B.: combination with Athena.

Mike N.: many are excited about the short wavelength capabilities. Perhaps a design driver. Another is time-domain astronomy, but we have little representation there. Nathan W.: huge potential and broad science in this area. Lots of light curves. Nick B.: planets, ... maybe worth having more working groups in these areas.

Brad J.: surveys or targeting small objects? Nick B.: optimization shows that we always want to go with wide surveys.

Brian K.: polarized SZ? Nick B.: several sigma expected with S4, depending on the forecast. Even measuring the quadrupole as a function of z, and going beyond cosmic variance.

Tom C.: if we set galaxy formation as a key science, we should make sure S4 is transformative. But what about CCATp? S4 needs to do better than CCATp. Mike N.: CCATp does not have as capable a camera. Brad B.: the survey mode is different.

Mark D.: if these science cases are to be drivers, we need to hear from a lot of people. If we need 1-1.5 arcmin, that means a bigger telescope.

Charles L.: a Fisher forecast needs to include many things, and is a time-consuming study. Nick B.: a lot of elements (e.g., non-white noise) are in there, but agreed that more work is to be done in this area.

Survey strategy (Tom)

John K.: foregrounds get worse once we go beyond the best few % of the sky. But what about the delensing residual? Is it better to have a deeper survey? Neelima: what about spreading the sensitivity into two patches? Tom C.: the CDT certainly thought about it. A 95% upper limit of 1e-3 is already pushing the cost. If we require two patches, that would double the cost. Blake: what about the case of a detection? We would want to confirm it at two places. Adrian: there is a large scale variation of the foregrounds. Mark D.: we can change the survey strategy as we observe, based on findings. Akito: be careful about reading the plots - assumptions are built in; whether smaller is better depends on the depth etc.

Marcel S.: increasing fsky does not seem to improve the Neff constraint that much. Tom C.: 0.027 is a threshold and we will reach it at one sigma. Whether reaching one sigma is a good threshold is another question, though.

Zeesh: sigma(Neff) = 0.027 - is it really a science driver? Mark D.: if we require fsky=0.8, that would be much more expensive. Adrian: in principle we can reach sigma(Neff) = 0.027 with fsky=0.4.

Blake: can we use the wider survey for delensing? Tom C.: it really needs to be deep, hence a dedicated deep, high-resolution survey is required.

Blake: Neff constraint combining BAO and CMB? Joel M.: does not help much. Still need CMB to anchor that.

Joel: there are a few other thresholds above 0.027, but not as strong. Brad B.: again, we cannot reach the threshold at 2 sigma. And the improvement over S3 is not an order of magnitude. Joel: just to be clear, this is the number that particle physicists care about the most. John C.: there are caveats about 0.027 being a clear goal - it is not that well defined. Raphael? Raphael: it could go negative, so I am not sure we should take this number too seriously. It is interesting to exclude the electroweak scale etc. Natalie: as a particle physicist, I care about inflation and r.
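
For context on where the 0.027 threshold comes from (a standard result, not spelled out in the session notes): a single real scalar that decoupled while all Standard Model degrees of freedom were still relativistic contributes

  \Delta N_{\rm eff} = \frac{4}{7}\, g \left( \frac{g_{*s}(T^{\rm dec}_{\nu})}{g_{*s}(T_{\rm dec})} \right)^{4/3}
                     = \frac{4}{7} \left( \frac{10.75}{106.75} \right)^{4/3} \simeq 0.027 ,

with g = 1 for a real scalar. Species that decouple later (smaller g_{*s}) contribute more, so 0.027 is the floor for any light relic that was ever in thermal equilibrium, which is why it serves as a threshold.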

Discussion

Joaquin: focus is good. People would have complained if the CDT had diverted attention and diverged. Mark D.: if we were to elevate a science goal to a driver, we need to look at it from various angles: high profile, cost, ... Toki S.: a summary of which science costs how much. Schedule driver, ... Mark D.: going to the high-ell science, cost modeling will be more difficult.

REQUEST: cost benefit analysis summary for a variety of science goals.

Joaquin: get people from other communities (LSST etc.) and get the science cases and requirements written down.

Mike N.: it would be good to have a paragraph on the variety of science in the CDT report that we can bring to the decadal. Mark D.: we may not have enough time in the CDT, but it would be good to aim for. Marcel S.: Neff alone driving fsky seems a bit weak. Mark D.: 40% from the southern hemisphere.

Steve A.: cross correlation and cluster science require 1 arcmin. Unless it drives the cost way up, 1 arcmin is a good target; it is transformational. Mark D.: do all the detectors have to be at 1 arcmin? Is that the right balance? All the frequencies?

Clarence: will a preliminary version be posted somewhere? Charles: the report cannot be public before AAAC approval. That does not necessarily mean the contents cannot be public, so we can think about ways of getting more community input.

Jim B.: is there something that drives us to higher frequency channels? Mark D.: another reason why CCAT is an important asset.

Colin H.: concerned about the neutrino mass constraint no longer being a science driver. Things like systematics may bite us, and there isn't a control against that. Brad B.: the issue is other experiments such as DESI. Colin H.: but neutrino mass from clusters. Blake: lensing is robust.


Lunch Break

Strawpeople (Adrian)

Mark D.: on sigma(r) vs. aperture, it is flat assuming we can build bigger and bigger apertures.

Brad B.: the 500k detectors, hybrid, ...; is that the CDT-recommended strawperson? Adrian: this is not a binding thing for the community. This is a concept and an example. Mark D.: remember the CDT is not coming up with a PDR.

Steve P.: 200k vs. 300k detectors for small aperture vs. large aperture? Adrian: this depends on simulations etc. Also, Neff requires quite a few detectors. Suzanne: and some of the delensing detectors are included? Adrian: yes, in part.

Nick B.: frequency bands - are the 8 bands shown there for the small aperture? Adrian: yes; delensing may require fewer frequencies, clusters may require higher frequencies, and Neff may require even fewer frequencies.

Hannes: what element of the strawperson drives the cost? Adrian: the detector count drives the cost. The size of the large aperture is not the driver. Steve: roughly 1/3 each for detectors/readout, cryostat, and telescope. There isn't one thing that drives the cost.

Natalie: the definition of a "year" - assumed efficiencies, ... Make sure we speak a common language. Adrian: efforts are being made in this direction. The inflation science is based on what BICEP achieved. Natalie: it would be good to include those definitions in the slides.

Toki: does splitting 90 and 150 help? Adrian: this was not a focus, so we have not found that out - will try to find this out later.

Brad B.: I understand we should not take it too seriously? Adrian: not at this point. Mark D.: but having multiple apertures is probably pretty well agreed within the CDT.

Brad J.: the cost is extremely sensitive to the assumed observing efficiency, and 20% seems too low. Any effort to make an improved estimate in the CDT? Adrian: using something existing is conservative. Once a conservative estimate is established ... The 20% is already ... Clem: observing efficiency will not get better by a factor of 2. It is physically impossible. Shaul: even if it may be possible, we cannot assume that.

Steve A.: is the small aperture's size set? Adrian: we can optimize it, but the dynamics are different: throughput, resolution (but only for low frequencies), ... Also ground shielding. Steve A.: will there be a range of (small) aperture sizes in the CDT report?

Mark D.: what 1/f (l_knee) is assumed for the estimate? Adrian: currently assuming something between the inflation and lensing requirements. Mark D.: need to be careful - it can hurt, like the efficiency. Adrian: these levels are demonstrated, except the deeper levels, of course, are not demonstrated.
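
For reference, the l_knee being discussed parameterizes the low-frequency (1/f) noise in the usual convention (the exact form assumed in the strawperson forecasts is an assumption here):

  N_\ell = N_{\rm white} \left[ 1 + \left( \frac{\ell_{\rm knee}}{\ell} \right)^{\alpha} \right] ,

so the assumed l_knee mostly affects the large-scale (inflation/r) science, while lensing and the high-ell science are driven mainly by the white-noise level N_white.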


R&D (Kent)

Steve P.: how do you decide which things should be pre-CD0 and which should be post-CD0? Kent: what is lowest risk for the most important science. Start at the deadline and work backwards. Working backwards, fab efficiency / yield could be post-CD0, so probably wafer uniformity comes later. We should look at TES vs. MKIDs earlier.

John C.: suppose DOE does not wait for the decadal; then we will be in fabrication in a couple of years, and the timeline will be much shorter. Kent: yes, we need a schedule to lay out the development plan. Natalie: often CD-2/3a - long-lead-time items start (placing orders) at CD-2. Kent: yes, if detector fabrication needs to start at CD-2 as a long-lead-time item, development has to start earlier.

Joaquin: what about a schedule delay of the decadal? Charles: a 6-month delay. This is not unusual.

Clem: where does the risk score come from? Kent: LSST. Clem: it seems very strict about the schedule - a 6-month delay being a critical delay? Natalie: yes, in DOE management, a 6-month delay is critical.

Denis: the risk score assignment seems subjective. Mark D.: in NASA, when putting in a proposal, they will bring in professional managers, who would then assign risks. Natalie: in most DOE projects, the DOE project team will put together a risk registry. New risks can be added during the project, and some risks can be retired. Kent: dynamic contingency as well.

Jesse: what do you do when a slip is highly likely? How to move resources in, and how to adapt to the situation - this is the difficult part. Brenna: that's part of the trick of defining the governance and collaboration structure. You would want flexibility to adjust throughout the course of the project. There is a process, and schedule contingency. Unless you go past the contingency they won't kill you.

Shaul: suppose there is a whole bunch of development for NASA etc. designed for this project. Kent: this DOE way is not the way this community has been working. Brenna: you can upgrade the project. You can explain the new development to the agency. You have to do a lot of arguing.

Steve P.: the post-decadal schedule seems very aggressive. What needs to be prepared/developed going into the decadal and during the decadal?

John C.: the decadal is really needed for NSF astro, but DOE does not need it, and other NSF divisions do not need it. Natalie: P5 has endorsed it; we should keep pushing regardless of the decadal.

Shaul: let's suppose this is not prioritized by NSF. Is there a path forward without NSF? Natalie: DOE and NSF physics listen to P5. It would be good to come up with a way to move ahead and have NSF catch up.

Adrian: is there any way to not require the decadal if all three divisions of NSF agree? Rich: NSF astro needs the decadal. Adrian: could there be enough support? Vlad: the critical difficulty for MREFC is to get the R&D in the queue. But if DOE does the R&D, then NSF can quickly jump onto the construction and catch up.

Clem: if NSF comes in later, that's what enables the high-ell science - big telescope, ...

Jim B.: the funding model? Mark D.: NSF to build platforms, concrete, site, ... Vlad: operations as well. Natalie: DOE will also support operations. Charles: the basic scheme is 50/50. Shaul: the CDT should adjust its science for the risk of NSF not coming in. Mark D.: we should be looking at the phasing - first on r, sites, ...?

Schedule and phasing, funding model, two-site solution, ...

Adrian: for the decadal, we need a coherent message from the CMB community - space vs. ground, complementary in aperture and frequencies. Mark D.: they are going to give you a hard time; we need to anticipate difficult questions.

Shaul: the decadal is unlikely to recommend any probes in the cost range of a CMB probe. The likely scenario is a funding wedge; then at later times there will be a point (2025 etc.) for competition. Mark D.: they always split ground and space; we need to come up with a unified voice. In the US panel: 35, then 8 rated. There is that level of selection there. If you don't rank high in the US you cannot go further. Charles: there can be a new frontiers program. Discovery area - the competition would be later, in 2024 or something. John C.: there will be text on the CMB for sure, and we should make sure we give them material.

Action items/Next steps

Summarize action items here

  • Improve the S4 web site: talk slides, etc.
    • Also related to collaboration formation, talk policy, etc.
  • CDT to get feedback from the community on the report.
  • Strawpeople
    • Be clear about the range of parameters, assumptions: aperture size, l_knee, observing efficiency, ...
  • Funding model (DOE, NSF), phasing, ... - connected to the target as well.
  • Two key science goals driving the instrument requirements (concerns in particular for high-ell).
    • Does this strategy really capture the requirements, including systematics?
  • Neff - goal of sigma(Neff) = 0.027
    • [The science cases are clear (Raphael has a description - just didn't allocate time for explaining it).]
    • Yet, it is not a 2-sigma measurement of 0.027 (only 1 sigma).
    • Improvement over the S3 experiments?
    • Does it really capture the requirements on the instrument for the other high-ell science that we care about?
  • Cross correlation science.
    • Need to build up the science case, and requirement flowdown. Who is in charge?
  • Small angular scale science as a possible driver.
    • It would take a lot of effort to define this as a driver - flowing the science down to instrument requirements takes a lot of modeling/simulation/etc. effort.
    • Which ones do we really want to be the "drivers"?
  • Cost benefit analysis summary for a variety of science goals.

Discussion

  • For inflation, no one said sigma_r = 1e-3 is a bad target, so we seem to be on firm ground there.
  • A lot of questions came up about sigma(Neff) = 0.027 as a goal.
  • Related: will that capture the requirements on the instruments for the other high-ell science we care about?
  • It is hard to do all the sims needed to flow down instrument requirements for the other high-ell science in the time frame.
  • A cost benefit analysis summary for a variety of goals would be useful but might also be hard to get done on time.
  • BJK — we feel like we own r amongst all fields, but Neff is different since anything that can trace P(k) can say something — is it their free science and our requirements science?
  • Joel — true enough; if you knew where every object in the universe is you could get somewhere with large scale structure, but that isn't feasible.
  • Raphael — with BOSS you can say there is nonzero Neff and maybe get to Delta Neff = 1, but that is about it — i.e., it hasn't been demonstrated at all.
  • Jim Bartlett — how far into the nonlinear regime do you have to go?
  • RF — the Silk damping scale is one thing, and also the phases of the BAO peaks for the CMB and for large scale structure; in both cases the phase is what gives you the better numbers — but the size of the BAO peaks is smaller for LSS (says someone, maybe Joel) — on the other hand, if you measure them very well with a very large survey you can ...
  • RF/BK — there is a very optimistic 1-sigma = 0.057 estimate from Planck + DESI — but we have not been able to track down the person who made the actual forecasts. Going up to kmax = 0.02 and including the Ly-alpha forest, so quite aggressive, says NR. Brad points out S3 expects to get to sigma(Neff) = 0.06.
  • Akito — aren't we penalizing S4 more for foregrounds etc. for both r and for Neff? TC — that is much truer for r than for Neff — we aren't penalizing S4's Neff very much.
  • Julian — wasn't the thought that we would put in a nominal S3 forecast? Tom — to reforecast all the S3 experiments would be huge for the CDT. Cora — well, for the decadal.
  • JK — can't we just say how much effort is going into S3 in terms of detector-years compared to S4, and show how much that factor buys you on, say, Neff (and r?) (a toy version of this comparison is sketched after this list).
  • AK — we need to figure out the path forward for the other high-ell science — this is the area that requires some work.
  • CL — with the exception of frequency coverage (i.e., needing dust at 270 GHz), there is nothing driving us to a harder instrument for that high-ell stuff, and Neff is a hard driver, so from a practical point of view we don't have to resolve the issue.
  • AK — the reason we spent so much time today on this is that people get concerned about systematics — a lot of the high-ell science uses intensity more than polarization — so if only Neff is driving the instrument, will it cover the systematics enough? Also, if we say that Neff is the driver science and it is only 2x better, does that win it for us?
  • JB — it worries me that Neff is so weak — and if I understand the model then NSF would pay for the telescope — and if it is based on Neff, why would NSF AST go for it? Especially if it is only 2x better.
  • CL — 10x more effort for S4 means it has to be more than a 2x improvement on Neff — it just isn't a credible argument to say that 10x more detectors won't give you a better improvement.
  • JB + AK + others disagree.
  • AK — would you say that a fair comparison between S4 and S3 is the way to move forward?
  • TC — Fig. 25 in the science book gives sigma(Neff) vs. map sensitivity — for a factor of 10 in sensitivity you get a factor of 2 in sigma(Neff) at fixed fsky.
  • Neelima — I'm troubled by CL's statement that the frequency coverage etc. set by Neff will be good enough for everything else — when you are doing things with intensity you need more foreground cleaning. CL said you haven't proven you need more frequency channels — but Neelima says the burden of proof goes the other way around.
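
A toy sketch of the S3-vs-S4 comparison JK suggests above, combined with the Fig. 25 scaling TC quotes. The detector-year numbers are placeholders, and the assumed scalings (map noise proportional to 1/sqrt(detector-years); 10x in map sensitivity giving 2x in sigma(Neff) at fixed fsky) are illustrative assumptions, not CDT forecasts:

  import numpy as np

  # Toy comparison of S3 vs. S4 effort in detector-years (placeholder numbers),
  # assuming map noise scales as 1/sqrt(detector-years) and sigma(Neff) scales
  # as (map noise)^p at fixed fsky, with p chosen so that 10x better map
  # sensitivity gives 2x better sigma(Neff) (the Fig. 25 scaling quoted above).

  det_years_s3 = 1.0e5          # placeholder S3 effort
  det_years_s4 = 1.0e6          # placeholder S4 effort (~10x S3)

  noise_ratio = np.sqrt(det_years_s3 / det_years_s4)   # S4 noise / S3 noise
  p = np.log10(2.0)                                     # ~0.30
  sigma_neff_ratio = noise_ratio ** p                   # S4 sigma / S3 sigma

  print(f"map-noise improvement  : {1/noise_ratio:.1f}x")
  print(f"sigma(Neff) improvement: {1/sigma_neff_ratio:.1f}x")

Under these assumptions, 10x the detector-years improves the map noise by about 3x and sigma(Neff) by only about 1.4x at fixed fsky, which illustrates why the Neff improvement over S3 is well under an order of magnitude.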