Chicago-2016: Inflation

From CMB-S4 wiki

Summary of Science Book Chapter 2 - Inflation

Session Leaders: Lloyd Knox, Sarah Shandera

File:InflationSummary.pdf

Opportunities for Improvement in Forecasting

Are the foregrounds in the B-mode survey forecasting of Buza et al., or others, sufficiently complex? What are some possible next steps?
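The baseline in such forecasts is typically a two-component parametric foreground model: Galactic dust plus synchrotron, each a power law in ell with a fixed SED. The sketch below is not the Buza et al. code, and every amplitude, spectral index, and pivot in it is a placeholder rather than a CMB-S4 or BICEP/Keck number; it shows that baseline plus a simple frequency-decorrelation parameter as one example of the extra complexity a next step could add (others: spatially varying spectral indices, ell-dependent decorrelation, non-Gaussian dust sims).

```python
# Minimal sketch (not the Buza et al. forecasting code) of the parametric
# dust + synchrotron BB foreground model typically assumed; every number below
# is a placeholder, not a CMB-S4 or BICEP/Keck value.
import numpy as np

H_PLANCK = 6.62607015e-34   # J s
K_BOLTZ = 1.380649e-23      # J / K
T_CMB = 2.7255              # K

def g_thermo(nu_ghz):
    """dT_CMB / dT_RJ: converts Rayleigh-Jeans brightness to CMB thermodynamic units."""
    x = H_PLANCK * nu_ghz * 1e9 / (K_BOLTZ * T_CMB)
    return (np.exp(x) - 1.0) ** 2 / (x ** 2 * np.exp(x))

def dust_dl(ell, nu_ghz, a_dust=5.0, alpha_d=-0.4, beta_d=1.6, t_dust=19.6,
            nu_pivot=353.0, ell_pivot=80.0):
    """Dust BB D_ell [uK_CMB^2]: power law in ell times a modified-blackbody SED."""
    def mbb_rj(nu):
        x = H_PLANCK * nu * 1e9 / (K_BOLTZ * t_dust)
        return nu ** (beta_d + 1.0) / (np.exp(x) - 1.0)
    sed = (mbb_rj(nu_ghz) / mbb_rj(nu_pivot)) * (g_thermo(nu_ghz) / g_thermo(nu_pivot))
    return a_dust * sed ** 2 * (np.asarray(ell) / ell_pivot) ** alpha_d

def sync_dl(ell, nu_ghz, a_sync=2.0, alpha_s=-0.6, beta_s=-3.1,
            nu_pivot=23.0, ell_pivot=80.0):
    """Synchrotron BB D_ell [uK_CMB^2]: power law in both ell and frequency."""
    sed = (nu_ghz / nu_pivot) ** beta_s * (g_thermo(nu_ghz) / g_thermo(nu_pivot))
    return a_sync * sed ** 2 * (np.asarray(ell) / ell_pivot) ** alpha_s

def dust_cross_dl(ell, nu1, nu2, delta_d=1.0, **kw):
    """Dust cross-spectrum between two bands; delta_d < 1 adds frequency
    decorrelation, one example of the extra complexity under discussion."""
    return delta_d * np.sqrt(dust_dl(ell, nu1, **kw) * dust_dl(ell, nu2, **kw))

ells = np.arange(30, 301)
print(dust_dl(ells, 150.0)[:3], sync_dl(ells, 95.0)[:3],
      dust_cross_dl(ells, 95.0, 150.0, delta_d=0.95)[:3])
```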

Delensing is highly idealized in these forecasts: foregrounds are ignored, as are quadratic-estimator biases and the potential impacts of masking. What are some possible next steps?
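To make concrete what "highly idealized" means here: in a typical Fisher-style forecast the entire delensing pipeline is collapsed into a single residual-lensing amplitude multiplying the lensing BB spectrum, with no foregrounds, no quadratic-estimator noise or bias, and no mask effects. Below is a minimal sketch of that kind of forecast, not any collaboration pipeline; it uses camb for the spectra, and the noise level, beam, f_sky, and ell range are placeholders rather than CMB-S4 survey parameters.

```python
# Minimal sketch of an idealized sigma(r) Fisher forecast in which delensing is
# reduced to one residual amplitude a_lens; requires camb, and all survey
# parameters below are placeholders, not CMB-S4 values.
import numpy as np
import camb

pars = camb.CAMBparams()
pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=0.12)
pars.InitPower.set_params(As=2.1e-9, ns=0.965, r=1.0)   # tensor template at r = 1
pars.WantTensors = True
pars.set_for_lmax(500, lens_potential_accuracy=1)
cls = camb.get_results(pars).get_cmb_power_spectra(pars, CMB_unit='muK', raw_cl=True)
bb_lens = cls['lensed_scalar'][:, 2]   # lensing B-mode C_ell
bb_tens = cls['tensor'][:, 2]          # tensor B-mode C_ell at r = 1 (linear in r)

def sigma_r(noise_uk_arcmin=1.0, fwhm_arcmin=30.0, a_lens=0.1, f_sky=0.03,
            lmin=30, lmax=300):
    """Idealized Fisher sigma(r) about a fiducial r = 0; no foregrounds or biases."""
    ells = np.arange(lmin, lmax + 1)
    rad = np.pi / (180.0 * 60.0)
    n_ell = (noise_uk_arcmin * rad) ** 2 * np.exp(
        ells * (ells + 1) * (fwhm_arcmin * rad / np.sqrt(8.0 * np.log(2))) ** 2)
    c_tot = a_lens * bb_lens[ells] + n_ell          # residual lensing + noise
    fisher = f_sky * np.sum((2 * ells + 1) / 2.0 * (bb_tens[ells] / c_tot) ** 2)
    return 1.0 / np.sqrt(fisher)

print(sigma_r(a_lens=1.0), sigma_r(a_lens=0.1))   # no delensing vs. 90% delensing
```

Possible next steps would replace the single a_lens knob with an explicit lensing reconstruction (including its noise and biases), add a foreground model like the one sketched above, and propagate a realistic mask.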

Figures for discussion

(Please insert figures, or links to figures, that you think might be helpful for our discussion)


Notes on discussion

- Clem comments that the current optimization framework, which shows only a modest sigma(r) preference for small f_sky, makes the simplistic assumption that all sky areas have foregrounds as low as the BICEP/Keck patch; letting the foreground amplitude grow with f_sky can be expected to increase the penalty for larger f_sky (see the f_sky sketch after this list).
 - Jo says her latest work with Alonso does this; the shapes don't change much and the conclusions are broadly similar.
- John Ruhl asks about realism in simulating systematics: can we have real, ugly effects like ghost beams, sidelobes, etc. included in these forecasts in the next 6 months?
 - Kovac says that simulating such effects reliably from the timestream level is unrealistic in the near term, but timestream sims might inform the categories of map-level systematics injected into mock trials; the hope is that, at the map level, forecasting can explore internal detectability and biases for different classes of systematics.
 - Tom says show me a specific systematic and we can develop a model for it.
 - François says map-level systematics forecasting exercises can help by revealing how low an effect has to be (at the map level) before it impacts the science (see the bias sketch after this list).
 - Brian Keating asks about polarization angle calibration.
 - Julian advocates for at least a small number of time-domain sims to inform map-level sims, but points out that the challenge is less the computational lift than that such sims would require specifying many details we don't currently have (scan strategy, etc.).
- Dick Bond recalls a Spider optimization from roughly a decade ago that showed a broad minimum in sigma(r) around f_sky ~ 8%. Why is that no longer best?
- Answers from the crowd:
 - Using optimal E/B separation techniques opens up the use of smaller patches, but some early forecasting efforts didn't assume this kind of analysis.
 - Delensing is important to the tradeoff; assuming no delensing, or a fixed (i.e. naively cost-free) level of delensing, would shift the minimum toward larger f_sky.
 - Assuming non-zero r also begins to favor larger f_sky.
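One way to make the "how low does an effect have to be" question quantitative is a linearized estimate of the bias on r from an unmodeled additive map-level systematic with B-mode power C_ell^sys. The sketch below continues the Fisher sketch under "Opportunities for Improvement" (it reuses numpy, bb_lens, bb_tens, and the same placeholder survey parameters), and the example systematic level is arbitrary, chosen only for illustration.

```python
# Continuation of the Fisher sketch above (reuses np, bb_lens, bb_tens).
def r_bias(c_sys, noise_uk_arcmin=1.0, fwhm_arcmin=30.0, a_lens=0.1, f_sky=0.03,
           lmin=30, lmax=300):
    """First-order bias on r from an unmodeled additive B-mode systematic c_sys:
    delta_r = F^-1 * f_sky * sum_l (2l+1)/2 * (dC_l/dr) * C_l^sys / (C_l^tot)^2."""
    ells = np.arange(lmin, lmax + 1)
    rad = np.pi / (180.0 * 60.0)
    n_ell = (noise_uk_arcmin * rad) ** 2 * np.exp(
        ells * (ells + 1) * (fwhm_arcmin * rad / np.sqrt(8.0 * np.log(2))) ** 2)
    c_tot = a_lens * bb_lens[ells] + n_ell
    fisher = f_sky * np.sum((2 * ells + 1) / 2.0 * (bb_tens[ells] / c_tot) ** 2)
    return f_sky * np.sum((2 * ells + 1) / 2.0 * bb_tens[ells] * c_sys[ells]
                          / c_tot ** 2) / fisher

# e.g. a systematic equivalent to 10 nK-arcmin of extra white B-mode power
c_sys = np.full(301, (0.01 * np.pi / (180.0 * 60.0)) ** 2)
print(r_bias(c_sys))
```

Comparing r_bias against the corresponding sigma_r gives a rough map-level requirement for each class of injected systematic.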
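Along the same lines, a toy version of the f_sky optimization discussed above (again reusing bb_lens and bb_tens from the Fisher sketch): noise is spread over a larger area at fixed effort, and a hypothetical residual-dust term grows with f_sky with an assumed, not measured, scaling exponent. Turning delensing off (a_lens = 1) or assuming a non-zero fiducial r shifts the minimum toward larger f_sky, as noted in the answers above.

```python
# Toy sigma(r) vs f_sky scan; continuation of the Fisher sketch above.  The dust
# residual amplitude, its f_sky scaling, and the effort level are all placeholders.
def sigma_r_vs_fsky(f_sky, effort_uk_arcmin=1.0, a_lens=0.1, r_fid=0.0,
                    a_dust_1pct=5e-3, dust_scaling=0.5, lmin=30, lmax=300):
    ells = np.arange(lmin, lmax + 1)
    rad = np.pi / (180.0 * 60.0)
    noise = effort_uk_arcmin * np.sqrt(f_sky / 0.01)    # fixed effort spread over f_sky
    n_ell = (noise * rad) ** 2                          # beam neglected at these ells
    # placeholder post-cleaning dust residual D_ell at ell = 80, growing with f_sky,
    # treated as extra variance rather than marginalized over
    a_dust = a_dust_1pct * (f_sky / 0.01) ** dust_scaling
    c_dust = a_dust * (ells / 80.0) ** (-0.4) * 2.0 * np.pi / (ells * (ells + 1.0))
    c_tot = r_fid * bb_tens[ells] + a_lens * bb_lens[ells] + n_ell + c_dust
    fisher = f_sky * np.sum((2 * ells + 1) / 2.0 * (bb_tens[ells] / c_tot) ** 2)
    return 1.0 / np.sqrt(fisher)

for fs in (0.01, 0.03, 0.1, 0.3):
    print(fs, sigma_r_vs_fsky(fs), sigma_r_vs_fsky(fs, a_lens=1.0),
          sigma_r_vs_fsky(fs, r_fid=0.01))
```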

Next Steps

We've been asked to identify the next steps to be taken:

1)