Estimated observing efficiency for past and current telescopes

Colin Bischoff, 2018-09-25

In this posting, I try to estimate the relative observing efficiency for telescopes at South Pole vs Chile. It is hard to get a clean answer to this question because every experiment has its own unique circumstances and there are a limited number of data points to examine.

The method I will use here is to compare a survey weight (units of μK⁻²) calculated from published BB bandpowers to a survey weight calculated from instantaneous sensitivity and observing time. Note that survey weight is the quantity that should scale linearly with effort, so the survey weight at 150 GHz for the BK14 paper is equal to the BICEP2 2010--2012 survey weight plus the Keck Array 150 GHz survey weight for 2012--2014.

• The "bandpower weight" is easier to define unambiguously -- in a previous posting I calculated the white-noise bandpower Nℓ and effective fsky for many different experiments that have published BB results. From these results, I calculate the bandpower weight as fsky / Nℓ. Note that I am using the white-noise Nℓ and fsky -- see my previous posting for a look at low-ℓ noise.
• The "tod weight" is calculated from the instantaneous sensitivity (NEQ) of the full experiment and observing time (τ) as τ / NEQ². While this definition is quite simple, there are many possible choices for how to select τ and it can be difficult to do this in a consistent way across experiments.
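The two definitions above can be sketched as follows. This is a minimal illustration, not code from any analysis pipeline; the function names are mine, and the BICEP2 numbers plugged in are the ones quoted later in this posting.

```python
def tod_weight(neq, tau):
    """Survey weight (uK^-2) from array NEQ (uK sqrt(s)) and observing time (s)."""
    return tau / neq**2

def bandpower_weight(fsky, N_white):
    """Survey weight (uK^-2) from effective sky fraction and white-noise bandpower."""
    return fsky / N_white

# BICEP2 "on paper": NEQ = 17 uK sqrt(s) and tau = 3 calendar years,
# which reproduces the "tod weight A" entry for BICEP2 in the table below.
tau = 3 * 365 * 86400              # three years, in seconds
print(round(tod_weight(17.0, tau)))  # 327363 uK^-2
```

The weight ratio plotted in Figure 1 is simply tod_weight / bandpower_weight for each experiment.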

The idea behind these statistics is that tod weight describes the experiment on paper, i.e. "I will put together an array of detectors with NEQ = 15 μK√s and then observe for three years". The bandpower weight describes the results that were actually obtained, including data cuts, instrument downtime, filtering, inefficiencies in sky coverage, etc, etc. Note however that I am using actual array NEQ as reported in published results to calculate tod weight, so detector yield, noisy detectors, and increased NEQ from marginal weather all get baked into the tod weight to some extent and should not lead to a discrepancy between the two statistics.

Figure 1 is a plot of the ratio of tod weight to bandpower weight for BICEP/Keck, ACTpol, ABS, and QUIET. I didn't include SPTpol or POLARBEAR because I couldn't find array NEQ numbers for those instruments. Points are color-coded according to observing band (red for 95 GHz, green for 150 GHz, and blue for 220 GHz). A larger value of the weight ratio (y-axis) means that the statistical power of the bandpower result fell short of what we might expect from the instrument sensitivity and time on sky. The tod weight is ~100 times larger than the bandpower weight for most experiments. While we all know that there are many significant factors that cause observing efficiency to be less than a naive calculation would indicate, I haven't spent any time thinking about what factors of order ~10 would be needed to make these two statistics comparable -- I wouldn't recommend reading much into the absolute scale of the y-axis, but it would be interesting to cross-check with an ab initio sensitivity calculator such as BoloCalc.

For each experiment I include two points (connected by a line) that make different choices for how to define observing time. The upper point uses a strict definition that calculates τ as the number of seconds between when the experiment first started observing and when it completed. For the lower point, I tried to count only the stretches of time that were spent in standard observing mode, i.e. excluding downtimes for maintenance / upgrades. For ABS and BICEP2 150 GHz, I added an additional unfilled point that counts only the observing time after data cuts.
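As a concrete example of the "standard observing mode" definition of τ, the BICEP2 date ranges quoted in the details below (2010-02-15 to 2012-11-06, minus a deployment/calibration gap from 2011-01-01 to 2011-03-01) can be checked with simple date arithmetic:

```python
from datetime import date

# Full stretch from first to last observation, minus the excluded gap.
span = (date(2012, 11, 6) - date(2010, 2, 15)).days
gap = (date(2011, 3, 1) - date(2011, 1, 1)).days
tau_days = span - gap
print(tau_days)  # 936 days, the value used for the lower BICEP2 point
```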

Figure 1: survey weight ratio vs bandpower-derived survey weight

Details for figure inputs

• BICEP/Keck 150 GHz includes points from BK-I 2014, BK-V 2015 (same dataset used for BKP joint analysis), BK-VI 2016 (BK14), and the upcoming BK15 results.
• For BICEP2, I used an array NEQ of 17 μK√s with τ = 3 years. For the lower point, τ is reduced to 936 days (2010-02-15 to 2012-11-06, except for 2011-01-01 to 2011-03-01) to remove time spent on deployment and calibration campaigns. The unfilled circle counts BICEP2 data after cuts. From Table 7 of BK-II, the data volume surviving cuts is 8.6e9 detector-seconds and I used a typical detector sensitivity of 305.6 μK√s (from Fig 22 of the same paper).
• The BK-V result adds in Keck Array data from 2012 (11.5 μK√s for five receivers) and 2013 (9.5 μK√s for five receivers). These each have nominal τ = 1 year. For the lower points, I deducted time spent on deployment and calibration campaigns, ending up with 240 days in 2012 and 223 days in 2013.
• The BK14 result adds in Keck Array data from 2014 (13.3 μK√s for three receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
• The BK15 result adds in Keck Array data from 2015 (19.5 μK√s for one receiver). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
• BICEP/Keck 95 GHz includes points from BK-VI 2016 (BK14) and the upcoming BK15 results.
• The BK14 result uses 2014 Keck Array data (17.4 μK√s for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
• The BK15 result adds in Keck Array data from 2015 (13.5 μK√s for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
• BICEP/Keck 220 GHz is from the upcoming BK15 results. Array NEQ is 41.6 μK√s for two receivers. This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting calibration.
• ABS has array NEQ of 41 μK√s and observed for 464 days (2012-09-13 to 2013-12-21). For the lower (filled) point, I used τ = 1634 + 209 + 1745 + 3135 = 6723 hours (Section 3 of Kusaka 2018). For the unfilled point, I used 461,237 TES-hours on Field A after cuts (bottom line of Table 3 of Kusaka 2018) and a per-TES sensitivity of 580 μK√s.
• QUIET 43 GHz has array NEQ of 69 μK√s and observed for 232 days (2008-10-24 to 2009-06-13). For the lower point, I used τ = 3458 hours (Section 3 of QUIET 2011).
• QUIET 95 GHz has array NEQ of 87 μK√s and observed for 497 days (2009-08-12 to 2010-12-22). For the lower point, I used τ = 7426 hours (Section 3 of QUIET 2012).
• ACTpol season 1 has array NEQ of 19 μK√s and observed for 94 days (2013-09-11 to 2013-12-14). For the lower point, I multiplied τ by 63% to account for the fact that their analysis used only nighttime data for fields D1, D5, and D6 (Section 3.1 of Næss 2014).
• For ACTpol season 2, I kept the season 1 accumulated tod weight and added an additional 133 days (2014-08-20 to 2014-12-31) with array NEQ of 11.3 μK√s (inverse-quadrature sum of 23 and 12.9 μK√s for PA1 and PA2, respectively). For the lower point, the ACTpol season 2 observing time was scaled by a factor of 45% to account for D5 and D6 nighttime data only. It seems like Louis 2017 reanalyzes the season 1 data with somewhat different choices than Næss 2014, so this addition of weights might not be strictly accurate.
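Two of the quoted numbers can be spot-checked directly from the bullets above; this is just arithmetic on the listed inputs, not a re-derivation from any pipeline:

```python
# BICEP2 after-cuts tod weight: detector-seconds surviving cuts divided by
# the square of the typical per-detector sensitivity (BK-II Table 7 / Fig 22).
w_cut = 8.6e9 / 305.6**2
print(round(w_cut))  # 92086 uK^-2, the unfilled-circle value in the table

# ACTpol season 2 array NEQ: inverse-quadrature combination of PA1 and PA2.
neq = (23.0**-2 + 12.9**-2) ** -0.5
print(round(neq, 1))  # 11.3 uK sqrt(s)
```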

Conclusions

The goal of this analysis was to try to separate instrument design parameters, like array NEQ, from site-specific parameters, like weather. This is only approximately possible, since we know that detector sensitivity will fluctuate with sky temperature.

That said, the difference between upper and lower points for each experiment primarily tells us about how smoothly that experiment ran (except in the case of ACTpol, where it mostly accounts for nighttime only observing). While BICEP2 stands out as being a notably smooth-running experiment (cryostat stayed cold for three calendar years!), most of the other lines have fairly similar length. Across four calendar years of Keck observations, the number of days spent in standard observing mode was always around 240 (66%). QUIET and ABS also spent 60-65% of their calendar time in standard observing mode.

I will assume that CMB-S4 will be a well-run and well-staffed experiment at both South Pole and Atacama sites, in which case we can focus on comparing the lower points between experiments. The weight ratio for these points describes a combination of weather cuts, variations in detector noise over time or between detectors that is not captured in array NEQ, noise correlations that don't integrate down in maps, analysis inefficiencies, and probably other factors that I haven't thought of.

• We see a clear trend with observing frequency with the expected sign -- 95 GHz instruments do the best job of translating their instantaneous sensitivity into bandpower sensitivity. For BICEP/Keck, the weight ratio is about twice as large for 150 GHz as for 95 GHz, but the 220 GHz weight ratio is only ~20% higher than 150 GHz. Current noise forecasts are based on BICEP/Keck achieved performance, so these factors are already baked in.
• At fixed observing frequency, BICEP/Keck at South Pole does roughly twice as well as the Atacama experiments in converting instantaneous sensitivity to bandpower sensitivity. It is hard to know how much confidence to have in this conclusion, since it is based on just two points of comparison -- BK vs ABS at 150 GHz and BK vs QUIET at 95 GHz.
• It is very interesting that this factor of two persists even after we account for data cuts in BICEP2 and ABS. This implies that weather cuts are not driving the difference in observing efficiency.
• Also note that my bandpower weight statistic describes total effort in a way that is agnostic towards deep+narrow vs shallow+wide survey strategy. We might worry that Atacama-based telescopes are forced to survey a larger sky area and that this is a sub-optimal strategy for noise-dominated r detection efforts (you might not agree, depending on your optimism about delensing), but that distinction won't show up in this posting.
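The "factor of two" claim above can be read straight off the ratio B column of the table at the end of this posting (lower points, standard-observing τ):

```python
# Ratio B values copied from the tabulated results below.
ratio_B = {"BK15 150": 95, "ABS 150": 187, "BK15 95": 52, "QUIET 95": 106}

print(ratio_B["ABS 150"] / ratio_B["BK15 150"])   # ~2.0 at 150 GHz
print(ratio_B["QUIET 95"] / ratio_B["BK15 95"])   # ~2.0 at 95 GHz
```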

To go beyond what is presented here would probably require digging down into the detailed accounting of how each experiment ended up with its particular sensitivity. Matt Hasselfield and Tom Crawford have done some of this for ACTpol and SPTpol as part of the LAT forecasting effort. It would be interesting to compare their results with the 10,000 ft view shown here.

Tabulated results

The table below compiles all the numbers used for Figure 1; all weights are in μK⁻².

• tod weight A uses the full duration of the experiment (upper end of the line)
• tod weight B uses the duration of standard operations (lower end of the line)
• tod weight C uses the data volume after cuts (unfilled symbols, for BICEP2 and ABS only)
Experiment        Frequency   bandpower weight   tod weight A   ratio A   tod weight B   ratio B   tod weight C   ratio C
BK14              95 GHz      1478               104162         70        68490          46        --             --
BK15              95 GHz      3501               276178         79        182539         52        --             --
BICEP2            150 GHz     2905               327363         113       279828         96        92086          32
BK-V              150 GHz     6245               915250         147       650109         104       --             --
BK14              150 GHz     7514               1093530        146       767335         102       --             --
BK15              150 GHz     8638               1176380        136       822265         95        --             --
BK15              220 GHz     101                18249          181       12100          120       --             --
ACTpol (year 1)   150 GHz     42                 22498          539       14173          340       --             --
ACTpol (year 2)   150 GHz     197                112490         571       54670          278       --             --
QUIET             43 GHz      35                 4210           120       2615           75        --             --
QUIET             95 GHz      33                 5673           171       3532           106       --             --
ABS               150 GHz     77                 23849          310       14398          187       4936           64