Estimated observing efficiency for past and current telescopes

Colin Bischoff, 2018-09-25


In this posting, I try to estimate the relative observing efficiency for telescopes at the South Pole vs. in Chile. It is hard to get a clean answer to this question because every experiment has its own unique circumstances and there are a limited number of data points to examine.

The method I will use here is to compare a survey weight (units of μK^-2) calculated from published BB bandpowers to a survey weight calculated from instantaneous sensitivity and observing time. Note that survey weight is the quantity that should scale linearly with effort, so the survey weight at 150 GHz for the BK14 paper is equal to the BICEP2 2010--2012 survey weight plus the Keck Array 150 GHz survey weight for 2012--2014.

  • The "bandpower weight" is easier to define unambiguously -- in a previous posting I calculated the N and effective fsky for many different experiments that have published BB results. From these results, I calculate the bandpower weight as fsky / N.
  • The "tod weight" is calculated from instantaneous sensitivity (NEQ) of the full experiment and observing time (τ) as τ / NEQ2. While this definition is quite simple, there are many possible choices for how to select τ and it can be difficult to do this in a consistent way across experiments.

The idea behind these statistics is that tod weight describes the experiment on paper, i.e. "I will put together an array of detectors with NEQ = 15 μK s^1/2 and then observe for three years". The bandpower weight describes the results that were actually obtained, including data cuts, instrument downtime, filtering, inefficiencies in sky coverage, etc. Note, however, that I am using the actual array NEQ as reported in published results to calculate tod weight, so detector yield, noisy detectors, and increased NEQ from marginal weather all get baked into the tod weight to some extent and should not lead to a discrepancy between the two statistics.

Figure 1 is a plot of the ratio of tod weight to bandpower weight for BICEP/Keck, ACTpol, ABS, and QUIET. I didn't include SPTpol or POLARBEAR because I couldn't find array NEQ numbers for those instruments. Points are color-coded according to observing band (red for 95 GHz, green for 150 GHz, and blue for 220 GHz). A larger value of the weight ratio (y-axis) means that the statistical power of the bandpower result fell short of what we might expect from the instrument sensitivity and time on sky. The tod weight is ~100 times larger than the bandpower weight for most experiments. While we all know that there are many significant factors that cause observing efficiency to be less than a naive calculation would indicate, I haven't spent any time thinking about whether there are any factors of order ~10 that would be needed to make these two statistics comparable -- I wouldn't recommend reading much into the absolute scale of the y-axis, but it would be interesting to cross-check with an ab initio sensitivity calculator such as BoloCalc.

For most experiments I include two points (connected by a line) that make different choices for how to define observing time. The upper point uses a strict definition that calculates τ as the number of seconds between when the experiment first started observing and when it completed. For the lower point, I tried to count only the stretches of time that were spent in standard observing mode, i.e. excluding downtime for maintenance / upgrades. For ABS and BICEP2 150 GHz, I added a third, unfilled point that counts only the observing time after data cuts.
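
As a concrete example of how much the choice of τ matters, the sketch below computes the strict and standard-observing values for BICEP2 from the date ranges quoted in the details section; the datetime arithmetic is my reconstruction of those numbers.

```python
from datetime import date

# Strict definition: nominal three-year campaign.
tau_strict_days = 3 * 365

# Standard observing: 2010-02-15 to 2012-11-06, minus the 2011-01-01 to
# 2011-03-01 stretch spent on deployment / calibration.
tau_standard_days = ((date(2012, 11, 6) - date(2010, 2, 15))
                     - (date(2011, 3, 1) - date(2011, 1, 1))).days

print(tau_strict_days, tau_standard_days)  # 1095 936
print(f"ratio of the two tod weights: {tau_strict_days / tau_standard_days:.2f}")  # 1.17
```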

Figure 1: survey weight ratio vs bandpower-derived survey weight

Details for figure inputs

  • BICEP/Keck 150 GHz includes points from BK-I 2014, BK-V 2015 (same dataset used for BKP joint analysis), BK-VI 2016 (BK14), and the upcoming BK15 results.
    • For BICEP2, I used an array NEQ of 17 μK s^1/2 with τ = 3 years. For the lower point, τ is reduced to 936 days (2010-02-15 to 2012-11-06, except for 2011-01-01 to 2011-03-01) to remove time spent on deployment and calibration campaigns. The unfilled circle counts BICEP2 data after cuts. From Table 7 of BK-II, the data volume surviving cuts is 8.6e9 detector-seconds, and I used a typical detector sensitivity of 305.6 μK s^1/2 (from Fig. 22 of the same paper).
    • The BK-V result adds in Keck Array data from 2012 (11.5 μK s^1/2 for five receivers) and 2013 (9.5 μK s^1/2 for five receivers). These each have nominal τ = 1 year. For the lower points, I deducted time spent on deployment and calibration campaigns, ending up with 240 days in 2012 and 223 days in 2013.
    • The BK14 result adds in Keck Array data from 2014 (13.3 μK s^1/2 for three receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
    • The BK15 result adds in Keck Array data from 2015 (19.5 μK s^1/2 for one receiver). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
  • BICEP/Keck 95 GHz includes points from BK-VI 2016 (BK14) and the upcoming BK15 results.
    • The BK14 result uses 2014 Keck Array data (17.4 μK s^1/2 for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 240 days after deducting deployment / calibration.
    • The BK15 result adds in Keck Array data from 2015 (13.5 μK s^1/2 for two receivers). This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting deployment / calibration.
  • BICEP/Keck 220 GHz is from the upcoming BK15 results. Array NEQ is 41.6 μK s^1/2 for two receivers. This NEQ estimate is from an internal posting and is not included in the paper. τ = 1 year, or 242 days after deducting calibration.
  • ABS has an array NEQ of 41 μK s^1/2 and observed for 464 days (2012-09-13 to 2013-12-21). For the lower (filled) point, I used τ = 1634 + 209 + 1745 + 3135 = 6723 hours (Section 3 of Kusaka 2018). For the unfilled point, I used 461,237 TES-hours on Field A after cuts (bottom line of Table 3 of Kusaka 2018) with a per-TES sensitivity of 580 μK s^1/2.
  • QUIET 43 GHz has an array NEQ of 69 μK s^1/2 and observed for 232 days (2008-10-24 to 2009-06-13). For the lower point, I used τ = 3458 hours (Section 3 of QUIET 2011).
  • QUIET 95 GHz has an array NEQ of 87 μK s^1/2 and observed for 497 days (2009-08-12 to 2010-12-22). For the lower point, I used τ = 7426 hours (Section 3 of QUIET 2012).
  • ACTpol season 1 has an array NEQ of 19 μK s^1/2 and observed for 94 days (2013-09-11 to 2013-12-14). For the lower point, I multiplied τ by 63% to account for the fact that their analysis used only nighttime data for fields D1, D5, and D6 (Section 3.1 of Næss 2014).
  • For ACTpol season 2, I kept the season 1 accumulated tod weight and added an additional 133 days (2014-08-20 to 2014-12-31) with an array NEQ of 11.3 μK s^1/2 (inverse-quadrature sum of 23 and 12.9 μK s^1/2 for PA1 and PA2, respectively). For the lower point, the ACTpol season 2 observing time was scaled by a factor of 45% to account for D5 and D6 nighttime data only. It seems like Louis 2017 reanalyzes the season 1 data with somewhat different choices than Næss 2014, so this addition of weights might not be strictly accurate. (The inverse-quadrature combination and the season-by-season weight accumulation are spelled out in the sketch after this list.)
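
Two operations recur in the bullets above: combining per-receiver (or per-detector) sensitivities in inverse quadrature, and accumulating tod weight across seasons. The sketch below reproduces the ACTpol season 2 numbers and the BICEP2 after-cuts weight from the values quoted above; the only added ingredient is the day-to-second conversion.

```python
import numpy as np

def combined_neq(neqs):
    """Inverse-quadrature combination of individual NEQs (uK s^1/2)."""
    return 1.0 / np.sqrt(np.sum(1.0 / np.asarray(neqs, dtype=float)**2))

def tod_weight(array_neq, tau_days):
    """Survey weight in uK^-2 from array NEQ (uK s^1/2) and observing time (days)."""
    return tau_days * 86400 / array_neq**2

# ACTpol season 2 array NEQ from the PA1 and PA2 values quoted above.
print(f"{combined_neq([23.0, 12.9]):.1f}")                    # 11.3 uK s^1/2

# Accumulated ACTpol weight: season 1 (94 days at 19 uK s^1/2) plus
# season 2 (133 days at the combined 11.3 uK s^1/2) -- "tod weight A".
print(f"{tod_weight(19.0, 94) + tod_weight(11.3, 133):.0f}")  # 112490 uK^-2

# BICEP2 after cuts: detector-seconds surviving cuts over per-detector NEQ^2.
print(f"{8.6e9 / 305.6**2:.0f}")                              # 92086 uK^-2 ("tod weight C")
```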

Conclusions

Tabulated results

The table below compiles all the numbers used for Figure 1. All survey weights are in μK^-2.

  • tod weight A uses the full duration of the experiment (upper end of the line)
  • tod weight B uses the duration of standard operations (lower end of the line)
  • tod weight C uses the data volume after cuts (unfilled symbols, for BICEP2 and ABS only)
Experiment | Frequency | bandpower weight | tod weight A | ratio A | tod weight B | ratio B | tod weight C | ratio C
BK14 | 95 GHz | 1478 | 104162 | 70 | 68490 | 46 | -- | --
BK15 | 95 GHz | 3501 | 276178 | 79 | 182539 | 52 | -- | --
BICEP2 | 150 GHz | 2905 | 327363 | 113 | 279828 | 96 | 92086 | 32
BK-V | 150 GHz | 6245 | 915250 | 147 | 650109 | 104 | -- | --
BK14 | 150 GHz | 7514 | 1093530 | 146 | 767335 | 102 | -- | --
BK15 | 150 GHz | 8638 | 1176380 | 136 | 822265 | 95 | -- | --
BK15 | 220 GHz | 101 | 18249 | 181 | 12100 | 120 | -- | --
ACTpol (year 1) | 150 GHz | 42 | 22498 | 539 | 14173 | 340 | -- | --
ACTpol (year 2) | 150 GHz | 197 | 112490 | 571 | 54670 | 278 | -- | --
QUIET | 43 GHz | 35 | 4210 | 120 | 2615 | 75 | -- | --
QUIET | 95 GHz | 33 | 5673 | 171 | 3532 | 106 | -- | --
ABS | 150 GHz | 77 | 23849 | 310 | 14398 | 187 | 4936 | 64
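
As a cross-check on the table, the ABS row can be reproduced from the numbers quoted in the details section; the only ingredients beyond those numbers are the day-to-second and hour-to-second conversions.

```python
neq_array = 41.0   # ABS array NEQ, uK s^1/2
neq_tes = 580.0    # per-TES sensitivity after cuts, uK s^1/2

w_a = 464 * 86400 / neq_array**2     # full duration (464 days)
w_b = 6723 * 3600 / neq_array**2     # standard observing (6723 hours)
w_c = 461237 * 3600 / neq_tes**2     # 461,237 TES-hours surviving cuts

print(round(w_a), round(w_b), round(w_c))   # 23849 14398 4936
```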