Estimated observing efficiency for past and current telescopes, version 2

Colin Bischoff, Yuji Chinone, Tom Crawford, Matt Hasselfield, 2018-10-23


This posting is an update on a previous posting from 2018-09-25. The goal is to try to identify any factors that lead to different observing efficiency between Atacama and South Pole.

References

We assess results from recent CMB polarization experiments that have been published in the following papers:

Method

To investigate observing efficiency, we compare different measures of survey weight, which is defined with units of μK⁻² and accumulates linearly with detector count or integration time.

The “tod weight” is calculated from array sensitivity and integration time as

   tod_weight = τ / NEQ²
  • Note that we use here a convention for NEQ that corresponds to instantaneous sensitivity to whatever combination of Q and U is being measured. By this convention, most experiments should have NEQ that is similar to NET (up to minor factors of polarization efficiency), not a factor of sqrt(2) higher.
  • Since survey weight accumulates linearly, we can calculate the total 150 GHz BICEP2/Keck survey weight from the most recent publication as BICEP2 2010–2012 + Keck 2012–2013 + Keck 2014 + Keck 2015 (see the sketch after this list).
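
As a concrete illustration, here is a minimal Python sketch of the tod weight calculation and its linear accumulation over seasons; the NEQ values and integration times in it are hypothetical placeholders, not numbers from any of the publications.

    # Minimal sketch of the tod weight calculation; all numbers are made up.
    def tod_weight(neq, tau):
        """Survey weight in uK^-2 from array NEQ (uK sqrt(s)) and integration time tau (s)."""
        return tau / neq ** 2

    # Survey weight accumulates linearly, so a multi-season total is a plain sum,
    # e.g. BICEP2 2010-2012 + Keck 2012-2013 + Keck 2014 + Keck 2015.
    seasons = [
        (10.0, 2.5e7),  # (array NEQ in uK sqrt(s), integration time in s) -- hypothetical
        (12.0, 3.0e7),
        (11.0, 2.8e7),
    ]
    total = sum(tod_weight(neq, tau) for neq, tau in seasons)
    print(f"total tod weight = {total:.3e} uK^-2")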

The “bandpower weight” is calculated from N_ℓ and the effective fsky of the BB bandpowers as

   bandpower_weight = 2 * 4π * fsky / N_ℓ
  • The factor of 2 counts both EE and BB survey weight and is needed to match the NEQ convention discussed above.
  • N_ℓ and the effective fsky are estimated from the error bars of published bandpowers. This is discussed in more detail in a 2018-08-10 posting (http://bicep.rc.fas.harvard.edu/CMB-S4/analysis_logbook/20180810_noise/). The bandpower error bars constrain N_ℓ / sqrt(fsky), so for experiments where we weren't able to reliably break this degeneracy it is possible to change the bandpower weight while keeping the bandpower errors fixed. This is noted explicitly on the figure for the case of ABS: we made a rough estimate of fsky = 3% but added dotted lines that range from 1.5% (lower left end) to 6% (upper right end). This issue affects the QUIET points too, but we haven't added dotted lines there (see the sketch after this list).
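
To make the degeneracy explicit, here is a small Python sketch using a noise-only Knox-style bandpower error; the multipole range, N_ℓ, and fsky values are made up for illustration.

    import numpy as np

    def bandpower_weight(fsky, n_ell):
        # Factor of 2 counts both EE and BB, matching the NEQ convention above.
        return 2 * 4 * np.pi * fsky / n_ell

    def knox_sigma(n_ell, fsky, ell=100, delta_ell=35):
        # Noise-dominated bandpower error: sigma ~ sqrt(2 / ((2l+1) dl fsky)) * N_l,
        # so the error bar only constrains the combination N_l / sqrt(fsky).
        return np.sqrt(2.0 / ((2 * ell + 1) * delta_ell * fsky)) * n_ell

    sigma_target = knox_sigma(n_ell=1e-4, fsky=0.03)  # hypothetical numbers
    for fsky in (0.015, 0.03, 0.06):  # the range spanned by the ABS dotted lines
        n_ell = sigma_target / knox_sigma(1.0, fsky)  # rescale N_l to hold sigma fixed
        print(fsky, bandpower_weight(fsky, n_ell))  # weight grows as sqrt(fsky)

With the error bars held fixed, the recovered bandpower weight scales as sqrt(fsky), which is why the ABS dotted lines run from the lower left (fsky = 1.5%) to the upper right (fsky = 6%).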

The bandpower weight represents final results after all efficiency hits while the tod weight represents accumulated sensitivity. If we fully understand all of the various efficiency factors, then we should be able to get these numbers to agree.

Results

Figure 1 below shows the bandpower / tod weight ratio vs bandpower weight, color-coded by observing frequency. For each experiment, there are two or three points spaced vertically. These all represent the same bandpowers, but count total observing time differently and arrive at different tod weights; a schematic sketch of how these tiers are constructed follows the list below.

Figure 1: survey weight ratio vs bandpower-derived survey weight
  • The bottom points use the most expansive definition of observing time, which is just the duration between when the experiment started operating and when it finished.
    • For BICEP2/Keck, this is just calendar years. BICEP2 ran for three years (2010 through 2012). The 2015 paper added two years of Keck Array (2012, 2013). The 2016 paper added Keck data from 2014. The 2018 paper added Keck data from 2015.
    • For ACTpol, the 2014 paper was based on just 94 days of operation (2013-09-11 through 2013-12-14). The 2017 paper included an additional 133 days (2014-08-20 through 2014-12-31).
    • For QUIET, the 2011 paper was based on 232 days of operation (2008-10-24 through 2009-06-13). The 2012 paper was based on 497 days of operation (2009-08-12 through 2010-12-22).
    • For ABS, the 2018 paper was based on 464 days of operation (2012-09-13 through 2013-12-21).
    • For POLARBEAR, the 2017 paper was based on 668 days of operation. Season 1 was May 2012 through June 2013 and season 2 was September 2013 through April 2014.
    • For SPTpol, the 2015 paper was based on 395 days of operation (April 2012 through April 2013).
  • The upper filled points attempt to count just the days spent on normal operations.
    • For BICEP2/Keck and SPT, we drop the austral summer deployment season that is typically spent on instrument repairs, upgrades, and calibration. BICEP2 is an exception, as it operated more or less continuously from 2010-02-15 to 2012-11-06, except for a campaign of calibrations from 2011-01-01 to 2011-03-01.
    • For ACTpol, we kept only 63% (45%) of the observing time from season 1 (2) to account for night-time-only observations.
    • For QUIET, the time spent on observations is 3458 hours for 43 GHz and 7426 hours for 95 GHz, taken from the text of the respective papers. These durations include calibrations and Galactic field data, excluding only blocks of downtime due to “occasional snow, power outages, and mechanical failures”.
    • For ABS, the time spent on observations is 6723 hours, taken from the text of the paper.
    • For POLARBEAR, the time spent on observations is 4700 hours, taken from the text of the paper.
  • In some cases, there is an additional unfilled point that counts only the remaining time after cuts.
    • For BICEP2, we used 8.6e9 detector-seconds from Table 7 of BK-II and replaced the array sensitivity with per-detector sensitivity.
    • For ABS, we used 461,237 TES-hours from Table 3 of ABS 2018 and replaced the array sensitivity with per-detector sensitivity.
    • For QUIET, we counted only hours targeting the CMB fields and used 69.4% cut efficiency at 43 GHz (Table 3 of QUIET 2011, pipeline A) and 63.5% cut efficiency at 95 GHz (Table 1 of QUIET 2012, PCL pipeline).
    • For POLARBEAR, we used 1400 hours of data after cuts. POLARBEAR 2017 lists 2800 hours of data passing cuts, but 50% of that data is lost when scan turnarounds are cut.
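
To make the construction of these tiers concrete, here is a schematic Python sketch; the bandpower weight, NEQ, and after-cuts time below are invented, while the calendar and observing durations are the ABS values quoted above.

    HOURS = 3600.0

    bp_weight = 1.0e5   # bandpower-derived survey weight in uK^-2 (made up)
    neq_array = 12.0    # array NEQ in uK sqrt(s) (made up)

    tau = {
        "calendar":   464 * 24 * HOURS,  # span of operations (ABS, from the text)
        "observing":  6723 * HOURS,      # time in normal science observations (ABS)
        "after_cuts": 4400 * HOURS,      # data surviving quality cuts (made up)
    }

    # Each tier gives one of the vertically spaced points in Figure 1.
    for label, t in tau.items():
        ratio = bp_weight / (t / neq_array ** 2)
        print(f"{label:>10s}: bandpower / tod weight ratio = {ratio:.2f}")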

Discussion

Our interpretation of this figure is as follows:

  1. If we take the ratio of the two points connected by the line, this is telling us what fraction of the total number of calendar days was spent in standard science observing mode. This ratio is fairly similar for most experiments (60–65%), but it is notably high for BICEP2 (85%) and low for POLARBEAR (29%). This loss of observing efficiency is not primarily site-specific, with much of the downtime due to repairs and upgrades. However, this factor probably does include some downtime due to snowstorms in Atacama, while at Pole the summer deployment season conveniently overlaps with the worst observing conditions.
  2. The next ratio is between the upper filled point and the unfilled point. This represents the loss of observation time due to data cuts (and scheduling). It is quite similar for BICEP2 (33%), ABS (34%), and POLARBEAR (30%). QUIET comes out a bit higher at 43 GHz (54%) and 95 GHz (46%), but it seems reasonable that we would cut less data at low frequencies.
  3. Finally, we can compare the unfilled points, which should include all the efficiency hits from downtime and data cuts, against 1, the value that we ought to obtain if all factors are accounted for. We don't have a good explanation for this factor. Potential explanations are that the assumed array sensitivities are a bit too good or that apodized / non-uniform map coverage is inefficient in some way. The values of this factor are 79% for BICEP2, 60% for POLARBEAR, 39% for ABS (or 55%, if we assume ABS fsky = 0.06), and 63% (52%) for QUIET 43 (95) GHz (but remember that QUIET could be suffering from similar fsky uncertainty as ABS). A quick check of how these factors combine appears after this list.
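
Since these three factors are successive ratios along the chain from the calendar-time tod weight down to the bandpower weight, their product is just the bottom-point ratio, i.e. the overall efficiency relative to calendar time. A quick arithmetic check using the BICEP2 values quoted above:

    # BICEP2 factors quoted in the discussion above.
    f1 = 0.85  # fraction of calendar time spent in science observing mode
    f2 = 0.33  # fraction of observing time surviving data cuts (and scheduling)
    f3 = 0.79  # residual factor (unfilled point compared to 1)
    print(f"overall efficiency vs calendar time = {f1 * f2 * f3:.2f}")  # ~0.22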

Of the three factors listed above, only #2 is clearly site-specific, yet it seems to be quite similar between BICEP/Keck, ABS, and POLARBEAR. This survey is obviously imprecise, but it provides some evidence that observing efficiency is not very different between Atacama and Pole, all else being equal. It would be an invaluable exercise if people with deep knowledge of each experiment (and access to data) could produce detailed breakdowns of the factors needed to get agreement between the accumulated tod survey weight and the bandpower-derived version.