Reviewer #1: General comments
-----------------
This paper considers, at a conceptual level, a number of different
ways in which the expansion history of the Universe could be measured,
and discusses a number of observational techniques that could be
employed in measuring the so-called "redshift drift".
The discovery that the expansion of the Universe is accelerating has
prompted many investigations into various methods of measuring the
expansion history. One relatively exotic method is to measure the
so-called redshift drift, i.e. the change of the redshifts of
cosmologically distant sources as a function of time. This is a tiny
effect and hence a very difficult measurement, which is beyond current
facilities.
In the present paper the authors present a number of conceptual
variants of the redshift drift, which at least in principle may be
easier to implement, and discuss observational techniques with which
the observational challenges of a redshift drift measurement might be
overcome. In particular this second part highlights some interesting
ideas, and I think these would definitely represent quite a useful
contribution to the literature. Hence I recommend publication of this
paper. However, I have a fairly large number of comments which I would
ask the authors to consider before the paper is published. Apologies
for the length of this report and its lateness.
Detailed comments
-----------------
Section 1
- page 1, right column, lines 43-47
"So even if..."
This sentence sounds (at least to me) as if there were some
fundamental reason why the redshift drift method cannot possibly
constrain cosmology as well as distances or growth of structure. I am
not aware of any such reason. Given enough S/N and/or patience the
constraints from the redshift drift can be made arbitrarily tight. But
perhaps I am misinterpreting this sentence?
**
There is no fundamental reason that redshift drift is less
constraining than other probes. Achieving the accuracy needed is
practically challenging so we don't know how well it will actually
work. We have reworded the text as follows:
"This directness means that even if such a cosmic probe cannot
practically reach the accuracy on dynamical cosmological-model
parameters achieved by the more established distance or growth of
structure probes, it is worthwhile exploring possibilities for
carrying it out."
- p 1, r, 54
"fantastically precise"
This sentence evokes the notion that a "classical" redshift drift
experiment requires hugely precise measurements of the redshifts of
individual objects or spectral features. That is not necessarily
correct. Using the Ly-alpha forest (as suggested by Loeb 1998) does
not require extremely high precision in the redshift measurement of
individual absorption lines. The extremely high *overall* precision of
the experiment is achieved by (in a sense) summing over many lines,
each of which is measured at much lower precision. In fact, a detailed
(although unpublished) analysis of this version of the redshift drift
experiment has revealed that one only needs a radial velocity
precision of ~70 cm/s, which is not outrageously difficult to achieve,
although with a long-term stability of ~1 cm/s, which is indeed
somewhat harder to achieve. So, at least for this version of the
experiment we do indeed need long-term stability, but not incredibly
high precision.
**
We have replaced the word "precise" with "accurate", which more
accurately conveys our intended meaning.
Section 2
- p 2, l, 60
"We emphasize..." - end of paragraph
I found this an imprecise description. First of all, a non-zero value
of dz/dt does *not* directly indicate acceleration, only a *positive*
value indicates acceleration. Secondly, I don't understand the
reference to the "specific functional dependence of a(t)". The point
is that a measurement of dz/dt at various redshifts allows one to
directly reconstruct a(t) without the assumption of a cosmological
model (or even a theory of gravity): dz/dt(a) provides
da/dt(z). Together with a(z) one can thus reconstruct a(t).
**
As mentioned at the beginning of this paragraph, acceleration as a
physics word means positive or negative second time
derivative. Regarding the functional dependence, the point in this
paragraph is not that the measurement of dz/dt at various redshifts
allows one to directly reconstruct a(t) without the assumption of a
cosmological model. Rather, it is that a non-zero dz/dt indicates
acceleration without having to make assumptions about a(t).
The text has been rewritten as
"its nonzero value at any redshift directly indicates, with no further
assumptions about $a(t)$, that the value of $\dot{a}$ differs at two
different times and hence that there was an acceleration (positive or
negative)."
- p 2, r, para beginning line 21
"At high redshift..."
Again this description is imprecise. dz/dt does not change sign at the
redshift at which DE begins to dominate over matter. The former occurs
at z ~ 2, the latter at z ~ 0.7. Whether an emitter's dz/dt is
positive or negative depends on whether the universe is mostly
decelerating or accelerating during the photon travel time from the
emitter to the observer. That's why the change of sign in dz/dt occurs
at much higher redshift than the Omega_L-Omega_M cross-over (see
e.g. Gudmundsson & Bjoernsson 2002).
**
We do not mean to claim dz/dt changes sign when DE begins to dominate. We
clarify this by rephrasing: "When the universe begins to accelerate (speed
up) under the influence of dark energy in recent times, however, then the
drift will be positive (see Eq. 2 for the exact condition for the change of
sign)."
- p 2, r, para beginning line 32
It might be worth pointing out explicitly that a flat cosmology is
being assumed here.
**
We rephrase to "The results assume a flat universe with the fiducial..."
- Fig. 1 has no y-axis label. Suggestion: dzdot/dp
**
Fig. 1 curves show different quantities, e.g. zdot as well as
dzdot/dw0. We therefore labeled the curves individually for clarity.
- In Fig.1 I would label the lines slightly differently:
dzdot/dw_a --> dzdot/dw_a/H_0
same for w_0 and Omega_M
zdot --> dzdot/dH_0
**
We state in both the text and caption that "All quantities are in units
of H_0".
- p 2, r, line 42
"The sensitivity curves..."
It is a little difficult to verify this statement because Fig. 1 does
not show dzdot/dOmega_m beyond z=0.3.
**
We have modified the plot to show dzdot/dOmega_m scaled by a factor of
20 so that it can be viewed over the full redshift range, and note so
in the text.
- p 2, r, 54
"especially if they have better S/N"
This statement sort of implies that this would indeed be the case, but
it is far from clear whether this is true.
**
In the next paragraph we justify it by showing that higher redshift
sources (of the same luminosity) become fainter more rapidly, and so if
we are photon noise limited we expect low redshift sources to be better.
This is not a proof, as the referee says, and we rewrite
"especially if, being closer and thus appearing brighter, they are
more easily observed to high signal-to-noise"
- p 3, l, 36
"In the precision..."
This is a very naive analysis as it ignores luminosity and number
density evolution, as well as k-corrections.
**
This is meant to be a simple first-order argument for scaling
relations at low redshift. We carefully use the word "if" and the
phrase "may give". We are giving the reader initial motivation to
consider low redshift, and then we present robust quantification in
Figures 2 and 3. We add the text "To first order,".
- p 3
The authors use an entire page (plus Figs.1-3) to convince their
readers that a zdot measurement should be performed at low redshift
rather than at high redshift. They do this apparently in order to
justify their investigation of using low-redshift emission line
galaxies as targets in Section 4. I found this not hugely convincing
and in any case a little misguided.
**
In fact this occurred in the opposite sense. We discovered that low
redshift could be powerful, and then looked for what sources would be suitable.
It is a bonus that well studied low redshift galaxies are possibilities.
So Figures 1-3 really did lead the way.
We also make explicit "Optimization of survey redshift range, taking
into account observation time constraints, is beyond the scope of this
article, and is left for future work." We do show, however, that low redshift
observations can be very interesting to consider further: "From a theoretical
sensitivity standpoint, without attempting a detailed observational strategy,
we can quantify the redshift sensitivity. We carry out Fisher analysis using
the sensitivities in Fig. 1..."
First of all, I did not find this part very convincing because their
analysis leading to Figs. 2 and 3 is incomplete:
1. They assume that their zdot measurements are distributed over a
range in redshift of 0.4. Why this value? Why not not 2? Why not make
this a free parameter? My point is that the best constraints will come
from a combination of low and high redshift measurements (as
acknowledged by the authors at line 40, right column).
**
We have removed the misleading phrase “redshift range”; what Fig. 1 shows
is sensitivity at a given redshift. Then, since to fit 4 free cosmological
parameters one needs at least 4 measurements at slightly different redshifts, we
spread them by 0.4 so as to keep them as localized as possible. If we had spread
them over a redshift range of 2, then one could not interpret the leverage
as being from a particular redshift. One should simply think of them as
measurements in a narrow band around a redshift z. We state this as
"the multiple measurements slightly spread in redshift are needed to allow
fits of multiple cosmology parameters, while still focusing the sensitivity
at a particular redshift z; we use z-0.2, z-0.1, z, z+0.1, z+0.2, while
see below for consideration of combining measurements at significantly
different redshifts."
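To make the mechanics of that setup concrete, the sketch below (our own,
with assumed fiducial values and an assumed uncorrelated uncertainty on
zdot/H_0, and with the H_0 normalization absorbed into the units so only
three parameters appear) builds a Fisher matrix from five measurements at
z-0.2, ..., z+0.2.

import numpy as np

def zdot(z, p):
    """Drift in units of H_0 for a flat w0-wa model; p = (Om, w0, wa)."""
    Om, w0, wa = p
    a = 1.0 / (1.0 + z)
    rho_de = (1.0 - Om) * a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return (1.0 + z) - np.sqrt(Om * (1.0 + z)**3 + rho_de)

def fisher(zc, sigma=0.01, p0=(0.3, -1.0, 0.0), eps=1e-4):
    zs = zc + np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
    derivs = []
    for i in range(len(p0)):                 # two-sided numerical derivatives
        pp, pm = np.array(p0, float), np.array(p0, float)
        pp[i] += eps
        pm[i] -= eps
        derivs.append((zdot(zs, pp) - zdot(zs, pm)) / (2.0 * eps))
    D = np.vstack(derivs)                    # shape (n_param, n_z)
    return D @ D.T / sigma**2                # Fisher matrix for uncorrelated errors

F = fisher(zc=0.3)
# marginalized 1-sigma estimates; without an external prior (e.g. CMB) the
# parameters are strongly degenerate, as discussed in the text
print(np.sqrt(np.diag(np.linalg.inv(F))))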
2. The analysis takes no account of the observational effort required
to achieve a certain zdot measurement. Essentially, the authors are
answering the question "What constraints on cosmological parameters
can we achieve with a zdot measurement?". The much more relevant
question is, however, what constraints can be achieved *for a fixed
amount of observing time* (or money or some other metric of
effort/resources). (This is of course a much more difficult analysis,
in particular when covering a wide redshift range, as it presumably
would involve multiple different techniques.)
**
As we emphasize throughout the article, we are pointing out interesting
new aspects of this probe. First we need to know the redshift sensitivity,
then we identify possible sources, then we calculate the observing time
needed, in successive sections of the article.
Secondly, while reading this part I could not help but thinking that
the authors were barking up the wrong tree. Accelerated expansion is
of sufficient interest, and the redshift drift method of sufficient
immaturity that such a detailed motivation for exploring ways of
performing a low-z zdot measurement is not required.
**
Redshift drift has previously been thought of as a high redshift
measurement, so the motivation of low redshift is worth discussing in
some detail. Figures 2 and 3 quantify the leverage of a given redshift;
they could have shown the initial motivation for low redshift measurements
was wrong, but instead showed how much advantage it can bring, if practical.
We also show that to combine with low redshift zdot, CMB gives better
leverage (and is "for free") than high redshift zdot.
If I had written this part, I would have structured it around the
following points:
1. Demonstrate that *in principle* low-z measurements have some
desirable features (reasonable sensitivity to DE parameters (Fig. 1),
orientation of error ellipse (Fig. 3)).
2. Make it clear that that covering a very wide range of redshifts
would likely provide the best leverage on cosmological parameters, and
that in any case one would want to use different techniques (different
systematics).
3. Acknowledge that the immaturity of the field does not yet allow a
proper cost-benefit analysis, i.e. an analysis of how to best
distribute resources among different techniques, redshift ranges, etc.
**
The changes mentioned above to the text address each of these points.
- p 5 left column
The entire part on astrophysical systematics is quite weak. This part
does not add anything new at all. First, the authors just repeat
equation (4) from Linder (2010) (without properly explaining all of
the quantities), and just point to that paper for an explanation. The
discussion of peculiar accelerations is also not very illuminating.
The authors completely ignore all of the previous literature on this
subject (e.g. Lake (1982), Phillipps (1982), Amendola et al. (2008),
Liske et al. (2008), Uzan et al. (2008)). Furthermore, I did not
understand the paragraph following equation (5). I am not aware of
anyone ever having suggested using AGN as targets for a redshift drift
measurement, and I do not understand why peculiar accelerations should
not be "interpreted statistically", or what they mean by "it" in the
last sentence of this paragraph. Finally, the very last paragraph of
this section does not state anything beyond the obvious (and manages
to do this in a confusing way).
**
We have added more explanation in the text, and more references.
All variables are currently defined, with the clarifying rephrasing:
"potentials, with a subscript e denoting emitter and o denoting observer."
Eq. 4 puts all the terms together, whereas people have often only considered
subsets. We clarify that by treating peculiar accelerations statistically,
we mean through the power spectrum of density perturbations, which averages
over low and high density regions, while Eq. 5 makes clear that one
cares about an individual system, i.e. that type of object may be a high
fluctuation. For example the density perturbations
are linear on average but we often care about nonlinear systems. The last
paragraph reminds the reader of the level of challenge faced; it is useful
to show these numbers explicitly. We have rephrased: "separated sources
with statistically independent peculiar accelerations. To be explicit,
we recall that the precision required..."
Section 3
**
We have emphasized explicitly, starting in the title of the overall
section, that these are speculations that may stimulate further thought
in the reader. In seminars we have given based on this paper,
this section has given rise to considerable discussion and interactions,
with much interesting brainstorming. We view this as valuable, though
none of the techniques are "ready for prime time".
- Section 3.1
I am sure I have missed something important but I am completely
baffled by the entire Section 3.1. It is already very well known that
the expansion history (i.e. H) can be measured using radial BAO. This
is not new. This method has nothing in common with the redshift drift
in terms of technique. There is not even a qualitative discussion of
the pros and cons of radial BAO vs redshift drift. I simply do not
understand what we are supposed to learn from this section or why it
is even here. I would advocate its removal.
**
We rewrite the introductory paragraph to show the logical
association of redshift drift and Hubble constant drift and
acceleration.
The Hubble drift gives substantially related information to redshift
drift, but comes "for free" with spectroscopic galaxy clustering surveys
and so is important to emphasize. Differences and pros/cons are discussed
in the text and explicitly shown in Table 1. To clarify the distinction
and indicate we are making a comparison rather than inventing a new probe,
we rephrase the introductory paragraph of the subsection and explicitly
state:
"Here we briefly compare this Hubble drift from the
familiar concept of radial baryon acoustic oscillations
with redshift drift as discussed in the rest of this article."
- Section 3.2
This section I found much more relevant, but unfortunately again not
very illuminating. The idea of using the drift of a pulsar's period in
itself is not new (although perhaps not widely known in this context).
I would have expected to read about the latest estimates of our
prospects to detect cosmological pulsars (SKA). Instead the authors
superficially discuss even more remote possibilities like using
gravitational waves. Also, I found the reference to Thornton et
al. (2013) misplaced. This paper deals with radio *transients*, not
periodic sources. I would advocate to either add more content to this
section or else to remove it.
**
As stated, we believe that these concepts are useful for stimulating
further, perhaps more practical ideas. There is considerable excitement in the
gravitational wave community about testing fundamental physics, as indicated
in the Yunes et al paper. The Thornton et al reference is in a parenthetical
"but", and we clarify the text and emphasize the possibility of new
discoveries by “but see Thornton+ 2013; while this is for a transient,
not periodic, source, an exciting prospect is that upcoming time domain surveys
such as LSST or SKA may find new classes of sources that could be used”.
- Section 3.3
Again this section is very relevant, but again poorly executed.
1. The authors fail to explain clearly the idea of this experiment
(i.e. to simultaneously measure the redshifts of the images of a
strongly gravitationally lensed source) and what is being measured
(dz/dt = -H(z) which is considerably larger than in the "normal" case).
2. The authors fail to explain clearly the implementation they have in
mind. They mention quasars, then Lyman alpha absorption, and at the
end of this section they again refer to the Ly-a forest in quasar
spectra, but it would be nice to explain clearly what they have in
mind near the start of this section.
3. The authors fail to properly discuss the pros and cons with respect
to the "classical" redshift measurement. The pros are that the signal
is larger and that (at least superficially) one might think that it is
easier to reach a certain precision in delta_z when measuring it
simultaneously on two images, as opposed to on the same image
separated by several years. The downside is of course that delta_t is
fixed and cannot be arbitrarily extended (i.e. one cannot just wait
for the S/N of the measurement to improve, as in the "classical" case).
Furthermore, it is not at all clear why it should be that much easier
to measure delta_z from two widely separated positions on the detector
compared to from two widely separated positions in time. These two
things actually have a lot in common, but none of this is discussed.
**
This section has been significantly rewritten. There is no single
implementation that we have in mind, however: the implementation depends
on the geometry of the multiple image system and the type of source being
lensed. Ongoing and future time domain surveys should clarify the
possibilities.
- p 7, l, 26
"...and so the redshifts will be the same."
Given the context, this is a somewhat confusing statement. I
understand what the authors are trying to say, but it needs to be
clarified. After all, the whole point of this section is that the
different images of a strongly lensed source do *not* have the same
redshift.
**
This text has disappeared in the rewrite.
- p 7, l, 36
"...the required precision could be eased."
Compared to what?
The point is that lenses may be able to give us a similar time delay
as what is being considered for the "classical" case (i.e. a decade)
but that the systematics may be easier to beat (but see my
reservations about that above).
**
This text has disappeared in the rewrite.
- p 7, l, para beginning line 43
When discussing time delays, it might be useful to know what kind of
experiment the authors have in mind. For example, the time delay that
is measured from the photometric time variability of a quasar is not
the time delay that applies to a Lyman-alpha absorption system seen in
the foreground.
**
The new text includes the following
"The time delay for each gas cloud will not be directly measured, however they
can be (at least approximately) constrained using the delay
of the background quasar and the model geodesics of a lens in a
Robertson-Walker metric, Friedmann cosmology."
- p 7, r, 25
"Note that if..."
The same issue also affects the ability to simultaneously observe the
two images. For example, the 2-yr time delay lens of Fohlmeister et
al. (2013) has an image separation of 22 arcsec, which is already
challenging for a high-resolution spectrograph. For longer time delays
it may be impossible to simultaneously observe the two images. On the
other hand, this could be easily solved by having two telescopes feed
the same spectrograph (see the example of ESPRESSO in the incoherent
combined focus of the VLT).
**
Fiber-fed spectrographs on a single telescope can be (and are today)
used to obtain simultaneous observations of objects whose angular
separations are larger than a spectrograph’s instantaneous field of
view. The fibers are positioned on the sky to collect the flux of each
object and are arranged at the spectrograph’s entrance slit. Techniques
exist that mitigate fiber effects on the apparent wavelength. For
example, the HARPS instruments (Cosentino et al. 2012, SPIE Vol. 8446)
achieve velocity resolutions of tens of cm/s using four fibers
simultaneously feeding a spectrograph.
- p 7, r, 56
"...spectrum of image B should match..."
I disagree. The redshifts of these two spectra should differ by
(1+z)*H_0*delta_t.
**
The sentence begins by saying that the time delay is one year, so by
definition the spectra of the two images will have a 1-year lag.
- p 8, l, 31
"The N_line,q spectral lines in the quasar, perhaps O(10^3)..."
A quasar (not counting the intervening IGM lines) does not display of
order 1000 spectral lines. More like O(10). And many of these (the
broad ones) would not be particularly suited to a zdot measurement.
Also the IGM lines are more like ~200 in number than 1000.
**
Liske et al. cite 225 metal absorption lines for Q1101-264; we change the
text to use that number and cite Liske et al.
Section 4
- p 8, l, 50
The work of Davis & May (1978!!!) is *not* representative of current
high-precision work. First of all, one has to differentiate between
the radio and optical regimes. Secondly, if interested in the radio
regime, the authors may wish to consult Darling (2012).
**
We have replaced the reference and results with those of Darling (2012)
and specify this is a radio measurement. For redshift drift we are
interested in the most accurate redshift measurement independent of
wavelength.
- p 8, l, 54
"...and simultaneous differential measurements..."
First of all, I do not understand what this "guiding concept" has to
do with this section. All of the techniques explored in this section
still require multiple measurements taken at different times. The
increased redshift precision may reduce the time span between the
measurements, but certainly not to the point of "simultaneous"
measurements.
Secondly, even if the statement were relevant here, I'm not convinced
it's actually true (see above).
**
The text did not properly convey our intended meaning. It has been
modified to read:
"differential, rather than absolute wavelength measurements
to get redshift can be more robust."
- p 8, l, para beginning line 58
Nobody would seriously even consider a redshift drift measurement with
classical wavelength calibration methods (comparison spectra from arc
lamps, iodine cell). Technology has already moved on: a laser
frequency comb (LFC) provides a closely-spaced *absolute* wavelength
reference with almost arbitrary precision, see e.g. Murphy et
al. (2007), Steinmetz et al. (2008), Wilken et al. (2012).
**
We now refer to LFCs as the standard for accurate wavelength
calibrations and add the suggested references. We remove
some of the references to the older calibration methods.
Section 4.1
- p 8, r, 28
"...narrow bandwidth that spectrographs must span"
True, except that one would want to cover at least some redshift
range.
**
Objects at different redshifts may be observed with different
spectrographs or gratings. This is an implementation decision that
doesn't have to be specified for the purposes of this article.
- p 8, r para beginning line 37
This paragraph (and the preceding sentence) are slightly confusing
because two issues are intermingled. Yes, one can measure redshift
from the observed doublet separation which is therefore not sensitive
to line shape distortions. However, since we are only interested in
differential measurements, line profile uncertainties are not an issue
anyway (unless line profiles change on the timescale of a few years),
even when not using the doublet separation but the individual line
positions.
**
We reorganize the last sentence of the preceding paragraph and the
paragraph to distinguish discussion on redshift accuracy and redshift
drift accuracy.
- Table 2
The values for the velocity dispersions in Table 2 are frequently ~10
km/s and go as low as 1.43 km/s. May I remind the authors that SDSS
spectra have a resolution of ~2000, i.e. of ~150 km/s. I find it hard
to believe that an intrinsic line width of 1.43 km/s or even of 10
km/s can be reliably retrieved from these spectra, despite the high
S/N of the emission lines. At the very least this would require
accurate knowledge of the actual spectral resolution at the observed
wavelength of the emission line. Thomas et al. (2013) do not mention
any efforts in this direction but they may have just omitted this. In
any case, values of 1 or even 5 km/s are even unphysically low. Even
in the case of a face-on disk one would expect higher values.
What is puzzling about these values is that they have relatively small
errors (I checked in the SDSS DR10 database for a few cases). It may
be advisable for the authors to contact the Portsmouth group to obtain
advice on how to select a trustworthy sample of galaxies with narrow
emission lines.
In any case, I am highly suspicious of almost all of the velocity
dispersion values in Table 2. Consequently, I am also highly
suspicious of the *absolute* values in Table 3. The relative values
(i.e. the improvement in radial velocity precision afforded by the
alternative techniques relative to the "Conventional" one) are
probably ok.
**
Indeed some of the velocity dispersions are low. As an alternative
extractor of line velocities, we examined the SDSS DR7 catalog made by
the MPA-Garching group. Their catalog contains dispersions as low as
12 km/s.
We have asked Daniel Thomas (Portsmouth) specifically about the velocity
numbers and unreported systematic uncertainties but have not heard
back from him. We take the SDSS DR10 database results at face value
but do consider a range of line velocities as broad as Plate 4749
Fiber 757, which is safely non-controversial. As we note shortly, we
now mention the need for higher-resolution spectroscopy for additional
target screening.
Solar-type stars can have a rotationally-caused dispersion of about 8
km/s. In recognition of the sensitivity of the result to pointing and
unresolved spatial structure that contribute to the lines, we now
mention the benefit from an integral field unit, in part to resolve
different emission regions within the galaxy.
- p 9, l, 37
"...are difficult to access..."
Observations up to 15 micron are quite routine from ground-based
observatories. The authors probably mean to say that the NIR cannot be
accessed with the SDSS spectrographs.
**
In the text it is stated that it is not the NIR, but rather the [OIII]
features shifted into the NIR, that are difficult to access.
- p 9, l, 44
"Future surveys..."
Given the low spectral resolution of these future surveys, we will be
able to select target *candidates* from these surveys. However, their
suitability (i.e. the width of their emission lines) will have to be
confirmed at higher spectral resolution.
**
We have added a new paragraph:
SDSS3 (just as the aforementioned future surveys) has a spectrograph that does
not resolve lines as narrow as those quoted in Table~\ref{lines:tab}. Velocity dispersions come
from a model fit, and the uncertainties given in DR10 are underestimated for low dispersions (Thomas 2014).
Therefore, targets initially identified with $R\lesssim 2000$ spectrographs need subsequent screening with
integral-field-unit (for spatial resolution),
high spectral-resolution spectrographs: the resulting minimum dispersion may be expected to be higher
than that listed in Table~\ref{lines:tab}.
-p 9, l, 51
"...redshift precision scales..."
The authors have it backwards. The precision scales with the *inverse*
of the line width and with the square root of the line flux (as correctly
stated on p 10, lines 41 and 45).
**
We correct this mistake.
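For reference, a quick numerical check (purely illustrative numbers) of
the corrected scaling, i.e. that the photon-limited velocity uncertainty
of a Gaussian line grows with the line width and shrinks as the square
root of the line flux:

import numpy as np

def centroid_error_kms(sigma_v_line_kms, n_photons):
    """Photon-limited centroid error ~ line velocity width / sqrt(total line photons)."""
    return sigma_v_line_kms / np.sqrt(n_photons)

print(centroid_error_kms(10.0, 1.0e5))   # baseline
print(centroid_error_kms(20.0, 1.0e5))   # doubling the line width doubles the error
print(centroid_error_kms(10.0, 4.0e5))   # quadrupling the flux halves the error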
- p 9, equation 10
I am puzzled by why the authors allow different line widths for the
two lines of a doublet.
**
The wording has been made more precise. The equation is now
presented as the model for two nearby emission lines and we note that
the velocity dispersion is the same for the lines of a doublet.
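A hedged sketch of such a two-line model (illustrative values; the
default rest wavelengths are the [OIII] doublet, chosen only for
concreteness and not a restatement of the paper's Eq. 10):

import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def doublet_model(lam_obs, z, sigma_v, amps, lam_rest=(4958.91, 5006.84)):
    """Observed-frame flux of two nearby Gaussian emission lines sharing z and sigma_v."""
    flux = np.zeros_like(lam_obs, dtype=float)
    for amp, lr in zip(amps, lam_rest):
        center = lr * (1.0 + z)              # common redshift for both lines
        width = center * sigma_v / C_KMS     # common velocity dispersion
        flux += amp * np.exp(-0.5 * ((lam_obs - center) / width)**2)
    return flux

lam = np.linspace(5930.0, 6030.0, 2000)      # Angstrom, around z ~ 0.2
spec = doublet_model(lam, z=0.2, sigma_v=10.0, amps=(0.33, 1.0))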
- p 9, r, 44
"..throughput of 70%."
Considering that this is the total system throughput, including
atmosphere, telescope, entrance aperture losses, instrument and
detector, this is a very high number. Current high-resolution
spectrographs achieve ~20%. 30% would be great, but 70% is simply
unfeasible.
Also, I understand that the authors use the same throughput for all
spectrograph designs to highlight the improvements due to technique,
but they should at least mention that this assumption is of course
entirely unrealistic.
**
The referee is correct. We now use a 35% throughput and mention the
contribution of the interferometer. The text now reads:
"To allow direct comparison of the designs, all systems are assigned
the same total throughput of 35\%. The interferometer optics and
fringe visibility have a $\approx 75\%$ efficiency, so the Conventional
spectrograph will have a third better throughput relative to the other
systems."
-p 9, r, 46
"R = 20,000"
Do the authors mean 200,000? 20,000 corresponds to a velocity
resolution of 15 km/s. This would leave most of the lines in Table 2
unresolved (but see above), which would be a very strange thing to
do. Furthermore, at least for a conventional spectrograph one want to
work at something like R = 100,000 just for the purpose of wavelength
calibration, if nothing else.
**
First, we quote 2-pixel resolutions and make that precise in the
text. Figure 6 shows an example of how a line is sampled for the EDI.
Second, we now run the Conventional spectrograph at (2-pixel) R=50,000
but continue to use R=20,000 for the other cases.
The point of EDI and SHS is that they allow you to use a lower
resolution spectrograph (much cheaper and easier to make thermally
stable), such as R=20,000, that does not resolve the lines, to achieve
reasonably high Doppler precisions. It is not necessary to resolve
the lines by the dispersive spectrograph component to measure a
precise Doppler velocity, because it is the interferometer component
which measures the Doppler velocity. The sinusoidal interferometer
comb provides the steeply sloped PSF that responds sensitively to
changes in line position. The sinusoid period is chosen so that the
sinusoidal absorption valley fits the stellar linewidth. The PSF of
the net instrument (dispersive*interferometer) can be thought of as a
corrugated bell curve. The corrugations are provided by the
interferometer and are much steeper than the walls of the bell curve
which are provided by the low resolution dispersive spectrograph. The
steep slope of the corrugations provides the high sensitivity to
Doppler shift, not the bell curve of the low res dispersive
spectrograph.
The text is modified to read:
"The dispersion spectrometer used for Conventional spectrograph has a
two-pixel resolution $R=50{,}000$ whereas the EDI, and ED-SHS
instruments is taken to have resolution $R=20{,}000$. The effective
point-spread-function dominated by the pixel top-hat function. The
resolution of the Conventional spectrograph is chosen in order to
sample the features. EDI and ED-SHS allow the use of lower resolution
spectrographs (much cheaper and easier to make thermally stable)
because it is the interferometer component which measures the
velocity)."
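To illustrate the corrugated-bell-curve picture described above, the
rough sketch below (all numbers are our own illustrative assumptions, not
instrument parameters) forms the net PSF as a low-resolution Gaussian
envelope times a sinusoidal interferometer comb, and compares the slopes:

import numpy as np

lam0, R = 6000.0, 20000                          # assumed line center (A) and resolution
sigma_lsf = (lam0 / R) / 2.355                   # Gaussian sigma of the dispersive LSF
dlam = np.linspace(-4.0 * sigma_lsf, 4.0 * sigma_lsf, 2001)

envelope = np.exp(-0.5 * (dlam / sigma_lsf)**2)  # low-resolution "bell curve"
fringe_period = sigma_lsf                        # comb valley comparable to the linewidth
comb = 0.5 * (1.0 + np.cos(2.0 * np.pi * dlam / fringe_period))
net_psf = envelope * comb                        # corrugated bell curve

# the corrugation slopes, not the envelope walls, dominate the Doppler sensitivity
slope_ratio = (np.max(np.abs(np.gradient(net_psf, dlam))) /
               np.max(np.abs(np.gradient(envelope, dlam))))
print(f"corrugation/envelope slope ratio ~ {slope_ratio:.1f}")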
- p 9, r, 47
"...dominated by the pixel top-hat function."
I don't understand what this means. The key thing is whether the PSF
is Nyquist sampled or not.
**
We are noting that we include the contribution of the pixel to the
effective PSF.
- p 9, r, 48
If object photon noise is dominant error source then all of the
details described here (readout noise, dark current, no of exposures
background model, etc) are irrelevant and only serve to confuse. (Note
that a dark current of 2 e-/s is enormous. I presume the authors
meant 2 e-/h.)
**
We note that the code does include detector noise because it does
affect the lowest-precision digits in our tables. This affects those
trying to reproduce our results.
The text is modified to read
"For all cases considered the photon noise dominate uncertainties; we
include a detector read noise of $2e^-$, total integrations split into
2-hour exposures, and a dark current of $2e^-$\,s$^{-1}$, which affect
the calculated signal-to-noise at the least significant quoted digit."
Yes, we meant 2e-/h.
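For those reproducing the table digits, a minimal per-pixel noise-budget
sketch (the object rate is an assumed value; the read noise, dark
current, and exposure splitting follow the text above):

import math

def snr_per_pixel(object_rate_e_per_s, t_total_hr,
                  read_noise_e=2.0, dark_e_per_hr=2.0, exposure_hr=2.0):
    """Photon-noise-dominated S/N per pixel for an integration split into exposures."""
    n_exposures = math.ceil(t_total_hr / exposure_hr)
    signal = object_rate_e_per_s * t_total_hr * 3600.0
    variance = (signal                            # object shot noise
                + dark_e_per_hr * t_total_hr      # dark-current shot noise
                + n_exposures * read_noise_e**2)  # read noise added once per exposure
    return signal / math.sqrt(variance)

print(snr_per_pixel(object_rate_e_per_s=0.5, t_total_hr=10.0))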
- line 52
Why are blocking filters relevant?
**
Blocking filters are relevant for the SHS, where light from the full
bandpass becomes background noise.
- line 58
Under the assumptions used here it is irrelevant whether the target is
considered point-like etc. Any aperture entrance losses should be
included in the throughput defined above.
- line 59
"The effective PSF..."
I am baffled by this sentence. First of all, it makes no sense to me
to say that the "PSF Nyquist samples" the intrinsic line profile.
Secondly, given the line widths in Table 2 and R=20,000 the lines are
unresolved, so the intrinsic line widths are certainly not Nyquist
samples, in any sense.
**
Indeed the two lines mentioned don't make any sense and are removed.
- p 10, l, line 35
"When appropriate..."
Under the assumption of being object flux limited, it is completely
irrelevant over how many pixels the flux is distributed.
**
Our intended meaning is presented in the rewrite given in response to
the next comment.
- para beginning line 39
After having provided a large amount of detail in the previous
paragraph which appears to be entirely irrelevant to the numerical
experiment about to follow, the authors are then extremely concise
regarding the thing that actually matters, namely their derivation of
the numbers in Table 3. They simply refer to a Fisher matrix analysis
with no further explanation.
The entire section 4.2 needs some attention. I was not able to
understand what the authors have actually done to derive the numbers
in Table 3.
**
We rewrite the end of the second paragraph and the beginning of
the third to clarify what is done.
"Projected redshift precisions are calculated with a Fisher matrix
analyses with the source redshift $z$ as the only free parameter. The
equation for the predicted signal is give for each spectrograph type.
For conciseness, trivial behavior along the spatial axis are not given
explicitly in these equation. Measurement uncertainties come from
photon and detector noise."
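As a concrete (and purely illustrative) example of this procedure, the
sketch below computes the single-parameter Fisher estimate of sigma_z for
one assumed Gaussian emission line with photon plus read noise; the line
parameters, flux level, and noise values are assumptions, not the numbers
behind Table 3.

import numpy as np

C_KMS = 299792.458
LAM_REST, SIGMA_V_KMS, PEAK_E = 5006.84, 10.0, 1.0e4  # assumed [OIII]-like line

def model_counts(lam, z):
    """Photo-electron counts of a single redshifted Gaussian emission line."""
    center = LAM_REST * (1.0 + z)
    width = center * SIGMA_V_KMS / C_KMS
    return PEAK_E * np.exp(-0.5 * ((lam - center) / width)**2)

lam = np.linspace(6000.0, 6020.0, 500)      # observed wavelengths (A)
z0 = 0.2
counts = model_counts(lam, z0)
var = counts + 2.0**2                       # photon noise plus 2 e- read noise

eps = 1.0e-6                                # numerical derivative step in z
dSdz = (model_counts(lam, z0 + eps) - model_counts(lam, z0 - eps)) / (2.0 * eps)
sigma_z = 1.0 / np.sqrt(np.sum(dSdz**2 / var))   # F_zz = sum_i (dS_i/dz)^2 / var_i
print(f"sigma_z ~ {sigma_z:.2e} ({C_KMS * sigma_z / (1.0 + z0):.3f} km/s)")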
Section 4.3
- p 12, l, 10
"...every detector pixel."
This would be physically impossible unless the PSF were under-sampled,
which would make no sense. For a properly (i.e. Nyquist) sampled PSF
one can at most have one line per resolution element.
I note that this is precisely what a laser frequency comb delivers.
**
Yes, that it doesn't make sense is the point. In the text we add
laser frequency combs as another calibration source.
- p 12, l, lines 11-17
One can also record wavelength calibration lines next to the science
spectrum. In this case one has to interpolate neither over time nor
wavelength, but over detector space.
**
The text is modified to read:
"Otherwise, the arc must be observed spatially offset from the science
signal on the detector or in a different exposure. Therefore,
temporal, spatial, and/or wavelength interpolation is applied to
calibrate wavelengths."
- p 12, l, para beginning line 18
Flat-fielding is not the only issue. The background will also vary, as
will the scattered light, possibly even the source (at least when
quasars are used as targets). Quite generally the problem is that one
needs to extract the redshift drift signal in the presence of varying
additive and multiplicative factors.
**
Yes, the flatfield is just one of a slew of issues such as
scattered light, ghosts, non-uniform pixel sizes, etc. In the text we
replace "flatfield" with the generic "imager flux calibration".
Small movements of the source during observations due to imperfect
telescope tracking, and the associated varying illumination of the
slit, are also a problem, which can, however, be mitigated using light
scrambling devices (i.e. devices that have an output light
distribution that is independent of the input light distribution).
**
Yes, we now cite HARPS-N, an instrument that uses light scrambling to
measure precision wavelengths.
Section 4.4
Very interesting concept. However, as far as I can see, the main
advantage of an EDI is its capability to boost the resolution of a
conventional spectrograph. This is not really an issue in the context
of the redshift drift. Indeed the authors assume that the spectrograph
behind the interferometer is the same as that in Section 4.3. In this
case the advantage of using the EDI is "just" the additional,
apparently independent, signal contaned in the "whirl". However, as
pointed out above already, the authors should at least point out that
the downside is the loss of photons in the interferometer. So the
"conventional" contribution to the signal will in reality not be the
same as that in Section 4.3. It's fine to estimate the redshift
precision of the EDI without including this effect (as the authors
have done) in order to see the effect of the additional
signal. However, for a fair comparison, the authors should also
estimate the redshift precision including the loss of photons in the
interferometer.
**
As mentioned before, the text now gives an estimate of the throughput
of the interferometer and its throughput relative to the conventional
spectrograph.
EDI's capability to "boost the resolution of a conventional
spectrograph" also means we can use a smaller resolution spectrograph
to achieve reasonably high Doppler precisions normally achieved only
by high resolution spectrographs. It is always beneficial
(spectroscopically) to use the highest resolution spectrograph one can
afford, and has physical space to mount and thermomechanically
isolate. Often they are prohibitively expensive, so we analyze the
case for an R=20,000 spectrograph because they are so much more
affordable. Secondly, a smaller resolution means a smaller physical size
for the spectrograph, which makes it much easier to thermomechanically
protect and isolate it from drifts.
If we use both of two complementary interferometer outputs, then
through conservation of energy for ideal optics there is no loss in
the interferometer (summing the two outputs). So then the only
interferometer loss is due to parasitic losses because the beam passes
through or reflects from extra optics. With a well designed
instrument and good AR coatings this could be minimized. Perhaps a
15% loss.
- p 13, l, 59
I am not sure that the issue of PSF variations is any different in
this case from the "conventional" case. In the latter case arc lines
can also be used for PSF calibration.
**
The arclines would not enter through the same slit as the science
light and thus would not have the same PSF.
Section 4.6
I am not sure I understand the advantages of ED-SHS over EDI. As the
authors point out, these concepts are very similar, except for the
functional form of the modulation. Could the authors elaborate on the
pros and cons of EDI vs ED-SHS?
**
The difference between ED-SHS and EDI is shown pictorially in the
figure referenced after equation 27. The main difference between the
two is the practical bandwidth.
We add a description in the text.
"The similarities between ED-SHS and EDI can be seen in Fig. 2 of
Erskine (2003); both use interferometry to create wavelength-dependent
modulations in the signal, with the distinction being that the EDI
creates a uniform spatial frequency for all wavelengths and thus has
an extremely wide bandwidth, whereas the ED-SHS creates a diamond-like
fringe pattern whose spatial frequency varies rapidly around a specific
wavelength, limiting the practical bandwidth significantly."
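A toy sketch (assumed numbers only) of the two modulation geometries just
described, an EDI-like comb with the same spatial fringe frequency at
every wavelength versus an SHS-like pattern whose fringe frequency grows
with the offset from a reference (Littrow-like) wavelength:

import numpy as np

lam = np.linspace(5995.0, 6005.0, 2001)   # wavelength axis (A)
x = np.linspace(-1.0, 1.0, 201)           # normalized spatial axis on the detector
LAM, X = np.meshgrid(lam, x)

# EDI-like: constant spatial fringe frequency; wavelength enters only through the phase
edi_fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * (5.0 * X + LAM / 0.1)))

# SHS-like: spatial fringe frequency proportional to the offset from 6000 A,
# so fringes become too fine to sample far from that wavelength, limiting the bandwidth
k_x = 5.0 * (LAM - 6000.0)                # fringes per unit x
shs_fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * k_x * X))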
Section 5
- p 16, r, 52
"...brighter and narrow(er?) lines."
Unlikely. As already pointed out above, the line widths considered in
Table 2 are already unphysically low.
**
Although the galaxy line widths are low, isolated gas clouds are known
to have lower velocity dispersion. The text is tempered with the
following:
"Spatial resolution of the galaxy, say with an integral field unit, may isolate
subregions with finer emission than that of the combined whole."
- p 17, l, para beginning line 11
In this context the authors may wish to refer to
2008PhLB..660...81A
2008MNRAS.391.1308Q
2012PhR...521...95Q