Content of review 1, reviewed on September 15, 2020

Title: What is Big BRUVver up to? Methods and uses of baited underwater video

DOI: 10.1007/s11160-016-9450-1

Findings

This is a valuable study reviewing important aspects of BRUVS research, a field that has recently grown rapidly in both the number of studies and its contribution to measuring the health of marine ecosystems. The study achieves its aims to 1) demonstrate the large variation in the methods and purposes of previous BRUVS studies, 2) identify gaps in knowledge and the need for further research, and 3) point out the failure of many studies to describe essential methodological details. These findings will help to further improve the use of BRUVS and enable more accurate comparisons between studies.

There is one major weakness that holds me back from accepting this study unless it is fixed in a revised manuscript: the selection of keywords for the literature search. The authors used the keywords “baited and video” or “BRUVS”. I see the following four issues with these keywords.

  1. The authors found great variability in the names of units used in the literature, with BRUVS being the most common (Fig 1d). If an extra search using "BRUVS" was made in addition to the search with "baited and video", then it is very likely that the overall results include more studies using the word "BRUVS" than studies using different acronyms, and are therefore potentially biased.

  2. The authors mention that the acronym BRUVS was first used by Cappo et al 2001 in Australia. Since then, Cappo and his colleagues (who are presumably concentrated in Australia) have published numerous works using the term BRUVS, whereas scientists in other countries may have used different names. This may be why the authors found that the majority (61%) of the 161 studies were conducted in Australia. Again, potentially biased.

  3. The first keyword pair, "baited and video", does not distinguish aquatic from terrestrial studies. Why not include a term such as "underwater"? How did the authors spot unwanted terrestrial studies and remove them from the analyses?

  4. Following on from the comment above, the screening of search results is unclear. How did the authors decide that the 10,000+ hits were not relevant and exclude them? It is hard to imagine that they checked every paper individually.

To fix this major issue, I recommend that the authors conduct a fresh literature search that includes the other acronyms. This may not change the conclusions of the original manuscript in any major way, but I expect many of the figures to shift slightly and become less biased. Please also provide more details of the screening of search results in the methods section (related to the third and fourth points raised above); a sketch of what such a search and screening step might look like is given below.
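For illustration only, here is a minimal sketch (in Python, purely as pseudocode for a reproducible workflow) of how an expanded search query and a screening rule could be documented. The list of alternative acronyms and the terrestrial exclusion terms are my own assumptions, not terms taken from the manuscript, and the query syntax is generic rather than specific to any one database.

    import re

    # Hypothetical unit names/acronyms to broaden the search beyond "BRUVS";
    # these are assumptions on my part, not terms taken from the manuscript.
    ACRONYMS = ["BRUVS", "BRUV", "stereo-BRUV", "BUV", "RUV"]

    # Generic Boolean query (database-agnostic pseudosyntax).
    query = '("baited" AND "video" AND "underwater") OR ' + \
            " OR ".join(f'"{a}"' for a in ACRONYMS)
    print(query)

    # Example screening rule: flag obviously terrestrial hits by title keywords.
    # The exclusion terms below are illustrative only.
    TERRESTRIAL = re.compile(r"\b(camera trap|mammal|bird|forest)\b", re.I)

    def is_candidate(title: str) -> bool:
        """Keep a hit unless its title matches a terrestrial exclusion term."""
        return not TERRESTRIAL.search(title)

    titles = [
        "Baited remote underwater video surveys of reef fish assemblages",
        "Baited camera trap video of forest mammal activity",
    ]
    print([t for t in titles if is_candidate(t)])

Even a short statement of the query string and the exclusion rules actually used would make the search reproducible and answer both of the points above.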

There are also the following six specific minor comments that I would like the authors to consider.

  1. Please specify why the authors chose the period between 1950 and the search date for the literature search. The authors found that the oldest study dates from the late 90s (Fig 1a). Even if there were any BRUVS studies in the 50s, I imagine that the underwater video technology of that era would be nothing like today's, making methodological comparisons meaningless.

  2. Re: the last sentence of the methods section. Please provide more details of how “the purpose and novelty of the studies were assessed”.

  3. Table 1. Only 46% of the 161 studies reported the variable “Max range visible”. I expect that some study sites on coral reefs had excellent visibility (i.e. over 15 m), making a measurement of "Max range visible" inapplicable. It would be better if the authors took such details into account before determining the number of studies that reported this variable.

  4. Table 1, “distance between reps”. It would be interesting to see how the number of papers reporting this variable has changed through time. My guess is that more recent papers report it, especially after Cappo et al 2004 first described the effect of bait plume dispersal between replicates and recommended careful consideration of the distance between replicates.

  5. Page 58, first paragraph. The authors state that “sampling effort and the additional cameras required for stereo could be better spent on increasing replication” for studies that did not use length data. I disagree with this statement. Stereo-BRUVS have been increasingly used around the world to estimate fish length, abundance, and community structure, and the same equipment (or even the same video samples) is repeatedly used for multiple studies with different research questions. Based on these facts, I can think of two possible explanations for why stereo-BRUVS were used in studies that did not require length data: 1) the research group did not own single-BRUVS and only stereo-BRUVS were available, or 2) stereo-BRUVS were deployed for another study that required length data, and the same video data were re-used for other studies with different aims. Either way, it was more cost-effective to use stereo-BRUVS than to construct or purchase new single-BRUVS for studies that did not require length data. The authors should take these factors into account and recommend that other authors clearly state in their articles why stereo-BRUVS were used instead of single-BRUVS.

  6. Page 59. The results of a Pearson correlation are provided in the text, but no statistical analyses are mentioned in the methods section. Please describe the method (i.e. what statistical package was used, and how were the assumptions tested?); a minimal sketch of the kind of reporting I have in mind follows this list.
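As a minimal sketch only (the data here are hypothetical, and I am assuming a Python workflow purely for illustration; the authors may well have used a different package), reporting a Pearson correlation together with an assumption check could look like this:

    from scipy import stats

    # Hypothetical data purely for illustration, e.g. per-study deployment
    # duration (x) against the number of species recorded (y).
    x = [4, 6, 8, 10, 12, 16, 20, 24]
    y = [11, 13, 14, 18, 17, 21, 24, 26]

    # Normality assumption for each variable (Shapiro-Wilk test).
    for name, values in (("x", x), ("y", y)):
        w, p = stats.shapiro(values)
        print(f"Shapiro-Wilk for {name}: W = {w:.3f}, p = {p:.3f}")

    # Pearson correlation coefficient and two-sided p-value.
    r, p = stats.pearsonr(x, y)
    print(f"Pearson correlation: r = {r:.3f}, p = {p:.4f}")

Whatever the actual workflow was, naming the software and the assumption tests in the methods section would be sufficient.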

Source

    © 2020 the Reviewer.

References

    Whitmarsh, S. K., Fairweather, P. G., Huveneers, C. 2017. What is Big BRUVver up to? Methods and uses of baited underwater video. Reviews in Fish Biology and Fisheries.