Content of review 1, reviewed on July 26, 2013

Basic reporting

The paper is very well written and presents the ideas clearly.

Some minor (Discretionary) comments regarding the style:

* The title is too long; how about "A Galaxy framework for sequence analysis with applications in molecular plant pathology"?
* In the abstract, NCBI BLAST+ is mentioned and then BLAST is mentioned again, but only as an example; this is confusing.
* In the abstract, in the sentence "The motivating research theme ...", it is not clear whether the research theme mentioned refers to Galaxy as a whole or to the content of this paper. Also, the abstract reads like a presentation of Galaxy, rather than presenting the authors' own work (the specific Galaxy tools).
* The sentence in lines 148-151 is very difficult to understand.
* The last part of the sentence in lines 244-245 may be clearer if written as follows: "despite being phylogenetically distant"

Possible mistakes:

Line 112: computING cluster?
Line 114: can BE made
Line 115: extra space after "e.g."? Perhaps the authors could use the LaTeX command \newcommand{\eg}{\emph{e.g.}\xspace} (and the xspace package); a short sketch follows this list.
Line 244: sequenceS
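
For illustration, a minimal sketch of how the suggestion above could look in the manuscript preamble (assuming the paper is prepared in standard LaTeX; the \eg macro name and the example sentence are only illustrative, not taken from the manuscript):

    % in the preamble
    \usepackage{xspace}                  % \xspace adds a trailing space only where one is needed
    \newcommand{\eg}{\emph{e.g.}\xspace} % italic "e.g." with consistent spacing

    % in the body text
    ... common sequence formats (\eg FASTA and FASTQ) are supported ...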

Experimental design

The main objection is that the work presented in this paper is not completely reproducible.

The authors present a set of Galaxy tools and workflows that use these tools. However, only the "backbones" of the workflows are stored in the Galaxy Tool Shed. Therefore, if a user wants to reproduce a workflow, she needs to import it into a Galaxy server and run it with datasets of her choice: since the datasets will be different, the workflows are not completely reproducible.

The authors should publish the workflows with the datasets they used to test them. Since the authors mention in the acknowledgements that they maintain an in-house Galaxy server, they can easily make the workflows mentioned in the paper public, and also publish a history with the datasets used, with clear instructions mapping the datasets to the corresponding workflow steps. This way any reader can run precisely the workflows presented in the paper, with the actual datasets, and judge the results. If the authors are worried about the computational burden for their server, they can set up accounts for the reviewers only, without making their Galaxy server public.

Validity of the findings

As already mentioned, the datasets used to test the workflows have not been made available.

Comments for the author

As already mentioned, the datasets used to test the workflows have not been made available.

Source

    © 2013 the Reviewer (CC BY 3.0 - source).
