Pre-publication Review of
Content of review 1, reviewed on November 29, 2018

This feature comparison study of open-source container orchestration frameworks presents a tremendous amount of comparison work in the field of elastic container platforms. I am not aware of any comparable study that dives so deeply into the details and version history of the respective platforms. The main contributions of this work are:

  • A descriptive feature comparison overview of the three most prominent elastic container platforms used in cloud-native application engineering: Docker Swarm, Kubernetes, and Mesos.
  • The study identifies 124 common and 54 unique features across all frameworks and groups them into nine functional and 27 sub-functional aspects.
  • The study compares these features qualitatively and quantitatively concerning genericity, vendor lock-in, maturity, and stability.
  • Furthermore, the study investigates the pioneering nature of each framework by tracing its historical evolution on GitHub.

The paper fits the intended purpose and scope of the journal and presents a massive amount of work in the field of elastic container platforms and cloud-native applications. "Tons" of data, tables, and figures support all this. Furthermore, the review identifies interesting research opportunities in the following fields: improved cluster and container security; performance isolation of GPU, disk, and network resources; and network plugin architectures.

I am perfectly fine with the content. However, the strength of the paper (presenting all the details) is at the same time its biggest weakness. The submitted work has the potential to be split into two (maybe three) stand-alone papers. At almost 160 pages, the article is much too long. So, the authors should work on the following, more general points to improve the overall readability of the paper.

Some of my general recommendations are:

  1. Provide the current version as a technical report for those readers who are interested in this level of detail (most readers will not be looking for so much detail).
  2. Refactor the current paper by applying the following principles to reduce its length to an appropriate journal size of about 20 to 40 pages. Under no circumstances should the article be longer than 40 pages (shorter is better!).

Reduce the paper size dramatically and improve the overall readability by applying the following principles:

  • Introduce your research questions much earlier (in the methodology section or even the introduction), not on page 111!
  • Focus on the common features and explain the commonalities; do not walk through all platforms common feature by common feature. In other words, organize the discussion around the common features, not around the platforms. You can reference the much more detailed technical report where appropriate.
  • Do not retrace the version history of the analyzed frameworks (Section 5). Section 5 alone seems worth a paper of its own, and it makes the article very lengthy and hard to read. Use the latest version for the presentation of your analysis. The version history can be presented in a more detailed technical report (or even in another paper).
  • Present the differences in a separate section and discuss them briefly (not at length). Ask yourself which trends or ongoing developments can be derived from the differences. What aspects are not covered? What open research questions can be derived? Where is the road heading? What would the reader gain from this section? Try to use the differences section to give the reader a "look ahead" into the future. This should be possible according to what I have read.

Some more specific recommendations would be:

  • Provide Table 1 in an additional graphical architecture presentation to give the reader some visual guidance. As a nice side effect, such visualizations tend to be cited quite often, in my experience.
  • Try to visualize your methodology in a kind of workflow diagram to give the reader better guidance on how the analysis outcomes were derived methodically.
  • Optimize the layout of your "blue feature tables." The tables' content is great, but the layout is inefficient. Try to find a solution that integrates all of your "blue feature tables" into one table (maybe two or three pages long, but not longer). Such overviews are of great value, but they must be presented in a compact form, not fragmented across a document of more than 100 pages.
  • Try to focus on the three leading platforms (K8S, Mesos, Swarm) and handle add-ons like Aurora within these main platforms (otherwise the reader gets lost throughout the document).
  • Try to focus on the points that are essential from your point of view in the migration and vendor lock-in/deprecation section. I got lost in this section. You can always reference a much more detailed technical report, but a journal article should be more to the point.
  • Explain the motivation of your quantitative analysis. Answer the question of which methodology your analysis follows, and introduce this in your methodology section. Try to focus on the essential points of your review and keep only the visualizations that support those points. Not everything that has been calculated must be presented and visualized. The presentation should help the reader understand your points; you need not impress the reader with the sheer amount of data!

If you apply these principles, the paper will get a lot shorter and easier to follow, and your conclusions will come across more clearly.

As an expert in the field, I warmly honor all of your efforts; this is a tremendous amount of work. As a reviewer, I am willing to do another review round, but the paper length must be reduced dramatically. I hope you will find the time to make the necessary changes and are willing to go through maybe two or three more rounds. The cloud-native community will honor your efforts. But in the current state of the paper, your insights are hardly identifiable, which is very sad.

So, please work on that paper! Please!!!

Source

    © 2018 the Reviewer.

Content of review 2, reviewed on January 24, 2019

I thank the authors for their revision. It improved the paper a lot. However, there are still sections that can be shortened. Maybe the authors are right that 40 pages might be impossible including the references. However, the paper might still be shortened to 40 pages (not counting the references).

Nevertheless, this version is a huge step forward. But the authors might think about an Appendix where they can store the details. The main text should focus on the essential parts: the commonalities of the CO platforms and the most interesting distinguishing aspects (those that might have triggered Section 8).

Done consistently, this would allow nearly all sections to be shortened.

I recommend making Section 9.1 (Threats to validity) a standalone section. The same is true for Section 9.2 (Lessons learned).

The conclusion is currently not a real conclusion; it is a discussion of the threats to validity and the lessons learned. Instead, it should sum up the critical points of the study and present them in a very condensed, "take-away" form of about half a page.

What is more, reference [58] has been updated. The preliminary technical report has now been published as the final report. Please use DOI: 10.13140/RG.2.2.22009.52321 as the reference.

The authors may also want to consider the paper "A Brief History of Cloud Application Architectures", published in the same journal (DOI: 10.3390/app8081368). This reference might be useful to motivate the overall study.

I observed several reference errors, at least on the following pages: 21, 24, 25, 28, 42, 43, 45, 48, 59, 55. However, I did not check this systematically. It seems to be a general problem of the bibliography software used?

Table 3 still has an inefficient layout. The Kubernetes column contains a lot of full text.

Source

    © 2019 the Reviewer.