  1. May 2023
    1. Abstract

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.81), and the reviews are published under the same license. They are as follows.

      **Reviewer 1. Jin Sun**

      Gomes-dos-Santos et al. have upgraded the freshwater mussel Margaritifera margaritifera genome using long-read sequencing. Overall, this version is dramatically improved compared with the former one, with an increased N50 value and BUSCO score and a decreased number of contigs. Considering the important economic value of M. margaritifera and the high quality of the assembly, I must congratulate the authors on this. However, in contrast to the high-quality assembly, I am a bit wary of the genome annotation part. To me, the number of gene models predicted is high compared with other molluscan genomes. This is also reflected in the low proportion of gene models that can be annotated by SwissProt, GO, etc. I suspect that the high number of gene models could be a consequence of only ab initio evidence being applied in the current study. More sophisticated approaches, such as EVM or MAKER, should be used to see whether the number of gene models can be reduced without sacrificing the BUSCO scores of the gene models.

      Line 76: The official name should be “Oxford Nanopore Technologies (ONT)”.

      Fig. 1: It is interesting to see the wide distribution of M. margaritifera. I would be interested to know whether there is any genetic differentiation between the European population and the North American population.

      **Reviewer 2. Rebekah L. Rogers**

      Is there sufficient detail in the methods and data-processing steps to allow reproduction?

      Y. All methods seem standard and high quality for a genome release.

      If the authors could add a table comparing this assembly with other Unio genomes, that might be helpful: gene numbers, BUSCO scores, N50s, and other relevant stats. It will help readers see the value of this more contiguous genome. For example:

      - V. ellipsiforma (Renaut et al.)
      - M. nervosa
      - P. streckersonii

    1. Abstract

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.80), and the reviews are published under the same license. They are as follows.

      **Reviewer 1. Grace Mugumbate**

      Please add additional comments on language quality to clarify if needed

      Yes. First-person reporting has been used, with the word "We" used extensively.

      Are the data and metadata consistent with relevant minimum information or reporting standards?

      No. There is a need to specify the type, size, standardisation and curation of the data that was used, especially where additional data were obtained from different databases.

      Is the data acquisition clear, complete and methodologically sound?

      Yes. Sources of data are indicated in the paper; however, the size and type of the data sets are not clear.

      Is there sufficient detail in the methods and data-processing steps to allow reproduction?

      No. There is a need to give more detail in the methods for reproducibility.

      Is there sufficient data validation and statistical analyses of data quality?

      No. Validation was performed; however, no statistical analyses were mentioned.

      Is there sufficient information for others to reuse this dataset or integrate it with other data?

      No. More detail is needed on data retrieval to allow reuse of the dataset.

      Additional Comments:

      The authors presented their work entitled 'Mycobacterial Metabolic Model Development for Drug Target Identification'. This is very innovative work that led to the generation of M. leprae and M. abscessus models, important tools for drug target identification. Target identification for a number of infectious diseases provides information for structure-based molecular modification of new and alternative drugs. Target-specific compounds will help reduce side effects, among other things. Generation of the models by the authors is commendable.

      There are a few corrections:

      1) Under Abstract, line 4: Please note that Mycobacterium tuberculosis is not a disease but the bacterium that causes the disease tuberculosis.
      2) Methods, GEM reconstruction, curation and simulation: (i) line two: name the "other organism specific databases"; (ii) give a brief description of COBRApy and GLPK, even if the source has been given (see the sketch below).
      3) The Methods section needs to be more informative to allow for reproducibility.
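
      As context for point 2(ii): COBRApy is a Python package for constraint-based modeling of genome-scale metabolic models (GEMs), and GLPK is an open-source linear-programming solver that COBRApy can delegate to. A minimal sketch of how the two fit together, assuming a hypothetical SBML file name rather than the authors' actual model:

      ```python
      # Minimal COBRApy/GLPK sketch, not the authors' pipeline: load a GEM
      # and run flux balance analysis. "model.xml" is a placeholder path.
      from cobra.io import read_sbml_model

      model = read_sbml_model("model.xml")  # hypothetical SBML model file
      model.solver = "glpk"                 # select the GLPK solver
      solution = model.optimize()           # flux balance analysis (FBA)
      print(solution.objective_value)       # e.g. the predicted growth rate
      ```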

      **Reviewer 2. Nagasuma Chandra**

      Is there sufficient detail in the methods and data-processing steps to allow reproduction?

      Yes. It would be useful if the authors could comment on how the models vary between the two species and with respect to M. tuberculosis. Specifically, a note on how the authors deal with alternate enzymes, and whether they included enzymes specific to each species, would be helpful.

      Is the validation suitable for this type of data?

      Yes. A figure depicting the overall capability of the models would be useful.

      Additional Comments:

      Genome-scale metabolic models are useful to the community as they can be used to address a variety of questions. It would be useful if the authors could include a section on the comparative performance of the models and link it to the known metabolic capability of these microbes.

    1. Abstract

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad028), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Philippe Boileau

      This manuscript introduces the new docker-based JupyterLab framework in Galaxy, describing its core components and demonstrating its use in the reproduction of two analyses. The proposed framework is also thoroughly compared to competitors, like Google’s Colab and Amazon’s SageMaker. This tool is bound to have an impact on the life sciences: it democratizes computational analyses and facilitates reproducibility. I thank the authors for their important work. However, I think that this technical note should be reviewed for grammatical errors and faulty punctuation. I’ve identified some such issues in the comments below but wasn’t able to address all of them. Included in the comments are other remarks which, if addressed, could strengthen some key takeaways.

      • The first sentence of the abstract states that AI programs require “powerful compute infrastructure” when applied to large datasets. I think readers would like to know how you qualify an infrastructure as “powerful”. A brief definition could be included in the second sentence instead of repeating “. . . hosted on a powerful infrastructure . . .”.
      • Is it “JupyterLab” or “jupyterlab notebook”? The Project Jupyter site seems to use the former. Based on the documentation, JupyterLab is a web-based user interface that can open Jupyter notebooks (.ipynb files).
      • The statement “Artificial intelligence (AI) approaches such as machine learning (ML) and deep learning (DL) . . .” implies that ML and DL are distinct aspects of AI. This distinction is insinuated throughout the rest of the text. Isn’t DL a subset of ML? I suggest replacing “ML and DL algorithms” with “ML algorithms” and specifying “DL algorithms” only as needed.
      • I believe there’s a missing comma between “ecosystems” and “enabling” in the first sentence of the Docker container section.
      • Consider reformatting “A container runs . . . of the running software.” to “A container runs an isolated environment with minimal interactions between it and the host OS. Running software in a container is more secure.”
      • Related to the suggestion above: Can you explain why this increased security is necessary? An example might help emphasize the importance of a secure container.
      • I think “Docker container inherits . . .” should be “The Docker container inherits . . .”. The same goes for “Docker container is decoupled . . .”.
      • Consider reformatting “Moreover, it can easily be extended by installing suitable packages only by adding their appropriate package names in its dockerfile.” to “Moreover, the Docker container is easily extended: additional software packages can be installed by adding their names to the dockerfile.”
      • Consider replacing “some of the popular ones are” with “including”.
      • I believe there’s an unneeded comma between “. . . platform for both” and “rapid prototyping . . .”.
      • I believe a word is missing in the last sentence of the Features of jupyterlab and notebook infrastructure section: “. . . an H5 file.”
      • “google” and “amazon” should be capitalized.
      • Consider removing “and non-ideal” from the Related infrastructure section.
      • I believe the comma in “. . . but they come at a price, . . .” should be replaced by a colon.
      • I believe there’s a missing comma between “. . . free of charge” and “similar to colab . . .”.
      • Why is sharing a session’s resources across multiple notebooks more useful than operating each notebook in a separate session? Isn’t the latter preferable when a notebook causes a session to crash?
      • “deep learning” in the Implementation section should be replaced by “DL” for consistency.
      • I think that readers would find a link to your tool on Galaxy Europe useful: https://usegalaxy.eu/root?tool_id=interactive_tool_ml_jupyter_notebook. The same is true for your tutorial: I think readers would find a URL in the text more easily than in the references. However, the tool failed to execute on usegalaxy.eu with the following error message: “This tool is restricted to authorized users”. I was unable to follow the tutorial. Was this a one-off issue with the Galaxy servers?

    2. Abstract

      Reviewer 2: Milot Mirdita

      Kumar et al. present a Docker-based integration of Jupyter Notebooks in the Galaxy workflow system that can utilize GPUs. This notebook is also available in the Galaxy Europe instance.

      I was able to create a Galaxy Europe account, find the newly introduced Galaxy tool and submit a job. However, it remained stuck with the message "This job is waiting to run" and the job info "Stopped" for multiple hours. I was able to download the docker image and run it on a local server with multiple Nvidia GPUs. This resulted in a running Jupyter Lab; however, running the GPU-based examples resulted in driver mismatch errors/warnings (pynvml.nvml.NVMLError_LibRmVersionMismatch: RM has detected an NVML/RM version mismatch; kernel version 470.141.3 does not match DSO version 515.65.1 -- cannot find working devices in this configuration). Thus, the examples ran on CPU only. I did not try to resolve this issue and only repeated some examples.
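
      For readers who encounter the same NVML/RM mismatch, the failing check can be reproduced directly with pynvml; a diagnostic sketch, not part of the reviewed tool (the version strings quoted above are what such a check would surface):

      ```python
      # Diagnostic sketch: compare the kernel driver version with the NVML
      # library version visible inside the container. A mismatch like the
      # one quoted above (470.x kernel vs 515.x DSO) makes the GPU unusable.
      import pynvml

      pynvml.nvmlInit()
      print("driver version:", pynvml.nvmlSystemGetDriverVersion())
      print("NVML version:  ", pynvml.nvmlSystemGetNVMLVersion())
      pynvml.nvmlShutdown()
      ```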

      The authors show two use-cases for the GPU Jupyter Docker and provide a step-by-step tutorial for usage on Galaxy Europe. Shipping machine learning applications that utilize GPUs as Jupyter Notebooks has become popular recently and supporting these through well-known and freely accessible Galaxy servers, such as Galaxy Europe, would be of clear benefit to users. Additionally, it would be very valuable for method developers like me to easily deploy GPU-based methods to Galaxy servers.

      Major:

      - As mentioned before, I had issues getting a running Jupyter Lab on the Galaxy Europe server. Is this due to a limited number of GPUs, or was this due to an error?
      - Our ColabFold Multiple Sequence Alignment server currently processes about 10-20k MSAs per day. We do not know how many of these are running on Google Colab or on users' local machines; however, a substantial number of predictions are running inside Google Colab. The authors claim that Google Colab's and Kaggle's resources are scarce. However, generally, users (with either free or pro accounts) are given an instance nearly immediately on Colab. I recognize that it is extremely difficult to compete with these commercial platform providers. However, providing a long-term, freely available and securely funded platform with ML accelerators would be extremely beneficial for the whole community. I would like to see a discussion of what GPU resources are currently available to users of Galaxy Europe (and the whole Galaxy Project) and what plans exist to expand these in the future.
      - The size of the docker container (compressed ~10 GB, uncompressed ~22 GB) seems difficult to sustain. Both keeping an up-to-date Docker image and ensuring the availability of older images for reproducibility look difficult to me, especially with such fast-moving dependencies as machine learning frameworks. How do the authors plan to deal with this issue?

      Minor:

      - Please highlight the tutorial (https://training.galaxyproject.org/training-material/topics/statistics/tutorials/gpu_jupyter_lab/tutorial.html) on GitHub and inside the container readme (home_page.ipynb). It is very easy to overlook. I also nearly overlooked the example notebook repository (https://github.com/anuprulez/gpu_jupyterlab_ct_image_segmentation). I found it confusing that I could not find the two shown example use-cases inside the Docker container; I only later figured out that I have to clone the example repository into the running container.
      - The manuscript highlights various workflow methods (elyra, kubeflow, airflow); however, it needs clarification on how the Galaxy workflow integration works. I saw that it is possible to give the output of another Galaxy tool as input to this tool. I would appreciate a tutorial on how to make the GPU Jupyter Docker part of a Galaxy workflow with multiple tools running. I think the above-mentioned tutorials can be expanded to show how the output can be given to the next tool.
      - Docker Hub has introduced many business-model changes, such as deleting container images that are rarely used, which poses a challenge for reproducibility. I know that Dr Grüning is involved in the Biocontainers project. I would recommend investigating whether it is possible to combine these efforts to make this GPU container and derived containers available long term.
      - The Docker container explicitly runs as the root user, while the manuscript highlights the security benefits of Docker. The cited report by Baset et al. highlights both the security benefits and the many security challenges that Docker containers pose. I suggest checking which security best practices for Docker containers can be implemented while still allowing GPUs to be exposed to users.
      - I recommend revising the manuscript for conciseness, with an additional focus on capitalization of words.

    1. Background

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad025), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Weilong Guo, PhD

      Patrick König and colleagues have built a web application for the interactive query, visualization and analysis of genomic diversity data, supporting population structure analysis on specific genetic elements, and data export. The application can also easily be used as a plugin for existing web applications. According to its documentation, the application can be easily installed from pip, Docker and conda, which would be useful for population genomic studies. There are still several concerns about this manuscript.

      Major concerns:

      1. As for the SNP visualization function, only a very limited number of SNPs can be read on the webpage, without functions such as "zoom in" or "zoom out" (it is suggested to add such or similar functions). Although the application can export almost all the SNP sites of a whole VCF file, it is far from user-friendly. It is suggested to add a track of chromosomes showing the genomic windows under query, allowing the cursor to select or adjust the genomic regions (UCSC-browser style), which is necessary for an intuitive user experience.

      2. The BLAST function could serve as a useful entry point. But what is the starting position of the query sequence when mapped to the minus strand? The authors should explain this more clearly on the website (an illustrative sketch of the coordinate convention follows these major concerns).

      3. The authors mention that their application converts the input VCF file into Zarr format. More performance evaluation should therefore be provided to show the advantages of this strategy (rather than using the VCF file directly); a sketch of the conversion also follows these major concerns.

      4. The authors should also compare their application with other similar existing web applications, such as CanvasDB, Gigwa, SNiPlay and SnpHub, to highlight its advantages and improvements.
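
      Regarding major concern 2: in BLAST tabular output, a minus-strand hit reports subject coordinates with sstart greater than send, so a browser must normalize them before displaying a forward-strand interval. A purely illustrative sketch, not DivBrowse code:

      ```python
      # Illustrative only: normalize BLAST subject coordinates so that a
      # minus-strand hit (sstart > send) yields a forward-strand interval.
      def forward_strand_interval(sstart: int, send: int) -> tuple[int, int]:
          """Return (start, end) in forward-strand, 1-based coordinates."""
          return (min(sstart, send), max(sstart, send))

      # A minus-strand hit spanning positions 1101-1200 on the subject:
      assert forward_strand_interval(1200, 1101) == (1101, 1200)
      ```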
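
      Regarding major concern 3: the conversion in question can be reproduced with scikit-allel, and the advantage to quantify is random access, since Zarr stores genotype calls as chunked, compressed arrays that can be sliced without parsing the whole VCF. A sketch with hypothetical file names:

      ```python
      # Sketch of the VCF-to-Zarr strategy under review (file names are
      # placeholders). scikit-allel writes genotypes as chunked Zarr arrays.
      import allel
      import zarr

      allel.vcf_to_zarr("variants.vcf.gz", "variants.zarr", fields="*")

      store = zarr.open_group("variants.zarr", mode="r")
      # Random access to a window of 1,000 variants without parsing the VCF:
      genotypes = store["calldata/GT"][10_000:11_000]
      positions = store["variants/POS"][10_000:11_000]
      print(genotypes.shape, positions[:5])
      ```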

      Minor concerns:

      1. The analysis functions are still insufficient. Commonly used analysis tools or methods, such as haplotype analysis, STRUCTURE analysis, the distribution of nucleotide diversity and selective sweep analysis, should also be supported.

      2. Ref. 22 is incomplete.

    2. Background

      Reviewer 2: Armin Scheben

      The authors present the web app DivBrowse for visualizing genomic variant data. Their code is publicly available, and their web app is well-documented and provides several demonstration implementations for human, mouse and barley. The manuscript is well-written and concisely covers the key features of DivBrowse and summarizes the implementation of the software.

      I was able to test the demonstration website and was impressed with how smoothly everything ran and was set up. Due to time constraints, I was not able to test the installation and set-up of DivBrowse, but the documentation looks sufficient to allow easy set-up by experts. Overall, I think this is a useful contribution to the community. One key issue I believe the authors should address, however, is that the manuscript presents DivBrowse in a vacuum, without much mention of or comparison with existing software with overlapping functionality. Below I provide some further details to illustrate my point and how it might be addressed, as well as listing several other minor comments.

      Main comment

      The authors rightly indicate in their introduction that the growing amount of genomic data generated requires robust solutions for visualization and exploration that do not require use of the command line. But the authors fail to mention that a considerable ecosystem of software already does this. Moreover, some of the available software offers substantially expanded features compared to DivBrowse.

      To help readers better decide when DivBrowse might be the right choice for their needs compared to other options, the authors could cite existing software and provide some comparison. My knowledge of all available software is not exhaustive, but Wang et al. 2020 (https://doi.org/10.1093/gigascience/giaa060), in their publication of SnpHub, provide a comparison table including SnpHub itself and JBrowse. I would consider both of these tools for the exploration and visualisation of SNPs and additional data, similar to DivBrowse. JBrowse is relatively widely used and considerably more feature-rich. The standalone offline tool TASSEL (https://academic.oup.com/bioinformatics/article/23/19/2633/185151) also offers many options for the visualisation, exploration and analysis of VCF data offline. There may also be other tools I am not aware of, and readers would likely benefit from a brief overview of the landscape, the pros and cons of each piece of software, and what differentiates DivBrowse.

      Minor comments

      The authors can consider the minor comments below as 'take it or leave it' comments. I do not think it is essential to address these, but in my view they may enhance the manuscript.

      1) In the discussion, the authors point out the efficiency and low latency of DivBrowse; however, this is not quantified in the manuscript. If it were technically feasible without substantial effort, it might be useful to quantify just how efficient DivBrowse can be, especially if this could be one of its stand-out features.

      2) The authors use divergence Bezier curves to increase the number of variant calls that can be visualized. This is helpful and a useful default. However, invariant sites can also be of considerable evolutionary and breeding/medicinal interest. When collapsing invariant sites, they become indistinguishable from unmapped regions. This is a fundamental issue, and many VCF files may not encode information on invariant sites, so it may not be possible to develop robust functionality that allows users to optionally show invariant sites as well. Still, this point may be worth briefly mentioning in the discussion, if the authors agree it is noteworthy.

      3) One advantage of visualizing relatively raw data like SNPs is that it can reveal patterns that are less obvious in other types of data exploration. To take full advantage of this, tools like JBrowse allow export of the browser window in SVG format, allowing users to incorporate images into high-resolution figures. I don't expect the authors to necessarily implement this feature for this review, but it may be worth adding it to the list of potential enhancements that could be implemented based on user demand.

    1. Motivation

      Reviewer 2: Mulin Jun Li

      In this manuscript, the authors update their previous ReMM score to the GRCh38 human genome build and provide a convenient and fast data source. The authors then use some examples to demonstrate the usability of the resource. It is original to point out the differences in how tools prioritize variants between genome builds. However, we have the following concerns and comments:

      Major:

      1. How are variants with missing values in the test datasets dealt with when comparing the new ReMM with other tools? The authors mention that ExPecto annotated only half of the million negative variants (see the sketch after this list).
      2. Although CADD used the same negative training dataset, it is not suitable to compare it on the ReMM training dataset. How do those tools perform on independent test datasets?
      3. The authors presume that the new genome build will give better performance; is there evidence to support this perspective, such as the distribution of features or training data between genome builds?
      4. Other existing tools can also prioritize disease-causal noncoding variants, such as regBase-PAT, NCBoost, ncER, etc. Can the authors compare the new version of ReMM with these tools?
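
      Point 1 touches a common evaluation pitfall: if a tool scores only half of the negatives, an ROC computed on its scored subset is not comparable to one computed on the full set. A hedged sketch of two handling strategies, using a hypothetical score table (the file and column names are illustrative, not the authors' data):

      ```python
      # Hedged sketch (hypothetical data, not the authors' pipeline):
      # compare tools either on the intersection of variants scored by all
      # tools, or after imputing missing scores pessimistically.
      import pandas as pd
      from sklearn.metrics import roc_auc_score

      df = pd.read_csv("scores.tsv", sep="\t")  # columns: label, remm, expecto

      # Strategy 1: restrict to variants scored by every tool.
      common = df.dropna(subset=["remm", "expecto"])
      print("AUC on common subset:",
            roc_auc_score(common["label"], common["expecto"]))

      # Strategy 2: keep all variants, treating a missing score as the
      # lowest possible rank (a deliberately pessimistic imputation).
      filled = df["expecto"].fillna(df["expecto"].min())
      print("AUC with pessimistic imputation:",
            roc_auc_score(df["label"], filled))
      ```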

    2. Motivation

      Reviewer 3: Wyeth Wasserman

      SYNOPSIS The manuscript describes an updated release of the ReMM regulatory variant mutation scoring system. The paper presents the performance of an updated version of the system and describes how it was applied to the most current release of the reference human genome.

      OVERALL PERSPECTIVE This is a valuable resource for the community of researchers and clinicians working on the interpretation of genetic variants in the human genome. The work appears to be thoughtfully done, and appropriate assessments have been provided. The use of the random forest models to weigh the contributions of features was particularly noted for the insights it provided into how features contribute to prediction. My biggest concerns are stylistic, which fall outside the scientific quality of the work. I provide these comments for the authors to consider and do not expect that my stylistic preferences will be uniformly accepted. A fair amount of the justification in the manuscript focuses on the value of having a release for version 38 of the human genome, pointing to the field as not having done so broadly. I think this is misguided: by the time people read the manuscript, such points will have lost relevance. I suggest the focus be given to the science, as there is no need to justify things based on where other resources have progressed in releasing their version 38 updates. Points below include language/text clarifications that can be assessed by the authors. Writing styles differ, so stylistic comments should be optional.

      MAJOR POINTS None. Well done and clearly presented.

      MINOR POINTS

      1. The word "various" is vague and often shows up when people are too busy to provide an accurate statement. Starting the manuscript with it makes a bad impression on this reader. You do not have to change it, but I thought you might appreciate knowing this impression. You could delete it with no harm to the sentence. (Not to get carried away, but the next sentence starting with "some" heightens the impression of 'hand waving'.)
      2. I think I understand ", we apply cytogenic band-aware cross-validation using ten folds", but I encourage the authors to provide clearer wording for this point (a sketch of the idea follows this list).
      3. I would allow the reader to make their own judgement of performance, so please remove "excellent" from "we achieve an excellent performance".
      4. "Rather than using ReMM scores for ranking, some users need to specify score thresholds" is confusing. I would change 'need to' to 'choose to'.
      5. "with lots of false positives" is a bit informal. I suggest "with a high false positive rate".
      6. I am confused by "from three genomic regions (genic content and not overlapping with assembly gap changes)", as the brackets include two items, not three.
      7. "maybe due to better mapping" - "maybe" should be "may be".
      8. I think language like "seems to be the only tool directly trained on training data and features derived from GRCh38" is not particularly valuable long term. This is a useful contribution, but many tools are being updated to GRCh38, and by the time this appears and is read, such statements decline in relevance. I would focus on providing this valuable resource, and not try to justify it based on a transient perception of where the field stands in updating versions.
      9. "It is worth noting that in the context of extremely unbalanced data…" - you do note it, so I would change the wording to "In the context of extremely unbalanced data…".
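
      Regarding point 2: the idea being described is group-aware cross-validation, in which variants from the same cytogenetic band are kept in the same fold so that highly correlated neighbors cannot appear on both sides of a train/test split. A generic sketch on toy data, with scikit-learn's GroupKFold standing in for whatever the authors implemented:

      ```python
      # Generic illustration of band-aware cross-validation, not the
      # authors' exact code: grouping by cytogenetic band prevents leakage
      # of nearby, highly correlated variants across the train/test split.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import GroupKFold

      rng = np.random.default_rng(0)
      X = rng.random((1000, 26))               # feature matrix (toy data)
      y = rng.integers(0, 2, size=1000)        # deleterious vs. benign
      bands = rng.integers(0, 400, size=1000)  # cytogenetic band per variant

      cv = GroupKFold(n_splits=10)
      for train_idx, test_idx in cv.split(X, y, groups=bands):
          clf = RandomForestClassifier(n_estimators=100)
          clf.fit(X[train_idx], y[train_idx])
          print("held-out accuracy:", clf.score(X[test_idx], y[test_idx]))
      ```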

    3. The Regulatory Mendelian Mutation score for GRCh38

      Reviewer 1: Yan Guo

      1. In the abstract: "Some methods and annotations are not available for the current human genome build (GRCh38), for which the adoption in databases, software and pipelines was slow." I am not sure what the authors are referring to by "some methods"; this could be a grammar problem.
      2. "Restricting variants to non-coding only removes a small proportion of variants": what is the proportion? Also, I don't understand the need to remove coding variants; shouldn't your model also work with coding variants?
      3. The method the authors used is based on a previous publication. However, there is still a need to give the details of the method in this manuscript. There is a lot of missing information. For example, what is the outcome, whether a position is deleterious? How is the probability of deleteriousness calculated?
      4. by a few specific variants. Thus, the overall number of Mendelian disease-related variants should be low. I am guessing that is why 406 hand-curated variants were used in the previous version of ReMM. If my assumption is correct, there should not be many variants for Mendelian disease. How many variants are found to be positive in the entire genome?
      5. In the online application, the results are limited to 500; the rest cannot be seen or downloaded. It would be better to allow the user to download the entire results.
      6. The authors performed a comparison with other tools and generated ROC curves, which depend on knowing the true positives. There is no description of the dataset that was used for the comparison. Did the authors make sure that the training variants are not used for the comparison?
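
      Point 6 can be verified mechanically; a hedged sketch of the overlap check the reviewer is asking for, assuming variant tables keyed by chromosome, position and alleles (file names are hypothetical):

      ```python
      # Sanity-check sketch (hypothetical files): confirm that no training
      # variant reappears in the evaluation set used for the ROC comparison.
      import pandas as pd

      key = ["chrom", "pos", "ref", "alt"]
      train = pd.read_csv("training_variants.tsv", sep="\t")
      test = pd.read_csv("evaluation_variants.tsv", sep="\t")

      overlap = pd.merge(train[key], test[key], on=key)
      print(f"{len(overlap)} evaluation variants also occur in training data")
      assert overlap.empty, "training/test leakage detected"
      ```
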
    4. Motivation

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad024), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

    1. Background

      Reviewer 2: Ben Woodcroft

      Cornet et al. have generated a collection of Nextflow pipelines to analyse genome or raw sequencing data of microbial organisms and protists. The methodology appears sound and reproducible. My main concern with the manuscript is that it is not well described in the abstract, introduction or GitHub repository. It isn't clear whether the analyses are specific to genomics questions arising from culture collections, or whether they are more broadly applicable. There is also no discussion of other pipelines which achieve similar things, e.g. ATLAS (https://metagenome-atlas.github.io/).

      I also had a number of minor concerns, detailed below.

      A number of grammatical errors were detected; these should be fixed. Parts of the manuscript are also slightly too informal, e.g. "This confirms the interest of using ORPER to spot interesting SSU rRNA sequences". It would be helpful if the GitHub front page could provide a concise description of what the software aims to achieve, to make its use more understandable.

      106: "as it happened" - grammatical error.

      "Assembly.nf": commonly, assembly is a separate process from binning, but here binning has been included. Perhaps a clearer name might be Genome-recovery.nf?

      124: "Researchers interested in a better understanding of these tools can read the recent review on the detection of genomic contamination made by Cornet et al. [15]." While not inappropriate, this is perhaps too much self-citation. Why is contamination assessed but not completeness?

      129: "annotation of bacterial proteins is automatic" - automatic in what sense? Annotation usually also refers to describing the function of a protein, but here the meaning appears to be restricted to ORF calling. I found this somewhat confusing. Also, "in the different GEN-ERA workflows" is unclear - does this mean that Prodigal is run as part of the Assembly.nf workflow, for instance?

      143: "Orthology.nf automatically provides the core genes, shared by all the organisms in unicopy" - what is meant by "all organisms" here?

      145: "The OGs of proteins can be further enriched" - what does "enriched" mean?

      163: GTDB.nf is described in the "Other workflows" section, when it is phylogeny-related.

      172: "it was technically not possible to include Mantis in a container" - I am curious as to why this was the case. I do not have any specific insight or ability to judge the accuracy of this statement, just curious. Inclusion of a sentence describing the difficulties might help other workflow developers and/or the Mantis developers.

      190: "Gloeobacterales are the most basal order of the Cyanobacteria phylum" - this statement is somewhat controversial, because the GTDB has defined the Melainabacteria as being part of the Cyanobacteria phylum based on RED values. I would suggest removing "the most basal" or making it clear that cyanobacteria refers to photosynthetic cyanobacteria rather than the phylum.

      189: The methods for this section are not described in the methods section; they are only briefly described in the Findings section. A clearer link to these methods should be made from the main text and methods.

      212: Showed -> show.

      215: "estimate the sequencing level of the order" - it isn't clear what meaning this has.

      224: "Our results demonstrate the absence of one metabolic pathway" - there are many metabolic pathways; presumably more than one is missing.

      233: "examples of the practical usage of the GEN-ERA toolbox are available in Supplemental File 1" - this does not make it clear that this refers to the methods for this specific example.

    2. Background

      Microbial culture collections play a key role in taxonomy by studying the diversity of their accessions and providing well-characterized strains to the scientific community for fundamental and applied research.

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad022), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Shakuntala Baichoo

      Paper title: The GEN-ERA toolbox: unified and reproducible workflows for research in microbial genomics. The GEN-ERA toolbox provides a number of containerized workflows to researchers (without any specific training in bioinformatics) to study the diversity of well-characterized strains for fundamental and applied research. More specifically, it facilitates all steps from genome downloading and quality assessment, including genomic contamination estimation, to phylogenetic tree reconstruction. It additionally provides workflows for average nucleotide identity comparisons and metabolic modeling. The supplementary file provides details of how to run the whole workflow (through 10 steps) found in the GEN-ERA toolbox for an empirical dataset of early-emerging cyanobacteria. It provides an up-to-date phylogenomic analysis of the Gloeobacterales order, the first group to diverge in the evolutionary tree of Cyanobacteria. The GitHub repo located at https://github.com/Lcornet/GENERA also provides more details about the GEN-ERA tool suite. Though the manuscript mentions that the call to Mantis could not be included in the Singularity call, the GitHub repo indicates that Mantis is now installed in a Singularity container for the Metabolic workflow (installation is no longer necessary). The tool has been tested on an empirical dataset of 18 (meta)genomes of early-branching Cyanobacteria, and the time taken as well as the results of the run are documented in the supplementary file. The authors claim that the tool suite can be used to study the diversity of microorganisms, including bacteria and fungi. From the GitHub repo, it is clear that a number of publications in high-impact journals have already resulted from the development of GEN-ERA.

      1) Are the methods appropriate to the aims of the study, are they well described, and are necessary controls included? This study aims at describing a toolbox, named GEN-ERA, and the methods section defines the various steps of the tool suite. Looking at the supplementary file and the GitHub repo, it is easy to follow the manuscript. The versions of the programs used in the case study are provided in the form of Nextflow scripts.

      2) Are the conclusions adequately supported by the data shown? The results of running the tool suite on an empirical dataset of 18 (meta)genomes of early-branching Cyanobacteria at each step, as well as the time taken to download the files and run each step, are convincing that it works fine, at least for Cyanobacteria. But this is found in the Supplementary Material; there should be a Discussion and Conclusion section in the main text.

      3) Please indicate the quality of language in the manuscript. Does it require heavy editing for language and clarity? The use of English is adequate and concise, and the manuscript can be understood clearly by researchers interested in studying the diversity of micro-organisms.

      4) Are you able to assess all statistics in the manuscript, including the appropriateness of statistical tests used? The statistics involved in the phylogenetic analyses are integrated in the existing programs. Hence I am not able to assess the statistics.

      5) Final comments: The toolbox/tool suite described in this manuscript is very relevant and worth a read for researchers interested in studying the diversity of microorganisms, including bacteria and fungi, especially as it facilitates their work through the use of well-defined containerized Nextflow workflows.

      I strongly believe that the main manuscript should include a section discussing the results of running the toolbox on the case study, as well as a Conclusion. This will help readers better understand the importance of the toolbox.

  2. Apr 2023
    1. Background

      Reviewer 2: Raphael Eisenhofer

      Piro and Renard introduce GRIMER, a tool that automates microbiome-related analyses and creates a rich, offline-supported report that can be shared with collaborators or hosted online. I think that they gave a great summary of the problem of contamination in the microbiome field, and clearly explain the gap that their software fills. They exhibit GRIMER on previously published datasets, which are available to view online. Overall, I'm very impressed with the dashboard: it looks great, makes it easy to explore datasets, and is highly portable. I can certainly see myself using GRIMER on some of my future datasets, and I have no doubt that it can be a valuable tool for others in the field. I do however think that the documentation and usability of the tool can be improved, and I give some suggestions below. Addressing these issues will, in my opinion, lead to a wider adoption of the tool by researchers in the field.

      Usability:

      I managed to test GRIMER on a 16S amplicon dataset, but given the sparsity of the documentation, this took me a little longer than expected (in addition to quite a few steps), and I think that there are improvements that could be made to make it easier for people to use GRIMER from formats that people commonly generate.

      For example, QIIME2 is perhaps the most used 16S amplicon analysis pipeline, so the ability to import directly from .qza files (e.g. table.qza, taxonomy.qza) would give GRIMER much greater reach. If this is beyond the scope of the GRIMER codebase, at least provide the exact code needed in the documentation for people to export their .qza files to files compatible with GRIMER (see the sketch at the end of this review). Likewise for phyloseq, a commonly used R package for microbiome analyses: could some documentation/code be added about how best to export phyloseq objects to a format that GRIMER can handle?

      I mostly analyse shotgun metagenomic datasets (genome-resolved), and I foresee more users using these types of data in the future. Therefore, the ability to parse GTDB-Tk outputs directly would be very helpful. Perhaps have a flag --gtdb that parses the 'gtdbtk.bac120.summary.tsv' and 'gtdbtk.ar53.summary.tsv' files. Following on from this, CoverM (https://github.com/wwood/CoverM) is quite commonly used for generating final MAG count tables (.tsv), so the ability to import them directly would be a really nice quality-of-life addition, and something that would not require much coding to accomplish. I believe that these adjustments will make the tool far more accessible for everyday users and increase the adoption of GRIMER by the wider community.

      For the actual report, if possible, I would like the ability to export the ASVs/features/MAGs that the user thinks are contaminants. This could be a list that the user could copy/paste, or the direct export of a .txt/.tsv. Perhaps the user could tick a box next to the ASVs/features/MAGs to save them to a list/viewer? The reason for this is that the logical next step I see after using GRIMER is to go back to your dataset and filter out the putative contaminant ASVs/features/MAGs. Being able to produce such a list will make subsequent filtering by the user easier.

      I couldn't get decontam to work with my dataset; this was the error:

      raise KeyError(f"None of [{key}] are in the [{axis_name}]")
      KeyError: "None of [Float64Index([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan], dtype='float64')] are in the [index]"

      I can post this as an issue on the repo if you'd like.

      Regarding the specification of negative and positive controls in the config.yaml: would it be possible for this to be implemented from the executable? For example, there could be a flag '--control-column' that specifies the column in the user's metadata file. '--control-column control' would parse the 'control' metadata column and, where the values are 'negative' or 'positive', assign them automatically. This is just a suggestion that could make it a bit easier for users to set control samples, rather than having to create a new .txt file and change the config.yml.

      Dependencies:

      When installing via conda, I ran into the following error:

      ImportError: cannot import name 'PearsonRConstantInputWarning' from 'scipy.stats'

      It seems that this can't be imported from later versions of scipy, but I managed to fix it by forcing scipy=1.8.1. You should be able to force this version in the conda recipe.

      Minor grammar:

      Line 16: replace 'perform' with 'performs'.
      Line 50: 'found in the [9]'.
      Line 56: replace 'as technicians body' with 'microbes from laboratory technicians'.
      Line 60: I would remove the 'environmental' adjective here, as contamination affects all low-biomass samples.
      Line 63: I would use 'samples' in place of 'environments' here. You may also consider suggesting that some samples may even contain no microbial DNA, e.g. replace 'low amounts of' with 'little to no'.
      Line 64: replace 'ideal scenario for an exogenous contaminants' with 'an ideal scenario for exogenous contaminants'.
      Line 72: perhaps consider referencing decontam here.
      Line 79: replace 'due to increase in costs' with 'due to the increase in cost associated with their inclusion'.
      Line 81: consider referencing the first author's last name, e.g. 'Moreover, XXX et al. [45] reported…'.
      Line 88: remove 'outcomes'.
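
      As a starting point for the QIIME 2 request above, one possible export route is the QIIME 2 Python API, which can render a feature table and taxonomy as pandas data frames and write plain TSV files. A sketch under that assumption (file names are illustrative, and this is not GRIMER's documented interface):

      ```python
      # Hypothetical export sketch: .qza artifacts -> TSV files for GRIMER.
      # Assumes the qiime2 Python API and pandas are installed.
      import pandas as pd
      import qiime2

      # FeatureTable[Frequency] -> samples x features data frame
      table = qiime2.Artifact.load("table.qza").view(pd.DataFrame)
      # write features as rows, samples as columns
      table.T.to_csv("table.tsv", sep="\t")

      # FeatureData[Taxonomy] -> feature ID, taxon string, confidence
      taxonomy = qiime2.Artifact.load("taxonomy.qza").view(pd.DataFrame)
      taxonomy.to_csv("taxonomy.tsv", sep="\t")
      ```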

    2. Abstract

      Reviewer 1: Gavin M Douglas

      Piro and Renard present GRIMER, a bioinformatics tool for summarizing microbiome taxonomic data in various ways, with the main purpose of identifying putatively contaminant taxa. The authors convincingly argue that there is great value in looking at several different aspects of a dataset when determining which taxa are potential contaminants. I think this tool could potentially be very useful for the field, but at the moment there are several places where users might be confused and perhaps overwhelmed without more documentation.

      The main point of confusion I'm concerned about regards the "common contaminants". It's not convincing that you can just classify a taxon as a contaminant regardless of what environment is being profiled. Also, under this approach, if a taxon is identified once as a contaminant in an earlier study, would it then be classified as a contaminant in all datasets processed by GRIMER? This would mean that a lot of high-abundance taxa in certain environments would be wrongly thrown out. For instance, you can imagine high-abundance taxa on the human skin might be more likely to be contaminants during sequencing preparation, but of course many researchers are very interested in profiling the skin microbiome. I think the authors realize this, but I'm concerned that typical users may not appreciate this point. I think explicit discussion of this point in the discussion is needed; an example of how this might look in practice (e.g., if skin microbiome samples were input to GRIMER), as part of a larger tutorial that could be online [see next point], would also help avoid this mistake.

      The authors do a great job of walking through some results in the text, but more documentation is needed for the reports. The authors should include a basic tutorial that provides example input files and then walks through each individual tab. This could be done entirely through text with screenshots of GRIMER, or perhaps with a video tutorial. In addition, for someone just opening the example reports, I'm sure they will be wondering what data was produced by GRIMER (e.g., they might wrongly think GRIMER did the taxonomic classification) and what data was needed as input.

      The authors should expand on how the correlation step is used to identify contaminants. There is great interest in identifying clusters of co-occurring taxa, so identifying a cluster of 9 genera in Figure 5 doesn't seem like evidence of contamination to me. Perhaps it is when considered with other lines of evidence, but this should be made clearer. Currently this legend implies that it alone points to reagent-derived contamination.

      The figure text needs to be increased in size. Using more panels split across additional rows and removing unnecessary info (e.g., not all control categories need to be shown in Figure 1) would make these figures easier to interpret. I realize that you were hoping to use the raw GRIMER figures, but based on the current display items they do not seem publication-ready.

      The acronym WGS generally refers to "whole genome sequencing" (i.e., for single isolate organisms), not "whole metagenome sequencing". The standard acronym for the latter case would be "MGS", for "metagenomics". Also, the term "shotgun metagenomics sequencing" is most commonly used in this context; I've never come across "whole metagenome sequencing" before. Either way, "WGS" will mislead casual readers with the current usage, so this should be changed on your website and in the manuscript.

      The taxa parsing capabilities sound like they will save a lot of tedious, manual data mapping! Just checking: how does it perform with new taxa names / typos?

      Text edits:

      L11 - "are challenging task" should be "is challenging".
      L12 - can remove "by design".
      L12 - "helping to" should be "to help".
      L13 - "can potentially be a source" I think should be "that could reflect".
      L14 - "evidences" should be "evidence".
      L13 + L14 - unclear what is meant by "external evidences, aggregation of methods and data and common contaminant" - should be clarified.
      L15 - "that perform" should be "that performs".
      L17 - "towards contamination detection" should be something like "to help detect contamination".
      L41 - "hypothesis" should be "hypotheses".
      L42/43 - "analysis can hardly be fully" should be something like "the required analysis is difficult to fully…".
      L56 - "technicians body" should be "a technician's body".
      L60 - "strongly affects environmental" should be "especially environmental," (note comma).
      L64 - "ideal scenario for an" should be "an ideal scenario for".
      L67 - "not to bias measurements and not to" should be reworded, possibly as: "to not bias measurements and to ensure that bias is not propagated into databases".
      L75 - "were proposed. They are " should be "have been proposed. These are".
      L77 - "among others" should be ", and others" (note comma).
      L79 - "increase in costs" should be "the required increase in costs".
      L88 - add "a" before focus.
      L90, L196, L265, and elsewhere - "evidences" should be "evidence".
      L99, L104, L117, and possibly elsewhere - "analysis" should be "analyses" (when plural).
      L106 - "each samples/compositions" should be "each sample/composition".
      L110 - add "a" before taxonomy database and "the" before "DNA concentration".
      L132 - "specially" should be "especially".
      L134 - remove "a" before "the".
      L151 - add "of" after "thousands".
      L182 - "is" should be "are".
      L196 - "evidences" should be "evidence". And rather than "Evidences towards" it would be correct to say "Evidence for" or "Evidence supporting".
      L208 - add "the" before "overall".
      L246/247 - "generated several studies and investigations" should be something like "motivated several investigations".
      L248 - should be something like "from the maternal and fetal sides".
      L279 - remove "a".
      L280 - add "the" before "Jet".
      L284 - capitalize "Qiita" and re-word "Pick closedreference OTUs with 97% annotated with greengenes taxonomy".
      L293 - should be "Furthermore" rather than "Further".
      L295 - I think it should be "with low and high human exposure, respectively"? Or do you mean they both have highly variable exposure?
      L297 - "could be a also an" should be "could be driven by an".
      L300 - "against" should be "and".
      L304 - "correlated genus" should be "correlated genera" (and in other cases, such as in the Fig 5 and 6 legends, where "genus" should be the plural, i.e., "genera").
      L305 - "Such pattern" should be "Such a pattern".
      L307 - should be "groups" rather than "organisms groups", or just "genera", as I believe each is a genus.
      L313 - remove "a".
      Fig 5 legend: "point" should be "points".
      Fig 6 legend: "taxa is abundant" should be "This taxon is abundant" and "inversely correlate" should be "inversely correlated". "a contamination evidence" should be "potential contamination".

    1. there

      Reviewer 4: Madeleine Geiger

      This well-written study integrates different approaches and methodologies to tackle the still obscure nature and origin of the dingo and its sub-populations by thoroughly characterising and comparing an "archetype" dingo specimen. I have read and commented on the abstract and the introduction, as well as the morphology-related parts of the methods, the results and the discussion. The methods of morphological comparison, as well as their description and the reporting of the results, are sound. However, in some sections it is difficult to comprehend the results and their interpretations, as well as the significance and nature of the suggested "archetype" specimen Cooinda. I have therefore made some suggestions for additions and edits to the text and the figures, which hopefully help to increase the comprehensibility and consistency of the text (see my comments below). I could not check and comment on the raw data because the links to the supplement given in the manuscript (figshare) do not work. Sorry if I'm stating the obvious here, but being able to access the raw data is particularly important if the described dingo is to act as a reference archetype.

      L. 74: Add "of the dingo" after "ecotypes": "[…] compare the Alpine and Desert ecotypes of the dingo […]". Otherwise it's not really clear what this is about.

      L. 91: It's unclear to me what you mean by "this female". I would suggest exchanging this expression for the previously used name of the animal.

      L. 94 ff.: The conclusions do not really fit the rest of the abstract, specifically the aims as stated in the beginning. What I read from the "Background" section is that this work is about defining a "dingo archetype" via different approaches (genetic and morphological). The conclusion, however, is centred around the individual Cooinda. I would suggest opening up this section to also make conclusions concerning the previously stated aims of the paper.

      L. 105 ff. and L. 369 and L. 508: A very nice opening! However, I feel that there is a somewhat misleading interpretation of the domestication process as a discrete trichotomy: wild > tamed > domesticated, when in fact domestication is a continuum with various stages in between the two extremes of the "wild" and the "intensively bred". There are various forms - even today - of "half-domesticated" populations, such as, e.g., many of the Asian domestic bovids, or the reindeer. Thus, I would strongly argue that the dingo - although special due to the almost complete lack of human influence on its evolution in the last millennia - is not the only link between the "wild" and the "domesticated". See e.g.: Vigne, Jean-Denis. "The origins of animal domestication and husbandry: a major change in the history of humanity and the biosphere." Comptes rendus biologies 334.3 (2011): 171-181.

      L. 117: How do you define "large carnivore"? And: are dogs more numerous than cats? I don't know the tallies overall, but in many parts of the world domestic cats are more frequent than dogs.

      L. 120-121: I think this sentence does not contribute to the manuscript, and I would suggest deleting it. I also think that these are not the usual characteristics used to discern the wolf from other canids.

      L. 123-125: I do not understand this distinction. In my opinion, the dingo could well be both a tamed intermediate between wolf and domestic dog AND a feral canid. If I understand the current view of dingo evolution correctly, the dingo most probably constitutes an early domestic stage of the dog, which became feral.

      L. 150: I do not understand the reference to Figure 1 at this point. If you want to keep the figure reference at this place, I would recommend extending the legend in order to be more descriptive about the significance of this individual dingo. Also: is the question mark on purpose?

      Intro and Results in general: Cooinda is central to the research question and the paper. However, I do not really understand her position and significance right away from the text. Maybe this is just a matter of the sequence of the paragraphs (some information is given at the beginning of the methods section at the end of the manuscript), but I think it would be crucial to introduce and explain Cooinda and her role (as kind of a reference "archetype") thoroughly early on, preferably in the Introduction. This would, e.g., also include: why, of all the dingoes in Australia, is Cooinda an appropriate choice to function as the "archetype"? Further, it would be helpful to have a figure showing the geographical distribution of the compared populations (Alpine and Desert, as well as Cooinda's origin) to better understand the setting.

      L. 320 ff. and Figure 5: Would it be possible to add a visualisation of the shape changes described in the text to the figure? It is otherwise impossible to evaluate these shape changes.

      L. 328-345: It would be interesting to pursue the variation along PC2 further: do you have information from the raw data on whether specimens of both the alpine and the desert group that were found to have particularly low or high values for PC2 are especially young and female, or old and male? In other words, do you find evidence in the dataset that there is an actual age and/or sex gradient along PC2? And what age was Cooinda when she died?

      L. 347: As also pointed out below, it would be important to note somewhere whether these two specimens died at about the same time and/or were treated similarly (because of brain shrinkage in specimens that were frozen or otherwise fixed for a long time).

      L. 472: I would suggest rewriting as: "Cooinda's brain was 20% larger than that of a similarly sized domestic dog […]". Further, I do not agree with the rest of the statement in this sentence. One of the hallmark characteristics of domestication is brain size reduction, which might be the result of selection for tameness (which you also describe later on). However, selection for tameness (an evolutionary process within a population) is not the same as taming (on the level of the individual). I would therefore suggest re-writing this sentence. Further, and in general concerning the brain size part of this study: it would greatly increase the significance of this part of the work if you compared the dingo brain size not only to one domestic dog, but set it into a larger context. There are plenty of published references for wolf, domestic dog, and dingo brain size estimates, and it would be enlightening to compare your findings with those. Of course, there are methodological issues, but maybe a meaningful comparison is possible for some of them. For this I could recommend this review article: Balcarcel, A. M., et al. "The mammalian brain under domestication: Discovering patterns after a century of old and new analyses." Journal of Experimental Zoology Part B: Molecular and Developmental Evolution (2021).

      L. 483: Many of the surviving populations of re-introduced (i.e., feral) domestics were part of a fauna that did not correspond to that of their wild relatives, but was somehow characterised by reduced predation or competition. This was certainly the case for the dingo (few other large predators in Australia) and for some island populations. Maybe you should double-check whether this is really the case for the provided examples, but maybe it would be better to write that brain size reduction persists in feral populations at least under certain circumstances.

      L. 527: Why is it important that the reference dingo is a female? Please explain.

      L. 535 ff.: Please explain the significance of these special characteristics. Why and how are they special and important for the current study? Also: I'm not a native speaker, but I have the impression that some of the sentences in this section are a bit unusual. Please double-check the grammar.

      L. 739: What do you mean by "below" in the brackets?

      L. 741: Is this the right figure reference? I cannot find this figure. Do you mean supplementary Figure 9a?

      L. 744-745: Could you briefly explain in one sentence the nature and number, etc., of landmarks used in this reference study (for those who cannot check the referenced work)? This would be quite important for interpreting the results.

      L. 744: Delete "earlier".

      L. 755: Could you briefly explain here whether these were freshly dead specimens, or whether they were already older (e.g. frozen, stored in a liquid, etc.)? This has some implications for brain morphology and size.

      L. 784 ff.: The figshare links don't work.

      L. 884: I would suggest re-writing the sentence like this: "This was required because the brain was removed immediately after death, which caused some damage to the braincase."

      Supplementary Figure 9c: It's hard to match the reds of the convex hulls with the reds of the legend. Would it be possible to write the names right next to the corresponding convex hulls?

      L. 895: The position remains the same relative to which other analysis? Maybe make a reference to the text and/or a figure (I guess Fig. 5) here.

    2. functional

      Reviewer3-Sven Winter

      The manuscript " The Australasian dingo archetype: De novo chromosome-length genome assembly, DNA methylome, and cranial morphology" does describe a de novo genome assembly of the Alpine dingo based on PacBio, ONT, 10X Genomics Chromium, BioNano, and Hi-C. Furthermore, it describes cranial morphometrics and methylation patterns to describe an Alpine dingo "archetype". The methods used seemed overall sound, yet, the writing was often confusing and unclear, so it was difficult to understand what was done and why. The writing, in general, is my biggest criticism of the manuscript, so much so that I was wondering if the authors, by accident, uploaded an earlier version of it. Throughout the manuscript, multiple writing styles and skill levels are evident, and I am sorry to say that it seemed as if the manuscript was copied together from different sources written by the different coauthors rather than a coherent manuscript. Some of the Figure Captions have superscript numbers to highlight individuals and at the same time proper labels that make them obsolete. I first thought they were remnants of footnotes in a previous version. Unfortunately, the methods section, for me one of the most important sections, needs serious improvements. I am not a big fan of having the methods at the end of the manuscript (I know that is how GIGAScience likes it), especially when some methodology is mentioned in the results in a way that you must look up the details in the methods section to understand it. Unfortunately, that is the case with this manuscript. As there are so many paragraphs in this manuscript that need some improvements, I can only focus on some of them in this review but encourage the authors to have a careful look at the whole manuscript before resubmission, as in the current state, I would not recommend it for publication. Detailed Comments: Abstract: The abstract is overall too long and needs to be much more concise, e.g., the discussion on taxonomic designation (L75-78) should be part of the discussion section but not the background paragraph of the abstract. L91 "this female" which female? Introduction: I am missing a short review of the taxonomy of the dingo, mostly with respect to the dog or wolf. I know you do not want to draw taxonomic conclusions from this study, but a short review of what others proposed would be helpful. Also, even though everybody knows what a dog is, scientific taxon names are a requirement in scientific writing and should be added at the first mention of any taxon (e.g., dog, gray wolf, dingo, etc.). L113: intermediate in what sense? Morphological, behavioral, ecologically? L123: Please explain in more detail why a type specimen is needed, especially, as I would argue, that a population-level genomic, morphological and behavioral study would be better to answer if dingoes are feral dogs or an intermediate form instead of a single individual type specimen. L139: reevaluation L142: this can be more concise, e.g., "Zhang et al. (17) found evidence for a separation of Australian dingoes into a northwestern group and a southeastern group, clustering with New Guinea Singing dogs" L150: remove "?" L151-154: This belongs in the discussion. L153: "… being characterized. However, we suggest …." Results: L161: Please consider changing it to chromosome-scale or chromosome-level genome assembly, which is much more common. L162-170: This whole paragraph is a short summary of the methods and does not include a single result. 
      I know it sometimes is nice to recap the methods, but in this case I do not see the need for it. Or at least it can be shortened even more, to something like: "The final assembly after hybrid long-read assembly, polishing, and scaffolding has a total length of 2,398,209,015 bp …"
      L163: I'll highlight it again in the methods, but Supplementary Figure 1 shows that 18 PacBio SMRT cells were used, while the methods say 15.
      L164: Please be more precise: were they PacBio CCS or CLR reads?
      L167: Please provide Supplementary Figure 2 with a better contrast, allowing us to see the high and low contact density in the centre of the scaffold squares.
      L172: "Ungapped" is not a term I would use; instead, I prefer to refer to the assembly as scaffolded or scaffold-level if it is in scaffolds with gaps, and contig-level if the scaffolds are split up into contigs for statistics or analyses. In this case, I would only state the total scaffolded length and maybe the amount of N's or gaps. Also, the second sentence would be better combined with the first, e.g., "The final assembly had a total length of 2,398,209,015 bp in 477 scaffolds and a scaffold and contig N50 of 64.8 Mb and 23.1 Mb, respectively."
      L174: What does full-length mean?
      L175: Please reference the dog genome properly with the accession number and reference, if available.
      L176: Please rewrite. There is something not quite right with the bracket and the remaining sentence that follows.
      L178: Carnivora_odb10
      L182: Please check the manuscript and the supplementary data for consistent spelling of Cooinda (or Cooindah).
      L184: What does "were full-length by BUSCOMP" mean? Please give more details here on which basis this is determined and what it means that the two other genomes had a few more. Also, I am not sure you have to repeat the list of canine assemblies if you have them properly listed in the methods. Again, that's why I prefer to have the methods before the results.
      Table 1: Again, "ungapped" does not sound right. Please consider changing it to be clearer. As a general side note, when you want to compare two assemblies of different assembly length, it would be better to compare NG50 instead of N50. I doubt that in this case, with only a 40-50 Mb difference, it would change the results much, but consider adding NG50 values. Number of gaps is also not very clear, as gaps can be of different sizes and can be of a determined length or a standard number of N's as a placeholder for a gap of unknown length.
      L198: "to align Alpine dingo long reads to the Desert dingo assembly" seems not to fit here. Please check the sentence structure and rephrase.
      L199: "These plots show low variation on the X chromosome" – more context is needed. Low compared to what? Why are the results only so briefly mentioned after multiple lines of "methods"? This is an issue I see throughout the results: there are barely any results and mostly method summaries.
      Figure 2: Explain what the plot shows. I am, in general, not a big fan of these multi-layer Circos plots, as each individual plot is way too small to show much. However, in this case the lower number of SVs on the X chromosome is visible enough, but the caption needs more details.
      L211: Why list a reference for something that is a result of this study?
      Supplementary Fig. 5: Each chromosome is too small and the resolution too low to see details of the SVs.
      L217: So why is that important to mention? If there is no further reason, I would remove it; it does not add to the story.
L226: "In addition, however, we also found …" ïƒ change to "In addition, we found… " or "We also found …" L227: Consider joining the two sentences: "We also found two structural events on Chromosome 26 (SFig. 6) containing mostly short genes…" L227: What does perfectly conserved mean? L228-229: Why not show it? L230-232: This can be more concise and easier to read for example: "The Alpine and Desert dingo both have a single copy pancreatic amylase gene (AMY2B) on Chr 6. However, only the copy in the Desert dingo includes a 6.4kb long LINE." I am not sure why reference 10 is cited twice here in the results. Is this a result already known before? If so, this belongs in the discussion. L233: Again, the whole section is a short methodological summary, and there are absolutely no results. Figure 3: The figure caption needs to be rephrased completely. Not sure how this ended up here, but "NOTE: A and C as well as B and D are similar plots. However, A and B use SNVs while C and D use indels." really does not belong in a proper caption, especially as each plot is listed before stating if it is based on SNVs or indels. Bootstrapping usually does not need to be explained in a figure caption. Instead, it would be more important to mention what type of phylogenetic tree it is and on how many SNVs it is based. There is also no scale on the trees, does that mean these are pure cladograms? For B and D, please explain what an ordination analysis is and change the axis labels to something meaningful. Labeling the x and y axis "Axis 1" and "Axis 2" is absolutely pointless. I am quite surprised that this passed the final ok from all co-authors. L260: please use Desert dingo and not Sandy. L263-265: Not sure why this is important here if it is not discussed later. Figure 4: Again, the figure caption needs a complete rewrite. For example, L281 "dingo Sandy is in this clade" is very unclear and confusing; "In this figure," ïƒ remove!; What are the superscript numbers for? Please remove them, they look like they belong to some footnotes from an earlier manuscript version, which are now missing. Instead, important info such as the type of network, the meaning of the small lines, and the scale are missing. Methylome: L295: I would remove "the" before MethylSeek and a period is missing before UMRs. L297-299: I think here it would be very nice to not just mention that there are other studies but give some examples and comparisons. "These analyses" could either refer to the MethylSeek analyses or the analyses of reference 55, please rephrase to be clearer. Also, it is unclear what previously reported numbers mean, again give more details ("… in line with previously reported numbers of promoters and enhancers in, e.g., humans (promoters xxxx, enhancers xxxx), mouse (xxxx), and rat (xxxx).") L301: what does "we lifted over the former UMRs to the latter genome" Please rephrase, it is very unclear to me what you mean. L302: why was average DNA methylation calculated for UMRs. Should they not be unmethylated by definition? L306: Why have a sentence about that a gene is highly conserved but not perfect and then give the percentage of identity instead of just stating that it is 99.8% identical? I have now mentioned quite a few examples where the manuscript could be much more concise. I cannot list them all but would encourage you to read through the manuscript again and make it more concise. 
      Morphology: My knowledge of morphological analyses is limited, but, despite the unfamiliar terminology, this section of the manuscript is easy to read and focuses in a more concise way on the actual results. My only suggestion would be to label Supplementary Figure 9a with the different morphological features mentioned, or to add an additional schematic to the supplementary, so non-morphologists can follow more easily.
      L353-354: I would suggest adding the sizes after you mention the individual, to avoid repeating dingo and dog brain. For example: "… the dingo brain (75.25 cm3) was 20% larger than the dog brain (59.53 cm3) (Figure 5B)."
      Figure 5: Please add an explanation of what the polygons in 5A represent. Also, consider changing the labels in 5B. I assume LHS and RHS are short for the left-hand side and the right-hand side. This, for me, is usually used to describe positions in unlabeled figures. I would suggest changing it to Cooinda dingo (CD) and domestic dog (DD).
      Discussion:
      L375: It is not clear why Cooinda should be considered the archetype at the beginning of the discussion. I would place this in the conclusions and base it on the results and the discussion.
      L394-395: Please rephrase and shorten, e.g.: "There is a single copy of AMY2B in both dingo genomes; however, they differ by a 6.3 kb retrotransposon insertion present in the Desert dingo."
      L394-405: I would like to see a more in-depth discussion of the differences between wolf, dingo, and dog. If there is no LINE in the wolf but one in both the dingo and the dogs, when did the transposition happen? It could be two independent events in the dog and dingo lineages or one in the ancestral lineage. Are the LINEs in dog and dingo at the same position in the gene region? Could it be the same insertion that was reduced in length in the dog lineage, and what does that mean for the evolution of dogs and dingoes?
      L431/432: Please use Alpine and Desert dingo instead of the individuals' names.
      L471-473: I am not sure a single sample (Cooinda) is sufficient to come to this conclusion; also, how does it compare to the wolf? She could just have been a dingo with an exceptionally large brain. I think a more in-depth discussion is needed.
      Methods: Overall, the methods need to be more concise but at the same time clear and complete.
      L530-531: Why is solving the puzzle-box experiment important? Does that not suggest an exceptionally intelligent dingo if she was the only one, and could that not potentially explain the large brain size? How do brain size and intelligence, or the potential to solve the puzzle-box, correlate?
      L532: her brothers
      L533: What is the importance of the ginger color? As it is stated here, it is a bit out of context. Why is it important?
      L535-542: Why is this detailed report on her appearance of importance? I am often missing logical connections in the manuscript.
      L541: I would usually not expect to read such a statement with an emotional connotation in a scientific manuscript.
      L545 ff.: When were the samples taken? What type of samples were taken? How were they preserved? As it is stated that fresh blood was used, I assume Cooinda was still alive at that point. Are there any sampling and ethics permits to be mentioned?
      L552-556: This whole section about the pulse-field electrophoresis can be much shorter without losing any information, e.g., "Molecular integrity was assessed by pulse-field gel electrophoresis using the PippinPulse (Sage Science) with a 0.75% KBB gel, Invitrogen 1 kb Extension DNA ladder (cat ….) and 150 ng of DNA on the 9 hr 10-48 kb (80 V) program."
      L556: What libraries? You have not explained how the libraries were prepared. CLR or CCS?
      L557: Which Sequel platform was used? Sequel I, II, or IIe?
      L558: Remove the hours of movies; that does not matter unless you used a custom sequencing program.
      L559: As 15 SMRT cells were used, I assume the sequencing was performed on the Sequel I.
      L561: I usually avoid starting a section or paragraph with "for". Please consider rephrasing, as you start most paragraphs that way.
      L564: 119 ng of library, especially for long DNA molecules, seems very low for a decent ONT run. I have mostly used the MinION, and I would usually only load a library with so little DNA if I only needed a few reads. I am just curious how well that worked on the larger PromethION flow cell.
      L573: In some sections, this manuscript reads like an early draft that was accidentally submitted. "User Guide, manual part number CG00043 Rev B." Please rephrase.
      L575: For me personally, it does not matter where Qubit measurements were taken, but if you include that info, please try not to repeat it as you did in L578.
      L576-577: Please shorten the two sentences about sequencing to one, e.g., "Sequencing was performed in 150 bp paired-end mode on a single lane of the Illumina HiSeq X Ten platform with a version 2 patterned flowcell."
      L581-582: Does reference 8 use the same protocol version? If so, I would remove the brackets. If not, is there no version number of the protocol available?
      L594-601: Why is this a mixture of insufficiently described methods and results? Please give additional information about trimming and assembly using Canu. Why mention the number of sequences, bubbles, and unassembled sequences in the methods?
      L597: How were the reads aligned to the assembly? I have not used Arrow, but if the pipeline uses mapping tools, please mention them.
      L598-601: These are results and should not be part of the methods.
      L614: What does finishing mean? In the literature, it is more common to write "manual curation of", or that scaffolds were "manually curated" or "manually edited".
      L615 ff.: Again, these are results. I would not place them in the methods.
      L621: Was gap-closing performed only once? Were PacBio and ONT reads combined in one iteration of gap-closing? Maybe PBJelly suggests only using it once, but in my experience gap-closing can be performed iteratively to further improve the contiguity.
      L622-623: Again, results in the methods.
      L634: Why is it important when the chromosome mapping was completed? You did not specify when the sequencing was performed or when the samples were taken.
      L635: Please add the accession number and, if available, the reference.
      L644: Circos is a tool for plotting data in a circular plot. How were the SNVs and indels identified?
      L647: X chromosome
      L652: I usually use GeMoMa for homology-based gene prediction. I would like to see a short description of the method rather than just linking to a previous publication.
      L658: "Processes that produce differences" is not very precise; please give some more info here. I would usually remove indels from phylogenetic datasets due to the uncertainty of their mutational history. How were they coded and how were they analysed?
      L659: What is WA distance? Reference?
      L660: The Glazko et al. reference is quite out of context here. Better phylogenetic properties than what? Why use distance-based phylogeny? How many SNVs were used? How were they filtered?
      L662: Maximum parsimony is not frequently used anymore for phylogenetic reconstruction. Why not use a maximum-likelihood or Bayesian approach?
      L664: It is mentioned that the wolf should be the outgroup, but the dataset itself is not mentioned in the methods. List all samples that were included in the phylogeny. If the dingo is assumed to be intermediate between wolf and dog, why did you not use a different canine as outgroup to avoid bias?
      L665: Include the version and the URL of the tool if there is no paper to cite.
      I cannot judge the methods for methylation and morphology, as this is not my expertise, but these method sections read very well and seem clear to me.
      Availability of supporting data: There are quite a few broken sentences and misplaced periods in this section. Please check the text again and make sure that the links to your datasets are functioning.
      Overall, the presented data are interesting and a valuable resource, but the manuscript itself needs some major improvements to make the interesting results available to the reader in an easier-to-follow and more understandable form. It is obvious that multiple authors with different expertise worked on different sections of the manuscript. The challenge during the revision is to bring it together into a concise and easy-to-read manuscript with a consistent writing style. I hope that my comments, questions, and suggestions can help in that process. Please take my writing suggestions as such; feel free to adjust and change them in a different way, as long as the result is more reader-friendly and more concise.

    3. Background

      Reviewer2-Jack Tseng

      My evaluation of the manuscript was restricted to the geometric morphometrics (GM) section. The authors seem to have followed a standard GM procedure in their analysis of cranial shape differences among dingo skull samples. My only suggestion is that additional detail be provided on the landmark data collection for the GM analyses:
      - Reference 58 was cited as the source of the landmarks used in this study, but no other details are provided. A list of the landmarks that form the basis of the geometric morphometric analyses should be presented in order for the reader to fully interpret the PCA plots.

    4. Abstract

      Reviewer1-Andreas Chavez

      The article "The Australasian dingo archetype: De novo chromosome-length genome assembly, DNA methylome, and cranial morphology" is well written and interesting study examining the evolutionary relationships between the focal species, the dingo, and related canids (both domestic and wild). This study uses an impressive amount of state-of-the-art genomic data and resources to produce a (chromosome-length) de novo assembly of the dingo genome. The approach for assembling the genome are all adequate. The comparisons of chromosomal structural variation and methylation patterns with another dingo ecotype and other canids show interesting patterns of divergence that are potentially important regions of adaptive differences. I have two major concerns and some minor concerns for this paper. Major concerns: I believe this paper would be stronger if it contained analytical methods that addressed admixture between dingos and domestic dogs more explicitly. The authors state that admixture between dingos and domestic dogs (Line 453) is one of a few hypotheses that may explain phenotypic differences between the two dingo ecotypes. To evaluate this hypothesis with the genomic data, they rely primarily on phylogenetic analyses to explore the evolutionary relationships between the dingo, wolf, and domestic dog lineages. They show that the dingo lineages are outside of the domestic dog clade and that wolves are outside of the dog/dingo clade. Although it is probably true that dingos are a unique evolutionary lineage, phylogenetic analyses are not the strongest tool for assessing admixture and the contribution of genomic variation from different ancestral source populations. I would recommend using methods that would test admixture hypothesis more explicitly. D-statistic tests (ABBA BABA test) and related tests would seem appropriate for this kind of data and sampling scheme. I also have concerns about the interpretations of brain size differences between dogs, dingos, and wolves. Although I am intrigued by the idea that domestication may have driven reductions in brain size and shape variation, I find it hard to not consider natural selection pressures in the case of dingos and wolves. The best scenario for testing the domestication-driven hypothesis would be if dogs, dingos, and wolves evolved in a common environment and domestication practices were the most notable differences between them. However, given that wolves and dingos in Australia evolved with different prey and habitats on different continents, it seems hard to me to not consider environmental adaptations as another important factor in the evolution of brain-size and shape variation. Minor concerns: Introduction: It took me awhile reading deeper into the manuscript to understand what was meant by the name Cooinda. For awhile, I thought it was the name for a dingo subspecies or ecotype. I would suggest including a brief section in the introduction stating that the genomic and morphological data in this study is based off of a single individual named Cooinda and that there are questions about it's ancestry and placement as one of the dingo ecotypes. Line 172: ")," should be replaced with ")." Line 371: I don't think it is necessary to say "The passing of Cooinda the dingo" Line 463: more is needed to finish the point of "will illuminate" Line 494-497: The role of venomous animals as barriers to gene flow is conceptually not clear and is not supported by the citations from what I can readily tell. Line 540: dewclaws? 
Line 541: "Regrettably" isn't necessary to include
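
      The D-statistic test recommended above can be illustrated with a minimal sketch (Python; the four-taxon layout and genotypes below are hypothetical, and sites are assumed to be biallelic and already polarised against the outgroup):

      from typing import Iterable, Tuple

      def d_statistic(sites: Iterable[Tuple[int, int, int, int]]) -> float:
          """D = (ABBA - BABA) / (ABBA + BABA) over biallelic sites.

          Each site is (p1, p2, p3, outgroup) with 0 = ancestral allele,
          1 = derived allele.
          """
          abba = baba = 0
          for p1, p2, p3, out in sites:
              if out != 0:  # keep only sites where the outgroup carries the ancestral allele
                  continue
              if (p1, p2, p3) == (0, 1, 1):    # ABBA: P2 shares the derived allele with P3
                  abba += 1
              elif (p1, p2, p3) == (1, 0, 1):  # BABA: P1 shares the derived allele with P3
                  baba += 1
          return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

      # Hypothetical layout: (dingo1, dingo2, dog, wolf); D > 0 would suggest
      # excess allele sharing between dingo2 and the dog.
      print(d_statistic([(0, 1, 1, 0), (1, 0, 1, 0), (0, 1, 1, 0)]))  # 0.333...

      In practice, significance of D is then assessed with a block-jackknife across the genome (yielding a Z-score), which this sketch omits.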

    1. n hum

      Reviewer3-Borbala Mifsud

      Gigascience - Cell type-specific interpretation of noncoding variants using deep learning-based methods

      Sindeeva et al. have developed DeepCT, a convolutional neural network-based model that predicts sequence- and cell type-specific epigenetic profiles from available epigenetic data. The novelty of the approach is that the model can learn unmeasured epigenetic profiles in a given cell type if there is another cell type that has the target feature measured and shares one or more other epigenetic data types with the cell type it aims to predict in. The authors demonstrated that the framework works well and that the model learns both sequence context and cell type-specificity, and they used the model to predict which de novo variants, identified in the Simons Simplex Collection, have the highest effect in any of the cell types they studied. Focusing on one variant with a high predicted effect in glial cells, they suggested a mechanism whereby the variant, in a putative enhancer element within the SMG6 gene, reduces FOS binding, which might affect SMG6 expression in these cells. I have a few minor comments to clarify the applicability of this model.

      Minor comments:
      1. In Figure 2D the authors showed that adult heart and fetal heart cell state representations cluster together even though they did not share the measured epigenetic features. This is an interesting observation; however, one of them had ATAC-seq data while the other had DNase-seq data, which are highly correlated. It would be good to know how much this can be generalized to other cases. What is the level of correlation between two epigenetic features that is required for correct clustering of the cell states between two cell types that do not share epigenetic features? (A small illustration of this quantity follows this review.)
      2. In both Figure 2C and Supplementary Figure 1, the 2D visualization of the cell state representations shows that some cell types cluster well together while others do not cluster at all. Even those that cluster well, like "Digestive", "Kidney" or "Muscle" cells, have many cell types that do not cluster with the others. Apart from biological differences, could this also reflect cell types with lower-quality epigenetic tracks? How much does the quality of the tracks affect the model?
      3. Figure 3E shows that there are some points where the accuracy of the model is much higher when leaving out certain epigenetic tracks from the training of the model. Is that also related to the quality of those data, or is there a specific epigenetic feature for which the model consistently shows higher accuracy when the feature is left out?
      4. The authors used 1000 bp for the representation of the sequence, but the target sequence that is checked for overlap of the epigenetic features is only 200 bp. Does the model learn from the additional 800 bp?
      5. For the cell state tail the chosen emb_length was 32. Based on Supplementary Figure 1, I assume this is due to the number of cell type groups expected, but it would be good to include the rationale in the methods.
      6. For the GO term enrichment, what background was used? I would expect that the nearest genes of all de novo variants found in autism cases would show enrichment for similar GO terms.
      7. Pg. 11, last line should be "FOS transcription factor binding" instead of "grinding".
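
      Point 1 above asks what level of correlation between two epigenetic features is required for correct clustering. A minimal sketch of the quantity in question, the Pearson correlation between two binned signal tracks, using simulated placeholder data rather than values from the paper:

      import numpy as np

      def track_correlation(track_a: np.ndarray, track_b: np.ndarray) -> float:
          """Pearson r between two equal-length binned coverage vectors."""
          return float(np.corrcoef(track_a, track_b)[0, 1])

      rng = np.random.default_rng(1)
      shared = rng.gamma(2.0, size=10_000)           # shared accessibility signal
      atac = shared + rng.normal(0.0, 0.5, 10_000)   # hypothetical ATAC-seq bins
      dnase = shared + rng.normal(0.0, 0.5, 10_000)  # hypothetical DNase-seq bins
      print(track_correlation(atac, dnase))          # high r, so co-clustering is plausible

      Computing this r for pairs of cell types that do and do not co-cluster would give an empirical answer to the reviewer's question.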

    2. Interpretation

      Reviewer2-Yuwen Liu

      The manuscript entitled "Cell type-specific interpretation of noncoding variants using deep learning-based methods" interprets non-coding genomic variants by integrating single-cell epigenetic profiles with a convolutional neural network. The authors found that the CNN can capture cell type-specific properties and generate a biologically meaningful cell state representation by embedding the cells into the latent space. In general, the architecture of the convolutional neural network is novel and, to a certain extent, the model may be helpful for improving our understanding of genomic non-coding variant effects at the single-cell level.

      Major comments:
      1. In Figure 1C the authors intended to quantify how often unmeasured epigenetic marks can be inferred from available profiles. Indeed, epigenetic marks are correlated and sometimes co-located in the genome (Ernst and Kellis 2015). However, the connected graph is not strong and solid evidence for quantifying the predictive ability of the epigenetic marks. They should provide other compelling evidence or undertake more analysis.
      2. The authors used an empirical p-value threshold to detect the peak positions along the genome. The definition of the peaks for an epigenetic mark is crucial for the whole study. At the least, they should plot the distribution of p-values and explain in detail why they chose the empirical p-value threshold of 4.4. Furthermore, the false positive outcomes of the test should be corrected for (a minimal sketch of such a correction follows this review).
      3. Some epigenetic marks present broad modified regions of the genome; a 150 bp DNA sequence may not contain all the sequence determinants of such broad peaks. That may be why the prediction performance is poor for most epigenetic marks.
      4. In Figure 3D and Supplementary Figure 2, the majority of epigenetic marks presented very poor prediction performance. The authors should discuss the potential biological reasons that lead to this result and perform some analyses to preclude confounding factors.
      5. The authors should scrutinize their data because they also use some epigenetic profiles from heterogeneous tissues, which are composed of different cell types. These heterogeneous profiles may weaken the predictive power of the convolutional neural network model and impair the interpretability of the model.
      6. The authors only used SSC data to showcase their predictive power in pinpointing potential causal non-coding variants of ASD. I suggest using GWAS data from a wide variety of complex traits and diseases to generate a more thorough evaluation of the specificity of their prediction. Furthermore, the authors used predictions leveraging signals from 794 cell types in predicting non-coding causal variants for ASD. Including a large number of ASD-irrelevant cell types would likely introduce strong noise and make the results hard to interpret. I suggest the authors mask the epigenetic marks of ASD-relevant cell types (treating these cells as if they do not have available epigenetic data), and then use epigenetic marks from other cell types to predict non-coding variants with high impact on epigenetic marks in ASD-relevant cells. Then use this new prediction to rerun Fig 4A and 4B. Achieving good performance with this new analysis would better demonstrate the core advantage of their new model, i.e., predicting cell type-specific non-coding effects using epigenetic information from other cell types.

      Minor comments:
      1. The authors defined peaks as 150 bp genomic intervals; however, they use a 200 bp DNA sequence as the center when preparing the data for the CNN input.
      2. The resolution of the figures should be greatly improved.
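
      Major comment 2 asks for multiple-testing correction of the peak-calling p-values. A minimal Benjamini-Hochberg sketch, assuming a vector of per-interval p-values (the values below are placeholders; if the reported 4.4 is a -log10(p) threshold, it corresponds to p of roughly 4e-5):

      import numpy as np

      def benjamini_hochberg(pvals: np.ndarray, alpha: float = 0.05) -> np.ndarray:
          """Boolean mask of p-values kept at FDR <= alpha."""
          m = len(pvals)
          order = np.argsort(pvals)
          ranked = pvals[order]
          thresh = alpha * (np.arange(1, m + 1) / m)   # BH step-up thresholds
          passed = ranked <= thresh
          k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
          keep = np.zeros(m, dtype=bool)
          keep[order[:k]] = True                       # keep the k smallest p-values
          return keep

      pvals = np.array([1e-6, 3e-5, 4e-4, 0.02, 0.6])
      print(benjamini_hochberg(pvals))  # [ True  True  True  True False] at alpha = 0.05

      Plotting the p-value distribution, as the reviewer requests, would also show whether an empirical cutoff and an FDR-controlled cutoff select similar numbers of peaks.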

    3. Abstract

      Reviewer1-Fangfang Yan

      In this manuscript, Sindeeva and colleagues describe a novel neural network-based algorithm, DeepCT, to cluster epigenetically similar cell types and infer unmeasured epigenetic features, which can then be used to interpret non-coding variants. The manuscript is well structured and well written, and it is potentially interesting to a broad readership. Yet the algorithm itself, as presented in the manuscript, lacks rigor and thoroughness.

      Major points:
      1. Lack of comparison with competing methods.
      2. As the authors state themselves in the results and discussion, the performance of DeepCT on some features is very low, such as H3K9 and H4K20 monomethylation. Could the authors add more discussion and explanation of this almost-zero average precision?
      3. The authors say "statistically higher" or "outperforms" in a lot of statements but provide no statistical test results. For example, on page 8 the authors write: "This analysis confirmed that average cosine similarity for embeddings representing cell types from the same tissue was significantly higher than for embeddings of randomly selected cell types". On page 9: "we note that this baseline has performance metrics substantially higher than expected in random (baseline AP=0.417)." (A minimal sketch of a suitable test follows this review.)
      4. On page 8, the authors write "we show co-localization of muscle cells, as well as co-localization of digestive cells (Fig. 2C)". However, Figure 2C does not look quite convincing.

      Minor points:
      1. Providing high-resolution, vector-friendly figures would help a lot. I can barely see the content of the figures in the current version.
      2. A Jupyter notebook tutorial in the GitHub repo would be helpful for users to apply DeepCT quickly.
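
      Major point 3 asks for statistical backing of claims like "significantly higher". One simple option is a permutation test on the cosine-similarity comparison; a minimal sketch with placeholder values (not data from the paper):

      import numpy as np

      def permutation_pvalue(within: np.ndarray, between: np.ndarray,
                             n_perm: int = 10_000, seed: int = 0) -> float:
          """One-sided p-value for mean(within) > mean(between) by label shuffling."""
          rng = np.random.default_rng(seed)
          observed = within.mean() - between.mean()
          pooled = np.concatenate([within, between])
          n = len(within)
          hits = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)  # reassign similarities to groups at random
              if pooled[:n].mean() - pooled[n:].mean() >= observed:
                  hits += 1
          return (hits + 1) / (n_perm + 1)

      within = np.array([0.82, 0.77, 0.91, 0.85, 0.80])   # same-tissue cosine similarities
      between = np.array([0.41, 0.55, 0.38, 0.47, 0.52])  # random-pair cosine similarities
      print(permutation_pvalue(within, between))

      A test of this kind (or a Mann-Whitney U test) would let the authors replace "significantly higher" with an actual p-value.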

    1. Genetic recombinat

      Reviewer2-Fadi G Alnaji

      In this work, Sotcheff et al. provide a comprehensive and nicely written report on using the algorithm Virus Recombination Mapper (ViReMa) to identify and characterize different kinds of recombination events in different viruses. ViReMa was first reported - by the same group - in a separate paper (Routh et al., NAR, 2013) as a Python-based algorithm that, by accounting for the high-diversity nature of virus populations, can efficiently detect a wide range of virus recombination junctions within virus-derived Next Generation Sequencing (NGS) datasets. In this paper, the authors describe a couple of important updates to the original algorithm that enable ViReMa to cope with the new technological advances in NGS, including the read length and the significant increase in NGS library size and NGS-based experiments. Notably, the authors implemented a powerful validation approach by challenging the algorithm with different types of NGS-based data containing various types of junctions from different viruses to highlight the contextual computational and biological connotations. Overall, the paper used a robust analysis method and sufficient controls to clearly demonstrate the capacity of ViReMa to detect different types of recombinant molecules in different NGS datasets and viruses with high sensitivity and specificity. I only have very few minor comments.

      Minor comments:
      1) Since Fig 2E shows the gradual effect of the permissibility imposed by the error-density values, transforming the tables into figures, e.g., bar or scatter plots, could render the effect more observable visually.
      2) At lines 500-501, the authors found that the majority of reads mapped directly to the virus genome. Looking at the aligned read numbers, this dataset seems fairly large; I was wondering if the newly added function --Chunk could come into play in this scenario to speed up the analysis? If that is the case, then mentioning it would be valuable.
      3) At line 478, the authors state: "The 'Reads' columns describe the number of reads at each particular nucleotide position" - is this the average read number?
      4) Typos at line 206 ("red") and at line 397 ("(NL4-3)").

    2. Abstract

      Reviewer1-Diogo Pratas

      This article describes a pipeline (coded in Python) to detect and analyze recombination events of viral genomes using short-read FASTQ data. The paper presents some level of work accomplished by the authors. Usually, these types of articles hide numerous hours of coding and experimentation. Moreover, the authors present actual accomplishments that typically are unique architectural designs and important alternative ways to the area, including several results. However, many points require attention, namely:

      1) This pipeline expects exactly a specific virus. Hence, it uses a specific reference. However, this reference might not be the most representative because of the recombination events. Although it may be appropriate for smaller recombination events, detecting large-scale recombinations may face substantial difficulties. Moreover, since it is not prepared to deal with more significant variations (without de novo support), it is exclusively for targeted support. Therefore, the article could be more descriptive about this specificity.
      2) The article states that the improvement is also inspired by the read length increase that NGS is bringing. Also, the reported depth coverages are very high. So, why not use de novo assembly? For example, de novo assembly can be used to create scaffolds that can generate a reference sequence to be used afterwards by the aligners. Please comment on this.
      3) About the use of artificial poly-(A) tails to allow the mapper to align the reads: what happens when the read size is smaller than the k-mer hash of the aligner? Usually, repetitive A-sequence content appears in almost all samples because it has lower entropy and a higher probability of being generated. Wouldn't this create ambiguity, especially when there are very high depth coverages? Please comment on this matter.
      4) What is the minimum read size allowed to be considered a valid read for downstream analyses? Are the reads collapsed (in the case of paired-ends) or considered split? Although less probable, the trimming is fundamental for excluding "events" generated at the tips of the reads that very rarely overlap, depending on the nucleotide distribution.
      5) Are the reads clipped above a particular depth coverage? This feature is especially critical in repetitive viral content, such as hairpins or poly-(A) tails - removing mountains that become the most significant factor in sequence depth coverage.
      6) Have some of these viruses been enriched by targeted capture? Please provide this information in the manuscript. In some parts of the article, the coverage depth is very high: 300'000 - is this 300000? The simulated data used this coverage, which may not be entirely similar to reality. Also, allowing lower depth coverage helps to understand how the pipeline behaves. Moreover, some aligners may have problems in older versions with these depth values.
      7) It was unclear which types of duplications were flagged and whether the pipeline covers them.
      8) How does the pipeline deal with contaminants?
      9) This article states that the pipeline works for viral sequences. However, the tests used do not include large genomes. What about larger genomes? Some larger genomes contain repetitive content that provides additional reconstruction challenges. Therefore, the benchmark could have an example of this nature.
      10) While looking for recombination events, especially fusions with the host, what are the differences between sequenced viral integrations and fusion events at the analysis level? How do we distinguish both using this pipeline? Please comment on this.
      11) The authors state that the pipeline provides accurate results. Regarding the calculation of accuracy values, several good practices are recommended by many experts in the field (a minimal sketch of the usual detection metrics follows this review):
      a) https://www.sciencedirect.com/science/article/pii/S1386653220304339
      b) https://www.sciencedirect.com/science/article/pii/S1386653221000792
      12) Augmentation of existing pipelines in the area could guide the reader to other, sometimes complementary, solutions. See, for example:
      a) ASPIRE: https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-022-08649-8
      b) TRACESPipe: https://academic.oup.com/gigascience/article/9/8/giaa086/5894824
      c) V-pipe: https://academic.oup.com/bioinformatics/article/37/12/1673/6104816
      13) Line 113: "in range a of plant" - please correct.
      14) Line 120-121: Please rephrase.
      15) There are several acronyms; perhaps an abbreviation list would improve the reading of the article.
      16) Line 394: ART is defined as "antiretroviral treated," but this acronym overlaps with the ART simulator. Perhaps, in this case, adding another letter or changing it would remove the ambiguity.
      17) Line 753-754: Reference 27 is missing at least the title, journal, and year.
      18) Please consider adding ViReMa to Bioconda.
      19) I tried to clone the repository from SourceForge, and it came out empty. I had to download the package manually. I faced some problems, perhaps because it was not easy to follow. Possibly, users may face the same difficulties, which may be an obstacle to using the software. Please consider providing an elementary example for running ViReMa (already including a tiny read sample and reference, along with the code and command description - including how to run the GUI). Please consider using GitHub in the future.
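
      For point 11, the usual detection metrics are easy to state precisely. A minimal sketch, assuming sets of true and predicted junction coordinates (the coordinates below are hypothetical):

      def detection_metrics(truth: set, predicted: set) -> dict:
          """Sensitivity (recall), precision, and F1 for called junctions."""
          tp = len(truth & predicted)   # junctions found and real
          fp = len(predicted - truth)   # junctions called but not real
          fn = len(truth - predicted)   # real junctions missed
          sens = tp / (tp + fn) if (tp + fn) else 0.0
          prec = tp / (tp + fp) if (tp + fp) else 0.0
          f1 = 2 * sens * prec / (sens + prec) if (sens + prec) else 0.0
          return {"sensitivity": sens, "precision": prec, "F1": f1}

      truth = {(1042, 2981), (455, 7012), (3301, 3525)}      # hypothetical true junctions
      predicted = {(1042, 2981), (455, 7012), (9000, 9100)}  # hypothetical calls
      print(detection_metrics(truth, predicted))

      Reporting these quantities separately, rather than a single "accuracy", follows the good practices referenced in the two linked articles.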

    1. Nanopore seq

      Reviewer2-Hadrien Gourlé

      Thank you for a great piece of software and a great article. A few minor points:

      l166-168: The clause about structural variants is unclear to me, and perhaps to the reader. Please consider rephrasing.
      l209-210: I understand that the number of mapped reads is inadequate for abundance estimation of ONT data, but k-mers should not suffer from the same problem, should they? The number of k-mers matching a genomic region (or a genome) will scale appropriately with read length. I therefore have a hard time understanding why k-mers are presented as problematic in the first sentence of the Abundance Estimation paragraph. (A minimal sketch of this scaling argument follows this review.)
      l379: What do you mean by "pronounced"? Please consider rephrasing.
      l438: geomes -> genomes
      Figure 2: Panel 2 should have the same theme as the other subplots of the figure.

      Software comments:
      - I'd like to be able to run the examples present in the documentation out of the box: please add a link and instructions on how to download and unpack the Zymo community.
      - Speaking of the Zymo community, why do Campylobacter and S. cerevisiae have a different path than the other genomes in the examples?
      - If you plan not to update the pre-trained error models to a more recent version of scipy, please pin scipy 0.22.1 in the bioconda recipe, so that users can use pre-trained models out of the box.
      - Please make a new release of the software including pull request !67.
      - In a future version of NanoSim, I urge you to consider gathering all scripts into subcommands (i.e., nanosim simulate [--params] instead of simulate.py [--params]). I realise this is a big breaking change, but it is good practice and avoids polluting a user's PATH with many scripts. This change is, in my opinion, not required for the paper to be published, but something I'd like you to consider for a future release.
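
      The scaling argument in the l209-210 comment can be made concrete: summing mapped bases (or, equivalently, counting matching k-mers) weights each read by its length, whereas raw read counts do not. A minimal sketch with hypothetical read lengths:

      def abundance_from_bases(mapped_read_lengths: dict) -> dict:
          """Relative abundance per genome from summed mapped read lengths."""
          totals = {g: sum(lengths) for g, lengths in mapped_read_lengths.items()}
          grand = sum(totals.values())
          return {g: t / grand for g, t in totals.items()}

      # Hypothetical mapped reads: genome A receives few but very long reads.
      reads = {"genome_A": [30_000, 25_000], "genome_B": [1_000] * 20}
      print(abundance_from_bases(reads))  # A ~0.73 of the bases despite fewer reads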

    2. ABSTRACT

      Reviewer1-Andre Rodrigues-Soares

      This manuscript is of very good quality - as such, my review is quite limited. I would like to congratulate the authors on the development of the software and its comprehensive and extensive benchmarking.

      On l. 64: Given the range of samples that can currently be sequenced using Nanopore sequencing, and the recent focus on short reads as opposed to the previous highlight given to long reads, this statement is out of date. I would recommend describing instead the current range of sizes that can be sequenced via Nanopore.

      After reading the manuscript, I am, however, left to wonder how the authors would approach the different error rates, namely of the different types of flowcell (R9 vs R10). It can be assumed that error rates of R9 sequences might indeed average 10% as stated in the manuscript, but with the advent of R10 flowcells (more recently R10.4.1) and the respective updated chemistries, the error rates have decreased significantly. One specific error rate of 10%, as stated in the manuscript, can't be assumed for the sequencing technology as a whole anymore. While I don't think this should be central to the development of the tool, I think this should be addressed in the manuscript in some way.

      I would also have liked to see distributions of PHRED quality scores in the simulated reads in the analyses conducted in the manuscript. Although the assembly and genome recovery statistics, namely in Figure 4, indicate these should have the expected distributions, I would have liked to understand how quality scores are distributed in the generated reads. (A small sketch of the relation between PHRED scores and error rates follows this review.)

      If the two issues above are addressed in the manuscript, I will be happy to recommend its publication. I have no further reviews to add, as the manuscript covers all other factors I would think could be worrying regarding a tool simulating Nanopore metagenomic reads.
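
      The requested quality-score check rests on the standard PHRED relation P = 10^(-Q/10), under which an R9-like ~10% error rate corresponds to roughly Q10. A minimal sketch with hypothetical per-base quality values:

      import statistics

      def mean_error_rate(phred_scores: list) -> float:
          """Mean per-base error probability implied by a read's PHRED scores."""
          return statistics.fmean(10 ** (-q / 10) for q in phred_scores)

      read_quals = [12, 15, 9, 20, 18, 11]         # hypothetical per-base Q values
      print(f"{mean_error_rate(read_quals):.3f}")  # ~0.054, i.e., roughly Q13 overall

      Applying this per read across a simulated FASTQ would give exactly the distribution the review asks to see.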

    1. Background

      Reviewer2-Sveinung Gundersen

      The paper describes the FAIR Data Station, a lightweight application written in Java that facilitates FAIR-by-design by allowing the collection of structured metadata from the first phase of a project. To this end, the authors have applied and extended the ISA metadata framework to form a core data structure wherein attributes from a library of 40 frequently used minimal information checklists can be placed. The FAIR Data Station contains tools for generating and validating Excel metadata files, as well as conversion to RDF format and to a European Nucleotide Archive (ENA)-compatible XML metadata file for submission.

      General comments:
      The FAIR Data Station (FAIR-DS) seems to be a useful application to help life science researchers collect and structure metadata according to the FAIR principles. The software is based on core community standards, ontologies and checklists. As for deposition databases, the software currently seems to only integrate with ENA, which, on the other hand, is a central deposition database. The three main contributions of FAIR-DS are, to my mind: A) the metadata schema that has been carefully constructed by the authors, B) the validation functionality of metadata against said schema, and C) functionality for conversion of validated metadata into RDF and deposition formats. There are, however, some architectural choices and technical limitations in the implementation that I have issues with and which make me uncertain whether the software shows enough "innovation in the approach, implementation, or have added benefits", as mentioned in the "Instructions for Authors" (https://academic.oup.com/gigascience/pages/technical_note). I would therefore invite the authors to address the following issues:

      1. The authors state that "the FAIR-DS uses an extended version of the original three-tier Investigation, Study, Assay (ISA) metadata framework [https://isa-tools.org]". This leads the reader to think that the software applies the full ISA Abstract Model (https://isa-specs.readthedocs.io/en/latest/isamodel.html), which is not correct. Only the top-level objects and a few attributes are retained. It is also not clear why the authors have found it necessary to add additional, custom object types, such as "Observation unit", explained as "the "object" from which the measurements are taken". The ISA model includes an attribute "source material" which seems to overlap. The authors have also added "sample" as a top-level object, even though there is already a "sample" attribute in the ISA model. It is unclear to me what is improved by adding new object types and whether any such improvements will outweigh the obvious drawbacks that come with not following a community standard for the metadata schema.
      2. The FAIR-DS makes use of Excel files as an intermediate format for the collection of user metadata. While the feature set of Excel and its familiarity to most users are good arguments for its adoption, I miss a discussion of the fact that a commercial product is included in the core architecture of the system. FAIR principle I1 promotes that "(Meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation". As Excel is only an intermediate metadata format, while RDF is used for the final output, the FAIR-DS does not directly break principle I1; however, I think the choice of a commercial file format does not follow the "spirit" of FAIR. I see no reason why CSV could not be included as an alternative to Excel, and the authors could recommend an Open Source application as an alternative for users who wish their entire software suite to remain in the Open Source domain.
      3. The metadata schema is not represented in a standard schema format, such as JSON Schema, Frictionless table schema, or similar. Using a shared format for representing the metadata schema makes it possible to make use of general validation libraries (such as the ELIXIR Biovalidator: https://doi.org/10.1093/bioinformatics/btac195). Shared schema formats also allow for reuse of the schema in other contexts/software. In FAIR-DS, the metadata schema seems to be primarily represented implicitly in the Java source code that generates the Excel files as a secondary representation of the schema. Even though the FAIR principles might not directly include a recommendation to share the metadata schema in a FAIR way, one can argue that this falls under R1.3: "(Meta)data meet domain-relevant community standards". It would in any case be in "the spirit of FAIR". (A minimal sketch of schema-based validation follows this review.)
      4. As a consequence of issue 3, the validation functionality is also specified implicitly in the Java source code and does not seem to reuse much external validation functionality. I particularly miss validation of ontology terms against the relevant ontologies, as well as more stringent validation of PMIDs, DOIs etc., preferably using CURIEs instead of URLs. All of these data types only seem to be validated as general strings, which is of limited use. Users might, for instance, introduce spelling variants of ontology term labels without this being detected by the validator.
      5. Due to the hard-coded nature of the metadata schema, the validator and the conversion functionality, I suspect the authors might not have designed the system flexibly enough to allow for easy updates based on changes in the external dependencies, i.e. the minimal information checklists, ontologies, or deposition schemas. For instance, EMBL-EBI, who are hosting ENA, are moving towards requiring the submission of sample data/metadata to BioSamples prior to submitting the metadata to ENA, which might have consequences for the checklist requirements. Also, ontologies in particular are known to be updated regularly.
      6. I am not convinced that the authors have done a careful enough search of the literature to list relevant software solutions for comparison. For instance, the FAIRDOM Seek solution (https://doi.org/10.1186/s12918-015-0174-y) is not cited directly, although the functionality seems to be highly overlapping.
      7. The manuscript would benefit from careful proofreading of the language and grammar.

      When addressing these issues, I would urge the authors to better demonstrate "innovation in the approach, implementation, or ... added benefits".
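
      Issue 3's suggestion can be illustrated with a minimal sketch of schema-based validation in Python: a checklist field expressed as JSON Schema and checked with a generic validator. The field names are illustrative, not the actual FAIR-DS schema:

      from jsonschema import ValidationError, validate  # pip install jsonschema

      SAMPLE_SCHEMA = {
          "type": "object",
          "properties": {
              "sample_id": {"type": "string", "pattern": "^[A-Za-z0-9_.-]+$"},
              "organism": {"type": "string"},
          },
          "required": ["sample_id", "organism"],
      }

      record = {"sample_id": "S-001", "organism": "Escherichia coli"}
      try:
          validate(instance=record, schema=SAMPLE_SCHEMA)
          print("record passes the schema")
      except ValidationError as err:
          print("invalid metadata:", err.message)

      A schema declared this way can be shared and reused independently of the Java implementation, which is the reviewer's point.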

    2. Abstract

      Reviewer1-Dominique Batista

      Overall a strong paper that creates a new bridge between the ISA model and the FAIR principles. A few points should be addressed:

      - page 2: "As one Investigation can have several research lines, each Study layer has a unique identifier ...": how do you generate these identifiers and control their uniqueness, persistency and stability? Are these identifiers resolvable? "As an extension to the original three-tier ISA-model in between Study and Assay two additional layers of information were added Observation unit and Sample": would you clarify what problems were addressed? More generally speaking, does the FAIR-DS integrate with existing implementations of the ISA model? Did you consider a conversion and submission to external systems such as the ones mentioned in the conclusion? The text for figure 1 is good, but the corresponding text in the core of the document is hard to read and understand. "Model specific attributes are optionally selected by the user": does this mean users can add extra fields on top of the provided packages, or that they have to select fields within the given package?
      - page 3: "In addition, we included regular expressions obtained from the ENA checklist, such as "(0|((0.)|([1-9][0-9].?))[0-9]*)([Ee][+-]?[0-9]+)? (g|mL|mg|ng)" for sample volume or weight for DNA extraction": good point. Is there a mechanism for users to add new regexes? (A minimal sketch of such regex-based validation follows this review.)
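
      Regex-based field validation of the kind quoted above is straightforward to wire up; a minimal sketch using a simplified stand-in for the ENA volume/weight pattern (the exact production regex should be taken from the ENA checklist itself):

      import re

      # Simplified stand-in: a non-negative decimal number, optional exponent,
      # then a unit from the ENA list.
      VOLUME_RE = re.compile(
          r"(0|((0\.)|([1-9][0-9]*\.?))[0-9]*)([Ee][+-]?[0-9]+)? (g|mL|mg|ng)"
      )

      def validate_volume(value: str) -> bool:
          """True if the value matches the expected 'number unit' format."""
          return VOLUME_RE.fullmatch(value) is not None

      print(validate_volume("1.5 mL"))  # True
      print(validate_volume("1,5 mL"))  # False: comma decimal separator rejected

      Allowing users to register new patterns of this form would answer the reviewer's question about extensibility.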

  3. Mar 2023
    1. Abstract

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.78), and has published the reviews under the same license. These are as follows.

      Reviewer 1. Takeshi Takeuchi

      Is there sufficient data validation and statistical analyses of data quality?

      Scaffolding with the Chicago and Hi-C libraries did not significantly improve the assembly. In general, Hi-C scaffolding can produce a chromosome-scale assembly. I would suggest that the authors describe the quality of the Chicago and Hi-C sequence data. For example, the mapping rates of the Chicago/Hi-C reads to the assembly should be informative.
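
      The mapping-rate check suggested above is simple to compute once the Chicago/Hi-C reads have been aligned to the assembly. A minimal sketch using pysam (the BAM file name is hypothetical):

      import pysam  # pip install pysam

      def mapping_rate(bam_path: str) -> float:
          """Fraction of reads in the BAM that are mapped."""
          mapped = unmapped = 0
          with pysam.AlignmentFile(bam_path, "rb") as bam:
              for read in bam.fetch(until_eof=True):  # no index required
                  if read.is_unmapped:
                      unmapped += 1
                  else:
                      mapped += 1
          return mapped / (mapped + unmapped)

      print(f"Hi-C mapping rate: {mapping_rate('hic_vs_assembly.bam'):.1%}")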

      **Reviewer 2. Yang Zhou **

      This is a fascinating study on the assembly of the first deep-sea scleractinian coral, Lophelia pertusa. The manuscript is well-written and easy to follow. I have gone through your manuscript and would like you to address the following concerns/comments before publication.

      Line 47: Does "1.2 454 pyrosequencing reads" mean 1.2 Gb of 454 pyrosequencing reads?
      Line 51-52: Please add some references.
      Line 72: As far as I know, the DNA extraction process for stony corals is affected by their calcium carbonate skeletons. How did you deal with this problem during the DNA extraction?
      References: Please double-check the references for errors: italics for species names, capitalization of journal titles, and so on.

    1. Abstract

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.79), and has published the reviews under the same license. These are as follows.

      **Reviewer 1. Xuanmin Guang **

      Han et al. carried out a genome assembly of Aplectana chamaeleonis, analysed the genome's repeat content, and annotated the genome. They described the gene set's function and performed a PSMC analysis. The genome is a key resource for research, but there are many mistakes in the manuscript. I suggest the authors revise the manuscript carefully; the grammar and content should be re-organized. Some suggestions are listed below:

      1. In the context part, the first two sentences lack continuity in logic; please change them.
      2. The authors didn't mention in the text which sequencing platform they used; I think this should be added.
      3. The average sequence length in the table is 496 kbp, but the authors report it as 496 Mbp; this is a mistake.
      4. In Table 1, why aren't there any gaps in the scaffolded genome?
      5. The authors said that "This suggests that the significant expansion of repeating elements is an important manifestation of species differences". It is unreasonable to reach this conclusion based only on your genome repeat analysis.
      6. In the text they claim that 12,887 functional genes were annotated; I want to know how many genes they annotated in total. Please add this to the manuscript.
      7. Too many decimal places have been used in Table 2.

      Re-review: The authors revised the paper according to the concerns in my report, and the paper can be accepted now.

      Reviewer 2. Jianbin Wang

      In this manuscript, Hou et al. present a genome assembly for Aplectana chamaeleonis, a parasitic nematode that infects amphibians. They report a genome of ~1 Gb, most of which is composed of repetitive elements. This genome draft is significant as it is the first assembled for this or any Cosmocercidae species. It may provide insights into the evolution of the nematodes – if it is thoroughly compared to other nematode genomes. It may also allow for better species identification than previous morphological methods. While the conclusions on genome size and composition described in the paper appear sound, there are many questions that go unanswered.

      The reasoning behind why this research was undertaken is not clear. What is the ecological or agricultural and economic impact of the species? How would the genome provide a better understanding of this species? More specific information is also needed to better understand the genome. How many chromosomes does this species have? Is there any cytology to help answer this question? Any notion of sex chromosomes vs. autosomes? This genome is much bigger than those of most of the assembled parasitic nematodes. The authors did not make any effort to explain what might contribute to this. Could the big size be due to contamination in the samples used? Judging from the images, it does not look very convincing to me how clean the sample was for the genomic DNA extraction. Overall, there is a lack of in-depth data analysis and comparison between this genome and the many other available nematode genomes.

      About the overall presentation and organization of the manuscript: context is often lacking from the results. How do these results compare to related species? How does Figure 4/the demographic history fit into this story? A round of general proofreading needs to be done for grammar, punctuation, capitalization, italics, etc. – see below for some specific examples.

      In the Abstract, the repeat content in the Ascaris genome is 72.45%, and the total length is more than 742 Mb. The math does not add up (1.1 Gb x 72.45% = 797 Mb). Or do you mean the Aplectana genome? It should say the total length of repeats. And why is this an "Ascaris" genome? Ascaris is a parasite that infects pigs and humans.

      Some sentences need addressing/clarification:
      Page 1. "and their diversity is also very high, many of which are above the national second-level protected animals" – what is the significance of this/how are these ideas related?
      Page 2. "Through the characteristics of the genome sequence, it shows that the genome is a highly continuous genome" – need to be more specific, with metric and data.
      Page 4. "In addition, the enrichment of A. chamaeleonis genes in all metabolic pathways was found in twelve metabolic pathways." – not sure what you are trying to say about the "all" or the 12 pathways.

      Figure 1. Images need scale bars. In A, what is the mat of material? For A, crop out the area around the worm and enlarge the worm image. In B, the worm is dark/shows little contrast or detail. In C, label which image is the head and which is the tail (or specify left vs. right in the legend text). The images in B and C look like they were taken using a cell phone pointed at a computer monitor – are there higher-quality images?
      Table 1. Why is the data in all four columns the exact same? What is the difference between each column? This appears to be a mistake made when preparing the table. Very sloppy and unfortunate!
      Table 2. Significant figures on the %s? Is the "other" category needed (same for Fig 2C)?
Table 3 – Check text spacing (e.g. % in genome). Figure 3 – Recommend to redo the spacing of figures, increase size of text in each part of this figure. Need to refer to parts of figures in the body/text (Fig 3a vs. 3b vs. 3c). Can 3b be sorted from most number of genes to least? Figure 4 is not referenced in the body text. Consider merging Fig 4 with Fig 3. Figure 4 is lacking a description in the legend – what are the grey lines, definition of LGM? The x-axis scale and orientation are unintuitive – is the present on the left and the past on the right? Past should be on the left. Methods Genomic DNA was purification for Long-reads libraries preparation – should say purified What is the meaning of “The generation we used was 0.17” – what generation is this? and “the mutation rate was 9×10-9” needs units. The sentence “we used the pairwise sequentially Markovian coalescent (PSMC) model to estimate the effective population size of A. chamaeleonis within last million years.” should be moved to the section immediately after its current location.
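      To make the units question above concrete, here is a minimal sketch, in Python, of the standard PSMC rescaling step; it assumes the generation time of 0.17 is in years and the mutation rate of 9×10-9 is per site per generation, which is exactly what the authors would need to confirm. The function and example values are illustrative only.

```python
# Minimal sketch of PSMC output rescaling (not the authors' pipeline).
# Assumptions to be confirmed: generation time in years, mutation rate
# per site per generation, and the default PSMC bin size of 100 bp.
MU = 9e-9    # mutation rate per site per generation (assumed units)
GEN = 0.17   # generation time in years (assumed units)
BIN = 100    # PSMC bin size 's' (the psmc default)

def rescale(theta0, t_scaled, lambda_scaled):
    """Convert PSMC's scaled estimates into years and effective size."""
    n0 = theta0 / (4 * MU * BIN)     # baseline effective population size
    years = 2 * n0 * t_scaled * GEN  # scaled time -> generations -> years
    ne = n0 * lambda_scaled          # scaled lambda -> effective size
    return years, ne

print(rescale(theta0=0.05, t_scaled=0.01, lambda_scaled=1.2))
```

      Halving the assumed generation time halves every point on the time axis, which is why explicit units matter for interpreting the demographic history in Figure 4.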

      Re-review: Overall, the writing has been improved in several places and is somewhat clearer than in the previous draft. These changes are mostly related to the minor concerns raised. However, many questions related to the broader impact of this research and how the new genome compares to other nematode species remain unanswered. The following comments were largely ignored.
      1. The reasoning behind why this research was undertaken is not clear.
      2. What is the ecological, agricultural, or economic impact of the species? How would the genome provide a better understanding of this species?
      3. More specific information is also needed to better understand the genome. How many chromosomes does this species have? Is there any cytology to help answer this question? Any notion of sex chromosomes vs. autosomes?
      4. This genome is much bigger than those of most assembled parasitic nematodes. The authors did not make an effort to explain what might contribute to this.
      5. Overall, there is a lack of in-depth data analysis and comparison between this genome and the many other available nematode genomes. How do these results compare to related species?
      6. About the overall presentation and organization of the manuscript, context is often lacking from the results. Another round of general proofreading needs to be done for grammar, punctuation, capitalization, italics, etc. – see below for additional specific examples. The authors, not the reviewers, need to make a concerted effort to read and proofread their own manuscript.

      In addition to the big-picture points raised above, several other issues that were either brought up last time or are new need to be addressed:
      1. Not sure Table 1 is presented the right way. The columns and rows should be reversed, I think. If so, there will be only one column – do you still need a table?
      2. "Through the characteristics of the genome sequence, it shows that the genome is a highly continuous genome." Unclear. The authors mentioned that they fixed this in their response to the reviewers, but no change was seen in the updated manuscript.
      3. "The generation we used was 0.17, and the mutation rate was 9×10-9 [8]." These numbers need units after them. Again, this was addressed in the response but not written out or clarified in the revised text.
      4. "In addition, the enrichment of A. chamaeleonis genes in all metabolic pathways was found in twelve metabolic pathways." Not sure what the authors were trying to say about the "all" or 12 pathways. Still confusing.
      5. Photographs of the worms are still lacking scale bars.
      6. Make sure that all genus and species names are italicized (in the body text and in Fig. 3).
      7. Make sure the section heading format is consistent (check capitalization).
      8. "The results showed that 91 % of the sequences were compared to Arthropoda (1898/2088) and 7 % were compared to Arthropoda (122/2088)." Both of these say Arthropoda – is that a mistake? Also, "compared to" is not the correct wording; maybe "similar to"?
      9. The LGM acronym is defined after the second use of "last glacial period" but should appear after the first use. Also, LGM stands for last glacial maximum, not period. This should be corrected.

    1. Abstract

      Reviewer1: Lim Heo

      In this manuscript, the authors describe a new platform, 3D-Beacons, which is an interface for accessing multiple sources of computational protein models (e.g., AlphaFold DB, SWISS-MODEL) and experimentally determined structures. As the number of protein sequences increases much faster than the growth of experimental structure databases (e.g., the PDB), computational protein structure models are great alternatives for proteins that do not have experimentally determined structures. Nowadays, many accurate protein models have become available thanks to decades of progress in template-based modeling techniques and recent advances in de novo protein structure prediction methods using machine-learning approaches. However, those model sources were scattered across their own databases, so there have been difficulties in accessing these models. Thus, in my opinion, the development of a new database or platform, 3D-Beacons, for accessing various computational models is a great movement in the structural biology field. The manuscript describes the platform and some technical details well. I have a few minor comments on this work.
      1. I recently noticed that RCSB PDB also made it possible to search computational protein models by extending its web interface. The database included ~1 million models from AlphaFold DB and ~1,100 models from ModelArchive, which are main sources of this work as well and are maintained by some of the authors of this work. Even though the number of models and the diversity of the sources accessible via the RCSB PDB interface are smaller than in this work, I think the purposes of both works are similar. As there are some overlaps between this work and the RCSB PDB interface in terms of data providers (and authors), what is the significance of this work compared to the RCSB PDB interface?
      2. Most computational models rely on a few data providers: AlphaFold DB, the SWISS-MODEL Repository, and AlphaFill (for ligands). In my opinion, it would be better to make the platform richer by recruiting more diverse data providers with different points of view (e.g., conformational ensembles) or different modeling approaches (e.g., machine learning-based approaches with pre-trained protein language models such as OmegaFold). Is there any plan for such progress or promotion of the platform?
      3. It would be better to have a guide for model selection when multiple models are returned for a UniProt ID. Alternatively, providing universal quality-assessment scores for models would be an option (via an additional data provider). Currently, pLDDT scores are provided, but they are difficult to compare between modeling methods as they were trained independently for each method.
      4. I was able to search on the 3D-Beacons web page a few days ago. However, I could not at the moment of writing these review comments (Sept. 13, 6 p.m. EDT).

      Reviewer2: Carlos Rodrigues

      This manuscript describes in detail the 3D-Beacons platform/initiative, which aims to facilitate access to 3D data as well as meta-information about experimentally determined and computationally predicted protein structures. This resource is very valuable for the broader scientific community at a time when the amount of available protein structure data is rapidly increasing and many structures may be available for the same protein. A minor correction is required on page 7, where the authors describe 4 different types of protein structures: Experimentally determined, Template-based, Ab initio, and Conformational Ensembles. In many examples available on the website (e.g., https://www.ebi.ac.uk/pdbe/pdbekb/3dbeacons/search/P15056), there is one extra category: structures derived from "Deep learning" methods. I am assuming this comprises a subset of the Ab initio structures, which the authors decided to keep as a separate category after submitting this study for publication. The main text should be updated to reflect this change, as should Figure 4.
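      As a companion to the search-interface comments above, here is a minimal sketch of what a programmatic lookup against the 3D-Beacons hub might look like. The endpoint path and response field names are assumptions based on the hub's public API documentation rather than on the manuscript itself, so they should be checked against the current spec.

```python
# Minimal sketch of a 3D-Beacons hub query for one UniProt accession.
# The URL pattern and JSON fields below are assumptions, not guaranteed.
import requests

def fetch_models(uniprot_acc):
    url = ("https://www.ebi.ac.uk/pdbe/pdbe-kb/3dbeacons/api/"
           f"uniprot/summary/{uniprot_acc}.json")
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    for entry in resp.json().get("structures", []):
        summary = entry.get("summary", {})
        print(summary.get("provider"), summary.get("model_category"))

fetch_models("P15056")  # the BRAF example used in the review above
```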

    1. Abstract

      Reviewer1: J. Harry Caufield

      This manuscript by Charbonneau et al. details efforts to address challenges in enhancing the value of metadata among projects in the NIH's Common Fund Data Ecosystem. They specifically detail how a new metadata model was developed and deployed to unify data properties across projects. Assembling such a model is a major accomplishment and a necessary step in promoting data reuse. Applying the model is another commendable achievement. The manuscript text undersells the value of these efforts. How has the value of data in the CFDE improved due to implementation of a unified metadata model and new infrastructure? The authors clearly delineate the challenges in searching CFDE data; these issues frequently appear in efforts toward improving biomedical data FAIRness and are directly relevant to the core challenges identified by Wilkinson et al. (2016) in their FAIR guiding principles. Much more emphasis could be placed on the overall impact of a consistent metadata model, whether within the CFDE alone or in the broader realm of bio-data management.
      Major issues:
      1. As noted on page 11, "All C2M2 controlled vocabulary annotations are optional". Data producers will use terms outside the controlled vocabulary as needed, and are unlikely to consult any CFDE working groups in every instance. Is there some automated system for term normalization in place? How will data producers be encouraged to preferentially use controlled terms? Are they warned during submission, as noted on page 22 regarding data contents?
      Minor comments:
      2. The first example of the mismatch between user expectations and actual results of searching for Common Fund program data is very illustrative. I appreciate how it notes that even instances like matching Dr. Phil Blood's name in a search can complicate Findability.
      3. The abstract could include some brief description of the broader relevance and impact of the metadata model, including its potential for use outside the CFDE.
      4. On page 5, the sentence "Thus, a researcher interested in combining data across CF programs is faced with not only a huge volume, richness, and complexity of data, but also a wide variety, richness, and complexity of data access systems with their own vocabularies, file types, and data structures" feels somewhat redundant and could benefit from some editing.
      5. The structure of Figure 1 (or should this be Table 1?) is confusing. The general idea is clear – metadata types, properties, and formats are inconsistent across projects – but the two-column format presents issues with direct comparison.
      6. It is interesting that, among all values presented in Fig. 1, just one includes a CURIE (HMP's ENVO:02000020). This may be worth further comment, as it is striking that few of these projects have adopted unique identifiers within their metadata schemata.
      7. Slightly more detail regarding the interviews with Common Fund programs would be helpful for understanding how these interactions contributed to the process. Were interviews primarily with PIs? Were several prominent issues repeatedly discussed in the context of multiple projects?
      8. Is the C2M2 master JSON schema publicly accessible?
      9. Some redundancy is present between the first and second paragraphs under the heading "Entities and associations are key structural features of the C2M2" – e.g., core entities and container entities are both described twice.
      10. In Figure 2, some lines connecting tables are very close to the edge of the figure borders and are difficult to see as a result.
      11. Is there a mechanism for dealing with obsolete terms as the ontologies contributing to the controlled vocabulary change? In the event that the NCBI Taxonomy renames a genus, for example, how will CFDE metadata change (if at all)?

      Reviewer2: Carole Goble

      The article is a very useful contribution to the growing number of metadata models and data catalogues in the life science data ecosystem. The recent NIH mandates on data sharing emphasize the need for findability of datasets, and the need to operate within a federation and ecosystem recognises the reality of independent data centers and legacy data collections. The paper states the context of the CFDE well, setting up the need for a centralized portal capable of ingesting, indexing, searching, and supporting cross-dataset comparisons of datasets from different, independent data centers without the need for those centers to move, reformat, or rehost their data. This is a common pattern that many data infrastructure providers will recognise. The incremental approach that supports minimal uploads and respects local DOI implementation is a pragmatic one that, I suspect, has made onboarding the data centers feasible. The insight that mapping to common ontologies does not actually lead to harmonised datasets, nor does it support search, is a useful lesson that resonates and is worth reiterating (although it is already well known). Given that the approach is tabular, Frictionless Data makes sense. The process of working with the Centers is interesting, as is the choice of three core entities. Some more discussion on why these three, and only these three, would be appreciated.
      The ingest pipeline and process is not so clear.
      - It seems that each Center is required to map its datasets to the current C2M2 model in 48 TSV files, in a data package that is then uploaded to the catalogue and ingested into the portal's database. Is the data package a complete re-upload each time, or is it additive? There are hints in the text that it is a replacement each time.
      - What is the cost and complexity of this mapping and upload borne by the Centers? Any insights would be valuable. Is any tooling provided to help beyond the documentation?
      - Figure 5 could be improved to include the data that flows between the steps, and the actors. Could Figures 3 and 5 be merged?
      - If the datasets are re-uploaded afresh each cycle, how are between-release analytics managed? By the use of the PIDs? Are there any restrictions on what cannot be changed between releases?
      - As the datasets can be incrementally improved with each release, are there any trends between releases that indicate changes in metadata enrichment? On page 18 you state that "DCCs get better at using the C2M2".
      - The data package needs a clearer description: the relationship between the TSV files, the Frictionless Data JSON, and BDBag is of interest to many in the community and warrants a more thorough discussion (a sketch of such a descriptor follows this review).
      The portal:
      - Why were these three basic kinds of search chosen? Were user stories collected from the listening tour?
      - It would be helpful if there were some indications of the use of the catalogue by users rather than just the ingest and publishing pipeline.
      Page 5: the argument is made that reusing Common Fund data for cross-cutting analysis is challenging and requires the hiring of dedicated bioinformaticians ("at considerable cost to NIH"). How does making the datasets available through a catalogue relieve the burden on skilled bioinformaticians to analyse data? The data still needs to be processed. Hasn't the burden just shifted to the Centers to prepare the TSV files for the ingest pipeline?
      Page 5 claims that the sociotechnical framework of the CFDE is a self-sustaining community. How? Working groups have been established, but to what extent are these managed by the community rather than by the dedicated action of developing the portal? What is the sustainability of the portal?
      The easy expansion of the C2M2 seems to depend on two things: the incorporation of domain-specific vocabularies and the cycle of ingest releases at time points. Does this latter point constitute expansion that is easy? It would require each data center to adapt to the new table templates.
      Page 9: Containers are mentioned, but it is not clear what the difference is between a container and a collection. Containers do not seem to appear again in the browsing.
      Page 18: the visibility of Biosamples changing over time in Figure 4 wasn't so clear to me.
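      To make the data-package discussion above more concrete, here is a minimal sketch of a Frictionless-style descriptor wrapping a single C2M2-like TSV table. The composite (id_namespace, local_id) key follows the convention described for C2M2, but the resource and field list here are illustrative, not the official 48-table schema.

```python
# Minimal sketch of a Frictionless Data package descriptor for one
# C2M2-style TSV resource; names and fields are illustrative only.
import json

descriptor = {
    "profile": "tabular-data-package",
    "name": "example-c2m2-submission",
    "resources": [
        {
            "name": "biosample",  # one entity table out of the full set
            "path": "biosample.tsv",
            "profile": "tabular-data-resource",
            "dialect": {"delimiter": "\t"},
            "schema": {
                "fields": [
                    {"name": "id_namespace", "type": "string"},
                    {"name": "local_id", "type": "string"},
                ],
                "primaryKey": ["id_namespace", "local_id"],
            },
        }
    ],
}

with open("datapackage.json", "w") as fh:
    json.dump(descriptor, fh, indent=2)  # descriptor travels with the TSVs
```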

    1. Abstract

      Reviewer1: Jianyi Yang

      The authors present predicted structures for the proteomes of reef-building corals. 8,382 protein sequences were obtained by experiments and fed into ColabFold for structure modeling, generating 8,166 structure models. Overall, this is a valuable study toward the understanding of reef-building corals. Here are a few comments for possible improvement.
      1. Proteome-wide structure prediction has become trivial nowadays with AlphaFold2 and other methods. I think the major contribution of the current study is the determination of the proteome sequences rather than the structure prediction. Thus, I would encourage the authors to spend more effort analyzing the sequences, for example, how the sequences cover the Pfam families, how redundant the sequences are, how much they overlap with the sequences in UniProt, etc.
      2. It may be meaningful to compare the predicted structure models to the SCOP or CATH database to see the fold distribution and whether there is any new fold.
      3. What happened to the ~200 proteins for which ColabFold failed to work?
      4. I suggest adding a browse function to the server for browsing the data.

      Reviewer2: Brendan Robert E. Ansell

      Zhu and colleagues report the generation of predicted protein structures via AlphaFold for three coral species: A. muricata, M. foliosa, and P. verrucosa. Mass-spec analysis of the proteome of the three species is also performed. The authors describe a handful of structures that appear to be orthologues across the species and may have functions as pore-forming toxins, in calcium deposition, and in host-symbiont interactions. The generated protein structures will be of use to the scientific community, and the web server is quite good.
      Major comments:
      Please ensure that the entire structure repository is available for unrestricted download as per http://corals.bmeonline.cn/prot/release.php
      Incorrect use of 'co-expression'. I assume the authors mean protein orthologues (i.e., homologues across species). Please replace with 'homologous proteins' throughout, including in http://corals.bmeonline.cn/prot/release.php
      The link from 'CoralBioinfo' gives a 404 error: http://corals.bmeonline.cn/index.php
      In http://corals.bmeonline.cn/blast/, please include a link back to http://corals.bmeonline.cn/prot/
      Although the manuscript lacks bioinformatic analysis of the structural proteome, this is not required for the data note category, but it would enhance the value of the publication if provided.
      In terms of validation, there is a technical control for the AlphaFold instance that this project used, which the authors should include. Specifically, please report the RMSD between structures predicted in this work and the published AlphaFold structures for the same proteins: Acropora muricata (20 proteins), Montipora foliosa (8 proteins), and Pocillopora verrucosa (70 proteins), available at e.g. https://alphafold.ebi.ac.uk/search/text/Montipora%20foliosa%20?organismScientificName=Montipora%20foliosa (a sketch of this check follows this review).
      Please detail in the methods how the mass-spec data relates to improving the genome or proteome annotation of each species. How was the mass-spec data used? I presume it was used to identify 3-way orthologues between the species, producing the "8,382 co-expressed proteins" that were selected for structural prediction. The data dump would be stronger if the mass-spec proteomics data were also made available. What proportion of the structural proteome has mass-spectral support?
      Please include a supplementary text file containing the key features of each predicted protein, e.g., % high-confidence structure, gene ID, InterPro domain annotations, and top BLAST homologues. The long proteins could be split by domain to provide some structural information.
      To boost the value of this data, the authors might also consider predicting the coral symbiont proteomes, followed by integrative analysis of host and symbiont proteomes to predict interacting partners.
      What are the domain and sequence features of the low- and very-low-confidence predictions?
      Is the reference genome available for any species? What are its completeness and content? How does the mass-spec and structural data improve the genome annotation, and vice versa?
      At present, large parts of the discussion are irrelevant. Comments about COVID-19 and the role of bioinformaticians are outside the scope of a research report.
      Minor comments:
      Comment on whether toxicity is reported for these coral species.
      Use full genus names on first use.
      Proofreading of grammar is required throughout, along with elimination of non-scientific phrasing. Drop the irrelevant arguments regarding COVID-19 and the call to arms for bioinformaticians.
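      The RMSD control requested above is straightforward to run; here is a minimal sketch using Biopython's superimposer, assuming each pair of files covers the same residues in the same order. The file names are placeholders, not the study's actual outputs.

```python
# Minimal sketch: C-alpha RMSD between a model from this study and the
# corresponding AlphaFold DB model. File names are placeholders.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
ref = parser.get_structure("afdb", "afdb_model.pdb")
mdl = parser.get_structure("study", "study_model.pdb")

# Collect C-alpha atoms in residue order; assumes matching coverage.
ref_ca = [res["CA"] for res in ref.get_residues() if "CA" in res]
mdl_ca = [res["CA"] for res in mdl.get_residues() if "CA" in res]
assert len(ref_ca) == len(mdl_ca), "models must cover the same residues"

sup = Superimposer()
sup.set_atoms(ref_ca, mdl_ca)  # least-squares fit of mdl onto ref
print(f"C-alpha RMSD: {sup.rms:.2f} Å")
```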

    1. Abstract

      Reviewer1: Satoshi Hiraoka

      In this manuscript, the authors developed a new tool, What the Phage (WtP), for comparing the outputs of multiple bioinformatics tools that predict phage sequences from genomic or metagenomic datasets. The purpose of this study is more or less meaningful. As the authors describe in the Introduction section, it is currently difficult to predict reliable viral genomes, especially from culture-independent metagenomic datasets, precisely because of the lack of knowledge about viral genomes in current protein/genome databases. Many bioinformatics tools have already been proposed and some of them are widely used in microbiology; however, the outputs from these tools frequently vary and conflict with each other, and there is no good integrative platform to compare them. The proposed tool easily generates well-summarized output derived from multiple tools, and thus it might facilitate the analysis of phage prediction in the field of microbiology. Indeed, the authors conducted one (but only one) case study using real phage genomes and reported reasonable performance. I feel the tool has some potential to contribute to the wide field of viral genomics. However, users of this tool should keep in mind that it just summarizes the output of multiple phage-prediction tools, meaning it does not evaluate the reliability of that output, as described in the Discussion section. I feel the tool may thus sometimes lead to misunderstandings or confuse users rather than help them. It should be emphasized that a majority decision among multiple tools does not always give the best result. Users may need further detailed analysis for precise prediction of viral genomes from metagenomes. Also, because the development of bioinformatics tools is quite rapid, integrated platforms like WtP will become outdated very soon without continuous effort on maintenance and upgrades to assimilate future novel tools. I understand that the 'sustainability' of the tool is outside the journal's scope, but a perspective on this point would be better described in the manuscript or on the GitHub page. I have some suggestions that would increase the clarity and impact of this manuscript if addressed.
      [Background] Some tools (e.g., VirSorter2) can be used to predict viruses beyond common bacteriophages, e.g., NCLDV and virophages (see the original article of VirSorter2). Those kinds of viruses should be described briefly in this section as well as common dsDNA phages. Assembly-free long reads are described here, but I think this is a bit far from the scope of this manuscript. Indeed, the dataset used in this study (ERR575692) is derived from Illumina HiSeq, and performance on assembly-free long-read datasets was not analyzed in this study. I think these descriptions could be moved to the Discussion section rather than the Introduction. Instead, it would be better to add more attractive descriptions of studies of phage genomes identified from short-read metagenomes to emphasize the importance of phage prediction and the value of the proposed tool, WtP: e.g., the history of viral genomics using metagenomic datasets, recent technical improvements in metagenomics, the phylogenetic diversity of phages, the discovery of novel phage lineages from environmental metagenomes, etc. Only 5 out of the 11 tools used in WtP were introduced here. The remaining 6 tools would be better also cited here with brief explanations of their strategies for virus prediction. Also, MARVEL was cited here but is not used in WtP.
      [Design and Implementation] Figure 1 is different from the one on the GitHub page (https://mult1fractal.github.io/wtp-documentation/figures/wtp-flowchart-simple.png), which seems better than Figure 1. What does 'DAG' mean?
      [Prediction and Visualization] 'a metagenome assembly' could be rephrased as 'metagenomic assembled contigs'. MetaPhinder and Seeker are listed here with 'no release version'. I understand the situation, but I feel this description is not good for reproducing the analysis. To specify the version of tools even when they lack an official release version, mentioning the last commit date (for MetaPhinder, Aug 10, 2021) or the GitHub commit ID (bebc447d00ec9ff9f4960f38b627d8651262ff72) is likely a good way.
      [Functional annotation & Taxonomy] In this manuscript, Prodigal was used for gene prediction. However, accurate gene prediction from phage genomes is still difficult (see https://academic.oup.com/bioinformatics/article/35/22/4537/5480131). This affects both phage prediction and functional gene annotation in the field of virology. I think the difficulty of gene prediction from phage genomes and the potential room for improvement should be noted in the Discussion section.
      [Result report] The sentence '~ IMG/VR, iVirus, or VERVE-NET' here should come with appropriate citations or URLs. I found a paper on iVirus: https://www.nature.com/articles/s43705-021-00083-3
      [Other features] WTP -> WtP
      [Analysis] Figure 3: in the X-axis title of the bottom-left bar plot and the Y-axis title of the top-right bar plot, viral -> phage. What do 'prediction values' mean? Are these scores generated by each prediction tool? Figure 4: X-axis texts: unify the format to either NodeID:assignment (e.g., NODE_5:unknown) or assignment:NodeID (e.g., T3:NODE_14). 'The sequences matched with 100% identity to Salmonella enterica (Salmonella enterica strain FDAARGOS_768 chromosome, complete genome), but not to prophage sequences.' Does this sentence mean that the contigs NODE_5 and NODE_8 were mis-predicted as prophage by CheckV? Table 1: completeness -> completeness (%).
      [Discussion & potential implications] Add a citation to the line 'At least one multitool approach was implemented on a smaller scale by Ann C. Gregory et al. (comprising only VirFinder and VirSorter).'
      [References] 16, 18, and 19 lack DOIs.

      Reviewer2: : Huaiqiu Zhu

      In this manuscript, the authors developed an integrated workflow, WtP, for the identification, annotation, and taxonomy of phage sequences. Based on Docker and Nextflow, WtP integrates 11 phage sequence identification tools (comprising 14 approaches), two functional annotation and taxonomy tools (Prodigal and HMMER), and a visualization tool (chromoMap). When using WtP, it is convenient that users do not need to install each tool and can avoid conflicts between installation packages and between operating systems. The WtP tool was also applied to an artificial microbiome. The threshold of each phage sequence prediction tool can be manually adjusted and output. Annotation and taxonomy results of phage sequences can be further assessed by CheckV and visualized by the chromoMap tool. However, there are some limitations in this manuscript. For the annotation and taxonomy stage, only the Prodigal tool was used for gene prediction, with no other gene prediction tools (especially phage-specific tools) included. It is necessary for an integrated workflow to include other similar tools. WtP needs at least 4 GB of memory and 75 GB of storage, so the authors should develop a web version, or at least a graphical-interface version, of WtP for wider adoption.
      Major comments:
      1. Beyond sequence identification, host prediction (e.g., HoPhage, PHP, and VirHostMatcher-Net) and lifestyle prediction (e.g., DeePhage, PhagePred) of phage sequences are also important in microbial communities. However, WtP does not include these functions.
      2. In addition to a web version or graphical-interface version of WtP, the authors could also consider a video demo or usage illustration. To clarify the purpose of this study, I think it would be better to add the phrase 'a web server of ...' or 'a GUI platform of ...' to the title.
      3. In the 'Analysis' section (page 12), only four phage contigs could be annotated in the artificial data: P22 (NODE_12), T3 (NODE_14), T7 (NODE_13), and phiX174 (NODE_30). The 'predicted_organism_name' of the remaining 102 phage contigs is 'no match found'. Can WtP improve or add more databases to annotate more contigs?
      4. In the 'Analysis' section (page 14), the authors mention 'No specialized phage assembly strategy or any cleanup step was included during the assembly step'. I think this is unreasonable, and the downstream analysis will inevitably be affected by impure sequences.
      5. In Figure 2, it is possible to export results in 'csv', 'pdf', or 'excel' form. Can WtP export all the predicted phage sequences in 'fasta' form (a sketch of such an export follows this review)? The authors should also describe how to change or add databases during the annotation and classification phases.
      Minor comments:
      1. In the 'Functional annotation & Taxonomy' section (page 8), 'Figure 3' in the sentence 'All annotations are summarized in an interactive HTML file via chromoMap (see Figure 3)' should be 'Figure 4'.
      2. The 'Completeness' column in Table 1 is missing its unit, and the authors could add an outer border to Table 1.
      3. Figures 2 and 3 need to be clearer.
      4. Page 5: 'approach to gain' should be 'approach to gaining'.
      5. Page 13: 'In addition to' should be 'In addition to'.
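      Major comment 5 above (FASTA export) can be approximated outside the workflow; here is a minimal sketch that pulls flagged contigs out of the assembly, assuming a list of contig IDs taken from the WtP report. The IDs and file names are placeholders, not WtP's actual output format.

```python
# Minimal sketch: export predicted phage contigs as FASTA. The ID set
# and file names are placeholders, not WtP's real output format.
from Bio import SeqIO

phage_ids = {"NODE_12", "NODE_13", "NODE_14", "NODE_30"}  # from the report

records = (rec for rec in SeqIO.parse("assembly.fasta", "fasta")
           if rec.id in phage_ids)
count = SeqIO.write(records, "predicted_phages.fasta", "fasta")
print(f"wrote {count} phage contigs")
```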

    1. Abstract

      Reviewer 1: Milton Pividori

      In this manuscript, the authors analyzed different characteristics that are potentially related to the expression of human genes under IFN-a stimulation. A classification model is built to predict ISGs (genes that are upregulated following IFN-a stimulation) in human fibroblast cells. The model also performs feature selection, and the authors used different test sets (on different types of IFN) to validate their model. The authors provide a web server that implements this machine learning model. I liked the introduction; the background and motivation were clear. However, the Results section was a bit hard to follow, in particular the implementation of the machine learning models, with different classifiers applied inconsistently across distinct feature sets. At the beginning of this section, the authors perform extensive manual feature analyses across different feature types (related to alternative splicing, duplication, and mutation) to build a refined dataset. These analyses basically correlate each individual feature with the expression of genes in the presence of IFN-a. I have several concerns here, related mainly to the correlation between features, that I describe below.
      General comments:
      * Regarding reproducibility, the authors provide a GitHub repository with source code, the trained model, and data. From the documentation and notes in the manuscript (lines 1015-1023), it looks like this can only be run on macOS, which makes it very hard for me to test (I'm a Linux user). I recommend the authors read and follow the article "Reproducibility standards for machine learning in the life sciences" (https://doi.org/10.1038/s41592-021-01256-7). Having, for instance, a Docker image to download and run their analyses would be fantastic.
      * The authors perform a comprehensive analysis of features that differentiate different gene classes. I wonder why they didn't first use a machine learning model to automatically find these important features, and then try to analyze which features were selected (instead of the other way around, as done in the study). I think there is perhaps too much manual feature engineering in the steps prior to training an ML model.
      * Related to the previous point, one of my concerns below is about feature correlation. The authors compare individual features regarding their ability to separate different gene classes (ISG vs. background vs. non-ISG). But one can imagine that some features are highly correlated. Some features might not be useful for separating gene classes in a single-feature analysis (as the authors do at the beginning), but they could be useful in combination with other features. Unless I'm missing an important point, I would let the machine learning model learn this and then analyze each feature individually after the model identifies them.
      * The authors are concerned that including too many features in the support vector machine (SVM) model would complicate the prediction task. To remedy this, they manually select the features according to, in my opinion, a rather subjective criterion. Why didn't the authors use a feature selection algorithm here? I know that they propose a model including feature selection, but I guess I don't understand all the previous manual feature analyses well. Using a known feature selection method here would provide a more data-driven approach to improving classification, in addition to their manual expert curation (which is also valid).
      * They run several classification models, but not consistently across the same set of features. For example, only SVM is run across genetic, parametric, all features, etc., but not the other models. Why is that?
      * The manuscript would really benefit from a figure with the main steps of the analyses performed, models tested, datasets employed, etc. It's hard to get the big picture as it is now.
      Results / Evolutionary characteristics of ISGs, paragraph between lines 131-148:
      * I think the window size used (mentioned in the text) should be added to the Figure 2 caption.
      * What is the vertical dashed line? In the text, you say that those to the left of this line are IRGs, but I don't understand the meaning of that vertical line (-0.9 log fold change). This explanation, which I didn't see, should also be added to the figure caption.
      * From the text, I understand that in the subfigures of Figure 2 you have IRGs, non-ISGs, and ISGs. Would it be possible, or meaningful for the reader, to add an extra vertical line to separate them?
      Results / Differences in the coding region of the canonical transcripts, paragraph between lines 193-208:
      * If GC-content is underrepresented in ISGs more than in non-ISGs, then ApT and TpA should be expected to be more enriched in ISGs, right? This sounds like a redundant analysis; I would expect these two sequence-derived features to be correlated. If this is the case, maybe it would be better to highlight other features instead of a correlated/expected one?
      * Figure 4: here the authors divided the parametric set of features into four categories and compared their representations among ISGs, non-ISGs, and background genes. The figure shows p-values of the tests on the y-axis and the four categories of features on the x-axis. I think it's important to run a negative control: could you please run these tests again, say, 100 times, with gene IDs/names shuffled, and check whether some of these results also appear in these null simulations? (A sketch of such a permutation control follows this review.) Maybe you can keep the same figure but remove those results also found in the null simulations.
      Paragraph between lines 209-227:
      * Is it possible that the comparison of codon frequencies (third category of features) is correlated with previous findings (like GC content or ApT/TpA enrichment)? If so, would it be possible that the analysis is also expected or redundant? For example, in ISGs there is an underrepresentation of GC-content, and you also found an underrepresentation of "CAG" codons in ISGs. I might be missing something, but aren't these expected to be correlated?
      Results / Differences in the protein sequence, paragraph between lines 302-323:
      * Figure 6: I would suggest adding the same negative control suggested before.
      Results / Differences in network profiles:
      * I think it's important to define what all those eight features in the network analyses are (closeness, betweenness, etc.); otherwise it's hard to follow what comes next.
      Results / Features highly associated with the level of IFN stimulations:
      * Figures 9 and 10: it would be good to add the sign of the correlation in the figure, in addition to mentioning it in the caption (as it is now).
      Results / Difference in feature representation of interferon-repressed genes and genes with low levels of expression:
      * Given the unique patterns or differences between the non-ISG class and the IRG class, wouldn't it be better to perform different analyses excluding IRG genes? The authors also acknowledge these risks in lines 539-541.
      Results / Implementation with machine learning framework:
      * It was hard for me to understand the workflow in this section: you used different machine learning models applied to distinct feature sets, for example. Why don't you apply the same set of models to the same set of features? I think this section needs an initial paragraph with a global description of what you are trying to do.
      * For example, I don't think I understand the concept of a "disruptive feature" very well. What does it mean?
      * Table 3: I don't understand the threshold selection here. I guess you refer to the classification or decision threshold of a model that outputs the probability of a gene being ISG or non-ISG. First, I think there should be a line separating the performance measures to clearly show those that are "threshold-dependent" and "threshold-independent".
      * I also understand that, during cross-validation, you selected, for each model/feature set combination, the threshold that maximized the MCC (this is explained in Table 3 as a footnote, but it should be mentioned more explicitly in the text).
      * Table 3: What is the "Optimum" set of features? Why is this "Optimum" set only used with SVM?
      * How does the "AUC-driven subtractive iteration algorithm (ASI)" compare with other feature selection algorithms?
      * Table 5: you mention this in the text, but it would be good to have an extra column indicating which datasets were used for training and which for testing.
      * Figure 13: it would be good to have the AUROC in the figure, not only the curves.
      Web server:
      * I think, in general, that the web application needs to be more intuitive and have more documentation. For example, the main interface says "Predict your human genes of interest"; what does that mean? What does it predict?
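      The negative control suggested in the review above can be set up as a simple permutation test; here is a minimal sketch, with synthetic data standing in for the real feature matrix and gene labels.

```python
# Minimal sketch of a permutation null for the per-feature group tests:
# shuffle gene labels, re-run the test, and collect the null p-values.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def null_pvalues(feature, labels, n_perm=100):
    """P-values of the ISG vs. non-ISG test on label-shuffled data."""
    pvals = []
    for _ in range(n_perm):
        shuffled = rng.permutation(labels)
        res = mannwhitneyu(feature[shuffled == "ISG"],
                           feature[shuffled == "non-ISG"])
        pvals.append(res.pvalue)
    return np.array(pvals)

labels = np.array(["ISG"] * 50 + ["non-ISG"] * 50)      # synthetic stand-in
feature = rng.normal(size=labels.size)                  # synthetic feature
print(np.percentile(null_pvalues(feature, labels), 5))  # empirical cutoff
```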

      Reviewer2: Muthukumaran Venkatachalapathy

      First of all, this manuscript is well written, following a thorough research investigation. I enjoyed reading about interferons, interferon-stimulated genes (ISGs), their mechanisms, and signalling pathways. In the introduction, the authors have highlighted the different methods (including other bioinformatics databases) available to identify ISGs and their potential pitfalls. This unmet need is addressed using in silico approaches, which were used to classify interferon-stimulated genes from non-stimulated ones in human fibroblast cells. Here, the authors have applied a combination of expression data and sequential/compositional features and designed a machine learning model for the prediction of ISGs from non-ISGs. Apart from features like duplication, alternative splicing, mutation, and the presence of multiple ORFs, the authors extracted various sequential features and found them to correlate well with ISG prediction. For example, ISGs are prone to GC depletion, and a significant difference in codon usage among ISGs was found. In that context, the authors claim that ISGs are evolutionarily less conserved, and that codon usage features, genetic composition features, proteomic composition features, and sequence patterns (especially SLNPs and SLAAPs) are optimal parameters that can cumulatively help in differentiating ISGs from non-ISGs. When it came to building a machine learning model, the authors faced challenges due to similarities between ISGs and IRGs. They experimented with different algorithms for model building, ranging from decision trees to random forests, and found decent results with a support vector machine.
      Limitation: Model prediction accuracy was close to 70% for type I and III IFN, and it performed below par when predicting ISGs activated by the type II IFN system. There is scope to improve the model prediction accuracy and extend its usage to type II IFN systems. If the authors could briefly add a few points on how to improve the model accuracy and also highlight the application/impact of this work in their discussion, that would help scientists from other backgrounds to resonate with this manuscript.
      Relevance: I believe there are inherent attributes (genetic, compositional, expression) of ISGs which may facilitate or even elevate their expression after IFN stimulation. On the other hand, I think these properties may also be leveraged by viruses to escape or evolve away from the IFN-mediated antiviral response. This study is relevant during the ongoing pandemic; this bioinformatics tool can help design better drug targets and may indirectly aid in developing novel antiviral compounds. I recommend this work for publication without any changes.

    1. Abstract

      Reviewer1: Mahesh Neupane

      Nicely written paper on selection signatures for cashmere goats, with detailed analysis of a possible causal deletion. Here are some suggestions to improve the paper:

      How was the optimal value of K determined in ADMIXTURE? Please review the formatting of the manuscript; for example, the figures on pages 5 and 6 have some formatting errors. Was the sample size sufficient for all the comparisons? What was the power of the study design? How are the results from mouse and human cell lines justified for comparison with goat? Very good job supplying all the code used in the programs. Perhaps this code and the parameters could be combined as supplementary material or a GitHub repository.

      Reviewer2: Yixue Li

      The authors raise an interesting question, hoping to discover the genetic mechanisms associated with cashmere traits for breed improvement. The authors sequenced 120 native Chinese goats, including 2 cashmere goat breeds and 6 common goat breeds. Through analysis, the authors found, and believe they confirmed, that a 582 bp deletion at 367 kb upstream of LHX2 is involved in regulating cashmere yield and cashmere diameter. The results are very interesting, and if the authors convincingly answer the questions below, acceptance for publication is recommended.
      The 120 goats: are they inbred, and are there enough of them to get a statistically significant result? What is the statistical basis?
      The article begins by describing that 582del and 504del are both correlated with cashmere yield, but only 582del is also significantly correlated with fiber diameter, while 504del has no significant correlation with fiber diameter. It then adds that the interaction effect between 582del and 504del was significantly correlated with cashmere fiber diameter, indicating an interaction between the two deletions. What is the mechanism of this interaction? How they are significantly related to cashmere fiber diameter needs further elaboration.
      On the one hand, the text mentions that the deleted sequence 582del in the upstream region of the LHX2 gene may act as an insulator, preventing the function of the LHX2 enhancer. Later, it is mentioned that the deletion of the LHX2 insulator 582del increases the expression of LHX2 and promotes the growth of cashmere fibers during the growth period. There seems to be a contradiction here; please explain.
      To confirm the insulator function of the 582del sequence, the authors synthesized a 551 bp DNA fragment, inserted it into the pGL3 plasmid downstream and upstream of the SV40 promoter, and then co-transfected human 293T cells and mouse 3T3 cells, thus confirming the insulator function of the 582del sequence. Here: (1) the two sequences are not identical in length and identity; (2) using mouse and human cell lines respectively, how do we conclude that the 582del sequence will have the same function in goats? What is the experimental and biological logic here? Hopefully a convincing explanation can be given.

      Reviewer3: Yu Jiang

      This manuscript resequenced 42 cashmere goats and 78 ordinary goats and performed genome-wide selective sweep analysis; a 582 bp deletion in the thirteenth intron of DENND1A, upstream of LHX2, was then found to increase cashmere yield. This discovery provides resources for the development of the wool industry and the enrichment of animal genetic resources. However, the description in some parts of the manuscript is very rough, and many attachments are missing.
      Major concerns:
      1. In the result "Plausible Causative Mutation near LHX2", you selected a lot of cashmere yield and fiber diameter data for association analysis to get Fig 5e, but your results may have high false-positive rates. The authors should demonstrate that the deletion is significantly associated with phenotype after excluding factors such as sex, age, and so on (a covariate-adjusted sketch follows this review).
      2. It can be seen from Fig. 2a that MT and JNG contain a large number of cashmere goat pedigrees. When they are selected and analyzed with the cashmere goat samples, the results may be affected by this mixing. The authors should consider the particularity of these samples when using them to perform subsequent analyses.
      3. In Fig 3, are the strongly selected loci linked to surrounding loci? Do these sites have an effect on gene expression?
      4. Fig 3c & d: the horizontal and vertical coordinates of your two graphs are the same, but the trends of the graphs are different. Please state clearly what the two graphs describe in the legend and in the paper.
      Minor concerns:
      5. In the abstract, "Luciferase assay shows that the deletion, which acts as an insulator, restrains the expression of LHX2 by interfering its upstream enhancers", but in the results, "Therefore, the deletion of the LHX2 insulator increases the expression of LHX2 and promotes cashmere fiber growth at the anagen stage, while deletion of the FGF5 enhancer reduces the expression of FGF5, inhibiting the regression". The conclusions are inconsistent; please clarify the logic of the paper and draw the correct conclusion.
      6. "Luciferase assay shows that the deletion, which acts as an insulator, restrains the expression of LHX2 by interfering its upstream enhancers. Our study discovers a novel insulator of the LHX2 involved in regulating cashmere production and diameter." These two sentences are confusing; they could be read as saying that two insulators were found on LHX2, one suppressing expression and one regulating cashmere production and diameter. We recommend revising them.
      7. There are inconsistencies in the sample names in the paper. For example, you use IRWG early in a sentence and IRW later; please unify the names. At the same time, the paper contains many spelling and symbol errors, such as "mddle", "goatswith", symbol repetition, and so on.
      (1) In the STRUCTURE analysis: "When K = 4, we observed five separate clusters: IRWG and ANG in west Asia, YNBB, GZB, JTB and CDB in southwest China; cashmere goat in north China; MT and JNG in mddle east China; and Korean goats in south Korea. At K = 6, goats in the southwest China further split into two geographic subgroups: the Yunnan-Kweichow Plateau group including YNBB and GZB goats, and the Chengdu Plain group including CDM and JTB goats. Two west Asian goats (IRW and ANG) were also separated."
      (2) "More interestingly, we found that the 582del has a high frequency in the IRWG population (80.9 %), while the 504del was absent, which is also consistent with previous research."
      (3) "10 JTB from Jintang County of Sichuan Province; 12 CDB from Chengdu City of Sichuan Province"
      (4) "To further evaluate whether these two deletion variants were related to cashmere traits, we selected 235 CDMC goatswith cashmere yield (Supplementary Fig. 22, Supplementary Table s8) and 581 CDMC goats with fiber diameter records (Supplementary Fig. 23, Supplementary Table s9) for association analysis."
      (5) Fig 2b: "KOG".
      8. "We inspected all variants within exons to identify the potential causal mutation around the DENND1A-LHX2 locus; however, no coding variants were found." Please put the information on the relevant sites in an attachment; a single sentence makes the paper unconvincing.
      9. "Analysis of the 582del deletion region using the BLAST program revealed that it is not a highly conserved element but was found in the genomes of primate and ungulate species." "582del deletion" is a repetition.
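      Major concern 1 above amounts to fitting the genotype-phenotype association with covariates in the model; here is a minimal sketch using an ordinary least-squares model with sex and age terms. The column names and the synthetic data are placeholders, not the study's records.

```python
# Minimal sketch of a covariate-adjusted association test for the 582del
# genotype; synthetic data stands in for the real phenotype records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "genotype_582del": rng.integers(0, 3, n),  # 0/1/2 copies of the deletion
    "sex": rng.choice(["F", "M"], n),
    "age": rng.integers(1, 8, n),
})
df["cashmere_yield"] = (50 + 8 * df["genotype_582del"]
                        + 5 * df["age"] + rng.normal(0, 10, n))

fit = smf.ols("cashmere_yield ~ genotype_582del + C(sex) + age", data=df).fit()
print(fit.params)  # deletion effect after adjusting for sex and age
```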

    1. taxonomic

      Reviewer name: Francesco Asnicar (revision 1)

      This reviewer thanks the authors for their revision. However, the quality of the figures and the main goal the authors would like to reach with the tool named TAMPA are not yet satisfactory. The main goal of TAMPA is to allow comparison of taxonomic profiling tools, but it is evident from the supplementary figures that the software cannot support such a comparison when the taxonomic tree is large enough, as the circles added to the branches become unreadable. This, I believe, is a major flaw of a tool that aims to do specifically that, and for such cases a smarter way of comparing taxonomic profilers should be found. For instance, a legend should be added to each figure created by TAMPA to make immediately clear what the colors represent. Also, for taxonomic trees where the visualization fails to allow comparison of the profilers, different and complementary data should be provided, for instance a table listing all branches and the numbers the depicted circles represent. In addition, such a table would overcome the limitation of just 3 tools allowed in the comparison.

    2. computational

      Reviewer name: Alessio Milanese (revision 1)

      Many thanks to the authors for their detailed responses to my comments. The edits have improved the manuscript, and I have only a few minor comments.
      COMMENT 1: In Figure 4b I can see that "Tenericutes" and "Planctomycetes" are both in orange, meaning that they have both been measured only by mOTUs. But in the main text I read "mOTUs failed to detect the Tenericutes group, while MetaPhlAn failed to detect Planctomycetes", which is wrong.
      COMMENT 2: I would improve the figure legends. In particular, the description of 4b is the same as in 2a, 3a, and 1: "The size of the discs represents the total amount of relative abundance at the corresponding clade in the ground truth, or the tool prediction if that clade is not in the ground truth. If the tool predictions agree, a disc is colored half orange and half teal. The proportion of teal to orange changes with respect to the disagreement in the prediction of that clade's relative abundance between the two tools being compared. Highlighted blue text represents clades where the difference between the relative abundances of the prediction and ground truth exceeds 30%". I would suggest having this description only for Figure 1, and then a shorter description for the following figures.
      COMMENT 3: The second color is described sometimes as "green" and sometimes as "teal". For clarity, I would suggest using just one of the two.

    3. Metagenomic

      Reviewer name: Francesco Asnicar

      The manuscript by Sarwal et al. presents a novel tool, named TAMPA, for standardized visualization of metagenomic taxonomic profilers that also enables a more general assessment of the performance of taxonomic profiling tools by providing an extensive set of different metrics. It would be interesting to see (if possible) the comparison of three (or more) taxonomic profiles at the same time. The evaluations shown are always binary, but in a real-case scenario where a user would like to evaluate 3 or 4 different taxonomic profiling tools on their community, it would be great to be able to do so. Beyond evaluating the agreement between two (or more) taxonomic profiling tools, it is not clear how TAMPA can drive improvement on biologically relevant questions. Although it is clear, as the authors state in the introduction, that different taxonomic profilers (with different parameter settings) can produce very different taxonomic representations, to support this statement it will be important to show, in at least one case, that TAMPA can suggest a different taxonomic interpretation of a microbial community that is also biologically relevant. The figures in general appear to be of low quality and stretched; please consider improving them, as they are the main point of TAMPA.

    1. identify

      Reviewer name: Raul Guantes (Revision 1)

      In the revised version and the response letter, the authors have clarified all the questions and addressed the comments raised in my previous report, and I think the manuscript is now suitable for publication.

    2. techniques

      Reviewer name: De-Shuang Huang (Revision 1)

      I think the paper can be accepted.

    3. entities

      Reviewer name: Thomas Schlitt

      The manuscript "contrast subgraphs allow comparing homogeneous and hetereogeneous networks derived from omics data" introduces and illustrates the application of contrast subgraph analysis to gene expression, protein expression and protein-protein interaction data. The method can be applied to weighted networks. The authors give a good description of the method and the context of other available methods.The authors apply the contrast subgraph analysis to three different omics data sets - overall these analysis are not very detailed and do not yield surprising results but they provide a nice illustration of the potential usefulnes of the contrast subgraph analysis in the context of omics data. To my opinion this is really where the merit of the paper is: to promote and make accessible the method to a wider audience of researchers in the field of bioinformatics/molecular biology. The authors have also applied their method to brain imaging derived networks, but that work is not part of this publication.The contrast subgraph analysis is particularly interesting, for data that is collected under different conditions but for the same set of nodes (i.e. genes, proteins, ...), i.e. where the nodes present do not change (much), but their interaction strengths differes between conditions. It remains to be seen where this method can deliver unique value that is not achievable by other means, but the approach is very intuitive. Its rationale can be readily understood, reducing the temptation to use it as a "black box" without critically questioning the results as might be the case for more complex methods. One of the downsides of the presented approach is that it does not provide any measures of confidence in the results - while there is a parameter >alpha< that allows some tuning, little information is given on how to choose a suitable value for this parameter (which obviously depends on the data). Another issue that might come a little too short is how to derive graph representations from experimental omics data in the first place. Usually these methods do not yield yes/no answers, but rather we obtain a matrix of pairwise measurements (e.g. correlation of coexpression) and to obtain a graph a threshold on these numbers is applied to obtain an edge or not. Various methods have been proposed to choose thresholds, but in the end, moving from a full matrix to graph representation means loosing some information - it would be interesting to see a deeper analysis on how much this thresholding influences the outcomes of the proposed method - this question is obviously linked to obtaining some confidence information on the results.Overall, the method described here is very interesting, it shares downsides with other graph based methods (thresholding), the biological examples given are brief, but illustrative for the use of the method, the manuscript is well readable. 
The manuscripts stimulates to add this method to your own toolbox and to apply it to interesting data sets to see if it yields results that were not obvious from other approaches.Minor comments:-figure captions esp 1-3 - please provide more information in the figure captions to make the figures "readable" on their own without a need for the reader to refer back to the text; figure captions for Fig 1-3 are almost identical, yet very different data is shown - a clear indication that important information is missing in the figure caption - such as what is the underlying data?Please explain all terms used in the figure in its caption: here what is "GeneRatio"? Figs A/B what is the x-axis showing for the violin plots?-figure 3c and para on Protein vs mRNA coexpression (p2-5) - are the differences really that striking - in 3C, the box plots do not look that different, super low p-values are probably due to very large number of data points, but not sure it is really that meaningful here (effect size?)-figure 4 is too small, nodes are barely visible, colours cannot be distinguished-algorithm 1 and description in text - I would probably move the description of the algorithm from the text to a "figure caption" for the algorithm box, to make it easier for the reader to find the definitions of the terms.
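      The thresholding sensitivity raised above is easy to probe directly; here is a minimal sketch that counts how many edges survive different correlation cutoffs, using random data as a stand-in for the real expression matrix.

```python
# Minimal sketch: how edge count (and hence any downstream contrast
# subgraph) shifts with the correlation threshold used to build a graph.
import numpy as np

def edges_at_threshold(corr, tau):
    """Count edges kept when |corr| >= tau (upper triangle only)."""
    iu = np.triu_indices_from(corr, k=1)
    return int((np.abs(corr[iu]) >= tau).sum())

# Random stand-in: 40 samples x 15 genes.
expr = np.random.default_rng(2).normal(size=(40, 15))
corr = np.corrcoef(expr, rowvar=False)
for tau in (0.3, 0.5, 0.7):
    print(tau, edges_at_threshold(corr, tau))
```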

    4. Biological

      Reviewer name: Raul Guantes

      In this manuscript the authors apply the method of contrast subgraphs (developed, among others, by some of the authors), which identifies salient structural differences between two networks with the same nodes, to several biological coexpression and PPI networks. This method adds to the extensive toolkit of network analyses that have been used over the last two decades to extract useful biological information from omics data. In particular, the authors identify subgraphs containing maximum differences in connectivity between two networks, and basically use functional annotations to assign biological meaning to these differences. Of note, contrast subgraphs is not the only method that provides 'node identity awareness' when comparing networks; for instance, identification of network modules or community partitions are common methods to identify groups of nodes that highlight potentially relevant structural differences between two networks, and they have been applied to many biological and other types of networks.

      I find the manuscript well motivated and clearly written in general, but lacking detailed information on part of the Methods. The discussion connecting the findings on structural differences between networks to potential biological functions is also a bit vague and could be worked out in more detail. I feel that the paper is potentially acceptable in GigaScience after a revision that provides more details on the methods and on the findings. Here are my comments.

      Methods:

      1. Coexpression networks for luminal and basal cancer subtypes:

      1a. The authors do not give enough information about the data they use to build these networks. How many samples/points are used to calculate correlations? Do they correspond to different patients, to expression dynamics after some treatment...? Is there any preprocessing of the data (e.g. differential expression with respect to healthy tissue), or are all quantified transcripts and proteins taken with minimal filtering (it is only specified that genes with FPKM < 1 in more than 50 samples are filtered out of the transcriptomic data)? How many nodes and links do the final coexpression networks have?

      1b. To determine links between genes/proteins they calculate Spearman's rho and transform it to (0.5 · (1 + rho))^12 to give a 'signed' network. But since the Spearman correlation ranges between +1 and -1, this transformed quantity lies between 0 and 1, so I do not see the sign. Moreover, why the exponent 12 in the transformation (a worked example of this transformation is given after this review)? Please clarify, because I cannot tell whether in the end they are analyzing weighted networks, unweighted networks or signed networks, or whether they somehow 'keep track' of the sign of rho. They spend some space in the Methods discussing the extension of the contrast subgraph method to signed networks, but I do not know whether it is finally applied, since coexpression networks built in this way, and PPI networks, are not signed.

      1c. Do they keep all links or apply some cutoff on rho by magnitude/significance? Presumably yes, because otherwise the final network would be a clique and unmanageable, but no information is given on that. Again, what is the final size (nodes/links) of the coexpression networks?

      1d. As for coexpression networks based on relative-abundance data such as those from transcriptomic/proteomic experiments, it is well known that correlations may be misleading due to the possibly large number of spurious correlations (see for instance Lovell et al., PLoS Computational Biology 11(3) (2015) e1004075). The use of correlations requires some justification, and at least an acknowledgement of the potential pitfalls of this measure.

      1e. How many nodes/links are in the first contrast subgraphs shown in Figures 1-2? Is the degree calculated within the whole network or just within the extracted subgraph?

      1f. Page 4, last paragraph before the 'Protein vs mRNA coexpression in breast cancer' section: 'the results obtained with the two independent breast cancer cohorts show good agreement, with the top differential subgraphs significantly overlapping for both the basal-like and the luminal-A subtypes (Fisher test p < 2.2 · 10-16)'. I guess the overlap is in terms of functional annotations; how are this overlap and the corresponding statistical test calculated?

      2. Protein versus mRNA coexpression:

      2a. Please provide again information about the number of samples, how the 'subset of breast cancer patients included in the TCGA' was chosen, and whether transcriptome and proteome were quantified under the same conditions (relevant if the two networks are to be compared directly). Please also provide details about the number of links/nodes of each subnetwork and the corresponding subgraph. Since transcriptomic data are usually provided in FPKM and proteomic data in counts (sums of the normalized intensities of each ion channel), are the data further normalized to facilitate their comparison?

      3. PPI networks:

      3a. Since PPIs from different 'contexts' are compared, a brief explanation of the tissue origin and peculiarities of the three cell lines investigated is in order.

      3b. Please provide details about the number of proteins/interactions in the contrast subgraphs obtained from the comparisons of the three cell lines. Since these subgraphs are compared to RNA expression data from a different dataset, please specify whether those data were obtained from the same cell lines. Why are the PPI data compared only to upregulated genes (and not to up- and down-regulated ones)? Also, concerning the criterion for 'upregulation' (logFC > 1), is this log base 2? How is the overlap between proteins in the PPI and upregulated genes quantified? The authors just state that they 'did indeed significantly overlap the corresponding up-regulated genes'. How large is the overlap, and what does 'significantly' mean?

      3c. The discussion of the results shown in Figure 4 is not clear to me. First, the authors state 'We thus analyzed in more depth the first contrast subgraphs obtained from the comparison of the HEK293T PPI network with those obtained from the other two cell lines'. Does this mean that they analyze four subgraphs (two for HEK vs. HUVEC and two for HEK vs. Jurkat)? When they say that the 'top contrast subgraphs were identical', do they mean that the four subgraphs contained exactly the same nodes? Also, in the main text, Figure 4 seems to contain the subnetwork of these subgraphs with only the nodes annotated as 'ribosome biogenesis' and 'signal transduction through p53', with the links being the PPIs; but in the caption to Figure 4 it is stated that 'green edges join proteins involved in the two biological processes' (probably a subset of the PPIs). Please clarify. Why is only the comparison between HEK and HUVEC given, and not that between HEK and Jurkat, if the same nodes are present?

      Interpretation of results:

      1. Coexpression networks in two cancer subtypes: they find that the subgraph with the stronger connections in the basal subtype is enriched in 'immune response', while the subgraph denser in the luminal subtype is enriched in categories related to microenvironment regulation. If they identify clearly enriched genes, they should discuss in more depth their known roles in connection with these two functions in their biological context; this would enrich and support the findings. It is tempting to speculate that, since the basal type is less aggressive, cancer cells are challenged by the immune system of the organism but, once they have developed mechanisms to evade the immune system (becoming more aggressive, as in the luminal subtype), they are committed to manipulating their microenvironment in order to proliferate. Is there any evidence for this in these subtypes of cells?

      2. Comparison of transcriptomic and proteomic networks: from their analyses in Figure 3 they claim in the Discussion that 'adaptive immune system genes are more connected at the transcriptional level, while innate immune systems are more connected at the proteomic level'. This is a rather vague statement based on the functional enrichment analysis. First, they should identify and discuss in more detail the genes/proteins responsible for this enrichment, to see whether their documented functions support the speculation (and since the data used are from breast cancer, I do not know how general this observation is, or whether it is specific to this type of tumor). Moreover, caution should be exercised when interpreting these coexpression networks: the most connected transcripts are not necessarily those that are being simultaneously translated. Also, since apparently the network is not signed, the abundances of connected transcripts may be anticorrelated. Finally, Figure 3 is not clear: which panel corresponds to the transcriptomic subgraph and which to the proteomic one? This should be specified in the caption or with titles in the panels.

      Minor comments:

      - The distinction between 'heterogeneous' and 'homogeneous' networks in the Introduction is a bit confusing, as mRNA and protein coexpression networks are classified as 'heterogeneous'. Why is that? Is it because they are built from many different samples/individuals or from time-course data?

      - Although I have nothing against how the authors display the differences between the first contrast subgraphs in panels A-B of Figures 1 and 2, it might be more eye-catching to display these differences as standard boxplots or violin plots, perhaps with a test for significant differences between the means of the two degree distributions.
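      For readers puzzled by the transformation questioned in point 1b above, the following worked example assumes - and this is an assumption, not something stated in the review or confirmed by the paper - that the authors follow the WGCNA 'signed network' convention of Zhang & Horvath (2005), in which a soft-thresholding power of 12 is the commonly recommended default for signed networks:

          a_ij = ((1 + rho_ij) / 2)^12

          rho_ij = -1  =>  a_ij = 0
          rho_ij =  0  =>  a_ij = 2^-12 ≈ 2.4e-4
          rho_ij = +1  =>  a_ij = 1

      Under this convention the adjacency values are non-negative, but sign information is preserved implicitly: strongly negative correlations map to weights near zero rather than to strong edges, which may be what the authors mean by a 'signed' network.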

    5. Abstract

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad010), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: De-Shuang Huang

      The authors propose an algorithm based on contrast subgraphs to characterize biological networks and thereby analyze the specificity and conservation between different samples. It is interesting, and I think there are some problems that need to be clarified.

      1. Subgraphs are generated by dividing the whole graph in a certain way, and the similarity and difference of the samples are described by comparison between the subgraphs. The authors should discuss, in a non-heuristic way, the advantages of the proposed approach compared with previous methods. Besides that, I wonder why the subgraphs need to be non-overlapping.

      2. For TCGA and the other databases, I think the authors should state the details of the samples, such as the number of samples, sequencing technology, batch effects, etc. In addition, the authors should describe the relationship between the subgraphs and GO modules, to explain the results and draw some biological conclusions.

      3. The authors performed a similar analysis on protein networks, compared the results with RNA-seq, and drew some conclusions. I am a little confused about whether the GO enrichment analysis of the proteomics data maps protein IDs to gene IDs. If so, the authors could easily combine the transcript coexpression and protein coexpression networks through ID-to-ID mapping, and I look forward to the results of such an analysis.

      4. I would like to know how the proposed method handles heterogeneous graphs - by treating them as homogeneous graphs to generate subgraphs? I could not figure out which dataset represents the heterogeneous-graph scenario.

      5. In addition to the description of results such as degree and density differences between subgraphs, I would like to see the relationships between these results and the underlying biological questions.

      6. The authors may consider citing the following articles on networks in molecular biology:

      Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nature Reviews Genetics, 2004, 5(2): 101-113.

      Zhang Q, He Y, Wang S, Chen Z, Guo Z, Cui Z, et al. Base-resolution prediction of transcription factor binding signals by a deep learning framework. PLoS Computational Biology, 2022, 18(3): e1009941.

      Hu JX, Thomas CE, Brunak S. Network biology concepts in complex disease comorbidities. Nature Reviews Genetics, 2016, 17(10): 615-629.

      Guo Z-H, You Z-H, Wang Y-B, Huang D-S, Yi H-C, Chen Z-H. Bioentity2vec: attribute- and behavior-driven representation for predicting multi-type relationships between bioentities. GigaScience, 2020, 9(6): giaa032.

      Guo Z-H, You Z-H, Huang D-S, Yi H-C, Zheng K, Chen Z-H, Wang Y-B. MeSHHeading2vec: a new method for representing MeSH headings as vectors based on a graph embedding algorithm. Briefings in Bioinformatics, 2021, 22(2): 2085-2095.

    1. Conclusions

      Reviewer names: Alban Gaignard (Report on revision 1)

      The reading of the revised paper would have been easier had the updates been provided in a different color, but thank you for taking the comments and remarks into account and clearly answering the raised issues. I also appreciated the extension of the discussion. However, I still have some concerns regarding the proposed approach. The proposed platform targets both workflow sharing and testing; it is explicitly stated in the abstract that "the validation and test are based on the requirements we defined for a workflow being reusable with confidence". It is clear in the paper that tests are realized through the GitHub CI infrastructure, possibly delegated to a WES workflow execution engine. I inspected Figure 3, as well as the wf_params.json and wf_params.yml files provided on the demo website, but they do not seem to be enough to answer questions such as: How are tests specified? How can a user inspect what has been done during the testing process? What is evaluated by the system to assess that a test is successful? I tried to understand what was done during the testing process, but the test logs are not available anymore (Add workflow: human-reseq: fastqSE2bam · ddbj/workflow-registry@19b7516 · GitHub). Regarding the findability of the workflows, in line with FAIR principles, the discussion mentions a possible solution which would consist of hosting and curating metadata in another database. To tackle workflow discoverability between multiple systems accessible on the web, we could expect the Yevis registry to expose semantic annotations, leveraging Schema.org (or any other controlled vocabulary) for instance. This would also make sense since EDAM ontology classes are referred to in the Yevis metadata file (https://ddbj.github.io/workflow-registry-browser/#/workflows/65bc3bd4-81d1-4f2a8886-1fbe19011d81/versions/1.0.0).

    2. Background

      This work has been peer reviewed in GigaScience (see paper https://doi.org/10.1093/gigascience/giad006), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Kyle Hernandez

      Suetake et al. designed and developed a system to publish, validate, and test public workflows, utilizing existing standards and integration with modern CI/CD tools. Their design was not myopic: they relied heavily on their own experience, on work from GA4GH, and on interaction with the large workflow-development communities, and they were inspired by the important work from Goble et al. that applies the FAIR standards to workflows. As someone with a long history of workflow-engine development, workflow development, and workflow reusability/sharing experience, I greatly appreciate this work. There are still unsolved problems, such as guidelines on how to approach writing tests for workflows, but this system sits one level above that and focuses on ways to automate the validation, testing, reviewing/governance, and publishing into a repository, to greatly reduce unexpected errors for users. I looked through the source code of their Rust-based client, which is extremely readable and developed to industry-level standards. I followed the README to set up my own repositories, configure the keys, and deploy the services successfully on the first walkthrough. That speaks to the level of skill, testing, and effort in developing this system and is great news for users interested in adopting it. At some level it can seem like a "proof of concept", but it is one that is also usable in production, with some caveats. The concept is important, and implementing it will hopefully inspire more people to care about this side of workflow "provenance" and reproducibility. There are so many tools out there for CI/CD that are often poorly utilized by academia, and I appreciate the authors showing how powerful they can be in this space. The current manuscript is fine and will be of great interest to a wide-ranging set of readers. I only have some non-binding suggestions/thoughts that could improve the paper:

      1. Based on your survey of existing systems, could you possibly make a figure or table that showcases the features supported/not supported by these different systems, including yours?

      2. Thoughts on security/cost safeguards? Perhaps beyond the scope, but it does seem like a governing group needs to define some limits on the testing resources and be able to enforce them. If I am a bad actor and programmatically open up 1000 PRs of expensive jobs, I am not sure what would happen. Actions and artifact storage are not necessarily free beyond some limit.

      3. What is the flow for simply updating to a new version of an existing workflow? (Perhaps this could go in your docs, not necessarily this manuscript.)

      4. CWL is an example of a workflow language that developers can extend to create custom "hints" or "requirements". For example, Seven Bridges does this in Cavatica, where a user can define AWS spot-instance configs, etc.; WDL has properties to configure GCP images. It seems like in these cases tests should only be defined to work when running "locally" (not with some scheduler or specific cloud environment). But the authors do mention that tests will first run locally in the user's environment, so that does somewhat get around this.

      5. For the "findable" part of FAIR, how feasible is it to have "tags" of some sort associated with a workflow record so things are more findable? I imagine that when there is a large repository of many workflows, being able to easily narrow down to your specific domain of interest could be helpful.

    3. Results

      Reviewer names: Alban Gaignard

      The reading of the revised paper would have been easier had the updates been provided in a different color, but thank you for taking the comments and remarks into account and clearly answering the raised issues. I also appreciated the extension of the discussion. However, I still have some concerns regarding the proposed approach. The proposed platform targets both workflow sharing and testing; it is explicitly stated in the abstract that "the validation and test are based on the requirements we defined for a workflow being reusable with confidence". It is clear in the paper that tests are realized through the GitHub CI infrastructure, possibly delegated to a WES workflow execution engine. I inspected Figure 3, as well as the wf_params.json and wf_params.yml files provided on the demo website, but they do not seem to be enough to answer questions such as: How are tests specified? How can a user inspect what has been done during the testing process? What is evaluated by the system to assess that a test is successful? I tried to understand what was done during the testing process, but the test logs are not available anymore (Add workflow: human-reseq: fastqSE2bam · ddbj/workflow-registry@19b7516 · GitHub). Regarding the findability of the workflows, in line with FAIR principles, the discussion mentions a possible solution which would consist of hosting and curating metadata in another database. To tackle workflow discoverability between multiple systems accessible on the web, we could expect the Yevis registry to expose semantic annotations, leveraging Schema.org (or any other controlled vocabulary) for instance. This would also make sense since EDAM ontology classes are referred to in the Yevis metadata file (https://ddbj.github.io/workflow-registry-browser/#/workflows/65bc3bd4-81d1-4f2a8886-1fbe19011d81/versions/1.0.0).

    4. Analysis

      Reviewer name: Samuel Lampa

      The Yevis manuscript makes a good case for the need to easily set up self-hosted workflow registries, and the work is a laudable effort. From the manuscript, the implementation decisions seem to have been made in a very thoughtful way, using standardized APIs and formats where applicable (such as WES). The manuscript itself is very well written, with a good structure, close-to-flawless language (see minor comment below), and clear descriptions and figures.

      Main concern

      I have one major gripe, though, blocking acceptance: the choice to support only GitHub for hosting. There is a growing problem in the research world that more and more research depends on the single commercial actor GitHub, for seemingly no other reason than convenience. Although GitHub has to date been a somewhat trustworthy player, there is no guarantee for the future, and ultimately this leaves a lot of research in an unhealthy dependence on this single platform. A small note of a recent change is the proposed removal of the promise not to track its users (see https://github.com/github/site-policy/pull/582). Such a central infrastructure component for research as a workflow registry carries an enormous responsibility here, as it may greatly influence the choices of researchers for years to come, by encouraging what is "easier" or more convenient to do with the tools and infrastructure available. With this in mind, I find it unacceptable for a workflow registry supporting open science and open-source work to support only one commercial provider. The authors mention that they are technically able to support any vendor, and also on-premise setups, which sounds excellent. I ask the authors to kindly implement this functionality. Especially the ability to run on-premises registries is key to encouraging research to stay free and independent of commercial concerns.

      Minor concerns

      1. I think the manuscript is missing a citation to this key workflow review, as a recent overview of the bioinformatics workflows field, for example together with the current citation [6] in the manuscript: Wratten, L., Wilm, A., & Göke, J. (2021). Reproducible, scalable, and shareable analysis pipelines with bioinformatics workflow managers. Nature Methods, 18(10), 1161-1168. https://www.nature.com/articles/s41592-021-01254-9
      2. Although it might not have been the intention of the authors, the following sentence sounds unnecessarily subjective and appraising, without data to back it up (rather, this would be something for the users to evaluate):

        "The Yevis system is a great solution for research communities that aim to share their workflows and wish to establish their own registry as described." I would rather expect wording similar to: "The Yevis system provides a [well-needed] solution for ...", which I think might be closer to what the authors intended as well. Wishing the authors the best of luck with this promising work!

    1. The orb-web

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giad002), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Jonathan Coddington

      This paper presents the first uloborid spider genome, and it is a chromosome-level assembly. Genomes of this family are important because the orb web supposedly evolved independently and convergently in this group. Although my expertise is not in the technology and informatics of genome sequencing, the work appears to be well done.

      Figure 1: "A. geniculate" -- spelling.

      "N. clavipes" = "T. clavipes".

      Table S1: "Number of Componenet Sequences" -- typo.

      Text: "single exon We found a" -- typo.

      "can be ascribed by" -- "can be inferred by"?

      "an Araneid orb-weaver" -- "araneid" is usually not capitalized.

      "♂X1X2/♀X1X1X2X2.[48]" should be "♂X1X2/♀X1X1X2X2 [48]."

      You might want to be careful about citing Purcell & Pruitt; see https://purcelllab.ucr.edu/blog6.html and other questions about Pruitt's work.

      Regarding methods, it would be of interest to know what the HMW DNA fragment sizes were (expressed as kb or Mb), although TapeStations are not very accurate. For people who collect spiders with the intent of yielding HMW DNA, such data are important. Data are scarce, so any facts are significant.

      Any homologs of the pyriform spidroin (PySp) in Acanthoscurria? Piriform silk attachment points are a synapomorphy of araneomorph or "true" spiders. Liphistiomorph and mygalomorph spiders do not (cannot?) make point attachments, and the inability to make point attachments, either to substrate or silk-to-silk, probably constrained the evolution of web architectures in non-araneomorph spiders. Therefore, finding homologs of PySp spidroins in non-araneomorph spiders is of great interest for explaining araneomorph web-architecture diversity.

      Likewise, tubuliform spidroin (TuSp) is probably a synapomorphy of entelegyne spiders, which have derived female genitalia--a "flow-through" sperm-management system. Egg sacs occur widely in non-entelegyne spiders, so it is a mystery why entelegynes have specialized spigots, glands, and spidroins for the same purpose. Indeed, the particular function of tubuliform silk is not clear. Any thoughts on this? E.g.

      It is good to see attention paid to the mitochondrial genome, as many whole-genome studies ignore it. In spiders, early work claimed that tRNAs appeared to be peculiar; see Masta and Boore. 2004. The Complete Mitochondrial Genome Sequence of the Spider Habronattus oregonensis Reveals Rearranged and Extremely Truncated tRNAs. Molecular Biology and Evolution, Volume 21, Issue 5, May 2004, Pages 893-902. Any comments on U. diversus tRNAs from that point of view?

      Finally, any comments on evidence for or against the convergent evolution of the orb web? Homology between the pseudoflagelliform and flagelliform spidroins would be pertinent. The intro does raise expectations that some of the larger evolutionary questions will be addressed in the paper, but many (see above) are treated only cursorily or not at all. Perhaps include a sentence in the intro acknowledging this, saying that this paper intends to present the genome and address sex chromosomes but leaves other topics for future work. For example, the sections on some of the spidroins do not extensively discuss comparisons with other spider genomes.

      Reviewer 2: Hui Xiang

      In this study, the authors generated a huge amount of genome sequencing and RNA-seq data and, via a rather complicated merging approach, provided a genome assembly of a spider with a novel phylogenetic position. The genome undoubtedly adds novel and important resources for a deep understanding of spider evolution. However, there are still severe issues that need to be addressed.

      1. There are huge sequencing datasets from different samples. However, I do not think that merging different assemblies is good for a final, qualified genome. Given the high heterozygosity, Illumina data and ONT data from different individuals are quite difficult to use for assembling a clean genome. As shown in Table 2, the HiFi-based assembly is not obviously inferior to the merged one, but it is obviously much better at avoiding redundancy. I strongly suggest that the authors adopt the genome assembly from the HiFi data of one individual instead of merging two sets of assemblies. The Illumina and Nanopore assemblies may still be helpful for fully deciphering the silk proteins.

      2. The proportion of repeats is somewhat affected by the quality of the assembly. The highly heterozygous genome assembly was produced by a complicated merge of diverse batches of data, so the real quality might not be as good as the authors describe, and the quality of the repeat annotation is especially hard to evaluate. Hence the statements on genome size (lines 193-200) are not convincing.

      3. About the assembly of the RNA-seq data: the authors obtained huge amounts of data. However, this is not very helpful for obtaining novel transcripts if the data are saturated. More importantly, assembly of short reads is not very useful for obtaining long transcripts.

      4. As to whole-genome duplication: the authors did not provide solid evidence supporting a WGD in the U. diversus genome; they only demonstrated two Hox clusters therein. The synteny analysis is quite confusing and does not help confirm the WGD. They need to provide more solid genome-wide evidence, or otherwise entirely downplay these statements.

      5. The identification of the sex chromosomes is still vague, and the statements are not well organized or convincing. "While 8 of the 10 pseudochromsomes had a median read depth of 40 ± 2, pseudochromosomes 3 and 10 were outliers, with read depths of 36 and 33, respectively." Is this difference in sequencing depth really convincing on its own? As I understand it, the authors sequenced both female and male samples, so why did they not directly compare the depth of the two sex chromosomes between the sexes to provide stronger evidence?

      Other:

      1. The information on chromosome-level spider genomes is incomplete. As far as I know, there is a chromosome-level black widow genome; the authors need to add this one.

      2. The authors need to release the sequences of the spidroins they identified and described.

      Reviewer 3: Zhisheng Zhang, Ph.D

      The manuscript GIGA-D-22-00169 presents a chromosome-level genome of the cribellate orb-weaving spider Uloborus diversus. The assembly reinforces evidence of an ancient arachnid genome duplication and identifies complete open reading frames for every class of spidroin gene. The authors also identified the two X chromosomes of U. diversus and candidate sex-determining genes.

      The methods are well fitted to the aims of the study and clearly described, and the manuscript is well written.

      Minor comments:

      1. In Figure 1B, I noticed that the estimated divergence times of the Araneae are given; I think a reference should be added, or a detailed description of how they were obtained.

      2. There is something wrong with the table formatting, e.g. in Tables 1, 2, 5 and 6.

      3. Line 70: "chromosome- scale" changes to "chromosome-scale".

      4. Lines 147-148: line-break error.

      5. Line 458: "[48]" in the wrong location.

      6. Lines 511-512: in the genome of the spider Uloborus diversus, on which chromosomes are the genes "sex lethal (sxl)" and "doublesex (dsx)" located?

      7. Lines 515-516: "The 534 shared sex-linked genes in these three species, 14 are predicted to be DNA/RNA-binding" - do these sex-linked genes differ at the RNA level between males and females?

      8. Line 685: "Dovetail Chicago and Dovetail Hi-C Sequencing" should be bold.

      9. Line 764: "We then used the Trinity assembler43 v.2.12.0" - the number 43 may be a redundant, stray citation mark.

      10. Some software tools lack RRID numbers, such as "BRAKER2" on line 223, "NOVOplasty" on line 245, "tRNAscan-SE" on line 790, "RepeatModeler" on line 773, "RepeatMasker" on line 774, "EMBOSS" on line 797, and so on.

      11. Lines 780 "using the BRAKER 2 pipeline" changes to "using the BRAKER2 pipeline".

      12. Lines 950: "Literature Cited" changes to "Reference".

      13. Lines 952-953: wrong citation. The World Spider Catalog is an online resource; the version and the date accessed should also be added, and the author name should be changed to World Spider Catalog.

    1. Background Malignant Pleural Mesothelioma (MPM)

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac128), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Saurabh V Laddha

      The authors did a fantastic job of integrating MPM multi-omics datasets and created an integrative and interactive map for users to explore these data. MPM is a rare and understudied cancer type, so such resources are very useful for moving the field forward at the molecular level. The comprehensive data are well presented, and the manuscript is well written, explaining the complex genomics datasets for MPM clearly. All the figures are well explained and very clear to understand.

      Minor points:

      - The authors mention that an evaluation of tumor purity was done by pathological review; did the authors also use molecular (genomic) data to estimate tumor purity, and if so, what was the consensus? This is a very important factor for interpreting the genomic results, as the data were sequenced at 30x.

      - Along the same line, RNA-seq can also be used to estimate tumor purity, and a clear picture of tumor purity would be really helpful for users.

      - It is not very clear from the Methods section whether the same MPM samples were sequenced at the DNA, RNA and DNA-methylation levels. A brief explanation or a table would make this easy for users to understand.

      - The recent WHO classification divides MPM into three histopathological types. Did the authors perform any unsupervised analysis of these comprehensive data to explore MPM heterogeneity or replicate the WHO classification? Or did they recover the WHO subtypes from the molecular data? A brief analysis of, or comment on, histological versus molecular classification would certainly move the MPM research field forward, as researchers have found vast differences between the two and the field is moving towards molecular classification in the clinic.

      Reviewer 2: Jeremy Warner

      In this paper, the authors describe a new public resource for the molecular characterization of malignant pleural mesothelioma (MPM), which they present as the most comprehensive to date. They performed WGS, transcriptome sequencing, and methylation arrays for 120 patients with MPM sourced through the MESOMICS project and integrated this dataset with several hundred additional patients from previously published datasets.

      Although I cannot independently verify their claim that this is the largest and most comprehensive dataset for MPM, it is quite impressive and expansive. The pipeline utilized is well described and the results at all stages are transparently shared for prospective users of this dataset.

      The description of the methods used to identify and remove germline variants is interesting, although its length somewhat detracts from the main goal of the paper, describing an MPM resource. Perhaps this part could be condensed, with the technical details presented in a supplement. This comment pertains to both the Point Mutations and Structural Variants sections.

      Additional moderate concerns:

      There are insufficient details provided on the clinical and epidemiological parameters. Indirectly, it would appear that sex, age class, and smoking status are the clinical parameters - but what are the age classes? Is smoking status binary ever/never, or more involved? There ought to be a data dictionary provided as a supplemental table which describes each clinical/epidemiological variable, along with the possible values that the variable can take on. It should additionally be explained why other important clinical parameters are not available - most importantly, the presence of accompanying pulmonary comorbidity such as chronic obstructive pulmonary disease (COPD) and the existence of conditions that might preclude the use of standard systemic therapies, such as renal disease precluding the use of platinum agents.

      Context: I would like to see more here about the role of asbestos in the etiology, including what might be known about the pathophysiology of asbestos fibers at the molecular level. Also, there is nothing here about the evolution of treatment for MPM; the latest "state-of-the-art" regimens (platinum doublet + bevacizumab [MAPS; NCT00651456] and dual checkpoint inhibition [Checkmate 743; NCT02899299]) have reported median survival in the 18-month range, which is distinctly better than the median survivals quoted by the authors. Finally, I would like to see one or more direct references to the clinical trials where molecular heterogeneity has "fueled the implementation of drug trials for more tailored MPM treatments".

      Data Description: All specimens in the MESOMICS study are said to be collected from surgically resected MPM; this also appears to be the case for the integrated multi-omic studies from Bueno et al. and Hmeljak et al. and this should be explicitly indicated. Somewhere, it should also be explicitly discussed that this is an important limitation in the future utility of this data - surgical specimens are convenience samples and while they do provide important information, they lack treatment exposure. Given that many if not most patients with MPM will survive to 2nd or 3rd line systemic therapy, and that 1st line is fairly standardized, a knowledge of induced mutations is going to be essential to the ultimate goal of precision medicine.

      Minor concerns:

      The labels in the figures (e.g., Figure 2 - "Unmapped..too.short") are human-readable but could still be presented in a more friendly fashion. All acronyms should be defined.

      Reviewer 3: Mary Ann Tuli

      I have been asked to review the process of accessing the controlled data cited in this study to ensure that the process is clear and smooth. The study is available from the European Genome-phenome Archive (EGA) under accession number EGAS00001004812 (https://ega-archive.org/studies/EGAS00001004812). The paper is clear about how to obtain the DAA.

      The study has three datasets.

      I can confirm that the author was very prompt in responding to my request to the DAC, in providing the DAA, and in responding to the queries I had when completing the DAA. The completed DAA was sent to the EGA by the author on 29 July, and the EGA responded within 3 working days, stating that access had been granted. This is an excellent response time, so I conclude that the process of obtaining the DAA and of the EGA making the data available to the user works very well.

      Today (1 September) I attempted to access the data via the EGA. I was easily able to log in to my EGA account and see that the datasets are available for me to download. Users need to download data using the EGA download client, pyEGA3. The EGA provides a video on how to install the client, but I hit a problem and require technical support.

      I emailed the EGA help desk but have not yet had a response. I was quite surprised to receive a response from the author instead, and have learnt that the EGA includes the owner of the study in RT tickets so that they see any communication. I commend the author for his prompt response to my ticket (though it did not solve my problem).

      I cannot hold on to this review for any longer, and I am not yet in a position to comment on the nature of the data held within this study.

      I do have concerns that the process of accessing controlled data held in the EGA is not straightforward. Users need to watch a 12-minute video to learn how to install the download client, and may need to install programs on their computer. There is a FAQ, but it is very technical. This is not an issue for the author to resolve, though.

      I understand the author has some minor revisions to make, so hopefully I should have a response from the EGA help desk before a final decision needs to be made (?).

    1. Background

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac126), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Shiping Liu

      How to model the statistical distribution of gene expression is a basic question in the field of single-cell sequencing data mining. Dharmaratne and colleagues looked in detail at the distribution of every gene. Using generalized linear models (GLMs), the authors present a new program, scShapes, which matches each gene to a distribution of one of four shapes: Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), or zero-inflated negative binomial (ZINB). As the authors show in this manuscript, not all genes fit a single distribution, whether NB or Poisson, and some genes actually fit the zero-inflated models because of the high drop-out rate of modern single-cell sequencing, e.g. 3' tag sequencing. It has recently become popular to employ GLMs in single-cell data mining, but the approach has received both praise and blame, so fitting a specific model to each individual gene is a good step forward. The downside is the computational cost, especially as the number of cells sequenced reaches millions in current research, with even bigger datasets expected in the future; this raises a great obstacle to the application of the method presented here. How can the calculation with the mixed models of scShapes be sped up? The authors also applied scShapes to several datasets, including the metformin, human T-cell, and PBMC data. They found some potentially relevant genes that change distribution shape but are not easily identified by other methods, demonstrating that scShapes can identify subtle changes in gene expression.
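      To illustrate what this per-gene shape selection amounts to, here is a minimal Python sketch. It is a simplification under assumptions (an intercept-only GLM per gene, model choice by BIC) and is not the authors' actual pipeline, which also handles covariates and a dedicated model-selection procedure:

          import numpy as np
          import statsmodels.api as sm
          from statsmodels.discrete.count_model import (
              ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

          def best_shape(counts):
              """Fit four count models to one gene's counts; return the lowest-BIC family."""
              y = np.asarray(counts)
              X = np.ones((len(y), 1))               # intercept-only design matrix
              candidates = {
                  "Poisson": sm.Poisson(y, X),
                  "NB": sm.NegativeBinomial(y, X),
                  "ZIP": ZeroInflatedPoisson(y, X, exog_infl=X),
                  "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=X),
              }
              bic = {}
              for name, model in candidates.items():
                  try:
                      bic[name] = model.fit(disp=0).bic
                  except Exception:                  # sparse genes may fail to converge
                      continue
              return min(bic, key=bic.get)

          # toy gene: Poisson(4) counts with roughly 70% of cells forced to zero
          rng = np.random.default_rng(0)
          y = rng.poisson(4, size=500) * (rng.random(500) > 0.7)
          print(best_shape(y))                       # typically "ZIP" or "ZINB"

      The computational-cost concern below follows directly from this picture: four maximum-likelihood fits per gene, over tens of thousands of genes and potentially millions of cells, add up quickly.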

      Major points: (1) We do not see any details about the metformin dataset: the sequencing depth and quality, the number of genes/UMIs per cell, and so on. This makes it hard to evaluate the quality and reliability of the results generated by scShapes. If this dataset belongs to another manuscript and cannot be presented at the same time, I suggest the authors use an alternative dataset, as many published single-cell datasets could be used in this study.

      (2) Even though the authors take cell type into account in the GLM, I wonder whether, for a specific gene, the distribution shape changes across cell types. If so, the problem becomes more complex: the distribution shape would need to be modeled separately for each gene in every cell type.

      (3) When identifying differential gene expression with scShapes, the authors did not consider the influence of differing cell numbers, or of the proportions of cells in the different cell types. A possible way to evaluate or eliminate this bias is to downsample from a large dataset, instead of just simulating 2k-5k cells in total from the PBMC data, and to assess the influence of both the total cell number and the per-cell-type proportions.

      (4) The authors should present comparative results on the computational cost of the different methods, e.g. accuracy, runtime and memory consumption for different numbers of cells. I suggest the authors use a much larger dataset, because current single-cell studies may include millions of cells, and the ability to process big data is essential for the method to become widely used.

      Minor points: (1) There are no figure legends for Fig. 2c and d.

      (2) It is unclear whether 30% of all genes undergo a shape change, or only 30% of the genes remaining after the filtering pipeline. Please clarify.

      Reviewer 2: Yuchen Yang

      In this manuscript, the authors present a novel statistical framework, scShapes, which uses a GLM approach to identify differential distributions of genes across scRNA-seq data from different conditions. scShapes quantifies gene-specific cell-to-cell variability by testing for differences in the expression distribution, and was shown to be able to identify biologically relevant switches in gene distribution shape between conditions. However, several concerns still need to be addressed.

      1. In this study, the authors compared scShapes to scDD and edgeR. However, besides these two, there are many other methods for calling DEGs from scRNA-seq. Wang et al. (2019) systematically evaluated the performance of eight methods specifically designed for scRNA-seq data (SCDE, MAST, scDD, D3E, Monocle2, SINCERA, DEsingle, and SigEMD) and two methods for bulk RNA-seq (edgeR and DESeq2). It would therefore also be worthwhile to compare scShapes to other methods, such as SigEMD, DEsingle and DESeq2, which were reported to perform better than scDD or edgeR.

      2. When scShapes was compared to scDD, the authors mainly focused on distribution shifts. However, for users it would be better to present a Venn diagram showing the numbers of genes detected by both scShapes and scDD, and the genes specifically identified by each (a minimal sketch of such a diagram is given after this point). In addition, the authors showed functional enrichment results for the DEGs identified by scShapes; it would also be worthwhile to perform enrichment analysis on the genes detected by both scShapes and scDD, or specifically identified by scShapes or scDD.
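      A minimal sketch of such a diagram (the gene sets are invented placeholders, and matplotlib-venn is just one library that can draw it):

          import matplotlib.pyplot as plt
          from matplotlib_venn import venn2   # pip install matplotlib-venn

          # hypothetical call sets - replace with the genes reported by each method
          scshapes_genes = {"RXRA", "TP53", "MYC", "EGFR", "STAT1"}
          scdd_genes = {"TP53", "MYC", "GAPDH"}

          venn2([scshapes_genes, scdd_genes], set_labels=("scShapes", "scDD"))
          plt.show()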

      3. Since scShapes detects differential gene distributions between conditions, it would be better to show users how to interpret the significant results biologically. For example, the authors mention that RXRA is differentially distributed between Old and Young and between Old and Treated; what does this result mean? Can this differential distribution be associated with differential expression?

      4. In the Discussion, the authors mention that scRATE is another tool that can model droplet-based scRNA-seq data. It would be clearer to discuss why the authors developed their own algorithm rather than using scRATE to model the distributions.

      5. In the Introduction, the authors discuss zero counts in scRNA-seq data and present evidence on this in the Results. Since 2020, several publications have also focused on this issue, such as Svensson (2020) and Cao (2021); these discussions should be included in the manuscript.

    1. Motivation

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac125), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Ruibang Luo

      In this paper, the authors propose xAtlas, an open-source NGS variant caller. xAtlas is a fast and lightweight caller with performance comparable to other benchmarked callers. The benchmark comparison on multiple popular short-read platforms (Illumina HiSeq X and NovaSeq) demonstrates xAtlas's capacity to identify small variants rapidly with desirable performance. Although xAtlas is limited in its calling of multi-allelic variants, its high sensitivity (~99.75% recall on the ~60x benchmarking datasets) and desirable runtime (<2 hours) allow xAtlas to rapidly filter candidates and to serve as an important quality-control step before further analysis.

      The authors present a detailed explanation of xAtlas's workflow and design decisions and have performed thorough benchmarking experiments, but there are still some points that need further discussion, listed as follows:

      The authors report the performance at multiple coverages of the HG001 sample and the benchmarking results for the HG002-4 samples by measuring concordance with the GIAB truth set (v3.3.2). I noticed that GIAB has updated its truth sets from v3.3.2 to v4.2.1 for the Ashkenazi trio. The updated version includes more difficult regions, such as segmental duplications and the Major Histocompatibility Complex (MHC), to identify previously unknown clinically relevant variants. It would therefore be helpful if the authors could evaluate performance against the updated truth sets to give a more comprehensive picture of the proposed caller.

      In the Methods section, the authors describe the three main stages of the xAtlas variant-calling process: read preprocessing, candidate identification, and candidate evaluation. In the candidate-evaluation stage, hand-crafted features (base quality, coverage, reference and alternative allele support, etc.) are fed into a logistic regression model to classify true variants versus reference calls. But in Figure 1, the main workflow of xAtlas, only model scoring is shown, and the evaluation details are not visible. It would be useful if the authors could enrich Figure 1 with more detail, to ensure consistency with the Methods and facilitate reader understanding.
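      As an aside for readers, the candidate-evaluation stage summarized above can be pictured with a minimal sketch like the following; the features, numbers and training labels are invented for illustration, and this is not xAtlas's actual model or data:

          import numpy as np
          from sklearn.linear_model import LogisticRegression

          # hypothetical per-candidate features:
          # [mean base quality, read depth, alternative-allele fraction]
          X = np.array([[35.0, 60, 0.48],
                        [12.0, 18, 0.05],
                        [38.0, 55, 0.51],
                        [15.0, 70, 0.02],
                        [33.0, 42, 0.45],
                        [10.0, 25, 0.04]])
          y = np.array([1, 0, 1, 0, 1, 0])   # 1 = true variant, 0 = reference call

          clf = LogisticRegression().fit(X, y)

          # the class probability can be rescaled into a Phred-like quality score
          p = clf.predict_proba([[30.0, 40, 0.45]])[0, 1]
          print(-10 * np.log10(max(1 - p, 1e-10)))

      Enriching Figure 1 with even this much detail (which features go in, what the model outputs, how a score is derived from the probability) would address the inconsistency noted above.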

      In Figure 2, the authors report the xAtlas performance comparison on the HG001 dataset against other variant callers. I noticed that the x-axis is F1-score while the y-axis is true positives per second; plotting these two unrelated metrics against each other may confuse readers. We suggest the authors make separate comparisons for the two metrics (for instance, precision-recall curves for the accuracy comparison, and a runtime comparison of the variant callers for the speed benchmarking).

      Zheng, Zhenxian on behalf of the primary reviewer

      Reviewer 2: Jorge Duitama

      The manuscript describes a variant caller, xAtlas, which uses a logistic regression model to call SNPs after building an alignment and pileup of the reads. The manuscript is clear. The software is built with the aim of being faster than other solutions. However, I have some concerns about the method and the manuscript.

      1. Unfortunately, the biggest issue with this work is that the gain in speed is obtained with an important sacrifice in accuracy, especially for calling indels. I ran xAtlas on two different benchmark datasets, and the accuracy, especially for indels and other complex regions, was about 20% lower compared with other solutions. Although the difference was smaller, xAtlas is also less accurate than other software tools for SNV calling. It is well known that even a simple SNV caller can achieve high sensitivity and specificity (see results from https://doi.org/10.1101/gr.107524.110). However, several SNV errors can be generated by incorrect alignment of reads around indels and other complex regions. For that reason, most of the work on variant detection focuses on mechanisms to perform indel realignment or de novo mini-assembly to increase the accuracy of both SNV and indel detection; the Strelka paper is a great example of this (https://doi.org/10.1038/s41592-018-0051-x). The manuscript does not mention whether any procedure has been implemented to realign reads or to otherwise increase the accuracy of indel calling. This is critical if xAtlas is meant to be used in clinical settings.

      2. The manuscript looks outdated in terms of evaluation datasets, metrics and available tools. Since high values of standard precision and sensitivity are easy to achieve with simple SNV callers, metrics such as false positives per million base pairs (FPPM), proposed by the developers of the synthetic-diploid benchmark dataset, should be used to achieve a clearer assessment of the accuracy of the different methods (https://doi.org/10.1038/s41592-018-0054-7); a sketch of this metric is given after this point. Regarding benchmark experiments, SynDip should also be used for benchmarking. To actually support the claim that xAtlas is reliable across heterogeneous datasets (as stated in the title), further datasets should be tested, as has been done for software tools such as NGSEP (https://doi.org/10.1093/bioinformatics/btz275). In terms of tools, both DeepVariant and NGSEP should be included in the comparisons.
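      For reference, and as a plausible reading of the synthetic-diploid paper cited above (not a definition given in this review), FPPM simply normalizes the raw false-positive count by the amount of sequence evaluated, so that callers benchmarked on regions of different sizes remain comparable:

          FPPM = (number of false positive calls) / (evaluated bases / 10^6)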

      3. Regarding the metrics proposed by the authors, I do not think it is good practice to merge results on accuracy and efficiency, taking into account that the accuracy in this case is lower than that of other solutions, and for clinical settings that is an important issue. The supplementary table should also report sensitivity and precision for indels, not only for SNVs.

      4. The SNV calling method, and particularly the genotyping procedure, should be described in much more detail. The manuscript describes the general pileup process, then mentions some general filters for read alignments, and then mentions that logistic regression is applied. However, it is not clear which data are used for this regression or, in general, how allele counts and quality scores are taken into account. A much deeper description of the logistic regression model should be included in the manuscript.

      5. There are better methods than PCA to show the clustering of the 1000 Genomes samples. A structure analysis is more suitable for population genomics data and shows the different subpopulations more clearly.

      6. Finally, about the software: genotype calls produced by xAtlas should include a value for the genotype quality (GQ) FORMAT field to assess genotyping accuracy. For single-sample analyses the QUAL value can be used (although this is not entirely correct), but for population VCFs the GQ field is very important as a per-datapoint measure of genotyping quality. Regarding population VCF files, it is not clear, either from the inline help or from the GitHub site, how they should be constructed.

    1. The tiger

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac112), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Jong Hwa Bhak

      This manuscript presents assemblies of the Bengal tiger. It is a great improvement over the two previous tiger genome assemblies, and the assembly quality is unprecedented (exceeding perhaps that of any feline genome in terms of contiguity).

      This represented a ~50x improvement in genome contiguity (see materials and methods). PanTigT.MC.v2

      What was the most important factor in this big jump in contiguity?

      the overall contiguity was better than the domestic cat reference genome

      The quality comparison section is informative.

      We identified the "repetitive elements" in the genome by combining both

      ==> repeat elements is better.

      How close are the two genomes (MC & SI)?

      This reviewer finds it a great contribution to existing feline genome assemblies. The authors have done all the usual QC and constructed really high quality assemblies.

      Reviewer 2: Gang Li

      The submitted manuscript, 'Near-chromosomal de novo assembly of Bengal tiger genome reveals genetic hallmarks of apex-predation', assembles high-quality, near-chromosome-level reference genomes of the Bengal tiger, which will be of great significance for the conservation and rejuvenation of tigers, and even of other endangered felids. I have some comments on this manuscript:

      1. Considering that the assembly used Hi-C technology to resolve the chromosome structure, a figure of the Hi-C results needs to be presented. Meanwhile, the assembly of the sex chromosomes always attracts attention, especially the Y chromosome of the tiger. More detailed information needs to be provided, such as the Y-chromosome genes conserved relative to other mammals, or whether any tiger-specific Y-linked genes were observed.

      2. In this work, the authors used four zoo-bred individuals with known pedigree to test the ROH-based inbreeding index, intending thereby to evaluate the assembly quality. But I cannot find any information about these four individuals; I assume they are Bengal tigers. If so, the problem is that the quantity of ROH is determined not only by the quality of the reference but also by the divergence between the target resequencing data and the reference genome used. That is to say, if the resequencing data and the reference genome are all from the same tiger subspecies (Bengal tiger), the quantity of ROH is expected to be greater than in a comparison across subspecies, which may make this an inappropriate method for evaluating assembly quality.

      3. I have some advice about the calibration of the evolutionary divergence times: using other species with closer phylogenetic relationships, for instance other species of Panthera, might be better, given their similar substitution rates and generation times.

      4. The format of the References section needs to be rechecked.

    1. Japanese eels

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac120), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Christiaan Henkel

      This paper describes a new chromosome-level assembly of the Japanese eel, which could finally supersede the various more fragmented assemblies. The assembly process is perhaps overly complex (many data sources and assembly steps; suppl. figure 3), but the result in general appears to be of high quality, as demonstrated by BUSCO (twice) and by alignment to a closely related genome (Anguilla anguilla, suppl. figure 4). Figures 1 and 2, however, contain some inconsistencies:

      Figure 1: track B (nanopore coverage) shows a clear bimodal signal, with large blocks of high (double) coverage. These appear possibly correlated with areas low in gene content (track E). Are these possibly collapsed duplicate regions? That would have a strong effect on the analyses of genome duplication. Do other somewhat comparable data sources, for example PacBio CLR, show this feature?

      Figure 2, right panel: the new A. japonica assembly appears to have many unclustered genes (brown), similar to the fragmented draft assembly of A. rostrata and unlike the other included chromosome-level assemblies. This appears to be related to the annotation process? Or are there other problems that preclude orthology assignment for these genes? And how does A. rostrata get its gain of 11756 genes in this analysis? (By the way, line 323 has genus Anguilla as +919/-531, the figure +919/-631).

      Some other questions and comments I would like the authors to address:

      The discussion of previous and current eel sequencing efforts in the Introduction is not complete. For example, I miss the assemblies by Kai et al (2014) and Nakamura et al (2017) of the Japanese eel genome. In addition, the Introduction and Discussion (lines 415-417) present the current assembly as the first chromosome-scale Anguilla genome, which is not the case. At least two high-quality assemblies of Anguilla anguilla (European eel) are available, and should be acknowledged: one is by the Vertebrate Genome Project, and this assembly is even used in the manuscript for comparative purposes (line 199). The other has been described in a preprint (Parey et al 2022). Some of the mentioned papers include similar analyses (mostly on evolution after genome duplication and ancestral genome reconstruction, see figure 5).

Kai et al (2014) A ddRAD-based genetic map and its integration with the genome assembly of Japanese eel (Anguilla japonica) provides insights into genome evolution after the teleost-specific genome duplication. BMC Genomics 15, 233. https://doi.org/10.1186/1471-2164-15-233
Nakamura et al (2017) Rhodopsin gene copies in Japanese eel originated in a teleost-specific genome duplication. Zoological Lett 3, 18. https://doi.org/10.1186/s40851-017-0079-2
Parey et al. (2022) Genome structures resolve the early diversification of teleost fishes. BioRxiv https://doi.org/10.1101/2022.04.07.487469

The different statistics listed for each alternative assembly in the Introduction make comparisons difficult.

      The statement in line 79, that eels as the most basal teleost group are 'close' to non-teleosts, is incorrect. They are just as close to non-teleosts as any other teleost. (The rest of the sentence, up to line 82, could use rephrasing).

The statement in line 307 that 'Japanese eels are phylogenetically closer to American than European eels' contradicts the phylogeny presented (fig. 2), or is this based on some additional analysis (a density plot not shown), or even on figure 2 right panel (see comment earlier)? Even if they are incrementally 'closer' by some metric, I would not interpret this as a phylogenetic distance, given the inferred divergence dates. In any case, the American eel assembly is still highly fragmented, and not the best basis for inferences which otherwise rely on chromosome-scale assemblies.

      Similarly, the statements on divergence between teleost groups in lines 495-500 need rephrasing. Anguilla species did not diverge from Megalops etc.

      Figure 2 & lines 205-213/310-313: These divergence times are calibrated using a few intervals taken from TimeTree.org (red dots). I wonder how reliable this is, as I get quite different intervals when checking now: for Anguilla-Megalops it is 162.2-197.3 (the paper has 179.3-219.3). Also TimeTree appears to have arowana (Scleropages) as the most basal branch among the teleosts, the paper has a combined Osteoglossomorpha(arowana)/Elopomorpha(eels) branch. Has the phylogenetic tree topology been inferred or imposed? Why have the specific calibration points been chosen? The early branching among teleosts (see line 310-312) is somewhat controversial, see the preprint by Parey et al.

Line 346-348: This uses the eel genome size (~1 Gbp) and the further (4R) duplicated salmon genome (3 Gbp) to argue against such a further genome duplication in eels. Although I agree that the eel 4R probably did not occur, comparing genome sizes presents no evidence in this case. Genome size changes by other processes as well, and more dramatically (e.g. transposon proliferation). In addition, salmon and eel are not closely related at all. Compare this to the genomes of the (much more closely related) common carp and zebrafish, both ~1.5 Gbp: the carp genome, but not zebrafish, has experienced an additional duplication, but the zebrafish genome contains a higher transposon density.

      The second argument against 4R (lines 352-356, figure 4b) also does not really work. With 8 Hox clusters, the eel genome appears duplicated with respect to the gar (4 clusters), and not quadruplicated. However, with 8 clusters and 70+ genes, eels actually have more than all established 3R teleost genomes (max. 7 clusters, 42-50 genes). So the question is then whether these 8 clusters form nice 3R WGD ohnolog pairs, or if some clusters have been lost (as in nearly all other teleosts) and re-duplicated. The former hypothesis is consistent with the high level of retained WGD genes (line 369), the latter with the inferred high level of local duplication (line 363). The observation of duplicate eel Hox clusters goes back to the initial European eel genome assembly (Henkel et al 2012), but there the draft status precluded confident assignment to 3R for some clusters.

      The eel olfactory receptors have previously been identified using an assembled transcriptome (Churcher et al. 2015, not cited). How do the analyses of line 214-229/324-333/420-434/figure 3 compare?

Churcher et al (2015) Deep sequencing of the olfactory epithelium reveals specific chemosensory receptors are expressed at sexual maturity in the European eel Anguilla anguilla. Molecular Ecology 24, 822-834. https://doi.org/10.1111/mec.13065

Lines 460-467 state eels have retained duplicates of immune genes, which have been under positive selection. So how does this translate to a (very recent) negative effect on eel fitness (line 460-462)?

      The discussion of line 482-502 on chromosome numbers invokes ecological explanations (freshwater vs. marine habitats, 482-489), but subsequently does not translate this to the low Anguilla chromosome numbers. As these ecological factors are highly applicable to Anguillidae, this connection should be explored here - including their evolutionary history (e.g. Inoue et al, 2010, Deep-ocean origin of the freshwater eels. Biology Letters 6, https://doi.org/10.1098/rsbl.2009.0989)

      In this discussion: how do the numbers of line 482/3 (modal 2n 54/48 chromosomes in fish) correspond to those of line 492 (peak chromosome number n = 24/25 in extant teleosts)?

      The supplementary figures/tables lack legends (just mentions in the main text).

      Line 109: which ONT flowcell, kit, and basecaller versions have been used? In the M&M, please list software versions.

Reviewer 2: Zhong Li

This manuscript by WANG et al. titled "A Chromosome-level Assembly of the Japanese Eel Genome, Insights into Gene Duplication and Chromosomal Reorganization" provides a high-quality genome assembly of the Japanese eel, an economically important fish. The authors have used four kinds of sequencing technologies and assembly strategies, and provided well-annotated genomes. This genome provides useful information on the genome organization and evolution of this species, among other fields.

      Overall, the manuscript is sufficiently descriptive and easy to follow. I have three major concerns:

1. The genome annotation relies on the transcriptome, but no detailed information is given in the methods section.
2. The analyses do not include command lines or software versions and thus are not easily repeatable. A document including this information is highly recommended as a supplementary file.
3. The genome assembly seems not to have been released in the NCBI database (https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA852364). Besides, the gene models (nucleotide, protein, and GFF files) should also be made available and included in the Data Availability section when the manuscript is accepted.

    1. Background

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac119), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Dominik Heider

      The paper is well written, and the objectives are clear. The study is a very nice application of CGR in bioinformatics and shows the excellent performance of CGR-encoded data in combination with deep learning. I have a few things that should be addressed in a minor revision:

1) Some very important studies have not been addressed in the related work part: e.g., in Touati et al. (pubmed:32645523) and Sengupta et al. (pubmed:32953249), the authors compared SARS-CoV-2 with other coronaviruses based on CGR, and we (pubmed:34613360) used CGR in combination with deep learning for resistance predictions in E. coli.

2) To me, it is unclear how accuracy was used in the model. Is it one class (i.e., clade) versus all others? If yes, accuracy might be misleading because of the high class imbalance; in such settings, MCC has been shown to be more suitable (see the sketch after these points).

      3) "The undersampled dataset was randomly split into train...". Why did you undersample? To balance the data, which would make sense to use accuracy as a metric but discard a lot of valuable data. What about oversampling?

      4) Comparison with other tools: I wonder whether the good performance of your model is the result of deep learning or the CGR encoding. Please also provide the results for another ML model (besides SVM, e.g., random forests) to compare to, e.g., Covidex.
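To make the class-imbalance point in 2) concrete, here is a minimal synthetic sketch (not tied to the paper's data): a degenerate classifier that always predicts the majority class scores high accuracy yet zero MCC.

```python
# Toy illustration of accuracy vs. MCC under a 9:1 class imbalance.
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [0] * 90 + [1] * 10   # 90 majority-class samples, 10 minority
y_pred = [0] * 100             # degenerate "always predict majority" model

print(accuracy_score(y_true, y_pred))     # 0.90 -- looks strong
print(matthews_corrcoef(y_true, y_pred))  # 0.0  -- no predictive value at all
```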

      Reviewer 2: Riccardo Rizzo

The authors propose a classification experiment based on Frequency Chaos Game Representation and deep learning. They used the outstanding performance of a ResNet network as an image classification tool and the FCGR method, which represents a genome sequence as an image.

      The work seems good, although some major points should be clarified.

      First, whether the performance index values came from a 5-fold validation procedure (5 because they said the split was 80-10-10) or a one-shot experiment is unclear.

      Second, the part that involves the frequent k-mers and the SVM should be better explained. The authors should clarify what the meaning of this comparison is.

      Another point to clarify is the quality of the sequences used; the authors worked on complete sequences, but, as far as I know, in the real world virus sequences are noisy data, and authors should discuss this point.

      Minor points:

• Authors said that a sequence is a string $s \in \{A, C, G, T, N\}^*$, so they should explain the procedure used in Definition 2, where only 4 symbols seem to be used. If they discard k-mers containing N, or expand each N into the 4 possible bases (consider that N means "any symbol"), they should say it clearly (see the sketch after this list).
• Figures 1 and 2 report two different quantities but say the same thing; maybe one of them can be omitted.
      • Authors should add some details about the training time of the network.
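A minimal FCGR sketch illustrating the first point above; here k-mers containing N are simply discarded, which is one possible convention, not necessarily the authors' actual procedure:

```python
import numpy as np

# CGR corner assigned to each nucleotide (one common convention).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def fcgr(seq: str, k: int = 6) -> np.ndarray:
    """Frequency Chaos Game Representation: a 2^k x 2^k image of k-mer counts."""
    n = 2 ** k
    img = np.zeros((n, n), dtype=np.int64)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if "N" in kmer:
            continue  # assumption: ambiguous k-mers are dropped, not expanded
        x, y = 0.5, 0.5
        for base in kmer:            # CGR walk: move halfway toward each corner
            cx, cy = CORNERS[base]
            x, y = (x + cx) / 2.0, (y + cy) / 2.0
        img[int(y * n), int(x * n)] += 1   # the final point falls in a unique cell
    return img

print(fcgr("ACGTNACGT" * 30, k=4).sum())   # N-containing k-mers are excluded
```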

A final suggestion: it would probably be interesting to use the same deep network with transfer learning (the whole network or just the first sections) to evaluate the gain from ad-hoc training and the difference in training time.

    1. The

      This work has been published in GigaScience Journal under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac076) and has published the reviews under the same license.

      Reviewer 1 Satoshi Hiraoka

In this manuscript, the authors developed a new tool, DeePVP, for predicting Phage Virion Proteins (PVPs) using a deep learning approach. The purpose of this study is meaningful. As the authors describe in the Introduction section, it is currently difficult to annotate the functions of viral genes precisely because of their huge sequence diversity and the existence of many unknown functions, and there is still much room to improve the performance of in silico annotation of phage genes, including PVPs. Although I'm not an expert in machine learning, the newly proposed method based on deep learning seems to be appropriate. The proposed tool clearly outperformed the other previously proposed tools, and thus the tool might be valuable for further deep analysis of many viral genomes. Indeed, the authors conducted two case studies using real phage genomes and reported novel findings that may offer insight into the genomics of these phages. Overall, the manuscript is well written, and I feel the tool has good potential to contribute to the wide field of viral genomics. Unfortunately, I have concerns, including about the openness of the source code. I also have some suggestions that would increase the clarity and impact of this manuscript if addressed.

Major: I did not find the DeePVP source code on the GitHub page. Is the tool not open source? I strongly recommend the authors disclose all scripts of the tool for further validation and secondary usage by other scientists, or at least clearly state why the source code needs to be kept private. Also, I was much confused by the GitHub page because the uploaded files are not well structured. Scripts and data used for performance evaluation were included in the 'data.zip' file, which should be renamed to something more appropriate. The 'Source code' button on the Releases page strangely links to the 'Supporting_data.zip' file, which contains only the installation manual but no source code. The authors should prepare the GitHub page appropriately: for example, upload all source code to the 'main' branch rather than include it in a zip file, and the 'source code' file in Releases should contain actual source code files rather than a manual PDF.

According to the Material and methods section, 1) using the deep learning approach, and 2) using the large dataset retrieved from PhANNs as the training dataset, are two of the important improvements over the other studies in the PVP identification task. Someone may suspect that the better performance of DeePVP is mostly attributable to the larger training dataset rather than to the classification method used. Is there a possibility that the previously proposed tools (especially the tools other than PhANNs), re-trained on the large PhANNs dataset, could reach better performance than DeePVP?

The naming of the 'Reliability index' (L249) is inaccurate. The score does not capture prediction 'reliability' (i.e., whether the predicted genes are truly PVPs) but merely reflects the fact that a gene is predicted as a PVP by many tools, without considering whether that prediction is correct or incorrect. The sentence 'A higher n indicates that this protein is predicted as PVP by more tools at the same time, and therefore, the prediction may be more reliable.' in L252 is not logical.

I do not fully agree with the discussion that the tool will facilitate viral host prediction, as mentioned in L294-302. It is very natural that if phages are phylogenetically close and possess similar genomic structures, including PVP-enriched regions, they will infect the same microbial lineage as a host. However, this has not been evaluated systematically across wide phage lineages. In general, almost all phage-host relations in nature are unknown, except for a few specific viruses such as E. coli phages. Further detailed studies are needed on whether, and to what degree, the conservation of PVP-enriched regions could be a good feature for predicting phage-host relationships. I think phage-host prediction is beyond the scope of this tool, and thus the analysis could be deleted from this manuscript or just briefly mentioned in the Discussion section as a future perspective.

Minor: The URL of the GitHub page would be better given at the end of the Abstract or inside the main text, in addition to the 'Availability of supporting source code and requirements' section. This will make it easy for many readers to access the homepage and use the tool. Figs 2 and 3: I think it is better to change the labels of the x-axis to 0 kb, 20 kb, 40 kb, ..., 180 kb. This will make it easier to understand that the horizontal bar represents the viral genome.

      Re-review:

I read the revised manuscript and acknowledge that the authors made efforts to take the reviewers' comments into account. My previous points have been addressed and I feel the manuscript has improved. I think the phrase 'incomplete proteins' in L391-396 should be rephrased as 'partial genes', because here we should consider protein-encoding genes (or protein sequences), not proteins themselves, and the word 'incomplete' is a bit ambiguous.

    2. ABSTRACT

      Reviewer 2. Deyvid Amgarten

The manuscript presents DeePVP, a new tool for PVP annotation of a phage genome. The tool implements two separate modules: the main module aims to discriminate PVPs from non-PVPs within a phage genome, while the extended module can further classify predicted PVPs into the ten major classes of PVPs. Compared with the present state-of-the-art tools, the main module of DeePVP performs better, with a 9.05% higher F1-score in the PVP identification task. Moreover, the overall accuracy of the extended module in the PVP classification task is approximately 3.72% higher than that of PhANNs, a known tool in the area. Overall, the manuscript is well written and clear, and I could not identify any serious methodological inconsistency. I was not sure whether to consider the performance metrics shown as significant improvements or not, since PhANNs already does a similar job in that regard, and is better for some types of PVPs, for example. But I would rather leave this judgement to readers and other researchers in the area. Specifically, I enjoyed the discussion about how one-hot encoded features may be more suitable for prediction than k-mer-based ones, and, by consequence, how convolutional networks may present an advantage over simple multilayer perceptron networks. This manuscript brings an important contribution to the phage genomics and machine learning fields. I am certain that DeePVP will be helpful to many researchers.

I have a major question about the composition of the dataset used to train the main module: among the PVP proteins, do the authors know if only the ten types of PVP are present? There is a rapid mention of key words used to assemble the PhANNs dataset in the discussion (line 340), but that is not clear to me. This will help me understand the following. Line 124: the CNN in the extended module has an output softmax layer, which outputs likelihood scores for 10 types of virion proteins. I wonder if only proteins from these 10 types were included in the datasets used to train the CNNs. I mean, is it possible that a different type of virion protein is predicted by the main module as a PVP? And if so, how would the extended module predict this protein, since it is a PVP but none of the ten types? (See the sketch after these comments.)

Minors:
Line 121: "By default, a protein with a PVP score higher than 0.5 is regarded as a PVP." How was this cutoff chosen? Was this part of the k-fold cross-validation process?
Line 157 and other places in the manuscript: I would suggest the authors not use sentences like "F1-score is 9.05% much higher than that of PhANNs", for the obvious reason that 9% may not seem such a great difference as to justify the adverb "much". The same goes for "much better" and variations.
About the comparisons between DeePVP and PhANNs: did the authors make sure that instances of the test set were not used to train the PhANNs model being used?
Line 221: what do the authors mean by "more authentic prediction"?
Looking at the GitHub repository, I found it rather unusual that the authors chose to upload only a PDF with instructions on how to use and install the tool. It is very detailed, which I appreciate; the virtual machine and docker containers are also nice resources to help less experienced users. However, the GitHub repository has no clear mention of the source code of the tool. I found it via a mention in the Availability of supporting data, where the authors created a release with the datasets and the scripts. Again, very unusual, but I suppose the authors chose this approach due to GitHub limitations on large files.
Table 2: I would like to ask the authors what might be the reason for such low performance metrics for some types of PVP (for example, minor capsid)?
Figure 5 states: "Host genus composition of the subject sequences". But there is a "Myoviridae" category, which is a family of phages, not anything related to bacterial hosts. Please verify why this is in the figure.
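To make the softmax question concrete, here is a minimal sketch of the kind of model under discussion: a 1-D CNN over one-hot-encoded protein sequences ending in a 10-way softmax. The layer sizes and maximum length are illustrative assumptions, not DeePVP's actual architecture:

```python
import torch
import torch.nn as nn

N_AMINO_ACIDS, MAX_LEN, N_CLASSES = 20, 1000, 10   # 10 major PVP classes

model = nn.Sequential(
    nn.Conv1d(N_AMINO_ACIDS, 64, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),    # pool over the sequence dimension
    nn.Flatten(),
    nn.Linear(64, N_CLASSES),   # logits for the 10 PVP classes
)

x = torch.zeros(2, N_AMINO_ACIDS, MAX_LEN)   # a batch of one-hot sequences
probs = torch.softmax(model(x), dim=1)       # rows always sum to 1, so the
# network must assign every input to one of the 10 classes, even a PVP of an
# eleventh type -- which is exactly the concern raised above.
```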

      Re-review:

Thank you for the authors' responses. Most of my concerns were addressed. I have to say, though, that the GitHub page is not quite up to the standards for a bioinformatics tool yet. I appreciate the source code upload, but I noticed that not a single line of #comments is present in the code I checked. The README file is also not very clarifying. I do not consider this an impediment for publication (since there is detailed info in GigaScience DB), but perhaps this may hinder usage of the authors' tool. Most users will only look at the GitHub repository. I suggest some improvements, in case the authors judge that my comment makes sense. Below I list three examples just to give the authors an idea:

      https://github.com/fenderglass/Flye https://github.com/LaboratorioBioinformatica/MARVEL https://github.com/vrmarcelino/CCMetagen

One last concern is about the authors' response to the Myoviridae mistake in figure 5. The authors stated that the genus of a phage's host is in its name (as, for example, Escherichia phage XX). But this is a dangerous assumption, since many phage names fall outside this rule. For example, there are many phages named Enterobacteria phage XXX (for instance NC_054905.1), meaning that they infect some enterobacteria. Again, enterobacteria is not a genus. Phage nomenclature can be a mess sometimes; be careful.

    1. Studies

      This work has been published in GigaScience Journal under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac068) and has published the reviews under the same license.

Reviewer 1 Tomas Sigvard Klingström

As a researcher who may occasionally use long-read sequencing for projects, it is immensely helpful to get an insight into the experience accumulated through work related to the Vertebrate Genomes Project (VGP). My personal research interest on the subject is more about understanding why and how DNA fragments during DNA extraction. Due to my work in that area, I have one key question regarding the interpretation of the data presented in figure 2, and then a number of suggestions for minor edits. The answer on how to interpret figure 2 may require some minor edits, but the article is regardless a welcome addition to what we know about good practices for DNA extraction generating ultra-high molecular weight DNA. It should also be noted that the DOI link to Data Dryad seems broken, and I have therefore not looked at the supplementary material.

In figure 2 the size distribution of DNA fragments is visualized for the different experiments. Most of the fragment distributions look like I would have expected based on the work we did in the article cited as nr 25 in the reference list. However, the muscle tissue from rats and the blood samples from the mouse and the frog indicate that there may be a misinterpretation in the article regarding the actual size distribution of fragments, which needs to be looked into.

Starting with the mouse plots, and especially the muscle one: there must either have been a physical shearing event that drastically reduced the size of the DNA (using the terminology from ref 25, this would mean that physical shearing generated a characteristic fragment length of approximately 300-400 kb), or the lack of a sharp slope on the rightmost side of the ridgeline plot is due to the way the image was processed. All other animals got a peak on the rightmost side of the ridgeline plot, and the agarose plug should, based on the referenced methods paper [7], generate megabase-sized fragments which far exceed the size of the scale used in figure 2. I would presume these larger fragments would get stuck in or near the well, which makes it easy to accidentally cut them out during the image analysis step, and this may explain their absence in the mouse samples. This leads me to the conclusion that the article is well designed to capture the impact of chemical shearing caused by different preservation methods, but would benefit from evaluating whether figure 2 properly covers the actual size distribution of fragments, or only covers the portion of DNA fragments small enough to actually form bands on the PFGE gel, with a substantial part of the DNA stuck in or near the well.

The frog plot is a good example of how this may influence our interpretation of the ridgeline plots. If the extraction method generates high-quality DNA concentrated in the 300-400 kb range, then there must be something very special about the frog DNA from blood, as there is a continuous increase in brightness all the way to the edge of the image. This implies that the sample contains a high amount of much larger DNA fragments than the other samples. I find this rather unlikely, and if I saw this in my own data I would assume that we had a lot of very large DNA fragments that are out of scale for the gel electrophoresis, but that in the case of the frog blood samples many of these fragments have been chemically sheared, creating the "smeared" pattern we see in figure 2.

Minor edits and comments:
• The Dryad DOI doesn't work for me.
• Figure 1 - The meaning of x3 and x2 for the turtle should be described in the caption.
• Figure 2 - Having the scale indicator (48.5, 145.5 etc.) at the top as well as the bottom of each column would make it quicker to estimate the distribution of samples.
• The article completely omits Nanopore sequencing; is there a specific reason why the lessons here are not applicable to ONT?
• There is a very interesting paragraph starting with "The ambient temperature of the intended collecting locality should be a major consideration in planning field collections for high-quality samples. Here we test a limited number of samples at 37°C to". Even if the results were very poor, information about the failed conditions would be appreciated. What tissues/animals did you use, did you do any preservation at all for the samples, and did you measure the fragment length distribution anyway? Simply put, even if the DNA was useless for long-read sequencing, it is an interesting data point for the dynamics of DNA degradation and a valuable lesson for planning sampling in warm climates.

      Re-review:

All questions and comments made in my first review are now resolved. I understand the thought process behind the first cropping of figure 2, but I appreciate the 2nd version, as it makes it easier for researchers with a limited understanding of the experiment to interpret the data.

    2. Abstract

      Reviewer 2. Elena Hilario

      I am glad to have been selected as reviewer for the manuscript "Benchmarking ultra-high molecular weight DNA preservation methods for long-read and long-range sequencing" by Dahn and colleagues. The manuscript reports a detailed guide on the effect of preservation methods on the quality of the DNA extracted from a wide range of animal tissues. Although the work is only focused on vertebrates, it is a great foundation to conduct similar studies on plants, invertebrates and fungi, for example. Although the effectiveness of the tissue/preservative combination was only tested with the preparation of long range libraries, it would have been useful to select one or two cases for long range sequencing (PacBio or Oxford Nanopore) to explore the impact of the different QC parameters measured in this study.

      Minor comments and corrections are included in the file uploaded

    1. Background

This work has been published in GigaScience Journal under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac075) and has published the reviews under the same license.

      Reviewer 1. Nikos Karaiskos

      Reviewer Comments to Author: In this article the authors developed Stardust, a computational method that can be used for spatially-informed clustering by combining transcriptional profiles and spatial information. As spatial sequencing technologies gain popularity, it is important to develop tools that can efficiently process and analyse such datasets. Stardust is a new method that goes in this direction. It is particularly appealing to make use of the spatial information and relationships to cluster gene expression in these datasets. Overall the quality of data used is high and the manuscript is clearly written. The algorithm behind Stardust is simple and consists of an interpolation between spatial and transcriptional distance matrices. A single parameter called space weight controls the contribution of the spatial distance matrix. The authors benchmark Stardust against other recently developed tools in five different spatial transcriptomics datasets by using two measures. Stardust therefore holds the potential of being a useful method that can be applied in different datasets.

Before recommending the manuscript for publication, however, the authors should thoroughly address the following points:

1. What is the rationale behind modelling the contributions as a linear sum of the spatial and transcriptional distance matrices? In particular, why did the authors not consider non-linear relationships as well? As cells neighboring in space often share similar transcriptional profiles (see for instance Nitzan et al., 2019 for this line of reasoning and several examples therein), I would expect product terms to be even more informative (see the sketch after this list).
2. The authors demonstrate Stardust's performance only on datasets obtained with the 10X Visium platform. How does Stardust perform on higher-resolution methods, such as Slide-Seq, Seq-scope, etc.? As ST methods will improve in resolution in the future, it is critical to be able to analyze such datasets as well. An important question here concerns scalability: how well does Stardust scale with the number of cells/spots?
3. In Fig. 1b conclusions are drawn based on the CSS for different space weights, but only for a clustering parameter of 0.8. What happens for other clustering values? And can the authors comment on why the different space weight values do not perform consistently across the datasets (i.e. 0.5 is better for HBC2 but 0.75 for MK)?
4. The authors compared Stardust with four other tools. The conclusion is that Stardust outperforms all other methods, and performs equivalently with BayesSpace. All of these methods, however, rely on choosing specific values for a number of parameters. Did the authors optimize these values when they benchmarked these methods against Stardust?
5. I was able to successfully install Stardust and run it. The resulting clusters in the Seurat object, however, were all NAs. The authors should make an effort to better document how Stardust runs, including the input structure that the tool expects and potential issues that might arise.
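To make point 1 concrete, a plausible formalisation (the notation is mine, not the authors'): the combination described is a weighted sum of the spatial distance matrix $S$ and the transcriptional distance matrix $T$, while a product (interaction) term would be one simple non-linear extension:

$$D_{ij} = w\,S_{ij} + (1 - w)\,T_{ij} \qquad \text{vs.} \qquad D_{ij} = w\,S_{ij} + (1 - w)\,T_{ij} + \lambda\,S_{ij}\,T_{ij},$$

where $w \in [0,1]$ is the space weight and $\lambda$ sets the strength of the interaction term.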

      Re-review: The authors have successfully addressed all raised points. The introduction of Stardust*, in particular, is a valuable enhancement of the method. Therefore, I recommend the manuscript for publication.

    2. Spatial

      Reviewer 2. Quan Nguyen

Reviewer Comments to Author: This work presents a new clustering method, Stardust, that has the potential to improve the stability of clustering results against parameter changes. Stardust can assess the contribution of spatial information to the clustering result, relative to gene expression information. Stardust appears to perform better than other methods on the two metrics used in this paper, stability and coefficient of variation. The essence of the method is the use of a spatial transcriptomics (ST) distance matrix that is a simple linear combination of the physical distance (S) and transcriptional distance (T) matrices. A weight factor is used for the S matrix to control and evaluate the contribution of the spatial information. The effort put into evaluating multiple parameters and comparing with several recent methods across a number of public spatial datasets is a highlight of the work. The authors also made the code available.

Major comments:

• The concept of combining spatial location and gene expression is not new and has been applied in most spatial clustering methods. It is not clear what the new additions to currently available methods are, except for a feature to weigh the contribution of spatial components to clustering results.
• The approach to assessing the contribution of spatial information, by varying the weight factor from 0 to 1, is rather simple, because the contribution can be nonlinear and vary between spots/cells (e.g. spatial distance becomes more important for spots/cells that are nearer to each other; some genes are more spatially variable than others; applying one weight factor for all genes and all spots would miss these sources of variation).
• The 5 weight factors 0, 0.25, 0.50, 0.75, and 1 were used. However, this range of parameters provides too few data points to assess the impact of the spatial factor. As seen in the figures, the 5 data points do not strongly suggest a point where the spatial contribution is maximal/minimal, due to the large fluctuation of values on the y-axis.
• Although two performance metrics are used (stability and variation), there needs to be an additional metric for how well the clustering results represent ground-truth cell type composition or tissue architecture (for example, by comparing to pathological annotation). Consequently, it is unclear whether the Stardust results are closer to the biological ground truth or not.
• Stardust was tested on multiple 10x Visium datasets, but other types of spatial transcriptomics data, like seqFISH, Slide-seq, MERFISH, etc., are also common. An extended assessment of potential applications to other technologies would be useful.

Minor comments:

• The paragraphs and figure legends in the Result section are repetitive.
• The Result section is descriptive and there is no Discussion section.

      Re-review:

The authors have improved the initial manuscript markedly. There are a couple of important points regarding comparisons between Stardust and Stardust* that need to be addressed: 1) In which cases does Stardust* improve over Stardust? It seems the results would depend on the biological system (i.e., tissue type). The authors suggest both versions produce comparable results, but given the major change in the formula (replacing a constant weight with variable weights, namely gene expression values min-max normalised to [0,1]; see the sketch below), there are likely differences between Stardust and Stardust*. For example, certain genes will have higher weight than others, making the effects of the weights variable among genes. For this example, the authors may assess highly abundant genes vs. low-abundance genes. 2) In cases where spatial distances are important, Stardust* could be less accurate than a Stardust version with a high space weight. How does Stardust* handle cases where spatial distance is as important as gene expression?
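For reference, min-max scaling as described above maps each expression value $x$ to a weight (a plausible reading of the Stardust* change, not necessarily the authors' exact formula):

$$w = \frac{x - \min(x)}{\max(x) - \min(x)} \in [0, 1],$$

so highly expressed genes receive weights near 1 and lowly expressed ones weights near 0.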

    1. Survival

      Reviewer 2. Animesh Acharjee

      SurvBenchmark: comprehensive benchmarking study of survival analysis methods using both omics data and clinical data.

The authors compared many survival analysis methods and created a benchmarking framework called SurvBenchmark. This is an extensive study of survival analysis and will be useful for the translational community. I have a few suggestions to improve the quality of the manuscript.

1. Figure 1: LASSO, EN and Ridge are regularization methods. So I would suggest including a new classification category, "regularization" or "penalization methods", and taking those out of the non-parametric models. Obviously this also needs to be reflected in the methodology section and discussion.
2. Data sets: please provide a table of the six clinical and ten omics data sets with the number of samples, the number of features, and reference links.
3. Discussion section: How should the choice of method be made? What criteria should be used? I understand that one size does not fit all, but some sort of clear guidance would be very useful. Sample-size-related aspects also need more discussion. In omics research the number of samples is really limited, and deep-learning-based survival analysis is not feasible, as the authors mention in lines 328-331. So the question arises: when should we use deep-learning-based methods and when should we not?

Reviewer 3. Xiangqian Guo

Accept

    2. Abstract

      This work has been published in GigaScience Journal under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac071) and has published the reviews under the same license.

      Reviewer 1. Moritz Herrmann

      First review: Summary:

      The authors conducted a benchmark study of survival prediction methods. The design of the study is reasonable in principle. The authors base their study on a comprehensive set of methods and performance evaluation criteria. In addition to standard statistical methods such as the CoxPH model and its variants, several machine learning methods including deep learning methods were used. In particular, the intention to conduct a benchmark study based on a large, diverse set of datasets is welcome. There is indeed a need for general, large-scale survival prediction benchmark studies. However, I have serious concerns about the quality of the study, and there are several points that need clarification and/or improvement.

      Major issues:

1. The method comparison does not seem fair

As far as I can tell from the description of the methods, the method comparison is not fair and/or not informative. In particular, given the information provided in Supp-Table-3 and the code provided in the Github repository, hyperparameter tuning has not been conducted for some methods. For example, Supp-Table-3 indicates that the parameters 'stepnumber' and 'penaltynumber' of the CoxBoost method are set to 10 and 100, respectively. Similarly, only two versions of RSF with fixed ntree (100 and 1000) and mtry (10, 20) values are used. Also, the deep learning methods appear not to have been extensively tuned. On the other hand, judging from the code, methods such as the Cox model variants (implemented via glmnet) and MTLR have been tuned at least a little. Please explain clearly and in detail how the hyperparameters were specified and how hyperparameter tuning was conducted for the different methods. If, in fact, not all methods have been tuned, this is a serious issue and the experiments need to be rerun under a sound and fair tuning regime (a generic sketch of such a regime follows below).
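For illustration, a generic nested cross-validation pattern that would give every method the same tuning budget; sketched with a scikit-learn classifier for brevity, so the estimator, grid, and data are placeholders rather than the methods or datasets of this study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

inner = KFold(n_splits=5, shuffle=True, random_state=0)   # tuning folds
outer = KFold(n_splits=5, shuffle=True, random_state=1)   # evaluation folds

tuned = GridSearchCV(                  # tuning happens inside the training folds
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 1000], "max_features": [10, 20]},
    cv=inner,
)
scores = cross_val_score(tuned, X, y, cv=outer)   # same regime for every method
print(scores.mean())
```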

2. Description of the study design

Related to the first point, the description of the study design needs to be improved in general, as it does not allow the conducted experiments to be assessed in detail. A few examples which require clarification:

• as already mentioned, the method configurations and implementations are not described sufficiently. It is unclear how exactly the hyperparameter settings were obtained, how tuning was applied, and why only for some methods
• concerning the methods Cox(GA), MTLR(GA), COXBOOST(GA), MTLR(DE), COXBOOST(DE): have the feature selection approaches been applied on the complete datasets or only on the training sets?
• Supp-Table-3 lists two implementations of the Lasso, Ridge and Elastic Net Cox methods (via penalized and glmnet); yet Figure 2 in the main manuscript only lists one version. Which implementations have been used and are reported in Figure 2?
• l. 221: it is stated that "the raw Brier score" has been calculated. At which time point(s), and why at this/these time point(s)?
• Supp-Table-2: it is stated that "some methods are not fully successful for all datasets", but only DNNSurv is further examined. Is it just DNNSurv, or are there other methods that failed in some iterations? Moreover, what has been done about the failing iterations? Have the missing values been imputed? Are the failing iterations ignored?

I recommend that section 3 be comprehensively revised and expanded, in particular regarding the methods' implementations, how hyperparameters were obtained and how tuning was conducted, the aggregation of performance results, and the handling of failing iterations. Moreover, I suggest providing summary tables of the methods and datasets in the main manuscript and not in the supplement.

3. Reliability of the presented results

In other studies [BRSB20, SCS+20, HPH+20], differences in (mean) model prediction performance have been reported to be small (while variation over datasets can be large). This can also be seen in Figure 3 of the main manuscript. Please include more analyses of the variability of prediction performances, and also include a comparison to a baseline method such as the Kaplan-Meier estimate. Most importantly, if some methods have been tuned while others have not, the reported results are not reliable. For example, the untuned methods are likely to be ill-specified for the given datasets and may thus yield sub-optimal prediction performance. Moreover, if internal hyperparameter tuning is conducted for some methods, for example via cv.glmnet for the Cox model variants, and not for others, the computation times are also not comparable.

4. Clarity of language, structure and scope

I believe that the quality of the written English is not up to the standard of a scientific publication and consider language editing necessary (yet it has to be taken into account that I am not a native speaker). Unlike related studies [BWSR21, SCS+20, e.g.], the paper lacks clarity and/or coherence. Although clarity and coherence can be improved with language editing, there are also imprecise descriptions in section 2 that may additionally require editing from a technical perspective. For example:

• l. 76 - 78: The way censoring is described is not coherent, e.g.: "the class label '0' (referring to a 'no-event') does not mean an event class labelled as '0'". Furthermore, it is not true that "the event-outcome is 'unknown'". The event is known, but the exact event time is not observed for censored observations.

• The authors aim to provide a comprehensive benchmarking study of survival analysis methods. However, they do not, for example, provide significance tests for performance differences nor critical difference plots (it should be noted that the number of datasets included may not provide enough power to do so). This is in stark contrast to the work of Sonabend [Son21].

      I suggest revising section 2 using more precise terminology and clearly describing the scope of the study, e.g., what type of censoring is being studied, whether time-dependent variable and effects are of interest, etc. I think this is very important, especially since the authors aim to provide "practical guidelines for translational scientists and clinicians" (l. 32) who may not be familiar with the specifics of survival analysis.

      Minor issues

      • l. 43: Include references for specific examples
      • l. 60: The cited reference probably is not correct
      • l. 266: "MTLR-based approaches perform significantly better". Was a statistical test performed to determine significant differences in performance? If yes, indicate which test was performed. If not, do not use the term "significant" as this may be misunderstood as statistical significance.
      • Briefly explain what the difference is between data sets GE1 to GE6.
      • It has been shown that omics data alone may not be very useful [VDBSB19]. Please explain why only omics variables are used for the respective datasets.
      • Figure 1: Consider changing the caption to 'An overview of survival methods used in this study' as there are survival methods that are not covered. Moreover, consider referencing Wang et al [WLR19] as Figure 1a resembles Figure 3 presented therein.
      • Figure 2: Please add more meaningful legends (e.g., title of legend; change numbers to Yes, No, etc.).
      • Figure 2 a & b: What do the dendrograms relate to?
      • Figure 2 d: The c-index is not a proper scoring rule [BKG19] (and only measures discrimination), better use the integrated Brier score (at best, at different evaluation time points) as it is a proper scoring rule and measures discrimination as well as calibration.
• Figure 3: At which time point is the Brier score evaluated, and why at that time point? Consider using the integrated Brier score instead (see the sketch after this list).
      • This is rather subjective, but I find the use of the term "framework", especially that the study contributes by "the development of a benchmarking framework" (l. 60), irritating. For example, a general machine learning framework for survival analysis was developed by Bender et al. [BRSB20], while general computational benchmarking frameworks in R are provided, e.g., by mlr3 [LBR+19] or tidymodels [KW20]. The present study conducts a benchmark experiment with specific design choices, but in my opinion it does not develop a new benchmarking framework. Thus, I would suggest not using the term "framework" but better "benchmark design" or "study design".
      • In addition, the authors speak of a "customizable weighting framework" (l. 241), but never revisit this weighting scheme in relation to the results and/or provide practical guidance for it. Please explain w.r.t. the results how this scheme can and should be applied in practice.
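To make the Brier-score remarks in the two figure bullets concrete, the time-dependent Brier score and its integrated version in the standard IPCW formulation (Graf et al. 1999) are

$$\mathrm{BS}(t) = \frac{1}{n}\sum_{i=1}^{n} \hat{w}_i(t)\,\bigl(\mathbb{1}(t_i > t) - \hat{S}(t \mid x_i)\bigr)^2, \qquad \mathrm{IBS} = \frac{1}{t_{\max}}\int_0^{t_{\max}} \mathrm{BS}(t)\,\mathrm{d}t,$$

where $\hat{S}(t \mid x_i)$ is the predicted survival probability and $\hat{w}_i(t)$ are inverse-probability-of-censoring weights; reporting a single "raw" Brier score therefore only makes sense once the evaluation time point $t$ is stated.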

The references need to be revised. A few examples:
• l. 355 & 358: This seems to be the same reference.
• l. 384: Title missing
• l. 394: Year missing
• l. 409: Year missing
• l. 438: BioRxiv identifier missing
• l. 441: ArXiv identifier missing
• l. 445: Journal & Year missing

Typos:
• l. 66: . This
• l. 89: missing comma after the formula
• l. 93: missing whitespace
• l. 107: therefore, (no comma)
• l. 121: where for each, (no comma)
• l. 170: examineS
• l. 174: therefore, (no comma)
• l. 195: as part of A multi-omics study; whitespace in wrong position; the sentence does not appear correct
• l. 323: comes WITH a

      Data and code availability

      Data and code availability is acceptable. Yet, the ANZDATA and UNOS_kidney data are not freely available and require approval and/or request. Moreover, for better reproducibility and accessibility, the experiments could be implemented with a general purpose benchmarking framework like mlr3 or tidymodels.

      References

[BKG19] Paul Blanche, Michael W Kattan, and Thomas A Gerds. The c-index is not proper for the evaluation of t-year predicted risks. Biostatistics, 20(2):347-357, 2019.
[BRSB20] Andreas Bender, David Rügamer, Fabian Scheipl, and Bernd Bischl. A general machine learning framework for survival analysis. arXiv preprint arXiv:2006.15442, 2020.
[BWSR21] Andrea Bommert, Thomas Welchowski, Matthias Schmid, and Jörg Rahnenführer. Benchmark of filter methods for feature selection in high-dimensional gene expression survival data. Briefings in Bioinformatics, 2021. bbab354.
[HPH+20] Moritz Herrmann, Philipp Probst, Roman Hornung, Vindi Jurinovic, and Anne-Laure Boulesteix. Large-scale benchmark study of survival prediction methods using multi-omics data. Briefings in Bioinformatics, 22(3), 2020. bbaa167.
[KW20] M Kuhn and H Wickham. Tidymodels: Easily install and load the 'tidymodels' packages. R package version 0.1.0, 2020.
[LBR+19] Michel Lang, Martin Binder, Jakob Richter, et al. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903, 2019.
[SCS+20] Annette Spooner, Emily Chen, Arcot Sowmya, Perminder Sachdev, Nicole A Kochan, Julian Trollor, and Henry Brodaty. A comparison of machine learning methods for survival analysis of high-dimensional clinical data for dementia prediction. Scientific Reports, 10(1):1-10, 2020.
[Son21] Raphael Edward Benjamin Sonabend. A theoretical and methodological framework for machine learning in survival analysis: Enabling transparent and accessible predictive modelling on right-censored time-to-event data. PhD thesis, UCL (University College London), 2021.
[VDBSB19] Alexander Volkmann, Riccardo De Bin, Willi Sauerbrei, and Anne-Laure Boulesteix. A plea for taking all available clinical information into account when assessing the predictive value of omics data. BMC Medical Research Methodology, 19(1):1-15, 2019.
[WLR19] Ping Wang, Yan Li, and Chandan K Reddy. Machine learning for survival analysis: A survey. ACM Computing Surveys (CSUR), 51(6):1-36, 2019.

      Re-review:

      Many thanks for the very careful revision of the manuscript. Most of my concerns have been thoroughly addressed. I have only a few remarks left.

      Regarding 1. Fair comparison and parameter selection The altered study design appears much better suited to this end. Thank you very much for the effort, in particular the additional results regarding the two tuning approaches. Although I think a single simple tuning regime would be feasible here, using the default settings is reasonable and very well justified. I agree that this is much closer to what is likely to take place in practice. However, it should be more clearly emphasized that better performance may be achievable if tuning is performed.

      Regarding 2. Description Thanks, all concerns properly addressed. No more comments.

      Regarding 3. Reliability I am aware that Figure 2c provides information to this end. I think additional boxplots which aggregate the methods' performance (e.g. for unoc and bs) over all runs and datasets would provide valuable additional information. For example, from Figure 2c one can tell that MTLR variants obtain overall higher ranks based on mean prediction performance than the deep learning methods. However, it says nothing about how large the differences in mean performance are.

      Kaplan-Meier-Estimate (KM) I'm not quite sure I understood the authors' answer correctly. The KM does not use variable information to produce an estimate of the survival function, and I think that is why it would be interesting to include it. This would shed light on how valuable the variables are in the different data sets.

      Regarding 4. Scope and clarity Thanks, all concerns properly addressed. No more comments.

      Minor points:

      • Since the authors decided to change 'framework' to 'design', note that in Figure 1b it still says 'framework'
      • l.51 & l.54/55 appear to be redundant
• Figure 2 a and b:
• Please elaborate more on how similarity (reflected in the dendrograms) is defined.
• Why is the IBS more similar to Bregg's and GH C-Index than to the Brier Score?
• Why is the IBS not feasible for so many methods, in particular Lasso_Cox, Ridge_Cox, and CoxBoost?
    1. Abstract

This work has been published in GigaScience Journal under a CC-BY 4.0 license (https://doi.org/10.1093/gigascience/giac073) and has published the reviews under the same license.

      Reviewer 1. Siyuan Ma

Reviewer Comments to Author: In Kang, Chong, and Ning, the authors present Meta-Prism 2, a microbial community analysis framework which calculates sample-sample dissimilarities and queries microbial profiles similar to those of user-provided targets. Meta-Prism 2 adopts efficient algorithms to achieve the time and memory efficiency required for modern microbiome "big data" application scenarios. The authors evaluated Meta-Prism 2's performance, both in terms of separating different biomes' microbial profiles and in time/memory usage, on a variety of real-world studies. I find the application target of Meta-Prism appealing: achieving efficient dissimilarity profiling is increasingly relevant for modern microbiome applications. However, I'm afraid the manuscript appears to be in a poor state, with insufficient details for crucial methods and results components. Some display items are either missing or mis-referenced. As such, I cannot recommend its acceptance unless major improvements are made. My comments are detailed below.

Major

1. The authors claim that, relative to its previous iteration, the biggest improvements are: (1) removal of redundant nodes in 1-against-N sample comparisons; (2) functionality for similarity matrix calculation; (3) exhaustive search among all available samples.

a. (1) seems the most crucial for the method's improved efficiency. However, the details on why these nodes can be eliminated, and on how dissimilarity calculation is achieved post-elimination, are not sufficient. The caption for Figure 1C and the relevant Methods text (lines 173-188) should be expanded, to at least explain i) why it is valid to calculate (dis)similarity post-elimination based on aggregation, and ii) how aggregation is achieved for the target samples.

b. I may not have understood the authors on (2), but this improvement seems trivial? Is it simply that Meta-Prism 2 has a new function to calculate all pair-wise dissimilarities on a collection of microbial profiles?

c. For (3), it should be made clearer that Meta-Prism 1 does not do this. I needed to read the authors' previous paper to understand the comment about better flexibility on customized datasets. I assume that this improvement is enabled because Meta-Prism 2 is vastly faster than version 1? If so, it might be helpful to point this out explicitly.

2. I am lost on the accuracy evaluation results for predicting different biomes (Figure 2).

a. How are biomes predicted for each microbial sample?
b. What is the varying classification threshold that generates different sensitivities and specificities?
c. Does "cross-validation" refer to, e.g., selection of tuning parameters during model training, or to evaluating model performance?
d. What are the "Fecal", "Human", and "Combined" biomes for the FEAST cohort? Such details were not provided in Shenhav et al.

Moderate

1. I understand that this was previously published, but could the authors comment on the intuitions behind their dissimilarity measure, and how it compares to similar measures such as the weighted UniFrac?

a. Do Meta-Storm and Meta-Prism share the same similarity definition? If so, why would they differ in terms of prediction accuracy?

2. There seems to be some mis-referencing of the panels of Figure 1.

a. Panel B is not explained at all in the figure caption.
b. Line 185 references Figure 1E, which does not exist.

Minor

1. The Meta-Prism 1 publication was referenced with duplicates (#16 and #24).
2. There are minor language issues throughout the manuscript, but they do not affect understanding of the material. Examples:

a. Line 94: analysis -> analyze
b. Line 193: We also obtained a dataset that consists of ...

      Re-review:

I find most of my questions addressed. My only remaining issue is that the three biomes from FEAST (Fecal, Human, and Mixed) are still not clearly defined. The only definition I could find is in lines 206-208: "We also obtained a dataset that consists of 10,270 samples belonging to three biomes: Fecal, Human, and Mixed, which have been used in the FEAST study, defined as the FEAST dataset". Are "Fecal" simply stool samples, and "Human" samples biopsies from the human gut? What is "Mixed"? As a main utility of Meta-Prism is source tracking, it is important for the reader to understand what these biomes are, in order to understand the resolution of the source tracking results. If this can be resolved, I'll be happy to recommend the manuscript's acceptance.

      Reviewer 2. Yoann Dufresne

In this article the authors present Meta-Prism 2, a software tool to compute distances between metagenomic samples and also to query a specific sample against a pool of samples. They call a "sample" a precomputed file with the abundances of multiple taxa. In the article they first succinctly present multiple aspects of the underlying algorithms. Then they provide an extensive analysis of the precision, RAM, and time consumption of the software. Finally, they show 3 applications of Meta-Prism 2.

I will start by saying that the execution time of the tool looks very good compared to all other tools. But I have multiple concerns about these numbers.
- First, I like to reproduce the results of a paper before approving it. But I had a few problems doing so.
  * The tool does not compile as it is on git. I had to modify a line of code to compile it. This is nothing very bad, but authors of tools should be sure that their main code branch always compiles. See the end of the review for the bug and fix.
  * The analyses are done using samples from MGnify. I found related OTU tsv files linked in the supplementary but no explanation of how to transform such files into the pdata files that the software processes.
  * The only way to directly reproduce the results is to trust the pdata files present on the authors' github. I would like to run my own experiments and compare the time to transform OTU files into pdata with the actual run time of MP2.
- The authors evaluated the accuracy of their method (which is nice) but did not give access to the scripts that were used for that. I would like to see the code and try to reproduce the figure by myself on my own data.
- The 2nd and 3rd applications are explained in plain text, but there is no related script, nor any table or graphic to reproduce or explain the results. The only way for me to evaluate this part is to trust the word of the authors. I would like the authors to show me clear and indisputable evidence.

For the methods part it is similar. We have hints of what the authors did, but not a full explanation:
- For the similarity function, I would like to know where it comes from. The cited papers [14] and [24] do not help in understanding the formula. If the function is from another paper, I ask the authors to add a clear reference (paper + section in the paper); if not, I would like the authors to explain in detail why this particular function, how they constructed it, and how it behaves.
- The authors refer multiple times to a "sparse format" applied to disk & cache but never define what they mean by that. I would like to see in this section which exact data structure is used.
- In the fast 1-N sample comparison, the authors write about "current methods" without citing them. I would like the authors to refer to precise methods/software, succinctly describe them, and then compare their method against those. Also in this part, the authors point at Figure 1E, which is not present in the manuscript.
- Figure 1 is not fully understandable without further details in the text. For example, what is Figure 1C4?

I want to point out that the paper is not correctly balanced in terms of content. One and a half pages of execution-time analysis is too much compared to the two pages of methods and less than one page of real data applications.

Finally, the authors are presenting a software tool but are not following development standards. They should provide unit and functional tests for their software. I also strongly recommend that they set up continuous integration for the git repository. With such a tool, the compilation problem would not exist.

To conclude, I think that the authors engineered the software very well but did not present it the right way. I suggest the authors rewrite the paper with strong improvements to the "methods" and "real data application" sections. Also, to provide long-term useful software, they have to add guarantees to the code, such as tests and CI.

For all these reasons, I recommend rejecting this paper.

      --- Bug & Fix ---

```
make
mkdir -p build
g++ -std=c++14 -O3 -m64 -march=native -pthread -c -o build/loader.o src/loader.cpp
g++ -std=c++14 -O3 -m64 -march=native -pthread -c -o build/newickParser.o src/newickParser.cpp
g++ -std=c++14 -O3 -m64 -march=native -pthread -c -o build/simCalc.o src/simCalc.cpp
g++ -std=c++14 -O3 -m64 -march=native -pthread -c -o build/structure.o src/structure.cpp
g++ -std=c++14 -O3 -m64 -march=native -pthread -c -o build/main.o src/main.cpp
src/main.cpp: In function 'int main(int, const char**)':
src/main.cpp:128:31: error: 'class std::ios_base' has no member named 'clear'
  128 |     buf.ios_base::clear();
      |                   ^~~~~
make: *** [makefile:7: build/main.o] Error 1
```

      To fix the bug: src/main.cpp:128 => buf.ios.clear();
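For context on why a one-line change here works: `std::ios_base` declares no `clear()`; the error-state `clear()` is declared lower in the stream hierarchy, in `std::basic_ios`. A minimal standalone sketch (the stream and file name are hypothetical, not Meta-Prism 2's actual code):

```cpp
#include <fstream>

int main() {
    std::ifstream buf("samples.pdata");  // hypothetical input stream
    // buf.ios_base::clear();  // error: std::ios_base has no member 'clear'
    buf.std::ios::clear();     // OK: clear() is declared in std::basic_ios
    buf.clear();               // equivalent and idiomatic
    return 0;
}
```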

    1. Abstract

This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.77), and has published the reviews under the same license.

      Reviewer 1. Cicely Macnamara

The manuscript entitled "PhysiCOOL: A generalized framework for model Calibration and Optimization Of modeLing projects" is succinctly written; its purpose is clear, and the software created is simple yet effective. I think improvements could be made to the documentation, allowing a non-expert user to make use of this valuable tool. I also have a few minor comments below. Otherwise I am happy to recommend the publication of this paper.

Minor comments:
(1) Could the authors clarify in the paper (where it says PhysiCOOL has partial support for PhysiCell v1.10.3 and higher) whether it is the authors' intention to keep this tool up to date with newer releases of PhysiCell?
(2) For the multilevel parameter sweep, the authors suggest that the number of levels and grid parameters can be defined by the user. Do the authors have any suggestions on picking an appropriate number of levels, for example, or could future development include some form of dynamic choice of the number of levels, e.g., stopping when a certain degree of accuracy is reached? (A sketch of such a stopping rule follows this list.)
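To make the suggestion concrete, here is a minimal sketch of a multilevel grid sweep with a dynamic stopping rule of the kind proposed; the objective function, bounds, and constants are all hypothetical stand-ins, not PhysiCOOL's actual API:

```cpp
#include <cmath>
#include <cstdio>

// Placeholder objective: distance between model output and target data.
double objective(double x) { return std::fabs(std::sin(x) - 0.5); }

int main() {
    double lo = 0.0, hi = 3.0;        // initial parameter bounds (hypothetical)
    const int pointsPerLevel = 11;    // grid resolution at each level
    const double tol = 1e-4;          // stop when improvement falls below tol
    double best = 1e300, bestX = lo;

    for (int level = 0; level < 20; ++level) {  // hard cap on levels
        const double prevBest = best;
        const double step = (hi - lo) / (pointsPerLevel - 1);
        for (int i = 0; i < pointsPerLevel; ++i) {
            const double x = lo + i * step;
            const double err = objective(x);
            if (err < best) { best = err; bestX = x; }
        }
        // Shrink the grid around the current optimum for the next level.
        lo = bestX - step;
        hi = bestX + step;
        // Dynamic stop: this level brought no meaningful gain.
        if (prevBest - best < tol) break;
    }
    std::printf("best x = %g, error = %g\n", bestX, best);
    return 0;
}
```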

      Reviewer 2. Daniel Roy Bergman

This is a very nice addition to the PhysiCell ecosystem. Methods for parameterizing agent-based models are critical, and the ability to do so without expensive computing resources, i.e., HPC, will aid many researchers.

Comments:
1) "Furthermore, experimental data could..., they can be used..." feels like a run-on sentence. It is unclear who/what "they" is.
2) "bespoke HPC workflows..." Is this referencing DAPT and the PhysiCell-EMEWS workflow? If so, how does PhysiCOOL differ from these?
3) Is PhysiCOOL defining this multilevel sweep approach to parameter estimation? Or is this already established? If the former, please emphasize. If the latter, are there citations?
4) Please emphasize that the "Simple model of logistic growth" is not done with PhysiCell.
5) I needed Python version < 3.11.0 to install physicool.

      Major revisions: 1) Please check on the issue I had with the motility example and it not generating output files.

Minor revisions:
1) "As for many several computational modelling frameworks..." consider rewording. I would suggest "As with many computational modeling frameworks".
2) "...namely an Extensible..."
3) "...can be employed to randomly sample points within..."
4) Please change the notation in Table 2 so that the "* point" columns report the values as coordinates ( , ) rather than like intervals [ , ].

      Re-review: The authors addressed all my concerns and I have no further reservations in recommending this manuscript for publication.

    1. Abstract

This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.76), and has published the reviews under the same license.

      Reviewer 1. Sven Winter

I am really sorry, and I do not want to sound mean, but this manuscript needs major improvements in structure, writing, and data validation. It violates many standard practices of scientific writing. I have never seen anybody cite the full title of a previous manuscript; there is absolutely no need for that. The annotation is labeled as an improved annotation, but its results are only listed in the abstract, and how it was generated is not mentioned anywhere other than the data availability section. That the genome is tagged under RefSeq by NCBI is absolutely unnecessary information in the abstract; this is just a label, and it says little about quality.

I would urge the authors to restructure the manuscript. Start with a short description of the species and why the species and its genome are important as an introduction, then focus on a detailed data description with methods and basic results such as assembly statistics (importantly, not just scaffold N50 but also contig-level statistics!), Busco, Merqury completeness and error rate, genome size estimate, annotation (repeat and gene), etc. There is really no need for 30 pages of useless supplementary tables (please also make sure that next time you sort the files during submission so that the pdf does not start with 30 pages of tables). The data cannot support any claims about gene loss, as so much of the assembly is not properly anchored into chromosomes.

I would also try to improve the Hi-C contact map figure. There is really no need for the blue and green boxes and the assembly label on the x-axis. I may have overlooked it due to the writing style, but I would like to see mentioned how much of the assembly is in the chromosome-scale scaffolds and how much is unplaced.

I like the improved assembly; it just needs a much better presentation in the form of a well-structured manuscript, and unfortunately, in its current form, it clearly is not well structured. There are plenty of other data notes available as templates. I personally would always opt for a more traditional manuscript structure (Introduction, Methods, combined Results and Discussion), but that is my personal preference. I hope my comments are helpful, and I am looking forward to seeing a revised version in the future.

      Re-review:

Thank you for the improvements to the manuscript. It is now easier to follow and includes more information than before. It was a bit difficult to see the changes, as they were not highlighted and the lines are not numbered. Despite that, I have only a few minor comments that should be easy to address, so the manuscript should be ready for publication soon. Line numbers in the comments refer to lines of the specific paragraph/section.

DNA and RNA extraction: L7: such as? If you listed all tissues, please remove "such as"; if you sequenced RNA from more tissues, please add them.

Sequencing and Assembly: L5: 159 bp is an uncommon read length. Was this just a typo, or how did that come to be? L10: remove "the" before Juicer; otherwise, it sounds like an actual fruit juicer instead of a bioinformatics tool ;-). Same for 3D-DNA in the line below. Please make it clearer in the text whether you sequenced the RNA for each tissue separately or in one library. L11-12: I am not convinced that not allowing for correction was the right approach. Did you test how the results would look with corrections enabled?

Assembly Statistics and Quast Results: Quast calculates assembly statistics, so I am not sure why the header needs to include both. L5: Please avoid using "better"; instead rephrase so that it is clear that the NG50 is 1.75x larger than in the previous assembly. "Better" is not clear.

Busco and Merqury results: I would not claim that Busco says the genome is 95% complete, as Busco only tries to find genes that are supposedly orthologous in Actinopterygii. So I would rather say Busco suggests high completeness, as it finds 95% of the orthologs. Also, all genes in the Busco dataset are supposed to be single-copy orthologs; therefore, I would not say that 93% are conserved single-copy orthologs, as the remaining duplicated or fragmented genes could just be assembly errors. Please also state the Merqury QV value, and I would suggest stating the error rate in %. I still find the discussion about missing Busco genes strange: since Busco 4 or 5 the datasets all got much larger, and the Busco completeness values went down in most assemblies, even in well-studied taxa such as mammals. With recent datasets, it is very unlikely to get much more than 95-97%. In my opinion, it is rather a sign of too large and incorrect Busco datasets than evidence for missing orthologs. I would at least add that point to the discussion.
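As background on the QV request: Merqury's QV is a Phred-scaled per-base consensus error estimate, so converting it to an error rate in % is a one-line calculation (this is the standard Phred definition, not anything specific to this manuscript):

```latex
% Phred-scaled quality vs. per-base error rate E
\mathrm{QV} = -10 \,\log_{10} E
\qquad\Longleftrightarrow\qquad
E = 10^{-\mathrm{QV}/10}
% e.g. QV = 40 corresponds to E = 10^{-4}, i.e. 0.01% errors per base
```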

Table 1: Please follow standard practice in scientific writing and add separators to the numbers in all tables (main text and supplementary), e.g., 28444102 → 28,444,102. Otherwise, they are difficult to read.

      Annotation Results: L3: 20,101 coding genes, 18,616 genes … Please check throughout the whole manuscript for consistent style.

Data Availability: L2: Annotation report release 100. What does "100" stand for? Also, "at here" does not sound correct; please remove "at". L4: Table S2 does not show the scaffold identifiers. L5: Please state the complete BioProject accession, not just the numerical part.

Supplementary data: Please change numbers in all tables to standard format, e.g., 21,671,036.

      Reviewer 2. Yue Song

(1) Please state clearly how much CCS HiFi data was produced by sequencing and how much Hi-C data was ultimately used for chromosome assembly after filtering, not just the number of reads.
(2) Please state clearly the estimated genome size based on the HiFi data.
(3) What is the process for "correct primary assembly misassembles"? Please describe it in detail.
(4) In Table 1, I noticed that the difference between the new and previous genome of S. scovelli is more than 100 Mb (about 25% of the size of the new assembly). Moreover, the genome sizes of most Syngnathus species range from 280-340 Mb, so I think some explanation of these extra sequences is necessary.
(5) More detailed parameters and processes are needed for the genome assembly and gene annotation.
(6) Did the previous version contain assembly errors that were corrected in this new one? If so, please state this.

  4. Feb 2023
    1. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac093), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Dadi Gao

Summary: The authors developed a de novo assembly method, BrumiR, for small RNA sequencing data based on the de Bruijn graph algorithm. This tool displayed a relatively high sensitivity in finding miRNAs and helped the authors discover a novel miRNA in A. thaliana roots.

      Major comments:

Have the authors compared the performance with different seed lengths? Even if the minimal miRNA length is 18 nt in miRBase 21, seed=18 might not necessarily lead to the best AUC or F-score (this might also be related to Comment 4).

The authors need to benchmark BrumiR against more existing tools (e.g., ML-based methods), and to include more genome-free methods (e.g., MiRNAgFree).

It is also interesting to know whether de novo methods for mRNA assembly would be useful on the miRNA side. It would be great if the authors were able to compare the performance of BrumiR2reference (without filtering for RFAM) with Trinity in genome-guided mode, tweaking its seed length to be the same as BrumiR's.

The tool's sensitivity is promising across animal and plant datasets. However, the average precision is quite low: an average precision of 0.3 means a false discovery rate of 0.7 (the arithmetic is spelled out below). This is not an acceptable value for a tool designed to discover novel miRNAs. Is there any parameter the authors could tweak towards better performance? For example, is a seed length of 18 nt too short to start with? Is there any other sequence feature the authors should take into account to boost the performance? Or maybe some post-assembly filtering approaches might be sufficient and helpful.
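For reference, the precision-to-FDR conversion invoked here is the standard one:

```latex
% Precision and false discovery rate (FDR) in terms of true/false positives
\mathrm{Precision} = \frac{TP}{TP + FP},
\qquad
\mathrm{FDR} = \frac{FP}{TP + FP} = 1 - \mathrm{Precision}
% so Precision = 0.3 implies FDR = 0.7: about 7 of every 10 reported miRNAs are spurious
```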

Wet-lab validation (e.g., a luciferase assay) of the identified novel miRNAs would demonstrate the real-life usefulness of BrumiR. This is extremely important, as the tool showed a high false discovery rate.

      Minor comments:

MiRNA maturation involves RNA editing. Can the authors comment on how this would be handled and captured by BrumiR? It seems that the authors allow mismatches when clustering the potential miRNAs via the edlib library. It is interesting to know whether or not, or to what extent, edlib would help in including RNA-edited candidates in the final result.

2. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac093), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Marc Friedlander

The authors here present BrumiR, a de Bruijn-based method to discover miRNAs independently of a reference genome. Today most miRNA discovery and annotation is done by mapping sequenced RNAs to readily available reference genomes and analyzing the mapping profiles. However, there are some use cases where the genome-free approach is needed (particularly for species that have no reference genome or where the genomes have missing parts); therefore BrumiR could potentially be useful for the community. However, the comparison to existing tools needs to be done in a more careful way.

      Major comments:

RFAM filtering is not really part of the prediction step; it is rather a filtering step. Therefore, to make a fair comparison with mirnovo (the other genome-free tool), BrumiR should additionally be run without RFAM filtering, and mirnovo should additionally be run using the exact same RFAM filtering.

It appears that 16-mers from miRBase miRNAs were specifically excluded from the RFAM catalog used for the filtering, which is reasonable. However, the miRNAs from the exact benchmarked species should not be included in the miRBase 16-mer catalog used, to avoid circular reasoning.

The miRDeep2 software should ideally not be run with default options - this is particularly important since the miRDeep2 performance in this manuscript appears lower than what is reported in other studies (e.g., Friedlander et al. 2012). First, reference mature miRNAs from a related and well-annotated species should be included to support the prediction. Second, a score cut-off should be used that gives a decent signal-to-noise ratio according to the miRDeep2 output overview table (for instance 5:1). Third, all read pre-processing and genome mapping should be performed with the mapper.pl script, which is part of the miRDeep2 package.

It appears that only miRNA-derived sequences were included in the simulated data. In fact, real small RNA-seq data typically contains fragments from other known types of RNA and also sequences from unannotated parts of the genome. Therefore, the authors should use simulated data that also includes samples from RFAM and randomly sampled sequences from the reference genome (for instance 10% of each). Overall, the use of simulated sequence data could be put a bit in the background in this study, since real small RNA-seq data is readily available these days and typically has a structure that is not easy to simulate. Further, there is little reason not to use real data, since the miRNAs in miRBase tend to be reasonably well curated for most species and therefore can function well as a gold standard for benchmarking.

The precision of BrumiR is in some cases lower than 0.2, for instance for one mouse dataset. From this dataset ~3000 mouse miRNAs are reported - the majority of which are not in miRBase and can reasonably be presumed to be false positives. The authors should comment on why this particular dataset appears to produce so many false positives for BrumiR - could this have to do with the prevalence of piRNAs that the software cannot easily discern from miRNAs? Also, the authors should reflect on what kinds of use cases could tolerate these thousands of false positives. Would this be for generating candidates for downstream high-throughput validation?

The authors should either benchmark BrumiR against the genome-free methods miReader and MirPlex, or explain why this comparison is not relevant.

      Minor comments:

The brief introduction to miRNA biology should be carefully edited by an expert in the field. Currently, very old reviews are being cited (e.g., Bartel 2004), and some of the other references appear to be a bit spurious (e.g., why focus on plant host-pathogen interactions out of the hundreds of established functions of miRNAs?). The excellent review by Dave Bartel from 2018 contains references to numerous milestone studies that the introduction could build on.

The authors write on page 2 that genome-based methods struggle with a high rate of false positive predictions, citing [9]. However, this is a mis-reference, since reference [9] states that methods that rely only on the genome and do not leverage small RNA-seq data have high false positive rates.

3. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac093), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Ernesto Picardi

The manuscript by Moraga et al. describes BrumiR, a software tool devoted to the de novo identification of miRNAs from deep sequencing experiments of the low-molecular-weight RNA fraction. In contrast with existing tools, BrumiR is based on de Bruijn graphs, generated directly from raw fastq reads. The performance on simulated and real sequencing data, in terms of precision, recall, and F-score, is very good. In addition, the tool is ultra-fast, enabling the analysis of huge amounts of data. I tried to use BrumiR but always got a GLIB error. I tested the script on different Linux and Mac computers but was not able to fix the GLIB error. It seems that a very recent version of the GLIB library is required. So, unfortunately, I did not have the possibility to test the program and look at the outputs.

      Major concerns:

I was not able to run the program and, thus, provide a proper review. In my opinion, the github page should take this into account by stating the minimal software and hardware requirements to run BrumiR. The authors could also include a copy of the output files (by the way, there is a typo in the description of the second output file).

Since the tool is able to identify novel miRNAs and also look at known ones, the authors could provide an output file including the read count per miRNA. In addition, since the tool is expected to be ultra-fast (not checked … see above), differential gene expression analysis could also be implemented.

I also suggest implementing a graphical output: a sort of summary in a decorated HTML page.

By using BrumiR, the authors analyze miRNAs in Arabidopsis during development, discovering three novel miRNAs. Although the bioinformatic evidence indicates that they could be real miRNAs, experimental validation is required. Indeed, these miRNAs have been detected only by BrumiR. I think this validation could be done easily because the authors generated the sRNA-seq data themselves. In my opinion, this experiment could really improve the manuscript and confirm the high performance of BrumiR.

    1. Background

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac097), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Giulia De Riso

In this study, a workflow is presented to generate classification models from DNA methylation data. Methods to deal with harmonization and missing-data imputation are presented, and the benefit of adopting them for classification tasks is tested on case-control datasets of schizophrenia and Parkinson's disease. The authors support this workflow with source code. Although mostly based on already known methodologies, the present study may help orient studies aimed at building and applying DNA methylation-based models. However, some major concerns can be raised:

Majors: At different points in the manuscript, the authors refer to their approach as a pipeline. A pipeline should be composed of sequential modules, in which the output of one module becomes the input of the next. Although the modules are clearly distinguishable, their organization in the pipeline is less straightforward (also considering that modules can be adopted both to build a model and to use it on new data). The authors could consider drawing a scheme of the pipeline, or adopting a different term to refer to the presented approach. From the model performance perspective, the ML models perform poorly for schizophrenia. The authors point to intrinsic characteristics of the disease as a possible reason for this. However, this point should be commented on more thoroughly in the Discussion section.

Besides this, the impact on classification accuracy of the smaller number of samples included in the training set and the higher proportion of imputed features compared to Parkinson's disease should be discussed.

In addition, since the authors provide the code, is there a way to select samples for the training/test sets by random choice (classical 70-30% splitting) instead of by source dataset?

"For machine learning models, we used only those CpG sites that have the same distribution of methylation levels in different datasets in the control group (methylation levels in the case group typically have greater variability because of disease heterogeneity).": is this filtering performed only on the datasets included in the training set, or also on the test set? It seems the former, but the authors should clearly state this point.

Accuracy with weighted averaging should be defined with a formula in the Methods section (a plausible form is sketched after this paragraph).

Regarding the ML models, the authors chose different types of decision-tree ensembles, along with a deep learning one. They should contextualize this choice (why different models from the same family?).
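For concreteness, one plausible form of the weighted-average accuracy the reviewer asks to see defined (the notation is mine, not necessarily the authors'):

```latex
% Weighted-average accuracy over K classes, with per-class accuracy Acc_k
% and weights proportional to class sizes n_k (one plausible definition)
\mathrm{Acc}_{w} \;=\; \sum_{k=1}^{K} w_k \,\mathrm{Acc}_k,
\qquad
w_k \;=\; \frac{n_k}{\sum_{j=1}^{K} n_j}
```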

In addition, ML models built on DNA methylation are often based on elastic net or support vector machines, which are not accounted for in this work. The authors should comment on this aspect in the limitations, and state whether the code they provide could be customized to adopt models other than the ones they presented.

Regarding the Imputation Method column in Table 2, the meaning is not clear. Are the different imputation methods described in the "Imputation of missing values" section paired with the ML models presented in Table 2? If yes, some of the methods (like KNN) are missing. In the harmonization section, models for case-control classification are trained on different numbers and sets of CpGs. To assess the effect of harmonization alone, the number of CpGs should instead be fixed. This is especially critical for schizophrenia, where the number of features for the non-harmonized data is 35,145 whereas that for harmonized data is 110,137. Dimensionality reduction section: are the models from imputed and non-imputed data trained only on harmonized data? And how are the sets of 50,911 CpG sites for Parkinson's disease and 110,137 CpG sites for schizophrenia selected?

      Imputation of missing values section: it is not clear on which CpGs and on which samples imputation is performed. Also, it is not clear whether the imputation has been tested on the best-performing model.

      Minors: Page 1, line 2: "DNA methylation is associated with epigenetic modification". DNA methylation is an epigenetic mark itself. Do the authors mean histone marks?

      Page 1, from line 7: "DNA methylation consists of binding a methyl group to cytosine in the cytosineguanine dinucleotides (CpG sites). Hypermethylation of CpG sites near the gene promoter is known to repress transcription, while hypermethylation in the gene body appears to have an opposite, also less pronounced effect.": references should be added

      Page 2, from line 2 : "Current epigenome-wide association studies (EWAS) test DNAm associations with human phenotypes, health conditions and diseases.": references should be added

      Page 3: "In most cases, an increase in dimensionality does not provide significant benefits, since lower dimensionality data may contain more relevant information". This point could be presented in a reverse way (higher dimensionality data may contain redundant information), introducing the collinearity issue. In addition, this issue could be introduced before the missing values and imputation section.

Page 3: references for "Modern machine-learning-based artificial intelligence systems are powerful and promising tools" could be more specific to the field of epigenetics and DNA methylation.

    2. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac097), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Liang Yu Reviewer

Comments to Author: The paper by Kalyakulina et al. describes disease classification from whole-blood DNA methylation. The authors propose a comprehensive approach to combining DNA methylation datasets to classify controls and patients. The solution includes data harmonization, construction of machine learning classification models, dimensionality reduction of the models, imputation of missing values, and explanation of model predictions by explainable artificial intelligence algorithms. For Parkinson's disease and schizophrenia, the authors also demonstrate that classifying healthy individuals and patients with various disorders based on whole-blood DNA methylation data is an efficient and comprehensive approach.

      Overall, the manuscript is well organized. I have some suggestions for the authors to improve their work:

1. The manuscript constructs different prediction models based on CpG sites for different types of data. It is suggested to add a flowchart of the whole model-construction process to the manuscript so that readers can understand the study more clearly.

2. In Figure 4, the authors only show the top 10 important features and mark the highest accuracy and number of features with black lines. It is recommended to show the relevant data (optimal accuracy and number of features) in the figure. Please also label the three subplots separately, e.g., A, B, and C.

3. Regarding model performance evaluation: the authors should provide standard deviations of the obtained values.

4. In this manuscript, the authors used graphs to present the results; a table summarizing the performance results of the models would be more intuitive.

5. I didn't find how the authors optimized the hyper-parameters; this is usually done using grid search.

6. The authors do not adequately address in the Discussion section how their method outperforms existing methods.

7. The "Dimensionality reduction" section: I think this section is more appropriately called "feature selection"; it is a sequential forward search method: first sort the features according to their importance values, then add or remove features from a candidate subset while evaluating the criterion (see the sketch below).
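A minimal sketch of the sequential forward search the reviewer describes; the importance scores and the evaluation criterion (a dummy stand-in here) are hypothetical, not the authors' actual implementation:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Stand-in criterion: in practice this would be, e.g., cross-validated
// classification accuracy of a model trained on the candidate subset.
static double evaluate(const std::vector<int>& subset) {
    return static_cast<double>(subset.size() % 5);  // dummy score
}

// Forward search: rank features by importance, add them one at a time,
// and keep the subset that maximizes the criterion.
static std::vector<int> forwardSelect(const std::vector<double>& importance) {
    std::vector<int> order(importance.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return importance[a] > importance[b]; });

    std::vector<int> current, best;
    double bestScore = -1.0;
    for (int feature : order) {
        current.push_back(feature);          // tentatively add next feature
        const double score = evaluate(current);
        if (score > bestScore) {             // keep only improving subsets
            bestScore = score;
            best = current;
        }
    }
    return best;
}

int main() {
    const std::vector<double> importance = {0.9, 0.1, 0.5, 0.7};
    const std::vector<int> chosen = forwardSelect(importance);
    std::printf("selected %zu features\n", chosen.size());
    return 0;
}
```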

1. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac094), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Milad Miladi

In this work, Tombacz et al. provide a Nanopore RNA sequencing dataset of SARS-CoV-2-infected cells at several time points and with several sequencing setups. Both direct RNA-seq and cDNA-seq techniques were utilized, and multiplexed barcoded sequencing was used to combine the samples. The dataset can be helpful to the community, such as for future transcriptomic studies of SARS-CoV-2, especially for studying infection and expression dynamics. The text is well written and easy to follow. I find this work valuable; however, I can see several limitations in the analysis and representation of the results.

      Notably, the figures and tables representing statistical and biological insights of the data points are underworked, lack clarity, and provide limited information about the experiment. Further visualizations, analysis, and data processing could help to reveal the value and insights from this sequencing experiment.

Comments: The presentation of read coverage and lengths in Figs 1 & 2 is elementary, unpolished, and non-informative. Better annotation and labeling in Fig. 1 would be needed. Stacking so many violin plots in Fig. 2 does not provide any valuable information and would only mislead. What are the messages of these figures? What do the authors expect the readers to take from them? As noted, stacking many similar figures does not add further information. The authors may want to consider alternative representations and aggregations of the information, besides or replacing the current plots. For example, in Fig. 2, scatter/line plots of the median & 25/75% percentile ranges, with an aggregation of the three replicates at one x-axis position, could help identify potential trends over the time points.

It would be better to start the paper by presenting the current Fig. 3 as the first figure. This figure is the core of the contributions and methodologies, and the current Figs 1 & 2 are logical follow-ups of this step.

      There is a very limited description in the Figure Legends. The reader should be able to understand essential elements of the figures merely based on the Figure and its legend.

This study does not provide much notable biological insight without demultiplexing the reads of each experimental condition into genomic and subgenomic subsets. Distinguishing the genomic and subgenomic reads and analyzing their relative ratio is essential in this temporal study. Due to the transcription process of coronaviruses, genomic and subgenomic reads have very different characteristics, such as length distribution and cellular presence. Genomic and subgenomic reads can be reliably identified by their coverage and splicing profiles, for sufficiently long reads. It is essential that the authors further process the data by categorizing the genomic/subgenomic reads and then provide statistics such as read length for each category. It would also be interesting to observe the ratio of genomic vs. subgenomic reads. This is an indicative metric of the infection state of the sample: an active infection has a higher subgenomic share, while, e.g., a very early infection stage is expected to have a larger portion of genomic reads.

      Page-3: "[..] the nested set of subgenomic RNAs (sgRNAs) mapping to the 3'-third of the viral genome". Is 3'-third a typo? Otherwise, the text is not understandable.

Page-4: "because after a couple of hours, the virus can initiate a new infection cycle within the noninfected cells." More context and elaboration, citing some references, would help to support the authors' claim. A gradual infection of non-infected cells can be assumed. However, "a couple of hours" and "initiate a new infection cycle" need further support in a scientific manuscript. The infection process is fairly gradual, but the wording here implies a sudden transition to infecting other cells only at a particular time point.

      Page-4: "[..]undergo alterations non-infected cells during the propagation therefore, we cannot decide whether the transcriptional changes in infected are due to the effect of the virus or to the time factor of culturing." This can be strong support for why this experiment has been done and for the value of this dataset. I would suggest mentioning this in the abstract to highlight the motivation.

Page-4: "based studies have revealed a hidden transcriptional complexity in viruses [13,14]" Besides Kim et al., the first DRS experiments on coronaviruses have not been cited (doi.org/10.1101/gr.247064.118, doi.org/10.1101/2020.07.18.204362, doi.org/10.1101/2020.03.05.976167).

      Table-1: dcDNA is quite an uncommon term. In general, here and elsewhere in the text, insisting on a direct cDNA is a bit misleading. A "direct" cDNA sequencing is still an indirect sequencing of RNA molecules!

      Figs S2 and S3: Please also report the ratio of virus to host reads.

    2. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac094), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

Reviewer name: George Taiaroa

The authors provide a potentially useful dataset relating to transcripts from cultured SARS-CoV-2 material in a commonly used cell line (Vero). Relevant sequence data are publicly available, and descriptions of the preparation of these data are for the most part detailed and adequate, although detail is lacking at times.

      Although the authors state that this dataset overcomes the limitations of available transcriptomic datasets, I do not believe this to be an accurate statement; based on comparable published work in this cell line, transcriptional activity is expected to peak at approximately one day post infection (Chang et al. 2021, Transcriptional and epi-transcriptional dynamics of SARS-CoV-2 during cellular infection), with the 96 hour period of infection described likely representing overlapping cellular infections of different stages.

      Secondly, many in the field have moved to use more appropriate cell lines in place of the Vero African Monkey kidney cell line, to better reflect changes in transcription during the course of infection in human and/or lung epithelial cells (See Finkel et al. 2020, The coding capacity of SARS-CoV-2). Lastly, the study would ideally be performed with a publicly available SARS-CoV-2 strain, as has been the case for earlier studies of this nature to allow for reproducibility and extension of the work presented by others.

That said, the data are publicly available and could be of use.

Primary comments:
I think that a statement detailing the ethics approval for this work would be essential, given the materials used were collected posthumously from a patient. Similarly, were these studies performed under appropriate containment, given the classification of SARS-CoV-2 at the time of the study? I do not know what the authors mean in reference to a 'mixed time point sample' for the one direct RNA sample in this study; could this please be clarified?

Secondary comments:
I believe the authors may over-simplify discontinuous extension of minus strands in saying that 'The gRNA and the sgRNAs have common 3'-termini since the RdRP synthesizes the positive sense RNAs from this end of the genome'. Each of the 5' and 3' sequences of gRNAs/sgRNAs is shared through this process of replication.

'Infections are typically carried out using fresh, rapidly growing cells, and fresh cultures are also used as mock-infected cells. However, gene expression profiles may undergo alterations non-infected cells during the propagation therefore, we cannot decide whether the transcriptional changes in infected are due to the effect of the virus or to the time factor of culturing. This phenomenon is practically never tested in the experiments.' I do not follow what these sentences are referring to.

'Altogether, we generated almost 64 million long-reads, from which more than 1.8 million reads mapped to the SARS-CoV-2 and almost 48 million to the host reference genome, respectively (Table 1). The obtained read count resulted in a very high coverage across the viral genome (Figure 1). Detailed data on the read counts, quality of reads including read lengths (Figure 2), insertions, deletions, as well as mismatches are summarized Supplementary Tables.' Could this perhaps be more appropriately placed in the data analysis section, rather than in the background?

1. Abstract

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac088), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Kamil S. Jaron

Assembling a genome using short reads quite often produces a mixed bag of scaffolds representing uncollapsed haplotypes, collapsed haplotypes (i.e., the desired haploid genome representation), and collapsed duplicates. While there are individual software tools for collapsing uncollapsed haplotypes (e.g., HaploMerger2 or Redundans), there is no established workflow or standard for quality control of finished assemblies. Naranjo-Ortiz et al. describe a pipeline attempting to make one.

The Karyon pipeline is a workflow for assembling haploid reference genomes while evaluating the ploidy levels of all scaffolds, using GATK for variant calling and nQuire as a statistical method for estimating ploidy from allelic coverage support. I appreciated that the pipeline promotes some good habits - such as comparing k-mer spectra with the genome assembly (by KAT) or treatment of contamination (using Blobtools). Nearly all components of the pipeline are established tools, but the authors also propose karyon plots - diagnostic plots for quality control of assemblies.
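For readers unfamiliar with nQuire's approach: it models base frequencies at biallelic sites as a Gaussian mixture whose component means are fixed by the assumed ploidy, comparing each fixed model against a free model by likelihood. The expected allele-frequency modes, sketched from the published description (notation mine):

```latex
% Expected allele-frequency modes f at biallelic sites, per ploidy level
\text{diploid: } f \in \left\{\tfrac{1}{2}\right\},\qquad
\text{triploid: } f \in \left\{\tfrac{1}{3},\,\tfrac{2}{3}\right\},\qquad
\text{tetraploid: } f \in \left\{\tfrac{1}{4},\,\tfrac{1}{2},\,\tfrac{3}{4}\right\}
```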

The most interesting and novel of these diagnostic plots is the plot of SNP density vs. coverage. Such a plot might be helpful in identifying various changes to ploidy levels specific to a subset of chromosomes, as the authors demonstrated on the example of several fungal genomes (Mucorales). I attempted to run the pipeline and ran into several technical issues. The authors helped me overcome the major ones (documented here: https://github.com/Gabaldonlab/karyon/issues/1) and I managed to generate a karyon plot for the genome of a male hexapod with an X0 sex determination system. I did that because we know the karyotype well, and I suspected the X chromosome would nicely pop up in the karyon plot.

To my surprise, although I know the scaffold coverages are very much bimodal, I got only a single peak of coverages in the karyon plot, oddly somewhere in between the expected haploid and diploid coverages. I think it is possible I messed something up, but I would like the authors to demonstrate the tool on a known genome with a known karyotype. I would propose to use a male of a species with an XY or X0 sex determination system. Although it is not aneuploidy sensu stricto, it is most likely the most common within-genome ploidy variation among metazoans. I would also propose that the authors improve the modularity of the pipeline. At my request the authors added a lightweight installation for users interested in the diagnostic plots after the assembly step, but the inputs are expected in a specific but undocumented format, which makes modular use rather hard. At the least, the documentation of the formats should improve, but in general I think the pipeline could be made friendlier to folks interested only in some smaller bits (I am happy to provide the authors with the data I used).

Although I quite enjoyed reading the manuscript and the manual afterwards, I do think there is a lot of space for improvement. One major point is that there is no formal description of the only truly innovative bit of this pipeline - the karyon plots. There is a nice philosophical overview, but the karyon plots themselves are not explained, which makes reading the showcase study much harder. Perhaps a scheme showing the plot and annotating what is expected where would help. Furthermore, the authors did a likelihood analysis of ploidy using nQuire, but they did not talk about it at all in the results section. I wonder, what is the fraction of the assembly the analysis found most likely to be aneuploid, for the subset of strains suspected to be aneuploid? Is a 1,000-base sliding window big enough to carry enough signal to produce reliable assignments? In my experience, windows of this size are hard to assign ploidy to, but I usually do such analyses using coverage, not SNP support.

However, I would like to praise the authors for the fungal showcases; I do think they are a nice piece of genomics work, investigating and considering both biological and technical aspects appropriately. Finally, a smaller comment is that the introduction could be a bit more to the point. Some of the sections felt a bit out of place, perhaps even unnecessary (see minor comments below). More specific and minor comments are listed below. Kamil S. Jaron

Minor manuscript comments: I gave this manuscript a lot of thought, so I would like to share with you what I have figured out. However, I recognise that the writing comments listed below are largely a matter of personal preference. I hope they will be useful for you, but it is nothing I would like to insist on as a reviewer.
l56: An unnecessary book citation. It is not a primary source for that statement, and if the reference was meant as "further reading", it would perhaps be better to cite a recent review available online rather than a book.
l65-66: Is the "lower error rate" still a true statement? I don't think it is; error rates of HiFi reads are similar to or even lower than those of short reads (though I do agree there is still plenty of use for short reads).
l68-72: I don't think you really need this confusing statement "which are mainly influenced by the number of different k-mers"; the problems of short-read assembly are well explained below. However, I actually did not understand why the whole paragraph l76-88 was important. I would expect an introduction to cover the approaches people have used till now to overcome problems of ploidy and heterozygosity in assemblies.
l176-177: "Ploidy can be easily estimated with cytogenetic techniques" - I don't think this statement is universally true. There are many groups where cytogenetics is extremely hard (like notoriously difficult nematodes) or species that cannot be cultivated in the lab. For those it is much easier to do an NGS analysis. You actually contradict this "easily" right in the next sentence.
l191: the first author of nQuire is not Weib, but Weiß. The same typo is in the reference list.
l222-223 and l69-70 explain what a k-mer is twice.
l266-267: This statement and the list do not contain references to the publications sequencing the original genomes. I am not sure, but when possible, it is good to credit the original authors for the sequencing efforts.
l302: REF instead of a reference.
l303: What is an "important fraction"?
l304: How can you make such a conclusion? Did you try to remove the contamination and redo the assembly step? Did the assembly improve? Not sure if it is so important for the manuscript, but I would tone down this statement ("could be caused by" sounds more appropriate).
l310: "B9738 is haploid" - are you talking about the genome or the assembly? How could you tell the difference between a homozygous diploid and a haploid genome? If there is a biological reason why a homozygous diploid is unlikely, it should be mentioned.
l342: How does fig 7 show 3% heterozygosity? How was the heterozygosity measured? Also, the karyon plot actually shows that the majority of the genome is extremely homozygous and all heterozygosity is in windows with spuriously high coverage. What do you think is the haploid/diploid sequencing coverage in this case?
l343-345: I don't think these statements are appropriately justified. The analysis presented did not convincingly show the genome is triploid or heterozygous diploid.
l350: I think citing SRA is rather unnecessary.
l358: what "model"? How could one reproduce the analysis / where can the model be found?
l378-379: Does Karyon analyse ploidy variation "during" the assembly process? Although the process is integrated in a streamlined pipeline, there are loads of approaches to detect karyotype changes in assemblies, from nQuire, which is used by Karyon, through all the sex-chromosome analyses, such as https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002078.

      Method/manual comments:

Scaffold length plots have no label on the x-axis. As the plots are called distributions, I would expect frequency or probability on the y-axis and the scaffold length on the x-axis. Furthermore, plotting my own data resulted in a linear plot with a very overscaled y-axis. The "Scaffold versus coverage" plot does not have axis labels either. I would also call it scaffold length vs. coverage instead. I also found the position of the illustrating picture in the manual a bit confusing (it should probably be before the header of the next plot).

Variation vs. coverage is the main plot. It does look like a useful visualisation idea. Do I understand correctly that it is just the number of SNPs vs. coverage? I am confused, as I thought the SNP calling is done on the reference individual, yet in the description you talk about homozygous variants too; what are those? Mismapped reads? Misassembled references?

I also wonder about "3. Diffuse cloud across both X and Y axes."; I would naturally imagine that collapsed paralogs would have a pattern similar to the plot that was shown as an example - a smear towards both higher coverage and SNP density. I guess this is a more general comment: would you expect any different signature between collapsed paralogs and higher ploidy levels? Should paralogy not be more explicitly considered as a factor?

2. Abstract

This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac088), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

Reviewer name: Michael F. Seidl

The technical note 'Karyon: a computational framework for the diagnosis of hybrids, aneuploids, and other non-standard architectures in genome assemblies' by Naranjo-Ortiz and colleagues reports on the development and application of the Karyon framework. Karyon is a python-based toolkit that utilizes several software tools developed by the authors and/or others, with the overall aim to assess sequencing data and genome assemblies for potential assembly artefacts caused by a plethora of different features intrinsic to the analyzed species/strain. Karyon is publicly available from github and as a docker image.

Genome assemblies are nowadays important tools to develop novel biological hypotheses. However, genome assemblies are often not ideal, i.e., they are highly fragmented and/or incomplete, which can significantly hamper their full exploitation. Genome assembly quality is impacted by different biological factors that can be, at least partially, discovered directly from the raw sequencing data and from the genome assembly (e.g., allele frequency, k-mer profiles, coverage depth, etc.). There are already plenty of established computational tools available to perform these types of analyses (to name a few: KAT, GenomeScope, nQuire).

Karyon will ease these analyses by providing a single computational framework that combines different and complex software tools and generates diagnostic figures to support biological interpretation. Karyon thus represents a valuable contribution to the scientific community. The Karyon toolkit is built around established software tools, and the overall methodology is sound and suitable to assess genome qualities. The interpretation of the results of Karyon is left to the user, which still necessitates expert knowledge to correctly interpret the signals.

While examples are provided in the manual, the level of experience required will likely hamper the full exploitation of the pipeline by non-expert users. Furthermore, it can be anticipated that expert users already employ the separate software tools to study genome complexities, and thus might not be in full need of Karyon. Obviously, this is inherent to the problem at hand and cannot be easily addressed by the authors. However, I would like to encourage the authors to further improve the manual and the examples to guide the data interpretation, with the aim of making this software accessible to as many researchers as possible.

I nevertheless also have some comments related to the data presented in the manuscript that the authors need to address. First, the introduction finishes by asserting that different biological factors are expected to impact published genome assemblies. Furthermore, the manuscript mentions that the quality of fungal genomes is often sub-optimal. However, no evidence for these statements is provided. To strengthen this point and to further highlight the urgency of methods to discover and ultimately address these problems, the authors need to provide a more systematic analysis of the occurrence of compromised genome assemblies based on publicly available assemblies. For example, a random subset of genome sequences for different eukaryotic phyla and/or classes, and more systematically throughout the fungi, would

      i) significantly substantiate the manuscript's message and

      ii) confirm the applicability of the authors' framework to most eukaryotes and not only to specific fungal groups (Mucorales).

Second, the table mentions the diagnosis derived from Karyon but simply says 'unknown' for most entries. Based on the manuscript it seems that these are supposedly haploid with very little heterozygosity (L279), but Table 1 nevertheless reports, for most species/strains, strikingly different genome size estimates between the original and the Karyon-derived genome assemblies (Karyon is consistently smaller). The authors need to explain in much more depth the nature of these differences for the reported genomes. For instance, it could be that publicly deposited assemblies have been generated by a combination of different sequencing libraries and technologies that are not fully exploited by Karyon. Third, one additional measure often applied to assess genome quality is genome completeness, as for instance assayed by BUSCO. Karyon should include a strategy such as BUSCO to

      i) assess the occurrence of marker genes in the genome assemblies and

ii) the duplication level of these genes, as this might reveal uncollapsed alleles, etc. Especially the latter is important for interpreting genome size differences between the original and Karyon-derived genome assemblies.

Further detailed comments and suggestions to improve the manuscript: L21: could the authors please specify what 'groups' they refer to? L22: there seems to be an extra space. L59: could the authors please specify what they mean by a 'poor assembly'. What is poor in terms of genome assembly? Contiguity or completeness, or unresolved haplotypes, or …, or a combination thereof? L63-: the authors refer explicitly to Fig 1 only once in this section; the manuscript would be clearer if they referred to specific panels as they describe factors impacting genome assembly quality. L66: could the authors please further substantiate their notion that most publicly available genome assemblies are derived from short-read sequencing data. This information should be readily available at NCBI and/or GOLD.

L119: the manuscript mentions pan-genomics, but the relevance of aneuploidy in these studies is not explained. The manuscript should provide a brief explanation of the importance of aneuploidy (or any form of ploidy shift) for pan-genomics. L147: 'From' -> 'from'. L148: 'Symbiotic' -> 'symbiotic'. L232: the reference to nQuire should read Weiß et al. 2018. L302: the reference to blobtools is missing. L349: To initiate the pipeline, was a single sequencing library or a combination of multiple libraries used? Table 1: The table formatting, at least in the combined pdf, seems to be broken.

    3. Abstract

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac088), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Zhong Wang

In this work, Naranjo-Ortiz et al. presented a software pipeline that is capable of de novo genome assembly, variant calling, and generating diagnostic plots. Applying this software to 35 publicly available, highly fragmented fungal genome assemblies revealed prevalent inconsistencies between the sequencing data and the assemblies. I really appreciate the authors' effort to make their software, Karyon, easy to use by providing multiple ways to install it and a detailed software manual. I especially like the detailed explanation of how to use the diagnostic plots to infer "nonstandard genome architectures".

      The manuscript is clearly written and very easy to follow. I have the following general comments:

      1. It wasn't clear to me what the relationship is between the raw sequencing data and the assembly -- did they belong to the same isolate? If so, the inconsistencies may reflect assembly errors in the fungal genome assembly. Have the authors ruled out this possibility? The fact that these genomes are highly fragmented suggests they likely contain many errors. If they were from different isolates, then I agree with the authors that the diagnostic plots could be examined carefully to detect structural variations. For that, have the authors used any alternative method to validate at least some of their findings? To establish the validity of their approach, it would be more convincing to obtain the same findings using independent approaches, including experimental ones.

      2. Given the raw WGS reads and the assembled genome, another software tool, QUAST (http://quast.sourceforge.net/), can automatically detect assembly errors and structural variations. It would be interesting to see a comparison between the findings from Karyon and from QUAST; a minimal QUAST invocation is sketched after these comments.

      3. This is an optional suggestion, as I realize it may not be easy to implement. The biggest limitation of Karyon is that it does not automatically detect these unusual genome organizations. This may be possible by comparing the de novo assemblies produced by Karyon to the reference genomes; at the very least, such possibilities should be discussed.
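
      For reference, the QUAST comparison suggested in comment 2 could be run roughly as follows (file names are placeholders; --pe1/--pe2 are QUAST v5's read-mapping options, which enable read-based misassembly and structural-variant detection):

      ```python
      import subprocess

      # Evaluate the Karyon assembly against a reference, mapping the original
      # paired-end reads back to the assembly for misassembly/SV detection.
      subprocess.run(
          ["quast.py", "karyon_assembly.fasta",
           "-r", "reference.fasta",       # reference assembly, if available
           "--pe1", "reads_1.fastq.gz",   # paired-end reads used for assembly
           "--pe2", "reads_2.fastq.gz",
           "-o", "quast_report"],         # output directory with the report
          check=True,
      )
      ```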

    1. Background

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac083), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows: Reviewer name: Kevin Peterson Patil et al. report the expression profiles of miRNAs and a vast array of other, non-miRNA sequences, across 196 cell types based on thousands of publicly available data sets. Although this will be an outstanding contribution to the miRNA field, I am recommending rejection for now with the strong encouragement to resubmit once the authors have addressed my concerns. To be frank, the authors have two diametrically opposed research agendas here:

      1) What are the expression profiles of bona fide miRNAs (as determined by MirGeneDB); and

      2) What else might be expressed in human cell types that could be of interest to small RNA workers and clinicians? Because of these opposed goals, the paper is not only confusing to read and process, but it also gives fodder to the numerous paper-mill products that continue to identify non-miRNAs as diagnostic, prognostic, and even mechanistic indicators of virtually every human malady under the sun. Let me try to highlight why the use of only MirGeneDB (MGDB) would be highly useful for this paper.

      1) miRBase (MB) is not consistent in its identification of both arms of a bona fide hairpin, resulting in the authors not reporting star reads for highly expressed miRNAs such as mir-206 and mir-184.

      Further, there are numerous examples where the authors do in fact report a "mature" versus "star" read without both arms annotated in MB, with some included in MGDB (e.g., Mir-944, Mir-3909) and others not (e.g., mir-3615), raising the question of how these data were annotated.

      2) The authors write that, "the majority (46%) of the reads are mature miRNAs." But MB makes no attempt to distinguish mature from star arms. Hence, if they are annotating against MB, they cannot distinguish between these two processing products. This is not only confusing, but also very unfortunate, as one cannot get a sense of the expression of evolutionarily intended gene products versus processing products.

      3) The authors report on the use of 5p versus 3p strand dominance, but have no examples of "codominant" miRNAs (Fig. 1C) when, in fact, there are numerous examples in their data, including Mir-324, Mir-300, Mir-339, Mir-361 etc., with some switching arms depending on the variant. All of this is available at MGDB; none of it at MB.

      4) MB does not allow the identification of loop or offset reads separate from the arm reads, which would allow the authors to accurately report the amount of reads derived from the "hairpin" versus the arms (and how the authors reported this in Fig. 1B is not at all clear given that these sequences are not annotated as such at MB).

      5) The authors bias their genic origins of small RNA reads by filtering first using MB, and then identifying the remaining reads as arising from other sources, including tRNAs, rRNAs, mRNAs etc. However, numerous "miRNAs" in MB arise from these genic sources, including mir-484 (mRNA) and mir-3648 (rRNA). So, if I understand the authors' pipeline, these sequences are mistakenly included in the "mature miRNA" column.

      6) The use of MGDB would allow the user to see the saturation of mature reads across the different cell types in Fig. 1E, and, if mature is distinguished from star, one could also see the (near-)saturation of star reads as well. As it stands, their plot simply highlights the non-genic nature of much of MB. Further, because MGDB identifies the age of each miRNA, the authors could, if interested, also test a long-standing pattern that evolutionarily older miRNAs are expressed at higher levels than younger miRNAs relative to specific cell types.

      7) The authors report the expression profiles of bona fide miRNAs in Figs 3 and 5, but report the expression profiles of non-miRNAs in Fig. 4. These include mir-3150b, mir-4298, mir-569, mir-934, mir-302f, and mir-663b. None of these supposed miRNAs have the requisite reads for miRNA annotation, and all but mir-3150b fail a structural examination as well. In fact, MGDB has no reads (across numerous data sets, including ones from the Halushka lab) for mir-302f, mir-4298, and mir-569, and only a few reads from one "arm" for mir-663b and mir-3150, highlighting the need to examine these supposed reads in detail. The inclusion of obvious non-miRNAs here is confusing and needlessly undermines the authors' study and conclusions.

      So, my strong recommendation is to potentially write two papers. The first (this one here) would focus only on the expression of miRNAs, emphasizing the really interesting results (like those reported in Fig. 5), and providing the miRNA field with a robust cell-type expression profile for humans. This would eliminate the need for read/rpm cutoffs, as the authors would simply be reporting the read profiles for what is in MGDB. It would not hamper their attempts to include these data at UCSC, as MGDB includes links to both MB and UCSC; indeed, why report "miRNA" read data to a genome browser for well over a thousand non-miRNAs? This would simply lend credence to all of these non-miRNAs that already clutter the literature. A second paper could focus on potentially interesting or relevant small RNAs that show interesting patterns of expression in normal and/or diseased tissues, highlighting the structural and expression profiles of these genic elements, and possibly trying to identify what they might be (including potential false negatives in MGDB). As Corey and colleagues (2022, NAR) recently stressed, we as a field must focus on mechanism, as the identification of a "biomarker" in and of itself is of no real value if we don't understand what it is or where it comes from.

      Minor comments:

      1) The seed sequence is 7-8 nt in length, not 6 nt.

      2) miRNA reads - both mature and star - have a mean length of ~22 nt, and no miRNA is less than 20 nt long (5p: median = 22, mean = 22.56, SD = 0.94, range = 20-27; 3p: median = 22, mean = 22.11, SD = 0.57, range = 20-26. All data from MGDB.).

      3) It's misleading to write that miRNAs "block protein translation." Please rewrite.

      4) I don't believe our understanding of the expression profile of miRNAs is hampered by the numerical naming scheme. MB's nomenclature system obscures the evolution of miRNAs by erecting both paraphyletic (e.g., MIR-8, which includes mir-141) and polyphyletic groups. Why would distinct monophyletic families like MIR-142, MIR-143 and MIR-144 create confusion regarding their expression?

      5) The use of the term "leading strand" is confusing given its clear association with DNA replication (and not a term I've heard of associated with miRNAs).

      6) Please give cut-offs for things like "infrequent", "frequent" etc.

      7) I was surprised at the lack of co-expression for Burge's co-targeting miRNAs, especially in the brain. I think it would be worthwhile to examine more carefully these miRNAs and discuss in a bit of detail why they don't appear together in Fig. 2A.

      8) Fig. 6 should be moved to the supplemental figures as this is not readable and of no real value.

      9) The authors might want to reference Lu et al. (2005) for Mir-1 expression in the colon as this is one of the obvious down-regulated miRNA in diseased colon tissues.

    2. Abstract

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac083), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Ian MacRae

      In this study, Patil and co-workers have combined the largest set of publicly available small RNA-seq datasets to provide a comprehensive analysis of cell-type-specific miRNA distributions. Moreover, the authors made their results easily accessible to the public via Bioconductor and UCSC genome browser. This deeply curated resource is a valuable asset to biomedical research and will help researchers better understand and utilize the otherwise overwhelming number of small RNA-seq datasets currently available.

      Here are some minor points for the authors to address:

      1. In the background section, the first sentence reads, "microRNAs (miRNAs) are short, ~18-21 bp, critical regulatory elements that block protein translation". Mature miRNA is single-stranded, so it would be more appropriate to use 'nt' (nucleotides) instead of 'bp' (base pairs) to describe miRNA length. Additionally, many mature miRNAs have a length of 22 or 23 nt. Finally, "block protein translation" is not quite right, as mammalian miRNAs are believed to function primarily by promoting the degradation of targeted mRNAs.

      2. In Fig. 1C, is the "co-dominant" category bar missing? The sums of the 5p and 3p bars do not equal 100%.

      3. In Fig. 1D and 1E, the y-axis label "Unique miRNA count" is misleading/confusing. Would a more appropriate label be "Unique miRNA species"?

      In the "DESeq2 VST provided superior normalization" section, the authors mentioned that "An HTML interactive UMAP with cell type information is available in the GitHub repository (https://github.com/mhalushka/miOme2/UMAP/Figures)." However, the provided link is not accessible.

    3. Abstract

      This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac083), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1 Reviewer name: Qinghua Cui

      In this paper, the authors reported a curated human cellular microRNAome based on 196 primary cell types. This could be a valuable resource. The following comments could improve this study.

      1. Euclidean distance may not be a good metric for the clustering analysis. I wonder how the results would change with other metrics, e.g., Spearman's correlation (one way to do this is sketched after these comments).

      2. More analyses are suggested, such as identification of cell-specific miRNAs, functional set analysis, etc.
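
      For illustration, the alternative metric suggested in comment 1 could be tried with a correlation-based distance in hierarchical clustering; a minimal sketch, with random data standing in for the miRNA expression matrix:

      ```python
      import numpy as np
      from scipy.stats import spearmanr
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      # Placeholder matrix: rows = cell types, columns = miRNA expression values.
      X = np.random.rand(20, 500)

      rho, _ = spearmanr(X, axis=1)   # 20 x 20 rank-correlation matrix of samples
      dist = 1.0 - rho                # distance: 0 = perfectly concordant ranks
      Z = linkage(squareform(dist, checks=False), method="average")
      clusters = fcluster(Z, t=5, criterion="maxclust")  # cut tree into 5 clusters
      ```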

    1. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac080), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 3: Reviewer name: Roberto Pilu

      The manuscript "Association Mapping Across a Multitude of Traits Collected in Diverse Environments in Maize" by Ravi V. Mural et al. reported the application of high-density genetic marker data from two partially overlapping maize association panels, comprising 1,014 unique genotypes grown in seven US states, allowing the identification 2,154 suggestive marker-trait associations and 697 confident associations and suggesting the possible application to study gene functions, pleiotropic effects of natural genetic variants and genotype by environment interaction.

      The background data are well documented; the experimental data are convincing, clearly presented and well discussed. The paper is suitable for publication in GigaScience in its present form.

    2. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac080), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 2: Reviewer name: Yingjie Xiao.

      The authors describe a study integrating multiple published datasets for reanalysis. They combined previously published community panel data with data newly collected in the present study, finally assembling 1,014 accessions with 18M SNP markers and 162 traits measured in different environments. They used a resampling-based GWAS method to reanalyze this assembled dataset and identified 2,154 suggestive associations and 697 confident associations. They found that some genetic loci were pleiotropic across multiple traits.

      As the authors mentioned, I acknowledge their efforts in collecting and assembling different sources of previously published datasets, which should be useful for the maize community. However, for the manuscript per se, I feel the novelty and significance of the reported findings are not sufficiently demonstrated. Ideally, the authors would present several novel results that previous studies could not have obtained because of their limitations in population size, diversity, trait dimensions and environments. The authors seem to be trying to present the work in this way, but the case could be made further and more convincingly.

      It is hard for me to see which novel findings were made specifically because of the merged large dataset. On the other hand, using this assembled dataset, I am not very clear on the scientific questions the authors want to address. In a technical sense, I wonder how the authors dealt with batch effects when merging phenotype datasets from different environments. Phenotypes of different accessions collected in different environments are not directly comparable, and it is hard to figure out whether a phenotypic difference is caused by genotype, environment, or their interaction (one common way to adjust for this is sketched below).
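
      To make the batch-effect concern concrete, one standard adjustment (not necessarily what the authors did) is to fit a mixed model with environment as a random effect and extract per-genotype adjusted means (BLUEs) that are comparable across environments; a minimal sketch with a hypothetical input file:

      ```python
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format table: one row per plot with columns
      # 'genotype', 'env', and 'trait'.
      df = pd.read_csv("phenotypes_long.csv")

      # Fixed genotype effects give BLUEs; the environment enters as a
      # random intercept, absorbing environment-specific batch effects.
      model = smf.mixedlm("trait ~ 0 + genotype", data=df, groups=df["env"])
      fit = model.fit()
      blues = fit.fe_params  # one adjusted phenotype value per genotype
      ```

      Note that genotype-by-environment interaction would need an additional random term; this sketch only separates the main genotype and environment effects.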

      The introduction section lacks a proper review of the project background and of related progress, publications and findings.

    3. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giac080), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer 1: Reviewer name: Yu Li

      Reviewer Comments to Author: Mural et al. report a large-scale association analysis based on publicly published genotype and phenotype datasets and a meta-GWAS. This study provides a good example of mining community association panel data to identify candidate genes, pleiotropic loci and G x E. Indeed, meta-analysis of GWAS has already been used in humans and animals. However, I have some major concerns, as follows.

      1. This study only used three association panels (MAP, SAM, and WiDiv). As far as I know, publicly available genotype and phenotype data could be obtained for other association panels, for example the panel of 368 inbred lines (Li et al., 2013, Nat Genetics, 45(1):43-50. doi: 10.1038/ng.2484), which has been widely used in maize GWAS studies. Can other association panels be integrated into this research? That would provide a rich genetic resource for maize research groups.

      2. For the association analysis, a total of 1,014 unique inbred lines and 162 distinct traits from different association panels were used, but these traits were not measured for each of the 1,014 inbreds. For example, cellular-related traits were mainly measured in the SAM association panel. Hence, was the association analysis for cellular-related traits conducted in SAM only or in all 1,014 inbreds? If the 1,014 inbreds were used for the association analysis of cellular-related traits, how did you analyze the phenotype data? Please describe the method of phenotype data analysis in the Methods section.

      3. The authors used RMIP values to identify significant association signals; please add more details about the RMIP method (its basic logic is sketched after these comments). What are the advantages of the resampling-based genome-wide association strategy over other methods?

      4. Although some known functional genes could be identified, were any of the new candidate genes obtained in this study functionally verified by mutant or overexpression experiments?

      5. The authors identified pleiotropic loci based on the categories of phenotypes associated with the same peak. For example, the phenotypes associated with the pleiotropic peak on chromosome 8 from 134,706,389 to 134,759,977 bp belong to the Flowering Time, Root and Vegetative categories; thus the locus was associated with different traits. Do you have any ideas about pleiotropic genes based on these results?
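
      Regarding comment 3, the resample model inclusion probability (RMIP) idea can be summarized in a few lines; a minimal sketch, assuming a hypothetical run_gwas() helper that returns the significant SNP indices for a given subsample of lines:

      ```python
      import numpy as np

      def rmip(genotypes, phenotypes, n_resamples=100, frac=0.8, seed=0):
          """RMIP: fraction of resampled GWAS runs in which each SNP is significant."""
          rng = np.random.default_rng(seed)
          n = len(phenotypes)
          hits = {}
          for _ in range(n_resamples):
              idx = rng.choice(n, size=int(frac * n), replace=False)  # subsample lines
              for snp in run_gwas(genotypes[idx], phenotypes[idx]):   # hypothetical helper
                  hits[snp] = hits.get(snp, 0) + 1
          return {snp: count / n_resamples for snp, count in hits.items()}
      ```

      A SNP with RMIP = 0.8 was significant in 80 of 100 subsampled runs, which is far more robust to single influential lines than a single full-data scan.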

    1. at the same time

      Reviewer name: Ruben Dries (revision 1)

      The authors responded adequately to my original concerns and have adjusted their manuscript accordingly. I have no further questions or comments.

    2. State

      Reviewer name: Ruben Dries

      In this article, the authors created a modular and scalable pipeline to process raw sequencing data from spatially resolved transcriptomic technologies. In contrast to other popular genomics technologies, such as (single-cell) RNA sequencing, there are virtually no existing public tools that allow users to quickly and efficiently process the raw spatial transcriptomic sequencing data generated through Illumina sequencing. This is largely because each spatial transcriptomic workflow creates its own unique spatially barcoded reads and thus typically requires technology-specific tools or scripts to extract both the barcode and the gene expression information. Here the authors created Spacemake, which consists of multiple modules tied together using the popular workflow management system Snakemake. The innovative part of Spacemake comes from the creation of specific 'sample variables', such as the barcode-flavor, run-mode and puck, which allow a flexible pipeline that can, in theory, be adapted to any type of spatial array-based sequencing technology. The authors use well-established tools for downstream quality control and data processing, and provide useful additional modules to assess or improve spatial data quality. Finally, Spacemake is directly linked to Squidpy for downstream analysis and creates a web-based report, which could certainly help to lower initial spatial data analysis barriers. Overall, the presentation of the tool and of the methods used in the pipeline is comprehensive, and the user manual is easy to understand. We appreciate the efforts to provide this tool to the spatial transcriptomics community and to make it open-source and flexible. However, we do have some suggestions and concerns regarding the manuscript and/or the use of this tool.

      Major comments:

      1. We managed to install the spacemake software on a Linux-based server but failed to install it on a macOS machine due to a compatibility issue with bcl2fastq2. Unfortunately, we also ran into an issue on our Linux server, which occurred during one of the reading steps from "/dev/stdin" in the middle of the spacemake workflow. More specifically, we encountered the following error:

      Job error: Job 7, TagReadWithGeneFunction
      Error message: [E::idx_find_and_load] Could not retrieve index file for '/dev/stdin'

      Even with the help of our IT team we were unable to resolve this issue. To help troubleshooting, it would be helpful if the authors provided the exact commands for the examples in the manuscript and showed the expected output of each job in the Snakemake pipeline. As a result, we were unable to re-run any of the provided examples, which severely limited our reviewing options.

      2. A major drawback of Spacemake is that it currently does not offer solutions for the integration of imaging information, which is typically an essential step in any spatial sequencing workflow. The authors note this shortcoming in their discussion, and as a potential solution they argue that Spacemake can be used with another tool called Optocoder, which is currently being developed in their lab. However, no information about Optocoder can be found anywhere: there is no bioRxiv or GitHub page available based on our search results, and as such we were unable to test or assess this solution. At minimum, the authors should provide general guidelines on how users could integrate images with the created spatial downstream results.

      Minor comments:

      1. The figure labels and legends are not always clear. More specifically, it is sometimes hard to figure out which samples are being used for each figure or panel. This could be resolved simply by writing more informative legends that specifically state which sample was used to create each figure panel. According to the text, Seq-Scope was used to generate Figure 3; however, the legend of Figure 3 says Slide-seq …

      2. Overall, the figures are pretty and informative; however, I would suggest starting with a general overview figure that highlights the spacemake pipeline and its innovative framework. Given the goal and content of the manuscript, this seems appropriate as a main figure.

      3. The Drop-seq tools required by Spacemake to initialize a project lack any introduction. Please provide a brief introduction and a link to the associated GitHub page to improve this step.

      4. When configuring a spacemake project by adding a sample species, the pipeline does not allow compressed versions of genome files. This could be simply fixed (see the sketch after this review) and would allow users to link directly to their, typically compressed, genome files.

      5. More information is needed about the R1 and R2 arguments in the add-sample function. For example, Seq-Scope has two separate libraries that get sequenced; where each round of libraries should be loaded is not immediately clear from the tutorial the authors provided.

      6. The downsampling and NovoSparc modules together might create an opportunity to quantify the relative error that is introduced when NovoSparc is used to enhance spatial expression patterns, although this might be outside the scope of this paper.

      7. As mentioned in the Major comments section, we were unable to successfully run an example script, but it would be of great interest to the large spatial community if this pipeline could easily be used with other downstream analysis tools, such as Giotto, Seurat, Bioconductor (SpatialExperiment class), etc.
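
      Minor comment 4's "simply fixed" could, for example, amount to opening genome FASTA files transparently whether or not they are gzip-compressed; a minimal illustrative sketch (not Spacemake's actual code):

      ```python
      import gzip

      def open_maybe_gzip(path, mode="rt"):
          """Open a (possibly gzip-compressed) genome FASTA transparently."""
          if path.endswith(".gz"):
              return gzip.open(path, mode)
          return open(path, mode)

      # Works for both genome.fa and genome.fa.gz
      with open_maybe_gzip("genome.fa.gz") as fh:
          first_header = fh.readline().strip()
      ```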

    3. Spatial

      Reviewer name: Qianqian Song (revision 1)

      The revised version mostly addressed my concerns. Hopefully this tool can be widely used with the emerging spatial transcriptomics data.

    4. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Qianqian Song

      This manuscript proposes a python-based framework named spacemake to process and analyze spatial transcriptomics datasets. It offers functionalities including sample merging, saturation analysis and analysis of long reads as separate modules, etc. Overall, this tool holds promise for spatial analysis, though the manuscript lacks details and explanations of methods and results. Specifically, I have some concerns regarding this manuscript.

      1) As shown in Table 1, it is noticeable that spacemake doesn't include H&E integration, which is quite necessary for spatial data. I would recommend the authors at least discuss the potential functionality of including H&E images.

      2) From the legend of Fig 2B, I didn't find the plot with Shannon entropy; please double-check.

      3) I don't understand the meaning of Fig 2D. The authors should explain how they calculate the Shannon entropy and string compression length of the sequenced barcodes (one plausible computation is sketched after this review), as well as how they define the expected theoretical distributions. More details are needed here. Though the authors mention that the related information/details would be in the methods (last line of the QC section), I didn't find any in the methods.

      4) In Fig 4A, the authors show the mapped scRNA-seq of mouse cortical layers. I think a complementary spatial plot with annotations is necessary, as there is a gap between Fig 4A and Fig 4B.

      5) Fig 5C lacks annotations for the different colors.

      6) On page 16, the authors cite a manuscript in preparation, which is not good practice. I suggest removing the citation.

      7) Supplementary Fig 1 would be better placed as Fig 1, as it shows the overall flow and functionality of spacemake.

      8) Based on Supplementary Fig 1, the authors should add a section illustrating how they annotate the spatial data and the gene markers involved.

      9) The paragraph "Spacemake can readily merge resequenced samples" lacks detailed explanation and results.

      10) Though spacemake claims to be fast in processing data, Supplementary Fig 5 doesn't fully support that. Meanwhile, the authors should explain what the different colors represent.

      11) In Supplementary Fig 2, the authors show a very high correlation between spacemake and spaceranger, especially in the exon-intron and exon sub-figures, where the correlation looks close to 1. I suggest the authors double-check these results and explain their correlation analysis.
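
      One plausible reading of the two barcode-complexity metrics questioned in comment 3 (not necessarily spacemake's exact implementation) is per-barcode Shannon entropy and compressed string length:

      ```python
      import math
      import zlib
      from collections import Counter

      def shannon_entropy(barcode: str) -> float:
          """Per-character Shannon entropy (bits) of a barcode sequence."""
          counts = Counter(barcode)
          n = len(barcode)
          return -sum((c / n) * math.log2(c / n) for c in counts.values())

      def compression_length(barcode: str) -> int:
          """zlib-compressed length; low-complexity barcodes compress to fewer bytes."""
          return len(zlib.compress(barcode.encode()))

      # Degenerate barcodes score low on both metrics compared to diverse ones.
      print(shannon_entropy("AAAAAAAAAAAA"), compression_length("AAAAAAAAAAAA"))
      print(shannon_entropy("ACGTTGCAAGGT"), compression_length("ACGTTGCAAGGT"))
      ```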

    1. compute

      Reviewer name: Aleksandra Pakowska (revision 2)

      Thank you for the feedback and for including more analyses. Figure S5 is hard to read (it is unclear where the loops are); in Figure S6, HiCExplorer in fact looks worse than HiCCUPS. Both tools have issues at noisy loci but seem to be calling the most relevant interactions. The authors decided not to address the issue of pixel merging and its impact on the analysis, which might perhaps have helped in understanding the discrepancies between tools. Given that almost half of the loops detected by HiCExplorer are not detected by HiCCUPS, it would be interesting to check what these loops connect - convergent CTCF sites, or cis-regulatory elements to each other? This point could be addressed either in this or in another study.

    2. are

      Reviewer name: Feng Yue (revision 1)

      My main concern for the revised manuscript is the additional benchmarking the authors performed with Fit-Hi-C and Peakachu. Since Fit-Hi-C is one of the first algorithms for Hi-C loop prediction (published in 2014) and Peakachu is the only method that uses a supervised machine learning approach for this purpose, I suggested that these two pieces of software be recognized. If the authors can perform a fair benchmarking and find out where the differences come from, the results would be really interesting. The authors decided to test the aforementioned methods during the revision. Unfortunately, I believe there were some errors during the testing.

      For Peakachu:

      1. Most importantly, the authors used the wrong form of normalized Hi-C files for Peakachu. The Peakachu model was trained on, and should be used with, ICE-normalized Hi-C matrices (see the sketch after this review). However, based on page 8 of the supplementary file, the input file is gm12878_KR.cool. The data ranges for ICE and KR normalization are very different; therefore, a model trained on ICE files will not work with the KR format and the predictions will be wrong. Consequently, all the following evaluations and descriptions of the Peakachu predictions are inaccurate and need to be revised (such as Fig. 4, Table S1 ...).

      2. In the response letter, there is another misunderstanding about merging. Because Fit-Hi-C predicted too many contacts, the authors of Peakachu merged "the top 140,000 interactions into 14,876 loops (Fig. 3a, b), with the same pooling algorithm used by Peakachu." The reason is that if multiple continuous bins on a Hi-C map are all predicted as loops, the merging/filtering step will use the bin with the most significant P-value as the chromatin loop (local minimum). As the authors noted, Fit-Hi-C by default will generate "significant contacts in the 100,000-ends." Therefore, this merging/filtering step is necessary if we want to compare the loops predicted by each method. This is also what the authors did in this manuscript - I am quoting their own writing here: "This filtering step is necessary to address the candidate peak value as a singular outlier within the neighborhood." Therefore, I do not understand why the authors are "irritated" by such an approach.

      3. The authors of Peakachu have released their predictions for 56 Hi-C datasets on their 3D Genome Browser website (http://3dgenome.fsm.northwestern.edu/publications.html), including the ones used in this manuscript. The authors used models trained at different sequencing depths for different datasets. Therefore, I would suggest the authors use this dataset for a fair evaluation.

      Regarding Fit-Hi-C, what are the numbers of peaks before and after filtering? The authors also need to provide the loop locations so that reviewers can evaluate their claim independently. This information is critical. This manuscript might be helpful for the authors in evaluating Fit-Hi-C (Arya Kaul et al., Nature Protocols 2020). Finally, the authors need to provide all the predicted chromatin loops in the cell lines, as well as loops predicted by the other software used in this manuscript, as supplementary materials (loops in Supplementary Table 1).
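
      For context on point 1, a balanced matrix is read out of a .cool file via the balancing weights stored in the file, so for Peakachu's models those weights must come from ICE balancing (e.g., `cooler balance`) rather than KR; a minimal sketch with a hypothetical file name, using the cooler Python API:

      ```python
      import cooler

      # matrix(balance=True) applies the balancing weights stored in the file;
      # the trained Peakachu models expect the ICE value range, so the weights
      # must come from ICE balancing, not KR normalization.
      clr = cooler.Cooler("gm12878_10kb.cool")
      mat = clr.matrix(balance=True).fetch("chr1")  # balanced contact matrix
      ```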

    3. Chromatin

      Reviewer name: Feng Yue

      This paper provides a loop detection method using a continuous negative binomial function combined with the donut approach. To test the performance of this method, the authors used in-situ Hi-C data from Rao 2014 in the GM12878, K562, IMR90, HUVEC, KBM7, NHEK and HMEC cell lines. The method showed comparable results to HiCCUPS and cooltools and better outputs than HOMER and Chromosight. A significant advantage is the utilization of modern computational resources. The following are my comments:

      1. The authors claimed advantages in utilizing computational resources. They need to clarify how their algorithm contributes to this advantage.

      2. It would be helpful for users to know the performance of the software at various sequencing depths, which can be achieved by down-sampling the high-resolution datasets.

      3. The authors need to compare (or at least discuss) Fit-Hi-C and Peakachu. A table showing the strengths and limitations of each method would be helpful. To be honest, I don't think any method is clearly better than the others; they are just different approaches.

      4. It would be better to use other types of orthogonal data, like HiChIP and ChIA-PET, to evaluate the loops called by these methods. There are H3K27ac HiChIP, SMC1 HiChIP, CTCF ChIA-PET and RAD21 ChIA-PET data in GM12878.

      5. Just a minor suggestion: there are a lot of tables in the manuscript, which makes it hard for readers to compare; it might be better to use figures instead.
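
      The donut approach referred to above scores each candidate pixel against a ring-shaped local background, as introduced by HiCCUPS; a minimal illustrative sketch (window half-widths are assumptions, not the paper's parameters):

      ```python
      import numpy as np

      def donut_background(mat, i, j, w=5, p=2):
          """Mean contact count in the donut around pixel (i, j).

          The (2w+1)^2 window minus the inner (2p+1)^2 core forms the ring used
          as the local expected value; (i, j) is assumed >= w bins from the edge.
          """
          win = mat[i - w:i + w + 1, j - w:j + w + 1].astype(float).copy()
          win[w - p:w + p + 1, w - p:w + p + 1] = np.nan  # mask the central core
          return np.nanmean(win)

      # A pixel is a loop candidate when its observed count is strongly
      # enriched over this local background.
      ```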

    4. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Borbala Mifsud

      Wolff et al. present the python version of HiCExplorer for loop detection. The algorithm is included in the Galaxy HiCExplorer webserver (Wolff et al. 2020), although the publication about the webserver did not describe the algorithm in detail. HiCExplorer uses the same donut approach as HiCCUPS (Rao et al. 2014) with a few notable differences: HiCExplorer selects candidate peaks based on the significance of the distance-corrected observed/expected ratio under a negative binomial model, and compares each peak's enrichment to its neighbourhood's using a Wilcoxon rank-sum test (an illustrative toy version of these two tests is sketched after this review). The method is appropriate for chromatin loop identification and it performs similarly to existing methods, both in terms of computational requirements and the specificity of the detected loops. However, the manuscript in its current form does not describe the method adequately, and the comparison with the other methods is limited and inconsistent. It would be good to describe each step of the method (filtering based on distance, candidate selection based on the negative binomial test, additional filtering options, local enrichment testing using different neighbourhoods in a Wilcoxon rank-sum test). The graphical representation currently included for the algorithm is not informative for most of these steps. For the scientific community, it would be more informative if the method's performance were analyzed further. Even though it is mentioned that loop detection greatly depends on the initial parameters, the results do not show how the parameters influence it. The comparison of HiCExplorer with other existing methods is inconsistent. Finally, the text needs heavy editing for language, clarity and minor spelling mistakes.

      Specific comments:

      - The background does not clearly lay out the motivation behind designing this algorithm. There are similar existing methods that are fast; why is this one expected to detect chromatin loops better?

      - This is not a 3D-genomics-specialized journal, so the text should introduce Hi-C and its challenges clearly. For example, the notion that genome properties and ligations affect Hi-C data analysis is mentioned in the methods section without further elaboration; it would be hard for readers to understand why the authors normalize for ligation events in their algorithm.

      - The background introduces a few methods that are not aimed at detecting chromatin loops (e.g., GOTHiC) or not designed for Hi-C (e.g., cLoops) and that are also not used in the comparison. It would be more useful to describe the algorithms of the methods that are comparable to HiCExplorer in terms of their goal and design.

      - Figure 1, which represents the steps of the algorithm, does not make clear what happens at each step; some arrows seem to point to random pixels, e.g., in panel C.

      - More elaboration on the use of the three different expected-value calculation methods is needed. Which one is more appropriate for a mammalian vs. an insect Hi-C dataset - does it depend on the genome size, the sequencing depth or the sparsity of the data?

      - The negative binomial distribution does model read counts well in most high-throughput sequencing experiments, but the rationale given for choosing it is not appropriate. Also, citing a Stack Exchange discussion in the methods is not suitable.

      - The numbers in most tables could be better appreciated if they were represented in a figure.

      - What was the reason for increasing the distance only to 8 Mb instead of using the full genome for the comparison, especially given that some of the compared methods only work on the full genome?

      - The bottom-left neighbourhood in HiCCUPS is assessed because HiCCUPS only uses the upper triangle of the Hi-C matrix, where the bottom-left neighbourhood represents the shorter interactions. In Figure 2, the detected interactions are indicated on the bottom triangle, which is counterintuitive.

      - Fig 2A shows the same data as Fig 2A of the Galaxy HiCExplorer publication (Wolff et al. 2020), but the detected loops indicated are different. What is the reason for that?

      - The difference between the proportions of CTCF-bound loops for the different methods is probably not significant; it should be tested.
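
      For readers unfamiliar with the two tests named above, an illustrative toy parameterization (not HiCExplorer's actual code) might look like this:

      ```python
      import numpy as np
      from scipy.stats import nbinom, ranksums

      def nb_pvalue(observed, mean, dispersion):
          """P(X >= observed) under a negative binomial with the given mean.

          scipy parameterizes NB by (n, p); convert from mean/dispersion so
          that variance = mean + dispersion * mean**2.
          """
          n = 1.0 / dispersion
          p = n / (n + mean)
          return nbinom.sf(observed - 1, n, p)

      def neighbourhood_test(peak_pixels, donut_pixels):
          """Wilcoxon rank-sum: is the candidate peak enriched over its donut?"""
          return ranksums(peak_pixels, donut_pixels, alternative="greater")

      print(nb_pvalue(observed=40, mean=12.0, dispersion=0.1))
      ```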

    1. Results

      Reviewer name: Lutz Brusch (revision 1)

      The revised version of the manuscript "ChemChaste: Simulating spatially inhomogenous biochemical reaction-diffusion systems for modelling cell-environment feedbacks" addresses all my previous comments and I would also like to thank the authors for their in-depth response.

    2. Motivation

      Reviewer name: Lutz Brusch

      The manuscript no. GIGA-D-21-00383, entitled "ChemChaste: Simulating spatially inhomogenous biochemical reaction-diffusion systems for modelling cell-environment feedbacks", addresses the important technical challenge of hybrid discrete-continuous models. The presented extension of the widely used Chaste software library, termed ChemChaste, now supports simulations of reaction-diffusion dynamics in a 2-dimensional environment bi-directionally coupled to motile and chemically active but point-like cells. Specifically, ChemChaste supports arbitrarily many spatial domains within the system, each with individual uniform diffusion coefficients. It supports arbitrarily many coupled reaction-diffusion equations and coupling via membrane reactions and transport reactions between bulk molecular species and intracellular species. Cells are coarsely represented as points on a cell-mesh that is distinct from the FE-mesh for solving the reaction-diffusion dynamics. The user interface is established through a tree of many small text and csv files that are human-readable. All these extensions to Chaste are valuable and their presentation is important for the large user base and beyond. The manuscript is clearly structured and well written. The source code is openly available under the permissive BSD 3-clause license at the provided GitHub link (https://github.com/OSS-Lab/ChemChaste) and includes all models, parameters and data as used in the present manuscript. Since the motivation and title focus on "...modelling cell-environment feedbacks", the implications and limitations of the coarse cell representation in ChemChaste must also be clearly stated; see the comments below. Major comments:


      1. Coarse spatial cell representation: Cells are represented by their node position in the cell-mesh and interact with the environment through a single node at the same position in the FE-mesh. Can this formalism properly account for transport reaction fluxes in strongly heterogeneous environments, where the FE-mesh needs many nodes with differing field values in a spatial area equivalent to the size of a single cell (with the cell node inside this area)? For example, how does this formalism evaluate the uptake from an exponential concentration gradient (as is common for diffusion and degradation around a localized source)? For such a field, the local concentration value at any single position is always smaller than the average over any symmetric interval around it. Hence a transport reaction flux calculated with the single concentration value at the cell center will systematically underestimate the flux that would result from averaging over the area equivalent to the size of the cell. Moreover, such systematic errors also occur for linear concentration gradients and can get amplified when transport or membrane reactions are nonlinear, with for instance a high Hill coefficient. For comparison, with a spatially more explicit cell representation with many paired cell-nodes and field-nodes, one could directly sum the flux contributions from these paired field-nodes. But with the single cell-node here, usability seems limited to weak gradients at the scale of the cell size. Alternatively, can a spatial kernel or stencil function be used to average or sum over field values in the spatial area equivalent to the size of a cell?
      2. Conservation of mass for transport: In biology, the number of molecules per time taken from the environment in a transport reaction has to equal the number of molecules per time added to the cell, and vice versa. So mass needs to be conserved, not concentration, whereas ChemChaste seems to add and subtract the concentration flux in the different spatial compartments (cf. page 7 of SI.S1.4). For example, if the FE-mesh needs to use multiple nodes in a spatial area equivalent to the size of a single cell (hence Ve<Vc) but the transport reaction only relates the concentration value at one of these nodes to the cell-node, then mass is not conserved and results will be wrong. One option may be to attach volume attributes to nodes in both meshes. A node i in the cell-mesh would store the current cell volume Vc_i and a node j in the FE-mesh would store that node's share of the volume in the environment Ve_j (doubling the number of nodes in the FE-mesh would on average halve each node's volume Ve_j). Then secretion of molecules with intracellular concentration u at rate k would reduce the intracellular concentration by a flux of molecule number per time and per volume, i.e. k*u*Vc/Vc = k*u, and increase the concentration at the environment node with flux k*u*Vc/Ve, which in general is and must be different from the intracellular concentration flux k*u. Likewise, if the FE-mesh is coarse (hence Ve>Vc) then the transport flux must get diluted, like k*u*Vc/Ve < k*u. The factor Vc/Ve does not appear to be implemented and the equations on page 7 of SI.S1.4 omit this factor, limiting the usability to the special case Vc=Ve. This implies that the construction of the FE-mesh has to match the cell-mesh wherever cells are positioned and in their neighborhood. This limitation and the required construction of the FE-mesh must be described (see also the numerical sketch after these major comments).
      3. Scaling of fluxes with cell surface area: In biology, membrane reactions and transport reactions occur at the molecular scale and yield a characteristic flux density per membrane area. The total flux per cell is then the integral of the flux density over the cell surface. Hence cells with larger surface area must be able to exchange more molecules with the environment. Since differently shaped cells will have different surface to volume ratios, it appears necessary to attach not only a cell volume Vc_i to each node i of the cell-mesh but also a surface area value Ac_i. The transport reaction fluxes from item 2 above then become k'*Ac*u*Vc/Vc = k'*Ac*u and k'*Ac*u*Vc/Ve, respectively, with a new rate constant k' with units [1/(area*time)]. The same argument applies to membrane reactions. Only if all cells have the same, constant surface area does Ac not need to be attached to nodes, and k may be used instead of k'*Ac.
      4. User interface and model format: To improve Interoperability according to FAIR:
         a. please explore and comment how the files that are required for model definition in ChemChaste can or cannot be packaged in a COMBINE archive [Bergmann et al. (2014). COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project. BMC Bioinformatics 15:369. https://doi.org/10.1186/s12859-014-0369-z];
         b. please compare ChemChaste's declaration of the reaction-diffusion model in the environment to that of the SBML Level 3 Spatial Processes Package (SBML-spatial) [https://synonym.caltech.edu/documents/specifications/level-3/version-1/spatial/];
         c. please compare ChemChaste's declaration of the reactions to that of the Antimony model format as used in the Tellurium framework [Smith et al. (2009). Antimony: a modular model definition language. Bioinformatics 25:2452. https://doi.org/10.1093/bioinformatics/btp401];
         d. please discuss the necessary steps to convert model files available in SBML-spatial or Antimony to ChemChaste and vice versa.
      5. Numerical accuracy of the 3-fold operator splitting scheme for cell-environment coupling: As shown in Fig.1b, the three operators 1 (Cell dynamics), 2 (Environment dynamics), 3 (Cellular fluxes) are applied sequentially for a coupled cell-environment model. How is the numerical error controlled for this 3-fold operator splitting scheme? How are time steps chosen or adapted internally?
      6. Model equations for test case with cell-environment coupling: In SI, Figure S10.c (and file CellA/Srn.txt in the code repository) apparently all 5 reactions are defined as reversible with "<->" and each has a nonzero kr=1.0, but only two of these reactions are reversible in the reaction scheme in main Fig.4a. Probably the file in the repo and SI is wrong (as the reverse generation of Precursor directly from Biomass and Enzyme is not physiological) and possibly the simulation results in Fig.4b may change after correction of the file CellA/Srn.txt.
      7. Findability of repository: To improve Findability of ChemChaste according to FAIR, the code repo should be integrated with or referenced from the core project at https://github.com/Chaste/ . This integration should also facilitate future code maintenance and usability in a sustainable manner.
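
      To make the quantitative points in major comments 1 and 2 concrete, here is a minimal, self-contained Python sketch (an editorial illustration, not ChemChaste code; all parameter values are hypothetical):

```python
import numpy as np

# Comment 1: point-sampling an exponential gradient u(x) = exp(-x/lam)
# underestimates the average over a cell-sized interval around the cell node.
lam = 1.0       # decay length of the gradient
x_cell = 2.0    # position of the cell node
r_cell = 0.5    # half-width of the cell-sized interval

u_point = np.exp(-x_cell / lam)                           # single-node sample
xs = np.linspace(x_cell - r_cell, x_cell + r_cell, 1001)
u_avg = np.exp(-xs / lam).mean()                          # cell-sized average
print(f"point sample {u_point:.4f} < cell average {u_avg:.4f}")
# A linear uptake flux k*u_point therefore underestimates k*u_avg; a
# Hill-type uptake with a high coefficient amplifies the discrepancy.

# Comment 2: secretion at rate k from a cell of volume Vc into an FE node of
# volume Ve conserves mass only if the concentration flux is scaled by Vc/Ve.
k, u_cell, Vc, Ve, dt = 0.1, 1.0, 1.0, 0.25, 0.01         # Ve < Vc: fine FE-mesh
du_cell = -k * u_cell * dt                                # intracellular change
du_env_naive = k * u_cell * dt                            # 1:1 concentration swap
du_env_scaled = k * u_cell * (Vc / Ve) * dt               # mass-conserving flux
print(f"mass lost by cell:                {-du_cell * Vc:.6f}")
print(f"mass gained by node (naive):      {du_env_naive * Ve:.6f}")
print(f"mass gained by node (with Vc/Ve): {du_env_scaled * Ve:.6f}")
```

      Minor comments: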

      1. Further tests may easily be implemented for the Schnakenberg model, which was qualitatively simulated but not quantitatively compared to an analytical prediction (main text, lines 368-375). One (rough) quantitative comparison could be achieved for the dominant mode of the Fourier-transformed simulated pattern (Fig.3b; or some other measure of the spatial period of the pattern) versus the critical mode of the diffusion-driven instability (|k_cr|^2 = 1/(2D_U) * dR_U/dU + 1/(2D_V) * dR_V/dV); a worked evaluation of this critical mode is sketched after these minor comments. In addition, the instability threshold from eq. (25) in SI.S6 (page 27) can be tested in simulations along a one-parameter scan across the instability, and the temporal oscillation period in Fig.3a can be (roughly) compared to the predicted period from the imaginary part of the eigenvalues of the steady state, or computed by means of numerical continuation in AUTO (http://indy.cs.concordia.ca/auto).
      2. Main text, lines 460-463: "Thus...lead to a spatial segregation of the two cell types." This behavior may be a consequence of the slow or absent active motility of the cells. As it stands, cell division alone seems to generate compact clones of the same cell type rather than emergent spatial segregation. Maybe comment if/how ChemChaste handles random walks of cells or even chemotaxis of cells towards ES. Then the interesting question of emergent spatial segregation can be studied with ChemChaste.
      3. Please clarify if/how ChemChaste allows transport reactions directly between neighboring cells (like auxin or calcium transport in tissues) to be incorporated.
      4. Where are the membrane reactions involving a cell and the environment included in Fig.1b: in steps 1./2. or in step 3.? That is interesting for the numerical operator splitting scheme and may be added to the caption.
      5. In addition to item 7 above (which should ensure future usability), the reproducibility of the current model results as presented in this manuscript should be ensured by archiving the current software version from the ChemChaste code repo at Zenodo or a similar service, and the DOI of that archive should be given in the manuscript. In addition, that archived code shall be given a version number on GitHub and that version number shall also be given in the manuscript.
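
      As a worked version of minor comment 1, the critical mode can be evaluated directly from the formula above. The sketch below (an editorial illustration) assumes the standard Schnakenberg kinetics R_U = a - U + U^2*V and R_V = b - U^2*V with hypothetical parameter values, which may differ from those used in the manuscript:

```python
import numpy as np

a, b = 0.1, 0.9        # Schnakenberg feed parameters (hypothetical)
D_U, D_V = 1.0, 40.0   # diffusion coefficients (hypothetical)

# Homogeneous steady state of R_U = a - U + U^2*V, R_V = b - U^2*V.
u0 = a + b
v0 = b / (a + b) ** 2

dRu_du = -1.0 + 2.0 * u0 * v0   # dR_U/dU at (u0, v0)
dRv_dv = -(u0 ** 2)             # dR_V/dV at (u0, v0)

# Critical mode: |k_cr|^2 = 1/(2*D_U) * dR_U/dU + 1/(2*D_V) * dR_V/dV.
k_cr_sq = dRu_du / (2.0 * D_U) + dRv_dv / (2.0 * D_V)
if k_cr_sq > 0:
    wavelength = 2.0 * np.pi / np.sqrt(k_cr_sq)
    print(f"|k_cr|^2 = {k_cr_sq:.4f}, predicted spatial period = {wavelength:.2f}")
else:
    print("no real critical mode: no Turing instability for these parameters")
```

      The predicted spatial period 2*pi/|k_cr| could then be compared against the dominant mode of the Fourier-transformed simulated pattern, as suggested above.

      Figure improvements: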

      • Figure 2.b may have axes flipped or may have an unfortunate color scale with too little contrast for convergence scores between 0.4 and 0.5 to show the gradual change of score at the horizontal row with dt=0.1 (which is apparently used in Fig. 2.c and shows a change of accuracy there). Please check and improve the correspondence between panels b) and c) such that the data from panel c) helps to get a feeling for the L2 score changes in panel b).
      • Figure 2.b: How can we understand the loss of convergence if the time step is reduced (say from 0.006 to 0.0002) at any fixed dx? From other solvers, one expects a finer dt to improve convergence, while this plot shows dark (high L2 score) areas on both sides of the light (low L2 score) areas at intermediate values of dt.
      • Figure 2.c: The color code is not suited for so many curves. Either include line style or reduce the number of curves (preferred). It must become clear which curve belongs to which dx. The green curve with dx=0.8 seems to be hidden?
      • Figure 3.a: The figure caption should explain the source of variation between nodes (e.g. by pointing to the noise terms in eqs. 13,14) and the color code for the two bands (dark and light) around each curve (1-sigma and 2-sigma or 1-sigma and min/max ?).
      • Figure 4b: These two panels could be given more space. Suggestion: re-arrange part a) horizontally and then put both diagrams of b) at the bottom, left and right.
      • Figure 5: The caption wrongly announces "and t=100" which is not shown. Also the words "towards the" in the first line seem to be linked to t=100.

      Text corrections:

      • main text, line 61. The sentence "...centred on the role chemical coupling." seems to miss the preposition "of".
      • main text, line 71. The phrase "cellular network reaction size" appears misleading, when it shall refer to "the size of the cellular reaction network".
      • main text, lines 280, 284, 286: Since the subsections of the Results section are not numbered here, then the text pointers "(Section )" can be omitted.
      • main text, one line below eq.(7): "reaction rate constants parameters" can drop the word "parameters"
      • main text, lines 450 and 451: "a...concentrations" should be either singular or plural
      • SI.S1, page 1, line 5 above eq. (1): text "exchange chemical concentrations" should read "exchange molecules" and, correspondingly, "controlling the chemical concentrations passing between the bulk and the cell" should read "controlling the flux of molecules between the bulk and the cell".
      • SI.S1, page 2, line 2: "asssociated" has an "s" too much
      • SI.S1, page 5, at the end of Fig.S1's caption: $k-p$ should be $k_p$
      • SI.S2.2.1, page 14, eq. (11) has capital U_0 and V_0 as initial values while the sentence above has small u_0, v_0. These should be the same symbols.
      • SI.S6, page 26, 1 line below eq. (19): "is a spatial case" should be "is a special case"
    3. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Cheryl Sershen

      It would be nice to include the GitHub link for Chaste. I was able to use the software and reproduce the results presented in the paper. The software is easy to use and install. A broader discussion of what would be necessary to expand ChemChaste to three dimensions is needed. In a follow-up paper, comparisons to actual experimental results would be useful and would encourage users to consider this software. Only proximity to the analytical solutions was presented here.

    1. report

      Reviewer name: Yang Zhou (revision 1)

      The authors have resolved most of my comments. However, I am still confused about the gap in the Pilon step from the information in Table 1. In the table, I could read that the assembly length of "Flye + Pilon" is 2,383,228,608 bp, and the ungapped length is 2,383,226,373 bp, so the gap length is 2,383,228,608 - 2,383,226,373 = 2,235 bp. Because in the "Flye" version the assembly length is equal to the ungapped length, this means that gaps are introduced after Pilon correction.

    2. Findings

      *Reviewer name: Yang Zhou*

      The authors have resolved most of my comments. However, I am still confused about the gap in the Pilon step from the information in Table 1. In the table, I could read that the assembly length of "Flye + Pilon" is 2,383,228,608 bp, and the ungapped length is 2,383,226,373 bp, so the gap length is 2,383,228,608 - 2,383,226,373 = 2,235 bp. Because in the "Flye" version the assembly length is equal to the ungapped length, this means that gaps are introduced after Pilon correction.

    3. Syrian

      Reviewer name: Derek Bickhart (revision 2)

      The authors have addressed all of my remaining concerns.

    4. Background

      Reviewer name: Derek Bickhart (revision 1)

      Summary: In this revision, the authors have addressed most of my major concerns with the manuscript. More details must be provided in two sections of the manuscript based on new details provided by the authors. However, these concerns could feasibly be addressed in revision.

      Line 124: While the authors have provided an explanation for the sequencing of different target fragment length library preparations, I do not see any results that suggest that one particular preparation was more efficient than the others. This is particularly important given the prevalence of four experimental runs of varying dataset sizes that were uploaded to the cited Biosample accession on SRA. Currently, the metadata provided for that Biosample and its associated experiments is lacking, and one cannot easily distinguish which experiment resulted from different target length preparations. A discursive analysis is not required here, but a statement that provides limited data supporting the authors' preference for library prep is necessary.

      Line 301: I believe that the authors misinterpreted the comment on this section in my last review. I requested the proportion of sequence identity differences between assemblies due to INDELs, not assembly gaps. Residual INDELs are still a major problem in polished assemblies that may impact gene annotation.

      Figure 1 caption: Given the new k-mer genome size estimation analysis provided by the authors, it does not make sense to use the total length of the MesAur1.0 assembly here. I believe that the authors should choose a genome size estimate that seems most reasonable (from the two options provided) and then use that as the basis for NG50 comparisons (a toy illustration of how the genome-size estimate shifts NG50 follows below). Otherwise, are they conceding that the MesAur1.0 assembly size is the full length of the Syrian Hamster sequence-accessible genome?
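
      As a toy illustration of the NG50 point above (an editorial sketch with made-up contig lengths, not data from the manuscript): unlike N50, NG50 divides by an external genome-size estimate, so the chosen estimate directly shifts the reported value.

```python
def ng50(contig_lengths, genome_size_estimate):
    """Length of the contig at which the cumulative length of contigs,
    sorted longest-first, first reaches half the estimated genome size."""
    total = 0
    for length in sorted(contig_lengths, reverse=True):
        total += length
        if total >= genome_size_estimate / 2:
            return length
    return 0  # assembly covers less than half of the estimated genome

contigs = [120, 100, 80, 40, 20, 10]            # made-up contig lengths (Mb)
print(ng50(contigs, genome_size_estimate=300))  # 100
print(ng50(contigs, genome_size_estimate=500))  # 80: larger estimate, smaller NG50
```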

    5. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Derek Bickhart

      Summary: In this manuscript, Harris et al. detail the methods they used to create a new reference genome for the Syrian hamster, which is an important model for respiratory disease pathogens. They used several different sequencing technologies to generate the contigs and scaffolds for their new assembly, and achieved a relatively contiguous end product. The analysis is suitable for the "genome report" style format (with one omission detailed below in my comments); however, the manuscript suffers from some awkward phrasing and grammar errors in the results and methods. I list my comments below in the relative order in which I encountered them in the manuscript. Since the authors did not provide line numbers in their submission, I provide my comments as a block listing of questions/suggestions/critiques.

      Section titled "oxford nanopore long-read sequencing": The description of the shearing is awkward. I recommend revising the first sentence to state that the genomic DNA isolates were sheared to three lengths (without providing these lengths in the sentence). In subsequent sentences, provide the lengths in situ with the methods used to prepare them. Also, it is unclear why three different fragment lengths were used here for Oxford Nanopore sequencing. Given that these fragment lengths are relatively similar in size (e.g. not disparate lengths similar to recent ultra-long nanopore read preps of >100kb), it would be very helpful to the reader if justification was given for this approach.

      Section titled "Genome assembly": This entire paragraph is awkwardly phrased with numerous past- or present-tense changes. Additionally, the reference to the Pilon polisher needs to be cited, and details need to be provided on what settings were used for Pilon polishing (it is often recommended to correct only indels and to omit gap-filling) and how many iterations of polishing were used. Details are missing on how BioNano optical maps were generated, and what DNA was used as input in the process. Also, what software was used to compare BioNano optical maps, and with what settings? Finally, it appears that the RNA-seq data used by NCBI for annotation was used in another study. Citation to that study would be required so that the reader is aware that the data resulted from different individuals other than the reference individual sequenced in this analysis.

      Section titled "Assembly Comparisons": What is the expected c-value of the Syrian Hamster genome? Also, what is the karyotype count? Are any of the chromosomes metacentric or acrocentric? Were any satellite regions identified and annotated in this assembly? Finally, I would have preferred that assembly comparisons be conducted with feature response curves, such as those produced by the program "FRC_align", as this provides a useful metric to assess assembly "correctness" by length.

      Section titled "Transcript and protein alignments and annotation comparisons": How many INDELs were identified in the alignments of RNA-seq transcripts to the BCM_Maur_2.0 assembly? Was this count different from those discovered in the short read assembly?

      Section titled "Interferon type 1 alpha gene cluster": Were there any gaps that spanned the gene cluster or flanked it?

    1. Findings

      Reviewer name: Boas Pucker (revised version)

      The authors have further improved the quality of this manuscript and responded to all my comments. My concerns were addressed and several comments were resolved by extensive analyses (e.g. #7). Although some opportunities for further investigation were left for future studies, I still believe that this work is very important for the community. The quality of this Ensete glaucum assembly appears very high. I would like to congratulate the authors on this excellent work and recommend its publication in GigaScience.

    2. Background

      Reviewer name: Ning Jiang

      In this study, the authors described the generation of a high-quality reference genome of Ensete glaucum, which is one of the most cold-hardy species in the Musaceae. It is also well known for its drought tolerance. The authors compared the expansion and contraction of gene families and the composition of repeats among related species. The genome assembly, analysis, and annotation are certainly useful for comparative genomic studies as well as future breeding practice. Everything seems to make sense to me. Certainly, the results are descriptive, but this is more than sufficient for a data note.

    3. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

      Reviewer name: Boas Pucker

      Wang et al. generated a chromosome-scale genome sequence assembly of Ensete glaucum based on ONT long reads. This is a valuable resource for comparison against various Musaceae species. This assembly will certainly help to identify genes underlying agronomic traits in Musaceae. Important data sets are already well integrated into the banana genome hub and available to the community. The authors harnessed this highly contiguous assembly for analyses of synteny against Musa acuminata and for the investigation of repeats/TEs. Overall, the quality of this work is high and the manuscript is well written. I am not sure why this submission is classified as a data note, because it could also pass as a research article. I noticed a few issues and provide some specific comments that might be helpful to further improve the quality of this work:
      1) There are many numbers in the abstract. I would recommend reducing this to the most important ones. For example, the BUSCO results could be removed.
      2) There is only one short paragraph about existing genome sequences. I would recommend extending this and mentioning the banana genome hub as the central community resource.
      3) Please indicate if the coverage estimations are based on the haploid or diploid genome size (Table 1).
      4) Please provide additional details about the BUSCO results (C, S, D, F, M) in line 114 and/or in Table 2.
      5) I find the sentence in line 120/121 confusing on first reading. It suggests that more sequence was anchored than is present in the initial assembly. The sentence is correct, but it might be better to present the total assembly size first and to describe the anchored proportion in a separate sentence.
      6) It would be helpful to clearly distinguish between the genome (DNA) and the genome sequence (the assembly). That would make it easier to understand the discussion of differences between both (e.g. collapsed repeats).
      7) Genome size estimation is always tricky. I would recommend running several tools and providing the estimated range (findGSE, gce, MGSE, GenomeScope, etc.). It is also important to run the k-mer-based approaches with different k-mer sizes. Apparently, GenomeScope was used for the heterozygosity analysis, but not for the genome size estimation. That is surprising. (A back-of-the-envelope k-mer calculation is sketched after these comments.)
      8) Statistics about the pseudochromosomes in Table 2 could be removed. For example, it is not necessary to say that the L50 number of 9 chromosomes is 5.
      9) Please explain the difference in BUSCO results between predicted genes and BUSCO run in genome mode. Which genes are missing in the annotation? Table S3 suggests that the automatic BUSCO annotation (genome mode) is superior to the annotation generated in this study (analyzed in transcriptome mode).
      10) Some statements about the CENs and telomeres would be interesting. These could give a good impression of the assembly results. Estimating their copy numbers could help to explain the difference between assembly size and estimated genome size.
      11) Are there any genetic markers that could be used to check the assembly accuracy?
      12) In my opinion, the section "Gene distribution and whole-genome duplication analysis" could be removed. Genes are never equally distributed across a genome, and repeats/TEs are usually clustered around the centromeres. Therefore, this part does not add any novel insights. The second paragraph comes to the conclusion that all Musaceae share the same WGDs. This seems obvious to me. Was there a different expectation?
      13) Orthogroup identification could be complemented with a synteny analysis. A comparison to Musa acuminata (https://doi.org/10.1038/s42003-021-02559-3) could help to check the accuracy of the orthogroups.
      14) The statement "Genes with Ka/Ks > 1 were under positive selection (Supplementary Table S6)." does not fit well with the rest of this paragraph. Given that there are >35k genes, some would show values >1 by chance. A statistical test would be needed to find out which genes are actually under positive selection. What is the conclusion from the identification of such genes? Any enrichment of particular functions?
      15) The statement about the sugar transporters is interesting. This would be a good chance to connect these comparative genomics results with the transcriptome analyses.
      16) Transcription factor families are mentioned, but not discussed. It is not surprising that MYBs are the largest TF gene family. However, it would be interesting to know if there are any striking differences compared to M. acuminata (https://doi.org/10.1371/journal.pone.0239275). Some MYBs like the anthocyanin regulators respond to sugar treatments. Is there a connection to the large number of sugar transporters? Any duplications/deletions compared to M. acuminata? This could be another opportunity to better connect different aspects of this study.
      17) It is interesting to read that head-to-head and tail-to-tail repeats appeared collapsed. Previous studies identified that these arrangements of repeats are associated with low local read quality (e.g. https://doi.org/10.1093/nar/gkaa206, https://doi.org/10.1186/s12864-021-07877-8). I would not expect that both strands of the DNA molecules are sequenced. The authors might want to check this and provide additional explanation.
      18) I am surprised that TEs were the most abundant class of repeats. Could this be caused by treating all the different TEs as one group? CENs should appear with a much higher copy number than individual TEs or TE families.
      19) The centromeric patterns could be compared to the situation in Arabidopsis thaliana: https://www.science.org/doi/10.1126/science.abi7489.
      20) Are SSRs less frequent around the centromeres and on the NOR chromosome arm, or is this just a lack of detection in these regions?
      21) Why is AG/CT more abundant than other SSRs? This could be compared to other species.
      22) References for the length of the 45S rDNA in other species are missing.
      23) How many 45S rDNA copies can be inferred from the ONT reads? The coverage is much higher, so this estimate should be more reliable. (See the sketch after these comments.)
      24) The NOR chromosome arm is depleted of protein-coding genes, but there should be plenty of rRNA genes. Please specify this in the sentence.
      25) The synteny section is lengthy. The statements in the context of previous studies are good, but removing some purely descriptive parts might make it more interesting. The corresponding figures show everything and could stand on their own.
      26) What is the value of genotyping-by-sequencing if not combined with GWAS?
      27) Which ONT flow cell type? Which Guppy version?
      28) It does not become clear how the Hi-C library was prepared (line 562). What is the improvement? Please explain this here.
      29) Please add the detailed parameters of the assembly and polishing.
      30) The BWA reference is missing. Why was BWA not used for the mapping of the Hi-C reads?
      31) The statement in line 592/593 suggests that Hi-C was used for validation. However, it was also used for correction in the previous step. In any case, this result should be moved from the method to the result section.
      32) The Trinity assembly and PASA steps lack details.
      33) Parameters of the STAR mapping and gene prediction steps are missing.
      34) There is some discrepancy concerning the Musa acuminata genome assembly versions. It seems that v2 is used in some cases and v4 in others. Please check this.
      35) Please make the customized script available via GitHub (line 732) if this is different from the one mentioned in line 737.
      36) Are the TE results consistent if different 2 Gb subsets of the Illumina data are analyzed?
      37) How were the centromere positions determined? I think that I have missed that in the method section. It must be connected to the CEN repeats, but the precise approach could be explained in more detail.
      38) The read data sets are not released, thus I cannot check if all raw data sets were included. It would be particularly important to have the FAST5 files of the ONT data to study base modifications in the future.
      39) The link to the banana genome hub appears to be broken in the data availability statement. The data sets on the genome hub look fine.
      40) The terms "core" and "pseudo-core" in Fig. 3 are not frequently used in the literature. These genes seem to have different degrees of dispensability and might be conditionally dispensable (https://pubmed.ncbi.nlm.nih.gov/24548794/; https://doi.org/10.1186/s13007-021-00718-5).
      41) There seems to be some variation in the genome size estimation. I would recommend presenting the results of multiple k-mer sizes (e.g. 17-25). The distribution of the resulting values might help to estimate the true genome size. JellyFish (k=17): 563 Mb; findGSE (k=21): 589 Mb; GenomeScope (k=21): 489 Mb (this is smaller than the actual assembly size).
      42) The presented sugar transporters are not among the top enriched GO terms (S2). Therefore, I am afraid that this analysis is not very informative. Could it be that the "enriched" GOs are just a "random" set?
      43) Why is E. glaucum not presented as S5C? A direct comparison would make more sense.
      44) S10: I would recommend identifying the precise break points. Next, it would be good to validate the accuracy of the assembly by finding individual reads that actually support the situation in E. glaucum. This would help to exclude an assembly artifact as the reason for the difference.
      45) It might be better to use a three-letter abbreviation of the species ("Egl" instead of "Eg") in the gene IDs to avoid ambiguities in future genome sequencing projects.
      46) The method section states that short DNA fragments below 12 kb were removed. S11 suggests that two libraries were sequenced: one with depletion of the short fragments and one without it. Please check this. Generally, I would recommend trying a different gDNA extraction protocol and using SRE instead of BluePippin.
      47) The north of eg06 looks suspicious in the Hi-C analysis (S12). There is also no substantial synteny with any of the Musa chromosomes (S8). Could this be an indication that there are errors in the assembly?
      48) Table S1: What is the point in showing that all contigs are larger than 1, 2, and 5 kb?
      49) 445 bHLHs in M. acuminata is almost twice the number of bHLHs detected in E. glaucum. Some other TF families also show this large difference, but other families show almost equal numbers. It could be interesting to further investigate this. The HB-KNOX value of M. acuminata is missing.

      Minor comments:
      • line 70/71: Some countries are named multiple times. Please change this.
      • line 113: chromosomes > pseudochromosomes
      • line 273/274: Please check this sentence.
      • line 428: Please rephrase "translated proteins", and SynVisio should only be named in the method section.
      • line 436: "protein-coding genomes"?
      • line 464: "second (right)" should be replaced by north/south or q/p nomenclature. This also affects some following sentences.
      • line 625: "Musa acuminata" is a species name
      • line 639: blast > BLAST
      • line 731: of of > of
      • line 811: RNA-sequencing > RNA-seq (I have not seen a section about RNA sequencing)
      • S10: "E glaucum" > "E. glaucum"
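
      Back-of-the-envelope sketches for comments 7 and 23 (an editorial illustration; all numbers are hypothetical):

```python
# Comment 7: with a k-mer histogram, the haploid genome size is roughly the
# total number of (error-free) k-mers divided by the homozygous coverage peak.
total_kmers = 28_000_000_000   # k-mers counted at, e.g., k = 21
peak_coverage = 50             # homozygous peak of the k-mer histogram
print(f"estimated genome size: {total_kmers / peak_coverage / 1e6:.0f} Mb")

# Comment 23: the 45S rDNA copy number can be estimated from relative read
# depth, assuming reads are mapped and depth is averaged per region.
mean_depth_45s = 3200.0        # mean mapped ONT depth over the 45S unit
mean_depth_single_copy = 40.0  # genome-wide single-copy depth
print(f"estimated 45S copies: {mean_depth_45s / mean_depth_single_copy:.0f}")
```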

    1. ML

      Reviewer name: Gael Varoquaux (revision 1)

      I would like to thank the authors for the work done on their manuscript, in particular adding the experiments that enable linking to sparse-recovery theory. In my opinion, the manuscript brings a lot of value to the application community and is pretty much complete. A few details come to my mind that could help its message be most accurate. Following my suggestions, the authors have used an l1 penalty in the SVC. This worked well in terms of prediction. However, it is not the default. I think that the authors should stress this and be precise about the penalty each time they mention the SVC. In addition, I think that there would be value in performing an additional experiment with an l2 penalty (which is the default) to stress the importance of the l1 penalty; a minimal illustration of the difference is sketched below. The message should stress that the penalty (l1 vs l2) is important, but less so the loss (log reg vs SVC). As a minor detail, I would invert the color scale of one of the plots in figures S12 and S13, to stress the parallel between the two. Finally, I think that it is important to stress in the conclusion that all the results build on the fact that the predictive information is sparse (maybe phrased in words more familiar to the application community).
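
      A minimal sketch of the l1-vs-l2 point (assuming scikit-learn; random stand-in data with a sparse 5-feature signal, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
w_true = np.zeros(100)
w_true[:5] = 2.0                                 # sparse ground-truth signal
y = (X @ w_true + rng.normal(size=200) > 0).astype(int)

# l1-penalised models yield sparse coefficients; the default l2 penalty does not.
svc_l1 = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)
svc_l2 = LinearSVC(penalty="l2", dual=False, C=0.1).fit(X, y)
logreg_l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

for name, model in [("SVC l1", svc_l1), ("SVC l2", svc_l2), ("logreg l1", logreg_l1)]:
    n_nonzero = int(np.sum(np.abs(model.coef_) > 1e-6))
    print(f"{name}: {n_nonzero} non-zero coefficients out of 100")
```

      The penalty, not the loss, is what drives the sparsity of the recovered signal, in line with the comment above.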

    2. Results

      Reviewer name: Filippo Castiglione

      The article "Profiling the baseline performance and limits of machine learning models for adaptive immune receptor repertoire classification by Kanduri1 et al. describes the construction of suitable reference benchmarks data-sets to guide new AIRR ML classification methods. The article is interesting and potentially useful in defining benchmark data sets and criteria for constructing specialized AIRR benchmark datasets for the community of researcher interested in AIRR. The authors following previous indications about model reproducibility and availability also provide a docker container which include all data and procedures to reproduce the study. The article is sufficiently well written although at time a bit full of details which perhaps could be synthesised further (this has already been done in pictures and tables). I don't have major concerns. Only a couple of notes. Would be good to have a figure or diagram showing an example of bags containing receptors and associated witnesses. It could illuminate the reader not familiar with Multiple instanvd learning. Would be good to have line commands for the generation of data sets (in the case, for instance, of use of Olga). I understand these are inside the docker container but the reader that is not interested in the whole container might find useful to have access to pieces of the pipeline so to use this or that tool (being it in immuneML, in Olga, etc.). Curiosity: why have the authors used Olga and not the mate Igor? Why is the performance metric in model training the accuracy and not, for instance, the F1-score? Any particular reason? Methods Are the methods appropriate to the aims of the study, are they well described, and are necessary controls included? Choose an item. Conclusions Are the conclusions adequately supported by the data shown? Choose an item. Reporting Standards Does the manuscript adhere to the journal’s guidelines on minimum standards of reporting? Choose an item. Choose an item. Statistics Are you able to assess all statistics in the manuscript, including the appropriateness of statistical tests used? Choose an item. Quality of Written English Please indicate the quality of language in the manuscript: Choose an item. Declaration of Competing Interests Please complete a declaration of competing interests, considering the following questions:  Have you in the past five years received reimbursements, fees, funding, or salary from an organisation that may in any way gain or lose financially from the publication of this manuscript, either now or in the future?  Do you hold any stocks or shares in an organisation that may in any way gain or lose financially from the publication of this manuscript, either now or in the future?  Do you hold or are you currently applying for any patents relating to the content of the manuscript?  Have you received reimbursements, fees, funding, or salary from an organization that holds or has applied for patents relating to the content of the manuscript?  Do you have any other financial competing interests?  Do you have any non-financial competing interests in relation to this paper? If you can answer no to all of the above, write 'I declare that I have no competing interests' below. If your reply is yes to any, please give details below. I declare that I have no competing interests I agree to the open peer review policy of the journal. 

    3. Background

      Reviewer name: Enkelejda Miho

General opinion: approved with minor changes.

Comments: The manuscript profiles machine learning methods for immune state label prediction on AIRR T-cell receptor datasets, to establish the baseline performance of such methods across a diverse set of challenges. Simulated datasets with variable properties are used to provide a large number of benchmarking datasets with known immune state signals while reflecting the natural complexity of experimental datasets. The results provide insights into the current limits posed by basic dataset properties on baseline ML models and establish a frontier of improvement for AIRR ML research. The manuscript is understandable and well structured in its approach to the comparisons as well as in its conclusions. The graphics are clear and consistent and support the manuscript. It offers a very interesting insight into the importance of single variable parameters, such as sample size or witness rate, for overall accuracy. The advantage of the results to the scientific community is that they offer an evaluation of classical ML methods, provide large and specialized AIRR benchmark datasets, and allow further development and benchmarking of more sophisticated ML methods. The manuscript is overall well written, and we endorse it with minor changes:

- In the paragraph "Impact of noise on classification performance" (page 14), the sentence "but enriched above a baseline in positive class examples" should be corrected to "but being enriched above a baseline in positive class examples".
- In the paragraph "Machine learning models" (Methods section, page 21), "lasso" should be corrected to "Lasso".
- In the same paragraph, "'- '" should be corrected to "'-'" and "𝑋jdenoting" to "𝑋j denoting".
- In the Discussion, the sentence "which aligns with the observations that that the majority of the possible contacts between TCR and peptide" should be corrected to "which aligns with the observations that the majority of the possible contacts between TCR and peptide".
- Keep comparisons such as "size>500" and "size > 500" consistent.
- Check for missing whitespace, as in the description of Figure 1(b): "…(5 x 105 % of sequence…"; likewise in cases such as "≈90%" vs. "≈ 90 %" and "n=60" vs. "n = 60".

Declaration of Competing Interests: Enkelejda Miho owns shares in aiNET GmbH.

    4. Abstract

      This work has been peer reviewed in GigaScience (see paper), which carries out open, named peer-review. These reviews are published under a CC-BY 4.0 license and were as follows:

Reviewer name: Gael Varoquaux

The manuscript by Kanduri et al. benchmarks baseline machine-learning methods on simulated sequencing data of adaptive immune receptors, to predict the immune states of individuals by detecting antigen-specific signatures. Given that there is a volume of publications using a wide variety of machine learning techniques with the promise of clinical diagnostics on such data, the goal of the study is to set baseline expectations. From an application standpoint, I believe that the study is well motivated and useful to the community. From a signal-processing standpoint, many aspects of the study are trivial consequences of the simulation choices: sparse estimators are good for prediction when the signal is generated from sparse coefficients. Though I do not know this application community well, it seems to me that the manuscript is valuable because it casts this knowledge in a specific application setting; however, it should discuss a bit more the fundamental statistical reasons that underlie the empirical findings. I give below some major and minor comments to help make the study more solid.

1. Plausibility of the simulations. The validity of the findings relies crucially on the simulations, in particular the hypotheses of extreme sparsity. These hypotheses need to be discussed in more detail, with references to back them. The amount of sparsity, as detailed in Table 1, is huge, which strongly favors sparse models.

2. Another baseline, natural given the sparsity. I do realize that the goal of this study is not to do an exhaustive comparison of all machine learning methods -- an impossible task -- but for someone knowledgeable about sparse signal processing, the study begs the question of whether univariate tests on appropriate k-mers can be enough, an avenue suggested by the authors on page 7. This option should be studied empirically, as it would provide important practical methods.

3. Link to sparse-model theory. A vast variety of theoretical results state that a sparse model will be successful for n proportional to s log(p), where n here would be the number of samples in the minority class and s would be the number of non-zero coefficients. A good summary of these results can be found in the book "Statistical Learning with Sparsity: The Lasso and Generalizations", T. Hastie, R. Tibshirani, M. Wainwright, 2019. It would be interesting to see how these theoretical scalings match the results, for instance those in Figure 3.

4. Accuracy and class imbalance. It seems to me that in parts of the manuscript (Fig. 4a, for instance) accuracy is compared across scenarios with varying class imbalance. However, accuracy is not comparable when class imbalance varies: for instance, with a 90% positive class, a classifier that always chooses the positive label will have 0.9 accuracy. In this light, I don't understand Fig. 4a, in which accuracy goes to 0.5 even for large class imbalance. In addition, the typical good practice is to use a metric for which decisions under chance are not affected by class imbalance, such as the area under the ROC curve.

5. Comparison with SVC. The manuscript mentions that a support vector classifier is also benchmarked, but it does not give details on which specific SVC is used. A crucial point is the kernel used: with a linear kernel, the SVC is a linear model, while with another kernel (an RBF kernel, for instance) the SVC is a much more complex model and is not expected to behave well in large-p, small-n problems. Also, I suspect that the SVC is used with l2 regularization. A linear SVC with l1 regularization would likely have performance similar to the l1-penalized logistic regression, as it is a model of the same nature. These details should be added; ideally, if the model benchmarked is not a linear SVC, a linear SVC should be benchmarked, to give a baseline (though the default l2 regularization can be used, to stick to common practices).

6. Wording in the conclusion. The conclusion starts with "To help the scientific community in avoiding futile efforts of developing...". The word "futile" is too strong, and the phrasing will not encourage healthy scientific discussion.

I try to sign my reviews as much as possible. Gaël Varoquaux

Declaration of Competing Interests: I have no competing interests.
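Points 4 and 5 lend themselves to a concrete illustration. The sketch below uses scikit-learn on synthetic data (the sample sizes and class weights are illustrative, not taken from the manuscript): it shows the accuracy that a trivial majority-class predictor earns under a 90% positive class versus its chance-level ROC AUC, and compares the two l1-penalized linear models the review calls models "of the same nature".

```python
# Minimal sketch: accuracy vs ROC AUC under class imbalance, and
# l1 logistic regression vs l1 linear SVC; all sizes are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# 90% positive class: accuracy rewards the trivial majority predictor,
# while ROC AUC correctly scores it at chance level (0.5).
X, y = make_classification(n_samples=2000, n_features=100, n_informative=10,
                           weights=[0.1, 0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
majority = np.ones_like(y_te)                    # always predict the positive class
print("accuracy:", accuracy_score(y_te, majority))  # ~0.9 by construction
print("ROC AUC :", roc_auc_score(y_te, majority))   # 0.5: no ranking information

# Two l1-penalized linear models "of the same nature": on a sparse ground
# truth their test AUCs are expected to be close.
logreg = LogisticRegression(penalty="l1", solver="liblinear").fit(X_tr, y_tr)
svc = LinearSVC(penalty="l1", dual=False, max_iter=5000).fit(X_tr, y_tr)
print("l1 logistic AUC  :", roc_auc_score(y_te, logreg.decision_function(X_te)))
print("l1 linear SVC AUC:", roc_auc_score(y_te, svc.decision_function(X_te)))
```

On data generated with only a few informative features, both l1 models typically recover similar AUCs, consistent with the n proportional to s log(p) sample-complexity scaling cited in point 3 for sparse recovery.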

    1. Functional

      Reviewer 3: Chris Armit

This Data Note describes an Open CC0 neuroimaging dataset of 15 subjects (young adults) who underwent simultaneous BOLD-fMRI and FDG-fPET imaging. FDG-fPET ([18F]-fluorodeoxyglucose positron emission tomography) measures glucose uptake in the human brain, whereas BOLD-fMRI (blood oxygenation level dependent functional magnetic resonance imaging) captures the cerebrovascular haemodynamic response. FDG-PET data were acquired using three different radiotracer administration protocols - bolus, constant infusion, and 50% bolus + 50% infusion - and each administration protocol was applied to 5 subjects. BOLD-fMRI and FDG-PET were acquired while participants viewed a checkerboard stimulus, which was used to trigger dynamic changes in brain glucose metabolism.

This neuroimaging dataset allows researchers to explore the complexity of energetic dynamics in the brain using multimodal imaging data analysis. In addition, it includes structural MRI data for each subject, including T1 and T2 FLAIR, enabling neuroanatomical correlations to be explored. The neuroimaging data are available from OpenNeuro [http://doi.org/10.18112/openneuro.ds003397.v1.1.1], and the authors are to be commended for ascribing a CC0 Public Domain Dedication to this dataset. Importantly, the authors highlight that consent was obtained from participants to release de-identified data. I downloaded a small number of image files from this dataset and I confirm that the de-identified NIfTI (Neuroimaging Informatics Technology Initiative) format files can be opened using Fiji / ImageJ.
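For readers who prefer a scripted check over Fiji / ImageJ, here is a minimal sketch of inspecting one of the NIfTI files with nibabel; the BIDS-style filename is a hypothetical example and should be replaced with an actual file downloaded from the OpenNeuro dataset.

```python
# Minimal sketch: inspect a de-identified NIfTI file with nibabel
# (`pip install nibabel`); the filename below is a hypothetical example.
import nibabel as nib

img = nib.load("sub-01_T1w.nii.gz")
data = img.get_fdata()            # voxel intensities as a float array
print(img.shape)                  # image dimensions
print(img.header.get_zooms())     # voxel sizes (mm) per dimension
```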

      This neuroimaging dataset has immense reuse potential and I recommend this Data Note for publication in GigaScience.

    2. Background

      Reviewer 2: Nicolas Costes

Jamadar et al. present a database of limited size, but of a rarity that amply justifies its interest. This is a combined dynamic FDG-PET (fPET) and fMRI study performed in three groups of 5 subjects, for whom three different modes of FDG administration were used: bolus, infusion, and bolus + infusion. The statistical analysis resulting from this study is also of limited scope due to the low residual degrees of freedom of the design, but it nevertheless makes it possible to confirm the expected characteristics of the shape of the PET kinetics; it confirms the superiority of the bolus + infusion protocol, which ensures maximum sensitivity for highlighting the neural circuits involved in the visual flickering task performed during acquisition. The interest of the study lies in the free provision of the whole dataset, which can be used, as argued, as a demonstrator for the development of methods for correcting, processing, and analyzing data. A multivariate analysis combining PET and fMRI that takes advantage of the simultaneous recording is not carried out: a simple voxel-wise GLM analysis makes it possible to expose notable differences between the three methods of FDG administration. However, the provision of the data opens the field for future exploitation. The fact that raw data before PET reconstruction are provided is relatively new and opens up the possibility of extending their exploitation to correction and reconstruction methods. Respecting the BIDS description format as much as possible is also a plus. These data are of undeniable interest to the community, and therefore the description of their content and the exhaustive provision of all the demographic and physical parameters of their acquisition deserve publication. The following remarks should be considered before publication:

- p7: in "[18F]-FDG", the 18 should be in superscript.
- p9: the raw PET data are in the original format exported from the Siemens console. Is there a distinction for list-mode files exceeding 4 GB, as is the case on the Siemens console? In which format will the raw data be provided?
- Figure 2A: please specify whether the plasma curves are corrected for 18F radioactivity decay at the time of injection.
- Figure 3: which correction was applied for Zcorr? FWE? FDR?
- Figure 4: how exactly is the « percent final change » computed? Is it an average over the active periods compared to the rest period? Is it computed from the beta regressor or directly on the signal change? In the latter case, on which interval? (One common convention is sketched after this review.)
- Figure 5: as the average across all protocols is provided in Fig. 3D to serve as a reference, could you also provide the average across protocols here?
- References: please review the references: check for incomplete entries (2, 8, and 21, for example) and uniformity of format, and provide DOIs, as is already done for the majority of them.
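On the Figure 4 question, one common convention (whether it is the one the authors used is exactly what the review asks) computes the change directly on the signal, as the mean over task frames relative to the mean over rest frames. A minimal sketch with illustrative toy data:

```python
# Minimal sketch of percent signal change computed directly on the signal;
# the trace, block timing, and effect size are toy values, not study data.
import numpy as np

def percent_signal_change(ts, task_mask, rest_mask):
    """Mean signal over task frames relative to mean over rest frames, in %."""
    rest = ts[rest_mask].mean()
    return 100.0 * (ts[task_mask].mean() - rest) / rest

rng = np.random.default_rng(0)
signal = rng.normal(100.0, 5.0, size=240)   # toy time-activity trace
signal[60:120] += 3.0                       # simulated task-evoked increase
task = np.zeros(240, dtype=bool)
task[60:120] = True
print(percent_signal_change(signal, task, ~task))   # ~3% by construction
```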

    3. Abstract

      Reviewer 1: Antoine Verger

      Review on "Data Note: Monash DaCRA fPET-fMRI: A DAtaset for Comparison of Radiotracer Administration for high temporal resolution functional FDG-PET" This article is an important contribution in its field. This study is an open access dataset, Monash DaCRA fPET-fMRI, which contrasts three radiotracer administration protocols for FDG-fPET: bolus, constant infusion and hybrid bolus/infusion. The Monash DaCRA fPET-fMRI dataset is the only publicly available dataset that allows comparison of radiotracer administration protocols for fPET-fMRI. Even if the provided dataset is useful for the scientific community, the validation part needs some explanations.

Comments:

- It is a shame that this dataset is not also available for resting-state fPET-fMRI images. Indeed, most studies are also performed at rest (connectivity in neurodegenerative disorders, for example) and would need such controls. Please discuss the opportunity to provide such databases.
- Was the administered FDG dose identical for all participants or adapted to body weight? Please give details.
- The authors should discuss the gender variability across the 3 groups. Metabolism and radiotracer uptake depend on gender. The authors should at least include this covariate in their group analyses.
- Of course, raw data are available. I nonetheless have one question: what is the interest of using PSF reconstruction followed by a Gaussian filter on the reconstructed images? Why use PSF on dynamic (noisy) PET images? Also, can the authors justify the 16-s frame duration used for reconstructing their images? Was it justified by any optimization?
- The authors further applied a filter of FWHM 12 mm after having previously reconstructed their images with a Gaussian filter. They should choose one of these two filters; if not, the smoothing of the PET images is too strong.
- For the validation set at the group level, is the PET intensity normalization based on proportional scaling? It is particularly important to understand how the authors obtained the grey-matter mean signal. (A minimal sketch of this convention follows below.)
- How was the grey-matter mean signal obtained? From a grey-matter MRI mask?
- Could the authors elaborate on how to get access to open-source reconstruction algorithms, particularly as the images were obtained with a Siemens Biograph? They mention STIR and SIRF: please elaborate. Is this feasible for anyone who has no access to a Siemens reconstruction algorithm? Is a Siemens-specific PSF reconstruction implemented?
- "there has not yet is not yet agreement in the best way to manage": please rephrase.
- Figure 1: please include the conventional MRI sequences at the beginning of the acquisition.
- Figure 2: please provide units for the signal intensity. It would also be more comfortable to provide elements to distinguish the task periods from the rest periods.
- Figure 2: is the grey-matter signal obtained for all of the grey matter or only for the occipital cortex? The authors should discuss the higher between-participant variability observed for the methods with a bolus. Is it linked to the different sex ratios between the protocols? Discuss why one participant in the infusion protocol has a truncated time-activity curve.
- Figure 3: the authors should explain the variability of the fMRI patterns in the GLM although the same protocol was performed. Is there an influence of the coupled glycolytic metabolism?
- Figure 3: how do the authors explain the absence of correlation with the task in the infusion protocol? (This was not observed in the three phases of the infusion protocol in Figure 5.)
- Figure 4: define how the increase in signal percentage was calculated. How was the grey matter normalized at the group level? Proportional scaling can be a source of false-positive abnormalities.
- Figure 5: can the authors display the changes in connectivity of the occipital area between the 3 phases for each protocol (by adding a supplemental part at the bottom of the figure)?
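Since the proportional-scaling and grey-matter-mean questions recur above, here is a minimal sketch of one common normalisation convention: dividing every voxel by the mean over a grey-matter mask. The arrays are toy stand-ins, and whether the authors actually used this scheme is precisely what the review asks them to clarify.

```python
# Minimal sketch of proportional scaling to a grey-matter mean; the volume
# and mask below are toy stand-ins for real PET data and an MRI-derived mask.
import numpy as np

def proportional_scale(pet, gm_mask, target=100.0):
    """Scale the whole volume so the grey-matter mean equals `target`."""
    return pet * (target / pet[gm_mask].mean())

rng = np.random.default_rng(1)
pet = rng.gamma(4.0, 25.0, size=(64, 64, 32))     # toy PET volume
gm_mask = pet > np.percentile(pet, 60)            # stand-in grey-matter mask
scaled = proportional_scale(pet, gm_mask)
print(scaled[gm_mask].mean())                     # ~100 by construction
```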

  5. Jan 2023
    1. Abstract

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.75) and has published the reviews under the same license. These are as follows.

      Reviewer 1. Ned Peel

      Is the source code available, and has an appropriate Open Source Initiative license (https://opensource.org/licenses) been assigned to the code?

      Scripts have been made publicly available on GitHub (https://www.github.com/phiweger/adaptive) under an OSI-approved BSD-3-Clause license.

      As Open Source Software are there guidelines on how to contribute, report issues or seek support on the code?

      No.

      Is the code executable?

      Unable to test

      Have any claims of performance been sufficiently tested and compared to other commonly-used packages?

      Not applicable.

      Additional Comments: Sent authors accompanying file with comments

      https://gigabyte-review.rivervalleytechnologies.com/journal/gx/download-files?YXJ0aWNsZT0zNjkmZmlsZT0xMzcmdHlwZT1nZW5lcmljJnZpZXc9ZmFsc2U~

      Reviewer 2. Julian Sommer

      Is the code executable?

The code used for analysis of the data has been published on the corresponding GitHub page. However, a link on this page for downloading data from a public database did not work at the time of testing (resource deleted). Also, while most parts of the code are executable, the data and figures generated by the code do not reproduce the figures from the publication.

      Is installation/deployment sufficiently outlined in the paper and documentation, and does it proceed as outlined?

Yes. The code in the GitHub repository can mostly be executed, but requires basic knowledge of coding in the programming languages used. However, for the data presented in this work, I do not see the need for more detailed instructions.

      Is the documentation provided clear and user friendly?

      Only partly

      Is there a clearly-stated list of dependencies, and is the core functionality of the software documented to a satisfactory level?

      Only partly. However, I do not see the need for further instructions.

      Is test data available, either included with the submission or openly available via cited third party sources (e.g. accession numbers, data DOIs)?

      The data is available from the stated accession numbers, but an additional data link on the github page does not work and might be necessary to test the complete code.

      Additional Comments:

The study compared three methods of Oxford Nanopore-based long-read sequencing for the detection of antibiotic-resistant bacterial pathogens. The authors used culture-based detection of carbapenem-resistant bacteria from a rectal swab with subsequent single-isolate sequencing. This technique was compared to an adaptive sequencing approach that uses a database of antibiotic resistance genes for adaptive sequence enrichment during the sequencing run, a capability of Oxford Nanopore sequencing. The underlying technology is a unique approach, made possible by Oxford Nanopore's real-time sequencing technology, and is of great interest for future applications in clinical microbiology diagnostics. This study is therefore of great importance for the field in general. As an additional method, the authors performed metagenome sequencing of the rectal swab without culture, which is a completely different technique with unique advantages and drawbacks compared to culture-based sequencing methods. This study is important for the development of real-time sequencing and adaptive sequencing for the detection of antibiotic resistance genes and, in the future, potentially other genes. It focusses on the adaptive sequencing approach, analysing in detail the factors influencing the performance of this new approach. The number of experiments is limited, as stated by the authors, but the data are nevertheless valuable for future projects. For further improvement, I have some suggestions for the manuscript. 1. The comparison of the three methods is quite complex and is one of the main goals of this paper, illustrating that low-cost sequencing devices (Flongle) can be used for the detection of antibiotic resistance genes when applying adaptive sequencing. The description of this comparison and Figure 1C are therefore essential for understanding the data behind this comparison of methods. However, Figure 1C is hard to read and the represented data are not easily accessible. To clarify, I suggest including additional information. Do the "Set size" and "Intersection size" axes describe absolute numbers of detected antibiotic resistance genes? This information could be included. To better connect the legend of Figure 1C with the text, the absolute numbers of detected genes could be added, supplementing the relative detection numbers already stated (lines 51-54, 137-142). Since this figure panel is essential for understanding, a larger version of this representation would be welcome. 2. Figure 2 is essential for the interpretation of the presented data on the variables influencing adaptive sequencing performance. a. Figure 2A is not easily accessible; in fact, I am not sure what information about the data is represented in this part of the figure (data throughput?). The figure legend does not explain what is shown. I suggest clarification or, if applicable, deletion of this subfigure, for increased readability of Figure 2B-D