supported by the project.
per the GitHub suggestion:
Any modern JavaScript framework, whether the more generic jQuery or interface frameworks like React, Backbone, or Ember, will provide cross-browser support.
Assessment:
I don't know of a good example of evals for this kind of maintenance grant.
The distinctions here are related to the purpose of the release and the prospect for ongoing maintenance.
Wondering whether section 2.2 of this more recent survey of NSF researchers, which covers the goals and understandings of the communities, adds anything to what's there in any real way: http://doi.acm.org/10.1145/2753524.2753533
Medium:
This remains an issue, but I'm thinking about the conversations around automated patching (what constitutes a good patch?), especially around deletions: if deleting some block is a correct solution but removes functionality, is it a good patch? Here, it's more about correctness in those areas that directly relate to the science question (an algorithm implementation, for example) or research goal, not expecting 100% bug-free code (that is unrealistic).
On the research software side (and I'm thinking about the push toward web-based systems), this middle ground may mean defining expectations of correctness for the implementation directly aimed at the science/research question (e.g., the data collection in a Zooniverse project), with a second set of expectations for the other parts of the application (bugs in the tutorial are not necessarily critical to the science, but they raise concerns for adoption and community engagement). This would affect the kinds of specs we might want to see or encourage.
[If anyone feels that mobile apps should be included, add comments as necessary to this section.]
We do need something more explicit about mobile apps, although it could also be folded into "follows platform guidelines."
Source code includes unit and integration tests.
Add text covering the inclusion of test data, configs for building databases, etc.
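To make the point concrete, a minimal sketch of a unit test with inline test data (the function name and values are hypothetical; a real project would import the function from its package and ship larger fixtures alongside the source):

```python
import unittest

def mean_concentration(values):
    # Hypothetical analysis function; a real project would import it from its package
    return sum(values) / len(values)

class TestMeanConcentration(unittest.TestCase):
    # Small inline test data; larger fixtures would ship alongside the source
    DATA = [0.1, 0.2, 0.3]

    def test_mean(self):
        self.assertAlmostEqual(mean_concentration(self.DATA), 0.2)

# Run the suite programmatically so the sketch is self-contained
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMeanConcentration)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```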
Provide an installer
Add a note about installer best practices for the platform: target directory, customization of environment variables, etc.
For web applications
We do not directly cover localization/internationalization, and the scope of that need is unclear. However, it might be worth adding (under the Unicode statement, or in its place) some text for it when localization/internationalization is a stated need of the project, or when the stated goals imply that need (if the software is for use in an area that is not predominantly English-speaking, is there any attempt at localization/internationalization?).
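If text gets added, it could point at the standard gettext pattern; a minimal sketch (the "myapp" domain and the locale/ directory are hypothetical names, and fallback=True means the app still runs with untranslated strings before any message catalogs are compiled):

```python
import gettext

# Sketch: load a Spanish catalog if one exists; otherwise fall back to
# the untranslated source strings rather than crashing.
t = gettext.translation("myapp", localedir="locale", languages=["es"], fallback=True)
_ = t.gettext

print(_("Data collection complete"))
```

Wrapping user-facing strings in `_()` from the start is cheap; retrofitting it later across a whole codebase is not.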
Developer Documentation
Missing header formatting here
The medium- to long-term sustainability of research software
A discussion of funding methods used in the open-source community: https://github.com/nayafia/lemonade-stand
34. Smith, A., Katz, D. & Niemeyer, K. FORCE11 Software Citation Principles (Draft). (2016).
This is temporary until the final version is released with a DOI (timeline unknown).
Preservation/Archiving
might be missing a reference or two?
research software journal such as SoftwareX
One that supports the things that make code reusable, rather than just blobs of Fortran, would be fantastic here. So, more examples?
Unlicensed code is unusable code
Unusable code is unsustainable code, i.e., think about ROI from the funder's side.
such as Docker
The Docker security white paper.
recommended
This could be a better statement, but it is added here to make sure containers are included.
as potential conflicts and errors grow accordingly
Meaning: do you really need 200 options for toggling a page among private, private-to-group, and public?
releasing runnable containers
does not remove the sysadmin requirements, but it is often misinterpreted as if it did. Point people to system resources for upgrades (so they know what needs updating), and give them an understanding of how to handle your app inside the container during upgrades.
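A container image sketch could make that upgrade guidance concrete (image names, versions, and paths below are illustrative only): pin the base image so admins know exactly what is being run and what to update, and state the upgrade path in the image itself.

```dockerfile
# Sketch: pin the base image so admins can see what needs updating
FROM python:3.12-slim

# Record the app version so `docker inspect` shows what is deployed
LABEL org.example.app.version="1.4.2"

COPY . /app
WORKDIR /app
RUN pip install --no-cache-dir -r requirements.txt

# Upgrades: rebuild against a newer base tag rather than patching inside a
# running container; data should live in a mounted volume, not in the image.
CMD ["python", "app.py"]
```

The point is not this particular Dockerfile but that the image documents its own contents and upgrade story instead of hiding them.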
OAuth2
possibly the worst link for a functional description.
For HTTPS
This seems picky, but Google downgrades sites without HTTPS in its search rankings.
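If we recommend HTTPS, a redirect snippet might help; a minimal sketch assuming an nginx front end (the server name is a placeholder):

```nginx
# Sketch: send all plain-HTTP traffic to HTTPS so the site is never
# served unencrypted
server {
    listen 80;
    server_name example.org;
    return 301 https://$host$request_uri;
}
```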
Any code or software
Could include something like Coverity (http://www.coverity.com/), contributed during the revision process; it doesn't fit the current structure.
User
needs a better word.
Both can be mitigated
Code isn't self-documenting, and tests don't capture project decisions in any real way.
benchmarking
If any HPC/GPU/cluster-related guidelines are missing here, add them.
persistent performance problems
effective monitoring
Note re: robots.txt and web service access: it is a blunt instrument in many ways, and relying solely on it to block traffic is indicative of load issues.
For web applications and services, provide information regarding uptime, methods for receiving outage notifications, and expected response times for support requests.
how complete do we need to make this list?
a clear process
Add something about describing the process (i.e., you get temporarily banned for one violation, etc.).
deployment requirements
So if I want to run awesome service framework and I know I need to handle XX requests per minute, I have a better idea whether the system can handle that now or whether I might have to modify it to meet my needs.
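A back-of-the-envelope sketch of that sizing question, using Little's law (the numbers below are hypothetical, standing in for the XX in the comment):

```python
# Little's law (L = lambda * W) gives a first estimate of how many requests
# are in flight at once, which can be compared against the concurrency the
# deployment can actually sustain.
target_rpm = 1200        # requests per minute the service must handle (assumed)
avg_latency_s = 0.25     # measured average response time in seconds (assumed)

rps = target_rpm / 60                # arrival rate, requests per second
concurrency = rps * avg_latency_s    # average number of in-flight requests

print(f"~{concurrency:.1f} concurrent requests at steady state")
```

Stated deployment requirements of this kind let a prospective adopter do this arithmetic before committing to the software.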
encourage sensitive information leaks
There must be a better way to say that bad interface design + private info made public != user error.
those requirements
Vulnerabilities in older browsers, e.g., IE.
also follow web accessibility
Something about choosing or noting institutional requirements (e.g., limited to a departmental site with provided templates).
community needs
Could make a related statement: the project includes an explicit effort to gather this information, or performed that gathering prior to the start.
For Semantic Services
This section needs a bit more context (especially the ontology/vocabulary service bits) and examples/references.
For web applications, include automated GUI testing.
Insert more context here: maybe what kinds of things are tested, the benefits, etc.
where possible
Probably not the correct statement re: data inputs, mocking, etc.
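If the statement gets revised toward mocking data inputs, a minimal sketch might help (the function and data source here are hypothetical): replace the real data source with a mock so the test needs no network or files.

```python
from unittest import mock

def summarize(fetch):
    # Hypothetical analysis step that depends on an external data source
    data = fetch()
    return sum(data) / len(data)

# Stand-in for the real data source; the test runs offline and deterministically
fake_fetch = mock.Mock(return_value=[1.0, 2.0, 3.0])
assert summarize(fake_fetch) == 2.0
fake_fetch.assert_called_once()
```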
development style
currently without a reasonable reference here.
runnable containers
The library crowd does this often; is there a need to expand?
further research
if it exists, i'm not having luck finding it.
overarching categories
data pipeline scripts vs some data repository code, for example.
Research Funding Lifecycles
Being revised for the top two levels (the overlap is in types, not funding; i.e., the bars don't indicate when a grant ends, but that a kind of grant could result in projects at different progression stages).
Binary distributions list all third-party
As applicable
Web site lists all third-party dependencies and external
Must this material be contained within the software but not required on the web site?
Project web site follows established guidelines for web accessibility.
Specify the guideline (e.g., WCAG).
States command names and syntax, says what menus to use, lists parameters and error messages exactly as they appear or should be typed; or provides similarly explicit instructions on how to apply product.
E.g., a cookbook.
Quality control information
clarify what QC is
Documentation is updated to reflect new errors
could be a FAQ - Y or N
Documentation is on the project
PID over URL
A link to the documentation is contained in the code
PID should be used instead of URL
Partitioned into sections for users
usability vs. maintainability?
English language descriptions of
don't understand this
API documentation is autogenerated
is this a value judgment
Code is documented (comment blocks, etc.)
This potentially provides an opportunity for misinformation propagation.
Project uses build tools (e.g. automake, etc.)
why so specific?
Binary and source packages reference thrid-party code (including
typo
Auto-updating can break other software. From a geoscientist's perspective these are valuable, but they shouldn't be a deal-breaker for assessment.
The product user experience (e.g, response times)
important for end users but difficult to achieve
Presented information and data
too subjective, lower priority
Opinions and perspectives are offered only as they relate to the mission of the project, and are clearly identified and put into context.
Part of a larger assessment framework but not necessary for software; would this preclude a forum or comments in GitHub issues?
Expertise of the originators of the project/product is represented
Part of a larger assessment framework but not necessary for software.
through group login via a single account.
no password sharing
There is a specification of the algorithm against which results can be compared.
Is this redundant with the Documentation section?
There are examples of outputs that follow the specification.
Where are these? In the documentation? Should this be part of testing?
The precision presented in answers to the user is appropriate for the product's algorithm and implementation.
Is this relevant to software? It seems relevant to data; not necessarily applicable to all.
Grouping
(These first few are mislabeled; they are not from SSI/ESIP.)
Does the algorithm use memory efficiently?
criteria statement?
Is the algorithm parallelized?
a principle
Rough order of time-complexity and memory-complexity
Very system-dependent (HPC).
Publications about the data cit
not about software
Review committees consider and recognize software contributions
not applicable as criteria.
Data for which the software
specific understanding of software - not applicable broadly.
Preservation/Archiving
Review along with the documentation section for duplication; make a statement here that documentation is archived with the code and meets those documentation standards.
Constraints or restrictions
see previous "constraints" comment
User-uploaded files are not executable, cannot completely fill a disk,
would like further clarification
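As one possible clarification, a sketch of what enforcing two of those properties might look like server-side (the quota value and function are hypothetical, and this is a sketch, not a complete upload defense):

```python
import os
import stat
import tempfile

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # hypothetical 10 MB per-file quota

def sanitize_upload(path):
    """Reject oversized files and clear execute bits (a sketch, not a full defense)."""
    if os.path.getsize(path) > MAX_UPLOAD_BYTES:
        raise ValueError("upload exceeds size quota")
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

# Demo on a throwaway file marked executable
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"data")
os.chmod(f.name, 0o755)
sanitize_upload(f.name)
print(oct(os.stat(f.name).st_mode & 0o777))
```

"Cannot completely fill a disk" additionally implies per-user or per-project quotas at the filesystem or application level, which this sketch does not cover.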
dynamic testing
Not sure what this means. Could add penetration testing.
visable
visible
adn
typo
Constraints or restrictions
Revise for intended use to make it clearer re: the license.
Reference guides or contextual
over-documentation
Redundant
Chuck it anyway.
uncon=mmon
typo
Installers are available for all relevant platforms (Windows, OSX, Linux, etc.)
Installers should be provided for the target platforms, and the target platforms should be indicated in the documentation.
When appropriate, installer integrates with external authentication sources (e.g., LDAP)
too specific
Installers allow user to select where to install software.
if needed
Testability
Add the linting concerns as a subgroup under Testability, and rename Testability.
Project in the wild
group under identity
Users are required to cite a boilerplate
also chuck this
if publishing papers
Update for new software publication practices (not necessarily a paper).
Analysability
Move any doc-related criteria to Documentation (and check Learnability)
Accessibility
Add a description re: accessibility for publication/reproducibility, and link to accessibility standards such as ARIA.
Copyright
Group with licensing.