Category
I really like the table format; I think it clarifies the presentation a lot.
Examples: Captchas, TODO.
This jumps out at me a little because it's the only one with "Examples."
I can check with some other folks on the team, but I think we're a bit hesitant to provide examples at the various levels, because context makes a difference to the rating. It's entirely possible that an Info finding on one app could be Low or Medium on another. At R7, I recall seeing an Info Disclosure finding (which is typically Informational/Low) come in at Critical.
A CAPTCHA finding, for example, could be Informational on one system with decent access controls, but higher on a system that doesn't enforce rate limiting or a password policy.
In this case, the risk rating is based on the following factors
Now that I think about it, I don't think the 1-5 scale is "secret" info, so providing it somewhere in here would give context to the numbers below (and to that graphic, if we can get it in there).
And if the link is useful, here's the GSheet doc where I built the graphic: https://docs.google.com/presentation/d/1Us6OsBmegJxHOtIbuUSYCs8ikm2VW6Rd9v-CG_BogRE/edit?usp=sharing
Description
This may be more report-focused thinking than anything else, but I wanted to include a sort of "Call to Action" in the version I wrote. Like, "we recommend fixing [Critical] findings as soon as possible."
(I don't think the ones I wrote are perfect, but it seemed like another good way to indicate Severity.)
should
"should" makes it sound a bit like they may not. As I understand it, Likelihood and Risk are required fields on the Finding report (1-5 scale, if I recall right).
The platform then calculates and plots the Risk based on those two numbers.
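To make the numbers concrete for myself, here's a rough sketch of my mental model of that calculation. To be clear, this is not the platform's actual code: the multiply-then-bucket formula and all the cutoffs are my guesses, picked only so that the Medium combinations quoted a bit further down (medium/medium, low/high, high/low) land in the same bucket.

```python
# Rough mental model only: NOT the platform's implementation.
# The multiply-then-bucket formula and the cutoffs are guesses.

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs use the 1-5 scale from the Finding report."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on the 1-5 scale")
    return likelihood * impact  # 1..25

def severity(score: int) -> str:
    # Illustrative cutoffs, chosen so medium/medium (3*3=9),
    # low/high (2*4=8), and high/low (4*2=8) all land in Medium.
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    if score >= 3:
        return "Low"
    return "Informational"

print(severity(risk_score(3, 3)))  # -> Medium
```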
by OWASP ,
This might be a format thing, but it looks like there's an extra space before the comma here.
Includes vulnerabilities that are:
* Medium risk, medium impact.
* Low risk, high impact.
* High risk, low impact.
Format-wise, this makes sense, but it does also look a little out of place being the only one with a bulleted list.
Such vulnerabilities do not appear
This phrasing makes it sound like Low-risk findings are rare. I'd say they're more common.
Exploitation is relatively rare, and they can be chained together with other Low- or higher-risk findings to become something worse.
Fixed Cobalt
I think the formatting might've gone funky here.
Slack
Mostly FYI, the Slack Channel also includes the Pentest Architect (PA) or Technical Project Manager (TPM) who's helping run the test.
PAs are often cybersecurity experts themselves, and know enough to do pentesting as needed. They also do everything a TPM does. TPMs have less technical experience, but are around to handle the more day-to-day issues that crop up: scheduling, getting credentials for stuff, communication, wrangling testers and customers. If they run into something they can't handle, they'll ping a PA.
They
Perhaps "Choose colleagues who can benefit..."?
For example, if industry, company, or national regulations require that you limit pentesters to residents of one or more countries, you can do so here
Copying this comment over from the Accounts for Pentesters page:
I don't think we necessarily want to add this here, so mostly for your behind-the-scenes knowledge, we do sometimes get requests for testers in specific geographic areas or with specific certifications (like CREST). I don't think we want to advertise that it's something we attempt to accommodate (because we don't want people to start asking more than they already do).
Currently, these requests add to the SLA for starting staffing, and require a fair amount of work on the backend to find the people required.
(I think this more or less covers it, but I would point Kathy to this specific section because it's something she's been working on lately.)
If you define your environment differently, let us know. Add that information in comments.
Mostly a reaction to this screenshot: it looks like environments might be shoehorned into one of those categories. Might it be worth adding "choose the one that's closest and add more info," or something along those lines?
Requirements to access the target environment:
(Same comment as on the regulations line above: geographic and certification requests come up here too, and they add to the staffing SLA and backend work.)
you should share any or all special concerns about the asset.
I'd confer with CS on this, but my understanding is that this can be a pain point between the PenOps team and CS, when these notes aren't well communicated.
I'm not sure what would be a good solution here, other than perhaps suggesting that the customer reach out to their CSM? Communicating early and often could help.
DoS tests.
This might just be an interesting side note, but PCI testing specifically says testers shouldn't do DoS testing, mostly because if the system goes down during a test, that's the customer's system down and losing money.
You would not share a user account with someone who is trying to break into your system. However, best practice for pentests assume that a black-hat hacker has somehow gained access to a valid account.
Depending on the methodology, we sometimes do black-box testing, or start with black-box testing then work to grey-box. I would confirm wording with Rogerio or someone, but I think it could be worth noting that we usually want a test account, but we'll do testing without it first to see what we can see.
This is also often a pain point: if the customer hasn't given us credentials by the start of the test, then once we've seen all we can without creds, testing kind of hits a wall. It's definitely worth mentioning that we want these credentials provisioned before testing starts.
Pentest Methodology.
Mostly flagging in case you didn't mean for it to: this link 404'ed on me.
If you want a pentest report, you must set up a test of at least two (2) credits. For example, if you want a test report for a Small asset, specify Standard coverage or higher.
Excellent. Yeah, this is something that we've implemented fairly recently, and it's mostly helpful.
Note The number of credits associated with a pentest size and coverage is subject to change. While we do our best to keep this documentation up to date, the UI is the authoritative source of truth for the number of required credits.
There is a fair amount of talk, and a lot of pain points, about how credits are defined. So this is definitely going to be a complicated page, and likely one that needs a lot of review from both CS and PenOps.
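For what it's worth, here's how I'd sanity-check the report rule while writing about it. The credit values below are placeholders I made up, not real pricing (per the note, the UI is the source of truth); the only real constraint encoded is the two-credit minimum for a report.

```python
# Placeholder numbers only: the UI is the authoritative source for
# credits per (size, coverage). The one real rule encoded here is
# the two-credit minimum for getting a pentest report.
REPORT_MINIMUM_CREDITS = 2

CREDITS = {
    ("Small", "Standard"): 2,  # the doc's example: meets the minimum
    # ...other (size, coverage) combos would be filled in from the UI
}

def report_available(size: str, coverage: str) -> bool:
    """True if this pentest configuration qualifies for a report."""
    return CREDITS[(size, coverage)] >= REPORT_MINIMUM_CREDITS

print(report_available("Small", "Standard"))  # -> True
```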
Cobalt subdivides the number of User Roles and Dynamic Pages into the following categories:
I recall older versions of this page being a topic of conversation among the PenOps team, but I don't recall how we felt about their accuracy. I imagine this will also be one that they'll want to review (I'm not sure anyone has recently).
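In case it helps while reviewing, here's the shape of the categorization as I understand it. The thresholds below are invented purely for illustration; the real cutoffs need to come from this page once PenOps has verified them.

```python
# Hypothetical thresholds: invented for illustration, NOT the real
# cutoffs. The actual User Role / Dynamic Page buckets should come
# from this page after PenOps review.

def asset_size(user_roles: int, dynamic_pages: int) -> str:
    """Bucket a web app by its User Role and Dynamic Page counts."""
    if user_roles <= 2 and dynamic_pages <= 25:
        return "Small"
    if user_roles <= 5 and dynamic_pages <= 100:
        return "Medium"
    return "Large"

print(asset_size(user_roles=3, dynamic_pages=40))  # -> Medium
```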
Includes APIs that supply data to the (Web) app.
Yep. I believe this is accurate, although again I'd defer to a more technical person.
Assets
I think one of the pain points on our end is that assets aren't always well-defined. It's an industry term that can mean slightly different things to different people, so it might not be a terrible idea to include a specific definition of Asset, framed as: "When we say Asset, we mean one web application, or one API, or one group of External Network hosts" (or segments of a network?).
I think this is ultimately a good question for Rogerio or the Pentest Architects to at least skim over too. The more specific they are up front, I hope, the smoother it is at the end.
Get Started:
Just a suggestion, but if there's a way to put a border around screenshots, they don't get lost in the white space.
(I've mostly used 1px black borders in the past)
subscription
I think we'd describe it as "credits" rather than "a subscription"?
I'm also not 100% sure how far along they are, but I think the CS Self-Service team is working on a way for people to sign up for the platform and poke around demo spaces before they buy. Might be worth pinging Ali to see where that is (or if I've completely misremembered something).
(OWASP)
Specifically we talk about the OWASP Top 10 and OWASP Application Security Verification Standard (ASVS).
We also use the acronym OSSTMM (specifically v3, I think).
from web apps to internal networks.
This may just be for your edification, but we basically have 6 methodologies (so six things we test): Internal Networks, External Networks, Web Apps, APIs, Mobile Apps, and Cloud Configs.
We also do combos (Web App + API is very common).
Access
This might just be me with Friday-brain, but when I saw these steps, I briefly thought they were going to provide more info on the graphic above. The list is a step short, but at a quick glance there are 6-7 bullets on the clock and 6-7 steps.