874 Matching Annotations
  1. Dec 2020
    1. Just realised this doesn't actually work. If store is just something exported by the app, there's no way to prevent leakage. Instead, it needs to be tied to rendering, which means we need to use the context API. Sapper needs to provide a top level component that sets the store as context for the rest of the app. You would therefore only be able to access it during initialisation, which means you couldn't do it inside a setTimeout and get someone else's session by accident:
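
      A rough sketch of the context-based approach (a sketch only; the key name and function shapes are illustrative, not Sapper's actual API):

      ```ts
      // Root component provided by the framework: sets the per-request
      // session store as context during initialisation.
      import { setContext, getContext } from 'svelte';
      import { writable } from 'svelte/store';

      export function provideSession(sessionData: unknown) {
        // setContext only works during component initialisation
        setContext('session', writable(sessionData));
      }

      // In any descendant component, also during initialisation:
      export function useSession() {
        // Because getContext must also run during initialisation, you can't
        // call it later (e.g. inside a setTimeout) and accidentally receive
        // a different request's session.
        return getContext('session');
      }
      ```
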
    1. Go is introducing publicly-visible API changes related to these issues in an upcoming major release, which risks making the vulnerabilities public without explicit public disclosure. 

      Whaaat ?!

    1. The only completely secure system is the one that doesn't exist in the first place.
  2. Nov 2020
    1. How can I create Internet ingress and egress security patterns for AWS

      [[How can I create Internet ingress and egress security patterns]]

    1. to be listed on Mastodon’s official site, an instance has to agree to follow the Mastodon Server Covenant which lays out commitments to “actively moderat[e] against racism, sexism, homophobia and transphobia”, have daily backups, grant more than one person emergency access, and notify people three months in advance of potential closure. These indirect methods are meant to ensure that most people who encounter a platform have a safe experience, even without the advantages of centralization.

      Some of these baseline protections are certainly a good idea. The advance notice of shutdown and the daily backups are particularly valuable.

      I'd not known of the Mastodon Server Covenant before.

    1. @hypothes.is

      Be careful: because you store the page title, when an annotation is made on a personal website that stores personal information in its titles, hypothes.is users can retrieve that information.

      For example, here I can see that jbarnett's mail is jeankap@gmail.com.

      And I can see his mail's title.

      I will look for a report option; that would be better than this current annotation.

    1. This is addressing a security issue; and the associated threat model is "as an attacker, I know that you are going to do FROM ubuntu and then RUN apt-get update in your build, so I'm going to trick you into pulling an image that _pretends_ to be the result of ubuntu + apt-get update so that next time you build, you will end up using my fake image as a cache, instead of the legit one." With that in mind, we can start thinking about an alternate solution that doesn't compromise security.
    2. So let's say we pull down evil/foo which is FROM ubuntu followed by RUN apt-get update except with a small surprise included in the image. Subsequent builds using those same commands will be compromised.
    3. at least in the meantime allow users to bypass the security protections in situations where they are confident of the source of the layers
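
      One way to act on this today, assuming you control the build invocation: skip the cache entirely for sensitive builds with docker build --no-cache, or pin the base image by digest in the Dockerfile so a spoofed tag cannot be substituted. A sketch, with the digest left as a placeholder:

      ```dockerfile
      # Pinning by digest: the content hash must match, so a fake image
      # published under the "ubuntu" tag cannot poison the cache.
      # (The digest below is a placeholder, not a real ubuntu digest.)
      FROM ubuntu@sha256:<digest-you-verified>
      RUN apt-get update
      ```
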
    1. If your Svelte components contain <style> tags, by default the compiler will add JavaScript that injects those styles into the page when the component is rendered. That's not ideal, because it adds weight to your JavaScript, prevents styles from being fetched in parallel with your code, and can even cause CSP violations. A better option is to extract the CSS into a separate file. Using the emitCss option as shown below would cause a virtual CSS file to be emitted for each Svelte component. The resulting file is then imported by the component, thus following the standard Webpack compilation flow.
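
      A sketch of the configuration being described, assuming a typical Webpack setup with mini-css-extract-plugin handling the emitted virtual CSS files:

      ```ts
      // webpack.config.js (sketch)
      const MiniCssExtractPlugin = require('mini-css-extract-plugin');

      module.exports = {
        module: {
          rules: [
            {
              test: /\.svelte$/,
              use: { loader: 'svelte-loader', options: { emitCss: true } },
            },
            {
              // handles the virtual .css file each component imports
              test: /\.css$/,
              use: [MiniCssExtractPlugin.loader, 'css-loader'],
            },
          ],
        },
        plugins: [new MiniCssExtractPlugin()],
      };
      ```
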
    1. Now let me get back to your question. The FBI presents its conflict with Apple over locked phones as a case as of privacy versus security. Yes, smartphones carry a lot of personal data—photos, texts, email, and the like. But they also carry business and account information; keeping that secure is really important. The problem is that if you make it easier for law enforcement to access a locked device, you also make it easier for a bad actor—a criminal, a hacker, a determined nation-state—to do so as well. And that's why this is a security vs. security issue.

      The debate should not be framed as privacy-vs-security because when you make it easier for law enforcement to access a locked device, you also make it easier for bad actors to do so as well. Thus it is a security-vs-security issue.

    1. Barr makes the point that this is about “consumer cybersecurity” and not “nuclear launch codes.” This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There’s no longer a difference between consumer tech and government tech—it’s all the same tech.

      The US government's defence for wanting to introduce backdoors into consumer encryption is that in doing so they would not be weakening the encryption for, say, nuclear launch codes.

      Schneier holds that this distinction between government and consumer tech no longer exists. Weakening consumer tech amounts to weakening government tech. Therefore it's not worth doing.

  3. Oct 2020
    1. Malicious code pushed to your .gitlab-ci.yml file could compromise your variables and send them to a third party server regardless of the masked setting. If the pipeline runs on a protected branch or protected tag, it could also compromise protected variables.
    1. Australia's Cyber Security Strategy: a $1.66 billion cyber security package. The AFP gets $88 million; $66 million goes to critical infrastructure organisations to assess their networks for vulnerabilities; and the ASD gets $1.35 billion (over a decade) to recruit 500 officers.

      Reasons Dutton gives for package:

      • child exploitation
      • criminals scamming, ransomware
      • foreign governments taking health data and potential attacks to critical infrastructure

      The definition of critical infrastructure is expanded, and organisations covered by it are subject to obligations to improve their defences.

      Supporting cyber resilience of SMEs through information, training, and services to make them more secure.

    1. Could you please explain why it is a vulnerability for an attacker to know the user names on a system? Currently External Identity Providers are wildly popular, meaning that user names are personal emails. My Amazon account is my email address, my Azure account is my email address, and both sites manage highly valuable information that could take a whole company out of business... and yet, they show no concern about hiding user names...

      Good question: why do the big players like Azure not seem to worry? Probably Microsoft, Amazon, Google, etc. don't either; in fact, any email provider. So once someone knows your email address, you are (more) vulnerable to someone trying to hack your account. Makes me wonder if the severity of this problem is overrated.

      Irony: He (using his full real name) posts:

      1. Information about which account ("my Azure account is my email address"), and
      2. How high-value of a target he would be ("both sites manage highly valuable information that could take a whole company out of business...")

      thus making himself more of a target. (I hope he does not get targeted though.)

    2. Another thing you can do is to add pain to the second part of it. Attackers want the list of valid usernames, so they can then try to guess or brute force the password. You can put protections in place with that as well, whether they are lockouts or multi-factor authentication, so even if they have a valid username, it's much harder to gain access.
    3. That is certainly a good use-case. One thing you can do is to require something other than a user-chosen string as a username, something like an email address, which should be unique. Another thing you could do, and I admit this is not user-friendly at all, to let them sign up with that user name, but send the user an email letting them know that the username is already used. It still indicates a valid username, but adds a lot of overhead to the process of enumeration.
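
      A sketch of that second idea, with hypothetical helper names: the HTTP response is identical whether or not the address is taken, and the email does the disambiguation out of band:

      ```ts
      // Hypothetical helpers; any user store and mailer would do.
      declare function findUser(email: string): Promise<object | null>;
      declare function createPendingUser(email: string, password: string): Promise<void>;
      declare function sendMail(to: string, body: string): Promise<void>;

      async function signUp(email: string, password: string): Promise<void> {
        if (await findUser(email)) {
          // Tell the owner of the address, not the requester.
          await sendMail(email, 'This address already has an account. Was this you?');
        } else {
          await createPendingUser(email, password);
          await sendMail(email, 'Confirm your new account.');
        }
        // Caller renders the same page either way: "Check your email to continue."
      }
      ```
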
    1. How would you remediate this? One way could be to have the application pad the responses with a random amount of time, throwing off the noticeable difference.
    2. Sometimes, user enumeration is not as simple as a server responding with text on the screen. It can also be based on how long it takes a server to respond. A server may take one amount of time to respond for a valid username and a very different (usually longer) amount of time for an invalid username.
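
      A sketch of the padding idea: hold every response until a fixed floor (plus jitter) has elapsed, so valid and invalid usernames take the same time. checkCredentials is a hypothetical stand-in for the real lookup:

      ```ts
      declare function checkCredentials(user: string, pass: string): Promise<boolean>;

      async function handleLogin(user: string, pass: string): Promise<boolean> {
        const started = Date.now();
        const ok = await checkCredentials(user, pass);
        // Pad to a 500 ms floor plus random jitter, so response time
        // no longer distinguishes valid from invalid usernames.
        const floor = 500 + Math.random() * 250;
        const remaining = Math.max(0, floor - (Date.now() - started));
        await new Promise((resolve) => setTimeout(resolve, remaining));
        return ok;
      }
      ```
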
    1. By default all content inside template strings is escaped. This is great for strings, but not ideal if you want to insert HTML that's been returned from another function (for example: a markdown renderer). Use nanohtml/raw to interpolate HTML directly.
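
      For example (markdownRenderer is a hypothetical stand-in for whatever produces the trusted HTML):

      ```ts
      declare function markdownRenderer(src: string): string; // hypothetical
      const html = require('nanohtml');
      const raw = require('nanohtml/raw');

      const rendered = markdownRenderer('# hello');
      const escaped = html`<body>${rendered}</body>`;      // default: escaped as text
      const asHtml = html`<body>${raw(rendered)}</body>`;  // raw(): inserted as HTML
      ```
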
    1. strict-origin: Only send the origin of the document as the referrer when the protocol security level stays the same (e.g. HTTPS→HTTPS), but don't send it to a less secure destination (e.g. HTTPS→HTTP).
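
      Set as a plain response header (or the equivalent meta tag):

      ```http
      Referrer-Policy: strict-origin
      ```
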
  4. Sep 2020
    1. There are clever ways around trackers

      I also recommend switching to Firefox, getting the Facebook Container extension and the Privacy Badger extension!

    2. These creeping changes help us forget how important our privacy is and miss that it’s being eroded.

      This is important: we are normalizing the fact that our privacy is being taken away slowly, update after update.

  5. Aug 2020
  6. Jul 2020
    1. While stylesheets can be reworked relatively easily with AMP by inlining the CSS, the same is not true for JavaScript. The tag 'script' is disallowed except in specific forms. In general, scripts in AMP are only allowed if they follow two major requirements: All JavaScript must be asynchronous (i.e., include the async attribute in the script tag). The JavaScript is for the AMP library and for any AMP components on the page. This effectively rules out the use of all user-generated/third-party JavaScript in AMP except as noted below.
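
      Concretely, the only script tags an AMP page carries are the runtime and component scripts, all async (standard AMP boilerplate):

      ```html
      <script async src="https://cdn.ampproject.org/v0.js"></script>
      <script async custom-element="amp-carousel"
              src="https://cdn.ampproject.org/v0/amp-carousel-0.1.js"></script>
      ```
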
    1. If you have worked with emails before, the idea of placing a script into an email may set off alarm bells in your head! Rest assured, email providers who support AMP emails enforce fierce security checks that only allow vetted AMP scripts to run in their clients. This enables dynamic and interactive features to run directly in the recipients' mailboxes with no security vulnerabilities! Read more about the required markup for AMP Emails here.
    1. Determine if the person using my computer is me by training an ML model with data on how I use my computer. This is a project for the Intrusion Detection Systems course at Columbia University.
    1. It's possible for a document to match more than one match statement. In the case where multiple allow expressions match a request, the access is allowed if any of the conditions is true

      overlapping match statements

    2. If you want rules to apply to an arbitrarily deep hierarchy, use the recursive wildcard syntax, {name=**}
    3. Security rules apply only at the matched path, so the access controls defined on the cities collection do not apply to the landmarks subcollection. Instead, write explicit rules to control access to subcollections
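
      A sketch tying these three together, using the docs' cities/landmarks example paths:

      ```
      rules_version = '2';
      service cloud.firestore {
        match /databases/{database}/documents {
          // Applies to city documents only, NOT to their subcollections.
          match /cities/{city} {
            allow read: if true;
          }
          // Subcollections need their own explicit rule.
          match /cities/{city}/landmarks/{landmark} {
            allow read: if request.auth != null;
          }
          // Recursive wildcard: cities and everything beneath them.
          // Statements may overlap; access is allowed if ANY allow matches.
          match /cities/{document=**} {
            allow read: if request.auth != null;
          }
        }
      }
      ```
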
    1. only the @firebase/testing Node.js module supports mocking auth in Security Rules, making unit tests much easier
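
      A minimal sketch of what that looks like (project ID and user are illustrative; runs against the local emulator):

      ```ts
      import * as firebase from '@firebase/testing';

      async function run() {
        // Mocked auth: rules evaluate with request.auth.uid === 'alice'.
        const app = firebase.initializeTestApp({
          projectId: 'demo-rules-test',
          auth: { uid: 'alice', email: 'alice@example.com' },
        });
        const db = app.firestore();

        // This read is evaluated against your Security Rules as "alice".
        await firebase.assertSucceeds(db.collection('cities').doc('SF').get());
      }
      ```
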
  7. Jun 2020
    1. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.
    1. See the documentation for HTML::Pipeline’s SanitizationFilter class for the list of allowed HTML tags and attributes. In addition to the default SanitizationFilter allowlist, GitLab allows span, abbr, details and summary elements.
    1. As billions of conversations transition online over the coming weeks and months, the widespread adoption of end-to-end encryption has never been more vital to national security and to the privacy of citizens in countries around the world.
    2. Proponents of this bill are quick to claim that end-to-end encryption isn’t the target. These arguments are disingenuous both because of the way that the bill is structured and the people who are involved.
    3. For a political body that devotes a lot of attention to national security, the implicit threat of revoking Section 230 protection from organizations that implement end-to-end encryption is both troubling and confusing. Signal is recommended by the United States military. It is routinely used by senators and their staff. American allies in the EU Commission are Signal users too. End-to-end encryption is fundamental to the safety, security, and privacy of conversations worldwide.
    4. The EARN IT act turns Section 230 protection into a hypocritical bargaining chip. At a high level, what the bill proposes is a system where companies have to earn Section 230 protection by following a set of designed-by-committee “best practices” that are extraordinarily unlikely to allow end-to-end encryption. Anyone who doesn’t comply with these recommendations will lose their Section 230 protection.
    1. Matrix provides state-of-the-art end-to-end-encryption via the Olm and Megolm cryptographic ratchets. This ensures that only the intended recipients can ever decrypt your messages, while warning if any unexpected devices are added to the conversation.
    1. More than two billion users exchange an unimaginable volume of end-to-end encrypted messages on WhatsApp each day. And unless an endpoint (phone) is compromised, or those chats are backed-up into accessible cloud platforms, neither owner Facebook nor law enforcement has a copy of those encryption keys.
    1. The industry argues that encryption backdoors will result in a weakening of end device security, making it more likely they will be compromised.
    2. As uber-secure messaging platform Signal has warned, “Signal is recommended by the United States military. It is routinely used by senators and their staff. American allies in the EU Commission are Signal users too. End-to-end encryption is fundamental to the safety, security, and privacy of conversations worldwide.”
    3. “End-to-end encryption,” NSA says, “is encrypted all the way from sender to recipient(s) without being intelligible to servers or other services along the way... Only the originator of the message and the intended recipients should be able to see the unencrypted content. Strong end-to-end encryption is dependent on keys being distributed carefully.” So, no backdoors then.
    4. On April 24, the U.S. National Security Agency published an advisory document on the security of popular messaging and video conferencing platforms. The NSA document “provides a snapshot of best practices,” it says, “coordinated with the Department of Homeland Security.” The NSA goes on to say that it “provides simple, actionable, considerations for individual government users—allowing its workforce to operate remotely using personal devices when deemed to be in the best interests of the health and welfare of its workforce and the nation.” Again somewhat awkwardly, the NSA awarded top marks to WhatsApp, Wickr and Signal, the three platforms that are the strongest advocates of end-to-end message encryption. Just to emphasize the point, the first criterion against which NSA marked the various platforms was, you guessed it, end-to-end encryption.
    5. And while all major tech platforms deploying end-to-end encryption argue against weakening their security, Facebook has become the champion-in-chief fighting against government moves, supported by Apple and others.
    6. EFF describes this as “a major threat,” warning that “the privacy and security of all users will suffer if U.S. law enforcement achieves its dream of breaking encryption.”
    7. Once the platforms introduce backdoors, those arguing against such a move say, bad guys will inevitably steal the keys. Lawmakers have been clever. No mention of backdoors at all in the proposed legislation or the need to break encryption. If you transmit illegal or dangerous content, they argue, you will be held responsible. You decide how to do that. Clearly there are no options other than some form of backdoor.
    8. While this debate has been raging for a year, the current “EARN-IT” bill working its way through the U.S. legislative process is the biggest test yet for the survival of end-to-end encryption in its current form. In short, this would enforce best practices on the industry to “prevent, reduce and respond to” illicit material. There is no way they can do that without breaking their own encryption. QED.
    9. Governments led by the U.S., U.K. and Australia are battling the industry to open up “warrant-proof” encryption to law enforcement agencies. The industry argues this will weaken security for all users around the world. The debate has polarized opinion and is intensifying.
    1. One thing that would certainly be a game-changer would be some form of standardized RCS end-to-end encryption that allows secure messages to be sent outside Google Messages.
    2. You should not use a messaging platform that is not end-to-end encrypted, it really is as simple as that.
    3. Such is the security of this architecture, that it has prompted law enforcement agencies around the world to complain that they now cannot access a user’s messages, even with a warrant. There is no backdoor—the only option is to compromise one of the endpoints and access messages in their decrypted state.
    4. The answer, of course, is end-to-end encryption. The way this works is to remove any “man-in-the-middle” vulnerabilities by encrypting messages from endpoint to endpoint, with only the sender and recipient holding the decryption key. This level of messaging security was pushed into the mass-market by WhatsApp, and has now become a standard feature of every other decent platform.
    5. The issue, though—and it’s a big one, is that the SMS infrastructure is inherently insecure, lending itself to so-called “man-in-the-middle attacks.” Messages run through network data centres, everything can be seen—security is basic at best, and you are vulnerable to local carrier interception when travelling.
    1. Despite its opposition, EARN-IT is the clearest threat yet to end-to-end encryption, given this clever twist in pushing the onus onto the platforms to avoid transmitting illegal content, rather than mandating a lawful interception approach.
    2. Putting that risk more simply, the EARN-IT bill is cleverly leaving it to the tech platforms to keep themselves safe—there would be little option other than some form of access to encrypted content, even though it would not be specified in law. Sophos describes this as “the backdoor virus that law enforcement agencies have been trying to inflict on encryption for years.”
    3. On the encryption front, HRW echoes others that have argued vehemently against the proposals—that weakened encryption will “endanger all people who rely on encryption for safety and security—once one government enjoys special access, so too will rights-abusing governments and criminal hackers.” Universal access to encryption “enables everyone, from children attending school online to journalists and whistleblowers, to exercise their rights without fear of retribution.”
    4. Lawmakers and security agencies want legally warranted access to encrypted data. That can’t happen without some form of backdoor in those end-to-end systems.
    1. Just like Blackberry, WhatsApp has claimed that they are end to end encrypted but in fact that is not true. WhatsApp (and Blackberry) decrypt all your texts on their servers and they can read everything you say to anyone and everyone. They (and Blackberry) then re-encrypt your messages, to send them to the recipient, so that your messages look like they were encrypted the entire time, when in fact they were not.
    2. The only messaging app that has been proven, by an independent authoritative agency, is Apple’s Messages app (which uses Apple’s iMessage protocol that is truly end to end encrypted, Apple cannot read any of your texts which means that no one can read any of your texts)
    1. When you make a call using Signal, it will generate a two-word secret code on both the profiles. You will speak the first word and the recipient will check it. Then he will speak the second word and you can check it on your end. If both the words match, the call has not been intercepted and is connected to the correct profile.
    1. If the EU is set to mandate encryption backdoors to enable law enforcement to pursue bad actors on social media, and at the same time intends to continue to pursue the platforms for alleged bad practices, then entrusting their diplomatic comms to those platforms, while forcing them to have the tools in place to break encryption as needed would seem a bad idea.
    2. First, the recognition that sensitive information needs to be transmitted securely over instant messaging platforms plays into the hands of the privacy advocates who are against backdoors in the end-to-end encryption used on WhatsApp, Signal, Wickr, iMessage and others. The core argument from the privacy lobby is that a backdoor will almost certainly be exploited by bad actors. Clearly, the EU (and others) would not risk their own comms with such a vulnerability.
    3. Although WhatsApp has become the messaging platform of choice for many politicians and civil servants worldwide, there have been enough stories of potential vulnerabilities and hacks to spook people into adopting something else.
    1. Security agencies use anti-terror efforts to justify planting backdoors. The problem is that such backdoors can also be used by criminals and authoritarian governments. No wonder dictators seem to love WhatsApp: its lack of security allows them to spy on their own people, so WhatsApp continues to be freely available in places like Russia or Iran, where Telegram is banned by the authorities
  8. May 2020
    1. using SSH is likely the best approach because personal access tokens have account level access

      personal access tokens have account level access ... which is more access (possibly access to 10s of unrelated projects or even groups) than we'd like to give to our deploy script!

    1. Before enabling this, you should ensure jobs are visible to team members only. You should also erase all generated job logs before making them visible again.
    1. There is a serious weakness in DSA (which extends to ECDSA) that has been exploited in several real world systems (including Android Bitcoin wallets and the PS3); the signature algorithm relies on quality randomness (bits that are indistinguishable from random); once the PRNG enters a predictable state, signatures may leak private keys. Systems that use ECDSA must be aware of this issue, and pay particular attention to their PRNG.
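
      The leak is easy to see on paper: two signatures made with the same nonce expose both the nonce and the private key. With message hashes z₁, z₂, signatures (r, s₁) and (r, s₂), nonce k, and private key d:

      ```latex
      % ECDSA: s_i = k^{-1}(z_i + r d) \bmod n, so with a repeated nonce k:
      \begin{aligned}
        s_1 - s_2 &= k^{-1}(z_1 - z_2) \pmod{n} \\
        k &= \frac{z_1 - z_2}{s_1 - s_2} \pmod{n} \\
        d &= \frac{s_1 k - z_1}{r} \pmod{n}
      \end{aligned}
      ```
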
    1. GanttLab does not store any information: everything runs in your browser. Under the hood, we use localForage to remember the state of the application, which enables great user experience without any security risk. Secured API requests are initiated in your computer's browser, hence coming straight from your local network. Those requests are going directly to the data source you selected (your GitHub or GitLab account) without ever transiting through any other server.
    1. Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images. Enabling DCT is a bit like applying a “filter” to your registry. Consumers “see” only signed image tags and the less desirable, unsigned image tags are “invisible” to them.
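
      Enabling DCT is a single environment variable (standard Docker CLI behaviour):

      ```sh
      # Opt this shell into Docker Content Trust
      export DOCKER_CONTENT_TRUST=1

      # Pulls now fail unless the tag has valid signature data
      docker pull alpine:latest
      ```
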
    1. In terms of their border dispute, India and China are struggling with what game theorists refer to as a “commitment problem.” Meaning: a commitment problem arises when two states, who would be better off in the present if they consented to a mutually beneficial agreement, are unable to resolve their disputes due to different expectations of future strengths, and a consequent inability to commit to future bargaining power or a division of benefits.
    1. Allowing port 80 doesn’t introduce a larger attack surface on your server, because requests on port 80 are generally served by the same software that runs on port 443.
    1. CodeGuard relies upon industry best practices to protect customers’ data. All backups and passwords are encrypted, secure connections (SFTP/SSH/SSL) are utilized if possible, and annual vulnerability testing is conducted by an independent agency. To date, there has not been a data breach or successful hack or attack upon CodeGuard.
    1. However, it's possible to enforce both a whitelist and nonces with 'strict-dynamic' by setting two policies:
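
      A sketch of the two-policy technique (nonce value illustrative). A script must satisfy both headers, so trust propagated by 'strict-dynamic' is still constrained by the whitelist:

      ```http
      Content-Security-Policy: script-src 'nonce-r4nd0m' 'strict-dynamic'
      Content-Security-Policy: script-src 'nonce-r4nd0m' https://cdn.example.com
      ```
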
    1. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.
    1. sadness.js will not load, however, as document.write() produces script elements which are "parser-inserted".
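
      The distinction in code, with the happiness.js/sadness.js names from the quoted example:

      ```ts
      // Allowed under 'strict-dynamic': a script element created by an
      // already-trusted (nonced) script is not "parser-inserted".
      const s = document.createElement('script');
      s.src = 'happiness.js';
      document.head.appendChild(s);

      // Blocked: document.write() produces parser-inserted script elements.
      document.write('<script src="sadness.js"><\/script>');
      ```
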
    1. If a user clicks on that button, the onclick script will not execute. This is because the script did not immediately execute and code not interpreted until the click event occurs is not considered part of the content script, so the CSP of the page (not of the extension) restricts its behavior. And since that CSP does not specify unsafe-inline, the inline event handler is blocked.
    1. I will need to find a workaround for one of my private extensions that controls devices in my home network, and its source code cannot be uploaded to Mozilla because of my and my family's privacy.
    2. While there are security benefits to disallowing unsigned extensions by default, it is not clear why there is no option to turn off this behavior, perhaps by making it configurable only with administrator rights.
    3. It would be best to offer an official way to allow installing local, unsigned extensions, and make the option configurable only by root, while also showing appropriate warnings about the potential risks of installing unsigned extensions.
    4. They don't have to host the extension on their website, but it's absolutely and utterly unacceptable for them to interfere with me choosing to come to github and install it.
    5. I appreciate the vigilance, but it would be even better to actually publish a technical reasoning for why do you folks believe Firefox is above the device owner, and the root user, and why there should be no possibility through any means and configuration protections to enable users to run their own code in the release version of Firefox.
    6. I do not understand what is the threat model of not allowing the root user to configure Firefox, since malware could just replace the entire Firefox binary.
    7. We must consider introducing sensible default options in Firefox, while also educating users and allowing them to override certain features, instead of placing marginal security benefits above user liberties and free choice.
    1. potentially dangerous APIs may only be used in ways that are demonstrably safe, and code within add-ons that cannot be verified as behaving safely and correctly may need to be refactored
    2. Add-ons are not allowed to contain obfuscated code, nor code that hides the purpose of the functionality involved. If external resources are used in combination with add-on code, the functionality of the code must not be obscured.
    1. I'm really tired with mozilla, everything good about them is going away, I don't think they are security or privacy focused anymore and waterfox is a nice alternative to firefox.
    2. Mozilla can still block distribution of the extension, even when not distributed via ADO. It is not possible for us to provide Mozilla the unminified JavaScript source files for Google’s and Microsoft’s translation widgets. This is a risk because Mozilla can demand such.
    3. Mozilla does not permit extensions distributed through https://addons.mozilla.org/ to load external scripts. Mozilla does allow extensions to be externally distributed, but https://addons.mozilla.org/ is how most people discover extensions. There are still concerns: Google and Microsoft do not grant permission for others to distribute their "widget" scripts. Google's and Microsoft's "widget" scripts are minified. This prevents Mozilla's reviewers from being able to easily evaluate the code that is being distributed. Mozilla can reject an extension for this. Even if an extension author self-distributes, Mozilla can request the source code for the extension and halt its distribution for the same reason.

      Maybe not technically a catch-22/chicken-and-egg problem, but what is a better name for this logical/dependency problem?

    1. If you delegate all your IT security to the InfoSec, they will come up with draconian rules

      Try to do some of your own security before delegating everything to InfoSec, which will come with draconian restrictions.

  9. Apr 2020
    1. Don’t share any private, identifiable information on social media It may be fun to talk about your pets with your friends on Instagram or Twitter, but if Fluffy is the answer to your security question, then you shouldn’t share that with the world. This may seem quite obvious, but sometimes you get wrapped up in an online conversation, and it is quite easy to let things slip out. You may also want to keep quiet about your past home or current home locations or sharing anything that is very unique and identifiable. It could help someone fake your identity.
    2. Don’t share vacation plans on social media Sharing a status of your big trip to the park on Saturday may be a good idea if you are looking to have a big turnout of friends to join you, but not when it comes to home and personal safety. For starters, you have just broadcasted where you are going to be at a certain time, which can be pretty dangerous if you have a stalker or a crazy ex. Secondly, you are telling the time when you won’t be home, which can make you vulnerable to being robbed. This is also true if you are sharing selfies of yourself on the beach with a caption that states “The next 2 weeks are going to be awesome!” You have just basically told anyone who has the option to view your photo and even their friends that you are far away from home and for how long.
    1. I've written before about the attitude of people with titles like "Marketing Manager" where there can be a myopic focus on usability whilst serious security incidents remain "a hypothetical risk".
    2. There will be those within organisations that won't be too keen on the approaches above due to the friction it presents to some users.
    3. This has a usability impact. From a purely "secure all the things" standpoint, you should absolutely take the above approach but there will inevitably be organisations that are reluctant to potentially lose the registration as a result of pushing back
    4. I'm providing this data in a way that will not disadvantage those who used the passwords I'm providing.
    5. As such, they're not in clear text and whilst I appreciate that will mean some use cases aren't feasible, protecting the individuals still using these passwords is the first priority.
    6. you could even provide an incentive if the user proactively opts to change a Pwned Password after being prompted, for example the way MailChimp provide an incentive to enable 2FA:
    7. Perhaps, for example, a Pwned Password is only allowed if multi-step verification is enabled. Maybe there are certain features of the service that are not available if the password has a hit on the pwned list.
    8. A middle ground would be to recommend the user create a new password without necessarily enforcing this action. The obvious risk is that the user clicks through the warning and proceeds with using a compromised password, but at least you've given them the opportunity to improve their security profile.
    1. The Authenticity Token is a countermeasure to Cross-Site Request Forgery (CSRF). What is CSRF, you ask? It's a way that an attacker can potentially hijack sessions without even knowing session tokens.
    2. Rails does not issue the same stored token with every form. Neither does it generate and store a different token every time. It generates and stores a cryptographic hash in a session and issues new cryptographic tokens, which can be matched against the stored one, every time a page is rendered.
    3. Since the authenticity token is stored in the session, the client cannot know its value. This prevents people from submitting forms to a Rails app without viewing the form within that app itself. Imagine that you are using service A, you logged into the service and everything is ok. Now imagine that you went to use service B, and you saw a picture you like, and pressed on the picture to view a larger size of it. Now, if some evil code was there at service B, it might send a request to service A (which you are logged into), and ask to delete your account, by sending a request to http://serviceA.com/close_account. This is what is known as CSRF (Cross Site Request Forgery). If service A is using authenticity tokens, this attack vector is no longer applicable, since the request from service B would not contain the correct authenticity token, and will not be allowed to continue.
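
      Concretely, the form Rails renders carries the token as a hidden field (the value here is illustrative), and the server rejects any POST whose token doesn't match the hash stored in the session:

      ```html
      <form action="/close_account" method="post">
        <input type="hidden" name="authenticity_token"
               value="(illustrative base64 token)">
        <button>Close account</button>
      </form>
      ```
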
    1. 1- Validation: you “validate”, ie deem valid or invalid, data at input time. For instance if asked for a zipcode user enters “zzz43”, that’s invalid. At this point, you can reject or… sanitize.

       2- Sanitization: you make data “sane” before storing it. For instance if you want a zipcode, you can remove any character that’s not [0-9].

       3- Escaping: at output time, you ensure data printed will never corrupt display and/or be used in an evil way (escaping HTML etc…)
    1. What Is Input Validation and Sanitization? Validation checks if the input meets a set of criteria (such as a string contains no standalone single quotation marks). Sanitization modifies the input to ensure that it is valid (such as doubling single quotes).
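
      A sketch of the three stages for the zipcode example:

      ```ts
      // 1. Validation (input time): accept or reject.
      function isValidZip(input: string): boolean {
        return /^[0-9]{5}$/.test(input);
      }

      // 2. Sanitization (before storing): strip anything that isn't a digit.
      function sanitizeZip(input: string): string {
        return input.replace(/[^0-9]/g, '');
      }

      // 3. Escaping (output time): make the value safe to print as HTML.
      function escapeHtml(value: string): string {
        return value
          .replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;');
      }
      ```
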
    1. In December 2006, 34,000 actual user names and passwords were stolen in a MySpace phishing attack. The idea of the attack was to create a profile page named "login_home_index_html", so the URL looked very convincing. Specially-crafted HTML and CSS was used to hide the genuine MySpace content from the page and instead display its own login form.
    1. Before we get to passwords, surely you already have in mind that Google knows everything about you. It knows what websites you’ve visited, it knows where you’ve been in the real world thanks to Android and Google Maps, it knows who your friends are thanks to Google Photos. All of that information is readily available if you log in to your Google account. You already have good reason to treat the password for your Google account as if it’s a state secret.
    2. You already have good reason to treat the password for your Google account as if it’s a state secret. But now the stakes are higher. You’re trusting Google with the passwords that protect the rest of your life – your bank, your shopping, your travel, your private life. If someone learns or guesses your Google account password, you are completely compromised. The password has to be complex and unique. You have to treat your Google account password with the same care as a LastPass user. Perhaps more so, because it’s easier to reset a Google account password. If your passwords are saved in Chrome, you should strongly consider using two-factor authentication to log into your Google account. I’ll talk about that in the next article.
    1. One of the drawbacks of waiting until someone signs in again to check their password is that a user may simply stay signed in for a long time without signing out. I suppose that could be an argument in favor of limiting the maximum duration of a session or remember-me token, but as far as user experience, I always find it annoying when I was signed in and a website arbitrarily signs me out without telling me why.
    1. When you use multiple email addresses, less data is connected directly to you. When breaches happen, the damage is limited. One can be used when you are not sure of the security issues, one for casual profiles and one for serious business. It’s up to you how many you have and what you use them for.
    1. It is possible the attacker gained access to your account through breaking your security questions. This is possible if you used answers that can be guessed based on your social media profiles or personal information.
    1. At the same time, we need to ensure that no information about other unsafe usernames or passwords leaks in the process, and that brute force guessing is not an option. Password Checkup addresses all of these requirements by using multiple rounds of hashing, k-anonymity, and private set intersection with blinding.
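
      Password Checkup's full protocol is more involved, but the underlying k-anonymity idea is easy to sketch against the Pwned Passwords range API: only the first five hex characters of the hash ever leave the client.

      ```ts
      import { createHash } from 'crypto';

      async function isPwned(password: string): Promise<boolean> {
        const digest = createHash('sha1').update(password).digest('hex').toUpperCase();
        const prefix = digest.slice(0, 5); // the only thing sent over the wire
        const suffix = digest.slice(5);
        const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
        const body = await res.text(); // "SUFFIX:COUNT" lines for every matching hash
        return body.split('\n').some((line) => line.startsWith(suffix));
      }
      ```
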
    1. Q. I would like a copy of my data from a breach, can you please send it to me?
       A. No, I cannot.

       Q. I have a breach I would like to give you in exchange for “your” breach, can you please send it to me?
       A. No, I cannot.

       Q. I’m a security researcher who wants to do some analysis on the breach, can you please send it to me?
       A. No, I cannot.

       Q. I’m making a searchable database of breaches; can you please send it to me?
       A. No, I cannot.

       Q. I have another reason for wanting the data not already covered above, can you please send it to me?
       A. No, I cannot.
    1. This is pretty old now, but it should absolutely be mentioned that you can NOT always fall back to html - I suspect that MOST places that support markdown don't support html.

      Not sure if this is true, though. GitHub and GitLab support HTML, for example.

      Maybe comments on websites wouldn't normally allow it; I don't know. But they should. One can use this filter, for example, to make it safer.

    1. Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach the policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 RunInstances action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline.

       Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS Services That Work with IAM.

      Identity-Based Policies and Resource-Based Policies
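
      An identity-based policy matching the quote's example (account ID and region are illustrative):

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          { "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": "*" },
          {
            "Effect": "Allow",
            "Action": "dynamodb:GetItem",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyCompany"
          }
        ]
      }
      ```
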

    1. But recent events have made me question the prudence of releasing this information, even for research purposes. The arrest and aggressive prosecution of Barrett Brown had a marked chilling effect on both journalists and security researchers.
    2. The fact is that it doesn’t matter if you can see the threat or not, and it doesn’t matter if the flaw ever leads to a vulnerability. You just always follow the core rules and everything else seems to fall into place.
    1. Credential stuffing is a serious attack vector that is underrated because of its unintuitive nature
    2. Download the billions of breached passwords and blacklist them all. Attackers have a copy; so should you.
    3. If you force people to frequently change their passwords, they will use bad passwords.
    4. Stop forcing users to change their passwords every 30, 60, or 90 days, and stop forcing users to include a mixture of uppercase, lowercase, and special characters. Forcing users to change their passwords should only happen if there is reason to believe an organization has been breached, or if a new third-party data breach affects employees or users.
    5. Defending against credential stuffing attacks
    1. While KeeFarce is specifically designed to target KeePass password manager, it is possible that developers can create a similar tool that takes advantage of a compromised machine to target virtually every other password manager available today.
    2. KeeFarce obtains passwords by leveraging a technique called DLL (Dynamic Link Library) injection, which allows third-party apps to tamper with the processes of another app by injecting an external DLL code.
    1. Timed clipboard clearing: KeePass can clear the clipboard automatically some time after you've copied one of your passwords into it.
    1. By default: no. The Auto-Type method in KeePass 2.x works the same as the one in 1.x and consequently is not keylogger-safe. However, KeePass features an alternative method called Two-Channel Auto-Type Obfuscation (TCATO), which renders keyloggers useless. This is an opt-in feature (because it doesn't work with all windows) and must be enabled for entries manually. See the TCATO documentation for details.
    1. The time in which the first string part remains in the clipboard is minimal. It is copied to the clipboard, pasted into the target application and immediately cleared. This process usually takes only a few milliseconds at maximum.

      Seems like all a keylogger would need to do is poll the clipboard more frequently than that and, whenever it detects a change from the last cached clipboard value, save the clipboard contents with a timestamp that can easily be correlated and combined with the keylogger recording.