364 Matching Annotations
  1. Last 7 days
    1. In terms of their border dispute, India and China are struggling with what game theorists refer to as a “commitment problem.” Meaning: a commitment problem arises when two states, who would be better off in the present if they consented to a mutually beneficial agreement, are unable to resolve their disputes due to differing expectations of future strength, and a consequent inability to commit to future bargaining power or a division of benefits.
    1. Allowing port 80 doesn’t introduce a larger attack surface on your server, because requests on port 80 are generally served by the same software that runs on port 443.
    1. CodeGuard relies upon industry best practices to protect customers’ data. All backups and passwords are encrypted, secure connections (SFTP/SSH/SSL) are utilized if possible, and annual vulnerability testing is conducted by an independent agency. To date, there has not been a data breach or successful hack or attack upon CodeGuard.
  2. May 2020
    1. However, it's possible to enforce both a whitelist and nonces with 'strict-dynamic' by setting two policies:
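      The two-policy approach can be sketched as a pair of response headers (the nonce and hostnames below are placeholder values). Browsers enforce every policy that is delivered, so a script must satisfy both the nonce/'strict-dynamic' policy and the whitelist:

```http
Content-Security-Policy: script-src 'nonce-r4nd0m123' 'strict-dynamic'
Content-Security-Policy: script-src https://cdn.example.com
```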
    1. Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data in AWS.
    1. sadness.js will not load, however, as document.write() produces script elements which are "parser-inserted".
  3. developer.chrome.com
    1. If a user clicks on that button, the onclick script will not execute. This is because the script did not immediately execute, and code not interpreted until the click event occurs is not considered part of the content script, so the CSP of the page (not of the extension) restricts its behavior. And since that CSP does not specify unsafe-inline, the inline event handler is blocked.
    1. I will need to find a workaround for one of my private extensions that controls devices in my home network, and its source code cannot be uploaded to Mozilla because of my and my family's privacy.
    2. While there are security benefits to disallowing unsigned extensions by default, it is not clear why there is no option to turn off this behavior, perhaps by making it configurable only with administrator rights.
    4. It would be best to offer an official way to allow installing local, unsigned extensions, and make the option configurable only by root, while also showing appropriate warnings about the potential risks of installing unsigned extensions.
    4. They don't have to host the extension on their website, but it's absolutely and utterly unacceptable for them to interfere with me choosing to come to github and install it.
    5. I appreciate the vigilance, but it would be even better to actually publish a technical reasoning for why do you folks believe Firefox is above the device owner, and the root user, and why there should be no possibility through any means and configuration protections to enable users to run their own code in the release version of Firefox.
    6. I do not understand what is the threat model of not allowing the root user to configure Firefox, since malware could just replace the entire Firefox binary.
    7. We must consider introducing sensible default options in Firefox, while also educating users and allowing them to override certain features, instead of placing marginal security benefits above user liberties and free choice.
    1. potentially dangerous APIs may only be used in ways that are demonstrably safe, and code within add-ons that cannot be verified as behaving safely and correctly may need to be refactored
    2. Add-ons are not allowed to contain obfuscated code, nor code that hides the purpose of the functionality involved. If external resources are used in combination with add-on code, the functionality of the code must not be obscured.
    1. I'm really tired of Mozilla; everything good about them is going away. I don't think they are security- or privacy-focused anymore, and Waterfox is a nice alternative to Firefox.
    2. Mozilla can still block distribution of the extension, even when not distributed via AMO. It is not possible for us to provide Mozilla the unminified JavaScript source files for Google’s and Microsoft’s translation widgets. This is a risk because Mozilla can demand such.
    3. Mozilla does not permit extensions distributed through https://addons.mozilla.org/ to load external scripts. Mozilla does allow extensions to be externally distributed, but https://addons.mozilla.org/ is how most people discover extensions. There are still concerns: Google and Microsoft do not grant permission for others to distribute their "widget" scripts. Google's and Microsoft's "widget" scripts are minified. This prevents Mozilla's reviewers from being able to easily evaluate the code that is being distributed. Mozilla can reject an extension for this. Even if an extension author self-distributes, Mozilla can request the source code for the extension and halt its distribution for the same reason.

      Maybe not technically a catch-22/chicken-and-egg problem, but what is a better name for this logical/dependency problem?

    1. If you delegate all your IT security to the InfoSec, they will come up with draconian rules

      Try to do some of your own security before delegating everything to InfoSec, which will come with draconian restrictions.

  4. Apr 2020
    1. Don’t share any private, identifiable information on social media. It may be fun to talk about your pets with your friends on Instagram or Twitter, but if Fluffy is the answer to your security question, then you shouldn’t share that with the world. This may seem quite obvious, but sometimes you get wrapped up in an online conversation, and it is quite easy to let things slip out. You may also want to keep quiet about your past or current home locations, and avoid sharing anything that is very unique and identifiable. It could help someone fake your identity.
    2. Don’t share vacation plans on social media. Sharing a status about your big trip to the park on Saturday may be a good idea if you are looking to have a big turnout of friends join you, but not when it comes to home and personal safety. For starters, you have just broadcast where you are going to be at a certain time, which can be pretty dangerous if you have a stalker or a crazy ex. Secondly, you are telling people when you won’t be home, which can make you vulnerable to being robbed. This is also true if you are sharing selfies of yourself on the beach with a caption that states “The next 2 weeks are going to be awesome!” You have basically just told anyone who can view your photo, and even their friends, that you are far away from home and for how long.
    1. I've written before about the attitude of people with titles like "Marketing Manager" where there can be a myopic focus on usability whilst serious security incidents remain "a hypothetical risk".
    2. There will be those within organisations that won't be too keen on the approaches above due to the friction it presents to some users.
    3. This has a usability impact. From a purely "secure all the things" standpoint, you should absolutely take the above approach but there will inevitably be organisations that are reluctant to potentially lose the registration as a result of pushing back
    4. I'm providing this data in a way that will not disadvantage those who used the passwords I'm providing.
    5. As such, they're not in clear text and whilst I appreciate that will mean some use cases aren't feasible, protecting the individuals still using these passwords is the first priority.
    6. you could even provide an incentive if the user proactively opts to change a Pwned Password after being prompted, for example the way MailChimp provide an incentive to enabled 2FA:
    7. Perhaps, for example, a Pwned Password is only allowed if multi-step verification is enabled. Maybe there are certain features of the service that are not available if the password has a hit on the pwned list.
    8. A middle ground would be to recommend the user create a new password without necessarily enforcing this action. The obvious risk is that the user clicks through the warning and proceeds with using a compromised password, but at least you've given them the opportunity to improve their security profile.
    1. The Authenticity Token is a countermeasure to Cross-Site Request Forgery (CSRF). What is CSRF, you ask? It's a way that an attacker can potentially hijack sessions without even knowing session tokens.
    2. Rails does not issue the same stored token with every form. Neither does it generate and store a different token every time. It generates and stores a cryptographic hash in a session and issues new cryptographic tokens, which can be matched against the stored one, every time a page is rendered.
    3. Since the authenticity token is stored in the session, the client cannot know its value. This prevents people from submitting forms to a Rails app without viewing the form within that app itself. Imagine that you are using service A, you logged into the service and everything is ok. Now imagine that you went to use service B, and you saw a picture you like, and pressed on the picture to view a larger size of it. Now, if some evil code was there at service B, it might send a request to service A (which you are logged into), and ask to delete your account, by sending a request to http://serviceA.com/close_account. This is what is known as CSRF (Cross Site Request Forgery). If service A is using authenticity tokens, this attack vector is no longer applicable, since the request from service B would not contain the correct authenticity token, and will not be allowed to continue.
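      A hypothetical Python sketch of this masked-token idea (not Rails' actual code; the function names are invented): one random secret lives in the session, and each rendered form gets a fresh one-time-pad masking of it, so the per-form token changes every time while all tokens verify against the same stored value.

```python
import base64
import hmac
import os

def new_session_token() -> bytes:
    # One random secret, stored server-side in the session.
    return os.urandom(32)

def mask_token(session_token: bytes) -> str:
    # A fresh one-time pad per rendered form: the emitted token
    # differs every time, while the session secret stays fixed.
    pad = os.urandom(32)
    masked = bytes(a ^ b for a, b in zip(pad, session_token))
    return base64.b64encode(pad + masked).decode()

def verify_token(form_token: str, session_token: bytes) -> bool:
    raw = base64.b64decode(form_token)
    pad, masked = raw[:32], raw[32:]
    unmasked = bytes(a ^ b for a, b in zip(pad, masked))
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(unmasked, session_token)
```

      A forged cross-site request fails because the attacker cannot read the session-bound secret needed to produce a token that unmasks correctly.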
    1. 1. Validation: you “validate”, i.e. deem valid or invalid, data at input time. For instance, if asked for a zipcode the user enters “zzz43”; that’s invalid. At this point, you can reject or… sanitize. 2. Sanitization: you make data “sane” before storing it. For instance, if you want a zipcode, you can remove any character that’s not [0-9]. 3. Escaping: at output time, you ensure data printed will never corrupt display and/or be used in an evil way (escaping HTML etc…)
    1. What Is Input Validation and Sanitization? Validation checks if the input meets a set of criteria (such as a string contains no standalone single quotation marks). Sanitization modifies the input to ensure that it is valid (such as doubling single quotes).
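      The three stages can be illustrated with a short Python sketch using the zipcode example (the helper names are invented for illustration):

```python
import html
import re

def validate_zipcode(raw: str) -> bool:
    # Validation: accept only exactly five digits.
    return re.fullmatch(r"[0-9]{5}", raw) is not None

def sanitize_zipcode(raw: str) -> str:
    # Sanitization: drop every character that is not [0-9].
    return re.sub(r"[^0-9]", "", raw)

def escape_for_html(value: str) -> str:
    # Escaping: neutralize markup at output time.
    return html.escape(value)
```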
    1. In December 2006, 34,000 actual user names and passwords were stolen in a MySpace phishing attack. The idea of the attack was to create a profile page named "login_home_index_html", so the URL looked very convincing. Specially-crafted HTML and CSS were used to hide the genuine MySpace content from the page and instead display its own login form.
    1. Before we get to passwords, surely you already have in mind that Google knows everything about you. It knows what websites you’ve visited, it knows where you’ve been in the real world thanks to Android and Google Maps, it knows who your friends are thanks to Google Photos. All of that information is readily available if you log in to your Google account. You already have good reason to treat the password for your Google account as if it’s a state secret.
    2. You already have good reason to treat the password for your Google account as if it’s a state secret. But now the stakes are higher. You’re trusting Google with the passwords that protect the rest of your life – your bank, your shopping, your travel, your private life. If someone learns or guesses your Google account password, you are completely compromised. The password has to be complex and unique. You have to treat your Google account password with the same care as a LastPass user. Perhaps more so, because it’s easier to reset a Google account password. If your passwords are saved in Chrome, you should strongly consider using two-factor authentication to log into your Google account. I’ll talk about that in the next article.
    1. One of the drawbacks of waiting until someone signs in again to check their password is that a user may simply stay signed in for a long time without signing out. I suppose that could be an argument in favor of limiting the maximum duration of a session or remember-me token, but as far as user experience, I always find it annoying when I was signed in and a website arbitrarily signs me out without telling me why.
    2. One suggestion is to check user's passwords when they log in and you have the plain text password to hand. That way you can also take them through a reset password flow as they log in if their password has been pwned.
    1. When you use multiple email addresses, less data is connected directly to you. When breaches happen, the damage is limited. One can be used when you are not sure of the security issues, one for casual profiles and one for serious business. It’s up to you how many you have and what you use them for.
    1. It is possible the attacker gained access to your account through breaking your security questions. This is possible if you used answers that can be guessed based on your social media profiles or personal information.
    1. At the same time, we need to ensure that no information about other unsafe usernames or passwords leaks in the process, and that brute force guessing is not an option. Password Checkup addresses all of these requirements by using multiple rounds of hashing, k-anonymity, and private set intersection with blinding.
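      The k-anonymity part of such a scheme can be sketched in Python, in the style of HIBP's range API: only a short hash prefix leaves the client, the server returns all suffixes sharing that prefix, and the match happens locally. (Password Checkup additionally layers blinding and private set intersection on top, which this sketch omits.)

```python
import hashlib

PREFIX_LEN = 5  # characters of the SHA-1 hex digest sent to the server

def split_for_range_query(password: str):
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    # Only the prefix is transmitted; many passwords share it,
    # so the server cannot tell which one was being checked.
    return digest[:PREFIX_LEN], digest[PREFIX_LEN:]

def is_pwned(password: str, suffixes_from_server) -> bool:
    # The comparison against the returned suffix set is local.
    _, suffix = split_for_range_query(password)
    return suffix in suffixes_from_server
```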
    1. Q. I would like a copy of my data from a breach, can you please send it to me? A. No, I cannot Q. I have a breach I would like to give you in exchange for “your” breach, can you please send it to me? A. No, I cannot Q. I’m a security researcher who wants to do some analysis on the breach, can you please send it to me? A. No, I cannot Q. I’m making a searchable database of breaches; can you please send it to me? A. No, I cannot Q. I have another reason for wanting the data not already covered above, can you please send it to me? A. No, I cannot
    1. This is pretty old now, but it should absolutely be mentioned that you can NOT always fall back to html - I suspect that MOST places that support markdown don't support html.

      Not sure if this is true, though. GitHub and GitLab support HTML, for example.

      Maybe comments on websites wouldn't normally allow it; I don't know. But they should. One can use this filter, for example, to make it safer.

    1. Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach the policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 RunInstances action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline. Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys. For a list of services that support resource-based policies, see AWS Services That Work with IAM.

      Identity-Based Policies and Resource-Based Policies
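      As an illustration, an identity-based policy matching the John example might look roughly like this (the account ID, region, and ARNs are placeholders; a resource-based policy would additionally name a Principal that the permissions apply to):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": "*" },
    { "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyCompany" }
  ]
}
```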

    1. But recent events have made me question the prudence of releasing this information, even for research purposes. The arrest and aggressive prosecution of Barrett Brown had a marked chilling effect on both journalists and security researchers.
    2. The fact is that it doesn’t matter if you can see the threat or not, and it doesn’t matter if the flaw ever leads to a vulnerability. You just always follow the core rules and everything else seems to fall into place.
    1. Credential stuffing is a serious attack vector that is underrated because of its unintuitive nature
    2. Download the billions of breached passwords and blacklist them all. Attackers have a copy; so should you.
    3. If you force people to frequently change their passwords, they will use bad passwords.
    4. Stop forcing users to change their passwords every 30, 60, or 90 days, and stop forcing users to include a mixture of uppercase, lowercase, and special characters. Forcing users to change their passwords should only happen if there is reason to believe an organization has been breached, or if a new third-party data breach affects employees or users.
    5. Defending against credential stuffing attacks
    1. While KeeFarce is specifically designed to target KeePass password manager, it is possible that developers can create a similar tool that takes advantage of a compromised machine to target virtually every other password manager available today.
    2. KeeFarce obtains passwords by leveraging a technique called DLL (Dynamic Link Library) injection, which allows third-party apps to tamper with the processes of another app by injecting an external DLL code.
    1. Timed clipboard clearing: KeePass can clear the clipboard automatically some time after you've copied one of your passwords into it.
    2. Open Source prevents backdoors. You can have a look at its source code and compile it yourself.
    1. By default: no. The Auto-Type method in KeePass 2.x works the same as the one in 1.x and consequently is not keylogger-safe. However, KeePass features an alternative method called Two-Channel Auto-Type Obfuscation (TCATO), which renders keyloggers useless. This is an opt-in feature (because it doesn't work with all windows) and must be enabled for entries manually. See the TCATO documentation for details.
    1. The time in which the first string part remains in the clipboard is minimal. It is copied to the clipboard, pasted into the target application and immediately cleared. This process usually takes only a few milliseconds at maximum.

      Seems like all a keylogger would need to do is poll/check the clipboard more frequently than that, and (only if it detected a change in value from the last cached clipboard value), save the clipboard contents with a timestamp that can be easily correlated and combined with the keylogger recording.

    1. Credential stuffing is the automated injection of breached username/password pairs in order to fraudulently gain access to user accounts.
    1. In 2017 NIST (National Institute of Standards and Technology) as part of their digital identity guidelines recommended that user passwords are checked against existing public breaches of data. The idea is that if a password has appeared in a data breach before then it is deemed compromised and should not be used. Of course, the recommendations include the use of two factor authentication to protect user accounts too.
    1. When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised.
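      A minimal Python sketch of that NIST requirement (the tiny in-memory blocklist is a stand-in for a real corpus of breached and commonly-used passwords):

```python
# Stand-in for a large corpus of breached/common passwords.
COMMON_OR_BREACHED = {"password", "123456", "qwerty", "letmein"}

def acceptable_secret(candidate: str, blocklist=COMMON_OR_BREACHED) -> bool:
    # SP 800-63B: reject values known to be commonly used,
    # expected, or compromised when secrets are set or changed.
    return candidate.lower() not in blocklist
```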
    2. trim off a bunch of excessive headers such as the content security policy HIBP uses (that's of no use to a lone API endpoint).
    1. This isn't asking about which of the person's 500 doors was left unlocked, rather it's asking me to put the actual keys for over a billion doors up into a publicly accessible location with nothing other than my own personal best efforts to keep them safe.
    2. So I'd have to encrypt them and the problem with encryption is decryption. If HIBP got comprehensively pwned itself - and that is always a possibility - to the extent where the encryption key was also exposed, it's game over.
    3. So I'd have to encrypt them and the problem with encryption is decryption. If HIBP got comprehensively pwned itself - and that is always a possibility - to the extent where the encryption key was also exposed, it's game over. Or alternatively, if there's a flaw in the process that retrieves and displays the password such that it becomes visible to an unauthorised person, that's also a very serious issue.
    4. Once common practice, websites emailing you your password is now severely frowned upon. You'd often see this happen if you'd forgotten your password: you go to the "forgot password page", plug in your email address and get it delivered to your inbox. In fact, this is such a bad practice that there's even a website dedicated to shaming others that do this.
    5. Email is not considered a secure communications channel. You have no idea if your email is encrypted when it's sent between mail providers nor is it a suitable secure storage facility
    1. So there's a lot of stuff getting hacked and a lot of credentials floating around the place, but then what? I mean what do evil-minded people do with all those email addresses and passwords? Among other things, they attempt to break into accounts on totally unrelated websites
    1. An emerging way to bypass the need for passwords is to use magic links. A magic link is a temporary URL that expires after use, or after a specific interval of time. Magic links can be sent to your email address, an app, or a security device. Clicking the link authorizes you to sign in.

      Magic Links to replace passwords?
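      A magic link's token can be sketched as a signed, expiring payload (HMAC over email + expiry). SECRET is a placeholder, and a real system would also record issued tokens server-side so each link is single-use:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # placeholder key

def make_magic_token(email: str, ttl_seconds: int = 900, now: float = None) -> str:
    # The signature covers the expiry, so it cannot be extended.
    expires = int((time.time() if now is None else now) + ttl_seconds)
    payload = f"{email}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_magic_token(token: str, now: float = None):
    # Returns the email on success, None on tamper or expiry.
    payload_b64, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    email, expires = payload.decode().rsplit("|", 1)
    if (time.time() if now is None else now) > int(expires):
        return None
    return email
```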

    2. Hashing passwords won’t save you or your users. Once a database of passwords has been stolen, hackers aim immense distributed computing power at those password databases. They use parallel GPUs or giant botnets with hundreds of thousands of nodes to try hundreds of billions of password combinations per second in hopes of recovering plaintext username/password pairs.

      What happens when the database is in hacker's hands
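      This is why verifiers store slow, salted hashes rather than fast ones. A sketch using PBKDF2 from Python's standard library (the iteration count is illustrative; memory-hard functions such as Argon2 or scrypt resist GPU farms even better):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    # A per-user random salt defeats precomputed rainbow tables;
    # the high iteration count makes each guess expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```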

    1. Validating CloudTrail Log File Integrity PDF Kindle RSS To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.

      Use this to help you detect a potential security issue, such as someone modifying the logs.

      avoid tampering
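      The core of the check can be sketched in Python: recompute each log file's SHA-256 and compare it to the digest CloudTrail recorded. (The real digest files are additionally RSA-signed, and `aws cloudtrail validate-logs` performs the full validation.)

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def log_file_unmodified(log_bytes: bytes, recorded_digest: str) -> bool:
    # Any modification, truncation, or forgery changes the hash.
    return sha256_hex(log_bytes) == recorded_digest
```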

    1. Unfortunately no - it cannot be done without Trusted security devices. There are several reasons for this. All of the below is working on the assumption you have no TPM or other trusted security device in place and are working in a "password only" environment.

      Devices without a TPM module will always be asked for a password (e.g. by BitLocker) on every boot

  5. Mar 2020
  6. www.graphitedocs.com
    1. Own Your Encryption Keys. You would never trust a company to keep a record of your password for use anytime they want. Why would you do that with your encryption keys? With Graphite, you don't have to. You own and manage your keys so only YOU can decrypt your content.
    1. This is acceptable because the standard security levels are primarily driven by much simpler, symmetric primitives where the security level naturally falls on a power of two. For asymmetric primitives, rigidly adhering to a power-of-two security level would require compromises in other parts of the design, which we reject.
    1. According to the guidelines of the Polismyndigheten (Swedish Police Authority), an impact assessment must be carried out before new police tools are introduced if they involve sensitive processing of personal data. No such assessment was made for the tool in question.

      Swedish police have used Clearview AI without any 'consequence judgement' having been performed.

      In other words, Swedish police have used a facial-recognition system without being allowed to do so.

      This is a clear breach of human rights.

      Swedish police have lied about this, as reported by Dagens Nyheter.

    1. The payment provider told MarketWatch that everyone has a unique walk, and it is investigating innovative behavioral biometrics such as gait, face, heartbeat and veins for cutting edge payment systems of the future.

      This is a true invasion into people's lives.

      Remember: this is a credit-card company. We use them to pay for stuff. They shouldn't know what we look like, how we walk, how our hearts beat, nor how our 'vein technology' works.

    1. startup focused on creating transparency in data. All that stuff you keep reading about the shenanigans with companies mishandling people's data? That's what we are working on fixing.
  7. Feb 2020
    1. To add insult to injury I learn that when Cloudflare automatically detects an anomaly with your domain they permanently delete all DNS records. Mine won't be difficult to restore, but I'm not sure why this is necessary. Surely it would be possible for Cloudflare to mark a domain as disabled without irrevocably deleting it? Combined with the hacky audit log, I'm left with the opinion that for some reason Cloudflare decided to completely half-ass the part of their system that is responsible for deleting everything that matters to a user.

      ...and this is why some companies should not grow to become too big for the good of their customers.

    1. When our analysts discovered six vulnerabilities in PayPal – ranging from dangerous exploits that can allow anyone to bypass their two-factor authentication (2FA), to being able to send malicious code through their SmartChat system – we were met with non-stop delays, unresponsive staff, and lack of appreciation. Below, we go over each vulnerability in detail and why we believe they’re so dangerous. When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level. This happened even when the issue was eventually patched, although we received no bounty, credit, or even a thanks. Instead, we got our Reputation scores (which start out at 100) negatively impacted, leaving us worse off than if we’d reported nothing at all.

      Paypal is a bad company in many ways. This is one of them.

    1. Last year, Facebook said it would stop listening to voice notes in messenger to improve its speech recognition technology. Now, the company is starting a new program where it will explicitly ask you to submit your recordings, and earn money in return.

      Given Facebook's history of breaking laws that end up with them paying billions of USD in damages (even though that's a joke to them), selling ads to people who explicitly want to target people who hate Jews, and spending millions of USD every year solely on lobbying, don't sell your personal experiences and behaviours to them.

      Facebook is nefarious and psychopathic.

    1. The most popular modern secure messaging tool is Signal, which won the Levchin Prize at Real World Cryptography for its cryptographic privacy design. Signal currently requires phone numbers for all its users. It does this not because Signal wants to collect contact information for its users, but rather because Signal is allergic to it: using phone numbers means Signal can piggyback on the contact lists users already have, rather than storing those lists on its servers. A core design goal of the most important secure messenger is to avoid keeping a record of who’s talking to whom. Not every modern secure messenger is as conscientious as Signal. But they’re all better than Internet email, which doesn’t just collect metadata, but actively broadcasts it. Email on the Internet is a collaboration between many different providers; and each hop on its store-and-forward is another point at which metadata is logged.
  8. Jan 2020
    1. Now, Google has to change its practices and prompt users to choose their own default search engine when setting up a European Android device that has the Google Search app built in. Not all countries will have the same options, however, as the choices included in Google’s new prompt all went to the highest bidders. As it turns out, DuckDuckGo must have bid more aggressively than other Google competitors, as it’s being offered as a choice across all countries in the EU.
    1. A Microsoft programme to transcribe and vet audio from Skype and Cortana, its voice assistant, ran for years with “no security measures”, according to a former contractor who says he reviewed thousands of potentially sensitive recordings on his personal laptop from his home in Beijing over the two years he worked for the company.

      Wonderful. This, combined with the fact that Skype users can—fairly easily—find out which contacts another person has, is horrifying.

      Then again, most people know that Microsoft have colluded with American authorities to divulge chat/phone history for a long time, right?

    1. Maybe the server is getting brute-forced in some way, so the SSH connections are in use that way. In this case, MaxStartups would only lead to more bandwidth usage and higher server load. You should think about a non-default port in the high port range and something like fail2ban.
  9. Dec 2019
    1. greater integration of data, data security, and data sharing through the establishment of a searchable database.

      Would be great to connect these efforts with others who work on this from the data end, e.g. RDA as mentioned above.

      Also, the presentation at http://www.gfbr.global/wp-content/uploads/2018/12/PG4-Alpha-Ahmadou-Diallo.pptx states

      This data will be made available to the public and to scientific and humanitarian health communities to disseminate knowledge about the disease, support the expansion of research in West Africa, and improve patient care and future response to an outbreak.

      but the notion of public access is not clearly articulated in the present article.

    1. I understand that GitHub uses "Not Found" where it means "Forbidden" in some circumstances to prevent inadvertently revealing the existence of a private repository. Requests that require authentication will return 404 Not Found, instead of 403 Forbidden, in some places. This is to prevent the accidental leakage of private repositories to unauthorized users. --GitHub This is a fairly common practice around the web, indeed it is defined: The 404 (Not Found) status code indicates that the origin server did not find a current representation for the target resource or is not willing to disclose that one exists. --6.5.4. 404 Not Found, RFC 7231 HTTP/1.1 Semantics and Content (emphasis mine)
    1. Now using sudo to work around the root account is not only pointless, it's also dangerous: at first glance rsyncuser looks like an ordinary unprivileged account. But as I've already explained, it would be very easy for an attacker to gain full root access if he had already gained rsyncuser access. So essentially, you now have an additional root account that doesn't look like a root account at all, which is not a good thing.
    2. My exploit was to set the setuid bit on the dash binary; it causes Linux to always run that binary with the permissions of the owner, in this case root.
    1. In actuality, most people choose words from a set of 10,000 or more words, bringing the complexity of a 5-word passphrase to 16,405 or more times greater than that of an 8-character password.
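      The 16,405 figure checks out arithmetically: five words drawn from a 10,000-word list give 10,000^5 combinations, versus 94^8 for eight characters drawn from the ~94 printable ASCII characters.

```python
passphrase_space = 10_000 ** 5  # 1e20 combinations
password_space = 94 ** 8        # ~6.1e15 combinations

ratio = passphrase_space // password_space
print(ratio)  # 16405
```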
    1. Google found 1,494 device identifiers in SensorVault, sending them to the ATF to comb through. In terms of numbers, that’s unprecedented for this form of search. It illustrates how Google can pinpoint a large number of mobile phones in a brief period of time and hand over that information to the government
    2. Google found 1,494 device identifiers in SensorVault, sending them to the ATF to comb through. In terms of numbers, that’s unprecedented for this form of search. It illustrates how Google can pinpoint a large number of mobile phones in a brief period of time and hand over that information to the government
    1. Use a USB-condom. This is a device that plugs in between your normal cable and the computer and blocks the data lines
    2. Use a USB cable with a "data switch". This cable is normally power-only, which is what you want 90% of the time. However there is a button ("Data Transfer Protection On/Off Switch") you can press that will enable data. An LED indicates the mode. This kind of cable is much safer and secure, plus more convenient for the users. It follows the security principle that if you make the defaults what you want users to do, they're more likely to follow your security policy.