330 Matching Annotations
  1. Nov 2019
    1. Loading this iframe allows Facebook to know that this specific user is currently on your website. Facebook therefore knows about user browsing behaviour without the user’s explicit consent. If more and more websites adopt the Facebook SDK, then Facebook could potentially have a user’s full browsing history! And, as the saying goes, “with great power comes great responsibility”: it’s part of our job as developers to protect users’ privacy even when they don’t ask for it.
    1. half of iPhone users don’t know there’s a unique ID on their phone (called an IDFA, for “identifier for advertisers”) tracking their app activity and sending it to third-party advertisers by default.
    1. How you use our services and your devices. This includes: call records containing phone numbers you call and receive calls from, websites you visit, text records, wireless location, application and feature usage, product and device-specific information and identifiers, router connections, service options you choose, mobile and device numbers, video streaming and video packages and usage, movie rental and purchase data, TV and video viewership, and other similar information.
    2. Demographic and interest data. For example, this information could include gender, age range, education level, sports enthusiast, frequent diner and other demographics and interests.
    3. Information from social media platforms. This may include interests, "likes" and similar information you permit social media companies to share in this way.
    4. Information from Verizon Media. For example, we may receive information from Verizon Media to help us understand your interests to help make our advertising more relevant to you.
    5. Learn about the information Verizon collects about you, your devices and your use of products and services we provide. We collect information when you interact with us and use our products and services. The types of information we collect depends on your use of our products and services and the ways that you interact with us. This may include information about: contact, billing and other information you provide; how you use our services and your devices; how you use our websites and apps; how our network and your devices are working; and the location of your wireless devices.

      Verizon Privacy Policy

    1. Google has confirmed that it partnered with health heavyweight Ascension, a Catholic health care system based in St. Louis that operates across 21 states and the District of Columbia.

      What happened to 'thou shalt not steal'?

    1. Found a @facebook #security & #privacy issue. When the app is open it actively uses the camera. I found a bug in the app that lets you see the camera open behind your feed.

      So, Facebook uses your camera even while not active.

    1. Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, has now revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

      This is some next-level onslaught.

    1. From Peg Cheechi, an instructional designer at Rush University: informing faculty members about the advantages of working with experts in course design.

      The Chronicle of Higher Education is a website and newspaper informing students and faculty of college affairs and news.

      Rating: 9/10

    1. An explosive trove of nearly 4,000 pages of confidential internal Facebook documentation has been made public, shedding unprecedented light on the inner workings of the Silicon Valley social networking giant.

      I can't even begin to tell you how much schadenfreude I feel at this. Even though this paints a vulgar picture, Facebook is still doing it, worse and worse.

      Talk about hiding in plain sight.

    1. Clear affirmative action means someone must take deliberate action to opt in, even if this is not expressed as an opt-in box. For example, other affirmative opt-in methods might include signing a consent statement, oral confirmation, a binary choice presented with equal prominence, or switching technical settings away from the default. The key point is that all consent must be opt-in consent – there is no such thing as ‘opt-out consent’. Failure to opt out is not consent. You may not rely on silence, inactivity, default settings, pre-ticked boxes or your general terms and conditions, or seek to take advantage of inertia, inattention or default bias in any other way.

      On opt in vs opt out in GDPR.

    1. Although the GDPR doesn’t specifically ban opt-out consent, the Information Commissioner’s Office (ICO) says that opt-out options “are essentially the same as pre-ticked boxes, which are banned”.

      On opt in vs opt out in GDPR.

    1. Somewhere in a cavernous, evaporative cooled datacenter, one of millions of blinking Facebook servers took our credentials, used them to authenticate to our private email account, and tried to pull information about all of our contacts. After clicking Continue, we were dumped into the Facebook home page, email successfully “confirmed,” and our privacy thoroughly violated.
    1. We are disturbed by the idea that search inquiries are systematically monitored and stored by corporations like AOL, Yahoo!, Google, etc. and may even be available to third parties. Because the Web has grown into such a crucial repository of information and our search behaviors profoundly reflect who we are, what we care about, and how we live our lives, there is reason to feel they should be off-limits to arbitrary surveillance. But what can be done?
    1. If the apparatus of total surveillance that we have described here were deliberate, centralized, and explicit, a Big Brother machine toggling between cameras, it would demand revolt, and we could conceive of a life outside the totalitarian microscope.
  2. Oct 2019
    1. Scientists at Imperial College London and Université Catholique de Louvain, in Belgium, reported in the journal Nature Communications that they had devised a computer algorithm that can identify 99.98 percent of Americans from almost any available data set with as few as 15 attributes, such as gender, ZIP code or marital status.

      This goes to show that one should not trust companies and organisations which claim to "anonymise" your data.

    2. Two years ago, when he moved from Boston to London, he had to register with a general practitioner. The doctor’s office gave him a form to sign saying that his medical data would be shared with other hospitals he might go to, and with a system that might distribute his information to universities, private companies and other government departments. The form added that although the data are anonymized, “there are those who believe a person can be identified through this information.” “That was really scary,” Dr. de Montjoye said. “We are at a point where we know a risk exists and count on people saying they don’t care about privacy. It’s insane.”
    1. Blockchain Memory: We let L be the blockchain memory space, represented as the hashtable L: {0,1}^256 → {0,1}^N, where N ≫ 256 and can store sufficiently-large documents. We assume this memory to be tamperproof under the same adversarial model used in Bitcoin and other blockchains. To intuitively explain why such a trusted data-store can be implemented on any blockchain (including Bitcoin), consider the following simplified, albeit inefficient, implementation: A blockchain is a sequence of timestamped transactions, where each transaction includes a variable number of output addresses (each address is a 160-bit number). L could then be implemented as follows - the first two outputs in a transaction encode the 256-bit memory address pointer, as well as some auxiliary meta-data. The rest of the outputs construct the serialized document. When looking up L[k], only the most recent transaction is returned, which allows update and delete operations in addition to inserts.

      This paragraph explains how a blockchain hides an individual's identity and protects their privacy while still giving them a secure way of using funds. In my opinion, a lot of hacker ransomware relies on blockchain-based coins; this paragraph and one more here are really interesting reads on how a blockchain can help protect personal data. I also relate this to hacking, corruption and money laundering.
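
      Below is a minimal, illustrative sketch of the simplified key-value store the excerpt describes; the function names and the in-memory array standing in for the chain are my own assumptions, not code from the quoted paper. Every write appends a timestamped "transaction" carrying a 256-bit key and a document payload, and looking up L[k] returns the payload of the most recent transaction for that key, which is why updates and deletes are just further inserts.

        // Illustrative sketch only: an append-only, timestamp-ordered list
        // standing in for the blockchain described above.
        const chain = [];

        // Insert, update and delete are all just appends of a new transaction.
        function put(key256, payload) {
          chain.push({ timestamp: Date.now(), key: key256, payload: payload });
        }

        // L[k]: scan from the newest transaction backwards, return the first match.
        function get(key256) {
          for (let i = chain.length - 1; i >= 0; i--) {
            if (chain[i].key === key256) return chain[i].payload;
          }
          return null; // never written (or deleted)
        }

        // "Deleting" a key is just writing an empty payload on top of it.
        function del(key256) {
          put(key256, null);
        }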

    1. Per Bloomberg, which cited a memo from an anonymous Google staffer, employees discovered that the company was creating the new tool as a Chrome browser extension that would be installed on all employees’ systems and used to monitor their activities.

      From the Bloomberg article:

      Earlier this month, employees said they discovered that a team within the company was creating the new tool for the custom Google Chrome browser installed on all workers’ computers and used to search internal systems. The concerns were outlined in a memo written by a Google employee and reviewed by Bloomberg News and by three Google employees who requested anonymity because they aren’t authorized to talk to the press.

  3. Sep 2019
    1. On social media, we are at the mercy of the platform. It crops our images the way it wants to. It puts our posts in the same, uniform grids. We are yet another profile contained in a platform with a million others, pushed around by the changing tides of a company's whims. Algorithms determine where our posts show up in people’s feeds and in what order, how someone swipes through our photos, where we can and can’t post a link. The company decides whether we're in violation of privacy laws for sharing content we created ourselves. It can ban or shut us down without notice or explanation. On social media, we are not in control.

      This is why I love personal web sites. They're your own, you do whatever you want with them, and you control them. Nothing is owned by others and you're completely free to do whatever you want.

      That's not the case with Facebook, Microsoft, Slack, Jira, whatever.

    1. There is already a lot of information Facebook can assume from that simple notification: that you are probably a woman, probably menstruating, possibly trying to have (or trying to avoid having) a baby. Moreover, even though you are asked to agree to their privacy policy, Maya starts sharing data with Facebook before you get to agree to anything. This raises some serious transparency concerns.

      Privacy International are highlighting how period-tracking apps are violating users' privacy.

  4. Aug 2019
    1. Even if you choose not to use Wi-Fi services we make available at MGM Resorts, we may still collect information concerning the precise physical location of your mobile device within and around MGM Resorts for non-marketing purposes. 

      Holy cow

    1. Now, I'd rather pay for a product that sticks around than have my personal data sold to use a free product that may not be around tomorrow. I value my privacy much more today. If you're not paying for the product... you are the product being sold.
  5. Jul 2019
    1. Even if we never see this brain-reading tech in Facebook products (something that would probably cause just a little concern), researchers could use it to improve the lives of people who can’t speak due to paralysis or other issues.
    2. That’s very different from the system Facebook described in 2017: a noninvasive, mass-market cap that lets people type more than 100 words per minute without manual text entry or speech-to-text transcription.
    3. Their work demonstrates a method of quickly “reading” whole words and phrases from the brain — getting Facebook slightly closer to its dream of a noninvasive thought-typing system.
    1. If Bluetooth is ON on your Apple device everyone nearby can understand current status of your device, get info about battery, device name, Wi-Fi status, buffer availability, OS version and even get your mobile phone number
    1. Comparison between web browsers

      This is one of the best resources on web privacy I've ever seen. I warmly recommend it!

    1. Two years ago, when he moved from Boston to London, he had to register with a general practitioner. The doctor’s office gave him a form to sign saying that his medical data would be shared with other hospitals he might go to, and with a system that might distribute his information to universities, private companies and other government departments. The form added that although the data are anonymized, “there are those who believe a person can be identified through this information.” “That was really scary,” Dr. de Montjoye said. “We are at a point where we know a risk exists and count on people saying they don’t care about privacy. It’s insane.”
    2. Scientists at Imperial College London and Université Catholique de Louvain, in Belgium, reported in the journal Nature Communications that they had devised a computer algorithm that can identify 99.98 percent of Americans from almost any available data set with as few as 15 attributes, such as gender, ZIP code or marital status.

      This goes to show that one should not trust companies and organisations which claim to "anonymise" your data.

  6. Jun 2019
  7. educatorinnovator.org
    1. snafus, like those of privacy settings

      I'm struck by the choice of "snafu" to describe "privacy settings." I worry that describing privacy as a snafu undermines the seriousness with which teachers and students should evaluate a technology's privacy settings when choosing to incorporate the technology into a classroom or other learning environment.

  8. www.joinhoney.com
    1. Honey’s products do not support Do Not Track requests at this time, which means that we collect information about your online activity while you are using Honey’s products in the manner described above.

      So even if you ask us not to track you, we will anyway.

    2. Once you delete your profile, there is no longer any data attributable to you.

      Which means they do not delete all your information.

    3. After you have terminated your use of Honey’s products, we will store your information in an aggregated and anonymised format.

      We keep your info forever, in other words.

    4. as long as is required

      Which is?

    5. That means while you are using the Extension and Honey is saving you money,

      Slickly written. These dudes are good!

    6. While you are using the Extension, this does NOT include any information from your search engine history or from your email.

      Trust us!

  9. May 2019
    1. They’ve learned, and that’s more dangerous than caring, because that means they’re rationally pricing these harms. The day that 20% of consumers put a price tag on privacy, freemium is over and privacy is back.

      Google want you to say yes, not because they're inviting positivity more than ever, but because they want you to purchase things and make them richer. This is the essence of capitalism.

    1. Unsurprisingly living up to its reputation, Facebook refuses to comply with my GDPR Subject Access Requests in an appropriate manner.

      Facebook has never cared about the privacy of individuals. This is highly interesting.

    1. Now, how does that mother build an online scrapbook of all the items that were poured into the system?

      The assumptions here are interesting. Does mom have the right to every picture taken at her party? Do the guests have the right to take pictures and post them on the web?

  10. Apr 2019
    1. The report also noted a 27 percent increase in the number of foreigners whose communications were targeted by the NSA during the year. In total, an estimated 164,770 foreign individuals or groups were targeted with search terms used by the NSA to monitor their communications, up from 129,080 on the year prior.
    1. we get some of it by collecting data about your interactions, use and experiences with our products. The data we collect depends on the context of your interactions with Microsoft and the choices that you make, including your privacy settings and the products and features that you use. We also obtain data about you from third parties.
    1. Washington state Attorney General Bob Ferguson said Thursday that Motel 6 shared the information of about 80,000 guests in the state from 2015 to 2017. That led to targeted investigations of guests with Latino-sounding names, according to Ferguson. He said many guests faced questioning from ICE, detainment or deportation as a result of the disclosures. It's the second settlement over the company's practice in recent months.

      If you stay at Motel 6, prepare to have your latino-tinged data handed over to the authorities who are looking to harm you permanently.

    1. LastPass is run by LogMeIn, Inc. which is based in United States. So let’s say the NSA knocks on their door: “Hey, we need your data on XYZ so we can check their terrorism connections!” As we know by now, NSA does these things and it happens to random people as well, despite not having any ties to terrorism. LastPass data on the server is worthless on its own, but NSA might be able to pressure the company into sending a breach notification to this user.
    1. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …”A form below the message asked for the users’ “email password.”

      So, Facebook tries to get users to give them their private and non-Facebook e-mail-account password.

      This practice is called spear phishing.

    1. I find it somewhat interesting to note that, of the 246 public annotations on this page using Hypothes.is, from what I can tell as of 4/2/2019 only one of them is a simple highlight. All the rest are highlights with an annotation or response of some sort.

      It makes me curious to know what the percentage distribution of these two types is on the platform. Is it the case that in classroom settings, in which many of these annotations appear to have been made, much of the use of the platform dictates more annotations (versus simple highlights) due to the performative nature of the process?

      Is it possible that there are a significant number of highlights which are simply hidden because the platform automatically defaults these to private? Is the friction of making highlights so high that people don't bother?

      I know that Amazon will indicate heavily highlighted passages in e-books as a feature to draw attention to the interest relating to those passages. Perhaps it would be useful/nice if Hypothes.is would do something similar, but make the author of the highlights anonymous? (From a privacy perspective, this may not work well on articles with a small number of annotators, as the presumption could be that the "private" highlights would most likely be directly attributed to those who also made public annotations.)

      Perhaps the better solution is to default highlights to public and provide friction-free UI to make them private?

      A heavily highlighted section by a broad community can be a valuable thing, but surfacing it can be a difficult thing to do.

  11. Mar 2019
    1. As one of 13 million officially designated “discredited individuals,” or laolai in Chinese, 47-year-old Kong is banned from spending on “luxuries,” whose definition includes air travel and fast trains.
    2. Discredited individuals have been barred from taking a total of 17.5 million flights and 5.5 million high-speed train trips as of the end of 2018, according to the latest annual report by the National Public Credit Information Center.The list of “discredited individuals” was introduced in 2013, months before the State Council unveiled a plan in 2014 to build a social credit system by 2020.

      This is what surveillance capitalism brings. This is due to what is called China's "Golden Shield", a credit-scoring system that, for example, brings your credit level down if you search for terms such as "Tiananmen Square Protest" or post "challenging" pictures on Facebook.

      This is surveillance capitalism at its worst, creating a new lower class for the likes of Google, Facebook, Microsoft, Amazon, and insurance companies. Keep the rabble away, as it were.

    1. Amazon has been beta testing the ads on Apple Inc.’s iOS platform for several months, according to people familiar with the plan. A similar product for Google’s Android platform is planned for later this year, said the people, who asked not to be identified because they’re not authorized to share the information publicly.

      Sounds like one of the best reasons I've ever heard to run Brave Browser both on desktop and mobile. https://brave.com/

    1. Sharing of user data is routine, yet far from transparent. Clinicians should be conscious of privacy risks in their own use of apps and, when recommending apps, explain the potential for loss of privacy as part of informed consent. Privacy regulation should emphasise the accountabilities of those who control and process user data. Developers should disclose all data sharing practices and allow users to choose precisely what data are shared and with whom.

      Horrific conclusion, which clearly states that "sharing of user data is routine" where the medical profession is concerned.

    2. To investigate whether and how user data are shared by top rated medicines related mobile applications (apps) and to characterise privacy risks to app users, both clinicians and consumers.

      "24 of 821 apps identified by an app store crawling program. Included apps pertained to medicines information, dispensing, administration, prescribing, or use, and were interactive."

  12. Feb 2019
    1. Less than a third of the apps that collect identifiers take only the Advertising ID, as recommended by Google's best practices for developers.

      Fewer than a third of these apps follow Google's Advertising ID best practice.

    1. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

      surveillance capitalism or pay-for-privacy capitalism knocking on the door...

    2. though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.

      I doubt the vast majority of the most active FB users are "affluent"

    1. Growing Focus on Measuring Learning

      This topic belongs here, but I would have liked to see an acknowledgement about privacy concerns related to measuring learning. How are we engaging students in the design of this work?

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      "Intelligence checks" is very concerning here, as it suggests pretty much what has already been leaked: that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility of discriminatory algorithms, since most of these are probably rooted in machine learning techniques and the criminal justice system in the US today tends to be fairly biased towards certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This reaffirms the previous annotation: the training data for the intelligence checks the US runs on Global Entry data is biased towards certain groups of people.

    4. for as long as your fingerprints and associated information are retained in NGI, your information may be disclosed pursuant to your consent or without your consent.

      Meaning they can share your information with or without your consent.

    5. people enrolled in, or applying to, the program consent to have their personal data added to the FBI’s Next Generation Identification (NGI) database, shared with “federal, state, local, tribal, territorial, or foreign government agencies”, and DHS third-party “grantees, experts, [and] consultants” forever.

      So it's not just shared with the US government but any government official from any country. Also third-party experts pretty much opens it up for personal information to be shared with anyone.

    1. as part of the application process, TSA collects a cache of personal information about you, including your prints. They’re held in a database for 75 years, and the database is queried by the FBI and state and local law enforcement as needed to solve crimes at which fingerprints are lifted from crime scenes, according to Nojeim. The prints may also be used for background checks.

      While Global Entry itself only lasts for 4 years, the data you give them and allow them to store lasts for almost your entire life.

    1. by providing their passport information and a copy of their fingerprints. According to CBP, registrants must also pass a background check and an interview with a CBP officer before they may be enrolled in the program

      I was at my Global Entry interview (not at all sure I made the right decision to apply) and a person who already had Global Entry came into the room because he had gotten flagged. The lady at the desk asked him if he had ever been arrested, he said no. She said their new system (they continuously update it with new algorithms to find this info) had flagged a police incident that had happened prior to him applying for Global Entry. He hadn’t been arrested, wasn’t guilty of any crime but his name had apparently made it into some police report and that gave them cause to question him when he re-entered his country.

    2. including data breaches and bankruptcy, experienced by “Clear,” a similar registered traveler program

      Clear was another travel program that had a breach of travelers' personal information, so it is not unreasonable to be cautious of Global Entry, which has the same information and the same legal protections in place (or lack thereof).

    1. Both afford us the opportunity to learn with others, but they are very different environments with different potential risks and benefits.

      As mentioned earlier in this article, experiences that incorporate private and public contexts can help people advance their understanding and facility in negotiating these different spaces.

  13. Jan 2019
    1. What is being fixed is not these internet giants but blockchain itself. The cryptocurrency startups that promised to liberate the world from the shackles of capitalism now cannot even guarantee the earnings of their own employees. Facebook's approach is to assemble the fragments of blockchain and ride the trend, making it easier for shareholders to accept.

      Comment: Lu Xun once wrote, "In my backyard there are two trees; one is a jujube tree, and the other is also a jujube tree." Interestingly, Facebook built its business on "extracting" the commercial value of users' privacy, yet its founder Zuckerberg bought the four houses surrounding his own to avoid harassment from the paparazzi. Now we can answer: what is beside the walled garden? It's another walled garden.

  14. Dec 2018
    1. Instagram, another network it owns. "Why should anyone keep believing in Facebook?" was one of the articles published. Photo: AP

      Willing to find refuge, to escape from one's own mind. The high winners. Their realities replicated in millions of minds.

  15. Nov 2018
    1. Does the widespread and routine collection of student data in ever new and potentially more-invasive forms risk normalizing and numbing students to the potential privacy and security risks?

      What happens if we turn this around - given a widespread and routine data collection culture which normalizes and numbs students to risk as early as K-8, what are our responsibilities (and strategies) to educate around this culture? And how do our institutional practices relate to that educational mission?

  16. Oct 2018
    1. how do we help students navigate privacy issues in learning spaces augmented with social/digital media. There was a specific request for examples to walk students through this. Here is what I do.

      I'm a little unnerved by the semi-legal nature of the "Interactive Project Release Form" but I think it's a great model (whether really legally enforceable or just a class constitution-type document).

  17. Sep 2018
    1. // Download a json but don't reveal who is downloading it
       fetch("sneaky.json", {referrerPolicy: "no-referrer"})
         .then(function(response) { /* consume the response */ });

       // Download a json but pretend another page is downloading it
       fetch("sneaky.json", {referrer: "https://example.site/fake.html"})
         .then(function(response) { /* consume the response */ });

       // You can only set same-origin referrers.
       fetch("sneaky.json", {referrer: "https://cross.origin/page.html"})
         .catch(function(exc) {
           // exc.name == "TypeError"
           // exc.message == "Referrer URL https://cross.origin/page.html cannot be cross-origin to the entry settings object (https://example.site)."
         });

       // Download a potentially cross-origin json and don't reveal
       // the full referrer URL across origins
       fetch(jsonURL, {referrerPolicy: "origin-when-cross-origin"})
         .then(function(response) { /* consume the response */ });

       // Download a potentially cross-origin json and reveal a
       // fake referrer URL on your own origin only.
       fetch(jsonURL, {referrer: "https://example.site/fake.html",
                       referrerPolicy: "origin-when-cross-origin"})
         .then(function(response) { /* consume the response */ });
  18. Aug 2018
    1. A file containing personal information of 14.8 million Texas residents was discovered on an unsecured server. It is not clear who owns the server, but the data was likely compiled by Data Trust, a firm created by the GOP.

    1. By last year, Google’s parent, Alphabet, was spending more money on lobbyists than any other corporation in America.
    2. “Under this law, the attorney general of California will become the chief privacy officer of the United States of America,” Mactaggart argued.
    3. “Silicon Valley’s model puts the onus on the user to decide if the bargain is fair,” Soltani told me recently. “It’s like selling you coffee and making it your job to decide if the coffee has lead in it.” When it comes to privacy, he said, “we have no baseline law that says you can’t put lead in coffee.”

      An interesting analogy for privacy

    1. Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience. While disabling “Web & App Activity” will stop Google from storing location markers, it also prevents Google from storing information generated by searches and other activity. That can limit the effectiveness of the Google Assistant, the company’s digital concierge. Sean O’Brien, a Yale Privacy Lab researcher with whom the AP shared its findings, said it is “disingenuous” for Google to continuously record these locations even when users disable Location History. “To me, it’s something people should know,” he said.
    2. Sen. Mark Warner of Virginia told the AP it is “frustratingly common” for technology companies “to have corporate practices that diverge wildly from the totally reasonable expectations of their users,” and urged policies that would give users more control of their data. Rep. Frank Pallone of New Jersey called for “comprehensive consumer privacy and data security legislation” in the wake of the AP report.
    3. Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it .)
    4. Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company lets you “pause” a setting called Location History.
    1.  recording it all in a Twitter thread that went viral and garnered the hashtag  #PlaneBae.

      I find it interesting that The Atlantic files this story with a URL that includes "/entertainment/" in its path. Culture, certainly, but how are three seemingly random people's lives meant to be classified by such a journalistic source as "entertainment?"

    1. I am not, and will never be, a simple writer. I have sought to convict, accuse, comfort, and plead with my readers. I’m leaving the majority of my flaws online: Go for it, you can find them if you want. It’s a choice I made long ago.
  19. Jul 2018
    1. where applicable, any rating in the form of a data trust score that may be assigned to the data fiduciary under section 35; and

      A Data Trust score. Thankfully, it isn't mandatory to have a data trust score, which means that apps and services can exist without there being a trust score.

    2. the period for which the personal data will be retained in terms of section 10 or where such period is not known, the criteria for determining such period;

      This defines the terms for data retention. From a company perspective, they are likely to keep this as broad as possible.

    3. Upon receipt of notification, the Authority shall determine whether such breach should be reported by the data fiduciary to the data principal, taking into account the severity of the harm that may be caused to such data principal or whether some action is required on the part of the data principal to mitigate such harm.

      This means that users aren't always informed about a breach of data. That's the prerogative of the Data Protection Authority, and not mandatory, in the interest of the user.

    4. “Personal data breach” means any unauthorised or accidental disclosure, acquisition, sharing, use, alteration, destruction, loss of access to, of personal data that compromises the confidentiality, integrity or availability of personal data to a data principal;

      Personal data breach here includes "accidental disclosure" as well.

    5. Notwithstanding anything contained in sub-sections (1) and (2), the Act shall not apply to processing of anonymised data.

      The law isn't applicable to anonymised data. However, it doesn't deal with pseudonymised data.

    6. in connection with any activity which involves profiling of data principals within the territory of India.

      This clause gives the law jurisdiction over data of Indian residents or visitors, processed beyond the physical boundaries of India

    7. in connection with any business carried on in India, or any systematic activity of offering goods or services to data principals within the territory of India; or

      Since the Internet is boundary-less, this law will apply to all online services that are being consumed in India: apps downloaded, websites viewed.

    8. Where the data principal withdraws consentfor the processing of any personal data necessary for the performance of a contract to which the data principal is a party, all legal consequences for the effects of such withdrawal shall be borne by the data principal.

      How does it serve public interest and individual rights to hold people liable for the withdrawal of consent to the processing of their personal data?

    1. Privacy

      Privacy is super important! I'm glad they reference this.

    1. challenging and time-consuming

      I'd agree with all of the challenges identified here. Understanding these is useful in designing ways to help support faculty and staff regarding OEP. An additional challenge that emerged in my recent research on OEP was faculty concerns regarding privacy and identity -- this included defining (and continually negotiating) personal/professional & teacher/student boundaries in their open practice. Exploring such tensions is an important part of supporting faculty and staff consideration/exploration of open practices.

    1. But Blair is not just posting about her own life; she has taken non-consenting parties along for the ride.
    2. A friend of mine asked if I’d thought through the contradiction of criticizing Blair publicly like this, when she’s another not-quite public figure too.

      Did this really happen? Or is the author inventing it to defuse potential criticism, as she's writing about the same story herself and only helping to propagate it?

      There's definitely a need to write about this issue, so kudos for that. Ella also deftly leaves out the name of the mystery woman, I'm sure on purpose. But she does include enough breadcrumbs to make the rest of the story discover-able so that one could jump from here to participate in the piling on. I do appreciate that it doesn't appear that she's given Blair any links in the process, which for a story like this is some subtle internet shade.

    3. the woman on the plane has deleted her own Instagram account after receiving violent abuse from the army Blair created.

      Feature request: the ability to make one's social media account "disappear" temporarily while a public "attack" like this is happening.

      We need a great name for this. Publicity ghosting? Fame cloaking?

    4. Even when the attention is positive, it is overwhelming and frightening. Your mind reels at the possibility of what they could find: your address, if your voting records are logged online; your cellphone number, if you accidentally included it on a form somewhere; your unflattering selfies at the beginning of your Facebook photo archive. There are hundreds of Facebook friend requests, press requests from journalists in your Instagram inbox, even people contacting your employer when they can’t reach you directly. This story you didn’t choose becomes the main story of your life. It replaces who you really are as the narrative someone else has written is tattooed onto your skin.
    5. We actively create our public selves, every day, one social media post at a time.
    1. Privacy advocates tried to explain that persuasion was just the tip of the iceberg. Commercial databases were juicy targets for spies and identity thieves, to say nothing of blackmail for people whose data-trails revealed socially risky sexual practices, religious beliefs, or political views.
    1. In fact, these platforms have become inseparable from their data: we use “Facebook” to refer to both the application and the data that drives that application. The result is that nearly every Web app today tries to ask you for more and more data again and again, leading to dangling data on duplicate and inconsistent profiles we can no longer manage. And of course, this comes with significant privacy concerns.
  20. Jun 2018
    1. IDEAS FOR TECHNICAL MECHANISMS

      A technique called differential privacy provides a way to measure the likelihood of negative impact and also a way to introduce plausible deniability, which in many cases can dramatically reduce risk exposure for sensitive data.

      Modern encryption techniques allow a user’s information to be fully encrypted on their device, but using it becomes unwieldy. Balancing the levels of encryption is challenging, but can create strong safety guarantees. Homomorphic encryption can allow certain types of processing or aggregation to happen without needing to decrypt the data.

      Creating falsifiable security claims allows independent analysts to validate those claims, and invalidate them when they are compromised. For example, by using subresource integrity to lock the code on a web page, the browser will refuse to load any compromised code. By then publishing the code’s hash in an immutable location, any compromise of the page is detectable easily (and automatically, with a service worker or external monitor).

      Taken to their logical conclusion these techniques suggest building our applications in a more decentralized way, which not only provides a higher bar for security, but also helps with scaling: if everyone is sharing some of the processing, the servers can do less work. In this model your digital body is no longer spread throughout servers on the internet; instead the applications come to you and you directly control how they interact with your data.
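
      As one concrete illustration of the first technique above, here is a minimal sketch of the Laplace mechanism commonly used to implement differential privacy. The function names and parameter values are my own assumptions for illustration, not code from the quoted text: noise drawn from a Laplace distribution, scaled by sensitivity/epsilon, is added to an aggregate before release, giving each individual plausible deniability about their own contribution.

        // Draw one sample from a Laplace(0, scale) distribution via inverse CDF.
        function laplaceNoise(scale) {
          const u = Math.random() - 0.5; // uniform on (-0.5, 0.5)
          return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
        }

        // Release a count with epsilon-differential privacy.
        // Sensitivity is 1: adding or removing one person changes a count by at most 1.
        function privateCount(trueCount, epsilon) {
          const sensitivity = 1;
          return trueCount + laplaceNoise(sensitivity / epsilon);
        }

        // Example (hypothetical numbers): report how many users enabled a sensitive setting.
        console.log(privateCount(1042, 0.5)); // noisy, but close in aggregate
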
  21. May 2018
  22. Apr 2018
    1. What can we build that would allow people to 1.) annotate terms of service related to tools they adopt in a classroom? and 2.) see an aggregated list of all current annotations. Last, if we were to start critically analyzing EdTech Terms of Service, what questions should we even ask?

    1. A purpose that is vague or general, such as for instance ‘Improving users’ experience’, ‘marketing purposes’, or ‘future research’ will – without further detail – usually not meet the criteria of being ‘specific’”.[

      I see a lot of cookie notices that give vague reasons like "improving user experience". Specifically disallowed by GDPR?

    2. The GDPR permits the opt-out approach when the purposes that the companies want to use the data for are “compatible” with the original purpose for which personal data were shared by users.[6] In addition to the opt-out notice, users also have to be told of their right to object at any time to the use of their data for direct marketing.[7]

      GDPR can allow opt out rather than opt in.

    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.