304 Matching Annotations
  1. Sep 2019
    1. On social media, we are at the mercy of the platform. It crops our images the way it wants to. It puts our posts in the same, uniform grids. We are yet another profile contained in a platform with a million others, pushed around by the changing tides of a company's whims. Algorithms determine where our posts show up in people’s feeds and in what order, how someone swipes through our photos, where we can and can’t post a link. The company decides whether we're in violation of privacy laws for sharing content we created ourselves. It can ban or shut us down without notice or explanation. On social media, we are not in control.

      This is why I love personal web sites. They're your own: you control them, nothing is owned by anyone else, and you're completely free to do whatever you want with them.

      That's not the case with Facebook, Microsoft, Slack, Jira, whatever.

    1. There is already a lot of information Facebook can assume from that simple notification: that you are probably a woman, probably menstruating, possibly trying to have (or trying to avoid having) a baby. Moreover, even though you are asked to agree to their privacy policy, Maya starts sharing data with Facebook before you get to agree to anything. This raises some serious transparency concerns.

      Privacy International are highlighting how period-tracking apps are violating users' privacy.

  2. Aug 2019
    1. Even if you choose not to use Wi-Fi services we make available at MGM Resorts, we may still collect information concerning the precise physical location of your mobile device within and around MGM Resorts for non-marketing purposes. 

      Holy cow

    1. Now, I'd rather pay for a product that sticks around than have my personal data sold to use a free product that may not be around tomorrow. I value my privacy much more today. If you're not paying for the product... you are the product being sold.
  3. Jul 2019
    1. Even if we never see this brain-reading tech in Facebook products (something that would probably cause just a little concern), researchers could use it to improve the lives of people who can’t speak due to paralysis or other issues.
    2. That’s very different from the system Facebook described in 2017: a noninvasive, mass-market cap that lets people type more than 100 words per minute without manual text entry or speech-to-text transcription.
    3. Their work demonstrates a method of quickly “reading” whole words and phrases from the brain — getting Facebook slightly closer to its dream of a noninvasive thought-typing system.
    1. If Bluetooth is ON on your Apple device everyone nearby can understand current status of your device, get info about battery, device name, Wi-Fi status, buffer availability, OS version and even get your mobile phone number
    1. Comparison between web browsers

      This is one of the best resources on web privacy I've ever seen. I warmly recommend it!

    1. Two years ago, when he moved from Boston to London, he had to register with a general practitioner. The doctor’s office gave him a form to sign saying that his medical data would be shared with other hospitals he might go to, and with a system that might distribute his information to universities, private companies and other government departments. The form added that although the data are anonymized, “there are those who believe a person can be identified through this information.” “That was really scary,” Dr. de Montjoye said. “We are at a point where we know a risk exists and count on people saying they don’t care about privacy. It’s insane.”
    2. Scientists at Imperial College London and Université Catholique de Louvain, in Belgium, reported in the journal Nature Communications that they had devised a computer algorithm that can identify 99.98 percent of Americans from almost any available data set with as few as 15 attributes, such as gender, ZIP code or marital status.

      This goes to show that one should not trust companies and organisations which claim to "anonymise" your data.

  4. Jun 2019
  5. educatorinnovator.org
    1. snafus, like those of privacy settings

      I'm struck by the choice of "snafu" to describe "privacy settings." I worry describing privacy as a snafu undermines the seriousness with which teachers and students should evaluate a technology's privacy settings when choosing to incorporate the technology into a classroom and other learning environment.

  6. www.joinhoney.com
    1. Honey’s products do not support Do Not Track requests at this time, which means that we collect information about your online activity while you are using Honey’s products in the manner described above.

      So even if you ask us not to track you, we will anyway.

    2. Once you delete your profile, there is no longer any data attributable to you.

      Which means they do not delete all your information.

    3. After you have terminated your use of Honey’s products, we will store your information in an aggregated and anonymised format.

      We keep your info forever, in other words.

    4. as long as is required

      Which is?

    5. That means while you are using the Extension and Honey is saving you money,

      Slickly written. These dudes are good!

    6. While you are using the Extension, this does NOT include any information from your search engine history or from your email.

      Trust us!

  7. May 2019
    1. They’ve learned, and that’s more dangerous than caring, because that means they’re rationally pricing these harms. The day that 20% of consumers put a price tag on privacy, freemium is over and privacy is back.

      Google want you to say yes not because they're inviting positivity, but because they want you to purchase things and make them richer. This is the essence of capitalism.

    1. Unsurprisingly living up to its reputation, Facebook refuses to comply with my GDPR Subject Access Requests in an appropriate manner.

      Facebook has never cared about the privacy of individuals. This is highly interesting.

    1. Now, how does that mother build an online scrapbook of all the items that were poured into the system?

      The assumptions here are interesting. Does mom have the right to every picture taken at her party? Do the guests have the right to take pictures and post them on the web?

  8. Apr 2019
    1. The report also noted a 27 percent increase in the number of foreigners whose communications were targeted by the NSA during the year. In total, an estimated 164,770 foreign individuals or groups were targeted with search terms used by the NSA to monitor their communications, up from 129,080 on the year prior.
    1. we get some of it by collecting data about your interactions, use and experiences with our products. The data we collect depends on the context of your interactions with Microsoft and the choices that you make, including your privacy settings and the products and features that you use. We also obtain data about you from third parties.
    1. Washington state Attorney General Bob Ferguson said Thursday that Motel 6 shared the information of about 80,000 guests in the state from 2015 to 2017. That led to targeted investigations of guests with Latino-sounding names, according to Ferguson. He said many guests faced questioning from ICE, detainment or deportation as a result of the disclosures. It's the second settlement over the company's practice in recent months.

      If you stay at Motel 6, prepare to have your data handed over to authorities looking to harm you permanently, especially if you have a Latino-sounding name.

    1. LastPass is run by LogMeIn, Inc. which is based in United States. So let’s say the NSA knocks on their door: “Hey, we need your data on XYZ so we can check their terrorism connections!” As we know by now, NSA does these things and it happens to random people as well, despite not having any ties to terrorism. LastPass data on the server is worthless on its own, but NSA might be able to pressure the company into sending a breach notification to this user.
    1. Facebook users are being interrupted by an interstitial demanding they provide the password for the email account they gave to Facebook when signing up. “To continue using Facebook, you’ll need to confirm your email,” the message demands. “Since you signed up with [email address], you can do that automatically …”A form below the message asked for the users’ “email password.”

      So Facebook tries to get users to hand over the password for their private, non-Facebook email account.

      This practice is called spear phishing.

      I find it somewhat interesting to note that, of the 246 public annotations on this page using Hypothes.is, from what I can tell as of 4/2/2019 only one is a simple highlight. All the rest are highlights with an annotation or response of some sort.

      It makes me curious to know what the percentage distribution of these two types is on the platform. Is it the case that in classroom settings, in which many of these annotations appear to have been made, much of the use of the platform dictates more annotations (versus simple highlights) due to the performative nature of the process?

      Is it possible that there are a significant number of highlights which are simply hidden because the platform automatically defaults these to private? Is the friction of making highlights so high that people don't bother?

      I know that Amazon will indicate heavily highlighted passages in e-books as a feature to draw attention to the interest relating to those passages. Perhaps it would be useful/nice if Hypothes.is would do something similar, but make the author of the highlights anonymous? (From a privacy perspective, this may not work well on articles with a small number of annotators, as the presumption could be that the "private" highlights would most likely be directly attributed to those who also made public annotations.)

      Perhaps the better solution is to default highlights to public and provide friction-free UI to make them private?

      A heavily highlighted section by a broad community can be a valuable thing, but surfacing it can be a difficult thing to do.

  9. Mar 2019
    1. As one of 13 million officially designated “discredited individuals,” or laolai in Chinese, 47-year-old Kong is banned from spending on “luxuries,” whose definition includes air travel and fast trains.
    2. Discredited individuals have been barred from taking a total of 17.5 million flights and 5.5 million high-speed train trips as of the end of 2018, according to the latest annual report by the National Public Credit Information Center. The list of “discredited individuals” was introduced in 2013, months before the State Council unveiled a plan in 2014 to build a social credit system by 2020.

      This is what surveillance capitalism brings. This is due to what is called China's "Golden Shield", a credit-rating system that, for example, brings your credit level down if you search for terms such as "Tiananmen Square protest" or post "challenging" pictures on Facebook.

      This is surveillance capitalism at its worst, creating a new lower class for the likes of Google, Facebook, Microsoft, Amazon, and insurance companies. Keep the rabble away, as it were.

    1. Amazon has been beta testing the ads on Apple Inc.’s iOS platform for several months, according to people familiar with the plan. A similar product for Google’s Android platform is planned for later this year, said the people, who asked not to be identified because they’re not authorized to share the information publicly.

      Sounds like one of the best reasons I've ever heard to run Brave Browser both on desktop and mobile. https://brave.com/

    1. Sharing of user data is routine, yet far from transparent. Clinicians should be conscious of privacy risks in their own use of apps and, when recommending apps, explain the potential for loss of privacy as part of informed consent. Privacy regulation should emphasise the accountabilities of those who control and process user data. Developers should disclose all data sharing practices and allow users to choose precisely what data are shared and with whom.

      Horrific conclusion, which clearly states that "sharing of user data is routine" where the medical profession is concerned.

    2. To investigate whether and how user data are shared by top rated medicines related mobile applications (apps) and to characterise privacy risks to app users, both clinicians and consumers.

      "24 of 821 apps identified by an app store crawling program. Included apps pertained to medicines information, dispensing, administration, prescribing, or use, and were interactive."

  10. Feb 2019
    1. Less than a third of the apps that collect identifiers take only the Advertising ID, as recommended by Google's best practices for developers.

      More than two-thirds of the apps that collect identifiers violate Google's Advertising ID best practices

    1. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”

      surveillance capitalism or pay-for-privacy capitalism knocking on the door...

    2. though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.

      I doubt the vast majority of the most active FB users are "affluent"

    1. Growing Focus on Measuring Learning

      This topic belongs here, but I would have liked to see an acknowledgement about privacy concerns related to measuring learning. How are we engaging students in the design of this work?

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      "Intelligence checks" is very concerning here, as it suggests pretty much what has already been leaked: that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility of discriminatory algorithms, since most of these are probably rooted in machine learning techniques, and the criminal justice system in the US already tends to be biased against certain groups of people.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation: the training data for the intelligence checks the US runs on Global Entry data is biased against certain groups of people.

    4. for as long as your fingerprints and associated information are retained in NGI, your information may be disclosed pursuant to your consent or without your consent.

      Meaning they can disclose your information with or without your consent.

    5. people enrolled in, or applying to, the program consent to have their personal data added to the FBI’s Next Generation Identification (NGI) database, shared with “federal, state, local, tribal, territorial, or foreign government agencies”, and DHS third-party “grantees, experts, [and] consultants” forever.

      So it's not just shared with the US government but with government agencies of any country. And "third-party experts" pretty much opens it up for personal information to be shared with anyone.

    1. as part of the application process, TSA collects a cache of personal information about you, including your prints. They’re held in a database for 75 years, and the database is queried by the FBI and state and local law enforcement as needed to solve crimes at which fingerprints are lifted from crime scenes, according to Nojeim. The prints may also be used for background checks.

      While Global Entry itself only lasts for 4 years, the data you give them and allow them to store lasts for almost your entire life.

    1. by providing their passport information and a copy of their fingerprints. According to CBP, registrants must also pass a background check and an interview with a CBP officer before they may be enrolled in the program

      I was at my Global Entry interview (not at all sure I made the right decision to apply) and a person who already had Global Entry came into the room because he had gotten flagged. The woman at the desk asked him if he had ever been arrested; he said no. She said their new system (they continuously update it with new algorithms to find this info) had flagged a police incident that had happened before he applied for Global Entry. He hadn’t been arrested and wasn’t guilty of any crime, but his name had apparently made it into some police report, and that gave them cause to question him when he re-entered the country.

    2. including data breaches and bankruptcy, experienced by “Clear,” a similar registered traveler program

      Clear was another registered traveler program that had a breach of travelers' personal information, so it is not unreasonable to be cautious of Global Entry, which holds the same information with the same legal protections in place (or lack thereof).

    1. Both afford us the opportunity to learn with others, but they are very different environments with different potential risks and benefits.

      As mentioned earlier in this article, experiences that incorporate private and public contexts can help people advance their understanding and facility in negotiating these different spaces.

  11. Jan 2019
    1. What is being fixed is not these internet giants but blockchain itself. The cryptocurrency startups that promised to liberate the world from the shackles of capitalism cannot now even guarantee returns for their own employees. Facebook's approach is to assemble the pieces of blockchain and ride the trend, making it easier for shareholders to accept.

      Comment: Lu Xun once wrote, "In my yard there are two trees; one is a jujube tree, and the other is also a jujube tree." Interestingly, Facebook, which built its fortune by "extracting" the commercial value of users' privacy, has a founder, Zuckerberg, who bought the four houses surrounding his own to avoid harassment by paparazzi. Now we can answer: what is beside the walled garden? It's another walled garden.

  12. Dec 2018
    1. Instagram, another network owned by the company. "Why should anyone keep believing in Facebook?" ran one of the published articles.

      Willing to find refuge, to escape from one's own mind. The big winners: their realities replicated in millions of minds.

  13. Nov 2018
    1. Does the widespread and routine collection of student data in ever new and potentially more-invasive forms risk normalizing and numbing students to the potential privacy and security risks?

      What happens if we turn this around - given a widespread and routine data collection culture which normalizes and numbs students to risk as early as K-8, what are our responsibilities (and strategies) to educate around this culture? And how do our institutional practices relate to that educational mission?

  14. Oct 2018
    1. how do we help students navigate privacy issues in learning spaces augmented with social/digital media. There was a specific request for examples to walk students through this. Here is what I do.

      I'm a little unnerved by the semi-legal nature of the "Interactive Project Release Form" but I think it's a great model (whether really legally enforceable or just a class constitution-type document).

  15. Sep 2018
    1. // Download a json but don't reveal who is downloading it
       fetch("sneaky.json", {referrerPolicy: "no-referrer"})
         .then(function(response) { /* consume the response */ });

       // Download a json but pretend another page is downloading it
       fetch("sneaky.json", {referrer: "https://example.site/fake.html"})
         .then(function(response) { /* consume the response */ });

       // You can only set same-origin referrers.
       fetch("sneaky.json", {referrer: "https://cross.origin/page.html"})
         .catch(function(exc) {
           // exc.name == "TypeError"
           // exc.message == "Referrer URL https://cross.origin/page.html cannot be cross-origin to the entry settings object (https://example.site)."
         });

       // Download a potentially cross-origin json and don't reveal
       // the full referrer URL across origins
       fetch(jsonURL, {referrerPolicy: "origin-when-cross-origin"})
         .then(function(response) { /* consume the response */ });

       // Download a potentially cross-origin json and reveal a
       // fake referrer URL on your own origin only.
       fetch(jsonURL, {referrer: "https://example.site/fake.html", referrerPolicy: "origin-when-cross-origin"})
         .then(function(response) { /* consume the response */ });
  16. Aug 2018
    1. A file containing personal information of 14.8 million Texas residents was discovered on an unsecured server. It is not clear who owns the server, but the data was likely compiled by Data Trust, a firm created by the GOP.

    1. By last year, Google’s parent, Alphabet, was spending more money on lobbyists than any other corporation in America.
    2. “Under this law, the attorney general of California will become the chief privacy officer of the United States of America,” Mactaggart argued.
    3. “Silicon Valley’s model puts the onus on the user to decide if the bargain is fair,” Soltani told me recently. “It’s like selling you coffee and making it your job to decide if the coffee has lead in it.” When it comes to privacy, he said, “we have no baseline law that says you can’t put lead in coffee.”

      An interesting analogy for privacy

    1. Google also says location records stored in My Activity are used to target ads. Ad buyers can target ads to specific locations — say, a mile radius around a particular landmark — and typically have to pay more to reach this narrower audience. While disabling “Web & App Activity” will stop Google from storing location markers, it also prevents Google from storing information generated by searches and other activity. That can limit the effectiveness of the Google Assistant, the company’s digital concierge. Sean O’Brien, a Yale Privacy Lab researcher with whom the AP shared its findings, said it is “disingenuous” for Google to continuously record these locations even when users disable Location History. “To me, it’s something people should know,” he said.
    2. Sen. Mark Warner of Virginia told the AP it is “frustratingly common” for technology companies “to have corporate practices that diverge wildly from the totally reasonable expectations of their users,” and urged policies that would give users more control of their data. Rep. Frank Pallone of New Jersey called for “comprehensive consumer privacy and data security legislation” in the wake of the AP report.
    3. Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.” That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking. (It’s possible, although laborious, to delete it.)
    4. Storing your minute-by-minute travels carries privacy risks and has been used by police to determine the location of suspects — such as a warrant that police in Raleigh, North Carolina, served on Google last year to find devices near a murder scene. So the company lets you “pause” a setting called Location History.
    1. recording it all in a Twitter thread that went viral and garnered the hashtag #PlaneBae.

      I find it interesting that The Atlantic files this story with a URL that includes "/entertainment/" in its path. Culture, certainly, but how are three seemingly random people's lives meant to be classified by such a journalistic source as "entertainment"?

    1. I am not, and will never be, a simple writer. I have sought to convict, accuse, comfort, and plead with my readers. I’m leaving the majority of my flaws online: Go for it, you can find them if you want. It’s a choice I made long ago.
  17. Jul 2018
    1. where applicable, any rating in the form of a data trust score that may be assigned to the data fiduciary under section 35; and

      A Data Trust score. Thankfully, it isn't mandatory to have a data trust score, which means that apps and services can exist without one.

    2. the period for which the personal data will be retained in terms of section 10 or where such period is not known, the criteria for determining such period;

      This defines the terms for data retention. From a company perspective, they are likely to keep this as broad as possible.

    3. Upon receipt of notification, the Authority shall determine whether such breach should be reported by the data fiduciary to the data principal, taking into account the severity of the harm that may be caused to such data principal or whether some action is required on the part of the data principal to mitigate such harm.

      This means that users aren't always informed about a breach of data. That's the prerogative of the Data Protection Authority, and not mandatory, in the interest of the user.

    4. “Personal data breach”means any unauthorised or accidental disclosure, acquisition, sharing, use, alteration, destruction, loss of access to, of personal data that compromises the confidentiality, integrity or availability of personal data to a data principal;

      Personal data breach here includes "accidental disclosure" as well.

    5. Notwithstanding anything contained in sub-sections (1) and (2), the Act shall not apply to processing of anonymised data.

      The law isn't applicable to anonymised data. However, it doesn't deal with pseudonymised data.

    6. in connection with any activity which involves profiling of data principals within the territory of India.

      This clause gives the law jurisdiction over data of Indian residents or visitors that is processed beyond the physical boundaries of India.

    7. in connection with any business carried on in India, or any systematic activity of offering goods or services to data principals within the territory of India; or

      Since the Internet is boundary-less, this law will apply to all online services that are being consumed in India: apps downloaded, websites viewed.

    8. Where the data principal withdraws consentfor the processing of any personal data necessary for the performance of a contract to which the data principal is a party, all legal consequences for the effects of such withdrawal shall be borne by the data principal.

      How does it serve public interest and individual rights to hold people liable for the withdrawal of consent to the processing of their personal data?

    1. Privacy

      Privacy is super important! I'm glad they reference this.

    1. challenging and time-consuming

      I'd agree with all of the challenges identified here. Understanding these is useful in designing ways to help support faculty and staff regarding OEP. An additional challenge that emerged in my recent research on OEP was faculty concerns regarding privacy and identity -- this included defining (and continually negotiating) personal/professional & teacher/student boundaries in their open practice. Exploring such tensions is an important part of supporting faculty and staff consideration/exploration of open practices.

    1. But Blair is not just posting about her own life; she has taken non-consenting parties along for the ride.
    2. A friend of mine asked if I’d thought through the contradiction of criticizing Blair publicly like this, when she’s another not-quite public figure too.

      Did this really happen? Or is the author inventing it to defuse potential criticism, since she's writing about the same story herself and only helping to propagate it?

      There's definitely a need to write about this issue, so kudos for that. Ella also deftly leaves out the name of the mystery woman, I'm sure on purpose. But she does include enough breadcrumbs to make the rest of the story discoverable, so that one could jump from here to participate in the piling on. I do appreciate that it doesn't appear that she's given Blair any links in the process, which for a story like this is some subtle internet shade.

    3. the woman on the plane has deleted her own Instagram account after receiving violent abuse from the army Blair created.

      Feature request: the ability to make one's social media account "disappear" temporarily while a public "attack" like this is happening.

      We need a great name for this. Publicity ghosting? Fame cloaking?

    4. Even when the attention is positive, it is overwhelming and frightening. Your mind reels at the possibility of what they could find: your address, if your voting records are logged online; your cellphone number, if you accidentally included it on a form somewhere; your unflattering selfies at the beginning of your Facebook photo archive. There are hundreds of Facebook friend requests, press requests from journalists in your Instagram inbox, even people contacting your employer when they can’t reach you directly. This story you didn’t choose becomes the main story of your life. It replaces who you really are as the narrative someone else has written is tattooed onto your skin.
    5. We actively create our public selves, every day, one social media post at a time.
    1. Privacy advocates tried to explain that persuasion was just the tip of the iceberg. Commercial databases were juicy targets for spies and identity thieves, to say nothing of blackmail for people whose data-trails revealed socially risky sexual practices, religious beliefs, or political views.
    1. In fact, these platforms have become inseparable from their data: we use “Facebook” to refer to both the application and the data that drives that application. The result is that nearly every Web app today tries to ask you for more and more data again and again, leading to dangling data on duplicate and inconsistent profiles we can no longer manage. And of course, this comes with significant privacy concerns.
  18. Jun 2018
    1. IDEAS FOR TECHNICAL MECHANISMS

      A technique called differential privacy provides a way to measure the likelihood of negative impact and also a way to introduce plausible deniability, which in many cases can dramatically reduce risk exposure for sensitive data.

      Modern encryption techniques allow a user’s information to be fully encrypted on their device, but using it becomes unwieldy. Balancing the levels of encryption is challenging, but can create strong safety guarantees. Homomorphic encryption can allow certain types of processing or aggregation to happen without needing to decrypt the data.

      Creating falsifiable security claims allows independent analysts to validate those claims, and invalidate them when they are compromised. For example, by using subresource integrity to lock the code on a web page, the browser will refuse to load any compromised code. By then publishing the code’s hash in an immutable location, any compromise of the page is detectable easily (and automatically, with a service worker or external monitor).

      Taken to their logical conclusion these techniques suggest building our applications in a more decentralized way, which not only provides a higher bar for security, but also helps with scaling: if everyone is sharing some of the processing, the servers can do less work. In this model your digital body is no longer spread throughout servers on the internet; instead the applications come to you and you directly control how they interact with your data.
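
      A minimal sketch of the first idea, in JavaScript: randomized response is one of the simplest mechanisms that satisfies differential privacy. The function names and the 50/50 coin-flip parameters below are my own illustrative assumptions, not anything from the quoted text.

        // Randomized response: each user answers a sensitive yes/no question
        // truthfully only with probability 1/2, and otherwise answers at random.
        // Any single answer is plausibly deniable (the scheme satisfies
        // ln(3)-differential privacy), yet the true rate survives in aggregate.
        function randomizedResponse(truth) {
          if (Math.random() < 0.5) return truth;  // first flip: answer honestly
          return Math.random() < 0.5;             // second flip: answer at random
        }

        // Invert the noise: P(reported yes) = 0.5 * trueRate + 0.25,
        // so trueRate = 2 * observedRate - 0.5.
        function estimateTrueRate(answers) {
          const observed = answers.filter(Boolean).length / answers.length;
          return 2 * observed - 0.5;
        }

        // Demo: 100,000 users, 30% of whom would truthfully answer "yes".
        const truths = Array.from({length: 100000}, () => Math.random() < 0.3);
        console.log(estimateTrueRate(truths.map(randomizedResponse)).toFixed(3)); // ~0.300

      No individual answer can be held against the person who sent it, which is exactly the plausible deniability the passage describes, while the aggregate statistic stays useful.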
  19. May 2018
  20. Apr 2018
    1. What can we build that would allow people to 1.) annotate terms of service related to tools they adopt in a classroom? and 2.) see an aggregated list of all current annotations. Last, if we were to start critically analyzing EdTech Terms of Service, what questions should we even ask?

    1. A purpose that is vague or general, such as for instance ‘Improving users’ experience’, ‘marketing purposes’, or ‘future research’ will – without further detail – usually not meet the criteria of being ‘specific’”.

      I see a lot of cookie notices that give vague reasons like "improving user experience". Specifically disallowed by GDPR?

    2. The GDPR permits the opt-out approach when the purposes that the companies want to use the data for are “compatible” with the original purpose for which personal data were shared by users.[6] In addition to the opt-out notice, users also have to be told of their right to object at any time to the use of their data for direct marketing.[7]

      GDPR can allow opt out rather than opt in.

    1. The alternative, of a regulatory patchwork, would make it harder for the West to amass a shared stock of AI training data to rival China’s.

      Fascinating geopolitical suggestion here: Trans-Atlantic GDPR-like rules as the NATO of data privacy to effectively allow "the West" to compete against the People's Republic of China in the development of artificial intelligence.

    1. Data Re-Use. Contractor agrees that any and all Institutional Data exchanged shall be used expressly and solely for the purposes enumerated in the Agreement. UH Institutional Data shall not be distributed, repurposed or shared across other applications, environments, or business units of the Contractor. The Contractor further agrees that no Institutional Data of any kind shall be revealed, transmitted, exchanged or otherwise passed to other vendors or interested parties except on a case-by-case basis as specifically agreed to in writing by a University officer with designated data, security, or signature authority.

      Like this clause. Wonder if this is the exception or the rule in Uni procurement deals these days?

  21. Mar 2018
  22. Feb 2018
    1. We will not require a child to provide more information than is reasonably necessary in order to participate in an online activity.
  23. Jan 2018
  24. Dec 2017
    1. Starting Tuesday, any time someone uploads a photo that includes what Facebook thinks is your face, you’ll be notified even if you weren’t tagged.

      This is eerily like in the book The Circle where facial recognition is done over all photos and video on the web--including CCTV. No more secrets.

    1. Projects by IF is a limited company based in London, England. We run this website (projectsbyif.com) and its subdomains. We also use third party services to publish work, keep in touch with people and understand how we can do those things better. Many of those services collect some data about people who are interested in IF, come to our events or work with us. Here you can find out what those services are, how we use them and how we store the information they collect. If you’ve got any questions, or want to know more about data we might have collected about you, email hello@projectsbyif.com. This page was published on 25 August 2017. You can see any revisions by visiting the repository on Github.

      As you'd expect, If's privacy page is fantastic

  25. Nov 2017
    1. While the teacher can correlate individual responses with the children’s names, no one else—not the app, not the museum—has any personal information about the learners.
    2. creates a highly personalized experience for the children while simultaneously alleviating privacy concerns.
    1. The users of a website known as Ashleymadison.com, which was used by people who wanted to have secret relationships, had the names of 30 million of its users released. This resulted in two suicides which were linked to the disclosure. The article talks about the "illusion" of internet security: if someone knows how to, they can steal sensitive data and ruin lives. It shows that internet data is never truly safe.

      I completely agree that today hackers can get into almost any device or software, and this needs to be dealt with ASAP, as people who are exposed suffer miserably, not only from depression but from enormous pressure too.

    1. Yes, it is very probable that you can due to the high probability that there is only one person of a specific gender and D.O.B., living in your zip code. However, it is possible that there could be a few people of the same demographic all living in a larger city.

      I do agree with the possibility of being able to trace someone based on all three attributes. I never really considered the likelihood of sharing the same demographics when living in a larger city.

    1. Every site you access and every vendor you purchase from keeps data on you and so does your computer. I think it is very important for everyone to be aware of this. If you access unreputable sites, it could be used against you in a job search for instance.

      I agree that everyone should be aware that every activity on the Internet is monitored. With that data, other people can interfere with our lives; for example, our credit card information could be stolen.

    1. It is likely that you can. Because odds are there is only one person that is the same age and birth day that lives in your zip code. But it is possible that you would only have a couple options.

      I agree; one study found that credit card users could be re-identified 90% of the time based solely on information that was not personal to them.
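
      A back-of-the-envelope estimate shows why gender plus date of birth plus ZIP code is usually enough. The numbers below (about 10,000 residents in an average ZIP code, an 80-year spread of birth dates) are illustrative assumptions on my part, not figures from this discussion; the sketch is in JavaScript:

        // How many people, on average, share one gender + date of birth + ZIP code?
        const zipPopulation = 10000;       // assumed average residents per ZIP code
        const combinations = 2 * 365 * 80; // gender x day-of-year x birth year = 58,400

        console.log((zipPopulation / combinations).toFixed(2)); // ~0.17 people per combination

      With far more demographic combinations than residents, most combinations hold at most one person, so the triple is usually unique; only in a dense city, where a ZIP code's population rivals the number of combinations, do you get the "couple of options" mentioned above.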

  26. Oct 2017
    1. Some scholars have challenged the sorting effects of the Google search engine to highlight that its operation (1) is based on decisions inscribed into algorithms that favour and discriminate content, (2) is subject to personalization, localization, and selection, and (3) threatens privacy
    2. Openness in relation to sharing thus has multiple meanings and is a matter of political contestation that belies the positive formulations of it as a founding imaginary of cyberspace. On the one hand, it means making governments transparent, democratizing knowledge, collaborating and co-producing, and improving well-being but on the other, exposing, making visible, and opening up subjects to various known and unknown practices and interventions.[76] Along with participating and connecting, sharing generates these tensions, especially in relation to what is often reduced to as questions of privacy. This tension that openness generates increasingly creates additional demands that citizens secure themselves from and be responsible for the potential and even unknowable consequences of their digital conduct.
    3. Acts of connecting respond to a calling that persists even in light of the traceability of digital actions and concerns about privacy. Those who are making rights claims to privacy and data ownership are by far outnumbered by those who continue to share data without concern. That a data trace is a material that can be mined, shared, analysed, and acted upon by numerous people makes the imaginary of openness vulnerable to often unknown or unforeseeable acts. But digital traces also introduce another tension. Another calling, that of sharing digital content and traces, is a demand that evokes the imaginary of openness fundamental to the very architecture of shared resources and gift economy that formed the once-dominant logic of cyberspace
    1. The learning analytics and education data mining discussed in this handbook hold great promise. At the same time, they raise important concerns about security, privacy, and the broader consequences of big data-driven education. This chapter describes the regulatory framework governing student data, its neglect of learning analytics and educational data mining, and proactive approaches to privacy. It is less about conveying specific rules and more about relevant concerns and solutions. Traditional student privacy law focuses on ensuring that parents or schools approve disclosure of student information. They are designed, however, to apply to paper “education records,” not “student data.” As a result, they no longer provide meaningful oversight. The primary federal student privacy statute does not even impose direct consequences for noncompliance or cover “learner” data collected directly from students. Newer privacy protections are uncoordinated, often prohibiting specific practices to disastrous effect or trying to limit “commercial” use. These also neglect the nuanced ethical issues that exist even when big data serves educational purposes. I propose a proactive approach that goes beyond mere compliance and includes explicitly considering broader consequences and ethics, putting explicit review protocols in place, providing meaningful transparency, and ensuring algorithmic accountability.
  27. Sep 2017
    1. As Ronald Deibert recently suggested, while the Internet used to be characterized as a network of networks it is perhaps more appropriate now to see it as a network of filters and chokepoints.[4] The struggle over the things we say and do through the Internet is now a political struggle of our times, and so is the Internet itself.

    1. Co-regulation encompasses initiatives in which government and industry share responsibility for drafting and enforcing regulatory standards
    2. policy makers and scholars should explore an alternative approach known as “co-regulation.”