O(n^x) polynomial complexity. Examples: bubble sort, traversing a 2D array.
Traveling salesman question: deciding which city to go to next.
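To make the polynomial case concrete, here is a minimal bubble sort sketch in Java (the class name BubbleSortDemo is just for illustration): the two nested loops give O(n^2) comparisons, the same order as visiting every cell of an n-by-n 2D array.

// Bubble sort: nested passes over the array give O(n^2) comparisons,
// a typical polynomial-time algorithm.
public class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {          // n - 1 passes
            for (int j = 0; j < a.length - 1 - i; j++) {  // shrinking inner pass
                if (a[j] > a[j + 1]) {                    // swap out-of-order neighbors
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8};
        bubbleSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 4, 5, 8]
    }
}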
O(log n) divide-and-conquer algorithms. Example: binary search to find an item in a sorted list.
Each step divides the search range in half.
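A minimal binary search sketch in Java (names are illustrative): each iteration halves the remaining range, which is exactly where the O(log n) bound comes from.

// Binary search on a sorted array: each loop iteration discards half of
// the remaining range, so the worst case is O(log n) comparisons.
public class BinarySearchDemo {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // midpoint without (lo + hi) overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;  // discard left half
            else hi = mid - 1;                       // discard right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 5, 7, 11, 13};
        System.out.println(binarySearch(data, 7)); // 3
        System.out.println(binarySearch(data, 4)); // -1
    }
}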
The game requires a generic ArrayList class, but you’re not allowed to use Java’s built-in ArrayList, so you’ll have to supply your own. Your ArrayList needs to be generic and implemented with a linked list. (If you see Java’s ArrayList being imported, you need to comment out that import statement.)
Write a generic ArrayList implemented with a linked list.
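Below is a minimal sketch of what the assignment asks for, assuming the game only needs add/get/remove/size; the class name MyArrayList and the exact method set are my own assumptions, not the assignment's spec. It is generic and backed by a singly linked list (with a tail pointer for O(1) appends), so no java.util.ArrayList import is needed.

// MyArrayList: a generic list backed by a singly linked list, so the
// "no java.util.ArrayList" rule is respected.
public class MyArrayList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;
    private Node<T> tail;
    private int size;

    /** Appends value at the end. O(1) because we keep a tail pointer. */
    public void add(T value) {
        Node<T> node = new Node<>(value);
        if (head == null) { head = tail = node; }
        else { tail.next = node; tail = node; }
        size++;
    }

    /** Returns the element at index by walking the chain. O(n). */
    public T get(int index) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException("index: " + index);
        Node<T> cur = head;
        for (int i = 0; i < index; i++) cur = cur.next;
        return cur.value;
    }

    /** Removes and returns the element at index. O(n). */
    public T remove(int index) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException("index: " + index);
        if (index == 0) {
            T out = head.value;
            head = head.next;
            if (head == null) tail = null;
            size--;
            return out;
        }
        Node<T> prev = head;
        for (int i = 0; i < index - 1; i++) prev = prev.next;
        T out = prev.next.value;
        prev.next = prev.next.next;   // unlink the removed node
        if (prev.next == null) tail = prev;
        size--;
        return out;
    }

    public int size() { return size; }

    public static void main(String[] args) {
        MyArrayList<String> list = new MyArrayList<>();
        list.add("a"); list.add("b"); list.add("c");
        System.out.println(list.get(1));    // b
        System.out.println(list.remove(0)); // a
        System.out.println(list.size());    // 2
    }
}

Because the backing store is a linked list, get and remove are O(n) walks rather than the O(1) index lookups of an array-backed list; that trade-off is inherent to the assignment's requirement.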
BOND: I mean, bluntly, Rachel, they see demand for this product. So as you said, Facebook's rules prohibit kids under 13 from signing up. You have to put in a birth year when you sign up. The company says it kicks people off if it finds out that they misrepresented their age. But even Mark Zuckerberg acknowledged at a congressional hearing back in March, some kids do lie about their age to use these apps.
Why would Facebook push a kids' version of Instagram when it knows about all these harms?
Because they see the demand for this product. Even Mark Zuckerberg acknowledged that some kids lie about their age to use these apps.
SHANNON BOND, BYLINE: Well, they lay them out in - their concerns - in this letter they wrote to Facebook CEO Mark Zuckerberg. And basically they're saying children are just not equipped to navigate the challenges of social media, so things like knowing what's appropriate to share, who can see what they share. They listed fears about cyberbullying and exposure to online predators. And they pointed to research suggesting that use of social media could be connected with things like depression, lower self-esteem, body image concerns. Remember, you know, Instagram is a visual platform. It's full of selfies. And these AGs aren't the only ones concerned. Child safety groups and members of Congress have also raised alarm about this idea.
The harms of using Instagram.
A bipartisan group of 44 attorneys general is telling Facebook to scrap its plans for an Instagram for kids. They're worried about children's safety, privacy, their mental health. Facebook says even though kids under the age of 13 aren't supposed to sign up for the photo sharing app now, many still do. So it wants to build a version just for them. NPR's Shannon Bond has been following this story. We should note, Facebook is among NPR's financial supporters. Shannon, explain the argument the AGs are making.
Because quite a few underage users sign up for the photo-sharing app anyway, Meta wants to build a version of Instagram specifically for people under 13.
Yeah. So, I mean, our intentions, like I said before--we're trying to help, like many other companies and many other people. There's been an international response to the crisis in Ukraine, and everyone's been trying to help. So, we were just thinking along the same lines: how do we help as a company? How do we provide something that could be, you know, useful? And the response has been just way more than we could ever imagine in terms of the success and the ability we've had to help them.

So, I mean, you know, we're a very mission-driven company. We support law enforcement here in the United States. And we've had, you know, our fair share of criticism. But ever since we've had it, what kept us going as a company, what kept us motivated, is hearing every day these success stories from our customers like Homeland Security and the FBI, where they've been able to, you know, ID child molesters. And so I think it's just the natural kind of cycle that happens with any new technology, where at first it can be misunderstood--there are so many misunderstandings about what Clearview is and how it works. Many people think it's a real-time service. When they realize it's an after-the-crime investigative service, they're a lot more comfortable with it. And we see major events--when January 6 happened and our technology was able to ID many of the Capitol rioters, there's more acceptance of it. So, kind of my job, and the job of the company, is to continue to educate people on how it's actually used in practice, so that legislatures, and now more people in government, are able to make the right decisions about, you know, how to regulate this software.

We do think regulation is important. And any new technology goes through that. So, when the car, the automobile, was invented, there weren't any street signs. There were no, you know, stop signs, traffic lights, or anything like that. But once we adopted the technology--and society had talks about what it's good for, what it's not good for; I mean, you could take a car and drive it into a building, or you can get it from A to B--then, you know, the regulations came along. There have been seatbelts. There have been a lot of, you know, safety features. And we think it's the same for facial recognition, and we welcome that. And I think what we're here for is just to talk about, you know, the good use cases that are possible with it. And we've been really surprised ourselves every day about all the types of crimes it's been able to solve. So, when it comes to what we do as a company, you know, we've always been mission-driven, using this technology for good, to help people and make society a lot safer.
Their perspective on promoting the company through this war.
Yeah, I think that's one of the ideas that we thought of, because we've seen it be able to identify people who have been deceased previously. But we didn't think it would be as important as it turned out to be. So, some of the examples that I've seen, where they would have someone with an identification and someone without--this is something that would not be possible in a previous time, where, again, you won't have a database of DNA or fingerprints or anything like that. So, you know, war is very gruesome. And, you know, any kind of war zone is dangerous, and this technology can help decrease, you know, misidentification, so you really know who you're dealing with. So, we take everyone through all these different scenarios, and each scenario is a way to, you know, make things safer and better. And in the case of, you know, victims of war, I think that it's really unfortunate that there are people in Russia who really believe that this war is not happening. They don't know where their family members are, or what's really happening. And so I think a sense of closure could be very helpful.
They didn't expect the technology to be used for identifying the dead. He also thinks it's unfortunate that some people in Russia still believe the war is not happening. He feels good about how this technology is being used.
Yeah, I think that's a great question. At the end of the day, facial recognition is a technology, and it can be deployed in many different ways. The way we like to deploy it at Clearview is in after-the-fact investigations. So, you know, authoritarian countries are deploying it right now anyway, regardless--and they've developed their own facial recognition technology. So, what we want to do is help try to set an example of good use cases and how it can be used in a positive way. So, I don't think it's inevitable, if that makes any sense, that this technology is going to be deployed in the same way here in the West. But as for the risks of authoritarian countries using it, I'd just say they are already using it in a lot of different ways. And just because we develop a technology here doesn't mean we're going to sell it to those kinds of countries.
Question: Does the potential harm from the use of facial recognition software by authoritarian governments outweigh the benefits of this technology?
MR. TON-THAT: Yeah. Thanks, Drew. I'm glad you brought up that question. So, you know, there's no political motivation to Clearview. We have people from every side of the political spectrum, from the left and the right, that work here. There's no left-wing way or right-wing way to help law enforcement catch a pedophile or solve any kind of crime. You know, Charles Johnson is someone we met in 2016 who made some introductions, but he's not a co-founder of Clearview. He's never been an employee, a director, or on the board, and never had any active involvement in the company.

What I can tell you is who I am and where I'm from. I was born and raised in Australia. And I've spent my entire career in technology since I moved to the Bay Area when I was 19. So that's been my focus: always on technology, and making sure that this is something that's used in the public interest. And so there's no connection there.
They welcome right-wing and left-wing people to work together.
Sure. So, we have a wide variety of investors from all different backgrounds--a lot of family offices and institutional funds as well--at various stages of the company. So, we're well capitalized: in our last round, by institutional investors and larger family offices, and some of them would rather just, you know, invest, and they help out and support us that way. We have, like, Naval Ravikant, who was an early investor in my previous companies in the Bay Area, where I used to live. And so we're very appreciative of all our investors for their support. And as the company grows, you know, the investor base has changed to become more, you know, serious institutional money. And they've been, you know, big believers in the mission from day one, each one of them, and continue to support us.
Investors come from different stages and backgrounds.
Sure. So, what we do, when we talk to the customers, is suggest and highly encourage that they have a facial recognition policy, so that they're talking to the public about what crimes it's used for and what crimes it's not. In the process of training these organizations, we get a very good sense of how seriously they take the technology, the tool, and the use cases that they have. And everyone who is using it knows that an administrator in their agency is overseeing the searches and can audit those at any time. And so we give these agencies all the tools they need to communicate with the public about how they use it. And also, they can easily generate reports on the type of usage.

Now, you know, we're not perfect. We can't see everything that's happening. I don't think it's our role to monitor every search. But we're considering, and we're always thinking about, more ways to make this a safe technology in terms of deployment. And on the flip side, you know, we've never had any wrongful arrest or misidentification due to use of our technology. And, you know, we weigh that against the amazing positive use cases in terms of solving crimes against children, or the Capitol riots, and, you know, these really major things. We think we're striking the right balance.
Question: What sorts of systems do you have to ensure that the FBI isn't misusing this tool? What kind of insight do you have into what searches they're able to run?
So, we're actually a very small startup in comparison to many other companies. We're around 50 employees now; we were about 10 employees when we were written about on the front page of The New York Times. So, we really care about making sure this technology is used for the right purposes and by the right people. So, when it comes to, you know, anything international, we will never sell to China, Iran, or North Korea, or any country that's partially sanctioned by the U.S. And when we look at these countries--one thing about Clearview is that it's deployed as a cloud service. So, they pay for subscriptions. So, if there are any egregious violations of any terms of service we have, or they don't follow a lot of the policies, then we have the ability as a company to revoke access. So, this is a technology that you can take back if you see, you know, major egregious abuse. So, I think that we've taken a slower approach, making sure we get the technology right and learning as much about it here in the U.S.

So, the Government Accountability Office wrote two reports last year on the deployment of facial recognition, and they made a lot of suggestions and guidelines about how to deploy it in law enforcement contexts--for example, making sure that there are cybersecurity audits, that audits of the search history of every person who has access to the technology are possible, and that there is training. So, having figured out a lot of the model here in the U.S., when we look at other countries on a case-by-case basis, we want to make sure that we're comfortable that they are using it for the right reasons. We'd never want this to be used to, you know, surveil journalists, or in any kind of abusive way.
Question: How do they ensure that people using their technology are not abusing it?
The program is deployed in the cloud, which means they can revoke a user's access when they find illegal usage.
The U.S. Government Accountability Office has issued guidelines on how to deploy facial recognition.
So, I would say that we're very proud to have really high rankings on NIST. So, NIST tested over 650 algorithms from around the world. And if you take the average of the different categories in there, Clearview AI ranked second, with number one being a Chinese company called SenseTime. And so I think you'd always want more accuracy when it comes to facial recognition. But I'd draw a quick, you know, sharp distinction about how it's deployed in authoritarian regimes like Russia and China. Those countries are deploying facial recognition in a completely real-time way, where it's on all the time. When we're deploying it, it's, you know, in an after-the-fact investigative manner. So, I think it's really important for the U.S. and its allies to have this technology, but we also have to make sure we use it within a moral framework.
Question: How important is it for the U.S. to have better facial recognition than China and Russia?
He emphasizes that China runs facial recognition in real time, all the time, while Clearview AI runs its technology after the fact.
He says it is important for the U.S. and its allies to have high accuracy.
Sure. So, there's been previous reporting talking about, you know, Clearview AI, and we've developed [audio distortion] prototypes, different versions of our technology for retail and banking. But this dataset is used just by governments. There's no non-governmental use of this dataset at this time. Facial recognition is used everywhere today, from ID verification in banking, financial services, airports, in many other places, on a consent basis. And I think the consent-based version of facial recognition is the least controversial one, where the end user is opting in. So, I think it's still very early in terms of understanding this technology and its implications. And what we've seen in the last two, three years, with this technology deployed in law enforcement, is a ton of positive stories about being able to stop and prevent crime and help victims. So, I think facial recognition technology has the potential to be a way to prevent crime and fraud as well, because law enforcement can only do so much when it comes to solving crimes, unfortunately.
Question: You had said that Clearview was interested in potentially moving beyond U.S. law enforcement, beyond the kind of public-safety examples you've brought up, to potentially working in the financial sector with stores and banks, and also potentially doing an international expansion.
Facial recognition is used everywhere today, from ID verification in banking, financial services, airports, in many other places on a consent basis.
Not at this time. You know, we believe that the way we collect images is just like any other search engine, and, you know, this is stuff in the public domain. And for the purposes it's being used for, I think they can be very pro-social. I don't think we want to live in a world where any big tech company can send a cease and desist and then control, you know, the public square. So, I think it's an issue that is really important, because the issue of collecting publicly available online data is not just about images, but any kind of data. It affects researchers who may be, you know, studying things like discrimination or misinformation, and it affects academics and a whole wide range of other types of use cases as well.
Question: For the companies that have asked Clearview to delete these images, has Clearview done so?
MR. TON-THAT: So, first of all, I would say that the way we collect information is in compliance with all applicable laws around data collection. We have a really great legal team that handles these kinds of issues. And anything that's out there is public information. So, what I like to talk about is that this is kind of like a digital public square. And, you know, this information is public already. And if it can be used to help solve crimes and make the public safer, I think that's a really good use case of this information.

I'd say also, when it comes to the size of the database, in the context of trying to identify someone when you don't know who it is--and we have a really good example from Homeland Security of a child pornography case they were able to solve--the more photos you have, the more accurate the database is, the less biased it is, and the more likely you are to find the right person. So, every photo is a clue that could potentially, you know, identify a victim, like a child victim of child crimes, or a perpetrator of a crime. So, in this kind of context, I think that a larger database is less biased.

So, Homeland Security, in 2019, were using Clearview AI, and they had a photo of a person who was molesting a 6-year-old girl. And he was in the background of this child abuse video he was selling online. And it was just, you know, a grainy face of him. When they put it through Clearview, they found only one image in the database, and this was 3 billion images at the time. And he was in the background of someone else's publicly available gym selfie. They were able to find the location of that gym, find the employer, and eventually got the name, and now he's doing 35 years in jail, and they were able to save a 6-year-old girl. And they say that without Clearview AI, there was no way they would have caught that guy. And so that's really interesting in terms of how any photo could be a clue. Now that we have about 20 billion images, that person has other photos in the database, and those have his direct name. So, it would have saved even more time for law enforcement to catch that perpetrator and save that child.
It indicates Hoan Ton-That's perspective on why the company should be allowed to take all of these photos, which were not posted online for this purpose, and use them for a tool sold to the police.
They claim that the way they collect information is completely legal under the applicable laws around data collection.
we have a really good example from Homeland Security of a child pornography case they were able to solve--the more photos you have, the more accurate the database is, the less biased it is, the more likely you are to find the right person
He was in the background of this child abuse video he was selling online, and it was just a grainy face of him. When they put it through Clearview, they found only one image in the database, and this was 3 billion images at the time.
Facebook, Twitter, YouTube have filed cease and desist orders demanding you delete these photos from your database. They say they were scraped illegitimately. Senators have said that these photos were illegitimately obtained and have called on federal money to not go to Clearview.
And again, instead of searching with text, you upload a photo. And what's really interesting and important is that we've adopted a lot of controls around usage. So, we limit this dataset just to law enforcement. And we also [audio distortion] on responsible usage of the technology. In every case that they use--I mean, every search they use on the platform, they have to put in a case number and a crime type, and that allows these agencies to conduct effective audits of the technology.
Sure. So, these all come from the public internet. So, you can imagine whatever you find in a Google search result--it could be a mugshot website, news websites, you know, educational websites, social media--and again, this is anything that's public. So, if your settings for social media are private, those won't show up in Clearview, just like in Google. And so it's anything that's publicly available.
The latest number we have is over 3,100 that have used it in the U.S. We also have usage in Ukraine; we're now in six agencies there. And it's a technology that's had a lot of widespread adoption, because given the right training and usage, in just a few minutes law enforcement is able to set up their accounts and start solving crimes they never would have solved otherwise.
Clearview AI is a facial recognition search engine that works just like Google, but instead of putting in words or text, you upload a photo of a face. So, it's used, you know, in an after-the-fact manner--not in real time--to identify perpetrators or victims of crime. It has been used successfully in the U.S. by the FBI, Homeland Security, and many other agencies to, you know, help with human trafficking cases, crimes against children, financial fraud, and a lot more. Most notably, it was helpful to the FBI in the January 6th Capitol riots to identify a lot of the perpetrators.
That’s the score assigned to Yi by Sesame Credit, which is run by Jack Ma’s online-shopping empire Alibaba, placing the 22-year-old near the top of the scheme’s roughly 500 million–strong user base. Sesame determines a credit-score ranking—from 350 to a theoretical 950—dependent on “a thousand variables across five data sets,” according to the firm.
Yi has a Sesame Credit score of 805. Sesame Credit is run by Jack Ma's online-shopping empire Alibaba.
805: her score.
Scaling: from 350 to a theoretical 950.
It won't appear.
And so, these have constituted a new kind of marketplace, a marketplace that trades exclusively in behavioral futures, in our behavioral futures. That’s where surveillance capitalists make their money
Surveillance capitalists make their money by selling our predicted behavioral future.