
More than 2,000 organizations have used Clearview AI’s unprotected and inaccurate facial technology

NewsColony

While the technology was allegedly meant to help law enforcement solve crimes including sexual exploitation and identity theft, Clearview AI has reportedly violated Apple's developer program policies by offering government and private entities a preview of its services. As a result, Apple has suspended Clearview AI's developer account for distributing a preview of a program meant only for developers. Among those the program was shared with are U.S. Immigration and Customs Enforcement (ICE), Macy's, Walmart, and the NBA, BuzzFeed News reported. In addition, more than 600 law enforcement agencies started using Clearview AI in the past year, according to The New York Times.

Two U.S. senators, including Sen. Ed Markey of Massachusetts, sent a letter to the startup on Tuesday questioning its sharing of the application with countries like Saudi Arabia and the United Arab Emirates. “Recent reports about Clearview potentially selling its technology to authoritarian regimes raise a number of concerns because you would risk enabling foreign governments to conduct mass surveillance and suppress their citizens,” Markey wrote. To date, more than 2,200 private and public organizations worldwide, along with law enforcement agencies in 27 different countries, have tried the app, BuzzFeed News reported.

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.” Law enforcement has used facial recognition tools for almost 20 years, but those tools have been limited to searching government-provided images such as mug shots and driver’s license photos, according to The New York Times.

“Government agents should not be running our faces against a shadily assembled database of billions of our photos in secret and with no safeguards against abuse,” ACLU staff attorney Nathan Freed Wessler told BuzzFeed News. “More fundamentally, that so many law and immigration enforcement agencies were hoodwinked into using this error-prone and privacy-invading technology peddled by a company that can’t even keep its client list secure further demonstrates why lawmakers must halt use of face recognition technology, as communities nationwide are demanding.”

Studies have shown that facial recognition apps repeatedly misidentify people of color at a much higher rate than white people. Depending on the search and algorithm used, one study found that “Asian and African American people were up to 100 times more likely to be misidentified than white men,” The Washington Post reported. Many African Americans have already been misidentified as crime suspects by facial recognition tools used by law enforcement. In 2018, Microsoft, IBM, and Amazon were called out for facial recognition technology that was biased against people with darker skin tones. Their systems failed to accurately identify people with darker skin, in one case matching members of Congress to criminal mugshots and in others failing to recognize the faces of people of color at all, a clear bias built into the technology.

Apps like Clearview AI present a threat not only to people of color but to anyone they are used to search for. Women in particular are at greater risk: an app user could photograph a stranger he finds attractive and uncover not just her name but other images of her and even where she lives. In the hands of corrupt governments, apps like Clearview AI could also be used to hunt down those who speak out against the government or protest, creating potential human rights violations. While Clearview AI’s creators claim a “First Amendment right to public information,” social media sites including Twitter, Facebook, and YouTube have sent cease-and-desist letters over the app’s use of their platforms to collect data.

Source: Daily Kos NewsColony: Politics




