Facial Recognition Would Let Police Track Our Every Move

Picture a crowded street. Police are searching for a man believed to have committed a violent crime. To find him, they feed a photograph into a video surveillance network powered by artificial intelligence.

A camera, one of the thousands, scans the street, instantly analyzing the faces of everyone it sees. Then, an alert: The algorithms found a match with someone in the crowd. Officers rush to the scene and take him into custody.

But it turns out the guy isn’t the one they’re looking for ─ he just looked a lot like him. The machines were wrong.

This is what some makers of this technology fear might happen if police adopt advanced forms of facial recognition that make it easier to track wanted criminals, missing people and suspected terrorists ─ while expanding the government’s ability to secretly monitor the public.

Even as “real-time” facial recognition dazzles with its potential for crime prevention, it is raising alarms about the risks of mistakes and abuse. Those concerns are coming not only from privacy and civil rights advocates but, increasingly, from tech firms themselves.

In recent months, one tech executive has vowed never to sell his facial recognition products to police departments, and another has called on Congress to intervene. One company has formed an ethics board for guidance, and another says it might do the same. Employees and shareholders from some of the world’s biggest tech firms have pressed their leaders to get out of business with law enforcement.

"It’s not too late for someone to take a stand and keep this from happening."
“Time is winding down but it’s not too late for someone to take a stand and keep this from happening,” said Brian Brackeen, the CEO of the facial recognition firm Kairos, who wants tech firms to join him in keeping the technology out of law enforcement’s hands.
Brackeen, who is black, said he has long been troubled by facial recognition algorithms’ struggle to distinguish faces of people with dark skin, and the implications of its use by the government and police. If they do get it, he recently wrote, “there’s simply no way that face recognition software will be not used to harm citizens.”

With few scientific standards or government regulations, there is little preventing police departments from using facial recognition to target immigrants or identify participants in a political protest, critics say.
“There needs to be greater transparency around the use of these technologies,” said Rashida Richardson, director of policy research at the AI Now Institute at New York University. “And a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available.”

Technology’s spread


Facial recognition — using algorithms to match someone’s facial characteristics across photos and video — is already commonplace in many aspects of contemporary life. It is used to tag people on Facebook, to unlock iPhones and PlayStations and to focus cellphone photographs, and soon will be used to admit fans to Major League Baseball games. Most adult Americans are already in a facial recognition database of some kind, the result of governments formatting driver’s license and passport photos for such use, according to the Center on Privacy & Technology at Georgetown University Law Center.

Many law enforcement agencies — including the FBI, the Pinellas County Sheriff’s Office in Florida, the Ohio Bureau of Criminal Investigation and several departments in San Diego — have been using those databases for years, typically in static situations — comparing a photo or video still to a database of mugshots or licenses. Maryland’s system was used to identify the suspect who allegedly massacred journalists at the Capital Gazette newspaper last month in Annapolis and to monitor protesters following the 2015 death of Freddie Gray in Baltimore.

As the technology advances, “real-time” facial recognition — which involves the constant scanning of live video feeds to match moving faces with a database of still images — is starting to spread. Police in China are reportedly using it to pick suspects out of crowds, and retailers there are using it to identify customers and their buying preferences. U.S. security agencies are testing the technology in some airports and border crossings. And now systems are being designed for use by local police.
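In rough outline, such a system runs a continuous loop: grab a frame from a live feed, detect any faces in it, convert each face into a numeric signature, and compare those signatures against a watch list of still images. The sketch below shows one way that loop might look using open-source tools; the libraries (OpenCV and the face_recognition package), the camera index and the watch-list file names are illustrative assumptions, not details of any vendor's or agency's actual system.

    import cv2
    import face_recognition

    # Hypothetical watch list: one still image per person of interest.
    WATCHLIST = {"person_a": "person_a.jpg", "person_b": "person_b.jpg"}
    known_names = list(WATCHLIST)
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
        for path in WATCHLIST.values()
    ]

    camera = cv2.VideoCapture(0)  # default local camera, standing in for a street camera
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # Find every face in the frame and compute a 128-dimensional encoding for each.
        locations = face_recognition.face_locations(rgb)
        encodings = face_recognition.face_encodings(rgb, locations)
        for encoding in encodings:
            # compare_faces flags known faces whose distance falls under a tolerance --
            # a probabilistic judgment, never a certain identification.
            matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=0.6)
            for name, matched in zip(known_names, matches):
                if matched:
                    print(f"Possible match for {name} -- flag for human review")
    camera.release()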

“This is a technology that is progressing so rapidly and is coming down in cost so rapidly that in the future we should expect it to be efficient, cheap and common,” said Gregory C. Allen, an adjunct fellow at the Center for a New American Security, a Washington-based think tank. “People have gotten used to Facebook using facial recognition on them and have come up with an understanding of why and when that is acceptable.”

Too many mistakes?

But this new type of facial recognition technology has deepened concerns about mass surveillance, mistaken identifications and the unfair targeting of minorities.

That is because facial recognition has never been perfect, and probably never will be. It cannot say with 100 percent certainty that the faces in two images are the same; most current systems instead produce a score indicating how likely a match is. Police agencies can set thresholds, depending on how close a match they are looking for, and then decide how to act on the results.
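To make the score-and-threshold idea concrete, here is a minimal sketch, assuming some face-recognition model has already turned each image into a fixed-length embedding vector; the similarity measure, the 0.6 threshold and the names are all illustrative assumptions rather than details of any real deployment.

    import numpy as np

    MATCH_THRESHOLD = 0.6  # an assumed value; agencies tune this trade-off themselves

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two face embeddings: 1.0 means identical direction."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def candidate_matches(probe: np.ndarray, database: dict) -> dict:
        """Return every database entry whose score clears the threshold.

        The output is a set of scored candidates, never a certain identification;
        what to do with a 0.61 versus a 0.95 is a policy choice, not a property
        of the algorithm.
        """
        scores = {name: similarity(probe, emb) for name, emb in database.items()}
        return {name: s for name, s in scores.items() if s >= MATCH_THRESHOLD}

    # Example with random stand-in embeddings -- no real faces involved.
    rng = np.random.default_rng(0)
    db = {"suspect_1": rng.normal(size=128), "suspect_2": rng.normal(size=128)}
    probe = db["suspect_1"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
    print(candidate_matches(probe, db))

Raising the threshold cuts down on false matches but lets more true matches slip past; lowering it does the reverse. That trade-off is exactly the decision agencies are left to make for themselves.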

A system’s accuracy depends on several factors, starting with data used to “train” the algorithms. The broader the database of faces and conditions — people with varied skin tones, captured at various angles and distances and under different lighting conditions — the more accurate the algorithm will be.

Technological advances have improved the accuracy of facial recognition systems, which have evolved from old-style machine learning, based on comparisons of certain facial characteristics, to “neural networks” that take a more holistic view of faces. But the systems are still susceptible to misidentifying people of certain races. A recent MIT study found that facial recognition algorithms developed by Microsoft, IBM and China-based Face++ failed to identify black women far more frequently than white men. One of the MIT researchers, Joy Buolamwini, has also shown that facial recognition systems are unable to determine the gender of famous black women, including Oprah Winfrey, Serena Williams and Michelle Obama.

Microsoft and IBM have since announced efforts to lessen bias in their algorithms.


That reflects a shift in thinking over the last two years, as it has become clear that facial recognition algorithms are not “race neutral,” said Clare Garvie, a researcher at the Center on Privacy & Technology. “There’s an increased awareness on the part of companies that, hey, this technology isn’t magic,” she said.

A system’s accuracy can also vary based on the quality of cameras capturing video footage, the lighting conditions and how far away a camera is from someone’s face.

When police in Cardiff, Wales, ran their first test of a facial recognition system at a June 2017 soccer game, it wrongly identified thousands of people, a 92 percent “false positive” rate that authorities blamed on poor lighting, algorithm shortcomings and unfamiliarity with the system. The FBI’s facial recognition system has been found to misidentify people 14 percent of the time.
“We are at a moment where facial recognition is being marketed to communities while not being proven as public safety tools,” said Matt Cagle, an attorney for the American Civil Liberties Union of Northern California, which uncovered efforts by Amazon to market its facial recognition technology to police departments and then tested the software itself, finding that it mistakenly matched the faces of 28 members of Congress with police mugshots. “We think it’s harmful because it’s unproven and it’s been deployed in some places without any rules.”

"Facial recognition is being marketed to communities while not being proven."
In a recent demonstration for NBC News, the U.K.-based surveillance-software company Digital Barriers ─ which is marketing to U.S. police a facial recognition system that can run off footage from body cameras, surveillance cameras and cellphones ─ successfully identified a reporter as he crossed a street in suburban Virginia and at an office park that houses the firm’s U.S. headquarters. Technicians described having to set the equipment up in a way that made sure people’s faces were not obscured by shadows, and noted that the system’s accuracy depended on the type of camera used.

Tech’s soul-searching

Nobody knows for sure which law enforcement agencies are pursuing real-time facial recognition systems. There are few laws regarding the technology’s use. And many people don’t realize how easy it is to be put in a database that can be used by police for facial recognition.

That lack of scrutiny breeds distrust — not just from the public, but from within the tech industry.

“I would like to see a more public conversation,” Brackeen said. His Miami-based company develops facial recognition to safeguard consumers’ digital profiles, secure online financial transactions and allow cruise lines and theme parks to sell photographs to visitors. He announced last month that he would never sell his product to law enforcement.
“If a city council or state representatives decided it made sense, that’s a completely different thing. We are not against facial recognition’s existence,” Brackeen said. “But we are at a place where it’s being used when people don’t know it’s there, and when people have their driver’s license photo taken they have no idea they’re doing it for facial recognition.”
Microsoft President Brad Smith echoed some of Brackeen’s concerns earlier this month, calling for Congress to create “a bipartisan expert commission” to explore how to regulate facial recognition’s use.

But tech companies should not be left to regulate themselves, Smith wrote in a blog post.
“After all, even if one or several tech companies alter their practices, problems will remain if others do not,” Smith wrote. “The competitive dynamics between American tech companies — let alone between companies from different countries — will likely enable governments to keep purchasing and using new technology in ways the public may find unacceptable in the absence of a common regulatory framework.”
China has gone further than any other society to expand facial recognition, using it to create a national surveillance state in which the technology is used to shame jaywalkers and find criminal suspects in crowds at sporting events.

The potential for something similar exists in places with expansive networks of surveillance cameras ─ such as New York, Chicago or London, researchers say.

Jennifer King, director of consumer privacy at Stanford Law School’s Center for Internet and Society, said false identification is among her biggest concerns. She likened it to the use of license plate readers that aim to catch people breaking traffic laws but also identify the wrong cars ─ and make it difficult for the innocent to appeal.

If cities connect surveillance networks with live facial recognition, and then link them to municipal infrastructure, the technology could be used to accuse people of crimes or other transgressions and shut them out of public services, she said.
“My concern is that a city buys into this so deeply, and buys into a process that … forces people to defend themselves against things they haven’t done,” King said.

‘Great promise ─ and great peril’

Axon, the country’s largest supplier of police body cameras — which could one day be outfitted with facial recognition tools to scan faces from an officer’s lapel — has also acknowledged the concerns about real-time facial recognition. In April, Axon set up an ethics board of outside experts to guide the company as it explores the use of artificial intelligence, an exploration that has included filing a patent application ─ discovered recently by a technology watchdog ─ for a real-time facial recognition system.

Nearly four dozen civil rights groups sent an open letter to the board earlier this year urging the company to reject as “categorically unethical” any products that allow body cameras to use real-time facial recognition. Axon has repeatedly said it is not currently working on developing facial recognition for its devices.
“We see facial recognition as a technology which holds great promise ─ and great peril,” Steve Tuttle, an Axon spokesman, said in an email in response to news of the patent. 
“We do see a day when facial recognition, with the right controls and sufficient accuracy, could reduce bias and increase fairness in policing. However, we have elected to hold off on investing in developing this technology until we better understand the necessary controls and accuracy thresholds to ensure its benefits significantly outweigh its costs and risks.”
NEC Corporation of America, a major developer of facial recognition systems, is also considering whether to create an ethics board, said Benji Hutchinson, vice president of federal operations. The company isn’t marketing a real-time facial recognition product to American police, but has sold such technology to law enforcement elsewhere, he said.
“We hear the privacy discussion and we’re sensitive to it,” Hutchinson said. “NEC wants to be and we are a corporation that is interested in balancing the rights of citizens to privacy and law enforcement’s ability to protect public safety.”
Another big player, Amazon, came under fire after the American Civil Liberties Union of Northern California revealed the company’s efforts to sell its facial recognition software to American police forces. The findings included a deal with Orlando, Florida, where seven officers have volunteered to be subjects in a test of a system that can scan live feeds from surveillance cameras and determine whether anyone in the images matches photos in a database of wanted or missing people.

Many Amazon employees, and some of its investors, urged CEO Jeff Bezos to end the partnership, warning that the technology could be used to target minorities and immigrants.

Amazon did not back down. Neither did Orlando, which chose to continue the pilot program.

Orlando Police Chief John Mina said he wants to see if the system even works. He cited the 2017 killing of a city officer allegedly by a man who’d been wanted in the murder of his pregnant ex-girlfriend. Before his fatal confrontation with the officer, the suspect moved around the city for weeks while police searched for him.
“What if that technology had been in place and recognized his image and in turn immediately notified law enforcement ─ and then we could have responded there, or anywhere, to arrest him,” Mina said.
“Ultimately, it’s about enhancing public safety.”

One company’s plans

Orlando is the only confirmed example of a local law enforcement agency in the United States using facial recognition in real-time video, even as a test. But as a handful of companies race to create products that will give police agencies similar capabilities, real-time facial recognition has taken on an air of inevitability.
“I think we’re very close to getting the technology into our law enforcement here,” said Nicola Dickinson, a Digital Barriers vice president who runs its operations in North and South America.
The firm first introduced its real-time facial recognition system last summer, and since then it has been adopted by law enforcement agencies in Europe and Asia, and within the U.S. government, the company says — although it won’t disclose the names of those clients.

Digital Barriers says it is trying to balance the public-safety benefits of facial recognition against concerns that the technology will mushroom into a mass-surveillance apparatus.

The company says its products are equipped with tools that allow authorities to retain information about people on watch lists and ignore the rest.

While the company says facial recognition shouldn’t be used everywhere, or to look for anyone, it does not tell customers how its products can or cannot be used.
“We trust that our government has rules and regulations within their organizations to use it effectively and safely,” Dickinson said.
But, for the most part, the government does not.


Lawless frontier

There are few regulations at the federal, state or local level regarding law enforcement’s use of facial recognition. The exceptions include Oregon and New Hampshire, which ban facial recognition on police body cameras, and Maine and Vermont, which prohibit the technology’s use with police drones. 

Seven states — Maine, Missouri, New Hampshire, Vermont, Washington, Oregon and Hawaii — restrict law enforcement’s ability to use driver’s license databases for facial recognition systems, according to Garvie, of Georgetown’s Center on Privacy & Technology.

A few local governments have also stepped up. Among them are Oakland, California, which requires public input on any proposal to acquire government surveillance systems, and Seattle, Washington, which restricts police use of facial recognition to compare suspects’ images to jail mugshots — and prohibits the real-time scanning of video footage to find matches in that database.

In the rest of the country, policymaking resembles a frontier-like landscape where standards and rules are made on the fly.

"We don’t see the benefit of facial recognition software in terms of the cost, the impact to community privacy."
In Ohio, for example, the Bureau of Criminal Investigation began using facial recognition in 2013 to identify suspects from photographs or video stills. A backlash forced the agency to limit which officers can access the system, and to prohibit it from being used to monitor groups of people or their activities, Superintendent Thomas Stickrath said. The agency also formed an advisory group to help guide it through legal and ethical issues.
“This technology is helpful to law enforcement,” Stickrath said. “But like all evolving technologies, whether it’s GPS or license plate readers or body cameras, there’s a proper balance. We’re trying to find the right balance.”
Oakland, too, faced resistance after planning a citywide network five years ago that would have funneled feeds from surveillance cameras, gunshot detectors, license plate readers and other communication systems into a centralized hub monitored by authorities. The public outcry forced the city to abandon much of the project, and led to the creation of a privacy advisory commission that must review any effort by the local government to obtain technology that could affect privacy.

So if Oakland ever sought out a facial recognition system, it would be debated publicly.

That’s a big “if.”

The Oakland Police Department, Birch said, weighs surveillance technologies by their costs and benefits — measured not just in money but in public trust. “And we don’t see the benefit of facial recognition software in terms of the cost, the impact to community privacy,” Birch said. “Until we identify an incredible benefit for facial recognition, the cost is just too high.”
