The civic electrical structures from last century's wiring of our communities and homes look much like today's digital surveillance structures we are discussing this week----the only difference is that all those steps above are being done remotely.
Today, this facial recognition technology is at best being downloaded into individual technology units-----as street cameras and audio-----as building cameras and audio. We read an article yesterday that made our US 99% WE THE PEOPLE think this surveillance technology can literally track a person like CINDY WALSH-----a REAL LEFT SOCIAL PROGRESSIVE ACADEMIC-----everywhere----
Yet, a REAL LEFT SOCIAL PROGRESSIVE ACADEMIC placed on THE LIST to be followed and recorded everywhere she goes must have tons of individual citizens as REPORTERS----must have a NOSY NEIGHBOR with a house filled with all kinds of spying equipment in order to suppress the voice of a REAL LEFT SOCIAL PROGRESSIVE ACADEMIC.
THIS IS TECHNOLOGY YET TO BE DEVELOPED BUT FAR-RIGHT WING GLOBAL BANKING 1% IS MOVING FORWARD AS FAST AS THEY CAN TO MEET THIS GOAL OF DEEP, DEEP, REALLY DEEP STATE ======
Pros and Cons of Facial Recognition Technology For Your Business
Just recently TecSynt finished working on biometric facial recognition tech for a new mobile app, so we decided to talk about the things you were afraid to ask. What are the benefits and the doubts, and where is it being used successfully?
WHAT IS FACIAL RECOGNITION TECHNOLOGY?
BEHIND THE CURTAIN
It’s the fastest biometric technology, and it has one and only one purpose – to identify human faces. Forget about fingerprint readers and eye scanners: current face recognition systems analyze the characteristics of a person’s face in images taken with a digital video camera. It’s the least intrusive method; it introduces no delays and leaves subjects entirely unaware of the process.
Facial recognition tech (FRT) measures various distinguishable landmarks of facial features from approximately 80 nodal points, creating a faceprint – a numerical code. Some of these features include the length of the jawline, the shape of the cheekbones, the distance between the eyes, the depth of the eye sockets, and the width of the nose. The measurements gathered by the system are then stored in a database and compared to other detected faces when a person stands before the camera.
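The faceprint idea described above can be sketched in a few lines. This is a minimal illustration only: the landmark names and pixel coordinates below are made up, and real systems extract many more points and use learned features rather than raw distances.

```python
import math

# Hypothetical landmark coordinates (x, y) in pixels, as a face-detection
# library might report them; the names and values are illustrative only.
landmarks = {
    "left_eye":  (120, 140),
    "right_eye": (200, 142),
    "nose_tip":  (160, 190),
    "jaw_left":  (100, 250),
    "jaw_right": (220, 252),
    "chin":      (160, 300),
}

def faceprint(points):
    """Encode a face as normalized pairwise distances between landmarks."""
    names = sorted(points)
    # Normalize by the inter-eye distance so the code does not depend on
    # how close the subject stood to the camera.
    scale = math.dist(points["left_eye"], points["right_eye"])
    vector = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            vector.append(round(math.dist(points[a], points[b]) / scale, 3))
    return vector

code = faceprint(landmarks)
print(len(code))  # 6 landmarks -> 15 pairwise distances
```

Because the distances are scaled to the inter-eye distance, the same face photographed nearer or farther from the camera yields the same numerical code, which is the property a faceprint needs.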
In short, facial recognition software allows your CCTV security algorithms to trigger an alert when they identify particular individuals from a hit list. It is an irreplaceable technology both for finding shoplifters, scam artists, or potential terrorists, and for recognizing VIP customers in stores who need special attention.
HOW DOES FACIAL RECOGNITION TECHNOLOGY ACTUALLY WORK?
The mathematical algorithms of biometric facial recognition follow several stages of image processing:
The first step is for the system to collect physical or behavioral samples in predetermined conditions and during a stated period of time.
Then, all this gathered data should be extracted from the samples to create templates based on them.
After the extraction, collected data is compared with the existing templates.
The final stage of face detection technology is to decide whether the facial features of a new sample match one from the facial database or not. It usually takes just seconds.
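The four stages above (collection, template extraction, comparison, decision) can be sketched as follows. Everything here is an assumption made for illustration: the feature vectors, the enrolled names, and the 0.9 cosine-similarity threshold are invented, not any vendor's actual values.

```python
import math

def extract_template(sample):
    """Extraction stage: turn a raw feature sample into a unit-length template."""
    # In a real system this would be a learned embedding computed from the
    # face image; here the "sample" is already a small feature vector.
    norm = math.sqrt(sum(x * x for x in sample))
    return [x / norm for x in sample]

def match(template, database, threshold=0.9):
    """Decision stage: return the best-matching enrolled name, or None."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = sum(a * b for a, b in zip(template, enrolled))  # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Collection + extraction: templates from samples taken under controlled
# conditions (the numbers are invented for the sketch).
db = {
    "alice": extract_template([0.2, 0.8, 0.1, 0.5]),
    "bob":   extract_template([0.9, 0.1, 0.7, 0.2]),
}

# Comparison + decision on a fresh, slightly noisy camera sample.
probe = extract_template([0.22, 0.79, 0.12, 0.48])
print(match(probe, db))  # -> alice
```

The threshold is the tunable part: raising it trades missed matches for fewer false alarms, which is exactly the trade-off a deployed hit-list system has to make.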
But it’s hardly an exclusive example. There is the famous Facebook facial recognition technology, whose accuracy happens to beat the FBI’s systems! Every time you upload a photo and tag your friends in it, you provide enormous help to the facial recognition (FR) algorithm.
“There are more FR algorithms and techniques than there are companies. But with its huge database of images, Facebook’s algorithm has a leg up on most others in that it is constantly being taught how to improve.” (c)
Jonathan Frankle, staff technologist Georgetown Center on Privacy and Technology
And the amazing uses of facial recognition tech are growing each day. It is already being used for examination, investigation, and monitoring. Governments all over the world use it to identify potential and current threats. Retailers and businesses of all kinds look for suspected shoplifters or track their workers’ hours. There are a lot of social media apps that enhance the user experience with FRT. Aside from public use in airports and railway stations, stadiums and cashpoints, there is even a progressive adoption of facial recognition technology in medical applications.
But such a technological approach raises more and more awareness and questions about whether it violates people’s privacy or not. Is it safe to live in a facial recognition future? Let’s sort this through. To give you better insight into what this invention brings to your business, we’ve made a list of facial recognition pros and cons.
Here is the GORILLA-IN-THE-ROOM public policy issue for our US 99% WE THE PEOPLE black, white, and brown citizens and our new-to-US immigrants surrounding PRIVACY LAWS as regards the capture of our personal images-----our personal voices-----and keeping them in DATABASES for any length of time.
300 years of US Constitutional and Federal law as well as centuries of COMMON LAW have protected citizens' privacy rights. As we developed technology all last century, new laws were written to protect civil liberties. A department store or convenience store wanting to use VIDEO CAMERAS for safety was required to keep any images and voices captured for only a set time------30 days---60 days. Those recording devices were built to do just that.
There have never been laws allowing the creation of database images of individual citizens, EVEN by law enforcement. A police department-----or a state attorney's office-----creating a database of citizens exercising their rights to protest, for example, was always found to be ILLEGAL.
EVEN LAW ENFORCEMENT COULD NOT CREATE DATABASES ON INDIVIDUAL CITIZENS ----THEN CLINTON/BUSH/OBAMA STARTED TO SLOWLY PASS LAWS -------ERGO, BELOW WE SEE BIOMETRIC PRIVACY LAWS.
'Why biometric privacy laws?
Biometric information like face geometry is high-stakes data because it encodes physical properties that are immutable, or at least very hard to conceal. Moreover, unlike other biometrics, faceprints are easy to collect remotely and surreptitiously by staking out a public place with a decent camera'.
It is STILL UNCONSTITUTIONAL-------IT STILL VIOLATES US FEDERAL, STATE, AND LOCAL LAWS FOR THESE DEVICES AND MEGA DATABASES TO BE INSTALLED AND MAINTAINED.
Our US elected officials CANNOT pass laws attacking our INDIVIDUAL AND COLLECTIVE PRIVACY-----no matter how much they wrap this all into a FAKE LEFT SOCIAL BENEFIT calling it PUBLIC SAFETY====HEALTH SAFETY.
Facial recognition technology is everywhere.
It may not be legal.
By Ben Sobel
June 11, 2015
Co-administrator of the facial recognition program for the Pinellas County (Fla.) Sheriff's Office, Scott McCallum, displays a method of facial mapping used to set criteria for facial image searches. The sheriff's office uses one of the most advanced facial recognition programs for law enforcement in the country. (By Edward Linsmier for The Washington Post, 2013 file)
Ben Sobel is a researcher and incoming Google Policy Fellow at the Center on Privacy & Technology at Georgetown Law.
Being anonymous in public might be a thing of the past. Facial recognition technology is already being deployed to let brick-and-mortar stores scan the face of every shopper, identify returning customers and offer them individualized pricing — or find “pre-identified shoplifters” and “known litigious individuals.”
Microsoft has patented a billboard that identifies you as you walk by and serves ads personalized to your purchase history. An app called NameTag claims it can identify people on the street just by looking at them through Google Glass.
Privacy advocates and representatives from companies like Facebook and Google are meeting in Washington on Thursday to try to set rules for how companies should use this powerful technology. They may be forgetting that a good deal of it could already be illegal.
There are no federal laws that specifically govern the use of facial recognition technology. But while few people know it, and even fewer are talking about it, both Illinois and Texas have laws against using such technology to identify people without their informed consent. That means that one out of every eight Americans currently has a legal right to biometric privacy.
The Illinois law is facing the most public test to date of what its protections mean for facial recognition technology. A lawsuit filed in Illinois trial court in April alleges Facebook violates the state’s Biometric Information Privacy Act by taking users’ faceprints “without even informing its users — let alone obtaining their informed written consent.” This suit, Licata v. Facebook, could reshape Facebook’s practices for getting user consent, and may even influence the expansion of facial recognition technology.
How common—and how accurate—is facial recognition technology?
You may not be walking by ads that address you by name, but odds are that your facial geometry is already being analyzed regularly. Law enforcement agencies deploy facial recognition technology in public and can identify someone by searching a biometric database that contains information on as many as one-third of Americans.
Companies like Facebook and Google routinely collect facial recognition data from their users, too. (Facebook’s system is on by default; Google’s only works if you opt in to it.) Their technology may be even more accurate than the government’s. Google’s FaceNet algorithm can identify faces with 99.63 percent accuracy. Facebook’s algorithm, DeepFace, gets a 97.25 percent rating. The FBI, on the other hand, has roughly 85 percent accuracy in identifying potential matches—though, admittedly, the photographs it handles may be harder to analyze than those used by the social networks.
Facebook and Google use facial recognition to detect when a user appears in a photograph and to suggest that he or she be tagged. Facebook calls this “Tag Suggestions” and explains it as follows: “We currently use facial recognition software that uses an algorithm to calculate a unique number (“template”) based on someone’s facial features…This template is based on your profile pictures and photos you’ve been tagged in on Facebook.” Once it has built this template, Tag Suggestions analyzes photos uploaded by your friends to see if your face appears in them. If its algorithm detects your face, Facebook can encourage the uploader to tag you.
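One hedged way to picture a per-user "template" built from many photos, as the quoted description suggests ("based on your profile pictures and photos you've been tagged in"), is a simple average of per-photo feature vectors. The numbers below are invented and Facebook's actual pipeline is not public at this level of detail; this is only a sketch of the general idea.

```python
# Per-photo embeddings for one user, as a tagging system might accumulate
# them over time. All values are made up for illustration.
tagged_photo_embeddings = [
    [0.31, 0.70, 0.11],
    [0.29, 0.74, 0.09],
    [0.33, 0.68, 0.13],
]

def build_template(embeddings):
    """Average the embeddings from every photo the user appears in."""
    n = len(embeddings)
    return [sum(column) / n for column in zip(*embeddings)]

template = build_template(tagged_photo_embeddings)
print(template)
```

Averaging over many photos is why each new tag "helps" the system: the per-photo noise (lighting, angle, expression) cancels out and the template converges on the stable features of one face.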
With the boom in personalized advertising technology, a facial recognition database of its users is likely very, very valuable to Facebook. The company hasn’t disclosed the size of its faceprint repository, but it does acknowledge that it has more than 250 billion user-uploaded photos -- with 350 million more uploaded every day. The director of engineering at Facebook’s AI research lab recently suggested that this information was “the biggest human dataset in the world.”
Eager to extract that value, Facebook signed users up by default when it introduced Tag Suggestions in 2011. This meant that Facebook calculated faceprints for every user who didn’t take the steps to opt out. The Tag Suggestions rollout prompted Sen. Al Franken (D-Minn.) to worry that “Facebook may have created the world’s largest privately held data base of faceprints— without the explicit consent of its users.” Tag Suggestions was more controversial in Europe, where Facebook committed to stop using facial identification technology after European regulators complained.
The introduction of Tag Suggestions is what’s at issue in the Illinois lawsuit. In Illinois, companies have to inform users whenever biometric information is being collected, explain the purpose of the collection and disclose how long they’ll keep the data. Once informed, users must provide “written release” that they consent to the data collection. Only after receiving this written consent may companies obtain biometric information, including scans of facial geometry.
Facebook declined to comment on the lawsuit and has not filed a written response in court.
Why biometric privacy laws?
Biometric information like face geometry is high-stakes data because it encodes physical properties that are immutable, or at least very hard to conceal. Moreover, unlike other biometrics, faceprints are easy to collect remotely and surreptitiously by staking out a public place with a decent camera.
Anticipating the importance of this information, Texas passed a law in 2001 that restricts how commercial entities can collect, store, trade in and use biometric data. Illinois passed a similar law in 2008 called the Biometric Information Privacy Act, or BIPA. A year later, Texas followed up with another law to further regulate biometric data in commerce.
The Texas laws were passed with facial recognition in mind. Brian McCall, now chancellor of the Texas State University system, introduced both Texas bills during his tenure as a state representative.
“Legislation is seldom ahead of science, and in this case I felt it was absolutely necessary that legislation get ahead of common practice," McCall explained. "And in fact, we were concerned about how the market would use personally identifiable information.” Sean Cunningham, McCall’s chief of staff, added the use of facial recognition by law enforcement at the 2001 Super Bowl in Tampa helped bring the issue to their attention. However, it appears that the Texas statute has not been used very often to litigate the commercial collection of facial identification information.
On the other hand, the Illinois law was galvanized by a few high-profile incidents of in-state collection of fingerprint data. Most notably, a company called Pay By Touch had installed machines in supermarkets across Illinois that allowed customers to pay by a fingerprint scan, which was linked to their bank and credit card information. Pay By Touch subsequently went bankrupt, and its liquidation prompted concerns about what might happen to its database of biometric information. James Ferg-Cadima, a former attorney with the ACLU of Illinois who worked on drafting and lobbying for the BIPA, told me that “the original vision of the bill was tied to the specific issue that was presenting itself across Illinois, and that was the deploying of thumbprint technologies…”
“Oddly enough,” Ferg-Cadima added, “this was a bill where there was little voice from the private business sector.” This corporate indifference might be a thing of the past. Tech companies of all stripes have grown more and more interested in biometrics. They’ve become more politically powerful, too: For instance, Facebook’s federal lobbying expenditures grew from $207,878 in 2009 to $9,340,000 in 2014.
Testing the Illinois law
The crucial question here is whether the Illinois and Texas laws can be applied to today’s most common uses of biometric identifiers. What real-world business practices would meet the standard of informed consent that Illinois law requires for biometric data collection?
When asked about the privacy law cited in the Licata case, Jay Edelson, the managing partner of the firm representing the plaintiff, said, “The key thing to understand is that almost all privacy statutes are really consent statutes.” The lawsuit stands to determine precisely what kind of consent the Illinois law demands.
If the court finds that Facebook can be sued for violating the Illinois biometrics law, and that its opt-out consent framework for Tag Suggestions violated the law, it may upend the practices of one of the world’s largest Internet companies, one that is possibly the single largest user of commercial facial recognition technology. And if the lawsuit fails for one reason or another, it would emphasize that regulation of facial recognition needs to take place on a federal level if it is to happen at all. Either way, there’s a chance this lawsuit will end up shaping the future of facial recognition technology.
Here is the difference when a global corporation uses our images and voices: when we use computer business operations like FB-----like GOOGLE---we are signing away PRIVACY RIGHTS in order to post our images----our voices----online, and we sign contracts saying we know this global corporation may sell those images and voices for profit. THIS IS BAD------but ONLINE RULES OF LAW are new------and our US 99% WE THE PEOPLE need to be strong in guiding those common law statutes.
This said, in PUBLIC SPACES private institutions do not have those rights to evade PRIVACY LAWS. Our public officials are tasked with protecting PRIVACY-----with assuring no laws are passed that violate these civil liberties. If our elected officials do pass these laws----ILLEGALLY-------then all these policies can be VOIDED---EASY PEASY.
So, ONLINE TECHNOLOGY LAWS ARE NEW-----BREAKING GROUND WITH NO CONSTITUTIONAL AND FEDERAL COURT PRECEDENT------OUR 99% WE THE PEOPLE NEED TO MAKE THOSE COURT PRECEDENTS, which is why a REAL LEFT SOCIAL PROGRESSIVE fighting for US civil liberties and rights would be filling the courts with these lawsuits.
Even if a state has a court system captured by far-right wing global banking 5% freemason/Greek players as judges and lawyers-------who rule against these civil rights and liberties tied to PRIVACY------as here in Maryland and Baltimore-------simply filing these lawsuits protecting PRIVACY, both as individuals and as communities, creates a LEGAL TRAIL OF PRECEDENT of our 99% WE THE PEOPLE fighting these ILLEGAL laws and court rulings.
THIS IS HOW POLITICS WORK----THIS IS THE GROUNDWORK ALL PEOPLE MUST LAY IN FULFILLING THEIR DUTIES OF CITIZENSHIP.
Last Updated : Jan 23, 2019 01:57 PM IST | Source: Moneycontrol.com
Microsoft plans ethical principles for its facial recognition technology: Report
In December 2018, the tech giant had called upon countries to formulate laws and regulations to prevent bias in facial recognition
Moneycontrol News @moneycontrolcom
Microsoft Corporation is coming up with ethical practices to implement its artificial intelligence-backed facial recognition technology to prevent risks of biased outcomes and invasion of users' privacy after it asked governments to come up with more regulations in the field, Bloomberg reported.
In December 2018, the tech giant had called upon countries to formulate laws and regulations to prevent bias in facial recognition.
"We do need to lead by example and we’re working to do that," Microsoft President Brad Smith is quoted as saying in the report.
The company has planned to draft policies and build governance systems that make sure the technology usage is in line with its principles and goals. This would include setting controls for the company's global sales to ensure the AI technology is not sold to parties where there is a risk of wrongful usage.
There have been growing concerns about the use of facial recognition software by law enforcement, border security and the military due to possible risks of mass surveillance. Well-known research has shown that these products perform poorly and make mistakes with people who have darker skin. Many research and advocacy groups have protested against tech giants including Google, Amazon and Microsoft over the same.
Smith made it clear that this doesn't mean Microsoft will stop providing governments and militaries with the technology altogether. The company just wants to ensure that it is not used for surveillance without proper safeguards.
Microsoft is rejecting some contracts for the same reason. In one case, Smith said providing the technology would have led to public surveillance "in a country where we were not comfortable that human rights would be protected".
With respect to China's new system where citizens will be judged on their social behaviour, he said Beijing is in any case more interested in getting the recognition technology from Chinese firms rather than American ones.
"You never want to create a market that forces companies to choose between being successful and being responsible and unless we have a regulatory floor, there is danger of that happening," he added.
First Published on Jan 23, 2019 01:56 pm
When we speak of the saturation of Baltimore City streets, buildings and now our homes with all these surveillance structures, we are not talking about CIVILIAN SURVEILLANCE---AKA our city/county/community police departments------all of what exists in Baltimore City, as in NYC, and is MOVING FORWARD in all US cities deemed FOREIGN ECONOMIC ZONES-----is MILITARY-------
When a REAL LEFT SOCIAL PROGRESSIVE ACADEMIC has a NOSY NEIGHBOR filling her house with surveillance equipment------outside capturing voice and images------inside her house with microphones capturing voices through walls-------and even when former tenants leave bugs in apartments they vacate---as is happening in Baltimore today-------all of this is being done by THE GLOBAL MILITARY COMPLEX, and targeting individual citizens placed ON THE LIST for this illegal surveillance comes with creating FALSE reasons for doing so.
So, these few years of discussing how our local Baltimore City police department is becoming more and more PRIVATIZED and controlled by global security corporations----with nothing 'community policing' happening-----this is it.
The entire saturation of Baltimore City streets, buildings, and individual houses is being done by global banking 1% OLD WORLD KINGS AND QUEENS ---foreign sovereignty of MALTA----KNIGHTS OF MALTA.
THIS IS NOT A PATRIOTIC ACTIVITY-----IT IS A CAPTURE AND COLONIZATION OF A US SOVEREIGN NATION AND ITS US 99% OF WE THE PEOPLE black, white, and brown citizens.
When we take a nosy neighbor to court for illegal surveillance located on her house----or inside a house----we know we are taking a global military corporation 5% freemason/Greek player to court.
Military Security CCTV Camera Systems
We offer military grade surveillance equipment to meet your needs and surpass your expectations. Our long-range vari-focal night vision Day/Night security cameras, along with megapixel IP cameras with an auto-tracking option, have been successfully used by various branches of the military. Our sophisticated and state-of-the-art video analytics help security personnel capture and analyze video images according to specific criteria, pre-defined rules, and behavioral triggers to increase efficiency and reduce reliance on human factors. We are determined to deliver nothing but the best equipment and support to ensure our country’s military personnel and assets are safe.
Our Military grade security systems offer the following:
- Missing Object Detection - Instantly generates an alarm when a valuable object goes missing
- Unattended Object Detection - Instantly generates an alarm when an unattended object appears in a defined detection area
- Scene Change Detection - Get immediate notifications if a camera’s field of view changes
- Facial Detection - Detect human faces in the video and save them to an index
- Crowd Detection - Detects a crowd and generates an alert
- People Counting - Count the number of people or objects passing through predetermined areas
- Visual Quality Enhancer - Filter and enhance object visibility in foggy or blurred environments caused by bad weather such as rain or snow
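As a rough illustration of how one of the analytics above might work under the hood, here is a toy version of "Scene Change Detection": compare a reference frame with the current one and raise an alert when too many pixels differ. The tiny grayscale "frames" and both thresholds are assumptions made for the sketch; production systems operate on full video with far more sophisticated models.

```python
def scene_changed(frame_a, frame_b, pixel_tol=10, change_ratio=0.25):
    """Alert when the fraction of changed pixels exceeds change_ratio."""
    changed = sum(
        1
        for row_a, row_b in zip(frame_a, frame_b)
        for pa, pb in zip(row_a, row_b)
        if abs(pa - pb) > pixel_tol
    )
    total = sum(len(row) for row in frame_a)
    return changed / total > change_ratio

reference = [[50, 52, 51], [49, 50, 53], [51, 50, 50]]
same_view = [[51, 50, 52], [50, 49, 52], [52, 51, 49]]       # minor sensor noise
blocked   = [[50, 52, 51], [200, 210, 205], [199, 201, 204]]  # camera partly covered

print(scene_changed(reference, same_view))  # False
print(scene_changed(reference, blocked))    # True
```

The `pixel_tol` parameter absorbs sensor noise and small lighting shifts, while `change_ratio` decides how much of the view must change before the alert fires; tuning those two numbers is most of the practical work in such a feature.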
For specialized assistance, please contact our Military Specialist Jonathan Black or request a proposal online.
The area of public policy tied to these saturated surveillance structures goes beyond the simple creation of systematic image and voice recognition-----just as critical is the capture of INTELLECTUAL conversations, as happens to CITIZENS' OVERSIGHT MARYLAND when writing and posting our blog each day. The collection of voice or computer-generated discussions is SOLD by our education institutions to global corporations working towards the goal of TEACHING ARTIFICIAL INTELLIGENCE TO THINK LIKE A HUMAN.
Here in Baltimore, any number of global corporations are working towards that goal-----AMAZON.COM, for example------which would buy, or be given for free, this image and voice data collected inside buildings and along our community streets.
ALL THAT IS COOL-----REPLACING ALL HUMAN THINKERS WITH ARTIFICIAL INTELLIGENCE----WHO WOULDN'T WANT THAT SAY THOSE FAR-RIGHT WING GLOBAL BANKING 5% FREEMASON/GREEK PLAYERS/POLS?
No social benefit happening.
If Artificial Intelligence Is Taught To Think Like Humans, Then Are Machines Going To Be Sexist, Racist And Discriminatory?
Artificial intelligence gives us a second chance with our future, by giving us the opportunity to wipe out human bias in decision-making.
Ekta Kumar 17 March 2017
We live in a fractured world that is increasingly becoming polarised with shrill voices intent on drowning out dissent. We have divided ourselves, created inequalities and are steeped in prejudices that we have carried for years. It’s not a perfect world, and how can it be -- we are after all humans, intolerant, and unfair.
However when it comes to machines, the expectations change. The first words that usually come to mind are: cold, calculating and unbiased. But are they really?
It is a question that is becoming more and more relevant as Artificial Intelligence is no longer confined to the pages of a sci-fi novel. From the realms of fantasy it has now crept into our lives. Our devices are connected, personal digital assistants answer our queries, algorithms track our habits and make recommendations, AI is sparking advancements in medicine, cars will soon be driving themselves, and robots will be delivering our pizza etc. AI is growing fast, what was once considered a possible distant future is now being tested and rolled out. Just imagine our lives five, ten or twenty years from now.
But will the developments and the benefits suit us all? Will it be equal? The answer is perhaps ‘no’. AI is flawed, just like the rest of us.
Do you remember Google’s photo app that automatically classified dark skin tones as gorillas, or Nikon’s camera that insisted all Asian faces were blinking? An AI-judged beauty contest went through thousands of selfies and chose 44 fair-skinned faces and only one dark-skinned face as the winners. Microsoft’s Twitter-based chatbot ‘Tay’ was designed to learn from its interactions with users. Within 24 hours it was shut down. The user community taught it some seriously offensive language and it regurgitated it faithfully. The very public experiment ended in disaster, with the aggressive bot spewing racist and sexist remarks.
These are not the only examples. Sexism, racism and all kinds of discrimination are built into the algorithms that drive these ‘intelligent’ machines, for a simple reason: they are built by humans. Machines reflect the biases we have.
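The claim that machines reflect the biases of the data we give them can be made concrete with a toy example: a 1-nearest-neighbor classifier trained on data where one group is under-represented makes more mistakes on that group. All feature values below are synthetic and chosen only to show the effect.

```python
# A minimal illustration of how skewed training data produces skewed errors.
def nearest_label(x, training):
    """Classify x with the label of its nearest training sample."""
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

# Five training samples clustered near 1.0 labelled "A", but only one
# lone sample near 5.0 labelled "B" -- group B is under-represented.
training = [(0.9, "A"), (1.0, "A"), (1.1, "A"), (1.2, "A"), (0.8, "A"),
            (5.0, "B")]

test_a = [0.95, 1.05, 1.15]   # new group-A samples, near the big cluster
test_b = [2.8, 3.0, 4.8]      # new group-B samples, more spread out

errors_a = sum(nearest_label(x, training) != "A" for x in test_a)
errors_b = sum(nearest_label(x, training) != "B" for x in test_b)
print(errors_a, errors_b)  # group B suffers more mistakes
```

Group A is classified perfectly while two of the three group-B samples are pulled toward the well-represented cluster and mislabelled: the algorithm itself is neutral, but the skew in its training data is not.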
This is not new, and certainly not limited to Artificial Intelligence.
Tools are usually designed for men, women’s clothing has no pockets, and seat belts were until recently tested only on male dummies, putting women at greater risk in a crash. Yes, prejudice and stereotyping in product design have been around for a long time, but what is worrying is that some of it is now creeping into the development of AI.
The deep learning algorithms are all around us, tracking us, prompting us, shaping our preferences and our behavior. This is just the beginning. Artificial intelligence is going to be an integral part of our lives, even more than it already is, and thus it is absolutely critical that we mould it in a way that makes it truly neutral. It is our chance to build our own future. The present may be imperfect, the future need not be. Considering it is still developing, and has still not entrenched itself in our lives, this is the time to begin talking about it.
Our conversations around this have so far largely been limited to the number of jobs that are going to be lost; perhaps now we should start asking other questions too - like those of its purpose and its accountability. Currently it is the tech companies, primarily in the West, who are leading the discussion. But there need to be more participants from across the world - governments, social institutions, corporates, academicians, research bodies and so on. They must come together to talk, and think, and figure out a way to make it equitable, to make it work for everyone. If not, then the development of AI is going to be lopsided, and this is not going to be limited to a social or cultural issue; it can mean the difference between life and death.
As in the case of self-driving cars - will it give preference to one racial group versus another, will it choose to hit someone or save someone based on colour, height or gender? I hope not.
In the coming years, for society to be equal, technology must also serve us all equally. Artificial intelligence gives us the incredible opportunity to wipe out human bias in decision making. This can be possible only now, at the development stage where diversity and inclusion should drive innovation. We need to involve all kinds of minds in research laboratories, in conference rooms and in workshops where decisions are taken about our future. A homogenous group will subconsciously carry forward their own prejudicial way of looking at life, their biases can taint the output, their assumptions can tilt the sphere of science thus sharpening inequalities, marginalizing people or putting certain sections of the population at higher risk.
We are at a crucial stage of technological evolution. A better world awaits us, but we need all kinds of people to imagine our tomorrow, design it, engineer it and finally make it real.