We use the term MONSTERS IN THE CLOSET for HOSTING SERVER NOSY NEIGHBORS AND THE GANG----instead of calling them CREEPY.
The mantra of global banking 1% that all this surveillance is about PROTECTING CITIZENS---PROTECTING CHILDREN----is FAKE NEWS. Its only goal is REPRESSION------SILENCING any dissent while MOVING FORWARD kills our US ALL-AMERICAN freedom, liberty, and justice FOR ALL.
I have to deal with NOSY NEIGHBORS constantly pretending to READ MY MIND----as if that is OK! Even PRETENDING is MONSTERS IN THE CLOSET.
It is indeed CREEPY to want to place our US 99% WE THE CHILDREN under constant surveillance with the goal of creating a mega-data account that follows that child through life.
Face it: It’s creepy to put facial recognition technology in schools
By Michael Sapraicone
New York Daily News
Jun 24, 2019 | 12:39 PM
Earlier this month, the school system in Lockport, N.Y., became the first in the nation to begin testing facial recognition technology. (Getty Images/iStock)
Earlier this month, the school system in Lockport, N.Y. (pop.: 20,569), near Niagara Falls, became the first in the nation to begin testing facial recognition technology, much to the consternation of state education officials and the New York Civil Liberties Union. Their concern is justified.
Aside from the fact that the technology will cost the school district nearly $4 million, one has to ask what exactly a public school system expects to accomplish with facial recognition. No one denies the safety of students and faculty is of utmost importance, but measures aimed at keeping them secure should be based on a myriad of factors, including actual risk, and developed in conjunction with local police departments and other law enforcement agencies.
In this July 10, 2018 photo, district technology director Robert LiPuma stands in a doorway beneath a camera with facial recognition capabilities that is being installed in Lockport High School in Lockport, N.Y. (Carolyn Thompson/AP)
Safety ought to also include the ability of students and faculty to preserve their privacy and civil rights. With so many cyber intrusions reported on an almost daily basis, the personal information and images of innocent people — in this case, many minors — should not be put at risk, if indeed there is even a remote possibility that students’ images could be captured and stored.
Then there is the practical aspect of the technology itself.
In the case of Lockport, whose entire K-12 student population is a modest 4,600, the school superintendent told The Buffalo News the testing phase of its facial recognition program “will use photos of registered sex offenders from a national database for comparisons to images of people entering the schools.”
But would it not be wiser and more prudent for Lockport to evaluate its overall security plan, and examine its schools’ points of ingress and egress? And if it is such a target of potential intrusions, why would it take matters entirely into its own hands instead of handing everything over to police?
Moreover, facial recognition is only as good as the database available. Lockport’s test phase will mine the sex-offender registry, but what about other criminals known to be violent, and those whose past transgressions would pose a more real threat to the lives of those in a school, such as former and current students? In my many years in law enforcement, I can’t remember a single incident of a sex offender entering a school.
In this July 10, 2018 photo, students walk down a hallway at Lockport High School in Lockport, N.Y. On the upper left, a camera with facial recognition capabilities hangs on the wall waiting for its installation to be completed. (Carolyn Thompson/AP)
Our public schools should not be the testing ground for facial recognition technology; it would be far more prudent to use it as part of an overall security program in places such as offices and houses of worship. Indeed, citizens of Lockport might be better served if the $3.8 million tab for its public school facial recognition program were transferred to the police department, where there would be better monitoring and accountability.
It is essential that we temper our use of untested new technology. There is currently much debate about whether the negative aspects of facial recognition outweigh the positive, and we must understand that moving too quickly might have irreversible and unintended consequences.
It was encouraging that the state Assembly introduced a bill in its last session that would impose a one-year moratorium on the use of facial recognition in schools. Legislators would be wise to advance this bill when the legislature reconvenes, and promptly send it to Gov. Cuomo for his signature.
CORTEZ AND THIS BRIAN SCHATZ are being touted as fighting what has been installed over these few decades of CLINTON/BUSH/OBAMA. We didn't hear these guys thirty years ago when all this was being installed. We didn't hear them a decade or so ago as saturated surveillance hit PUBLIC AND PRIVATE buildings and sidewalks. We are hearing them today because it is ALREADY INSTALLED.
Now, the 'HUMAN RIGHTS' issue is how do we control EXTREME CORPORATE FASCISM? These far-right wing global banking 5% freemason/Greek players are creating a GOLD STANDARD for us---don't worry say THE PLAYERS.
THE GOLD STANDARD FOR PLAYERS-------WE DON'T CARE.
'Democratic Senator Brian Schatz of Hawaii has proposed the “End Support of Digital Authoritarianism Act” to bar companies from China, North Korea, Russia, Iran and other countries that consistently violate “internationally recognised human rights” from the Face Recognition Vendor Test (FRVT), which is widely considered the gold standard for determining the reliability of facial recognition software'.
US as FOREIGN ECONOMIC ZONES operating like CHINESE FOREIGN ECONOMIC ZONES say-------
WE ARE GOING TO KEEP CHINA IN LINE WITH OUR ALL-AMERICAN IDEALS HATING ALL THIS EXTREME WEALTH EXTREME POVERTY AUTHORITARIAN CORPORATE MARXISM.
'How facial recognition software is being used in some ...'
www.deseret.com/indepth/2019/12/28/20992530/...
'But just the existence of such files is enough to worry Moore, who has studied the use of facial recognition software under authoritarian regimes. “There are so many potentials for abuse with facial recognition technology,” he said. For example, in China, the government uses artificial intelligence to track how often citizens attend church'.
The problem for FAKE US PLAYERS is this---they are doing the same thing to our US 99% WE THE PEOPLE-----
How the US plans to crack down on Chinese facial recognition tech used to ‘strengthen authoritarian governments’
- A proposed bill reflects a broader campaign underway in the US to check the spread of Chinese tech
- Hangzhou-based Hikvision, for example, has been criticised for its role in the detention and surveillance of the Uygur minority population in Xinjiang
Published: 6:00am, 18 Jun, 2019
Updated: 12:11pm, 19 Jun, 2019
Surveillance cameras manufactured by Hikvision on a post at a testing station near the company’s headquarters in Hangzhou, China. Photo: Bloomberg
A United States senator is pushing to ban countries including China from an influential US government accuracy test of facial recognition technology, potentially opening up a new front in the escalating tech war between Washington and Beijing. Democratic Senator Brian Schatz of Hawaii has proposed the “End Support of Digital Authoritarianism Act” to bar companies from China, North Korea, Russia, Iran and other countries that consistently violate “internationally recognised human rights” from the Face Recognition Vendor Test (FRVT), which is widely considered the gold standard for determining the reliability of facial recognition software.
The results of the FRVT are regularly cited by firms as a measure of their credibility, and are referred to by businesses and policymakers when buying facial recognition technology.
ALBRIGHT, like KISSINGER, is the face of bringing back from last century HITLER/STALIN global banking 1% corporate fascism.
CONTROL FOOD CONTROL ENERGY CONTROL PEOPLE
That is what they have in CHINA and it is what CLINTON/BUSH/OBAMA installed with the help of ALBRIGHT AND KISSINGER.
NO, MADELEINE----ALL THIS SURVEILLANCE TECHNOLOGY PUTTING CHINA TO SHAME----WAS INSTALLED BEFORE TRUMP CAME TO OFFICE. TRUMP SIMPLY CONGRATULATES THOSE DASTARDLY NEO-LIBERALS AND NEO-CONS.
'Madeleine Albright Warns of a New Fascism—and Trump | The New ...
www.newyorker.com/news/news-desk/madeleine... Apr 24, 2018 ·
Trump is not Mussolini. But the incident and the tenor of our times reflect why Madeleine Albright, who fled European Fascism as a child and became America’s first female Secretary of State as an adult, tackles the prospects of radical authoritarian nationalism—or Fascism—returning today in her latest book'.
Indeed, those tied to pre-WEIMAR GERMANY and TROTSKY STALINIST FASCISM did indeed install it back then and then came to US to do the same------LEO STRAUSS NEO-LIBERALISM ------KARL MARX-----equals HITLER/STALINIST extreme wealth extreme poverty LIBERTARIAN MARXISM.
REMEMBER, GLOBAL BANKING 1% OLD WORLD KINGS KNIGHTS OF MALTA TRIBE OF JUDAH ALWAYS HIDE BEHIND THE 'MADMEN'.
My case against HOSTING SERVER NOSY NEIGHBORS AND THE GANG----is just this------BRAIN/BODY IMPLANTS easily hacked----not accurate------used as WEAPONS has had ME by the NECK acting as FASCIST BOOTS.
NO----THIS IS ALL ABOUT SOCIAL BENEFIT SAYS GLOBAL CORPORATE FASCISM.
REAL LEFT SOCIAL PROGRESSIVE LIBERALS -----those ALL-AMERICAN civil liberties and justice people will NOT allow CLINTON/BUSH/OBAMA----the ones installing this HITLER/STALIN fascism in US----to get away with saying they are the POPULIST LEADERS against it.
Facial Recognition Use Growing: Accurate or Not, Hackable or Not
By: Lee Rickwood
June 1, 2018
The use of facial recognition technology in retail, healthcare, law enforcement, travel & tourism, entertainment and other industries continues to grow, even as concerns about its overall accuracy and safety grow as well.
Facial recognition software tools are used to analyze both live and recorded images of people. The tools can be used to match or confirm a particular person for security, authentication and protection purposes. Facial recognition capabilities are sophisticated enough to not only assess and analyze a person’s age and gender, but also their ethnicity, behaviour, emotional state and physical intentions (as demonstrated by pace of movement, direction, surrounding crowd flow, etc).
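Under the hood, matching systems of the kind described above typically reduce each face to a fixed-length "embedding" vector and compare vectors against a threshold. A minimal sketch in Python, with made-up 4-dimensional vectors standing in for the 128- to 512-dimensional embeddings real systems use (the function names and threshold here are illustrative, not any vendor's actual API):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, gallery, threshold=0.6):
    """Return (best identity, score), or (None, score) if below threshold."""
    best_id, best_score = None, -1.0
    for identity, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy 4-dimensional "embeddings" for two enrolled identities
gallery = {
    "alice": np.array([0.9, 0.1, 0.0, 0.1]),
    "bob":   np.array([0.0, 0.8, 0.5, 0.1]),
}
probe = np.array([0.85, 0.15, 0.05, 0.1])  # vector close to "alice"
print(match_face(probe, gallery))
```

The threshold is where accuracy debates live: set it low and strangers match ("false positives"); set it high and enrolled people are missed.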
NEC is among those companies to offer advanced technologies and solutions to make fingerprint identification and facial recognition a key component of safety and security systems.
A Research and Markets report on the sector indicated that the global mobile biometrics market will grow at a rate of over 100 per cent in the next three years. The overall facial recognition market is expected to triple in size, growing to a projected $6 billion (USD) by 2020.
Facial Recognition Providers in Canada
Canadian tech companies are already taking advantage: a Winnipeg-based company called Mexia One was among the first to use facial recognition systems to provide secure event access at a very large and internationally popular industry event, Mobile World Congress 2018.
It drew more than 100,000 attendees, and some 4,000 visitors opted in to the ‘test’ by allowing system operators to scan their faces in exchange for easy access to the show (paper documentation was also recommended by show officials). As such, being able to get real-time results was crucial to the process.
Depending on computational horsepower, Mexia describes its technology as being able to support three million comparisons per CPU per second (roughly one comparison of a database record to a facial image every 300 nanoseconds)!
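The quoted throughput can be sanity-checked with simple arithmetic: three million comparisons per second works out to about 333 nanoseconds each, consistent with the article's "roughly 300 nanoseconds." A quick back-of-the-envelope sketch (the 100,000-record gallery size is a made-up illustration, not a Mexia figure):

```python
# Vendor-quoted throughput (assumption: per CPU core, sustained)
comparisons_per_cpu_per_sec = 3_000_000

# Time per single record-vs-image comparison
ns_per_comparison = 1e9 / comparisons_per_cpu_per_sec
print(f"{ns_per_comparison:.0f} ns per comparison")   # ~333 ns

# Hypothetical: sweep a 100,000-record gallery on one CPU
gallery_size = 100_000
seconds = gallery_size / comparisons_per_cpu_per_sec
print(f"{seconds * 1000:.1f} ms per full gallery sweep")   # ~33 ms
```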
The company’s facial recognition, authentication and analysis platform is being used by trade show operators, airport facilities, retail outlets, government agencies and more.
Another Canadian tech company, Oakville-based Applied Recognition, took the basic facial recognition capabilities in its consumer photo tagging and indexing app, Fotobounce, to the next level with the release of Ver-ID, an enhanced identification verification and user security system said to provide greater security than traditional username/password logins.
Of all the gadgets, devices and services that are now or may soon use facial recognition instead of personal passwords and manual log-ins, perhaps one of the most disconcerting (in light of recent revelations) is Facebook.
The social media giant announced in April that it would be enabling a facial recognition feature in Canada that automatically tags Facebook users in uploaded photos as a tool against false impersonation on the social network.
The decision to offer facial recognition is a controversial one for Facebook, already facing heat over the way it handles users’ data.
The system has been in use elsewhere since 2011, but some implementations were delayed after objections from data privacy groups. The renewed product push reflects enhancements to the system, to the point where Facebook will also now recognize faces in Europe (if desired by the user).
“We’ve offered products using face recognition in most of the world for more than six years. As part of this update, we’re now giving people in the EU and Canada the choice to turn on face recognition,” Facebook announced in a blog post by Chief Privacy Officer Erin Egan and Deputy General Counsel Ashlie Beringer. Facebook stressed that using the feature would be “entirely optional.”
That’s not what some Facebook users in the U.S. say; they have filed a class action lawsuit against Facebook, alleging that it used facial recognition on photos without user permission.
Popular image manipulation software can be used in facial recognition systems. Researchers used Adobe Photoshop CS5 to remove backgrounds from photos, then overlayed grid markings to help match different images containing the same face, even if the image was taken with different magnifications and from different distances.
Resistance to Recognition
So facial recognition technologies face some pushback from concerned citizens and potential users. But there is resistance, too, from the tech industry itself, and from people who are developing ways to spoof surveillance systems.
Rather than trying to hide or obscure one’s face from facial recognition technology (because such systems can see through obfuscation attempts like wearing glasses or a big floppy hat), one new product in particular allows users to present an entirely new face and alternate identity to the recognition software: that of the product developer!
The folks at URME Surveillance are doing so in an attempt to not only combat ubiquitous surveillance, but the possible misuse of data collected in surveillance, recognition and identification processes.
Rather than hiding behind someone else’s face, researchers at the University of Toronto have come up with a technological way to disable facial recognition systems and disrupt image-based search processes.
“[I]t is possible to craft fast adversarial attacks on a state of the art face detector,” write Avishek Bose and Parham Aarabi of the Department of Electrical and Computer Engineering at U of T (Aarabi and his cohorts there have been working with facial recognition tools for some time, and they have commercialized other products that utilize similar recognition algorithms).
They say their team plans to make the privacy filter publicly available as an app or website tool. “Personal privacy is a real issue as facial recognition becomes better and better,” Aarabi says. “This is one way in which beneficial anti-facial-recognition systems can combat that ability.”
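The U of T filter itself is trained adversarially against a real face detector; the underlying "adversarial attack" idea can be sketched far more simply. Below, a gradient-sign (FGSM-style) perturbation pushes a toy linear detector's score down; the weights and "image" are random stand-ins, not the researchers' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy "detector" weights (stand-in for a trained model)
x = rng.normal(size=16)   # toy "image" features

def detector_score(x):
    # Toy differentiable detector: higher score = more "face-like"
    return float(w @ x)

# FGSM-style step: nudge every feature against the score's gradient,
# capped at a small budget eps so the change stays barely visible.
# For this linear detector the gradient d(score)/dx is just w.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(detector_score(x), detector_score(x_adv))  # score drops after the attack
```

The same sign-of-gradient trick scales up to deep detectors, which is why the U of T team frames their filter as an "adversarial attack" on detection.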
Facing up to Law Enforcement
That may not be what police services in Canada and around the world want to hear.
WE EMPHASIZE OVER AND OVER AND OVER---THESE ARE NOT OUR LOCAL POLICING ----THESE ARE GLOBAL PRIVATE MILITARIZED POLICING/SECURITY CORPORATIONS IN PLACE BECAUSE NORTH AMERICA IS BEING MADE A THIRD WORLD FOREIGN ECONOMIC ZONE.
More and more, police and law enforcement agencies here in Canada and around the world are using facial recognition as an investigative if not preventative tool. Calgary was among the first police services in Canada to do so; while Toronto’s own police force has long been said to be investigating the use of facial recognition tools, it now has the support of the Ontario government, along with some $19 million funding for several projects, including the purchase and implementation of a facial recognition system.
Barring further announcements about such implementations, it is not clear how, when, or which Canada police service will make use of facial recognition, but they will surely be pitched to do so. The latest company to offer up facial recognition tools to police and law enforcement is a big one – Amazon.
Amazon’s system, called Rekognition, has capabilities that allow its user to identify people whose images are captured in full motion video, and it can identify and follow multiple people and track their movements in near real-time.
Concerned about the potential for unfettered surveillance by any number of users, a coalition of concerned citizens, organizations and tech companies has sent a letter of protest to Amazon, asking that it take Rekognition “off the table.”
My case against HOSTING NOSY NEIGHBORS AND THE GANG this past year of HITTING ME-----included my educating myself on this very same technology. My analysis of how NOSY NEIGHBORS followed every move inside my apartment started with spy cameras and microphones planted inside my apartment---expanded to BRAIN/IMPLANTS where ALGORITHMS were used to create THE ABC of how to track me and every move via BRAIN/BODY IMPLANTS.
ALGORITHMS tracking people on the street are already being implemented inside people's bodies. No health benefit---this is ALL CHINESE SOCIAL CREDIT SCORE---controlling US 99% WE THE PEOPLE and new to US 99% WE THE GLOBAL LABOR POOL IMMIGRANTS.
'On Monday New York’s City Council debated a bill that would require city agencies to publish the source code of algorithms used to target individuals with services, penalties, or police resources'.
Below we see where these CONCERNS are vocalized----it is only about those PUBLIC SURVEILLANCE MEGA DATA hitting these COMPUTER-GENERATED readouts about what kind of person you are----
WE ARE GOING TO TELL EVERYONE WHAT KIND OF PERSON YOU ARE SAYS MONSTERS IN THE CLOSET----OH, REALLY? THEY CAN JUST READ MY BLOG.
An Algorithm That Grants Freedom, or Takes It Away
Across the United States and Europe, software is making probation decisions and predicting whether teens will commit crime. Opponents want more human oversight.
10.18.2017 03:00 PM
Experts Want to End 'Black Box' Algorithms in Government
Researchers at AI Now say algorithms increasingly used by government can be opaque and discriminatory.
The right to due process was inscribed into the US constitution with a pen. A new report from leading researchers in artificial intelligence cautions it is now being undermined by computer code.
Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.
The AI Now report calls for agencies to refrain from what it calls “black box” systems opaque to outside scrutiny.
Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems making decisions about them operate and have been tested or validated. Such systems are expected to get more complex as technologies such as machine learning used by tech companies become more widely available.
“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.
Governments increasingly lean on algorithms and software to make decisions and set priorities. Sometimes, as in the case of setting bail, it can make government more equitable. But other algorithms have been found to exhibit bias. ProPublica reported last year that a scoring system used in sentencing and bail by multiple states was biased against black people.
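The kind of disparity ProPublica reported can be surfaced by a simple audit: compute the false-positive rate (people flagged high-risk who did not in fact reoffend) separately for each group and compare. A toy sketch with entirely made-up predictions and outcomes:

```python
def false_positive_rate(preds, labels):
    """Share of true negatives (label 0) that were flagged high-risk (pred 1)."""
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Made-up data: (predictions, actual outcomes) for two demographic groups
group_a = ([1, 1, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1])
group_b = ([0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 1, 1])

fpr_a = false_positive_rate(*group_a)
fpr_b = false_positive_rate(*group_b)
print(fpr_a, fpr_b)   # a large gap between groups signals biased scoring
```

Audits like this are exactly what "black box" vendor agreements make impossible when neither the code nor the per-group error rates are disclosed.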
Whatever the ultimate impact, citizens struggle to access information about algorithms with sway over their lives. In June, the Supreme Court declined to review a ruling from Wisconsin’s highest court that denied a defendant’s request to learn the workings of a tool called COMPAS used to set his criminal sentence. A project by legal scholars that used open-records laws to seek information about algorithms and scoring systems used in criminal justice and welfare in 23 states came back largely empty handed. In some cases, governments signed agreements with commercial providers restricting disclosure of any information about a system and how exactly it was being used.
AI Now’s call for a rethink of government use of algorithms is one of 10 recommendations in the 37-page report, which surveys recent research on the social consequences of advanced-data analytics in areas such as the labor market, socioeconomic inequality, and privacy.
The group also recommends that companies work on tools and processes to identify biases in training data, which have been shown to create software with unsavory tendencies. And the report calls for research and policymaking to ensure the use of automated systems in hiring doesn’t discriminate against individuals or groups. Goldman Sachs and Unilever have used technology from startup HireVue that analyzes the facial expressions and voice of job candidates to advise hiring managers. The startup says its technology can be more objective than humans; Crawford says such technology should be subject to careful testing, with the results made public.
But changes in how governments use algorithms to shape citizens’ lives could be slow to arrive. Ellen Goodman, a law professor at Rutgers who has studied the subject, says many cities and state agencies lack the expertise needed to design their own systems, or properly analyze and explain those brought in from outside.
On Sunday the UK government released a review that examined how to grow the country’s AI industry. It includes a recommendation that the UK’s data regulator develop a framework for explaining decisions made by AI systems.
On Monday New York’s City Council debated a bill that would require city agencies to publish the source code of algorithms used to target individuals with services, penalties, or police resources.
On Tuesday a European Commission working group on data protection released draft guidelines on automated decision making, including that people should have the right to challenge such decisions. The group’s report cautioned that “automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.” Its guidance will feed into a sweeping new data protection law due to come into force in 2018, known as the GDPR.
It appears unlikely that the US federal government will join efforts to engage with concerns about the effects and use of algorithms and AI in public life.
In 2016, the Obama administration held a series of workshops around the country on the benefits and risks of artificial intelligence. AI Now cohosted one of them with the White House’s Office of Science and Technology Policy and the National Economic Council. Neither organization now seems interested in the subject. The OSTP now has a fraction of the staff it did under the Obama administration. “AI policy is not at the top of the current White House’s agenda,” says Crawford.
SUBLIMINAL MESSAGING through COCHLEAR IMPLANTS has been the norm in BALTIMORE. Baltimore MEDICAL EXPERIMENTAL RESEARCH has implanted a majority of citizens and as I do MY DEPOSITION as to what those IMPLANTS have done to me-----we continue to express REALITY -----in Baltimore COCHLEAR IMPLANTS are used for SUBLIMINAL MESSAGING by MONSTERS IN THE CLOSET-----by POLITICAL MACHINES----who are behind this US CITIES AS FAILED STATES these few decades. The violence is SUBLIMINAL in large part.
When US citizens are placed on a DATABASE for BEHAVIOR while BRAIN/BODY IMPLANTS have such a degree of hold over the 5Ws of people's behavior, that behavior is NOT NATURAL---IT IS NOT THAT PERSON'S BEHAVIOR---
We believe SUBLIMINAL MESSAGING has indeed brought an extreme amount of male VIOLENCE AGAINST WOMEN for example-----I call this EMASCULATED MEN tied to PORN while connected to COCHLEAR IMPLANT subliminal messaging.
The amount of money global banking 1% SEX TRADE DARK WEB PORN cartels make from implanted MEN, with SEX made the only interest these few decades, IS TREMENDOUS. This is what I deal with in organized crime tied to HOSTING SERVER NOSY NEIGHBORS AND THE GANG-----same for our BALTIMORE communities experiencing third world violence.
This is what gets on someone's PERMANENT MEGA DATA CHINESE SOCIAL CREDIT SCORE through surveillance and each individual made VICTIM of these FALSE DATA cannot CLEAR that data OFF of scans tied to FACIAL RECOGNITION.
Subliminal Messages Can Fortify Inner Strength | Psychology Today
Apr 20, 2015 · On a cautionary note, the negative age stereotypes and subliminal messages that each of us absorb non-consciously every day through advertising and other streams of media can lead to lower self ...
Mind Control, Subliminal Messages and the Brainwashing of ...
Some of the most common subliminal messages promote sexual violence. Oftentimes, these messages can lead to horrible crimes — a chilling example is with serial killer Ted Bundy, who maintained that he was largely influenced by TV advertising which encourages brutality towards women.
Do Subliminal Messages Really Work?
For subliminal messages to influence behavior, people must already want to do that behavior. For example, researchers found that subliminal messages relating to thirst were only effective toward participants who were already thirsty (Strahan, Spencer and Zanna, 2002). For people who weren’t thirsty, the subliminal messages made no difference.
What Are Subliminal Messages And Do They Work?
What are subliminal messages?
Do subliminal messages work? Though everyone from Coca-Cola to Disney has been accused of using these tactics, few of us seem to know the truth about what these messages are and whether or not they're effective.
MY CASE is not of a black man predetermined to be VIOLENT via these IMPLANTS----my case is of a US MIDDLE-CLASS professional woman being brought into RETIREMENT----and how US citizens used to a QUALITY OF LIFE will handle third world income with NO PUBLIC TRUSTS---NO SOCIAL SAFETY NETS.
Black, white, and brown have already been extremely harmed by MOVING FORWARD CLINTON/BUSH/OBAMA----CHINESE BRAIN/BODY IMPLANT as STANFORD TOTAL PRISON MODEL.
I am constantly hearing from MONSTERS IN THE CLOSET-----WE HAVE TO GET HER ANGRY------and this is what is being done to our US city citizens caught in continuous violence.
Baltimore is the nation's most dangerous big city
Aamer Madhani
FEB 19, 2018 USA TODAY
The collective homicide toll for America’s 50 biggest cities dipped slightly in 2017, a USA TODAY analysis of crime data found.
The FBI won’t publish its annual comprehensive crime report until later this year, but an early review of police department crime data shows that killings decreased by at least 1% in large jurisdictions compared with 2016.
The modest decrease in killings comes after FBI data showed back-to-back years in which homicides rose sharply in large cities. (Homicides in cities with 250,000 or more residents rose by about 15.2% from 2014 to 2015, and 8.2% from 2015 to 2016.)
There were 5,738 homicides in the nation’s 50 biggest cities in 2017 compared with 5,863 homicides in 2016, a roughly 2.3% reduction.
Las Vegas Police reported 141 homicides for 2017 in its official tally but did not include the Oct. 1 mass shooting at an outdoor country music concert that left 58 dead. If those deaths were included in the department's tally, the national big city homicide toll fell by 1.1%, the USA TODAY review found.
Even with the sharp rise in homicides in the two years prior to 2017, the national murder toll continued to hover near historic lows.
The national decrease in killings in 2017 was largely driven by double-digit percentage dips in some of the nation’s biggest cities, including Chicago (14.7%), New York City (13.4%) and Houston (11%). In fact, the New York Police Department reported that its annual murder tally fell below 300 for the first time and the city notched its lowest per capita murder rate in nearly 70 years.
New York, which hit its nadir in the midst of the crack-cocaine epidemic when it tallied more than 2,200 murders in 1990, boasts that the nation’s largest city is now the safest it’s been since the Dodgers played in Brooklyn and a pizza slice set you back 15 cents.
While New York and others boasted of significant progress, other large cities saw a big surge in killings in 2017.
Baltimore is the big city with the highest per capita murder rate in the nation, with nearly 56 murders per 100,000 people. At 343 murders in 2017, the city tallied the highest per capita rate in its history. Columbus tallied 143 murders — 37 more than 2016 and the most the city has seen in a single year.
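The "nearly 56 per 100,000" figure checks out against Baltimore's population; the ~615,000 estimate below is an assumption for illustration, not a number from the article:

```python
murders = 343
population = 615_000   # approximate 2017 Baltimore population (assumption)

# Per capita rate = homicides per 100,000 residents
rate = murders / population * 100_000
print(f"{rate:.1f} homicides per 100,000 residents")   # ~55.8, i.e. "nearly 56"
```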
In both cities, officials blamed the rise in homicides on gangs and drug activity.
“In New York, they concentrated on the right neighborhoods, they’ve invested well in predictive analytics and technology,” said Peter Scharf, a criminologist at the LSU School of Public Health and Justice. “The other part of what we’re seeing nationally might be a story of haves and have-nots. While some departments have made the investments, other police departments are still in the backwater of policing.”
Chicago saw its murder tally dip to 650 in 2017 from 762 in the prior year. The murder toll remains high in the Windy City — near levels of violence the city endured in the late 1990s — but police officials there say they believe investments in technology are beginning to help officers stem the violence.
Dozens of chiefs and senior police officials from departments across the country gathered in Chicago late last month to trade notes on how to best use technology in the crime fight.
Chicago Police Superintendent Eddie Johnson said investments in technology are undoubtedly paying off. But his city’s department, like many others, will also have to continue to focus on improving relations with residents to further reduce the homicide toll, he said.
Chicago is one of many big departments that has seen its relationship strained in poor and minority communities in the aftermath of a series of controversial police-involved shootings across the country in recent years.
Some crime experts and law enforcement officials believe the fractured relationships could have had some impact on driving homicide rates in jurisdictions, such as Chicago and Baltimore, in recent years.
"We have an obligation and responsibility to keep our jurisdictions as safe as we can. The one way we really get to it is to have a collaborative effort," Johnson said. "You just can’t have law enforcement as the only entity out there trying to reduce crime. The partnerships you build with the community are paramount to reducing crime. I don’t care what jurisdiction you’re in.”