CINDY WALSH FOR GOVERNOR OF MARYLAND IS WAITING UNTIL AFTER THE PRIMARY ELECTION BECAUSE IF I DO NOT WIN-----I WILL BE CONTESTING THE ELECTION RESULTS AS WELL AS THE ELECTION VIOLATIONS.
I will be back to general policy issues in a few days-----this may seem boring, but election law is critical..... Please take the time to see why we never have the candidates in elections for whom we want to vote. Media and 501c3s think they can select whom they place in front of an audience----but they cannot. My case will make this primary election invalid if anyone other than Cindy Walsh wins the democratic primary.
THE PEOPLE IN POWER DID NOT WANT VOTERS GOING TO MY PLATFORM AND CAMPAIGN WEBSITE. THIS IS WHAT MAKES THESE DEBATE/FORUM CRITERIA ILLEGAL FOR MEDIA----THEY ARE OBVIOUSLY CENSORING BECAUSE OF A CANDIDATE'S PLATFORM, AND THE SUPREME COURT SAID IN AN FCC LAWSUIT THAT THIS IS ILLEGAL.
I referred yesterday to the June 3, 2014 deadline to register with a political party for the primary election later in the month. The general election allows a voter to cross party lines to vote for any candidate. If your 501c3 organization waits until after June 3 this year to introduce a candidate, it may very well be that they want to be sure you stay with the party with which you are registered------it may be that they want to be sure the time to network for a candidate is limited to those few weeks before an election. As statistics show, most people have made up their minds a few weeks before an election, so a campaign only then being introduced will rarely be able to sway voters from their current choice. In any case, introducing the candidates at the beginning of a primary is what any organization promoting election education would want to do. Reminding people of the candidates before the primary election will then spur participation.
Giving one example: WEAA and the Marc Steiner Show have been contacting me for two weeks. I announced my candidacy to all media outlets, labor, and justice organizations back at the end of February and made clear I wanted time within their election coverage. I hit public media hardest because they are required by law not to damage any campaign by selectively leaving out platforms they do not like. Marc received several requests from my campaign over 3 months and ignored all requests until-----you guessed it-----after that June 3 deadline. His staff contacted my campaign once a few weeks ago with dates for an interview, all just a few weeks from the primary election. HE KNOWS HE HAS DAMAGED MY CAMPAIGN. HE IS WAITING UNTIL THE END OF THE SEASON SO AS TO MAKE MY CANDIDACY 'UNVIABLE'. I have copies of all my communications. Who will think a campaign is serious if it is unheard of throughout the race? YOU BETCHA! This is happening to me this election, but it happens every election as candidates with platforms that work against the movement toward global markets and corporate control of government are left out. The platform issue of stopping the Trans Pacific Trade Pact (TPP) and the dismantling of public structures that build the stage for corporate control of government is deliberately left out. This is why Marylanders do not hear of or know of TPP or the negatives of moving neo-liberal/neo-con policies forward.
THIS IS ELECTION CAPTURE AND RIGGING AND IT IS ILLEGAL.
If your republican or democratic party structure is not working to allow all platforms a place in these primaries----they are operating against election law.
Changing Party Affiliation
Changing party affiliation affects only the Primary Election. Only the Republican and Democratic Parties are running in the Primary. A voter can vote for whomever they want in the General Election.
A campaign is made unviable when, from the beginning of an election, the candidate has no avenue through which to share a platform or gain name recognition. Today I want to talk about 'POLLING' in Maryland. It is a laugh-out-loud event!
'According to Maryland's State Board of Elections, there are 3,167,846 "Eligible Active Voters on the Precinct Register" for the Primary Elections as of 8/29/2010. 1,944,620 are registered as Democrats, and 915,506 are registered as Republicans.'
The methodologies of the two polls were significantly different. Unlike the St. Mary’s poll, The Sun's poll used live callers and concentrated on 500 likely voters rather than all registered voters. The college’s automated poll surveyed 954 registered voters and had a margin of error of 3.17 percentage points. The Baltimore Sun poll has a margin of error of 4.4 percentage points.
Now, consider the fact that Cindy Walsh for Governor of Maryland was not in any of these polls, so there is no way for me to have polling numbers. Yet all of the media outlets claim that only candidates who poll at a certain number can participate. Consider as well that the polls in Maryland surrounding elections have no statistical value----they are not relevant data because they do not meet any guidelines for accuracy. The St Mary's College poll used the most scientific data collection, and even that sample was small.
So, we would go to the St Mary's poll to even begin considering the real voter sentiment.
The Baltimore Sun poll----the Gonzales polls and others----are conducted by marketing firms contracted to run a poll, and as we know, marketing is about selling your product or candidate. These polls are purely marketing scams that make people believe a candidate is in the lead. Election officials know the psychology of elections: more people vote for the person they see in the lead....ergo---make sure Brown appears as the front-runner.
Keep in mind this poll was at the end of April after months of concentrated media attention for the 3 democratic candidates and not a word of Cindy Walsh in any venue.
Undecided voters dominate in new gubernatorial poll
April 23, 2014
By Michael Dresser
“Undecided” continues to hold a commanding lead in both the Democratic and Republican primary races for governor, according to a new poll released Wednesday by St. Mary’s College of Maryland.
The poll, an inaugural venture by the Southern Maryland college’s political science department, shows little movement in the race since previous surveys. The results suggest that voters have not tuned in to the June 24 primary contest.
Among the Democrats, the polls showed Lt. Gov. Anthony G. Brown with the support of 27 percent of registered primary voters. Attorney General Douglas F. Gansler and Del. Heather R. Mizeur of Montgomery County lagged behind at 11 percent and 8 percent respectively.
While Brown maintained a strong margin over his rivals, two Democrats said they were undecided for every one that backs the lieutenant governor in his bid to succeed term-limited Gov. Martin O’Malley.
The 54 percent undecided level on the Democratic side was eclipsed by the uncertainty among Republicans. Almost seven in 10 said they had not made a choice.
Among those that have picked a candidate, Larry Hogan, a former Ehrlich administration official and founder of the conservative group Change Maryland, led with 16 percent. Harford County Executive David R. Craig trailed with 8 percent. The severely underfunded campaigns of Del. Ron George of Anne Arundel County and Charles County business executive Charles Lollar were stuck below 4 percent.
Susan Grogan, professor of political science at St. Mary’s, said she doesn’t see much excitement about the race among voters.
“I would suspect we’re going to have a very low turnout,” she said.
The poll strongly tracks previous surveys of the race and shows little sign that any candidate is gaining significant ground. For instance, a poll released by The Baltimore Sun in February showed Brown with 35 percent, Gansler with 14 percent and Mizeur with 10 percent. On the Republican side Hogan polled at 13 percent and Craig at 7 percent.
The methodologies of the two polls were significantly different. Unlike the St. Mary’s poll, The Sun's poll used live callers and concentrated on 500 likely voters rather than all registered voters. The college’s automated poll surveyed 954 registered voters and had a margin of error of 3.17 percentage points.
___________________________________________
As you see with the Baltimore Sun and the polling firms it contracts-----they use the term 'likely' voter. This is the same element of arbitrariness that goes into any calculation in Maryland. 'Likely' rather than random means------we created bias and are going to capture data in the way we want to see the results. So, you not only have an extremely small number of people polled (the cohort), you use a selection method that deliberately skews the results you are collecting.
The St Mary's College poll was small as well, but at least it started from the premise of 'random' data collection. Any data collection with a margin of error over 3% is known to be useless-----unless you are simply marketing a candidate. Again, St. Mary's at least comes closest to an acceptable margin of error.
CINDY WALSH FOR GOVERNOR WILL SUBPOENA ALL POLLING METHODOLOGY FROM ALL SOURCES TO DETERMINE THE BIAS.
You do not have to be a rocket scientist to know that if you have two million democratic voters, poll 500 of them, and then introduce the bias of 'likely' voters-----you have nothing. This is how, each election, Maryland simply uses these poll results to promote the selected candidates, and the organizations participating in these elections via forums and debates use these trumped-up poll results to pretend a candidate is or is not viable.
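Anyone can check these numbers. Below is a minimal sketch in Python (my own illustration, not anything the pollsters published) of the standard margin-of-error formula for a simple random sample at a 95% confidence level, using the worst-case 50/50 split; it reproduces the 4.4% and 3.17% figures quoted above and assumes a genuinely random sample-----exactly the assumption a 'likely voter' screen breaks.

import math

Z_95 = 1.96  # z-score for a 95% confidence level

def margin_of_error(sample_size, p=0.5):
    # Worst-case (50/50) margin of error for a simple random sample.
    return Z_95 * math.sqrt(p * (1 - p) / sample_size)

for n in (500, 954, 1066):
    print(f"n = {n:4d} -> margin of error = {margin_of_error(n) * 100:.2f}%")

# n =  500 -> margin of error = 4.38%   (the Sun's stated 4.4%)
# n =  954 -> margin of error = 3.17%   (St. Mary's stated figure)
# n = 1066 -> margin of error = 3.00%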
Gonzales Research conducts surveys of registered voters – nationally, statewide, and in local jurisdictions. Each of the surveys listed here is in the public domain, but we ask that Gonzales Research & Marketing Strategies, Inc. of Annapolis be credited if any of the surveys are cited in a story or column.
The Disadvantages of a Small Sample Size
By A. E. Simmons, eHow Contributor
Sample size should be taken seriously when conducting surveys and experiments. Researchers and scientists conducting surveys and performing experiments must adhere to certain procedural guidelines and rules in order to insure accuracy by avoiding sampling errors such as large variability, bias or undercoverage. Sampling errors can significantly affect the precision and interpretation of the results, which can in turn lead to high costs for businesses or government agencies.
- Sample Size
- To conduct a survey properly, you need to determine your sample group; this sample group should include individuals who are subject to the survey's topic. For example, if you are conducting a survey on whether a certain kitchen cleaner is preferred over another brand, then you should survey a large number of people who use kitchen cleaners. The only way to achieve 100 percent accurate results is to survey every person who uses kitchen cleaners; however, as this is not feasible, you will need to survey as large a sample group as possible.
- Variability is determined by the standard deviation of the population; the standard deviation of a sample is how far the true results of the survey are from the results of the sample that you collected. You want to survey as large a sample size as possible; the larger the standard deviation, the less accurate your results will be, since smaller sample sizes get increasingly further away from the entire population.
- A small sample size also affects the reliability of a survey's results because it leads to a higher variability, which may lead to bias; the most common case of bias is a result of non-response. Non-response occurs when some subjects do not have the opportunity to participate in the survey. For example, if you call 100 people between 2 and 5 p.m. and ask whether they feel that they have enough free time in their daily schedule, most of the respondents might say "yes." This sample -- and the results -- are biased, as most workers are at their jobs during these hours. These people will not be included in the survey, and the survey's accuracy will suffer from non-response. Not only does your survey suffer due to timing, but the number of subjects does not help make up for this deficiency.
- Voluntary response bias is another disadvantage that comes with small sample sizes. If you post a survey on your kitchen cleaner website, then only a small number of people have access to or knowledge about your survey, and it is likely that those who do participate will do so because they feel strongly about the topic. Therefore, the results of the survey will be skewed to reflect the opinions of those who visit the website. If an individual is on a company's website, then it is likely that she supports the company; she may, for example, be looking for coupons or promotions from that manufacturer. A survey posted only on their website limits the number of people who will participate to those who already had an interest in their products, which causes a voluntary response bias.
_______________________________________
THE LACK OF RANDOMNESS MAKES THIS POLL INVALID.
So, St Mary's College provided the polling numbers closest to being statistically relevant. Now, when they say they polled almost 1,000 people, I assume that is the total----maybe 500 democrats and 500 republicans. I will have to see whether that 1,000 was for each party or in total and check the randomization procedure.
THEY HAVE A RANDOMIZED POLL WITH MARGIN OF ERROR CLOSE TO THE NECESSARY 3% AND A COHORT POSSIBLY LARGE ENOUGH FOR A RESULT. THE LARGER THE COHORT THE MORE STATISTICAL VALUE.
MARYLAND ALWAYS GIVES POLLING DATA THAT HAS NO STATISTICAL VALUE AND IT IS ALWAYS SKEWED IN WAYS THAT FAVOR THE OUTCOME THE RICH WANT----MEETING THE PROPER TERMS OF STATISTICAL VALUE IS NOT HARD.....IT JUST GIVES RESULTS THEY DO NOT WANT TO PUBLICIZE.
It is no coincidence that the latest poll coverage features not the St Mary's College poll but these invalid, marketed polls.
'The sampling rate must be more than 5 percent to accumulate a sample base that is comparable'-----5% of 2 million democrats = 100,000 people. This is a sample size that gives accuracy.
Determine Sample Size
Confidence Level: 95%
Confidence Interval: 3% margin of error
Population: 2 million democrats
Sample size needed: 1066
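The calculator output above is easy to reproduce. The sketch below (mine, in Python, not the calculator's own code) uses the standard sample-size formula at a 95% confidence level and a 3% margin of error, with the finite-population correction for roughly 2 million registered democrats; it lands at about 1,067, essentially the 1,066 reported above (the small difference is rounding).

import math

Z_95 = 1.96  # z-score for a 95% confidence level

def required_sample_size(margin_of_error, population, p=0.5):
    # Infinite-population sample size, then the finite-population correction.
    n0 = (Z_95 ** 2) * p * (1 - p) / margin_of_error ** 2
    return n0 / (1 + (n0 - 1) / population)

print(math.ceil(required_sample_size(0.03, 2_000_000)))
# prints 1067; the calculator quoted above reports 1066 -- the difference is rounding convention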
Factors that Affect Confidence Intervals
There are three factors that determine the size of the confidence interval for a given confidence level:
- Sample size
- Percentage
- Population size
Sample Size
The larger your sample size, the more sure you can be that their answers truly reflect the population. This indicates that for a given confidence level, the larger your sample size, the smaller your confidence interval. However, the relationship is not linear (i.e., doubling the sample size does not halve the confidence interval).
Percentage
Your accuracy also depends on the percentage of your sample that picks a particular answer. If 99% of your sample said "Yes" and 1% said "No," the chances of error are remote, irrespective of sample size. However, if the percentages are 51% and 49% the chances of error are much greater. It is easier to be sure of extreme answers than of middle-of-the-road ones.
When determining the sample size needed for a given level of accuracy you must use the worst case percentage (50%). You should also use this percentage if you want to determine a general level of accuracy for a sample you already have. To determine the confidence interval for a specific answer your sample has given, you can use the percentage picking that answer and get a smaller interval.
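A short sketch of that worst-case point (my own illustration in Python, not part of the quoted text): for a fixed sample size, the interval is widest at a 50/50 split and shrinks as the answer becomes lopsided.

import math

Z_95 = 1.96
n = 1000  # respondents

for p in (0.50, 0.60, 0.75, 0.90, 0.99):
    half_width = Z_95 * math.sqrt(p * (1 - p) / n)
    print(f"observed {p:.0%} -> +/- {half_width * 100:.2f} points")

# observed 50% -> +/- 3.10 points, observed 75% -> +/- 2.68, observed 99% -> +/- 0.62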
Population Size
How many people are there in the group your sample represents? This may be the number of people in a city you are studying, the number of people who buy new cars, etc. Often you may not know the exact population size. This is not a problem. The mathematics of probability proves the size of the population is irrelevant unless the size of the sample exceeds a few percent of the total population you are examining. This means that a sample of 500 people is equally useful in examining the opinions of a state of 15,000,000 as it would a city of 100,000. For this reason, The Survey System ignores the population size when it is "large" or unknown. Population size is only likely to be a factor when you work with a relatively small and known group of people (e.g., the members of an association).
The confidence interval calculations assume you have a genuine random sample of the relevant population. If your sample is not truly random, you cannot rely on the intervals. Non-random samples usually result from some flaw in the sampling procedure. An example of such a flaw is to only call people during the day and miss almost everyone who works. For most purposes, the non-working population cannot be assumed to accurately represent the entire (working and non-working) population.
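The finite-population correction described in the passage above is easy to sketch (this is my own illustration in Python, not The Survey System's code): with a truly random sample of 500, the stated margin of error barely moves between a population of 100,000 and one of 15,000,000. Whether the sample is actually random, as the paragraph directly above stresses, is the bigger question.

import math

Z_95 = 1.96
n = 500  # sample size

def moe_with_fpc(n, population, p=0.5):
    # Basic margin of error times the finite-population correction factor.
    fpc = math.sqrt((population - n) / (population - 1))
    return Z_95 * math.sqrt(p * (1 - p) / n) * fpc

for population in (100_000, 2_000_000, 15_000_000):
    print(f"population {population:>10,} -> +/- {moe_with_fpc(n, population) * 100:.2f} points")

# population    100,000 -> +/- 4.37 points
# population  2,000,000 -> +/- 4.38 points
# population 15,000,000 -> +/- 4.38 points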
_______________________________________________
When Gallup and others say that it does not matter how large the population is-----that 1,000 people polled will cover it-----they are full of it. When I hear that they polled 1,000 people to find an entire nation's opinion on a matter-----China has well over a billion people, for example-----you know you are getting propaganda, not polling.
For a question of voting you may not need to reach the 5% of population sample-----or the margin of error of 3%-----but you must be aware that a 4.4% margin of error with candidates polling at 10% is huge. So, the polls above used by the Baltimore Sun could have most of the politicians at the lower end barely polling.
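To make that concrete, here is a small sketch (my own, in Python) applying the Sun poll's stated 4.4 point margin of error to candidates reported in the low teens, using round numbers like those in the February Sun results quoted above:

moe = 0.044  # the Sun poll's stated margin of error
for reported in (0.14, 0.10, 0.07):
    low, high = max(reported - moe, 0.0), reported + moe
    print(f"reported {reported:.0%} -> plausible range {low:.1%} to {high:.1%}")

# reported 14% -> plausible range 9.6% to 18.4%
# reported 10% -> plausible range 5.6% to 14.4%
# reported 7%  -> plausible range 2.6% to 11.4%

A candidate reported at 10% could plausibly be anywhere from 5.6% to 14.4%-----the uncertainty is nearly half the reported number.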
My campaign was told over and again what the requirements to be in these televised debates were-----first the University of Maryland College Park forum told me 15% polling was their cutoff-----then the Maryland League of Voters used 10% as their cutoff-------and then Maryland Public Television told me the cutoff was 5% polling. The numbers kept dropping because the candidates in the races----whether democrat or republican----were not meeting these polling criteria. Yet these polling criteria kept Cindy Walsh out every time.
Remember, Cindy Walsh for Governor was never included in these polls of democratic candidates, yet all republican candidates were included.
Why put every republican candidate in the polls, debates, and forums while only the same 3 democratic candidates are seen over and over?
THE PEOPLE IN POWER DID NOT WANT ANYONE GOING TO MY PLATFORM AND CAMPAIGN WEBSITE. THIS IS WHAT MAKES THESE DEBATE/FORUM CRITERIA ILLEGAL FOR MEDIA----THEY ARE OBVIOUSLY CENSORING BECAUSE OF A CANDIDATE'S PLATFORM, AND THE SUPREME COURT SAID IN THE FCC LAWSUIT THAT THIS IS ILLEGAL.
Opinion poll methods can make a difference in accurately predicting elections
Identifying likely voters crucial to accurate polling
By Steve Israel, Times Herald-Record
Published: 2:00 AM - 10/28/12
Which poll should we believe? Gallup, which has former Massachusetts Gov. Mitt Romney leading President Barack Obama by 5 percentage points as of this past Friday? Or CBS News, which has Obama up 2 points? Just how do you know which poll to trust — or if you should trust any poll?
After all, how on earth can polling 1,000 Americans — or fewer — predict who 132 million voters will elect just nine days from now?
To learn the answers to these and other questions about polls, we interviewed four of the top poll experts in the country: Lee Miringoff, director of the Marist College Institute for Public Opinion in Poughkeepsie, which conducts the NBC News/Wall Street Journal poll; Don Levy, director of the Siena (N.Y.) Research Institute; Frank Newport, editor-in-chief of Gallup; and Paul Lavrakas, president of the American Association for Public Opinion Research, the oldest professional organization devoted to survey research.
What's the reason for taking a poll?
According to one of the oldest modern polling organizations, Gallup, the goal of a poll is to "come up with the same results that would have been obtained had every member of a population been interviewed."
The theory is that when you choose 1,000 or so people at random, it's the same as randomly sampling a much larger group.
So how are people polled?
The most common method is to get a representative sampling of the population by using "random digit phone dialing," according to Levy of Siena. Area codes are chosen to correspond to a cross section of the population, weighting the calls to represent population.
"So we have a lot more from the Bronx, than, say, Poughkeepsie," says Miringoff, who notes that the NBC NEWS/Wall Street Journal/Marist poll calls 1,400 to 1,500 people.
A computer then randomly chooses the last two digits of those numbers.
After that, polling gets a bit more complicated. For instance, since women and old people are more likely to be home to answer phones and respond to a survey, their responses must be weighted to represent their portion of the entire population.
Different polls, different pollsters
There are three varieties of polling methods, says Lavrakas.
The first, and he says, most accurate is that telephone interview conducted by real people. Gallup, Pew, Marist and other established pollsters use this method.
The second is the automated method, in which a computer asks a set number of questions and has no way of verifying whether the respondent is a 15-year-old nonvoting Obama supporter or a 50-year-old hard-core Romney Republican. Rasmussen Reports uses that method.
The third, and newest polling method — Internet polling — is the most unreliable, says Lavrakas, who notes that the Associated Press will not report the results of Internet polls.
Polling in the Internet age
Every time you look up, there seems to be a new poll released, often within minutes of an event like a presidential debate — thanks to the immediacy of the Internet. But instant doesn't necessarily mean accurate, says Frank Newport, editor-in-chief of Gallup. Instant polls don't have enough time to determine whether those polled actually represent the voting population. So while the public may have more access to more polls, the immediacy of the Internet hasn't actually altered the way traditional pollsters like Gallup work.
"It doesn't change what we're doing at all," says Newport.
Finding undecided voters
While specific questions vary by polling organization, these are basic questions a pollster asks, says Lavrakas. He notes that the pollster must always use the name of the candidate.
If the election were held today, who would you vote for? If the response is "I don't know," the next question is: "Are you leaning one way or another?"
That would be followed by "How confident are you that you would vote for the candidate?"
What determines a likely voter?
Again, specific questions vary by poll. But questions like these are asked:
Do you know where your polling place is?
On a scale of 0 to 10, how likely is it that you'll vote?
And then they choose "the likeliest of the likely," says Levy of Siena.
The push poll
Legitimate pollsters dismiss push polls, which are really campaign tools disguised as polls.
Levy explains that a hypothetical push poll can sound like this:
First, the caller asks who you might vote for.
If the answer is Obama, the next question might be:
Knowing that Obama is actually a Muslim, does that make you more or less likely to vote for him?
And knowing that he wasn't born in America, does that also make you more or less likely to vote for him?
"They attempt to manipulate the respondent to validate a particular point of view," says Levy, who stresses that legitimate pollsters never conduct push polls.
Why so much variation this year?
One reason might be the way the polls are conducted, say our polling experts.
Rasmussen tends to be more conservative, say the three experts, because it uses automated calls to only landline phones, which tend to attract replies from older voters who are generally more conservative.
Gallup, Marist and Siena also reach cell phones, which tend to attract younger voters.
So why the gap between Gallup and the other polls?
"It may not define likely voters as well," says Miringoff.
But the bottom line is even the pollsters don't know.
"The real question might not be why are polls so different," says Miringoff. "It's why are most of them even remotely similar?"
Another wildcard
You might think that the margin of error in polls — usually "plus or minus four percentage points" — refers to the spread between candidates. Think again. It means each number for each candidate could be 4 points higher or lower. So if a poll has one candidate leading 52 percent to 48 percent, that doesn't necessarily mean the race could be tied. It means one candidate could actually have 56 percent, while the other could have 44 percent — a 12 point spread.
What the poll doesn't tell you
All of which may explain why the race for president is so difficult to call. On top of the variables in polling, there's this:
"People are less certain as to what they're going to do," says Lavrakas. "So the polls are less reliable. Right now, I think either candidate could win the election."