
De Omnibus Dubitandum - Lux Veritas

Showing posts with label Junk Science. Show all posts

Monday, December 19, 2022

Has the Spike Protein Apocalypse Begun?

By JD Rucker, December 17, 2022

Anyone who has been to a hospital or doctor’s office lately has likely had a very different experience than what they’ve had in the past. Even during the height of the Plandemic, corporate media had to manufacture stories of hospitals being overloaded; when citizen journalists went to investigate hospitals that were allegedly maxed out, they found empty waiting rooms and no lines for admission.

While doctors and nurses were making TikTok dance videos, surpluses of ventilators were sitting in storage rooms. The narrative of overloaded hospitals was false for the most part. There were some hospitals that experienced short surges which crippled them temporarily, but it was nothing compared to what we're seeing today....To Read More....

 

Saturday, October 1, 2022

Why Science Journalists Rarely Get Their Stories Right

By Dr. Jay Lehr | Aug 6, 2022 @ America Out Loud

As I sat down to begin this essay, I turned on the television to see the beginning of Stage 19 of this year's Tour de France bicycle race. Before I got to the correct station, I passed a morning news program with a reporter stating that today's headline news was, "Scientists discover that humans are now causing the greatest mass species extinction in history."

That headline was undoubtedly the inspiration to build on material from Alex Epstein's new book, Fossil Future, in support of this article's title. I am optimistic that most of the public is beginning to ignore these constant scary headlines, which have no basis in fact.

The problems arise along the chain of effort that must be applied to all research. Modern research generates a nearly limitless amount of specialized knowledge, so much that only the rare scientist can know close to everything in his own field. There is no better example than climate science, where a specialist may master paleoclimatology, climate physics, oceanography, climate modeling, or another subfield, but rarely more than one.

The knowledge must be synthesized, disseminated, and evaluated to prepare the research results. This is almost always performed by people other than the researchers themselves. Synthesizing means organizing, refining, and condensing. Disseminators are those who broadcast the synthesized knowledge, such as newspapers, radio, and television. Evaluators are those who tell us what should be done with the information, which could mean making policies of all kinds.

Along the path from research to public knowledge lies a minefield of obstacles to the ultimate truth. I recognized this in my early work in environmental science, so I contacted 50 different scientists working on the environment and asked if they had experienced similar distortion of their work before it reached wide recognition. One and all had witnessed the same problems, and each agreed to write an original paper explaining the issues in his own area. That allowed me to compile the articles into a new book titled Rational Readings on Environmental Concerns, published in 1991 by the company now known as John Wiley & Sons. I was honored to learn that it played a role in Alex Epstein's attacks on these problems, which I will now describe further.

In climate science, the leading synthesizers are the United Nations Intergovernmental Panel on Climate Change (IPCC), the United States Government, and other organizations such as the American Meteorological Society and the National Academy of Sciences. The synthesis can and does go badly wrong, sometimes through honest mistakes but more often through the biases of the synthesizers, who are the non-stop targets of influence groups that stand to gain from how the research is presented.

A good example: if you wade through the thousands of pages of recent IPCC reports, you will not find a single proven statistic showing increasing weather-related deaths in recent decades. Yet the final summary states emphatically that this is the case. Dozens of studies show that the opposite is the truth.

Once the synthesizers do their jobs, however well or poorly, the essentials of their synthesis must be disseminated. There are all kinds of disseminators, certainly including today's alternative media. Still, the most important ones remain the mainstream media, which we all know spread misinformation daily to a public not well trained to judge its veracity.

If one has time to read the actual reports made by the synthesizers, the absurdity of the conclusions becomes clear. A recent IPCC report summary popularized the term "code red for humanity" with obvious terrifying intent. Yet the report itself offered contrary data on decreasing floods, droughts, and the like; the disseminators simply grabbed "code red" and ran with it.

It is difficult for any science journalist to grasp the reality of what is going on, and if they have a political bias, all is really lost.

Finally, we have the evaluators of the synthesized and disseminated information. Prominent evaluators are the editorial pages of major newspapers such as The NY Times, Washington Post, and Wall Street Journal. They are the institutions and people who help us evaluate what to do about what is disseminated and tell us what is true about the world.

One way to spot a potentially bad evaluation is when we are told to "listen to the scientists." This refrain is almost always used to get us to accept a given policy evaluation without critical thinking, which we should never do. One can quickly recognize when an evaluation system is very wrong. The first sign is when the evaluation has an anti-human basis; the other is when all sides of an issue, pros and cons, are not considered. If you have long followed the evaluations of nearly all radical environmental groups (and one wonders which are not radical), you will see they all focus on the terrible things humans do to nature, when the reality is that mankind is good and nature destructive.

The moral case for eliminating fossil fuels is a profoundly anti-human argument, as Alex Epstein proves brilliantly through his 420-page magnum opus. It is my intent to help you grasp the clarity of his wisdom, allowing you to be a citizen warrior on the side of humanity among your friends, neighbors, and colleagues, so that one day we will turn the corner on the public's understanding of the lies it has been exposed to.

A clear indication of the anti-human evaluation is their insistence not just on rapidly eliminating fossil fuels but on replacing them exclusively with green, unreliable energy systems. The most substantial evidence by far that their plans are going catastrophically wrong is that they oppose things on the basis of side effects while ignoring massive benefits. The obvious points are:

1 – Fossil fuels are a uniquely cost-effective source of energy.

2 – Cost-effective energy is essential to human flourishing. Get used to this term as it should always guide our decisions.

3 – Billions of people still suffer and die for lack of cost-effective energy.

It should be obvious by now to most of us that the anti-fossil fuel folks are, in fact, opposed to human beings. They actually believe humans are a cancer on the Earth. Otherwise, would they desire to force mankind to use only the intermittent, uncontrollable energy of the sun and the wind, which currently supplies only three percent of the world's energy, and even that requires fossil fuel backup to avoid the crushing destruction of electric grids that become unbalanced? The obvious answer is no!

Human Flourishing is defined as an effort to achieve self-actualization and fulfillment within the context of a larger community of individuals, each with the right to pursue such efforts.

Portions of this article were excerpted from the book, Summary of Fossil Future By Alex Epstein: Why Global Human Flourishing Requires More Oil, Coal, and Natural Gas–Not Less, with permission of the author Alex Epstein and the publisher Portfolio/Penguin.

I strongly recommend this book to everyone fighting for the preservation of life in America, which has been made possible by our abundance of fossil fuels, before the leftists, liberals, progressives, and communists succeed in their attempts to limit humankind's well-being.


Dr. Jay Lehr

Dr. Jay Lehr is a Senior Policy Analyst with the International Climate Science Coalition and former Science Director of The Heartland Institute. He is an internationally renowned scientist, author, and speaker who has testified before Congress on dozens of occasions on environmental issues and consulted with nearly every agency of the national government and many foreign countries. After graduating from Princeton University at the age of 20 with a degree in Geological Engineering, he received the nation’s first Ph.D. in Groundwater Hydrology from the University of Arizona. He later became executive director of the National Association of Groundwater Scientists and Engineers.

 

Tuesday, May 24, 2022

Science Must be Reproducible: Three Parts

By Dr. Jay Lehr | May 9th, 2022 | Regulation, Science @ CFACT

The National Association of Scholars (NAS) is a nonprofit organization of academics and independent scholars intent on recapturing the essence of scholarship, which was so well respected in the past. We once respected all doctors without questioning their level of knowledge. Perhaps you have heard the joke that begins with the question, "What do you call an individual who graduated from medical school with a D average?" The answer: "Doctor." The same was true for academics, the college professors and all those who taught us at a college or university. Sadly, as government slowly took over 80% of all academic research, the standard of excellence declined. The byword of too much research became "as the fear increases, so does the money," along with government involvement.

In hopes of bringing back the level of excellence among teachers and researchers, NAS was formed. It does its own research into how schools are performing in the modern era. While there remains much that is good, there is a great deal that is bad.

This is the first of three essays drawn from their new publication, aptly titled Shifting Sands. It focuses on the failing efforts to reproduce scientific research that too often ends up supporting unnecessary or inappropriate government regulations. Much of the book examines the tremendous flaws in EPA's effort to tighten the already unsupportable air quality regulation of particulate matter smaller than two and a half microns (millionths of a meter), called the PM2.5 rule. I wrote on these pages, in the weeks of March 27 and April 4, 2022, about the PM2.5 hearing EPA held by Zoom on February 25. All but two of those who testified opposed EPA's effort to tighten the current rule. There were 15 of us testifying against their plan. The EPA panel on the conference call did not ask a single question of the 15 people giving testimony in opposition. We suspect they had no interest in even listening to us. We all agreed that none of the evidence EPA was using to tighten the PM2.5 rule could be reproduced, even if their data could be obtained.

An irreproducibility crisis afflicts a wide range of scientific and social scientific disciplines, from epidemiology to social psychology. Science has always had a layer of untrustworthy results published in respectable places, and experts who are eventually shown to be sloppy, mistaken or untruthful. Hermann Muller even won the Nobel Prize for his fraudulent studies of the fruit fly, now known to have produced the unsupportable Linear No Threshold model that has handicapped work on medical radiation for more than a half century. But the irreproducibility crisis is something new. Its magnitude has made many scientists deeply skeptical of others' research. And most of today's work is performed on the public's dollar. A majority of modern research may well be wrong. How much government regulation is actually built on irreproducible science?

In the NAS text Shifting Sands, the authors identify eight sources of misdirection leading to irreproducibility:

  • Malleable research plans
  • Legally inaccessible data sets
  • Opaque methodology and algorithms
  • Undocumented data cleansing
  • Inadequate or nonexistent data archiving
  • Flawed statistical methods
  • Publication bias hiding negative results
  • Political or disciplinary groupthink (political correctness)

Government regulation is meant to clear a high barrier of proof. Regulations should be based on a large body of scientific research, the combined evidence of which provides sufficient certainty to justify reducing Americans' liberty with a government regulation.

The justifiers of regulations based on flimsy or inadequate research often cite the "precautionary principle." They say that instead of basing a regulation on rigorous science, they may base it on the mere possibility that a scientific claim is accurate. Their logic is that it is too dangerous to wait for the actual validation of a hypothesis, and that a lower standard of reliability is warranted when dealing with matters that might involve severely adverse outcomes. The invocation of the precautionary principle is not only non-scientific but is also an inducement to accept poor science and even fraud.

We are living with this right now, as the government wants to stop the use of fossil fuels because of a belief that they could drive the earth's temperature to an unwanted level. No proof of this exists.

The political consequences have unavoidably had the effect of tempting political activists to skew scientific research in order to influence the manner in which the government weighs evidence. Any formal system of assessment inevitably invites attempts to game it.

To all this we must add the distorting effects of massive government funding of scientific research. Our federal government is the largest single funder of research in the world. Its expectations affect not only the research it directly funds but also all research done in hopes of receiving federal funding. Government experts therefore have it in their power to create a skewed body of research, which they can use to justify regulations.

A 2020 report prepared for the Natural Resource Defense Council estimates that American air pollution regulations cost $120 billion per year, and we may take that estimate provided to us by an environmental advocacy group to be the lowest plausible number.

It is time for US citizens to know all this and react to it in a manner that begins swinging the pendulum back toward more reliable research conclusions.

Note: Portions of this essay were excerpted from the book Shifting Sands with permission of the National Association of Scholars (NAS) and its authors Peter Wood, Stanley Young, Warren Kindzierski, and David Randall.

Irreproducible Science – Part Two

By Dr. Jay Lehr | May 16th, 2022 | Science @ CFACT

The empirical scientist conducts controlled experiments and keeps accurate, unbiased records of all observable conditions at the time the experiment is conducted. If a researcher has discovered a genuinely new or previously unobserved natural phenomenon, other researchers, with access to his or her notes and some apparatus of their own devising, should be able to reproduce or confirm the discovery. If sufficient corroboration is forthcoming, the scientific community eventually acknowledges that the phenomenon is real and adapts existing theory to accommodate the new observations.

The validation of scientific truth requires replication or reproduction. Replicability most commonly refers to obtaining an experiment's result in an independent study, by a different investigator with different data, while reproducibility refers to different investigators using the same data, methods, and/or computer code to reach the same conclusions.

Yet today the scientific process of replication and reproduction has ceased to function properly. A vast proportion of the scientific claims in the published literature have never been replicated or reproduced. Estimates are that a majority of the published claims that cannot be replicated or reproduced are in fact false.

An extraordinary number of scientific and social-scientific disciplines no longer reliably produce true results, a state of affairs commonly referred to as the Irreproducibility Crisis. A substantial majority of 1,500 active scientists recently surveyed by Nature magazine called the situation a crisis. The scientific world's completely inappropriate professional incentives bear much of the blame for this catastrophic failure.

Politicians and bureaucrats commonly act to maximize their self-interest rather than acting as disinterested servants of the public good. This applies specifically to scientists, peer reviewers and government experts. The different participants in the scientific research system all serve their own interests as they follow the system's incentives.

Well-published university researchers earn tenure, promotion, lateral moves to more prestigious universities, salary increases, grants, professional reputation, and public esteem, above all from publishing exciting new positive results. The same incentives affect journal editors, who receive acclaim for their journal, and personal awards, by publishing what may be viewed as exciting new research, even though the research has not been thoroughly vetted.

Grantors want to fund exciting research, and government funders possess the added incentive that exciting research with positive results supports the expansion of their organization's mission. American university administrations want to host grant-winning research, from which they profit by receiving overhead costs, frequently the majority of the amount of the grant. As one who has experienced and viewed this firsthand, I can tell you it would boggle the reader's mind how huge a portion of most research grants goes to the university as overhead rather than to actual research costs.

All these incentives reward published research with new positive claims, but not necessarily reproducible research. Researchers, editors, grantors, bureaucrats, and university administrations each have an incentive to seek out what appears to be exciting new research that draws money, status, and power. There are few if any incentives to double-check that research. Above all, they have little incentive to reproduce the research, to check that the exciting claim holds up, because if it does not, they will lose money, status, and prestige.

The scientific world's incentives for new findings rather than reproducible studies drastically affect what gets submitted for publication. Scientists who try to build their careers on checking old findings or publishing negative results are unlikely to achieve professional success. The result is that scientists do not submit negative results for publication. Some negative results go to the file drawer. Others somehow turn into positive results as researchers consciously or unconsciously massage their data and their analyses. (As a science modeler, I can report that we call this "tuning," a technical word for cheating.) Neither do they perform or publish many replication studies, since the scientific world's incentives do not reward those activities either.

The concept of statistical significance is being tortured to the point that hundreds if not thousands of useless papers claiming significance appear everywhere.

Researchers try to determine whether the relationships they study differ from what can be explained by chance alone by gathering data and applying hypothesis tests, also called tests of statistical significance. Most commonly they start by testing the chance that there is no actual relationship between two variables, which is called the "null hypothesis." If that test fails, making it likely there is a relationship, they go on to other hypotheses. How well the data support the null hypothesis (no relationship) is summarized in a statistic called the p-value. If the p-value is less than 5%, or 0.05, it is assumed there may be a relationship between the variables being studied.
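For readers who want to see the mechanics, here is a minimal sketch in Python; the two samples and the 0.05 threshold are illustrative assumptions, not data from any study discussed here:

```python
# Minimal illustration of a null-hypothesis test and its p-value.
# The two samples below are made-up numbers for demonstration only.
from scipy import stats

group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 6.0, 5.4]

# Null hypothesis: the two groups share the same mean (no relationship
# between group membership and the measured value).
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Convention: reject the null hypothesis when p < 0.05.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Relationship plausible" if p_value < 0.05 else "No evidence of a relationship")
```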

The government's central role in science, both in funding scientific research and in using scientific research to justify regulation, adds tremendously to the growth of flimsy statistical significance throughout the academic world. Within a generation, statistical significance went from a useful shorthand that agricultural and industrial researchers used to judge whether to continue their current procedures or switch to something new, to a prerequisite for regulation, government grants, tenure, and every other form of scientific prestige, as well as for publication.

Many more scientists use a variety of statistical practices with more or less culpable carelessness, including:

  • improper statistical methodology
  • biased data manipulation that produces desired results
  • selecting only measures that produce statistical significance and ignoring any that do not
  • using illegitimate manipulations of research techniques

Still others run statistical analyses until they find a statistically significant result and publish only that one result. This is called "p-hacking." Far too many researchers report their methods unclearly and let the uninformed reader assume they actually followed a rigorous scientific process.
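A short simulation makes the danger concrete. In this hedged sketch with invented random data, an analyst tests twenty unrelated outcomes and reports only the best one, "finding" significance far more often than the nominal 5% rate:

```python
# Demonstration of p-hacking: test many unrelated outcomes, keep the best.
# All data are random noise, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, tests_per_study, n = 1000, 20, 30
false_positives = 0

for _ in range(trials):
    exposure = rng.normal(size=n)
    # Twenty independent outcomes, none actually related to the exposure.
    p_values = [stats.pearsonr(exposure, rng.normal(size=n))[1]
                for _ in range(tests_per_study)]
    if min(p_values) < 0.05:          # report only the "best" outcome
        false_positives += 1

print("Nominal false-positive rate: 5%")
print(f"Observed after p-hacking: {false_positives / trials:.0%}")  # roughly 64%
```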

The most insidious form of scientific cheating is HARKing, Hypothesizing After the Results are Known: choosing a hypothesis only after collecting the data, so that the hypothesis fits a desired result. A more obvious word for it is cheating. Irreproducible research hypotheses produced by HARKing send whole disciplines chasing down rabbit holes.

Publication bias and HARKing have collectively degraded scientific research as a whole. In addition, decades of surveys show that researchers are unlikely to publish any negative results their studies uncover.

A false research claim can become the foundation for an entire body of literature that is uniformly false and yet becomes an established truth. We cannot tell exactly which pieces of research have been affected by these errors until scientists replicate every piece of published research. Yet we do possess sophisticated statistical strategies that allow us to diagnose specific claims used to support government regulation. One such method, an acid test for statistical skullduggery, is p-value plotting, described in detail in the National Association of Scholars handbook, Shifting Sands, a brief paperback I cannot recommend too strongly.
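As a rough illustration of the idea (my own hedged sketch, not the NAS authors' code): rank-order the p-values collected from the studies in a meta-analysis and plot them against their rank. If there is truly no effect, p-values are uniformly distributed and fall near a straight line; a sharp hockey-stick bend suggests a mix of real effects and p-hacked results.

```python
# Illustrative p-value plot in the spirit described above.
# The p-values below are invented for demonstration, not taken from any study.
import matplotlib.pyplot as plt
import numpy as np

p_values = np.sort([0.001, 0.003, 0.02, 0.04, 0.21, 0.35, 0.48, 0.62, 0.77, 0.91])
rank = np.arange(1, len(p_values) + 1)

plt.plot(rank, p_values, "o-", label="observed p-values")
plt.plot(rank, rank / len(p_values), "--", label="uniform (no-effect) reference line")
plt.xlabel("rank")
plt.ylabel("p-value")
plt.legend()
plt.show()
```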

Note: Portions of this essay were excerpted from the NAS book Shifting Sands with permission of the National Association of Scholars and its authors Peter Wood, Stanley Young, Warren Kindzierski, and David Randall.

The National Association of Scholars' (NAS) recognition of a scientific duplication crisis – Part 3

By Dr. Jay Lehr | May 23rd, 2022 | Science @ CFACT (Part One, Part Two)

EPA regulations rely on the environmental epidemiological literature without applying rigorous tests for reproducibility, and without considering the environmental epidemiology discipline's general refusal to take account of the need for Multiple Testing and Multiple Modeling. Such rigorous tests are needed not least because earlier generations of environmental epidemiologists have already identified the low-hanging fruit.

Those included massive statistical correlations between risk factors and health outcomes, such as the connection between smoking and lung cancer. Modern environmental epidemiologists habitually seek out small but statistically significant associations between risk factors and health outcomes. These practices render their research susceptible to reporting false positives as real results. They risk mistaking an improperly controlled co-variable for a positive association.

Environmental epidemiologists are aware of these difficulties but have regardless made their discipline into an exercise in applied statistics. They do little to control for bias, p-hacking and other well-known statistical errors. The intellectual leaders of their discipline have positively counseled against taking measures to avoid these pitfalls. Yet environmental epidemiologists, and the bureaucrats who depend on their work to support regulations, proceed as a field with unwarranted self-confidence. They have an insufficient sense of the need for humble awareness of how much statistics remains an exercise in measuring uncertainty rather than establishing certainty. Their results do not possess an adequate scientific foundation. Their so-called "facts" are built on shifting sands, not on the solid rock of transparent and critically reviewed scientific inquiry.

A NAS study showed how one particular set of statistical techniques, simple counting and p-value plots, can provide a severe test for environmental epidemiology. Meta-analyses must be used to detect p-hacking and other frailties in the underlying scholarly literature. We have used these techniques to demonstrate that meta-analyses associating PM2.5 and other air quality components with mortality, heart attacks and asthma attacks fail this severe test.

The NAS study also demonstrates negligence on the part of both environmental epidemiologists and the EPA. The discipline of environmental epidemiology has failed to adopt a simple statistical procedure to test their research. The EPA failed to require that research justifying regulation be subjected to such a test. These persistent failures undercut confidence in their professional capacities as researchers and as regulators.

Both environmental epidemiology as a discipline, including its journals, foundations and tenure committees, and the EPA must adopt a range of reforms to improve the necessary reproducibility of their research. However, NAS directs its recommendations to the EPA and, more broadly, to federal regulatory and granting agencies.

They have reluctantly come to the conclusion that scientists will not change their practices unless the federal government credibly warns them it will withhold government grant dollars until they adopt stringent reproducibility reforms. NAS has also come to the conclusion that federal regulators will not adopt stringent new tests of science underlying regulation unless they are explicitly required to do so.

The National Association of Scholars recommends the following eleven actions be taken by the EPA in order to bring its methodologies up to the level of Best Available Science, which is mandated in The Information Quality Act of 2019.

  1. The EPA should adopt resampling methods as part of its standard battery of tests applied to environmental epidemiology research.
  2. The EPA should rely for regulation exclusively on meta-analyses that use tests to take account of endemic questionable research procedures, p-hacking and HARKing.
  3. The EPA should redo its assessment of base studies more broadly to take account of endemic questionable research procedures, p-hacking and HARKing.
  4. The EPA should require preregistration and registered reports of all research that informs regulation.
  5. The EPA should also require public access to all research data used to justify regulation.
  6. The EPA should consider the more radical reform of funding data set building, and data set analysis separately.
  7. The EPA should place greater weight on reproduced research.
  8. The EPA should constrain the use of “weight of evidence” to take account of the irreproducibility crisis.
  9.  The EPA should report the proportion of positive results to negative results in the research it funds.
  10.  The EPA should not rely on the research claims of other organizations until those organizations adopt sound statistical practices.
  11.  The EPA should increase funding to investigate direct causal biological links between substances and health outcomes.

NAS has used the phrase “irreproducible crisis” throughout this essay, and they note that distinguished meta-researchers prefer to regard the current state of affairs as a challenge rather than a crisis.

You do not need to believe it to be a crisis. These current scientific practices are simply not the best available science. We should use the best scientific practices simply because they are the best scientific practices. Mediocrity ought not be acceptable.

If this is the first article you have read in this series please go back to the past two weeks at cfact.org to read the even more extensive parts 1 and 2 or click on my name at the beginning of this article and all my previous article titles will pop up on a list. Click on any title and the full article will appear.

There is no doubt that all CFACT readers question many EPA regulations. After you read this series of articles extracted from the National Association of Scholars booklet, SHIFTING SANDS, you will question even more.

Note: Portions of this essay were excerpted from the book Shifting Sands with permission of the National Association of Scholars (NAS) and its authors Peter Wood, Stanley Young, Warren Kindzierski, and David Randall.

Author

  • CFACT Senior Science Analyst Jay Lehr has authored more than 1,000 magazine and journal articles and 36 books. Jay's new book, A Hitchhiker's Journey Through Climate Change, written with Teri Ciccone, is now available on Kindle and Amazon.

Sunday, November 22, 2020

EPA Transparency and the Deep State

By Duggan Flanakin

Ballot harvesting, behind-the-curtains ballot counting and other hijinks have made transparency a critical issue this election year.

Meanwhile, as the U.S. Environmental Protection Agency celebrates its fiftieth birthday, political battles continue to rage over the extent of public, executive and congressional oversight, and access to research files, original data and other information used by the agency in taking legal actions against individuals, institutions and businesses. The latest salvos involve the first-ever “transparency” requirements for EPA guidance documents – requirements likely to be tossed out by a Biden Administration.

In October 2019, President Trump signed an executive order to curb what he called abuses of authority by unaccountable bureaucrats who were “imposing their private agendas” on Americans. “A permanent federal bureaucracy,” he observed, “cannot become a fourth branch of government unanswerable to American voters.” Nor should federal agencies be able to impose multi-billion-dollar regulations, while claiming studies used to justify them are proprietary, confidential or otherwise inaccessible.

This February, the EPA launched a new searchable portal to provide public access to agency guidance documents. When it was finalized in July, the EPA brought over 9,000 guidance documents out of the darkness and made the entire active guidance library available to the public for the first time.

In September, the EPA finalized a rule that significantly increases the transparency of guidance practices and amends the agency’s process for managing guidance documents. The rule establishes the first formal public petition process for asking the EPA to modify, withdraw or reinstate a guidance document.

You would think “transparency” would be universally practiced and praised. However, the Deep State, science establishment, activist and pressure groups, and Democratic Party politicians have been horrified. Some claimed the rule reveals and distorts EPA’s decision-making processes. Others said it risks exposing private medical data and other confidential information. Still others carped that the rule is a bad-faith ploy to hamstring the agency’s ability to regulate industries and individuals.

Indeed, last October, an unsigned article in Wired magazine (we can’t even have author transparency) claimed the regulation was a Zombie-like attempt to resurrect the Secret Science Reform Act. Horrors!

The proposed legislation merely attempted to end the EPA’s widespread practice of basing regulations and guidance on research whose details remain hidden behind confidentiality agreements and are not publicly accessible, and whose research data cannot be replicated or independently verified.

During Congressional hearings on the proposal, critics claimed transparency would force the EPA to exclude important studies to protect confidentiality agreements. FactCheck.org found that private data sometimes cannot be redacted. But it also acknowledged that the rule allows the EPA administrator to exempt regulations if releasing study data publicly (rarely) does conflict with protecting privacy. It also allows for alternatives to complete public release if the data actually include confidential information.

One of the most scandalous cases of regulatory secrecy (and presidential secrecy) involved the acid rain provisions of the 1990 Clean Air Act Amendments. President Bush and the EPA suppressed the findings of the 10-year, $537 million National Acid Precipitation Assessment Program (NAPAP), which had been authorized by President Carter.

To gain public support for the legislation, EPA scientists conjured up scary scenarios, claiming that sulfur dioxide emissions from coal-fired power plants combined with water in the air to form acid rain that polluted streams, lakes and rivers and damaged trees, wildlife and buildings. The NAPAP found that the acidity of a lake is determined as much (or more) by the acidity of local soil and vegetation as it is by acidic rain. The frightening scenarios were wildly exaggerated, to justify closing power plants.
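For reference, the chemistry behind that claimed mechanism is the textbook atmospheric oxidation of sulfur dioxide into sulfuric acid (a standard summary, not drawn from the NAPAP report itself):

$$2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}$$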

Moreover, many of these lakes were historically acidic and fishless until around 1900, when logging removed the acid vegetation and made the soil slightly alkaline. After logging slowed to a halt (around 1915), the naturally acidic decaying vegetation built up again, and the lakes became acidic again. In many cases forests were also debilitated due to insects or drought – not acid rain.

Curiously, a 1991 paper by environmental law scholar Richard Lazarus argued that Congress should let EPA be more independent, while admitting that legislators and regulators alike “have rarely known the best way to respond to an environmental pollution problem at the time a statute was passed.” Lazarus further claimed, “Statutory prescription therefore is an especially risky endeavor [that] can lead to wasteful expenditures for pollution control and … to more, rather than less, environmental degradation.”

These realities, Lazarus argued, make congressional oversight problematical, especially because the scientific options proposed by regulators for solving pollution often conflict with the political interests of lawmakers. True. But what if the legislators’ science is corrupted by “dark money” and the perpetual quest for more agency funding? Or if the regulators’ science is corrupted or weaponized by White House or Deep State biases, agendas, censoring of certain views, or manipulation or fabrication of data?

(In a republic, at least theoretically, the political leadership is informed by a citizenry that has all the needed facts, and ultimately has the authority to decide whether or not to follow the particular scientific pathways favored by regulators. In a pure democracy, minority views can be deemed or made irrelevant.)

A 2018 CFACT report assessed the extent of the public information problem, noting that EPA regulations have the force of law and constitute 25% of all federal regulations. Congress often grants regulatory bodies immense power over how people and businesses may operate, without giving targeted entities even the same level of due process that the law affords to criminal defendants. We should expect EPA to expand these overly broad mandates even further.

Indeed, the CFACT report contends, federal bureaucrats, and EPA administrators in particular, determine “who gets a permit to operate, and who does not; what technologies a business must use; what lightbulbs are available for your homes; what gas we can buy; what chemicals can be used; where companies can mine; what local land use decisions will survive; and even where a pond can be built on private property.”

It concluded: “While the President has massive powers over war and peace, and sets the operating philosophy of federal agencies, the EPA Administrator has direct power over the business operations … and thus the economy … of the entire nation.”

In creating the Consumer Finance Protection Bureau in 2010, the Democrat-controlled Congress gave its director broad powers and virtual immunity from political scrutiny – with more power than the President. The Supreme Court only narrowly recognized this as an unconstitutional grant of power to an unelected official. The EPA Administrator’s powers should be equally restrained.

In a second October 2019 executive order, President Trump required that agencies inform individuals of regulatory cases against them, acknowledge their responses, and educate businesses about new regulatory impacts. This order too should be non-controversial, but could well be axed by a President Biden.

Under current law, those whose livelihoods are assaulted by regulatory bodies can challenge an agency in court only after the agency has sullied their reputations and prosecuted their alleged noncompliance. Even then, the environmental defendant typically loses, because courts have mostly upheld the agencies if their decisions are “rational,” even if (absent long-sought transparency) the agency has concealed any or all of its “public” (but secret) data and records that do not support its “reasoning.”

Ultimately, the future of EPA transparency (and openness in all government) rides on the final outcome of the 2020 Presidential election. It’s fascinating how entities that set arbitrary and ever-changing standards for “acceptable” speech, favor crude protests over peaceful assemblies, and seek to curtail entire industries – also see no reason to inform the public of the rationale behind their politicized “scientific” decisions on issues from climate change to COVID to all manner of environmental regulation.

Via email

 

Friday, November 13, 2020

Moral Fashions and the Corruption of Science

Author:  John A.  Posted by Rich Kozlovich   

Editor's Note:  Originally published July 26, 2009 @ Paradigms and Demographics, I asked John A. for his last name in order to properly credit this commentary, but he commented that he keeps his “surname off the Internet."  John apparently is a history buff and categorizes himself as a “classical liberal”. 

Those among us who are history buffs will find that statement insightful. For those who aren’t, I have linked the history of “Classical Liberalism” in the article. It isn’t what you think. I have also linked a number of names and events that may not be commonly known to most people in order to give everyone the true flavor of what John is saying. 

I would also like to thank him for allowing me to reprint what originally was an e-mail that appeared in the blog Greenie Watch.

I didn't recheck all the links, so I hope they all still work. RK

The history of science is rife with examples of political, social and moral fashions which not simply influence, but pervert the scientific method and corrupt the conduct of scientists. Einstein faced off against the political and moral fashions of Nazism and eugenics, but plenty of his colleagues happily incorporated those twin systems into their own research. Eugenics also laid the foundations for the moral crusade against alcohol in early 20th Century America, which was again a supposedly scientific assessment delivered as a moral panic that must be addressed immediately lest America fall into a deep pit of moral degeneration.

The example of Trofim Lysenko and the political outlawing of Mendelian genetics in Stalinist Russia is a particularly scary example of a political fashion elevated into a moral and political imperative by a dangerously unstable man who became President of the Russian Academy of Sciences. The parallels with the modern global warming scare are obvious.

Another example would be neo-Malthusianism, as popularized repeatedly in the 20th Century, first by Paul Ehrlich in the 1960s and more recently by the scarily named "Optimum Population Trust" (and here), which includes such luminaries as Sir David Attenborough calling for mandatory limits on family size to prevent near-future overpopulation and mass starvation. Once again, a supposed scientific analysis is communicated as a moral imperative. John Holdren, now President Obama's Climate Czar, co-wrote several books with Paul Ehrlich in the 1970s, at least one of which argued for forced abortions, forced adoptions of illegitimate children or of children from mothers "who contribute to general social deterioration by overproducing children," and the introduction of chemicals into water and food that rendered people sterile. All of this to forestall a crisis of overpopulation by the year 2000!

Carl Sagan, Ehrlich and others began and propagated the Nuclear Winter story of the 1980s, together with scary scenarios about the likely darkening of the skies due to dust from burning cities rising into the stratosphere and blocking out the Sun. All with the aid of computer models with extremely rubbery parameters and dubious simplifications. A moral imperative against nuclear weapons? You betcha. Even Richard Feynman, iconoclast as he was, while averring that the underlying theory was nonsense, could not raise his voice too loud lest people think he was in favour of nuclear proliferation. Moral panics do that to the best of scientists.

There are lots more examples, but you get the idea. These scientific fashions all in their own time held great sway in academia and mainstream media. They divided scientists into those who were credible and those who were so morally and intellectually corrupt as to actually oppose these ideas.

Modern environmentalism has most, if not all, of the above ideas incorporated into the unholy fusion of science and Marxist political theory now called "ecology," which is really a manifestation of what David Henderson memorably called "Global Salvationism."

The most interesting thing about all of this is that I, as a classical liberal, can find common cause with people from a wide spectrum of political and philosophical beliefs on the lesson of history: moral fashions in science are endemic, cyclical and a constant menace to the real business of scientists, which is to understand how the Universe works.

Scientists don't live in a fashion-free vacuum. They dress themselves in the fashions of the day, read the latest scare stories of the day, follow the latest celebrity soap operas of the day and, most of all, abide by the rule not to upset the funding apple-cart from which their work is financed, whatever their personal and moral qualms, at least until retirement.

Wednesday, August 12, 2020

‘No change in insect population sizes’: Massive North American study challenges ‘insect apocalypse’ claims

August 12, 2020

This article or excerpt is included in the GLP's daily curated selection of ideologically diverse news, opinion and analysis of biotechnology innovation.

In recent years, the notion of an insect apocalypse has become a hot topic in the conservation science community and has captured the public's attention. Scientists who warn that this catastrophe is unfolding assert that arthropods – a large category of invertebrates that includes insects – are rapidly declining, perhaps signaling a general collapse of ecosystems across the world.

Starting around the year 2000, and more frequently since 2017, researchers have documented large population declines among moths, beetles, bees, butterflies and many other insect types. If verified, this trend would be of serious concern, especially considering that insects are important animals in almost all terrestrial environments.

But in a newly published study that I co-authored with 11 colleagues, we reviewed over 5,000 sets of data on arthropods across North America, covering thousands of species and dozens of habitats over decades of time. We found, in essence, no change in population sizes.

These results don't mean that insects are fine. Indeed, I believe there is good evidence that some species of insects are in decline and in danger of extinction. But our findings indicate that overall, the idea of large-scale insect declines remains an open question....To Read More....

Tuesday, June 2, 2020

How to Make Money by Spreading Anti-GMO Propaganda

Anti-GMO activists routinely label scientists and biotech supporters "shills for Monsanto." However, a new study suggests that those who spread GMO disinformation are the ones who are actually motivated by money.


By Alex Berezow, PhD — December 3, 2019 @ American Council on Science and Health

The anti-GMO movement is bizarre in so many ways. The topic is essentially non-controversial in the scientific community, with 92% of Ph.D.-holding biomedical scientists agreeing that GMOs are safe to eat.

Yet, GMOs have become a perverse obsession among food and environmental activists, some of whom have gone so far as to accuse biotech scientists of committing "crimes against nature and humanity." Why? What's in it for them?

A new paper published by Dr. Cami Ryan and her colleagues in the European Management Journal examined this issue. They came to the conclusion that many of us had already suspected: It's all about the Benjamins, baby.

The Monetization of Disinformation: The Case of GMOs

The authors, who work for Bayer (which acquired Monsanto), begin by explaining the attention economy. Like most everything else, from money to coffee beans, human attention can be thought of in strictly economic terms. Attention is a scarce commodity; there is only so much of it to go around. Entire businesses, like social media, have developed a revenue model that relies on capturing as much of your attention as possible. In various ways, that attention can be monetized.

To quantify the attention that the topic of GMOs receives, the authors utilized BuzzSumo, a website that aggregates article engagement from all the major social media sites, such as Facebook and Twitter. The authors identified 94,993 unique articles from 2009 to 2019, and then whittled down the list to include only those domains that published at least 48 articles on GMOs (which is an average of one per month for four years). Thus, the researchers identified 263 unique websites.
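To make that filtering step concrete, here is a hedged sketch of how such a cut might look in Python with pandas; the column names and toy rows are my own illustrative assumptions, not the authors' actual pipeline:

```python
# Illustrative sketch of the study's filtering step: keep only domains
# that published at least 48 GMO articles, then rank by median shares.
import pandas as pd

# One row per article: publishing domain and total social media shares.
articles = pd.DataFrame({
    "domain": ["siteA.com"] * 60 + ["siteB.net"] * 50 + ["siteC.org"] * 10,
    "shares": [500] * 60 + [1200] * 50 + [90] * 10,
})

counts = articles.groupby("domain").size()
active = counts[counts >= 48].index                    # the ">= 48 articles" cut
filtered = articles[articles["domain"].isin(active)]   # siteC.org drops out

# Rank the remaining domains by median article shares, as the study did.
ranking = filtered.groupby("domain")["shares"].median().sort_values(ascending=False)
print(ranking)   # siteB.net (1200) ranks above siteA.com (500)
```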

And now, the depressing results. By far, the most shared articles on GMOs came from conspiracy, pseudoscience, and/or activist websites.

Of the top 25 websites ranked by median article shares, only two – The Guardian and NPR – are widely considered to be mainstream news outlets. (It should be pointed out, however, that The Guardian is often not a reliable source of information on science, technology, and public health.)

It isn't a coincidence that many of these same websites also peddle snake oil. Mercola.com, for instance, is a website that sells everything from hydrogen-infused water to krill oil supplements for your pet. The website publishes anti-GMO and anti-vaccine articles, as well as a whole host of other fake health news, in order to drive traffic to itself. Then it sells the reader phony medicine.

If you're wondering how Mercola.com gets away with this, here's how: (1) it's not illegal to lie, and (2) it's not illegal to sell phony medicine, provided that there is a tiny disclaimer somewhere on the website admitting that the FDA hasn't evaluated any of the health claims.

Perhaps Dr. Ryan's next research project should be how to put Mercola.com and its ilk out of business.

Source: Camille D. Ryan, Andrew Schaul, Ryan Butner, John Swarthout. "Monetizing Disinformation in the Attention Economy: the case of genetically modified organisms (GMOs)." European Management Journal. In press. 2019. doi: 10.1016/j.emj.2019.11.002

Thursday, April 9, 2020

Chlorpyrifos: A Hostage of the Secret Science Rule?

By Michael Dourson — March 27, 2020 @ The American Council on Science and Health

In short, the public is often worried about chemical exposure, as they should be when such exposure exceeds a safe dose. The public’s interest is best served by trusting experts dedicated to public health protection and not by withholding scientific data from independent analysis.

By Michael Dourson, Bernard K. Gadagbui, and Patricia M. McGinnis

Many groups have weighed in on EPA's recent secret science rule. Generally missing from nearly all of these opinions are the perspectives of risk scientists charged with protecting public health.

As scientists know well, the results of any one study, especially one with significant societal impact, should be replicated, because positive findings occur by chance, on average, in one out of every 20 studies. If a study cannot be replicated, then it at least needs to be consistent with the pattern of available data. Studies that are not replicated or that do not "make sense" in an overall pattern are still considered by EPA (and other authoritative agencies). Scientists within these agencies will often contact the author(s) to obtain additional information in order to conduct their own analysis.

A case in point is the publication of a series of studies on a single human group exposed to chlorpyrifos that shows an unexpected effect at extremely low exposures. This finding has not been replicated in other human studies and is in contrast to extensive animal and limited human studies that all point to changes in a blood enzyme (cholinesterase) as the sentinel effect at higher exposures. In this case, EPA scientists asked the authors of studies from this single human group for the underlying data in order to confirm this lower effect. The authors demurred, citing confidentiality concerns, despite the fact that EPA has rigorous procedures in place to handle confidential information. Thus, EPA was not able to conduct its own analysis, and since EPA had neither confirmatory studies nor information consistent with other studies, it chose not to use the information from this single human group.

A recent publication confirms the EPA decision. In brief, Rauh et al. (2011) reported evidence of neurological deficits in children at 7 years old as a function of prenatal chlorpyrifos exposures that were much lower than levels causing cholinesterase inhibition. Since the raw data had not been made available, Dourson et al. (2020) extracted data from the published figures of Rauh et al. (2011) and plotted these extracted data as response versus log dose, a common risk assessment approach.
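For illustration only, here is a hedged sketch of the general response-versus-log-dose presentation in Python; the dose and response numbers are invented placeholders, not values from Rauh et al. or Dourson et al.:

```python
# Generic response-vs-log-dose plot, a common risk assessment presentation.
# Dose and response values below are illustrative placeholders only.
import matplotlib.pyplot as plt
import numpy as np

dose = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])    # exposure, arbitrary units
response = np.array([2.0, 2.1, 2.3, 3.0, 3.8, 5.1])  # measured effect

plt.semilogx(dose, response, "o-")   # log-scaled dose axis
plt.xlabel("dose (log scale)")
plt.ylabel("response")
plt.title("Response versus log dose (illustrative data)")
plt.show()
```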

Surprisingly, a significant portion of the data was not found in these published figures. Moreover, the reported associations with chlorpyrifos levels were also not replicated in our analysis. Like EPA, Dourson et al. (2020) sent multiple requests to Rauh et al. (2011) for access to the data so that confirmation could be attempted. No response was forthcoming. This general lack of data, the inconsistency with cholinergic responses in other research studies, and the authors' refusal to share data or to write an editor-invited letter to the editor raise concerns about the lack of data transparency.

From our perspective as toxicologists and risk scientists, EPA’s decision not to use such studies where information is not provided, suitably redacted to protect confidential information, is correct. The public’s interest is best served when science is replicable and consistent. When studies cannot be replicated or are not consistent with other information, using such studies then depends on having access to the underlying data for independent analysis. If the underlying data are not provided, it is difficult to use such studies to make a credible scientific risk judgment, much less national rulemaking.

In short, the public is often worried about chemical exposure, as they should be when such exposure exceeds a safe dose. The public’s interest is best served by trusting experts dedicated to public health protection and not by withholding scientific data from independent analysis. Others have also had thoughts on this topic.

Authors: Michael L. Dourson, PhD, DABT, FATS, FSRA; Bernard K. Gadagbui, MS, PhD, DABT, ERT; and Patricia M. McGinnis, PhD, DABT, of Toxicology Excellence for Risk Assessment (TERA)


Sunday, September 15, 2019

Time to Put an End to the Climate Cult

September 9, 2019 By Spike Hampson

The climate cult has gotten out of hand. It now threatens to prevail in politics by convincing the ignorant that the science is settled. Anybody who has a basic understanding of the science knows that it is not settled. A number of inconvenient facts seriously undermine the idea that catastrophic global warming caused by humans is about to overwhelm us. Here are three of them:
  1. Emissions of carbon dioxide into the atmosphere are causing far less heating than the climate models have been predicting.
  2. There is no scientifically reputable method for measuring the human contribution to carbon dioxide emissions relative to emissions from natural sources.
  3. Measured sea level rise in recent decades is insufficient to account for the alarmist forecasts about the amount of rise by the end of the century.
As long as the general public is unaware of realities such as these, the cultists will continue to proselytize using emotional appeals about saving the world and moralistic shaming of any who disagree....To Read More....

Sunday, August 4, 2019

California judges provide stage for kangaroo court justice over Roundup

By Paul Driessen

San Francisco area juries have awarded cancer patients some $80 million each, based on claims that glyphosate, the active ingredient in Roundup weedkiller, caused their cancer – and that Bayer-Monsanto negligently or deliberately failed to warn consumers that the glyphosate it manufactures is carcinogenic. (It's not.) Judges later reduced the truly outrageous original awards of $289 million and even $1 billion per plaintiff!

Meanwhile, ubiquitous ads are still trolling for new clients, saying anyone who ever used Roundup and now has Non-Hodgkin Lymphoma or other cancer could be the next jackpot justice winner. Mass tort plaintiff law firms have lined up 18,500 additional “corporate victims” for glyphosate litigation alone.
Introduced in 1974, glyphosate is licensed in 130 countries. Millions of farmers, homeowners and gardeners have made it the world’s most widely used herbicide – and one of the most intensely studied chemicals in history. Four decades and 3,300 studies by respected agencies and organizations worldwide have concluded that glyphosate is safe and non-carcinogenic, based on assessments of actual risk.
Reviewers include the U.S. Environmental Protection Agency, European Food Safety Authority, European Chemicals Agency, UN Food and Agriculture Organization, Germany’s Institute for Risk Assessment, and Australia’s Pesticides and Veterinary Medicines Authority. Another reviewer, Health Canada, noted that “no pesticide regulatory authority in the world considers glyphosate to be a cancer risk to humans at the levels at which humans are currently exposed.” Therefore no need to warn anyone.
The National Cancer Institute’s ongoing Agricultural Health Study evaluated 54,000 farmers and commercial pesticide applicators for over two decades – and likewise found no glyphosate-cancer link.

Only the France-based International Agency for Research on Cancer (IARC) says otherwise – and it based its conclusions on just eight studies. Even worse, IARC manipulated at least some of these studies to get the results it wanted. Subsequent reviews by epidemiologist Dr. Geoffrey Kabat, National Cancer Institute statistician Dr. Robert Tarone, investigative journalist Kate Kelland, "RiskMonger" Dr. David Zaruk and other investigators have demonstrated that the IARC process was tainted beyond repair.

The IARC results should never have been allowed in court. But the judges in the first three cases let the tort lawyers bombard the jury with IARC cancer claims, and went even further. In the Hardeman case, Judge Vincent Chhabria blocked the introduction of EPA analyses that concluded “glyphosate is not likely to be carcinogenic in humans,” based on its careful review of many of the studies just mentioned.

He said he wanted “to avoid wasting time or misleading the jury, because the primary inquiry is what the scientific studies show, not what the EPA concluded they show.” However, IARC didn’t do any original studies either. It just concluded that glyphosate is “probably carcinogenic,” meaning studies it reviewed found limited evidence of carcinogenicity in humans, plus sufficient evidence of carcinogenicity in lab animals that had been exposed to very high doses or lower doses for prolonged periods of time. In other words, under conditions that no animal or human would ever be exposed to in the real world.

It is also instructive to look at the three San Francisco area courtroom proceedings from another angle – an additional line of questioning that would have put glyphosate and Roundup in a very different light, and might have changed the outcome of these trials. Defense attorneys could have asked:  
Can you describe your family cancer history ... your eating, exercise and sleeping habits ... how much you eat high-fat foods ... how often you eat fruits and vegetables ... and your other lifestyle choices that doctors and other experts now know play significant roles in whether or not people get cancer?
How many times in your life [Johnson is 47 years old; Hardeman 70; Alva Pilliod 77; Alberta Pilliod 75] do you estimate you were exposed to substances on IARC’s list of Group 1 definite human carcinogens – including sunlight, acetaldehyde in alcoholic beverages, aflatoxin in peanuts, asbestos, cadmium in batteries, lindane ... or any of the 125 other substances and activities in Group 1? Have you ever smoked? How often have you been exposed to secondhand smoke? How often have you eaten bacon, sausage or other processed meats – which are also in Group 1?
How many times have you been exposed to any of IARC’s Group 2A probable human carcinogens – not just glyphosate ... but also anabolic steroids, creosote, diazinon, dieldrin, malathion, emissions from high-temperature food frying, shift work ... or any of the 75 other substances and activities in Group 2A? How often have you consumed beef or very hot beverages – likewise in Group 2A?
How many times have you been exposed to any of IARC's Group 2B possible human carcinogens – including bracken ferns, chlordane, diesel fuel, fumonisin, inorganic lead, low frequency magnetic fields, malathion, parathion, titanium oxide in white paint, pickled vegetables, caffeic acid in coffee, tea, apples, broccoli, kale, and other fruits and vegetables ... or any of the 200 other substances and activities in Group 2B?
Pyrethrin pesticides used by organic farmers are powerful neurotoxins that are very toxic to bees, cats and fish – and have been linked by EPA and other experts to leukemia and other cancers and other health problems. How often have you eaten organic foods and perhaps been exposed to pyrethrins?
Large quantities of glyphosate have been manufactured for years in China and other countries. How do you know the glyphosate you were exposed to was manufactured by Bayer, and not one of them?
In view of all these exposures, please explain how you, your doctors, your lawyers and the experts you consulted concluded that none of your family history ... none of your lifestyle choices ... none of your exposures to dozens or even hundreds of other substances on IARC’s lists of carcinogens ... caused or contributed to your cancer – and that your cancer is due solely to your exposure to glyphosate.
Put another way, please explain exactly how you and your experts separated and quantified all these various exposures and lifestyle decisions – and concluded that Roundup from Bayer-Monsanto was the sole reason you got cancer – and all these other factors played no role whatsoever.
News accounts do not reveal whether Bayer-Monsanto lawyers asked these questions – or whether they tried to ask them, but the judges disallowed the questions. In any event, the bottom line is this:
It is bad enough that the IARC studies at the center of these jackpot justice lawsuits are the product of rampant collusion, misconduct and even fraud in the way IARC concluded glyphosate is a “probable human carcinogen.” It is worse that these cancer trials have been driven by plaintiff lawyers’ emotional appeals to jurors’ largely misplaced fears of chemicals and minimal knowledge of chemicals, chemical risks, medicine and cancer – resulting in outrageous awards of $80 million or more.
Worst of all, our Federal District Courts have let misconduct by plaintiff lawyers drive these lawsuits; prevented defense attorneys from effectively countering IARC cancer claims and discussing the agency’s gross misconduct; and barred defense attorneys from presenting the extensive evidence that glyphosate is not carcinogenic to humans. The trials have been textbook cases of kangaroo court justice.
The cases are heading to appeal, ultimately to the U.S. Supreme Court. We can only hope appellate judges will return sanity, fairness and justice to the nation’s litigation process. Otherwise our legal system will be irretrievably corrupted; products, technologies, companies and industries will likely be driven out of existence; and fraud, emotion and anarchy will reign.
Jackpot-justice law firms and their anti-chemical activist allies are already targeting cereals that have “detectable” levels of glyphosate: a few parts per billion or trillion, where 1 ppt is equivalent to 1 second in 32,000 years. Talc and benzene – foundations for numerous consumer products – are already under attack. Advanced technology neonicotinoid pesticides could be next.
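That equivalence is easy to verify with a quick back-of-the-envelope calculation (shown here in Python for convenience):

```python
# Verify: 1 part per trillion is about 1 second in 32,000 years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.156e7 seconds
years = 32_000
total_seconds = years * SECONDS_PER_YEAR   # ~1.01e12 seconds

print(f"{years} years = {total_seconds:.3g} seconds")
print(f"1 second / {years} years = {1 / total_seconds:.2e}")  # ~9.9e-13, i.e. ~1 ppt
```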
It’s all part of a coordinated, well-funded attack on America, free enterprise and technology, using social media, litigation, intimidation and confrontation. Our legislatures and courts need to rein it in.  
Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of books and articles on energy and environmental policy.