Unmasking the AIDS Controversy: Is HIV the Only Culprit?
Is everything we know about AIDS wrong? For decades, HIV has been accepted as the sole cause of AIDS, but growing controversy suggests there may be more to the story.
For over 40 years, the AIDS epidemic has been at the forefront of public health concerns. While the government and many scientists have firmly identified HIV as the cause of AIDS, a growing body of research from prestigious scientists at institutions like Harvard and Yale is calling that conclusion into question. Many of these scientists, however, have been shut down or excommunicated from the scientific community. But why?
“Trust the science” has become a mantra akin to “Thus saith the Lord” in secular circles.
Bret Weinstein, an American podcaster, author, and former professor of evolutionary biology, put it very elegantly in a discussion with filmmaker John Papola:
“There is nothing in science that is so secure that it couldn’t be upended by superior evidence... The authority you have in science is earned. It is earned by being prescient, it is earned by having predictive power. There is authority [in science], but it is provisional.”
THE DARK HISTORY OF MEDICAL DUNCES
History has shown that even well-established medical theories can be flawed, leading to disastrous consequences. In several instances, diseases were misattributed to infectious agents when they were actually caused by nutritional deficiencies or environmental factors. These misdiagnoses resulted in ineffective treatments and prolonged suffering for patients. The AIDS controversy mirrors these historical cases, raising the possibility that we may once again be wrongly blaming a virus for a widespread health crisis.
SCURVY
One of the most notable examples of a misdiagnosed epidemic is scurvy, a disease that plagued sailors in the 18th century. Ask any elementary student today, and they would likely be able to tell you how to cure scurvy.
At the time, though, scurvy was widely believed to be an infectious disease. Not without reason, either. One sailor on a ship gets “scurvy.” Then another, and another - it’s spreading! Never mind that their diet consisted of bacon and salted meats.
Sailors who contracted scurvy were often quarantined, and ships were sanitized to rid them of the supposed germs. However, it wasn’t until British naval surgeon Dr. James Lind conducted experiments in 1747 that the real remedy came to light. Lind’s findings showed that simply consuming citrus fruits could prevent and cure the disease (the underlying cause, a deficiency in vitamin C, was not identified until the 20th century). The British began to do this, earning them the nickname “limeys” among the American sailors.
Despite this breakthrough, it took the medical community nearly 150 years to fully accept that scurvy was not caused by germs. During this period, countless sailors continued to suffer needlessly. Thanks to the medical establishment, in the case of these sailors, life did not hand them lemons.
BERI-BERI
Another notable case is beriberi, a disease caused by a deficiency in thiamine (vitamin B1). In the late 1800s, beriberi affected large populations in Asia, particularly among those consuming polished rice, which lacks thiamine. Like scurvy, beriberi was initially misdiagnosed as an infectious disease.
Researchers believed a bacterium or virus was responsible (largely due to “Germ Theory” - the belief that nearly all diseases are caused by microorganisms). Because of this, treatments were focused on combating this non-existent pathogen.
It wasn’t until Dr. Christiaan Eijkman, a Dutch physician, conducted experiments in Indonesia that the connection between polished rice and beriberi was established. Despite this breakthrough, the scientific community was reluctant to concede that there was not a bacterial or viral component to the disease.
So, they kept looking. They kept looking for the pathogen that never existed.
Despite the cure being discovered in 1897, it was not until the 1920s that Eijkman was acknowledged for being right. He was awarded the Nobel Prize in Physiology or Medicine in 1929 - roughly 30 years for the scientific community to fully embrace the dietary cause of beriberi.
PELLAGRA
A similar situation occurred with pellagra, a disease caused by a deficiency in niacin (vitamin B3). Pellagra was rampant in the southern United States in the early 1900s, where corn-based diets (cornmeal mush, cane syrup or molasses, gravy, and biscuits - the common fare of a poor Southerner) lacked the essential nutrients found in other foods.
Like scurvy, pellagra was initially thought to be an infectious disease. Researchers spent years searching for a bacterium or virus responsible for the outbreaks. It wasn't until Dr. Joseph Goldberger of the U.S. Public Health Service conducted dietary experiments on twelve prisoners at the Rankin State Prison Farm in Mississippi that the true cause was discovered. The experiment was simple:
The prisoners ate only corn and corn-based foods (like the ones outlined before). Within six months, more than half of them had developed symptoms. When they returned to a diet with more nutrients (fresh meat, oats, green vegetables, etc), the symptoms disappeared.
Huzzah!
But the reaction was much different than you’d think. Despite clear evidence linking diet to the disease, the medical community was not just slow to embrace the findings - they outright called him a fraud. After two years, he couldn’t take it anymore. He injected blood from a pellagra patient into the arm of his assistant, Dr. George Wheeler. Wheeler returned the favor. (A couple of cheeky doctors.) They then took swabs from the infected patient’s nose and throat and rubbed them in their own noses and throats. Finally, they swallowed capsules that contained the scabs from the patient’s skin rashes.
They repeated this experiment several times with their friends, colleagues, and even Goldberger’s wife.
No one got sick. No one contracted pellagra.
Statistics from the pellagra epidemic illustrate the severe consequences of this suppression of the truth. Between 1906 and 1940, more than 3 million Americans were affected by pellagra, with 100,000 deaths attributed to the disease. If the medical establishment had accepted Goldberger’s findings earlier, many of these lives could have been saved.
RECENT EXAMPLES
The SMON epidemic in Japan during the 1950s and 1960s is another example. SMON (subacute myelo-optic neuropathy) was characterized by severe gastrointestinal distress and nerve damage, leading to paralysis, blindness, and, in some cases, death.
Yikes.
Initially, scientists believed that SMON was caused by a viral infection. (Seeing a theme here?)
Over the course of nearly two decades, Japanese researchers isolated more than a dozen viruses, claiming each to be the cause of the disease. In response, the Japanese government implemented measures to control the spread of this supposed infection, such as quarantines and restrictions on movement, further fueling public fear.
However, the true cause of SMON turned out to be a prescription drug called clioquinol, which was widely used in Japan to treat gastrointestinal disorders like dysentery. Patients taking clioquinol developed toxic reactions that mimicked the symptoms of a viral infection.
When the Japanese government finally banned the drug in 1970, the epidemic came to an abrupt end.
As Dr. Koichi Yamaguchi, a researcher involved in the case, noted:
“The SMON epidemic was a man-made disaster, and we were looking in the wrong direction for nearly two decades.”
Another example is the Kuru epidemic among the Fore people in Papua New Guinea in the mid-20th century. Kuru, a fatal neurodegenerative disease, was initially believed to be caused by a virus or other infectious agent. Scientists conducted extensive research into the potential infectious nature of Kuru, but the real cause turned out to be ritualistic cannibalism.
Did not see that coming!
Specifically, it was the practice of consuming the brains of deceased relatives. The disease was spread through prions, which are misfolded proteins, not a virus or bacterium. Once the practice of cannibalism was halted, Kuru cases declined rapidly. According to the National Institute of Neurological Disorders and Stroke (NINDS), the disease has almost completely vanished today.
As Dr. Peter Duesberg, one of the prominent dissenters in the AIDS debate, points out:
“History is full of examples where the ‘virus hunters’ were wrong, and millions paid the price. Are we making the same mistake with AIDS?”
So, with a history of foibles around chasing viruses, why was the scientific community still so quick to connect AIDS to HIV? Well, the answer requires going back just a bit.
The Rise of the Virus Hunters: Early Successes
In the late 19th and early 20th centuries, Pasteur and Koch made groundbreaking advances by isolating bacteria responsible for diseases like anthrax, tuberculosis, and cholera, establishing the germ theory of disease. This success led to a focus on identifying pathogens, particularly bacteria, as the root cause of diseases. As technological advances in microscopy allowed for the discovery of smaller pathogens, attention shifted to viruses, which became the next frontier in medical research.
By the mid-20th century, virology had produced several critical breakthroughs:
The discovery of viruses like polio, smallpox, and influenza, which were definitively linked to their respective diseases, made viruses the focal point of modern infectious disease research.
The development of vaccines to combat viral diseases further solidified the role of viruses in public health efforts.
Virologists, driven by these successes, earned the nickname "virus hunters" as they embarked on missions to identify, isolate, and eliminate viral pathogens. The quest to link viruses to unexplained diseases became a dominant narrative in medical science, leading to substantial funding and public attention. It was like the gold rush of medicine. And you got to help humanity in the process of getting rich and famous? Sounds like a win-win.
The Evolution of Virus Hunting into a Dogma
As virology grew in stature, there was an increasing tendency to link emerging diseases to viral causes, even when the evidence was preliminary. Virologists, emboldened by their past successes, became focused on isolating viral agents for nearly all unexplained illnesses.
This rush to link diseases to viruses created an environment where alternative explanations—such as environmental, nutritional, or lifestyle factors—were often overlooked, dismissed, or even ridiculed.
Virus Hunters and the Cure for Cancer
The quest to link viruses to cancer was a major focus for medical researchers, especially during the 20th century. And this was not without warrant. Researchers had discovered that certain viruses did indeed cause cancers in animals and humans. However, as Bryan Ellison and other critics noted, the obsession with finding viral causes sometimes led to premature conclusions, with lasting consequences on scientific research.
Early Successes and the Rise of Viral Cancer Theories
The relationship between viruses and cancer was first observed in the early 20th century. In 1911, Peyton Rous discovered a virus (later called the Rous sarcoma virus) that caused cancer in chickens, leading to the realization that some viruses could indeed trigger tumors. This discovery fueled interest in exploring viral causes of human cancers.
Epstein-Barr Virus (EBV) was discovered in the 1960s when it was isolated from the tumor cells of patients with Burkitt's lymphoma, a cancer prevalent in parts of Africa. This discovery led scientists (namely Anthony Epstein and Yvonne Barr) to suggest a causal relationship between EBV and Burkitt's lymphoma.
But they left out some key details...
EBV is commonly found in people with malaria and malnutrition without them developing Burkitt's lymphoma. In fact, EBV is extremely widespread globally, with approximately 90-95% of adults having been infected with the virus at some point in their lives. While most people infected with EBV do not develop cancer, it has been associated with a range of other diseases beyond Burkitt's lymphoma (mononucleosis, or “mono,” being the most well-known, along with multiple sclerosis).
Overall, EBV has been found in connection with roughly six to ten diseases, particularly those involving immune suppression or chronic inflammation, but it is also carried harmlessly by most people.
Within 5-10 years, Epstein-Barr virus (EBV) was widely touted as the cause of Burkitt's lymphoma and nasopharyngeal carcinoma.
By the mid-20th century, the field of oncovirology (the study of cancer-causing viruses) began to grow, and "virus hunters" set out to identify viral culprits for various forms of cancer.
Cervical cancer also fell into the pathogen hunt. In the early 1900s, observers noted that nuns had a much lower rate of cervical cancer than the general public.
Conclusion? It MUST be a sexually transmitted disease. Ergo, a pathogen.
This belief held sway for decades. Finally, in the 1970s and 1980s, Dr. Harald zur Hausen proudly announced that certain strains of HPV were definitively linked to cervical cancer.
HPV, however, like EBV, is extremely common, with 90-95% of adults having been infected at some point in their lives. Yet most people who contract HPV do not develop cervical cancer, which undercuts a simple causal reading of the virus-cancer connection. While high-risk strains of HPV (such as HPV-16 and HPV-18) are strongly linked to cervical cancer, not everyone infected with these strains develops the disease either.
So again: not causal. Correlative.
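The arithmetic behind that distinction is worth spelling out. Here is a back-of-the-envelope sketch in Python, using the prevalence figure above and an assumed, purely illustrative lifetime cervical cancer risk, of why near-universal infection makes infection alone a poor predictor of cancer:

```python
# Back-of-the-envelope arithmetic: if nearly everyone carries the virus,
# infection alone cannot predict cancer. The prevalence figure comes from
# the text above; the lifetime cervical cancer risk is an assumed round
# number used purely for illustration.
hpv_prevalence = 0.90         # ~90% of adults infected at some point
lifetime_cancer_risk = 0.007  # assumed ~0.7% lifetime risk (illustrative)

# Even if every cervical cancer case involved HPV, the chance that a given
# HPV-infected person ever develops cervical cancer is at most:
p_cancer_given_hpv = lifetime_cancer_risk / hpv_prevalence
print(f"P(cervical cancer | HPV infection) <= {p_cancer_given_hpv:.1%}")
# -> P(cervical cancer | HPV infection) <= 0.8%
```

Under those assumed numbers, well over 99% of infected people never develop the disease - which is exactly why infection by itself is correlative, not a sufficient cause.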
But that didn’t stop the hunters from hunting. Somewhere, there was a golden goose: a disease with a viral cause and, more importantly, a vaccine to prevent it.
Robert Gallo and HTLV: A Retrovirus Breakthrough
One of the most prominent virus hunters during this period was Dr. Robert Gallo, a key figure in the race to identify retroviruses as potential causes of cancer. In the 1970s, Gallo made significant strides in identifying retroviruses in humans. He discovered HTLV (Human T-cell Leukemia Virus), a virus linked to certain types of leukemia. HTLV was the first human retrovirus ever identified, and Gallo’s work on this virus heightened interest in retroviruses as potential culprits behind other human cancers.
In Case You Were Wondering:
A retrovirus is like a spy who sneaks into a city and rewrites the city’s laws to make it easier for them to take control. In this case, the retrovirus enters a host cell, and instead of using its own instructions (RNA), it rewrites its RNA into DNA using an enzyme called reverse transcriptase. Once it has this rewritten DNA, it inserts it into the host cell’s genome, like changing the city's legal code, so that the host cell unknowingly starts producing copies of the virus, allowing it to spread.
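To make the analogy concrete, here is a minimal Python sketch of the two steps it describes - reverse transcription of the viral RNA into DNA, and insertion of that DNA into the host genome. The sequences and function names are made up for illustration; real genomics is far more involved.

```python
# Toy illustration of the 'spy rewriting the laws' analogy (made-up sequences;
# real genomics is far more involved).

# Reverse transcriptase builds the complementary DNA strand from an RNA
# template: RNA A pairs with DNA T, U with A, G with C, and C with G.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(viral_rna: str) -> str:
    """Rewrite the virus's RNA instructions as DNA."""
    return "".join(RNA_TO_DNA[base] for base in viral_rna)

def integrate(host_genome: str, provirus: str, site: int) -> str:
    """Insert the rewritten DNA into the host genome ('changing the legal code')."""
    return host_genome[:site] + provirus + host_genome[site:]

viral_rna = "AUGGCU"                      # hypothetical viral RNA
provirus = reverse_transcribe(viral_rna)  # -> "TACCGA"
host = integrate("GGGTTTAAACCC", provirus, site=6)
print(provirus)  # TACCGA
print(host)      # GGGTTTTACCGAAAACCC - the host now carries the viral instructions
```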
Turns out, Gallo didn’t even discover the virus. Dr. Bernard Poiesz, a scientist who was working under Gallo at the National Cancer Institute (NCI), had been studying the human retrovirus and had already isolated and characterized the virus from a patient with T-cell leukemia. Like Veruca Salt waving a golden ticket, Gallo took the credit and renamed it HTLV-I.
This discovery of HTLV was the juice the virologists needed to keep pushing and searching. More funding came in and the net widened.
Statistics and the Limits of the Viral Cancer Theory
Despite the initial successes, the "virus hunters" encountered significant challenges in finding viral causes for most human cancers. While it’s now accepted that 15-20% of cancers are caused by infectious agents (the vast majority being viruses), the remaining 80-85% of cancers have not been linked to viral infections. This gap highlights the limitations of the viral cancer theory, even though certain virus-cancer links have been firmly established.
Real-life examples of viruses linked to cancer include:
Human Papillomavirus (HPV): Found in 99% of cervical cancers (as well as 90-95% of the general public) and is also implicated in other anogenital and oropharyngeal cancers.
Hepatitis B and C viruses (HBV and HCV): Are major “causes” of liver cancer. HBV and HCV are connected to about 5 to 8 other diseases.
Epstein-Barr Virus (EBV): Linked to Burkitt’s lymphoma, nasopharyngeal carcinoma, and certain types of Hodgkin's lymphoma.
However, beyond these links, the broader connection between viruses and cancer remained elusive. Despite extensive research, most common cancers—such as lung cancer, breast cancer, prostate cancer, and colorectal cancer—have no viral cause, which dampened the initial enthusiasm for the virus-cancer hypothesis.
As virologist Harold Varmus, a Nobel Prize winner and former director of the National Cancer Institute, noted:
"Viruses account for only a fraction of human cancers, and the search for viral causes has often led us to overestimate their role in oncogenesis. While viral research has made substantial contributions, it is just one part of a complex puzzle."
The well was drying up and something needed to happen.
The AIDS Controversy and Its Unanswered Questions
Since its identification in the 1980s, HIV (Human Immunodeficiency Virus) has been considered the primary cause of AIDS (Acquired Immunodeficiency Syndrome). The federal government has spent over $40 billion on HIV research and treatment, leading to tens of thousands of scientific papers. However, despite these efforts, no vaccine or definitive cure has been found. This has led some scientists to ask: Have we been wrong about HIV's role in AIDS?
So, at this point, you can see that there are two common threads in the scientific community (especially when money and/or notoriety is involved):
If you are a hammer, everything is a nail. Or in this case, a pathogen.
If you stay with the herd, you are safe. If you dissent, you are left to the wolves. Maybe, much later, you will be exonerated.
FROM AIDS TO HIV
The AIDS epidemic was first identified in the early 1980s, marking the beginning of one of the most significant public health crises of the 20th century. The journey from the first reported cases to the declaration that HIV was the cause of AIDS was marked by confusion, speculation, and controversies that persist to this day.
The Initial Discovery: 1981
The first cases of what would later be known as AIDS were identified in 1981. In Los Angeles, Dr. Michael Gottlieb, an immunologist at UCLA Medical Center, observed a pattern of rare infections in otherwise healthy young men. These patients, all of whom were homosexual men, presented with Pneumocystis carinii pneumonia (PCP), a rare infection typically associated with individuals who had severely compromised immune systems. Gottlieb noted an alarming deficiency in their T cells, a type of white blood cell critical for immune function, prompting concern that a new disease was emerging.
Shortly thereafter, the Centers for Disease Control and Prevention (CDC) published a report in their Morbidity and Mortality Weekly Report (MMWR) on these five cases. This was the first official recognition of the disease. The disease was not yet named AIDS; instead, it was described based on the symptoms and infections observed in the patients, such as PCP and Kaposi's sarcoma. Initially, it was referred to as a disease affecting homosexual men and was commonly called "Gay-Related Immune Deficiency" (GRID).
Within weeks, similar cases were being reported in New York City and San Francisco. This observation, however, may reflect confirmation bias. Many of the early cases of AIDS in New York City and San Francisco involved men who engaged in heavy use of recreational drugs like poppers (amyl nitrite). These drugs were known to have toxic effects on the immune system, suggesting that immune suppression in these individuals could have been due to long-term drug use rather than a new virus.
Many of the reported cases of AIDS also had a wide range of symptoms and conditions, which would make someone question whether they all truly belonged to the same disease category. Some cases labeled as AIDS could have been caused by pre-existing health issues or environmental factors.
But time was of the essence, and there was no time to collect that data. At this point, the CDC officially defined the syndrome: the term AIDS (Acquired Immune Deficiency Syndrome) was coined in 1982.
Après moi, le déluge (“After me, the flood”)
The hysteria surrounding the fallout after the CDC labeled the syndrome as AIDS in 1982 was marked by widespread fear, misinformation, and stigma, as the public and medical community struggled to comprehend the nature of this new and deadly disease.
Initially termed "Gay-Related Immune Deficiency" (GRID), the CDC renamed the condition Acquired Immune Deficiency Syndrome (AIDS) as they announced the syndrome was not limited to homosexual men, but could affect people from various demographics. The decision to officially name the syndrome fueled both public panic and significant changes in social behavior, policy, and healthcare systems.
Key Elements of the Hysteria:
Fear of a New, Unknown Disease:
AIDS was identified as a rapidly fatal illness with no known cure, treatment, or clear cause in its early years. Reports of young, previously healthy individuals suddenly developing rare infections and cancers like Kaposi’s sarcoma or Pneumocystis carinii pneumonia (PCP), and then dying, created widespread fear. The rapid progression of the disease, coupled with its unknown origins, led to a sense of dread.
Media outlets reported the deaths of people with AIDS in graphic detail, often describing AIDS as a "plague" or "epidemic", which heightened fear among the public. The lack of knowledge about how the disease spread also contributed to anxiety.
Stigma and Discrimination:
Since many of the early cases were found among gay men, AIDS was initially referred to as a "gay disease," fueling homophobia and discrimination. People with AIDS were ostracized and shunned, even by medical personnel who were afraid of contracting the disease.
This focus on the gay community may have led to both misunderstanding and prejudice. There were widespread fears that AIDS could be spread through casual contact (even touted by Anthony Fauci in an interview), leading to people avoiding even shaking hands or sharing public spaces with individuals suspected of having the disease.
Beyond the gay community, intravenous drug users and sex workers were also stigmatized as the disease began to appear in these groups. Despite the connection between the stereotypical lifestyles of these groups and activities that lead to low T-cell counts, blame was often placed on the individuals themselves. This contributed to hysteria and calls for harsh policies like quarantines or extreme isolation measures.
Medical and Public Health Challenges:
The healthcare system was overwhelmed by the sudden emergence of a deadly disease that disproportionately affected previously healthy young men. Doctors and nurses, unsure of how the disease spread, sometimes refused to treat patients with AIDS, fearing for their own safety. Hospitals lacked the protocols to handle these cases, leading to the isolation of patients in quarantine-like conditions.
The CDC’s classification of AIDS as an epidemic also led to changes in public health policy. There was a push for blood donor screening, changes to sexual education programs, and discussions of mass testing, particularly among high-risk groups like gay men and drug users.
Political and Social Fallout:
The U.S. government’s response to AIDS was widely criticized as inadequate and delayed. President Ronald Reagan did not publicly mention AIDS until 1985, four years after the first cases were reported. The slow response fueled public outrage, especially within the LGBTQ+ community, which felt abandoned by the government.
As the number of AIDS cases continued to rise, some politicians called for extreme measures, including quarantining people with AIDS, banning individuals with the disease from public spaces, or restricting their employment. Senator Jesse Helms was one of several lawmakers who advocated for policies that targeted individuals with AIDS, reflecting the fear and stigma surrounding the disease.
Impact on Everyday Life:
The fear of contracting AIDS from casual contact led to widespread social paranoia. People were unsure if they could catch the disease through physical touch, sharing utensils, using public restrooms, or being around someone who coughed or sneezed.
Schools, workplaces, and even families were affected by the hysteria. Children with HIV/AIDS, often infected through blood transfusions, were barred from schools in several high-profile cases, such as that of Ryan White, a teenager with hemophilia who contracted HIV and was banned from his school in Indiana in 1985.
Media Hype and Fear-Mongering:
The media played a significant role in fueling hysteria by often portraying AIDS in dramatic and sensationalized terms. Headlines described AIDS as the "gay plague" and emphasized its deadliness, creating a culture of fear. Stories of celebrities such as Rock Hudson, who publicly revealed his AIDS diagnosis in 1985, drew even more attention to the epidemic, contributing to both public sympathy and fear.
The growing awareness of AIDS led to a demand for answers, which the scientific and medical communities struggled to provide. Misinformation was rampant, leading to myths about how the disease spread and who was at risk. The mass hysteria was coming to a boiling point and fast. The public needed answers.
Early Theories and Controversies
In the early stage, before the hysteria, AIDS was the epitome of everything scary, deadly, and unknown. Some researchers speculated that lifestyle factors such as heavy drug use or multiple sexual partners in certain communities were contributing to the immune suppression seen in these early patients.
In fact, many scientists initially suspected that poppers (alkyl nitrites), a recreational drug popular among homosexual men, might be damaging the immune systems of these individuals. It was known that heavy, habitual use of poppers—often combined with other recreational drugs—was common among some individuals.
The first cases of AIDS were published in June 1981. By the end of the same year, a task force formed by the CDC suggested a possible infectious agent, given that the disease appeared to be spreading in clusters.
Clusters. Just like scurvy appeared in “clusters.” Bear in mind, the original five AIDS cases were not connected - the men had never met each other.
Here come the hammers.
The race to find the cause of AIDS was underway, but researchers faced significant challenges in identifying a pathogen responsible.
The Identification of HIV: 1983
The breakthrough came in 1983 when Dr. Luc Montagnier and his team at the Pasteur Institute in Paris isolated a new virus from the lymph nodes of a patient who exhibited symptoms of AIDS. The virus, originally called Lymphadenopathy-Associated Virus (LAV), was a type of retrovirus.
Now, it should be noted, the Frenchman whom Dr. Montagnier and his team were studying presented with persistent lymphadenopathy (swollen lymph nodes).
So, finding a retrovirus, what do you think Montagnier did?
Enter Dr. Robert Gallo again.
Montagnier and his team then sent samples of the virus to Robert Gallo at the National Institutes of Health (NIH) in the United States for further research. They hoped that Gallo, a prominent virologist with expertise in retroviruses, could help in further characterizing the virus.
In the May 20, 1983, issue of Science, Montagnier et al published an article entitled “Isolation of a T-Lymphotropic Retrovirus From a Patient at Risk for Acquired Immune Deficiency Syndrome (AIDS),” in which they suggested that the LAV virus might be the cause of AIDS.
Within about a year (April 1984), Dr. Gallo claimed to have discovered a different virus (albeit one that was virtually identical to Montagnier's LAV sample), which he named HTLV-III (later renamed HIV), and declared it the probable cause of AIDS at a press conference alongside Margaret Heckler, the U.S. Secretary of Health and Human Services. He announced that blood tests to screen for the virus would soon be available.
Understandably, this didn’t sit well with Montagnier and his team, and it led to a bitter dispute between the American and French teams. The controversy became a major public issue and resulted in a lawsuit in December 1985. The U.S. and French governments eventually reached an agreement in 1987, which acknowledged Montagnier as the original discoverer of the virus, though both Gallo and Montagnier were credited with the co-discovery of HIV.
This public declaration of HIV (Human Immunodeficiency Virus) as the cause of AIDS was not made at a scientific forum or through peer-reviewed publications, which led to some skepticism within the scientific community.
But the damage was done, and the scientific community was backed into a corner. The official naming of HIV as the cause of AIDS forced all AIDS research to focus exclusively on HIV. Some scientists questioned the validity of this rapid conclusion, arguing that research into alternative explanations was shut down prematurely, but their voices were not part of the mainstream.
The Scientific Consensus and Global Response: 1985-1987
By 1985, HIV was firmly established as the leading cause of AIDS in the public mind and within much of the scientific community. The ELISA (enzyme-linked immunosorbent assay) test for detecting HIV antibodies was developed and became widely available, allowing for the identification of HIV infection in patients long before they showed symptoms of AIDS. The test was rolled out in March 1985, true to Gallo's promise.
In 1986, Montagnier's LAV and Gallo's HTLV-III were unified under a single name: HIV (Human Immunodeficiency Virus). The World Health Organization (WHO) declared AIDS a global health emergency, and by the late 1980s, HIV was recognized as the cause of the epidemic, leading to the establishment of massive public health initiatives worldwide.
The 1990s: Advances and Ongoing Debate
Despite the general consensus, the fight against AIDS continued into the 1990s with the development of antiretroviral therapy (ART), which significantly prolonged the lives of those infected with HIV. However, dissenting scientists, including Peter Duesberg and Bryan Ellison, remained vocal about their doubts concerning HIV as the sole cause of AIDS. Duesberg argued that HIV was merely a harmless passenger virus and that the immune deficiency seen in AIDS patients might be caused by other factors, including lifestyle or drug use. Ellison echoed these views, reminding the medical community of past missteps in science and medicine.
As Ellison explained, "There were prestigious scientists at major institutions questioning the HIV theory, but their voices were silenced. We were expected to accept HIV as the cause, even though the government had no conclusive proof." He and other dissenters argued that this approach may have hindered the discovery of alternative treatments or causes.
The Cure (ish)
AZT (Zidovudine) was originally developed in the 1960s by Jerome Horwitz as a treatment for cancer. It was designed to inhibit DNA synthesis, which is essential for cancer cell replication. However, the drug was abandoned because it was deemed too toxic for cancer patients and did not prove effective against tumors. It remained shelved until the 1980s, when researchers began seeking treatments for HIV/AIDS, which was emerging as a global epidemic.
Repurposing AZT for AIDS
In the early 1980s, after HIV was announced as the cause of AIDS, scientists began experimenting with various drugs to inhibit the virus’s replication. Dr. Samuel Broder of the National Cancer Institute was one of the key figures in this effort. He and his team tested thousands of compounds, including AZT, to see if any could inhibit the replication of HIV.
In 1984, AZT showed promise in laboratory tests because it was found to inhibit reverse transcriptase, the enzyme HIV uses to transcribe its RNA into DNA.
These initial laboratory results came from cell cultures rather than human trials. The experiments were conducted in HIV-infected T-cells and showed a clear inhibition of the virus’s replication. However, because the tests were conducted only on lab-grown cells, the results did not fully reflect how the drug would function in a complex human system, which can respond very differently in terms of toxicity, metabolism, and side effects.
Burroughs Wellcome, the pharmaceutical company that held the patent for AZT, quickly pursued clinical trials after the promising lab results. The human trials were also conducted rapidly, with the first double-blind, placebo-controlled trials beginning in 1986. These trials were criticized for being conducted over too short a time period - only 16 weeks - and the study was actually stopped early because the initial results appeared so positive in terms of survival rate. The quick approval was justified by the evident urgency of the AIDS crisis.
The FDA fast-tracked AZT, making it the first drug approved for HIV in March 1987, only 20 months after the clinical trials began.
The Early Use of AZT
Initially, AZT was used as a monotherapy, meaning it was the sole drug administered to HIV patients. This turned out to be problematic. While AZT temporarily increased CD4 (T-cell) counts and reduced viral replication, it also had severe side effects, including anemia, muscle wasting, liver toxicity, and bone marrow suppression.
That’s where it gets a little interesting.
| Side Effects of AZT | Symptoms of AIDS |
| --- | --- |
| Anemia (low red blood cell count) | Anemia (often caused by infections or medications) |
| Muscle wasting (cachexia) | Muscle wasting (AIDS wasting syndrome) |
| Liver toxicity | Liver damage (due to opportunistic infections or medications) |
| Bone marrow suppression | Bone marrow suppression (due to HIV or infections) |
| Fatigue | Fatigue (common in AIDS due to immune suppression) |
| Neutropenia (low white blood cell count) | Neutropenia (caused by HIV or infections) |
| Nausea and vomiting | Nausea (from opportunistic infections or medications) |
| Headaches | Headaches (due to infections like meningitis or HIV itself) |
| Diarrhea | Diarrhea (common due to infections or HIV) |
| Lactic acidosis (from mitochondrial damage) | Metabolic acidosis (seen in severe AIDS or infections) |
| Peripheral neuropathy (nerve damage) | Peripheral neuropathy (often caused by HIV or opportunistic infections) |
| Increased risk of infections (due to immune suppression) | Increased risk of infections (AIDS-related) |

Sources (AZT side effects): https://aidsinfo.nih.gov/, https://www.fda.gov/, https://www.who.int/
Sources (AIDS symptoms): https://www.cdc.gov/hiv/basics/livingwithhiv/symptoms.html, https://www.mayoclinic.org/, https://www.hopkinsmedicine.org/
Do with that what you will.
Another major issue was the cost of AZT. At about $10,000 per year per patient, it was one of the most expensive drugs at the time, making it inaccessible to many people living with the conjoined term HIV/AIDS, especially those in low-income communities. Despite the high cost, patients had no alternative, and the drug was widely prescribed.
Shift to ART
By the mid-1990s, researchers realized that HIV’s rapid mutation allowed it to develop resistance to AZT when used alone. The turning point came with the advent of combination therapy, which later became known as ART (Antiretroviral Therapy). ART involves using multiple drugs from different classes to attack the virus at various stages of its replication cycle, reducing the likelihood of resistance.
In 1996, the introduction of Highly Active Antiretroviral Therapy (HAART), which included AZT along with other antiretrovirals, marked a major step in the treatment of HIV. ART typically consists of a combination of three or more drugs from different classes (a rough sketch of why combining classes suppresses resistance follows the list below), such as:
NRTIs (Nucleoside Reverse Transcriptase Inhibitors): like AZT and Tenofovir.
NNRTIs (Non-Nucleoside Reverse Transcriptase Inhibitors): like Efavirenz.
Protease Inhibitors (PIs): such as Lopinavir or Ritonavir.
Integrase Inhibitors (INSTIs): like Dolutegravir.
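Here is a minimal Python sketch of that resistance arithmetic. It assumes, purely for illustration, that resistance mutations to different drug classes arise independently, and the per-drug probability is a made-up round number; the real dynamics are far more complicated.

```python
# Toy model of why combination therapy suppresses resistance.
# Assumption (illustrative only): resistance to each drug class arises
# independently, with a made-up per-virion probability.
p_resist_one_drug = 1e-4  # assumed chance a replicating virion resists one drug

for n_drugs in (1, 2, 3):
    # To escape an n-drug regimen, a virion must resist all n drugs at once.
    p_escape = p_resist_one_drug ** n_drugs
    print(f"{n_drugs} drug(s): escape probability per virion ~ {p_escape:.0e}")

# Output: 1e-04, 1e-08, 1e-12 - each added class multiplies the barrier.
```

Under that (admittedly simplified) assumption, each added drug class multiplies the barrier the virus must clear, which is the stated rationale for three-drug regimens.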
Side Effects of ART vs. AIDS Symptoms
Let’s look at another side-by-side comparison.

| Side Effects of ART | Symptoms of AIDS |
| --- | --- |
| Nausea | Nausea (from infections or medications) |
| Diarrhea | Diarrhea (from opportunistic infections) |
| Liver toxicity | Liver problems (due to infections or HIV medications) |
| Fatigue | Fatigue (due to immune suppression) |
| Bone density loss | Bone pain and loss (common in advanced AIDS) |
| Immune reconstitution inflammatory syndrome (IRIS) | Infections due to immune system recovery |
| Anemia | Anemia (from bone marrow suppression or infections) |
| Lipodystrophy (fat redistribution) | Wasting syndrome (severe muscle and weight loss) |
| Peripheral neuropathy | Peripheral neuropathy (from HIV or infections) |
| Increased risk of cardiovascular disease | Increased risk of cardiovascular disease (as a result of long-term immune activation) |

Sources (ART side effects): NIH, WHO, CDC
Sources (AIDS symptoms): CDC, Mayo Clinic, Johns Hopkins
So, with so many overlaps, is the cure worse than the disease? Which one is causing the symptoms?
All valid questions. AIDS itself is characterized by severe immune suppression due to the depletion of CD4 (T-cells), leading to life-threatening opportunistic infections and cancers such as Kaposi’s sarcoma, Pneumocystis pneumonia, and tuberculosis. This is what was seen in the original five AIDS cases.
While the side effects of ART can be difficult, the benefits are still said to far outweigh the risks, as ART prevents the progression of HIV to AIDS by keeping viral loads undetectable and maintaining immune function.
Here’s the assumption, though: HIV progresses to AIDS.
That is the problem with AIDS research, prevention, and treatment.
Make no mistake, AIDS is real. But AIDS is NOT a disease. It refers to a specific set of symptoms and conditions that result from the severe weakening of the immune system.
AIDS is diagnosed when the CD4 T-cell count drops below a certain threshold (typically below 200 cells per microliter of blood) or when an individual develops one or more AIDS-defining illnesses, such as Kaposi’s sarcoma, Pneumocystis pneumonia, or tuberculosis.
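Since the definition above is essentially a rule, it can be written down directly. Here is a minimal Python sketch of the diagnostic logic as just stated - simplified, of course: the full clinical criteria list many more defining illnesses, and the function name and structure are our own for illustration.

```python
# Minimal sketch of the diagnostic rule stated above. Simplified: the real
# clinical criteria list dozens of AIDS-defining illnesses; the names below
# are just the examples from the text, and the function is our own.
AIDS_DEFINING_ILLNESSES = {
    "Kaposi's sarcoma",
    "Pneumocystis pneumonia",
    "tuberculosis",
}

def meets_aids_definition(cd4_per_microliter: int, illnesses: set) -> bool:
    """AIDS: CD4 count below 200 cells/uL, or any AIDS-defining illness."""
    return cd4_per_microliter < 200 or bool(illnesses & AIDS_DEFINING_ILLNESSES)

print(meets_aids_definition(450, set()))                 # False
print(meets_aids_definition(150, set()))                 # True (CD4 threshold)
print(meets_aids_definition(450, {"Kaposi's sarcoma"}))  # True (defining illness)
```

Note that nothing in this rule mentions HIV itself - the diagnosis rests on the T-cell count and the illnesses, which is exactly the point the surrounding argument turns on.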
So, while ART does do what it’s reported to do - keep viral loads undetectable, which helps maintain CD4 T-cell counts - if HIV is not the cause (and we’ve seen reason to explore other explanations), then this metric is useless.
Measuring HIV viral loads and maintaining undetectable levels would not necessarily prevent the syndrome known as AIDS. In this scenario (HIV not causing AIDS), reducing HIV levels wouldn't address the underlying cause of the immune system's collapse or the onset of AIDS-defining illnesses.
CD4 T-cell counts might still be used as a metric for immune function, but without HIV being linked to immune suppression, these counts may reflect other factors causing immune degradation, not HIV viral replication.
So why then? Why keep taking AZT and ART? Why would the government health agencies recommend it?
And here we get to the crux of it.
Big Pharma
As pointed out, AZT was first synthesized in the 1960s by Dr. Jerome Horwitz at the Michigan Cancer Foundation in Detroit as a potential treatment for cancer.
However, the drug was shelved because it proved ineffective in treating cancer and was considered too toxic. For over a decade, it was largely forgotten.
Horwitz never patented AZT because he saw no potential use for it. As a result, it remained largely dormant in the scientific world until the emergence of the HIV/AIDS crisis in the early 1980s.
This is where something interesting happens.
Gallo announced in April 1984 that HIV was the cause of AIDS. In October 1984 (six months later), AZT was selected by Burroughs Wellcome for testing in their studies. Burroughs sent a sample of AZT (calling it “compound S”) to the NIH for testing against HIV. This period of NIH testing preceded the 16-week human trial with the favorable results described earlier.
The NIH reported back in February 1985 that it exhibited significant activity at low concentrations (which BW already knew).
Little did the NIH know, Burroughs Wellcome had already drafted a patent application back in October of 1984 before they sent it to the NIH. Within about 3 weeks, the patent application was filed in the U.K.
Needless to say, the NIH was a little miffed and this led to court cases, but nothing of great significance was changed.
AZT was approved by the FDA in March 1987. This came just two years after BW filed for the patent (unusually fast, mind you).
Between 1987 and 1996, AZT was widely prescribed as the only available treatment, even though its effectiveness was limited. Burroughs Wellcome made millions every year off of AZT.
But then the paint began to drip.
The public started to notice the severe side effects of AZT shortly after its FDA approval, when it became the first widely prescribed treatment for HIV/AIDS. As more patients began using AZT over extended periods, several concerning side effects became evident, which we outlined earlier.
By the early 1990s, it became clear that AZT alone (monotherapy) was not sufficient to control HIV and that long-term use led to drug resistance. The combination of side effects and limited long-term efficacy led to public outcry and demands for more research into safer and more effective treatments.
In 1995, Glaxo and Burroughs Wellcome merged to form Glaxo Wellcome. This merger created one of the largest pharmaceutical companies in the world at the time. Burroughs Wellcome was the company behind the development and commercialization of AZT, while Glaxo had a strong presence in respiratory treatments and other pharmaceuticals.
That same year and prior to the merger, Glaxo had received FDA approval for Lamivudine (3TC). Clinical trials testing the combination of AZT and lamivudine began soon after the merger. Early trials demonstrated that the two drugs together were more effective at reducing viral load and delaying resistance than when used individually.
The combination therapy was quickly fast-tracked for approval (shocker), and Combivir, which combined AZT and lamivudine into a single pill, was approved by the FDA in 1997. This was just two years after the merger.
This pill became a cornerstone of ART (Antiretroviral Therapy) regimens throughout the late 1990s and early 2000s. ART regimens have evolved over the years, but one thing has not changed.
These prescriptions bring in a lot of money.
Today, GSK/ViiV Healthcare (the name for the continued evolution of mergers from Glaxo Wellcome with other companies) is responsible for about 20-25% of ART prescriptions globally, particularly driven by Dolutegravir-based therapies. Most of the remaining market share belongs to Gilead Sciences, with Tenofovir-based treatments leading globally.
While exact revenue figures fluctuate annually, estimates indicate that ViiV Healthcare (a joint venture between GSK, Pfizer, and Shionogi) made over $5 billion in annual sales from HIV treatments in recent years, with drugs like Triumeq, Dovato, and Juluca leading the market. These products alone generated around $1.6 billion in the first half of 2022.
Despite all these advancements though, there is a glaring reality.
There still is no cure for HIV or AIDS.
Patients with HIV must remain on ART for their entire lives to manage the virus. While ART has transformed HIV from a deadly disease into a manageable chronic condition (which assumes HIV is causing AIDS), it does not eliminate the virus from the body.
ART suppresses HIV, keeping the viral load so low that it becomes undetectable and cannot progress to AIDS. However, the virus hides in latent reservoirs within the body (such as in immune cells), and if ART is stopped, HIV can reemerge and begin replicating again, leading to immune system decline.
Translation: if you stop taking these drugs (and paying the pharmaceutical companies for them), you will likely die.
Conclusion
While the mainstream medical community remains focused on HIV, several alternative theories have emerged regarding the true cause of AIDS. These theories suggest that factors like drug abuse, malnutrition, and pre-existing health conditions might play a more significant role in the syndrome than previously thought.
The Role of Drug Abuse in AIDS Cases
Research has shown that many early AIDS patients were heavy users of recreational drugs, particularly "poppers" (alkyl nitrites), which have been linked to immune system damage. Some scientists believe that long-term drug use could have caused the immune deficiencies seen in AIDS patients, rather than a viral infection like HIV. Studies in mice have even demonstrated that these drugs can deplete T-cells, a hallmark of AIDS.
Malnutrition and Immune Deficiency
Another theory points to malnutrition, especially in impoverished regions where AIDS rates are high. Malnutrition weakens the immune system, making individuals more susceptible to infections. This would explain why AIDS cases are often clustered in certain populations with poor nutrition and healthcare.
The Suppression of Scientific Dissent
Despite the growing evidence for alternative causes of AIDS, dissenting voices in the scientific community have been systematically silenced. Researchers who challenge the HIV theory often find themselves cut off from funding and unable to publish in major journals. Bryan Ellison, a molecular biologist, is one such figure. His book, Why We Will Never Win the War on AIDS, highlights the censorship and obstacles he faced in trying to bring attention to this debate.
Many high-profile scientists, including Nobel laureates, have expressed doubts about the HIV-AIDS link, but their concerns rarely make it into mainstream media. According to Ellison, the government and medical establishment have actively blocked media coverage and even pressured dissenting scientists to retract their claims. This has created an environment where only one narrative is allowed to exist, even if that narrative may be flawed.
The core of the AIDS controversy is the need for open, unbiased scientific debate. With billions of dollars at stake, vested interests in the AIDS industry are strong. However, scientific progress depends on the willingness to question established ideas, especially when those ideas have yet to produce the desired results.
If the current HIV-focused approach to AIDS is wrong, then continuing down this path could be wasting valuable time and resources. Alternative causes need to be researched and explored without fear of professional retribution. Only then can we hope to develop effective treatments and potentially cure AIDS once and for all.