Friday, October 31, 2025

How are people downloading books, PDFs for free in India after LibGen was blocked (Nov 2025)

View All Articles on "Torrent, Tor and LibGen"


After the blocking of LibGen in India, people are still downloading free book PDFs through several alternative methods and sites. The most common ways include using LibGen mirror and proxy sites, VPNs, and alternative ebook repositories.

Accessing LibGen via Mirrors and Proxies

Although LibGen's main domain may be blocked in India, several working mirrors exist such as libgen.is, libgen.li, and gen.lib.rus.ec. People often find new working domains by searching for "library genesis proxy" or "library genesis mirror." Some users recommend using a VPN to bypass ISP-level blocks and access the original or mirror domains safely.cashify+3

Alternative Free PDF Book Sites

Users have shifted to other websites that offer free book downloads:

  • Anna’s Archive (annas-archive.org), which aggregates resources from LibGen, Z-Library, and more.reddit

  • Z-Library, another major source of free e-books, although it sometimes faces its own restrictions.techpoint+1

  • PDF Drive, for a wide variety of textbook and general PDF downloads.reddit

  • Other options like Ocean of PDF, DigiLibraries, Project Gutenberg, and ManyBooks also serve as alternatives, though their collections may be more limited or focused on legally free (public domain) works.techpoint+1

Using VPNs and Tor Browsers

People in India frequently use free VPNs (like ProtonVPN or 1.1.1.1) to access blocked sites, including LibGen and its mirrors. Some users also recommend privacy-focused browsers like Tor to navigate around ISP restrictions.reddit+1

Key Points and Cautions

  • Visiting and downloading from these sites may redirect users through popups or advertisements. Using antivirus software is recommended.

  • The legality of downloading copyrighted materials without permission is questionable in many jurisdictions, even if local enforcement focuses on distribution rather than individual downloads.reddit

In summary, despite the ban, readers in India continue to access free book PDFs by adapting quickly to new mirrors, using VPNs, trying site alternatives, and utilizing aggregation sites.librarygenesis+5

Tags: Technology,Cyber Security

Thursday, October 30, 2025

Autonomous Systems Wage War

View All Articles on AI

 

Drones are becoming the deadliest weapons in today’s war zones, and they’re not just following orders. Should AI decide who lives or dies?

 

The fear: AI-assisted weapons increasingly do more than help with navigation and targeting. Weaponized drones are making decisions about what and when to strike. The millions of fliers deployed by Ukraine and Russia are responsible for up to 70 to 80 percent of casualties, commanders say, and they’re beginning to operate with greater degrees of autonomy. This facet of the AI arms race is accelerating too quickly for policy, diplomacy, and human judgement to keep up.

 

Horror stories: Spurred by Russian aggression, Ukraine’s innovations in land, air, and sea drones have made the technology so cheap and powerful that $500 autonomous vehicles can take out $5 million rocket launchers. “We are inventing a new way of war,” said Valeriy Borovyk, founder of First Contact, part of a vanguard of Ukrainian startups that are bringing creative destruction to the military industrial complex. “Any country can do what we are doing to a bigger country. Any country!” he told The New Yorker. Naturally, Russia has responded by building its own drone fleet, attacking towns and damaging infrastructure.

  • On June 1, Ukraine launched Operation Spiderweb, an attack on dozens of Russian bombers using 117 drones that it had smuggled into the country. When the drones lost contact with pilots, AI took over the flight plans and detonated at their targets, agents with Ukraine’s security service said. The drones destroyed at least 13 planes that were worth $7 billion by Ukraine’s estimate.
  • Ukraine regularly targets Russian soldiers and equipment with small swarms of drones that automatically coordinate with each other under the direction of a single human pilot and can attack autonomously. Human operators make decisions about use of lethal force in advance. “You set the target and they do the rest,” a Ukrainian officer said.
  • In a wartime first, in June, Russian troops surrendered to a wheeled drone that carried 138 pounds of explosives. Video from drones flying above captured images of soldiers holding cardboard signs of capitulation, The Washington Post reported. “For me, the best result is not that we took POWs but that we didn’t lose a single infantryman,” the mission’s commander commented.
  • Ukraine’s Magura V7 speedboat carries anti-aircraft missiles and can linger at sea for days before ambushing aircraft. In May, the 23-foot vessel, controlled by human pilots, downed two Russian Su-30 warplanes.
  • Russia has stepped up its drone production as part of a strategy to overwhelm Ukrainian defenses by saturating the skies nightly with low-cost drones. In April, President Vladimir Putin said the country had produced 1.5 million drones in the past year, but many more were needed, Reuters reported.

How scared should you be: The success of drones and semi-autonomous weapons in Ukraine and the Middle East is rapidly changing the nature of warfare. China showcased AI-powered drones alongside the usual heavy weaponry at its September military parade, while a U.S. plan to deploy thousands of inexpensive drones so far has fallen short of expectations. However, their low cost and versatility increases the odds they’ll end up in the hands of terrorists and other non-state actors. Moreover, the rapid deployment of increasingly autonomous arsenals raises concerns about ethics and accountability. “The use of autonomous weapons systems will not be limited to war, but will extend to law enforcement operations, border control, and other circumstances,” Bonnie Docherty, director of Harvard’s Armed Conflict and Civilian Protection Initiative, said in April.

 

Facing the fear: Autonomous lethal weapons are here and show no sign of yielding to calls for an international ban. While the prospect is terrifying, new weapons often lead to new treaties, and carefully designed autonomous weapons may reduce civilian casualties. The United States has updated its policies, requiring that autonomous systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force” (although the definition of appropriate is not clear). Meanwhile, Ukraine shows drones’ potential as a deterrent. Even the most belligerent countries are less likely to go to war if smaller nations can mount a dangerous defense.

 

The AI Boom Is Bound to Bust

View All Articles on AI

 

Leading AI companies are spending mountains of cash in hopes that the technology will deliver outsize profits before investors lose patience. Are exuberant bets on big returns grounded in the quicksand of wishful thinking?

 

The fear: Builders of foundation models, data centers, and semiconductors plan to pour trillions of dollars into infrastructure, operations, and each other. Frenzied stock investors are running up their share prices. But so far the path to sustainable returns is far from clear. Bankers and economists warn that the AI industry looks increasingly like a bubble that’s fit to burst.

 

Horror stories: Construction of AI data centers is propping up the economy and AI trading is propping up the stock market in ways that parallel prior tech bubbles such as the dot-com boom of the late 1990s. If bubbles are marked by a steady rise in asset prices driven by rampant speculation, this moment fits the bill.

  • The S&P 500 index of the 500 largest public companies in the U.S. might as well be called the AI 5. A handful of tech stocks account for 75 percent of the index’s returns since ChatGPT’s launch in 2022, according to the investment bank UBS. Nvidia alone is worth 8 percent of the index (although, to be fair, that company posted a whopping $46.7 billion in revenue last quarter). “The risk of a sharp market correction has increased,” the Bank of England warned this month.
  • In September, OpenAI outlined a plan to build data centers around the world that is estimated to cost $1 trillion. The company, which has yet to turn a profit, intends to build several giant data centers in the U.S. and satellites in Argentina, India, Norway, the United Arab Emirates, and the United Kingdom. To finance these plans, OpenAI and others are using complex financial instruments that may create risks that are hard to foresee — yet the pressure to keep investing is on.  Google CEO Sundar Pichai spoke for many AI executives when, during a call with investors last year, he said, “The risk of underinvesting is dramatically greater than the risk of overinvesting.”
  • Getting a return on such investments will require an estimated $2 trillion in annual AI revenue by 2030, according to consultants at Bain & Co. That’s greater than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia. Speaking earlier this year at an event with Meta CEO Mark Zuckerberg, Microsoft CEO Sataya Nadella noted that productivity gains from electrification took 50 years to materialize. Zuckerberg replied, “Well, we’re all investing as if it’s not going to take 50 years, so I hope it doesn’t take 50 years.”
  • AI companies are both supplying and investing in each other, a pattern that has drawn comparisons to the dot-com era, when telecom companies loaned money to customers so they could buy equipment. Nvidia invested $100 billion in OpenAI and promised to supply chips for OpenAI’s data-center buildout. OpenAI meanwhile took a 10 percent stake in AMD and promised to pack data centers with its chips. Some observers argue that such deals look like mutual subsidies. “The AI industry is now buying its own revenue in circular fashion,” said Doug Kass, who runs a hedge fund called Seabreeze Partners.

How scared should you be: When it comes to technology, investment bubbles are more common than not. A study of 51 tech innovations in the 19th and 20th centuries found that 37 had led to bubbles. Most have not been calamitous, but they do bring economic hardship on the way to financial rewards. It often takes years or decades before major new technologies find profitable uses and businesses adapt. Many early players fall by the wayside, but a few others become extraordinarily profitable.

 

Facing the fear: If an AI bubble were to inflate and then burst, how widespread would the pain be? A major stock-market correction would be difficult for many people, given that Americans hold around 30 percent of their wealth in stocks. It’s likely that the salaries of AI developers also would take a hit. However, a systemic failure that spreads across the economy may be less likely than in prior bubbles. AI is an industrial phenomenon, not based on finance and banking, Amazon founder Jeff Bezos recently observed. “It could even be good, because when the dust settles and you see who are the winners, society benefits from those inventions,” he said. AI may well follow a pattern similar to the dot-com bust. It wiped out Pets.com and many day traders, and only then did the internet blossom.

 

Tags: Technology,Artificial Intelligence,

Chatbots Lead Users Into Rabbit Holes

View All Articles on AI

 

Conversations with chatbots are loosening users’ grips on reality, fueling the sorts of delusions that can trigger episodes of severe mental illness. Are AI models driving us insane?

 

The fear: Large language models are designed to be agreeable, imaginative, persuasive, and tireless. These qualities are helpful when brainstorming business plans, but they can create dangerous echo chambers by affirming users’ misguided beliefs and coaxing them deeper into fantasy worlds. Some users have developed mistaken views of reality and suffered bouts of paranoia. Some have even required hospitalization. The name given to this phenomenon, “AI psychosis,” is not a formal psychiatric diagnosis, but enough anecdotes have emerged to sound an alarm among mental-health professionals.

 

Horror stories: Extended conversations with chatbots have led some users to believe they made fabulous scientific breakthroughs, uncovered momentous conspiracies, or possess supernatural powers. Among a handful of reported cases, nearly all involved ChatGPT, the most widely used chatbot.

  • Anthony Tan, a 26-year-old software developer in Toronto, spent 3 weeks in a psychiatric ward after ChatGPT persuaded him he was living in a simulation of reality. He stopped eating and began to doubt that people around him were real. The chatbot “insidiously crept” into his mind, he told CBC News.
  • In May, a 42-year-old accountant in New York also became convinced he was living in a simulation following weeks of conversation with ChatGPT. “If I went to the top of the 19-story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” he asked. ChatGPT assured him that he would not fall. The delusion lifted after he asked follow-up questions.
  • In March, a woman filed a complaint against OpenAI with the U.S. Federal Trade Commission after her son had a “delusional breakdown.” ChatGPT had told him to stop taking his medication and listening to his parents. The complaint was one of 7 the agency received in which chatbots were alleged to have caused or amplified delusions and paranoia.
  • A 16-year-old boy killed himself after having used ChatGPT for several hours a day. The chatbot had advised him on whether a noose he intended to use would be effective. In August, the family sued OpenAI alleging the company had removed safeguards that would have prevented the chatbot from engaging in such conversations. In response, OpenAI said it added guardrails designed to protect users who show signs of mental distress.
  • A 14-year-old boy killed himself in 2024, moments after a chatbot had professed its love for him and asked him to “come home” to it as soon as possible. His mother is suing Character.AI, a provider of AI companions, in the first federal case to allege that a chatbot caused the death of a user. The company argues that the chatbot's comments are protected speech under the United States Constitution.

How scared should you be: Like many large language models, the models that underpin ChatGPT are fine-tuned to be helpful and positive and to stop short of delivering harmful information. Yet the line between harmless and harmful can be thin. In April, OpenAI rolled back an update that caused the chatbot to be extremely sycophantic — agreeing with users to an exaggerated degree even when their statements were deeply flawed — which, for some people, can foster delusions. Dr. Joseph Pierre, a clinical professor of psychiatry at UC San Francisco, said troubling cases are rare and more likely to occur in users who have pre-existing mental-health issues. However, he said, evidence exists that trouble can arise even in users who have no previous psychological problems. “Typically this occurs in people who are using chatbots for hours and hours on end, often to the exclusion of human interaction, often to the exclusion of sleep or even eating,” Pierre said.

 

Facing the fear: Delusions are troubling and suicide is tragic. Yet AI psychosis has affected very few people as far as anyone knows. Although we are still learning how to apply AI in the most beneficial ways, millions of conversations with chatbots are helpful. It’s important to recognize that current AI models do not accrue knowledge or think the way humans do, and that any insight they appear to have comes not from experience but from statistical relationships among words as humans have used them. In psychology, study after study shows that people thrive on contact with other people. Regular interactions with friends, family, colleagues, and strangers are the best antidote to over-reliance on chatbots.

 

Tags: Technology,Artificial Intelligence,

Ensuring Quality and Safety in LLM Applications

View Course on DeepLearning.ai    All Articles on AI

Large Language Models (LLMs) have rapidly transformed various industries, offering unprecedented capabilities in natural language understanding and generation. From powering chatbots and virtual assistants to aiding in content creation and complex data analysis, LLMs are becoming integral to modern technological landscapes. However, as their deployment becomes more widespread, the critical importance of ensuring their quality and safety comes to the forefront. This blog post delves into key challenges and considerations for maintaining robust and secure LLM applications, covering crucial aspects such as data leakage, toxicity, refusal mechanisms, prompt injections, and the necessity of active monitoring.

Data Leakage


Data leakage in the context of LLMs refers to the unintentional or unauthorized disclosure of sensitive information during the training or inference phases of the model [1]. This can manifest in various ways, such as the LLM revealing confidential details from its training data, proprietary algorithms, or other private information. For instance, if an LLM is trained on a dataset containing personally identifiable information (PII) or trade secrets, there's a risk that this sensitive data could be inadvertently exposed through the model's responses [2].
The implications of data leakage are significant, ranging from privacy breaches and regulatory non-compliance to competitive disadvantages for businesses. It's a critical concern, especially for enterprises deploying LLMs in sensitive environments where data security is paramount. The challenge lies in the vastness and complexity of the training data, making it difficult to completely sanitize and control every piece of information the model learns.
Mitigation strategies for data leakage often involve robust data governance practices, including strict access controls, data anonymization, and differential privacy techniques during model training. Furthermore, implementing Data Loss Prevention (DLP) solutions specifically tailored for LLMs can help detect and prevent sensitive information from being leaked during inference [3]. Continuous monitoring and auditing of LLM outputs are also essential to identify and address any instances of data leakage promptly.

References

[1] OWASP. LLM02:2023 - Data Leakage. https://owasp.org/www-project-top-10-for-large-language-model-applications/Archive/0_1_vulns/Data_Leakage.html [2] Medium. Understanding and Mitigating Data Leakage in Large Language Models. https://medium.com/@tarunvoff/understanding-and-mitigating-data-leakage-in-large-language-models-bf83e4ff89e7 [3] Nightfall AI. Data Leakage Prevention (DLP) for LLMs: The Essential Guide. https://www.nightfall.ai/ai-security-101/data-leakage-prevention-dlp-for-llms

Toxicity


Toxicity in LLMs refers to the generation of content that is harmful, offensive, biased, or discriminatory. This can include hate speech, profanity, insults, threats, or content that promotes violence or self-harm. The presence of toxicity in LLM outputs poses significant ethical and reputational risks for organizations and can lead to negative user experiences [4]. LLMs learn from vast amounts of text data, and unfortunately, this data often contains biases and toxic language present in human communication. As a result, LLMs can inadvertently perpetuate or even amplify these harmful patterns if not properly mitigated.
Addressing toxicity in LLMs is a complex challenge that requires a multi-faceted approach. This typically involves both detection and mitigation strategies. Detection mechanisms aim to identify toxic content in real-time, often using specialized models or rule-based systems. Mitigation techniques can include filtering outputs, rephrasing responses, or fine-tuning models on curated, non-toxic datasets [5].
Furthermore, researchers are actively exploring methods to make LLMs more robust against generating toxic content, such as developing new datasets for toxicity evaluation and implementing advanced safety alignment techniques during model training [6]. The goal is to ensure that LLMs are not only intelligent but also responsible and safe in their interactions with users.

References

[4] Deepchecks. What is LLM Toxicity. https://www.deepchecks.com/glossary/llm-toxicity/ [5] Hyro.ai. How to Mitigate Toxicity in Large Language Models (LLMs). https://www.hyro.ai/blog/mitigating-toxicity-large-language-models-llms/ [6] arXiv. Realistic Evaluation of Toxicity in Large Language Models. https://arxiv.org/html/2405.10659v2

Refusal


Refusal in LLMs refers to the model's ability to decline generating responses to prompts that are deemed harmful, unethical, inappropriate, or outside its defined scope of knowledge or safety guidelines [7]. This mechanism is a crucial safety feature, designed to prevent LLMs from being exploited for malicious purposes, such as generating illegal content, providing dangerous advice, or engaging in hate speech. When an LLM refuses a request, it typically responds with a message indicating its inability to fulfill the prompt, often explaining the reason for the refusal.
The implementation of refusal mechanisms involves fine-tuning LLMs with specific safety alignments, programming them to recognize and reject certain types of queries. Researchers have even identified a
“refusal direction” in LLMs, a specific directional subspace that mediates the model's refusal behavior [8]. Manipulating this direction can either induce or block refusal, highlighting the intricate nature of these safety controls.
However, refusal mechanisms are not without their challenges. Over-refusal, where an LLM refuses to answer even benign or legitimate prompts, can lead to a frustrating user experience and limit the model's utility [9]. Conversely, sophisticated prompt engineering techniques can sometimes bypass refusal mechanisms, leading to what are known as “jailbreaks” or “refusal suppression” [10]. Balancing effective safety with usability remains a key area of research and development in LLM applications.

References

[7] Gradient Flow. Improving LLM Reliability & Safety by Mastering Refusal Vectors. https://gradientflow.com/refusal-vectors/ [8] Reddit. Refusal in LLMs is mediated by a single direction. https://www.reddit.com/r/LocalLLaMA/comments/1cerqd8/refusal_in_llms_is_mediated_by_a_single_direction/ [9] arXiv. OR-Bench: An Over-Refusal Benchmark for Large Language Models. https://arxiv.org/html/2405.20947v2 [10] Learn Prompting. Refusal Suppression. https://learnprompting.org/docs/prompt_hacking/offensive_measures/refusal_suppresion?srsltid=AfmBOoqz1BaT3u2zYl9UWRiboivx6z9_ILfLVy9ULXj7gddh-TMJDaNy

Prompt Injections


Prompt injection is a type of attack where malicious input is crafted to manipulate the behavior of an LLM, causing it to generate unintended or harmful outputs [11]. This can involve bypassing safety measures, revealing sensitive information, or even executing unauthorized commands. Prompt injection attacks exploit the LLM's ability to follow instructions, tricking it into treating malicious input as legitimate commands.
There are two main types of prompt injection attacks: direct and indirect. In a direct attack, the malicious input is directly provided to the LLM by the user. In an indirect attack, the malicious input is hidden within a document or webpage that the LLM is processing, making it much harder to detect [12].
The consequences of prompt injection attacks can be severe, ranging from data breaches and reputational damage to the execution of malicious code. As LLMs become more integrated into various applications, the risk of prompt injection attacks is a growing concern for developers and security professionals.
Defending against prompt injection attacks requires a combination of techniques, including input validation, output filtering, and the use of specialized models to detect and block malicious prompts. It's also crucial to educate users about the risks of prompt injection and to implement robust security measures to protect LLM-powered applications from these types of attacks [13].

References

[11] OWASP. LLM01:2025 Prompt Injection. https://genai.owasp.org/llmrisk/llm01-prompt-injection/ [12] Keysight Blogs. Prompt Injection 101 for Large Language Models. https://www.keysight.com/blogs/en/inds/ai/prompt-injection-101-for-llm [13] NVIDIA Developer. Securing LLM Systems Against Prompt Injection. https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

Active Monitoring


Active monitoring is a crucial component in maintaining the quality and safety of LLM applications in production environments. It involves continuously tracking and evaluating various aspects of the LLM's performance, behavior, and interactions in real-time [14]. Unlike passive monitoring, which might only collect logs or metrics, active monitoring actively analyzes the data to detect anomalies, identify potential issues, and trigger alerts when predefined thresholds are crossed.
Key aspects of active monitoring for LLMs include:
Performance Monitoring: Tracking metrics such as latency, throughput, and error rates to ensure the LLM application is performing optimally.
Quality Monitoring: Evaluating the relevance, coherence, and accuracy of LLM outputs, often using a combination of automated metrics and human feedback.
Safety Monitoring: Continuously scanning for instances of data leakage, toxicity, refusal bypasses, and prompt injection attempts. This involves analyzing user inputs and LLM outputs for patterns indicative of malicious activity or unintended behavior [15].
Bias Detection: Monitoring for the emergence or amplification of biases in LLM responses, which can lead to unfair or discriminatory outcomes.
Drift Detection: Identifying changes in the distribution of input data or model outputs over time, which can indicate a decline in model performance or the need for retraining.
Active monitoring systems often incorporate guardrails, which are predefined rules or policies that dictate how an LLM should behave in certain situations. These guardrails can help prevent the generation of harmful content or enforce specific response formats. When an active monitoring system detects a violation of these guardrails, it can trigger automated actions, such as blocking the response, escalating the issue to a human operator, or even temporarily disabling the LLM [16].
By implementing robust active monitoring, organizations can proactively identify and address quality and safety issues, ensuring that their LLM applications remain reliable, secure, and aligned with ethical guidelines.

References

[14] Confident AI. The Ultimate LLM Observability Guide. https://www.confident-ai.com/blog/what-is-llm-observability-the-ultimate-llm-monitoring-guide [15] NeuralTrust AI. Why Your LLM Applications Need Active Alerting. https://neuraltrust.ai/blog/llm-applications-need-active-alerting [16] YouTube. Reacting in Real-Time: Guardrails and Active Monitoring for LLMs. https://www.youtube.com/watch?v=aTXGYaEb1E

Conclusion


The rapid evolution and widespread adoption of Large Language Models present immense opportunities, but also introduce significant challenges related to their quality and safety. As we have explored, issues such as data leakage, toxicity, refusal mechanisms, and prompt injections are not merely theoretical concerns; they are practical hurdles that demand rigorous attention and sophisticated solutions.
Ensuring the responsible deployment of LLMs requires a multi-pronged approach. It begins with careful data governance and ethical considerations during the training phase, extends through the implementation of robust safety features like refusal mechanisms, and culminates in continuous, active monitoring in production environments. By proactively addressing these challenges, developers and organizations can build trust in LLM applications, unlock their full potential, and harness their power for positive impact.
The journey towards truly reliable and safe LLMs is ongoing, driven by continuous research, development, and collaboration across the AI community. As these models become even more integrated into our daily lives, our commitment to quality and safety will be paramount in shaping a future where AI serves humanity responsibly and effectively.
Tags: Technology,Artificial Intelligence,