
Monday, May 05, 2025

A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful - The New York Times

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse

"A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.


Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”

More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”
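To make that concrete, here is a minimal sketch of probability-based generation, with a made-up vocabulary and made-up model scores: the model ranks candidate next words by probability and samples one, so a plausible-but-wrong candidate always keeps some chance of being chosen. Nothing here is any particular company's model.

import math, random

def sample_next_token(scores, temperature=1.0):
    # Softmax: turn raw model scores into a probability distribution.
    scaled = [s / temperature for s in scores]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample in proportion to probability; lower-ranked (and possibly
    # false) candidates retain a nonzero chance of being picked.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical candidates for "The first person on the moon was ..."
candidates = ["Neil Armstrong", "Buzz Aldrin", "Yuri Gagarin"]
scores = [3.0, 1.5, 0.5]  # illustrative scores, not real model output
print(candidates[sample_next_token(scores)])

Run this a few hundred times and the wrong answers appear a small but stubborn fraction of the time, which is the basic reason hallucinations "will never go away."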

Amr Awadallah, the chief executive of Vectara, which builds A.I. tools for businesses, believes A.I. “hallucinations” will persist. Cayce Clifford for The New York Times

For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems.

The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.

Those hallucinations may not be a big problem for many people, but they are a serious issue for anyone using the technology with court documents, medical information or sensitive business data.

“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”

Cursor and Mr. Truell did not respond to requests for comment.

For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.

The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.

When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.

Since the arrival of ChatGPT, the phenomenon of hallucination has raised concerns about the reliability of A.I. systems. Kelsey McClellan for The New York Times

In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.

“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”

Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.

Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.

Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information.
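Vectara has not published its checker here, so the sketch below is only a crude stand-in for the idea: score each summary sentence against the source article and report the share that is unsupported. Real evaluators use trained entailment models rather than the naive word overlap used for illustration.

def hallucination_rate(source, summary_sentences):
    # Naive support check: a sentence counts as "supported" if most of
    # its longer words appear somewhere in the source article.
    src_words = set(source.lower().split())
    unsupported = 0
    for sentence in summary_sentences:
        words = [w for w in sentence.lower().split() if len(w) > 3]
        hits = sum(1 for w in words if w in src_words)
        if not words or hits / len(words) < 0.5:
            unsupported += 1
    return unsupported / max(len(summary_sentences), 1)

# Hypothetical example: the second sentence invents a detail.
article = "The city council voted on Tuesday to expand the bus network."
summary = ["The council voted to expand the bus network.",
           "The mayor also announced a new subway line."]
print(hallucination_rate(article, summary))  # 0.5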

Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent.

In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8 percent.

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.

So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.
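For intuition, here is trial-and-error learning in its simplest form: an epsilon-greedy bandit that gradually favors whichever action has been rewarded most. The reward probabilities are invented for illustration; real systems reward checkable outcomes such as a correct math answer or passing code tests, which is one reason the approach shines in those domains and struggles where correctness is harder to score.

import random

def epsilon_greedy(reward_fn, n_actions=3, steps=2000, eps=0.1):
    # Learn the value of each action purely by trial and error.
    counts = [0] * n_actions
    values = [0.0] * n_actions
    for _ in range(steps):
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: values[i])
        reward = reward_fn(a)
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # running mean
    return values

# Hypothetical task: action 2 succeeds most often and should win out.
success = [0.2, 0.5, 0.8]
print(epsilon_greedy(lambda a: 1.0 if random.random() < success[a] else 0.0))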

“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.

Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
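The compounding is ordinary probability. If each reasoning step independently goes wrong with probability p, a chain of n steps contains at least one error with probability

P(\text{at least one error}) = 1 - (1 - p)^{n}

so even a modest per-step rate of p = 0.05 gives 1 - 0.95^{10} \approx 0.40, a roughly 40 percent chance that a ten-step chain is flawed somewhere. (Treating the steps as independent is a simplifying assumption for illustration.)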

The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.

“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.

Cade Metz writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

Karen Weise writes about technology for The Times and is based in Seattle. Her coverage focuses on Amazon and Microsoft, two of the most powerful companies in America."


Saturday, May 03, 2025

The Secret AI Experiment That Sent Reddit Into a Frenzy - The Atlantic

‘The Worst Internet-Research Ethics Violation I Have Ever Seen’

"The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.

Illustration by The Atlantic


When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

Joining the chorus of disapproval were fellow internet researchers, who condemned what they saw as a plainly unethical experiment. Amy Bruckman, a professor at the Georgia Institute of Technology who has studied online communities for more than two decades, told me the Reddit fiasco is “the worst internet-research ethics violation I have ever seen, no contest.” What’s more, she and others worry that the uproar could undermine the work of scholars who are using more conventional methods to study a crucial problem: how AI influences the way humans think and relate to one another.

The researchers, based at the University of Zurich, wanted to find out whether AI-generated responses could change people’s views. So they headed to the aptly named subreddit r/changemyview, in which users debate important societal issues, along with plenty of trivial topics, and award points to posts that talk them out of their original position. Over the course of four months, the researchers posted more than 1,000 AI-generated comments on pitbulls (is aggression the fault of the breed or the owner?), the housing crisis (is living with your parents the solution?), and DEI programs (were they destined to fail?). The AI commenters argued that browsing Reddit is a waste of time and that the “controlled demolition” 9/11 conspiracy theory has some merit. And as they offered their computer-generated opinions, they also shared their backstories. One claimed to be a trauma counselor; another described himself as a victim of statutory rape.

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)


The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.

When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)

Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

How humans are likely to respond in such a scenario is an urgent issue and a worthy subject of academic research. In their preliminary results, the researchers concluded that AI arguments can be “highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.” (Because the researchers finally agreed this week not to publish a paper about the experiment, the accuracy of that verdict will probably never be fully assessed, which is its own sort of shame.) The prospect of having your mind changed by something that doesn’t have one is deeply unsettling. That persuasive superpower could also be employed for nefarious ends.


Still, scientists don’t have to flout the norms of experimenting on human subjects in order to evaluate the threat. “The general finding that AI can be on the upper end of human persuasiveness—more persuasive than most humans—jibes with what laboratory experiments have found,” Christian Tarsney, a senior research fellow at the University of Texas at Austin, told me. In one recent laboratory experiment, participants who believed in conspiracy theories voluntarily chatted with an AI; after three exchanges, about a quarter of them lost faith in their previous beliefs. Another found that ChatGPT produced more persuasive disinformation than humans, and that participants who were asked to distinguish between real posts and those written by AI could not effectively do so.

Giovanni Spitale, the lead author of that study, also happens to be a scholar at the University of Zurich, and has been in touch with one of the researchers behind the Reddit AI experiment, who asked him not to reveal their identity. “We are receiving dozens of death threats,” the researcher wrote to him, in a message Spitale shared with me. “Please keep the secret for the safety of my family.”

One likely reason the backlash has been so strong is that, on a platform as close-knit as Reddit, betrayal cuts deep. “One of the pillars of that community is mutual trust,” Spitale told me; it’s part of the reason he opposes experimenting on Redditors without their knowledge. Several scholars I spoke with about this latest ethical quandary compared it—unfavorably—to Facebook’s infamous emotional-contagion study. For one week in 2012, Facebook altered users’ News Feed to see if viewing more or less positive content changed their posting habits. (It did, a little bit.) Casey Fiesler, an associate professor at the University of Colorado at Boulder who studies ethics and online communities, told me that the emotional-contagion study pales in comparison with what the Zurich researchers did. “People were upset about that but not in the way that this Reddit community is upset,” she told me. “This felt a lot more personal.”


The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.


Friday, May 02, 2025

NASA Urges Public To Look At Night Sky Now For ‘Nova’ Location


Topline

“In the wake of 2024’s total solar eclipse and rare displays of the Northern Lights, a third once-in-a-lifetime sight could be possible in 2025 as a star explodes as a nova for the first time since 1946. With T Coronae Borealis (also called T CrB and the “Blaze Star”) due to become 1,000 times brighter than normal and visible to the naked eye for the first time since 1946, NASA is advising sky-watchers to get to know the patch of sky it’s going to appear in.

Key Facts

T Coronae Borealis is a dim star that will briefly become a nova (new star) sometime during 2025, increasing from +10 magnitude, which is invisible to the naked eye, to +2 magnitude, which is about as bright as Polaris, the North Star (the arithmetic behind that jump is sketched after these key facts).

It's a “cataclysmic variable star” and a “recurrent nova” — a star that brightens dramatically on a known timescale, in this case about 80 years. That last happened in 1946, so it's due any day now.

Astronomers first predicted T CrB would explode between April and September 2024 after it suddenly dimmed in 2023 — a telltale sign that an explosion is imminent. However, that didn't happen. It was then predicted by scientists to “go nova” on Thursday, March 27, 2025, but that also failed to happen.

The “Blaze Star” is about 3,000 light-years away from the solar system. When it does finally “go nova,” it will become visible to the naked eye for a few nights.
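About that jump from +10 to +2 magnitude: astronomical magnitudes are logarithmic, with each step of one magnitude corresponding to a brightness factor of about 2.512, so the predicted change works out to

\frac{F_{\text{nova}}}{F_{\text{quiescent}}} = 10^{0.4\,(m_{1} - m_{2})} = 10^{0.4\,(10 - 2)} = 10^{3.2} \approx 1{,}585

which is on the order of the rough “1,000 times brighter” figure quoted above.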

How To Find T Coronae Borealis (T CrB & ‘The Blaze Star’)

Unless you know where that star is in the night sky, it's not going to be an impactful event. NASA’s Preston Dyches makes that point in a new blog post published this week — and it includes a valuable sky chart (below) showing everyone where to look.

T Coronae Borealis is a dim star in a constellation called Corona Borealis, the “Northern Crown,” a crescent of seven stars easily visible after dark from the Northern Hemisphere. “You’ll find Corona Borealis right in between the two bright stars Arcturus and Vega, and you can use the Big Dipper’s handle to point you to the right part of the sky,” writes Dyches. “Try having a look for it on clear, dark nights before the nova, so you’ll have a comparison when a new star suddenly becomes visible there.”

He advises practicing finding Corona Borealis in the eastern part of the sky during the first half of the night after dark during May, “so you have a point of comparison when the T CrB nova appears there.”

The Science Behind The Nova

T Coronae Borealis is a binary star system that consists of two stars at the end of their lives: a white dwarf star that’s exhausted its fuel and is cooling down, and a red giant star that’s cooling and expanding as it ages, expelling hydrogen as it does.

That material is gathering on the surface of the white dwarf. When it reaches a critical point, it triggers a thermonuclear explosion that causes a sudden and dramatic increase in brightness. The explosion only affects its surface, leaving the white dwarf intact, so the whole process can occur again and again, according to NASA.”

Tuesday, April 22, 2025

After Trump Spares Apple, Other Businesses Want a Tariffs Break - The New York Times

Businesses Plead for Tariff Breaks After Trump Spares iPhones

"Retail executives huddled with the president amid fears that tariffs could result in higher prices.

President Trump acknowledged last week that he had “helped” Apple, sparing iPhones from the roughly 145 percent U.S. tariff that currently applies to Chinese imports. Karsten Moran for The New York Times

When President Trump’s steep tariffs threatened to send the price of iPhones soaring, Apple’s chief executive, Tim Cook, called the White House — and soon secured a reprieve for his company and the broader electronics industry.

Almost immediately, top aides to Mr. Trump insisted they had not strayed from their promise to apply import taxes across the economy with minimal, if any, exceptions. But the carve-out still caught the attention of many businesses nationwide, igniting a fresh scramble for similar help in the throes of a global trade war.

Top lobbying groups for the agriculture, construction, manufacturing, retail and technology industries have pleaded with the White House in recent days to relax more of its tariffs, with many arguing that there are some products they must import simply because they are too expensive or impractical to produce in the United States.

On Monday, executives from retailers including Home Depot, Target and Walmart became the latest to raise their concerns directly with Mr. Trump, as the industry continues to brace for the possibility that steep taxes on imports could result in price increases for millions of American consumers.

“We had a productive meeting with President Trump and our retail peers to discuss the path forward on trade, and we remain committed to delivering value for American consumers,” a Target spokesman, Jim Joice, said in a statement.

Doug McMillon, Walmart’s chief executive, has previously acknowledged the many “variables” surrounding Mr. Trump’s tariffs and retail prices. A spokeswoman for Walmart confirmed the meeting on Monday, describing the conversation in a statement as “productive.” Other companies did not respond to requests for comment.

“The deal window may be open,” David French, the executive vice president for government relations at the National Retail Federation, said in an interview last week. He said his industry had sought an audience with Mr. Trump and his team to make the case that “the consumer is very alarmed at what they fear is on the way in terms of higher prices.”

Many businesses say they want to satisfy the president’s demands and begin producing or purchasing more of their goods domestically. But they have also tried to impress on Mr. Trump and his aides that they cannot reconfigure their complicated global supply chains overnight, especially if steep import taxes on machinery and other critical components result in substantially higher manufacturing costs.

“We are calling on the administration to scope out specific manufacturing inputs that we need, specifically to make things in America,” said Charles Crain, the managing vice president for policy at the National Association of Manufacturers, whose board of directors includes executives from Caterpillar, Dow Inc., Pfizer and Toyota.

Kip Eideberg, the senior vice president for government relations at the Association of Equipment Manufacturers, said his group “made the case to the administration that if they want to achieve their stated objective, strengthening U.S. manufacturing and bolstering our global competitiveness, then there needs to be relief.”

His association, which represents a broad swath of agricultural and construction equipment firms, has called for a “blanket, no-tariff approach to parts and components that are critical and cannot be sourced at scale anywhere else.”

Now fully enmeshed in a global trade war, Mr. Trump has sent mixed messages about what he has described as a “flexible” tariff strategy.

Last week, the president acknowledged that he had “helped” Apple at Mr. Cook’s request, sparing iPhones from the new, roughly 145 percent U.S. tariff that currently applies to Chinese imports. Speaking to reporters in the Oval Office, the president said, “I don’t want to hurt anybody.”

But the Trump administration then took the first formal steps toward unveiling specific tariffs on semiconductors, the chips that power iPhones and other computing devices, as well as the machines that help to manufacture those goods. The move suggested that any relief for Apple may ultimately prove short-lived.

Mr. Trump suggested on that same day that he could extend similar aid to automakers, who are now subject to a 25 percent tariff on cars and auto parts imported into the United States. The president acknowledged that the industry would “need a little bit of time” to begin manufacturing vehicles and components in the United States, in comments that immediately caused carmakers’ share prices to spike.

No such reprieve has been announced. But the president’s aides and advisers have privately signaled renewed openness to discussing tariff exemptions. On a few occasions over the past month, officials with the Domestic Policy Council and elsewhere in the government have asked business groups to furnish lists of materials and machinery that they cannot quickly and easily make in the United States, according to two people familiar with the matter, who requested anonymity to describe the private discussions.

“The administration maintains regular contact with business leaders, industry groups and everyday Americans about our trade and economic policies,” Kush Desai, a spokesman for the White House, said in a statement. “President Trump, however, has been clear: If you’re worried about tariffs, the solution is simple. Make your product in America.”

After a new tariff on foreign-made cars and auto parts went into effect this month, Mr. Trump suggested that he could extend aid to automakers, but his administration announced no such reprieve. Brett Carlsen for The New York Times

For now, the president and his team have focused primarily on negotiating a series of bilateral trade agreements with dozens of countries that the administration says are engaging in unfair trade practices, including by imposing tariffs and other restrictions on American goods. This month, Mr. Trump announced stiff tariffs on nearly all of America’s trading partners, including India, Italy, Japan, South Korea, Vietnam and the European Union, before pausing those levies for 90 days in order to engage in negotiations.

On Monday, Vice President JD Vance met in India with the country’s prime minister, Narendra Modi, as the White House races to try to clinch “90 deals in 90 days,” as some of Mr. Trump’s aides have said. Without a deal, India could face a 26 percent “reciprocal” tariff rate.

Even without any trade agreements in hand, Mr. Trump has highlighted his approach as a success, boasting that his policies have helped to attract trillions of dollars in private investments from companies including Apple, OpenAI and Nvidia.

“Since our announcement of LIBERATION DAY, many World Leaders and Business Executives have come to me asking for relief from Tariffs,” the president posted on Truth Social on Sunday. “It’s good to see that the World knows we are serious, because WE ARE!”

Mr. Trump added, “But for those who want the easiest path: Come to America, and build in America!”

But the reality is more complicated. Early indicators suggest that some companies have actually slowed their spending out of concern that tariffs could result in higher input prices. One survey from the Federal Reserve Bank of New York, released in April, found that manufacturing activity in the region had declined for the second consecutive month while firms generally said they expected “conditions to worsen in the months ahead.”

Some business groups have echoed those fears, warning the White House that U.S. firms may not be able to meet their own domestic investment targets if the economics worsen. These companies may not be able to create new factories and jobs, as they have promised, without stable financial markets, available labor and access to raw materials and machinery — all inputs that may be made more expensive by the president’s recent tariffs.

“From our perspective, the Trump administration’s goal is clear: to enter into trade agreements, and they’re moving at a fast pace,” said Jason Oxman, the president of the Information Technology Industry Council, whose members include Apple and Nvidia.

“But the question for the companies looking to invest in the United States is how long will their operating expenses be higher because of the tariff regime, which may reduce the available investment for capital expenditures,” Mr. Oxman added, cautioning that he was not speaking on behalf of those tech giants.

The administration did exempt some metals, including copper and zinc, as well as rare earth minerals from the reciprocal tariffs that Mr. Trump announced and suspended in early April.

But many trade experts said any breaks may only be temporary. Much as it has for semiconductors, the administration has opened an investigation to determine whether lumber imports pose a threat to national security, a precursor to Washington issuing sector-specific tariffs under a provision of law known as Section 232.

That reflected a strategic choice by the White House “to give businesses time to relocate their production back to the United States and ramp up enough capacity and production in the U.S. to meet demand,” said Nick Iacovella, the executive vice president of the Coalition for a Prosperous America, an advocacy group that supports the president’s trade policies.

“There are always going to be companies that are going to want exemptions,” Mr. Iacovella continued, adding that the administration should resist those calls because they threaten to “undermine” Mr. Trump’s objectives.

Tony Romm is a reporter covering economic policy and the Trump administration for The Times, based in Washington."


Thursday, April 17, 2025

Google Is a Monopolist in Online Advertising Tech, Judge Says - The New York Times

Google Is a Monopolist in Online Advertising Tech, Judge Says

"The ruling was the second time in a year that a federal court had found that Google had acted illegally to maintain its dominance.

The federal courthouse in Alexandria, Va., where the Google antitrust trial has been held. Pete Marovich for The New York Times

Google acted illegally to maintain a monopoly in some online advertising technology, a federal judge ruled on Thursday, adding to legal troubles that could reshape the $1.88 trillion company and alter its power over the internet.

Judge Leonie Brinkema of the U.S. District Court for the Eastern District of Virginia said in a 115-page ruling that Google had broken the law to build its dominance over the largely invisible system of technology that places advertisements on pages across the web. The Justice Department and a group of states had sued Google, arguing that its monopoly in ad technology allowed the company to charge higher prices and take a bigger portion of each sale.

“In addition to depriving rivals of the ability to compete, this exclusionary conduct substantially harmed Google’s publisher customers, the competitive process, and, ultimately, consumers of information on the open web,” said Judge Brinkema, who also dismissed one portion of the government’s case.

Google has increasingly faced a reckoning over the dominant role its products play in how people get information and conduct business online. Another federal judge ruled in August that the company had a monopoly in online search. He is now considering a request by the Justice Department to break the company up, with a three-week hearing on the matter scheduled to begin Monday.

Judge Brinkema, too, will have an opportunity to force changes to Google’s business. In its lawsuit, the Justice Department pre-emptively asked the court to force Google to sell some pieces of its ad technology business acquired over the years.

Together, the two rulings and their remedies could check Google’s influence and result in a sweeping overhaul of the company, which faces a potential major restructuring.

Google and the Department of Justice did not immediately comment.

The cases against Google are part of a growing push by regulators to rein in the power of the biggest tech companies, which shape commerce, information and communication online. The Justice Department has sued Apple, arguing that the company made it difficult for consumers to leave its tightly knit universe of devices and software. The Federal Trade Commission has sued Amazon, accusing it of squeezing small businesses, and Meta, for killing rivals when it bought Instagram and WhatsApp. The trial against Meta started this week.

President Trump has signaled that his administration will continue taking a tough stance on antitrust for the tech industry, despite efforts by tech executives to court his favor. His choices for F.T.C. chair and the Justice Department’s top antitrust role have said they intend to look closely at the power that tech companies have over online discourse. The Google search case was brought under his first administration.

The ad tech case — U.S. et al. v. Google — was filed in 2023 and concerns an intricate web of programs that sell ad space around the web, like on a news site or a recipes page. The suite of software, which includes Google Ad Manager, conducts split-second auctions to place ads each time a user loads a page. That business generated $31 billion in 2023, or about a 10th of the overall revenue for Google’s parent company, Alphabet.
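As a rough sketch of the mechanics only (not Google's actual auction logic, which is proprietary and has changed over the years): ad exchanges commonly run sealed-bid auctions in which the highest bidder wins the impression and, under second-price rules, pays the runner-up's bid. All names and figures below are hypothetical.

def run_ad_auction(bids):
    """Sealed-bid, second-price auction: the highest bidder wins and
    pays the second-highest bid (a common exchange design)."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]  # runner-up's bid sets the price
    return winner, clearing_price

# Hypothetical bids (dollars CPM) arriving while a page loads.
bids = {"advertiser_a": 4.10, "advertiser_b": 3.75, "advertiser_c": 2.40}
winner, price = run_ad_auction(bids)
print(f"{winner} wins and pays ${price:.2f} CPM")

The government's complaint turns on who controls the software running these auctions on both the buy and sell sides, and what share of each transaction the intermediary keeps.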

Part of that business stems from the acquisition of DoubleClick, an advertising software company, for $3.1 billion in 2008. Google now has an 87 percent market share in ad-selling technology, according to the government.

The government argued during a three-week trial in September that Google had a monopoly over multiple pieces of technology that are used to conduct these transactions. The company locked publishers into using its software, and was able to take more money off the top of each transaction because of its dominance, the government said.

That hurt websites that produce content and make it available online free of charge, the government said.

For years, groups representing news organizations, including The New York Times, have argued that the dominance of major tech platforms undermines the media industry. During the trial, the government called witnesses who had worked for publishers including Gannett and News Corp and for ad agencies that buy space online.

“These are the markets that make the free and open internet possible,” said Aaron Teitelbaum, a Justice Department lawyer, during closing arguments in November.

Google countered that it faced competition not just from other ad tech companies but from social networks like TikTok and streaming platforms. In response to the government’s arguments that it had built its ad tech products to work better together, Google’s lawyers argued that its case was bolstered by a 2004 Supreme Court decision that protects a company’s right to choose with whom it does and does not work.

“Google’s conduct is a story of innovation in response to competition,” Karen Dunn, Google’s lead lawyer, said in her closing argument.

David McCabe is a Times reporter who covers the complex legal and policy issues created by the digital economy and new technologies."

