Microsoft’s Strategic Shift in Data Center Expansion Raises Investor Concerns

Microsoft’s aggressive push into AI and cloud infrastructure has recently defined its growth strategy. Still, fresh reports suggest the company is now taking a more measured approach to its data center expansion. According to TD Cowen analysts, Microsoft has scrapped leases for several hundred megawatts of data center capacity in the U.S., a move that has caught investors’ attention and raised questions about whether the AI boom is hitting a slowdown.

The decision comes despite Microsoft’s commitment to investing over $80 billion in AI and cloud capacity this fiscal year. A company spokesperson acknowledged the adjustments but emphasized that Microsoft is still growing “strongly in all regions” and is simply pacing its infrastructure investments strategically.

Market Reaction and Investor Anxiety

While Microsoft’s stock remained largely unaffected, dipping only 1% on Monday, the ripple effect was felt across industries linked to data centers. Siemens Energy dropped 7%, Schneider Electric fell 4%, and U.S. power providers Constellation Energy and Vistra saw declines of 5.9% and 5.1%, respectively. The selloff extended to broader tech stocks, adding to growing market unease over whether the billions being poured into AI infrastructure will yield the expected returns.

Adding to the uncertainty is China’s rising competition in AI development. Chinese startup DeepSeek has showcased AI models at significantly lower costs than its Western counterparts, fueling concerns that companies like Microsoft may need to rethink their infrastructure spending to remain competitive.

A Sign of Oversupply or Just Smart Business?

Microsoft’s decision to pause or cancel leases could indicate a correction after years of rapid expansion. The company and rivals like Meta have been aggressively building data centers to support the surge in AI demand. However, as analysts point out, scaling AI infrastructure is costly, and companies are now balancing growth with financial sustainability.

Bernstein analyst Mark Moerdler noted that the move could suggest a cooling in AI demand, especially following weaker-than-expected earnings from major cloud providers. However, not everyone is convinced this is a warning sign. Some industry experts argue that Microsoft is refining its strategy, ensuring it doesn’t overextend resources in a rapidly evolving market.

Whatever the case, this latest shift underscores a key reality: Even the biggest AI players are navigating a complex and uncertain landscape. The race to build next-generation AI systems isn’t just about who spends the most—it’s about who spends wisely.

Read More: Apple Launches iPhone 16e in China to Compete with Local Brands

Meta Faces Legal Battle Over AI Training with Copyrighted Content

Meta is under intense scrutiny after newly unsealed court documents revealed internal discussions about using copyrighted content, including pirated books, to train its AI models. The revelations, part of the Kadrey v. Meta lawsuit, shed light on how Meta employees weighed the legal risks of using unlicensed data while attempting to keep pace with AI competitors.

Internal Deliberations Over Copyrighted Content

Court documents show that Meta employees debated whether to train AI models on copyrighted materials without explicit permission. In internal work chats, staff discussed acquiring copyrighted books without licensing deals and escalating the decision to company executives.

In a chat dated February 2023, Meta research engineer Xavier Martinet suggested an “ask forgiveness, not for permission” approach, according to the filings, stating:

“[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”

He further argued that negotiating deals with publishers was inefficient and that competitors were likely already using pirated data.

“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”

Meta’s AI leadership acknowledged that licenses were needed for publicly available data, but employees noted that the company’s legal team was becoming more flexible on approving training data sources.

Talks of Libgen and Legal Risks

The filings reveal that Meta employees discussed using Libgen, a site known for providing unauthorized access to copyrighted books. In one chat, Melanie Kambadur, a senior manager for Meta’s Llama model research team, suggested using Libgen as an alternative to licensed datasets.

According to the filings, in one conversation Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” emphasizing that without it, Meta’s AI models might fall behind state-of-the-art (SOTA) benchmarks.

Theakanath also proposed strategies to mitigate legal risks, including removing data from Libgen that was “clearly marked as pirated/stolen” and ensuring that Meta would not publicly cite its use of the dataset.

“We would not disclose use of Libgen datasets used to train,” he wrote in an internal email to Meta AI VP Joelle Pineau.

Further discussions among Meta employees suggested that the company attempted to filter out risky content from Libgen files by searching for terms like “stolen” or “pirated” while still leveraging the remaining data for AI training.

Despite concerns raised by some staff, including a Google search result stating “No, Libgen is not legal,” discussions about utilizing the platform continued internally.

Meta’s AI Data Sources and Training Strategies

Additional filings suggest that Meta explored scraping Reddit data using techniques similar to those employed by a third-party service, Pushshift. There were also discussions about revisiting past decisions not to use Quora content, scientific articles, and licensed books. In a March 2024 chat, Chaya Nayak, director of product management for Meta’s generative AI division, indicated that leadership was considering overriding prior restrictions on training sets.

She emphasized the need for more diverse data sources, stating: “[W]e need more data.” Meta’s AI team also worked on tuning models to avoid reproducing copyrighted content, blocking responses to direct requests for protected materials and preventing AI from revealing its training data sources.

Legal and Industry Implications

The plaintiffs in Kadrey v. Meta have amended their lawsuit multiple times since filing in 2023 in the U.S. District Court for the Northern District of California. The latest claims allege that Meta not only used pirated data but also cross-referenced copyrighted books with available licensed versions to determine whether to pursue publishing agreements.

In response to the growing legal pressure, Meta has strengthened its legal defense by adding two Supreme Court litigators from the law firm Paul Weiss to its team. Meta has not yet publicly addressed these latest allegations. However, the case highlights the ongoing conflict between AI companies’ need for massive datasets and the legal protections surrounding intellectual property. The outcome could set a major precedent for how AI companies train models and navigate copyright laws in the future.

Read More: Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals

Musk’s Legal Battle with OpenAI May Head to Trial, Judge Rules

A federal judge in California has ruled that portions of Elon Musk’s lawsuit against OpenAI, which seeks to block the company’s conversion to a for-profit entity, will proceed to trial. On Tuesday, the judge also said the Tesla CEO will appear in court to testify.

“Something is going to trial in this case,” U.S. District Judge Yvonne Gonzalez Rogers said during an early court session in Oakland, California.

Musk will take the stand and present his case to a jury, which will decide who is in the right.

Rogers was weighing Musk’s recent request for a preliminary injunction to block OpenAI’s conversion ahead of trial, the latest move in an increasingly public battle between the world’s richest person and OpenAI CEO Sam Altman.

The last time Rogers granted a preliminary injunction was in Epic Games’ case against Apple in May 2021.

Musk co-founded OpenAI with Altman in 2015 but left before the company took off, and launched the competing AI startup xAI in 2023. OpenAI is now shifting from a nonprofit to a for-profit entity, a restructuring it says is needed to secure the revenue required to develop the best AI models.

Last year, Musk sued OpenAI and Sam Altman, alleging that OpenAI’s founders approached him to fund nonprofit AI development for the benefit of humanity but have since focused on making money. He later expanded the case to include federal antitrust claims, and in December asked the judge presiding over the lawsuit to bar OpenAI from transitioning into a for-profit.

In response, OpenAI has argued that Musk’s claims should be dismissed and that he “should be competing in the market rather than the courtroom.”

The stakes of OpenAI’s transition have risen: its most recent fundraising round of around $6.6 billion, and a new round of up to $25 billion under discussion with SoftBank, are reportedly conditioned on the company restructuring to remove the nonprofit entity’s control.

Such a restructuring would be unusual, said Rose Chan Loui, executive director of the UCLA Law Center for Philanthropy and Nonprofits. Conversions of nonprofits to for-profits have historically involved healthcare organizations such as hospitals, not venture capital-backed companies, she said.

Read More: OpenAI Seals Partnership with Kakao