Meta’s Expansion of Anti-Fraud Facial Recognition Tool in UK, a Security Measure or Privacy Risk?

Meta has, once again, stepped into the fraught realm of facial recognition, a technology that has brought it no shortage of controversy. After years of bruising regulation and billion-dollar settlements, the tech giant is taking an AI-powered route to add facial recognition back into its suite of tools. This time the pitch is reducing online scams and account takeovers, but is it really about user protection, or a strategic move to usher facial recognition back into public view under a different, more attractive guise? As Meta brings the anti-fraud tool to the UK, questions of privacy, security, and corporate responsibility once again put users in the spotlight.

Meta launched two new AI-powered features in October designed to combat celebrity impersonation scams and to help users recover hacked Facebook and Instagram accounts. An initial trial covered markets outside the UK, but the company has now expanded the test to the UK after engaging with regulators for some time and securing their approval. Meta is also extending the “celeb bait” protection, which is meant to stop scammers from exploiting the names and images of public figures, to an even larger audience in countries where it was previously available. I guess it’s all fun and games until Meta’s facial recognition mistakes you for a celebrity and starts flagging your selfies.

Regulatory Hurdles and EU’s Future:

Meta’s choice to extend these technologies to the United Kingdom comes as the regulatory environment there grows more welcoming to AI-driven innovation. The company has not yet committed to unveiling the facial recognition feature in the EU, another key jurisdiction with a rigid focus on data protection. Given the strict approach the EU has taken to biometric data under the General Data Protection Regulation (GDPR), any further expansion of the test there would face an additional layer of scrutiny.

Meta said, “In the coming weeks, public figures in the UK will start seeing in-app notifications letting them know they can now opt-in to receive the celeb-bait protection with facial recognition technology.” Participation in this feature, as well as the new “video selfie verification” option available to all users, will be entirely optional.

Meta’s AI Strategy and History with Facial Recognition:

Meta maintains that these facial recognition tools exist strictly to combat fraud and secure user accounts, yet it has a long and mostly disreputable history of feeding user data into its AI models. Meta says its new facial recognition tool is for security, because obviously that’s the first thing we think of when we hear ‘Meta’ and ‘privacy’ in the same sentence. The company’s reputation has bred trust issues: it promises to delete facial data immediately after use, just like it promised to protect user privacy before, right? First they took our data, now they want our faces. What’s next, a Meta DNA test?

In October 2024, when these tools were launched, the company assured users that any facial data used for fraud detection would be deleted immediately after a one-time comparison, with no possibility for its use in other AI training. Monika Bickert, Meta’s VP of Content Policy, wrote in a post,

“We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose.”
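The process Meta describes, a one-time comparison followed by immediate deletion, can be sketched roughly as follows. This is a hypothetical illustration, not Meta’s actual implementation: the function names, the cosine-similarity matching, and the threshold are all assumptions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def check_celeb_bait(ad_embedding: list[float],
                     reference_embedding: list[float],
                     threshold: float = 0.9) -> bool:
    """One-time comparison: flag the ad if the face in it matches the
    public figure's reference, then discard the data either way.
    (In a real system, stored copies would also need secure deletion.)"""
    is_match = cosine_similarity(ad_embedding, reference_embedding) >= threshold
    del ad_embedding, reference_embedding  # facial data not retained here
    return is_match
```

The key policy point is that the match result, a single boolean, is all that survives the comparison; the biometric data itself is not kept or reused for training.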

The deployment comes as Meta aggressively embeds AI across its operations. The company is building its own large language models, investing heavily in AI-powered product improvements, and reportedly working on a standalone AI app. In parallel, Meta has stepped up its advocacy around AI regulation and embraced an image of responsibility.

Addressing Criticism of the Past:

Given its track record, Meta has every incentive to frame facial recognition as a security measure, a step toward repairing the company’s image. For years, the company has been criticized for making it easy for fraudsters to run scam ads on its advertising platform, many of them misappropriating images of celebrities to promote dubious crypto investments and other schemes. Framing these new tools as solutions to such problems may soften public perception of facial recognition technology.

Facial recognition is a very sensitive area for the company. Last year, Meta agreed to pay an enormous $1.4 billion to settle a Texas lawsuit over allegations of unlawful biometric data collection. Before that, Facebook shut down its decade-old photo-tagging facial recognition system in 2021 under strong legal and regulatory pressure. While Meta discontinued that tool, it has held on to its DeepFace model, which has now resurfaced in its latest offerings.

Meta’s Facial Recognition, a Thin Line between Security and Surveillance:

Meta’s facial recognition highlights the thin line between technological innovation and invasion of privacy. Reducing fraud and improving account security sound good, but they raise the larger question of biometric data collection. With a not-so-glamorous history of biometric data handling and billion-dollar settlements to match, Meta looks like a tech giant that has always tested the limits of its users’ trust. Deleting facial profiles right after collection sounds good, but who are we kidding? If history teaches us anything, Meta’s ambitions almost always reach well beyond its upfront promises.

Facial recognition might serve a purpose in fraud detection, but it can also serve mass surveillance, with real potential for abuse under weak regulatory oversight. The balance between security and privacy is fragile, and history shows that once a data collection method proves effective, it is rarely confined to its initial purpose: companies in possession of personal data have repeatedly misused it or expanded its use far beyond the original intent.

With governments only beginning to engage in this area and regulatory bodies under question, users must stay alert and demand accountability and transparency before accepting yet another layer of AI-based control. If accepted as the new norm, facial recognition tools for preventing fraud could be the next step in Meta’s rehabilitation, or just another entry in its long history of AI-related controversy. As AI advances deeper into our lives, the need for stronger safeguards and mandatory rules is more pressing than ever.

Meta Fires 20 Employees for Leaking Confidential Information

According to The Verge, Meta has terminated approximately 20 employees for leaking confidential company information. The tech giant confirmed that the crackdown is part of its commitment to protecting sensitive data, particularly as leaks of internal meetings and upcoming product plans have increased in recent months. A Meta spokesperson stated, “We tell employees when they join the company, and we offer periodic reminders, that it is against our policies to leak internal information, no matter the intent.” The company added that additional terminations are expected as investigations continue.

The decision follows a series of news reports revealing details from Meta’s private discussions, including an all-hands meeting led by CEO Mark Zuckerberg. In response, the company has ramped up efforts to identify and take action against employees responsible for leaks. Meta’s Chief Technology Officer, Andrew Bosworth, reportedly warned staff that the company was close to identifying the culprits. Ironically, even his warning was leaked, underscoring Meta’s challenge in curbing unauthorized disclosures.

Meta’s History of Internal Leak Issues

This is not the first time Meta has cracked down on leaks. The company has faced scrutiny in the past over leaked internal documents related to privacy concerns, content moderation policies, and AI development strategies. As Meta continues to navigate growing competition and regulatory challenges, securing proprietary information has become a top priority. The recent terminations signal a stricter approach to handling internal breaches and protecting corporate secrets.

Read More: Meta Gears Up to Launch Standalone AI Chatbot to Challenge ChatGPT & Gemini

Meta Gears Up to Launch Standalone AI Chatbot to Challenge ChatGPT & Gemini

The rapidly evolving artificial intelligence landscape has placed tech giants in an all-out war to dominate the AI-driven chatbot space. While Meta has often changed the game in digital communications, it is now aiming at a loftier challenge: an AI chatbot experience independent of its parent applications. With OpenAI’s ChatGPT and Google’s Gemini both commanding formidable attention, Meta is preparing to launch a standalone app for its Meta AI assistant, a move that not only extends the technology but represents a strategic bid to carve out its position in the AI arena. Will this mark Meta’s own paradigm shift toward the future of AI engagement?

Anticipated Timeline for Launch:

Reports say that Meta is working on a standalone application for its AI assistant, Meta AI. The new app is expected to push Meta further into competition with AI-driven chatbots such as OpenAI’s ChatGPT and Google Gemini. According to a report from CNBC, Meta could roll out the standalone Meta AI app sometime in the next quarter (April to June). As of now, Meta AI is embedded in various services within the Meta ecosystem, including Facebook, WhatsApp, and dedicated websites. A separate app would give Meta’s AI more visibility in the competitive AI chatbot space.

Paid Subscription Model and Investment in AI:

Meta is also exploring a paid subscription plan for Meta AI that would offer additional capabilities to premium members. CNBC’s sources said the pricing and the exact premium features remain unclear. With over 700 million monthly active users already, Meta AI constitutes a major component of Meta’s overall AI strategy. Beyond the chatbot application, Meta has been investing in open-source AI development, mainly through its Llama models, as it chases OpenAI, the company that has been shaping the AI industry.

Anticipated AI-Centric Development:

Alongside all this activity, Meta is launching its first AI developer conference, LlamaCon, in late April. The event is meant to showcase the latest developments in Meta’s AI technologies and support the developer community. These moves reflect Meta’s ambition to become one of the leaders of the AI industry, pairing proprietary and open-source efforts toward the future of AI applications. A standalone chatbot app could sharpen Meta’s competitive position as it continues its push into AI. By combining its massive user base with next-generation AI technology, Meta could reshape the contours of the chatbot market and make AI interactions more seamless. Yet, facing serious competition from incumbents OpenAI and Google, the app’s success will depend on how well it delivers innovation, simplicity, and utility for users.

Read More: Meta May Launch a Separate Video App for Instagram Reels

Meta May Launch a Separate Video App for Instagram Reels

Meta has long been chasing dominance in the short-form video space. From acquiring Instagram to launching Lasso in 2018—a failed attempt to rival TikTok—the company has explored multiple ways to keep users engaged. Now, sources suggest Instagram is considering turning Reels into a standalone app, signalling a renewed push to capture the short-video market. Unlike its past experiments, the landscape is shifting in Meta’s favour this time. With TikTok’s uncertain regulatory status in the U.S., Meta sees a potential opportunity to attract users looking for alternatives. While Reels is integrated into Instagram, a dedicated app would allow for deeper monetisation, better content discovery, and an experience tailored exclusively to short-form video creators.

Meta has already shown interest in expanding its video ecosystem. In January, the company launched Edits, a video-editing app designed to compete with CapCut, the tool owned by TikTok’s parent company ByteDance. A separate Reels app could follow a similar strategy—offering users an independent space for content creation and engagement while still being linked to Instagram’s massive audience. While Meta has not officially confirmed the move, the company has consistently adapted its products based on shifting user behaviour and competitor threats. Whether this is a preemptive strike against TikTok’s dominance or a new approach to boosting Instagram’s engagement, separating Reels could mark a significant shift in the short-video industry.

Read More: Alibaba Goes All-In on Open-Source AI With Wan 2.1 Release

Meta’s Oversight Board to Assess Hate Speech Policy Changes

Meta, the parent company of Facebook, Instagram, and Threads, has long faced scrutiny over its content moderation policies, especially when it comes to hate speech and misinformation. Over the years, the company has tightened and loosened its regulations in response to public pressure, political discourse, and regulatory scrutiny. Now, Meta’s Oversight Board is preparing to review the company’s recent changes to its hate speech policies on Facebook, Instagram, and Threads, marking a critical moment for content moderation on Meta’s platforms.

In January 2025, CEO Mark Zuckerberg introduced a policy shift aimed at allowing more expression on Meta-owned platforms. The update included rolling back certain protections for immigrants and LGBTQ users, a move that has sparked debate over free speech versus platform safety.

The Oversight Board, an independent body established to review Meta’s policy decisions, has taken notice. It currently has four open cases related to hate speech and will use these cases to assess the impact of the company’s updated guidelines. According to a report by Engadget, the board’s decision could influence how Meta refines its content moderation approach moving forward.

Meta has a mixed record when it comes to adopting the Oversight Board’s recommendations. While the company is required to follow the board’s rulings on individual content cases, it has a limited obligation to make broader policy adjustments. This review will test whether Meta is willing to reevaluate its moderation approach or continue with its more lenient stance on content restrictions.

With misinformation and online harassment on the rise and the political climate intensifying, the outcome of this review could influence how Meta shapes content regulation in the future. Whether the Oversight Board’s findings will result in actual policy changes remains to be seen.

Read More: Amazon Unveils Alexa+ AI Assistant to Revolutionize Smart Living

Meta Reportedly Planning $200 Billion AI Data Center Expansion Amid Growing Infrastructure Race

Meta is reportedly exploring a massive $200 billion investment in a next-generation AI data centre campus, signalling an aggressive push into artificial intelligence infrastructure. According to a report from The Information, Meta executives have been in discussions with data centre developers and have scouted potential locations in Louisiana, Wyoming, and Texas as part of the early planning stages.

However, a Meta spokesperson denied the report, stating that the company’s capital expenditure plans have already been disclosed, and anything beyond that is “pure speculation.” Despite this, industry analysts believe that such an expansion aligns with Meta’s growing AI ambitions, particularly after CEO Mark Zuckerberg confirmed last month that the company intends to spend up to $65 billion in 2025 to expand its AI infrastructure.

Tech Giants in a Race for AI Dominance

If the reported $200 billion project moves forward, it would dwarf Meta’s previous spending and position the company as a dominant player in the AI infrastructure race. Tech giants like Microsoft and Amazon are also ramping up their AI investments, with Microsoft planning an $80 billion investment in data centres for fiscal 2025 and Amazon expecting to surpass its $75 billion infrastructure spending from 2024.

Since the launch of ChatGPT in 2022, the AI sector has seen an unprecedented surge in investment, with companies across industries rushing to integrate AI-driven capabilities into their products and services.

Meta’s AI Ambitions and the Future of AI Computing

As Meta expands its AI and metaverse initiatives, its potential data center expansion could be critical to supporting its long-term artificial intelligence and machine learning advancements. Although official confirmation of the $200 billion project remains uncertain, Meta’s increasing AI infrastructure investments signal a fierce competition among tech giants to dominate the next era of AI-powered computing. Whether this rumored mega-campus materializes or not, the race to build the most advanced AI data centers is only intensifying.

Read More: After R1’s Success, DeepSeek Fast-Tracks Launch of New AI Model R2

WhatsApp to Introduce Viewer Count for Channel Updates on Web Client

Following recent enhancements focused on improving user engagement and content interaction, WhatsApp is now actively working on a new feature for its web client. This upcoming addition will enable channel administrators to see the exact number of viewers for individual channel updates, according to insights shared by WABetaInfo.

Initially announced as part of the Android beta update (version 2.23.24.15), the “Channel update viewers” feature aims to deliver transparent analytics directly within the WhatsApp interface. Specifically, viewer counts will be conveniently displayed within the message bubbles of each channel update. This approach allows channel admins and potential followers to gauge the reach and effectiveness of each post quickly.


Notably, the viewer metric will include views from followers and users who discovered the update through searches but haven’t followed the channel yet. This broader measurement gives admins a more accurate picture of their total audience and reach, helping them refine their future content strategies based on real engagement data.

This feature could significantly benefit businesses, marketers, and organizations using WhatsApp communication channels. With accurate viewership data, they’ll be better positioned to understand audience interests, adjust their messaging, and enhance overall engagement.

Privacy remains a cornerstone of WhatsApp’s strategy; this new feature aligns with that philosophy. WhatsApp will display only the total view count per update, explicitly ensuring the confidentiality of individual users. Viewers’ names or phone numbers will not be revealed, preserving user privacy while offering essential insights to content creators.

The exact rollout specifics remain undecided as WhatsApp continues exploring whether to restrict viewership metrics solely to channel admins or extend access to regular followers. However, extending this information to all users seems plausible since followers might also find value in these insights to gauge a channel’s popularity or credibility.

WhatsApp Web users can expect this to be a highly anticipated feature in future updates. More detailed information will become available as the feature moves closer to a public release. Incorporating a viewer count enriches user experience and positions WhatsApp Channels as a powerful tool for community building, content creation, and business communication, strengthening WhatsApp’s ecosystem across platforms.

Read More: WhatsApp Rolls Out Permanent Chat List Filters for Easier Navigation

Meta Faces Legal Battle Over AI Training with Copyrighted Content

Meta is under intense scrutiny after newly unsealed court documents revealed internal discussions about using copyrighted content, including pirated books, to train its AI models. The revelations, part of the Kadrey v. Meta lawsuit, shed light on how Meta employees weighed the legal risks of using unlicensed data while attempting to keep pace with AI competitors.

Internal Deliberations Over Copyrighted Content

Court documents show that Meta employees debated whether to train AI models on copyrighted materials without explicit permission. In internal work chats, staff discussed acquiring copyrighted books without licensing deals and escalating the decision to company executives.

Meta research engineer Xavier Martinet suggested an “ask forgiveness, not permission” approach in a chat dated February 2023, according to the filings, stating:

“[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”

He further argued that negotiating deals with publishers was inefficient and that competitors were likely already using pirated data.

“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”

Meta’s AI leadership acknowledged that licenses were needed for publicly available data, but employees noted that the company’s legal team was becoming more flexible on approving training data sources.

Talks of Libgen and Legal Risks

The filings reveal that Meta employees discussed using Libgen, a site known for providing unauthorized access to copyrighted books. In one chat, Melanie Kambadur, a senior manager on Meta’s Llama model research team, suggested using Libgen as an alternative to licensed datasets.

According to the filings, in one conversation Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” emphasizing that without it, Meta’s AI models might fall behind state-of-the-art (SOTA) benchmarks.

Theakanath also proposed strategies to mitigate legal risks, including removing data from Libgen that was “clearly marked as pirated/stolen” and ensuring that Meta would not publicly cite its use of the dataset.

“We would not disclose use of Libgen datasets used to train,” he wrote in an internal email to Meta AI VP Joelle Pineau.

Further discussions among Meta employees suggested that the company attempted to filter out risky content from Libgen files by searching for terms like “stolen” or “pirated” while still leveraging the remaining data for AI training.

Despite concerns raised by some staff, including a Google search result stating “No, Libgen is not legal,” discussions about utilizing the platform continued internally.

Meta’s AI Data Sources and Training Strategies

Additional filings suggest that Meta explored scraping Reddit data using techniques similar to those employed by a third-party service, Pushshift. There were also discussions about revisiting past decisions not to use Quora content, scientific articles, and licensed books. In a March 2024 chat, Chaya Nayak, director of product management for Meta’s generative AI division, indicated that leadership was considering overriding prior restrictions on training sets.

She emphasized the need for more diverse data sources, stating: “[W]e need more data.” Meta’s AI team also worked on tuning models to avoid reproducing copyrighted content, blocking responses to direct requests for protected materials and preventing AI from revealing its training data sources.

Legal and Industry Implications

The plaintiffs in Kadrey v. Meta have amended their lawsuit multiple times since filing in 2023 in the U.S. District Court for the Northern District of California. The latest claims allege that Meta not only used pirated data but also cross-referenced copyrighted books with available licensed versions to determine whether to pursue publishing agreements.

In response to the growing legal pressure, Meta has strengthened its legal defense by adding two Supreme Court litigators from the law firm Paul Weiss to its team. Meta has not yet publicly addressed these latest allegations. However, the case highlights the ongoing conflict between AI companies’ need for massive datasets and the legal protections surrounding intellectual property. The outcome could set a major precedent for how AI companies train models and navigate copyright laws in the future.

Read More: Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals

Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals

A recent study by the German digital rights organization Eko has revealed that Meta and X (formerly Twitter) approved advertisements containing violent anti-Muslim and antisemitic hate speech ahead of Germany’s federal election on February 23, 2025. These findings raise significant concerns about the platforms’ content moderation practices and their potential impact on the electoral process.

Eko’s investigation involved submitting deliberately harmful political ads to Meta and X to assess their ad approval systems. Alarmingly, X approved all ten of the submitted hate speech ads, while Meta approved five of the ten, despite both companies’ policies prohibiting such content. Some ads featured AI-generated imagery depicting hateful narratives without disclosing their artificial origin. Meta’s policies require such disclosures for ads about social issues, elections, or politics, yet half of these undisclosed AI-generated ads were still approved.

Elon Musk’s Involvement in German Politics

In addition to platform-specific issues, Elon Musk, the owner of X, has actively engaged in Germany’s political discourse. In December 2024, Musk tweeted, ‘Only the AfD can save Germany,’ expressing support for the far-right Alternative für Deutschland (AfD) party. He also hosted a live stream with AfD leader Alice Weidel on X, providing the party with a significant platform during the election period.

The Digital Services Act and EU Investigations

In addition, Meta failed to enforce its own AI content policies. Some of the submitted ads contained AI-generated imagery depicting hateful narratives, yet Meta approved half of these without requiring disclosure that AI was used—a direct contradiction to its policy mandating transparency for AI-generated political content.

“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” Eko stated.

Eko has submitted its findings to the European Commission, which oversees the DSA’s enforcement. The organization argues that neither Meta nor X fully complies with the act’s hate speech and ad transparency provisions. This aligns with Eko’s prior investigation in 2023, which similarly found Meta approving harmful ads despite the DSA’s impending implementation.

“Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” an Eko spokesperson said. The statement points to Meta’s recent decisions to scale back its fact-checking and moderation policies, which they argue could place the company in direct violation of the DSA.

Potential Penalties Under the DSA

Violations of the DSA could lead to significant penalties, including fines of up to 6% of a company’s global annual revenue. If systemic non-compliance is proven, regulators could even impose temporary access restrictions on platforms within the EU. However, the EU has yet to finalize its decisions on Meta and X, leaving the possibility of enforcement actions uncertain.
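To give a sense of scale, the DSA’s maximum fine of 6% of global annual revenue can be computed directly. The revenue figure below is an outside approximation for illustration only; it does not come from this article.

```python
# DSA penalty ceiling: up to 6% of a company's global annual revenue.
DSA_MAX_FINE_RATE = 0.06

def max_dsa_fine(global_annual_revenue_usd: float) -> float:
    """Upper bound on a DSA fine for a given global annual revenue."""
    return global_annual_revenue_usd * DSA_MAX_FINE_RATE

# Illustrative assumption: a company with ~$165 billion in annual revenue
# would face a ceiling of roughly $9.9 billion.
approx_revenue = 165e9
print(max_dsa_fine(approx_revenue))
```

Even as an upper bound that regulators rarely reach, the figure shows why DSA compliance is a board-level concern for platforms of this size.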

Civil Society Organizations Raise Alarm Over Election Security

With Germany’s election imminent, digital rights groups warn that the DSA has not provided adequate protection against tech-driven election manipulation. A separate study from Global Witness found that algorithmic feeds on X and TikTok favor AfD content over other political parties. Researchers have also accused X of limiting data access, preventing independent studies on election-related misinformation—despite the DSA requiring platform transparency.

“Big Tech will not clean up its platforms voluntarily,” Eko’s spokesperson stated. “Regulators must take strong action—both in enforcing the DSA and implementing pre-election mitigation measures.”

Will Regulators Step In Before the Election?

As German voters prepare to go to the polls, pressure is mounting on EU regulators to act swiftly to prevent further disinformation and hate speech from spreading online. Despite calls for intervention, neither Meta nor X has publicly responded to Eko’s latest findings. With election integrity at stake, the question remains: Will Meta and X adjust their policies in response to regulatory pressure, or will the EU take more decisive action to enforce compliance?

Read More: Meta Rolls Out Community Notes on Facebook, Instagram, and Threads

Meta Rolls Out Community Notes on Facebook, Instagram, and Threads

Meta has officially launched Community Notes, a feature enabling users to provide context for potentially misleading Facebook, Instagram, and Threads posts. This represents a significant shift in content moderation on Meta’s platforms, moving from traditional fact-checking to a community-driven approach.

A New Way to Add Context to Social Media Posts

Community Notes allows users to submit succinct explanations of posts that need extra context or clarification. These notes are capped at 500 characters and must include a source link to validate the information provided. A diverse group of reviewers assesses the notes, ensuring that only balanced and widely agreed-upon explanations are made public. This process aims to combat misinformation while fostering an open exchange of ideas.

To participate, users must meet specific criteria, including being 18 years or older, based in the United States, and having an account in good standing for at least six months. Meta currently accepts sign-ups for contributors, with plans to expand the program.

How Community Notes Work

The Community Notes system enables contributors to submit short contextual explanations on posts needing further clarification. To maintain quality and neutrality, the notes must follow strict guidelines:

  • Character Limit: Each note is limited to 500 characters, ensuring concise and relevant information.
  • Source Requirement: Every note must include a supporting link to a credible source, preventing opinion-based moderation.
  • Diverse Agreement Model: For a note to be approved and published, it must receive agreement from contributors with different perspectives, ensuring a balanced viewpoint.

Once approved, Community Notes will appear publicly alongside the post, helping users better understand the content context without censorship or direct platform intervention.
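The guidelines above can be sketched as simple validation and approval checks. This is a hypothetical illustration of the stated rules, not Meta’s actual system: the function names, the URL check, and the two-group agreement threshold are assumptions.

```python
from urllib.parse import urlparse

MAX_NOTE_CHARS = 500  # notes are capped at 500 characters

def is_valid_note(text: str, source_url: str) -> bool:
    """Check a draft note against the stated submission guidelines:
    non-empty, within the character limit, and backed by a source link."""
    if not text or len(text) > MAX_NOTE_CHARS:
        return False
    parsed = urlparse(source_url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

def is_publishable(ratings: dict[str, bool]) -> bool:
    """Diverse-agreement model: publish only if contributors from
    different perspective groups (assumed: at least two) rate the
    note as helpful."""
    agreeing_groups = {group for group, helpful in ratings.items() if helpful}
    return len(agreeing_groups) >= 2
```

The point of the agreement check is that raw vote counts within one camp are not enough; a note needs support across perspectives before it appears publicly.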

Meta’s Shift Away from Traditional Fact-Checking

The introduction of Community Notes coincides with Meta’s shift away from third-party fact-checking in the United States. Previously, the company collaborated with external organizations to verify information, a system that often resulted in content restrictions and allegations of over-censorship. By adopting a community-moderated model, Meta seeks to enhance transparency and minimize bias in how information is evaluated on its platforms.

According to Meta, this shift is part of an effort to give users greater control over content moderation while ensuring that important context is provided without suppressing speech. The program’s success will largely depend on effectively preventing misinformation while maintaining fair and neutral content moderation.

The Future of Community Notes on Meta’s Platforms

Community Notes is currently available only in the United States but is expected to expand soon. Meta will monitor its effectiveness and adjust its strategy based on user feedback and the platform’s impact. This change could redefine how misleading content is addressed on social media, establishing a new standard for community-led moderation. As Meta evolves its content oversight, the launch of Community Notes marks a significant shift in how information is verified and contextualized across Facebook, Instagram, and Threads.

Read More: Meta Launches Project Waterworth, World’s Longest Undersea Cable that Bridges Continents

Meta’s Cost-Cutting: Fewer Stock Options, Bigger Executive Bonuses

For years, the promise of stock options, an employment perk in themselves, made tech employees hope against hope that their salaries would one day convert into millions, until the market, or in this case, their own company, decided otherwise. Meta, with its stock trading at record highs, has now cut its employees’ equity compensation by 10%. The irony? Stock awards shrank for rank-and-file workers while executives received inflated bonuses. It’s like watching someone put down their cake and give you half their piece while helping themselves to an extra slice on the side.

The Financial Times reports that tens of thousands of Meta Platforms employees could see a 10% cut in their annual stock awards even as the stock hits record highs this month. Each year, Meta offers employees equity refreshers, which make up a major part of their total compensation alongside base salaries and bonuses. These stock grants vest every three months over four years. Most employees have been told they will get around 10% less equity this year, with the exact percentage reportedly depending on location and organizational hierarchy.

Increased Bonuses and Workforce Adjustments:

Even as equity awards shrink for the broad workforce, executive bonuses are growing. According to a recent company filing, the target executive bonus has been raised from 75% to 200% of base salary, though the new bonuses will not be offered to Meta’s CEO, Mark Zuckerberg.

The move follows reports that Meta will terminate almost 5% of its “lowest-performing” employees and refill the open positions later in the year. Moreover, Zuckerberg has noted that he might eliminate even more jobs, emphasizing that elevating performance standards is the company’s foremost aim.

Meta’s Stock Market:

Meta’s stock has been on a run since January 17, when the U.S. Supreme Court upheld the TikTok ban as the law crawled toward its enforcement date. Investor confidence strengthened in January when Mark Zuckerberg announced that Meta plans to spend up to $65 billion this year building out its artificial intelligence infrastructure.

Still, Meta’s shares declined 1.3% to $694.80 last Thursday. Its fourth-quarter earnings report in late January came in above Wall Street estimates, yet the company cautioned that first-quarter sales could disappoint, possibly clouding how observers judge the financial returns of Meta’s heavily focused AI investments.

Growth and Cost Management:

Despite record-breaking stock and a generally strong market position, Meta chose to lower employees’ stock awards as part of cost management amid heavy AI investment and an evolving workforce strategy. Trimming stock options for the broad workforce while increasing bonuses for some executives is a classic case of how technology giants cut costs for some while keeping their top people happy. It remains to be seen how the move affects employee morale and retention as the company pursues major AI expansion and market dominance. In a rapidly changing technology landscape, one thing is certain: the future for Meta’s employees looks as cloudy as their stock allocations.

Read More: Meta Launches Project Waterworth, World’s Longest Undersea Cable

DeepSeek Disrupts the AI Titans: Google, Meta, and Microsoft Fight Back with Unprecedented Spending

In the last two weeks, speculation has grown that DeepSeek could revolutionize investment in AI and give tough competition to tech giants like Meta, Google, and Microsoft. Its rise sent Nvidia’s stock down sharply, as investors feared that demand for AI chips and data centers would no longer be the same. DeepSeek’s huge success also put a question mark over whether Meta, Google, and Microsoft would spend less on AI.

But on Google parent company Alphabet’s latest earnings call, CEO Sundar Pichai put the rumours to rest. He acknowledged the Chinese AI company, praised what DeepSeek has done, and compared its models with some Gemini models, which he said hold up just as well.

He also outlined his plans. Instead of stepping back, Alphabet is doubling down on AI, raising planned capital expenditures to $75 billion in 2025, roughly a 42% jump from the previous year. For comparison, Alphabet spent $32.3 billion on capital expenditures in 2023.

The reason? 

Cheaper AI could drive higher demand for Google’s AI-powered services. More people will use it, leading to more business opportunities.
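As a quick back-of-the-envelope check of the spending figures above (assuming the 42% jump is measured year over year), the announced 2025 budget implies roughly $53 billion of 2024 spending, and well over double the 2023 total:

```python
# Figures reported in the earnings coverage above ($ billions)
capex_2025_planned = 75.0
capex_2023 = 32.3
yoy_jump = 0.42  # "a 42% jump from the previous year"

# The 42% jump implies this much spending in 2024:
implied_capex_2024 = capex_2025_planned / (1 + yoy_jump)
print(round(implied_capex_2024, 1))  # 52.8

# Versus 2023, the 2025 plan is more than double:
print(round(capex_2025_planned / capex_2023, 2))  # 2.32
```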

Mark’s Meta:

Not only Sundar: Meta’s CEO Mark Zuckerberg has also announced massive long-term AI spending. Zuckerberg said last week that Meta would spend more than $60 billion on capital expenditures in 2025 alone, primarily on data centres. His confidence stems from his stated ambition to lead the AI dominance race, and he suggests that tech giants aren’t slowing down despite the popularity or success of DeepSeek or any other newcomer AI model.

What is Meta up to:

Meta’s goal with its next model, Llama 4, is to make it the world’s most competitive, even compared to closed models (like ChatGPT). Zuckerberg expects Llama 4 to have both agentic and multimodal capabilities, a mixture of strengths associated with OpenAI’s and Anthropic’s models.

Microsoft’s Agenda:

Microsoft CEO Satya Nadella also has a take on DeepSeek’s ‘lower cost’ agenda. He said continued spending would ease the capacity constraints that have hampered the technology giant’s ability to capitalize on AI.

“As AI becomes more efficient and accessible, we will see exponentially more demand,” he said on a call with analysts.

With this, Microsoft has earmarked $80 billion for AI in its current fiscal year, while Meta has pledged as much as $65 billion. All three tech giants appear locked in healthy competition for global AI dominance, largely unconcerned about newcomers and their ‘new strategies’.

But here comes the real question: will this huge spending actually pay off? Time will surely unfold this mystery. Stay tuned to learn more.

Read More: OpenAI Seals Partnership with Kakao, Expanding Its Asian Collaborations

Meta Indicates It May Halt Development of Extremely Risky AI Systems

CEO Mark Zuckerberg has committed to one day making artificial general intelligence (AGI), AI capable of performing any task a human can, openly available. However, a new policy document from Meta suggests that in certain cases, the company may choose not to release highly advanced AI systems developed internally.

AI System Risks:

In the document, named the Frontier AI Framework, two types of AI systems, “high risk” and “critical risk”, are considered too risky to release. According to Meta, both classifications cover AI systems that could aid in breaching cybersecurity measures or in chemical and biological attacks. Critical-risk systems could cause a “catastrophic outcome that cannot be mitigated in a proposed deployment context,” whereas high-risk systems may facilitate attacks, but not as effectively or reliably as critical-risk ones.

Meta provides examples of potential threats, such as the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons”. Meta says it “doesn’t believe the science of evaluation is sufficiently robust as to provide definitive quantitative metrics for deciding a system’s riskiness”. The company acknowledges that its list is not exhaustive but represents what it views as “the most urgent” and plausible risks arising from the release of powerful AI.

Notably, Meta measures system risk not through a single empirical test but through insights gathered from several internal and external researchers, with the final decision resting with senior executives. According to the company, current assessment methods are simply not “sufficiently robust” to yield definitive quantitative risk thresholds.

If an AI system is classified as high-risk, Meta will limit access to it internally and hold off on its release until mitigations reduce the risk to a moderate level. If a system is determined to reach critical-risk status, Meta will put security protections in place to restrict access entirely and will suspend development until the system can be made less dangerous.
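The tiered handling described above can be sketched as a small decision function. The tier names mirror the framework's language, but the function and its return strings are illustrative assumptions, not Meta's actual process.

```python
from enum import Enum


class RiskTier(Enum):
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"


def release_decision(tier: RiskTier) -> str:
    """Map a risk tier to the handling the framework describes."""
    if tier is RiskTier.CRITICAL:
        # Lock the system down and pause work until it can be made safer.
        return "restrict all access; suspend development"
    if tier is RiskTier.HIGH:
        # Keep the system internal; hold release pending mitigations.
        return "limit internal access; hold release until risk is moderate"
    # Moderate risk is the threshold at which release becomes possible.
    return "eligible for release"


print(release_decision(RiskTier.HIGH))
# limit internal access; hold release until risk is moderate
```

The notable design point is that the tiers are ordered: mitigation work aims only to move a system down one level at a time, from critical to high to moderate, rather than straight to release.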

Meta’s Frontier AI Framework:

Meta’s Frontier AI Framework is designed to evolve alongside advancements in AI and aligns with the company’s prior commitment to publishing it before the France AI Action Summit. This initiative appears to be a response to criticism regarding Meta’s open approach to AI development. In contrast to companies like OpenAI, which restrict access to their AI systems by putting them behind an API, Meta has generally favoured a comparatively more open yet still controlled access to its AI models.

While this approach has made its Llama AI models widely popular, it has also been fairly contentious, especially following reports that adversaries of the U.S. have used Llama to create a defence chatbot. With the announcement of the Frontier AI Framework, Meta may also be trying to distinguish its stance from DeepSeek, a Chinese AI company that follows a similar path of openly releasing its models but with fewer safeguards against harmful content creation.

Meta says, “[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.” Meta aims to develop advanced AI technology with an approach that maximizes the societal benefit of AI development and innovation while minimizing its risks.

Read More: Meta’s Shift to Community Notes: Revolution or Risk?

Zuckerberg doubles down on AI investments despite DeepSeek’s impact on Tech Industry

DeepSeek is not a concern for Meta; with billions of users, deep pockets, and ambitious AI-powered plans, the company sees little surprise in the competition. I am not sure if money can buy happiness, but I now know it can keep Zuckerberg calm. The prospect that DeepSeek’s AI models could undercut GPU demand triggered a panic in U.S. markets, with Nvidia’s stock falling by almost 20%. Meta, however, is betting big: Mark Zuckerberg, the CEO of Meta, confirmed during the company’s earnings call that the firm would invest “hundreds of billions of dollars” in AI over the long term and is set to spend over $60 billion on capital expenditures in 2025, with a focus on data centers.

Meta’s AI Infrastructure: A Strategic Edge:

The company has emphasized building more data centers to support its expanding AI initiatives. Zuckerberg is not really worried about DeepSeek; he believes Meta’s billions of users will keep its growth on track, and he dismisses the notion that DeepSeek’s expansion has affected Meta. In his view, Meta’s dedication to building AI infrastructure will be a significant asset in terms of both service quality and scale, continuing to give the company “a strategic edge”.

Zuckerberg revealed that the company’s next model, Llama 4, is intended to compete head-to-head with OpenAI’s ChatGPT, offering agentic capabilities and multimodal functionality, attributes common to OpenAI’s and Anthropic’s models. He stated, “Our goal with Llama 3 was to make open source competitive with closed models, and our goal for Llama 4 is to lead.” The AI race remains up for debate, with Zuckerberg sounding confident even as Meta shows some signs of strain.

Read More: SoftBank’s Biggest AI Gamble Yet: What $25B Means for OpenAI & Stargate

Google Partners with HTC in $250M XR Deal: A Bold Step to Rival Apple and Meta in Immersive Tech

Google’s Latest & Bold Move!!

Acquiring a part of HTC’s XR business for $250 million?

Trying to rule the XR space?

Google is damn serious about competing in the advanced technology space.

What’s happening?

Let’s start from scratch.

1. What is XR?

  • XR, or Extended Reality, is a collective term that includes:
    • Virtual Reality (VR): Fully immersive digital environments.
    • Augmented Reality (AR): Digital overlays on the real world (think Pokémon GO).
    • Mixed Reality (MR): A mix of both VR and AR.

2. What’s the Deal About?

  • Google is buying a part of HTC’s XR business for $250 million.
  • The deal includes:
    • Engineering staff: HTC VIVE engineers, known for their expertise in VR headsets and related technology, will now work for Google.
    • Intellectual Property (IP): Google gets non-exclusive rights to HTC’s XR technology, meaning HTC can still use and develop it.

3. Why Is This Important?

  • Google recently launched its Android XR platform, a system for running XR applications (like headsets and smart glasses). This acquisition will speed up the development of this platform.
  • Google will be a strong competitor to companies like:
    • Meta (Facebook): Known for Oculus VR headsets.
    • Apple: Vision Pro and much more (who doesn’t know them?)

4. History Between Google and HTC

This isn’t their first deal: back in 2017, Google agreed to pay HTC $1.1 billion to bring over much of the smartphone engineering team behind its Pixel phones. Lifetime collaboration on the way?

5. What Does This Deal Mean for HTC?

  • HTC will retain the rights to use and develop XR technology, so it’s not about losing its entire XR division.
  • This allows HTC to still innovate in the XR space while having funds from Google.

6. The Bigger Picture

Who will play a long-run game?

Google is trying to catch Meta and Apple, which are already ahead with their XR devices, and is investing heavily to take the top spot and become a key part of the future.

Will Google, Apple, Meta, or any other pro player rule the XR game?

Time will tell.

Stay tuned to techi.com to get regular updates.

Read More: Google Stance on European Union Fact-checking Mandates

Meta Announces CapCut-like Video Editing App Called Edits

Meta has entered the competitive video editing space with the launch of Edits, its answer to popular apps like CapCut. The company aims to tap into the growing demand for easy-to-use video editing tools, offering a feature-packed platform designed to cater to both casual creators and professionals.

Edits pairs an easy interface with advanced editing features, backed by Meta’s suite of platforms. It helps users put together polished videos with diverse effects, transitions, captions, and even audio embellishments. The app is tightly integrated with Meta’s social media platforms, so anyone can post their creation straight onto Instagram, Facebook, and WhatsApp without the help of other applications.

Apart from the basic editing features, Edits includes AI-powered enhancements. Among these are automatic scene detection and smart cropping, along with intelligent recommendations to improve the quality of one’s video. These will particularly attract content creators who want high-quality output in minimal time.

Meta got into video editing in response to the increased demand for short-form video, particularly on platforms such as Instagram Reels and Facebook Stories. With Edits, the giant hopes to open video production to as many people as possible while keeping the fluidity and ease of use that users associate with Meta products.

What Edits Offers:

  • AI-Powered Editing: Smart features for automatic scene detection, cropping, and quality improvements.
  • Seamless Integration: Share directly to Instagram, Facebook, and WhatsApp, streamlining the editing-to-sharing process.
  • Intuitive Interface: Simple, user-friendly design catering to both beginners and experienced creators.
  • Customization Tools: Text, effects, and audio options for fully personalized videos.

As Meta competes with established players like CapCut and iMovie, the focus will be on whether Edits can offer unique features that set it apart and attract a broad user base. For now, it stands as a promising option for Meta’s social media-savvy audience looking to enhance their video content creation process.

Read More: Artificial intelligence to improve Surrey’s Pothole Detection and Road Repairs