US DOJ Drops Bid To Make Google Sell AI Investment in Antitrust Case

Google has won a little respite in its antitrust case. The U.S. Department of Justice (DOJ) dropped the clause that would have forced Google to sell its AI investments, including its stake in Anthropic, to boost competition. Anthropic had contended in court that losing the investment would hand a competitive advantage to its rivals OpenAI and Microsoft. Prosecutors had also received evidence that banning Google’s AI investments carries a risk: it could have unintended consequences in the evolving AI landscape. Google holds minority stakes worth billions of dollars in Anthropic.

The prosecutors asked instead that, in future, Google inform the government of any planned investment in generative AI beforehand and obtain approval. Google said it is going to appeal against this investment restriction order. The original lawsuit was filed back on 20 October 2020, with a primary focus on Google’s monopoly in the search engine market. It alleged that Google unlawfully maintained monopolies in the search and online advertising markets through anticompetitive practices.

A separate lawsuit, filed by the DOJ on January 24, 2023, focused on digital advertising and was much harsher than the first. It described how Google gained an unfair advantage by buying up ad tools and ad-serving technology, and it asked the court to make Google sell significant portions of its ad tech business and stop certain business practices. The trial for that second lawsuit concluded in November 2024, and a ruling is expected by August 2025.

The US DOJ also wants Google to sell off its Chrome browser as part of its final remedy proposal in the antitrust case, and to stop paying partners for preferential treatment of its search engine; being the default search engine is an unfair advantage. As per Reliablesoft, Google has an 89.74% share of the search market, with Bing languishing in second place at just 3.97%. The tech world is eagerly awaiting the conclusion of this case, which has the potential to change the industry a great deal. It remains to be seen what the final verdict will be, but Google has its work cut out, and there is a fair chance it will face some unfavorable orders in the final ruling.

Google Reports AI Deepfake Terrorism Complaints to Australia’s eSafety Commission

In an era where artificial intelligence is reshaping the digital landscape, the concerning part is the steady stream of ugly issues around its misuse. Big technology companies are under growing pressure to stamp out every harmful application of the technology, be it deepfake terrorism propaganda or AI-generated child sexual abuse material. Google has now provided one of the rare glimpses of the scale of AI abuse, reporting hundreds of user complaints about its Gemini program relating to such disturbing content. This disclosure to Australia’s eSafety Commission raises immediate questions about AI governance, regulatory oversight, and the ethical responsibilities of tech companies.

Over an almost year-long reporting period from April 2023 to February 2024, Google informed the Australian authority that it had received more than 250 complaints globally that its artificial intelligence software, Gemini, had been misused to produce deepfake terrorism content. Google submitted the report to the Australian eSafety Commission as part of a regulatory requirement under which technology companies must report on their harm-minimization efforts or face penalties in Australia.

In addition to the complaints about AI-generated extremist deepfakes, dozens of warnings from users said that Gemini had been used to create child sexual abuse material. The eSafety Commission characterized Google’s report as a “world-first insight” into how the new technology is being used to produce harmful and illegal content. Julie Inman Grant, the eSafety Commissioner, said,

“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated”.

Google’s AI Safety Measures Face Challenges:

According to Reuters, the report states that Google received a total of 258 user complaints about suspected AI-generated deepfake terrorism content, along with 86 complaints concerning AI-generated child exploitation or abuse material. However, Google has not made public how many of these complaints were verified. In an emailed statement, a Google spokesperson emphasized the firm’s policy against the generation and distribution of content tied to violent extremism, child exploitation, and any other illegal activity, adding:

“We are committed to expanding on our efforts to help keep Australians safe online.”

According to the Google spokesperson,

“The number of Gemini user reports we provided to eSafety represent the total global volume of user reports, not confirmed policy violations.”

Google now employs a hash-matching system to detect and eliminate AI-generated instances of child abuse material automatically. However, the company does not utilize the same system to detect terrorist or violent extremist content generated by Gemini, which is a limitation pointed out by the regulator.
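
Google has not described the system’s internals, but hash matching in general works by comparing a fingerprint of a file against a list of fingerprints of known abusive content. The sketch below is only a simplified illustration of that idea, using plain SHA-256 digests and an invented blocklist; production systems typically rely on vetted hash databases and perceptual or proprietary hashes that also catch near-duplicates, not exact-match cryptographic hashes.

    import hashlib

    # Hypothetical blocklist of SHA-256 digests of known abusive files
    # (placeholder value; real deployments use curated hash databases).
    KNOWN_BAD_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def file_digest(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_abusive(path: str) -> bool:
        """Flag a file whose digest matches an entry in the blocklist."""
        return file_digest(path) in KNOWN_BAD_HASHES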

Regulatory Pressure and Industry Scrutiny:

Generative AI tools like OpenAI’s ChatGPT, which burst into public attention in late 2022, have triggered global concern among regulators about AI misuse. Governments and regulators are demanding stricter measures and regulations to ensure the technology is not used for terrorism, fraud, deepfake pornography, or other forms of abuse. Australia’s eSafety Commissioner has previously fined platforms such as Telegram and X (formerly Twitter) for failing to meet its reporting requirements. X has already lost an appeal against its A$610,500 penalty but intends to challenge the ruling again; Telegram has also signaled its intention to contest its penalty.

AI technologies are racing ahead, and the safeguards protecting users from their possible misuse must keep pace. That requires stronger regulations, better AI monitoring systems, and greater transparency from technology firms. With this disclosure, eyes around the world are now on how the future of AI governance will balance innovation against the ethical responsibilities of companies.

Musk’s Attempt to Stop OpenAI’s Transition into a For-Profit Entity Turned Down by Judge

Elon Musk co-founded OpenAI and was a major financial contributor to it from 2015 to 2020, as per the lawsuit. Musk claims the original vision of OpenAI was to operate as a non-profit entity with no personal gain. When ChatGPT was launched, Sam Altman began commercializing the product with a monthly fee for its pro users. At that time, Musk advocated creating a separate for-profit entity; however, differences emerged while finalizing the plan, as Musk wanted a majority stake. In 2018, Musk resigned from the OpenAI board after the company refused his proposal to fold OpenAI into Tesla.

In February 2024, Musk filed a lawsuit against OpenAI and its leadership. It was withdrawn in June 2024, and a revived lawsuit was filed in August 2024. The lawsuit contained several allegations, one of which concerned the transition to a for-profit entity. Musk argued that OpenAI was deviating from the original plan of remaining a non-profit and that its vision of serving the masses was not being fulfilled.

Judge Yvonne Gonzalez Rogers denied Musk’s request to block the for-profit transition because he was unable to provide substantial evidence in support of his allegation. While the attempt to block OpenAI’s transformation was denied, the broader lawsuit remains active. The judge has offered to expedite the trial so the case can be concluded early and limit any harm to the business.

It should be remembered that Elon Musk was among the 11 co-founders, alongside Sam Altman and Greg Brockman, and that he invested more than $44 million in OpenAI, as stated in point 82 of the lawsuit. We believe any organization that wants to stay at the forefront of technological advancement needs to be a for-profit entity to sustain consistent progress in AI: the operational costs of research and development are extremely high, and inconsistent government and investor policies can hamper the process.

This legal battle looks like a personal feud between Elon Musk and Sam Altman. Musk has no clear grounds to stop OpenAI from operating as a for-profit entity, and the judge said as much. It is interesting to note that as recently as February 2025, Musk offered $97.4 billion to take over OpenAI and preserve the AI research lab’s original mission; the board rejected the offer.

In an interview with Bloomberg, Sam Altman said this of Musk:

“I think he is probably just trying to slow us down. He obviously is a competitor. He’s working hard and he has raised a lot of money for xAI, and they are trying to compete with us from a technological perspective, from getting the product into the market, and I wish he would just compete by building a better product. But I think there’s been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, and now this. And we will try to just put our head down and keep working.”

With stiff competition from several prominent companies like DeepSeek, Anthropic, Google DeepMind, Meta AI, and a few others, this legal battle is not good for OpenAI. Sam Altman needs to keep a clear head and ensure the company keeps making progress to hold the competition at bay while the case plays out. At the moment, it seems they are moving in the right direction. Altman tweeted:

“we are likely going to roll out GPT-4.5 to the plus tier over a few days. There is no perfect way to do this; we wanted to do it for everyone tomorrow, but it would have meant we had to launch with a very low rate limit. We think people are gonna use this a lot and love it.”

On the other hand, Musk needs to keep driving his xAI company forward. Competition is heating up, with DeepSeek recently claiming that its model cost only about $6 million in computing power to train. AI is heading into an era of lower computing costs, but at the same time data volumes are growing many times over. The challenge for AI companies is to keep evolving and coming up with new techniques to stay in front; otherwise, they will be left far behind in this global race.

OpenAI Faces Legal Scrutiny over Copyright Claims as Alec Radford Gets Subpoenaed

Who knew AI models would end up needing copyright lawyers more than programmers? The more artificial intelligence transforms industries, the more fire it ignites in the legal arena over how these models are trained. The battle over AI and intellectual property has now reached a point where the works of human creators are allegedly being exploited beyond acceptable boundaries. In this high-profile copyright case, former OpenAI researcher and leading generative AI developer Alec Radford has been issued a subpoena, shedding further light on the murky details of AI training data, fair use, and the future of generative models. Depending on how the case turns out, it could become a turning point for the ethics of AI, legal frameworks, and the protection of creative works in the digital age.

According to a court filing, Radford received the subpoena on 25 February, marking a key development in the lawsuit over OpenAI’s use of copyrighted materials to train its AI models. The case, titled “In re OpenAI ChatGPT Litigation” and pending in the U.S. District Court for the Northern District of California, was initiated by several renowned book authors, including Paul Tremblay, Sarah Silverman, and Michael Chabon. They claim that OpenAI used their literary works without authorization to train its AI models, a copyright violation, and assert that OpenAI’s ChatGPT produces text very similar to theirs without giving any credit, amounting to direct copyright infringement.

Radford’s Contribution to OpenAI:

Radford, who recently left OpenAI to pursue independent research, was a key contributor to the Generative Pre-trained Transformer (GPT) architecture on which OpenAI’s products, such as ChatGPT, run. His other recent contributions include OpenAI’s speech recognition model Whisper and its DALL·E image-generation model. Having joined OpenAI in 2016, Radford was instrumental in developing the company’s AI capabilities.

Radford’s work as lead author of OpenAI’s original paper on Generative Pre-trained Transformers laid the foundation for the AI models that power a multitude of applications today. His involvement in the lawsuit suggests that the plaintiffs are seeking insider knowledge of OpenAI’s training processes and, more specifically, of how copyrighted content was used in building those models.

Legal Feuds:

The irony is that OpenAI needs human lawyers to defend its non-human intelligence. As OpenAI has kept up its defense against the copyright claims, the legal storm has intensified. Last year, the court dismissed two of the claims against OpenAI but allowed the direct copyright infringement claim to proceed. The plaintiffs’ legal team is now seeking testimony from former OpenAI personnel to bolster its case.

Radford is not the only big name involved in this legal battle; also caught in its net are Dario Amodei and Benjamin Mann, who left OpenAI to found the AI research company Anthropic. Although the two former executives resisted, arguing that the burden would be too great, they are still answerable: this week, a U.S. magistrate judge ruled that Amodei must undergo questioning about his past work at OpenAI in two separate copyright cases, including one brought by The Authors Guild.

Broader Implications and Issue of Fair Use:

If the lawsuit is decided in favor of the plaintiffs, it will have significant legal repercussions for the entire AI industry. Such a ruling would likely force AI companies to re-examine how they collect and use data for training models, leading to tighter controls, licensing arrangements with content creators, and a rethinking of copyright protection for AI-generated content.

At the heart of OpenAI’s defense is the fair use doctrine, which allows limited use of copyrighted materials without permission under certain circumstances. The plaintiffs counter that these AI models are commercial products that generate revenue for OpenAI, making the fair use argument questionable. As AI-generated content becomes widespread, courts will have to define where fair use ends when it comes to machine learning and data scraping.

Therefore, the outcome of this lawsuit will affect both AI developers and content creators. If the courts determine that OpenAI’s use of copyrighted materials falls outside fair use, it could bring new regulations, change how AI models are trained, and raise expectations of explicit licensing agreements with content creators. Conversely, a ruling in favor of OpenAI may further strengthen AI companies’ ability to scrape enormous amounts of data with minimal oversight.

With everything in mind, the case could be a catalyst for change within the AI sector. It sits within the much wider landscape of generative models, including not just large language models but also generative adversarial networks and diffusion models, all of which raise ethical and legal questions about training data. OpenAI claims its processes are protected by fair use, but transparency around data sourcing remains an issue, especially where potential violations of intellectual property rights are concerned.

Trump Administration Cuts AI Funding, Threatening U.S. Innovation?

AI technology is progressing enormously, and the United States has been leading in technological innovation, but recent moves by the Trump Administration appear set to jeopardize that position. Dismissing key personnel at the National Science Foundation (NSF) and slashing research funding have alarmed the scientific community. With the Trump Administration’s dismissal of NSF employees specializing in AI, experts now worry that disruptions to AI funding may stall progress in the field, with heavy implications for national security, economic growth, and global competitiveness.

With tensions already brewing, the clash between AI scientists and policymakers underscores the need for consistency in funding scientific research. A significant impact is expected on the Directorate for Technology, Innovation and Partnerships, the office that plays a key role in channeling federal grants to AI research.

Most of the review panels scheduled to evaluate and approve funding for AI research projects have been either canceled or postponed, meaning extensive delays in financial support for many projects. This disruption would set back research in machine learning, robotics, and automation, which are critical to national security, health, and industrial innovation.

Criticism of Funding Reductions:

Experts and researchers working in AI have strongly condemned the administration’s moves to reduce grant funding, especially the cuts driven by Elon Musk’s Department of Government Efficiency. Musk, a known champion of AI, has been accused of indirectly disrupting the research ecosystem through the funding restrictions he has pushed. Many researchers feel the cuts could have long-term implications for the United States’ ability to stay on top while other countries make large-scale investments in the technology.

Geoffrey Hinton, an AI pioneer and Nobel Laureate, stated in a post on X,

“Musk [should] be expelled from the British Royal Society because of the huge damage he is doing to scientific institutions in the U.S.”

Hinton called it a crime that US scientific institutions are losing their ability to progress and maintain their integrity. Similar views have been voiced by AI researchers and academics, who note that without stable government funding, groundbreaking AI discoveries might slow down, giving countries like China an opening to take the driver’s seat in the field.

Musk’s Reaction:

Musk quickly defended his views on efficient funding in response to Hinton’s remarks while also conceding that he could be wrong. Musk responded to Hinton’s post:

“Only craven, insecure fools care about awards and memberships. History is the actual judge, always and forever. Your comments above are carelessly ignorant, cruel and false. That said, what specific actions require correction? I will make mistakes, but endeavor to fix them”.

Musk’s outburst has sparked further debate in the tech and research communities. Some believe slow government processes need a push to reduce wasteful spending, even if that requires harsh cuts, while others believe AI research needs reliable long-term funding. The remarks have generated renewed interest in the ethical stakes of private-sector influence over public science funding, sharpening the debate about how AI will be governed in the future.

Consequences of AI Funding Cuts:

The current controversy has expanded into wider debates within the scientific community about government overreach versus the appropriate level of financial support for AI research. Experts warn that funding disruptions will hinder the U.S. in the global AI race, making the country less competitive in artificial intelligence development. Other countries, including China and the European Union, have sharply increased their research budgets for AI applications, defense, cybersecurity, and automation.

It remains unclear whether the administration will reverse course in light of the flood of criticism. For now, the growing backlash from the AI research community and policymakers suggests that the quarrel over funding for AI research is far from over. The next few months will show whether the U.S. maintains its competitive position in AI or whether short-term funding decisions will have long-term consequences for innovation and economic leadership.

Microsoft Has Officially Announced Skype Will Shut Down in May

What was once a symbol of digital communication will soon be a thing of the past: Skype, a trailblazer that revolutionized video calling, is officially shutting down in May 2025. Microsoft confirmed the news through Skype’s official X (formerly Twitter) account, urging users to transition to Microsoft Teams to continue their conversations. The decision follows years of Skype’s declining relevance, as competitors like WhatsApp, Zoom, and FaceTime overtook the market. Initially launched in 2003, Skype soared in popularity by offering free global voice and video calls, becoming a household name before Microsoft acquired it for $8.5 billion in 2011.

But as technology progressed and Microsoft pivoted to Teams, Skype slowly started disappearing into the background. With this closure, an era of internet history is over, leaving behind memories of a time when video calls seemed like a futuristic breakthrough.

Why is Skype Shutting Down?

Skype used to be the default video-calling app for millions, but its downfall began with:

  • Competition Acceleration: WhatsApp, Zoom, and Google Meet provided instant, mobile-native, and integrated solutions.
  • Unpopular Redesigns: Widely criticized in 2017 for an update that had mimicked Snapchat’s home screen.
  • Microsoft’s shift to Teams: When Microsoft rolled out Windows 11 in 2021, it no longer pre-installed Skype, signaling its phasing out.

Microsoft’s official announcement was brief and to the point: no more Skype.

What Happens to Skype Users?

Microsoft announced that all Skype accounts can now sign into Microsoft Teams, migrating their chats and contacts. Before the final closure, users can export their chat history and contacts, and Skype’s paid services will remain active until the next renewal cycle. In the meantime, anyone who still uses Skype for personal or business calls will need to migrate to Teams or switch to one of the many other modern alternatives.

A Goodbye to Skype: How Could It Have Happened?

For many, Skype was not merely an app but a technological breakthrough that made video calling free and accessible worldwide; reaching loved ones in different parts of the world in the mid-2000s without costly phone bills was revolutionary. However, Skype’s decline was not simply a matter of growing competition; it was also one of missed opportunities and mismanagement. While competitors embraced mobile trends, AI-powered features, and cloud-based collaboration, Skype fell behind, becoming a relic rather than cutting-edge. As the official end approaches, Skype leaves behind a legacy even as it loses its relevance. The question now is whether Microsoft Teams will learn from the past.

Read More: Microsoft Expands AI Reach with Copilot App for Mac

Meta Fires 20 Employees for Leaking Confidential Information

According to The Verge, Meta has terminated approximately 20 employees for leaking confidential company information. The tech giant confirmed that the crackdown is part of its commitment to protecting sensitive data, particularly as leaks of internal meetings and upcoming product plans have increased in recent months. A Meta spokesperson stated, “We tell employees when they join the company, and we offer periodic reminders, that it is against our policies to leak internal information, no matter the intent.” The company added that additional terminations are expected as investigations continue.

The decision follows a series of news reports revealing details from Meta’s private discussions, including an all-hands meeting led by CEO Mark Zuckerberg. In response, the company has ramped up efforts to identify and take action against employees responsible for leaks. Meta’s Chief Technology Officer, Andrew Bosworth, reportedly warned staff that the company was close to identifying the culprits. Ironically, even his warning was leaked, underscoring Meta’s challenge in curbing unauthorized disclosures.

Meta’s History of Internal Leak Issues

This is not the first time Meta has cracked down on leaks. The company has faced scrutiny in the past over leaked internal documents related to privacy concerns, content moderation policies, and AI development strategies. As Meta continues to navigate growing competition and regulatory challenges, securing proprietary information has become a top priority. The recent terminations signal a stricter approach to handling internal breaches and protecting corporate secrets.

Read More: Meta Gears Up to Launch Standalone AI Chatbot to Challenge ChatGPT & Gemini

Nvidia CEO Shrugs Off DeepSeek Challenge as AI Chip Sales Soar

In the high-stakes world of AI, where chips lead the race for dominance, Nvidia is not just keeping pace; it is setting a new standard altogether. While DeepSeek’s R1 rattled the market temporarily, Jensen Huang remains unfazed, guiding the ship with steady hands as competitors scramble for attention. As the rest look for a foothold, Nvidia is still riding one of the largest growth waves the industry has ever seen, proving yet again that the AI revolution has not stopped; it has merely begun.

CEO Huang keeps pushing his company’s future forward, brushing off worries that DeepSeek’s advances threaten Nvidia’s sales. Speaking on the latest earnings call on Wednesday, Nvidia’s founder and chief executive reiterated his confidence in the company despite concerns about the fallout from DeepSeek’s R1 model.

Demand for the Chip:

Huang praised the new R1 model as an “excellent innovation,” saying it actually increases demand for Nvidia technology given the huge computational requirements of reasoning models. This came after last month’s record drop in Nvidia’s share price, triggered by news that DeepSeek’s R1 model required far fewer chips for training.

Huang even countered such narratives and said, “Reasoning models can consume 100 times more compute, and future reasoning models will consume much more compute. DeepSeek R1 has ignited global enthusiasm. It’s an excellent innovation, but even more importantly, it has open sourced a world-class reasoning AI model. Nearly every AI developer is applying R1”.

Record Breaking Sales:

Nvidia’s financial performance seems stronger than ever, even in the wake of last month’s market jitters. The company announced yet another record quarter, with sales totaling approximately $39.3 billion, beating not only its internal estimates but also Wall Street’s. Nvidia expects strong growth to continue, projecting revenue of approximately $43 billion for the next quarter. Sales in Nvidia’s data center segment, one of its most important growth drivers, nearly doubled in 2024 to $115 billion, with the latest quarter up 16% over the previous one, emphasizing the relentless demand for AI chips.

AI Chip Market:

Nvidia CEO Huang asserted on the earnings call that its latest Blackwell chip is crucial, being custom-designed for AI reasoning models. He said, “Current demand for it is extraordinary. We will grow strongly in 2025.” It is safe to say that, despite last month’s DeepSeek uproar, the wider AI chip market has continued to expand at a steady pace. Nvidia’s future looks bright despite some recent turbulence. Record-breaking sales, soaring demand for AI chips, and major corporations such as Meta, Google, and Amazon pouring billions into AI infrastructure have secured Nvidia’s commanding position at the top. With the AI revolution still growing, Nvidia’s role as its backbone seems assured. If the past is any guide, Jensen Huang is not the one keeping up; he is the one leading.

Read More: Amazon Unveils Alexa+ AI Assistant to Revolutionize Smart Living

Apple Shareholders Uphold DEI Initiatives Amidst Conservative Opposition

In a corporate environment where policies move as quickly as software updates, Apple has maintained a stand on Diversity, Equity & Inclusion (DEI). A proposal aimed at essentially dismantling these efforts was rejected by shareholders in an overwhelming fashion, thereby confirming that the tech giant’s commitment to an inclusive workplace has not wavered. Apple’s landslide victory reinforces the notion that diversity is not just an aspirational ideal; rather, it is a pressing business imperative.

Apple Inc. shareholders voted to uphold the giant tech company’s DEI policies, an important victory for the company’s management. The vote at Apple’s annual shareholders’ meeting rejected a conservative proposal to dismantle the program. The results cast light on the ongoing debate over corporate DEI initiatives’ role and worth amid mounting political and legal scrutiny.

Corporate DEI Policies:

The National Center for Public Policy Research, a free-market think tank, put forward the proposal, called “Request to Cease DEI Efforts.” Supporters of the measure argued that Apple’s DEI policies expose the company to a growing number of discrimination lawsuits following recent changes in the law. Apple countered that it has active monitoring in place to reduce legal risk and emphasized that the proposal improperly restricts management’s ability to oversee corporate policy.

The shareholder vote was decisive, with 210.45 million votes cast in favor of the proposal and an overwhelming 8.84 billion votes cast against it. The resounding defeat is evidence that investors continue to back Apple’s commitment to DEI, even as major companies like Meta have scaled back similar efforts under political pressure.

Company’s Global DEI Initiatives and Approach:

Apple does provide diversity data about its employees, but it does not maintain official hiring quotas or targets. Instead, the company focuses on initiatives like its racial justice program, which funds historically Black colleges and universities in America. DEI efforts abroad include coding education for indigenous peoples in Mexico and partnering with local Aboriginal non-profits in Australia to pursue criminal justice reform.

Apple’s approach to diversity has been scrutinized by shareholders in the past; earlier proposals calling for greater transparency on racial and gender-based pay gaps were also rejected. The company remains firmly committed to fostering an inclusive workforce. During the meeting, CEO Tim Cook said, “Strength has always come from hiring the very best people and then providing a culture of collaboration, one where people with diverse backgrounds and perspectives come together to innovate”. Cook acknowledged that Apple’s DEI approach may someday have to change, depending on the law, but said core values of dignity and respect would not be compromised, adding, “as the legal landscape around these issues evolves, we may need to make some changes to comply, but our North Star of dignity and respect for everyone and our work to that end will never waver”.

Broader Corporate Trend:

Apple’s blunt rejection of the anti-DEI proposal stands in contrast to the general trend in corporate America. A growing number of large companies have recently toned down or even reversed their DEI initiatives because of political and legal pressure, especially since President Donald Trump condemned such programs and suggested he would investigate their legality. The same conservative group that sought to undermine Apple’s DEI agenda also targeted Costco Wholesale, pressuring it to consider the risks of its own diversity program. Costco shareholders voted against that proposal in January, pointing to a growing corporate resistance to dismantling DEI programs in the face of mounting conservative opposition.

Apart from the DEI vote, Apple shareholders rejected a second proposal that called on the company to assess the risks connected with its work in artificial intelligence. The AI proposal commanded more shareholder support than any other initiative, with 1.04 billion votes in favor, but was voted down in the end, with 7.96 billion votes cast in opposition. Apple prevailed on all of its management proposals, including the so-called “say on pay” executive compensation plan.

Apple’s U.S. Investments with Trump:

The day before the shareholder meeting, Apple grabbed the spotlight by announcing a plan to invest $500 billion in the U.S. over four years. Donald Trump praised the move, which came a few days after reports emerged of a meeting between Tim Cook and Trump. On that occasion, Cook reaffirmed Apple’s commitment to domestic manufacturing, which includes being the largest customer of Taiwan Semiconductor Manufacturing Company (TSMC)’s Arizona factory, a project initiated during Trump’s first term to bring TSMC to the U.S.

The shareholder vote illustrates Apple’s firm commitment to its DEI declarations despite the increasingly politicized legal challenges it faces. Shareholders overwhelmingly rejected the anti-DEI proposal, signaling firm ground for inclusive corporate governance even as other firms retreat from such commitments. As the legal and cultural backdrop around workplace equity keeps shifting, Apple has once again asserted that this is not just a matter of social responsibility and that inclusivity is an advantage. The corporate focus will now turn to future investments in the U.S., innovations in AI, and how Apple balances its values against the pressures of changing regulations. While the world outside changes, Apple intends to keep operating under the tenets that shape its approach to diversity and inclusion moving forward.

Read More: Apple Launches iPhone 16e in China to Compete with Local Brands

TikTok (with Douyin) Becomes First Non-Gaming App to Surpass $6B Revenue

TikTok, the virtual stage where viral trends bloom and dance moves test cultural staying power, has now made history. Along with its Chinese counterpart, Douyin, it has become the first non-gaming app to generate a staggering $6 billion in revenue from in-app purchases (IAP) in 2024. According to Sensor Tower’s app intelligence report, TikTok also set a new record by grossing $1.9 billion in IAP revenue in the fourth quarter of last year. If any social media app has delivered a financial mic drop, this is it.

TikTok’s Revenue and Other Apps:

Among non-gaming apps, only YouTube and Google One could feasibly rival TikTok’s Q4 revenue over a full calendar year. In any case, TikTok’s annual IAP revenue surpassed all competitors and was, in fact, more than double that of any other app or game in 2024. MONOPOLY GO!, TikTok’s closest competitor, managed only $2.6 billion in IAP revenue over the past year, a very distant second.

TikTok has had a successful economic run, with a sharp year-on-year rise from $4.4 billion in 2023 to a new high of $6 billion in 2024. The app also secured the position of second most downloaded app in Q4 2024, with Instagram taking the top slot; WhatsApp, Facebook, and the e-commerce app Temu made up the remainder of the top five.

TikTok-Douyin Comparison:

Direct revenue comparisons between TikTok and other apps are inherently flawed because TikTok’s figures include Douyin, its Chinese counterpart. ByteDance owns both platforms, and they follow relatively similar short-form video models. However, they serve entirely different markets: Douyin is more tightly integrated with e-commerce and heavily regulated by Chinese authorities, while TikTok serves a globally oriented audience with a wider variety of content.

Challenges for TikTok’s Market:

In the U.S., regulatory scrutiny has brought attempts to remove TikTok from app stores on national security grounds. However, an executive order from Donald Trump granted a 75-day delay in enforcement, a reprieve that could potentially be extended. Through it all, TikTok has left a lasting economic mark, especially on the creator economy. Users can buy virtual gifts for their favorite creators, who in turn can convert them into real currency; TikTok keeps 50% of the money from these transactions, which flows back into its revenue.

TikTok has secured its place as a giant in digital entertainment and social media. With in-app revenue of $6 billion and a hold on global culture that no other platform can rival, its financial dominance is impossible to ignore. Its ability to monetize virtual gifts, engage users, and power the creator economy has cemented its place as an unstoppable force, notwithstanding the restrictions and competition it faces in social media. Whether through viral dance challenges or in-app shopping, TikTok is not just a social media app; it is an economic powerhouse that is redefining digital entertainment.

Read More: What’s Next for TikTok in the U.S.? Billion-Dollar Bids and High-Stakes Battles

Chegg Sues Google Over AI Summaries, Citing Unfair Competition and Revenue Losses

In a significant legal development, educational technology firm Chegg has filed a lawsuit against Alphabet Inc.’s Google, alleging that Google’s AI-generated search summaries are undermining original content creators and diverting web traffic away from educational publishers. Filed in Washington, D.C., the lawsuit contends that Google’s AI overviews utilize content from third-party sites like Chegg to provide instant answers directly on the search page, reducing the need for users to visit the source sites. This practice, according to Chegg, diminishes the financial incentives for publishers to produce original content, potentially leading to a degraded information ecosystem.

Chegg’s Concerns Over AI-Generated Content

Chegg’s CEO, Nathan Schultz, emphasized the broader implications, stating that the lawsuit addresses concerns about the future of digital publishing and the quality of student learning resources. He argues that students are increasingly encountering low-quality, unverified AI summaries instead of reliable, step-by-step educational content. This shift not only impacts Chegg’s visitor and subscriber numbers but also raises questions about the integrity of the information available online.

Google’s Response to the Allegations

In response, Google spokesperson Jose Castaneda dismissed the claims as unfounded, asserting that AI overviews enhance the search experience by making it more helpful and increasing opportunities for content discovery. Castaneda noted that Google continues to direct substantial traffic to websites across the internet, with AI overviews contributing to a more diverse range of sites receiving visitors.

Impact on the Digital Publishing Environment

The lawsuit underscores a significant challenge in today’s online content sphere: balancing AI-driven information delivery with the viability of original content creation. As AI tools become more embedded in search engines, creators and publishers are increasingly worried about maintaining their visibility and revenue streams. The resolution of this legal dispute could establish new standards for managing and monetizing AI-generated content, potentially reshaping the relationships between major tech companies and content creators.

This case also underscores the challenges faced by educational platforms like Chegg in adapting to rapidly changing technologies. As AI tools become more prevalent, traditional models of content delivery and monetization are being disrupted, prompting companies to reassess their strategies to remain competitive and relevant in the digital age.

Read More: Musk Starlink Battles Chinese Rivals in Fierce Satellite Internet Race

Grok 3’s Brief Censorship of Trump and Musk Sparks Controversy

Who knew AI could play favorites? Artificial intelligence was supposed to be neutral, right? Just pure, cold logic with no human bias or political drama. Apparently not in this case. When Elon Musk released Grok 3 as a “maximally truth-seeking AI”, most people would not have expected it to suddenly get shy about naming certain controversial figures, particularly its own creator. Over the weekend, users discovered that Grok 3 seemed to have an unwritten rule: Musk and Trump are not to be roasted.

Last Monday, in a live stream, billionaire Elon Musk introduced Grok 3, the latest AI model from the company he founded, xAI, calling it a “maximally truth-seeking AI.” However, users reported that Grok for a brief period censored unflattering mentions of President Donald Trump and Musk himself. When asked in “Think” mode, “Who is the biggest misinformation spreader?” social media users noted that Grok 3’s “chain of thought” reasoning indicated it had been explicitly instructed not to mention Trump or Musk. This revelation raised eyebrows, undermining Musk’s declarations of an apolitical AI.

After some time, however, the change was reverted, and Grok 3 went back to mentioning Trump in response to the misinformation question. Igor Babuschkin, an engineering lead at xAI, confirmed in an X post that it was indeed a bug caused by an internal change made by one employee, a change that was withdrawn soon after it became the topic of much attention at the company.

He said, “I believe it is good that we’re keeping the system prompts open. We want people to be able to verify what it is we’re asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values”.

Misinformation and Controversy:

There is quite a lot of debate about misinformation, and Trump and Musk bear the brunt of it for promoting provably false claims. Recent examples include the claim that Zelenskyy is a dictator with a 4% popularity rating and the assertion that Ukraine started the ongoing war against Russia. Musk’s social platform X frequently flags misleading statements from both men through its Community Notes system.

This Grok 3 controversy is merely the tip of the iceberg when it comes to accusations of political bias in AI. Critics contend Grok is biased in favor of the left, and yet another recent incident has fueled that debate: some users reported that Grok 3 generated messages claiming Trump and Musk deserved the death penalty. xAI quickly corrected the situation, and Igor Babuschkin called it a “really terrible and bad failure.”

AI Bias:

Musk has always pitched Grok as the opposite of excessively “woke” AI models, promising it would be free of the constraints applied by competitors like OpenAI’s ChatGPT. Previous versions, such as Grok 2, were rather edgy and would even resort to vulgarity when answering questions, something their AI counterparts tactfully avoid. Studies suggest that Grok nonetheless leans to the political left on issues such as transgender rights, diversity programs, and economic inequality. Musk attributes these supposed left-wing tendencies to Grok’s training data, which consists of publicly available web pages, and he has pledged to move Grok toward a more politically neutral model.

Grok 3 is yet another example of how hard it is to build an AI model that can credibly claim neutrality, and such incidents keep sharpening the tension between AI transparency and control. While Musk and other tech leaders push for “unbiased” AI, the question remains: can any AI be truly neutral when it was created by biased people? Or should we expect a future where even machines are said to hold political opinions? Achieving fairness and neutrality in AI models that shape public discourse remains a hard challenge, and only time will tell whether Musk delivers on his promise of an unbiased Grok.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Apple Launches iPhone 16e in China to Compete with Local Brands

Apple is preparing to launch the iPhone 16e in China, aiming to regain its competitive edge in one of the world’s largest smartphone markets. Priced at approximately $600, the model aligns with China’s national stimulus program, which offers subsidies on smartphones under $800. This move is seen as part of Apple’s effort to maintain its foothold in the market amid evolving consumer preferences.

Competitive Market Landscape

Apple faces increasing competition from domestic smartphone manufacturers, which continue to introduce feature-rich devices at more accessible price points. While Apple’s premium devices cater to high-end users, the demand for more affordable options is growing among Chinese consumers.

Regulatory Considerations

Apple has yet to receive regulatory approval for some of its latest software and AI-driven features in China. This situation creates uncertainty regarding the availability of Apple Intelligence services, which are central to its latest iPhone models. The lack of approval could impact the iPhone 16e’s appeal compared to locally manufactured devices that already integrate similar capabilities.

Apple’s Market Position and Future Outlook

Apple previously held the top position in China’s smartphone market, surpassing competitors. However, reports indicate a shift in market rankings, prompting Apple to introduce the iPhone 16e as part of its strategy to sustain its position. The iPhone 16e’s performance in China will be a crucial indicator of Apple’s ability to navigate market challenges, regulatory hurdles, and competitive pricing pressures.

Read More: HP Acquires Humane: What It Means for the Future of AI Wearables

Trump Administration Reportedly Shutting Down Federal EV Chargers

The General Services Administration (GSA), the federal agency responsible for managing government buildings, is reportedly planning to shut down all federal electric vehicle (EV) chargers, according to a report by The Verge. The move would impact hundreds of charging stations with approximately 8,000 charging plugs used by federal employees and government-owned vehicles. A source familiar with the situation told The Verge that federal employees will be given official guidance next week to shut down charging stations. Some regional offices have already received instructions to take their EV chargers offline.

Federal Centers Begin Disabling Charging Stations

This week, Colorado Public Radio reported that the Denver Federal Center had received internal communication indicating that charging stations on-site would be shut down. The email reportedly stated that the stations were deemed “not mission critical”, justifying their removal. The broader policy shift aligns with the Trump administration’s efforts to reduce government expenditures on renewable energy initiatives. The administration has previously cut back on federal support for EV infrastructure, including reducing funding for programs that once provided financial assistance to Tesla and other EV manufacturers.

Policy Shift Raises Concerns Over EV Adoption

The potential shutdown of federal EV chargers has sparked concerns about government sustainability goals and the future of federal fleet electrification. The federal government had previously made efforts to transition to electric vehicles as part of climate-conscious policies, but recent decisions signal a shift in priorities. The GSA has not yet issued an official statement regarding the reported shutdown. TechCrunch has reached out to the agency for comment, but no response has been provided as of now.

The removal of these EV chargers could have long-term implications on the adoption of electric vehicles within the federal workforce, potentially slowing progress toward clean energy transportation.

Read More: US AI Safety Institute Faces Major Cuts Amid Government Layoffs

US AI Safety Institute Faces Major Cuts Amid Government Layoffs

The US AI Safety Institute (AISI), a key organization focused on AI risk assessment and policy development, is facing significant layoffs as part of broader cuts at the National Institute of Standards and Technology (NIST). Reports indicate that up to 500 employees could be affected, raising concerns about the future of AI safety efforts in the US.

According to Axios, both AISI and the Chips for America initiative—which also operates under NIST—are expected to be significantly impacted. Bloomberg further reported that some employees have already received verbal notifications about their impending terminations, which primarily target probationary employees within their first two years on the job.

AISI’s Future in Doubt Following Policy Repeal

Even before news of these layoffs surfaced, AISI’s long-term stability was uncertain. The institute was established as part of President Joe Biden’s executive order on AI safety in 2023. However, President Donald Trump repealed the order on his first day back in office, casting doubt on AISI’s role in AI governance. Adding to the instability, AISI’s director resigned earlier this month, leaving the institute without clear leadership at a time when AI regulation remains a global concern.

Experts Warn of AI Policy Setbacks

The reported layoffs have drawn criticism from AI safety and policy experts, who argue that cutting AISI’s workforce could undermine the US government’s ability to develop AI safety standards and monitor risks effectively.

“These cuts, if confirmed, would severely impact the government’s capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever,” said Jason Green-Lowe, executive director of the Center for AI Policy. With AI development rapidly advancing and regulatory discussions taking center stage worldwide, the potential downsizing of AISI raises concerns over the US’s role in global AI safety initiatives.

Uncertain Path Forward for AI Regulation

As the federal government reassesses AI safety priorities, the impact of these layoffs remains unclear. While AISI was positioned to guide AI regulation and set technical standards, its ability to function effectively may be severely limited if staffing reductions proceed as reported. Industry analysts warn that a lack of dedicated AI safety oversight could leave the US at a disadvantage in shaping international AI policies. Meanwhile, affected employees await formal confirmation of layoffs and potential restructuring plans within NIST.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

HP Acquires Humane: What It Means for the Future of AI Wearables

HP’s recent $116 million acquisition of Humane has sent ripples through the tech industry. The AI wearable startup, which had raised roughly $240 million in funding, has been acquired for less than half of that amount, signaling a major shift in the AI hardware space. The deal also comes with job offers for select Humane employees, while others have been let go. With Humane’s AI Pin officially discontinued, this raises questions about the future of AI-driven wearable technology and HP‘s plans for AI innovation. Let’s dive into the details.

Humane’s AI Pin: A Short-Lived Vision

Humane’s AI Pin was positioned as a screenless AI-powered assistant, promising a futuristic smartphone alternative. The $499 wearable aimed to leverage AI for daily tasks like messaging, calls, and web queries.

However, the device struggled due to:

  • High Price Tag – The $499 price made it less attractive than existing smart assistants.
  • Performance Issues – AI response times were slow, and cloud dependency limited functionality.
  • Limited Adoption – Consumers didn’t fully embrace the concept of screenless AI wearables.

With sales discontinued and cloud services shutting down by February 28, the Humane AI Pin is officially dead.

Why Did HP Acquire Humane?

HP’s decision to buy out Humane’s assets suggests the company sees value in AI wearables and computing. Potential reasons include:

  • AI Hardware Integration – HP may incorporate Humane’s technology into laptops, tablets, or smart accessories.
  • AI Research & Development – Humane’s AI models and patents could enhance HP’s AI-driven software and cloud services.
  • Enterprise & Consumer Applications – HP might reposition Humane’s AI assistant for business users rather than mainstream consumers.

What Happens to Humane’s Employees?

Following the acquisition, some Humane employees received job offers from HP, with salary increases ranging from 30% to 70%, stock options, and bonuses. However, many employees working closely with AI Pin development were laid off, indicating a shift in priorities.

What This Means for AI Wearables

The fall of Humane highlights key lessons for the future of AI-powered devices:

  • AI Hardware Needs Practicality – Consumers prefer AI features integrated into existing devices rather than standalone gadgets.
  • Cloud-Dependency is Risky – Relying on cloud services for core functionality limits usability.
  • Big Tech Dominates AI Innovation – Startups in AI hardware must compete with tech giants like Apple, Google, and Microsoft.

Final Thoughts: Is HP’s AI Bet Worth It?

HP’s acquisition of Humane raises an important question: Will AI wearables survive, or was Humane’s failure a sign that the market isn’t ready? With AI assistants like ChatGPT, Gemini, and Apple’s AI models becoming more powerful, the future of AI devices might lie in software rather than standalone wearables. Whether HP revives Humane’s vision or pivots entirely remains to be seen.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Debates over AI benchmarks have resurfaced following xAI’s recent claims about its latest model, Grok 3. An OpenAI employee publicly accused Elon Musk’s xAI of presenting misleading benchmark results, while xAI co-founder Igor Babushkin defended the company’s methodology. The controversy stems from a graph published by xAI showing Grok 3’s performance on AIME 2025, a benchmark based on complex mathematical problems. While some AI researchers question AIME’s validity as an AI benchmark, it remains a commonly used test for assessing AI models’ math capabilities.

The Missing Benchmark Data

In xAI’s chart, Grok 3 Reasoning Beta and Grok 3 mini Reasoning were shown to outperform OpenAI’s o3-mini-high model on AIME 2025. However, OpenAI employees quickly pointed out that xAI did not include o3-mini-high’s score at “cons@64.” The “cons@64” (consensus@64) metric allows a model to attempt each problem 64 times, selecting the most frequent response as the final answer. Since this significantly improves a model’s benchmark scores, omitting it from xAI’s comparison may have made Grok 3 appear more advanced than it actually is.

When comparing @1 scores (which measure a model’s first attempt accuracy), Grok 3 Reasoning Beta and Grok 3 mini Reasoning scored below OpenAI’s o3-mini-high. Additionally, Grok 3 Reasoning Beta trailed behind OpenAI’s o1 model set to “medium” computing, raising further questions about xAI’s claim that Grok 3 is the “world’s smartest AI.”
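
To make the distinction between the two scoring methods concrete, here is a rough sketch, assuming a hypothetical toy_model function with an invented answer distribution; it is purely illustrative and not how either lab actually evaluates its models. A model that is right on only 40% of single attempts (@1) will still return the correct answer almost every time under a 64-sample majority vote (cons@64), which is why leaving the cons@64 column out of a chart can change the picture so much.

    import random
    from collections import Counter
    from typing import Callable

    def consensus_at_k(model: Callable[[str], str], problem: str, k: int = 64) -> str:
        """cons@k: sample the model k times and return the most frequent answer."""
        answers = [model(problem) for _ in range(k)]
        return Counter(answers).most_common(1)[0][0]

    def correct_at_1(model: Callable[[str], str], problem: str, answer: str) -> bool:
        """@1: score a single attempt as right or wrong."""
        return model(problem) == answer

    # Hypothetical noisy model: right 40% of the time, otherwise a wrong guess.
    def toy_model(problem: str) -> str:
        return "70" if random.random() < 0.4 else random.choice(["68", "71", "35"])

    random.seed(0)
    print(consensus_at_k(toy_model, "AIME problem 1"))      # majority vote: usually "70"
    print(correct_at_1(toy_model, "AIME problem 1", "70"))  # single shot: right ~40% of runs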

xAI Defends Its Approach, OpenAI Calls for Transparency

Igor Babushkin, co-founder of xAI, responded on X, arguing that OpenAI has also presented selective benchmarks, though mainly when comparing its models. A third-party AI researcher attempted to provide a more balanced view by compiling a graph displaying various models’ performance at cons@64, aiming to offer a more transparent comparison. However, AI researcher Nathan Lambert pointed out a key missing element in the debate: computational cost. Without knowing how much computational power (and cost) was required for each model to achieve its best scores, benchmarking alone does not fully convey an AI model’s efficiency or real-world capabilities.

What’s Next for AI Benchmarks?

The dispute between xAI and OpenAI highlights ongoing challenges in AI benchmarking. As AI labs race to demonstrate superiority, the lack of standardized, transparent, and cost-aware metrics continues to fuel debates over how AI models should be evaluated. While xAI stands by its claims, OpenAI’s criticism raises questions about how AI companies should present performance results to avoid misleading comparisons. The broader AI community may need to push for more standardized evaluation methods to ensure fairness and accuracy in future AI model comparisons.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia founder and CEO Jensen Huang said the market got it wrong regarding DeepSeek’s technological advancements and its potential to impact the chipmaker’s business negatively. Instead, Huang called DeepSeek’s R1 open-source reasoning model “incredibly exciting” while speaking with Alex Bouzari, CEO of DataDirect Networks, in a pre-recorded interview that was released on Thursday.

“I think the market responded to R1, as in, ‘Oh my gosh. AI is finished,’” Huang told Bouzari. “You know, it dropped out of the sky. We don’t need to do any computing anymore. It’s exactly the opposite. It’s [the] complete opposite.”

Huang said that the release of R1 is inherently good for the AI market and will accelerate the adoption of AI, rather than meaning that the market no longer needs compute resources like the ones Nvidia produces.

“It’s making everybody take notice that, okay, there are opportunities to have the models be far more efficient than what we thought was possible,” Huang said. “And so it’s expanding, and it’s accelerating the adoption of AI.” He also pointed out that, despite DeepSeek’s advancements in pre-training AI models, post-training will remain important and resource-intensive.

“Reasoning is a fairly compute-intensive part of it,” Huang added.

Nvidia declined to provide further commentary. Huang’s comments come almost a month after DeepSeek released the open-source version of its R1 model, which rocked the AI market in general and seemed to hit Nvidia disproportionately hard. The company’s stock price plummeted 16.9% in a single trading day following the news.

According to data from Yahoo Finance, Nvidia’s stock closed at $142.62 a share on January 24. The following Monday, January 27, the stock dropped sharply and closed at $118.52 a share, a fall that wiped roughly $600 billion off Nvidia’s market cap. The stock has since almost fully recovered: on Friday it opened at $140 a share, meaning the company has regained most of that lost value in about a month. Nvidia reports its Q4 earnings on February 26, which will likely address the market reaction further. Meanwhile, DeepSeek announced on Thursday that it plans to open source five code repositories as part of an “open source week” event next week.
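As a quick sanity check on the percentage cited above, using the two closing prices reported here:

\[
\frac{142.62 - 118.52}{142.62} = \frac{24.10}{142.62} \approx 16.9\%
\]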

Read More: OpenAI to Shift AI Compute from Microsoft to SoftBank

OpenAI Blocks Accounts in China & North Korea Over Misuse

OpenAI has announced the removal of user accounts linked to China and North Korea. The company blocked the accounts because it believes they were being used for malicious activities such as surveillance and opinion-influence operations. The action underscores OpenAI’s commitment to ensuring its technology is used ethically and responsibly. OpenAI did not specify the total number of accounts banned or the time frame of the activity.

According to Reuters, which reported on the findings last Friday:
“The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations.”

Identified Malicious Activities

OpenAI’s internal investigation revealed several concerning practices:

Propaganda Generation: Some users employed ChatGPT to create Spanish-language articles critical of the United States. These articles were subsequently published in mainstream Latin American media under the guise of a Chinese company’s authorship.

Fraudulent Employment Schemes: Actors with potential ties to North Korea utilized AI to fabricate resumes and online profiles. The objective was to deceitfully secure employment within Western corporations.

Financial Fraud Operations: A network based in Cambodia leveraged OpenAI’s technology to produce translated content. This content was disseminated across platforms like X (formerly Twitter) and Facebook, aiming to perpetrate financial scams.

OpenAI’s Proactive Measures

To detect and counteract these malicious endeavors, OpenAI harnessed its own AI-driven tools. While the company has not disclosed the exact number of accounts affected or the specific timeline of these activities, its swift response highlights the challenges tech companies face in preventing malicious entities’ exploitation of AI technologies.

The U.S. government has previously voiced apprehensions regarding the potential for AI technologies to be harnessed by authoritarian regimes for purposes such as domestic repression, dissemination of misinformation, and threats to international security. OpenAI’s recent actions align with efforts to prevent such misuse and emphasize the importance of vigilant monitoring and regulation in the AI sector.

The Future of AI Security

As AI continues to evolve and integrate into various facets of society, ensuring its ethical application remains paramount. OpenAI’s recent measures testify to the ongoing efforts required to safeguard technology from being weaponized for malicious intents.

Read More: OpenAI launched Deep Research, ChatGPT’s new AI agent for advanced level research

Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals

A recent study by the German digital rights organization Eko has revealed that Meta and X (formerly Twitter) approved advertisements containing violent anti-Muslim and antisemitic hate speech ahead of Germany’s federal election on February 23, 2025. These findings raise significant concerns about the platforms’ content moderation practices and their potential impact on the electoral process.

Eko’s investigation involved submitting deliberately harmful political ads to Meta and X to assess their ad approval systems. Alarmingly, X approved all 10 of the submitted hate speech ads, while Meta approved five out of ten, despite both companies’ policies prohibiting such content. Some ads featured AI-generated imagery depicting hateful narratives without disclosing their artificial origin. Meta’s policies require such disclosures for social issues, elections, or political ads, yet half of these undisclosed AI-generated ads were still approved.

Elon Musk’s Involvement in German Politics

In addition to platform-specific issues, Elon Musk, the owner of X, has actively engaged in Germany’s political discourse. In December 2024, Musk tweeted, ‘Only the AfD can save Germany,’ expressing support for the far-right Alternative für Deutschland (AfD) party. He also hosted a live stream with AfD leader Alice Weidel on X, providing the party with a significant platform during the election period.

The Digital Services Act and EU Investigations

In addition, Meta failed to enforce its own AI content policies. Some of the submitted ads contained AI-generated imagery depicting hateful narratives, yet Meta approved half of these without requiring disclosure that AI was used—a direct contradiction to its policy mandating transparency for AI-generated political content.

“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” Eko said.

Eko has submitted its findings to the European Commission, which oversees the DSA’s enforcement. The organization argues that neither Meta nor X fully complies with the act’s hate speech and ad transparency provisions. This aligns with Eko’s prior investigation in 2023, which similarly found Meta approving harmful ads despite the DSA’s impending implementation.

“Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” an Eko spokesperson said. The statement points to Meta’s recent decisions to scale back its fact-checking and moderation policies, which they argue could place the company in direct violation of the DSA.

Potential Penalties Under the DSA

Violations of the DSA could lead to significant penalties, including fines of up to 6% of a company’s global annual revenue. If systemic non-compliance is proven, regulators could even impose temporary access restrictions on platforms within the EU. However, the EU has yet to finalize its decisions on Meta and X, leaving the possibility of enforcement actions uncertain.

Civil Society Organizations Raise Alarm Over Election Security

With Germany’s election imminent, digital rights groups warn that the DSA has not provided adequate protection against tech-driven election manipulation. A separate study from Global Witness found that algorithmic feeds on X and TikTok favor AfD content over other political parties. Researchers have also accused X of limiting data access, preventing independent studies on election-related misinformation—despite the DSA requiring platform transparency.

“Big Tech will not clean up its platforms voluntarily,” Eko’s spokesperson stated. “Regulators must take strong action—both in enforcing the DSA and implementing pre-election mitigation measures.”

Will Regulators Step In Before the Election?

As German voters prepare to go to the polls, pressure is mounting on EU regulators to act swiftly to prevent further disinformation and hate speech from spreading online. Despite calls for intervention, neither Meta nor X has publicly responded to Eko’s latest findings. With election integrity at stake, the question remains: Will Meta and X adjust their policies in response to regulatory pressure, or will the EU take more decisive action to enforce compliance?

Read More: Meta Rolls Out Community Notes on Facebook, Instagram, and Threads

X Blocks Signal Links – A Threat to Digital Freedom and Privacy?

X (formerly Twitter) has ignited controversy by blocking links to Signal.me, a domain linked to the widely used encrypted messaging app Signal. Users attempting to share these links are met with error messages suggesting they are spam or potentially harmful. However, links to Signal.org remain unaffected, leading to concerns that this selective restriction is an intentional move against encrypted communication rather than a broad moderation policy.

Why Is X Targeting Signal?

Signal has built a reputation as a go-to app for private messaging, relied upon by activists, journalists, and government officials seeking secure conversations. The app’s end-to-end encryption makes it an essential tool in an era where digital surveillance is an increasing concern.

Cybersecurity expert Matthew Green, a cryptography professor at Johns Hopkins University, expressed skepticism about X’s justification for blocking Signal links: “Blocking Signal links under the guise of ‘spam prevention’ is highly suspect. It’s hard not to see this as an attempt to make secure communication harder for people who rely on it.”

This move comes amid X’s history of restricting links to competing platforms like Facebook, Instagram, and Mastodon. However, Signal is not a direct competitor to X—it’s a private communication tool, not a social networking platform. So why would X take issue with Signal?

A Crackdown on Whistleblowers?

Some analysts believe this could be part of a broader effort to limit whistleblower activity, particularly among federal employees who have been increasingly using Signal to communicate privately about internal government matters. Reports indicate that employees in various agencies, including those under Elon Musk’s oversight, have turned to Signal to discuss concerns regarding internal policies, inefficiencies, and alleged misconduct.

Musk has previously hinted at developing his own encrypted messaging service within X, leading some to speculate whether this move is intended to direct users toward X’s own proprietary communication tools instead of an independent and secure platform like Signal.

What’s Next for Digital Privacy?

X’s blocking of Signal links marks a worrying shift in digital communication policies. If X can arbitrarily suppress access to privacy-focused tools, what’s stopping other platforms from doing the same? Social media platforms have evolved into public communication spaces, and restricting encrypted messaging services could have widespread implications for journalists, activists, and privacy-conscious individuals.

For now, X has not officially commented on whether this block is intentional or a technical oversight. Users looking to share their Signal contacts will have to resort to workarounds, such as sharing usernames directly. However, the lack of transparency raises an important question.

Read More: Elon Musk’s AI Revolution Continues as xAI Unveils Grok 3 AI Model

What’s Next for TikTok in the U.S.? Billion-Dollar Bids and High-Stakes Battles

Fate has not been kind to TikTok in the U.S. lately, with legal battles, political reversals, and billion-dollar bidding wars playing out almost daily. One day it is banned, and the next day it is back with presidential blessings. Now, as several high-profile investors take a stab at the viral platform, the question remains: who will end up owning an app that creates trends, fuels influencer careers, and keeps millions scrolling at 2 AM? As the battle over TikTok heats up, let’s break down what has happened so far and who has eyes on buying it.

TikTok’s Ongoing Controversy:

The last four years have been a storm of discontent for TikTok, which has faced intense debate in the U.S. The app is owned by a Chinese company, ByteDance, raising worries that the Chinese government might gain access to its users’ data. This has led to legal actions, executive orders, and now, perhaps, the forced sale of TikTok’s U.S. operations. Adding to the uncertainty, TikTok suffered a brief outage in the U.S. last month, leaving millions of users in suspense. The app was restored quickly, but the episode reminded users how fragile its foothold in the country is.

According to Angelo Zino, Senior Vice President at CFRA Research, TikTok’s U.S. business could be worth more than $60 billion, given the U.S. government’s demand that ByteDance sell the app or see it banned. Several buyers have emerged hoping to snap up one of the most influential social media platforms in the world.

TikTok’s Ban:

To make sense of TikTok’s uncertain future, we must revisit the critical events that have shaped its bumpy relationship with the U.S. government. In August 2020, President Donald Trump signed an executive order prohibiting transactions with ByteDance, effectively seeking to ban the app in the U.S. The administration pushed for a forced sale of TikTok’s U.S. operations, with Microsoft, Oracle, and Walmart lining up as potential buyers. A U.S. judge temporarily blocked Trump’s executive order, allowing TikTok to continue operating while legal proceedings played out.

Bipartisan efforts to address national security concerns related to TikTok continued under former President Joe Biden. In April 2024, the U.S. House of Representatives passed legislation directly targeting TikTok, which later made its way through the Senate. President Biden signed the bill into law, making it mandatory for TikTok to either sell itself or be prohibited in the country.

In response, TikTok sued the U.S. government, claiming that the ban was unconstitutional and a violation of First Amendment rights. The U.S. Supreme Court upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), popularly known as “the TikTok ban.”

Trump’s Reversal and Temporary Extension:

Astonishingly, Trump submitted a court statement opposing the expected ban on TikTok, suggesting that he wanted to find a way to keep the app in the U.S. In the aftermath of the Supreme Court’s ruling, TikTok briefly shut down in the U.S., then came back online less than 12 hours later with a statement that, “As a result of President Trump’s efforts, TikTok is back in the U.S.”

On January 20, Trump signed an executive order delaying the ban for 75 days, which will give TikTok additional time to sell a stake or sign some kind of agreement. Trump has proposed a 50-50 ownership deal between ByteDance and a U.S company, although nothing has been finalized yet.

Potential Buyers for TikTok in US:

Several potential investors and companies have emerged as prospective buyers of TikTok’s U.S. operations. Here is who is contending for ownership:

1. The People’s Bid for TikTok:

Organized by Project Liberty founder Frank McCourt, The People’s Bid seeks to promote open-source initiatives that prioritize privacy and user control of data. Kevin O’Leary, an investor and television personality, joined The People’s Bid on January 6 and had previously signaled interest in buying TikTok for $20 billion. Tim Berners-Lee, inventor of the World Wide Web, who has said that “users should have an ability to control their own data,” is also part of the bid, along with David Clark, a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory.

2. American Investor Consortium:

Headed by Jesse Tinsley, CEO of Employer.com, this consortium recently issued an all-cash $30 billion bid for TikTok’s U.S. operations. Participants include David Baszucki, co-founder and CEO of Roblox; Nathan McCauley, CEO of crypto platform Anchorage Digital; and Jimmy Donaldson (MrBeast), the famous YouTube content creator.

3. Other Interested Buyers:

Other interested buyers include:

  • Bobby Kotick – The former Activision CEO, reportedly interested in how TikTok could be integrated into gaming.
  • Steven Mnuchin – Back in the discussion after serving as U.S. Treasury Secretary under Trump.
  • Oracle – Tried to acquire TikTok in 2020, and co-founder Larry Ellison is said to still have an interest.
  • Walmart – Expressed interest in 2020 and could see value in TikTok’s e-commerce potential.
  • Microsoft – A top contender in 2020 that has reportedly renewed its interest.
  • Rumble – The alternative video-sharing platform, which wants to purchase TikTok and become its cloud technology partner.
  • Perplexity AI – Submitted its own bid last month.

TikTok’s Future in US:

The future of TikTok in the U.S. will be determined in the months ahead, as it becomes clear whether the app will continue under new ownership or face another legal battle. The outcome could reshape how millions of creators and businesses use social media and set a precedent for how the U.S. treats foreign-owned digital platforms. What is certain is that TikTok will shape not only the future of social media but also the broader conversation about data privacy, tech regulation, and digital influence.

Read More: TikTok Returns to App Stores in the US

Apple Maps May Introduce Google-Style Ads to Expand Its Revenue Stream

The iPhone maker is envisioning a future where navigation becomes a revenue-generating machine through advertisements. Apple Inc. may open a new front in its advertising business by introducing ads into Apple Maps, similar to the business model of Alphabet Inc.’s Google Maps. This could prove a major shift in the company’s monetization strategy: if it works, Apple Maps might soon resemble Google Maps as much in monetization as in accuracy. The move also extends Apple’s growing ambitions in digital advertising, an area the company has been building out slowly but steadily.

Apple Maps Monetization under Consideration:

The idea of paid advertising integrated within Apple Maps was raised during a recent Apple all-hands meeting. As Mark Gurman in his newsletter ‘Power On’ said, “During a recent all-hands meeting, Apple’s Maps division revisited the idea of monetizing its navigation app”. This would enable businesses to buy positions above organic results, something similar to ads in Google Maps.

Apple has previously tested ads in several formats, with search ads in the App Store, advertisements in Apple News, and ads in the Stocks app already part of its strategy. Extending this to Apple Maps would create another revenue source while aligning with Apple’s broader goal of growing services income.

Revenue Significance:

User behavior has shifted rapidly alongside the technology. In a 2023 antitrust case against Google, evidence showed that Google Maps usage on iPhones had declined significantly since Apple switched to its own mapping service; Google had regained only about 40% of its historic mobile traffic, indicating that Apple has been growing its footprint in navigation services.

Apple’s financials also reflect a growing share of revenue coming from services. For its fiscal first quarter of 2025, Apple reported $124.3 billion in revenue, above the $124.13 billion analysts had forecast. Notably, services revenue rose to $26.34 billion from $23.12 billion a year earlier, reflecting the company’s push toward subscription and ad-based monetization.
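For context, a quick back-of-the-envelope reading of those figures (my arithmetic, not Apple’s reporting):

\[
\frac{26.34}{124.3} \approx 21\% \ \text{of quarterly revenue from services}, \qquad \frac{26.34 - 23.12}{23.12} \approx 13.9\% \ \text{year-over-year growth in services}
\]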

Market Implications and Stock Market Response:

Ads in Apple Maps would bring major changes to a digital advertising arena long dominated by Google Maps in location-based search. Advertisers may shift part of their budgets toward Apple’s ecosystem, which has a dedicated user base and stricter privacy policies that appeal to privacy-conscious consumers. Apple’s privacy-first moves, such as App Tracking Transparency (ATT), have already reshaped mobile advertising. Ads in Apple Maps would raise the question of how the company intends to generate revenue while keeping user privacy intact.

Apple’s stock barely reacted to the news, registering a 0.02% after-hours increase to $244.65 following a 1.27% rise to $244.60 at Friday’s close. Investors appear cautiously optimistic about Apple’s entry into location-based advertising.

Apple’s growing services:

If Apple pursues advertising monetization within Maps, it would change how organizations reach users through navigation apps. While the move represents a major opportunity for Apple’s growing services business, it also raises critical questions about user experience and privacy.

If Apple follows through on ads in Maps, it could usher in a new era for Apple advertising, one that competes directly with Google’s immensely profitable search-based advertising model. Some users might appreciate relevant recommendations, while others could find them intrusive. As Apple dives further into services and advertising, iPhone users should expect to be monetized even further in the apps they use every day. The industry will be watching as Apple explores this potential revenue stream to see how it affects competition with Google and the digital advertising marketplace in general.

Read More: Apple to Bring AI Features and Spatial Content App to Vision Pro

South Korea Suspends New Downloads of DeepSeek over Data Privacy Concerns

Suppose you download an AI chatbot to brainstorm ideas, answer questions, and perhaps even crack a joke, only to find that it has been pulled over data privacy issues. That is the exact situation facing South Korean users of DeepSeek, the Chinese AI app now under regulatory scrutiny. Its web service continues to work, but downloads for new users are suspended until the app complies with South Korean data protection law. South Korean regulators have, in effect, hit DeepSeek with the digital equivalent of a “Try again later” sign.

South Korea has halted further downloads of the Chinese artificial intelligence app DeepSeek over concerns that it breaches the country’s data privacy laws. The Personal Information Protection Commission (PIPC) said the suspension took effect on Saturday and would continue until DeepSeek is modified to conform to South Korea’s privacy law.

The AI-enabled chatbot, meanwhile, can still be accessed in South Korea through its web-based service. Scrutiny of how DeepSeek handles user data continues, set against global concerns about how AI applications collect data and protect privacy.

Regulatory Scrutiny:

According to the PIPC, DeepSeek acknowledged that it had, in certain respects, failed to account for Korea’s personal data protection laws. The company has appointed legal representatives in South Korea and says it is prepared to address the regulatory concerns. The move follows a similar step taken last month by Italy’s data protection authority, the Garante, which ordered DeepSeek to suspend its chatbot service in that country over unresolved privacy policy concerns.

Such incidents illustrate the increasing regulatory pressures bearing on AI startups, as governments around the world tighten oversight on the collection, storage, and use of personal data. It seems like AI chatbots are great at solving problems but terrible at avoiding them.

China’s Response:

Regarding the situation in South Korea, a spokesperson for China’s foreign ministry stressed Beijing’s commitment to data security and international obligations, insisting that China would never order firms or individuals to unlawfully collect or store data. Though DeepSeek has not yet issued a formal statement on the South Korean suspension, its regulatory challenges mirror those faced by many other Chinese tech companies in international markets.

Sensitivity towards Data Compliance:

DeepSeek’s regulatory troubles reflect intensifying scrutiny from governments worldwide over AI services and user data. As privacy regulations tighten, companies like DeepSeek must balance innovation with data compliance. Existing users can still reach the chatbot through its web service, but the halt on new downloads signals that local privacy requirements must be met. Whether DeepSeek can resolve the issues quickly remains to be seen, but one thing is clear: the future of AI depends on regulatory approval as much as on technological advancement.

Given the concerns being raised about AI data collection and national security risks, compliance and the ability to navigate regulation will be decisive factors in DeepSeek’s future global operations. For now, users in South Korea who had already installed DeepSeek can continue to use it, while new downloads remain blocked until the company adapts to domestic privacy rules.

Read More: TikTok Returns to App Stores in the US and the Ownership Battle Continues

EU’s AI Regulation Shift: A Strategic Advantage for U.S. Tech Giants?

The European Union (EU) is reassessing its approach to artificial intelligence (AI) regulations, which could create significant opportunities for Apple, Google, Microsoft, and other major U.S. technology companies. According to a report by the Financial Times,

“The European Union is looking to scale back certain regulatory restrictions on AI to attract more investment and boost competitiveness in the global AI sector. Henna Virkkunen, the European Commission’s digital policy chief, emphasized that the EU’s objective is to ‘help and support’ AI-driven businesses while ensuring that compliance obligations do not create unnecessary barriers to growth.”

This potential shift in policy comes as the EU faces increasing pressure to balance technological advancements with regulatory oversight. The proposed AI Act, which categorizes AI technologies based on their risk levels, imposes stricter regulations on high-risk models such as GPT-4 and Google Gemini. However, the latest discussions indicate that the European Commission may seek to minimize reporting obligations for European businesses to prevent excessive regulatory burdens.

EU’s Changing AI Policy and Industry Reactions

Henna Virkkunen, the European Commission’s digital policy chief, stated in an interview with Euractiv that the EU’s goal is to “help and support” companies while ensuring responsible AI development. She emphasized that European businesses should not be overwhelmed by compliance requirements that could hinder their global competitiveness.

In a parallel development, the European Commission has withdrawn a proposed AI liability directive, signaling an effort to streamline AI regulations. An upcoming AI code of practice, expected to be introduced in April 2025, aims to align existing AI laws with practical industry requirements.

However, this regulatory shift has drawn mixed reactions. U.S. officials have expressed concerns over Europe’s AI governance model, arguing that overregulation could stifle innovation. Speaking at an AI summit in Paris, U.S. Vice President JD Vance criticized Europe’s content moderation policies, calling them “authoritarian censorship,” and warned that excessive restrictions could undermine the potential of AI-driven industries.

A Competitive Landscape for AI Development

With the U.S. maintaining a flexible regulatory approach, analysts suggest that the EU’s move may reflect a response to growing competition in AI leadership. U.S. President Donald Trump’s administration maintained a pro-business stance on AI, which some believe has indirectly influenced Europe’s evolving AI policies.

While the EU maintains that its regulatory changes are independent of U.S. influence, the timing raises questions about whether the continent seeks to attract more AI investment and prevent businesses from shifting operations elsewhere. The next few months will determine whether these adjustments will benefit European tech firms or strengthen U.S. tech dominance in the region. The April 2025 AI code of practice will provide further insights into the EU’s long-term AI strategy, shaping the future of AI governance, industry innovation, and global competition.

Read More: Elon Musk Announces Live Demonstration of Grok 3 AI Chatbot

North Carolina Amazon Labor Union Vote Fails as Workers Reject Unionization

Garner, NC – Workers at an Amazon fulfillment center in Garner, North Carolina, have voted against forming a union, marking a significant setback for labor organizers seeking to expand union representation within the e-commerce giant. According to Carolina Amazonians United for Solidarity and Empowerment (CAUSE), the group advocating for unionization, 3,276 ballots were cast in the election. The final count revealed that only 25.3% of workers voted in favor of unionizing, while 74.7% opposed the effort; the National Labor Relations Board (NLRB) is set to certify the results.

Amazon Labor Union Vote Faces Strong Opposition 

In a statement provided to CNBC, CAUSE strongly accused Amazon of suppressing unionization efforts through what it described as illegal tactics.
“Amazon’s relentless and illegal efforts to intimidate us prove that this company is afraid of workers coming together to claim our power.”  The group alleges that Amazon violated labor laws to discourage union support, though no formal legal action has been announced in response to the election results.

Amazon Maintains Workers’ Decision Was Fair 

Amazon, which has long opposed unionization within its workforce, denied any wrongdoing. Company spokesperson Eileen Hards responded to the outcome, stating,
“We’re glad that our team in Garner was able to have their voices heard and that they chose to keep a direct relationship with Amazon.”

While the North Carolina warehouse rejected unionization, other Amazon locations have seen successful labor organization efforts. In 2022, workers at a Staten Island warehouse voted to form a union, marking a historic victory for organized labor within Amazon. Earlier this year, employees at a Philadelphia Whole Foods, an Amazon-owned grocery chain, also voted to unionize. However, Whole Foods has contested the results and petitioned the NLRB to overturn them.

Amazon’s Broader Legal Battles with the NLRB 

Beyond labor disputes, Amazon’s legal team recently joined SpaceX in a lawsuit challenging the structure of the NLRB. The companies argue that the board’s framework is unconstitutional, signaling a broader pushback against federal labor regulations.

What’s Next for Labor Organizers After the Vote? 

Despite the loss in North Carolina, CAUSE and other labor groups are expected to continue pushing for unionization efforts in Amazon facilities across the country. The outcome in Garner may serve as a case study for future campaigns, particularly regarding Amazon’s response to organizing efforts. With ongoing legal battles and a shifting labor landscape, the fight over unionization at Amazon is far from over.

Read More: Earnings Report Shock: Amazon’s Stock Plunges as AWS Growth Slows

Elon Musk’s $97.4 Billion Offer to Acquire OpenAI Rejected

Tech billionaire Elon Musk, co-founder and former board member of OpenAI, recently made a staggering $97.4 billion offer to acquire full control of the artificial intelligence company. However, OpenAI’s board rejected the bid, citing concerns over its mission, autonomy, and ethical considerations. In an official statement released through OpenAI’s press account on X, board chair Bret Taylor described Musk’s offer as a deliberate attempt to disrupt a competitor.

“OpenAI is not for sale, and the board has unanimously rejected Mr. Musk’s latest attempt to disrupt his competition,” Taylor said. “Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure [artificial general intelligence] benefits all of humanity.”

The New York Times reported that OpenAI also addressed a letter to Musk’s attorney, Marc Toberoff, stating that the proposal did not align with OpenAI’s mission and was not in its best interests. The decision has sparked widespread debate over the future of AI governance and Musk’s ambitions in the AI industry.

Why Did OpenAI’s Board Reject Musk’s Offer?

1. Preserving OpenAI’s Independence

OpenAI’s leadership believes that Musk’s takeover would jeopardize the organization’s autonomy, potentially shifting its priorities toward his business interests, mainly his AI venture, xAI. By rejecting the offer, the board aims to maintain control over its research direction and prevent external influence from dominating decision-making.

2. Conflict Between Mission-Driven and Profit-Driven Goals

Originally founded as a nonprofit, OpenAI transitioned into a capped-profit model to balance funding needs and ethical AI development. The board fears that Musk’s leadership could tilt the company towards a profit-driven agenda, undermining its commitment to developing AI for the broader good rather than commercial gain.

3. Musk’s History with OpenAI

Musk resigned from OpenAI’s board in 2018 after an unsuccessful attempt to take control of the company. The board considers this past power struggle as a key factor in rejecting his current bid, viewing it as a continuation of his previous efforts to dominate OpenAI’s direction.

4. Tensions with Microsoft and AI Ecosystem

OpenAI has a major partnership with Microsoft, which has invested billions in the organization. If Musk were to gain control, it could disrupt this collaboration, leading to legal and financial complications. The board is also concerned about potential conflicts between OpenAI’s roadmap and Musk’s competing AI firm, xAI.

5. Legal Risks and Regulatory Concerns

Musk has had numerous legal disputes and regulatory challenges in the past, including with the SEC and Tesla. OpenAI’s leadership fears that his control could introduce unnecessary instability, regulatory scrutiny, and delays in AI safety frameworks that are crucial for the industry’s responsible development.

Musk’s Response and Industry Reactions

Following the rejection, Musk expressed his displeasure, criticizing OpenAI for abandoning its original mission of creating open-source AI. He has also hinted at further legal action or alternative AI strategies, reinforcing his commitment to advancing artificial intelligence through xAI and other ventures.

Industry experts remain divided on the issue. Some argue that Musk’s resources and expertise could have accelerated OpenAI’s innovations, while others support the board’s stance on keeping AI development independent from corporate dominance.

What’s Next for OpenAI?

With Musk’s bid off the table, OpenAI will likely continue its current trajectory with Microsoft’s backing and expand its AI capabilities while maintaining governance safeguards. The rejection signals the board’s commitment to AI safety and ethical considerations, ensuring that advancements align with their foundational mission. As the AI race intensifies, OpenAI’s decision will shape the broader debate on who controls AI, how it is developed, and whether it remains a force for public benefit or corporate interests.

Read More: OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

TikTok Returns to App Stores in the US and the Ownership Battle Continues

The odyssey of TikTok in the United States has played out like a high-stakes reality show, with courtroom battles, political backroom dealings, and last-minute plot twists. After a month-long disappearing act, the app has made its grand re-entry to U.S. app stores, but the drama might not be over yet. With national security concerns, executive orders, and the prospect of U.S. ownership looming, the TikTok saga continues to unfold like a binge-worthy series.

TikTok has returned to app stores in the United States, marking yet another chapter in its fight to survive. For nearly a month, the hugely popular short-video app was not available for download. It re-entered the U.S. market today when Apple and Google reinstated it in their respective app stores, following the earlier temporary ban on national security grounds. Other ByteDance apps, including the video editor CapCut and the social media platform Lemon8, were also restored. However, with no certainty about whether it will be bought by one of the many tech giants lining up for an apparent bidding war, the app’s future hangs in the balance.

TikTok’s Ban:

Government scrutiny of TikTok has been ongoing in the U.S. for several years, driven by national security concerns about ByteDance’s Chinese ownership. In 2024, former President Biden signed a law requiring ByteDance to divest TikTok’s U.S. operations by January 19, 2025, or see the app outlawed altogether.

The law threatened significant financial penalties against app store operators that failed to comply, prompting Apple and Google to pull TikTok from their platforms. ByteDance challenged the ruling, but on January 17, the Supreme Court upheld the law, obligating TikTok to find a non-Chinese buyer or be permanently banned in the United States.

Trump’s Order and TikTok’s Return:

However, things turned around on January 20 when the newly inaugurated President of the U.S., Donald Trump, signed an executive order putting the ban on hold for 75 days. This gave ByteDance more time to find a buyer for TikTok’s U.S. operations and renewed support for the app from service providers like Oracle.

That was not the end of the road, however: Apple and Google initially postponed TikTok’s return to their stores because of uncertainty about the legal penalties they might face under the law’s deferral. This created an odd situation in which users who already had TikTok installed could keep using it, while those who had uninstalled it could not download it again. With the tech industry responding quickly to the prospect of U.S. ownership of TikTok, competition around short-form video is seeing one of its greatest upsurges yet.

Trump has floated a deal in which the government would own a 50% stake jointly with partner companies, with a sovereign wealth fund set up as part of the arrangement to help fund the purchase of TikTok’s U.S. operations; Oracle and Microsoft are reportedly notable contenders for the acquisition. TikTok’s rivals, meanwhile, have been quick to press their advantage: social platforms such as X and Bluesky have introduced dedicated vertical video feeds to appeal to TikTok users, while Meta announced that it would develop a video-editing app to compete with CapCut.

TikTok’s Popularity and Beyond:

TikTok remains one of the most popular apps in the United States. According to analytics company Sensor Tower, it was the second most downloaded app in the U.S. in 2022, with 52 million downloads. While TikTok’s immediate future in the U.S. seems secure, its long-term fate depends on whether ByteDance can negotiate a sale that satisfies regulators.

There is little doubt that the app’s popularity will be affected by all these factors, but even more by how the company, U.S. regulators, and prospective investors navigate what will be a complex political and legal maze. All eyes will be on the negotiations among ByteDance, U.S. tech giants, and the federal government as the clock runs down on the 75-day extension. The one certainty is that this is not the final twist, and the world will be watching to see what happens next.

Read More: Elon Musk’s Battle to Buy OpenAI

Elon Musk’s Battle to Buy OpenAI, Five Crucial Insights from His Offer Letter

In the life of tech billionaires, drama tends to arrive faster than a software update. Musk’s latest move is a bid of nearly $97 billion to reclaim OpenAI, a company he once championed but is now suing. While OpenAI CEO Sam Altman brushed the offer aside, court filings reveal the details of Musk’s proposal and tell the wider saga of lawsuits, power struggles, and strategic chess moves that have entangled two of the most powerful names in the AI industry.

An investment syndicate led by Elon Musk’s xAI has proposed an unsolicited offer of $97.4 billion to acquire OpenAI. Altman quickly dismissed the offer, seeing it as a maneuver to obstruct OpenAI’s transition away from nonprofit control, a transition Musk is also challenging in his own lawsuit. In a legal filing on Wednesday, Altman’s team argued that Musk’s position is contradictory: he is attempting to buy OpenAI’s assets while also trying to prevent its conversion to a for-profit structure.

Musk’s team countered that they would withdraw the offer if OpenAI ceased its efforts to move away from its nonprofit status. Musk’s entire letter of intent to buy OpenAI was published as part of this legal turmoil, opening a window onto his plan and motivation.

Here are five key details from Musk’s offer letter:

1. Deadline for the Offer:

The unsolicited bid from Musk’s consortium carries a firm expiration date of May 10, 2025. It lapses earlier only if the parties finalize a deal, mutually agree to end discussions, or OpenAI explicitly rejects the offer in writing.

Altman has publicly dismissed such offers (including with a humorous counteroffer to buy X at a tenth of the price), but OpenAI has yet to issue an official rejection statement. Because of legal requirements, even offers between competitors must be given consideration before being dismissed outright.

2. Cash Transaction:

Musk’s financing group, which includes notable venture investors such as Joe Lonsdale’s 8VC and SpaceX backer Vy Capital, has offered $97.375 billion entirely in cash. That is notable because Musk has previously borrowed to finance such acquisitions, including roughly $13 billion from banks for his 2022 purchase of Twitter, even though his self-proclaimed fortune stands at about $400 billion, boosted in part by gains since Donald Trump’s election victory. Interestingly, the letter names seven investors, including Musk’s xAI, alongside others left unnamed, implying that Musk is not relying solely on his own wealth to finance the deal.

3. Access to all Financial and Operational Data:

Musk’s consortium demands full access to OpenAI’s financial records, assets, employees, and business operations before committing to such an enormous purchase. The letter spells this out, citing “assets, facilities, equipment, books, and records.”

This has caused friction. Such due diligence is normal in major transactions, but an acquisition review of this depth would expose OpenAI’s most sensitive internal and cutting-edge information, and because xAI competes directly with OpenAI, granting it that level of access raises a possible conflict of interest.

4. Undermining Musk’s Lawsuit:

Musk’s legal battle against OpenAI revolves around his contention that OpenAI’s assets can never be “transferred away” for private gain. However, in a filing on Wednesday, OpenAI’s lawyers pointed out that Musk’s offer contradicts this claim and characterized the acquisition effort as an attempt to weaken a competitor. They said, “The offer isn’t serious, but an improper bid to undermine a competitor.”

According to OpenAI, the offer is not genuine and was strategically timed to complicate its privatization. Musk’s camp insisted otherwise, claiming that the bid was legitimate and that funding would be funneled straight into OpenAI’s nonprofit purpose.

5. Musk’s withdrawal from the offer:

Musk’s legal team stated that he would withdraw the offer if OpenAI’s board halted its conversion away from nonprofit control. The statement reinforced speculation that Musk’s bid was aimed less at buying OpenAI outright than at driving up the price its leadership would have to pay for the nonprofit’s assets in any private conversion.

A legal representative for OpenAI’s board dismissed Musk’s offer, saying that “Musk’s bid doesn’t set a value for [OpenAI’s] non-profit” and that the nonprofit is not for sale.

The Repercussions:

This adds to the already complicated legal and financial drama surrounding OpenAI, which is still far from resolved. OpenAI’s rejection of the bid could give Musk further grounds to challenge the legitimacy of its nonprofit conversion; on the other hand, if OpenAI accepts or seriously considers the offer, it risks trouble over its governance. Either way, whether Musk’s offer is a genuine attempt to acquire OpenAI or just a tactic within his legal showdown, it has put OpenAI in a difficult position.

One wonders whether the billionaire truly wants to acquire OpenAI or is using the bid as a bluff to disrupt its transition away from nonprofit status. One thing is clear: this is not merely a corporate dispute but a critical moment in deciding what artificial intelligence becomes and what big tech makes of it in the future. As both sides continue their game of legal and financial chess, the world waits to see who blinks first.

Read More: OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

Elon Musk’s $97.4B Bid for OpenAI Sparks Controversy and Industry Shockwaves

Artificial intelligence is advancing by the day, and few moves have been as audacious as Musk’s latest: an eye-popping $97.4 billion bid for OpenAI. The bid has sent shock waves through the tech industry, and the rest of the tech world is understandably confused, since OpenAI’s board says it has not received a formal offer.

The feud between Elon Musk and OpenAI has taken another turn this week, as sources reveal the OpenAI board has yet to receive a formal takeover offer by the billionaire consortium. However, Musk’s lawyers assert that the offer was sent, and OpenAI stands firm that the nonprofit operating ChatGPT is not for sale.

The Bid:

The day after Musk publicly made his offer of $97.4 billion to buy OpenAI, doubts remained as to whether the board has even seen it or not. An anonymous source close to the situation alleged that OpenAI’s board has not received a formal offer, contradicting claims made by Musk’s attorney, Marc Toberoff. Toberoff asserts that a bid in the form of a four-page Letter of Intent was sent via email to OpenAI’s outside counsel at Wachtell, Lipton, Rosen & Katz. The law firm, however, has yet to confirm or deny receipt of the document. Toberoff said, “The bid – attached to an email – was in the form of a detailed four-page Letter of Intent to purchase OpenAI’s assets, signed by Musk and other investors and addressed to the board”. He further emphasized, referring to OpenAI’s CEO, “Whether Sam Altman chose to provide or withhold this from OpenAI’s other Board members is outside of our control”.

OpenAI CEO Sam Altman fiercely refuted Musk’s proposal. Addressing the audience at an AI summit in Paris, Altman dismissed the bid as “ridiculous,” arguing that OpenAI’s nonprofit side is not for sale. Altman said, referring to Musk, “The Company is not for sale. It’s another one of his tactics to try to mess with us”. This suggested that Musk’s bid was more for disruption than legitimate acquisition.

OpenAI’s Transformation:

Co-founded by Musk, OpenAI was launched in 2015 as a nonprofit with a mission to pursue the responsible development of AI. Musk parted ways with the company over disagreements about its vision and funding mechanisms. Since then, OpenAI has grown into one of the most prominent organizations in AI, raising billions as it pursues its for-profit transformation.

OpenAI now finds itself in a maze of legal and regulatory complications as it seeks to raise $40 billion while moving from nonprofit status to a capped-profit structure. The attorney general of Delaware, Kathy Jennings, is reviewing whether OpenAI’s transformation into a profit-oriented organization fits the charitable purpose it was designed for, or whether it would let current leadership pursue commercial objectives instead of benefiting the general public. She said, “I am reviewing OpenAI’s proposed changes to ensure the company is adhering to its specific charitable purposes for the benefit of the public beneficiaries, as opposed to the commercial or private interests of OpenAI’s directors or partners.”

Meanwhile, Musk is building his own AI venture, promoting xAI as a competitor to OpenAI. His recent attempt to acquire OpenAI now throws a wrench into ongoing discussions about OpenAI’s valuation and the fair market value of the assets held by its nonprofit.

Challenges and Market Implications:

Legal experts say Musk’s offer could indirectly establish a price benchmark for OpenAI’s nonprofit assets. Robert Weissman, co-president of the consumer rights group Public Citizen, pointed out that regulators would be obliged to ensure any transfer of those assets takes place at fair market value if OpenAI’s nonprofit cedes control of the process. Weissman said, “It does help set a price point for the thinking about the valuation of the nonprofit assets. If it were to occur as proposed, the regulators have a duty to ensure that if there’s a selloff of assets to a for-profit entity, that fair market value is obtained.”

In the meantime, amid the volatility, OpenAI still has to secure funding and push its AI research forward. For now, Altman’s message to Musk is unambiguous: OpenAI is not for sale, regardless of the price put on the table.

Musk sees himself as the rightful steward of AI’s future, while OpenAI stands firm on its independence and on funding continued progress. Whether this standoff escalates into further legal battles, deeper regulatory scrutiny, or another surprising turn remains to be seen. One thing is for sure: the future of AI is as much about corporate power plays as it is about technological breakthroughs.

Read More: OpenAI Hunts for U.S. Sites to Build Trump-Backed ‘Stargate’ AI Superhub

South Korea’s Acting President Choi Responds to Trump’s 25% Tariff Shock – What’s Next for Global Trade?

In the latest news, Trump’s remarks on import tariffs and taxes have caused immediate stress among global trading partners. On Monday, President Trump announced that the U.S. will impose 25% tariffs on all steel and aluminum imports, including those from Canada and Mexico, along with other import duties later this week. He broke the news on his way from Florida to the Super Bowl in New Orleans when asked about trade taxes, confirming that aluminum was included as well.

Trump’s Tariffs Strategy:

Tariffs have come earlier in this Trump presidency than in his previous time at the White House, when he prioritized tax cuts and deregulation first. There are two sides to Trump’s tariff strategy:

  1. Import taxes as a tool to force concessions on issues like immigration.
  2. A source of revenue that would help reduce the government’s budget deficit.

After Effects:

The announcement has caused worry among financial markets and American consumers, who expect higher inflation in the coming months because of the duties. Financial markets fell on Friday, with stock prices dropping on talk of reciprocal tariffs and, of course, weakening consumer sentiment. Shein and Temu customers were unable to receive their packages until customs officials could work out an alternative process; such small packages had previously been exempt from tariffs.

Previously, Trump threatened 25% import taxes on all goods from Canada and Mexico, but paused them for 30 days just a few days ago. China has also been on his radar: he has already added 10% duties on imports from China.

Global Trader’s Reaction:

Trump’s tariff policies have caused serious stress for global trading partners. On Monday, Choi Sang-Mok, South Korea’s acting president (who also serves as the country’s finance minister), called a meeting with the country’s trade and foreign policy officials to examine how Trump’s proposed tariffs on steel and aluminum would affect its industries, as well as the implications of the recent U.S.-Japan summit. Choi also highlighted the need to strengthen the nation’s AI competitiveness while monitoring the growth of startup tech companies such as China’s DeepSeek. Specific details were not disclosed, but according to Choi, officials discussed the impact and possible responses. The stock prices of major South Korean steelmakers, including POSCO and Hyundai Steel, dropped as the market opened on Monday.

From January to November 2024, South Korea shipped about $4.7 billion worth of steel to the United States, accounting for 21% of its global steel exports during that period.

Read More: Trump’s Paris AI Summit: An Exclusive Showdown with AI Safety Institute Staff Being Ridiculed

Trump’s Paris AI Summit: An Exclusive Showdown with AI Safety Institute Staff Being Ridiculed

Artificial intelligence is becoming the new battleground on which nations compete, with world leaders putting forward policies, safety frameworks, and innovation roadmaps. A U.S. delegation led by Vice President JD Vance will attend the Paris Artificial Intelligence Action Summit on February 10 and 11. However, one major player will be missing from Paris: the AI Safety Institute (AISI). The absence raises questions about the United States’ changing AI strategy under the Trump administration and what it means for global AI governance. According to sources familiar with the situation, technical staff from the U.S. AI Safety Institute have been excluded from the delegation in what appears to be a last-minute change.

Absence of AI Safety Institute:

With representatives from roughly 100 countries gathering in Paris to discuss the AI revolution, it promises to be an interesting event. The U.S. delegation will include officials from the White House Office of Science and Technology Policy (OSTP), among them Principal Deputy Director Lynne Parker and Senior Policy Advisor for Artificial Intelligence Sriram Krishnan, while high-profile names from the Department of Homeland Security and the Department of Commerce will be absent.

Rumours are flying in Washington that the Trump administration has cancelled trips for many of the officials who would have represented the AI Safety Institute, a body formed under President Biden. The institute has moved quickly in the past to assess AI risks and negotiate safety agreements with industry players such as OpenAI and Anthropic, but it has made no official comment on the matter.

U.S AI Policy under President Trump:

The exclusion of the AI Safety Institute is symbolic of wider doubts about the Trump administration's stance on AI governance. On taking office, President Trump offered no clear verdict on AI policy, yet days later he revoked an executive order on the subject issued by Biden. U.S. AI policy is now shifting amid many open questions.

In all likelihood, the absence of AI Safety Institute staff at the summit reflects changes within the Commerce Department, the institute's host, as well as new priorities in Washington. Safety concerns are expected to recede as the main topic, with the conversation shifting toward the broader benefits of AI, in contrast to the earlier global AI safety summits hosted at Bletchley Park and in Seoul.

U.S in Global AI Debate:

Even without AI Safety Institute staff, the U.S. will still engage to some extent with global AI governance. As matters stand, the U.S. chairs the International Network of AI Safety Institutes, which will ensure its presence at the summit. Moreover, U.S. representatives will not be entirely cut off from parallel tracks of discussion on AI regulation and innovation.

Given China's advances in AI, how Washington shapes international AI policy has become a critical geopolitical issue. With the Trump administration likely to redesign its foreign policy, the absence of AI safety experts in Paris may hint at shifting American priorities: away from risk management and toward broader technological innovation and, increasingly, global politics.

Read More: South Korea Blocks Access to DeepSeek Over Security Concerns

Paris AI Summit: Will Microsoft, Google, China & US Agree on the Future?

From 6 to 12 February 2025, the capital of France will host the Artificial Intelligence (AI) Action Summit, bringing together heads of state and government, heads of international organizations, leaders of companies large and small, representatives of non-governmental organizations, artists, and civil society members from more than 100 countries. Significant figures such as OpenAI CEO Sam Altman and top executives from Microsoft and Google's parent company Alphabet are attending. Notably, India will co-chair the summit. Participants are invited based on their commitment to the actions promoted by the summit and their willingness to debate them. The two previous summits were organized by the United Kingdom and the Republic of Korea.

The summit will be a big deal for AI startups, as France is using it to promote homegrown firms likely to compete with U.S. AI companies. And who can forget the hottest news of recent times: Chinese AI startup DeepSeek, which dared to challenge U.S. dominance in AI at far lower cost. Its impact will also be part of the discussion.

Key Focus of The Summit:

The participants will seek to achieve three main objectives:

  • Open-source AI systems (independent, safe and reliable AI for every user)
  • Clean energy for AI (AI that is environmentally friendly)
  • Effective and inclusive global governance of Artificial Intelligence (countries controlling their own AI instead of relying on U.S. tech giants)

This summit is based on five strategic focuses:

  • Future of Work
  • Trust in AI 
  • Innovation and Culture
  • Global Governance of AI
  • Public Service

Why is Trump’s administration in the spotlight?

The above covers the summit itself. But why is Trump's administration in the limelight? There are many questions, but the hottest one is: will the U.S. align with China and other countries on AI principles? Since entering the White House on January 20, 2025, Donald Trump has revoked Biden's 2023 executive order (a set of guidelines for AI safety and ethics). Trump also pulled the U.S. out of the Paris Climate Agreement, again, and he has faced congressional calls to consider new export controls on AI chips to counter rival China.

From the U.S. side, Vice President JD Vance will represent the American delegation.

A non-binding AI principles document is being negotiated, which would be a huge diplomatic win if the U.S. and China both sign it.

No New AI Regulations

At previous summits, safety commitments dominated the conversation, but this year no new AI regulation is on the agenda to tackle upcoming challenges. France is evaluating how to implement the EU AI Act flexibly so it does not discourage technology and innovation.

AI models also consume massive amounts of electricity, raising concerns about their long-term sustainability. Meanwhile, Hangzhou-based DeepSeek disrupted global markets last month by proving it could compete with U.S. giants in human-like reasoning technology at a significantly lower cost.

France has seized on the development as evidence that the global race to more powerful AI remains wide open.

Expected Outcomes of the Summit

  • $500M in AI funding, potentially increasing to $2.5B over five years, for global AI projects.
  • Agreement (or disagreement) on AI principles between the U.S., China, and other nations.
  • A push for open-source AI to benefit developing countries.
  • Discussions on how to balance AI innovation with national policies.

Read More: The AI Revolution in Europe; AI Startups Secured $8 Billion in 2024

Musk’s Legal Battle with OpenAI May Head to Trial, Judge Rules

A federal judge in California has ruled that portions of Elon Musk's lawsuit against OpenAI, which seeks to stop its conversion into a for-profit entity, will proceed to trial. On Tuesday, the judge added that the Tesla CEO will have to appear in court to testify.

"Something is going to trial in this case," District Judge Yvonne Gonzalez Rogers said during an early court session in Oakland, California.

Musk will take the stand and present his case to a jury, and the jury will decide who is in the right.

Judge Rogers was weighing Musk's recent request for a preliminary injunction to block OpenAI's conversion before the case goes to trial. It is the latest move in an increasingly public battle between the world's richest person and OpenAI CEO Sam Altman.

Rogers last handled a high-profile preliminary injunction request in Epic Games' case against Apple in May 2021.

Musk co-founded OpenAI with Altman in 2015 but left before the company took off, and he launched the competing AI startup xAI in 2023. OpenAI is now shifting from a nonprofit to a for-profit entity, a move that reflects its need to secure the revenue required to develop leading AI models.

Last year, Musk sued OpenAI and Sam Altman, saying OpenAI's founders had approached him to fund nonprofit AI development for the benefit of humanity but that their focus was now on making money. He later expanded the case with federal antitrust and other claims, and in December he asked the presiding judge to block OpenAI from transitioning into a for-profit.

In response, OpenAI has said that Musk's claims should be dismissed and that he "should be competing in the market rather than the courtroom."

The stakes around OpenAI's transition have risen: its last fundraising round of about $6.6 billion, and a new round of up to $25 billion under discussion with SoftBank, are conditioned on the company restructuring to remove the nonprofit entity's control.

Such a restructuring would be unusual, said Rose Chan Loui, executive director of UCLA Law's center on philanthropy and nonprofits. Conversions from nonprofit to for-profit have historically involved healthcare organizations such as hospitals, not venture capital-backed companies, she said.

Read More: OpenAI Seals Partnership with Kakao

Google Revises AI Ethics, No Longer Rules Out AI‘s use for Weapons and Surveillance

In the current AI landscape, corporate ethics appear to be remarkably flexible; the boundaries separating innovation, ethics, and business interests are blurring by the day. It seems Google's AI ethics are now open source, free for anyone to rewrite, including Google itself. The company has quietly removed one of the central ethical barriers once enshrined in its AI principles: a pledge not to develop AI technology for weapons and mass surveillance. The change, spotted through CNN's analysis of the Internet Archive's Wayback Machine, signals a major shift in Google's perspective on AI ethics.

Ethical breach:

Google's AI principles previously stated that the company would not pursue AI applications for weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, nor develop technologies that gather or use information for surveillance in ways that violate internationally accepted norms. With the latest update, that language has disappeared entirely, leaving it far less clear how Google will engage with these areas in the future.

Since OpenAI released ChatGPT in 2022, AI has evolved at an unprecedented pace, largely without proper regulation or ethical oversight. With the new policy wording, it can be assumed that Google will have more flexibility to work with governments and defense contractors on law-enforcement and military projects.

A Shift in Values:

In a blog post on Tuesday, James Manyika, Senior Vice President of Research, Labs, Technology and Society, and Google DeepMind head Demis Hassabis defended the policy shift, stating: "AI frameworks published by democratic countries have deepened Google's understanding of AI's potential and risks. There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."

This latest turn runs counter to everything Google had committed to in the past. In 2018, after thousands of employees signed a petition against military applications of AI and some resigned in protest, Google pulled out of bidding for a $10 billion Pentagon cloud computing contract, explaining that it could not be sure the project would align with its AI principles.

The post further elaborated: "We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." AI will keep advancing, and so will the tussles over its ethical use; Google's recent pivot shows that its position is far from settled.

Read More: OpenAI Seals Partnership with Kakao, Expanding Its Asian Collaborations

DeepSeek AI Shocks U.S. Markets: Why Nvidia Lost $590 Billion Overnight

What Happened?

DeepSeek taking on the U.S. tech giants? Yes, you heard that right. The Chinese startup has developed a large language model that can compete with theirs at a much lower cost. The development came as a shock to the U.S., which had restricted China's access to advanced AI chips, yet DeepSeek still found its way to the top of the Apple App Store.

  • Tech stocks crashed: The S&P 500 tech sector fell 5.6%, its worst drop since 2020.
  • Nvidia suffered the most: Its market value dropped by $590 billion, a massive loss, while Nvidia’s CEO Jensen Huang lost $20.8 billion in net worth.
  • Top executives lost billions: Oracle’s Larry Ellison saw a $27.6 billion drop in net worth.
  • AI-related energy stocks were hit too: Vistra Corp, an independent power producer, dropped 28.3%, as investors weighing DeepSeek's breakthrough concluded that AI infrastructure demand might not be as strong as expected.

Impact of DeepSeek AI on the U.S. Stock Market:

A significant impact, for sure.

What’s the actual reason behind this?

The emergence of AI competition from China raised concerns that DeepSeek could be a strong rival to Nvidia, Broadcom, and Google. The prospect created panic among investors, who sold off their stocks, causing prices to fall sharply.

Has everyone out there faced the loss?

Not exactly. Even though tech stocks crashed, more than half (351) of the S&P 500 companies gained value on Monday, which only underlines big tech's outsized influence over the index. The Dow Jones Industrial Average, which relies less on tech, even ended the day with a small gain.

What are people saying?

"This isn't just a win for China, it's a GAME CHANGER. DeepSeek's success shows the U.S. can no longer claim undisputed tech dominance," wrote the user The Saviour in a recent thread on X.

U.S. vs. China: who will win the AI race? Who will dominate the global market? Or will some other surprise startup leap ahead of both? Time will unfold this heated mystery.

Read More: Revival of US Tech Stocks Ignited after DeepSeek’s AI sell-off

Revival of US Tech Stocks Ignited after DeepSeek’s AI sell-off

DeepSeek's AI breakthrough has, paradoxically, revived tech stocks. American tech shares rebounded after one of the most significant drops in their history, triggered by the surprise arrival of China's DeepSeek. Nvidia, whose market value had been wiped down by $593 billion in a single day, led the recovery, stabilizing at $128.99 as technology shares regained ground on the strength of its dominance in AI chips. Semiconductor, power, and infrastructure companies tied to AI had collectively lost more than $1 trillion in the selloff. Meanwhile, investors went hunting for bargains after the global rout caused by the low-cost artificial intelligence model.

DeepSeek’s Effect:

The release of an AI assistant by China's DeepSeek, which costs less to build and requires less data than rival models, sent tech stocks reeling worldwide. Despite doubts, DeepSeek's cost claims were widely discussed and caught global attention. During the rebound, the tech sector index rose 3.6%, and the Philadelphia Semiconductor Index, which had fallen 9.2% in the previous session, gained 1%. Oracle bounced back 3.6% after a 13.8% drop, while Broadcom and Marvell Technology posted modest gains. As reported by Vanda Research, retail investors seized on Nvidia's decline, with an unprecedented $562.2 million in retail buy-ins.

At a Miami conference, asset-management founder Steven Cohen declared that DeepSeek's arrival is ultimately favorable, as it reinforces the broader move toward artificial intelligence. Despite Nvidia's downturn, options traders were eager to keep trading at high prices as the chipmaker's shares rebounded on Monday. The selloff in AI-related stocks may make investors more cautious, but even with more affordable AI designs on the scene, advanced chips will still be needed to meet demand for high-performance AI alongside cost-sensitive products like DeepSeek.

Chaotic Market, an aftershock of DeepSeek:

China's sudden entry into the AI race has upended the perception that Chinese firms lag behind their larger American competitors. The scale of the decline shows how much invested capital is concentrated in a small group of richly valued stocks. A massive flow of capital into equities, driven by excitement over artificial intelligence, has added an estimated $10 trillion to the market valuations of the "Magnificent Seven" companies since the AI boom began with ChatGPT in November 2022. Market leaders, including Apple, played a significant role in the tech index surge, with Apple's shares rising 3.7% and giving the Nasdaq its second-largest boost after Nvidia.

The index was also lifted by Meta, formerly Facebook, which advanced 2.2% for its seventh consecutive day of gains, while Microsoft added 2.9% to the momentum. Some experts argue that AI will have both positive and negative effects, spurring innovation in some areas while disrupting markets in others. China's role in AI development is back in the discussion, and some U.S. tech companies are warning against letting China take the lead after DeepSeek's success. While AI still reigns in the technology arena, the contest between cost-effective AI models and cutting-edge chip technology is far from settled. Investors and stakeholders are watching the AI arms race ever more closely as competition intensifies.

Read More: Italy Demands Answers from DeepSeek: Is Your Data at Risk?

OpenAI vs Indian Newsrooms: OpenAI Faces Copyright Controversy

Lawsuit against OpenAI:

AI's habit of "just borrowing" translates, in legal language, into a copyright problem. That drama has now landed in India and made headlines featuring names like Ambani and Adani. Yes, Mukesh Ambani and Gautam Adani, the Indian business tycoons, find themselves on the same side of a copyright dispute against OpenAI, one that underlines the importance of intellectual property. Indian news organizations, including The Indian Express, Hindustan Times, and outlets owned by Gautam Adani and Mukesh Ambani, have sued OpenAI, alleging that the AI firm has been using copyrighted material from their platforms to train its models without authorization or compensation. The lawsuit, filed in New Delhi by the Digital News Publishers Association (DNPA), involves 20 other media companies, including Adani's NDTV and Ambani's Network18. Copyright disputes have been intensifying worldwide, with writers, musicians, and media outlets resisting what they see as exploitation by technology companies, and Indian publishers claim that OpenAI's reworking of their content is a significant breach of their intellectual property rights.

OpenAI’s Global and Local Effects:

OpenAI, it seems, has taken the idea that "sharing is caring" rather literally, treating copyrighted content as its own diary. This is not the first time it has been sued over copyright; the company has faced similar claims multiple times globally. The New York Times, for instance, sued OpenAI and Microsoft in 2023 for using its articles without authorization, and in India the news agency ANI filed the first such lawsuit against OpenAI in 2024. Despite the accusations, OpenAI has established content partnerships with outlets worldwide.

On the other hand, India's rapidly growing base of 690 million smartphone users and the widespread adoption of AI in the country make it an essential market for OpenAI. Unlike Time Magazine and the Financial Times, which have struck deals with OpenAI, Indian outlets have not been able to secure comparable arrangements locally or globally, leaving them at a disadvantage. OpenAI, in its defense, argues that Indian courts lack jurisdiction over its operations because its servers are located outside India. Indian publishers, for their part, fear that OpenAI's practices put the Indian media industry at risk: the company is becoming a profit-driven organization that earns significant revenue from generative output without suitably compensating the publishers whose work feeds it. However these disputes are resolved, the rise of AI will keep producing legal challenges that shape how new technologies fit within intellectual property law.

Read More: Reliance Plans World’s Biggest AI Data Centre in India

Meta's Shift to Community Notes: Revolution or Risk?

Meta, the company behind Facebook and Instagram, is changing how it deals with misinformation, in a move it says will let it address false content at a much larger scale. What was the problem with the traditional fact-checkers it used before? They are resource-intensive and can only review a limited number of posts daily. Meta claims a community-driven system can fact-check much faster and at far greater scale.

Instead of relying on experts (fact-checkers) to verify false information, it plans to use something called “community notes.”

Where did the idea come from?

This idea comes from Elon Musk’s platform X (formerly Twitter), where users can write helpful notes to clarify posts, and those notes are rated by other users for accuracy.

Let the community decide what's true. Getting users' attention? Smart move? Of course.

While some people support the idea, saying it is faster and involves more voices, others are worried. Critics argue that regular users may lack the expertise to identify complex misinformation, so the system could miss important false claims, and it could also fuel more rumours and even cybercrime.

That is not all. Meta is also loosening its moderation rules on sensitive topics such as immigration and gender, which will certainly mean less censorship of users.

But who will handle the flood of harmful & misleading content?

Backstory: What’s the actual reason behind it?

  1. Political Influence:

Meta’s traditional fact-checking programs, initiated after the 2016 election, faced criticism from conservatives, including former President Trump, who labelled them as politically biased.

This move aligns with efforts to favour the Trump administration, as evidenced by other recent actions.

  2. Resource and Scalability Challenges:

Fact-checking partnerships involved over 100 organizations globally, but this model struggled to match the scale of misinformation on Meta’s platforms.

The volume of content on Facebook and Instagram far exceeded what human fact-checkers could effectively monitor, leaving many false claims unchecked.

  3. Operational Delays:

Traditional fact-checking processes were slow, taking hours or even days to review and debunk viral misinformation. By the time a fact-check was published, the false information often reached a wider audience.

  4. Rebranding Under "Free Expression"?

By ending fact-checking, Meta signals a commitment to "free expression". The shift also distances the company from accusations of partisan censorship.

Is Meta copying X's idea without weighing its repercussions?

And wait,

Did X succeed in the implementation?

Here are the results from X’s system:

X's system faces significant challenges in combating misinformation.

While many proposed notes provide accurate context, only a small percentage are approved and displayed publicly due to strict consensus requirements, with delays often exceeding 11 hours—allowing false posts to spread widely. The system also shows potential political bias, as notes on Republican posts are approved more often than on Democratic ones. 
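To make the "strict consensus requirements" concrete, here is a deliberately simplified, hypothetical sketch of a cross-viewpoint agreement check. X's real, open-sourced Community Notes scorer is considerably more sophisticated (it models rater viewpoints rather than using fixed clusters and ratios), so treat the thresholds and cluster labels below as illustrative assumptions only.

```python
from collections import Counter

# Toy model of a cross-viewpoint consensus rule: a note is shown only when raters
# from at least two different viewpoint clusters broadly agree it is helpful.
# The cluster labels and thresholds are made up for illustration.
def note_is_shown(ratings, min_ratings=5, min_helpful_ratio=0.7):
    """ratings: list of (viewpoint_cluster, is_helpful) tuples."""
    if len(ratings) < min_ratings:
        return False  # too few ratings yet; waiting for volume is one source of delay

    helpful, total = Counter(), Counter()
    for cluster, is_helpful in ratings:
        total[cluster] += 1
        helpful[cluster] += int(is_helpful)

    if len(total) < 2:
        return False  # agreement from only one "side" does not count as consensus

    # Every cluster that rated the note must mostly find it helpful.
    return all(helpful[c] / total[c] >= min_helpful_ratio for c in total)


# Helpful to one cluster only -> stays hidden; helpful across clusters -> shown.
print(note_is_shown([("A", True)] * 6))                      # False
print(note_is_shown([("A", True)] * 4 + [("B", True)] * 3))  # True
```

Even in this toy version, a note stays hidden until enough raters from different groups weigh in, which is exactly where the long approval delays come from.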

Volunteers have expressed frustration over their efforts going largely unseen, and experts argue that, despite its innovation, Community Notes cannot replace traditional moderation, especially in a highly polarized environment where misinformation has real-world consequences. For Meta, the change invites fresh controversy because it trades professional oversight for user participation.

A total gamble, for sure. A hybrid model is suggested as the best approach in this case. Will Meta get similar or better results than X? Major success or a new disaster on the way? Time will tell.

Read More: Trump VS Biden: The AI Showdown Reshaping America’s Tech Future


DeepSeek vs The Tech Giants: The AI Disruption No One Saw Coming

What’s happening? A new AI app called DeepSeek has become super popular, hitting #1 on Apple’s App Store.

DeepSeek taking over Apple’s App Store?

Yes, you heard it right! It has gained massive attention for its large language models (LLMs), which are claimed to rival those of AI giants OpenAI and Meta.

What truly makes a difference?

DeepSeek claims that it developed its advanced AI systems at a fraction of the cost, challenging industry norms. Shaking the entire market? A new fear of missing out is seen in the global market.

Let’s have a look at after effects:

Wall Street Impact:

All three major U.S. stock indexes fell on Friday, with the S&P 500 retreating from a record high. How did it happen? The drop was fueled by concerns that DeepSeek’s efficient AI models could disrupt the dominance of U.S. tech giants.

Tech Stocks Hit Hard:

U.S. companies, especially those specializing in AI and chipmaking, saw significant declines. For instance, Nvidia, a leader in AI accelerators, faced concerns about reduced demand for its products should DeepSeek's AI prove game-changing.

Ripple Effects in Asia

In Japan, tech and chip firms were among the biggest losers: The Nikkei index dropped, with companies like Advantest Corp. (a chip-making tools firm) suffering losses. In contrast, Chinese tech companies tied to DeepSeek, such as Iflytek Co., surged in value.

DeepSeek’s potential as a Geopolitical tool?

For sure, DeepSeek could become a tool for Beijing to strengthen tech ties with developing nations, positioning itself as the leader of affordable AI solutions.

Will it become users’ new favourite?

Why not? With lower operational costs, it could bring high-quality AI tools to price-sensitive markets, reshaping industries like e-commerce and content creation at the grassroots level. It will ease AI access for small businesses and individuals.

What’s the big deal?

  • DeepSeek is seen as a Chinese competitor to U.S. AI leaders like OpenAI.
  • Its rapid rise has raised concerns about a shift in global technological leadership away from the U.S. to China.
  • Companies like SoftBank, which is heavily invested in competing AI ventures such as the Trump-backed Stargate project, also took a hit, with SoftBank's stock losing over 6%.

Will DeepSeek maintain its winning streak against existing tech giants?

DeepSeek can maintain its winning streak by focusing on innovation, localized expertise, and agility, but it faces tough competition from established tech giants. Success will depend on its ability to differentiate and adapt in a fast-evolving industry.

Read More: OpenAI Gains More Flexibility as Microsoft Backs $500B Stargate Initiative

Apple Miami Worldcenter opens Friday, January 24, in downtown Miami

Apple is set to expand its presence in Florida with the grand opening of its latest retail store, Apple Miami Worldcenter, on Friday, January 24, 2025. Located in the heart of downtown Miami at the prestigious Miami Worldcenter, this store promises to deliver Apple’s signature shopping experience with an added touch of Miami’s unique cultural flair.

Prime Location at Miami Worldcenter

Serving residents and visitors alike, Apple Miami Worldcenter is part of one of the largest mixed-use urban developments in the United States. Miami Worldcenter itself is a vibrant destination featuring luxury retail spaces, apartment towers, restaurants, and entertainment venues, and the Apple store is positioned to attract heavy foot traffic from the bustling development.

What Can Customers Expect

More than just a retail store selling products, Apple Miami Worldcenter is meant to give people an experience. It will feature Apple's latest design concepts, such as open spaces, interactive displays, and dedicated areas for product workshops and customer support.

  • Genius Bar and Product Support: The Genius Bar provides customers personal technical support and repair services.
  • Today at Apple Sessions: They will offer creative sessions that provide free, hands-on learning in photography, coding, music production, and more.
  • Complete Product Range: The store will carry the full line of Apple products, from iPhone and iPad to Mac, Apple Watch, and accessories. To celebrate the grand opening, Apple will host exclusive events over the weekend: the first customers can enjoy live demonstrations, interact with artistic exhibits, and join special Today at Apple sessions with local creators, and limited-edition Apple merchandise may also be part of the festivities.
  • Commitment to Sustainability: The store reflects Apple's commitment to sustainability, with energy-efficient systems and materials in line with the company's pledge to become carbon neutral by 2030.

Additional Locations Opened By Apple in Florida
 

The Miami Worldcenter location reinforces Apple's continued investment in Florida. The company already operates numerous stores across the state, and this site reflects its effort to bring top-quality service and access to state-of-the-art technology to one of the nation's most dynamic cities.

Apple Miami Worldcenter officially opens to the public at 10 a.m. on January 24, 2025. Apple encourages customers to arrive early to explore the new store, check out its latest offerings, and enjoy the grand opening celebrations. The opening cements Apple's presence in Miami while offering residents and visitors a blend of innovation, creativity, and community involvement.

Read More: Apple’s Store App launch in India

Samsung Introduces Cobalt Recycling for the Galaxy S25

Samsung is pushing further along its sustainability roadmap with cobalt recycling for the Galaxy S25 series. The new initiative, showcased in a recent video, aligns with the company's broader objectives of closing the materials loop and minimizing environmental impact.

A Game-Changing Recycling Process

Cobalt is a key ingredient in lithium-ion batteries and is notorious for the ethical and ecological problems linked to its mining. Samsung's recycling technology instead recovers cobalt from discarded devices and repurposes it for new products.

The video Samsung has published shows the technology behind the recycling process: old devices are dismantled, cobalt is extracted from battery components, and the material is purified to a condition suitable for new batteries. This minimizes the need for freshly mined raw material while ensuring the quality and usability of what is recovered.

Impact on the Galaxy S25

The Galaxy S25 series, Samsung's flagship smartphone line, will serve as the showcase for this cobalt recycling. Samsung says a considerable share of the cobalt in the new devices comes from recycled material rather than mining, reducing its carbon footprint in line with the targets it has set.

The Galaxy S25 series is Samsung's new hallmark of environmentally friendly technology. Beyond recycled cobalt, it includes components made from recycled aluminium, glass, and plastic, setting a new standard in environment-conscious manufacturing.

A Commitment Toward Circularity

Samsung's cobalt recycling initiative is part of its broader "Galaxy for the Planet" framework, which targets net-zero carbon emissions by 2050. The company has been working to bring more recycled materials into its products, reduce e-waste, and adopt sustainable production techniques.

In the video, Samsung also showcases its industry partnerships for building a closed-loop system. Recovering materials from discarded devices and feeding them back into production helps pave the way for a circular economy in the tech industry.

Consumer Benefits

Samsung's recycling initiative is about more than sustainability; it offers tangible benefits to consumers. The recycled cobalt used in the Galaxy S25 is expected to deliver better battery performance, such as longer life cycles and improved energy efficiency, meeting consumer demand for durable and environmentally friendly products.

A Call for Change Across the Industry

Samsung's move on cobalt recycling has made waves across the technology sector, signalling change that goes beyond token gestures. In practice, it shows how innovation can tackle tough environmental problems.

Looking Forward

The cobalt recycling process marks a new chapter in Samsung's continued efforts to push the envelope of sustainable technology. With the Galaxy S25 series, the company is offering not just a smartphone but a demonstration of how technology and sustainability can go hand in hand.

Consumers can now look forward to buying the Galaxy S25 knowing that they are purchasing a product that will prioritize environmental responsibility. Samsung’s dream for a greener future is becoming reality, one recycled piece at a time.

Read More: Revolutionizing Application Security and Network Management through F5 AI Assistant

When will 5G Technology Launch in Pakistan?


5G technology is set to revolutionize Pakistan’s telecom sector, with a projected official launch by June 2025. The Ministry of Information Technology and Telecommunications has been working on a detailed roadmap for the rollout, which has been delayed from the initial target of 2024. This delay was due to technical, regulatory, and infrastructural challenges that need careful consideration.

The government plans to auction the 5G spectrum in May 2025, with telecom operators expected to start offering 5G services to the public by July 2025. This will enable the country to unlock faster internet speeds and provide users with better overall connectivity. The launch of 5G will mark a significant shift, enabling new technologies such as autonomous vehicles, advanced IoT devices, and smart city infrastructure, contributing to the country’s modernization.
The Pakistan Telecommunication Authority (PTA) will offer four 5G spectrum bands for auction, 700 MHz, 2300 MHz, 2600 MHz, and 3500 MHz, to advance the rollout of 5G across the country.

Pakistan’s digital ecosystem stands to benefit immensely from 5G’s capabilities. For industries like healthcare, education, and agriculture, it will offer faster data processing and improved communication. Additionally, the enhanced speeds and low latency of 5G will support innovations in e-commerce, entertainment, and more, expanding economic opportunities nationwide.

However, there are considerable challenges to overcome. Developing the necessary infrastructure for nationwide 5G coverage is a complex task, particularly in remote or underserved regions where connectivity has historically lagged. Moreover, the country needs to ensure that the 5G spectrum auction is conducted smoothly and in line with global standards to attract both local and international investment.

As the 5G rollout progresses, Pakistan is poised to enter a new era of connectivity. While the journey ahead requires overcoming significant hurdles, the benefits of 5G will have far-reaching effects on Pakistan’s Economy, technological landscape, and global competitiveness. The focus now will be on preparing the necessary infrastructure, ensuring the regulatory environment supports innovation, and meeting the high demand for reliable, high-speed internet.

Read More: Meet Operator: OpenAI’s AI Tool That Could Take Over Your Computer Tasks

Ford’s BlueCruise is Undergoing an Investigation For US Safety Concerns

NHTSA’s Investigation:

U.S. authorities have raised concerns over Ford's BlueCruise. The advanced driver assistance system, which provides hands-free driving, is under investigation by the National Highway Traffic Safety Administration (NHTSA). The action was taken after two deadly collisions involving Ford Mustang Mach-E vehicles, in which the cars struck stationary objects. The investigation was opened in April and has now advanced to an engineering analysis. The agency will conduct a more thorough review of BlueCruise and its potential shortcomings, including vehicle evaluations, analysis of technical data, and additional research on related crash and non-crash reports. NHTSA's investigation suggests that BlueCruise has difficulty detecting stationary vehicles in certain situations, such as at high speeds or in poor lighting. As NHTSA put it, "Additionally, system performance may be limited when there is poor visibility due to insufficient illumination."

Ford’s BlueCruise:

BlueCruise launched in 2021, combining cameras, radar sensors, and software to deliver adaptive cruise control, lane centring, and speed-sign recognition. The system operates hands-free on pre-mapped highways and uses in-cabin cameras to monitor driver attention. Around 129,000 vehicles have been equipped with the system, including Mustang Mach-E models and F-150 pickup trucks.

Industry Rivalries:  

GM's Super Cruise and Tesla's Autopilot compete with BlueCruise, each offering similar advanced driver assistance features. Compared with Tesla's Autopilot, BlueCruise is more limited in where and how it can operate. The investigation into BlueCruise sits alongside ongoing inquiries into Tesla's "Full Self-Driving" technology, which has been linked to several accidents. Together, the probes highlight the difficulty automakers face in balancing innovation and safety. As NHTSA continues to examine these systems, the safety and limitations of driver-assistance technology remain a significant concern.

Read More: Google Stance on European Union Fact-checking Mandates

Venture Capital's Big Investments Bring Little Relief for Struggling Startups

Rise of Investments in the Fourth Quarter:

Venture capital investment climbed to roughly $75 billion in the fourth quarter, yet that has not made it any easier for most startups to raise money. According to current PitchBook data, venture funding rose sharply in the fourth quarter of last year to $74.6 billion, returning activity to levels last seen during the pandemic-era boom after two years of depressed investment. Although capital is flowing again across sectors, it is distributed unevenly, and many startups still struggle to secure funding.

Securing Major Deals:

The bulk of the roughly $75 billion benefited only a handful of companies; in fact, a few mega-deals accounted for 43.2% of all fourth-quarter investment. Databricks' valuation rose to $62 billion after it secured $10 billion in December. OpenAI, the maker of ChatGPT, raised $6.6 billion in October at a valuation of $157 billion. In December, Elon Musk's xAI, the generative AI company behind Grok, raised $6 billion. Alphabet poured heavy funding into Waymo, its self-driving car unit, which secured $5.6 billion in November, while Amazon committed another $4 billion to developing generative AI models. Without these major deals, fourth-quarter investment would have stayed around $42 billion, roughly in line with the previous nine quarters.

Analysis of Future Investments:

Startups as a whole still face challenges despite the billions raised by a few high-profile companies. The widening gap highlights the bitter reality of venture capital, where AI-focused companies dominate and smaller startups are left competing for scarce resources. There is debate over whether venture capital in 2025 can sustain such high levels of investment. Experts and analysts expect a small number of AI-focused startups to keep attracting a disproportionate share of funding, while others may struggle to find financing at all.

Read More: Amazon Playing Bold Moves, Acquiring Indian Startup Axio For Over $150M

TikTok's Fight Against Going Dark Gains Support From Key US Lawmakers

TikTok is at a crossroads in the United States, with a government deadline threatening its operations. The deadline stems from a law taking effect in 2025 that requires the app to be divested from its Chinese parent company, ByteDance, over national security concerns about the app's direct or indirect links to the Chinese government. But things could take a new direction as political heavyweights voice support for keeping the app alive.

Support from American lawmakers and presidential allies

With the January 19 deadline approaching, U.S. lawmakers have begun to weigh in loudly on TikTok's future in the country. Among them is Senate Democratic Leader Chuck Schumer, who has asked for a 90-day extension to allow ample time for an orderly transition or a resolution on divestiture.

President-elect Donald Trump, adding his own page to this chapter, also appears to be reconsidering his position. The shift in public opinion has not been lost on him: he now reportedly contemplates an executive order delaying enforcement of the ban by 60 to 90 days, a prohibition he himself originally championed.

Legal Challenges and the Supreme Court’s Role

Amid the buzz, legal analysts and civil rights advocates are raising concerns that parts of the law may be unconstitutional. Some contend that, unless the government provides clear evidence of a national security threat, prohibiting TikTok would violate Americans' First Amendment rights. The Supreme Court has already stepped in to review the law forcing divestment, clearing the way for what is expected to be a major ruling on the subject.

Read More: RedNote: The Chinese App Gaining Popularity Among TikTok Creators, But With Unendurable Consequences

RedNote: The Chinese App Gaining Popularity Among TikTok Creators, But With Unendurable Consequences

With TikTok's future in the Western world uncertain, creators are turning to the next-best alternative: RedNote, also known as Xiaohongshu or the Little Red Book. At first glance, RedNote looks like paradise: glossy lifestyle posts, polished recommendations, and the familiar influencer culture. Dig deeper, though, and the reality is more worrying. This is not just another app; it is a tightly controlled platform that exports censorship, shapes youth culture, and quietly subverts democratic values.

The Alluring Facade of RedNote: Free for Creatives?

Much of what creators love about TikTok can be found on RedNote, making it a familiar venue for creativity and virality, a space where they can keep engaging audiences in the ways TikTok once enabled. But there is a fundamental difference between the two platforms.

RedNote is a Chinese-owned platform, which means its content moderation and privacy policies are bound by China's strict regulatory framework. Whereas TikTok promotes open discussion, RedNote imposes restrictions that uphold "core socialist values" over freedom of expression. Posts, comments, and likes are all monitored within an ecosystem built for China's model of social control.

Censorship and Self-Censorship in Plain Sight

Though the app is marketed to Western creators through features that resemble TikTok's, RedNote operates under censorship far stricter than most platforms. Posts dealing with politically sensitive topics, such as Chinese human rights issues, criticism of censorship laws, or advocacy for freedoms in contested regions, are prone to deletion before anyone can read them.

Some creators nonetheless take pride in how quickly they have adapted to the app, saying its features benefit not just their content but also their audiences. "They're a tool for absolute creativity, where people can be crazy and just lose their minds directing or curating their content. Really crazy, absurd style," says one user, pointing to RedNote's tools for creating perfectly looping videos. Questions remain, however, about how the app intends to make money and what that will mean for the users it is courting.

New features keep appearing, from photo sharing to video posting, and their number grows by the day.

Even so, at least in America, RedNote may take a while to capture users' attention.


Will it take long for RedNote to gain popularity among users in America?

Clearly, RedNote is not an app that connects users freely, but a highly regulated platform that exports censorship, influences youth culture, and quietly undermines democratic values. With all the pomp and glamour on display, it is easy to come away with a pretty first impression, but as with every other buzzy app, there is a more complicated reality underneath.

Read More: Cloud Transformation Conference Global 2025: The Premier Technology Event of the Year

Elon Musk Sued by SEC Over Twitter Shares

SEC’s Lawsuit:

The U.S. Securities and Exchange Commission (SEC) has filed a lawsuit against Elon Musk, alleging that he violated securities laws during his takeover of Twitter, now renamed X. According to the complaint filed in federal court, Musk failed to disclose his ownership of more than 5% of Twitter's shares on time and instead delayed the announcement so he could keep buying shares at a discounted price.

Allegations:

According to the SEC, Musk should have filed a report disclosing his 5% ownership by March 24, 2022, but did not do so until April 4, 2022. In that window he reportedly increased his stake from 5% to 9%, saving more than $150 million, since Twitter's stock rose 27% once the disclosure was made. The SEC is seeking, among other things, civil penalties and the return of those apparent gains. Musk's lawyer, Alex Spiro, called the suit a weak attempt by the SEC to redeem itself, accusing the agency of a years-long harassment campaign, and Musk echoed those sentiments earlier when he rejected a settlement offer from the SEC. It will now be up to a federal court to decide whether the allegations hold and, if so, what penalty to impose.

Read More: Astrohaus Unveils a Specialized Mechanical Keyboard for Writers

Implications Of a New Shift:

Spiro has also labelled the complaint an "admission of the SEC's inability to bring an actual case." The suit arrives as SEC chairman Gary Gensler prepares to leave office, with a new Trump-nominated commissioner set to take over. Analysts suspect the incoming leadership will be more favourable to Musk and could change the case's direction. The court will ultimately decide whether Musk violated securities law and, if so, what penalty is appropriate.

A Savior in Disguise: Watch Duty App Protects People From Wildfires

Watch Duty Shields Los Angeles:

Watch Duty is an app familiar to Los Angeles residents for its ability to provide the latest wildfire information. The free app includes reports on fire evacuation zones, air quality, and wind patterns, and it has become an important tool for firefighters and residents alike during emergencies. Watch Duty is a breath of fresh air among tech products: it is ad-free, does no data tracking, and does not chase user engagement. Over the last few days it has gained more than a million downloads thanks to its rapid, accurate safety alerts. The idea for Watch Duty emerged in 2020, when co-founder John Mills was defending his off-grid home during the Walbridge fire. Within sixty days, Mills and David Merritt, the app's co-founder and CTO, had built the app.

Strategically designed app:

Because the app is run as a nonprofit, it is maintained mostly by volunteer engineers and reporters, though it accepts tax-deductible donations and offers two membership tiers that unlock additional features, including a firefighting flight tracker and the ability to set alerts for more than four counties. One of the biggest problems with wildfires is that they can overrun areas and structures within minutes, while government notification systems can take as long as 15 minutes to deliver a message. Today, the app pushes notifications to 1.5 million users in under a minute, far faster than many of the often-delayed local government systems. Its no-login, no-ad, no-tracking design means users get the information they need without distraction.

Utility for all:

Volunteer journalists at Watch Duty keep a constant eye on scanners, updating the app in real time with evacuation orders and firefighting efforts, and every piece of information is verified for accuracy and relevance. The application has grown to cover around 22 states, aiming for nationwide and eventually international reach. According to Merritt, "It is a utility that everyone should have, which is timely, relevant information for their safety during emergencies. Right now, it's very scattered. Even the agencies themselves, which have the best intentions, their hands are tied by bureaucracy or contracts. We partner with government sources with a focus on firefighting." Watch Duty plans to expand this and other emergency services across the United States and into other countries, sparing millions of people from slow, often unreliable local government alert systems.

Keeping up with the system:

The app draws on publicly available information from sources such as the National Weather Service and the Environmental Protection Agency. It is prepared to absorb some costs, such as purchasing data from agencies, and it has relationships and contracts with several corporate partners to provide its services. The app runs on a mixed technology stack that includes Google Cloud Platform, Amazon Web Services, Firebase, Fastly, and Heroku. At a time when wildfires are growing more extensive, Watch Duty is well placed to keep saving lives.
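Firebase, named in that stack, offers topic-based push messaging, which is one common way to fan a single alert out to a very large audience quickly. The sketch below shows the general pattern using the Firebase Admin SDK; the topic name, credentials file, and alert text are hypothetical, and Watch Duty has not published the details of its actual notification pipeline.

```python
# Minimal topic-based push fan-out with Firebase Cloud Messaging (firebase-admin SDK).
# Publishing one message to a topic reaches every subscribed device, which is what
# makes sub-minute delivery to over a million users feasible.
import firebase_admin
from firebase_admin import credentials, messaging

# Hypothetical service-account credentials for the sending backend.
firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

alert = messaging.Message(
    notification=messaging.Notification(
        title="Evacuation order issued",               # example alert copy
        body="Zone 3B: leave now via Highway 12 west.",
    ),
    topic="sonoma-county",                             # devices subscribe to county topics
)

message_id = messaging.send(alert)                     # FCM handles the per-device fan-out
print("queued:", message_id)
```

The design choice worth noting is that the sender publishes once per topic rather than looping over users, so delivery time does not grow with the size of the audience.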

Read More: Arc’s Sport Boat, an Aspiring Watercraft of The Future

 

Microsoft Files Suit Against Hundreds for Abuse of Azure OpenAI Services

Microsoft has filed suit over what it describes as abuse of its AI services, alleging that a group stole credentials and "pierced" critical safety measures on its platform. The ten Doe defendants, unnamed in the suit, allegedly stole user credentials to gain access to Microsoft's Azure OpenAI Service.

API Key Theft and Hacking-as-a-Service

According to Microsoft, the defendants systematically stole API keys, the fundamental means of authenticating to its AI services. The stolen keys were allegedly central to a "hacking-as-a-service" operation. One main ingredient of that operation was De3u, software that let users generate images with OpenAI's DALL-E without writing any code.
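To see why stolen keys are so valuable, consider how an Azure OpenAI request is authenticated: the key travels in a request header, so whoever holds it can generate content and run up usage against the victim's resource. The sketch below uses placeholder endpoint, deployment, and API-version values for illustration; none of these details come from Microsoft's complaint.

```python
# Illustration of API-key authentication against an Azure OpenAI image endpoint.
# Everything here (endpoint, deployment name, api-version) is a placeholder.
import requests

ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical Azure resource
DEPLOYMENT = "dall-e-3"                                  # hypothetical deployment name
API_KEY = "<api-key>"                                    # the only credential required

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor lighthouse at dusk", "n": 1, "size": "1024x1024"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # usage is billed to whoever owns the key
```

Because the key alone grants access, rotating compromised keys and scoping them narrowly is the standard defense once theft is detected.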

The complaint also says De3u was attractive because it could bypass Azure OpenAI's content moderation system, making it possible to generate malicious and illegal content. Microsoft alleges that the defendants' acts violated several statutes, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering laws.

Microsoft’s Counteroffensive and Legal Action

Microsoft said the misuse of its API keys was discovered in July 2024 and that steps were taken to remedy the situation. The company recently received court permission to take control of a domain central to the defendants' operations, allowing it to gather evidence and dismantle the remaining technical infrastructure used in the scheme.

The De3u repository has also been removed from GitHub, which Microsoft owns, and the company has instituted new security measures to safeguard its Azure OpenAI services.

Moving Forward

In addition to seeking damages, Microsoft is pursuing injunctive relief to prevent further misuse of its services. The tech giant emphasized its commitment to ensuring the integrity of its platforms and protecting its customers from malicious activities.

This lawsuit highlights the increasing challenges tech companies face in securing their AI platforms as they become more widely adopted across industries.

xAI Testing Standalone iOS App for Grok Chatbot

xAI, the artificial intelligence startup founded by Elon Musk, is reportedly testing a standalone app for its generative AI chatbot, Grok, exclusively for Apple’s iOS platform. Currently, the app is in beta and available in select countries. This move marks an effort to untether Grok from the X platform, offering users a more direct way to access its advanced AI capabilities.

Last month, X introduced a free version of the AI chatbot in limited regions, and earlier this month, it was made available for everyone. Now, xAI seems to be taking the next step by testing a dedicated app for Grok, further expanding its reach. Alongside the app, a dedicated website for the chatbot has also been unveiled, displaying a “coming soon” message, signalling potential web-based access shortly.

Grok’s AI Features

The Grok chatbot boasts a range of generative AI features, recently enhanced with image generation capabilities. Using an autoregressive image model, Grok predicts image tokens conditioned on text and image inputs. It can also generate images from existing inputs, enabling creative outputs tailored to user requirements.
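"Autoregressive" simply means the model produces one token at a time, each conditioned on everything generated so far. The toy loop below illustrates that mechanic with a fake model and a tiny made-up vocabulary; it is only a sketch of the general technique, not xAI's actual implementation.

```python
# Toy autoregressive decoding loop: sample the next token from a distribution
# conditioned on all previously generated tokens, append it, and repeat.
import numpy as np

VOCAB_SIZE = 16                         # hypothetical tiny token vocabulary
rng = np.random.default_rng(0)

def next_token_distribution(context):
    """Stand-in for a trained model: returns probabilities for the next token."""
    logits = rng.normal(size=VOCAB_SIZE) + 0.1 * len(context)  # fake conditioning
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(prompt_tokens, steps=8):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probs = next_token_distribution(tokens)                 # condition on the prefix
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))     # sample the next token
    return tokens

print(generate([3, 7, 1]))  # e.g. prompt tokens followed by newly sampled tokens
```

In a real image model the "tokens" are discrete codes that decode into patches of an image, but the generate-one, condition, repeat loop is the same idea.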

Free Access with Limitations

While the standalone app is still in beta, users can try out Grok’s free version, albeit with some restrictions. The current version allows users to:

●     Ask 10 questions every 2 hours

●     Analyze up to three images per day

A Competitive Leap

According to the Wall Street Journal, creating a standalone app aligns with X’s vision to decouple Grok from the main platform, likely aiming to compete with standalone AI assistants like OpenAI’s ChatGPT. This move could help xAI build its presence in the AI space while attracting a broader audience through dedicated tools like the iOS app.

Stay tuned for further updates as xAI’s Grok continues to evolve. Users can now explore the beta version on iOS or the free version on the X app.

Elon Musk Acknowledges Limitations in AI Training Data

Elon Musk's recent admission that training data for AI is running out has cast a shadow over further advances in artificial intelligence. "We've now exhausted basically the cumulative sum of human knowledge … in AI training," Musk said late Wednesday during a livestreamed conversation on X with Stagwell. With demand for AI growing rapidly, developers will have to improve or rethink how AI is built so it remains productive even as high-quality training data becomes scarcer. That reduced availability could affect the current and future performance of AI systems that depend heavily on such data, potentially hindering their growth.

Musk vs. OpenAI and the Future of AI

Musk's comments come amid an escalating conflict between him and Sam Altman, the head of OpenAI, the organization behind the revolutionary AI tool ChatGPT. Musk, an early investor in OpenAI, has been critical of the organization's change in direction. A lawsuit is ongoing, with Musk alleging that OpenAI has abandoned its original mission as a nonprofit lab focused on the public good in favour of profit-driven objectives. The case has raised questions about who controls the use and future of AI development.

2017 Power Struggle, Musk’s Attempt to Lead OpenAI

The feud is rooted in a 2017 power struggle, when Musk sought to place himself at the helm of OpenAI. After his attempt was rebuffed, he left the organization and eventually founded his own AI company, xAI, to compete with his former partner turned rival.

Musk Challenges OpenAI and Microsoft’s Monopoly

OpenAI has since entered into a powerful partnership with Microsoft, leveraging supercomputing to develop AI. Musk’s lawsuit also accuses OpenAI and Microsoft of benefiting from his early contributions to the organization while reaping the financial rewards. He argues that the combined forces of OpenAI and Microsoft are creating a monopoly that is being used unfairly against his own AI ventures.