Musk’s DOGE Releases AI Chatbot to the General Services Administration

Musk’s DOGE has started rolling out an AI chatbot to General Services Administration (GSA) employees. The GSA manages government real estate and certain IT efforts. The chatbot is intended to help employees automate parts of their daily work, and an internal memo told staff which tasks they could automate with it.

The chatbot, GSAi, gives users three models to work with, and the main idea is to use it to analyze contracts and procurement data. It should be remembered that the GSA is one of many agencies that has faced job cuts; reportedly, around 100 workers were affected by what officials called “proper sizing.”

The AI vs. Workers Battle

There has long been a notion that AI will eventually replace human workers, and layoffs like these add fuel to such predictions. While AI is improving rapidly, it is still nowhere near human intelligence. However, AI has proven more proficient at tasks like data entry, since automated systems can process data faster than humans. AI chatbots can also do a better job with customer support and FAQ responses. Robotic process automation, basic content writing, and bookkeeping are other areas where AI outshines humans.

AI Creating Jobs?

The counterargument is that while AI replaces human workers in some capacities, it also creates jobs. AI development and engineering, for example, have seen rising demand for people with the required skill set. On the data side, prompt engineering and AI-assisted content creation are skill sets that are very hot in the market thanks to AI development. The implication is that humans must move into more technical roles and leave simple, repetitive tasks to AI. It remains to be seen whether workers can step up the ladder and fill those more technical roles in significant numbers.

Is DOGE a Villain?

Elon Musk’s DOGE, which stands for the Department of Government Efficiency, has come under heavy criticism. DOGE has worked with the government to cut costs, partly by replacing workers with AI. There have been lawsuits against DOGE’s actions; some succeeded, while others were not so fortunate. Federal government contractors are wary of DOGE and have been outspoken about it, since it is their livelihood that is under threat.

Even some businesses that are not directly affected have raised concerns: if the government departments they depend on lose staff, their own business could slow down as well. For example, drugmakers want to ensure the government keeps enough staff so that the drug approval process is not delayed.

OpenAI’s ChatGPT Hits 400 Million Users by Doubling Its User Base in Six Months

When OpenAI first introduced ChatGPT to the world in November 2022, it took tech circles around the world by storm and was considered the fastest-growing consumer application in history. While the chatbot’s early success stemmed from curiosity and novelty, it was widely debated whether that initial buzz would continue or fade like many other trends. Every indication over the past year has put that concern to rest: ChatGPT is here to stay and continues to grow at exceptional speed.

With immense progress in what AI can do and an upgrade to a more user-friendly interface, the chatbot has doubled its active users in just under six months, solidifying its position at the top of the AI chatbot game. According to a report published by American VC firm Andreessen Horowitz (a16z), ChatGPT doubled its weekly active users in less than six months; the report points to a very impressive revival of the chatbot in the second half of 2024, driven by strategic updates and releases.

Speedy User Growth:

ChatGPT was originally famous for being the fastest app to cross 100 million monthly active users, a triumph it achieved within just two months of its November 2022 debut. By November 2023 it had reached 100 million weekly active users, rising to 200 million by August 2024. Even that increase was outdone by the most recent surge: in February 2025, ChatGPT reached an incredible 400 million weekly active users.

Key Growth Drivers:

Major product releases in 2024 were key drivers of the increase in demand for ChatGPT:

  • Release of GPT-4o (April-May 2024): The launch of this AI model drew a sharp rise in user engagement since ChatGPT was able to handle text, image, and audio input with a greater level of accuracy and efficiency.
  • Advanced Voice Mode (July-August 2024): Launching a more natural, conversational voice feature contributed significantly to user interest and retention.
  • o1 Model Series (September-October 2024): These enhancements were the cherry on top, creating an extra spike in usage, especially among enterprise and professional users.

ChatGPT’s user base continues to demonstrate a steady growth trend on mobile. There has been an approximately 5% to 15% increase in mobile users every month. Out of the 400 million weekly active users, about 175 million are accessing ChatGPT from mobile devices.
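The reported monthly growth band can be turned into a rough projection. This is a minimal sketch, assuming the article’s 175 million mobile figure as a starting point and a six-month horizon chosen purely for illustration:

```python
# Illustrative only: project mobile weekly active users under the reported
# 5%-15% monthly growth band. The 175M starting figure comes from the a16z
# report cited above; the 6-month horizon is an assumption for the example.
def project_users(start_millions: float, monthly_rate: float, months: int) -> float:
    """Compound start_millions by monthly_rate for the given number of months."""
    users = start_millions
    for _ in range(months):
        users *= 1 + monthly_rate
    return users

low = project_users(175, 0.05, 6)    # ~234.5M after 6 months at 5%/month
high = project_users(175, 0.15, 6)   # ~404.8M after 6 months at 15%/month
print(f"6-month range: {low:.0f}M to {high:.0f}M mobile users")
```

Even the low end of the band would add roughly 60 million mobile users in half a year, which is why the growth-rate spread matters so much for forecasts.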

Competitive Landscape:

The industry has become quite competitive in AI chatbots, with emerging players like DeepSeek coming out of the launch pad strong. Within ten days of launch, DeepSeek ascended to the second position globally, and by February 2025 it had attained 15% of ChatGPT’s mobile user base. ChatGPT, nevertheless, maintains a strong lead in both web and mobile categories.

According to data from market intelligence provider Similarweb, ChatGPT ranks No. 1 in both unique monthly web visits and mobile active visitors. DeepSeek’s per-user engagement was measured as slightly higher than that of competitors like Perplexity and Claude; even so, ChatGPT remains dominant.

Future of ChatGPT:

ChatGPT isn’t just a standout product; it is also a sign of how AI is playing an increasingly important role in daily life, whether for professional work, learning and education, or everyday personal needs. Millions of users find value in the chatbot’s growing capabilities as these features become more widely adopted.

It will shape the next round of interactions: more personal, real-time, and woven into the digital ecosystems of an ongoing technology revolution. With AI adoption trending across industries, ChatGPT’s unparalleled growth suggests we have entered the age of generative AI, where fast-paced technological development continues to redefine how we interact and stay productive worldwide.

Google Co-Founder Larry Page’s New AI Startup Dynatomics Aims to Transform Manufacturing

The co-founder of Google has always done things out of the ordinary by betting big on the future, be it revolutionizing search, funding self-flying taxis, or making moonshot investments. According to The Information, Larry Page has thrown himself back into tech with a new artificial intelligence startup, and he is creating waves with Dynatomics, a stealthy AI company already showing early signs of changing how products are designed and manufactured.

If successful, this effort could pave the way for a new world where AI no longer merely assists in manufacturing but, even more strikingly, conceives, optimizes, and leads the production of real, physical objects at an efficiency never before realized. If robots designing robots isn’t the start of a sci-fi movie, I don’t know what is. As companies rush to integrate AI into software, healthcare, and finance products, Page’s vision targets AI-driven manufacturing, a unique area that remains largely untouched.

AI-Powered Manufacturing:

Page is working with a select group of engineers to create AI that will eventually generate highly optimized product designs and transition them effortlessly to factory production. Chris Anderson, former CTO of Kittyhawk, the Page-backed electric aircraft startup, leads the effort. The startup is currently operating in stealth mode; little is known about it, but early indications offer a hopeful glimpse of an AI-centered future that significantly streamlines manufacturing processes, improving efficiency and reducing material waste.

Expanding Role of AI in Manufacturing:

Larry Page is not alone in pursuing the AI-manufacturing nexus. A few other companies are developing similar AI-based solutions:

  • Orbital Materials is building an AI platform to discover advanced materials for next-generation applications, including batteries and carbon-capture cells.
  • PhysicsX offers AI simulation tools to engineers across industries, including automotive, aerospace, and materials.
  • Instrumental uses AI and computer vision to detect quality-control issues and abnormalities on the factory floor in real time, helping improve production quality and efficiency.

Catalyst for the AI Industry:

AI has done remarkable things for the software, healthcare, and finance industries, and much attention is now given to its potential to spark the next industrial revolution. Dynatomics could be the catalyst for Larry Page’s vision of bringing AI into industrial design, with an eye toward smarter, faster, and more sustainable production methods.

Dynatomics isn’t merely another AI startup; it could signal a turning point in the way physical products are designed and built. With ample funding from Larry Page, an elite team of engineers, and a clear focus on AI-driven optimization, the startup is well positioned. If the world truly needs AI to tackle its complex manufacturing problems, Dynatomics could shape the future of that industry. After all, given Page’s history of backing transformational technologies, this startup is one to watch.

Google Reports AI Deepfake Terrorism Complaints to Australia’s eSafety Commission

In an era where artificial intelligence has reshaped the digital landscape, the concerning part is the steady stream of ugly misuse issues it keeps surfacing. Big technology companies are under increasing pressure to stamp out every malicious application of the technology, be it deepfake terrorism propaganda or AI-generated child sexual abuse material. Google has now provided a rare glimpse of the scale of AI abuse: hundreds of user reports about its Gemini products relate specifically to such disturbing content. The disclosure to Australia’s eSafety Commission raises immediate questions about AI governance, regulatory oversight, and the ethical responsibilities of tech companies.

Over an almost year-long reporting period from April 2023 to February 2024, Google informed an Australian authority that it had received more than 250 complaints globally alleging that its artificial intelligence software, Gemini, had been misused to produce deepfake terrorism-related content. Google submitted the report to the Australian eSafety Commission under a regulatory requirement: technology companies must report on their harm-minimization efforts or face penalties in Australia.

Beyond the deepfake terrorism complaints, dozens of user reports warned that Gemini had been used to create child sexual abuse material. The eSafety Commission characterized Google’s report as a “world-first insight” into how the new technology is being used for harmful and illegal content and activities. Julie Inman Grant, the eSafety Commissioner, said,

“This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated”.

Google’s AI Safety Measures Face Challenges:

According to Reuters, the report says Google received a total of 258 complaints from users regarding suspected AI-generated deepfake terrorism content, as well as 86 complaints concerning AI-generated child exploitation or abuse material. However, Google has not disclosed how many of these complaints were verified. In an emailed statement, a Google spokesperson emphasized the firm’s policy against generating or distributing content tied to violent extremism, child exploitation, and any other illegal activities. The spokesperson added,

“We are committed to expanding on our efforts to help keep Australians safe online.”

According to the Google Spokesperson,

“The number of Gemini user reports we provided to eSafety represent the total global volume of user reports, not confirmed policy violations.”

Google now employs a hash-matching system to automatically detect and remove AI-generated child abuse material. However, the company does not use the same system to detect terrorist or violent extremist content generated with Gemini, a limitation the regulator pointed out.
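The general idea behind hash-matching can be shown with a toy sketch. To be clear, this is NOT Google’s system: production deployments use perceptual hashes (PhotoDNA-style fingerprints that survive resizing and re-encoding), while this illustration uses exact SHA-256 digests and a made-up blocklist entry:

```python
import hashlib

# Generic hash-matching sketch (illustrative only, not Google's pipeline).
# The blocklist holds digests of known-bad content; incoming content is
# hashed and checked for membership. Exact hashing breaks if even one byte
# changes, which is why real systems use perceptual hashing instead.
BLOCKLIST = {
    hashlib.sha256(b"known-abusive-image-bytes").hexdigest(),  # hypothetical entry
}

def is_flagged(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest matches a known-bad hash."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

print(is_flagged(b"known-abusive-image-bytes"))  # True: exact byte match
print(is_flagged(b"harmless-cat-photo"))         # False: not in the blocklist
```

The approach only works for content that has already been catalogued, which is exactly why the regulator flagged its absence for newly generated extremist material.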

Regulatory Pressure and Industry Scrutiny:

Generative AI tools like OpenAI’s ChatGPT, which burst into public attention in late 2022, have triggered global concern among regulators about AI misuse. Governments are calling for stricter measures and regulations to ensure the technology is not used for terrorism, fraud, deepfake pornography, or other forms of abuse. Australia’s eSafety Commissioner has previously fined platforms like Telegram and X (formerly Twitter) for falling short of reporting requirements. X has already lost an appeal against its A$610,500 penalty but intends to challenge the ruling again; Telegram has also signaled its intention to contest its penalty.

AI technologies are racing ahead, and the safeguards against their misuse must keep pace. That means strengthening regulations, improving AI monitoring systems, and demanding greater transparency from technology firms. Eyes around the world are now fixed on how the future of AI governance will balance innovation against the ethical responsibilities of companies.

Meta’s Expansion of Anti-Fraud Facial Recognition Tool in UK, a Security Measure or Privacy Risk?

Meta has once again stepped into the fraught realm of facial recognition, which has hardly been free of controversy in the past. After years of bruising regulation and billion-dollar settlements, the tech giant is taking the AI-powered route to add facial recognition back to its suite of tools. This time the stated purpose is reducing online scams and account takeovers; is it really about user protection, or is this a strategic move to channel facial recognition back into public view under a different, more attractive guise? With Meta bringing the anti-fraud tool to the UK, questions of privacy, security, and corporate responsibility again put users in the spotlight.

Meta launched two new AI-powered features in October: one to combat celebrity impersonation scams and one to help users recover hacked Facebook and Instagram accounts. An initial trial covered other global markets, but the company has now expanded the experiment to the UK after engaging with regulators for a while and receiving approval to proceed. Meta is also extending its “celeb bait” protection, meant to prevent scammers from exploiting the names and likenesses of public figures, to a larger audience in countries where it was previously available. I guess it’s all fun and games until Meta’s facial recognition mistakes you for a celebrity and starts flagging your selfies.

Regulatory Hurdles and EU’s Future:

Meta’s choice to extend these technologies to the United Kingdom comes as legislation there evolves into a more welcoming environment for AI-oriented innovation. The company has not yet decided to unveil the facial recognition features in the EU, another key jurisdiction, given its rigid stance on data protection. The EU’s strict treatment of biometric data under the General Data Protection Regulation (GDPR) would add another layer of scrutiny ahead of any further expansion of the test.

Meta said, “In the coming weeks, public figures in the UK will start seeing in-app notifications letting them know they can now opt in to receive the celeb-bait protection with facial recognition technology.” Participation in this feature, as well as the new “video selfie verification” option available to all users, will be entirely optional.

Meta’s AI Strategy and History with Facial Recognition:

Meta maintains that these facial recognition tools exist strictly to combat fraud and secure user accounts, yet the company has a long and mostly disreputable history of feeding user data into its AI models. Meta says its new facial recognition tool is for security, because obviously that’s the first thing we think of when we hear “Meta” and “privacy” in the same sentence. That reputation has caused trust issues: they promise to delete facial data immediately after use, just like they promised to protect user privacy before, right? First they took our data; now they want our faces. What’s next, a Meta DNA test?

In October 2024, when these tools were launched, the company assured users that any facial data used for fraud detection would be deleted immediately after a one-time comparison, with no possibility for its use in other AI training. Monika Bickert, Meta’s VP of Content Policy, wrote in a post,

“We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose”.
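The described “one-time comparison” can be illustrated with a toy sketch. Everything here is an assumption for illustration, not Meta’s actual pipeline: the face embeddings, the cosine-similarity metric, and the 0.9 threshold are all hypothetical:

```python
import math

# Hypothetical "compare once, then discard" sketch. Real face-matching systems
# use learned embeddings from a neural network; these short vectors and the
# 0.9 threshold are stand-ins for illustration only.
def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_time_match(ad_face: list[float], profile_face: list[float],
                   threshold: float = 0.9) -> bool:
    """Compare the two embeddings once, then drop them regardless of outcome."""
    match = cosine(ad_face, profile_face) >= threshold
    del ad_face, profile_face  # symbolic: discard the facial data after the single check
    return match

print(one_time_match([0.1, 0.9, 0.3], [0.1, 0.9, 0.3]))  # True: identical embeddings
```

The security argument hinges entirely on the deletion step actually happening and being verifiable, which is precisely the part outside observers cannot inspect.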

The deployment comes as Meta aggressively implements AI across its operations. The company is building its own large language models, is heavily invested in improving products through AI, and has reportedly been working on a standalone AI app. In parallel, Meta has stepped up its advocacy for AI regulation and embraced an image of responsibility.

Addressing Criticism of the Past:

Given its track record, Meta is likely introducing facial recognition as a security measure partly as a step toward improving the company’s image. For years, the company has been criticized for making it easy for fraudsters to run scam ads on its advertising platform, many of them misappropriating images of celebrities to promote dubious crypto investments and other schemes. Framing the new tools as solutions to such problems may soften public perception of facial recognition technology.

Facial recognition is a very sensitive area for the company. Last year, Meta agreed to pay an enormous $1.4 billion to settle a lawsuit in Texas over unlawful biometric data collection allegations. Before that, Facebook had shut down its decade-old photo-tagging facial recognition system in 2021 under strong legal and regulatory pressure. While Meta discontinued that tool, it held on to the underlying DeepFace model, which has now resurfaced in its latest offerings.

Meta’s Facial Recognition, a Thin Line between Security and Surveillance:

Meta’s facial recognition push highlights the thin line between technological innovation and invasion of privacy. While reducing fraud and securing accounts sounds good, it raises the larger question of biometric data collection. With its not-so-glamorous past of biometric data handling and billion-dollar settlements to match, Meta paints the picture of a tech giant that has always tested the limits of its users’ trust. Deleting facial profiles right after collecting them sounds good, but who are we kidding? If history teaches us anything, Meta’s ambitions almost always go way beyond its upfront promises.

Facial recognition might serve a purpose in fraud detection, but it can indisputably also serve mass surveillance, with real potential for abuse under weak regulatory oversight. The balance between security and privacy is fragile, and history shows that once a data collection method proves effective, its use rarely stays confined to the initial purpose. Companies in possession of personal data have repeatedly misused it, or expanded its use far beyond the original intent.

With governments engaged in such areas and regulatory bodies under question, users must remain alert in demanding accountability and transparency before accepting yet another layer of AI-based control. If accepted as the new norm, anti-fraud facial recognition might be the next step in Meta’s rehabilitation, or just another entry in its very long history of AI controversy. With AI technology advancing ever deeper into our lives, the need for stronger safeguards and binding rules has never been greater.

Musk’s Attempt to Stop OpenAI’s Transition into a For-Profit Entity Turned Down by Judge

Elon Musk co-founded OpenAI and, per the lawsuit, was a major financial contributor to it from 2015 to 2020. Musk claims the original vision of OpenAI was to operate as a non-profit entity with no personal gain. When ChatGPT launched, Sam Altman began commercializing the product with a monthly fee for pro users. At that time, Musk advocated creating a separate for-profit entity, but differences emerged while finalizing the plan, as Musk wanted majority ownership. In 2018, Musk resigned from the OpenAI board after the company refused to fold OpenAI into Tesla.

In February 2024, Musk filed a lawsuit against OpenAI and its leadership. It was withdrawn in June 2024, and a revived lawsuit was filed in August 2024 with several allegations, one of them concerning the transition to a for-profit entity. Musk argued that OpenAI was deviating from the original plan of remaining a non-profit and that its vision of serving the masses was not being fulfilled.

The judge, Yvonne Gonzalez Rogers, dismissed the for-profit claim because Musk failed to provide substantial evidence in support of his allegation. While the attempt to block OpenAI’s transformation was denied, the broader lawsuit remains active. The judge has offered to expedite the trial so the case can be concluded early and avoid harm to the business.

It should be remembered that Elon Musk was among the 11 co-founders, alongside Sam Altman and Greg Brockman. Musk invested more than $44 million in OpenAI, per point 82 of the lawsuit. In our view, any organization pursuing sustained technological advancement in AI needs to be a for-profit entity: the operational costs of research and development are extremely high, and inconsistent government and investor policies can hamper progress.

This legal battle looks like a personal feud between Elon Musk and Sam Altman. Musk has no solid ground to stop OpenAI from operating as a for-profit entity, and the judge said as much. Interestingly, as recently as February 2025, Musk offered $97.4 billion to take over OpenAI and preserve the AI research lab’s original mission; the board rejected the offer.

In an interview with Bloomberg, Sam Altman said this of Musk:

“I think he is probably just trying to slow us down. He obviously is a competitor; you know, he’s working hard and he has raised a lot of money for xAI, and they are trying to compete with us from a technological perspective and in getting the product into the market. I wish he would compete by building a better product, but I think there’s been a lot of tactics, you know, many, many lawsuits, all kinds of crazy stuff, and now this. And we will try to just put our heads down and keep working.”

With stiff competition from prominent players like DeepSeek, Anthropic, Google DeepMind, and Meta AI, this legal battle is a distraction OpenAI can ill afford. Sam Altman needs to keep a clear head and ensure the company keeps progressing to hold the competition at bay while the case proceeds. At the moment, they seem to be moving in the right direction. Altman tweeted,

“we are likely going to roll out GPT-4.5 to the plus tier over a few days. There is no perfect way to do this; we wanted to do it for everyone tomorrow, but it would have meant we had to launch with a very low rate limit. We think people are gonna use this a lot and love it.”

On the other hand, Musk needs to keep driving his xAI company forward. Competition is heating up, with the recent claim that DeepSeek’s model cost only $6 million in computing power. AI is heading into an era of lower computing costs, but at the same time data volumes are multiplying. The challenge for AI companies is to keep evolving and inventing new techniques to stay ahead; otherwise, they will be left far behind in this global race.

OpenAI Faces Legal Scrutiny over Copyright Claims, as Alec Radford gets Subpoenaed

Who knew AI models would end up needing copyright lawyers more than programmers? The more artificial intelligence transforms industries, the more fires it ignites in the legal arena over how these models are trained. The war over AI and intellectual property has come to a head as models stand accused of exploiting the works of human creators. In this high-profile copyright case, former OpenAI researcher and leading generative AI developer Alec Radford has been issued a subpoena, shedding further light on the murky details of AI training data, fair use, and the future of generative models. Depending on how it turns out, the case might quite literally become a turning point for AI ethics, legal frameworks, and the protection of creative works in the digital age.

According to a court filing, Radford received the subpoena on 25 February, a key development in the lawsuit over OpenAI’s use of copyrighted materials in training its AI models. The case, “In re OpenAI ChatGPT Litigation,” was filed in the U.S. District Court for the Northern District of California and was initiated by several renowned authors, including Paul Tremblay, Sarah Silverman, and Michael Chabon. They claim OpenAI used their literary works without authorization to train its AI models, a copyright violation, and assert that ChatGPT produces text very similar to theirs without giving any credit, amounting to direct copyright infringement.

Radford’s Contribution to OpenAI:

Radford, who recently left OpenAI to pursue independent research, was a key contributor in building the Generative Pre-trained Transformer (GPT) architecture on which OpenAI products such as ChatGPT run. His other notable contributions include OpenAI’s speech-recognition model Whisper and its DALL·E image-generation model. Having joined OpenAI in 2016, Radford was instrumental in developing the company’s AI capabilities.

Radford’s work as lead author of OpenAI’s original paper on Generative Pre-trained Transformers provided the foundation for the AI models behind a multitude of applications today. His involvement in the lawsuit suggests the plaintiffs are seeking insider knowledge of OpenAI’s training processes and, more specifically, its use of copyrighted content in building those models.

Legal Feuds:

The irony is that OpenAI needs human lawyers to defend its non-human intelligence. As OpenAI has kept up its defense against the copyright claims, the legal storm has intensified. Last year, the court dismissed two of the claims against OpenAI but allowed the direct copyright infringement claim to proceed. The plaintiffs’ legal team is now seeking testimony from former OpenAI personnel to further support their case.

Radford is not the only big name caught in this legal battle’s net; so are Dario Amodei and Benjamin Mann, who left OpenAI to found Anthropic, an AI research company. Although the two former executives resisted, arguing the burden was too great, they remain answerable: this week a U.S. magistrate judge ruled that Amodei must undergo questioning about his past work at OpenAI in two separate copyright cases, including one brought by The Authors Guild.

Broader Implications and Issue of Fair Use:

If the lawsuit is decided in favor of the plaintiffs, it will have significant legal repercussions for the entire AI industry. Such a ruling would likely force AI companies to re-examine how they collect and use training data, leading to tighter controls, licensing arrangements with content creators, and a rethinking of copyright protection around AI-generated content.

At the heart of OpenAI’s defense is the fair use doctrine, which allows limited use of copyrighted materials without permission under certain circumstances. The plaintiffs counter that these AI models are commercial products generating revenue for OpenAI, which makes fair use a questionable argument. As AI-generated content becomes widespread, courts will have to draw the lines of fair use for machine learning and data scraping.

The outcome of this lawsuit will therefore affect both AI developers and content creators. If the courts determine that OpenAI’s use of copyrighted materials falls outside fair use, new regulations could follow, changing how AI models are trained and raising expectations of explicit licensing agreements with content creators. Conversely, a ruling in favor of OpenAI may further strengthen AI companies’ position to scrape enormous amounts of data with minimal oversight.

All things considered, the case could be a harbinger of change within the AI sector. It sits within the much wider landscape of generative models, including generative adversarial networks and diffusion models, that raises ethical and legal questions about training data. OpenAI claims its processes are protected by fair use, but the sticking point is transparency about data sourcing, especially regarding potential violations of intellectual property rights.

Trump Administration Cuts AI Funding, Threatening U.S. Innovation?

AI technology is progressing enormously, and the United States leads in technological innovation, but a recent move by the Trump administration seems set to jeopardize that position. Cutting key personnel at the National Science Foundation (NSF) and slashing research funding has alarmed the scientific community. With the administration dismissing NSF employees specializing in AI, experts now fear that disruptions to AI funding may stall advancement in the field, with heavy implications for national security, economic growth, and global competitiveness.

With tensions already brewing, the clash between AI scientists and policymakers emphasizes the need for consistent funding of scientific research. A significant impact is expected on the Directorate for Technology Innovation and Partnerships, the office that plays an important role in channeling federal grants to AI research.

Most of the review panels planned to evaluate and approve funding for AI research projects have been canceled or postponed, meaning extensive delays in financial support for many projects. This disruption would set back research in machine learning, robotics, and automation, which are critical to national security, health, and industrial innovation.

Criticism of Funding Reductions:

Experts and researchers working in AI have strongly condemned the administration’s grant reductions, especially the cuts driven by Elon Musk’s Department of Government Efficiency. Musk, a known champion of AI, has been accused of indirectly disrupting the research ecosystem through his push for funding restrictions. Many researchers fear the cuts could have long-term implications for the United States’ ability to stay on top while other countries make large-scale investments in the technology.

Geoffrey Hinton, an AI pioneer and Nobel Laureate, stated in a post on X,

“Musk to be expelled from the British Royal Society because of the huge damage he is doing to scientific institutions in the U.S.”

Hinton described the damage to US scientific institutions’ ability to progress and maintain their integrity as a crime. Similar opinions have been voiced by AI researchers and academics, who note that without stable government funding, groundbreaking AI discoveries might slow down, giving countries like China the incentive to take the driving seat in the field.

Musk’s Reaction:

Musk quickly defended his views on efficient funding in response to Hinton’s remarks while also conceding that he could be wrong. Musk responded to Hinton’s post:

“Only craven, insecure fools care about awards and memberships. History is the actual judge, always and forever. Your comments above are carelessly ignorant, cruel and false. That said, what specific actions require correction? I will make mistakes, but endeavor to fix them”.

Musk’s outburst has sparked further debate in the tech and research communities. Some believe slow government processes need to be pushed to reduce wasteful spending, even if that requires harsh cuts; others believe AI research needs reliable long-term funding. The remarks have generated renewed interest in the ethical stakes of private-sector influence on public funding for science, raising the broader debate about how AI will be governed in the future.

Consequences of AI Funding Cuts:

The current controversy has fed wider debates within the scientific community over government overreach versus the appropriate level of financial support for AI research. Experts warn that any funding disruptions will leave the U.S. less competitive in artificial intelligence just as global competition intensifies. Countries such as China and the European Union have sharply increased their research budgets for AI applications, defense, cybersecurity, and automation.

It remains unclear whether the administration will reverse course in light of the flood of criticism. For now, the growing backlash from the AI research community and policymakers indicates that the quarrel over funding for AI research is far from over. The next few months will show whether the U.S. retains its competitive position in AI or whether short-term funding decisions carry long-term costs for innovation and economic leadership.

DeepSeek Claims 545% AI Profit Margin After Rapid Industry Rise

Just a few months ago, DeepSeek was a little-known name in AI, but that changed in January when the Chinese startup launched an AI model that challenged OpenAI’s dominance. Despite operating under U.S. trade restrictions, DeepSeek developed a model that reportedly matched OpenAI’s GPT-4 (o1 variant) on certain benchmarks, grabbing headlines and briefly overtaking ChatGPT on Apple’s App Store rankings. DeepSeek is making another bold claim about its profitability this time. The company recently revealed that its AI models supposedly generate an eye-watering 545% profit margin. But there’s a catch: the number is based on theoretical income rather than actual revenue.

DeepSeek’s 545% Profit Claim: The Fine Print

In a post on X (formerly Twitter), DeepSeek claimed that if all AI usage over 24 hours had been billed under its R1 model pricing, the company would have earned $562,027 in daily revenue. Meanwhile, leasing the required GPUs (graphics processing units) would have been only $87,072—resulting in the headline-grabbing 545% cost-profit margin. However, DeepSeek admitted in a longer GitHub post that its actual revenue is much lower due to the following:

Nighttime discounts reduce revenue during off-peak hours.
Lower pricing for the V3 model, which undercuts theoretical income.
Free access to web and app services, meaning only a portion of users are monetized.
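As a sanity check on the figures quoted above, the 545% number follows directly from the daily revenue and GPU-cost estimates; this is a quick illustrative calculation, not anything from DeepSeek’s own materials:

```python
# Figures quoted from DeepSeek's posts (theoretical, not actual, revenue).
daily_revenue = 562_027  # USD, if all usage were billed at R1 pricing
daily_gpu_cost = 87_072  # USD, estimated cost of leasing the GPUs

profit = daily_revenue - daily_gpu_cost
margin_pct = profit / daily_gpu_cost * 100  # margin relative to cost

print(f"Daily profit: ${profit:,}")              # Daily profit: $474,955
print(f"Cost-profit margin: {margin_pct:.0f}%")  # Cost-profit margin: 545%
```

Note that the margin is expressed relative to cost, which is why it exceeds 100%; as a share of the theoretical revenue, the profit would be roughly 85%.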

The GitHub post also outlined DeepSeek’s technical approach to improving AI efficiency, focusing on higher throughput and lower latency. The company emphasized that its infrastructure is optimized for performance, but profitability still depends on how AI services are priced and used.

A Glimpse Into AI’s Profitability Debate

DeepSeek’s claim, while speculative, adds fuel to the ongoing discussion about the cost of AI and its potential for profitability. Training and running AI models require enormous computing power, often making them expensive. Tech giants like OpenAI, Google, and Anthropic have yet to prove whether AI chatbots can become sustainably profitable at scale. Yet, DeepSeek’s ability to develop a competitive AI model at a fraction of OpenAI’s cost already had analysts questioning the actual financials of AI research. Its latest claim of theoretical profitability further challenges the narrative that AI is a money-losing business.

DeepSeek’s AI and Market Impact

DeepSeek has already left a mark on the AI industry:

  • Its January model launch rattled Wall Street, raising concerns about AI development costs.
  • Its app briefly displaced ChatGPT at the top of Apple’s App Store rankings before settling at #6 in the productivity category, behind ChatGPT, Grok, and Google Gemini.

AI Monetization: Reality vs. Hype?

DeepSeek’s numbers suggest AI models could be extremely profitable under the right conditions, but whether this translates to sustainable revenue growth remains unclear. With the race for AI profitability heating up, one key question remains: Are AI startups truly on the brink of massive profits, or are these numbers just hopeful projections? Let us know your thoughts in the comments.

Read More: OpenAI to Integrate Sora’s AI Video Generator into ChatGPT

Samsung Galaxy A56 Unveiled with AI Upgrades at a Budget-Friendly Price

Not too long ago, AI-powered features were found only in expensive phones. Samsung is changing that by bringing capable AI tools to its affordable Galaxy A56, A36, and A26 models. Though these phones lack major hardware overhauls, the objective is clear: Samsung wants to bring smart AI capabilities to more people without the top-tier cost. Is this enough to distinguish them in the competitive mid-range phone market? Let’s take a closer look.

Artificial Intelligence Capabilities Now Accessible in Budget Phones

Samsung Galaxy A56 in black

Samsung is promoting its new AI-driven tools as “Awesome Intelligence,” a playful way of saying that inexpensive phones are getting smarter. One of the handiest new features is Best Face, an AI-guided tool that lets you swap expressions in group photos. If someone blinks or gazes away, Samsung’s AI can fix it, similar to the Best Take feature on Google Pixel phones. Another significant addition is Google’s Circle to Search, which lets you search for anything on your screen simply by circling it.

This feature first appeared on Samsung’s high-end phones but is now available in the budget-friendly A-series. Samsung has also improved its AI-powered object eraser tool, making it simpler to clean up unwanted parts of your photos. Most significantly, Samsung is extending software support to six years of Android OS and security updates. This means A-series users will get longer-lasting performance and security, a major win for cost-conscious buyers.

Design and Hardware: Small Adjustments, Big Impact

Samsung Galaxy A36 in Awesome Lavender

While AI is the headline feature, Samsung has made subtle design changes to enhance the overall look and feel of the A-series.

  • New Oval-Shaped Camera Module: Provides the phones with a more premium appearance.
  • Bigger Displays: All three models now feature a 6.7-inch Full HD+ display with a 120Hz refresh rate, making them smoother and more immersive.

The Galaxy A36 has a dazzling Awesome Lavender hue and boasts a vibrant 6.7-inch Full HD+ display.

Under the hood:

Samsung has added IP67 dust and water resistance to the A26 for the first time, making it more durable than before. The Galaxy A56 gets a fresh Exynos 1580 chipset, whereas the A36 uses the Snapdragon 6 Gen 3. Both the Galaxy A56 and A36 support 45W fast charging, unlike the A26, which lacks this feature.

Pricing and Availability

Samsung has strategically priced the A-series phones to cater to different budgets:

  • Galaxy A56: $499, launching later this year in the US (£499 in the UK on March 19th)
  • Galaxy A36: $399, available March 26th at Best Buy (£399 in the UK)
  • Galaxy A26: $299, debuting March 28th (£299 in the UK)

Is Samsung’s AI Push Enough to Make an Impact?

Samsung’s decision to bring AI to its budget phones is a smart move, but will it be enough to convince buyers? Many consumers still prioritize hardware aspects like camera quality, battery life, and overall performance over AI-powered features. However, as AI technology continues to shape how we use smartphones, Samsung may be setting a new trend that other brands will soon follow. Will AI features become the key selling point for budget phones? Or do buyers still prefer traditional hardware upgrades? Let us know your thoughts!

Read More: Microsoft Has Officially Announced Skype Shuts Down in May

OpenAI to Integrate Sora’s AI Video Generator into ChatGPT

In a move that could redefine AI-driven content creation, OpenAI has announced plans to integrate its video-generating platform Sora directly into ChatGPT, a company leader said during a session on Friday. This shift signals OpenAI’s expansion beyond text-based AI, merging video creation tools with its flagship chatbot to offer a more immersive and interactive user experience. Currently, Sora is available only as a standalone web app, launched in December 2024, which lets users generate short cinematic AI clips up to 20 seconds long. However, according to OpenAI’s product lead for Sora, Rohan Sahai, the company is working on bringing Sora to ChatGPT, making video generation more accessible to users.

Why Is OpenAI Merging Sora with ChatGPT?

  • A More Versatile ChatGPT: By adding AI video generation, OpenAI is positioning ChatGPT as a one-stop creative hub for text, images, and videos.
  • Expanding Sora’s Audience: Initially targeted at video production studios and creative professionals, Sora is now being geared toward everyday users and businesses.
  • Boosting ChatGPT Premium Subscriptions: OpenAI may limit high-quality video generation to paid tiers, encouraging more users to subscribe.
  • Advancing AI-Driven Creativity: The integration could pave the way for ChatGPT-powered storytelling, allowing users to generate videos directly from conversations.

How Will Sora Work Inside ChatGPT?

While OpenAI hasn’t provided a detailed roadmap, Rohan Sahai hinted at a few key points:

  • The ChatGPT version of Sora may offer limited editing tools compared to the full web app.
  • Users might not have full control over stitching and modifying clips.
  • OpenAI wants to keep ChatGPT intuitive and user-friendly, balancing simplicity with powerful features.

This strategic integration suggests OpenAI prioritizes accessibility over complex video production, making AI video generation as easy as chatting.

What’s Next? OpenAI’s Plans Beyond ChatGPT

Standalone Mobile App for Sora: OpenAI is hiring mobile engineers, hinting at a dedicated Sora app for smartphones.
AI-Powered Image Generator: OpenAI is working on a Sora-powered image generator, potentially surpassing DALL·E 3 in photorealism.
Upgrading Sora Turbo: OpenAI is actively developing Sora Turbo 2.0, promising faster, higher-quality video generation.

What This Means for Users

Seamless AI-powered content creation: Imagine generating text, images, and videos—all within ChatGPT.
Potential for businesses and creators: Marketers, educators, and content creators could automate video storytelling directly from ChatGPT.

A step toward AI-generated movies? – If Sora continues to evolve, it could be a major disruptor in digital media. While OpenAI hasn’t confirmed when Sora will be available inside ChatGPT, this move marks a huge leap for AI creativity and accessibility. Would you use ChatGPT to generate AI videos? Share your thoughts in the comment section.

Read More: OpenAI Unveils GPT-4.5 ‘Orion’ – The Next Leap in AI Evolution

China Startups Rush to Ride DeepSeek AI Boom

China’s technology sector is once again surging, lifted by a flood of optimism about its startup ecosystem following the breakout success of DeepSeek’s AI model and a rare appearance by President Xi Jinping in support of private enterprises. Venture capitalists who had pulled back amid severe regulation and broader economic uncertainty are now rushing to back the next generation of technologically ambitious startups.

Chinese technology startups are racing against each other to secure new rounds of funding from the recent popularity that DeepSeek’s AI breakthroughs have garnered, along with Xi’s endorsement of private enterprises. With AI innovations in the limelight, now is a time when investors and entrepreneurs are trying to accelerate the growth of China’s highly evolving tech field.

A few other major companies taking advantage include AI optics startup Rid Vision, brain-computer interface company AI CARE Medical, and robotics firm Shanghai Qingbao Engine Robotics, all of which are seeking onshore financing, as confirmed by Andrew Qian, CEO of New Access Capital, which has invested in all three firms. He said, “Many people are knocking at the doors of these AI companies, half discussing business cooperation, the other half talking about investment”. He added, “You can see from the DeepSeek case, that a batch of Chinese innovators with disruptive technologies is emerging… Previously, Chinese start-ups were nearly all ‘me too'”.

Revival of China’s Venture Capital Sector:

The buzz returning to AI-related businesses, including chipmakers, cloud service providers, and AI applications, has revived China’s domestic venture capital industry. The general investment outlook remains grim due to regulatory roadblocks for IPOs in China and geopolitical considerations that complicate offshore listings. Despite these problems, investor confidence received a much-needed boost after DeepSeek’s AI breakthrough and Xi’s meeting with business tycoons. For instance, New Access Capital has recently invested in a chip startup and in millimeter-wave antenna technology, and is also pursuing opportunities in rocket recovery technology in anticipation of the next big AI-driven breakthrough in these areas.

Companies that stand to gain from the advances in AI in China are at the center of the latest investment frenzy. In its record fundraising round, AI image generation platform LiblibAI reported securing hundreds of millions of yuan. AI-oriented medical startup SenseCare raised 100 million yuan, while the latest rounds of investments were also reported for chipmakers Aspiring and Hyseim.

Resilience within Venture Capital Landscape:

Other startups that have recently attracted investor attention include AI infrastructure provider Siliconflow, robotics startup Ruichi Smart Technology, and medtech startup Neurodome. This surge in VC activity suggests a potential change in trend after years of continuous decline in fundraising and investment.

China’s venture fundraising has been in a downward spiral since its historic peak in 2021. Preqin data shows yuan-denominated funds raised just $12.5 billion across 67 funds in 2024, a 91% drop from the $141 billion raised in 2021. Dollar-denominated funds fared even worse, raising a scanty $1 billion last year. Meanwhile, venture deal value stood at $229 billion in 2023, a 36% decline from the previous year and far below the $816 billion recorded in 2021.
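The percentage declines cited above check out against the Preqin figures; a quick illustrative calculation (the 72% drop in deal value versus the 2021 peak is implied by the numbers rather than stated directly):

```python
# Preqin figures cited above, in USD billions.
funds_2021, funds_2024 = 141, 12.5   # yuan-denominated fundraising
deals_2021, deals_2023 = 816, 229    # venture deal value

fund_drop = (funds_2021 - funds_2024) / funds_2021 * 100
deal_drop = (deals_2021 - deals_2023) / deals_2021 * 100

print(f"Fundraising decline, 2021 -> 2024: {fund_drop:.0f}%")  # 91%
print(f"Deal-value decline vs 2021 peak:  {deal_drop:.0f}%")   # 72%
```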

IPO exits, a major route for cashing out venture capital investments in China, have been badly affected by the country’s stringent rules and regulations, coupled with geopolitical uncertainties affecting offshore listings. However, DeepSeek’s AI breakthroughs have triggered a significant turnaround in sentiment. Zhongyan Huo, founder of Bonanza Capital, which has invested in an AI-powered garment design and marketing startup, said, “Since the launch of DeepSeek’s breakthrough AI model, the sentiment has improved a lot. People get more sanguine about China’s future … Stock bullishness made entrepreneurs more confident, and investors more willing to place bets”.

Risks and Regulatory Uncertainties:

Morgan Stanley cites signs of normalizing IPOs in China’s A-share market, but Huo is doubtful that IPO restrictions will be relaxed anytime soon. Offshore listings show improvement as well, yet they are still not viewed as free from geopolitical disturbance and shifting investor attitudes. Racing ahead, China’s AI industry demands a high-wire balancing act from both investors and startups, who must keep maneuvering through regulatory landscapes, geopolitical turbulence, and the growing challenges of an ever-changing tech ecosystem.

Read More: OpenAI Unveils GPT-4.5 ‘Orion’ – The Next Leap in AI Evolution

Baidu Set to Launch Ernie 4.5 AI Model in Mid-March, Adopting Open-Source Strategy

Baidu has long been a dominant player in China’s artificial intelligence landscape. In 2016, it pioneered AI-driven innovations like Baidu Brain and launched Ernie, a ChatGPT-style chatbot, in 2023. However, as AI competition heats up, Baidu is shifting gears to stay ahead. The company is preparing to launch its upgraded AI model, Ernie 4.5, in mid-March. This model will introduce improved reasoning and multimodal capabilities to more effectively process text, images, video, and audio.

Baidu had previously announced that it would gradually roll out the Ernie 4.5 series over the coming months, with a fully open-source release planned for 30 June. This represents a significant shift in the company’s approach. CEO Robin Li, who once believed in keeping AI models closed-source, has recently acknowledged that growing competition from the Chinese AI startup DeepSeek influenced Baidu’s decision to adopt open-source development.

Despite Baidu’s early entry into the AI chatbot arena, Ernie has struggled to secure mass adoption. DeepSeek, which has launched affordable AI models comparable to leading Western competitors, has further compelled Baidu to reassess its AI strategy. With the impending launch of Ernie 4.5, Baidu is making a clear statement—it is prepared to adapt and compete with both domestic challengers and AI leaders such as OpenAI and Google.

The AI race in China is intensifying, particularly after Alibaba recently announced that its video and image-generating AI model, Wan 2.1, would also be open-source. Baidu’s latest move suggests that the future of AI development in China may favour greater transparency and collaboration, a trend that could reshape the industry. As the official launch of Ernie 4.5 approaches in mid-March, the coming months will be crucial in determining whether Baidu’s strategic shift will enable it to strengthen its position in the AI landscape.

Read More: After R1’s Success, DeepSeek Fast-Tracks Launch of New AI Model R2


Meta’s Oversight Board to Assess Hate Speech Policy Changes

Meta, the parent company of Facebook, Instagram, and Threads, has long faced scrutiny over its content moderation policies, especially when it comes to hate speech and misinformation. Over the years, the company has tightened and loosened its regulations in response to public pressure, political discourse, and regulatory scrutiny. Now, Meta’s Oversight Board is preparing to review recent changes to the company’s hate speech policies on Facebook, Instagram, and Threads, marking a critical moment for content moderation on Meta’s platforms.

In January 2025, CEO Mark Zuckerberg introduced a policy shift aimed at allowing more expression on Meta-owned platforms. The update rolled back certain protections for immigrants and LGBTQ users, a move that has sparked debate over free speech versus platform safety.

The Oversight Board, an independent body established to review Meta’s policy decisions, has taken notice. It currently has four open cases related to hate speech and will use these cases to assess the impact of the company’s updated guidelines. According to a report by Engadget, the board’s decision could influence how Meta refines its content moderation approach moving forward.

Meta has a mixed record when it comes to adopting the Oversight Board’s recommendations. While the company is required to follow the board’s rulings on individual content cases, it has a limited obligation to make broader policy adjustments. This review will test whether Meta is willing to reevaluate its moderation approach or continue with its more lenient stance on content restrictions.

With the rise of misinformation, online harassment, and the political climate intensifying, the outcome of this review could influence how Meta shapes content regulation in the future. Whether the Oversight Board’s findings will result in actual policy changes remains to be seen.

Read More: Amazon Unveils Alexa+ AI Assistant to Revolutionize Smart Living

Amazon Unveils Alexa+ AI Assistant to Revolutionize Smart Living

There was a time when intelligent assistants were limited to setting alarms and playing music, while misunderstanding basic commands along the way. Amazon’s latest foray, Alexa+, promises something beyond simple task completion: a real agent with an AI engine. If all goes according to plan, the system will go beyond stating the weather to reserving tables, ordering groceries, and possibly remembering your birthday. Alexa+ represents a dramatic push into consumer AI agents, a new evolution of the voice assistant in which Alexa helps manage day-to-day activities.

The big announcement came in a keynote on Wednesday: Alexa+ is set up to handle everything from booking reservations to managing home maintenance, integrating a wide net of both first- and third-party services. If it succeeds, Alexa+ is poised to become the most far-reaching consumer AI agent, surpassing even the most ambitious competitors in capability and access. Amazon signals a future where intelligent AI assistants do not just perform tasks for users but also engage in so-called inter-agent behavior with other intelligent assistants for seamless integration across digital ecosystems.

Future of AI Assistance:

Amazon’s Alexa and Echo VP Daniel Rausch said, “We believe that the future is full of agents — we have believed this for some time. There will be many AI agents out there doing things for customers, many of them will have specialized skills … And we’ve also always believed that in a world full of AI, these agents should interact with each other. They should interoperate seamlessly for customers.”

The launch comes at a significant time for the company as it tries to revive Alexa, which hasn’t managed to generate much revenue despite years of investment. According to reports, Amazon’s hardware division has burned tens of billions of dollars, so Alexa+ could mark a turning point for the assistant.

AI Agents and Alexa+ Abilities:

Virtual assistants that act autonomously and take proactive action on the user’s behalf have become a much-discussed concept in the tech world. OpenAI and Anthropic have worked toward that goal with their AI models, but many implementations remain unreliable and inefficient, still requiring a human in the loop.

Amazon presents Alexa+ in a different light: a polished, intuitive assistant capable of performing tasks with minimal user involvement. The assistant showcased its ability to coordinate several information sources, in this case emails, calendars, and user preferences, automating mundane tasks with great efficiency. Some key capabilities showcased:

  • It automates your grocery shopping with Amazon Fresh, Whole Foods, and other retailers.
  • When products are offered at reduced prices, it smartly chooses to order them by itself.
  • It schedules bookings for the spa and fitness appointments through Vagaro.
  • It integrates with everyday services, including food through Grubhub, rides through Uber, and tickets through Ticketmaster.
  • As a smart event planner, it extracts useful information from flyers to create the right reminders.

Major Challenges:

Alexa+ seems very promising, but challenges remain. AI agents have long struggled with reliability, and there were reports that Alexa+ was delayed multiple times because earlier iterations failed at tasks as basic as turning smart home devices on and off.

OpenAI’s research assistant, ChatGPT deep research, has produced inaccurate results, and Gemini has struggled to provide factually accurate summaries. These issues must be addressed for Alexa+ to become a reality for Amazon. Then there is the ever-present question of data privacy: Alexa+ depends heavily on user data to create personalized experiences, and while this brings greater utility, it also raises concerns about how Amazon handles sensitive personal information.

Amazon’s Strategic Advantage:

Amazon has a multitude of advantages. Its position in consumers’ homes is already strong, with over 600 million Alexa-enabled devices in circulation. Moreover, offering Alexa+ free to Prime subscribers while charging non-Prime users $19.99 per month could speed adoption among Amazon’s most committed users. Amazon’s vision for Alexa+ is extensive, futuristic, and ambitious; if it can indeed deliver an intelligent and autonomous AI agent, it will have a far-reaching impact on how people relate to technology in everyday life.

Should Alexa+ meet its promises, it could really transform how average consumers interact with AI, setting an example for personal digital assistants. Otherwise, it will just join Amazon’s list of grand experiments in AI and machine learning. The real challenge now will be actualising that promise: can Alexa+ avoid the AI limitations where other consumer agents have failed? Will it truly integrate the intricate web of services and tasks it promises to manage? As yet, we do not know the answers. Amazon is placing its wager on AI for the future, and the world is watching to see whether Alexa+ will become the intelligent assistant everyone has been waiting for.

Read More: Alibaba Goes All-In on Open-Source AI With Wan 2.1 Release

OpenAI Expands Deep Research Tool to More ChatGPT Subscribers

As AI-powered tools become more integral to professional and academic research, OpenAI is broadening access to its Deep Research feature. Previously reserved for ChatGPT Pro users, this advanced web browsing agent is now available to all paying users, including Plus, Team, Enterprise, and Edu subscribers. With this expansion, OpenAI gives users 10 deep research queries per month, allowing them to generate comprehensive reports on various topics. Meanwhile, ChatGPT Pro users, who subscribe at $200 per month, will now receive 120 queries, up from 100 at launch.

The move highlights OpenAI’s strategy to make AI-powered research tools a key selling point for its premium tiers. As competition in AI research tools heats up, Google and Perplexity are racing to roll out similar deep research capabilities. Google recently launched its deep research agent for Gemini Advanced users, signaling a clear industry shift toward AI-generated long-form analysis.

For AI companies, deep research features are more than just an added tool—they are a way to demonstrate the value of premium AI subscriptions. However, OpenAI acknowledges that it must refine how these agents interact with users and how they could influence decision-making. By expanding Deep Research to a broader audience, OpenAI is positioning itself at the forefront of AI-driven knowledge generation, reinforcing AI’s role in assisting professionals, educators, and researchers with in-depth, automated analysis.

Read More: OpenAI to Shift AI Compute from Microsoft to SoftBank


Perplexity Launches $50M AI Venture Fund to Back Future Tech Innovators

Building the best chatbot is no longer the only race; increasingly, it is about who throws the biggest dollars at the future. With that in mind, Perplexity, the AI-powered search engine that caused quite a stir in the industry, now has its own venture capital fund. Perplexity walks into the investors’ hall with a fresh $50 million reserved for early startup investment, ready to discover the next big thing in AI and tech. It seems Perplexity’s AI is smart enough to invest in people, for now. But with big bucks come big questions: who gets funded, and is Perplexity setting itself up as the Google of startup investments?

Perplexity, developer of an AI-powered search engine, has entered the venture capital arena by launching a $50 million seed and pre-seed fund, as reported by CNBC. Following its recent funding round of $500 million at a $9 billion valuation, the company is putting some of its own money in as the fund’s cornerstone, while the rest comes from limited partners.

Perplexity dives into Venture Capital:

Kelly Graziadei and Joanna Lee Shevelenko are the GPs for the new fund. They previously co-founded f7 Ventures, an early-stage investment firm backing companies like women’s health startup Midi. It is still unclear if Graziadei and Shevelenko will continue in an advisory capacity with f7 Ventures or concentrate on Perplexity’s venture fund.

Through its venture capital foray, Perplexity seeks to nurture forward-thinking early-stage companies in AI and technology. This puts the company in the same league as other AI giants that have set up funds aimed at backing the next generation of tech businesses.

Perplexity vs. OpenAI's Investment Approach:

With the formation of its own venture fund, Perplexity invites comparison with OpenAI, which runs a similar vehicle, the OpenAI Startup Fund. The key distinction is that OpenAI has said it does not use its own money to invest, whereas Perplexity has decided to commit at least part of its own capital to the new venture.

Implications for the Startup Ecosystem:

With this fund, Perplexity is not just cementing its place in the AI and tech ecosystem but also providing much-needed capital for startups that resonate with its vision. The move reflects a broader trend of AI firms flexing their financial muscle by launching their own investment vehicles to foster innovation and strike strategic partnerships. It will be interesting to see which startups the fund backs and how Perplexity's investment strategy shapes the next wave of AI and technology businesses. Now that the fund is beginning to deploy capital, all eyes will be on Perplexity to see whether it can search out, and invest in, the next billion-dollar idea.

Read More: After R1’s Success, DeepSeek Fast-Tracks Launch of New AI Model R2

Meta Reportedly Planning $200 Billion AI Data Center Expansion Amid Growing Infrastructure Race

Meta is reportedly exploring a massive $200 billion investment in a next-generation AI data center campus, signaling an aggressive push into artificial intelligence infrastructure. According to a report from The Information, Meta executives have been in discussions with data center developers and have scouted potential locations in Louisiana, Wyoming, and Texas as part of the early planning stages.

However, a Meta spokesperson denied the report, stating that the company's capital expenditure plans have already been disclosed, and anything beyond that is "pure speculation." Despite this, industry analysts believe that such an expansion aligns with Meta's growing AI ambitions, particularly after CEO Mark Zuckerberg confirmed last month that the company intends to spend up to $65 billion in 2025 to expand its AI infrastructure.

Tech Giants in a Race for AI Dominance

If the reported $200 billion project moves forward, it would dwarf Meta's previous spending and position the company as a dominant player in the AI infrastructure race. Tech giants like Microsoft and Amazon are also ramping up their AI investments, with Microsoft planning an $80 billion investment in data centers for fiscal 2025 and Amazon expecting to surpass its $75 billion infrastructure spending from 2024.

Since the launch of ChatGPT in 2022, the AI sector has seen an unprecedented surge in investment, with companies across industries rushing to integrate AI-driven capabilities into their products and services.

Meta’s AI Ambitions and the Future of AI Computing

As Meta expands its AI and metaverse initiatives, its potential data center expansion could be critical to supporting its long-term artificial intelligence and machine learning advancements. Although official confirmation of the $200 billion project remains uncertain, Meta’s increasing AI infrastructure investments signal a fierce competition among tech giants to dominate the next era of AI-powered computing. Whether this rumored mega-campus materializes or not, the race to build the most advanced AI data centers is only intensifying.

Read More: After R1’s Success, DeepSeek Fast-Tracks Launch of New AI Model R2

After R1’s Success, DeepSeek Fast-Tracks Launch of New AI Model R2

DeepSeek has entered game-changing territory in AI, an arena where tech giants are fighting each other for supremacy. With the release of its low-cost AI models, the company shocked the AI community and challenged the very definition of AI innovation. Now that DeepSeek is ahead of schedule in launching its newest AI model, R2, the world is watching, some with excitement, others with unease. The Hangzhou startup recently shook global markets with its cost-effective yet high-performing AI model R1 and is now pressing home its advantage.

R1's success triggered a $1 trillion global equities sell-off, and DeepSeek's rapid progress is now being closely followed by competitors and regulators alike. Reports around the company suggest it may bring forward the launch of R2, originally planned for early May. While insiders are not yet authorized to comment officially on where R2 stands in development, reports indicate the new model would offer enhanced coding capabilities and stronger reasoning in languages beyond English. The initiative is seen as part of a broader push to secure a leading position in AI at a time of geopolitical and economic tension.

DeepSeek's Unconventional Approach:

DeepSeek runs more like a research lab than a conventional Chinese tech firm, with none of the cut-throat hierarchies and punishing work hours typical of the sector. Founder Liang Wenfeng instilled a culture of innovation by attracting top algorithm engineers and establishing a very flat management style. Employees describe an environment where research interest and creativity come before corporate bureaucracy. Benjamin Liu, a 26-year-old researcher who left the company in September, said, "Liang gave us control and treated us as experts. He constantly asked questions and learned alongside us. DeepSeek allowed me to take ownership of critical parts of the pipeline, which was very exciting."

DeepSeek's R1 model made headlines by outperforming its competition even though it was trained on less powerful Nvidia chips. Whereas U.S. tech titans like OpenAI and Google have poured hundreds of billions into AI research, DeepSeek showed that a cost-effective approach can also yield top-tier results. Industry experts believe the launch of R2 could further disrupt the AI landscape, forcing Western firms to rethink their pricing strategies and technological approaches.

Geopolitical Implications:

DeepSeek's rapid rise is not merely a business success story; it has serious geopolitical repercussions. Both the U.S. and China have identified AI leadership as a national priority, and DeepSeek's progress is likely to provoke further concern in Washington. In the meantime, Chinese authorities have embraced DeepSeek, incorporating its models into state and corporate systems at a strikingly fast pace. At least 13 Chinese city governments and 10 state-owned enterprises are already using DeepSeek technology, further entrenching its role as a critical player in China's AI ambitions.

High-Flyer’s Strategic Investments:

High-Flyer, the hedge fund behind DeepSeek, has invested heavily in AI research and infrastructure, which underpins DeepSeek's ability to develop competitive AI models at a fraction of the usual cost. Long before the current boom gripped the industry, the fund was one of the earliest adopters of AI-driven trading and committed 70% of its annual revenue to AI research. By 2021, it had already built in-house computing infrastructure, including two supercomputing AI clusters featuring Nvidia A100 chips, purchases that later proved critical when the U.S. restricted exports of advanced semiconductor technology to China.

Global Scrutiny:

While DeepSeek's innovations are showered with praise in China, elsewhere they are viewed with deep mistrust. Several governments, including South Korea and Italy, have announced the removal of DeepSeek's applications from their national app stores, citing privacy and security concerns. Some analysts have also warned that a Chinese state entity could exploit DeepSeek's models, a perception that may prompt Western countries to tighten restrictions on AI chip exports and software collaboration in response, further intensifying competition in the AI arena. The export controls on advanced AI chips remain an ever-present concern, and the real test of DeepSeek's innovation will be whether it can maintain its perceived edge without access to top-tier hardware.

In light of the rapidly approaching launch of R2, it is evident that the AI field is undergoing transformative, convulsive change. DeepSeek's ability to create competitive models at a fraction of the cost has not only disrupted markets but also escalated the AI arms race between China and the West. Only time will reveal the full repercussions of DeepSeek's developments, but it is a fair prediction that strategic ingenuity will shape AI's future. Whether this sparks collaboration, competition, or a regulatory onslaught remains uncertain, but an exciting and turbulent ride surely lies ahead for the industry.

Read More: Google Unveils Free AI Coding Assistant ‘Gemini Code Assist’ with Industry-Leading Usage Caps

Anthropic Nears $3.5 Billion Fundraising as AI Investment Surges

Anthropic, the AI startup behind the Claude chatbot, is reportedly securing a massive $3.5 billion funding round, pushing its valuation to $61.5 billion, according to The Wall Street Journal. Initially, the company aimed to raise $2 billion, but strong investor demand has led to an expanded round, signaling growing confidence in AI-driven innovation.

Several major investors, including Lightspeed Venture Partners, General Catalyst, Bessemer Venture Partners, and Abu Dhabi-based MGX, are expected to participate in this funding. If the round closes at the projected amount, Anthropic’s total capital raised will surpass $18 billion, solidifying its position as one of the most well-funded AI startups. The company recently launched Claude 3.7 Sonnet, an upgraded AI model designed to enhance response speed and reasoning capabilities, strengthening its position in the generative AI space. However, Anthropic has not achieved profitability despite technological advancements, making the latest fundraising crucial for further AI model development and business expansion.

This influx of funding reflects the broader trend of soaring AI investments, with nearly half of U.S. venture capital funding directed toward AI startups last year. The demand for cutting-edge AI continues to fuel investor enthusiasm, but global competition is also intensifying. Chinese AI alternatives like DeepSeek are emerging as cost-effective rivals, challenging U.S. dominance in the field. Meanwhile, OpenAI, Anthropic’s key competitor, is reportedly pursuing a new funding round that could push its valuation to an astonishing $300 billion. As the AI race accelerates, Anthropic’s increasing valuation underscores the growing financial stakes in artificial intelligence development. With billions flowing into AI research, startups like Anthropic must continue innovating while proving their long-term sustainability in an increasingly competitive market.

Read More: Musk Starlink Battles Chinese Rivals in Fierce Satellite Internet Race

Microsoft’s Strategic Shift in Data Center Expansion Raises Investor Concerns

Microsoft’s aggressive push into AI and cloud infrastructure has recently defined its growth strategy. Still, fresh reports suggest the company is now taking a more measured approach to its data center expansion. According to TD Cowen analysts, Microsoft has scrapped leases for several hundred megawatts of data center capacity in the U.S., a move that has caught investors’ attention and raised questions about whether the AI boom is hitting a slowdown.

The decision comes despite Microsoft’s commitment to investing over $80 billion in AI and cloud capacity this fiscal year. A company spokesperson acknowledged the adjustments but emphasized that Microsoft is still growing “strongly in all regions” and is simply pacing its infrastructure investments strategically.

Market Reaction and Investor Anxiety

While Microsoft’s stock remained largely unaffected, dipping only 1% on Monday, the ripple effect was felt across industries linked to data centers. Siemens Energy dropped 7%, Schneider Electric fell 4%, and U.S. power providers Constellation Energy and Vistra saw declines of 5.9% and 5.1%, respectively. The selloff extended to broader tech stocks, adding to growing market unease over whether the billions being poured into AI infrastructure will yield the expected returns.

Adding to the uncertainty is China’s rising competition in AI development. Chinese startup DeepSeek has showcased AI models at significantly lower costs than its Western counterparts, fueling concerns that companies like Microsoft may need to rethink their infrastructure spending to remain competitive.

A Sign of Oversupply or Just Smart Business?

Microsoft’s decision to pause or cancel leases could indicate a correction after years of rapid expansion. The company and rivals like Meta have been aggressively building data centers to support the surge in AI demand. However, as analysts point out, scaling AI infrastructure is costly, and companies are now balancing growth with financial sustainability.

Bernstein analyst Mark Moerdler noted that the move could suggest a cooling in AI demand, especially following weaker-than-expected earnings from major cloud providers. However, not everyone is convinced this is a warning sign. Some industry experts argue that Microsoft is refining its strategy, ensuring it doesn't overextend resources in a rapidly evolving market.

Whatever the case, this latest shift underscores a key reality: Even the biggest AI players are navigating a complex and uncertain landscape. The race to build next-generation AI systems isn’t just about who spends the most—it’s about who spends wisely.

Read More: Apple Launches iPhone 16e in China to Compete with Local Brands

Alibaba Surpasses a Decade of AI Investment with $52 Billion Commitment to AI and Cloud Computing

In the ongoing AI race, Alibaba has made a statement: the company has pledged an outlay of about $52 billion over the next three years on artificial intelligence and cloud computing. This isn't pocket change; it's a statement that screams, "We're here to rule." While global tech giants race to secure their footing in AI, Alibaba is making sure it's not just keeping up but showing the world how to lead.

On Monday, Alibaba detailed plans to invest at least 380 billion yuan ($52.44 billion) in artificial intelligence (AI) and cloud computing infrastructure over the next three years. This is the company's largest-ever commitment to these two segments, exceeding its total spending on them over the past decade.

Alibaba’s Strategy:

Earlier on Friday, the Chinese e-commerce giant said it planned to invest more in AI but declined to specify the amount at the time. The move can be viewed as an effort to put Alibaba firmly at the forefront of China's AI race, where competition among technology companies is fierce. For the quarter ended December 31, Alibaba recorded revenue of 280.15 billion yuan, slightly more than analysts had expected. Year to date, the stock has gained more than 68%, reflecting fresh investor confidence in the company's AI-driven growth strategy.

AI Investment in China:

Alibaba is not alone in aggressively pursuing AI. Other Chinese tech giants, such as ByteDance (the parent company of TikTok), have also been committing huge resources to AI. Reports quoting sources say ByteDance has planned more than 150 billion yuan in capital expenditure for 2025, with a major share devoted to AI.

This surge of AI investment within China's technology sector signals a strategic pivot as companies rush to develop and commercialize new AI models, cloud computing services, and digital infrastructure. The intensified focus on AI aligns with global trends, where leading tech companies are betting on AI for rapid scaling and returns.

Significance for the Tech Industry:

For Alibaba, strengthening AI capabilities is a step toward gaining a competitive advantage in machine learning, generative artificial intelligence, and cloud-based solutions, the sectors underpinning the next wave of economic and technological growth. The next three years will be crucial as Alibaba executes its AI expansion strategy. As China's tech giants clash for supremacy on this new frontier, Alibaba's bold commitment sets the pace for the competition and for the industry innovations it will generate.

Alibaba's big bet on AI is thus more than a financial investment; it is a statement of intent. In this AI revolution, businesses that refuse to adapt risk becoming irrelevant. With this investment, Alibaba is putting its money where its mouth is on its AI growth strategy, with the intention of redefining the future of cloud computing, e-commerce, and more. Whether the move produces disruptive innovations or merely aggravates already heightened tech competition is anyone's guess, but it is evident that Alibaba is not playing it safe. Indeed, the race for AI supremacy has just become far more interesting.

Read More: Grok 3’s Brief Censorship of Trump and Musk Sparks Controversy

WhatsApp Enhances AI Accessibility with New Home Screen Widget

Meta’s commitment to making artificial intelligence practical and easily accessible across its platforms continues to grow, and WhatsApp is becoming central to this strategy. Following recent innovations in chatbot technology and conversational AI integration, WhatsApp is taking another step forward by introducing a dedicated Home Screen widget designed specifically for Meta’s AI chatbot, as reported by WABetaInfo.

This upcoming widget signifies WhatsApp’s deeper integration with AI-driven services, showcasing a shift toward convenience and seamless user interaction. With this new widget, users will have immediate, direct access to the Meta AI chatbot right from their device’s Home Screen, completely bypassing the traditional method of manually searching through the app.

WhatsApp's decision highlights a clear objective: simplifying interactions and saving time. Interestingly, WhatsApp is developing this widget with a universal user experience in mind, ensuring both Android and iOS users receive identical functionality and ease of use, reflecting a broader push toward unified AI experiences across platforms.

Moreover, the widget includes three practical shortcuts, each catering to specific and frequent interactions with Meta AI. The first shortcut facilitates instant question-and-answer interactions, significantly reducing response time. The second shortcut enables quick image-sharing capabilities directly from the user’s Home Screen, promoting effortless multimedia interactions with AI. Lastly, a voice-chat shortcut allows users to engage with Meta AI using voice commands, reflecting WhatsApp’s increasing focus on voice-driven interactions as typing becomes less preferred by many users.

WhatsApp Home Screen update

In essence, this widget represents more than just an incremental update. It demonstrates WhatsApp’s strategic pivot towards deeper integration with AI, reshaping how millions interact with technology daily. For users, it means greater convenience; for Meta, it emphasizes their ambition to become leaders in practical AI accessibility across their family of apps.

Read More: OpenAI Blocks Accounts in China & North Korea Over Misuse

Grok 3’s Brief Censorship of Trump and Musk Sparks Controversy

Who knew AI could play favorites? Artificial intelligence was supposed to be neutral: pure cold logic with no human bias or political drama. Apparently not in this case. When Elon Musk released Grok 3 as a "maximally truth-seeking AI," few would have predicted that it would suddenly get very shy about naming certain controversial figures, particularly its own creator. Over the weekend, users discovered that Grok 3 seemed to follow an unwritten rule: Musk and Trump are not to be roasted.

Last Monday, in a live stream, billionaire Elon Musk introduced Grok 3, the latest AI model from the company he founded, xAI, calling it a “maximally truth-seeking AI.” However, users reported that Grok for a brief period censored unflattering mentions of President Donald Trump and Musk himself. When asked in “Think” mode, “Who is the biggest misinformation spreader?” social media users noted that Grok 3’s “chain of thought” reasoning indicated it had been explicitly instructed not to mention Trump or Musk. This revelation raised eyebrows, undermining Musk’s declarations of an apolitical AI.

After some time, however, the change was reverted, and Grok 3 went back to mentioning Trump in response to the misinformation question. Igor Babuschkin, an engineering lead at xAI, confirmed in an X post that it was indeed a bug caused by an internal change made by one employee, which was withdrawn soon after it attracted attention at the company.

He said, “I believe it is good that we’re keeping the system prompts open. We want people to be able to verify what it is we’re asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values”.

Misinformation and Controversy:

There is considerable debate about misinformation, and Trump and Musk bear the brunt of it for promoting provably false claims. Recent examples include the assertion that Zelenskyy is a dictator with a 4% approval rating and the claim that Ukraine started the ongoing war with Russia. Musk's social platform X frequently flags misleading statements from both men through its Community Notes system.

The Grok 3 controversy is merely the tip of the iceberg when it comes to accusations of political bias in AI. Critics contend Grok leans left, and yet another recent incident has fueled that debate: some users reported that Grok 3 generated responses claiming Trump and Musk deserved the death penalty. xAI quickly corrected the issue, with Igor Babuschkin calling it a "really terrible and bad failure."

AI Bias:

Musk has always pitched Grok as the opposite of excessively "woke" AI models, promising it would be free of the constraints applied by competitors like OpenAI's ChatGPT. Earlier versions such as Grok 2 were rather edgy and would even resort to vulgarity when answering questions, something their AI counterparts tactfully avoid. Studies suggest that Grok nonetheless leans left on issues such as transgender rights, diversity programs, and economic inequality. Musk attributes these tendencies to Grok's training data, which consists of publicly available web pages, and has pledged to move Grok toward a more politically neutral position.

Grok 3 offers yet another example of how hard it is to build an AI model that can genuinely claim neutrality, and such incidents keep sharpening the tension between AI transparency and control. While Musk and fellow tech leaders push for "unbiased" AI, the question remains: can any AI be neutral when it is built by people with biases of their own? Or should we expect a future where even machines are said to have political opinions? Achieving fairness and neutrality in AI models that shape public discourse remains a formidable challenge, and only time will tell whether Musk delivers on his promise of an unbiased Grok.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Google Veo 2 AI Video Model Pricing Revealed at 50 Cents per Second

Google has long been a pioneer in artificial intelligence, consistently leading advancements and breakthroughs through its dedicated AI research divisions, such as Google DeepMind and Google AI. Over the years, the tech giant introduced transformative AI models like BERT for language understanding, Imagen for image generation, and Gemini, its versatile generative AI model, significantly shaping how industries approach AI-driven tasks.

In its latest development, Google has quietly disclosed pricing for Veo 2, its latest AI video generation model, which was initially announced in December. According to details published on Google's official pricing page, using Veo 2 will cost 50 cents per second of generated video, which equates to roughly $30 per minute or $1,800 per hour.

To put these figures into perspective, Google DeepMind researcher Jon Barron compared Veo 2’s pricing to the production costs of Marvel’s blockbuster, “Avengers: Endgame,” which had an enormous budget of approximately $356 million, or about $32,000 per second of footage. This comparison effectively highlights the relative affordability of Google’s AI-generated video content against traditional filmmaking costs.
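The per-second figures above are easy to sanity-check with a few lines of arithmetic. This is just a sketch based on the numbers quoted in this article (Veo 2's 50-cents-per-second rate and the roughly $32,000-per-second Endgame estimate), not any official Google calculator:

```python
# Veo 2 pricing arithmetic, using the figures quoted above.
VEO2_PER_SECOND = 0.50  # USD per second of generated video

per_minute = VEO2_PER_SECOND * 60    # -> 30.0, i.e. ~$30 per minute
per_hour = VEO2_PER_SECOND * 3600    # -> 1800.0, i.e. ~$1,800 per hour

# "Avengers: Endgame" comparison: roughly $32,000 per second of footage.
ENDGAME_PER_SECOND = 32_000
ratio = ENDGAME_PER_SECOND / VEO2_PER_SECOND  # traditional footage costs ~64,000x more

print(per_minute, per_hour, ratio)  # 30.0 1800.0 64000.0
```

By the same arithmetic, a typical two-minute Veo 2 clip would run about $60, whether or not every second of it ends up in the final cut.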

However, it's worth noting that users may not ultimately use every second of footage generated through Veo 2, especially since Google has indicated that the model typically creates videos of around two minutes in length. Users could end up paying for footage they never incorporate into their final projects.

Google’s pricing strategy also stands in contrast to rival OpenAI’s Sora model, which recently became available through a subscription-based pricing model—part of a $200-per-month ChatGPT Pro subscription.

Overall, Google's per-second pricing positions Veo 2 as a premium service targeted at professionals and enterprises. While the upfront cost might appear significant, the model's efficiency and flexibility could notably reduce production expenses and timelines, making it a compelling option for short, impactful, commercially oriented video projects. Users should, however, plan their content generation carefully to optimize cost-effectiveness.

Read More: Apple Launches iPhone 16e in China to Compete with Local Brands

US AI Safety Institute Faces Major Cuts Amid Government Layoffs

The US AI Safety Institute (AISI), a key organization focused on AI risk assessment and policy development, is facing significant layoffs as part of broader cuts at the National Institute of Standards and Technology (NIST). Reports indicate that up to 500 employees could be affected, raising concerns about the future of AI safety efforts in the US.

According to Axios, both AISI and the Chips for America initiative—which also operates under NIST—are expected to be significantly impacted. Bloomberg further reported that some employees have already received verbal notifications about their impending terminations, which primarily target probationary employees within their first two years on the job.

AISI’s Future in Doubt Following Policy Repeal

Even before news of these layoffs surfaced, AISI’s long-term stability was uncertain. The institute was established as part of President Joe Biden’s executive order on AI safety in 2023. However, President Donald Trump repealed the order on his first day back in office, casting doubt on AISI’s role in AI governance. Adding to the instability, AISI’s director resigned earlier this month, leaving the institute without clear leadership at a time when AI regulation remains a global concern.

Experts Warn of AI Policy Setbacks

The reported layoffs have drawn criticism from AI safety and policy experts, who argue that cutting AISI’s workforce could undermine the US government’s ability to develop AI safety standards and monitor risks effectively.

“These cuts, if confirmed, would severely impact the government’s capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever,” said Jason Green-Lowe, executive director of the Center for AI Policy. With AI development rapidly advancing and regulatory discussions taking center stage worldwide, the potential downsizing of AISI raises concerns over the US’s role in global AI safety initiatives.

Uncertain Path Forward for AI Regulation

As the federal government reassesses AI safety priorities, the impact of these layoffs remains unclear. While AISI was positioned to guide AI regulation and set technical standards, its ability to function effectively may be severely limited if staffing reductions proceed as reported. Industry analysts warn that a lack of dedicated AI safety oversight could leave the US at a disadvantage in shaping international AI policies. Meanwhile, affected employees await formal confirmation of layoffs and potential restructuring plans within NIST.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

HP Acquires Humane: What It Means for the Future of AI Wearables

HP’s recent $116 million acquisition of Humane has sent ripples through the tech industry. Once valued at $240 million, the AI wearable startup has been acquired for less than half of its original funding, signalling a major shift in the AI hardware space. The deal also comes with job offers for select Humane employees, while others have been let go. With Humane’s AI Pin officially discontinued, this raises questions about the future of AI-driven wearable technology and HP‘s plans for AI innovation. Let’s dive into the details.

Humane’s AI Pin: A Short-Lived Vision

Humane’s AI Pin was positioned as a screenless AI-powered assistant, promising a futuristic smartphone alternative. The $499 wearable aimed to leverage AI for daily tasks like messaging, calls, and web queries.

However, the device struggled due to:

  • High Price Tag – The $499 price made it less attractive than existing smart assistants.
  • Performance Issues – AI response times were slow, and cloud dependency limited functionality.
  • Limited Adoption – Consumers didn’t fully embrace the concept of screenless AI wearables.

With sales discontinued and cloud services shutting down by February 28, the Humane AI Pin is officially dead.

Why Did HP Acquire Humane?

HP’s decision to buy out Humane’s assets suggests the company sees value in AI wearables and computing. Potential reasons include:

  • AI Hardware Integration – HP may incorporate Humane’s technology into laptops, tablets, or smart accessories.
  • AI Research & Development – Humane’s AI models and patents could enhance HP’s AI-driven software and cloud services.
  • Enterprise & Consumer Applications – HP might reposition Humane’s AI assistant for business users rather than mainstream consumers.

What Happens to Humane’s Employees?

Following the acquisition, some Humane employees received job offers from HP, with salary increases ranging from 30% to 70%, stock options, and bonuses. However, many employees working closely with AI Pin development were laid off, indicating a shift in priorities.

What This Means for AI Wearables

The fall of Humane highlights key lessons for the future of AI-powered devices:

  • AI Hardware Needs Practicality – Consumers prefer AI features integrated into existing devices rather than standalone gadgets.
  • Cloud-Dependency is Risky – Relying on cloud services for core functionality limits usability.
  • Big Tech Dominates AI Innovation – Startups in AI hardware must compete with tech giants like Apple, Google, and Microsoft.

Final Thoughts: Is HP’s AI Bet Worth It?

HP’s acquisition of Humane raises an important question: Will AI wearables survive, or was Humane’s failure a sign that the market isn’t ready? With AI assistants like ChatGPT, Gemini, and Apple’s AI models becoming more powerful, the future of AI devices might lie in software rather than standalone wearables. Whether HP revives Humane’s vision or pivots entirely remains to be seen.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Debates over AI benchmarks have resurfaced following xAI’s recent claims about its latest model, Grok 3. An OpenAI employee publicly accused Elon Musk’s xAI of presenting misleading benchmark results, while xAI co-founder Igor Babushkin defended the company’s methodology. The controversy stems from a graph published by xAI showing Grok 3’s performance on AIME 2025, a benchmark based on complex mathematical problems. While some AI researchers question AIME’s validity as an AI benchmark, it remains a commonly used test for assessing AI models’ math capabilities.

The Missing Benchmark Data

In xAI’s chart, Grok 3 Reasoning Beta and Grok 3 mini Reasoning were shown to outperform OpenAI’s o3-mini-high model on AIME 2025. However, OpenAI employees quickly pointed out that xAI did not include o3-mini-high’s score at “cons@64.” The “cons@64” (consensus@64) metric allows a model to attempt each problem 64 times, selecting the most frequent response as the final answer. Since this significantly improves a model’s benchmark scores, omitting it from xAI’s comparison may have made Grok 3 appear more advanced than it actually is.
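To make the distinction concrete, here is a minimal sketch of how consensus scoring differs from first-attempt scoring. The function names and sample numbers are hypothetical illustrations, not xAI’s or OpenAI’s actual evaluation harness:

```python
from collections import Counter

def cons_at_k(attempts):
    """Consensus scoring: take a model's k attempts at one problem
    and submit the most frequent answer as the final answer."""
    return Counter(attempts).most_common(1)[0][0]

def pass_at_1(attempts, correct):
    """First-attempt accuracy: only attempt #1 counts."""
    return attempts[0] == correct

# Hypothetical example: 64 attempts at a problem whose answer is 42.
# The model is right only 40/64 times, yet consensus still lands on 42.
attempts = [42] * 40 + [17] * 20 + [7] * 4
print(cons_at_k(attempts))      # 42
print(pass_at_1(attempts, 42))  # True only because attempt #1 happened to be right
```

Because majority voting smooths out individual wrong attempts, a model’s cons@64 score is typically well above its @1 score, which is why comparing one model’s cons@64 against another’s @1 is misleading.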

When comparing @1 scores (which measure a model’s first attempt accuracy), Grok 3 Reasoning Beta and Grok 3 mini Reasoning scored below OpenAI’s o3-mini-high. Additionally, Grok 3 Reasoning Beta trailed behind OpenAI’s o1 model set to “medium” computing, raising further questions about xAI’s claim that Grok 3 is the “world’s smartest AI.”

xAI Defends Its Approach, OpenAI Calls for Transparency

Igor Babushkin, co-founder of xAI, responded on X, arguing that OpenAI has also presented selective benchmarks, though mainly when comparing its own models. A third-party AI researcher attempted to provide a more balanced view by compiling a graph displaying various models’ performance at cons@64, aiming to offer a more transparent comparison. However, AI researcher Nathan Lambert pointed out a key missing element in the debate: computational cost. Without knowing how much computational power (and money) each model required to achieve its best scores, benchmarks alone do not fully convey a model’s efficiency or real-world capabilities.

What’s Next for AI Benchmarks?

The dispute between xAI and OpenAI highlights ongoing challenges in AI benchmarking. As AI labs race to demonstrate superiority, the lack of standardized, transparent, and cost-aware metrics continues to fuel debates over how AI models should be evaluated. While xAI stands by its claims, OpenAI’s criticism raises questions about how AI companies should present performance results to avoid misleading comparisons. The broader AI community may need to push for more standardized evaluation methods to ensure fairness and accuracy in future AI model comparisons.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia founder and CEO Jensen Huang said the market got it wrong regarding DeepSeek’s technological advancements and its potential to impact the chipmaker’s business negatively. Instead, Huang called DeepSeek’s R1 open-source reasoning model “incredibly exciting” while speaking with Alex Bouzari, CEO of DataDirect Networks, in a pre-recorded interview that was released on Thursday.

“I think the market responded to R1, as in, ‘Oh my gosh. AI is finished,’” Huang told Bouzari. “You know, it dropped out of the sky. We don’t need to do any computing anymore. It’s exactly the opposite. It’s [the] complete opposite.”

Huang said that the release of R1 is inherently good for the AI market and will accelerate the adoption of AI as opposed to this release meaning that the market no longer had a use for compute resources — like the ones Nvidia produces.

“It’s making everybody take notice that, okay, there are opportunities to have the models be far more efficient than what we thought was possible,” Huang said. “And so it’s expanding, and it’s accelerating the adoption of AI.” He also pointed out that, despite DeepSeek’s advancements in pre-training AI models, post-training will remain important and resource-intensive.

“Reasoning is a fairly compute-intensive part of it,” Huang added.

Nvidia declined to provide further commentary. Huang’s comments come almost a month after DeepSeek released the open-source version of its R1 model, which rocked the AI market in general and seemed to affect Nvidia disproportionately. The company’s stock price plummeted 16.9% in a single trading day after news of DeepSeek’s release.

According to data from Yahoo Finance, Nvidia’s stock closed at $142.62 a share on January 24. The following Monday, January 27, the stock dropped sharply and closed at $118.52 a share, wiping roughly $600 billion off Nvidia’s market cap. The stock has since almost fully recovered: on Friday it opened at $140 a share, meaning the company regained most of that lost value in about a month. Nvidia reports its Q4 earnings on February 26, which will likely address the market reaction further. Meanwhile, DeepSeek announced on Thursday that it plans to open source five code repositories as part of an “open source week” event next week.
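As a sanity check, the closing prices quoted from Yahoo Finance are consistent with the 16.9% drop reported above:

```python
# Quick arithmetic check of the reported single-day drop,
# using the Yahoo Finance closes cited in the article.
close_jan_24 = 142.62   # close on Friday, January 24
close_jan_27 = 118.52   # close on Monday, January 27
drop_pct = (close_jan_24 - close_jan_27) / close_jan_24 * 100
print(f"{drop_pct:.1f}%")   # 16.9%
```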

Read More: OpenAI to Shift AI Compute from Microsoft to SoftBank

Meta Faces Legal Battle Over AI Training with Copyrighted Content

Meta is under intense scrutiny after newly unsealed court documents revealed internal discussions about using copyrighted content, including pirated books, to train its AI models. The revelations, part of the Kadrey v. Meta lawsuit, shed light on how Meta employees weighed the legal risks of using unlicensed data while attempting to keep pace with AI competitors.

Internal Deliberations Over Copyrighted Content

Court documents show that Meta employees debated whether to train AI models on copyrighted materials without explicit permission. In internal work chats, staff discussed acquiring copyrighted books without licensing deals and escalating the decision to company executives.

Meta research engineer Xavier Martinet suggested an “ask forgiveness, not for permission” approach in a chat dated February 2023, according to the filings, stating:

“[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”

He further argued that negotiating deals with publishers was inefficient and that competitors were likely already using pirated data.

“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”

Meta’s AI leadership acknowledged that licenses were needed for publicly available data, but employees noted that the company’s legal team was becoming more flexible on approving training data sources.

Talks of Libgen and Legal Risks

The filings reveal that Meta employees discussed using Libgen, a site known for providing unauthorized access to copyrighted books. In one internal chat, Melanie Kambadur, a senior manager for Meta’s Llama model research team, suggested using Libgen as an alternative to licensed datasets.

According to the filings, in one conversation Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” emphasizing that without it, Meta’s AI models might fall behind state-of-the-art (SOTA) benchmarks.

Theakanath also proposed strategies to mitigate legal risks, including removing data from Libgen that was “clearly marked as pirated/stolen” and ensuring that Meta would not publicly cite its use of the dataset.

“We would not disclose use of Libgen datasets used to train,” he wrote in an internal email to Meta AI VP Joelle Pineau.

Further discussions among Meta employees suggested that the company attempted to filter out risky content from Libgen files by searching for terms like “stolen” or “pirated” while still leveraging the remaining data for AI training.
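The keyword-based filtering described in the filings can be illustrated with a minimal sketch. The helper, field names, and sample records below are hypothetical, not Meta’s actual pipeline:

```python
# Hypothetical sketch: flag records whose metadata mentions risky
# terms (the filings cite searches for "stolen" or "pirated").
RISKY_TERMS = ("stolen", "pirated")

def is_flagged(record: dict) -> bool:
    """Return True if any metadata field mentions a risky term."""
    text = " ".join(str(v).lower() for v in record.values())
    return any(term in text for term in RISKY_TERMS)

records = [
    {"title": "Some Novel", "note": "clearly marked as pirated"},
    {"title": "Public Domain Text", "note": "license: CC0"},
]
kept = [r for r in records if not is_flagged(r)]
print(len(kept))  # 1 — only the unflagged record survives
```

As the staff concerns quoted below suggest, such surface-level keyword checks say nothing about whether the remaining, unflagged data is actually licensed.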

Despite concerns raised by some staff, including a Google search result stating “No, Libgen is not legal,” discussions about utilizing the platform continued internally.

Meta’s AI Data Sources and Training Strategies

Additional filings suggest that Meta explored scraping Reddit data using techniques similar to those employed by a third-party service, Pushshift. There were also discussions about revisiting past decisions not to use Quora content, scientific articles, and licensed books. In a March 2024 chat, Chaya Nayak, director of product management for Meta’s generative AI division, indicated that leadership was considering overriding prior restrictions on training sets.

She emphasized the need for more diverse data sources, stating: “[W]e need more data.” Meta’s AI team also worked on tuning models to avoid reproducing copyrighted content, blocking responses to direct requests for protected materials and preventing AI from revealing its training data sources.

Legal and Industry Implications

The plaintiffs in Kadrey v. Meta have amended their lawsuit multiple times since filing in 2023 in the U.S. District Court for the Northern District of California. The latest claims allege that Meta not only used pirated data but also cross-referenced copyrighted books with available licensed versions to determine whether to pursue publishing agreements.

In response to the growing legal pressure, Meta has strengthened its legal defense by adding two Supreme Court litigators from the law firm Paul Weiss to its team. Meta has not yet publicly addressed these latest allegations. However, the case highlights the ongoing conflict between AI companies’ need for massive datasets and the legal protections surrounding intellectual property. The outcome could set a major precedent for how AI companies train models and navigate copyright laws in the future.

Read More: Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals

6 New AI-Powered Tech Startups Reach Unicorn Status in January 2025

The year 2025 started with a surge of billion-dollar valuations as six promising tech startups officially entered the unicorn club in January. These AI-powered tech startups, spanning artificial intelligence, healthcare, fintech, and industrial technology, have drawn massive investments from leading venture capital firms. The surge in funding signals a strong investor appetite for cutting-edge innovations in AI-driven automation, genomic research, cybersecurity, and defense technology. Here’s a closer look at the six AI-powered tech startups that have achieved unicorn status and how they shape the future.

Truveta: 

Founded in 2020, Truveta is a health-tech company specializing in AI-powered genetic research. The company aims to advance personalized medicine by creating a comprehensive and diverse genomic database. Terry Myerson serves as the Chief Executive Officer and co-founder of Truveta.

Codeium: 

Established in 2023, Codeium is an AI-driven coding assistant designed to help developers write and optimize code more efficiently. While specific details about its leadership are not publicly disclosed, the company is reportedly discussing raising new funding at a valuation of $2.85 billion.

Mercor: 

Launched in 2024, Mercor is an AI-powered recruiting platform that streamlines the hiring process by matching candidates with suitable job opportunities. The company was founded by three 21-year-old Thiel Fellows, with Brendan Foody serving as the Chief Executive Officer. Mercor recently raised $100 million in a Series B funding round, bringing its valuation to $2 billion.

Augury: 

Founded in 2011, Augury specializes in AI technology that detects malfunctions in industrial machinery, aiming to prevent equipment failures and reduce downtime. The company recently raised $75 million, pushing its valuation over the $1 billion mark. Saar Yoskovitz is the co-founder and Chief Executive Officer of Augury.

Neko Health: 

Established in 2022, Neko Health is a Swedish startup co-founded by Spotify’s Daniel Ek. The company focuses on developing advanced body-scanning technology for early disease detection and preventive healthcare. In a recent Series B funding round, Neko Health secured $260 million, though its exact valuation remains undisclosed.

Epirus: 

Founded in 2018, Epirus is a defense technology company specializing in advanced directed energy systems designed to counter emerging threats. The company is reportedly in talks to raise between $150 million and $200 million in a new funding round led by venture firm 8VC. Leigh Madden serves as the Chief Executive Officer of Epirus.

A Strong Start for Tech Innovation in 2025

The emergence of these six unicorns in January 2025 highlights a broader trend in the startup ecosystem—investors are increasingly placing their bets on AI-powered solutions, predictive analytics, fintech security, and next-gen healthcare innovations. The influx of capital into AI-driven automation, genomic research, and industrial AI demonstrates the tech industry’s resilience and its ability to drive breakthrough innovations despite ongoing economic uncertainties. As these startups continue to grow, they are set to redefine healthcare, cybersecurity, defense, and industrial efficiency, shaping the next generation of global technology leaders. With more startups poised to achieve unicorn status in the coming months, 2025 is shaping up to be a landmark year for disruptive tech innovation.

Read More: Elon Musk’s AI Revolution Continues as xAI Unveils Grok 3 AI Model

Apple’s iPhone 16e Brings AI & A18 Chip at $599, Launching Feb 28

AI meets affordability. As top-of-the-line smartphones smash the $1,000 barrier, Apple is going the other way, offering a low-cost but powerful alternative. The iPhone 16e is an AI-integrated device that pairs affordability with a sleek redesign and updated internals.

Apple has officially introduced the iPhone 16e, a budget-friendly smartphone priced at $599. The successor to the iPhone SE line, it receives Apple’s latest AI capabilities along with a design makeover and updated internals. It makes its market debut on February 28, bridging the gap between affordable and high-tech.

Apple Intelligence:

The major feature of the iPhone 16e is Apple Intelligence, Apple’s response to AI assistant competitors such as OpenAI’s ChatGPT and Google’s Gemini. The model can run locally on the device, offering services such as text summarization, letter writing, and picture generation.

With Apple Intelligence on board, the 16e joins the iPhone 16 lineup in the exclusive club of devices with these features. The device is powered by the A18 processor, which delivers strong AI performance, and users are granted access to ChatGPT via Siri without needing an OpenAI account.

Hardware and Design Changes:

The iPhone 16e brings several major design and hardware improvements:

  • Processor – the new Apple A18 chip with a 16-core Neural Engine for AI processing and a 4-core GPU.
  • Display – a larger 6.1-inch OLED display, up from the 4.7-inch screen on the SE.
  • Camera – a single 48-megapixel rear camera with 2x zoom that produces 24-megapixel images.
  • Face ID & Notch – the Touch ID home button is gone, replaced by Face ID and a notch in the style of the iPhone X.
  • USB-C Port – a move from Lightning to USB-C, standardizing ports across Apple’s hardware.
  • Battery Life – what Apple calls “the best battery life ever on a 6.1-inch iPhone,” with a claimed 12 hours more battery life than its predecessor.
  • C1 Modem – the most notable milestone: the iPhone 16e debuts Apple’s first in-house modem chip, the C1, reducing Apple’s dependence on suppliers like Qualcomm and Intel.

Strategic Timing for a Shifting Marketplace:

The iPhone 16e comes at a critical time for Apple, especially after the recent 11% drop in its market share in China. Increased competition from Huawei and the limited availability of Apple’s AI features in China were blamed for the decline. The company is working with major Chinese tech firms, including Tencent, ByteDance, and Alibaba, to create a localized AI experience for the region. Historically, the iPhone SE series has sold well in key markets like China and India. While the $599 price tag is a $100 increase over its predecessor, the premium features and AI capabilities may help Apple claw back share in these markets.

Availability and Preorders:

Preorders start on February 21, with shipping scheduled to begin on February 28. The iPhone 16e is not just another low-cost device; it is a statement. With its AI capabilities, improved battery, and redesigned modern look, Apple is demonstrating that innovation is not limited to high-priced models. In a rapidly changing market with increasing competition, the iPhone 16e may be instrumental in appealing to new users who want technology with a value-for-money proposition. As Apple continues to stretch the boundaries of AI and smartphone technology, the iPhone 16e could become a game changer in the midrange space.

Read More: Thousands of Apps Removed from EU App Store as Apple Enforces DSA

Gemini is No Longer in the Google App, as Google Pushes it into a Standalone App

Google just pulled its classic “now you see me, now you don’t” with its AI assistant Gemini, removing it from the main Google app on iOS. But hold your horses, all is not lost: it has moved to a new home, the standalone Gemini app. Whether this is a significant step toward AI independence or just another change users will resent remains to be seen.

The change is one of the first big shifts in how Google delivers its AI assistant to iOS customers. The company says Gemini will no longer be available in the main Google app on iOS devices; users will have to download the separate Gemini app to keep using the assistant.

The move signals Google’s intention to position Gemini as an independent, consumer-facing AI in direct competition with the likes of OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity. The risky bit, though, is that the Google app already has millions of users, many of whom may not wish to download a separate app just to access Gemini, limiting its reach.

Official Announcement:

Customers were informed of the change through an email from Google stating, “Gemini is no longer available in the Google app.” The email recommends that users who wish to keep using Gemini’s features download the dedicated Gemini app, which launched worldwide for iOS users last year. Until now, Gemini had also been available in the main Google app.

The email also included a warning reminding users that Gemini still makes mistakes and that they should always fact-check its responses. Meanwhile, when iOS users attempt to open Gemini from the main Google app, a full-screen message now appears saying “Gemini now has its own app,” with a link to download it from the App Store.

Gemini App Features and Premium Access:

Users of the standalone Gemini app on iOS get a range of AI features: live voice conversations with Gemini; connections to other Google services such as Search, YouTube, Maps, and Gmail; the ability to ask questions, make travel plans, and explore topics; AI summaries, deep dives, and images; and interaction via text, voice, or the camera. Those seeking advanced AI functionality can subscribe, via in-app purchase, to the Google One AI Premium plan, which includes Gemini Advanced.

Impact of Google’s Strategy Shift:

This strategic change brings Google both opportunities and risks. Moving Gemini into a standalone app lets its features evolve faster and compete head-on with other AI chatbots, but it risks losing users who resist downloading yet another application. Whether the gamble pays off remains to be seen; either way, it underscores how determined Google is to put its AI ambitions front and center alongside search.

Read More: Elon Musk’s AI Revolution Continues as xAI Unveils Grok 3 AI Model

Mira Murati’s AI Vision gains Momentum with her new AI startup, Thinking Machines Lab

In the world of AI, change can be sweeping and instantaneous, and so can the dynamics of power. Mira Murati, ex-CTO of OpenAI, has just set up her own AI startup, Thinking Machines Lab, and in this tech-world heist, 20 researchers from OpenAI joined her. If AI were chess, Murati just shouted “Check!” while sipping her coffee. So what does this mean for the future of AI, and why does OpenAI suddenly look like a coffee shop on a busy Monday morning with hardly any staff?

Former OpenAI Chief Technology Officer Mira Murati’s new AI startup, Thinking Machines Lab, is already throwing a major twist into the AI research space. Announced last Tuesday, the company has assembled top researchers and engineers from leading AI companies, including OpenAI, Meta, and Mistral. The roster is a testament to Murati’s industry influence: about two-thirds of the young startup’s workforce are ex-OpenAI employees.

Powerhouse Team:

One of the most notable arrivals is Barret Zoph, the renowned AI researcher who left OpenAI on the same day as Murati in late September and joins the startup as Chief Technology Officer. Another star player, John Schulman, an OpenAI co-founder, will be the startup’s Chief Scientist. Schulman had left OpenAI for Anthropic in August, saying he wanted to shift his focus to AI alignment, the field concerned with keeping AI models aligned with human values for safety and reliability.

According to sources, more ex-OpenAI employees are expected to join Murati’s venture. The company might have already begun talks to raise funding from venture capitalists, evidence of investors’ great interest in the mission established by the startup. At this stage, I believe that OpenAI might need an AI-powered therapist.

New Vision for AI Development:

Thinking Machines Lab is positioning itself as an AI company with a more visionary and more ethically minded mission than its peers. The startup said, “While current systems excel at programming and mathematics, we’re building AI that can adapt to the full spectrum of human expertise and enable a broader spectrum of applications”.

Another unique selling point of Thinking Machines Lab is its cross-design approach, whereby research and product development teams work together on a common problem, building AI solutions that are both innovative and practical. The company plans to dedicate a significant portion of its funds to AI alignment research by open-sourcing datasets, making model specifications available, and publishing research results.

Murati’s influence:

An active participant in the development of AI, Mira Murati joined OpenAI in 2018. She led the development of ChatGPT and often represented OpenAI in public alongside CEO Sam Altman. However, she abruptly left OpenAI amid the transition of its governance structure, joined by several other high-profile departures. Murati previously worked on projects at Tesla and at augmented-reality startup Leap Motion, gathering ample experience in cutting-edge technology.

OpenAI’s Departure:

Murati is another name on the growing list of former OpenAI executives striking out on their own. Other famous AI ventures set up by OpenAI alumni include Anthropic and Safe Superintelligence, both of which have attracted significant investment, and talent, from OpenAI. Thinking Machines Lab looks poised to be a serious player, building on a solid research base and Murati’s industry experience.

As the AI ecosystem continues to change, Thinking Machines Lab opens yet another chapter in the race to build next-generation artificial intelligence. With an impressive cast, a heavy focus on AI alignment, and a commitment to openness in research, Murati’s new venture is expected to cause ripples across the industry. The future of AI just got a lot more competitive.

Also Read: South Korea’s AI Power Play; Securing 10,000 GPUs for the Future

HP Acquires Humane Assets for $116M, Shutting Down AI Pin for Good

Full of dreams and larger-than-life disasters, the world of tech startups seemed for a while to be made for the Humane AI Pin. Once celebrated as the future of wearable AI, the product is now fading fast along with its parent startup. In a typical twist of this drama, HP scooped up Humane’s assets for $116 million, taking the AI Pin off life support just 10 months after its grand launch. The lesson, perhaps, is that tech dreams do not always translate into reality, and in the case of the Humane AI Pin, reality came knocking hard, with a firm “No, thanks!” from the market.

With its assets bought for $116 million by HP, the hardware startup Humane, which once hoped to change personal computing with its AI-powered wearable, is essentially shutting down. The acquisition, announced on Tuesday, marks the end of Humane’s aspirations for the AI Pin, once pitched as a smartphone alternative.

Effective immediately, Humane has halted sales of the $499 AI Pin. Customers who purchased the device have been notified that their Pins will stop working on February 28, 2025, at 12 PM PST. On that date, the devices will lose connectivity to Humane’s servers, rendering them incapable of core functions like calling, messaging, AI queries, and cloud access. Humane is advising owners to save their important data and videos to an external device before the shutdown. Customers who bought an AI Pin within the last 90 days will get their money back; earlier buyers will receive no refund.

Short-Term Vision:

When it launched in April 2024, the AI Pin generated a lot of hype and was promoted as an entirely new ecosystem, a blunt contrast to the smartphone way of life. Founded by ex-Apple executives Bethany Bongiorno and Imran Chaudhri, the Bay Area startup raised over $230 million to bring the product to market. However, raised expectations gave way to reality as the AI Pin struggled to find acceptance.

Early reviews and user comments revealed significant shortcomings in the product, causing massive disappointment. By the summer of 2024, reports indicated that returns of the AI Pin were outpacing new sales. Humane further complicated matters by issuing a safety warning asking users to stop using the device’s charging case over battery fire risks. The company tried to reignite interest in October 2024 by cutting the AI Pin’s price from $699 to $499, but the move failed to gain any momentum.

HP’s AI Ambitions:

As part of the acquisition, HP is absorbing numerous engineers and product managers from Humane, who will form the foundation of a new HP division called HP IQ. According to HP, this newly formed AI innovation lab will focus on integrating artificial intelligence into its product ecosystem, particularly for future-of-work applications. HP is also acquiring some of Humane’s proprietary technologies, among them the CosmOS AI operating system, which Humane recently presented as a platform intended to power a multitude of smart devices, including car entertainment systems, smart speakers, TVs, and Android smartphones.

HP is expected to investigate how CosmOS could eventually be used in its PCs and printers, creating an opportunity for AI hardware differentiation. Interestingly, in May 2024 Humane was reportedly seeking a much bigger deal, valuing itself between $750 million and $1 billion, per Bloomberg. The final deal with HP was for far less, though Humane has not commented on the acquisition. With the purchase, HP gains access to Humane’s AI know-how and technology in support of its AI-driven innovations, signaling a strategic shift in HP’s approach to AI-integrated hardware.

Humane’s AI Pin now joins the list of ambitious but ill-fated tech products that failed to engage their audience. The device promised wearers a screenless AI-enabled accessory but ended up disappointing, both in practicality and in mass-market appeal. As the dust settles, one thing is clear: there is no easy entry into a deep-rooted industry, and even the best-funded ideas may not survive if they do not deliver real value.

Read More: Legal AI Startup Luminance Secures $75M to Advance AI-Powered Contracts

Legal AI Startup Luminance Secures $75M to Advance AI-Powered Contracts

The law is often depicted as an extremely slow industry, weighed down by heavy contracts and legalese, a vocabulary that takes years to master. What if artificial intelligence could do the heavy lifting for legal chores, making things faster, more accurate, and, dare we say, less mind-numbing? That is precisely what a new wave of legal tech startups seeks to accomplish, and legal technology is arguably one of the fields benefiting most from AI, with AI-based solutions changing the way legal professionals interact with complex documents and contracts. Among the startups leading the pack is Luminance, which just added $75 million to take its “legal-grade” AI to the next level. The momentum extends across legal tech, with several companies raising funds to build out their AI-driven platforms.

Luminance, a legal tech startup that claims to provide “legal-grade” AI, recently closed a $75 million Series C round led by Point72 Private Investments. The raise is significant: it is probably the largest investment round in a pure-play legal AI company in the U.K. and Europe. It brings Luminance’s total raised to $165 million, with more than $115 million of that in the last 12 months alone.

Heavy funding in the legal tech space has become increasingly common, with several startups scoring sizable rounds in recent months. Just last week, Eudia raised $105 million, while U.S.-based Harvey secured a whopping $300 million round led by Sequoia. Last year, London’s Genie AI raised €16 million, and Lawhive raised $40 million to focus on ‘main street’ U.S. lawyers. Luminance’s latest fundraising places it firmly among these high-growth legal AI companies.

Luminance AI Approach:

At its core, Luminance uses what it calls a “Panel of Judges” AI system, designed to automate and augment a business’s approach to contracts, including generation, negotiation, and post-execution analysis. Unlike many AI startups that rely heavily on general-purpose large language models, Luminance has created a proprietary legal pre-trained transformer (LPT). It was trained on over 150 million verified legal documents, many of them not publicly available, with the goal of giving the firm a strategic edge over peers that build their applications on top of general-purpose AI models.

Luminance offers:

Using Lumi Go, its flagship product, businesses can send draft agreements to counterparties through the platform and have the AI negotiate on their behalf. Unlike GPT-based models, whose broad but sometimes unreliable outputs can be hard to defend, Luminance’s LPT is purpose-built for legal applications, resulting in greater accuracy and defensibility.

Eleanor Lightbody, Luminance’s CEO, who took over from the founders after its Series A round, said, “It’s a domain-specialized AI that is built with lawyers in mind […] They need to understand that the outputs have been validated and can be trusted, and that’s exactly what our specialized AI can achieve”. She emphasized that Luminance follows a “mixed model approach” in which different models verify each other’s outputs to ensure both transparency and accuracy. She said, “The platform was built with the understanding that each model is good at different things. What you want is to have a mixed model approach, where the models can check each other’s ‘homework,’ and you can get the most accurate and the most transparent answers”.
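To make the “check each other’s homework” idea concrete, here is a minimal sketch of a mixed-model review loop. This is purely illustrative: the stub models, verifiers, and the `panel_of_judges` function are hypothetical stand-ins, not Luminance’s actual (proprietary) system, which is not publicly documented.

```python
# Hypothetical sketch of a "panel of judges" style mixed-model loop:
# several models answer, several verifiers score every answer, and the
# answer with the most approvals (ties broken by majority vote) wins.
from collections import Counter

def panel_of_judges(question, models, verifiers):
    """models: callables question -> answer.
    verifiers: callables (question, answer) -> bool.
    Returns the answer approved by the most verifiers."""
    answers = [m(question) for m in models]
    scored = [(sum(v(question, a) for v in verifiers), a) for a in answers]
    best = max(s for s, _ in scored)
    finalists = [a for s, a in scored if s == best]
    # Among equally approved answers, pick the one most models agreed on.
    return Counter(finalists).most_common(1)[0][0]

# Stub "models" standing in for domain-specialized LLM calls:
models = [
    lambda q: "clause permits assignment",
    lambda q: "clause permits assignment",
    lambda q: "clause forbids assignment",
]
# Stub "verifiers" that sanity-check an answer's form and content:
verifiers = [
    lambda q, a: "assignment" in a,
    lambda q, a: a.startswith("clause"),
]

print(panel_of_judges("May the contract be assigned?", models, verifiers))
```

Because every stub answer passes both verifiers here, the tie is broken by the two-to-one majority, so the call returns “clause permits assignment”.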

Global Expansion and Future Plans:

Luminance has grown rapidly and now counts over 700 clients in more than 70 countries, including AMD, Hitachi, LG Chem, SiriusXM, Rolls-Royce, and Lamborghini, among other large corporations. The company has also moved aggressively into the North American market, tripling its headcount there while opening offices in San Francisco, Dallas, and Toronto and expanding its New York headquarters.

The Series C round saw participation from several investors, including Forestay Capital, RPS Ventures, and Schroders Capital, alongside existing backers March Capital, National Grid Partners, and Slaughter and May. With the newly injected capital, Luminance is set to further develop its AI capabilities while consolidating its status as a leading player in the legal tech arena.

Luminance’s rising stock testifies to the innovative applications of artificial intelligence across sectors, particularly in law, where precision and reliability matter enormously. The funding positions the company to extend its international footprint and enhance its specialized legal AI models. As artificial intelligence is woven into legal workflows, the future of law increasingly looks like one where human expertise is accompanied by intelligent automation. With this latest round, Luminance aims to push legal automation toward greater efficiency, accuracy, and reliability worldwide.

Read More: Elon Musk’s AI Revolution Continues as xAI Unveils Grok 3 AI Model

South Korea’s AI Power Play; Securing 10,000 GPUs for the Future

South Korea plans to procure about 10,000 high-performance GPUs this year, furthering its position in the rapidly accelerating global AI race. The purchase falls under a wider plan to build a cohesive national AI computing infrastructure and sustain the country’s innovation ecosystem. Artificial intelligence has changed the face of the world, and countries are working hard to establish large computing infrastructures; South Korea is the latest to announce a major GPU acquisition. The AI race is no longer just a contest between technology giants, it is becoming an all-out showdown among nations.

An Alternate Strategic View:

As artificial intelligence becomes a key driver for economic and technological growth, the intensified competition now encompasses not just corporate rivalries but also national innovation ecosystems. This strategic view was articulated by acting President Choi Sang-mok, who said, “As competition for dominance in the AI industry intensifies, the competitive landscape is shifting from battles between companies to a full-scale rivalry between national innovation ecosystems”.

In partnership with the private sector, South Korea seeks to obtain the GPUs for its national AI computing center, set to start operations soon, to support its AI aspirations.

Global Regulation for AI Chips:

This follows recent U.S. government regulations restricting the export of AI chips. The new rules rank nations in tiers and place South Korea among 18 countries exempt from export restrictions. Meanwhile, some 120 other countries face export limits, and nations such as Iran, China, and Russia are virtually banned from accessing U.S. AI chips.

The number of GPUs needed for an AI model depends on factors like processing-power demands, the amount of data, the complexity of the model, and the time allotted for training. South Korea’s Ministry of Science and ICT has yet to finalize its budget, choice of GPU models, and private sector partners, but the government anticipates wrapping this up by September 2025.
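Those factors can be turned into a rough back-of-envelope estimate using the widely cited rule of thumb that training a dense transformer costs about 6 × N × D floating-point operations (N = parameters, D = training tokens). The sketch below is illustrative only; the per-GPU throughput and utilization figures are assumptions, not specifications of any particular chip, and real procurement sizing would involve many more variables.

```python
# Back-of-envelope GPU-count estimate from the ~6*N*D training-FLOPs
# rule of thumb. Throughput and utilization values are assumptions.

def gpus_needed(params, tokens, days, flops_per_gpu=1e15, utilization=0.4):
    """Estimate GPUs required to train a dense model within `days`.

    flops_per_gpu: assumed peak FLOP/s of one accelerator in low precision.
    utilization: assumed fraction of peak actually sustained in practice.
    """
    total_flops = 6 * params * tokens          # rule-of-thumb training cost
    effective_rate = flops_per_gpu * utilization  # FLOP/s per GPU, sustained
    seconds = days * 86_400
    return total_flops / (effective_rate * seconds)

# Example: a 70B-parameter model on 2 trillion tokens in 30 days.
print(round(gpus_needed(70e9, 2e12, days=30)))  # -> 810
```

Under these assumed numbers, a single large training run can plausibly occupy a four-digit GPU count, which puts a national purchase of 10,000 GPUs in perspective.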

Global GPU Market and South Korea AI Investment:

NVIDIA dominates the global GPU market with a share of more than 80 percent and remains a vital supplier for most AI companies worldwide. GPUs are widely regarded as the keystone hardware for general AI and accelerated-computing applications. However, major players such as Microsoft-backed OpenAI are now searching for alternatives to reduce their reliance on Nvidia; OpenAI is finalizing the design of its own AI chip and will turn to Taiwan Semiconductor Manufacturing Co (TSMC) for manufacturing.

China is also beginning to produce impressive results in artificial intelligence: the Chinese startup DeepSeek develops AI models that focus more on computational efficiency than on raw processing power, which could narrow the gap between Chinese and U.S. AI chips.

With its ambitious program to acquire 10,000 GPUs, South Korea is pushing hard to join the ranks of highly competitive AI innovators. It is building partnerships with the private sector and leveraging its exemption from U.S. chip restrictions to position itself at the front of the AI revolution. The next few months will be vital as the government finalizes procurement plans and advances its national AI strategy. If the technology keeps evolving as expected, South Korea’s investment in AI infrastructure may pave the way for significant breakthroughs in the coming years.

Read More: South Korea Suspends New Downloads of DeepSeek over Data Privacy Concerns

EU’s AI Regulation Shift: A Strategic Advantage for U.S. Tech Giants?

The European Union (EU) is reassessing its approach to artificial intelligence (AI) regulations, which could create significant opportunities for Apple, Google, Microsoft, and other major U.S. technology companies. According to a report by the Financial Times,

“The European Union is looking to scale back certain regulatory restrictions on AI to attract more investment and boost competitiveness in the global AI sector. Henna Virkkunen, the European Commission’s digital policy chief, emphasized that the EU’s objective is to ‘help and support’ AI-driven businesses while ensuring that compliance obligations do not create unnecessary barriers to growth.”

This potential shift in policy comes as the EU faces increasing pressure to balance technological advancements with regulatory oversight. The proposed AI Act, which categorizes AI technologies based on their risk levels, imposes stricter regulations on high-risk models such as GPT-4 and Google Gemini. However, the latest discussions indicate that the European Commission may seek to minimize reporting obligations for European businesses to prevent excessive regulatory burdens.

EU’s Changing AI Policy and Industry Reactions

Henna Virkkunen, the European Commission’s digital policy chief, stated in an interview with Euractiv that the EU’s goal is to “help and support” companies while ensuring responsible AI development. She emphasized that European businesses should not be overwhelmed by compliance requirements that could hinder their global competitiveness.

In a parallel development, the European Commission has withdrawn a proposed AI liability directive, signaling an effort to streamline AI regulations. An upcoming AI code of practice, expected to be introduced in April 2025, aims to align existing AI laws with practical industry requirements.

However, this regulatory shift has drawn mixed reactions. U.S. officials have expressed concerns over Europe’s AI governance model, arguing that overregulation could stifle innovation. Speaking at an AI summit in Paris, U.S. Vice President JD Vance criticized Europe’s content moderation policies, calling them “authoritarian censorship,” and warned that excessive restrictions could undermine the potential of AI-driven industries.

A Competitive Landscape for AI Development

With the U.S. maintaining a flexible regulatory approach, analysts suggest that the EU’s move may reflect a response to growing competition in AI leadership. U.S. President Donald Trump’s administration maintained a pro-business stance on AI, which some believe has indirectly influenced Europe’s evolving AI policies.

While the EU maintains that its regulatory changes are independent of U.S. influence, the timing raises questions about whether the continent seeks to attract more AI investment and prevent businesses from shifting operations elsewhere. The next few months will determine whether these adjustments will benefit European tech firms or strengthen U.S. tech dominance in the region. The April 2025 AI code of practice will provide further insights into the EU’s long-term AI strategy, shaping the future of AI governance, industry innovation, and global competition.

Read More: Elon Musk Announces Live Demonstration of Grok 3 AI Chatbot

Elon Musk Announces Live Demonstration of Grok 3 AI Chatbot

Tech billionaire Elon Musk has announced that Grok 3, the latest iteration of xAI’s artificial intelligence chatbot, will be officially unveiled in a live demonstration on Monday at 8 p.m. Pacific Time (0400 GMT on Tuesday). Musk’s xAI, positioned as a competitor to OpenAI, has been developing Grok 3 as an advanced AI chatbot aimed at rivaling the capabilities of ChatGPT and other leading AI models.

Grok 3 in Final Stages of Development 

Earlier this week, Musk announced that Grok 3 was nearing completion and that its launch was expected within one to two weeks. This announcement aligns with Musk’s ongoing efforts to expand xAI’s role in the AI sector, following the introduction of earlier Grok models, which were integrated into his social media platform, X (formerly Twitter).

Musk Claims Grok 3 Outperforms Existing AI Models 

Musk has been vocal about Grok 3’s capabilities, hinting at significant improvements in reasoning and language understanding. While no official benchmarks have been released, he suggested that Grok 3 has demonstrated superior performance compared to existing chatbots in internal tests. This launch is expected to be a critical moment for xAI as it attempts to establish itself in the increasingly competitive AI race, currently dominated by OpenAI, Google DeepMind, and Anthropic.

xAI in Talks for $10 Billion Investment 

Beyond the Grok 3 release, reports indicate that xAI is in discussions to secure up to $10 billion in funding, potentially valuing the company at $75 billion. If successful, this would further solidify xAI’s position as a major player in the AI industry, providing the necessary resources to scale its technology and infrastructure.

What to Expect from the Grok 3 Launch?

The live demonstration of Grok 3 will likely showcase real-time AI interactions, emphasizing improvements in contextual understanding, speed, and reasoning abilities. Industry experts and AI enthusiasts will be watching closely to see how Grok 3 stacks up against ChatGPT and Google Gemini. With the AI landscape evolving rapidly, Musk’s xAI is aiming to disrupt the market by delivering a more advanced, responsive, and intuitive AI assistant. Whether Grok 3 lives up to the hype remains to be seen, but one thing is certain—the competition in AI innovation is intensifying.

Read More: Google’s Most Advanced AI Model Gemini 2.0 Now Available for Everyone

Baidu Goes Open-Source With its Latest Ernie AI Model

The AI battlefield grows more interesting by the day. Baidu, one of China’s top technology giants, has decided to take a drastic step in the AI race by open-sourcing its Ernie AI model. Baidu has declared that it will make its next-generation artificial intelligence model, Ernie, open source starting June 30, a major departure from the company’s previously closed approach to AI development. The twist comes as competition heats up in artificial intelligence, with new entrants like DeepSeek disrupting the field with accessible and cheaper AI solutions.

Baidu’s Shifting Approach:

Baidu CEO Robin Li, a long-time advocate of closed-source AI models, seems to be wavering. The growth of DeepSeek, a startup offering open-source AI solutions with performance said to rival OpenAI’s most advanced systems, has shaken the landscape. The cost-effectiveness and accessibility of open-source alternatives have forced Baidu to reconsider its competitive strategy.

Alongside the open-sourcing of the Ernie AI model, Baidu announced that starting April 1, its AI chatbot, Ernie Bot, will be free to use. This marks a significant turn: since Ernie Bot’s launch, premium versions had been sold as exclusive paid services.

Market Share:

When OpenAI released ChatGPT in 2022, Baidu was one of the first major Chinese companies to pour money into AI ventures. Yet despite the resources invested, the Ernie AI model has struggled to match rivals in public adoption. According to January statistics from AI product tracker Aicpb.com, ByteDance’s Doubao chatbot leads with 78.6 million monthly active users, followed by DeepSeek with 33.7 million, while Baidu’s Ernie Bot trails with only 13 million. By open-sourcing Ernie, Baidu hopes to drive greater adoption of its AI technology.

Future of Ernie AI Model:

The gradual rollout of the Ernie 4.5 series will continue in the coming months, with the open-source release expected to take place on June 30. Baidu said in a WeChat post, “We will gradually launch the Ernie 4.5 series in the coming months and officially open-source it from June 30”. 

For the longer term, Baidu is already working on the next generation of its flagship model, Ernie 5, expected to hit the market in the latter half of 2025. Li also emphasized the benefits of open source, arguing that making models available accelerates the adoption of the technology. He said, “If you open things up, a lot of people will be curious enough to try it. This will help spread the technology much faster”.

Baidu’s Strategy:

By opening up Ernie, Baidu is positioning itself to drive broader AI adoption and compete with emerging rivals. The decision to open-source the Ernie AI model is a big departure from Baidu’s formerly closed view of AI development. With the fast-changing industry leaning towards accessibility and collaboration, could this position Baidu as a leader in open AI innovation, or is it merely a bid to remain relevant amid rising competitors?

With Ernie 5 coming soon and the open-source release due in June, the next chapter of the AI race is only beginning. How it plays out is anyone’s guess, but one way or another the AI space is heading into an extraordinarily volatile period. Baidu’s long-term AI strategy will prove instrumental in shaping the future architecture and direction of artificial intelligence, both in China and globally.

Read More: Google’s Most Advanced AI Model Gemini 2.0 Now Available for Everyone

Elon Musk’s $97.4 Billion Offer to Acquire OpenAI Rejected

Tech billionaire Elon Musk, co-founder and former board member of OpenAI, recently made a staggering $97.4 billion offer to acquire full control of the artificial intelligence company. However, OpenAI’s board rejected the bid, citing concerns over its mission, autonomy, and ethical considerations. In an official statement released through OpenAI’s press account on X, board chair Bret Taylor described Musk’s offer as a deliberate move to interfere with his competition.

“OpenAI is not for sale, and the board has unanimously rejected Mr. Musk’s latest attempt to disrupt his competition,” Taylor said. “Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure [artificial general intelligence] benefits all of humanity.”

The New York Times reported that OpenAI also addressed a letter to Musk’s attorney, Marc Toberoff, stating that the proposal did not align with OpenAI’s mission and was not in its best interests. The decision has sparked widespread debate over the future of AI governance and Musk’s ambitions in the AI industry.

Why Did OpenAI’s Board Reject Musk’s Offer?

1. Preserving OpenAI’s Independence

OpenAI’s leadership believes that Musk’s takeover would jeopardize the organization’s autonomy, potentially shifting its priorities toward his business interests, mainly his AI venture, xAI. By rejecting the offer, the board aims to maintain control over its research direction and prevent external influence from dominating decision-making.

2. Conflict Between Mission-Driven and Profit-Driven Goals

Originally founded as a nonprofit, OpenAI transitioned into a capped-profit model to balance funding needs and ethical AI development. The board fears that Musk’s leadership could tilt the company towards a profit-driven agenda, undermining its commitment to developing AI for the broader good rather than commercial gain.

3. Musk’s History with OpenAI

Musk resigned from OpenAI’s board in 2018 after an unsuccessful attempt to take control of the company. The board considers this past power struggle as a key factor in rejecting his current bid, viewing it as a continuation of his previous efforts to dominate OpenAI’s direction.

4. Tensions with Microsoft and AI Ecosystem

OpenAI has a major partnership with Microsoft, which has invested billions in the organization. If Musk were to gain control, it could disrupt this collaboration, leading to legal and financial complications. The board is also concerned about potential conflicts between OpenAI’s roadmap and Musk’s competing AI firm, xAI.

5. Legal Risks and Regulatory Concerns

Musk has had numerous legal disputes and regulatory challenges in the past, including with the SEC and Tesla. OpenAI’s leadership fears that his control could introduce unnecessary instability, regulatory scrutiny, and delays in AI safety frameworks that are crucial for the industry’s responsible development.

Musk’s Response and Industry Reactions

Following the rejection, Musk expressed his displeasure, criticizing OpenAI for abandoning its original mission of creating open-source AI. He has also hinted at further legal action or alternative AI strategies, reinforcing his commitment to advancing artificial intelligence through xAI and other ventures.

Industry experts remain divided on the issue. Some argue that Musk’s resources and expertise could have accelerated OpenAI’s innovations, while others support the board’s stance on keeping AI development independent from corporate dominance.

What’s Next for OpenAI?

With Musk’s bid off the table, OpenAI will likely continue its current trajectory with Microsoft’s backing and expand its AI capabilities while maintaining governance safeguards. The rejection signals the board’s commitment to AI safety and ethical considerations, ensuring that advancements align with their foundational mission. As the AI race intensifies, OpenAI’s decision will shape the broader debate on who controls AI, how it is developed, and whether it remains a force for public benefit or corporate interests.

Read More: OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

Elon Musk’s Battle to Buy OpenAI, Five Crucial Insights from His Offer Letter

In the life of tech billionaires, drama tends to arrive sooner than a software update. Musk’s latest move is a bid of nearly $97 billion to reclaim OpenAI, a company he once championed but is now suing. While OpenAI CEO Sam Altman brushed the offer aside, court filings reveal Musk’s detailed offer letter, telling a saga of lawsuits, power struggles, and strategic chess moves that have entangled two of the most powerful names in the AI industry.

An investment syndicate led by Elon Musk’s xAI has made an unsolicited offer of $97.4 billion to acquire OpenAI. Altman quickly dismissed the offer, which is widely seen as an attempt to block OpenAI’s transition away from nonprofit control, a transition Musk is also challenging in his own lawsuit. In a legal filing on Wednesday, Altman’s team argued that Musk’s position is contradictory: he is attempting to buy OpenAI’s assets while simultaneously trying to prevent their transfer out of the nonprofit.

Musk’s team countered that they would withdraw the offer if OpenAI ceased its efforts to move away from its nonprofit status. Musk’s entire letter of intent to buy OpenAI was published as part of this legal turmoil, opening up a broader view of his plan and motivation.

The five key details from Musk’s offer letter are as follows:

1. Deadline for the Offer:

The unsolicited bid by Musk’s consortium carries a firm expiration date of May 10, 2025. It lapses earlier only if the parties finalize the deal, mutually agree to end discussions, or OpenAI explicitly rejects the offer in writing.

Altman has publicly dismissed the offer (including a humorous counteroffer to buy X at a tenth of the price), but OpenAI has yet to issue an official rejection in writing. Legal requirements mean even offers from competitors must be given due consideration before being dismissed outright.

2. Cash Transaction:

Musk’s financing group, which includes notable venture capitalists like Joe Lonsdale’s 8VC and SpaceX investor Vy Capital, has offered $97.375 billion entirely in cash. That structure is notable because Musk has borrowed to finance acquisitions in the past, such as the $13 billion in bank loans used to acquire Twitter in 2022, even though his fortune is estimated at about $400 billion, boosted in part by the market rally following Donald Trump’s election win. Interestingly, the letter names seven investors, including Musk’s xAI, alongside unnamed others, implying that Musk is not relying solely on his own wealth to finance the deal.

3. Access to all Financial and Operational Data:

Musk’s consortium demands full access to OpenAI’s financial records, assets, employees, and business operations before committing to such an enormous purchase; the letter specifically mentions “assets, facilities, equipment, books, and records”.

While such due diligence is normal in major transactions, this demand has caused tension: granting that level of access would expose OpenAI’s sensitive internal and state-of-the-art knowledge to xAI, a direct market competitor, raising a potential conflict of interest.

4. Undermining Musk’s Lawsuit:

Musk’s legal battle against OpenAI revolves around his contention that OpenAI’s assets can never be “transferred away” for private gain. However, in a filing on Wednesday, OpenAI’s lawyers pointed out that Musk’s offer contradicts this claim, arguing that it is less a genuine acquisition effort than an attempt to weaken a competitor. They said, “The offer isn’t serious, but an improper bid to undermine a competitor.”

According to OpenAI, the offer is not genuine and was strategically timed to complicate its privatization. Musk’s camp insisted otherwise, claiming that the bid was legitimate and that funding would be funneled straight into OpenAI’s nonprofit purpose.

5. Musk’s withdrawal from the offer:

Musk’s legal team stated that he would withdraw the offer if OpenAI’s board halts its conversion to a for-profit structure. This reinforced speculation that Musk’s bid was aimed not at buying OpenAI outright, but at driving up the price Altman and other top executives would have to pay to take the company private.

A legal representative of OpenAI’s board dismissed the offer, saying that “Musk’s bid doesn’t set a value for [OpenAI’s] non-profit” and that the nonprofit is not for sale.

The Repercussions:

This adds to the already complicated legal and financial drama surrounding OpenAI, which is still far from resolved. OpenAI’s rejection of the bid gives Musk further grounds to challenge the legitimacy of its restructuring; on the other hand, if OpenAI accepted or seriously considered the offer, it would risk trouble over its governance. Either way, whether Musk’s offer is a genuine attempt to acquire OpenAI or just a tactic within his legal showdown, it has put OpenAI in a difficult position.

One wonders whether the billionaire truly wants to acquire OpenAI or is using the bid as a bluff to disrupt its transition to a for-profit structure. One thing is clear: this isn’t merely a corporate dispute; it is a critical moment in deciding what artificial intelligence is and what big tech makes of it in the future. As both sides continue their game of legal and financial chess, the world waits to see who blinks first.

Read More: OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

Gigantic leaps in AI technology have set competing organizations racing to build the most powerful and efficient models. OpenAI, a prime name in AI, has been at the forefront of this transformation. However, it has now strategically scrapped its much-publicized o3 AI model in favor of a more unified, integrated approach. This raises important questions about OpenAI’s long-term vision, competitive positioning, and the way AI development is reshaping the technology landscape.

OpenAI has canceled its much-anticipated next-generation o3 model, opting instead for a broader platform that casts a wider net over its AI technologies. According to Chief Executive Sam Altman, the organization will focus on a new model, GPT-5, designed to unify all of OpenAI’s AI technologies in one integrated offering. Altman said, “In the coming months, OpenAI will release a model called GPT-5 that integrates a lot of [OpenAI’s] technology, including o3, in its AI-powered chatbot platform ChatGPT and API”. As a result of that roadmap decision, OpenAI no longer plans to launch o3 as a stand-alone model.

The new development marks a shift in OpenAI’s roadmap, from several standalone models toward a more unified ecosystem. Altman wrote in his post on X, “We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings. We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten. We hate the model picker [in ChatGPT] as much as you do and want to return to magic unified intelligence”.

OpenAI’s AI Models:

OpenAI had originally planned to launch o3 in early 2025, with Chief Product Officer Kevin Weil suggesting a February-to-March timeline. However, Altman’s latest announcement makes clear that o3 will not be released as a separate model. Instead, its capabilities will be merged into GPT-5, which will power not only OpenAI’s ChatGPT platform but also its API services. GPT-5 will be available in several tiers: free users will get unlimited chat access at a standard intelligence level, subject to abuse thresholds; ChatGPT Plus subscribers will get a higher intelligence level with better reasoning; and ChatGPT Pro subscribers will get the highest intelligence level, with advanced reasoning, deep research, and multimodal functions included.

Altman said, “[GPT-5] will incorporate voice, canvas, search, deep research, and more. A top goal for us is to unify our models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks”. Before GPT-5 arrives, OpenAI will release GPT-4.5 (codenamed Orion) within a few weeks. It will be the last “non-chain-of-thought” model the company ships, that is, a model predating the shift toward reasoning-based systems that check their own outputs.

Competition against o3 AI Model:

OpenAI’s delay of o3 and quick pivot to GPT-5 come as global competition in AI heats up. Chinese AI startup DeepSeek has made headlines with its R1 model, an open-source alternative that has reported strong performance on key benchmarks against OpenAI’s earlier work. While OpenAI keeps its models proprietary, DeepSeek’s model is freely available to software developers, posing a potential challenge to OpenAI’s dominance.

Altman admitted that DeepSeek has reduced OpenAI’s technological lead, and in response said, “OpenAI would pull up some releases to better compete”. However, according to reports by Bloomberg, The Information, and The Wall Street Journal, Orion (GPT-4.5) has not met performance expectations and falls short of the improvements that have typically been associated with previous upgrades in models relative to GPT-4o.

Future AI Scenarios:

Such “integrated AI” could itself become a major industry trend: GPT-5’s new capabilities, featuring voice, canvas, search, and deep research functions, promise an improved user experience and reasoning power beyond what OpenAI currently offers.

While AI companies race to make the next great breakthrough, OpenAI is readjusting its strategy by shelving o3 and turning to GPT-5. Whether that strengthens the company’s lead in the race for next-generation AI or makes room for new competitors remains to be seen. One thing is certain: as the AI environment evolves, so does the need for adaptability and innovation.

Read More: Elon Musk’s $97.4B Bid for OpenAI Sparks Controversy and Industry Shockwaves

Saudi Arabia’s Grand Vision at Leap 2025: a $15 Billion Plan for AI Leadership

AI has come a long way: from a figment of imagination in the minds of science fiction writers, it has become something real in today’s economies and industries, where billion-dollar headlines are created. Saudi Arabia has now upped its stake with $15 billion in AI, underlining its effort to establish itself among the major players in the global technology arena. More profound than the scale of these investments is the question raised at the LEAP 2025 tech show: is AI a tool for human progress, or are we on the verge of an identity crisis where AI might know us better than we know ourselves?

LEAP, held in the Saudi capital, has become a stage for announcements and arguments around artificial intelligence investment. The kingdom once again used the show to unveil roughly $15 billion in AI projects. In the last two years AI has grown so much, almost stepping out of fiction, that the next inevitable focus seems to be agentic AI: artificial intelligence operating in the background to improve human output. Still, with these advances come risks: the erosion of freedom and autonomy, the possible cloning of identities, and disruption of society.

Saudi Arabia’s Vision 2030:

The investment announcement sits within Saudi Vision 2030, a long-term economic strategy to steer the kingdom away from dependence on oil and broaden its scope into technology. AI is accordingly a prime theme, with extensive discussion at LEAP 2025 of its applications and future development. Other significant AI investments include a $1.5 billion pact between Groq, the AI infrastructure provider, and Aramco Digital to enhance AI-powered inference infrastructure and cloud computing; a $2 billion agreement between Saudi manufacturing conglomerate ALAT and Lenovo to create an advanced manufacturing center for AI and robotics; an expansion by Google in AI-backed digital infrastructure and a new computing cluster; and ALLaM, an Arabic LLM launched by Qualcomm on Qualcomm Cloud AI.

The event also highlights Saudi Arabia’s technology infrastructure investment since 2022, amounting to $42.4 billion, which includes: Databricks investing $300 million in AI tool development for PaaS; SambaNova committing $140 million to build advanced AI infrastructure; Salesforce investing $500 million in Hyperforce enhancement and the development of a regional cloud; and Tencent Cloud allocating $150 million to build AI-powered cloud regions in the Middle East.

Agentic AI’s Future and Risks:

A definite theme throughout Leap 2025 was the transition of artificial intelligence from simple user interaction to agentic AI, where AI becomes an almost invisible assistant working for the individual. Yaser Al-Onaizan, CEO of the National Center for AI at the Saudi Data and AI Authority (SDAIA), elaborated on the transition: “The promise of AI is that it will be in everything that we do and we touch every day. It needs to be invisible. It cannot be in your face – it should be listening to you, understanding you and doing things based on your opinion”.

He went on to say that next-generation AI models would do more than just answer queries. They would plan and act without requiring anything more than tacit user consent to book flights or make reservations.

Although the possibilities of AI seem boundless, experts at the conference also expressed concerns about the risks it poses. Lambert Hogenhout, Chief of data, analytics, and emerging technologies at the United Nations, drew attention to the threats AI poses to human independence. He warned that unchecked AI could contribute to fraud, complicate identity verification, and erode people’s sense of purpose and connection to society. He stated, “We want to make sure AI increases living connections, that we are not eliminated. That it makes a good society. The society where a number of people are excluded is not going to work. It will create problems.”

Productivity and Innovation in AI:

Another main point was how firms can best position themselves in the AI industry. Aidan Gomez, CEO of Canadian generative AI company Cohere, compared generative models to CPUs, arguing that they are valuable only insofar as they are integrated and implemented for specific business needs. He said, “A generative model is kind of like a CPU – it’s a general piece of technology. You could deploy it inside any vertical for any purpose, like a CPU. But, in and of itself, just owning a CPU isn’t valuable. It’s what you build with it that is valuable. So, for that piece, you do have to be technical. You need to be a developer, to be able to build something on top of this model to create value on the other side.”

Saudi Arabia’s $15 billion investment in AI is a key sign of the country’s desire to become a world leader in the field. Discussions at Leap 2025 highlighted AI’s enormous potential to drive productivity, along with the need to address pertinent ethical issues. With Vision 2030 as the kingdom’s inspiration, balancing AI-driven development against human autonomy will be the key tension facing the kingdom’s decision makers, businesses, and society.

Read More: UK Minister Urges Western AI Leadership to Dominate AI Development

UK Minister Urges Western AI Leadership to Dominate AI Development

The world keeps fast-forwarding in the AI race, making it undeniably evident that whoever leads AI will lead the future. The real conflict emerges when algorithms are subtly engineered to outthink humans: what counts is not just who produces the smartest machine, but who ensures that those digital minds fit within democratic ideals. The UK’s Technology Secretary, Peter Kyle, argued that leadership in artificial intelligence must remain with “western, liberal, democratic” nations, especially against the backdrop of the intensifying global race in AI technologies. Speaking ahead of a global summit on artificial intelligence on Sunday in Paris, Kyle stressed the importance of democratic values in the future development of AI, in an apparent nod to China and its rising presence in the field.

The Artificial Intelligence Action Summit, jointly organized by France’s President Emmanuel Macron and India’s Prime Minister Narendra Modi from February 10-11, will bring together political leaders, tech executives, and policymakers to discuss AI’s global roadmap. The summit comes against the background of the recent rise of DeepSeek, a Chinese AI company whose latest technological advances have sought to undermine Silicon Valley.

Democratic Powers’ Role:

Kyle made it clear that the UK intends to position itself at the forefront of AI development, leveraging its scientific expertise and technological capabilities. He stressed that governments play a crucial role in ensuring that AI aligns with democratic values and does not become a tool for authoritarian regimes.

Kyle stated, “Government does have agency in how this technology is developed and deployed and consumed. We need to use that agency to reinforce our democratic principles, our liberal values and our democratic way of life.” He added that he was under no illusion that some other countries seek to do the same for their ways of life and their outlooks.

Without naming any particular country, Kyle said he was not pinpointing one nation, but that “it was important that democratic countries prevailed so we can defend, and keep people safe”. He explained that competing states are already shaping AI according to their respective political ideologies. Such remarks indicate that China has begun establishing its own foothold in AI, presumably challenging Western leadership in the area.

Impact of DeepSeek Emergence:

Some investors in the United States characterized DeepSeek’s recent breakthroughs as a “Sputnik moment,” referring to the shock felt after the Soviet Union put the first artificial satellite in orbit in 1957. The Chinese firm’s AI model was developed at low cost and is mostly on par with, or improves on, US rivals, prompting security reviews by Western nations. Kyle confirmed that the national security implications of DeepSeek and its chatbot would be scrutinized by British officials. However, he maintained that competition should be a motivation rather than a cause for fear. He said, “I am enthused and motivated by DeepSeek. I’m not fearful”.

The AI Summit and UK’s AI Growth Zones:

Now, the Paris summit has been structured around how AI will affect jobs, cultures, and global governance, rather than merely the safety concerns that preoccupied the UK’s inaugural AI summit at Bletchley Park in 2023. Prominent participants include US Vice President JD Vance; President of the European Commission Ursula von der Leyen; Chancellor of Germany Olaf Scholz; Google CEO Sundar Pichai; OpenAI CEO Sam Altman; and Nobel Prize-winning AI pioneer Demis Hassabis. China’s Vice Premier Zhang Guoqing will also attend, making the summit geopolitically significant.

For the UK’s part, Kyle announced that bids have opened for AI growth zones, part of the UK’s AI strategy, which will host new data centers critical for AI training and operation. The aim is to bring economic rejuvenation to historically left-behind regions, especially in Scotland, Wales, and northern England. Kyle stated, “We are putting extra effort in finding those parts of the country which, for too long, have been left behind when new innovations, new opportunities are available. We are determined that those parts of the country are first in the queue to benefit … to the maximum possible from this new wave of opportunity that’s striking our economy”.

The government has also promised to increase energy provision in the AI growth zones, ensuring they have access to more than 500MW of power, enough for about two million homes. Potential first sites for these AI hubs include the Culham Science Centre in Oxfordshire, where the UK Atomic Energy Authority is based.

AI Development:

An early draft of the summit’s closing statement, seen by the Guardian, calls for making AI “sustainable for people and the planet.” The statement emphasizes that AI should be open, inclusive, transparent, ethical, safe, secure, and trustworthy, and it does address trust and safety in AI governance, despite fears that the summit will not go far enough on safety issues. As the AI race speeds up, the UK’s posture is indicative of a wider Western push to retain leadership in AI innovation while making sure the technology works for and with democratic values. Whether it can fulfill this vision amid rising global competition remains to be seen.

Read More: China’s Chip Industry Gains Momentum

Google Expands NotebookLM Plus Access to Individual Users with AI Premium Features

Envision a world where your notes do not merely lie resting on a page, but actively collaborate with you, answering questions, summarizing research, and generating podcasts. Sounds futuristic? Well, Google is turning this vision into reality. The latest upgrade to AI-powered note-taking brings with it the extension of NotebookLM Plus to individual users.

Google has unwrapped NotebookLM Plus, a paid version of its AI-powered note-taking and research assistant. It is available to individual users with a subscription to the Google One AI Premium plan, nearly two months after the launch of the enterprise version through Google Cloud and Google Workspace.

Improved Features for Subscribers:

Initially launched in December following a pilot, NotebookLM Plus offers subscribers higher usage limits and premium capabilities: roughly five times the limits of the free NotebookLM version, with up to 500 notebooks, 300 sources per notebook, 500 chat queries per day, and 20 AI audio clips per day.

These enhanced features are now available to individual users under the Google One AI Premium plan at $20 per month. Google also offers a 50% student discount, giving eligible U.S. students aged 18 and over the plan at $9.99 per month. Kelly Schaefer, director of product and domain lead at Google Labs, said, “We have always wanted to get NotebookLM Plus out to enterprises and consumers, and have seen really a ton of interest from consumers, and in particular students from the beginning”.

Evolution of NotebookLM:

NotebookLM, launched in 2023, made waves when it introduced Audio Overviews in September 2024. The feature lets users create podcast-style audio conversations based on uploaded content, and was later imitated by competitors such as ElevenLabs and Meta. Google has kept refreshing NotebookLM with improvements to keep pace, including advanced AI audio guidance, and the firm is working on extending support to languages beyond English.

Schaefer explained, “We are thinking about how to prioritize the languages and then, most importantly, how to make sure that they feel really genuine and just as seamless and natural as our current Audio Overviews do”. Google has not specified which Gemini AI models power NotebookLM; however, Schaefer confirmed that the same AI model underlies both the Plus and free versions. Google Labs continues to experiment with several Gemini model variants to best optimize the experience for specific tasks.

Advanced AI Integrations and Market Growth: 

Schaefer also elaborated on Google’s plan for a NotebookLM mobile application, designed to deliver a seamless yet customized mobile experience, although no release date was given. Google is also exploring how advanced reasoning models could improve the assistant’s capacity for complex thought processes and reasoning tasks. Schaefer said, “We want mobile to feel in many ways similar to the desktop experience but also really tailor it for the use cases that are most common for mobile”.

While NotebookLM Plus continues to improve for paying subscribers, the company remains committed to providing a great experience for all free users. As Schaefer explained, “We want folks to get an excellent experience on NotebookLM, whether they’re free or paid, and it’s very important to us that the NotebookLM free experience is excellent. So, we’re thinking more about how we offer even more to the Plus users versus degrading any experience for free users”.

Google has not revealed user figures for NotebookLM, but market intelligence firm Similarweb suggests the AI assistant garnered 28.18 million visits over the last three months, 9 million of those in January alone. As more users join NotebookLM Plus, a combination of sustained improvement and planned diversification could further strengthen Google’s position in the AI-driven research assistant market.

Read More: Your Favorite App Might Be Gone! Apple & Google Removed Dangerous App

Google’s Most Advanced AI Model Gemini 2.0 Now Available for Everyone

The future of AI is unfolding before our eyes, and Google is leading the charge with its latest advancements in the Gemini model family. Gemini 2.0 is now available to everyone; from faster processing to deeper reasoning, it promises a new era of AI-powered creation, interaction, and collaboration. Google has officially launched its most recently updated AI model, Gemini 2.0, which brings improved capabilities and better accessibility for developers and users worldwide. It is a major milestone in AI evolution, offering an array of models for a range of tasks, from high-speed reasoning to very cost-effective AI solutions.

Evolution of Gemini 2.0:

In December, Google introduced an experimental release of Gemini 2.0 Flash, an advanced model characterized by efficiency, low latency, and enhanced performance. Updates since then have improved its capabilities, with a version integrated into Google AI Studio for complex problem-solving.

Last week, Google widened access to 2.0 Flash for all Gemini users across desktop and mobile platforms. It is now taking a further step by making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI, so developers can easily build and deploy production applications.
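To give a feel for what building against the Gemini API looks like, here is a minimal sketch of a raw REST call using only the Python standard library. The endpoint path, model name, and response layout follow Google’s public Gemini API documentation at the time of writing and may change; the `GEMINI_API_KEY` environment variable is an assumption made for this example, and in practice developers would typically use Google’s official SDKs rather than raw HTTP.

```python
import json
import os
import urllib.request

# Public REST endpoint for the Gemini API (per Google's docs at the time
# of writing; the version path and model name may change).
ENDPOINT = ("https://generativelanguage.googleapis.com/v1beta/"
            "models/gemini-2.0-flash:generateContent")

def build_request(prompt: str) -> dict:
    """Build the JSON body the generateContent endpoint expects."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

def generate(prompt: str) -> str:
    """Send a prompt and return the first candidate's text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{ENDPOINT}?key={os.environ['GEMINI_API_KEY']}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Responses carry a list of candidates, each with content parts.
    return data["candidates"][0]["content"]["parts"][0]["text"]
```

With a valid key set, `generate("Summarize Gemini 2.0 in one sentence.")` would return the model’s text; the same request shape works against Vertex AI with different authentication.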

Introducing Gemini 2.0 Pro and Flash-Lite:

Gemini 2.0 Pro (Experimental), built for coding and really complicated prompts, is the most powerful model Google has put out yet. With an enormous 2-million-token context window, it has deep analytical capabilities and can call tools like Google Search and code execution. The model is currently available to Gemini Advanced users on Google AI Studio, Vertex AI, and the Gemini app.

Gemini 2.0 Flash-Lite is highly cost-efficient, slightly surpassing its predecessor (1.5 Flash) in quality without sacrificing speed or cost. Flash-Lite has a 1-million-token context window and is optimized for jobs where AI must work very fast for very little. For example, on the paid tier of Google AI Studio, it costs less than a dollar to generate a unique, relevant one-line caption for each of 40,000 photos. It is now available via Google AI Studio and Vertex AI.
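The sub-dollar figure is easy to sanity-check with back-of-the-envelope arithmetic. The per-million-token prices and the per-image and per-caption token counts below are illustrative assumptions for this sketch, not official Flash-Lite pricing:

```python
# Rough cost estimate for captioning 40,000 photos with a low-cost model.
# All prices and token counts here are illustrative assumptions.
PHOTOS = 40_000
INPUT_PRICE_PER_M = 0.075    # assumed $ per million input tokens
OUTPUT_PRICE_PER_M = 0.30    # assumed $ per million output tokens
TOKENS_PER_IMAGE = 258       # assumed fixed token cost per input image
TOKENS_PER_CAPTION = 15      # assumed tokens in a one-line caption

input_cost = PHOTOS * TOKENS_PER_IMAGE / 1_000_000 * INPUT_PRICE_PER_M
output_cost = PHOTOS * TOKENS_PER_CAPTION / 1_000_000 * OUTPUT_PRICE_PER_M
total_cost = input_cost + output_cost
print(f"estimated total: ${total_cost:.2f}")  # about $0.95 under these assumptions
```

Even with these assumptions the total stays under a dollar, consistent with the figure quoted above; the bulk of the cost comes from the image input tokens rather than the short caption outputs.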

Upgrades for the Advanced Era:

All Gemini 2.0 models accept multimodal input and produce text output, with further modalities rolling out in the coming months. Features like image generation and text-to-speech are also under consideration, opening up richer ways to interact with the AI. Meanwhile, as AI technology matures, Google remains committed to safety and responsible use. The reinforcement learning techniques used in the Gemini 2.0 series allow the AI to critique its own answers, making it more accurate and more responsive to prompts. Automated red teaming is also being employed to counter security threats such as indirect prompt injection attacks.

With Gemini 2.0 now available to a wider audience, Google hopes to usher in a whole new age of AI-driven solutions. Developers and users alike can put its powers to the test in the Gemini app, Google AI Studio, or Vertex AI. As AI models continue to evolve, Gemini 2.0 provides a strong platform for the next generation of sophisticated, accessible, and secure applications.

Read More: Google Revises AI Ethics, No Longer Rules Out AI‘s use for Weapons and Surveillance

OpenAI Joins the Super Bowl Ad League: Tough Competition for Tech Giants

OpenAI is set to make its debut in the mainstream advertising market. The company will air its first TV commercial during Sunday’s Super Bowl, The Wall Street Journal reported on Wednesday. The Super Bowl is the world’s most watched TV event, offering advertisers a huge audience and a creative showcase, as Super Bowl commercials can generate enormous buzz beyond game day.

Super Bowl Sunday is one of the greatest days of the year — and not just because of football. It has a legacy of creating super-hit commercials. Some top examples: E*Trade (2008) launched one of the most memorable marketing campaigns in recent years during Super Bowl XLII; Apple (1984) promoted the upcoming release of the Macintosh computer; and Volkswagen’s “The Force” (2011) tapped into childhood memories during Super Bowl XLV.

Super Bowl’s Potential

The 2024 Super Bowl drew an estimated 210 million viewers, a figure that highlights its potential. OpenAI isn’t the first AI company to take this route; rivals like Google ran ads promoting their AI prowess during last year’s Super Bowl. A 30-second spot during the 2025 Super Bowl is estimated to cost up to $8 million; according to Adweek’s Chief Content Officer Zoe Ruderman, a similar spot went for $7 million last year.

The 59th Super Bowl is scheduled for February 9 at 6:30 pm ET. With an estimated 83,000 spectators, the NFL championship game will take place at the Caesars Superdome in New Orleans, home of the New Orleans Saints.

New Marketing Moves:

The Super Bowl ad is not OpenAI CEO Sam Altman’s only marketing move. Since ChatGPT’s release in November 2022, the service has grown to over 300 million weekly active users. The AI developer is reportedly in talks to raise up to $40 billion at a valuation of $300 billion, and it hired its first chief marketing officer, Kate Rouch, in December 2024.

OpenAI, Google AI, or Lama AI? Tough competition.

Not only OpenAI and Google; other competitors are entering the Super Bowl conversation as well. Lending-technology firm Lama AI wrote in a recent LinkedIn post: “Well… we had two choices: Spend millions of dollars on a flashy commercial, OR – invest in building the best #GenAI-powered #lending platform. We went with the exciting option of course (sorry, Hollywood). Because for bankers, business lending isn’t a game. It’s about making the right decisions—quickly, confidently, and without second-guessing. Consider it our unofficial #SuperBowl debut.”

With the post, Lama AI hopes to build the same kind of legacy as Apple’s 1984 ad. Will Super Bowl advertising change the destiny of many brands? Stay tuned to learn more.

Read More: Musk’s Legal Battle with OpenAI May Head to Trial, Judge Rules

The AI Revolution in Europe: AI Startups Secured $8 Billion in 2024

Europe’s AI startups secured a big bag of $8 billion in 2024, a mic-drop moment for the continent. Europe, it turns out, is not all about fine wine and old castles; it is focused on keeping up with AI development in the U.S. and China. AI startups have raised a whopping $8 billion in funding across Europe.

This amount greatly boosts investment and innovation across the continent. However, this surge comes just ahead of the Artificial Intelligence Action Summit, where world leaders and technology executives will gather in France to discuss AI from the perspectives of impact, ethics, and investment potential.

Emerging AI Landscape in Europe:

AI startups are popping up across Europe faster than tourists at the Eiffel Tower. While most discussion of the global AI landscape revolves around the likes of OpenAI or DeepSeek, European startups have carved out their corner in dignified fashion. In fact, AI accounted for an estimated 20% of total VC funding in Europe this year, signifying rising investor confidence in the possibilities of AI in Europe.

The majority of these investments, around 70%, have been directed toward AI companies at an early stage, from seed funding to Series B rounds. This shows that the European AI ecosystem is still undergoing rapid growth, with many players well-positioned for future expansion. The UK, France, Germany, and the Nordic countries, traditionally strong in VC-backed startups, supply major propulsion to AI innovation.

Interestingly, as European AI startups mature, they increasingly attract international investment; by the time they reach Series C and later rounds, about half of the funding comes from U.S.-based venture capital firms. This not only speaks to the strength of European AI firms but also implies that they are becoming relevant on an international scale.

AI Ecosystem in France is on the rise:

Innovation in AI is, indeed, the name of the game in France. The country is home to more than 750 AI startups that create approximately 35,000 jobs. With 2,000 scientists and 600 doctoral students working on AI development, France has also laid down a solid research infrastructure. As Clara Chappaz, Minister delegate for artificial intelligence and digital technologies, said at a press conference, “in France more specifically, there are more than 750 startups that have created 35,000 jobs and operate in all areas that are transforming today’s society”.

This abundance of talent is reflected in the increasing number of French engineers and researchers contributing to leading AI companies in the U.S. and elsewhere. According to a recently published French report on AI, the spectrum of AI applications being born in France is rather wide. While Mistral AI and Poolside have been making headlines, many other startups are contributing to AI infrastructure and application development.

LinkUp and Kestra, for instance, optimize data workflows, while ZML improves inference performance. Others, like Dust, are creating AI agents to enhance productivity by automating large-scale data processing. Most AI startups in France, however, are focused on addressing concerns in the health and climate sectors.

AI Innovations for Health and Climate:

In the health sector, Owkin and Bioptimus, among others, have been developing AI applications for medical imaging, drug discovery, and treatment optimization. These developments aim to radically upgrade patient diagnostics and care, making both more precise and efficient. In the meantime, AI-focused climate startups tackle some of the world’s most pressing challenges: whether in agritech or carbon and energy management, AI-based solutions are finding their way into sustainability applications across Europe. Startups such as Altrove search for alternative materials that can support the green economy.

Future of AI in Europe:

Realistically, not every AI startup will survive the coming years; however, the European AI ecosystem is surely gaining a grip on the industry. As AI becomes embedded in various industries, the continent’s investment landscape is shifting accordingly to support innovation at every point.

This does not seem like a winner-takes-all situation; rather, the AI explosion appears to be an evenly spread phenomenon, with players from various locations helping to shape its future. With the AI Action Summit approaching, Europe’s importance in shaping AI’s future has never been more prominent. The next few years will show whether European AI startups can sustain this growth and compete on a global front, but for now, they are proving that AI innovation isn’t limited to a few tech superpowers.

Read More: Google’s Search Will Evolve into a Personal AI Assistant by 2025

Google’s Search Will Evolve into a Personal AI Assistant by 2025

Google Search is undergoing a transformation that could mark a defining moment for search technology in 2025. Google CEO Sundar Pichai said in his opening remarks, “As AI continues to expand the universe of queries that people can ask, 2025 is going to be one of the biggest years for search innovation yet”.

The company’s vision is to evolve Search from a simple tool that provides links into an AI assistant capable of browsing the internet, analyzing web pages, and delivering well-structured answers.

Evolution of AI Assistant:

This transition began with AI Overviews, a fundamental shift in how Google processes and presents information to users. Although the rollout has been controversial due to some bizarre errors, including suggesting users eat rocks, Google remains committed to integrating AI into Search. Pichai said, “You can imagine the future with Project Astra”, implying that he believes AI can handle a broader range of queries, and in 2025 Google aims to introduce more advanced features.

One of the key multimodal AI systems contributing to this shift is DeepMind’s Project Astra. Unlike conventional search engines, Astra can process and analyze live video or images in real time, providing answers based on what it perceives. This innovation could accelerate Google’s vision for AR smart glasses, moving the company further into an AI-driven future.

Search and AI Research:

Another major feature is Gemini Deep Research, an AI agent designed to create extensive, comprehensive research reports. This could change how users interact with Google Search: rather than going through tons of links, users would receive complete AI-generated insights on very complex topics, automating research that is usually done manually.

Project Mariner, another AI initiative, carries automation even further by attempting to perform online tasks on behalf of users. This could massively change how people browse the web, potentially making it unnecessary to visit a website at all. Google also wants to make Search more conversational, allowing a user-friendly experience where follow-up questions upgrade the interaction, much like chatting with a chatbot.

Google’s AI-driven Search is not just about enhancing the user experience; it is also a way to counter its chatbot rivals. The rapid rise of ChatGPT poses an ever greater challenge to Google’s dominance in online search. By throwing AI deep into Search, Google wants to remain relevant in an age when users expect an intelligent, instant response rather than a simple list of links.

The first generation of Google’s Search AI was nevertheless full of bumps: a whole string of unfortunate mess-ups followed the rollout of AI Overviews, as it dished out improper and, at times, even foolish responses. Google accepted the mishaps and promised improvements, continuing to refine the AI-supported Search rollout for a reliable user experience.

Future of Google Search:

Sundar Pichai regarding the future of Google stated, “You are really dramatically expanding the types of use cases for which Search can work – things which don’t always get answered instantaneously, but can take some time to answer. Those are all areas of exploration, and you will see us putting new experiences in front of users through the course of 2025.” This hinted towards some new ways that users will be able to interact with Search, mainly through conversational features and a more robust capability over time to respond to complex queries.

He also said, “I think the [Search] product will evolve even more. As you make it more easy for people to interact and ask follow-up questions, etc., I think we have an opportunity to drive further growth.” This shows that, regardless of criticism and hurdles, Google aims to spearhead the AI revolution in search technology.

Read More: Google Partners with HTC in $250M XR Deal: A Bold Step to Rival Apple and Meta in Immersive Tech

Google Revises AI Ethics, No Longer Rules Out AI‘s use for Weapons and Surveillance

In the current AI landscape, corporate ethics appear to be very flexible. It is becoming clear that the boundaries separating innovation, ethics, and business interests are blurring by the day. It seems Google’s AI ethics are now open source, free for anyone to rewrite, including Google itself. Google has quietly removed one of the central ethical barriers once enshrined in its AI principles: a pledge not to develop AI technology for weapons and mass surveillance. The change, identified through CNN’s analysis of the Internet Archive’s Wayback Machine, indicates a major shift in Google’s perspective on AI ethics.

Ethical breach:

Google’s AI principles once stated that the company would not pursue “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” nor technologies that gather or use information for surveillance in violation of internationally accepted norms. With the latest update, that language has disappeared entirely, leaving it far less clear how Google engages with these topics now.

Since OpenAI released ChatGPT in 2022, AI has evolved at an unprecedented pace without proper regulation or ethical oversight. With its new policy wording, Google could presumably engage more flexibly with governments and defense contractors on law-enforcement and military projects.

A Shift in Values:

In a blog post on Tuesday, James Manyika, Senior Vice President of Research, Labs, Technology and Society, and Google DeepMind head Demis Hassabis defended the policy shift, stating: “AI frameworks published by democratic countries have deepened Google’s understanding of AI’s potential and risks. There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.”

This latest turn radically opposes Google’s past commitments. In 2018, after thousands of upset employees signed a petition against military applications of AI, Google withdrew from bidding on a $10 billion Pentagon cloud computing contract, explaining that it could not be sure the project would stay within its AI principles; some employees even resigned in protest.

The post further elaborated: “We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.” AI will keep advancing, and so will the tussles over its ethical use; Google’s recent pivot shows that its position is far from cemented.


OpenAI Seals Partnership with Kakao, Expanding Its Asian Collaborations

OpenAI unveiled its expansion this week by forming a second major Asian alliance. Sam Altman, CEO of OpenAI, said, “We will develop products for South Korea with the AI collaboration between OpenAI and Kakao.” He also sat down with the chief executives of Samsung and SoftBank.

On the whirlwind tour of OpenAI’s Asian expansion, Altman also announced a partnership in Japan. According to sources, his next stop on Wednesday is India, where he is seeking to meet Prime Minister Narendra Modi.

Like SoftBank, Kakao stated that it will introduce AI features to KakaoTalk with the help of ChatGPT, hoping for a significant boost to its technology. Kakao’s messaging app dominates South Korea with about 97% domestic market share, and the company has expanded into other industries like e-commerce, gaming, and payments. It has positioned AI as a catalyst for growth, though analysts say it has lagged behind local rival Naver in South Korea’s AI market.

“We are particularly interested in AI and messaging,” Altman said at the joint press conference in Seoul with Kakao CEO Chung Shina. He added that the strength of Korea’s technology companies makes it a demanding market for building AI interaction into products.

Stargate and the Korea Computing Centre

Altman also said that Korean companies are significant contributors to the Stargate project, a venture between OpenAI and Oracle, backed by U.S. President Donald Trump, to expand AI capacity in the U.S. He declined to say more, noting that partnership conversations are confidential.

Before the Stargate meeting at Samsung’s office, SoftBank’s Son mentioned a potential cooperation with Samsung. Afterwards he said, “We had a good discussion,” and did not elaborate. Rene Haas, CEO of SoftBank-owned Arm, said Samsung is “a good partner.” Meanwhile, Samsung declined to comment on the meeting.

Altman also met with the chairman of SK Group on Tuesday. Both SK Hynix and Samsung Electronics produce chips for AI processors. SK Hynix said it had a good discussion with Altman about AI chips and building a valuable AI ecosystem.

Son declined to answer when reporters asked about the Stargate initiative. Asked separately whether OpenAI was looking to join or invest in the Korean computing centre project, Altman said the U.S. company was “actively considering” such a move.

Last month, the South Korean government said that building a national AI computing centre would draw investment worth up to 2 trillion won ($1.4 billion). Kakao shares fell 2% on Tuesday after rallying 9% a day earlier.

Read More: OpenAI launched Deep Research, ChatGPT’s new AI agent

Development of Extremely Risky AI Systems May Halt, Meta Indicates

CEO Mark Zuckerberg has committed to eventually building artificial general intelligence (AGI), AI capable of performing any task a human can, and to one day making it openly available. However, a new policy document from Meta suggests that in certain cases the company may choose not to release highly advanced AI systems developed internally.

AI System’s risks:

In the document, named the Frontier AI Framework, two types of AI systems, “high risk” and “critical risk”, are deemed too risky to release. According to Meta, both classifications cover AI systems that could aid in breaching cybersecurity measures or in chemical and biological attacks. Critical-risk systems could cause a “catastrophic outcome that cannot be mitigated in a proposed deployment context,” whereas high-risk systems may facilitate attacks, but not as effectively or reliably as critical-risk ones.

Meta provides examples of potential threats, such as the automated end-to-end compromise of a best-practice-protected corporate-scale environment and the “proliferation of high-impact biological weapons”. Meta says it “doesn’t believe the science of evaluation is sufficiently robust as to provide definitive quantitative metrics” for deciding a system’s riskiness. The company acknowledges that its list is not exhaustive but represents what it views as “the most urgent” and plausible risks arising from the release of powerful AI.

Notably, Meta measures system risk not through a single empirical test but through insights gathered from several internal and external researchers, with the final decision resting with senior executives. According to the company, current assessment methods are simply not “sufficiently robust” to set definitive quantitative risk thresholds.

If an AI system is classified as high-risk, Meta will restrict internal access and hold the system’s release in limbo until mitigations reduce the risk to a moderate level. If a system is determined to reach critical-risk status, Meta will put security measures in place to restrict all access and suspend its development until the system can be made less dangerous.

Meta’s Frontier AI Framework:

Meta’s Frontier AI Framework is designed to evolve alongside advancements in AI, and publishing it fulfils the company’s prior commitment to do so before the France AI Action Summit. The initiative appears to be a response to criticism of Meta’s open approach to AI development. In contrast to companies like OpenAI, which restrict access to their AI systems behind an API, Meta has generally favoured comparatively open, though still controlled, access to its models.

While this has made its Llama AI models popular, it has also been fairly contentious, especially after reports that adversaries of the U.S. used Llama to build a defence chatbot. With the announcement of the Frontier AI Framework, Meta may also be trying to distinguish its stance from that of DeepSeek, a Chinese AI company that likewise releases its models openly but with fewer safeguards against harmful content.

Meta says, “[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.” In short, Meta aims to develop advanced AI in a way that maximizes its societal benefit while minimizing its risks.

Read More: Meta’s Shift to Community Notes: Revolution or Risk?

Meet Operator: OpenAI’s Bold Step Toward an AI That Works for You

Remember ‘Operator’?

I’ve previously introduced this upcoming AI agent and given you guys a sneak peek!

In that post, we talked about what we learned from the leaks. But now OpenAI has launched its latest innovation, ‘Operator’. For a test drive, of course.

Let me tell you what it actually is.

An AI agent designed to perform tasks autonomously.

No, it will not do laundry for you but,

  • Can take actions on your behalf, like booking travel, shopping online, or making restaurant reservations.
  • It uses a dedicated browser interface to interact with websites, much like a human would (e.g., clicking buttons, navigating menus, filling forms).

Mimicking humans? Yes, maybe.

Key Features:

  1. Autonomy: Operator can independently complete tasks like online bookings and purchases.
  2. Supervision: It requires user confirmation for critical tasks like finalizing payments or sending emails, ensuring accuracy and security. (control in your hands)
  3. Technology: It’s powered by OpenAI’s Computer-Using Agent (CUA) model, which combines vision and reasoning capabilities from GPT-4o and other advanced models.
  4. Collaborations: OpenAI is working with platforms like DoorDash, Uber, and eBay to ensure Operator aligns with their terms of service. (trying to get your daily tasks done in seconds)
  5. Limited Launch: Initially available in the U.S. for users of ChatGPT’s $200/month Pro subscription plan, with plans to expand to more users and countries.

Limitations:

  • Still at an early stage, so it can’t handle complex jobs.
  • May struggle with tasks that require the user to step in, such as entering passwords.
  • Security measures, such as those around bank transactions, can be a hurdle for it.

Why Is This a Big Deal?

  • Step Toward AI Agents: A major move into the arena of AI agents; tools capable of taking real-world actions.
  • Vision of the Future: AI agents integrated with ChatGPT soon?! This could revolutionize how people interact with technology.

Concerns and Precautions:

  • Prevention of misuse can be a big concern for both users and OpenAI. Adding safety measures will be non-negotiable.
  • The release is a research preview, meaning OpenAI is still exploring its full capabilities and limitations.

Why Does It Matter?

  • A step closer to AI that doesn’t just inform but acts.
  • It sets the stage for competition with similar AI agent technologies from Google, Anthropic, and others.

Stay Tuned!

To learn more about how this technology unfolds.

Related Articles:

Microsoft’s Relationship with OpenAI Cracked When it Hired Mustafa Suleyman, Rival Marc Benioff Says

OpenAI Gains More Flexibility as Microsoft Backs $500B Stargate Initiative

Meet Operator: OpenAI’s AI Tool That Could Take Over Your Computer Tasks

Microsoft Sets Up CoreAI Division for AI Development

Establishment of CoreAI

Microsoft announced a new engineering division, CoreAI – Platform and Tools, to accelerate AI infrastructure and software development. The new body epitomizes the company’s renewed focus on AI across all its platforms.

Headship and Structure

Jay Parikh, a former VP at Meta with vast experience in data centre operations and technical infrastructure, will head the division. Having just joined Microsoft, he will report directly to Satya Nadella. CoreAI brings together teams from Microsoft’s Developer Division and AI Platform, plus parts of the Office of the CTO.

Microsoft’s AI Vision

In his internal memo, Nadella described the company’s push to build “model forward” applications that reshape entire categories of technology. Above all, this attests to the company’s ceaseless aim to hold the lead in AI innovation.

Strategic Impact

Creating CoreAI positions Microsoft to combine its strengths in AI tools with its assets in cloud computing and advanced applications. The restructuring keeps AI leadership a high priority across the company so it can keep up with the field’s rapid pace of development.

Click here to read: Microsoft Files Suit Against Hundreds for Abuse of Az