Is Trump’s Strategic Bitcoin Reserve a Game-Changer or a Political Stunt?

Move aside, gold bars: Bitcoin has a White House invite, and politics and cryptocurrency have collided at the center of history. In a move to establish digital assets as part of core U.S. financial strategy, President Donald Trump has signed an executive order creating a Strategic Bitcoin Reserve. The action is exceptional, marking the first time a global superpower has formally included cryptocurrency in its national reserves. The idea of a “digital Fort Knox” for an asset often called “digital gold” has excited many crypto advocates, but it also raises urgent questions about governance, taxpayer benefit, and the risk of conflicting interests.

The establishment of the Strategic Bitcoin Reserve is seen as a possible turning point in the government’s cryptocurrency policy, and it has already reverberated through both political and financial circles. The announcement came a day before a scheduled White House meeting with top executives from the crypto industry.

Digital Fort Knox

According to White House crypto czar David Sacks’ post on social media platform X,

“The reserve will be capitalized with bitcoin owned by the federal government that was forfeited as part of criminal or civil asset forfeiture proceedings.”

Describing the initiative as a “digital Fort Knox”, Sacks wrote, “The U.S. will not sell any bitcoin deposited into the Reserve. It will be kept as a store of value. The Reserve is like a digital Fort Knox for the cryptocurrency often called digital gold.”

As part of this initiative, Trump has named five cryptocurrencies for inclusion in the government’s reserves: Bitcoin (BTC), Ethereum (ETH), XRP, Solana (SOL), and Cardano (ADA). The news, which rippled through markets earlier this week, underscores how heavily government policy weighs on the volatile, fast-growing crypto sector.

Open Questions & Market Response

The dramatic and unexpected move has left some questions unanswered. How the reserve will actually operate, what benefit it delivers to taxpayers, and whether further acquisitions are planned all remain shrouded in mystery. Sacks added in his post on X, “Premature sales of bitcoin have already cost U.S. taxpayers over $17 billion in lost value. Now the federal government will have a strategy to maximize the value of its holdings.”

Trump’s executive order tasks the Treasury and Commerce departments with working out “budget-neutral strategies” to acquire further bitcoin, meaning the government must get creative about firming up its reserves without increasing public expenditure. Bitcoin reacted sharply to the announcement, initially falling over 5% to below $85,000 after Sacks’ post before recovering to $88,107. Many traders had expected a show of force in active government buying rather than a mere confirmation of holdings the government already has.

Criticism & Ethical Concerns

Not all crypto enthusiasts are toasting the initiative. Charles Edwards, head of the Bitcoin-focused hedge fund Capriole Investments, dismissed it in a post on X:

“This is the most underwhelming and disappointing outcome we could have expected for this week. No active buying means this is just a fancy title for Bitcoin holdings that already existed with the Government. This is a pig in lipstick.”

Concerns have also been raised about possible conflicts of interest. Trump’s family has launched meme coins in the past, and the president holds a financial interest in the cryptocurrency venture World Liberty Financial. His advisors insist that all business interests are being vetted by external ethics lawyers, but skeptics worry that Trump’s policy decisions could be influenced by his private investments.

Game-Changer or a Political Play?

Crypto devotees, many of them millionaires who lent overwhelming financial backing to Republican campaigns in the November elections, now have the long-awaited political support from Trump. Advocates of a national Bitcoin reserve see it as a way for taxpayers to cash in on any future price appreciation, while critics call it a transfer of wealth to an already rich crypto elite.

The crypto world holds its breath as the U.S. government forms its Bitcoin holdings into a strategic reserve. Does the act confer state-backed legitimacy on cryptocurrency, or is it merely a symbolic gesture in an election cycle? Some see a brave new foray, while others warn that state-conferred legitimacy could simply let a small crypto elite keep fleecing the masses. Whether the reserve proves a financial masterstroke or a regulatory nightmare is still open for discussion, but its repercussions are assuredly going to radiate far beyond Washington.

UK Investigates TikTok, Reddit, and Imgur Over Children’s Data Privacy Concerns

Online platforms increasingly shape the web experience of millions, and with that influence has come growing concern among parents about children’s safety and data privacy. In an era when social media algorithms decide what users see, the hazard of young audiences being exposed to inappropriate content, or having their personal information misused, is very real. The UK Information Commissioner’s Office (ICO) has now stepped in, undertaking a major investigation into TikTok, Reddit, and Imgur over alleged breaches of children’s data protection law. The outcome of the inquiry may set a whole new pattern for how tech companies handle children’s privacy online.

Announcing the investigation, the ICO said the inquiry will examine whether the platforms, accused of mishandling children’s personal data and online safety, operate within the boundaries of data protection laws and the age assurance regulations aimed at protecting young users.

Algorithm-Driven Content and Age Verification:

Social media platforms employ highly sophisticated algorithms to recommend content and keep users engaged, but in doing so these systems often expose children to content that may be harmful or inappropriate. The ICO is specifically interested in how TikTok, owned by Chinese parent company ByteDance, collects and uses the personal data of minors aged 13 to 17 to recommend content on their feeds.

Reddit and Imgur, meanwhile, are being examined over their age verification efforts, to assess whether they effectively identify and restrict child users. Given past fines against major social media platforms for failing to comply with UK data protection laws, compliance is critical.

Regulatory Actions:

The Information Commissioner’s Office said in a statement, “If we find there is sufficient evidence that any of these companies have broken the law, we will put this to them and obtain their representations before reaching a final conclusion”. In other words, any firm found in breach will have to answer to the regulator before a final verdict is reached. In 2023, TikTok was fined $16 million (£12.7 million) for violations of data protection laws, including using the personal data of children under the age of 13 without proper consent.

Reddit has said it will cooperate with the inquiry and comply with whatever is legally required of it. A Reddit spokesperson said, “Most of our users are adults, but we have plans to roll out changes this year that address updates to UK regulations around age assurance”. TikTok, ByteDance, and Imgur, however, have so far said little.

Strengthening Online Safety Regulations:

The investigation comes as Britain toughens its stance on social media: the UK has mandated stricter age verification for online platforms to protect children from harmful content, and proposed regulations would also require platforms like Facebook, Instagram, and TikTok to adjust their algorithms to filter or reduce exposure to harmful material.

The ICO investigation is another step in worldwide efforts to hold social media platforms accountable for how young people use them. As that scrutiny intensifies, TikTok, Reddit, and Imgur find themselves under tremendous pressure to increase transparency, adopt stricter age verification, and modify their algorithms to reduce exposure to harmful content. Whether the investigation ends in penalties, policy changes, or stricter enforcement is not yet clear, but the issue is significant and will have to be addressed one way or another.

Read More: TikTok (with Douyin) Becomes First Non-Gaming App to Surpass $6B Revenue


SymbyAI Secures $2.1M Seed Funding to Revolutionize Scientific Research with AI

Scientific research has long been burdened by outdated processes, scattered resources, and inefficient collaboration tools. Researchers often spend months reviewing papers, replicating experiments, and navigating fragmented data sources. Recognizing these challenges, SymbyAI, an AI-powered SaaS platform, aims to streamline the research process and enhance collaboration for scientists worldwide.

Founded in 2023 by Ashia Livaudais and Michael House, SymbyAI provides a centralized workspace where researchers can access papers, code, data, and experiment results in one place. The platform also features an AI-powered assistant designed to aid peer review and experiment replication, significantly reducing the time required for critical research tasks.

$2.1M Seed Funding to Fuel Expansion

To accelerate its mission, SymbyAI has raised a $2.1 million seed round, backed by investors including Drive Capital and CharacterVC. The fresh funding will be used to enhance platform capabilities, strengthen partnerships with research institutions, and expand its reach within the scientific community.

Livaudais, who co-founded SymbyAI after struggling with traditional research workflows, highlighted the real-world need for such a solution:

“The foundations of Symby were formed while creating a solution to a problem that I was facing every day, and then realizing that my colleagues in the research community were looking for solutions to the exact same problems. By the time we realized that we could successfully and repeatedly shorten critical research processes from months to hours, demand for a productized version started to emerge from almost every discovery conversation I had.”

Privacy-Focused AI for Researchers

Unlike many AI-powered tools that rely on external models, SymbyAI is built on its proprietary AI solution. This ensures that researchers’ intellectual property remains protected, as their data is not shared with or used to train models from OpenAI, Anthropic, or other third-party AI providers.

Livaudais reassured users about data security, stating:

“It’s also important to note that SymbyAI is built on a proprietary AI solution, so users don’t have to worry about accidentally sending confidential information to OpenAI, Anthropic, or any other company.”

Bridging Science and AI Through Strategic Partnerships

SymbyAI collaborates with academic publishers, universities, and research organizations, making it a valuable tool for institutions that require faster, AI-enhanced workflows.

The company’s journey began with participation in the gBeta program, a startup accelerator run by gener8tor. Through gBeta, Livaudais connected with early investors, including Antler, which had already backed SymbyAI in its pre-seed round.

With its new funding, SymbyAI plans to expand its development team and refine its AI-driven research tools, ensuring that scientists can work more efficiently, collaborate seamlessly, and accelerate discoveries in various fields.

As AI continues to reshape industries, SymbyAI is positioning itself at the forefront of scientific research innovation, promising to transform the way researchers work, publish, and collaborate.

Read More: Google Sheets Gets a Gemini AI Upgrade for Smarter Data Analysis

AI Robots Could Revolutionize Elderly Care in Japan Amid Workforce Crisis

Japan has long been at the forefront of technological innovation, but its rapidly aging population and shortage of elderly care workers are pushing the country to rely on AI-driven robots for assistance. With a declining birth rate and a growing elderly population, experts believe that robots could play a key role in the future of nursing care.

Japan’s Aging Population and Caregiver Shortage

Japan faces what experts call the “2025 problem”: by the end of 2024, all baby boomers born between 1947 and 1949 will have turned 75 or older. This demographic shift has led to a growing demand for nursing care services, but the country is struggling to find enough caregivers.

According to Japan’s health ministry, the number of newborns in 2024 dropped by 5% compared to the previous year, reaching a record low of 720,988. Meanwhile, the nursing sector faces a severe labor shortage, with 4.25 job openings for every applicant, a far worse ratio than Japan’s overall job market, which stands at 1.22 openings per applicant.
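
To put the two figures side by side, here is a minimal back-of-the-envelope sketch in Python, using only the ratios reported above:

```python
# Jobs-to-applicants ratios reported by Japan's health ministry:
# 4.25 nursing-care openings per applicant vs. 1.22 economy-wide.
nursing_ratio = 4.25   # openings per applicant in nursing care
overall_ratio = 1.22   # openings per applicant across the job market

print(f"Nursing care is {nursing_ratio / overall_ratio:.1f}x tighter "
      f"than the overall labor market")  # ~3.5x
```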

Although Japan has increased efforts to attract foreign workers, the total number of international caregivers in the country remains at just 57,000, accounting for less than 3% of the workforce in this sector. Given these challenges, AI and robotics are emerging as the most promising solutions to support the healthcare system.

AI Robots Enter the Nursing Sector

In response to the growing demand for caregivers, Japanese researchers are developing AI-powered robots designed to assist elderly individuals with basic daily tasks. One such innovation is AIREC, a 150-kg (330 lb) humanoid robot that can help turn patients over to prevent bedsores or change diapers.

AIREC is being developed by Professor Shigeki Sugano at Waseda University, with funding from the Japanese government. The goal is to enhance patient care while reducing the workload of human caregivers. However, Sugano notes that robots designed for direct human interaction require high levels of precision and intelligence, something that remains a challenge in robotic development.

“Humanoid robots are being developed the world over. But they rarely come into direct contact with humans. They just do household chores or some tasks on factory floors,” said Sugano, who also serves as President of the Robotics Society of Japan.

Current Role of AI in Elderly Care

Although fully autonomous humanoid robots like AIREC are still years away from deployment, AI-powered caregiving technology is already being used in some nursing homes. Some of the most practical applications include:

  • Sleep monitoring sensors placed under mattresses to track sleeping patterns, reducing the need for nighttime check-ups by caregivers.
  • Small companion robots that engage with residents by singing songs and leading light exercises to provide emotional and mental stimulation.
  • Automated scheduling systems that help manage medication reminders and caregiver tasks.

Despite these advancements, many experts believe that human caregivers will always be necessary, and that AI should be seen as a support tool rather than a replacement.

Will AI Robots Be the Future of Elderly Care?

While AI-driven nursing robots hold great promise, their widespread adoption will take time. AIREC is expected to be ready for real-world use by 2030, but the initial cost is projected to be at least 10 million yen ($67,000), making it expensive for many facilities.

Some caregivers, like Takaki Ito from the Zenkoukai elderly care facility, are hopeful about the future of AI-assisted nursing but remain cautious. “If we have AI-equipped robots that can grasp each care receiver’s living conditions and personal traits, there may be a future for them to directly provide nursing care,” he said.

With Japan’s aging crisis deepening, the need for innovative solutions in the healthcare sector is greater than ever. Whether AI robots like AIREC will be the answer remains to be seen, but technology will undoubtedly play a crucial role in shaping the future of elderly care.

Read More: Microsoft Expands AI Reach with Copilot App for Mac

Meta’s Oversight Board to Assess Hate Speech Policy Changes

Meta, the parent company of Facebook, Instagram, and Threads, has long faced scrutiny over its content moderation policies, especially when it comes to hate speech and misinformation. Over the years, the company has tightened and loosened its regulations in response to public pressure, political discourse, and regulatory scrutiny. Now Meta’s Oversight Board is preparing to review the company’s latest hate speech policy changes on Facebook, Instagram, and Threads, marking a critical moment for content moderation on Meta’s platforms.

In January 2025, CEO Mark Zuckerberg introduced a policy shift aimed at allowing more expression on Meta-owned platforms. The update included rolling back certain protections for immigrants and LGBTQ users, a move that has sparked debate over free speech versus platform safety.

The Oversight Board, an independent body established to review Meta’s policy decisions, has taken notice. It currently has four open cases related to hate speech and will use these cases to assess the impact of the company’s updated guidelines. According to a report by Engadget, the board’s decision could influence how Meta refines its content moderation approach moving forward.

Meta has a mixed record when it comes to adopting the Oversight Board’s recommendations. While the company is required to follow the board’s rulings on individual content cases, it has a limited obligation to make broader policy adjustments. This review will test whether Meta is willing to reevaluate its moderation approach or continue with its more lenient stance on content restrictions.

With misinformation and online harassment on the rise and the political climate intensifying, the outcome of this review could influence how Meta shapes content regulation in the future. Whether the Oversight Board’s findings will result in actual policy changes remains to be seen.

Read More: Amazon Unveils Alexa+ AI Assistant to Revolutionize Smart Living

Australia Hits Telegram with A$1M Fine Over Delayed Child Safety and Extremism Response

In the fast-moving realm of social media, transparency cannot be merely a vague aspiration; regulators treat it as a legal obligation. Telegram, however, chose to take the scenic route in replying to Australian regulators about its safety measures. What should have been a straightforward compliance matter instead became a five-month delay, and for its failure to respond on time to an inquiry into the prevention of child abuse material and violent extremist content, Australia’s online safety regulator has imposed about A$1 million ($640,000) in fines on the messaging platform Telegram.

The eSafety Commission, which imposed the fine on Monday, criticized Telegram’s delayed response as a blatant lack of transparency; under Australian law, the response should have been timely. With Telegram now appealing, the saga raises an urgent question: is online safety regulation something Big Tech can afford to put on snooze?

Scrutiny and Compliance Issues:

In March 2024, the eSafety Commission reached out to a host of social media platforms, including YouTube, X, Facebook, Telegram, and Reddit, asking what they had done, or planned to do, to curb the use of their platforms by extremists. Companies were asked to outline their strategies for countering child sexual abuse material and recruitment by extremist organizations via streaming features, algorithms, and recommendation systems. Other platforms responded on time; Telegram submitted its answers only in October, five months after the deadline.

eSafety Commissioner Julie Inman Grant stressed that transparency is central to regulatory compliance. “Timely transparency is not a voluntary requirement in Australia and this action reinforces the importance of all companies complying with Australian law,” she said in a statement, adding that Telegram’s delay in providing information had obstructed eSafety from implementing its online safety measures.

Telegram’s Response:

Telegram defended its position, stating that it had responded fully to all inquiries, with no issues left pending. “The unfair and disproportionate penalty concerns only the response time frame, and we intend to appeal,” the company said in an emailed statement, arguing that the fine related only to timing and not to any failure to comply with safety requirements.

Scrutiny of the platform has been building globally since French authorities opened an investigation into its founder, Pavel Durov, in August 2024 over allegations concerning Telegram’s use in illicit activities. Durov, who is presently out on bail, denies all allegations.

Implications of Tech Regulation:

The case reflects the prevailing climate of demands for transparency and accountability from tech companies on online safety. Grant asserted that extremist material online is an exponentially growing threat, demanding enhanced enforcement mechanisms to hold tech companies accountable for preventing the exploitation of their platforms. “If we want accountability from the tech industry we need much greater transparency. These powers give us a look under the hood at just how these platforms are dealing, or not dealing, with a range of serious and horrendous online harms which affect Australians,” Grant said. The eSafety Commission stated that if Telegram does not comply with the penalty notice, it will go to civil court to enforce it.

Concerns of Counter-Terrorism:

Australia’s intelligence agencies have also raised alarms about threats from online extremism. As of December 2024, one in five priority counter-terrorism cases in Australia reportedly had a youth component. Such findings add to the urgency for regulators to enforce stricter policies on digital platforms to stop radicalization and harmful content.

This isn’t merely a slap on the wrist for Telegram; it signals to the tech industry that there are limits to regulatory patience. As scrutiny grows worldwide over the role of digital platforms in extremism and child safety, the need for accountability has become apparent. With regulatory pressure on digital platforms gaining momentum globally, Telegram’s legal manoeuvrings could set a precedent for how tech companies engage with regulators and manage compliance expectations going forward.

Read More: Grok 3’s Brief Censorship of Trump and Musk Sparks Controversy

Meta’s Cost-Cutting: Fewer Stock Options, Bigger Executive Bonuses

For years, the promise of stock options, a perk of tech employment, let employees hope against hope that their salaries would convert into millions, until the market, or in this case their own company, decided otherwise. With its share price at record highs, Meta has cut its employees’ equity compensation by roughly 10%. The irony? Stock awards shrank for rank-and-file workers even as executives received inflated bonuses. It’s like being handed half a slice of cake while the host helps themselves to an extra piece on the side.

The Financial Times reports that tens of thousands of Meta Platforms employees face a roughly 10% cut to their annual stock awards, even as the stock hits record highs this month. Each year Meta offers employees equity “refreshers”, which make up a major part of total compensation alongside base salary and bonus; the grants vest every three months over four years. Most employees have been told to expect around 10% less equity this year, with the exact percentage reportedly depending on location and organizational level.
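
To illustrate the mechanics, here is a minimal sketch of the vesting math described above, assuming simple equal quarterly tranches; the $160,000 grant value is a hypothetical example, not a figure from the report:

```python
# Hypothetical illustration: an annual equity refresher vesting in equal
# tranches every three months over four years (16 quarters).
def quarterly_tranche(grant_value: float, years: int = 4) -> float:
    return grant_value / (years * 4)

full_grant = 160_000               # hypothetical grant value
reduced_grant = full_grant * 0.90  # the reported ~10% cut

print(quarterly_tranche(full_grant))     # 10000.0 per quarter
print(quarterly_tranche(reduced_grant))  # 9000.0 per quarter
```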

Increased Bonuses and Workforce Adjustments:

Even as equity shrinks for the broad workforce, executives are being offered larger bonuses. According to a recent company filing, the target executive bonus is being raised from 75% of base salary to 200%, though the new bonuses will not be offered to Meta’s CEO, Mark Zuckerberg.

The change follows reports that Meta will terminate almost 5% of its “lowest-performing” staff and refill the open positions later in the year. Zuckerberg has also noted that he might eliminate even more jobs, emphasizing that raising performance standards is the company’s foremost aim.

Meta’s Stock Performance:

Meta’s stock has been on a run since January 17, when the U.S. Supreme Court upheld the law banning TikTok and the ban crept toward its enforcement date. Investor confidence got a further boost in January when Mark Zuckerberg announced that Meta plans to spend up to $65 billion this year building out its artificial intelligence infrastructure.

Even so, Meta’s shares declined 1.3% to $694.80 last Thursday. Its fourth-quarter earnings report in late January came in above Wall Street estimates, but the company cautioned that first-quarter sales could disappoint, leaving observers unsure about the near-term financial payoff of Meta’s heavy AI investments.

Growth and Cost Management:

Despite record-breaking stock performance and a generally strong market position, Meta chose to trim employee stock awards as part of cost management amid heavy AI investment and an evolving workforce strategy. Trimming stock options for the many while raising bonuses for a few executives is a classic case of how technology giants cut costs for some while keeping their top people happy. How the move affects employee morale and retention during a major AI expansion and push for market dominance remains to be seen. In a rapidly changing technology landscape, the future for Meta’s employees looks as cloudy as their stock allocations.

Read More: Meta Launches Project Waterworth, World’s Longest Undersea Cable

India’s AI Ambitions: Can It Catch Up in the Global Race?

The world of Artificial Intelligence (AI) is evolving rapidly, with China and the US leading the way in developing powerful AI models. Recently, China’s DeepSeek stunned the tech industry by dramatically reducing the cost of building generative AI applications. Meanwhile, India is still playing catch-up in developing its foundational AI model.

The Indian government, however, remains confident. It has announced plans to provide thousands of high-end chips to startups, universities, and researchers, aiming to develop an AI model within 10 months. But with China and the US already years ahead, the question remains: Can India close the gap in time?

Global Tech Giants Bet on India’s AI Future

India’s AI potential is not going unnoticed. OpenAI CEO Sam Altman, who was once skeptical about India’s AI ambitions, now acknowledges the country’s capabilities, stating:

“India should be playing a leading role in the AI revolution.”

India is now OpenAI’s second-largest market by users, highlighting a rapid adoption of AI-driven tools.

Tech giants are also stepping in with major investments.

With over 200 AI startups, India has an active startup ecosystem working on generative AI. But despite this entrepreneurial energy, experts say India is still far behind in critical areas.

Why Is India Lagging Behind?

Limited AI Funding

While India has announced a $1 billion AI mission, this amount pales in comparison to the $500 billion Stargate Project for AI infrastructure announced in the US and a reported $137 billion committed by China.
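
For a sense of scale, a quick calculation using the figures reported above shows just how lopsided the commitments are:

```python
# Reported AI investment commitments, in billions of USD (figures above).
commitments = {"India": 1, "US (Stargate)": 500, "China": 137}
baseline = commitments["India"]
for name, amount in commitments.items():
    print(f"{name}: ${amount}B = {amount / baseline:.0f}x India's mission")
```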

Technology analyst Prasanto Roy points out that China and the US have a “four to five-year head-start”, thanks to massive funding in AI research, academia, and military applications.

Lack of India-Specific AI Datasets

A major roadblock for India is the lack of high-quality datasets for training AI models in local languages like Hindi, Marathi, Tamil, and Bengali. Without strong datasets, creating an India-first AI model remains a challenge.

Talent Drain & Weak Research Infrastructure

India has 15% of the world’s AI workforce, but many top Indian AI experts are moving abroad due to better research opportunities. AI consultant Jaspreet Bindra highlights a key issue:
“Foundational AI innovations typically come from deep R&D in universities and corporate research labs.”

Unlike China and the US, India’s academic institutions and corporate research labs have not yet produced groundbreaking AI innovations.

IT Sector Focused on Services, Not AI Development

India’s $200 billion IT outsourcing industry, centred in Bengaluru, employs millions of coders. However, IT companies have traditionally focused on service-based projects rather than foundational AI research.

As Prasanto Roy points out:
“It’s a huge gap which they left to the startups to fill.”

While startups are trying to bridge this gap, experts question whether they have the resources to match China’s and the US’s AI advancements.

India’s Path Forward: Can It Still Catch Up?

Leveraging Open-Source AI Models

Instead of building AI models from scratch, India can modify and improve existing open-source models like DeepSeek.

AI entrepreneur Bhavish Aggarwal, founder of Krutrim, recently wrote on X:
“India can continue to build and tweak applications upon existing open-source platforms like DeepSeek to leapfrog our own AI progress.”

Investing in Semiconductor Manufacturing

AI models require huge computational power, which means India must invest in semiconductor manufacturing. Currently, India depends on imports for AI chips, which increases costs and delays AI research.

Government-Industry Collaboration

Experts say that India’s success in digital payments through UPI (Unified Payments Interface) was possible because of strong government-industry-academia collaboration. A similar strategy is needed for AI, ensuring research, funding, and policy support AI breakthroughs.

Jaspreet Bindra warns that without sustained funding, India’s 10-month AI model deadline may not be realistic, stating:
“Despite what has been heard about DeepSeek developing a model with $5.6 million, there was much more capital behind it.”

The Race Is On, But India Must Act Fast

India has the talent, market size, and growing investment interest. But to truly compete with the US and China, it must address funding gaps, invest in research, and build AI infrastructure.

Experts agree that the next few years will determine whether India will emerge as an AI leader or continue to rely on foreign AI technology.

Read More: Thousands of Apps Removed from EU App Store as Apple Enforces DSA

Mira Murati’s AI Vision gains Momentum with her new AI startup, Thinking Machines Lab

In the world of AI, change can be sweeping and instantaneous, and so can shifts in power. Mira Murati, ex-CTO of OpenAI, has set up her own AI startup, Thinking Machines Lab, and in this tech-world heist, some 20 researchers from OpenAI have joined her. If AI were chess, Murati just shouted “Check!” while sipping her coffee. So what does that mean for the future of AI, and why does OpenAI suddenly look like a coffee shop on a busy Monday morning with hardly any staff?

Former OpenAI Chief Technology Officer Mira Murati’s new AI startup, Thinking Machines Lab, is already throwing a major twist into the AI research space. Announced last Tuesday, the company boasts of having collected top researchers and engineers from leading AI companies, including OpenAI, Meta, and Mistral. The hiring spree bears testament to Murati’s industry influence: about two-thirds of the young startup’s workforce is made up of ex-OpenAI employees.

Powerhouse Team:

One of the most notable arrivals is Barret Zoph, the renowned AI researcher who left OpenAI on the same date as Murati late in September and joins the startup as Chief Technology Officer. Another star player, John Schulman, who co-founded OpenAI, will be the startup’s Chief Scientist. Schulman had moved from OpenAI to Anthropic in August, saying he wanted to shift his focus to AI alignment, the field concerned with keeping AI models safe, reliable, and consistent with human values.

According to sources, more ex-OpenAI employees are expected to join Murati’s venture, and the company has reportedly already begun talks with venture capitalists to raise funding, evidence of investors’ keen interest in the startup’s mission. At this rate, OpenAI might need an AI-powered therapist.

New Vision for AI Development:

Thinking Machines Lab positions itself as an AI company building something more visionary, and more ethically minded, than its peers. The startup said, “While current systems excel at programming and mathematics, we’re building AI that can adapt to the full spectrum of human expertise and enable a broader spectrum of applications”.

Another unique selling point of Thinking Machines Lab is its cross-disciplinary approach, whereby research and product development teams work together on a common problem to build AI solutions that are both innovative and practical. The company plans to dedicate a significant portion of its funds to AI alignment research, open-sourcing datasets, making model specifications available, and publishing research results.

Murati’s Influence:

An active participant in the development of AI, Mira Murati began her work at OpenAI in 2018. She led the development of ChatGPT and often represented OpenAI in public alongside CEO Sam Altman. However, she abruptly left OpenAI amid a transition in its governance structure, joined by several other high-profile departures. Earlier in her career, Murati led projects at Tesla and at the augmented reality startup Leap Motion, gathering ample experience in cutting-edge technology.

OpenAI Departures:

Murati adds her name to a growing list of former OpenAI executives striking out on new endeavours. Other prominent AI ventures founded by OpenAI alumni include Anthropic and Safe Superintelligence, both of which have attracted significant investment, and talent, from OpenAI. Thinking Machines Lab looks poised to become a serious player, able to build on a solid research base and Murati’s industry experience.

As the AI ecosystem continues to change, Thinking Machines Lab opens yet another chapter in the race to build next-generation artificial intelligence. With an impressive cast, a heavy focus on AI alignment, and a commitment to openness in research, Murati’s newly birthed venture is expected to cause ripples across the industry. The future of AI just got a lot more competitive.

Also Read: South Korea’s AI Power Play; Securing 10,000 GPUs for the Future

UK Minister Urges Western AI Leadership to Dominate AI Development

The world keeps fast-forwarding through the AI race, making it undeniably evident that whoever leads AI will lead the future. The real contest, as algorithms are engineered to outthink humans, is not just who produces the smartest machine, but who ensures those digital minds fit a world of democratic ideals. The UK’s Technology Secretary, Peter Kyle, argued that leadership in artificial intelligence must remain within “western, liberal, democratic” nations, especially against the backdrop of the intensifying global race in AI technologies. Speaking on Sunday ahead of a global summit on artificial intelligence in Paris, Kyle stressed the importance of democratic values in the future development of AI, hinting, to an extent, at China and its rising presence in the field.

The Artificial Intelligence Action Summit, jointly chaired by France’s President Emmanuel Macron and India’s Prime Minister Narendra Modi from February 10-11, will bring together political leaders, tech executives, and policymakers to discuss AI’s global roadmap. The summit comes against the background of the rise of DeepSeek, a Chinese AI company whose latest advances have shaken Silicon Valley.

Democratic Powers’ Role:

Kyle made it clear that the UK intends to position itself at the forefront of AI development, leveraging its scientific expertise and technological capabilities. He stressed that governments play a crucial role in ensuring that AI aligns with democratic values and does not become a tool for authoritarian regimes.

Kyle stated, “Government does have agency in how this technology is developed and deployed and consumed. We need to use that agency to reinforce our democratic principles, our liberal values and our democratic way of life.” He added that he was under no illusion that some other countries seek to do the same for their own ways of life and outlooks.

Without naming any particular country, Kyle said he was “not pinpointing one country”, but that it was important that democratic countries prevailed “so we can defend, and keep people safe”. He explained that competing states are already shaping AI according to their respective political ideologies; the remarks were widely read as a reference to China’s growing foothold in AI and its presumed challenge to Western leadership in the area.

Impact of DeepSeek Emergence:

Some investors in the United States characterized DeepSeek’s recent breakthroughs as a “Sputnik moment,” recalling the shock felt after the Soviet Union put the first artificial satellite into orbit in 1957. The Chinese firm’s AI model was developed at low cost yet is largely on par with, or improves on, US rivals, prompting security reviews by Western nations. Kyle confirmed that British officials would scrutinize the national security implications of DeepSeek and its chatbot. He maintained, however, that competition should be a motivation rather than a cause for fright: “I am enthused and motivated by DeepSeek. I’m not fearful”.

The AI Summit and UK’s AI Growth Zones:

The Paris summit has been structured around how AI will affect jobs, cultures, and global governance, rather than merely the safety concerns that preoccupied the UK’s inaugural AI summit at Bletchley Park in 2023. Prominent participants include US Vice President JD Vance, European Commission President Ursula von der Leyen, German Chancellor Olaf Scholz, Google CEO Sundar Pichai, OpenAI CEO Sam Altman, and AI pioneer and Nobel Prize winner Demis Hassabis. China’s Vice Premier Zhang Guoqing will also attend, adding to the summit’s geopolitical weight.

On the UK’s part, Kyle announced that bidding has opened for AI growth zones, part of the UK’s AI strategy, which will host new data centres critical for AI training and operation. The aim is to bring economic rejuvenation to regions considered historically left behind, especially in Scotland, Wales, and northern England. Kyle stated, “We are putting extra effort in finding those parts of the country which, for too long, have been left behind when new innovations, new opportunities are available. We are determined that those parts of the country are first in the queue to benefit … to the maximum possible from this new wave of opportunity that’s striking our economy”.

The government has also promised to increase energy provision in the AI growth zones, ensuring they have access to more than 500MW of power, enough to supply about two million homes. Potential first sites for these AI hubs include the Culham Science Centre in Oxfordshire, where the UK Atomic Energy Authority is based.
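
As a quick sanity check on that figure, dividing the promised capacity by the number of homes gives the implied average draw per household:

```python
# Back-of-the-envelope: 500 MW shared across ~2 million homes implies
# an average continuous draw of about 250 W per home.
power_watts = 500e6
homes = 2_000_000
print(power_watts / homes, "W per home on average")  # 250.0
```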

AI Development:

A draft closing statement for the summit, seen by the Guardian, calls for making AI “sustainable for people and the planet” and emphasizes that it should be open, inclusive, transparent, ethical, safe, secure, and trustworthy. The statement does address trust and safety in AI governance, despite fears that the summit will not go far enough on safety issues. As the AI race speeds up, the UK’s posture reflects a wider Western push to retain leadership in AI innovation while making sure the technology works for, and with, democratic values. Whether it can fulfill this vision amid rising global competition remains to be seen.

Read More: China’s Chip Industry Gains Momentum

Musk’s Legal Battle with OpenAI May Head to Trial, Judge Rules

A federal judge in California ruled on Tuesday that portions of Elon Musk’s lawsuit against OpenAI, which seeks to stop its conversion to a for-profit entity, will proceed to trial, and that the Tesla CEO will be expected to appear in court to testify.

“Something is going to trial in this case,” District Judge Yvonne Gonzalez Rogers said early in the court session in Oakland, California.

Musk, she indicated, will take the stand and present his case to a jury, and the jury will decide who is in the right.

Rogers was weighing Musk’s recent request for a preliminary injunction to block OpenAI’s conversion before the case goes to trial. It is the latest move in a very public battle between the world’s richest person and OpenAI CEO Sam Altman.

Rogers previously presided over Epic Games’ high-profile case against Apple, which went to trial in May 2021.

Musk co-founded OpenAI with Altman in 2015 but quit before the company took off, and went on to start the competing AI startup xAI in 2023. OpenAI is now shifting from a nonprofit to a for-profit entity, which it says is necessary to secure the capital needed to develop the best AI models.

Last year, Musk filed a case against OpenAI and Sam Altman, saying that OpenAI’s founders had approached him to fund nonprofit AI development for the benefit of humanity but had shifted their focus to making money. He later expanded the case with federal antitrust claims, and in December asked the judge presiding over the lawsuit to stop OpenAI from transitioning into a for-profit.

In response, OpenAI has argued that Musk’s claims should be dismissed and that Musk “should be competing in the market rather than the courtroom”.

The stakes around OpenAI’s transition have risen: its latest fundraising of around $6.6 billion, and a new round of up to $25 billion under discussion with SoftBank, are conditioned on the company restructuring to remove the nonprofit entity’s control.

Such a conversion would be unusual, said Rose Chan Loui, executive director of the UCLA Law center for philanthropy and nonprofit entities. Historically, shifts from nonprofit to for-profit status have involved healthcare organizations like hospitals, not venture capital-backed organizations, she said.

Read More: OpenAI Seals Partnership with Kakao

Elon Musk Reportedly Exerts Influence Over U.S. Government Agencies

Washington, D.C. — In a shocking turn of events, Elon Musk and his associates are said to be taking control of key operations within several U.S. government agencies, including the Office of Personnel Management, the Treasury Department, and the General Services Administration (GSA). This situation reflects Musk’s disruptive approach, similar to his management style after acquiring Twitter.

Tensions Rise in the Treasury Department

According to The Washington Post, the Treasury Department’s highest-ranking career official is stepping down after a “clash” with members of Musk’s so-called Department of Government Efficiency (DOGE). The disagreement reportedly revolves around “access to sensitive payment systems” that manage over $6 trillion annually, funding essential programs like Social Security and Medicare.

Since the November election, DOGE officials have been pushing to gain access to these critical systems. The Trump administration has also been exploring ways to halt federal funding, enacting a controversial spending freeze that legal experts argue could violate the Constitution.

Civil Servants Locked Out

Reuters reports that Musk’s aides have locked career civil servants out of vital computer systems containing the personal data of millions of federal employees. Leaked documents obtained by Wired indicate that Musk’s team has also taken control of the General Services Administration, which manages federal offices and technological infrastructure.

Tech Scrutiny and Employee Exodus

Across various government departments, tech employees are reportedly being subjected to intense code reviews and audits led by Musk’s team, as per Wired. Furthermore, a recent government-wide email believed to have Musk’s influence offered federal employees the option to resign, reflecting the chaotic restructuring seen during Musk’s Twitter takeover.

From Advisory Role to Direct Control

Initially, DOGE was created by Donald Trump as an external advisory group meant to recommend federal spending cuts. However, this purpose shifted significantly after Trump’s inauguration. He signed an executive order renaming the U.S. Digital Service to the U.S. DOGE Service, effectively embedding Musk within the federal government.

Reports suggest Musk now has an office in the West Wing of the White House and has even been seen sleeping at the DOGE headquarters, showcasing his hands-on management style.

A Growing Concern Over Federal Autonomy

This unprecedented influence raises serious concerns about federal agency independence and potential conflicts of interest. Musk’s deep involvement in government operations blurs the lines between private sector influence and public governance.

Further updates are expected as more details emerge regarding Musk’s role and the broader implications for U.S. governmental operations.

This article has been updated to include new reports from Wired confirming that Elon Musk’s staff have also infiltrated the General Services Administration.

Read More: Italy Bans DeepSeek But Banning AI Model is Harder Than You Think

Trump vs. Biden: The AI Showdown Reshaping America’s Tech Future

Is Trump trying to make the U.S. the global leader in Artificial Intelligence?

His executive orders scream his vision: Trump is making a strategic move to create policies that position the U.S. as the top innovator in AI, ensuring it stays ahead of global competitors like China and the European Union.

What was Biden’s thought process regarding AI?

National security, public health, and safety above all: that was the thrust of the AI executive order Biden signed in 2023.

Biden’s administration was focused on AI regulation and risk management.

Developers of powerful AI systems were required to share safety test results with the government before releasing the technology, a safeguard intended to prevent misuse and harm.

What is Trump up to now?

New AI Executive Order:

Trump signed an executive order that mandates the creation of an Artificial Intelligence Action Plan. The plan must be completed within 180 days and aims to:

Strengthen America’s position as the global leader in AI innovation.

Promote human flourishing through AI technology that improves the quality of life.

Enhance economic competitiveness, ensuring the U.S. benefits financially from its AI advancements.

Safeguard national security, using AI to stay ahead of U.S. adversaries.

Revoking Biden’s Policies:

Trump has moved to dismantle Biden-era AI policies.

He views Biden’s regulations as obstacles to innovation, arguing that they create unnecessary red tape for developers and companies, and his order directs agencies to replace Biden’s rules with the newly created policy.

Trump and Biden have contrasting approaches towards AI.

Biden’s policies leaned toward caution and regulation, focusing on the risks of AI and ensuring responsible development.

Trump’s approach prioritizes rapid development and deployment, aiming to reduce regulatory barriers that might slow innovation.

What Does This Mean?

  • AI Action Plan Goals:
    The action plan will provide a comprehensive strategy for AI innovation, likely involving:
    • Investment in AI research and development.
    • Policies to attract AI talent and companies to the U.S.
    • Collaboration between the government and private sector to accelerate progress.
  • Regulatory Changes:
    • Trump has eliminated the requirement for AI developers to submit safety test results for high-risk systems to the government, a change critics warn could allow harmful or untested AI technologies to reach the market.
    • This deregulation could make the U.S. a more attractive hub for AI companies but raises concerns about ethical and safety risks.
  • Global AI Competition:
    • By removing regulations and promoting innovation, Trump hopes to ensure the U.S. stays ahead in this competition.

Key Takeaways

  • For AI Developers:
    • Trump’s policies favour innovation by reducing oversight and regulation, which could accelerate the development and deployment of AI technologies.
  • For Critics:
    • The removal of safety requirements raises concerns about potential risks to privacy, security, and ethical standards.
  • For the U.S. Economy:
    • Could boost America’s AI industry, create jobs, and position the U.S. as a leader in the global tech economy.

Will Trump’s bet on innovation and reduced regulation over caution and oversight prove a historic success or a technological disaster?

Read More: Revolutionizing Application Security and Network Management through F5 AI Assistant

Netflix Raises Subscription Prices to Maintain Its Top Streaming Service Position

Rise in Subscription Prices:

According to the company’s most recent reports, Netflix subscription prices in the US, Canada, Portugal, and Argentina are going up. The ad-supported plan will increase from $6.99 to $7.99 per month, the Standard ad-free plan will jump from $15.49 to $17.99 per month, and the Premium plan will rise from $22.99 to $24.99 per month. The price hikes will go into effect during subscribers’ next billing cycle, according to Shengjie Zhou.
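
In percentage terms, a quick calculation from the prices above shows the Standard plan taking the largest relative jump:

```python
# Percentage increases implied by the new US prices reported above
# (old monthly price -> new monthly price).
plans = {
    "Ad-supported": (6.99, 7.99),
    "Standard":     (15.49, 17.99),
    "Premium":      (22.99, 24.99),
}
for name, (old, new) in plans.items():
    print(f"{name}: +{(new - old) / old:.1%}")
# Ad-supported: +14.3%, Standard: +16.1%, Premium: +8.7%
```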

Room for Growth:

This is the first price increase for the ad-supported plan since its launch in 2022. Netflix says the price hikes are necessary to finance programming and provide a better experience for its customers. And despite massive subscriber growth, the company believes there is room for more: it added 19 million new subscribers in the last quarter, bringing its global customer base to over 300 million, and for the first time reported annual operating income above $10 billion. Even with streaming’s popularity today, Netflix says it still accounts for less than 10% of TV viewing in the countries where it operates, leaving plenty of room to expand.

Strategizing with Strong Plan and Content:

Along with the price hike, Netflix announced a new Extra Member with Ads option, which will allow those on the ad-supported plan to add someone outside their household to their subscription. Its content lineup remains strong, with new seasons of Squid Game and the League of Legends spinoff Arcane. Its approach to live content has also grown more aggressive in recent weeks, moving from “sports-adjacent” events, like a golf tournament that paired PGA players with Formula One drivers, to NFL games featuring performances from Beyoncé and Mariah Carey. Netflix aims to maintain its position as the top streaming service through profitability, content investment, and new plans.

Read More: OpenAI Gains More Flexibility as Microsoft Backs $500B Stargate Initiative

Could TikTok Ever Be Banned in the UK too?

TikTok has grown into one of the most sought-after applications of recent years, offering millions of people in the UK a platform to share and view short video clips every day. However, nagging worries about national security and data privacy have led many governments to examine the app more closely. The US has gone as far as moving to ban it, and questions are now emerging about whether the UK could do the same.

Growing Concerns About TikTok

TikTok is a subsidiary of the Chinese giant ByteDance, and critics have repeatedly accused its data collection practices of putting user information in jeopardy of Chinese government access. Although TikTok refutes such claims, citing extensive data security measures, Western scepticism abounds. In the UK, Members of Parliament and cybersecurity experts have warned that TikTok’s data retention policies could be harmful from a national security standpoint. Since March 2023, the UK government has banned the installation of TikTok on government devices over the risk of sensitive data being misused, a restriction similar to those imposed by the US and European countries.

The Case for a UK Ban

For a ban to emerge, there would likely need to be pressure on the UK to align with allies such as the United States, or growing discontent over how accustomed many young people have become to TikTok. Critics point out that the app stores a great deal of personal information about its users and can also serve as a platform for disseminating misinformation, its algorithms shaping content and, with it, public perception. There is also the broader subject of cybersecurity: strains between China and the West have been increasing, especially geopolitically, and governments are growing warier of foreign-owned apps and platforms. A ban could serve as a protective shield for citizens and national interests.

Challenges of Implementing a Ban

For all the talk of a TikTok ban in the UK, numerous obstacles stand in its way. Young people make up the largest share of the app’s users in the country, and beyond entertainment, TikTok has become central to education and even small-business marketing. Implementing a ban would provoke fierce protests from users and creators, many of whom rely on TikTok as a source of revenue. It would also raise questions about freedom of speech and the government’s role in supervising technology platforms.

What TikTok Does to Keep Operating

The company has been busy distancing itself from its Chinese heritage to address these concerns. Under its “Project Clover” campaign, a commitment to reinforcing data security and transparency, the company has agreed to host all European users’ data in Ireland and Norway. It has also opened itself to third-party audits to reassure governments and the public.

Whether all this will be enough for the company to avoid a ban is hard to say, but it speaks volumes about how seriously the company takes the growing threat to its operations in Western markets.


Could the UK Really Ban TikTok?

So far, the UK has given no indication of any immediate plan to ban TikTok. A ban would likely depend on accumulating proof of foul play or on growing geopolitical tensions. For the moment, TikTok keeps working as one of the most coveted apps in the UK, though its future is far from assured. As the government performs its careful balancing act between national security concerns and the rights of citizens, whether it would ever actually ban TikTok remains an open question.

Read More: Meta Announces CapCut-like Video Editing App Called Edits

Apple’s Store App Launch in India

Apple’s Customized Features:

Apple’s recently launched Apple Store app in India is a significant expansion of its reach, where customers can enjoy individualized shopping experiences. The app can be downloaded and offers personalized shopping suggestions for each item from the newly launched App Store in India. The app allows Indian users to customize their Mac with enhanced chips, additional memory or storage along with providing customers the advantage of arranging delivery or pickup of their purchased items.

Expansion Strategy in India:

The company’s first physical stores were opened in Mumbai and Delhi in 2023, the plan is to expand into Bengaluru, Pune, and other parts of Mumbai. This move is part of Apple’s overall strategy to build a strong presence in India. Karen Rasmussen, Apple’s Head of Retail Online, explained that contributing to our existing customers in India through innovative retail solutions and efforts to expand our reach is the main purpose and upcoming move of the company.

Personalized Experiences:

Beyond shopping, the app offers post-purchase assistance, with Apple experts available to set up online support sessions. Apple also provides trade-in and financing options, free training, and free engraving in eight languages on devices such as AirPods and iPads. India, the world's second-largest smartphone market, has also become a focal point of Apple's push to expand its manufacturing operations beyond China.

Read More: Apple’s Unsolved Problems with iPhone Alarms

UNSC Meeting Aims to Prevent Abuse of Commercial Spyware

Sustaining World Peace:

For the first time, the United Nations Security Council has taken up the global threat posed by commercial spyware, software often used by governments and private entities for surveillance. The purpose of the meeting was to call for strict regulation and to address the impact of spyware abuse on international peace and security. The United States, along with 15 other countries, convened the session to examine how spyware affects international peace, security, and human rights.

Spywares’ Atrocities Causing Global Concerns:

Among the participants, countries including France, South Korea, and the UK agreed on the need for strict regulation to curb spyware misuse. Russia and China, however, dismissed those concerns and pointed to broader cyber threats as the more pressing issue; the meeting remained informal, producing no concrete proposals. John Scott-Railton, a researcher at Citizen Lab who has investigated spyware abuses since 2012, warned of a "secretive global ecosystem" of spyware developers and brokers fuelling exploitation and abuse. He singled out Europe, and Barcelona in particular, as a hub for spyware companies, describing the continent as "an epicentre of spyware abuses," and stressed the need for better training for homeland security.

Digging deep into Scandals: 

Spyware scandals in Greece and Poland involving firms such as NSO Group and Intellexa took centre stage in the discussion. Greece pointed to its 2022 legislation banning the sale of spyware, while Poland highlighted the need for local legislative measures to increase oversight of its security and intelligence services, acknowledging that while there may be no justification for spyware, it can still be used legally. Russia, for its part, accused the U.S. of having built a system of genuinely global surveillance and of illegal interference in citizens' private lives, citing Edward Snowden's revelations about the NSA. China argued that state-built cyber weapons, such as the Stuxnet malware created as part of a U.S.-Israeli effort to undermine Iran's nuclear program, pose greater risks than commercial spyware, with its spokesperson suggesting that the focus on commercial spyware overshadows more harmful activity by governments themselves.

click here to read: xAI Testing Standalone iOS App for Grok Chatbot

Claiming Accountability:

Facing criticism from Russia, the U.S. pointed to steps it has already taken to limit the damage. Under President Biden, the administration imposed sanctions and travel bans on individuals tied to spyware abuses in an effort to contain the effects of commercial spyware. Some in the field, however, worry that such measures could also affect people with legitimate roles in cybersecurity.

OpenAI Unveils Comprehensive “Economic Blueprint” for AI Policy

OpenAI has introduced an ambitious “economic blueprint,” outlining policies aimed at fostering AI innovation and collaboration with the U.S. government and its allies. This dynamic document emphasizes the need for significant investment in chips, data infrastructure, energy, and talent to maintain U.S. leadership in AI while safeguarding national security.

Chris Lehane, OpenAI’s VP of global affairs, highlighted the urgency of this initiative, stating, “The U.S. government has the opportunity to strengthen its global leadership in innovation by prioritizing AI development.” Lehane underscored the risk of falling behind as some nations dismiss AI’s economic potential.

The blueprint critiques the fragmented state of AI regulation in the U.S., where nearly 700 AI-related bills introduced in 2024 alone present inconsistencies across states. OpenAI views this patchwork approach as inadequate for addressing AI’s rapid advancements.

CEO Sam Altman has also expressed concerns about current federal efforts, such as the CHIPS Act, which he argues has not delivered on its promises to revitalize the U.S. semiconductor industry. Altman advocates for streamlined processes to enable faster development of critical infrastructure like power plants and data centers, which are essential for AI’s growth.

Key Recommendations

The blueprint proposes several measures to strengthen AI infrastructure and policy:

  • Infrastructure Investment: Federal spending on energy and data transmission should be “dramatically” increased, with support for renewable energy and nuclear power to meet the growing demands of AI development.
  • National Security: OpenAI urges the government to develop best practices for AI deployment, strengthen collaboration with security agencies, and establish export controls to share AI models with allies while restricting access to adversaries.
  • Regulatory Standards: The blueprint calls for internationally recognized voluntary standards to ensure AI safety and security without stifling innovation.

OpenAI also touches on the contentious issue of copyright, advocating for the use of publicly available information, including copyrighted content, to train AI models. It argues that restricting such practices in the U.S. would disadvantage domestic AI development while benefiting foreign competitors.

Lobbying for Policy Impact

OpenAI’s growing influence in shaping AI policy is evident in its lobbying efforts, which tripled in the first half of last year, reaching $800,000. The company has also recruited former government leaders to bolster its policy expertise, including ex-NSA chief Paul Nakasone and former Commerce Department economist Aaron Chatterji.

As OpenAI continues to champion AI-friendly legislation and oppose restrictive policies, its blueprint reflects a clear intention to lead the charge in shaping U.S. AI strategy. Whether these proposals will translate into actionable policies remains to be seen, but OpenAI’s proactive stance signals its commitment to shaping the future of AI regulation.

Read More: OpenAI Faces Loss Due to Excessive ChatGPT Pro Usage, Says CEO

Microsoft Sets Up CoreAI Division for AI Development

Establishment of CoreAI

Microsoft has announced a new engineering division, CoreAI – Platform and Tools, to accelerate AI infrastructure and software development. The new group embodies a renewed, company-wide focus on AI across all of Microsoft's platforms.

Headship and Structure

Jay Parikh, a former VP at Meta with deep experience in data centre operations and technical infrastructure, will head the division. Parikh, who recently joined Microsoft, will report directly to Satya Nadella. CoreAI brings together teams from the Microsoft Developer Division and the AI Platform group, along with portions of the Office of the CTO.

Microsoft’s AI Vision

In an internal memo, Nadella described the company's push to build "model forward" applications that reshape entire categories of software. Above all, the move signals Microsoft's determination to stay at the forefront of AI innovation.

Strategic Impact

CoreAI positions Microsoft to bring together its AI tools with its strengths in cloud computing and advanced applications. The restructuring ensures that leadership in AI remains a top priority for the company as a whole, helping it keep pace with the field's rapid development.

click here to read: Microsoft Files Suit Against Hundreds for Abuse of Azure OpenAI Services

Microsoft Files Suit Against Hundreds for Abuse of Azure OpenAI Services

Microsoft has filed a lawsuit over what it describes as abuse of its AI services, alleging that a group of actors stole credentials and pierced critical safety measures on its platform. The ten Doe defendants, unnamed in the suit, allegedly stole customers' credentials to gain unauthorized access to the Azure OpenAI Service.

API Key Theft and Hacking-as-a-Service

According to Microsoft, the defendants systematically stole API keys, the fundamental means of authenticating to its AI services. The stolen credentials allegedly underpinned a "hacking-as-a-service" operation built around De3u, a tool that let users generate images with OpenAI's DALL-E through stolen keys without writing any code.
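
To see why stolen keys matter so much, it helps to recall how authentication to this kind of service typically works: the API key travels with every request, and whoever holds the key is indistinguishable from the legitimate account holder, with usage billed to that account. Here is a minimal illustrative sketch in Python; the resource name, deployment name, and API version below are placeholders, not details from the suit:

```python
# Minimal sketch of API-key authentication against an Azure OpenAI-style
# image-generation endpoint. All names here are placeholders.
import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"  # hypothetical resource
deployment = "dall-e-3"                              # hypothetical deployment
api_key = "..."                                      # the only credential sent

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/images/generations",
    params={"api-version": "2024-02-01"},
    headers={"api-key": api_key},  # whoever holds this key "is" the customer
    json={"prompt": "a watercolor fox", "n": 1, "size": "1024x1024"},
)
print(response.status_code)
```

Because the key is the sole credential in the request, a leaked key hands an attacker the victim's full access and quota, billed to the victim's account.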

The company’s complaint also mentioned that De3u is preferable as it can bypass Azure OpenAI’s content moderation system hence making it possible to generate malicious and illegal content. Microsoft is alleging that both bright and dark worlds have been breached by the acts of the defendants into violations of various statutes such as the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, as well as federal racketeering laws.

Microsoft’s Counteroffensive and Legal Action

Microsoft said it discovered the misuse of its API keys in July 2024 and took steps to remedy the situation. The company recently received court permission to take control of a domain central to the defendants' operation, allowing it to gather evidence and dismantle the remaining technical infrastructure used in the scheme.

Microsoft has also had the De3u repository removed from GitHub and has instituted new security measures to safeguard its Azure OpenAI services.

Moving Forward

In addition to seeking damages, Microsoft is pursuing injunctive relief to prevent further misuse of its services. The tech giant emphasized its commitment to ensuring the integrity of its platforms and protecting its customers from malicious activities.

This lawsuit highlights the increasing challenges tech companies face in securing their AI platforms as they become more widely adopted across industries.

Nvidia CEO Claims His AI Chips Are Improving Faster Than Moore’s Law

Nvidia’s advanced AI chip:

Nvidia's latest AI chips are advancing faster than Moore's Law, according to CEO Jensen Huang. In an interview with TechCrunch on Tuesday, Huang claimed that the performance of his company's AI chips is now improving faster than the principle that drove the technology sector for half a century: Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, effectively doubling computing power. While Moore's Law long fuelled falling costs and rapid technological advancement, its pace has slowed in recent years. Nvidia's newest data centre superchip, by contrast, can run AI inference workloads more than 30 times faster than its predecessor.
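
As a back-of-the-envelope illustration of what that pace implies (my arithmetic, using the figures quoted in this article rather than any official Nvidia benchmark): Moore's Law doubling, at roughly every two years, yields about a 32x gain over a decade, while Huang's claim of a 1,000x gain in ten years, cited later in this piece, works out to a doubling roughly every year:

```python
# Back-of-the-envelope comparison; uses the figures quoted in this
# article, not any official Nvidia benchmark.
import math

years = 10
moore_gain = 2 ** (years / 2)        # doubling every ~2 years -> ~32x
claimed_gain = 1000                  # Huang's "1000x in 10 years" claim
doubling_time = years / math.log2(claimed_gain)  # ~1.0 years per doubling

print(f"Moore's Law over {years} years: ~{moore_gain:.0f}x")
print(f"A 1000x gain in {years} years implies doubling every ~{doubling_time:.1f} years")
```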

Innovation and Cost Benefits:

According to Huang, the key to progressing faster than Moore's Law is innovating across the entire stack at once: chip architecture, system integration, algorithms, and the libraries and tools developers rely on. Improving all of these layers together compounds the gains. Nvidia's AI chips are used by AI labs such as Google, OpenAI, and Anthropic to train and run AI models, so chip improvements translate over time into more capable models. Early AI models were expensive to run, but costs have fallen considerably as new hardware has been steadily adopted.

Super Moore’s Law and AI progression:

Huang firmly believes Nvidia's chips have transcended Moore's Law. On a podcast last year, he described what he calls "Super Moore's Law": three scaling principles now guiding AI's evolution: pre-training, post-training, and test-time compute. In pre-training, models learn patterns from large datasets; post-training refines a model's responses through methods such as human feedback; and test-time compute gives a model additional computation at inference time to work through complex problems. Just as Moore's Law drove down the cost of computation and thereby increased its value, Huang argues, these scaling laws will drive AI's progression.
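
Of the three, test-time compute is the easiest to sketch concretely. Below is a minimal best-of-n illustration in Python; `generate` and `score` are stand-in stubs for a model call and a verifier, not any specific Nvidia or OpenAI API:

```python
import random

def generate(prompt: str) -> str:
    # Stub standing in for one model call returning a candidate answer.
    return f"candidate-{random.randint(0, 9)}"

def score(prompt: str, answer: str) -> float:
    # Stub standing in for a verifier or reward model rating an answer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend extra compute at inference time: sample n candidate
    answers and return the one the scorer rates highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 17 * 24?"))
```

Raising n trades more inference-time compute for a better chance of a good answer, which is exactly why faster, cheaper inference chips matter for this scaling axis.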

AI development:

Nvidia’s new data centre superchip GB200 NVL72 was presented by Huang, he mentioned that the chip works 30 to 40 times faster in its updated version and it is designed for AI inference tasks. This would help lower the expense of highly running models such as one used by OpenAI o3, which is extremely costly. Huang’s goal is to create chips that perform better and are cost-friendly, furthermore, he claims that his AI chip is 1000 times better than what he made 10 years ago. Ensuring quality and accessibility to the developers of the future is a primary goal and perhaps a future outcome.

Era of AI Evolution: Nvidia Unveils Its Personal AI Supercomputer "Project Digits"

Project Digits and Progression of AI:   

Nvidia introduced its latest device, billed as a 'personal AI supercomputer,' at CES 2025 in Las Vegas. Dubbed Project Digits, the machine packs high-calibre AI tooling into a desktop form factor. It is built on Nvidia's Grace Blackwell hardware, aimed at boosting the productivity of researchers, data scientists, and educators alike.

An Efficient Desktop AI Workstation:

Project Digits has implications for businesses across industries. Speaking about its growth opportunities, Nvidia CEO Jensen Huang also remarked on the business potential of digital twins, with creators of virtual worlds able to bring physical objects into the metaverse. On accessibility, Huang described the machine as "a cloud computing platform that sits on your desk; it is even a workstation if you like it to be." In effect, the device shifts AI accessibility from a high-end, cloud-based technology to an efficient desktop AI workstation.

The Future of AI:

One of Project Digits' strongest highlights is the GB10 Grace Blackwell Superchip, developed in collaboration with MediaTek. Delivering up to one petaflop of AI performance, the hardware lets users rapidly build, fine-tune, and run AI models. Its headline specifications are:

  • Works alongside Windows and Mac PCs
  • Up to 4TB of flash storage
  • Runs AI models of up to 200 billion parameters, expandable to 405 billion parameters by linking two units
  • 128GB of unified memory

Many of the large AI models Project Digits targets are still in active development and demand substantial computing capability, precisely the kind of workload this machine is built to handle.
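
Those figures pass a rough sanity check (my arithmetic, assuming 4-bit weight precision, which is an assumption on my part rather than a published spec): 200 billion parameters at 4 bits each is about 100 GB of weights, fitting within one unit's 128 GB of unified memory, while 405 billion parameters needs roughly 202 GB, hence the dual-unit link:

```python
# Rough sanity check of the parameter/memory figures; assumes 4-bit
# weights, which is my assumption rather than a published spec.
def weight_memory_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

for params in (200, 405):
    print(f"{params}B parameters @ 4-bit: ~{weight_memory_gb(params):.0f} GB of weights")
# 200B (~100 GB) fits in one 128 GB unit; 405B (~202 GB) needs two linked units.
```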

High Performance Comes at a Cost:

Building a supercomputer is one thing; keeping its price down is another. Pricing starts at around $3,000, which positions the machine for students and professionals working with AI rather than casual users. Huang underscored its transformative potential, saying he wants to put a supercomputer on the desk of every data scientist, AI researcher, and student, a capability he expects to become essential in an increasingly AI-driven era.

Market Impact:

Project Digits runs Nvidia's Linux-based DGX OS and will be available from May 2025 through Nvidia and its top distribution partners. The aim is to give up-and-coming AI developers and researchers a dedicated platform on which to innovate.

Conclusion:

Nvidia's Project Digits AI supercomputer marks the start of an advanced technological era: by bringing high-performance AI experiences to everyone, it blurs the line between ordinary digital devices and supercomputers and sets a new standard for the AI domain.

Microsoft Invests $3 Billion in India to Lead AI Revolution

Bengaluru, January 7, 2025: Microsoft CEO Satya Nadella announced a $3 billion investment in India over the next two years to advance the country’s cloud and AI infrastructure. This includes building new data centres, expanding AI capabilities, and skilling 10 million people by 2030. The announcement underscores Microsoft’s commitment to supporting India’s vision of becoming a global AI leader.

Nadella stated at the Microsoft AI Tour in Bengaluru: "India is unlocking incredible opportunities with AI. This investment ensures broad access to AI innovation, benefiting people and organizations nationwide."

As part of this initiative, Microsoft Research Lab has launched an AI Innovation Network to bridge the gap between research and practical applications. The company is also partnering with SaaSBoomi to strengthen India's AI and SaaS ecosystem, targeting over 5,000 startups and 10,000 entrepreneurs.

Transforming India into an AI-First Nation

Satya Nadella highlighted India's growing leadership in the global AI environment, stating:
"India is rapidly unlocking opportunities with AI, and this investment ensures the benefits are accessible to everyone, fostering both innovation and inclusivity."

Puneet Chandok, President of Microsoft India and South Asia, echoed these sentiments, adding:
"From classrooms to boardrooms, Microsoft is making AI accessible to communities across India. We are committed to empowering the nation with the resources to excel globally in the AI era."

Advancing India’s AI and Skills Ecosystem

India is emerging as a global leader in AI skills, with professionals adopting AI expertise faster than ever. According to LinkedIn data:

  • Indian users spent 50% more time on learning weekly compared to global averages.
  • There has been a 122% year-over-year growth in Indian professionals adding AI skills to their profiles, surpassing the global rate of 71%.

Microsoft’s initiatives will accelerate this trend, equipping millions of Indians with the skills needed to thrive in a rapidly evolving job market.

Training and Sustainability

Through its ADVANTA(I)GE India program, Microsoft will train millions of Indians in AI skills to prepare them for the evolving job market. The company also emphasized sustainability, reaffirming its goals of becoming carbon-negative, water-positive, and zero waste by 2030.

This significant investment cements Microsoft’s role in India’s journey toward becoming an AI-first nation while advancing responsible and sustainable AI practices.

Building AI and Cloud Infrastructure

  • Microsoft will establish new data centres to expand its cloud and AI infrastructure.
  • With three operational data centre regions in India, Microsoft plans to launch a fourth in 2026.
  • This initiative will cater to the growing needs of India’s burgeoning AI startups and research communities, fostering scalable AI innovation.

Training 10 Million Indians in AI Skills

  • As part of the second edition of the ADVANTA(I)GE India program, Microsoft aims to skill 10 million people by 2030 in AI competencies.
  • The training initiative will support Indian professionals in adapting to the evolving nature of jobs, focusing on empowering students, entrepreneurs, and enterprises with future-ready AI expertise.

Launching the AI Innovation Network

Microsoft Research Lab has introduced the AI Innovation Network, which bridges the gap between research and real-world applications, enabling businesses to unlock the full potential of AI.

Empowering India’s SaaS Ecosystem

In collaboration with SaaSBoomi, Microsoft is working to advance India's AI and SaaS ecosystem toward a trillion-dollar economy. The partnership is expected to reach over 5,000 startups and 10,000 entrepreneurs, significantly boosting innovation in the region.

A New Era of AI-Driven Growth

With this massive investment, Microsoft is not just fueling AI innovation but also transforming India into a hub for AI-driven solutions. By empowering startups, training professionals, and fostering sustainability, the company is creating an ecosystem where technology catalyzes growth and inclusivity.

click here to read: Self-Driving Cars Take Center Stage at CES 2025

OpenAI Faces Loss Due to Excessive ChatGPT Pro Usage, Says CEO

OpenAI Runs into Financial Trouble

Sam Altman, the CEO of OpenAI, has admitted that unexpectedly heavy usage of ChatGPT Pro means the company is losing money on its $200-per-month plan. In a recent post on X, Altman stated, "I determined the cost and figured that we would earn money." The admission points to a lack of long-term pricing strategy: the ChatGPT-powered product did not launch with a fully worked-out pricing plan, and this is not the first time OpenAI has set its prices quickly without thinking them through in detail.

Enchanting Features

ChatGPT Pro launched in December 2024, offering subscribers features such as unlimited access to the o1 reasoning model, including o1 pro mode, and access to Sora, OpenAI's video-generation tool.

Is ChatGPT Pro worth $2,400 a year?

ChatGPT Pro has been criticized mainly for being too expensive for a mainstream audience. At roughly $2,400 a year, it is a hefty price, especially when the value of o1 pro mode has yet to be clearly established. According to Altman's posts, those who did choose the plan are heavy users consuming the service to its limits. In a recent interview with Bloomberg, Altman acknowledged that ChatGPT's original pricing was set without any research: the team considered $20 and $42 as the two possible prices and went with $20 because $42 felt too high. It was, he said, a snap decision made in late December 2022 or early January, not a carefully thought-out plan.
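
The underlying problem is simple to model: a flat subscription caps revenue per user while unlimited usage leaves costs uncapped. A toy break-even sketch follows; every number in it is a made-up illustration, not an OpenAI figure:

```python
# Toy flat-rate break-even model; all numbers are hypothetical.
FLAT_FEE = 200.0        # $/month per Pro subscriber
COST_PER_QUERY = 0.50   # assumed average compute cost per query

def monthly_margin(queries: int) -> float:
    """Profit (or loss) on one subscriber at a given usage level."""
    return FLAT_FEE - COST_PER_QUERY * queries

for queries in (100, 400, 1000):
    print(f"{queries:>5} queries/month -> margin ${monthly_margin(queries):+.2f}")
# Any user above FLAT_FEE / COST_PER_QUERY = 400 queries costs more than they pay.
```

With enough subscribers above that break-even line, the plan as a whole loses money, which matches Altman's description of Pro's heaviest users.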

OpenAI’s Lofty Revenue Targets

Despite raising around $20 billion since its founding, OpenAI is reportedly losing around $5 billion a year on revenue of $3.7 billion. To become profitable, the company is believed to be planning subscription-fee increases, and some services may be transitioned to usage-based pricing. Longer term, OpenAI aims to reach $100 billion in revenue by 2029.