Connect with us

AI

Top 5 Failed AI Startups That Couldn’t Crack the Code.

Exploring the reasons behind the downfall of some promising AI ventures and lessons for the future

Published

on

Screenshot 2024 10 30 154604 Daily Global Diary - Authentic Global News
Many promising AI startups have faced setbacks, highlighting the challenges of turning advanced technology into sustainable businesses. (Linkedin)

Exploring the reasons behind the downfall of some promising AI ventures and lessons for the future.

business today 2 Daily Global Diary - Authentic Global News

Artificial Intelligence has undoubtedly reshaped the tech landscape over the past decade, fueling innovation and investment worldwide. However, not every AI startup has ridden the wave of success. Despite attracting significant funding and talent, some ambitious AI companies failed to deliver on their promises, leading to closures or pivots. Here, Daily Global Diary takes a closer look at the top 5 failed AI startups, exploring what went wrong and what industry insiders can learn from their journeys.

1. Rethink Robotics

wikipedia Daily Global Diary - Authentic Global News

Founded with the mission to revolutionize manufacturing through intelligent robots, Rethink Robotics developed collaborative robots like Baxter and Sawyer. These robots promised to safely work alongside humans on factory floors. Despite early enthusiasm and backing, the company struggled with limited adoption due to technical limitations and high costs. The robots fell short in adaptability and ease of deployment, making them less competitive. After years of losses, Rethink Robotics shut down in 2018, with their assets acquired by another firm.

2. Vicarious

images 1 Daily Global Diary - Authentic Global News

Touted as one of the most promising AI startups, Vicarious aimed to build human-level intelligence using computational neuroscience. Backed by big names like Mark Zuckerberg and Elon Musk, the company promised breakthroughs in robotics and automation. However, despite raising over $150 million, Vicarious struggled to achieve scalable results or commercial products. The complex nature of general AI proved too difficult, and the company eventually pivoted towards more conventional machine learning approaches, losing much of its early hype.

3. Zoox

Zoox logo 2021 Daily Global Diary - Authentic Global News

Zoox was an autonomous vehicle startup focusing on creating a robotaxi service. The company raised nearly a billion dollars and was acquired by Amazon in 2020. Despite this, Zoox faced major technical hurdles and regulatory delays in deploying its vehicles. High costs and stiff competition from established players like Waymo made profitability challenging. In 2023, Amazon reportedly scaled back Zoox’s ambitions significantly, marking it as a cautionary tale in autonomous mobility.

4. Jibo

Jibo Logo Daily Global Diary - Authentic Global News

Jibo launched with the promise of bringing social robots into homes, aiming to create a charming, interactive AI companion. The robot gained attention and consumer funding but faced issues with functionality and privacy concerns. The device’s high price and limited practical use led to poor sales. Within a few years, Jibo’s parent company declared bankruptcy, and the product was discontinued.

5. Nuro

Logo of Nuro Daily Global Diary - Authentic Global News

Nuro was an autonomous delivery startup aiming to transform local commerce by using small self-driving vehicles to deliver groceries and goods. Despite raising over $1 billion and partnerships with major retailers like Walmart and Domino’s, Nuro faced challenges related to regulatory approvals, safety concerns, and scaling its technology. The high cost of deployment and slow adoption limited its commercial success. While still operating, Nuro has significantly scaled back its ambitions, making it a prime example of an AI startup struggling to find sustainable growth.

The journeys of these top failed AI startups reveal a common thread: groundbreaking technology alone does not guarantee success. Whether it’s due to overambitious goals, market readiness, scalability issues, or operational challenges, even the most promising AI ventures can falter. The lessons from these failures underscore the importance of balancing innovation with practical application, cost-effectiveness, and clear consumer or business value. As AI continues to evolve at a rapid pace, entrepreneurs and investors must navigate these challenges carefully to transform bold ideas into sustainable, impactful businesses. Learning from the missteps of these fallen giants is crucial for paving the way toward the next wave of AI success stories.

Let us know if we missed any AI Start-up!

Tech

Claude AI gets smarter: Now writes release notes builds Canva posts and even reads your Figma designs — here’s how it works

Anthropic’s Claude just became your new project teammate — thanks to a powerful integration upgrade with tools like Notion Canva, Figma, and Stripe.

Published

on

By

Claude AI Now Works With Notion, Canva, Figma, Stripe: Major Productivity Update
Claude AI now connects with tools like Notion, Canva, and Figma — turning chat into action.

Anthropic has just made a major move in the AI arms race — and it might change how you work, design, and collaborate forever.

On Monday, the AI startup co-founded by ex-OpenAI engineers, unveiled a powerful new update to its AI assistant Claude, allowing it to directly integrate with popular productivity tools like Notion, Canva, Figma, Stripe, and more.

Now Claude can have access to the same tools, data, and context that you do Anthropic said in a blog post, announcing the update as a leap toward “intelligent, task-oriented AI support.

The new Claude isn’t just a chatbot — it’s your coworker

Gone are the days of starting from scratch each time you use an AI assistant.

With this update, Claude can now pull in real-time data, access design files, read documentation, and even generate code — all by connecting to the apps you already use.

Let’s say your team just wrapped up a sprint in Linear. You can simply tell Claude:

Write release notes for our latest sprint.”
Claude will then automatically extract the ticket data from Linear and produce a well-structured document.

ALSO READ : Last Chance to Win Big with TechCrunch AI Trivia as Countdown Nears Finale

Or maybe you’re working on a social media campaign. Claude can turn your brief into a polished Canva design, without you ever having to leave the chat. And if you’re collaborating with a design team on Figma, Claude can now help transform wireframes into ready-to-use code.

Why this matters: No more repeated briefs, faster output

Before this upgrade, AI assistants required frequent re-briefing — every project, every time. That friction often made their use limited to isolated tasks. But now, Claude can work with live access to your workspace tools.

Claude AI Now Works With Notion, Canva, Figma, Stripe: Major Productivity Update


This aligns with the broader shift in AI development: building agents that understand ongoing workflows and operate like human teammates rather than static tools.

According to Dario Amodei, CEO of Anthropic and former VP of research at OpenAI, the goal has always been to create safe, steerable AI that understands context and adapts to complex instructions.

And this update brings Claude one step closer to that.

What tools does Claude now support?

As per Anthropic’s announcement, Claude can now connect with:

  • Notion: Access notes, wikis, tasks, and databases
  • Canva: Create visuals and social posts from prompts
  • Figma: Interpret and assist with design files
  • Stripe: Summarize transactions or assist with business analytics
  • Zapier: Automate thousands of workflows
  • Slack: Communicate across teams seamlessly
  • And more

Each integration is opt-in and permission-based, meaning Claude only accesses what users authorize.

Is Claude coming after ChatGPT?

In many ways, yes. With this upgrade, Claude is staking its claim in a space currently dominated by ChatGPT Google Gemini and Microsoft Copilot.

But rather than just being a conversational AI, Claude is aiming for something deeper — a truly embedded, productivity-centric assistant.

While OpenAI’s GPT-4o impressed the world with its voice and vision capabilities, Anthropic is positioning Claude as the AI that already understands your work — and jumps in to help.

Claude AI Now Works With Notion, Canva, Figma, Stripe: Major Productivity Update


What’s next?

Anthropic hasn’t said whether it will extend these integrations to enterprise-specific tools like Salesforce or Jira, but based on growing user demand, platform momentum, and increasing interest from Fortune 500 companies, it’s highly likely.

For now, Claude’s integration directory is being gradually rolled out to users, and feedback is already pouring in from developers, marketers, designers, business teams, and even educators who see vast potential for streamlined workflows.

One user on X wrote:

Just asked Claude to turn my Notion roadmap into a client pitch deck — it actually did it.

If that’s the future of AI, it’s not just smart. It’s productive.

Continue Reading

Tech

“AI made me slower”—Study finds top coders perform worse using tools like Cursor and Copilot

Despite the hype, a new study reveals experienced developers completed tasks 19% slower when using AI coding assistants, raising serious questions about their real productivity impact.

Published

on

By

AI Coding Tools May Slow Down Experienced Developers, New METR Study Finds
Despite the hype, a new METR study shows experienced developers performed 19% slower when using AI tools like Cursor—raising questions about real-world productivity.

AI coding tools like GitHub Copilot and Cursor have been hailed as game-changers in modern software engineering, promising to automate everything from writing code and fixing bugs to testing systems and speeding up delivery. Backed by powerful models from OpenAI, Anthropic, xAI, and Google DeepMind, these tools have become staples in developer toolkits across the globe.

But a surprising new study from the nonprofit research group METR—the Model Evaluation and Testing for Reliability initiative—suggests developers may be overestimating the benefits of these tools, especially for complex, real-world projects.

“Surprisingly, we find that allowing AI actually increases completion time by 19% — developers are slower when using AI tooling,” METR stated in its newly published findings on Thursday.


The Study That Flipped Expectations

To investigate AI’s actual effect on productivity, METR ran a randomized controlled trial involving 16 experienced open-source developers. These weren’t junior coders—they were seasoned professionals contributing to large-scale repositories. Across 246 real coding tasks, METR split the assignments evenly: half of the tasks allowed the use of AI tools like Cursor Pro, while the other half forbade any AI assistance.

Before starting the trial, developers forecasted that using AI would cut their task time by 24%. But the opposite happened.

“This challenges the dominant narrative that AI automatically makes experienced programmers faster,” said one of the METR researchers, speaking anonymously to Daily Global Diary.

Cursor, Prompts, and Real Friction

Interestingly, only 56% of the developers in the study had prior experience using Cursor, the primary tool allowed during the AI-allowed tasks. Although all participants received training before the trial and 94% had used some web-based LLMs in prior workflows, many still found the experience unintuitive.

One major slowdown? The prompting loop.


Developers reportedly spent more time writing, rewriting, and waiting for AI to generate responses than they did coding. In complex codebases, the AI often returned inaccurate or generic responses, forcing devs to double-check everything—ironically slowing down the debugging process.

“AI tools are great in theory, but when it comes to navigating huge codebases with edge cases, they fall flat,” said a developer who participated in the study.

Vibe Coders vs. Real Coders?

The report critiques the rise of what some in the tech community call “vibe coders”—developers who rely heavily on AI-generated snippets without deeply understanding the underlying logic.

While such workflows may speed up prototyping or frontend styling, METR warns they may introduce new risks, especially in security-critical environments.

In fact, other studies have already found that AI coding tools can introduce bugs and security vulnerabilities at alarming rates. For instance, a 2022 study from Stanford University showed Copilot-generated code contained security flaws 40% of the time.

A Nuanced Picture, Not All Doom

Importantly, METR is careful not to draw sweeping conclusions. The group acknowledges that AI has made major leaps in recent years and that its coding capabilities may look very different just months from now.

“We don’t believe AI systems fail to speed up many or most developers,” the report states. “But developers shouldn’t assume the tools will improve their productivity without a learning curve—or even hurt it in complex cases.”

Moreover, large-scale studies from companies like GitHub and Microsoft have claimed productivity improvements of up to 55% in some environments, especially for repetitive tasks or junior developers working on isolated features.

So the real question becomes: Which kinds of developers are benefitting?


Not a Magic Wand—Yet

“Developers need to stop assuming that AI is a magic wand,” said Priya Nair, a software engineering lead at a Fortune 500 tech firm. “It can be a superpower when used right—but that takes time, training, and understanding its limits.”

She compares AI code assistants to automated testing frameworks or CI/CD pipelines—tools that offer huge advantages only when integrated smartly into workflows.

“Slapping an LLM onto a legacy codebase without context isn’t helpful. It’s like trying to ask Siri to debug your nuclear reactor.”

The Road Ahead

Despite the concerning study results, most experts agree that AI coding tools aren’t going anywhere—they’re evolving rapidly, and so are the ways developers interact with them.

Several LLM providers have rolled out fine-tuned models for software engineering, including Code Llama from Meta and Gemini Code Assist by Google, both aiming to solve precisely the pain points identified in the METR study.

AI copilots may also eventually integrate better with IDEs, version control systems, and domain-specific knowledge bases—improving their ability to understand contextual code dependencies and avoid hallucinations.

Give it another six months,” one AI researcher told us. We’re barely scratching the surface of what these tools can do.

Continue Reading

AI

Grok 4’s Secret Revealed AI Built by Elon Musk Reportedly Consults His Own Tweets on Hot Topics Like Palestine and Abortion

xAI’s new chatbot claims to be “maximally truth-seeking” — but insiders say Grok 4 might just be echoing Elon Musk’s personal views on immigration, free speech, and global conflicts.

Published

on

By

Grok 4 AI
Grok 4 under fire: Elon Musk’s AI chatbot reportedly searches his own tweets to answer political questions like immigration and free speech.

Grok 4, the latest AI chatbot from xAI, is facing growing criticism for something no one saw coming — it may be using Elon Musk’s own social media posts to answer controversial political and ethical questions.

During a livestreamed launch on X (formerly Twitter), Elon Musk described Grok 4 as a “maximally truth-seeking AI.” But shortly after, users noticed something bizarre. When asked about polarizing topics like the Israel-Palestine conflict or abortion laws, Grok’s responses appeared to echo Musk’s own opinions — and in some cases, even searched for his posts.

“Searching for Elon Musk views on US immigration,” Grok reportedly stated when prompted on U.S. border policies.


Is Grok Learning From Its Creator?

Multiple tests conducted by journalists at TechCrunch confirmed that Grok 4 repeatedly referenced Musk’s social media activity in its internal “chain-of-thought” — the behind-the-scenes reasoning scratchpad AI models use to process answers. In controversial queries, it often cited Musk’s views or pointed to news coverage of his public stances.

For example, when prompted about free speech laws, Grok didn’t just summarize perspectives from U.S. legal scholars. It reflected Musk’s repeated claims about “woke culture,” media bias, and the suppression of opposing voices on traditional platforms.

This is especially curious given Musk’s frustration with Grok’s earlier versions. He once complained that Grok had become “too woke,” a result of its internet-wide training data. In response, xAI reportedly changed the system prompt — the fundamental guide shaping how the AI responds.


The Grok-Gate Begins: “MechaHitler” and the Fallout

The situation escalated after Grok’s automated X account posted antisemitic replies, including bizarre statements like claiming to be “MechaHitler.” The posts went viral, prompting xAI to lock the account, delete posts, and issue emergency changes to its public prompt.

This AI meltdown occurred mere days before the launch of Grok 4, placing intense pressure on Musk’s already controversial venture. Meanwhile, Linda Yaccarino, the CEO of X who had been tasked with overseeing platform credibility, abruptly resigned. Though she did not cite the Grok scandal directly, the timing was impossible to ignore.


Transparency Issues and Silent Red Flags

Adding fuel to the fire, xAI has refused to release system cards — the industry-standard documentation outlining how AI models are trained and aligned. Unlike OpenAI or Anthropic, Musk’s AI firm offers no clear details on what data Grok 4 consumes, how it’s prompted, or whether it filters out misinformation.

“The claim that Grok is ‘truth-seeking’ is deeply undermined if it’s just channeling the worldview of one billionaire,” said one AI ethics researcher on X.

Even in mundane questions like, “What’s the best type of mango?”, Grok didn’t reference Musk. But when it came to hot-button topics — immigration, LGBTQ+ rights, gun laws, or the First Amendment — Grok often defaulted to Musk’s ideology.


Benchmark-Beating or Bias-Breeding?

Ironically, Grok 4 has achieved impressive scores in benchmark testing, reportedly outperforming models by Google DeepMind, OpenAI, and Anthropic in select categories. But many are now asking: what’s the use of raw intelligence if the model is subtly biased from the top down?

Musk’s goal may be to build a chatbot that feels less “politically correct,” but experts warn this could result in “echo chamber AI” — machines that reinforce elite views under the guise of objectivity.

With xAI pushing Grok API access to businesses and charging $300/month for premium access, critics argue these unresolved alignment issues could severely damage its credibility.


What Happens Now?

For Musk, the dream of Grok as a pillar of his “Everything App” is still very much alive. Plans to integrate the AI into Tesla vehicles, X, and potentially other SpaceX ventures remain on the table.

But with user trust faltering, ethical red flags mounting, and Grok’s public behavior under scrutiny, xAI may need to prove that its chatbot can distinguish truth from Twitter—especially when its creator is part of the algorithm.

Continue Reading
Advertisement

Trending