
Technology News

OpenAI reverses course after Hollywood backlash — Sam Altman promises “granular IP control” and potential revenue share for creators in Sora

Following criticism over the use of Hollywood characters and likenesses in its viral AI video app Sora, OpenAI CEO Sam Altman announced plans for stricter IP controls and a new revenue-sharing model for rightsholders.


OpenAI’s Sam Altman promises “granular IP control” and revenue share in Sora after Hollywood backlash

The explosive rise of Sora, OpenAI’s text-to-video generation app, has captured global attention for its astonishing realism and creativity. But as users began flooding social media with AI-generated clips featuring familiar Hollywood characters, the app also ignited an intense backlash from studios, actors, and copyright holders concerned about the unauthorized use of their intellectual property.


Now, Sam Altman, OpenAI’s CEO, is stepping in to address those concerns. In a late-night blog post on Friday, Altman announced that the company will introduce “more granular control” for rightsholders and is actively exploring a revenue-sharing program for creators whose IP appears in user-generated Sora videos.

“First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls,” Altman wrote. “We are hearing from a lot of rightsholders who are very excited for this new kind of ‘interactive fan fiction,’ but they want the ability to specify how their characters can be used — including not at all.”

From “opt-out” to “opt-in” — a major policy shift

The move represents a notable reversal from OpenAI’s earlier “opt-out” policy, which allowed characters, brands, and other copyrighted material to appear in user-generated videos unless the owners explicitly requested removal. Under the new rules, OpenAI will adopt a “stricter opt-in model”, meaning that creators and companies must grant permission before their IP can be generated within Sora.

This shift mirrors OpenAI’s existing system for individuals, which allows users to control whether their likeness or voice can be used in generative AI content. However, the company is extending these protections to fictional characters, trademarks, and franchise IP, in response to growing pressure from entertainment giants like Disney, Warner Bros. Discovery, and Sony Pictures.

Altman’s statement also suggests that while Sora’s output policies are changing, the system may still be trained on media containing known characters or copyrighted visuals — a gray area that is likely to fuel further debate about AI training data and copyright law.

Hollywood’s reaction and the IP dilemma

The announcement comes amid rising tensions between Silicon Valley and Hollywood, as generative AI technology increasingly intersects with the entertainment industry.

Major studios and guilds — including the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) — have voiced concerns that AI tools like Sora could be used to replicate actors, writers, and creative works without consent or compensation.

Altman’s statement appears to be a direct response to those concerns. By offering an opt-in framework and potential profit-sharing system, OpenAI aims to appease rightsholders while keeping creators engaged.

“People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences,” Altman wrote. “We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”

He added that the revenue model is still in development and will likely undergo “trial and error” before it becomes standardized.

A new frontier: “interactive fan fiction”

Despite the controversy, Altman’s tone suggested cautious optimism about what Sora represents for storytelling and fandom. He described a growing community of creators using AI to remix familiar universes — not for exploitation, but for creative expression.

“We’re hearing from rightsholders who see this as a new kind of interactive fan fiction,” Altman said. “They believe this kind of engagement will accrue a lot of value — if it’s done ethically.”

That perspective aligns with a larger cultural shift in online creativity. Sora’s users have produced everything from reimagined Marvel storylines to Star Wars fan films, blurring the boundary between fan art and studio IP.

But the same tools that empower fans also threaten established creative industries, raising urgent questions about ownership, consent, and monetization in the AI era.

The road ahead for OpenAI and Sora

Sora remains in its early-access phase, with select creators and developers experimenting under limited release. Still, the platform has already demonstrated how generative AI could revolutionize filmmaking — and, in the process, disrupt traditional media production models.

OpenAI’s promise of more transparent IP governance and potential revenue sharing could set a precedent for how tech companies collaborate with — rather than compete against — creative industries.

Analysts believe the company’s next major challenge will be ensuring copyright compliance across billions of generated videos while maintaining creative freedom for users.

As the entertainment industry grapples with the implications of AI, one thing is certain: tools like Sora are redefining what it means to create, share, and profit from visual storytelling.

“This is new territory for everyone,” Altman concluded. “We want to build a system that rewards creativity — both human and artificial — without crossing ethical or legal lines.”

Technology News

Sam Altman Breaks Silence: Molotov Cocktail Scare, “Incendiary” Probe, and a Candid Reckoning With His Past…

The OpenAI chief addresses a shocking security incident and pushes back against a high-profile investigation, while reflecting on mistakes that shaped his leadership.


Sam Altman Responds to Molotov Cocktail Incident and New Yorker Investigation
Sam Altman addresses controversy and security concerns in a candid blog post amid growing scrutiny of AI leadership.

In a rare and deeply personal blog post published Friday, Sam Altman, CEO of OpenAI, pulled back the curtain on a troubling security incident involving a Molotov cocktail—while also responding to what he described as an “incendiary” investigation by The New Yorker.

Altman’s post, striking in both tone and transparency, covered far more than just headlines. It offered readers a glimpse into the pressures of leading one of the world’s most scrutinized AI companies, while confronting narratives he believes have misrepresented his character and decisions.

A Disturbing Incident Comes to Light

Altman confirmed that a Molotov cocktail incident had indeed taken place, raising concerns about the growing intensity of public sentiment surrounding artificial intelligence and its key figures. While details remain limited, the acknowledgment alone underscores the increasingly volatile environment in which tech leaders now operate.


The incident serves as a stark reminder of how polarizing AI has become, particularly as companies like OpenAI continue to push the boundaries of innovation with tools such as ChatGPT.

Pushing Back Against The New Yorker

A significant portion of Altman’s post was dedicated to addressing an investigation by journalists Ronan Farrow and Andrew Marantz, published in The New Yorker. The piece reportedly examined Altman’s leadership style, past controversies, and internal dynamics at OpenAI.

Altman did not hold back, labeling the article as “incendiary” and suggesting it painted an incomplete and, at times, misleading picture. While acknowledging that scrutiny comes with the territory, he emphasized the importance of fairness and context in reporting.

“There are parts of my past I’m not proud of,” Altman admitted, “but they don’t define the work we’re doing today.”

A Rare Moment of Self-Reflection

Perhaps the most compelling aspect of the blog post was Altman’s willingness to revisit his own past mistakes. In an industry often marked by carefully curated public personas, his candid tone stood out.

He reflected on earlier decisions in his career—some of which have been criticized—and framed them as learning experiences that informed his leadership today. This introspection appeared to be both a defense against criticism and an attempt to humanize a figure often seen as emblematic of Big Tech ambition.

The Broader Context: AI Under the Spotlight

Altman’s remarks come at a time when artificial intelligence is facing unprecedented scrutiny from governments, media, and the public. From ethical concerns to job displacement fears, companies like OpenAI are navigating a complex web of expectations and criticisms.

The CEO’s decision to address both a security scare and a media investigation in one sweeping post suggests a deliberate effort to regain control of the narrative—and perhaps rebuild trust.

A Leader Under Pressure

For Altman, this moment is about more than just rebutting an article or confirming an incident. It reflects the reality of leading a transformative yet controversial field.

As AI continues to reshape industries, figures like Altman are finding themselves not just as innovators, but as lightning rods for debate.

Whether his candid approach will resonate with critics remains to be seen. But one thing is clear: Sam Altman is choosing to confront the storm head-on—on his own terms.


Technology News

Amazon’s AWS Cloud Went Dark Over Dubai and Iran’s Drones May Have Just Changed the Internet Forever…

Iranian missile and drone strikes hit Amazon Web Services data centers in the UAE and Bahrain, taking down dozens of cloud services and raising terrifying questions about the future of global digital infrastructure in a war zone.


Amazon AWS Data Centers Hit by Iran Drone Strikes in Dubai — Cloud Services Down Across Middle East

The Gulf had one simple promise for Silicon Valley: Bring your servers. We’ll keep them safe.

On Sunday, March 1, 2026, that promise burned — quite literally.

At around 4:30 AM PST, one of Amazon Web Services’ availability zones — specifically the mec1-az2 cluster in its ME-CENTRAL-1 region — was struck by unidentified objects, triggering sparks and a fire (404 Media). What followed was not just a tech outage. It was a wake-up call for every business, government, and startup that had trusted the Middle East with their data.

What Exactly Happened?

Amazon confirmed that two of its data center facilities in the United Arab Emirates were directly struck, while in Bahrain, a drone strike in close proximity to one of its facilities caused physical damage to its infrastructure.


Power to the UAE facility was cut by local authorities to contain the blaze. Amazon hasn’t officially specified what the “objects” were — but the data center appears to have been caught squarely in the crossfire between U.S. and Iranian forces operating in the region.

Amazon’s popular EC2 virtual server service, its S3 storage platform, and its DynamoDB database service were among the roughly 60 applications experiencing elevated error rates and degraded availability. AWS confirmed that recovery would be prolonged “given the nature of the physical damage involved.”

And customers? They were told to pack up and leave — digitally speaking.

AWS advised customers with workloads in the region to consider backing up their data or migrating to other AWS regions entirely (CNBC). That’s a remarkable admission from one of the world’s most powerful tech companies.

The Bigger Picture: How Did We Get Here?

On Saturday, the United States and Israel launched Operation Epic Fury, striking targets inside Iran and killing several political and military leaders — including Ayatollah Ali Khamenei, Iran’s Supreme Leader. In retaliation, Iran unleashed hundreds of drone and missile attacks against Israel and multiple U.S.-allied targets across the Middle East, including the UAE, Qatar, Kuwait, and Saudi Arabia (404 Media).

The UAE military intercepted 165 ballistic missiles, two cruise missiles, and 541 drones over two days. But 35 drones and 5 projectiles still got through — striking airports, Jebel Ali Port, and even the facade of the iconic Burj Al Arab hotel. Three migrant workers were killed (Rest of World).

The Amazon data centers were not the only casualties. According to multiple reports, Iranian armaments struck the headquarters of the U.S. Navy’s Fifth Fleet in Manama, Bahrain. Google, Amazon, Microsoft, and Oracle all operate cloud facilities in nations now under Iranian bombardment (The Register). Yet it is Amazon’s infrastructure that has suffered the most visible blow.

A Vulnerability Nobody Planned For

The uncomfortable truth is that nobody in Silicon Valley or the Gulf capitals ever seriously planned for this.

The January 2026 Pax Silica initiative had brought the UAE and Qatar into a U.S.-led effort to keep advanced chips away from China. The security frameworks were designed around geopolitics and supply chain control — not around protecting physical buildings during a missile and drone war (Rest of World).

As Ali Bakir, an assistant professor of international affairs and defense at Qatar University, bluntly put it: the physical security of strategic digital infrastructure may have been assumed to fall under broader national defense — without ever being treated as a distinct vulnerability (Rest of World).

Data management firm Snowflake attributed its own service disruptions in the region directly to the AWS outage in the UAE, showing just how far the knock-on effects spread through the cloud ecosystem (The Register).

What Happens Next?

It remains unclear how long it will take for Amazon to fully restore services. The company’s dashboard warned of at least a day’s recovery time — but the war is far from over, and Iran continues to strike targets across the Middle East (404 Media).

Ryan Bohl, senior analyst for the Middle East and North Africa at RANE Network, noted that while the region’s core advantages remain intact for now, the trajectory depends heavily on how the conflict evolves. Companies are watching closely to see whether this was a contained episode or the start of a more sustained cycle of disruption (Rest of World).

One thing, however, is already clear: the Gulf’s era as an unquestioned “safe harbor” for the world’s data may be over. And the next time a Silicon Valley executive signs a billion-dollar infrastructure deal in the Middle East, they’ll be asking a question nobody used to ask — what happens if the missiles come for the servers?


Technology News

Inside the Mind of the Man Who Trusts Dogs to Lead Movies

From AI labs to film sets, BARK innovation chief Mikkel Holm has a radical idea — what if dogs weren’t just stars, but storytellers?


Meet the Man Who Thinks Dogs Should Be Film Directors

In an era where artificial intelligence is already writing scripts, composing music, and generating entire films, one creative mind is asking a question that feels equal parts absurd and oddly profound: Why shouldn’t dogs be directors?

That mind belongs to Mikkel Holm, the Chief AI & Innovation Officer at BARK, the pet brand best known for turning dog culture into a billion-dollar business. Holm isn’t pitching a gimmick. He’s questioning how creativity itself is defined — and who gets to own it.

From Fetch to Final Cut

Holm’s thinking sits at the crossroads of AI, storytelling, and animal behavior. With generative tools becoming more intuitive, he believes creativity no longer needs to start with a human idea. A dog’s reactions — what excites them, what scares them, what keeps their attention — could become the raw data that shapes narratives.

“Dogs already tell us what they like,” Holm has suggested in industry conversations. “We just haven’t been listening in a cinematic way.”


Using sensors, computer vision, and behavioral AI models, a dog’s gaze, movement, or excitement could guide editing decisions, pacing, or even story arcs. The result wouldn’t be about dogs — it would be cinema filtered through a non-human perspective.

The Birth of the First Park Chan-Woof?

Holm jokingly refers to the possibility of minting the next Park Chan-wook — except this auteur would wag instead of walk the red carpet. The joke lands because it highlights something serious: great directors don’t just tell stories, they feel them. And dogs, arguably, are pure instinct.

Unlike human creators shaped by trends, algorithms, or box-office anxiety, dogs respond honestly. They don’t care about three-act structures or Rotten Tomatoes scores. They react in real time — and Holm believes that authenticity is something modern storytelling desperately needs.



Why BARK Is the Perfect Place for This Idea

At BARK, data about canine behavior isn’t abstract. It’s central to the business. Millions of interactions — toys chewed, treats rejected, boxes loved — already inform product design. Translating that behavioral intelligence into creative output feels like a natural extension.

Holm’s role isn’t about replacing human creators. Instead, it’s about collaboration — humans setting the framework, AI translating signals, and dogs influencing the final creative choices in ways we’ve never seen before.

Is This Art or Absurdity?

Skeptics, of course, will laugh. Dogs as directors sounds like a headline built for clicks. But then again, so did AI-written novels, virtual influencers, and fully synthetic pop stars — until they weren’t jokes anymore.

Holm’s idea taps into a deeper cultural shift: creativity is no longer exclusively human. As tools evolve, authorship becomes shared — between humans, machines, and perhaps, one day, animals.

And if the result is strange, emotional, or unexpectedly beautiful? That might be the point.

A Future Where Creativity Isn’t Just Human

Cinema has always evolved with technology — from silent films to sound, black-and-white to color, analog to digital. Holm’s vision suggests the next leap might not be technical, but philosophical.

What happens when we stop asking who is allowed to create?

If the first dog-directed short film ever premieres at a festival someday, don’t be surprised if it doesn’t explain itself. Dogs, after all, have never felt the need to justify their instincts. Maybe storytellers shouldn’t either.
