UK Threatens to Ban Elon Musk’s X Over Grok Deepfakes – Ofcom Probe Fast-Tracked

The UK government has intensified pressure on X (formerly Twitter), warning of potential bans or heavy fines over Grok-generated deepfake images.

Media regulator Ofcom has fast-tracked its investigation under the Online Safety Act following public outrage at explicit AI-generated images of celebrities and public figures.

Prime Minister Keir Starmer condemned the deepfakes as “disgraceful” and “not to be tolerated,” pledging swift enforcement. Ofcom is considering penalties that could include fines of up to 10% of X’s global revenue or even a full platform block in the UK.

In response, X announced that Grok’s image generation tool will be restricted to paying subscribers, with stricter consequences for accounts that create illegal content.

The controversy comes amid wider global scrutiny of Grok’s uncensored image tool, which critics say produces highly realistic deepfakes with fewer guardrails than rivals such as DALL-E or Midjourney.

The Online Safety Act 2023 – Explained

The Online Safety Act 2023 is landmark UK legislation that came into force in stages, with its major enforcement powers taking effect across 2025–2026. It aims to make the internet safer, especially for children, by holding online platforms legally accountable for harmful content.

Key Objectives

Protect users (especially kids) from:

  • Illegal content (child sexual abuse material, terrorism, hate speech, revenge porn, etc.)
  • Content that is legal but harmful, particularly where it can reach children (suicide/self-harm promotion, eating-disorder content, bullying, misinformation that causes significant harm)
  • Adult content being easily accessible to children

Who It Applies To

Any online service with significant UK users or targeting the UK market, including:

  • Social media (X, Instagram, TikTok, Facebook)
  • Video-sharing platforms (YouTube, Twitch)
  • Search engines (Google)
  • Forums, dating apps, gaming sites with user-generated content

Very small/low-risk platforms are exempt or lightly regulated.

Main Duties of Platforms (Big Tech)

  1. Risk Assessments – Identify & assess risks of illegal & harmful content.
  2. Safety-by-Design – Build features that reduce harm (e.g., strong age verification, content filters); a rough sketch follows this list.
  3. Content Moderation – Swiftly remove illegal content; restrict harmful material.
  4. Transparency Reports – Publish annual reports on actions taken.
  5. User Tools – Give users controls (block, mute, report, limit exposure).
  6. Child Protection – Strictest rules: age checks, prevent children seeing porn or self-harm content.
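To make duties 2, 3, and 6 concrete, here is a minimal sketch of how a platform might order these checks in an upload pipeline. Everything here is hypothetical: the Upload fields, the thresholds, and the decision labels are invented for illustration, and real systems combine ML classifiers, hash-matching against known illegal material, and human review.

```python
# Hypothetical sketch of an Online Safety Act-style upload check.
# Field names, thresholds, and labels are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Upload:
    user_age: int           # from a (hypothetical) age-verification step
    is_adult_content: bool  # flag from a (hypothetical) classifier
    illegal_score: float    # 0..1 score from a (hypothetical) model

def moderate(upload: Upload) -> str:
    """Apply duties in priority order: illegal content first, then age-gating."""
    if upload.illegal_score >= 0.9:
        return "remove_and_report"      # content moderation duty: swift removal
    if upload.is_adult_content and upload.user_age < 18:
        return "block_for_minor"        # child protection duty: age-gate adult content
    if upload.illegal_score >= 0.5:
        return "hold_for_human_review"  # borderline cases escalate, not publish
    return "publish"

print(moderate(Upload(user_age=15, is_adult_content=True, illegal_score=0.1)))
# -> block_for_minor
```

The ordering is the design point: illegal-content and age checks run before anything is published (“safety by design”), rather than relying on after-the-fact takedowns.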

Enforcement & Penalties

  • Regulator: Ofcom, the UK communications watchdog.
  • Fines of up to £18 million or 10% of global annual turnover, whichever is higher (see the worked example after this list).
  • In extreme cases: courts can order access to the platform blocked in the UK (e.g., banning X in Britain).
  • Ofcom can also require platforms to change their algorithms or suspend specific features.
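The “whichever is higher” rule is what makes the cap bite for large platforms. A minimal sketch of the arithmetic, using an invented turnover figure (the function name and numbers are hypothetical):

```python
# Illustrative arithmetic for the Act's fine ceiling: the greater of a
# fixed £18 million or 10% of global annual turnover. Figures are invented.
FIXED_CAP_GBP = 18_000_000   # fixed statutory maximum
REVENUE_SHARE = 0.10         # 10% of global annual turnover

def max_fine(global_annual_turnover_gbp: float) -> float:
    """Return the ceiling on a single Online Safety Act fine."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * global_annual_turnover_gbp)

# A platform with £3bn in global turnover faces a ceiling of
# max(£18m, £300m) = £300m, so the revenue arm, not the £18m floor, binds.
print(f"£{max_fine(3_000_000_000):,.0f}")  # £300,000,000
```

For a company of X’s size, the 10%-of-revenue arm sets the ceiling, which is why coverage of this case quotes a share of global revenue rather than the fixed £18 million.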

Timeline (as of January 10, 2026)

  • 2023–2024: Act passed & initial rules set.
  • 2025: Illegal content duties in force; platforms submit risk assessments.
  • 2026: Full rollout of harmful content rules, age verification enforcement, and steep fines begin.
  • Jan 2026: Ofcom fast-tracking probes (e.g., Grok deepfakes case).

Controversy & Debate

  • Free speech: Critics (including Elon Musk) say it gives the government too much power to censor.
  • Effectiveness: Supporters argue it’s long overdue protection for kids.
  • Global impact: UK rules are influencing similar laws in the EU, Australia, and Canada.

Basically, the Online Safety Act is the UK’s attempt to make Big Tech legally responsible for what happens on their platforms – with massive fines and even bans as the ultimate stick.
