Sources tell Reuters that Elon Musk's DOGE team in the U.S. government uses Grok, the AI chatbot from his xAI startup, to examine data. Reports suggest the team pitched the tool to Homeland Security officials without first getting approval.
Possible Privacy Breaches and Ethical Dangers
If Grok accessed sensitive federal information, experts caution, its use may have violated privacy and security rules. They also point to a potential conflict of interest if Musk, who serves the government in a special role, influenced decisions that could financially benefit his private company.
DHS Says No Pressure But Puts AI Use on Hold
DHS blocked staff access to all commercial AI tools, including ChatGPT and Anthropic's Claude, in May, citing concerns about improper data handling. A DHS spokesperson said DOGE didn't push employees to use Grok, but the department hasn't said whether Grok was used or how it was vetted. Before the ban, DHS allowed limited AI use under strict internal rules: staff could use commercial models for non-sensitive tasks, while more sensitive work ran on the department's own AI system.
AI in Government Raises Red Flags, Say Experts
Watchdogs and civil liberties groups warn that deploying Grok, combined with DOGE's reach into massive federal databases, poses a real threat to data privacy, civil service safeguards, and fair competition in federal contracting. Some sources claim DOGE staff have explored using AI to monitor employee "loyalty" to political goals, pushing ethical boundaries even further. The uproar also highlights broader concerns about unchecked use of generative AI in government when powerful tools like Grok are introduced outside normal regulatory channels. Critics say that without proper oversight, Musk's dual roles in the public and private sectors could open the door to self-dealing and erode public trust in how the government adopts AI.