***
DOGE’s Plans To Replace Humans With AI Are Already Under Way
When I first heard about the concept of DOGE, I immediately understood that it had nothing to do with cutting costs or waste in government; rather, it was about the wholesale reformation of government with AI. The Atlantic is exposing the tip of the iceberg. How many times have I said, “DATA IS THE NEW OIL”? Further, AI doesn’t need humans to perform work, so this is a paradigm of the “crushing of wages” that Marc Andreessen recently talked about. As human value shrinks, AI and its owners are fabulously enriched by the data.
It’s very sad that America is oblivious to this clear and present danger. It’s not that they weren’t warned. ⁃ Patrick Wood, Editor, technocracy.news
A new phase of the president and the Department of Government Efficiency’s attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people.
The Trump administration is testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software’s code base, which is visible on GitHub.
The bot, which GSA leadership is framing as a productivity booster for federal workers, is part of a broader playbook from DOGE and its allies. Speaking about GSA’s broader plans, Thomas Shedd, a former Tesla engineer who was recently installed as the director of the Technology Transformation Services (TTS), GSA’s IT division, said at an all-hands meeting last month that the agency is pushing for an “AI-first strategy.” In the meeting, a recording of which I obtained, Shedd said that “as we decrease [the] overall size of the federal government, as you all know, there’s still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.” He suggested that “coding agents” could be provided across the government—a reference to AI programs that can write and possibly deploy code in place of a human. Moreover, Shedd said, AI could “run analysis on contracts,” and software could be used to “automate” GSA’s “finance functions.”
A small technology team within GSA called 10x started developing the program during President Joe Biden’s term, and initially envisioned it not as a productivity tool but as an AI testing ground: a place to experiment with AI models for federal uses, similar to how private companies create internal bespoke AI tools. But DOGE allies have pushed to accelerate the tool’s development and deploy it as a work chatbot amid mass layoffs (tens of thousands of federal workers have resigned or been terminated since Elon Musk began his assault on the government). The chatbot’s rollout was first noted by Wired, but further details about its wider launch and the software’s previous development had not been reported prior to this story.
The program—which was briefly called “GSAi” and is now known internally as “GSA Chat” or simply “chat”—was described as a tool to draft emails, write code, “and much more!” in an email sent by Zach Whitman, GSA’s chief AI officer, to some of the software’s early users. An internal guide for federal employees notes that the GSA chatbot “will help you work more effectively and efficiently.” The bot’s interface, which I have seen, looks and acts similar to that of ChatGPT or any similar program: Users type into a prompt box, and the program responds. GSA intends to eventually roll the AI out to other government agencies, potentially under the name “AI.gov.” The system currently allows users to select from models licensed from Meta and Anthropic, and although agency staff currently can’t upload documents to the chatbot, they likely will be permitted to in the future, according to a GSA employee with knowledge of the project and the chatbot’s code repository. The program could conceivably be used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data, the GSA worker told me.
Spokespeople for DOGE did not respond to my requests for comment, and the White House press office directed me to GSA. In response to a detailed list of questions, Will Powell, the acting press secretary for GSA, wrote in an emailed statement that “GSA is currently undertaking a review of its available IT resources, to ensure our staff can perform their mission in support of American taxpayers,” and that the agency is “conducting comprehensive testing to verify the effectiveness and reliability of all tools available to our workforce.”
At this point, it’s common to use AI for work, and GSA’s chatbot may not have a dramatic effect on the government’s operations. But it is just one small example of a much larger effort as DOGE continues to decimate the civil service. At the Department of Education, DOGE advisers have reportedly fed sensitive data on agency spending into AI programs to identify places to cut. DOGE reportedly intends to use AI to help determine whether employees across the government should keep their job. In another TTS meeting late last week—a recording of which I reviewed—Shedd said he expects that the division will be “at least 50 percent smaller” within weeks. (TTS houses the team that built GSA Chat.) And arguably more controversial possibilities for AI loom on the horizon: For instance, the State Department plans to use the technology to help review the social-media posts of tens of thousands of student-visa holders so that the department may revoke visas held by students who appear to support designated terror groups, according to Axios.
Rushing into a generative-AI rollout carries well-established risks. AI models exhibit all manner of biases, struggle with factual accuracy, are expensive, and have opaque inner workings; a lot can and does go wrong even when more responsible approaches to the technology are taken. GSA seemed aware of this reality when it initially started work on its chatbot last summer. It was then that 10x, the small technology team within GSA, began developing what was known as the “10x AI Sandbox.” Far from a general-purpose chatbot, the sandbox was envisioned as a secure, cost-effective environment for federal employees to explore how AI might be able to assist their work, according to the program’s code base on GitHub—for instance, by testing prompts and designing custom models. “The principle behind this thing is to show you not that AI is great for everything, to try to encourage you to stick AI into every product you might be ideating around,” a 10x engineer said in an early demo video for the sandbox, “but rather to provide a simple way to interact with these tools and to quickly prototype.”
But Donald Trump appointees pushed to quickly release the software as a chat assistant, seemingly without much regard for which applications of the technology may be feasible. AI could be a useful assistant for federal employees in specific ways, as GSA’s chatbot has been framed, but given the technology’s propensity to make up legal precedents, it also very well could not. As a recently departed GSA employee told me, “They want to cull contract data into AI to analyze it for potential fraud, which is a great goal. And also, if we could do that, we’d be doing it already.” Using AI creates “a very high risk of flagging false positives,” the employee said, “and I don’t see anything being considered to serve as a check against that.” A help page for early users of the GSA chat tool notes concerns including “hallucination”—an industry term for AI confidently presenting false information as true—“biased responses or perpetuated stereotypes,” and “privacy issues,” and instructs employees not to enter personally identifiable information or sensitive unclassified information. How any of those warnings will be enforced was not specified.
Of course, federal agencies have been experimenting with generative AI for many months. Before the November election, for instance, GSA had initiated a contract with Google to test how AI models “can enhance productivity, collaboration, and efficiency,” according to a public inventory. The Departments of Homeland Security, Health and Human Services, and Veterans Affairs, as well as numerous other federal agencies, were testing tools from OpenAI, Google, Anthropic, and elsewhere before the inauguration. Some kind of federal chatbot was probably inevitable.
But not necessarily in this form. Biden took a more cautious approach to the technology: In a landmark executive order and subsequent federal guidance, the previous administration stressed that the government’s use of AI should be subject to thorough testing, strict guardrails, and public transparency, given the technology’s obvious risks and shortcomings. Trump, on his first day in office, repealed that order, with the White House later saying that it had imposed “onerous and unnecessary government control.” Now DOGE and the Trump administration appear intent on using the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.
***
Those with machine minds being ruled by an A.I.: what a surprise. Government workers being replaced with a machine doesn’t surprise me. Not a bit. The money saved will go into Elon Musk’s pockets, or Trump’s. A new wealth fund for the new king (not sure which one is king). The same workers who worked to destroy the lives of others are having their own lives destroyed. Karma.
The ultimate goal is rule by machine: A.I. government instead of people. It is already here. Almost all payment and ordering systems are run by a machine, a computer. Call to make a payment: it is a machine. Make a payment online: it’s a machine. Phone support of any kind? An A.I. machine first. You have to work hard to reach a person, and even then, are they one? Or just a better class of machine?
The A.I. phone will soon be inside the body, or in the brain. WE ARE BORG… YOU WILL BE ASSIMILATED… and yet people will do it happily, willingly. Very few humans will be left alive outside the 15-minute prison cities, or The Line in Arabia, which is a tomb for the living. No escape. No grounding to the earth, because it is a concrete prison. No wind. No sun. No air other than processed. No food. No water unless your credit score says you’re a good slave! Those too will have chips in the brain, with an off switch they can flip by remote control.
Clearing the land for the 10 kingdoms and the 10 kings: playgrounds for the rich and powerful and their carefully selected and controlled servants. The rest of the population will be turned into bone cities and bone artwork like the catacombs of Paris, trophies for the insane. One area at a time burned out: Hawaii, California. Or flooded, like North Carolina, and Spain, and Mexico. Populations injected with a poison that will drop them like flies; a new bird flu requires a new population to be killed. One step at a time.
But don’t worry, your A.I. masters are in control, and your phone is your prison warden.