Behold the Social Security Administration’s AI Training Video


Amid the chaos and upheaval at the Social Security Administration (SSA) caused by Elon Musk's so-called Department of Government Efficiency (DOGE), employees have now been invited to integrate a generative AI chatbot into their daily work.

But before any of them can use it, they must all watch a four-minute training video featuring an animated woman with four fingers, drawn in a style that would not look out of place on websites built at the turn of the century.

Web 1.0 graphics aside, the video also fails at its main objective of informing SSA staff about one of the most important rules for using the chatbot: do not include any personally identifiable information (PII) when interacting with the assistant.

There is nothing wrong with your speakers; Wired has disabled the sound. Via the SSA.

"Our apologies for our training video's oversight," the SSA wrote in a chatbot fact sheet that was shared in an email to employees last week. The fact sheet, which Wired has reviewed, adds that employees using the chatbot must "refrain from uploading PII to the chatbot."

Work on the chatbot, called the Agency Support Companion, began about a year ago, long before Musk or DOGE arrived at the agency, an SSA employee with knowledge of the application's development tells Wired. The app had been in limited testing since February, before being rolled out to all SSA staff last week.

In an email announcing its availability to all staff this week, which was reviewed by Wired, the agency wrote that the chatbot is "designed to help employees with daily tasks and improve productivity."

Several SSA employees, including front-office staff, told Wired that they ignored the email about the chatbot entirely because they were too busy with actual work, making up for reduced staffing in SSA offices. Others said they had briefly tested the chatbot but were not immediately impressed.

"Honestly, no one really talked about it," a source told Wired. "I'm not sure most of my colleagues even watched the training video. I played around with the chatbot a bit, and several of the answers I received were incredibly vague and/or inaccurate."

Another source said their colleagues mocked the training video.

"You could hear my colleagues making fun of the graphics. Nobody I know is [using it]. It's so clunky and bad," said the source, adding that they had also received inaccurate information from the chatbot.
