When it was officially released late last year, ChatGPT immediately became a craze and was hailed as one of the most impressive technological advancements of the year.
This artificial intelligence (AI) chatbot can generate text on almost any topic, from writing poetry in Shakespeare's sonnet form to explaining complex mathematical theorems in language a 5-year-old child can understand.
Within a week, ChatGPT had over a million users. According to published statistics, ChatGPT surpassed 10 million users in just 40 days, far exceeding the early growth of social networks like Instagram, which took 355 days to reach 10 million registrations.
OpenAI, the research and development company behind ChatGPT, is attracting major investment, including $10 billion from Microsoft. It would not be surprising if OpenAI one day ranks among the most valuable AI companies in the world.
Efforts to prevent harmful content
ChatGPT is trained on enormous volumes of text gathered from the internet and generates answers on demand: just type a question and the tool responds immediately.
The content ChatGPT produces is sometimes remarkably clever; the tool can even revise its answers based on user feedback. But it is sometimes completely wrong, even when its explanation sounds convincing. This is why the results ChatGPT returns remain more or less controversial.
ChatGPT's predecessor, GPT-3, also possessed an impressive ability to chain together and aggregate information. But because the amount of internet data GPT-3 ingested was so large, its operators could not control all the false and malicious information within it. The application was difficult to commercialize because of its tendency to produce violent, sexist, and racist content. It was estimated that even a team of hundreds of people would need a decade to scour such huge data sets manually.
ChatGPT, a more advanced successor to GPT-3, was then built with a better ability to filter out malicious information on its own. But that does not make ChatGPT completely "clean": toxic content involving racism, misogyny, hate, violence, crime, and profanity still appears. OpenAI CEO Sam Altman has admitted that ChatGPT, like any AI gaining popularity, is controversial and carries a real risk of abuse.
To minimize harmful content, OpenAI continues to hire large numbers of content moderators.
OpenAI partnered with Sama, an organization based in San Francisco that hires content moderators from Kenya, Uganda, and India. Sama bills itself as an "ethical" AI company that has lifted more than 50,000 people out of poverty. But Time magazine, after reviewing internal Sama and OpenAI documents, including payslips, and conducting anonymous interviews, concluded that data-filtering workers are paid between $1.32 and $2 per hour depending on seniority and performance, with a maximum of $3.74 per hour after tax.
Although they are exposed daily to toxic content about violence, sex, and self-harm, they are paid only a meager salary. The story of ChatGPT's hidden workers reveals a little-known side of the AI industry.
According to the Partnership on AI, a coalition that OpenAI belongs to, this human labor is kept out of sight in part because the industry wants to promote and celebrate the image of artificial intelligence as autonomous.
The contract between Sama and OpenAI
In a slowing economy on the edge of recession, investors have poured billions of dollars into "next-generation AI," optimistic that computer-generated text, images, video, and audio will transform how countless industries and key business sectors operate, from the creative arts to law, programming, and computer technology.
But behind that glamour and promising future is a group of people in the southern hemisphere struggling, for minimum wages, to help build a billion-dollar empire.
Under the contract between OpenAI and Sama, OpenAI paid Sama $12.50 per hour, 6 to 9 times what Sama employees actually received. A Sama spokesperson explained the difference: the $12.50 rate for the project covers all costs, such as infrastructure expenses and the salaries and benefits of associates as well as their quality-assurance analysts. After such division, what each employee received was far less.
In terms of workflow, about three dozen employees were divided into three teams, each focused on a different type of content. Three employees interviewed said they had to read and classify between 150 and 250 passages of text per nine-hour shift. All of them suffered mental trauma from the work, and although they received health counseling, they say those sessions were not really helpful.
An employee at Sama tasked with reading and classifying documents for OpenAI told TIME magazine that he often suffered from hallucinations after reading depraved sexual content. "You read about that kind of content all week, and by the end of the week, your mind has been worn out," he said.
Because the workload was so harrowing and could seriously harm employee morale, in February 2022 Sama canceled its cooperation with OpenAI, eight months earlier than planned.
The decision to end the relationship with OpenAI meant that Sama employees no longer had to struggle with harmful text and images, but their livelihoods were also affected: most were transferred to other, lower-paid work streams, and a few lost their jobs. Sama itself had to cope with the resulting media crisis.
Besides OpenAI, Sama also worked with Facebook. Earlier this year, however, Sama decided to cancel all work involving sensitive content. The company will not renew its $3.9 million content-moderation contract with Facebook, resulting in the loss of about 200 jobs in Nairobi, Kenya.
Sama has stepped away, but the need to classify, filter, and cull content for AI systems remains, at least for now.
Andrew Strait, a researcher who studies how AI works, said: "ChatGPT and similar models produce impressive results, but there is nothing magical about them. These tools still rely on massive supply chains of human labor and data scraped from the internet, much of it used without consent. These are fundamental, serious problems that OpenAI has yet to solve."