Large Language Models (LLMs), a type of generative AI, are powerful tools that can generate natural-language text from a given input. They have been used for various applications, such as chatbots, summarization, translation, and content creation. LLMs sound like a great tool for social media users who want to save time and effort, or who want to express themselves creatively, while admins can build more sophisticated rule-detection bots.
However, using LLMs on social media platforms, or in any application with a mass user base, poses challenges and risks that need to be considered. For instance, deep technical understanding of a website is no longer required to exploit or explore a zero-day vulnerability: an attacker can simply prompt an LLM to write out possible attack scenarios and the associated code. In this article, we will examine the pros and cons of LLMs for social media specifically. Since the popularity of multiple types of generative AI is on the rise, we cannot discuss LLMs without also mentioning their relatives.
Picture: Popular social media platforms. Source: Wix.
One of the potential benefits of using LLMs for social media is that they can enhance the user experience by providing personalized and engaging content. For example, an LLM can generate captions, hashtags, comments, or replies based on the user’s preferences, interests, and mood. Furthermore, an LLM can generate original and engaging content that attracts more followers, likes, and shares, which will undoubtedly ease the job of social media content writers. In fact, Stability AI recently announced a model that enables users to create advertisements in video or image format.
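To make the caption example concrete, here is a minimal sketch of how a user-side tool might ask an LLM for a caption and hashtags. It assumes the OpenAI Python SDK (version 1.x) and an API key in the environment; the model name, prompt wording, and helper function are my own illustrative choices, not something any platform mentioned here actually ships.

```python
# Minimal sketch: asking an LLM for a social media caption with hashtags.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def generate_caption(post_description: str, tone: str = "casual") -> str:
    """Return a short caption with a few hashtags for the described post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap for whatever is available
        messages=[
            {"role": "system",
             "content": "You write short social media captions ending with 3 hashtags."},
            {"role": "user",
             "content": f"Write a {tone} caption for this post: {post_description}"},
        ],
        temperature=0.8,  # higher temperature for more varied, "creative" captions
    )
    return response.choices[0].message.content

print(generate_caption("sunset photo from a beach trip with friends"))
```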
However, using LLMs for social media also has drawbacks and dangers that may outweigh the advantages. For instance, an LLM may generate misleading, inaccurate, or harmful content that damages the reputation, credibility, or safety of the user or others; it may produce false or defamatory statements about a person or an organization, or incite violence or hatred against a group of people. Furthermore, because controversial content attracts the most interaction on social media, and because content is now so easy to create, platforms can easily be flooded with low-quality, harmful, and misleading material. This also puts the authenticity and accountability of a user’s online identity and behaviour at risk of being manipulated. For example, deepfake audio videos of multiple US presidents discussing random topics have recently become popular on YouTube. What if such content were combined with deepfake visuals to spread misinformation, leaked "classified documents", or "secret conversation audio" to rig an upcoming presidential election?
Picture: AI fake audio of multiple presidents. Source: YouTube.
Another risk of using LLMs is unintentionally violating intellectual property rights or privacy laws by copying or disclosing information without permission. For instance, users of the Stability AI model mentioned above might not be aware that Getty Images content was used to train it. Content generated with the model is therefore potentially prone to copyright strikes.
One possible issue arising from OpenAI's and Microsoft's push for rapid adoption is the lack of privacy and security monitoring. It is easiest to think of this as an app store for LLMs. Every year, countless applications on app stores are found to be malware, and the same situation is already occurring with fake GPT extensions. Of course, these fake extensions do not come from the official OpenAI extension marketplace. However, given that malware apps still slip past the strict reviews of the existing Google and Apple stores, it is possible that a malicious LLM extension would only be discovered after it is too late and thousands of victims have been affected.
A possible scenario that illustrates the risks of using LLMs for social media is the case of a popular user. Consider user A, a famous person with many fans and haters. Following an unconfirmed allegation, user A is accused of something bad, and the haters start spamming reports against user A. If user A uses an LLM to generate content for their social media account, the LLM may worsen the situation by denying or admitting the allegation without user A’s consent or knowledge. It may also generate content that provokes more backlash from the haters or alienates the fans.
Picture: Cancel culture. Source: Pixabay.
Finally, the situation can get even trickier for the admins of these social media websites. In the example of celebrity A above, an LLM deployed by the platform can automatically detect the mass negative sentiment and the flood of reports. As the video by LiveOverflow shows, an LLM may not be able to distinguish between legitimate and illegitimate reports of user behaviour. Without explicit training or rules, this can lead to an automatic permanent ban from the website, similar to the Facebook case where a bad actor can copyright-strike a picture uploaded by its original owner.
Now, it is tempting to think that an improved version of the AI could automatically learn context better and refine the rules. However, that is only partially true. While the learning cycle of AI is extremely fast, the learning output is difficult to oversee. Beyond the simple trick I used on BingGPT in the previous post, in complex situations where there is no clear right and wrong, these automated LLM detectors can label controversial content as rule-breaking and unintentionally lead to over-censorship, as the sketch below illustrates.
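As a rough illustration of that over-censorship risk, here is a hypothetical sketch of a naive, report-driven moderation check. Nothing in it reflects a real platform's pipeline; the model name, prompt, and verdict scheme are my own assumptions. The point is that when the model only sees report volume and raw text, a brigaded but rule-abiding post looks much the same as a genuinely abusive one.

```python
# Hypothetical sketch of naive LLM-based report triage (illustrative only).
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def triage_reported_post(post_text: str, report_count: int) -> str:
    """Ask the model for a one-word moderation verdict on a reported post."""
    prompt = (
        f"A social media post has received {report_count} user reports.\n"
        f"Post: {post_text}\n"
        "Reply with exactly one word: BAN, REVIEW, or IGNORE."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic verdicts for moderation
    )
    verdict = response.choices[0].message.content.strip().upper()
    # Naive policy: act on the verdict directly. Nothing here separates
    # coordinated brigading from legitimate reports, so a controversial
    # but rule-abiding post can end up banned automatically.
    return verdict

print(triage_reported_post("I still support celebrity A despite the allegation.", 5000))
```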
Therefore, using LLMs for social media is a novel but risky idea that requires careful evaluation and regulation. While LLMs can offer some benefits for enhancing the user experience and generating creative content, they can also pose serious threats to a user’s reputation, credibility, safety, and authenticity. It should be recognized that the technology is very new, and it is not yet clear how to craft effective policies for it.