The Dark Side of Generative AI: 11 Ominous Dangers You Need to Know
Explore the perils of Generative AI, from plagiarism and misinformation to cyber threats and the end of reality.
Episode Resources:
- 8 Useful Small Business Cybersecurity Tips You Need to Know – Resilience Cybersecurity & Data Privacy
- How To Destroy Perfectly Good Cybersecurity Policies – Resilience Cybersecurity & Data Privacy
- 7 problems facing Bing, Bard, and the future of AI search – The Verge
- Fooling a Voice Authentication System with an AI-Generated Voice – Schneier on Security
- Don’t worry about AI breaking out of its box—worry about us breaking in – Ars Technica
- Watch how ChatGPT is tricked into generating Windows 95 keys – PCWorld
- OpenAI threatened with landmark defamation lawsuit over ChatGPT false claims – Ars Technica
- Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement – The Verge
- AI-powered Bing Chat spills its secrets via prompt injection attack – Ars Technica
- ChatGPT Used to Develop New Malicious Tools – Infosecurity
- ChatGPT data leak has Italian lawmakers scrambling to regulate data collection – Ars Technica
- GPT-4 unleashed: Here’s what it will mean for AI chatbots – PCWorld
- Who’s actually getting rich off of AI? – The Verge
- ChatGPT-style search represents a 10x cost increase for Google, Microsoft – Ars Technica
- AI is entering an era of corporate control – The Verge
Episode Transcript
This is Part 3 in our series on Generative AI and Large Language Models. In Part 1, we talked about what each of these types of systems is, along with some examples you can try out now. In Part 2, we talked about the potential benefits of these AI systems.
Although Generative AI and large language models, such as ChatGPT, have the potential to revolutionize various industries and our daily lives, they raise several critical concerns, ranging from the mundane to the existential. As these technologies advance, it is crucial to acknowledge and address these issues, striking a balance between AI’s potential benefits and the risks it poses. Among these risks are:
1) General Misuse of Generative AI
One of the most visible problems with Generative AI and large language models is simple misuse, such as using AI systems to cheat in school. This kind of misuse skirts the rules and strays from the intended purpose of these systems, which is to help users and enhance their experience. These tools can also be used to generate actively harmful content, such as malware code, plagiarized documents, or other materials that deviate entirely from the benefits large language models are meant to provide.
2) Generative AI Systems are Often Wrong
Inaccuracy is another issue with Generative AI and large language models, such as ChatGPT. Although search engines can produce incorrect answers, they provide a list of responses that users can verify. In contrast, when ChatGPT provides an incorrect response, it is often stated confidently without citing sources, making it difficult or impossible for users to verify its accuracy. These systems can “hallucinate” answers when they don’t know the correct response, which can lead to the dissemination of inaccurate information. This raises concerns about the potential consequences of users relying on incorrect information and the long-term impact on trust and confidence in these AI tools.
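As a rough illustration of the verification problem, one informal heuristic is to ask a model the same factual question several times and compare its answers: wildly inconsistent responses suggest the model is guessing rather than recalling. The minimal sketch below assumes the OpenAI Python package (version 1 or later), an OPENAI_API_KEY environment variable, and an illustrative model name; it is a heuristic demonstration, not a reliable hallucination detector.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "In what year was the first version of the SSL protocol released?"

answers = set()
for _ in range(3):
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,  # sampling variation makes guessing more visible
    )
    answers.add(resp.choices[0].message.content.strip())

if len(answers) > 1:
    print("Answers disagree; verify against a primary source:")
for answer in answers:
    print("-", answer)

Even when the answers agree, they can still be confidently wrong, which is why checking against a primary source remains essential.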
3) Erosion of Trust
Generative AI and large language models can produce misleading or deceptive documents, leading to an erosion of confidence in these tools and their outputs. As people question the legitimacy of generated content, trust in human-to-human interactions is tested as well. The loss of confidence in content sources also raises the question of how this affects those who have built trust by producing and sharing reliable information.
4) Copyright (and other Legal) Issues
Generative AI image systems like Midjourney have created outputs that raise significant potential violations of copyright law. And the familiar safe harbor for content creators who sample from copyrighted works – the Fair Use Doctrine – may not apply to generated images. Using copyrighted images to train AI tools that generate images for the tool owner’s financial benefit, without the consent of the original copyright holders, does not appear to fall within the purpose of Fair Use. Similar issues exist for ChatGPT, which was trained on crawled websites without crediting the sources or compensating the content creators.
These issues have not been adequately addressed due to rapid development and a lack of regulation, akin to Uber’s approach of pushing forward quickly and dealing with regulation later.
5) Lack of Cybersecurity Protection
Information security is also a significant issue, as most models have inadequate protections. Users may not be aware that their interactions with tools like ChatGPT are captured and used for the tool’s development. This lack of transparency compounds a problem that already existed across the internet, where people routinely give away information without understanding the potential consequences. Without knowing how (or if) user information is secured, users, policymakers, and regulators have little ability to keep that data protected, and no way to ensure that any insights or models derived from it are kept out of the hands of malicious actors.
6) Pervasive Algorithmic Bias
Algorithmic bias is a significant concern when it comes to Generative AI and large language models. With the increasing use of closed models, it becomes difficult to understand the rules and biases built into these algorithms. Bias can have real-world consequences, such as the racial bias observed in facial recognition systems and in predictive policing algorithms. As more AI models become closed and privately developed, the risk of algorithmic bias increases, and public scrutiny becomes more challenging, if not entirely impossible.
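As a concrete, if simplified, illustration of what an outside bias audit can look like even when a model is closed, the sketch below compares a model’s false-positive rates across two groups using only its inputs and outputs. All of the data is invented for the example; a wide gap between groups, of the kind reported for some facial recognition systems, is one measurable signal of the bias described above.

from collections import defaultdict

# (group, model_prediction, true_label) - hypothetical audit records
records = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

fp = defaultdict(int)   # false positives per group
neg = defaultdict(int)  # actual negatives per group

for group, pred, label in records:
    if label == 0:          # only actual negatives can produce false positives
        neg[group] += 1
        if pred == 1:
            fp[group] += 1

for group in sorted(neg):
    rate = fp[group] / neg[group]
    print(f"{group}: false-positive rate = {rate:.2f}")

The catch, of course, is that this kind of audit requires access to the model’s decisions and ground-truth outcomes, which closed, privately developed systems rarely provide.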
7) Attribution of Malicious Activity
Attribution is another challenge with Generative AI systems. As different datasets come together to produce a single output, it becomes difficult to identify the origin of the response or the creator of specific content. This issue becomes even more complex when AI is used to generate malware or other malicious outputs, making determination of responsibility and legal liability a significant challenge.
These problems are exacerbated in a world where the use of offensive cybersecurity measures, known as “attacking back,” is regularly proposed as a response to cyberattacks. Questions about who should be held accountable for the malicious use of AI need to be addressed, because misuse at scale is likely to occur.
8) The Access Gap
The development of advanced AI technologies often disproportionately favors those with existing resources. Creating large language models like ChatGPT requires significant financial investment, which can exacerbate the gap between the “haves” and the “have-nots,” or between the big and small players in various industries. While some AI tools may be marketed as a means to close this gap, the reality is that higher-end tools, which cost more to create, will likely cost more to access. More advanced and expensive tools are likely being developed privately, widening the divide further. In the long term, this could leave an even greater disparity between those with access to cutting-edge AI technology and those without, making competition increasingly difficult.
9) Effective Malicious Activity, at Scale
The rapid advancement of AI technology, particularly in the cybersecurity space, presents several concerns. As AI becomes more effective and efficient, there may not be enough time to regulate its potential for malicious use. Most people may not currently be worried about ChatGPT writing malware that could take down the internet or enable widespread intrusion, but the speed at which the technology is advancing forces the question of what we do once it has that capability. Given the massive, economically essential attack surface of the internet as a whole, it is crucial to prepare for worst-case outcomes and to consider how AI could be leveraged to cause widespread, or even total, damage.
10) The End of Reality
One of the most immediate threats posed by AI is the evisceration of privacy and the potential impacts that flow from it. Regulation is difficult because there may not be a consensus on what to promote, limit, or prohibit, at least beyond the abstract. Without enforceable regulations, all recorded information about an individual becomes fair game, enabling AI to build comprehensive profiles of nearly everyone on Earth, drawing thousands of data points from both legitimate and illegitimate sources, within seconds.
These profiles could lead to personalized manipulation at scale, as AI can use its knowledge of an individual’s preferences and history to persuade or blackmail them. This tailored manipulation could fundamentally alter how people perceive reality, as AI can manipulate perceptions to align with its intended outcomes.
AI can also create sources of misinformation to support its claims. Within the time it takes to ask a question and receive an answer, AI could fabricate a website that appears legitimate, providing false information as support for its response. This manufacturing of reality as support for misinformation can happen so quickly that people may not have any reason to question it. The line between reality and fabrication will become increasingly blurred, making it more difficult to identify and combat misinformation.
11) The Destruction of Creativity
Generative AI technology produces outputs based on the inputs it receives from humanity. A concern arising from this is that, as AI continues to analyze and process our past creations, it may end up simply rehashing the worst versions of them. By generating “new” content no better than its least inspired source material, it could usher in years of bad books, movies, and music.
Despite this concern, there is still hope that humanity will survive the generative AI takeover. Some unique artistic creations may be difficult for AI to replicate, highlighting the distinctiveness of human creativity. Additionally, preferences in art and music change over time. This constant evolution of artistic tastes and the desire for new forms of expression ensure that humanity will continue to advance and innovate, as long as we keep creating.
In Conclusion
It is essential to remain vigilant and address the concerns raised by Generative AI and large language models. By fostering collaboration among policymakers, researchers, and industries, we can develop effective regulations, maintain transparency, and ensure responsible AI use that prevents or mitigates these threats.
We’re here to help make the complex language of cybersecurity understandable. So if there are topics or issues that you’d like Ryan and me to break down in an episode, send us an email at info@fearlessparanoia.com or reach out to us on Facebook or LinkedIn. For more information about today’s episode, be sure to check out FearlessParanoia.com, where you’ll find a full transcript as well as links to helpful resources and any research and reports discussed during this episode. While you’re there, check out our other posts and podcasts, as well as additional helpful resources for learning about cybersecurity.
We aim to make cybersecurity understandable and digestible, and to guide you through what you and your business need to focus on in order to get the most benefit from your cybersecurity spend.
©2022 Fearless Paranoia