The Hidden Dangers of AI
The significant business and personal risks of generative AI are being ignored in the headlong rush to adopt these tools
The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country.
– Edward L. Bernays, Propaganda
ChatGPT, the interactive Artificial Intelligence (AI) powered chatbot from OpenAI, is taking the world by storm. By allowing users to interact with it conversationally, the generative AI model has shown that it can perform a host of functions, from researching topics, writing essays and news stories, imitating help desk agents, and answering technical questions, to analyzing spreadsheets and user-entered data on the fly. The possibilities seem endless, and needless to say, the technology is driving a flurry of activity as people and companies work to incorporate it into their lives and business processes, and as competitors rush to enter the field.
But are people leaping before they look? I think the answer is yes: there are several downsides to this type of AI that people and companies are either ignoring or unaware of.
Manipulation
It has been repeatedly shown that traditional search engines and social media platforms filter and manipulate their content to advance particular social and governmental agendas. “Free” information does not equal accurate or unbiased information.
Is there any reason to expect that interactive AI will be any different in the results it delivers? Hardly. If anything, the bias in the answers these platforms provide is on open display. While I expect some effort will be made to make those biases less obvious, that only illustrates the point: these services are not unbiased oracles, but platforms that reflect the biases and intents of their creators.
As these services gain a greater foothold in business and in our lives, my prediction is that these biases will be acknowledged but ignored when weighed against the cost- and time-saving advantages of the tools.
Stupefication
The impact of technology on humanity, while delivering enormous benefits in terms of reducing poverty and increasing life spans, also has real downsides in degrading our ability to think for ourselves. Who has not dealt with a retail employee who cannot make change without a calculator?
If anything, this dumbing down of students is accelerating, to the point that many schools, even those with the highest levels of funding, cannot show that any of their students are working at grade level. Rather than address the root problem by focusing on the core skills of reading, writing, and arithmetic, the answer has instead been to throw technology at the problem by mandating that every student have a laptop, tablet, or calculator.
By not teaching children how to think (and instead focusing on what to think), we are making them increasingly dependent on technological crutches, and ChatGPT has already advanced to the head of the class as a tool for student cheating. This has set off an arms race among companies offering tools to detect such cheating, with the result that legitimate student work is now discounted alongside the output of students who let AI crutches do the work for them.
Identification
Just as with cheating in school, employees are already “cheating” in business, exposing their companies to significant risk when they use ChatGPT and similar tools to produce their work output, whether business plans or software code. Everyone who uses these tools exposes the information in their questions to the companies behind them. Confidential information is already showing up in answers, and the data-privacy and information-exposure risks of using these tools in business are enormous.
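To make the exposure mechanism concrete, here is a minimal sketch in Python of what most chatbot integrations do under the hood, using OpenAI’s public chat-completions endpoint. The document text and the summarization prompt are hypothetical placeholders; the point is that whatever an employee pastes into a prompt, confidential or not, is transmitted verbatim to a third-party server, where it may be logged and retained under the provider’s data policies.

```python
import os
import requests

# Hypothetical example: an employee pastes an internal document into a prompt.
CONFIDENTIAL_TEXT = "Q3 revenue projections: ..."  # placeholder for sensitive data

# The full prompt, including the confidential text, leaves the company's
# network and is sent to a third-party API endpoint outside its control.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": f"Summarize this document:\n{CONFIDENTIAL_TEXT}"},
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

The specific API is beside the point; any such integration moves the prompt, and everything pasted into it, off premises.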
Businesses are not the only ones at risk; individuals who use these tools also expose themselves to governmental identification and monitoring. Government agencies have already been exposed as monitoring social media platforms to identify users flagged as potential problems because of their interest in disinformation, however “disinformation” happens to be defined at the moment.
Generative AI will extend this trend as it becomes a more ubiquitous information resource, eventually supplanting today’s traditional search engines. Though, to a certain extent, the introduction of artificial intelligence in government could be a welcome change from the general lack of intelligence that currently exists there.
Conclusion
So what can we do to reduce these dangers? Step one is to admit that you (we) have a problem, and reading this article and becoming aware of the dangers is that start. From there, how you use these technologies in your personal or business life is up to your personal or business risk tolerance; at this point, however, you can no longer say you weren’t aware of the risks. Good luck!
— This article was written by an actual human and not a generative AI program (but then again, how would you know?)