
Is the Writing on the Wall for AI Chatbots?
Raymond Kent, Associate AIA, LEED AP BD+C, Principal, Innovative Technology Design Group Leader, DLR Group


Recently a faculty colleague described on social media how a student used an artificial intelligence chatbot (OpenAI's ChatGPT) to write a midterm paper and submitted it as their own work. Under normal circumstances, the professor would have assumed the student had competently written a paper on the topic and graded it accordingly. The rub here was that this particular professor was aware of this type of technology and knew the student was not one to produce work of the quality presented. Suspecting plagiarism, the professor followed the University's procedure, submitting the paper through the various channels, but came up empty. On a whim, he decided to try a new algorithm (GPTZero) that detects the use of AI chatbots and hit pay dirt.
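For readers curious how such detectors work: GPTZero's exact method is proprietary, but tools of this class commonly score how statistically predictable a passage is to a language model (its "perplexity"), on the theory that machine-generated prose tends to be more predictable than human writing. The sketch below is a minimal illustration of that general idea only, assuming the Hugging Face transformers library and GPT-2 as the scoring model; it is not GPTZero's actual algorithm, and any decision threshold would be an assumption.

```python
# Minimal sketch of perplexity-based AI-text detection.
# Not GPTZero's algorithm; GPT-2 and the scoring approach here are
# illustrative assumptions about how this class of tool can work.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'predictable' a text is to the language model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input as labels makes the model return the mean
        # cross-entropy loss; exponentiating it yields perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

sample = "The mitochondria is the powerhouse of the cell."
# Lower perplexity means the text is more model-predictable, which is
# one (imperfect) signal that it may have been machine-generated.
print(f"perplexity: {perplexity(sample):.1f}")
```

Real detectors combine signals like this with others (GPTZero, for example, also describes measuring variation in sentence complexity), and all of them produce false positives, which is partly why plagiarism channels can come up empty.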
While many technology companies and research universities are working on exactly this type of artificial intelligence, it opens many ethical and potentially litigious avenues that can be misused inadvertently or with malicious intent. The benefits of this technology are widely recognized, including in academia, where virtual teaching assistants can aid with coursework, freeing faculty time for research and direct instruction, and where answers to problems can be formulated from an ever-expanding knowledge base in common language and in a coherent way, giving even the likes of Google Assistant, Alexa, and Siri a run for their money.
It can create reams of text that are clear and plausible and can be expanded upon in ways that further improve understanding of a subject. This is also its downfall.
Even with safeguards built in, the output can be manipulated in less desirable ways, or in some cases wreak havoc, simply by how the inquiry is presented. Take, for instance, the construction industry: what if a chatbot were allowed to generate a legal contract or specifications for a building? Who checks the validity of the information, and to what degree of accuracy? A chatbot can produce very different answers depending on who asks the questions and how they are phrased. Is the user a knowledgeable licensed professional, or an inexperienced layperson who asks just enough questions of the AI to generate a plausible response? Then imagine that response being presented to other laypeople who lack the expertise to recognize that the information is flawed. What happens when a layperson, say an intern, hands the output to a professional who never reviews it thoroughly, and it is then passed along to a client who is also a novice? This is not to say that human-written information is infallible, only that the back check to determine the origins of authorship may be more difficult. This is not too dissimilar to when CAD programs first allowed designers to simply copy and paste a detail without always checking it for accuracy.
Detection tools like GPTZero, and even efforts by OpenAI and other chatbot developers to flag machine-generated text, are an indication that there is a real problem here. We have seen this in AI-based applications beyond the written text or spoken word, in what are known as "deepfake" videos, where AI can take a handful of images of a person and produce a fairly realistic fake video of them. Great for bringing back long-deceased actors for films such as Star Wars, but deeply troubling when it comes to geopolitical leaders being spoofed. In general, the power of these systems can be extremely useful, but they need to be approached with a real sense of caution and skepticism.
