As large artificial intelligence (AI) models continue to advance, generative AI tools such as ChatGPT, Wenxin Yiyan, and Bing Chat have had a significant impact on education. These tools can analyze reading materials, translate texts, and write and polish papers according to user instructions, and they have attracted considerable attention and use among college students.
So what are the characteristics and limitations of texts written by generative AI tools? As far as writing is concerned, does students' use of AI conflict with the university's goals in offering general humanities courses? And in the future, can we transform such generative AI tools from "accomplices" in academic misconduct into "helpers" for humanities education?
Generative AI writing tools bring new challenges to teaching
The author teaches "Comparative Chinese and Western Culture" at a Sino-foreign joint-venture university. It is a required humanities course, and every year more than 4,000 students enroll and take part in its assessment. In the past, many freshmen would carry excerpts of their high school compositions into college classes and quote them without realizing it, creating problems of academic misconduct.
After OpenAI launched ChatGPT in 2022, many Internet companies launched their own large language models. Since 2023, incoming freshmen have kept pace with the times, writing assignments and final papers directly with the latest technology. As a result, the once-high plagiarism-check rates of students' short papers have dropped markedly, while the traces of AI-assisted writing have increased significantly.
Among the thousands of final papers for the course, more than 10% bear obvious signs of having been generated entirely with AI tools, and many others appear to have been partially written with AI software. While reviewing assignments, the author often finds that some students forget to delete the prompt they fed the AI at the end of the text, forget to remove the trial watermark of an AI writing tool, or use an English-language AI tool to generate content and then run it through translation software into Chinese, producing text riddled with strange grammatical errors.
For students, using AI tools to ghostwrite papers is obviously tempting: in just a few minutes, they need only enter the topic required by the course to generate an article that reads smoothly and has a good chance of passing the system's plagiarism check. During the busy final weeks of the semester in particular, this convenience is hard to resist. According to the author's statistics, among submissions showing features of AI writing, the shortest editing time was only one minute. From a teaching perspective, however, the emergence of such AI tools has placed many new demands and challenges on teachers' work.
AI is indeed producing output, but it is not writing papers
In the past two years, many AI-generated papers with complete content and smooth logic have appeared online, making it hard to tell whether they were written by real people. Yet among the thousands of final papers the author has graded, the hallmarks of AI writing are unmistakable.
The first is similarity in format and structure. Such articles often open paragraphs with words like "first," "second," "finally," and "in summary," trying to give readers an impression of clear logic and orderly arrangement. Yet each paragraph offers only a few generalities, which makes one wonder: does such a thin point really need to be divided into so many levels?
At the same time, because a generative AI writing tool does not genuinely think but instead retrieves and recombines language based on the keywords the user enters, it places great emphasis on outputting those keywords, presenting its text in an answer-style format built around them. This results in the homogeneous format and structure of such texts.
For example, for a paper on the theme "Discussing the Differences between Chinese and Western Culture from an Educational Perspective," students writing with AI produced texts with a very uniform format, one might even call it "neat and tidy," typically revolving around a few keywords stated point by point. On closer inspection, however, such an article has a rigid, answer-sheet fragmentation that is far from the "complete writing" the course requires.
Take a short paper on topics related to educational traditions as an example. The AI-generated content mentions the respective characteristics of Chinese and Western education. These conclusions are familiar to everyone, but the views they convey are completely different from those in the corresponding topics of our course. Students who combined what they learned in the course would never list such one-sided statements. In our unit "Academies and Universities," for instance, we compared the academies of Song Dynasty China with the universities that gradually emerged alongside the rise of cities in the High Middle Ages in the West, and we provided students with papers and bibliographies for further reading. The AI-generated short papers involve none of the relevant cases or historical materials.
Being "fed" answers continues to deepen students' stereotypes
In fact, the reason our course is listed as a required general humanities course is that its teaching objectives include "enhancing students' understanding of the diversity of Chinese and Western cultures, and on this basis forming and strengthening a sense of identity with their own culture." Through comparative study, the course hopes to break some of students' prior impressions and give them a broader, more diverse learning perspective. AI-generated content, however, keeps deepening students' stereotypes with homogeneous views of uncertain accuracy, promoting cultural bias.
The reason is that such AI writing tools are not genuine thinking subjects. Their mechanism is to extract data from a large body of human-generated text and synthesize answers from it. Those answers therefore depend on which data from the online world is fed in, and in what quantities, which easily leads to problems of bias.
At the same time, AI-generated content cannot guarantee accuracy. In one assignment, for example, the Song Dynasty poet Lu You's "Thoughts on an Autumn Night, as Dawn Nears, Going Out the Fence Gate to Greet the Coolness" was mistakenly attributed to the Tang Dynasty poet Du Mu, a basic factual error. There were also erroneous claims such as "the Ming Dynasty novel A Dream of Red Mansions," and when paired with references for which no evidence exists, the result can only be described as serious nonsense.
The logic of AI-generated articles is also chaotic. Alongside the factual errors, the writing lacks clear, reasonable, orderly reasoning, which further aggravates the "nonsense" quality of such texts. One assignment attempting to discuss Chinese-Western cultural differences from the perspective of economic development claimed that the advantage of Chinese culture lies in its focus on the relationship between family and society, a view that also appears in many other AI-generated assignments on entirely different topics. Not to mention that repetitive, muddled sentences take up much of the text.
Shifting AI from "accomplice" in academic misconduct to "learning assistant"
Generative AI tools have become ever more deeply involved in academic writing at colleges and universities, and regulating and governing such technology is urgent. Yet in confronting the texts students have generated with AI over the past two years, the author has also noticed some interesting phenomena, which prompt a question: beyond managing AI use at the normative and institutional level, can teachers reform course design, teaching models, and the like, so that the technology shifts from abetting "academic misconduct" to serving as a "learning assistant"?
First, such generative AI tools are not good at producing papers on more complex and finely specified problems. The final assessment of the course asks students to choose one of three questions to write on: two are broad, and one is more specific (requiring students to set up a concrete historical situation and characters). As it turned out, every student identified as submitting an AI-written assignment chose one of the first two questions, and the features of AI-generated content in those papers were very typical.
Looking back at the short papers assigned during the semester, their writing topics were closely tied to the corresponding course units, listing the specific literature students needed to read and the practical tasks they needed to complete. For the unit "City and Commerce," for instance, students had to study the "Pingjiang Map," visit the ancient city of Suzhou and the Suzhou City Planning Exhibition Hall, and, drawing on urban-studies treatises by Chinese and foreign scholars, discuss the historical evolution of the ward-and-market (fangshi) system in Chinese urban planning. Papers on topics of this kind were almost never AI-generated. In designing course content and tasks, therefore, teachers may wish to avoid questions that invite generalities, tie questions more closely to course units, and even strengthen practical and experiential requirements to promote greater learner participation.
Second, the papers generated by the foreign AI software students chose differ noticeably in style from those generated by domestic AI software. This raises a question: do domestic and foreign generative AI tools differ in "writing" ability, or do the students who choose different tools differ in how they pose questions and organize text?
In fact, even when AI is used to generate a full text, the quality of articles on the same topic varies. Teachers can therefore use this phenomenon in class to guide students to think critically about AI-generated content from a comparative-cultural perspective: compare texts written by students with texts generated by AI; compare the answers different AI tools give to the same topic or question; compare texts generated by free and by paid AI tools, asking what the differences are and why they arise. Teachers can then turn this process of comparison and analysis into assignments. On one hand, this prevents students from simply feeding the exam questions to an AI and reaping the results; on the other, it guides them to use new technologies rationally while deepening their understanding of the course's themes and topics.
Faced with generative AI tools, rather than worrying that students will become "lazy" and therefore resisting or banning them, teachers could transform them into assistants for new teaching tasks, ones that let students exercise more autonomy, heighten their sense of participation, and actively discover and raise questions. After all, the quality of an AI-generated article is closely tied to the angle from which the question is asked.
Foreseeably, more and more coursework will bear traces of AI generation. Beyond new experiments in teaching, we hope schools will draw up sounder regulations for the use of AI technology. Here we might leave one more question: today's generative AI is often used by students to ghostwrite papers, so will there one day be AI software that automatically reviews them? If students knew that teachers were "fighting magic with magic" by using AI to grade, would they still be so bold in relying on AI to ghostwrite entire papers?