ARTIFICIAL NEURAL NETWORK TEXT GENERATION AS A NEW TYPE OF AUTHORSHIP MASKING
Abstract:
Artificial neural network text generation may be used to conceal the authorship of a text. As a result, experts must be able to distinguish natural from generated texts in order to establish authorship. In this research, human-written texts and texts generated by GPT-4.5 were analyzed using the methods of compositional-semantic, structural-semantic, and grammatical-syntactic analysis. The analysis made it possible to identify and describe typical flaws in AI-generated texts caused by violations of their implicative and referential semantics. It also revealed current issues in language modeling and the capabilities of generative language models based on neural network algorithms. The quality of the generated text depended on the additional parameters supplied with the prompt. The article describes linguistic features typical of generated texts and illustrates each case with comparative examples. The prospects for further research lie in an in-depth study of complex AI-generated text structures, e.g., preloaded idiosyncratic texts. A practical outcome may be a diagnostic complex of features based on a list of typical linguistic characteristics of AI-generated texts.

Keywords:
authorship expertise, neural networks, text generation, written speech, masking of written speech, language models
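
By way of illustration of the quantitative side of such comparisons (cf. the typological study in reference 12), simple surface statistics are a common first pass before the deeper compositional-semantic analysis the article applies. The following Python sketch is purely hypothetical: the features it computes (type-token ratio, mean sentence length, sentence-length dispersion) are generic stylometric baselines, not the diagnostic complex of features the article proposes.

    import re
    import statistics

    def surface_features(text: str) -> dict:
        """Return a few surface statistics often used as a first pass
        when comparing natural and machine-generated texts."""
        sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
        tokens = re.findall(r"\w+", text.lower())
        sent_lens = [len(re.findall(r"\w+", s)) for s in sentences]
        return {
            # Lexical diversity: unique tokens over total tokens.
            "type_token_ratio": len(set(tokens)) / len(tokens),
            # Average sentence length in tokens.
            "mean_sentence_len": statistics.mean(sent_lens),
            # Spread of sentence lengths; generated text is sometimes
            # reported to be more uniform than human writing.
            "sentence_len_stdev": statistics.pstdev(sent_lens),
        }

    # Toy samples, invented here for demonstration only.
    human = ("Short one. Then a much longer, winding sentence "
             "follows it, full of asides and qualifications.")
    generated = ("This is a sentence. This is another sentence. "
                 "This is a third sentence.")

    for label, sample in [("human", human), ("generated", generated)]:
        print(label, surface_features(sample))

In this toy example the generated sample shows a markedly lower sentence-length dispersion; an actual diagnostic complex would rest on the compositional-semantic and grammatical-syntactic features the article describes.
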
References

1. Alekseeva L. G., Alekseev P. S. Prompt language, or features of formulating queries to generative neural networks for image creation. Verba, 2024, (3): 50–61. (In Russ.) https://doi.org/10.34680/VERBA-2024-3(13)-50-61

2. Burnashev R. F., Alamova A. S. The role of neural networks in linguistic research. Science and Education, 2023, 4(3): 258–269. (In Russ.)

3. Glazova L. I., Luzgina A. D., Pugachevsky A., Kochetova A. N., Feyzullov D., Chizh A. V., Vinogradov M. Yu. Artificial intelligence as an effective communication tool. Rossiiskaia shkola sviazei s obshchestvennostiu, 2024, (33): 48–65. (In Russ.) https://elibrary.ru/jurxuo

4. Gulyaeva P. S. Large language models use in the context of rulemaking digitalization. The Academic Journal of Moscow City University. Series "Legal Sciences", 2023, (3): 126–137. (In Russ.) https://doi.org/10.25688/2076-9113.2023.51.3.11

5. Ziryanova I. N., Chernavskiy A. S. Generative language models and the phenomenon of anti-anthropocentrism – new perspectives on the linguistic paradigm of "posthuman" and "general/strong" AI. Bulletin of Baikal State University, 2024, 34(1): 144–152. (In Russ.) https://doi.org/10.17150/2500-2759.2024.34(1).144-152

6. Kozlovsky A. V., Melnik Ya. E., Voloshchuk V. I. On the approach for automatic generation of narrative-linked text. Izvestiya Tula State University, 2022, (9): 160–167. (In Russ.) https://elibrary.ru/dirfga

7. Krivosheev N. A., Ivanova Yu. A., Spitsyn V. G. Automatic generation of short texts based on the use of neural networks LSTM and SeqGAN. Tomsk State University Journal of Control and Computer Science, 2021, (57): 118–130. (In Russ.) https://doi.org/10.17223/19988605/57/13

8. Ogorelkov I. V. Peculiarities of diagnostic authorship analysis of an anonymous document based on features characterizing authorship imitation. Russkii iazyk za rubezhom, 2020, (1): 66–69. (In Russ.) https://elibrary.ru/seglhn

9. Rubtsova I. I., Ermolova E. I., Bezrukova A. I., Ogorelkov I. V. Establishing the fact of masking written speech in the text of an anonymous document: Methodological recommendations. Moscow: EKTS MVD Rossii, 2013, 64 p. (In Russ.)

10. Soldatkina Ya. V., Chernavskiy A. S. Generative language models as a crucial phenomenon of media culture at the beginning of the XXI century. Science and School, 2023, (4): 44–56. (In Russ.) https://doi.org/10.31862/1819-463X-2023-4-44-56

11. Stetsyk M. The union of linguistics and prompt engineering: Linguistic features of prompts to neural networks. Vilnius University Open Series, 2024: 155–166. (In Russ.) https://doi.org/10.15388/SV-I-II.2024.14

12. Telpov R. E., Lartsina S. V. Typological differences of natural and neural network-generated texts in a quantitative aspect. Nauchnyi dialog, 2023, 12(7): 47–65. (In Russ.) https://doi.org/10.24224/2227-1295-2023-12-7-47-65

13. Fishcheva I. N., Peskisheva T. A., Goloviznina V. S., Kotelnikov E. V. A method for classifying aspects of argumentation in Russian-language texts. Program systems: Theory and applications, 2023, 14(4): 25–45. (In Russ.) https://doi.org/10.25209/2079-3316-2023-14-4-25-45

14. Floridi L., Chiriatti M. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 2020, 30(4): 681–694. https://doi.org/10.1007/s11023-020-09548-1

15. Jurafsky D., Martin J. H. Speech and Language Processing. Pearson Prentice Hall, 2009, 988 p.

16. Maynez J., Narayan S., Bohnet B., McDonald R. On faithfulness and factuality in abstractive summarization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, 1906–1919. https://doi.org/10.18653/v1/2020.acl-main.173

17. Mikhaylovskiy N., Churilov I. Autocorrelations decay in texts and applicability limits of language models. Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference "Dialogue", 2023, (22): 350–360. https://doi.org/10.48550/arXiv.2305.06615

18. Ostyakova L., Petukhova K., Smilga V., Zharikova D. Linguistic annotation generation with ChatGPT: A synthetic dataset of speech functions for discourse annotation of casual conversations. Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference "Dialogue", 2023, (22): 386–403.

19. Radford A., Narasimhan K., Salimans T., Sutskever I. Improving language understanding by generative pre-training. OpenAI, 2018. URL: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf (accessed 10 Mar 2025).

20. Sutskever I., Vinyals O., Le Q. V. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems 27, 2014, 3104–3112.

21. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A. N., Kaiser L., Polosukhin I. Attention is all you need. Advances in Neural Information Processing Systems 30, 2017, 5998–6008.

22. Zhang C., Bengio S., Hardt M., Recht B., Vinyals O. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 2021, 64(3): 107–115. https://doi.org/10.1145/3446776

23. Zupan J., Gasteiger J. Neural networks: A new method for solving chemical problems or just a passing phase? Analytica Chimica Acta, 1991, 248(1): 1–30. https://doi.org/10.1016/S0003-2670(00)80865-X

