Prompt choice for large language models: Business communications
https://doi.org/10.26907/2782-4756-2025-79-1-66-74
Abstract
This article highlights the role of large language models (LLMs) in natural language processing and the possibilities of their use for various tasks, particularly in the field of business communications. Effective interaction with neural networks requires correctly formulated prompts. Prompt engineering is becoming not only a technical but also a creative process that enables a user to build effective communication with a neural network. The article outlines various methods for classifying prompt patterns depending on tasks and situational contexts and provides specific prompt templates. Recent studies have revealed new ways of using LLMs in business communication, for example, to create pitches or to imitate the styles of famous CEOs when producing business content. We analyze the use of large language models as a tool for generating and adapting business communication texts. We have established that the choice of language means in AI-generated text affects the effectiveness of content presentation and the outcome of a business project as a whole. Thus, the proper use of generative AI, including the ability to construct queries, is an important part of digital competence that affects the success of business communication and other fields of human activity.
About the Author
E. V. Komarova, Russian Federation
Komarova Elena Valerievna, Ph.D. in Philology, Associate Professor,
76 Vernadskiy Prospect, Moscow, 119454
References
1. Hochreiter, S., Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation. Vol. 9, No. 8, pp. 1735–1780. (In English)
2. Huang, J., Chang, K. C.-C. (2022). Towards Reasoning in Large Language Models: A Survey. arXiv preprint arXiv:2212.10403. (In English)
3. Hadi, M. U., Al Tashi, Q., Qureshi, R., et al. (2023). A Survey on Large Language Models: Applications, Challenges, Limitations, and Practical Usage. TechRxiv. July 10. (In English)
4. White, J. et al. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv preprint arXiv:2302.11382. (In English)
5. van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., Bockting, C. L. (2023). ChatGPT: Five Priorities for Research. Nature, Vol. 614, No. 7947, pp. 224–226. (In English)
6. Reynolds, L. and McDonell, K. (2021). Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. CoRR, Vol. abs/2102.07350. URL: https://arxiv.org/abs/2102.07350 (accessed: 02.02.2025). (In English)
7. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E. H., Le, Q., Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models. CoRR, Vol. abs/2201.11903. URL: https://arxiv.org/abs/2201.11903 (accessed: 02.02.2025). (In English)
8. Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., Fedus, W. (2022). Emergent Abilities of Large Language Models. URL: https://arxiv.org/abs/2206.07682 (accessed: 02.02.2025). (In English)
9. Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., Ba, J. (2022). Large Language Models Are Human-Level Prompt Engineers. URL: https://arxiv.org/abs/2211.01910 (accessed: 02.02.2025). (In English)
10. Shin, T., Razeghi, Y., Logan IV, R. L., Wallace, E., Singh, S. (2020). AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. CoRR, Vol. abs/2010.15980. URL: https://arxiv.org/abs/2010.15980 (accessed: 02.02.2025). (In English)
11. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I. (2019). Language Models Are Unsupervised Multitask Learners. (In English)
12. Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E. (2022). Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. URL: https://arxiv.org/abs/2205.10625 (accessed: 02.02.2025). (In English)
13. Jung, J., Qin, L., Welleck, S., Brahman, F., Bhagavatula, C., Bras, R. L., Choi, Y. (2022). Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations. URL: https://arxiv.org/abs/2205.11822 (accessed: 02.02.2025). (In English)
14. Arora, S., Narayan, A., Chen, M. F., Orr, L., Guha, N., Bhatia, K., Chami, I., Re, C. (2023). Ask Me Anything: A Simple Strategy for Prompting Language Models. International Conference on Learning Representations. URL: https://openreview.net/forum?id=bhUPJnS2g0X (accessed: 02.02.2025). (In English)
15. Frîncu, I. (2023). In Search of the Perfect Prompt. (In English)
16. Zhao, Z. et al. (2021). Calibrate before Use: Improving Few-Shot Performance of Language Models. International Conference on Machine Learning. Pp. 12697–12706. PMLR. (In English)
17. Svendsen, A., Garvey, B. (2023). An Outline for an Interrogative/Prompt Library to Help Improve Output Quality from Generative-AI Datasets (May 2023). URL: https://ssrn.com/abstract=4495319 or http://dx.doi.org/10.2139/ssrn.4495319 (accessed: 02.02.2025). (In English)
18. Ibrahim, J. (2023). The Art of Asking ChatGPT for High-Quality Answers (Nzunda Technologies Ltd: January 2023). (In English)
19. Svendsen, A., Garvey, B. (2023). Prompt Engineering: Testing ChatGPT4 and Bard for Assessing Generative-AI Efficacy to Support Decision-Making. Available at SSRN 4495320. (In English)
20. Short, C. E., Short, J. C. (2023). The Artificially Intelligent Entrepreneur: ChatGPT, Prompt Engineering, and Entrepreneurial Rhetoric Creation. Journal of Business Venturing Insights. Vol. 19, p. e00388. (In English)
21. Anglin, A. H. et al. (2022). Role Theory Perspectives: Past, Present, and Future Applications of Role Theories in Management Research. Journal of Management. Vol. 48, No. 6, pp. 1469–1502. (In English)
22. Roccapriore, A. Y., Pollock, T. G. (2023). I Don't Need a Degree, I've Got Abs: Influencer Warmth and Competence, Communication Mode, and Stakeholder Engagement on Social Media. Academy of Management Journal. Vol. 66, No. 3, pp. 979–1006. (In English)
23. Lavanchy, M., Reichert, P., Joshi, A. (2022). Blood in the Water: An Abductive Approach to Startup Valuation on ABC's Shark Tank. Journal of Business Venturing Insights. Vol. 17, p. e00305. (In English)
24. Short, C. E., Hubbard, T. D. (2023). Do Boards and the Media Recognize Quality? An Assessment of CEO Contextual Quality Using Pay, Dismissal, Awards, and Linguistics. Academy of Management Discoveries. Vol. 9, No. 4, pp. 525–548. (In English)
25. Ouyang, L. et al. (2022). Training Language Models to Follow Instructions with Human Feedback. Advances in Neural Information Processing Systems. Vol. 35, pp. 27730–27744. (In English)
26. Radford, A. (2018). Improving Language Understanding by Generative Pre-Training. (In English)
27. Jha, A. et al. (2023). How to Construct and Deliver an Elevator Pitch: A Recipe for the Research Scientist. (In English)
28. Lounsbury, M., Glynn, M. A. (2001). Cultural Entrepreneurship: Stories, Legitimacy, and the Acquisition of Resources. Strategic Management Journal. Vol. 22, No. 6–7, pp. 545–564. (In English)
29. Nugroho, S. et al. (2023). The Role of ChatGPT in Improving the Efficiency of Business Communication in Management Science. Jurnal Minfo Polgan. Vol. 12, No. 1, pp. 1482–1491. (In English)
30. Abubakar, A. M. et al. (2019). Knowledge Management, Decision-Making Style and Organizational Performance. Journal of Innovation & Knowledge. Vol. 4, No. 2, pp. 104–114. (In English)
31. Korzynski, P. et al. (2023). Artificial Intelligence Prompt Engineering as a New Digital Competence: Analysis of Generative AI Technologies such as ChatGPT. Entrepreneurial Business and Economics Review. Vol. 11, No. 3, pp. 25–37. (In English)
32. Riina, V., Stefano, K., Yves, P. (2022). DigComp 2.2: The Digital Competence Framework for Citizens: With New Examples of Knowledge, Skills and Attitudes. Joint Research Centre, No. JRC128415. (In English)
For citations:
Komarova E.V. Prompt choice for large language models: Business communications. Philology and Culture. 2025;(1):66-74. (In Russ.) https://doi.org/10.26907/2782-4756-2025-79-1-66-74