10.3 Understanding AI Writing Applications (AIWAs)
There are a variety of AIWAs you can use, freely or by subscription, to generate text and images, proofread content, help you research, and more. Your first task in the effective management of AI (and in AI literacy) in your business communication is understanding how AIWAs work and choosing tools that align with your communication tasks (Cardon et al., 2023a). By understanding these tools, you can better select the type of assistance you need to improve your communication while staying within the ethical and legal parameters of your organization.
Let’s start with some vocabulary and conceptualization of AIWAs and how they work.
- Natural Language Processing: Natural Language Processing (NLP) is a branch of artificial intelligence focused on enabling machines to understand, interpret, and generate human language. It includes tasks like text classification, language translation, sentiment analysis, and more.
- Large Language Model: A large language model (LLM) is a type of artificial intelligence that can understand and generate human-like text. It works by analyzing vast amounts of written data it is given as training material (like books, websites, and articles) to learn patterns in language. This training allows the model to predict what words or sentences are likely to come next based on the input it receives. LLMs, like ChatGPT, can answer questions, write essays, summarize information, and hold conversations, all by recognizing patterns and relationships in language. LLMs don’t think in the way humans do, but they can simulate conversation by using the information they’ve learned. These models are called “large” because they have billions of parameters (adjustable values learned during training) that help them make more accurate predictions and provide more realistic responses.
- Training Materials: Training materials for LLMs refer to the large amounts of text, images, or other data used to teach the AI how to perform tasks like writing, answering questions, or creating art. Training materials are selected to represent a wide range of topics and language uses. However, they are inevitably incomplete and biased because it’s impossible to include every perspective and topic the AI might need to know, and some content is not digitally available. This means the AI might not understand certain niche topics, cultural nuances, or new developments that weren’t part of its training data. As a result, LLMs may produce incorrect or limited information on topics outside their training data.
- Generative AI (GenAI): This term refers to a broad category of artificial intelligence that can create content, like text, code, or images. LLMs like Gemini and ChatGPT fall under this umbrella.
- AI Writing Tools (AIWT): This term encompasses a specific set of software applications that use features of GenAI to assist with the writing process. While some AI writing tools may have basic content generation capabilities, their primary focus is on tasks like grammar checking, paraphrasing, or suggesting sentence improvements. Grammarly, for example, is an AI-powered editing tool, but it isn’t categorized as a pure GenAI tool in the same way as an LLM.
- Prompt Engineering: Prompt engineering is the practice of designing and refining the input (or “prompt”) given to an LLM, like ChatGPT, to get the best possible output or response. A prompt can be a question, instruction, or description. Since models like ChatGPT generate text based on the prompts they receive, the way you phrase or structure your prompt can greatly influence the quality and relevance of the response.
- AI Hallucinations: AI hallucinations refer to situations where an AI, like ChatGPT, generates information that sounds convincing but is actually false, made-up, or inaccurate. For example, if you ask ChatGPT for a market analysis, it might generate a report claiming a new product has a 25% market share in Europe, even though it hasn’t launched there. Or you might ask Gemini for a bio for a keynote speaker at your professional conference, and it might include a fake citation for a book the person hasn’t written. This happens because the AI doesn’t truly understand the information; instead, it predicts what words or facts are most likely to follow based on its training data. The AI is making educated guesses based on patterns it has learned, not verifying facts. This is why it’s important to double-check AI outputs, especially when dealing with important or specialized topics.
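The idea of "predicting what words are likely to come next," mentioned in the definitions above, can be illustrated with a toy sketch. Real LLMs use billions of learned parameters rather than simple word counts, but the underlying principle, predicting likely next words from patterns in training text, is the same. The training sentence and function names below are purely illustrative:

```python
# Toy illustration of next-word prediction, the core mechanism behind LLMs.
# This counts which word follows each word in a tiny "training" text
# (a bigram model), then predicts the most common follower.
from collections import Counter, defaultdict

training_text = (
    "the report is due friday the report is late "
    "the meeting is scheduled for friday"
)

# Tally, for every word, which words were observed immediately after it.
next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = next_words.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("report"))  # "is" -- the only word ever seen after "report"
print(predict_next("the"))     # "report" -- seen twice, vs. once for "meeting"
```

Notice that the model never "understands" reports or meetings; it only reproduces statistical patterns from its training data. This is also why hallucinations happen: a fluent-sounding continuation can be statistically likely yet factually wrong.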
Now that you have a better understanding of what AIWAs are and how they work, the next several sections of this chapter examine how they are being used for specific tasks in business communication.