6.1 AI and Workplace Communication

Rose Helens-Hart

A recent survey of professionals revealed that more than 80% of respondents were using AI writing tools—content generators such as ChatGPT and Gemini, as well as lower-level editors such as Grammarly—in their work (Cardon et al., 2023b). Of those who were not using these tools, some reported being prohibited from doing so. Employees, particularly knowledge workers and managers responsible for frequent communication (Cardon & Marshall, 2024), were primarily using AI for research and ideation, drafting, editing and revising messages, writing reports and longer documents, and creating social media posts (Cardon et al., 2023b). A Spring 2024 survey of on-campus FHSU students (Duffy, Helens-Hart, & Weigel, 2023) found that students who used AI for academic writing used it for similar reasons: mainly brainstorming, editing, and some drafting, with fewer using it for research. This highlights the importance of learning to use AI in your writing ethically and responsibly before entering the workforce, as you are likely experimenting with it, or even using it regularly, already.

Professionals using AI in their writing have reported believing that it improved their professionalism and efficiency (Cardon et al., 2023b). Thus, if a higher level of writing professionalism—demonstrating courtesy and care—is expected in the AI-assisted workplace, what will define the new standard for excellent communication? Cardon et al. (2023b) suggest that key competencies such as integrity (ethics), oral communication, data literacy, and creativity will set employees apart, as there are growing concerns about how AI can compromise data privacy, introduce bias or inaccuracies into business communication, and damage relationships. AI must be used in the workplace with a strong respect for moral values and interpersonal trust.

It is likely that you have already encountered, or soon will encounter, opportunities or mandates to use AI to assist you in your professional communication. How, then, will you decide which AI to use and how to use it? What parameters will you put in place around your use of AI to ensure you are being ethical and responsible when your job, your relationships, and your personal and your company’s reputations are at stake?

This section introduces a framework for AI literacy, which can empower you to make informed, strategic, ethical, and defensible decisions about when and how to use AI to construct business messages (Cardon et al., 2023a). As you develop your AI literacy, it’s essential to also consider ethical authorship (Lentz, 2024)—a complementary concept that focuses on producing AI-assisted communication that is transparent, accurate, audience-centered, and reflective of your values and integrity as a communicator. Together, AI literacy and ethical authorship guide you in using AI effectively and responsibly in your professional communication.

In this section, we explore Lentz’s (2024) definition of ethical authorship along with the four key capabilities of AI literacy: application, authenticity, accountability, and agency. These capabilities provide the foundation for using AI ethically and strategically in business communication. As we break down each capability, we will explore reflective questions to guide your interactions with AI tools, helping you ensure that your use of AI aligns with ethical standards and enhances your communication effectiveness.

AI Literacy and Ethical Authorship

Lentz (2024) considered three ethical frameworks (Aristotle’s Virtue Ethics, Kant’s Categorical Imperative, and Mill’s Utilitarian Ethics) to create a definition of ethical authorship specifically for business communicators. In addition, she proposed a series of reflective statements to help guide communicators toward ethical authorship, which we will look at in a moment. She defines ethical authorship in the context of AI-assisted writing as:

“The authorship of business discourse in ways that positively reflect an author’s values and that create ethical, clear, complete, transparent, and audience-centered communication. In addition, ethical authorship requires that an author be aware of and mitigate the risks of using AI-generated content, including but not limited to the use of AI hallucinations (false data) and the use of copyrighted material” (p. 597).

This definition fits well with the framework we will use for developing AI literacy for business communication (Cardon et al., 2023a). This framework consists of four capabilities: application, authenticity, accountability, and agency (p. 277). Combined, these capabilities make it possible to be an ethical, AI-assisted author (Lentz, 2024) of business communication. In other words, AI literacy enables ethical authorship.

Let’s examine each capability one by one to understand how they support ethical authorship, along with a series of questions (adapted from Cardon et al., 2023a and Lentz, 2024) that you can ask yourself to guide your actions when interacting with AI to develop business messages.

Application: Professionals need to be familiar with AI’s capabilities and limitations and with how to align them with specific tasks (Cardon et al., 2023a). The widespread use of these applications by college students and professionals suggests that the tools are relatively easy to use. However, to maximize their effectiveness, professionals must learn how to refine prompts and adjust them for better results (e.g., modifying tone, style, or level of detail).

The proliferation of AI applications for writing will outpace any textbook publication, so it is impossible to give you an exhaustive explanation of the applications available to you. Rather, you will need to listen to and read about what is happening in your industry and explore and experiment with applications yourself. As you do this, consider some guiding questions (Cardon et al., 2023a, p. 278) to improve your application capabilities:

  • Based on their capabilities and limitations, which AI should I use?
  • What are best practices for optimizing my use of this AI? (e.g., use of commands, prompts, or queries)
  • What underlying dataset informs the AI? What are the strengths and weaknesses of this dataset, and how will they affect the AI’s output?

Authenticity: Professionals must prioritize genuine, personalized communication when using AI (Cardon et al., 2023a; Cardon et al., 2023b; Coman & Cardon, 2024; Deptual et al., 2024). Despite the growing capabilities of AI, AI-generated messages won’t reflect your unique voice and will not be tailored to the specific needs of those receiving the messages. AI-mediated communication is seen as less authentic (less sincere and caring) by professionals, though the messages are still seen as professional and achieving their practical purpose (Coman & Cardon, 2024; Piller, 2024). This reinforces the need for communicators to consider multiple goals when working with AI.

Every message you send says something about what you want, who you are, and what your relationship with another person is. What do you want to practically achieve with your communication (such as getting information from someone)? What do you want your communication to say about your identity as a professional (such as that you are kind and competent)? And what do you want your communication to say about your relationships with others (such as that there is a power difference)? Each of these questions needs to be considered whether you are writing with the assistance of AI or not. It may not matter to a shopper if AI (rather than a human) generates a summary of product reviews for an online boutique, but AI involvement does matter and can damage relationships with stakeholders in scenarios that carry more significance, such as crisis communication and delivering bad news (Piller, 2024).

To focus on producing genuine, human-centered communication, consider the following questions (Cardon et al., 2023a, p. 278; Lentz, 2024, p. 604):

  • To what degree have I inserted my own voice, personality, and style into the message?
  • Does the message meet my identity goals and reflect who I am and want to be as a professional?
  • To what degree have I ensured the message focuses on my receiver’s needs and relational goals?
  • To what degree have I built trust with my receiver through this message?

Accountability: Professionals must take responsibility for the accuracy and appropriateness of AI-generated or AI-influenced content used in their communication (Cardon et al., 2023a; Cardon et al., 2023b; Getchell et al., 2022; Lentz, 2024). This includes using AI in a fair and equitable manner and developing strong information literacy (the ability to find, evaluate, and use information effectively) (Cardon et al., 2023; Dobrin, 2023). This means making sure AI content does not reinforce biases or discrimination against certain groups of people and that all content is verifiable.

AI can “hallucinate” and generate information that sounds convincing but is actually false, made-up, or inaccurate. For example, if you ask ChatGPT for a market analysis, it might generate a report claiming a new product has a 25% market share in Europe, even though it hasn’t launched there. Or you might ask Gemini for a bio for a keynote speaker at your professional conference, and it will include a fake citation for a book the person hasn’t written. This happens because the AI doesn’t truly understand the information; instead, it predicts what words or facts are most likely to follow based on its training data. The AI is making educated guesses based on patterns it has learned, not verifying facts. This is why it’s important to double-check AI outputs, especially when dealing with important or specialized topics.

AI mistakes in your writing tend to be judged more harshly than human mistakes, highlighting the importance of maintaining high standards of reliability in AI-mediated communication (Cardon et al., 2023a). To keep your accountability in mind, consider the following reflective questions (Cardon et al., 2023a, p. 278; Lentz, 2024, p. 604):

  • Have I verified the content of my message as factually correct?
  • Is the logic of the message solid and coherent?
  • Does the message contain depth? What perspectives may have been left out?
  • Do my stakeholders have equal access to the AI I’ve used?
  • Have I given credit and attribution to the original authors of content in my message when quoting, summarizing, or paraphrasing ideas that are not my own?
  • Have I respected my organization’s terms for using AI at work?
  • Have I checked my communication for bias and corrected any bias reflected in my work?
  • Have I done all I can to make sure I am not misleading my receivers?

Agency: Retaining control of AI-mediated communication means being the “human-in-the-loop,” ensuring AI is used as a tool to enhance your decision-making, not replace it (Cardon et al., 2023a; Dobrin, 2023). You must remain actively involved and be the ultimate decision-maker in the communication process, even when using AI. An AI might suggest or draft something, but you are the one who reviews it, edits it, and makes sure it fits your purpose.

To draw your attention to maintaining control and making your own choices when using AI, consider the following questions (Cardon et al., 2023a, p. 278; Lentz, 2024, p. 604):

  • Am I retaining or expanding my personal choices through the use of AI?
  • Am I enhancing my knowledge, skills, and human potential while using AI?
  • Can I make independent human decisions while using AI?
  • Have I considered other choices I could make when I am tempted to use AI unethically or in ways that are not allowed in my place of work?
  • Have I made the choice to use AI freely or because I am desperate, possibly running out of time or not understanding how to complete a task?

AI is disrupting business communication practices, and it is likely you will use AI tools in your professional and academic work if you haven’t already (Cardon et al., 2023a; Dobrin, 2023; Getchell et al., 2023). While these tools offer potential benefits like increased efficiency and professionalism, it’s essential that you use them responsibly and ethically to enhance your abilities and to avoid costly mistakes in decision-making and damaged reputations (yours and your business’s). To do this, you must be an ethical user of AI (Lentz, 2024) and develop AI literacy (Cardon et al., 2023a), a skill set that enables individuals to understand AI’s capabilities and limitations (application), retain their unique voice and enhance their relationships (authenticity), and exercise human judgment, control, critical thinking, and creativity in the communication process (accountability and agency).

License


Introduction to Professional Development Copyright © 2022 by Rachel Dolechek & Rose Helens-Hart is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
