Three Reasons Not to Train Your Staff to Use AI—And Why (and How) We Did It Anyway


By Anna Haney-Withrow, Director, Institute of Innovative and Emerging Technologies

Florida SouthWestern State College

Abstract

This article explores Florida SouthWestern State College’s decision to launch AI upskilling sessions for professional staff, despite valid concerns about burnout, deskilling, and reduced collaboration. Rather than dismiss these challenges, the initiative embraced them head-on through a values-based approach. Guided by principles of impact, skill-building, and connection, our training sessions addressed fear and uncertainty while fostering intentional, ethical AI adoption. Drawing on current research, the article outlines three common arguments against AI training and demonstrates how each concern can be transformed into an opportunity for growth. The essay offers practical examples from our program and reflects on what it means to proceed with discernment when addressing the need for AI upskilling. It also aligns with broader themes of faculty and staff development, organizational culture, and the responsible use of generative AI in higher education.


There is a short and simple phrase I rely upon in many aspects of my life, a key to tempering my natural inclination to embrace every possibility with enthusiasm. This gem came from an advanced yoga training many years ago. At one point, a fellow student eagerly asked the teacher to provide instruction for a challenging yoga pose, confidently expressing her certainty that she had the physical prowess to accomplish it. The teacher’s face narrowed into an expression just shy of withering. “Just because you can do something,” she said, leaving plenty of time for the uncomfortable pause, “doesn’t mean you should.”

Not only has this wisdom empowered me to set things aside in yoga (no more headstands for me, thanks), but it has also sharpened my discernment about the endeavors I choose to embrace professionally. And if there’s one thing the rapid introduction of AI into our lives has made clear, it’s the need to center discernment.


By late 2023, I felt confident that I could start to provide training on generative AI for our college staff. But should I? While the promise was obvious, subtle and overt signals alike suggested there could be serious downsides: Would this training inadvertently make my colleagues more burned out or even fearful about their roles? Might it lead to a loss of critical thinking or practical skills over time? Could it deepen isolation and decrease collaboration?

These concerns mirror those we wrestled with in our classrooms—on top of the larger, urgent questions surrounding ethics and appropriate guardrails.

Why We Moved Forward Anyway

Ultimately, a cross-functional team including Human Resources, Information Technology, and the Institute of Innovative and Emerging Technologies decided that moving forward with training was the best decision. This collaboration allowed us to approach the challenge strategically, ensuring that our AI upskilling program reflected not only best practices in adopting generative AI in the workplace but also our shared commitment to the College’s mission and strategic directions.

First, we recognized that AI was already becoming an invisible thread in the fabric of higher education, shaping operations, workflows, and student experiences whether we engaged with it or not. Avoiding the conversation would have left our staff more vulnerable, not less, to the negative impacts we feared. By equipping them with knowledge and reflective tools, we could build awareness, confidence, and a shared understanding that would allow us to shape our institutional response purposefully.

Second, we framed training not as a technology mandate but as a culture-building opportunity and an exploration of how AI might reflect our values, assumptions, and systems. Our sessions were designed not just to introduce AI platforms but to invite reflection: What tasks deserve our human attention? Where can AI support problem-solving or deepen collaboration? In this way, training became a venue for co-creating a thoughtful approach to innovation, rooted in care and curiosity.

This approach allowed us to design a program that addressed the potential downsides head-on within a human-centered framework. We focused on three key values for using generative AI at work, each chosen to align with our innovative culture (our propensity to embrace possibilities with enthusiasm) while safeguarding against unintended consequences. Below, I spell out each reason we considered for not offering AI upskilling, pair it with its values-based antidote, and share an example from our training that shows the value in action.

1. AI Could Lead to Burnout and Fear. Antidote: Emphasize Impact

Despite the enthusiasm surrounding generative AI, many employees experience significant anxiety about its integration into the workplace. A 2023 survey by Ernst & Young revealed that 71% of U.S. employees are concerned about AI’s impact on their jobs, with 65% expressing anxiety over the possibility of AI replacing their roles and 75% fearing that AI could render certain jobs obsolete. These concerns often leave employees uncertain about how to engage with AI tools effectively, potentially resulting in stress and burnout (Ernst & Young LLP).

We wanted to flip that narrative. In our AI Essentials session, we didn’t start with how AI works—we started with what our colleagues care about. We posed questions like: If AI could save you two hours a week, how could you spend that time in a way that increased your impact and your enjoyment in your role? By grounding AI in personal values and meaningful outcomes, we reframed the conversation as one of agency and empowerment, not pressure.

2. AI Could Lead to Deskilling and Lower-Quality Work. Antidote: Focus on Skill-Building and Use AI as a Collaborator

A growing body of research shows that while AI can increase productivity, it can also diminish expertise when professionals lean on it carelessly. The “jagged frontier” of generative AI requires professionals to actively discern when to lean in, when to intervene, and when to rethink its use altogether (Dell’Acqua et al.).

We wanted to build a culture where AI is used as a collaborator that promotes skill-building, even as it helps people realize productivity gains. For example, in our session on using AI for presentations, instead of showcasing tools that could instantly generate slideshows, we emphasized how AI could support authentic connection with an audience, ensure accessibility, and provide personalized coaching on delivery. By modeling these practices, we helped participants build transferable skills while working with AI tools.

3. AI Could Decrease Collaboration and Mentoring. Antidote: Center Connection

One of the most subtle yet concerning risks of AI adoption is the erosion of mentoring and collaboration. Renowned futurist Amy Webb points out that productivity gains at the individual level often fail to translate into institutional performance—especially when AI use is fragmented or hidden (Webb).

Matt Beane, in The Skill Code, further warns that intelligent technologies may undermine the expert-novice relationships that are foundational to skill development. He identifies Challenge, Complexity, and Connection as the essential conditions for learning. Yet when AI is introduced without care, these conditions can erode, leaving newer professionals to “shadow learn” without guidance. In other words, while AI can accelerate outcomes, it can also unintentionally bypass the conversations, feedback, and shared practice that help people grow (Beane).

In our own sessions, especially those on communication and supervision, we intentionally positioned AI as a tool for building, not replacing, connection. We explored how generative tools can help staff express ideas with more clarity, give feedback with more intentionality, and communicate in ways that foster trust. We also emphasized transparency, encouraging participants not only to disclose their use of AI, but to share their experiments, successes, and missteps. In doing so, we built a culture of inquiry rather than competition and invited connection rather than secrecy.

Conclusion: We Trained Anyway—And We’re Better for It

Ultimately, we listened to the reasons not to train (fear, burnout, deskilling, disconnection) and let them sharpen our resolve to create a better path forward. It was a deliberate, human-centered decision to meet uncertainty with inquiry and invest in our institutional values. In many ways, our approach is still unfolding. But what we have learned so far is that the best way to prepare people for a future shaped by AI is to ask what kind of future we want to shape together: not just what we can do, but what we should do.

AI Usage and Process Statement

Portions of this article were drafted with the assistance of OpenAI’s ChatGPT and refined in collaboration with the author to maintain a personal, reflective voice and to ensure accuracy. AI tools were used to assist with outlining, drafting, editing, and citation formatting. All final content, perspectives, and interpretations are the author’s own.

Works Cited

Beane, Matt. The Skill Code: How to Save Human Ability in an Age of Intelligent Machines. Harper Business, 2024.

Dell’Acqua, Fabrizio, et al. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013, 15 Sept. 2023. SSRN, https://ssrn.com/abstract=4573321.

Ernst & Young LLP. “EY Research Shows Most US Employees Feel AI Anxiety.” EY US, 6 Dec. 2023, https://www.ey.com/en_us/newsroom/2023/12/ey-research-shows-most-us-employees-feel-ai-anxiety.

Webb, Amy. “How to Prepare for a GenAI Future You Can’t Predict.” Harvard Business Review, 31 Aug. 2023, https://hbr.org/2023/08/how-to-prepare-for-a-genai-future-you-cant-predict.
