The One-Sentence Problem
Every tech tool that achieved mass adoption in education had a clear, one-sentence purpose that anyone could articulate:
- Email: This is how I communicate.
- Excel: This is how I organize and analyze data.
- PowerPoint: This is how I present information.
- Blackboard: This is where my course lives.
- Zoom: This is how I meet with people remotely.
Does AI have this kind of sentence?
If you're a power user, you know it intersects with all of the above. It's a writing tool and a research tool and an analysis tool and a brainstorming tool and a coding tool and an administrative tool. It doesn't replace one specific workflow. It sits on top of all of them.
And that's exactly what makes it so hard to adopt.
AI is everything and nothing at the same time. The answer to "what do I use it for?" is "it depends on what you're already doing."
Three Questions Leaders Are Trying to Answer
How do I keep up? The same way you keep up with anything in your domain — you seek it out. You stay plugged in. You follow what's happening with AI within your specific discipline. This is just professional development. Nothing new.
How do I learn it? The same way you learn most things — by doing it. Repeatedly. Over time. Not a single training event. You have to use it to learn it.
How do I integrate it? This is where everyone stalls. Unlike email, whose adoption was effectively forced and whose purpose was clear, AI doesn't yet give most people an obvious reason to use it.
"So we don't get left behind" is a competition argument — but where's the use case? People don't adopt tools because of abstract competitive pressure. They need to see, concretely, what the tool does for their work.
And that's the fundamental problem. The institution can push. It can provide resources, subscriptions, training. But it can't hand someone a reason. The reason has to come from the individual understanding their own work well enough to see where AI fits.
• • •
The Catch-22
Which creates a catch-22.
There's a tempting idea: couldn't you use AI to figure out how to use AI? If you prompted a model with your school's mission, strategic plan, enrollment data, and curriculum structure, couldn't it help you plan, and even drive, its own integration?
And partially, yes. You can do that. You can feed a model your institutional context and get back a reasonable strategy document. It will suggest where AI fits into advising workflows, how to redesign syllabi, where to automate administrative bottlenecks. That output has real value.
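To make that first step concrete, here is roughly what it looks like in practice. This is a minimal sketch, assuming the OpenAI Python SDK and hypothetical placeholder filenames for the institutional documents; any model or vendor would work the same way.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and hypothetical local files holding the institutional context.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder filenames -- swap in your school's actual documents.
docs = ["mission.txt", "strategic_plan.txt", "enrollment_summary.txt", "curriculum.txt"]
context = "\n\n".join(Path(name).read_text() for name in docs)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You advise a university on integrating AI into its operations.",
        },
        {
            "role": "user",
            "content": (
                f"Institutional context:\n\n{context}\n\n"
                "Draft an AI integration strategy: where AI fits into advising "
                "workflows, how to redesign syllabi, and which administrative "
                "bottlenecks to automate first."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```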
But it breaks down in two critical places.
First, implementation doesn't happen at the institutional level. It happens at the individual level. One professor, one workflow, one bottleneck at a time. The AI can generate the playbook, but it can't sit with someone during office hours. It doesn't know that Dr. So-and-so in the English department hasn't opened a new software tool since 2014. It doesn't know the provost is skeptical because the last tech initiative was a disaster. It can't read the room.
Second, and this is the deeper problem: the people who need AI guidance the most are the least equipped to prompt for it. That's the catch-22. You have to already understand AI well enough to ask it the right questions. If you could do that, you wouldn't need the help.
So the AI can be part of the solution, but only if a skilled operator sits between the institution and the tool — someone who understands the school deeply enough to ask the right questions and understands the AI deeply enough to build something useful from the answers. The AI isn't the teacher. It's the textbook. You still need the instructor.
• • •
We've Done This Before
The good news is that large-scale technology adoption in education isn't unprecedented. We've watched this movie several times: email, learning management systems like Blackboard, videoconferencing with Zoom.
Every one of these required institutional infrastructure, dedicated support people, training, and time. The pattern is clear: adoption doesn't happen by memo. It happens by investment.
But here's the problem. None of these parallels actually hold for AI.
• • •
Where the Parallels Break Down
Email is deterministic. You click send, it sends. Blackboard is deterministic. You upload a file, it's there. Zoom is deterministic. You click the link, you're in the meeting. The tool works the same regardless of who's using it.
AI doesn't work like that.
Two people with the same ChatGPT subscription get fundamentally different results. The output depends entirely on what the operator brings to it — their clarity of thought, their ability to describe what they need, their understanding of their own workflow. Give two deans the same prompt and watch what happens. The gap isn't access. It's fluency.
And there's a second problem the historical parallels don't capture: AI doesn't stabilize. Email hasn't fundamentally changed in 25 years. Zoom is the same tool it was in 2020. But AI capabilities shift every few months. New models. New features. New limitations. So even if you train someone today, what you taught them may be partially obsolete by next semester. A one-time training model doesn't work when the tool keeps changing.
AI is skill adoption dressed up like tool adoption. It looks like software, so people treat it like Zoom. But it functions like writing or teaching.
• • •
The Core Distinction: Tool Adoption vs. Skill Adoption
This is the crux of the entire argument. There are two fundamentally different kinds of technology adoption, and most institutions — and most funders — are treating AI like the wrong one.
Tool adoption is learning the interface. Where to click. How the menu works. The institution buys the tool, rolls it out, trains people on the UI, and it performs the same regardless of who's using it. The investment is in access and onboarding. Email. Blackboard. Zoom. Microsoft 365. You learn it once and you're mostly good.
Skill adoption is entirely different. The tool is only as powerful as what the operator brings to it. The interface may be simple — anyone can open it. But the value you extract depends on your clarity of thought, your ability to frame problems, your domain knowledge, and your practice over time. Two people with identical access produce wildly different outcomes. The investment isn't in teaching people where to click. It's in cultivating the capacities that make them effective operators.
The parallels that actually hold for AI aren't email and Zoom. They're:
- Google Search and information literacy. Same tool, wildly different outcomes. When the internet hit higher ed, two students with the same Google access could produce a research paper or a mess. That's why information literacy became a whole discipline. Librarians didn't just teach people where to search — they taught people how to think about searching.
- Excel. One person tracks a grocery list. Another builds a financial model. The gap between those users isn't access — it's fluency. And nobody runs a single Excel training and calls it done.
- Writing. Everyone has a word processor. Not everyone can write. The tool is identical. The output is entirely dependent on the operator's clarity of thought, their ability to structure an argument, their vocabulary.
- Teaching itself. This may be the parallel that resonates most for this audience. Two professors can have the same curriculum, same classroom, same students — and deliver completely different experiences. Pedagogy is non-deterministic. It's operator-dependent. That's why universities didn't just hand faculty a syllabus template and say "go." They built teaching and learning centers. Peer observation. Ongoing faculty development. Because getting better at teaching is a practice, not a deployment.
The thread connecting all of these: when the tool's value depends on the operator's skill, the investment isn't in the tool — it's in developing the operator. And that development is continuous, individualized, and cannot be standardized from the top down.
• • •
The Answer
How do faculty keep up with AI?
The same way they got better at teaching.
Not one workshop. Not a manual. Not a subscription. Ongoing practice. Peer learning. Support embedded into their actual processes. Institutional commitment to developing that skill over time. And measurement — are people actually getting better?
This is a fundamentally different investment model than what most institutions and funders are currently considering. It's not about buying the technology. It's about building the human infrastructure to make the technology useful.
• • •
The Gap
Most institutions don't have the infrastructure to do any of this. They don't have a way to measure whether faculty are getting better at using AI. They don't have people who can facilitate that development. And the people they'd need occupy an intersection that barely exists as a discipline yet: fluency with the technology combined with expertise in teaching and learning development.
IT handles tool adoption. The provost's office handles academic strategy. The Center for Teaching and Learning handles pedagogy. But AI fluency sits across all three, and none of them own it.
What's needed is a new institutional function — the equivalent of what teaching and learning centers became for pedagogy, but for AI. A person, a team, a resource center that can:
- Design ongoing AI development programs for faculty and staff
- Offer embedded support — office hours, one-on-one coaching, workflow consultations
- Stay current as the technology evolves and translate those changes for the institution
- Measure progress and adapt the approach
- Champion AI literacy across every department, not as a mandate, but as a practice
This is not a workshop series. This is a permanent function. Every campus needs an AI lead, a resource center, and ongoing infrastructure for skill development. That's the investment. That's the specific, fundable thing.
• • •
The Urgency
Look at the adoption curves for every prior educational technology. Each one had one of two things: a long runway — 10 to 18 years of gradual institutional adoption — or a crisis that forced instant change, the way COVID did for Zoom and online learning.
Generative AI has neither. There's no 15-year window. The technology is moving too fast, the workforce is changing too fast, and students are already using these tools whether faculty are ready or not. But there's also no external crisis forcing everyone's hand overnight.
The forcing function has to be built intentionally. That's what makes this moment different from every one that came before. Institutions can't wait for a crisis to push them into adoption, and they can't afford to take 15 years. The window is now, and the investment is in people, not platforms.
• • •
Anticipating the Pushback
This argument will meet resistance. Here's where it's likely to come from, and how to think about it.
"We adopted Teams in six months. Why is this different?"
Because Teams is tool adoption. It works the same for everyone. AI doesn't. If you want to see the difference in real time, give two of your deans the same ChatGPT prompt and compare the results. The gap will be immediately obvious. The technology is identical. The output depends entirely on the operator.
"We don't have the budget for a new function."
Many institutions are already under-resourced. Asking a president who's struggling to keep the lights on to create a new institutional function is a legitimate tension. But the minimum viable version doesn't have to be a center. It could be one faculty fellow with a course release. It could be a shared resource across a consortium of schools. It could be a fractional AI lead who serves multiple institutions. The key is that somebody owns it.
"Can't the Center for Teaching and Learning handle this?"
In theory, yes. In practice, CTL staff themselves often don't have the technical fluency yet. That's the chicken-and-egg problem: the people who would logically own faculty AI development are in the same boat as the faculty they'd be supporting. That means either investing in CTL capacity first or bringing in someone from outside the traditional structure.
"The vendors will handle training."
Microsoft, Google, OpenAI — they're all building education-specific tools with onboarding baked in. And they will handle the tool adoption side. They'll teach the interface. But vendor training doesn't teach the skill. It doesn't help a history professor figure out how to use AI for their specific research methodology. It doesn't help an admissions office rethink their workflow. That's the gap between the platform and the practice.
"You compared AI to teaching, but we're bad at faculty development too."
This is the sharpest pushback, and it comes from inside the house. If the model for AI skill development is pedagogy, and most institutions are honestly mediocre at ongoing pedagogical development, then the argument seems to advocate replicating something that doesn't work well. The answer is: this is an opportunity to build it better. The AI function doesn't have to inherit the limitations of existing faculty development. It can be leaner, more embedded, more responsive. But the acknowledgment is important — we're not pretending the existing model is perfect.
"Show me the ROI."
Funders want measurable outcomes. What changes when you invest in this? At minimum, the measurement framework should include faculty AI fluency benchmarks, integration into course design, administrative efficiency gains, and student preparedness metrics. The field is young enough that hard data is scarce, but the directional case is clear: institutions that build this capacity now will be meaningfully ahead of those that wait.
"Students are ahead of faculty. Let them lead."
Some schools are already experimenting with student AI fellows who support faculty. It's cheaper, it builds student skills, and it addresses the gap immediately. This is a real option and should be part of the ecosystem. But it's not sufficient on its own — institutional AI strategy can't depend on the most transient members of the community. Students graduate. The function has to be permanent.
"Won't the technology just get easier?"
It will. Interfaces are getting more intuitive. Models are getting better at understanding vague prompts. The skill floor is dropping. But the hard part was never the prompting. The hard part is knowing what to ask for in the first place — understanding your own work well enough to know where AI fits. That's a human capacity problem, and it doesn't go away just because the interface improves. If anything, easier tools make the skill gap more consequential, because more people will be using AI, and the difference between skilled and unskilled use will become the primary competitive differentiator.
• • •
The Bottom Line
The question everyone's asking — how do faculty and administrators keep up with AI? — doesn't have a technology answer. It has a human answer.
AI adoption doesn't follow a technology deployment model. It follows a skill development model. And that changes everything about what the investment looks like. Not licenses and rollouts. Coaches, embedded support, and ongoing practice infrastructure. Not a workshop. A permanent institutional function.
The institutions that figure this out first won't just be better at using AI. They'll be the ones who define what AI fluency in higher education actually looks like. That's not just an operational advantage. It's a leadership opportunity.
Fund the people, not the platforms. The AI will keep changing. The human infrastructure is what makes it useful.
• • •
Frequently Asked Questions
Why is AI adoption different from other technology adoption?
Previous technologies like email, Zoom, and LMS platforms are deterministic: they work the same for everyone. AI is operator-dependent. Two people with identical subscriptions get wildly different results based on their skill, clarity of thought, and domain knowledge. That makes AI a case of skill adoption, not tool adoption.
Why don't AI workshops work?
One-time workshops treat AI like tool adoption — learn the interface and you're done. But AI fluency requires ongoing practice, like writing or teaching. You can't workshop your way to fluency. And because AI capabilities change every few months, what you taught may be obsolete by next semester.
What should institutions invest in instead of AI subscriptions?
Human infrastructure: dedicated AI leads, ongoing coaching and office hours, embedded support in actual workflows, peer learning programs, and measurement systems to track whether people are actually getting better. Not a workshop — a permanent institutional function.
How do faculty keep up with AI?
The same way they got better at teaching: ongoing practice, peer learning, support embedded into their actual processes, and institutional commitment to developing that skill over time. Professional development, not a training event.
What is the AI adoption catch-22?
The people who need AI guidance the most are the least equipped to prompt for it. You have to already understand AI well enough to ask it the right questions. If you could do that, you wouldn't need the help. That's why skilled human facilitators are essential — they bridge the gap between the institution and the tool.