hbr.org - To Drive AI Adoption, Build Your Team’s Product Management Skills
Much of the conversation about how to work effectively with generative AI has focused on prompt engineering or, more recently, context engineering: the semi-technical skill of crafting inputs so that large language models produce useful outputs. These skills are helpful, but they are only part of the story. The real payoff comes when employees learn to apply generative AI to their day-to-day work in ways that genuinely improve it. This requires defining valuable problems within workflows, evaluating possible solutions, rapidly experimenting, and integrating new practices sustainably into daily work: disciplines that are core to the work of product managers.
Without a product-minded approach, employees’ attempts to use AI often remain shallow or short-lived. Lacking a clear sense of how to apply AI to their highest-value problems, employees may struggle to find a place to start. Many employees fall back on low-impact, one-off uses—such as drafting simple copy for emails or blog posts—rather than rethinking the complex, repetitive, or data-heavy processes that, if automated, would fundamentally expand a team’s capacity and impact. And when early experiments run into friction, many employees conclude the technology isn’t worth the hype or their time and abandon it altogether.
For managers asking, "How do I get my people to use AI?," the answer is often better product discipline rather than more prompt training. Product managers are trained to discover problems worth solving, understand the available technologies, experiment with possible solutions, and iteratively refine what works in practice. As Stanford researchers who have spent 18 months studying generative AI adoption at Google through hundreds of interviews and observations, we found that those who have successfully implemented AI in their individual work have deployed exactly this skill set. We also observed similar skills among nearly 2,000 professionals across industries who did capstone projects for our executive AI leadership class.
If every employee needs to act like a product manager in adopting generative AI, managers play a critical role in creating the conditions that make this skill set visible, valued, and useful. This article shows what these skills look like in practice—and what managers can do to foster them.
Define the Problem to Solve and Its Value
When organizations roll out new workplace technologies, they typically train employees on a narrow set of prescribed use cases, explicitly linking the tool to the specific problems it is meant to solve. As a result, employees naturally approach new technologies by asking, “What can this tool do?”
Unlike many previous workplace technologies, however, generative AI is broadly applicable and rapidly evolving; overly specified use cases or trainings won’t help employees unlock its power.
Instead, employees need to develop the skill of discovering for themselves what the technology can do to support their work. A product manager starts by specifying what value looks like for users—what needs users have and which are most critical. This skill is where employees also need to start when approaching gen AI: by deciding whether they are looking to it to save time, improve quality, reduce errors, and/or expand creative range.
As an example, a Google manager spent hours each week revising her team’s updates before sending them to executive stakeholders. Her first instinct was a common one: She copied individual updates into Gemini and prompted it to “write a summary.” But this one-off use produced generic drafts that still required heavy editing, and the extra copying and pasting across tools meant she wasn’t saving time. At that point, she was tempted to give up on generative AI altogether and stick with her existing manual process.
Instead, she got more precise about the problem she was trying to solve. She didn’t just want summaries; she needed executive-ready communications that meaningfully reduced her own effort. With that clearer definition of value, she built a custom Gem with explicit instructions tailored to her executives’ expectations, creating a repeatable workflow that saved her time. She also redesigned the full workflow so her team uploaded updates directly into the Gem. It then generated drafts that were 80–90% ready to send, turning a frustrating experiment into a simple automated pipeline that delivered real time savings.
So how can you help your employees identify these valuable uses? Most of our research participants said they better understood gen AI’s applicability when they heard someone else explain how they applied it to a concrete, high-value problem. Create opportunities for this kind of modeling by structuring time for your team and organization to share demonstrations of their own high-value uses of AI. We also saw the benefits of leaders participating in these demonstrations themselves: when managers modeled their own problem-solving process, employees felt more encouraged to experiment, rather than waiting for prescriptive training. Across our research, success came from what participants described as “meeting the individual where they were” and helping employees connect a gen AI tool to their specific needs.
Evaluate Technology Options
To develop a product effectively, product managers constantly evaluate the range of available technical solutions and how each works in order to identify the best tool for the job. Employees face a parallel challenge: there are many different kinds of gen AI tools, and matching the right tool to the problem can be difficult.
Leaders can help employees navigate this landscape by teaching them to evaluate gen AI tools against their specific needs by considering:
- The interface and workflow: A simple chat interface is often best for individual brainstorming, but a tool that allows for a “pipeline”—like a custom Gem or a shared notebook like Google’s NotebookLM—can be better for repetitive team processes.
- Data integration: Some tools are “closed,” relying only on the model’s pretrained knowledge, while others allow users to pull in their own live data, such as a folder of contracts or a database of customer feedback.
- Tool capability: Because foundation models and tool interfaces evolve rapidly, a tool that fell short for a complex task last month may be well suited today.
As an example, a program manager we studied spent considerable time maintaining spreadsheets to communicate the status of upcoming deadlines, creating customized views for different stakeholders. When she decided to explore gen AI as a way to streamline the work, she encountered a wide range of possible tools.
Rather than being overwhelmed by choice, she approached the search by investigating and evaluating each option against what the job required. For example, she knew from past experience with NotebookLM that it was not compatible with spreadsheets, so she experimented with the Gemini integration in Google Sheets, through which she learned that it was not configured to input data directly into the spreadsheet as she needed. (Note: both of these functions have since been made available by Google.) Finally, an internal company newsletter focused on gen AI use cases pointed her to an internal Chrome extension that could read the full contents of webpages. She experimented with it and the fit became clear: The tool worked within her existing workflow, could access live spreadsheet data, and produced outputs suited to her stakeholders’ needs—significantly reducing manual effort.
To educate employees on these rapidly changing tools, institute meetings, newsletters, and other resources that model and demonstrate their use. For example, at Google, Google School for Leaders Product Lead Debbie Newhouse provides hands-on coaching sessions that help colleagues match their pressing problems with the appropriate AI tool to solve them. This process, nicknamed “AI makeovers,” is being scaled through several Google development programs. Cloud AI/ML Enablement Lead Olivia Tam leads what she and her team call “AI Spark” sessions where they model and explain relevant tool use for colleagues.
Experiment
Effective product management depends on experimentation throughout the product cycle, which helps quickly refine the value of a solution. Similarly, employees need trial and error to test solutions and develop AI workflows that are actually valuable. Experimentation may be getting some bad press for inefficiency lately, but building new automated workflows depends on learning what is possible, demonstrating potential value, and refining both the problem definition and the criteria for success, all of which experimentation enables.
Take the experience of a web developer we studied. His direct manager was skeptical about adopting AI for the team, dismissing the tools as hype and a potential drain on development time. Determined to try it out, the developer ran a series of small proof-of-concept experiments to explore whether AI could help his team analyze large volumes of unstructured information and distill insights for other groups. An initial demo showed the tool was sufficiently reliable for the workflow he had in mind—enough to change the conversation with his manager and create space to continue learning. Subsequent experiments revealed that the tool performed well for some categories of insights but not others, allowing the team to automate about half of the work while leaving the rest untouched. He could not have articulated this outcome at the outset; it emerged only through experimentation that made both the limits and the value of the technology visible.
Product Management Skills for AI Adoption
What the skills look like in action, and how managers can help employees develop and apply them
Define the Problem to Solve and Its Value
What it looks like: Ask “In my workflow, what problem is worth solving?” rather than “What can this tool do?”
How leaders can foster it: Structure frequent demos that model asking that question and considering possible tools, even for use cases well outside their own field.

Evaluate Technology Options
What it looks like: Explore possible AI tools, considering technical and operational feasibility as well as the tradeoffs between value and effort, risk and reward.
How leaders can foster it: Connect employees with peers, AI champions, and AI-forward communities that help them stay up to date on specific generative AI tools and evaluate them against the high-value problems they can help solve.

Experiment
What it looks like: Frame each experiment with gen AI as an MVP to refine the value of the solution.
How leaders can foster it: Normalize and encourage experimentation by modeling your own experiments, addressing concerns directly, and identifying low-risk activities for initial testing.

Integrate
What it looks like: Build, launch, and manage solutions that integrate sustainably into individuals’ and groups’ day-to-day work.
How leaders can foster it: Encourage two kinds of interoperability, technical and process. Connect employees with technical partners to help bridge systems, and encourage them to think about automating larger roles and routines, not just specific workflows.
Unfortunately, the mindset needed for experimentation can be hard to instill. In our research, even some employees who advocated for gen AI adoption struggled to experiment with the tools. Employees cited lack of time, uncertainty over whether experimenting was a legitimate use of work hours, fear of being judged if their experiments failed, and concern that their success would be “cheating.” As a manager, beyond encouraging AI use broadly, address these concerns head on. Modeling—showing that you also experiment with and use the technology—is invaluable here. Encourage team members to frame each experiment as an MVP, rather than trying to perfect a prompt or workflow from the start. Finally, help your employees identify low-risk activities for AI experimentation so that if something does go wrong or takes longer than expected, the overall business isn’t adversely affected.
Integrate
A prototype is not a product, and an experiment is not the end of gen AI adoption. Product managers know that a brilliant feature is useless if it doesn’t fit into the user’s existing ecosystem. Similarly, leaders and employees can learn from product managers that the “last mile” of AI adoption is integration: moving the AI solution out of a separate chat window and into integrated flows with tools that they and their teams already use every day.
Leaders can help employees bridge this gap by encouraging two kinds of interoperability:
Technical
Encourage employees to ask, “Where do I actually need this information to go?” Instead of manually copying AI-generated summaries into a project tracker, a product-minded employee looks for ways to automate that handoff. In some cases, frontline employees can handle this integration themselves; some we studied even used tools like Gemini to learn enough about APIs to automate data flows. But when the technical lift is too high, a manager’s role is to connect the employee with technical partners who can help bridge systems. Employee requests for these high-value integrations are essential for turning generative AI’s potential into real business impact, especially when leaders actively gather and prioritize them.
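For readers curious what such a handoff looks like in practice, here is a minimal sketch in Python. It assumes a hypothetical project-tracker REST endpoint (`tracker.example.com`) and payload shape; real trackers each have their own APIs and authentication, so treat this as an illustration of the pattern, not a working integration.

```python
import json
from urllib import request

# Hypothetical tracker endpoint -- replace with your tracker's real API.
TRACKER_URL = "https://tracker.example.com/api/updates"


def build_update(summary: str, project_id: str) -> dict:
    """Package an AI-generated summary as a tracker update payload."""
    return {
        "project": project_id,
        "body": summary.strip(),
        "source": "gen-ai-summary",  # flag machine-drafted content for reviewers
    }


def post_update(payload: dict) -> None:
    """Send the payload to the tracker instead of pasting it in by hand."""
    req = request.Request(
        TRACKER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        resp.read()  # in real code, check the status and handle errors
```

The point is not the few lines of code but the shift in habit: once the summary flows to the tracker automatically, the employee’s manual copy-paste step disappears from the workflow entirely.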
Process
Section titled “Process”Gen AI tools can often change the necessary order of operations for a team. When one person automates a task, it creates a ripple effect for whomever next receives that work. Managers should help employees redesign roles and routines, ensuring that the new AI-enhanced workflow doesn’t create uneven loads or bottlenecks.
Gen AI adoption at work rarely fails because people can’t write good prompts. It fails because employees struggle to see how AI fits into their real workflows—and because early attempts feel inefficient, fragile, or hard to sustain. Managers who make the biggest difference don’t solve this by mandating usage or offering more technical training; they make the path to value visible.
One of the most powerful change-management tools we observed was simple but underused: demos and modeling. When leaders and peers show—not tell—how someone identified a high-value problem, evaluated tools, ran a small experiment, and integrated the result into daily work, they give others a concrete template to follow. These demonstrations reduce fear, legitimize experimentation, and clarify what “good” looks like.
A product management mindset offers a helpful lens for this work—but it should not be applied too rigidly. In a fast-moving AI landscape, individuals need to be exploratory and adaptive, not bogged down by formal roadmaps or premature ROI calculations. What matters are the core habits: spotting opportunities worth solving, testing ideas quickly, learning from friction, and embedding what works.
When managers model these behaviors and create space for others to practice them, AI stops being a novelty—and starts becoming a source of real, durable impact.