OpenAI just released the most comprehensive study of consumer AI usage ever conducted. An NBER working paper co-authored by Harvard economist David Deming, it analyzes 1.5 million ChatGPT conversations across three years. The sample is massive, the methodology is rigorous, and the findings deserve serious attention from every business leader trying to figure out what AI adoption actually looks like inside their organization.
But the most important thing the study reveals is what it doesn’t measure.
The Taxonomy Is the Finding
The researchers classified all ChatGPT usage into three categories: Asking, Doing, and Expressing. Asking means seeking information or advice. Doing means completing a discrete task: drafting an email, writing code, planning a trip. Expressing means creative or personal exploration.
The breakdown: 49% “Asking,” 40% “Doing,” 11% “Expressing.”
That taxonomy was not designed in a vacuum. It was built to reflect what 1.5 million actual conversations look like. And what those conversations look like is this: half of all AI usage, three years after launch, is essentially a better search engine.
“Doing” in this taxonomy means content generation. There is no category for “built a recurring workflow”. No category for “integrated AI into an existing business process.” No category for “used AI to coordinate across functions” or “created a reusable system that compounds over time.” Not because the researchers forgot. Because there is not enough of that behavior happening to warrant tracking it.
That is not on OpenAI. That is on the 700 million users.
The Difference Between Access and Leverage
The study celebrates the democratization of access, and rightly so. The gender gap in adoption has nearly closed. Growth in low- and middle-income countries is outpacing wealthy nations by a factor of four. Over 700 million people use ChatGPT every week, nearly 10% of the world’s adult population.
But access is not leverage. Signing into ChatGPT is not the same as extracting compounding value from it. The report documents a world in which the dominant AI use case is still transactional: ask a question, get an answer, move on. Even the “Doing” category tops out at single-session task completion. Draft this. Summarize that. The interaction ends when the task ends.
What the study cannot capture is whether any of those interactions led to something durable. Did the drafted email become a template? Did the summarized report feed a recurring analysis? Did the trip plan become a reusable process? The research has no way to know, because the unit of analysis is the conversation, not the trajectory.
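To make the unit-of-analysis point concrete, here is a minimal sketch in Python. The log schema, user IDs, and the “kind” labels are entirely hypothetical, invented for illustration; they are not the study’s data model. The point is only that conversation-level counting discards progression, while grouping the same events by user over time recovers it.

```python
from collections import defaultdict

# Hypothetical conversation log: (user, week, kind). All values are illustrative.
conversations = [
    ("u1", 1, "ask"), ("u1", 2, "ask"), ("u1", 3, "ask"),
    ("u2", 1, "ask"), ("u2", 2, "task"), ("u2", 3, "workflow"),
]

# Conversation-level view: six independent data points, no notion of progression.
print(len(conversations))  # 6

# Trajectory-level view: group by user, ordered by week, to see evolution.
trajectories = defaultdict(list)
for user, week, kind in sorted(conversations, key=lambda c: c[1]):
    trajectories[user].append(kind)

print(dict(trajectories))
# u1 never progresses beyond asking; u2 moves from asking toward a recurring workflow.
```

Both views are built from identical rows; only the grouping changes what questions the data can answer.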
Why This Matters for Business Leaders
If you are a CEO or CIO evaluating your organization’s AI maturity, this study should recalibrate your expectations. The most common metric companies use to assess AI adoption is penetration: what percentage of our employees are using AI tools? But penetration tells you almost nothing about value creation.
A company where 80% of employees use ChatGPT to ask questions is not more advanced than a company where 20% of employees have embedded AI into workflows that run every week. The first company has adoption. The second has leverage. The study measures the first. Nobody is systematically measuring the second.
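The two metrics diverge because they count different things, which a small sketch makes plain. The records, field names, and thresholds below are illustrative assumptions, not measures defined by the study: penetration counts anyone who touched the tool, while a leverage metric counts only those with AI embedded in recurring workflows.

```python
# Hypothetical employee usage records; fields are illustrative, not from the study.
employees = [
    {"name": "A", "used_ai": True,  "recurring_workflows": 0},
    {"name": "B", "used_ai": True,  "recurring_workflows": 0},
    {"name": "C", "used_ai": True,  "recurring_workflows": 3},
    {"name": "D", "used_ai": False, "recurring_workflows": 0},
    {"name": "E", "used_ai": True,  "recurring_workflows": 0},
]

# Penetration: share of employees who use AI at all.
penetration = sum(e["used_ai"] for e in employees) / len(employees)

# Leverage: share who have embedded AI into at least one recurring workflow.
leverage = sum(e["recurring_workflows"] > 0 for e in employees) / len(employees)

print(f"penetration: {penetration:.0%}")  # 80%
print(f"leverage:    {leverage:.0%}")     # 20%
```

On this toy data, a dashboard tracking only penetration would report 80% adoption while the leverage number sits at 20%, which is exactly the gap the paragraph above describes.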
This is the same pattern we have seen with every major technology wave. Email adoption was universal within a few years of its introduction; that did not mean every organization was using it effectively. CRM penetration hit critical mass a decade ago; most companies still struggle with data quality and process integration. The tool is not the transformation. The integration of the tool into how work actually gets done is the transformation.
What a Maturity Framework Would Measure
The missing dimension is progression. Not “what are people doing with AI” but “how is their usage evolving over time, and is it creating durable value?”
A useful maturity framework would track at least three stages. First, stabilization: employees are experimenting, asking questions, building familiarity and trust. This is where most of the 700 million users live today. Second, optimization: AI is embedded into recurring workflows, replacing or augmenting manual processes with consistent, repeatable results. Third, monetization: AI-augmented processes are generating measurable business outcomes: new revenue, reduced cost, faster cycle times, better decisions at scale.
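The three stages above can be sketched as an ordered scale assigned from observable signals. The signals and thresholds here (count of recurring workflows, count of measured business outcomes) are my own illustrative assumptions, not metrics the study or any established framework defines.

```python
from enum import IntEnum

class Stage(IntEnum):
    # Ordered so that later stages compare as greater maturity.
    STABILIZATION = 1   # ad-hoc questions, experimentation, building trust
    OPTIMIZATION = 2    # AI embedded into recurring workflows
    MONETIZATION = 3    # measurable business outcomes attributed to AI

def assess(recurring_workflows: int, measured_outcomes: int) -> Stage:
    """Assign a maturity stage from two hypothetical signals."""
    if measured_outcomes > 0:
        return Stage.MONETIZATION
    if recurring_workflows > 0:
        return Stage.OPTIMIZATION
    return Stage.STABILIZATION

# Most of the 700 million weekly users would land here:
print(assess(recurring_workflows=0, measured_outcomes=0))  # Stage.STABILIZATION
```

Using an ordered enum means progression can be compared directly (`Stage.MONETIZATION > Stage.STABILIZATION`), which is the dimension the study’s conversation-level taxonomy cannot express.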
The OpenAI study is an excellent map of stage one. It documents a world that is overwhelmingly in the experimentation phase. The question for business leaders is not whether their people are using AI. It is whether their people are progressing beyond asking questions and completing one-off tasks toward building systems that compound.
The Starting Line, Not the Race
None of this diminishes what OpenAI has accomplished, either with ChatGPT or with this research. The study is the best empirical snapshot we have of how a transformative technology is being adopted at global scale. It should be required reading for anyone making strategic decisions about AI.
But it is a snapshot of the starting line. The race is not “how many people are using AI.” The race is how quickly organizations move from transactional usage to structural integration. From asking to building. From conversations that end when the task ends to systems that keep running after the conversation is over.
The next study that matters will not count conversations. It will measure whether AI usage is accumulating into something that lasts.