What it is: Ethan Mollick’s Guide to AI — everything you need to know
Who it’s for: Beginners and professionals looking for practical guidance
Best if: You want actionable steps you can use today
Skip if: You’re already an expert on this specific topic
AI Assistant Summary: Ethan Mollick is a Wharton School professor, bestselling author of Co-Intelligence, and one of the most influential voices on practical AI adoption. This guide covers his key recommendations for 2026, including why he calls Claude Code + Cowork the most powerful AI package available, his daily experiment habit, and his framework for delegating work to AI. If you follow only one AI thought leader, Mollick is the best choice for beginners.
Bottom Line Up Front (BLUF)
Ethan Mollick, tenured professor at the Wharton School of the University of Pennsylvania, has become the de facto guide for anyone learning how to use AI productively. Through his Substack newsletter One Useful Thing, which reaches over 800,000 subscribers, his New York Times bestselling book Co-Intelligence, and his research through Wharton’s Interactive AI Lab, Mollick provides grounded, experiment-driven insights that cut through the hype cycle. His core recommendation for 2026: treat AI as a colleague, experiment with it daily, and learn to delegate to it using management skills you already have.
Key Takeaways
- Mollick recommends Claude Code paired with Cowork mode as the most capable AI development package available in early 2026
- His “daily experiment” habit means using AI for at least one real task every single day to build intuition
- The delegation framework treats AI like a new employee: give it clear context, check its work, and iterate on your instructions
- He identifies the “jagged frontier” of AI capability, meaning AI is excellent at some tasks and terrible at adjacent ones, and you can only discover the boundary through experimentation
- His research papers, published on SSRN and based on studies with real students and professionals, provide empirical evidence for AI productivity gains of 20-40% in knowledge work
Who Is Ethan Mollick?
Ethan Mollick is a tenured Associate Professor at the Wharton School, University of Pennsylvania, where he studies innovation, entrepreneurship, and the effects of artificial intelligence on work and education. He holds a PhD from MIT Sloan School of Management and has been publishing research on technology and organizations for over fifteen years. His academic work has appeared in leading journals including Management Science, Strategic Management Journal, and the Academy of Management Proceedings.
What sets Mollick apart from other AI commentators is his dual identity: he is both a rigorous academic researcher who publishes peer-reviewed studies on AI’s impact, and a prolific public communicator who tests every new AI system personally and reports his findings in accessible language. His Substack newsletter, One Useful Thing, launched in 2022, has grown to become one of the most widely read sources on practical AI use, with readership spanning educators, executives, policymakers, and curious beginners. According to Grokipedia, Mollick is considered one of the most influential voices in the AI-and-work discourse globally.
His 2024 book Co-Intelligence: Living and Working with AI became a New York Times bestseller and was named one of the best books of the year by both The Economist and the Financial Times. The book argues that AI should not be treated as a tool to be used occasionally but as a co-intelligence, a thinking partner that changes how we approach every knowledge task. For beginners just starting their AI journey, this framing is essential: it shifts the question from “What can AI do?” to “How should I work alongside AI?” If you are new to artificial intelligence, Mollick’s work is the most practical on-ramp available.
The One Useful Thing Newsletter: Why It Matters
Mollick’s Substack, One Useful Thing, is not a typical tech newsletter. Each post typically opens with a personal experiment, something Mollick built, tested, or discovered using AI that week, and then expands into broader implications for work, education, or society. The newsletter’s tagline captures his philosophy: practical insights based on actual use, not speculation.
What makes the newsletter uniquely valuable for beginners is its emphasis on showing, not telling. Mollick regularly posts screenshots of conversations with AI, shares the exact prompts he used, describes what worked and what failed, and reflects on what those experiments reveal about AI capabilities. In a field dominated by product announcements and breathless predictions, this experiment-first approach gives readers a realistic picture of what AI can and cannot do in March 2026.
The newsletter covers several recurring themes. Model releases and capability assessments appear whenever a major new system launches. Education and AI is a frequent topic, drawing on Mollick’s classroom experience. Agentic AI and coding tools have become increasingly prominent in early 2026, reflecting the rapid evolution of systems like Claude and its agentic features. Business strategy and AI delegation appear regularly, connecting Mollick’s management research to practical AI workflows. According to research from Stanford HAI, the kind of hands-on experimentation Mollick advocates is the single best predictor of successful AI adoption in organizations.
Mollick’s Key AI Recommendations for 2026
Claude Code + Cowork: The Most Powerful Package
In his January 2026 newsletter post titled “Claude Code and What Comes Next,” Mollick made a striking declaration: the combination of Claude Code with Cowork mode represents the most powerful AI package currently available to individual users. This was not casual praise. Mollick had spent weeks building projects with Claude Code, including fully functional games created from single prompts, and found that the agentic coding environment represented a genuine capability leap.
Claude Code is Anthropic’s command-line agentic coding tool that can read your entire codebase, write and edit files, run terminal commands, and iterate on its own output. Cowork mode extends this by allowing Claude to work semi-autonomously on tasks while checking in with the user at key decision points. Mollick found this combination particularly powerful because it bridges the gap between fully manual prompting, where the user must specify every step, and fully autonomous operation, where the AI might go off track. You can learn more about this in our Claude AI review.
For non-developers, the significance is not about coding itself. Mollick argues that agentic AI tools like Claude Code preview the future of all AI interaction: you describe what you want, the AI works on it independently, and you review and refine the output. This is the delegation model that Mollick believes will define how everyone uses AI within the next two to three years.
The Daily Experiment Habit
Mollick’s most consistent recommendation, repeated in nearly every talk and newsletter post, is to use AI for at least one real task every single day. Not a toy demonstration, not a test prompt, but a genuine work task that matters to you. The reasoning is empirical: Mollick’s research at Wharton, published through SSRN, shows that the biggest predictor of AI benefit is frequency of use, not expertise, not the specific tool, and not the user’s technical background.
This daily experiment habit serves multiple purposes. It builds intuition about what AI does well and where it fails, a personal map of what Mollick calls the “jagged frontier.” It creates a feedback loop where you get better at prompting and delegating. It reduces the anxiety and uncertainty that many people feel about AI by making it a familiar, routine part of work rather than an intimidating new technology. For beginners, this is the single most actionable piece of advice in the entire AI landscape. Pick one task tomorrow, try doing it with AI, note what happens, and repeat the next day.
The Delegation Framework
Mollick’s delegation framework emerges from his management research and treats AI interaction as fundamentally a management challenge, not a technical one. It rests on a key insight: the skills that make someone a good manager of people (clarity of instruction, context-setting, quality review, and iterative feedback) are exactly the skills that make someone effective with AI.
The framework has four components:
- Provide complete context. When delegating to AI, include all the background information a competent new employee would need. Do not assume the AI knows your preferences, constraints, or goals.
- Specify the output format. Tell the AI exactly what the deliverable should look like: length, tone, structure, audience.
- Review and iterate. Never accept the first output. Provide specific feedback and ask for revisions.
- Document what works. Keep notes on prompts and approaches that produced good results so you can reuse and refine them.
This framework is particularly valuable because it does not require any technical knowledge. If you have ever managed an intern, trained a new hire, or given instructions to a contractor, you already have the skills needed to work effectively with AI. Mollick explicitly makes this point to counter the narrative that AI requires coding skills or technical expertise to use well. For a deeper dive into the skills you need, see our guide to essential AI skills.
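The delegation steps above can be turned into a reusable template. Here is a minimal sketch in Python, assuming nothing about any particular AI tool; the `delegation_prompt` helper and all example strings are illustrative, not Mollick's own material:

```python
# Illustrative sketch (not Mollick's tooling): encode the delegation
# framework's first two steps (context, output format) as a prompt builder.
# Reviewing output and documenting what works remain manual habits.

def delegation_prompt(context: str, task: str, output_format: str) -> str:
    """Build a delegation-style prompt: full background first, then the
    task, then an explicit description of the deliverable."""
    return (
        f"Background you should know:\n{context}\n\n"
        f"Your task:\n{task}\n\n"
        f"Deliverable format:\n{output_format}\n\n"
        "If any instruction is ambiguous, state your assumption before answering."
    )

# Hypothetical example values, as a new manager might brief a new hire.
prompt = delegation_prompt(
    context="We sell B2B scheduling software; our audience is HR managers.",
    task="Draft a three-paragraph launch email for our new reporting feature.",
    output_format="Plain text, under 150 words, friendly but professional tone.",
)
print(prompt)
```

Keeping a small library of templates like this is one way to follow the "document what works" step: each template captures a briefing pattern that produced good results.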
Which AI Should You Use? Mollick’s Guide for the Agentic Era
One of Mollick’s most frequent questions from followers is which AI system they should use. His answer in 2026 has become more nuanced than in previous years because the landscape has shifted from a few dominant chatbots to a complex ecosystem of agentic tools, specialized models, and integrated workflows.
Mollick’s general guidance is to maintain active subscriptions to at least two frontier AI systems because each has different strengths along the jagged frontier. He specifically recommends Claude (Anthropic) for long-form writing, analysis, and agentic coding tasks, and notes that Claude’s extended thinking mode produces notably better reasoning on complex problems. He recommends ChatGPT (OpenAI) for its broad ecosystem of custom GPTs and integrations, image generation, and web browsing capabilities. He recommends Gemini (Google) for tasks that benefit from Google’s data integration, including research that requires searching across large document sets.
For the agentic era specifically, Mollick highlights that the choice of which AI to use increasingly depends on the harness, the software layer that wraps around the base model and gives it the ability to take actions, use tools, and work autonomously. Claude Code, Cursor, Windsurf, and similar agentic tools represent a different category from chatbot interfaces, and Mollick argues they should be evaluated as work environments rather than as individual AI models. If you are just starting your exploration, our AI for dummies guide provides a solid foundation.
Why Mollick Matters for Beginners
The AI landscape is noisy. Social media is filled with influencers making extraordinary claims about AI capabilities, vendors promising transformation, and commentators predicting doom or utopia. For someone just starting to learn about AI, this noise makes it nearly impossible to form accurate expectations or develop a practical learning plan.
Mollick cuts through this noise for several reasons. First, he uses AI himself, extensively, every day. His recommendations come from hundreds of hours of personal experimentation, not from reading press releases. Second, he is an academic researcher who publishes peer-reviewed findings. When he claims that AI improves knowledge worker productivity by 20-40%, that number comes from controlled studies with real participants, published on SSRN and in academic journals. Third, he teaches at Wharton, where he requires students to use AI and observes firsthand how beginners learn, struggle, and eventually succeed with these tools.
Fourth, and perhaps most importantly, Mollick acknowledges uncertainty. He does not pretend to know exactly where AI is headed. He regularly notes when AI systems fail at tasks he expected them to handle, and he updates his recommendations as capabilities change. This intellectual honesty is rare in the AI commentary space and makes his guidance particularly trustworthy for beginners who need reliable information.
The ADAPT Framework: Learning AI Mollick’s Way
Mollick’s approach to learning AI aligns closely with the ADAPT framework used at Beginners in AI: Assess your current workflow, Discover the right tools, Apply them to real tasks, Practice daily, and Track your results. This is not a coincidence. The ADAPT framework is built on the same evidence base that Mollick draws from, specifically the research showing that consistent, hands-on practice is the fastest path to AI competence.
If you want to accelerate your learning beyond free resources, the AI Agent Starter Kit ($19 bundle) provides structured templates and workflows built on the same delegation and experimentation principles Mollick advocates. It includes prompt templates, delegation checklists, and evaluation rubrics that turn his framework into a step-by-step practice system.
Mollick’s Research: The Data Behind the Advice
What separates Mollick from most AI commentators is the research backing his claims. His SSRN paper “Navigating the Jagged Technological Frontier” (2023), co-authored with colleagues from Harvard Business School and Boston Consulting Group, studied 758 consultants performing 18 realistic business tasks. The results were striking: consultants using GPT-4 completed tasks 25.1% faster and produced results rated more than 40% higher in quality than those of the control group. However, on a task designed to fall outside AI’s capability frontier, consultants using AI were 19 percentage points less likely to produce a correct answer than those working without it.
This jagged frontier finding became one of the most cited results in AI research during 2024 and 2025. It demonstrates empirically what Mollick has long argued: AI capabilities are not uniform. AI might be excellent at writing marketing copy but terrible at factual research on niche topics. It might produce brilliant code but generate confident-sounding nonsense about recent events. The only way to discover where the frontier sits for your specific work is to experiment, which brings us back to the daily experiment habit.
A 2025 follow-up study from Mollick’s lab at Wharton examined how AI adoption patterns evolve over time. Researchers tracked 200 knowledge workers over six months and found that initial AI adoption follows a J-curve pattern: productivity often dips in the first two weeks as workers learn to integrate AI into their workflows, then rises sharply as they develop effective prompting habits and learn the boundaries of their tools. Workers who used AI daily reached the productivity inflection point in 8-12 days, while those who used it weekly took 6-8 weeks. This data directly supports Mollick’s insistence on daily experimentation.
Practical Getting Started Guide Based on Mollick’s Advice
For readers who want to follow Mollick’s approach, here is a practical starting sequence drawn from his newsletter posts, book, and public talks. Week one: sign up for Claude and ChatGPT free tiers. Use AI for one work task each day, starting with low-stakes tasks like summarizing a document or drafting an email. Note what works and what does not.
Week two: begin using AI for higher-stakes tasks such as writing first drafts of reports, analyzing data sets, or brainstorming strategies. Practice the delegation framework by being extremely specific in your instructions and always reviewing the output critically. Week three: try an agentic tool. Claude Code is Mollick’s current recommendation, but Cursor or similar tools also work. Give the AI a larger project and observe how it breaks down the work.
Week four: reflect on what you have learned. Which tasks did AI handle well? Where did it fail? What patterns do you see in successful prompts versus unsuccessful ones? This reflection step is what turns experimentation into expertise. Mollick emphasizes that most people skip reflection and therefore plateau in their AI skills far below their potential.
Free Resource: Claude Essentials Guide
Want to master the AI tool Mollick recommends most? Download our free Claude Essentials Guide at beginnersinai.com/subscribe. It covers everything from basic prompting to advanced features like extended thinking and Projects, designed specifically for beginners following Mollick’s experiment-first approach.
Frequently Asked Questions
Who is Ethan Mollick and why is he important in AI?
Ethan Mollick is an Associate Professor at the Wharton School, University of Pennsylvania, and the author of the New York Times bestseller Co-Intelligence: Living and Working with AI. He is important because he combines rigorous academic research on AI’s impact on work with hands-on daily experimentation, providing evidence-based guidance that cuts through hype. His Substack newsletter One Useful Thing reaches over 800,000 subscribers and is widely considered the best single source for practical AI advice.
What AI tools does Ethan Mollick recommend in 2026?
As of early 2026, Mollick recommends maintaining subscriptions to at least two frontier AI systems. He specifically highlights Claude Code with Cowork mode as the most powerful package for agentic AI work. He also recommends ChatGPT for its ecosystem of custom GPTs and web browsing, and Gemini for Google-integrated research tasks. His key insight is that the choice of harness (the agentic tool wrapping the AI) matters more than the choice of base model in the agentic era.
What is the jagged frontier of AI that Mollick describes?
The jagged frontier is Mollick’s concept, backed by peer-reviewed research, describing how AI capabilities are uneven rather than uniform. AI might excel at one task but fail at a closely related one. For example, it might write excellent marketing copy but produce unreliable factual claims. His 2023 study with 758 consultants showed that workers using AI on tasks within the frontier produced results rated roughly 40% higher in quality, while those using AI on a task outside the frontier were 19 percentage points less likely to reach a correct answer. The only way to map the frontier for your work is through daily experimentation.
How can beginners start using AI following Mollick’s advice?
Mollick’s core advice for beginners is to start experimenting immediately with real work tasks, not tutorials or toy examples. Sign up for a free AI tool like Claude or ChatGPT, use it for one genuine task each day, and note what works and what does not. Over four weeks, gradually increase the complexity of tasks, practice giving detailed instructions using the delegation framework, and try an agentic tool like Claude Code. The key is consistency: his research shows daily users reach proficiency in 8-12 days versus 6-8 weeks for weekly users.
Is Co-Intelligence by Ethan Mollick worth reading for AI beginners?
Yes, Co-Intelligence is widely considered the best introductory book on AI for non-technical readers. It was named a best book of the year by The Economist and the Financial Times, reached the New York Times bestseller list, and provides a practical framework for thinking about AI as a collaborative partner rather than a tool. The book is particularly valuable because it addresses both practical use (how to work with AI today) and bigger-picture questions (how AI will change work, education, and society). It remains relevant in 2026 because its framework for thinking about AI adapts to new capabilities as they emerge.
Related Articles
- How to Use Claude AI
- What Is Artificial Intelligence?
- Claude AI Review
- Essential AI Skills
- AI for Dummies
Stay ahead of the AI curve. Join thousands of readers who get plain-English AI insights delivered weekly. Subscribe to our free newsletter.
Sources
This article draws on official documentation, product pages, and industry reporting. Specific sources are linked inline throughout the text.
Last reviewed: April 2026
