
Beyond Code with Claude Code: Why My Daily Driver AI Isn’t What You’d Expect

8 min read · Sep 15, 2025

TL;DR: Advanced users should consider Claude Code as their daily driver AI, not just for coding. It’s the clearest glimpse we have of the agentic AI future, one where agents actually have agency. I now use it as my primary tool for writing documents, creating presentations, and many other tasks, and it’s far more capable for this kind of work than anything else I’ve used.

I give regular seminars about using AI to boost productivity for non-technical people, especially in the non-profit sector. Without fail, someone asks: “Which AI do you use the most? Should I pay for Gemini?”

There’s always a flash of panic across my face. How do I explain that I use something called Claude Code, which is kind of like Claude but not really? That it’s meant for coding, but the presentation slides they just watched were created in it?

“Claude Code,” I answer, “but you probably shouldn’t use it. If you can only use one, use Gemini.”

The confusion is immediate and understandable.

When I say Claude Code, I’m talking about Anthropic’s command-line interface that connects Claude to your local development environment. It looks like something from the 1990s: white text on a black background, no buttons, no menus. You type, it responds, files appear and change on your hard drive.

Yet this anachronistic-looking tool is where I believe the whole agentic AI movement is heading. It’s not just a coding tool. It’s become my daily driver for documents, presentations and more. I spend more time in Claude Code than any other AI system.

The Accidental Power User Tool

Claude Code was built for software development. Its core purpose: write, debug, and refactor code. But the capabilities that make it excellent at programming create something unexpectedly powerful for non-coding tasks.

Five key capabilities transform Claude Code from a coding assistant into a general-purpose agent:

  1. Editing the file system is like editing its brain: Claude Code’s ability to read and write the file system turns your project directory into shared, persistent memory that both of you can inspect and edit.
  2. Teach it as you go: This doesn’t just apply to the what of the project but to the how. The approach is just as editable as the data. I keep a file called CLAUDE.md in each project with preferences, style rules, and domain-specific guidance that Claude Code reads and updates as we work together (there’s a sketch of such a file just after this list).
  3. Ad hoc tool crafting: If it can’t do it itself, Claude Code can write Python or TypeScript to solve the problem.
  4. MCP gives you an extensible brain: Claude Code has a mature, first-class MCP (Model Context Protocol) implementation with granular permissions and strong local workflows. As we will see, this even allows you to use other LLMs like Gemini and GPT-5.
  5. Refinement loops: Because Claude Code can execute code, check results, and try again, you can define success criteria and let it iterate unsupervised. “Make this work” becomes an actual instruction, not wishful thinking.
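
To make the CLAUDE.md idea concrete, here is a simplified sketch of what such a file might contain for a presentations project. The specific rules below are illustrative rather than a copy of my actual file:

```markdown
# CLAUDE.md (presentations project)

## Style
- Slides are slightly texty; each slide should take about a minute to present.
- Never invent statistics. If a number isn't in the source material, flag it instead.

## Workflow
- Drafts live as markdown files in this directory; keep previous decks around for reference.
- When I leave a `//` comment in a draft, address it and then delete the comment.

## Gamma
- Reformat slide separators the way Gamma expects before I upload.
```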

Some of the other AIs touch one or two of these capabilities. NotebookLM and Claude Projects offer some file-linked context, but they don’t combine capabilities 2–5 into the cohesive workflow that makes Claude Code uniquely powerful as a daily driver AI.

Case Study: Claude Code for making presentations

Let me demonstrate with a real example: how Claude Code became my presentation tool of choice. Wait, I hear you say. You use Claude Code for presentations? Of all the things to use a command-line tool for …

Yeah, I get the irony. It’s the worst system for preparing slide presentations. Except for all the other ones.

I’ve experimented with two workflows for presentations with Claude Code: generate markdown that I feed to Gamma (for example this presentation on AI and the Muslim community), or use Marp to create more technical presentations (like this Qur’anic Computing seminar on Ansari Chat).

Using it for the first draft of a presentation

The first workflow is perhaps more straightforward. I had previously given a presentation that covered some of the same material, and I had also told Claude Code how I like to present slides: slightly texty, with each slide taking about a minute. I then took the organizers’ email describing what they wanted to hear from me, how long I had, and who the audience was, and told Claude Code to produce a markdown file suitable for pasting into Gamma.

This is where the magic comes in. I use an MCP server called Zen that allows Claude Code to consult other LLMs, so I ask both Gemini Pro and GPT-5 to review the slides and offer suggestions. That gives me a menu of improvements, and I choose three of the five offered.
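
For the curious, wiring up an MCP server like Zen is mostly configuration. Claude Code reads project-scoped servers from a .mcp.json file (you can also register them with the claude mcp add command). The sketch below only shows the shape of that file; the command, arguments, and API key names are placeholders that you’d replace with whatever the Zen server’s own documentation specifies:

```json
{
  "mcpServers": {
    "zen": {
      "command": "zen-mcp-server",
      "args": [],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key",
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```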

I then open the markdown file and carefully review every line (hallucination, non-compliance, and randomness are still issues with AI). Where I need Claude Code to do something different, I leave a comment denoted by a //.
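
For instance, a comment in a draft slide might look like this (the slide content here is invented purely for illustration):

```markdown
## What AI can already do for small non-profits
- Draft grant applications and donor emails
- Summarize long policy documents

// This slide is too generic. Swap the second bullet for a concrete example
// the audience could try this week.
```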

(Throughout, Claude Code reads CLAUDE.md for domain-specific guidance.)

I then ask it to address the comments I left in the document, and once it has, I get a second round of review from Gemini and GPT-5. A few more iterations and it’s in good shape.

I then take the markdown file and upload it to Gamma. Gamma has specific formatting preferences for slide separation, so I ask Claude Code to reformat the markdown accordingly. I then tell it to update CLAUDE.md so it remembers Gamma’s formatting preferences for next time.

I reupload the file to Gamma, and it merrily creates the slides. I finish the work in Gamma, but once I’m done, I download a PDF version of the slides and save it in the same directory so the next time I have to give a presentation it’s there for me to draw on.

So let’s review how all five capabilities came together:

  1. Editing the file system is like editing its brain: Every iteration of the presentation lived in a markdown file that both Claude and I could edit. Previous presentations were stored there for reference.
  2. Teach it as you go: When it added fake statistics, I told it to update CLAUDE.md to never do that again. When I explained Gamma’s formatting preferences, it remembered for next time.
  3. Ad hoc tool crafting: Need to reformat the markdown? Claude Code writes a quick script to transform the content to match Gamma’s requirements (see the sketch after this list).
  4. MCP gives you an extensible brain: Through Zen, I had Gemini Pro and GPT-5 review the slides, each bringing different strengths.
  5. Refinement loops: Each iteration of comments and reviews formed a loop — define what needs fixing, let Claude address it, verify the results, repeat until done.
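
As an example of that third capability, the kind of throwaway reformatter Claude Code writes can be just a few lines. Here is a rough sketch that assumes, purely for illustration, that the target format wants each slide as its own block separated by --- lines; the real script was tuned to whatever Gamma actually expects:

```python
"""Rough sketch of the kind of one-off reformatter Claude Code writes on the fly.
Assumes, purely for illustration, that the target wants slides separated by '---'
lines; the real requirements come from whatever tool you're pasting into."""
import re
import sys


def reformat(markdown: str) -> str:
    # Treat every level-2 heading as the start of a new slide.
    slides = re.split(r"\n(?=## )", markdown.strip())
    # Rejoin the slides with an explicit '---' separator line between them.
    return "\n\n---\n\n".join(slide.strip() for slide in slides) + "\n"


if __name__ == "__main__":
    src, dst = sys.argv[1], sys.argv[2]
    with open(src, encoding="utf-8") as f:
        text = f.read()
    with open(dst, "w", encoding="utf-8") as f:
        f.write(reformat(text))
```

You would run it as, say, python reformat.py draft.md gamma.md and paste the result into Gamma.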

Generating slides end-to-end

The second workflow goes deeper: creating technical presentations entirely within Claude Code.

This time I started with a high-level structure: the different parts I wanted the presentation to consist of. But one of the challenges was exactly how to generate the slides from the markdown. So I asked Claude Code.

It recommended Marp: a framework that converts Markdown files into beautiful presentations. Claude Code installed it with npm install -g @marp-team/marp-cli and generated the initial slides.
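
If you haven’t seen Marp before, a deck is just a markdown file with a little front matter, and --- lines separate the slides. A minimal example (the slide text is illustrative):

```markdown
---
marp: true
theme: default
paginate: true
---

# Qur'anic Computing
### A seminar on Ansari Chat

---

## Why build slides from plain text?
- Everything lives in files Claude Code can read and edit
- So the whole deck can go through refinement loops
```

Running marp slides.md then turns that file into an HTML deck.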

The slides worked but looked plain. “Can you match the color scheme from my last presentation?” I asked, pointing to a recent slide deck in the same directory.

Claude Code extracted the colors from the CSS, created a custom Marp theme file, and applied it. The slides now matched my brand.
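
A Marp theme is just a CSS file whose first comment declares the theme name; you pass it to marp-cli with --theme and reference it in the deck’s front matter. A cut-down sketch with placeholder names and colors (the real file used values lifted from my earlier deck’s CSS):

```css
/* @theme my-brand */
/* Marp reads the theme name from the comment above. */
@import 'default';

:root {
  --brand-accent: #0e7c7b; /* placeholder: in practice, pulled from the old deck */
  --brand-ink: #1f1f1f;
}

section {
  background: #ffffff;
  color: var(--brand-ink);
}

h1,
h2 {
  color: var(--brand-accent);
}
```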

Then came the graphs. I wanted to include architecture diagrams and data visualizations. Claude Code suggested Mermaid: a diagramming tool that uses text-based syntax. This was a great idea, except that when I exported to HTML or PDF, the dynamic diagrams didn’t render.

“Convert them to static images,” I said.

This is where refinement loops come into play. I defined the success criteria: “All diagrams must render as static images that appear correctly in both HTML and PDF output.” Then I let Claude Code work.

It tried multiple approaches:

  • First attempt: Using Mermaid’s browser-based renderer (failed due to dependencies)
  • Second attempt: Node.js script with mermaid-cli (worked but had font issues)
  • Third attempt: Python script with proper font handling (success)

Once it found what worked, it:

  1. Identified all Mermaid blocks in the markdown
  2. Wrote a Python script to preprocess charts and render them as images
  3. Saved the images to a _charts/ directory
  4. Updated the markdown to reference the static images instead of the Mermaid code
  5. Created a Python build script that handled everything: python build.py

“I need this as a PDF for the conference,” I mentioned.

Again, a refinement loop kicked in. The success criteria: “Generate a PDF that preserves formatting, includes all images, and works on conference projectors.” Claude Code tried several approaches before discovering that Marp’s --pdf flag actually uses Chromium in headless mode under the hood (via Puppeteer). This was perfect: it gave us the rendering fidelity of a real browser. It updated the build script: python build.py pdf now generated a perfectly formatted PDF.
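
To make the end state concrete, here is a simplified sketch of the shape such a build script might take. It is not the actual build.py: it shells out to mermaid-cli (mmdc) for rendering and skips the font handling that took a couple of iterations to get right, but the overall flow of pre-rendering Mermaid blocks and then calling Marp is the same:

```python
"""Simplified sketch of a Marp build script: pre-render Mermaid blocks to static
images, then hand the processed markdown to Marp. Not the actual build.py; it
shells out to mermaid-cli (mmdc) and omits the font fixes described above."""
import re
import subprocess
import sys
from pathlib import Path

CHART_DIR = Path("_charts")
MERMAID_BLOCK = re.compile(r"```mermaid\n(.*?)\n```", re.DOTALL)


def render_mermaid(markdown: str) -> str:
    """Replace each Mermaid code block with a reference to a rendered PNG."""
    CHART_DIR.mkdir(exist_ok=True)
    parts, last = [], 0
    for i, match in enumerate(MERMAID_BLOCK.finditer(markdown), start=1):
        source = CHART_DIR / f"chart_{i}.mmd"
        image = CHART_DIR / f"chart_{i}.png"
        source.write_text(match.group(1), encoding="utf-8")
        # Render the diagram to a static image with mermaid-cli.
        subprocess.run(["mmdc", "-i", str(source), "-o", str(image)], check=True)
        parts.append(markdown[last:match.start()] + f"![diagram]({image})")
        last = match.end()
    parts.append(markdown[last:])
    return "".join(parts)


def main() -> None:
    slides = Path("slides.md").read_text(encoding="utf-8")
    processed = Path("slides.build.md")
    processed.write_text(render_mermaid(slides), encoding="utf-8")
    cmd = ["marp", str(processed), "--allow-local-files"]
    if len(sys.argv) > 1 and sys.argv[1] == "pdf":
        cmd.append("--pdf")  # Marp drives headless Chromium to produce the PDF
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main()
```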

The key here is that I didn’t have to supervise each attempt. I set the goal, defined what success looked like, and Claude Code iterated until it worked.

Again, the five capabilities at work:

  1. Editing the file system is like editing its brain: Theme files, image directories, build scripts: all organized and accessible
  2. Teach it as you go: Claude Code learned my visual preferences and encoded them in the theme file
  3. Ad hoc tool crafting: On the fly, it crafted a custom Python workflow tailored to this particular presentation
  4. MCP gives you an extensible brain: Used GPT-5 to optimize the data visualizations and Gemini to suggest better diagram layouts
  5. Refinement loops: The entire process of converting Mermaid diagrams to static images and finding the right PDF generation method — Claude Code iterating through approaches until success criteria were met

The Reality Check

It’s not a silver bullet. There are some things to consider.

Technical barrier to entry: Command-line interface, API key configuration, Python/Node.js installation. Not for casual users.

Safety considerations: File system access and code execution require constant vigilance. One misunderstood instruction can delete files or install unwanted packages. Avoid blanket approvals — grant tools least-privilege permissions, approve commands individually, and always work inside a git-tracked directory so you can diff and revert unintended changes. (And yes, Claude Code can help you set up git too.)

Cost management: Complex iterations consume tokens rapidly. A presentation generation session with multiple refinements can cost $5–10. Rate limits can interrupt workflows.

Debugging requirements: When tools fail (and they will), you need to understand error messages, resolve dependency conflicts, and sometimes manually fix generated code.

Platform limitations: Some tools work differently across operating systems. Windows users face additional challenges with shell commands.

The Bigger Picture

Claude Code represents a fundamental shift in how we interact with AI. Current AI assistants are consultants: they advise, explain, and generate text. Claude Code is an agent: it acts, builds, and iterates.

And here’s the thing: I used the exact same techniques to write this article. The same five capabilities. Claude Code maintains my drafts as markdown files, runs them through Gemini and GPT-5 for feedback via MCP, and iterates based on comments I leave in the text. Whether it’s presentations, articles, documentation, or data analysis, the pattern is the same: define what you want, let Claude Code try approaches until it works, teach it your preferences as you go.

One of my favorite quotes is from William Gibson: “The future is already here — it’s just not evenly distributed.” In this case, it’s arriving in your terminal window first.

Written by Waleed Kadous

Agent whisperer; AI for Good self-appointed missionary; fmr Chief Scientist @ StockApp & Anyscale; ex-Principal Engineer++ @Canva, Uber, Google; PhD in AI
