
MCP (Model Context Protocol) is a very cool piece of tech. The idea of an API for AI opens up a whole new field of possibilities. Right now, it still feels a bit detached from our day-to-day work, but we’re definitely looking for ways to bridge that gap.
Across the industry, everyone is searching for ways to integrate AI into their existing workflows. It’s clear there’s no going back.
So what does MCP actually do? It offers a protocol for communication between an AI client and a "server." This "server" doesn’t have to live in the cloud—it can run locally on the same machine as the client. No problem there.
Let’s break down what we mean by "client" and "server" with a simple example.
A "server" is any service you want to expose to your AI system. Right now, it mostly works well with one category of systems—Agents. But I’ll touch on that more later. Your service doesn’t have to involve AI at all. It could be:
You’d like to plug one of these into your AI workflow. For example, you might want your AI agent to check tomorrow’s weather in New York, pull stock market data, or summarize your monthly expenses, all without ever opening the UI of each app. That alone is convenient. But it really shines when the agent aggregates data from multiple sources with a single prompt.
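To make that concrete, here’s a minimal sketch of what such a server could look like, using the official TypeScript SDK (`@modelcontextprotocol/sdk`). The `get-forecast` tool, its schema, and the hard-coded reply are my own illustrative assumptions, not a real weather integration:

```ts
// weather-server.ts: a minimal MCP server sketch (illustrative only)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Describe this server to any client that connects to it.
const server = new McpServer({ name: "weather", version: "1.0.0" });

// Expose a single tool the agent can call. The schema tells the
// client what arguments the tool expects.
server.tool(
  "get-forecast",
  { city: z.string().describe("City name, e.g. New York") },
  async ({ city }) => ({
    // A real server would call a weather API here.
    content: [{ type: "text", text: `Forecast for ${city}: sunny, 22°C` }],
  })
);

// Talk to the client over stdio: the server runs locally,
// as a child process of the client.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The agent discovers `get-forecast` through the protocol and decides on its own when to call it; you never wire the call up by hand.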
So what’s the "client" in this scenario? Today, there aren’t many. But your IDE might be one—if it supports Agent capabilities. Think Claude or VS Code (among others). These clients can integrate with an MCP server to let agents interact with your tools.
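On the client side, registration is usually just configuration. Claude Desktop, for example, reads a `claude_desktop_config.json` file; here’s a sketch where the server name and path are placeholders:

```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather-server.js"]
    }
  }
}
```

Once registered, the client launches the server as a local process, and every tool it exposes becomes available to the agent.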
Let’s talk about where MCP really makes sense—and where it might be overkill.
This approach is best suited for closed systems—those where the functionality isn’t visible just by reading files. For example:
In these cases, AI doesn’t have natural visibility. A local database is just binary data—AI can’t do much with it, unless it’s a known format like an image or video. If the data lives on a remote server, the same problem applies.
In both cases, MCP can act as a visibility layer, helping the AI understand and interact with the system.
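As a sketch of such a visibility layer: an MCP tool that gives the agent read-only access to a local SQLite database. It assumes the `better-sqlite3` package and a database file at `./app.db`, both illustrative choices:

```ts
// db-server.ts: exposing a local database to the agent (sketch)
import Database from "better-sqlite3";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Open the database read-only so the agent can look but not touch.
const db = new Database("./app.db", { readonly: true });
const server = new McpServer({ name: "local-db", version: "1.0.0" });

// Let the agent run a query and get the rows back as JSON text.
server.tool(
  "query",
  { sql: z.string().describe("A read-only SQL query") },
  async ({ sql }) => ({
    content: [{ type: "text", text: JSON.stringify(db.prepare(sql).all(), null, 2) }],
  })
);

await server.connect(new StdioServerTransport());
```

The file on disk stays opaque binary; the tool is what turns it into something the agent can reason about.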
For hardware or services triggered over the network, there’s an API to follow, and your agent needs a hand to make that happen. MCP becomes the abstraction layer that provides that "helping hand".
But again, that doesn’t mean MCP fits every use case. I say this not to downplay its importance, but to highlight that we should always choose the right tool for the job. If your system is file-based and uses plain text, you might not need MCP at all.
If your tool creates or manages plain text files, you may just need a system instructions file. This can expose project logic directly to your AI agent, allowing it to manipulate files without extra layers.
For example, a static site generator that builds from the file system may only need basic instructions for the agent to work effectively.
This can be done using `.github/copilot-instructions.md`, which is recognized by GitHub Copilot. The agent will use it to guide its responses. GitHub even provides best practices for writing this file.
Because this file follows convention, you don’t even have to mention it in your prompt—just ask, and Copilot will know what to do (at least in VS Code; other IDEs may vary). That alone might be enough to help you move faster. And if you later decide you need more advanced integration, you can always introduce MCP—or use an off-the-shelf solution without reinventing the wheel.
Based on my experiments, the best approach is: start rough, then use ChatGPT to refine. Here’s a simple structure you can use—adapt it to fit your project:
### Overview

Briefly describe the project, what it does, and its goals.

This is a static site generator written in TypeScript. It transforms `.md` and `.tsx` files into HTML, with support for custom layouts, blog previews, and file-based routing.

### Project Structure

Describe key folders and file types.

```
/src      # Core logic
/pages    # Content pages (.md or .tsx)
/public   # Static assets
/scripts  # Build and deployment scripts
```

### File Types & Conventions

Explain how different files are used.

- `.md` files are parsed into static pages.
- `.tsx` files are React-based components.
- Blog posts go in `/pages/blog/` and follow the `YYYY-MM-DD-title.md` naming pattern.

### Build System

Describe how the app is built.

- Node.js build using `esbuild`.
- No external template engine; rendering is done in code.
- Output goes to `/dist`, ready for GitHub Pages.

### Custom Features

Highlight anything that’s unique.

- Blog previews use `excerpt.md` and `thumbnail.png` (if present).
- Routing is based on file paths.
- Supports dark mode via Tailwind CSS classes.

### Coding Guidelines

Mention preferred coding practices.

- Use modern JS/TS features.
- Prefer functional patterns.
- Use `async/await` instead of raw Promises.
That’s it. Sometimes all you need is clear, human-readable instructions. If your tool lives in the file system, skip the server. No need to overengineer it.
You might also be interested in the following posts:
OpenAI recently released a series of videos showcasing different use cases for ChatGPT. I wanted to summarize them here, because otherwise I'd have to dig through each video again to find those useful examples—and that would be annoying.
I ran a simple experiment to see how well AI models like ChatGPT and Gemini can summarize long YouTube videos. Using subtitle files and a consistent prompt, I compared their ability to extract key topics and timestamps. The results were... mixed.