VS Code, Copilot, and MCP: A Trio for Rapid AI-Powered Development

The age of AI is not just about using large language models (LLMs) in chatbots; it’s about integrating AI capabilities deeply into our workflows and applications. Imagine an AI assistant that doesn’t just talk, but acts – managing tasks, retrieving data, and controlling services through natural language. How can we build the bridge between powerful AI models and our custom applications?

Enter a powerful combination: Visual Studio Code (VS Code), GitHub Copilot, and a framework like MCP (Model Context Protocol). In this post, we’ll explore how this trio allows us to:

  1. Rapidly develop backend services using VS Code and Copilot’s AI-powered suggestions.
  2. Expose application functionality in a structured way using MCP, making it accessible to AI agents.
  3. Demonstrate AI capabilities by showing how an AI could interact with our custom-built MCP server.

We’ll use a simple task management application built with Python and SQLite as our example, exposing its features through an MCP server.

The Tools of the Trade

  • VS Code: Our development playground. A feature-rich, extensible code editor that’s become a standard for many developers.
  • GitHub Copilot: The AI pair programmer living inside VS Code. It suggests code snippets, completes lines, writes functions, generates documentation, and even helps explain code, significantly speeding up development.
  • MCP (Model Context Protocol): An open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

Building the Task Manager MCP Server (with Copilot!)

Let’s look at the core components of our example Python application:

import os
import sqlite3
import aiosqlite
import asyncio
import json
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator
from dataclasses import dataclass

from mcp.server.fastmcp import Context, FastMCP

# Global DB connection for resources (set by the lifespan handler below)
_db = None

@dataclass
class AppContext:
    db: aiosqlite.Connection

async def ensure_db_exists(db_path: str = "tasks.db") -> None:
    """
    Ensure the SQLite database file and tasks table exist.

    Creates the file and the 'tasks' table if they do not already exist.

    Args:
        db_path: Path to the SQLite database file.

    Returns:
        None
    """
    if not os.path.exists(db_path):
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()
        cursor.execute(
            """
            CREATE TABLE IF NOT EXISTS tasks (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                description TEXT NOT NULL,
                progress INTEGER NOT NULL DEFAULT 0
            );
            """
        )
        conn.commit()
        conn.close()

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """
    Manage application startup and shutdown.

    On startup ensures the database exists and connects asynchronously.
    On shutdown closes the database connection.

    Args:
        server: The FastMCP server instance.

    Yields:
        AppContext: Holds the async database connection.
    """
    await ensure_db_exists()
    db = await aiosqlite.connect("tasks.db")
    global _db
    _db = db
    try:
        yield AppContext(db=db)
    finally:
        await db.close()
        _db = None

# Pass lifespan to FastMCP server
mcp = FastMCP(
    "My App",
    dependencies=["pandas", "numpy", "aiosqlite"],
    lifespan=app_lifespan
)

# --- Core database functions (can be tested from main) ---
async def add_task_db(db: aiosqlite.Connection, description: str) -> int:
    """
    Insert a new task into the database.

    Args:
        db: Async SQLite connection.
        description: The task description text.

    Returns:
        The ID of the newly created task.
    """
    cursor = await db.execute(
        "INSERT INTO tasks(description) VALUES (?)",
        (description,)
    )
    await db.commit()
    return cursor.lastrowid

async def update_task_db(db: aiosqlite.Connection, task_id: int, progress: int) -> bool:
    """
    Update the progress of an existing task.

    Args:
        db: Async SQLite connection.
        task_id: ID of the task to update.
        progress: New progress percentage (0-100).

    Returns:
        True if the update affected a row, False otherwise.
    """
    cursor = await db.execute(
        "UPDATE tasks SET progress = ? WHERE id = ?",
        (progress, task_id)
    )
    await db.commit()
    return cursor.rowcount > 0

async def delete_task_db(db: aiosqlite.Connection, task_id: int) -> bool:
    """
    Delete a task by its ID.

    Args:
        db: Async SQLite connection.
        task_id: ID of the task to delete.

    Returns:
        True if a task was deleted, False otherwise.
    """
    cursor = await db.execute(
        "DELETE FROM tasks WHERE id = ?",
        (task_id,)
    )
    await db.commit()
    return cursor.rowcount > 0

async def list_tasks_db(db: aiosqlite.Connection) -> list[tuple[int, str, int]]:
    """
    Retrieve all tasks from the database.

    Args:
        db: Async SQLite connection.

    Returns:
        A list of tuples containing (id, description, progress).
    """
    cursor = await db.execute("SELECT id, description, progress FROM tasks")
    rows = await cursor.fetchall()
    return rows

# --- FastMCP tool wrappers ---
@mcp.tool()
async def add_task(ctx: Context, description: str) -> str:
    """
    Add a new task.

    Args:
        ctx: FastMCP request context with DB connection.
        description: Text description of the new task.

    Returns:
        Confirmation message with the new task ID and its description.
    """
    db = ctx.request_context.lifespan_context.db
    task_id = await add_task_db(db, description)
    return f"Added task {task_id}: '{description}'"

@mcp.tool()
async def update_task(ctx: Context, task_id: int, progress: int) -> str:
    """
    Update the progress of an existing task.

    Args:
        ctx: FastMCP request context with DB connection.
        task_id: ID of the task to update.
        progress: New progress percentage (0-100).

    Returns:
        Success message if updated, or not-found message.
    """
    db = ctx.request_context.lifespan_context.db
    success = await update_task_db(db, task_id, progress)
    if success:
        return f"Updated task {task_id} to progress {progress}%"
    return f"Task {task_id} not found."

@mcp.tool()
async def delete_task(ctx: Context, task_id: int) -> str:
    """
    Delete a task by ID.

    Args:
        ctx: FastMCP request context with DB connection.
        task_id: ID of the task to delete.

    Returns:
        Confirmation message if deleted, or not-found message.
    """
    db = ctx.request_context.lifespan_context.db
    success = await delete_task_db(db, task_id)
    if success:
        return f"Deleted task {task_id}."
    return f"Task {task_id} not found."

@mcp.tool()
async def list_tasks(ctx: Context) -> str:
    """
    List all tasks with their progress.

    Args:
        ctx: FastMCP request context with DB connection.

    Returns:
        A formatted string of all tasks, "id: description (progress%)" per line.
    """
    db = ctx.request_context.lifespan_context.db
    tasks = await list_tasks_db(db)
    if not tasks:
        return "No tasks found."
    lines = [f"{tid}: {desc} ({prog}%)" for tid, desc, prog in tasks]
    return "\n".join(lines)

# --- FastMCP resource wrappers ---
@mcp.resource("tasks://all")
async def get_all_tasks() -> str:
    """
    Return every task as a JSON list of objects.

    Returns:
        JSON string of a list of {id, description, progress}.
    """
    db = _db
    rows = await list_tasks_db(db)
    payload = [{"id": tid, "description": desc, "progress": prog} for tid, desc, prog in rows]
    return json.dumps(payload)

@mcp.resource("tasks://{task_id}")
async def get_task(task_id: int) -> str:
    """
    Return one task’s data or an error as JSON.

    Args:
        task_id: ID of the task to fetch.

    Returns:
        JSON string of {id, description, progress} or {error, id}.
    """
    db = _db
    cursor = await db.execute(
        "SELECT id, description, progress FROM tasks WHERE id = ?", (task_id,)
    )
    row = await cursor.fetchone()
    if not row:
        return json.dumps({"error": "Task not found", "id": task_id})
    tid, desc, prog = row
    return json.dumps({"id": tid, "description": desc, "progress": prog})

@mcp.resource("config://task-priorities")
def task_priorities() -> str:
    """
    Provide static mapping of task priority levels.

    Returns:
        JSON string of priority mappings {low, medium, high} -> int.
    """
    return json.dumps({"low": 0, "medium": 1, "high": 2})

# --- Main for testing ---
if __name__ == "__main__":
    mcp.run(transport="stdio")
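Before wiring everything into MCP, it’s worth sanity-checking the schema and queries on their own. The sketch below runs the same SQL as the async helpers above, but synchronously through the standard-library sqlite3 module against an in-memory database, so it never touches tasks.db:

```python
# Sanity check of the task schema and queries, using the standard-library
# sqlite3 module and an in-memory database. The async helpers above run
# the identical SQL through aiosqlite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS tasks (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        description TEXT NOT NULL,
        progress INTEGER NOT NULL DEFAULT 0
    );
    """
)

# Mirrors add_task_db: insert a row and read back its generated ID.
cursor = conn.execute("INSERT INTO tasks(description) VALUES (?)", ("Write blog post",))
conn.commit()
task_id = cursor.lastrowid
print(task_id)  # 1

# Mirrors update_task_db: rowcount tells us whether a row was touched.
cursor = conn.execute("UPDATE tasks SET progress = ? WHERE id = ?", (50, task_id))
conn.commit()
print(cursor.rowcount > 0)  # True

# Mirrors list_tasks_db: fetch every task as (id, description, progress).
rows = conn.execute("SELECT id, description, progress FROM tasks").fetchall()
print(rows)  # [(1, 'Write blog post', 50)]

conn.close()
```

Running the statements synchronously first makes it easy to confirm the schema before layering the aiosqlite connection and MCP tooling on top.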

Demonstrating AI Capabilities: Interacting with the MCP Server

Our MCP server is now running, exposing tools and resources. How does this demonstrate AI capabilities?

Imagine an LLM-based agent or chatbot connected to this MCP server. Instead of just generating text, it can use the tools we defined:

  1. User: “Add a task to buy milk.”
    • AI Agent: (Understands the intent) -> (Identifies the add_task tool) -> (Calls the MCP server’s add_task tool with description="buy milk") -> (Receives confirmation: "Added task 5: 'buy milk'") -> AI Response: “Okay, I’ve added ‘buy milk’ to your tasks.”
  2. User: “What are my current tasks?”
    • AI Agent: (Understands the intent) -> (Identifies the list_tasks tool) -> (Calls the MCP server’s list_tasks tool) -> (Receives the task list: "1: Write blog post (50%)\n5: buy milk (0%)") -> AI Response: “Here are your tasks: Task 1 ‘Write blog post’ is 50% done, and Task 5 ‘buy milk’ is 0% done.”
  3. User: “Update task 1 progress to 75%.”
    • AI Agent: (Understands the intent) -> (Identifies the update_task tool) -> (Calls the MCP server’s update_task tool with task_id=1, progress=75) -> (Receives confirmation: "Updated task 1 to progress 75%") -> AI Response: “Done. Task 1 progress is now 75%.”
  4. AI Needing Data: An AI performing analysis might need structured data.
    • AI Agent: (Needs all task data for analysis) -> (Identifies the tasks://all resource) -> (Queries the MCP server for tasks://all) -> (Receives JSON data: [{"id": 1, …}, {"id": 5, …}]) -> (Processes the JSON data).

The MCP server acts as the bridge, allowing the AI’s language understanding capabilities to translate into concrete actions and data retrieval within our custom application’s domain.
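Under the hood, each of these tool invocations travels over MCP’s JSON-RPC 2.0 transport. A call to add_task, for example, reaches our server roughly as the following request (the id value and exact framing here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "add_task",
    "arguments": { "description": "buy milk" }
  }
}
```

FastMCP handles the parsing and dispatch for us, so the decorated add_task function only ever sees the typed description argument.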

To try this from VS Code, open the Copilot Chat panel (the chat icon at the top of the window, next to the search box).

Configuring the MCP Server

The MCP server can be configured from VS Code’s settings.json, using uv as the command that launches it. Once the server is registered and running, it appears in VS Code’s list of MCP servers, and Copilot Chat can discover and call the tools we defined, as in the interactions sketched above.
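As a sketch of what that settings.json entry might look like (the server name "taskmanager", the file name server.py, and the uv arguments are assumptions here — adjust them to your project layout):

```json
{
  "mcp": {
    "servers": {
      "taskmanager": {
        "command": "uv",
        "args": ["run", "--with", "mcp[cli]", "mcp", "run", "server.py"]
      }
    }
  }
}
```

Recent VS Code versions also accept an equivalent servers block in a workspace-level .vscode/mcp.json file.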

Why Use a Framework like MCP?

  • Standardization: Provides a consistent way for AI agents to discover and interact with available tools and resources.
  • Discoverability: AI agents can potentially query the MCP server to understand what capabilities it offers.
  • Modularity: Separates the core application logic (database functions) from the interaction layer (MCP tools/resources).
  • Control: You explicitly define what actions and data are exposed, ensuring safety and security.

Putting It All Together

The workflow looks like this:

  1. Develop: Use VS Code + GitHub Copilot to rapidly build your application logic and the MCP server interface.
  2. Deploy: Run your MCP server.
  3. Integrate: Connect an AI agent to your MCP server, providing it with the descriptions of the available tools and resources.
  4. Interact: Users (or other systems) interact with the AI agent using natural language.
  5. Act: The AI agent translates requests into calls to your MCP server’s tools and resources, performing actions and retrieving data within your application.

Conclusion

In essence, combining a modern IDE like VS Code, an AI coding assistant like GitHub Copilot, and an interaction framework like MCP creates a powerful ecosystem for building the next generation of AI-integrated applications. Copilot drastically accelerates the development of the necessary backend services and APIs, while MCP provides the structured interface needed for AI agents to interact meaningfully with those services.

While this simple task management example only scratches the surface, it demonstrates the core potential. Imagine AI agents managing complex workflows, interacting with multiple microservices via MCP, or providing intelligent natural language interfaces to legacy systems. The VS Code + Copilot + MCP combination provides a practical and efficient way to start building that future today.

Ready to try it yourself? Grab the example code from GitHub:

https://github.com/sanjeevrayasam/taskmanager_mcp

Fire up VS Code, enable Copilot, and see how quickly you can build and experiment with your own AI-accessible tools!

R Sanjeev Rao