Essential custom instructions for GitHub Copilot

Published at
1/13/2025
Categories
ai
githubcopilot
prompting
vscode
Author
burkeholland

Prompt engineering - or should I say "Prompt Negotiation" - is (currently) an important part of working with GitHub Copilot. Your success with the various available models will be directly related to how well you prompt them. While GitHub Copilot tries to handle much of the prompt engineering for you behind the scenes, there is still a lot you can do yourself to improve the accuracy of answers with Custom Instructions.

Custom Instructions

Custom Instructions are exactly that - custom prompts that get sent to the model with every request.

You can define these at a project level or at an editor level. Often these are demonstrated as specific project-level instructions, such as prefer fetch over axios. These project-specific custom instructions are quite powerful and can be checked in and shared. You can create a custom instructions file for your project by adding a .github/copilot-instructions.md file, or by adding a file attribute to the instructions setting. You can have multiple of these file attributes, which means you can have multiple different instructions files.
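For example, pointing the instructions setting at files might look like this in your User Settings (JSON) - a sketch, where the file paths are hypothetical and the shape assumes the file attribute supported by the instructions setting:

```json
"github.copilot.chat.codeGeneration.instructions": [
  { "file": "./docs/general-instructions.md" },
  { "file": "./docs/frontend-instructions.md" }
]
```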

These very specific project instructions are super helpful for getting better help from your AI pair programmer. But it's less obvious what you can/should do with the more global, editor-level instructions.

I've been working heavily with GitHub Copilot for the past several months, and I've added four custom instructions that I've found greatly increase my productivity with GitHub Copilot. These are more generic prompt engineering "best practices" that will help you avoid pitfalls and get better code from the LLM.

You can add the "global instructions" that I'm about to give you by going to your User Settings (JSON) file and adding keys like so…

"github.copilot.chat.codeGeneration.instructions": [
    { "text": "this is an example of a custom instruction" }
]


Ok - let's do it.

Ask for missing context

"Avoid making assumptions. If you need additional context to accurately answer the user, ask the user for the missing information. Be specific about which context you need."

The Achilles' heel of LLMs is that they are designed to provide a response no matter what. It's the paperclip problem applied to LLMs. If you design a system to provide an answer, it is going to do that at all costs. This is why we get hallucinations.

If I said to you, "make a GET request to the API", you would likely ask me several follow-up questions so that you could actually complete that task in a way that works. An LLM will just write a random GET request because it does not actually care if the code works or not.

Copilot tries to mitigate a lot of this for you with its system prompt, but you can reduce hallucinations further by instructing the AI to ask you for clarification when it needs more context.

This isn't bulletproof. LLMs seem so hell-bent on answering you at all costs that I often find this instruction is just ignored. But on the occasions that it works, it's a nice surprise.

Copilot asking for missing context

Provide file names

"Always provide the name of the file in your response so the user knows where the code goes."

I've noticed that Copilot will sometimes give me back several blocks of code but won't mention where they belong. I then have to figure out which files it is referring to, which takes an extra cycle. This prompt forces the LLM to always provide the file name.

If you are working in a theoretical space where you aren't talking about specific project files, Copilot will provide made-up file names for the code snippets. This is fine because it's a detail that doesn't matter in that context.

Write modular code

"Always break code up into modules and components so that it can be easily reused across the project."

I tend to write a lot of frontend code, which is all about components these days. I've found that Copilot will often try to do too much in a single file when it should ideally break out UI code into separate components. AIs are fairly good at organization, so if you ask Copilot to break things out into components, it will do an impressive job of suggesting the right places to decouple. I've found this prompt works quite well in non-UI code as well. If I ask for a change in an API, this prompt helps Copilot break out services, repositories, etc.

Code quality incentives

"All code you write MUST be fully optimized. 'Fully optimized' includes maximizing algorithmic big-O efficiency for memory and runtime, following proper style conventions for the code language (e.g. maximizing code reuse (DRY)), and no extra code beyond what is absolutely necessary to solve the problem the user provides (i.e. no technical debt). If the code is not fully optimized, you will be fined $100."

This prompt comes almost verbatim from Max Woolf's "Can LLMs write better code". In that post, Max describes trying to get LLMs to write better code by iterating on the same piece of code with the prompt, "write better code". He finds that the above prompt combined with Chain of Thought produces very nice results - specifically when used with Claude. He uses the very last line to incentivize the LLM to improve its answers in iteration. In other words, if the LLM returns a bad answer, your next response should inform the LLM that it has been fined. In theory, this makes the LLM write better code because it has an incentive to do so when it might otherwise keep returning bogus answers.

Chain of Thought is when you tell the model to "slow down and go one step at a time". You don't need to tell Copilot to do this because it is already part of the system prompt.
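Putting it all together, the four instructions above can be combined in your User Settings (JSON) like this - a sketch, with each text entry taken verbatim from the instructions in this post:

```json
"github.copilot.chat.codeGeneration.instructions": [
  { "text": "Avoid making assumptions. If you need additional context to accurately answer the user, ask the user for the missing information. Be specific about which context you need." },
  { "text": "Always provide the name of the file in your response so the user knows where the code goes." },
  { "text": "Always break code up into modules and components so that it can be easily reused across the project." },
  { "text": "All code you write MUST be fully optimized. 'Fully optimized' includes maximizing algorithmic big-O efficiency for memory and runtime, following proper style conventions for the code language (e.g. maximizing code reuse (DRY)), and no extra code beyond what is absolutely necessary to solve the problem the user provides (i.e. no technical debt). If the code is not fully optimized, you will be fined $100." }
]
```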

The model matters more than the prompt

While these prompts will help you get better results from Copilot, in my experience the most effective thing you can do is pick the right model for the job. As of today, I see it like this…

GPT-4o: Specific tasks that don't require much "creativity". Use 4o when you know exactly what code you need and it's just faster if the LLM writes it.

Claude: Harder problems and solutions requiring creative thinking. This would be when you aren't sure how something should be implemented, it requires multiple changes across multiple files, etc. In my experience, Claude is also far better at helping with design tasks than GPT-4o.

o1: Implementation plans, brainstorming, and docs writing. My friend Martin Woodward finds that o1 is particularly good with tricky bugs and performance optimizations.

Gemini: Not widely available yet. I'm using this one more and watching it closely to see where it shines. I have high hopes.

Living instructions

I hope these instructions are helpful for you. I consider them "living", and I hope to keep this list updated as I add more or change these as Copilot itself evolves, new models come along, and our general knowledge of prompting improves.
