
Copilot and the Future of AI-Assisted Coding: Insights from a Software Engineer

Written by Bryce Dunn
Published on May 02, 2023
Reading time: 7 min read

  • ai-assisted coding
  • ML & AI

GitHub's Copilot is an IDE-integrated tool that streamlines coding by offering real-time text completion suggestions, predicting what developers might type next.

AI is currently a popular topic and has taken over the tech space. One important aspect of AI development in recent years has been its ability to assist with writing, refactoring, documenting, and testing code. Tools such as OpenAI’s ChatGPT let a software engineer chat with a bot to write, refactor, or debug code. Another key tool that has emerged is GitHub’s Copilot.

Copilot is an IDE-integrated tool that acts as a text-completion service, estimating what the developer might type next and displaying that suggestion onscreen. A simple press of the Tab key “accepts” the suggestion and inserts the text.

In Copilot’s first few stages, developers and users worked through the kinks and bugs. Now, more than a year after its inception, it has evolved into a well-supported coding AI that individuals and companies use.

I installed and used it while undertaking minimal research beforehand to see how Copilot could benefit my daily coding activities with virtually no training. To my surprise, it took minimal effort to set up and get started, and it immediately proved to be beneficial.

Here’s a summary of some of the ways it helped me in my daily coding.

Basic Completion and World Knowledge

The most common use case for Copilot is as a language-prediction model that types the code I already knew how to write, before I do. This is where it excels as a text-completion service. It’s quite successful at simple code completion, but what surprised me was its context awareness and its ability to assist me beyond writing code.

For instance, I needed to create some mock data quickly. I used sports as an example. I typed the first object about basketball and each person’s favorite team. Copilot immediately suggested another sport – football – and filled in theoretical favorites for each person.

If you know anything about American college football, you’ll immediately understand what I mean by “context awareness.” Alabama, Florida, and Ohio State are some of the premier football programs of the last two decades, with at least one of these teams appearing in the championship game in 13 of the past 20 years. Did Copilot randomly suggest football schools and happen to pick notable ones, or did it recognize the picks I entered for basketball (all very successful schools) and find similar picks for football? The latter seems more likely. For someone creating mock data, this greatly speeds up the process of creating accurate data.
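The mock data looked roughly like this sketch. The people and basketball picks are illustrative stand-ins, not the author's actual data; the football suggestions are the ones described above:

```typescript
// Hand-typed first object: each person's favorite basketball team.
// (The names and basketball picks here are hypothetical.)
const favoriteBasketballTeams = {
  alice: "Duke",
  bob: "Kansas",
  carol: "Kentucky",
};

// Copilot's suggested follow-up: a parallel object for football,
// filled with similarly prominent programs.
const favoriteFootballTeams = {
  alice: "Alabama",
  bob: "Florida",
  carol: "Ohio State",
};
```

The useful part is the parallelism: Copilot mirrored the shape of the first object and matched the "tier" of the picks, which is exactly what you want when generating plausible mock data.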

In fact, a lesser-known feature of Copilot is its ability to answer questions. I needed to remind myself about the default state of a specific property of the coding language I was using. I asked what the default for this property was, and Copilot helped answer my inquiry. This feature functions as text prediction, similar to when completing pre-written code, but it can also operate in a question-and-answer format as it did here.

Writing Functions

A common use case for Copilot is having it write the functions you need. You can enter a comment ahead of the function as an instruction and then start typing the function you want. Copilot will handle the rest. For example, I needed to find the intersection of two JavaScript sets. I entered the instruction in the comment above, started to write the function, and Copilot finished the function.
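A sketch of that completion, assuming a comment-plus-signature prompt like the one described (the function name is mine, not necessarily what Copilot produced):

```typescript
// Return the intersection of two sets
function intersection<T>(a: Set<T>, b: Set<T>): Set<T> {
  // A typical completion: spread one set into an array and keep
  // only the members that also appear in the other set.
  return new Set([...a].filter((item) => b.has(item)));
}

const common = intersection(new Set([1, 2, 3]), new Set([2, 3, 4]));
// common holds 2 and 3
```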

However, you can also simply start writing the function without commented instructions. In this example, Copilot not only correctly flipped the Boolean flags for filtering but also correctly called the search function, which should be called when filters are changed.
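The snippet in question looked something like the following sketch; the state shape, names, and search stub are hypothetical, but the pattern (flip the flag, then re-run the search) is the one Copilot reproduced:

```typescript
// Hypothetical filter state; the names are illustrative, not the original code.
interface FilterState {
  showActive: boolean;
  showArchived: boolean;
}

const filters: FilterState = { showActive: true, showArchived: false };

function search(state: FilterState): string {
  // Stand-in for the real search call that runs whenever filters change.
  return `searching with active=${state.showActive}, archived=${state.showArchived}`;
}

// Copilot's suggestion flipped the flag and then re-ran the search:
function toggleArchived(state: FilterState): string {
  state.showArchived = !state.showArchived;
  return search(state);
}
```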

Consider a more specific situation, such as inside a Redux/Ngrx selector. In this use case, I needed to first check whether the user had saved preferred colors. If so, I would select those colors; otherwise, I would use the default configuration colors. I never informed Copilot about any of this, but I had written similar selectors before. Nevertheless, Copilot responded perfectly: it checked for user preferences and used those, defaulting to the base configuration if the user had no saved preferences.
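In plain TypeScript, the logic Copilot produced amounts to the following sketch; the real code was an Ngrx selector, and the state shape here is hypothetical:

```typescript
// Hypothetical state shape; the real application state differed.
interface AppState {
  userPreferences: { colors?: string[] };
  config: { defaultColors: string[] };
}

// Select the user's preferred colors, falling back to the configuration defaults.
const selectColors = (state: AppState): string[] =>
  state.userPreferences.colors?.length
    ? state.userPreferences.colors
    : state.config.defaultColors;
```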

We can also see how Copilot can help us research while we code. For example, when I was writing a function to encode a plus sign for an API request, I named the function “encodePlus.” Copilot was able to understand my intention and provide the relevant solution:
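Reconstructed, the completion looked like this; the function name was essentially the entire prompt:

```typescript
// Copilot completed the body from the name alone (a reconstruction of the suggestion).
function encodePlus(input: string): string {
  // "%2B" is the percent-encoding of "+" in URIs.
  return input.replace(/\+/g, "%2B");
}

encodePlus("a+b c"); // "a%2Bb c"
```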

As you can see, without any leading comment or extra instruction, Copilot knew that a function titled “encodePlus” taking a string input would need to replace all instances of “+” with “%2B”, the standard URI encoding for a plus sign.

While I knew that “%2B” is what I needed to type, this would have been even more helpful to a developer who didn’t know what the encoding standard for a plus sign was.

Pitfalls of Copilot

Although Copilot’s suggestion to use “%2B” was helpful in the previous example, it highlights one of the potential pitfalls of relying on AI models: they can be incorrect. If a junior developer were to use this function and Copilot suggested using “%3B” instead, would the developer recognize the mistake? Or might the erroneous code slip through and make it into production? Consider this example, where it is difficult to detect Copilot’s error:

Copilot is helping me write a Jest test for an Angular pipe that should accept a Unix number and output it as a string in the correct time zone. The test is structurally correct and would run, but it would fail. Why?

The Unix time (1602314210000) and the date time string (‘10/10/2020 6:10 AM’) aren’t quite right. Using the Unix time, we’d expect “10/10/2020 3:16:50 AM” in the provided time zone (New York). This mistake is almost impossible to spot without typing the Unix time into a time converter (can anyone just ‘read’ Unix times?!).
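The discrepancy is easy to confirm once the pipe is simplified to a plain function (a sketch; the real code was an Angular pipe under a Jest test):

```typescript
// Format a Unix-millisecond timestamp in a given IANA time zone.
function unixToDateString(unixMs: number, timeZone: string): string {
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    year: "numeric",
    month: "2-digit",
    day: "2-digit",
    hour: "numeric",
    minute: "2-digit",
    second: "2-digit",
    hour12: true,
  }).format(new Date(unixMs));
}

const actual = unixToDateString(1602314210000, "America/New_York");
// actual renders as 10/10/2020, 3:16:50 AM, not the 6:10 AM Copilot suggested,
// so a test asserting Copilot's value fails.
```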

That being said, the test failed immediately. So, I typed the Unix time into an online converter and adjusted the test as necessary. Overall, I probably saved time by using Copilot to write this test. However, it’s important to note that Copilot (or any AI) can make mistakes. For example, consider this silly scenario when trying to ask a question outside of a coding context:

I suppose, at the least, Copilot suggested the name of a former president instead of a random name. Although Copilot is a useful tool, it’s important to double-check its output. And validating the output is not always a black and white matter of “right” or “wrong.” For instance, Copilot might generate a valid Redux/Ngrx reducer function, but not in the succinct style that I prefer:
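Something like the following pair, where both reducers produce the same new state (the shapes and names are illustrative, not the original code):

```typescript
interface ThemeState {
  colors: string[];
  loading: boolean;
}

// The style Copilot generated: valid, but step-by-step.
function setColorsVerbose(state: ThemeState, colors: string[]): ThemeState {
  const newState = { ...state };
  newState.colors = colors;
  newState.loading = false;
  return newState;
}

// The succinct style the project prefers: a single spread expression.
const setColors = (state: ThemeState, colors: string[]): ThemeState => ({
  ...state,
  colors,
  loading: false,
});
```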



Both pieces of code perform the same task, but one adheres to the style of the project while the other does not. This is a minor issue when using AI coding assistance, which should improve over time.

Overall, my experience with Copilot as a software engineer has been positive and productive (not to mention fun!). While Copilot occasionally sends me on a wild goose chase with poor code, the vast majority of the time it provides correct and helpful code, or code that’s close enough for me to edit to my liking.

Copilot is especially helpful for repetitive tasks, such as writing similar blocks of code many times (like test cases or mock data). It has also surprised me with its ability to suggest better code, including methods I hadn’t considered before.

However, I caution users to double-check everything Copilot generates (though I already do this with my own code and other developers’ code during reviews). Some in the software community believe that AI-assisted coding tools can be dangerous for junior developers, who may rely solely on Copilot instead of learning how and why the code it generates is valid or efficient.

But I believe this goes both ways. Junior developers should be intentional about understanding the code they submit, whether written by them or by Copilot. From the inverse perspective, however, Copilot could be a great teaching tool to help junior developers learn.

All my experiences recounted here were conducted prior to GitHub’s announcement of the next stage of IDE-integrated AI coding help, GitHub Copilot X.

This chat-based AI has already generated significant hype for enabling developers to chat with an AI to produce, edit, refactor, document, and test code. Technology has always been a rapidly changing space. Still, in the next few years, AI-assisted coding has the potential to fundamentally overhaul the way software engineers write and review code.
