Building a Chrome extension using only AI

I’ve been dabbling with generative AI tools for the last couple of months, and being a hobby programmer, checking out and tinkering with them is my favorite pastime (a full-time one, given that I am on a sabbatical). Like everyone else, I’ve been amazed by the recent developments, particularly in AI coding tools. AlphaCode, a code-generation system, achieved an average ranking in the top 54.3% in recent programming competitions on Codeforces. This is pretty impressive, and AlphaCode 2, which was released last week, pushes this up to roughly the 85th percentile. These developments are fundamentally going to change software engineering in the next 5 years. Similarly, given that competitive-programming questions are one of the primary ways candidates are judged during interviews, we are going to see a new paradigm of evaluation emerge for software engineers. Codebase onboarding for new employees, pair programming, debugging, etc. are all going to change fundamentally, and developer workflows 10 years down the line will be drastically different from when I started out.

I wanted to give these tools a try and see if it’s actually possible to build an end-to-end project with more than 95% of the code written by AI. To keep things simple, I decided to build a Chrome extension that takes an OpenAI API key as input and generates an image based on either a text prompt or the text selected in the current webpage. The last Chrome extension I built was some 8 years ago, and the only thing I remembered about building one was that it has a manifest.json file containing all the config. I did not reference the docs even once throughout this whole exercise.
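
To give a sense of what that config looks like: I’m not reproducing the exact file the AI generated, but a Manifest V3 manifest for an extension like this would look roughly like the sketch below. The name, description, and permissions list are my own guesses; the host_permissions entry for the OpenAI API is the piece that matters later when the CORS issue shows up.

{
  "manifest_version": 3,
  "name": "DALL-E Image Generator",
  "version": "1.0",
  "description": "Generate images from selected text or a custom prompt using the DALL-E API.",
  "action": { "default_popup": "popup.html" },
  "permissions": ["storage", "activeTab", "scripting"],
  "host_permissions": ["https://api.openai.com/*"]
}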

My setup for building this was fairly simple: Cursor.sh along with my OpenAI API key was all I used. I loaded the extension into Chrome manually and used the developer tools to inspect any errors.

This was the first prompt I used:

build a chrome extension to generate images using DALL E api based on the text selected on the browser or by inputting a custom text prompt. 
The extension should take the API key from the user and generate images inside the popup. Generate all the files. 
make me a popup.html page. it should have a text field to input and save an openai key, input for a text prompt, download button to download the generated images, regenerate button
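
One thing this prompt glosses over is how the selected text actually gets from the page into the popup. I’m not claiming this is exactly what the generated code does, but in Manifest V3 the standard way is to inject a tiny script into the active tab, roughly like this:

// popup.js (sketch): read the current selection from the active tab.
// Needs the "activeTab" and "scripting" permissions; a plausible approach,
// not necessarily the exact code the AI wrote.
async function getSelectedText() {
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
  const [injection] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    func: () => window.getSelection().toString(),
  });
  return injection.result;
}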

After some to and fro, I had the basic wireframe of my extension ready. Loading popup.html looked something like this:

After this, I followed up with the following prompt:

include the dalle api interaction, the download logic for images, 
also, after successfully saving the api key, show a small "saved" icon beside the "save" button. 
Edit my popup.js and popup.html to make this work
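
I don’t have the exact code the model produced in front of me, but the save-and-confirm part of popup.js boils down to something like this (the element IDs are illustrative, and chrome.storage is my assumption for where the key gets persisted):

// popup.js (sketch): persist the API key and briefly show a "Saved" indicator.
document.getElementById('save-key').addEventListener('click', () => {
  const apiKey = document.getElementById('api-key').value.trim();
  chrome.storage.sync.set({ openaiApiKey: apiKey }, () => {
    const saved = document.getElementById('saved-indicator');
    saved.style.display = 'inline';                             // show "saved"
    setTimeout(() => { saved.style.display = 'none'; }, 2000);  // hide it again
  });
});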

This required some debugging on my end, feeding the errors back to the AI. It helped debug a CORS issue where I had missed adding the OpenAI domain permissions to the manifest file. I also had a type error due to improper handling of the DALL-E API response, which GPT-4 again fixed. Finally, I asked it to beautify my popup.html with some spacing and a unique color for every button. Every single JS function worked flawlessly as intended, despite me writing less than 2% of the code.
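
For reference, the image generation itself boils down to a single call to OpenAI’s images endpoint. The sketch below isn’t a copy of my popup.js, but it shows the shape of the request and, more importantly, the shape of the response (data[0].url) that my type error came from; the OpenAI host permission in the manifest is what lets this fetch through without the CORS error.

// popup.js (sketch): request an image from the DALL-E API and return its URL.
async function generateImage(apiKey, prompt) {
  const response = await fetch('https://api.openai.com/v1/images/generations', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ prompt: prompt, n: 1, size: '512x512' }),
  });
  if (!response.ok) {
    throw new Error(`OpenAI API error: ${response.status}`);
  }
  const result = await response.json();
  // The generated image URL lives at data[0].url in the response body.
  return result.data[0].url;
}

From there, displaying the result is just setting that URL on an <img> element in the popup; the download and regenerate buttons reuse the same URL and prompt.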

You can find the source for the extension here on my GitHub. Overall, I’ve been pretty impressed by its coding abilities. Tools like these dramatically increase the productivity of hobbyist developers like me, and I am really looking forward to coding more again.

· ai, coding, cursor, extension, llm