If you have just stumbled upon my OSD600 series of blog posts, it has been created to document and share my learnings as I progress through my Open Source Development college course.
Another week, another blog post. This week, my classmates and I focused on creating a Continuous Integration (CI) workflow for our projects using GitHub Actions. The goal was to keep our projects from ever breaking by automatically building the code and running tests on every push and pull request to the main branch.
If you are curious, feel free to check out my project on GitHub:
a CLI tool for adding comments to your source code files
ADDCOM
addcom is a CLI source code documenter tool that gives coders an easy way to add comments to their source code files.
Give it a relative/absolute path to your file and it will analyze its contents and add comments using a Large Language Model's chat completion.
See a demo of the functionality on YouTube: addcom-demo
By default, addcom uses the Groq API endpoint for chat completion. However, you can specify a custom endpoint using the --base-url or -u flag option. (If you do this, make sure to obtain an appropriate API key and specify the model supported by the chosen provider using…
I welcome feedback/contributions, so don’t hesitate to take a look and get involved 😊
Creating a GitHub Actions Workflow
To set up my workflow, I used the GitHub Actions web interface and followed GitHub's Building and testing Python guide. I started with the Python Application workflow template. I tweaked it to build and test my project based on the guide's instructions.
The final workflow file with the build-and-test job: python-app.yml
```yaml
name: Run Unit Tests

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Build the tool
        run: |
          pip install .
      - name: Test with pytest
        run: |
          pip install pytest pytest-mock
          pytest
```
On every push and pull request to main, this workflow sets up a Python 3.12 environment, installs the project as a package, and runs tests using pytest to maintain code reliability.
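The trigger section is also easy to extend. For example (not part of my actual workflow, just an option the GitHub Actions docs describe), adding `workflow_dispatch` lets you re-run the job manually from the Actions tab without pushing a commit:

```yaml
on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
  workflow_dispatch:   # adds a "Run workflow" button in the Actions tab
```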
Adding Tests to Another Project
After setting up and testing my CI workflow, the next step was adding a new test case to one of my classmates' projects. This week, I worked on Jin's project:
CodeMage is a tool that translates a source file written in one programming language into another language
The translation will be done by Large Language Model AI (such as ChatGPT)
Release 0.1
Features
1. Supported Languages: Javascript, Python, C++, Java
2. Default target Language is Python
3. Supported LLM model: openrouter, groq
4. Default LLM model is openrouter(sao10k/l3-euryale-70b)
As I scanned Jin's repository, I noticed that our testing and workflow setups share many similarities, likely because we both have Python projects and use pytest for testing. There were a few notable differences, though: Jin uses Poetry for packaging and dependency management, which accounts for the differences in the build and test setup; he organizes test cases by file rather than by function, as I do; and he relies on unittest.mock instead of pytest-mock for mocking.
Since I was already familiar with the stack, writing a new test case for Jin's project didn’t pose any challenges. The main hurdle was identifying an untested area, as Jin had already written comprehensive tests for most of the functionality. After discussing it with him, he suggested focusing on argument parsing.
Following his recommendation, I reviewed the existing argument parsing function and tests and decided to write a test case for handling invalid flag argument options:
```python
# Test invalid flag argument option
def test_arg_parser_invalid_option():
    with patch("sys.argv", ["code_mage.py", "--invalid"]):
        with pytest.raises(SystemExit) as excinfo:
            arg_parser({})
    assert excinfo.type is SystemExit
    assert excinfo.value.code != 0
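To see why the non-zero assertion holds, here is a self-contained version of the same pattern, assuming an argparse-based parser (the `parse` function and `--language` flag are hypothetical stand-ins, not CodeMage's actual code). When argparse meets an unrecognized option, it prints an error and exits with code 2:

```python
import argparse
import pytest

def parse(argv):
    # Hypothetical stand-in for CodeMage's arg_parser
    parser = argparse.ArgumentParser(prog="code_mage.py")
    parser.add_argument("--language")
    return parser.parse_args(argv)

def test_invalid_flag_exits_nonzero():
    with pytest.raises(SystemExit) as excinfo:
        parse(["--invalid"])
    # argparse reports unrecognized options by calling sys.exit(2)
    assert excinfo.value.code != 0
```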
I added this test to tests/test_argsParser.py to make sure that the argument parser terminates the program with a non-zero exit code when invoked with an invalid flag, ran it locally to confirm it works as intended, and submitted a PR. My code passed Jin's CI workflow and was merged to main almost right away.
Although I wasn’t too familiar with writing CI workflows at the start of this week, the entire process turned out to be straightforward (largely thanks to the simplicity of my CLI tool). I’m glad I had the opportunity to dive into the GitHub Actions documentation, as I know it will certainly come in handy in my future projects.
Having been on both sides of the process, as a contributor and as a repository maintainer, I can confidently say that a solid CI workflow is very helpful. It streamlines development and provides peace of mind, knowing that no breaking changes will sneak into my main branch.