EECS 481 at the University of Michigan
Taken in Fall 2022, taught by Westley Weimer.
This course teaches the fundamentals of software engineering with a focus on five core topics: Measurement and Risk, Quality Assurance, Software Defects, Software Design, and Productivity at Scale. These concepts are taught through six projects, beginning with generating a high-coverage test suite and ending with a contribution to an open-source codebase.
The course uses many different tools and libraries in the assignments throughout the semester. All projects except Project 6 are done in an Ubuntu virtual machine. Some of the most notable tools are gcov and Cobertura for test coverage, and American Fuzzy Lop (AFL) and EvoSuite for test-input generation. Lastly, we used GrammaTech's CodeSonar and Facebook/Meta's Infer as static analysis tools to detect potential defects in various subject programs.
Testing: Coverage, Automation, Mutation Testing
Test Coverage
The first assignment focuses on generating a high-coverage test suite for the libpng library. The test suite consists of PNG images that we selected or created. The goal was to reach at least 36% statement coverage as measured by gcov, gcc's coverage tool. Getting above 30% statement coverage was straightforward, but getting to 36% required more creativity in choosing images for the test suite. We eventually reached 38% coverage by manually editing properties of the PNGs, such as gamma values and shading properties.
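gcov computes statement coverage by instrumenting the compiled binary, but the metric itself is simple: the fraction of executable lines that some test actually runs. The toy Python sketch below (an illustration of the metric, not of how gcov works) uses the interpreter's trace hook to record which lines of a tiny subject function a single test input exercises:

```python
import sys

EXECUTED = set()  # line numbers of classify() observed to run

def tracer(frame, event, arg):
    # Record every source line executed inside classify().
    if event == "line" and frame.f_code.co_name == "classify":
        EXECUTED.add(frame.f_lineno)
    return tracer

def classify(x):
    # Toy subject program with two branches.
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
classify(5)          # exercises only the positive branch
sys.settrace(None)
print(len(EXECUTED), "lines covered by this one test input")
```

Adding a second test input, such as classify(-3), covers the remaining branch and grows the set of executed lines, which is exactly the effect adding well-chosen PNGs had on our libpng suite.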
Test Automation
The second project built on the first, using libpng's pngtest program as the subject for which we generated test inputs with the fuzz-testing tool American Fuzzy Lop (AFL).
Most of the trouble with this assignment came from getting AFL set up in our virtual machine. Our goal was to reach 510 total paths and to increase the statement coverage achieved by the seed images from the first project. We reached 513 paths after letting AFL run for about three hours.
Our coverage increased significantly when AFL's generated inputs were combined with our manually selected test suite, rising by about eight percentage points. The project was a great lesson in how easy automated, coverage-guided fuzzing is to use, and how it can produce strong results with far less manual effort.
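AFL's core loop takes seed inputs (our PNGs), mutates their bytes, and keeps any mutant that reaches a new path in the instrumented program. The sketch below shows only the mutation half of that loop in Python; it is an illustrative approximation of AFL's havoc-style bit flips, not AFL itself, and the seed bytes are fabricated for the example:

```python
import random

def mutate(seed, n_flips=4, rng=None):
    """Return a copy of seed with a few random bit flips, in the spirit
    of AFL's havoc-stage mutations (illustrative sketch only)."""
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] ^= 1 << rng.randrange(8)  # flip one bit of one byte
    return bytes(data)

# Every PNG file starts with this fixed 8-byte signature.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
seed_input = PNG_SIGNATURE + b"fake chunk data for illustration"

# A seeded RNG makes any interesting mutant reproducible.
fuzzed = mutate(seed_input, rng=random.Random(0))
print(len(fuzzed) == len(seed_input))
```

Each fuzzed input would then be fed to pngtest; AFL's instrumentation decides which mutants are worth keeping as new seeds, which is what the "total paths" counter tracks.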
Mutation Testing
The third project had us write a series of Python programs that generate mutants of a subject Python program. We did this in a variety of ways, mainly by changing comparison operators, swapping binary operators such as + and -, and deleting function calls. This project taught me how mutation testing measures a test suite's ability to detect many types of faults in a codebase.
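One of the operator swaps described above can be sketched with Python's ast module. This is a minimal illustration under assumed names (the project's actual mutant generator differed): it parses a tiny subject function, swaps + for -, and shows how a test assertion "kills" the resulting mutant.

```python
import ast

class SwapAddSub(ast.NodeTransformer):
    """One classic mutation operator: swap + and - in binary expressions."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        elif isinstance(node.op, ast.Sub):
            node.op = ast.Add()
        return node

# Hypothetical subject program for the example.
source = "def total(a, b):\n    return a + b\n"
tree = SwapAddSub().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # the mutant now computes a - b

# A test "kills" the mutant when an assertion that passed on the
# original program fails on the mutant.
namespace = {}
exec(compile(tree, "<mutant>", "exec"), namespace)
mutant_total = namespace["total"]
```

A test suite that never notices the difference between a + b and a - b has a coverage gap, which is exactly the signal mutation testing provides.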