NSWI170 Computer Systems
Students' feedback rebuttal
We are constantly striving to make the course better. We have received a lot of feedback from past runs, and we would like to address the most significant issues here, since most of the problems likely started as misunderstandings.
Lecture vs labs
The labs and the lectures are quite independent. Aside from the initial C++ tutorial, the labs are not directly connected to the topics covered in the lectures. This is not an accident; it is by design. Some of the lectures could indeed be enhanced by additional labs; however, that would also require increasing the lab hours (to have labs every week), which is currently not possible for many reasons, such as a shortage of manpower and curriculum size limitations.
It is our opinion that getting in touch with real hardware whilst learning the fundamentals of C/C++ is much more valuable than having additional seminars to deepen the topics from the lectures (especially considering that the lectures are mostly an overview and there are subsequent courses that will be more thorough). If students feel that some of the topics are not clear enough from the lectures, they can always arrange a consultation with the lecturers or seek additional information on the web.
Exam format
The exam is a test taken in digital form (on PCs in the labs). It has been suggested that an oral exam would be preferable. We have also received complaints about the fact that the grading only counts the correct answers and does not consider intermediate calculations or procedures. We understand these concerns, and there are arguments to support this point of view. On the other hand, there are arguments (which we consider stronger) that support our approach, and we plan to keep the format for the foreseeable future. The main reasons are:
- A test is much more objective than an oral exam. A test is never biased by the current mood of the examiner, the student's appearance, tone of voice, how well the examiner and the student are acquainted, etc. Nor is the student's performance affected by how the interview is conducted; in an oral exam, the phrasing of a side question may inadvertently provide a hint or misguide the student toward a wrong answer.
- Some people are good speakers; some are more introverted. The former have a significant advantage in oral exams, as they can cover gaps in their knowledge better than the latter. Instead of rhetorical skills, we prefer to test the precision of the students (i.e., providing correct answers, not just correct procedures).
- The test is much faster to take, and since it is taken in digital form, it is graded immediately (so the students do not have to wait for the results). Furthermore, given our current personnel capacity, conducting individual interviews would not be technically possible (if we want to maintain a reasonable level of quality).
Grading the test questions strictly (each question is either completely correct or completely wrong) may seem harsh, but we have also considered this thoroughly, and we think it is the desired level of strictness due to several mitigating factors:
- A student can have one 3-point answer wrong and still get the best mark.
- There is no time pressure. If anyone needs more time to double-check the answers, it will be granted (within reason).
- The presented problems are simple enough. Basically, they are a direct application of what students should know from the lectures, so there is no need to design new algorithms or solve difficult equations.
- We know that one technical mistake (like a numeric error in a calculation) may lead to a wrong answer. On the other hand, this level of accuracy will likely be demanded from our graduates in their professional careers; a small typo in code may have disastrous and costly consequences. So, the students are encouraged and required to check their outputs, including the calculations, to reduce the possibility of such mistakes to a minimum (as mentioned before, extra time is not an issue here).
- All the topics raised in the more complex questions are covered with examples in the lectures (and we are continuously improving this part based on the detailed feedback). Furthermore, sample instances of these questions will be presented and explained in the last lecture.
In addition, we have added a special dispensation: students may leave the test after it has started to avoid getting a bad grade (should they decide that they have underestimated their preparation). This dispensation can be invoked only once by each student.
Imbalance in lab requirements
We have received complaints that some lab teachers set their requirements at a different level than others. Based on an internal discussion, we have designed more detailed guidelines for the lab teachers. Furthermore, we have written down the coding guidelines, so it is clear which issues we emphasize (since code quality is the most subjective topic in this class).
However, if anyone gets the impression that a lab teacher is being overly demanding with the lab requirements, do not hesitate to bring it up with the course guarantors so we can level things out. On the other hand, please understand that each lab teacher has an individual style, and these differences cannot be leveled out entirely.
Arduino simulator in ReCodEx
The remarks related to the Moccarduino emulator used in ReCodEx can be summarized under two larger issues:
1) The emulator does not support some features. That is correct, but the list of differences is not that long, and it should be bearable to write solutions that avoid these few pitfalls. In addition, some things are disabled explicitly (e.g., blocking the loop()) since we do not wish them to be used in the assignments (using delay(), bypassing the standard API, or using interrupts/timers). The undesired practices are explicitly mentioned by the teachers in the labs, and the step-by-step guides for the assignments are designed so that the students can avoid them without extra effort.
2) It is difficult to debug. Code may often seemingly work on the device, yet fail the tests in ReCodEx. In most cases that we reviewed manually, the reason was that the student did not follow the specification of the assignment diligently (e.g., the timing was off) or made a typical mistake which the ReCodEx tests were specifically designed to catch. We improved the test outputs in 2023/24 the best we could to minimize the problem. Furthermore, the students are not expected to debug their code in a trial-and-error manner using ReCodEx as a black-box evaluator. Should this happen, the recommended course of action is to re-check the assignment specification (and the Moccarduino differences mentioned in the first point), and subsequently consult the lab teacher (via Mattermost) if the bug is persistent and elusive. Lab teachers should have enough experience and also have better means for testing, so they can help find the problem.
Emphasis on code quality
There is a particular emphasis on code quality in the labs, and some students complained that code quality has nothing to do with understanding the Arduino platform or being able to write functionally correct solutions. Technically, that is true, but writing good code is an essential skill that usually takes many years to develop. Hence, we need to incorporate it into all programming courses so the students can get enough practice. On the other hand, we have reduced the scope to five important topics to make this more manageable.
In addition, code quality can be seen as a balancing element that attempts to make the labs similarly difficult for all levels of students. If you feel that the Arduino assignments are not challenging enough, focus more on the code quality and find some challenges there so you can still learn new things instead of dwelling at the bottom of your comfort zone, simply utilizing what you already know.
Finally, we are working on synchronizing the code quality requirements with Programming I and II, so the students get more accustomed to them and handle them more easily. However, such changes take time since Programming I and II courses have some momentum and cannot be changed overnight.
Final lab tests
First of all, let us answer the question of why the test was introduced. It came to our attention that many students were getting undesired help from AI tools like Copilot or GPT, although these tools are forbidden in this class. As a result, such students were not able to code on their own. The test is designed to verify the students' ability to write their own code without any assistance, which is one of the most important learning outcomes of the lab part of the course.
We received some complaints regarding the length of the test. Currently, the test is set to two hours, and in our experience, that should be sufficient considering the students can use their own code from the home assignments. Furthermore, the lab teachers supervising the test may extend the time limit if the situation warrants it. On the other hand, it might be quite difficult to complete the test in time if one needs to write everything from scratch or to consult the Arduino API documentation frequently. Therefore, we strongly suggest preparing thoroughly (especially putting enough effort into the home assignments).
Furthermore, there was some confusion about the maximum number of attempts. In the first year, we announced two attempts and then increased it to three due to formal complaints. At present, we have decided to keep three attempts, but the third attempt will be special. The dates for the third attempt will be in September, and the students will be required to polish and finalize their home assignments (over the summer) before they can attend it. Check the lab test rules for more details.