11 Matching Annotations
  1. Dec 2025
    1. GitHub Copilot

      It might be useful to link to this GitHub blog post, which explains and gives recommendations on how to use GitHub Copilot: https://github.blog/developer-skills/github/how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/?ref_product=copilot&ref_type=engagement&ref_style=text

    2. This bundle provides everything reviewers need. It also ensures that anyone who maintains the code later won’t be flying blind.

      We could include here my suggestion of documenting which AI-generated functions were touched and/or altered by the user and which were kept exactly as the AI suggested. This makes explicit which functions the authors know best, because they are the ones they modified. A sketch of one possible convention follows.
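
      As a minimal sketch (the tag wording and the function bodies are hypothetical, not an agreed convention), provenance could be recorded in a one-line comment per function:

      ```python
      # Provenance: AI-generated (Copilot), human-modified (author rewrote the None handling).
      def clean_measurements(values):
          """Drop missing entries and convert the rest to floats."""
          return [float(v) for v in values if v is not None]

      # Provenance: AI-generated (Copilot), unmodified (accepted exactly as suggested).
      def mean(values):
          """Arithmetic mean of a non-empty list of numbers."""
          return sum(values) / len(values)
      ```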

    3. Testing and Edge Cases

      I think before testing we need a section on efficiency checks (the issue Zander mentioned in the meeting). We could either create a protocol that asks the AI to check whether the objective can be achieved more efficiently, or review the code ourselves and flag places where there seems to be unneeded code. I think the second option is better, because it lets us verify that the author really reviewed (or at least skimmed) the code created by the AI; the sketch below shows the kind of redundancy such a pass tends to catch.
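
      As a hedged illustration (the function and its redundancy are invented for this example, not taken from our codebase), a human efficiency pass on AI output often finds patterns like this:

      ```python
      # AI-generated version: builds an intermediate list and loops twice.
      def count_positive_verbose(values):
          positives = []
          for v in values:
              if v > 0:
                  positives.append(v)
          count = 0
          for _ in positives:
              count += 1
          return count

      # After the efficiency review: same result, one pass, no intermediate list.
      def count_positive(values):
          return sum(1 for v in values if v > 0)
      ```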

    4. how this code works

      This might be a bit vague. We should decide whether to do it per function or per task. Also, if it is per task, it would be great to ask for a diagram of the new functions and how they interact with the old ones; a small sketch of such a diagram follows.
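
      As a minimal sketch (all function names are hypothetical), the diagram could be as simple as an annotated call graph kept in the PR description or as a comment block:

      ```python
      # Hypothetical task-level interaction diagram (comment block only):
      #
      #   new: summarize_by_site() --calls--> existing: load_raw_data()
      #        |                              existing: validate_schema()
      #        +--calls--> new: aggregate_weekly() --calls--> existing: to_weeks()
      #
      # "new" = functions added in this task; "existing" = functions already in the repo.
      ```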

    5. Please keep a concise running summary of our interaction, including:

      I think one of the most important parts of creating the prompts is the context the AI is using, i.e. the documents/files we attach when writing the prompts. So this could also go in the summary: record the context used to produce the responses, and maybe even which AI answered (GPT vs. Claude). A sketch of what one summary entry could record is below.
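
      As a minimal sketch (the field names and values are hypothetical, not a fixed schema), one entry in the running summary could record:

      ```python
      # Hypothetical record of one prompt/response exchange; all values are illustrative.
      summary_entry = {
          "date": "2025-12-04",
          "model": "GPT-4.1 via Copilot",  # which AI answered (GPT vs. Claude, etc.)
          "context_files": ["src/clean_data.py", "docs/spec.md"],  # files attached to the prompt
          "prompt": "Refactor clean_data() to handle missing site IDs.",
          "response_summary": "Proposed a guard clause plus a unit test; accepted with edits.",
      }
      ```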

    6. Explain this code step-by-step. Describe the purpose of each major block. List all assumptions you’re making. Identify any cases where this code might break.

      Here is where I somewhat disagree with the approach. I find it safer to first ask Copilot how it would solve the problem and to show me the steps and the plan, then modify its plan according to what you think is right, and only then ask the agent to modify the code. I think testing the logic before the modifications makes it easier.
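
      For example (the wording is hypothetical), the first prompt could be "Outline, step by step, how you would implement this change; do not write code yet," and, after editing the returned plan, the follow-up could be "Implement the revised plan above."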

    7. Once the task is done, ask Copilot:

      We might need to be clearer about the level of specification of a task. Would/could we have many task summaries per PR? If so, are we going to keep all summaries for all tasks, or clean them up at some point? A suggestion: maybe one summary of the task summaries per PR, as in the sketch below.
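
      A minimal sketch of the roll-up idea, assuming (hypothetically) that per-task summaries live as task-*.md files in a summaries/ directory:

      ```python
      # Hypothetical roll-up: concatenate per-task summary files into one PR summary.
      # The summaries/ directory and task-*.md naming are assumptions for illustration.
      from pathlib import Path

      def roll_up_summaries(summaries_dir="summaries", out_file="PR_SUMMARY.md"):
          """Combine all per-task summaries into a single summary for the PR."""
          parts = []
          for path in sorted(Path(summaries_dir).glob("task-*.md")):
              parts.append(f"## {path.stem}\n\n{path.read_text()}")
          Path(out_file).write_text("\n\n".join(parts))
      ```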