10 Matching Annotations
  1. Last 7 days
    1. For many of us, the main audience of our research is our colleagues. But colleagues is about as vague and useless a term as “the public” and so when you develop a new project you should pin down (for yourself) the scholarly topics that your work will contribute to, and some examples of individuals who do related work.

      I have never trained an LLM to perform a specific function for myself before. It will be interesting to see whether I will have to fully direct the program to research particular sites (i.e., narrow the hypothesis down for the program).

    1. “Who gets to use what I make? Who am I leaving out? How does what I make facilitate or hinder access?”

      This ethical question worries me less with respect to work created by an LLM I have trained myself - the output is meant for me and not for sharing with many people. Although this shouldn't deter people from using digital means, access should remain an important consideration throughout the creation process.

    1. A better approach, from a pedagogical point of view, is to encourage students to explore and try things out, with the grading being focused on documenting the process rather than on the final outcome.

      This is exactly the purpose of this course - we are introduced to digital archaeology, new note-taking methods, programs, etc., then form a hypothesis and start an experiment. There is no end goal of a proven hypothesis; the focus is instead on the journey toward an answer.

    1. By manipulating the code that produces these images in both random and patterned ways, we manipulate the meaning of the image and the way in which these images communicate information to the viewer.

      This directly connects to my last annotation - someone looking for answers must not rely directly on the information given by a program; programs are trained by humans, and we are easily susceptible to perpetuating our own biases, which then show up in our work.

    1. Digital tools and their use are not theory-free nor without theoretical implications. There is no such thing as neutral, when digital tools are employed.

      This is a common theme among papers on the growing incorporation of digital methods into different fields. Relying more and more on digital means opens the door to theoretical change as much as practical change.

      One of the things I am skeptical about when using AI-trained programs is their reliability regarding sourcing, biases, etc., and this definitely relates to my topic, which touches on religious beliefs, ethics, and so on.

  2. Oct 2025
    1. Trust – we tend to trust the devices we employ (why would we use them otherwise?) but what is this trust based upon?

      I have heard stories of AI programs being confidently wrong about things, and this definitely makes me more cautious when using these applications - I try to treat them as a secondary source, something to consult after information has already been gathered.

      I'm curious to see whether my trust in AI changes once I am on the other side of things (not just a consumer), trying to prove my hypothesis with an LLM trained by me.

    2. Information flow between agent and artefact, including one-way flow (from artefact to agent, where we simply look at the artefact to extract the information we require), two-way flow (typically where we store information on the artefact and subsequently retrieve it); reciprocal flow (a two-way flow that is incremental, additive, and cyclical, so there is a continuous information exchange); and system flow (where there are multiple agents and multiple artefacts cooperating in the exchange) (Heersmink 2012, 49-50; 2015, 583-6).

      This relates directly to the hypothesis I will feed the LLM for my topic - I want it to analyze two historical artefacts and compare their relation to each other in order to figure out whether one was influenced by the other. Information will flow back to me from the LLM, but if I train a program to search for this information, I will also need to feed it training material and context to achieve an outcome (a minimal code sketch of this exchange appears at the end of these annotations).

    1. Cognitive artefacts may be seen in terms of functioning in a similar fashion to the equivalent human cognitive process. This is the basis for seeing computer reasoning as a model of the human mind

      Producers of these programs had to adjust their approach when introducing AI to society - instead of a cold, robotic, impersonal relationship, they created the impression of a real "entity" with reason and logic, when in reality it is a program with instructions and training.

      In the Gemini vs. ChatGPT exercise I asked both programs for their opinions on my topic and whether they thought the hypothesis was correct given the findings, addressing each as if it had thoughts of its own, to see how the two programs would respond.

    1. we have witnessed a move from analogue dumpy levels and tapes to digital total stations and electronic distance meters that employ built-in algorithms to capture, record and process data through a mixture of semi-automated and fully automated methods

      Over the last decade we have increasingly seen changes in the way humans interact with archaeology - we have continuously moved to digital media, and this has affected the way we interact with history and artifacts and our relationship to them.

      We continue to move to digital platforms and forget that our relationship with history can ultimately change when the medium changes.

    1. These cognitive artefacts support us in performing tasks that otherwise at best we would have to conduct using more laborious and time-consuming methods (film photography or measured survey using tapes, for instance) or that we would not be able to undertake (we cannot physically see beneath the ground, or determine the chemical constituents of an object, for example). Furthermore, a characteristic of archaeology is the way that we adopt and apply tools and techniques developed in other domains (Schollar 1999, 8; Lull 1999, 381). Consequently, most if not all of the cognitive artefacts used in archaeology are designed outside their discipline of application, meaning we have little or no control over their development and manufacture, and hence their internal modes of operation have to be taken at face value.

      Humans have become more prone to laziness as technology advances. "Convenience" has social implications as we continue to shift mediums from the physical and public to the digital.

      Using technology that wasn't created for digital archaeology seems to be another kind of convenience that ultimately affects users negatively.

      This relates to my topic because instead of using my own skills to look for the information I need, I am training a program to do it for me. I remain connected to the information, but not quite in the way students once were, before the convenience of a robot or even a computer.
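
As mentioned in the artefact-comparison annotation above, here is a minimal sketch of the kind of exchange I have in mind: I supply the model with descriptions of two artefacts, and it returns a comparison I then have to verify. This assumes the OpenAI Python client and a placeholder model name; the `compare_artefacts` function and the sample descriptions are invented for illustration and are not drawn from the readings or my actual data.

```python
# Sketch only: feed two artefact descriptions to an LLM and ask it to compare them.
# Assumes the OpenAI Python client (pip install openai) and an API key in OPENAI_API_KEY.
# The model name and artefact texts are placeholders, not real project data.
from openai import OpenAI

client = OpenAI()


def compare_artefacts(artefact_a: str, artefact_b: str) -> str:
    """Ask the model whether artefact A appears to have influenced artefact B."""
    prompt = (
        "Compare the following two artefact descriptions and assess whether "
        "the first could have influenced the second. Note any assumptions you make.\n\n"
        f"Artefact A:\n{artefact_a}\n\nArtefact B:\n{artefact_b}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Placeholder descriptions standing in for the notes I would actually supply.
    a = "A carved stone relief depicting a winged figure, dated to the 9th century BCE."
    b = "A later terracotta plaque showing a similar winged figure in a different pose."
    print(compare_artefacts(a, b))
```

The same prompt could just as easily go to Gemini or another model; the point is only the shape of the exchange - I provide the context, the program returns an interpretation, and the responsibility for checking its sourcing and biases stays with me.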