12 Matching Annotations
  1. Last 7 days
    1. All three of the areas I’ve described here - multimodal generation, Deep Research, and handwriting transcription - have changed dramatically in the past twelve months.

      The speed at which it is improving is really scary. I do not think anyone predicted just how much it would evolve in the past year. This just means the advancements will keep coming faster and faster!

    2. This year also saw the rise of reasoning and agentic models. To oversimplify, reasoning models break their responses down into smaller sequential steps (called “chain of thought”) and spend more time and computing power on generating an answer.

      This is really scary! Full-blown Artificial Intelligence is not that far away! I really cannot believe we are in this timeline. Pop culture was warning about this for decades!

    3. Any historian interested in public engagement needs to pay attention to this development.

      Absolutely. The rise of this new technology has brought with it an ever-increasing need to ensure that the history being preserved is the actual one, not a false one.

    4. Whether or not you guessed correctly, this exercise illustrates a larger point: generative AI has gotten really good, really fast, at generating multimodal content, or stuff that isn’t just text (i.e. images, video, and audio).

      Wow, I thought I guessed correctly as to which one was AI. I could have sworn it was the second one! But nope, I was wrong. Very scary!

    1. We hope this overview helps explain why digitization within archival institutions proceeds the way it does – and why we may never, in fact, digitize everything.

      It genuinely did. I wondered the same thing at one point, but seeing the process behind every single digitized item makes me see why.

    2. Because digitization involves an investment of time and resources, we need to make sure we get it right – that the electronic files we produce are adequately representing the archival originals. That means our process will need to incorporate quality control checks.

      With the amount of time that it takes to digitize something, I can see why they would not want to do it with EVERYTHING. However, it does pay off by making it accessible!

    3. At best it results in a digital “surrogate,” an approximation (even if a very good one) of a dimension of the record.

      Reading this, it is clear why someone might be disappointed with a digital record, since it is technically not the original. Still, it carries all of the same information, so I don't see it as that big of a downgrade.

    4. As archivists we like these questions because they tell us that people are eager for access to archival records. They also show that people realize that not everything is digitized. Indeed only a tiny fraction of the world’s primary resources are available digitally.

      Sure, some individuals may prefer physical records, but there should be no question that digital archives are significantly easier to access. So I think that is a big factor to consider.

    1. One development far outstripped the impact of any other during the 1990s. This was the arrival of the Internet, but more especially the World Wide Web. The first graphical browser, Mosaic, appeared on the scene in 1993. Now the use of the Internet is a vital part of any academic activity. A generation of students has grown up with it and naturally looks to it as the first source of any information.

      This is really where a lot of that data entry that was done before really began to pay off. Now this data could easily be accessed by everyone and anyone using the Internet, giving rise to the digital humanities we know today.

    2. If any single-word term can be used to describe this period, it would almost certainly be "consolidation." More people were using methodologies developed during the early period. More electronic texts were being created and more projects using the same applications were started. Knowledge of what is possible had gradually spread through normal scholarly channels of communication, and more and more people had come across computers in their everyday life and had begun to think about what computers might do for their research and teaching.

      This period was on the path toward the GUI, so I think "consolidation" is a good term for it. It also made data entry significantly more accessible to the average person.

    3. The other widely used citation scheme was more dependent on punched card format. In this scheme, often called "fixed format", every line began with a coded sequence of characters giving citation information. Each unit within the citation was positioned in specific columns across the line, for example the title in columns 1–3, verse number in columns 5–6, and line number in columns 7–9. The entry of this information was speeded up by functions on the punched card machine, but the information also occupied more space within the computer file.

      Like I said before, this feels so cryptic and difficult in comparison to what we have today.
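      To see how "fixed format" worked in practice, here is a minimal sketch of splitting one card image into its fields. The column positions (title in columns 1–3, verse number in columns 5–6, line number in columns 7–9) come from the passage; the sample card text and field names are hypothetical, and a real corpus would define its own layout.

      ```python
      # Sketch of parsing a "fixed format" citation line. Column positions
      # (1-indexed, as on a punched card) follow the passage: title in
      # columns 1-3, verse number in columns 5-6, line number in columns 7-9.
      # The sample card image below is hypothetical.

      def parse_fixed_format(card: str) -> dict:
          """Split a card image into citation fields plus the text itself."""
          return {
              "title": card[0:3].strip(),  # columns 1-3
              "verse": card[4:6].strip(),  # columns 5-6
              "line": card[6:9].strip(),   # columns 7-9
              "text": card[9:].strip(),    # remainder of the card
          }

      card = "ILI 01001 MENIN AEIDE THEA"  # hypothetical 80-column card image
      print(parse_fixed_format(card))
      # -> {'title': 'ILI', 'verse': '01', 'line': '001', 'text': 'MENIN AEIDE THEA'}
      ```

      The slicing makes the trade-off the passage mentions concrete: every card repeats the full citation in fixed positions, which is easy for a machine to cut apart but wastes space in the file.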

    4. At this time much attention was paid to the limitations of the technology. Data to be analyzed were either texts or numbers. They were input laboriously by hand either on punched cards, with each card holding up to eighty characters or one line of text (uppercase letters only), or on paper tape, where lower-case letters were perhaps possible but which could not be read in any way at all by a human being.

      As someone born in an age where the graphical user interface (GUI) was the standard for computers, it is always really interesting to hear about this time in history. Inputting all of that data sounds like such a painstaking process for the time.