- Mar 2024
-
ecampusontario.pressbooks.pub
-
Using spatial proxy data to develop information on past climates and environments is one thing; communicating this information to a broader public is another. In the following example, scholars at the University of Minnesota Duluth show how geospatial data (e.g., global annual surface air temperatures
This raises questions about the potential for other interdisciplinary collaborations that could transform data into more accessible and engaging formats.
-
The Polar Data Catalogue (PDC) also features a PDC Geospatial Search tool, which allows users to search or input Arctic and Antarctic data through a mapping interface. At the time of writing this module, the Geospatial collections included Polar Data and Metadata, RADARSAT Arctic SAR Imagery, RADARSAT Mosaics of the Antarctic, and Canadian Ice Service Sea Ice Charts (Polar Data Catalogue, 2022).
The integration of diverse datasets in the Polar Data Catalogue is a good example of digital humanities' interdisciplinary potential. It's cool that such platforms can bridge the gap between scientific research and the humanities, fostering a more holistic understanding of issues like climate change and enabling a wider audience to engage with complex data.
-
In Canada, geospatial metadata policies and standards related to “the use, integration, and preservation of spatial data” are primarily set through the Canadian Geospatial Data Infrastructure (CGDI). For more information on definitions and (inter)national standards for metadata documentation and terminology, see also Natural Resources Canada and the Federal Geographic Data Committee.
I agree with the significance attributed to metadata in enhancing the usability and understanding of digital spatial data. It's cool that this attention to detail supports greater transparency and interoperability in digital humanities projects, which is essential for collaborative research and for the democratization of information.
-
These challenges are addressed at least in part through the use of metadata schema which document the characteristics of data (including who collected the data, where, when, why, and how, to the extent that such information is available).
This part emphasizes metadata schemas for documenting geographic data, which touches on the importance of transparency and standardization in digital humanities research. We worked with metadata quite a bit earlier in the semester and saw its implications, so it is worthwhile to bring up. However, this raises questions about the balance between standardization and the unique, contextual needs of different datasets and research objectives. How do we reconcile the need for universal standards with the diverse, often localized nature of geographical information and historical contexts?
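As a rough illustration of what such a schema can capture, here is a minimal sketch of a metadata record in Python. The field names loosely follow the who/where/when/why/how idea from the excerpt; they are my own invention, not taken from any specific standard such as ISO 19115.

```python
import json

# Hypothetical record: field names echo the who/where/when/why/how questions
# from the excerpt, not an official metadata standard.
record = {
    "title": "Sea ice extent observations",
    "creator": "Example Research Group",           # who collected the data
    "coverage": {"lat": 74.7, "lon": -94.8},       # where
    "date_collected": "1998-07-15",                # when
    "purpose": "Baseline for climate comparison",  # why
    "method": "Ship-based visual survey",          # how
}

print(json.dumps(record, indent=2))
```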
-
-
www.esri.com
-
The Thurgood Marshall Institute observed the 50th anniversary of the Fair Housing Act of 1968 with “The Fight for Fair Housing in the United States”, observing that “housing discrimination and segregation are deeply rooted in the fabric of this nation”, and locating dozens of fair housing legal actions throughout the latter half of the 20th
This is indeed a critical intersection between historical legislation and ongoing social injustices, showing how digital humanities can surface and present systemic patterns of discrimination. It also challenges us to consider the role of historical context in contemporary social justice efforts and the potential of digital platforms to bridge that gap.
-
A less scholarly but perhaps more addictive Story Map was created by novelist Susan Straight, who compiled a list of 737 Novels about the American Experience. Obviously an avid reader, Susan locates each novel’s setting, shows us each book’s dust jacket, and provides eloquent, single-sentence tributes for each volume.
This project seems to show the potential of digital humanities to engage with literature in new, interactive ways. But does the geographical mapping of novels contribute to our understanding of the 'American Experience'? And could this approach be applied effectively to other literary traditions and cultures beyond the one presented here?
-
-
www.propublica.org
-
Another aspect of how these timelines work involves the "near" and the "far." We think of our visualizations and news apps as having a near view and having a far view. In my head I think of these as the "at-a-glance" and the "up-close-and-personal" view, but in any case, the former gives you the overall picture, and the latter gives you the details.
The concept of designing timelines with both an "at-a-glance" view and an "up-close-and-personal" view seems important for user engagement. How can digital humanities developers balance these two aspects to cater to both casual browsers and serious researchers?
-
I learned how to set up the CSV file and make the timeline from these instructions, which were surprisingly simple. I also learned about DocumentCloud's "Embed a Note" feature, which let us stick an image of a portion of the document right into its TimelineSetter entry.
The integration of DocumentCloud with TimelineSetter is reminiscent of digital humanities tools like Omeka, which we used extensively at the start of the semester. It seems to be an effective way of handling content management specifically designed for digital collections and exhibitions.
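For context, here is a minimal sketch of what the CSV behind such a timeline might look like, written out with Python's csv module. The column names (date, description, link) are assumptions for illustration rather than a copy of ProPublica's documented TimelineSetter schema, so check the tool's docs before using this for real.

```python
import csv

# Hypothetical rows and column names for a simple events timeline.
rows = [
    {"date": "2010-01-15", "description": "First document filed.",
     "link": "https://example.org/doc1"},
    {"date": "2010-03-02", "description": "Follow-up memo released.",
     "link": "https://example.org/doc2"},
]

with open("timeline.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "description", "link"])
    writer.writeheader()
    writer.writerows(rows)
```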
-
-
engl201.opened.ca
-
This type of spiral visualization is widely used in schoolbooks to represent how long and complex the history of life on earth has been before humankind. Here, the spiral is used to convey the sense that the more we go back in time, the more our knowledge becomes sparse
Here is a cool video that addresses how timeline shapes impact task completion time, correctness, and user preference: https://youtu.be/MeIM6gKS6fY?si=kSkyYFN6Hwe96mQH
-
Within this study, we did not find that dataset choice affects the readability of the timeline. Therefore, we recommend designers to be flexible with their choice of timeline shape to maximize readability or improve engagement. However, if the dataset is complex, even for mixed data, we recommend using a linear timeline.
How does the complexity of a dataset influence the decision to use linear timelines over other shapes, and what specific characteristics of a dataset classify it as 'complex'? Furthermore, could there be alternative strategies or visual aids that could make non-linear timelines more accessible for complex datasets? Food for thought.
-
This gave us a notion of how the participants felt about certain shapes to an extent that they cared enough to express it in an optional feedback section. In this case too the majority of the comments were in favor of the horizontal and vertical lines.
It seems that some users found spiral timelines both functional and visually appealing, while others found them confusing and hard to use. This perhaps raises some questions about the user interface and design elements that can make non-linear timelines more user-friendly. With that being said, what specific features of the spiral design contribute to its perceived difficulty, and how could these be mitigated?
-
Here is a summary of our findings about timing. A "<" sign means that the timing on the first shape is, overall, less than the second one, while a "=" sign means that there is no timing difference between the two shapes. Here is a chart of all of the p-values for the timing resulting from the experiment.
It seems that the paper suggests non-recurrent, recurrent, and mixed datasets might be best represented by line, circle, and spiral shapes, respectively. However, we should consider cultural and cognitive differences in perceiving time and space; these shapes may not resonate with all groups and cultures. With that being said, are there alternative shapes that are equally or more effective in conveying temporal data? How might those shapes affect the user's ability to interpret and interact with the data?
-
- Feb 2024
-
engl201.opened.ca
-
For example, Google accumulates and blogs positive stories about people using its search service.
It is also worthwhile to ask whether Google can be a medium for other storytellers from a business/marketing point of view. For example, advertisers rely on Google to showcase their ads based on what a given user tends to search for online. This can perhaps drive the storytelling narrative further or even make matters more complex. It's also interesting to note that I can't really remember the last time I saw a "Google" advertisement versus other advertisements that "use" Google as their medium. Food for thought.
-
-
engl201.opened.ca
-
The latter do not require an especially responsive interface, so the relatively slow keyboard and mouse serve well enough. As with console games, PC users can play by themselves (against a game's artificial intelligence or AI) or with several networked fellows. In Rome: Total War (see chapter 7), for example, one player can take on the role of the Julii family, trying to defeat the Claudii on a Gallic battlefield; the other family is played by a user located elsewhere in the real world or by the game's AI
People often group PC gamers with console gamers in a very generalized way. However, I think there are major differences in the entire gaming experience when juxtaposing consoles and PC. PC gaming definitely relies on more inputs and navigation, with less reliance on a responsive interface, whereas the console controller allows more ease of use.
-
Many of these platforms are deeply siloed or closed, not allowing interoperability. An Xbox 360 game, for example, cannot be played on a PC, phone, or Nintendo DS. When one game is re-created for another platform (a process known as "porting"), the game interface, the way it uses the new platform's hardware, and all o
I feel like this landscape for gamers has changed drastically over the past couple of years. For example, the big attraction nowadays is the multiplayer aspect, and I think most big games now allow cross-platform play in addition to being ported across systems. But I remember the days when this was very limited and you could only play with friends and connect with other gamers on the same platform.
-
-
-
“The amount of people who have access to the engineering education required to be in programming is very, very small,” says Anna Anthropy, a game developer whose book “Rise of the Videogame Zinesters” helped put Twine on the map in 2012. “And even within that, there are a lot of ways that people are filtered out by the culture.”
Even though the games designed in Twine are very limited in terms of mechanics compared to larger-scale projects (which rely on engineering and CS skills), is there a middle ground here? For instance, perhaps the stories designed in Twine can inspire larger gaming companies to make spin-offs based on the content created in Twine itself. I feel like it's hard to follow through with a solid story-based game that sells well, and Twine could surface some useful ideas here.
-
At the same time, like most women with public personas on the Internet, Porpentine has also received her share of hostile feedback: emails and tweets wishing her dead, and at least one detractor who called the existence of Howling Dogs “a crime.”
People can get really invested in the gaming scene, and this is a common thing to see. I think when The Last of Us Part II came out, a lot of the voice cast received death threats from fans because of how the plot of the sequel was handled.
-
It took her only seven days to make it, but soon even mainstream gaming critics were praising it, and The Boston Phoenix named it one of the five most important independent games of the year.
As a gamer myself, it is pretty astonishing how such a simple text-based game can become so popular and influential. Sometimes it's more about the message/story than how the actual game "functions" from a mechanics point of view. Most of the time, I guess, people crave a good story they can relate to.
-
-
-
One of those times, it is followed by “page”, and the other time it is followed by “article”—so the probability that it is followed by “page” is 50 percent. This is not a very robust language model for English—the vocabulary is incredibly small, and there is no variety of syntactic structures. A more representative sample of English, then, would require a much larger collection of sentences. We’ll return to this in a moment.
This seems like the same style of model Voyant uses to detect word frequency in a corpus. It's cool how it uses word choice frequency to predict sentence structure based on basic statistics. It is now starting to make sense why Voyant cares so much about measuring word choice frequency!
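To make the excerpt's probability idea concrete, here is a minimal sketch of a bigram model in Python. The toy corpus and variable names are my own, not from the reading: it counts which word follows which and turns those counts into probabilities, reproducing the 50 percent "page" vs. "article" situation described above.

```python
from collections import Counter, defaultdict

# Toy corpus: any small collection of sentences works here.
corpus = "the web page loads . the web article loads ."
tokens = corpus.split()

# Count how often each word follows each preceding word (bigrams).
follower_counts = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follower_counts[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """P(nxt | prev) estimated from raw bigram counts."""
    counts = follower_counts[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

print(next_word_probability("web", "page"))     # 0.5
print(next_word_probability("web", "article"))  # 0.5
```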
-
-
every.to
-
To me, this is why the AI phenomenon we’re living through is so fascinating. Considering how transformative this technology is, it’s not actually that complicated. A few simple mathematical concepts, a whole lot of training data, a sprinkle of salt and pepper, and you’ve essentially built yourself a thinking machine.
Sure, the math behind AI is, I suppose, basic in its intrinsic form, but wouldn't the actual development of AI tools and applications come with complexities of its own? I think the author wants us to see the basic ideas underlying AI, but its integration with software, and the various hurdles that brings, can be complex in its own right. For example, the development of AI tools like ChatGPT obviously requires a talented team of engineering and software professionals and probably relies on methods that are extremely complicated to manage.
-
Train a model to understand the relationships between words based on how often they appear in similar contexts. “A word is categorized by the company it keeps.” Feed it a ton of human-written data (and when I say a ton, I essentially mean the entire internet), and let it nudge word coordinates around appropriately.
When training a model, how do we tell the AI what we are looking for? Sure, we are giving it vast amounts of data to find specific patterns in the vector space, but how do we train it to stick to a particular path? There's a concept called the objective function, which basically refers to what the model is optimizing for; why was this left out? https://www.larksuite.com/en_us/topics/ai-glossary/objective-function
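A minimal sketch of the co-occurrence idea in the excerpt, assuming a toy corpus and made-up variable names: count which words appear near each other within a small window. Real embedding methods such as word2vec then use an objective function to nudge word vectors so that words that keep similar company end up close together; this sketch only computes the raw co-occurrence signal that such an objective would work from.

```python
from collections import Counter, defaultdict

# Toy corpus; real systems train on vastly more text.
sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

window = 2  # how many neighbors on each side count as "company"
cooccurrence = defaultdict(Counter)

for sentence in sentences:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooccurrence[word][words[j]] += 1

# "cat" and "dog" keep similar company ("the", "sat", "on"), which is the
# signal an embedding model's objective function would use to pull their
# vectors close together.
print(cooccurrence["cat"])
print(cooccurrence["dog"])
```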
-
-
engl201.opened.ca
-
Because changes of scale are easy to describe, journalists often stop here—reducing recent intellectual history to the buzzword “big data.” The more interesting part of the story is philosophical rather than technical, and involves what Leo Breiman, fifteen years ago, called a new “culture” of statistical modeling (Breiman)
I totally agree that a lot of people who use language/text models or AI in their professional lives throw the term "big data" around quite frequently. Does it just refer to the general computational handling of large datasets, analogous to distant reading? Or could it imply something deeper: not only handling large amounts of material but also finding specific patterns or objectives within those vast datasets?
-
-
uta.pressbooks.pub
-
For my purposes, I primarily rely on two main software: Voyant Tools and Topic Modeling Tool. Both tools are good at analyzing large bodies of text in different ways.
Analyzing large bodies of text can indeed be useful in the digital humanities space. However, I think it limits us to the variety of works out there that are not text but audio. This video discusses how models in the music realm can predict recommendations based not only on lyrical content: they certainly use text analysis and mining tools, but they can also rely on factors like spectrum analysis of audio samples to analyze large bodies of music and recommend tracks with similar frequencies and amplitudes in a given beat. Pretty cool stuff!
https://www.youtube.com/watch?v=PFAu93xxoGA&ab_channel=EuroPythonConference
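As a rough sketch of the audio side of this, here is what comparing two tracks by a crude spectral "fingerprint" might look like in Python with the librosa library. The file names are placeholders, and a real recommendation system would use far richer features and a trained model rather than a single cosine similarity.

```python
import numpy as np
import librosa

def spectral_fingerprint(path):
    """Summarize a track as the mean of its MFCC frames (a crude timbre profile)."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder file names: swap in real audio files to try this.
song_a = spectral_fingerprint("track_a.mp3")
song_b = spectral_fingerprint("track_b.mp3")
print(f"similarity: {cosine_similarity(song_a, song_b):.3f}")
```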
-
mission to “explain ideas debated in culture with visual essays.” I have grown fond of the work of Matt Daniels, founder of the Pudding, over the last few years. His continued work producing hip hop visualizations shows me how I might incorporate DH concepts such as “distant reading” into my own projects.
This definitely has potential to become an influential tool in the music space. If large language/text models can analyze musical works and understand the lyrical patterns a given genre uses, this can definitely be incorporated into our daily lives or the apps we currently use. For example, Apple Music has a feature that recommends new music based on styles listened to in the past. I wonder if "distant reading"-style algorithms are used in this context? Perhaps it's a combination of many different models in one tool. I'm sure Apple has the resources for this regardless!
-
-
engl201.opened.ca
-
While Twitter may not persist long into the future there will surely be more like it around the corner. In a world turning toward predominant born digital knowledge production a naturalized disposition toward data is required. It is required in order to establish arguments that have purchase on the complexity and scale of the data traces we leave behind. What might we gain by thinking of a source as data? It is a seemingly simple distinction that a growing community of scholars and cultural heritage professionals are finding value in
I honestly find it hard to believe that Twitter can be considered digital knowledge. Twitter is known for its notorious spread of false information and toxic behavior. Nevertheless, thinking of a source as data, to me, means using accurate knowledge to better one's understanding of or arguments on a specific topic. Any form of discovered knowledge can be considered data, whether digitized or not; it's just a term used to aid one's argument or understanding.
-
One of the biggest affordances of the World Wide Web is the ability for users to respond; to comment, to upload and “share”. This has not been lost on historians and archivists. Projects like the September 11 Digital Archive illustrate the possibility to “crowdsource” an archive and create a collection of born digital materials around a particular issue or topic
This could potentially be a great addition to the historical and archival professions. However, it does bring a social media dynamic into historical work, which historians and archivists may or may not be fans of. It's honestly hard to tell how they would interpret this feature. While it could be beneficial for others to share or upload relevant historical context for various works, it could also bring in some of the bad; for example, just look at the toxic behavior seen on Twitter when someone comments on a historical event. While Twitter is admittedly somewhat off topic here, the idea of people publicly posting comments and opinions on historical work can't come without controversial backlash.
-
Generally, libraries, archives and museums have only digitized a sliver of their entire holdings. One must be able to contextualize a source and understand why they have it at hand and as such it is important to think through the kinds of limitations on claims relative to what you know about the policies of a library, archive, or museum.
This makes sense from a logistics standpoint. Large libraries host millions of works, and the vast majority of people probably only read a small fraction of those. I would assume digitizing the entire holdings of a library would take considerable effort, even with access to OCR software. Nevertheless, this ties in nicely with what we are learning this week, since it raises the question of how effective OCR technology really is in the context of historical digitization.
-
Increasingly these collections are digital. This change of state is the product of decades of digitization effort commingling with the collection of contemporary culture that begins its life in digital form – think email, word documents, photos from mobile phones, websites, software, code, and social media data. How does Historical scholarship change when the evidentiary basis shifts toward the digital? How is interacting with digital archives different or similar? What does it even mean to have a “digital archive”?
I think we are at a point where the outcomes of digitization in the realm of historical documentation and archives are quite apparent. Having a "digital archive" rather than a "physical archive," in my opinion, offers more pros than cons, considering the increased accessibility and efficiency of reaching historical knowledge. However, there is definitely less of an emotional connection to the work when millions of files are shown on your laptop screen versus physically examining them in person at a historical library.
-
- Jan 2024
-
journalofdigitalhumanities.org
-
For historians, historiography signals a shift from “primary” sources—often archival ones—to “secondary” sources—or the historical arguments, interpretations, and interventions that use the archives to mount claims about the past. Of course, this distinctio
Here is a video that delves into metadata in the context of data preservation and reuse: https://www.youtube.com/watch?v=QXMQR-YwYRI&ab_channel=RecordingArchaeology It ties in nicely with the concepts in the readings on how metadata can affect so many different domains of data storage and historical knowledge.
-
How we develop digital archives so that they are not a one-size-fits-all platform, how we fight the urge for standardization while still harnessing the power of interconnectivity in the digital arena—this becomes one of the great challenges for archivists and historians alike.
This video I found highlights the long-term preservation of knowledge in a digital format and outlines the shift in how archivists organized information as they moved from paper to digital. While the video is somewhat boring, it paints a realistic picture of how information is sorted and stored digitally using different file systems, illustrating the kind of "one-size-fits-all" platform the excerpt warns against. https://www.youtube.com/watch?v=kLd6WFBrwbg&ab_channel=CouncilofStateArchivists
-
In the new online spaces where the digital archive meets digital history, the relationship between these two professions takes on new and unexpected possibilities—and tensions. We will need to think carefully about how the digital returns us to buried institutional wounds that date back in the United States to the 1930s, when archivists and historians parted ways in their professional affiliations.
I find it somewhat strange that tensions were high between archivists and historians. Both parties use metadata to their advantage in the online space, so why would tensions rise if they benefit from the same technological advantages?
-
-
-
Relational databases (most common type of database) store and provide access not only data but also metadata in a structure
I wasn't aware of how strongly relational databases tie into metadata. I guess this makes sense, since there needs to be some source of data that provides a clear structure for the database by defining tables, columns, data types, and relationships between tables. As a potential use case, metadata (in the context of relational databases, for instance) defines rules and constraints such as primary keys, foreign keys, unique constraints, and check constraints. These rules ensure the accuracy and integrity of the data by preventing incorrect, duplicate, or inconsistent data from being entered into the database.
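As a small sketch of this idea, here is how a relational schema encodes those rules using Python's built-in sqlite3 module; the table and column names are made up for illustration. The PRIMARY KEY, UNIQUE, and FOREIGN KEY declarations are stored as schema metadata, and the UNIQUE constraint is enforced on every insert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The schema itself is metadata: it names columns, types, and constraints.
cur.executescript("""
CREATE TABLE collections (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE items (
    id            INTEGER PRIMARY KEY,
    title         TEXT NOT NULL,
    collection_id INTEGER NOT NULL,
    FOREIGN KEY (collection_id) REFERENCES collections(id)
);
""")

cur.execute("INSERT INTO collections (name) VALUES ('Diaries')")
# A duplicate name violates the UNIQUE constraint recorded in the metadata.
try:
    cur.execute("INSERT INTO collections (name) VALUES ('Diaries')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```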
-
-
engl201.opened.ca
-
These tools allow users to make transcriptions of the digital images of documents in the same interface, presenting the image alongside a text-editing window. For instance, a user can upload an image of a handwritten letter in one window and transcribe the letter into text format in a window alongside.
I find this kind of tool super underrated in today's technological landscape. It can be very useful in historical contexts, such as transcribing old biblical texts and artistic representations dating back centuries. AI tools like ChatGPT, for instance, are indeed improving in this department, but if the source documents are dated and of poor quality, I can't see how any current AI tool can transcribe the text with 100 percent precision. Maybe it will take a few more years to catch up?
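For comparison with the side-by-side transcription interface described above, here is a minimal sketch of machine transcription in Python using the pytesseract wrapper around the Tesseract OCR engine. The file name is a placeholder, and handwritten or degraded documents are exactly where output like this tends to need the human correction those tools are built for.

```python
from PIL import Image
import pytesseract

# Placeholder path: any scanned page image would do.
page = Image.open("letter_scan.png")

# Tesseract returns its best-guess transcription of the printed text.
raw_text = pytesseract.image_to_string(page)
print(raw_text)
```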
-
-
openbooks.lib.msu.edu
-
The stories of Canada’s founding and future have often drowned out those of Indigenous nations; through our presentation of a missionary’s diary, we hope to make visible and audible the stories of people that he met on Ojibwe land in 1898, with the help of people we met when visiting there in the twenty-first century.
Here is a video I found dealing with digital storytelling collaboration. It basically outlines how First Nations and Parks Canada began to rebuild their relationship and find a way to work together. https://www.youtube.com/watch?v=_FzUJU_945E&ab_channel=GCIndigenous
-
-
engl201.opened.ca
-
An additional dimension was added to humanities electronic resources in the early 1990s, when it became possible to provide multimedia information in the form of images, audio, and video. In the early days of digital imaging there was much discussion about file formats, pixel depth, and other technical aspects of the imaging process and much less about what people can actually do with these images other than view them. There are of course many advantages in having access to images of source material over the Web, but humanities computing practitioners, having grown used to the flexibility offered by searchable text, again tended to regard imaging projects as not really their thing, unless, like the Beowulf Project (Kiernan 1991), the images could be manipulated and enhanced in some way. Interesting research has been carried out on linking images to text, down to the level of the word (Zweig 1998)
The integration of images, audio, and video into digital humanities projects introduced a new layer of complexity and potential in scholarly research. Initially, the focus in this area was largely technical, revolving around file formats and pixel depth, indicating a period of adjustment as practitioners adapted to these new tools. However, the real transformative potential lay in the integration of these multimedia elements with traditional text-based research. Projects like the Beowulf Project, which involved the manipulation and enhancement of images, began to show how these new forms of media could be more than just visual supplements; they could become integral, interactive components of humanities research. This shift challenged the previous text-centric perspective of many in the field and paved the way for more dynamic, multi-dimensional approaches to humanities studies. The annotation highlights the gradual but significant shift from a focus on technical specifications to exploring the broader implications and applications of multimedia in humanities computing.
-
This presented an opportunity for those institutions and organizations that were contemplating getting into humanities computing for the first time. They saw that the Web was a superb means of publication, not only for the results of their scholarly work, but also for promoting their activities among a much larger community of users. A new group of users had emerged.
The internet, along with its multiple digital derivatives completely changed the computing industry with the evolution of information retrieval and web interfaces. This segment underscores the transformative impact of the Internet, particularly the World Wide Web, on humanities computing and academic research from the 1990s onwards. The advent of the graphical browser Mosaic was a milestone, significantly enhancing the accessibility and usability of the Internet for academic purposes. The initial skepticism among some established practitioners in humanities computing towards the Web's potential highlights a common resistance to technological change, similar to the initial reservations held by major companies like Microsoft. However, the recognition of the Web's capabilities by new entrants in the field marked a pivotal shift. They saw the Web not just as an information retrieval tool, but as a dynamic platform for publishing and disseminating scholarly work. This shift expanded the scope and reach of humanities research, breaking the constraints of traditional print formats. The use of hypertext links for annotations and the ability to update and amend publications revolutionized the accessibility and interactivity of scholarly material. This era marks a significant evolution in the digital humanities, where the focus expanded from data analysis and text digitization to the broader dissemination and interactive engagement with scholarly content. This shift also reflects a broader trend in academia, where digital technologies began to redefine the norms of research, publication, and scholarly communication.
-
At last it was possible to see Old English characters, Greek, Cyrillic, and almost any other alphabet, on the screen and to manipulate text containing these characters easily. Secondly, the Macintosh also came with a program that made it possible to build some primitive hypertexts easily. HyperCard provided a model of file cards with ways of linking between them. It also incorporated a simple programming tool making it possible for the first time for humanities scholars to write computer programs easily. The benefits of hypertext for teaching were soon recognized and various examples soon appeared. A good example of these was the Beowulf Workstation created by Patrick Conner (Conner 1991).
I am sort of familiar with Macintosh history and the evolution of computing interfaces, but never in the context of digital humanities. It seems that the Macintosh's ability to display a wide range of non-standard characters, including Old English, Greek, Cyrillic, and others, represented a major advancement for humanities computing. This capability enabled scholars to work with a diverse array of texts in their original scripts, which was crucial for accurate and authentic research in languages and literature. Furthermore, the introduction of HyperCard was a game changer. It allowed for the creation of primitive hypertexts, facilitating the development of interconnected, multimedia content. This technology was particularly useful for humanities scholars, as it enabled them to develop computer programs easily for the first time, significantly enhancing their ability to present and analyze textual data. The Beowulf Workstation is a prime example of how these advancements were used to create innovative educational tools. These developments underscored the increasing importance of personal computers in academic research and marked a shift toward more accessible and versatile computing tools in the humanities. Nowadays most people depend on iPhones and Macs, brought about in part by the evolution of the Macintosh!
-
For example, Augustus de Morgan in a letter written in 1851 proposed a quantitative study of vocabulary as a means of investigating the authorship of the Pauline Epistles (Lord 1958) and T. C. Mendenhall, writing at the end of the nineteenth century, described his counting machine, whereby two ladies computed the number of words of two letters, three, and so on in Shakespeare, Marlowe, Bacon, and many other authors in an attempt to determine who wrote Shakespeare (Mendenhall 1901). But the advent of computers made it possible to record word frequencies in much greater numbers and much more accurately than any human being can. In 1963, a Scottish clergyman, Andrew Morton, published an article in a British newspaper claiming that, according to the computer, St Paul only wrote four of his epistles. Morton based his claim on word counts of common words in the Greek text, plus some elementary statistics. He continued to examine a variety of Greek texts producing more papers and books concentrating on an examination of the frequencies of common words (usually particles) and also on sentence lengths, although it can be argued that the punctuation which identifies sentences was added to the Greek texts by modern editors (Morton 1965; Morton and Winspear 1971)
Here we see the significance of historical context of quantitative methods in literature and language studies, predating the advent of computers. The examples of Augustus de Morgan’s proposal for a quantitative study of the Pauline Epistles and T. C. Mendenhall’s word count experiments reflect the longstanding interest in applying statistical methods to literary analysis. The introduction of computers significantly enhanced these approaches, allowing for more accurate and extensive analysis, as illustrated by Andrew Morton’s work on the authorship of St. Paul's epistles. This section connects the advancements in digital humanities back to its roots in traditional literary analysis, showing the evolution of methodologies over time.
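To make the Mendenhall/Morton idea concrete, here is a minimal sketch in Python of the kind of common-word frequency profile they computed, first by hand and later by machine. The two sample snippets and the particular word list are placeholders; real authorship studies compare much longer texts with proper statistical tests rather than a single side-by-side profile.

```python
from collections import Counter

# Function words ("particles") carry the stylistic signal Morton relied on.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "is", "it"]

def frequency_profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

# Placeholder snippets: swap in the full texts under comparison.
sample_a = "the quality of mercy is not strained it droppeth as the gentle rain"
sample_b = "to be or not to be that is the question whether tis nobler in the mind"

print(frequency_profile(sample_a))
print(frequency_profile(sample_b))
```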
-