10,000 Matching Annotations
  1. Sep 2025
    1. There are two common models of interprocess communication: the message-passing model and the shared-memory model. In the message-passing model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly through a common mailbox. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same system or a process on another computer connected by a communications network. Each computer in a network has a host name by which it is commonly known. A host also has a network identifier, such as an IP address. Similarly, each process has a process name, and this name is translated into an identifier by which the operating system can refer to the process. The get_hostid() and get_processid() system calls do this translation. The identifiers are then passed to the general-purpose open() and close() calls provided by the file system or to specific open_connection() and close_connection() system calls, depending on the system's model of communication. The recipient process usually must give its permission for communication to take place with an accept_connection() call. Most processes that will be receiving connections are special-purpose daemons, which are system programs provided for that purpose. They execute a wait_for_connection() call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as a server, then exchange messages by using read_message() and write_message() system calls. The close_connection() call terminates the communication.

      Explain the steps involved in interprocess communication using the message-passing model. Include the roles of the client, server (daemon), and system calls such as open_connection(), accept_connection(), read_message(), and close_connection().
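
      On a real UNIX-like system the generic calls above correspond roughly to the POSIX socket API. Below is a minimal client-side sketch, offered as an illustration rather than the textbook's own code: the host name "example.com" and port "7" are placeholders, and connect()/write()/read()/close() stand in for open_connection(), write_message(), read_message(), and close_connection().

      ```c
      /* Minimal client-side sketch using POSIX sockets (illustrative only).
         connect()/write()/read()/close() stand in for the generic
         open_connection()/write_message()/read_message()/close_connection().
         "example.com" and port "7" are placeholders. */
      #include <stdio.h>
      #include <unistd.h>
      #include <netdb.h>
      #include <sys/socket.h>

      int main(void) {
          struct addrinfo hints = {0}, *srv;
          hints.ai_socktype = SOCK_STREAM;

          /* resolve the host name to a network identifier (cf. get_hostid()) */
          if (getaddrinfo("example.com", "7", &hints, &srv) != 0) return 1;

          int fd = socket(srv->ai_family, srv->ai_socktype, srv->ai_protocol);
          if (fd < 0) return 1;

          /* open the connection; the server's daemon must already be waiting,
             having called listen()/accept() (cf. wait_for_connection() and
             accept_connection()) */
          if (connect(fd, srv->ai_addr, srv->ai_addrlen) < 0) return 1;

          write(fd, "hello", 5);                     /* write_message() */
          char buf[64];
          ssize_t n = read(fd, buf, sizeof buf);     /* read_message() */
          if (n > 0) printf("got %zd bytes\n", n);

          close(fd);                                 /* close_connection() */
          freeaddrinfo(srv);
          return 0;
      }
      ```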

    2. Many operating systems provide a time profile of a program to indicate the amount of time that the program executes at a particular location or set of locations. A time profile requires either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded. With sufficiently frequent timer interrupts, a statistical picture of the time spent on various parts of the program can be obtained.

      Many operating systems can track how much time a program spends running at different points in its code. This is called a time profile. To create one, the system either traces the program or uses regular timer interrupts. Every time the timer interrupts, the system records the program’s current position. By doing this frequently enough, it can build a statistical picture of which parts of the program take the most time to execute.
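
      As a rough illustration of the timer-interrupt approach, here is a small sketch assuming a POSIX system: a SIGPROF timer fires periodically and the handler tallies a sample for whichever phase of the program is running. This is a stand-in for recording the actual program counter, which a real profiler would pull from the interrupt context.

      ```c
      /* Sketch of statistical profiling with timer interrupts (POSIX).
         A real profiler records the interrupted program counter; here we
         approximate that by counting samples per program "phase". */
      #include <signal.h>
      #include <stdio.h>
      #include <sys/time.h>

      static volatile sig_atomic_t current_phase = 0;  /* which part of the program runs now */
      static volatile long samples[2] = {0, 0};        /* sample counts per phase */

      static void on_profile_tick(int sig) {
          (void)sig;
          samples[current_phase]++;                    /* "record the program counter" (approximated) */
      }

      int main(void) {
          struct sigaction sa = {0};
          sa.sa_handler = on_profile_tick;
          sigaction(SIGPROF, &sa, NULL);

          /* fire SIGPROF every 10 ms of CPU time */
          struct itimerval it = {{0, 10000}, {0, 10000}};
          setitimer(ITIMER_PROF, &it, NULL);

          volatile double x = 0;
          current_phase = 0;
          for (long i = 0; i < 50000000; i++) x += i;  /* phase 0: heavy loop */
          current_phase = 1;
          for (long i = 0; i < 10000000; i++) x += i;  /* phase 1: lighter loop */

          printf("phase 0 samples: %ld, phase 1 samples: %ld\n",
                 (long)samples[0], (long)samples[1]);
          return 0;
      }
      ```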

    3. Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time() and date(). Other system calls may return information about the system, such as the version number of the operating system, the amount of free memory or disk space, and so on.

      Many system calls exist simply to pass information back and forth between a program and the operating system. For example, many systems provide calls that return the current time and date. Other calls can supply information about the system itself, such as the operating-system version, the amount of free memory or disk space, and other related details.
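
      A minimal sketch of such information-maintenance calls on a POSIX system, assuming time(), ctime(), and sysconf() are available (_SC_PHYS_PAGES is a common extension rather than a guaranteed constant):

      ```c
      /* Sketch of "information maintenance" calls on a POSIX system:
         time()/ctime() for the current time and date, plus system facts
         exposed through sysconf(). */
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      int main(void) {
          time_t now = time(NULL);                 /* current time (seconds since epoch) */
          printf("now: %s", ctime(&now));          /* human-readable date and time */

          long pages = sysconf(_SC_PHYS_PAGES);    /* total physical memory pages (extension) */
          long psize = sysconf(_SC_PAGESIZE);      /* page size in bytes */
          if (pages > 0 && psize > 0)
              printf("approx. physical memory: %ld MB\n", (pages / 1024) * psize / 1024);
          return 0;
      }
      ```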

    4. Once the device has been requested (and allocated to us), we can read(), write(), and (possibly) reposition() the device, just as we can with files. In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX, merge the two into a combined file–device structure. In this case, a set of system calls is used on both files and devices. Sometimes, I/O devices are identified by special file names, directory placement, or file attributes.

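      A small sketch of the combined file-device structure, assuming a UNIX-like system where the device /dev/urandom exists: the same open(), read(), and close() calls used for ordinary files work on the device's special file name.

      ```c
      /* Sketch of the unified file/device interface on UNIX-like systems:
         a device is opened by a special file name and then read like any file. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("/dev/urandom", O_RDONLY);     /* device identified by a special file name */
          if (fd < 0) { perror("open"); return 1; }

          unsigned char buf[8];
          if (read(fd, buf, sizeof buf) == (ssize_t)sizeof buf)  /* same read() used for ordinary files */
              printf("first random byte: %u\n", buf[0]);

          close(fd);
          return 0;
      }
      ```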

    5. The various resources controlled by the operating system can be thought of as devices. Some of these devices are physical devices (for example, disk drives), while others can be thought of as abstract or virtual devices (for example, files). A system with multiple users may require us to first request() a device, to ensure exclusive use of it. After we are finished with the device, we release() it. These functions are similar to the open() and close() system calls for files. Other operating systems allow unmanaged access to devices. The hazard then is the potential for device contention and perhaps deadlock, which are described in Chapter 8.

      The resources that an operating system manages can be thought of as devices. Some of these are physical, like disk drives, while others are abstract or virtual, like files. On systems with multiple users, a program may need to request() a device to ensure it has exclusive access, and then release() it when finished. These actions are similar to open() and close() for files. Some operating systems let programs access devices without this kind of control, but doing so can lead to problems like device contention or deadlock, which are discussed in Chapter 8.
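
      As a hedged analogue of request() and release(), the sketch below takes an advisory exclusive lock on a device's special file with flock() before using it. The device path is a placeholder, and flock() is not the textbook's generic call, just one concrete way a program can claim exclusive use on a UNIX-like system.

      ```c
      /* Sketch of request()/release() semantics using an advisory lock on a
         device's special file; the device path is a placeholder. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/file.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("/dev/ttyUSB0", O_RDWR);       /* placeholder device */
          if (fd < 0) { perror("open"); return 1; }

          if (flock(fd, LOCK_EX) == 0) {               /* "request" exclusive use */
              /* ... use the device with read()/write() ... */
              flock(fd, LOCK_UN);                      /* "release" it */
          }
          close(fd);
          return 0;
      }
      ```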

    6. We may need these same sets of operations for directories if we have a directory structure for organizing files in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes and perhaps to set them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. At least two system calls, get_file_attributes() and set_file_attributes(), are required for this function. Some operating systems provide many more calls, such as calls for file move() and copy(). Others might provide an API that performs those operations using code and other system calls, and others might provide system programs to perform the tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.

      We often need similar operations for directories as we do for files, especially when using a directory structure to organize files. For both files and directories, it’s important to be able to read or modify their attributes when necessary. Attributes include things like the name, type, access permissions, and accounting information. To handle this, operating systems usually provide system calls such as get_file_attributes() and set_file_attributes(). Some systems go further, offering extra calls for tasks like moving or copying files. In other cases, these actions are handled through APIs or system programs. If other programs can call these system programs, they effectively act as APIs themselves.
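
      A minimal sketch of attribute handling on a POSIX system, where stat() and chmod() play roughly the roles of get_file_attributes() and set_file_attributes(); the file name "notes.txt" is a placeholder.

      ```c
      /* Sketch of attribute queries and updates with POSIX calls. */
      #include <stdio.h>
      #include <sys/stat.h>

      int main(void) {
          struct stat st;
          if (stat("notes.txt", &st) == 0) {                     /* get attributes */
              printf("size: %lld bytes, mode: %o\n",
                     (long long)st.st_size, (unsigned)(st.st_mode & 0777));
              chmod("notes.txt", 0644);                          /* set protection bits */
          }
          return 0;
      }
      ```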

    7. The file system is discussed in more detail in Chapter 13 through Chapter 15. Here, we identify several common system calls dealing with files. We first need to be able to create() and delete() files. Either system call requires the name of the file and perhaps some of the file's attributes. Once the file is created, we need to open() it and to use it. We may also read(), write(), or reposition() (rewind or skip to the end of the file, for example). Finally, we need to close() the file, indicating that we are no longer using it.

      This part covers the primary file-management system calls offered by an operating system. A program can create a new file or delete an existing one, supplying its name and any required attributes. Once a file exists, the program opens it with open() and can then work with it by reading, writing, or repositioning the file pointer with reposition(). When the program has finished its operations on the file, it calls close() to indicate that the file is no longer in use.
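
      A short sketch of that lifecycle using the POSIX counterparts of the generic calls (open() with O_CREAT for create(), lseek() for reposition(), unlink() for delete()); the file name is a placeholder.

      ```c
      /* Sketch of the basic file lifecycle with POSIX calls: create, write,
         reposition, read, close, and delete ("demo.txt" is a placeholder). */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);  /* create() + open() */
          if (fd < 0) { perror("open"); return 1; }

          write(fd, "hello\n", 6);               /* write() */
          lseek(fd, 0, SEEK_SET);                /* reposition() back to the start */

          char buf[16];
          ssize_t n = read(fd, buf, sizeof buf); /* read() */
          if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

          close(fd);                             /* close() */
          unlink("demo.txt");                    /* delete() */
          return 0;
      }
      ```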

    1. For example, under acidification, fleshy seaweeds outcompete calcareous species

      How would this potential change impact the organisms that rely on the calcareous species for food or protection?

    2. The native mussel T. hirsuta grew more under warming (Fig. 1; ANOVA Species × Temperature F1,32 = 6.13, P < 0.05; Supplementary Table 2). In contrast, M. galloprovincialis grew the same at ambient and elevated temperatures (Fig. 1; Supplementary Table 2). There was no effect of elevated pCO2 on growth in either of the mussel species (ANOVA CO2 F1,32 = 0.53, P > 0.05; Supplementary Table 2).

      The authors present an interesting point here. The research suggests that temperature is the primary driver of the difference in growth between the native T. hirsuta and M. galloprovincialis. Would these results be consistent in another shellfish species with the same tolerance for temperature and sensitivity to carbon dioxide?

    1. not only can such freedom be granted without prejudice to the public peace, but also, that without such freedom, piety cannot flourish nor the public peace be secure.

      Holland as an example of free speech

    2. How many evils spring from luxury, envy, avarice, drunkenness, and the like, yet these are tolerated

      some things are tolerated now because they cannot be legally enforced... more would come of preventing speech

    1. I mean the pace of the finished film, how the edits speed up or slow down to serve the story, producing a kind of rhythm to the edit.

      This video lets me connect the overall rhythm of each shot: some are sped up and others are longer. This helps me understand what rhythm means in a film.

    2. Other ways cinema manipulates time include sequences like flashbacks and flashforwards. Filmmakers use these when they want to show events from a character’s past, or foreshadow what’s coming in the future.

      I've seen this in a lot of films, where they show the end of the movie at the beginning and then we watch how the story plays out. For example, Fight Club demonstrates a flashforward.

    3. The most obvious example of this is the ellipsis, an edit that slices out time or events we don’t need to see to follow the story. Imagine a scene where a car pulls up in front of a house, then cuts to a woman at the door ringing the doorbell. We don’t need to spend the screen time watching her shut off the car, climb out, shut and lock the door, and walk all the way up to the house.

      I think this saves the director time and the audience's attention. As another example, a person in the film will be eating food and then we cut to her washing the dishes or to another scene; we don't need to waste time watching that person eat.

    4. He wants you to feel the terror of those peasants being massacred by the troops, even if you don’t completely understand the geography or linear sequence of events. That’s the power of the montage as Eisenstein used it: A collage of moving images designed to create an emotional effect rather than a logical narrative sequence.

      I think this video conveys the emotions much more than it explains the logic behind them.

    5. The audience was projecting their own emotion and meaning onto the actor’s expression because of the juxtaposition of the other images. This phenomenon – how we derive more meaning from the juxtaposition of two shots than from any single shot in isolation – became known as The Kuleshov Effect.

      I can see what the director was trying to get across to the audience: you can see the emotions of the actor in each cut.

    6. Film editing and how it worked on an audience. He had a hunch that the power of cinema was not found in any one shot, but in the juxtaposition of shots. So, he performed an experiment. He cut together a short film and showed it to audiences in 1918. Here’s the film:

      This is interesting because technological advancements have also created films just like this, and the dynamics and editing skills are so much clearer and more advanced than back then.

    7. but it is the juxtaposition of that word (or shot) in a sentence (or scene) that gives it its full power to communicate. As such, editing is fundamental to how cinema communicates with an audience.

      I do think that grammar, and the editing of words into the film, allow the director to connect with the audience.

  2. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. When patterns of specialization become unsustainable, the individuals affected can face periods of unemployment. They are like soldiers waiting for new orders, except that the orders come not from a commanding general but from the decentralized actions of many entrepreneurs testing ideas in search of profit.

      When old job patterns no longer make sense, workers are like soldiers waiting for new orders, except the orders come not from commanding officers but from entrepreneurs experimenting. Their livelihood is in the hands of people who are just testing things, with no certainty that a new role will be created. It raises the question of whether this decentralized adjustment makes unemployment longer or less predictable.

    2. Look at the list of ingredients in the cereal. Those ingredients had to be refined and shipped to the cereal manufacturer. Again, those processes required many machines, which in turn had to be manufactured. The cereal grains and other ingredients had to be grown, harvested, and processed. Machines were involved in those processes, and those machines had to be manufactured.

      Machines are such a big part of the industry. They are used for transportation and for manufacturing products, such as cereal, which comes from a chain of production. Every ingredient had to be grown, processed, and transported using machines, and the machines themselves had to be designed and built to keep the industry running. If machines weren't involved at all, would the industry be able to stay afloat?

    1. While navigating through the text, you’ll notice that the major part of the text you’re working within is identified at the top of the page

      This will help me save time finding the correct section I am working through.

    1. Define important concepts such as: authority, peer review, bias, point of view, editorial process, purpose, audience, information privilege and more.

      This is really useful, mainly because a lot of the time when a professor asks me to find a peer-reviewed article I struggle to find a good one, so I can really use the help.

  3. Aug 2025
    1. men would daily be thinking one thing and saying another”—a practice that will weave deceit and hypocrisy into the social fabric, thereby permitting “the avaricious, the flatterers, and other numskulls” to rise to the top.

      only puts unfit in rule

    2. Unlike many earlier defenders of toleration, he did not exclude atheists, Jews, Catholics, and the like.

      so long as your conduct is good, you may believe whatever

    3. The sovereign’s obligation to respect the liberty of his subjects is solely a matter of self-​interest; to mistreat subjects is bound to generate resentment and possibly seditious tendencies, and those sentiments, in turn, will render the sovereign’s authority less secure than it would otherwise be

      mistreating subjects will make them less likely to trust you and thus give you less power?

    1. The filmmakers behind Deadpool (2016), for example, shot 555 hours of raw footage for a final film of just 108 minutes. That’s a shooting ratio of 308:1. It would take 40 hours a week for 14 weeks just to watch all of the raw footage, much less select and arrange it all into an edited film![2]

      This is a lot of retakes, and 555 hours of footage seems a bit overwhelming. I don't think I would have the patience to look over the footage for 40 hours a week over 14 weeks. This shows huge dedication from the filmmakers.

    2. When the screenwriter hands the script off to the director, it is no longer a literary document, it’s a blueprint for a much larger, more complex creation. The production process is essentially an act of translation, taking all of those words on the page and turning them into shots, scenes and sequences.

      I never knew that once you hand a script over to the director it becomes a blueprint; I also never knew this process of turning a script into shots was called an act of translation.

  4. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. economics is inherently a social subject. It’s not just technical forces like technology and productivity that matter. It’s also the interactions and relationships between people that make the economy go around.

      Economics is grounded in people and isn't just what we usually associate it with; it's working people and their relationships that make the economy go around.

    2. Needless to say, this state of affairs was not socially sustainable. Working people and others fought hard for better conditions, a fairer share of the incredible wealth they were producing, and democratic rights. Under this pressure, capitalism evolved, unevenly, toward a more balanced and democratic system. Labour laws established minimum standards; unions won higher wages; governments became more active in regulating the economy and providing public services. But this progress was not “natural” or inevitable; it reflected decades of social struggle and conflict. And that progress could be reversed if and when circumstances changed – such as during times of war or recession. Indeed, the history of capitalism has been dominated by a rollercoaster pattern of boom, followed by bust.

      Early capitalism created harsh conditions that society could not tolerate, so workers came together to fight for better conditions, fairer wages, and democratic rights. Under this pressure capitalism evolved into a more balanced system, with labor laws, unions, and more active government regulation and public services. Capitalism only became more balanced because people forced it to change, not because the system was designed to provide fairness. If fairness was achieved through struggle rather than arising naturally, does that mean capitalism itself is inherently resistant to fairness?

    3. Is our present economy a good economy? In some ways, modern capitalism has done better than any previous arrangement in advancing many of these goals. In other ways, it fails the “good economy” test miserably. The rest of this book will endeavour to explain how the capitalist economy functions, the extent to which it meets (and fails to meet) these goals – and whether or not there are any better ways to do the job.

      It isn't clear whether our present economy is a good economy or not, but it is evident that modern capitalism has made better progress than previous arrangements in advancing the intended goals. The author gives a balanced evaluation, acknowledging capitalism's past strengths while also looking at its weaknesses: in some ways modern capitalism fails the 'good economy' test miserably, and though it is improving, it is still flawed. What will the author propose to meet these goals and do the job better?

    4. The Scottish writer Adam Smith is often viewed as the “father” of free-market economics. (This stereotype is not quite accurate; in many ways Smith’s theories are very different from modern-day neoclassical economics.) And his famous Wealth of Nations (published in 1776, the same year as American independence) came to symbolize (like America itself) the dynamism and opportunity of capitalism. Smith identified the productivity gains from large-scale factory production and its more sophisticated division of labour (whereby different workers or groups of workers are assigned to different specialized tasks). To support this new system, he advocated deregulation of markets, the expansion of trade, and policies to protect the profits and property rights of the early capitalists (who Smith celebrated as virtuous innovators and accumulators). He argued that free-market forces (which he called the “invisible hand”) and the pursuit of self-interest would best stimulate innovation and growth. However, his social analysis (building on the Physiocrats) was rooted more in class than in individuals: he favoured policies to undermine the vested interests of rural landlords (who he thought were unproductive) in favour of the more dynamic new class of capitalists.

      “Smith identified the productivity gains from large-scale factory production… division of labour” and “free-market forces… and the pursuit of self-interest would best stimulate innovation and growth.” This shows how Adam Smith laid the groundwork for capitalism and the idea of the “invisible hand,” but his focus was more on class dynamics and supporting productive capitalists than purely individual self-interest.

    5. Why? Because even the short-changed partner is still better off (by one penny) than if they had rejected the offer – and that’s all they care about. So there is no rational reason for the offer to be rejected. In practice, of course, anyone with the gall to propose such a lopsided bargain would face certain rejection. Experiments with real money have shown that splits as lopsided as 75–25 are almost always rejected (even though a partner rejecting that split forgoes a real $2.50 gain). And the most common offer proposed is a 50–50 split. That won’t surprise many people – but it does, strangely, surprise neoclassical economists! In short, the real-world behaviour of humans is not remotely consistent with the assumption of blind, individualistic greed.

      “real-world behaviour of humans is not remotely consistent with the assumption of blind, individualistic greed.” This shows how experiments (like the 50–50 split being most common) challenge neoclassical economic theory, proving people value fairness and social norms over pure self-interest.

    6. Homo sapiens have existed on this planet for approximately 100,000 years. They had an economy all of that time. Humans have always had to work to meet the material needs of their survival (food, clothing, and shelter) – not to mention, when possible, to enjoy the “finer things” in life. Capitalism, in contrast, has existed for around 250 years. If the entire history of Homo sapiens to date was a 24-hour day, then capitalism has existed for three-and-a-half minutes. What we call “the economy” went through many different stages en route to capitalism. (We’ll study more of this economic history in Chapter 3.) Even today, different kinds of economies exist. Some entire countries are non-capitalist. And within capitalist economies, there are important non-capitalist parts (although most capitalist economies are becoming more capitalist as time goes by). I think it’s a pretty safe bet that human beings will eventually find other, better ways to organize work in the future – maybe sooner, maybe later. It’s almost inconceivable that the major features of what we call “capitalism” will exist for the

      Capitalism is only a very recent system compared to the long history of human economies. Note that humans have always worked to meet their needs, but capitalism (about 250 years old) is just one stage among many and will likely be replaced by new ways of organizing work in the future. This helps put capitalism in perspective as temporary, not permanent.

    7. Some jobs link compensation directly to work effort. Piece-work systems, which pay workers for each bit of work they perform, are one example of this approach; so are contract workers (hired to perform a specific task, and paid only when that task is completed). This strategy has limited application, however: usually employers want their workers to be more flexible, performing a range of hard-to-specify functions (rather than simply churning out a certain number of widgets per hour). Even in straightforward jobs, piece-work systems produce notoriously bad quality,

      Referencing back to the taxi industry: Stanford is describing the power struggle of workers. He goes into more depth here than when he first mentioned precarious work. He frames it as bargaining power and labor insecurity rather than, as Kling claims, market disruption by the government. Either way, there is no balance in the industry, no "happy medium" between the workers and the establishments that already exist.

    8. The UN defines human development on the basis of three key indicators: GDP per capita, life expectancy, and educational attainment.

      UN’s Human Development Index (HDI) = GDP + health + education.

    9. An “efficient” economy is one which maximizes, through exchange, the usefulness of that initial endowment

      Allocative efficiency = maximizing the usefulness of existing resources through exchange, ignoring fairness or distribution.

    10. Real wages stagnated in most countries, in the face of higher unemployment, attacks on unions, and reductions in income security and other measures that supported workers’ bargaining power. Yet productivity continued to grow, thanks both to employers’ renewed power in the workplace and to continued technological progress. Figure 8.1 illustrates the sharp divergence of real wages from labour productivity in the US economy that coincides perfectly with the arrival of neoliberalism.

      As we move to the right of the graph in Figure 8.1 and the gap widens, it makes me wonder about other countries, specifically the divergence in countries with higher labor protections (the Netherlands) or those with little to no labor protections (the Philippines).

    11. There is still much debate and controversy within economics today – although not nearly as much as there should be. Economics instruction in most English-speaking countries conforms especially narrowly to neoclassical doctrine; there is more diversity in economics in continental Europe, Latin America, and a few other countries.

      As Stanford critiques neoclassicism and the idea of "automatic balance," I wonder what the laws would be like for the minimum wage or the unemployed. That would just not balance supply and demand. Stanford shows us in chapter 7 that unemployment is not neutral since it just increases bargaining power.

    12. Creative companies can devise all sorts of different ways of earning profits. Some are useful: developing higher-quality new products, and developing better, more efficient ways of producing them. But competitive markets can also reward companies for doing things that are utterly useless, from the perspective of human welfare (see Table 7.3). And if lax laws and regulations allow them to, profit-seeking companies will do things that are downright destructive to workers, communities, customers, and innocent bystanders.

      Previously, Stanford mentioned the different strategies that businesses use to earn the most profit using absolute and relative surplus value. Stanford includes the use of extending hours of workers (absolute surplus) and raising productivity per hour (relative surplus). Which strategy is more productive and used dominantly today, and why? In chapter 8, he analyzes technological productivity, which would be relative surplus, in my opinion.

    13. Economics and politics have always gone hand-in-hand. Indeed, the first economists called their discipline “political economy.” The connections between economics and politics reflect, in part, the importance of economic conditions to political conditions. The well-being of the economy can influence the rise and fall of politicians and governments, even entire social systems. But here, too, the influence goes both ways. Politics also affects the economy – and economics itself. The economy is a realm of competing, often conflicting interests. Determining whose interests prevail, and how conflicts are managed, is a deeply political process. (Neoclassical economists claim that anonymous “market forces” determine all these outcomes, but don’t be fooled: what they call the “market” is itself a social institution in which some people’s interests are enhanced at the expense of others’.) Different economic actors use their political influence and power to advance their respective economic interests. The extent to which groups of people tolerate economic outcomes (even unfavourable ones) also depends on political factors: such as whether or not they believe those outcomes are “natural” or “inevitable,” and whether or not they feel they have any power to bring about change. Finally, the social science which aims to interpret and explain all this scrabbling, teeming behaviour – economics – has its own political assumptions and biases. In Chapter 4 we’ll review how most economic theories over the years have been motivated by political considerations. Modern economics (including this book!) is no different: economics is always a deeply political subject.

      This passage explains the strong relationship between economics and politics. Economic theories themselves carry political assumptions, and politics influences the economy just as the economy influences politics.

    14. The economy must be a very complicated, volatile thing. At least that’s how it seems in the business pages of the newspaper. Mind-boggling stock market tables. Charts and graphs. GDP statistics. Foreign exchange rates. It’s little wonder the media turn to economists, the high priests of this mysterious world, to tell us what it means, and why it’s important.

      This passage highlights how complex the economy seems to an average person. Because of this, society relies on economists to interpret the data.

    15. Economics encompasses several sub-disciplines. Economic history; money and finance; household economics; labour studies and labour relations; business economics and management; international economics; environmental economics; and others. A broad (and rather artificial) division is often made between microeconomics (the study of the economic behaviour of individual consumers, workers, and companies) and macroeconomics (the study of how the economy functions at the aggregate level)

      This section shows that economics is a broad field, ranging across many sub-disciplines such as international trade, and it explains the difference between micro and macro. My question is: do micro or macro factors have a stronger influence on the overall economy?

    16. Economics is the study of human economic behaviour: the production and distribution of the goods and services we need and want. Hence, economics is a social science, not a physical science.

      I think this defines economics as the study of human choices, and it emphasizes the idea of economics as a social science because it primarily focuses on people's behavior. My question is: why is it important to classify economics as a social science?

    1. The special effects make-up for the gory bits of your favorite horror films can sometimes take center stage.

      The special effects create better scenes in films like horror movies. This can create a better experience for the audience as well.

    1. The dataset was normalized to 10000 counts per cell, Log1p transformed and filtered to contain 2000 highly variable genes. The first important observation is that state-of-the-art approaches, except CPM

      Does marker‑gene expression change monotonically along the CPM geodesic from root to leaf?

    1. There are so many facets of and variations in process control that we next use two examples—one involving a single-tasking system and the other a multitasking system—to clarify these concepts. The Arduino is a simple hardware platform consisting of a microcontroller along with input sensors that respond to a variety of events, such as changes to light, temperature, and barometric pressure, to just name a few. To write a program for the Arduino, we first write the program on a PC and then upload the compiled program (known as a sketch) from the PC to the Arduino's flash memory via a USB connection. The standard Arduino platform does not provide an operating system; instead, a small piece of software known as a boot loader loads the sketch into a specific region in the Arduino's memory

      This passage explains process control in simple and multitasking systems using the Arduino as an example. The Arduino is a microcontroller platform with sensors that detect various events. Programs, called sketches, are written and compiled on a PC and then uploaded to the Arduino’s flash memory. Unlike more complex systems, the standard Arduino does not use a full operating system; a bootloader simply loads the sketch into memory, demonstrating a single-tasking environment.

    2. Quite often, two or more processes may share data. To ensure the integrity of the data being shared, operating systems often provide system calls allowing a process to lock shared data. Then, no other process can access the data until the lock is released. Typically, such system calls include acquire_lock() and release_lock().

      This paragraph explores how operating systems handle data shared among multiple processes. To preserve data integrity, the OS can protect shared data via system calls like acquire_lock(), stopping other processes from accessing it until it is freed with release_lock(). This mechanism is crucial for avoiding conflicts and keeping data consistent when processes run concurrently.
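
      A minimal sketch of this locking pattern between processes, assuming POSIX named semaphores: sem_wait() and sem_post() play the roles of acquire_lock() and release_lock(), and the semaphore name "/demo_lock" is a placeholder (on Linux you may need to link with -pthread).

      ```c
      /* Sketch of locking shared data between processes with a POSIX named
         semaphore standing in for acquire_lock()/release_lock(). */
      #include <fcntl.h>
      #include <semaphore.h>
      #include <stdio.h>

      int main(void) {
          sem_t *lock = sem_open("/demo_lock", O_CREAT, 0644, 1);  /* initial value 1 = unlocked */
          if (lock == SEM_FAILED) { perror("sem_open"); return 1; }

          sem_wait(lock);                 /* acquire_lock(): other processes now block here */
          /* ... read or update the shared data ... */
          sem_post(lock);                 /* release_lock() */

          sem_close(lock);
          return 0;
      }
      ```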

    3. A process executing one program may want to load() and execute() another program. This feature allows the command interpreter to execute a program as directed by, for example, a user command or the click of a mouse. An interesting question is where to return control when the loaded program terminates. This question is related to whether the existing program is lost, saved, or allowed to continue execution concurrently with the new program. If control returns to the existing program when the new program terminates, we must save the memory image of the existing program; thus, we have effectively created a mechanism for one program to call another program. If both programs continue concurrently, we have created a new process to be multiprogrammed. Often, there is a system call specifically for this purpose (create_process()).

      This text explains how one application can load and run another application, for instance, when a user issues a command or selects an icon. It emphasizes the main problem of control flow once the new program ends: control might revert to the original program, necessitating its memory image to be preserved, or both programs could operate simultaneously, resulting in a multiprogramming situation. The excerpt mentions that operating systems typically offer a specific system call, like create_process(), to enable this functionality.
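
      On UNIX-like systems the concrete counterparts are fork(), exec(), and wait(). A minimal sketch, assuming the ls program is on the PATH:

      ```c
      /* Sketch of one program loading and running another on UNIX-like systems:
         fork() plays the role of create_process(), execlp() loads the new
         program, and waitpid() returns control to the caller when it ends. */
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          pid_t pid = fork();                          /* create a new process */
          if (pid == 0) {
              execlp("ls", "ls", "-l", (char *)NULL);  /* replace it with another program */
              _exit(127);                              /* only reached if exec fails */
          } else if (pid > 0) {
              int status;
              waitpid(pid, &status, 0);                /* control returns here on termination */
              printf("child exited with status %d\n", WEXITSTATUS(status));
          }
          return 0;
      }
      ```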

    4. A running program needs to be able to halt its execution either normally (end()) or abnormally (abort()). If a system call is made to terminate the currently running program abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is sometimes taken and an error message generated. The dump is written to a special log file on disk and may be examined by a debugger—a system program designed to aid the programmer in finding and correcting errors, or bugs—to determine the cause of the problem. Under either normal or abnormal circumstances, the operating system must transfer control to the invoking command interpreter. The command interpreter then reads the next command. In an interactive system, the command interpreter simply continues with the next command; it is assumed that the user will issue an appropriate command to respond to any error. In a GUI system, a pop-up window might alert the user to the error and ask for guidance. Some systems may allow for special recovery actions in case an error occurs. If the program discovers an error in its input and wants to terminate abnormally, it may also want to define an error level. More severe errors can be indicated by a higher-level error parameter. It is then possible to combine normal and abnormal termination by defining a normal termination as an error at level 0. The command interpreter or a following program can use this error level to determine the next action automatically.

      This passage explains how a running program can terminate either normally using end() or abnormally using abort(). In the case of abnormal termination or an error trap, the operating system may create a memory dump and an error log for debugging. After termination, control is returned to the command interpreter, which continues processing user commands or provides GUI prompts for guidance. The passage also highlights the use of error levels to indicate the severity of errors, allowing subsequent programs or the command interpreter to respond appropriately.
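
      A small sketch of normal versus abnormal termination and error levels, using the standard exit() and abort() calls; the specific error codes chosen here are illustrative.

      ```c
      /* Sketch of normal vs. abnormal termination and "error levels":
         exit(0) signals success, a nonzero code signals an error of some
         severity, and abort() terminates abnormally (often producing a core
         dump that a debugger can examine). Error codes are illustrative. */
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[]) {
          if (argc < 2) {
              fprintf(stderr, "missing argument\n");
              exit(2);              /* controlled termination with error level 2 */
          }
          if (atoi(argv[1]) < 0)
              abort();              /* abnormal termination with a possible memory dump */

          exit(0);                  /* normal termination = error level 0 */
      }
      ```

      A shell or a following program can then inspect this error level (for example via $? in a UNIX shell) to decide what to do next.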

    5. System calls can be grouped roughly into six major categories: process control, file management, device management, information maintenance, communications, and protection. Below, we briefly discuss the types of system calls that may be provided by an operating system. Most of these system calls support, or are supported by, concepts and functions that are discussed in later chapters. Figure 2.8 summarizes the types of system calls normally provided by an operating system. As mentioned, in this text, we normally refer to the system calls by generic names. Throughout the text, however, we provide examples of the actual counterparts to the system calls for UNIX, Linux, and Windows systems.

      This section explains that system calls can be categorized into six primary groups: process management, file handling, device control, information upkeep, communication, and security. The text emphasizes that most system calls relate to concepts discussed later and provides examples from UNIX, Linux, and Windows. Figure 2.8 gives a summary of these categories.

    6. Three general methods are used to pass parameters to the operating system. The simplest approach is to pass the parameters in registers. In some cases, however, there may be more parameters than registers. In these cases, the parameters are generally stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register (Figure 2.7). Linux uses a combination of these approaches.

      This passage describes how system-call parameters can be passed to the operating system. The simplest method is using CPU registers to hold the parameters. If there are too many parameters for the available registers, the parameters are placed in a block or table in memory, and the address of that block is passed in a register. Linux uses a mix of both methods, depending on the situation.
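
      A small sketch of the register-based path, assuming x86-64 Linux and glibc's syscall() wrapper: the wrapper loads the call number and the three write() parameters into registers before trapping into the kernel.

      ```c
      /* Sketch of how parameters reach the kernel, assuming x86-64 Linux:
         the syscall() wrapper places the call number and the three write()
         parameters into registers per the kernel's calling convention. */
      #define _GNU_SOURCE
      #include <sys/syscall.h>
      #include <unistd.h>

      int main(void) {
          const char msg[] = "hello from a raw system call\n";
          /* same effect as write(1, msg, ...), expressed as a numbered call */
          syscall(SYS_write, 1, msg, sizeof msg - 1);
          return 0;
      }
      ```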

    7. System calls occur in different ways, depending on the computer in use. Often, more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the particular operating system and call. For example, to get input, we may need to specify the file or device to use as the source, as well as the address and length of the memory buffer into which the input should be read. Of course, the device or file and length may be implicit in the call.

      This passage explains that system calls often require additional information beyond identifying the call itself. Parameters such as the source file or device, the memory buffer address, and the buffer length may need to be specified so the operating system knows how to process the request. The precise details depend on the particular operating system and the system call being used.
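
      For instance, a plain POSIX read() spells out exactly that information: which descriptor to read from, where the buffer lives, and how many bytes it can hold.

      ```c
      /* Sketch showing the extra information a system call needs. */
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          char buf[128];                                     /* memory buffer for the input */
          ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* source, address, length */
          if (n > 0)
              printf("read %zd bytes\n", n);
          return 0;
      }
      ```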

    8. The caller need know nothing about how the system call is implemented or what it does during execution. Rather, the caller need only obey the API and understand what the operating system will do as a result of the execution of that system call. Thus, most of the details of the operating-system interface are hidden from the programmer by the API and are managed by the RTE.

      This text highlights the importance of abstraction in system calls. Programmers working with an API do not have to understand the internal mechanisms or execution details of a system call. They just need to follow the API and understand the expected outcome. The run-time environment (RTE) manages the inner workings of interacting with the operating system, concealing the underlying details from the developer.

    9. Another important factor in handling system calls is the run-time environment (RTE)—the full suite of software needed to execute applications written in a given programming language, including its compilers or interpreters as well as other software, such as libraries and loaders. The RTE provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers

      This passage describes the role of the run-time environment (RTE) in managing system calls. The RTE includes compilers, interpreters, libraries, and loaders, and provides a system-call interface that connects API function calls to the operating system’s system calls. Each system call is typically assigned a number, and the interface uses a table indexed by these numbers to invoke the correct system call within the OS.
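
      A toy sketch of such a table in C: handlers indexed by call number, with a dispatch routine that uses the number to pick the handler. The numbers and handler bodies are invented for illustration; a real kernel's table maps numbers to in-kernel service routines.

      ```c
      /* Toy sketch of a system-call table indexed by number. */
      #include <stdio.h>

      static long sys_getpid_demo(void) { return 1234; }        /* pretend handlers */
      static long sys_time_demo(void)   { return 1700000000; }

      typedef long (*syscall_fn)(void);
      static syscall_fn syscall_table[] = {
          [0] = sys_getpid_demo,
          [1] = sys_time_demo,
      };

      static long do_syscall(unsigned number) {
          if (number >= sizeof syscall_table / sizeof syscall_table[0] ||
              syscall_table[number] == NULL)
              return -1;                                       /* unknown call */
          return syscall_table[number]();                      /* index into the table */
      }

      int main(void) {
          printf("syscall 0 -> %ld\n", do_syscall(0));
          printf("syscall 1 -> %ld\n", do_syscall(1));
          return 0;
      }
      ```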

    10. Why would an application programmer prefer programming according to an API rather than invoking actual system calls? There are several reasons for doing so. One benefit concerns program portability. An application programmer designing a program using an API can expect her program to compile and run on any system that supports the same API (although, in reality, architectural differences often make this more difficult than it may appear). Furthermore, actual system calls can often be more detailed and difficult to work with than the API available to an application programmer. Nevertheless, there often exists a strong correlation between a function in the API and its associated system call within the kernel. In fact, many of the POSIX and Windows APIs are similar to the native system calls provided by the UNIX, Linux, and Windows operating systems.

      This passage describes why application programmers prefer using APIs instead of directly invoking system calls. APIs provide portability, allowing programs to run on any system that supports the same API, and they simplify programming by offering higher-level, easier-to-use functions. While system calls are often more detailed and complex, APIs usually correspond closely to the underlying system calls, as seen in the POSIX and Windows APIs.

    11. As you can see, even simple programs may make heavy use of the operating system. Frequently, systems execute thousands of system calls per second. Most programmers never see this level of detail, however. Typically, application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect. Three of the most common APIs available to application programmers are the Windows API for Windows systems, the POSIX API for POSIX-based systems (which include virtually all versions of UNIX, Linux, and macOS), and the Java API for programs that run on the Java virtual machine

      This passage highlights that even simple programs rely heavily on the operating system through system calls, often executing thousands per second. However, programmers usually interact with higher-level APIs rather than making system calls directly. APIs like the Windows API, the POSIX API, and the Java API provide standardized functions, parameters, and expected return values, simplifying program development while hiding the underlying OS complexity.

    12. When both files are set up, we enter a loop that reads from the input file (a system call) and writes to the output file (another system call). Each read and write must return status information regarding various possible error conditions. On input, the program may find that the end of the file has been reached or that there was a hardware failure in the read (such as a parity error). The write operation may encounter various errors, depending on the output device (for example, no more available disk space).

      This passage emphasizes that reading from and writing to files in a program involves repeated system calls, each of which must report status and handle potential errors. It illustrates how the operating system monitors both input and output operations, accounting for conditions like reaching the end of a file, hardware read failures, or insufficient disk space during writing.

    13. Once the two file names have been obtained, the program must open the input file and create and open the output file. Each of these operations requires another system call. Possible error conditions for each system call must be handled. For example, when the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should output an error message (another sequence of system calls) and then terminate abnormally (another system call).

      This passage explains that each file operation—opening an input file, creating and opening an output file—requires a separate system call. It highlights the need for handling potential errors, such as a missing file or insufficient access permissions, using system calls to display error messages and terminate the program if necessary.

    14. Before we discuss how an operating system makes system calls available, let's first use an example to illustrate how system calls are used: writing a simple program to read data from one file and copy them to another file. The first input that the program will need is the names of the two files: the input file and the output file. These names can be specified in many ways, depending on the operating-system design

      This passage introduces the concept of using system calls with a practical example: a program that reads from one file and writes to another. It emphasizes that the program first needs the file names and notes that how these names are specified can vary depending on the operating system’s design.
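
      Pulling the last three annotations together, here is a hedged sketch of the copy program using POSIX calls, with the error checks the passages describe (missing input file, write failures such as a full disk, end-of-file detection):

      ```c
      /* Sketch of the file-copy walkthrough: open the input, create the output,
         loop on read()/write() with error checks, then close both. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main(int argc, char *argv[]) {
          if (argc != 3) {                                     /* need input and output names */
              fprintf(stderr, "usage: %s <in> <out>\n", argv[0]);
              exit(1);
          }
          int in = open(argv[1], O_RDONLY);
          if (in < 0) { perror(argv[1]); exit(1); }            /* e.g., no such file, no access */

          int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (out < 0) { perror(argv[2]); exit(1); }

          char buf[4096];
          ssize_t n;
          while ((n = read(in, buf, sizeof buf)) > 0) {        /* 0 means end of file */
              if (write(out, buf, (size_t)n) != n) {           /* e.g., disk full */
                  perror("write");
                  exit(1);
              }
          }
          if (n < 0) perror("read");                           /* e.g., hardware error */

          close(in);
          close(out);
          return n < 0 ? 1 : 0;
      }
      ```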

    15. System calls provide an interface to the services made available by an operating system. These calls are generally available as functions written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions.

      This passage explains that system calls act as the bridge between programs and the operating system’s services. Most system calls are accessible through high-level languages like C and C++, but some low-level operations—especially those requiring direct hardware access—may need to be implemented in assembly language.

    16. Although there are apps that provide a command-line interface for iOS and Android mobile systems, they are rarely used. Instead, almost all users of mobile systems interact with their devices using the touch-screen interface. The user interface can vary from system to system and even from user to user within a system; however, it typically is substantially removed from the actual system structure. The design of a useful and intuitive user interface is therefore not a direct function of the operating system. In this book, we concentrate on the fundamental problems of providing adequate service to user programs. From the point of view of the operating system, we do not distinguish between user programs and system programs.

      This passage emphasizes that mobile users almost exclusively use touch-screen interfaces rather than command-line interfaces. While user interfaces may differ across systems and users, their design is largely separate from the underlying operating system. The focus of the book, as noted here, is on the operating system’s role in providing consistent and adequate service to programs, treating user and system programs equivalently.

    17. In contrast, most Windows users are happy to use the Windows GUI environment and almost never use the shell interface. Recent versions of the Windows operating system provide both a standard GUI for desktop and traditional laptops and a touch screen for tablets. The various changes undergone by the Macintosh operating systems also provide a nice study in contrast.

      This passage contrasts typical Windows users with command-line users, noting that most Windows users rely primarily on the GUI and rarely use the shell. Modern Windows versions support both desktop GUIs and touch interfaces for tablets. The passage also points out that the evolution of the Macintosh operating systems offers a useful comparison for understanding how GUI design and user interaction have developed over time.

    18. The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them, it is more efficient, giving them faster access to the activities they need to perform. Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are command-line knowledgeable

      This text emphasizes that the choice between the graphical user interface (GUI) and the command-line interface (CLI) usually comes down to personal preference and the user's skill level. System administrators and experienced users typically prefer the CLI for its quicker, more efficient access to system features. Certain tasks might only be accessible through the CLI, which matters for users who need specific or uncommon functions.

    19. Because either a command-line interface or a mouse-and-keyboard system is impractical for most mobile systems, smartphones and handheld tablet computers typically use a touch-screen interface. Here, users interact by making gestures on the touch screen—for example, pressing and swiping fingers across the screen. Although earlier smartphones included a physical keyboard, most smartphones and tablets now simulate a keyboard on the touch screen

      This text demonstrates that mobile devices like smartphones and tablets depend on touch-screen interfaces rather than conventional command-line or mouse-and-keyboard systems. Users typically engage directly with the display using gestures like tapping or swiping. While early smartphones had physical keyboards, modern devices typically display a virtual keyboard on the touch screen for input, optimizing portability and usability.

    20. Graphical user interfaces first appeared due in part to research taking place in the early 1970s at Xerox PARC research facility. The first GUI appeared on the Xerox Alto computer in 1973. However, graphical interfaces became more widespread with the advent of Apple Macintosh computers in the 1980s. The user interface for the Macintosh operating system has undergone various changes over the years, the most significant being the adoption of the Aqua interface that appeared with macOS. Microsoft's first version of Windows—Version 1.0—was based on the addition of a GUI interface to the MS-DOS operating system

      This passage outlines the historical development of graphical user interfaces (GUIs). GUIs were first explored at Xerox PARC in the early 1970s, with the Xerox Alto being the first computer to have one. Widespread use came in the 1980s with Apple’s Macintosh computers. Over time, GUIs evolved, such as with Apple’s adoption of the Aqua interface in macOS. Microsoft also added a GUI with Windows 1.0, layering it over the MS-DOS operating system.

    21. In one approach, the command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call. In this case, the number of commands that can be given determines the size of the command interpreter, since each command requires its own implementing code.

      This passage explains one method of implementing the commands in a command interpreter: the interpreter directly contains the code for executing each command. For instance, a delete-file command triggers a specific section of the interpreter’s code to set parameters and perform the system call. The number of supported commands directly affects the interpreter’s size, as each command needs its own dedicated code.
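
      A toy sketch of this first approach: a command loop in which the interpreter itself contains the code for a built-in "del" command that makes the unlink() system call. The command names are invented for illustration.

      ```c
      /* Toy command interpreter: the code for each command lives inside the
         interpreter, and the built-in "del" command wraps the unlink() call. */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void) {
          char line[256], name[200];
          for (;;) {
              printf("> ");
              fflush(stdout);
              if (fgets(line, sizeof line, stdin) == NULL) break;
              if (strncmp(line, "exit", 4) == 0) break;
              if (sscanf(line, "del %199s", name) == 1) {      /* built-in delete command */
                  if (unlink(name) != 0)                       /* the system call it wraps */
                      perror(name);
              } else {
                  printf("unknown command\n");
              }
          }
          return 0;
      }
      ```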

    22. The main function of the command interpreter is to get and execute the next user-specified command. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The various shells available on UNIX systems operate in this way. These commands can be implemented in two general ways.

      This passage highlights that the command interpreter’s primary role is to read and execute user commands, many of which involve file manipulation, such as creating, deleting, or copying files. It also notes that the various UNIX shells operate this way and that these commands can be implemented in two general ways.

    23. Most operating systems, including Linux, UNIX, and Windows, treat the command interpreter as a special program that is running when a process is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the C shell, Bourne-Again shell, Korn shell, and others

      This passage explains that the command interpreter, or shell, is a special program that runs when a process starts or when a user logs on. On systems like UNIX and Linux, multiple shells are available, allowing users to choose their preferred interface for entering commands.

    24. Protection and security. The owners of information stored in a multiuser or networked computer system may want to control use of that information. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important.

      This passage describes how operating systems enforce protection and security by controlling access to system resources. In multiuser or networked environments, this ensures that processes do not interfere with one another and safeguards the system against external threats.

    25. Logging. We want to keep track of which programs use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for system administrators who wish to reconfigure the system to improve computing services.

      This passage explains how operating systems maintain logs of program resource usage. These logs can support accounting and billing, or help administrators analyze usage patterns to optimize system performance.

    26. Resource allocation. When there are multiple processes running at the same time, resources must be allocated to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code.

      This passage highlights that the operating system is responsible for resource allocation, distributing CPU time, memory, file storage, and I/O devices among multiple running processes to ensure fair and efficient usage.

    27. Error detection. The operating system needs to be detecting and correcting errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).

      This passage explains that the operating system continuously detects and handles errors. These errors can arise in hardware (CPU, memory, or I/O devices) or in user programs, such as illegal memory access or arithmetic overflow, ensuring system stability.

    28. Communications. There are many circumstances in which one process needs to exchange information with another process. Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a network

      This passage describes how operating systems provide mechanisms for interprocess communication, allowing processes to exchange information either on the same computer or across different computers connected by a network.

    29. File-system manipulation. The file system is of particular interest. Obviously, programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Finally, some operating systems include permissions management to allow or deny access to files or directories based on file ownership.

      This passage explains that operating systems manage file-system operations, including reading, writing, creating, deleting, searching, and listing files and directories. Some systems also enforce permissions to control access based on file ownership.
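
      As a hedged illustration, the following C sketch shows the kinds of file-system requests a program makes through the operating system on a POSIX system: create, write, inspect, and delete a file by name. The file name "notes.txt" and the permission bits are illustrative only.

      /* Sketch of common file-system requests made through the operating
       * system using POSIX calls: create, write, inspect, and delete a file
       * by name. The file name "notes.txt" is illustrative only. */
      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/stat.h>

      int main(void) {
          /* create (or truncate) a file, readable and writable by its owner */
          int fd = open("notes.txt", O_CREAT | O_WRONLY | O_TRUNC, 0600);
          if (fd < 0) {
              perror("open");
              return 1;
          }

          write(fd, "hello\n", 6);                    /* write data into the file */
          close(fd);

          struct stat info;                           /* list file information */
          if (stat("notes.txt", &info) == 0)
              printf("size: %lld bytes\n", (long long)info.st_size);

          unlink("notes.txt");                        /* delete the file by name */
          return 0;
      }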

    30. Program execution. The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating error).

      This passage highlights that an operating system manages program execution by loading programs into memory, running them, and handling their termination, whether it ends normally or due to an error.

    31. An operating system provides an environment for the execution of programs. It makes certain services available to programs and to the users of those programs. The specific services provided, of course, differ from one operating system to another, but we can identify common classes.

      This passage states that an operating system provides a platform for running programs, offering services to both programs and users. While the specific services vary across operating systems, there are common classes of services that can generally be identified.

    32. We can view an operating system from several vantage points. One view focuses on the services that the system provides; another, on the interface that it makes available to users and programmers; a third, on its components and their interconnections. In this chapter, we explore all three aspects of operating systems, showing the viewpoints of users, programmers, and operating system designers. We consider what services an operating system provides, how they are provided, how they are debugged, and what the various methodologies are for designing such systems. Finally, we describe how operating systems are created and how a computer starts its operating system.

      This passage explains that operating systems can be understood from multiple perspectives: the services they provide, the interfaces available to users and programmers, and their internal components and connections. The chapter will explore these viewpoints, covering OS services, debugging, design methodologies, creation processes, and how a computer boots its operating system.

    33. Another advantage of working with open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for instance, but each has its own goals, utility, licensing, and purpose. Sometimes, licenses are not mutually exclusive and cross-pollination occurs, allowing rapid improvements in operating-system projects. For example, several major components of OpenSolaris have been ported to BSD UNIX. The advantages of free software and open sourcing are likely to increase the number and quality of open-source projects, leading to an increase in the number of individuals and companies that use these projects.

      Another benefit of open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for example, yet each has its own goals, utilities, licensing, and purpose. Licenses are sometimes not mutually exclusive, and cross-pollination occurs, enabling rapid improvements in operating-system projects; several major components of OpenSolaris, for instance, have been ported to BSD UNIX. The advantages of free software and open sourcing are expected to increase the number and quality of open-source projects and, with them, the number of people and companies that use those projects.

    34. The free-software movement is driving legions of programmers to create thousands of open-source projects, including operating systems. Sites like http://freshmeat.net/ and http://distrowatch.com/ provide portals to many of these projects. As we stated earlier, open-source projects enable students to use source code as a learning tool. They can modify programs and test them, help find and fix bugs, and otherwise explore mature, full-featured operating systems, compilers, tools, user interfaces, and other types of programs. The availability of source code for historic projects, such as Multics, can help students to understand those projects and to build knowledge that will help in the implementation of new projects.

      This passage highlights how the free-software movement motivates programmers to create numerous open-source projects, including operating systems. Portals like FreshMeat and DistroWatch provide access to these projects. Open-source code serves as a learning tool, allowing students to modify, test, and debug programs, explore full-featured systems, and study historic projects such as Multics to gain knowledge useful for developing new software.

    35. Solaris is the commercial UNIX-based operating system of Sun Microsystems. Originally, Sun's SunOS operating system was based on BSD UNIX. Sun moved to AT&T's System V UNIX as its base in 1991. In 2005, Sun open-sourced most of the Solaris code as the OpenSolaris project. The purchase of Sun by Oracle in 2009, however, left the state of this project unclear

      This passage outlines the history of Solaris, Sun Microsystems’ commercial UNIX-based OS. SunOS was initially based on BSD UNIX, but in 1991 it switched to System V UNIX. In 2005, most Solaris code was open-sourced as OpenSolaris, though Oracle’s acquisition of Sun in 2009 left the project’s future uncertain.

    36. As with many open-source projects, this source code is contained in and controlled by a version control system—in this case, “subversion” (https://subversion.apache.org/source-code). Version control systems allow a user to “pull” an entire source code tree to his computer and “push” any changes back into the repository for others to then pull. These systems also provide other features, including an entire history of each file and a conflict resolution feature in case the same file is changed concurrently. Another version control system is git, which is used for GNU/Linux, as well as other programs (http://www.git-scm.com).

      This text describes how open-source projects typically use version control systems to manage their source code. Subversion (used by BSD) and Git (used by GNU/Linux) let users pull the code, make modifications, and push the updates back to the repository. These systems track each file's history, handle concurrent changes, and assist in conflict resolution, facilitating collaborative development and effective code management.

    37. Just as with Linux, there are many distributions of BSD UNIX, including FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. To explore the source code of FreeBSD, simply download the virtual machine image of the version of interest and boot it within Virtualbox, as described above for Linux. The source code comes with the distribution and is stored in /usr/src/. The kernel source code is in /usr/src/sys. For example, to examine the virtual memory implementation code in the FreeBSD kernel, see the files in /usr/src/sys/vm. Alternatively, you can simply view the source code online at https://svnweb.freebsd.org.

      This passage explains that BSD UNIX, like Linux, has multiple distributions, such as FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. FreeBSD's source code is included with the distribution and can be explored locally (e.g., in /usr/src/ and /usr/src/sys) or online via the FreeBSD repository. Virtual machine images allow users to boot and examine the OS safely, making it accessible for learning and experimentation.

    38. BSD UNIX has a longer and more complicated history than Linux. It started in 1978 as a derivative of AT&T's UNIX. Releases from the University of California at Berkeley (UCB) came in source and binary form, but they were not open source because a license from AT&T was required. BSD UNIX's development was slowed by a lawsuit by AT&T, but eventually a fully functional, open-source version, 4.4BSD-lite, was released in 1994.

      This passage summarizes the history of BSD UNIX. Originating in 1978 as a derivative of AT&T UNIX, early BSD releases from UC Berkeley required an AT&T license and were not fully open source. Development was delayed by legal issues, but a fully functional open-source version, 4.4BSD-lite, was eventually released in 1994.

    39. The resulting GNU/Linux operating system (with the kernel properly called Linux but the full operating system including GNU tools called GNU/Linux) has spawned hundreds of unique distributions, or custom builds, of the system. Major distributions include Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu. Distributions vary in function, utility, installed applications, hardware support, user interface, and purpose. For example, Red Hat Enterprise Linux is geared to large commercial use. PCLinuxOS is a live CD—an operating system that can be booted and run from a CD-ROM without being installed on a system's boot disk. A variant of PCLinuxOS—called PCLinuxOS Supergamer DVD—is a live DVD that includes graphics drivers and games. A gamer can run it on any compatible system simply by booting from the DVD. When the gamer is finished, a reboot of the system resets it to its installed operating system.

      This passage describes the many distributions, or custom builds, of GNU/Linux, such as Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu, which differ in function, installed applications, hardware support, user interface, and purpose. Some target large commercial use (Red Hat Enterprise Linux), while others, like PCLinuxOS and its Supergamer DVD variant, are live CD/DVD systems that boot and run without being installed, returning the machine to its installed operating system after a reboot.

    40. As an example of a free and open-source operating system, consider GNU/Linux. By 1991, the GNU operating system was nearly complete. The GNU Project had developed compilers, editors, utilities, libraries, and games—whatever parts it could not find elsewhere. However, the GNU kernel never became ready for prime time. In 1991, a student in Finland, Linus Torvalds, released a rudimentary UNIX-like kernel using the GNU compilers and tools and invited contributions worldwide.

      This passage discusses GNU/Linux as an example of a free and open-source operating system. By 1991, the GNU Project had developed most components except for a fully functional kernel. Linus Torvalds then released a basic UNIX-like kernel using GNU tools and invited global contributions, leading to the development of the Linux kernel and the complete GNU/Linux system.

    41. The FSF uses the copyrights on its programs to implement “copyleft,” a form of licensing invented by Stallman. Copylefting a work gives anyone that possesses a copy of the work the four essential freedoms that make the work free, with the condition that redistribution must preserve these freedoms. The GNU General Public License (GPL) is a common license under which free software is released. Fundamentally, the GPL requires that the source code be distributed with any binaries and that all copies (including modified versions) be released under the same GPL license. The Creative Commons “Attribution Sharealike” license is also a copyleft license; “sharealike” is another way of stating the idea of copyleft.

      This passage explains “copyleft,” a licensing approach developed by Richard Stallman and used by the Free Software Foundation (FSF). Copyleft ensures that software remains free by granting users the four essential freedoms while requiring that any redistribution preserve those freedoms. The GNU General Public License (GPL) is a widely used copyleft license, mandating that source code accompany binaries and that modified versions remain under the same license. Creative Commons’ “Attribution Sharealike” license follows a similar principle.

    42. To counter the move to limit software use and redistribution, Richard Stallman in 1984 started developing a free, UNIX-compatible operating system called GNU (which is a recursive acronym for “GNU's Not Unix!”). To Stallman, “free” refers to freedom of use, not price. The free-software movement does not object to trading a copy for an amount of money but holds that users are entitled to four certain freedoms: (1) to freely run the program, (2) to study and change the source code, and to give or sell copies either (3) with or (4) without changes. In 1985, Stallman published the GNU Manifesto, which argues that all software should be free. He also formed the Free Software Foundation (FSF) with the goal of encouraging the use and development of free software.

      This passage explains Richard Stallman’s creation of the GNU operating system, begun in 1984 to promote software freedom. “Free” refers to liberty, not price, granting users the rights to run, study, modify, and distribute software with or without changes. Stallman’s GNU Manifesto and the Free Software Foundation (FSF) advocate for these freedoms and encourage the development and use of free software.

    43. Computer and software companies eventually sought to limit the use of their software to authorized computers and paying customers. Releasing only the binary files compiled from the source code, rather than the source code itself, helped them to achieve this goal, as well as protecting their code and their ideas from their competitors. Although the Homebrew user groups of the 1970s exchanged code during their meetings, the operating systems for hobbyist machines (such as CPM) were proprietary. By 1980, proprietary software was the usual case.

      This passage explains how computer and software companies began restricting software use to authorized users and paying customers. By distributing only compiled binaries instead of source code, companies protected their intellectual property and ideas. While early hobbyist groups shared code freely, operating systems like CPM were proprietary, and by 1980, proprietary software had become the norm.

    44. In the early days of modern computing (that is, the 1950s), software generally came with source code. The original hackers (computer enthusiasts) at MIT's Tech Model Railroad Club left their programs in drawers for others to work on. “Homebrew” user groups exchanged code during their meetings. Company-specific user groups, such as Digital Equipment Corporation's DECUS, accepted contributions of source-code programs, collected them onto tapes, and distributed the tapes to interested members. In 1970, Digital's operating systems were distributed as source code with no restrictions or copyright notice.

      This passage describes the early history of software distribution in the 1950s through the 1970s. Software often came with its source code, and communities of enthusiasts, such as the MIT hackers, Homebrew groups, and company user groups like DECUS, shared, modified, and distributed programs freely. Digital Equipment Corporation even distributed its operating systems as unrestricted source code, highlighting the collaborative culture of early computing.

    45. There are many benefits to open-source operating systems, including a community of interested (and usually unpaid) programmers who contribute to the code by helping to write it, debug it, analyze it, provide support, and suggest changes. Arguably, open-source code is more secure than closed-source code because many more eyes are viewing the code. Certainly, open-source code has bugs, but open-source advocates argue that bugs tend to be found and fixed faster owing to the number of people using and viewing the code.

      This passage highlights the benefits of open-source operating systems. A community of programmers contributes by writing, debugging, analyzing, and improving the code. Open-source code can be more secure and reliable than closed-source software because more people examine it, helping to identify and fix bugs more quickly.

    46. Starting with the source code allows the programmer to produce binary code that can be executed on a system. Doing the opposite—reverse engineering the source code from the binaries—is quite a lot of work, and useful items such as comments are never recovered. Learning operating systems by examining the source code has other benefits as well. With the source code in hand, a student can modify the operating system and then compile and run the code to try out those changes, which is an excellent learning tool.

      This passage explains the advantages of studying operating systems using source code. Starting from the source allows programmers to compile executable binaries directly, whereas reverse-engineering binaries is difficult and loses valuable information like comments. Access to source code also lets students modify, compile, and test the OS, providing a hands-on learning experience.

    47. The study of operating systems has been made easier by the availability of a vast number of free software and open-source releases. Both free operating systems and open-source operating systems are available in source-code format rather than as compiled binary code. Note, though, that free software and open-source software are two different ideas championed by different groups of people (see http://gnu.org/philosophy/open-source-misses-the-point.html for a discussion on the topic).

      This passage highlights how the study of operating systems has been made easier by free and open-source software, which is available in source-code form. While both provide access to the code, free software and open-source software are distinct concepts promoted by different communities.

    48. A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. A real-time system functions correctly only if it returns the correct result within its time constraints. Contrast this system with a traditional laptop system where it is desirable (but not mandatory) to respond quickly.

      This passage explains that real-time systems have strict, well-defined timing requirements. The system must process data and respond within set time constraints, or it fails, unlike traditional computers, where fast responses are desirable but not critical. For example, a robot arm must stop in time to avoid damage, illustrating the importance of timing in real-time systems.

    49. Embedded systems almost always run real-time operating systems. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs.

      This passage explains that embedded systems typically run real-time operating systems (RTOSs). An RTOS is used when strict timing is required for processing or data flow, such as in control applications. Sensors provide data, and the system must quickly analyze it and adjust controls as needed.

    50. The use of embedded systems continues to expand. The power of these devices, both as standalone units and as elements of networks and the web, is sure to increase as well. Even now, entire houses can be computerized, so that a central computer—either a general-purpose computer or an embedded system—can control heating and lighting, alarm systems, and even coffee makers. Web access can enable a home owner to tell the house to heat up before she arrives home. Someday, the refrigerator will be able to notify the grocery store when it notices the milk is gone.

      This passage highlights the growing use and potential of embedded systems. They are increasingly powerful, both as standalone devices and as networked components. Examples include smart homes, where a central computer can control heating, lighting, alarms, and appliances, and future possibilities like refrigerators that automatically notify stores when supplies run out.

    51. These embedded systems vary considerably. Some are general-purpose computers, running standard operating systems—such as Linux—with special-purpose applications to implement the functionality. Others are hardware devices with a special-purpose embedded operating system providing just the functionality desired

      This passage notes that embedded systems vary widely. Some are general-purpose computers running standard OSs like Linux with specialized applications, while others use a dedicated embedded operating system that provides only the specific functionality required for the device.

    52. Embedded computers are the most prevalent form of computers in existence. These devices are found everywhere, from car engines and manufacturing robots to optical drives and microwave ovens. They tend to have very specific tasks. The systems they run on are usually primitive, and so the operating systems provide limited features.

      This passage explains that embedded computers are the most common type of computer, found in devices like car engines, robots, and household appliances. They are designed for specific tasks, and their operating systems are typically simple, offering only essential features.

    53. Certainly, there are traditional operating systems within many of the types of cloud infrastructure. Beyond those are the VMMs that manage the virtual machines in which the user processes run. At a higher level, the VMMs themselves are managed by cloud management tools, such as VMware vCloud Director and the open-source Eucalyptus toolset. These tools manage the resources within a given cloud and provide interfaces to the cloud components, making a good argument for considering them a new type of operating system.

      Cloud infrastructure uses traditional OSs and virtual machine monitors (VMMs) to manage virtual machines. Tools like VMware vCloud Director and Eucalyptus manage the VMMs and provide interfaces to the cloud components, acting as a higher-level OS for cloud environments.

    54. Cloud computing is a type of computing that delivers computing, storage, and even applications as a service across a network. In some ways, it's a logical extension of virtualization, because it uses virtualization as a base for its functionality. For example, the Amazon Elastic Compute Cloud (ec2) facility has thousands of servers, millions of virtual machines, and petabytes of storage available for use by anyone on the Internet.

      This passage explains how cloud computing delivers computing power, storage, and applications as services over a network. It builds on virtualization, allowing resources to be shared efficiently. For example, Amazon EC2 provides millions of virtual machines and massive storage that users can access over the Internet.

    55. Skype is another example of peer-to-peer computing. It allows clients to make voice calls and video calls and to send text messages over the Internet using a technology known as voice over IP (VoIP). Skype uses a hybrid peer-to-peer approach

      This passage describes Skype as an example of peer-to-peer (P2P) computing. It enables voice and video calls, as well as text messaging, over the Internet using voice-over-IP (VoIP) technology. Skype employs a hybrid P2P approach, combining direct peer connections with centralized services for tasks like user authentication.

    56. Peer-to-peer networks gained widespread popularity in the late 1990s with several file-sharing services, such as Napster and Gnutella, that enabled peers to exchange files with one another. The Napster system used an approach similar to the first type described above: a centralized server maintained an index of all files stored on peer nodes in the Napster network, and the actual exchange of files took place between the peer nodes

      This passage describes how the peer-to-peer (P2P) networks became popular in the late 1990s through file-sharing services like Napster and Gnutella. Napster used a hybrid approach: a central server kept an index of files, while the actual file transfers occurred directly between peers, combining centralized indexing with distributed file sharing.

    57. Another structure for a distributed system is the peer-to-peer (P2P) system model. In this model, clients and servers are not distinguished from one another. Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over traditional client–server systems. In a client–server system, the server is a bottleneck; but in a peer-to-peer system, services can be provided by several nodes distributed throughout the network.

      This passage explains the peer-to-peer (P2P) model of distributed systems, where all nodes are equal and can act as either client or server. Unlike traditional client–server systems, which can have a server bottleneck, P2P systems distribute services across multiple nodes, improving scalability and reducing single points of failure.

    58. Two operating systems currently dominate mobile computing: Apple iOS and Google Android. iOS was designed to run on Apple iPhone and iPad mobile devices. Android powers smartphones and tablet computers available from many manufacturers. We examine these two mobile operating systems in further detail in Chapter 2.

      This passage notes that the mobile computing market is dominated by two operating systems: Apple iOS, which runs on iPhones and iPads, and Google Android, which powers devices from multiple manufacturers. The text indicates that these two OSs will be explored in more detail in Chapter 2.

    59. To provide access to on-line services, mobile devices typically use either IEEE standard 802.11 wireless or cellular data networks. The memory capacity and processing speed of mobile devices, however, are more limited than those of PCs. Whereas a smartphone or tablet may have 256 GB in storage, it is not uncommon to find 8 TB in storage on a desktop computer. Similarly, because power consumption is such a concern, mobile devices often use processors that are smaller, are slower, and offer fewer processing cores than processors found on traditional desktop and laptop computers.

      This passage explains how mobile devices connect to online services through Wi-Fi (IEEE 802.11) or cellular networks. However, they have limitations compared with PCs: less storage and smaller, slower processors with fewer cores, mainly to conserve power. For example, a smartphone might have 256 GB of storage, while a desktop could have 8 TB.

    60. Today, mobile systems are used not only for e-mail and web browsing but also for playing music and video, reading digital books, taking photos, and recording and editing high-definition video. Accordingly, tremendous growth continues in the wide range of applications that run on such devices. Many developers are now designing applications that take advantage of the unique features of mobile devices, such as global positioning system (GPS) chips, accelerometers, and gyroscopes. An embedded GPS chip allows a mobile device to use satellites to determine its precise location on Earth.

      This passage highlights the expanding capabilities of mobile devices beyond basic tasks like email and web browsing. Modern devices handle media playback, digital books, photography, and high-definition video editing. Developers are creating applications that leverage built-in features like GPS, accelerometers, and gyroscopes, enabling location-based services and motion-sensing functionality.

    61. Mobile computing refers to computing on handheld smartphones and tablet computers. These devices share the distinguishing physical features of being portable and lightweight. Historically, compared with desktop and laptop computers, mobile systems gave up screen size, memory capacity, and overall functionality in return for handheld mobile access to services such as e-mail and web browsing. Over the past few years, however, features on mobile devices have become so rich that the distinction in functionality between, say, a consumer laptop and a tablet computer may be difficult to discern. In fact, we might argue that the features of a contemporary mobile device allow it to provide functionality that is either unavailable or impractical on a desktop or laptop computer.

      This passage explains that mobile computing involves handheld devices like smartphones and tablets, which are portable and lightweight. While early mobile devices sacrificed screen size, memory, and functionality, modern devices offer features comparable to, or even exceeding, those of desktops and laptops, making them highly capable for tasks like web browsing, email, and other services.

    62. Traditional time-sharing systems are rare today. The same scheduling technique is still in use on desktop computers, laptops, servers, and even mobile computers, but frequently all the processes are owned by the same user (or a single user and the operating system). User processes, and system processes that provide services to the user, are managed so that each frequently gets a slice of computer time. Consider the windows created while a user is working on a PC, for example, and the fact that they may be performing different tasks at the same time. Even a web browser can be composed of multiple processes, one for each website currently being visited, with time sharing applied to each web browser process.

      This text emphasizes that although traditional time-sharing systems are now uncommon, the scheduling technique remains prevalent. Contemporary computers, including desktops, laptops, servers, and mobile devices, use time sharing to manage the many user and system processes. For instance, a PC can manage several windows at once, and a web browser can run a separate process for each site being visited, with each process receiving slices of CPU time.

    63. In the latter half of the 20th century, computing resources were relatively scarce. (Before that, they were nonexistent!) For a period of time, systems were either batch or interactive. Batch systems processed jobs in bulk, with predetermined input from files or other data sources. Interactive systems waited for input from users. To optimize the use of the computing resources, multiple users shared time on these systems. These time-sharing systems used a timer and scheduling algorithms to cycle processes rapidly through the CPU, giving each user a share of the resources.

      This passage explains how computing evolved when resources were limited. Early systems were either batch (processing jobs in bulk) or interactive (waiting for user input). Time-sharing systems were introduced to optimize resource use, allowing multiple users to share CPU time through timers and scheduling algorithms.

    64. At home, most users once had a single computer with a slow modem connection to the office, the Internet, or both. Today, network-connection speeds once available only at great cost are relatively inexpensive in many places, giving home users more access to more data. These fast data connections are allowing home computers to serve up web pages and to run networks that include printers, client PCs, and servers. Many homes use firewalls to protect their networks from security breaches. Firewalls limit the communications between devices on a network.

      This passage describes how the home computing has evolved with faster and more affordable network connections. Modern home networks can include multiple devices like PCs, printers, and servers, and can even serve web pages. Firewalls are commonly used to protect these networks by controlling and limiting communications between devices.

    65. Today, web technologies and increasing WAN bandwidth are stretching the boundaries of traditional computing. Companies establish portals, which provide web accessibility to their internal servers. Network computers (or thin clients)—which are essentially terminals that understand web-based computing—are used in place of traditional workstations where more security or easier maintenance is desired. Mobile computers can synchronize with PCs to allow very portable use of company information. Mobile devices can also connect to wireless networks and cellular data networks to use the company's web portal (as well as the myriad other web resources).

      This passage explains how modern web technologies and faster WAN connections have expanded traditional computing. Companies now use web portals for internal access, thin clients for secure and easy-to-maintain workstations, and mobile devices that sync with PCs or connect via wireless/cellular networks, enabling flexible and portable access to company resources.

    66. As computing has matured, the lines separating many of the traditional computing environments have blurred. Consider the “typical office environment.” Just a few years ago, this environment consisted of PCs connected to a network, with servers providing file and print services. Remote access was awkward, and portability was achieved by use of laptop computers.

      This passage describes how the traditional computing environments, like the typical office setup, have evolved. Previously, offices had PCs connected to servers for file and print services, with limited remote access and portability mostly relying on laptops. It highlights how computing has become more flexible and interconnected over time.

    67. The power of bitmaps becomes apparent when we consider their space efficiency. If we were to use an eight-bit Boolean value instead of a single bit, the resulting data structure would be eight times larger. Thus, bitmaps are commonly used when there is a need to represent the availability of a large number of resources. Disk drives provide a nice illustration. A medium-sized disk drive might be divided into several thousand individual units, called disk blocks. A bitmap can be used to indicate the availability of each disk block.

      This passage highlights the space efficiency of bitmaps. Using a single bit per item instead of a larger data type drastically reduces memory usage, making bitmaps ideal for tracking large numbers of resources. For example, disk drives use bitmaps to indicate which disk blocks are available or in use.
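
      A minimal C sketch of the idea, assuming a hypothetical drive with 4,096 blocks tracked by one bit each (so the whole map fits in 512 bytes); the block numbers used are illustrative.

      /* Minimal sketch of a bitmap tracking disk-block availability, one bit
       * per block (0 = free, 1 = in use). The block count and block numbers
       * are illustrative. */
      #include <stdio.h>
      #include <string.h>

      #define NUM_BLOCKS 4096

      static unsigned char bitmap[NUM_BLOCKS / 8];    /* 4096 bits in 512 bytes */

      static void set_used(int block) { bitmap[block / 8] |=  (1u << (block % 8)); }
      static void set_free(int block) { bitmap[block / 8] &= ~(1u << (block % 8)); }
      static int  is_used(int block)  { return (bitmap[block / 8] >> (block % 8)) & 1; }

      int main(void) {
          memset(bitmap, 0, sizeof bitmap);           /* all blocks start out free */
          set_used(42);
          printf("block 42 is %s\n", is_used(42) ? "in use" : "free");
          set_free(42);
          printf("block 42 is %s\n", is_used(42) ? "in use" : "free");
          return 0;
      }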

    68. A bitmap is a string of n binary digits that can be used to represent the status of n items. For example, suppose we have several resources, and the availability of each resource is indicated by the value of a binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice versa). The value of the ith position in the bitmap is associated with the ith resource.

      This passage explains that a bitmap is a sequence of binary digits used to represent the status of multiple items. Each position in the bitmap corresponds to a specific resource, with a value such as 0 or 1 indicating whether that resource is available or unavailable.

    69. One use of a hash function is to implement a hash map, which associates (or maps) [key:value] pairs using a hash function. Once the mapping is established, we can apply the hash function to the key to obtain the value from the hash map (Figure 1.21). For example, suppose that a user name is mapped to a password. Password authentication then proceeds as follows: a user enters her user name and password. The hash function is applied to the user name, which is then used to retrieve the password. The retrieved password is then compared with the password entered by the user for authentication.

      This text describes how hash functions can be used to build hash maps that hold data as key–value pairs. Applying the hash function to a key lets the system quickly access its corresponding value. In password authentication, the user name is hashed to retrieve the stored password, which is then compared with the user's input to confirm identity.
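
      A minimal C sketch of a hash map from user name to stored password, following the text's simplified authentication scheme. The hash function, table size, and entries are illustrative; collisions are ignored here for simplicity, and real systems store password hashes rather than plaintext.

      /* Minimal sketch of a hash map from user name to stored password.
       * The hash function, table size, and entries are illustrative;
       * collisions are ignored, and real systems would store password
       * hashes rather than plaintext. */
      #include <stdio.h>
      #include <string.h>

      #define TABLE_SIZE 101

      struct entry {
          char user[32];
          char password[32];
          int  used;
      };

      static struct entry table[TABLE_SIZE];

      static unsigned hash(const char *s) {
          unsigned h = 0;
          while (*s)
              h = h * 31 + (unsigned char)*s++;       /* simple polynomial hash */
          return h % TABLE_SIZE;
      }

      static void put(const char *user, const char *password) {
          struct entry *e = &table[hash(user)];
          strcpy(e->user, user);
          strcpy(e->password, password);
          e->used = 1;
      }

      static const char *get(const char *user) {      /* NULL if no such user */
          struct entry *e = &table[hash(user)];
          return (e->used && strcmp(e->user, user) == 0) ? e->password : NULL;
      }

      int main(void) {
          put("alice", "s3cret");
          const char *stored = get("alice");
          const char *typed  = "s3cret";              /* what the user entered */
          puts(stored && strcmp(stored, typed) == 0 ? "authenticated" : "rejected");
          return 0;
      }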

    70. One potential difficulty with hash functions is that two unique inputs can result in the same output value—that is, they can link to the same table location. We can accommodate this hash collision by having a linked list at the table location that contains all of the items with the same hash value. Of course, the more collisions there are, the less efficient the hash function is.

      This passage highlights a limitation of hash functions: different inputs can produce the same output, causing a hash collision. To handle this, a linked list at each table location can store all the items that share the same hash value. However, frequent collisions reduce the efficiency of the hash function, making retrieval slower.
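
      A minimal C sketch of this chaining technique, assuming a deliberately small table so collisions are likely; the keys and values are illustrative.

      /* Sketch of collision handling by chaining: each table slot holds a
       * linked list of every item whose key hashes to that slot. The table
       * size and keys are illustrative. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define TABLE_SIZE 8

      struct node {
          char key[32];
          int  value;
          struct node *next;
      };

      static struct node *buckets[TABLE_SIZE];

      static unsigned hash(const char *s) {
          unsigned h = 0;
          while (*s)
              h = h * 31 + (unsigned char)*s++;
          return h % TABLE_SIZE;
      }

      static void insert(const char *key, int value) {
          unsigned i = hash(key);
          struct node *n = malloc(sizeof *n);
          strcpy(n->key, key);
          n->value = value;
          n->next = buckets[i];                       /* prepend to the slot's chain */
          buckets[i] = n;
      }

      static int lookup(const char *key, int *value) {
          for (struct node *n = buckets[hash(key)]; n != NULL; n = n->next)
              if (strcmp(n->key, key) == 0) {
                  *value = n->value;
                  return 1;
              }
          return 0;                                   /* not found */
      }

      int main(void) {
          insert("disk0", 100);
          insert("disk1", 200);                       /* may share a bucket with "disk0" */
          int v;
          if (lookup("disk1", &v))
              printf("disk1 -> %d\n", v);
          return 0;
      }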

    71. A hash function takes data as its input, performs a numeric operation on the data, and returns a numeric value. This numeric value can then be used as an index into a table (typically an array) to quickly retrieve the data. Whereas searching for a data item through a list of size n can require up to O(n) comparisons, using a hash function for retrieving data from a table can be as good as O(1), depending on implementation details. Because of this performance, hash functions are used extensively in operating systems.

      This passage explains that a hash function converts data into a numeric value, which can be used as an index to quickly access data in a table. Unlike searching a list, which can take O(n) time, a hash table can often retrieve data in O(1) time. This efficiency is why operating systems frequently use hash functions for tasks like indexing and quick lookups.

    72. A tree is a data structure that can be used to represent data hierarchically. Data values in a tree structure are linked through parent–child relationships. In a general tree, a parent may have an unlimited number of children. In a binary tree, a parent may have at most two children, which we term the left child and the right child. A binary search tree additionally requires an ordering between the parent's two children in which left_child <= right_child. Figure 1.20 provides an example of a binary search tree. When we search for an item in a binary search tree, the worst-case performance is O(n) (consider how this can occur). To remedy this situation, we can use an algorithm to create a balanced binary search tree. Here, a tree containing n items has at most lg n levels, thus ensuring worst-case performance of O(lg n). We shall see in Section 5.7.1 that Linux uses a balanced binary search tree (known as a red-black tree) as part its CPU-scheduling algorithm.

      This passage explains that a tree is a hierarchical data structure built from parent–child relationships. Binary trees limit a parent to two children, and binary search trees (BSTs) impose an ordering for efficient searching. In the worst case a BST has O(n) search time, but balancing the tree reduces this to O(lg n). Linux uses a balanced tree, the red-black tree, as part of its CPU-scheduling algorithm to keep performance predictable.
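
      A minimal C sketch of a binary search tree with insertion and lookup; the values are illustrative, and balancing (as a red-black tree would do) is not shown.

      /* Minimal sketch of a binary search tree with insertion and lookup.
       * Values are illustrative; balancing is not shown. */
      #include <stdio.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *left, *right;
      };

      static struct node *insert(struct node *root, int value) {
          if (root == NULL) {
              struct node *n = malloc(sizeof *n);
              n->value = value;
              n->left = n->right = NULL;
              return n;
          }
          if (value <= root->value)
              root->left = insert(root->left, value);     /* smaller or equal keys go left */
          else
              root->right = insert(root->right, value);   /* larger keys go right */
          return root;
      }

      static int contains(const struct node *root, int value) {
          while (root != NULL) {
              if (value == root->value)
                  return 1;
              root = (value < root->value) ? root->left : root->right;
          }
          return 0;
      }

      int main(void) {
          struct node *root = NULL;
          int values[] = { 17, 5, 42, 12, 29 };
          for (int i = 0; i < 5; i++)
              root = insert(root, values[i]);
          printf("contains 12? %s\n", contains(root, 12) ? "yes" : "no");
          printf("contains 99? %s\n", contains(root, 99) ? "yes" : "no");
          return 0;
      }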

    73. A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO) principle: items are removed from a queue in the order in which they were inserted. There are many everyday examples of queues, including shoppers waiting in a checkout line at a store and cars waiting in line at a traffic signal. Queues are also quite common in operating systems—jobs that are sent to a printer are typically printed in the order in which they were submitted, for example. As we shall see in Chapter 5, tasks that are waiting to be run on an available CPU are often organized in queues.

      What is a queue, and how does the first-in, first-out (FIFO) principle work? Give some examples of how queues are used both in everyday life and in operating systems.
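
      A minimal C sketch of a FIFO queue built as a linked list with head and tail pointers, loosely in the spirit of a ready queue of tasks; the task IDs are illustrative.

      /* Minimal sketch of a FIFO queue: items are removed in the order in
       * which they were inserted. Task IDs are illustrative. */
      #include <stdio.h>
      #include <stdlib.h>

      struct task {
          int id;
          struct task *next;
      };

      static struct task *head = NULL, *tail = NULL;

      static void enqueue(int id) {
          struct task *t = malloc(sizeof *t);
          t->id = id;
          t->next = NULL;
          if (tail)
              tail->next = t;                         /* append at the tail */
          else
              head = t;                               /* queue was empty */
          tail = t;
      }

      static int dequeue(int *id) {
          if (head == NULL)
              return 0;                               /* queue is empty */
          struct task *t = head;
          *id = t->id;
          head = t->next;
          if (head == NULL)
              tail = NULL;
          free(t);
          return 1;
      }

      int main(void) {
          enqueue(1);
          enqueue(2);
          enqueue(3);
          int id;
          while (dequeue(&id))
              printf("running task %d\n", id);        /* prints 1, 2, 3: FIFO order */
          return 0;
      }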

    74. A stack is a sequentially ordered data structure that uses the last in, first out (LIFO) principle for adding and removing items, meaning that the last item placed onto a stack is the first item removed. The operations for inserting and removing items from a stack are known as push and pop, respectively. An operating system often uses a stack when invoking function calls. Parameters, local variables, and the return address are pushed onto the stack when a function is called; returning from the function call pops those items off the stack.

      What is a stack, and how does the last-in, first-out (LIFO) principle determine the push and pop operations? Additionally, how does an operating system use a stack during function calls?
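
      A minimal C sketch of a LIFO stack with push and pop operations, analogous to how a call stack holds parameters, locals, and return addresses; the capacity and values are illustrative.

      /* Minimal sketch of a LIFO stack: the last item pushed is the first
       * item popped. Capacity and values are illustrative. */
      #include <stdio.h>

      #define STACK_MAX 64

      static int stack[STACK_MAX];
      static int top = 0;                             /* index of the next free slot */

      static int push(int value) {
          if (top == STACK_MAX)
              return 0;                               /* stack is full */
          stack[top++] = value;
          return 1;
      }

      static int pop(int *value) {
          if (top == 0)
              return 0;                               /* stack is empty */
          *value = stack[--top];
          return 1;
      }

      int main(void) {
          push(10);
          push(20);
          push(30);
          int v;
          while (pop(&v))
              printf("%d\n", v);                      /* prints 30, 20, 10: last in, first out */
          return 0;
      }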

    75. Linked lists accommodate items of varying sizes and allow easy insertion and deletion of items. One potential disadvantage of using a list is that performance for retrieving a specified item in a list of size n is linear—O(n), as it requires potentially traversing all n elements in the worst case. Lists are sometimes used directly by kernel algorithms. Frequently, though, they are used for constructing more powerful data structures, such as stacks and queues.

      This passage highlights that linked lists are flexible, supporting variable-sized items and easy insertion and deletion. However, searching for a specific element can be slow (O(n)). Lists are often used directly in kernel algorithms or as building blocks for other structures such as stacks and queues.
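
      A minimal C sketch of a singly linked list with insertion at the head, deletion by value, and the O(n) traversal the passage mentions; the values are illustrative.

      /* Minimal sketch of a singly linked list with insertion, deletion by
       * value, and linear traversal. Values are illustrative. */
      #include <stdio.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *next;
      };

      static struct node *push_front(struct node *head, int value) {
          struct node *n = malloc(sizeof *n);
          n->value = value;
          n->next = head;                             /* new node becomes the head */
          return n;
      }

      static struct node *delete_value(struct node *head, int value) {
          struct node **pp = &head;
          while (*pp != NULL) {
              if ((*pp)->value == value) {
                  struct node *dead = *pp;
                  *pp = dead->next;                   /* unlink the node */
                  free(dead);
                  break;
              }
              pp = &(*pp)->next;
          }
          return head;
      }

      int main(void) {
          struct node *head = NULL;
          head = push_front(head, 3);
          head = push_front(head, 2);
          head = push_front(head, 1);
          head = delete_value(head, 2);
          for (struct node *n = head; n != NULL; n = n->next)
              printf("%d ", n->value);                /* O(n) traversal: prints 1 3 */
          printf("\n");
          return 0;
      }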

    76. After arrays, lists are perhaps the most fundamental data structures in computer science. Whereas each item in an array can be accessed directly, the items in a list must be accessed in a particular order. That is, a list represents a collection of data values as a sequence. The most common method for implementing this structure is a linked list, in which items are linked to one another

      This section explains that lists are a fundamental data structure whose elements are accessed sequentially rather than directly. Linked lists are a common implementation, connecting each item to the next, which allows more flexible insertion and removal of elements than arrays.

    77. An array is a simple data structure in which each element can be accessed directly. For example, main memory is constructed as an array. If the data item being stored is larger than one byte, then multiple bytes can be allocated to the item, and the item is addressed as “item number × item size.” But what about storing an item whose size may vary? And what about removing an item if the relative positions of the remaining items must be preserved? In such situations, arrays give way to other data structures.

      This passage describes arrays as a basic data structure that allows direct access to elements, making them simple and efficient for fixed-size items. However, arrays have limitations when storing variable-sized data or when items must be removed while preserving the relative positions of the rest, which is why more flexible data structures (such as linked lists) are used in those cases.
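
      A tiny C sketch of direct array indexing, showing that element i lives at the base address plus i times the element size; the values are illustrative.

      /* Tiny sketch of direct array indexing: element i lives at the base
       * address plus i times the element size, which is what makes array
       * access O(1). Values are illustrative. */
      #include <stdio.h>

      int main(void) {
          int items[4] = { 10, 20, 30, 40 };

          /* &items[2] is (char *)items + 2 * sizeof(int) */
          printf("items[2] = %d\n", items[2]);
          printf("byte offset of items[2] = %zu\n",
                 (size_t)((char *)&items[2] - (char *)items));
          return 0;
      }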

    78. Some operating systems have taken the concept of networks and distributed systems further than the notion of providing network connectivity. A network operating system is an operating system that provides features such as file sharing across the network, along with a communication scheme that allows different processes on different computers to exchange messages. A computer running a network operating system acts autonomously from all other computers on the network

      This section explains that a network operating system goes beyond basic connectivity by enabling features like file sharing and inter-process communication across machines. Each computer still runs independently, but the OS provides tools to make collaboration and resource sharing possible across the network.

    79. The media to carry networks are equally varied. They include copper wires, fiber strands, and wireless transmissions between satellites, microwave dishes, and radios. When computing devices are connected to cellular phones, they create a network. Even very short-range infrared communication can be used for networking. At a rudimentary level, whenever computers communicate, they use or create a network. These networks also vary in their performance and reliability.

      This passage highlights the many types of transmission media used in networking, from traditional copper wires to advanced fiber optics and wireless methods like satellite or cellular. It shows that networks can exist at any scale—even short-range infrared—and that their performance and reliability depend on the medium used.

    80. Networks are characterized based on the distances between their nodes. A local-area network (LAN) connects computers within a room, a building, or a campus. A wide-area network (WAN) usually links buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide, for example. These networks may run one protocol or several protocols

      This section explains that networks are classified by distance. LANs cover small areas like buildings or campuses, while WANs span larger regions such as cities or even countries. Companies often use WANs to connect global offices, and these networks may rely on one or multiple communication protocols.

    81. A network, in the simplest terms, is a communication path between two or more systems. Distributed systems depend on networking for their functionality. Networks vary by the protocols used, the distances between nodes, and the transport media. TCP/IP is the most common network protocol, and it provides the fundamental architecture of the Internet. Most operating systems support TCP/IP, including all general-purpose ones

      This passage emphasizes that networks are the backbone of distributed systems, enabling communication between computers. It highlights that networks differ in protocol, distance, and media, but TCP/IP has become the universal standard—forming the foundation of the Internet and being supported by nearly all major operating systems.

    82. A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide users with access to the various resources that the system maintains. Access to a shared resource increases computation speed, functionality, data availability, and reliability. Some operating systems generalize network access as a form of file access, with the details of networking contained in the network interface's device driver. Others make users specifically invoke network functions.

      This section defines distributed systems as independent computers working together through a network. The benefit is that resources can be shared, improving speed, functionality, and reliability. It also notes that operating systems handle networking differently: some make it seamless by treating network access like file access, while others require users to invoke network functions explicitly.

    83. Within data centers, virtualization has become a common method of executing and managing computing environments. VMMs like VMware ESX and Citrix XenServer no longer run on host operating systems but rather are the host operating systems, providing services and resource management to virtual machine processes.

      This passage explains that in data centers, virtualization is not just an add-on but the foundation of the system. Modern Virtual Machine Monitors (like VMware ESX or Citrix XenServer) act like an actual operating system, directly managing hardware resources and running virtual machines. This shows how central virtualization has become in enterprise environments.

    84. Virtualization allows operating systems to run as applications within other operating systems. At first blush, there seems to be little reason for such functionality. But the virtualization industry is vast and growing, which is a testament to its utility and importance.

      This section points out that virtualization might seem unnecessary at first because operating systems already manage multiple applications. However, its growth shows how valuable it really is—virtualization enables flexibility, testing, security isolation, and efficient use of hardware, which explains why the industry keeps expanding.

    85. Even though modern operating systems are fully capable of running multiple applications reliably, the use of virtualization continues to grow. On laptops and desktops, a VMM allows the user to install multiple operating systems for exploration or to run applications written for operating systems other than the native host

      This passage explains why virtualization is still widely used even though modern operating systems can multitask. A virtual machine monitor (VMM) lets users run different operating systems on the same hardware, which is useful for experimenting, testing software, or running programs that are not compatible with the host system.

    1. you will not benefit fully from this class.

      again, this defeats the purpose of paying for education. if you are going to rely on AI rather than prioritizing learning, what is the point of school and learning environments?

    2. drought and global warming

      many who consider themselves to be environmental advocates (knowingly and unknowingly) partake in harmful activities in the name of convenience

    3. You can and should be building knowledge, thinking, and reasoning

      analytical skills are crucial outside the school environment, and should be worked on while at school

    1. You observed that for ambiguous cases or high-levels of missing data, the model tended to predict the PUR population, suggesting it acts as a "default". Since PUR is an admixed population, does this imply the model learns that a state of high uncertainty or mixed/missing signals is most characteristic of admixed genomes in the training set? Could this "default" behavior be mitigated by training with a null or "uncertain" class?

    1. most K–12 teachers and higher education instructors receive more training in their content area than on the processes of teaching and learning

      I wonder how much that has shifted in recent years? Ask anyone in education for a number of years and they will tell you that it's much different in terms of classroom management or attention span than it was 30 years ago.

    1. In capitalist countries, the changes are made by writers and moviemakers not so much for ideological reasons as for financial ones.

      Although I do agree that profit has become a major motivation in modern retellings of stories, I still think the changes made by creators have just as much ideological weight as financial weight. Writers and filmmakers in capitalist societies may not set out to push ideology, but they are inevitably influenced by capitalist logic. Especially when working within major corporate institutions like Disney or Pixar, creatives are constantly surrounded by capitalistic values such as competition and upward mobility that they internalize to a certain extent and (consciously or subconsciously) reproduce in stories. Disney's Cinderella, for instance, promotes consumerist values through the use of the magical dress, carriage, and palace that reinforce the idea that wealth and beauty equal happiness. - Janu Kandalu, German 2254.02

    1. The physically unequal mother in all cultures typically breast-feeds andprotects, rather than bullies or browbeats, the vulnerable infant and child. The powerfulmother nurtures so as to give life and create growth in the weak. She does not impose so asto inscribe her will

      babies are vulnerable??? there are many ways they use that nurturing to control the child. Motherhood isn't all just nurturing.

    2. Girls and women saw the world as made up not of separated, self-seeking individuals, but of interrelationships, connections webbing everyone together in communities of concern; they made moral decisions not through abstract reasoning from rules but by balancing the infinitesimal and acute needs of everybody concerned (25-63)

      Well yes! that is how things should be, no? Please don't indoctrinate us with American individualism...

    1. An emergency need arose for someone to write 300 words o

      something that i'm thinking about is a parallel between restaurants -- so "junk words" and "fine dining words". Chat/LLM speaks to me as fluff, filler, words that come out just for convenience. Shakespeare, Mary Oliver, etc are the words that pack a punch. And that makes me think about art's irreplaceability - once someone creates authentic art, it loses its value if it's replicated, even down to the specifics.

    2. at's because the appetite for "content" is at least as much about creating new targets for advertising revenue as it is actual sustenance for human audience

      Valid, i've seen that content creators are gravitating more towards their clever ability to promote to an audience

  5. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. “Macroeconomics and Misgivings” argues that it is a misconception, albeit one that is well entrenched in the minds of both professional economists and the general public, to think of the economy as an engine with spending as its gas pedal.

      “misconception…to think of the economy as an engine with spending as its gas pedal.” This emphasizes the author’s critique of oversimplified macroeconomic models that treat the economy like a machine, ignoring complexity and human behavior.

    2. “Finance and Fluctuations” deals with the misconceptions about finance that are common among economists, who often fail to appreciate the process of financial intermediation. This section looks at the special role played by financial intermediaries in enabling specialization. Intermediation is particularly dependent on trust, and as that trust ebbs and flows, the financial sector can amplify fluctuations in the economy’s ability to create patterns of sustainable specialization and trade.

      “financial intermediaries…enable specialization” and “as that trust ebbs and flows, the financial sector can amplify fluctuations.” This shows the author’s point that finance is crucial for specialization but is sensitive to trust, which can magnify economic ups and downs.

    3. “Specialization and Sustainability” exposes the misconception that we must undertake extraordinary efforts in order to conserve specific resources. This section explains how the price system guides the economy toward sustainable use of resources. In contrast, individuals who attempt to override the price system through their individual choices or by imposing government regulations can easily miscalculate the costs of their actions.

      “the price system guides the economy toward sustainable use of resources” and “individuals who attempt to override the price system…can easily miscalculate the costs.” This emphasizes that the author argues the price system naturally encourages sustainability, while personal or government interference can backfire.

    4. “Machine as Metaphor” attacks the misconception held by many economists and embodied in many textbooks that the economy can be analyzed like a machine. This section looks at a widely used but misguided approach to economic analysis, treating it as if it were engineering. The economic engineers are stuck in a mindset that grew out of the Second World War, a conflict that was dominated by airplanes, tanks, and other machines. Their approach fails to take account of the many nonmechanistic aspects of the economy.

      “attacks the misconception…that the economy can be analyzed like a machine” and “fails to take account of the many nonmechanistic aspects of the economy.” This shows the author’s critique of treating economics purely like engineering, emphasizing that human behavior and social factors make the economy more complex than a machine.

    5. He knows that his breakfast depends upon workers on the coffee plantations of Brazil, the citrus groves of Florida, the sugar fields of Cuba, the wheat farms of the Dakotas, the dairies of New York; that it has been assembled by ships, railroads, and trucks, has been cooked with coal from Pennsylvania in utensils made of aluminum, china, steel, and glass.

      “He knows that his breakfast depends upon workers on the coffee plantations…utensils made of aluminum, china, steel, and glass.” This emphasizes the global interconnection of labor and resources—showing how everyday items rely on a complex, international network of production and trade.

    6. How much commerce and navigation in particular, how many ship-builders, sailors, sail-makers, rope-makers, must have been employed in order to bring together the different drugs made use of by the dyer, which often come from the remotest corners of the world! What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen!

      “how many ship-builders, sailors, sail-makers, rope-makers, must have been employed…What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen!” Note that this illustrates the vast network of specialized labor required even for basic production, showing the complexity and interdependence of economies.

    7. The woollen coat, for example, which covers the day-labourer, as coarse and rough as it may appear, is the produce of the joint labour of a great multitude of workmen.

      “the produce of the joint labour of a great multitude of workmen.” Note that even simple goods rely on the coordinated work of many people, emphasizing the importance of specialization and trade in everyday life.

    8. The roundabout process (or high capital intensity) creates a gap of time between the initial steps in the production process and the final sale of goods and services. During that time gap, workers involved in the early stages of the production process must receive income before consumers have made purchases. (Think of the producer of farm equipment, which must receive payment from a farmer before the farmer can use the equipment to harvest a crop.) That precondition requires financial intermediation. As the economy becomes more specialized and the production becomes more roundabout, the financial sector takes on more significance.

      “As the economy becomes more specialized and the production becomes more roundabout, the financial sector takes on more significance.” Note that higher capital intensity and longer production processes increase the need for financial systems to support early-stage workers and investments.

    9. The steel must be transported, which may require a railroad or a ship for transportation. And so on. Most of the people whose work enables the farmer to harvest wheat have no idea that they are part of the wheat production process. The Austrian school of economics would describe this multistep production process as very roundabout.

      “Most of the people whose work enables the farmer to harvest wheat have no idea that they are part of the wheat production process.” Note that complex production involves many unseen contributors, illustrating the concept of “roundabout” production in the Austrian school.

    10. Improvements in transportation accompany specialization. The farther that you can cheaply transport goods, the more specialization you will see. Before the advent of the railroad, water transport was relatively efficient, so that specialization tended to be most extensive near good harbors and navigable rivers. Improvements in transportation have connected the world’s regions more closely, promoting greater specialization

      “Improvements in transportation have connected the world’s regions more closely, promoting greater specialization”. Note that better transport enables wider trade networks, which increases economic efficiency and interdependence.

    11. Trade accompanies specialization. The more you specialize, the more you need to trade to obtain what you want. In a society where people specialize, you will find them exchanging goods and services.

      “The more you specialize, the more you need to trade to obtain what you want”. Note that this emphasizes the link between specialization and trade—economic interdependence grows as individuals focus on specific tasks.

    12. If Cheryl’s bank no longer needed a mortgage payment processing system, her value would be reduced. If her bank went completely out of business, her value would be reduced more. If the mortgage servicing industry consolidated, using fewer systems, her value would be reduced more still. And if computers suddenly became much more expensive and banks went back to using mechanical calculators, her value would be reduced still more. That last hypothetical is extreme, but the point is that specialization is subtle, deep, and highly dependent on context.

      “specialization is subtle, deep, and highly dependent on context” and the examples before it. Note that this shows how the value of specialized skills depends on the broader economic and technological environment—changes in industry or technology can increase or decrease the importance of a person’s work.

    13. The machines were made out of materials that had to be mined and transported. That transportation required many other people and machines. The transportation equipment itself had to be manufactured, which required mining and shipping materials to the place where the transportation equipment was manufactured

      “materials that had to be mined and transported” and “transportation equipment itself had to be manufactured”. Note that this emphasizes the interconnectedness of production—how even simple goods rely on a vast network of labor, materials, and technology.

    14. Picture yourself watching news on cable television while eating a bowl of cereal. However, instead of giving you the news, the TV announcer asks you to consider what you would need to do to make your cereal completely from scratch. You would need to grow the cereal grains yourself. If you use tools to harvest the grain, you would have to make those tools yourself

      “what you would need to do to make your cereal completely from scratch” and “If you use tools…you would have to make those tools yourself.” Note that the passage illustrates how modern life relies on complex production processes and specialized skills, showing how dependent we are on the broader economy.

    15. Even more striking is the fact that almost everything you consume is something you could not possibly produce. Your daily life depends on the cooperation of hundreds of millions of other people. Just as it is inconceivable that human society would have evolved to its present state without language, it is inconceivable that we would have gotten to this point without specialization and trade. Moreover, in order for society to progress further, patterns of specialization and trade must continue to evolve.

      “almost everything you consume is something you could not possibly produce” and “human society…without specialization and trade”. Note that the author emphasizes the essential role of cooperation, trade, and specialization in supporting daily life and societal progress.

    16. always asks, “How do you know that?” The MIT approach suppresses that question and instead presumes that economic researchers and policymakers are capable of obtaining knowledge that in reality is beyond their grasp.2 That is particularly the case in the field known as macroeconomics, whose practitioners claim to know how to manage the overall levels of output and employment in the economy.

      “The MIT approach suppresses that question…” and “macroeconomics… claim to know how to manage the overall levels of output and employment”. Note that the author is criticizing the overconfidence of economists, especially in macroeconomics, and how MIT-style training discourages healthy skepticism about what can truly be known or controlled.

    17. Early in 2015, I came across a volume of essays edited by E. Roy Weintraub titled MIT and the Transformation of American Economics.1 After digesting the essays, I thought to myself, “So that’s how it all went wrong.” Let me hasten to mention that my own doctorate in economics, which I obtained in 1980, comes from MIT. Also, the writers of Weintraub’s book are generally laudatory toward MIT and its influence. Yet I have come to believe in the wake of the MIT transformation, which began soon after World War II, that economists have lost the art of critical thinking. The critical thinker

      “I have come to believe… that economists have lost the art of critical thinking.” This emphasizes the author’s critique of modern economics, particularly how MIT’s influence after WWII shifted the field toward less critical, more formulaic thinking, signaling a departure from questioning underlying assumptions.

    18. Increased wealth accompanies specialization. Our ancestors were much less specialized than we are. As recently as the 18th century, many households still sewed their own clothes, built their own homes, and grew much of their own food. As of 1700, nearly everyone in the world lived in economic misery by today’s standards. Even in the United Kingdom, the most advanced economy at the time, the average income per person was only about $2,500 in today’s dollars.4 Today, in the United States, a household would need twice that average income to even reach th

      This passage highlights how specialization has played a role in increasing wealth over time. The example in the article contrasts living standards in the 1700s with living standards in the US today and shows how much the economy has grown. My question is: if specialization has raised wealth and living standards in the modern economy, could there still be downsides for goods and services?

    1. ှ ှ Yှ =ှ Zှ =$ှ =   ှ  ှ IF ?ှ =$ှ IŊ$=I ̈  ှ  ှ Ɨc= ?ှ =$ှ   ှ ೩ ှ ̈# ?ှ   ?ှ ̈ #I  ှ  ှ =#I  ̈  Ĉှ

      It seems that the author classifies merrymaking as forms of play, including masquerades as play. I'm interested to see how other authors write on similar topics as we read more literature and are exposed to more opinions.

    2. [Quoted passage from the source text; characters garbled in extraction]

      Just a thought, but earlier in the foreword, Huizinga mentioned how they had to fill in the gaps of their knowledge themself and that the reader should not expect documentation of every word. I wonder how much of what the author says is in consensus with other historians and how much of what the author says is their own thoughts.

    3. [Quoted passage from the source text; characters garbled in extraction]

      I'm a little confused by the author's definitions. Earlier, the author stated that concepts like justice, good, truth, beauty, and seriousness can be denied while play is undeniable. Now, the author is stating that play can be serious. How can a concept like "seriousness" be denied but also be used to describe an irrefutable concept?

    4. [Quoted passage from the source text; characters garbled in extraction]

      Under these definitions, would a prayer be considered as play? It involves imagination and problem-solving with an end goal to ensure well-being. Where do we draw the line for what is considered play?

    5. [Quoted passage from the source text; characters garbled in extraction]

      I find it interesting how the author situates play as the foundation of civilization. I never considered that play is involved in language. I feel that the author is classifying anything involving imagination or problem-solving as "play" (language, myths, stories, etc). Where is the line drawn for what is play?

    6. [Quoted passage from the source text; characters garbled in extraction]

      This is the author's definition of what play is, but I feel that play does not necessarily have to be social. Sometimes people play games by themselves to pass the time - is that still considered social?

    7. [Quoted passage from the source text; characters garbled in extraction]

      This makes me wonder if play plays a role in forming culture itself. Similar to the chicken or the egg question, does play form culture or does culture form play?

    8. [Quoted passage from the source text; characters garbled in extraction]

      Did not understand what the author was conveying in this sentence. Would love if anyone would like to share their interpretations of this sentence.

    9. [Quoted passage from the source text; characters garbled in extraction]

      I find it interesting how the author asserts play as a universal truth, yet states other similar abstractions can be denied. As of right now, I don't see why play is an abstraction that cannot be denied while the abstraction of truth or seriousness can be denied.

    10. [Quoted passage from the source text; characters garbled in extraction]

      I find this statement interesting. Is it possible to measure play using the empirical evidence we usually think of when we think of research? How do historians approach researching abstract concepts such as play? Are there standard procedures to study concepts such as play?

    11. [Quoted passage from the source text; characters garbled in extraction]

      I find this interesting because I also assumed that play is meant to serve some evolutionary function, such as building skills needed to survive in the world. I'm curious to know what everyone's hypotheses are about the function of play. Furthermore, is there a general consensus people have reached about the function of play or is the community divided across theories about the function of play?

    12. [Quoted passage from the source text; characters garbled in extraction]

      This makes me wonder if play is a form of training that socializes a being to survive in the world, similar to how play can be used to educate kids.

    13. [Quoted passage from the source text; characters garbled in extraction]

      I remember in class the question of whether play is strictly limited to people was asked. I wonder what the argument for that would be, because it seems very obvious to me that play is a phenomenon found across species. Adding to that, I wonder what species could arguably not play. For example, do insects play?

    1. Thoughtful questions

      These questions are a very helpful guide to better understanding the text you are reading; as a reader, I love to annotate the books I'm reading, and these questions are usually ones I ask myself when I read.

    1. Strikes challenged American industry throughout the late nineteenth and early twentieth centuries. Workers seeking higher wages, shorter hours, and safer working conditions had struck throughout the antebellum era, but organized unions were fleeting and transitory. The Civil War and Reconstruction seemed to briefly distract the nation from the plight of labor, but the failure of the Great Railroad Strike of 1877 convinced workers of the need to organize. Union memberships began to climb. The Knights of Labor enjoyed considerable success in the early 1880s, due in part to its efforts to unite skilled and unskilled workers. The Knights welcomed all laborers, including women (they only barred lawyers, bankers, and liquor dealers). By 1886, the Knights had over seven hundred thousand members. The Knights envisioned a cooperative producer-centered society that rewarded labor, not capital, but, despite their sweeping vision, the Knights focused on practical gains that could be won through the organization of workers into local unions.

      It's amazing how long the strikes continued. It gives good insight into how long unions have been around.

    2. Skills mattered less and less in an industrialized, mass-producing economy, and their strength as individuals seemed ever smaller and less significant when companies grew in size and power and managers gained wealth and political influence. Long hours, dangerous working conditions, and the difficulty of supporting a family on meager and unpredictable wages compelled workers to organize armies of labor and battle against the power of capital.

      I find it interesting that this is another instance suggesting that history repeats itself. Every time we go through a wave of technological advancement, we render obsolete the workers who mastered the old methods.

    3. “The Depression”. The Panic began with the failure of the largest bank in America, owned by railroad speculator Jay Cooke. The United States government’s decision to stop coining silver dollars in 1873 and return to the gold standard in 1875 exacerbated the financial distress, and lower wages and deflation led to labor disputes like the Great Railroad Strike of 1877.

      Who thought it was a good idea to change the standard on which the whole system was based, when it was already in utter turmoil? Probably not their best idea.

    1. The injured ankle should be positioned and supported in the maximum dorsiflexion allowed by pain and effusion. Maximal dorsiflexion places the joint in its close-packed position or position of greatest congruency, allowing for the least capsular distention and resultant joint effusion. With ankle sprains, this position approximates the torn ligament ends in grade III injuries to reduce the amount of gap scarring and tension in grade I and II injured ligaments.

      place ankle in max DF -- CPP allows for max congruency + approximates ligaments in sprain (grade III)

    1. To say a notion is imprinted on the mind, and yet at the same time to say that the mind is ignorant of it, and never yet took notice of it, is to make this impression nothing. No proposition can be said to be in the mind which it never yet knew, which it was never yet conscious of

      Yes and No! neuroscience

    2. There is nothing more commonly taken for granted, than that there are certain principles both speculative and practical (for they speak of both) universally agreed upon by all mankind: which, therefore they argue, must needs be the constant impressions which the souls of men receive in their first beings, and which they bring into the world with them, as necessarily and really as they do any of their inherent faculties

      This is false in the sense that there are people whom we describe as having disorders and differences, but these people are people nonetheless

      "what is a human" that's a long story in itself. But one without said disorders could also work against said instinct

    3. the soul receives in its very first being; and brings into the world with it.

      These are called instincts.

      Our flesh is the thing that liberates us into existence, but it is also the thing that limits the way we perceive the world. These principles derive from evolution, and in evolution there is survival along with things that happen by chance

    4. arithmetic, geometry

      It is almost inconceivable to imagine a reality where a single thing and another single thing together do not equal two single things. Although one could argue that the use of a base-10 numeral system is fabricated, in a sense we cannot say it is or is not real.


  6. drive.google.com
    1. Access to recording equipment and space, editing stations with specialized multimedia software, and technical support for students’ development of their academic media projects.

      Does anyone know if the software access here includes Adobe Creative Cloud or Adobe Premiere Pro for editing?

    1. This study has found that T. hirsuta may be displaced by M. galloprovincialis in a future ocean, causing a shift in the biogenic habitat of the Australian shores. Such a shift in habitat may affect the infauna; future conditions may cause infauna to prefer specific mussel habitats (either T. hirsuta or M. galloprovincialis) and lead to an overall decline in infaunal molluscs.

      What would happen if the restoration projects focused on planting native mussels and the invasive species still continued to take over? Should the project's focus shift to the invasive species, or stay on the native species? Would it be more ethical to let the new invasive species take over and form new habitats, because they might survive climate change better than the native species?

    2. Polychaetes generally had a positive response to climate change scenarios. Warming and elevated pCO2 interacted to increase the number of species of polychaetes in the elevated pCO2 treatment at ambient temperature (Fig. 4; ANOVA Temp × CO2, F1,32 = 11.03, P < 0.02, Supplementary Table 4). Under warming, there were fewer polychaete species recruiting to T. hirsuta compared with that observed at ambient temperature, but there was no effect of warming on the number of polychaete species that recruited to M. galloprovincialis (Supplementary Table 4). When T. hirsuta and M. galloprovincialis were present in the same mesocosms, there were significantly more polychaetes under ambient temperature than under warming (Fig. 4; ANOVA Temp × Presence, F1,32 = 4.66, P < 0.05; Species × Presence, F1,32 = 4.66, P < 0.05, Supplementary Table 4). There were no significant effects of any treatments on the number of species, the number of individuals of Crustacea and the number of species of Mollusca (Fig. 4; Supplementary Table 4). Molluscs were negatively affected by elevated pCO2 but unaffected by warming (Fig. 4).

      These results indicate that the worms would benefit from climate change, but the mollusks would be harmed. Mollusks play a vital role in many ecosystems by filtering water, recycling nutrients, and building the strong shells that many other organisms inhabit. If these organisms were to decline, would the loss create more long-term issues beyond an increased worm population? What would this do to the food web? Would this change how many reefs or shorelines are built?

    3. This study has shown that the invasive M. galloprovincialis was more tolerant of elevated pCO2 compared with the native T. hirsuta. We have also shown that the two mussel species possess unique infaunal communities, which are also altered by climate change conditions.

      The authors make an interesting point here that the invasive species of mussels is more tolerant of climate change than the native species. With that being said, from a scientific standpoint, should scientists be worried about the invasive species replacing the native ones, and if so, what should conservationists or scientists do to stop this?

    4. Species do not exist in isolation but rather in communities where they interact. These interactions can be broadly grouped into interactions that reduce the overall abundance of species i.e. “negative interactions” (e.g. competition and predation) or interactions that increase their abundance i.e. “positive interactions” (Bertness et al., 1999).

      Here the authors are suggesting that species do not live alone; instead, they are part of much bigger communities. How might climate change affect these interactions? Would it make some of these interactions stronger or weaker than others? Would this benefit one species and impair another? And what does this mean for invasive species compared to native species?

    1. I thought the Research Paper and Writing workshop page was very important. "🚨 Honors Students: Your final essay must be at 2000-2500 words to meet the final requirement. It will also ask you to include a paper that uses at least three more academic journals (6-10 total sources) as source material. "

      This part stood out to me because the research paper is worth almost half the grade, which shows how central it is to the course. It feels like a big responsibility, but also a chance to pull together everything we’ve learned. Knowing this, and especially as an honors student, I plan to pace myself and start gathering sources early so I’m not rushing at the end.

    1. Presidential dominance, in short, is the usual way to characterize US foreign policy making

      This shows the reality that the president typically plays the leading role in shaping and directing foreign policy. Even though Congress has constitutional powers in this area, historical precedent, the need for quick decisions in crises, and the president’s position as Commander-in-Chief and chief diplomat have made presidential dominance the norm in foreign affairs.


    1. A Briefe declaration of the chief Islands in the Bay of Mexico being under the king of Spain, with their havens and forts, and what commodities they yeide.

      Hakluyt might be trying to emphasize that Spain already draws a lot of yields from the Bay of Mexico, so if England colonizes it might gain the same, possibly more.