    1. Program loading and execution. Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level languages or machine language are needed as well.

      Program loading and execution services handle the process of getting compiled programs into memory so they can run. These include loaders (absolute, relocatable, overlay) and tools like linkage editors. Debugging support is also part of this category, helping programmers test and fix errors in either high-level code or machine language.

    2. File management. These programs create, delete, copy, rename, print, list, and generally access and manipulate files and directories.

      File management services provide everyday tools for working with files and directories. They let users create, delete, copy, rename, print, and list files, making it easier to organize and manage data without needing to use low-level system calls directly.

    3. Another aspect of a modern system is its collection of system services. Recall Figure 1.1, which depicted the logical computer hierarchy. At the lowest level is hardware. Next is the operating system, then the system services, and finally the application programs. System services, also known as system utilities, provide a convenient environment for program development and execution. Some of them are simply user interfaces to system calls. Others are considerably more complex. They can be divided into these categories:

      This part explains where system services fit in the computer hierarchy. They sit between the operating system and the application programs, giving developers and users a convenient way to interact with the system. Some services are simple front ends for system calls, while others are more complex and provide broader functionality. In short, system services (or utilities) let programmers develop and run programs without dealing directly with low-level details.

    4. Typically, system calls providing protection include set_permission() and get_permission(), which manipulate the permission settings of resources such as files and disks. The allow_user() and deny_user() system calls specify whether particular users can—or cannot—be allowed access to certain resources. We cover protection in Chapter 17 and the much larger issue of security—which involves using protection against external threats—in Chapter 16.

      This paragraph describes how operating systems employ system calls to manage protection and access control. Functions such as set_permission() and get_permission() manipulate the permission settings of resources, whereas allow_user() and deny_user() specify which users may access particular files or devices. Protection here concerns managing internal access rights, whereas security (addressed later) focuses on defending against external threats.
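
      As a concrete illustration, these generic calls correspond to real POSIX system calls: stat() reads a file's permission bits and chmod() changes them. A minimal sketch, assuming a file named data.txt exists (the name is just for illustration):

        #include <stdio.h>
        #include <sys/stat.h>

        int main(void)
        {
            struct stat st;

            /* "get_permission()": read the current mode bits of the file */
            if (stat("data.txt", &st) == -1) {
                perror("stat");
                return 1;
            }
            printf("current mode: %o\n", (unsigned)(st.st_mode & 0777));

            /* "set_permission()": owner read/write only, deny everyone else */
            if (chmod("data.txt", S_IRUSR | S_IWUSR) == -1) {
                perror("chmod");
                return 1;
            }
            return 0;
        }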

    5. Protection provides a mechanism for controlling access to the resources provided by a computer system. Historically, protection was a concern only on multiprogrammed computer systems with several users. However, with the advent of networking and the Internet, all computer systems, from servers to mobile handheld devices, must be concerned with protection.

      Why has protection become an important concern for all computer systems, not just multiprogrammed systems with multiple users?

    6. Both of the models just discussed are common in operating systems, and most systems implement both. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also easier to implement than is shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, since it can be done at memory transfer speeds when it takes place within a computer. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

      What are the main advantages and disadvantages of using message passing versus shared memory for interprocess communication, and in what situations is each model more suitable?
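
      For comparison, here is a minimal sketch of the shared-memory side using the POSIX shm_open()/mmap() interface. The object name /demo_shm and the 4096-byte size are arbitrary choices for illustration; on some systems the program must be linked with -lrt:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Create (or open) a named shared-memory object; a second
               process opening "/demo_shm" would see the same bytes. */
            int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
            if (fd == -1) { perror("shm_open"); return 1; }

            if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

            char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
            if (region == MAP_FAILED) { perror("mmap"); return 1; }

            /* Writing here is an ordinary memory store, with no system
               call per transfer; that is why shared memory is fast.
               Synchronization (e.g., a semaphore) is the programmer's
               problem, which is the protection/synchronization issue the
               excerpt mentions. */
            strcpy(region, "hello from process A");

            munmap(region, 4096);
            close(fd);
            shm_unlink("/demo_shm");   /* remove the object when done */
            return 0;
        }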

    7. There are two common models of interprocess communication: the message-passing model and the shared-memory model. In the message-passing model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly through a common mailbox. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same system or a process on another computer connected by a communications network. Each computer in a network has a host name by which it is commonly known. A host also has a network identifier, such as an IP address. Similarly, each process has a process name, and this name is translated into an identifier by which the operating system can refer to the process. The get_hostid() and get_processid() system calls do this translation. The identifiers are then passed to the general-purpose open() and close() calls provided by the file system or to specific open_connection() and close_connection() system calls, depending on the system's model of communication. The recipient process usually must give its permission for communication to take place with an accept_connection() call. Most processes that will be receiving connections are special-purpose daemons, which are system programs provided for that purpose. They execute a wait_for_connection() call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as a server, then exchange messages by using read_message() and write_message() system calls. The close_connection() call terminates the communication.

      Explain the steps involved in interprocess communication using the message-passing model. Include the roles of the client, server (daemon), and system calls such as open_connection(), accept_connection(), read_message(), and close_connection().
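
      For reference, these generic calls map naturally onto the POSIX socket API: connect() plays the role of open_connection(), accept() of accept_connection(), and plain read()/write() of read_message()/write_message(). A minimal client-side sketch, assuming a daemon is already listening on port 7777 of the local machine (both assumptions are for illustration only):

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            /* open_connection(): create a socket and connect to the
               server daemon, which has already called listen()/accept(). */
            int s = socket(AF_INET, SOCK_STREAM, 0);
            if (s == -1) { perror("socket"); return 1; }

            struct sockaddr_in addr = { 0 };
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(7777);            /* example port */
            inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

            if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
                perror("connect");
                return 1;
            }

            /* write_message() / read_message() */
            char reply[128];
            write(s, "ping", 4);
            ssize_t n = read(s, reply, sizeof(reply) - 1);
            if (n > 0) { reply[n] = '\0'; printf("server said: %s\n", reply); }

            close(s);   /* close_connection() */
            return 0;
        }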

    8. Many operating systems provide a time profile of a program to indicate the amount of time that the program executes at a particular location or set of locations. A time profile requires either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded. With sufficiently frequent timer interrupts, a statistical picture of the time spent on various parts of the program can be obtained.

      Many operating systems can track how much time a program spends running at different points in its code. This is called a time profile. To create one, the system either traces the program or uses regular timer interrupts. Every time the timer interrupts, the system records the program's current position (the program counter). Sampling frequently enough yields a statistical view of which parts of the program take the most time to execute.
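
      A minimal sketch of the sampling idea using the POSIX profiling timer: a SIGPROF handler takes one "sample" every 10 ms of CPU time. A real profiler would record the interrupted program counter; here a coarse phase variable stands in for it to keep the example portable:

        #include <signal.h>
        #include <stdio.h>
        #include <sys/time.h>

        /* Which "location" the program is currently in; a real profiler
           records the interrupted program counter instead (available via
           the handler's ucontext argument, which is platform-specific). */
        static volatile sig_atomic_t phase;
        static volatile long samples[2];

        static void on_prof(int sig)
        {
            (void)sig;
            samples[phase]++;          /* one statistical sample per tick */
        }

        int main(void)
        {
            struct sigaction sa = { 0 };
            sa.sa_handler = on_prof;
            sigaction(SIGPROF, &sa, NULL);

            /* Fire SIGPROF every 10 ms of CPU time used by this process. */
            struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
            setitimer(ITIMER_PROF, &it, NULL);

            volatile double x = 0;
            phase = 0;
            for (long i = 0; i < 50000000; i++) x += i;         /* cheap loop   */
            phase = 1;
            for (long i = 0; i < 50000000; i++) x += i * 1.5;   /* pricier loop */

            printf("phase 0: %ld ticks, phase 1: %ld ticks\n",
                   samples[0], samples[1]);
            return 0;
        }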

    9. Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time() and date(). Other system calls may return information about the system, such as the version number of the operating system, the amount of free memory or disk space, and so on.

      Many system calls exist simply to pass information back and forth between the program and the operating system. For example, most systems provide calls to retrieve the current time and date. Other calls return information about the system, such as the version of the operating system, the amount of available memory or disk space, and other related details.
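
      A short sketch of such informational calls using common POSIX interfaces: time() for the current time and date, uname() for the OS name and version, and sysconf() for the amount of physical memory (the _SC_PHYS_PAGES query is a widespread extension rather than core POSIX):

        #include <stdio.h>
        #include <sys/utsname.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            /* Current time and date */
            time_t now = time(NULL);
            printf("now: %s", ctime(&now));

            /* OS name and version */
            struct utsname u;
            if (uname(&u) == 0)
                printf("system: %s %s\n", u.sysname, u.release);

            /* Physical memory = pages * page size (a common extension) */
            long pages = sysconf(_SC_PHYS_PAGES);
            long psize = sysconf(_SC_PAGE_SIZE);
            if (pages > 0 && psize > 0)
                printf("memory: %ld MB\n", pages / 1024 * psize / 1024);
            return 0;
        }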

    10. Once the device has been requested (and allocated to us), we can read(), write(), and (possibly) reposition() the device, just as we can with files. In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX, merge the two into a combined file–device structure. In this case, a set of system calls is used on both files and devices. Sometimes, I/O devices are identified by special file names, directory placement, or file attributes.

      Once a device has been requested and allocated, a program can read(), write(), and possibly reposition() it exactly as it would a file. Because devices and files behave so similarly, many operating systems, including UNIX, merge them into a single file–device structure, using one set of system calls for both and identifying devices by special file names, directory placement, or file attributes.

    11. The various resources controlled by the operating system can be thought of as devices. Some of these devices are physical devices (for example, disk drives), while others can be thought of as abstract or virtual devices (for example, files). A system with multiple users may require us to first request() a device, to ensure exclusive use of it. After we are finished with the device, we release() it. These functions are similar to the open() and close() system calls for files. Other operating systems allow unmanaged access to devices. The hazard then is the potential for device contention and perhaps deadlock, which are described in Chapter 8.

      The resources that an operating system manages can be thought of as devices. Some are physical, like disk drives, while others are abstract or virtual, like files. On systems with multiple users, a program may need to request() a device to ensure exclusive access, and then release() it when finished. These actions are similar to open() and close() for files. Some operating systems let programs access devices without this kind of control, but doing so can lead to problems like device contention or deadlock, which are discussed in Chapter 8.
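
      A hedged POSIX sketch of the request/use/release pattern: open() doubles as request(), an advisory flock() approximates exclusive use among cooperating processes, and close() releases the device. The device path /dev/ttyUSB0 is hypothetical; substitute a node that exists on your system:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/file.h>
        #include <unistd.h>

        int main(void)
        {
            /* open() doubles as "request()" for the device node */
            int fd = open("/dev/ttyUSB0", O_RDWR);
            if (fd == -1) { perror("open"); return 1; }

            /* Advisory exclusive lock: one way to approximate exclusive
               use among cooperating processes. */
            if (flock(fd, LOCK_EX) == -1) { perror("flock"); return 1; }

            char buf[64];
            ssize_t n = read(fd, buf, sizeof(buf));  /* same call as for files */
            if (n > 0)
                write(STDOUT_FILENO, buf, (size_t)n);

            flock(fd, LOCK_UN);   /* "release()" */
            close(fd);
            return 0;
        }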

    12. We may need these same sets of operations for directories if we have a directory structure for organizing files in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes and perhaps to set them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. At least two system calls, get_file_attributes() and set_file_attributes(), are required for this function. Some operating systems provide many more calls, such as calls for file move() and copy(). Others might provide an API that performs those operations using code and other system calls, and others might provide system programs to perform the tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.

      We often need the same operations for directories as for files, especially when a directory structure is used to organize files. For both files and directories, it's important to be able to read and, if necessary, set their attributes, such as the name, type, access permissions, and accounting information. To handle this, operating systems usually provide system calls such as get_file_attributes() and set_file_attributes(). Some systems go further, offering extra calls for tasks like moving or copying files; in other cases, these actions are handled through APIs or system programs. If other programs can call these system programs, the system programs effectively act as APIs themselves.
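
      As an illustration, the generic attribute calls correspond to POSIX stat() (get) and calls like chmod() or rename() (set/move). A minimal sketch, with report.txt and the archive/ directory as hypothetical names:

        #include <stdio.h>
        #include <sys/stat.h>

        int main(void)
        {
            struct stat st;

            /* get_file_attributes(): stat() fills in size, mode,
               timestamps, and other attributes. */
            if (stat("report.txt", &st) == -1) { perror("stat"); return 1; }
            printf("size: %lld bytes, mode: %o\n",
                   (long long)st.st_size, (unsigned)(st.st_mode & 0777));

            /* A "move" within one file system is just a rename of the
               directory entry. */
            if (rename("report.txt", "archive/report.txt") == -1) {
                perror("rename");
                return 1;
            }
            return 0;
        }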

    13. The file system is discussed in more detail in Chapter 13 through Chapter 15. Here, we identify several common system calls dealing with files. We first need to be able to create() and delete() files. Either system call requires the name of the file and perhaps some of the file's attributes. Once the file is created, we need to open() it and to use it. We may also read(), write(), or reposition() (rewind or skip to the end of the file, for example). Finally, we need to close() the file, indicating that we are no longer using it.

      This part covers the primary file-management system calls offered by an operating system. A program can create a new file or delete an existing one, supplying the file's name and perhaps some attributes. Once created, the file is accessed with open(), after which the program can read(), write(), or reposition() the file pointer. When the program has finished with the file, close() indicates that it is no longer in use.
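
      The whole lifecycle fits in a few lines of POSIX calls. A minimal sketch (notes.txt is an arbitrary example name):

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* create() + open() in one step */
            int fd = open("notes.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
            if (fd == -1) { perror("open"); return 1; }

            write(fd, "hello\n", 6);          /* write()               */
            lseek(fd, 0, SEEK_SET);           /* reposition(): rewind  */

            char buf[16];
            ssize_t n = read(fd, buf, sizeof(buf));   /* read() it back */
            if (n > 0)
                write(STDOUT_FILENO, buf, (size_t)n);

            close(fd);                        /* close()  */
            unlink("notes.txt");              /* delete() */
            return 0;
        }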

    14. There are so many facets of and variations in process control that we next use two examples—one involving a single-tasking system and the other a multitasking system—to clarify these concepts. The Arduino is a simple hardware platform consisting of a microcontroller along with input sensors that respond to a variety of events, such as changes to light, temperature, and barometric pressure, to just name a few. To write a program for the Arduino, we first write the program on a PC and then upload the compiled program (known as a sketch) from the PC to the Arduino's flash memory via a USB connection. The standard Arduino platform does not provide an operating system; instead, a small piece of software known as a boot loader loads the sketch into a specific region in the Arduino's memory.

      This passage explains process control in simple and multitasking systems using the Arduino as an example. The Arduino is a microcontroller platform with sensors that detect various events. Programs, called sketches, are written and compiled on a PC and then uploaded to the Arduino’s flash memory. Unlike more complex systems, the standard Arduino does not use a full operating system; a bootloader simply loads the sketch into memory, demonstrating a single-tasking environment.

    15. Quite often, two or more processes may share data. To ensure the integrity of the data being shared, operating systems often provide system calls allowing a process to lock shared data. Then, no other process can access the data until the lock is released. Typically, such system calls include acquire_lock() and release_lock().

      This paragraph explores how operating systems protect data shared among processes. To preserve data integrity, the OS provides system calls such as acquire_lock(), which prevents other processes from accessing the shared data until it is freed with release_lock(). This mechanism is crucial for avoiding conflicts and keeping data consistent when processes run concurrently.
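
      POSIX exposes this idea, for file data, through fcntl() record locks. A minimal sketch in which F_SETLKW plays the role of acquire_lock() (blocking until the lock is granted) and F_UNLCK plays release_lock(); shared.dat is an example name:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
            if (fd == -1) { perror("open"); return 1; }

            /* acquire_lock(): block until an exclusive lock on the whole
               file is granted; other cooperating processes doing the
               same will wait here. */
            struct flock fl = { 0 };
            fl.l_type   = F_WRLCK;
            fl.l_whence = SEEK_SET;   /* l_start = 0, l_len = 0 => whole file */
            if (fcntl(fd, F_SETLKW, &fl) == -1) { perror("fcntl"); return 1; }

            write(fd, "critical update\n", 16);   /* safe: we hold the lock */

            /* release_lock() */
            fl.l_type = F_UNLCK;
            fcntl(fd, F_SETLK, &fl);
            close(fd);
            return 0;
        }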

    16. A process executing one program may want to load() and execute() another program. This feature allows the command interpreter to execute a program as directed by, for example, a user command or the click of a mouse. An interesting question is where to return control when the loaded program terminates. This question is related to whether the existing program is lost, saved, or allowed to continue execution concurrently with the new program. If control returns to the existing program when the new program terminates, we must save the memory image of the existing program; thus, we have effectively created a mechanism for one program to call another program. If both programs continue concurrently, we have created a new process to be multiprogrammed. Often, there is a system call specifically for this purpose (create_process()).

      This text explains how one application can load and run another application, for instance, when a user issues a command or selects an icon. It emphasizes the main problem of control flow once the new program ends: control might revert to the original program, necessitating its memory image to be preserved, or both programs could operate simultaneously, resulting in a multiprogramming situation. The excerpt mentions that operating systems typically offer a specific system call, like create_process(), to enable this functionality.
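
      On UNIX systems these concepts map onto fork(), exec(), and wait(). A minimal sketch: with the wait() the parent effectively "calls" the child program; dropping the wait() would let both run concurrently instead:

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();          /* create_process() */
            if (pid == -1) { perror("fork"); return 1; }

            if (pid == 0) {
                /* Child: load() and execute() a new program here. */
                execlp("ls", "ls", "-l", (char *)NULL);
                perror("execlp");        /* reached only if exec fails */
                _exit(127);
            }

            /* Parent: waiting makes this "one program calls another";
               skipping the wait() would multiprogram the two instead. */
            int status;
            waitpid(pid, &status, 0);
            printf("child finished, status %d\n", WEXITSTATUS(status));
            return 0;
        }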

    17. A running program needs to be able to halt its execution either normally (end()) or abnormally (abort()). If a system call is made to terminate the currently running program abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is sometimes taken and an error message generated. The dump is written to a special log file on disk and may be examined by a debugger—a system program designed to aid the programmer in finding and correcting errors, or bugs—to determine the cause of the problem. Under either normal or abnormal circumstances, the operating system must transfer control to the invoking command interpreter. The command interpreter then reads the next command. In an interactive system, the command interpreter simply continues with the next command; it is assumed that the user will issue an appropriate command to respond to any error. In a GUI system, a pop-up window might alert the user to the error and ask for guidance. Some systems may allow for special recovery actions in case an error occurs. If the program discovers an error in its input and wants to terminate abnormally, it may also want to define an error level. More severe errors can be indicated by a higher-level error parameter. It is then possible to combine normal and abnormal termination by defining a normal termination as an error at level 0. The command interpreter or a following program can use this error level to determine the next action automatically.

      This passage explains how a running program can terminate either normally using end() or abnormally using abort(). In the case of abnormal termination or an error trap, the operating system may create a memory dump and an error log for debugging. After termination, control is returned to the command interpreter, which continues processing user commands or provides GUI prompts for guidance. The passage also highlights the use of error levels to indicate the severity of errors, allowing subsequent programs or the command interpreter to respond appropriately.
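
      A small sketch of error levels in practice: the program exits with 0 on success and a larger value for more severe problems. A UNIX shell or a following program can then read this value (e.g., via $?) to decide what to do next. The exit codes chosen here are illustrative:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <input>\n", argv[0]);
                exit(2);        /* higher "error level" = more severe */
            }

            FILE *f = fopen(argv[1], "r");
            if (f == NULL) {
                perror(argv[1]);
                exit(1);        /* abnormal termination, error level 1 */
            }

            fclose(f);
            exit(0);            /* normal termination = error level 0 */
        }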

    18. System calls can be grouped roughly into six major categories: process control, file management, device management, information maintenance, communications, and protection. Below, we briefly discuss the types of system calls that may be provided by an operating system. Most of these system calls support, or are supported by, concepts and functions that are discussed in later chapters. Figure 2.8 summarizes the types of system calls normally provided by an operating system. As mentioned, in this text, we normally refer to the system calls by generic names. Throughout the text, however, we provide examples of the actual counterparts to the system calls for UNIX, Linux, and Windows systems.

      This section explains that system calls fall into six primary categories: process control, file management, device management, information maintenance, communications, and protection. Most of these calls support concepts discussed in later chapters, and the text gives UNIX, Linux, and Windows counterparts for the generic names it uses. Figure 2.8 summarizes these categories.

    19. Three general methods are used to pass parameters to the operating system. The simplest approach is to pass the parameters in registers. In some cases, however, there may be more parameters than registers. In these cases, the parameters are generally stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register (Figure 2.7). Linux uses a combination of these approaches.

      This passage describes how system-call parameters are passed to the operating system. The simplest method is to place the parameters in CPU registers. When there are more parameters than available registers, they are instead stored in a block, or table, in memory, and the address of that block is passed in a register. Linux uses a combination of these approaches.
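
      On Linux, the register convention is visible through the C library's generic syscall() wrapper, which loads the call number and arguments into registers before trapping into the kernel; calls that take structured data receive the address of a block instead (as stat() does with its buffer). A minimal sketch of the register case:

        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        int main(void)
        {
            const char msg[] = "hi from a raw system call\n";

            /* Equivalent to write(1, msg, len): the call number plus
               three arguments travel to the kernel in registers. */
            long n = syscall(SYS_write, 1, msg, sizeof(msg) - 1);
            printf("kernel returned %ld\n", n);
            return 0;
        }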

    20. System calls occur in different ways, depending on the computer in use. Often, more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the particular operating system and call. For example, to get input, we may need to specify the file or device to use as the source, as well as the address and length of the memory buffer into which the input should be read. Of course, the device or file and length may be implicit in the call.

      This passage explains that system calls often require more information than the identity of the call itself. Parameters, such as the source file or device, the memory buffer address, and the buffer length, may need to be specified so the operating system knows how to process the request. The precise details depend on the particular operating system and system call.
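
      A one-call illustration: POSIX read() takes exactly the pieces of information the passage lists, the source (a file descriptor), the buffer address, and the length:

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[64];                 /* memory buffer for the input */

            /* Source (fd 0 = standard input), buffer address, and
               length are all passed explicitly to the system call. */
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            if (n < 0) { perror("read"); return 1; }

            printf("got %zd bytes\n", n);
            return 0;
        }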

    21. The caller need know nothing about how the system call is implemented or what it does during execution. Rather, the caller need only obey the API and understand what the operating system will do as a result of the execution of that system call. Thus, most of the details of the operating-system interface are hidden from the programmer by the API and are managed by the RTE.

      This text highlights the abstraction that APIs provide over system calls. Programmers working with an API do not have to understand the internal mechanisms or execution details of a system call; they only need to obey the API and understand the expected outcome. The run-time environment (RTE) manages the details of interacting with the operating system, hiding the underlying specifics from the developer.

    22. Another important factor in handling system calls is the run-time environment (RTE)—the full suite of software needed to execute applications written in a given programming language, including its compilers or interpreters as well as other software, such as libraries and loaders. The RTE provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers.

      This passage describes the role of the run-time environment (RTE) in managing system calls. The RTE includes compilers, interpreters, libraries, and loaders, and provides a system-call interface that connects API function calls to the operating system’s system calls. Each system call is typically assigned a number, and the interface uses a table indexed by these numbers to invoke the correct system call within the OS.
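
      A toy model of that table in ordinary C: an array of function pointers indexed by call number, with a dispatcher that validates the number and forwards the arguments. The call names and numbers here are invented for illustration:

        #include <stdio.h>

        /* Toy system-call table: function pointers indexed by number,
           mimicking how a system-call interface dispatches requests. */
        typedef long (*syscall_fn)(long, long);

        static long sys_add(long a, long b)  { return a + b; }
        static long sys_echo(long a, long b) { (void)b; printf("%ld\n", a); return 0; }

        static syscall_fn table[] = { sys_add, sys_echo };

        static long do_syscall(int number, long a1, long a2)
        {
            if (number < 0 || number >= (int)(sizeof(table) / sizeof(table[0])))
                return -1;                /* unknown call number */
            return table[number](a1, a2); /* dispatch through the table */
        }

        int main(void)
        {
            printf("call 0 -> %ld\n", do_syscall(0, 2, 3));
            do_syscall(1, 42, 0);
            return 0;
        }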

    23. Why would an application programmer prefer programming according to an API rather than invoking actual system calls? There are several reasons for doing so. One benefit concerns program portability. An application programmer designing a program using an API can expect her program to compile and run on any system that supports the same API (although, in reality, architectural differences often make this more difficult than it may appear). Furthermore, actual system calls can often be more detailed and difficult to work with than the API available to an application programmer. Nevertheless, there often exists a strong correlation between a function in the API and its associated system call within the kernel. In fact, many of the POSIX and Windows APIs are similar to the native system calls provided by the UNIX, Linux, and Windows operating systems.

      This passage describes why application programmers prefer using APIs instead of invoking system calls directly. APIs provide portability, allowing programs to run on any system that supports the same API, and they simplify programming by offering higher-level, easier-to-use functions. While system calls are often more detailed and complex, API functions usually correspond closely to underlying system calls, as seen in the POSIX and Windows APIs.

    24. As you can see, even simple programs may make heavy use of the operating system. Frequently, systems execute thousands of system calls per second. Most programmers never see this level of detail, however. Typically, application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect. Three of the most common APIs available to application programmers are the Windows API for Windows systems, the POSIX API for POSIX-based systems (which include virtually all versions of UNIX, Linux, and macOS), and the Java API for programs that run on the Java virtual machine.

      This passage highlights that even simple programs rely heavily on the operating system through system calls, often executing thousands per second. Programmers, however, usually work with higher-level APIs rather than making system calls directly. APIs like the Windows API, POSIX API, and Java API specify standardized functions, parameters, and expected return values, simplifying program development while hiding the underlying OS complexity.
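
      A small sketch of the two levels side by side: printf() is an API function that the C library ultimately turns into write() system calls, while calling write() directly exposes the file descriptor, raw bytes, and explicit length:

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* API level: portable, buffered, formatted; the C library
               issues one or more write() system calls underneath. */
            printf("hello via the C API\n");
            fflush(stdout);

            /* System-call level: descriptor, raw bytes, explicit length. */
            write(STDOUT_FILENO, "hello via write()\n", 18);
            return 0;
        }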

    25. When both files are set up, we enter a loop that reads from the input file (a system call) and writes to the output file (another system call). Each read and write must return status information regarding various possible error conditions. On input, the program may find that the end of the file has been reached or that there was a hardware failure in the read (such as a parity error). The write operation may encounter various errors, depending on the output device (for example, no more available disk space).

      This passage emphasizes that reading from and writing to files in a program involves repeated system calls, each of which must report status and handle potential errors. It illustrates how the operating system monitors both input and output operations, accounting for conditions like reaching the end of a file, hardware read failures, or insufficient disk space during writing.

    26. Once the two file names have been obtained, the program must open the input file and create and open the output file. Each of these operations requires another system call. Possible error conditions for each system call must be handled. For example, when the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should output an error message (another sequence of system calls) and then terminate abnormally (another system call).

      This passage explains that each file operation—opening an input file, creating and opening an output file—requires a separate system call. It highlights the need for handling potential errors, such as a missing file or insufficient access permissions, using system calls to display error messages and terminate the program if necessary.

    27. Before we discuss how an operating system makes system calls available, let's first use an example to illustrate how system calls are used: writing a simple program to read data from one file and copy them to another file. The first input that the program will need is the names of the two files: the input file and the output file. These names can be specified in many ways, depending on the operating-system design.

      This passage introduces the concept of using system calls with a practical example: a program that reads from one file and writes to another. It emphasizes that the program first needs the file names and notes that how these names are specified can vary depending on the operating system’s design.
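
      Tying the three preceding excerpts together, here is a minimal POSIX sketch of the copy program: obtain the two names, open the input, create and open the output, loop over read()/write() while checking every status, and terminate. The usage message and exit codes are illustrative choices:

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            /* Step 1: obtain the two file names (here, command-line args). */
            if (argc != 3) {
                fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
                exit(1);
            }

            /* Step 2: open the input file and create/open the output file,
               handling errors such as a missing or protected input file. */
            int in = open(argv[1], O_RDONLY);
            if (in == -1) { perror(argv[1]); exit(1); }

            int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out == -1) { perror(argv[2]); exit(1); }

            /* Step 3: the read/write loop; every call reports its status. */
            char buf[4096];
            ssize_t n;
            while ((n = read(in, buf, sizeof(buf))) > 0) {
                if (write(out, buf, (size_t)n) != n) {   /* e.g., disk full */
                    perror("write");
                    exit(1);
                }
            }
            if (n == -1) { perror("read"); exit(1); }    /* e.g., I/O error */

            close(in);
            close(out);
            return 0;                                    /* normal termination */
        }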

    28. System calls provide an interface to the services made available by an operating system. These calls are generally available as functions written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions.

      This passage explains that system calls act as the bridge between programs and the operating system’s services. Most system calls are accessible through high-level languages like C and C++, but some low-level operations—especially those requiring direct hardware access—may need to be implemented in assembly language.

    29. Although there are apps that provide a command-line interface for iOS and Android mobile systems, they are rarely used. Instead, almost all users of mobile systems interact with their devices using the touch-screen interface. The user interface can vary from system to system and even from user to user within a system; however, it typically is substantially removed from the actual system structure. The design of a useful and intuitive user interface is therefore not a direct function of the operating system. In this book, we concentrate on the fundamental problems of providing adequate service to user programs. From the point of view of the operating system, we do not distinguish between user programs and system programs.

      This passage emphasizes that mobile users almost exclusively use touch-screen interfaces rather than command-line interfaces. While user interfaces may differ across systems and users, their design is largely separate from the underlying operating system. The focus of the book, as noted here, is on the operating system’s role in providing consistent and adequate service to programs, treating user and system programs equivalently.

    30. In contrast, most Windows users are happy to use the Windows GUI environment and almost never use the shell interface. Recent versions of the Windows operating system provide both a standard GUI for desktop and traditional laptops and a touch screen for tablets. The various changes undergone by the Macintosh operating systems also provide a nice study in contrast.

      This passage contrasts typical Windows users with command-line users, noting that most Windows users rely primarily on the GUI and rarely use the shell. Modern Windows versions support both a desktop GUI and touch interfaces for tablets. The passage also points out that the evolution of the Macintosh operating systems offers a useful study in how GUI design and user interaction have developed over time.

    31. The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them, it is more efficient, giving them faster access to the activities they need to perform. Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are command-line knowledgeable.

      This text emphasizes that the decision between a graphical user interface (GUI) and a command-line interface (CLI) usually comes down to personal preference and skill level. System administrators and power users typically prefer the CLI for its quicker, more efficient access to system features. Certain tasks might be accessible only through the CLI, which matters for users who need specific or uncommon functions.

    32. Because either a command-line interface or a mouse-and-keyboard system is impractical for most mobile systems, smartphones and handheld tablet computers typically use a touch-screen interface. Here, users interact by making gestures on the touch screen—for example, pressing and swiping fingers across the screen. Although earlier smartphones included a physical keyboard, most smartphones and tablets now simulate a keyboard on the touch screen.

      This text notes that mobile devices like smartphones and tablets depend on touch-screen interfaces rather than conventional command-line or mouse-and-keyboard systems. Users engage directly with the display using gestures like tapping and swiping. While early smartphones had physical keyboards, modern devices typically display a virtual keyboard on the touch screen, improving portability and usability.

    33. Graphical user interfaces first appeared due in part to research taking place in the early 1970s at Xerox PARC research facility. The first GUI appeared on the Xerox Alto computer in 1973. However, graphical interfaces became more widespread with the advent of Apple Macintosh computers in the 1980s. The user interface for the Macintosh operating system has undergone various changes over the years, the most significant being the adoption of the Aqua interface that appeared with macOS. Microsoft's first version of Windows—Version 1.0—was based on the addition of a GUI interface to the MS-DOS operating system.

      This passage outlines the historical development of graphical user interfaces (GUIs). GUIs grew out of research at Xerox PARC in the early 1970s, with the Xerox Alto (1973) being the first computer to have one. Widespread use came in the 1980s with Apple's Macintosh computers. GUIs have continued to evolve, notably with Apple's adoption of the Aqua interface in macOS. Microsoft's Windows 1.0 added a GUI layered over the MS-DOS operating system.

    34. In one approach, the command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call. In this case, the number of commands that can be given determines the size of the command interpreter, since each command requires its own implementing code.

      This passage explains one method of implementing the commands in a command interpreter: the interpreter directly contains the code for executing each command. For instance, a delete-file command triggers a specific section of the interpreter’s code to set parameters and perform the system call. The number of supported commands directly affects the interpreter’s size, as each command needs its own dedicated code.
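
      A toy sketch of this first approach: the interpreter itself contains the code for each command and dispatches on the command name. The commands and the file name old.tmp are invented for illustration:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Toy interpreter: the code to execute each command lives inside
           the interpreter, so every supported command adds to its size. */
        static void run(const char *cmd, const char *arg)
        {
            if (strcmp(cmd, "rm") == 0) {
                if (unlink(arg) == -1)        /* set up and make the call */
                    perror(arg);
            } else if (strcmp(cmd, "pwd") == 0) {
                char buf[256];
                if (getcwd(buf, sizeof(buf)))
                    printf("%s\n", buf);
            } else {
                fprintf(stderr, "unknown command: %s\n", cmd);
            }
        }

        int main(void)
        {
            run("pwd", NULL);
            run("rm", "old.tmp");
            return 0;
        }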

    35. The main function of the command interpreter is to get and execute the next user-specified command. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The various shells available on UNIX systems operate in this way. These commands can be implemented in two general ways.

      This passage explains that the command interpreter's primary role is to get and execute the next user command, many of which manipulate files: creating, deleting, listing, copying, and so on. UNIX shells work this way, and such commands can be implemented in two general ways.

    36. Most operating systems, including Linux, UNIX, and Windows, treat the command interpreter as a special program that is running when a process is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the C shell, Bourne-Again shell, Korn shell, and others.

      This passage explains that the command interpreter, or shell, is a special program that runs when a process starts or when a user logs on. On systems like UNIX and Linux, multiple shells are available, allowing users to choose their preferred interface for entering commands.

    37. Protection and security. The owners of information stored in a multiuser or networked computer system may want to control use of that information. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important.

      This passage describes how operating systems enforce protection and security by controlling access to system resources. In multiuser or networked environments, this ensures that processes do not interfere with one another and safeguards the system against external threats.

    38. Logging. We want to keep track of which programs use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for system administrators who wish to reconfigure the system to improve computing services.

      This passage explains that operating systems maintain logs of program resource usage. These logs can support accounting and billing or help administrators analyze usage patterns to improve system performance.

    39. Resource allocation. When there are multiple processes running at the same time, resources must be allocated to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code.

      This passage highlights that the operating system is responsible for resource allocation, distributing CPU time, memory, file storage, and I/O devices among multiple running processes to ensure fair and efficient usage.

    40. Error detection. The operating system needs to be detecting and correcting errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).

      This passage explains that the operating system continuously detects and handles errors. These errors can arise in hardware (CPU, memory, or I/O devices) or in user programs, such as illegal memory access or arithmetic overflow, ensuring system stability.

    41. Communications. There are many circumstances in which one process needs to exchange information with another process. Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a network.

      This passage describes how operating systems provide mechanisms for interprocess communication, allowing processes to exchange information either on the same computer or across different computers connected by a network.

    42. File-system manipulation. The file system is of particular interest. Obviously, programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Finally, some operating systems include permissions management to allow or deny access to files or directories based on file ownership.

      This passage explains that operating systems manage file-system operations, including reading, writing, creating, deleting, searching, and listing files and directories. Some systems also enforce permissions to control access based on file ownership.

    43. Program execution. The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating error).

      This passage highlights that an operating system manages program execution by loading programs into memory, running them, and handling their termination, whether it ends normally or due to an error.

    44. An operating system provides an environment for the execution of programs. It makes certain services available to programs and to the users of those programs. The specific services provided, of course, differ from one operating system to another, but we can identify common classes.

      This passage states that an operating system provides a platform for running programs, offering services to both programs and users. While the specific services vary across operating systems, there are common classes of services that can generally be identified.

    45. We can view an operating system from several vantage points. One view focuses on the services that the system provides; another, on the interface that it makes available to users and programmers; a third, on its components and their interconnections. In this chapter, we explore all three aspects of operating systems, showing the viewpoints of users, programmers, and operating system designers. We consider what services an operating system provides, how they are provided, how they are debugged, and what the various methodologies are for designing such systems. Finally, we describe how operating systems are created and how a computer starts its operating system.

      This passage explains that operating systems can be understood from multiple perspectives: the services they provide, the interfaces available to users and programmers, and their internal components and connections. The chapter will explore these viewpoints, covering OS services, debugging, design methodologies, creation processes, and how a computer boots its operating system.

    46. Another advantage of working with open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for instance, but each has its own goals, utility, licensing, and purpose. Sometimes, licenses are not mutually exclusive and cross-pollination occurs, allowing rapid improvements in operating-system projects. For example, several major components of OpenSolaris have been ported to BSD UNIX. The advantages of free software and open sourcing are likely to increase the number and quality of open-source projects, leading to an increase in the number of individuals and companies that use these projects.

      Another benefit of open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for example, yet each has distinct goals, utility, licensing, and purpose. Sometimes licenses are not mutually exclusive and cross-pollination occurs, enabling rapid improvements in operating-system projects; for instance, several major components of OpenSolaris have been ported to BSD UNIX. The advantages of free software and open sourcing are likely to increase both the number and the quality of open-source projects, and with them the number of people and businesses that use these projects.

    47. The free-software movement is driving legions of programmers to create thousands of open-source projects, including operating systems. Sites like http://freshmeat.net/ and http://distrowatch.com/ provide portals to many of these projects. As we stated earlier, open-source projects enable students to use source code as a learning tool. They can modify programs and test them, help find and fix bugs, and otherwise explore mature, full-featured operating systems, compilers, tools, user interfaces, and other types of programs. The availability of source code for historic projects, such as Multics, can help students to understand those projects and to build knowledge that will help in the implementation of new projects.

      This passage highlights how the free-software movement motivates programmers to create numerous open-source projects, including operating systems. Portals like freshmeat.net and distrowatch.com provide access to these projects. Open-source code serves as a learning tool, allowing students to modify, test, and debug programs, explore full-featured systems, and study historic projects like Multics to gain knowledge useful for developing new software.

    48. Solaris is the commercial UNIX-based operating system of Sun Microsystems. Originally, Sun's SunOS operating system was based on BSD UNIX. Sun moved to AT&T's System V UNIX as its base in 1991. In 2005, Sun open-sourced most of the Solaris code as the OpenSolaris project. The purchase of Sun by Oracle in 2009, however, left the state of this project unclear.

      This passage outlines the history of Solaris, Sun Microsystems’ commercial UNIX-based OS. SunOS was initially based on BSD UNIX, but in 1991 it switched to System V UNIX. In 2005, most Solaris code was open-sourced as OpenSolaris, though Oracle’s acquisition of Sun in 2009 left the project’s future uncertain.

    49. As with many open-source projects, this source code is contained in and controlled by a version control system—in this case, “subversion” (https://subversion.apache.org/source-code). Version control systems allow a user to “pull” an entire source code tree to his computer and “push” any changes back into the repository for others to then pull. These systems also provide other features, including an entire history of each file and a conflict resolution feature in case the same file is changed concurrently. Another version control system is git, which is used for GNU/Linux, as well as other programs (http://www.git-scm.com).

      This text describes how open-source projects typically manage their source code with version control systems. Subversion (used by BSD) and Git (used by GNU/Linux) let users pull the code, make modifications, and push the updates back to the repository. These systems track each file's history and resolve conflicts when the same file is changed concurrently, supporting collaborative development and effective code management.

    50. Just as with Linux, there are many distributions of BSD UNIX, including FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. To explore the source code of FreeBSD, simply download the virtual machine image of the version of interest and boot it within Virtualbox, as described above for Linux. The source code comes with the distribution and is stored in /usr/src/. The kernel source code is in /usr/src/sys. For example, to examine the virtual memory implementation code in the FreeBSD kernel, see the files in /usr/src/sys/vm. Alternatively, you can simply view the source code online at https://svnweb.freebsd.org.

      This passage explains that BSD UNIX, like Linux, has multiple distributions, such as FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. FreeBSD's source code is included with the distribution and can be explored locally (e.g., in /usr/src/ and /usr/src/sys) or online via the FreeBSD repository. Virtual machine images allow users to boot and examine the OS safely, making it accessible for learning and experimentation.

    51. BSD UNIX has a longer and more complicated history than Linux. It started in 1978 as a derivative of AT&T's UNIX. Releases from the University of California at Berkeley (UCB) came in source and binary form, but they were not open source because a license from AT&T was required. BSD UNIX's development was slowed by a lawsuit by AT&T, but eventually a fully functional, open-source version, 4.4BSD-lite, was released in 1994.

      This passage summarizes the history of BSD UNIX. Originating in 1978 as a derivative of AT&T UNIX, early BSD releases from UC Berkeley required an AT&T license and were not fully open source. Development was delayed by legal issues, but a fully functional open-source version, 4.4BSD-lite, was eventually released in 1994.

    52. The resulting GNU/Linux operating system (with the kernel properly called Linux but the full operating system including GNU tools called GNU/Linux) has spawned hundreds of unique distributions, or custom builds, of the system. Major distributions include Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu. Distributions vary in function, utility, installed applications, hardware support, user interface, and purpose. For example, Red Hat Enterprise Linux is geared to large commercial use. PCLinuxOS is a live CD—an operating system that can be booted and run from a CD-ROM without being installed on a system's boot disk. A variant of PCLinuxOS—called PCLinuxOS Supergamer DVD—is a live DVD that includes graphics drivers and games. A gamer can run it on any compatible system simply by booting from the DVD. When the gamer is finished, a reboot of the system resets it to its installed operating system.

      This passage describes how GNU/Linux has spawned hundreds of distributions, or custom builds, including Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu. Distributions differ in function, installed applications, hardware support, user interface, and purpose; Red Hat Enterprise Linux targets large commercial use, for example, while PCLinuxOS is a live CD that boots and runs without being installed, so a reboot returns the machine to its installed operating system.

    53. As an example of a free and open-source operating system, consider GNU/Linux. By 1991, the GNU operating system was nearly complete. The GNU Project had developed compilers, editors, utilities, libraries, and games—whatever parts it could not find elsewhere. However, the GNU kernel never became ready for prime time. In 1991, a student in Finland, Linus Torvalds, released a rudimentary UNIX-like kernel using the GNU compilers and tools and invited contributions worldwide.

      This passage discusses GNU/Linux as an example of a free and open-source operating system. By 1991, the GNU Project had developed most components except for a fully functional kernel. Linus Torvalds then released a basic UNIX-like kernel using GNU tools and invited global contributions, leading to the development of the Linux kernel and the complete GNU/Linux system.

    54. The FSF uses the copyrights on its programs to implement “copyleft,” a form of licensing invented by Stallman. Copylefting a work gives anyone that possesses a copy of the work the four essential freedoms that make the work free, with the condition that redistribution must preserve these freedoms. The GNU General Public License (GPL) is a common license under which free software is released. Fundamentally, the GPL requires that the source code be distributed with any binaries and that all copies (including modified versions) be released under the same GPL license. The Creative Commons “Attribution Sharealike” license is also a copyleft license; “sharealike” is another way of stating the idea of copyleft.

      This passage explains "copyleft," a licensing approach invented by Richard Stallman and used by the Free Software Foundation (FSF). Copyleft keeps software free by granting users the four essential freedoms while requiring that any redistribution preserve those freedoms. The GNU General Public License (GPL) is a widely used copyleft license, mandating that source code accompany binaries and that modified versions remain under the same license. Creative Commons' "Attribution Sharealike" license follows the same principle.

    55. To counter the move to limit software use and redistribution, Richard Stallman in 1984 started developing a free, UNIX-compatible operating system called GNU (which is a recursive acronym for “GNU's Not Unix!”). To Stallman, “free” refers to freedom of use, not price. The free-software movement does not object to trading a copy for an amount of money but holds that users are entitled to four certain freedoms: (1) to freely run the program, (2) to study and change the source code, and to give or sell copies either (3) with or (4) without changes. In 1985, Stallman published the GNU Manifesto, which argues that all software should be free. He also formed the Free Software Foundation (FSF) with the goal of encouraging the use and development of free software.

      This passage describes Richard Stallman's creation of the GNU operating system in 1984 to promote software freedom. "Free" refers to liberty, not price, granting users the rights to run, study, modify, and distribute software with or without changes. Stallman's GNU Manifesto and the Free Software Foundation (FSF) advocate for these freedoms and encourage the development and use of free software.

    56. Computer and software companies eventually sought to limit the use of their software to authorized computers and paying customers. Releasing only the binary files compiled from the source code, rather than the source code itself, helped them to achieve this goal, as well as protecting their code and their ideas from their competitors. Although the Homebrew user groups of the 1970s exchanged code during their meetings, the operating systems for hobbyist machines (such as CPM) were proprietary. By 1980, proprietary software was the usual case.

      This passage explains how computer and software companies began restricting software use to authorized users and paying customers. By distributing only compiled binaries instead of source code, companies protected their intellectual property and ideas. While early hobbyist groups shared code freely, operating systems like CPM were proprietary, and by 1980, proprietary software had become the norm.

    57. In the early days of modern computing (that is, the 1950s), software generally came with source code. The original hackers (computer enthusiasts) at MIT's Tech Model Railroad Club left their programs in drawers for others to work on. “Homebrew” user groups exchanged code during their meetings. Company-specific user groups, such as Digital Equipment Corporation's DECUS, accepted contributions of source-code programs, collected them onto tapes, and distributed the tapes to interested members. In 1970, Digital's operating systems were distributed as source code with no restrictions or copyright notice.

      This passage covers the early history of software distribution in the 1950s through the 1970s. Software often came with its source code, and communities of enthusiasts, like the MIT hackers, Homebrew groups, and company user groups such as DECUS, shared, modified, and distributed programs freely. Digital Equipment Corporation even distributed its operating systems as unrestricted source code, highlighting the collaborative culture of early computing.

    58. There are many benefits to open-source operating systems, including a community of interested (and usually unpaid) programmers who contribute to the code by helping to write it, debug it, analyze it, provide support, and suggest changes. Arguably, open-source code is more secure than closed-source code because many more eyes are viewing the code. Certainly, open-source code has bugs, but open-source advocates argue that bugs tend to be found and fixed faster owing to the number of people using and viewing the code.

      This passage highlights the benefits of open-source operating systems. A community of programmers contributes by writing, debugging, analyzing, and improving the code. Open-source code is arguably more secure than closed-source software because more eyes examine it, helping bugs get found and fixed more quickly.

    59. Starting with the source code allows the programmer to produce binary code that can be executed on a system. Doing the opposite—reverse engineering the source code from the binaries—is quite a lot of work, and useful items such as comments are never recovered. Learning operating systems by examining the source code has other benefits as well. With the source code in hand, a student can modify the operating system and then compile and run the code to try out those changes, which is an excellent learning tool.

      This passage explains the advantages of studying operating systems using source code. Starting from the source allows programmers to compile executable binaries directly, whereas reverse-engineering binaries is difficult and loses valuable information like comments. Access to source code also lets students modify, compile, and test the OS, providing a hands-on learning experience.

    60. The study of operating systems has been made easier by the availability of a vast number of free software and open-source releases. Both free operating systems and open-source operating systems are available in source-code format rather than as compiled binary code. Note, though, that free software and open-source software are two different ideas championed by different groups of people (see http://gnu.org/philosophy/open-source-misses-the-point.html for a discussion on the topic).

      This passage highlights how studying operating systems has become easier thanks to free and open-source software, which is available in source-code form. While both provide access to the code, free software and open-source software are distinct concepts promoted by different communities.

    61. A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. A real-time system functions correctly only if it returns the correct result within its time constraints. Contrast this system with a traditional laptop system where it is desirable (but not mandatory) to respond quickly.

      This passage explains that real-time systems have strict, well-defined timing requirements. The system must process data and respond within set time constraints, or it fails, unlike traditional computers, where fast responses are desirable but not critical. For example, a robot arm must stop on time to avoid damage, illustrating the importance of timing in real-time systems (see the sketch below).
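
      A minimal sketch of a fixed time constraint in C (hypothetical, not from the text; the 10 ms deadline and the do_control_work() stub are invented for illustration). The point is that an overrun is treated as a failure, not a mere slowdown.

        /* deadline.c -- hypothetical sketch of a hard time constraint. */
        #include <stdio.h>
        #include <time.h>

        #define DEADLINE_MS 10 /* assumed constraint, for illustration */

        static void do_control_work(void) { /* placeholder for real work */ }

        int main(void)
        {
            struct timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);
            do_control_work();
            clock_gettime(CLOCK_MONOTONIC, &end);

            long elapsed_ms = (end.tv_sec - start.tv_sec) * 1000L
                            + (end.tv_nsec - start.tv_nsec) / 1000000L;

            if (elapsed_ms > DEADLINE_MS) {
                /* In a hard real-time system this is a system failure. */
                fprintf(stderr, "deadline missed: %ld ms\n", elapsed_ms);
                return 1;
            }
            return 0;
        }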

    62. Embedded systems almost always run real-time operating systems. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs.

      This passage explains that embedded systems typically run real-time operating systems (RTOSs). An RTOS is used when strict timing is required for processing or data flow, such as in control applications. Sensors provide data, and the system must quickly analyze it and adjust controls as needed, as sketched below.
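
      The sense-analyze-adjust cycle described above is commonly structured as a periodic control loop. The sketch below is a hypothetical illustration; read_sensor(), set_actuator(), and the setpoint are invented stand-ins for device-specific details, and a real RTOS would also enforce the loop's timing.

        /* control_loop.c -- hypothetical embedded control-loop sketch. */
        #include <unistd.h>

        static double read_sensor(void)    { return 21.5; } /* stub */
        static void   set_actuator(int on) { (void)on; }    /* stub */

        #define SETPOINT 22.0 /* assumed target value */

        int main(void)
        {
            for (;;) {
                double value = read_sensor();    /* sensors bring in data */
                set_actuator(value < SETPOINT);  /* adjust controls to    */
                                                 /* modify sensor inputs  */
                usleep(100000);                  /* repeat every 100 ms   */
            }
        }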

    63. The use of embedded systems continues to expand. The power of these devices, both as standalone units and as elements of networks and the web, is sure to increase as well. Even now, entire houses can be computerized, so that a central computer—either a general-purpose computer or an embedded system—can control heating and lighting, alarm systems, and even coffee makers. Web access can enable a home owner to tell the house to heat up before she arrives home. Someday, the refrigerator will be able to notify the grocery store when it notices the milk is gone.

      This passage highlights the growing use and potential of embedded systems. They are increasingly powerful, both as standalone devices and as networked components. Examples include smart homes, where a central computer can control heating, lighting, alarms, and appliances, and future possibilities like refrigerators that automatically notify stores when supplies run out.

    64. These embedded systems vary considerably. Some are general-purpose computers, running standard operating systems—such as Linux—with special-purpose applications to implement the functionality. Others are hardware devices with a special-purpose embedded operating system providing just the functionality desired

      This passage notes that embedded systems vary widely. Some are general-purpose computers running standard OSs like Linux with specialized applications, while others use a dedicated embedded operating system that provides only the specific functionality required for that device.

    65. Embedded computers are the most prevalent form of computers in existence. These devices are found everywhere, from car engines and manufacturing robots to optical drives and microwave ovens. They tend to have very specific tasks. The systems they run on are usually primitive, and so the operating systems provide limited features.

      This passage explains that embedded computers are the most common type of computer, found in devices like car engines, robots, and household appliances. They are designed for specific tasks, and their operating systems are typically simple, offering only essential features.

    66. Certainly, there are traditional operating systems within many of the types of cloud infrastructure. Beyond those are the VMMs that manage the virtual machines in which the user processes run. At a higher level, the VMMs themselves are managed by cloud management tools, such as VMware vCloud Director and the open-source Eucalyptus toolset. These tools manage the resources within a given cloud and provide interfaces to the cloud components, making a good argument for considering them a new type of operating system.

      Cloud infrastructure uses traditional OSs and virtual machine monitors (VMMs) to manage virtual machines. Tools like VMware vCloud Director and Eucalyptus manage the VMMs themselves and provide interfaces to the cloud's components, acting as a higher-level OS for cloud environments.

    67. Cloud computing is a type of computing that delivers computing, storage, and even applications as a service across a network. In some ways, it's a logical extension of virtualization, because it uses virtualization as a base for its functionality. For example, the Amazon Elastic Compute Cloud (ec2) facility has thousands of servers, millions of virtual machines, and petabytes of storage available for use by anyone on the Internet.

      This passage explains how cloud computing delivers computing power, storage, and applications as services over a network. It builds on virtualization, allowing resources to be shared efficiently. For example, Amazon EC2 provides thousands of servers, millions of virtual machines, and petabytes of storage that can be accessed by users over the Internet.

    68. Skype is another example of peer-to-peer computing. It allows clients to make voice calls and video calls and to send text messages over the Internet using a technology known as voice over IP (VoIP). Skype uses a hybrid peer-to-peer approach

      This passage describes Skype as an example of peer-to-peer (P2P) computing. It enables voice and video calls, as well as text messaging, over the Internet using voice-over-IP (VoIP) technology. Skype employs a hybrid P2P approach, combining direct peer connections with centralized services for tasks like user authentication.

    69. Peer-to-peer networks gained widespread popularity in the late 1990s with several file-sharing services, such as Napster and Gnutella, that enabled peers to exchange files with one another. The Napster system used an approach similar to the first type described above: a centralized server maintained an index of all files stored on peer nodes in the Napster network, and the actual exchange of files took place between the peer nodes

      This passage describes how peer-to-peer (P2P) networks became popular in the late 1990s through file-sharing services like Napster and Gnutella. Napster used a hybrid approach: a central server kept an index of files, while the actual file transfers occurred directly between peers, combining centralized indexing with distributed file sharing (see the sketch below).
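
      A toy version of that centralized index (hypothetical; the file names and peer names are invented for illustration): the server only maps files to the peers that hold them, and the download itself would then happen directly between peers.

        /* napster_index.c -- hypothetical sketch of a central P2P index. */
        #include <stdio.h>
        #include <string.h>

        struct entry { const char *file; const char *peer; };

        static const struct entry table[] = { /* invented sample data */
            { "song_a.mp3", "peer1.example.net" },
            { "song_b.mp3", "peer2.example.net" },
            { "song_a.mp3", "peer3.example.net" },
        };

        /* Print every peer that advertises the requested file. */
        static void lookup(const char *file)
        {
            size_t n = sizeof table / sizeof table[0];
            for (size_t i = 0; i < n; i++)
                if (strcmp(table[i].file, file) == 0)
                    printf("%s is available from %s\n", file, table[i].peer);
        }

        int main(void)
        {
            lookup("song_a.mp3"); /* the file exchange itself is peer-to-peer */
            return 0;
        }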

    70. Another structure for a distributed system is the peer-to-peer (P2P) system model. In this model, clients and servers are not distinguished from one another. Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over traditional client–server systems. In a client–server system, the server is a bottleneck; but in a peer-to-peer system, services can be provided by several nodes distributed throughout the network.

      This passage explains the peer-to-peer (P2P) model of distributed systems, where all nodes are equal and can act as either client or server. Unlike traditional client–server systems, which can have a server bottleneck, P2P systems distribute services across multiple nodes, improving scalability and reducing single points of failure.

    71. Two operating systems currently dominate mobile computing: Apple iOS and Google Android. iOS was designed to run on Apple iPhone and iPad mobile devices. Android powers smartphones and tablet computers available from many manufacturers. We examine these two mobile operating systems in further detail in Chapter 2.

      This passage notes that the mobile computing market is dominated by two operating systems: Apple iOS, which runs on iPhones and iPads, and Google Android, which powers devices from multiple manufacturers. The text indicates that these two OSs will be explored in more detail in Chapter 2.

    72. To provide access to on-line services, mobile devices typically use either IEEE standard 802.11 wireless or cellular data networks. The memory capacity and processing speed of mobile devices, however, are more limited than those of PCs. Whereas a smartphone or tablet may have 256 GB in storage, it is not uncommon to find 8 TB in storage on a desktop computer. Similarly, because power consumption is such a concern, mobile devices often use processors that are smaller, are slower, and offer fewer processing cores than processors found on traditional desktop and laptop computers.

      This passage explains how mobile devices connect to online services through Wi-Fi (IEEE 802.11) or cellular networks. However, they are more limited than PCs, with less storage and smaller, slower processors that have fewer cores, mainly to conserve power. For example, a smartphone might have 256 GB of storage, while a desktop could have 8 TB.

    73. Today, mobile systems are used not only for e-mail and web browsing but also for playing music and video, reading digital books, taking photos, and recording and editing high-definition video. Accordingly, tremendous growth continues in the wide range of applications that run on such devices. Many developers are now designing applications that take advantage of the unique features of mobile devices, such as global positioning system (GPS) chips, accelerometers, and gyroscopes. An embedded GPS chip allows a mobile device to use satellites to determine its precise location on Earth.

      This passage highlights the expanding capabilities of mobile devices beyond basic tasks like email and web browsing. Modern devices handle media playback, digital books, photography, and high-definition video editing. Developers are creating applications that leverage built-in features like GPS chips, accelerometers, and gyroscopes, enabling location-based services and motion-sensing functionality.

    74. Mobile computing refers to computing on handheld smartphones and tablet computers. These devices share the distinguishing physical features of being portable and lightweight. Historically, compared with desktop and laptop computers, mobile systems gave up screen size, memory capacity, and overall functionality in return for handheld mobile access to services such as e-mail and web browsing. Over the past few years, however, features on mobile devices have become so rich that the distinction in functionality between, say, a consumer laptop and a tablet computer may be difficult to discern. In fact, we might argue that the features of a contemporary mobile device allow it to provide functionality that is either unavailable or impractical on a desktop or laptop computer.

      This passage explains that mobile computing involves handheld devices like smartphones and tablets, which are portable and lightweight. While early mobile devices sacrificed screen size, memory, and functionality, modern devices offer features comparable to, or even exceeding, those of desktops and laptops, making them highly capable for tasks like web browsing, email, and other services.

    75. Traditional time-sharing systems are rare today. The same scheduling technique is still in use on desktop computers, laptops, servers, and even mobile computers, but frequently all the processes are owned by the same user (or a single user and the operating system). User processes, and system processes that provide services to the user, are managed so that each frequently gets a slice of computer time. Consider the windows created while a user is working on a PC, for example, and the fact that they may be performing different tasks at the same time. Even a web browser can be composed of multiple processes, one for each website currently being visited, with time sharing applied to each web browser process.

      This text emphasizes that although traditional multiuser time-sharing systems are now uncommon, the scheduling technique remains prevalent. Contemporary computers (desktops, laptops, servers, and mobile devices) use time sharing to manage the many user and system processes. For instance, a PC can manage several windows performing different tasks at once, and a web browser can run multiple processes, with each process getting a slice of CPU time.

    76. In the latter half of the 20th century, computing resources were relatively scarce. (Before that, they were nonexistent!) For a period of time, systems were either batch or interactive. Batch systems processed jobs in bulk, with predetermined input from files or other data sources. Interactive systems waited for input from users. To optimize the use of the computing resources, multiple users shared time on these systems. These time-sharing systems used a timer and scheduling algorithms to cycle processes rapidly through the CPU, giving each user a share of the resources.

      This passage explains how computing evolved when resources were limited. Early systems were either batch (processing jobs in bulk) or interactive (waiting for user input). Time-sharing systems were introduced to optimize resource use, allowing multiple users to share CPU time through timers and scheduling algorithms, as sketched below.
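
      A tiny round-robin simulation of that idea (hypothetical, not from the text; the quantum and burst lengths are invented): a fixed time slice cycles the CPU through each process so that every one of them regularly gets a share.

        /* round_robin.c -- hypothetical simulation of time slicing. */
        #include <stdio.h>

        #define QUANTUM 2 /* assumed time slice, in ticks */

        int main(void)
        {
            int remaining[] = { 5, 3, 8 }; /* invented CPU bursts */
            int n = 3, done = 0, clock = 0;

            while (done < n) {
                for (int p = 0; p < n; p++) {
                    if (remaining[p] == 0)
                        continue;
                    int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                    clock += run;
                    remaining[p] -= run;
                    printf("t=%2d: process %d ran %d tick(s)\n", clock, p, run);
                    if (remaining[p] == 0)
                        done++;
                }
            }
            return 0;
        }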

    1. Therefore, some states have begun to change tenure laws to adhere to the accountability requirements stipulated by the U.S. Department of Education as it relates to teacher evaluation and student achievement.

      Getting rid of tenure in favor of "merit"-based protections might sound good in theory, and it is something that the current administration is pushing for, but I argue that rewarding teachers with career protections based on "merit" is very subjective and could easily be used by states/districts to discriminate against teachers or favor teachers that fit their vision.

    2. (CCSSO, InTASC Standard #9, 2013).

      The majority of these standards allude to teachers having personal responsibility. I think it is good to be mindful of the huge impact teachers, their attitudes, and their actions have on students; not only in the classroom, but also on a student's self-esteem, future, and overall feelings about learning.

  2. learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com
    1. If the reader is transformed into a “vessel” filled by extracts from an internalized text

      There is a passive relationship between a reader and the author. Learners are the "vessel" for information, waiting to be filled by the opinions of the authors instead of being challenged and wanting to challenge. It critiques the idea of readers as empty containers for someone else's knowledge.

    1. “There has never been a more important time for children to become storytellers, and there have never been so many ways for them to share their stories” (p. 3). Our students and their stories should be an essential part of our teaching. As educators, we need to encourage students to tell their stories and help build community. Each shared story has the potential of teaching us.

      I think it is super important for storytelling to be a part of a child's curriculum. The mind can develop to a great extent through storytelling. It is a part of daily life that I think is often overlooked or taken for granted.

    2. When students’ lives are taken off the margins and placed in the curriculum, they don’t feel the same need to put down someone else” (p. 7). Students need to feel that their voices matter, that they have a story to contribute or share and that their stories are a rich part of the curriculum

      This is true. If we avoid stereotypes, it invites a more comfortable environment for students to share authentic stories. It relieves the pressures and ideas that certain people have to live up to a specific standard or act a certain way.

    3. Students who search their memories for details about an event as they are telling it orally will later find those details easier to capture in writing

      It can be hard to find the right words when telling a story orally. For me personally, I often struggle to find the right words to describe certain things, or often find myself using the wrong words, so writing it out definitely helps me brainstorm different ways I can describe certain details. It also gives me an opportunity to expand on those details to make the story more captivating or interesting.

    4. “there has probably never been a human society in which people did not tell stories”

      This is fascinating to think about. If you think about native traditions, you will find that most, if not all, come from storytelling. A lot of them are from oral tradition/storytelling, so it's definitely interesting to think about how far back storytelling dates.

    1. Overall summary: author thinks the one advantage we have over AI is the originality that humans possess and it is critical that we continue to embrace that instead of becoming more like AI.

    2. Having said that, always remember that artificial intelligence is only an assistant; an executive’s value comes from his or her own intelligence.

      Summary: AI is useful for busywork or simple tasks.

    3. That which diverges from the run-of-the-mill is not only valuable; it is indeed becoming invaluable in the age of AI.

      Summary: Breaking rules and being truly original is the one advantage humans have over AI.

    4. Our priority should be to discover and innovate, not imitate neural networks.

      Summary: The author warns against becoming like AI in the process of creating.

    5. We can use AI for unengaging and repetitive tasks, but we should also remember that humaneness is the key to creativity.

      How would this author define humaneness? AI is technically just regurgitating human work, and it was invented by humans.

    1. The Egyptian empires lasted for nearly 2300 years before being conquered, in succession, by the Assyrians, Persians, and Greeks between about 700 BCE and 332 BCE.

      I find it insane that the Egyptian empires lasted this long! I had always heard the claim that empires fall after 250 years, so 2300 years is absolutely wild!

    2. Farming developed in a number of different parts of the ancient world, before the beginning of recorded history. That means it’s very difficult for historians to describe early agricultural societies in as much detail as we’d like. Also, because there are none of the written records historians typically use to understand the past, we rely to a much greater extent on archaeologists, anthropologists, and other specialists for the data that informs our histories. And because the science supporting these fields has advanced rapidly in recent years, our understanding of this prehistoric period has also changed – sometimes abruptly.

      This surprised me a lot; I thought there would be a decent amount of evidence of early farming, and maybe things that were written down and annotated. It is still wild that there isn't much known about the early stages of farming!

  3. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. Never trust an economist with your job

      I noticed that someone else responded to this one and I wanted to give my opinion on it as well. I really do believe you shouldn't trust someone to be in charge of your job when their job is solely to increase efficiency. For instance, they might want to put policies into place that make for high turnover but make more money for the company.

    2. But quite apart from whether you think capitalism is good or bad, capitalism is something we must study. It’s the economy we live in, the economy we know. And the more ordinary people understand about capitalism

      Capitalism is something we must understand as people of society. I am interested to learn about all the different aspects of it and how it influences our economy.

    3. I also believe that it is ultimately possible to build an alternative economic system guided directly by our desire to improve the human condition, rather than by a hunger for private profit. (Exactly what that alternative system would look like, however, is not at all clear today.) We’ll consider these criticisms of capitalism, and alternative visions, in the last chapters of this book

      How different would the economy be with this alternate system? Would it be better for the people?

    4. Unfortunately, most professional economists don’t think about economics in this common-sense, grass-roots context. To the contrary, they tend to adopt a rather superior attitude in their dealings with the untrained masses. They invoke complicated technical mumbo-jumbo – usually utterly unnecessary to their arguments – to make their case. They claim to know what’s good for the people

      What would it be like if economists did not have such superior attitudes and came from a more "for the people" perspective? How different would the economy be?

    5. Most production of goods and services is undertaken by privately-owned companies, which produce and sell their output in hopes of making a profit. This is called production for profit. 2. Most work in the economy is performed by people who do not own their company or their output, but are hired by someone else to work in return for a money wage or salary. This is called wage labour.

      Stanford tends to focus on how important production is, what companies sell, who they hire, and how much they pay, while Kling focuses on specialization and free markets. Stanford also writes as if his audience is not very educated in economics, while Kling writes as if his audience has economic background knowledge.

    6. economics is inherently a social subject. It’s not just technical forces like technology and productivity that matter. It’s also the interactions and relationships between people that make the economy go around.

      Economics is grounded and isn't what people usually associate it with, as it's the working people that make the economy go around.

    7. Needless to say, this state of affairs was not socially sustainable. Working people and others fought hard for better conditions, a fairer share of the incredible wealth they were producing, and democratic rights. Under this pressure, capitalism evolved, unevenly, toward a more balanced and democratic system. Labour laws established minimum standards; unions won higher wages; governments became more active in regulating the economy and providing public services. But this progress was not “natural” or inevitable; it reflected decades of social struggle and conflict. And that progress could be reversed if and when circumstances changed – such as during times of war or recession. Indeed, the history of capitalism has been dominated by a rollercoaster pattern of boom, followed by bust.

      Early capitalism created harsh conditions that society could not tolerate, so workers banded together to fight for better conditions, fairer wages, and democratic rights. Under this pressure, capitalism evolved into a more balanced system, with labor laws, unions, and greater government regulation providing public services. Capitalism only became more balanced because people forced it to change, not because the system was designed to provide fairness. If balance was achieved through struggle rather than naturally, does that mean capitalism itself is inherently resistant to fairness?

    8. Is our present economy a good economy? In some ways, modern capitalism has done better than any previous arrangement in advancing many of these goals. In other ways, it fails the “good economy” test miserably. The rest of this book will endeavour to explain how the capitalist economy functions, the extent to which it meets (and fails to meet) these goals – and whether or not there are any better ways to do the job.

      It isn't clear whether our present economy is a good economy or not, but it is evident that modern capitalism has made more progress than previous arrangements in advancing the intended goals. The author establishes a balanced evaluation, acknowledging capitalism's past strengths while also looking at its weaknesses. He states that modern capitalism fails the 'good economy' test miserably in some ways; though it is improving, there are still flaws. What will the author show about how to meet these goals and whether there is a better way to do the job?

    9. The Scottish writer Adam Smith is often viewed as the “father” of free-market economics. (This stereotype is not quite accurate; in many ways Smith’s theories are very different from modern-day neoclassical economics.) And his famous Wealth of Nations (published in 1776, the same year as American independence) came to symbolize (like America itself) the dynamism and opportunity of capitalism. Smith identified the productivity gains from large-scale factory production and its more sophisticated division of labour (whereby different workers or groups of workers are assigned to different specialized tasks). To support this new system, he advocated deregulation of markets, the expansion of trade, and policies to protect the profits and property rights of the early capitalists (who Smith celebrated as virtuous innovators and accumulators). He argued that free-market forces (which he called the “invisible hand”) and the pursuit of self-interest would best stimulate innovation and growth. However, his social analysis (building on the Physiocrats) was rooted more in class than in individuals: he favoured policies to undermine the vested interests of rural landlords (who he thought were unproductive) in favour of the more dynamic new class of capitalists.

      “Smith identified the productivity gains from large-scale factory production… division of labour” and “free-market forces… and the pursuit of self-interest would best stimulate innovation and growth.” This shows how Adam Smith laid the groundwork for capitalism and the idea of the “invisible hand,” but his focus was more on class dynamics and supporting productive capitalists than purely individual self-interest.

    10. Why? Because even the short-changed partner is still better off (by one penny) than if they had rejected the offer – and that’s all they care about. So there is no rational reason for the offer to be rejected. In practice, of course, anyone with the gall to propose such a lopsided bargain would face certain rejection. Experiments with real money have shown that splits as lopsided as 75–25 are almost always rejected (even though a partner rejecting that split forgoes a real $2.50 gain). And the most common offer proposed is a 50–50 split. That won’t surprise many people – but it does, strangely, surprise neoclassical economists! In short, the real-world behaviour of humans is not remotely consistent with the assumption of blind, individualistic greed.

      “real-world behaviour of humans is not remotely consistent with the assumption of blind, individualistic greed.” This shows how experiments (like the 50–50 split being most common) challenge neoclassical economic theory, proving people value fairness and social norms over pure self-interest.

    11. Homo sapiens have existed on this planet for approximately 100,000 years. They had an economy all of that time. Humans have always had to work to meet the material needs of their survival (food, clothing, and shelter) – not to mention, when possible, to enjoy the “finer things” in life. Capitalism, in contrast, has existed for around 250 years. If the entire history of Homo sapiens to date was a 24-hour day, then capitalism has existed for three-and-a-half minutes. What we call “the economy” went through many different stages en route to capitalism. (We’ll study more of this economic history in Chapter 3.) Even today, different kinds of economies exist. Some entire countries are non-capitalist. And within capitalist economies, there are important non-capitalist parts (although most capitalist economies are becoming more capitalist as time goes by). I think it’s a pretty safe bet that human beings will eventually find other, better ways to organize work in the future – maybe sooner, maybe later. It’s almost inconceivable that the major features of what we call “capitalism” will exist for the

      Capitalism is only a very recent system compared to the long history of human economies. Note that humans have always worked to meet their needs, but capitalism (about 250 years old) is just one stage among many and will likely be replaced by new ways of organizing work in the future. This helps put capitalism in perspective as temporary, not permanent.

    12. Some jobs link compensation directly to work effort. Piece-work systems, which pay workers for each bit of work they perform, are one example of this approach; so are contract workers (hired to perform a specific task, and paid only when that task is completed). This strategy has limited application, however: usually employers want their workers to be more flexible, performing a range of hard-to-specify functions (rather than simply churning out a certain number of widgets per hour). Even in straightforward jobs, piece-work systems produce notoriously bad quality,

      Referencing back to the taxi industry: Stanford is describing the power struggle of workers. He's going more into depth about it here than when he first mentioned precarious work. He frames it as bargaining power and labor insecurity, whereas Kling calls it market disruption by the government. Either way, there is no balance in the industry, no "happy medium" between the workers and the establishments that already exist.

    1. The idea in the West is to make a product which will sell well.

      It is incredibly interesting to see how society influences storytelling. With each translation, the story of Aschenputtel/Cinderella was altered to fit the culture and society. Though I knew there was a distinction between the American and original German depictions of the same story, I never thought about why and how this came to be. After reflection, I realize that the American versions unconsciously or consciously sell the American dream. They sell the idea that success and a better life are possible for anyone regardless of where they start off. The American dream relies on the concept that America is a country of endless opportunities where social and class mobility is possible. The story of Cinderella portrays just that, of course with a bit of a romantic touch: it follows a girl who was poor and miserable but ended up with the prince, jumping social classes.

    2. In capitalist countries, the changes are made by writers and moviemakers not so much for ideological reasons as for financial ones.

      Although I do agree that profit has become a major motivation in modern retellings of stories, I still think the changes made by creators have just as much ideological weight as financial weight. Writers and filmmakers in capitalist societies may not set out to push ideology, but they are inevitably influenced by capitalist logic. Especially when working within major corporate institutions like Disney or Pixar, creatives are constantly surrounded by capitalistic values such as competition and upward mobility that they internalize to a certain extent and (consciously or subconsciously) reproduce in stories. Disney's Cinderella, for instance, promotes consumerist values through the use of the magical dress, carriage, and palace that reinforce the idea that wealth and beauty equal happiness. - Janu Kandalu, German 2254.02

    1. Temperature also affected the behavioural preferences of the infauna associated with mussels. Polychaetes, crustaceans, and molluscs altered their behaviour to colonise the habitat created by one species of mussel to another. This altered behavioural preference of infauna can be driven by habitat-specific cues and the ability of infauna to make habitat choices

      The authors talked about some behavioral changes in the infauna associated with the mussels. Would the behavioral changes have a positive or negative effect on them or on other species in their environment?

    2. After the 4-week acclimation period, the mussels were defaunated by carefully removing all infauna and separating adult mussels (>1 cm) into 10-cm-diameter clumps (Cole, 2010).

      Would we have seen different results if the acclimation period had been longer for the mussels? Or would a longer or shorter acclimation period not really affect the mussels or the results too much?

    3. The outdoor experiment was performed in a purpose-built facility (Pereira et al., 2019) at the Sydney Institute of Marine Science (SIMS), Chowder Bay, Sydney Harbour, New South Wales, Australia. The experiment was performed during the summer peak recruitment period of marine invertebrates in Sydney Harbour.

      Would the researchers get similar or the same results if they did not perform the experiment during the peak recruitment period? How different would the results be if the experiment were performed during the low recruitment period?

    4. Previous studies have shown that the loss of a biogenic habitat in an ecosystem can be functionally replaced (or the loss of function is slowed to some extent) by another habitat-forming organism (Nagelkerken et al., 2016; Sunday et al., 2017).

      What would happen if another habitat-forming organism were introduced to the area? Would it benefit the overall ecology of the area, or would it prove detrimental to the organisms that already exist there? Would it be ethical to do this in order to prevent the replacement of a habitat?

    5. For example, under acidification, fleshy seaweeds outcompete calcareous species

      How would this potential change impact the organisms that rely on the calcareous species for food or protection?

    6. Molluscs actively chose to colonise T. hirsuta and actively avoided M. galloprovincialis, regardless of warming or pCO2 levels (Table 1).

      What caused molluscs to choose to colonize T. hirsuta regardless of warming or pCO2 levels? What deterred them from colonizing M. galloprovincialis?

    7. The native mussel T. hirsuta grew more under warming (Fig. 1; ANOVA Species × Temperature F1,32 = 6.13, P < 0.05; Supplementary Table 2). In contrast, M. galloprovincialis grew the same at ambient and elevated temperatures (Fig. 1; Supplementary Table 2). There was no effect of elevated pCO2 on growth in either of the mussel species (ANOVA CO2 F1,32 = 0.53, P > 0.05; Supplementary Table 2).

      The authors present an interesting point here. The research suggests that temperature is the primary driver of the difference in growth between the native T. hirsuta and M. galloprovincialis. Based on these findings, would the results be consistent in another shellfish species with the same temperature tolerance and sensitivity to carbon dioxide?

  4. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. Instead of headlines, the “crawl” on the TV lists all of the tasks and people needed to produce your breakfast. Your cereal was manufactured in a factory that had a variety of workers and many machines. People had to manage the factory. Organization of the firm required many functions in finance and administration. First, however, people had to build the factory

      Kling speaks about the amount of work and tools that go into an item that looks simple to make. He says, "We carry on our lives not really conscious of the complexity of that specialization." The steps and processes line up to produce just a bowl of cereal. What would happen if some step falls out of line or breaks down? Will it slow down the production process? And what if, over time, there are fewer workers to keep production moving?

    2. When patterns of specialization become unsustainable, the individuals affected can face periods of unemployment. They are like soldiers waiting for new orders, except that the orders come not from a commanding general but from the decentralized actions of many entrepreneurs testing ideas in search of profit.

      When old job patterns no longer make sense, workers are like soldiers waiting for new orders, except instead of commanding officers they're getting them from entrepreneurs experimenting. That means their livelihood is in the hands of people who are just testing things, with no certainty that a new role will be created. It raises the question of whether this decentralized adjustment makes unemployment longer or less predictable.

    3. Look at the list of ingredients in the cereal. Those ingredients had to be refined and shipped to the cereal manufacturer. Again, those processes required many machines, which in turn had to be manufactured. The cereal grains and other ingredients had to be grown, harvested, and processed. Machines were involved in those processes, and those machines had to be manufactured.

      Machines are such a big part of the industry. They are used for transportation and for manufacturing products, products such as cereal, which comes from a chain of production. Every ingredient had to be grown, processed, and transported using machines, and the machines themselves had to be designed and built to keep the industry running. If machines weren't involved at all, would the industry be able to stay afloat?

    4. “Macroeconomics and Misgivings” argues that it is a misconception, albeit one that is well entrenched in the minds of both professional economists and the general public, to think of the economy as an engine with spending as its gas pedal.

      “misconception…to think of the economy as an engine with spending as its gas pedal.” This emphasizes the author’s critique of oversimplified macroeconomic models that treat the economy like a machine, ignoring complexity and human behavior.

    5. “Finance and Fluctuations” deals with the misconceptions about finance that are common among economists, who often fail to appreciate the process of financial intermediation. This section looks at the special role played by financial intermediaries in enabling specialization. Intermediation is particularly dependent on trust, and as that trust ebbs and flows, the financial sector can amplify fluctuations in the economy’s ability to create patterns of sustainable specialization and trade.

      “financial intermediaries…enable specialization” and “as that trust ebbs and flows, the financial sector can amplify fluctuations.” This shows the author’s point that finance is crucial for specialization but is sensitive to trust, which can magnify economic ups and downs.

    6. “Specialization and Sustainability” exposes the misconception that we must undertake extraordinary efforts in order to conserve specific resources. This section explains how the price system guides the economy toward sustainable use of resources. In contrast, individuals who attempt to override the price system through their individual choices or by imposing government regulations can easily miscalculate the costs of their actions.

      “the price system guides the economy toward sustainable use of resources” and “individuals who attempt to override the price system…can easily miscalculate the costs.” This emphasizes that the author argues the price system naturally encourages sustainability, while personal or government interference can backfire.

    7. “Machine as Metaphor” attacks the misconception held by many economists and embodied in many textbooks that the economy can be analyzed like a machine. This section looks at a widely used but misguided approach to economic analysis, treating it as if it were engineering. The economic engineers are stuck in a mindset that grew out of the Second World War, a conflict that was dominated by airplanes, tanks, and other machines. Their approach fails to take account of the many nonmechanistic aspects of the economy.

      “attacks the misconception…that the economy can be analyzed like a machine” and “fails to take account of the many nonmechanistic aspects of the economy.” This shows the author’s critique of treating economics purely like engineering, emphasizing that human behavior and social factors make the economy more complex than a machine.

    8. He knows that his breakfast depends upon workers on the coffee plantations of Brazil, the citrus groves of Florida, the sugar fields of Cuba, the wheat farms of the Dakotas, the dairies of New York; that it has been assembled by ships, railroads, and trucks, has been cooked with coal from Pennsylvania in utensils made of aluminum, china, steel, and glass.

      “He knows that his breakfast depends upon workers on the coffee plantations…utensils made of aluminum, china, steel, and glass.” This emphasizes the global interconnection of labor and resources—showing how everyday items rely on a complex, international network of production and trade.

    9. How much commerce and navigation in particular, how many ship-builders, sailors, sail-makers, rope-makers, must have been employed in order to bring together the different drugs made use of by the dyer, which often come from the remotest corners of the world! What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen!

      “how many ship-builders, sailors, sail-makers, rope-makers, must have been employed…What a variety of labour too is necessary in order to produce the tools of the meanest of those workmen!” Note that this illustrates the vast network of specialized labor required even for basic production, showing the complexity and interdependence of economies.

    10. The woollen coat, for example, which covers the day-labourer, as coarse and rough as it may appear, is the produce of the joint labour of a great multitude of workmen.

      “the produce of the joint labour of a great multitude of workmen.” Note that even simple goods rely on the coordinated work of many people, emphasizing the importance of specialization and trade in everyday life.

    11. The roundabout process (or high capital intensity) creates a gap of time between the initial steps in the production process and the final sale of goods and services. During that time gap, workers involved in the early stages of the production process must receive income before consumers have made purchases. (Think of the producer of farm equipment, which must receive payment from a farmer before the farmer can use the equipment to harvest a crop.) That precondition requires financial intermediation. As the economy becomes more specialized and the production becomes more roundabout, the financial sector takes on more significance.

      “As the economy becomes more specialized and the production becomes more roundabout, the financial sector takes on more significance.” Note that higher capital intensity and longer production processes increase the need for financial systems to support early-stage workers and investments.

    12. The steel must be transported, which may require a railroad or a ship for transportation. And so on. Most of the people whose work enables the farmer to harvest wheat have no idea that they are part of the wheat production process. The Austrian school of economics would describe this multistep production process as very roundabout.

      “Most of the people whose work enables the farmer to harvest wheat have no idea that they are part of the wheat production process.” Note that complex production involves many unseen contributors, illustrating the concept of “roundabout” production in the Austrian school.

    13. An increase in capital intensity accompanies an increase in specialization. Think of capital as tools that are used to produce things. Farm equipment helps produce food. Manufacturing plants help build farm equipment. Steel and concrete production facilities help build manufacturing plants. Workers with powerful tools are more productive. It is easier to excavate a foundation with a bulldozer than with a spoon.

      “Workers with powerful tools are more productive. It is easier to excavate a foundation with a bulldozer than with a spoon.” Note that more specialized work requires better tools (capital), which increases efficiency and output.

    14. Improvements in transportation accompany specialization. The farther that you can cheaply transport goods, the more specialization you will see. Before the advent of the railroad, water transport was relatively efficient, so that specialization tended to be most extensive near good harbors and navigable rivers. Improvements in transportation have connected the world’s regions more closely, promoting greater specialization

      “Improvements in transportation have connected the world’s regions more closely, promoting greater specialization.” Note that better transport enables wider trade networks, which increases economic efficiency and interdependence.

    15. Trade accompanies specialization. The more you specialize, the more you need to trade to obtain what you want. In a society where people specialize, you will find them exchanging goods and services.

      “The more you specialize, the more you need to trade to obtain what you want”. Note that this emphasizes the link between specialization and trade—economic interdependence grows as individuals focus on specific tasks.

    16. If Cheryl’s bank no longer needed a mortgage payment processing system, her value would be reduced. If her bank went completely out of business, her value would be reduced more. If the mortgage servicing industry consolidated, using fewer systems, her value would be reduced more still. And if computers suddenly became much more expensive and banks went back to using mechanical calculators, her value would be reduced still more. That last hypothetical is extreme, but the point is that specialization is subtle, deep, and highly dependent on context.

      “specialization is subtle, deep, and highly dependent on context” and the examples before it. Note that this shows how the value of specialized skills depends on the broader economic and technological environment—changes in industry or technology can increase or decrease the importance of a person’s work.

    17. The machines were made out of materials that had to be mined and transported. That transportation required many other people and machines. The transportation equipment itself had to be manufactured, which required mining and shipping materials to the place where the transportation equipment was manufactured

      “materials that had to be mined and transported” and “transportation equipment itself had to be manufactured.” Note that this emphasizes the interconnectedness of production—how even simple goods rely on a vast network of labor, materials, and technology.

    18. Picture yourself watching news on cable television while eating a bowl of cereal. However, instead of giving you the news, the TV announcer asks you to consider what you would need to do to make your cereal completely from scratch. You would need to grow the cereal grains yourself. If you use tools to harvest the grain, you would have to make those tools yourself

      “what you would need to do to make your cereal completely from scratch” and “If you use tools…you would have to make those tools yourself.” Note that the passage illustrates how modern life relies on complex production processes and specialized skills, showing how dependent we are on the broader economy.

    19. Even more striking is the fact that almost everything you consume is something you could not possibly produce. Your daily life depends on the cooperation of hundreds of millions of other people. Just as it is inconceivable that human society would have evolved to its present state without language, it is inconceivable that we would have gotten to this point without specialization and trade. Moreover, in order for society to progress further, patterns of specialization and trade must continue to evolve.

      “almost everything you consume is something you could not possibly produce” and “human society…without specialization and trade”. Note that the author emphasizes the essential role of cooperation, trade, and specialization in supporting daily life and societal progress.

    20. always asks, “How do you know that?” The MIT approach suppresses that question and instead presumes that economic researchers and policymakers are capable of obtaining knowledge that in reality is beyond their grasp.[2] That is particularly the case in the field known as macroeconomics, whose practitioners claim to know how to manage the overall levels of output and employment in the economy.

      “The MIT approach suppresses that question…” and “macroeconomics… claim to know how to manage the overall levels of output and employment.” Note that the author is criticizing the overconfidence of economists, especially in macroeconomics, and how MIT-style training discourages healthy skepticism about what can truly be known or controlled.

    21. Early in 2015, I came across a volume of essays edited by E. Roy Weintraub titled MIT and the Transformation of American Economics.[1] After digesting the essays, I thought to myself, “So that’s how it all went wrong.” Let me hasten to mention that my own doctorate in economics, which I obtained in 1980, comes from MIT. Also, the writers of Weintraub’s book are generally laudatory toward MIT and its influence. Yet I have come to believe in the wake of the MIT transformation, which began soon after World War II, that economists have lost the art of critical thinking. The critical thinker

      “I have come to believe… that economists have lost the art of critical thinking.” This emphasizes the author’s critique of modern economics, particularly how MIT’s influence after WWII shifted the field toward less critical, more formulaic thinking, signaling a departure from questioning underlying assumptions.

    1. All work turned in must adhere to the following format.

      I appreciate the example of the format we are supposed to use. This gives us clear expectations of what you want and can be used all year long.

    2. All assignments for this course must be written and submitted directly in Google Docs

      As a Google Docs lover I am so pumped for this! Most of my other classes have to be submitted through something else and this will be so helpful for me throughout the class.

    3. It places too high of a burden on me to investigate and evaluate possible AI usage instead of focusing on the important educational aspects of the course.

      As a future educator, I find this extremely truthful. It can be so hard to detect and investigate the use of AI because it can look so authentic.

    1. The speculative bubble created by railroad financing burst in the Panic of 1873, which began a period called the Long Depression that lasted until nearly the end of the century and was so bad that before the Great Depression of the 1930s the period was known simply as “The Depression”.

      The fact that a few things that might've looked inconsequential, or not that big of a deal, all worked together to cause one of the most devastating depressions in U.S. history is striking. It makes me wonder if there was anything they could've done to avoid it.

    2. Nearly 100 Americans died in “The Great Upheaval.” Workers destroyed nearly $40 million worth of property. The strike galvanized the country. It convinced laborers of the need for institutionalized unions, persuaded businesses of the need for even greater political influence and government aid, and foretold a half century of labor conflict in the United States.

      It's striking that workers had to resort to such drastic measures just to get a voice in what they're paid, or even reduced work hours. They destroyed nearly $40 million (about $1,174,720,000 today) worth of property, and there were many casualties. It makes me thankful that we have the unions we have today, but it also makes me wonder what would happen if something like this happened in modern times. Would it be as catastrophic, or would the government avoid all of it by complying?

    3. Strikes challenged American industry throughout the late nineteenth and early twentieth centuries. Workers seeking higher wages, shorter hours, and safer working conditions had struck throughout the antebellum era, but organized unions were fleeting and transitory. The Civil War and Reconstruction seemed to briefly distract the nation from the plight of labor, but the failure of the Great Railroad Strike of 1877 convinced workers of the need to organize. Union memberships began to climb. The Knights of Labor enjoyed considerable success in the early 1880s, due in part to its efforts to unite skilled and unskilled workers. The Knights welcomed all laborers, including women (they only barred lawyers, bankers, and liquor dealers). By 1886, the Knights had over seven hundred thousand members. The Knights envisioned a cooperative producer-centered society that rewarded labor, not capital, but, despite their sweeping vision, the Knights focused on practical gains that could be won through the organization of workers into local unions.

      It's amazing how long the strikes continued. It gives good insight into how long unions have been around.

    1. not only can such freedom be granted without prejudice to the public peace, but also, that without such freedom, piety cannot flourish nor the public peace be secure.

      Holland as an example of free speech

    2. How many evils spring from luxury, envy, avarice, drunkenness, and the like, yet these are tolerated

      some things are tolerated now because laws against them cannot be enforced... more evils would come of preventing speech

    3. men would daily be thinking one thing and saying another”—a practice that will weave deceit and hypocrisy into the social fabric, thereby permitting “the avaricious, the flatterers, and other numskulls” to rise to the top.

      only puts the unfit in power

    4. Unlike many earlier defenders of toleration, he did not exclude atheists, Jews, Catholics, and the like.

      so long as your conduct is good, you may believe whatever

    5. The sovereign’s obligation to respect the liberty of his subjects is solely a matter of self-interest; to mistreat subjects is bound to generate resentment and possibly seditious tendencies, and those sentiments, in turn, will render the sovereign’s authority less secure than it would otherwise be

      mistreating subjects will make them less likely to trust you and thus give you less power?

    1. I mean the pace of the finished film, how the edits speed up or slow down to serve the story, producing a kind of rhythm to the edit.

      This video allows me to connect to the overall rhythm of each shot; some are sped up and others are longer. This helps me understand what rhythm means in a film.

    2. Other ways cinema manipulates time include sequences like flashbacks and flashforwards. Filmmakers use these when they want to show events from a character’s past, or foreshadow what’s coming in the future.

      I've seen this in a lot of films where they will add the end of the movie at the beginning and then we watch how the story plays out. For example, Fight Club demonstrates a flashforward.

    3. The most obvious example of this is the ellipsis, an edit that slices out time or events we don’t need to see to follow the story. Imagine a scene where a car pulls up in front of a house, then cuts to a woman at the door ringing the doorbell. We don’t need to spend the screen time watching her shut off the car, climb out, shut and lock the door, and walk all the way up to the house.

      I think this saves the director time and the audience's attention. For another example, a person in the film might be eating food and then we cut to her washing the dishes or to another scene; we don't need to waste time watching that person eat.

    4. He wants you to feel the terror of those peasants being massacred by the troops, even if you don’t completely understand the geography or linear sequence of events. That’s the power of the montage as Eisenstein used it: A collage of moving images designed to create an emotional effect rather than a logical narrative sequence.

      I think this video conveys the emotions much more than it explains the logic behind them.

    5. The audience was projecting their own emotion and meaning onto the actor’s expression because of the juxtaposition of the other images. This phenomenon – how we derive more meaning from the juxtaposition of two shots than from any single shot in isolation – became known as The Kuleshov Effect.

      I can see what the director was trying to get across to the audience; you can see the emotions of the actor in each cut.

    6. film editing and how it worked on an audience. He had a hunch that the power of cinema was not found in any one shot, but in the juxtaposition of shots. So, he performed an experiment. He cut together a short film and showed it to audiences in 1918. Here’s the film:

      This is interesting because technological advancements have created films just like this, and the dynamics and editing skills are so much clearer and more advanced now than back then.

    7. but it is the juxtaposition of that word (or shot) in a sentence (or scene) that gives it its full power to communicate. As such, editing is fundamental to how cinema communicates with an audience.

      I do think that grammar and the editing of words into the film allow the director to connect with the audience.

    8. The filmmakers behind Deadpool (2016), for example, shot 555 hours of raw footage for a final film of just 108 minutes. That’s a shooting ratio of 308:1. It would take 40 hours a week for 14 weeks just to watch all of the raw footage, much less select and arrange it all into an edited film![2]

      That's a lot of retakes, and 555 hours of footage seems a bit overwhelming. I don't think I would have the patience to look over the footage at 40 hours a week for 14 weeks. That is a huge commitment from the filmmakers.
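
      The quoted math is easy to verify; a quick sanity check in Python, using only the numbers from the quote above:

      ```python
      # Verifying the quoted Deadpool shooting-ratio arithmetic.
      raw_minutes = 555 * 60              # 555 hours of raw footage, in minutes
      final_minutes = 108                 # length of the finished film
      print(raw_minutes / final_minutes)  # ~308.3, i.e. the quoted 308:1 ratio
      print(555 / 40)                     # ~13.9 weeks to watch it all at 40 h/week
      ```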

    9. When the screenwriter hands the script off to the director, it is no longer a literary document, it’s a blueprint for a much larger, more complex creation. The production process is essentially an act of translation, taking all of those words on the page and turning them into shots, scenes and sequences.

      I never knew that once you hand over a script to the director it becomes a blueprint. I also never knew this process of turning a script into shots was called an act of translation.

    1. While navigating through the text, you’ll notice that the major part of the text you’re working within is identified at the top of the page

      This will be helpful for saving time when finding the correct section I am working through.

    1. Define important concepts such as: authority, peer review, bias, point of view, editorial process, purpose, audience, information privilege and more.

      This is really useful, mainly because a lot of the time when a professor asks me to find a peer-reviewed article I struggle to find a good one, so I can really use the help.

    1. The special effects make-up for the gory bits of your favorite horror films can sometimes take center stage.

      The special effects create better scenes in films like horror movies, which can create a better experience for the audience as well.

    1. The dataset was normalized to 10000 counts per cell, Log1p transformed and filtered to contain 2000 highly variable genes. The first important observation is that state-of-the-art approaches, except CPM

      Does marker‑gene expression change monotonically along the CPM geodesic from root to leaf?
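
      The quoted preprocessing is a common single-cell recipe. A minimal sketch of those three steps, assuming the Scanpy toolkit and a stand-in dataset (the paper's actual code may differ; only the parameter values come from the quote):

      ```python
      # Hypothetical reproduction of the quoted preprocessing with Scanpy.
      import scanpy as sc

      adata = sc.datasets.pbmc3k()                  # stand-in dataset, not the paper's
      sc.pp.normalize_total(adata, target_sum=1e4)  # normalize to 10,000 counts per cell
      sc.pp.log1p(adata)                            # Log1p transform
      sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)  # keep 2,000 HVGs
      ```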

    1. you will not benefit fully from this class.

      again, this defeats the purpose of paying for education. if you are going to rely on AI rather than prioritizing learning, what is the point of school and learning environments?

    2. drought and global warming

      many who consider themselves to be environmental advocates (knowingly and unknowingly) partake in harmful activities in the name of convenience

    3. You can and should be building knowledge, thinking, and reasoning

      analytical skills are crucial outside the school environment, and should be worked on while at school

    1. You observed that for ambiguous cases or high-levels of missing data, the model tended to predict the PUR population, suggesting it acts as a "default". Since PUR is an admixed population, does this imply the model learns that a state of high uncertainty or mixed/missing signals is most characteristic of admixed genomes in the training set? Could this "default" behavior be mitigated by training with a null or "uncertain" class?
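
      On the closing suggestion, one lightweight way to approximate a null or "uncertain" class without retraining is a rejection threshold on prediction confidence. A minimal sketch, assuming a generic scikit-learn-style classifier (the model, features, and 0.5 cutoff are all illustrative, not from the paper):

      ```python
      # Hypothetical sketch: route low-confidence population calls to an
      # explicit "uncertain" label instead of letting one class absorb them.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))    # stand-in for genotype-derived features
      y = rng.integers(0, 5, size=300)  # stand-in for 5 population labels

      clf = LogisticRegression(max_iter=1000).fit(X, y)

      probs = clf.predict_proba(X)
      conf = probs.max(axis=1)
      pred = probs.argmax(axis=1).astype(object)
      pred[conf < 0.5] = "uncertain"    # ambiguous cases get an explicit label
      ```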

    1. most K–12 teachers and higher education instructors receive more training in their content area than on the processes of teaching and learning

      I wonder how much that has shifted in recent years. Ask anyone who has been in education for a number of years and they will tell you that it's much different in terms of classroom management and attention span than it was 30 years ago.

    1. The physically unequal mother in all cultures typically breast-feeds and protects, rather than bullies or browbeats, the vulnerable infant and child. The powerful mother nurtures so as to give life and create growth in the weak. She does not impose so as to inscribe her will

      Babies are vulnerable??? There are many ways mothers use that nurturing to control the child. Motherhood isn't all just nurturing.

    2. Girls and women saw the world as made up not of separated, self-seeking individuals, but of interrelationships, connections webbing everyone together in communities of concern; they made moral decisions not through abstract reasoning from rules but by balancing the infinitesimal and acute needs of everybody concerned (25-63)

      Well yes! that is how things should be, no? Please don't indoctrinate us with American individualism...

    1. [quoted passage garbled in extraction]

      I disagree. People could participate in activities such as tournaments, music, and pageants (which Huizinga classifies as play) without necessarily doing it voluntarily. They may have been forced by parents or forced to participate out of a sense of responsibility, and while they are participating in these forms of play, it's not always voluntary and it's not always out of joy. Yet, it's still play.

    2. ှ ှ Yှ =ှ Zှ =$ှ =   ှ  ှ IF ?ှ =$ှ IŊ$=I ̈  ှ  ှ Ɨc= ?ှ =$ှ   ှ ೩ ှ ̈# ?ှ   ?ှ ̈ #I  ှ  ှ =#I  ̈  Ĉှ

      It seems that the author classifies merrymaking as forms of play, including masquerades as play. I'm interested to see how other authors write on similar topics as we read more literature and are exposed to more opinions.

    3. [quoted passage garbled in extraction]

      Just a thought, but earlier in the foreword, Huizinga mentioned how they had to fill in the gaps of their knowledge themself and that the reader should not expect documentation of every word. I wonder how much of what the author says is in consensus with other historians and how much of what the author says is their own thoughts.

    4. [quoted passage garbled in extraction]

      Essentially, play is a distinct concept from actions such as laughing or joking. While these actions may be part of play, play itself stands separate from these ideas.

    5. [quoted passage garbled in extraction]

      I'm a little confused by the author's definitions. Earlier, the author stated that concepts like justice, good, truth, beauty, and seriousness can be denied while play is undeniable. Now, the author is stating that play can be serious. How can a concept like "seriousness" be denied but also be used to describe an irrefutable concept?

    6. [quoted passage garbled in extraction]

      Under these definitions, would a prayer be considered as play? It involves imagination and problem-solving with an end goal to ensure well-being. Where do we draw the line for what is considered play?

    7. [quoted passage garbled in extraction]

      I find it interesting how the author situates play as the foundation of civilization. I never considered that play is involved in language. I feel that the author is classifying anything involving imagination or problem-solving as "play" (language, myths, stories, etc). Where is the line drawn for what is play?

    1. An emergency need arose for someone to write 300 words o

      Something I'm thinking about is a restaurant parallel: "junk words" versus "fine dining words". Chat/LLM output reads to me as fluff, filler, words that come out just for convenience. Shakespeare, Mary Oliver, etc. are the words that pack a punch. And that makes me think about art's irreplaceability: once someone creates authentic art, it loses its value if it's replicated, even down to the specifics.

    2. That's because the appetite for "content" is at least as much about creating new targets for advertising revenue as it is actual sustenance for human audience

      Valid; I've seen that content creators are gravitating more toward cleverly promoting to an audience.

    1. Thoughtful questions

      These questions are a very helpful guide to better understanding the text you are reading; as a reader, I love to annotate the books I'm reading, and these questions are usually ones I ask myself when I read.

    1. The injured ankle should be positioned and supported in the maximum dorsiflexion allowed by pain and effusion. Maximal dorsiflexion places the joint in its close-packed position or position of greatest congruency, allowing for the least capsular distention and resultant joint effusion. With ankle sprains, this position approximates the torn ligament ends in grade III injuries to reduce the amount of gap scarring and tension in grade I and II injured ligaments.

      place ankle in max DF -- CPP allows for max congruency + approximates ligaments in sprain (grade III)