    1. Another example is QNX, a real-time operating system for embedded systems. The QNX Neutrino microkernel provides services for message passing and process scheduling. It also handles low-level network communication and hardware interrupts. All other services in QNX are provided by standard processes that run outside the kernel in user mode.

      QNX demonstrates the microkernel design in the context of real-time and embedded systems. Its Neutrino microkernel manages essential functions, including message passing, process scheduling, network communication, and hardware interrupts, while all other services run as separate user-mode processes. This separation enhances reliability and simplifies system maintenance and updates.

    2. Perhaps the best-known illustration of a microkernel operating system is Darwin, the kernel component of the macOS and iOS operating systems. Darwin, in fact, consists of two kernels, one of which is the Mach microkernel. We will cover the macOS and iOS systems in further detail in Section 2.8.5.1.

      Darwin serves as an important example of a microkernel-based operating system. It forms the kernel foundation for macOS and iOS, incorporating the Mach microkernel as one of its core components. This highlights how modern operating systems can apply microkernel principles while supporting complex, feature-rich environments.

    3. One benefit of the microkernel approach is that it makes extending the operating system easier. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services are running as user—rather than kernel—processes. If a service fails, the rest of the operating system remains untouched.

      The microkernel design offers several advantages: it simplifies extending the operating system, since new services can be added in user space without changing the kernel. The smaller the kernel, the easier it is to modify and port across hardware platforms. Additionally, because most services run in user space, the system gains improved security and reliability: if a service crashes, it does not affect the rest of the operating system.

    4. The main function of the microkernel is to provide communication between the client program and the various services that are also running in user space. Communication is provided through message passing, which was described in Section 2.3.3.5. For example, if the client program wishes to access a file, it must interact with the file server. The client program and service never interact directly. Rather, they communicate indirectly by exchanging messages with the microkernel.

      In a microkernel system, the kernel’s primary role is to act as a communication hub between user-space programs and services. Instead of direct interaction, clients and services exchange messages via the microkernel. For instance, a program requesting file access communicates with the file server through the kernel, ensuring controlled, indirect interaction and maintaining the modular structure of the system.
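
      A user-space analogue of this indirect exchange can be sketched with POSIX message queues on Linux. This only mimics the pattern, since a real microkernel passes messages inside the kernel itself; the queue name /fileserver and the request text are invented for the example (build with gcc -lrt).

        /* Sketch: a client hands a request to a "file server" through a
           message queue; the two processes never interact directly. */
        #include <fcntl.h>
        #include <mqueue.h>
        #include <stdio.h>
        #include <string.h>

        int main(void) {
            mqd_t mq = mq_open("/fileserver", O_CREAT | O_WRONLY, 0600, NULL);
            if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

            const char *request = "READ /tmp/example.txt";
            /* The queue buffers the message; a server process would pick it
               up later with mq_receive(). */
            if (mq_send(mq, request, strlen(request) + 1, 0) == -1)
                perror("mq_send");
            mq_close(mq);
            return 0;
        }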

    5. We have already seen that the original UNIX system had a monolithic structure. As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as user-level programs that reside in separate address spaces. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility. Figure 2.15 illustrates the architecture of a typical microkernel.

      The microkernel approach emerged to address the complexity of large monolithic kernels such as UNIX. By moving nonessential services out of the kernel into user-space programs, the kernel becomes smaller and easier to manage. Microkernels usually handle only the core tasks, such as process and memory management and interprocess communication, while other services run separately, improving modularity and maintainability.

    6. Layered systems have been successfully used in computer networks (such as TCP/IP) and web applications. Nevertheless, relatively few operating systems use a pure layered approach. One reason involves the challenges of appropriately defining the functionality of each layer. In addition, the overall performance of such systems is poor due to the overhead of requiring a user program to traverse through multiple layers to obtain an operating-system service. Some layering is common in contemporary operating systems, however. Generally, these systems have fewer layers with more functionality, providing most of the advantages of modularized code while avoiding the problems of layer definition and interaction.

      While the layered approach offers clarity and modularity, it is rarely used in its pure form in operating systems. Defining precise responsibilities for each layer is difficult, and performance can suffer because service requests must pass through multiple layers. Modern systems often use a compromise: fewer, broader layers that retain modular benefits while reducing overhead and complexity.

    7. Each layer is implemented only with operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.

      Each layer acts like a “black box,” using services from lower layers without needing to know their internal workings. This abstraction hides implementation details, so higher layers focus only on what operations do, not how they are carried out, which simplifies design and enhances modularity.

    8. The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.

      The layered approach makes building and debugging an operating system much easier. Every layer depends solely on the ones beneath it, allowing developers to test one layer at a time. Once the lower layers are verified to function properly, any error detected while debugging a higher layer must lie in that layer itself, which simplifies identifying and resolving issues and enhances the overall reliability of the system.

    9. An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer—say, layer M—consists of data structures and a set of functions that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.

      An operating-system layer functions as a fundamental building block, integrating data with the operations that manipulate that data. Each layer (such as layer M) offers services to the layers above it while relying on the layers beneath it for foundational operations. This structured method aids system organization, enhancing comprehensibility, maintainability, and adaptability.
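
      One way to picture "data plus operations" concretely is an interface of function pointers in C: higher layers call the operations without seeing the data behind them. This is only an illustrative sketch; all names (block_layer, fs_read_inode) and the 16-inodes-per-block figure are hypothetical.

        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        /* Operations that layer M exports to the layers above it. */
        struct block_layer {
            int (*read_block)(size_t block_no, void *buf);
            int (*write_block)(size_t block_no, const void *buf);
        };

        static char fake_disk[64][512];   /* hidden data structure */

        static int ram_read(size_t n, void *buf) { memcpy(buf, fake_disk[n], 512); return 0; }
        static int ram_write(size_t n, const void *buf) { memcpy(fake_disk[n], buf, 512); return 0; }

        /* A higher layer built only on the operations of the layer below;
           it knows what read_block() does, not how it is implemented. */
        static int fs_read_inode(const struct block_layer *disk, size_t ino, void *out) {
            return disk->read_block(ino / 16, out);   /* say, 16 inodes per block */
        }

        int main(void) {
            struct block_layer disk = { ram_read, ram_write };
            char block[512] = "dummy inode data";
            disk.write_block(2, block);        /* lower-level operation */
            char buf[512];
            fs_read_inode(&disk, 42, buf);     /* 42 / 16 == block 2 */
            printf("inode 42's block holds: %s\n", buf);
            return 0;
        }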

    10. The monolithic approach is often known as a tightly coupled system because changes to one part of the system can have wide-ranging effects on other parts. Alternatively, we could design a loosely coupled system. Such a system is divided into separate, smaller components that have specific and limited functionality. All these components together comprise the kernel. The advantage of this modular approach is that changes in one component affect only that component, and no others, allowing system implementers more freedom in creating and changing the inner workings of the system.

      The monolithic approach is called tightly coupled because a change in one part of the system can ripple through many others. A loosely coupled design instead divides the kernel into separate, smaller components, each with a specific, limited function, which together comprise the kernel. Because a change in one component affects only that component, implementers gain more freedom in creating and modifying the system's inner workings.

    11. Despite the apparent simplicity of monolithic kernels, they are difficult to implement and extend. Monolithic kernels do have a distinct performance advantage, however: there is very little overhead in the system-call interface, and communication within the kernel is fast. Therefore, despite the drawbacks of monolithic kernels, their speed and efficiency explain why we still see evidence of this structure in the UNIX, Linux, and Windows operating systems.

      Monolithic kernels are challenging to implement and extend because of their large, unified structure. However, they offer high performance: system calls involve minimal overhead, and communication within the kernel is fast. This speed advantage is why monolithic designs remain common in operating systems like UNIX, Linux, and Windows despite their complexity.

    12. The Linux operating system is based on UNIX and is structured similarly, as shown in Figure 2.13. Applications typically use the glibc standard C library when communicating with the system call interface to the kernel. The Linux kernel is monolithic in that it runs entirely in kernel mode in a single address space, but as we shall see in Section 2.8.4, it does have a modular design that allows the kernel to be modified during run time.

      Linux, like UNIX, follows a largely monolithic structure but includes modular features. The kernel operates entirely in kernel mode within a single address space, yet it supports loadable modules, enabling kernel components to be added, removed, or updated while the system is running. Applications interact with the kernel through the glibc standard C library, which serves as the conduit for system calls.
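
      The canonical illustration of this run-time modularity is a minimal loadable kernel module, shown below in the standard "hello" pattern. It must be built out of tree against the installed kernel headers with a kbuild Makefile, then inserted with insmod and removed with rmmod while the system keeps running.

        /* hello.c: minimal loadable-module sketch. */
        #include <linux/init.h>
        #include <linux/module.h>

        static int __init hello_init(void)
        {
            printk(KERN_INFO "hello: loaded into the running kernel\n");
            return 0;   /* nonzero would abort the load */
        }

        static void __exit hello_exit(void)
        {
            printk(KERN_INFO "hello: removed from the running kernel\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);
        MODULE_LICENSE("GPL");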

    13. An example of such limited structuring is the original UNIX operating system, which consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. We can view the traditional UNIX operating system as being layered to some extent, as shown in Figure 2.12. Everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. Taken in sum, that is an enormous amount of functionality to be combined into one single address space.

      The original UNIX OS illustrates a partially layered structure. While it is mostly monolithic, it separates the kernel from the system programs and further divides the kernel into interfaces and device drivers. The kernel handles core functions, such as the file system, CPU scheduling, and memory management, through system calls, all within a single address space, demonstrating how even limited structuring can help organize a complex operating system.

    14. The simplest structure for organizing an operating system is no structure at all. That is, place all of the functionality of the kernel into a single, static binary file that runs in a single address space. This approach—known as a monolithic structure—is a common technique for designing operating systems.

      A monolithic structure is the simplest way to organize an operating system: all kernel functionality is compiled into a single large binary that runs in one address space. While straightforward, this design can make debugging, updating, and maintaining the system more difficult, because every part of the kernel is tightly interconnected. Many early operating systems used this approach.

    15. A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components, or modules, rather than have one single system. Each of these modules should be a well-defined portion of the system, with carefully defined interfaces and functions. You may use a similar approach when you structure your programs: rather than placing all of your code in the main() function, you instead separate logic into a number of functions, clearly articulate parameters and return values, and then call those functions from main().

      Modern operating systems are extremely complex, so breaking them into modules makes development and maintenance manageable. Every module handles a distinct, clearly defined function and interacts with other modules via explicit interfaces. This modular method resembles good programming practice, in which code is separated into functions with specified inputs and outputs instead of consolidating everything within main(). It enhances readability and maintainability and reduces errors.
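
      The programming analogy can be made concrete with a toy C program in the style the passage describes: main() only coordinates, and each function is a small module with clearly articulated parameters and return values. The names are invented.

        #include <stdio.h>

        /* Module 1: pure computation with a clear interface. */
        static double average(const int *vals, int n) {
            int sum = 0;
            for (int i = 0; i < n; i++) sum += vals[i];
            return n > 0 ? (double)sum / n : 0.0;
        }

        /* Module 2: presentation, separate from computation. */
        static void report(double avg) {
            printf("average = %.2f\n", avg);
        }

        int main(void) {
            int data[] = { 4, 8, 15, 16, 23, 42 };
            report(average(data, 6));   /* main() just wires modules together */
            return 0;
        }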

    16. As is true in other systems, major performance improvements in operating systems are more likely to be the result of better data structures and algorithms than of excellent assembly-language code. In addition, although operating systems are large, only a small amount of the code is critical to high performance; the interrupt handlers, I/O manager, memory manager, and CPU scheduler are probably the most critical routines. After the system is written and is working correctly, bottlenecks can be identified and can be refactored to operate more efficiently.

    17. The advantages of using a higher-level language, or at least a systems-implementation language, for implementing operating systems are the same as those gained when the language is used for application programs: the code can be written faster, is more compact, and is easier to understand and debug. In addition, improvements in compiler technology will improve the generated code for the entire operating system by simple recompilation. Finally, an operating system is far easier to port to other hardware if it is written in a higher-level language. This is particularly important for operating systems that are intended to run on several different hardware systems, such as small embedded devices, Intel x86 systems, and ARM chips running on phones and tablets.

      Using higher-level languages for operating-system development offers several key benefits: code can be written more quickly, is easier to read and debug, and is generally more compact. Compiler improvements automatically enhance the efficiency of the OS through simple recompilation. Additionally, high-level languages make porting the OS to different hardware platforms much easier, a crucial advantage for systems designed to run on diverse devices, from embedded systems to desktop PCs and mobile ARM-based devices.

    18. Early operating systems were written in assembly language. Now, most are written in higher-level languages such as C or C++, with small amounts of the system written in assembly language. In fact, more than one higher-level language is often used. The lowest levels of the kernel might be written in assembly language and C. Higher-level routines might be written in C and C++, and system libraries might be written in C++ or even higher-level languages. Android provides a nice example: its kernel is written mostly in C with some assembly language. Most Android system libraries are written in C or C++, and its application frameworks—which provide the developer interface to the system—are written mostly in Java. We cover Android's architecture in more detail in Section 2.8.5.2.

      This passage traces the evolution of operating-system development from assembly language to higher-level languages like C and C++. Modern OS kernels often use a mix of languages: low-level routines for hardware control in assembly or C, system libraries in C or C++, and higher-level application frameworks in languages such as Java. Android is a clear example, showing how different layers of the OS stack are implemented in different languages to balance performance, portability, and developer accessibility.

    19. Policy decisions are important for all resource allocation. Whenever it is necessary to decide whether or not to allocate a resource, a policy decision must be made. Whenever the question is how rather than what, it is a mechanism that must be determined.

      This passage emphasizes the difference between policy and mechanism in resource management. A policy defines what should be done, for example, deciding which process gets access to a resource, while a mechanism defines how the decision is implemented, such as the specific algorithm or procedure used to allocate the resource. Recognizing this separation helps in designing flexible and adaptable operating systems.

    20. We can make a similar comparison between commercial and open-source operating systems. For instance, contrast Windows, discussed above, with Linux, an open-source operating system that runs on a wide range of computing devices and has been available for over 25 years. The “standard” Linux kernel has a specific CPU scheduling algorithm (covered in Section 5.7.1), which is a mechanism that supports a certain policy. However, anyone is free to modify or replace the scheduler to support a different policy.

      This passage illustrates the separation of policy and mechanism in practice, using Windows and Linux as examples. In Linux, the CPU scheduler is the mechanism, while the scheduling algorithm (the policy) determines how CPU time is allocated. Unlike most commercial operating systems, Linux is open source, so users can modify or replace the scheduler to implement a different policy without changing the underlying mechanism. This flexibility is a key advantage of open-source systems.

    21. The separation of policy and mechanism is important for flexibility. Policies are likely to change across places or over time. In the worst case, each change in policy would require a change in the underlying mechanism. A general mechanism flexible enough to work across a range of policies is preferable. A change in policy would then require redefinition of only certain parameters of the system. For instance, consider a mechanism for giving priority to certain types of programs over others. If the mechanism is properly separated from policy, it can be used either to support a policy decision that I/O-intensive programs should have priority over CPU-intensive ones or to support the opposite policy.

      This passage highlights why separating policy from mechanism increases flexibility in an operating system. Policies often change with context or over time, and if mechanisms were tightly coupled to policies, any change would require redesigning the mechanism. By keeping mechanisms general and flexible, only the policy parameters need to be adjusted. For example, a priority mechanism can support different policies, such as giving preference to I/O-intensive programs or to CPU-intensive programs, without modifying the underlying mechanism itself.
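
      A minimal sketch of the idea in C: the selection mechanism below never changes, while the policy is a function plugged into it. All names, and the io_bound field, are hypothetical.

        #include <stdio.h>

        struct proc { const char *name; int io_bound; };

        /* Policy: decides WHAT should win; returns nonzero if a beats b. */
        typedef int (*policy_fn)(const struct proc *a, const struct proc *b);

        static int favor_io(const struct proc *a, const struct proc *b)  { return a->io_bound > b->io_bound; }
        static int favor_cpu(const struct proc *a, const struct proc *b) { return a->io_bound < b->io_bound; }

        /* Mechanism: decides HOW, by scanning for the best process.
           It is untouched no matter which policy is plugged in. */
        static const struct proc *pick(const struct proc *p, int n, policy_fn better) {
            const struct proc *best = &p[0];
            for (int i = 1; i < n; i++)
                if (better(&p[i], best)) best = &p[i];
            return best;
        }

        int main(void) {
            struct proc procs[] = { { "editor", 1 }, { "miner", 0 } };
            printf("I/O-first policy picks: %s\n", pick(procs, 2, favor_io)->name);
            printf("CPU-first policy picks: %s\n", pick(procs, 2, favor_cpu)->name);
            return 0;
        }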

    22. One important principle is the separation of policy from mechanism. Mechanisms determine how to do something; policies determine what will be done. For example, the timer construct (see Section 1.4.3) is a mechanism for ensuring CPU protection, but deciding how long the timer is to be set for a particular user is a policy decision.

      This principle emphasizes distinguishing between mechanisms and policies. Mechanisms define how the task is performed, while policies define what is to be done. For instance, a timer is a mechanism that enforces CPU usage limits, but setting the duration of the timer for each user is a policy decision. This separation allows flexibility in system behavior without changing the underlying implementation.

    23. Specifying and designing an operating system is a highly creative task. Although no textbook can tell you how to do it, general principles have been developed in the field of software engineering, and we turn now to a discussion of some of these principles.

      Designing an operating system requires a high degree of creativity, as there is no single formula or textbook method for doing it. However, software engineering principles provide general guidelines and best practices that can help structure the design process, ensuring the system is reliable, efficient, and maintainable.

    24. There is, in short, no unique solution to the problem of defining the requirements for an operating system. The wide range of systems in existence shows that different requirements can result in a large variety of solutions for different environments. For example, the requirements for Wind River VxWorks, a real-time operating system for embedded systems, must have been substantially different from those for Windows Server, a large multiaccess operating system designed for enterprise applications.

    25. The first problem in designing a system is to define goals and specifications. At the highest level, the design of the system will be affected by the choice of hardware and the type of system: traditional desktop/laptop, mobile, distributed, or real time.

      The initial step in designing an operating system is defining clear goals and specifications. Key design decisions depend on the hardware platform and the type of system being developed: a traditional desktop or laptop, a mobile device, a distributed system, or a real-time system. These factors influence the performance, capabilities, and overall architecture of the OS.

    26. In sum, all of these differences mean that unless an interpreter, RTE, or binary executable file is written for and compiled on a specific operating system on a specific CPU type (such as Intel x86 or ARMv8), the application will fail to run. Imagine the amount of work that is required for a program such as the Firefox browser to run on Windows, macOS, various Linux releases, iOS, and Android, sometimes on various CPU architectures.

      Ultimately, an application can only run on a system if its interpreter, runtime environment (RTE), or compiled binary is designed for that specific operating system and CPU architecture. This explains why cross-platform applications, like the Firefox browser, require significant effort to support multiple OSes and hardware types, including Windows, macOS, Linux distributions, iOS, and Android, often across different processor architectures.

    27. Each operating system has a binary format for applications that dictates the layout of the header, instructions, and variables. Those components need to be at certain locations in specified structures within an executable file so the operating system can open the file and load the application for proper execution.

      Every operating system defines its own binary file format for applications, which specifies how the executable’s header, instructions, and variables are arranged. This structure ensures that the OS can correctly load the program into memory and execute it, making the binary format a critical factor in application compatibility across different systems.

    28. In theory, these three approaches seemingly provide simple solutions for developing applications that can run across different operating systems. However, the general lack of application mobility has several causes, all of which still make developing cross-platform applications a challenging task. At the application level, the libraries provided with the operating system contain APIs to provide features like GUI interfaces, and an application designed to call one set of APIs (say, those available from iOS on the Apple iPhone) will not work on an operating system that does not provide those APIs (such as Android). Other challenges exist at lower levels in the system, including the following.

      While interpreted languages, virtual machines, and cross-compilers can help applications run on multiple operating systems, achieving true cross-platform compatibility remains difficult. One major reason is that different operating systems offer different libraries and APIs, particularly for GUI and system-level features. An app designed for one OS (like iOS) may fail on another (like Android) if the expected APIs aren't available. Additionally, lower-level differences in system architecture, memory management, and file handling create further challenges for developers trying to make portable applications.

    30. 1. The application can be written in an interpreted language (such as Python or Ruby) that has an interpreter available for multiple operating systems. The interpreter reads each line of the source program, executes equivalent instructions on the native instruction set, and calls native operating system calls. Performance suffers relative to that for native applications, and the interpreter provides only a subset of each operating system's features, possibly limiting the feature sets of the associated applications.

      Applications developed in interpreted languages such as Python or Ruby can operate on various operating systems since the interpreter functions as an intermediary layer. The interpreter executes the program line by line, converting it into native instructions and calling the OS when needed. This enables compatibility across platforms but might reduce performance compared to native apps and could limit access to some features exclusive to specific operating systems.

    31. Based on our earlier discussion, we can now see part of the problem—each operating system provides a unique set of system calls. System calls are part of the set of services provided by operating systems for use by applications. Even if system calls were somehow uniform, other barriers would make it difficult for us to execute application programs on different operating systems. But if you have used multiple operating systems, you may have used some of the same applications on them. How is that possible?

      Each operating system has its own set of system calls, which makes it hard to run applications across different systems. Even if system calls were standardized, other differences in design and implementation would still cause compatibility issues. Yet we often see the same applications (like browsers or word processors) working across Windows, Linux, and macOS. This is possible because applications are usually written against APIs or cross-platform frameworks rather than directly using system calls, allowing them to be adapted to different operating systems.

    32. Object files and executable files typically have standard formats that include the compiled machine code and a symbol table containing metadata about functions and variables that are referenced in the program. For UNIX and Linux systems, this standard format is known as ELF (for Executable and Linkable Format). There are separate ELF formats for relocatable and executable files. One piece of information in the ELF file for executable files is the program's entry point, which contains the address of the first instruction to be executed when the program runs. Windows systems use the Portable Executable (PE) format, and macOS uses the Mach-O format.

      Executable and object files follow standard formats that include both the compiled machine code and metadata (such as details about functions and variables). On UNIX and Linux systems, this format is called ELF (Executable and Linkable Format), with different versions for relocatable and executable files. An executable's ELF file also specifies the entry point, the address of the first instruction to run when the program starts. Other operating systems use different formats: Windows uses PE (Portable Executable), and macOS uses Mach-O.
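
      As a hands-on illustration, the ELF header can be read directly with the structures declared in <elf.h> on a Linux system. The sketch assumes a 64-bit ELF file and does only minimal error checking.

        /* Usage: ./elfentry /bin/ls */
        #include <elf.h>
        #include <stdio.h>
        #include <string.h>

        int main(int argc, char *argv[]) {
            if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }

            Elf64_Ehdr hdr;
            if (fread(&hdr, sizeof hdr, 1, f) != 1 ||
                memcmp(hdr.e_ident, ELFMAG, SELFMAG) != 0) {
                fprintf(stderr, "not an ELF file\n");
                fclose(f);
                return 1;
            }
            /* e_entry holds the address of the first instruction to execute. */
            printf("entry point: 0x%llx\n", (unsigned long long)hdr.e_entry);
            fclose(f);
            return 0;
        }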

    33. Source files are compiled into object files that are designed to be loaded into any physical memory location, a format known as a relocatable object file. Next, the linker combines these relocatable object files into a single binary executable file. During the linking phase, other object files or libraries may be included as well, such as the standard C or math library (specified with the flag -lm).

      When a program is compiled, the source code is first transformed into relocatable object files, which can be loaded at any memory address. The linker then merges these object files into a single executable, including external object files or libraries when necessary (for example, the math library with -lm). This process ensures that the completed program is complete and ready to run.
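
      A small example of the pipeline: the program below is compiled to a relocatable object file and then needs the math library at link time because it calls sqrt(). The build commands in the comment assume gcc.

        /* Build in two visible steps:
               gcc -c main.c           (produces relocatable main.o)
               gcc main.o -lm -o app   (linker pulls in libm)        */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            printf("sqrt(2) = %f\n", sqrt(2.0));   /* sqrt() lives in libm */
            return 0;
        }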

    34. The view of the operating system seen by most users is defined by the application and system programs, rather than by the actual system calls. Consider a user's PC. When a user's computer is running the macOS operating system, the user might see the GUI, featuring a mouse-and-windows interface. Alternatively, or even in one of the windows, the user might have a command-line UNIX shell. Both use the same set of system calls, but the system calls look different and act in different ways. Further confusing the user view, consider the user dual-booting from macOS into Windows. Now the same user on the same hardware has two entirely different interfaces and two sets of applications using the same physical resources. On the same hardware, then, a user can be exposed to multiple user interfaces sequentially or concurrently.

      Users primarily interact with the operating system through interfaces (GUIs or command lines) and applications rather than through system calls directly. For example, macOS users can work in the graphical interface or in a UNIX shell; both use the same system calls, although they look and behave quite differently. Dual-booting macOS and Windows shows how identical hardware can yield distinctly different user experiences and environments, despite relying on the same underlying system resources.

    35. Program loading and execution. Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level languages or machine language are needed as well.

      Program loading and execution services handle the process of getting compiled programs into memory so they can run. These include loaders (absolute, relocatable, overlay) and tools like linkage editors. Debugging support is also part of this category, helping programmers test and fix errors in either high-level code or machine language.

    36. File management. These programs create, delete, copy, rename, print, list, and generally access and manipulate files and directories.

      File management services provide everyday tools for working with files and directories. They let users create, delete, copy, rename, print, and list files, making it easier to organize and manage data without needing to use low-level system calls directly.

    37. Another aspect of a modern system is its collection of system services. Recall Figure 1.1, which depicted the logical computer hierarchy. At the lowest level is hardware. Next is the operating system, then the system services, and finally the application programs. System services, also known as system utilities, provide a convenient environment for program development and execution. Some of them are simply user interfaces to system calls. Others are considerably more complex. They can be divided into these categories:

      This part explains where system services fit in the computer hierarchy. They sit between the operating system and the application programs, making it easier for developers and users to interact with the system. Some services are simple tools that act as front ends for system calls, while others are more advanced and provide broader functionality. Essentially, system services (or utilities) give programmers a convenient way to develop and run programs without dealing directly with low-level details.

    38. Typically, system calls providing protection include set_permission() and get_permission(), which manipulate the permission settings of resources such as files and disks. The allow_user() and deny_user() system calls specify whether particular users can—or cannot—be allowed access to certain resources. We cover protection in Chapter 17 and the much larger issue of security—which involves using protection against external threats—in Chapter 16.

      This paragraph highlights how operating systems employ certain system calls to manage protection and access control. Functions such as set_permission() and get_permission() manage permissions for resources, whereas allow_user() and deny_user() specify which users can access particular files or devices. Protection governs internal access rights, whereas security (addressed later) focuses on defending against external threats.

    39. Protection provides a mechanism for controlling access to the resources provided by a computer system. Historically, protection was a concern only on multiprogrammed computer systems with several users. However, with the advent of networking and the Internet, all computer systems, from servers to mobile handheld devices, must be concerned with protection.

      Why has protection become an important concern for all computer systems, not just multiprogrammed systems with multiple users?

    40. Both of the models just discussed are common in operating systems, and most systems implement both. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also easier to implement than is shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, since it can be done at memory transfer speeds when it takes place within a computer. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

      What are the main advantages and disadvantages of message passing versus shared memory for interprocess communication, and in what situations is each model more suitable?

    41. There are two common models of interprocess communication: the message-passing model and the shared-memory model. In the message-passing model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly through a common mailbox. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same system or a process on another computer connected by a communications network. Each computer in a network has a host name by which it is commonly known. A host also has a network identifier, such as an IP address. Similarly, each process has a process name, and this name is translated into an identifier by which the operating system can refer to the process. The get_hostid() and get_processid() system calls do this translation. The identifiers are then passed to the general-purpose open() and close() calls provided by the file system or to specific open_connection() and close_connection() system calls, depending on the system's model of communication. The recipient process usually must give its permission for communication to take place with an accept_connection() call. Most processes that will be receiving connections are special-purpose daemons, which are system programs provided for that purpose. They execute a wait_for_connection() call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as a server, then exchange messages by using read_message() and write_message() system calls. The close_connection() call terminates the communication.

      Explain the steps involved in interprocess communication using the message-passing model. Include the roles of the client, server (daemon), and system calls such as open_connection(), accept_connection(), read_message(), and close_connection().
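
      The calls named in the passage are generic. One concrete counterpart on UNIX-like systems is the POSIX socket API; the sketch below shows the server (daemon) side, with comments mapping each socket call onto the generic one. The port number is arbitrary, and error handling is omitted for brevity.

        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void) {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = { 0 };
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(5000);
            bind(srv, (struct sockaddr *)&addr, sizeof addr);
            listen(srv, 1);                        /* ~ wait_for_connection()  */

            int client = accept(srv, NULL, NULL);  /* ~ accept_connection()    */
            char msg[128];
            ssize_t n = read(client, msg, sizeof msg - 1);   /* ~ read_message()  */
            if (n > 0) { msg[n] = '\0'; printf("client says: %s\n", msg); }
            write(client, "ok", 2);                          /* ~ write_message() */
            close(client);                         /* ~ close_connection()     */
            close(srv);
            return 0;
        }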

    42. Many operating systems provide a time profile of a program to indicate the amount of time that the program executes at a particular location or set of locations. A time profile requires either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded. With sufficiently frequent timer interrupts, a statistical picture of the time spent on various parts of the program can be obtained.

      Many operating systems can track how much time a program spends running at different points in its code. This is called a time profile. To create one, the system either traces the program or uses regular timer interrupts. Every time the timer interrupts, the system records the program's current position. By sampling frequently enough, it can build a statistical picture of which parts of the program take the most time to execute.
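
      The sampling idea can be sketched in user space with a profiling timer. A real profiler records the program counter inside the signal handler, which requires platform-specific ucontext access; this sketch instead counts SIGPROF ticks per program phase, which demonstrates the same statistical principle on a POSIX system.

        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/time.h>

        static volatile sig_atomic_t phase;   /* which code region is running */
        static volatile long samples[2];      /* tick counts per region */

        static void on_tick(int sig) { (void)sig; samples[phase]++; }

        static void burn(long iters) { volatile long x = 0; while (iters--) x += iters; }

        int main(void) {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = on_tick;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGPROF, &sa, NULL);

            struct itimerval it;                   /* fire every 10 ms of CPU time */
            it.it_interval.tv_sec = 0;
            it.it_interval.tv_usec = 10000;
            it.it_value = it.it_interval;
            setitimer(ITIMER_PROF, &it, NULL);

            phase = 0; burn(200000000L);           /* heavy region */
            phase = 1; burn(20000000L);            /* light region */

            printf("region 0: %ld ticks, region 1: %ld ticks\n",
                   samples[0], samples[1]);
            return 0;
        }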

    43. Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time() and date(). Other system calls may return information about the system, such as the version number of the operating system, the amount of free memory or disk space, and so on.

      Many system calls exist simply to pass information back and forth between a program and the operating system. For example, most systems provide a call to retrieve the current time and date. Other calls return information about the system, such as the operating-system version, the amount of available memory or disk space, and other related details.
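
      On POSIX systems, two concrete examples of such calls are time(), which returns the current time, and uname(), which reports the operating-system name and version.

        #include <stdio.h>
        #include <sys/utsname.h>
        #include <time.h>

        int main(void) {
            time_t now = time(NULL);
            printf("current time: %s", ctime(&now));   /* ctime() appends '\n' */

            struct utsname u;
            if (uname(&u) == 0)                        /* OS version, machine type */
                printf("system: %s %s on %s\n", u.sysname, u.release, u.machine);
            return 0;
        }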

    44. Once the device has been requested (and allocated to us), we can read(), write(), and (possibly) reposition() the device, just as we can with files. In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX, merge the two into a combined file–device structure. In this case, a set of system calls is used on both files and devices. Sometimes, I/O devices are identified by special file names, directory placement, or file attributes.

    45. The various resources controlled by the operating system can be thought of as devices. Some of these devices are physical devices (for example, disk drives), while others can be thought of as abstract or virtual devices (for example, files). A system with multiple users may require us to first request() a device, to ensure exclusive use of it. After we are finished with the device, we release() it. These functions are similar to the open() and close() system calls for files. Other operating systems allow unmanaged access to devices. The hazard then is the potential for device contention and perhaps deadlock, which are described in Chapter 8.

      The resources that an operating system manages can be thought of as devices. Some are physical, like disk drives, while others are abstract or virtual, like files. In systems with multiple users, a program may need to request() a device to ensure exclusive access, and then release() it when finished. These actions are similar to open() and close() for files. Some operating systems allow unmanaged access to devices, but doing so can lead to problems like device contention or deadlock, which we'll discuss in Chapter 8.

    46. We may need these same sets of operations for directories if we have a directory structure for organizing files in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes and perhaps to set them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. At least two system calls, get_file_attributes() and set_file_attributes(), are required for this function. Some operating systems provide many more calls, such as calls for file move() and copy(). Others might provide an API that performs those operations using code and other system calls, and others might provide system programs to perform the tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.

      We often need similar operations for directories as for files, especially when using a directory structure to organize files. For both files and directories, it's important to be able to read and sometimes set their attributes, which include the name, type, protection codes, and accounting information. To handle this, operating systems usually provide system calls such as get_file_attributes() and set_file_attributes(). Some systems go further, offering extra calls for tasks like moving or copying files. In other cases, these actions are handled through APIs or system programs. If other programs can call these system programs, they effectively act as APIs themselves.

    47. The file system is discussed in more detail in Chapter 13 through Chapter 15. Here, we identify several common system calls dealing with files. We first need to be able to create() and delete() files. Either system call requires the name of the file and perhaps some of the file's attributes. Once the file is created, we need to open() it and to use it. We may also read(), write(), or reposition() (rewind or skip to the end of the file, for example). Finally, we need to close() the file, indicating that we are no longer using it.

      This part elaborates on the primary file-management system calls offered by an operating system. A program can create a new file or delete an existing one, supplying the file's name and perhaps some attributes. Once the file exists, it is accessed with open(), which allows the program to work with it by reading, writing, or repositioning the file pointer with reposition(). When the program has finished its operations on the file, close() is called to indicate that the file is no longer in use.
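
      On UNIX-like systems these generic calls map onto POSIX counterparts: open() with O_CREAT plays the role of create(), lseek() the role of reposition(), and unlink() the role of delete(). A compact sketch of the whole sequence, with an invented path and minimal error handling:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/tmp/demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            write(fd, "hello, file\n", 12);    /* write()             */
            lseek(fd, 0, SEEK_SET);            /* reposition (rewind) */

            char buf[32];
            ssize_t n = read(fd, buf, sizeof buf - 1);   /* read()    */
            if (n > 0) { buf[n] = '\0'; printf("read back: %s", buf); }

            close(fd);                         /* close()             */
            unlink("/tmp/demo.txt");           /* delete()            */
            return 0;
        }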

    48. There are so many facets of and variations in process control that we next use two examples—one involving a single-tasking system and the other a multitasking system—to clarify these concepts. The Arduino is a simple hardware platform consisting of a microcontroller along with input sensors that respond to a variety of events, such as changes to light, temperature, and barometric pressure, to just name a few. To write a program for the Arduino, we first write the program on a PC and then upload the compiled program (known as a sketch) from the PC to the Arduino's flash memory via a USB connection. The standard Arduino platform does not provide an operating system; instead, a small piece of software known as a boot loader loads the sketch into a specific region in the Arduino's memory.

      This passage explains process control in simple and multitasking systems using the Arduino as an example. The Arduino is a microcontroller platform with sensors that detect various events. Programs, called sketches, are written and compiled on a PC and then uploaded to the Arduino’s flash memory. Unlike more complex systems, the standard Arduino does not use a full operating system; a bootloader simply loads the sketch into memory, demonstrating a single-tasking environment.

    49. Quite often, two or more processes may share data. To ensure the integrity of the data being shared, operating systems often provide system calls allowing a process to lock shared data. Then, no other process can access the data until the lock is released. Typically, such system calls include acquire_lock() and release_lock().

      This paragraph explores how operating systems handle data shared among multiple processes. To preserve data integrity, the OS can protect shared data via system calls like acquire_lock(), preventing other processes from accessing it until it is freed with release_lock(). This mechanism is crucial for avoiding conflicts and keeping data consistent when processes run concurrently.
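
      acquire_lock() and release_lock() are generic names; a familiar user-space analogue is a pthread mutex guarding a shared counter, as in this sketch (build with -lpthread).

        #include <pthread.h>
        #include <stdio.h>

        static long counter;                        /* shared data */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg) {
            (void)arg;
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);          /* ~ acquire_lock() */
                counter++;                          /* safe: we hold the lock */
                pthread_mutex_unlock(&lock);        /* ~ release_lock() */
            }
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, NULL);
            pthread_create(&b, NULL, worker, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld (expected 200000)\n", counter);
            return 0;
        }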

    50. A process executing one program may want to load() and execute() another program. This feature allows the command interpreter to execute a program as directed by, for example, a user command or the click of a mouse. An interesting question is where to return control when the loaded program terminates. This question is related to whether the existing program is lost, saved, or allowed to continue execution concurrently with the new program. If control returns to the existing program when the new program terminates, we must save the memory image of the existing program; thus, we have effectively created a mechanism for one program to call another program. If both programs continue concurrently, we have created a new process to be multiprogrammed. Often, there is a system call specifically for this purpose (create_process()).

      This text explains how one application can load and run another application, for instance, when a user issues a command or selects an icon. It emphasizes the main problem of control flow once the new program ends: control might revert to the original program, necessitating its memory image to be preserved, or both programs could operate simultaneously, resulting in a multiprogramming situation. The excerpt mentions that operating systems typically offer a specific system call, like create_process(), to enable this functionality.
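
      On UNIX the generic create_process() corresponds to fork() followed by an exec() call, and waitpid() shows control returning to the original program, whose memory image was preserved, once the new program terminates. A minimal sketch:

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();                          /* duplicate this process  */
            if (pid == 0) {
                execlp("ls", "ls", "-l", (char *)NULL);  /* replace child's image   */
                perror("execlp");                        /* reached only on failure */
                return 1;
            }
            waitpid(pid, NULL, 0);                       /* parent waits...         */
            puts("parent resumes after the child exits");
            return 0;
        }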

    51. A running program needs to be able to halt its execution either normally (end()) or abnormally (abort()). If a system call is made to terminate the currently running program abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is sometimes taken and an error message generated. The dump is written to a special log file on disk and may be examined by a debugger—a system program designed to aid the programmer in finding and correcting errors, or bugs—to determine the cause of the problem. Under either normal or abnormal circumstances, the operating system must transfer control to the invoking command interpreter. The command interpreter then reads the next command. In an interactive system, the command interpreter simply continues with the next command; it is assumed that the user will issue an appropriate command to respond to any error. In a GUI system, a pop-up window might alert the user to the error and ask for guidance. Some systems may allow for special recovery actions in case an error occurs. If the program discovers an error in its input and wants to terminate abnormally, it may also want to define an error level. More severe errors can be indicated by a higher-level error parameter. It is then possible to combine normal and abnormal termination by defining a normal termination as an error at level 0. The command interpreter or a following program can use this error level to determine the next action automatically.

      This passage explains how a running program can terminate either normally using end() or abnormally using abort(). In the case of abnormal termination or an error trap, the operating system may create a memory dump and an error log for debugging. After termination, control is returned to the command interpreter, which continues processing user commands or provides GUI prompts for guidance. The passage also highlights the use of error levels to indicate the severity of errors, allowing subsequent programs or the command interpreter to respond appropriately.

    52. System calls can be grouped roughly into six major categories: process control, file management, device management, information maintenance, communications, and protection. Below, we briefly discuss the types of system calls that may be provided by an operating system. Most of these system calls support, or are supported by, concepts and functions that are discussed in later chapters. Figure 2.8 summarizes the types of system calls normally provided by an operating system. As mentioned, in this text, we normally refer to the system calls by generic names. Throughout the text, however, we provide examples of the actual counterparts to the system calls for UNIX, Linux, and Windows systems.

      This section explains that system calls can be categorized into six primary groups: process management, file handling, device control, information upkeep, communication, and security. The text emphasizes that most system calls relate to concepts discussed later and provides examples from UNIX, Linux, and Windows. Figure 2.8 gives a summary of these categories.

    53. Three general methods are used to pass parameters to the operating system. The simplest approach is to pass the parameters in registers. In some cases, however, there may be more parameters than registers. In these cases, the parameters are generally stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register (Figure 2.7). Parameters can also be placed, or pushed, onto a stack by the program and popped off the stack by the operating system. Linux uses a combination of these approaches.

      This passage describes the ways system-call parameters can be passed to the operating system. The simplest method is to hold the parameters in CPU registers. If there are more parameters than available registers, they are placed in a block or table in memory, and the address of that block is passed in a register. Parameters can also be pushed onto the stack by the program and popped off by the operating system. Linux uses a mix of these methods depending on the situation.

    54. System calls occur in different ways, depending on the computer in use. Often, more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the particular operating system and call. For example, to get input, we may need to specify the file or device to use as the source, as well as the address and length of the memory buffer into which the input should be read. Of course, the device or file and length may be implicit in the call.

      This passage explains that system calls often require additional information beyond the identity of the call itself. Parameters, such as the source file or device, the memory buffer address, and the buffer length, may need to be specified so the operating system knows how to process the request. The precise details depend on the particular operating system and the system call being used.

    55. The caller need know nothing about how the system call is implemented or what it does during execution. Rather, the caller need only obey the API and understand what the operating system will do as a result of the execution of that system call. Thus, most of the details of the operating-system interface are hidden from the programmer by the API and are managed by the RTE.

      This text highlights the importance of abstraction in system calls. Programmers working with an API do not have to understand the internal mechanisms or execution details of a system call. They need only follow the API and understand the expected outcome. The run-time environment (RTE) manages the inner workings of interacting with the operating system, concealing the underlying specifics from the developer.

    56. Another important factor in handling system calls is the run-time environment (RTE)—the full suite of software needed to execute applications written in a given programming language, including its compilers or interpreters as well as other software, such as libraries and loaders. The RTE provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers

      This passage describes the role of the run-time environment (RTE) in managing system calls. The RTE includes compilers, interpreters, libraries, and loaders, and provides a system-call interface that connects API function calls to the operating system’s system calls. Each system call is typically assigned a number, and the interface uses a table indexed by these numbers to invoke the correct system call within the OS.
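
      The indexed table can be sketched as an array of function pointers; the numbers and stub handlers below are invented for illustration and do not correspond to any real kernel's table:

      ```c
      /* Sketch of a system-call dispatch table: a number indexes the table,
         which maps it to the handler implementing that call. */
      #include <stddef.h>

      typedef long (*syscall_fn)(long, long, long);

      static long sys_read_stub(long a, long b, long c)  { (void)a; (void)b; (void)c; return 0; }
      static long sys_write_stub(long a, long b, long c) { (void)a; (void)b; (void)c; return 0; }

      static const syscall_fn table[] = {
          [0] = sys_read_stub,    /* numbers are illustrative only */
          [1] = sys_write_stub,
      };

      long dispatch(long number, long a1, long a2, long a3) {
          if (number < 0 || (size_t)number >= sizeof table / sizeof table[0] || !table[number])
              return -1;                      /* a real kernel would return -ENOSYS */
          return table[number](a1, a2, a3);   /* indexed lookup, then invoke */
      }
      ```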

    57. Why would an application programmer prefer programming according to an API rather than invoking actual system calls? There are several reasons for doing so. One benefit concerns program portability. An application programmer designing a program using an API can expect her program to compile and run on any system that supports the same API (although, in reality, architectural differences often make this more difficult than it may appear). Furthermore, actual system calls can often be more detailed and difficult to work with than the API available to an application programmer. Nevertheless, there often exists a strong correlation between a function in the API and its associated system call within the kernel. In fact, many of the POSIX and Windows APIs are similar to the native system calls provided by the UNIX, Linux, and Windows operating systems.

      This passage explains why application programmers prefer using APIs instead of directly invoking system calls. APIs provide portability, allowing programs to run on any system that supports the same API, and they simplify programming by offering higher-level, easier-to-use functions. While actual system calls are often more detailed and complex, API functions usually correspond closely to the underlying system calls, as seen in the POSIX and Windows APIs.

    58. As you can see, even simple programs may make heavy use of the operating system. Frequently, systems execute thousands of system calls per second. Most programmers never see this level of detail, however. Typically, application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer, including the parameters that are passed to each function and the return values the programmer can expect. Three of the most common APIs available to application programmers are the Windows API for Windows systems, the POSIX API for POSIX-based systems (which include virtually all versions of UNIX, Linux, and macOS), and the Java API for programs that run on the Java virtual machine

      This passage highlights that even simple programs rely heavily on the operating system through system calls, often executing thousands per second. Programmers, however, usually interact with higher-level APIs rather than making system calls directly. APIs like the Windows API, the POSIX API, and the Java API specify standardized functions, parameters, and expected return values, simplifying program development while hiding the underlying OS complexity.

    59. When both files are set up, we enter a loop that reads from the input file (a system call) and writes to the output file (another system call). Each read and write must return status information regarding various possible error conditions. On input, the program may find that the end of the file has been reached or that there was a hardware failure in the read (such as a parity error). The write operation may encounter various errors, depending on the output device (for example, no more available disk space).

      This passage emphasizes that reading from and writing to files in a program involves repeated system calls, each of which must report status and handle potential errors. It illustrates how the operating system monitors both input and output operations, accounting for conditions like reaching the end of a file, hardware read failures, or insufficient disk space during writing.
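
      A sketch of that loop in POSIX C, assuming in_fd and out_fd were opened earlier; each call's return value carries the status information the passage mentions (for brevity, a short write is treated as an error, though a full program would retry it):

      ```c
      /* Sketch of the copy loop: read and write both report status through
         their return values, which the program must inspect. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      void copy_loop(int in_fd, int out_fd) {
          char buf[4096];
          ssize_t n;
          while ((n = read(in_fd, buf, sizeof buf)) > 0) {   /* 0 means end of file */
              if (write(out_fd, buf, (size_t)n) != n) {      /* e.g., no space left */
                  perror("write");
                  exit(EXIT_FAILURE);
              }
          }
          if (n < 0) {                                       /* e.g., hardware read error */
              perror("read");
              exit(EXIT_FAILURE);
          }
      }
      ```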

    60. Once the two file names have been obtained, the program must open the input file and create and open the output file. Each of these operations requires another system call. Possible error conditions for each system call must be handled. For example, when the program tries to open the input file, it may find that there is no file of that name or that the file is protected against access. In these cases, the program should output an error message (another sequence of system calls) and then terminate abnormally (another system call).

      This passage explains that each file operation—opening an input file, creating and opening an output file—requires a separate system call. It highlights the need for handling potential errors, such as a missing file or insufficient access permissions, using system calls to display error messages and terminate the program if necessary.
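
      A sketch of those steps, taking the two names from the command line; each failure path prints a message and terminates abnormally, as described:

      ```c
      /* Sketch: open the input file, then create and open the output file,
         handling the error conditions the passage lists. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[]) {
          if (argc != 3) {
              fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
              exit(EXIT_FAILURE);
          }
          int in_fd = open(argv[1], O_RDONLY);   /* may fail: no such file,
                                                    or protected against access */
          if (in_fd < 0) { perror(argv[1]); exit(EXIT_FAILURE); }

          int out_fd = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (out_fd < 0) { perror(argv[2]); exit(EXIT_FAILURE); }

          /* ... the copy loop sketched earlier would go here ... */
          return 0;
      }
      ```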

    61. Before we discuss how an operating system makes system calls available, let's first use an example to illustrate how system calls are used: writing a simple program to read data from one file and copy them to another file. The first input that the program will need is the names of the two files: the input file and the output file. These names can be specified in many ways, depending on the operating-system design

      This passage introduces the concept of using system calls with a practical example: a program that reads from one file and writes to another. It emphasizes that the program first needs the file names and notes that how these names are specified can vary depending on the operating system’s design.

    62. System calls provide an interface to the services made available by an operating system. These calls are generally available as functions written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions.

      This passage explains that system calls act as the bridge between programs and the operating system’s services. Most system calls are accessible through high-level languages like C and C++, but some low-level operations—especially those requiring direct hardware access—may need to be implemented in assembly language.

    63. Although there are apps that provide a command-line interface for iOS and Android mobile systems, they are rarely used. Instead, almost all users of mobile systems interact with their devices using the touch-screen interface. The user interface can vary from system to system and even from user to user within a system; however, it typically is substantially removed from the actual system structure. The design of a useful and intuitive user interface is therefore not a direct function of the operating system. In this book, we concentrate on the fundamental problems of providing adequate service to user programs. From the point of view of the operating system, we do not distinguish between user programs and system programs.

      This passage emphasizes that mobile users almost exclusively use touch-screen interfaces rather than command-line interfaces. While user interfaces may differ across systems and users, their design is largely separate from the underlying operating system. The focus of the book, as noted here, is on the operating system’s role in providing consistent and adequate service to programs, treating user and system programs equivalently.

    64. In contrast, most Windows users are happy to use the Windows GUI environment and almost never use the shell interface. Recent versions of the Windows operating system provide both a standard GUI for desktop and traditional laptops and a touch screen for tablets. The various changes undergone by the Macintosh operating systems also provide a nice study in contrast.

      This passage contrasts typical Windows users with command-line users, noting that most Windows users rely primarily on the GUI and rarely use the shell. Modern Windows versions support both desktop GUIs and touch interfaces for tablets. The passage also points out that the evolution of the Macintosh operating systems offers a useful study in contrast for how GUI design and user interaction have developed over time.

    65. The choice of whether to use a command-line or GUI interface is mostly one of personal preference. System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them, it is more efficient, giving them faster access to the activities they need to perform. Indeed, on some systems, only a subset of system functions is available via the GUI, leaving the less common tasks to those who are command-line knowledgeable

      This text emphasizes that the choice between the graphical user interface (GUI) and the command-line interface (CLI) usually comes down to personal preference and skill level. System administrators and power users typically prefer the CLI for its quicker, more efficient access to system functions. Some tasks may only be available through the CLI, which matters for users who need specific or uncommon functions.

    66. Because either a command-line interface or a mouse-and-keyboard system is impractical for most mobile systems, smartphones and handheld tablet computers typically use a touch-screen interface. Here, users interact by making gestures on the touch screen—for example, pressing and swiping fingers across the screen. Although earlier smartphones included a physical keyboard, most smartphones and tablets now simulate a keyboard on the touch screen

      This text explains that mobile devices like smartphones and tablets depend on touch-screen interfaces rather than conventional command-line or mouse-and-keyboard systems. Users engage directly with the display through gestures like tapping and swiping. While early smartphones had physical keyboards, modern devices typically display a virtual keyboard on the touch screen for input, improving portability and usability.

    67. Graphical user interfaces first appeared due in part to research taking place in the early 1970s at Xerox PARC research facility. The first GUI appeared on the Xerox Alto computer in 1973. However, graphical interfaces became more widespread with the advent of Apple Macintosh computers in the 1980s. The user interface for the Macintosh operating system has undergone various changes over the years, the most significant being the adoption of the Aqua interface that appeared with macOS. Microsoft's first version of Windows—Version 1.0—was based on the addition of a GUI interface to the MS-DOS operating system

      This passage outlines the historical development of graphical user interfaces (GUIs). GUIs were first explored at Xerox PARC in the early 1970s, and the Xerox Alto (1973) was the first computer to feature one. They became widespread in the 1980s with Apple's Macintosh computers. GUIs have continued to evolve, notably with Apple's adoption of the Aqua interface in macOS, and Microsoft introduced a GUI in Windows 1.0 by layering it over the MS-DOS operating system.

    68. In one approach, the command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call. In this case, the number of commands that can be given determines the size of the command interpreter, since each command requires its own implementing code.

      This passage explains one method of implementing commands in a command interpreter: the interpreter itself contains the code for executing each command. For instance, a delete-file command causes the interpreter to jump to a section of its code that sets up parameters and makes the appropriate system call. The number of supported commands directly affects the interpreter's size, since each command needs its own implementing code.
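
      A minimal sketch of this approach; the command name, the tiny parser, and the prompt are invented for illustration, with unlink() standing in for the system call behind a delete command:

      ```c
      /* Sketch: an interpreter that contains the implementing code for each
         command and jumps to it after parsing the input line. */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void) {
          char line[256];
          for (;;) {
              printf("> ");
              fflush(stdout);
              if (!fgets(line, sizeof line, stdin))
                  break;
              line[strcspn(line, "\n")] = '\0';
              char *cmd = strtok(line, " ");
              char *arg = strtok(NULL, " ");
              if (!cmd) continue;
              if (strcmp(cmd, "delete") == 0 && arg) {
                  if (unlink(arg) < 0)        /* the system call doing the work */
                      perror(arg);
              } else if (strcmp(cmd, "exit") == 0) {
                  break;
              } else {
                  fprintf(stderr, "unknown command\n");
              }
          }
          return 0;
      }
      ```

      Each additional command would add another branch with its own implementing code, which is why the interpreter's size grows with the number of commands it supports.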

    69. The main function of the command interpreter is to get and execute the next user-specified command. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The various shells available on UNIX systems operate in this way. These commands can be implemented in two general ways.

      This passage highlights that the command interpreter's primary role is to get and execute the next user-specified command, many of which manipulate files: creating, deleting, listing, or copying them. It also notes that the various UNIX shells operate this way, and that these commands can be implemented in two general ways.

    70. Most operating systems, including Linux, UNIX, and Windows, treat the command interpreter as a special program that is running when a process is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the C shell, Bourne-Again shell, Korn shell, and others

      This passage explains that the command interpreter, or shell, is a special program that runs when a process starts or when a user logs on. On systems like UNIX and Linux, multiple shells are available, allowing users to choose their preferred interface for entering commands.

    71. Protection and security. The owners of information stored in a multiuser or networked computer system may want to control use of that information. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important.

      This passage explains that operating systems enforce protection and security by controlling access to system resources. In multiuser or networked environments, this ensures that processes do not interfere with one another and safeguards the system against external threats.

    72. Logging. We want to keep track of which programs use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for system administrators who wish to reconfigure the system to improve computing services.

      This passage explains how operating systems maintain logs of program resource usage. These logs can support accounting and billing, or help administrators analyze usage patterns to improve computing services.

    73. Resource allocation. When there are multiple processes running at the same time, resources must be allocated to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code.

      This passage highlights that the operating system is responsible for resource allocation, distributing CPU time, memory, file storage, and I/O devices among multiple running processes to ensure fair and efficient usage.

    74. Error detection. The operating system needs to be detecting and correcting errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).

      This passage explains that the operating system continuously detects and handles errors. These errors can arise in hardware (CPU, memory, or I/O devices) or in user programs, such as illegal memory access or arithmetic overflow, ensuring system stability.

    1. Speaking: Explain how igneous, sedimentary, and metamorphic rocks differ to a partner using compare and contrast language

      I feel like all of these sound like normal objectives that we are taught to write in all of our education classes. I am failing to see how these are special or different. The first ones just seem like terrible objectives to begin with and the second ones just go into more detail.

    2. If we don’t challenge the status quo regarding the improvement of educational opportunities for multilingual learners through equitable practices and policies, who will?

      This should be required by law. As teachers we are required to provide free and appropriate education to all students, regardless of background.

    3. Additionally, multilingual learners may face challenges when teachers use “tricky” language, such as idiomatic expressions (e.g., “learn this by heart” or “the assignment is a walk in the park”

      I think that while these idiomatic expressions may be more difficult for ELL students, the students should still be taught them because they are often used in everyday language. Although they might not be on a standardized test, they could help students understand more conversationally and be able to communicate better with their peers.

    4. acknowledging the implicit and explicit ideologies and power structures inherent in language, (2) understanding that the use of such language, even unintentionally, can and does legitimate and reproduce social inequalities, and (3) striving to become agents of long-term change in society

      I think that it is important to be aware of terms that are offensive, even if unintentionally and definitely avoid using them. Also though, if a student is using them, shut that behavior down and tell them that it is inappropriate school talk and let them know that it can be hurtful to others.

    5. Getting to know your students should begin on the first day—or even earlier, if possible, by meeting them (and their families) before the academic year starts—and should continue throughout the year

      This should go for every student, not just ELLs, because it opens the door for communication with families, which helps both parties do what is best for the child. It also allows the teacher to get to know each family and differentiate based on each student's background knowledge.

    6. “I’m not sure what the problem is. These kids can’t speak well in English or Spanish. Rather than teaching them both languages, we should just focus on English

      I think this approach is lowkey a white supremacist and racist view. Students' native languages are just as important as English. If they are not speaking well in either language, then they are not being taught well in either, and that is up to the teachers to help fix. It is not a teacher's job to decide what language students are going to learn or speak outside of school.

  2. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. So the economy is about work: organizing it, doing it, and dividing up and making use of its final output. And in our work, one way or another, we always work (directly or indirectly) with other people. The link between the economy and society goes two ways. The economy is a fundamentally social arena. But society as a whole depends strongly on the state of the economy. Politics, culture, religion, and international affairs are all deeply influenced by the progress of the economy. Governments are re-elected or turfed from office depending on the state of the economy. Family life is organized around the demands of work (both inside and outside the home). Being able to comfortably support oneself and one’s family is a central determinant of happiness.

      Why does Stanford structure the economy around work? It reflects a deliberate shift away from traditional economic narratives that prioritize consumption or production. By centering labor, he pushes back against the abstraction of mainstream economics and argues that work is the connective tissue between individuals and society. What responsibilities do governments and institutions have to ensure that work is accessible and available? How do unpaid forms of labor, like caregiving or volunteerism, fit into this structure, and why are they often excluded?

    2. land and driven into cities, where they suffered horrendous exploitation and conditions that would be considered intolerable today: seven-day working weeks, twelve-hour working days, child labour, frequent injury, early death. Vast profits were earned by the new class of capitalists, most of which they ploughed back into new investment, technology, and growth – but some of which they used to finance their own luxurious consumption. The early capitalist societies were not at all democratic: the right to vote was limited to property owners, and basic rights to speak out and organize (including to organize unions) were routinely (and often violently) trampled.

      The start of capitalism during the Industrial Revolution was harsh and unfair. Workers were forced into cities and faced terrible conditions while capitalists made huge profits. Do you think capitalism could have grown the same way if workers had better protections back then?

    3. Instead, economists refer simply to “the economy” – as if there is only one kind of economy, and hence no need to name or define it. This is wrong. As we have already seen, “the economy” is simply where people work to produce the things we need and want. There are different ways to organize that work. Capitalism is just one of them.

      Kling's view of the economy is that we need to be good at something and trade it with others who provide services in their own specialties. Stanford says the economy is about human beings prioritizing their needs with limited resources, repeating the production cycle and continuing to improve. Both perspectives, regardless of how we consume or produce, make us rely on each other. No matter how the economy is studied or what economists say, everyone plays a role.

    4. Instead, economists refer simply to “the economy” – as if there is only one kind of economy, and hence no need to name or define it. This is wrong. As we have already seen, “the economy” is simply where people work to produce the things we need and want. There are different ways to organize that work. Capitalism is just one of them.

      Stanford is saying that capitalism hasn’t always been here and might not stick around forever. It makes me wonder: if something new takes its place, what would it look like, and would it deal with inequality any better?

    5. An economy in which private, profit-seeking companies undertake most production, and in which wage-earning employees do most of the work, is a capitalist economy. We will see that these twin features (profit-driven production and wage labour) create particular patterns and relationships, which in turn shape the overall functioning of capitalism as a system. Any economy driven by these two features – production for profit and wage labour – tends to replicate the following trends and patterns, over and over again:
       • Fierce competition between private companies for markets, sales, and profit.
       • Innovation, as companies constantly experiment with new technologies, new products, and new forms of organization – in order to succeed in that competition.
       • An inherent tendency to growth, resulting from the desire of each individual company to make more profit.
       • Deep inequality between those who own successful companies, and the rest of society who do not own companies.
       • A general conflict of interest between those who work for wages, and the employers who hire them.
       • Economic cycles or “rollercoasters,” with periods of strong growth followed by periods of stagnation or depression; sometimes these cycles even produce dramatic economic and social crises

      Stanford says “companies undertake most production and in which wage-earning employees do most of the work.” Since production in private companies repeats itself and continues to innovate in order to grow, how does that affect employees who are just working for money? What strategies or methods are used to keep workers and attract more, knowing there can be future changes? Although Stanford says the economy is based on individual choices, specialization and trade still play a role in production. The repeated cycle created by private ownership of businesses requires many humans to keep producing. It cannot be done on its own.

    6. Conventionally trained economists take it as a proven fact that free trade between two countries always makes both sides better off. People who question or oppose free trade – trade unionists, social activists, nationalists – must either be acting from ignorance, or else are pursuing some narrow vested interest that conflicts with the broader good. These troublesome people should be lectured to (and economists love nothing better than expounding their beautiful theory of comparative advantage*), or simply ignored. And that’s exactly what most governments do. (Ironically, even some conventional economists now recognize that traditional free trade theory is wrong, for many reasons – some of which we’ll discuss in Chapter 22 of this book. But that hasn’t affected the profession’s near-religious devotion to free trade policies.)

      He’s pointing out that economists act like free trade is more of a belief than a fact. I take this to mean that economics sometimes feels more like an ideology than an actual science. If free trade ends up hurting workers, shouldn’t economists question the ideas they’re basing it on?

    7. At its simplest, the “economy” simply consists of all the work that human beings perform, in order to produce the things we need and use in our lives. (By work, we mean all productive human activity, not just employment; we’ll discuss that distinction later.) We need to organize and perform our work (economists call that production). And then we need to divide up the fruits of our work (economists call that distribution), and use it

      Stanford says “the economy consists of all the work that human beings perform, in order to produce the things we need and use in our lives.” When economists study what is being produced, consumed, and traded, along with GDP and the market, why is it that some have more resources than others? Does volunteering count as a contribution toward production since it does not involve money?

    8. Economics is the study of human economic behaviour: the production and distribution of the goods and services we need and want. Hence, economics is a social science, not a physical science.

      Stanford says that economics is the study of how we make rational choices among competing priorities. That means we choose what we want to produce and how. Does that mean individuals make choices based on logic? Knowing that we do not have access to unlimited resources, how do they know they are making the right choice? What if they fail to keep producing perfectly every time?

    9. Never trust an economist with your job

      I noticed that someone else responded to this one and I wanted to give my opinion on it as well. I really do believe you shouldn't trust someone to be in charge of your job when their job is solely to increase efficiency. For instance, they might want to put policies in place that cause high turnover but make more money for the company.

    10. But quite apart from whether you think capitalism is good or bad, capitalism is something we must study. It’s the economy we live in, the economy we know. And the more ordinary people understand about capitalism

      Capitalism is something we must understand as people of society. I am interested to learn about all the different aspects of it and how it influences our economy.

    11. I also believe that it is ultimately possible to build an alternative economic system guided directly by our desire to improve the human condition, rather than by a hunger for private profit. (Exactly what that alternative system would look like, however, is not at all clear today.) We’ll consider these criticisms of capitalism, and alternative visions, in the last chapters of this book

      How different would the economy be with this alternate system? Would it be better for the people?

    12. Unfortunately, most professional economists don’t think about economics in this common-sense, grass-roots context. To the contrary, they tend to adopt a rather superior attitude in their dealings with the untrained masses. They invoke complicated technical mumbo-jumbo – usually utterly unnecessary to their arguments – to make their case. They claim to know what’s good for the people

      What would it be like if economists did not have such superior attitudes and came from a more "for the people" perspective? How different would the economy be?

    13. 1. Most production of goods and services is undertaken by privately-owned companies, which produce and sell their output in hopes of making a profit. This is called production for profit.
        2. Most work in the economy is performed by people who do not own their company or their output, but are hired by someone else to work in return for a money wage or salary. This is called wage labour.

      Stanford tends to focus on how important production is: what companies sell, who they hire, and how much they pay. Kling, by contrast, focuses on specialization and the free market. Stanford also writes as if his audience has little economics background, while Kling writes as if his audience already has some.

    14. economics is inherently a social subject. It’s not just technical forces like technology and productivity that matter. It’s also the interactions and relationships between people that make the economy go around.

      Economics is grounded and isn't what people usually associate it with; it's the working people that make the economy go around.

    1. Although the concept of self-fulfilling prophecies was originally developed to be applied to social inequality and discrimination, it has since been applied in many other contexts, including interpersonal communication. This research has found that some people are chronically insecure, meaning they are very concerned about being accepted by others but constantly feel that other people will dislike them.

      I tend to feel very insecure when I am meeting new people because I am afraid they won't like me. But once I open up and talk to them, I never have a problem. This is a good idea of what to do, and to just manifest positive thoughts. When you allow those negative insecure thoughts to stay, they can keep you from having the opportunity for positive thoughts.

    1. America's "Cinderella" is not a story of rags to riches, but rather riches recovered; not poor girl into princess but rather rich girl (or princess) rescued from improper or wicked enslavement; not suffering Griselda enduring but shrewd and practical girl persevering and winning a share of the power. It is really a story that is about "the stripping away of the disguise that conceals the soul from the eyes of others"

      I was really intrigued by the way they broke down Cinderella's story and described it. It honestly gave me a new outlook on the whole fairy tale. I was one of the people who had always seen it as a rags-to-riches story. Seeing this new perspective makes me appreciate the story so much more and makes me want to go back and reread it, as well as watch the movies, with this new point of view of Cinderella always being a princess.

    2. Since then, America's Cinderella has been a coy, helpless dreamer, a "nice" girl who awaits her rescue with patience and a song.

      I can tell how different this version of Cinderella is from the older one. This sentence highlights how Disney made Cinderella appear more sweet and gentle, which aligned with American culture at the time and portrayed women as submissive.

    3. The idea in the West is to make a product which will sell well.

      It is incredibly interesting to see how society influences storytelling. With each translation, the story of Aschenputtel/Cinderella was altered to fit the culture and society. Though I knew there was a distinction between the American and original German depictions of the same story, I never thought about why and how this came to be. After reflection, I realize that the American versions, unconsciously or consciously, sell the American dream. They sell the idea that success and a better life are possible for anyone regardless of where they start. The American dream relies on the concept that America is a country of endless opportunities where social and class mobility is possible. The story of Cinderella portrays just that, of course with a bit of a romantic touch: it follows a girl who was poor and miserable but ended up with the prince, jumping social classes.

    1. But each person’s self-concept is also influenced by context, meaning we think differently about ourselves depending on the situation we are in. In some situations, personal characteristics, such as our abilities, personality, and other distinguishing features, will best describe who we are.

      In some places, I can be much more outgoing and seem like I know what I'm doing, but other times it seems like I'm really nervous or have never done this before in my life. When I feel confident in something, I tend to act not necessarily differently, but more like myself. Sometimes I'll think highly of myself, and other times I'll think that I don't deserve to be participating like the others.

    1. I’m sure you have a family member, friend, or coworker with whom you have ideological or political differences.

      When I get into an argument with someone like my boyfriend or my friends, I try to avoid thinking about only my point of view. Sometimes when I argue with my best friend it seems like she doesn't want to put her ideas aside for the benefit of our friendship, but after I talk it through with her, she starts to understand.

    1. In the conflict thus far, success has been on our side

      What is the success? Is it the Confederacy winning the war, or have people united and bought into the Confederacy's message?

    2. our social fabric is firmly planted; and I cannot permit myself to doubt the ultimate success of a full recognition of this principle throughout the civilized and enlightened world.

      I find it so baffling how convinced the Confederate leaders were that their values were commonly shared among all of the South. While some may have believed this, to my understanding many "Confederate"-valued people were more interested in not losing their businesses or jobs as farmers, and not losing the farming empires that were the southern United States.

    3. The architect, in the construction of buildings, lays the foundation with the proper material-the granite-then comes the brick or the marble. The substratum of our society is made of the material fitted by nature for it, and by experience we know that it is the best, not only for the superior but for the inferior race, that it should be so. It is, indeed, in conformity with the Creator.

      Compares society to a building, saying some people are like the granite foundation while others are the “higher” materials like brick or marble. The point being made is that different races supposedly have different “natural” places in society, with some meant to be at the bottom. By bringing in “the Creator,” the writer argues this hierarchy is part of God’s plan. Basically, it’s a way of justifying racism and inequality by making it sound natural, logical, and even moral.

    1. We also organize information that we take in based on difference. In this case, we assume that the item that looks or acts different from the rest doesn’t belong with the group. Perceptual errors involving people and assumptions of difference can be especially awkward, if not offensive.

      I have been around many people who tend to say or do things based on only the information they think they know. When they assume certain things, sometimes they are correct, but other times they are embarrassed when they end up being wrong. This is why I think it is important to not stereotype or assume based off the first glance.

    1. In answering this letter, please state if there would be any safety for my Milly and Jane, who are now grown up, and both good-looking girls. You know how it was with poor Matilda and Catherine. I would rather stay here and starve—and die, if it come to that—than have my girls brought to shame by the violence and wickedness of their young masters. You will also please state if there has been any schools opened for the colored children in your neighborhood. The great desire of my life now is to give my children an education, and have them form virtuous habits.

      His one wish, to educate his children, leaves a sense of hopelessness for himself; he is simply living the life of a parent, doing what is best and necessary for his children's success.

  3. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. He knows that his breakfast depends upon workers on the coffee plantations of Brazil, the citrus groves of Florida, the sugar fields of Cuba, the wheat farms of the Dakotas, the dairies of New York; that it has been assembled by ships, railroads, and trucks, has been cooked with coal from Pennsylvania in utensils made of aluminum, china, steel, and glass.

      To me this symbolizes how one nation's economic growth depends on many other nations through trade.

    2. yourself mine the metals, process them, combine them, and shape them. To mine the metals, you would have to be able to locate them. You would need machinery, which you would have to build yourself

      To gather the tools, raw materials, and metals, machinery was needed. Machinery is deeply integrated into industry; it's an industrial technology that has improved productivity. Stanford states, "The invention of steam power, semi-automated spinning and weaving machines, and other early industrial technologies dramatically increased productivity." (Stanford, pg 44). The scale of industrial technology surpasses an individual worker's capacity. Kling talks about specialization and why individuals are inherently dependent on a broader system, while Stanford goes into why capitalism is necessary to get factories working.

    3. “Filling in Frameworks” wrestles with the misconception that economics is a science. This section looks at the difficulties that economists face in trying to adopt scientific methods. I suggest that economics differs from the natural sciences in that we have to rely much less on verifiable hypotheses and much more on hard-to-verify interpretative frameworks. Economic analysis is a challenge, because judging interpretive frameworks is actually harder than verifying scientific hypotheses.

      I find it interesting that Kling has pointed out in this section the difference between natural sciences and the complexity of economics. Noting that economics is largely interpretative and not always subject to the same verifiable scientific methods of study.

    4. “Instructions and Incentives” deals with the misconception that economic activity is directed by planners. This section explains that although people within a firm are guided to tasks through instruction from managers, the economy as a whole is not coordinated that way. Instead, the price system functions as the coordination mechanism.

      Stanford wants people to learn about economics and focus on deciding what's best for themselves instead of listening to the experts, whereas Kling sees the price system as the main mechanism that organizes the economy.

    5. Scarcity and choice are certainly important concepts, but making them the central focus can lead to economic analysis that is simplistic and mechanistic. In fact, the approach to economics that took hold after World War II treats the economy as a machine governed by equations. Textbooks using that approach purport to offer a repair manual, with policy tools to fix the economic machine when something goes wrong. The mechanistic metaphor is inappropriate and even dangerous. A better metaphor would be that of a rainforest. The economy is a complex, evolving system

      Kling's introduction highlights that, whether we want it or not, the economy relies on many people, and we cannot avoid that. He has a social view of specialization and trade, even when there is no physical contact. Stanford's view is that individuals work with what they have, knowing we don't have unlimited access, and focus more on what is needed most. Both Kling's and Stanford's readings cover production, society, consumers, and the other factors that make up the economy.

    6. Even more striking is the fact that almost everything you consume is something you could not possibly produce. Your daily life depends on the cooperation of hundreds of millions of other people

      Kling states that almost everything we consume is something we could not produce ourselves. This leads me to wonder if there is anything I use or eat every day that I could make on my own. "Your daily life depends on the cooperation of hundreds of millions of other people." Even if I were to consume a meal that I made from scratch, getting the ingredients to me took hours of labor and machines. However, can the people who produce something consume their own work? If a person was part of a car-making project and bought the car after, is that considered consuming your own production? You did not make the whole car, but you played a big part in the process. If we rely on millions of people to make just one item, isn't everyone involved producing something together, leading to consuming your production as a whole? I argue against his framing, since his logic takes the focus away from the workers behind the production process.

    7. If trade entails trust among strangers, then financial intermediation entails trust over time. If people lose trust in financial intermediaries, then financial intermediation can decline precipitously. That sharp decline can have a broad effect on the structure of production in the economy

      Kling says that trade requires trust among strangers: "If trade entails trust among strangers, then financial intermediation entails trust over time." He mentions that if trust in financial intermediaries is lost, it will break the flow of production in an economy. What would happen if we suddenly stopped putting trust in banks or the financial systems that make trade successful? How would employment, businesses, and production be affected?

    8. Instead of headlines, the “crawl” on the TV lists all of the tasks and people needed to produce your breakfast. Your cereal was manufactured in a factory that had a variety of workers and many machines. People had to manage the factory. Organization of the firm required many functions in finance and administration. First, however, people had to build the factory

      Kling speaks about the amount of work and the tools that go into an item that looks simple to make. He says, "We carry on our lives not really conscious of the complexity of that specialization." Many steps and processes line up to produce just a bowl of cereal. What would happen if a step fell out of line or something broke down? Would it slow the production process? And what if, over time, there are fewer workers to keep production moving?

    9. When patterns of specialization become unsustainable, the individuals affected can face periods of unemployment. They are like soldiers waiting for new orders, except that the orders come not from a commanding general but from the decentralized actions of many entrepreneurs testing ideas in search of profit.

      When old job patterns no longer make sense, workers are like soldiers waiting for new orders, except instead of commanding officers, the orders come from entrepreneurs experimenting. That means their livelihood is in the hands of people who are just testing things, with no certainty that a new role will be created. It raises the question of whether this decentralized adjustment makes unemployment longer or less predictable.

    10. Look at the list of ingredients in the cereal. Those ingredients had to be refined and shipped to the cereal manufacturer. Again, those processes required many machines, which in turn had to be manufactured. The cereal grains and other ingredients had to be grown, harvested, and processed. Machines were involved in those processes, and those machines had to be manufactured.

      Machines are such a big part of industry, used for transportation and for manufacturing products, such as cereal, which comes from a chain of production. Every ingredient had to be grown, processed, and transported using machines, and the machines themselves had to be designed and built to keep the industry running. If machines weren't involved at all, would the industry be able to stay afloat?

    1. But in the last analysis, it is the people themselves who are filed away through the lack of creativity, transformation, and knowledge in this (at best) misguided system

      It is so important for teachers to be creative. But the argument that I see from my coworkers every year is that the pay does not equal the work. Sadly, this attitude hurts the students and will trickle down to the community and the future.

    2. This is the "banking" concept of education, in which the scope of action allowed to the students extends only as far as receiving, filing, and storing the deposits.

      This "banking" of information will make the information hard to keep in the brain. Students will forget the information by the next year.

    3. Words are emptied of their concreteness and become a hollow, alienated, and alienating verbosity.

      Students need to know their why for learning and also how it relates to the real world. This shows the importance of education.

    4. The contents, whether values or empirical dimensions of reality, tend in the process of being narrated to become lifeless and petrified.

      It is important as educators to involve dialogue among students. At the end of the day, students should be "tired," not the teacher, because of the energy and conversation during the day.

    1. Acceptable Use of AI in this Course

      Will we go over how to use AI as a tool to assist us in our academic coursework and how to use it properly without violating any policies?

    2. Define important concepts such as: authority, peer review, bias, point of view, editorial process, purpose, audience, information privilege and more.

      This is really useful, mainly because a lot of the time when a professor asks me to find a peer-reviewed article I struggle to find an actually good one, so I can really use the help.

    1. Generative AI models are trained on vast amounts of internet data. This data, while rich in information, contains both accurate and inaccurate content, as well as societal and cultural biases. Since these models mimic patterns in their training data without discerning truth, they can reproduce any

      This part of the text highlights why AI might be inaccurate and biased, stating that it is trained on a large amount of information that can be both accurate and inaccurate. It also emphasizes the purpose of the article.

    2. These generative AI biases can have real-world consequences. For instance, adding biased generative AI to “virtual sketch artist” software used by police departments could “put already over-targeted populations at an even increased risk of harm ranging from physical injury to unlawful imprisonment”

      It is really overwhelming to know how much damage biased and inaccurate AI could cause if we used it in our society.

  4. learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com
    1. therefore of making a provisional determination of the absolute values of the charges carried by the drop

      I don't entirely understand the connection between using Stokes' Law to cancel out the mass from the electric charge equation and proving electric charge values are discrete. Looking at equation 4, there is a constant coefficient to the variable velocity terms but that doesn't really indicate that electric charges are discrete to me.
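
      For reference, here is the standard textbook reconstruction of the argument (a sketch; it may not match Millikan's exact Equation 4). With drop radius a, oil density ρ, air viscosity η, and buoyancy neglected, Stokes drag balances gravity during free fall at speed v1, and under field E the drop rises at speed v2:

      ```latex
      mg = \frac{4}{3}\pi a^{3}\rho g = 6\pi\eta a v_{1}
      \quad\Rightarrow\quad a = \sqrt{\frac{9\eta v_{1}}{2\rho g}},
      \qquad
      qE = mg + 6\pi\eta a v_{2}
      \quad\Rightarrow\quad
      q = \frac{6\pi\eta a\,(v_{1}+v_{2})}{E}.
      ```

      The unmeasurable mass drops out because a (and hence m) is fixed by the measured fall speed v1. Discreteness is not in the equation itself; it emerged empirically when the computed q values clustered at integer multiples of a smallest charge.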

    2. supported by evidence from many sources that all electrical charges,

      I'm curious what sources he's referencing here. What other experiments would have been able to confirm that electric charges are quantized?

    1. Others have been tempted to argue that implicit bias is overrated (maybe even justified) and that minorities simply need to toughen up.

      It is surprising to find that people still try to disprove implicit bias despite the numerous studies and everyday examples clearly observed. Implicit bias is something deep within every person; you will not be aware of it consciously, but your actions will show it. Still, it will be interesting to see the proofs and claims of the people who try to debate this matter.

    2. Take the example of the Müller-Lyer illusion. Your task is to decide whether line A or line B is the longer one.

      This is a great example used in the article to make readers really visualize what it is trying to prove and shed light on, and a good way to grab the reader's attention.

    1. The act of study demands a sense of modesty.

      Learning requires us to be humble in the sense that we can add our own perspectives and challenge readings, yet at the same time accept that we do not know everything and have to be open to collaboration and adapting what we know to maybe fit other points and be comfortable with the unknown.

    2. In fact, a book reflects its author’s confrontation with the world. It expresses this confrontation.

      100 different people can read the same passage and present 100 different perspectives. Lived experiences can lead lessons, readings, books, etc. to all be interpreted differently. That is why the act of memorizing other people's writings and opinions will not further readers' academic journeys.

    3. This critical attitude is precisely what“banking education” does not engender.

      When students study, they are simply absorbing information presented to them. The expectation is for them to blurt it back out for the exam and be ready to memorize a new set of facts. For students to form a relationship with the content, they must be presented the opportunity to ask questions.

    4. If the reader is transformed into a “vessel” filled by extracts from an internalized text

      There is a passive relationship between a reader and the author. Learners are the "vessel" for information, waiting to be filled with the opinions of authors instead of being challenged and wanting to challenge. It critiques readers being empty vessels for someone else's knowledge.

    1. In our survey, respondents most commonly reported using the time they save with AI coding tools to design systems, collaborate, and learn.

      Respondents say their companies are using AI to generate test cases

    2. nearly all of the survey participants reported using AI coding tools both outside of work or at work at some point

      Almost every respondent has used AI coding tools at work

    3. Easier to work with new programming languages, and understand existing codebases.

      AI coding tools make it easy to adopt new programming languages and understand existing code bases.

    1. It is not often that educators are permitted to strike because they are employed by the state and are considered vital to public service.

      I am surprised to learn that you have to ask permission/be permitted to strike. I did not know that some states still have laws against striking.

    2. Therefore, some states have begun to change tenure laws to adhere to the accountability requirements stipulated by the U.S. Department of Education as it relates to teacher evaluation and student achievement.

      Getting rid of tenure in favor of "merit" based protections might sound good in theory, and it is something that the current administration is pushing for but I argue that rewarding teachers with career protections based on "merit" is very subjective and could easily be used by states/districts to discriminate against teachers or support teachers that fit their vision.

    3. (CCSSO, InTASC Standard #9, 2013).

      The majority of these standards allude to teachers having personal responsibility. I think it is good to be mindful of the huge impact teachers, their attitudes, and actions have on students; not only in the classroom, but also on a student's self esteem, future, and overall feelings about learning.

    1. “There has never been a more important time for children to become storytellers, and there have never been so many ways for them to share their stories” (p. 3). Our students and their stories should be an essential part of our teaching. As educators, we need to encourage students to tell their stories and help build community. Each shared story has the potential of teaching us.

      I think it is super important for storytelling to be a part of a child’s curriculum. The mind can develop to a great extent through storytelling. It is a part of daily life that I think is often overlooked or taken for granted.

    2. When students’ lives are taken off the margins and placed in the curriculum, they don’t feel the same need to put down someone else” (p. 7). Students need to feel that their voices matter, that they have a story to contribute or share and that their stories are a rich part of the curriculum

      This is true. If we avoid stereotypes, it invites a more comfortable environment for students to share authentic stories. It relieves the pressures and ideas that certain people have to live up to a specific standard or act a certain way.

    3. Students who search their memories for details about an event as they are telling it orally will later find those details easier to capture in writing

      It can be hard to find the right words when telling a story orally. For me personally, I often struggle to find the right words to describe certain things, or often find myself using the wrong words, so writing it out definitely helps me brainstorm different ways I can describe certain details. It also gives me an opportunity to expand on those details to make the story more captivating or interesting.

    4. “there has probably never been a human society in which people did not tell stories”

      This is fascinating to think about. If you think about native traditions, you will find that most, if not all, come from storytelling. A lot of them are oral traditions, so it’s definitely interesting to think about how far back storytelling dates.

    1. Overall summary: author thinks the one advantage we have over AI is the originality that humans possess and it is critical that we continue to embrace that instead of becoming more like AI.

    2. Having said that, always remember that artificial intelligence is only an assistant; an executive’s value comes from his or her own intelligence.

      Summary: AI is useful for busywork or simple tasks.

    3. That which diverges from the run-of-the-mill is not only valuable; it is indeed becoming invaluable in the age of AI.

      Summary: Breaking rules and being truly original is the one advantage humans have over AI.

    4. Our priority should be to discover and innovate, not imitate neural networks.

      Summary: The author warns against becoming like AI in the process of creating.

    5. We can use AI for unengaging and repetitive tasks, but we should also remember that humaneness is the key to creativity.

      How would this author define humaneness? AI is technically just regurgitating human work, and it was invented by humans.

    1. The Egyptian empires lasted for nearly 2300 years before being conquered, in succession, by the Assyrians, Persians, and Greeks between about 700 BCE and 332 BCE.

      I find it insane that the Egyptian empires lasted this long! I had always heard the claim that empires fall after 250 years, so 2300 years is absolutely wild!

    2. Farming developed in a number of different parts of the ancient world, before the beginning of recorded history. That means it’s very difficult for historians to describe early agricultural societies in as much detail as we’d like. Also, because there are none of the written records historians typically use to understand the past, we rely to a much greater extent on archaeologists, anthropologists, and other specialists for the data that informs our histories. And because the science supporting these fields has advanced rapidly in recent years, our understanding of this prehistoric period has also changed – sometimes abruptly.

      This surprised me a lot; I thought there would be a decent amount of evidence of early farming, and maybe stuff that was written down and annotated. It is still wild that there isn't much known about the early stages of farming!

    1. Temperature also affected the behavioural preferences of the infauna associated with mussels. Polychaetes, crustaceans, and molluscs altered their behaviour to colonise the habitat created by one species of mussel to another. This altered behavioural preference of infauna can be driven by habitat-specific cues and the ability of infauna to make habitat choices

      The authors talked about some behavioral changes in the infauna associated with the mussels. Would the behavioral changes have a positive or negative effect on them or on other species in their environment?

    2. After the 4-week acclimation period, the mussels were defaunated by carefully removing all infauna and separating adult mussels (>1 cm) into 10-cm-diameter clumps (Cole, 2010).

      Would we have seen different results if the acclimation period for the mussels was longer? Or would a longer or shorter acclimation period not really affect the mussels or the results too much?

    3. The outdoor experiment was performed in a purpose-built facility (Pereira et al., 2019) at the Sydney Institute of Marine Science (SIMS), Chowder Bay, Sydney Harbour, New South Wales, Australia. The experiment was performed during the summer peak recruitment period of marine invertebrates in Sydney Harbour.

      Would the researchers get similar or the same results if they did not perform the experiment during the peak recruitment period? How different would the results be if the experiment were run during the low recruitment period?

    4. Previous studies have shown that the loss of a biogenic habitat in an ecosystem can be functionally replaced (or the loss of function is slowed to some extent) by another habitat-forming organism (Nagelkerken et al., 2016; Sunday et al., 2017).

      What would happen if another habitat-forming organism was introduced to the area? Would it benefit the overall ecology of the area, or would it prove detrimental to the organisms that already exist there? Would it be ethical to do this in order to offset the loss of a habitat?

    5. For example, under acidification, fleshy seaweeds outcompete calcareous species

      How would this potential change impact the organisms that rely on the calcareous species for food or protection?

    6. Molluscs actively chose to colonise T. hirsuta and actively avoided M. galloprovincialis, regardless of warming or pCO2 levels (Table 1).

      What caused molluscs to choose to colonize T. hirsuta regardless of warming or pCO2 levels? What deterred them from colonizing M. galloprovincialis?

    7. The native mussel T. hirsuta grew more under warming (Fig. 1; ANOVA Species × Temperature F1,32 = 6.13, P < 0.05; Supplementary Table 2). In contrast, M. galloprovincialis grew the same at ambient and elevated temperatures (Fig. 1; Supplementary Table 2). There was no effect of elevated pCO2 on growth in either of the mussel species (ANOVA CO2 F1,32 = 0.53, P > 0.05; Supplementary Table 2).

      The authors present an interesting point here. The research suggests that temperature is the primary driver of the difference in growth between the native T. hirsuta and M. galloprovincialis. Based on these results, would the pattern hold in another shellfish species with the same temperature tolerance and sensitivity to carbon dioxide?
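
      Not from the paper, but to make the quoted statistic concrete: here is a minimal sketch of the kind of two-way (Species × Temperature) ANOVA reported above, using synthetic data chosen so the residual degrees of freedom match the quoted F(1, 32). The column names and all values are made up.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.formula.api import ols

      rng = np.random.default_rng(1)

      # Synthetic 2x2 design: two species x two temperatures, n = 9 per cell,
      # so residual df = 36 - 4 = 32, as in the quoted F(1, 32).
      species = np.repeat(["T_hirsuta", "M_galloprovincialis"], 18)
      temperature = np.tile(np.repeat(["ambient", "elevated"], 9), 2)
      growth = rng.normal(5.0, 1.0, size=36)
      # Fake the reported pattern: only the native mussel grows more when warm.
      growth[(species == "T_hirsuta") & (temperature == "elevated")] += 2.0

      df = pd.DataFrame({"growth": growth, "species": species,
                         "temperature": temperature})

      model = ols("growth ~ C(species) * C(temperature)", data=df).fit()
      print(sm.stats.anova_lm(model, typ=2))  # interaction row = Species x Temperature
      ```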

    1. All work turned in must adhere to the following format.

      I appreciate the example of the format we are supposed to use. This gives us clear expectations of what you want and can be used all year long.

    2. All assignments for this course must be written and submitted directly in Google Docs

      As a Google Docs lover I am so pumped for this! Most of my other classes have to be submitted through something else and this will be so helpful for me throughout the class.

    3. It places too high of a burden on me to investigate and evaluate possible AI usage instead of focusing on the important educational aspects of the course.

      As a future educator, I find this extremely true. It can be so hard to detect and investigate the use of AI because the output can look so authentic.

    1. The speculative bubble created by railroad financing burst in the Panic of 1873, which began a period called the Long Depression that lasted until nearly the end of the century and was so bad that before the Great Depression of the 1930s the period was known simply as “The Depression”.

      The fact that this was caused by a few things that might've looked inconsequential, or not that big of a deal, all working together to cause one of the most devastating depressions in U.S. history, is striking. It makes me wonder if there was anything they could've done to avoid it.

    2. Nearly 100 Americans died in “The Great Upheaval.” Workers destroyed nearly $40 million worth of property. The strike galvanized the country. It convinced laborers of the need for institutionalized unions, persuaded businesses of the need for even greater political influence and government aid, and foretold a half century of labor conflict in the United States.

      It's striking that workers had to resort to such drastic measures just to get a voice in what they were paid, or even reduced work hours. They destroyed nearly $40 million (about $1,174,720,000 today) worth of property, and there were many casualties. It makes me thankful for the unions we have today, but it also makes me wonder what would happen if something like this occurred in modern times. Would it be as catastrophic, or would the government avoid all of it by complying?

    1. not only can such freedom be granted without prejudice to the public peace, but also, that without such freedom, piety cannot flourish nor the public peace be secure.

      Holland as an example of free speech

    2. How many evils spring from luxury, envy, avarice, drunkenness, and the like, yet these are tolerated

      some things are tolerated now because laws against them cannot be enforced... more evils would come of preventing speech

    3. men would daily be thinking one thing and saying another”—a practice that will weave deceit and hypocrisy into the social fabric, thereby permitting “the avaricious, the flatterers, and other numskulls” to rise to the top.

      only puts the unfit in power

    4. Unlike many earlier defenders of toleration, he did not exclude atheists, Jews, Catholics, and the like.

      so long as your conduct is good, you may believe whatever

    5. The sovereign’s obligation to respect the liberty of his subjects is solely a matter of self-​interest; to mistreat subjects is bound to generate resentment and possibly seditious tendencies, and those sentiments, in turn, will render the sovereign’s authority less secure than it would otherwise be

      mistreating subjects will make them less likely to trust you and thus give you less power?

    1. I mean the pace of the finished film, how the edits speed up or slow down to serve the story, producing a kind of rhythm to the edit.

      This video allows me to connect with the overall rhythm of each shot; some are sped up and others are longer. This helps me understand what rhythm means in a film.

    2. Other ways cinema manipulates time include sequences like flashbacks and flashforwards. Filmmakers use these when they want to show events from a character’s past, or foreshadow what’s coming in the future.

      I've seen this in a lot of films, where they will put the end of the movie at the beginning and then we watch how the story plays out. For example, Fight Club demonstrates a flashforward.

    3. The most obvious example of this is the ellipsis, an edit that slices out time or events we don’t need to see to follow the story. Imagine a scene where a car pulls up in front of a house, then cuts to a woman at the door ringing the doorbell. We don’t need to spend the screen time watching her shut off the car, climb out, shut and lock the door, and walk all the way up to the house.

      I think this saves the director time and the audience's attention. As another example, a person in the film might be eating food and then it cuts to her washing the dishes or to another scene; we don't need to waste time watching that person eat.

    4. He wants you to feel the terror of those peasants being massacred by the troops, even if you don’t completely understand the geography or linear sequence of events. That’s the power of the montage as Eisenstein used it: A collage of moving images designed to create an emotional effect rather than a logical narrative sequence.

      I think this video shows the emotions a lot more than it actually explains the logic behind them.

    5. The audience was projecting their own emotion and meaning onto the actor’s expression because of the juxtaposition of the other images. This phenomenon – how we derive more meaning from the juxtaposition of two shots than from any single shot in isolation – became known as The Kuleshov Effect.

      I can see what the director was trying to get across to the audience; you can see the emotions of the actor in each cut.

    6. Film editing and how it worked on an audience. He had a hunch that the power of cinema was not found in any one shot, but in the juxtaposition of shots. So, he performed an experiment. He cut together a short film and showed it to audiences in 1918. Here’s the film:

      This is interesting because advancements in technology have created films just like this, and the dynamics and editing skills are so much clearer and more advanced now than they were back then.

    7. but it is the juxtaposition of that word (or shot) in a sentence (or scene) that gives it its full power to communicate. As such, editing is fundamental to how cinema communicates with an audience.

      I do think that grammar and editing words into the film allow the director to connect with the audience.

    8. The filmmakers behind Deadpool (2016), for example, shot 555 hours of raw footage for a final film of just 108 minutes. That’s a shooting ratio of 308:1. It would take 40 hours a week for 14 weeks just to watch all of the raw footage, much less select and arrange it all into an edited film![2]

      This is a lot of retakes, and 555 hours of footage seems a bit overwhelming. I don't think I would have the patience to look over the footage for 40 hours a week over 14 weeks. This shows huge dedication from the director.
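
      Just to sanity-check the quoted figures (my own arithmetic, not the author's):

      ```python
      raw_hours = 555        # raw footage shot for Deadpool (2016)
      film_minutes = 108     # final runtime

      ratio = raw_hours * 60 / film_minutes  # raw minutes per finished minute
      weeks = raw_hours / 40                 # viewing time at 40 hours per week
      print(round(ratio), round(weeks, 1))   # -> 308 13.9
      ```

      So the 308:1 shooting ratio and the roughly 14 weeks of viewing both check out.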

    9. When the screenwriter hands the script off to the director, it is no longer a literary document, it’s a blueprint for a much larger, more complex creation. The production process is essentially an act of translation, taking all of those words on the page and turning them into shots, scenes and sequences.

      I never knew that once you hand over a script to the director it becomes a blueprint. I also never knew this process of turning a script into shots was called an act of translation.

    1. While navigating through the text, you’ll notice that the major part of the text you’re working within is identified at the top of the page

      This will be helpful, as it will save me time finding the correct section I am working through.

    1. The special effects make-up for the gory bits of your favorite horror films can sometimes take center stage.

      The special effects create better scenes in films like horror movies. This can create a better experience for the audience as well.

    1. The dataset was normalized to 10000 counts per cell, Log1p transformed and filtered to contain 2000 highly variable genes. The first important observation is that state-of-the-art approaches, except CPM

      Does marker‑gene expression change monotonically along the CPM geodesic from root to leaf?
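
      The excerpt doesn't say which toolkit was used, but the described preprocessing maps directly onto standard scanpy calls. A minimal sketch, assuming the data is available as an AnnData file (the file name is a placeholder):

      ```python
      import scanpy as sc

      adata = sc.read_h5ad("dataset.h5ad")  # hypothetical input file

      sc.pp.normalize_total(adata, target_sum=1e4)           # 10,000 counts per cell
      sc.pp.log1p(adata)                                      # Log1p transform
      sc.pp.highly_variable_genes(adata, n_top_genes=2000)    # flag 2,000 HVGs
      adata = adata[:, adata.var["highly_variable"]].copy()   # keep only those genes
      ```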

    1. you will not benefit fully from this class.

      Again, this defeats the purpose of paying for education. If you are going to rely on AI rather than prioritizing learning, what is the point of school and learning environments?

    2. drought and global warming

      many who consider themselves to be environmental advocates (knowingly and unknowingly) partake in harmful activities in the name of convenience

    1. You observed that for ambiguous cases or high-levels of missing data, the model tended to predict the PUR population, suggesting it acts as a "default". Since PUR is an admixed population, does this imply the model learns that a state of high uncertainty or mixed/missing signals is most characteristic of admixed genomes in the training set? Could this "default" behavior be mitigated by training with a null or "uncertain" class?