Work that involves summarizing large amounts of information
I have found that AI isn't very helpful for summaries, in my personal use.
it can be transformative yet must be approached with skepticism,
I'm very skeptical of AI, but I guess I could be more open-minded while still holding onto some skepticism.
2
5
5
2
5
5
5
4
4
5
1
2
4
5
5
4
3
1
4
5
5
3
3
1
3
0
2
0
3
5
3
4
2
5
5
5
________________________________________________________
Yes, I am confident in handling any possible difficulties.
________________________________________________________
Time management
________________________________________________________
I am not sure yet, because I have not decided what I want to do.
________________________________________________________
I plan to be in college for 4 years.
This Study
Do you mean 'this study' (lowercase)?
chart of the adapted kinetics experiment after Flossmann & Richter.
Why do you stir for 90 minutes but take the last sample at 60 min? Maybe stir only until the last measurement?
Unlike the original protocol, a pre-washing step to remove soluble P was not performed
But in the flowchart, the supernatant fluid is also discarded. Isn't this a pre-washing step?
Switzerland
Suggestion: across the Swiss Plateau
fixed-plot t
fixed-plot or fixed plot?
To manage this challenge,
What challenge? The challenge of disentangling different P fates? Introduce it briefly.
The efficacy of P fertilization is often low due to these rapid and competing immobilization processes, and P lost from agricultural fields can become an environmental pollutant, disturbing P-limited aquatic ecosystem
It's a repetition of the previous paragraph.
The efficacy of P fertilization is often low due to these rapid immobilization processes, and P lost from agricultural fields can become an environmental pollutant, disturbing P-limited aquatic ecosystems.
Proposition: Make it two sentences and give a little more specifying detail on the second point, as it is not clear to me whether excess soluble or fixed P becomes the problem. Also, the second point could be better introduced, as it is not an evident consequence of the preceding content.
The efficacy of P fertilization is often low due to these rapid immobilization processes. Also, soluble/immobilized P lost from agricultural fields can become an environmental pollutant, disturbing P-limited aquatic ecosystems.
you need to understand parts of culture more deeply than everyone else using the same AI systems
Using AI technology requires effort and prior knowledge of the subject in order to get a more thoughtful response. However, many use these technologies to avoid critical and analytical thinking.
It shows you, roughly, what is in the 12M images that make up a key database used by AI image generators:
Is this just art available online? Where do the 12M images come from?
ChatGPT cannot seem to remember or understand
Entering a prompt that includes a word ChatGPT is unfamiliar with may result in a misleading or completely incorrect response.
they don’t have your voice, your thoughts, your accent
This is why I personally never use AI to generate writing: I enjoy using my own voice.
You must demonstrate curiosity about new subjects and perspectives and be willing to exert time and energy to pursue that curiosity
Find new takeaways from readings and articles; these will not only help you remember key details but also allow you to grow in the course material!
Your comments to colleagues must be focused, specific, and constructive
It's important to build on what others say in their responses and go deeper into the topic of discussion, while also learning new things and seeing what others' opinions are!
You must demonstrate leadership abilities in small group discussion, but balance this with an awareness that the quality (rather than the quantity) of speaking and writing completed in a term is the real hallmark of excellence.
I completely agree with this: one of the most important things we learn throughout our college careers is the value of engaging with what we already know and drawing on that in our responses, rather than focusing on how long a post is.
The free-software movement is driving legions of programmers to create thousands of open-source projects, including operating systems. Sites like http://freshmeat.net/ and http://distrowatch.com/ provide portals to many of these projects. As we stated earlier, open-source projects enable students to use source code as a learning tool. They can modify programs
The free software movement encourages programmers to create many open source projects, including operating systems. These projects are helpful for students because they can read the source code, learn from it, and even change it to practice. Websites like Freshmeat.net and DistroWatch.com are very useful because they collect and share lots of information about open-source software. Freshmeat.net lets people find new software updates, while DistroWatch.com gives details and comparisons of different Linux distributions. Both websites are good resources for learning, exploring, and supporting the open source community.
Finally, blade servers are systems in which multiple processor boards, I/O boards, and networking boards are placed in the same chassis. The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system. Some blade-server boards are multiprocessor as well, which blurs the lines between types of computers. In essence, these servers consist of multiple independent multiprocessor systems.
This passage explains blade servers, where multiple processor, I/O, and networking boards reside in a single chassis. Each blade boots independently and runs its own OS. Some blades are multiprocessor systems, effectively creating multiple independent multiprocessor systems within one chassis, blurring traditional system classifications.
A potential drawback with a NUMA system is increased latency when a CPU must access remote memory across the system interconnect, creating a possible performance penalty. In other words, for example, CPU0 cannot access the local memory of CPU3 as quickly as it can access its own local memory, slowing down performance. Operating systems can minimize this NUMA penalty through careful CPU scheduling and memory management, as discussed in Section 5.5.2 and Section 10.5.4. Because NUMA systems can scale to accommodate a large number of processors, they are becoming increasingly popular on servers as well as high-performance computing systems.
This section highlights a possible limitation of NUMA architectures: retrieving remote memory via the system interconnect incurs greater latency than accessing local memory. Operating systems can mitigate this penalty through careful CPU scheduling and memory management. Nonetheless, NUMA is used increasingly in servers and high-performance computing because of its scalability.
Adding additional CPUs to a multiprocessor system will increase computing power; however, as suggested earlier, the concept does not scale very well, and once we add too many CPUs, contention for the system bus becomes a bottleneck and performance begins to degrade. An alternative approach is instead to provide each CPU (or group of CPUs) with its own local memory that is accessed via a small, fast local bus. The CPUs are connected by a shared system interconnect, so that all CPUs share one physical address space. This approach—known as non-uniform memory access, or NUMA—is illustrated in Figure 1.10. The advantage is that, when a CPU accesses its local memory, not only is it fast, but there is also no contention over the system interconnect. Thus, NUMA systems can scale more effectively as more processors are added.
This section describes the NUMA (non-uniform memory access) approach, which tackles scaling constraints in multiprocessor systems. Every CPU (or group of CPUs) has its own local memory, accessed through a fast local bus, minimizing contention. All CPUs share a system interconnect that provides one physical address space, enhancing scalability and performance as additional processors are added.
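On Linux, this kind of node-local placement can be requested explicitly through libnuma. A minimal sketch, assuming libnuma is installed (link with -lnuma); the choice of node 0 is purely illustrative:

```c
#include <numa.h>    /* libnuma; link with -lnuma */
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        puts("NUMA not supported on this system");
        return 1;
    }
    /* Allocate 1 MiB of memory local to NUMA node 0, so a CPU on
       that node can access it without crossing the interconnect. */
    size_t len = 1 << 20;
    void *buf = numa_alloc_onnode(len, 0);
    if (buf != NULL) {
        /* ... use buf from threads pinned to node 0 ... */
        numa_free(buf, len);
    }
    return 0;
}
```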
In Figure 1.9, we show a dual-core design with two cores on the same processor chip. In this design, each core has its own register set, as well as its own local cache, often known as a level 1, or L1, cache. Notice, too, that a level 2 (L2) cache is local to the chip but is shared by the two processing cores. Most architectures adopt this approach, combining local and shared caches, where local, lower-level caches are generally smaller and faster than higher-level shared caches. Aside from architectural considerations, such as cache, memory, and bus contention, a multicore processor with N cores appears to the operating system as N standard CPUs. This characteristic puts pressure on operating-system designers—and application programmers—to make efficient use of these processing cores, an issue we pursue in Chapter 4. Virtually all modern operating systems—including Windows, macOS, and Linux, as well as Android and iOS mobile systems—support multicore SMP systems.
This excerpt outlines a dual-core processor design in which each core has its own registers and L1 cache while both share a common L2 cache. A multicore processor appears to the OS as several CPUs, necessitating careful resource and process management for efficiency. Most contemporary operating systems support this multicore SMP architecture.
The definition of multiprocessor has evolved over time and now includes multicore systems, in which multiple computing cores reside on a single chip. Multicore systems can be more efficient than multiple chips with single cores because on-chip communication is faster than between-chip communication. In addition, one chip with multiple cores uses significantly less power than multiple single-core chips, an important issue for mobile devices as well as laptops.
This passage explains that multicore systems are now considered multiprocessors. Multiple cores on a single chip improve efficiency due to faster on-chip communication and lower power consumption compared with multiple single-core chips, making them ideal for mobile devices and laptops.
The benefit of this model is that many processes can run simultaneously—N processes can run if there are N CPUs—without causing performance to deteriorate significantly. However, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. These inefficiencies can be avoided if the processors share certain data structures. A multiprocessor system of this form will allow processes and resources—such as memory—to be shared dynamically among the various processors and can lower the workload variance among the processors. Such a system must be written carefully, as we shall see in Chapter 5 and Chapter 6.
This excerpt emphasizes the benefits and difficulties of multiprocessor systems. Although multiple CPUs enable concurrent execution of processes, separate processors may end up with uneven load distribution. Dynamic sharing of data structures and resources helps even out the workload, yet it demands careful design to prevent inefficiencies and uphold system stability.
The most common multiprocessor systems use symmetric multiprocessing (SMP), in which each peer CPU processor performs all tasks, including operating-system functions and user processes. Figure 1.8 illustrates a typical SMP architecture with two processors, each with its own CPU. Notice that each CPU processor has its own set of registers, as well as a private—or local—cache. However, all processors share physical memory over the system bus.
This passage introduces symmetric multiprocessing (SMP), where each CPU handles both OS and user tasks. Each processor has its own registers and local cache, but all share the system’s physical memory, allowing coordinated access and parallel execution.
On modern computers, from mobile devices to servers, multiprocessor systems now dominate the landscape of computing. Traditionally, such systems have two (or more) processors, each with a single-core CPU. The processors share the computer bus and sometimes the clock, memory, and peripheral devices. The primary advantage of multiprocessor systems is increased throughput. That is, by increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however; it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors.
This passage discusses multiprocessor systems, which use two or more CPUs to increase overall throughput. While adding processors can speed up work, the speed-up is less than linear due to overhead in coordination and contention for shared resources like memory and buses.
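The sub-linear speed-up the passage describes is commonly formalized by Amdahl's law (the passage itself does not name it). A short worked version, where p is the fraction of the work that can run in parallel and N is the processor count:

```latex
% Amdahl's law: upper bound on speed-up with N processors,
% where p is the fraction of work that can run in parallel.
\[
  S(N) \;=\; \frac{1}{(1 - p) + \frac{p}{N}}
\]
% Example: p = 0.9, N = 8  =>  S = 1 / (0.1 + 0.1125) \approx 4.7,
% well below the ideal speed-up of 8.
```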
All of these special-purpose processors run a limited instruction set and do not run processes. Sometimes, they are managed by the operating system, in that the operating system sends them information about their next task and monitors their status. For example, a disk-controller microprocessor receives a sequence of requests from the main CPU core and implements its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU. In other systems or circumstances, special-purpose processors are low-level components built into the hardware. The operating system cannot communicate with these processors; they do their jobs autonomously. The use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor. If there is only one general-purpose CPU with a single processing core, then the system is a single-processor system. According to this definition, however, very few contemporary computer systems are single-processor systems.
This text describes the function of special-purpose processors in single-processor systems. These processors manage particular functions (e.g., disk scheduling, keyboard input) either with OS oversight or autonomously. Their presence does not convert the system into a multiprocessor configuration, which requires multiple general-purpose CPUs. By this definition, very few contemporary computers are single-processor systems anymore.
Many years ago, most computer systems used a single processor containing one CPU with a single processing core. The core is the component that executes instructions and registers for storing data locally. The one main CPU with its core is capable of executing a general-purpose instruction set, including instructions from processes. These systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers.
This passage describes single-processor systems, where one CPU with a single core executes general-purpose instructions. These systems may also include special-purpose processors for tasks like disk, keyboard, or graphics control, supplementing the main CPU.
Interrupts are an important part of a computer architecture. Each computer design has its own interrupt mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt service routine. The straightforward method for managing this transfer would be to invoke a generic routine to examine the interrupt information. The routine, in turn, would call the interrupt-specific handler. However, interrupts must be handled quickly, as they occur very frequently. A table of pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine is called indirectly through the table, with no intermediate routine needed. Generally, the table of pointers is stored in low memory (the first hundred or so locations). These locations hold the addresses of the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is then indexed by a unique number, given with the interrupt request, to provide the address of the interrupt service routine for the interrupting device. Operating systems as different as Windows and UNIX dispatch interrupts in this manner.
This section explains how interrupts are handled efficiently. Rather than invoking a generic routine for every interrupt, the CPU uses the interrupt vector, a table of pointers in low memory, to quickly locate and execute the relevant interrupt service routine. This approach enables the rapid handling of frequent device requests, a strategy employed by operating systems as different as Windows and UNIX.
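To make the table-of-pointers idea concrete, here is a minimal C sketch of vector dispatch; all names (interrupt_vector, keyboard_isr, dispatch) are hypothetical stand-ins, not code from the passage:

```c
#include <stdio.h>

#define VECTOR_SIZE 256

/* An interrupt service routine takes no arguments and returns nothing. */
typedef void (*isr_t)(void);

/* The interrupt vector: a table of pointers to service routines,
   indexed by the interrupt number supplied with the request. */
static isr_t interrupt_vector[VECTOR_SIZE];

static void keyboard_isr(void) { puts("keyboard serviced"); }
static void disk_isr(void)     { puts("disk serviced"); }

/* Dispatch: conceptually what the CPU does when an interrupt with
   number `irq` is raised -- no intermediate routine needed. */
static void dispatch(unsigned irq) {
    if (irq < VECTOR_SIZE && interrupt_vector[irq] != NULL)
        interrupt_vector[irq]();          /* indirect call through the table */
}

int main(void) {
    interrupt_vector[1]  = keyboard_isr;  /* install handlers */
    interrupt_vector[14] = disk_isr;
    dispatch(14);                         /* simulate a disk interrupt */
    return 0;
}
```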
In Section 1.2, we introduced the general structure of a typical computer system. A computer system can be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors used.
This passage introduces the idea that computer systems can be organized based on processor count. The structure of a system—single-processor or multi-processor—affects how resources are managed and how tasks are executed concurrently.
Recall from the beginning of this section that a general-purpose computer system consists of multiple devices, all of which exchange data via a common bus. The form of interrupt-driven I/O described in Section 1.2.1 is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such as NVS I/O. To solve this problem, direct memory access (DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from the device and main memory, with no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is performing these operations, the CPU is available to accomplish other work.
This text describes Direct Memory Access (DMA), enhancing efficiency for large data transfers. Rather than having the CPU manage each byte (causing numerous interrupts), the device controller moves whole blocks of data straight between the device and memory. This minimizes CPU load and enables it to handle additional tasks during the transfer.
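A hedged sketch of the setup the passage describes; the register layout and names here are invented for illustration, since real DMA programming is entirely device-specific:

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers; the layout
   is an illustrative assumption, not a real device. */
typedef struct {
    volatile uint64_t buffer_addr;  /* physical address of the buffer */
    volatile uint32_t byte_count;   /* size of the block to transfer  */
    volatile uint32_t command;      /* write 1 here to start the DMA  */
} dma_regs_t;

#define DMA_START 1u

/* Set up buffer, pointer, and counter, then start the transfer.
   The controller moves the whole block with no CPU intervention
   and raises a single completion interrupt per block. */
static void dma_read_block(dma_regs_t *regs, void *buf, uint32_t len) {
    regs->buffer_addr = (uint64_t)(uintptr_t)buf;
    regs->byte_count  = len;
    regs->command     = DMA_START;
    /* The CPU is now free to do other work until the completion
       interrupt tells the device driver the block has arrived. */
}
```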
A large portion of operating system code is dedicated to managing I/O, both because of its importance to the reliability and performance of a system and because of the varying nature of the devices.
This passage highlights that a significant part of an operating system focuses on I/O management. The diversity of devices and the critical role of I/O in system performance and reliability make it a major responsibility of the OS.
The design of a complete storage system must balance all the factors just discussed: it must use only as much expensive memory as necessary while providing as much inexpensive, nonvolatile storage as possible. Caches can be installed to improve performance where a large disparity in access time or transfer rate exists between two components.
This text explains that designing a storage system means balancing cost, speed, and capacity. Expensive, fast memory is used sparingly, while larger, cheaper nonvolatile storage holds the bulk of the data. Caches bridge large disparities in access time or transfer rate between components to improve performance.
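One standard way to quantify what such a cache buys (not spelled out in the passage) is the effective access time; here h is the hit ratio, and the miss path is assumed to cost the cache probe plus the slower access:

```latex
% Effective access time with a cache of access time t_c in front of
% a slower level of access time t_m, given hit ratio h:
\[
  t_{\text{eff}} \;=\; h\,t_c + (1 - h)\,(t_c + t_m)
\]
% Illustrative numbers: t_c = 10 ns, t_m = 100 ns, h = 0.95
% => t_eff = 0.95(10) + 0.05(110) = 15 ns, far closer to the
% cache's speed than to the slower level's.
```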
Electrical. A few examples of such storage systems are flash memory, FRAM, NRAM, and SSD. Electrical storage will be referred to as NVM. If we need to emphasize a particular type of electrical storage device (for example, SSD), we will do so explicitly.
This passage describes electrical (nonvolatile) storage, including flash memory, FRAM, NRAM, and SSDs, collectively referred to as NVM. These devices store data electronically without moving parts, offering faster access than mechanical storage.
Mechanical. A few examples of such storage systems are HDDs, optical disks, holographic storage, and magnetic tape. If we need to emphasize a particular type of mechanical storage device (for example, magnetic tape), we will do so explicitly.
This passage introduces mechanical storage devices, which include HDDs, optical disks, holographic storage, and magnetic tape. These devices typically rely on moving parts and are distinguished from semiconductor-based memory by their mechanical operation and generally slower speed.
The top four levels of memory in the figure are constructed using semiconductor memory, which consists of semiconductor-based electronic circuits. NVM devices, at the fourth level, have several variants but in general are faster than hard disks. The most common form of NVM device is flash memory, which is popular in mobile devices such as smartphones and tablets. Increasingly, flash memory is being used for long-term storage on laptops, desktops, and servers as well.
This passage describes semiconductor-based memory in the top levels of the storage hierarchy. Nonvolatile memory (NVM), particularly flash memory, is faster than hard disks and widely used in mobile devices, with growing adoption in laptops, desktops, and servers for long-term storage.
The wide variety of storage systems can be organized in a hierarchy (Figure 1.6) according to storage capacity and access time. As a general rule, there is a trade-off between size and speed, with smaller and faster memory closer to the CPU. As shown in the figure, in addition to differing in speed and capacity, the various storage systems are either volatile or nonvolatile. Volatile storage, as mentioned earlier, loses its contents when the power to the device is removed, so data must be written to nonvolatile storage for safekeeping.
This part outlines the storage hierarchy, which ranks memory by speed and capacity. Faster, smaller memory sits closer to the CPU, while slower, larger memory sits farther away. Storage is either volatile (loses its contents when power is removed) or nonvolatile (retains them), which is why essential data must be written to nonvolatile storage for safekeeping.
In a larger sense, however, the storage structure that we have described—consisting of registers, main memory, and secondary storage—is only one of many possible storage system designs. Other possible components include cache memory, CD-ROM or blu-ray, magnetic tapes, and so on. Those that are slow enough and large enough that they are used only for special purposes—to store backup copies of material stored on other devices, for example—are called tertiary storage. Each storage system provides the basic functions of storing a datum and holding that datum until it is retrieved at a later time. The main differences among the various storage systems lie in speed, size, and volatility.
This text expands the perspective on storage architectures beyond just registers, primary memory, and auxiliary storage. It presents tertiary storage (e.g., magnetic tapes, CD-ROMs) utilized for backup or specific uses. All storage systems have the fundamental role of keeping data, yet they vary in speed, capacity, and volatility.
All forms of memory provide an array of bytes. Each byte has its own address. Interaction is achieved through a sequence of load or store instructions to specific memory addresses. The load instruction moves a byte or word from main memory to an internal register within the CPU, whereas the store instruction moves the content of a register to main memory. Aside from explicit loads and stores, the CPU automatically loads instructions from main memory for execution from the location stored in the program counter.
This passage explains how memory is organized and accessed. Memory is an array of bytes, each with a unique address. The CPU uses load instructions to bring data from memory into registers and store instructions to write data back. Instructions themselves are also automatically loaded from memory based on the program counter for execution.
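A minimal C illustration of this load/store interaction; the comments describe the conceptual instruction-level behavior, though an optimizing compiler may keep values in registers throughout:

```c
#include <stdio.h>

int main(void) {
    int memory_cell = 41;          /* a location in byte-addressable main memory */
    int *addr = &memory_cell;      /* its address */

    int reg = *addr;               /* load: memory -> CPU register */
    reg = reg + 1;                 /* computation happens in registers */
    *addr = reg;                   /* store: register -> memory */

    printf("%d\n", memory_cell);   /* prints 42 */
    return 0;
}
```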
Computers use other forms of memory as well. For example, the first program to run on computer power-on is a bootstrap program, which then loads the operating system. Since RAM is volatile—loses its content when power is turned off or otherwise lost—we cannot trust it to hold the bootstrap program. Instead, for this and some other purposes, the computer uses electrically erasable programmable read-only memory (EEPROM) and other forms of firmware—storage that is infrequently written to and is nonvolatile. EEPROM can be changed but cannot be changed frequently. In addition, it is low speed, and so it contains mostly static programs and data that aren't frequently used. For example, the iPhone uses EEPROM to store serial numbers and hardware information about the device.
This text discusses nonvolatile memory such as EEPROM and additional firmware, which preserves information even when the power is turned off. Due to the volatility of RAM, essential programs such as the bootstrap program and static data (e.g., device serial numbers) are kept in EEPROM. Despite being slow and rarely updated, this memory is crucial for system startup and permanent configuration information.
The CPU can load instructions only from memory, so any programs must first be loaded into memory to run. General-purpose computers run most of their programs from rewritable memory, called main memory (also called random-access memory, or RAM). Main memory commonly is implemented in a semiconductor technology called dynamic random-access memory (DRAM).
This passage explains that the CPU executes programs directly from memory, so programs must be loaded into main memory (RAM) before running. Main memory is typically implemented using DRAM, which allows fast, rewritable access for general-purpose computing.
In summary, interrupts are used throughout modern operating systems to handle asynchronous events (and for other purposes we will discuss throughout the text). Device controllers and hardware faults raise interrupts. To enable the most urgent work to be done first, modern computers use a system of interrupt priorities. Because interrupts are used so heavily for time-sensitive processing, efficient interrupt handling is required for good system performance.
This text outlines the function of interrupts in contemporary operating systems. Interrupts enable the system to react to asynchronous events, such as device signals and hardware errors. By assigning priority levels, urgent tasks are addressed first, and effective interrupt management is vital for preserving overall system efficiency.
The interrupt mechanism also implements a system of interrupt priority levels. These levels enable the CPU to defer the handling of low-priority interrupts without masking all interrupts and makes it possible for a high-priority interrupt to preempt the execution of a low-priority interrupt.
This passage describes interrupt priority levels, which allow the CPU to handle more critical interrupts first. Low-priority interrupts can be deferred without blocking all others, while high-priority interrupts can preempt lower-priority ones, ensuring timely responses to urgent events.
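A tiny C sketch of the masking idea under an assumed interrupt-priority-level (IPL) scheme; the names and the scheme's details are illustrative, not from the passage:

```c
#include <stdbool.h>

static unsigned current_ipl = 0;   /* current interrupt priority level */

/* Accept an interrupt only if its priority exceeds the current level;
   equal- and lower-priority requests stay pending (deferred, not lost). */
static bool accept_interrupt(unsigned irq_priority) {
    return irq_priority > current_ipl;
}

/* A handler would raise current_ipl to its own level while running,
   masking lower priorities but still allowing a higher-priority
   device to preempt it. */
```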
Recall that the purpose of a vectored interrupt mechanism is to reduce the need for a single interrupt handler to search all possible sources of interrupts to determine which one needs service. In practice, however, computers have more devices (and, hence, interrupt handlers) than they have address elements in the interrupt vector. A common way to solve this problem is to use interrupt chaining, in which each element in the interrupt vector points to the head of a list of interrupt handlers. When an interrupt is raised, the handlers on the corresponding list are called one by one, until one is found that can service the request. This structure is a compromise between the overhead of a huge interrupt table and the inefficiency of dispatching to a single interrupt handler.
This passage explains vectored interrupts and how they handle multiple devices efficiently. Since there are often more devices than vector entries, interrupt chaining links multiple handlers to a single vector entry. When an interrupt occurs, handlers are checked in order until the correct one services the request, balancing speed and memory usage.
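A small C sketch of interrupt chaining, reusing the hypothetical vector from the earlier sketch; each vector slot heads a linked list of handlers polled until one claims the request:

```c
/* A chained handler returns nonzero if its device raised the interrupt. */
typedef struct handler {
    int (*service)(void);
    struct handler *next;
} handler_t;

#define VECTOR_SIZE 256
static handler_t *chain_vector[VECTOR_SIZE];  /* each entry heads a list */

/* Install a handler at the head of the chain for vector entry irq. */
static void install(unsigned irq, handler_t *h) {
    h->next = chain_vector[irq % VECTOR_SIZE];
    chain_vector[irq % VECTOR_SIZE] = h;
}

/* Walk the list for this vector entry until some handler services it. */
static void dispatch_chained(unsigned irq) {
    for (handler_t *h = chain_vector[irq % VECTOR_SIZE]; h; h = h->next)
        if (h->service())
            return;   /* claimed and serviced; stop walking the chain */
}
```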
The basic interrupt mechanism works as follows. The CPU hardware has a wire called the interrupt-request line that the CPU senses after executing every instruction. When the CPU detects that a controller has asserted a signal on the interrupt-request line, it reads the interrupt number and jumps to the interrupt-handler routine by using that interrupt number as an index into the interrupt vector. It then starts execution at the address associated with that index. The interrupt handler saves any state it will be changing during its operation, determines the cause of the interrupt, performs the necessary processing, performs a state restore, and executes a return_from_interrupt instruction to return the CPU to the execution state prior to the interrupt. We say that the device controller raises an interrupt by asserting a signal on the interrupt request line, the CPU catches the interrupt and dispatches it to the interrupt handler, and the handler clears the interrupt by servicing the device. Figure 1.4 summarizes the interrupt-driven I/O cycle.
This section describes the I/O cycle driven by interrupts. The CPU tracks an interrupt-request line; when a device sends a signal, the CPU refers to the interrupt number to find the matching interrupt handler in the interrupt vector. The handler preserves the existing state, manages the interrupt, reinstates the state, and hands control back to the interrupted program, ensuring smooth functionality.
The interrupt architecture must also save the state information of whatever was interrupted, so that it can restore this information after servicing the interrupt. If the interrupt routine needs to modify the processor state—for instance, by modifying register values—it must explicitly save the current state and then restore that state before returning. After the interrupt is serviced, the saved return address is loaded into the program counter, and the interrupted computation resumes as though the interrupt had not occurred.
This passage explains that interrupt handling requires saving and restoring CPU state. When an interrupt occurs, the current execution context—like register values and the program counter—is saved so that after the interrupt service routine runs, the CPU can resume the original task seamlessly, as if nothing had been interrupted.
When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The fixed location usually contains the starting address where the service routine for the interrupt is located. The interrupt service routine executes; on completion, the CPU resumes the interrupted computation. A timeline of this operation is shown in Figure 1.3.
This text outlines the procedure for handling interrupts. When the CPU receives an interrupt, it stops its current task, transfers execution to the fixed address of the interrupt service routine, and resumes the interrupted computation once the routine completes. This mechanism guarantees prompt responses to hardware and system events.
Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. (There may be many buses within a computer system, but the system bus is the main communications path between the major components.) Interrupts are used for many other purposes as well and are a key part of how operating systems and hardware interact.
This section explains that hardware interrupts are signals sent to the CPU, often via the system bus, to indicate that immediate attention is needed. Interrupts are essential for coordinating actions between the OS and hardware, enabling the system to respond efficiently to events as they occur.
Consider a typical computer operation: a program performing I/O. To start an I/O operation, the device driver loads the appropriate registers in the device controller. The device controller, in turn, examines the contents of these registers to determine what action to take (such as “read a character from the keyboard”). The controller starts the transfer of data from the device to its local buffer. Once the transfer of data is complete, the device controller informs the device driver that it has finished its operation. The device driver then gives control to other parts of the operating system, possibly returning the data or a pointer to the data if the operation was a read. For other operations, the device driver returns status information such as “write completed successfully” or “device busy”. But how does the controller inform the device driver that it has finished its operation? This is accomplished via an interrupt.
This passage describes the sequence of a typical I/O operation. The device driver sets up the device controller, which carries out the action and uses a buffer for data transfer. Once finished, the controller notifies the driver through an interrupt, allowing the OS to process the data or status information. This illustrates how the OS and hardware communicate efficiently.
In the following subsections, we describe some basics of how such a system operates, focusing on three key aspects of the system. We start with interrupts, which alert the CPU to events that require attention. We then discuss storage structure and I/O structure.
This passage introduces the three key aspects of system operation: interrupts, which notify the CPU of events needing attention, storage structure, and I/O structure. These fundamentals provide the foundation for understanding how the operating system coordinates hardware and software activities.
Typically, operating systems have a device driver for each device controller. This device driver understands the device controller and provides the rest of the operating system with a uniform interface to the device. The CPU and the device controllers can execute in parallel, competing for memory cycles. To ensure orderly access to the shared memory, a memory controller synchronizes access to the memory.
This section explains how operating systems manage hardware devices. Each device has a driver that translates between the OS and the device, providing a standard interface. Since the CPU and devices can operate simultaneously and compete for memory, a memory controller coordinates access to prevent conflicts and ensure smooth operation.
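The uniform interface the passage mentions is commonly realized as a table of function pointers that every driver fills in. A hedged C sketch in the spirit of (but not copied from) real interfaces such as Linux's file_operations:

```c
#include <stddef.h>

/* A hypothetical uniform driver interface: the rest of the OS calls
   only these operations, regardless of which device is underneath. */
struct device_ops {
    int  (*open)(void);
    long (*read)(void *buf, size_t len);
    long (*write)(const void *buf, size_t len);
    void (*close)(void);
};

/* Each device driver supplies its own implementations... */
static int  disk_open(void)                         { return 0; }
static long disk_read(void *buf, size_t len)        { (void)buf; return (long)len; }
static long disk_write(const void *buf, size_t len) { (void)buf; return (long)len; }
static void disk_close(void)                        { }

/* ...and registers them behind the same interface, so the OS can
   treat a disk like any other device. */
static const struct device_ops disk_driver = {
    .open  = disk_open,  .read  = disk_read,
    .write = disk_write, .close = disk_close,
};
```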
Today, however, if we look at operating systems for mobile devices, we see that once again the number of features constituting the operating system is increasing. Mobile operating systems often include not only a core kernel but also middleware—a set of software frameworks that provide additional services to application developers. For example, each of the two most prominent mobile operating systems—Apple's IOS and Google's Android—features a core kernel along with middleware that supports databases, multimedia, and graphics (to name only a few).
This section highlights how modern mobile operating systems have grown more feature-rich. Beyond the core kernel, they include middleware—software frameworks that provide additional services like databases, multimedia, and graphics—helping developers create sophisticated applications and enhancing the overall functionality of the device.
The matter of what constitutes an operating system became increasingly important as personal computers became more widespread and operating systems grew increasingly sophisticated. In 1998, the United States Department of Justice filed suit against Microsoft, in essence claiming that Microsoft included too much functionality in its operating systems and thus prevented application vendors from competing. (For example, a web browser was an integral part of Microsoft's operating systems.) As a result, Microsoft was found guilty of using its operating-system monopoly to limit competition.
This passage illustrates how the scope of an operating system can carry legal and economic implications. As OSs became more sophisticated, Microsoft was sued for bundling extra functionality, like a web browser, which limited competition. It shows that what is included in an OS can affect both market dynamics and regulatory scrutiny.
In addition, we have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order “the operating system.” The features included, however, vary greatly across systems. Some systems take up less than a megabyte of space and lack even a full-screen editor, whereas others require gigabytes of space and are based entirely on graphical windowing systems. A more common definition, and the one that we usually follow, is that the operating system is the one program running at all times on the computer—usually called the kernel. Along with the kernel, there are two other types of programs: system programs, which are associated with the operating system but are not necessarily part of the kernel, and application programs, which include all programs not associated with the operation of the system.
This section highlights that the boundaries of an operating system are not universally agreed upon. While some view it as everything a vendor ships as "the operating system," a more practical definition focuses on the kernel, the core program that is always running. Surrounding the kernel are system programs (supporting the OS) and application programs (serving users), showing that the OS is both central to and part of a larger software ecosystem.
How, then, can we define what an operating system is? In general, we have no completely adequate definition of an operating system. Operating systems exist because they offer a reasonable way to solve the problem of creating a usable computing system. The fundamental goal of computer systems is to execute programs and to make solving user problems easier. Computer hardware is constructed toward this goal. Since bare hardware alone is not particularly easy to use, application programs are developed. These programs require certain common operations, such as those controlling the I/O devices. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system.
This passage explains that while there’s no single perfect definition of an operating system, it exists to make computing systems usable and efficient. By combining common functions like resource allocation and I/O control into one software layer, the OS simplifies program execution and helps users solve problems effectively.
To explain this diversity, we can turn to the history of computers. Although computers have a relatively short history, they have evolved rapidly. Computing started as an experiment to determine what could be done and quickly moved to fixed-purpose systems for military uses, such as code breaking and trajectory plotting, and governmental uses, such as census calculation. Those early computers evolved into general-purpose, multifunction mainframes, and that's when operating systems were born. In the 1960s, Moore's Law predicted that the number of transistors on an integrated circuit would double every 18 months, and that prediction has held true. Computers gained in functionality and shrank in size, leading to a vast number of uses and a vast number and variety of operating systems. (See Appendix A for more details on the history of operating systems.)
This section traces the evolution of computers to explain the diversity of operating systems. Early fixed-purpose machines for military and government tasks eventually gave way to general-purpose mainframes, prompting the development of operating systems. With rapid advancements in hardware—predicted by Moore’s Law—computers became more powerful and compact, enabling a wide variety of applications and a corresponding variety of OS designs.
By now, you can probably see that the term operating system covers many roles and functions. That is the case, at least in part, because of the myriad designs and uses of computers. Computers are present within toasters, cars, ships, spacecraft, homes, and businesses. They are the basis for game machines, cable TV tuners, and industrial control systems.
This passage emphasizes the versatility of operating systems, which perform many roles depending on the type of computer and its purpose. Because computers are embedded in a wide range of devices—from household appliances to spacecraft—the OS must be designed to meet diverse needs and environments.
A slightly different view of an operating system emphasizes the need to control the various I/O devices and user programs. An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.
This section presents the operating system as a control program. Beyond just allocating resources, it ensures programs run safely and correctly, preventing misuse or conflicts. A major focus here is managing I/O devices, since they are often shared and prone to errors if not properly controlled.
From the computer's point of view, the operating system is the program most intimately involved with the hardware. In this context, we can view an operating system as a resource allocator. A computer system has many resources that may be required to solve a problem: CPU time, memory space, storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly.
This passage presents the operating system as the hardware's closest partner, serving as a resource allocator. Since programs and users constantly compete for limited resources like CPU time, memory, storage, and I/O devices, the OS must make fair and efficient decisions about their distribution. This role keeps the entire computer system running smoothly.
Increasingly, many users interact with mobile devices such as smartphones and tablets—devices that are replacing desktop and laptop computer systems for some users. These devices are typically connected to networks through cellular or other wireless technologies. The user interface for mobile computers generally features a touch screen, where the user interacts with the system by pressing and swiping fingers across the screen rather than using a physical keyboard and mouse. Many mobile devices also allow users to interact through a voice recognition interface, such as Apple's Siri.
This section highlights how mobile devices have changed the way users interact with computers. Instead of relying on keyboards and mice, users primarily use touch screens and sometimes voice commands. Because these devices are always network-connected, the operating system must be optimized for mobility, connectivity, and natural interaction methods.
The user's view of the computer varies according to the interface being used. Many computer users sit with a laptop or in front of a PC consisting of a monitor, keyboard, and mouse. Such a system is designed for one user to monopolize its resources. The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and security and none paid to resource utilization—how various hardware and software resources are shared.
This passage points out that a user's experience with a computer depends on the type of interface. For personal systems like laptops and PCs, the operating system focuses on making the system easy to use and responsive for a single person. In this setup, convenience takes priority over resource utilization, since the machine is not meant to be shared among multiple users.
The hardware—the central processing unit (CPU), the memory, and the input/output (I/O) devices—provides the basic computing resources for the system. The application programs—such as word processors, spreadsheets, compilers, and web browsers—define the ways in which these resources are used to solve users' computing problems. The operating system controls the hardware and coordinates its use among the various application programs for the various users.
This section explains the relationship between hardware, applications, and the operating system. Hardware provides the raw resources, applications give users practical tools, and the operating system acts as the manager that controls and coordinates how those resources are shared among programs and users.
We begin our discussion by looking at the operating system's role in the overall computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and a user (Figure 1.1).
This part introduces the layered view of a computer system, showing how the hardware, operating system, application programs, and user interact. The operating system sits in the middle, acting as the bridge between raw hardware and the applications that users depend on.
Because an operating system is large and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions. In this chapter, we provide a general overview of the major components of a contemporary computer system as well as the functions provided by the operating system. Additionally, we cover several topics to help set the stage for the remainder of the text: data structures used in operating systems, computing environments, and open-source and free operating systems.
This passage highlights the modular design of operating systems. Breaking the OS into smaller, well-defined components with carefully specified inputs, outputs, and functions makes it easier to manage complexity, assign clear responsibilities, and maintain the system. The chapter overview also signals that beyond OS functions, students will learn foundational concepts like data structures, computing environments, and the role of open-source systems, which set the groundwork for deeper exploration later.
In order to explore the role of an operating system in a modern computing environment, it is important first to understand the organization and architecture of computer hardware. This includes the CPU, memory, and I/O devices, as well as storage. A fundamental responsibility of an operating system is to allocate these resources to programs.
This section highlights that the operating system acts as the manager of the core hardware components: CPU, memory, I/O devices, and storage. By allocating these resources to different programs, the OS ensures fairness, efficiency, and stability in a modern computing environment. It reminds us that without this resource management, multiple programs running at the same time would constantly conflict with one another.
* so is used to introduce a logical consequence.
* so = así que, por lo tanto ("therefore").
* then is used to introduce clauses in a time sequence.
* then = entonces, luego ("then", "later").
* Should at the beginning of a sentence works like an if.
* should he ever = si llegara ("if he ever were to").
* Put down = to execute (for people), to euthanize (for animals).
Update 2021
Review
Concluded
Updates
ToDo
it proceeds from the intention to alter the tripartite nature of the sign so as to make notation the pure encounter of an object and its expression.
A new verisimilitude for Barthes: a regressive destruction of the sign, questioning representation through the study of the beautiful and of artistic experience from a perspective grounded not in religious or spiritual principles but in reason, human experience, and the earthly world.
referential illusion
Semiotically constructed by the concrete detail: a relation between the referent and the signifier, with no possibility of developing a form of the signified; there is no denotation. The signifier refers directly to the referent.
the "real" was on the side of History; but this was the better to oppose the verisimilar, that is, the very order of narrative (of imitation, or "poetry")
The supremacy of History, the real, and the true over the verisimilar, over the imitation of "poetry".
All of this says that the "real" is considered self-sufficient, strong enough to belie any notion of "function"; that its enunciation has no need to be integrated into a structure; and that the having-been-there is a sufficient principle of speech.
This reads like testimony to the importance given to stories "based on a true event".
vertigo of notation
The sensation of instability or disorientation the text produces in the reader by destabilizing any single, monolithic meaning. This vertigo arises when confronting a text that is the opposite of a single-meaning treatise, since the uncoded, what falls outside normative discourse, generates this sense of depth and multiplicity of interpretation.
the aesthetic aim of Flaubertian description is thoroughly imbued with "realist" imperatives, as if in appearance the exactness of the referent, superior or indifferent to any other function, alone governed and justified describing it or, in the case of descriptions reduced to a single word, denoting it: aesthetic requirements are here imbued, at least by way of alibi, with referential requirements
The aesthetic aim of description in Flaubert
the writer here fulfills Plato's definition of the artist: a maker in the third degree, since he imitates what is already the copy of an essence.
The Platonic critique of the artist as a pseudo-maker.
the verisimilar here is not referential but openly discursive: it is the generic rules of discourse that lay down the law
In the descriptive, the verisimilar serves an aesthetic function.
what is, in the end, if we may use the expression, the significance of this insignificance?
The central question
Description thus appears as a kind of "peculiarity" of the so-called higher languages, to the apparently paradoxical degree that it is justified by no purpose of action or communication
Description
an opposition that has its anthropological importance
The opposition between the general structure of narrative, which is predictive, and description, which is purely summatory and does not build a chain of consequential choices within the narrative the way the former does.
description has no predictive mark; being "analogical", its structure is purely summatory and does not contain that trajectory of choice and alternative which gives narration the profile of a vast dispatching, endowed with a referential (and no longer merely discursive) temporality
the disordered structure of the insignificant notation in descriptions
Insignificant notation
Detailed descriptions in narrative that, although they seem mere records of the real, are in fact deliberate selections that construct the reality effect, since the text replaces the real with the written and creates an illusion of transparency.
structural analysis, ordinarily occupied until now with isolating and systematizing the major articulations of narrative, leaves aside all the details that are "superfluous" (in relation to the structure), either by excluding them from the inventory (by not speaking of them) or by treating these same details (the author of these lines has himself attempted this) as "filler" (catalyses), assigned an indirect functional value insofar as, added together, they constitute some index of character or atmosphere and so can finally be recuperated by the structure.
What structural analysis omits: what it treats as filler with indirect functional value.
hypotyposis
A rhetorical figure consisting of a realistic, vivid, detailed description of something or someone, so vivid that it seems to be experienced in the moment. The term comes from the Greek hupotúpōsis, meaning "sketch" or "outline".
The irreducible residues of functional analysis have this in common: they denote what is commonly called the "concrete real" (small gestures, transient attitudes, insignificant objects, redundant words)
the concrete real of the insignificant notations
Updates
https://doi.org/10.1136/bmj.o1717 2022
Review
Storytelling is relating a tale to one or more listeners through voice and gesture. It is not the same as reading a story aloud or reciting a piece from memory or acting out a drama—though it shares common characteristics with these arts. The storyteller looks into the eyes of the audience and together they co-create the experience of the tale. The storyteller begins to see and recreate, through voice and gesture, a series of mental images; the audience, from the first moment of listening, squints, stares, smiles, leans forward, or falls asleep, letting the teller know whether to slow down, speed up, elaborate, or just finish.
I think of musicians and their performances when I read this passage. When I was younger, my grandmother and I would watch Frank Sinatra's recorded big-band performances. His body language, stance, and gestures created such a stage presence. He was a great storyteller.
“Insofar as we account for our own actions and for the human events that occur around us principally in terms of narrative, story, drama, it is conceivable that our sensitivity to narrative provides the major link between our own sense of self and our sense of others in the social world around us”
I love this quote. It makes me think of how comedians shape their comic routines: how they view the events that unfold around them and how those events in life can be seen as comical or ironic in nature. The same concept can be seen in collaborations between screenwriters and directors.
Storytelling involves appealing to the listening audience. “Entertainment is a requirement for successful storytelling. No story works without it; otherwise it becomes a lecture” (Spaulding, 2011, p. 4). In a very real way, the act of telling is a performative act. What is meant here is that the act of telling is different than giving a speech or lecture because the performative event (Bauman, 1975) is a unique experience. It is a practice that depends on direct connection between the teller and the listener. The direction of the story can change from the way the listener reacts or the teller shares the story.
I think that storytelling is definitely an art form. For some it comes naturally; for others, though, the art of storytelling must be practiced to create that "unique experience." It's not always what you say but how you make someone feel that people remember. Storytelling should embody that idea.
Storytelling is part and parcel of human socialization—a tool for making us known, both to ourselves and to others. In fact, anything we experience that does not get structured narratively does not get remembered
Storytelling has been present since prehistoric times, through cave paintings and other narrative forms. Stories have shaped traditions and formed old wives' tales and superstitions that are passed down through generations. I feel it's true that most believe strongly that to be remembered, you must pass along your beliefs and views through stories.
Updates
Updates https://www.bmj.com/content/379/bmj.o2865 2022
Competition – ruthless, unforgiving, to-the-death competition – is a crucialfeature of capitalism. It opens up new opportunities for individual firms: they canexpand revenues and profits by winning a larger share of sales from competitors.But competition also poses new challenges, since other companies are trying todo exactly the same thing: namely, grow their own market share at the expense oftheir competitors. Therefore, it’s not just greed that motivates company efforts tominimize costs and maximize profits; with competition, it’s also fear. If a companycan’t stand up to the competition, it’s not just that they won’t make quite as muchprofit as other companies. Far worse, eventually they will be destroyed by thesecompeting firms producing better products at lower cost.
This paragraph highlights that competition, not just greed, drives company behavior. Compared to the “little circle,” which focuses on a single firm and its workers, this section shows how multiple firms interacting under competitive pressures create fear, innovation, and risk. Both agree firms aim to survive and profit, but competition adds complexity the simpler model doesn’t capture.
The previous chapters of Part Two introduced the major actors in the economy and their assigned tasks. This chapter now fits them all together in a circular loop that reflects the repeating cycle of the economy: work, production (using tools), income distribution, consumption, and reproduction. These are the core functions and relationships that make up capitalism. We'll even draw a simple map of this circular system. We'll call this map the "little circle." In later chapters, this map will get bigger as we consider more of capitalism's real-world complexity (including the roles of competition, the environment, banks, government, and globalization).
This passage introduces the "little circle," a simplified model of capitalism that shows the recurring cycle of work, production, income distribution, consumption, and reproduction. It stresses that capitalism functions as a continuous loop where each stage depends on the others, before adding later complexities like government or globalization. What happens to the "little circle" if one part of the cycle, such as income distribution, fails to function fairly?
Updates https://www.bmj.com/content/377/bmj.o1205 2022
Updates https://doi.org/10.1136/bmj.p2929 2024
But trees are not "trees" until they are named and contemplated, and they were never so designated until there were those who unfolded the intricate breath of language, faint echo and blurred image of the world,
Here language appears as the generator of the world. Beyond exercising the power of language on reality, it creates that mythical grammar which Tolkien says is necessary for sub-creation.
we need to clean the panes of our windows so that the things we manage to see are freed from the monotony of the fogged-over everyday and familiar, and from our eagerness for possession.
The renewal that fantasy brings about through sub-creation has to do with undoing that weariness with reality and the visual complacency of its everydayness.
To create a Secondary World in which a green sun is admissible, commanding Secondary Belief, will certainly require effort and intellect, and will demand a special skill, something like elvish craft.
Fantastic creation requires effort: this distinguishes it from the unreality of dream.
In this sense, fantasy is not, I believe, a lesser but a higher manifestation of Art, almost its purest form, and therefore, when achieved, the most powerful.
Sub-creation in literature as a higher manifestation of art because it creates a secondary world separate from the primary world, and in doing so becomes more powerful. It seems to echo Mallarmé's maxim: "one must raise the page to the power of the starry sky."
the intention of combining its more traditional and elevated use (equivalent to Imagination) with the notions derived from "unreality" (that is, unlikeness to the Primary World) and liberation from the slavery of the observed "fact"
The notion of the fantastic.
In my opinion, this aspect of "mythology" is given too little consideration: sub-creation rather than representation or symbolic interpretation of the beauties and terrors of the world.
Mythology as sub-creation rather than representation. It is more about the possibility of shaping the world than about returning to the original mythic themes.
...The human mind, endowed with the powers of generalization and abstraction, not only sees green grass, distinguishing it from other things (and finding it pleasing to the eye), but sees that it is green, as well as seeing it as grass.
The adjective as the manifestation of a mythical grammar. It generates a power of abstraction that is exercised on the world outside our mind, creating new forms that carry a fantasy of the real. This is where Tolkien situates the idea of man (the human being) as sub-creator. The power of fantasy is to make the will of the fantastic vision effective.
The definition of a fairy tale (what it is or what it should be) does not depend, then, on any definition or any historical account of elves or fairies, but on the nature of Fantasy: the Perilous Realm itself and the air that blows in that country.
Context of the definition. Characteristics for understanding fairy tales.
the accent shifts from what for the Greeks was the essence of the work, that is, the fact that in it something came into being from non-being, thereby opening the space of truth (a-letheia) and building a world for man's dwelling on the earth, to the operari of the artist, that is, to the creative genius and the particular characteristics of the artistic process in which it finds expression.
Displacement of the notion of poiesis in the work of art: from aletheia to operari.
When this process is carried out in the modern age, any possibility of distinguishing between poiesis and praxis vanishes. Man's "doing" is determined as an activity that produces a real effect (the opus of operari, the factum of facere, the actus of agere), whose value is appraised in relation to the will expressed in it, that is, in relation to its freedom and creativity. The central experience of poiesis, pro-duction into presence, now yields its place to consideration of the "how," that is, of the process through which the object has been produced.
Poiesis in the modern age
it was nothing more than the principle of movement (the will, understood as the unity of appetite, desire, and volition) that characterizes life
Praxis for Aristotle
at the center of praxis was, as we shall see, the idea of the will that expresses itself immediately in action, while the experience at the center of poiesis was pro-duction into presence, that is, the fact that in it something passed from non-being into being, from concealment into the full light of the work.
Differences: praxis as act, poiesis as revelation (close to the concept of aletheia).
they clearly distinguished between poiesis (poiein, to pro-duce, in the sense of bringing into being) and praxis (prattein, to do, in the sense of accomplishing)
Greek conceptions
This productive activity is understood, in our time, as practice. According to the common view, everything man does, the work of the artist and the craftsman as much as that of the worker or the politician, is practice, that is, the manifestation of a will that produces a concrete effect.
According to this, all poiesis is act.
During [Sigmund] Freud's university years (the late 1870s and early 1880s), young enthusiasts in the fuzzier disciplines, such as psychology, liked to borrow terminology from the more rigorous and established field of mechanical physics. The borrowed terms became, in fact, metaphor; and metaphor, like a shrewd servant, has a way of ruling its master. Thus, Freud wound up with the idea that libido or sexual "energy," as he called it, is a pressure that builds up within a closed system to the point where it demands release, as in a steam engine.
Freud's use of physics metaphors, like comparing libido to pressure in a steam engine, makes me curious about how scientific metaphors shape psychological theories and whether modern fields still borrow ideas this way.
"Filling in Frameworks" wrestles with the misconception that economics is a science. This section looks at the difficulties that economists face in trying to adopt scientific methods. I suggest that economics differs from the natural sciences in that we have to rely much less on verifiable hypotheses and much more on hard-to-verify interpretative frameworks. Economic analysis is a challenge, because judging interpretive frameworks is actually harder than verifying scientific hypotheses.
This passage argues that economics relies more on interpretive frameworks than testable hypotheses, making it less like a natural science. If economics depends on interpretation, does this mean policies reflect ideology as much as evidence?
GRADING: Participation (class attendance, active listening, discussion contributions) [15%] Informal Writing (Canvas, homework, in-class writing) [5%] Annotations (Hypothes.is) [20%] 5 Essays of varying lengths and drafting stages [40%]* Peer review sessions (attendance and quality of participation) [10%] Individual Conferences (and preparation) [10%].
This is a helpful breakdown of grading for this course.
Interesting that they thought a graduated income tax would protect Americans and that it would bring power back to the working classes.
I find it sad that prices were falling while debts were rising. If prices were falling, that probably means people got paid less, yet they still struggled to pay their debts. I also feel like moving into politics could easily have started a conflict for them.
It's interesting that they spread across the country and then gained more than a million members. How did they get people to join their group?
I did not know that the rise of the industrial giants reshaped America, and that that's when people started calling it their home.
If they are based on a small number of observations, it can be misleading to label the pie slices with percentages.
Make sure you pay close attention to how a pie chart is labeled.
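To make the point concrete, here is a minimal sketch (assuming Python with matplotlib; the reading names no tools, and the categories and counts below are made up) of labeling slices with raw counts instead of percentages when the sample is small:

```python
import matplotlib.pyplot as plt

labels = ["Yes", "No", "Unsure"]   # hypothetical categories
counts = [4, 2, 1]                 # n = 7: too few observations for stable percentages

# Label each slice with its raw count instead of a percentage;
# a label like "57.1%" would imply more precision than 4-of-7 responses support.
slice_labels = [f"{name} ({n})" for name, n in zip(labels, counts)]
plt.pie(counts, labels=slice_labels)
plt.title("Survey responses (n = 7), labeled with counts")
plt.show()
```

Showing "Yes (4)" rather than "57.1%" lets the reader see exactly how thin the evidence is.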
personality is shaped by the goodness of fit between the child’s temperamental qualities and characteristics of the environment
Nature + nurture
infants show an awareness that even though they are uncertain about the unfamiliar situation, their mother is not, and that by “reading” the emotion in her face, infants can learn about whether the circumstance is safe or dangerous, and how to respond.
The trusted adult becomes the infant's navigator in unfamiliar situations.
“Only the wisest and stupidest of men never change.”
I like this quote because it feels true in real life. Smart people don’t need to change as much since they already understand things, and really stubborn or clueless people don’t change either. Everyone else has to learn and adjust. Kind of funny but also makes sense.
The Chinese economy produced one quarter of the world's gross domestic product (GDP) in 1500, followed by India which produced nearly another quarter. In comparison, the fourteen nations of western Europe produced just about half of China's GDP or only one-eighth of the global total production. The largest European economy, in Italy, produced only about one-sixth of China's output.
I didn’t realize China and India were that dominant in 1500. It flips the way I usually think about history, because we’re often told Europe was the center of progress. It’s kind of shocking that all of western Europe together only made half of what China alone produced.
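As a quick sanity check on those fractions, here is a minimal sketch using only the passage's own rough numbers (Python is my choice here, not the source's):

```python
# Rough shares of world GDP in 1500, as quoted in the passage.
china = 1 / 4                  # "one quarter of the world's GDP"
india = 1 / 4                  # "nearly another quarter"
western_europe = china / 2     # "about half of China's GDP"
italy = china / 6              # "about one-sixth of China's output"

print(western_europe)          # 0.125 -> one-eighth of the global total, as stated
print(round(italy, 3))         # ~0.042 -> roughly 4% of world GDP
```

The two ways the passage describes western Europe's share (half of China's, one-eighth of the world's) are consistent.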
The Chinese, who valued silver higher than gold, called this the silver rule. Confucian social morality is based on this reciprocity and on empathy and understanding others rather than on divinely ordained rules.
It’s surprising that the Chinese valued silver more than gold, since usually gold is seen as the most valuable. I also like how Confucian morality focused on empathy and reciprocity instead of religion. That makes their system feel more about human relationships than divine rules.
The imperial courts sent thousands of highly-educated administrators throughout the empire and China was ruled not by hereditary nobles or even elected representatives, but by a class of men who had received rigorous training and had passed very stringent examinations to prove themselves qualified to lead.
I think it’s really interesting that China didn’t rely on kings or nobles to run things, but instead used these exams to pick leaders. It feels kind of modern, like an early version of meritocracy. Compared to Europe, where leadership was mostly inherited, this shows how different their system was.
I recognize that reason is confounded before the prodigy of love, before that strange obsession by which the flesh, which concerns us so little when it makes up our own body, and which moves us only to wash it, to feed it and, if need be, to keep it from suffering, can come to inspire in us so passionate a desire for caresses, simply because it is animated by an individuality different from our own and because it presents certain lineaments of beauty about which, for that matter, the best judges have never agreed.
🧠 "Reason is confounded before the prodigy of love"
The emperor admits that love defies logic. What seems a biological phenomenon becomes a mystery that cannot be explained by pure reason.
🫀 One's own flesh vs. another's flesh
Our own body interests us only minimally: we wash it, feed it, and care for it to spare it pain.
But in love, another's body becomes an object of obsession, of passionate desire.
✨ The enigma of desire
Why?
Because that body is animated by an individuality distinct from our own (it has a soul of its own).
Because it displays certain traces of beauty, even though beauty is so subjective that not even the best judges agree on what defines it.
🗝️ Deeper meaning
The text shows how love erodes the boundary between the rational and the irrational:
The flesh ceases to be mere matter and becomes mystery embodied in the other.
Desire does not arise from the flesh itself, but from the fact that that flesh is living otherness.
Beauty works as the spark, but it is arbitrary and variable.
👉 In short: Hadrian recognizes love as a paradox, a force that turns the banal (a body) into the most precious thing, through the simple difference that it is not our own.
Starting from a stripping-away that equals that of death, from a humility that exceeds that of defeat and of prayer, I marvel to see re-established each time the complexity of refusals, responsibilities, gifts, sad confessions, fragile tendernesses...
⚰️ Stripping-away like death
The opening compares the act of love (or of spiritual surrender) with death: absolute nakedness, loss of all power, abandonment of the self.
🙇 Humility beyond defeat or prayer
In defeat, one is humbled by another.
In prayer, one humbles oneself before the gods.
But in love, humility is even more radical: it is voluntary. One empties oneself by one's own decision.
🔄 The return of the human
After that instant of annulment, life returns with all its weight:
Refusals: limits, rejections.
Responsibilities: what one must take on.
Gifts: what is received and given.
Sad confessions: vulnerability.
Fragile tendernesses: the delicacy that survives in the everyday.
🧭 Meaning
The passage shows how love, or intimate surrender, has a double movement:
Absolute emptying (like death).
Return to the complex and the human (responsibility, tenderness, confessions).
It is almost a dialectic: love as momentary annihilation followed by the rebuilding of the fabric of life.
It is not indispensable that the drinker abdicate his reason, but the lover who keeps his does not wholly obey his god.
🍷 The drinker and reason
Drinking does not necessarily mean losing one's head: one can take wine and remain master of oneself.
Here wine represents controlled pleasure, manageable excess.
❤️ The lover and reason
But in love, whoever remains too lucid does not surrender completely.
"Does not wholly obey his god" = does not fully honor Eros.
Loving demands losing some control, letting passion govern.
🗝️ Deeper meaning
Yourcenar (through Hadrian) sets up a contrast:
Wine → pleasure that can be modulated.
Love → a divine force that demands surrender.
To love with cold calculation is almost a contradiction: the god demands the sacrifice of reason.
👉 This is an idea that comes from the Greek tradition: Plato, in the Phaedrus, speaks of erotic mania (the divine madness of love) as a higher form of truth.