55 Matching Annotations
  1. Last 7 days
    1. This research encompasses a thorough examination of 5595 confirmed exoplanets listed in the Archive as of 10 March 2024, systematically evaluated according to their calculated average surface temperatures and the stellar classifications of their host stars, taking into account the biases implicit in the methodologies used for their discovery. Machine learning, in the form of a Random Forest classifier and an XGBoost classifier, is applied to the classification, achieving high accuracies. The feature importance analysis indicates that our approach captures the most important parameters for habitability classification.

      I do wonder about this statement, "our approach captures the most important parameters" - it holds at most within their own study.
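
      A minimal sketch of the kind of classifier the paper describes, assuming made-up features (surface temperature, encoded stellar class, radius) and a toy label; the authors' actual dataset and preprocessing are not reproduced here:

      ```python
      # Hypothetical habitability classifier: toy data, not the paper's dataset.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 1000
      X = np.column_stack([
          rng.normal(400, 150, n),   # mean surface temperature in K (assumed feature)
          rng.integers(0, 7, n),     # stellar class encoded 0-6, O through M (assumed)
          rng.normal(1.0, 0.5, n),   # planet radius in Earth radii (assumed feature)
      ])
      y = (np.abs(X[:, 0] - 288) < 60).astype(int)  # toy "temperate" label

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("accuracy:", clf.score(X_test, y_test))
      print("importances:", clf.feature_importances_)  # the analysis the quote leans on
      ```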

  2. Dec 2025
    1. For SETI to be conducted and eventually succeed, humans must at least consider the possibility that life exists beyond Earth. Starting and maintaining the search, they must act as if the conditions of possibility for life and the emergence of technosignatures are actually given. SETI cannot be conducted from a purely agnostic and passive position. It requires active scientific exploration and empirical observation and, as such, must presuppose the possible existence of external events that can affect the observational setup and their reliable attribution to causing conditions (Radder, 2021). One might be consciously aware that this is a purely logical requirement and that our beliefs can change. Yet presupposing that there is no life beyond Earth renders conducting SETI senseless. Deliberately assigning a random probability to the possibility of extraterrestrials may express uncertainty, but effectively conducting SETI requires accepting that, among the myriads of signals we are able to detect, some may and can indeed be traced back to the activity of extraterrestrials. This, of course, tells us neither where they are, how many there are, nor what their activity will exactly look like.
    1. Thales (600 BC) is the godfather of the Western philosophers by propounding the existence of a plurality of worlds; from then onwards, many theoretical approaches have arisen and sunk according to the signs of the times.

      Thales did not propound the plurality of worlds; this is historically inaccurate. Pluralistic cosmology (multiple worlds arising from the indefinite apeiron) is usually traced to Anaximander, though this is itself a very loose historical interpretation of his work.

    2. In this current context of scientific explosion at all levels (although the exponential growth is not the same in all scientific disciplines), we find the advent of new disciplines and subdisciplines that help us to classify the areas of knowledge. Thus, to order this informative explosion, it was convenient to establish a classification system for the different areas of study. The UNESCO International Nomenclature for the fields of Science and Technology was proposed in 1973 and 1974 by the Science and Technology Policy Divisions of UNESCO and adopted by the Scientific and Technical Research Advisory Commission. It is a knowledge classification system widely used in the management of research projects and doctoral theses. And, as a sign that science always brings new horizons to knowledge, new actors are always appearing in this classification system.

      In the field that occupies us, however, we find ourselves with a great absence: "Astrobiology" does not appear in the listings of UNESCO. But we do find in them the term "Exobiology" [2, 3]. This "partial" absence denotes how novel it still is to scientifically consider the study of life outside Earth. Indeed, until very recently, "Exobiology" or "Astrobiology" (which we will consider synonyms) was considered by many scientists a science without an area of study. This was especially true until 1995, when Michel Mayor and Didier Queloz discovered the first extrasolar planet, 51 Pegasi b. Fortunately, today things are beginning to change, and more and more scientists believe that life will be a ubiquitous phenomenon, which will occur anywhere in the universe where the conditions are right for it.

      Life will then be an epiphenomenon, an event that has no choice but to occur as soon as the complexity of the chemical organization of matter reaches the critical point of interaction between the trace elements, the essential elements for life. At the base of it we will find carbon, hydrogen, oxygen, nitrogen, phosphorus and sulphur. As life will be a ubiquitous phenomenon, today we already intuit that not even a planet is necessary for life to prosper, and that life could be maintained in interstellar space, without planetary substratum. But before continuing, it is convenient to fix some definitions.

      The debate on "what is life?" has occupied all generations of thinkers. It is a very difficult concept to define. Currently there is consensus in affirming that life is a self-contained, autopoietic chemical system (self-sufficient, exchanging energy with the environment in which it is located), capable of reproducing itself and experiencing evolution [4]. It is a broad definition; minerals could fit in it, and even the stars themselves, as we will see later.

      So, in view of the complexity of the knowledge that we are slowly acquiring about the universe, and given the challenges posed by the possibility of assuming that life will be found virtually anywhere, it is convenient to establish a series of ethical values that allow a positive integration of the new limits of knowledge that science gives us into the cultural baggage of society. For this reason, a "Philosophy of Science" (UNESCO code 7205.01) was established, under which, since the 80s, we can find the "Philosophy of Biology". Before delving into the Philosophy of Astrobiology, we will give its definition, based on the concepts of "Philosophy" and "Astrobiology".

      Authors argue that the growth of the sciences in human culture has driven the need to expand the ontology of scientific categories. As astrobiology matures, more complex studies across disciplines are needed to address evolving areas - e.g., exobiology, philosophy of astrobiology, or my own term exoastronomy which I coined in 2018. These are missing from the UNESCO International nomenclature as of 2025/2026.

    3. To cite an example, the Australian aborigines explain with legends that their origin is extraterrestrial. They say that their cave paintings known as "wandjinas" are actually self-portraits made by these wandjinas, gods or spirits associated with clouds and rain (inhabitants of the sky, therefore). In the Western Australia region of Kimberley these rock art works are abundant, and they have usually been dated to some 4000 years old. But aboriginal tradition tells that it was the gods themselves who painted themselves in rocky shelters and who commissioned human artists (see Fig. 2) to regularly repaint these manifestations of divinity.

      The aborigines regard the cave paintings as self-portraits made by gods from the sky. The creation gods who came from the sky (or the sea, in some accounts) in the Dreamtime were the Wandjina. It is difficult to characterize them as extraterrestrial, since they are also associated with clouds, rain, fertility, and the creation of the land and its people. The claim needs more references to be validated.

    4. Obviating, without detracting from, the Greek classics, we will quote as an example Christiaan Huygens: astronomer, physicist, mathematician and Dutch inventor. Among other achievements, he explained the true nature of Saturn's rings and discovered Titan, Saturn's largest moon. In the field of Astrobiology, in 1698 he wrote "Cosmotheoros", affirming: "what a marvellous and splendid picture of the magnificent vastness of the universe we have achieved! Such amount of suns, such amount of earths, each and every one of them provided with plants, trees and animals, and adorned with seas and mountains! And how much increases our admiration and amazement if we stop to analyse the prodigious distance and the multitude of stars!"

      "Cosmotheoros" (the observer of the stars) is the first treatise that conjectures extraterrestrial life from a scientific point of view, based on the theories of other thinkers like Nicholas of Cusa, Giordano Bruno, Kepler, Tycho Brahe or Descartes. In "Cosmotheoros", Huygens describes more than twenty possible forms of extraterrestrial life [6].

      Early speculations in astrobiology include Christiaan Huygens describing forms of extraterrestrial life in "Cosmotheoros" (roughly, "beholder of the cosmos") (1698). This may be the first scientific speculation about astrobiology, though that is difficult to state outright, since earlier authors were already fantasizing about life on other worlds - see Lucian of Samosata's 2nd-century work "A True Story" and Voltaire's 1752 novella "Micromégas" about a being from Sirius.

    5. In recent times, in the decade of the 40s of the twentieth century, another of the pioneers of Astrobiology was the Soviet astronomer Gavriil Adrianovich Tikhov, who laid the foundations of an incipient "Astrobotany". Tikhov studied the albedo formations of Mars, speculating that the origin of the chromatic and brightness changes on the Martian surface was seasonal cycles of falling leaves in forests populated by deciduous trees [7] (see Fig. 1).

      [Figure 1. Albedo formations of Mars during the great opposition of Mars in 2003. (Source: Rafael Balaguer Rosa, Astrogirona, Astronomical Society of Girona.)]

      3. Astrobiology in ancestral societies. But these conceptions are very modern. Perhaps the idea that life thrives in the entire universe, and that maybe the inhabitants of Earth are sons of an extraterrestrial life, is rooted in our deepest psyche from the very beginning of our species, Homo sapiens (and maybe other human species, too), more than 200,000 years ago. This idea is based on the fact that many ancestral cultures, different and located throughout the planet, have interpreted that our human origins, and the very origin of life on Earth, are actually extraterrestrial. This certainty is born of the shamanic experience of altered states of consciousness, where the subjective experiences (later shared and collective) suggest the real existence of spiritual or higher beings, who descend from the sky, from space.

      Soviet astronomer Gavriil Tikhov speculated about life on Mars based on seasonal albedo changes. He was one of the very first pioneers of astrobiology and astrobotany (appointed head of a department of astrobotany in Alma-Ata to investigate life on planets of the Solar System), and he was an astronomer at the Pulkovo Observatory from 1906 until 1941.

    6. Its relation to their celestial origin is also evident in the Maasai culture. In 2005 the Maasai of Synia, Tanzania, explained to Rafael Balaguer Rosa their legends, star lore and their astronomical knowledge, very basic, but that also related their origin with the sky, with space, in charge of their unique god Ngai. Ngai travels from heaven to Earth descending the Milky Way. They call the Milky Way "nkurrei", which means "way" too, great example of cultural convergence; and to the Magellanic Clouds

      The mention of the Maasai culture in Tanzania believing their god Ngai descended from the Milky Way seems speculative and not well referenced. Other sources just note Ngai descended from the sky. One of the authors is referenced - Rafael Balaguer Rosa, Tras los Pasos de Ngai, AstronomíA, 73-74 (2005), July-August 26-35

    7. Figure 4. According to transhumanism, merging humans and technologies might change our physical status (Source: Ray Kurzweil's "The Singularity is Near", 2005). The figure shows how biological, cultural and technological evolution is progressing towards a certain increase in complexity through different stages of development, taking advantage of the capacities that appear in each one of them, and taking advantage of these capabilities in a cumulative way, to take the leap to a higher evolutionary state, as we saw at the beginning with the growth of science. The most interesting thing is that this scheme can be applied not only to life, to humans and our culture, but to the entire Universe, since the Universe is the set of everything that has existed, everything that exists, everything that will exist... and the information it contains. This last part, that of the information, is the one that interests us especially.

      Kurzweil's epochs of evolution speculate about higher levels of consciousness, from transhumanism up to a universal consciousness. In The Singularity Is Near he portrays what life will be like after the transhuman event: a human-machine civilization where our experiences become virtual reality and intelligence becomes nonbiological and trillions of times more powerful than current human intelligence.

    8. Presumably, life does not necessarily have to be constituted by atoms and molecules; it could be assembled from any set of building blocks with the required complexity. In fact, we already know that life is an epiphenomenon, an event that has no other choice but to happen as soon as matter acquires a certain critical degree of complexity. If so, perhaps an advanced civilization could then transcribe itself and its entire physical realm into new forms of life and matter. In fact, maybe our universe is one of the new ways in which some other civilization transcribed its world... Perhaps this would be the new frontier of Astrobiology.

      Life could be assembled from components of progressively higher complexity - consider advanced robotics and AI creating new forms of life. (Need a reference that Kurzweil said this; otherwise it is the authors' speculation.)

    9. Santilli claims to have detected "at least two types" of "Invisible Terrestrial Entities" (ITEs): dark, which leave a dark image on a bright background of a digital camera attached to the telescope, and bright ITEs that do the opposite.

      Santilli's unusual claims of detecting invisible alien entities and covert surveillance are highly speculative. His works are considered pseudoscience in many cases, and he believes there is a Jewish scientific cabal of corruption suppressing his work.

    10. By storing their essential data in photons, life could be equipped with a distributed and delocalised system of vital self-support, and their consciousness would no longer be local. And it could go further, manipulating new photons emitted by stars to dictate how they interact with matter, and we have already seen that stars could be conscious beings. The fronts of electromagnetic radiation could be arriving through the cosmos to set in motion chains of interstellar or planetary chemistry, generating energies of excitation in atoms and molecules. This is a way in which life could disappear from ordinary physics, and embed itself in exotic matter, to live forever... In other words, part of the fabric of the universe could be a product of intelligence or maybe even of life itself.

      Authors consider non-local, distributed consciousness through photons and matter interactions (aka exotic matter) based on Caleb Scharf's works. Caleb Scharf is an astrophysicist, the Director of Astrobiology at Columbia University in New York, and a founder of yhousenyc.org, an institute that studies human and machine consciousness.

  3. Sep 2024
    1. But the ruliad took things to another level. For now I could see that the very laws of physics we know were determined by the way we are as observers. I’d always imagined that the laws of physics just are the way they are. But now I realized that we could potentially derive them from the inevitable structure of the ruliad, and very basic features of what we’re like as observers.
    2. Our Physics Project is based on the idea of applying rules to abstract hypergraphs that represent space and everything in it. But given a particular rule, there are in general many ways it can be applied. And a key idea in our Physics Project is that somehow it’s always applied in all these ways—leading to many separate threads of history, that branch and merge—and, importantly, giving us a way to understand quantum mechanics.
    3. And so it was, soon after my birthday in 2019, that we embarked on our Physics Project. It was a mixture of computer experiments and big concepts. But before the end of 2019 it was clear: it was going to work! It was an amazing experience. Thing after thing in physics that had always been mysterious I suddenly understood. And it was beautiful—a theory of such strength built on a structure of such incredible simplicity and elegance.
    4. A major theme of my work since the early 1980s had been exploring the consequences of simple computational rules. And I had found the surprising result that even extremely simple rules could lead to immensely complex behavior. So what about the universe? Could it be that at a fundamental level our whole universe is just following some simple computational rule?
    5. I’ve spent my life alternating between technology and basic science, progressively building a taller and taller tower of practical capabilities and intellectual concepts (and sharing what I’ve done with the world). Five years ago everything was going well, and making steady progress. But then there were the questions I never got to. Over the years I’d come up with a certain number of big questions. And some of them, within a few years, I’d answered. But others I never managed to get around to.
  4. Apr 2024
    1. Empirical idealism, as Kant here characterizes it, is the view that all we know immediately (non-inferentially) is the existence of our own minds and our temporally ordered mental states, while we can only infer the existence of objects “outside” us in space. Since the inference from a known effect to an unknown cause is always uncertain, the empirical idealist concludes we cannot know that objects exist outside us in space.
    1. After 1836 Chaadayev continued to write articles on cultural and political issues "for the desk drawer." Chaadayev defies categorization; he was not a typical Russian Westernizer due to his idiosyncratic interest in religion; nor was he a Slavophile, even though he offered a possible messianic role for Russia in the future. He had no direct followers, aside from his "nephew" and amanuensis, Mikhail Zhikharev, who scrupulously preserved Chaadayev's manuscripts and tried to get some of them published after Chaadayev's death. Chaadayev's lasting heritage was to remind Russian intellectuals to evaluate any of Russia's supposed cultural achievements in comparison with those of the West.
  5. Nov 2023
    1. This illustration shows four alternative ways to nudge an LLM to produce relevant responses:

      Generic LLM - Use an off-the-shelf model with a basic prompt. The results can be highly variable, as you can experience when e.g. asking ChatGPT about niche topics. This is not surprising, because the model hasn't been exposed to relevant data besides the small prompt.

      Prompt engineering - Spend time structuring the prompt so that it packs more information about the desired topic, tone, and structure of the response. If you do this carefully, you can nudge the responses to be more relevant, but this can be quite tedious, and the amount of relevant data input to the model is limited.

      Instruction-tuned LLM - Continue training the model with your own data, as described in our previous article. You can expose the model to arbitrary amounts of query-response pairs that help steer the model to more relevant responses. A downside is that training requires a few hours of GPU computation, as well as a custom dataset.

      Fully custom LLM - Train an LLM from scratch. In this case, the LLM can be exposed to only relevant data, so the responses can be arbitrarily relevant. However, training an LLM from scratch takes an enormous amount of compute power and a huge dataset, making this approach practically infeasible for most use cases today.

      RAG with a generic LLM - Insert your dataset in a (vector) database, possibly updating it in real time. At query time, augment the prompt with additional relevant context from the database, which exposes the model to a much larger amount of relevant data, hopefully nudging the model to give a much more relevant response.

      RAG with an instruction-tuned LLM - Instead of using a generic LLM as in the previous case, you can combine RAG with your custom fine-tuned model for improved relevancy.
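
      A minimal sketch of the "RAG with a generic LLM" option, assuming a toy in-memory corpus and a hypothetical llm() callable standing in for any completion API; a real setup would use a vector database and embeddings instead of keyword overlap:

      ```python
      # Toy RAG: retrieve the most relevant snippets, then pack them into the prompt.
      def retrieve(query, corpus, k=2):
          # naive keyword-overlap scoring stands in for real vector search
          score = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
          return sorted(corpus, key=score, reverse=True)[:k]

      def rag_answer(query, corpus, llm):
          context = "\n".join(retrieve(query, corpus))
          prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
          return llm(prompt)  # llm() is a hypothetical completion-API wrapper
      ```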

    1. Macaulay claimed that his memory was good enough to enable him to write out the whole of Paradise Lost. But when preparing his History of England, he made extensive notes in a multitude of pocketbooks of every shape and colour.

      Thomas Babington Macaulay, 1st Baron Macaulay, PC, FRS, FRSE (25 October 1800 – 28 December 1859) was a British historian and Whig politician who served as the Secretary at War between 1839 and 1841, and as the Paymaster General between 1846 and 1848. Macaulay's The History of England, which expressed his contention of the superiority of Western European culture and of the inevitability of its sociopolitical progress, is a seminal example of Whig history that remains commended for its prose style.

    1. Fine-tuning takes a pre-trained LLM and further trains the model on a smaller dataset, often with data not previously used to train the LLM, to improve the LLM’s performance for a particular task.

      LLMs can be extended with both RAG and Fine-Tuning Fine-tuning is appropriate when you want to customize a LLM to perform well in a particular domain using private data. For example, you can fine-tune a LLM to become better at producing Python programs by further training the LLM on high-quality Python source code.

      In contrast, you should use RAG when you are able to augment your LLM prompt with data that was not known to your LLM at the time of training, such as real-time data, personal (user) data, or context information useful for the prompt.
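
      A compressed sketch of the fine-tuning route described above, assuming the Hugging Face transformers/datasets stack, a small base model, and a two-example toy dataset; real fine-tuning needs far more data and GPU time:

      ```python
      # Minimal causal-LM fine-tuning sketch (Hugging Face stack, toy data).
      from datasets import Dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling, Trainer,
                                TrainingArguments)

      model_name = "distilgpt2"  # small model so the sketch stays cheap
      tok = AutoTokenizer.from_pretrained(model_name)
      tok.pad_token = tok.eos_token
      model = AutoModelForCausalLM.from_pretrained(model_name)

      pairs = ["Q: What is RAG?\nA: Retrieval augmented generation.",
               "Q: What is fine-tuning?\nA: Further training on domain data."]
      ds = Dataset.from_dict({"text": pairs}).map(
          lambda ex: tok(ex["text"], truncation=True, max_length=64),
          remove_columns=["text"])

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                                 per_device_train_batch_size=2),
          train_dataset=ds,
          data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # labels = inputs
      )
      trainer.train()
      ```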

    2. Vector databases are used to retrieve relevant documents using similarity search. Vector databases can be standalone or embedded with the LLM application (e.g., Chroma embedded vector database). When structured (tabular) data is needed, an operational data store, such as a feature store, is typically used. Popular vector databases and feature stores are Weaviate and Hopsworks that both provide time-unlimited free tiers.
    3. RAG LLMs can outperform LLMs without retrieval by a large margin with much fewer parameters, and they can update their knowledge by replacing their retrieval corpora, and provide citations for users to easily verify and evaluate the predictions.
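
      A small sketch of the embedded vector database pattern from the previous annotations, using Chroma's in-process client; the collection name and documents are made up:

      ```python
      # Embedded vector DB sketch with Chroma (runs in-process, no server).
      import chromadb

      client = chromadb.Client()  # in-memory, embedded mode
      docs = client.create_collection("docs")  # hypothetical collection name
      docs.add(
          ids=["1", "2"],
          documents=["Feature stores serve tabular features.",
                     "Vector databases serve similarity search."],
      )
      hits = docs.query(query_texts=["where do I run similarity search?"], n_results=1)
      print(hits["documents"])  # most similar stored document
      ```
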
    1. The key enablers of this solution are:
      * The embeddings generated with Vertex AI Embeddings for Text
      * Fast and scalable vector search by Vertex AI Vector Search

      Embedding space is a map of meanings: values are assigned in an n-dimensional space so that semantically similar inputs land near each other, tying meaning between concepts.

      Example of vectorized n-dimensional embedding
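
      Since the embedded figure did not survive the export, here is a toy numeric stand-in: three made-up 4-dimensional "embeddings" in which the semantically similar pair ends up close under cosine similarity:

      ```python
      # Toy embedding space: nearby vectors ~ similar meanings (values made up).
      import numpy as np

      emb = {
          "kitten":  np.array([0.8, 0.1, 0.6, 0.0]),
          "cat":     np.array([0.9, 0.2, 0.5, 0.1]),
          "tractor": np.array([0.0, 0.9, 0.1, 0.8]),
      }
      cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
      print(cos(emb["kitten"], emb["cat"]))      # high: close in meaning-space
      print(cos(emb["kitten"], emb["tractor"]))  # low: far apart
      ```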

    2. With the embedding API, you can apply the innovation of embeddings, combined with the LLM capability, to various text processing tasks, such as:

      LLM-enabled Semantic Search: text embeddings can be used to represent both the meaning and intent of a user's query and documents in the embedding space. Documents that have similar meaning to the user's query intent will be found fast with vector search technology. The model is capable of generating text embeddings that capture the subtle nuances of each sentence and paragraph in the document.

      LLM-enabled Text Classification: LLM text embeddings can be used for text classification with a deep understanding of different contexts without any training or fine-tuning (so-called zero-shot learning). This wasn't possible with past language models without task-specific training.

      LLM-enabled Recommendation: the text embedding can be used for recommendation systems as a strong feature for training recommendation models such as the Two-Tower model. The model learns the relationship between the query and candidate embeddings, resulting in a next-gen user experience with semantic product recommendation.

      LLM-enabled Clustering, Anomaly Detection, Sentiment Analysis, and more can also be handled with the LLM-level deep semantic understanding.
    3. Grounded to business facts: in this demo, we didn't try to have the LLM memorize the 8 million items with complex and lengthy prompt engineering. Instead, we attached the Stack Overflow dataset to the model as an external memory using vector search, and used no prompt engineering. This means the outputs are all directly "grounded" (connected) to the business facts, not artificial output from the LLM. So the demo is ready to be served today as a production service with mission-critical business responsibility. It does not suffer from the limitations of LLM memory or unexpected behaviors of LLMs such as hallucinations.
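
      A sketch of the "LLM-enabled Text Classification" idea from the previous annotation: zero-shot labeling by comparing a text's embedding against label-description embeddings, with sentence-transformers standing in for the Vertex AI embedding API:

      ```python
      # Zero-shot classification via embeddings: no task-specific training.
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")
      labels = ["a question about databases", "a question about astronomy"]
      text = "How do I index vectors for fast similarity search?"

      label_emb = model.encode(labels, convert_to_tensor=True)
      text_emb = model.encode(text, convert_to_tensor=True)
      scores = util.cos_sim(text_emb, label_emb)[0]
      print(labels[int(scores.argmax())])  # nearest label description wins
      ```
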
    1. Preparation Steps
      * Ingest data into a database. The destination may be an array or a JSON data type.
      * Harmonize data. This is a lightweight data transformation step.
      * Encode data. This step is used to convert the ingested data into embeddings. One option is to use an external API. For example, OpenAI's ADA and sentence_transformer have many pre-trained models to convert unstructured data like images and audio into vectors.
      * Load embedding vectors. Data is moved to a table that mirrors the original table but has an additional column of type 'vector', JSON or a blob that stores the vectors.
      * Performance tuning. SingleStoreDB provides JSON_ARRAY_PACK, and vector indexing using HNSW as mentioned earlier. This allows parallel scans using SIMD.
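
      A condensed sketch of the encode and load steps above, using sentence-transformers for encoding and SQLite as a stand-in for any SQL store with a JSON column (SingleStoreDB specifics such as JSON_ARRAY_PACK are not shown):

      ```python
      # Encode text to embeddings and load them into a mirrored table (sketch).
      import json, sqlite3  # sqlite stands in for any SQL store with a JSON/blob column
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")
      rows = [(1, "red running shoes"), (2, "wireless headphones")]

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE product_vec (id INTEGER, body TEXT, embedding TEXT)")
      for pid, body in rows:
          vec = model.encode(body).tolist()          # encode step
          db.execute("INSERT INTO product_vec VALUES (?, ?, ?)",
                     (pid, body, json.dumps(vec)))   # load step: vector as JSON
      print(db.execute("SELECT id, length(embedding) FROM product_vec").fetchall())
      ```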

    2. In the new AI model, you ingest the data in real time, apply your models by reaching out to one or multiple GPT services, and act on the data while your users are in the online experience. These GPT models may be used for recommendation, classification, personalization, etc., on real-time data. Recent developments, such as LangChain and AutoGPT, may further disrupt how modern applications are deployed and delivered.
    3. Let’s say, for example, you search for a very specific product on a retailer’s website, and the product is not available. An additional API call to an LLM with your request that returned zero results may result in a list of similar products. This is an example of a vector search, which is also known as a similarity or semantic search.
    4. Modes of Private Data consumption:
      1. Train a custom LLM - requires massive infrastructure, investment, and deep AI skills
      2. Tune the LLM - utilizes model weights to fine-tune an existing model; a new category of LLMOps, with issues similar to #1
      3. Prompt general-purpose LLMs - uses modeled context input with Retrieval Augmented Generation (Facebook)

      For leveraging prompts, there are two options:

      * Short-term memory for LLMs that use APIs for model inputs
      * Long-term memory for LLMs that persist the model inputs

      Short-term memory is ephemeral, while long-term memory introduces persistence.

    5. Conventional search works on keys. However, when the ask is a natural query, that sentence needs to be converted into a structure so that it can be compared with words that have similar representation. This structure is called an embedding. An embedding uses vectors that assign coordinates into a graph of numbers — like an array. An embedding is high dimensional as it uses many vectors to perform semantic search.

      When a search is made on new text, the model calculates the "distance" between terms. For example, "king" is closer to "man" than to "woman." This distance is calculated over the "nearest neighbors" using functions like cosine, dot product, and Euclidean distance. This is where "approximate nearest neighbors" (ANN) algorithms are used to reduce the vector search space. A very popular way to index the vector space is through a library called 'Hierarchical Navigable Small World (HNSW)'. Many vector databases and libraries like FAISS use HNSW to speed up vector search.
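
      A short sketch of the ANN indexing idea using the hnswlib library (one HNSW implementation); the index parameters below are typical defaults, not tuned values:

      ```python
      # Approximate nearest neighbors with an HNSW index (hnswlib).
      import hnswlib
      import numpy as np

      dim, n = 64, 10_000
      data = np.random.default_rng(0).random((n, dim)).astype(np.float32)

      index = hnswlib.Index(space="cosine", dim=dim)
      index.init_index(max_elements=n, ef_construction=200, M=16)  # typical defaults
      index.add_items(data, np.arange(n))
      index.set_ef(50)  # higher ef = better recall, slower queries

      labels, distances = index.knn_query(data[:1], k=3)
      print(labels, distances)  # top-3 approximate neighbors of the first vector
      ```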

    6. The different options for storing and querying vectors for long-term memory in AI search include:
      * Native vector databases - many non-relational DBMSs are adding vectors, such as Elastic; others are Pinecone, Qdrant, etc.
      * SingleStoreDB supports vector embeddings and semantic search
      * Apache Parquet or CSV columnar data - slow indices if used

    1. Retrieval Augmented Generation (RAG) is a method in natural language processing (NLP) that combines the power of both neural language models and information retrieval methods to generate responses or text that are informed by a large body of knowledge. The concept was introduced by Facebook AI researchers and represents a hybrid approach to incorporating external knowledge into generative models.

      RAG models effectively leverage a large corpus of text data without requiring it to be stored in the parameters of the model. This is achieved by utilizing a retriever-generator framework:

      1. The Retriever component is responsible for finding relevant documents or passages from a large dataset (like Wikipedia or a corpus of scientific articles) that are likely to contain helpful information for generating a response. This retrieval is typically based on vector similarity between the query and the documents in the dataset, often employing techniques like dense passage retrieval (DPR).

      2. The Generator component is a large pre-trained language model (like BART or GPT-2) that generates a response by conditioning on both the input query and the documents retrieved by the retriever. It integrates the information from the external texts to produce more informed, accurate, and contextually relevant text outputs.

      The RAG model performs this process in an end-to-end differentiable manner, meaning it can be trained in a way that updates both the retriever and generator components to minimize the difference between the generated text and the target text. The retriever is typically optimized to select documents that will lead to a correct generation, while the generator is optimized to produce accurate text given the input query and the retrieved documents.

      To summarize, RAG allows a generative model to:

      • Access vast amounts of structured or unstructured external data.
      • Answer questions or generate content that requires specific knowledge not contained within the model itself.
      • Benefit from up-to-date and expansive datasets, assuming the retriever's corpus is kept current.

      RAG addresses the limitation of standard language models that must rely solely on their internal parameters for generating text. By augmenting generation with on-the-fly retrieval of relevant context, RAG-equipped models can produce more detailed, accurate, and nuanced outputs, especially for tasks like question answering, fact-checking, and content creation where detailed world knowledge is crucial.

      This technique represents a significant advancement in generative AI, allowing models to provide high-quality outputs without memorizing all the facts internally, but rather by knowing (GPT4-0web)
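
      A compact retriever-plus-generator sketch in the spirit of the summary above, pairing a sentence-transformers retriever with a small seq2seq generator; unlike Facebook's original RAG, nothing here is trained end-to-end - it is the same pattern applied at inference time only:

      ```python
      # Retriever + generator at inference time (not the end-to-end-trained RAG).
      from sentence_transformers import SentenceTransformer, util
      from transformers import pipeline

      corpus = ["The Eiffel Tower is 330 metres tall.",
                "Mount Everest is 8,849 metres tall."]
      retriever = SentenceTransformer("all-MiniLM-L6-v2")
      generator = pipeline("text2text-generation", model="google/flan-t5-small")

      def rag(question, k=1):
          # Retriever: dense similarity search over the corpus
          hits = util.semantic_search(retriever.encode(question),
                                      retriever.encode(corpus), top_k=k)[0]
          context = " ".join(corpus[h["corpus_id"]] for h in hits)
          # Generator: condition on the query plus the retrieved context
          return generator(f"question: {question} context: {context}")[0]["generated_text"]

      print(rag("How tall is the Eiffel Tower?"))
      ```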

  6. Sep 2023
    1. followers of Spinoza adopted his definition of ultimate substance as that which can exist and can be conceived only by itself. According to the first principle of his system of pantheistic idealism, God (or Nature or Substance) is the ultimate reality given in human experience.
    2. Historically, answers to this question have fallen between two extremes. On the one hand is the skepticism of the 18th-century empiricist David Hume, who held that the ultimate reality given in experience is the moment-by-moment flow of events in the consciousness of each individual. That concept compresses all of reality into a solipsistic specious present—the momentary sense experience of one isolated percipient.
    3. two basic forms of idealism are metaphysical idealism, which asserts the ideality of reality, and epistemological idealism, which holds that in the knowledge process the mind can grasp only the psychic or that its objects are conditioned by their perceptibility.
    4. idealism, in philosophy, any view that stresses the central role of the ideal or the spiritual in the interpretation of experience. It may hold that the world or reality exists essentially as spirit or consciousness, that abstractions and laws are more fundamental in reality than sensory things, or, at least, that whatever exists is known in dimensions that are chiefly mental—through and as ideas.
    1. it is this architecture, the one which is in the heads of those writing the code, that is the most important. In adopting this decentralised approach, where the practice of architectural decision-making is much more dispersed, this problem is in many ways, mitigated

      Only true in software architecture. In enterprise architecture, which spans domains, decentralized decisions create fragmentation.

    1. For example, productivity and satisfaction are correlated, and it is possible that satisfaction could serve as a leading indicator for productivity; a decline in satisfaction and engagement could signal upcoming burnout and reduced productivity.

      Certainly not necessarily true - the correlation is mostly heuristic. I can be highly productive yet dissatisfied because the productive work doesn't have value.

    2. • Design and coding. Volume or count of design documents and specs, work items, pull requests, commits, and code reviews.
      • Continuous integration and deployment. Count of build, test, deployment/release, and infrastructure utilization.
      • Operational activity. Count or volume of incidents/issues and distribution based on their severities, on-call participation, and incident mitigation.

      Honestly, a well-oiled team with strong collaboration completely outweighs any measured outputs like this. I would never want my engineers faced with performance observability like this.

    3. The SPACE framework provides a way to logically and systematically think about productivity in a much bigger space and to carefully choose balanced metrics linked to goals—and how they may be limited if used alone or in the wrong context.

      Not sure I would classify this as logical but systematic makes sense - definitely trying to put heuristic dimensions on typically unquantifiable and varied human behaviors. Clearly, this is biased to process experts and program managerial personality types that like trying to frame things into organized buckets.

    1. the brain evolved to be uncertainty-averse. When things become less predictable — and therefore less controllable — we experience a strong state of threat. You may already know that threat leads to “fight, freeze, or flight” responses in the brain. You may not know that it also leads to decreases in motivation, focus, agility, cooperative behavior, self-control, sense of purpose and meaning, and overall well-being. In addition, threat creates significant impairments in your working memory: You can’t hold as many ideas in your mind to solve problems, nor can you pull as much information from your long-term memory when you need it.
  7. Aug 2023
    1. Metrics shape behavior, so by adding and valuing just two metrics, you've helped shape a change in your team and organization. This is why it's so important to be sure to pull from multiple dimensions of the framework: it will lead to much better outcomes at both the team and system levels.

      Probably the best statement here - but the assumption that metrics lead to better outcomes may be false.

    2. The framework is meant to help individuals, teams, and organizations identify pertinent metrics that present a holistic picture of productivity; this will lead to more thoughtful discussions about productivity and to the design of more impactful solutions

      I will give the paper credit for thinking about the issue in general.

  8. May 2023
    1. The fact that a team's need for a decision to be taken can be met by themselves also leads to appropriate levels of bias-to-action, with accountability acting as a brake when it's required.

      Totally disagree - haven't seen this in practice.