50 Matching Annotations
  1. Last 7 days
    1. Third example: you want to add two packages to your system that depend on the same dependency but require different versions. You cannot have multiple classes with the same name in your system, so this is impossible.

      This is one of the most frustrating situations for me. I have a personal principle of avoiding global namespaces for exactly this reason.

  2. Oct 2024
    1. Haskell type classes, for example, are a form of static information that has an influence over the dynamic semantics of the program

      For the most part, though, the missing type annotations won't change the behavior of the program. I guess the exception is numeric literals, where a type annotation changes what the type of the number is. Otherwise the program normally will just fail to type check.
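
      A tiny Haskell example of that exception (my own, not from the article): the annotation selects the instance, so the same literal expression behaves differently:

      ```haskell
      -- The annotation picks the Fractional instance, which changes the result.
      main :: IO ()
      main = do
        print (4 / 3 :: Float)   -- 1.3333334
        print (4 / 3 :: Double)  -- 1.3333333333333333
        print (4 / 3)            -- no annotation: defaulting silently picks Double
      ```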

    2. One criticism that applies equally to both proposals is that the semantics of the code now depends on the presence or absence of type information – in particular type annotations, while OCaml programmers are used to consider that they are useful for clarity and debugging purposes only.

      Violation of the gradual guarantee. I'm coming around to this being fine, but I think it needs to be declared loudly in a language's design description, and possibly an alternative syntax for type declarations should be used so it's more obvious they're not just for clarity.

  3. Aug 2024
    1. Here we face situations where things must be handled in parallel. It appears that an index is a “space” to the user, and it always makes sense to bookmark a place in a space. Maps, timelines, and almost all indexes are spaces.

      I have often wished for a URL equivalent that's an arbitrary index into some space: both content-addressed, and lenses into content.

    2. Let us visit an example: a digital geography map. Google Maps will only ever let you view details of one location at a time, as with most map programs. Occasionally, I found myself in need of comparing multiple places. Suppose you are given the task of comparing user comments on three bars. Before a certain Google Maps update, I had to do this: pan map to location 1, click on location 1, pan map to location 2, click on location 2, repeat.

      My thoughts on this problem, given a more malleable system:

      The map itself should just be a two-dimensional geographic canvas annotated with some data that is accessible within the system. There may be some built-in interactions for hover actions and display, but a "user" should be able to do ad-hoc interactions with the data through the interface.

      So a "user" could take multiple of the locations on the map and add them to an ad-hoc defined collection of locations. They could then create a new view (either integrated into the map or not) that would allow them to project out the useful information, comments and such. Importantly this view and collection would be bidirectionally linked back to the map so that you could preserve interactions between the two, i.e. hovering over a location could highlight the associated entry in the collection and vice versa.

      These user-definable tools and workspaces would allow many unanticipated interactions to arise while also reducing the original burden of complexity on the canvas creator.
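
      A minimal Haskell sketch of the model I have in mind (all names hypothetical): the ad-hoc collection holds references into the map's data rather than copies, so a projected view and the canvas can resolve the same IDs and highlight each other:

      ```haskell
      import qualified Data.Map as Map

      -- Hypothetical model: map locations keyed by stable IDs.
      newtype LocationId = LocationId Int deriving (Eq, Ord, Show)

      data Location = Location { name :: String, comments :: [String] }

      -- An ad-hoc user collection is a list of references, not copies, so the
      -- canvas and any view over the collection stay bidirectionally linked.
      newtype Collection = Collection { members :: [LocationId] }

      -- Project the useful information (here, comments) out of the collection.
      projectComments :: Map.Map LocationId Location -> Collection -> [(String, [String])]
      projectComments geo (Collection ids) =
        [ (name loc, comments loc) | i <- ids, Just loc <- [Map.lookup i geo] ]

      main :: IO ()
      main = do
        let geo = Map.fromList
              [ (LocationId 1, Location "Bar A" ["great", "loud"])
              , (LocationId 2, Location "Bar B" ["quiet"]) ]
            picks = Collection [LocationId 1, LocationId 2]
        mapM_ print (projectComments geo picks)
      ```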

    3. But here, let me state what it is that I want to happen. I am proposing a very generalized theory of GUI navigation, so instead of thinking about pages, bars, go-here-buttons, go-there-buttons, back buttons, bottom tabs, bottom sheets and so on, they are described as special cases under one system of navigation.

      I think part of the problem is the grounding in hypermedia, with navigation framed in those terms. If we think in terms of more structured data, I wonder what other affordances could be built for navigating rich data structures and graphs.

      For concepts that are truly just references, this makes sense as "data navigation". Overall, though, I think there are a lot of cases where tools are structured as navigation when they could instead be structured as a persistent workspace.

    4. In the first possibility, any navigation act will open an additional frame on the side, and frames do not close unless the user decides to click a Close button.

      Open in new tab/window

    5. The most permissive navigation scheme is one where any navigation act does not inhibit any other navigation act that could have taken place.

      There's some analog of this in the "no modes" principle.

    6. The great importance of the back button has naturally resulted from the stack navigation paradigm seen in apps. However, contrary to popular opinion, I am arguing against it on the basis of user freedom and creativity.

      I'm not sure if this is what the article is about, but I'm immediately drawn to the inherent modality built into page-oriented apps, probably influenced by the web. Instead of feeling like you have a tool and a workspace, you're being shuffled through a wizard.

    1. then attach a proof or belief that these objects are identical in some sense across the sessions

      It also seems beneficial to talk about the relations explicitly, as opposed to limiting ourselves to equivalence.

    2. “A, B, and the person holding the Social Security Number 987-65-4321 are the same biological person (whatever ‘biological’ means)”

      I'm definitely a fan of decoupling "nominal" references. Part of the problem here, I'm guessing, is that deciding equality is its own terrifying prospect.

  4. Jul 2023
    1. REPLs are nice but they work well only for reasonably isolated code with few dependencies. It's hard to set up a complex object to pass into a function. It's harder still to set up an elaborate context of dependencies around that function.

      I wonder how much of this is accomplishable by automatically parameterizing code by the types that aren't used internally, so the implementation can forget about the specifics. In addition, some sort of metaprogramming capability to automatically generate arbitrary instances, or a richer form of trace types for user types, would go a long way toward simplifying trace generation.
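
      For the instance-generation half, property-testing libraries already get close. A minimal sketch with QuickCheck (the Order type and urgent function are hypothetical stand-ins for a "complex object" and the function under exploration):

      ```haskell
      import Test.QuickCheck

      -- A "complex object" that would be tedious to construct by hand in a REPL.
      data Order = Order { orderId :: Int, items :: [String], priority :: Bool }
        deriving Show

      instance Arbitrary Order where
        arbitrary = Order <$> arbitrary <*> arbitrary <*> arbitrary

      -- The function under exploration; QuickCheck conjures its inputs for us.
      urgent :: Order -> Bool
      urgent o = priority o && not (null (items o))

      main :: IO ()
      main = do
        sample (arbitrary :: Gen Order)             -- print some generated Orders
        quickCheck (\o -> urgent o ==> priority o)  -- run against generated inputs
      ```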

  5. Jan 2023
    1. run on a wide variety of hardware - desktops, laptops, tablets, phones, watches

      Curious if he means that it can run on different devices, or that an installation spans multiple devices. I'm interested in considering the operating system as a control plane for many devices. Additionally, multi-user support for sharing hardware.

  6. Nov 2022
    1. This preserves a clear separation between text and annotations, keeps the original text freely editable, and avoids circular feedback loops that could happen if annotations were themselves searchable

      Meta programming

    2. The point is to defer this process until it’s absolutely needed. It’s okay to end up with structured schemas when we need them to support computational features, but when they’re not necessary, text is a perfectly adequate representation for humans to interact with

      Lazy formalization. Intuitively this makes the most sense to me. I do wonder if there are often cases where you want to reason with intention up front, and doing it lazily creates a burden.

    3. Notably, this process can’t avoid the need to teach the computer how to interpret meaning from freeform data

      Formalism considered a requirement

    4. There’s no way to ambiguously assign a recipe to either Tuesday or Wednesday, which would be natural to do in a paper notebook.

      Refinement types or arrays in place of scalars
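
      A minimal sketch of the arrays-in-place-of-scalars idea (types are mine, not Potluck's actual model): the day field holds every candidate day, so "Tuesday or Wednesday" is representable directly:

      ```haskell
      data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun deriving (Eq, Show)

      -- The day field is a list, so ambiguity is first-class
      -- instead of being forced down to one scalar.
      data Plan = Plan { recipe :: String, days :: [Day] } deriving Show

      main :: IO ()
      main = do
        let ambiguous = Plan "lasagna" [Tue, Wed]  -- "Tuesday or Wednesday"
            certain   = Plan "soup"    [Mon]
        -- A view can still ask: what might we cook on Tuesday?
        print [recipe p | p <- [ambiguous, certain], Tue `elem` days p]
      ```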

    5. In Potluck, we encourage people to write data in freeform text, and define searches to parse structure from the text.

      From a gradual-enrichment standpoint I understand, but from a data-entry standpoint this seems like more work.

    6. Formulas are written using JavaScript, in a live programming environment that resembles a spreadsheet

      Interesting that the programming environment is separate from the note text.

  7. Sep 2022
    1. As this quote implies, formalisms are often difficult for people to use because they need to take many extra steps (and make additional decisions) to specify anything.

      I wonder whether, if the formalisms were more carefully baked into the affordances and interactions, it could feel like less work, since you'd be getting something for the formalism rather than doing extra work to embed it.

    2. Experiences with workflow systems, systems which automatically route documents and work through defined procedures, show that systems without the ability to handle exceptions to the formalized procedure cannot support the large number of cases when exceptional procedures are required (Ellis, Gibbs, Rein, 1991).

      Feels like formalizing organic processes will have this outcome. The goal should be to make an actual formal workflow, or to build tools that enable users, rather than trying to mix the two.

    3. Of course, training and supervision helped users learning the general techniques for hypermedia authoring, but they tended to avoid (or lose interest in) the more sophisticated formalisms

      What affordances were they given in exchange for the formalisms?

    4. Many times he struggled to create a title for his note; he often claimed that the most difficult aspect of this task was thinking of good titles

      Avoid requiring canonical naming

    5. Thus, hypertexts end up as hierarchical outlines with full pages of text connected by a single link to the next page of text

      Clearly this is just historical context. I'm wondering if we still have issues with hypertext authoring. There seems to be a stronger intuition built up around how to separate pages of information. I'm curious whether we could or should be doing better.

    6. This level of formalization enables the system to apply knowledge-based reasoning techniques to support users by performing tasks such as automated diagnosis, configuration, or planning.

      What I'm getting so far is that formalization is what gives users the affordances for certain features. I'd imagine sophisticated data-mining techniques (such as text search, classification, etc.) can partially alleviate this, but explicit formalism is always going to be useful. It would be beneficial to opt into the formalism explicitly for the affordances, and to maintain bidirectional links back to the non-formalized representations. In other words, you want the ability to create a formalized view (sketched below).
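
      A minimal Haskell sketch of "formalized view" as I mean it (structure and names are mine): the freeform text stays the source of truth, and the formal layer is a partial parse that keeps a link back to it:

      ```haskell
      import Data.Maybe (mapMaybe)
      import Text.Read (readMaybe)

      -- Freeform source of truth: raw note lines, never thrown away.
      type Note = String

      -- The opt-in formalism: a typed view extracted from a note,
      -- keeping a reference back to the original text.
      data Task = Task { hours :: Int, source :: Note } deriving Show

      -- Partial, best-effort formalization: "estimate: 3" becomes a Task.
      formalize :: Note -> Maybe Task
      formalize n = case words n of
        ["estimate:", h] -> (\v -> Task v n) <$> readMaybe h
        _                -> Nothing

      main :: IO ()
      main = do
        let notes = ["estimate: 3", "call the vendor", "estimate: 5"]
        -- The formalized view; notes that don't parse simply stay freeform.
        mapM_ print (mapMaybe formalize notes)
      ```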

    7. The authors propose, based on these experiences, that the cause of a number of unexpected difficulties in human-computer interaction lies in users’ unwillingness or inability to make structure, content, or procedures explicit

      I'm curious if this is because of unwillingness or difficulty.

    1. The scalability issue is somewhat related to the versionability issue. In the small, checked exceptions are very enticing. With a little example, you can show that you've actually checked that you caught the FileNotFoundException, and isn't that great? Well, that's fine when you're just calling one API. The trouble begins when you start building big systems where you're talking to four or five different subsystems. Each subsystem throws four to ten exceptions. Now, each time you walk up the ladder of aggregation, you have this exponential hierarchy below you of exceptions you have to deal with. You end up having to declare 40 exceptions that you might throw. And once you aggregate that with another subsystem you've got 80 exceptions in your throws clause. It just balloons out of control. In the large, checked exceptions become such an irritation that people completely circumvent the feature. They either say, "throws Exception," everywhere; or—and I can't tell you how many times I've seen this—they say, "try, da da da da da, catch curly curly." They think, "Oh I'll come back and deal with these empty catch clauses later," and then of course they never do. In those situations, checked exceptions have actually degraded the quality of the system in the large.

      This is another case where I think inference would solve most of the issue.

    2. Anders Hejlsberg: Let's start with versioning, because the issues are pretty easy to see there. Let's say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing caller of that method will almost certainly not handle that exception. Adding a new exception to a throws clause in a new version breaks client code. It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, because any implementation of it might have the methods that you want to add in the next version. So you've got to create a new interface instead. Similarly with exceptions, you would either have to create a whole new method called foo2 that throws more exceptions, or you would have to catch exception D in the new foo, and transform the D into an A, B, or C. Bill Venners: But aren't you breaking their code in that case anyway, even in a language without checked exceptions? If the new version of foo is going to throw a new exception that clients should think about handling, isn't their code broken just by the fact that they didn't expect that exception when they wrote the code? Anders Hejlsberg: No, because in a lot of cases, people don't care. They're not going to handle any of these exceptions. There's a bottom level exception handler around their message loop. That handler is just going to bring up a dialog that says what went wrong and continue. The programmers protect their code by writing try finally's everywhere, so they'll back out correctly if an exception occurs, but they're not actually interested in handling the exceptions. The throws clause, at least the way it's implemented in Java, doesn't necessarily force you to handle the exceptions, but if you don't handle them, it forces you to acknowledge precisely which exceptions might pass through. It requires you to either catch declared exceptions or put them in your own throws clause. To work around this requirement, people do ridiculous things. For example, they decorate every method with, "throws Exception." That just completely defeats the feature, and you just made the programmer write more gobbledy gunk. That doesn't help anybody.

      The issue here seems to be transitivity. If method A calls B, which in turn calls C, then when C adds a new checked exception, B needs to declare it even if it is just proxying it and A is already handling it via "finally". This seems like an inference issue to me: if method B could infer its checked exceptions, this wouldn't be as big of an issue.

      You also probably want effect polymorphism for the exceptions, so you can handle them for higher-order functions. A sketch of both follows.
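
      A minimal Haskell sketch of what I mean, using MonadError constraints as a stand-in for checked exceptions (my analogy, not Hejlsberg's): the middle function picks up its callee's error constraint by inference, with no throws clause to maintain, and the higher-order helper is polymorphic in whatever errors its argument throws:

      ```haskell
      {-# LANGUAGE FlexibleContexts #-}
      import Control.Monad.Except

      data Err = NotFound | Invalid deriving Show

      -- C declares, in its type, that it may throw Err.
      c :: MonadError Err m => Int -> m Int
      c 0 = throwError NotFound
      c x
        | x < 0     = throwError Invalid
        | otherwise = pure (x * 2)

      -- B just proxies C. With no signature, GHC *infers* the
      -- MonadError Err constraint; adding exceptions to C never breaks B.
      b x = c x

      -- Effect polymorphism: works for any error type e the argument throws.
      traverseChecked :: MonadError e m => (a -> m b) -> [a] -> m [b]
      traverseChecked = mapM

      main :: IO ()
      main = print (runExcept (traverseChecked b [1, 2, -3]))  -- Left Invalid
      ```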

    1. Plus, if we can do this recursively, expanding inline items within inline items, we end up with something familiar: an outliner

      Outline view for a recursive hierarchical structure.

    2. Let’s say you’re in a workspace, listening to a podcast episode. Maybe you opened the podcast episode from a webpage you had open. As the episode plays, you realize that you would like to take some related notes. You open a new pane within your workspace, and take your notes. You can pause and play the podcast in the pane on the left, and you can take your notes in the pane on the right.

      This has me thinking about some sort of parametric workspace/view, where you could "pull out" the podcast episode and have a generic podcast-listening/note view that changes which note you're looking at based on which podcast you're listening to.

    3. The concept involves having an item just like every other in your itemized system: it has a type, attributes, and references to other items.

      I wonder if there's a good way to evolve wiki systems into the OS of the future by adding more sophisticated views and the ability to do computation. Seems like a good way to get multiplayer built in from the start. Maybe add the ability to have content-addressed items as well (a sketch below).
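
      A minimal sketch of such an item, with content addressing bolted on (structure and names are mine; hash from the hashable package is just a stand-in for a real content hash):

      ```haskell
      import Data.Hashable (hash)  -- stand-in for a real content-address hash

      newtype ItemId = ItemId Int deriving (Eq, Show)

      -- Every item has a type, attributes, and references to other items,
      -- as in the article; content addressing is layered on top.
      data Item = Item
        { itemType :: String
        , attrs    :: [(String, String)]
        , refs     :: [ItemId]
        } deriving Show

      -- Hypothetical content address: derived from the item's value, so
      -- identical items share an identity regardless of where they live.
      contentAddress :: Item -> ItemId
      contentAddress = ItemId . hash . show

      main :: IO ()
      main = do
        let note = Item "note" [("title", "reading list")] []
            page = Item "page" [] [contentAddress note]
        print (contentAddress note, page)
      ```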

    1. Now, not every programmer prefers that kind of development. Some programmers prefer to think of development as a process of designing, planning, making blueprints, and assembling parts on a workbench. There’s nothing wrong with that. Indeed, a multibillion-dollar international industry has been built upon it.

      I still think they should worry about it. Production systems need to evolve and contain data; reasoning about the systems completely statically from the source code with no regard to the existing data is a lot more complicated than it needs to be.

    2. In fact, there’s a style of programming, well known in Lisp and Smalltalk circles, in which you define a toplevel function with calls to other functions that don’t yet exist, and then define those functions as you go in the resulting breakloops. It’s a fast way to implement a procedure when you already know how it should work.
    3. Moreover, because the entire language and development system are available, unrestricted, in the repl, you can define the missing function bar, resume foo, and get a sensible result.

      This seems like one of the key points: the ability to edit computations while they're running. Typed holes with resuming get you most of the way there, but you probably also want modification of existing definitions. I wonder how you keep it from getting confusing. Something similar to FRP?

  8. Aug 2022
    1. At 3 am he realized he needed to change the process scheduler. He read enough code to find the right method, changed it, and continued with his project

      How do we enable this while preventing people from accidentally nuking their systems?

    2. Dan Ingalls implemented the first opaque overlapping windows to let users see more code and other objects on the screen

      This is interesting context. I wonder if that need has gone away with large screens or if we're not using it the way it was originally intended. My intuition is that auto-layout is generally better but for smaller pieces of data ad hoc overlaps seem fine.

    1. when a programming technology is "too simple", it loses generality, but to compensate it often accretes unguessable magic, which leads to yet more complexity.

      Is there a way to make the spectrum smoother? So that we can have immense simplicity that transitions not into "unguessable magic" but into comprehensible compositionality, while not requiring naive users to grok it.

    2. when you start with something simple but special purpose, it inevitably accretes features that attempt to increase its generality, as users run into its limitations. But the result of this evolutionary process is usually a complicated mess compared to what could be achieved by designing for generality up-front, in a more holistic way.

      I think this is true, but it's often difficult to design generality up front. A nice approach is making sure that you're able to back your way into it and modify things after the fact.

      We should be trying to make our technologies offer more "two-way door" decisions.

    3. you resort to complex hacks to subvert its limitations or combine it with some other special purpose technologies

      A lot of harm has come from using hacks or smoke and mirrors to advance technology instead of "wearing the hair shirt" and actually carrying the technology along. This was once necessary, when our computing power and software were vastly limited, but now it holds us back more than it advances us.

    1. One problem is that a person can spend years reading analogies about black hole evaporation, quantum teleportation, and so on. And at the end of all that reading they typically have… not much genuine understanding to show for it. The analogies and heuristic reasoning simply don’t go far. They may be entertaining and produce some feeling of understanding.

      Limits to learning by example

    2. good tools for thought arise mostly as a byproduct of doing original work on serious problems

      In the context of use

  9. May 2022
    1. URLs are not democratic.

      I think "monolithic" might be a bigger issue than "democratic".

  10. Mar 2022
    1. A mixin class is a parent class that is inherited from - but not as a means of specialization. Typically, the mixin will export services to a child class, but no semantics will be implied about the child "being a kind of" the parent.

      Does this have to be done via inheritance?
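
      It doesn't have to be. In Haskell, for instance, a type class with default methods gives the same "export services to a child" effect with no is-a relationship; a minimal sketch (names mine):

      ```haskell
      -- The "mixin": services with default implementations, and no parent class.
      class Greeter a where
        name  :: a -> String
        greet :: a -> String
        greet x = "hello, " ++ name x  -- exported service, overridable

      data Robot = Robot

      -- Robot picks up greet for free without "being a kind of" anything.
      instance Greeter Robot where
        name _ = "robot"

      main :: IO ()
      main = putStrLn (greet Robot)
      ```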

    1. Unlike the web, the classic desktop computing paradigm makes a distinction between apps and files. The default way for a desktop app to save data is to save it to an external file.

      I think the most important part of the file-vs-web dichotomy is that desktop applications HAVE to store their entire state in files, while web services design their APIs specifically for the presupposed use case. Often the public UX isn't even built upon the original APIs, and even when it is, the APIs are built up only as necessary to facilitate the UX.

      Files, on the other hand, are the base-level state of the data and therefore a level playing field for all consumers. I believe a similar outcome would happen if you exposed first-class database constructs or other forms of state management. Files are too anemic to be a general-purpose solution, in my opinion.

    2. Because integrations are part of the application code, the developer of the app is responsible for integrating that app with other tools

      There's some need for something like dependency injection here on the part of the user.

    3. One-off API integration doesn’t scale

      My interpretation is that this is because there are no language-level semantics for REST APIs. In regular programming languages we have mechanisms to facilitate abstraction (interfaces, typeclasses, data structures). Those don't exist at the service level to the same capacity, so it's hard to swap implementations out or to have encapsulation at the REST level.
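
      A minimal sketch of the missing mechanism as it exists inside a language (interface and names are hypothetical): callers are written against an abstraction that any backend can satisfy, which is exactly what one-off REST integrations lack:

      ```haskell
      -- The abstraction boundary REST integrations lack: any backend that
      -- implements the interface can be swapped in without touching callers.
      class NoteStore s where
        save :: s -> String -> IO ()
        load :: s -> IO [String]

      data InMemory = InMemory  -- one hypothetical backend

      instance NoteStore InMemory where
        save _ note = putStrLn ("saved: " ++ note)
        load _      = pure ["stub note"]

      -- Written against the interface, not a concrete service.
      sync :: NoteStore s => s -> IO ()
      sync s = load s >>= mapM_ (save s)

      main :: IO ()
      main = sync InMemory
      ```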

    1. For example, the range of sin is only defined for values [-1, 1]

      For some reason I feel like just wrapping with mod 2 and subtracting 1 would be useful, since you could at least recapture the cyclical nature of sin.
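
      A minimal sketch of that wrapping idea (my formulation): shift, take a real-valued mod, and shift back, so any input lands in [-1, 1] while the period-2 cycle is preserved:

      ```haskell
      import Data.Fixed (mod')

      -- Map any real input into [-1, 1], preserving a period-2 cycle,
      -- instead of rejecting out-of-range values outright.
      wrap :: Double -> Double
      wrap x = ((x + 1) `mod'` 2) - 1

      main :: IO ()
      main = print (map wrap [-1.5, 0.25, 2.5])  -- [0.5, 0.25, 0.5]
      ```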

    2. It’s worth noting that runtime errors can be avoided with systems like a static type checker, pushing the problem into compile time. The result is that your program is often in an in-between state where it won’t even start until all errors have been resolved

      You can definitely do the type checking at edit time, which is probably more important than run time in this context.

    3. You can still easily get an unexpected result, but you’ll always get some result. In our experience, this approach promotes tinkering and feels much closer to sketching than to programming.

      I don't like this approach in general. I've often found constraints crucial in the problem-solving and thinking process. I think having more ways to experiment and expand the solution space is wonderful, but it's important to have contexts where behavior is as expected, and where it's not easy to have dissonance between what is happening and what you think is happening.

    1. If you find a published item of a type your system has never seen before, your system loads the item with the item view that the publisher included

      I wonder how this is manifested. Is it delivered in a way where the view is coupled together with the data (along with "sub-views"), and a user can choose to modify the views being used at will? Or is the data its own structure, with a set of default views shipped separately that can be modified?

      Both options seem intriguing, but I lean a bit towards the second, since it may allow saner self-describing data and defaults to be used.