5 Matching Annotations
  1. Jul 2023
    1. the best way to increase the understandability of the CRUFT rating for documentation would be to create a linear calculation for it

      There are two versions of the calculation. The first formula is logarithmic, meaning the resulting score tends to change significantly with small changes in the values of the formula components. The second one is more user-friendly: it forms a linear curve that distributes the results evenly.

      I assessed a ticket I created (component scores of 90%, 50%, 80%, 90%, 90%) with both formulas. One showed 71% cruftiness, while the other indicated 20%. I guess you can easily tell which formula I favor. But jokes aside, I do think the second formula is simply handier for understanding how urgently you should improve your docs.
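      The two results can be reproduced under the assumption that the first formula multiplies the five component scores (cruft = 1 − C·R·U·F·T) and the second averages them (cruft = 1 − mean). A minimal sketch, with those assumed formulas:

```python
# Sketch of the two cruftiness formulas, under the assumption that the
# first one multiplies the five CRUFT component scores (so one weak
# component drags the total down sharply) and the second averages them.
from math import prod
from statistics import mean

def cruft_multiplicative(scores):
    """Cruft = 1 - C*R*U*F*T; a single weak component inflates the result."""
    return 1 - prod(scores)

def cruft_linear(scores):
    """Cruft = 1 - average(scores); each component affects the result proportionally."""
    return 1 - mean(scores)

# Component scores from the ticket above: 90%, 50%, 80%, 90%, 90%
scores = [0.9, 0.5, 0.8, 0.9, 0.9]

print(f"{cruft_multiplicative(scores):.0%}")  # 71%
print(f"{cruft_linear(scores):.0%}")          # 20%
```

      Note how a single 50% component pushes the multiplicative score to 71% while the linear one stays at 20% - exactly the gap discussed above.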

      Although I do not think this score should be mandatory, it still makes sense to have it. While measuring a BA's impact on the product is a separate topic for discussion, evaluating the artifacts a BA prepares can be a useful activity, bringing valuable insights into how the BA distributes their efforts between different activities, including documentation.

    1. how "crufty" a document actually is

      Hi! :wave: Have you ever wondered whether the documentation you publish in the Wiki attracts as much attention from readers as you think it deserves, or whether it is as accurate as your team needs? I have, and so did 50 other BAs who took part in Steve Adolph's BBC Workshop on May 8th, 2023.

      If you are with us, then let's see how crufted our documentation is. Huh, now you are wondering what "crufted" means, aren't you? (Or is it just me who discovered this technique only recently?) Cruft is a slang word for badly designed, unnecessarily complicated, or unwanted code or software. And as far back as 2007, an approach to assessing the cruftiness of documentation appeared.

      The CRUFT criteria are kinda subjective, but has that ever bothered anyone when estimating tickets in story points?

    1. the team can see all of the work associated with deciding precisely what to build, not just the coding work

      Let's see what the Rock Crusher approach is.

      Imagine business ideas as rocks, the backlog as a stone storage container, and code-writing as a stone-processing container. The rocks are mixed in size, made of different materials, and fall into the storage container from outside unpredictably. The PO selects stones from the storage, and the BA reshapes them into smaller stones and instructs the Developers on how to adjust their code-writing stone-processing containers to handle the rocks' material and size. When the PO selects the next rock, the BA splits it into a new number of pieces, and the Devs re-adjust their processing containers to the new materials and sizes. And all of this happens under pressure on the team to deliver at a consistent speed despite the unpredictable pieces of rock.

      The essence of the Rock Crusher approach is in shifting the team's focus. It should no longer be on how well the stone-processing container is tuned and how quickly the rock pieces are processed. Instead, most attention should be paid to the pre-processing stage - to what is sent into stone processing in the first place. These rock-ideas should be more carefully selected, better prepared, split, groomed, combined, and assessed both separately and in combination with others. That work sounds huge even in description; it should be a team challenge and objective, and that is where the team's efforts should go.

      Analysis of an idea - not development - is the focus and priority for the whole team!

    2. The backlog should not be a place where work goes to die

      When some need or idea appears in the minds of business owners, it takes time and effort to get that need addressed. Simply put, the need is analysed, then transformed into actionable development tasks, then developed, tested, and released. There are two problems with this approach.

      The first is that business ideas vary in size and impact and appear much faster than the team can deliver them. The second is that the team's focus is mostly on implementing the ideas, coding, and delivery.

      These problems lead to unclear priorities, a bloated backlog, unstable team delivery, and a heavy load on the Product Owner's or Business Analyst's shoulders. The Rock Crusher approach knows how to fix these unwanted outcomes.

    1. processes evolve

      Besides the fact that this article describes a genuinely interesting practice for improving a hiring process, I also see it as a great example of how to formalize the feeling that changes are needed in any process, functionality, or even an Objective from the OKRs list.

      That's how I see the steps:

      1. formulate the problem. It does not need to be a final, well-stated problem; just write down your concern with some background info;

      2. create a Request-for-comments doc to collect suggestions from various stakeholders about how to address your concern;

      3. hold some sessions to review the comments. At this point, it is quite likely that your understanding of the problem will expand and the suggested ideas will transform into new ones. This is the time to define your strategy for resolving the problem;

      4. for each group of ideas, define metrics to measure the progress. Select the north-star metric and its target value;

      5. experiment!

      - Select the stakeholders who are most interested in the change and are not afraid of being early adopters;
      - hold impact-mapping sessions and bet on some improvements to try them first. Define the fundamental part of your changes;
      - validate the progress against the metrics regularly;
      - create a pipeline to visualize your progress;

      6. analyze the results of the experiment. Adjust the changes made before, if needed.

      7. In case of positive results, make time for the less significant changes and continue to track progress.