73 Matching Annotations
  1. Nov 2019
    1. Example var x = "5" + 2 + 3;

      crazy javascript

      try "5" + 2 + 3 vs 5 + 2 + "3"
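
      Java resolves + the same way, left to right, so the two expressions can be checked there as well (a minimal sketch; JavaScript gives the same two results for the same reason):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        // once one operand is a String, the rest of the chain is concatenation
        System.out.println("5" + 2 + 3); // "5" + 2 -> "52", then "52" + 3 -> "523"
        System.out.println(5 + 2 + "3"); // 5 + 2 -> 7 (numeric), then 7 + "3" -> "73"
    }
}
```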

    1. Underscore:

      snake_case

    2. Hyphens: first-name, last-name, master-card, inter-city.

      kebab-case

  2. Apr 2019
    1. Starting multiple services from the same image

      After the new extended Docker configuration options, the above example would look like:

      services:
        - name: mysql:latest
          alias: mysql-1
        - name: mysql:latest
          alias: mysql-2
    1. Note that C1 is faster, C2 is slower, but the C1 is slow again! This is because the profiles for C1 and C2 had merged together. Notice how flawless the measurement is for forked runs.

      I don't have a clue what is going on here. Oo

    2. JVMs are notoriously good at profile-guided optimizations. This is bad for benchmarks, because different tests can mix their profiles together, and then render the "uniformly bad" code for every test. Forking (running in a separate process) each test can help to evade this issue.

      I did not understand a thing. What are:

      • profile guided optimizations
      • mixed profiles
      • uniformly bad code?
  3. Mar 2019
    1. Iteration is the set of benchmark invocations.

      How many times should it execute the method per run?

    2. Trial is the set of benchmark iterations.

      How many times should the benchmark run?
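
      The hierarchy is easiest to see in JMH's annotations (a sketch; it needs the JMH dependency and the numbers are arbitrary): a trial is one forked JVM run made up of iterations, and within each timed iteration the method is invoked as often as it fits.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

public class MyBenchmark {

    // 2 trials (forked JVMs), each running 3 warmup + 5 measurement iterations;
    // every 1-second iteration invokes sum() as many times as possible
    @Benchmark
    @Fork(2)
    @Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
    public int sum() {
        return 1 + 2;
    }
}
```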

    1. It is possible to run benchmarks from within an existing project, and even from within an IDE, however setup is more complex and the results are less reliable.
      Options opts = new OptionsBuilder()
          .include(".*" + MyBenchmark.class.getSimpleName() + ".*")
          .forks(1)
          .build();

      new Runner(opts).run();
      
    1. The goals that are configured will be added to the goals already bound to the lifecycle from the packaging selected

      Why does Maven have to be so bad? This sentence is so hard to read.

    2. tasks

      goals?

    1. @MethodSource(names = "data")

      Expects an array of strings!

      @MethodSource({"data"})

      or

      @MethodSource(value = "data")

    2. @MethodSource(names = "genTestData")

      @MethodSource({"data"}) or @MethodSource(value = "data")

    3. Specifies a class that provides the test data. The referenced class has to implement the ArgumentsProvider interface.

      Probably best with an inner class

    4. public static int[][] data() { return new int[][] { { 1 , 2, 2 }, { 5, 3, 15 }, { 121, 4, 484 } }; }

      THIS IS A METHOD
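
      For context, here is how such a factory plugs into a current JUnit 5 parameterized test (a sketch; it needs the junit-jupiter-params dependency, and the multiply semantics is guessed from the numbers):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

class MultiplyTest {

    // each inner int[] becomes one invocation of the test method
    public static int[][] data() {
        return new int[][] { { 1, 2, 2 }, { 5, 3, 15 }, { 121, 4, 484 } };
    }

    @ParameterizedTest
    @MethodSource("data")   // current JUnit 5: no "names =" attribute
    void multiplies(int[] row) {
        assertEquals(row[2], row[0] * row[1]);
    }
}
```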

    5. 10.9.1. Using Dynamic Tests

      Might be useful, but it is not well supported by IDEs or build tools right now.

    6. If you want to ensure that a test fails if it isn’t done in a certain amount of time you can use the assertTimeout() method

      Very useful if your specs require time-critical execution

    7. 10.7. Grouped assertions

      I'm unable to see the use case; separate assertions should be way more helpful.

    8. This lets you define which part of the test should throw the exception. The test will still fail if an exception is thrown outside of this scope.

      Normally tests fail if an exception is thrown. But sometimes throwing an exception is exactly the behavior that is supposed to be tested. This is possible by expecting exceptions.
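
      In JUnit 5 this scoping is done with assertThrows, which takes the code that is expected to throw as a lambda (a sketch, assuming the junit-jupiter dependency):

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class ExceptionTest {

    @Test
    void invalidNumberThrows() {
        // only the lambda may throw; an exception anywhere else
        // in the test method still fails the test
        assertThrows(NumberFormatException.class,
                () -> Integer.parseInt("not a number"));
    }
}
```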

    9. 10.5. Test Suites

      This feels like a copy-paste example; there is nothing to try here and it does not work.

    10. Alternatively you can use Assumptions.assumeFalse or Assumptions.assumeTrue to define a condition for test deactivation. Assumptions.assumeFalse marks the test as invalid, if its condition evaluates to true. Assumptions.assumeTrue evaluates the test as invalid if its condition evaluates to false. For example, the following disables a test on Linux:

      This is quite useful to create conditional tests
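
      A sketch of such a conditional test (assuming the junit-jupiter dependency; the os.name check is the usual idiom for the Linux example the quote mentions):

```java
import static org.junit.jupiter.api.Assumptions.assumeFalse;

import org.junit.jupiter.api.Test;

class ConditionalTest {

    @Test
    void skippedOnLinux() {
        // if the condition is true the test is aborted (reported as skipped),
        // not failed
        assumeFalse(System.getProperty("os.name").toLowerCase().contains("linux"));
        // assertions here only run on non-Linux machines
    }
}
```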

    11. 10.2. Usage of JUnit 5 with Maven

      This example shows how to import all components of JUnit 5 into your project. We need to register the individual components with Maven surefire:

      <build>
        <plugins>
          <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
              <source>${java.version}</source>
              <target>${java.version}</target>
            </configuration>
          </plugin>
          <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.19.1</version>
            <configuration>
              <includes>
                <include>**/Test*.java</include>
                <include>**/*Test.java</include>
                <include>**/*Tests.java</include>
                <include>**/*TestCase.java</include>
              </includes>
              <properties>
                <!-- <includeTags>fast</includeTags> -->
                <excludeTags>slow</excludeTags>
              </properties>
            </configuration>
            <dependencies>
              <dependency>
                <groupId>org.junit.platform</groupId>
                <artifactId>junit-platform-surefire-provider</artifactId>
                <version>${junit.platform.version}</version>
              </dependency>
              <dependency>
                <groupId>org.junit.jupiter</groupId>
                <artifactId>junit-jupiter-engine</artifactId>
                <version>${junit.jupiter.version}</version>
              </dependency>
              <dependency>
                <groupId>org.junit.vintage</groupId>
                <artifactId>junit-vintage-engine</artifactId>
                <version>${junit.vintage.version}</version>
              </dependency>
            </dependencies>
          </plugin>
        </plugins>
      </build>

      And add the dependencies:

      <dependencies>
        <dependency>
          <groupId>org.junit.jupiter</groupId>
          <artifactId>junit-jupiter-api</artifactId>
          <version>${junit.jupiter.version}</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>${junit.version}</version>
          <scope>test</scope>
        </dependency>
      </dependencies>

      You can find a complete example of a working maven configuration here: https://github.com/junit-team/junit5-samples/blob/r5.0.0-M4/junit5-maven-consumer/pom.xml The above works for Java projects but not yet for Android projects.

      This is not up to date. Use the official Maven Surefire docs; you only need to import one dependency.

      https://maven.apache.org/surefire/maven-surefire-plugin/examples/junit-platform.html

    12. <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>${junit.version}</version>
          <scope>test</scope>
        </dependency>

      Not needed anymore.

    13. This class can be executed like any other Java program on the command line. You only need to add the JUnit library JAR file to the classpath.

      java -cp myBuild.jar:junit.jar de.vogella.junit.first.MuTestRunner

    14. static

      what does import static do?
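
      To answer the question: import static brings a class's static members into scope, so they can be used without the class prefix. A minimal sketch:

```java
// imports the static member Math.max, not a class
import static java.lang.Math.max;

public class StaticImportDemo {
    public static void main(String[] args) {
        // without the static import this would have to be Math.max(3, 5)
        System.out.println(max(3, 5)); // 5
    }
}
```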

    1. people believe that asking system for time is dirt cheap

      Not cheap oO?
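
      A quick way to check: time a large batch of System.nanoTime() calls. On typical hardware a single call costs on the order of tens of nanoseconds, which is far from free inside a hot loop (a rough sketch; exact numbers depend on OS and JVM):

```java
public class TimeCallCost {
    public static void main(String[] args) {
        final int calls = 10_000_000;
        long sink = 0;                      // consume results so the loop is not removed
        long start = System.nanoTime();
        for (int i = 0; i < calls; i++) {
            sink += System.nanoTime();
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("approx ns per call: " + (double) elapsed / calls);
        System.out.println("sink: " + sink);
    }
}
```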

    1. $ java -cp target/benchmarks.jar your.test.ClassName

      The target will be run from your class, which is hopefully inside benchmarks.jar. Need to check.

    2. $ mvn clean install

      Why do I need to install my project? As far as I know benchmarks.jar will contain everything.

  4. Feb 2019
    1. In this example I have added a nested static class named MyState. The MyState class is annotated with the JMH @State annotation. This signals to JMH that this is a state class. Notice that the testMethod() benchmark method now takes an instance of MyState as parameter. Notice also that the testMethod() body has now been changed to use the MyState object when performing its sum calculation.

      Why do I need to provide a state with an additional class? I could also just provide the whole class as state.
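
      Your intuition is right: JMH also accepts @State on the benchmark class itself, so the extra nested class is just one option (a sketch; it needs the JMH dependency, names mirror the article's example):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)        // the whole benchmark class is the state
public class MyBenchmark {

    int a;
    int b;

    @Setup
    public void setup() {
        a = 1;
        b = 2;
    }

    @Benchmark
    public int testMethod() {
        return a + b;
    }
}
```

      A separate state class becomes useful when the same state should be shared between several benchmark classes or given a different scope (e.g. Scope.Benchmark).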

    2. You should let the computer alone while it runs the benchmarks, and you should close all other applications (if possible). If your computer is running other applications, these applications may take time from the CPU and give incorrect (lower) performance numbers.

      or, make the test environment as real as possible

    3. Writing a correct Java microbenchmark typically entails preventing the optimizations the JVM and hardware may apply during microbenchmark execution which could not have been applied in a real production system. That is what JMH - the Java Microbenchmark Harness - is helping you do.

      This is maybe wrong. Benchmarks should be able to give you a snapshot of what is happening, on all possible machines. Why should they be without optimizations? Especially if you need to bench for a real machine.

    1. The column store has the additional advantage that you can use different primitive types for each column. In the record store you are pretty much forced to use a byte array to make sure you can support all types of fields. With a column store, one column can be an array of short, int, long etc. or whatever else you need.

      In general, we are back to C, or even before that.
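
      The difference can be sketched in Java: a column store is just one primitive array per field, each with its own type, while a record store flattens every field of every record into one shared byte array (the field layout here is hypothetical):

```java
public class StoreDemo {
    public static void main(String[] args) {
        int records = 3;

        // column store: one array per column, each with the right primitive type
        short[] ageColumn = { 30, 41, 25 };
        long[] idColumn = { 100L, 200L, 300L };

        // record store: 10 bytes per record (2 bytes age + 8 bytes id),
        // encoded and decoded by hand
        byte[] recordStore = new byte[records * 10];

        // scanning one column only touches that column's memory
        long ageSum = 0;
        for (short age : ageColumn) {
            ageSum += age;
        }
        System.out.println(ageSum); // 96
        System.out.println(idColumn.length + ", " + recordStore.length); // 3, 30
    }
}
```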

    2. Using a column store it is very fast to search for records with column values matching a given criteria. You can just scan through the column arrays for the columns you want to search in. This is faster than searching in a record store, since you do not have to skip over unused fields.

      Fields could also be skipped in a record store, but the memory would be more fragmented, resulting in non-sequential reads.

    3. A record store is actually a long byte array with "records" in. Each record consists of several fields which are stored after each other in the byte array. Each field may consist of one or more bytes.

      Are these fields like fields of an object, or fields like addresses?

    1. The reason is, that all caching strategies are based on the assumption that your program will access data sequentially.

      But why?
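
      Because CPUs fetch whole cache lines and prefetch the following ones: sequential access hits the cache almost every time, while large strides waste most of every fetched line. A sketch comparing both traversal orders of the same matrix (timings vary per machine; the sums are always identical):

```java
public class TraversalDemo {
    public static void main(String[] args) {
        int n = 2048;
        int[][] m = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = 1;

        long t0 = System.nanoTime();
        long rowSum = 0;
        for (int i = 0; i < n; i++)        // row-major: sequential in memory
            for (int j = 0; j < n; j++)
                rowSum += m[i][j];
        long rowTime = System.nanoTime() - t0;

        t0 = System.nanoTime();
        long colSum = 0;
        for (int j = 0; j < n; j++)        // column-major: jumps a whole row per step
            for (int i = 0; i < n; i++)
                colSum += m[i][j];
        long colTime = System.nanoTime() - t0;

        System.out.println("row-major ns:    " + rowTime);
        System.out.println("column-major ns: " + colTime);
        System.out.println(rowSum == colSum); // true; only the speed differs
    }
}
```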

    1. Additionally, if your server works on many tasks at the same time (e.g incoming HTTP requests), the other CPUs in your server may already be busy working on their own tasks. Parallelizing tasks gain you nothing then, as the CPUs are already busy. In fact, it may hurt performance (unless you have way more CPUs than you are using on average).

      This is an interesting point: running your system at 100% might slow it down.

    2. Object allocation and garbage collection is slow.

      Creating and removing objects is slow

    3. However, even now with Java 8, this is not correct. Algorithms, data formats, data structures, memory usage patterns, IO usage patterns etc. matter! There are many situations where you can optimize your code better than the Java compiler and JVM, because you know more about what your system is trying to do, its data structures, data usage patterns etc. than Java does.

      Yes, but optimizations are not general; this might create immense technical debt.

    4. Second, we Java developers have been fed a lot of untrue stories about the Java compiler and the Java Virtual Machine. It is often said that the Java compiler or VM can do a better job of optimizing your code than you can.

      This is an interesting statement. Hope there will be an explanation.

  5. Jan 2019
    1. streams the children set, maps over this stream, creating a new CountingTask for each element, executes each subtask by forking it, collects the results by calling the join method on each forked task, sums the results using the Collectors.summingInt collector.

      public static class CountingTask extends RecursiveTask<Integer> {

          private final TreeNode node;

          public CountingTask(TreeNode node) {
              this.node = node;
          }

          @Override
          protected Integer compute() {
              return node.value + node.children.stream()
                  .map(childNode -> new CountingTask(childNode).fork())
                  .collect(Collectors.summingInt(ForkJoinTask::join));
          }
      }

      The code to run the calculation on an actual tree is very simple:

      TreeNode tree = new TreeNode(5,
          new TreeNode(3), new TreeNode(2,
              new TreeNode(2), new TreeNode(8)));

      ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
      int sum = forkJoinPool.invoke(new CountingTask(tree));

      Good example

    2. This instance controls several re-used threads for executing these tasks.

      It is important to note that those threads are reused. Even if thread creation requires less overhead than process creation, there is still some. By reusing threads this overhead is reduced even further.

    1. The plugin will prevent binary files filtering without adding some excludes configuration for the following file extensions jpg, jpeg, gif, bmp and png. If you like to add supplemental file extensions this can simply achieved by using a configuration like the following:

      Should be the other way round: nothing is filtered unless explicitly stated!

    1. If you have both text files and binary files as resources it is recommended to have two separated folders. One folder src/main/resources (default) for the resources which are not filtered and another folder src/main/resources-filtered for the resources which are filtered.

      This is so incredibly stupid. It totally rips your project apart; now you have to check two or more directories for files that might even have a complex folder structure. MAVEN IS SHIT

    1. Repositories are home to two major types of artifacts. The first are artifacts that are used as dependencies of other artifacts. These are the majority of plugins that reside within central. The other type of artifact is plugins. Maven plugins are themselves a special type of artifact. Because of this, plugin repositories may be separated from other repositories (although, I have yet to hear a convincing argument for doing so). In any case, the structure of the pluginRepositories element block is similar to the repositories element. The pluginRepository elements each specify a remote location of where Maven can find new plugins.

      Why are there even plugin repos ...

    2. Whenever a project has a dependency upon an artifact, Maven will first attempt to use a local copy of the specified artifact. If that artifact does not exist in the local repository, it will then attempt to download from a remote repository.

      But in which order?

    3. Repositories are collections of artifacts which adhere to the Maven repository directory layout.

      So to be a Repo you have to contain artifacts in a maven repo structure. Done
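
      For reference, the layout the quote refers to, as it appears in the local repository (the coordinates org.example:my-lib:1.2.3 are hypothetical):

```
~/.m2/repository/
  org/
    example/
      my-lib/
        1.2.3/
          my-lib-1.2.3.pom
          my-lib-1.2.3.jar
```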

    4. The Build type in the XSD denotes those elements that are available only for the "project build". Despite the number of extra elements (six), there are really only two groups of elements that project build contains that are missing from the profile build: directories and extensions.

      I don't even get what they are talking about. It's as if this whole page expects me to be a hardcore Maven developer.

    5. id: Self explanatory. It specifies this execution block between all of the others. When the phase is run, it will be shown in the form: [plugin:goal execution: id]. In the case of this example: [antrun:run execution: echodir]

      Not the plugin id

    6. check the entire dependency tree to avoid this problem; mvn dependency:tree is helpful.

      Interesting command

    7. If you then use dependencyManagement to specify an older version, dep2 will be forced to use the older version, and fail.

      Seems stupid, should be handled by maven

    8. Exclusions tell Maven not to include the specified project that is a dependency of this dependency (in other words, its transitive dependency)

      I don't get why we should exclude transitive dependencies. An example is missing, and also a good explanation. This documentation is so bad. :(
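
      The usual reason is a transitive dependency that clashes with something the project already has (a different version, or an unwanted logging implementation). A sketch with hypothetical coordinates:

```xml
<dependency>
  <groupId>org.example</groupId>
  <artifactId>some-lib</artifactId>
  <version>1.0</version>
  <exclusions>
    <!-- some-lib drags this in transitively; the project ships its own copy -->
    <exclusion>
      <groupId>org.example</groupId>
      <artifactId>conflicting-lib</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```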

    9. Install the dependency locally using the install plugin.

      Or install with a special local repo and ship with the project

    10. The valid types are Plexus role-hints (read more on Plexus for a explanation of roles and role-hints)

      Goddammit, link this shit!

    11. The POM defined above is the minimum that both Maven will allow

      Once again, inconsistent information! The minimal POM looks like this:

      <project>
        <groupId>gID</groupId>
        <artifactId>aID</artifactId>
        <version>x.x.x</version>
      </project>

      Inheritance takes care of everything else, since the super POM will be parsed first

    12. That is currently the only supported POM version for both Maven 2 & 3, and is always required.

      But it gets inherited

    1. Firefox Send and Firefox Lockbox will continue in active development in 2019 as standalone products. Notes, Firefox Color, Side View, Price Wise, and Email Tabs will all remain available at addons.mozilla.org for the foreseeable future. (ed note: I’ll add links to these new URLs here once I have them early next week)

      I was never aware of those extensions, have to check them out!

    1. these forms are now deprecated and should not be used.

      Which forms exactly, please!

    2. One factor to note is that these variables are processed after inheritance as outlined above. This means that if a parent project uses a variable, then its definition in the child, not the parent, will be the one eventually used.

      What? I don't get it. This needs an example. Why is the whole manual so badly written? :<
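
      A sketch of what they mean (property name and coordinates are hypothetical): the parent uses ${greeting} in an element the child inherits, but since interpolation runs after inheritance, building the child substitutes the child's value.

```xml
<!-- parent pom.xml -->
<project>
  <groupId>org.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0</version>
  <properties>
    <greeting>from-parent</greeting>
  </properties>
  <name>app-${greeting}</name>
</project>

<!-- child pom.xml -->
<project>
  <parent>
    <groupId>org.example</groupId>
    <artifactId>parent</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>child</artifactId>
  <properties>
    <greeting>from-child</greeting>
  </properties>
  <!-- the inherited <name> resolves to "app-from-child" -->
</project>
```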

    3. To address this directory structure (or any other directory structure), we would have to add the <relativePath> element to our parent section.

      Looks less robust, but might be fine as long as the whole project is shipped with the folder structure.

  6. Dec 2018
    1. Headers in a manifest

      Header: Definition
      Name: The name of the specification.
      Specification-Title: The title of the specification.
      Specification-Version: The version of the specification.
      Specification-Vendor: The vendor of the specification.
      Implementation-Title: The title of the implementation.
      Implementation-Version: The build number of the implementation.
      Implementation-Vendor: The vendor of the implementation.

      It would be nice to have a bit more background on why this stuff is named like this.

    1. To load classes in JAR files within a JAR file into the class path, you must write custom code to load those classes. For example, if MyJar.jar contains another JAR file called MyUtils.jar, you cannot use the Class-Path header in MyJar.jar's manifest to load classes in MyUtils.jar into the class path.

      So, other jars have to be extracted.

    2. The Class-Path header points to classes or JAR files on the local network,

      Wait, WHAT? on the local network??? Why would it look on the local network?

    1. To modify the manifest, you must first prepare a text file containing the information you wish to add to the manifest. You then use the Jar tool's m option to add the information in your file to the manifest.

      You don't add a manifest, but you add a second file that contains additional fields

    2. Warning: The text file from which you are creating the manifest must end with a new line or carriage return. The last line will not be parsed properly if it does not end with a new line or carriage return.

      This is very stupid....

    1. As an example, suppose you wanted to put audio files and gif images used by the TicTacToe demo into a JAR file, and that you wanted all the files to be on the top level, with no directory hierarchy. You could accomplish that by issuing this command from the parent directory of the images and audio directories: jar cf ImageAudio.jar -C images . -C audio .

      Don't preserve relative paths

    2. Though the verbose output doesn't indicate it, the Jar tool automatically adds a manifest file to the JAR archive

      OHHHH GOD, WHY!!! it's already verbose. ADD EVERYTHING

    1. The contents of the settings.xml can be interpolated using the following expressions:

      • ${user.home} and all other system properties (since Maven 3.0)
      • ${env.HOME} etc. for environment variables

      Note that properties defined in profiles within the settings.xml cannot be used for interpolation.

      Didn't get that part

  7. Nov 2018
    1. Interactive profiling
    2. Micro-benchmarks are comparative measurements in which the performance of different alternative algorithms is measured and then compared in order to determine the better (i.e. faster) algorithm

      They don't have to be used only to determine the fastest algorithm, but also to run different variations of data within an unaltered environment (memory layout, etc.)

    3. In profiling, unlike in micro-benchmarking, the entire application is measured and analyzed.

      Yes, but also the JRE