33 Matching Annotations
  1. Nov 2022
    1. If the MCL is under constant tension it can lead to increased wear and tear of the meniscus.

      Picture this and tell me which meniscus gets compressed. The lateral or the medial one?

  2. Apr 2022
    1. Celery workers typically run the same code as the Flask app, but they are not running as Flask servers, so websockets from Celery to Flask aren't easily a thing. (I've never seen it done, but maybe someone has ironed out the tricky parts.)

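      One commonly cited workaround (a sketch only, assuming Flask-SocketIO and a Redis message queue are in use, which the quoted thread does not state): a Celery task can emit Socket.IO events through the shared message queue, and the Flask-SocketIO server relays them to browsers over the websocket.

        # tasks.py: minimal sketch; assumes Flask-SocketIO and a Redis broker.
        from celery import Celery
        from flask_socketio import SocketIO

        celery = Celery(__name__, broker='redis://localhost:6379/0')
        # No Flask app here: this SocketIO instance only writes to the message queue.
        socketio = SocketIO(message_queue='redis://localhost:6379/0')

        @celery.task
        def long_job(job_id):
            for step in range(10):
                # The Flask-SocketIO server picks this up from Redis and pushes
                # it to connected clients over the websocket.
                socketio.emit('progress', {'job': job_id, 'step': step})
            return job_id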

  3. Apr 2021
    1. We recommend a minimum of 300 generations of evolution for best results. Note that evolution is generally expensive and time consuming, as the base scenario is trained hundreds of times, possibly requiring hundreds or thousands of GPU hours.
    1. Background images. Background images are images with no objects that are added to a dataset to reduce False Positives (FP). We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference, 1% of the total).

      You can add empty images with no bboxes; a minimal sketch follows.
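
      A minimal sketch of how this is commonly done with YOLO-style datasets (directory names here are assumptions, not from the quote): copy the background images into the training images folder and give each one an empty label file (or no label file at all).

        # Hypothetical helper: add background (object-free) images to a YOLO-style dataset.
        import shutil
        from pathlib import Path

        def add_background_images(src_dir, images_dir, labels_dir):
            images_dir, labels_dir = Path(images_dir), Path(labels_dir)
            for img in Path(src_dir).glob('*.jpg'):
                shutil.copy(img, images_dir / img.name)
                # An empty label file marks the image as containing no objects.
                (labels_dir / f'{img.stem}.txt').write_text('')

        add_background_images('backgrounds/', 'dataset/images/train/', 'dataset/labels/train/')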

    1. (TensorFlow, TensorRT, PyTorch, ONNX Runtime, or a custom framework)

      It supports these formats; a client-side sketch follows.
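
      Assuming this quote refers to NVIDIA Triton Inference Server (the backend list matches its docs), one practical consequence is that client code looks identical no matter which of those frameworks the model was exported from; the backend is declared server-side. A rough sketch with hypothetical model and tensor names:

        import numpy as np
        import tritonclient.http as httpclient

        # 'detector', 'images' and 'output' are hypothetical names; the backend
        # (TensorFlow, TensorRT, PyTorch, ONNX Runtime, ...) is invisible here.
        client = httpclient.InferenceServerClient(url='localhost:8000')
        inp = httpclient.InferInput('images', [1, 3, 640, 640], 'FP32')
        inp.set_data_from_numpy(np.zeros((1, 3, 640, 640), dtype=np.float32))
        result = client.infer(model_name='detector', inputs=[inp])
        detections = result.as_numpy('output')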

  4. Mar 2021
    1. signature_def_map = {
           'serving_default': tf.saved_model.predict_signature_def(
               {signitures['image_arrays'].name: signitures['image_arrays']},
               {signitures['prediction'].name: signitures['prediction']}),
       }

      or here

    1. 'serving_default': tf.saved_model.predict_signature_def(
           {'input': inputs},
           output_dict,
       ) }

      Something needs to be changed here for serving to work; a sketch of a working export follows.
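
      For reference, a minimal self-contained sketch of exporting a SavedModel with a 'serving_default' predict signature in TF1-style code; the placeholder tensors are hypothetical stand-ins, not the real detection graph:

        import tensorflow.compat.v1 as tf
        tf.disable_v2_behavior()

        # Hypothetical stand-ins for the model's input/output tensors.
        image_arrays = tf.placeholder(tf.uint8, [None, None, None, 3], name='image_arrays')
        detections = tf.identity(tf.zeros([1, 100, 7]), name='detections')

        signature = tf.saved_model.predict_signature_def(
            inputs={'image_arrays': image_arrays},
            outputs={'detections': detections})

        builder = tf.saved_model.Builder('/tmp/exported_model/1')
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            builder.add_meta_graph_and_variables(
                sess, [tf.saved_model.tag_constants.SERVING],
                signature_def_map={'serving_default': signature})
            builder.save()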

    1. Moreover, it is common best practice to label the occluded object as if it were fully visible, rather than drawing a bounding box for only the partially visible portion of the object.
    1. So if your batch size is 8, the effective learning rate is 8 times lower than you specified.
    2. add learning_rate=0.001,lr_warmup_init=0.0001 to the --hparams

      I think we already have this sorted out? (Quick arithmetic sketch below.)
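
      A quick sketch of the arithmetic implied by the quote, assuming the default learning rate is tuned for a reference batch size of 64 (an assumption, not stated in the quote):

        base_lr = 0.08           # value passed via --hparams (example)
        reference_batch = 64     # batch size the default LR is assumed to target
        batch_size = 8
        effective_lr = base_lr * batch_size / reference_batch
        print(effective_lr)      # 0.01, i.e. 8x lower than base_lr, as the quote says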

    1. Dumping it into multiple tfrecords to allow better mixing of training data got me to an AP of 0.11 after 2 epochs.
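
      A rough sketch of sharding serialized examples into multiple TFRecord files and interleaving them at read time (file names, shard count, and buffer sizes are arbitrary here):

        import tensorflow as tf

        # Write tf.train.Example protos round-robin into N shards.
        def write_shards(examples, num_shards=16, prefix='train'):
            writers = [tf.io.TFRecordWriter(f'{prefix}-{i:05d}-of-{num_shards:05d}.tfrecord')
                       for i in range(num_shards)]
            for i, example in enumerate(examples):
                writers[i % num_shards].write(example.SerializeToString())
            for w in writers:
                w.close()

        # Read the shards in parallel and shuffle records for better mixing.
        files = tf.data.Dataset.list_files('train-*.tfrecord', shuffle=True)
        dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=4,
                                   num_parallel_calls=tf.data.AUTOTUNE).shuffle(2048)
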
    1. GPU: Nvidia 1080. I switched to a new machine using an Nvidia RTX 3090 with CUDA 11.1.
  5. Feb 2021
    1. a trustable solution, as my model never converged during training and I just gave up on using this model because the TF OD API (using faster_rcnn_resnet101) was enough for me.
  6. Dec 2019
    1. In this thesis, I propose three possible strategies: incremental relabeling, importance-weighted label prediction and active Bayesian Networks.

      More interesting approaches to the topic.

    1. Use your “gold standard” data to measure the performance of each contributor so you know when to retrain workers. When a contributor’s score falls below 70% accuracy, exclude his work and retrain.
    2. “Gold Standard” Data: A Best Practice Method for Assessing Labels

      annotation instructions; a tiny sketch of the gold-standard check follows.
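
      A tiny sketch of the check described above, grounded directly in the quoted 70% rule (the data structures are hypothetical):

        # contributor_answers: {contributor_id: {task_id: label}}, gold: {task_id: label}
        def flag_for_retraining(contributor_answers, gold, threshold=0.70):
            flagged = []
            for contributor, answers in contributor_answers.items():
                scored = [task for task in answers if task in gold]
                if not scored:
                    continue  # no gold-standard tasks answered yet
                accuracy = sum(answers[t] == gold[t] for t in scored) / len(scored)
                if accuracy < threshold:
                    flagged.append(contributor)
            return flagged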

    1. A box is considered too loose when there is too much distance between the object and the edges of the bounding box, which leads to unnecessary parts of the image background showing through within the box.

      loose bbox
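
      A hypothetical way to make "too loose" measurable, assuming a reference tight box is available (e.g. derived from a segmentation mask): compare the annotated box with the tight box via IoU and flag low-overlap annotations.

        # Hypothetical QA check; boxes are [x1, y1, x2, y2].
        def box_area(b):
            return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

        def iou(a, b):
            inter = [max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3])]
            inter_area = box_area(inter)
            union = box_area(a) + box_area(b) - inter_area
            return inter_area / union if union > 0 else 0.0

        def too_loose(annotated_box, tight_box, threshold=0.75):
            # A loose box still contains the object, so a low IoU with the tight
            # box means the annotation includes a lot of background.
            return iou(annotated_box, tight_box) < threshold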

  7. arxiv.org
    1. The left-hand side is rejected due to a too-loose bounding box

      bounding box size

  8. Nov 2019
    1. We construct a graph from the unlabeled data to represent the underlying structure, such that each node represents a data point, and edges represent the inter-relationships between them. Thereafter, considering the flow of beliefs in this graph, we choose those samples for labeling which minimize the joint entropy of the nodes of the graph.

      an interesting approach; a heavily simplified sketch follows.
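
      A heavily simplified stand-in for the idea, not the paper's joint-entropy criterion: propagate labels over a kNN graph and greedily query the unlabeled nodes whose label distributions are most uncertain.

        import numpy as np
        from sklearn.semi_supervised import LabelSpreading

        def select_queries(X, y, n_queries=10):
            # y: integer class labels, with -1 marking unlabeled points.
            model = LabelSpreading(kernel='knn', n_neighbors=7)
            model.fit(X, y)
            probs = model.label_distributions_
            entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
            entropy[y != -1] = -np.inf  # never re-query already labeled points
            return np.argsort(entropy)[-n_queries:]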

    1. Traditional Optical Character Recognition (OCR) systems

      OCR with Faster R-CNN