96 Matching Annotations
  1. Oct 2019
    1. OD is designed to deliver non-HTML cacheable objects (that is, objects that aren't text/html content type) under 100 MB in size.

      Does Akamai support the text/html content type? This is a MUST-have for us to move forward.

  2. Nov 2017
    1. // The search space within the array is changing for each round - but the list // is still the same size. Thus, k does not need to be updated with each round.

      If you do not keep the search space as a window into the same array (i.e. if you recurse on a re-based subarray), you have to shift the k-th index on every round relative to the new subarray start:

      if (pivotIdx < k - 1): quickSelect(A, pivotIdx + 1, end, (k - 1) - (pivotIdx + 1))
      if (pivotIdx > k - 1): quickSelect(A, start, pivotIdx - 1, k - 1)
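
      For contrast, a minimal runnable Java sketch (class and helper names are my own; k here is 0-based) that passes indices into the same full array, so k never needs re-basing, which is exactly what the quoted comment describes:

        public class QuickSelectSketch {
            // Returns the k-th smallest element of A (k is 0-based).
            // k is never adjusted: every call passes indices into the same full array.
            static int quickSelect(int[] A, int start, int end, int k) {
                int pivotIdx = partition(A, start, end);
                if (pivotIdx == k) return A[pivotIdx];
                if (pivotIdx < k)  return quickSelect(A, pivotIdx + 1, end, k);
                return quickSelect(A, start, pivotIdx - 1, k);
            }

            // Lomuto partition: moves A[end] to its final sorted position, returns that index.
            static int partition(int[] A, int start, int end) {
                int pivot = A[end], i = start;
                for (int j = start; j < end; j++) {
                    if (A[j] < pivot) { int t = A[i]; A[i] = A[j]; A[j] = t; i++; }
                }
                int t = A[i]; A[i] = A[end]; A[end] = t;
                return i;
            }

            public static void main(String[] args) {
                int[] A = {7, 2, 9, 4, 1};
                System.out.println(quickSelect(A, 0, A.length - 1, 2));  // 3rd smallest -> 4
            }
        }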

  3. Sep 2017
    1. Amazon integrated customer data and payment information with e-book distribution and its Amazon publishing initiative

      Customer data (big data) + payment info (where's the money) + e-book distribution (infrastructure: the Kindle store's and Kindle device's seamless integration)

      The earlier players integrated: procurement (the writer's initial draft) + editing + marketing + distribution. Think book reviews and author tours on talk shows.

      Amazon's idea is more insightful and focused on individual customers, not shooting in the dark :)

    1. This is one of the few reasons I like to use vim's mouse mode. If you use the GUI version, or your terminal supports sending drag events (such as xterm or rxvt-unicode), you can click on the split line and drag to resize the window exactly where you want, without a lot of guess work using the ctrl-w plus, minus, less, greater combinations. In terminal versions, you have to set mouse mode properly for this to work:

       :set mouse=n

       (I use 'n', but 'a' also works) and you have to set the tty mouse type:

       :set ttymouse=xterm2

       A lot of people say that a lot of time is wasted using the mouse (mostly due to the time it takes to move your hand from the keyboard to the mouse and back), but I find that, in this case, the time saved by having immediate feedback while adjusting window sizes and the quickness of re-resizing (keep moving the mouse instead of typing another key sequence) outweighs the delay of moving my hand.

      Simply amazing setup for working with multiple split screens in vim.

    1. "What we think about is that there's a conversation, and inside of that conversation you have a contextual place where you can have all of the interactions that you want or have or need to have with a brand or service, and it can take multiple forms. It can be buttons, it can be UI [user interface] and it can be conversational when it needs to be," Marcus said.

      This is exactly what in-context payments mean.

  4. Aug 2017
  5. Jul 2017
    1. When you are starting your Kafka broker you can define a set of properties in the conf/server.properties file. This file is just a key-value property file. One of the properties is auto.create.topics.enable; if it is set to true (the default), Kafka will create a topic automatically when you send a message to a non-existing topic. All config options you can find here.

       Imho, a simple rule for creating topics is the following: the number of replicas must not be less than the number of nodes that you have, and the number of partitions for a topic must be a multiple of the number of nodes in your cluster. For example: you have a 9-node cluster; your topic should have 9 partitions and 9 replicas, or 18 partitions and 9 replicas, or 36 partitions and 9 replicas, and so on.

      Shorthand: number of replicas = #replicas, number of nodes = #nodes, number of partitions = #partitions.

      #replicas >= #nodes (and since Kafka caps a topic's replication factor at the broker count, in practice this means #replicas = #nodes)

      #partitions = k x #nodes, for some integer k
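
      A hedged sketch of applying that rule of thumb with Kafka's Java AdminClient (broker address, topic name, and the 9-node sizing are made-up placeholders):

        import java.util.Collections;
        import java.util.Properties;
        import org.apache.kafka.clients.admin.AdminClient;
        import org.apache.kafka.clients.admin.NewTopic;

        public class CreateTopicSketch {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");  // placeholder address

                try (AdminClient admin = AdminClient.create(props)) {
                    // 9-node cluster: 18 partitions (k = 2 times 9 nodes), replication factor 9.
                    NewTopic topic = new NewTopic("my-topic", 18, (short) 9);
                    admin.createTopics(Collections.singleton(topic)).all().get();
                }
            }
        }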

    1. Owning stock gives you the right to vote in shareholder meetings, receive dividends (which are the company’s profits) if and when they are distributed, and it gives you the right to sell your shares to somebody else.

      Dividends are profits shared amongst shareholders.

  6. Jun 2017
    1. public void increment(String label, String topic) {
       +   Stats.incr(label);
       + }

      import io.prometheus.client.Counter;

      public static final Counter requests = Counter.build()
          .name("requests_total")
          .help("Total requests.")
          .register();

      public void increment(String label, String topic) {
          requests.inc();  // Your code here.
      }

    1. A better alternative is at least once message delivery. For at least once delivery, the consumer reads data from a partition, processes the message, and then commits the offset of the message it has processed. In this case, the consumer could crash between processing the message and committing the offset and when the consumer restarts it will process the message again. This leads to duplicate messages in downstream systems but no data loss.

      This is what SECOR does.
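
      More concretely, at-least-once with the Kafka Java consumer means disabling auto-commit and committing only after processing. A minimal sketch (broker address, group id, and topic are placeholders):

        import java.time.Duration;
        import java.util.Collections;
        import java.util.Properties;
        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.ConsumerRecords;
        import org.apache.kafka.clients.consumer.KafkaConsumer;

        public class AtLeastOnceSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");  // placeholder
                props.put("group.id", "uploader");               // placeholder
                props.put("enable.auto.commit", "false");        // commit manually, after processing
                props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
                props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList("events"));  // placeholder topic
                    while (true) {
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                        }
                        // Crash before this line and the batch is re-processed on restart:
                        // duplicates downstream, but no data loss.
                        consumer.commitSync();
                    }
                }
            }
        }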

    2. By electing a new leader as soon as possible, messages may be dropped, but we will minimize downtime as any new machine can be leader.

      Two scenarios to get a leader back: 1) wait for the old leader to come back online, or 2) elect the first replica that comes back up. But in the second scenario, if that replica was a bit behind the old leader, then everything written between the time the replica went down and the time the leader went down is lost.

      So there is a trade-off between availability and consistency (durability).
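
      Kafka exposes exactly this trade-off as a real topic/broker setting, unclean.leader.election.enable: true favors availability (a lagging replica may become leader and committed messages may be dropped), false favors consistency/durability (wait for an in-sync replica). A sketch of setting it per topic with the Java AdminClient (broker address and topic name are placeholders):

        import java.util.Collection;
        import java.util.Collections;
        import java.util.Map;
        import java.util.Properties;
        import org.apache.kafka.clients.admin.AdminClient;
        import org.apache.kafka.clients.admin.AlterConfigOp;
        import org.apache.kafka.clients.admin.ConfigEntry;
        import org.apache.kafka.common.config.ConfigResource;

        public class LeaderElectionConfigSketch {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");  // placeholder

                try (AdminClient admin = AdminClient.create(props)) {
                    ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "events");
                    // false = durability over availability: never elect an out-of-sync replica.
                    AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("unclean.leader.election.enable", "false"),
                        AlterConfigOp.OpType.SET);
                    Map<ConfigResource, Collection<AlterConfigOp>> configs =
                        Collections.singletonMap(topic, Collections.singleton(op));
                    admin.incrementalAlterConfigs(configs).all().get();
                }
            }
        }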

    1. On every received heartbeat, the coordinator starts (or resets) a timer. If no heartbeat is received when the timer expires, the coordinator marks the member dead and signals the rest of the group that they should rejoin so that partitions can be reassigned. The duration of the timer is known as the session timeout and is configured on the client with the setting session.timeout.ms. 

      Time-to-live for the consumers: if a consumer's heartbeat doesn't reach the coordinator within this duration, the coordinator marks the consumer dead and redistributes its partitions to the remaining consumers in the consumer group.
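
      The two settings in play on the consumer side (values here are illustrative, not recommendations):

        import java.util.Properties;

        Properties props = new Properties();
        // Coordinator marks this consumer dead after 10s without a heartbeat
        // and reassigns its partitions to the rest of the group.
        props.put("session.timeout.ms", "10000");
        // Send heartbeats comfortably inside the session timeout.
        props.put("heartbeat.interval.ms", "3000");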

    1. An index can potentially store a large amount of data that can exceed the hardware limits of a single node. For example, a single index of a billion documents taking up 1TB of disk space may not fit on the disk of a single node or may be too slow to serve search requests from a single node alone.

      Indexes may outgrow the disk (or the query capacity) of a single node. Hence you split an index into shards across nodes to get the most out of your instances.
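
      A sketch of the fix (sharding at index-creation time) with the Elasticsearch 7.x high-level REST Java client; the index name, node address, and shard counts are placeholders, and the client API differs across ES versions:

        import org.apache.http.HttpHost;
        import org.elasticsearch.client.RequestOptions;
        import org.elasticsearch.client.RestClient;
        import org.elasticsearch.client.RestHighLevelClient;
        import org.elasticsearch.client.indices.CreateIndexRequest;
        import org.elasticsearch.common.settings.Settings;

        public class CreateIndexSketch {
            public static void main(String[] args) throws Exception {
                try (RestHighLevelClient client = new RestHighLevelClient(
                        RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
                    CreateIndexRequest request = new CreateIndexRequest("docs");
                    request.settings(Settings.builder()
                        .put("index.number_of_shards", 5)      // spread the index over up to 5 nodes
                        .put("index.number_of_replicas", 1));  // plus one copy of each shard
                    client.indices().create(request, RequestOptions.DEFAULT);
                }
            }
        }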

  7. May 2017
    1. Optimum buffer size is related to a number of things: file system block size, CPU cache size and cache latency. Most file systems are configured to use block sizes of 4096 or 8192. In theory, if you configure your buffer size so you are reading a few bytes more than the disk block, the operations with the file system can be extremely inefficient (i.e. if you configured your buffer to read 4100 bytes at a time, each read would require 2 block reads by the file system). If the blocks are already in cache, then you wind up paying the price of RAM -> L3/L2 cache latency. If you are unlucky and the blocks are not in cache yet, then you pay the price of the disk -> RAM latency as well.

       This is why you see most buffers sized as a power of 2, and generally larger than (or equal to) the disk block size. This means that one of your stream reads could result in multiple disk block reads - but those reads will always use a full block - no wasted reads.

       Now, this is offset quite a bit in a typical streaming scenario, because the block that is read from disk is going to still be in memory when you hit the next read (we are doing sequential reads here, after all) - so you wind up paying the RAM -> L3/L2 cache latency price on the next read, but not the disk -> RAM latency. In terms of order of magnitude, disk -> RAM latency is so slow that it pretty much swamps any other latency you might be dealing with.

       So, I suspect that if you ran a test with different cache sizes (haven't done this myself), you will probably find a big impact of cache size up to the size of the file system block. Above that, I suspect that things would level out pretty quickly.

       There are a ton of conditions and exceptions here - the complexities of the system are actually quite staggering (just getting a handle on L3 -> L2 cache transfers is mind-bogglingly complex, and it changes with every CPU type). This leads to the 'real world' answer: if your app is like 99% out there, set the cache size to 8192 and move on (even better, choose encapsulation over performance and use BufferedInputStream to hide the details). If you are in the 1% of apps that are highly dependent on disk throughput, craft your implementation so you can swap out different disk interaction strategies, and provide the knobs and dials to allow your users to test and optimize (or come up with some self-optimizing system).

      What buffer size should you keep when reading from a file into a buffer?
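
      The quoted advice in Java: a power-of-two buffer at least as large as the filesystem block, or just let BufferedInputStream pick its default. The file name is a placeholder:

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class BufferedReadSketch {
            public static void main(String[] args) throws IOException {
                // 8192: a power of two >= the typical 4096/8192 filesystem block size,
                // so each underlying read maps onto whole disk blocks (no wasted reads).
                try (BufferedInputStream in =
                         new BufferedInputStream(new FileInputStream("data.bin"), 8192)) {
                    byte[] chunk = new byte[8192];
                    long total = 0;
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        total += n;  // stand-in for real processing
                    }
                    System.out.println("read " + total + " bytes");
                }
            }
        }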

    1. The Kafka cluster retains all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.

      Irrespective of whether a consumer has consumed a message, the message is kept in Kafka for the entire retention period.

      You can have two or more consumer groups:

      1 -> a real-time consumer group
      2 -> a backup consumer group
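
      A fragment of that fan-out (group names and topic are placeholders; bootstrap servers, deserializers, and subscribe calls as in any consumer setup): both groups get every record, each with its own offsets, for as long as retention keeps the data.

        import java.util.Properties;

        // Same topic, two group.ids => Kafka delivers every record to BOTH groups.
        Properties realtime = new Properties();
        realtime.put("group.id", "realtime-pipeline");  // processes records as they arrive

        Properties backup = new Properties();
        backup.put("group.id", "backup-pipeline");      // independently reads the same stream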

    2. replication factor N, we will tolerate up to N-1 server failures without losing any records

      Replication factor N means up to N-1 brokers can go down before we start losing data.

      So if you have a replication factor of 6 in an 11-node cluster, you will be fault-tolerant until 5 nodes go down. Beyond that point you are going to lose data for a particular partition.

    3. Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.

      Ordering is guaranteed (within a partition).

    4. Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.

      Kafka takes care of managing the consumer groups. Just create one consumer group per consuming application for each topic.

    1. The first limitation is that each partition is physically represented as a directory of one or more segment files. So you will have at least one directory and several files per partition. Depending on your operating system and filesystem this will eventually become painful. However this is a per-node limit and is easily avoided by just adding more total nodes in the cluster.

      The total number of topics supported depends on the total number of partitions per topic, because:

      partition = a directory of 1 or more segment files, and this file/directory count is a per-node limit (so adding nodes raises it).

    1. ($20*3)-($20*3*.1) = $54

      Rental price: $20/day (1% of the $2,000 camera cost) x 3 days = $60.

      Rental made = rental price - 10% commission = $60 - $6 = $54. This guy totally forgot taxes here... :)

      $54 for 3 days; 365 days a year at about 50% utilization is roughly 180 rental days:

      ($54 / 3 days) x 180 days = $3,240 per year

      So roughly $1,240 over the $2,000 purchase price in the first year if he's 50% utilized over the year (and pure margin after that).

      Cameras: man, this guy needed to crunch some more numbers. Cameras have compatibility issues...

    2. We should have built the absolute minimum and then manually maintained transactions through the backend if need be. To hell with formality. We were in the business of proving a hypothesis yet we acted as if it had already been proven.

      Build the quick-and-dirty solution and see if it sticks...

    1. For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.

      E.g., for a given topic there are 11 brokers/servers and the replication factor is 6. That means the topic will start losing data if more than 5 brokers go down.

    2. The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.

      The coolest feature: all you need to do is add new consumers to a consumer group to auto-scale consumption per topic.

    3. the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes.

      The per-partition offset is controlled by the consumer and tracked by Kafka. The offset is maintained so that if the consumer goes down nothing breaks: it resumes from where it left off.
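
      And because the position is consumer-controlled, you can also rewind or skip explicitly. A fragment, assuming an existing KafkaConsumer<String, String> consumer and a placeholder topic "events":

        import java.util.Collections;
        import org.apache.kafka.common.TopicPartition;

        TopicPartition tp = new TopicPartition("events", 0);  // partition 0 of the topic
        consumer.assign(Collections.singletonList(tp));       // take manual control of this partition
        consumer.seek(tp, 42L);  // jump to offset 42: re-consume old records or skip ahead at will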

    1. But ideally, a client should have to know a single URI only; everything else – individual URIs, as well as recipes for constructing them e.g. in case of queries – should be communicated via hypermedia, as links within resource representations.

      Something like http://www.example.com/user-ids/1234 makes each of the user ids a separate URI and hence addressable.
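
      A sketch of what "communicated via hypermedia" can look like (HAL-style JSON; the field names and the orders link are illustrative): the client starts from one known URI and discovers the per-user URIs from the representation.

        GET http://www.example.com/user-ids/1234

        {
          "id": 1234,
          "name": "Jane Doe",
          "_links": {
            "self":   { "href": "http://www.example.com/user-ids/1234" },
            "orders": { "href": "http://www.example.com/user-ids/1234/orders" }
          }
        }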

    2. REST simply means using HTTP to expose some application functionality. The fundamental and most important operation (strictly speaking, “verb” or “method” would be a better term) is an HTTP GET.

      Almost everybody is going to come into the RESTful world with this mindset. I started here, and I still fall back to this thought process most of the time when things start to hurt my head. :)

    3. HTTP fixes them at GET, PUT, POST and DELETE (primarily, at least), and casting all of your application semantics into just these four verbs takes some getting used to. But once you’ve done that, people start using a subset of what actually makes up REST – a sort of Web-based CRUD (Create, Read, Update, Delete) architecture. Applications that expose this anti-pattern are not really “unRESTful” (if there even is such a thing), they just fail to exploit another of REST’s core concepts: hypermedia as the engine of application state.

      This thought process is pretty hard to get used to initially, especially with distributed systems.