Examples
The problem statement is unclear!!
The output is the string formed by concatenating the maximum of each sliding window: the first window's maximum is 4, the second's is 5,
so the result is "45".
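The original input array isn't shown, so the sketch below uses a hypothetical input (`[4, 1, 5]` with window size 2) that reproduces the "45" output. It uses the standard monotonic-deque approach: the deque holds indices whose values are decreasing, so the front is always the current window's maximum.

```python
from collections import deque

def window_max_string(nums, k):
    """Concatenate the maximum of each size-k sliding window into one string."""
    dq = deque()  # indices; their values are decreasing from front to back
    out = []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:  # drop smaller values from the back
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:               # front index slid out of the window
            dq.popleft()
        if i >= k - 1:                   # window is full: front holds the max
            out.append(str(nums[dq[0]]))
    return "".join(out)

# Hypothetical input chosen to match the note's "45" result:
# windows [4,1] -> 4, [1,5] -> 5
print(window_max_string([4, 1, 5], 2))
```

Each index is pushed and popped at most once, so the whole pass is O(n) regardless of the window size.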
C-a
ex: 1, 3, 4, 2, 2
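The list above matches the classic Find the Duplicate Number example (`[1,3,4,2,2]` → 2). Assuming that is the problem these notes refer to, the same fast/slow technique applies: treat `nums[i]` as a "next pointer", and the duplicate value becomes the entry point of a cycle.

```python
def find_duplicate(nums):
    # Treat i -> nums[i] as a linked list; the duplicated value is the
    # node where the cycle begins (Floyd's cycle detection).
    slow = fast = nums[0]
    while True:                 # phase 1: find a meeting point inside the cycle
        slow = nums[slow]
        fast = nums[nums[fast]]
        if slow == fast:
            break
    slow = nums[0]              # phase 2: restart slow; they meet at the entry
    while slow != fast:
        slow = nums[slow]
        fast = nums[fast]
    return slow

print(find_duplicate([1, 3, 4, 2, 2]))  # → 2
```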
Identify if repeatedly computing the sum of squares of the digits of the number 19 results in 1.
If the process eventually reaches 1, the number is a happy number; otherwise it falls into an infinite cycle.
🔄 Every number has one of only two fates: happy or not happy.
💡 Idea: keep a slow pointer that moves one step at a time (one digit-square-sum computation) and a fast pointer that moves two steps at a time (two computations).
📌 Decision: if slow or fast reaches 1 first → it is a happy number ✅; if slow and fast meet at a value other than 1 → there is a cycle ❌ (not a happy number).
Finally, the result is processed using the slowPointer, which often points to a meaningful position, such as the middle of the structure or the start of a cycle.
Picture two runners going around a track: the faster one eventually laps the slower one if they are stuck on a loop.
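The steps above can be sketched as follows (function names are my own):

```python
def digit_square_sum(n):
    """One 'step': sum of the squares of n's digits, e.g. 19 -> 1^2 + 9^2 = 82."""
    total = 0
    while n:
        n, d = divmod(n, 10)
        total += d * d
    return total

def is_happy(n):
    # slow takes one step per iteration, fast takes two; if the sequence
    # loops without reaching 1, they must meet inside the cycle.
    slow, fast = n, digit_square_sum(n)
    while fast != 1 and slow != fast:
        slow = digit_square_sum(slow)
        fast = digit_square_sum(digit_square_sum(fast))
    return fast == 1

print(is_happy(19))  # → True  (19 -> 82 -> 68 -> 100 -> 1)
print(is_happy(2))   # → False (falls into the 4 -> 16 -> ... -> 4 cycle)
```

This uses O(1) extra space, unlike the alternative of remembering every visited value in a set.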
Redis as a Distributed Lock
Redis as a Cache
The choice of keys is important as these keys might be stored in separate nodes based on your infrastructure configuration. Effectively, the way you organize the keys will be the way you organize your data and scale your Redis cluster.
https://hackmd.io/5KWXRLhFRwWs2KWclYBcPg#Why-is-key-choice-important-in-Redis-clusters
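A minimal sketch of how Redis Cluster maps a key to one of its 16384 hash slots (CRC16, XMODEM variant, per the cluster specification); the key names are hypothetical. Keys that share a `{hash tag}` hash to the same slot, and therefore the same node, which is the usual trick for keeping related keys together.

```python
def crc16(data):
    """CRC-16/XMODEM, the checksum Redis Cluster uses for slot assignment."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    """Slot for a key; if the key contains a non-empty {tag}, only the tag is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1 : end]
    return crc16(key.encode()) % 16384

# Hypothetical key names: both hash only the tag "42", so they share a slot.
print(key_slot("user:{42}:profile") == key_slot("user:{42}:orders"))  # → True
```

Without the tag, `user:42:profile` and `user:42:orders` may land on different nodes, which rules out multi-key operations across them.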
Explanation
Given SQS's built-in support for retries and exponential backoff and the ease with which visibility timeouts can be configured, we'll use SQS for our system.
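One way a consumer implements that backoff: instead of deleting a failed message, it extends the message's visibility timeout by a capped exponential delay, so SQS redelivers it later. The helper below is a sketch with hypothetical base/cap parameters; in boto3 the computed value would be passed to `sqs.change_message_visibility`.

```python
import random

def retry_visibility_timeout(receive_count, base=2, cap=900):
    """Seconds to hide a message that has failed `receive_count` times.

    Capped exponential backoff (base, base*2, base*4, ... up to cap),
    plus up to 10% jitter so many failed messages don't retry in lockstep.
    """
    delay = min(cap, base * (2 ** (receive_count - 1)))
    return delay + random.uniform(0, delay * 0.1)

for attempt in range(1, 6):
    print(attempt, retry_visibility_timeout(attempt))
```

After the message's `maxReceiveCount` is exceeded, SQS's redrive policy can move it to a dead-letter queue instead of retrying forever.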
This architecture diagram depicts a web-crawler-and-parser pipeline built around Amazon SQS and Amazon S3. Below is a detailed walkthrough, together with a comparison of the Kafka and SQS characteristics mentioned above:
The system first resolves the target page's domain name via DNS.
Crawler - Fetch & Store Webpage
At the same time, it updates the URL Metadata, recording the URL's status and last-crawled time.
Parsing Worker
If new URLs are extracted during parsing, they are pushed back onto the Frontier Queue (SQS) for later crawling.
SQS Queues
In this architecture, crawling and parsing are task-oriented and need retry-on-failure and explicit completion acknowledgement, so SQS is the better fit. Kafka is powerful, but in a scenario that requires per-task completion acknowledgement, SQS's design is simpler and comes with built-in reliability mechanisms.
Setting acks=all ensures that the message is acknowledged only when all in-sync replicas have received it, giving maximum durability (in combination with the broker-side min.insync.replicas setting).
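A durability-focused producer configuration might look like the sketch below (confluent-kafka / librdkafka property names; the broker address is hypothetical):

```python
# Producer settings for maximum durability (a sketch, not a full config).
producer_config = {
    "bootstrap.servers": "broker-1:9092",  # hypothetical broker address
    "acks": "all",                # wait for all in-sync replicas to confirm
    "enable.idempotence": True,   # safe retries: no duplicates on resend
}
print(producer_config)
```

Note that acks=all alone is not enough: if the broker's min.insync.replicas is 1, a write can still be "fully acknowledged" by a single replica.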
Random partitioning with no key: If you don't provide a key, the Kafka client assigns partitions itself (round-robin in older clients, sticky partitioning in newer ones), which spreads messages evenly over time. The downside is that you lose per-key ordering guarantees. If ordering is not important to your design, then this is a good option.
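The keyed-vs-keyless trade-off can be illustrated with a toy partitioner. Kafka's real default partitioner hashes the key bytes with murmur2; the sketch below substitutes CRC32 purely to show the property that matters: equal keys always map to the same partition (preserving per-key order), while keyless messages rotate across partitions.

```python
import zlib

def pick_partition(key, num_partitions):
    """Toy stand-in for Kafka's default partitioner (NOT the real murmur2)."""
    if key is None:
        # No key: the client spreads messages itself; modeled as a counter.
        pick_partition.counter = getattr(pick_partition, "counter", -1) + 1
        return pick_partition.counter % num_partitions
    # Keyed: hashing the key pins all of its messages to one partition.
    return zlib.crc32(key) % num_partitions

# All messages for the same (hypothetical) order ID stay in order:
print(pick_partition(b"order-42", 6) == pick_partition(b"order-42", 6))  # → True
# Keyless messages rotate across partitions:
print(pick_partition(None, 6), pick_partition(None, 6))
```

Because a key pins all its messages to one partition, a "hot" key can also create a hot partition, which is the usual reason to go keyless when ordering doesn't matter.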
This should be the main focus of your scaling strategy in an interview and is the main decision you make when dealing with Kafka clusters (since much of the scaling happens dynamically in managed services nowadays).