5 Matching Annotations
  1. May 2014
    1. Triton Shared Computing Cluster (TSCC) is a new computational cluster for research computing available through UC San Diego's RCI program. Designed as a turnkey, high performance computing resource, it features flexible usage and business models and professional system administration. Unlike traditional clusters, TSCC is a collaborative system wherein the majority of nodes are purchased and shared by the cluster users, known as condo owners. In addition to the participant-contributed condo nodes, TSCC has a collection of hotel nodes which are available to condo owners and to other researchers on a rental basis. The condo and hotel configurations contain both standard two-socket nodes and GPU nodes. The hotel configuration also features eight 512GB large-memory nodes. The table below provides a brief technical summary of TSCC.

      SDSC Triton Shared Computing Cluster (TSCC) uses both condo and hotel terminology.

    1. Specifically, we explore three key usage modes (see Figure 1): • HPC in the Cloud, in which researchers outsource entire applications to current public and/or private Cloud platforms; • HPC plus Cloud, focused on exploring scenarios in which clouds can complement HPC/grid resources with cloud services to support science and engineering application workflows—for example, to support heterogeneous requirements or unexpected spikes in demand; and • HPC as a Service, focused on exposing HPC/grid resources using elastic on-demand cloud abstractions, aiming to combine the flexibility of cloud models with the performance of HPC systems.

      Three key usage modes for HPC & Cloud:

      • HPC in the Cloud
      • HPC plus Cloud
      • HPC as a Service
  2. Apr 2014
    1. Mike Olson of Cloudera is on record as predicting that Spark will be the replacement for Hadoop MapReduce. Just about everybody seems to agree, except perhaps for Hortonworks folks betting on the more limited and less mature Tez. Spark’s biggest technical advantages as a general data processing engine are probably:

      • The Directed Acyclic Graph processing model. (Any serious MapReduce-replacement contender will probably echo that aspect.)
      • A rich set of programming primitives in connection with that model.
      • Support also for highly-iterative processing, of the kind found in machine learning.
      • Flexible in-memory data structures, namely the RDDs (Resilient Distributed Datasets).
      • A clever approach to fault-tolerance.

      Spark's advantages:

      • DAG processing model
      • programming primitives for DAG model
      • highly-iterative processing suited for ML
      • RDD in-memory data structures
      • clever approach to fault-tolerance
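      The lazy-DAG and lineage ideas noted above can be sketched in plain Python. This is a toy model for illustration only: `ToyRDD` and its methods are invented here, not Spark's actual API. The point is that transformations merely build a lineage graph, nothing runs until an action forces evaluation, and replaying that lineage is the essence of Spark's fault-tolerance story.

```python
# Toy model of RDD-style lazy evaluation (illustrative, NOT Spark's API).
# Transformations build a lineage (a DAG of operations); nothing executes
# until an action such as collect() is called.

class ToyRDD:
    def __init__(self, data=None, parent=None, fn=None):
        self._data = data     # only set for source RDDs
        self.parent = parent  # lineage pointer (an edge in the DAG)
        self.fn = fn          # transformation to apply lazily

    def map(self, f):
        # Transformation: returns a new lineage node, computes nothing yet.
        return ToyRDD(parent=self, fn=lambda xs: [f(x) for x in xs])

    def filter(self, pred):
        return ToyRDD(parent=self, fn=lambda xs: [x for x in xs if pred(x)])

    def collect(self):
        # Action: walk the lineage back to the source, then replay each
        # transformation. Replaying lineage against a surviving copy of
        # the source data is how lineage-based fault tolerance works.
        if self.parent is None:
            return list(self._data)
        return self.fn(self.parent.collect())

rdd = ToyRDD(data=range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

      Note that building `rdd` does no work at all; only the final `collect()` call walks the DAG, which is what lets a real engine fuse and schedule the whole pipeline at once.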
    1. Clouds establish a different division of responsibilities between platform operators and users than has traditionally existed in computing infrastructure. In private clouds, where all participants belong to the same organization, this creates new barriers to effective communication and resource usage. In this paper, we present poncho, a tool that implements APIs that enable communication between cloud operators and their users, for the purposes of minimizing impact of administrative operations and load shedding on highly-utilized private clouds.

      Poncho: Enabling Smart Administration of Full Private Clouds


    2. One of the critical pieces of infrastructure provided by this system is a mechanism that can be used for load shedding, as well as a way to communicate with users when this action is required. As a building block, load shedding enables a whole host of more advanced resource management capabilities, like spot instances, advanced reservations, and fairshare scheduling.
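
      As a rough illustration of why load shedding is a building block for spot instances and similar features, the sketch below picks the lowest-priority instances to terminate when capacity must be freed. `select_victims`, the instance records, and the priority scheme are all hypothetical, not poncho's actual API.

```python
# Hypothetical load-shedding policy sketch (not poncho's real API):
# when the cloud is over capacity, shed the lowest-priority instances
# first -- the same mechanism a spot-instance tier relies on.

def select_victims(instances, cores_to_free):
    """Return ids of lowest-priority instances whose cores cover the deficit."""
    victims = []
    freed = 0
    # Stable sort: lowest priority (e.g. spot) instances are shed first.
    for inst in sorted(instances, key=lambda i: i["priority"]):
        if freed >= cores_to_free:
            break
        victims.append(inst["id"])
        freed += inst["cores"]
    return victims

instances = [
    {"id": "a", "cores": 4, "priority": 0},   # spot tier
    {"id": "b", "cores": 8, "priority": 10},  # on-demand tier
    {"id": "c", "cores": 2, "priority": 0},   # spot tier
]
print(select_victims(instances, cores_to_free=5))  # ['a', 'c']
```

      The user-facing half of such a system would then be a notification API telling owners of `a` and `c` why their instances were reclaimed, which is the communication gap the paper is addressing.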