69 Matching Annotations
  1. May 2020
    1. If your hosting provider does not support HTTPS, you have a few options: You can contact your web hosting provider and tell them you want a free HTTPS certificate through Let’s Encrypt. You’re probably not the only customer who wants HTTPS, so you can request that they offer Let’s Encrypt HTTPS certificates as a free part of their hosting package; email, their help desk system, or social media are all effective ways to make this ask. You can switch to a web hosting provider that offers full HTTPS support as part of its hosting package; find one by checking our list. Or, if you have SSH access to the server your website is hosted on, you might be able to use Certbot. You will need to know the software and system your server is running; after you confirm that information, you can use the dropdown menus above to generate specific instructions for running Certbot on your server through the command line.
    1. ABSTRACT: Let’s Encrypt is a free, open, and automated HTTPS certificate authority (CA) created to advance HTTPS adoption to the entire Web. Since its launch in late 2015, Let’s Encrypt has grown to become the world’s largest HTTPS CA, accounting for more currently valid certificates than all other browser-trusted CAs combined. By January 2019, it had issued over 538 million certificates for 223 million domain names. We describe how we built Let’s Encrypt, including the architecture of the CA software system (Boulder) and the structure of the organization that operates it (ISRG), and we discuss lessons learned from the experience. We also describe the design of ACME, the IETF-standard protocol we created to automate CA–server interactions and certificate issuance, and survey the diverse ecosystem of ACME clients, including Certbot, a software agent we created to automate HTTPS deployment. Finally, we measure Let’s Encrypt’s impact on the Web and the CA ecosystem. We hope that the success of Let’s Encrypt can provide a model for further enhancements to the Web PKI and for future Internet security infrastructure.
    1. public-benefit digital infrastructure projects, the first of which was the Let's Encrypt certificate authority. ISRG's founding directors were Josh Aas and Eric Rescorla. The group's founding sponsors and partners were Mozilla, the Electronic Frontier Foundation, the University of Michigan, Cisco, and Akamai.
    2. About Internet Security Research Group. Mission: Our mission is to reduce financial, technological, and educational barriers to secure communication over the Internet.
    1. HTTPS Everywhere HTTPS Everywhere is a Firefox, Chrome, and Opera extension that encrypts your communications with many major websites, making your browsing more secure. Encrypt the web: Install HTTPS Everywhere today.
    1. Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship. Certbot is the work of many authors, including a team of EFF staff and numerous open source contributors.
    2. What’s Certbot? Certbot is a free, open source software tool for automatically using Let’s Encrypt certificates on manually-administrated websites to enable HTTPS. Certbot is made by the Electronic Frontier Foundation (EFF), a 501(c)3 nonprofit based in San Francisco, CA, that defends digital privacy, free speech, and innovation.
    1. The objective of Let’s Encrypt and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention. This is accomplished by running a certificate management agent on the web server. To understand how the technology works, let’s walk through the process of setting up https://example.com/ with a certificate management agent that supports Let’s Encrypt. There are two steps to this process. First, the agent proves to the CA that the web server controls a domain. Then, the agent can request, renew, and revoke certificates for that domain.
Domain Validation
Let’s Encrypt identifies the server administrator by public key. The first time the agent software interacts with Let’s Encrypt, it generates a new key pair and proves to the Let’s Encrypt CA that the server controls one or more domains. This is similar to the traditional CA process of creating an account and adding domains to that account. To kick off the process, the agent asks the Let’s Encrypt CA what it needs to do in order to prove that it controls example.com. The Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of challenges. These are different ways that the agent can prove control of the domain. For example, the CA might give the agent a choice of either: provisioning a DNS record under example.com, or provisioning an HTTP resource under a well-known URI on http://example.com/. Along with the challenges, the Let’s Encrypt CA also provides a nonce that the agent must sign with its private key to prove that it controls the key pair. The agent software completes one of the provided sets of challenges. Let’s say it is able to accomplish the second task above: it creates a file on a specified path on the http://example.com site. The agent also signs the provided nonce with its private key.
Once the agent has completed these steps, it notifies the CA that it’s ready to complete validation. Then, it’s the CA’s job to check that the challenges have been satisfied. The CA verifies the signature on the nonce, and it attempts to download the file from the web server and make sure it has the expected content. If the signature over the nonce is valid, and the challenges check out, then the agent identified by the public key is authorized to do certificate management for example.com. We call the key pair the agent used an “authorized key pair” for example.com.
Certificate Issuance and Revocation
Once the agent has an authorized key pair, requesting, renewing, and revoking certificates is simple: just send certificate management messages and sign them with the authorized key pair. To obtain a certificate for the domain, the agent constructs a PKCS#10 Certificate Signing Request that asks the Let’s Encrypt CA to issue a certificate for example.com with a specified public key. As usual, the CSR includes a signature by the private key corresponding to the public key in the CSR. The agent also signs the whole CSR with the authorized key for example.com so that the Let’s Encrypt CA knows it’s authorized. When the Let’s Encrypt CA receives the request, it verifies both signatures. If everything looks good, it issues a certificate for example.com with the public key from the CSR and returns it to the agent.
Revocation works in a similar manner. The agent signs a revocation request with the key pair authorized for example.com, and the Let’s Encrypt CA verifies that the request is authorized. If so, it publishes revocation information into the normal revocation channels (i.e., OCSP), so that relying parties such as browsers know that they shouldn’t accept the revoked certificate.
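The HTTP-01 challenge described above boils down to publishing a "key authorization" string that ties the CA's token to the agent's account key. This is an illustrative sketch of that one step, not Certbot's or Boulder's implementation; the token and JWK values below are hypothetical placeholders, and a real agent would use its actual account key and sign its requests as specified by RFC 8555.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding throughout.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, jwk: dict) -> str:
    """Content of the HTTP-01 challenge file: "<token>.<thumbprint>", where
    the thumbprint is the RFC 7638 hash of the account public key (JWK):
    SHA-256 over the JSON serialization with sorted keys and no whitespace."""
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    return f"{token}.{thumbprint}"

# Hypothetical values, for illustration only.
token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ"                 # supplied by the CA
jwk = {"e": "AQAB", "kty": "RSA", "n": "not-a-real-modulus"}  # agent's public key

# The agent would serve this string at
# http://example.com/.well-known/acme-challenge/<token>
challenge_body = key_authorization(token, jwk)
```

The CA then fetches that URL and recomputes the same string from the token and the account key it has on file; a match proves the agent controls both the domain and the key.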
    1. Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. It is a service provided by the Internet Security Research Group (ISRG). We give people the digital certificates they need in order to enable HTTPS (SSL/TLS) for websites, for free, in the most user-friendly way we can. We do this because we want to create a more secure and privacy-respecting Web. You can read about our most recent year in review by downloading our annual report (Desktop, Mobile). The key principles behind Let’s Encrypt are: Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost. Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal. Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers. Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect. Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt. Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization. We have a page with more detailed information about how the Let’s Encrypt CA works.
  2. Apr 2020
    1. Legal Forms Library Virginia Legal Forms Welcome to the Virginia Legal Forms Library There are several ways to use this resource. Explore using the buttons below or search by Legal Form category or title in the search area above.
    1. Invite employees to see pay stubs and W-2s online. Learn how to set up QuickBooks Workforce and invite your employees to view and print their pay stubs and W-2s. Each time you run payroll, employees that are set up on QuickBooks Workforce will get an email letting them know they can view their pay stubs and W-2 online. Please note, your employees will only see their W-2s from the current tax-filing season.
    1. Python contributed examples. Mic VAD Streaming: this example demonstrates getting audio from a microphone, running voice activity detection, and then outputting text. Full source code available at https://github.com/mozilla/DeepSpeech-examples. VAD Transcriber: this example demonstrates VAD-based transcription with both console and graphical interfaces. Full source code available at https://github.com/mozilla/DeepSpeech-examples.
    1. Python API Usage example. Examples are from native_client/python/client.cc.
Creating a model instance and loading the model:
    ds = Model(args.model)
Performing inference:
    if args.extended:
        print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
    elif args.json:
        print(metadata_json_output(ds.sttWithMetadata(audio, 3)))
    else:
        print(ds.stt(audio))
Full source code
    1. DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier. NOTE: This documentation applies to the 0.7.0 version of DeepSpeech only. Documentation for all versions is published on deepspeech.readthedocs.io. To install and use DeepSpeech all you have to do is:
    # Create and activate a virtualenv
    virtualenv -p python3 $HOME/tmp/deepspeech-venv/
    source $HOME/tmp/deepspeech-venv/bin/activate
    # Install DeepSpeech
    pip3 install deepspeech
    # Download pre-trained English model files
    curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm
    curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer
    # Download example audio files
    curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz
    tar xvf audio-0.7.0.tar.gz
    # Transcribe an audio file
    deepspeech --model deepspeech-0.7.0-models.pbmm --scorer deepspeech-0.7.0-models.scorer --audio audio/2830-3980-0043.wav
A pre-trained English model is available for use and can be downloaded using the instructions below. A package with some example audio files is available for download in our release notes.
    1. Library for performing speech recognition, with support for several engines and APIs, online and offline. Speech recognition engine/API support: CMU Sphinx (works offline), Google Speech Recognition, Google Cloud Speech API, Wit.ai, Microsoft Bing Voice Recognition, Houndify API, IBM Speech to Text, and Snowboy Hotword Detection (works offline). Quickstart: pip install SpeechRecognition. See the “Installing” section for more details. To quickly try it out, run python -m speech_recognition after installing. Project links: PyPI, source code, issue tracker. Library Reference: the library reference documents every publicly accessible object in the library. This document is also included under reference/library-reference.rst. See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.
    1. Running the example code with Python. Run like this:
    cd vosk-api/python/example
    wget https://github.com/alphacep/kaldi-android-demo/releases/download/2020-01/alphacep-model-android-en-us-0.3.tar.gz
    tar xf alphacep-model-android-en-us-0.3.tar.gz
    mv alphacep-model-android-en-us-0.3 model-en
    python3 ./test_simple.py test.wav
To run with your own audio file, make sure it has the proper format: PCM, 16 kHz, 16-bit, mono; otherwise decoding will not work. You can find other examples covering microphone input, decoding with a fixed small vocabulary, and speaker identification setup in the python/example subfolder.
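Since decoding quietly fails on the wrong audio format, it helps to check a file before handing it to the recognizer. A minimal sketch using only the standard library's wave module; the helper name is ours, not part of Vosk, and the in-memory WAV is built purely to demonstrate the check:

```python
import io
import struct
import wave

def wav_ok(path_or_file) -> bool:
    """True if the WAV matches what the example expects:
    uncompressed PCM, 16 kHz, 16-bit (2-byte) samples, mono."""
    with wave.open(path_or_file, "rb") as w:
        return (w.getnchannels() == 1 and
                w.getsampwidth() == 2 and
                w.getframerate() == 16000 and
                w.getcomptype() == "NONE")

# Build a tiny conforming WAV in memory to demonstrate the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack("<h", 0) * 1600)   # 0.1 s of silence

buf.seek(0)
ok = wav_ok(buf)   # True for the file we just built
```

For files that fail the check, re-encoding (e.g. with ffmpeg or sox) to PCM 16 kHz 16-bit mono before decoding is the usual fix.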
    2. Vosk is a speech recognition toolkit. The best things in Vosk are: Supports 8 languages: English, German, French, Spanish, Portuguese, Chinese, Russian, and Vietnamese, with more to come. Works offline, even on lightweight devices: Raspberry Pi, Android, iOS. Installs with a simple pip3 install vosk. Portable per-language models are only 50 MB each, but there are much bigger server models available. Provides a streaming API for the best user experience (unlike popular speech recognition Python packages). There are bindings for different programming languages, too: Java, C#, JavaScript, etc. Allows quick reconfiguration of vocabulary for best accuracy. Supports speaker identification besides simple speech recognition.
    3. Kaldi API for offline speech recognition on Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
    1. Import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.
    import os
    import librosa  # for audio processing
    import IPython.display as ipd
    import matplotlib.pyplot as plt
    import numpy as np
    from scipy.io import wavfile  # for audio processing
    import warnings
    warnings.filterwarnings("ignore")
Data Exploration and Visualization: this helps us understand the data, as well as the pre-processing steps, in a better way.
    2. TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
    3. In the 1980s, the Hidden Markov Model (HMM) was applied to speech recognition systems. An HMM is a statistical model used to model problems that involve sequential information, and it has a pretty good track record in many real-world applications, including speech recognition. In 2001, Google introduced the Voice Search application, which allowed users to search for queries by speaking to the machine. This was the first voice-enabled application to become very popular among people, and it made conversation between people and machines a lot easier. By 2011, Apple had launched Siri, which offered a real-time, faster, and easier way to interact with Apple devices by just using your voice. As of now, Amazon's Alexa and Google Home are the most popular voice-command-based virtual assistants, widely used by consumers across the globe.
    4. Learn how to Build your own Speech-to-Text Model (using Python). Aravind Pai, July 15, 2019. Overview: Learn how to build your very own speech-to-text model using Python in this article. The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today. We will use a real-world dataset and build this speech-to-text model, so get ready to use your Python skills!
    1. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that: Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility). Supports both convolutional networks and recurrent networks, as well as combinations of the two. Runs seamlessly on CPU and GPU. Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
    1. One can imagine that this whole process may be computationally expensive. In many modern speech recognition systems, neural networks are used to simplify the speech signal using techniques for feature transformation and dimensionality reduction before HMM recognition. Voice activity detectors (VADs) are also used to reduce an audio signal to only the portions that are likely to contain speech. This prevents the recognizer from wasting time analyzing unnecessary parts of the signal.
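The VAD idea in the annotation above can be illustrated in a few lines: frame the signal on a short timescale and keep only high-energy frames. This is a deliberately crude energy-threshold sketch, not a production VAD (real systems such as the WebRTC VAD use trained models); the frame length, threshold, and synthetic signal are all our own choices for illustration.

```python
import math

def frames(signal, rate, frame_ms=10):
    """Split samples into non-overlapping frames of frame_ms milliseconds."""
    n = int(rate * frame_ms / 1000)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def is_speech(frame, threshold=0.01):
    # Crude energy-based decision: mean squared amplitude above a threshold.
    energy = sum(s * s for s in frame) / len(frame)
    return energy > threshold

# Synthetic test signal: one second of silence, then one second of a 440 Hz tone.
rate = 16000
silence = [0.0] * rate
tone = [0.5 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
signal = silence + tone

voiced = [f for f in frames(signal, rate) if is_speech(f)]
```

On this signal, exactly the tone half of the frames survives, so a downstream recognizer would only analyze the second half of the audio.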
    2. Most modern speech recognition systems rely on what is known as a Hidden Markov Model (HMM). This approach works on the assumption that a speech signal, when viewed on a short enough timescale (say, ten milliseconds), can be reasonably approximated as a stationary process—that is, a process in which statistical properties do not change over time.
    3. The first component of speech recognition is, of course, speech. Speech must be converted from physical sound to an electrical signal with a microphone, and then to digital data with an analog-to-digital converter. Once digitized, several models can be used to transcribe the audio to text.
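The digitization step described above (sampling plus quantization) can be sketched directly. The 16 kHz rate and 16-bit depth below are typical for speech audio but are our assumptions for this example, not requirements of any particular library:

```python
import math

RATE = 16000   # samples per second (sampling)
BITS = 16      # bits per sample (quantization)

def digitize(duration_s=0.01, freq=440.0):
    """Sample a pure tone at RATE and quantize each sample
    to a signed 16-bit integer, as an ADC would."""
    full_scale = 2 ** (BITS - 1) - 1   # 32767 for 16-bit audio
    n = int(RATE * duration_s)
    return [round(full_scale * math.sin(2 * math.pi * freq * t / RATE))
            for t in range(n)]

samples = digitize()   # 160 integer samples for 10 ms of audio
```

The resulting list of integers is exactly the kind of data a WAV file stores and a recognizer consumes.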
    4. How speech recognition works; what packages are available on PyPI; and how to install and use the SpeechRecognition package, a full-featured and easy-to-use Python speech recognition library.
    5. The Ultimate Guide To Speech Recognition With Python
    1. Apache Stanbol's main features are: Content Enhancement Services that add semantic information to “non-semantic” pieces of content. Reasoning Services that are able to retrieve additional semantic information about the content, based on the semantic information retrieved via content enhancement. Knowledge Models Services that are used to define and manipulate the data models (e.g. ontologies) used to store the semantic information. Persistence Services that store (or cache) semantic information (i.e., enhanced content, entities, and facts) and make it searchable.
    2. direct usage from web applications (e.g. for tag extraction/suggestion; or text completion in search fields), 'smart' content workflows or email routing based on extracted entities, topics, etc.
    3. Apache Stanbol provides a set of reusable components for semantic content management.
    1. OpenNLP supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, language detection and coreference resolution. Find out more about it in our manual.
    1. Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit Steven Bird, Ewan Klein, and Edward Loper
    1. How to set up and use Stanford CoreNLP Server with Python. Khalid Alnajjar, August 20, 2017, Natural Language Processing (NLP). Stanford CoreNLP is a great Natural Language Processing (NLP) tool for analysing text. Given a paragraph, CoreNLP splits it into sentences, then analyses it to return the base forms of words in the sentences, their dependencies, parts of speech, named entities, and much more. Stanford CoreNLP supports not only English but also five other languages: Arabic, Chinese, French, German, and Spanish. To try out Stanford CoreNLP, click here. Stanford CoreNLP is implemented in Java. In some cases (e.g. your main code base is written in a different language, or you simply do not feel like coding in Java), you can set up a Stanford CoreNLP Server and then access it through an API. In this post, I will show how to set up a Stanford CoreNLP Server locally and access it using Python.
    1. Personal VPN to Bypass Internet Censorship, VPN Blocking and Bandwidth Throttling. Khalid Alnajjar, April 8, 2018, Security. Having a VPN (Virtual Private Network) is essential nowadays for many reasons, such as accessing content restricted by your ISP or government, bypassing geographically restricted content, protecting your privacy, and so on. In an earlier post, I reviewed the top three VPN providers. If you are looking for a secure and affordable VPN provider, Private Internet Access is an excellent option, as they respect your privacy while offering the service for a low price. If privacy is not your primary concern, check out my review of the top three VPN providers.
    1. CoreNLP includes a simple web API server for servicing your human language understanding needs (starting with version 3.6.0). This page describes how to set it up. CoreNLP server provides both a convenient graphical way to interface with your installation of CoreNLP and an API with which to call CoreNLP using any programming language. If you’re writing a new wrapper of CoreNLP for using it in another language, you’re advised to do it using the CoreNLP Server.
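Calling the server from Python needs only the standard library: CoreNLP's web API takes the text as the POST body and the annotator configuration as a JSON properties query parameter. A sketch, assuming a server already running on localhost:9000 (the default port) and an annotator list chosen for illustration:

```python
import json
from urllib import parse, request

def corenlp_request(text: str,
                    annotators: str = "tokenize,ssplit,pos",
                    host: str = "http://localhost:9000") -> request.Request:
    """Build a POST request for a locally running CoreNLP server.
    The host, port, and annotator list are assumptions; adjust to your setup."""
    props = json.dumps({"annotators": annotators, "outputFormat": "json"})
    url = host + "/?properties=" + parse.quote(props)
    return request.Request(url, data=text.encode("utf-8"), method="POST")

req = corenlp_request("Stanford CoreNLP is great.")
# With a server running, the annotations come back as JSON:
#   with request.urlopen(req) as resp:
#       result = json.load(resp)
```

The same request shape works from any language with an HTTP client, which is why the server is the recommended integration point for non-Java wrappers.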
    1. Programming languages and operating systems. Stanford CoreNLP is written in Java; recent releases require Java 1.8+. You need to have Java installed to run CoreNLP. However, you can interact with CoreNLP via the command line or its web service; many people use CoreNLP while writing their own code in JavaScript, Python, or some other language. You can use Stanford CoreNLP from the command line, via its original Java programmatic API, via the object-oriented simple API, via third-party APIs for most major modern programming languages, or via a web service. It works on Linux, macOS, and Windows. License: The full Stanford CoreNLP is licensed under the GNU General Public License v3 or later. More precisely, all the Stanford NLP code is GPL v2+, but CoreNLP uses some Apache-licensed libraries, and so our understanding is that the composite is correctly licensed as v3+.
    2. Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, get the quotes people said, etc. Choose Stanford CoreNLP if you need: An integrated NLP toolkit with a broad range of grammatical analysis tools A fast, robust annotator for arbitrary texts, widely used in production A modern, regularly updated package, with the overall highest quality text analytics Support for a number of major (human) languages Available APIs for most major modern programming languages Ability to run as a simple web service
    1. Installation in Windows. Compatibility: > OpenCV 2.0. Author: Bernát Gábor. You will learn how to set up OpenCV in your Windows operating system!
    2. Here you can read tutorials about how to set up your computer to work with the OpenCV library. Additionally, you can find very basic sample source code to introduce you to the world of OpenCV. Installation in Linux. Compatibility: > OpenCV 2.0
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, and so on. OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 18 million. The library is used extensively by companies, research groups, and governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota that employ the library, there are many startups, such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detecting swimming pool drowning accidents in Europe, running interactive art in Spain and New York, and checking runways for debris in Turkey, to inspecting labels on products in factories around the world and rapid face detection in Japan.
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. Beginner-friendly platforms like Shopify and Etsy do a fine job, but many businesses outgrow them once sales take off. Take a look at your own business: as it expands, you might want functionality, flexibility, and design options beyond what a one-size-fits-all service can offer. Open platforms tend to be easier to scale up or down as business demands. One year, you might find that you need a robust inventory management system. A few years later, you might decide to scale down and specialize, and find you’re stuck paying for more tools and services than you need. Open source platforms can grow as a business does, as long as the developers building out the system understand the importance of having the correct hosting setup.
    1. What's the difference between domain forwarding and masking? Domain forwarding (sometimes called connecting, pointing or redirecting) lets you automatically direct your domain's visitors to a different location on the web. If your domain is registered with GoDaddy and you use our nameservers, you can forward your GoDaddy domain to a site you've created with Wix, WordPress or any other URL. Domain forwarding has two options: forwarding only and forwarding with masking. Both options will redirect your visitors, but forwarding with masking has additional features you can use.
Forwarding only: redirects visitors to a destination URL of your choosing, and shows the destination URL in the browser address bar. Example: assign coolexample.com to forward only to coolwebsite.net. When a visitor types coolexample.com in a browser address bar, they will be redirected to the site for coolwebsite.net, and the browser address bar will update to show coolwebsite.net.
Forwarding with masking: redirects visitors to a destination URL of your choosing, keeps your domain name in the browser address bar, and allows you to enter meta tags for search engine information. Example: assign coolexample.com to forward with masking to coolwebsite.net. When a visitor types coolexample.com in a browser address bar, they will be redirected to the site for coolwebsite.net, but the browser address bar will continue to show coolexample.com, effectively masking the destination URL.
Related step: Is domain forwarding right for you? Set up forwarding in your GoDaddy account.
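Mechanically, "forwarding only" is just an HTTP 301 redirect, which can be sketched with Python's built-in http.server. The domain names are the example ones from the annotation; this illustrates the mechanism only, not how GoDaddy implements it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://coolwebsite.net"   # destination from the example above

class ForwardOnly(BaseHTTPRequestHandler):
    def do_GET(self):
        # A 301 sends the browser to the destination; the address bar
        # then updates to show coolwebsite.net, as described above.
        self.send_response(301)
        self.send_header("Location", TARGET + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass   # keep the demo quiet

# To serve the redirect for a whole domain:
#   HTTPServer(("", 80), ForwardOnly).serve_forever()
```

Forwarding with masking works differently: the original domain keeps serving a page of its own that embeds the destination site, so the address bar never changes.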
    1. Forwarding with masking Hi @jacob2071. Thanks for being part of GoDaddy Community! Our forward masking loads the target website within an iframe. Since the domain that the forwarding is on does not have an SSL certificate, it will not work with HTTPS, even though the target site might have an SSL Certificate. I hope that helps.    JesseW - GoDaddy | Community Manager | 24/7 support available at x.co/247support | Remember to choose a solution and give kudos
    1. Connect a subdomain. A subdomain is a subset of your root domain at the beginning of a URL. For example, in the URL shop.johnsapparel.com, shop is the subdomain. The most popular subdomain is www. You can use subdomains to organize your website and make it easier for visitors to find the information that they're looking for. To connect a subdomain, click Online Store > Domains > Connect existing domain and then follow the steps. If you're using a third-party domain, then you need your CNAME records to point to shops.myshopify.com. The name of the CNAME record should match the subdomain that you're adding.
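The CNAME requirement above amounts to a two-part check: the record's name must match the subdomain you're adding, and its target must be shops.myshopify.com. A small sketch; the records dict stands in for whatever your DNS panel shows, and the helper name is ours, not part of Shopify:

```python
SHOPIFY_TARGET = "shops.myshopify.com"

def cname_ok(subdomain: str, records: dict) -> bool:
    """records maps record name -> (type, target), as listed in a DNS panel."""
    rtype, target = records.get(subdomain, (None, None))
    return rtype == "CNAME" and target == SHOPIFY_TARGET

# Illustrative zone data: "shop" is set up correctly, "www" is not a CNAME.
records = {
    "shop": ("CNAME", "shops.myshopify.com"),
    "www": ("A", "203.0.113.7"),
}
```

Here cname_ok("shop", records) passes while cname_ok("www", records) fails, matching the rule that only correctly pointed CNAMEs will connect.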
    2. Add an existing domain to your Shopify store
    1. Spikes. Spikes are a type of exploration Enabler Story in SAFe. Defined initially in Extreme Programming (XP), they represent activities such as research, design, investigation, exploration, and prototyping. Their purpose is to gain the knowledge necessary to reduce the risk of a technical approach, better understand a requirement, or increase the reliability of a story estimate. Like other stories, spikes are estimated and then demonstrated at the end of the Iteration. They also provide an agreed-upon protocol and workflow that Agile Release Trains (ARTs) use to help determine the viability of Epics. Details: Agile and Lean value facts over speculation. When faced with a question, risk, or uncertainty, Agile Teams conduct small experiments before moving to implementation, rather than speculating about the outcome or jumping to a Solution. Teams may use spikes in a variety of situations: to estimate new Features and Capabilities, analyzing the implied behavior and providing insight about the approach for splitting them into smaller, quantifiable pieces; to perform feasibility analysis and other activities that help determine the viability of epics; to conduct basic research to familiarize themselves with a new technology or domain; and to gain confidence in a technical or functional approach, reducing risk and uncertainty. Spikes involve creating a small program, research activity, or test that demonstrates some aspect of new functionality. Technical and Functional Spikes: spikes primarily come in two forms, technical and functional. Functional spikes are used to analyze overall solution behavior and determine: how to break it down, how to organize the work, where risk and complexity exist, and how to use insights to influence implementation decisions. Technical spikes are used to research various approaches in the solution domain.
For example: determine a build-versus-buy decision; evaluate the potential performance or load impact of a new user story; evaluate specific technical implementation approaches; or develop confidence about the desired solution path. Some features and user stories may require both types of spikes. Here’s an example: “As a consumer, I want to see my daily energy use in a histogram so that I can quickly understand my past, current, and projected energy consumption.” In this case, a team might create both types of spikes: a technical spike to research how long it takes to update a customer display to current usage, determining communication requirements, bandwidth, and whether to push or pull the data; and a functional spike to prototype a histogram in the web portal and get some user feedback on presentation size, style, and charting. Guidelines for Spikes: since spikes do not directly deliver user value, use them sparingly. The following guidelines apply. Quantifiable, Demonstrable, and Acceptable: like other stories, spikes are put in the Team Backlog, estimated, and sized to fit in an iteration. Spike results are different from a story because spikes typically produce information rather than working code. A spike should produce only the data necessary to confidently identify and size the stories that drive it. The output of a spike is demonstrable, both to the team and to any other stakeholders, which brings visibility to the research and architectural efforts and also helps build collective ownership and shared responsibility for decision-making. The Product Owner accepts spikes that have been demoed and meet their acceptance criteria. Timing of Spikes: since they represent uncertainty in one or more potential stories, planning for both the spike and the resulting stories in the same iteration is sometimes risky. However, if the spike is small and straightforward, and a quick solution is likely to be found, then it can be quite efficient to do both in the same iteration.
The Exception, Not the Rule

Every user story has uncertainty and risk; that's the nature of Agile development. The team discovers the right solution through discussion, collaboration, experimentation, and negotiation. Thus, in one sense, every user story contains spike-like activities to identify the technical and functional risks. The goal of an Agile team is to learn how to address uncertainty in each iteration. Spikes are critical when high uncertainty exists or there are many unknowns.
    1. How to Handle Unbillable Labor in QuickBooks by Steve McDonnell

In a service business, employees often track the amount of time they spend on each customer. Some of that time might be billable and some might be unbillable -- if you have to perform rework or fix an error, for example, that time is unbillable. Employees also have unbillable time for company meetings and other activities. To handle unbilled labor, set up codes to use when you enter employee time sheets. Enter both billable and unbillable hours for each employee. Generate reports to monitor how much unbillable labor you have and how the time is being spent to help improve your productivity.
    1. Enter a single time activity timesheet

Learn how to enter a single timesheet in QuickBooks Online. This is useful when entering or editing a single day or event at a time. If you need to enter a weekly timesheet, get more help here.

Note: Timesheets only allow a single hourly rate. If you need to enter multiple hourly rates, you can sign up for a QBO Payroll subscription.

What to do:

1. Select + New.
2. Select Single time activity.
3. Enter the date the activity occurred in the Date field. Note: The current date is entered automatically, but you can change it if necessary.
4. From the dropdown ▼, select the name of the employee or vendor.
5. For each type of activity, enter an activity line:
   - Choose a customer from the dropdown ▼ if you want to bill the activity to the customer or track expenses for the customer.
   - Complete the following optional fields. Note: If you don't see the fields, they are turned off. You can turn them on in Account and Settings.
     - Service: If you use services to enter time, choose a service that represents the activity.
     - Class
     - Location
   - Enter a description of the activity. Note: If the activity is billed to a customer, the description appears on their invoice, depending on your company settings. If you select an item from the optional Service field, text for the description appears automatically.
   - Select the Billable checkbox if you want to bill the activity to the customer. Note: To turn on this option, go to Settings ⚙, then select the Make time activities billable checkbox.
   - Enter a rate per hour.
   - Select Taxable if applicable.
   - Enter the number of hours and minutes worked in the Time field. Note: Select the Enter start and end times checkbox to instead record when work started and ended and the amount of time taken for Break.
6. Select Save.
    1. Chart of accounts numbering involves setting up the structure of the accounts to be used, as well as assigning specific codes to the different general ledger accounts. The numbering system used is critical to the ways in which financial information is stored and manipulated.

The first type of numbering to determine for a chart of accounts is its structure. This is the layout of an account number, and involves the following components:

- Division code - Typically a two-digit code that identifies a specific company division within a multi-division company. It is not used by a single-entity company. The code can be expanded to three digits if there are more than 99 subsidiaries.
- Department code - Usually a two-digit code that identifies a specific department within a company, such as the accounting, engineering, or production departments.
- Account code - Usually a three-digit code that describes the account itself, such as fixed assets, revenue, or supplies expense.

For example, a multi-division company with several departments in each division would probably use chart of accounts numbering in this manner: xx-xx-xxx. As another example, a single-division company with multiple departments could dispense with the first two digits and instead use the following numbering scheme: xx-xxx. As a final example, a smaller business with no departments at all could just use the three-digit code assigned to its accounts: xxx.

Once the coding structure is set, the numbering of accounts can take place. This is the three-digit coding referred to previously. A company can use any numbering system that it wants; there is no mandated approach.
However, a common coding scheme is as follows:

- Assets - Account codes 100-199
- Liabilities - 200-299
- Equity accounts - 300-399
- Revenues - 400-499
- Expenses - 500-599

As a complete example of the preceding outline of numbering, a parent company assigns the "03" designator to one of its subsidiaries, the "07" designator to the engineering department, and "550" to the travel and entertainment expense. This results in the following chart of accounts number: 03-07-550
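The structure described above is mechanical enough to sketch in a few lines of code. This is an illustration only, not from the article; the function name and zero-padding convention are my own:

```python
# Sketch: composing an xx-xx-xxx chart of accounts number from its
# division, department, and account components, zero-padded.

def account_number(division: int, department: int, account: int) -> str:
    """Build a division-department-account code in xx-xx-xxx form."""
    return f"{division:02d}-{department:02d}-{account:03d}"

# Subsidiary "03", engineering department "07", travel & entertainment "550":
print(account_number(3, 7, 550))  # prints 03-07-550
```

A single-division company would simply drop the first component (xx-xxx), and a small business with no departments would use the bare three-digit account code.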
    1. Private registry with AWS Secrets Manager sample for CodeBuild

This sample shows you how to use a Docker image that is stored in a private registry as your AWS CodeBuild runtime environment. The credentials for the private registry are stored in AWS Secrets Manager. Any private registry works with CodeBuild. This sample uses Docker Hub.
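As a hedged sketch of how this wires together (the image name, ARNs, and project name below are placeholders, not values from the sample): a CodeBuild project's environment can reference a Secrets Manager secret via `registryCredential`, which you would pass to the CreateProject API (e.g. via boto3's `codebuild.create_project`):

```python
# Sketch: the "environment" block for a CodeBuild project that pulls a
# private Docker Hub image using credentials stored in Secrets Manager.
# All names and ARNs are placeholders.
registry_environment = {
    "type": "LINUX_CONTAINER",
    "computeType": "BUILD_GENERAL1_SMALL",
    # Private image in Docker Hub (placeholder name):
    "image": "my-dockerhub-user/private-image:latest",
    # Pull the image with the project's service role:
    "imagePullCredentialsType": "SERVICE_ROLE",
    # Secrets Manager secret holding the registry username/password:
    "registryCredential": {
        "credential": "arn:aws:secretsmanager:us-east-1:123456789012:secret:dockerhub-creds",
        "credentialProvider": "SECRETS_MANAGER",
    },
}

# Then, with AWS credentials configured:
#   import boto3
#   boto3.client("codebuild").create_project(..., environment=registry_environment)
```

The service role referenced by the project also needs permission to read that secret.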
    1. Why Use Lambda Functions?

The power of lambda is better shown when you use it as an anonymous function inside another function. Say you have a function definition that takes one argument, and that argument will be multiplied by an unknown number:

def myfunc(n):
    return lambda a: a * n

Use that function definition to make a function that always doubles the number you send in:
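Completing the example the excerpt leads into, `myfunc` returns a lambda that multiplies its argument by `n`, so calling it with 2 yields a doubler (and the same definition can be reused for other multipliers):

```python
def myfunc(n):
    return lambda a: a * n

mydouble = myfunc(2)   # a function that always doubles its argument
mytriple = myfunc(3)   # the same definition reused to triple

print(mydouble(11))  # 22
print(mytriple(11))  # 33
```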
    1. According to a study by LIVESTRONG.COM, which looked at the 40 foods Americans eat most often by tracking four years of data from millions of MyPlate app users, apples come in as the fourth most popular food — with Gala, Fujis and Granny Smiths as fan favorites. As apples are part of the Environmental Working Group's 'Dirty Dozen' for produce grown with the highest concentration of pesticides, we strongly recommend that you purchase ones labeled USDA Organic.
    1. Bottlerocket OS

Welcome to Bottlerocket! Bottlerocket is a free and open-source Linux-based operating system meant for hosting containers. Bottlerocket is currently in a developer preview phase and we're looking for your feedback. If you're ready to jump right in, read our QUICKSTART to try Bottlerocket in an Amazon EKS cluster.

Bottlerocket focuses on security and maintainability, providing a reliable, consistent, and safe platform for container-based workloads. This is a reflection of what we've learned building operating systems and services at Amazon. You can read more about what drives us in our charter.

The base operating system has just what you need to run containers reliably, and is built with standard open-source components. Bottlerocket-specific additions focus on reliable updates and on the API. Instead of making configuration changes manually, you can change settings with an API call, and these changes are automatically migrated through updates. Some notable features include:

- API access for configuring your system, with secure out-of-band access methods when you need them.
- Updates based on partition flips, for fast and reliable system updates.
- Modeled configuration that's automatically migrated through updates.
- Security as a top priority.
  3. Feb 2020
    1. kubernetes service external ip pending

      On minikube, a Service manifest with `type: LoadBalancer` can leave EXTERNAL-IP stuck in "pending", because there is no cloud load balancer to provision. Expose the Service as a NodePort instead (or open it with `minikube service <name>`).
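A minimal NodePort manifest, as a sketch (the service name, selector, and ports are placeholders for your own app):

```yaml
# Sketch: expose a Deployment's pods on a node port instead of a LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80        # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # port opened on the node (30000-32767 by default)
```

After `kubectl apply -f service.yaml`, `minikube service my-app --url` prints a reachable URL.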

    1. A remote repository serves as a caching proxy for a repository managed at a remote URL (which may itself be another Artifactory remote repository).  Artifacts are stored and updated in remote repositories according to various configuration parameters that control the caching and proxying behavior. You can remove artifacts from a remote repository cache but you cannot actually deploy a new artifact into a remote repository.

      Typically you connect to the central Maven repository.
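As a hedged illustration of such a caching proxy (the repository key is a placeholder, and the fields follow Artifactory's repository configuration JSON as I understand it), a remote repository fronting Maven Central could be created with `PUT /api/repositories/maven-central-remote` and a body like:

```json
{
  "key": "maven-central-remote",
  "rclass": "remote",
  "packageType": "maven",
  "url": "https://repo.maven.apache.org/maven2/",
  "repoLayoutRef": "maven-2-default"
}
```

Builds then resolve through `maven-central-remote`, and Artifactory caches each fetched artifact locally per the configured caching parameters.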

  4. Jan 2020
  5. Apr 2019
    1. From urban ancient Greece to agrarian societies, work was either something to be outsourced to others – often slaves – or something to be done as quickly as possible so that the rest of life could happen.
    2. For some of these writers, this future must include a universal basic income (UBI) – currently post-work’s most high-profile and controversial idea – paid by the state to every working-age person, so that they can survive when the great automation comes. For others, the debate about the affordability and morality of a UBI is a distraction from even bigger issues.

      Universal basic income looks like an appealing idea for innovators, who could use that freedom for the good of all: they wouldn't have to work for a living just to cover basic needs (food, clothing, shelter).

    3. In 1845, Karl Marx wrote that in a communist society workers would be freed from the monotony of a single draining job to “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner”. In 1884, the socialist William Morris proposed that in “beautiful” factories of the future, surrounded by gardens for relaxation, employees should work only “four hours a day”.

      The capitalist elements within some communist economies gave those countries an economic boost and gave citizens willing to work hard a path to better status. Under a purely communist system, a citizen often has no incentive to work hard; in other words, a citizen who is willing to work hard for a better life gains no advantage. The capitalist-communist hybrid approach let those motivated citizens earn a better life if they chose to.

    4. And finally, beyond all these dysfunctions, loom the most-discussed, most existential threats to work as we know it: automation, and the state of the environment. Some recent estimates suggest that between a third and a half of all jobs could be taken over by artificial intelligence in the next two decades.
    5. Work is badly distributed. People have too much, or too little, or both in the same month. And away from our unpredictable, all-consuming workplaces, vital human activities are increasingly neglected. Workers lack the time or energy to raise children attentively, or to look after elderly relations. “The crisis of work is also a crisis of home,” declared the social theorists Helen Hester and Nick Srnicek in a paper last year. This neglect will only get worse as the population grows and ages.
    6. Unsurprisingly, work is increasingly regarded as bad for your health: “Stress … an overwhelming ‘to-do’ list … [and] long hours sitting at a desk,” the Cass Business School professor Peter Fleming notes in his new book, The Death of Homo Economicus, are beginning to be seen by medical authorities as akin to smoking.