276 Matching Annotations
  1. Last 7 days
    1. Whisk in 1 Tbs. butter; when the butter melts, add another piece. Continue adding butter pieces until you've added 1 cup (two sticks) total. Do not let the butter come to a boil or it will separate. Try to keep the butter between 160 and 175 degrees F; use an instant-read thermometer to keep it under 180 F.

      This is also a tasty way to cook lobster. Add garlic to the mix and make sure to add enough salt to the butter.

    1. Make text substitutions in response bodies, using both regular expressions and fixed strings, in this filter module.

      We need to use this. It's under NGINX Plus though, so does that mean we have to pay for it, or that it doesn't work with regular nginx?

    1. nginx_substitutions_filter is a filter module which can do both regular expression and fixed string substitutions on response bodies. This module is quite different from Nginx's native Substitution Module.

      We might need to switch to this if we want to do replacements with regexes.

    1. This is a "protocol-relative" link. It uses http or https depending on what was used to load the current page.

      So src="//…" means use the same scheme as the current page.

  2. Nov 2019
    1. http {
           proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
           server {
               location / {
                   proxy_pass;
                   proxy_set_header Host $host;
                   proxy_buffering on;
                   proxy_cache STATIC;
                   proxy_cache_valid 200 1d;
                   proxy_cache_use_stale error timeout invalid_header updating
                                         http_500 http_502 http_503 http_504;
               }
           }
       }

      example proxy_cache config.

    1. Default: proxy_cache off;

      So sounds like proxy caching is off by default.

    2. Default: proxy_cache_lock off;

      hmm...Ok this is off by default so probably not a bottleneck in that case.

    3. Default: proxy_cache_lock_age 5s;

      Again, seems like this should be a smaller timeout.

    4. Default: proxy_cache_lock_timeout 5s;

      This is a very long time! We should definitely shorten this. I actually wonder if this is perhaps why we are facing a bottleneck when we see a bunch of requests at once. When I performance tested, I ran without caching so it's possible the caching is actually bottlenecking us.
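
      If we do shorten it, a minimal sketch (the 1s value is a placeholder to tune, not a recommendation from the docs):

```nginx
proxy_cache_lock on;             # collapse concurrent MISSes for the same key
proxy_cache_lock_timeout 1s;     # give up waiting on the lock sooner (default 5s)
```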

    5. Sets an offset in bytes for byte-range requests. If the range is beyond the offset, the range request will be passed to the proxied server and the response will not be cached.

      I don't see a need for this right now as it's not the range requests that are really the perceived slowness but it might be worth looking at later.

    6. “GET” and “HEAD” methods are always added to the list, though it is recommended to specify them explicitly.

      I wonder why?

    7. If the value is set to off, temporary files will be put directly in the cache directory.

      So use_temp_path should probably be set to off. I don't see a reason why we would need to first write them to a different directory.
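
      If we do that, it's presumably just the use_temp_path parameter on proxy_cache_path (path and zone name copied from the example above, so treat them as placeholders):

```nginx
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m use_temp_path=off;
```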

    8. Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness. By default, inactive is set to 10 minutes.

      This default seems like it should be set to at least an hour, probably more. Cloudflare is set at 4 hours, which seems perfectly reasonable to me.

    9. The special “cache manager” process monitors the maximum cache size set by the max_size parameter. When this size is exceeded, it removes the least recently used data.

      So we might need to increase this?

    10. Cache data are stored in files.

      So it caches them on the file system by way of storing them in a temp file and renaming it. The default here says --. I thought nginx cached things by default, so does this mean that's not the case, or maybe it doesn't save them? Confused; need to investigate more.

    11. Default: proxy_cache_min_uses 1;

      This seems right for us.

    12. The levels parameter defines hierarchy levels of a cache: from 1 to 3

      Basically the same concept as a hardware cache with 1 I assume being the first cache that will be checked.

    13. each level accepts values 1 or 2

      I don't understand what this means.
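
      Per the nginx docs, levels sets the length of each cache subdirectory name (up to three levels, each 1 or 2 characters of the cache key's MD5), not a lookup hierarchy. The docs' own example:

```nginx
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;
# a response whose key hashes to ...b7f54b2df7773722d382f4809d65029c
# is stored at /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
# (last 1 char -> level-1 dir, next 2 chars -> level-2 dir)
```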

  3. Oct 2019
    1. Really useful page for generating regexes of IP ranges. Note they are missing some parentheses in places though.

    1. location / {
           sub_filter '<a href="'  '<a href="https://$host/';
           sub_filter '<img src="' '<img src="https://$host/';
           sub_filter_once on;
       }

      How to replace strings in an html response on-the-fly.

      Note the response must not be compressed for this to work (sub_filter can't match inside gzipped bodies).
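
      A common companion trick is to blank Accept-Encoding so the upstream replies uncompressed and sub_filter can match (a sketch; the upstream name is made up):

```nginx
location / {
    proxy_set_header Accept-Encoding "";   # ask upstream for an uncompressed body
    proxy_pass http://backend;             # hypothetical upstream
    sub_filter '<a href="' '<a href="https://$host/';
    sub_filter_once on;
}
```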

    1. # request will be sent to backend without uri changed
       # to '/' due to if
       location /proxy-pass-uri {
           proxy_pass;
           set $true 1;
           if ($true) {
               # nothing
           }
       }

      This is a weird one. I guess as long as you aren't using $uri it has no impact though.

    1. Is anyone aware of a lua http lib that supports keepalive?

      When sending a request you can pass the following keepalive settings which will keep the connection open:

      local http = require "resty.http"
      local httpc = http.new()
      httpc:connect("", 9081)
      local response, err = httpc:request({
        path = "/proxy" .. ngx.var.request_uri, 
        method = "HEAD",
        headers = ngx.req.get_headers(),
        keepalive_timeout = 60,
        keepalive_pool = 10,
      })
    1. Also, it's always a good idea to add ipv6=off to the resolver directive when your dns server may return IPv6 addresses and your network does not support it.

      This might help.
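
      Presumably just (the resolver address is a placeholder):

```nginx
resolver 8.8.8.8 ipv6=off;
```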

    1. Another difference between $uri and $request_uri in proxy_cache_key is that $request_uri will include the anchor tags part, but $uri$is_args$args will ignore it. Do a curl operation: curl -I static.io/hello.htm

      This could be a problem. We'll probably have to fix this when we move via2 outside lms.
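
      If we want query strings to participate in cache hits while fragments are ignored, the commonly shown key is:

```nginx
proxy_cache_key $scheme$proxy_host$uri$is_args$args;
```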

    1. The proxy_buffering option tells NGINX to pass the response directly back to the client. Otherwise, it will try to buffer it in memory or on disk. I recommend this if the upstream response can be large.

      We may want to not buffer on pdf endpoints.

    1. set $stripped_cookie $http_cookie;
       if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
           set $stripped_cookie $1$2;
       }
       if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
           set $stripped_cookie $1$2;
       }
    1. Indicate number of NA values placed in non-numeric columns.

      This is only true when using the Python parsing engine.

      Filled 3 NA values in column name

      If using the C parsing engine you get something like the following output:

      Tokenization took: 0.01 ms
      Type conversion took: 0.70 ms
      Parser memory cleanup took: 0.01 ms
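
      An engine-independent way to get the same NA counts is to just ask the parsed frame (a sketch; the CSV data is made up):

```python
import io
import pandas as pd

# Two missing values in the non-numeric "name" column
csv = io.StringIO("name,age\nalice,30\n,25\nbob,31\n,40\n")
df = pd.read_csv(csv)
print(df.isna().sum()["name"])  # 2
```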
    1. location / {
           proxy_pass http://backend;
           # You may need to uncomment the following line if your redirects are relative, e.g. /foo/bar
           #proxy_redirect / /;
           proxy_intercept_errors on;
           error_page 301 302 307 = @handle_redirect;
       }

       location @handle_redirect {
           set $saved_redirect_location '$upstream_http_location';
           proxy_pass $saved_redirect_location;
       }

      Usually the redirect is returned as the response and the client follows the redirect. This will follow a redirect inside nginx rather than the client.

    1. The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your load balancer. Your server access logs contain only the protocol used between the server and the load balancer; they contain no information about the protocol used between the client and the load balancer.

      The load balancer may talk to the server via http so using $scheme in nginx when there's an AWS load balancer in front may lead to the $scheme being unexpectedly http instead of https.

      http {
          map $http_x_forwarded_proto $original_scheme {
              "" $scheme;
              default $http_x_forwarded_proto;
          }
      }
    1. I had a similar issue with nginx+passenger (for Ruby on Rails / Rack / etc.), and I confirm that by default, multiple slashes are collapsed (in both PATH_INFO and REQUEST_URI). Adding merge_slashes off; in the server context of the nginx configuration fixed it
    1. non-blocking internal requests

      Note ngx.location.capture only works on internal requests, which means if you want to request an external endpoint dynamically you need to set up something like the below and call that internal endpoint instead of calling the external URL directly.

      Say for example you want to send a request to the / endpoint with the third-party URL as part of the path (http://proxy-server.com/http://example.com).

      location /external/ {
          set $upstream "";
          rewrite_by_lua_file ./lua/get_external.lua;
          proxy_pass $upstream;
      }

      Where lua/get_external.lua:

      -- strip beginning '/' from uri path
      ngx.var.upstream = ngx.var.request_uri:sub(2)
    1. set $template_root /usr/local/openresty/nginx/html/templates;

      We should probably use this instead of root since root has other implications.

    1. Purging by single-file through your Cloudflare dashboard

      This seems like the best way to purge files, but I wonder if you can purge by domain, or by multiple files rather than a single file.

    1. an unexpected surge of traffic hitting the Space City Weather Web server

      Yay unexpected spikes in traffic!

    1. # kill cache
       add_header Last-Modified $date_gmt;
       add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
       if_modified_since off;
       expires off;
       etag off;

      disable nginx caching

    1. Because it can handle a high volume of connections, NGINX is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers – anything from legacy database servers to microservices.
    2. Dynamic sites, built using anything from Node.js to PHP, commonly deploy NGINX as a content cache and reverse proxy to reduce load on application servers and make the most effective use of the underlying hardware.
    3. With its event-driven, asynchronous architecture, NGINX revolutionized how servers operate in high-performance contexts and became the fastest web server available.
    4. NGINX has grown along with it and now supports all the components of the modern Web, including WebSocket, HTTP/2, and streaming of multiple video formats (HDS, HLS, RTMP, and others).
    5. NGINX consistently beats Apache and other servers in benchmarks measuring web server performance.
    1. pyramid_retry.is_error_retryable(request, exc)[source]¶ Return True if the exception is recognized as retryable error. This will return False if the request is on its last attempt. This will return False if pyramid_retry is inactive for the request.

      Test if an error is retryable via pyramid_retry. Note it returns False on the last attempt.

  4. Sep 2019
    1. use the REPOSITORY:TAG combination rather than IMAGE ID

      Error response from daemon: conflict: unable to delete c565603bc87f (cannot be forced) - image has dependent child images

      I really feel like this should be the accepted answer here, but it does depend on the root cause of the problem. When you create a tag it creates a dependency, and thus you have to delete the tag and then the image, in that order. If you delete the image by using the tag rather than the ID then you are effectively doing just that.

    1. This API function (as well as ngx.location.capture_multi) always buffers the whole response body of the subrequest in memory. Thus, you should use cosockets and streaming processing instead if you have to handle large subrequest responses.

      So my interpretation of this is the request is issued, the entire response is buffered in memory, then res.headers is read from that buffer. I wonder if there is a way to cut off this buffer and close the connection early such that only the headers make it into the buffer for responses that are very large.

    2. Network I/O operations in user code should only be done through the Nginx Lua API calls as the Nginx event loop may be blocked and performance drop off dramatically otherwise. Disk operations with relatively small amount of data can be done using the standard Lua io library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the ngx.location.capture method and similar) is strongly recommended for maximum performance.

      Very important: ngx.location.capture does not block the nginx event loop.

    1. Nginx, by default, will consider any header that contains underscores as invalid.


    2. This is a really great introductory article to how nginx works. Definitely worth a read!

    3. While buffering can help free up the backend server to handle more requests, Nginx also provides a way to cache content from backend servers, eliminating the need to connect to the upstream at all for many requests.

      This seems like something we might want to look into long term, but I also wonder whether it's really necessary. I thought nginx cached responses by default, in which case it wouldn't even go out to the upstream server anyway; it would just return the previous still-valid response. But maybe I'm missing something.

    1. local LuaSocket = require("socket")
       client = LuaSocket.connect("example.com", 80)
       client:send("GET /login.php?login=admin&pass=admin HTTP/1.0\r\nHost: example.com\r\n\r\n")
       while true do
           s, status, partial = client:receive('*a')
           print(s or partial)
           if status == "closed" then break end
       end
       client:close()

      How to issue an http request through a socket in lua.

    1. 80/20. 45 dollars a month for a family of four. Low deductible.

      That's a pretty good deal.

    2. Insurance, Health & Wellness: ✓ Health Insurance (87), ✓ Dental Insurance (23), ✓ Flexible Spending Account (FSA) (12), ✓ Vision Insurance (23), Health Savings Account (HSA), ✓ Life Insurance (17), ✓ Supplemental Life Insurance (12), Disability Insurance, ✓ Occupational Accident Insurance (12), Health Care On-Site, ✓ Mental Health Care (13), Retiree Health & Medical, ✓ Accidental Death & Dismemberment Insurance (16). Financial & Retirement: Pension Plan, ✓ 401K Plan (83), ✓ Retirement Plan (77), Employee Stock Purchase Plan, ✓ Performance Bonus (9), ✓ Stock Options (17), Equity Incentive Plan, Supplemental Workers' Compensation, Charitable Gift Matching. (✓ = offered; numbers are review counts)

      Winco benefits.

    3. The health insurance here is about the best you will find. The company is employee owned, so you get stock(full stock ownership after 7 years). The benefits are pretty good - the only downside is the amount of vacation time and the ability to move up. It takes a lot of time to move up, and even in the high positions, you max out at four weeks of paid vacation after 15 years.

      Seems like a pretty good summary.

    4. Cheapest medical and dental insurance. Also you get stock every year and it adds and grows with you for your retirement. Starting pay is lower, but the stock and insurance makes up for it.

      So maybe those benefits do make up for the $1 less in pay compared to Fred Meyer; at least this person thinks so.

    1. Fred Meyer offers employees a stock purchase plan. Stock options are available for some employees.

      So I'm not sure if grocery employees have access to this but note it's purchasing stock as opposed to being gifted stock by the company which is what Winco does (although of course there is a vesting period at Winco).

    1. Stock option is called ESOP (Employee Stock Ownership Plan) and is the retirement plan WinCo gives all of its employees. Must be vested 5 years to receive 100% of stock.

      Seems good.

    1. Insurance, Health & Wellness: ✓ Health Insurance (106), ✓ Dental Insurance (27), Flexible Spending Account (FSA), ✓ Vision Insurance (28), ✓ Health Savings Account (HSA) (22), ✓ Life Insurance (21), Supplemental Life Insurance, ✓ Disability Insurance (22), ✓ Occupational Accident Insurance (19), Health Care On-Site, ✓ Mental Health Care (20), Retiree Health & Medical, Accidental Death & Dismemberment Insurance. Financial & Retirement: ✓ Pension Plan (29), ✓ 401K Plan (101), ✓ Retirement Plan (97), ✓ Employee Stock Purchase Plan (20), Performance Bonus, ✓ Stock Options (17), Equity Incentive Plan, ✓ Supplemental Workers' Compensation (17), Charitable Gift Matching. (✓ = offered; numbers are review counts)

      Fred Meyer benefits.

    2. Good health insurance, some 401k and HSA matching, vacation requests are not always accomodated

      So they do have a 401k and HSA, and they do matching; that's a plus, but again it might be job dependent.

    3. 7.5% bonus structure based on salary. Expensive health insurance for families. 3 weeks vac each year + floating holidays+4 personal days

      Not sure what job this person had though.

    1. Benefits only take effect after 4 months employment for self, 10 months for family. This is an absurd arrangement that is unfair to employees.


    1. Cashier - Hourly: $11/hr (range: $9 - $18, based on 63 salaries)

      Winco cashier pay is $11/hr. In general Winco pays $1 less per hour than Fred Meyer, but they also have stock and 401k benefits, which is a huge plus. Although when you are making that little income anyway, do you really have money lying around to invest in those things? Probably not, would be my guess.

    1. proxy_ssl_trusted_certificate file;

      And we should probably specify this.

    2. proxy_ssl_verify on

      We should probably turn this on.

    3. proxy_ssl_server_name on

      Turn this on when you need to proxy to an https server.
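
      Putting these three directives together for an https upstream might look like this (the upstream name and CA bundle path are assumptions):

```nginx
location / {
    proxy_pass https://upstream.example.com;   # hypothetical upstream
    proxy_ssl_server_name on;                  # send SNI to the upstream
    proxy_ssl_verify on;                       # verify the upstream certificate
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
}
```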

    4. There could be several proxy_redirect directives:

       proxy_redirect default;
       proxy_redirect http://localhost:8000/ /;
       proxy_redirect http://www.example.com/ /;

      In case there are multiple layers of redirection.

    5. proxy_redirect ~*/user/([^/]+)/(.+)$ http://$1.example.com/$2;

      So you can basically redirect anything to be the original url so the client is not aware there were layers of redirects by:

      proxy_redirect ~^.*$ /original/endpoint;
    1. py27-django{18,19,110,111,111tip},

      An example of how to test multiple python versions against multiple library versions
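
      A minimal tox.ini sketch of that pattern (env names and version pins are illustrative):

```ini
[tox]
envlist = py27-django{18,19,110,111}

[testenv]
deps =
    django18: Django>=1.8,<1.9
    django19: Django>=1.9,<1.10
    django110: Django>=1.10,<1.11
    django111: Django>=1.11,<2.0
commands = pytest
```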

    1. proxy_redirect $scheme://$host:$server_port/ /gitlab/;

      This doesn't seem to be working as I'd expect it to override the location response header.

    1. proxy_redirect: If you need to modify the redirect (i.e. location header) being returned to the browser
    1. more_clear_headers

      Look into this as the nginx set header Cookie "" doesn't appear to be working. Might also be able to use lua header filter.

    2. openresty/headers-more-nginx-module

      Allows you to clear headers.
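
      With the module loaded, clearing is one directive per direction (a sketch):

```nginx
more_clear_input_headers 'Cookie';   # strip a request header before proxying
more_clear_headers 'Server';         # strip a response header
```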

    1. header_filter_by_lua 'ngx.header.Foo = "blah"';

      Not sure why you wouldn't use raw nginx for this.

    2. access_by_lua '
           local res = ngx.location.capture("/auth")
           if res.status == ngx.HTTP_OK then
               return
           end
           if res.status == ngx.HTTP_FORBIDDEN then
               ngx.exit(res.status)
           end
           ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
       ';

      Potentially something like this for getting the content-type response header and using that to decide which proxy server to send the original request to.

    3. The Lua code cache can be temporarily disabled during development by switching lua_code_cache off

      For dev mode switch the lua file cache off.

    1. # map to different upstream backends based on header
       map $http_x_server_select $pool {
           default "apache";
           staging "staging";
           dev     "development";
       }

      case statement on the header X-Server-Select. I wonder if we could do something similar for rate limiting ip selection.
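
      A sketch of the same map trick for picking the rate-limit key (the header name and rate are assumptions); an empty key exempts a request from the limit:

```nginx
map $http_x_client_group $limit_key {
    default   $binary_remote_addr;   # everyone else limited per IP
    "trusted" "";                    # empty key = not rate limited
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;
```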

    1. add_header Set-Cookie cip=$remote_addr;

      How to set a cookie in nginx or perhaps unset a cookie:

      proxy_set_header Cookie cip="";
    1. location ~* ^/proxy/(?<pschema>https?)/(?<phost>[\w.]+)(?<puri>/.*) {
           set $adr $pschema://$phost;
           rewrite .* $puri break;
           proxy_pass $adr;
       }

      Nice idea as it doesn't use lua but also a bit fragile I think.

    1. echo "GET http://localhost/" | vegeta attack -duration=5s | tee results.bin | vegeta report

      This worked really well for me, producing something like below:

      Requests      [total, rate]       2400, 80.03
      Duration      [total, attack, wait]  41.854241245s, 29.988606s, 11.865635245s
      Latencies     [mean, 50, 95, 99, max]  3.425533859s, 0s, 17.212904925s, 23.748749616s, 39.421327692s
      Bytes In      [total, mean]        309003728, 128751.55
      Bytes Out     [total, mean]      0, 0.00
      Success       [ratio]                 29.67%
      Status Codes  [code:count]   0:1688  200:712

      It appears though that the latency also includes failed requests which is something to be aware of.

    2. 0 = infinity

      This gave me a can't set to 0 error.

    1. You can only really adjust worker_connections, since worker_processes is based on the number of CPUs you have available.
    2. the product of your worker_connections and worker_processes directive settings
    3. worker_rlimit_nofile directive. This changes the limit on the maximum number of open files
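
      The three directives from these notes combined (numbers are placeholders to tune):

```nginx
worker_processes auto;          # match available CPUs
worker_rlimit_nofile 8192;      # raise the per-worker open-file limit
events {
    worker_connections 4096;    # max clients ~ worker_processes * worker_connections
}
```
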
    1. Vanguard Total International Bond Index Fund (3%)

      % international bonds to invest in. If not available invest more in US bonds.

    2. Vanguard Total Bond Market II Index Fund (7%)

      % US bonds to invest in.

    3. Vanguard Total International Stock Index Fund (35.9%)

      % international stocks to invest in

    4. Vanguard Total Stock Market Index Fund (54.1%)

      % US stocks to invest in.
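
      A quick sanity check that the four slices above total 100%:

```python
allocation = {
    "intl_bonds": 3.0,    # Vanguard Total International Bond Index
    "us_bonds": 7.0,      # Vanguard Total Bond Market II Index
    "intl_stocks": 35.9,  # Vanguard Total International Stock Index
    "us_stocks": 54.1,    # Vanguard Total Stock Market Index
}
assert abs(sum(allocation.values()) - 100.0) < 1e-6
```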

  5. Aug 2019
    1. Example

      Test annotation for single annotation view - do not delete.

    1. VLBI

      [Very-long-baseline interferometry](https://en.wikipedia.org/wiki/Very-long-baseline_interferometry)

      Aka: emulating a telescope with a size equal to the maximum separation between the telescopes.

    1. intern

      sys.intern in Python3

    2. ascii letters, digits or underscores

      Strings composed of these characters are interned. Aka the following will not be interned:

      f = 'f o'

      but the following will be interned:

      f = 'f_o'

    3. sequences generated through peephole optimization are discarded if their length is superior to 20

      So sequences generated via peephole optimization with a length above 20 are not 'pre-computed' but left as is:

      'a' * 21  # remains in the bytecode, whereas
      'a' * 20  # gets folded to 'aaaaaaaaaaaaaaaaaaaa'
    4. string subclasses cannot be interned


      class NewString(str):
          pass

      assert NewString('f') is 'f'

      Will fail. Whereas:

      f = 'f'
      assert f is 'f'

      Will pass.

      This presents a case for not building custom string types in Python, as it breaks the string cache and can result in a performance hit.

    5. strings can be compared by a O(1) pointer comparison instead of a O(n) byte-per-byte comparison

      This is a huge advantage. Not only does it save memory by not duplicating simple and common string values, but the comparison method has an early exit that compares the pointers instead of the values. Aka in pseudo-code form:

      # compare pointers
      if self._value is value:
         return True
      # compare values
      for i, v in enumerate(self._value):
         if v != value[i]:
            return False
      return True
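
      The pointer shortcut is easy to see with sys.intern; strings built at runtime are distinct objects until interned:

```python
import sys

# Runtime-built strings: equal values, separate objects, not auto-interned
a = "".join(["interned", " demo"])
b = "".join(["interned", " demo"])
print(a == b, a is b)   # equal, but different objects

# Interning maps equal values onto one shared object, so 'is' becomes O(1)
a, b = sys.intern(a), sys.intern(b)
print(a is b)           # True
```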
    1. While a break in the hyoid bone is common in victims of homicide by strangulation, Epstein's autopsy also showed signs of other neck fractures

      This says there were multiple bones broken in the neck; however, this article says there was just one bone. Another article cites the linked article above, implying that there were multiple bones broken when in fact only one bone was broken according to the cited article.

    2. Two prison staff members who'd been guarding the unit where Epstein died by apparent suicide failed to check on him that night for about three hours

      There was a three hour gap in the guards schedule.

    1. NBC News, according to a person familiar with the matter, also reported that the autopsy found a bone break in Epstein's neck.

      The 'also' in this sentence seems to imply that there were other broken bones, but the cited article only mentions the hyoid bone.

    1. he broke his hyoid bone, a small horseshoe-shaped bone near the base of the human jaw

      This is the only bone that was broken according to this article.

    1. fracture of the hyoid bone is rare because it is protected by the mandible. In fact, most hyoid bone injuries are caused by strangulation.

      A fractured hyoid bone is almost always the result of strangulation.

    1. There’s a perception that the Old World is the advanced world and transferred all this knowledge to the New one, but we are realizing that they knew a lot, and I think this is one more piece of evidence for that

      It's refreshing to see someone coming to this conclusion based on the research and evidence. It seems most of the time we tend to underestimate the technology that civilizations in this era used.

    2. What happened here is that these rocks were struck by lightning sometime between when they were formed many thousands of years ago, and when they were carved

      It kinda makes you wonder whether the rocks were struck by lightning naturally, or whether the people struck them intentionally.

    3. The fields found in the statues, however, are far stronger — in some cases nearly four times that of the Earth’s magnetic field.

      That's quite impressive. It's on the same order of magnetism as the rocks at the Puma Punku site in Bolivia.

    4. artisans carved the figures so that the magnetic areas fell at the navel or right temple — suggesting not only that Mesoamerican people were familiar with the concept of magnetism but also that they had some way of detecting the magnetized spots

      The potbelly statues have very strong magnetic areas on the head and around the belly button suggesting that the people who made them had knowledge of magnetism.

  6. Jun 2019
    1. get_current_request()

      This can be used to get the current request object if the request object is not already available in the current context. It really shouldn't be used in production code but in a debug/test scenario it can be quite handy.

  7. May 2019
    1. Food safety authorities worldwide have set acceptable daily intake (ADI) values for aspartame at 40 mg/kg of body weight. The FDA has set its ADI for aspartame at 50 mg/kg.[39]

      The recommended “safe” daily value is 40 mg/kg of body weight.
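
      Rough arithmetic on that ADI (the per-can aspartame content is an assumption of roughly 180 mg per 12-oz diet soda; it varies by brand):

```python
body_kg = 70                      # example adult body weight
adi_mg_per_day = 40 * body_kg     # 40 mg/kg ADI
cans_per_day = adi_mg_per_day / 180
print(adi_mg_per_day, round(cans_per_day, 1))  # 2800 15.6
```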

    2. Dr Arthur Hull Hayes was appointed as Commissioner of the FDA the day after Reagan's inauguration.[34] In 1981, Hayes sought advice on aspartame's ban from a panel of FDA scientists and a lawyer. It soon became clear that the panel would uphold the ban by a 3-2 decision, but Hull then installed a sixth member on the commission, and the vote became deadlocked.[34] He then personally broke the tie in aspartame's favor.

      By exploiting the ability to appoint panel members, the vote was manipulated in aspartame's favor, and aspartame was approved under Ronald Reagan's administration.

    3. In November 1983, Hayes left the FDA under a cloud and joined Burson-Marsteller, chief public relations firm for both Monsanto and GD Searle, as a senior medical advisor.[37][38] The appointment was widely seen as a reward for his approval of aspartame.

      This is a textbook example of how companies are able to influence government officials in their favor.

    4. academic pathologists reviewed 15 aspartame studies by Searle, and concluded that, although minor inconsistencies were found, they would not have affected the studies' conclusions.[2]:4 This conclusion was reached despite the testimony of Dr. M. Adrian Gross, a former senior FDA toxicologist, who stated that Searle's studies were largely unreliable, that at least one of the studies had established beyond any reasonable doubt that aspartame is capable of inducing brain tumors in experimental animals, and that by allowing aspartame to be placed on the market, the FDA had violated the Delaney Amendment
    5. nor Skinner's successor, Thomas Sullivan, convened a grand jury, allowing the statute of limitations to expire.[26][27][28] In December, 1977, Sullivan ordered the case dropped for lack of evidence, and Conlon was later also hired by Searle's law firm.

      The new FDA lawyer was also hired by the company and the case was eventually dropped.

    6. in February, 1977, Searle's law firm, Sidley & Austin, offered Skinner a job, which he accepted, recusing himself from the case

      The company being sued for the cover up of the negative side effects of aspartame hires the lawyer that represented the FDA thus resulting in the case never making it to trial.

    7. In January 1977, [the FDA] formally requested that a grand jury be convened to investigate whether indictments should be filed against Searle for knowingly misrepresenting findings and "concealing material facts and making false statements" in aspartame safety tests (the first time in the FDA's history that they requested a criminal investigation of a manufacturer)

      The scientific research provided for aspartame's approval was so badly misrepresented that it became the first case in FDA history in which a criminal investigation of a manufacturer was requested for concealing material facts and making false statements.

    1. elevated cortisol and gut dysbiosis via interactions with different biogenic amine may also have additional impact to modulate neuronal signaling lead to neurobiological impairments

      In summary, aspartame increases the firing of neurotransmitters, increases heart rate and blood pressure, and imbalances gut bacteria.

    2. gut dysbiosis

      most commonly reported as a condition in the gastrointestinal tract, particularly during small intestinal bacterial overgrowth (SIBO) or small intestinal fungal overgrowth (SIFO)

      Common symptoms include stomach upset after eating, indigestion, the extremely common GERD (reflux), heartburn, slow digestion, or bloating, excessive gas, lower belly pains, constipation, or diarrhea.

      Dysbiosis can take between 3-12 weeks to heal.

    3. elevated cortisol

      Often called the “stress hormone,” cortisol causes an increase in your heart rate and blood pressure

    4. aspartame metabolite; mainly Phy and its interaction with neurotransmitter and aspartic acid by acting as excitatory neurotransmitter causes this pattern of impairments

      Aspartame alters the chemical composition of the brain and excites neurotransmitters.

    1. condensed fragmented nuclei

      a fancy way to say the cell dies

    2. examination showed fetal capillaries with condensed nuclei of endothelial cells, cytotrophoblasts with condensed fragmented nuclei and vacuolated cytoplasm, and syncytiotrophoblasts with irregular condensed fragmented nuclei

      In summary aspartame results in the death of cells that carry nutrients and energy to the growing embryo.

    3. syncytiotrophoblasts

      the epithelial covering of the highly vascular embryonic placental villi, which invades the wall of the uterus to establish nutrient circulation between the embryo and the mother

    4. vacuolated cytoplasm

      the cell fills with pockets of fluid

    5. cytotrophoblasts

      The interior of the trophoblast cells that provide energy and nutrients to the baby.

    6. endothelial cells

      cells that line the interior surface of blood vessels and lymphatic vessels, forming an interface between circulating blood or lymph in the lumen and the rest of the vessel wall.

    7. condensed nuclei

      Double-stranded DNA loops around 8 histones twice, forming the nucleosome, which is the building block of chromatin packaging. DNA can be further packaged by forming coils of nucleosomes, called chromatin fibers. These fibers are condensed into chromosomes during mitosis, or the process of cell division.

    8. Damage in the placenta was detected in the form of rupture of the interhemal membrane, lysis of glycogen trophoblast cells, spongiotrophoblast cells with vacuolated cytoplasm and darkly stained nuclei.

      Aspartame caused placenta cells to rupture and behave like they had been exposed to a pathogen or virus.

    9. vacuolated cytoplasm

      phenomenon observed in mammalian cells after exposure to bacterial or viral pathogens as well as to various natural and artificial low-molecular-weight compounds

    10. spongiotrophoblast cells

      placenta cells that provide nutrients and energy and bind with the dye

    11. glycogen trophoblast cells

      cells that provide energy and nutrients to the embryo and develop into a large part of the placenta

    12. lysis

      the disintegration of a cell by rupture of the cell wall or membrane.

    13. interhemal membrane

      A fancy term for the placenta.

    1. Results reveal that the aspartame molecule is inherently amyloidogenic, and the self-assembly of aspartame becomes a toxic trap for proteins and cells

      Aspartame inhibits the function of organs and tissues via clusters of proteins/“plaque”.

    2. amyloidogenic

      producing or tending to produce amyloid deposits. These are associated with malfunction of organs.

    3. necrosis

      death of cells or tissue through disease or injury

    4. apoptosis

      the death of cells which occurs as a normal and controlled part of an organism's growth or development.

    5. Aspartame fibrils were also found to induce hemolysis, causing DNA damage resulting in both apoptosis and necrosis-mediated cell death.

      This study found that aspartame causes DNA damage and cell death.

    1. Pathogenic amyloids form when previously healthy proteins lose their normal physiological functions and form fibrous deposits in plaques around cells which can disrupt the healthy function of tissues and organs.

      Clusters of these proteins prevent organs from functioning correctly.

    2. Amyloids are aggregates of proteins that become folded into a shape that allows many copies of that protein to stick together, forming fibrils. In the human body, amyloids have been linked to the development of various diseases.

      Amyloids are clusters of proteins that are associated with development of various diseases in humans.

    1. config_file: Path to the config file name

      Since Hypothesis uses a server side config I'm not sure what to do here as there is no config file to provide. 🤔

      If it turns out this isn't feasible (although I'm sure there's a solution for this config_file issue) the alternative approach would be to use the post method.

    2. newrelic-admin record-deploy config_file description [revision changelog user]

      To enable recording of deploys on the Python agent via New Relic, you can simply call the newrelic-admin record-deploy command and pass it the necessary revision information. This will place a deployment marker on any graph you view in New Relic as a vertical line, indicating that a new revision of the code was released at that point in time.