<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[special sauce]]></title><description><![CDATA[and random musings]]></description><link>https://blog.random.io/</link><generator>Ghost 0.11</generator><lastBuildDate>Mon, 21 Oct 2024 09:07:23 GMT</lastBuildDate><atom:link href="https://blog.random.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Moving Helm release to another Kubernetes Namespace]]></title><description><![CDATA[<p>If you want to clean up your Helm releases by moving them into a more appropriate namespace than the one they were initially deployed into, you might feel kinda stuck, since the Helm CLI does not allow you to move an already-deployed release to another namespace. But it can be accomplished with some manual editing</p>]]></description><link>https://blog.random.io/moving-helm-release-to-another-kubernetes-namespace/</link><guid isPermaLink="false">ca31c18c-a7c3-49b6-9422-bb0228522345</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[helm]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Mon, 20 Dec 2021 18:22:35 GMT</pubDate><content:encoded><![CDATA[<p>If you want to clean up your Helm releases by moving them into a more appropriate namespace than the one they were initially deployed into, you might feel kinda stuck, since the Helm CLI does not allow you to move an already-deployed release to another namespace. But it can be accomplished with some manual editing of K8s resources...</p>

<p>Information about a Helm v3 release is stored in a secret within the namespace where the release is deployed. <br>
These secrets can be carefully updated to "move" the release from one namespace to another.</p>

<blockquote>
  <p>The following moves the Helm release record, as visible in the output of the <code>helm ls -n mynamespace</code> command. It does not move any of the K8s resources that are part of the release.</p>
</blockquote>

<p>Using Jenkins Helm chart as an example:</p>

<ol>
<li><p>reflect on "do I really need to do this?"</p></li>
<li><p>get the original secret</p>

<pre><code>kubectl -n default get secret sh.helm.release.v1.jenkins.v1 -o yaml | neat &gt; jenkins_release_orig.yaml
</code></pre></li>
<li><p>decode the secret's <code>data.release</code> (it's gzipped, then double-base64-encoded)</p>

<pre><code>oq -i yaml -r '.data.release' jenkins_release_orig.yaml  | base64 -D | base64 -D | gzip -d &gt; jenkins_release_secret_decoded.txt
</code></pre></li>
<li><p>change the namespace of the release, and save the changes into the new file</p>

<pre><code>NEW_VALUE=$(cat jenkins_release_secret_decoded.txt | jq -c '.namespace="jenkins"' | gzip -c | base64 | base64)
oq -i yaml -o yaml --arg new "$NEW_VALUE" '.data.release = $new' jenkins_release_orig.yaml &gt; jenkins_release_moved.yaml
</code></pre></li>
<li><p>verify and apply</p>

<pre><code>oq -i yaml -r '.data.release' jenkins_release_moved.yaml | base64 -D | base64 -D | gzip -d | jq .namespace
kubectl -n jenkins apply -f jenkins_release_moved.yaml
helm -n default ls
helm -n jenkins ls
</code></pre></li>
<li><p>remove old release record (secret) in original namespace</p>

<pre><code>kubectl -n default delete secret sh.helm.release.v1.jenkins.v1
</code></pre></li>
<li><p>rinse and repeat for other versions / deployments of the release</p></li>
</ol>
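
<p>The encode/decode round trip in steps 3-4 can be sanity-checked without a cluster. A minimal sketch (the payload below is a made-up stand-in for the real release object; GNU <code>base64</code> uses <code>-d</code> where macOS uses <code>-D</code>):</p>

<pre><code># stand-in for the release object stored in the secret
payload='{"name":"jenkins","namespace":"default","version":1}'

# encode: gzip, then base64 twice (Helm encodes once, the K8s Secret again)
encoded=$(printf '%s' "$payload" | gzip -c | base64 | base64 | tr -d '\n')

# decode: reverse the steps
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d)

echo "$decoded"
</code></pre>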

<p>Moving the actual K8s resources to another namespace can be achieved by grabbing their manifests, modifying the desired <code>metadata.namespace</code>, and redeploying them. It helps to be extra careful with PVs, to avoid losing them. Whenever possible, consider deleting the Helm release and redeploying it in the new namespace, to save yourself a headache.</p>]]></content:encoded></item><item><title><![CDATA[k8s-vault: connect to K8s API via SSH jumphost]]></title><description><![CDATA[CLI utility, which makes it easy to reach K8s API via SSH jumphost, using SSH port forwarding.]]></description><link>https://blog.random.io/k8s-vault-connect-to-k8s-api-via-ssh-jumphost/</link><guid isPermaLink="false">0a69431b-1761-430a-a6c9-0f52463609e9</guid><category><![CDATA[kubectl]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[k8s]]></category><category><![CDATA[k8s-vault]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Thu, 28 Jan 2021 17:07:55 GMT</pubDate><content:encoded><![CDATA[<blockquote>
  <p>Like aws-vault is a helper for AWS-related CLI tools, k8s-vault is a helper for CLI tools using KUBECONFIG. Unlike aws-vault, "vault" here is used as a verb, synonymous with leap, jump, spring, etc.</p>
</blockquote>

<p>About two years ago, I made a script, described in the <a href="https://blog.random.io/using-ssh-port-forwarding-for-k8s-cli-tools/">Using SSH + Port-Forwarding for K8s CLI tools</a> post.</p>

<p>That CLI script serves as a wrapper for other CLI tools that use <code>KUBECONFIG</code>. It establishes an SSH port-forwarding session via an SSH jumphost, while generating a temporary <code>KUBECONFIG</code> with the server endpoint modified to use the local TCP port of the forwarding session.</p>

<p>While the script worked well in my experience, it had to be fixed when a newer version of <a href="https://github.com/mikefarah/yq">yq</a> introduced incompatible changes. And then again... The dependency on specific versions of helper tools is rather annoying.</p>

<p>Recently, I've implemented the same thing in <a href="https://crystal-lang.org/">Crystal</a>, while learning the language. Enter <a href="https://github.com/anapsix/k8s-vault.cr">k8s-vault.cr</a>.</p>

<p>It works pretty much the same way, with a slight change in <code>k8s-vault.yaml</code> config format.</p>

<p>The repository's <a href="https://github.com/anapsix/k8s-vault.cr/releases">releases page</a> includes a statically compiled binary for Linux (x64), as well as a dynamically linked one for macOS.</p>]]></content:encoded></item><item><title><![CDATA[Using TLS-enabled Helm 2.x with multiple clusters]]></title><description><![CDATA[<p>Helm 2.x uses a server component (Tiller) deployed in the Kubernetes cluster, which "<em>interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. It also stores the objects that represent releases</em>".</p>

<p>Tiller runs in-cluster with privileges granted to it by <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/">service account</a>, which is</p>]]></description><link>https://blog.random.io/using-tls-enabled-helm-2-x-with-multiple-clusters/</link><guid isPermaLink="false">dc356b2c-8166-48bf-9c1b-031233ddbb46</guid><category><![CDATA[helm]]></category><category><![CDATA[tiller]]></category><category><![CDATA[helm 2.x]]></category><category><![CDATA[helm 2]]></category><category><![CDATA[tls]]></category><category><![CDATA[ssl]]></category><category><![CDATA[certificates]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Wed, 06 Nov 2019 13:44:11 GMT</pubDate><content:encoded><![CDATA[<p>Helm 2.x uses a server component (Tiller) deployed in the Kubernetes cluster, which "<em>interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. It also stores the objects that represent releases</em>".</p>

<p>Tiller runs in-cluster with privileges granted to it by a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/">service account</a>, which is suboptimal, since anyone with cluster access would be able to connect to the gRPC endpoint Tiller is exposing.</p>

<blockquote>
  <p>The upcoming release of Helm 3 solves the issue entirely by removing Tiller and having the <code>helm</code> CLI communicate directly with the K8s API, thus relying on K8s RBAC for auth/authz. Check out the post <a href="https://helm.sh/blog/helm-3-preview-pt2/">A Gentle Farewell to Tiller</a> on the Helm blog, and <a href="https://thorsten-hans.com/the-state-of-helm3-hands-on">The state of Helm 3</a> by Thorsten Hans.</p>
</blockquote>

<p>While waiting for the release of Helm 3, we have to deal with Tiller... There are many articles discussing Helm 2.x security:</p>

<ul>
<li><a href="https://engineering.bitnami.com/articles/helm-security.html">Exploring the Security of Helm</a></li>
<li><a href="https://jfrog.com/blog/is-your-helm-2-secure-and-scalable/">Is Your Helm 2 Secure and Scalable?</a></li>
<li><a href="https://rimusz.net/tillerless-helm">Tillerless Helm v2</a></li>
<li>and many more</li>
</ul>

<p>One of the better options is to use <a href="https://rimusz.net/tillerless-helm">Tillerless Helm v2</a>. <br>
Another is to enable <a href="https://helm.sh/docs/using_helm/#using-ssl-between-helm-and-tiller">TLS-based auth</a>, issuing SSL certs to every user authorized to connect to a specific Tiller instance (since there can be multiple Tiller instances running, each with its own set of privileges).</p>

<p>Working with multiple TLS-enabled Helm 2.x clusters is somewhat of a pain, since it requires using environment variables or CLI options to select the right certificates for each cluster, with no option to use a configuration file.</p>

<p>To make it easier, I've made a wrapper script, naturally. When <code>--tls</code> is passed to the wrapper, it sets the necessary environment variables based on the <code>--kube-context</code> argument, and calls the <code>helm</code> binary.</p>
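
<p>A minimal sketch of the idea (the per-context cert layout under <code>~/.helm/tls/</code> is a hypothetical convention; the <code>HELM_TLS_*</code> variables are the ones Helm 2.x actually reads):</p>

<pre><code># emit the TLS-related environment variables for a given kube context
helm_tls_env() {
  local context="$1"
  local dir="$HOME/.helm/tls/${context}"
  echo "HELM_TLS_ENABLE=true"
  echo "HELM_TLS_CA_CERT=${dir}/ca.pem"
  echo "HELM_TLS_CERT=${dir}/cert.pem"
  echo "HELM_TLS_KEY=${dir}/key.pem"
}

# a wrapper would do: eval "$(helm_tls_env "$context")"; exec helm "$@"
helm_tls_env prod
</code></pre>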

<blockquote>
  <p>The script also addresses the annoyance of having to match the Helm client CLI version with the Tiller (server component) version, by detecting the Tiller version and downloading the appropriate <code>helm</code> CLI from <a href="https://github.com/helm/helm/releases">Helm's GitHub releases page</a>.</p>
</blockquote>

<script src="https://gist.github.com/anapsix/eff8a9248fc5ba446fcb373daae143b9.js"></script>]]></content:encoded></item><item><title><![CDATA[Experimenting with Rancher and K8s in local dev environment]]></title><description><![CDATA[Running Rancher with ephemeral K8s clusters for local testing.]]></description><link>https://blog.random.io/experimenting-with-rancher-and-k8s-in-local-dev-environment/</link><guid isPermaLink="false">562ab19a-7065-4796-91ba-f7230be6efa6</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[k8s]]></category><category><![CDATA[rancher]]></category><category><![CDATA[kind]]></category><category><![CDATA[kubernetes-in-docker]]></category><category><![CDATA[rkind]]></category><category><![CDATA[rancher-kind]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Thu, 20 Jun 2019 19:06:00 GMT</pubDate><content:encoded><![CDATA[<blockquote>
  <p>Rancher 'n KIND - running Rancher with an ephemeral K8s cluster for local testing.</p>
</blockquote>

<p>If you'd like to experiment with Rancher and K8s in a local dev environment, <a href="https://kind.sigs.k8s.io/">KIND</a> is just the right thing for ephemeral K8s clusters. Kind is a tool for running local Kubernetes clusters using Docker container "nodes". It's super helpful when developing K8s itself: build images with your changes, and quickly spin up an instance for a test. <br>
Even if you are not developing K8s, kind is a great tool for provisioning a local K8s cluster for testing.</p>

<p><center>❤️❤️❤️</center></p>

<p>To fulfill a need to test Rancher integration with K8s, and to run experiments with this setup, I've made a naive helper script that starts a Rancher instance in Docker and launches a kind cluster for it to use.</p>
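
<p>The gist boils down to two moving parts, sketched here in dry-run form (the container name, cluster name, and port mappings are illustrative; the <code>run</code> helper just prints what would be executed):</p>

<pre><code>run() { echo "+ $*"; }  # swap echo for real execution

# 1. start Rancher itself in Docker
run docker run -d --name rancher --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

# 2. create an ephemeral kind cluster for Rancher to manage
run kind create cluster --name rancher-playground
run kind get kubeconfig --name rancher-playground
</code></pre>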

<p>Available as a GitHub Gist: <a href="https://gist.github.com/anapsix/25a5a66696f14806a4686ec1c707d2d2">rkind.sh</a></p>

<script src="https://gist.github.com/anapsix/25a5a66696f14806a4686ec1c707d2d2.js"></script>]]></content:encoded></item><item><title><![CDATA[OAuth for kubectl (and Keycloak)]]></title><description><![CDATA[Multi-cluster helper script to generate OAuth / OIDC tokens for K8s API, for use with kubectl. Can be used as kubectl plugin.]]></description><link>https://blog.random.io/oauth-for-kubernetes-cluster/</link><guid isPermaLink="false">9a439236-e3d6-4df6-9f72-af4b1f882b2e</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[keycloak]]></category><category><![CDATA[oauth]]></category><category><![CDATA[oauth2]]></category><category><![CDATA[oidc]]></category><category><![CDATA[identity]]></category><category><![CDATA[sso]]></category><category><![CDATA[kubectl plugin]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Fri, 31 May 2019 15:03:11 GMT</pubDate><media:content url="http://blog.random.io/content/images/2019/05/keycloak_plus_k8s.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.random.io/content/images/2019/05/keycloak_plus_k8s.png" alt="OAuth for kubectl (and Keycloak)"><p>Recently, I've set up an internal <a href="https://www.keycloak.org/">Keycloak</a> (an open source Identity and Access Management) instance to manage user (and application) access to a K8s cluster. One could certainly create users in K8s directly, but it's a rather tedious process, involving the creation of certificate/key pairs for every user managed that way (see <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#step-2-create-the-user-credentials">Bitnami's How-To Configure RBAC in K8s</a>). Then there's dealing with access / certificate revocation, rotation, etc.. <br>
With self-registration and group management, Keycloak is a safer, more robust, and simply better way of managing user and application access to the Kube-API server via OAuth.</p>

<p>I highly recommend Bob Killen's article titled "<a href="https://medium.com/@mrbobbytables/kubernetes-day-2-operations-authn-authz-with-oidc-and-a-little-help-from-keycloak-de4ea1bdbbe">Kubernetes Day 2 Operations: AuthN/AuthZ with OIDC and a Little Help From Keycloak</a>"</p>

<p>Inspired by the above-mentioned article, and the <a href="https://github.com/mrbobbytables/oidckube/blob/master/login.sh"><code>login.sh</code></a> script from Bob's <a href="https://github.com/mrbobbytables/oidckube">oidckube</a> project, I've made a somewhat modified version of the script to support easier login in a multi-cluster environment.</p>

<p>My version, <a href="https://gist.github.com/anapsix/9e965d646b8c3549df6099d37bcdd3c0"><code>k8s-oidc-login</code></a>, uses a YAML config, allowing you to configure global or per-cluster OIDC endpoints, usernames, passwords, etc.</p>

<blockquote>
  <p>If you save the script as "kubectl-login" and place it in your exec PATH, it can be used as a kubectl plugin. <br>
  Usage would look like <code>kubectl login [--kubeconfig=kubectl-config-file] [--context=kubectl-context]</code></p>
</blockquote>

<p>Example config:  </p>

<pre><code class="language-yaml">global:  
  oidc_server: keycloak-server1.hostname.com
  oidc_username: user@domain.com
  oidc_password: bad-idea-to-keep-password-here-it-is-known
  oidc_client_id: kubernetes
clusters:  
  cluster-name-1:
    oidc_server: keycloak-server1.hostname.com
    oidc_username: another-user@domains.com
    oidc_password: bad-idea-to-keep-password-here
    oidc_auth_realm: cluster-name-1-realm
    oidc_client_secret: 33f12b49-faf9-498f-996a-c6cfe5d46d29
  cluster-name-2:
    oidc_auth_realm: cluster-name-2-realm
    oidc_client_secret: b1e512f9-02f0-442b-a1a0-b5c728c7254c
  cluster-name-3:
    oidc_auth_realm: cluster-name-3-realm
    oidc_client_secret: 1091a5fb-7dbe-41fd-9251-8131ab2ec25d
</code></pre>
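
<p>Under the hood, the script trades the configured credentials for a token at Keycloak's standard password-grant token endpoint (the <code>/auth</code> path prefix applies to Keycloak distributions of this vintage). Sketching the URL it would hit for <code>cluster-name-1</code> from the example above (the commented <code>curl</code> invocation is illustrative):</p>

<pre><code>OIDC_SERVER=keycloak-server1.hostname.com
REALM=cluster-name-1-realm
TOKEN_URL="https://${OIDC_SERVER}/auth/realms/${REALM}/protocol/openid-connect/token"

# curl -s "$TOKEN_URL" -d grant_type=password -d client_id=kubernetes \
#   -d client_secret=... -d username=... -d password=... | jq -r .id_token
echo "$TOKEN_URL"
</code></pre>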

<p>Naturally, standard disclosures apply, YMMV... Hopefully, this might come in handy for those using Keycloak, or other OAuth providers, for Kubernetes RBAC.</p>

<script src="https://gist.github.com/anapsix/9e965d646b8c3549df6099d37bcdd3c0.js"></script>]]></content:encoded></item><item><title><![CDATA[Using SSH + Port-Forwarding for K8s CLI tools]]></title><description><![CDATA[<p>While working with multiple Kubernetes clusters, I came across an annoyance of being unable to reach some of cluster's API endpoints directly from my workstation. <br>
Some production K8s clusters I'm working with have their APIs only available within their respective environments (i.e. production K8s clusters), while others are available</p>]]></description><link>https://blog.random.io/using-ssh-port-forwarding-for-k8s-cli-tools/</link><guid isPermaLink="false">9dafd6ff-a653-46af-8849-47424024baec</guid><category><![CDATA[k8s]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[jumphost]]></category><category><![CDATA[helm]]></category><category><![CDATA[ssh]]></category><category><![CDATA[port-forwarding]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Tue, 16 Apr 2019 19:18:10 GMT</pubDate><content:encoded><![CDATA[<p>While working with multiple Kubernetes clusters, I came across the annoyance of being unable to reach some clusters' API endpoints directly from my workstation. <br>
Some production K8s clusters I'm working with have their APIs available only within their respective environments (i.e. production K8s clusters), while others are reachable directly, via the general corp VPN (i.e. non-prod K8s clusters).</p>

<p>Having to SSH to a jumphost, or (through the jumphost) to one of the nodes within the environment, is annoying and slows down cluster-related work, since YAML manifest changes made locally in my text editor of choice are not instantly available on the jumphost. One could use SSHFS, or another method to continuously synchronize local changes, but there's an easier way. As a one-off, it's easy to start an SSH connection to the jumphost, using SSH's TCP port-forwarding to forward requests to the K8s API via a localhost port of choice. Scripting the process improves the experience, especially when frequently working with multiple clusters.</p>

<p>Inspired by the <a href="https://github.com/99designs/aws-vault">AWS-Vault</a> tool, I've made a utility with a similar usage pattern, which automates the SSH port-forwarding setup for selected (configured) clusters. In this case, "vault" is a verb, synonymous with leap, jump, spring...</p>

<blockquote>
  <p>UPDATE: <code>k8s-vault</code> has been reimplemented in Crystal (and has no external dependencies, other than SSH client): <a href="https://github.com/anapsix/k8s-vault.cr">https://github.com/anapsix/k8s-vault.cr</a></p>
</blockquote>

<p>The whole thing is a BASH script with dependencies on <code>jq</code>, <code>yq</code>, and <code>grep / ggrep</code> to parse <code>KUBECONFIG</code> (<code>~/.kube/config</code>), <code>nc</code> to check connectivity to the K8s API endpoint, and <code>openssh-client</code> to establish the connection to the SSH jumphost.</p>

<p>The config file looks like this:</p>

<pre><code class="language-yaml">## k8s-vault
k8s_api_timeout: 5 # in seconds  
ssh_ttl: 10 # in seconds  
ssh_forwarding_port:  
  random: true
  static_port: 32845
clusters: # same as in your KUBECONFIG  
  prod:
    enabled: true
    ssh_jump_host: jumphost.prod.example.com
  qa:
    enabled: true
    ssh_jump_host: jumphost.qa.example.com
  dev:
    enabled: false
    ssh_jump_host: jumphost.dev.example.com
</code></pre>

<p>It works by extracting the relevant config options from the existing <code>KUBECONFIG</code>, generating a new temporary one, and feeding it via an environment variable to an instance of whatever CLI tool is started [by k8s-vault]. As long as that tool is capable of using the <code>KUBECONFIG</code> environment variable, K8s-Vault can be helpful.</p>
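
<p>The endpoint rewrite at the heart of it can be sketched like so (the script itself operates on a full <code>KUBECONFIG</code> and picks a random port; the single line, hostnames, and static port here are just for illustration):</p>

<pre><code>LOCAL_PORT=32845
API_SERVER="k8s-api.prod.example.com:6443"

# the relevant line from the original KUBECONFIG...
orig="server: https://${API_SERVER}"

# ...rewritten to point at the local end of the SSH tunnel, i.e.:
# ssh -f -N -L ${LOCAL_PORT}:${API_SERVER} jumphost.prod.example.com
tunneled=$(printf '%s' "$orig" | sed "s|${API_SERVER}|127.0.0.1:${LOCAL_PORT}|")
echo "$tunneled"
</code></pre>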

<p>The entire script is available as a GitHub Gist</p>

<script src="https://gist.github.com/anapsix/b5af204162c866431cd5640aef769610.js"></script>

<p>The script is a POC, and may or may not be reimplemented in Go, Rust, or whatever other language I decide to play with.</p>

<p>UPDATE: It's been reimplemented in Crystal: <a href="https://github.com/anapsix/k8s-vault.cr">https://github.com/anapsix/k8s-vault.cr</a></p>]]></content:encoded></item><item><title><![CDATA[K8s CronJob with execution timeout]]></title><description><![CDATA[How to run your Kubernetes CronJobs and/or Jobs with execution timeout, and restart on failure.]]></description><link>https://blog.random.io/k8s-cronjob-with-execution-timeout/</link><guid isPermaLink="false">7fbbe0dc-e6ae-4dd6-a0c6-3feb86deb86d</guid><category><![CDATA[k8s]]></category><category><![CDATA[kubernetes]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[job]]></category><category><![CDATA[timeout]]></category><category><![CDATA[execution timeout]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Fri, 01 Feb 2019 12:18:49 GMT</pubDate><content:encoded><![CDATA[<p>K8s-native CronJobs are quite convenient for running regularly scheduled tasks. But the K8s <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/">CronJob</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures">Job</a> specs do not provide a straightforward way (at least not one that I could find) to specify an execution timeout. So when execution hangs, for whatever reason, the container continues running. Best case scenario, until the next execution, if <code>concurrencyPolicy: Replace</code> is used. <br>
If your task's code has its own timeout capability, life is good. When it does not, here's what you can do. <br>
When running a task you'd rather not delay until the next try in case of a hangup, and/or when job history needs to be retained via <code>concurrencyPolicy: Forbid</code>, a <code>livenessProbe</code> can be used to compare the time elapsed since the start of the task against a timeout value. When that probe fails, the container is restarted thanks to <code>restartPolicy: OnFailure</code>.</p>

<p>If job history does not need to be retained, one could use <code>concurrencyPolicy: Replace</code>. However, that renders <code>successfulJobsHistoryLimit</code> and <code>failedJobsHistoryLimit</code> meaningless, as jobs will be replaced each time the <code>CronJob</code> schedule kicks off another one.</p>

<p>Perhaps the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api">Downward API</a> could be used to get the container start time, but I haven't found the right reference for that yet.</p>

<p>I like to be able to see what went wrong in failed job runs. Counterintuitively, using <code>restartPolicy: Never</code> keeps failed pods around and available to examine.</p>
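
<p>In spec form, the trick looks roughly like this (the image, schedule, task command, and the 300s timeout are placeholders; the probe fails once the elapsed time exceeds the timeout, and <code>restartPolicy: OnFailure</code> restarts the container):</p>

<pre><code class="language-yaml">apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: task-with-timeout
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            # record the start time, then run the actual task
            command: ["/bin/sh", "-c", "date +%s &gt; /tmp/started; exec /path/to/task"]
            livenessProbe:
              # fail once more than 300s have elapsed since the task started
              exec:
                command:
                - /bin/sh
                - -c
                - test $(( $(date +%s) - $(cat /tmp/started) )) -lt 300
              initialDelaySeconds: 10
              periodSeconds: 30
</code></pre>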

<p><strong>CronJob with timeout via livenessProbe example</strong></p>

<script src="https://gist.github.com/anapsix/22ccafce5d41e16be6e2ade7381f6dab.js"></script>]]></content:encoded></item><item><title><![CDATA[checking URLs with curl]]></title><description><![CDATA[<p>since CURL can output useful metrics, such as request time, number of redirects, post-redirect url, etc.. a simple but effective check can be made with a simple wrapper.  </p>

<pre><code>$ ./check_http.sh http://google.com
0,1.659404,200,2,https://www.google.ro/?gws_rd=cr,ssl&amp;dcr=0&</code></pre>]]></description><link>https://blog.random.io/checking-urls-with-curl/</link><guid isPermaLink="false">917e37a3-c79c-4cb0-902c-1e906ecd2d02</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Tue, 09 Jan 2018 11:14:27 GMT</pubDate><content:encoded><![CDATA[<p>Since curl can output useful metrics, such as request time, number of redirects, and post-redirect URL, a simple but effective check can be made with a small wrapper.</p>

<pre><code>$ ./check_http.sh http://google.com
0,1.659404,200,2,https://www.google.ro/?gws_rd=cr,ssl&amp;dcr=0&amp;ei=QaRUWsWdBqaYjwSBlILoAw  
</code></pre>
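
<p>A sketch of such a wrapper, built on curl's <code>-w</code> write-out variables (the field order is guessed from the sample output above: curl exit code, total time, HTTP status, redirect count, effective URL; the 10s timeout is an assumption):</p>

<pre><code>check_http() {
  local url="$1" metrics
  metrics=$(curl -sS -o /dev/null -L --max-time 10 \
    -w '%{time_total},%{http_code},%{num_redirects},%{url_effective}' \
    "$url")
  echo "$?,${metrics}"
}

# check_http http://google.com
</code></pre>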

<script src="https://gist.github.com/anapsix/67f673b36adb9f2f3cae02f39a1004a9.js"></script>]]></content:encoded></item><item><title><![CDATA[Developing with IAM Roles: proxying EC2 metadata server]]></title><description><![CDATA[<p>sometimes you gotta do what you gotta do..</p>

<script src="https://gist.github.com/anapsix/350939a94648cd62e535a5fb45b21c14.js"></script>]]></description><link>https://blog.random.io/developing-with-iam-roles-proxying-ec2-metadata-server/</link><guid isPermaLink="false">d3130651-e530-4c9a-b52c-674eff95c12e</guid><category><![CDATA[aws]]></category><category><![CDATA[ec2 metadata server]]></category><category><![CDATA[iam]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Wed, 13 Dec 2017 16:52:06 GMT</pubDate><content:encoded><![CDATA[<p>sometimes you gotta do what you gotta do..</p>

<script src="https://gist.github.com/anapsix/350939a94648cd62e535a5fb45b21c14.js"></script>]]></content:encoded></item><item><title><![CDATA[livestreaming, getting started]]></title><description><![CDATA[getting started livestreaming, good to know tips, selecting right equipment]]></description><link>https://blog.random.io/livestreaming-getting-started/</link><guid isPermaLink="false">b3d3d0f0-0d89-41b1-a822-8a347ded9079</guid><category><![CDATA[devops]]></category><category><![CDATA[livestreaming]]></category><category><![CDATA[streaming live]]></category><category><![CDATA[getting started]]></category><category><![CDATA[recording meetup]]></category><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Tue, 28 Mar 2017 21:21:17 GMT</pubDate><media:content url="http://blog.random.io/content/images/2017/03/IMG_20170329_112615.jpg" medium="image"/><content:encoded><![CDATA[<img src="http://blog.random.io/content/images/2017/03/IMG_20170329_112615.jpg" alt="livestreaming, getting started"><p>This post is derived from my experience recording and livestreaming community events over the past few years.</p>

<p>With the improved quality of smartphone cameras, anyone can get started recording events using nothing but that. However, keeping a phone steady requires some sort of stand or tripod mount. Plus, the sound quality is often disappointing without an external MIC. While there are MICs and even zoom lenses that connect to a cellphone, I was looking for a more flexible solution that does not tie up my cell phone. <br>
For that purpose, I needed to get some dedicated equipment: camera, tripod, MIC, and a way to stream to YouTube.</p>

<h6 id="camera">camera</h6>

<p>I've started with the <a href="https://www.zoom-na.com/products/field-video-recording/video-recording/zoom-q8">Zoom Q8</a>, since it's one of the few cameras capable of being connected directly to my laptop via USB and recognized as a webcam. It has amazing sound quality with the included detachable stereo MIC array, two XLR/TRS inputs, and individual gain controls for every one of those inputs. <br>
Unfortunately, despite being a phenomenal value for sound-first recording, its fixed wide-angle lens, lack of contrast &amp; brightness adjustments, ironic absence of zoom functionality, and overall poor video quality all make it a bad choice for recording / streaming brightly projected presentations. <br>
Example recording: <a href="https://youtu.be/bzQeMiruVFg?t=28m36s">https://youtu.be/bzQeMiruVFg?t=28m36s</a></p>

<p>Looking for improved video quality and more overall control, including contrast &amp; brightness, I turned to the dSLR camera world. Luckily, there are plenty to choose from. As long as it has clean HDMI out (outputs the video stream without OSD controls, crosshairs, etc.), it will serve the purpose. The only limiting factor is how much you want to spend on it. While salivating over the <a href="https://www.blackmagicdesign.com/products/blackmagicpocketcinemacamera">Blackmagic Pocket Cinema Camera</a>, I willed myself to be more reasonable and ended up finding a second-hand <a href="http://shop.panasonic.com/support-only/DMC-GH3KBODY.html">Panasonic GH3</a> in great condition on eBay. Not only is it a decent camera in its own right, with clean HDMI out; it also takes Micro Four Thirds lenses, ensuring access to a large selection of inventory. <br>
Example recording: <a href="https://youtu.be/7-C_M6JGPuA?t=10m42s">https://youtu.be/7-C_M6JGPuA?t=10m42s</a></p>

<h6 id="capturedevice">capture device</h6>

<p>To be able to stream video from the camera's HDMI out, a capture device is needed: enter the <a href="https://www.blackmagicdesign.com/products/ultrastudiothunderbolt">Blackmagic UltraStudio Mini Recorder</a>. At $149 MSRP, it was the least expensive ultra-portable capture device I could find that supports both HDMI (with audio) and SDI inputs, making it perfect for taking advantage of a facility's cameras, when present (at the Microsoft NERD Center, for example), while still being able to capture any HDMI source. I'd like to note that the UltraStudio Mini Recorder is not a <a href="https://en.wikipedia.org/wiki/USB_video_device_class">UVC</a>-webcam device, as it captures and passes on raw video.</p>

<blockquote>
  <p>A UVC-webcam device is presented to the computer as a regular webcam and feeds already-compressed video to your streaming software, taking care of the heavy lifting required to encode the video stream, and relieving the CPU of the extra work that would be required to encode raw video. These can even re-scale your source to match the desired streaming resolution. Which means, in theory, you could deliver a high-quality livestream using a low-powered Chromebook.</p>
</blockquote>

<p>There are a couple of great UVC-webcam capture devices I wish I had a chance to play with.</p>

<ul>
<li><a href="https://www.blackmagicdesign.com/products/blackmagicwebpresenter">Blackmagic Web Presenter</a></li>
<li><a href="http://www.magewell.com/usb-capture-hdmi">Magewell USB Capture HDMI</a></li>
</ul>

<p>And, in a league of its own, <a href="https://teradek.com/collections/vidiu-family">Teradek VidiU</a> devices, 'cause they are awesome and magical. VidiU allows streaming directly over WiFi (or a cell connection) to the service of your choice (YouTube, USTREAM, etc.), eliminating the need for a computer altogether.</p>

<h6 id="microphone">microphone</h6>

<p>As with anything else, there are plenty of choices, and selecting your perfect MIC has got to be driven by your specific needs. I recommend using a MIC with built-in gain control and a high-pass filter. <br>
I'm using the <a href="http://www.rode.com/microphones/videomicpro">Rode VideoMic Pro</a>.</p>

<h6 id="tripod">tripod</h6>

<p>Get one, you need a steady camera. Pick whatever height you need. Be mindful of its folded size and weight. These can get expensive quickly. Chances are, you can get away with the sub-$100 <a href="https://www.amazon.com/dp/B00L6F16L0/">Manfrotto MKC3-H01</a> (that's what I'm using), or whatever else you find on Amazon.</p>

<h6 id="streamingsoftware">streaming software</h6>

<p>I've been using <a href="http://primary.telestream.net/wirecastplay/landing.htm">Wirecast Play for YouTube</a> (available for both Windows and Mac OS X), as it's fairly easy to use and mostly works. There are some quirks, like occasional glitches, but you can't beat free. Free until recently, anyway. Telestream (the maker of Wirecast) is now charging $9.99 for the basic version without a watermark. And it's probably worth it, as the price is pretty much what a couple of cappuccinos would cost.</p>

<p>Alternatively, a free and very powerful solution has been available for some time. And now it's really worth taking a look at, as it has improved greatly year over year. <br>
<a href="https://obsproject.com/">OBS</a> (Open Broadcaster Software; supports Windows, Mac OS X, and Linux) is extremely versatile and capable as is, being continuously improved by a fantastic Open Source community (146 contributors as of this writing). It's capable of combining and overlaying multiple A/V inputs, including display/window capture, video/image files, streams, capture devices, and more. It supports transitions, A/V filters and effects, and can stream to <strong>any RTMP destination</strong> (Facebook, YouTube, USTREAM, Twitch, etc.). The interface can be a bit overwhelming to beginners, but it does the job very well, and after all you could always <a href="https://github.com/jp9000/obs-studio/wiki/OBS-Studio-Overview">RTFM</a> - it will get you started. Overall, highly recommended.</p>

<h6 id="postproductioneditingsoftware">post production / editing software</h6>

<p>Apple's Final Cut, Adobe Premiere, whatever works for you and your bank account :) <br>
However, Blackmagic is a champ for providing a free version of its very solid non-linear editor: <a href="https://www.blackmagicdesign.com/products/davinciresolve">DaVinci Resolve</a>. <br>
A fantastic piece of software that will most likely serve all your amateur editing needs, available at the price of free! <br>
Thanks Blackmagic!</p>

<h6 id="laptop">laptop</h6>

<p>It's worth mentioning that your laptop requirements will vary depending on what kind of capture device you use. <br>
A MacBook Air can barely handle real-time encoding when paired with the Blackmagic UltraStudio Mini Recorder. Depending on streaming resolution (720p, 1080p), I've experienced repeated frame drops as my Air's CPU struggled to keep up. <br>
In contrast, it was never an issue with a MacBook Pro, even without a dedicated graphics card. <br>
Using Magewell or similar devices, which offload encoding from your laptop, allows you to use almost anything that can run your capture software. </p>]]></content:encoded></item><item><title><![CDATA[Logging Docker container output to individual files via Syslog]]></title><description><![CDATA[<p>I'm not big on Rsyslog syntax, it's a bit overwhelming at first, though documentation is plentiful.. <br>
Here is how you can log Docker container output to individual files per container (using tags) via Syslog..</p>

<p>Add a template and a route to either main or individual rsyslog config (in my case</p>]]></description><link>https://blog.random.io/docker-to-syslog/</link><guid isPermaLink="false">4bbd6827-9dc4-4cd2-a746-d73d0c7c1cb0</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Wed, 28 Oct 2015 16:35:09 GMT</pubDate><content:encoded><![CDATA[<p>I'm not big on Rsyslog syntax, it's a bit overwhelming at first, though documentation is plentiful.. <br>
Here is how you can log Docker container output to individual files per container (using tags) via Syslog..</p>

<p>Add a template and a routing rule to either the main or an individual rsyslog config (in my case /etc/rsyslog.d/10-docker.conf):  </p>

<pre><code># /etc/rsyslog.d/10-docker.conf
# Log Docker container logs to file per tag
$template DockerLogs, "/var/log/docker_%syslogtag:R,ERE,1,ZERO:.*docker/([^\[]+)--end%.log"
if $programname == 'docker' then -?DockerLogs  
&amp; stop
</code></pre>

<p>Restart <strong>rsyslog</strong> and test your container output:  </p>

<pre><code>root@test:~# service rsyslog restart  
root@test:~# docker run -it --rm --log-driver syslog --log-opt syslog-tag=java8u66b17-test1 anapsix/alpine-java java -version  
</code></pre>

<p>Observe the newly created log file (give it a few seconds to get written to disk):  </p>

<pre><code>root@test:~# cat /var/log/docker_java8u66b17-test1.log  
Oct 28 15:50:47 test docker/java8u66b17-test1[2492]: java version "1.8.0_66"#015  
Oct 28 15:50:47 test docker/java8u66b17-test1[2492]: Java(TM) SE Runtime Environment (build 1.8.0_66-b17)#015  
Oct 28 15:50:47 test docker/java8u66b17-test1[2492]: Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)#015  
</code></pre>

<blockquote>
  <p>As of Docker 1.9, the per-driver <em>--log-opt [syslog-tag/fluentd-tag/gelf-tag]</em> options will be replaced with a simple <strong>--log-opt tag=blah</strong>. Using templates such as <em>{{.Name}}</em>, <em>{{.ID}}</em>, <em>{{.ImageName}}</em>, and similar will be possible as well. <br>
  For details see <a href="https://github.com/docker/docker/pull/15384">GitHub #15384</a></p>
</blockquote>
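<p>Based on that PR, the unified syntax should look roughly like this (a sketch, untested against a released Docker version at the time of writing; the image and tag template are just examples):</p>

```shell
# Docker 1.9+ sketch: single unified "tag" log option, with Go-template
# placeholders such as {{.Name}}, {{.ID}}, and {{.ImageName}}
docker run -it --rm \
  --log-driver syslog \
  --log-opt tag="{{.ImageName}}/{{.Name}}" \
  anapsix/alpine-java java -version
```

<p>Since the rsyslog template above keys on the syslog tag, you'd adjust its regex to match whatever tag format you end up using.</p>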

<h6 id="reference">Reference</h6>

<p>Docker Logging documentation: <a href="https://docs.docker.com/reference/logging/overview/">https://docs.docker.com/reference/logging/overview/</a> <br>
Rsyslog <em>templates</em> docs: <a href="http://www.rsyslog.com/doc/v8-stable/configuration/templates.html">http://www.rsyslog.com/doc/v8-stable/configuration/templates.html</a> <br>
Rsyslog <em>property replacer</em> docs: <a href="http://www.rsyslog.com/doc/v8-stable/configuration/property_replacer.html">http://www.rsyslog.com/doc/v8-stable/configuration/property_replacer.html</a></p>]]></content:encoded></item><item><title><![CDATA[remaining tickets count from Eventbrite, without API]]></title><description><![CDATA[<p>I've been helping out with an event, where for main event site we used shared GitHub repo with <a href="http://getgrav.org/">GRAV CMS</a>, which is fantastic by the way. On the server where site is hosted, a script watches for updates to <em>master</em> and pulls latest, when available. It's pretty convenient and allows</p>]]></description><link>https://blog.random.io/remaining-tickets-count-from-eventbrite-without-api/</link><guid isPermaLink="false">66c68b78-b3f4-4b6f-a3d8-d8b8c7ab5da3</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Wed, 07 Oct 2015 17:29:24 GMT</pubDate><content:encoded><![CDATA[<p>I've been helping out with an event, where for the main event site we used a shared GitHub repo with <a href="http://getgrav.org/">GRAV CMS</a>, which is fantastic, by the way. On the server where the site is hosted, a script watches for updates to <em>master</em> and pulls the latest when available. It's pretty convenient, and it lets changes submitted to <em>master</em> go live after a short wait, without having to do a simple but distracting release.</p>

<p>Recently, I've noticed that one of the event organizers manually updates the ticket count on the site every so often.. sort of a drag. Since our event signup is hosted on Eventbrite, I looked into the API method of accessing the ticket counts, and while it's possible, it didn't seem like there was a way to use an anonymous OAuth token to get them, and I'd rather not expose a private secret by adding it to a JavaScript request in the site code.</p>

<p>Since Eventbrite does not send free-for-all CORS headers, but instead sends <code>x-content-type-options:nosniff</code> with the response (defeating the <em>jsonp</em> trick), I could not do a straightforward jQuery <code>.get/.ajax</code>. Luckily, there are a few free proxies available, and it wouldn't be too difficult to roll my own. I ended up using <a href="http://crossorigin.me">crossorigin.me</a> as the easiest to work with.</p>
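<p>For illustration, the same idea can be sketched from the command line: fetch the event page through the proxy, then pull the count out of the markup. The attribute name below is a made-up stand-in (Eventbrite's actual markup will differ), and the working jQuery version lives in the CodePen embedded below.</p>

```shell
# Sketch: extract a remaining-ticket count from event page HTML.
# In practice you'd first fetch the page through the CORS proxy, e.g.:
#   curl -s "https://crossorigin.me/<event page URL>" > event.html
# "data-quantity-remaining" is a hypothetical attribute, standing in
# for whatever markup actually carries the count on your event page.
html='<div class="ticket-box" data-quantity-remaining="42"></div>'
count=$(printf '%s' "$html" | sed -n 's/.*data-quantity-remaining="\([0-9]*\)".*/\1/p')
echo "tickets remaining: $count"
```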

<p>Simple enough and no need to update ticket count manually anymore..  </p>

<iframe height="725" scrolling="no" src="//codepen.io/anapsix/embed/epWwyr/?height=725&theme-id=0&default-tab=js" frameborder="no" allowtransparency="true" allowfullscreen="true" style="width: 100%;"></iframe>]]></content:encoded></item><item><title><![CDATA[[Schrodinger's] cat in a box]]></title><description><![CDATA[<p>Is it dead or alive? - you won't know until you open it..</p>

<p><a href="https://github.com/anapsix/catinabox"><img src="http://random.io/content/images/2015/fork.png" alt="GitHub" title=""></a>
<a href="https://imagelayers.io/?images=anapsix/catinabox:latest"><img src="https://badge.imagelayers.io/anapsix/catinabox:latest.svg" alt="" title=""></a></p>

<hr>

<p><code>docker run --rm anapsix/catinabox</code>
<img src="http://random.io/content/images/2015/catinabox_dead.png" alt=""></p>

<p>again..</p>

<p><code>docker run --rm anapsix/catinabox</code>
<img src="http://random.io/content/images/2015/catinabox_alive.png" alt=""></p>

<p>#docker #fun</p>]]></description><link>https://blog.random.io/catinabox/</link><guid isPermaLink="false">52767468-22a6-47da-aadd-0226322dd2b1</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Tue, 29 Sep 2015 14:13:41 GMT</pubDate><media:content url="http://blog.random.io/content/images/2015/09/catinabox_dead.png" medium="image"/><content:encoded><![CDATA[<img src="http://blog.random.io/content/images/2015/09/catinabox_dead.png" alt="[Schrodinger's] cat in a box"><p>Is it dead or alive? - you won't know until you open it..</p>

<p><a href="https://github.com/anapsix/catinabox"><img src="http://random.io/content/images/2015/fork.png" alt="[Schrodinger's] cat in a box" title=""></a>
<a href="https://imagelayers.io/?images=anapsix/catinabox:latest"><img src="https://badge.imagelayers.io/anapsix/catinabox:latest.svg" alt="[Schrodinger's] cat in a box" title=""></a></p>

<hr>

<p><code>docker run --rm anapsix/catinabox</code>
<img src="http://random.io/content/images/2015/catinabox_dead.png" alt="[Schrodinger's] cat in a box"></p>

<p>again..</p>

<p><code>docker run --rm anapsix/catinabox</code>
<img src="http://random.io/content/images/2015/catinabox_alive.png" alt="[Schrodinger's] cat in a box"></p>

<p>#docker #fun</p>]]></content:encoded></item><item><title><![CDATA[#offrhyme #tinderPoerty]]></title><description><![CDATA[<pre><code>shared economy  
swiping decisions  
delinquent emotions  
lackluster responses  
original content  
by mutual consent  
"hello" as a service
"how are you" as platform
</code></pre>]]></description><link>https://blog.random.io/offrhyme2/</link><guid isPermaLink="false">308ef364-a736-4382-bc69-36089054e644</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Thu, 10 Sep 2015 19:01:03 GMT</pubDate><content:encoded><![CDATA[<pre><code>shared economy  
swiping decisions  
delinquent emotions  
lackluster responses  
original content  
by mutual consent  
"hello" as a service
"how are you" as platform
</code></pre>]]></content:encoded></item><item><title><![CDATA[#offrhyme]]></title><description><![CDATA[<pre><code>"an egg in a chicken"  =&gt; "a chick in an egg",
"a life in a nutshell" =&gt; [
  "is phantom effect",
  "302/redirect",
  "is cruelest verdict",
  "is auto-correct"
]
</code></pre>]]></description><link>https://blog.random.io/offrhyme1/</link><guid isPermaLink="false">a8a786f5-e3cf-4d70-b5b9-2cc6d127e92f</guid><dc:creator><![CDATA[Anastas Dancha]]></dc:creator><pubDate>Thu, 10 Sep 2015 18:40:24 GMT</pubDate><content:encoded><![CDATA[<pre><code>"an egg in a chicken"  =&gt; "a chick in an egg",
"a life in a nutshell" =&gt; [
  "is phantom effect",
  "302/redirect",
  "is cruelest verdict",
  "is auto-correct"
]
</code></pre>]]></content:encoded></item></channel></rss>