<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Stupid SmartOS Tricks]]></title><description><![CDATA[A silly little blog about grand technological triumphs using little more than stupid SmartOS tricks.]]></description><link>https://blog.brianewell.com/</link><image><url>https://blog.brianewell.com/favicon.png</url><title>Stupid SmartOS Tricks</title><link>https://blog.brianewell.com/</link></image><generator>Ghost 5.51</generator><lastBuildDate>Tue, 14 Apr 2026 11:24:53 GMT</lastBuildDate><atom:link href="https://blog.brianewell.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[2022 New Language Resolutions]]></title><description><![CDATA[<p>One of the most valuable things a programmer can do with their time is to continue to learn new things.</p><p>Not only does this help to maintain the mental flexibility that is critical for tackling problems in ever-changing environments, but it simply makes life more fun.</p><p>Half a lifetime ago,</p>]]></description><link>https://blog.brianewell.com/2022-new-language-resolutions/</link><guid isPermaLink="false">61f5e2ce63fc3b6bb0fafc52</guid><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 07 Jan 2022 10:01:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1498931299472-f7a63a5a1cfa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fG5ldyUyMHllYXJzfGVufDB8fHx8MTY0MzUwNDM0Mg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1498931299472-f7a63a5a1cfa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fG5ldyUyMHllYXJzfGVufDB8fHx8MTY0MzUwNDM0Mg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="2022 
New Language Resolutions"><p>One of the most valuable things a programmer can do with their time is to continue to learn new things.</p><p>Not only does this help to maintain the mental flexibility that is critical for tackling problems in ever-changing environments, but it simply makes life more fun.</p><p>Half a lifetime ago, I would pick up programming languages at the pace of one every few months. But ever since learning Ruby in the mid 2000s, I just haven&apos;t felt the need to anymore. It&apos;s almost as if every language I&apos;ve tried to pick up since has been a disappointment in one way or another: Ruby was the hammer to all of life&apos;s nails.</p><p>Ruby is certainly not without its faults. Without turning this into a litany of accusations against a language that I dearly appreciate, I <em>do</em> miss my earlier days of learning languages just for the sake of learning languages.</p><p>So that&apos;s what I&apos;m going to do this year: My New Year&apos;s Resolution for 2022 is to master four programming languages that I&apos;ve at most written a &quot;Hello World&quot; in before this year.</p><h3 id="definition-of-mastery">Definition of Mastery</h3><p>When I say master, I don&apos;t mean achieving a level of fluency equivalent to a language I have nearly two decades of experience using. That would be ridiculous.</p><p>Instead I mean intrinsically understanding and applying the fundamentals of what makes that language stand out, both at an intellectual and reflexive level. And the best way to really showcase that would be to write production software in that language.</p><p>So that&apos;s what I&apos;m going to do by the end of the year: Four different capstone projects written in four different languages. 
While I&apos;d like to ensure that they&apos;re FOSS and available on my GitHub, that may or may not be possible, depending on what I end up working on.</p><p>Here&apos;s what I chose to explore:</p><h2 id="elixir">Elixir</h2><p>A dynamic, <em>functional</em> language, Elixir leverages the BEAM/OTP run-time environment (also utilized by Erlang) to provide a robust low-latency, vertically and horizontally distributed, fault-tolerant platform upon which to run your code.</p><p>I have received multiple recommendations to check out Elixir, along with the <a href="https://hexdocs.pm/ecto/Ecto.html?ref=blog.brianewell.com">Ecto</a> and <a href="https://hexdocs.pm/phoenix/overview.html?ref=blog.brianewell.com">Phoenix</a> frameworks, and after finally looking into it, I have to admit I&apos;m quite excited to start learning this. The biggest weakness I perceive in Rails is its lack of scalability, which appears to be a complete non-issue with Phoenix.</p><p>As far as a specific capstone goes, I will likely utilize Phoenix to write a web-based something or other, basically one of the following:</p><ul><li>A private internet registry (think ICANN) management system to allow for small groups of users to manage and organize address allocations for private Internets (<a href="https://datatracker.ietf.org/doc/html/rfc1918?ref=blog.brianewell.com">IETF RFC1918</a>).</li><li>The management system for one of the handful of SaaS/PaaS startups I&apos;m involved with.</li></ul><h2 id="dart">Dart</h2><p>Optimized as a client-development oriented language, Dart is a class-based object-oriented garbage-collected language with a C style syntax that compiles to either native or JavaScript, allowing for a unified client development pathway to be applied to multiple platforms (Web, Mobile, Desktop).</p><p>As with Elixir, my interest in Dart is primarily focused on a single framework.</p><p><a href="https://flutter.dev/?ref=blog.brianewell.com">Flutter</a>.</p><p>I&apos;ve never really 
been that excited for single-page web-applications until now, and while I would like to learn about how Phoenix handles templating and HTML, I would also like to avoid that altogether and just pass structured data to a much more solidly implemented front-end.</p><p>As far as projects are concerned, any web-backend I write with Elixir/Phoenix will likely be accompanied by a front-end written in Dart/Flutter.</p><h2 id="erlang">Erlang</h2><p>The original language that ran on BEAM/OTP, Erlang compiles to effectively the same bytecode as Elixir, and the two can call into each other as well, making learning Erlang an excellent choice after learning Elixir and vice versa.</p><p>While I could easily fold my Erlang capstone into the same Phoenix based capstone I proposed for learning Elixir, I&apos;d much rather push myself to do something a bit <em>grander</em> with it. No idea what it&apos;ll be specifically, but most likely something that plays to Erlang&apos;s strengths: something that can take advantage of low latency, vertical and horizontal scalability and distribution, along with robust fault-tolerance:</p><ul><li>Reimplementation of some existing server implementation that doesn&apos;t currently scale well.</li></ul><p>That&apos;s all I&apos;m going to say for now, as I still have plenty of time to figure this out.</p><h2 id="rust">Rust</h2><p>A modern systems-level language that is quite possibly the successor to C, Rust is a high performance multi-paradigm language designed for safety, specifically around concurrent memory access. 
Rust&apos;s balance of high- and low-level features makes it very well suited for a lot of different use cases, and it has already seen adoption in the desktop application, CLI application, system application and operating system spaces.</p><p>Beyond functioning quite capably on its own, Rust can also be used to implement Erlang Native Implemented Functions (NIFs) thanks to the <a href="https://hexdocs.pm/rustler/?ref=blog.brianewell.com">Rustler Mix package</a>, making it perfectly suitable for writing procedures that will be used in the Erlang OTP.</p><p>As with Erlang, I&apos;m not sure exactly what my specific capstone project will be with Rust, though due to the NIF compatibility, I will likely attempt to combine projects for both of these languages.</p><h2 id="honorable-mentions">Honorable Mentions</h2><p>There are a few other languages that I would absolutely love to pick up this year, but don&apos;t want to risk potentially crowding capstone project time by committing to learning them at this time:</p><ul><li><a href="https://go.dev/?ref=blog.brianewell.com">Go</a>. A statically typed compiled garbage-collected general-purpose language, inspired by C, similar to Rust in some ways and different in others, that&apos;s well suited to large concurrent systems.</li><li><a href="https://crystal-lang.org/?ref=blog.brianewell.com">Crystal</a>. A statically typed compiled garbage-collected general-purpose object-oriented language syntactically inspired by Ruby and with concurrency inspirations from Go.</li></ul>]]></content:encoded></item><item><title><![CDATA[Package Caching on SmartOS]]></title><description><![CDATA[<p>If you use package management software commonly across two or more systems, network package caching may be of benefit to you. 
&#xA0;In this article, we will set up a package caching solution to service multiple package management systems simultaneously.</p><h2 id="methodology">Methodology</h2><p>While caching packages locally in the file system is already</p>]]></description><link>https://blog.brianewell.com/package-caching-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc921390a</guid><category><![CDATA[SmartOS]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Networking]]></category><category><![CDATA[Nginx]]></category><category><![CDATA[ZFS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 31 Dec 2021 00:40:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1495741545814-2d7f4d75ea09?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDd8fGxpYnJhcnl8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1495741545814-2d7f4d75ea09?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDd8fGxpYnJhcnl8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Package Caching on SmartOS"><p>If you use package management software commonly across two or more systems, network package caching may be of benefit to you. 
&#xA0;In this article, we will set up a package caching solution to service multiple package management systems simultaneously.</p><h2 id="methodology">Methodology</h2><p>While caching packages locally in the file system is already common practice with most package management systems, it&apos;s difficult to share this local cache with other nearby systems without possible security implications or prohibitive complexity.</p><p>Instead, redirecting requests to a local-network intermediary that provides the same interface as the original repository while also managing its own cache is the preferred solution: Most package management systems use HTTP or HTTPS and refer to their repositories via domain name or URL, and many can be reconfigured to access their repositories through alternate means. In cases where this is not trivial, split-horizon DNS records can be used instead.</p><p>This technique is used extensively by projects such as <a href="https://lancache.net/?ref=blog.brianewell.com">Lancache</a> for game management systems but can also easily be applied to other software package management systems such as Pkgsrc, APT, Yum, etc.</p><h2 id="architecture">Architecture</h2><p>There will be two independent services involved in implementing this solution: a DNS resolver and a caching HTTP server.</p><p>While I will be including notes on the hostnames that need to be configured, the specifics of implementing DNS will be outside the scope of this article. 
I&apos;m assuming that you have access to your own DNS resolver such as <code>dnsmasq</code> or <code>powerdns-recursor</code> and know how to configure it for split-horizon DNS.</p><p>While I used to prefer redirecting the original domain names, I now change the TLD to <code>cache.ewellnet</code> instead, for example:</p><ul><li><code>pkgsrc.joyent.com</code> becomes <code>pkgsrc.joyent.cache.ewellnet</code>.</li><li><code>archive.ubuntu.com</code> becomes <code>archive.ubuntu.cache.ewellnet</code>.</li></ul><p>While this does require the reconfiguration of each host using the cache, overall it&apos;s a cleaner approach and allows hosts to easily bypass the cache if need be. Additionally, my <code>search</code> directive under <code>/etc/resolv.conf</code> is set to <code>ewellnet</code>, so I can use the relatively shorter <code>pkgsrc.joyent.cache</code> or <code>archive.ubuntu.cache</code> instead.</p><p>For our caching HTTP server, we will be using Nginx within a single zone. It&apos;s incredibly high-performance, easy to configure, and does its job very efficiently. This should be perfectly performant for most situations, but may be unsuitable in extreme circumstances. This can be remedied by utilizing a cache cluster, which is well outside the scope of today&apos;s article.</p><p>I will be using a common cache-optimized configuration with additional configuration files for each supported package management system. Additionally, for this article, I will set up a common package cache for all management systems, but the reader can just as easily partition their caches for sets of management systems.</p><p>It is also best that the cache zone be dual-stack. 
This benefits any IPv6-only hosts on your network that may otherwise not have access to some repositories.</p><h2 id="zone-manifest">Zone Manifest</h2><p>We want to ensure that the zone containing the cache will have enough processing power and memory to not be starved, and enough storage space to handle all of the packages you would like to cache.</p><p>We will be using the following for our example:</p><pre><code>{
  &quot;image_uuid&quot;: &quot;1d05e788-5409-11eb-b12f-037bd7fee4ee&quot;,
  &quot;brand&quot;: &quot;joyent&quot;,
  &quot;alias&quot;: &quot;cache&quot;,
  &quot;hostname&quot;: &quot;cache&quot;,
  &quot;dns_domain&quot;: &quot;ewellnet&quot;,
  &quot;cpu_cap&quot;: 200,
  &quot;max_physical_memory&quot;: 256,
  &quot;quota&quot;: 1024,
  &quot;delegate_dataset&quot;: true,
  &quot;resolvers&quot;: [ &quot;172.22.1.97&quot; ],
  &quot;nics&quot;: [{
    &quot;nic_tag&quot;: &quot;external0&quot;,
    &quot;ips&quot;: [ &quot;172.22.1.98/27&quot;, &quot;addrconf&quot;, &quot;2001:470::98/64&quot; ],
    &quot;gateways&quot;: [ &quot;172.22.1.97&quot; ],
    &quot;primary&quot;: true
  }]
}
</code></pre><p>Create it and login.</p><pre><code># vmadm create -f cache.json
Successfully created VM 49ee67da-9e3c-c20a-d925-aa2aa284f95d
# zlogin 49ee67da-9e3c-c20a-d925-aa2aa284f95d</code></pre><p>After doing any zone cleanup that you prefer, ensure that a dataset exists to handle your cache. I will often set a quota to enforce a hard upper size limit slightly beyond the limit that will be set later in Nginx.</p><pre><code># zfs create -o quota=982G -o mountpoint=/var/www/cache zones/&lt;UUID&gt;/data/cache
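# Optionally confirm the dataset settings (same example names as above):
# zfs get quota,mountpoint zones/&lt;UUID&gt;/data/cache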
</code></pre><h2 id="installing-configuring-nginx">Installing &amp; Configuring Nginx</h2><p>Next, we&apos;re going to install Nginx.</p><pre><code># pkgin -y install nginx</code></pre><p>Clear out all of the Nginx configuration files; we&apos;ll be doing something very specific.</p><pre><code># rm -rv /opt/local/etc/nginx/*</code></pre><p>Instead of the default <code>nginx.conf</code>, we&apos;ll be using this one, optimized for caching:</p><p><strong>/opt/local/etc/nginx/nginx.conf</strong>:</p><pre><code>user www www;
worker_processes 2;

events { worker_connections 1024; multi_accept on; }

http {
  default_type  application/octet-stream;

  allow 172.22.1.0/24;
  allow 2001:470::/48;
  deny  all;

  server_tokens off;
  tcp_nopush  on;
  sendfile  on;
  gzip    on;

  proxy_buffering          on;
  proxy_buffers            32 8k;
  proxy_cache              default_cache;
  proxy_cache_lock         on;
  proxy_cache_lock_age     5m;
  proxy_cache_lock_timeout 30s;
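  # proxy_cache_path notes: levels=2:2 shards cached files across two directory
  # levels; keys_zone reserves shared memory for cache keys (roughly 8k keys per
  # megabyte, per the nginx docs); max_size=980G stays just under the 982G ZFS
  # quota set earlier; inactive=4y evicts entries unrequested for four years.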
  proxy_cache_path         /var/www/cache levels=2:2 use_temp_path=on keys_zone=default_cache:128m inactive=4y max_size=980G;
  proxy_cache_revalidate   on;
  proxy_cache_use_stale    error timeout invalid_header updating http_500 http_502 http_503 http_504 http_403 http_404 http_429;
  proxy_cache_valid        1d;
  proxy_cache_valid        any 1m;
  proxy_http_version       1.1;
  proxy_ignore_headers     X-Accel-Redirect X-Accel-Expires X-Accel-Limit-Rate X-Accel-Buffering X-Accel-Charset Expires Cache-Control Vary;
  proxy_temp_path          /var/www/cache/tmp 1;

  proxy_ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
  proxy_ssl_protocols TLSv1.2;
  proxy_ssl_server_name on;
  proxy_ssl_session_reuse on;
  proxy_ssl_verify_depth  4;

  log_format cache &apos;$remote_addr - $remote_user [$time_local] $status $upstream_cache_status $server_name $request_time [$connection:$connection_requests] &quot;$request_method $scheme://$http_host$request_uri $server_protocol&quot; $body_bytes_sent &quot;$http_referer&quot; &quot;$http_user_agent&quot;&apos;;
  access_log  /var/log/nginx/cache.log cache;

  server {
    listen [::] default_server ipv6only=off;
    server_name default;
    return 421;
  }

  include cache/*.cache;
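
  # Optional (illustrative names, not part of the original config): to partition
  # caches per package system rather than sharing default_cache, declare
  # additional zones here, e.g.
  #   proxy_cache_path /var/www/cache/apt levels=2:2 use_temp_path=on keys_zone=apt_cache:64m inactive=4y max_size=200G;
  # and set proxy_cache apt_cache; inside the relevant server blocks.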
}</code></pre><p>A quick breakdown of configuration parameters that you will want to tweak:</p><ul><li><code>worker_processes</code> limits Nginx to spawning that number of worker processes; this is important on compute nodes with high core counts, and it should match your <code>cpu_cap/100</code>. In my case, with a <code>cpu_cap</code> of 200, this value should be <code>2</code>.</li><li>The <code>allow</code> and <code>deny</code> directives will allow you to whitelist or blacklist IP prefixes. Use <code>allow</code> to whitelist your local network prefixes and then <code>deny all</code> to prevent all others from accessing your cache. In my case, my local network prefixes are <code>172.22.1.0/24</code> and <code>2001:470::/48</code>.</li></ul><p>At this point you can enable Nginx to confirm it&apos;s happy with its current configuration before adding specific management systems to the cache from the below sections:</p><pre><code># svcadm enable nginx</code></pre><p>If you&apos;d like to take a moment here, I recommend reading the <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html?ref=blog.brianewell.com">Nginx documentation on proxy directives</a> for a more complete understanding of everything specified here.</p><p>Also, if you want to make full use of all of the below sub-sections of this article, I recommend recompiling Nginx from a build host with <code>sub</code> and <code>cache-purge</code> enabled.</p><h2 id="package-caching">Package Caching</h2><p>This section will start by demonstrating how to configure <code>pkgsrc</code> package caching and then illustrate the performance differences of caching vs not caching. 
It will then list package caching configurations for additional operating systems and environments.</p><p>Each package management description will include any local changes that need to be made to the system being configured for caching, and the DNS records that need to be configured, along with the relevant Nginx configuration files. Please replace any references to <code>ewellnet</code> with your site&apos;s local domain name, or just omit that domain name entirely, your choice.</p><p>I also recommend keeping a terminal open to tail <code>/var/log/nginx/cache.log</code> while configuring this, to monitor for cache misses, hits and revalidations.</p><h3 id="pkgsrc-smartos-">Pkgsrc (SmartOS)</h3><p>The <code>pkgsrc</code> binary package manager is responsible for managing software packages in SmartOS zones. Like most distribution software package managers, it downloads, validates and installs in that order.</p><p>Let&apos;s get an idea of what sort of performance we can expect out of non-cached pkgsrc. Testing <code>pkgin update</code> in a freshly provisioned zone gives us the following results:</p><pre><code># time pkgin update
reading local summary...
processing local summary...
processing remote summary (https://pkgsrc.joyent.com/packages/SmartOS/2020Q4/x86_64/All)...
pkg_summary.xz                                   100% 2374KB 791.4KB/s   00:03

real    0m6.941s
user    0m3.294s
sys     0m0.318s</code></pre><p>That&apos;s not bad. Let&apos;s see how long it&apos;ll take to download a full upgrade:</p><pre><code># pkgin clean
# time pkgin -dy upgrade
calculating dependencies...done.

17 packages to download:
  wget-1.20.3nb10 sudo-1.9.6p1 rsyslog-8.38.0nb10 python38-3.8.6nb1
  postfix-3.5.10 pkgin-20.12.1 pkg_install-20201218 openssl-1.1.1l
  libssh2-1.9.0nb1 libarchive-3.4.3 mozilla-rootcerts-1.0.20201102
  openldap-client-2.4.56 http-parser-2.9.4 npm-6.14.11 nodejs-14.16.1
  nghttp2-1.42.0nb1 curl-7.75.0
74M to download

wget-1.20.3nb10.tgz                     100% 1244KB   1.2MB/s   00:00
sudo-1.9.6p1.tgz                        100% 1866KB   1.8MB/s   00:01
rsyslog-8.38.0nb10.tgz                  100% 1165KB   1.1MB/s   00:01
python38-3.8.6nb1.tgz                   100%   27MB   3.9MB/s   00:07
postfix-3.5.10.tgz                      100% 2175KB   1.1MB/s   00:02
pkgin-20.12.1.tgz                       100%   98KB  98.4KB/s   00:01
pkg_install-20201218.tgz                100% 9201KB   3.0MB/s   00:03
openssl-1.1.1l.tgz                      100% 6488KB   2.1MB/s   00:03
libssh2-1.9.0nb1.tgz                    100%  389KB 389.0KB/s   00:01
libarchive-3.4.3.tgz                    100%  979KB 979.3KB/s   00:01
mozilla-rootcerts-1.0.20201102.tgz      100%  573KB 573.1KB/s   00:00
openldap-client-2.4.56.tgz              100% 1438KB   1.4MB/s   00:00
http-parser-2.9.4.tgz                   100%   48KB  47.8KB/s   00:00
npm-6.14.11.tgz                         100% 5352KB   2.6MB/s   00:02
nodejs-14.16.1.tgz                      100%   14MB   3.6MB/s   00:04
nghttp2-1.42.0nb1.tgz                   100%  296KB 295.9KB/s   00:01
curl-7.75.0.tgz                         100% 1664KB   1.6MB/s   00:01

real    0m40.083s
user    0m0.630s
sys     0m0.306s</code></pre><p>40 seconds. Not horrible, but surely we can improve upon this.</p><h4 id="dns-records">DNS Records</h4><p>Ensure that the following DNS records are configured:</p><ul><li><code>pkgsrc.joyent.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/joyent.cache</strong>:</p><pre><code>upstream pkgsrc.joyent.com {
  server pkgsrc.joyent.com:443;
  keepalive 2;
}

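# Split-horizon DNS sketch (dnsmasq syntax; hostname and addresses are the
# example values used throughout this article) to point clients at the cache:
#   address=/pkgsrc.joyent.cache.ewellnet/172.22.1.98
#   address=/pkgsrc.joyent.cache.ewellnet/2001:470::98
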
server {
  listen [::];
  server_name pkgsrc.joyent.cache pkgsrc.joyent.cache.ewellnet;

  location ~ pkg_summary\.(bz2|gz|xz)$ {
    proxy_pass https://pkgsrc.joyent.com;
    proxy_cache_valid any 1h;
  }

  location / {
    proxy_pass https://pkgsrc.joyent.com;
  }
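
  # Optional: expose the cache result (HIT, MISS, REVALIDATED) to clients for
  # debugging via the standard $upstream_cache_status variable, e.g.
  # add_header X-Cache-Status $upstream_cache_status always;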
}</code></pre><p>This configuration ensures that requests for normal packages will only be revalidated after the default duration has passed, but requests for the package summary will be revalidated every hour.</p><p>Refresh Nginx to enable <code>pkgsrc</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration">Client Configuration</h4><p>Pkgsrc clients need to be configured to make use of the local cache. This can be done by altering the repository URL within the configuration of each client:</p><p><strong>/opt/local/etc/pkgin/repositories.conf</strong>:</p><pre><code>...
http://pkgsrc.joyent.cache/packages/SmartOS/2020Q4/x86_64/All</code></pre><p>Run <code>pkgin update</code> to confirm that it&apos;s able to acquire the repository package summary:</p><pre><code># time pkgin update
cleaning database from https://pkgsrc.joyent.com/packages/SmartOS/2020Q4/x86_64/All entries...
reading local summary...
processing local summary...
processing remote summary (http://pkgsrc.joyent.cache/packages/SmartOS/2020Q4/x86_64/All)...
pkg_summary.xz                          100% 2374KB 593.6KB/s   00:04

real    0m7.952s
user    0m4.332s
sys     0m0.581s</code></pre><p>Note that the time is slightly longer; this is due to clearing out the previous database and to the fact that the summary was not yet cached. &#xA0;We can also now re-test downloading an upgrade from the cache:</p><pre><code># pkgin clean
# time pkgin -dy upgrade
calculating dependencies...done.

17 packages to download:
  wget-1.20.3nb10 sudo-1.9.6p1 rsyslog-8.38.0nb10 python38-3.8.6nb1
  postfix-3.5.10 pkgin-20.12.1 pkg_install-20201218 openssl-1.1.1l
  libssh2-1.9.0nb1 libarchive-3.4.3 mozilla-rootcerts-1.0.20201102
  openldap-client-2.4.56 http-parser-2.9.4 npm-6.14.11 nodejs-14.16.1
  nghttp2-1.42.0nb1 curl-7.75.0
74M to download

wget-1.20.3nb10.tgz                     100% 1244KB   1.2MB/s   00:01
sudo-1.9.6p1.tgz                        100% 1866KB 933.2KB/s   00:02
rsyslog-8.38.0nb10.tgz                  100% 1165KB   1.1MB/s   00:01
python38-3.8.6nb1.tgz                   100%   27MB   3.9MB/s   00:07
postfix-3.5.10.tgz                      100% 2175KB   2.1MB/s   00:01
pkgin-20.12.1.tgz                       100%   98KB  98.4KB/s   00:00
pkg_install-20201218.tgz                100% 9201KB   3.0MB/s   00:03
openssl-1.1.1l.tgz                      100% 6488KB   2.1MB/s   00:03
libssh2-1.9.0nb1.tgz                    100%  389KB 389.0KB/s   00:00
libarchive-3.4.3.tgz                    100%  979KB 979.3KB/s   00:01
mozilla-rootcerts-1.0.20201102.tgz      100%  573KB 573.1KB/s   00:01
openldap-client-2.4.56.tgz              100% 1438KB   1.4MB/s   00:01
http-parser-2.9.4.tgz                   100%   48KB  47.8KB/s   00:00
npm-6.14.11.tgz                         100% 5352KB   1.7MB/s   00:03
nodejs-14.16.1.tgz                      100%   14MB   4.7MB/s   00:03
nghttp2-1.42.0nb1.tgz                   100%  296KB 295.9KB/s   00:01
curl-7.75.0.tgz                         100% 1664KB   1.6MB/s   00:01

real    0m32.785s
user    0m0.553s
sys     0m0.370s</code></pre><p>Not too much quicker, but again, none of this data had been cached yet. Switch the source repository back, re-update, and switch back to the cache again to test its real performance:</p><pre><code>-- Switched back to upstream
# pkgin update
-- Switched back to cache
# time pkgin update
cleaning database from https://pkgsrc.joyent.com/packages/SmartOS/2020Q4/x86_64/All entries...
reading local summary...
processing local summary...
processing remote summary (http://pkgsrc.joyent.cache/packages/SmartOS/2020Q4/x86_64/All)...
pkg_summary.xz                          100% 2374KB 791.4KB/s   00:03

real    0m6.833s
user    0m3.952s
sys     0m0.555s</code></pre><p>Not a whole lot to be excited about with <code>pkgin update</code>. Let&apos;s check on upgrade:</p><pre><code># pkgin clean
# time pkgin -dy upgrade
calculating dependencies...done.

17 packages to download:
  wget-1.20.3nb10 sudo-1.9.6p1 rsyslog-8.38.0nb10 python38-3.8.6nb1
  postfix-3.5.10 pkgin-20.12.1 pkg_install-20201218 openssl-1.1.1l
  libssh2-1.9.0nb1 libarchive-3.4.3 mozilla-rootcerts-1.0.20201102
  openldap-client-2.4.56 http-parser-2.9.4 npm-6.14.11 nodejs-14.16.1
  nghttp2-1.42.0nb1 curl-7.75.0
74M to download

wget-1.20.3nb10.tgz                     100% 1244KB   1.2MB/s   00:00
sudo-1.9.6p1.tgz                        100% 1866KB   1.8MB/s   00:00
rsyslog-8.38.0nb10.tgz                  100% 1165KB   1.1MB/s   00:00
python38-3.8.6nb1.tgz                   100%   27MB  27.3MB/s   00:00
postfix-3.5.10.tgz                      100% 2175KB   2.1MB/s   00:00
pkgin-20.12.1.tgz                       100%   98KB  98.4KB/s   00:00
pkg_install-20201218.tgz                100% 9201KB   9.0MB/s   00:00
openssl-1.1.1l.tgz                      100% 6488KB   6.3MB/s   00:00
libssh2-1.9.0nb1.tgz                    100%  389KB 389.0KB/s   00:00
libarchive-3.4.3.tgz                    100%  979KB 979.3KB/s   00:00
mozilla-rootcerts-1.0.20201102.tgz      100%  573KB 573.1KB/s   00:00
openldap-client-2.4.56.tgz              100% 1438KB   1.4MB/s   00:00
http-parser-2.9.4.tgz                   100%   48KB  47.8KB/s   00:00
npm-6.14.11.tgz                         100% 5352KB   5.2MB/s   00:00
nodejs-14.16.1.tgz                      100%   14MB  14.2MB/s   00:01
nghttp2-1.42.0nb1.tgz                   100%  296KB 295.9KB/s   00:00
curl-7.75.0.tgz                         100% 1664KB   1.6MB/s   00:00

real    0m1.322s
user    0m0.524s
sys     0m0.342s</code></pre><p>That&apos;s more like it! The <code>HIT</code> lines in our logs (rather than <code>MISS</code> lines) confirm that the package downloads themselves were also served from the cache.</p><h3 id="apt-ubuntu-">Apt (Ubuntu)</h3><p>The <code>apt</code> package management system is responsible for managing software packages in Ubuntu based Linux distributions. Like most distribution software package managers, it downloads, validates and installs in that order.</p><h4 id="dns-records-1">DNS Records</h4><p>Ensure that the following DNS records are configured:</p><ul><li><code>apt.ubuntu.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-1">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/ubuntu.cache</strong>:</p><pre><code>upstream archive.ubuntu.com {
  server archive.ubuntu.com;
  keepalive 2;
}

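# Note (assumption worth verifying for your release): archive.ubuntu.com also
# publishes the -security pockets, which is why this single upstream can stand
# in for both archive.ubuntu.com and security.ubuntu.com.
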
server {
  listen [::];
  server_name apt.ubuntu.cache apt.ubuntu.cache.ewellnet;

  location ~ \.deb$ {
    proxy_pass http://archive.ubuntu.com;
  }

  location / {
    proxy_pass http://archive.ubuntu.com;
    proxy_cache_valid any 1h;
  }
}</code></pre><p>This configuration ensures that requests for normal packages will only be revalidated after the default duration has passed, but requests for the package summary will be revalidated every hour.</p><p>Refresh Nginx to enable <code>apt</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-1">Client Configuration</h4><p>Ubuntu clients need to be configured to make use of the local cache. This can be done by altering the repository URLs within the configuration of each client:</p><p><strong>/etc/apt/sources.list</strong>:</p><pre><code>...
deb http://apt.ubuntu.cache/ubuntu/ focal main restricted
deb http://apt.ubuntu.cache/ubuntu/ focal-updates main restricted
deb http://apt.ubuntu.cache/ubuntu/ focal universe
deb http://apt.ubuntu.cache/ubuntu/ focal-updates universe
deb http://apt.ubuntu.cache/ubuntu/ focal multiverse
deb http://apt.ubuntu.cache/ubuntu/ focal-updates multiverse
deb http://apt.ubuntu.cache/ubuntu/ focal-backports main restricted universe multiverse
deb http://apt.ubuntu.cache/ubuntu/ focal-security main restricted
deb http://apt.ubuntu.cache/ubuntu/ focal-security universe
deb http://apt.ubuntu.cache/ubuntu/ focal-security multiverse</code></pre><p>Basically, change every reference from <code>archive.ubuntu.com</code> or <code>security.ubuntu.com</code> to <code>apt.ubuntu.cache</code>. Run <code>apt update</code> to confirm that it&apos;s able to acquire the repository package summaries and enjoy!</p><h3 id="apt-debian-">Apt (Debian)</h3><p>Like Ubuntu, the <code>apt</code> package management system is responsible for managing software packages in Debian based Linux distributions. Like most distribution software package managers, it downloads, validates and installs in that order.</p><h4 id="dns-records-2">DNS Records</h4><p>Ensure that the following DNS records are configured:</p><ul><li><code>apt.debian.cache.ewellnet</code> points to your cache server.</li><li><code>security.debian.cache.ewellnet</code> points to your cache server. Note that unlike Ubuntu, Debian handles security updates separately.</li></ul><h4 id="cache-configuration-2">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/debian.cache</strong>:</p><pre><code>upstream cdn-fastly.deb.debian.org {
  server cdn-fastly.deb.debian.org;
  keepalive 2;
}

server {
  listen [::];
  server_name apt.debian.cache apt.debian.cache.ewellnet;

  location ~ \.deb$ {
    proxy_pass http://cdn-fastly.deb.debian.org;
  }

  location / {
    proxy_pass http://cdn-fastly.deb.debian.org;
    proxy_cache_valid any 1h;
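    # Optional aside (not in the original config): to observe whether requests
    # are being answered from the cache, nginx can expose its cache status:
    #   add_header X-Cache-Status $upstream_cache_status always;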
  }
}

upstream security.debian.org {
  server security.debian.org;
  keepalive 2;
}

server {
  listen [::];
  server_name security.debian.cache security.debian.cache.ewellnet;

  location ~ \.deb$ {
    proxy_pass http://security.debian.org;
  }

  location / {
    proxy_pass http://security.debian.org;
    proxy_cache_valid any 1h;
  }
}</code></pre><p>This configuration ensures that requests for normal packages will only be revalidated after the default duration has passed, but requests for the package summaries will be revalidated every hour.</p><p>Refresh Nginx to enable <code>apt</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-2">Client Configuration</h4><p>Debian clients need to be configured to make use of the local cache. This can be done by altering the repository URLs within the configuration of each client:</p><p><strong>/etc/apt/sources.list</strong>:</p><pre><code>deb http://apt.debian.cache/debian stretch main
deb-src http://apt.debian.cache/debian stretch main

deb http://apt.debian.cache/debian stretch-updates main
deb-src http://apt.debian.cache/debian stretch-updates main

deb http://security.debian.cache/ stretch/updates  main
deb-src http://security.debian.cache/ stretch/updates main</code></pre><p>I could keep going with examples of <code>apt</code> caching on Debian based distributions, but it&apos;s all pretty similar: configure Nginx to cache the upstream repository, then configure the client system to access Nginx through a local domain name. This works about the same with Kali, MX, Pop!OS, basically anything.</p><p>With that out of the way, let&apos;s explore some package managers from different distributions.</p><h3 id="dnf-centos-">DNF (CentOS)</h3><p>DNF is the updated version of YUM, the default package management system used by CentOS and derivative Linux distributions.</p><h4 id="dns-records-3">DNS Records</h4><p>Ensure that the following DNS records are configured:</p><ul><li><code>yum.centos.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-3">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/centos.cache</strong>:</p><pre><code>upstream mirror.centos.org {
  server mirror.centos.org;
  keepalive 2;
}

server {
  listen [::];
  server_name yum.centos.cache yum.centos.cache.ewellnet;

  location ~ \.rpm$ {
    proxy_pass http://mirror.centos.org;
  }

  location / {
    proxy_pass http://mirror.centos.org;
    proxy_cache_valid any 1h;
  }
}</code></pre><p>This configuration ensures that requests for normal packages will only be revalidated after the default duration has passed, but requests for the package summaries will be revalidated every hour.</p><p>Refresh Nginx to enable <code>yum</code> and <code>dnf</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-3">Client Configuration</h4><p>CentOS clients need to be configured to make use of the local cache. This can be done by altering the repository URLs within the configuration of each client. There are quite a few files involved, nearly everything in <code>/etc/yum.repos.d</code>:</p><p><strong>/etc/yum.repos.d/*</strong>:</p><pre><code>[appstream]
name=CentOS Linux $releasever - AppStream
baseurl=http://yum.centos.cache/$contentdir/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</code></pre><p>References to <code>mirrorlist</code> should be removed, instead preferring <code>baseurl</code>. Also, don&apos;t change these settings for <code>Debuginfo</code> or <code>Sources</code> repos, as we want those to bypass our cache.</p><p>Once you&apos;re done, run <code>dnf update</code> to confirm everything&apos;s working correctly.</p><h3 id="xbps-void-">XBPS (Void)</h3><p>Void Linux uses the XBPS package manager. As before, we&apos;re going to configure DNS, then the cache, and then each client.</p><h4 id="dns-records-4">DNS Records</h4><p>Ensure that the following DNS records are configured:</p><ul><li><code>xbps.void.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-4">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/void.cache</strong>:</p><pre><code>upstream alpha.de.repo.voidlinux.org {
  server alpha.de.repo.voidlinux.org:443;
  keepalive 2;
}

server {
  listen [::];
  server_name xbps.void.cache xbps.void.cache.ewellnet;

  location ~ repodata$ {
    proxy_pass https://alpha.de.repo.voidlinux.org;
    proxy_cache_valid any 1h;
  }

  location / {
    proxy_pass https://alpha.de.repo.voidlinux.org;
  }
}</code></pre><p>This configuration ensures that requests for normal packages will only be revalidated after the default duration has passed, but requests for the package summaries will be revalidated every hour.</p><p>Refresh Nginx to enable <code>xbps</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-4">Client Configuration</h4><p>Void Linux clients need to be configured to make use of the local cache. This can be done by setting a local repository URL within the configuration of each client:</p><p><strong>/usr/share/xbps.d/00-repository-main.conf</strong>:</p><pre><code>repository=http://xbps.void.cache/current</code></pre><p>Once this is set, use <code>xbps-install</code> to ensure that everything is working correctly:</p><pre><code># xbps-install -Su
[*] Updating repository `http://xbps.void.cache/current/x86_64-repodata&apos; ...</code></pre><h3 id="node-js-package-manager-npm-">Node.js Package Manager (npm)</h3><p>Other package managers can be configured to make use of a local package cache as well. For instance, the Node.js Package Manager utility (<code>npm</code>).</p><h4 id="dns-records-5">DNS Records</h4><p>Ensure that the following DNS record is configured:</p><ul><li><code>npm.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-5">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/npm.cache</strong>:</p><pre><code>upstream registry.npmjs.org {
  server registry.npmjs.org:443;
  keepalive 2;
}

server {
  listen [::];
  server_name npm.cache npm.cache.ewellnet;

  location / {
    proxy_pass https://registry.npmjs.org;
    proxy_cache_valid any 1h;
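    # Aside (assumption, not in the original config): when proxying to an HTTPS
    # upstream defined in an upstream{} block, TLS SNI sometimes has to be set
    # explicitly, e.g.:
    #   proxy_ssl_server_name on;
    #   proxy_ssl_name registry.npmjs.org;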
  }
}</code></pre><p>This approach is revalidation heavy, but unfortunately that&apos;s the best we&apos;re going to get out of npm, due to how their repository is structured and queried.</p><p>Refresh Nginx to enable <code>npm</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-5">Client Configuration</h4><p>Set the local registry using the <code>npm</code> command-line utility:</p><pre><code># npm set registry http://npm.cache/</code></pre><p>And now <code>npm</code> will install packages through your cache.</p><h3 id="python-package-index-pip-">Python Package Index (pip)</h3><p>The Python Package Index command-line utility (<code>pip</code>) can also be configured to use a cache. This is a bit more involved than the previous ones, as pip normally accesses two different base URIs: <a href="https://pypi.org/simple?ref=blog.brianewell.com">https://pypi.org/simple</a> for the repository index, which links directly to files normally hosted at <a href="https://files.pythonhosted.org/?ref=blog.brianewell.com">https://files.pythonhosted.org/</a>.</p><p>However, through clever use of the Nginx <code><a href="http://nginx.org/en/docs/http/ngx_http_sub_module.html?ref=blog.brianewell.com">ngx_http_sub_module</a></code>, we should be able to make this work through a single base URI. Note that this module is not currently compiled by default in the build of Nginx distributed via SmartOS <code>pkgsrc</code>, and will need to be custom built to enable this functionality for now.</p><h4 id="dns-records-6">DNS Records</h4><p>Ensure that the following DNS record is configured:</p><ul><li><code>pip.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-6">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/pip.cache</strong>:</p><pre><code>upstream pypi.org {
  server pypi.org:443;
  keepalive 2;
}

upstream files.pythonhosted.org {
  server files.pythonhosted.org:443;
  keepalive 2;
}

server {
  listen [::];
  server_name pip.cache pip.cache.ewellnet;

  location / {
    proxy_pass https://pypi.org;
    proxy_cache_valid 200 301 302 1h;

    sub_filter_once off;
    sub_filter &quot;https://files.pythonhosted.org&quot; &quot;http://pip.cache&quot;;
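    # Aside (assumption): sub_filter only rewrites uncompressed response bodies,
    # so it may also be necessary to ask the upstream not to compress:
    #   proxy_set_header Accept-Encoding &quot;&quot;;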
  }

  location /packages {
    proxy_pass https://files.pythonhosted.org;
  }
}</code></pre><p>Refresh Nginx to enable <code>pip</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-6">Client Configuration</h4><p>Configure clients to use the cache registry by using the <code>pip</code> command-line utility:</p><pre><code># pip config set global.index-url http://pip.cache/simple
Writing to /root/.config/pip/pip.conf
# pip config set global.trusted-host pip.cache
Writing to /root/.config/pip/pip.conf</code></pre><p>And now <code>pip</code> will install packages through your cache. You can also set this system-wide if you prefer, an exercise I leave up to you.</p><h3 id="rubygems-gem-">RubyGems (gem)</h3><p>The RubyGems package management command-line utility (<code>gem</code>) can also be configured to use a cache.</p><h4 id="dns-records-7">DNS Records</h4><p>Ensure that the following DNS record is configured:</p><ul><li><code>rubygems.cache.ewellnet</code> points to your cache server.</li></ul><h4 id="cache-configuration-7">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/rubygems.cache</strong>:</p><pre><code>upstream rubygems.org {
  server rubygems.org:443;
  keepalive 2;
}

server {
  listen [::];
  server_name rubygems.cache rubygems.cache.ewellnet;

  location ~ \.gem$ {
    proxy_pass https://rubygems.org;
  }

  location / {
    proxy_pass https://rubygems.org;
    proxy_cache_valid 200 301 302 1h;
  }
}</code></pre><p>Refresh Nginx to enable <code>gem</code> package caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-7">Client Configuration</h4><p>Set the local sources using the <code>gem</code> command-line utility:</p><pre><code># gem sources --add http://rubygems.cache/
http://rubygems.cache/ added to sources
# gem sources --remove https://rubygems.org/
https://rubygems.org/ removed from sources</code></pre><p>And now <code>gem</code> will install packages through your cache.</p><p>As should be illustrated by now, many different package management systems can be tweaked to interface with their upstream repositories through a cache. Additional examples that should work include Cargo, Hex, and Go modules, the package managers for Rust, Erlang/Elixir, and Go. While it would be fun to walk through those and stand them up here as examples, I don&apos;t use those languages enough to justify the work. Yet.</p><h3 id="steam">Steam</h3><p>A now classic example of using Nginx to cache software resources like this is the Steam cache, which treads directly into the operational space of Lancache.</p><p>Steam has gotten a lot nicer to use with caching in the last few years, but I&apos;d still qualify this section as <strong>experimental</strong>, as it doesn&apos;t yield the performance that I&apos;d like to see out of a Steam cache. I will likely be revising this in the near future, but that will require better visibility into cache performance, which won&apos;t be available until later.</p><h4 id="dns-records-8">DNS Records</h4><p>Ensure that the following DNS record is configured:</p><ul><li><code>lancache.steamcontent.com</code> points to your cache server.</li></ul><p>Steam is nice enough to recognize when this record points to a private routing prefix and will redirect all resource requests to this host. Very nice.</p><h4 id="cache-configuration-8">Cache Configuration</h4><p>Ensure that the following configuration file has been added to your Nginx configuration directory in your cache:</p><p><strong>/opt/local/etc/nginx/cache/steam.cache</strong>:</p><pre><code>upstream steam-upstream {
  server cache1-sea1.steamcontent.com;
  server cache2-sea1.steamcontent.com;
  server cache3-sea1.steamcontent.com;
  server cache4-sea1.steamcontent.com;
  server cache1-lax1.steamcontent.com;
  server cache2-lax1.steamcontent.com;
  server cache3-lax1.steamcontent.com;
  server cache4-lax1.steamcontent.com;
  server cache5-lax1.steamcontent.com;
  server cache6-lax1.steamcontent.com;
  keepalive 32;
}

server {
  listen [::];
  server_name lancache.steamcontent.com *.steamcontent.com;
  slice 8m;

  proxy_cache_key lancache.steamcontent.com$uri$is_args$args$slice_range;
  proxy_set_header Range $slice_range;
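  # Aside: the official ngx_http_slice_module example also caches the 206
  # Partial Content responses that slicing produces, via:
  #   proxy_cache_valid 200 206 1h;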

  location / {
    proxy_pass http://steam-upstream;
  }
}</code></pre><p>In my case, I&apos;m using upstream caches located in Seattle and Los Angeles, the two closest locations.</p><p>Refresh Nginx to enable Steam caching:</p><pre><code># svcadm refresh nginx</code></pre><h4 id="client-configuration-8">Client Configuration</h4><p>No client configuration is required. Steam will automatically recognize and make use of this cache.</p><h2 id="cache-partitioning">Cache Partitioning</h2><p>While all of the above examples use a single shared cache, an approach I prefer, you can also set up independent caches per service.</p><p>First, disable Nginx.</p><pre><code># svcadm mark maintenance nginx</code></pre><p>Create an additional ZFS dataset for each separate cache you&apos;d like:</p><pre><code># zfs create -o quota=200G -o mountpoint=/var/www/cache-2 zones/&lt;UUID&gt;/data/cache-2</code></pre><p>Register the cache paths in Nginx under the <code>http</code> context. Note that the size has been adjusted to reflect the size set for the ZFS dataset:</p><p><strong>/opt/local/etc/nginx/nginx.conf</strong>:</p><pre><code>...
proxy_cache_path /var/www/cache-2 levels=2:2 use_temp_path=on keys_zone=second_cache:128m inactive=4y max_size=190G;
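# Aside (explanatory): keys_zone=second_cache:128m names the shared-memory zone
# that &quot;proxy_cache second_cache;&quot; references in the per-service config below;
# max_size=190G is kept under the 200G dataset quota to leave headroom.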
...</code></pre><p>Adjust the cache configuration to make use of that cache instead of <code>default_cache</code>, for example with Ubuntu:</p><p><strong>/opt/local/etc/nginx/cache/ubuntu.cache</strong>:</p><pre><code>upstream archive.ubuntu.com {
  server archive.ubuntu.com;
  keepalive 2;
}

server {
  listen [::];
  server_name apt.ubuntu.cache apt.ubuntu.cache.ewellnet;
  proxy_cache second_cache;

  location ~ \.deb$ {
    proxy_pass http://archive.ubuntu.com;
  }

  location / {
    proxy_pass http://archive.ubuntu.com;
    proxy_cache_valid 200 301 302 1h;
  }
}</code></pre><p>Once you&apos;re done, restart Nginx and enjoy having separate caches:</p><pre><code># svcadm clear nginx</code></pre><h2 id="conclusion">Conclusion</h2><p>While I&apos;m generally happy with the results of this project, there&apos;s clearly room for improvement, specifically around Steam and potentially additional game service caching.</p><p>Digging further into Lancache and investigating what problems they experienced and how they overcame them is probably the best move from here, as well as investing further into visibility tools to determine how and why Nginx is slowing down requests instead of accelerating them.</p><p>But for everything else, this is definitely a good first step.</p>]]></content:encoded></item><item><title><![CDATA[NUT in the Global Zone]]></title><description><![CDATA[<p><a href="https://networkupstools.org/?ref=blog.brianewell.com">Network UPS Tools</a> is a project for connecting with many different power devices, and can access UPS data and coordinate with the device to ensure orderly shutdown of a host such as a SmartOS compute node.</p><p>NUT flies a bit in the face of the design philosophy of SmartOS. 
Not</p>]]></description><link>https://blog.brianewell.com/nut-in-the-global-zone/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138b4</guid><category><![CDATA[Hardware]]></category><category><![CDATA[SmartOS]]></category><category><![CDATA[SMF]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 24 Dec 2021 10:36:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1508061253366-f7da158b6d46?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE4fHxudXR8ZW58MHx8fHwxNjQyNDgzOTcy&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1508061253366-f7da158b6d46?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDE4fHxudXR8ZW58MHx8fHwxNjQyNDgzOTcy&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="NUT in the Global Zone"><p><a href="https://networkupstools.org/?ref=blog.brianewell.com">Network UPS Tools</a> is a project for connecting with many different power devices, and can access UPS data and coordinate with the device to ensure orderly shutdown of a host such as a SmartOS compute node.</p><p>NUT flies a bit in the face of the design philosophy of SmartOS. 
Not only will you need to install it manually (via pkgsrc, which will also need to be installed) in a global zone to use it, but SmartOS would rather rely on ZFS to handle unexpected power events than be explicitly shut down by a UPS monitoring service.</p><p>In this article, we will install NUT (and pkgsrc) into the global zone and configure it to interface with a USB attached UPS device and to (optionally) automatically power down the compute node if the UPS batteries run critically low.</p><p>Let&apos;s get to it.</p><h2 id="installing-pkgsrc-into-the-global-zone">Installing Pkgsrc into the global zone</h2><p>The <a href="https://pkgsrc.joyent.com/install-on-illumos/?ref=blog.brianewell.com">Joyent Pkgsrc</a> website has instructions on installing pkgsrc into the global zone. At the time of this writing, this is the bootstrap that we&apos;re going to use:</p><p><strong>/root/install-pkgsrc.sh</strong>:</p><pre><code>#
# Copy and paste the lines below to install the latest 64-bit tools set.
#
BOOTSTRAP_TAR=&quot;bootstrap-trunk-tools-20201019.tar.gz&quot;
BOOTSTRAP_SHA=&quot;9b7a6daff5528d800e8cea20692f61ccd3b81471&quot;
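# Aside: these bootstrap values go stale over time; take the current tarball
# name and SHA1 checksum from the Joyent pkgsrc page linked above before running.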

# Ensure you are in a directory with enough space for the bootstrap download,
# by default the SmartOS /root directory is limited to the size of the ramdisk.
cd /var/tmp

# Download the bootstrap kit to the current directory.  Note that we currently
# pass &quot;-k&quot; to skip SSL certificate checks as the GZ doesn&apos;t install them.
curl -kO https://pkgsrc.joyent.com/packages/SmartOS/bootstrap/${BOOTSTRAP_TAR}

# Verify the SHA1 checksum.
[ &quot;${BOOTSTRAP_SHA}&quot; = &quot;$(/bin/digest -a sha1 ${BOOTSTRAP_TAR})&quot; ] || echo &quot;ERROR: checksum failure&quot;

# Verify PGP signature.  This step is optional, and requires gpg.
curl -kO https://pkgsrc.joyent.com/packages/SmartOS/bootstrap/${BOOTSTRAP_TAR}.asc
curl -ksS https://pkgsrc.joyent.com/pgp/DE817B8E.asc | gpg --import
gpg --verify ${BOOTSTRAP_TAR}{.asc,}

# Install bootstrap kit to /opt/tools
tar -zxpf ${BOOTSTRAP_TAR} -C /

# Add to PATH/MANPATH.
PATH=/opt/tools/sbin:/opt/tools/bin:$PATH
MANPATH=/opt/tools/man:$MANPATH</code></pre><p>Run this script on the global zone to bootstrap pkgin:</p><pre><code># /root/install-pkgsrc.sh</code></pre><p>We should now have <code>pkgsrc</code> available in the global zone.</p><p>If you have a pkgsrc cache available (which we will cover soon), configure pkgin to make use of it:</p><p><strong>/opt/tools/etc/pkgin/repositories.conf</strong>:</p><pre><code>...
http://pkgsrc.joyent.cache/packages/SmartOS/trunk/tools/All</code></pre><p>Update and upgrade pkgsrc.</p><pre><code># pkgin update
processing remote summary (http://pkgsrc.joyent.cache/packages/SmartOS/trunk/tools/All)...
database for http://pkgsrc.joyent.cache/packages/SmartOS/trunk/tools/All is up-to-date
# pkgin -y upgrade
...</code></pre><h2 id="install-nut-into-the-global-zone">Install NUT into the Global Zone</h2><p>This is exceedingly difficult to do using pkgsrc:</p><pre><code># pkgin -y install ups-nut-usb
...</code></pre><p>This should install NUT and its USB drivers into <code>/opt/tools</code> as well.</p><h3 id="configuring-testing-upsdrvctl">Configuring &amp; testing upsdrvctl</h3><p>NUT uses a component called <code>upsdrvctl</code> to interface with USB attached devices. This will need to be told about your UPS, specifically which driver and port it should use to access it. The driver you should use is specific to your UPS model, and can be checked using their <a href="https://networkupstools.org/stable-hcl.html?ref=blog.brianewell.com">Hardware Compatibility List</a>. This is an example of a configuration file setup for a SMT2200 attached as the sole UPS to a compute node:</p><p><strong>/opt/tools/etc/nut/ups.conf</strong>:</p><pre><code>[smt2200]
  driver = usbhid-ups
  port = auto
  desc = &quot;American Power Conversion Smart-UPS 2200&quot;</code></pre><p>Running the following command will enable the driver and ensure that <code>upsd</code> can connect to it later on. You shouldn&apos;t see any error messages here.</p><pre><code># upsdrvctl start
Network UPS Tools - UPS driver controller 2.7.4
Network UPS Tools - Generic HID driver 0.41 (2.7.4)
USB communication driver 0.33
Using subdriver: APC HID 0.96</code></pre><h3 id="configuring-testing-upsd">Configuring &amp; testing upsd</h3><p>The <code>upsd</code> process is responsible for connecting to and polling information from attached UPSes via <code>upsdrvctl</code>. Its default configuration (of an empty file) is perfectly suitable for a standalone configuration.</p><p>If you would like to use <code>upscmd</code> as well, it&apos;s best to configure some users who have permission to manipulate the device.</p><p><strong>/opt/tools/etc/nut/upsd.users</strong>:</p><pre><code>[admin]
  password = secret
  actions = SET
  instcmds = ALL</code></pre><p>Start <code>upsd</code> when you&apos;re ready.</p><pre><code># upsd
Network UPS Tools upsd 2.7.4
fopen /opt/tools/var/db/nut/upsd.pid: No such file or directory
listening on 127.0.0.1 port 3493
listening on ::1 port 3493
Connected to UPS [smt2200]: usbhid-ups-smt2200</code></pre><p>You should now be able to query your ups using <code>upsc</code>:</p><pre><code># upsc smt2200 ups.status
Init SSL without certificate database
OL</code></pre><p>For reference, <code>OL</code> stands for On-Line power, <code>OB</code> for On-Battery power, and <code>LB</code> for Low Battery. You can also issue this command without any additional options to see all information from the UPS:</p><pre><code># upsc smt2200
Init SSL without certificate database
battery.charge: 68
battery.charge.low: 10
battery.charge.warning: 50
...</code></pre><p>The <code>upscmd</code> command can also be used to issue specific commands to your UPS:</p><pre><code># upscmd -l smt2200
Instant commands supported on UPS [smt2200]:

beeper.disable - Disable the UPS beeper
beeper.enable - Enable the UPS beeper
beeper.mute - Temporarily mute the UPS beeper
beeper.off - Obsolete (use beeper.disable or beeper.mute)
beeper.on - Obsolete (use beeper.enable)
load.off - Turn off the load immediately
load.off.delay - Turn off the load with a delay (seconds)
shutdown.reboot - Shut down the load briefly while rebooting the UPS
shutdown.stop - Stop a shutdown in progress</code></pre><p>It will also ask you for credentials before allowing any actual commands to be performed.</p><h3 id="configuring-testing-upsmon-optional-">Configuring &amp; Testing upsmon (optional)</h3><p>If you would like NUT to actually power down your system once power reaches a critical point, then you will need to configure <code>upsmon</code>, the monitoring component of NUT.</p><p>Using this component of NUT is completely optional, as SmartOS can generally handle abrupt shutdowns already thanks to ZFS. However, if you&apos;re doing something crazy like disabling sync writes on certain datasets, or generally like the idea of a graceful shutdown in the global zone, proceed.</p><p>A rough layout of <code>upsmon</code>&apos;s flow with our modifications is as follows:</p><ul><li>The UPS goes onto battery power.</li><li>The UPS reaches a low battery state (<code>ups.status</code> goes from <code>OB</code> to <code>LB</code>).</li><li>The <code>upsmon</code> service notices this and generates a <code>NOTIFY_SHUTDOWN</code> event, waits <code>FINALDELAY</code> seconds, creates the <code>POWERDOWNFLAG</code> file and calls <code>SHUTDOWNCMD</code>.</li><li>SMF shuts everything down before finally signaling the UPS to power off and return once the utility power has been restored.</li><li>The system loses power.</li><li>Time passes.</li><li>Power returns and the UPS switches back on, restoring supply to the loads.</li><li>All systems reboot and continue on as normal.</li></ul><p>Since it interacts with <code>upsd</code> to actually power down the loads, it will need credentials set in <code>upsd.users</code>. Let&apos;s do that now. Add the following user:</p><p><strong>/opt/tools/etc/nut/upsd.users</strong>:</p><pre><code>[monuser]
  password = doesitmatter
  upsmon master</code></pre><p>Reload upsd to update it with the new information.</p><pre><code># upsd -c reload
Network UPS Tools upsd 2.7.4</code></pre><p>Define <code>MONITOR</code> and <code>SHUTDOWNCMD</code> within the <code>upsmon</code> configuration.</p><p><strong>/opt/tools/etc/nut/upsmon.conf</strong>:</p><pre><code>MONITOR smt2200@localhost 1 monuser doesitmatter master
SHUTDOWNCMD /usr/sbin/poweroff</code></pre><p>And then start <code>upsmon</code>.</p><pre><code># upsmon
Network UPS Tools upsmon 2.7.4
fopen /opt/tools/var/db/nut/upsmon.pid: No such file or directory
UPS: smt2200@localhost (master) (power value 1)
Using power down flag file /etc/killpower</code></pre><p>While the automatic shutdown can be tested, it&apos;s probably best to wait until after completing setup of the following section.</p><h3 id="setting-up-smf-manifests">Setting up SMF manifests</h3><p>While the global zone is relatively ephemeral, we do have the option of creating <a href="https://wiki.smartos.org/administering-the-global-zone/?ref=blog.brianewell.com">persistent services</a> by placing SMF manifests into <code>/opt/custom/smf</code>, and since we really don&apos;t want to have to manually set this all up each time, we&apos;re going to do that now.</p><p><strong>/opt/custom/smf/nut.xml</strong>:</p><!--kg-card-begin: html--><script src="https://gist.github.com/brianewell/f4099a3c8ef159111e189bf4e7304d6f.js"></script><!--kg-card-end: html--><p><strong>Note:</strong> Remove the <code>upsmon</code> instance if you don&apos;t want it to start up and attempt to automatically power off your system in the case of a power failure.</p><p>Now, you can either restart, or simply import the manifest to enable these services.</p><pre><code># svccfg import /opt/custom/smf/nut.xml</code></pre><p>Ensure that they are properly running.</p><pre><code># svcs nut
STATE          STIME    FMRI
online         10:46:59 svc:/pkgsrc/nut:upsdrvctl
online         10:46:59 svc:/pkgsrc/nut:upsd
online         10:46:59 svc:/pkgsrc/nut:upsmon</code></pre><h2 id="conclusion">Conclusion</h2><p>While I&apos;m still testing this and may continue to update this article, that&apos;s basically what it takes to get NUT running in your global zone.</p><p>I have to say that I&apos;m disappointed in APC as their SUA1500 model which protects my HP N54L actually has more features exposed through the USB cable than the newer SMT2200-RM2U. Specifically:</p><ul><li>Battery Manufacture Date.</li><li>Battery Temperature.</li><li>Input data, including UPS sensitivity, high and low transfer thresholds, reason for last transfer, and current input voltage.</li><li>Output data, including sinewave frequency and voltage.</li><li>UPS load as a percentage.</li></ul><p>While much of this data is available on the physical front panel, that means I can&apos;t poll it through <code>upsc</code> for monitoring in metrics, which for me was the entire point of installing NUT in the first place. Additionally, the <code>upscmd</code> command is quite limited, lacking the following features that the SUA1500 has:</p><ul><li>Being able to turn the UPS on.</li><li>Adjust the automatic power-on behavior for the UPS (shutdown.return vs shutdown.stayoff).</li><li>Testing the battery.</li><li>Testing the front panel.</li></ul>]]></content:encoded></item><item><title><![CDATA[2022 Home Router Refresh]]></title><description><![CDATA[<p>With the new hardware up and running, it&apos;s time to redeploy new zones. 
We&apos;re going to start with the router zone, and as it&apos;s almost 2022, I&apos;m going to call this article my 2022 Home Router Refresh.</p><p>This article will overview rebuilding</p>]]></description><link>https://blog.brianewell.com/2022-home-router-refresh/</link><guid isPermaLink="false">61dc18a163fc3b6bb0faee99</guid><category><![CDATA[Networking]]></category><category><![CDATA[Encryption]]></category><category><![CDATA[SmartOS]]></category><category><![CDATA[SMF]]></category><category><![CDATA[Zones]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 17 Dec 2021 12:49:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1600074169098-16a54d791d0d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fHNpZ25wb3N0fGVufDB8fHx8MTY0MzAyODcyMg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1600074169098-16a54d791d0d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDV8fHNpZ25wb3N0fGVufDB8fHx8MTY0MzAyODcyMg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="2022 Home Router Refresh"><p>With the new hardware up and running, it&apos;s time to redeploy new zones. We&apos;re going to start with the router zone, and as it&apos;s almost 2022, I&apos;m going to call this article my 2022 Home Router Refresh.</p><p>This article will overview rebuilding my zone based SmartOS router, and draw from three of my previous ones, specifically <a href="https://blog.brianewell.com/home-router-on-smartos/">Home Router on SmartOS</a>, <a href="https://blog.brianewell.com/hurricane-electric-on-smartos/">Hurricane Electric on SmartOS</a>, and <a href="https://blog.brianewell.com/tunnels-on-smartos/">Tunnels on SmartOS</a>. 
If you&apos;d like more of the theory behind why I&apos;m doing what I&apos;m doing, look to those articles; this is going to be much more nuts and bolts.</p><p>Let&apos;s dive in.</p><h2 id="overview">Overview</h2><p>We will be deploying a router that will be responsible for routing between the isolated private subnets in my new network deployment, as well as providing common home network services such as dynamic host configuration protocol (DHCP), domain name service (DNS), and network address translation (NAT).</p><p>In addition, it will be acting as the terminal for a tunnel going out to Hurricane Electric for IPv6 connectivity, as well as IPSec tunnels going to production systems.</p><h2 id="interfaces">Interfaces</h2><p>The local interfaces that we&apos;ll be working with were listed in <a href="https://blog.brianewell.com/homelab-updates-part-two/">the previous article</a>, but for simplicity, are the following:</p><ul><li>Public: <code>net0: nic:admin, vlan_id:5, dhcp, autoconf</code></li><li>Infrastructure: <code>net1: nic:admin, vlan_id:1, 172.22.1.0/27, IPv6/64</code></li><li>Embedded: <code>net2: nic:admin, vlan_id:2, 172.22.1.32/27, IPv6/64</code></li><li>Internal: <code>net3: etherstub:internal, 172.22.1.64/27, IPv6/64</code></li><li>External: <code>net4: etherstub:external, 172.22.1.96/27, IPv6/64</code></li><li>Secure: <code>net5: nic:admin, vlan_id:3, 172.22.1.128/26, IPv6/64</code></li><li>Guest: <code>net6: nic:admin, vlan_id:4, 172.22.1.192/26, IPv6/64</code></li></ul><p>The public interface may end up being directly connected at some point; the 1000BASE-T SFP module I have doesn&apos;t play well with the NDC in my compute node, so for now connecting to the public internet through my switch with a dedicated VLAN is how I&apos;m going to proceed forward at this time. Whenever this network switches up to anything faster, there are still two free SFP+ interfaces on the back of the server to handle that interconnect.
in the future.</p><p>Also, I may end up defining an additional interconnect for the 40GBE interface at the router or experiment with 802.1D bridging via <code>dladm create-bridge</code>; we will see when I get there.</p><p>Additionally, I will be creating the following tunnel interfaces:</p><ul><li>Hurricane Electric: <code>v4_he0</code></li><li>Production Tunnels: <code>v4_ze0, v4_ze1</code></li></ul><p>Note that I tend to use the <code>v4_</code> prefix when referring to IPv4 based tunnels and the <code>v6_</code> prefix when referring to IPv6 based tunnels.</p><h2 id="zone-manifest">Zone Manifest</h2><p>This is the manifest that I used to create the router zone:</p><pre><code>{
  &quot;image_uuid&quot;: &quot;1d05e788-5409-11eb-b12f-037bd7fee4ee&quot;,
  &quot;brand&quot;: &quot;joyent&quot;,
  &quot;alias&quot;: &quot;router&quot;,
  &quot;hostname&quot;: &quot;router&quot;,
  &quot;dns_domain&quot;: &quot;ewellnet&quot;,
  &quot;cpu_cap&quot;: 200,
  &quot;max_physical_memory&quot;: 128,
  &quot;quota&quot;: 10,
  &quot;resolvers&quot;: [ &quot;::1&quot; ],
  &quot;nics&quot;: [
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;vlan_id&quot;: 5,
      &quot;interface&quot;: &quot;net0&quot;,
      &quot;ips&quot;: [&quot;dhcp&quot;,&quot;addrconf&quot;],
      &quot;primary&quot;: true,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;interface&quot;: &quot;net1&quot;,
      &quot;ips&quot;: [&quot;172.22.1.1/27&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;vlan_id&quot;: 2,
      &quot;interface&quot;: &quot;net2&quot;,
      &quot;ips&quot;: [&quot;172.22.1.33/27&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;internal0&quot;,
      &quot;interface&quot;: &quot;net3&quot;,
      &quot;ips&quot;: [&quot;172.22.1.65/27&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;external0&quot;,
      &quot;interface&quot;: &quot;net4&quot;,
      &quot;ips&quot;: [&quot;172.22.1.97/27&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;vlan_id&quot;: 3,
      &quot;interface&quot;: &quot;net5&quot;,
      &quot;ips&quot;: [&quot;172.22.1.129/26&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    },
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;vlan_id&quot;: 4,
      &quot;interface&quot;: &quot;net6&quot;,
      &quot;ips&quot;: [&quot;172.22.1.193/26&quot;,&quot;addrconf&quot;],
      &quot;allow_dhcp_spoofing&quot;: &quot;true&quot;,
      &quot;allow_ip_spoofing&quot;: &quot;true&quot;
    }
  ]
}</code></pre><p>Note that I&apos;m setting a resolver of <code>::1</code> in the zone manifest simply because that is ultimately what it will be set to; you will want to temporarily alter <code>/etc/resolv.conf</code> until you have <code>dnsmasq</code> properly configured.</p><p>Creation command and login:</p><pre><code># vmadm create -f router.json
Successfully created VM f8510d07-3852-ebf0-83f3-e850f1ceb5fa
# zlogin f8510d07-3852-ebf0-83f3-e850f1ceb5fa</code></pre><h2 id="configuring-nat-routing">Configuring NAT &amp; Routing</h2><p>Edit <code>/etc/ipf/ipnat.conf</code> so that it will translate IPv4 packets routed from the private networks out onto the internet:</p><pre><code>map net0 172.22.1.0/24 -&gt; 0/32 proxy port ftp ftp/tcp
map net0 172.22.1.0/24 -&gt; 0/32 portmap tcp/udp auto
map net0 172.22.1.0/24 -&gt; 0/32</code></pre><p>Enable IPFilter and IPv4 forwarding:</p><pre><code># svcadm enable ipfilter
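# (Note: enabling the ipfilter service also loads the NAT rules from
# /etc/ipf/ipnat.conf, which is why no separate ipnat load step is needed.)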
# routeadm -ue ipv4-forwarding</code></pre><p>Then let&apos;s verify that it&apos;s working properly:</p><pre><code># ipnat -l
List of active MAP/Redirect filters:
map net0 172.22.1.0/24 -&gt; 0.0.0.0/32 proxy port ftp ftp/tcp
map net0 172.22.1.0/24 -&gt; 0.0.0.0/32 portmap tcp/udp auto
map net0 172.22.1.0/24 -&gt; 0.0.0.0/32

List of active sessions:</code></pre><h2 id="installing-dnsmasq">Installing Dnsmasq</h2><p>I&apos;ve come to really appreciate dnsmasq for how small and light it is, and yet how generally capable it is in a home network setting. Sure, it isn&apos;t the fastest, and it doesn&apos;t do everything, but it&apos;s fantastic just how much it accomplishes while using minimal resources.</p><p>Installation is a breeze:</p><pre><code># pkgin in dnsmasq</code></pre><p>And configuration isn&apos;t too difficult either:</p><p><strong>/opt/local/etc/dnsmasq.conf</strong>:</p><pre><code># DNS specific configuration

# Set DNS cache size
cache-size=8192

# Never forward plain names (without a dot or domain part)
domain-needed

# Never forward addresses in the non-routed address spaces.
bogus-priv

# We don&apos;t want dnsmasq to poll resolv files for changes, we will manually notify it with a refresh
no-poll

# Upstream resolvers, use this so that /etc/resolv.conf can refer to localhost
resolv-file=/etc/resolv.upstream
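# (Aside: /etc/resolv.upstream is not created for you. It holds ordinary
# resolver lines for whichever upstream you choose, e.g. a hypothetical
#   nameserver 203.0.113.53
# while /etc/resolv.conf itself points at ::1 so local lookups hit dnsmasq.)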

# Domain for local services which should never be forwarded to external resolvers
local=/ewellnet/
domain=ewellnet

# Shared configuration

# Interfaces used for DHCP &amp; DNS
interface=net1
interface=net2
interface=net3
interface=net4
interface=net5
interface=net6

# DHCP specific configuration

# DHCP ranges
dhcp-range=172.22.1.2,172.22.1.30,255.255.255.224,6h
dhcp-range=172.22.1.34,172.22.1.62,255.255.255.224,6h
dhcp-range=172.22.1.130,172.22.1.190,255.255.255.192,6h
dhcp-range=172.22.1.194,172.22.1.254,255.255.255.192,6h
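# (Note: there are deliberately no ranges for internal 172.22.1.64/27 or
# external 172.22.1.96/27; zones on those etherstubs are presumed to be
# statically addressed, though dnsmasq still answers DNS on net3/net4.)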

# Read /etc/ethers for static DHCP allocations
read-ethers
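# (Example of the pairing, with a hypothetical host and address:
#   /etc/ethers: 02:08:20:aa:bb:cc  nas.ewellnet
#   /etc/hosts:  172.22.1.5         nas.ewellnet
# read-ethers then hands 172.22.1.5 to that MAC via DHCP.)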

# Set the NTP time server addresses to the interface&apos;s
dhcp-option=option:ntp-server

# This should be the authoritative DHCP server on the network
dhcp-authoritative</code></pre><p>For any static assignments, set records in <code>/etc/hosts</code> and (optionally) <code>/etc/ethers</code>. In my case, the records are suffixed with <code>.ewellnet</code>.</p><p>Create the directory at <code>/var/cache</code> and enable the <code>dnsmasq</code> service:</p><pre><code># mkdir /var/cache
# svcadm enable dnsmasq</code></pre><p>Local DHCP and DNS should now be up and running.</p><h2 id="secure-ssh">Secure SSH</h2><p>Depending on your disposition, you may want to prevent password based logins through SSH or disable the service all together. I usually opt for the latter for home, though I&apos;m sure that&apos;ll bite me in the future some time.</p><pre><code># svcadm disable ssh</code></pre><h2 id="tunnel-to-hurricane-electric">Tunnel to Hurricane Electric</h2><p>At the time of this writing, Ziply Fiber still hasn&apos;t rolled out IPv6 connectivity to end users. This is okay though, since we have Hurricane Electric.</p><p>Login to <a href="https://tunnelbroker.net/?ref=blog.brianewell.com">tunnelbroker.net</a> to set and acquire your necessary credentials. Since these addresses are global, I will be making the following replacements for the below examples:</p><ul><li>Server IPv4 address: <code>1.2.3.4</code></li><li>Client IPv4 address: <code>5.6.7.8</code></li><li>Server IPv6 address: <code>2001:470::1</code></li><li>Client IPv6 address: <code>2001:470::2</code></li></ul><p>Run the following commands to bring a Hurricane Electric tunnel up and route default IPv6 traffic over it:</p><pre><code># dladm create-iptun -T ipv4 -a local=5.6.7.8,remote=1.2.3.4 v4_he0
# ipadm create-addr -T static -a local=2001:470::2,remote=2001:470::1 v4_he0/v6
# route -p add -inet6 default 2001:470::1</code></pre><p>Test by pinging any well known IPv6 enabled domain:</p><pre><code># ping -A inet6 google.com
google.com is alive</code></pre><p>Our router can now access the IPv6 Internet.</p><h2 id="advertise-routed-ipv6-over-hurricane-electric">Advertise Routed IPv6 over Hurricane Electric</h2><p>IPv6 connectivity to our router isn&apos;t terribly helpful on its own; we&apos;ll want to be able to provide routing to our attached devices.</p><p>First up, we&apos;ll need to start forwarding IPv6 packets:</p><pre><code># routeadm -ue ipv6-forwarding</code></pre><p>While I would normally make use of dnsmasq&apos;s IPv6 router advertisement features, the <code>ndp</code> daemon is already running whenever <code>addrconf</code> is enabled on an interface, so we might as well just use the native tools here, especially since they&apos;ll manipulate the routing table on the router for us.</p><p><strong>/etc/inet/ndpd.conf</strong>:</p><pre><code>if net1 AdvSendAdvertisements 1
prefix 2001:470:????::/64 net1
if net2 AdvSendAdvertisements 1
prefix 2001:470:????:1::/64 net2
if net3 AdvSendAdvertisements 1
prefix 2001:470:????:2::/64 net3
if net4 AdvSendAdvertisements 1
prefix 2001:470:????:3::/64 net4
if net5 AdvSendAdvertisements 1
prefix 2001:470:????:4::/64 net5
if net6 AdvSendAdvertisements 1
prefix 2001:470:????:5::/64 net6</code></pre><p>And then restart (not refresh) ndp.</p><pre><code># svcadm restart ndp</code></pre><p>You should now see functional IPv6 connectivity on every stateless device on your network.</p><h2 id="establish-ipsec-policy-key-exchange">Establish IPSec Policy &amp; Key Exchange</h2><p>I also run IPSec tunnels to some production zones in a local datacenter. These tunnels will be called <code>ze0</code> and <code>ze1</code>. First of all, we&apos;re going to need to set the encryption policy, specifically which encryption and authentication algorithms will be accepted over these tunnels:</p><p><strong>/etc/inet/ipsecinit.conf</strong>:</p><pre><code>{tunnel v4_ze0 negotiate tunnel}
  ipsec {encr_algs aes encr_auth_algs sha512 sa shared}
{tunnel v4_ze1 negotiate tunnel}
  ipsec {encr_algs aes encr_auth_algs sha512 sa shared}</code></pre><p>After ensuring that the same settings are present on the remote endpoints, I restart ipsec policy:</p><pre><code># svcadm restart ipsec/policy</code></pre><p>Next up is to configure IKE. Normally I would be fancy and use public keys, but pre-shared should be fine in this case:</p><p><strong>/etc/inet/ike/config</strong>:</p><pre><code>p1_lifetime_secs 7200
p1_nonce_len 40

p1_xform {
        auth_method preshared
        oakley_group 5
        auth_alg sha512
        encr_alg aes
}

p2_pfs 2

{
        label &quot;v4_ze0&quot;
        local_addr 1.2.3.4
        remote_addr 5.6.7.8
        p1_xform {
                auth_method preshared
                oakley_group 5
                auth_alg sha512
                encr_alg aes
        }
        p2_pfs 5
}
{
        label &quot;v4_ze1&quot;
        local_addr 1.2.3.4
        remote_addr 5.6.7.9
        p1_xform {
                auth_method preshared
                oakley_group 5
                auth_alg sha512
                encr_alg aes
        }
        p2_pfs 5
}</code></pre><p>Generate a large random 128-byte key for each tunnel. <strong>These are example keys only; do not use them in production</strong>:</p><pre><code># openssl rand -hex 128
0f5451e4e0ad601f64dad77b9a83a0214dd4f460e07935936cfc7faf24c90f1c65609aaebc817ea17894748d999c94324d53a50ba18ce62c7683966674c3d50f93d1333440709f92facd1d10bcfde5b9ccc05e20adfa00a14825025068ba29cac04ba75b3b7e909dfddc01bd84eec2d56fa47a151cd447d0342f4194a910be9c
# openssl rand -hex 128
daa7fafa568601172b01591465e6365f90ad41eb6a778e7730f9ae5ebb9e05b9792fe288a1d7fa9950a2659433ad2b2f31087bc8e3c8ebbc80ef1c1d604847371cf4d1298ebb0b7fe4835b8bd13e969d700312a929ec8c83f63760a8a7b7439441b9e1ec27862c4294214df8ca03bd02eb62c9da6980108e27106c566b3ad7ea</code></pre><p>Create some preshared secret files with these keys:</p><p><strong>/etc/inet/secret/ike.preshared</strong>:</p><pre><code>{
        localidtype IP
        localid 1.2.3.4
        remoteidtype IP
        remoteid 5.6.7.8
        key 0f5451e4e0ad601f64dad77b9a83a0214dd4f460e07935936cfc7faf24c90f1c65609aaebc817ea17894748d999c94324d53a50ba18ce62c7683966674c3d50f93d1333440709f92facd1d10bcfde5b9ccc05e20adfa00a14825025068ba29cac04ba75b3b7e909dfddc01bd84eec2d56fa47a151cd447d0342f4194a910be9c
}

{
        localidtype IP
        localid 1.2.3.4
        remoteidtype IP
        remoteid 5.6.7.9
        key daa7fafa568601172b01591465e6365f90ad41eb6a778e7730f9ae5ebb9e05b9792fe288a1d7fa9950a2659433ad2b2f31087bc8e3c8ebbc80ef1c1d604847371cf4d1298ebb0b7fe4835b8bd13e969d700312a929ec8c83f63760a8a7b7439441b9e1ec27862c4294214df8ca03bd02eb62c9da6980108e27106c566b3ad7ea
}</code></pre><p>After the reciprocal configuration has been set on the remote side, enable or restart IKE:</p><pre><code># svcadm enable ipsec/ike</code></pre><h2 id="establish-ipsec-tunnels-routing">Establish IPSec Tunnels &amp; Routing</h2><p>With all of our IPSec policy and key exchange work out of the way, we can finally test it by establishing some tunnels. We&apos;ll do that with <code>dladm</code>:</p><pre><code># dladm create-iptun -T ipv4 -a local=1.2.3.4,remote=5.6.7.8 v4_ze0
# dladm create-iptun -T ipv4 -a local=1.2.3.4,remote=5.6.7.9 v4_ze1</code></pre><p>Next up is to bring up a local interface for each tunnel; creating addresses will do this implicitly, so we&apos;ll just do that.</p><pre><code># ipadm create-addr -T static -a local=172.22.1.1,remote=10.0.0.1 v4_ze0/v4
# ipadm create-addr -T static -a local=172.22.1.1,remote=10.0.0.2 v4_ze1/v4</code></pre><p>After performing the reciprocal steps on the remote host, you should be able to ping the remote side through the tunnel interface:</p><pre><code># ping 10.0.0.1
10.0.0.1 is alive</code></pre><p>Next up is to set some static routes. Despite plumbing the interface for IPv6, in my case I&apos;ll only be actively using IPv4 over it immediately:</p><pre><code># route -p add 10.0.0.0/28 10.0.0.1
# route -p add 10.0.0.0/28 10.0.0.2</code></pre><p>You should now be able to access the remote side of the IPSec tunnel through any subnet attached to this router.</p><h2 id="securing-everything">Securing Everything</h2><p>Next up is to ensure that everything is secure, and only communicating with what it should be. We will achieve this by altering the host model and setting up an inclusive firewall.</p><p>Normally I would perform this step earlier and then build into a secure environment. That can make troubleshooting quite difficult, so this time I built everything out first with the intention of locking it all down afterwards.</p><h3 id="strong-host-model">Strong Host Model</h3><p>It&apos;s often a good idea to discard packets that arrive on unexpected interfaces. We can do this easily in SmartOS by changing the hostmodel to strong by using <code>ipadm set-prop</code>:</p><pre><code># ipadm set-prop -p hostmodel=strong ipv4
# ipadm set-prop -p hostmodel=strong ipv6</code></pre><p>This will ensure that any packets that come in through an interface that doesn&apos;t correlate with the interfaces in the routing table are dropped.</p><h3 id="firewall">Firewall</h3><p>The next thing to do is to ensure that the router is allowing only the correct traffic flows between subnets using IP Filter. We&apos;re going to enact the following policies:</p><ul><li>IPv6 Traffic from the Internet can reach external services <code>172.22.1.96/27</code>.</li><li>Traffic from infrastructure <code>172.22.1.0/27</code> can reach the Internet, internal services <code>172.22.1.64/27</code>, external services <code>172.22.1.96/27</code> and the IPSec tunnels <code>v4_ze0</code> and <code>v4_ze1</code>.</li><li>Traffic from embedded <code>172.22.1.32/27</code> can access internal services <code>172.22.1.64/27</code>.</li><li>Traffic from internal services <code>172.22.1.64/27</code> can access the Internet, infrastructure <code>172.22.1.0/27</code>, embedded <code>172.22.1.32/27</code>, external services <code>172.22.1.96/27</code> and secure <code>172.22.1.128/26</code>.</li><li>Traffic from external services <code>172.22.1.96/27</code> can reach the Internet.</li><li>Traffic from secure <code>172.22.1.128/26</code> can reach the Internet, internal services <code>172.22.1.64/27</code> and external services <code>172.22.1.96/27</code>.</li><li>Traffic from guest <code>172.22.1.192/26</code> can reach the Internet and external services <code>172.22.1.96/27</code>.</li><li>Traffic from the work IPSec tunnels <code>v4_ze0</code> and <code>v4_ze1</code> can reach infrastructure <code>172.22.1.0/27</code>.</li></ul><p>Note: For the sake of this list, &quot;The Internet&quot; includes IPv4 connectivity via NAT through <code>net0</code> and IPv6 connectivity via the <code>v4_he0</code> Hurricane Electric tunnel.
Also, the IPSec connection between infrastructure and remote infrastructure will likely be constrained further to only allow for administrative console access to the remote network. Next up is to implement these policies in configuration:</p><p><strong>/etc/ipf/ipf.conf</strong>:</p><pre><code># Traffic from infrastructure (net1) [172.22.1.0/27]
block in on net1 from any to 172.16.0.0/12
pass in quick on net1 from 172.22.1.0/27 to 172.22.1.64/26
pass in quick on net1 from 172.22.1.0/27 to 172.22.1.0/27
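# (A note on the pattern repeated throughout this file: each leading "block in"
# is the default for traffic bound for the RFC1918 172.16.0.0/12 space, and the
# "pass ... quick" rules short-circuit evaluation to punch policy holes through
# it; anything not matched by a quick rule falls through to the block, while
# Internet-bound traffic never matches the block and passes by default.)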

# Traffic from embedded (net2) [172.22.1.32/27]
block in on net2 all
pass in quick on net2 from 172.22.1.32/27 to 172.22.1.64/27
pass in quick on net2 from 172.22.1.32/27 to 172.22.1.32/27

# Traffic from internal (net3) [172.22.1.64/27]
block in on net3 from any to 172.16.0.0/12
pass in quick on net3 from 172.22.1.64/27 to 172.22.1.128/26
pass in quick on net3 from 172.22.1.64/27 to 172.22.1.0/25

# Traffic from external (net4) [172.22.1.96/27]
block in on net4 from any to 172.16.0.0/12
pass in quick on net4 from 172.22.1.96/27 to 172.22.1.96/27

block out on net4 from 172.16.0.0/12 to any
pass out quick on net4 proto tcp from 172.16.0.0/12 to 172.22.1.96/27 flags S keep state
pass out quick on net4 proto udp from 172.16.0.0/12 to 172.22.1.96/27 keep state
pass out quick on net4 proto icmp from 172.16.0.0/12 to 172.22.1.96/27 icmp-type echo keep state
pass out quick on net4 from 172.22.1.96/27 to 172.22.1.96/27

# Traffic from secure (net5) [172.22.1.128/26]
block in on net5 from any to 172.16.0.0/12
pass in quick on net5 from 172.22.1.128/26 to 172.22.1.64/26
pass in quick on net5 from 172.22.1.128/26 to 172.22.1.128/26

# Traffic from guest (net6) [172.22.1.192/26]
block in on net6 from any to 172.16.0.0/12
pass in quick on net6 from 172.22.1.192/26 to 172.22.1.96/27
pass in quick on net6 from 172.22.1.192/26 to 172.22.1.192/26

# Traffic from IPsec tunnels (v4_ze0,v4_ze1) [10.0.0.0/27]
block in on v4_ze0 all
pass in quick on v4_ze0 from 10.0.0.0/27 to 172.22.1.0/27

block in on v4_ze1 all
pass in quick on v4_ze1 from 10.0.0.0/27 to 172.22.1.0/27</code></pre><p><strong>/etc/ipf/ipf6.conf</strong>:</p><pre><code># Traffic from public (v4_he0) [::/0]
block in on v4_he0 from any to 2001:470::/48
pass in quick on v4_he0 from any to 2001:470:0:3::/64

block out on v4_he0 from 2001:470::/48 to any
pass out quick on v4_he0 from 2001:470:0:3::/64 to any
pass out quick on v4_he0 proto tcp from 2001:470::/48 to any flags S keep state
pass out quick on v4_he0 proto udp from 2001:470::/48 to any keep state
#pass out quick on v4_he0 proto icmp from 2001:470::/48 to any icmp-type echo keep state

# Traffic from infrastructure (net1) [2001:470::/64]
block in on net1 from any to 2001:470::/48
pass in quick on net1 from 2001:470::/64 to 2001:470:0:2::/63
pass in quick on net1 from 2001:470::/64 to 2001:470::/64

# Traffic from embedded (net2) [2001:470:0:1::/64]
block in on net2 all
pass in quick on net2 from 2001:470:0:1::/64 to 2001:470:0:2::/64
pass in quick on net2 from 2001:470:0:1::/64 to 2001:470:0:1::/64

# Traffic from internal (net3) [2001:470:0:2::/64]
block in on net3 from any to 2001:470::/48
pass in quick on net3 from 2001:470:0:2::/64 to 2001:470:0:4::/64
pass in quick on net3 from 2001:470:0:2::/64 to 2001:470::/62

# Traffic from external (net4) [2001:470:0:3::/64]
block in on net4 from any to 2001:470::/48
pass in quick on net4 from 2001:470:0:3::/64 to 2001:470:0:3::/64

block out on net4 from 2001:470::/48 to any
pass out quick on net4 proto tcp from 2001:470::/48 to 2001:470:0:3::/64 flags S keep state
pass out quick on net4 proto udp from 2001:470::/48 to 2001:470:0:3::/64 keep state
#pass out quick on net4 proto icmp from 2001:470::/48 to 2001:470:0:3::/64 icmp-type echo keep state
pass out quick on net4 from 2001:470:0:3::/64 to 2001:470:0:3::/64

# Traffic from secure (net5) [2001:470:0:4::/64]
block in on net5 from any to 2001:470:0::/48
pass in quick on net5 from 2001:470:0:4::/64 to 2001:470:0:2::/63
pass in quick on net5 from 2001:470:0:4::/64 to 2001:470:0:4::/64

# Traffic from guest (net6) [2001:470:0:5::/64]
block in on net6 from any to 2001:470:0::/48
pass in quick on net6 from 2001:470:0:5::/64 to 2001:470:0:3::/64
pass in quick on net6 from 2001:470:0:5::/64 to 2001:470:0:5::/64

# Traffic from IPsec tunnels (v4_ze0,v4_ze1)
block in on v4_ze0 all
block in on v4_ze1 all</code></pre><p>Reload <code>ipfilter</code> to enable those rules:</p><pre><code># svcadm refresh ipfilter</code></pre><p>This enables a stateful IPv4 firewall around the external segment, preventing it from accessing the other network segments without preventing them from accessing it. Likewise, the IPv6 rules enable two stateful layers: one around the external network segment, and another protecting every network except the external segment from outside access.</p><p>Please note: Using a firewall like IP Filter severely limits network performance. While I have taken steps above to mitigate the impact by reducing the firewall rules to their simplest forms, I will be investigating further possible steps to take in the future.</p><h2 id="forwarding-traffic">Forwarding Traffic</h2><p>There are some additional exceptions that need to be set for various services running in this network, namely:</p><ul><li>Forward NTP traffic <code>udp/123</code> directed to any of the non-routable router addresses to the SmartOS compute node.</li><li>Forward Plex traffic <code>tcp/32400</code> to the Plex Media Server zone.</li><li>I&apos;m sure many more.</li></ul><p>These forwarding rules can be added to <code>/etc/ipf/ipnat.conf</code> along with the map rules we added before:</p><p><strong>/etc/ipf/ipnat.conf</strong>:</p><pre><code>map net0 172.22.1.0/24 -&gt; 0/32 proxy port ftp ftp/tcp
map net0 172.22.1.0/24 -&gt; 0/32 portmap tcp/udp auto
map net0 172.22.1.0/24 -&gt; 0/32

# Plex redirection (public-&gt;media)
rdr net0 0.0.0.0/0 port 32400 -&gt; 172.22.1.70 port 32400 tcp

# NTP server redirection (all-&gt;hv-1)
rdr net1 172.22.1.1/32   port 123 -&gt; 172.22.1.8 port 123 udp
rdr net2 172.22.1.33/32  port 123 -&gt; 172.22.1.8 port 123 udp
rdr net3 172.22.1.65/32  port 123 -&gt; 172.22.1.8 port 123 udp
rdr net4 172.22.1.97/32  port 123 -&gt; 172.22.1.8 port 123 udp
rdr net5 172.22.1.129/32 port 123 -&gt; 172.22.1.8 port 123 udp
rdr net6 172.22.1.193/32 port 123 -&gt; 172.22.1.8 port 123 udp</code></pre><p>IP Filter will need to be refreshed once more:</p><pre><code># svcadm refresh ipfilter</code></pre><h2 id="conclusion">Conclusion</h2><p>This should be a pretty solid start to a router zone, though I may end up changing some things around, namely:</p><ul><li>Some of the high throughput communication paths may end up requiring some zones to be moved to different networks to reduce the traffic being filtered through IP Filter and hopefully improve network performance.</li><li>Adding IPSec policies for mobile devices such as Laptops and Smart Phones to access network resources away from home.</li><li>Adding QoS for IP telephony traffic.</li></ul><p>But for now, this should do.</p>]]></content:encoded></item><item><title><![CDATA[Homelab Updates Part 2]]></title><description><![CDATA[<p>This is a continuation of <a href="https://blog.brianewell.com/homelab-hardware-updates/">the previous article on hardware updates</a> to my homelab, focusing on the configuration of the major hardware components discussed.</p><p>No need for much of a preamble here, lets get into it.</p><h2 id="network-configuration">Network Configuration</h2><p>There are a bunch of advanced features to experiment with on this</p>]]></description><link>https://blog.brianewell.com/homelab-updates-part-two/</link><guid isPermaLink="false">61beda4485160e41459fcb76</guid><category><![CDATA[Hardware]]></category><category><![CDATA[Networking]]></category><category><![CDATA[SmartOS]]></category><category><![CDATA[ZFS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 10 Dec 2021 10:58:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1520869562399-e772f042f422?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDd8fG5ldHdvcmt8ZW58MHx8fHwxNjM5ODk5NjMz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1520869562399-e772f042f422?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDd8fG5ldHdvcmt8ZW58MHx8fHwxNjM5ODk5NjMz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Homelab Updates Part 2"><p>This is a continuation of <a href="https://blog.brianewell.com/homelab-hardware-updates/">the previous article on hardware updates</a> to my homelab, focusing on the configuration of the major hardware components discussed.</p><p>No need for much of a preamble here, let&apos;s get into it.</p><h2 id="network-configuration">Network Configuration</h2><p>There are a bunch of advanced features to experiment with on this Cisco switch, but for now just getting it up and running should be good enough.</p><p>I factory reset it, enabled setup mode and hopped onto the console. After running through the initial setup program and assigning some static IP addresses and passwords, I confirmed I could connect with the new addresses and proceeded with the following configuration steps:</p><h3 id="etherchannel-aka-link-aggregation">EtherChannel (aka Link Aggregation)</h3><p>The switch and hypervisor are physically connected via a pair of SFP+ DAC cables. It would be criminal not to configure them for balanced failover. Fortunately, Cisco&apos;s EtherChannel and Illumos&apos; Link Aggregation talk to each other, so we&apos;ll be setting that up.</p><p>I set the following configuration on the switch:</p><pre><code>#configure terminal
 interface Te1/0/1
  switchport mode trunk
  channel-group 1 mode active
 interface Te1/0/2
  switchport mode trunk
  channel-group 1 mode active
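 ! ("mode active" selects LACP and actively negotiates the bundle, matching
 ! the aggr0_lacp_mode=active setting on the SmartOS side; "mode passive"
 ! would only respond to a peer that initiates.)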
end</code></pre><p>Since I had already configured link-aggregation on the SmartOS side, I also confirmed this was functional on the switch side:</p><pre><code>#show etherchannel 1 summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator

        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+--------------------------
1      Po1(SU)         LACP      Te1/0/1(P)  Te1/0/2(P)</code></pre><p>Port-channel 1 is currently in use and both 10GBE ports are bundled in it.</p><h3 id="virtual-lan-segments">Virtual LAN segments</h3><p>As this switch also supports VLANs, this enables a bunch of my pre-existing hardware, such as IP phones, Wireless Access Points, and SmartOS itself. In roughly sketching some ideas out, I came to the following layout:</p><ul><li>Infrastructure network (<code>vlan id 1, IPv4/27, IPv6/64</code>) dedicated virtual network for network or infrastructure management interfaces: switches, access points, IP phones, iDRACs, hypervisors.</li><li>Embedded network (<code>vlan id 2, IPv4/27, IPv6/64</code>) dedicated virtual network for embedded devices: chromecasts, IoT devices.</li><li>Internal network (<code>internal etherstub, IPv4/27, IPv6/64</code>) dedicated etherstub for internally facing VMs and zones.</li><li>External network (<code>external etherstub, IPv4/27, IPv6/64</code>) dedicated etherstub for externally facing VMs and zones.</li><li>Private network (<code>vlan id 3, IPv4/27, IPv6/64</code>) dedicated virtual network for physically secured devices, workstations, laptops, etc.</li><li>Guest network (<code>vlan id 4, IPv4/27, IPv6/64</code>) dedicated virtual network for guest wireless devices.</li><li>Public network (<code>vlan id 5, IPv4/DHCP</code>) dedicated virtual network for upstream network connectivity.</li></ul><p>With this in mind, I set the following configuration on the switch:</p><pre><code>#configure terminal
 vlan 2
 name embedded
 vlan 3
 name private
 vlan 4
 name guest
 vlan 5
 name public
end</code></pre><p>And then I confirmed with the following command:</p><pre><code>#show vlan

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Gi1/0/1, Gi1/0/2, Gi1/0/3
                                                Gi1/0/4, Gi1/0/5, Gi1/0/6
                                                Gi1/0/7, Gi1/0/8, Gi1/0/9
                                                Gi1/0/10, Gi1/0/11, Gi1/0/12
                                                Gi1/0/13, Gi1/0/14, Gi1/0/15
                                                Gi1/0/16, Gi1/0/17, Gi1/0/18
                                                Gi1/0/19, Gi1/0/20, Gi1/0/21
                                                Gi1/0/22, Gi1/0/23, Gi1/0/24
2    embedded                         active
3    private                          active
4    guest                            active
5    public                           active
1002 fddi-default                     act/unsup
1003 token-ring-default               act/unsup
1004 fddinet-default                  act/unsup
1005 trnet-default                    act/unsup

VLAN Type  SAID       MTU   Parent RingNo BridgeNo Stp  BrdgMode Trans1 Trans2
---- ----- ---------- ----- ------ ------ -------- ---- -------- ------ ------
1    enet  100001     1500  -      -      -        -    -        0      0
2    enet  100002     1500  -      -      -        -    -        0      0
3    enet  100003     1500  -      -      -        -    -        0      0
4    enet  100004     1500  -      -      -        -    -        0      0
5    enet  100005     1500  -      -      -        -    -        0      0
1002 fddi  101002     1500  -      -      -        -    -        0      0
1003 tr    101003     1500  -      -      -        -    -        0      0
1004 fdnet 101004     1500  -      -      -        ieee -        0      0
1005 trnet 101005     1500  -      -      -        ibm  -        0      0

Remote SPAN VLANs
------------------------------------------------------------------------------


Primary Secondary Type              Ports
------- --------- ----------------- ------------------------------------------</code></pre><h3 id="saving-the-running-configuration">Saving the Running Configuration</h3><p>After ensuring that everything works as intended, the startup switch configuration needs to be overwritten by the current one. This is done with the following:</p><pre><code>#copy running-config startup-config</code></pre><h3 id="conclusion">Conclusion</h3><p>While Cisco IOS has quite the learning curve over what I&apos;m used to in switch configuration, it wasn&apos;t at all unpleasant to work with once I slowed down and took the time required to understand it.</p><p>While the above configuration steps were the bare minimum to get this network up and running, there&apos;s a bunch more stuff of interest in that Cisco switch that I would like to dig into in the future.</p><p>But as usual, we&apos;ll save that for another day.</p><h2 id="smartos-network-configuration">SmartOS Network Configuration</h2><p>As I wanted my administrative interface to function over a link aggregation, I set the following in <code>/usbkey/config</code>:</p><pre><code># Aggregation from Intel 2x10GBE Interfaces (e2,e3)
aggr0_aggr=00:00:00:00:00:00,00:00:00:00:00:00
aggr0_lacp_mode=active

# Administrative Interface
admin_nic=aggr0
admin_ip=172.22.1.8
admin_netmask=255.255.255.224
admin_network=172.22.1.0
admin_gateway=172.22.1.1

# Additional Etherstubs
etherstub=external0,internal0

# Common Configuration
hostname=gz-1
dns_domain=ewellnet
dns_resolvers=172.22.1.1
ntp_conf_file=ntp.conf
root_authorized_keys_file=authorized_keys</code></pre><p>Some brief highlights:</p><ul><li><code>aggr0_aggr</code> refers to the interfaces to use in the link aggregation by hardware address.</li><li><code>aggr0_lacp_mode</code> ensures this side is actively participating in LACP, instead of passively waiting for another active party.</li><li><code>admin_nic</code> sets the administrative interface, in this case, to the link aggregation. The rest of the <code>admin_</code> parameters are as set by the SmartOS installation.</li><li><code>etherstub</code> sets additional etherstubs to be configured upon boot.</li><li>I&apos;m setting my own custom <code>ntp.conf</code> so that I can use my global zone as a network time server across multiple subnets.</li></ul><p><strong>/usbkey/config.inc/ntp.conf</strong>:</p><pre><code>driftfile /var/ntp/ntp.drift
logfile /var/log/ntp.log

# Ignore all network traffic by default
restrict default ignore
restrict -6 default ignore

# Allow localhost to manage ntpd
restrict 127.0.0.1
restrict -6 ::1

# Allow servers to reply to our queries
restrict source nomodify noquery notrap

# Allow local subnets to query this server
restrict 172.22.1.0 mask 255.255.252.0 nomodify

# Time Servers
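# (note: iburst sends an initial volley of packets for faster first
# synchronization, burst sends a volley at every poll, and minpoll 4
# keeps the minimum poll interval at 2^4 = 16 seconds)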
pool 0.smartos.pool.ntp.org burst iburst minpoll 4
pool 1.smartos.pool.ntp.org burst iburst minpoll 4
pool 2.smartos.pool.ntp.org burst iburst minpoll 4
pool 3.smartos.pool.ntp.org burst iburst minpoll 4</code></pre><h2 id="smartos-zpool-configuration">SmartOS Zpool Configuration</h2><p>After getting a hardware configuration together that worked for Illumos, I spent a few weeks testing various vdev configurations for performance and spatial efficiency. As the tests evolved over that time, I wasn&apos;t fully satisfied with the consistency of the methodology and will be rerunning those tests again for my own information as well as to feature in a future article. There were some pretty solid results that shone through, though.</p><ul><li>ZFS pools based on three five-drive RAIDZ vdevs significantly outperformed pools based on two eight-drive RAIDZ2 vdevs in terms of sequential read performance (17.9% faster) and storage efficiency (7%).</li><li>ZFS pools with special allocation class vdevs outperformed pools without them in terms of sequential read performance (18.3% faster).</li></ul><p>The performance advantages of RAIDZ outweigh the resiliency advantages of RAIDZ2 in my case, as this pool configuration also has a hot-spare, reducing temporal exposure to loss of the pool. As well, critical datasets are regularly replicated off-site.</p><p>The zones pool of the new server was manually created during SmartOS installation with the following command:</p><pre><code># zpool create \
  -o autotrim=on -O atime=off \
  -O checksum=edonr -O compression=lz4 \
  -O recordsize=1M -O special_small_blocks=128K \
  zones \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
    raidz c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 \
    spare c1t15d0 \
    special \
      mirror c3t1d0 c4t1d0 \
      mirror c5t1d0 c6t1d0</code></pre><p>Parameters of note:</p><ul><li><code>autotrim=on</code> enables automatic trim for all trim-capable devices in the pool. In this case, all NVMe SSD based special vdev leaf devices.</li><li><code>atime=off</code> disables file access time updates, reducing metadata writes and improving throughput. This is a standard parameter used by SmartOS zones pools.</li><li><code>checksum=edonr</code> uses the edonr checksum instead of fletcher for filesystem checksum calculations. I had found during previous testing that out of all of the cryptographically strong checksum algorithms available to ZFS, edonr performed the best. This should be re-verified.</li><li><code>compression=lz4</code> explicitly uses lz4 for block compression, which is almost universally a good idea. This could also be set to <code>compression=on</code> and will use the ZFS default compression algorithm, which will attempt to balance compression speed with compression ratio.</li><li><code>recordsize=1M</code> raises the maximum record size from 128K to 1M. This improves performance for large sequential file access by keeping RAIDZ stripes on individual leaf devices relatively large (36K-256K) and ensures that a range of file sizes fit on the normal vdevs instead of the special vdevs, thanks to the next parameter.</li><li><code>special_small_blocks=128K</code> allows for data blocks up to 128K to also be stored on the NVMe SSDs instead of the hard drives. This should drastically improve random IO and overall throughput.</li></ul><p>The combination of the last two parameters effectively creates a hybrid storage pool. All metadata and blocks of up to 128K go to one class of storage, while blocks between 128K and 1M go to the other.
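</p><p>As an illustration (the dataset names here are hypothetical), a bulk-media dataset can keep everything on the hard drives by disabling small-block routing, while a small database dataset can be pinned entirely to the NVMe special vdevs by setting <code>special_small_blocks</code> equal to <code>recordsize</code>:</p><pre><code># zfs create -o recordsize=1M -o special_small_blocks=0 zones/media
# zfs create -o recordsize=16K -o special_small_blocks=16K zones/pgdata</code></pre><p>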
By adjusting <code>recordsize</code> and/or <code>special_small_blocks</code>, different storage properties can be achieved for different datasets.</p><h2 id="cache-only-metadata-for-swap-zvol">Cache only metadata for swap zvol</h2><p>I&apos;ve never liked the idea of swap pages being cached in the ARC, and that happens by default in SmartOS. Fortunately it&apos;s easy to switch that behavior on and off at any time:</p><pre><code># zfs set primarycache=metadata zones/swap</code></pre><p>The above command will ensure that only metadata for zones/swap makes its way into the ARC, preserving it for normal file access. I can&apos;t really foresee a case where I would want to reverse this, except perhaps with L2ARC devices installed in this pool. Either way it&apos;s rather trivial to revert to the normal behavior with:</p><pre><code># zfs inherit primarycache zones/swap</code></pre><h2 id="calming-the-dragon-fans">Calming the Dragon (fans)</h2><p>It was a bit of a surprise when I started this server up the first time after adding a non-certified-by-Dell PCIe card. If that sounds like a bit of a shakedown, it is. It also sounded like the building was about to take off. I, as many before me had, discovered that Dell is very conservative when it comes to cooling devices that their firmware can&apos;t monitor the temperatures of. And by conservative, I mean liberal with the cooling.</p><p>Some people solve this problem by completely disabling the automatic thermal profiles and manually stepping up and down the fan speed via <code>ipmitool</code> and some cron scripts that run every minute.</p><p>That struck me as a horrible idea.</p><p>It would be so much better to continue to let the dedicated firmware that monitors system temperature track component temperatures and adjust airflow to correct for it. 
If only there were a way to tell it not to worry about those PCIe devices behind the curtain.</p><p>Fortunately, at least someone at Dell agrees with me.</p><p>It turns out you can disable the third-party PCIe cooling response, preventing it from loudly complaining about additional PCIe devices in the system. The only utility required to change this behavior is <code>ipmitool</code>, which is already part of the SmartOS global zone.</p><p>To check the current cooling response status, run the following command:</p><pre><code># ipmitool raw 0x30 0xCE 0x01 0x16 0x05 0x00 0x00 0x00</code></pre><p>The following response means the third-party cooling response is disabled. Quiet.</p><pre><code> 16 05 00 00 00 05 00 01 00 00</code></pre><p>The following response means the third-party cooling response is enabled. Loud.</p><pre><code> 16 05 00 00 00 05 00 00 00 00</code></pre><p>To disable the third-party cooling response, run the following command:</p><pre><code># ipmitool raw 0x30 0xCE 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00</code></pre><p>To enable the third-party cooling response, run the following command:</p><pre><code># ipmitool raw 0x30 0xCE 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00</code></pre><p>This setting appears to be maintained across power cycles, but if you do find yourself in a situation where you need to run cards that run hot (like NV1604s), it would be wise to re-enable the default behavior to avoid that potential fire hazard.</p>]]></content:encoded></item><item><title><![CDATA[Homelab Hardware Updates]]></title><description><![CDATA[<p>So much for regular posts to this blog in 2021.</p><p>A lot of what I intended to write about this year hinged on access to better hardware. 
Not that I couldn&apos;t have done it with the N54L, it was more a case of only wanting to do it</p>]]></description><link>https://blog.brianewell.com/homelab-hardware-updates/</link><guid isPermaLink="false">61a3193c85160e41459fc693</guid><category><![CDATA[Hardware]]></category><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 03 Dec 2021 00:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fHNlcnZlciUyMHJhY2t8ZW58MHx8fHwxNjM5NzEwNjU5&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fHNlcnZlciUyMHJhY2t8ZW58MHx8fHwxNjM5NzEwNjU5&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Homelab Hardware Updates"><p>So much for regular posts to this blog in 2021.</p><p>A lot of what I intended to write about this year hinged on access to better hardware. Not that I couldn&apos;t have done it with the N54L, it was more a case of only wanting to do it once instead of twice. So instead I focused on updating my homelab with the intent of writing about it once everything was in place.</p><p>Yeah. That took a little longer than anticipated. Months longer.</p><p>But hey, better late than never, right? Let&apos;s get into it.</p><p><strong>Please note:</strong> this article is going to be focused on hardware. 
Software and networking configurations will be covered in a subsequent post.</p><h2 id="updated-demarcation-point">Updated Demarcation Point</h2><p>Well before signing up for <a href="https://ziplyfiber.com/?ref=blog.brianewell.com">Ziply fiber</a> in 2019, my father and I made a weekend project of running some conduit vertically through the house from the mechanical room in the basement to the attic for horizontal cable runs throughout the house.</p><p>This turned out to be quite fortunate, since Optical Network Terminals need to be powered, and there were no readily available circuits on the external wall where my installer initially wanted to place the ONT. I volunteered to install the ONT in the mechanical room instead, and after running one additional conduit through the attic for the fiber and putting up some plywood, that&apos;s exactly what I got.</p><p>That ONT looked a little lonely up there all by itself, so I just had to add a few more things. These are some photos of the build, and what it looks like today.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210726_012000638.MP.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210726_014156040.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210726_025035613.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210731_213714009--1-.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div 
class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210802_192450978.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211012_011524538.jpg" width="3024" height="4032" loading="lazy" alt="Homelab Hardware Updates"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211019_083832253.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211120_010929424.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211219_001254831.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div></div></figure><p>Major components from right to left: Ziply enclosure (housing a FOG420 ONT), <a href="https://www.silicondust.com/hdhomerun/?ref=blog.brianewell.com">SiliconDust HDHomeRun</a> HDHR5-4K network tuner with 3D printed mounting bracket from an <a href="https://www.ebay.com/usr/3ddesignedcreations?ref=blog.brianewell.com">excellent manufacturer on eBay</a>, vertically mounted <a href="https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-s-series-switches/data_sheet_c78-726680.html?ref=blog.brianewell.com">Cisco Catalyst 2960-S-24PD-L</a> PoE switch on a <a href="https://www.startech.com/en-us/server-management/rk119wallv?ref=blog.brianewell.com">1U vertical mounting bracket</a>. 
Power is routed along the bottom and data is routed along the top, supported with <a href="https://southwire.com/electrical-components/raceway-support-hanging-box-positioning/j-cable-support-hook-3-4-/p/p-JHK-12?ref=blog.brianewell.com">Southwire J-Hooks</a> and <a href="https://www.allentel.com/product/distribution-ring-die-cast-aluminum-4-7-8-x-1-7-8-x-3-1-8-x-2-3-4-in/?ref=blog.brianewell.com">Allen Tel Distribution Rings</a> on the left side of the board. Cables are secured to the distribution rings using Velcro straps.</p><p>The major regions of this board are ethernet on the left, coax (from a large antenna in the attic) in the middle, and the Gigabit Passive Optical Network (GPON) ONT on the right. There&apos;s still plenty of room in the middle of the board for additional network tuners or even a cable modem if I&apos;m desperate.</p><p>Additional things to do on the demarcation point:</p><ul><li>Dress and properly anchor the antenna coax and ethernet cables that run vertically down between the HDHomeRun and the ONT.</li><li>Obtain the correct power cable for the server to bypass the UPS with one of its power supplies. Might just do this with a flush cable extension, more on that later.</li><li>Properly dress the power cables going along the bottom of the board.</li></ul><h2 id="updated-an-actual-core-switch"><s>Updated</s> <em>An Actual</em> Core Switch</h2><p>I&apos;ve been running without a proper switch for years now. While this generally hadn&apos;t been an issue, I recently upgraded to <a href="https://www.engeniustech.com/online-store/product/ews357ap-wi-fi-6-2x2-11ax-indoor-wireless-access-point/?ref=blog.brianewell.com">EnGenius Wi-Fi 6 2x2 EWS357AP</a> access points, and my new home hypervisor hardware has 10GBE ports, meaning I was going to outgrow the desktop switch I had sitting next to my current server. Any core switch was going to have the following requirements:</p><ul><li>802.3af PoE support for Access Points and IP Phones. 
802.3at PoE support is a bonus, but no specific requirement for it.</li><li>SFP+ ports to connect to the nearby hypervisor via DAC cables.</li><li>802.3ad Link Aggregation Control Protocol (LACP) support, because why connect a server over a single 10GBE link when you can connect over two?</li><li>802.1Q VLAN support so that I can properly sequester devices and run multiple isolated SSIDs.</li></ul><p>While there are <a href="https://store.ui.com/collections/unifi-network-switching/products/usw-pro-24-poe?ref=blog.brianewell.com">plenty of new switches</a> that would fulfil these requirements, they can end up being pretty expensive. Enter the Cisco Catalyst 2960-S-24PD-L, a switch that has every feature I was looking for at a fraction of the cost on the used market.</p><p>Sure it&apos;s loud, and takes forever to boot up, but my new server is louder, and how often will I be rebooting this switch? My only real complaint with it so far is how generally finicky Cisco IOS is. This is probably just the learning curve though, and again, is probably inconsequential once I get the switch configured as I need it.</p><h2 id="a-new-ceiling-mounted-rack">A New Ceiling-Mounted Rack</h2><p>I wanted to ensure that all of my homelab gear took up as little space as possible, as close to the demarcation point as possible. 
The natural solution was to bolt it all right to the ceiling.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20210930_230918924.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211009_055509861.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211020_034954452.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211219_001306604.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div></div></figure><p>This custom 7U (with room for an additional 1U above) rack is lag bolted to the ceiling joists. It perfectly occupies the space here, providing just enough clearance around the front and back of the rack for servicing the demarcation point, the back of the rack, and deploying gear into and removing gear from the front of the rack. 
While 7U may seem like an odd decision, it&apos;s just enough space for two 2U UPSes, a 2U (or two 1U) server, and an automatic transfer switch for any critical components that don&apos;t have redundant power supplies, which is the likely maximum deployment of hardware to this space.</p><h2 id="updated-server-hardware">Updated Server Hardware</h2><p>While I&apos;m still very fond of my <a href="https://blog.brianewell.com/hp-n54l-microserver/">HP N54L</a>, I realistically outgrew it for a core hypervisor years ago in all categories:</p><ul><li>CPU (AMD Turion II Neo, 2 cores, 2.2GHz): I can peg one core and nearly peg the other when running a speedtest over uncapped GPON using a SmartOS zone as a NATing router, which only hits around 75% of advertised bandwidth. Real-time Plex transcoding is out of the question, as is line-speed rsync (via rsyncd, so no ssh overhead) over gigabit, all due to CPU limitations.</li><li>Memory (16GB DDR3 ECC): Upon getting this system I immediately upgraded the memory to the maximum supported 16GB. This was great at the time, and I&apos;d often see ZFS ARC hit rates of 99%, but as time went on and I ran more and more workloads on it, I found my <code>c</code> values and hit rates dropping, sometimes as low as 90% for stretches. I know, 90%, boo-hoo! The bigger issue was that there&apos;s no additional room for memory upgrades if the need arises.</li><li>Storage capacity (4x HGST 8TB SATA Hard Drives): I&apos;ve been running at over 90% capacity on my ZFS pool for a year, and 99% for the last few months on a 4-drive RAIDZ based pool.</li><li>Storage bandwidth: Besides some weird missing space issues in this pool, probably due to the disadvantageous 4-drive RAIDZ layout, the system doesn&apos;t have the best sequential read performance either, topping out at just over 300MBps. 
While this is perfectly fine for a gigabit limited network file server, it&apos;s less suitable for local containers, and a complete non-starter for any significant network upgrades.</li><li>IO Expandability: The N54L comes with a single on-board gigabit ethernet port and PCIe 3.0 x16 and x1 ports. While the x1 port is pretty suitable for a second gigabit ethernet NIC, the fact that there&apos;s only a single x16 port means you&apos;re going to either be limited to improved networking (a quad 10GBE NIC <em>or</em> dual 40GBE NIC) or improved storage (NVMe card), but not both, which defeats the purpose of either.</li></ul><p>Fortunately, a good friend was selling a server he had just grown out of for an excellent price. Enter the Dell R730XD that came with the following configuration:</p><ul><li>2x Intel Xeon E5-2678v3 CPUs (base clock: 2.5GHz, turbo clock: 3.1GHz, cores: 12, threads: 24, TDP: 120W)</li><li>128GB DDR4 ECC Memory</li><li>16x Hitachi 8TB SAS Hard Drives</li><li>Dell PERC H730 Mini Mono RAID Controller</li><li>Dell F6PCP Emulex based Quad 10GBE SFP+ Network Daughter Card</li><li>Dell iDRAC8 Enterprise</li><li>Dell Rear Flex Bay 2.5&quot; Drive Backplane Kit</li><li>Two out of the three riser kits (missing Riser1)</li><li>Dell PowerEdge 2U Ready Rails</li></ul><p>I added the following additional hardware components:</p><ul><li>Dell 68M95 Intel X710 based Quad 10GBE SFP+ Network Daughter Card, as the Emulex based one wasn&apos;t recognized by drivers under Illumos and was flaky under FreeBSD. While this could probably be fixed, the Intel based card wasn&apos;t too expensive used and the cost could be recouped by selling the Emulex card.</li><li>Dell PERC HBA330 Mini Mono Host Bus Adapter, as the H730 was flaky under Illumos while being load tested. 
The H730 could probably be resold for more than the cost of the HBA330, but I&apos;m more apt to keep this around to try to figure out what&apos;s going on with the driver.</li><li>Cisco QLogic QL45412HLCU dual port 40GB QSFP+ NIC intended for use in experiments with high speed connectivity to a single remote system (workstation) to see what that&apos;s like.</li><li><a href="https://www.asus.com/us/Motherboards-Components/Motherboards/Accessories/HYPER-M-2-X16-CARD-V2/?ref=blog.brianewell.com">ASUS Hyper M.2 x16 PCIe v3.0 x4 v2</a>. A 4x4 M.2 NVMe to PCIe v3.0 x16 expansion card that allows for four M.2 NVMe drives to be connected to the host to experiment with special vdevs. This is the main reason I opted for the R730XD over the R720XD, as PCIe bifurcation is required to use this card effectively, and is only available on the later model.</li><li>Dell Riser 1 (adding 3 PCIe v3.0 x8 slots). I did this for the NV1604s mentioned below, but since I ended up not using them, this part wasn&apos;t required. I may end up loading them up with <a href="https://www.supermicro.com/en/products/accessories/addon/AOC-SLG3-2M2.php?ref=blog.brianewell.com">Supermicro AOC-SLG3-2M2</a> cards if I like special vdevs on ZFS and need more space, or perhaps Intel Optane drives to experiment with L2ARC/SLOG.</li><li><s>Two <a href="https://www.microsemi.com/product-directory/flashtec-nvram-drives/4087-nv1604?ref=blog.brianewell.com">Flashtec NV1604 NVRAM PCIe v3.0 x8 NVMe drives</a> for use as mirrored SLOG devices.</s> After having re-read the documentation, it turns out that these cards need to be directed to store their contents to flash upon every power-loss event (I really wish <a href="https://www.storagereview.com/review/pmc-nv1604-flashtec-nvram-drive-review?ref=blog.brianewell.com">Storage Review</a> had mentioned that). 
While this does improve flash longevity, it&apos;s also quite unfortunate, as it makes them ill-suited for use as SLOG devices: either a third party process will have to run in the global zone to manage these cards, or ZFS will have to be extended to do so. Neither of these approaches is overly palatable, so for now I&apos;m running without any dedicated logging devices.</li><li>Dell PowerEdge 2U Cable Management Arm. Completely unnecessary, but it makes the cable management that much better.</li></ul><p>The ASUS Hyper card ended up conflicting with the power distribution cables of the drive midplane within the R730XD, so I had it milled down there, and around where the card support arm of the case would swing out to support the weight of the card. This is what it looks like after those modifications.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211020_024711480.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211020_093312153.NIGHT.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211027_001910912.MP.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211027_001936787.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div></div></figure><p>This server hardware has expansion opportunities in all directions, and should suit my needs for quite a few years to come.</p><h2 id="a-new-uninterruptable-power-supply">A New Uninterruptable Power 
Supply</h2><p>Since the new server was going to be rack mounted to the ceiling next to the demarcation point in the mechanical room, it really made sense to buy a rack mount UPS, as my other UPSes (all desktop form factor) would be out of place there. Used APCs work great, so I picked up an APC Smart-UPS (SMT2200RM2U) for cheap, added new batteries and tested to ensure that it works.</p><p>It works.</p><p>Getting the shelf to mount properly was something else though. The 10-32 screws on the front were milled down to fit just flush on the front rails and pairs of nylon washers were used on the back that also just fit in the rail holes to ensure that the rear screws were perfectly aligned when mounting. It was annoying, and took some work, but ultimately, the UPS mounted perfectly in the rack.</p><figure class="kg-card kg-gallery-card kg-width-wide"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211015_002337676.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211018_062652654.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211018_062639278.jpg" width="3024" height="4032" loading="lazy" alt="Homelab Hardware Updates"></div><div class="kg-gallery-image"><img src="https://blog.brianewell.com/content/images/2021/12/PXL_20211018_062648034.jpg" width="4032" height="3024" loading="lazy" alt="Homelab Hardware Updates"></div></div></div></figure><p>Since the server has dual redundant power supplies, one is plugged into the UPS while the other will be plugged into the upstream surge suppressor once I find the correct length and low-profile NEMA receptacle cable. 
This may change down the road if I get a second UPS, but this is fine for now, as my switch gear and ONT are also plugged into this UPS: if there&apos;s a power delivery issue inside the device, I&apos;m going to have problems across the board.</p><h2 id="conclusion">Conclusion</h2><p>With this all in place, I can start to focus on software updates and actually starting to make use of this. Look forward to more content next week!</p><p>Or maybe next year.</p>]]></content:encoded></item><item><title><![CDATA[SmartOS Manifests]]></title><description><![CDATA[<p>SmartOS manifests, also sometimes referred to as zone manifests, are JSON files that describe the resources and permissions to be allocated and granted to a given guest zone on SmartOS. &#xA0;They are used by SmartOS global zones to instance discrete guest zones, either through Triton or directly on the</p>]]></description><link>https://blog.brianewell.com/smartos-manifests/</link><guid isPermaLink="false">6023803717aa9a455c2c451f</guid><category><![CDATA[SmartOS]]></category><category><![CDATA[Zones]]></category><category><![CDATA[ZFS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 29 Jan 2021 06:06:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1542621334-a254cf47733d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDF8fGJsdWVwcmludHxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1542621334-a254cf47733d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDF8fGJsdWVwcmludHxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="SmartOS Manifests"><p>SmartOS manifests, also sometimes referred to as zone manifests, are JSON files that describe the resources and permissions to be allocated and granted to a given guest zone on SmartOS. 
&#xA0;They are used by SmartOS global zones to instance discrete guest zones, either through Triton or directly on the global zone command-line through <code>vmadm</code>.</p><p>Since having the wrong manifest for a SmartOS zone can significantly impact its operation, it seems worth it to dedicate a full article to the topic. &#xA0;We&apos;ll be looking at examples of the different types of SmartOS manifests, as well as exploring the specific properties of each.</p><p>Please note that the properties covered in this article are limited to ones I&apos;ve found useful. &#xA0;A more authoritative source of all of this information is probably the <a href="https://wiki.smartos.org/?ref=blog.brianewell.com">SmartOS wiki</a> and the <code>vmadm</code> manual page.</p><p>I tend to keep my manifests on file in the global zone when running SmartOS as a stand-alone containerizer, usually under <code>/usbkey/vmcfg/</code> or <code>/usbkey/manifests/</code> for easy re-creation of zones.</p><h2 id="native-zones">Native Zones</h2><p>Native SmartOS zones are Illumos based isolated environments that pass everything through to the global zone kernel with no translation at all, except for the isolation provided by virtue of being a zone.</p><p>A slightly modified example manifest from the <a href="https://wiki.smartos.org/how-to-create-a-zone/?ref=blog.brianewell.com">SmartOS wiki</a>:</p><pre><code class="language-json">{
 &quot;brand&quot;: &quot;joyent&quot;,
 &quot;image_uuid&quot;: &quot;1d05e788-5409-11eb-b12f-037bd7fee4ee&quot;,
 &quot;alias&quot;: &quot;test-smartos&quot;,
 &quot;hostname&quot;: &quot;test-smartos&quot;,
 &quot;cpu_cap&quot;: 200,
 &quot;max_physical_memory&quot;: 1024,
 &quot;quota&quot;: 20,
 &quot;delegate_dataset&quot;: true,
 &quot;resolvers&quot;: [&quot;8.8.8.8&quot;, &quot;208.67.220.220&quot;],
 &quot;nics&quot;: [
  {
   &quot;nic_tag&quot;: &quot;admin&quot;,
   &quot;ips&quot;: [&quot;dhcp&quot;]
  }
 ]
}</code></pre><p>For ease of reading, these properties tend to be grouped into the following general arrangements. &#xA0;Properties marked with an asterisk (*) are required for each brand.</p><h3 id="administrative">Administrative</h3><ul><li><code>brand</code>*: String that must be set to <code>joyent</code> or <code>joyent-minimal</code> for this zone type.</li><li><code>image_uuid</code>: String representing the image UUID that this zone should be instanced from. &#xA0;Images are managed by <code>imgadm</code>.</li><li><code>alias</code>: String used for display/lookup purposes from outside the guest zone.</li><li><code>hostname</code>: String used to configure the guest zone&apos;s hostname on creation.</li></ul><h3 id="cpu-process">CPU/Process</h3><ul><li><code>cpu_cap</code>: Integer representing the percentage of a single CPU core available to this zone. &#xA0;A value of 300 represents up to 3 full CPU cores.</li><li><code>cpu_shares</code>: Integer representing the number of fair share scheduler (FSS) shares for this zone. &#xA0;Only meaningful relative to other zones on the system and only applies when there is CPU contention between zones. &#xA0;A value of 5 will mean this zone only has access to \(\frac{1}{4}\) as much CPU time as another zone with the default value of 20.</li><li><code>max_lwps</code>: Integer representing the maximum number of threads a zone is allowed to run. &#xA0;The default value of 2000 should be pretty reasonable.</li></ul><h3 id="memory">Memory</h3><ul><li><code>max_locked_memory</code>: Integer representing the number of MiB of memory this zone is allowed to lock. &#xA0;Locked memory consists of pages that are explicitly marked as non-swappable; this value cannot exceed its default of <code>max_physical_memory</code>.</li><li><code>max_physical_memory</code>: Integer representing the number of MiB of memory this zone is allowed to use.
&#xA0;The default value is 256.</li><li><code>max_swap</code>: Integer representing the number of MiB of virtual memory this zone is allowed to use. &#xA0;The default is <code>max_physical_memory</code> or 256, whichever is greater, and the value cannot be set lower than 256.</li></ul><h3 id="storage">Storage</h3><ul><li><code>quota</code>: Integer representing the number of GiB that this zone&apos;s ZFS dataset should have its quota set to.</li><li><code>delegate_dataset</code>: Boolean that determines if a ZFS dataset will be delegated to this zone on creation. &#xA0;If set to true, this zone will get a dataset at <code>&lt;zoneroot dataset&gt;/data</code> (default of: <code>zones/&lt;uuid&gt;/data</code>.) &#xA0;This dataset can be configured many different ways to optimize for databases, snapshots, etc.</li><li><code>indestructible_delegated</code>: Boolean that determines if the delegated ZFS dataset should have a <code>zfs hold</code> set on it to enable two-step deletion. &#xA0;Use this if you&apos;re worried about accidentally deleting your data.</li><li><code>indestructible_zoneroot</code>: Same as above but for the entire guest zone.</li><li><code>filesystems</code>: Array of JSON objects representing additional filesystems, outside of normal operation, to be mounted within the zone.
&#xA0;Below are the required parameters:</li><li><code>filesystems.*.type</code>: String representing the type of filesystem to be mounted, <code>lofs</code> for a bind mount, <code>pcfs</code> for a PC filesystem, <code>tmpfs</code> etc.</li><li><code>filesystems.*.source</code>: String representing the source directory from the scope of the global zone, primarily useful for <code>lofs</code> mounts.</li><li><code>filesystems.*.target</code>: String representing the mountpoint from the scope of the guest zone.</li><li><code>filesystems.*.raw</code>: String representing a raw device to be associated with the source filesystem; most often, this should be a device file for a drive.</li><li><code>filesystems.*.options</code>: Array of strings representing the mount options for this filesystem when it is mounted into the zone. &#xA0;Eg: <code>[&quot;ro&quot;, &quot;nodevices&quot;]</code></li><li><code>fs_allowed</code>: String representing filesystem types this zone is allowed to mount. &#xA0;If you&apos;re building SmartOS, you will want this as: <code>&quot;ufs,pcfs,tmpfs&quot;</code></li><li><code>tmpfs</code>: Integer representing the number of MiB this zone is allowed to use for its <code>tmpfs</code> mounted at <code>/tmp</code>. &#xA0;Cannot exceed its default value of <code>max_physical_memory</code>.</li><li><code>zfs_filesystem_limit</code>, <code>zfs_snapshot_limit</code>: Integers representing the limits on the number of ZFS filesystems and snapshots a zone can have. &#xA0;Useful when combined with <code>delegate_dataset</code> to prevent runaway resource consumption.</li><li><code>zfs_io_priority</code>: Integer representing the zone&apos;s IO priority when operating on a system with IO contention.
&#xA0;Zones with values less than (or greater than) the default value of 100 will have their IO throttled (or prioritized) when multiple zones contend for all available storage IO.</li></ul><h3 id="network">Network</h3><ul><li><code>resolvers</code>: Array of strings representing DNS resolvers that will be assigned to <code>/etc/resolv.conf</code> upon zone creation.</li><li><code>maintain_resolvers</code>: Boolean that determines if <code>vmadm</code> should update guest zone resolvers when the above property is updated. &#xA0;Default: <code>false</code></li><li><code>nics</code>: Array of JSON objects representing a guest zone&apos;s network interfaces. &#xA0;Below are the required parameters:</li><li><code>nics.*.primary</code>: Boolean representing which vnic should be used for this zone&apos;s default gateway and nameserver values. &#xA0;Only useful with multiple nics.</li><li><code>nics.*.nic_tag</code>: String representing which physical nic or etherstub this vnic should be associated with.</li><li><code>nics.*.vlan_id</code>: Integer representing what vlan tag should be used for this vnic.</li><li><code>nics.*.interface</code>: String representing the interface name this zone will use for this interface. &#xA0;Always in the format of <code>netX</code> where <code>X</code> is an integer \(\geq 0\). &#xA0;This parameter is primarily useful for configuring zones with multiple nics.</li><li><code>nics.*.mac</code>: String representing the MAC address of a vnic. &#xA0;This is useful when interfacing with external systems expecting a specific MAC address.</li><li><code>nics.*.ips</code>: Array of strings representing IPv4 CIDR or IPv6 CIDR addresses for a given vnic.
&#xA0;The special strings <code>&quot;dhcp&quot;</code> and <code>&quot;addrconf&quot;</code> can be used as well to represent the use of DHCPv4 and SLAAC or DHCPv6, respectively.</li><li><code>nics.*.gateways</code>: Array of strings representing IPv4 addresses that this zone should use as network gateways. &#xA0;If multiple gateways are specified, OS-specific behavior will apply (eg round robin on SmartOS). &#xA0;Not required if using DHCP.</li><li><code>nics.*.routes</code>: JSON object that maps network destinations to gateways. &#xA0;Destinations (keys) can be either IP addresses or IP Subnetworks in CIDR notation. &#xA0;Gateways can be either IP addresses or in the form of <code>nics[0]</code> or <code>macs[aa:bb:cc:12:34:56]</code>.</li><li><code>nics.*.allow_dhcp_spoofing</code>, <code>nics.*.allow_ip_spoofing</code>: Booleans that determine if this zone&apos;s vnic should be granted certain permissions. &#xA0;DHCP spoofing is required for DHCP servers. &#xA0;IP spoofing is required for routers.</li><li><code>nics.*.allowed_ips</code>: Array of strings representing additional IP addresses from which this vnic is allowed to send traffic. &#xA0;This is useful for IP address failover schemes between multiple zones.</li><li><code>nics.*.blocked_outgoing_ports</code>: Array of integers representing port numbers to which this vnic is prevented from sending traffic. &#xA0;Eg: <code>[80, 443, 8080]</code></li></ul><h3 id="additional-properties">Additional Properties</h3><ul><li><code>limit_priv</code>: String representing the list of privileges that will be available to this zone. &#xA0;The default is normally fine, but some applications may require special permissions to run properly. &#xA0;For instance, FreeSwitch apparently needs <code>&quot;default,proc_clock_highres,proc_priocntl&quot;</code> to enable the use of high resolution timers with very small time values and for better control over its scheduling class, both probably important for low-latency voice.
&#xA0;See <code>man 5 privileges</code>.</li><li><code>customer_metadata</code>: JSON object representing metadata to be associated with this VM. &#xA0;This data can be accessed from within the guest zone by using the <code>mdata-get</code> command, even from other values within this same object, eg:</li></ul><pre><code class="language-json">&quot;customer_metadata&quot;: {
 &quot;root_authorized_keys&quot;: &quot;ssh-ed25519 &lt;key data&gt;&quot;,
 &quot;user-script&quot;: &quot;/usr/sbin/mdata-get root_authorized_keys &gt; /root/.ssh/authorized_keys&quot;
}</code></pre><h2 id="linux-branded-guest-manifests">Linux Branded Guest Manifests</h2><p>Linux Branded SmartOS zones are a Linux user-space with an additional translation layer that converts Linux ABI calls from the user-space into Illumos ABI calls before passing them on to the Illumos kernel, effectively allowing Linux user applications to operate under an Illumos kernel.</p><p>A slightly modified example manifest from the <a href="https://wiki.smartos.org/lx-branded-zones/?ref=blog.brianewell.com">SmartOS wiki</a>:</p><pre><code class="language-json">{
 &quot;brand&quot;: &quot;lx&quot;,
 &quot;kernel_version&quot;: &quot;4.2.0&quot;,
 &quot;image_uuid&quot;: &quot;63d6e664-3f1f-11e8-aef6-a3120cf8dd9d&quot;,
 &quot;alias&quot;: &quot;test-debian9&quot;,
 &quot;hostname&quot;: &quot;test-debian9&quot;,
 &quot;cpu_cap&quot;: 400,
 &quot;max_physical_memory&quot;: 4096,
 &quot;quota&quot;: 1000,
 &quot;resolvers&quot;: [&quot;192.168.180.1&quot;, &quot;8.8.8.8&quot;],
 &quot;nics&quot;: [
  {
   &quot;nic_tag&quot;: &quot;external&quot;,
   &quot;vlan_id&quot;: 180,
   &quot;ips&quot;: [&quot;192.168.180.182/24&quot;],
   &quot;gateways&quot;: [&quot;192.168.180.1&quot;]
  }
 ]
}</code></pre><p>The properties of Linux branded zones are almost identical to SmartOS zones, with the following differences:</p><h3 id="administrative-1">Administrative</h3><ul><li><code>brand</code>*: String that must be set to <code>lx</code> for this zone type.</li><li><code>kernel_version</code>: String representing the version of Linux to report/emulate.</li></ul><p>As of January 2021, not all ABI functionality of the latest Linux kernels is supported by the Linux translation layer, meaning that many modern distributions fail to function correctly. &#xA0;This is being worked on.</p><h2 id="hvm-guest-manifests">HVM Guest Manifests</h2><p>Hardware Virtual Machine (HVM) Guest zones contain a hardware virtualization suite utilizing either KVM or Bhyve to emulate hardware for any operating system that can run as a guest.</p><p>A slightly modified example manifest from the <a href="https://wiki.smartos.org/how-to-create-an-hvm-zone/?ref=blog.brianewell.com">SmartOS wiki</a>:</p><pre><code class="language-json">{
 &quot;brand&quot;: &quot;bhyve&quot;,
 &quot;alias&quot;: &quot;test-debian10&quot;,
 &quot;hostname&quot;: &quot;test-debian10&quot;,
 &quot;vcpus&quot;: 4,
 &quot;ram&quot;: 4096,
 &quot;disks&quot;: [
  {
   &quot;image_uuid&quot;: &quot;9bcfe5cc-007d-4f23-bc8a-7e7b4d0c537e&quot;,
   &quot;model&quot;: &quot;virtio&quot;,
   &quot;boot&quot;: true
  }
 ],
 &quot;resolvers&quot;: [&quot;208.67.222.222&quot;, &quot;8.8.4.4&quot;],
 &quot;nics&quot;: [
  {
   &quot;nic_tag&quot;: &quot;admin&quot;,
   &quot;ips&quot;: [&quot;10.33.33.33/24&quot;],
   &quot;gateways&quot;: [&quot;10.33.33.1&quot;],
   &quot;model&quot;: &quot;virtio&quot;,
   &quot;primary&quot;: true
  }
 ]
}</code></pre><p>While there&apos;s quite a bit of divergence between the properties of OS (<code>joyent</code> and <code>lx</code> branded zones) and HVM (<code>kvm</code> and <code>bhyve</code> branded zones), most of the OS properties actually still apply, but to the zone performing the virtualization rather than to the guest.</p><p>Please also note that some of these properties are specific to <code>bhyve</code> while others are specific to <code>kvm</code>. &#xA0;I will try to illustrate which is which below:</p><h3 id="administrative-2">Administrative</h3><ul><li><code>brand</code>: String representing which hardware virtualization suite to use for this VM. &#xA0;Must be either <code>kvm</code> or <code>bhyve</code>.</li><li><code>bhyve_extra_opts</code>, <code>qemu_extra_opts</code>: Strings representing additional <code>bhyve</code> and <code>kvm</code> command-line parameters to be appended to the end of the commands. &#xA0;While this was intended for debugging, it&apos;s also generally useful.</li><li><code>boot</code>: String representing the boot order for <code>kvm</code> VMs. Expected format is <code>order=X*</code> where each X is either <code>c</code> for the hard drive, <code>d</code> for the first CD-ROM drive, or <code>n</code> for network boot. &#xA0;eg: <code>order=cdn</code> would boot from the hard drive, CD-ROM drive, and network, in that order.</li><li><code>bootrom</code>: String representing the bootrom to use under <code>bhyve</code>. &#xA0;Values are either <code>bios</code>, <code>uefi</code> or a path to a custom bootrom binary relative to the guest zone root.</li></ul><h3 id="cpu-process-1">CPU/Process</h3><ul><li><code>vcpus</code>: Integer representing the number of virtual CPUs the guest will see.
&#xA0;This property can be used with <code>cpu_cap</code> and <code>cpu_shares</code> to more closely control CPU utilization.</li></ul><h3 id="memory-1">Memory</h3><ul><li><code>ram</code>: Integer representing the number of MiB of memory that will be made available to the guest kernel. &#xA0;This should be used in place of <code>max_physical_memory</code>, as SmartOS will allocate additional memory beyond this value to cover the overhead of <code>bhyve</code> or <code>qemu</code>.</li></ul><h3 id="storage-1">Storage</h3><ul><li><code>disks</code>: Array of JSON objects representing disks that should be associated with this VM.</li><li><code>disks.*.block_size</code>: Integer representing the block size of the disk. &#xA0;This property can only be set during disk creation, and cannot be set when cloning a disk.</li><li><code>disks.*.boot</code>: Boolean representing if this disk should be bootable.</li><li><code>disks.*.guest_block_size</code>: String representing the device block size reported to the guest. &#xA0;By default, the block size of the underlying device is reported to the guest. &#xA0;This setting will override the default value. &#xA0;It also supports reporting of both physical and logical block sizes using a string in the form of <code>&quot;logical size/physical size&quot;</code>, eg: <code>&quot;512/4096&quot;</code> to look like a 512e drive. &#xA0;Values must always be powers of 2.</li><li><code>disks.*.refreservation</code>: Integer representing the size of this refreservation in MiB.</li><li><code>disks.*.size</code>: Integer representing the size of this disk in MiB.
&#xA0;This property is mutually exclusive with <code>image_uuid</code>, and is useful for creating empty disks.</li><li><code>disks.*.media</code>: String representing whether this disk is a <code>&quot;disk&quot;</code> or a <code>&quot;cdrom&quot;</code>.</li><li><code>disks.*.model</code>: String representing the driver that should be used by the guest to access this disk. &#xA0;Should be one of <code>&quot;virtio&quot;</code>, <code>&quot;ide&quot;</code> or <code>&quot;scsi&quot;</code>.</li><li><code>disk_driver</code>: String representing the default values for <code>disks.*.model</code> above.</li><li><code>flexible_disk_size</code>: Integer representing the number of MiB of storage space that a <code>bhyve</code> instance may use for its disks and snapshots of those disks. &#xA0;This value should be larger than \(\sum_{d \in \text{disks}} \text{size}_d\), the combined size of all of the instance&apos;s disks.</li></ul><h3 id="network-1">Network</h3><ul><li><code>nics.*.allow_unfiltered_promisc</code>: Boolean representing if this guest should be able to utilize multiple MAC addresses, eg: running SmartOS with vnics. &#xA0;Really only suitable for testing containerizors from within a VM.</li><li><code>nics.*.model</code>: String representing the driver that should be used by the guest to access this vnic. &#xA0;Should be one of <code>&quot;virtio&quot;</code>, <code>&quot;e1000&quot;</code> or <code>&quot;rtl8139&quot;</code>.</li><li><code>nic_driver</code>: String representing the default values for <code>nics.*.model</code> above.</li></ul><h3 id="additional-properties-1">Additional Properties</h3><ul><li><code>vnc_port</code>: Integer representing the TCP port that the VNC server attached to this VM will listen on. &#xA0;0 (default) will choose a port at random, -1 will disable the VNC server.</li><li><code>vnc_password</code>: String representing the password which will be required when authenticating to the VNC server.
&#xA0;This password will be visible from the global zone, and is limited to a maximum of 8 characters.</li></ul>]]></content:encoded></item><item><title><![CDATA[Plex on SmartOS]]></title><description><![CDATA[<p><a href="https://plex.tv/?ref=blog.brianewell.com">Plex</a> is a client-server media player suite combined with a hybrid video streaming service, and is currently the most popular do-it-yourself method of media streaming.</p><p>While the <a href="https://github.com/plexinc/plex-media-player?ref=blog.brianewell.com">Plex Media Player</a> is open source, the Plex Media Server is closed source, and only distributed by Plex to a discrete set of</p>]]></description><link>https://blog.brianewell.com/plex-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138c8</guid><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 22 Jan 2021 06:42:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1478720568477-152d9b164e26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE1fHxjaW5lbWF8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1478720568477-152d9b164e26?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE1fHxjaW5lbWF8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Plex on SmartOS"><p><a href="https://plex.tv/?ref=blog.brianewell.com">Plex</a> is a client-server media player suite combined with a hybrid video streaming service, and is currently the most popular do-it-yourself method of media streaming.</p><p>While the <a href="https://github.com/plexinc/plex-media-player?ref=blog.brianewell.com">Plex Media Player</a> is open source, the Plex Media Server is closed source, and only distributed by Plex to a discrete set of platforms, excluding Illumos or SmartOS. 
&#xA0;Fortunately, Plex works fine within an LX branded zone, which is exactly how we will be setting it up today.</p><h2 id="smartos-zone-configuration">SmartOS Zone Configuration</h2><p>While Void Linux&apos;s incredibly light default memory footprint would be ideal for this, Plex tends to update their media server software regularly, and doesn&apos;t distribute packages for Void&apos;s package manager (XBPS); I would prefer a well-integrated update mechanism, so this project is probably best suited for a Debian or Ubuntu image.</p><p>This zone will also be using a read-only <code>lofs</code> filesystem mount between this zone and the file server zone featured in <a href="https://blog.brianewell.com/samba-on-smartos">a previous article</a>. &#xA0;This follows the principle of least privilege, in that the Plex Media Server can only read media (and not modify or delete it) while also isolating our bulk storage in a different zone. &#xA0;This specific approach unfortunately blocks this zone from automatically booting (as the source path is not available until the other zone has fully booted), and it will need to be manually started after your global zone boots.</p><p>Below is an example manifest; please note that <code>{uuid}</code> refers to the uuid of the bulk storage zone:</p><pre><code class="language-json">{
  &quot;image_uuid&quot;: &quot;63d6e664-3f1f-11e8-aef6-a3120cf8dd9d&quot;,
  &quot;brand&quot; : &quot;lx&quot;,
  &quot;kernel_version&quot;: &quot;4.2.0&quot;,
  &quot;alias&quot;: &quot;plex&quot;,
  &quot;hostname&quot;: &quot;plex&quot;,
  &quot;cpu_cap&quot;: 200,
  &quot;max_physical_memory&quot;: 2048,
  &quot;quota&quot;: 20,
  &quot;delegate_dataset&quot;: true,
  &quot;filesystems&quot;: [
    {
      &quot;type&quot;: &quot;lofs&quot;,
      &quot;source&quot;: &quot;/zones/{uuid}/root/home/brian/media&quot;,
      &quot;target&quot;: &quot;/media&quot;,
      &quot;options&quot;: [ &quot;ro&quot; ]
    }
  ],
  &quot;resolvers&quot;: [ &quot;1.1.1.1&quot; ],
  &quot;nics&quot;: [
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;ips&quot;: [ &quot;dhcp&quot; ],
      &quot;primary&quot;: true
    }
  ],
  &quot;customer_metadata&quot;: {
    &quot;root_authorized_keys&quot;: &quot;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrStZlHS0yfE8n71meairBGvFnc5mlDFNKAJy7tQMi2&quot;,
    &quot;user-script&quot;: &quot;/usr/sbin/mdata-get root_authorized_keys &gt; /root/.ssh/authorized_keys&quot;
  }
}</code></pre><p>For more details about the properties configured for this manifest, please <a href="https://blog.brianewell.com/smartos-manifests/">read this article</a>.</p><p>We&apos;re expecting regular and predictable CPU and memory usage per transcoded stream from this zone; if you&apos;re experiencing transcoding performance issues, you may need to increase the values for <code>cpu_cap</code> and <code>max_physical_memory</code>. &#xA0;In testing I&apos;ve found that 1GiB of memory is suitable for a single viewer who is direct streaming video. &#xA0;These requirements are likely much higher for multiple transcoding users.</p><p>The <code>quota</code> value of 20GiB is fine for many collections, as that only applies to this zone (which only holds the Plex databases and meta-files, not the actual media files, which have been mounted from another zone). &#xA0;Depending on how large your media collection grows, this value may need to be increased.</p><h2 id="ansible-play">Ansible Play</h2><p>This Ansible play is relatively simple and straightforward.
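</p><p>As a point of reference, the play targets a <code>plex</code> group in the Ansible host inventory; a minimal sketch of such an inventory entry might look like the following (the address shown is an illustrative assumption, substitute your zone&apos;s actual IP):</p><pre><code class="language-yaml">plex:
  hosts:
    192.168.180.183:
      ansible_user: root</code></pre><p>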
&#xA0;It&apos;s broken up into two parts: a <code>common-debian</code> role that mirrors the common role for SmartOS:</p><ul><li>The <code>PATH</code> variable includes executables under <code>/native</code>.</li><li>The hostname has been set.</li><li>The default delegated ZFS filesystem has been unmounted.</li><li>All packages have been upgraded.</li><li>The SSH server is configured to only accept public-key authentication.</li></ul><p>As well as a <code>plex</code> role that performs Plex-specific configuration on the zone:</p><ul><li>The Plex package signing key has been added to apt.</li><li>The Plex repository has been added to apt.</li><li>A ZFS filesystem is mounted at <code>/var/lib/plexmediaserver</code>.</li><li>A recordsize-tuned ZFS filesystem has been mounted for the Plex sqlite3 databases.</li><li>Ownership of <code>/var/lib/plexmediaserver</code> is set to <code>999:999</code> (necessary for Plex to install to it).</li><li>Plex Media Server is installed.</li><li>Plex Media Server is enabled.</li><li>The system is configured to update all packages regularly, including Plex Media Server.</li></ul><p>Below is an example Ansible play:</p><pre><code class="language-yaml">---
- name: &apos;Testing Plex&apos;
  hosts: plex
  roles:
  - plex
  vars:
    hostname: plex</code></pre><h2 id="plex-configuration">Plex Configuration</h2><p>This Plex Media Server will now need to be associated with your Plex account. &#xA0;Point a web browser to <code>https://&lt;ip address&gt;:32400/web</code> to continue.</p><h2 id="port-forwarding">Port Forwarding</h2><p>If your network employs NAT, you may want to port forward your plex server at your router to ensure that clients outside of your local network can have access.</p><p>If you are using a SmartOS Zone to route IP traffic for your network, ensure that the following line exists within <code>/etc/ipf/ipnat.conf</code>, where <code>&lt;external ip&gt;</code> is your router&apos;s public IP address and <code>&lt;internal ip&gt;</code> is your plex server&apos;s IP address:</p><pre><code class="language-/etc/ipf/ipnat.conf"># Plex Redirection
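# (assumes net0 is the external-facing interface of the router zone; adjust
# the interface name to match your environment)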
rdr net0 &lt;external ip&gt;/32 port 32400 -&gt; &lt;internal ip&gt; port 32400 tcp</code></pre><h2 id="conclusion">Conclusion</h2><p>The read-only <code>lofs</code> filesystem mount might be better employed mounting a ZFS source filesystem that exists outside of any zone, with a second read-write <code>lofs</code> filesystem mount connecting that same source filesystem to another zone to allow for editing. &#xA0;This configuration might end up being more trouble than it&apos;s worth as well, since ZFS and non-ZFS filesystem mounting is not interlaced when Zones boot up.</p><p>For now, I just remind myself to manually start the Plex zone whenever I reboot the global zone. &#xA0;Perhaps there&apos;s an opportunity to extend SmartOS to respect Zone boot order, specifically allowing Zones to specify dependency zones that need to be booted up before they can boot.</p><p>Also, a special thanks to the Lights and Shapes blog for <a href="http://lightsandshapes.com/plex-on-smartos/?ref=blog.brianewell.com">writing about this</a> significantly earlier than I have.</p>]]></content:encoded></item><item><title><![CDATA[Samba on SmartOS]]></title><description><![CDATA[<p>While <a href="https://www.truenas.com/?ref=blog.brianewell.com">TrueNAS</a> is a powerful tool for setting up simple and easy network addressable storage, there&apos;s a lot of unnecessary feature overlap between TrueNAS and SmartOS. 
&#xA0;Both operating systems prefer direct hardware access, and passing through ZFS volumes to HVM for TrueNAS to format as ZFS would</p>]]></description><link>https://blog.brianewell.com/samba-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138b6</guid><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 15 Jan 2021 02:53:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1587293852726-70cdb56c2866?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDl8fHdhcmVob3VzZXxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1587293852726-70cdb56c2866?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDl8fHdhcmVob3VzZXxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Samba on SmartOS"><p>While <a href="https://www.truenas.com/?ref=blog.brianewell.com">TrueNAS</a> is a powerful tool for setting up simple and easy network addressable storage, there&apos;s a lot of unnecessary feature overlap between TrueNAS and SmartOS. &#xA0;Both operating systems prefer direct hardware access, and passing through ZFS volumes to HVM for TrueNAS to format as ZFS would just be sacrilege to both parties.</p><p>Fortunately, it&apos;s trivial to configure a lightweight SmartOS zone to act as a network addressable storage (NAS) provider, albeit without the graphical configuration and monitoring of TrueNAS, but instead with the raw expressive power of configuration files!</p><p>Seriously though, the configuration is not that hard, and results in a solution that&apos;s both minimalistic and unassuming. 
&#xA0;Unlike TrueNAS, the approach documented here will bifurcate Samba configuration from snapshots/replication and system resource monitoring, two topics that we will address separately with more robust, hypervisor encompassing solutions in later articles.</p><h2 id="why-samba">Why Samba?</h2><p>The <a href="https://wiki.smartos.org/?ref=blog.brianewell.com">official SmartOS wiki</a> describes <a href="https://wiki.smartos.org/configuring-smb-in-smartos/?ref=blog.brianewell.com">a method for serving SMB from the kernel</a>. &#xA0;While this works, it doesn&apos;t quite offer the flexibility that Samba offers you.</p><h2 id="unmap-drives">Unmap Drives</h2><p>Depending on how you&apos;ve configured Samba, if you&apos;re upgrading from a previous SmartOS virtual machine image, it can be helpful to disconnect any drive mappings from your previous NAS before proceeding.</p><figure class="kg-card kg-image-card"><img src="https://blog.brianewell.com/content/images/2021/01/image.png" class="kg-image" alt="Samba on SmartOS" loading="lazy" width="382" height="252"></figure><h2 id="smartos-zone-configuration">SmartOS Zone Configuration</h2><p>We&apos;ll be providing NAS service from an isolated SmartOS zone. &#xA0;Below is an example manifest:</p><pre><code class="language-json">{
  &quot;image_uuid&quot;: &quot;1d05e788-5409-11eb-b12f-037bd7fee4ee&quot;,
  &quot;brand&quot;: &quot;joyent&quot;,
  &quot;alias&quot;: &quot;samba&quot;,
  &quot;hostname&quot;: &quot;samba&quot;,
  &quot;cpu_cap&quot;: 100,
  &quot;cpu_shares&quot;: 25,
  &quot;max_physical_memory&quot;: 256,
  &quot;quota&quot;: 20480,
  &quot;zfs_io_priority&quot;: 25,
  &quot;delegate_dataset&quot;: true,
  &quot;resolvers&quot;: [ &quot;10.0.0.1&quot; ],
  &quot;nics&quot;: [
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;ips&quot;: [ &quot;10.0.0.3/24&quot; ],
      &quot;gateways&quot;: [ &quot;10.0.0.1&quot; ],
      &quot;primary&quot;: true
    }
  ],
  &quot;customer_metadata&quot;: {
    &quot;root_authorized_keys&quot;: &quot;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrStZlHS0yfE8n71meairBGvFnc5mlDFNKAJy7tQMi2&quot;,
    &quot;user-script&quot;: &quot;/usr/sbin/mdata-get root_authorized_keys &gt; /root/.ssh/authorized_keys&quot;
  }
}</code></pre><p>For more details about the properties configured for this manifest, please <a href="https://blog.brianewell.com/smartos-manifests/">read this article</a>.</p><p>Be sure to set the <code>resolvers</code>, <code>nics</code> and <code>root_authorized_keys</code> parameters as appropriate for your environment. &#xA0;Create the zone, and assign its IP address to a relevant Ansible host inventory.</p><h2 id="samba-configuration">Samba Configuration</h2><p>Use the <code>samba</code> Ansible role from <code>ansible-smartos-tricks</code> to configure Samba on your zone. &#xA0;This will automatically deploy a ZFS filesystem at <code>/home</code> if a dataset has been delegated, and configure Samba as it normally comes configured when installed directly by pkgin.</p><p>Additionally, this role can modify the Samba configuration through the <code>samba</code> variable. &#xA0;Included below are my default Samba configuration modifications that specify a workgroup name, limit remote host access, and include <code>shadow_copy2</code> on the homes shares, configured to treat any ZFS snapshots as VSS snapshots made available over Samba:</p><pre><code>---
- name: &apos;Configuring Samba Servers&apos;
  hosts: samba
  roles:
  - samba
  vars:
    vim:
      colorscheme: elflord
    samba:
      global:
        &apos;workgroup&apos;: &apos;LOCALNET&apos;
        &apos;hosts allow&apos;: &apos;192.168.0. 127. fe80::&apos;
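      # the homes settings below expose ZFS snapshots to clients as VSS
      # (Previous Versions) via shadow_copy2; the shadow:format value is
      # assumed to match your snapshot naming scheme and may need adjusting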
      homes:
        &apos;vfs objects&apos;: &apos;shadow_copy2&apos;
        &apos;shadow:snapdir&apos;: &apos;.zfs/snapshot&apos;
        &apos;shadow:sort&apos;: &apos;desc&apos;
        &apos;shadow:format&apos;: &apos;r%Y%m%d%H%M%S&apos;
        &apos;shadow:snapdirseverywhere&apos;: &apos;yes&apos;
        &apos;shadow:crossmountpoints&apos;: &apos;yes&apos;</code></pre><p>For a stand-alone server, both system and corresponding Samba users will need to be created in the zone to access the Samba share; this can be done as follows:</p><pre><code>[root@samba ~]# useradd -m brian
144 blocks
[root@samba ~]# smbpasswd -a brian
New SMB password:
Retype new SMB password:
Added user brian.</code></pre><p>At this point, you should be able to connect to the server, log in, and access your shares.</p><h2 id="managing-zfs-snapshots">Managing ZFS Snapshots</h2><p>The <code>vfs_shell_snap</code> module can be added to your Samba shares, allowing remote users to manage the manual creation and deletion of snapshots on your shares. &#xA0;I prefer not to do this, instead leaving it to automatic server-side processes, as control over this functionality may be hijacked by ransomware and maliciously used against the user.</p><h2 id="clamav-configuration">ClamAV Configuration</h2><p>Samba also ships with the <code>vfs_virusfilter</code> module that allows for the scanning and filtering of virus files on Samba file servers with an external anti-virus scanner, such as ClamAV. &#xA0;While this may seem cool, ClamAV <strong>requires over 1GB of additional memory to operate</strong>.</p><p>The following ansible play enables ClamAV alongside Samba on this zone and configures Samba to scan files with ClamAV before serving them to clients:</p><pre><code>---
- name: &apos;Configuring Samba Servers&apos;
  hosts: samba
  roles:
  - clamav
  - samba
  vars:
    vim:
      colorscheme: elflord
    samba:
      global:
        &apos;workgroup&apos;: &apos;LOCALNET&apos;
        &apos;hosts allow&apos;: &apos;192.168.0. 127. fe80::&apos;
      homes:
        &apos;vfs objects&apos;: &apos;virusfilter shadow_copy2&apos;
        &apos;virusfilter:scanner&apos;: &apos;clamav&apos;
        &apos;virusfilter:socket path&apos;: &apos;/var/clamav/clamd.sock&apos;
        &apos;shadow:snapdir&apos;: &apos;.zfs/snapshot&apos;
        &apos;shadow:sort&apos;: &apos;desc&apos;
        &apos;shadow:format&apos;: &apos;r%Y%m%d%H%M%S&apos;
        &apos;shadow:snapdirseverywhere&apos;: &apos;yes&apos;
        &apos;shadow:crossmountpoints&apos;: &apos;yes&apos;</code></pre><h2 id="dataset-transfer">Dataset Transfer</h2><p>Datasets from a previous file server zone can be transferred to this new file server zone using some previous <a href="https://blog.brianewell.com/transferring-zone-delegated-datasets">documentation on the topic</a>.</p><h2 id="active-directory">Active Directory</h2><p>Starting from version 4.0, Samba is able to function as an Active Directory Domain Controller. &#xA0;Since this configuration should usually be utilized with multiple DCs for failover reasons, we&apos;ll save it for another article.</p><h2 id="remap-drives">Remap Drives</h2><p>Once you are satisfied with your file server zone, be sure to remap any drives you had previously unmapped from your workstations.</p><figure class="kg-card kg-image-card"><img src="https://blog.brianewell.com/content/images/2021/01/image_2021-01-30_034523.png" class="kg-image" alt="Samba on SmartOS" loading="lazy" width="637" height="471"></figure><h2 id="multicast-dns">Multicast DNS</h2><p>If you&apos;d like your shares to be auto-discoverable in the macOS Finder, enable the multicast DNS service with the following command:</p><pre><code>[root@samba ~]# svcadm enable svc:/network/dns/multicast:default</code></pre><p>This should probably be managed via Ansible, but it seems kind of silly to configure a role for it now. &#xA0;Perhaps later.</p><h2 id="conclusion">Conclusion</h2><p>While this isn&apos;t the <em>only</em> way to run an SMB file server from within a SmartOS Zone, it <em>is</em> a nice way to do it.
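</p><p>One closing note on the <code>shadow_copy2</code> settings used earlier: only ZFS snapshots whose names match <code>shadow:format</code> (here <code>r%Y%m%d%H%M%S</code>) will surface as Previous Versions, so any server-side snapshot process should name them accordingly. An illustrative example (the dataset path is an assumption):</p><pre><code># Creates a snapshot named to match shadow:format &quot;r%Y%m%d%H%M%S&quot;,
# e.g. home@r20210130034500; adjust the dataset path for your zone.
zfs snapshot zones/$(zonename)/data/home@r$(date +%Y%m%d%H%M%S)</code></pre><p>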
&#xA0;If there&apos;s interest, I might end up benchmarking Samba vs the officially recommended approach from the SmartOS Wiki.</p>]]></content:encoded></item><item><title><![CDATA[Ansible on SmartOS]]></title><description><![CDATA[<p>It&apos;s easy to have a love-hate relationship with Infrastructure Automation.</p><p>On one hand, the principle of Infrastructure Automation is fantastic: Using portable and well-defined modules of code to ensure consistent deployments of software and configurations to potentially hundreds or thousands of physical or virtual machines is how</p>]]></description><link>https://blog.brianewell.com/ansible-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138ff</guid><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 08 Jan 2021 04:07:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1496247749665-49cf5b1022e9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDF8fGZhY3Rvcnl8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1496247749665-49cf5b1022e9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDF8fGZhY3Rvcnl8ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Ansible on SmartOS"><p>It&apos;s easy to have a love-hate relationship with Infrastructure Automation.</p><p>On one hand, the principle of Infrastructure Automation is fantastic: Using portable and well-defined modules of code to ensure consistent deployments of software and configurations to potentially hundreds or thousands of physical or virtual machines is how we should have <em>always</em> done system administration, and in retrospect, it&apos;s a shame that it took cloud computing for all of us to see that.
&#xA0;The improvements to fault detection, consistency and delivery time are massive boons to rapidly expanding deployments maintained by relatively small teams.</p><p>On the other hand, the implementation of Infrastructure Automation is... less than fantastic. &#xA0;<a href="https://www.chef.io/?ref=blog.brianewell.com">Chef</a> and <a href="https://puppet.com/?ref=blog.brianewell.com">Puppet</a> both initially present as resource hogs that are not trivial to set up or manage and have questionable economies of scale. &#xA0;<a href="https://saltstack.com/?ref=blog.brianewell.com">SaltStack</a>, while better, wasn&apos;t nearly as easy to get up and running as it should have been. &#xA0;<a href="https://www.terraform.io/?ref=blog.brianewell.com">Terraform</a> and <a href="https://aws.amazon.com/cloudformation/?ref=blog.brianewell.com">CloudFormation</a> both look interesting, but appear to be focused on provisioning instead of configuration management, and the latter is AWS-only. &#xA0;It seems like Terraform also requires a <a href="https://github.com/john-terrell/terraform-provider-smartos?ref=blog.brianewell.com">third-party provisioning provider</a> to use with SmartOS, and while I&apos;m not opposed to using one, I&apos;d rather not learn Terraform with that added complexity.</p><p>That leaves us with <a href="https://www.ansible.com/?ref=blog.brianewell.com">Ansible</a>, which, while easy to get started with, both in configuration and use, quickly becomes less palatable as your deployments increase in complexity. &#xA0;This specifically manifests in how Ansible uses YAML to describe tasks. &#xA0;Simple tasks are easy, but as soon as flow control structures such as branching or looping are introduced to your plays, the YAML structures become anything but minimal.
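</p><p>To make that concrete, here is a hypothetical task (not from any real playbook, and assuming the <code>pkgin</code> module is available) showing how quickly the structure grows once a loop and a condition are involved:</p><pre><code class="language-yaml"># Hypothetical example: installing a short list of packages while
# skipping hosts in a &quot;minimal&quot; group already requires nested
# name/loop/when keys and a templating mini-language.
- name: &apos;Install base packages&apos;
  pkgin:
    name: &apos;{{ item }}&apos;
    state: present
  loop:
    - vim
    - git
    - tmux
  when: &quot;&apos;minimal&apos; not in group_names&quot;</code></pre><p>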
&#xA0;There also appears to be a complete disregard for the DRY principle, with an incredible preference for writing everything multiple times, but it&apos;s what we&apos;ve got for now, so we&apos;re going to go with it.</p><h2 id="stupid-smartos-tricks">Stupid SmartOS Tricks</h2><p>That leads us to changes in how articles will be published to this blog moving forward.</p><p>In the spirit of not repeating oneself (DRY), articles will focus on the overarching intentions and concerns one might encounter when deploying something to SmartOS. &#xA0;Instead of fully describing all required commands and configuration steps to achieve a given outcome, articles will instead link to a companion Ansible role in the accompanying <a href="https://github.com/brianewell/ansible-smartos-tricks?ref=blog.brianewell.com">ansible-smartos-tricks playbook</a> that will perform that deployment.</p><p>This should also be very helpful to ensure consistent deployment of software for the <a href="https://blog.brianewell.com/smartos-in-2021/#conclusion">previously-mentioned plans</a> of benchmarking SmartOS against FreeNAS and Proxmox.</p><p>The roles in this playbook are organized in the following fashion:</p><ul><li>A <code>common</code> role that performs all boilerplate configuration for any base SmartOS zone, including cleaning up and managing ZFS datasets and disabling <code>inetd</code> and <code>sac</code> as recommended in <a href="https://blog.brianewell.com/the-base-smartos-zone/">this article</a>.</li><li>Service roles that depend on <code>common</code> (and zero or more other service roles) and install and configure commonly required programs and services, including <code>mysql</code>, <code>neo4j</code>, <code>nginx</code>, <code>postgresql</code>, <code>redis</code> and <code>samba</code>.</li></ul><h2 id="installation">Installation</h2><p>Ansible plays should be run from their own isolated SmartOS zone, as this system will need root access to any other system configured by
it. &#xA0;Below is an example manifest:</p><pre><code class="language-json">{
  &quot;image_uuid&quot;: &quot;1d05e788-5409-11eb-b12f-037bd7fee4ee&quot;,
  &quot;brand&quot;: &quot;joyent&quot;,
  &quot;alias&quot;: &quot;ansible&quot;,
  &quot;hostname&quot;: &quot;ansible&quot;,
  &quot;cpu_cap&quot;: 100,
  &quot;max_physical_memory&quot;: 1024,
  &quot;quota&quot;: 10,
  &quot;resolvers&quot;: [ &quot;10.0.0.1&quot; ],
  &quot;nics&quot;: [
    {
      &quot;nic_tag&quot;: &quot;admin&quot;,
      &quot;ips&quot;: [ &quot;10.0.0.2/24&quot; ],
      &quot;gateways&quot;: [ &quot;10.0.0.1&quot; ],
      &quot;primary&quot;: true
    }
  ]
}</code></pre><p>If you have any questions about the properties chosen for this manifest, please <a href="https://blog.brianewell.com/smartos-manifests/">read this article</a>.</p><p>Create the zone, <code>zlogin</code> to it, install <code>git</code>, clone <code>ansible-smartos-tricks</code> locally and then run <code>./bootstrap.sh</code> from within the <code>ansible-smartos-tricks</code> directory.</p><p>Output has been omitted for brevity:</p><pre><code class="language-bash">[root@home-gz ~]# vmadm create -f ansible.json
[root@home-gz ~]# zlogin &lt;uuid or &quot;ansible&lt;tab&gt;&quot;&gt;
[root@ansible ~]# pkgin -y install git
[root@ansible ~]# git clone https://github.com/brianewell/ansible-smartos-tricks
[root@ansible ~]# cd ansible-smartos-tricks
[root@ansible ~/ansible-smartos-tricks]# ./bootstrap.sh</code></pre><p>The bootstrap script will handle the rest of the configuration for you: it installs Redis locally and configures Ansible to use it for caching remote host facts (significantly improving Ansible performance), and it ensures an SSH key-pair exists for authenticating to remote systems when configuring them.</p><h2 id="using-smartos-tricks">Using SmartOS Tricks</h2><p>Ansible uses SSH to connect to and configure remote systems, and this project specifically uses ed25519 keypairs to handle authentication. &#xA0;While you <em>could</em> manually copy the public key to each system you&apos;d like to configure, it&apos;s much easier and more consistent to include the key at the end of each manifest under the <code>customer_metadata</code> key. &#xA0;An example:</p><pre><code>{
...
  &quot;customer_metadata&quot;: {
    &quot;root_authorized_keys&quot;: &quot;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrStZlHS0yfE8n71meairBGvFnc5mlDFNKAJy7tQMi2&quot;,
    &quot;user-script&quot;: &quot;/usr/sbin/mdata-get root_authorized_keys &gt; /root/.ssh/authorized_keys&quot;
  }
}</code></pre><p>So far I&apos;ve just been using static hosts in Ansible host inventories. &#xA0;You can either remember the IP address you set a given host to, or use <code>vmadm</code> to discover it from the global zone:</p><pre><code>[root@gz ~]# vmadm list -o alias,nics.0.ips
ALIAS       NICS.0.IPS
doudna      10.0.0.3/24,addrconf
plex        10.0.0.4/24,addrconf
router      10.0.0.1/24,addrconf
metrics     10.0.0.5/24,addrconf
ansible     10.0.0.2/24,addrconf
test        dhcp,addrconf</code></pre><p>Please note that zones using DHCP will not report their IP address through <code>vmadm</code>. &#xA0;Instead, it&apos;s probably best to log in to the zone and check using <code>ipadm show-addr</code> from within it:</p><pre><code>[root@test ~]# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/?            dhcp     ok           10.0.0.201/24
lo0/v6            static   ok           ::1/128</code></pre><p>Place any hosts that you would like Ansible to configure into the Ansible static host inventory file. &#xA0;In this example, we will include 10.0.0.201 as a member of the <code>test</code> group:</p><pre><code>[root@ansible ~]# cat /etc/ansible/hosts
[test]
10.0.0.201</code></pre><p>You can now create plays directly within <code>ansible-smartos-tricks</code> that refer to the provided roles, an example that applies the common role to test hosts:</p><pre><code>[root@ansible ~]# cat ~/ansible-smartos-tricks/common.yml
---
- name: &apos;Common Role&apos;
  hosts: test
  roles:
  - common
  vars:
    vim:
      colorscheme: elflord</code></pre><h2 id="future-plans">Future Plans</h2><p>The title of this article may make it seem like some kind of endorsement of Ansible, but I am honestly confused how software as misery-inducing as this can be so popular. &#xA0;Most &apos;sophisticated&apos; Ansible plays appear to be written by closet masochists who enjoy typing <em>a lot</em>, and while that probably also says something about me, it also looks to be an incredible opportunity to do better in the space of Infrastructure Automation.</p><p>I&apos;ve already started designing an infrastructure automation replacement for Ansible.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.brianewell.com/content/images/2021/01/standards.png" class="kg-image" alt="Ansible on SmartOS" loading="lazy" width="500" height="283"><figcaption><a href="https://xkcd.com/927/?ref=blog.brianewell.com">XKCD</a> sums up what will undoubtedly be the fate of yet another infrastructure automation tool.</figcaption></figure><p>For now though, I will be using Ansible with SmartOS until my research indicates that a different tool would be preferable, either of my own or someone else&apos;s design.</p>]]></content:encoded></item><item><title><![CDATA[SmartOS in 2021]]></title><description><![CDATA[<p>What started years ago as a wild goose chase for a scalable first-class hyper-converged infrastructure ultimately led to the fantastic discovery of <a href="https://www.joyent.com/smartos?ref=blog.brianewell.com">Joyent SmartOS</a>.</p><p>While I&apos;ve barely looked back since, the world <em>has</em> changed significantly since then, especially with the exodus of multiple <a href="http://dtrace.org/blogs/brendan/2014/03/05/a-new-challenge/?ref=blog.brianewell.com">high</a> <a href="http://dtrace.org/blogs/bmc/2019/07/31/ex-joyeur/?ref=blog.brianewell.com">profile</a> Joyent engineers
from</p>]]></description><link>https://blog.brianewell.com/smartos-in-2021/</link><guid isPermaLink="false">5f8a4fb0a4033becc9213912</guid><category><![CDATA[SmartOS]]></category><category><![CDATA[FreeNAS]]></category><category><![CDATA[Proxmox]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 01 Jan 2021 09:31:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1503803548695-c2a7b4a5b875?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1503803548695-c2a7b4a5b875?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="SmartOS in 2021"><p>What started years ago as a wild goose chase for a scalable first-class hyper-converged infrastructure ultimately led to the fantastic discovery of <a href="https://www.joyent.com/smartos?ref=blog.brianewell.com">Joyent SmartOS</a>.</p><p>While I&apos;ve barely looked back since, the world <em>has</em> changed significantly since then, especially with the exodus of multiple <a href="http://dtrace.org/blogs/brendan/2014/03/05/a-new-challenge/?ref=blog.brianewell.com">high</a> <a href="http://dtrace.org/blogs/bmc/2019/07/31/ex-joyeur/?ref=blog.brianewell.com">profile</a> Joyent engineers from that company. 
&#xA0;I occasionally wonder how my conclusions would have changed had I been searching in today&apos;s technological climate.</p><p>In this article, we&apos;ll explore the strengths and weaknesses of SmartOS in the context of today&apos;s open-source hyper-convergence space.</p><h2 id="smartos-illumos-"><a href="https://www.joyent.com/smartos?ref=blog.brianewell.com">SmartOS</a> (Illumos)</h2><p>Joyent SmartOS is an Illumos-based open-source hypervisor/containerizor that integrates Crossbow, DTrace, KVM, Bhyve, ZFS and Zones into a lightweight in-memory solution which can boot from either a local USB drive, a boot dataset embedded into the primary storage pool via <a href="https://github.com/joyent/smartos-live/blob/master/man/usr/share/man/man1m/piadm.1m.md?ref=blog.brianewell.com">piadm</a>, or over PXE.</p><h3 id="pros-">Pros:</h3><ul><li>Lightweight ephemeral in-memory global zone that is relatively immutable, improving security and enabling easy upgrades by simply re-deploying a boot image and rebooting.</li><li>Supports both containers (zones) for maximum performance and HVM (<a href="https://www.linux-kvm.org/page/Main_Page?ref=blog.brianewell.com">KVM</a> or <a href="https://bhyve.org/?ref=blog.brianewell.com">Bhyve</a>) based guests for maximum flexibility.</li><li>Strong default isolation between guests without any additional configuration. &#xA0;HVMs are run from within a zone, providing an additional layer of security between the guest and the host.</li><li>Zones can be Linux (lx) branded, which allows for Linux user-spaces to exist natively on Illumos.
&#xA0;Debian, Ubuntu, and CentOS zone images are included by default.</li><li>A ZFS dataset can be delegated to a zone, allowing the zone to define and configure its own child datasets.</li><li>DTrace is accessible from both inside and outside of zones, allowing for incredibly detailed instrumentation of production deployments.</li><li>Crossbow network virtualization allows for complex virtual networks to be configured between guests of a single hypervisor, or bridged between multiple hypervisors.</li><li>Guests can be tightly constrained to follow very specific CPU, memory, file system and network restrictions.</li><li>Scalable up to thousands of hosts, or down to a single one.</li><li>Support for Docker, Kubernetes and Object Store (Manta) through Triton Data Center.</li><li>Rapid updates. &#xA0;Joyent usually has <a href="https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos.html?ref=blog.brianewell.com">new releases of SmartOS available every two weeks</a>.</li><li>Strict local node storage architecture ensures low file system latency and node independence.</li><li>Uses NetBSD&apos;s <a href="https://www.pkgsrc.org/?ref=blog.brianewell.com">pkgsrc</a> for package management.</li><li>Optional management layers such as <a href="https://www.joyent.com/triton/compute?ref=blog.brianewell.com">Triton DataCenter</a> and <a href="https://project-fifo.net/?ref=blog.brianewell.com">Project Fifo</a> allow for large clusters of hosts to be easily managed, as well as providing additional features.</li></ul><p>While none of these pros are completely unique to SmartOS, I have yet to find a project that incorporates all of them so well, even today.
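</p><p>As a concrete taste of the delegated-dataset point above, the commands below sketch what a zone administrator can do once a dataset has been delegated (the dataset path and property values are illustrative; SmartOS exposes the delegated dataset as <code>zones/&lt;zone-uuid&gt;/data</code>):</p><pre><code># zonename(1) prints the zone&apos;s uuid from inside the zone
zfs create zones/$(zonename)/data/home
zfs set compression=lz4 zones/$(zonename)/data/home
zfs snapshot zones/$(zonename)/data/home@before-upgrade</code></pre><p>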
&#xA0;I&apos;m probably biased.</p><h3 id="cons-">Cons:</h3><ul><li>DTrace doesn&apos;t work across the HVM boundary.</li><li>Strict local node storage architecture means migration between compute nodes requires a ZFS send/recv to push guest data from the source node to the destination node, making instant migrations between hypervisors impossible.</li><li>SmartOS limits crossbow configurations to specific conventions.</li><li>Does not support as wide an array of hardware or release new drivers as quickly as Linux does.</li><li>LX branded zones do not support the latest Linux kernel interfaces, making them ill-suited for the latest versions of leading Linux distributions.</li><li>Illumos has far fewer active developers than Linux does.</li><li>Illumos ZFS rather than OpenZFS.</li></ul><p>While I don&apos;t give the first three points on this list of cons much attention, the latter ones have become a much bigger deal-breaker in the past few years. &#xA0;Linux has long been the focal point of performance improvements and technological innovation in this space, and any technological capital that may have been built up under Sun has almost certainly been eclipsed by Linux at this time.</p><p><em>Maybe.</em></p><p>There are certainly important performance metrics in which Linux surpasses Illumos. &#xA0;There are also technologies in the Linux ecosystem that completely fail to solve their intended problems.
&#xA0;The best example is probably epoll: <a href="https://idea.popcount.org/2017-02-20-epoll-is-fundamentally-broken-12/?ref=blog.brianewell.com">that took over a decade</a> for Linux to &quot;get right&quot;, despite prime working examples from multiple other predating operating system implementations.</p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="356" height="200" src="https://www.youtube.com/embed/l6XQUciI-Sc?start=3362&amp;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><figcaption>Bryan Cantrill puts it quite succinctly.</figcaption></figure><p>And that&apos;s fine.</p><p>While the number of developers contributing to a project doesn&apos;t necessarily have a causal effect on the quality of that project, it&apos;s still quite concerning to observe the continuing ablation of <a href="http://www.brendangregg.com/blog/2017-09-05/solaris-to-linux-2017.html?ref=blog.brianewell.com">engineering</a> and <a href="http://www.beginningwithi.com/2016/10/08/letting-go-of-a-beloved-technology/?ref=blog.brianewell.com">outreach</a> talent from Illumos, which bodes <em>very</em> poorly for the future of the operating system.</p><p>Probably my most significant concern is the combination of the above point and what appears to be the divergence of Illumos from OpenZFS. &#xA0;Illumos had enjoyed being the reference implementation for years, but that changed early last year, with what was likely a self-inflicted wound on Illumos&apos; part and the ZFS on Linux repo being renamed to <a href="https://github.com/openzfs/ZFS?ref=blog.brianewell.com">openzfs/ZFS</a>. &#xA0;Illumos ZFS will either need to maintain feature parity and ideally binary compatibility with OpenZFS, or port OpenZFS into Illumos.</p><p>The bad news: both of these prospects require Illumos kernel developer time, which is at a premium right now.
&#xA0;The good news: this has clearly registered as a priority for the Illumos developers, and progress appears to have been made towards porting OpenZFS into Illumos.</p><p>Lastly: I&apos;m not sure what exactly Samsung&apos;s intentions with Joyent are. &#xA0;Since purchasing Joyent in 2016, there has been a marked change in the way that Joyent does business, beginning with sweeping changes to the way it communicates to the public about its product offerings, and probably culminating in the 2019 closure of the Joyent Public Cloud. &#xA0;Besides the lack of any recent innovations and the exodus of top-tier talent, it generally feels like Joyent is just treading water, and that&apos;s not a good position to be in for the long run.</p><h2 id="alternatives">Alternatives</h2><p>As of January 2021, there are numerous projects that overlap quite heavily with SmartOS. &#xA0;Let&apos;s briefly review some of them.</p><h3 id="truenas-core-freebsd-"><a href="https://www.truenas.com/?ref=blog.brianewell.com">TrueNAS Core</a> (FreeBSD)</h3><p>iXsystems&apos; TrueNAS Core is a FreeBSD-based open-source Network Attached Storage (NAS) operating system that provides data accessibility through SMB, AFP, NFS, iSCSI, SSH, rsync, and FTP/TFTP, all managed through a nice shiny web interface. &#xA0;While its main use case is as a NAS, it can also locally run FreeBSD Jails and Bhyve, giving it hypervisor and containerizor functionality.</p><p>TrueNAS includes DTrace and OpenZFS, both of which are well supported on FreeBSD-based operating systems. &#xA0;The installation process, while quite straightforward, does rely on installation to separate boot media which, unlike an ephemeral in-memory image, is continually being written to during normal operation. &#xA0;This will wear out SD cards and USB flash drives given enough time, meaning that your best bet will be to install directly to a hard drive or solid state drive.
&#xA0;In most of my configurations, such drive space comes at a premium since I&apos;m usually using that space for my primary storage pool instead. &#xA0;This also makes it a bit less convenient to upgrade.</p><p>While FreeBSD-based operating systems should have access to <a href="https://www.openvswitch.org/?ref=blog.brianewell.com">Open vSwitch</a>, it is unclear how accessible this feature is through TrueNAS, meaning that custom network configurations may not be easily established without additional work.</p><p>In the past, iXsystems had been rather inconsistent in their release schedule, and some of their updates have been full of regressions (lookin&apos; at you, FreeNAS 10). &#xA0;This appears to have been ironed out completely after the TrueNAS re-branding.</p><p>TrueNAS Core is definitely worth looking back into, as they have made major leaps and bounds on their platform since I last directly used FreeNAS.</p><h3 id="docker-engine-linux-"><a href="https://www.docker.com/products/container-runtime?ref=blog.brianewell.com">Docker Engine</a> (Linux)</h3><p>Docker is the world&apos;s most adopted containerizor solution, and can be directly installed into a pre-existing Linux installation. &#xA0;Docker is based on Linux Containers (LXC) and utilizes Linux cgroups to isolate processes from each other and create virtual environments, similarly to FreeBSD&apos;s Jails and Illumos&apos; Zones.</p><p>Having worked with the precursor to Linux Containers, I probably would have been at home with the feature set of LXC and the convenience and consistency of Docker.
&#xA0;However, when I was searching for a solution to my problem, there were no solidly integrated lightweight Docker-based solutions that incorporated ZFS, which is how I ended up switching to SmartOS.</p><p>There are a few well-packaged and delivered solutions now though.</p><h3 id="proxmox-ve-linux-"><a href="https://pve.proxmox.com/?ref=blog.brianewell.com">Proxmox VE</a> (Linux)</h3><p>If Proxmox Virtual Environment had existed <em>then</em> as it does <em>now</em>, I would probably never have looked any further. &#xA0;It&apos;s a hypervisor/containerizor with a pretty web-based interface that has buttons and graphs, making it visually appealing and generally easy to use. &#xA0;It ships with OpenZFS, actually supports PCIe hardware passthrough to virtual machines, has what appears to be solid Open vSwitch integration and can scale up with its clustering support. &#xA0;It looks to be generally adaptable to the various contortions that I&apos;d be sure to put it into. &#xA0;It&apos;s basically perfect.</p><p>Almost perfect.</p><p>Like TrueNAS, Proxmox VE needs to be installed onto a physical drive, both for the same reasons and with the same caveats.</p><p>While the UX is very nice, it is easier than it would seem to misconfigure things at times. &#xA0;I know that&apos;s ridiculous coming from someone who primarily works on CLIs. &#xA0;Docker containers can be run from the Proxmox VE host, but due to the differences between &quot;application containers&quot; (docker) and &quot;system containers&quot; (pct), that practice is discouraged in the official documentation.
&#xA0;Yes, they&apos;re both LXC-based, but they&apos;re different above that layer, and apparently incompatible to manage.</p><p>As with TrueNAS, Proxmox VE is definitely something that&apos;s worth looking into.</p><h3 id="openstack-openqrm-opennebula-ovirt-linux-"><a href="https://www.openstack.org/?ref=blog.brianewell.com">OpenStack</a>, <a href="https://www.openqrm-enterprise.com/?ref=blog.brianewell.com">OpenQRM</a>, <a href="https://opennebula.io/?ref=blog.brianewell.com">OpenNebula</a>, <a href="https://www.ovirt.org/?ref=blog.brianewell.com">oVirt</a> (Linux)</h3><p>I don&apos;t know why, but I&apos;m just not interested in any of these projects. &#xA0;They present as generally less suitable than the other options already listed above, usually due to appearing too large and inflexible. &#xA0;They&apos;re just not <em>exciting</em>.</p><p>I may end up exploring some of these projects in the future, but there are currently no foreseeable plans to do so.</p><h3 id="oxide"><a href="https://oxide.computer/?ref=blog.brianewell.com">Oxide</a></h3><p>Technology is always moving, and while they have yet to release any products, the cloud technology company built around Bryan Cantrill and quite a bit of other ex-Joyent talent deserves an honorable mention here.</p><p>If Cantrill&apos;s <a href="https://www.infoq.com/presentations/os-rust/?ref=blog.brianewell.com">public speaking event</a> around the time of his departure from Joyent is any indication, Oxide will be an attempt at building a full cloud-scale operating system using the Rust programming language and all of the experience and expertise that team embodies.
&#xA0;It will definitely be worth keeping an eye out for any significant announcements.</p><h2 id="conclusion">Conclusion</h2><p>While SmartOS has been a good fit over the last decade, there are looming uncertainties on the horizon which lead me to question if that will still be the case come 2030.</p><p>Fortunately, there are also a lot of options moving forward.</p><p>If all goes as planned, these options will be benchmarked against SmartOS and each other on bare metal running both simple and complex workloads. &#xA0;Keep an eye out for that hopefully sooner rather than later.</p>]]></content:encoded></item><item><title><![CDATA[su vs sudo on SmartOS]]></title><description><![CDATA[<p>For as long as there have been multi-user operating systems, there has been the need to switch between those users. &#xA0;Clearly, this can be done by directly starting a session as a given user, or even logging in again through <code>localhost</code>, but this approach tends to break down when</p>]]></description><link>https://blog.brianewell.com/su-vs-sudo/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138a1</guid><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Sat, 30 Jul 2016 02:56:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1483213097419-365e22f0f258?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1483213097419-365e22f0f258?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="su vs sudo on SmartOS"><p>For as long as there have been multi-user operating systems, there has been the need to switch between those users. 
&#xA0;Clearly, this can be done by directly starting a session as a given user, or even logging in again through <code>localhost</code>, but this approach tends to break down when manipulating system users (which are never meant to be directly logged into) or performing complex cross-user automation.</p><p>Today we will be exploring the command-line methods available on SmartOS for executing commands as other users, namely <code>su</code> and <code>sudo</code>.</p><h2 id="su">su</h2><p>The switch user (<code>su</code>) command executes a new shell owned by the specified user (or root if no user is specified). &#xA0;This effectively allows the ownership of a session to be changed without logging off to assume the role of the new user.</p><p>Non-superusers attempting to switch users will be prompted for the login credentials of the user being switched to, just as they might be if they were logging in directly from a terminal. &#xA0;Superusers are never prompted for login credentials when using <code>su</code>.</p><p>A few examples:</p><pre><code># su - brian
$ su -
Password:
#
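# su brian    # without '-', the new shell keeps most of the caller's environment (illustrative)
$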
</code></pre><p>The <code>-</code> parameter before the username further configures the login environment with the following additional changes:</p><ul><li>The <code>LC*</code>, <code>LANG</code> and <code>TZ</code> environment variables from the specified user&apos;s environment are also propagated to the new shell.</li><li>Sets the <code>MAIL</code> environment variable to <code>/var/mail/new_user</code>.</li></ul><p>Any parameters after the user will be passed to the executing shell, effectively emulating sudo&apos;s general functionality:</p><pre><code># su - brian -c whoami
brian
</code></pre><p>Additionally, the behavior of <code>su</code> can be modified by altering configuration parameters in <code>/etc/default/su</code>, specifically the following:</p><ul><li><code>SULOG</code> all attempts to use <code>su</code> are logged to the specified file.</li><li><code>CONSOLE</code> if defined, all attempts to <code>su</code> to the superuser are logged to the console.</li><li><code>PATH</code> sets the default path of a shell spawned by <code>su</code>.</li><li><code>SUPATH</code> sets the default path of a superuser shell spawned by <code>su</code>.</li><li><code>SYSLOG</code> uses syslog to log all <code>su</code> attempts.</li></ul><p>This command is the original and the simplest of the three, but you still may want to read the man page for <code>su</code> for additional information.</p><h2 id="sudo">sudo</h2><p>The <code>sudo</code> command permits users to execute commands as other users as allowed by a <code>sudo</code> specific security policy. &#xA0;This effectively allows the ownership of a <em>single command</em> to be changed without disrupting the rest of the session to assume the role of the new user. &#xA0;The major differences between <code>su</code> and <code>sudo</code> are as follows:</p><ul><li><code>sudo</code> allows any command to be run as a trailing parameter, not just the user&apos;s shell. 
&#xA0;<code>sudo</code> can also be passed the <code>-i</code> parameter to open an interactive shell, effectively emulating the functionality of <code>su</code>.</li><li><code>sudo</code> checks escalations against a security policy, allowing for fine-grained control over privilege escalation.</li><li><code>sudo</code> prompts users for the originating user&apos;s credentials while <code>su</code> prompts users for the credentials of the user being switched to.</li></ul><p>By default, the security policy is configured in <code>/opt/local/etc/sudoers</code>.</p><p><strong>Notice:</strong> the sudoers file should always be edited with <code>visudo</code> instead of directly.</p><p>Beyond global parameters, the <code>sudoers</code> file specifies host, user and command aliases:</p><pre><code>User_Alias ADMINS = brian, notbrian, alsonotbrian
Cmnd_Alias PROCESSES = /usr/bin/nice, /bin/kill, /usr/bin/renice, /usr/bin/pkill
Cmnd_Alias REBOOT = /sbin/halt, /sbin/reboot, /sbin/poweroff
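# Illustrative additions (hypothetical host names): Host_Alias uses the same syntax,
Host_Alias WEBHOSTS = www1, www2
# and aliases can later be combined in privilege specifications, e.g.:
# ADMINS WEBHOSTS = (ALL) NOPASSWD: PROCESSES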
</code></pre><p>As well as user privilege specifications:</p><pre><code>root ALL=(ALL) ALL
</code></pre><p>This specification allows root to run any command as any user.</p><pre><code>%sudoers ALL=(root) /bin/kill, (operator) /bin/ls
</code></pre><p>This specification allows a member of the sudoers group to run <code>/bin/kill</code> as root and <code>/bin/ls</code> as the operator user.</p><p>If the included flexibility wasn&apos;t enough, <code>sudo</code> also has a plugin-based architecture, which can be extended in many different ways. &#xA0;I would recommend thoroughly reading the <code>sudo</code> and <code>sudoers</code> manpages, as <code>sudo</code> is as complicated as <code>su</code> is simple, and its full functionality is <em>way</em> beyond the scope of this brief post.</p><h2 id="conclusion">Conclusion</h2><p>If you need to escalate yourself to a superuser role or need to quickly and simply switch into another role, <code>su</code> should be your go-to command. &#xA0;It&apos;s simple, direct, and requires very little additional configuration or tweaking.</p><p>If you&apos;re working in a more complex multiuser environment and finer-grained access control is a requirement, <code>sudo</code> is going to be your weapon of choice. &#xA0;Additionally, I find <code>sudo</code> more convenient if I need to perform a single command as a different user rather than entirely switching my context to them.</p><p>Ultimately, depending on the context, I use both.</p><p>Finally, SmartOS supports a third privilege escalation framework in profiles and Role Based Access Control (RBAC); however, that is significantly more complicated than even <code>sudo</code>, and will be the topic of a future article.</p>]]></content:encoded></item><item><title><![CDATA[The Base SmartOS Zone]]></title><description><![CDATA[<p>SmartOS Zones make for excellent blank slates to do development or production work from.</p><p>Except as it turns out, they&apos;re not blank slates.
&#xA0;The two most minimal Zone images, <code>base</code> and <code>minimal</code>, start out with over a dozen running processes on them.</p><p>What are those processes and</p>]]></description><link>https://blog.brianewell.com/the-base-smartos-zone/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138a6</guid><category><![CDATA[SmartOS]]></category><category><![CDATA[Zones]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 15 Jul 2016 15:12:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1481761289552-381112059e05?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1481761289552-381112059e05?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="The Base SmartOS Zone"><p>SmartOS Zones make for excellent blank slates to do development or production work from.</p><p>Except as it turns out, they&apos;re not blank slates. &#xA0;The two most minimal Zone images, <code>base</code> and <code>minimal</code>, start out with over a dozen running processes on them.</p><p>What are those processes and what functionality do they provide? &#xA0;Which of them can we disable if we need or want to?</p><h2 id="environment">Environment</h2><p>Since SmartOS Zones version 16.2.0 was released yesterday, let&apos;s spin up a base Zone image and check out our running processes.</p><p>Here&apos;s the manifest I used for this demonstration:</p><pre><code>{
        &quot;brand&quot;: &quot;joyent&quot;,
        &quot;image_uuid&quot;: &quot;13f711f4-499f-11e6-8ea6-2b9fb858a619&quot;,
        &quot;alias&quot;: &quot;base_test&quot;,
        &quot;hostname&quot;: &quot;base_test&quot;,
        &quot;max_physical_memory&quot;: 256,
        &quot;quota&quot;: 20,
        &quot;resolvers&quot;: [ &quot;8.8.8.8&quot;, &quot;8.8.4.4&quot; ],
        &quot;nics&quot;: [ {
                        &quot;nic_tag&quot;: &quot;admin&quot;,
                        &quot;ip&quot;: &quot;dhcp&quot;
                } ]
}
</code></pre><h2 id="processes">Processes</h2><p><strong>Notice:</strong> I performed this test using version 16.2.0 of both <code>base-64</code> and <code>minimal-64</code>. &#xA0;Besides a few slight deviations (minimal calls <code>rsyslogd -c5 -n</code> and base does not) all running processes were the same.</p><p>Immediately after logging in, I polled the process list:</p><pre><code># ps ax
   PID TT       S  TIME COMMAND
  7239 ?        S  0:00 zsched
  7299 ?        S  0:00 /sbin/init
  7323 ?        S  0:00 /lib/svc/bin/svc.startd
  7328 ?        S  0:02 /lib/svc/bin/svc.configd
  7384 ?        S  0:00 /lib/inet/ipmgmtd
  7607 ?        S  0:00 /usr/sbin/nscd
  7625 ?        S  0:00 /sbin/dhcpagent
  7626 ?        S  0:00 /usr/lib/pfexecd
  7774 ?        S  0:00 /opt/local/sbin/rsyslogd
  7779 ?        S  0:00 /usr/sbin/cron
  7782 ?        S  0:00 /usr/lib/inet/inetd start
  7784 ?        S  0:00 /usr/lib/saf/sac -t 300
  7787 ?        S  0:00 /usr/lib/saf/ttymon
  7788 ?        S  0:00 /usr/lib/utmpd
  7831 ?        S  0:00 /usr/lib/ssh/sshd
  7994 pts/2    S  0:00 /usr/bin/login -z global -f root
  7995 pts/2    S  0:00 -bash
  8062 pts/2    O  0:00 ps -ax
  7796 console  S  0:00 /usr/lib/saf/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p base_test console login:
</code></pre><p>We can immediately discount the three processes tied to the <code>pts/2</code> terminal since those processes are associated with our active login.</p><h3 id="zsched">zsched</h3><p>Each active zone has an associated kernel process, named <code>zsched</code>. &#xA0;This process owns all kernel threads doing work on behalf of the zone, and enables the zones subsystem to keep track of per-zone kernel threads.</p><p>This process is critical to the proper functioning of a zone, and as such, it is not possible to disable this process from within the zone.</p><h3 id="init">init</h3><p>In traditional UNIX, <code>init</code> is the &quot;father of all processes&quot; that was responsible for spawning and restarting service processes that made up the running operating system.</p><p>Since Solaris 10, most of this responsibility has now been offloaded to the Service Management Facility (SMF). &#xA0;Init is now primarily responsible for initializing core components of SMF (namely svc.startd and svc.configd) and restarting them if they fail.</p><p>This process is automatically restarted by the Illumos kernel if it is killed, and as such, it is not possible to disable this process from within the zone.</p><h3 id="svc-startd">svc.startd</h3><p>This process is the master process management daemon for the Service Management Facility subsystem. &#xA0;It&apos;s responsible for starting, stopping, restarting, and signaling services based on administrative requests as well as system or application failures.</p><p>While this process can be disabled (/etc/inittab), doing so would disable SMF entirely, and is not recommended.</p><h3 id="svc-configd">svc.configd</h3><p>This process is the configuration repository daemon for the Service Management Facility subsystem. 
&#xA0;It is responsible for maintaining the configurations for all services on the system, as well as passing administrative requests for services to be started, stopped, restarted, or signaled to the master process management daemon (described above).</p><p>This process is automatically started by <code>svc.startd</code> and cannot be independently disabled.</p><h3 id="ipmgmtd">ipmgmtd</h3><p>This process handles administrative events for network IP interfaces and IP/TCP/UDP/SCTP/ICMP tunables. &#xA0;It is managed by SMF and provides the back-end that <code>ipadm</code> uses.</p><p>While this process can be disabled (with the service identifier <code>svc:/network/ip-interface-management:default</code>), doing so would prevent network configuration, and is not recommended.</p><p>In testing with a non-networked SmartOS Zone, I was unable to get <code>svc:/network/physical:default</code> to properly online at all, with or without <code>ipmgmtd</code>.</p><h3 id="nscd">nscd</h3><p>This process provides a cache for most name service requests, improving local and network lookup performance. &#xA0;It specifically provides cache services for the following databases:</p><ul><li>passwd</li><li>group</li><li>hosts</li><li>ipnodes</li><li>exec_attr</li><li>prof_attr</li><li>user_attr</li><li>ethers</li><li>rpc</li><li>protocols</li><li>networks</li><li>bootparams</li><li>auth_attr</li><li>services</li><li>netmasks</li><li>printers</li><li>projects</li></ul><p>While this process can be disabled (with the service identifier <code>svc:/system/name-service-cache:default</code>) it really should be kept on due to the performance advantage it provides.</p><h3 id="dhcpagent">dhcpagent</h3><p>This process implements the client half of the dynamic host configuration protocol (DHCP) on Solaris/Illumos. 
&#xA0;It will only be running when the zone has network interfaces configured to use DHCP, and as such, should never be manually enabled or disabled.</p><h3 id="pfexecd">pfexecd</h3><p>This process manages the Solaris/Illumos Role Based Access Control (RBAC) system.</p><p>It is managed by SMF (with the service identifier <code>svc:/system/pfexec:default</code>) and probably shouldn&apos;t be disabled at risk of disrupting normal system operation.</p><h3 id="rsyslogd">rsyslogd</h3><p>This process provides a reliable message logging service for processes which do not handle their own logging.</p><p>It is managed by SMF (with the service identifier <code>svc:/pkgsrc/rsyslog:default</code>) and probably shouldn&apos;t be disabled at risk of disrupting normal system operation.</p><h3 id="cron">cron</h3><p>This process is able to start other processes as other users at specified dates and times, making it very convenient for running regularly scheduled commands. &#xA0;SmartOS already makes use of cron to perform periodic operations (such as rotating logs and checking for vulnerabilities in installed packages).</p><p>It is managed by SMF (with the service identifier <code>svc:/system/cron:default</code>) and while it could be disabled, I can&apos;t really think of a situation where I&apos;d recommend it.</p><h3 id="inetd">inetd</h3><p>This process is a delegated restarter for inet services. &#xA0;It is currently part of SMF, and is quite similar to <code>svc.startd</code> with the added functionality of optionally listening for network requests for services. 
&#xA0;Out of the box, it is responsible for maintaining the following services:</p><ul><li><code>svc:/network/nfs/rquota:default</code> The remote quota service (for remote NFS clients accessing local shares)</li><li><code>svc:/network/rpc/gss:default</code> The daemon that generates and validates security tokens between the kernel rpc and the GSS-API layers.</li><li><code>svc:/network/security/ktkt_warn:default</code> Notifies users when their Kerberos tickets are about to expire or automatically renews them before they expire.</li><li><code>svc:/network/rpc/rex:default</code> RPC remote execution.</li><li><code>svc:/network/login:eklogin</code> Remote login (rlogin) service (encrypted+kerberos).</li><li><code>svc:/network/login:klogin</code> Remote login (rlogin) service (kerberos).</li><li><code>svc:/network/login:rlogin</code> Remote login (rlogin) service.</li><li><code>svc:/network/rexec:default</code> Remote execution service.</li><li><code>svc:/network/shell:default</code> Remote shell server.</li><li><code>svc:/network/shell:kshell</code> Remote shell server (kerberos).</li></ul><p>Inetd is managed by SMF (with the service identifier <code>svc:/network/inetd:default</code>) and unless you&apos;re using NFS or rlogin (which has all but been replaced by ssh) I recommend that you disable this service.</p><p>You can also check with <code>inetadm</code> before you disable it to see if it would disrupt any services.</p><h3 id="sac">sac</h3><p>The Service Access Controller (SAC) appears to be part of the Service Access Facility or the subsystem that manages terminal connectivity into the system. 
&#xA0;Port monitors (ttymon, see below) as described by SAF would be the rough Linux equivalent of a getty, and SAC manages those terminal monitors.</p><p>SAC is managed by SMF (with the service identifier <code>svc:/system/sac:default</code>) and unless you&apos;re making extensive use of TTYs, I would recommend disabling this service as it poses no apparent disruption to the system.</p><h3 id="utmpd">utmpd</h3><p>This process is responsible for maintaining the user accounting databases (utmp/utmpx) in cases where individual processes are unable to correctly update the database, usually failing to properly terminate a session when they close.</p><p>It is managed by SMF (with the service identifier <code>svc:/system/utmp:default</code>) and probably shouldn&apos;t be disabled at risk of creating possibly erroneous user accounting databases.</p><h3 id="sshd">sshd</h3><p>This is the OpenSSH daemon and is responsible for providing the server end-point for secure encrypted communications via SSH. 
&#xA0;Chances are that you will want to keep this one on unless you never intend on logging directly into this zone from the network (instead, going through the global zone and <code>zlogin</code>).</p><p>OpenSSH is managed by SMF (with the service identifier <code>svc:/network/ssh:default</code>) and should only be disabled if you do not want SSH logins to be possible at all.</p><h3 id="ttymon">ttymon</h3><p>Besides using SAC/SAF, SMF also can call Port Monitors directly, and this service is an example of that.</p><p>As far as I can tell, this <code>ttymon</code> instance connects directly to the virtual console you would connect to with the <code>vmadm console &lt;uuid&gt;</code> command from the global zone.</p><p>It is directly managed by SMF (with the service identifier <code>svc:/system/console-login:default</code>) and while I would normally recommend disabling it, it does appear to still be required.</p><h2 id="conclusion">Conclusion</h2><p>Without too much effort, we&apos;ve developed a rough idea of what the default processes of a SmartOS zone are, as well as which ones can be disabled without too much of an impact on zone functionality.</p><p>In most normal circumstances, I would recommend disabling both the <code>inetd</code> and <code>sac</code> services unless they are required in your specific case:</p><pre><code># svcadm disable svc:/network/inetd:default svc:/system/sac:default
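# Optional checks before and after disabling (illustrative):
# inetadm        # list inetd-managed services before disabling inetd
# svcs inetd sac # both should now report "disabled"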
</code></pre><p>In situations where you do not need or want to support SSH login access, you can also safely disable the <code>sshd</code> process entirely.</p><pre><code># svcadm disable svc:/network/ssh:default
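# To restore SSH access later:
# svcadm enable svc:/network/ssh:default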
</code></pre><p>Consider that you really should establish some other mechanism to perform maintenance if you do this.</p>]]></content:encoded></item><item><title><![CDATA[Minecraft on SmartOS]]></title><description><![CDATA[<p>Minecraft.</p><p>It&apos;s one of those games that can appeal to many different people in many different ways. &#xA0;I got into playing under the assumption that modded Minecraft was the norm, and that calculating the optimum designs for automated power production and gargantuan resource-gathering apparatuses was the entire</p>]]></description><link>https://blog.brianewell.com/minecraft-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138a7</guid><category><![CDATA[SmartOS]]></category><category><![CDATA[ZFS]]></category><category><![CDATA[SMF]]></category><category><![CDATA[Minecraft]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 01 Jul 2016 23:14:00 GMT</pubDate><media:content url="https://blog.brianewell.com/content/images/2020/06/minecraft.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.brianewell.com/content/images/2020/06/minecraft.png" alt="Minecraft on SmartOS"><p>Minecraft.</p><p>It&apos;s one of those games that can appeal to many different people in many different ways. &#xA0;I got into playing under the assumption that modded Minecraft was the norm, and that calculating the optimum designs for automated power production and gargantuan resource-gathering apparatuses was the entire point of the game.</p><p>Today, we&apos;ll explore how to robustly host Minecraft on SmartOS, either as a simple vanilla server, a Minecraft Forge server with mods, a Spigot server or as a BungeeCord proxy server.</p><h2 id="zone-setup">Zone setup</h2><p>We&apos;ll be operating within a current standard base-64 zone. &#xA0;A base-32 zone should work as well, assuming your server never exceeds 4GB of memory. 
&#xA0;You will likely need to tune <code>max_physical_memory</code>, <code>cpu_cap</code> and <code>quota</code> for your specific Minecraft server, and these values are likely to change over time. &#xA0;If you plan on hosting multiple Minecraft servers from this zone, your resource usage is going to skyrocket. &#xA0;Plan accordingly.</p><p>First, let&apos;s install Java. &#xA0;Some older versions of Minecraft may require Java 7, but you really should prefer using Java 8, as it&apos;s significantly more performant than its predecessor.</p><p>Also, I&apos;d recommend installing <code>tmux</code>. &#xA0;It&apos;s the easiest way to periodically access a server console.</p><p>Lastly, you will want to install <code>git</code> if you&apos;re planning on building a Spigot server (which must be built from source). &#xA0;This additional package is unnecessary with both vanilla Minecraft and Minecraft Forge.</p><pre><code># pkgin in openjdk8 tmux git
</code></pre><p>I prefer cleaning up the delegated dataset cruft. &#xA0;We can create a dataset mounted under <code>/var/db/minecraft</code> as well.</p><pre><code># UUID=$(sysinfo | json UUID)
# zfs set mountpoint=none zones/$UUID/data
# rmdir -p /zones/$UUID
# zfs create -o mountpoint=/var/db/minecraft -o quota=8G zones/$UUID/data/minecraft
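# Optionally verify the new dataset, its mountpoint and quota:
# zfs list -o name,mountpoint,quota zones/$UUID/data/minecraft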
</code></pre><p>Next, we can create our Minecraft group and user, set up limits and set the proper permissions for their home directory.</p><pre><code># groupadd -g 900 minecraft
# useradd -u 900 -g minecraft -d /var/db/minecraft -s /bin/bash \
  -c &quot;Minecraft user&quot; minecraft
# projadd -U minecraft -G minecraft -c &quot;Minecraft server&quot; \
  -K &quot;process.max-file-descriptor=(basic,65536,deny)&quot; minecraft
# chown minecraft:minecraft /var/db/minecraft
# chmod 700 /var/db/minecraft
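# Optionally verify the project and its resource controls:
# projects -l minecraft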
</code></pre><p>You will need to unlock the minecraft account if you want it to be able to run its own crontab (helpful later on).</p><pre><code># passwd -N minecraft
</code></pre><p>And then <code>su</code> into them to install Minecraft.</p><pre><code># su - minecraft
</code></pre><h2 id="installing-minecraft">Installing Minecraft</h2><p>There are effectively three different Minecraft servers that are popular today:</p><ul><li>(Vanilla) Minecraft from <a href="https://mojang.com/?ref=blog.brianewell.com">Mojang</a> is the easiest to install and maintain, and generally uses the least resources of the three.</li><li><a href="https://www.spigotmc.org/?ref=blog.brianewell.com">Spigot</a>, a high-performance Minecraft server that supports Bukkit plugins, and is compatible with Vanilla clients.</li><li><a>Minecraft Forge</a>, which supports client- and server-side mods and is arguably the most powerful of the three (also generally considered the most resource intensive). &#xA0;Minecraft Forge is also required if you&apos;re using a Minecraft Forge client and is generally incompatible with the vanilla client.</li></ul><p>We&apos;ll cover how to install all three below.</p><p><strong>Notice:</strong> Each of these sub-sections represents a different type of server install, and should be done separately.</p><h3 id="minecraft-server">Minecraft server</h3><p>The Vanilla Minecraft server can be <a href="https://minecraft.net/en-us/download/server?ref=blog.brianewell.com">downloaded directly</a> from Mojang.</p><p>As the Minecraft user, use <code>wget</code> to download the Java Archive file.</p><pre><code>$ wget https://s3.amazonaws.com/Minecraft.Download/versions/1.10.2/minecraft_server.1.10.2.jar -O server.1.10.2.jar
</code></pre><p>Agree to the EULA.</p><pre><code>$ echo eula=true &gt; eula.txt
</code></pre><p>And then start the server.</p><pre><code>$ java -server -jar server.1.10.2.jar nogui
</code></pre><p>If all went well, you should have a functional vanilla Minecraft server.</p><h3 id="spigot-server">Spigot server</h3><p>Due to a legal battle between Mojang and the developers of Spigot, the latter project must be downloaded as source code and compiled locally (this is a legal requirement, not a technical one). &#xA0;Fortunately, the developers of Spigot have written <a href="https://www.spigotmc.org/wiki/buildtools/?ref=blog.brianewell.com">a tool</a> that does just that.</p><p>First, we will need to download this jar file.</p><pre><code>$ wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
</code></pre><p>And then just run it to build against the latest version of Minecraft.</p><pre><code>$ java -jar BuildTools.jar
</code></pre><p>If you want to build against a specific version of Minecraft, that can be specified through the <code>--rev</code> flag.</p><pre><code>$ java -jar BuildTools.jar --rev 1.9.4
</code></pre><p>Wait for the build process to complete (it can take quite a while) and if all goes well, you&apos;ll have a shiny new <code>spigot-1.10.jar</code> <em>and</em> <code>craftbukkit-1.10.jar</code> file to work with.</p><p>Agree to the EULA.</p><pre><code>$ echo eula=true &gt; eula.txt
</code></pre><p>And then start the Spigot server.</p><pre><code>$ java -server -jar spigot.1.10.jar nogui
</code></pre><p>If all went well, you should have a functional Spigot Minecraft server. &#xA0;Shut down the server and delete all of the additional files and directories produced by BuildTools.</p><pre><code>&gt; stop
...
$ rm -r .m2 BuildData BuildTools.log.txt Bukkit CraftBukkit Spigot apache-maven-3.5.0 work
</code></pre><p>You can also delete <code>BuildTools.jar</code> if you don&apos;t intend on using it again.</p><pre><code>$ rm BuildTools.jar
</code></pre><h3 id="minecraft-forge-server">Minecraft Forge server</h3><p>Minecraft Forge is most popularly distributed at the core of modpacks: community-designed combinations of community-developed Minecraft mods that have been balanced for reasonable gameplay.</p><p>Popular mod-pack projects such as <a href="https://www.feed-the-beast.com/?ref=blog.brianewell.com">Feed the Beast</a> and <a href="https://www.atlauncher.com/?ref=blog.brianewell.com">ATLauncher</a> ship client launchers that allow a user to download and install hundreds of different versions of dozens of different mod-packs. &#xA0;These launchers can also produce server packs which are suitable to be uploaded to and extracted on the SmartOS zone we&apos;re working on.</p><p>Feed the Beast distributes their server mod-packs as zip files directly from their CDN, making them the ideal example to use for this section.</p><p>Use <code>wget</code> to download and <code>unzip</code> to decompress a server mod-pack.</p><pre><code>$ wget http://ftb.cursecdn.com/FTB2/modpacks/FTBInfinity/2_0_0/FTBInfinityServer.zip
...
$ unzip FTBInfinityServer.zip
</code></pre><p>Download the vanilla Minecraft server and launch wrapper by running the <code>FTBInstall</code> script.</p><pre><code>$ sh FTBInstall.sh
</code></pre><p>Agree to the EULA.</p><pre><code>$ echo eula=true &gt; eula.txt
</code></pre><p>And then start the Minecraft server.</p><pre><code>$ java -server -Xms1G -Xmx4G -jar FTBServer-1.7.10-1448.jar nogui
</code></pre><p><strong>Notice:</strong> Minecraft Forge takes the longest to start of the three different server types, and won&apos;t work without extra memory. &#xA0;In the above example: 4G.</p><p>Once this test is complete, you can remove much of the install cruft that comes with FTB mod-pack servers. &#xA0;We will be replacing their start scripts with something more robust in the optimization section below.</p><pre><code>$ rm FTBInfinityServer.zip FTBInstall.* ServerStart.*
</code></pre><h3 id="bungeecord-server">BungeeCord Server</h3><p>BungeeCord is a Minecraft proxy server that sits between your clients and one or more Minecraft servers (as it&apos;s part of the Spigot project, it&apos;s safe to assume Spigot servers are supported), and enables large-scale server deployments. &#xA0;Details about the configuration of BungeeCord are <a href="https://www.spigotmc.org/wiki/bungeecord/?ref=blog.brianewell.com">available on their website</a>.</p><p>Download the BungeeCord server.</p><pre><code>wget https://ci.md-5.net/job/BungeeCord/lastSuccessfulBuild/artifact/bootstrap/target/BungeeCord.jar
</code></pre><p>Configure it, and then start the server.</p><pre><code>$ java -server -jar BungeeCord.jar
</code></pre><h2 id="server-optimizations">Server optimizations</h2><p>Getting a Minecraft server to start and run is just the first step. &#xA0;This guide will also focus on getting your server to run well, and to make the most of the SmartOS platform. &#xA0;What follows are a series of sections that focus on optimizing different aspects of running a Minecraft server on SmartOS.</p><h3 id="optimized-java-parameters">Optimized Java parameters</h3><p>Java Virtual Machines can be started with parameters which adjust their default behavior, and while the start commands in the above examples allowed us to start and test our Minecraft servers just fine, they&apos;re the bare minimum and hardly optimal for the daily operation of a high volume server.</p><p>The optional parameters we&apos;ll be examining below should be applied immediately after the java command:</p><pre><code>$ java &lt;parameters&gt; -jar server.1.10.2.jar nogui
</code></pre><p>We&apos;ll break them up into the following three classifications.</p><h4 id="essential-parameters">Essential parameters</h4><p>These parameters should always be set for a Minecraft server.</p><ul><li><code>-XmsM</code>: Sets the initial size of the heap to <strong>M</strong>. &#xA0;This is the memory consumed by the JVM when first starting. &#xA0;If you want to start with 1GB, the parameter would be: <code>-Xms1G</code>.</li><li><code>-XmxM</code>: Sets the maximum size of the heap to <strong>M</strong>. &#xA0;This is the maximum amount of memory that can be consumed by the JVM and should always be greater than or equal to <code>-XmsM</code>. &#xA0;If you want to limit the server to 4GB, the parameter would be: <code>-Xmx4G</code>.</li><li><code>-XX:+UseConcMarkSweepGC</code> or <code>-XX:+UseG1GC</code>: Enables the use of the CMS garbage collector for the old generation. &#xA0;It&apos;s recommended that you use this GC when application latency requirements cannot be met by the throughput garbage collector. &#xA0;The G1 garbage collector (<code>+UseG1GC</code>) is another alternative, but it might not be as memory efficient.</li><li><code>-XX:+UseLargePages</code>: SmartOS supports large pages and enabling this should translate to a performance improvement.</li></ul><h4 id="optional-parameters">Optional parameters</h4><p>These parameters may be useful for a Minecraft server in certain situations.</p><ul><li><code>-d64</code>: Forces JVM to be 64-bit. &#xA0;This is only practical on a <code>base-multiarch</code> image, as java on <code>base-32</code> and <code>base-64</code> will always be one or the other.</li><li><code>-server</code>: Selects the Java HotSpot Server VM. &#xA0;This is only necessary with the 32-bit JVM, as the 64-bit JVM only supports the Server VM.</li><li><code>-showversion</code>: Displays JVM version information before continuing execution of the application. 
&#xA0;This is practical for logging purposes.</li><li><code>-XX:+AggressiveOpts</code>: Enables the use of aggressive performance optimization features that are expected to become the default in upcoming Java releases. &#xA0;This option may improve performance, but that performance may come at the cost of stability.</li><li><code>-XX:MinHeapFreeRatio=N</code>, <code>-XX:MaxHeapFreeRatio=N</code>: Sets the minimum and maximum allowed percentage of free heap space (0 to 100) after a garbage collection event. &#xA0;If the percentage of free heap space after a collection falls below the minimum, then the JVM will grow the heap until the free percentage rises above the minimum. &#xA0;If the percentage of free heap space after a collection rises above the maximum, then the JVM will shrink the heap until the free percentage falls below the maximum. &#xA0;The current defaults of 40% and 75% should be suitable, but I&apos;ve seen configurations as extreme as 5% and 10%.</li><li><code>-XX:MaxGCPauseMillis=10</code>: Sets a target for the maximum garbage collection pause time (in milliseconds). &#xA0;This is a soft goal, and the JVM will make its best effort to achieve it. &#xA0;In the case of Minecraft, 10ms is unreasonably short, but this should help to tune the JVM to run shorter, more frequent garbage collection cycles.</li></ul><h4 id="depreciated-parameters">Deprecated parameters</h4><p>Despite often being recommended by other guides, these parameters should not be used on SmartOS.</p><ul><li><code>-Xincgc</code>: Enables incremental garbage collection. &#xA0;This option was deprecated in Java 8 with no replacement. &#xA0;Use CMS, G1, or let the JVM decide instead.</li><li><code>-XX:PermSize=M</code>: Sets the size of the permanent generation that, if exceeded, triggers a garbage collection.
&#xA0;This parameter has been deprecated in Java 8 and superseded by the <code>-XX:MetaspaceSize</code> parameter, which shouldn&apos;t really be tuned either.</li><li><code>-XX:+UseParNewGC</code>, <code>-XX:+UseParallelGC</code>: Explicitly tells the JVM which garbage collection strategy should be used. &#xA0;Neither of these garbage collectors should be selected (<code>UseParNewGC</code> is automatically enabled with CMS).</li><li><code>-XX:+CMSIncrementalPacing</code>: Enables automatic adjustment of the incremental mode duty cycle based on statistics collected while the JVM is running. &#xA0;This option has been deprecated in Java 8 with no replacement.</li><li><code>-XX:+CMSClassUnloadingEnabled</code>: Enables class unloading when using the concurrent mark-sweep (CMS) garbage collector. &#xA0;This option is enabled by default, meaning there&apos;s no reason to enable it explicitly.</li><li><code>-XX:ParallelGCThreads=2</code>: Sets the number of threads used for parallel garbage collection. &#xA0;This should be automatically determined by the JVM.</li><li><code>-XX:-UseVMInterruptibleIO</code>: The only description available reads &quot;Thread interrupts before or with EINTR for I/O operations results in OS_INTRPT,&quot; and I could find no additional documentation describing what this parameter does, so I can&apos;t really recommend using it.</li></ul><h4 id="example">Example</h4><p>Based on the above parameters, I came up with the following JVM options for a Minecraft server running on SmartOS. &#xA0;It&apos;s not perfect, but it leaves most of the finer configuration up to the JVM and out of our hair.</p><pre><code>$ java -Xms256M -Xmx4G -XX:+UseConcMarkSweepGC -XX:+UseLargePages \
-jar server.1.10.2.jar nogui
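
# startup.sh: a minimal wrapper sketch for the command above (the
# path and jar name here are examples, adjust them to your own layout):
cd /var/db/minecraft || exit 1
exec java -Xms256M -Xmx4G -XX:+UseConcMarkSweepGC -XX:+UseLargePages \
    -jar server.1.10.2.jar nogui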
</code></pre><p>If you&apos;d prefer something more complicated, there are <a>other recommendations</a> out there, but I would recommend you cross-reference them with <a href="https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html?ref=blog.brianewell.com">the Java 8 command-line documentation</a> to ensure you&apos;re not specifying a default or deprecated parameter.</p><p>Now with that sorted, you can either take this string of options and write them into a startup.sh script to call manually from within your Minecraft directory, or you can start up Minecraft the fun way.</p><h3 id="minecraft-and-smf">Minecraft and SMF</h3><p>SmartOS ships with the Solaris Service Management Facility, which is the ideal tool to ensure our Minecraft server stays running. &#xA0;And since we&apos;re on SmartOS, it&apos;d be a shame not to use it!</p><p><strong>Note:</strong> Minecraft can either be managed directly by SMF or wrapped with <code>tmux</code>. &#xA0;Direct is simpler, but you will be unable to issue commands directly to the Minecraft server console, which will cause issues later on. &#xA0;For the sake of completeness, we will explore both options below.</p><h4 id="import-smf-manifest">Import SMF Manifest</h4><p>As root, download the following SMF manifest:</p><!--kg-card-begin: html--><script src="https://gist.github.com/brianewell/e7f1bbdc099818f3894359d6a9a1bcc6.js"></script><!--kg-card-end: html--><p>This manifest has been configured for a default Minecraft server instance wrapped with <code>tmux</code>. &#xA0;If you just want to use Java, uncomment the Java-only sections and comment out the <code>tmux</code>-wrapped sections (both the <code>exec_method</code> and <code>propval</code> tags).</p><p>Edit the <code>value_node</code> tags to reflect the Java parameters you want to use with your server. &#xA0;This step can be skipped and done later.</p><p>Edit the server <code>propval</code> tag to reflect your specific Minecraft server.
&#xA0;This can be done via symlink or skipped and done later as well.</p><pre><code># su - minecraft
$ ln -sf server.1.10.2.jar server.jar
$ logout
</code></pre><p>Import the manifest and enable the minecraft service.</p><pre><code># svccfg import minecraft-single-smf.xml
# svcadm enable minecraft
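# svcs minecraft
STATE          STIME    FMRI
online         12:34:56 svc:/gameserver/minecraft:default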
</code></pre><p>You should now have an SMF-managed Minecraft server.</p><h4 id="edit-java-parameters">Edit Java parameters</h4><p>Java parameters can be changed and updated directly within SMF without the need to delete and re-import the manifest.</p><p>For instance, if you want to expand the initial heap (<code>-XmsM</code>) from 256M to 1G, you&apos;ll need to remove the old value and replace it with a new value:</p><pre><code># svccfg -s minecraft
svc:/gameserver/minecraft&gt; delpropvalue options/parameters -Xms256M
svc:/gameserver/minecraft&gt; addpropvalue options/parameters -Xms1G
svc:/gameserver/minecraft&gt; exit
</code></pre><p>You can also completely clear a property list and replace it with new values (note: the <code>astring:</code> type specifier is only required for the first value, and the quotation marks are optional):</p><pre><code># svccfg -s minecraft
svc:/gameserver/minecraft&gt; delprop options/parameters
svc:/gameserver/minecraft&gt; addpropvalue options/parameters astring: &quot;-Xms1G&quot;
svc:/gameserver/minecraft&gt; addpropvalue options/parameters &quot;-Xmx4G&quot;
svc:/gameserver/minecraft&gt; addpropvalue options/parameters &quot;-XX:+UseConcMarkSweepGC&quot;
svc:/gameserver/minecraft&gt; addpropvalue options/parameters &quot;-XX:+UseLargePages&quot;
svc:/gameserver/minecraft&gt; exit
</code></pre><p>You can also check the pending property list:</p><pre><code># svccfg -s minecraft listprop options/parameters
</code></pre><p>If you&apos;re happy with what you have, refresh (commit) the configuration and restart the Minecraft service:</p><pre><code># svccfg -s minecraft:default refresh
# svcadm restart minecraft
</code></pre><h4 id="edit-server-jar">Edit server JAR</h4><p>Along with the parameters, the Minecraft server string can be changed and updated directly within SMF without the need to delete and re-import the manifest.</p><p>For example, if you want to update the server jar to <code>server.1.10.2.jar</code>, set the new value using <code>svccfg</code> (quotations are optional):</p><pre><code># svccfg -s minecraft setprop options/server=&quot;server.1.10.2.jar&quot;
</code></pre><p>Pending configuration data can be read with <code>svccfg</code>:</p><pre><code># svccfg -s minecraft listprop options/server
options/server  astring  server.1.10.2.jar
</code></pre><p>If the pending configuration is correct, refresh (commit) the configuration and restart the Minecraft service:</p><pre><code># svccfg -s minecraft:default refresh
# svcadm restart minecraft
</code></pre><h4 id="delegated-authorization">Delegated authorization</h4><p>By default, only the superuser can manage the Minecraft service state and properties. &#xA0;Permission to alter the state or properties of the Minecraft service can be independently granted to additional users through Role-Based Access Control (RBAC).</p><p>First, define authorization descriptions under <code>/etc/security/auth_attr</code>:</p><pre><code>solaris.smf.manage.minecraft:::Manage Minecraft Service States::
solaris.smf.value.minecraft:::Change Values of Minecraft Service Properties::
</code></pre><p>Then authorize one or more users with these descriptions. &#xA0;For example, the following line will grant state and property management permission to the Minecraft user:</p><pre><code># usermod -A solaris.smf.manage.minecraft,solaris.smf.value.minecraft minecraft
</code></pre><p>This user is now able to reboot the server and adjust the configuration options discussed above.</p><pre><code># su - minecraft
$ svcadm restart minecraft
</code></pre><p>This is especially practical if you want to grant one or more users access to reboot your server without having any access to its files:</p><pre><code># useradd -m -A solaris.smf.manage.minecraft brian
# passwd brian
...
# su - brian
$ svcadm restart minecraft
$ ls /var/db/minecraft/
ls: cannot open directory &apos;/var/db/minecraft/&apos;: Permission denied
</code></pre><p>Now Brian can log in and reboot the server without any additional access to server data.</p><h3 id="snapshots-backups">Snapshots &gt; backups</h3><p>What is possibly the best reason to host Minecraft on SmartOS is the first-class availability of ZFS. &#xA0;Setting up a dataset to contain the world directory along with cron-driven periodic snapshots completely supersedes any functionality provided by CraftBukkit plugins or MinecraftForge mods.</p><h4 id="setup-the-world-dataset">Set up the World Dataset</h4><p>First, switch your Minecraft server into maintenance state if it&apos;s been enabled.</p><pre><code># svcadm mark maintenance minecraft
</code></pre><p>Move the current world directory to a temporary location.</p><pre><code># mv /var/db/minecraft/world /var/db/minecraft/world_tmp
</code></pre><p>Create a new dataset for the world data. &#xA0;Optionally set a quota (20G in our example).</p><pre><code># zfs create -o quota=20G zones/$(sysinfo | json UUID)/data/minecraft/world
</code></pre><p>Chown the root of the new dataset and move the contents of the old world directory to the new one (this may take some time on established worlds). &#xA0;Delete the temporary directory when you&apos;re done.</p><pre><code># chown minecraft:minecraft /var/db/minecraft/world
# mv /var/db/minecraft/world_tmp/* /var/db/minecraft/world/
# rmdir /var/db/minecraft/world_tmp
</code></pre><p>If you&apos;re done in maintenance mode, you can now clear your Minecraft service.</p><pre><code># svcadm clear minecraft
</code></pre><h4 id="delegated-zfs-management">Delegated ZFS management</h4><p>Normally ZFS administration is handled by the superuser, but authorizations over actions can be delegated to other users on a per-dataset basis. &#xA0;In our case, we want our Minecraft user to be able to create snapshots of the world dataset so that the periodic snapshot script can be managed under that user.</p><pre><code># zfs allow -lu minecraft snapshot zones/$(sysinfo|json UUID)/data/minecraft/world
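
# su - minecraft
$ zfs snapshot zones/$(sysinfo|json UUID)/data/minecraft/world@test
$ logout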
</code></pre><p>Destroying and renaming snapshots can be achieved by granting destroy and rename permissions on descendant datasets (which snapshots are). &#xA0;Note that this will allow the named user to destroy or rename <em>any</em> descendant dataset.</p><pre><code># zfs allow -du brian destroy,rename zones/$(sysinfo|json UUID)/data/minecraft/world
</code></pre><p>You can review what permissions have been granted by calling <code>zfs allow &lt;dataset&gt;</code>:</p><pre><code># zfs allow zones/$(sysinfo|json UUID)/data/minecraft/world
</code></pre><p>And you can revoke permissions with <code>zfs unallow</code>:</p><pre><code># zfs unallow -du brian destroy,rename zones/$(sysinfo|json UUID)/data/minecraft/world
</code></pre><h4 id="setup-periodic-snapshots">Set up periodic snapshots</h4><p>The easiest way to create snapshots of the world directory is by using a script that disables auto-saving, issues an explicit save, creates the snapshot, and then re-enables auto-saving.</p><p>I put together a simple implementation in bash that does these things, based on the script described in <a href="https://www.electricmonk.nl/log/2011/07/22/minecraft-server-optimization/?ref=blog.brianewell.com">this blog post</a>.</p><p><strong>Notice:</strong> This script needs to be able to communicate with the Minecraft server console to function. &#xA0;It&apos;s designed to interface with <code>tmux</code>, as configured in the previous section, and will not work in a Java-only configuration.</p><p>As the minecraft user, copy the following script to the Minecraft directory:</p><!--kg-card-begin: html--><script src="https://gist.github.com/brianewell/d78c4b41f6d92e84745746a50df4de36.js"></script><!--kg-card-end: html--><p>Be sure to adjust <code>snappath</code> to reflect your dataset name. &#xA0;Add the script to your crontab, scheduled to run as often as you want it to.</p><p>For example, here&apos;s a crontab that will snapshot your world dataset every 15 minutes:</p><pre><code>0,15,30,45 * * * * ./snapshot.sh
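
# For reference, snapshot.sh boils down to roughly the following steps
# (a sketch only; the tmux session name "minecraft" is an assumption
# here, and the embedded gist above is the real script):
#   tmux send-keys -t minecraft "save-off" Enter
#   tmux send-keys -t minecraft "save-all" Enter
#   sleep 5
#   zfs snapshot ${snappath}@$(date +%Y%m%d-%H%M%S)
#   tmux send-keys -t minecraft "save-on" Enter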
</code></pre><p><strong>Notice:</strong> If you get errors about Minecraft not being allowed to execute cronjobs, you will need to unlock the account as root.</p><pre><code># passwd -N minecraft
</code></pre><p>I am aware that this script does not manage existing snapshots (expiring old ones or rotating snapshots) or send snapshots to other systems for remote replication. &#xA0;I leave extending this script to encompass additional functionality as an exercise for the reader.</p><h2 id="conclusion">Conclusion</h2><p>This is pretty much everything you need to get a single instance of Minecraft up and running within a single SmartOS zone. &#xA0;I had a bunch more notes that delved into advanced Minecraft multitenancy but I figured that was a bit much for a single blog post, so we&apos;ll save that for another day.</p>]]></content:encoded></item><item><title><![CDATA[Configuring Readline on SmartOS]]></title><description><![CDATA[<p>One of the more annoying differences I noticed in the switch from Linux to SmartOS was while connecting to a terminal via PuTTY: The former supports using the Home and End keys to navigate to the beginning and end of a command-line buffer while the latter does not, instead spitting</p>]]></description><link>https://blog.brianewell.com/configuring-readline-on-smartos/</link><guid isPermaLink="false">5f8a4fb0a4033becc92138a8</guid><category><![CDATA[SmartOS]]></category><dc:creator><![CDATA[Brian Ewell]]></dc:creator><pubDate>Fri, 17 Jun 2016 19:21:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1511057630-054448f045a3?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1511057630-054448f045a3?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Configuring Readline on SmartOS"><p>One of the more annoying differences I noticed in the switch from Linux to SmartOS was while connecting to a terminal via PuTTY: The former supports using the Home and End keys to navigate to the beginning 
and end of a command-line buffer while the latter does not, instead spitting tilde characters into the command-line wherever you happen to have the cursor.</p><p>My Google-fu was weak and unfocused (see: distracted) at the time, so I&apos;d usually just spend a few minutes fruitlessly searching for an answer until deciding to give up and focus on the original problem I was working on, trying my best to ignore the fact that I couldn&apos;t zip back and forth across a multi-row command like I was used to being able to on Linux.</p><p>It finally got bad enough that I committed myself to finding a solution before I moved back to my original task. &#xA0;This blog post is the culmination of my findings. &#xA0;Thanks to <a href="https://unix.stackexchange.com/users/40490/geedoubleya?ref=blog.brianewell.com">geedoubleya</a> and <a href="https://unix.stackexchange.com/users/37431/rofrol?ref=blog.brianewell.com">rofrol</a> from Stack Exchange for their work on <a href="https://unix.stackexchange.com/questions/161979/solaris-home-end-keys-not-working-like-debian-ubuntu?ref=blog.brianewell.com">this answer</a> which set me on the right track for researching this article.</p><p><strong>Bonus:</strong> Most of this guide should apply well to any modern readline installation!</p><h2 id="bash-and-readline">Bash and Readline</h2><p>What I was used to were features of <a href="https://www.gnu.org/software/bash/?ref=blog.brianewell.com">GNU Bash</a> and more specifically <a href="https://www.gnu.org/software/readline/?ref=blog.brianewell.com">GNU Readline</a>, the library that provides the line-editing and history functionality for Bash. &#xA0;Fortunately, with SmartOS, my problem was one of misconfiguration, not of missing software, as both Bash and Readline are already installed in most (if not all?) SmartOS images.</p><p>You can actually confirm this by testing against the default Readline key bindings. 
&#xA0;The relevant ones are as follows:</p><ul><li><code>beginning-of-line</code> (<code>ctrl-a</code>) moves the cursor to the beginning of the line.</li><li><code>end-of-line</code> (<code>ctrl-e</code>) moves the cursor to the end of the line.</li></ul><p>And while we could just stick with those, where&apos;s the fun in limiting ourselves to that? &#xA0;Readline supports reconfiguration via a simple configuration file (<code>~/.inputrc</code> or <code>/etc/inputrc</code>).</p><h2 id="readline-configuration">Readline configuration</h2><p>Readline configuration file syntax supports two common definition structures: Variables can be set with the form <code>set &lt;variable&gt; &lt;value&gt;</code> and key bindings can be defined with the form <code>keyname: function-name</code> or <code>&quot;key sequence&quot;: function-name</code>.</p><p>Setting variables will alter the run-time behavior of Readline. &#xA0;Variables that I found interesting include:</p><ul><li><code>bell-style</code> set to <code>none</code> if you don&apos;t want to hear a terminal ring again.</li><li><code>colored-stats</code> set to <code>on</code> if you want to see completions in different colors to indicate their file type.</li><li><code>editing-mode</code> set to either <code>emacs</code> (default) or <code>vi</code> depending on your religious affiliation.</li><li><code>enable-keypad</code> set to <code>on</code> if you want to make your life easier with arrow keys.</li><li><code>expand-tilde</code> set to <code>on</code> if you want to expand the tilde to a full path.</li><li><code>history-size</code> controls the number of lines stored in the history buffer (and saved to <code>.bash_history</code>).</li><li><code>horizontal-scroll-mode</code> set to <code>on</code> will cause a command-line to scroll across the bottom of the screen instead of wrapping to a new line.</li><li><code>keymap</code> sets the overall keymap for key binding commands; options include <code>emacs</code> (default),
<code>emacs-standard</code>, <code>emacs-meta</code>, <code>emacs-ctlx</code>, <code>vi</code>, <code>vi-move</code>, <code>vi-command</code> and <code>vi-insert</code>.</li><li><code>keyseq-timeout</code> sets the timeout duration in milliseconds for Readline to wait when reading an ambiguous key sequence.</li><li><code>mark-symlinked-directories</code> set to <code>on</code> if you want symlinks to directories to have a <code>/</code> appended to their names.</li></ul><p>Key bindings will allow you to map specific functions or macros to keys. &#xA0;Functions that I found interesting and relevant include:</p><ul><li><code>beginning-of-line</code> moves the cursor to the beginning of the current line.</li><li><code>end-of-line</code> moves the cursor to the end of the current line.</li><li><code>forward-char</code> moves the cursor forward one character.</li><li><code>backward-char</code> moves the cursor backward one character.</li><li><code>forward-word</code> moves the cursor forward to the end of the next word.</li><li><code>backward-word</code> moves the cursor backward to the start of the current or previous word.</li><li><code>clear-screen</code> clears the screen and redraws the current line at the top of the screen.</li><li><code>redraw-current-line</code> redraws the current line in its current location.</li><li><code>previous-history</code> moves back through the history list, displaying the previous command.</li><li><code>next-history</code> moves forward through the history list, displaying the next command.</li><li><code>beginning-of-history</code> displays the first entry of the history list.</li><li><code>end-of-history</code> displays the last entry of the history list, the current line being entered.</li><li><code>history-search-backward</code> moves backward through the history list, displaying lines that match your partially typed command.</li><li><code>history-search-forward</code> moves forwards through the history list, displaying lines that 
match your partially typed command.</li></ul><p>There&apos;s a bunch of other stuff in the documentation (like conditional blocks and a <strong>BUNCH</strong> of other bindable functions) that, while interesting, didn&apos;t have a practical place in this article. &#xA0;You may want to <a href="https://cnswww.cns.cwru.edu/php/chet/readline/rluserman.html?ref=blog.brianewell.com#SEC10">give it a proper read</a> if you&apos;d like to know more.</p><p>When testing your configuration, you can prompt a reload by using the key sequence <code>Ctrl-X</code> <code>Ctrl-R</code>, instead of logging out and back in again.</p><h2 id="guest-zone-configuration">Guest Zone configuration</h2><p>As mentioned before, the system-wide configuration file is located at <code>/etc/inputrc</code>, which can be overridden by individual users with a file at <code>~/.inputrc</code>. &#xA0;If you tend to switch roles into different users often, I recommend using the system-wide configuration file, as those configuration directives will be applied to all users on the system.</p><h2 id="global-zone-configuration">Global Zone Configuration</h2><p>Since SmartOS Global Zones are refreshed on reboot, we&apos;ll need to set up a transient SMF manifest to load our settings into the global zone each time it boots up.</p><!--kg-card-begin: html--><script src="https://gist.github.com/brianewell/30d87dc89542e3006c2174793fac6076.js"></script><!--kg-card-end: html--><p>Download the above manifest to <code>/opt/custom/smf/readline.xml</code> and copy your inputrc file to <code>/usbkey/config.inc/inputrc</code>. &#xA0;This will ensure that your SmartOS global zone will copy <code>/usbkey/config.inc/inputrc</code> from persistent storage into the ramdisk (<code>/etc/inputrc</code>) upon each boot.</p><h2 id="my-inputrc">My Inputrc</h2><p>While I&apos;m a vim user, I prefer not having to deal with that editor&apos;s intricacies while I&apos;m in bash.
&#xA0;Also, since I tend to ssh from Windows and PuTTY handles my copy buffer, all I really want readline to handle is command-line navigation. &#xA0;I originally wanted to do a pretty straightforward interface where basic arrow keys provided navigation and modifier keys (Alt and Ctrl) increased magnitude, but it turns out that arrow keys in a terminal are just &quot;Metafied&quot; ASCII, and pressing the Alt key while pressing one of those keys is just the equivalent of a normal ASCII sequence. &#xA0;If I were to attempt these bindings, I would end up with normal ASCII in my terminal: not exactly what I was looking for.</p><p>So, for now, I&apos;m stuck using <code>Home</code>, <code>End</code>, <code>PageUp</code> and <code>PageDown</code> just like the rest of us mortals.</p><p><strong>/etc/inputrc:</strong></p><pre><code>&quot;\e[1~&quot;: beginning-of-line
&quot;\e[4~&quot;: end-of-line
&quot;\e[5~&quot;: history-search-backward
&quot;\e[6~&quot;: history-search-forward
&quot;\e[3~&quot;: delete-char
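
# Optional: variables from the section above, commented out here
# (illustrative values, not part of my actual file)
# set bell-style none
# set colored-stats on
# set history-size 10000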
</code></pre>]]></content:encoded></item></channel></rss>