
TMUX: A Command Line Must!

When I started with Unix, it was during my college days on a VT-100 terminal, with text command lines. There was even an online chat window using text (remember "talk"?). When a GUI was introduced using X Windows on Sun Microsystems Solaris machines, the experience was so different, and it was considered a productivity boost because we could multitask. However, old habits die hard, so even with a GUI, I would keep dedicated X-Term windows for command line work. I would run "screen" (aka GNU Screen) to have multiple (and switchable) windows within an X-Term.

The advantages of using screen are:

  1. When my SSH connection is broken, the command line sessions are still working. Useful when running shell scripts that take a long time to complete.
  2. Having a shell with command line history, I could review the previous executions, in case I forgot to document something.
  3. Instead of using the mouse to click on a different window, I use the keyboard shortcut Ctrl-A and the number keys to switch between screens. Way quicker (a quick example of this workflow follows this list).
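
For illustration, a typical screen workflow on a remote host looks something like this (the session name is just an example):

# start a named screen session over SSH
screen -S maintenance

# inside it: Ctrl-A c opens a new window, Ctrl-A <number> jumps
# to that window, and Ctrl-A d detaches while everything keeps running

# after an SSH drop (or from a new terminal), list and reattach
screen -ls
screen -r maintenance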

With the introduction of Red Hat Enterprise Linux 8, I was introduced (read: forced) to a new screen replacement called TMUX. Apparently, it's not a new utility, but it's way more powerful – and useful. After using it for a while, I saw these advantages:

  1. Having a vendor-managed firewall, I didn't have a choice for connection keep-alives: my SSH connections would drop after inactivity. With TMUX, there's a clock in the status bar that forces the connection to send data once a minute, thus keeping the connection alive indefinitely. No more dropped connections and reconnection effort.
  2. Being able to run screen within a TMUX window is pretty nifty. I have another layer of switchable windows, which is really handy when I have multiple servers representing the different layers of a site (i.e. web, JBOSS, database, etc.). This is possible because TMUX's key bindings for switching windows are configurable and, by default, different from screen's.
  3. TMUX has window panes, for dashboard-like monitoring. Plus, it looks awesome!
My tmux screen with split panes (For demo only. I usually like to see one window at a time)
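
For reference, here is a minimal ~/.tmux.conf sketch along those lines; the values are illustrative, not my exact configuration:

# ~/.tmux.conf
# redraw the status bar (and its clock) every 60 seconds, which
# sends a little traffic and keeps an idle SSH session alive
set -g status-interval 60

# tmux's default prefix is Ctrl-B, so screen's Ctrl-A keeps working
# untouched inside a tmux window (it can also be set explicitly)
set -g prefix C-b

# panes: prefix + " splits top/bottom, prefix + % splits left/right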

Most of the Red Hat Enterprise Linux systems I work with are version 6.x, where TMUX is not included in the RHN repository. Thus, I had to build it from source. These are the steps to do it:

  1. Download, compile, and install the latest libevent and ncurses.
  2. Download TMUX and compile using the following configure flags (note: I installed into my local home directory; a fuller build sketch follows this list):
    CFLAGS="-I$HOME/local/include -I$HOME/local/include/ncurses" LDFLAGS="-L$HOME/local/lib -L$HOME/local/include/ncurses -L$HOME/local/include" CPPFLAGS="-I$HOME/local/include -I$HOME/local/include/ncurses"
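
Putting it all together, the build from source looked roughly like this; the $HOME/local prefix matches the flags above, but the versions and paths are placeholders for whatever is current:

# build libevent and ncurses into $HOME/local first
cd libevent-<version> && ./configure --prefix=$HOME/local && make && make install
cd ../ncurses-<version> && ./configure --prefix=$HOME/local && make && make install

# then configure and build tmux against those local copies
cd ../tmux-<version>
CFLAGS="-I$HOME/local/include -I$HOME/local/include/ncurses" \
LDFLAGS="-L$HOME/local/lib -L$HOME/local/include/ncurses -L$HOME/local/include" \
CPPFLAGS="-I$HOME/local/include -I$HOME/local/include/ncurses" \
./configure --prefix=$HOME/local
make && make install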

If there's any doubt that the command line is important to a sysadmin's daily work, Microsoft developers are proud to present an expanded version of the Windows command prompt. The video below has the full highlights, and it looks great!

Windows Terminal: Building a better command line experience for developers

There’s even a trailer that rivals an iPhone launch commercial!

The new Windows Terminal: Trailer

I'm excited that Operating System vendors are now providing more robust terminal tools, making the command line a much better experience for all of us IT folks!

Moving the Default Docker Data Directory in RHEL 7

Red Hat Docker

In most applications, the install directory defaults to locations such as /var, /opt, or /usr/local (or even the / root directory) for data and logs. This is fine for testing purposes. However, for production use, especially when the application becomes really active, those data and log directories can grow large. An alternate storage location is needed, such as a separate LVM or xfs volume that can be resized for future expansion.

In this example, let's move Docker's default data directory onto a separate xfs-formatted disk. On a Red Hat Enterprise Linux 7 installation, this Docker setup comes from the RPM repository, and the default location for the data files is /var/lib/docker. To change the path to somewhere else, for example /disk2/docker, first update the /etc/sysconfig/docker file to reflect the change:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --graph=/disk2/docker --iptables=False --storage-driver=overlay2'

Next, stop Docker and move the files from /var/lib/docker into the new /disk2/docker directory; a rough sketch of that step follows. Since SELinux is enabled in the production environment, Docker will also need permission to write into the new directory.
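
Something along these lines, using the paths from this example:

# stop Docker before touching its data directory
systemctl stop docker

# create the new location on the xfs disk and copy the data over,
# preserving ownership, permissions, and SELinux labels
mkdir -p /disk2/docker
rsync -aAXS /var/lib/docker/ /disk2/docker/

With the data in place, label the new path so the container file contexts apply to /disk2/docker as well: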

semanage fcontext -a -s system_u -t container_var_lib_t '/disk2/docker(/.*)?'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/.*/config.env'

semanage fcontext -a -s system_u -t container_file_t '/disk2/docker/vfs(/.*)?'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/init(/.*)?'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/overlay(/.*)?'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/overlay2(/.*)?'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/containers/.*/hosts'

semanage fcontext -a -s system_u -t container_log_t '/disk2/docker/containers/.*/.*\.log'

semanage fcontext -a -s system_u -t container_share_t '/disk2/docker/containers/.*/hostname'

And finally, restore the file context for /disk2/docker:

restorecon -R /disk2/docker

Start up the Docker service again, and the environment is now ready to use!
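
To bring the environment back up and confirm Docker is using the new location, something like this works (the grep string matches the "Docker Root Dir" line in the docker info output):

systemctl start docker

# should now report /disk2/docker
docker info | grep 'Docker Root Dir'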


Using Elasticsearch for JBOSS Logs

Elasticsearch Logo

Ever since the GSA (Google Search Appliance) was decommissioned, there seems to be one clear winner as a replacement: Elasticsearch. The search engine software is also quite powerful and versatile. It can be adapted to do customized site searches, or its ready-made tools can ingest logs from Apache web servers, as well as other sources like system data, network packets, and even Oracle databases. Best of all, it's based on open-source software (Apache Lucene), and the functional basic version is free to use!

Naturally, as part of a sysadmin job, being able to analyze logs and have them searchable and visualized (in Kibana) makes the job easier. For Enterprise environments that use JBOSS EAP as an app container, one can use the Elastic stack to parse through the logs, both historical and in real time. The tools are:

  1. Elasticsearch – the search and analytics engine that stores and indexes the log data.
  2. Logstash – parses and transforms the incoming log lines.
  3. Kibana – the web UI for searching and visualizing the data.
  4. Filebeat – the lightweight shipper that reads the log files on the JBOSS servers.

From the search engine itself to the individual tools, there is a lot of information on the Elastic site on how to configure and run them, including examples. It is assumed here that Elasticsearch and Kibana are configured and running, and that Logstash and Filebeat have been set up. The purpose of this post is only to show the possibility of parsing through JBOSS logs.

When JBOSS access logging is enabled, use Filebeat to read through all of the access_log files using a wildcard. Filebeat is a lightweight application (written in Go) that can sit on the JBOSS or web servers without interfering with current operations. It's ideal for production environments. The filebeat.yml file looks something like this:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /apps/jboss-home/standalone/log/default-host/access_log_*
# tag every event so Logstash can route it to the right index
tags: ["support"]
output.logstash:
    hosts: ["logstash-hostname:5044"]
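
Assuming Filebeat was installed from the Elastic RPM, the configuration can be sanity-checked and the shipper started with something like:

# verify the YAML parses and that the Logstash output is reachable
filebeat test config
filebeat test output

# run it as a service so it survives reboots
systemctl enable filebeat
systemctl start filebeat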

Filebeat has a nifty feature that continues to read a log file as it is appended. However, be warned: if the log file gets truncated (deleted or re-written), then Filebeat may erroneously send partial messages to Logstash, which will cause parsing failures.

In Logstash, all the Filebeat input will now need to be parsed so the relevant data can be ingested into Elasticsearch. This is the heart of the ingestion process, as Logstash is where the data transformation happens. A configuration file in the /etc/logstash/conf.d directory looks like this:

input {
   beats {
   port => 5044
   }
}

filter {
 if "beats_input_codec_plain_applied" in [tags] {
    mutate {
       remove_tag => ["beats_input_codec_plain_applied"]
    }
 }

grok {
   match => {
      "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) (?:-|%{NUMBER:perf:float})'
   }
}

date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    locale => "en"
    remove_field => "timestamp"
}

mutate {
    remove_field => [ "message", "@version", "[beat][version]", "[beat][name]", "[beat][hostname]" ]
   }
}

output {
   if "support" in [tags] {
      elasticsearch {
        hosts => ["elasticsearch-hostname:9200"]
        manage_template => false
        index => "jbosslogs-support-%{+YYYY.MM.dd}"

      }
   }
}

Logstash listens on port 5044, on the same (or a separate) server as Elasticsearch. When ingesting a lot of data, both the Logstash and Elasticsearch engines (Java-based apps) will consume quite a bit of CPU and memory, so it's a good idea to separate them.
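
Before (re)starting Logstash, the pipeline can be checked for syntax errors; the paths below assume the standard RPM layout and are only illustrative:

# parse the pipeline configuration and exit without running it
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/

# then restart the service to pick up the new configuration
systemctl restart logstash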

In this example, a JBOSS access_log entry is something like:

192.168.0.0 - - [09/Nov/2018:15:50:16 -0800] "GET /support/warrantyResults HTTP/1.1" 200 77 0.002

The most important number is the last field, a floating-point value for the URL execution time (in seconds). In the grok pattern above, it's assigned to the field name "perf", as in performance. Kibana can then be used to aggregate and visualize the perf values and see if there's any issue with the JBOSS application.
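
As a quick check outside of Kibana, the perf field can also be queried directly against the index pattern defined above; the hostname and the 3-second threshold are just examples:

# return the five slowest requests above 3 seconds
curl -s -H 'Content-Type: application/json' \
  'http://elasticsearch-hostname:9200/jbosslogs-support-*/_search?pretty' \
  -d '{ "query": { "range": { "perf": { "gt": 3 } } }, "sort": [ { "perf": "desc" } ], "size": 5 }'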

Kibana Snapshot

The above screenshot indicates the top few URLs with average performance times above 3 seconds. The timestamp column shows when they occurred during the timespan selected (in this example, "today"). Then just zoom into the specific time and troubleshoot the Java app accordingly.

This is just one way to dive into the JBOSS logs using Elasticsearch and Kibana. An Elastic engineer can spend hours creating and tweaking this setup in order to get the most out of the available data. At least the tools are friendly enough to configure, with plenty of documentation available on the Elastic website. The software has been around long enough, with plenty of community support, that searching the forums (via Google) can give helpful hints for the customization effort. In general, this is an impressive (and fun) way to perform log analysis, and for the price, it's hard to beat. No wonder Elastic's IPO raised over $250 million on the first day! They're on the right track to be the next hot company with products Enterprise customers can really use.