Juniper EX/SRX switch/router software Update

So, … I migrated away from the Ubiquiti router. It was a great router, it just didn't have the capability to handle gigabit PPPoE.
I'm sure there were better Ubiquiti routers that could handle the load, but I decided to move to Juniper since it seems to be the industry standard …

So, doing the updates is simple. Juniper even has an active/alternate boot layout ! Now I have some upgrades to do !

The procedure is the same on my router and on my switch:

# copy the images to the devices
$ scp /Software/Juniper/SRX320/junos-srxsme-15.1X49-D75.5-domestic.tgz user@ipX:/var/tmp/
$ scp /Software/Juniper/EX2200/jinstall-ex-2200-15.1R5.5-domestic-signed.tgz user@ipY:/var/tmp/
# install the new image and reboot into it
user@Device> request system software add /var/tmp/package.tgz
user@Device> request system reboot
# once the new version checks out, copy it to the alternate boot slice
user@Device> request system snapshot slice alternate
# reclaim space: temporary files and the rollback image
user@Device> request system storage cleanup
user@Device> request system software delete-backup
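After the reboot, and before snapshotting to the alternate slice, it's worth verifying that the box actually came up on the new release and has room left on storage; a quick CLI sanity check that works on both platforms:

```
user@Device> show version
user@Device> show system storage
```

`show version` should now report the new Junos release, and `show system storage` gives you an idea of how full /var is before running the cleanup commands.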


Signing a DNS ISC bind / named zone for DNSSEC

So, recently I had to update some records in my zone and I kept wondering why the changes weren't picked up on the internet.
Then I remembered that I have DNSSEC enabled, so I need to do more than just change the .zone file.

The command needed to regenerate the .signed zone from my clear-text zone is ( <zone origin> and <zone file> are placeholders for your own zone name and zone file ):

dnssec-signzone -A -3 $(head -c 1000 /dev/urandom | sha1sum | cut -b 1-16) -N INCREMENT -o <zone origin> -t <zone file>

I should find the time to make a post on how to actually generate the signing keys and stuff.
Basically, I keep my KSK ( key-signing key ) and ZSK ( zone-signing key ) public and private keys in the zone dir with the right permissions. The command above overwrites the old signed zone.
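The command substitution in the signing line deserves a word: it just builds a random NSEC3 salt by hashing 1000 random bytes and keeping the first 16 hex characters. A minimal sketch of what it produces:

```shell
# build a random 16-hex-character NSEC3 salt, exactly as in the signing command
salt=$(head -c 1000 /dev/urandom | sha1sum | cut -b 1-16)
echo "$salt"    # 16 hex characters, different every run
```

A fresh salt each re-sign means the NSEC3 chain changes even when the zone data doesn't.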

A nice tutorial I’ve used is How To Setup DNSSEC on an Authoritative BIND DNS Server

Simple RPM repo in Gentoo


So I've started creating an RPM repo so I can easily update my rabbitmq consumers. Here's how I did it !

The code

echo '=app-arch/createrepo-0.10.4 ~amd64
=dev-python/pyliblzma-0.5.3-r1 ~amd64
=app-arch/deltarpm-3.6_pre20110223-r1 ~amd64
=sys-apps/yum-3.4.3_p20130218-r1 ~amd64
=dev-util/rpmdevtools-8.5 ~amd64
=dev-util/checkbashisms-2.15.10 ~amd64' >> /etc/portage/package.keywords
echo 'app-arch/deltarpm python
app-arch/rpm python' >> /etc/portage/package.use
emerge createrepo rpmdevtools

mkdir -p /usr/src/rpm/{SPECS,SOURCES,BUILD,BUILDROOT,RPMS,SRPMS}
cd /usr/src/rpm/SPECS
rpmdev-newspec serverstuff
echo '[serverstuff]
name=ServerStuff Repo
# baseurl is a placeholder: point it at wherever apache serves the repo
baseurl=http://yourserver/
gpgcheck=0' > /usr/src/rpm/SOURCES/serverstuff.repo

echo 'Name: serverstuff-repo
Version: 1
Release: 1%{?dist}
Summary: ServerStuff Repository
BuildArch: noarch

License: GPL
Source0: serverstuff.repo

%description
ServerStuff Repository

%install
mkdir -p $RPM_BUILD_ROOT/etc/yum.repos.d
cp %{SOURCE0} $RPM_BUILD_ROOT/etc/yum.repos.d

%files
/etc/yum.repos.d/serverstuff.repo

%changelog
* Sat Feb 6 2016 root
- Initial Creation' >/usr/src/rpm/SPECS/serverstuff-repo.spec

rpmbuild -bb serverstuff-repo.spec --define "_topdir /usr/src/rpm"
# I have my apache serve from the htdocs dir
mkdir -p /var/www/
cp /usr/src/rpm/RPMS/noarch/serverstuff-repo-1-1.noarch.rpm /var/www/
createrepo /var/www/

Other issues

The same applies when you want to add another package. Just put it in the repo dir and run

createrepo --update /var/www/

If you're doing this too fast, yum may still have the old repo metadata cached and you'll have to clean it:

yum --enablerepo=serverstuff clean metadata

GWX uninstall and disable


I don't usually post Windows stuff, but here it comes.

You may want to disable Windows updates completely ! Just make sure you check for them manually often enough.

If you got here, you probably installed the updates and want to revert but don't know which ones. Here it is in PowerShell for W8.1.

I know it's a crude implementation: you might need to install PowerShell on W7, it doesn't handle some errors that might appear, and it only works on your local computer, but the links below should point you in the right direction.

The following script needs to be saved as a file.ps1 and run as administrator:

$array = @("KB3035583","KB2952664","KB2976978")
for ($i=0; $i -lt $array.Length; $i++) {
    # only try to uninstall hotfixes that are actually installed
    $hotfix = Get-HotFix -ComputerName $env:COMPUTERNAME | Where-Object { $_.HotfixID -eq $array[$i] }
    if ($hotfix) {
        $HotFixNum = $array[$i].Replace("KB","")
        wusa.exe /uninstall /KB:$HotFixNum /quiet /norestart
    }
}


I'll update at a later time with a script that takes ownership of the GWX directory and deletes it too.
Maybe I'll even figure out a way to mark those updates as hidden directly.
And a way to add a reg key to disable the notifications and scheduled tasks !

Microsoft really did a number on this one, trying so hard to push its W10. Unfortunately I'm not a fan of Windows .. nor of the features W10 brings. A back-to-classic XP style would have been awesome. I don't need no XBOX crap and other "cloud" stuff. Maybe as modules for later times, but it's bloated and I don't feel like it !

Useful links

Running with hbase+opentsdb+tcollector+grafana

Getting grafana opentsdb hbase and tcollector

git clone


REPLACE ~ in this guide with your actual home path ! Not all params are expanded by the services.

CHANGE THE ~/hbase-1.1.2/conf/hbase-site.xml and add this inside the configuration section:

Otherwise you'll probably lose your data at a reboot, since the default location is in /tmp and it will be cleaned.
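The property in question is hbase.rootdir; a minimal sketch of the hbase-site.xml entry, where the path is a placeholder to replace with your own:

```xml
<!-- keep HBase data out of /tmp; the path below is a placeholder -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///home/metrics/hbase-data</value>
</property>
```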

env COMPRESSION=NONE HBASE_HOME=~/hbase-1.1.2 ~/opentsdb-2.1.1/
mkdir ~/opentsdb-tmp

Start Everything Up

# Run everything as a different user, NOT ROOT !!!!
~/opentsdb-2.1.1/build/tsdb tsd --port=4242 --staticroot=~/opentsdb-2.1.1/build/staticroot/ --cachedir=~/opentsdb-tmp --zkquorum=localhost:2181 --auto-metric --config=~/opentsdb-2.1.1/opentsdb.conf &
~/tcollector/startstop start --allowed-inactivity-time=3600 --backup-count=10 -v
~/grafana-2.6.0/bin/grafana-server -homepath=~/grafana-2.6.0/ -pidfile=~/grafana-2.6.0/ &

My opentsdb.conf looks like this

tsd.http.request.cors_domains = *
tsd.network.port = 4242

My grafana's defaults.ini looks like this. No need to tinker with the MySQL db: just create the database and grant privileges, and Grafana will take care of creating its tables.

app_mode = production

[paths]
data = data
logs = data/log

[server]
protocol = http
http_addr =
http_port = 3000
domain =
enforce_domain = false
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
router_logging = false
static_root_path = public
enable_gzip = false
cert_file =
cert_key =

[database]
type = mysql
host =
name = grafana
user = grafana
password = password
ssl_mode = disable
path = grafana.db

[session]
provider = file
provider_config = sessions
cookie_name = grafana_sess
cookie_secure = false
session_life_time = 86400
gc_interval_time = 86400

[analytics]
reporting_enabled = true
google_analytics_ua_id =
google_tag_manager_id =

[security]
admin_user = admin
admin_password = admin
secret_key = keyhere
login_remember_days = 7
cookie_username = grafana_user
cookie_remember_name = grafana_remember
disable_gravatar = false
data_source_proxy_whitelist =

[users]
allow_sign_up = false
allow_org_create = false
auto_assign_org = true
auto_assign_org_role = Viewer
verify_email_enabled = false

[auth.anonymous]
enabled = false
org_name = Main Org.
org_role = Viewer

[auth.github]
enabled = false
allow_sign_up = false
client_id = some_id
client_secret = some_secret
scopes = user:email
auth_url =
token_url =
api_url =
team_ids =
allowed_organizations =

[auth.google]
enabled = false
allow_sign_up = false
client_id = some_client_id
client_secret = some_client_secret
scopes =
auth_url =
token_url =
api_url =
allowed_domains =

[auth.basic]
enabled = true

[auth.proxy]
enabled = false
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true

[auth.ldap]
enabled = false
config_file = /etc/grafana/ldap.toml

[smtp]
enabled = false
host = localhost:25
user =
password =
cert_file =
key_file =
skip_verify = false
from_address = admin@grafana.localhost

[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html

[log]
mode = console, file
buffer_len = 10000
level = Info

[log.console]
level =
formatting = true

[log.file]
level =
log_rotate = true
max_lines = 1000000
max_lines_shift = 28
daily_rotate = true
max_days = 7

[event_publisher]
enabled = false
rabbitmq_url = amqp://localhost/
exchange = grafana_events

[dashboards.json]
enabled = false
path = /var/lib/grafana/dashboards

[quota]
enabled = false
org_user = 10
org_dashboard = 100
org_data_source = 10
org_api_key = 10
user_org = 10
global_user = -1
global_org = -1
global_dashboard = -1
global_api_key = -1
global_session = -1


For Grafana:

#wget new grafana
tar -xzf grafana-version.linux-x64.tar.gz
pkill -9 grafana-server
cp grafana-old/conf/defaults.ini grafana-new/conf/defaults.ini
~/grafana-new/bin/grafana-server -homepath=~/grafana-new/ -pidfile=~/grafana-new/ &

Extend your lvm VolumeGroup

First, create your partition as you normally would. No need to set up a FileSystem on it.
In my case I’ll try to extend my vg01-var.

pvcreate /dev/sdb1                  # turn the new partition into an LVM physical volume
vgextend vg01 /dev/sdb1             # add it to the volume group
lvextend /dev/vg01/var /dev/sdb1    # grow the logical volume onto the new PV
resize2fs /dev/vg01/var             # grow the filesystem to fill the LV

Additional info:
My sdb1 was already LVMed into something by someone, so I had to unmount it and tear the old volume group down first:

umount /raid
vgchange -a n vg02   # deactivate the old volume group
vgremove vg02        # remove it

nfs export on centos7

So I wanted to export my /backup to some machines so I don’t have to scp stuff to it.

On the server:

yum install nfs-utils nfs-utils-lib
systemctl enable nfs-server.service
systemctl enable nfs-lock.service
systemctl enable nfs-rquotad.service
systemctl enable nfs-idmap.service
systemctl enable nfs-mountd.service
systemctl enable rpcbind.service
# client1 and client2 are placeholders for the hosts allowed to mount the export
echo '/backup client1(rw,sync,no_root_squash) client2(rw,sync,no_root_squash)' >/etc/exports
systemctl start rpcbind.service
systemctl start nfs-server.service
systemctl start nfs-lock.service
systemctl start nfs-idmap.service

On the client:

yum install nfs-utils nfs-utils-lib
# nfsserver is a placeholder for the NFS server's hostname or IP
# mount -t nfs nfsserver:/backup /backup/
mkdir -p /backup
echo 'nfsserver:/backup /backup/ nfs rw,sync 0 0' >>/etc/fstab
mount /backup

Nested ESXi Virtualization

Basically, I had to test something on an ESXi upgrade procedure before putting it into production and I didn’t want to mess up my working environment.
The following is done in an ESXi 5.5 SSH console:

cd /vmfs/volumes
# cd [your volume]/[your machine name]
vi [your machine name].vmx
#make sure you have enough ram
#find and replace: memSize = "8192" with something that fits your needs ( at least 2048 though )
#find and replace or add: numvcpus = "4" and cpuid.coresPerSocket = "2" to something that meets your demands
#set guestOS = "vmkernel5" here if you don't want to manually set it through the interface and you'll nest an ESXi 5 host
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"
cpuid.1.ecx = "---- ---- ---- ---- ---- ---- --H- ----"
hypervisor.cpuid.v0 = "FALSE"
vhv.enable = "TRUE"  
sched.mem.maxmemctl = "0"
# search for your vm id
vim-cmd /vmsvc/getallvms | grep "[your machine name]"
vim-cmd /vmsvc/reload [id]

After doing this, go to the machine settings in the vSphere Client and set "Options" -> "General Options" -> "Guest Operating System" to "Other" -> "VMware ESXi 5.x".
Also, be sure to have sized your VM disk appropriately if you want machines in it .. AND give it at least 2 cores !
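For convenience, here is the full set of .vmx edits from the steps above collected in one place; the memory, CPU, and guestOS values are examples to size for your own needs:

```
memSize = "8192"
numvcpus = "4"
cpuid.coresPerSocket = "2"
guestOS = "vmkernel5"
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"
cpuid.1.ecx = "---- ---- ---- ---- ---- ---- --H- ----"
hypervisor.cpuid.v0 = "FALSE"
vhv.enable = "TRUE"
sched.mem.maxmemctl = "0"
```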



Simple off-site mysql and website backup

Here are two scripts I wrote to easily back up databases and some websites.

Since I use SELinux with a custom data dir, I needed to do this on my server:

yum install rssh
mkdir -p /backup/.ssh
cd /backup/.ssh
ssh-keygen -t rsa -f ./backup
# paste the public key ( backup.pub ) in, then hit Ctrl-D
cat >authorized_keys
sed -i 's/#allowsftp/allowsftp/g' /etc/rssh.conf
# the backup user's home must be /backup so sshd finds the authorized_keys above
adduser -d /backup backup -s /usr/bin/rssh
chown -R backup /backup/.ssh
semanage fcontext -at user_home_dir_t /backup/
semanage fcontext -at ssh_home_t /backup/.ssh/
semanage fcontext -at ssh_home_t /backup/.ssh/authorized_keys
restorecon -Rv /backup

You just need to copy the /backup/.ssh/backup private key file to the servers you want to back up from ( make sure it's chmod 0600 on the clients too ) so they can use it to connect to this server.

I’ve put the following script on my mysql server

mkdir -p /root/scripts
# the variables at the top of the script are placeholders for your own values
cat >/root/scripts/ <<'_EOF_'
#!/bin/bash
HOST=localhost
USER=backupuser
PASSWORD=password
OUTPUT=/tmp/mysql-backups
PORT=22
KEY=/root/.ssh/backup
DEST=backup@backupserver:
mkdir -p "${OUTPUT}"
databases=$(mysql --host=${HOST} --user=${USER} --password=${PASSWORD} --skip-column-names -s -N -e "SHOW DATABASES;")
for db in $databases; do
        # skip MySQL's own schemas and anything starting with "_"
        if [[ "$db" == "information_schema" ]]; then continue; fi
        if [[ "$db" == "performance_schema" ]]; then continue; fi
        if [[ "$db" == _* ]]; then continue; fi
        file=sql_$db.$(date +%Y%m%d_%s).sql.gz
        mysqldump --force --opt --host=${HOST} --user=${USER} --password=${PASSWORD} --databases $db | gzip > ${OUTPUT}/${file}
        scp -oPort=${PORT} -i ${KEY} ${OUTPUT}/${file} ${DEST}
        rm -f "${OUTPUT}/${file}"
done
_EOF_
chmod +x /root/scripts/
echo '0 2 * * * root nice /root/scripts/ >/dev/null 2>&1' >> /etc/crontab
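The skip logic in that loop can be sketched on its own; `keep_db` below is a hypothetical helper just for illustration of which databases get dumped:

```shell
# sketch of the database filter used in the backup script above:
# MySQL's own schemas and anything starting with "_" are skipped
keep_db() {
        case "$1" in
                information_schema|performance_schema|_*) return 1 ;;
                *) return 0 ;;
        esac
}

for db in information_schema performance_schema _scratch shop blog; do
        keep_db "$db" && echo "would dump: $db"
done
```

Only `shop` and `blog` survive the filter in this example.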

I've put the following script on my web server; feel free to adapt it.

mkdir -p /root/scripts
# the variables at the top of the script are placeholders for your own values
cat >/root/scripts/ <<'_EOF_'
#!/bin/bash
SOURCE=/var/www
OUTPUT=/tmp/site-backups
PORT=22
KEY=/root/.ssh/backup
DEST=backup@backupserver:
mkdir -p "${OUTPUT}"
# archive every site dir in the web root except cgi-bin and html
for site in $(ls "${SOURCE}" | grep -Ev '(cgi-bin|html)'); do
        file=site_$site.$(date +%Y%m%d_%s).tar.gz
        tar -czf ${OUTPUT}/${file} -C ${SOURCE} ${site}
        scp -oPort=${PORT} -i ${KEY} ${OUTPUT}/${file} ${DEST}
        rm -f "${OUTPUT}/${file}"
done
_EOF_
chmod +x /root/scripts/
echo '0 2 * * * root nice /root/scripts/ >/dev/null 2>&1' >> /etc/crontab
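The site-selection filter is the interesting part; a quick illustration of what the grep excludes (the directory names here are made up):

```shell
# everything in the web root except cgi-bin and html gets archived
printf '%s\n' cgi-bin html shop.example blog.example | grep -Ev '(cgi-bin|html)'
```

Only `shop.example` and `blog.example` come out the other side, so those are the directories that get tarred and shipped.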

You should probably make an scp connection to the server by hand first, so each client can accept the server's host key.