Juniper EX/SRX switch/router software update

So, … I migrated away from the Ubiquiti router. It was a great router, it just didn’t have the capability to handle gigabit PPPoE.
I’m sure there were better Ubiquiti routers that could have handled the load, but I decided to move to Juniper since it seems to be the industry standard …

So, doing the updates is simple. Juniper even has an active/alternate boot layout! Now I have to do some upgrades!

It seems to be the same procedure on my router and on my switch!


$ scp /Software/Juniper/SRX320/junos-srxsme-15.1X49-D75.5-domestic.tgz user@ipX:/var/tmp/
$ scp /Software/Juniper/EX2200/jinstall-ex-2200-15.1R5.5-domestic-signed.tgz user@ipY:/var/tmp/
# install the package and reboot into the new release
user@Device> request system software add /var/tmp/package.tgz
user@Device> request system reboot
# once the new release checks out, copy it to the alternate boot slice
user@Device> request system snapshot slice alternate
# reclaim storage and drop the rollback copy of the old release
user@Device> request system storage cleanup
user@Device> request system software delete-backup
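After the reboot it’s worth confirming that the new release is running and that the snapshot actually landed on the alternate slice. A quick check (on dual-root platforms like these) looks like:

user@Device> show version
user@Device> show system snapshot media internal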

Enjoy.

Signing a DNS ISC bind / named zone for DNSSEC

So, recently I had to update some records in my zone and I kept wondering why the changes weren’t being picked up on the internet.
Then I remembered that I have DNSSEC enabled, so I need to do something more than just change the .zone file.

The command needed to regenerate the .signed zone from my clear-text zone is:


dnssec-signzone -A -3 $(head -c 1000 /dev/urandom | sha1sum | cut -b 1-16) -N INCREMENT -o asandu.eu -t asandu.eu.zone

I should probably find the time to write a post on how to actually generate the signing keys and such.
Basically, I have my KSK (key-signing key) and ZSK (zone-signing key) public and private keys in the zone directory with the right permissions. The command above overwrites the old signed zone.
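After regenerating the .signed file, the zone still needs to be reloaded, and it doesn’t hurt to check that signatures are actually being served. A minimal sketch, assuming named.conf points at the .signed file and rndc is configured:

rndc reload asandu.eu
dig @localhost asandu.eu SOA +dnssec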

A nice tutorial I’ve used is How To Setup DNSSEC on an Authoritative BIND DNS Server

Simple RPM repo in Gentoo

INTRO

So I’ve started creating an RPM repo so I can easily update my RabbitMQ consumers. Here’s how I did it!

The code

echo '=app-arch/createrepo-0.10.4 ~amd64
=dev-python/pyliblzma-0.5.3-r1 ~amd64
=app-arch/deltarpm-3.6_pre20110223-r1 ~amd64
=sys-apps/yum-3.4.3_p20130218-r1 ~amd64
=dev-util/rpmdevtools-8.5 ~amd64
=dev-util/checkbashisms-2.15.10 ~amd64' >> /etc/portage/package.keywords
echo 'app-arch/deltarpm python
app-arch/rpm python' >> /etc/portage/package.use
emerge createrepo rpmdevtools

mkdir -p /usr/src/rpm
cd /usr/src/rpm
mkdir -p {BUILD,RPMS,SOURCES,SPECS,SRPMS,tmp}
cd /usr/src/rpm/SPECS
rpmdev-newspec serverstuff
echo '[serverstuff]
name=ServerStuff Repo
baseurl=https://serverstuff.info/repo/
enabled=1
gpgcheck=0' > /usr/src/rpm/SOURCES/serverstuff.repo

echo 'Name: serverstuff-repo
Version: 1
Release: 1%{?dist}
Summary: ServerStuff Repository
BuildArch: noarch

License: GPL
URL: https://serverstuff.info/
Source0: serverstuff.repo

%description
ServerStuff Repository

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/etc/yum.repos.d
cp %SOURCE0 $RPM_BUILD_ROOT/etc/yum.repos.d

%files
/etc/yum.repos.d

%changelog
* Sat Feb 6 2016 root
- Initial Creation' >/usr/src/rpm/SPECS/serverstuff-repo.spec

rpmbuild -bb serverstuff-repo.spec --define "_topdir /usr/src/rpm"
# I have my serverstuff.info apache serve from the htdocs dir
mkdir -p /var/www/serverstuff.info/htdocs/repo/
cp /usr/src/rpm/RPMS/noarch/serverstuff-repo-1-1.noarch.rpm /var/www/serverstuff.info/htdocs/repo/
createrepo /var/www/serverstuff.info/htdocs/repo/
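On a client machine the repo RPM can then be installed straight from its URL (yum accepts package URLs), which drops serverstuff.repo into /etc/yum.repos.d:

yum install https://serverstuff.info/repo/serverstuff-repo-1-1.noarch.rpm
yum repolist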

Other issues

The same applies when you want to publish another package: just put it in the repo dir and run

createrepo --update /var/www/serverstuff.info/htdocs/repo/

If you’re doing this too quickly, yum may still be using its cached repo metadata, so you might have to clean it:

yum --enablerepo=serverstuff clean metadata

GWX uninstall and disable

GWX disabling and uninstalling

I don’t usually post Windows stuff, but here it comes.

You’ll want to disable Windows automatic updates completely! Just make sure you check for them manually often enough.

If you got here, you probably installed the updates and want to revert them but don’t know which ones. Here it is in PowerShell for W8.1.

I know it’s a crude implementation: you might need to install PowerShell on W7, it doesn’t handle some errors that might appear, and it only works on your local computer, but the links below should point you in the right direction.

The following script needs to be saved as a file.ps1 and run as administrator:

# KB3035583 = GWX itself; KB2952664 / KB2976978 = Windows 10 upgrade compatibility updates
$array = @("KB3035583","KB2952664","KB2976978")
for ($i = 0; $i -lt $array.Length; $i++) {
    # only call wusa if the hotfix is actually installed
    $hotfix = Get-HotFix -ComputerName $env:COMPUTERNAME | Where-Object { $_.HotFixID -eq $array[$i] }
    if ($hotfix) {
        $HotFixNum = $array[$i].Replace("KB","")
        wusa.exe /uninstall /KB:$HotFixNum /quiet /norestart
    }
}

TODO

I’ll update this at a later time with a script that takes ownership of the GWX directory and deletes it too.
Maybe I’ll even figure out a way to mark those updates as hidden directly.
And a way to add a registry key to disable the notification and the scheduled tasks (a first sketch of that is below)!
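For the registry part, Microsoft documents policy values (KB3080351) that suppress the upgrade offer and the notification icon; a minimal PowerShell sketch assuming those documented keys:

# DisableGwx / DisableOSUpgrade are the policy values documented in KB3080351
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Gwx" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Gwx" -Name "DisableGwx" -PropertyType DWord -Value 1 -Force | Out-Null
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Name "DisableOSUpgrade" -PropertyType DWord -Value 1 -Force | Out-Null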

Microsoft really did a number on this one, trying so hard to push its W10. Unfortunately I’m not a fan of Windows, nor of the features W10 brings. A back-to-classic XP style would have been awesome. I don’t need the XBOX crap and other “cloud” stuff. Maybe as modules for later times, but as it is it’s bloated and I don’t feel like it!

Useful links

http://www.myce.com/news/how-to-uninstall-kb3035583-the-windows-10-downloader-for-windows-7-and-8-1-75681/
https://gallery.technet.microsoft.com/scriptcenter/Uninstall-security-update-76f2dcb7
http://helpdeskgeek.com/windows-7/windows-7-how-to-delete-files-protected-by-trustedinstaller/
http://ss64.com/ps/syntax-arrays.html
https://powertoe.wordpress.com/2009/12/14/powershell-part-4-arrays-and-for-loops/
http://blog.ultimateoutsider.com/2015/08/using-gwx-stopper-to-permanently-remove.html
https://techjourney.net/disable-remove-get-windows-10-upgrade-reservation-notification-system-tray-icon/
http://www.infoworld.com/article/2979572/microsoft-windows/gwx-stopper-an-easy-way-to-permanently-delete-get-windows-10-nagware-in-windows-7-and-81.html

Running with hbase+opentsdb+tcollector+grafana

Getting grafana, opentsdb, hbase and tcollector

wget https://grafanarel.s3.amazonaws.com/builds/grafana-2.6.0.linux-x64.tar.gz
wget https://github.com/OpenTSDB/opentsdb/releases/download/2.1.1/opentsdb-2.1.1.tar.gz
wget https://www.eu.apache.org/dist/hbase/1.1.2/hbase-1.1.2-bin.tar.gz
git clone https://github.com/OpenTSDB/tcollector.git
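Once downloaded, unpack everything into the home directory of the (non-root) user that will run the stack. Note that OpenTSDB ships as source, so it also needs a build step (which is what produces the build/tsdb path used below):

tar -xzf grafana-2.6.0.linux-x64.tar.gz
tar -xzf hbase-1.1.2-bin.tar.gz
tar -xzf opentsdb-2.1.1.tar.gz
cd opentsdb-2.1.1 && ./build.sh && cd ..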

Setup

REPLACE ~ throughout this guide with the actual absolute path! Not all parameters get expanded by the services.

CHANGE ~/hbase-1.1.2/conf/hbase-site.xml and add this inside the <configuration> section:
<property>
  <name>hbase.rootdir</name>
  <value>file:///your/user/home/hbase-rootdir/hbase</value>
</property>

Otherwise you’ll probably lose your data on reboot, since the default location is under /tmp and that gets cleaned.

env COMPRESSION=NONE HBASE_HOME=~/hbase-1.1.2 ~/opentsdb-2.1.1/create_table.sh
mkdir ~/opentsdb-tmp

Start Everything Up


# Run everything as a different user, NOT ROOT !!!!
~/hbase-1.1.2/bin/start-hbase.sh
~/opentsdb-2.1.1/build/tsdb tsd --port=4242 --staticroot=~/opentsdb-2.1.1/build/staticroot/ --cachedir=~/opentsdb-tmp --zkquorum=localhost:2181 --auto-metric --config=~/opentsdb-2.1.1/opentsdb.conf &
~/tcollector/startstop start --allowed-inactivity-time=3600 --backup-count=10 -v
~/grafana-2.6.0/bin/grafana-server -homepath=~/grafana-2.6.0/ -pidfile=~/grafana-2.6.0/grafana.pid &
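With everything up, a quick way to confirm that the TSD and Grafana are actually answering (on the ports configured below) is:

curl http://127.0.0.1:4242/api/version
curl -I http://127.0.0.1:3000/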

My opentsdb.conf looks like this

tsd.http.request.cors_domains = *
tsd.network.port = 4242
tsd.network.bind = 127.0.0.1

My grafana’s defaults.ini looks like this. No need to tinker with the MySQL DB itself: just create the database and grant privileges, and Grafana will take care of creating its tables.


app_mode = production

[paths]
data = data
logs = data/log

[server]
protocol = http
http_addr =
http_port = 3000
domain = serverstuff.info
enforce_domain = false
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
router_logging = false
static_root_path = public
enable_gzip = false
cert_file =
cert_key =

[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = password
ssl_mode = disable
path = grafana.db

[session]
provider = file
provider_config = sessions
cookie_name = grafana_sess
cookie_secure = false
session_life_time = 86400
gc_interval_time = 86400

[analytics]
reporting_enabled = true
google_analytics_ua_id =
google_tag_manager_id =

[security]
admin_user = admin
admin_password = admin
secret_key = keyhere
login_remember_days = 7
cookie_username = grafana_user
cookie_remember_name = grafana_remember
disable_gravatar = false
data_source_proxy_whitelist =

[users]
allow_sign_up = false
allow_org_create = false
auto_assign_org = true
auto_assign_org_role = Viewer
verify_email_enabled = false

[auth.anonymous]
enabled = false
org_name = Main Org.
org_role = Viewer

[auth.github]
enabled = false
allow_sign_up = false
client_id = some_id
client_secret = some_secret
scopes = user:email
auth_url = https://github.com/login/oauth/authorize
token_url = https://github.com/login/oauth/access_token
api_url = https://api.github.com/user
team_ids =
allowed_organizations =

[auth.google]
enabled = false
allow_sign_up = false
client_id = some_client_id
client_secret = some_client_secret
scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
auth_url = https://accounts.google.com/o/oauth2/auth
token_url = https://accounts.google.com/o/oauth2/token
api_url = https://www.googleapis.com/oauth2/v1/userinfo
allowed_domains =

[auth.basic]
enabled = true

[auth.proxy]
enabled = false
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true

[auth.ldap]
enabled = false
config_file = /etc/grafana/ldap.toml

[smtp]
enabled = false
host = localhost:25
user =
password =
cert_file =
key_file =
skip_verify = false
from_address = admin@grafana.localhost

[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html

[log]
mode = console, file
buffer_len = 10000
level = Info

[log.console]
level =
formatting = true

[log.file]
level =
log_rotate = true
max_lines = 1000000
max_lines_shift = 28
daily_rotate = true
max_days = 7

[event_publisher]
enabled = false
rabbitmq_url = amqp://localhost/
exchange = grafana_events

[dashboards.json]
enabled = false
path = /var/lib/grafana/dashboards

[quota]
enabled = false
org_user = 10
org_dashboard = 100
org_data_source = 10
org_api_key = 10
user_org = 10
global_user = -1
global_org = -1
global_dashboard = -1
global_api_key = -1
global_session = -1

Upgrading

For Grafana:


#wget new grafana
tar -xzf grafana-version.linux-x64.tar.gz
pkill -9 grafana-server
cp grafana-old/conf/defaults.ini grafana-new/conf/defaults.ini
~/grafana-new/bin/grafana-server -homepath=~/grafana-new/ -pidfile=~/grafana-new/grafana.pid &

Extend your lvm VolumeGroup

First, create your partition as you normally would; there’s no need to set up a filesystem on it.
In my case I’ll be extending my vg01-var logical volume.


# initialize the new partition as a physical volume
pvcreate /dev/sdb1
# add it to the existing volume group
vgextend vg01 /dev/sdb1
# grow the logical volume onto the new PV, then grow the filesystem
lvextend /dev/vg01/var /dev/sdb1
resize2fs /dev/vg01/var
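To confirm the result, the usual LVM status commands plus df should show the extra space:

pvs
vgs vg01
lvs vg01
df -h /var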

Additional info:
My sdb1 had already been LVMed into something by someone else, so I had to unmount it and clean up first:


umount /raid
# deactivate and remove the old volume group
vgchange -a n vg02
vgremove vg02

nfs export on centos7

So I wanted to export my /backup to some machines so I don’t have to scp stuff to it.

On the server:

yum install nfs-utils nfs-utils-lib
systemctl enable nfs-server.service
systemctl enable nfs-lock.service
systemctl enable nfs-rquotad.service
systemctl enable nfs-idmap.service
systemctl enable nfs-mountd.service
systemctl enable rpcbind.service
echo '/backup 192.168.1.1(rw,sync,no_root_squash) 192.168.1.3(rw,sync,no_root_squash)' >/etc/exports
systemctl start rpcbind.service
systemctl start nfs-server.service
systemctl start nfs-lock.service
systemctl start nfs-idmap.service

On the client:

yum install nfs-utils nfs-utils-lib
# mount -t nfs 192.168.1.4:/backup/ /backup/
echo '192.168.1.4:/backup/ /backup/ nfs rw,sync 0 0' >>/etc/fstab
mount /backup
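A quick sanity check from the client: showmount should list the export and df should show the mount:

showmount -e 192.168.1.4
df -h /backup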

Nested ESXi Virtualization

Basically, I had to test an ESXi upgrade procedure before putting it into production, and I didn’t want to mess up my working environment.
The following is done in an ESXi 5.5 SSH console:

cd /vmfs/volumes
# cd [your volume]/[your machine name]
vi [your machine name].vmx
# make sure you have enough RAM:
# find and replace memSize = "8192" with something that fits your needs (at least 2048, though)
# find and replace or add: numvcpus = "4" and cpuid.coresPerSocket = "2" to something that meets your demands
# set guestOS = "vmkernel5" here if you don't want to set it manually through the interface and you'll nest an ESXi 5 host
# then add the following lines to enable nested virtualization:
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"
cpuid.1.ecx = "---- ---- ---- ---- ---- ---- --H- ----"
hypervisor.cpuid.v0 = "FALSE"
vhv.enable = "TRUE"
sched.mem.maxmemctl = "0"
:wq
# look up your VM id, then reload its config
vim-cmd /vmsvc/getallvms | grep "[your machine name]"
vim-cmd /vmsvc/reload [id]

After doing this, go to the machine settings in the vSphere Client and set “Options” -> “General Options” -> “Guest Operating System” to “Other” -> “VMware ESXi 5.x”.
Also, be sure to have sized your VM disk appropriately if you want to run machines inside it, AND give it at least 2 cores!

Enjoy.

P.S.: https://communities.vmware.com/message/2120826

Simple off-site mysql and website backup

Here are two scripts I wrote to easily back up databases and some websites.

Since I use SELinux with a custom data dir, I needed to do this on my server:

yum install rssh
mkdir -p /backup/.ssh
cd /backup/.ssh
ssh-keygen -t rsa -f ./backup
cat backup.pub >authorized_keys
sed -i 's/#allowsftp/allowsftp/g' /etc/rssh.conf
# rssh-restricted user with /backup as its home directory
adduser -d /backup -s /usr/bin/rssh backup
# sshd is picky about ownership and permissions on these
chown -R backup: /backup/.ssh
chmod 700 /backup/.ssh
chmod 600 /backup/.ssh/authorized_keys
semanage fcontext -at user_home_dir_t /backup/
semanage fcontext -at ssh_home_t /backup/.ssh/
semanage fcontext -at ssh_home_t /backup/.ssh/authorized_keys
restorecon -Rv /backup
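You can verify that the contexts actually stuck with ls -Z:

ls -Zd /backup /backup/.ssh
ls -Z /backup/.ssh/authorized_keys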

You just need to copy the /backup/.ssh/backup private key file to the servers you want to back up from (make sure it’s chmod 0600 on the clients too) so they can use it to connect to this server.

I’ve put the following script on my MySQL server (note the quoted heredoc delimiter, so the variables end up in the script instead of being expanded by the current shell):

mkdir -p /root/scripts
cat >/root/scripts/backup.sh <<'_EOF_'
#!/bin/bash

USER="root"
PASSWORD='l33tP4ssw0rd'
HOST="localhost"
OUTPUT="/backup"
PORT=5522
KEY="/root/.ssh/backup"
DEST="backup@192.168.1.1:"

mkdir -p "${OUTPUT}"
databases=$(mysql --host=${HOST} --user=${USER} --password=${PASSWORD} --skip-column-names -s -N -e "SHOW DATABASES;")

for db in $databases; do
        # skip the internal schemas and anything starting with _
        if [[ "$db" == "information_schema" ]] ; then
                continue
        fi
        if [[ "$db" == "performance_schema" ]] ; then
                continue
        fi
        if [[ "$db" != _* ]] ; then
                file=sql_$db.$(date +%Y%m%d_%s).sql.gz
                mysqldump --force --opt --host=${HOST} --user=${USER} --password=${PASSWORD} --databases $db | gzip > ${OUTPUT}/${file}
                scp -oPort=${PORT} -i ${KEY} ${OUTPUT}/${file} ${DEST}
                # remove the local copy once it has shipped off-site
                rm -f "${OUTPUT}/${file}"
        fi
done
_EOF_

chmod +x /root/scripts/backup.sh
echo '0 2 * * * root nice /root/scripts/backup.sh >/dev/null 2>&1' >> /etc/crontab

I’ve put the following script on my web server; feel free to adapt it.

mkdir -p /root/scripts
cat >/root/scripts/backup.sh <<'_EOF_'
#!/bin/bash

OUTPUT="/backup"
SOURCE="/var/www/"
PORT=5522
KEY="/root/.ssh/backup"
DEST="backup@192.168.1.1:"

mkdir -p "${OUTPUT}"

# one tarball per site dir, skipping cgi-bin and html
for site in $(ls "${SOURCE}" | grep -Ev '(cgi-bin|html)')
do
        file=site_$site.$(date +%Y%m%d_%s).tar.gz
        tar -czf ${OUTPUT}/${file} -C /var/www ${site}
        scp -oPort=${PORT} -i ${KEY} ${OUTPUT}/${file} ${DEST}
        # remove the local copy once it has shipped off-site
        rm -f "${OUTPUT}/${file}"
done
_EOF_

chmod +x /root/scripts/backup.sh
echo '0 2 * * * root nice /root/scripts/backup.sh >/dev/null 2>&1' >> /etc/crontab

You should probably make an initial scp/ssh connection from each client to the backup server first, so you can accept its host key before the cron jobs run.
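For completeness, restoring from one of these dumps is just the reverse. A hypothetical example (the database and file names are placeholders):

# restore a database dump
zcat sql_mydb.20160206_1454745600.sql.gz | mysql -u root -p
# restore a website archive
tar -xzf site_mysite.20160206_1454745600.tar.gz -C /var/www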