NSIS ReadCustomerData without invalidating SignTool Digital Signature

Introduction

Are you using NSIS (Nullsoft Scriptable Install System) to create Windows installer executables? Do you send custom versions of your .exe installer file to each user (e.g. embedded user token or username/password)? Would you like to have NSIS build one master installer file and easily embed the user data into the installer without having to rebuild or re-sign the installer? Then this article is for you!

I will demonstrate how to implement the NSIS ReadCustomerData function to customize your master installer file for each user.

I will share my custom version of the AppendPayload program originally written by Aymeric Barthe and shared in his blog post Changing a Signed Executable Without Altering Windows Digital Signature. I will also demonstrate Linux and macOS (OSX) compilation/usage.

Our users can log in to our website to download a custom installer that includes their account details. The Linux program reads the original signed NSIS installer, reads a data file containing the user data, and writes a new installer file with the embedded customer data. Windows still recognizes the original digital signature because we only append arbitrary data and never modify the code.

Background

We provide a Windows application for our users. When I researched moving this particular Windows application to NSIS, I was very interested in the NSIS ReadCustomerData function, which reads arbitrary data appended to the end of the executable (.exe) file. This would allow us to customize our signed installer file with user data (e.g. user token or username/password) without generating and signing a separate installer for each user.

NSIS ReadCustomerData

The NSIS ReadCustomerData documentation was a little slim, so I will discuss a few nontrivial steps.

  • Decide which “ReadCustomerData” function you will use. Both functions should provide the same results, though the second function seems faster. Save the function to a file named “ReadCustomerData.nsi”.
  • Add the following code near the top of your primary .nsi file to import the function you saved in the previous step.
; Include ReadCustomerData function
!include ReadCustomerData.nsi
  • Add the following code to your main Section to verify that you can (or cannot) read your custom data.
; Read Customer Data (e.g. "username;password")
Push "CUSTDATA:"
Call ReadCustomerData
Pop $1
StrCmp $1 "" 0 +3
MessageBox MB_OK "No data found"
Abort
MessageBox MB_OK "Customer data: [$1]"
  • Create (and sign, if applicable) your NSIS installer executable file.

Now you are ready to append data to the end of your installer! When you run your installer, it will display a message box that says “No data found”. Continue reading to learn how to add a custom payload.

If you have a complex payload with multiple values, I highly recommend that you go back to the NSIS ReadCustomerData function page and use their functions to parse the data.

Embed User Data Method #1 – INVALID DIGITAL SIGNATURE

I initially attempted the following method as discussed in this Stack Overflow article. While NSIS does seem to be able to read the custom user data appended to the end of the .exe file using this method, Windows 10 no longer recognizes the digital signature of the NSIS installer file.

REM DO NOT USE THIS METHOD IF YOU SIGNED YOUR EXECUTABLE!
copy setup.exe user123.exe
echo "USERDATA:username;password" >> user123.exe

To clarify, the Digital Signature tab on the file properties page is no longer present when data is appended to a signed executable in this way. Windows 10 does not throw an error warning that the signature is invalid. Rather, Windows treats the executable as unsigned.
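You can check a file's signature state from a Windows command prompt with signtool (included in the Windows SDK):

REM Verify the Authenticode signature using the default policy
signtool verify /pa user123.exe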

Embed User Data Method #2 – VALID DIGITAL SIGNATURE

I was able to successfully embed custom user data into the signed NSIS installer .exe file by using the method described in Aymeric Barthe's blog post Changing a Signed Executable Without Altering Windows Digital Signature.

We compile and sign our installers on Windows, but we would like to customize and distribute the installer from our Linux servers (e.g. generate custom installer on the fly when requested by user). I was able to make a few minor changes to Aymeric’s code so that it would compile and run correctly on Linux and macOS (OSX).

Download my port to Linux/macOS in my “Signed NSIS exe Append Payload” GitHub repository.
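For reference, the end-to-end flow on Linux looks roughly like the sketch below. The source/binary/argument names are hypothetical (check the repository README for the actual build and usage details), but the "CUSTDATA:" prefix matches the marker pushed to ReadCustomerData in the NSIS snippet above.

# Compile the port (file and binary names are hypothetical)
g++ -o appendpayload appendpayload.cpp

# Build a payload containing the magic prefix expected by ReadCustomerData
printf 'CUSTDATA:username;password' > user123.dat

# Write a new installer with the payload appended without invalidating the signature
./appendpayload setup-signed.exe user123.dat user123.exe

# Optional: confirm the signature still validates (requires osslsigncode)
osslsigncode verify user123.exe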

Suppress GnuPG entries from systemd and gpg-agent flooding Debian daemon.log during SSH connections

Update for Debian 11 (bullseye)

I had to apply further filtering after upgrading to Debian 11, but the approach was the same as described for Debian 10.

I noticed excessive disk I/O write operations (10-100x more IOPS than usual) on some servers after the upgrade to Debian 11, which turned out to be systemd-journald logging all of these SSH messages to disk (including the filtered messages). It seems all SSH messages are sent to journald and then forwarded to syslog, so the rsyslog filtering described on this page does not apply to the journald logs. I was able to resolve this separate disk I/O issue by forcing journald to write its logs to memory rather than to disk. Be aware that journald logs stored in memory are lost after a reboot.

If you would like to configure Debian 11 (bullseye) journald to save journals in memory instead of to disk, add a “Storage=volatile” setting to “/etc/systemd/journald.conf”, then restart journald via “systemctl restart systemd-journald”.
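For example:

# /etc/systemd/journald.conf
[Journal]
Storage=volatile

Then apply the change:

systemctl restart systemd-journald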

Debian 10 (buster)

Ever since upgrading to Debian 9 (Stretch), certain logs (e.g. /var/log/daemon.log and /var/log/syslog) on our very active SSH hosts have been flooded with GnuPG entries generated by systemd and gpg-agent every time a user connects to one of our hosts via SSH. These lines were generating several GB of daily logs that were not necessary for our environment.

I implemented an rsyslog filter that suppresses these “Listening on GnuPG” and “Closed GnuPG” messages after considering solutions specific to this issue discussed here and here as well as solutions related to excessive logging discussed here and here.

Create a new rsyslog conf file for this filter:

vi /etc/rsyslog.d/ignore-systemd-gpg.conf

And populate the conf file:

# Disable the following messages that occur before/after every SSH login 
#
# Dec 10 02:31:39 data-e1-prd-001 systemd[991]: Listening on GnuPG network certificate management daemon.
# Dec 10 02:31:39 data-e1-prd-001 systemd[991]: Listening on GnuPG cryptographic agent and passphrase cache.
# Dec 10 02:31:39 data-e1-prd-001 systemd[991]: Listening on GnuPG cryptographic agent (access for web browsers).
# Dec 10 02:31:39 data-e1-prd-001 systemd[991]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
# Dec 10 02:31:39 data-e1-prd-001 systemd[991]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
# Dec 10 02:31:40 data-e1-prd-001 systemd[991]: Closed GnuPG cryptographic agent and passphrase cache.
# Dec 10 02:31:40 data-e1-prd-001 systemd[991]: Closed GnuPG cryptographic agent (access for web browsers).
# Dec 10 02:31:40 data-e1-prd-001 systemd[991]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
# Dec 10 02:31:40 data-e1-prd-001 systemd[991]: Closed GnuPG cryptographic agent (ssh-agent emulation).
# Dec 10 02:31:40 data-e1-prd-001 systemd[991]: Closed GnuPG network certificate management daemon.

if $programname == "systemd" and ($msg contains "Listening on GnuPG" or $msg contains "Closed GnuPG") then stop

Then restart the rsyslogd service.

/etc/init.d/rsyslog restart

Your daemon.log should no longer contain the extra GnuPG related entries!

tail -f /var/log/daemon.log
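If you want to verify the filter without opening a new SSH session, one option is to inject a matching message with logger (the -t flag sets the program name that the rsyslog rule matches on); the message should NOT appear in daemon.log:

logger -p daemon.info -t systemd "Listening on GnuPG network certificate management daemon."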

Here is another example that suppresses messages related to “User Slice”, “Reached target”, and “Stopped target”.

First, create a new rsyslog conf file for this filter:

vi /etc/rsyslog.d/ignore-systemd-session-slice.conf

And populate the file with content similar to the following:

# SYSTEM IS CREATING THESE LOG ENTRIES FOR EACH SSH CONNECTION
# Dec 10 03:00:56 HOST systemd[1]: Created slice User Slice of USERNAME.
# Dec 10 03:00:56 HOST systemd[1]: Starting User Manager for UID 2556...
# Dec 10 03:00:56 HOST systemd[1]: Started Session 4160929 of user USERNAME.
# Dec 10 03:00:56 HOST systemd[1190]: Reached target Timers.
# Dec 10 03:00:56 HOST systemd[1190]: Reached target Paths.
# Dec 10 03:00:56 HOST systemd[1190]: Reached target Sockets.
# Dec 10 03:00:56 HOST systemd[1190]: Reached target Basic System.
# Dec 10 03:00:56 HOST systemd[1190]: Reached target Default.
# Dec 10 03:00:56 HOST systemd[1190]: Startup finished in 16ms.
# Dec 10 03:00:56 HOST systemd[1]: Started User Manager for UID 2556.
# Dec 10 03:00:57 HOST systemd[1]: Stopping User Manager for UID 2556...
# Dec 10 03:00:57 HOST systemd[1190]: Stopped target Default.
# Dec 10 03:00:57 HOST systemd[1190]: Stopped target Basic System.
# Dec 10 03:00:57 HOST systemd[1190]: Stopped target Timers.
# Dec 10 03:00:57 HOST systemd[1190]: Stopped target Sockets.
# Dec 10 03:00:57 HOST systemd[1190]: Stopped target Paths.
# Dec 10 03:00:57 HOST systemd[1190]: Reached target Shutdown.
# Dec 10 03:00:57 HOST systemd[1190]: Starting Exit the Session...
# Dec 10 03:00:57 HOST systemd[1190]: Received SIGRTMIN+24 from PID 1223 (kill).
# Dec 10 03:00:57 HOST systemd[1]: user@2556.service: Killing process 1223 (kill) with signal SIGKILL.
# Dec 10 03:00:57 HOST systemd[1]: Stopped User Manager for UID 2556.
# Dec 10 03:00:57 HOST systemd[1]: Removed slice User Slice of USERNAME.

# SUPPRESS ALL LOG ENTRIES ABOVE, EXCEPT THE FOLLOWING
# Dec 10 03:00:56 HOST systemd[1]: Starting User Manager for UID 2556...
# Dec 10 03:00:56 HOST systemd[1190]: Startup finished in 16ms.
# Dec 10 03:00:57 HOST systemd[1190]: Received SIGRTMIN+24 from PID 1223 (kill).
# Dec 10 03:00:57 HOST systemd[1]: user@2556.service: Killing process 1223 (kill) with signal SIGKILL.
# Dec 10 03:00:57 HOST systemd[1]: Stopped User Manager for UID 2556.

if $programname == "systemd" and ($msg contains "Created slice User Slice of " or $msg contains "Started Session " or $msg contains "Reached target " or $msg contains "Started User Manager for UID " or $msg contains "Stopping User Manager for UID " or $msg contains "Stopped target " or $msg contains "Reached target Shutdown" or $msg contains "Starting Exit the Session." or $msg contains "Removed slice User Slice of ") then stop

Then restart the rsyslogd service.

/etc/init.d/rsyslog restart

If you would like to add or remove additional lines generated by the “systemd” service from your daemon.log file, modify the filter in the conf file by adding or removing rule sections.

Here is another example that suppresses “Deprecated option” messages generated by the “sshd” service.

Create a new conf file:

vi /etc/rsyslog.d/ignore-sshd-deprecated-option.conf

And populate it with a sample of the output we intend to suppress, along with the associated rule:

# Dec 10 03:42:53 HOST sshd[16069]: rexec line 21: Deprecated option KeyRegenerationInterval
# Dec 10 03:42:53 HOST sshd[16069]: rexec line 22: Deprecated option ServerKeyBits
# Dec 10 03:42:53 HOST sshd[16069]: rexec line 34: Deprecated option RSAAuthentication
# Dec 10 03:42:53 HOST sshd[16069]: rexec line 41: Deprecated option RhostsRSAAuthentication
# Dec 10 03:42:53 HOST sshd[16069]: reprocess config line 34: Deprecated option RSAAuthentication
# Dec 10 03:42:53 HOST sshd[16069]: reprocess config line 41: Deprecated option RhostsRSAAuthentication

if $programname == "sshd" and ($msg contains "Deprecated option ") then stop

Then restart the rsyslogd service.

/etc/init.d/rsyslog restart

AWS EC2 Secondary Private IPs on Debian 9 (Stretch)

I use this script to automatically bind AWS EC2 “Secondary Private IP” addresses to my Debian 9 (Stretch) instance. I set a cron job to run this script several times per hour so that new IP addresses are automatically added to the instance.

Prerequisites

  1. The EC2 instance is running Debian 9 (Stretch). The script may also work with Ubuntu 17.04 or the upcoming Ubuntu 17.10.
  2. The EC2 instance has a single interface (eth0) with one or more “Secondary Private IP” addresses configured in the AWS EC2 console.

Save the following script (the cron job below assumes /usr/local/sbin/aws-ips.sh):
#!/bin/bash
# Automatically Bind AWS EC2 Secondary Private IPs to this instance
# Source: Jason Klein
# https://jrklein.com/2017/08/19/aws-ec2-secondary-private-ips-on-debian-9-squeeze/
MAC_ADDR=$(/sbin/ifconfig eth0 | sed -n 's/.*ether \([a-f0-9:]*\).*/\1/p')
IP=($(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC_ADDR/local-ipv4s))
DATE=`date "+%Y/%m/%d %H:%M:%S"`

echo "$DATE MAC $MAC_ADDR"

for ip in ${IP[@]:1}; do
  ipaddr=`ip addr show dev eth0 | grep "inet $ip"`
  if [ -z "$ipaddr" ]; then
    echo "$DATE IP $ip ADDING"
    ip addr add dev eth0 $ip/20
  else
    echo "$DATE IP $ip OK"
  fi
done
echo "$DATE DONE"

How does this script work?

  1. Parse the Ethernet MAC address from the output of “ifconfig eth0”.
  2. Request the list of local IPv4 addresses configured for this interface in the AWS console.
  3. Loop through the IP addresses, ignoring the first (primary) address since it is automatically bound via DHCP.
  4. If an IP address is not yet bound to the eth0 interface, bind it.
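For reference, the metadata endpoint returns one address per line with the primary address first, which is why the script skips the first array element. Sample output (addresses match the log example below, plus a made-up primary address):

curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/f3:3d:00:00:b3:ef/local-ipv4s
172.31.2.1
172.31.2.2
172.31.2.3
172.31.2.4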

Sample Cron Job

Save the following to a new file in the /etc/cron.d/ directory. This will bind secondary IP addresses 15 seconds after a reboot, and check for any new secondary IP addresses every 15 minutes. Adjust path to your script and path to your log file as necessary.

# Automatically Bind AWS EC2 Secondary Private IPs to this instance
@reboot root sleep 15 && /usr/local/sbin/aws-ips.sh >> /var/log/cron-aws-ips.log 2>&1
*/15 * * * * root /usr/local/sbin/aws-ips.sh >> /var/log/cron-aws-ips.log 2>&1
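Remember to make the script executable, or the cron job will fail:

chmod 755 /usr/local/sbin/aws-ips.sh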

Sample Log Output

This shows the IP addresses were successfully added during boot, and checked during the 15 minute cron job interval.

2017/08/19 23:34:21 MAC f3:3d:00:00:b3:ef
2017/08/19 23:34:21 IP 172.31.2.2 ADDING
2017/08/19 23:34:21 IP 172.31.2.3 ADDING
2017/08/19 23:34:21 IP 172.31.2.4 ADDING
2017/08/19 23:34:21 DONE
2017/08/19 23:45:01 MAC f3:3d:00:00:b3:ef
2017/08/19 23:45:01 IP 172.31.2.2 OK
2017/08/19 23:45:01 IP 172.31.2.3 OK
2017/08/19 23:45:01 IP 172.31.2.4 OK
2017/08/19 23:45:01 DONE

Acknowledgements

Based on this article posted by Jurian in 2012. His solution appeared to target Debian 7 (Wheezy) or Debian 8 (Jessie); I made minor changes so it correctly parses the new “ifconfig” output in Debian 9 (Stretch), which changed due to major updates in the “net-tools” package. I also added a check for existing bindings so that I could safely run the script from a cron job.

MySQL Master/Slave Replication on AWS EC2/EBS

This very brief guide assumes you have already set up a Master/Slave relationship between two MySQL servers hosted on EC2 and are looking for an efficient way to clone large Master databases to the Slave server.

I use this method to clone a 150GB EBS volume containing Master databases to a Slave in about an hour. Most of the hour is waiting for EBS to complete the snapshot of the Master volume.

Requirements and Assumptions

  • Both servers must be hosted on EC2
  • Ideally, both servers run the same OS and OS version
  • Both servers MUST run an identical version of MySQL
  • Both servers MUST be 32-bit MySQL or both 64-bit MySQL. You cannot use this method to move data from 32-bit MySQL to 64-bit MySQL.
  • MySQL data MUST be hosted on a dedicated EBS volume (e.g. /data)
  • The MySQL master must be able to shut down for several seconds/minutes during this process
  • No MySQL slaves currently exist, or you are reloading ALL of your slaves. You MUST skip the RESET MASTER step if you need to maintain existing slave relationships.

Let’s begin!

MASTER SERVER

We will open a mysql console on the master server, grant replication rights to the new slave server IP, perform a global lock, reset our master binary logs (optional), make note of our master binary log file position, and stop our MySQL server daemon.

mysql
GRANT REPLICATION SLAVE ON *.* to 'repl'@'172.x.x.x' IDENTIFIED BY 'repl-password-here';

# Close all open tables and lock all tables for all databases
flush tables with read lock;

# Delete all binary log files, reset binary log index, create new binary log file.
# WARNING: Skip this command if you need to maintain existing slave relationships!
# WARNING: Only reset master if you are reloading ALL slaves.
reset master;

# Make note of the current master file name and file position.
show master status;
 +------------------+----------+--------------+------------------+
 | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
 +------------------+----------+--------------+------------------+
 | mysql-bin.000001 | 107      |              |                  |
 +------------------+----------+--------------+------------------+

# Stop your MySQL server
/etc/init.d/mysql stop

At this point, you need to freeze your volume (example for XFS volume below) or temporarily shutdown your EC2 instance.

xfs_freeze -f /myxfs
# Take snapshot of file system 
xfs_freeze -u /myxfs

Once your volume has been frozen (or your EC2 instance has been shutdown), login to AWS console and take a snapshot of the dedicated volume that contains your MySQL data. My understanding is that it is safe to unfreeze the volume (or start your EC2 instance) as soon as AWS begins performing the EBS snapshot. You should NOT need to wait for the EBS snapshot to complete (e.g. an hour for a 150GB volume).
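If you prefer to script this step, here is a minimal sketch using the AWS CLI (assuming the CLI is installed and configured on the master; the volume ID is a placeholder). The create-snapshot call returns as soon as the snapshot has been started:

xfs_freeze -f /data
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "MySQL master /data"
xfs_freeze -u /data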

Once your EBS snapshot begins, unfreeze your volume and restart MySQL (or start your EC2 instance).

# Start MySQL server daemon
/etc/init.d/mysql start

# Open MySQL console
mysql

# Release global lock on all of your tables, 
# so writes can be performed again.
unlock tables;

PRO TIP: Since we store our MySQL data on a dedicated XFS volume, we use “ec2-consistent-snapshot” to freeze the XFS filesystem, begin an EBS snapshot, and unfreeze the XFS filesystem in just a few seconds. We use this to snapshot our servers (and have successfully tested/restored the snapshots), and a custom script that runs alongside it automatically purges old snapshots. These scripts require creating AWS credentials and configuring IAM permissions so the EC2 instance can request the EBS snapshot and view/delete old EBS snapshots. If you only need to do this once in a while, it is easier to create the EBS snapshot manually from the AWS console.

Your MASTER SERVER should be fully operational at this point. You must wait for AWS to finish writing the EBS snapshot before proceeding.

SLAVE SERVER

Monitor EBS snapshot progress in your AWS console. Once the EBS snapshot is complete, create a new EBS volume and select the snapshot as the source for that new volume.

Make note of the old volume device name in AWS console AND in your EC2 server console. In our case, AWS shows our “/data” volume as /dev/sdb but EC2 shows the device as /dev/xvdb1.

Unmount the existing “/data” volume in the slave EC2 server console.

umount /data

Force detach the existing “/data” volume for the slave EC2 server in your AWS console.

WARNING: Make sure you are detaching the correct volume for the correct server! A forced detach is no different than pulling a disk out of a server and will likely lead to data loss/corruption if you detach a mounted disk!

Attach the new “/data” volume to your slave EC2 server in your AWS console. Specify the volume device name shown for the old/existing volume (e.g. “/dev/sdb”).

Mount the new “/data” volume in your slave EC2 server console. If the server cannot find the volume, the volume device name may have changed. Determine new volume device name, update /etc/fstab, and try to mount again.
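For example, to confirm the device name the OS assigned to the new volume:

# List block devices and their mount points
lsblk
# Optional: print filesystem UUIDs if you prefer UUID-based fstab entries
blkid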

mount /data

At this point, an exact copy of the “/data” volume from our master MySQL server is now mounted on our slave MySQL server.

PRO TIP: AWS will restore the entire volume from a snapshot in just a few seconds, thanks to their lazy loading technique. MySQL performance will be slow until the data has been loaded. You can pre-load all EBS volumes if you are concerned about performance. Example:

# Determine device name (e.g. /dev/xvdX) for your "/data" volume
lsblk
# Reading the entire volume forces EBS to load every block from the snapshot
sudo dd if=/dev/xvdX of=/dev/null bs=1M

If you choose to pre-load your data, you can proceed with the steps below while the “dd” command above is running. We do not pre-load data for slaves that are only used to run mysqldump backups.

A few more commands, and our slave MySQL server should begin syncing to the master!

# Start the MySQL server daemon
/etc/init.d/mysql start

# Open the MySQL console
mysql

# Configure your Master settings
# NOTES:
# * Must input correct IP, username, password
# * Must input LOG FILE and LOG POS shown before starting an EBS snapshot
# * If using SSL, generate SSL certificates in advance and input paths.

# Example WITHOUT SSL
CHANGE MASTER TO
 MASTER_HOST='172.x.x.x',
 MASTER_USER='repl',
 MASTER_PASSWORD='repl-password-here',
 MASTER_LOG_FILE = 'mysql-bin.000001',
 MASTER_LOG_POS = 107;

# Example WITH SSL
CHANGE MASTER TO
    MASTER_HOST='172.x.x.x',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl-password-here',
    MASTER_SSL=1,
    MASTER_SSL_CA = '/etc/ssl/certs/ssl-cert-FQDN-bundle.crt',
    MASTER_SSL_CAPATH = '/etc/ssl/certs/',
    MASTER_SSL_CERT = '/etc/ssl/certs/ssl-cert-FQDN.crt',
    MASTER_SSL_KEY = '/etc/ssl/private/ssl-cert-FQDN.key',
    MASTER_SSL_CIPHER = 'DHE-RSA-AES256-SHA',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 107;

# Start slave replication
START SLAVE;

If all goes well, your slave should begin replicating data from the master! Check slave replication status.

show slave status \G

Slave status should show “Seconds_Behind_Master”. If it is 0, replication is current. If it is greater than 0, replication is working, and the value should slowly decrease each time you show the slave status.

If “Seconds_Behind_Master” is NULL, replication is not working and you likely need to resolve errors. Here are two common troubleshooting tips I follow.

TROUBLESHOOTING

I do not provide detailed master/slave troubleshooting here, but these are a few basic checks.

Confirm the slave can connect to the master. The following command is helpful for troubleshooting SSL, but it should connect to the server and output your SSL cipher (e.g. NONE) even if you are not using SSL. It is useful for identifying and resolving basic network-level or permission issues, including AWS Security Group problems or MySQL replication username/password problems.

mysql -h 172.x.x.x -u repl -p -e "SHOW STATUS LIKE 'Ssl_cipher';"

Resolve errors regarding duplicate records. With the method above, I occasionally encounter an issue where the slave will try to run several queries that were already run on the master prior to the snapshot. The “show slave status” error will tell you that a duplicate key exists, or a duplicate record cannot be inserted.

To resolve this issue, I run the following commands to stop the slave, skip ONE replication query, and start the slave.

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
SHOW SLAVE STATUS \G

This will normally resolve the duplicate record error, though it may immediately surface a NEW duplicate record error. Our database receives several writes per second even when idle, so I usually have to repeat this 2-5 times. I would expect that performing the global read lock and then showing the master status would give an exact binary log position and avoid this duplicate record issue. If you know why this is happening, please leave a comment below!

I cannot assist you with replication errors! MySQL replication is usually straightforward, but many different issues can lead to replication errors. Please perform your own research if you have replication errors. If you would like to share your replication error and your solution in a comment below, I will approve your comment for others to see.

Setup a Personal Server on Amazon Web Services

I have been managing Linux virtual servers for well over 10 years. I visited the AWS offering many times before taking the plunge. Their huge variety of server options, storage options, bandwidth options, etc. made their offering very confusing. Since I did not understand which options I would need, I could not determine what costs to expect. Let’s review the total cost associated with a small Debian server on AWS and walk through setup of that server.

Costs

Server (EC2 t2.micro w/ 1GB ram):  $162 ($109 initial + $1.46/mo)
Storage (EBS SSD 30GB): $108 ($0.10/GB/MO)
Outgoing Bandwidth (540GB total, 15GB/mo): $49 ($0.09/GB)
Incoming Bandwidth: FREE
Static IP (Elastic IP): FREE
TOTAL 3-YEAR COST:  $319  ($109 initial + $5.83/month)

AWS offers a free tier that provides all new customers with the server/storage/bandwidth configuration shown above for one (1) full year.

Personal Setup Instructions

  1. Create a new AWS account and login to the account.
  2. Drill-down to the EC2 services control panel.
  3. Click “Launch Instance”
    1. Choose an Amazon Machine Image (AMI) – This is a prebuilt OS image. I typically go to the “Community AMI” page, search for “Debian amd64 hvm”, and click “Select” next to the most recent AMI listed.
    2. Choose an Instance Type – These are your server specs. Let’s choose the smallest (cheapest) instance type, currently “t2.micro” with 1 CPU and 1GB of RAM. Click Next.
    3. Configure Instance Details – “Enable Termination Protection” should be enabled, to protect your instance from accidental “Termination” (deletion). The rest of the default settings are fine for now. Click Next.
    4. Add Storage – Configure your server disk size here. By default, disk size currently defaults to 8GB on a General Purpose (SSD) drive. Increase the disk size if you’d like, otherwise accept defaults and click Next.
    5. Tag Instance – We will not create any tags. If you use AWS for multiple customers or projects, you can use tags to classify resources by customer, by project, etc. which can be helpful for billing and overall management.  Click Next.
    6. Configure Security Group – This page allows you to create basic firewall rules. By default, you will create a new security group allowing SSH access from ANY IP. Accept the default for now and click Next. You can easily edit your Security Groups (firewall rules) later.
    7. You have an opportunity to review your settings. Click Launch.
    8. Key Pair – Create a new SSH key pair and download the keys. You will use this SSH key pair (instead of an SSH password) to login to your AWS server.  Click Next.
    9. Congrats!  AWS is building your new server!  The server should be ready for use in less than a minute.
  4. Go back to the AWS console, drill-down to EC2, and click on “Instances”. Your new server should be listed. Click on your new server to view server details, including “Public IP”
  5. SSH into your new server!
    1. GUI – If you are using a GUI such as PuTTY, the SSH hostname will be the “Public IP” (e.g. 10.9.8.7) shown in your server details and the username for your Debian server should be “admin”.  Be sure to tell the GUI to login using your private key.
    2. CLI – Assuming “Public IP” is “10.9.8.7” and you saved your SSH key to “private.key”, you would run the following command:  “ssh -i private.key admin@10.9.8.7”
  6. Update your Debian OS (e.g. “sudo apt-get update” and “sudo apt-get upgrade”)
  7. Oops! The OS cannot access the internet! You need to allow outbound traffic in your Security Group. Let’s edit your outbound firewall rules…
    1. Go back to the AWS console, drill-down to EC2, and choose “Security Groups”
    2. Click on your existing Security Group in the upper window pane, click on the “Outbound” tab in the lower window pane, and Add Rule to allow “All Traffic” to “Anywhere”. Click Save. Changes should take effect almost instantly (less than 5-10 seconds).
  8. Try to update your Debian OS again.
  9. You should be good to go at this point!

Reserved Instances

After your free year of service has ended, you will need to decide which instance size you plan to use (e.g. “t2.micro” or “t2.small”) and purchase a “Reserved Instance” (RI) for that instance size to help keep your costs down. You are essentially pre-paying part of your hourly fee for 1 or 3 years in exchange for roughly a 30% or 50% discount, respectively. You do not link an RI to a specific instance. As long as you have an instance of the matching type running in the same “Availability Zone” (AZ) as the RI, you will automatically receive the discounted hourly rate.

Amazon allows you to resell the unused months of your RI in their marketplace and recoup nearly all of the unused value (e.g. pay $109 for 36-mo “t2.micro” RI, receive around $50 if you sell 18-mo “t2.micro” RI on the market).