Create the IAM role and create access / secret keys

On the Unix host, download and install the CloudWatch agent:

--2019-10-16 13:50:07--
Resolving host... connecting on port 443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 58801088 (56M) [application/octet-stream]
Saving to: ‘amazon-cloudwatch-agent.deb’

amazon-cloudwatch-agent.deb  100%[========================================>]  56.08M  6.20MB/s  in 11s

2019-10-16 13:50:19 (5.27 MB/s) - ‘amazon-cloudwatch-agent.deb’ saved [58801088/58801088]

Install the agent:

sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
[sudo] password for etangle:
Selecting previously unselected package amazon-cloudwatch-agent.
(Reading database ... 229872 files and directories currently installed.)
Preparing to unpack ./amazon-cloudwatch-agent.deb ...
create group cwagent, result: 0
create user cwagent, result: 0
Unpacking amazon-cloudwatch-agent (1.229438.0-1) ...
Setting up amazon-cloudwatch-agent (1.229438.0-1) ...
Processing triggers for ureadahead (0.100.0-21) ...

Create a credentials file wherever you like; I created mine in my home directory as /home/etangle/credentials:

region = ap-southeast-2
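The full file can be sketched as below with a heredoc. The AmazonCloudWatchAgent profile name is the agent's default for on-premises hosts, and the key values are placeholders; substitute the access/secret keys created for the IAM user in the first step.

```shell
#!/bin/sh
# Sketch of the shared credentials file. The profile name is the agent's
# default; the access/secret key values are placeholders only.
CRED_FILE="$HOME/credentials"   # I used /home/etangle/credentials
cat > "$CRED_FILE" <<'EOF'
[AmazonCloudWatchAgent]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = wJalrEXAMPLESECRETKEY
region = ap-southeast-2
EOF
chmod 600 "$CRED_FILE"   # keep the secret readable only by this user
```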

Update common-config.toml for credentials file location:

sudo vim /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml

[credentials]
  shared_credential_file = "/home/etangle/credentials"

Create the config with the wizard, and store it in the SSM Parameter Store under the name “AmazonCloudWatch-linux”:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
= Welcome to the AWS CloudWatch Agent Configuration Manager =
On which OS are you planning to use the agent?
1. linux
2. windows
default choice: [1]:

Trying to fetch the default region based on ec2 metadata...
Are you using EC2 or On-Premises hosts?
1. EC2
2. On-Premises
default choice: [2]:
Please make sure the credentials and region set correctly on your hosts.
Refer to
Which user are you planning to run the agent?
1. root
2. cwagent
3. others
default choice: [1]:
Do you want to turn on StatsD daemon?
1. yes
2. no
default choice: [1]:
Do you want to monitor metrics from CollectD?
1. yes
2. no
default choice: [1]:
Do you want to monitor any host metrics? e.g. CPU, memory, etc.
1. yes
2. no
default choice: [1]:
Do you have any existing CloudWatch Log Agent configuration file to import for migration?
1. yes
2. no
default choice: [2]:
Do you want to monitor any log files?
1. yes
2. no
default choice: [1]:
Log file path:
Log group name:
default choice: [syslog]

Log stream name:
default choice: [{hostname}]

Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]:
Saved config file to /opt/aws/amazon-cloudwatch-agent/bin/config.json successfully.
Current config as follows:
{
  "agent": {
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/syslog",
            "log_group_name": "syslog",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
Please check the above content of the config.
The config file is also located at /opt/aws/amazon-cloudwatch-agent/bin/config.json.
Edit it manually if needed.
Do you want to store the config in the SSM parameter store?
1. yes
2. no
default choice: [1]:

What parameter store name do you want to use to store your config? (Use 'AmazonCloudWatch-' prefix if you use our managed AWS policy)
default choice: [AmazonCloudWatch-linux]

Which region do you want to store the config in the parameter store?
default choice: [us-east-1]
Please provide credentials to upload the json config file to parameter store.
AWS Access Key:
AWS Secret Key:
Successfully put config to parameter store AmazonCloudWatch-linux.
Program exits now.

Import the config, and start the agent

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c ssm:AmazonCloudWatch-linux -s
/opt/aws/amazon-cloudwatch-agent/bin/config-downloader --output-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --download-source ssm:AmazonCloudWatch-linux --mode onPrem --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
Got Home directory: /root
I! Set home dir Linux: /root
I! SDKRegionWithCredsMap region: ap-southeast-2
Region: ap-southeast-2
credsConfig: map[shared_credential_file:/home/etangle/credentials]
Successfully fetched the config and saved in /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/ssm_AmazonCloudWatch-linux.tmp
Start configuration validation...
/opt/aws/amazon-cloudwatch-agent/bin/config-translator --input /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json --input-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --output /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml --mode onPrem --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
2019/10/16 14:00:57 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/ssm_AmazonCloudWatch-linux.tmp ...
Valid Json input schema.
I! Detecting runasuser...
Got Home directory: /root
I! Set home dir Linux: /root
I! SDKRegionWithCredsMap region: ap-southeast-2
2019/10/16 14:01:09 E! ec2metadata is not available
2019/10/16 14:01:21 E! ec2metadata is not available
No csm configuration found.
Under path : /logs/ | Info : Got hostname osboxes as log_stream_name
No metric configuration found.
Configuration validation first phase succeeded
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent -schematest -config /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml
Configuration validation second phase succeeded
Configuration validation succeeded
Created symlink /etc/systemd/system/multi-user.target.wants/amazon-cloudwatch-agent.service → /etc/systemd/system/amazon-cloudwatch-agent.service.
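Once started, the agent's state can be checked with the bundled control script; on a healthy host the status subcommand prints a small JSON blob with "status": "running". The guard below just lets the snippet no-op on hosts where the agent isn't installed.

```shell
#!/bin/sh
# Query the CloudWatch agent's running state on an on-premises host.
CTL=/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
if [ -x "$CTL" ]; then
    sudo "$CTL" -a status -m onPremise
else
    echo "agent not installed on this host"
fi
```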

HP recently unveiled its cloud services (#IaaS) based on #OpenStack. They are providing a $20-per-month credit for the first three months to try out the service. I jumped in and signed up for a cloud account to try the offer.

On first login to the control panel, two services are currently available: #Compute and Object #Storage. Other services are still in beta, which include #DNS, #LoadBalancer, #Messaging, #Monitoring and #Relational DB MySQL. I went on and created a basic server using the #CentOS 5.8 Server 64-bit image in the US West zone; US East is available as well. There are plenty of images to choose from, both Unix flavours and Microsoft Windows. The images available include:

CentOS 5.8 Server 64-bit
CentOS 6.3 Server 64-bit
Debian Squeeze 6.0.3 Server 64-bit
Fedora 18 Server 64-bit
Ubuntu Lucid 10.04 LTS Server 64-bit
Ubuntu Maverick 10.10 Server 64-bit
Ubuntu Natty 11.04 Server 64-bit
Ubuntu Oneiric 11.10 Server 64-bit
Ubuntu Precise 12.04 LTS Server 64-bit
Ubuntu Quantal 12.10 Server 64-bit
BitNami DevPack 1.3-0-linux-ubuntu-12.04 64-bit
BitNami Drupal 7.17-0-hp-linux-ubuntu-12.04 64-bit
BitNami WebPack 1.4-0-linux-ubuntu-12.04 64-bit
EnterpriseDB PPAS 9.1.2
EnterpriseDB PSQL 9.1.3
SOASTA TestResultService 1.0
Ubuntu Server 12.04.2 LTS
Windows Server 2008 Enterprise SP2 x86

The #Compute instance configurations available are:

xsmall - 1 vCPU / 1 GB RAM / 30 GB HD
small - 2 vCPU / 2 GB RAM / 60 GB HD
medium - 2 vCPU / 4 GB RAM / 120 GB HD 
large - 4 vCPU / 8 GB RAM / 240 GB HD 
xlarge - 4 vCPU / 16 GB RAM / 480 GB HD 
2xlarge - 8 vCPU / 32 GB RAM / 960 GB HD

Once created, a default security group is assigned to this virtual instance.


The number of visitors per day doesn’t really mean anything; it is the peaks that kill you. If all 2,000 hits per day come within a one-minute period, you might have problems, but if they are evenly spread throughout the day, even a highly computational webapp shouldn’t have many issues.

Regardless, if you wish to scale, Varnish will probably help you the most: it sits in front of your web server as a reverse-proxy HTTP cache, which is about as efficient as you can get because it limits how many requests ever reach your backend.
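A quick sanity check once Varnish is in place is to look at the response headers it adds. This is a sketch using sample headers; by default, a cache hit puts two transaction IDs in X-Varnish (the current request and the one that populated the cache) and an Age above 0.

```shell
#!/bin/sh
# In practice you would inspect a Varnish-fronted site with something like:
#   curl -sI http://your-site/ | grep -iE '^(age|x-varnish)'
# Sample headers from a cache hit (two IDs in X-Varnish, Age > 0):
headers='Age: 37
X-Varnish: 32770 32768'
hit=$(printf '%s\n' "$headers" | grep -c '^X-Varnish: [0-9][0-9]* [0-9][0-9]*')
echo "cache hits detected: $hit"
```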

APC and memcache are a fallback for when Varnish isn’t able to serve a result. APC will speed up your PHP. Memcache lets you grab some complex data from your database for a user and then serve a cached version of that data to users for the next x minutes/days/weeks. This can make a huge difference if you have any time-consuming queries.

Edit: I’ve been meaning to try out Cloudflare CDN for a while now and after doing so I had to come back and recommend it. They have a free account (which I’m using) and setting it up is pretty easy as long as you know how to change DNS records. Out of all the technologies mentioned, this will probably be the best first step you can take to speed up your site. Just so you know I don’t have shares in Cloudflare or anything like that, but I’m seriously considering it. :)

A combination of all three is useful, but use them for different things:

Varnish: caches static content and delivers it extremely fast (reducing load on Apache)
APC: stores PHP opcodes so that calls processed by PHP are faster
Memcache: a temporary data store for your application, reducing calls to your DB (the DB is typically the bottleneck)

If you have time on your hands, go for all three, in the following order:

1. APC (fast to get up and running)
2. Varnish (needs a bit of configuration, but is well worth it for static pages)
3. Memcache (requires code changes to make use of it, so it needs more thought and time)

Because APC is so easy to add to a LAMP system, I’d put it in there (and have for my own website, which might get 5 visitors a day), but not bother with the others unless you had some kind of problem that required the additional effort, such as far larger numbers of visitors or hundreds of gigabytes of image/video downloads.

Memcache would also require some active use beyond basic configuration as well (or at least using a plugin that used it appropriately, for WordPress or some other off-the-shelf software) – just installing the software does nothing at all.

You have to use the slave database for read operations (SELECT queries) and the master database for write operations (INSERT and UPDATE queries). Make the changes in the following Magento config file: app/etc/local.xml

<initStatements>SET NAMES utf8</initStatements>
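For context, the <initStatements> element above lives inside a connection block in local.xml. A minimal sketch of the relevant section follows; the host names and credentials are placeholders, the default_setup block keeps pointing at the master, and the default_read block routes SELECT queries to the slave:

```xml
<resources>
    <default_setup>
        <connection>
            <!-- master: all writes go here (placeholder host/credentials) -->
            <host>master.db.example.local</host>
            <username>magento</username>
            <password>secret</password>
            <dbname>magento_demo</dbname>
            <initStatements>SET NAMES utf8</initStatements>
            <model>mysql4</model>
            <type>pdo_mysql</type>
            <active>1</active>
        </connection>
    </default_setup>
    <default_read>
        <connection>
            <use/>
            <!-- slave: SELECT queries are routed here -->
            <host>slave.db.example.local</host>
            <username>magento</username>
            <password>secret</password>
            <dbname>magento_demo</dbname>
            <initStatements>SET NAMES utf8</initStatements>
            <model>mysql4</model>
            <type>pdo_mysql</type>
            <active>1</active>
        </connection>
    </default_read>
</resources>
```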

Prior to this setup, you must already have configured your MySQL master and slave servers. To configure the master server, edit /etc/my.cnf and add the content below:

server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = magento_demo
binlog_ignore_db = mysql
then restart your MySQL server.

To configure the slave server, edit /etc/my.cnf and add content along these lines (a typical slave configuration; the slave's server-id must differ from the master's, and the relay-log path shown is a common choice, not mandated):

server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = magento_demo
then restart your MySQL server.
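With both servers restarted, the slave still has to be pointed at the master and replication started. That step isn't shown above, so here is a sketch: the host, user, and password are placeholders, and the real MASTER_LOG_FILE / MASTER_LOG_POS values come from running SHOW MASTER STATUS on the master.

```shell
#!/bin/sh
# Generate the statements to run on the slave
# (e.g. pipe the output into `mysql -u root -p` there).
# All values below are placeholders for illustration.
SLAVE_SQL="CHANGE MASTER TO
  MASTER_HOST='master.db.example.local',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
START SLAVE;"
printf '%s\n' "$SLAVE_SQL"
```

After START SLAVE, `SHOW SLAVE STATUS\G` on the slave should report both Slave_IO_Running and Slave_SQL_Running as Yes.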

Xen (pronounced “Zen”) is a bare-metal hypervisor that allows multiple operating systems to run concurrently on the same computer hardware. It was developed by the University of Cambridge Computer Laboratory and is now maintained by the Xen community as free software under the GNU General Public License (GPLv2). Xen is currently available for the IA-32, x86-64 and ARM architectures.

There are some very useful Linux commands and utilities that can be used to monitor system performance at any time. Those I recommend for any sysadmin are summarized below:

  1. vmstat
    Reports virtual memory statistics, for example:

    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      4 1325196 151544 299408    0    0     2    54    0    0  2  1 97  0  0
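Beyond a one-shot snapshot, vmstat takes an interval and count for ongoing sampling, and individual columns are easy to pull out with awk. A sketch; the column position below assumes the header layout shown above.

```shell
#!/bin/sh
# Continuous sampling: one report every 2 seconds, 5 times
# (the first report holds averages since boot):
#   vmstat 2 5
# Extracting a single column, e.g. free memory (4th field of a data line):
line="0 0 4 1325196 151544 299408 0 0 2 54 0 0 2 1 97 0 0"
free_kb=$(printf '%s\n' "$line" | awk '{print $4}')
echo "free memory: ${free_kb} kB"
```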