Wednesday, November 27, 2013

YCSB on HBase 0.96 and Hadoop 2.2

My previous post on YCSB on HBase covered Hadoop 1.* and HBase 0.94.*. Since Hadoop 2.2 is officially released and HBase has moved to 0.96.0, I will share what to change to run YCSB on HBase 0.96 and Hadoop 2.2.

The first step to use this benchmark is to download the source from the YCSB git repository:
git clone http://github.com/brianfrankcooper/YCSB.git

Once the clone is done, you will see a folder called YCSB in your current path. cd into the newly created directory YCSB and edit the following files using your favorite editor.

-YCSB/hbase/pom.xml

Edit the corresponding lines as shown below to reflect the changes.
For HBase, instead of using hbase, you need to change the artifactId to hbase-client and the version to 0.96.0-hadoop2.
For Hadoop 2.2, there is no more hadoop-core; for YCSB to work, change the artifactId to hadoop-common and the version to 2.2.0.
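A sketch of the resulting dependency entries (the groupIds are assumed from the standard Apache artifacts; verify them against your copy of the pom):

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>0.96.0-hadoop2</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.2.0</version>
</dependency>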


-YCSB/pom.xml

I am pretty sure this is optional, but for completeness, you can also choose to change the following.


If you refer to my previous post, you will notice I no longer change the slf4j version. This is because HBase 0.96.0 uses the same version as stated in the original pom.xml file, which is 1.6.4.
cd into the YCSB directory and run mvn clean package to build the package. Once you see the following output, it means the build is successful.

[INFO] YCSB Root ......................................... SUCCESS [40.653s]
[INFO] Core YCSB ......................................... SUCCESS [46.852s]
[INFO] Cassandra DB Binding .............................. SUCCESS [44.413s]
[INFO] HBase DB Binding .................................. SUCCESS [1:49.114s]
[INFO] Hypertable DB Binding ............................. SUCCESS [45.091s]
[INFO] DynamoDB DB Binding ............................... SUCCESS [38.011s]
[INFO] ElasticSearch Binding ............................. SUCCESS [3:22.121s]
[INFO] Infinispan DB Binding ............................. SUCCESS [2:43.266s]
[INFO] JDBC DB Binding ................................... SUCCESS [13.182s]
[INFO] Mapkeeper DB Binding .............................. SUCCESS [8.313s]
[INFO] Mongo DB Binding .................................. SUCCESS [5.941s]
[INFO] OrientDB Binding .................................. SUCCESS [15.621s]
[INFO] Redis DB Binding .................................. SUCCESS [4.171s]
[INFO] Voldemort DB Binding .............................. SUCCESS [14.630s]
[INFO] YCSB Release Distribution Builder ................. SUCCESS [13.381s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12:45.433s
[INFO] Finished at: Thu Sep 12 18:25:37 SGT 2013
[INFO] Final Memory: 65M/165M
[INFO] ------------------------------------------------------------------------


You should find the ycsb-0.1.4.tar.gz file inside the YCSB/distribution/target directory. Copy this file to a directory where you have the proper access permissions and untar it. Once untarred, copy the hbase-site.xml file from your hbase conf directory to your ycsb-0.1.4/hbase-binding/conf/ directory. You should also copy hadoop-auth-2.2.0.jar from your Hadoop installation directory to your ycsb-0.1.4/hbase-binding/lib/ directory. If not, you might see the following error when you try to run YCSB.
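A sketch of these steps, assuming you extract into your home directory; HBASE_HOME and HADOOP_HOME are placeholders for your installation paths, and the hadoop-auth jar location assumes the stock Hadoop 2.2 binary layout:

cp YCSB/distribution/target/ycsb-0.1.4.tar.gz ~
cd ~ && tar -xzf ycsb-0.1.4.tar.gz
cp $HBASE_HOME/conf/hbase-site.xml ~/ycsb-0.1.4/hbase-binding/conf/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.2.0.jar ~/ycsb-0.1.4/hbase-binding/lib/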

Exception in thread "Thread-3" java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
        at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
        at org.apache.hadoop.security.UserGroupInformation.(UserGroupInformation.java:348)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
        at org.apache.hadoop.hbase.security.User.call(User.java:414)
        at org.apache.hadoop.hbase.security.User.callStatic(User.java:404)
        at org.apache.hadoop.hbase.security.User.access$200(User.java:48)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.(User.java:221)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.(User.java:216)
        at org.apache.hadoop.hbase.security.User.getCurrent(User.java:139)
        at org.apache.hadoop.hbase.client.HConnectionKey.(HConnectionKey.java:67)
        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:240)
        at org.apache.hadoop.hbase.client.HTable.(HTable.java:187)
        at org.apache.hadoop.hbase.client.HTable.(HTable.java:149)
        at com.yahoo.ycsb.db.HBaseClient.getHTable(HBaseClient.java:118)
        at com.yahoo.ycsb.db.HBaseClient.update(HBaseClient.java:303)
        at com.yahoo.ycsb.db.HBaseClient.insert(HBaseClient.java:358)
        at com.yahoo.ycsb.DBWrapper.insert(DBWrapper.java:148)
        at com.yahoo.ycsb.workloads.CoreWorkload.doInsert(CoreWorkload.java:461)
        at com.yahoo.ycsb.ClientThread.run(Client.java:269)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

Before you can run the test, you need to start your hdfs (start-dfs.sh) and hbase (start-hbase.sh). Go into the hbase shell and create a table called usertable with a column family called family, as shown below. You can ignore the warning message; you can refer to this link for more information.
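In the hbase shell:

create 'usertable', 'family'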


After creating the table and the column family, you can start loading data into your database:

$ ~/ycsb-0.1.4/bin/ycsb load hbase -P ~/ycsb-0.1.4/workloads/workloada -p columnfamily=family -p recordcount=10000 -p threadcount=4 -s | tee -a workloada_load.dat

Start running the benchmark with the command below:

$ ~/ycsb-0.1.4/bin/ycsb run hbase -P ~/ycsb-0.1.4/workloads/workloada -p columnfamily=family -p operationcount=10000 -p recordcount=10000 -p threadcount=4 -s | tee -a workloada_run.dat

The steps above are a very simple validation using workloada, with only 10000 records loaded into the database and 10000 operations during the run. Please take note that for the run phase (especially read and update tests), you need to specify recordcount to match your test database size. If you do not specify it, YCSB uses the default value from the workload file (1000), and your test will execute the 10000 operations again and again on only 1000 records while the remaining 9000 records are never accessed at all.

For more details on the available workloads, you can refer to the official git site.

Thursday, November 21, 2013

Proxmox 3.1-3 with Brocade FC HBA Card

If you are using Proxmox 3.1-3 with a Brocade FC HBA card, you will probably face an issue where the Brocade module simply will not load. Look at dmesg and you will find the following error messages.


Brocade BFA FC/FCOE SCSI driver - version: 3.0.23.0
bfa 0000:81:00.0: firmware: requesting cbfw-3.0.3.1.bin
Can't locate firmware cbfw-3.0.3.1.bin
bfa 0000:81:00.1: firmware: requesting cbfw-3.0.3.1.bin
Can't locate firmware cbfw-3.0.3.1.bin

To solve this issue, you need to get the firmware files from Brocade. Download them from here. Select this package: Linux Adapter Firmware package for 3.0.23.x Drivers in RHEL 6.4 (bfa_fw_update_to_v3.0.23.0.tgz, 827 KB).

After you have downloaded the file, extract the contents and copy all the files to /lib/firmware. Install the bfa module using "modprobe bfa" and the module should load successfully this time.
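A sketch of these steps, assuming the archive extracts its firmware images into the current directory:

tar -xzf bfa_fw_update_to_v3.0.23.0.tgz
cp *.bin /lib/firmware/
modprobe bfa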

Thursday, September 19, 2013

YCSB on HBase

* This post is for using YCSB on HBase 0.94.11 and Hadoop 1.2.1. For YCSB on HBase 0.96 and Hadoop 2.2, please go to this post.

YCSB (Yahoo Cloud Serving Benchmark) is a benchmark tool with common set of workloads for evaluating the performance of different “key-value” and “cloud” serving stores. HBase is one of the targets that can be benchmarked using YCSB.

The first step to use this benchmark is to download the source from the YCSB git repository:
git clone http://github.com/brianfrankcooper/YCSB.git

Although it is mentioned that you can download the binary from the site, the binary will not work when your hbase server version is different from the hbase client version used in the YCSB binary, and you will most likely get an error like the one below:

java.lang.IllegalArgumentException: Not a host:port pair: 

Once you finish cloning, cd into the newly created directory YCSB and edit the following files using your favorite editor.

-YCSB/hbase/pom.xml
Edit the lines shown below to the hbase and hadoop versions you have in your environment. In my case, my hbase is 0.94.11 and hadoop is 1.2.1.
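For reference, a sketch of how the dependency entries should look after the edit (the artifactIds are assumed from the stock YCSB pom of that era; verify them against your copy):

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.11</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
</dependency>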


-YCSB/pom.xml
Edit the slf4j version to 1.4.3 (the same change as in the elasticsearch pom below).


-YCSB/elasticsearch/pom.xml
Edit the slf4j version to 1.4.3 here as well; see the sketch below.
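In both pom.xml files the edit is the same. A sketch of the edited dependency, assuming the version appears directly in the slf4j-api dependency entry (if your copy defines it as a pom property instead, change the property):

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.4.3</version>
</dependency>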


The changes to the last 2 pom.xml files are to make sure hbase and ycsb use the same version of slf4j. If this is not changed, you might face the problem shown below when running ycsb.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoopuser/ycsb-0.1.4/hbase-binding/lib/hbase-binding-0.1.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoopuser/ycsb-0.1.4/hbase-binding/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: slf4j-api 1.6.x (or later) is incompatible with this binding.
SLF4J: Your binding is version 1.5.5 or earlier.
SLF4J: Upgrade your binding to version 1.6.x. or 2.0.x
Exception in thread "Thread-1" java.lang.NoSuchMethodError: org.slf4j.impl.StaticLoggerBinder.getSingleton()Lorg/slf4j/impl/StaticLoggerBinder;
        at org.slf4j.LoggerFactory.bind(LoggerFactory.java:128)
        at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:108)
        at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:279)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:252)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:265)
        at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:94)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.(RecoverableZooKeeper.java:98)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:127)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:153)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:127)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1507)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:716)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:986)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:961)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:227)
        at org.apache.hadoop.hbase.client.HTable.(HTable.java:170)
        at org.apache.hadoop.hbase.client.HTable.(HTable.java:129)
        at com.yahoo.ycsb.db.HBaseClient.getHTable(HBaseClient.java:118)
        at com.yahoo.ycsb.db.HBaseClient.update(HBaseClient.java:302)
        at com.yahoo.ycsb.db.HBaseClient.insert(HBaseClient.java:357)
        at com.yahoo.ycsb.DBWrapper.insert(DBWrapper.java:148)
        at com.yahoo.ycsb.workloads.CoreWorkload.doInsert(CoreWorkload.java:461)
        at com.yahoo.ycsb.ClientThread.run(Client.java:269)

cd into the YCSB directory and run mvn clean package to build the package. Once you see the following output, it means the build is successful.

[INFO] YCSB Root ......................................... SUCCESS [40.653s]
[INFO] Core YCSB ......................................... SUCCESS [46.852s]
[INFO] Cassandra DB Binding .............................. SUCCESS [44.413s]
[INFO] HBase DB Binding .................................. SUCCESS [1:49.114s]
[INFO] Hypertable DB Binding ............................. SUCCESS [45.091s]
[INFO] DynamoDB DB Binding ............................... SUCCESS [38.011s]
[INFO] ElasticSearch Binding ............................. SUCCESS [3:22.121s]
[INFO] Infinispan DB Binding ............................. SUCCESS [2:43.266s]
[INFO] JDBC DB Binding ................................... SUCCESS [13.182s]
[INFO] Mapkeeper DB Binding .............................. SUCCESS [8.313s]
[INFO] Mongo DB Binding .................................. SUCCESS [5.941s]
[INFO] OrientDB Binding .................................. SUCCESS [15.621s]
[INFO] Redis DB Binding .................................. SUCCESS [4.171s]
[INFO] Voldemort DB Binding .............................. SUCCESS [14.630s]
[INFO] YCSB Release Distribution Builder ................. SUCCESS [13.381s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 12:45.433s
[INFO] Finished at: Thu Sep 12 18:25:37 SGT 2013
[INFO] Final Memory: 65M/165M
[INFO] ------------------------------------------------------------------------


You should find the ycsb-0.1.4.tar.gz file inside the YCSB/distribution/target directory. Copy this file to a directory where you have the proper access permissions and untar it. Once untarred, copy the hbase-site.xml file from your hbase conf directory to your ycsb-0.1.4/hbase-binding/conf/ directory.

Before you can run the test, you need to start your hdfs (start-dfs.sh) and hbase (start-hbase.sh). Go into the hbase shell and create a table called usertable with a column family called family, as shown below.
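In the hbase shell:

create 'usertable', 'family'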


After creating the table and the column family, you can start loading data into your database:

$ ~/ycsb-0.1.4/bin/ycsb load hbase -P ~/ycsb-0.1.4/workloads/workloada -p columnfamily=family -p recordcount=10000 -p threadcount=4 -s | tee -a workloada_load.dat

Start running the benchmark with the command below:

$ ~/ycsb-0.1.4/bin/ycsb run hbase -P ~/ycsb-0.1.4/workloads/workloada -p columnfamily=family -p operationcount=10000 -p recordcount=10000 -p threadcount=4 -s | tee -a workloada_run.dat

The steps above are a very simple validation using workloada, with only 10000 records loaded into the database and 10000 operations during the run. Please take note that for the run phase (especially read and update tests), you need to specify recordcount to match your test database size. If you do not specify it, YCSB uses the default value from the workload file (1000), and your test will execute the 10000 operations again and again on only 1000 records while the remaining 9000 records are never accessed at all.

For more details on the available workloads, you can refer to the official git site.

Wednesday, September 11, 2013

Bad Table Rendering When Converting Word Document to PDF

It is always frustrating to see something that was formatted nicely in your Word document become messed up when converted to PDF. One of the problems is table formatting. The examples below show the table as seen in Word and as seen in the PDF.
The table displays nicely when viewed in Office Word

What a mess after converting to PDF

It turns out this is because of the cell margins. Open your table properties and go to the Cell tab as shown below. Click on the Options... button.


This will bring up the Cell Options window as shown below. Notice that the top and bottom margins are not zero. Change them to zero and click OK.


Convert your document to PDF again and you will see that the table rendering is fine now.

The only problem now is that the cell margins are gone. To solve this, you just need to use the Line Spacing Options as shown below to create the margins you like.



Finally, you get the table format you want in your PDF file.


Friday, August 2, 2013

Install Adaptec Storage Manager (ASM) For Adaptec 2420SA Card On Ubuntu System

The Adaptec 2420SA is actually quite an old card, so when you check this link, there is no package available for Ubuntu systems (I am not sure whether the latest ASM works with old Adaptec cards; if you have that information, please comment below).

So I downloaded the rpm package onto my Ubuntu system and used alien to convert it into a deb package.

sudo alien asm_linux_x64_v5_20_17414.rpm

sudo dpkg -i storman_5.20-1_amd64.deb

After the installation finished, I changed the arcconf permissions to make it executable.
sudo chmod 744 arcconf

To check the card information, I ran the following command:
sudo /usr/StorMan/arcconf getconfig 1

If you encounter the following error when running the arcconf command, you need to install libstdc++5:
/usr/StorMan/arcconf: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory

Install libstdc++5
sudo apt-get install libstdc++5

Format Faster on Linux Drive (ext4)

This is actually not a problem for most users, as they format a filesystem once and just use it afterwards. But for users who need to do repeated testing on multiple drives, this can be really helpful, as it can save you a lot of time.

One way to make formatting faster is to modify the bytes-per-inode value. By specifying a higher value for this parameter, fewer inodes are created. As such, this is probably not suitable for every situation, but it works well if your filesystem only needs to store things like virtual machine images, where each file is large but the total number of files is small.

bytes-per-inode     Time        Free inodes
16384 (default)     1m29.712s   30490613
32768               48.809s     15245301
65536               23.675s     7622645
131072              14.793s     3811317

As you can see, the fewer inodes that need to be created, the faster the format completes. The command to do this:

mkfs -t ext4 -i 131072 /dev/sdb1

Another way is to use lazy_itable_init when formatting the filesystem. This is actually the fastest way, because the inode table is not fully initialized at format time; instead, it is initialized after the filesystem is mounted. Again, this might not be suitable if you are doing IO tests, as you never know how the background initialization will affect your measured IO performance. The time taken with this option: only 3.785s.

The command to do this:

mkfs -t ext4 -E lazy_itable_init=1 /dev/sdb1

Notes:

The time command was used to capture the elapsed time.
Disk size is 500GB.
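For example, each format was timed like this:

time mkfs -t ext4 -i 131072 /dev/sdb1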

Monday, July 29, 2013

Excel Automation Using vbs

This post shares how to do Excel automation using vbs. Please note that this might not be the most optimal way to get things done, and you are always welcome to suggest a better way by commenting on this post.

Initialize the Excel application object

These are the few lines you must include in your script whenever you do Excel automation.
' Create the excel object
Set objExcel = CreateObject("Excel.Application")

' Do not show the excel workbook 
objExcel.Visible = False

Open workbook to process 

You can use this for csv files too.
Set objWorkbook = objExcel.Workbooks.Open (filePath)

Create an object to represent your first worksheet


Set objWriteSheet = objWorkbook.Worksheets(1)

To find the last row or last column in the worksheet

These 2 lines find the last row and last column in your worksheet that contain data.
numRows = objWriteSheet.UsedRange.Rows.Count
numCols = objWriteSheet.UsedRange.Columns.Count

To delete column 

This loop goes through all the columns from right to left, looks for columns whose header contains "test4" or "test5", and deletes them. Please note that it is always better to loop from right to left (column 7 -> column 6 -> column 5 ... column 1), because after you delete a column, the column numbering changes. You can do the same to delete rows by replacing EntireColumn with EntireRow.
For i = numCols to 1 step -1
 if InStr(objWriteSheet.Cells(1,i).value, "test4") Or InStr(objWriteSheet.Cells(1,i).value, "test5") Then
  objWriteSheet.Cells(1,i).EntireColumn.Delete
 End If
Next

To assign value to a cell

You need a worksheet object; just supply the correct row and column index for the cell.
objWriteSheet3.Cells(1,14).value = "test" 

To use Excel formula 

The loop below goes through all the rows (starting from row 2) in the worksheet. By creating an object for the range (objRange), you can pass objRange as an argument to objExcel.WorksheetFunction.Sum(objRange). You can use other formulas such as Average too.
For i = 2 to numRows
 Set objRange = objWriteSheet3.Range(objWriteSheet3.Cells(i,2), objWriteSheet3.Cells(i,13))
 objWriteSheet3.Cells(i,14).value = objExcel.WorksheetFunction.Sum(objRange)
 Set objRange = Nothing
Next

Copy 2 different columns and paste it to another worksheet 

The lines below define 2 range objects for different columns (columns 14 and 28) and use Union to combine them so that both ranges can be copied in one operation.
' Copy the result to the new worksheet
Set objRangeD = objWriteSheet3.Range(objWriteSheet3.Cells(1,14), objWriteSheet3.Cells(numRows,14))
Set objRangeE = objWriteSheet3.Range(objWriteSheet3.Cells(1,28), objWriteSheet3.Cells(numRows,28))
 
objExcel.Union(objRangeD,objRangeE).copy
objWriteSheet2.Range("D1").PasteSpecial 

Create chart

The first line creates a chart object with the following arguments (left, top, width, height).
The second line defines it as a line chart.
The third line sets the data source, for which you need to define a range.
The most critical thing here is that vbs does not understand Excel's built-in constants such as xlLine, so to make this work you have to define it yourself: Const xlLine = 4. This link gives you the numbers for all the different constants used in the Excel application.
Const xlLine = 4 ' vbs does not know Excel's built-in constants
Set objMychart = objWriteSheet2.ChartObjects.Add(400, 20, 375, 200).Chart
objMychart.ChartType = xlLine
objMychart.SetSourceData objWriteSheet2.Range(objWriteSheet2.Cells(1,1), objWriteSheet2.Cells(numRows,numCols))

Add chart title and axis title 

Again, you need to define your own constants for xlCategory and xlValue. Of course, you can also put the numbers in directly.
Const xlCategory = 1
Const xlValue = 2
objMychart.hasTitle = True
objMychart.ChartTitle.Text = "Sales"

objMychart.Axes(xlCategory).hasTitle = True
objMychart.Axes(xlCategory).AxisTitle.Text = "Month"

objMychart.Axes(xlValue).hasTitle = True
objMychart.Axes(xlValue).AxisTitle.Text = "$"

Set the maximum scale for y axis


objMychart.Axes(xlValue).MaximumScale = 100

Set the line colour 

These lines format the first line (series) in the chart; change the index to target a different series.
objMychart.SeriesCollection(1).Format.Line.Visible = True
objMychart.SeriesCollection(1).Format.Line.ForeColor.RGB = RGB(0,32,96)
objMychart.SeriesCollection(1).Format.Line.Transparency = 0

Save your Excel document

Very important step!
objWorkbook.SaveAs "D:\result.xls", -4143 ' -4143 = xlWorkbookNormal (.xls) format

objExcel.Quit

set objExcel = Nothing

Sunday, July 7, 2013

Hadoop Rack Awareness (1.0.4)

To enable Hadoop rack awareness, you need to create a script that does the mapping and specify the path to this script using the topology.script.file.name property in the core-site.xml file.

Below is the script I use, which I actually obtained from this site. The script name is topology.sh.
#!/bin/bash
# bash (not plain sh) is needed for the array syntax used below
HADOOP_CONF=/etc/hadoop
while [ $# -gt 0 ] ; do
  nodeArg=$1
  exec< ${HADOOP_CONF}/topology.data
  result=""
  while read line ; do
    ar=( $line )
    if [ "${ar[0]}" = "$nodeArg" ] ; then
      result="${ar[1]}"
    fi
  done
  shift
  if [ -z "$result" ] ; then
    echo -n "/default-rack "
  else
    echo -n "$result "
  fi
done

And the topology.data file is as shown below:
10.0.0.11  /rack1
10.0.0.12  /rack1
10.0.0.13  /rack1
10.0.0.14  /rack1
10.0.0.15  /rack2
10.0.0.16  /rack2
10.0.0.17  /rack2
10.0.0.18  /rack2
10.0.0.19  /rack3
10.0.0.20  /rack3
10.0.0.21  /rack3
10.0.0.22  /rack3

Place these 2 files in the /etc/hadoop folder on your namenode only (you can of course use another directory, but make sure you change the path information in the script file and the core-site.xml file). After you are done with this, you can proceed to add the topology.script.file.name property to the core-site.xml file, as shown below. You only need to do this on the namenode.
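A sketch of the property entry in core-site.xml, assuming the script is placed at /etc/hadoop/topology.sh:

<property>
  <name>topology.script.file.name</name>
  <value>/etc/hadoop/topology.sh</value>
</property>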

Once you are done, restart Hadoop (execute stop-all.sh followed by start-all.sh). To validate that your cluster is indeed rack aware, use this command: hadoop dfsadmin -report. You should see an extra line showing the rack for each datanode.
[hadoopuser@hadoop-name-node hadoop]$ hadoop dfsadmin -report
Configured Capacity: 5143534043136 (4.68 TB)
Present Capacity: 4881124573184 (4.44 TB)
DFS Remaining: 2865411211264 (2.61 TB)
DFS Used: 2015713361920 (1.83 TB)
DFS Used%: 41.3%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 12 (12 total, 0 dead)

Name: 10.0.0.12:50010
Rack: /rack1
Decommission Status : Normal
Configured Capacity: 428627836928 (379.19 GB)
DFS Used: 188051693553 (155.14 GB)
Non DFS Used: 21867446287 (40.37 GB)
DFS Remaining: 218708697088(183.69 GB)
DFS Used%: 43.87%
DFS Remaining%: 51.03%
Last contact: Mon Jul 08 09:44:33 SGT 2013


Name: 10.0.0.26:50010
Rack: /rack3
Decommission Status : Normal
Configured Capacity: 428627836928 (379.19 GB)
DFS Used: 166991044608 (135.52 GB)
Non DFS Used: 21867454464 (40.37 GB)
DFS Remaining: 239769337856(203.3 GB)
DFS Used%: 38.96%
DFS Remaining%: 55.94%
Last contact: Mon Jul 08 09:44:33 SGT 2013

Friday, June 14, 2013

Calculate time difference using bash script

This can be done easily if you convert your time strings into epoch time, which is the number of seconds since 1 January 1970. Before you can do it, make sure the time string you want to convert is in an accepted format. One of the accepted formats is "yyyy/mm/dd hh:mm:ss". In my case, the time string format was actually "yy/mm/dd/hh:mm:ss", but with a little tweak of prepending 20 to the year, I could get the accepted format.

So to do that, you use the date command with the -d option (display the time described by STRING) and the +%s format control (seconds since the epoch).

Example:
currDate="2013/05/30 18:18:20"
prevDate="2013/05/30 18:18:10"
currDateEpoch=`date -d "$currDate" +%s`
prevDateEpoch=`date -d "$prevDate" +%s`
delta=$(($currDateEpoch-$prevDateEpoch))

And the delta result is 10 seconds.

Monday, April 22, 2013

CentOS - How to install kernel-debuginfo

First, check whether you have the file CentOS-Debuginfo.repo in /etc/yum.repos.d:
[root@abc yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Media.repo

The content of CentOS-Debuginfo.repo is shown below. From this file, we know the repo id is debug (the name enclosed in [ ]) and that this repo is not enabled by default (enabled=0).
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#

# All debug packages from all the various CentOS-5 releases
# are merged into a single repo, split by BaseArch
#
# Note: packages in the debuginfo repo are currently not signed
#

[debug]
name=CentOS-6 - Debuginfo
baseurl=http://debuginfo.centos.org/6/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-6
enabled=0

To install kernel-debuginfo for your kernel, use yum as shown below:
[root@abc yum.repos.d]# yum --enablerepo=debug install kernel-debuginfo-2.6.32-220.el6

If you omit the kernel version, yum will pull the latest kernel-debuginfo package, as shown below:
[root@abc yum.repos.d]# yum --enablerepo=debug install kernel-debuginfo
Loaded plugins: fastestmirror, refresh-packagekit, security
Determining fastest mirrors
 * base: mirror.rndc.or.id
 * extras: kartolo.sby.datautama.net.id
 * updates: buaya.klas.or.id
base                                                     | 3.7 kB     00:00
debug                                                    | 1.9 kB     00:00
debug/primary_db                                         | 753 kB     00:19
extras                                                   | 3.5 kB     00:00
updates                                                  | 3.5 kB     00:00
updates/primary_db                                       | 1.5 MB     00:34
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kernel-debuginfo.x86_64 0:2.6.32-358.2.1.el6.centos.plus will be installed
--> Processing Dependency: kernel-debuginfo-common-x86_64 = 2.6.32-358.2.1.el6.centos.plus for package: kernel-debuginfo-2.6.32-358.2.1.el6.centos.plus.x86_64
--> Running transaction check
---> Package kernel-debuginfo-common-x86_64.x86_64 0:2.6.32-358.2.1.el6.centos.plus will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                      Arch   Version                        Repository
                                                                           Size
================================================================================
Installing:
 kernel-debuginfo             x86_64 2.6.32-358.2.1.el6.centos.plus debug 249 M
Installing for dependencies:
 kernel-debuginfo-common-x86_64
                              x86_64 2.6.32-358.2.1.el6.centos.plus debug  38 M

Transaction Summary
================================================================================
Install       2 Package(s)

Total download size: 287 M
Installed size: 1.6 G
Is this ok [y/N]: N

Another option is to download both rpm packages (kernel-debuginfo and kernel-debuginfo-common) from this link (look for the correct architecture as well as kernel version).

Wednesday, April 17, 2013

How to increase max_sectors_kb for MegaRaid 9280 in CentOS

The quick way to adjust it is to follow the steps listed in this blog.

Basically:

rmmod megaraid_sas
modprobe megaraid_sas max_sectors=2048

Please take note that the unit is sectors. If the sector size is 512 bytes, setting 2048 is equivalent to 1024 KB.

If your boot drive is also on the MegaRaid card, you will not be able to rmmod, as the system will complain that the module is in use. In that situation, you need to create a file called megaraid_sas.conf in /etc/modprobe.d with the following contents:

options megaraid_sas max_sectors=2048

After that, you need to generate a new initramfs.

Back up your existing initramfs file (take note that you have to find the correct img for your kernel version):

mv initramfs-2.6.32-220.el6.x86_64.img initramfs-2.6.32-220.el6.x86_64.img.bk

Create the new initramfs file:

dracut initramfs-2.6.32-220.el6.x86_64.img 2.6.32-220.el6.x86_64

After that, reboot your system; your max_hw_sectors_kb should now show 1024, and you can increase max_sectors_kb to the maximum of 1024.
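A quick check after the reboot (sdb is a placeholder for your MegaRaid device):

cat /sys/block/sdb/queue/max_hw_sectors_kb
echo 1024 > /sys/block/sdb/queue/max_sectors_kb
cat /sys/block/sdb/queue/max_sectors_kb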

Wednesday, April 10, 2013

Reuse PC Power Supply

To reuse a PC power supply without a motherboard, you have to short 2 pins on the 24-pin ATX power connector, which is the connector that connects to your motherboard. To know which pins to connect together, you can refer to this site, which has a detailed pin layout.

As shown in the diagram below, you need to short pin 16 (Power Supply On) and pin 17 (Ground) together, and this connection has to stay in place as long as you want to use the power supply. Please note that there are a few other ground pins; I chose pin 17 for convenience.


The picture below shows my setup; I used a DVD-ROM drive to test out the power supply.


A closer look at the 24-pin ATX power connector, with the cable shorting pins 16 and 17.


Tuesday, April 9, 2013

Xrdp not working on CentOS 6.2

If you install Xrdp (xrdp-0.5.0-0.13.el6.x86_64.rpm) on CentOS 6.2 without any updates, you might face the following problem when you try to RDP to your Linux machine.


And when you check the error log (/var/log/xrdp-sesman.log), you find the following:
[20130410-09:39:03] [WARN ] [init:45] libscp initialized
[20130410-09:39:03] [CORE ] starting sesman with pid 4374
[20130410-09:39:03] [INFO ] listening...
[20130410-09:40:15] [INFO ] scp thread on sck 7 started successfully
[20130410-09:40:16] [INFO ] ++ created session (access granted): username root,
ip 10.217.242.65:2137 - socket: 7
[20130410-09:40:16] [INFO ] starting Xvnc session...
[20130410-09:40:25] [ERROR] X server for display 10 startup timeout
[20130410-09:40:25] [INFO ] starting xrdp-sessvc - xpid=4841 - wmpid=4840
[20130410-09:40:26] [ERROR] X server for display 10 startup timeout
[20130410-09:40:26] [ERROR] another Xserver is already active on display 10
[20130410-09:40:26] [DEBUG] aborting connection...
[20130410-09:40:26] [INFO ] ++ terminated session:  username root, display :10.0
, session_pid 4839, ip 10.217.242.65:2137 - socket: 7

You can try to solve this problem by updating your OS. If that is not an option, because you have certain drivers that will not work with the updated OS, you can choose to update just these 2 packages:

yum install pixman libXfont

After updating these 2 packages, restart your xrdp service:

service xrdp restart

In my case, these updates solved my problem.

Friday, February 15, 2013

Connect-VIServer -- Network Connectivity Error

If you are using Connect-VIServer from vSphere PowerCLI to connect to your vCenter but you encounter a network connectivity error, this might be caused by the Tomcat server in your vCenter not running.


To solve this problem, we need to start the Tomcat service in your vCenter.


Click on Monitor Tomcat


And you can start the service through the Tomcat icon in the system tray.

After that, you should be able to connect to your vCenter using Connect-VIServer.
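A typical invocation looks like this (the hostname and credentials are placeholders):

Connect-VIServer -Server vcenter.example.com -User administrator -Password yourpassword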

Wednesday, February 6, 2013

Bad Font Rendering in Adobe Reader

Most of the time I use Adobe Reader to read PDF files, and I always found the font rendering very bad, as shown below. This example is especially bad: there is bold text in the paragraph, but I see none at all.


After looking through the preference settings, I realized that there is one setting that I never set properly. To get to this setting, first go to "Preferences".


In the Preferences window, go to Page Display and look for the Rendering section. As you can see, under Smooth Text, my setting is "None".


So I changed it and set it to "For Laptop/LCD screens".


And the end result: finally, smoothly rendered text, and the bold text is obvious. :)